
Dan Bucureanu
Cyberquant, Romania
Dan is a seasoned professional in quality engineering, with over 20 years of expertise in the field. He has progressed from a junior QA role to Director of Quality Engineering, embracing principles such as built-in quality, lean practices, and continuous delivery. Currently, he is the Managing Director of an outsourcing company, overseeing operations in Romania. In his free time, he codes AI agents for testing and investigates gender and inherited bias in AI models.
Validating Gender Bias in AI Systems
Artificial intelligence (AI) systems are increasingly integrated into high-stakes decision-making processes, including hiring, admissions, and content moderation. However, these systems often reflect and perpetuate societal biases, particularly gender bias.
In this paper, I present a framework for validating gender bias in AI models, with a specific focus on large language models and classification systems. I introduce Job Fair, an interactive evaluation framework designed to surface and measure discriminatory behavior in AI outputs across controlled gender-based prompts. Additionally, I perform an empirical analysis using the Bias-in-Bios dataset, which contains biographies labeled by profession and gender, allowing me to quantify disparities in model predictions and associations. The results highlight systematic gender imbalances in occupational predictions and reveal the limitations of conventional fairness metrics when applied post hoc. I also outline a replicable methodology for identifying, visualizing, and mitigating gender bias in AI systems, contributing toward the development of more equitable machine learning models.
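To illustrate the kind of disparity quantification described above, here is a minimal sketch, assuming Bias-in-Bios-style records with invented field names; it shows one common measure, the gap in per-profession true-positive rates between genders, and is an illustration rather than the paper's actual framework:

```python
# Minimal sketch, assuming Bias-in-Bios-style records (profession label,
# gender, model prediction). Field names and the toy data are hypothetical.
from collections import defaultdict

def prediction_rate_gap(records, profession):
    """Gap in true-positive rate between genders for one profession."""
    hits = defaultdict(int)    # correct predictions per gender
    totals = defaultdict(int)  # true instances per gender
    for r in records:
        if r["profession"] == profession:
            totals[r["gender"]] += 1
            if r["predicted"] == profession:
                hits[r["gender"]] += 1
    rates = {g: hits[g] / n for g, n in totals.items()}
    return max(rates.values()) - min(rates.values())

records = [
    {"profession": "surgeon", "gender": "F", "predicted": "nurse"},
    {"profession": "surgeon", "gender": "F", "predicted": "surgeon"},
    {"profession": "surgeon", "gender": "M", "predicted": "surgeon"},
    {"profession": "surgeon", "gender": "M", "predicted": "surgeon"},
]
print(prediction_rate_gap(records, "surgeon"))  # 0.5: a large gender gap
```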

Oleksiy Slavutskyy & Tatiana Maksimova
EPAM Systems
Oleksiy Slavutskyy has a career spanning over 15 years in the Quality Assurance industry. He began as a manual tester, gradually progressing to a test automation role and eventually becoming a quality architect. This rich journey has equipped him with valuable know-how in diverse QA techniques, including mobile testing, and a specialization in domains like e-commerce and IoT product testing.
Over the years, Oleksiy has made a significant impact as a mentor and trainer at EPAM's Quality Architect School, helping shape the future minds of the industry.
Tatiana Maksimova is a Quality Architect with over 12 years of experience in QA and more than 9 years specializing in designing and implementing test strategies across complex systems. Her expertise lies in building scalable quality architectures that align with both business goals and technical realities. Her current focus is on the applicability of Quality Architecture to AI-infused products - designing testing strategies that address the unique risks, unpredictability, and emergent behaviors of AI systems.
Testing of AI-driven applications: how not to fail when testing products with AI under the hood!
This talk delves into the unique challenges presented by AI/ML-driven applications and strategies to overcome them.
It outlines the various types of AI/ML applications while addressing the key testing difficulties associated with their data, AI layer, and overall system behavior. Additionally, the discussion highlights critical challenges in system testing due to the dynamic and non-deterministic nature of AI-powered ecosystems. To help address these issues, the talk evaluates available testing approaches and tools, and introduces accelerators that can enhance the testing process by ensuring thorough validation and robustness of AI-infused products.

Milcho Hekimov
Endurosat, Bulgaria
Milcho is a Quality Assurance engineer with more than 7 years of experience in various industries, including automotive and e-commerce. The roles he has occupied include Test Consultant and Penetration Test Consultant. Currently, he is a founding member of the Bulgarian Association of Certified Ethical Hackers.
He holds both a Master's degree and a Bachelor's degree in Computer Systems and Technologies from the University of Ruse. He also holds CEH (Certified Ethical Hacker) v10 and Offensive Security Web Expert certificates.
Milcho has proven experience in developing and presenting a series of courses and trainings for Quality House. He was a part-time lecturer at the University of Library Studies and Information Technologies, Sofia (Bulgaria), where he taught penetration testing to students enrolled in the Cybersecurity and Cybercrime Investigation MA program. Milcho is also involved in delivering ISTQB courses as a trainer in his field - ISTQB Specialist Level Security Tester.
Test Like a Hacker: Red Team Techniques Every Software Tester Should Know
Security testing is no longer just about scanning for vulnerabilities and checklists - it’s about thinking like an attacker. Attackers take a different path than QAs: they map systems, find weak spots - bugs in systems, configurations or features, and chain them together for maximum impact.
In this talk, we’ll bridge the gap between traditional software testing and offensive security testing by introducing core Red Team techniques that every software tester can—and should—learn. This isn’t about becoming a pentester; it’s about upgrading your testing approach to detect flaws before real adversaries do.
You’ll walk away with a new testing mindset, a toolbox of red team tricks, and the ability to simulate attacker behavior in your everyday test cases without needing a red team badge or a hoodie.

Žaklina Polak Matanović
Conet, Croatia
Žaklina Polak Matanović is a seasoned QA professional with over two decades of experience in software quality assurance, test and release management, and software development. She thrives in international environments and is known for her structured, collaborative approach to ensuring high-quality software delivery.
Outside of work, Žaklina is a passionate salsa dancer, chorus singer and an avid explorer—whether she's sailing, skiing, biking, or capturing the world through her photography. Her love of travel and adventure fuels her curiosity and creativity, both professionally and personally.
Test Like a Parent; Skills from Motherhood that Enhance Software Quality
You have kids, and suddenly, everything changes. Life starts revolving around them, and you feel like you're losing touch with the business world. Sound familiar?
I used to think that being a mom would hold me back in my testing career. But then I realized that many of the experiences I had at home could actually help me grow in the testing world. Cooking with the kids—or leaving them unattended—has led to unpredictable situations that taught me valuable lessons. And time spent at the playground with my twins? It helped me automate some surprisingly useful parenting skills—skills that turned out to be just as effective in testing environments.
Just as parents adapt to ever-changing situations and diverse needs, testers must navigate complex requirements and evolving user expectations. In this session, I’ll share stories and parallels between family moments and testing practices, exploring qualities like patience, multitasking, and problem-solving.
Whether you’re a parent or not, you’ll walk away with practical insights and maybe a laugh or two, ready to approach testing with fresh eyes and renewed creativity. Curious how chaos can lead to better software? Join me to find out!

Klaus Skafte & Karina Clemmensen
Denmark
Klaus Skafte works at the Danish Agency for Digital Government, where he ensures that the suppliers used for the big digital infrastructure projects deliver the quality needed. It sounds easy - the supplier just needs to do as the contract requires of them - but in reality it is not an easy task.
In addition, Klaus is part of the executive committee of DSTB (the Danish member board of ISTQB), when he is not busy building Lego, reading comic books, working on home automation… the list goes on and on.
Karina Clemmensen, Clemmensen Consulting, has been working with test for the last 15+ years, primarily as a test manager, but also as a manager for testers and test managers. She focuses on delivering test in large, complex setups, where she uses a combination of her management experience and her strong ability to cooperate to manage test in a pragmatic way. It is important to her to prioritize continuous improvement in projects and to ensure that the agreed quality is delivered. She has solid experience from working with both private and public customers and enjoys her role as a consultant.
Like it or not, we are all negotiators
/With a harsh deep male voice
In a world of hard deadlines, cost-cutting exercises and demand for perfect quality!
Two Test Managers, one good! the other evil! are facing off against each other!
Backed by lawyers, contracts and upper management, these two brave souls go into battle to determine the one and only truth!
/With a soft mellow gender-neutral voice
But on the way they discover that they are not enemies and battling to the death is not the way to win.
There is a third solution: understanding each other's principles and, through that, finding a common solution.
/Again with a harsh deep male voice
Coming to a conference near you!
Book your ticket already now!

Vipin Jain & Dr Anubha Jain
Vipin Jain, Metacube Software, has 26 years of experience in the IT industry and has dedicated the last 20 years of his career to software quality. Currently, he works for Metacube Software as Head of QA and Deliveries, is involved in establishing a QCE in his company, and directs delivery operations. He is an avid speaker and writer, loves to participate in conferences, and has given talks nationally and internationally. He is a member of several technical committees of international organizations like HUSTEF and QA&Test, and has presented at EuroSTAR, TestCon, SQA Days, ATD, HUSTEF, TestingUY Uruguay, Testing United, TestingCup, WrotQA, QA&Test, ExpoQA, the Belgrade Testing Conference, the World Testing Conference in Bangalore and other national conferences. He has published over 80 blogs on various platforms. Some of his articles have been published in Testing Planet magazine, and he has also been involved in the writing of several books on software engineering and web technologies.
Dr Anubha Jain, the IIS University, with over twenty-five years in software development, teaching, and research, is the Director of the School of Computer Science & IT at IIS (deemed to be University) in Jaipur, India. She holds a PhD in the field of Information Retrieval and Information Architecture. Her research interests include Soft Computing, Computational Studies, Deep Learning and Software Engineering. She has co-authored nine books and 12 book chapters and published 35 papers. She is a reviewer for numerous scientific journals and conferences and is actively involved in academic bodies, serving as the convener of the Board of Studies and as a member of the Academic Council and Curriculum Committees. She is an avid speaker and resource person at many conferences and workshops. As a Green Software Champion at the Green Software Foundation, she promotes green technologies. She has presented internationally on topics related to Green IoT and sustainable testing at QA&Test 2022, QA&Test 2023 and HUSTEF 2023, and presented earlier at QA&Test 2016 and 2018 in Bilbao on IoT and DevOps.
Artificial intelligence biases and what we can do as testers
As humans, we tend to make choices whenever and wherever possible. These choices are almost always shaped by our biases - our preferences, our color choices, our tastes, and our likes and dislikes. The integration of Artificial Intelligence (AI) systems into decision-making processes across various sectors of our personal and professional lives - finance, healthcare, hiring, and law enforcement - started a few years back. With a promise of efficiency and objectivity, AI seemed to be the perfect cog of the digital machine to make bias-free decisions. However, a bias-free AI has been observed far less often than a biased one.
This presentation centers on the problem of possible AI biases across factors like gender, race and age in a hiring context, using a machine learning model. Hiring done via AI software has often been found to be biased towards patterns seen in historical data, bringing to life certain stereotypes of the employment sector that have no place in the modern, digital age. The avoidance, if not complete elimination, of this bias is necessary to ensure that AI systems give fair and accountable outcomes. This is where we, the testers, equipped with modern tools and a testing mindset, come into play. It is our role and responsibility to ensure that any AI system, when put into use, displays a fair and accountable outcome. It is critical that AI systems are tested thoroughly; otherwise the bias will thrive and never be reduced.
In my talk, I will present a use case from the hiring industry, though it is also useful for other industries. The use case will highlight the following five aspects of my testing:
1. Data Analysis
2. Fairness Metrics
3. Testing Framework
4. Simulation of Different Scenarios
5. Mitigation Strategies
Through systematic bias testing, this use case highlights the importance of fairness, accountability, and transparency in AI systems, especially in sensitive applications like hiring. Ensuring that AI does not perpetuate or exacerbate social biases is crucial for fostering trust and equity in AI-driven decisions.
What will attendees learn?
1. The Importance of AI Bias Awareness: Attendees will understand that AI systems, despite being designed for objectivity and efficiency, can still inherit biases—such as gender, race, and age—based on historical data.
2. The Role of Testing in Reducing AI Bias: Attendees will learn how testers play a crucial role in ensuring fairness, accountability, and transparency in AI systems. By thoroughly testing AI systems using tools, frameworks, and fairness metrics, testers can identify and mitigate bias, ensuring AI delivers equitable outcomes.
3. Key Approaches to Bias Testing in AI: Attendees will gain insights into the specific strategies used to evaluate and address AI bias, including data analysis, fairness metrics, testing frameworks, scenario simulations, and mitigation strategies. These methods are essential for making AI-driven decision-making systems more reliable and just.
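To make the "Fairness Metrics" and "Testing Framework" aspects concrete, here is a minimal, hypothetical sketch (not the speakers' actual framework) of a demographic-parity check expressed as an ordinary test; the model output, group labels, and the four-fifths threshold are assumptions:

```python
# Hypothetical sketch: a demographic-parity check run as a regular test.
# The decisions list stands in for real hiring-model output.
def selection_rates(candidates, decisions, group_key):
    """Fraction of positive ('hire') decisions per demographic group."""
    groups = {}
    for cand, hired in zip(candidates, decisions):
        total, positives = groups.get(cand[group_key], (0, 0))
        groups[cand[group_key]] = (total + 1, positives + int(hired))
    return {g: p / t for g, (t, p) in groups.items()}

def test_demographic_parity():
    candidates = [{"gender": "F"}, {"gender": "F"},
                  {"gender": "M"}, {"gender": "M"}]
    decisions = [True, False, True, True]  # stand-in for model predictions
    rates = selection_rates(candidates, decisions, "gender")
    # Four-fifths rule: no group's selection rate below 80% of the highest.
    assert min(rates.values()) >= 0.8 * max(rates.values()), rates
```

With the toy data above the assertion fails, which is exactly the point: the check surfaces the disparity instead of letting it pass silently.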

Luiz Gustavo Lucena & Neli Duarte
Smarter Test, Brazil
Luiz Gustavo Lucena is a QA and Test Automation manager with over 20 years' experience driving QAOps, security, performance and automated testing for enterprises like IBM, Disney+, Accenture and Motorola.
He has been recognized for his contributions to quality engineering and test automation with multiple awards from leading global organizations including IBM, Motorola and Accenture, highlighting his impact in driving innovation, process improvement and excellence in software quality.
Holding an MBA in Project Management and multiple IBM and security certifications, he champions innovative, accessible testing solutions.
Neli Duarte is a QA professional with over 20 years of experience in software testing, test automation, and Agile/DevOps integration. As the founder of Smarter Test, she helps organizations improve software quality through tailored automation frameworks, CI/CD pipelines, and data quality strategies. Neli has worked with global companies across fintech, e-commerce, and telecom sectors, delivering scalable testing solutions using tools like Playwright, Selenium, and Great Expectations. She served for five years as an ISTQB Technical Advisor and is an international speaker passionate about Agile coaching, mentoring, and fostering quality-driven engineering practices.
TESTRIDE: A Scalable Testing Strategy Model for Distributed Agile Projects with Outsourced Teams
Software testing plays a critical role in ensuring product quality, yet it remains underutilized across many organizations. This challenge becomes more pronounced in agile environments, where rapid iterations and minimal documentation demand a rethinking of traditional testing practices. Agile methodologies require testing to be continuous, adaptive, and capable of delivering timely feedback to stakeholders. This paper presents a case study conducted within a multinational corporation during the development of a project using Scrum, complemented by practices from Feature-Driven Development (FDD). The study outlines how the entire testing strategy was restructured to support distributed development and outsourcing, focusing on process alignment, resource coordination, and feedback efficiency across all testing phases.

Anton Angelov
Automate the Planet, Bulgaria
Anton Angelov is the Managing Director, Co-Founder, and Chief Test Automation Architect at Automate The Planet - a boutique consultancy specializing in test automation strategy, implementation, and enablement. He is the creator of BELLATRIX, a cutting-edge automation framework that supports web, mobile, desktop, and API testing across modern enterprise environments. A bestselling author of eight books, including Design Patterns for High-Quality Automated Tests (C#/Java) and the Automated Testing Unleashed series, Anton is known for introducing innovative and pragmatic approaches to building scalable, maintainable test automation solutions.
His thought leadership has earned him multiple 'QA of the Year' awards, inclusion among the Top 100 IT Leaders in Bulgaria, and the prestigious 'QA of the Decade' recognition for outstanding contributions to the global QA industry. At Automate The Planet, Anton leads teams of world-class consultants, enabling organizations to architect and scale their automation frameworks while equipping internal teams for long-term success. Beyond consulting, he is a passionate advocate for the QA community - frequently speaking at major international conferences, contributing to open-source initiatives, and collaborating with leading tech companies on the future of AI-driven testing and scalable automation architectures. With over 60 international speaking engagements, Anton is committed to establishing Bulgaria as a global center of excellence in QA and test automation—advancing the field through education, mentorship, and visionary engineering leadership.
Transforming QA Organization: Unveiling the ATOM Model for Advanced Testing Optimization
In an era where software quality is paramount, the need for effective and efficient test automation is undeniable. This talk introduces the 'ATOM Model – Advanced Testing Optimization Maturity Model', a groundbreaking framework designed to transform organizational approaches to test automation. Developed through years of practical application and refinement, the ATOM Model provides a comprehensive methodology for assessing and enhancing automated testing processes in any organization. Attendees will gain insights into the model's development, its practical application in various projects, and how it aligns with the evolving demands of quality assurance in the software industry. This session is not just about presenting a new model; it's about inspiring a shift in how we perceive and implement test automation for superior software quality.

Valery Penev
Adastra, Bulgaria
Valery Penev brings over a decade and a half of experience in data warehouse consulting and data engineering at Adastra. Throughout his career, he has worked across diverse industries, clients, and roles. In recent years, his focus has expanded to include cloud technologies. Valery is goal-oriented and analytical, with excellent interpersonal skills. For eight years, he served as a Talent Manager, mentoring a team of ten, and since 2016 has lectured at the Adastra Academy. He spent almost a decade playing a key role in technical interviews as part of Adastra's hiring process.
In 2020, he co-founded "Out of the Box Ltd." - a company focused on web services, digital marketing, and testing services, helping small and mid-sized businesses build a strong online presence.
His superpower is to be creative and always positive.
QA files: The bug is out there
Inspired by the spirit of The X-Files, this session invites you into the world of QA investigations, where the unexplained is often just undocumented - and where the truth (and the bug) is out there, waiting to be found.
Join me for a deep dive into the art of discovering complex, hidden, or downright bizarre bugs that evade even the best-planned test cases. Through real QA stories and case studies, we'll uncover how structured exploration, targeted test design, and lateral thinking can reveal issues no automated check could catch.
We’ll examine key strategies, heuristics, and mental models that turn testers into detectives—equipped not just with tools but also with curiosity, skepticism, and a sharp eye for the unexpected.
Whether debugging in the wild or crafting smarter tests, this session will leave you with actionable techniques, a sharper QA mindset, and maybe a few goosebumps. Because in quality assurance, as in life, the strange cases are often the most enlightening.

Gjore Zaharchev
Avenga, North Macedonia
Gjore Zaharchev is an Agile Evangelist and heuristic testing fighter with more than 15 years of experience in automated, manual, and performance software testing for various domains and clients. In this period Gjore has led and managed QA people and QA teams of different sizes across Europe and the USA. He recognizes testers as people with various problem-solving skills and an engineering mindset, and believes that software testers are more than mere numbers to clients. He currently works at Avenga as Head of Quality Engineering, responsible for the software testing team. He is also an active speaker at several conferences and events in Europe and a testing coach at the SEDC Software Academy in Skopje.
Since 2020 he has been a board member of SEETB (an ISTQB member board) for Macedonia.
Writing Clean and Maintainable Automated Tests
Automation tests are like any other code—they must be readable, maintainable, and scalable. But too often, test suites become a tangled mess of poorly named variables, redundant comments, and unclear logic. This talk will explore clean code practices for automated tests, helping you write tests that are easy to understand and maintain.
By the end of this session, you’ll have practical techniques to clean up your automation code, making it easier for you (and your team) to debug, extend, and scale your tests. Whether you're just starting with test automation or looking to improve your existing suite, this talk will help you write tests that stand the test of time!
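To make "clean" concrete, here is a small, hypothetical before-and-after sketch, assuming Selenium 4 and a pytest-provided driver fixture; the selectors and page details are invented:

```python
# Hypothetical before/after; assumes Selenium 4 and a pytest `driver` fixture.
from selenium.webdriver.common.by import By

# Before: vague names, magic locators, comments that restate the code.
def test_1(driver):
    e = driver.find_element(By.ID, "u")
    e.send_keys("a@b.com")  # type email
    driver.find_element(By.ID, "btn").click()  # click button
    assert "ok" in driver.page_source

# After: intention-revealing names and a reusable action, no noise comments.
LOGIN_EMAIL_FIELD = (By.ID, "u")
LOGIN_SUBMIT_BUTTON = (By.ID, "btn")

def log_in(driver, email):
    driver.find_element(*LOGIN_EMAIL_FIELD).send_keys(email)
    driver.find_element(*LOGIN_SUBMIT_BUTTON).click()

def test_login_shows_confirmation(driver):
    log_in(driver, email="registered.user@example.com")
    assert "ok" in driver.page_source
```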

Jeroen Rosink
Squerist, Netherlands
Jeroen Rosink is a passionate test professional with about 26 years of experience in testing and test management, in executing, coordinating, coaching and advising roles. Driven by his passion, he always searches for the things that make testing valuable and interesting. Besides presenting several times at the Dutch TestNet conferences (2010, 2012, 2016, 2019, 2023), he has also given presentations and/or workshops at SEETEST 2017 Sofia, SEETEST 2018 Belgrade, SEETEST 2019 Bucharest, TestCon 2018 Vilnius, TestCon 2019 Moscow, QA Expo 2019 Madrid, SEETEST 2021 Bucharest, InflectraCon 2023 Washington D.C., TestCon 2023 Vilnius and DevCon 2023 Bucharest. He also contributed to an anniversary book of TestNet, "Set your course: Future and trends in testing" (translated from Dutch), co-authored "The characteristics of a modern testing process" (translated from Dutch), and contributed to the book "How to Reduce the Cost of Software Testing".
AI-Driven Mindset in Software Testing: A Paradigm Shift Towards Efficiency, Innovation, and Collaboration
As Artificial Intelligence (AI) transforms the software industry, testing methodologies must evolve to keep pace. AI is not merely a collection of tools but a paradigm shift in how testers approach problem-solving, decision-making, and automation. This paper explores the critical aspects of adopting an AI-driven mindset in software testing, emphasizing efficiency, data-driven decision-making, continuous learning, and collaboration with AI tools. By shifting focus from traditional manual testing to AI-enhanced strategies, software testers can improve efficiency, enhance test coverage, and accelerate defect detection. Additionally, this paper highlights psychological models such as the Transtheoretical Model of Change, Growth Mindset Theory, and the Technology Acceptance Model to facilitate the transition. Through case studies and real-world examples, we illustrate how testers can practically adopt AI in their workflows. Attendees will leave with a structured learning path and actionable insights to integrate AI into their testing workflows effectively.

Matthias Rasking
Sixsentix Deutschland GmbH, Germany
Matthias Rasking is the Managing Director for Sixsentix Germany. He is also a board member responsible for Technical Strategy at the TMMi Foundation, a not-for-profit organization aimed at promoting an open standard for test maturity assessments, and supports the ISO/IEC Working Group 26 on formulating and enhancing the ISO 29119 and 20246 standards for software testing and work product reviews.
He has over 25 years of experience working on large-scale information technology and process improvement projects globally. He maintains broad knowledge in all things quality, stretching across project management, software engineering and development methodologies as well as process improvement, focusing on the human factor in each project.
Testing as a cornerstone of regulation - DORA, AI Act and Cyber Resilience Act
As the world gets more complex, and as we have already witnessed how big the impact of software quality failures or improper risk management can be, several organizations are trying to establish guardrails within different industries and the IT world in general. Hidden between many words and paragraphs there is actually some very good thinking behind these new acts, which influence how we deal with functional and non-functional testing, test organization, IT security and risk management - and how we test both traditional systems and the new breed of AI-based systems.
What do all these buzzwords like DORA, AIA and CRA actually mean for me as a tester? How can a testing organization help to interpret and implement the regulatory requirements mentioned in these acts, even if we're not lawyers? And how can we actually use these new compliance topics to our advantage and finally achieve our ultimate goal of better software quality?
This paper and the accompanying talk will examine some key aspects of current regulatory requirements, how they influence our thinking as testers and how we can use them to drive process improvement measures in the software testing and IT world.

Victor Ionascu
Axway, Romania
Victor has 15+ years of extensive experience in experimenting, learning from failures, and helping others think outside the box. Currently, he works on integrating multiple products into high-quality solutions at Axway. He has spoken at many international conferences, sharing his love for eliminating unnecessary tasks and focusing on what truly adds value. Outside of work, he enjoys hiking, motorbiking, climbing mountains, and spending time with his three kids.
The Sunset of Technical Minds in the Age of AI
In this talk, I’ll explore how rapid advances in large language models (LLMs) reshape the software industry from development to testing.
With tools outperforming human engineers on tasks like competitive programming and with costs dropping exponentially, we must ask:
What happens to the role of technical specialists when machines can reason, test, and even code better than we do?
Using data such as MMLU cost trends and AI-versus-top-coder comparisons, I'll reflect on what this means for testers, QA engineers, and technical leaders in the next 2–3 years.
This isn't about fear — it's about adaptation, redefinition, and the future of value creation in software.

Jovana Milanovic
Stress Test, Serbia
Jovana is a QA Consultant and Strategist known for fusing technical knowledge, systems thinking and agile principles. With over a decade of experience, she designs scalable test architectures and leads high-impact teams across industries, with her current focus on mobile banking. Beyond engineering, Jovana brings a deep awareness of organizational, human and agile dynamics. She believes that both technical and operational excellence are needed to achieve top-quality products.
OpenAPI Magic in API Testing
This presentation shares how the OpenAPI specification can be the backbone of a clean, resilient API testing framework. I'll walk you through examples of integrating OpenAPI-generated models into both Java and TypeScript test ecosystems, using Cucumber and custom REST clients. With a focus on clarity, reusability, and long-term maintainability, this talk offers concrete strategies for evolving QA teams and an easy path to smart, useful API tests.
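As a taste of the approach, here is a minimal sketch in Python (the talk itself targets Java and TypeScript) of the core idea: letting a model derived from the OpenAPI spec act as the single source of truth, so schema drift fails the test immediately. The Account class stands in for generator output (e.g., from openapi-python-client) and all names are invented:

```python
# Hypothetical sketch: an OpenAPI-derived model as the single source of truth.
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class Account:  # imagine this generated from components/schemas/Account
    id: str
    balance: float
    currency: str

def parse_account(payload: dict) -> Account:
    """Fail fast if an API response drifts from the spec-derived model."""
    expected = {f.name for f in fields(Account)}
    unexpected, missing = set(payload) - expected, expected - set(payload)
    if unexpected or missing:
        raise AssertionError(f"schema drift: +{unexpected} -{missing}")
    return Account(**payload)

def test_get_account_matches_spec():
    payload = {"id": "acc-1", "balance": 99.5, "currency": "EUR"}  # stubbed response
    assert parse_account(payload).currency == "EUR"
```

Because the model is regenerated from the spec, a contract change breaks the build at the deserialization step rather than deep inside an assertion.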

Natalia Romanska
Brainhub, Poland
Once an accountant, now a QA. A fan of a holistic approach to Quality Assurance - the one that begins with neatly designed processes and thoughtful planning. Natalia truly enjoys having things balanced, both in private and professional life. After hours, you'll find her rewatching Friends, exploring new destinations, and always on the hunt for the next great scent.
Oops!... I did fail again! Why is it that the bigger the failure, the better the lesson?
We (usually) don't like failures. Instead of appreciation, pride, and gratitude, we often feel ashamed, demotivated, and angry - nothing fancy or pleasant - even though we as QAs are pretty used to various kinds of discrepancies.
The reality is that failures happen and will continue to do so. The harder they hit, the longer they stay with us - and the more we can take from them. No matter if it's an issue in your accountancy books, stuff you had ownership of, process gaps, or just a bit of bad luck - I truly believe that in every case there is something to learn.
I don't have a general pattern for learning from failures, but I have gone through a few quite big ones, both as an accountant and as a QA. None were top-notch experiences - instead, they came with various emotions, stress, and facepalms. But each of them has shaped my work attitude ever since. I'd like to share my not-so-successful stories so you can draw your own lessons :)

Kaloyan Ginchev
Flutterint, Bulgaria
I am a QA team leader at Flutter International with 15 years of experience. Throughout my career, I have worked with various technology packages, including Java, Selenium, JMeter, Verilog, Python, and others.
For the past 10 years, I have been a lecturer at an academy, where I train people with no testing experience and help them find jobs as QA.
I am passionate about mentoring and participate in various leadership projects.
Dynamic Real Test Data and Lack of It
"Dynamic Real Test Data and Lack of It" explores the critical role of realistic test data in software development and quality assurance. It highlights how dynamic test data can enhance testing processes, improve accuracy, and ensure better product outcomes. The discussion also addresses the challenges and consequences of insufficient test data, including increased bugs, longer development cycles, and compromised user experiences. Ultimately, the piece emphasizes the importance of robust data strategies in achieving effective testing and successful software releases.

Tal Pe'er
Grove Software Testing, Norway
Tal has been working in software and system testing for over 25 years, getting his Foundation certification back in 1999. He worked as a tester and test manager before becoming a trainer and consultant, helping organizations establish and improve test teams and test processes, as well as training testing professionals with various ISTQB® courses and testing workshops.
Tal has been active with ISTQB® since 2008 and was a member of the Executive Committee between 2017 and 2023.
Tal is a Principal Consultant at Grove Software Testing, one of the UK's leading providers of training and course materials.
Enhancing Risk Mitigation Through a Combined Approach to Software Testing
In a world that rushes to test automation and AI, we need to remember that manual testing is still important. We still require human ingenuity and creativity when designing tests that will challenge the systems’ boundaries and mitigate the risks of failure.
While traditional structured and systematic testing methods are important to show the system is implemented as specified, exploratory testing, driven by the tester's creativity and intuition, offers flexibility, rapid feedback, and a way to stretch the system's boundaries.
The presentation focuses on the benefits of combining both approaches, emphasizing the synergy of structure and flexibility, comprehensive risk coverage, improved test effectiveness, and quick detection of critical issues.

Petko Petkov
Dreamix, Bulgaria
Petko Petkov is a senior quality assurance engineer with more than 7 years of experience, interested in automation testing, automation of tasks, improving the efficiency of teams even when it was not requested by the client, and supporting others to develop as professionals (1:1 mentorship, preparing workshops, writing articles and giving presentations). Joining Dreamix in 2020, he had the chance to take part in several different projects, where he helped with CI/CD implementations, automation of test creation and execution within test management systems, and preparing custom solutions for testing NLP software. In addition, Petko's interest in philosophy led him to become a PhD student in contemporary philosophy at Sofia University.
Automating Asynchronous Events
Do your automation tests fall apart when facing unpredictable timing events? Is your team still manually testing critical asynchronous functionality? Discover how to conquer one of QA's most challenging frontiers - asynchronous events - and revolutionize your testing approach!
In the world of test automation, we've mastered many challenges - but asynchronous events remain the final frontier that separates good automation frameworks from great ones. When cron jobs run on their own schedule, emails arrive "eventually," and file generation happens behind the scenes, traditional automation approaches fall short.
Many teams reluctantly resort to manual testing for these scenarios, introducing inconsistency and delays that undermine the benefits of your automation investment. But what if there was a better way?
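One "better way", sketched minimally here as an illustration rather than the speaker's solution, is to poll for the asynchronous outcome with an explicit timeout instead of checking it by hand; the report-file scenario and paths are invented:

```python
# Illustrative polling helper for asynchronous events (emails that arrive
# "eventually", cron output, generated files). Not the speaker's framework.
import time
from pathlib import Path

def wait_until(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

def test_report_is_generated_eventually():
    out_dir = Path("build/reports")  # hypothetical drop location
    report = wait_until(lambda: next(out_dir.glob("report-*.csv"), None),
                        timeout=60)
    assert report.stat().st_size > 0
```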
By the end, you won't just understand asynchronous automation - you'll have expanded your professional toolkit with versatile solutions that can be applied across projects, technologies, and industries. These are the skills that separate average QA engineers from the automation experts who drive innovation and efficiency.

Istvan Forgacs
4-Test Plus, Hungary
István Forgács, PhD, began his career as a researcher at the Computer and Automation Research Institute of the Hungarian Academy of Sciences. He has published numerous scientific articles in leading international journals and conference proceedings. He is the lead author of the books Modern Software Testing Techniques (Apress) and Practical Test Design (BCS), and the co-author of Agile Testing Foundations (BCS).
In 1998, István transitioned from academia to entrepreneurship by founding Y2KO, a startup that provided an efficient solution to the Y2K problem.
He is also the founder and CEO of 4Test-Plus. István is a notable figure in the software testing community. He has served as an author for the Advanced Test Analyst Working Group and was a member of the Agile Working Group at ISTQB. He developed an intelligent mutation testing framework to help testers enhance their test design skills. Additionally, he is the creator and key contributor of Harmony, the only two-phase model-based test automation tool. He was a keynote, invited, and regular speaker at several academic and industrial conferences.
Automate Tests Better, Faster, Cheaper
Most test automation tools focus solely on automating test execution, often overlooking the critical aspect of test design, which means they can only detect simple bugs. Model-based testing (MBT) addresses this gap by incorporating test design into the process. However, current MBT methods face several challenges:
• Coding Requirements: They necessitate guard conditions, meaning testers must engage in coding.
• Complex Output Modeling: Modeling outputs remains a difficult task.
• Learning Curves: Testers need to adapt to a model editor, which can be time-consuming.
A new model-based technique, Action-State Testing, effectively overcomes these challenges. This specialized modeling approach comprises a potential initial state, labels, and action-state (model) steps. Labels encapsulate requirements and model elements, while model steps define the test steps. By utilizing states, models become more concise and capable of detecting most defects.
In this method, model steps are represented as nodes within a graph, described through simple text. Each step begins with an action (input), followed by zero to multiple responses (output), with the option of specifying an arriving state. For example:
add items for 40 euros ⇒ total price is 40 ⇒ free beverage can be selected STATE free beverage offered
Steps can be sequentially connected within the same test if there is an edge from one to another. A fork occurs when there’s an edge from step 0 to both step A and step B, resulting in two separate test cases (step 0, step A) and (step 0, step B). After forking, common steps may require execution for both test cases, effectively joining the forked paths. Using a text editor simplifies the addition of fork and join model steps.
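To make the fork-and-join semantics concrete, here is a small hypothetical sketch (not the Harmony tool itself) that expands an action-state model with one fork into its linear test cases; the graph encoding and the extra steps are invented, echoing the beverage example above:

```python
# Hypothetical fork expansion for an action-state model. Step texts echo the
# example above; the tiny graph format is invented for this sketch.
steps = {
    0: "add items for 40 euros => total price is 40 STATE free beverage offered",
    "A": "select free beverage => beverage added to cart",
    "B": "decline free beverage => no beverage in cart",
    "J": "check out => order confirmed",  # common step joining both paths
}
edges = {0: ["A", "B"], "A": ["J"], "B": ["J"], "J": []}

def expand(node, prefix=()):
    """Depth-first expansion: each fork yields a separate linear test case."""
    path = prefix + (node,)
    if not edges[node]:
        yield path
    for nxt in edges[node]:
        yield from expand(nxt, path)

for case in expand(0):
    print(" -> ".join(steps[n].split(" => ")[0] for n in case))
# Prints two test cases, (0, A, J) and (0, B, J): the fork at step 0 splits
# the model, and the common step J is executed in both, joining the paths.
```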
Advantages of Action-State Testing:
• Entirely codeless
• Inherently adheres to the DRY (Don't Repeat Yourself) principle
• Quick model creation
• Simple and cost-effective maintenance
• High Defect Detection Percentage (DDP)
Action-State Testing has been rigorously evaluated using a specialized intelligent mutation framework, which you can explore at https://test-design.org/practical-exercises/. By comparing action-state methods with AI, manual testing, and stateless solutions, we observed that their DDP ranged from 40-80%, whereas action-state testing achieved a perfect 100% DDP, effectively killing all 65 mutants. This method has already been applied to numerous real-world projects, identifying several defects. The average time for test design and execution automation of a single test case is less than one hour—a significant improvement over the industry average of over four hours without systematic test design.
In my presentation, I will demonstrate the functionality of the action-state testing tool and how to implement test cases from the model. I will also address key challenges we've encountered, such as flaky tests, dynamic selectors, and code changes. Additionally, I will illustrate why action-state testing is particularly effective for identifying intricate bugs using the new test version of the ISTQB Glossary application.

Georgi Georgiev
DevExperts, Bulgaria
Georgi is an experienced Software Quality Assurance engineer, holding a Master's degree in Computer Science and having seven years of experience in the industry. Besides his professional work, he also holds a position as an Assistant Professor at the Technical University of Sofia.
He performs both Manual and Automation testing, which gives him the opportunity to do various kinds of challenging tests with ease. Currently, he’s working on a FinTech project for a major US bank, where quality assurance plays a critical role due to strict financial regulations and security standards.
Apart from his career, he loves traveling and does so whenever free. Traveling helps him relax but at the same time opens up new avenues of thinking as well as offering different perspectives that can be applied to his professional work.
His profile as a Software QA engineer, an academic, and a person with broad personal interests has given him good experience not only in software quality but also in continuous learning and maintaining balance.
AI-Driven QA: Journey from Manual -> Automation -> Autonomous. Reality vs Hype
AI is redefining how we think about software testing — from reducing regression time to generating tests automatically. But with all the buzz, it’s getting harder to tell what’s real progress and what’s just hype.
In this talk, we explore the journey of QA evolution: from manual testing, to automation, and now to AI-augmented and semi-autonomous testing. We’ll break down what AI can actually do today — including self-healing tests, intelligent test prioritization, and predictive defect analysis — and where it still relies heavily on human intuition, like exploratory testing, context understanding, and usability validation.
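As a toy illustration of what "intelligent test prioritization" can mean at its simplest (my assumption, not the speaker's implementation), tests can be ranked by historical failure rate plus overlap with recently changed files; all weights and data below are invented:

```python
# Toy risk-based prioritization: rank tests by failure history and by
# overlap with changed files. Weights and data are invented assumptions.
def risk_score(test, changed_files, failure_history):
    failure_rate = failure_history.get(test["name"], 0.0)
    churn_overlap = len(set(test["covers"]) & set(changed_files))
    return 3.0 * churn_overlap + 10.0 * failure_rate

tests = [
    {"name": "test_login", "covers": ["auth.py"]},
    {"name": "test_checkout", "covers": ["cart.py", "payments.py"]},
    {"name": "test_profile", "covers": ["profile.py"]},
]
changed_files = ["payments.py"]  # e.g. from `git diff --name-only`
failure_history = {"test_login": 0.10, "test_checkout": 0.30}

ranked = sorted(tests, reverse=True,
                key=lambda t: risk_score(t, changed_files, failure_history))
print([t["name"] for t in ranked])  # test_checkout first: churn + flakiness
```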
Using real case studies from FinTech and eCommerce, we’ll show how teams are already:
• Reducing regression cycles by up to 80%
• Cutting flaky test noise by over 60%
• Using ML to focus QA on high-risk areas
But beyond the tools, we’ll focus on what this means for you — the QA engineer, the tester, the strategist. As AI takes over the repetitive work, your role evolves into something more critical: a test strategist, a quality coach, and even an AI co-pilot trainer.
This talk is for anyone who wants a clear, honest look at what’s possible with AI in QA — and how to stay ahead of the curve by embracing the tech without losing the human touch.
AI won’t replace testers. But testers who embrace AI will replace those who don’t.

Jana Markovikj
iborn, North Macedonia
Jana is a QA Automation Engineer with 4 years of experience in the field. She has previously presented on automated testing techniques and is now interested in sharing knowledge and opening discussions about design approaches, eager to broaden the perspective from which testers create their frameworks and to connect with and learn from participants.
OKRs for Your Tests: Elevate Your Approach to Test Design
The OKR (Objectives and Key Results) system is a performance management tool that aims to transform the way people set goals for themselves and build a purpose-driven environment, ensuring difficult objectives are fulfilled. But what if this could be applied to tests themselves as well? What if we handed the same, or at least similar, responsibilities and challenges to our own tests? We could transform suites into active, goal-oriented components of the development lifecycle and broaden the perspective from which they are designed and managed. Too many times have tests been designed, executed and even automated with no clear purpose in mind, resulting in them existing just to fill a quota or reach a certain coverage number. By using this approach you can orchestrate hundreds of tests ready to support you in achieving a shared goal.
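A minimal sketch of what "OKRs for tests" could look like in code, with metric names and targets invented purely for illustration:

```python
# Hypothetical sketch: a suite's objective expressed as measurable key
# results evaluated from run metrics. Names and targets are invented.
from dataclasses import dataclass

@dataclass
class KeyResult:
    name: str
    target: float
    actual: float

    @property
    def met(self) -> bool:
        return self.actual >= self.target

objective = "Regression suite protects the checkout flow"
key_results = [
    KeyResult("critical-path coverage", target=0.95, actual=0.97),
    KeyResult("pass rate on main", target=0.99, actual=0.98),
    KeyResult("tests with a stated purpose", target=1.00, actual=0.91),
]

print(objective)
for kr in key_results:
    status = "met" if kr.met else "MISSED"
    print(f"  {kr.name}: {kr.actual:.2f} / {kr.target:.2f} ({status})")
```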