Erik van Veenendaal, Improve IT Services BV, Bonaire
Erik van Veenendaal is a leading international consultant and trainer, and a recognized expert in the area of software testing and requirements engineering. He is the author of a number of books and papers within the profession, one of the core developers of the TMap testing methodology and the TMMi test improvement model, and currently the CEO of the TMMi Foundation. Erik is a frequent keynote and tutorial speaker at international testing and quality conferences. For his major contribution to the field of testing, Erik received the European Testing Excellence Award (2007) and the ISTQB International Testing Excellence Award (2015). You can follow Erik on Twitter via @ErikvVeenendaal.
Requirements Engineering for Testers
Testers use requirements as the basis of test cases, review them for testability, and often participate in general requirements reviews or inspections. Unfortunately, many testers have little knowledge or skill in requirements engineering. What level of quality and detail is realistic to expect in requirements documents? What does testability really mean? How can testers help improve requirements? These questions and more will be answered while helping the attendee develop skills in requirements engineering. Requirements issues and solutions are illustrated with practical case studies, and hands-on classroom exercises in finding, specifying and evaluating requirements are conducted. Walk through the requirements process from a tester’s viewpoint to learn what you can and should contribute to requirements quality. At the end, attendees will collaboratively create a set of “Golden rules” that every tester needs to follow to succeed in the requirements engineering process.
Paul Gerrard, Gerrard Consulting, UK
Paul Gerrard is a consultant, teacher, author, webmaster, developer, tester, conference speaker, rowing coach and publisher. He has conducted consulting assignments in all aspects of software testing and quality assurance, specialising in test assurance. He has presented keynote talks and tutorials at testing conferences across Europe, the USA, Australia and South Africa, and has occasionally won awards for them. Paul wrote, with Neil Thompson, “Risk-Based E-Business Testing”, and several other Pocketbooks: “The Tester’s Pocketbook”, “The Business Story Pocketbook”, “Lean Python” and “Digital Assurance”. He is Principal of Gerrard Consulting Limited, Director of TestOpera Limited and the host of the Assurance Leadership Forum in the UK.
Introducing Digital Test Management
The transformation of IT, and of businesses that depend on IT to a greater degree than ever before, has been given a label: Digital. There are as many definitions of Digital as there are organisations with active Digital transformation programmes. That is, almost every business is ‘Going Digital’, in its own way. These transformations cover business change as well as changes in software development approaches.
The IT response varies, but it is clear there is a steady evolution from structured or waterfall approaches to agility. Beyond that, many purely online businesses are adopting DevOps and continuous delivery approaches. Partly, this is a response to the need to move away from requirements-based software development towards a constantly evolving approach based on small-scale changes, driven by experimentation in production and the use of analytics, machine learning and, increasingly, artificial intelligence.
The technology landscape is also changing, at speed. Every organisation operates a different ecosystem. Systems of record exist to a greater or lesser degree in all environments, but the emphasis in Digital is towards systems of engagement. Engagement includes social media, of course, but the internet of things and the devices people use every day are all part of the landscape. If you include the potential for analytics, machine learning and AI, the ‘simplicity’ of traditional systems becomes only a memory.
Digital gives us new challenges. How should testers respond to the rapidity of change? How is test management affected? Is test management a team responsibility or a leadership role? Tools can automate test execution to some degree, but can we automate decision-making too? Is it safe? Where will AI fit? How do we assure the quality of our systems when automation is so pervasive? Are test managers a threatened species?
In this tutorial, Paul will introduce the challenge of Digital, explore these hard questions and propose approaches to testing and assurance that require new thinking and a focus on modelling, measurement and decision-making.
Olivier Denoo, CFTL (the French ISTQB board), Belgium
Olivier is the VP of ps_testware SAS, the French subsidiary of the ps_testware group. His role covers business development, recruiting the local expert team, building sustainable partnerships, and promoting software testing and ps_testware.
He is also involved in auditing test projects and organizations and provides high-level consultancy and support.
Olivier is the President of the CFTL - the French ISTQB Board and also currently is the Governance Officer of the ISTQB.
For 20 years he has been an international speaker, presenting at Test-IT Africa; SQA-days; BA-days; JFIE; TestWarez; ReQuest; SEETEST; STF; Iqnite; JFTL; JMTL; JTTL; Analyst-days; Quality Week; Eurostar; Dasia and more. He also actively participates in the development of new certification schemes, such as IQBBA (business analysis), IREB (requirements engineering), IBUQ (usability) and, more recently, the "7 skills for effective teams" (soft skills and team organization).
Whtvr srs wnt (whatever users want)
When developing software products or services, we all tend to believe that we know who our users are and how they will use those products and services in real life. We rely on predefined components, coding standards and routines, and we truly think that will do the trick.
Now what happens when users are not exactly who or what we thought they were? What if our models and concepts are wrong or ill-designed? What if they suddenly decide to use our products and services in environments and conditions we never thought of?
In this tutorial, I will invite you on a journey into users’ needs. We will visit major trends and guidelines of our software applications and how they relate to users; discuss traps and pitfalls in modern software testing; and explore some consequences of new business concepts and modern-society paradoxes we need to integrate, and how they relate to the future of our profession.
We will approach major usability concepts and heuristics. We will experience in-use context, understand what user experience really means and how to avoid major pitfalls. All of that with a twist of humor and practical hands-on activities.
If you think that usability is useless, that software testing is not funny and that users do not matter much once you have a grip on what you have to deliver, be prepared for a serious shock!
Yaron Tsubery, Smartest Technologies, Israel
Yaron Tsubery has been working in software since 1990 and has more than 20 years of experience in software development, QA and testing. He has worked as a test engineer, customer support engineer, testing team leader and testing manager, as well as product manager, project manager and developer, before becoming Senior Director of QA & Testing and PMO.
Yaron is an IT executive with extensive experience in managing QA & Testing organizations conducting large-scale, complex, real-time system testing in various sectors: telecom, banking, IT and medical devices. He has been in charge of planning and implementing quality assurance and control systems and SDLCs, using best-practice industry standards (e.g. CMMi, TMMi, ISO, TPI). He prepared and led a large international company in the telecom industry through ISO 9001:2000 certification, and has managed large operations with both local and distributed outsourced teams in the USA, India, Hungary and Singapore.
Yaron is the former President of the ISTQB® (International Software Testing Qualifications Board) and is also the President and founder of the ITCB® (Israeli Testing Certification Board). He is a member of the IQAMF (Israeli QA Managers Forum) and SIGiST Israel.
Load & Performance: Basic practical principles
Load and performance belong on the top-ten list of most frightening words in software development, especially when it comes to the point of sale of your product in an extremely competitive market. One word that is really frightening is… STRESS (doesn’t it give you the shivers just hearing it?). This tutorial will acquaint you with the relevant terms and contribute to your understanding of the whole desired process, from sales meetings through design and architecture, ways of implementation and testing aspects, to finally presenting results and reports. As part of the process, you will be advised on ways to stay aligned with your customer’s expectations, which means you will need to know and command the lingo of the load profession, whether you are a salesperson, developer or test engineer (e.g. ‘usage & traffic model’, ‘throughput’, etc.). You will be exposed to design and architecture solutions, along with special guidelines for code writing and implementation. Load test engineers will better understand what information is required to initiate load & performance testing, what to search for, how to improve their testing coverage, and how and what to report and present in order to add value for those who make the decisions.
This practice is drawn from complex systems projects delivered to telecommunication companies under strict rules, stiff exit criteria and tight delivery timelines.
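The ‘usage & traffic model’ and ‘throughput’ lingo mentioned above can be made concrete with a small sketch. Everything below, the profile names, the numbers and the helper function, is invented for illustration and is not part of the tutorial material:

```python
# Hypothetical sketch: deriving a target throughput from a simple
# "usage & traffic model". All figures are invented for illustration.

def expected_throughput(users, actions_per_user_per_hour):
    """Requests per second implied by one profile of the usage model."""
    return users * actions_per_user_per_hour / 3600.0

# A toy usage model: concurrent users per profile and how often each acts.
usage_model = {
    "browse":   {"users": 5000, "actions_per_user_per_hour": 60},
    "checkout": {"users": 500,  "actions_per_user_per_hour": 12},
}

# Total throughput the system must sustain across all profiles.
total_rps = sum(
    expected_throughput(p["users"], p["actions_per_user_per_hour"])
    for p in usage_model.values()
)
print(f"Target throughput: {total_rps:.1f} requests/second")
```

A figure like this is typically the starting point for agreeing expectations with the customer before any load scenario is scripted.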
Vipul Kocher, SALT, India
Vipul is a co-founder of SALT and Verity Software, and President of the Indian Testing Board, the ISTQB board for India. Prior to that, he co-founded and ran PureTesting, a very successful testing services company. He has over 23 years of experience, including managing Adobe Reader testing.
He has won several awards, including the best paper award at STAREast 2006 and the Logica CMG Triple Star Award for the most original contribution at EuroStar 2005. He has been a keynote speaker at many testing conferences in the USA, Europe, Asia and Australia-New Zealand.
Vipul invented Q-Patterns, a method of capturing testing knowledge and writing reusable test cases, which is used by organizations across the world. He also invented an extension to the Noun-and-Verb technique for creating a large number of tests from minimal documentation in the shortest time possible.
AI and AI in Testing
Artificial intelligence and the discussion surrounding it are neither new nor surprising. However, the fever pitch regarding AI has never been higher than it is today. It appears, for the first time, that viable AI solutions to what was previously science fiction will become available.
In this half-day tutorial we will learn about AI and machine learning and how they work. We will also look at how to test AI solutions and how to use AI in testing.
This tutorial should open up a path for you: how to learn about AI, the prerequisites and skills required to create your own AI solutions, and a framework on which you can build your own understanding of AI in testing and of testing AI.
No programming knowledge is required for this tutorial, but familiarity with a scripting language such as Python is helpful.
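As a taste of the kind of Python familiarity that helps, here is a minimal sketch of one of the simplest machine-learning algorithms, a one-nearest-neighbour classifier. The data points, labels and function name are invented for illustration and are not taken from the tutorial:

```python
# Hypothetical sketch: a one-nearest-neighbour classifier, one of the
# simplest machine-learning algorithms. Toy data, for illustration only.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to the query."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# Toy training set: (feature vector, label) pairs.
train = [
    ((1.0, 1.0), "pass"),
    ((1.2, 0.8), "pass"),
    ((8.0, 9.0), "fail"),
    ((9.0, 8.5), "fail"),
]

print(nearest_neighbour(train, (1.1, 0.9)))  # prints "pass"
print(nearest_neighbour(train, (8.5, 9.0)))  # prints "fail"
```

Even a toy model like this raises the testing questions the tutorial explores: what is the expected output, and how do you judge correctness when the answer depends on data rather than explicit rules?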
Mette Bruhn-Pedersen, Safe Journey, Denmark
Mette Bruhn-Pedersen has experience working in testing as a tester, test manager, test lead and agile transformation leader, primarily in the financial sector. She has used both traditional and agile approaches to testing at team level and company level. Since 2014, Mette has helped managers, QA & Test leads, agile teams and business stakeholders on their journey from a team-based agile setup to a scaled setup. Following the SAFe Implementation Roadmap and good practices, Mette helps define value streams and launch and run Solution and Agile Release Trains. She also conducts training and coaches people on how to transition to new roles and responsibilities.
Based on these experiences and discussions with peers, Mette has co-authored an eBook about Quality and Testing in Scaled Agile Framework for Lean Enterprises, which was published in April 2018. In her spare time Mette promotes software testing in Denmark and worldwide through her engagement in the Danish Software Testing Board (DSTB) and ISTQB.
Typical Challenges when Scaling Agile
Business agility is key as disruption of sectors and industries becomes the new normal. Earlier, IT was a means to produce products and services faster and more cheaply. Today, IT is an integral part of most products and services. Therefore, becoming agile at scale is vital, but at the same time very challenging for most large organizations.
One approach, which more and more companies embrace, is to implement one of the frameworks for scaling agile, which have been developed by practitioners over the last decade, for example LeSS, Spotify, Nexus or Scaled Agile Framework (SAFe).
The Scaled Agile Framework is one of the most popular frameworks used by enterprises to become more agile, not just at team level but at all levels. As a framework, SAFe describes many good practices for building in quality. However, when it comes to the actual implementation in large organizations, there is a need for better guidance for people in test departments who work as testers, test analysts, test coordinators or test managers, to name a few.
In this tutorial we will discuss some of the typical challenges from a testing perspective, using SAFe as an example. We will especially look at some of the classic testing roles and how people might feel challenged by the fact that these roles do not exist in SAFe. We will also look at typical challenges related to test planning and test coordination, and how these activities can be embedded in some of the SAFe practices.
The goal of the tutorial is to give you ideas to proactively address typical challenges which may arise when your entire organization wants to become more agile.