A major challenge facing any test team is how to achieve maximum test coverage without creating massive numbers of test cases. In this tutorial, Huw Price will demonstrate how to slash test case creation time while radically improving functional coverage. Huw will reflect on the all-too-common practice of linking software testing only to requirements, which often relies too heavily on the “gut feel” of the tester. He describes how software testing must become more like electronics testing by applying mathematical models to the creation of test cases, and demonstrates how this more rigorous approach provides big benefits. The tutorial includes a review of current coverage techniques and a detailed explanation of path modelling and how it can be applied to the software industry. Join Huw to get maximum coverage in minimum time for your testing.
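The path modelling idea above can be illustrated with a minimal sketch (the flow model and node names are hypothetical, not from the tutorial): the system under test is represented as a directed graph, and every path from start to end becomes one candidate test case, giving a measurable notion of coverage.

```python
# Hypothetical flow model of a system under test: each node is a state,
# each edge a transition. Node names are illustrative only.
FLOW = {
    "start":      ["login_ok", "login_fail"],
    "login_ok":   ["checkout"],
    "login_fail": ["end"],
    "checkout":   ["end"],
    "end":        [],
}

def all_paths(graph, node="start", path=None):
    """Enumerate every start-to-end path; each path is one test case."""
    path = (path or []) + [node]
    if not graph[node]:          # terminal node: a complete path
        return [path]
    paths = []
    for nxt in graph[node]:
        paths.extend(all_paths(graph, nxt, path))
    return paths

# Path coverage = paths exercised by your tests / total paths in the model.
print(all_paths(FLOW))
```

With the model in hand, coverage stops being a gut-feel judgment: two test cases cover this model completely, and any missing path is visible by inspection.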
In an ideal world, each project would have time to execute all test cases for all combinations of logic and devices. In reality, there is rarely enough time or resources to do this. We need to be proactive in determining what we test in order to maximize the value of our testing. Risk-based testing is an approach that maximizes the likelihood of identifying important defects and minimizes the risk of unpleasant surprises. Managing testing risk requires continuously analyzing the health of projects and their deliverables, and addressing the identified risks. Learn the guidelines for adopting risk-based testing, identifying the best use of available resources, and updating test plans to reflect and mitigate current risks throughout the project lifecycle, and even before a project starts. Join Carolyn to discover how her team delivers applications with no unexpected defects found in production.
Performance engineering has become increasingly critical to the success and user adoption of web applications, especially with increasing market competition and the demand to operate at internet scale. It is well known that site performance directly impacts the bottom line of online businesses. But not every performance testing effort is implemented in a valuable fashion, nor does it fulfill the needs of the business. Its failures and successes depend on its foundational building blocks. Performance engineers can no longer linger in the comfort zone and the techniques of the past. Not every performance testing strategy needs to be equally elaborate, nor does it need to leverage similar tools and techniques. To deliver effectively on performance testing, we need to be adaptive and flexible, riding the wave of change and the pressure for faster delivery. We need to better understand the technologies, project drivers, and constraints in order to assess and design our testing approach and implement the appropriate flavor of performance testing.
- Learn about technology trends and their implications on performance testing
- Explore the different shades of performance testing and the scenarios where each applies
- Understand the critical steps to delivering an actionable performance testing solution to drive faster and more scalable applications
Most of us communicate relatively well with our peers, especially those we work with closely. However, when it comes to communicating with customers, upper management, or even our own manager, we face more challenges. Technical professionals are often perceived as non-communicative, too blunt, or too likely to fill a discussion with jargon and needless detail. It doesn’t have to be this way! We are adept at learning the technical language of our discipline or even computer languages, so we do have the capacity to learn the languages of business. In this session we will learn the powerful language of Aligned Assertive Communication, which can be adapted to our many interactions with all the stakeholders we encounter. This hands-on practical workshop will help you develop your own specific versions of questions and statements that get others engaged, avoid unproductive conflict, and lead to positive results.
- Use Aligned Assertive Communication successfully with all stakeholders
- Avoid miscommunication, boring presentations, and acrimonious encounters
- Proactively engage customers, upper management, and others in productive conversations that lead to solutions
The Three Pillars framework facilitates effective agile transformation and establishes an agile quality/testing strategy that is balanced across various activities and tactics. Far too often, agile adoptions focus only on the development teams, agile frameworks, or technical practices as part of their adoption strategies, without considering the quality and testing issues. Risk-based testing, exploratory testing, paired collaboration around agile requirements, agile test design, and TDD-BDD-Functional testing automation are explored as tactics within a balanced Three Pillars framework.
Join experienced agile coach Bob Galen as he shares the Three Pillars focus on development and test automation, testing practices, and collaboration activities. Discover how to create a more balanced approach for adopting agile testing strategies across your organization. Learn tactics to support an effective and balanced agile testing strategy via case studies and stories. Finish with the tools to immediately initiate or re-tool a much more effective and balanced agile testing strategy.
Most companies do not measure Cost of Poor Quality (CoPQ), but if they did, they would typically find that they are spending 10-20% of their IT budget fixing problems that shouldn’t exist in the first place. CoPQ improves as quality maturity improves. Maturing quality requires a holistic view of quality beyond a successful testing function. This workshop will focus on Accenture’s Quality Maturity Framework and provide experience in how to measure quality maturity using the Test Assessment Framework. The framework is built around the Testing Maturity Model (TMMi) concepts and Accenture’s contributions to the TMMi Process Reference and Assessment Model.
- Understand how quality maturity contributes to reducing the Cost of Poor Quality (CoPQ)
- Think holistically about quality and that quality is more than the testing function
- Learn how to assess quality using an assessment based on the Testing Maturity Model Integration (TMMi)
ATDD (Acceptance Test-Driven Development) was created as part of Dan North’s BDD (Behavior-Driven Development) model and is almost exclusively associated with agile methodologies. With short sprints and acceptance criteria defined early, ATDD does seem like a natural fit for agile. When developing software using waterfall methodologies, teams rarely have a clear definition of done, so it would seem that ATDD would be a difficult fit for a waterfall project. In his presentation, Mr. Eakin shows how making minor tweaks to waterfall projects using ATDD principles and practices can change a waterfall project from ‘destined for failure’ to ‘smashing success.’
- Apply ATDD principles and practices to waterfall projects
- Get to done faster and still meet the business’s needs
- Start the ATDD process with UAT scripts
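The ATDD flow outlined above can be sketched in miniature (a hypothetical example; the feature, function names, and acceptance criterion are all illustrative): the acceptance tests, derived from a UAT script, are written first and become the executable definition of done that the feature code must satisfy.

```python
# Acceptance criterion (from a hypothetical UAT script, written first):
# gold-tier customers get 10% off orders over $100; everyone else pays full price.

def apply_discount(order_total, customer_tier):
    """Feature code written afterwards, to make the acceptance tests pass."""
    if customer_tier == "gold" and order_total > 100:
        return round(order_total * 0.90, 2)
    return order_total

# The acceptance tests: "done" means all of these pass.
def test_gold_customer_gets_discount_over_100():
    assert apply_discount(200.00, "gold") == 180.00

def test_regular_customer_pays_full_price():
    assert apply_discount(200.00, "silver") == 200.00

def test_gold_customer_small_order_pays_full_price():
    assert apply_discount(50.00, "gold") == 50.00

if __name__ == "__main__":
    test_gold_customer_gets_discount_over_100()
    test_regular_customer_pays_full_price()
    test_gold_customer_small_order_pays_full_price()
    print("all acceptance tests pass")
```

The same tests serve a waterfall project equally well: writing them at the requirements stage gives the team the clear definition of done that waterfall projects usually lack.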
The introduction of mobile devices and applications presents new challenges to traditional usability testing practices. Identifying the differences between usability testing techniques for mobile applications and those for traditional desktop applications is critical to ensuring the acceptance and use of mobile applications. New equipment requirements for testing on mobile platforms add to the transition issues.
Experience a short mobile usability test and identify the process changes that need to be considered for your move to the mobile platform. Create a plan that will help you transition your traditional usability testing program into a mobile environment. Learn a series of tips that will smooth your path into mobile testing. Join Susan to make your transition into the mobile testing field successful.
Many companies do all the right things in terms of quality, including reducing their defects to near zero, eliminating call waiting times, etc., and still struggle to hold onto their customers. Repeatedly, it appears that their customers do not perceive them as delivering quality.
How can a company determine if customers are not perceiving the true quality of its products and services? How much is that perception negatively impacting the current and future bottom line? And what can be done to address the problem?
This fast paced, highly participative workshop will answer these questions from the perspective of a software tester through an engaging presentation, a break-out exercise, and an open discussion.
Conference participants will be divided into smaller groups (requirements traceability, test design, test script execution, etc.) for a break-out session. Each group will be guided through an innovative model to work out actionable steps for incorporating the fundamentals of customer perception of quality into its line of work, fostering customer loyalty and driving increased revenue.
- Grasp the fundamentals of customer perception of quality
- Understand the methods and techniques relevant to software testing for measurably improving the customer perception of quality
- Take away actionable steps for innovations to the software testing practice that enhance customer perception of quality and drive increased revenue
Reusability and maintainability are two of the most important concepts to consider when building an automated test suite. This 90-minute interactive workshop covers the three components needed to design a test architecture that not only allows for multiple test types but is also flexible, scalable, and easily maintained. We will discuss how a custom automated test engine allows you to incorporate several tools and frameworks into one architecture, letting you run multiple types of tests, such as UI, API, database, analytics, and more. We will also cover how to design data tables that not only drive your test scripts but also allow for scalable yet separated parameterization, giving you full control over your test runs. Finally, we will discuss a data-driven approach to script writing that minimizes the number of scripts to be maintained, as well as how to design these scripts so that each one is reusable in other scripts. This allows for a more condensed, robust, and maintainable test suite.
After attending this workshop, you will be able to:
- Design a custom automation test engine
- Design a database that drives your test scripts
- Design reusable and maintainable test scripts
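The architecture described above can be sketched in a few lines of Python (a minimal illustration only; the script names, table layout, and stand-in checks are assumptions, not the workshop's actual design). A data table drives which reusable script runs and with what parameters, so adding a test case means adding a row rather than writing a new script.

```python
# Stand-ins for reusable scripts; in practice these would wrap real tools
# (e.g. a UI driver for the UI script, an HTTP client for the API script).
def ui_login(params):
    return params["user"] == "admin" and params["password"] == "secret"

def api_get_status(params):
    return params["endpoint"].startswith("/api/")

# The engine's registry: one reusable script per test type.
SCRIPTS = {"UI": ui_login, "API": api_get_status}

# The data table: each row parameterizes one test run.
TEST_TABLE = [
    {"type": "UI",  "params": {"user": "admin", "password": "secret"}},
    {"type": "API", "params": {"endpoint": "/api/status"}},
]

def run_suite(table):
    """The engine: dispatch each row to its script and collect results."""
    results = []
    for row in table:
        script = SCRIPTS[row["type"]]          # dispatch by test type
        results.append(script(row["params"]))  # run with the row's parameters
    return results

if __name__ == "__main__":
    print(run_suite(TEST_TABLE))
```

Because the table rather than the code carries the test cases, the same two scripts can serve dozens of parameterized runs, which is the reuse and maintainability payoff the workshop targets.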