Build new skills in specific areas through in-depth tutorials led by expert practitioners and industry leaders. Available tutorials include:


Benchmarking Agile Productivity and Time-to-Market

Tuesday, April 8: 8:30 AM – 12:00 PM

Instructor: Michael Mah, QSM Associates, Inc.

How do you compare the productivity, schedules, and quality you achieve with agile practices against those of traditional waterfall projects? Join Michael to learn how agile and waterfall metrics behave in real projects. Learn to use your own data to move from sketches on a whiteboard to agile project trends on productivity, time-to-market, and defect rates. Using recent, real-world case studies, Michael offers a practical, expert view of agile measurement, showing you these metrics in action in retrospectives. In hands-on exercises, you will learn to replicate these techniques to make your own comparisons for time, cost, and quality. You will work in pairs to calculate productivity metrics using the templates Michael employs in his consulting practice. You will be able to leverage these new metrics to make the case for adopting more agile practices and for creating realistic project commitments within your organization.

  • Use your own data to create agile project trends
  • Learn to make comparisons for time, cost, and quality
  • Take back new ways of communicating the value of agile
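
The kind of side-by-side comparison described above can be sketched in a few lines of code. This is only an illustration: the metric definitions (size per person-month, defects per size unit) are simplified assumptions, not Michael's actual templates or QSM's models, and the project figures are made up.

```python
# Illustrative sketch: comparing productivity, schedule, and defect
# density between agile and waterfall project sets. All numbers and
# metric definitions here are hypothetical examples.

def productivity(size_units, person_months):
    """Delivered size per person-month (e.g., stories or function points)."""
    return size_units / person_months

def summarize(projects):
    """Average productivity, schedule, and defect density across projects."""
    n = len(projects)
    return {
        "avg_productivity": sum(productivity(p["size"], p["effort_pm"]) for p in projects) / n,
        "avg_schedule_months": sum(p["schedule_months"] for p in projects) / n,
        "defects_per_size_unit": sum(p["defects"] for p in projects) / sum(p["size"] for p in projects),
    }

agile = [
    {"size": 120, "effort_pm": 24, "schedule_months": 6, "defects": 18},
    {"size": 90,  "effort_pm": 20, "schedule_months": 5, "defects": 12},
]
waterfall = [
    {"size": 110, "effort_pm": 30, "schedule_months": 9, "defects": 33},
    {"size": 100, "effort_pm": 28, "schedule_months": 8, "defects": 30},
]

print(summarize(agile))
print(summarize(waterfall))
```

With your own historical data in place of the invented figures, the same three summary numbers let you put agile and waterfall trends on one chart.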

From Quality Assurance to Quality Engineering

Tuesday, April 8: 8:30 AM – 12:00 PM

Instructor: Anne Hungate, Union Bank

“Why didn’t testing catch that?” The better question is, “What is wrong with our engineering, and why did we let that escape to testing?” Even agile teams allow defects to escape into system test, where test teams work to align data, applications, and environments in order to validate production readiness. Truly bringing better software to production requires collaboration among all the professional domains and a shared vision of what the customer experience should be. Union Bank made the transition from QA to QE, resulting in a better experience for customers and a more engaging work life for associates. Join Anne and learn how to take your test team from being victims at the end of the delivery lifecycle to being collaborators and partners in quality engineering. Leave with useful methods and tools to improve your team’s impact on and contribution to delivery.

  • Learn to assess your team’s readiness to move to quality engineering
  • Discover which professions are the best allies of testing and how to grow those relationships
  • Develop a quality engineering dashboard to quantify customer experience as well as application and process health

Producing Effective Testing Estimates

Tuesday, April 8: 8:30 AM – 12:00 PM

Instructor: Carolyn Swadron, CIBC

Our managers and project managers want to know how long project quality control will take and how many people are needed to accomplish the task. Many of us struggle with producing realistic estimates. Our estimates often omit key information and assumptions that would enable us to explain why, when circumstances change, our estimates must also change. After we provide our estimates, we repeatedly receive requests to justify and reduce them. Sometimes we are simply told how much time and which resources we have for testing and are required to meet those parameters. This presents a challenge, especially when our experience indicates that testing will require more than we are given and project issues result in further reductions. Do we just accept imposed numbers and hope for the best, or do we clearly show the impacts and increased risks of reductions so that we can agree on what we will and will not deliver during testing? This tutorial answers these questions and gives you the tools for developing credible estimates and for justifying changes to those estimates over the life of the project. It is for all quality assurance and quality control professionals who are responsible for producing project estimates and defining estimating processes.

  • Learn how to identify the components of an accurate estimate
  • Understand how to structure an estimate to be clear and defensible
  • Practice calculating and documenting estimate assumptions and values for a specific project
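
A component-based estimate of the kind described above might look like the following sketch. The components, rates, and percentages are hypothetical illustrations, not Carolyn's method; the point is that every assumption is recorded next to the number it produces, so the estimate can be defended and re-derived when circumstances change.

```python
# Illustrative sketch: a testing estimate built from named components,
# with assumptions documented alongside the values. All figures below
# are hypothetical examples.

def estimate_test_effort(test_cases, hours_per_case, rerun_factor, overhead_pct):
    """Total hours = execution + expected re-execution + overhead.

    rerun_factor: expected fraction of execution effort repeated after
    defect fixes. overhead_pct: environment setup, test data preparation,
    and status reporting, as a fraction of all execution effort.
    """
    execution = test_cases * hours_per_case
    reruns = execution * rerun_factor
    overhead = (execution + reruns) * overhead_pct
    return execution + reruns + overhead

assumptions = {
    "test_cases": 200,      # from a requirements traceability count
    "hours_per_case": 1.5,  # historical average for this application
    "rerun_factor": 0.4,    # 40% of execution repeated after fixes
    "overhead_pct": 0.25,   # setup, data prep, and reporting
}

hours = estimate_test_effort(**assumptions)
print(f"Estimated effort: {hours:.0f} hours")
```

When a stakeholder asks you to cut the estimate, changing one documented assumption (say, the rerun factor) shows exactly what risk the reduction buys.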

Testing Failure: Top Sources & Prevention

Tuesday, April 8: 8:30 AM – 12:00 PM

Instructor: James Campbell, Tulkita Technologies, Inc.

As IT organizations grow and increase in complexity, so do the challenges facing testers. IT senior leadership is increasingly scrutinizing budgets, forcing speed-to-market agendas, and measuring the value of testers and quality organizations. This changing landscape demands closer analysis of the failure points, inefficiencies, and behavioral changes required to streamline the test organization. But where should one focus? Through his years of test consulting experience, James has analyzed and helped hundreds of organizations around the world identify and fix enterprise testing issues impacting success. Join James as he unlocks the mysteries of the test organization and exposes the real sources of failure and, more importantly, how to address and fix them. He will arm you not only with the sources but also with quantifiable proof of how these sources impact the results of both the test team and the broader quality agenda of the IT organization. James will introduce you to his own assessment tool to help you proactively pinpoint possible issues.

  • Identify top sources of failure points and ineffectiveness that occur within the enterprise
  • Understand how these failure points can be identified, evaluated, and assessed as to their impact on testing, quality, and the broader IT organization
  • Develop a remediation plan and measurement framework to analyze and evaluate success

Client-Side Web Performance Optimization and Measurement

Tuesday, April 8: 8:30 AM – 12:00 PM

Instructor: Mais Tawfik Ashkar, PerfNG, LLC

Web performance optimization (WPO) is an emerging movement that focuses on trimming down the layers and speeding up the download and display of web application pages in the browser. In recent years, web application architecture has shifted the bulk of page load time and processing to the front end (the browser) due to emerging technologies and practices such as client-side scripting, Ajax, and Web 2.0. If mismanaged, this can lead to a bloated front end and sluggish performance. As more functionality moves to the browser and applications become more asynchronous, measuring client-side performance becomes an invaluable practice and a necessary precursor to, or parallel step alongside, other performance testing approaches, offering a faster feedback loop and quicker ROI. Join Mais to:

  • Discover recommended web performance practices, WPO patterns, and anti-patterns
  • Explore a set of front-end profiling tools and techniques
  • Explore examples of front-end waterfalls and how to interpret, analyze, and identify performance offenders
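
Identifying "performance offenders" in a front-end waterfall amounts to ranking resources by what they cost the page. The sketch below mimics that analysis on a few entries shaped loosely like browser Resource Timing data; the resource names and numbers are invented for illustration and are not from Mais's material.

```python
# Illustrative sketch: ranking performance offenders in a front-end
# waterfall. Entries loosely mimic browser Resource Timing data; all
# names and numbers are hypothetical.

entries = [
    {"name": "app.js",       "type": "script", "duration_ms": 850, "bytes": 620_000},
    {"name": "styles.css",   "type": "css",    "duration_ms": 120, "bytes": 45_000},
    {"name": "hero.png",     "type": "image",  "duration_ms": 640, "bytes": 1_400_000},
    {"name": "analytics.js", "type": "script", "duration_ms": 310, "bytes": 80_000},
]

def top_offenders(entries, n=3, key="duration_ms"):
    """Return the n slowest (or, with key='bytes', heaviest) resources."""
    return sorted(entries, key=lambda e: e[key], reverse=True)[:n]

for e in top_offenders(entries):
    print(f'{e["name"]}: {e["duration_ms"]} ms, {e["bytes"] / 1000:.0f} kB')
```

Sorting the same entries by bytes instead of duration surfaces a different offender, which is why waterfalls are read along both axes.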

The Power of Mind Mapping

Tuesday, April 8: 1:00 PM – 4:30 PM

Instructor: Lynn McKee, Quality Perspectives

Testers are known for their critical and creative thinking. Stakeholders rely on a tester’s ability to rapidly assess the context of the project and then gather timely, valuable information about the product’s quality. Does your current testing approach enable or stifle the thinking process? Mind mapping is a powerful idea-generation tool that can be used for everything from test plans, strategies, and test cases to testing status reports. Switching from traditional test documentation to mind maps can be a powerful transformation that significantly increases the value of your testing efforts.

  • Understand the effectively infinite test coverage space and the pitfalls of common approaches to coverage
  • Learn about mind maps and the creative thinking process
  • Share approaches to implementing mind maps and important considerations for measuring test progress and quality

Make Integration and Acceptance Testing Truly Agile

Tuesday, April 8: 1:00 PM – 4:30 PM

Instructor: Thomas Cagley, David Consulting Group

The flow of testing is different in an agile project. In many cases, organizations have either not recognized the change in flow or have created agile/waterfall hybrids with test groups holding onto waterfall patterns. While some of these hybrids are driven by mandated contractual relationships, the majority are driven by a lack of understanding, or fear, of how testing should flow in agile projects, which leads to the mistaken belief that integration and acceptance testing can’t be performed within agile frameworks. In reality, integration testing is an important technique in any project, perhaps even more so in agile projects, because it is core to the concept of the “definition of done.” Additionally, user acceptance testing in an agile project is generally more rigorous and timely than the classic end-of-project UAT found in waterfall projects. Join Tom and learn how to truly bring agile methods to your integration and acceptance testing.

  • Discover why the flow of testing in a typical agile project is much more integrated
  • Learn how testing built incrementally and performed in waves can save rework
  • Understand how continuous retesting of incrementally built software yields highly rigorous testing

Mobile Testing: Manual and Automated

Tuesday, April 8: 1:00 PM – 4:30 PM

Instructor: Michael Yudanin, Conflair

This tutorial provides software quality assurance and testing professionals with the background and tools necessary to organize manual and automated testing efforts for mobile applications. Michael will survey the state of mobile technology and the software quality challenges the mobile market poses. He will then focus on translating these challenges into a mobile testing strategy that offers ways to increase the efficiency and effectiveness of mobile testing in its functional, performance, usability, and other aspects. Michael will also offer a focused look at the automation of mobile testing: the need, the options, the tools, and the criteria for selecting the approach that best fits your needs. The tutorial ends with a live demonstration of a mobile test automation approach.

  • Explore how to plan tests for mobile applications and websites based on the factors unique to the mobile market: distribution of platforms, technological challenges, etc.
  • Learn how to plan non-functional tests for mobile devices: performance, usability, survivability and recovery, and other tests
  • Understand how to automate mobile application testing, the criteria for choosing a mobile automation approach, and the main types of tools, their advantages and disadvantages

Risk-Based Test Management

Tuesday, April 8: 1:00 PM – 4:30 PM

Instructor: Clyneice Chaney, Quality Squared

Risk management is a key component of doing business in any industry. But what about testing managers? What risks are associated with managing the test portion of a software development project, and how can they be minimized? What happens to testing projects when the manager doesn’t address potential risks? Testing managers face the greatest challenge testers have today: meeting tighter deadlines while still delivering products that meet customer requirements. Formulating answers to age-old questions like “What should we test?” and “How long do we test?” requires different strategies in fast-paced environments. Risk-based test management is about identifying which testing activities are important for a particular release and how much of the potential activity should be done. It is about managing the testing tasks in a schedule that optimizes the available resources and addresses the risks of not meeting the desired testing schedule. Join Clyneice to learn a risk management approach tailored for testing managers and leads, with the techniques and tools necessary for managing testing risk.

  • Understand standard definitions of and approaches to risk
  • Discover how to use testing-specific checklists designed to assess risks relative to testing projects
  • Explore methods for using the output of a test project risk assessment to create an approach for test project management
  • Learn measurement and reporting techniques to provide relevant test project status
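
A common starting point for the kind of risk assessment described above is scoring each test area by likelihood times impact and spending effort on the riskiest areas first. The sketch below uses that generic technique with invented areas and scores; it is not Clyneice's specific approach or checklists.

```python
# Illustrative sketch of risk-based test prioritization: score each
# area by likelihood x impact (both on a 1-5 scale here) and test the
# highest-risk areas first. Areas and scores are hypothetical.

test_areas = [
    {"area": "payments",  "likelihood": 4, "impact": 5},
    {"area": "reporting", "likelihood": 2, "impact": 2},
    {"area": "login",     "likelihood": 3, "impact": 5},
    {"area": "settings",  "likelihood": 2, "impact": 1},
]

def prioritize(areas):
    """Sort areas by risk score (likelihood x impact), highest first."""
    return sorted(areas, key=lambda a: a["likelihood"] * a["impact"], reverse=True)

for a in prioritize(test_areas):
    print(f'{a["area"]}: risk = {a["likelihood"] * a["impact"]}')
```

When the schedule is cut, the same ranked list shows which low-risk areas are dropped first and what residual risk is being accepted.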

Improving Quality through Self-Sustaining Process Improvements

Tuesday, April 8: 1:00 PM – 4:30 PM

Instructor: Richard Bechtold, PhD, Abridge Technology

A well-known principle within manufacturing is that product or service quality is nearly always a function of process quality. However, improvement efforts often require excessive time, effort, or money and yield only a fraction of the expected benefit. This tutorial describes a set of key methods for improving product and service quality through the design and implementation of process improvements that are readily implemented, effective, and self-sustaining. Central to this set of methods are the principles of objective proof, positive feedback loops, and self-correcting systems. Improvement methods described include clarifying quality objectives, identifying process alternatives, leveraging quality standards and models, determining feasibility, analyzing trade-offs, designing process experiments, deploying incrementally, evaluating impacts, verifying and validating process capability, and managing security and risk. This session is not only for people relatively new to process and product improvement but also for experienced quality veterans looking to increase success rates and benefits. The material is designed for attendees with virtually any level of responsibility, from hands-on developers and testers to executive decision-makers.

  • Understand fundamental principles of introducing and implementing new processes and techniques
  • Design improved processes that are intentionally self-correcting and self-sustaining
  • Increase the objectivity of your techniques for tracking and managing improved processes, and for demonstrating and communicating improvement results