Karen Johnson, Software Test Management, Inc.

One-Day Class: Build Your Mobile Testing Knowledge

Monday, April 15, 2013: 8:30 AM - 4:30 PM



This full-day course reviews the mobile marketplace as well as mobile design concepts and terms. Students learn about the mobile marketplace both through competitive analysis and through a review of marketplace statistics. The training covers device and application settings that impact your mobile app. This class is designed for testers new to mobile testing and for students who want to understand mobile design and the mobile marketplace in more detail. The class time is divided among lecture, interactive discussion, and classroom exercises.

Bring your own smartphone to class to enhance your learning.

Course Objectives

In this full-day course on mobile testing, the primary focus is on three key challenges: understanding the mobile device market, learning about device and application settings, and learning about the mobile user interface experience. Attendees will learn how to:
  • Find ways to keep up-to-date with the mobile market
  • Learn mobile UX concepts and design
  • Understand device and application settings
  • Learn how to conduct a competitive analysis

Course Outline

Introduction to Mobile Testing – Are you overwhelmed by the number of mobile devices you need to test? The market is large and new devices become available almost every week.  Specific topics include:

  • The device market
  • The app market
  • The “global” market versus North America
  • Keeping up with marketplace news and statistics

Mobile User Interface, Design & User Experience – A detailed review of terminology and design concepts of the mobile user experience.  Topics include:

  • Uniquely mobile: touch, pinch & zoom, one eye, one hand
  • Responsive design
  • Phone versus tablet
  • iOS versus Android versus Windows Phone 8
  • Navigation, menus and forms
  • Search, sort, filter

Mobile App & Device Settings – A review of device and app settings. Topics include:

  • Connectivity: Wi-Fi, 3G, 4G
  • Data storage: SD cards, SIM cards
  • Location-Awareness
  • App Settings & App Permissions

The Competitive Mobile Marketplace – Learning how to conduct a survey of mobile apps and becoming aware of what your competitors are offering is one method of growing your own mobile knowledge.  Specific topics include:

  • Market ratings & comments
  • Reviewing the vertical competition
  • Reviewing the mobile competition
  • Bring your own smartphone to conduct a competitive analysis


Clareice Chaney, MITRE Corporation; Clyneice Chaney, Quality Squared

Half-Day Tutorial: Outsource Testing: How to Monitor Contractor Performance

Tuesday, April 16, 2013: 8:30 AM - 12:00 PM


In this age of offshore, near shore, and outsourced testing, who’s monitoring performance outcomes? Can you tell if you’re getting what you paid for? Is there a “best” or “better” way to measure performance in service contracts? Contractor performance is a new area for many testing and quality managers, but supplier management is not. This session provides best practices for test managers that lead to more effective measurement of the difference between expected and actual performance. Quality assessment in a performance-based environment represents a significant shift from the more traditional quality assurance focus on process compliance to measuring outcomes. The key is to identify the best method to align performance with measurement. This session addresses issues related to measuring supplier efforts in a performance-based environment, determining what a “good job” looks like, and identifying key problem areas and some best practices to assess whether or not outcomes are being achieved.

  • Discover cornerstones of performance measurement and surveillance
  • Explore issues in performance measurement in a performance-based environment
  • Learn what to measure and how as well as tips on identifying outcomes

James Campbell, Tulkita Technologies, Inc.

Half-Day Tutorial: Demystifying Metrics: Showing Your Real Value

Tuesday, April 16, 2013: 1:00 PM - 4:30 PM


Test professionals and organizations alike use test metrics on a daily basis to track and monitor the health of their test practice. Conferences, books, and training courses offer a plethora of information and theory related to test metrics, and yet we still struggle with them. In this tutorial, James will get back to basics and examine how a test measurement program should be properly designed and implemented. Learn what your business and IT stakeholders want from your metrics and how to effectively communicate the results. Understand that test metrics can and should go well beyond execution status. Based on his practical experience implementing real-world measurement programs for both test management and senior IT leadership alike, James will share the essential ingredients required to be successful – it goes well beyond the numbers! Identify opportunities, synthesize action plans, and equip yourself with the tools necessary to show your value and achieve success.

  • Identify and align QA metrics with your Business and IT Stakeholders
  • Discover the steps required to setup and institutionalize the right metrics & reports
  • Learn to automate metrics collection and reporting – cut your time in half

Jeremy Scott, Deloitte Consulting, LLP; Rohit Pereira, Deloitte Consulting, LLP

Presentation: Test Estimation: Planning for Reality

Wednesday, April 17, 2013: 9:45 AM - 10:45 AM


What do you say when your project manager asks you how long testing will take? Even when you think you have a great answer, how many times has your test schedule jumped off track shortly into the test execution phase? Join Jeremy and Rohit for a guide to reaching an answer you can live with. You will discuss how to come up with a realistic answer that factors in many of the variables and unknowns. You’ll cover back-of-the-napkin estimation, parametric estimation, velocity measurements, and estimation for a dynamic environment. Jeremy and Rohit will present a framework for estimation, demonstrate three estimation models in MS Excel, and then explain back-of-the-napkin estimation with a whiteboard and marker. They will use examples based on client experience to demonstrate the complexity of real projects. At the end of this session, you will have an understanding of what you need to do today to give better estimates next month.

Yolonda Kennedy, WellPoint, Inc.

Presentation: Three Strategies to Improve Your Testing Effort

Wednesday, April 17, 2013: 11:00 AM - 12:00 PM


Do you want to improve the effectiveness and efficiency of your testing effort? Would you like to learn key strategies that are sure to help you improve the quality of testing? If you answered yes, then this session is for you. There are many things that can be done to improve testing, but you cannot implement them all. This session will focus on three key strategies that will improve the quality of your testing and allow you to focus on what is most important. In this session you will learn how to implement and use a formal review process to ensure the quality and testability of requirements, how to identify testing risks early in the project focusing on both product and project risks, and finally, how to use adaptive test planning to plan and manage your testing efforts.

Fiona Charles, Quality Intelligence

Workshop: Using a Mindmap to Develop and Communicate a Test Strategy

Wednesday, April 17, 2013: 1:00 PM - 2:30 PM


A test strategy is the set of big-picture ideas embodying the overarching direction or design of a test effort. It’s the significant values that will inspire, influence and ultimately drive your testing, and the overall decisions you have made about ways and means of delivering on those values. Rather than the weighty templates standard in many organizations, a lightweight medium like a mindmap is a far superior tool for developing a test strategy and communicating its essentials to your stakeholders. In this workshop, you will work together with Fiona to develop a test strategy mindmap. Along the way, you’ll explore what really matters in a test strategy, how best to capture it in a mindmap, and how then to use the mindmap to communicate the strategy to stakeholders. Come join this lively workshop and have fun developing your own mindmap!

  • Learn the differences between a strategy and a tactical plan
  • Understand the essential elements of a test strategy
  • Discover how to use a mindmap to develop your test strategy

Shivakumar Balasubramaniyan, Cognizant

Presentation: Test Architecture Planning for Test Coverage and Efficiency

Wednesday, April 17, 2013: 3:00 PM - 4:00 PM


Test coverage and efficiency go hand in hand and share a distinct relationship, directly impacting quality and cost respectively. There are a number of popular test management tools in the market today; however, achieving optimal test coverage with minimal tests has become more challenging, and the need for the right tools and methods has become more significant than ever. Architecting tests to achieve the right test coverage involves robust test design, execution, management methods, and best practices. In this presentation, Shiva will discuss methodologies and best practices coupled with an objective, metrics- and risk-based test model that can achieve test coverage without compromising cost or quality.

Coming soon!

Presentation: Intelligent Virtual Services for SOA Test Data

Thursday, April 18, 2013: 9:45 AM - 10:45 AM


Data stubbing and message virtualization are quickly becoming a standard practice for many large and important projects, yet data-driven organizations are constantly faced with numerous challenges when implementing changes to SOA frameworks. Huw will present new and exciting ideas on basic virtual message management and how it can be moved to the next level with the simple addition of better test data management and coverage analysis techniques. He will present the advantages of having an array of techniques to choose from when preparing data for SOA testing. Some example techniques include: creating simple echo responses, provisioning obfuscated or masked production data, utilizing constrained orthogonal arrays, understanding cause and effect and using complex multi-level sets of responses, which include expected results. You will leave with progressive ideas for securing, controlling and enhancing the data used for SOA testing. These ideas can be implemented in your organization to encourage improvements in SOA frameworks.

Karen Johns, Mosaic, Inc.

Presentation: Test Data: Is it Really Unmanageable?

Thursday, April 18, 2013: 11:00 AM - 12:00 PM


Test data is a critical component of our testing. The data files, input, and expected results are fundamental to creating the conditions that must be tested. No one disagrees on the importance of test data; yet, it is still getting the better of us. Data is sensitive, ever-changing, complex, and difficult to maintain. What can be done? In this session you will discuss the test data challenges and look at techniques that are used to improve the reusability, maintainability, and manageability of test data. Come and share your test data challenges and techniques. Together with Karen, you will explore ways to manage test data and improve your ability to create and maintain the test conditions necessary to ensure reliable, well-tested systems.

Richard Bechtold, PhD, Abridge Technology

Workshop: Eight Steps Toward Establishing and Enhancing Your Measurement Capability

Thursday, April 18, 2013: 1:00 PM - 2:30 PM


In most organizations, some type of measurement effort is already in place. Almost all organizations track dollars, and most organizations track other items of interest as well. Nevertheless, an enduring challenge is ensuring that your set of measurements and associated activities result in a sustainable and improvable capability, and that they truly provide a solid foundation for your ongoing pursuit of quality. In this workshop you will learn eight simple and effective steps for establishing and enhancing measurement capability: (1) determining measurement motivations; (2) prioritizing candidate measurements; (3) conducting experimentation and evaluation; (4) performing measurement reduction; (5) implementing and deploying targeted measurements; (6) visualizing, analyzing, and verifying deployed measurements; (7) packaging and communicating; and (8) systemically validating measurement capability. This workshop is not only for beginners, but also for seasoned professionals who are seeking to streamline, simplify, integrate, and strengthen the accuracy and relevancy of the measurement activities within their organization.

  • Apply eight simple steps to achieve an effective, robust, and valuable set of measurements
  • Identify and evaluate the measurement needs of your organization
  • Learn techniques for streamlining your measurement activities

David Herron, David Consulting Group

Presentation: 100 Defects Isn’t That Bad!

Thursday, April 18, 2013: 3:00 PM - 4:00 PM


How do you properly compare the quality of two or more software deliverables without an effective normalizing metric? The answer is you can’t. If project A has 100 defects and project B has 50 defects, do you automatically assume project B is a higher quality deliverable? That may not be the case. But often, that is the perception of the end user. An effective normalizing metric allows you to properly measure and compare the level of quality across software deliverables. It can also be used to manage end user expectations regarding the quality of the software in relation to the functional value being delivered. Furthermore, the normalizing metric can be used to predict future quality outcomes and to establish service levels of quality performance. Learn how you can quickly and easily incorporate this all-important metric into your quality program.

Coming soon!; Beverly Edwards, Allstate Insurance Company

Presentation: Regression Transformation Initiative: A Case Study

Friday, April 19, 2013: 9:45 AM - 10:45 AM


Software regression testing is an essential and challenging task for software test groups. As any software application expands and evolves, it becomes increasingly complex and difficult to test every piece of functionality. And, it becomes imperative to understand the key attributes that can lead to a comprehensive, lean, and effective regression test bed. Allstate Insurance Company, along with an outsourcing partner, collaborated on a regression transformation initiative for a very complex policy administration application. This application undergoes constant changes across multiple releases during the year. The initiative implemented a very structured, scientific, and statistical approach to create an effective regression test bed that maximizes coverage, optimizes scenarios, and is easily maintainable for ongoing updates. Join Beverly and Ashwini to discuss key imperatives for regression testing, a structured approach to building a regression suite and the implementation of statistical techniques and tools to maximize coverage with an optimized set of test cases.

Gregg Donner, Compuware Corporation

Presentation: Business Intelligence and Data Warehouse Testing Techniques

Friday, April 19, 2013: 11:00 AM - 12:00 PM


Today, business intelligence solutions rival enterprise transaction-based applications in importance to the organization. Immediate access to a trusted source of business information is critical for organizations to understand what is occurring within their various lines of business and to help make better future business decisions. As more data is collected and transformed into useful business information, implementations have grown more complex, and the need to get the data right the first time has become the critical success factor for business intelligence and data warehouse solutions. The tools and techniques for testing business intelligence solutions are not mature and have caused many implementations to fail because business users do not deem the information reliable. In this session, we will review testing activities that have led to successful business intelligence implementations, pitfalls to avoid, and trends in business intelligence that will affect how solutions are tested in the future.