Software Testing Techniques for User Acceptance and Systems Integration Testing (A Practical Approach to Software Testing)

Course Duration: 2 Days (14 Contact Hours)

Course Category: Software Testing

 

Course Overview

 

There are many training courses that offer a solid grounding in the theory and practice of software testing, but software testers attending these courses often struggle to apply what they have learnt when they return to the workplace. This gap between theory and practice is known as the “testing gap”.

This two-day course is designed to bridge the “testing gap”. It is aimed at software testers who need a practical approach to User Acceptance Testing (UAT) and System Integration Testing (SIT) that can be applied to software projects in the real world.

The course is presented by an experienced software tester who shares a wealth of practical tips and recent project experience with participants.
 
Course Features
  • Presents a practical, non-academic approach to UAT and SIT
  • Based on a proven framework for planning UAT and SIT
  • Highlights the importance of exploratory testing as a complement to traditional specification-based testing
  • Detailed discussion of date-based testing, with practical solutions to common problems and issues
  • Presented by an experienced tester who has been responsible for the successful completion of UAT and SIT projects
Participant Benefits
  • Offers an alternative perspective on software testing that bridges the 'testing gap'
  • Builds confidence in those who need to develop UAT and SIT strategies and plans
  • Develops the skills required to write effective UAT and SIT test specifications
  • Provides a factual introduction to exploratory testing
Who Should Attend
  • Those who want to develop their careers as UAT and SIT test specialists, such as Test Engineers, Test Analysts, Quality Assurance Staff, Agile Teams, Business Analysts, Business Systems Analysts, Systems Analysts and Functional Analysts
  • Those who need to plan and manage the UAT and SIT test effort, such as Test Managers, Quality Assurance Staff, Project Managers, Program Managers, Product Owners, Product Managers and Scrum Masters
  • Those who want to gain an understanding of UAT and SIT techniques, such as Test Managers, Project Managers, Program Managers, Software Engineers, User Representatives, Product Owners, Product Managers, Scrum Masters, Requirements Engineers, Requirements Analysts, Human Factors Specialists, Software Developers, Process Engineers, Software Engineering Process Group (SEPG) Staff, Methodologists and Process Improvement Staff
Course Agenda

The Problem With Software Testing

  • Problems With Confusing Terminology and Popular Myths
  • The 'Testing Gap'
  • Product vs. Project Life Cycles
  • Views of Quality
  • Software Testing Objectives

A Framework for Better Understanding Software Testing

  • Definition of Test Levels
  • Understanding the Integration Test Level
    • Component Integration
    • Feature Integration
  • Identifying Test Items
  • Understanding the Acceptance Test Level
  • Testing is Not Black and White
    • Alternatives to the Traditional 'Black-Box' – 'White-Box' View
    • Defining the Test Basis
  • Who Executes the Tests?
  • Automating Test Execution
    • Two Compelling Drivers For Test Automation
    • When Is Manual Test Execution Impractical?
    • Comparing Automated and Manual Testing
  • A Practical Software Testing Framework
  • Test Case Definition
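A test case definition like the one this module covers is often captured as a simple structured record. The sketch below is purely illustrative; the field names and the example content are assumptions, not a prescribed template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A minimal, illustrative test case record (field names are assumptions)."""
    case_id: str
    objective: str                        # what the test is intended to demonstrate
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""

# Example: a hypothetical UAT test case for an order-entry feature
tc = TestCase(
    case_id="UAT-042",
    objective="Verify an order can be placed with a valid payment card",
    preconditions=["User is logged in", "Cart contains at least one item"],
    steps=["Open the checkout page", "Enter valid card details", "Confirm the order"],
    expected_result="Order confirmation page is shown and an order number is issued",
)
```

Keeping test cases in a uniform structure like this makes it easier to track coverage and completion later in the test effort.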

Planning User Acceptance Testing (UAT) and System Integration Testing (SIT)

  • Using the Software Testing Framework for Test Planning
    • Typical UAT Strategy
    • Typical SIT Strategy
    • Developing Custom Strategies
  • Risk-Based Testing
    • Potential Failures
    • Risk Analysis
    • Mitigation Strategy
  • Expanding a Test Strategy into Full UAT and SIT Test Plans
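The risk-based testing steps above (identify potential failures, analyse risk, plan mitigation) are often reduced in practice to a likelihood × impact score used to prioritise test effort. A minimal sketch of the idea; the 1–5 scale and the example failures are assumptions:

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Classic risk exposure: likelihood x impact, each rated 1 (low) to 5 (high)."""
    for value in (likelihood, impact):
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return likelihood * impact

# Rank candidate failures so the riskiest areas are tested first
failures = [
    ("Payment gateway timeout", 4, 5),     # likely and severe
    ("Typo on help page", 3, 1),           # likely but trivial
    ("Duplicate invoice generated", 2, 5), # unlikely but severe
]
ranked = sorted(failures, key=lambda f: risk_score(f[1], f[2]), reverse=True)
```

The ranking then feeds the mitigation strategy: the highest-exposure items get the earliest and deepest test coverage.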

Feature Testing

  • What are Features?
  • Features vs. Components
  • What is Feature Testing?
    • Testing Features in Isolation
    • Testing Component Integration
    • Feature vs. Component Testing
  • When is Feature Testing Performed?
    • The Role of Feature Testing During SIT
    • The Role of Feature Testing During UAT
  • Modelling Features
    • Why Model Features?
    • How to Model Features
      • Natural Language
      • Use Case Diagrams
      • User Stories
      • Flow Diagrams
  • Developing a Feature Test Specification
    • Testing Features in Isolation and Sequentially
    • Testing Between Boundaries and On Boundaries
    • Test To Fail (Negative Tests)
    • Testing Business Rules and Decision Outcomes
    • Testing Form Actions
  • Feature Testing Tools and Test Automation
  • Planning Feature Testing
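Testing between boundaries, on boundaries, and to fail can be illustrated with a single hypothetical business rule, say "order quantity must be between 1 and 100". The function and rule below are assumptions invented for the sketch:

```python
def accept_quantity(qty: int) -> bool:
    """Hypothetical business rule: an order quantity of 1 to 100 inclusive is valid."""
    return 1 <= qty <= 100

# On-boundary values: the edges of the valid partition
assert accept_quantity(1) and accept_quantity(100)
# Between-boundary value: a representative from inside the partition
assert accept_quantity(50)
# Test-to-fail (negative) values: just outside each boundary
assert not accept_quantity(0) and not accept_quantity(101)
```

The same partitioning discipline applies whether the checks are executed manually from a feature test specification or automated as above.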

End-to-End Testing

  • What Is End-to-End Testing?
    • End-to-End Scenarios and Scenario-Based Testing
    • Testing Feature Integration
    • End-to-End Verification of Business Rules
  • When Is End-to-End Testing Performed?
    • The Role of End-to-End Testing During SIT
    • The Role of End-to-End Testing During UAT
  • Modelling Business Object Life Cycles
  • State Charts
    • Modelling States and Changes of State
    • Modelling Interactive and Batch Features
    • Modelling Decisions and Conditions
    • Modelling Batch Features
    • Dealing With Duplicate Transitions and Grouping Into Sub-States
  • Developing an End-to-End Test Specification
    • Generating Scenarios
    • Scenario Steps
    • Client Stereotypes
  • End-to-End Testing Tools and Test Automation
  • Planning End-to-End Testing
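Modelling a business object life cycle as a state chart lets end-to-end scenarios be generated as paths through the transitions. A minimal sketch for a hypothetical order life cycle (the states and events are assumptions for illustration):

```python
# Hypothetical order life cycle: (current state, event) -> next state
TRANSITIONS = {
    ("draft", "submit"): "placed",
    ("placed", "pay"): "paid",
    ("placed", "cancel"): "cancelled",
    ("paid", "ship"): "shipped",
}

def run_scenario(events: list[str], start: str = "draft") -> str:
    """Replay a sequence of events against the state chart; returns the final state."""
    state = start
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} is invalid in state {state!r}")
        state = TRANSITIONS[key]
    return state

# A happy-path scenario and an alternative (cancellation) scenario
assert run_scenario(["submit", "pay", "ship"]) == "shipped"
assert run_scenario(["submit", "cancel"]) == "cancelled"
```

Invalid event sequences (e.g. shipping an unpaid order) raise an error, which is exactly the negative behaviour an end-to-end scenario step should probe.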

Date-Based Testing

  • What Is Date-Based Testing?
    • Events
    • Dates and Online Features
    • Dates and Batch Features
  • System Clocks
  • Changing the System Clock
    • Proxy Clock
    • Real Time Clock(s)
    • Date API
    • Date Utility
    • Other Issues
  • Developing a Date-Based Test Specification
    • Date Partitions
    • Synchronising Scenarios
    • Developing a Test Schedule
  • Date-Based Testing Tools and Test Automation
  • Planning Date-Based Testing
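Of the clock-changing options listed above, a date API, i.e. an injectable clock that the application consults instead of the real system clock, is often the least intrusive. A minimal sketch of the idea; the `TestClock` class and the overdue-invoice rule are assumptions invented for illustration:

```python
import datetime

class TestClock:
    """An injectable clock the system under test queries instead of the real one."""
    def __init__(self, today: datetime.date):
        self._today = today

    def today(self) -> datetime.date:
        return self._today

    def advance(self, days: int) -> None:
        """Roll the clock forward, e.g. to trigger a date-driven batch event."""
        self._today += datetime.timedelta(days=days)

# Hypothetical business rule under test: an invoice is overdue 30 days after issue
def is_overdue(issued: datetime.date, clock: TestClock) -> bool:
    return (clock.today() - issued).days > 30

clock = TestClock(datetime.date(2024, 1, 1))
issued = clock.today()
assert not is_overdue(issued, clock)
clock.advance(31)            # jump past the due date without waiting in real time
assert is_overdue(issued, clock)
```

This is what makes date partitions and scenario synchronisation practical: the test schedule advances the clock deterministically rather than resetting the real system clock.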

Exploratory Testing

  • What Is Exploratory Testing?
  • Exploratory vs. Scripted Testing?
  • When Is Exploratory Testing Performed?
  • Exploratory Testing Skills
  • Exploratory Testing Tools and Test Automation
  • Planning Exploratory Testing

Managing the UAT and SIT Effort

  • Tracking Progress
  • Managing Issues
    • Tracking Product Quality
    • Triaging Issue Reports
  • Test Completion Criteria
  • Finding a Voice for Product Confidence
     

