Software Testing Techniques for User Acceptance and Systems Integration Testing

Course Duration: 2 Days

Course Category: Software Testing

Contact Hours: 14


Course Overview

 

There are many training courses offering a solid grounding in the theory and practice of software testing, but software testers attending these courses often struggle to apply what they have learnt when they return to the workplace. This gap between theory and practice is known as the “testing gap”.

This two-day course is designed to bridge the “testing gap”. It is aimed at software testers who need a practical approach to User Acceptance Testing (UAT) and System Integration Testing (SIT) that can be applied to software projects in the real world.
 
The course is presented by an experienced software tester who shares a wealth of practical tips and recent project experience with participants during the course.
 
Course Agenda
  • The problem with software testing
  • A framework for better understanding software testing
  • Planning User Acceptance Testing (UAT) and System Integration Testing (SIT)
  • Feature testing
  • End-to-end testing
  • Date-based testing
  • Exploratory testing
  • Managing the UAT and SIT effort
Who Should Attend
  • Test Managers, Test Engineers, Testers, Quality Assurance Staff
  • Business Analysts, Business Systems Analysts, Systems Analysts, Functional Analysts
  • User Representatives, Project managers, Program Managers
  • Software Engineers, Developers, Requirements Engineers, Requirements Analysts, Human Factors Specialists
  • Process Engineers, Software Engineering Process Group (SEPG) Staff, Methodologists, Process Improvement Staff
Classification of Knowledge Levels
  • Level 1: Remember (K1) – the student will recognise, remember and recall a term or concept.
  • Level 2: Understand (K2) – the student can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify, categorize and give examples for the testing concept.
  • Level 3: Apply (K3) – the student can select the correct application of a concept or technique and apply it to a given context.
Course Topics

Problems with Software Testing

  • Confusing terminology
  • Software testing popular myths and incorrect beliefs
  • The testing “gap”
  • Product vs. project life cycles
  • Views of software quality
    • Measure of “excellence”
    • Fit for intended purpose
    • Conform to specification
    • Absence of defects and other quality goals
    • Provides value
  • Summarising views of quality in the quality triangle
  • Aligning software testing objectives with views of quality
    • Validating that software is fit for its intended purpose
    • Verifying that software conforms to its specification
    • Identifying defects
    • Measuring product attributes
    • Building confidence

Exercises and Practice

  • A short quiz based on the concepts learnt in this section

A Framework for Better Understanding Software Testing

  • The need for a multidimensional view of software testing
  • Levels of software testing
    • Unit and component testing level
    • Integration testing level
      • Understanding multiple levels of integration
      • Vertical vs. horizontal integration
    • System testing level
    • The “recursive” nature of test levels
  • Identifying test items and their relationship to test levels
  • Selecting a test basis
    • Requirements as a test basis
    • Program code as a test basis
    • Variations on requirements and code
    • Models as a test basis
    • Experience as a test basis
  • Designing test cases
    • What is a test case?
    • Test to pass or “positive” test cases
    • Test to fail or “negative” test cases
    • Overview of test case design techniques
  • Who executes the tests?
  • Automating test execution
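
The test-to-pass vs. test-to-fail distinction above can be shown with a minimal sketch. The `parse_age` function here is a hypothetical example invented for illustration, not part of the course materials:

```python
# Hypothetical example: "positive" and "negative" test cases for a
# simple age-parsing function (names invented for illustration).

def parse_age(text: str) -> int:
    """Parse a non-negative, plausible age from user input."""
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Test to pass ("positive" cases): valid inputs, expected outputs.
assert parse_age("42") == 42
assert parse_age("0") == 0

# Test to fail ("negative" cases): invalid inputs must be rejected.
for bad in ["-1", "abc", "151"]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # rejection is the expected behaviour
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```

Note that the negative cases assert on the rejection itself; a test that merely avoids crashing on bad input has not verified anything.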

Exercises and Practice

  • Apply the software testing framework to describe the UAT and SIT strategies currently in place at the participant’s organisation

Planning User Acceptance Testing (UAT) and System Integration Testing (SIT)

  • Applying the software testing framework
    • The standard User Acceptance Testing (UAT) strategy
    • The standard System Integration Testing (SIT) strategy
    • Customising UAT and SIT strategies
  • Risk-based testing
    • Product risks
    • Project risks
  • Developing UAT and SIT Test Plans
    • Identifying test activities
    • Estimating test effort
    • Assigning resources
    • Developing a test schedule
    • Test environment and tools

Exercises and Practice

  • Identify the key elements of a customised UAT Test Plan based on the details of a case study

Feature Testing

  • What are features?
  • Features vs. components
  • What is feature testing?
    • Testing features in isolation
    • Testing vertical integration
  • When is feature testing performed?
    • The role of feature testing during SIT
    • The role of feature testing during UAT
  • Modelling features
    • Why model features?
    • How to model features
  • Developing a feature test specification
  • Feature testing tools and test automation
  • Planning and managing feature testing
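
Testing a feature in isolation, as described above, typically means substituting stubs for the feature’s dependencies. The following sketch assumes a hypothetical “password reset” feature; all names are invented for illustration:

```python
# Hypothetical sketch: testing a "password reset" feature in isolation
# by substituting a stub for the mail-sending dependency.

class StubMailer:
    """Records outgoing mail instead of sending it."""
    def __init__(self):
        self.sent = []

    def send(self, to: str, subject: str) -> None:
        self.sent.append((to, subject))

def request_password_reset(email: str, mailer) -> bool:
    """The feature under test: trigger a reset mail for a known user."""
    known_users = {"ann@example.com", "bob@example.com"}
    if email not in known_users:
        return False
    mailer.send(email, "Password reset instructions")
    return True

# Feature test in isolation: exercise the feature, inspect the stub.
mailer = StubMailer()
assert request_password_reset("ann@example.com", mailer) is True
assert mailer.sent == [("ann@example.com", "Password reset instructions")]
assert request_password_reset("zoe@example.com", mailer) is False
assert len(mailer.sent) == 1  # no mail was sent for the unknown user
```

The same feature would later be exercised against the real mailer when testing vertical integration.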

Exercises and Practice

  • Develop a consolidated model of software features based on the details of a case study

End-to-End Testing

  • Business object life cycles and scenarios
  • What is end-to-end testing?
    • Testing features in a specific sequence
    • Testing horizontal integration
  • When is end-to-end testing performed?
    • The role of end-to-end testing during SIT
    • The role of end-to-end testing during UAT
  • Modelling business object life cycles
    • Business object states
    • External and internal events that trigger changes of state
    • Grouping states
    • Conditions and actions
    • Modelling decisions
  • Generating test scenarios
    • Basing test scenarios on the life cycle of a business object
    • Prioritising test scenarios based on risk
  • Developing an end-to-end test specification
  • End-to-end testing tools and test automation
  • Planning and managing end-to-end testing
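
The idea of modelling a business object’s life cycle and then generating test scenarios from it can be sketched as follows. The order states and events are invented for illustration, not taken from the course case study:

```python
# Hypothetical sketch: an order's life cycle as a state model, and
# simple path enumeration to generate end-to-end test scenarios.

TRANSITIONS = {
    ("Draft",   "submit"):  "Placed",
    ("Placed",  "pay"):     "Paid",
    ("Placed",  "cancel"):  "Cancelled",
    ("Paid",    "ship"):    "Shipped",
    ("Shipped", "deliver"): "Delivered",
}
FINAL_STATES = {"Cancelled", "Delivered"}

def scenarios(state="Draft", path=()):
    """Yield every event sequence from the start state to a final state."""
    if state in FINAL_STATES:
        yield path
        return
    for (src, event), dst in TRANSITIONS.items():
        if src == state:
            yield from scenarios(dst, path + (event,))

# Each generated sequence is a candidate end-to-end test scenario,
# e.g. submit -> pay -> ship -> deliver, or submit -> cancel.
for s in scenarios():
    print(" -> ".join(s))
```

In practice the generated scenarios would then be prioritised by risk rather than executed exhaustively.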

Exercises and Practice

  • Develop a state model that describes the life cycle of some core business objects based on the details of a case study
  • Use the life cycle model to generate a set of business scenarios suitable for end-to-end testing

Date-Based Testing

  • What is date-based testing?
  • When is date-based testing performed?
    • Date-based testing and feature testing
    • Date-based testing and end-to-end testing
  • Date-based events
    • Relative and absolute dates
    • Anniversaries and time periods
    • Adding date-based events to the life cycle models
  • System clocks
    • Real time clocks
    • Proxy clocks
  • Date-based testing combined with feature and end-to-end testing
  • Developing a date-based test specification
  • Date-based testing tools and test automation
  • Planning and managing date-based testing
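
The proxy-clock idea above can be sketched in a few lines: the application asks a clock object for today’s date, so tests can advance time without touching the real system clock. The `ProxyClock` and `is_anniversary` names are invented for this example:

```python
# Hypothetical sketch of a "proxy clock" for date-based testing.

import datetime

class ProxyClock:
    """A controllable stand-in for the system clock."""
    def __init__(self, start: datetime.date):
        self._today = start

    def today(self) -> datetime.date:
        return self._today

    def advance(self, days: int) -> None:
        self._today += datetime.timedelta(days=days)

def is_anniversary(policy_start: datetime.date, clock) -> bool:
    """A date-based event: the policy's yearly anniversary."""
    today = clock.today()
    return ((today.month, today.day) == (policy_start.month, policy_start.day)
            and today > policy_start)

# Drive a date-based test without waiting a year of real time.
clock = ProxyClock(datetime.date(2024, 3, 14))
start = datetime.date(2024, 3, 15)
assert not is_anniversary(start, clock)
clock.advance(366)  # one year and a day later: 2025-03-15
assert is_anniversary(start, clock)
```

A real-time clock would implement the same `today()` interface, so production code stays unaware of which clock it is given.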

Exercises and Practice

  • Identify date-based events based on the details of a case study and incorporate them into a life cycle model
  • Develop a test calendar for the date-based events and a schedule of date changes together with a suitable strategy for changing the system clock during the testing effort

Exploratory Testing

  • What is exploratory testing?
  • When is exploratory testing performed?
    • Exploratory testing and feature testing
    • Exploratory testing and end-to-end testing
    • Exploratory testing and date-based testing
  • Skills required for exploratory testing
  • Test specifications vs. checklists
  • Exploratory testing tools and test automation
  • Planning and managing exploratory testing

Exercises and Practice

  • View some videos describing the principles of exploratory testing
  • Discuss strategies for developing and sharing checklists

Managing the UAT and SIT Effort

  • Managing issues
    • The importance of bug “triage”
    • Bug tracking
    • Requirements and change management
  • Tracking progress
    • Using “burn down” charts to track test execution progress
    • Using the rate of finding bugs to track product quality
    • Using the gap between finding and fixing bugs to track product readiness
  • Finding a voice for product confidence!
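
The three tracking measures above reduce to simple arithmetic over daily counts. The figures in this sketch are invented purely to illustrate the calculations:

```python
# Hypothetical sketch: progress-tracking metrics from daily cumulative
# counts (all numbers invented for illustration).

planned_tests = 120
executed_per_day   = [0, 15, 32, 51, 68, 90, 112]  # cumulative tests run
bugs_found_per_day = [0, 4, 9, 15, 18, 20, 21]     # cumulative bugs found
bugs_fixed_per_day = [0, 1, 3, 8, 13, 17, 20]      # cumulative bugs fixed

# Burn-down: tests remaining each day.
burn_down = [planned_tests - done for done in executed_per_day]

# Find rate: new bugs per day; a falling rate suggests rising quality.
find_rate = [b - a for a, b in zip(bugs_found_per_day, bugs_found_per_day[1:])]

# Find/fix gap: open bugs each day; a shrinking gap suggests readiness.
open_bugs = [f - x for f, x in zip(bugs_found_per_day, bugs_fixed_per_day)]

print(burn_down)  # [120, 105, 88, 69, 52, 30, 8]
print(find_rate)  # [4, 5, 6, 3, 2, 1]
print(open_bugs)  # [0, 3, 6, 7, 5, 3, 1]
```

Plotted over time, these three series give the burn-down chart, the find-rate trend, and the find/fix gap discussed in this section.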

Exercises and Practice

  • Draw conclusions about measured software quality from a number of charts that describe the progress of an imaginary test effort
  • Discuss strategies for encouraging all involved to share their level of confidence in the software under test

 
