Title: Software Testing and Validation
1. Software Testing and Validation
- MSIT 182 Software Engineering
- Topic 4
2. Testing, Verification and Validation
- Verification: "Are we building the product right?"
  - The software should conform to its specification
- Validation: "Are we building the right product?"
  - The software should do what the user really requires (i.e., assuring that a software system meets a user's needs)
- Testing
  - Establish the presence of defects in systems/programs
3. The Verification and Validation process
- Is a whole life-cycle process: verification and validation must be applied at each stage in the software process
- Has the following principal objectives/goals:
  - The discovery of defects in a system
  - The assessment of whether or not the system is usable in an operational situation
- Verification and validation should establish confidence that the software is fit for purpose
  - This does NOT mean completely free of defects
  - Rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed
4. Testing, verification and validation and the software process
5. Software Quality
- Not excellence
- Extent to which software satisfies its specifications
- Software Quality Assurance (SQA)
  - Goes far beyond verification and validation
  - Managerial independence
    - Development group
    - SQA group
6. Static and dynamic verification
- Software inspections: concerned with analysis of the static system representation to discover problems (static verification)
  - May be supplemented by tool-based document and code analysis
  - Nonexecution-based testing
- Software (or program) testing: concerned with exercising and observing product behaviour (dynamic verification)
  - The system is executed with test data and its operational behaviour is observed
  - Execution-based testing
7. Static verification and dynamic validation
8. Defective Software: Causes of Errors
- Specification may be wrong
- Specification may be a physical impossibility
- Faulty program design
- Program incorrect.
9. Defective Software: Types of Errors
- Algorithmic error
- Computation precision error
- Documentation error
- Capacity error or boundary error
- Timing and coordination error
- Throughput or performance error
- Recovery error
- Hardware and system software error
- Standards and procedures error
10. Program testing
- Can reveal the presence of errors, NOT their absence
- A successful test is a test which discovers one or more errors
- The only validation technique for non-functional requirements
- Should be used in conjunction with static verification to provide full verification and validation coverage
11. Testing and debugging
- Defect testing and debugging are distinct processes
- Verification and validation is concerned with establishing the existence of defects in a program
- Debugging is concerned with locating and repairing these errors
- Debugging involves formulating a hypothesis about program behaviour, then testing these hypotheses to find the system error (a minimal sketch appears below)
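Hypothesis-driven debugging can be illustrated with a small sketch; the `average_of_positives` function and its bug are hypothetical, not from the slides. A defect test exposes a failure, a hypothesis about the cause is formed, and instrumentation confirms it before the repair is made.

```python
# Hypothetical buggy component: should average only the positive numbers.
def average_of_positives(values):
    total = 0
    count = 0
    for v in values:
        if v > 0:
            total += v
        count += 1          # Fault: counts every element, not just the positives
    return total / count

# Step 1: a defect test exposes a failure (observed output differs from expected).
observed = average_of_positives([2, 4, -6])
print("observed:", observed, "expected:", 3.0)   # 2.0 != 3.0 -> failure

# Step 2: hypothesis - the count includes non-positive elements.
# Step 3: test the hypothesis by instrumenting the suspect value.
def average_of_positives_instrumented(values):
    total, count = 0, 0
    for v in values:
        if v > 0:
            total += v
        count += 1
    print("debug: count =", count)   # prints 3, confirming the hypothesis
    return total / count

average_of_positives_instrumented([2, 4, -6])
# Step 4: repair - move the increment inside the condition, then re-run the test.
```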
12. Types of program testing
- Defect testing
  - Tests designed to discover system defects
  - A successful defect test is one which reveals the presence of defects in a system
- Statistical testing
  - Tests designed to reflect the frequency of user inputs; used for reliability estimation (see the sketch below)
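A minimal sketch of statistical testing, assuming a hypothetical operational profile and a hypothetical `handle_request` component (neither appears in the slides): inputs are drawn at random with the frequencies users are expected to produce, and the observed failure rate gives a rough reliability estimate.

```python
import random

# Hypothetical operational profile: input classes weighted by expected user frequency.
OPERATIONAL_PROFILE = [("lookup", 0.70), ("update", 0.25), ("delete", 0.05)]

def handle_request(kind):
    """Hypothetical system under test; returns True on success."""
    return kind != "delete" or random.random() > 0.1   # stand-in behaviour

def statistical_test(runs=10_000):
    kinds = [k for k, _ in OPERATIONAL_PROFILE]
    weights = [w for _, w in OPERATIONAL_PROFILE]
    failures = 0
    for _ in range(runs):
        kind = random.choices(kinds, weights=weights)[0]  # sample per the profile
        if not handle_request(kind):
            failures += 1
    # Reliability estimate: proportion of demands that did not fail.
    return 1 - failures / runs

print("estimated reliability:", statistical_test())
```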
13. The debugging process
14. Nonexecution-based Testing
- Underlying principles
  - We should not review our own work
  - Group synergy
- Walkthroughs and Inspections
  - 4-6 members, chaired by SQA
  - Preparation: lists of items
  - Inspection
    - Up to 2 hours
    - Detect, don't correct
  - Document-driven, not participant-driven
  - Verbalization leads to fault finding
  - Performance appraisal
15. Execution-Based Testing
- Definitions (illustrated in the sketch below)
  - Failure (incorrect behavior)
  - Fault (NOT "bug")
  - Error (mistake made by programmer)
- Nonsensical statement
  - "Testing is demonstration that faults are not present"
- The process of inferring certain behavioral properties of the product based, in part, on the results of executing the product in a known environment with selected inputs
  - Inference
  - Known environment
  - Selected inputs
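The three definitions can be tied together with a small illustration (the function and its specification are hypothetical): a programmer's mistake (error) leaves an incorrect statement in the code (fault), and executing the product in a known environment with a selected input produces observable incorrect behaviour (failure).

```python
# Specification (hypothetical): is_adult(age) returns True when age >= 18.

def is_adult(age):
    # Error: the programmer misread the specification as "over 18".
    # Fault: the incorrect condition that the error left in the code.
    return age > 18

# Execution-based testing: run the product in a known environment
# (plain CPython, no external state) with a selected input.
selected_input = 18
result = is_adult(selected_input)

# Failure: the observed behaviour contradicts the specification.
print("is_adult(18) =", result, "- expected True per the specification")
```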
16. What should be tested?
- Utility
  - Does it meet the user's needs?
  - Ease of use
  - Useful functions
  - Cost-effectiveness
- Reliability (see the sketch below)
  - Frequency and criticality of failure
  - Mean time between failures (MTBF)
  - Mean time to repair (MTTR)
  - Mean time and cost to repair the results of failure
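As an illustration of these reliability measures, the sketch below computes MTBF and MTTR from hypothetical failure logs; the availability formula MTBF / (MTBF + MTTR) is a standard derived measure and an addition here, not taken from the slides.

```python
# Hypothetical observation data (hours): how long the system ran before each
# failure, and how long each repair took.
uptimes_before_failure = [120.0, 95.5, 200.0, 60.0]
repair_durations       = [2.0, 4.5, 1.5, 3.0]

mtbf = sum(uptimes_before_failure) / len(uptimes_before_failure)  # mean time between failures
mttr = sum(repair_durations) / len(repair_durations)              # mean time to repair

# Standard derived measure (assumption, not from the slides):
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.2f} h, availability = {availability:.3f}")
```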
17. What should be tested? (cont'd)
- Robustness
  - Range of operating conditions
  - Possibility of unacceptable results with valid input
  - Effect of invalid input
- Performance
  - Extent to which space and time constraints are met
  - Real-time software
18. The testing process
- Component testing
  - Testing of individual program components
  - Usually the responsibility of the component developer (except sometimes for critical systems)
  - Tests are derived from the developer's experience
- Integration testing
  - Testing of groups of components integrated to create a system or sub-system
  - The responsibility of an independent testing team
  - Tests are based on the system specification
19. Testing phases
20. Defect testing
- The goal of defect testing is to discover defects in programs
- A successful defect test is a test which causes a program to behave in an anomalous way
- Tests show the presence, not the absence, of defects
21. Testing priorities
- Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible
- Tests should exercise a system's capabilities rather than its components
- Testing old capabilities is more important than testing new capabilities
- Testing typical situations is more important than testing boundary value cases
22. Test data and test cases
- Test data: inputs which have been devised to test the system
- Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification (see the sketch below)
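A minimal sketch of the distinction, using a hypothetical `shipping_cost` function and an assumed specification (orders of 100 or more ship free, otherwise the cost is 5): the test data are the inputs alone, while each test case pairs an input with the output predicted from the specification.

```python
def shipping_cost(order_total):
    """Hypothetical unit under test: free shipping at or above 100, else 5."""
    return 0 if order_total >= 100 else 5

# Test data: inputs devised to exercise the system.
test_data = [0, 50, 99, 100, 250]

# Test cases: the same inputs paired with the outputs predicted by the specification.
test_cases = [(0, 5), (50, 5), (99, 5), (100, 0), (250, 0)]

for order_total, expected in test_cases:
    actual = shipping_cost(order_total)
    assert actual == expected, f"input {order_total}: expected {expected}, got {actual}"
print("all test cases passed")
```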
23. The defect testing process
24. Black-box testing
- An approach to testing where the program is considered as a black box
- The program test cases are based on the system specification (see the sketch below)
- Test planning can begin early in the software process
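A small black-box sketch, assuming a hypothetical specification for `classify_triangle(a, b, c)` that returns "equilateral", "isosceles", "scalene", or "invalid": the test cases below are derived purely from that specification, without looking at the implementation, so they could be planned before the code exists.

```python
import unittest

def classify_triangle(a, b, c):
    """Hypothetical implementation; the tests treat it as a black box."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class BlackBoxTriangleTests(unittest.TestCase):
    # One test case per outcome named in the specification.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(5, 5, 8), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_invalid(self):
        self.assertEqual(classify_triangle(1, 2, 10), "invalid")

if __name__ == "__main__":
    unittest.main()
```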
25. Black-box testing
26. Structural testing
- Sometimes called white-box testing
- Derivation of test cases according to program structure; knowledge of the program is used to identify additional test cases
- Objective is to exercise all program statements, not all path combinations (see the sketch below)
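A white-box sketch under the same hedging (the `apply_discount` function is hypothetical): reading the code reveals two independent branches, so inputs are chosen specifically to make every statement execute at least once, without trying every path combination.

```python
def apply_discount(price, is_loyal_customer):
    """Hypothetical unit under test."""
    discount = 0.0
    if price > 100:
        discount = 0.10          # statement only reached when price > 100
    if is_loyal_customer:
        discount += 0.05         # statement only reached for loyal customers
    return round(price * (1 - discount), 2)

# Knowledge of the structure shows two branches; together these inputs execute
# every statement (statement coverage) without enumerating all path combinations.
assert apply_discount(50, False) == 50.0     # both branches false
assert apply_discount(200, False) == 180.0   # first branch true
assert apply_discount(50, True) == 47.5      # second branch true
print("all statements exercised")
```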
27. White-box testing
28. Integration testing
- Tests complete systems or subsystems composed of integrated components
- Integration testing should be black-box testing with tests derived from the specification
- Main difficulty is localising errors
- Incremental integration testing reduces this problem
29. Incremental integration testing
30. Approaches to integration testing
- Top-down testing
  - Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate (a stub sketch appears below)
- Bottom-up testing
  - Integrate individual components in levels until the complete system is created
- In practice, most integration involves a combination of these strategies
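A minimal top-down sketch with a stub (all names are hypothetical): the high-level `generate_invoice` component is integrated and tested first, while the not-yet-integrated tax component is replaced by a stub that returns canned data.

```python
# Stub standing in for a lower-level component that is not yet integrated.
def tax_rate_stub(country):
    """Returns a canned value so the higher-level component can be tested."""
    return 0.20

# High-level component under top-down integration test; the real tax component
# will replace the stub in a later increment.
def generate_invoice(net_amount, country, tax_rate_lookup=tax_rate_stub):
    tax = net_amount * tax_rate_lookup(country)
    return {"net": net_amount, "tax": tax, "total": net_amount + tax}

invoice = generate_invoice(100.0, "PH")
assert invoice["total"] == 120.0
print(invoice)
```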
31. Top-down testing
32. Bottom-up testing
33. Testing approaches
- Architectural validation
  - Top-down integration testing is better at discovering errors in the system architecture
- System demonstration
  - Top-down integration testing allows a limited demonstration at an early stage in the development
- Test implementation
  - Often easier with bottom-up integration testing
- Test observation
  - Problems with both approaches; extra code may be required to observe tests
34. Interface testing
- Takes place when modules or sub-systems are integrated to create larger systems
- Objectives are to detect faults due to interface errors or invalid assumptions about interfaces (see the sketch below)
- Particularly important for object-oriented development, as objects are defined by their interfaces
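A small interface-testing sketch (both components are hypothetical): the higher-level report component assumes the statistics component copes with an empty data set. That assumption about the interface is easy to get wrong, so the interface test probes exactly that case when the two are integrated.

```python
def mean(values):
    """Lower-level component; its interface accepts a sequence of numbers."""
    return sum(values) / len(values) if values else 0.0

def summarize(readings):
    """Higher-level component; assumes mean() tolerates an empty sequence."""
    return {"count": len(readings), "mean": mean(readings)}

# Interface tests: exercise the assumptions the caller makes about the callee,
# not the internal logic of either component.
assert summarize([10, 20, 30]) == {"count": 3, "mean": 20.0}
assert summarize([]) == {"count": 0, "mean": 0.0}   # invalid-assumption probe
print("interface tests passed")
```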
35. Stress testing
- Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light
- Stressing the system tests failure behaviour: systems should not fail catastrophically, and stress testing checks for unacceptable loss of service or data (see the sketch below)
- Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded
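A minimal stress-testing sketch, assuming a hypothetical `RequestQueue` with a stated design load of 100 queued requests: the test pushes well past that limit from many threads and checks that the component degrades gracefully, rejecting the excess rather than losing or corrupting the accepted requests.

```python
import threading

class RequestQueue:
    """Hypothetical component with a design load of max_size queued requests."""
    def __init__(self, max_size=100):
        self.max_size = max_size
        self._items = []
        self._lock = threading.Lock()

    def submit(self, request):
        with self._lock:
            if len(self._items) >= self.max_size:
                return False          # graceful rejection, not a crash
            self._items.append(request)
            return True

def stress_test(queue, total_requests=1000, workers=20):
    accepted = []
    lock = threading.Lock()

    def worker(start, count):
        for i in range(start, start + count):
            if queue.submit(i):
                with lock:
                    accepted.append(i)

    per_worker = total_requests // workers
    threads = [threading.Thread(target=worker, args=(w * per_worker, per_worker))
               for w in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Acceptable degradation: excess load is rejected, accepted work is intact.
    assert len(queue._items) <= queue.max_size
    assert sorted(queue._items) == sorted(accepted)   # no loss or corruption
    print(f"accepted {len(accepted)} of {total_requests} requests under overload")

stress_test(RequestQueue())
```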
36. When Can Testing and Validation Stop?
- Only when the product has been irrevocably retired
- Because it is a whole life-cycle process, testing and validation must be applied at each stage in the software process, including the maintenance/evolution phase