Title: Software Engineering 9. Testing the software
1 Software Engineering 9. Testing the software
- Leszek J Chmielewski
- Faculty of Applied Informatics and Mathematics
(WZIM) - Warsaw University of Life Sciences (SGGW)
- lchmiel.pl
2 Bibliography and source
- Ian Sommerville. Software Engineering. 6th and 7th Edition: Chapter 20; 8th Edition: Chapter 23 (Software testing)
- Slides prepared by Ian Sommerville are directly used
- Source: http://www.cs.st-andrews.ac.uk/ifs/Books/SE8/
3 Defect testing
- Testing programs to establish the presence of system defects
4 Objectives
- To understand testing techniques that are geared to discover program faults
- To introduce guidelines for interface testing
- To understand specific approaches to object-oriented testing
- To understand the principles of CASE tool support for testing
5 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
6 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
7 The testing process
[Figure: testing phases aligned with development phases. The requirements specification drives the acceptance test plan, the system specification the system integration test plan, the system design the sub-system integration test plan, and the detailed design the tests of modules and units. Component testing is done by the component developer; integration testing by an independent testing team; the final stages are sub-system integration test, system integration test, acceptance test and service.]
- Formal methods for critical systems (Cleanroom)
- Component testing
- Testing of individual program components
- Usually the responsibility of the component developer (except sometimes for critical systems)
- Tests derived from the developer's experience
- Integration testing
- Testing of groups of components integrated to create a system or sub-system
- The responsibility of an independent testing team
- Tests based on a system specification
8 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
9 Defect testing
- The goal of defect testing is to discover defects in programs
- A successful defect test is a test which causes a program to behave in an anomalous way
- Tests show the presence, not the absence, of defects
10 Testing priorities
- Only exhaustive testing can show a program is free from defects; however, exhaustive testing is impossible
- Tests should exercise a system's capabilities rather than its components
- Testing old capabilities is more important than testing new capabilities
- Testing typical situations is more important than boundary value cases
11 Test data and test cases
- Test data: inputs which have been devised to test the system
- Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification
- Test data can be prepared automatically; test cases cannot
12 The defect testing process
[Figure: design the test cases → prepare the test data → run the program on the data → compare the results with the test cases; the artefacts produced are test cases, test data, test results and a report from the tests.]
- Only a subset of the admissible test cases can be tested
- A strategy is needed to choose it
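The four stages of the process above can be sketched as a small test harness. This is a minimal sketch in Python; the routine under test (`largest`) and its test cases are hypothetical, invented only to illustrate the flow.

```python
# A minimal sketch of the defect testing process.
# `largest` is a hypothetical routine under test.

def largest(values):
    """Return the largest element of a non-empty list."""
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v
    return result

# 1. Design the test cases: pairs of (input, predicted output).
test_cases = [([3], 3), ([1, 2, 3], 3), ([3, 2, 1], 3), ([-5, -2, -9], -2)]

# 2. Prepare the test data, 3. run the program on it, and
# 4. compare the results with the predictions from the test cases.
for data, expected in test_cases:
    actual = largest(data)
    status = "PASS" if actual == expected else "FAIL"
    print(data, "->", actual, status)
```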
13 Strategy for choosing the test cases
- Example:
- Test all the functions from the menu
- Test all the combinations of functions accessible from the same menu (imagine a text editor)
- Test all the functions in which the user inputs data
- Correct data
- Incorrect data
- (In this way, the rare combinations of functionalities cannot be accessed)
14 Black-box testing
- An approach to testing where the program is considered as a black box
- The program test cases are based on the system specification
- The system functionality is considered, not the implementation
- Test planning can begin early in the software process
15 Black-box testing
[Figure: input test data, including input data causing anomalous behaviour, is fed to the system; the resulting test results include output data making it possible to detect the defects.]
16 Equivalence partitioning
- Input data and output results often fall into different classes where all members of a class are related
- Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member
- Test cases should be chosen from each partition (equivalence set or class)
17 Example
- Partition system inputs and outputs into equivalence sets
- If input is a 5-digit integer between 10,000 and 99,999, equivalence partitions are <10,000, 10,000-99,999 and >99,999
- Choose test cases at the boundaries of these sets
- 00000, 09999, 10000, 99999, 100000
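The integer-range example above can be turned into executable boundary-value checks. In the sketch below, `accepts` is a hypothetical stand-in for the program's input validation; only the partitions and boundary values come from the slide.

```python
# Boundary-value tests for the slide's example: a program accepting
# a 5-digit integer between 10,000 and 99,999.
# `accepts` is a hypothetical stand-in for the real input check.

def accepts(n):
    return 10000 <= n <= 99999

# Three equivalence partitions: below, inside and above the valid range.
# Test cases sit at, or just beyond, the partition boundaries.
boundary_cases = {
    0: False,       # well below the range
    9999: False,    # just below the lower boundary
    10000: True,    # lower boundary
    99999: True,    # upper boundary
    100000: False,  # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, value
print("all boundary cases pass")
```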
18 Equivalence partitions
19 Search routine - input partitions
- Inputs which conform to the pre-conditions
- Inputs where a pre-condition does not hold
- Inputs where the key element is a member of the array
- Inputs where the key element is not a member of the array
20 Testing guidelines for sequences
- Test software with sequences which have only a single value
- Use sequences of different sizes in different tests
- Derive tests so that the first, middle and last elements of the sequence are accessed
- Test with sequences of zero length
- Test with sequences of even and odd length
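The five guidelines above can be applied directly to a search routine. The `linear_search` below is a stand-in written for illustration, not the routine from the book.

```python
# Applying the sequence-testing guidelines to a simple search routine.
# `linear_search` returns the index of `key` in `seq`, or -1 if absent.

def linear_search(seq, key):
    for i, item in enumerate(seq):
        if item == key:
            return i
    return -1

# Zero-length sequence.
assert linear_search([], 7) == -1
# Single-value sequence.
assert linear_search([7], 7) == 0
# First, middle and last elements are all accessed.
assert linear_search([3, 1, 4, 1, 5], 3) == 0   # first
assert linear_search([3, 1, 4, 1, 5], 4) == 2   # middle
assert linear_search([3, 1, 4, 1, 5], 5) == 4   # last
# Different sizes; even and odd lengths.
assert linear_search([2, 4], 4) == 1
assert linear_search([2, 4, 6], 9) == -1
print("sequence guideline tests pass")
```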
21 Search routine - input partitions
22 Structural testing
- Sometimes called white-box testing
- Derivation of test cases according to program structure
- Knowledge of the program is used to identify additional test cases
- All the conditions in the program structure are known
- Objective is to exercise all program statements - not all path combinations!
23 Path testing
- The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once
- The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control
- Statements with conditions are therefore nodes in the flow graph
24 Path testing
- Example paths (node numbers refer to the flow graph on this slide):
- 1, 2, 3, 8, 9
- 1, 2, 3, 4, 6, 7, 2
- 1, 2, 3, 4, 5, 7, 2
- 1, 2, 3, 4, 6, 7, 2, 8, 9
- Not 1, 2, 8, 9
[Flow graph with nodes 1-9.]
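A routine with the control shape behind such a flow graph (one loop containing nested decisions) is sketched below; a binary search has this structure. Any mapping of statements to the slide's node numbers is approximate, and the test cases are illustrations of driving execution down different paths.

```python
# A routine whose flow graph has one loop with nested decisions,
# giving several distinct paths. The mapping to the slide's nodes
# 1-9 is only approximate.

def binary_search(seq, key):
    low, high = 0, len(seq) - 1
    found = -1
    while low <= high and found < 0:   # loop condition: paths return here
        mid = (low + high) // 2
        if seq[mid] == key:            # three ways through the loop body
            found = mid
        elif seq[mid] < key:
            low = mid + 1              # search the upper half
        else:
            high = mid - 1             # search the lower half
    return found

# One test case per class of path through the routine:
assert binary_search([1, 3, 5, 7], 5) == 2   # key found after narrowing right
assert binary_search([1, 3, 5, 7], 1) == 0   # key found after narrowing left
assert binary_search([1, 3, 5, 7], 0) == -1  # key absent: loop exhausts
assert binary_search([], 9) == -1            # loop body never entered
print("path-driving test cases pass")
```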
25 Program flow graphs
- Describes the program control flow. Each branch is shown as a separate path, and loops are shown by arrows looping back to the loop condition node
- Used as a basis for computing the cyclomatic complexity
26 Cyclomatic complexity
- Number of independent paths?
- O(n)? O(n²)? O(n³)?
- Cyclomatic complexity of the graph:
- CC(G) = N_arcs(G) - N_nodes(G) + 2
- 11 - 9 + 2 = 4 for the graph of slide 24
- If there are no jumps in the program: CC(P) = N_conditions(P) + 1
- Test cases for each path
- Dynamic program analyzer - Profiler
[Flow graph with nodes 1-9, as on slide 24.]
27 Cyclomatic complexity
- The number of tests to test all control statements equals the cyclomatic complexity
- Cyclomatic complexity equals the number of conditions in a program + 1
- Useful if used with care; does not imply adequacy of testing
- Although all paths are executed, all combinations of paths are not executed
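Both formulas from the slides above can be checked numerically. In this sketch, the condition count of 3 for the slide-24 graph (one loop condition plus two if-conditions) is an assumption consistent with the computed value of 4.

```python
# The two cyclomatic complexity formulas from the slides:
# CC(G) = N_arcs(G) - N_nodes(G) + 2 for a flow graph G, and
# CC(P) = N_conditions(P) + 1 for a program P without jumps.

def cc_from_graph(n_arcs, n_nodes):
    return n_arcs - n_nodes + 2

def cc_from_conditions(n_conditions):
    return n_conditions + 1

# The flow graph of slide 24 has 11 arcs and 9 nodes:
print(cc_from_graph(11, 9))        # -> 4
# The same value follows from 3 conditions (a loop plus two ifs):
print(cc_from_conditions(3))       # -> 4
```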
28 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
29 Integration testing
- Tests complete systems or subsystems composed of integrated components
- Integration testing should be black-box testing with tests derived from the specification
- Main difficulty is localising errors
- Incremental integration testing reduces this problem
30 Incremental integration testing
31 Approaches to integration testing
- Top-down testing
- Start with the high-level system and integrate from the top down, replacing individual components by stubs where appropriate
- Bottom-up testing
- Integrate individual components in levels until the complete system is created
- In practice, most integration involves a combination of these strategies
32 Bottom-up and top-down testing
- Bottom-up: low level first, then the upper level
- Top-down: system structure (upper level) first, then the lower levels
[Figure: in top-down testing, an N-level component is tested with N+1-level stubs standing in for its children; in bottom-up testing, a test driver exercises the N+1-level components.]
33 Testing approaches: pros and cons
- Architectural validation
- Top-down integration testing is better at discovering errors in the system architecture
- System demonstration
- Top-down integration testing allows a limited demonstration at an early development stage
- Test implementation
- Often easier with bottom-up integration testing
- Test observation
- Problems with both approaches: extra code may be required to observe tests
34 Interface testing
- Takes place when modules or sub-systems are integrated to create larger systems
- Objectives are to detect faults due to interface errors or invalid assumptions about interfaces
- Particularly important for object-oriented development, as objects are defined by their interfaces
35 Interface types
- Parameter interfaces
- Data passed from one procedure to another
- Shared memory interfaces
- Block of memory is shared between procedures
- Procedural interfaces
- Sub-system encapsulates a set of procedures to be called by other sub-systems
- Message passing interfaces
- Sub-systems request services from other sub-systems
36 Interface errors
- Interface misuse
- A calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order
- Interface misunderstanding
- A calling component embeds assumptions about the behaviour of the called component which are incorrect
- Timing errors
- The called and the calling component operate at different speeds and out-of-date information is accessed
37 Interface testing guidelines
- Design tests so that parameters to a called procedure are at the extreme ends of their ranges
- Always test pointer parameters with null pointers
- Design tests which cause the component to fail
- Use stress testing in message passing systems
- In shared memory systems, vary the order in which components are activated
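The first two guidelines can be illustrated on a hypothetical parameter interface. The `resize` function below is invented for illustration; in Python, the "null pointer" case corresponds to passing None.

```python
# Applying the interface testing guidelines to a hypothetical
# parameter interface: resize(buffer, new_size).

def resize(buffer, new_size):
    """Return `buffer` truncated or zero-padded to `new_size` elements."""
    if buffer is None:
        raise ValueError("buffer must not be None")
    if not 0 <= new_size <= 10**6:
        raise ValueError("new_size out of range")
    return buffer[:new_size] + [0] * (new_size - len(buffer))

# Parameters at the extreme ends of their ranges:
assert resize([1, 2, 3], 0) == []
assert len(resize([], 10**6)) == 10**6

# A "null pointer" parameter must be rejected, not silently accepted:
try:
    resize(None, 4)
except ValueError:
    print("None buffer correctly rejected")
```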
38 Stress testing
- Exercises the system beyond its maximum design load; stressing the system often causes defects to come to light
- Stressing the system tests its failure behaviour
- Stress testing checks for unacceptable loss of service or data
- Systems should not fail catastrophically; they should rather degrade gracefully
- Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded
39 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
40 Object-oriented testing
- The components to be tested are object classes that are instantiated as objects
- Larger grain than individual functions, so approaches to white-box testing have to be extended
- No obvious top to the system for top-down integration and testing
41 Testing levels
- Testing operations associated with objects
- Testing object classes
- Testing clusters of cooperating objects
- Testing the complete OO system
42 Object class testing
- Complete test coverage of a class involves:
- Testing all operations associated with an object
- Setting and interrogating all object attributes
- Exercising the object in all possible states
- Inheritance makes it more difficult to design object class tests, as the information to be tested is not localised
43 Weather station object interface
- Test cases are needed for all operations
- Use a state model to identify state transitions for testing
- Examples of testing sequences:
- Shutdown → Waiting → Shutdown
- Waiting → Calibrating → Testing → Transmitting → Waiting
- Waiting → Collecting → Waiting → Summarising → Transmitting → Waiting
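The testing sequences above can be replayed against a state model. In the sketch below, the allowed transitions are read off these sequences only; the WeatherStation class itself is a hypothetical stand-in, not the book's interface.

```python
# State-model-based testing of the weather station (sketch).
# Transitions are derived from the slide's testing sequences.

class WeatherStation:
    TRANSITIONS = {
        "Shutdown": {"Waiting"},
        "Waiting": {"Shutdown", "Calibrating", "Collecting", "Summarising"},
        "Calibrating": {"Testing"},
        "Testing": {"Transmitting"},
        "Transmitting": {"Waiting"},
        "Collecting": {"Waiting"},
        "Summarising": {"Transmitting"},
    }

    def __init__(self, initial="Shutdown"):
        self.state = initial

    def go(self, new_state):
        if new_state not in self.TRANSITIONS[self.state]:
            raise RuntimeError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

def run_sequence(states):
    """Replay a testing sequence; raises if any transition is illegal."""
    ws = WeatherStation(states[0])
    for s in states[1:]:
        ws.go(s)
    return ws.state

# The three testing sequences from the slide all replay cleanly:
for seq in [
    ["Shutdown", "Waiting", "Shutdown"],
    ["Waiting", "Calibrating", "Testing", "Transmitting", "Waiting"],
    ["Waiting", "Collecting", "Waiting", "Summarising", "Transmitting", "Waiting"],
]:
    run_sequence(seq)
print("all state sequences legal")
```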
44 Object integration
- Levels of integration are less distinct in object-oriented systems
- Cluster testing is concerned with integrating and testing clusters of cooperating objects
- Identify clusters using knowledge of the operation of objects and the system features that are implemented by these clusters
45 Approaches to cluster testing
- Use-case or scenario testing
- Testing is based on user interactions with the system
- Has the advantage that it tests system features as experienced by users
- Thread testing
- Tests the system's response to events as processing threads through the system
- Object interaction testing
- Tests sequences of object interactions that stop when an object operation does not call on services from another object
46 Scenario-based testing
- Identify scenarios from use-cases and supplement these with interaction diagrams that show the objects involved in the scenario
- Consider the scenario in the weather station system where a report is generated
47 Collect weather data
48 Weather station testing
- Thread of methods executed:
- CommsController:request → WeatherStation:report → WeatherData:summarise
- Inputs and outputs
- Input of report request with associated acknowledge and a final output of a report
- Can be tested by creating raw data and ensuring that it is summarised properly
- Use the same raw data to test the WeatherData object
49 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
50 Testing workbenches
- Testing is an expensive process phase
- Testing workbenches provide a range of tools to reduce the time required and total testing costs
- Most testing workbenches are open systems, because testing needs are organisation-specific
- Difficult to integrate with closed design and analysis workbenches
51 A testing workbench
[Figure: a test data generator derives test data from the specification, and an oracle provides the expected results. A test manager runs the program being tested (built from the source code, with an environment simulator) on the test data; a dynamic analyser (profiler) produces an execution report. A file comparator checks the test results against the expected results, and a report generator produces the test results report.]
52 A testing workbench
(Diagram repeated from slide 51.)
53 Testing workbench adaptation
- Scripts may be developed for user interface simulators, and patterns for test data generators
- Test outputs may have to be prepared manually for comparison
- Special-purpose file comparators may be developed
54 Overview
- Introduction
- Defect testing
- Integration testing
- Object-oriented testing
- Testing workbenches
- Summary: Key points
55 Key points
- Test parts of a system which are commonly used rather than those which are rarely executed
- Equivalence partitions are sets of test cases where the program should behave in an equivalent way
- Black-box testing is based on the system specification
- Structural (white-box) testing identifies test cases which cause all paths through the program to be executed
56 Key points
- Test coverage measures ensure that all statements have been executed at least once
- Interface defects arise because of specification misreading, misunderstanding, errors or invalid timing assumptions
- To test object classes, test all operations, attributes and states
- Integrate object-oriented systems around clusters of objects