Title: Software Testing Techniques
Lecture 9: Software Testing Techniques

Software Testing
Testing is the process of exercising a program
with the specific intent of finding (and
removing) errors prior to delivery to the end
user.
What Testing Shows
Who Tests the Software?
Testing Principles
- All tests should be traceable to customer requirements
- Tests should be planned long before testing begins
- The Pareto (80/20) principle applies to software testing
- Testing should begin "in the small" and progress toward testing "in the large"
- Exhaustive testing is not possible
- Testing should be conducted by an independent third party
Testability
- Operability: the software operates cleanly
- Observability: the results of each test case are readily observed
- Controllability: the degree to which testing can be automated and optimized
- Decomposability: testing can be targeted
- Simplicity: reduce complex architecture and logic to simplify tests
- Stability: few changes are requested during testing
- Understandability: the design is well understood
What is a good test?
- Has a high probability of finding an error: develop a classification of possible errors and design a test for each class
- Not redundant: no two tests intend to uncover the same error
- Representative: has the highest likelihood of uncovering an error
- Has the right level of complexity: neither too simple nor too complex
Exhaustive Testing
Selective Testing
Software Testing: Methods and Strategies
- White-box methods: test internal operations, i.e. procedural details
- Black-box methods: test functions (at the software interface)
Test Case Design
White-Box Testing
- Paths, decisions, loops, internal data
Why Cover?
Flow Graph
(Figure: a flowchart and its corresponding flow graph)
Flow Graph (cont.)
(Figure: a flow graph with a predicate node highlighted)
Basis Path Testing
Basis path set:
- path 1: 1-11
- path 2: 1-2-3-4-5-10-1-11
- path 3: 1-2-3-6-8-9-10-1-11
- path 4: 1-2-3-6-7-9-10-1-11
Cyclomatic Complexity
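For reference (standard definitions, not spelled out on this slide): V(G) = E - N + 2, where E is the number of flow-graph edges and N the number of nodes; equivalently, V(G) = P + 1, where P is the number of predicate nodes. V(G) is the number of linearly independent paths in the basis set.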
Derive Test Cases
- Draw a flow graph based on the design or code
- Determine the cyclomatic complexity (a small sketch of this step follows the list)
- Determine a basis set of linearly independent paths
- Prepare test cases that force execution of each path in the basis set
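As an illustration of the cyclomatic-complexity step, the sketch below computes V(G) from an edge list; the graph used is a made-up single-decision example, not the one from the lecture.

# Cyclomatic complexity of a flow graph given as a list of directed edges.
# V(G) = E - N + 2, which is also the number of basis paths to cover.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: node 1 branches to 2 or 3, both rejoin at node 4.
example_edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
print(cyclomatic_complexity(example_edges))  # prints 2: two independent paths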
20Example
PROCEDURE average INTERFACE RETURNS average,
total.input, total.valid INTERFACE ACCEPTS
value, minimum, maximum TYPE value1100 IS
SCALAR ARRAY TYPE average, total.input,
total.valid minimum, maximum, sum IS
SCALAR TYPE i IS INTEGER i 1
total.input total.valid 0 sum 0 DO
WHILE valuei ltgt -999 AND total.input lt 100
increment total.input by 1 IF valuei gt
minimum AND valuei lt maximum THEN
increment total.valid by 1 sum
sum valuei ELSE skip ENDIF
increment i by 1 ENDDO IF total.valid gt
0 THEN average sum / total.valid
ELSE average -999 ENDIF END average
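To make the path test cases on the following slides executable, here is a rough Python translation of the PDL above (my own rendering, not part of the lecture):

def average(value, minimum, maximum):
    """Average the input values that fall within [minimum, maximum].

    Processing stops at a -999 sentinel or after 100 input values.
    Returns (average, total_input, total_valid); average is -999 when
    no valid value was seen.
    """
    i = 0                           # Python is 0-indexed; the PDL starts at 1
    total_input = total_valid = 0
    total_sum = 0
    while value[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= value[i] <= maximum:
            total_valid += 1
            total_sum += value[i]
        i += 1
    if total_valid > 0:
        avg = total_sum / total_valid
    else:
        avg = -999
    return avg, total_input, total_valid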
Example (cont.)
- path 1: 1-2-10-11-13
- path 2: 1-2-10-12-13
- path 3: 1-2-3-10-11-13
- path 4: 1-2-3-4-5-8-9-2... (loops back to node 2)
- path 5: 1-2-3-4-5-6-8-9-2... (loops back to node 2)
- path 6: 1-2-3-4-5-6-7-8-9-2... (loops back to node 2)
- Predicate nodes: 2, 3, 5, 6, 10
- V(G) = 6

Path 1 test case:
- value(k) = valid input, where k < i for 2 <= i <= 100
- value(i) = -999, where 2 <= i <= 100
- Expected results: correct average based on k values and proper totals
- Note: cannot be tested stand-alone; must be tested as part of the path 4, 5, and 6 tests

Path 2 test case:
- value(i) = -999
- Expected results: average = -999
Example (cont.)
Path 3 test case:
- Attempt to process 101 or more values
- First 100 values should be valid
- Expected results: same as test case 1

Path 4 test case:
- value(i) = valid input, where i < 100
- value(k) < minimum, where k < i
- Expected results: correct average based on k values and proper totals

Path 5 test case:
- value(i) = valid input, where i < 100
- value(k) > maximum, where k <= i
- Expected results: correct average based on k values and proper totals

Path 6 test case:
- value(i) = valid input, where i < 100
- Expected results: correct average based on k values and proper totals

(Paths 2 and 4 are sketched in code below.)
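As a sketch of how two of these path test cases could be coded against the Python translation of average() given earlier (the concrete values and the minimum/maximum bounds are arbitrary choices of mine):

def test_path_2_sentinel_first():
    # Path 2: the very first value is the -999 sentinel, so nothing is processed.
    avg, total_input, total_valid = average([-999], minimum=0, maximum=100)
    assert avg == -999
    assert total_input == 0 and total_valid == 0

def test_path_4_value_below_minimum():
    # Path 4: one value falls below the minimum; it is counted but not averaged.
    avg, total_input, total_valid = average([10, 20, -5, -999], minimum=0, maximum=100)
    assert avg == 15            # average of the two valid values
    assert total_input == 3     # three values read before the sentinel
    assert total_valid == 2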
Loop Testing
- Simple loops
- Nested loops
- Concatenated loops
- Unstructured loops
Simple Loops
Minimum conditions (a sketch follows this list):
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
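A sketch of how these conditions might be exercised, using a hypothetical buffer loop with a maximum of n = 5 passes (none of this appears in the lecture):

import pytest

N_MAX = 5  # hypothetical maximum number of allowable passes

def fill_buffer(items, capacity=N_MAX):
    """Copies at most `capacity` items; this is the loop under test."""
    buffer = []
    for item in items[:capacity]:
        buffer.append(item)
    return buffer

# 0, 1, 2, m < n, n-1, n, and n+1 passes through the loop.
@pytest.mark.parametrize("passes", [0, 1, 2, 3, N_MAX - 1, N_MAX, N_MAX + 1])
def test_simple_loop(passes):
    result = fill_buffer(list(range(passes)))
    assert len(result) == min(passes, N_MAX)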
Nested Loops
- Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
- Test the min+1, typical, max-1, and max values for the innermost loop, while holding the outer loops at their minimum values. Add other tests for out-of-range or excluded values.
- Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue until the outermost loop has been tested.
Concatenated Loops
- If the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops.

Unstructured Loops
- Redesign unstructured loops if possible.
Black-Box Testing
(Figure: test cases are derived from the requirements and exercise inputs, events, and outputs)
Equivalence Partitioning
- An equivalence class represents a set of valid or invalid states for input conditions
- A single test case can uncover a whole class of errors
- Typical inputs to partition: user queries, function-key (FK) input, mouse clicks, data, prompts, output formats
Sample Equivalence Classes

Valid data:
- User-supplied commands
- Responses to system prompts
- File names
- Computational data: physical parameters, bounding values, initiation values
- Output data formatting
- Responses to error messages
- Graphical data (e.g., mouse clicks)

Invalid data:
- Data outside the bounds of the program
- Physically impossible data
- Proper value supplied in the wrong place

Deriving classes from an input condition:
- Range or specific value: one valid and two invalid equivalence classes
- Member of a set or Boolean: one valid and one invalid equivalence class
Bank ID Example
- Area code: blank or three-digit number
  - Boolean: the code may or may not be present
- Prefix: three-digit number not beginning with 0 or 1 (see the sketch after this list)
  - Range: 200-999, with specific exceptions
- Suffix: four-digit number
  - Value: four digits
- Password: six-digit alphanumeric string
  - Boolean: a password may or may not be present
  - Value: six-character string
- Commands: check, deposit, bill payment, and the like
  - Set: valid commands
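A sketch of equivalence-class test inputs for the prefix field, assuming a hypothetical validate_prefix() helper that is not defined anywhere in the lecture:

def validate_prefix(prefix):
    """Valid prefix: three-digit number not beginning with 0 or 1."""
    return len(prefix) == 3 and prefix.isdigit() and prefix[0] not in "01"

# One valid and two invalid equivalence classes for the prefix condition:
assert validate_prefix("555")        # valid class: 200-999
assert not validate_prefix("120")    # invalid class: begins with 1
assert not validate_prefix("55")     # invalid class: wrong length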
Boundary Value Analysis
- Concentrates on the boundary conditions of both the input domain and the output domain
- Applies to the same kinds of inputs as equivalence partitioning: user queries, FK input, mouse picks, data, prompts, output formats
BVA Guidelines
- Range bounded by values a and b: test with a, b, and values just above and below a and b (see the sketch after this list)
- Number of values: test with the minimum, the maximum, and values just above and below each
- Apply the two guidelines above to output conditions as well
- Test data structures at their boundaries (e.g., an array at its maximum size)
Testing Strategy
- Unit test (components)
- Integration test (architecture)
- Validation test (requirements)
- System test (interfaces with other systems)
Unit Testing (white box)
(Figure: the software engineer applies test cases to the module to be tested and examines the results)
Unit Test Environment
(Figure: a driver supplies test cases to the module under test; the module's calls to lower-level modules are satisfied by stubs (dummies); results are collected)
- Aspects exercised: the interface, local data structures, boundary conditions, independent paths, and error-handling paths (a code sketch follows)
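A tiny sketch of such an environment in code, with a hypothetical module under test, a stub standing in for its collaborator, and a driver that applies test cases (all names are illustrative only):

def fetch_rate(currency):             # stub: dummy stand-in for a real rate service
    return {"EUR": 0.9, "GBP": 0.8}.get(currency, 1.0)

def convert(amount, currency, rate_source=fetch_rate):   # module under test
    if amount < 0:
        raise ValueError("amount must be non-negative")   # error-handling path
    return amount * rate_source(currency)

def driver():                          # driver: applies test cases, reports results
    assert convert(10, "EUR") == 9.0
    assert convert(0, "USD") == 0.0    # boundary condition
    print("all unit tests passed")

driver()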
Integration Testing (white box)
- Options: the "big bang" approach or an incremental construction strategy
Top-Down Integration
(Figure: module hierarchy with A at the top and modules B through G below)
- The top module is tested with stubs
- Stubs are replaced one at a time, depth first
- A breadth-first approach would replace stubs of the same level first
- As new modules are integrated, some subset of tests is re-run
- Disadvantages:
  - Incurs development overhead (stubs must be written)
  - Difficult to determine the real cause of an error
Bottom-Up Integration
(Figure: same module hierarchy; the lowest-level worker modules form clusters/builds)
- Worker modules are grouped into builds (clusters) and integrated
- Drivers are replaced one at a time, "depth first"
- Disadvantage:
  - You don't have a full product until the testing is finished
Sandwich Testing
(Figure: combines both strategies on the same module hierarchy)
- Top modules are tested with stubs (top-down)
- Worker modules are grouped into builds (clusters) and integrated (bottom-up)
Regression Testing
- Retest a corrected module, or retest whenever a new module is added as part of integration testing
- Reuse a subset of the previously run test cases
- Automated capture/playback tools can also be used
- The regression test suite should contain:
  - A representative sample of tests
  - Test cases for functions likely to be affected by the integration
  - Test cases that focus on the changed components
Validation Testing (black box)
- Aims to test conformity with requirements
- A test plan outlines the classes of tests to be conducted
- A test schedule defines the specific test cases to be used
- Configuration review (audit): make sure the correct software configuration has been developed
- User acceptance test: users "test drive" the software
  - Alpha test: conducted at the developer's site
  - Beta test: conducted at the customer's site
System Testing
- Tests the software's interfaces with other systems
- Recovery testing: force errors and verify that the system recovers (measure mean-time-to-repair)
- Security testing: verify that protection mechanisms are correctly implemented
- Stress testing: test with abnormal quantity, frequency, or volume of transactions
- Performance testing: test the run-time performance of the software within the context of an integrated system
The Debugging Process
(Figure: test cases yield results; debugging of suspected causes produces identified causes, corrections, regression tests, and new test cases)
Debugging Effort
- Time required to diagnose the symptom and determine the cause
- Time required to correct the error and conduct regression tests
Symptoms and Causes
- The symptom and the cause may be geographically separated
- The symptom may disappear when another problem is fixed
- The cause may be due to a combination of non-errors
- The cause may be due to a system or compiler error
- The cause may be due to assumptions that everyone believes
- The symptom may be intermittent
Consequences of Bugs
- Damage ranges from mild through annoying, disturbing, serious, extreme, and catastrophic to infectious
- Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
Debugging Techniques
- Brute force / testing
- Backtracking
- Induction
- Deduction
Debugging: Final Thoughts
1. Don't run off half-cocked; think about the symptom you're seeing.
2. Use tools (e.g., a dynamic debugger) to gain more insight.
3. If at an impasse, get help from someone else.
4. Be absolutely sure to conduct regression tests when you do "fix" the bug.
Object-Oriented Testing
- Begins by evaluating the correctness and consistency of the OOA and OOD models
- The testing strategy changes:
  - The concept of the "unit" broadens due to encapsulation
  - Integration focuses on classes and their execution across a thread or in the context of a usage scenario
  - Validation uses conventional black-box methods
- Test case design draws on conventional methods, but also encompasses special features
- Encapsulation and inheritance make OOT difficult
Testing the CRC Model
1. Revisit the CRC model and the object-relationship model.
2. Inspect the description of each CRC index card to determine whether a delegated responsibility is part of the collaborator's definition.
3. Invert the connection to ensure that each collaborator that is asked for a service is receiving requests from a reasonable source.
4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.
5. Determine whether widely requested responsibilities might be combined into a single responsibility.
Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.
OOT Strategy
- Class testing is the equivalent of unit testing:
  - Operations within the class are tested
  - The state behavior of the class is examined
- Integration applies three different strategies:
  - Thread-based testing: integrates the set of classes required to respond to one input or event
  - Use-based testing: integrates the set of classes required to respond to one use case
  - Cluster testing: integrates the set of classes required to demonstrate one collaboration
- Validation testing is driven by use cases
Test Case Design (OO)
- Each test case should be uniquely identified and explicitly associated with the class to be tested
- The purpose of the test should be stated
- A list of testing steps should be developed for each test and should contain (see the sketch after this list):
  - a list of specified states for the object that is to be tested
  - a list of messages and operations that will be exercised as a consequence of the test
  - a list of exceptions that may occur as the object is tested
  - a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)
  - supplementary information that will aid in understanding or implementing the test [BER93]
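One way (my sketch, not from BER93 or the lecture) to capture such a test case as a structured record:

from dataclasses import dataclass, field
from typing import List

@dataclass
class OOTestCase:
    test_id: str                 # unique identifier
    class_under_test: str        # class the test is explicitly associated with
    purpose: str                 # stated purpose of the test
    object_states: List[str] = field(default_factory=list)        # specified states
    messages: List[str] = field(default_factory=list)             # messages/operations exercised
    exceptions: List[str] = field(default_factory=list)           # exceptions that may occur
    external_conditions: List[str] = field(default_factory=list)  # required external conditions
    notes: str = ""              # supplementary information

tc = OOTestCase(
    test_id="ACC-001",
    class_under_test="Account",
    purpose="Withdrawal from an empty account raises an error",
    object_states=["balance == 0"],
    messages=["withdraw(10)"],
    exceptions=["InsufficientFunds"],
)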
Random Testing
- Identify the operations applicable to a class
- Define constraints on their use
- Identify a minimum test sequence: an operation sequence that defines the minimum life history of the class (object)
- Generate a variety of random (but valid) test sequences
- Exercise other (more complex) class instance life histories (see the sketch after this list)
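A sketch of the idea for a hypothetical Account class; the class, its operations, and the constraints on their use are invented for illustration:

import random

class Account:
    def __init__(self):
        self.balance = 0
        self.open = True
    def deposit(self, amount):
        assert self.open and amount > 0
        self.balance += amount
    def withdraw(self, amount):
        assert self.open and 0 < amount <= self.balance
        self.balance -= amount
    def close(self):
        assert self.open and self.balance == 0
        self.open = False

def random_life_history(seed):
    """Minimum life history is open -> close; the middle is random but valid."""
    random.seed(seed)
    acct = Account()
    for _ in range(random.randint(0, 20)):
        if acct.balance > 0 and random.random() < 0.5:
            acct.withdraw(random.randint(1, acct.balance))
        else:
            acct.deposit(random.randint(1, 100))
    if acct.balance > 0:
        acct.withdraw(acct.balance)   # drain so the close() constraint holds
    acct.close()

for seed in range(100):               # a variety of random (but valid) sequences
    random_life_history(seed)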
Partition Testing
- Reduces the number of test cases required to test a class, in much the same way as equivalence partitioning does for conventional software
- State-based partitioning: categorize and test operations based on their ability to change the state of the class (a short sketch follows this list)
- Attribute-based partitioning: categorize and test operations based on the attributes that they use
- Category-based partitioning: categorize and test operations based on the generic function each performs
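For example, state-based partitioning of a small hypothetical account class might separate its operations like this (illustrative only):

class SimpleAccount:
    """Hypothetical class used only to illustrate state-based partitioning."""
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):        # state-changing operation
        self.balance += amount
    def withdraw(self, amount):       # state-changing operation
        self.balance -= amount
    def balance_inquiry(self):        # state-preserving operation
        return self.balance

# Partition 1: a sequence exercising the operations that change state.
acct = SimpleAccount()
acct.deposit(100)
acct.withdraw(40)
assert acct.balance == 60
# Partition 2: a sequence exercising the operations that do not change state.
assert acct.balance_inquiry() == 60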
Inter-Class Testing
- For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other (server) classes.
- For each message that is generated, determine the collaborator class and the corresponding operator in the server object.
- For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.
- For each of these messages, determine the next level of operators that are invoked and incorporate them into the test sequence.