Software Testing Day 1: Preliminaries

1
Software Testing Day 1: Preliminaries
  • Aditya P. Mathur
  • Purdue University
  • August 12-16
  • At Guidant Corporation
  • Minneapolis/St Paul, MN

Last update July 23, 2002
2
Part I: Preliminaries
  • Learning Objectives
  • What is testing? How does it differ from
    verification?
  • How and why does testing improve our confidence
    in program correctness?
  • What is coverage and what role does it play in
    testing?
  • What are the different types of testing?

3
Testing Preliminaries
  • What is testing?
  • The act of checking if a part or a product
    performs as expected.
  • Why test?
  • Gain confidence in the correctness of a part or a
    product.
  • Check if there are any errors in a part or a
    product.

4
What to test?
  • During the software lifecycle, several products
    are generated.
  • Examples
  • Requirements document
  • Design document
  • Software subsystems
  • Software system

5
Test all!
  • Each of these products needs testing.
  • Methods for testing various products are
    different.
  • Examples
  • Test a requirements document using scenario
    construction and simulation
  • Test a design document using simulation.
  • Test a subsystem using functional testing.

6
What is our focus?
  • We focus on testing programs.
  • Programs may be subsystems or complete systems.
  • These are written in a formal programming
    language.
  • There is a large collection of techniques and
    tools to test programs.

7
A few basic terms
  • Program
  • A collection of functions, as in C, or a
    collection of classes, as in Java.
  • Specification
  • Description of requirements for a program. This
    might be formal or informal.

8
A few basic terms (continued)
  • Test case or test input
  • A set of values of input variables of a program.
    Values of environment variables are also
    included.
  • Test set
  • Set of test inputs
  • Program execution
  • Execution of a program on a test input.
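A minimal sketch, in Python, of how these terms might be represented; the names test_case, test_set, and program_under_test are illustrative, not part of the slides.

    # A test case: values of the program's input variables
    # (environment variables could be added as extra fields).
    test_case = {"N": 2, "sequence": [2, 1]}

    # A test set: a set of test inputs (here, a list of three).
    test_set = [
        {"N": 0, "sequence": []},
        {"N": 1, "sequence": [0]},
        {"N": 2, "sequence": [2, 1]},
    ]

    def program_under_test(test_input):
        # A stand-in for the program; it sorts in descending order.
        return sorted(test_input["sequence"], reverse=True)

    # Program execution: running the program on each test input.
    for tc in test_set:
        print(tc, "->", program_under_test(tc))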

9
A few basic terms (continued)
  • Oracle
  • A function that determines whether or not the
    result of executing a program under test is as
    per the program's specification.

10
Correctness
  • Let P be a program (say, an integer sort
    program).
  • Let S denote the specification for P.
  • For the sort program, let S be the specification
    given on the next slide.

11
Sample Specification
  • P takes as input an integer N > 0 and a sequence of
    N integers, called the elements of the sequence.
  • Let K denote any element of this sequence; each K
    is an integer.
  • P sorts the input sequence in descending order
    and prints the sorted sequence.
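A minimal sketch, in Python, of an oracle (slide 9) for this sample specification; the function name and structure are illustrative assumptions, not part of the slides.

    def oracle(input_sequence, observed_output):
        # The output satisfies S if it contains exactly the input
        # elements and they appear in descending order.
        is_descending = observed_output == sorted(observed_output, reverse=True)
        same_elements = sorted(observed_output) == sorted(input_sequence)
        return is_descending and same_elements

    print(oracle([2, 0, 1], [2, 1, 0]))   # True: as per the specification
    print(oracle([2, 0, 1], [0, 1, 2]))   # False: ascending, not descending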

12
Correctness again
  • P is considered correct with respect to a
    specification S if and only if
  • For each valid input the output of P is in
    accordance with the specification S.

13
Errors, defects, faults
  • Error: a mistake made by a programmer.
  • Example: misunderstood the requirements.
  • Defect/fault: manifestation of an error in a
    program.
  • Example
  • Incorrect code: if (a < b) foo(a, b)
  • Correct code: if (a > b) foo(a, b)

14
Failure
  • Incorrect program behavior due to a fault in the
    program.
  • Failure can be determined only with respect to a
    set of requirement specifications.
  • A necessary condition for a failure to occur is
    that execution of the program forces the erroneous
    portion of the program to be executed. What would
    be a sufficient condition?

15
Errors and failure
[Figure: inputs flow into the program and produce outputs; error-revealing inputs cause failures, which show up as erroneous outputs.]
16
Debugging
  • Suppose that a failure is detected during the
    testing of P.
  • The process of finding and removing the cause of
    this failure is known as debugging.
  • The word bug is slang for fault.
  • Testing usually leads to debugging
  • Testing and debugging usually happen in a cycle.

17
Test-debug cycle
[Flowchart: Test; if a failure is found, Debug and then Test again; if no failure, check whether testing is complete; if yes, Done, otherwise Test again.]
18
Testing and code inspection
  • Code inspection is a technique whereby the source
    code is inspected for possible errors.
  • Code inspection is generally considered
    complementary to testing. Neither is more
    important than the other!
  • One is not likely to replace testing by code
    inspection or by verification.

19
Testing for correctness?
  • Identify the input domain of P.
  • Execute P against each element of the input
    domain.
  • For each execution of P, check if P generates
    the correct output as per its specification S.

20
What is an input domain ?
  • Input domain of a program P is the set of all
    valid inputs that P can expect.
  • The size of an input domain is the number of
    elements in it.
  • An input domain could be finite or infinite.
  • Finite input domains might be very large!

21
Identifying the input domain
  • For the sort program
  • N: size of the sequence; K: each element of the
    sequence.
  • Example: for N < 3 and each element K in {0, 1, 2},
    some sequences in the input domain are
  • The empty sequence (N = 0).
  • 0: a sequence of size 1 (N = 1).
  • 2 1: a sequence of size 2 (N = 2).

22
Size of an input domain
  • Suppose again that N < 3 and each element K is in
    {0, 1, 2}.
  • The size of the input domain is the number of all
    sequences of size 0, 1, and 2.
  • The size can be computed as 3^0 + 3^1 + 3^2 = 1 + 3
    + 9 = 13 (see the sketch below).
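A minimal sketch, in Python, that enumerates this example input domain, assuming the bounds above (N < 3, each element in {0, 1, 2}); running the sort on every one of these inputs is what the next slides call exhaustive testing.

    from itertools import product

    K_VALUES = [0, 1, 2]   # possible values of each element
    MAX_N = 2              # N < 3: sequences of length 0, 1, or 2

    # Enumerate every sequence in the input domain.
    input_domain = [list(seq)
                    for n in range(MAX_N + 1)
                    for seq in product(K_VALUES, repeat=n)]

    print(len(input_domain))   # 3**0 + 3**1 + 3**2 = 13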

23
Testing for correctness? Sorry!
  • To test for correctness P needs to be executed on
    all inputs.
  • For a realistic sort program (arbitrary N and the
    full range of integers), executing P on all inputs
    would take an astronomically long time, even on the
    most powerful computers of today!

24
Exhaustive Testing
  • This form of testing is also known as exhaustive
    testing as we execute P on all elements of the
    input domain.
  • For most programs exhaustive testing is not
    feasible.
  • What is the alternative?

25
Verification
  • Verification for correctness is different from
    testing for correctness.
  • There are techniques for program verification
    which we will not discuss.

26
Partition Testing
  • In this form of testing the input domain is
    partitioned into a finite number of sub-domains.
  • P is then executed on a few elements of each
    sub-domain.
  • Let us go back to the sort program.

27
Sub-domains
  • Suppose again that N < 3 and each element K is in
    {0, 1, 2}.
  • We can divide the input domain into three
    sub-domains, one per sequence length: N = 0, N = 1,
    and N = 2, with sizes 1, 3, and 9 respectively.

28
Fewer test inputs
  • Now sort can be tested on one element selected
    from each sub-domain.
  • For example, one set of three inputs is
  • The empty sequence, from sub-domain 1.
  • 2: a sequence from sub-domain 2.
  • 2 0: a sequence from sub-domain 3.
  • We have thus reduced the number of inputs used
    for testing from 13 to 3! (See the sketch below.)
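A minimal sketch, in Python, of this partition-testing step; the sort_under_test and oracle helpers are illustrative stand-ins, not code from the slides.

    def sort_under_test(seq):
        # Stand-in for the program P being tested.
        return sorted(seq, reverse=True)

    def oracle(inp, out):
        # True if out is inp arranged in descending order.
        return out == sorted(inp, reverse=True)

    # One representative input from each sub-domain (N = 0, 1, 2).
    representatives = [[], [2], [2, 0]]

    for inp in representatives:
        out = sort_under_test(inp)
        print(inp, "->", out, "pass" if oracle(inp, out) else "FAIL")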

29
Confidence in your program
  • Confidence is a measure of one's belief in the
    correctness of the program.
  • Correctness is not measured in binary terms (a
    correct or an incorrect program).
  • Instead, it is measured as the probability of
    correct operation of a program when used in
    various scenarios.

30
Measures of confidence
  • Reliability: the probability that a program will
    function correctly in a given environment over a
    certain number of executions.
  • We do not plan to cover reliability.
  • Test completeness: the extent to which a program
    has been tested and errors found have been
    removed.

31
Example Increase in Confidence
  • We consider a non-programming example to
    illustrate what is meant by increase in
    confidence.
  • Example: a rectangular field has been prepared to
    certain specifications.
  • One item in the specifications is
  • There should be no stones remaining in the
    field.

32
Rectangular Field
[Figure: a rectangular field of length L and width W; the task is to search for stones inside the rectangle.]
34
Testing the rectangular field
  • The field has been prepared and our task is to
    test it to make sure that it has no stones.
  • How should we organize our search?

35
Partitioning the field
  • We divide the entire field into smaller search
    rectangles.
  • The length and breadth of each search rectangle
    is one half that of the smallest stone.

36
Partitioning into search rectangles
[Figure: the field partitioned into numbered search rectangles along its length and width; a stone overlaps several of the rectangles.]
37
Input domain
  • Input domain is the set of all possible inputs to
    the search process.
  • In our example this is the set of all points in
    the field. Thus, the input domain is infinite!
  • To reduce the size of the input domain we
    partition the field into finite size rectangles.

38
Rectangle size
  • The length and breadth of each search rectangle
    is one half that of the smallest stone.
  • This ensures that each stone covers at least one
    rectangle. (Is this always true?)

39
Constraints
  • Testing must be completed in less than H hours.
  • Any stone found during testing is removed.
  • Upon completion of testing the probability of
    finding a stone must be less than p.

40
Number of search rectangles
  • Let
  • L: length of the field
  • W: width of the field
  • l: length of the smallest stone
  • w: width of the smallest stone
  • Size of each search rectangle: l/2 x w/2
  • Number of search rectangles: R = 4 (L/l) (W/w)
  • Assume that L/l and W/w are integers.

41
Time to test
  • Let t be the time to look inside one search
    rectangle. No rectangle is examined more than
    once.
  • Let o be the overhead in moving from one search
    rectangle to another.
  • Total time to search: T = R t + (R - 1) o
  • Testing with R rectangles is feasible only if
    T < H (see the sketch below).
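A minimal sketch, in Python, of this feasibility check; the field and stone dimensions, t, o, and H below are made-up numbers for illustration only.

    # Assumed values: a 100 m x 50 m field, smallest stone 0.2 m x 0.1 m,
    # 1 second per rectangle, 0.5 s overhead per move, an 8-hour budget.
    L, W = 100.0, 50.0    # field length and width (m)
    l, w = 0.2, 0.1       # length and width of the smallest stone (m)
    t, o = 1.0, 0.5       # time per rectangle, overhead per move (s)
    H = 8 * 3600          # time budget (s)

    R = 4 * (L / l) * (W / w)   # number of search rectangles
    T = R * t + (R - 1) * o     # total time to search all of them

    print(R, T, "feasible" if T < H else "infeasible: T >= H")

With these assumed numbers R is 1,000,000 rectangles and T is roughly 417 hours, far above the 8-hour budget, which is exactly the T > H situation discussed on the next slide.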

42
Partitioning the input domain
  • This set consists of all search rectangles (R).
  • Number of partitions of the input domain is
    finite (R).
  • However, if T > H then the number of partitions
    is too large and scanning each rectangle once is
    infeasible.
  • What should we do in such a situation?

43
Option 1 Do a limited search
  • Of the R search rectangles we examine only r,
    where r is such that t r + o (r - 1) < H.
  • This limited search will satisfy the time
    constraint.
  • Will it satisfy the probability constraint?
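A minimal sketch, in Python, of choosing the largest r that satisfies the time constraint, using the same made-up values of t, o, and H as in the earlier sketch.

    import math

    t, o = 1.0, 0.5   # time per rectangle, overhead per move (s)
    H = 8 * 3600      # time budget (s)

    # t*r + o*(r - 1) < H  is equivalent to  r < (H + o) / (t + o),
    # so take the largest integer strictly below that bound.
    bound = (H + o) / (t + o)
    r = math.ceil(bound) - 1

    print(r, t * r + o * (r - 1) < H)   # 19200 True with these values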

44
Distribution of stones
  • To satisfy the probability constraint we must
    scan enough search rectangles so that the
    probability of finding a stone, after testing,
    remains less than p.
  • Let us assume that
  • a certain number of stones remains after i test
    cycles.

45
Distribution of stones
  • Similarly, a certain number of search rectangles
    remains unexamined after i test cycles.
  • Stones are distributed uniformly over the field.
  • An estimate of the probability of finding a stone
    in a randomly selected remaining search rectangle
    is the number of remaining stones divided by the
    number of remaining rectangles.

46
Probability constraint
  • We will stop looking into rectangles if this
    estimate falls below p (see the sketch below).
  • Can we really apply this test method in practice?
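An idealized sketch, in Python, of this stopping rule. The names s_i (stones remaining), R_i (rectangles not yet scanned), and all the numbers are introduced here for illustration; the sketch assumes the remaining stone count is known, which, as the next slide points out, is not true in practice.

    import random

    random.seed(1)
    R_total = 1000                                     # search rectangles
    stones = set(random.sample(range(R_total), 20))    # 20 hidden stones
    p = 0.005                                          # probability bound

    remaining = list(range(R_total))
    random.shuffle(remaining)
    scanned = 0
    while remaining:
        s_i, R_i = len(stones), len(remaining)
        if s_i / R_i < p:        # stop once the estimate drops below p
            break
        rect = remaining.pop()   # scan one more rectangle ...
        scanned += 1
        stones.discard(rect)     # ... and remove any stone found in it
    print("coverage:", scanned / R_total)

The final line reports the coverage r/R reached when the search stops, the measure introduced on slide 48.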

47
Confidence
  • Number of stones in the field is not known in
    advance.
  • Hence we cannot compute the probability of
    finding a stone after a certain number of
    rectangles have been examined.
  • The best we can do is to scan as many rectangles
    as we can and remove the stones found.

48
Coverage
  • After a rectangle has been scanned for a stone
    and any stone found has been removed, we say that
    the rectangle has been covered.
  • Suppose that r rectangles have been scanned from
    a total of R. Then we say that the coverage is
    r/R.

49
Coverage and confidence
  • What happens when coverage increases?
  • As coverage increases so does our confidence in
    a stone-free field.
  • In this example, when the coverage reaches 100%,
    all stones have been found and removed. Can you
    think of a situation when this might not be true?

50
Option 2 Reduce number of partitions
  • If the number of rectangles to scan is too large,
    we can increase the size of a rectangle. This
    reduces the number of rectangles.
  • Increasing the size of a rectangle also implies
    that there might be more than one stone within a
    rectangle.

51
Rectangle size
  • As a stone may now be smaller than a rectangle,
    detecting a stone inside a rectangle is not
    guaranteed.
  • Despite this fact our confidence in a
    stone-free field increases with coverage.
  • However, when the coverage reaches 100% we cannot
    guarantee a stone-free field.

52
Coverage vs. Confidence
[Plot: confidence (from 0 to 1) increases with coverage (from 0 to 100%); reaching 100% coverage does not imply that the field is stone-free.]
53
Rectangle size
[Plot: p = probability of detecting a stone inside a rectangle, given that the stone is there; t = time to complete a test. The plot shows how t and p vary as the rectangle size ranges from small to large.]
54
Analogy
  • Field: Program
  • Stone: Error
  • Scan a rectangle: Test the program on one input
  • Remove stone: Remove the error
  • Partition: Subset of the input domain
  • Size of stone: Size of an error
  • Rectangle size: Size of a partition

55
Analogy (continued)
  • Size of an error is the number of inputs in the
    input domain each of which will cause a failure
    due to that error.

[Figure: the input domain, showing one region of inputs that cause failure due to Error 1 and a smaller region of inputs that cause failure due to Error 2; Error 1 is larger than Error 2.]
56
Confidence and probability
  • Increase in coverage increases our confidence in
    a stone-free field.
  • It might not increase the probability that the
    field is stone-free.
  • Important Increase in confidence is NOT
    justified if detected stones are not guaranteed
    to be removed!

57
Types of testing
[Diagram: two bases for classifying testing methods: the source of test inputs and the object under test. Any of the input-based methods can be applied to any object under test.]
58
Testing based on source of test inputs
  • Functional testing/specification
    testing/black-box testing/conformance testing
  • Clues for test input generation come from
    requirements.
  • White-box testing/coverage testing/code-based
    testing
  • Clues come from program text.

59
Testing based on source of test inputs
  • Stress testing
  • Clues come from load requirements. For example,
    a telephone system must be able to handle 1000
    calls over any 1-minute interval. What happens
    when the system is loaded or overloaded?

60
Testing based on source of test inputs
  • Performance testing
  • Clues come from performance requirements. For
    example, each call must be processed in less than
    5 seconds. Does the system process each call in
    less than 5 seconds?
  • Fault- or error-based testing
  • Clues come from the faults that are injected into
    the program text or are hypothesized to be in the
    program.

61
Testing based on source of test inputs
  • Random testing
  • Clues come from requirements. Tests are generated
    randomly using these clues.
  • Robustness testing
  • Clues come from requirements. The goal is to test
    a program under scenarios not stipulated in the
    requirements.

62
Testing based on source of test inputs
  • OO testing
  • Clues come from the requirements and the design
    of an OO-program.
  • Protocol testing
  • Clues come from the specification of a protocol,
    for example, when testing a communication
    protocol.

63
Testing based on item under test
  • Unit testing
  • Testing of a program unit. A unit is the
    smallest testable piece of a program. One or more
    units form a subsystem.
  • Subsystem testing
  • Testing of a subsystem. A subsystem is a
    collection of units that cooperate to provide a
    part of system functionality

64
Testing based on item under test
  • Integration testing
  • Testing of subsystems that are being integrated
    to form a larger subsystem or a complete system.
  • System testing
  • Testing of a complete system.

65
Testing based on item under test
  • Regression testing
  • Test a subsystem or a system on a subset of the
    set of existing test inputs to check if it
    continues to function correctly after changes
    have been made to an older version.

And the list goes on and on!
66
Test input construction and objects under test
[Diagram: sources of clues for test inputs (requirements, code) plotted against the test object (unit, subsystem, system).]
67
Summary: Terms
  • Testing and debugging
  • Specification
  • Correctness
  • Input domain
  • Exhaustive testing
  • Confidence

68

Summary: Terms (continued)
  • Reliability
  • Coverage
  • Error, defect, fault, failure
  • Debugging, test-debug cycle
  • Types of testing, basis for classification

69
Summary: Questions
  • What is the effect of reducing the partition size
    on the probability of finding errors?
  • How does coverage affect our confidence in
    program correctness?
  • Does 100% coverage imply that a program is
    fault-free?
  • What decides the type of testing?