1
Software Testing
  • Objectives and principles
  • Techniques
  • Process
  • Object-oriented testing
  • Test workbenches and frameworks

2
Lecture Objectives
  • Understand
  • Software testing objectives and principles
  • Testing techniques: black-box and white-box
  • Testing process: unit and integration
  • Object-oriented testing
  • Test workbenches and frameworks

3
Can We Exhaustively Test Software?
(Flow graph with start point A and end point B)
  • There are 250 billion unique paths between A and
    B.
  • If each set of possible data is used, and a
    single run takes 1 millisecond to execute, it
    would take 8 years to test all paths.

4
Can we test all types of software bugs?
  • Software testing is mainly suitable for dealing
    with faults that consistently manifest themselves
    under well-defined conditions
  • Testers do encounter failures they can't
    reproduce
  • Under seemingly identical conditions, the actions
    that a test case specifies can sometimes, but not
    always, lead to a failure
  • Software engineers sometimes refer to faults with
    this property as Mandelbugs (an allusion to
    Benoit Mandelbrot, a leading researcher in
    fractal geometry)
  • Example: the software fault in the Patriot
    missile defense system responsible for the Scud
    incident in Dhahran
  • To project a target's trajectory, the weapons
    control computer required its velocity and the
    time as real values
  • The system, however, kept time internally as an
    integer, counting tenths of seconds and storing
    them in a 24-bit register
  • The necessary conversion into a real value caused
    imprecision in the calculated range where a
    detected target was expected next
  • For a given velocity of the target, these
    inaccuracies were proportional to the length of
    time the system had been running continuously
    (see the numeric sketch below)
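To make the Patriot example concrete, here is a minimal, hedged sketch (not the actual missile code): it chops 1/10 of a second to a fixed number of binary fraction bits and accumulates the resulting error over a long uptime. Keeping 23 fraction bits is an assumption chosen because it reproduces the widely reported figure of roughly 0.34 seconds of drift after 100 hours of continuous operation.

  // A minimal sketch, assuming the stored constant 1/10 keeps 23 binary
  // fraction bits; the real Patriot register layout is simplified away.
  public class ClockDriftSketch {
      public static void main(String[] args) {
          int fractionBits = 23;                                  // assumed precision
          long scale = 1L << fractionBits;
          double choppedTenth = Math.floor(0.1 * scale) / scale;  // truncated 1/10
          double errorPerTick = 0.1 - choppedTenth;               // ~9.5e-8 s per tick
          long ticksIn100Hours = 100L * 3600 * 10;                // tenths of a second
          System.out.printf("error per tick : %.2e s%n", errorPerTick);
          System.out.printf("drift at 100 h : %.3f s%n", ticksIn100Hours * errorPerTick);
      }
  }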

5
Testing Objectives
  • Software testing can show the presence of bugs,
    but it can never show their absence. Therefore,
  • Testing is the process of exercising a program
    with the specific intent of finding errors prior
    to delivery to the end user.
  • A good test case is one that has a high
    probability of finding an error.
  • A successful test is one that uncovers an error.

6
Testing Principles
  • All tests should be traceable to customer
    requirements
  • Tests should be planned long before testing
    begins
  • The Pareto principle applies to software testing
  • Testing should begin in the small and progress
    toward testing in the large
  • Exhaustive testing is not possible
  • To be most effective, testing should be conducted
    by an independent third party

7
Test Case Design
  • Testing must be planned and performed
    systematically, not ad hoc or random.
  • Testing can be performed in two ways:
  • Knowing the specified function that a product has
    been designed to perform: black-box testing.
  • Knowing the internal workings of the product and
    testing to ensure all parts are exercised
    adequately: white-box testing.

8
Black-box Testing
  • An approach to testing where the program is
    considered as a black-box
  • The program test cases are based on the system
    specification
  • Test planning can begin early in the software
    process

9
Equivalence Partitioning
  • Divide the input domain into classes of data from
    which test cases can be derived.
  • Strives to define test cases that uncover whole
    classes of errors, thereby reducing the total
    number of test cases required.

10
Example
  • Specifications for a DBMS state that the product
    must handle any number of records between 1 and
    16,383 (2^14 - 1)
  • If the system can handle 34 records and 14,870
    records, then it will probably also work fine
    for, say, 8,252 records
  • If the system works for any one test case in the
    range (1..16,383), then it will probably work for
    any other test case in the range
  • The range (1..16,383) constitutes an equivalence
    class
  • Any one member is as good a test case as any
    other member of the class

11
Example
  • The range (1..16,383) defines three different
    equivalence classes (see the sketch below)
  • Equivalence Class 1: fewer than 1 record
  • Equivalence Class 2: between 1 and 16,383 records
  • Equivalence Class 3: more than 16,383 records
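As an illustration only, here is a hedged sketch of a record-count validator whose input domain matches the specification above; the class and method names are invented for this example, and each equivalence class supplies one representative value.

  // Hypothetical validator: the valid range mirrors the DBMS specification.
  public class RecordCountValidator {
      static final int MAX_RECORDS = 16_383;          // 2^14 - 1

      // True iff the record count falls in equivalence class 2 (1..16,383).
      static boolean isValidRecordCount(int records) {
          return records >= 1 && records <= MAX_RECORDS;
      }

      public static void main(String[] args) {
          System.out.println(isValidRecordCount(0));      // class 1: fewer than 1  -> false
          System.out.println(isValidRecordCount(8_252));  // class 2: 1..16,383     -> true
          System.out.println(isValidRecordCount(20_000)); // class 3: above 16,383  -> false
      }
  }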

12
Boundary Value Analysis
  • Technique that leads to selection of test cases
    that exercise bounding values.
  • Selecting test cases on, or just to one side of,
    the boundary of an equivalence class increases
    the probability of detecting a fault.

"Bugs lurk in corners and congregate at
boundaries"
13
DBMS Example
  • Test case 1: 0 records (member of equivalence
    class 1, adjacent to the boundary value)
  • Test case 2: 1 record (boundary value)
  • Test case 3: 2 records (adjacent to the boundary
    value)
  • Test case 4: 723 records (member of equivalence
    class 2)
  • Test case 5: 16,382 records (adjacent to the
    boundary value)
  • Test case 6: 16,383 records (boundary value)
  • Test case 7: 16,384 records (member of
    equivalence class 3, adjacent to the boundary
    value)
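Written as a test, the seven cases above might look like the following hedged sketch; it reuses the hypothetical RecordCountValidator from the slide 11 sketch and JUnit 3 style assertions.

  import junit.framework.TestCase;

  // Assumes the hypothetical RecordCountValidator sketched under slide 11
  // (same package).
  public class RecordCountBoundaryTest extends TestCase {
      public void testBoundariesOfTheValidRange() {
          assertFalse(RecordCountValidator.isValidRecordCount(0));       // test case 1
          assertTrue(RecordCountValidator.isValidRecordCount(1));        // test case 2 (boundary)
          assertTrue(RecordCountValidator.isValidRecordCount(2));        // test case 3
          assertTrue(RecordCountValidator.isValidRecordCount(723));      // test case 4
          assertTrue(RecordCountValidator.isValidRecordCount(16_382));   // test case 5
          assertTrue(RecordCountValidator.isValidRecordCount(16_383));   // test case 6 (boundary)
          assertFalse(RecordCountValidator.isValidRecordCount(16_384));  // test case 7
      }
  }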

14
White-box Testing
  • Test case design method that uses the control
    structure of the procedural design to derive test
    cases.
  • Can derive tests that
  • Guarantee all independent paths have been
    exercised at least once
  • Exercise all logical decisions on their true and
    false sides
  • Execute all loops at their boundaries and within
    operational bounds
  • Exercise internal data structures to ensure
    validity

15
Basis Path Testing
  • Proposed by Tom McCabe.
  • Use cyclomatic complexity measure as guide for
    defining a basis set of execution paths.
  • Test cases derived to exercise the basis set are
    guaranteed to execute every statement at least
    once.
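For reference, the cyclomatic complexity V(G) that sizes the basis set can be computed in any of the standard ways (these definitions are not on the slide but follow McCabe):

  V(G) = E - N + 2, where E is the number of flow graph edges and N the number of nodes
  V(G) = P + 1, where P is the number of predicate (decision) nodes
  V(G) = the number of regions of the flow graph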

16
Independent Paths
  • CC = 5
  • So there are 5 independent paths:
  • a, c, f
  • a, d, c, f
  • a, b, e, f
  • a, b, e, a, ...
  • a, b, e, b, e, ...

17
The Flowgraph
  • Before the cyclomatic complexity can be
    calculated, and the paths determined, the
    flowgraph must be created.
  • Done by translating the source code into
    flowgraph notation

(Flow graph notation for the basic constructs: sequence, if, while, until, case)
18
Example
  PROCEDURE average
  INTERFACE RETURNS average, total.input, total.valid
  INTERFACE ACCEPTS value, minimum, maximum
  TYPE value[1:100] IS SCALAR ARRAY
  TYPE average, total.input, total.valid,
       minimum, maximum, sum IS SCALAR
  TYPE i IS INTEGER
  i = 1
  total.input = total.valid = 0
  sum = 0
  DO WHILE value[i] <> -999 AND total.input < 100
      increment total.input by 1
      IF value[i] >= minimum AND value[i] <= maximum
          THEN increment total.valid by 1

(Flow graph node numbers 1-13 annotate the statements above.)
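For readers who want to run the example, here is a hedged Java sketch of the procedure; the PDL above is truncated on the slide, so the summing and averaging steps at the end are filled in as the obvious intent rather than taken from the slide.

  // A sketch of the (truncated) PDL above; treat the final averaging step
  // as an assumption about the procedure's intent.
  public class Average {
      static final double SENTINEL = -999;

      static double average(double[] value, double minimum, double maximum) {
          int totalInput = 0, totalValid = 0;
          double sum = 0;
          int i = 0;
          while (i < value.length && value[i] != SENTINEL && totalInput < 100) {
              totalInput++;                                    // increment total.input
              if (value[i] >= minimum && value[i] <= maximum) {
                  totalValid++;                                // increment total.valid
                  sum += value[i];
              }
              i++;
          }
          return totalValid > 0 ? sum / totalValid : SENTINEL; // assumed ending
      }
  }
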
19
Example
  • Flowgraph for average
  • Determine the
  • Cyclomatic complexity
  • Independent paths

20
Condition Testing
  • Exercises logical conditions contained within a
    program module.
  • Types of errors found include
  • Boolean operator error (OR, AND, NOT)
  • Boolean variable error
  • Boolean parenthesis error
  • Relational operator error (<, <=, =, !=, >, >=)
  • Arithmetic expression error

21
Loop Testing
  • Focuses exclusively on the validity of loop
    constructs.
  • Four types of loop can be defined:
  • Simple
  • Nested
  • Concatenated
  • Unstructured

22
Loop Types

(Diagrams of the four loop types: simple, nested, concatenated, unstructured)
23
Simple Loops
  • Where n is the maximum number of passes, the
    following tests can be applied (see the sketch
    below)
  • Skip the loop entirely
  • Only one pass
  • 2 passes
  • m passes (where m < n)
  • n-1, n, n+1 passes
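As a hedged illustration, the sketch below invents a small routine (copyUpToMax) whose loop makes at most MAX passes, then drives it with each of the pass counts listed above; everything here is hypothetical and written in JUnit 3 style.

  import junit.framework.TestCase;

  public class SimpleLoopTest extends TestCase {
      static final int MAX = 100;                       // n: maximum number of passes

      // Copies at most MAX elements; the loop runs min(src.length, MAX) times.
      static int copyUpToMax(int[] src, int[] dest) {
          int copied = 0;
          while (copied < src.length && copied < MAX) {
              dest[copied] = src[copied];
              copied++;
          }
          return copied;
      }

      public void testLoopPassCounts() {
          // 0 passes, 1 pass, 2 passes, m < n passes, n-1, n and n+1 passes.
          int[] passCounts = { 0, 1, 2, 37, MAX - 1, MAX, MAX + 1 };
          for (int k : passCounts) {
              int[] src = new int[k];
              int[] dest = new int[MAX];
              assertEquals(Math.min(k, MAX), copyUpToMax(src, dest));
          }
      }
  }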

24
Nested Loops
  • If the approach for simple loops were extended,
    the number of possible tests would grow
    geometrically and become impractical.
  • Instead
  • Start at the innermost loop. Set all other loops
    to minimum values.
  • Conduct simple loop tests for the innermost loop
    while holding the outer loops at their minimum
    loop counter values. Add other tests for
    out-of-range or excluded values.
  • Work outward, conducting tests for the next loop,
    but keeping all other outer loops at minimum
    values and other nested loops at typical values.
  • Continue until all loops have been tested.

25
Concatenated Loops
  • Test as simple loops provided each loop is
    independent.
  • If two loops are concatenated and the loop
    counter for loop 1 is used as the initial value
    for loop 2, then test them as nested loops.

26
Unstructured Loops
  • Unstructured loops can't be tested effectively.
  • They reflect very bad practice and should be
    redesigned.

27
The Tester
  • Who does the testing?
  • Developer
  • Member of development team
  • SQA
  • All of the above

28
Independent Test Group
  • Strictly speaking, testing should be performed by
    an independent group (SQA or a third party)
  • Members of the development team are inclined to
    be more interested in meeting the rapidly
    approaching delivery date.
  • The developer of the code is prone to test
    gently.
  • Remember that the objective is to find errors,
    not to complete the tests without finding them
    (because they're always there!)

29
Successful Testing
  • The success of testing can be measured by
    applying a simple metric: defect removal
    efficiency, DRE = E / (E + D), where E is the
    number of errors found before delivery and D is
    the number of defects found after delivery.
  • As DRE approaches 1, the process approaches
    perfection.

30
The Testing Process
  • Unit testing
  • Testing of individual program components
  • Often performed by the component developer
  • Tests are often derived from the developer's
    experience!
  • Increased productivity possible with xUnit
    framework
  • Integration testing
  • Testing of groups of components integrated to
    create a system or sub-system
  • The responsibility of an independent testing team
  • Tests are based on a system specification

31
Testing Phases
(Diagram: unit testing is performed by the software developer; integration testing by the development team, SQA, or an independent test group)
32
Integration Testing
  • Tests complete systems or subsystems composed of
    integrated components
  • Integration testing should be black-box testing
    with tests derived from the specification
  • Main difficulty is localizing errors
  • Incremental integration testing reduces this
    problem

33
Incremental Integration Testing
34
Approaches to Integration Testing
  • Top-down testing
  • Start with the high-level system and integrate
    downwards, replacing individual components with
    stubs where appropriate
  • Bottom-up testing
  • Integrate individual components in levels until
    the complete system is created
  • In practice, most integration involves a
    combination of these strategies

35
Top-down Testing
36
Bottom-up Testing
37
Which is Best?
  • In bottom-up testing
  • Test harnesses must be constructed and this takes
    time.
  • Integration errors are found later rather than
    earlier.
  • Systems-level design flaws that could require
    major reconstruction are found last.
  • There is no visible, working system until the
    last stage, so it is harder to demonstrate
    progress to clients.

38
Interface Testing
  • Takes place when modules or sub-systems are
    integrated to create larger systems
  • Objectives are to detect faults due to interface
    errors or invalid assumptions about interfaces
  • Particularly important for object-oriented
    development as objects are defined by their
    interfaces

39
Interface Testing
40
Interface Types
  • Parameter interfaces
  • Data passed from one procedure to another
  • Shared memory interfaces
  • Block of memory is shared between procedures
  • Procedural interfaces
  • Sub-system encapsulates a set of procedures to be
    called by other sub-systems
  • Message passing interfaces
  • Sub-systems request services from other
    sub-systems

41
Interface Errors
  • Interface misuse
  • A calling component calls another component and
    makes an error in its use of its interface, e.g.
    parameters in the wrong order
  • Interface misunderstanding
  • A calling component embeds assumptions about the
    behaviour of the called component which are
    incorrect
  • Timing errors
  • The called and the calling component operate at
    different speeds and out-of-date information is
    accessed

42
Interface Testing Guidelines
  • Design tests so that parameters to a called
    procedure are at the extreme ends of their ranges
  • Always test pointer parameters with null pointers
  • Use stress testing in message passing systems
  • In shared memory systems, vary the order in which
    components are activated
  • Design tests which cause the component to fail
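To make the guidelines concrete, here is a hedged sketch: the Catalog class, its lookup signature, and its 0..1000 range are all invented for this example, and the tests exercise range extremes, a null reference, and a deliberately provoked failure.

  import junit.framework.TestCase;
  import java.util.Collections;
  import java.util.List;

  // Hypothetical component under test, stubbed so the sketch compiles.
  class Catalog {
      // Returns at most maxResults matches; maxResults must be 0..1000,
      // key must not be null.
      List<String> lookup(String key, int maxResults) {
          if (key == null) throw new IllegalArgumentException("key must not be null");
          if (maxResults < 0 || maxResults > 1000) throw new IllegalArgumentException("maxResults out of range");
          return Collections.emptyList();   // stub body; real matching is irrelevant here
      }
  }

  public class CatalogInterfaceTest extends TestCase {
      private final Catalog catalog = new Catalog();

      // Guideline: call with parameters at the extreme ends of their ranges.
      public void testMaxResultsAtRangeExtremes() {
          assertTrue(catalog.lookup("key", 0).isEmpty());
          assertTrue(catalog.lookup("key", 1000).size() <= 1000);
      }

      // Guidelines: test reference parameters with null, and design tests
      // that deliberately make the component fail.
      public void testNullKeyFailsInAControlledWay() {
          try {
              catalog.lookup(null, 10);
              fail("expected IllegalArgumentException for a null key");
          } catch (IllegalArgumentException expected) {
              // controlled failure: the behaviour the interface promises
          }
      }
  }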

43
Stress Testing
  • Exercises the system beyond its maximum design
    load.
  • Stressing the system often causes defects to come
    to light
  • Stressing the system also tests its failure
    behaviour.
  • Systems should not fail catastrophically. Stress
    testing checks for unacceptable loss of service
    or data
  • Particularly relevant to distributed systems
    which can exhibit severe degradation as a
    network becomes overloaded
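As a hedged, self-contained sketch of the idea, the fragment below pushes five times a made-up design load at a bounded queue and checks that the overload is handled by explicit rejection rather than by a crash or silent loss of accepted work; the "design load" and queue-based server are assumptions.

  import java.util.concurrent.ArrayBlockingQueue;

  public class StressSketch {
      public static void main(String[] args) {
          int designLoad = 1_000;                                   // assumed design load
          ArrayBlockingQueue<Integer> requests = new ArrayBlockingQueue<>(designLoad);

          int accepted = 0, rejected = 0;
          for (int i = 0; i < designLoad * 5; i++) {                // 5x the design load
              if (requests.offer(i)) accepted++;                    // non-blocking submit
              else rejected++;                                      // graceful rejection
          }
          // Every request is either accepted or explicitly rejected; nothing is lost.
          System.out.printf("accepted=%d rejected=%d queued=%d%n",
                  accepted, rejected, requests.size());
          if (accepted != requests.size()) {
              throw new IllegalStateException("accepted work was lost");
          }
      }
  }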

44
Object-Oriented Testing
  • The components to be tested are object classes
    that are instantiated as objects
  • Larger grain than individual functions, so
    approaches to white-box testing have to be
    extended
  • No obvious top to the system for top-down
    integration and testing

45
Testing Levels
  • Test object classes
  • Test clusters of cooperating objects
  • Test the complete OO system

46
Object Class Testing
  • Complete test coverage of a class involves
  • Testing all operations associated with an object
  • Setting and interrogating all object attributes
  • Exercising the object in all possible states
  • Inheritance makes it more difficult to design
    object class tests as the information to be
    tested is not localized
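A hedged sketch of those three coverage goals: the Valve class, its two states, and its flowRate attribute are invented purely for illustration, with the tests written in the JUnit 3 style used later in the lecture.

  import junit.framework.TestCase;

  // Made-up class with two states, used only to show the coverage goals.
  class Valve {
      enum State { OPEN, CLOSED }
      private State state = State.CLOSED;
      private int flowRate;                          // attribute

      void open()  { state = State.OPEN; }
      void close() { state = State.CLOSED; flowRate = 0; }
      void setFlowRate(int rate) {
          if (state != State.OPEN) throw new IllegalStateException("valve is closed");
          flowRate = rate;
      }
      int getFlowRate() { return flowRate; }
      State getState()  { return state; }
  }

  public class ValveTest extends TestCase {
      // Goal 1: test every operation; goal 2: set and interrogate attributes.
      public void testOpenSetAndReadFlowRate() {
          Valve v = new Valve();
          v.open();
          v.setFlowRate(5);
          assertEquals(5, v.getFlowRate());
          assertEquals(Valve.State.OPEN, v.getState());
      }

      // Goal 3: exercise the object in all of its states, including the
      // operations that are invalid in a given state.
      public void testSetFlowRateWhileClosedIsRejected() {
          Valve v = new Valve();                     // starts CLOSED
          try {
              v.setFlowRate(5);
              fail("expected IllegalStateException in CLOSED state");
          } catch (IllegalStateException expected) { }
          v.open();
          v.close();                                 // back to CLOSED, flow reset
          assertEquals(0, v.getFlowRate());
      }
  }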

47
Object Integration
  • Levels of integration are less distinct in
    object-oriented systems
  • Cluster testing is concerned with integrating and
    testing clusters of cooperating objects
  • Identify clusters using knowledge of the
    operation of objects and the system features that
    are implemented by these clusters

48
Approaches to Cluster Testing
  • Use-case or scenario testing
  • Testing is based on user interactions with the
    system
  • Has the advantage that it tests system features
    as experienced by users
  • Thread testing
  • A thread consists of all the classes needed to
    respond to a single external input. Each class is
    unit tested, and then the thread set is
    exercised.
  • Object interaction testing
  • Tests sequences of object interactions that stop
    when an object operation does not call on
    services from another object
  • Uses-based testing
  • Begins by testing classes that use few or no
    server classes. Next, classes that use the first
    group of classes are tested, followed by classes
    that use the second group, and so on.

49
Scenario-Based Testing
  • Identify scenarios from use-cases and supplement
    these with interaction diagrams that show the
    objects involved in the scenario
  • Consider the scenario in the weather station
    system where a report is generated

50
Collect Weather Data
51
Weather Station Testing
  • Thread of methods executed
  • CommsController:request -> WeatherStation:report
    -> WeatherData:summarize
  • Inputs and outputs
  • Input of report request with associated
    acknowledge and a final output of a report
  • Can be tested by creating raw data and ensuring
    that it is summarized properly
  • Use the same raw data to test the WeatherData
    object
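A hedged sketch of the last step only: the WeatherData class below, its add and summarize methods, and the choice of a mean-temperature summary are all assumptions, since the slide only names the classes in the thread.

  import junit.framework.TestCase;

  // Hypothetical stand-in for Sommerville's WeatherData class.
  class WeatherData {
      private final java.util.List<Double> temperatures = new java.util.ArrayList<>();
      void add(double temperature) { temperatures.add(temperature); }
      double summarize() {                           // assumed: mean temperature
          double sum = 0;
          for (double t : temperatures) sum += t;
          return temperatures.isEmpty() ? 0 : sum / temperatures.size();
      }
  }

  public class WeatherDataThreadTest extends TestCase {
      // Create known raw data and check that it is summarized properly, as
      // the slide suggests; the same data could then drive a test of the
      // full CommsController -> WeatherStation -> WeatherData thread.
      public void testRawDataIsSummarizedProperly() {
          WeatherData data = new WeatherData();
          data.add(10.0);
          data.add(20.0);
          assertEquals(15.0, data.summarize(), 1e-9);
      }
  }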

52
OO Testing: Myths and Reality
  • Inheritance means never having to say you are
    sorry
  • Reuse means never having to say you are sorry
  • Black box testing is sufficient

53
Implications of Inheritance
  • Myth
  • specializing from tested superclasses means
    subclasses will be correct
  • Reality
  • Subclasses create new ways to misuse inherited
    features
  • Different test cases needed for each context
  • Need to retest inherited methods, even if
    unchanged.

54
Implications of Reuse
  • Myth
  • Reusing a tested class means that the behavior of
    the server object is trustworthy
  • Reality
  • Every new usage provides ways to misuse a server.
  • Even if many server objects of a given class
    function correctly, nothing prevents a new client
    class from using one of them incorrectly
  • We can't automatically trust a server just
    because it performs correctly for one client

55
Implications of Encapsulation
  • Myth
  • White-box testing violates encapsulation, so
    surely black-box testing (of class interfaces) is
    sufficient.
  • Reality
  • Studies indicate that thorough BBT sometimes
    exercises only 1/3 of the code.
  • BBT exercises all specified behaviors, but what
    about unspecified behaviors?
  • We need to examine the implementation as well.

56
And What About Polymorphism?
  • Each possible binding of a polymorphic component
    requires a separate test, and probably a separate
    test case!
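For illustration, here is a hedged sketch with an invented Shape hierarchy: the polymorphic reference has two possible bindings, and each binding gets its own test case.

  import junit.framework.TestCase;

  // Invented hierarchy: a polymorphic Shape reference with two bindings.
  interface Shape { double area(); }
  class Square implements Shape {
      private final double side;
      Square(double side) { this.side = side; }
      public double area() { return side * side; }
  }
  class Circle implements Shape {
      private final double radius;
      Circle(double radius) { this.radius = radius; }
      public double area() { return Math.PI * radius * radius; }
  }

  public class ShapeBindingTest extends TestCase {
      // One test per possible binding of the polymorphic reference.
      public void testSquareBinding() {
          Shape s = new Square(2.0);
          assertEquals(4.0, s.area(), 1e-9);
      }
      public void testCircleBinding() {
          Shape s = new Circle(1.0);
          assertEquals(Math.PI, s.area(), 1e-9);
      }
  }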

57
Testing Workbenches
  • Testing is an expensive process phase. Testing
    workbenches provide a range of tools to reduce
    the time required and total testing costs
  • Most testing workbenches are open systems because
    testing needs are organization-specific
  • Difficult to integrate with closed design and
    analysis workbenches

58
A Testing Workbench
59
Workbench Components
  • Test manager: manages the running of program
    tests
  • Test data generator: selects test data from a
    database, or uses patterns to generate random
    data of the correct form
  • Oracle: predicts expected results (may be a
    previous version or a prototype)
  • Comparator: compares the results of the oracle
    and the program, or of the program and a previous
    version (regression testing)
  • Dynamic analyzer: counts the number of times each
    statement is executed during a test
  • Simulator: simulates the environment (target
    platform, user interaction, etc.)

60
xUnit Framework
  • Developed by Kent Beck
  • Makes object-oriented unit testing more
    accessible.
  • Freeware versions available for most
    object-oriented languages
  • www.xprogramming.com/software.htm

61
jUnit successful
62
jUnit unsuccessful
63
Simple Guide to Using xUnit
  • Subclass TestCase class for the object under test
  • Ensure test class has scope over object under
    test.
  • Add a test method to the test class for each
    method.
  • An xUnit test method is an ordinary method
    without parameters.
  • Code the test case in the test method
  • Create the objects necessary for the test (the
    fixture) (1)
  • Exercise the objects in the fixture (2)
  • Verify the result (3)
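A minimal JUnit 3 style sketch of the steps above, using java.util.ArrayList as the object under test so that nothing hypothetical is needed beyond the test class itself.

  import junit.framework.TestCase;
  import java.util.ArrayList;
  import java.util.List;

  public class ArrayListTest extends TestCase {      // subclass TestCase
      private List<String> fixture;                  // object under test in scope

      protected void setUp() {
          fixture = new ArrayList<>();               // (1) create the fixture
      }

      // One parameterless test method per method under test.
      public void testAdd() {
          fixture.add("alpha");                      // (2) exercise the fixture
          assertEquals(1, fixture.size());           // (3) verify the result
          assertTrue(fixture.contains("alpha"));
      }
  }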

64
Key Points
  • Exhaustive testing is not possible
  • Testing must be done systematically using
    black-box and white-box testing techniques
  • Testing must be done at both unit and integration
    levels
  • Object-oriented programming offers its own
    challenges for testing
  • Testing workbenches and frameworks can help with
    the testing process

65
References
  • M. Grottke and K.S. Trivedi. Fighting Bugs:
    Remove, Retry, Replicate, and Rejuvenate. IEEE
    Computer, February 2007, pp. 107-109.
  • R. Pressman. Software Engineering: A
    Practitioner's Approach, 6th ed. New York, NY:
    McGraw-Hill, 2004.
  • I. Sommerville. Software Engineering, 6th ed. New
    York, NY: Addison-Wesley, 2000.