1
Software Testing
2
Overview
  • To frame our discussion, consider
  • Why do we need a conceptual structure to guide
    the testing process?
  • How do black-box and white-box testing differ?
  • What are the economic guidelines of software
    testing, and how do they relate to different
    methods?

3
Outline
  • The Purpose of Software Testing
  • Testing in the Lifecycle
  • Objectives
  • Testing Principles
  • White-Box Testing
  • Basis Path Testing
  • Loop Testing
  • Black-Box Testing

4
The purpose of software testing is
  • a. To demonstrate that the product performs each
    function intended
  • b. To demonstrate that the internal operation of
    the product performs according to specification
    and all internal components have been adequately
    exercised
  • c. To increase our confidence in the proper
    functioning of the software.
  • d. To show the product is free of defects.
  • e. All of the above.

5
Testing in the Life-Cycle
6
Definition
  • Testing is finding faults once the code is
    written.

7
Objectives of Testing
  • Testing cannot show the absence of defects; it
    can only show that defects are present.

8
  • 1. Testing is a process of executing a program
    with the intent of finding an error.
  • 2. A good test case is one that has a high
    probability of finding an as yet undiscovered
    error.
  • 3. A successful test is one that uncovers an as
    yet undiscovered error.

9
Testers
  • Software testers improve software quality by
    finding faults. They prepare their findings so
    developers can locate the cause and effect a
    repair.

10
Types of Faults
  • Algorithmic
  • Syntax
  • Computation and Precision
  • Documentation
  • Capacity or Boundary
  • Timing and Coordination
  • Throughput and Performance
  • Recovery Faults

11
Orthogonal Defect Classification
  • IBM developers track faults by placing each fault
    in a category. These categories are mutually
    exclusive. The distribution of faults across
    categories helps suggest changes in the
    development process.

12
Economics
  • With large systems, it is almost always true
    that more testing will find more defects. The
    question is not whether all the defects have been
    found, but whether the cost of discovering the
    remaining defects can be justified. Hence,
    strategies for testing should be adopted that
    optimize the effort expended.

13
Testing Principles
  • 1. All tests should be traceable to customer
    requirements.
  • 2. Tests should be planned long before testing
    begins.
  • 3. 80% of errors are traceable to 20% of the
    modules (Pareto Principle - error-prone modules)
  • 4. Testing should begin in the small and progress
    to larger components.
  • 5. Exhaustive testing is NOT possible.
  • 6. Testing is more effective when conducted by an
    independent party. (SQA?)

14
Types of Testing (1)
  • Module, unit, component testing
  • Each program component is tested on its own.
    Includes internal data structures, logic and
    boundary conditions.
  • Integration Testing
  • This is the process of verifying that the program
    components work together.

15
Types of Testing (2)
  • Function
  • Here we evaluate the system to determine if the
    functions described in the SRS are performed by
    the system.
  • Performance
  • This looks at the system in the working
    environment and evaluates the performance of the
    functions in this context.

16
White-Box Testing (1)
  • White-box (glass-box) testing is a test case
    design method that uses the control structure of
    the procedural design to derive test cases.

17
White-Box Testing (2)
  • With white-box methods, test cases are derived
    that
  • 1) guarantee all independent paths in a module
    have been tested (exercised) at least once
  • 2) exercise all logical decisions for both true
    and false conditions
  • 3) execute all loops at their boundary values and
    within their operational bounds
  • 4) exercise internal data structures to ensure
    their validity.

18
Approaches to Unit Testing (1)
  • Statement Testing
  • Every statement in a component is executed at
    least once in some test.
  • Branch Testing
  • For every decision point, each branch is chosen
    at least once in some test.
  • Path Testing
  • Every distinct path through the code is executed
    at least once in some test.
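
To make these coverage levels concrete, here is a minimal Python
sketch (the function and its test values are invented for
illustration): one test achieves statement coverage, a second adds
branch coverage, and two more are needed for path coverage.

  def classify(x, y):
      # Toy function with two independent decisions.
      label = "small"
      if x > 10:          # decision 1
          label = "large"
      if y > 10:          # decision 2
          label += "-wide"
      return label

  # Statement coverage: one test executes every statement.
  assert classify(11, 11) == "large-wide"

  # Branch coverage: a second test takes both false branches.
  assert classify(0, 0) == "small"

  # Path coverage: the two remaining true/false combinations.
  assert classify(11, 0) == "large"
  assert classify(0, 11) == "small-wide"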

19
Approaches to Unit Testing (2)
  • Condition Testing
  • Every conditional statement is tested so that
    every possible condition is exercised.
  • Definition-use path Testing
  • Every path from every definition of every
    variable is exercised in some test.
  • All-uses Testing
  • The test set includes at least one path from
    every definition to every use that can be reached
    by that definition.

20
Strength
  • Saying that testing strategy A is stronger than
    testing strategy B means we can have more
    confidence that A has caught more problems.

21
Reality of Strength
  • The stronger a strategy, the more test cases it
    involves.
  • Strength is associated with test coverage.
  • Path coverage is stronger than branch coverage,
    which in turn is stronger than statement
    coverage.

22
Path Testing
  • The basis path method allows for the
    construction of test cases that are guaranteed to
    execute every statement in the program at least
    once. This method can be applied to detailed
    procedural design or source code.

23
Path Testing - Method
  • 1. Draw the flow graph corresponding to the
    procedural design or code.
  • 2. Determine the cyclomatic complexity of the
    flow graph.
  • 3. Determine the basis set of independent paths.
    (The cyclomatic complexity indicates the number
    of paths required.)
  • 4. Determine a test case that will force the
    execution of each path.

24
Flow Graphs
25
  i = 1                                      1
  total.input = total.valid = 0
  sum = 0
  DO WHILE value[i] <> -999                  2
       AND total.input < 100                 3
    increment total.input by 1               4
    IF value[i] >= minimum                   5
         AND value[i] <= maximum             6
      THEN increment total.valid by 1        7
           sum = sum + value[i]
      ELSE skip
    ENDIF
    increment i by 1                         8
  ENDDO                                      9
  IF total.valid > 0                         10
    THEN average = sum / total.valid         11
    ELSE average = -999                      12
  ENDIF                                      13
  END AVERAGE
  (The numbers on the right mark the nodes of the
  corresponding flow graph.)
26
(Slide image: flow graph of the AVERAGE procedure; no transcript.)
27
Determine Cyclomatic Complexity
  • V(G) = E - N + 2
  • V(G) = 17 - 13 + 2 = 6
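
V(G) can also be computed mechanically. A minimal Python sketch,
with the edge list inferred from the basis paths on the next slide
(13 nodes, 17 edges):

  # Edges of the AVERAGE flow graph, inferred from the basis paths.
  edges = [
      (1, 2), (2, 3), (2, 10), (3, 4), (3, 10),
      (4, 5), (5, 6), (5, 8), (6, 7), (6, 8), (7, 8),
      (8, 9), (9, 2), (10, 11), (10, 12), (11, 13), (12, 13),
  ]
  nodes = {n for edge in edges for n in edge}

  v_of_g = len(edges) - len(nodes) + 2   # V(G) = E - N + 2
  print(v_of_g)                          # -> 6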

28
  • Step 3: Determine the basis set of independent
    paths.
  • 1-2-10-11-13
  • 1-2-10-12-13
  • 1-2-3-10-11-13
  • 1-2-3-4-5-8-9-2 ...
  • 1-2-3-4-5-6-8-9-2 ...
  • 1-2-3-4-5-6-7-8-9-2 ...
  • Step 4: Prepare test cases.
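
For instance, two of these basis paths can be forced as follows.
The sketch below assumes a straightforward Python port of the
AVERAGE procedure (the port is a reconstruction, not part of the
slides):

  def average(values, minimum, maximum):
      # Python port of the AVERAGE pseudocode above (reconstruction).
      total_input = total_valid = 0
      total = 0
      i = 0
      while i < len(values) and values[i] != -999 and total_input < 100:
          total_input += 1
          if minimum <= values[i] <= maximum:
              total_valid += 1
              total += values[i]
          i += 1
      avg = total / total_valid if total_valid > 0 else -999
      return avg, total_input, total_valid

  # Path 1-2-10-12-13: sentinel first, loop never entered.
  assert average([-999], 0, 100) == (-999, 0, 0)

  # Path 1-2-3-4-5-6-7-8-9-2-10-11-13: one valid value, then sentinel.
  assert average([50, -999], 0, 100) == (50.0, 1, 1)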

29
Loop Testing (1)
  • Simple Loops
  • The following set of tests should be applied
    to simple loops, where n is the maximum number of
    allowable passes
  • 1. Skip the loop entirely.
  • 2. Only one pass through the loop.
  • 3. Two passes through the loop.
  • 4. m passes through the loop, where m < n.
  • 5. n-1, n, and n+1 passes through the loop.
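
The checklist can be captured as data. A minimal sketch, assuming a
simple loop with at most n allowable passes (the helper name and
the choice of m are illustrative):

  def simple_loop_pass_counts(n, m=None):
      # Pass counts to exercise, per the checklist above;
      # m is any typical count with 2 < m < n.
      if m is None:
          m = n // 2
      return [0, 1, 2, m, n - 1, n, n + 1]

  print(simple_loop_pass_counts(10))  # -> [0, 1, 2, 5, 9, 10, 11]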

30
Loop Testing (2)
  • Nested Loops
  • 1. Start with the innermost loop. Set all other
    loops to minimum values.
  • 2. Conduct simple loop tests for the innermost
    loop while holding the outer loops at their
    minimum iteration values.
  • 3. Work outward, conducting tests for the next
    loop while keeping all remaining outer loops at
    their minimum iteration counts.
  • 4. Continue until all loops have been tested.

31
Loop Testing (3)
  • Concatenated Loops
  • Concatenated loops can be tested using the
    approach defined for simple loops if the loops
    are independent. If the loop counter of loop i is
    used as the initial value for loop i+1, then the
    loops are not independent. When loops are not
    independent, use the approach defined for nested
    loops.

32
Loop Testing (4)
  • Unstructured Loops
  • Redesign the loops so that they fall into one of
    the above categories.

33
Integration Testing
  • Integration testing assumes you have a collection
    of components, each of which works correctly on
    its own.
  • Integration of components is planned and
    coordinated so that when a failure occurs, we
    have an idea of where the problem resides.

34
Approaches to Integration Testing
  • Top-down
  • Start at the top of the control flow. Add each
    called component one layer at a time.
  • Bottom-up
  • Test the bottom components. Then introduce the
    components that call them.
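
In top-down integration, components below the layer under test are
replaced by stubs that return canned answers. A minimal sketch (the
component names are hypothetical):

  def fetch_rate(currency):
      # Real lower-level component, not yet integrated.
      raise NotImplementedError

  def fetch_rate_stub(currency):
      # Stub standing in for fetch_rate during top-down integration.
      return 1.25

  def convert(amount, currency, rate_source=fetch_rate):
      # Upper-level component currently under test.
      return amount * rate_source(currency)

  # Exercise the top layer with the stub wired in.
  assert convert(100, "EUR", rate_source=fetch_rate_stub) == 125.0

Bottom-up integration inverts this: the lower-level components are
real, and a throwaway driver calls them in place of the upper
layers.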

35
Black-Box Testing
  • Black-box testing methods focus on the functional
    requirements of the software. Test sets are
    derived that fully exercise all functional
    requirements. This strategy tends to be applied
    during the latter part of the lifecycle.

36
Black-Box Testing (2)
  • Tests are designed to answer questions such as
  • 1) How is functional validity tested?
  • 2) What classes of input make good test cases?
  • 3) Is the system particularly sensitive to
    certain input values?
  • 4) How are the boundaries of data classes
    isolated?
  • 5) What data rates or volumes can the system
    tolerate?
  • 6) What effect will specific combinations of data
    have on system operation?

37
Equivalence Partitioning
  • This method divides the input of a program
    into classes of data. Test case design is based
    on defining an equivalence class for a particular
    input. An equivalence class represents a set of
    valid or invalid input values.

38
Equivalence Partitioning (2)
  • Guidelines for equivalence partitioning -
  • 1) If an input condition specifies a range, one
    valid and two invalid equivalence classes are
    defined.
  • 2) If an input condition requires a specific
    value, one valid and two invalid equivalence
    classes are defined.
  • 3) If an input condition specifies a member of a
    set, one valid and one invalid equivalence class
    are defined.
  • 4) If an input condition is boolean, one valid
    and one invalid class are defined.
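
Applying guideline 1 to a hypothetical input condition "age must be
between 18 and 65" yields one valid and two invalid classes, each
represented by a single test value (all names and values below are
invented for illustration):

  def accepts(age):
      # Stand-in for the system under test.
      return 18 <= age <= 65

  # One representative per equivalence class.
  representatives = {
      "valid": 40,          # inside the range
      "invalid_low": 10,    # below the range
      "invalid_high": 70,   # above the range
  }

  assert accepts(representatives["valid"])
  assert not accepts(representatives["invalid_low"])
  assert not accepts(representatives["invalid_high"])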

39
Boundary Value Analysis
  • Boundary value analysis is complementary to
    equivalence partitioning. Rather than selecting
    arbitrary values from within an equivalence
    class, the test case designer chooses values at
    the extremes of the class. Furthermore, boundary
    value analysis encourages test case designers to
    look at output conditions and design test cases
    for the extreme conditions in output.

40
Boundary Value Analysis (2)
  • Guidelines for boundary value analysis -
  • 1) If an input condition specifies a range
    bounded by values a and b, test cases should be
    designed with values a and b, and with values
    just above and just below a and b.

41
  • 2) If an input condition specifies a number of
    values, test cases should be developed that
    exercise the minimum and maximum numbers. Values
    above and below the minimum and maximum are also
    tested.

42
  • 3) Apply the above guidelines to output
    conditions. For example, if the requirement
    specifies the production of a table as output,
    then you want to choose input conditions that
    produce the largest and smallest possible tables.

43
  • 4) For internal data structures, be certain to
    design test cases to exercise the data structure
    at its boundary. For example, if the software
    includes the maintenance of a personnel list,
    then you should ensure the software is tested
    with conditions where the list size is 0, 1 and
    maximum (if constrained).
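
As a concrete illustration of guidelines 1 and 4, a minimal sketch
using a hypothetical age range (a = 18, b = 65) and a hypothetical
list capacity of 100:

  a, b = 18, 65    # range bounds from the input condition (guideline 1)
  CAPACITY = 100   # constrained list size (guideline 4)

  # Guideline 1: a and b, plus values just above and below each.
  range_tests = [a - 1, a, a + 1, b - 1, b, b + 1]

  # Guideline 4: exercise the list at sizes 0, 1, and the maximum.
  list_size_tests = [0, 1, CAPACITY]

  print(range_tests)      # -> [17, 18, 19, 64, 65, 66]
  print(list_size_tests)  # -> [0, 1, 100]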

44
Cause-Effect Graphs
  • A weakness of the two previous methods is that
    they do not consider potential combinations of
    input/output conditions. Cause-effect graphs
    connect input classes (causes) to output classes
    (effects), yielding a directed graph.

45
Cause-Effect Graphs (2)
  • Guidelines for cause-effect graphs -
  • 1) Causes and effects are listed for a module,
    and an identifier is assigned to each.
  • 2) A cause-effect graph is developed (special
    symbols are required).
  • 3) The graph is converted to a decision table.
  • 4) Decision table rules are converted to test
    cases.
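
A minimal sketch of steps 3 and 4 for a hypothetical rule "effect
e1 fires only when causes c1 AND c2 are both present": the decision
table enumerates every combination of causes, and each row becomes
a test case.

  # Step 3: decision table for the hypothetical rule e1 = c1 AND c2.
  decision_table = {
      (False, False): False,
      (False, True):  False,
      (True,  False): False,
      (True,  True):  True,
  }

  def system_under_test(c1, c2):
      # Stand-in for the module implementing the rule.
      return c1 and c2

  # Step 4: each rule (row) in the table becomes one test case.
  for (c1, c2), expected in decision_table.items():
      assert system_under_test(c1, c2) == expected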

46
System Testing Strategies
  • Recovery Testing
  • Security Testing
  • Stress Testing
  • Performance Testing
  • Usage Testing
  • Usability Testing

47
Acceptance Testing
  • Determine whether the system really meets the
    customers' needs and expectations.