Module 5
1
Module 5: Testing
--- The phase where the errors remaining from all previous phases must be detected.
--- The program to be tested is executed with a set of test cases, and its output for those test cases is evaluated to determine whether the program is performing as expected.
2
Incremental testing: components and subsystems are tested separately before being integrated to form the system for system testing.
3
Testing Fundamentals: error, fault and failure
The term error is used in two ways. Error refers to the difference between the actual output of a piece of software and the correct output. It also refers to human actions that result in software containing a defect or fault.
4
Fault: a condition that causes a system to fail in performing the required function. A fault is the basic reason for malfunction and is synonymous with the commonly used term bug.
5
Failure: the inability of the system to perform a required function according to its specification. A software failure occurs if the behaviour of the software differs from the specified behaviour. A failure is produced when there is a fault.
6
During testing, only failures are observed, and faults are deduced from them. The actual faults are identified by a separate activity, referred to as debugging.
7
Test Oracle: to test any program we need a description of its expected behaviour and a method of determining whether the observed behaviour conforms to the expected behaviour. For this we need a test oracle.
9
A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program's output for the test cases. Conceptually, we can view testing as a process in which the test cases are given to both the test oracle and the program under test. The outputs of the two are then compared to determine whether the program behaved correctly for the test cases.
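As a rough sketch of this idea (the program, the oracle, and the test values below are illustrative assumptions, not taken from the slides), testing can be pictured as running the program and an independent oracle on the same test cases and comparing the two outputs:

```python
# Sketch: comparing a program under test against an independent test oracle.

def program_under_test(x):
    # The implementation being tested (may contain faults).
    return x * x

def oracle(x):
    # An independent mechanism that computes the expected output.
    return x ** 2

def run_tests(test_cases):
    failures = []
    for case in test_cases:
        expected = oracle(case)
        actual = program_under_test(case)
        if actual != expected:        # a discrepancy suggests a possible fault
            failures.append((case, expected, actual))
    return failures

print(run_tests([0, 1, -3, 10]))      # an empty list means no failures were observed
```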
10
Test oracles are necessary for testing. Ideally, we would like an automated oracle that always gives a correct answer. Often the oracles are human beings, who mostly compute by hand what the output of the program should be. As it is often extremely difficult to determine whether the behaviour conforms to the expected behaviour, a human oracle may make mistakes.
11
As a result, when there is a discrepancy between the results of the program and the oracle, we have to verify the result produced by the oracle before declaring that there is a fault in the program. This is one of the reasons testing is so cumbersome and expensive.
12
To help the oracle determine the correct behaviour, it is important that the behaviour of the system or component be unambiguously specified and that the specification itself be error-free.
13
Top-down and bottom-up approaches
Top-down: start testing from the top of the hierarchy, incrementally add the modules it calls, and then test the combined system. This requires stubs. (A stub is a dummy routine that simulates a module; stubs simulate the behaviour of subordinate modules.)
14
The bottom-up approach starts from the bottom of the hierarchy. To perform it, drivers are needed. (The job of a driver is to invoke the module under test with different sets of test cases.) Writing stubs is more difficult than writing drivers. Top-down testing is advantageous if major flaws occur toward the top of the hierarchy; if flaws occur toward the bottom, bottom-up testing is advantageous.
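A minimal sketch of these two helpers (the module names and canned data are made up for illustration): a stub stands in for a subordinate module so its caller can be tested top-down, while a driver invokes the module under test bottom-up.

```python
# Hypothetical example: a top-level module generate_report calls a subordinate fetch_data.

def fetch_data_stub(key):
    # Stub: a dummy routine simulating the real subordinate module by returning canned data,
    # so the calling module can be tested before the subordinate exists (top-down testing).
    return {"name": "sample", "value": 42}

def generate_report(key, fetch=fetch_data_stub):
    data = fetch(key)
    return f"{data['name']}: {data['value']}"

def driver():
    # Driver: invokes the module under test with a set of test cases (bottom-up testing).
    for key in ["a", "b"]:
        print(generate_report(key))

driver()
```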
15
If the system is developed in a top-down manner, a top-down testing strategy should be used; if the system is developed in a bottom-up manner, a bottom-up testing strategy should be used.
16
Test cases and test criteria
Test cases should be good at revealing the presence of faults. Ideally, we would like to determine a set of test cases such that successful execution of all of them implies that there are no errors in the program.
17
Two fundamental goals of a practical testing activity:
--- maximize the number of errors detected
--- minimize the number of test cases (minimize cost)
18
Selection of test cases is complex. An ideal test case set is one that succeeds only if there are no errors in the program. One possible ideal set of test cases is one that includes all possible inputs to the program; this is called exhaustive testing, which is impractical and infeasible.
19
To select a set of test cases that is close to ideal, a test selection criterion (or simply test criterion) is used. For a given program P and its specification S, a test selection criterion specifies the conditions that must be satisfied by a set of test cases T. The criterion becomes the basis for test case selection.
20
Two aspects of test case selection:
--- specifying a criterion for evaluating a set of test cases, and
--- generating a set of test cases that satisfy a given criterion.
21
Two fundamental properties of a testing criterion:
--- Reliability: a criterion is reliable if all the test sets that satisfy the criterion detect the same errors.
--- Validity: a criterion is valid if, for any error in the program, there is some set satisfying the criterion that will reveal the error.
22
The fundamental theorem of testing states that if a testing criterion is valid and reliable, and a test set satisfying the criterion succeeds, then the program contains no errors.
23
Axioms capturing desirable properties of test criteria
Applicability axiom: for every program there exists a test set T that satisfies the criterion.
Antiextensionality axiom: there are programs P and Q, both of which implement the same specification, such that a test set T satisfies the criterion for P but does not satisfy the criterion for Q.
24
Antidecomposition axiom: there exist a program P and a component Q of P such that a test case set T satisfies the criterion for P, T' is the set of values that variables can assume on entering Q for some test case in T, and T' does not satisfy the criterion for Q.
Anticomposition axiom: there exist programs P and Q such that T satisfies the criterion for P, and the outputs of P for T (represented as P(T)) satisfy the criterion for Q, but T does not satisfy the criterion for the composition P;Q.
25
Psychology of testing
A common view of the aim of testing is to demonstrate that a program works by showing that it has no errors. This is the opposite of what testing is for: testing is meant to detect the errors present in the program. One should not start testing with the intent of showing that a program works; the intent should be to show that a program does not work. Testing is the process of executing a program with the intent of finding errors.
26
Testing is a destructive process. A test case is good if it detects an as-yet-undetected error in the program. It is because of this psychological factor that organizations require a product to be tested, before it is finally delivered to the customer, by people not involved in developing the program.
27
Another reason for independent testing is that errors sometimes occur because the programmer did not understand the specification clearly. Testing of a program by its own programmer will not detect such errors.
28
Functional testing
Two basic approaches to testing:
---- Functional
---- Structural
29
In functional testing the structure of the program is not considered. Test cases are decided solely on the basis of the requirements or specifications of the module, and the internals of the module or program are not considered when selecting test cases. Due to its nature, functional testing is often called black box testing.
30
In the structural approach, test cases are generated based on the actual code of the program or module to be tested. The structural approach is sometimes called glass box testing.
31
Techniques for generating test cases for functional testing
The basis for deciding test cases is the requirement specification of the module or the system. For the entire system, test cases are designed from the requirement specification document for the system. For modules created during design, test cases for functional testing are decided from the module specifications produced during design.
32
The most obvious functional testing procedure is exhaustive testing, which is impractical.
33
Equivalence class partitioning
As we cannot do exhaustive testing, the next approach is to divide the domain of all inputs into a set of equivalence classes, so that if any test in an equivalence class succeeds, then every test in that class succeeds. That is, we want to identify classes of test cases such that the success of one test case in a class implies the success of the others. The success of one test case from each equivalence class is then equivalent to successfully completing an exhaustive test of the program.
34
Different equivalence classes are formed by grouping together the inputs for which the specified behaviour of the module is the same, so that inputs with different specified behaviour fall into different classes; these groups are then regarded as the equivalence classes. For example, if the specification of a module that determines the absolute value of integers specifies one behaviour for positive integers and another for negative integers, we form two equivalence classes: one consisting of positive integers and the other of negative integers.
35
Equivalence classes are usually formed by considering each condition specified on an input as specifying one valid equivalence class and one or more invalid equivalence classes. For example, if an input condition specifies a range of values, say 0 < count < max, then we form one valid equivalence class with that range and two invalid equivalence classes: one with values below the lower bound (count < 0) and the other with values above the higher bound (count > max).
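A small sketch of this partitioning for the 0 < count < max condition above (the value of max and the representative test values are assumptions for illustration): one representative input is picked from the valid class and from each invalid class.

```python
# Equivalence classes for the condition 0 < count < max (max assumed to be 100).
MAX = 100

valid_class  = range(1, MAX)          # 0 < count < max
invalid_low  = range(-50, 1)          # values at or below the lower bound
invalid_high = range(MAX, MAX + 50)   # values at or above the upper bound

# One representative test case per class is assumed to stand for the whole class.
representatives = [
    valid_class[len(valid_class) // 2],   # e.g. 50
    invalid_low[0],                       # e.g. -50
    invalid_high[-1],                     # e.g. 149
]
print(representatives)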
36
If the entire range of an input is not to be treated in the same manner, then the range should be split into two or more equivalence classes. For an output equivalence class, the goal is to generate test cases such that the output for the test case lies in that output equivalence class.
37
Boundary value analysis
It has been observed that programs that work correctly for a set of values in an equivalence class often fail on some special values. These values lie on the boundary of the equivalence class. Test cases that have values on the boundaries of equivalence classes are therefore likely to be high-yield test cases, and selecting such test cases is the aim of boundary value analysis.
38
In boundary value analysis, we choose an input for a test case from an equivalence class such that the input lies at the edge of the equivalence class. Boundary value test cases are also called extreme cases.
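A sketch of boundary value selection for the same 0 < count < max condition (the value of max and the specific picks are illustrative assumptions):

```python
# Boundary value analysis for 0 < count < max (max assumed to be 100):
# test values are chosen at and just beyond the edges of the valid range.
MAX = 100

boundary_values = [
    0,        # just below the valid range (invalid)
    1,        # lower edge of the valid range
    2,        # just above the lower edge
    MAX - 1,  # upper edge of the valid range
    MAX,      # just above the valid range (invalid)
]
print(boundary_values)
```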
39
Cause-effect graphing
One weakness of equivalence class partitioning and the boundary value method is that they consider each input separately; that is, both concentrate on the conditions and classes of a single input. They do not consider combinations of inputs.
40
One way to exercise combinations of different input conditions is to consider all valid combinations of the equivalence classes of the input conditions. This simple approach results in an unusually large number of test cases, many of which will not be useful for revealing any new errors.
41
Cause-effect graphing is a technique that aids in selecting combinations of input conditions in a systematic way, such that the number of test cases does not become unmanageably large. The technique starts with identifying the causes and effects of the system under test.
42
A cause is a distinct input condition, and an effect is a distinct output condition. Each condition forms a node in the cause-effect graph. The conditions should be stated so that each can be set to either true or false.
43
For example, an input condition can be "file is empty", which can be set to true by providing an empty input file and to false by providing a nonempty file. After identifying the causes and effects, for each effect we identify the causes that can produce that effect and how the conditions have to be combined to make the effect true.
44
Conditions are combined using the Boolean operators and, or, and not, which are represented in the graph by ∧, ∨, and ¬. Then, for each effect, all combinations of the causes the effect depends on that make the effect true are generated.
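A small sketch of this step (the causes and the effect rule below are hypothetical, chosen only to illustrate the idea): causes combined with Boolean operators determine an effect, and the cause combinations that make the effect true are enumerated, each suggesting a test case.

```python
from itertools import product

# Hypothetical causes (distinct input conditions) and one effect.
# Effect rule: an error message is shown if the file is empty,
# or if the user is not logged in and the quota is exceeded.
def effect(file_empty, logged_in, quota_exceeded):
    return file_empty or (not logged_in and quota_exceeded)

# Enumerate only the cause combinations that make the effect true;
# each such combination suggests one test case.
for combo in product([True, False], repeat=3):
    if effect(*combo):
        print(dict(zip(["file_empty", "logged_in", "quota_exceeded"], combo)))
```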
45
Beyond generating high-yield test cases, cause-effect graphing also aids understanding of the functionality of the system, because the tester must identify the distinct causes and effects.
46
Special cases
There are no rules to determine special cases, and the tester has to use intuition and experience to identify such test cases. Determining special cases is also called error guessing.
47
  • Structural testing
  • Structural testing is concerned with testing the implementation of the program.
  • --- The intent is to exercise the different programming structures and data structures used.
  • Control flow based testing
  • Data flow based testing
  • Mutation testing

48
Control flow based criteria
--- the most common structure-based criteria
--- here the control flow graph (flow graph) of a program is considered, and coverage of various aspects of the graph is specified as the criterion.
49
Let the control flow graph of a program P be G.
--- Node: a block of statements that are executed together.
--- Edge (i, j): from node i to node j; represents a possible transfer of control after executing the last statement of the block in node i to the first statement of the block represented by node j.
--- Start node: a node corresponding to a block whose first statement is the start statement of P.
--- Exit node: a node corresponding to a block whose last statement is an exit statement.
50
Path: a finite sequence of nodes (n1, n2, ..., nk), where k > 1, such that there is an edge (ni, ni+1) for every node ni in the sequence except nk.
Complete path: a path whose first node is the start node and whose last node is an exit node.
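A minimal sketch of a control flow graph as an adjacency structure, with a check for whether a node sequence is a complete path (the graph itself is a made-up example, not one from the slides):

```python
# Hypothetical flow graph: node 1 is the start node, node 4 is the exit node.
edges = {1: [2, 3], 2: [4], 3: [4], 4: []}
start_node, exit_nodes = 1, {4}

def is_complete_path(path):
    # A path is a sequence of nodes with an edge between consecutive nodes;
    # a complete path begins at the start node and ends at an exit node.
    is_path = all(b in edges[a] for a, b in zip(path, path[1:]))
    return is_path and path[0] == start_node and path[-1] in exit_nodes

print(is_complete_path([1, 2, 4]))   # True
print(is_complete_path([1, 3, 2]))   # False: no edge from node 3 to node 2
```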
51
Control flow based criteria
Statement coverage (all-nodes criterion)
--- the simplest coverage criterion
--- requires that each statement of the program be executed at least once during testing
--- not very strong and can leave errors undetected.
52
Branch coverage (all-edges criterion)
Each edge in the control flow graph must be traversed at least once during testing; that is, each decision in the program must be evaluated to both true and false.
--- also called branch testing
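As an illustration (the function and the test inputs are made up), a test set can execute every statement and still leave a branch unexercised; branch coverage additionally requires both outcomes of each decision to be taken.

```python
def classify(x):
    label = "non-negative"
    if x < 0:                 # decision: both its true and false outcomes must be taken for branch coverage
        label = "negative"
    return label

# A single test case with x = -1 executes every statement (statement coverage is met),
# but the false outcome of the decision is never taken.
print(classify(-1))

# Adding a test case with x = 2 traverses the remaining edge, so branch coverage is also met.
print(classify(2))
```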
53
Path coverage (all-paths criterion)
All possible paths must be executed during testing.
--- also called path testing
--- difficulty: programs that contain loops can have an infinite number of possible paths
--- still not strong enough to guarantee the detection of all errors.
54
Data flow based testing
--- aims to make sure that, during testing, the definitions of variables and their subsequent uses are exercised
--- a definition-use graph (def/use graph) is first constructed from the control flow graph of the program.
55
A statement in a node of the flow graph (representing a block of code) contains variable occurrences. A variable occurrence can be one of the following three types:
def: represents the definition of a variable. The variable on the left-hand side of an assignment statement is the one getting defined.
56
c-use: computational use of a variable. Any statement that uses the value of a variable for computational purposes; for example, in an assignment statement, all variables on the right-hand side are c-uses.
57
p-use: predicate use. All occurrences of variables in a predicate (variables whose values are used to compute the value of the predicate), which is used to transfer control.
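A small annotated snippet (illustrative, not taken from the slides) showing the three kinds of variable occurrences:

```python
def running_total(values, limit):
    total = 0                  # def of total
    for v in values:           # def of v
        if total > limit:      # p-use of total and limit: the predicate controls transfer of control
            break
        total = total + v      # c-use of total and v on the right-hand side; def of total on the left
    return total               # c-use of total

print(running_total([3, 4, 5], 10))
```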
58
Mutation testing
--- takes the program and creates many mutants of it by making simple changes to the program
--- the goal is to make sure that, during the course of testing, each mutant produces an output different from the output of the original program
--- it requires the set of test cases to be such that it can distinguish between the original program and its mutants.
59
In mutation testing, faults of some predecided types are introduced into the program being tested. Testing then tries to identify those faults in the mutants. If all these faults can be identified, then the original program should not have these faults; otherwise they would have been identified in that program by the set of test cases.
60
Mutation testing of a program P proceeds as follows. First, a set of test cases T is prepared by the tester, and P is tested with the test cases in T. If P fails, then T reveals some errors and they are corrected. If P does not fail during testing by T, then it could mean either that the program P is correct, or that P is not correct but T is not sensitive enough to detect the faults in P.
61
The sensitivity of T is evaluated through mutation testing, and more test cases are added to T until the set is considered sensitive enough.
62
  • If P does not fail on T, then:
  • 1. Generate mutants for P. Suppose there are N mutants.
  • 2. By executing each mutant and P on each test case in T, find how many mutants can be distinguished by T. Let this number be D; these mutants are called dead.

63
3. For each mutant that cannot be distinguished by T (called a live mutant), find out which of them are equivalent to P, i.e., determine the mutants that will always produce the same output as P. Let their number be E.
4. Mutation score = D / (N - E).
5. Add more test cases to T and continue testing until the mutation score is 1.
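A toy sketch of this procedure (the program, its mutants, and the test set are all invented for illustration): each mutant is run against the test set, dead mutants are counted, and the mutation score D / (N - E) is computed.

```python
# Original program under test (hypothetical).
def original(a, b):
    return a + b

# Simple mutants created by small changes to the original program.
mutants = [
    lambda a, b: a - b,      # operator changed: + to -
    lambda a, b: a * b,      # operator changed: + to *
    lambda a, b: b + a,      # equivalent mutant: always produces the same output
]
equivalent = 1               # E: number of mutants known to be equivalent to the original

tests = [(1, 2), (0, 5)]     # test set T

# A mutant is "dead" if some test case distinguishes it from the original program.
dead = sum(
    any(mutant(a, b) != original(a, b) for a, b in tests)
    for mutant in mutants
)

score = dead / (len(mutants) - equivalent)   # mutation score = D / (N - E)
print(dead, score)                            # here: 2 dead mutants, score 1.0
```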
64
Testing Process
The basic goal of the software development process is to produce software that has no errors or very few errors.
Comparison of different techniques: it is not easy to compare the effectiveness of testing techniques. By effectiveness we mean the fault-detecting capability. The effectiveness of a technique depends on the type of errors that exist in the software.
65
One can expect structural testing to be good for detecting logic errors but not very good for detecting data handling errors; for data handling errors, static analysis is good. Functional testing is good for I/O errors, as it focuses on external behaviour, but is not as good at detecting logic errors.
66
Structural testing is not suitable for testing entire programs, because it is extremely difficult to generate test cases that achieve the desired coverage; it is well suited for module testing. So structural testing is often done only for modules, and only functional testing is done for systems.
67
Another way of measuring effectiveness is to consider the cost-effectiveness of different strategies, that is, the cost of detecting an error using a particular strategy.
68
Levels of testing
69
Test Plan
Testing commences with a test plan and terminates with acceptance testing.
--- The test plan is a general document for the entire project that defines the scope, the approach to be taken, and the schedule of testing, as well as identifying the test items for the entire testing process and the personnel responsible for the different testing activities.
70
The inputs for forming the test plan are (1) the project plan, (2) the requirements document, and (3) the system design document. The project plan is needed to make sure that the test plan is consistent with the overall project plan and that the testing schedule matches that of the project plan. The requirements document and the design document are the basic documents used for selecting the test units and deciding the approaches to be used during testing.
71
A test plan should contain the following:
--- test unit specification
--- features to be tested
--- approach for testing
--- test deliverables
--- schedule
--- personnel allocation
72
Test unit specification
--- An important activity in the test plan is to identify the test units. A test unit is a set of one or more modules, together with associated data, that are the object of testing. The basic idea behind test units is to make sure that testing is performed incrementally. A test unit should also be testable, i.e., it should be possible to form test cases and execute the unit with these test cases without much effort.
73
Features to be tested
Includes all software features (a feature is a characteristic specified by the requirements or design documents; it includes functionality, performance, design constraints and attributes) and the combinations of features that should be tested.
74
Approach for testing
Specifies the overall approach to be followed in the current project. The technique that will be used to judge the testing effort should also be specified; this is called the testing criterion.
75
Testing deliverables
--- Should be specified in the test plan before the actual testing begins. Deliverables could include a list of the test cases that were used, detailed results of testing, a test summary report, a test log, and data about the code coverage.
76
Schedule
Specifies the amount of time and effort to be spent on the different testing activities and on testing the different units that have been identified.
77
Personnel allocation --identifies the persons
responsible for performing the different
activities.
78
Test case specifications
Test case specification is a major activity in the testing process. It is to be done separately for each unit, based on the approach specified in the test plan. The test case specification gives, for each unit to be tested, all the test cases, the inputs to be used in the test cases, the conditions being tested by each test case, and the outputs expected for those test cases.
79
Careful selection of test cases that satisfy the
criterion and approach specified is essential for
proper testing.
80
The test case specification is in the form of a
document. The document is reviewed using a
formal review process to make sure that the test
cases are consistent with the policy specified in
the plan, satisfy the chosen criterion, and cover
the various aspects of the unit to be tested.
81
Test case execution and analysis
The steps to be performed to execute the test cases are specified in a separate document called the test procedure specification.
82
Various outputs needed to evaluate whether the testing has been satisfactory are produced as a result of test case execution for the unit under test. The most common outputs are the test log, the test summary report, and the error report.
83
The test log describes the details of the testing. The test summary report is meant for project management; it provides a summary of the entire test case execution. The summary gives the total number of test cases executed, the number and nature of errors found, and a summary of any metrics data (e.g., effort) collected.
84
The error report gives the summary of all the
errors found. The errors might also be
categorized into different levels. This
information can also be obtained from the test
log, but it is usually given as a separate
document.
85
Testing and coding are the two phases that require careful monitoring, as these phases involve the maximum number of people. Testing effort is the total effort spent by the project team on testing activities and is an excellent indicator of whether or not testing is sufficient. The total testing effort should be about 40% of the total effort for developing the software.
86
Computer time consumed during testing is another measure that can give valuable information to project management. Maximum computer time is consumed during the latter part of coding and during testing.
87
Error tracing is an activity that does not directly affect the testing of the current project, but it has many long-term control benefits. By error tracing we mean that when a fault is detected, it should be studied and traced back in the development cycle to determine where it was introduced.
88
Metrics and reliability estimation
Look at the overall productivity achieved by the programmers during the project. Productivity data can be used to manage resources and to reduce cost by increasing productivity in the future. One common method of measuring productivity is LOC or function points per programmer-month.
89
This measure can be obtained easily from the data
about the total programmer months spent on the
project and size of the project. Productivity by
this measure depends considerably on the source
language. This productivity measure cannot
handle reuse of code properly.
90
Another process metric is defect removal efficiency. The defect removal efficiency of a defect removal process is defined as the percentage reduction of the defects that are present before the initiation of the process.
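A small worked sketch of this definition (the numbers are purely illustrative): if a defect removal process, such as a review or a testing phase, starts with some number of latent defects and removes a fraction of them, its defect removal efficiency is the percentage removed.

```python
# Hypothetical numbers: defects present before the process and defects it removed.
defects_before = 50
defects_removed = 40

defect_removal_efficiency = 100.0 * defects_removed / defects_before
print(f"DRE = {defect_removal_efficiency:.0f}%")   # 80% of the latent defects were removed
```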
91
Another metric that is frequently used is defects per thousand lines of code or defects per function point. This is a rough measure of the reliability of the software, as the defect density directly impacts reliability.
92
The reliability of software often depends considerably on the quality of testing; hence, by assessing reliability we can also judge the quality of testing. Reliability estimation can be used to decide whether enough testing has been done, so it has a direct role in project management. Reliability models are used by the project manager to decide when to stop testing.