Tutorial 2, Q1 Answers - PowerPoint PPT Presentation

Transcript and Presenter's Notes

1
Tutorial 2, Q1 Answers
  • Consider the following simple program (in no
    particular language) to find the smallest prime
    factor of a number:
  • smallest(int p) /* precondition p > 1 */
  • int q := 2
  • while (p mod q > 0 AND q < sqrt(p))
  • q := q + 1
  • if (p mod q == 0)
  • then
  • print(q, "is the smallest factor")
  • else
  • print(p, "is prime")
  • Draw the control flowgraph for this program and (if it
    helps) the structural tree-form of this program.
  • Calculate:
  • number of nodes
  • number of edges
  • number of simple paths
  • all paths with k = 2
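The program above can be checked by running it. Here is a direct Python transcription (a sketch: it returns the message instead of printing it, so the outputs are easy to compare against the expected test outputs later in the tutorial):

```python
import math

def smallest(p):
    """Smallest prime factor of p (precondition: p > 1),
    transcribed directly from the pseudocode on the slide."""
    assert p > 1
    q = 2
    while p % q > 0 and q < math.sqrt(p):
        q = q + 1
    if p % q == 0:
        return f"{q} is the smallest factor"
    else:
        return f"{p} is prime"

print(smallest(9))   # 3 is the smallest factor
print(smallest(11))  # 11 is prime
```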

2
Tree, CFG, and Program

smallest(int p) /* p > 1 */
int q := 2
while (p mod q > 0 AND q < sqrt(p)) do q := q + 1
if (p mod q == 0) then print(q, "is factor")
else print(p, "is prime")

[Diagram: control flowgraph with nodes numbered 1-8, and the
structure tree S(A1, S(A2, S(L3(A4), C5(A6, A7)))), where the
subscripts give the corresponding CFG node numbers.]
3
  • Number of Nodes
  • FA() = 2
  • FS(m1, m2) = m1 + m2 - 1
  • FC(m1, m2) = m1 + m2
  • FL(m1) = m1 + 1

[Tree annotated bottom-up with node counts: each A gives 2,
the loop gives L3, the conditional C4, then S6, S7 and S8 at
the root, so the CFG has 8 nodes.]
4
  • Number of Edges
  • FA() = 1
  • FS(m1, m2) = m1 + m2
  • FC(m1, m2) = m1 + m2 + 2
  • FL(m1) = m1 + 2

[Tree annotated bottom-up with edge counts: each A gives 1,
the loop gives L3, the conditional C4, then S7, S8 and S9 at
the root, so the CFG has 9 edges.]
5
  • Number of simple paths
  • A simple path is one in which no edge is
    traversed more than once
  • nsp(A) = 1
  • nsp(S(P1, P2)) = nsp(P1) × nsp(P2)
  • nsp(C(P1, P2)) = nsp(P1) + nsp(P2)
  • nsp(L(P1)) = nsp(P1) + 1

[Tree annotated bottom-up with simple-path counts: each A
gives 1, the loop gives L2, the conditional C2, and S4 at the
root, so the program has 4 simple paths.]
6
  • Number of All Paths (k = 2)
  • ap(A) = 1
  • ap(S(P1, P2)) = ap(P1) × ap(P2)
  • ap(C(P1, P2)) = ap(P1) + ap(P2)
  • ap(L(P1)) = ap(P1)² + ap(P1) + 1

[Tree annotated bottom-up with all-path counts: each A gives
1, the loop gives L3, the conditional C2, and S6 at the root,
so there are 6 paths with k = 2.]
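The four slides above apply the same bottom-up scheme with different combinators. As a sketch (the tuple encoding and the `measure` helper are my own), all four values can be checked at once, assuming the structure tree S(A, S(A, S(L(A), C(A, A)))) read off slide 2:

```python
def measure(t, rules):
    """Fold a structure tree bottom-up using one rule per constructor."""
    kind, kids = t[0], [measure(k, rules) for k in t[1:]]
    return rules[kind](*kids)

# Rules transcribed from slides 3-6.
nodes = dict(A=lambda: 2, S=lambda m1, m2: m1 + m2 - 1,
             C=lambda m1, m2: m1 + m2,     L=lambda m1: m1 + 1)
edges = dict(A=lambda: 1, S=lambda m1, m2: m1 + m2,
             C=lambda m1, m2: m1 + m2 + 2, L=lambda m1: m1 + 2)
nsp   = dict(A=lambda: 1, S=lambda m1, m2: m1 * m2,
             C=lambda m1, m2: m1 + m2,     L=lambda m1: m1 + 1)
ap    = dict(A=lambda: 1, S=lambda m1, m2: m1 * m2,
             C=lambda m1, m2: m1 + m2,     L=lambda m1: m1 * m1 + m1 + 1)

A = ('A',)
tree = ('S', A, ('S', A, ('S', ('L', A), ('C', A, A))))
print(measure(tree, nodes), measure(tree, edges),
      measure(tree, nsp), measure(tree, ap))  # 8 9 4 6
```

The printed values match the answers on the slides: 8 nodes, 9 edges, 4 simple paths, and 6 paths with k = 2.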
7
Tutorial 2, Q2
  • Consider the following simple program (in no
    particular language) to find the smallest prime
    factor of a number:
  • smallest(int p) /* precondition p > 1 */
  • int q := 2
  • while (p mod q > 0 AND q < sqrt(p))
  • q := q + 1
  • if (p mod q == 0)
  • then
  • print(q, "is the smallest factor")
  • else
  • print(p, "is prime")
  • Draw the control flowgraph for this program and
    annotate the nodes to indicate the
    definitions/uses of variables p and q.
  • Label the nodes and give all DU paths for p and
    q.
  • Which of these paths are required for AU, APUC,
    ACUP, APU, ACU, and AD coverage?
  • Give a test suite which gives 100% coverage for
    all DU paths.
  • Explain whether or not the "all predicate uses"
    test set is sufficient for branch coverage of
    this program.

8
All DU paths
For p: 123, 12343, 1235, 123435, 12357, 1234357
For q: 23, 234, 235, 2356, 43, 434, 435, 4356

smallest(int p) /* p > 1 */
int q := 2
while (p mod q > 0 AND q < sqrt(p)) do q := q + 1
if (p mod q == 0) then print(q, "is factor")
else print(p, "is prime")

[CFG with nodes numbered 1-8, annotated with definitions and
uses: p is defined at node 1 and used at nodes 3, 5 and 7;
q is defined at nodes 2 and 4 and used at nodes 3, 4, 5
and 6.]
9
Required paths
p: 123, 12343, 1235, 123435, 12357, 1234357
q: 23, 234, 235, 2356, 43, 434, 435, 4356
Actual paths: 123578, 12343578, 123568, 1234343578, 12343568
Test inputs: p=3, p=5, p=2, p=11, p=9
Test outputs: "3 is prime", "5 is prime", "2 is factor",
"11 is prime", "3 is factor"
[A (badly drawn) wiggly line on the slide indicates paths
which subsume other paths]
10
Other coverages

So to cover all du paths we need to have 5 test
cases: p=3, p=5, p=2, p=9 and p=11.

Now, to cover AU, we need only execute a single du
path between each pair (d, u) that starts at a
definition d and ends at a use u. However, we need
to ensure that we have done this for both variables
in the program.

Let's consider the variable p first. Of the du
paths, we have 123 and 12343, but both of these
exercise the same pair (1, 3), so just 123 will do.
Also, we don't need both 1235 and 123435; just 1235
would do. Finally, we don't need both 12357 and
1234357; just 12357 will do. In conclusion, we see
that we only need 12357 to cover all uses of the
variable p. There is only one definition of p (at
node 1), and the case which passes through 12357
hits all three uses of p, at nodes 3, 5 and 7.
Therefore, to cover all uses of the variable p, we
need only execute 123578 (p=3).

Notice how all du paths requires us (in this case)
to go through the loop, whereas (for the variable
p) AU does not. That is because there was a choice
of du paths for each du pair, where one went
through the loop and one did not. However, AU will
force us to go through loops if the only du path
from a definition to a use goes through a loop.
This is the case with the variable q in this
example, as we shall see.

Let's look at the AU criterion for the variable q.
To understand the difference between all-du-paths
coverage and AU coverage, we need to be clear in
our minds about the distinction between a du path
and a du pair. The start and end nodes of a du path
are a definition and a use, respectively. The
definition and use, together, form a du pair. We
notice that the 8 du paths identified for q all
have different du pairs. That is, there are 8
distinct du pairs for q, each of which corresponds
to one of the du paths. This means that covering
all uses of q will be the same as covering all du
paths for q. Therefore we need the same three test
inputs to achieve this.
11
For APU, we do not need to cover the computational
uses of variables, but we must cover the predicate
uses. Therefore, we could choose the same three
test cases that we chose for AU, but this would be
wasteful: we can manage with only two test cases
for APU. I will show you how. Notice that, for
variable q, we still need to cover the pair (2, 5)
without passing through node 4, and also the pair
(4, 5), so we must have at least two test cases.
However, because we do not need to also cover node
6 (the computational use of the variable q), we can
use the path 123578 to cover the pair (2, 5) for
the variable q. This case, 123578, will also cover
all predicate uses of the variable p. So now we
need only add a case for the pairs (4, 3) and
(4, 5). Either of 12343568 (p=9) or 12343578 (p=5)
would do.

For ACU, we need to cover the computational use of
the variable p. In fact, every test case that goes
through node 7 does this. However, we also need to
cover all computational uses of the variable q. To
do this we need to go through node 6. Clearly this
cannot re-use the test case for the variable p,
since that has to go through node 7, and we cannot
go through both node 7 and node 6 in the same test
case. Now, for variable q, there are two
definitions that reach the computational use at
node 6: the definitions at nodes 2 and 4.

The last criterion is easy. For AD, we need only
hit all definitions of all variables that reach
some use (of any kind, either predicate or
computational). These occur at nodes 1, 2 and 4, so
any test case that goes through all three nodes
will do. We could choose 12343578 (p=5), for
example.
12
APU versus Branch Coverage
All predicate uses (as defined in the lecture) is
not sufficient for branch coverage, as it only
requires each predicate to be tested in one
direction, whereas branch coverage requires each
predicate to be taken in both directions. For this
program, as there are no predicates inside the body
of the loop, 100% APU coverage is possible without
entering the loop, whilst branch coverage requires
the loop body to be executed.
13
Coursework 1
  • Due (meaningless)
  • Comprises two past exam questions
  • BUT NOTE OLD STYLE EXAM
  • Each question was expected to take 40 minutes in
    exam conditions
  • This was the coursework last year
  • I am making it available to you so you can
    practice (if you want to)

14
Coursework Question 1
  • For a simple language with assignments (A),
    binary conditionals (C), loops (L), and
    sequencing (S), a measure M for "lines of
    code" can be defined hierarchically as follows
    (where p1 and p2 are sections of the program):
  • M(A) = 1
  • M(C(p1, p2)) = M(p1) + M(p2) + 1
  • M(L(p1)) = M(p1) + 1
  • M(S(p1, p2)) = M(p1) + M(p2)

An example program in this language is

while (x > 1) do
  if (x mod 2 == 0) then x := x/2 else x := x+2 endif
  if (x mod 3 == 0) then x := x/3 else x := x+3 endif
enddo
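Reading the example as a loop whose body is a sequence of two conditionals, each with an assignment in both branches, i.e. L(S(C(A, A), C(A, A))), the measure M can be computed with a small recursive function (a sketch; the tuple encoding is my own, and the reading of the program's structure is my assumption, not stated on the slide):

```python
def M(t):
    """Lines-of-code measure over a structure tree,
    following the hierarchical definition on the slide."""
    kind = t[0]
    if kind == 'A':
        return 1
    if kind == 'C':
        return M(t[1]) + M(t[2]) + 1
    if kind == 'L':
        return M(t[1]) + 1
    if kind == 'S':
        return M(t[1]) + M(t[2])

A = ('A',)
# Assumed structure of the example: a loop around a sequence
# of two conditionals, each conditional holding two assignments.
program = ('L', ('S', ('C', A, A), ('C', A, A)))
print(M(program))  # 7
```

Note this agrees with part (c) of the question, which talks about a program of 7 lines of code.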
15
Coursework 1, Question 1 continued
  • Test data for this simple program is an initial
    value for x.
  • Give test data to achieve 100% statement
    coverage.
  • Give test data to achieve 100% branch coverage.
  • Make these two sets of test data as small as
    possible.
  • (a) Give the structure as a tree of the given
    program fragment and draw the control flowgraph
    for it. Use the above definition to calculate
    (showing working) the number of lines of code.
  • (b) The "visit each loop" strategy executes all
    combinations of possible outcomes of decisions at
    conditionals and loops. Loops are not required to
    be executed more than once.
  • Define a measure for the maximum number of tests
    required by this strategy and calculate its value
    (again showing working) for the given program
    fragment.
  • (c) For the visit each loop strategy, what is the
    largest number of tests which could be required
    for 100% coverage of a program of 7 lines of
    code?
  • Give a formula in n, the number of lines of
    code, which expresses the maximum number of tests
    which could be required for 100% coverage with
    this strategy.

16
Coursework 1, Question 2
  • Consider the following piece of pseudo-code
  • 1 n := in
  • 2 while (n > 1)
  • do
  • 3 if (n mod 2 == 0)
  • then
  • 4 n := n/2
  • else
  • 5 n := 3n+1
  • endif
  • enddo
  • 6 out := n
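This is the well-known 3n+1 iteration. A runnable Python transcription that also records the path of node numbers can help with the path questions below (a sketch: treating the `if` as node 3, consistent with the numbering 1, 2, 4, 5, 6 given above, is my assumption):

```python
def collatz(n):
    """The pseudo-code above as runnable Python; returns the
    final value of n and the path of node numbers taken."""
    path = [1]              # 1: n := in
    while True:
        path.append(2)      # 2: while (n > 1)
        if not n > 1:
            break
        path.append(3)      # 3: if (n mod 2 == 0)
        if n % 2 == 0:
            path.append(4)  # 4: n := n/2
            n = n // 2
        else:
            path.append(5)  # 5: n := 3n+1
            n = 3 * n + 1
    path.append(6)          # 6: out := n
    return n, path

n, path = collatz(5)        # 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(n, path)
```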

17
Coursework 1, Question 2 continued
  • Draw the Control Flowgraph for this code and
    annotate the nodes to indicate the definitions
    and uses of the variable n.
  • Give all the du pairs for n, and for each pair
    give a d-clear, simple path fragment which
    exercises it.
  • The single input, n = 5, exercises many aspects
    of this program. Give the path taken for this
    input value and give the test case for this path.
    Define the term test coverage. What coverage of
    the above path fragments is achieved by the
    single test?
  • Give some further test cases which maximise the
    du-path coverage and explain why it is impossible
    to achieve 100% coverage with the all du-paths
    strategy for this example.
  • The four parts carry, respectively, 25, 25, 30
    and 20 marks.

18
Evolutionary testing
  • These slides are based on work at DaimlerChrysler
    AG Berlin, who have the state-of-the-art
    evolutionary test data generation system
  • My thanks are due to Dr. Joachim Wegener of
    DaimlerChrysler for permission to use these
    slides in CS3SMT

19
Theory of Evolution
  • Hands up who has heard of Evolution?
  • Who was its inventor?
  • Charles Darwin
  • Who discovered DNA?
  • Why is this theory so important?
  • Explains so much
  • Applies so widely
  • even to software testing!

20
This time in 2004
Also a front-page story for The Times, The
Independent and The Telegraph
21
Introduction and Motivation
22
Evolutionary Algorithms
  • Iterative optimization method which is based on
    the processes of natural genetics, the theory of
    evolution, and Darwin's "survival of the fittest"
    principle.
  • Population based approach (maintains a set of
    potential problem solutions)
  • Iterative approach (in each iteration a new
    population of individuals is generated and
    evaluated).
  • From the current population, new populations
    are generated via
  • selection,
  • recombination,
  • mutation,
  • fitness assignment, and
  • reinsertion of offspring
  • until
  • an optimal solution has been found or
  • a predetermined termination criterion is met.

[Flowchart: Initialization → Fitness Assignment →
Termination? (T → Result; F → Selection → Recombination →
Mutation → Fitness Assignment → Reinsertion → back to
Termination?)]
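The loop described above can be sketched in a few lines of Python. Everything here (the toy objective of matching a target vector, the population size, the operator choices) is illustrative only, not the DaimlerChrysler system:

```python
import random

random.seed(0)
TARGET = [19, 65, 30, 99, 44]          # toy objective: match this vector

def fitness(ind):
    # Higher is better; 0 means the target vector has been matched.
    return -sum(abs(a - b) for a, b in zip(ind, TARGET))

def tournament(pop):
    # Tournament selection of size 2.
    return max(random.sample(pop, 2), key=fitness)

def recombine(p1, p2):
    # Discrete recombination (uniform crossover).
    return [random.choice(pair) for pair in zip(p1, p2)]

def mutate(ind, rate=0.1):
    # Random-reset mutation on integer genes.
    return [random.randint(0, 99) if random.random() < rate else g
            for g in ind]

# Initialization: random population of N = 30 individuals.
pop = [[random.randint(0, 99) for _ in range(5)] for _ in range(30)]

for _ in range(300):                   # termination: optimum or budget
    if max(map(fitness, pop)) == 0:
        break
    # Selection -> recombination -> mutation -> reinsertion.
    pop = [mutate(recombine(tournament(pop), tournament(pop)))
           for _ in range(30)]

best = max(pop, key=fitness)
print(best, fitness(best))
```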
23
Evolutionary Algorithms
  • Fitness assignment (assess performance of
    individuals)
  • Rank-based (fitness values depend only on an
    individual's rank),
  • Proportional (fitness values proportional to
    objective values)
  • Selection (selecting parent individuals for
    recombination)
  • roulette-wheel selection,
  • tournament and truncation selection

Recombination (recombining parent individuals to
generate (hopefully better) offspring)
24
Evolutionary Algorithms
  • Fitness assignment (assess performance of
    individuals)
  • Rank-based (fitness values depend only on an
    individual's rank),
  • Proportional (fitness values proportional to
    objective values)
  • Selection (selecting parent individuals for
    recombination)
  • roulette-wheel selection,
  • tournament selection

Discrete recombination(uniform-crossover)
Intermediary recombination
  • Recombination (recombining parent individuals to
    generate (hopefully better) offspring)
  • discrete recombination and various
    crossover-variants,
  • intermediary recombination (not for binary
    encoded individuals)

25
Evolutionary Algorithms
Evolutionary Operators
  • Mutation (random change of individuals)
  • real or integer variables,
  • binary variables,
  • discrete variables (exchange mutation)
  • Reinsertion of offspring (replacing parent
    individuals by offspring)
  • elitist or random reinsertion (population size >
    number of offspring),
  • simple reinsertion (population size = number of
    offspring),
  • with a selection of offspring (population size <
    number of offspring)
  • Additional mechanisms
  • division of population into sub-populations,
  • migration between sub-populations,
  • competition between sub-populations.
  • Essential mutation parameters
  • Mutation rate: the probability that
    variables/bits will mutate,
  • Mutation step size: the value domain of the
    mutation

26
Evolutionary Testing
Application of Evolutionary Algorithms to Test
Case Design
27
Evolutionary Testing
Average distribution of software development
costs for embedded systems
[Pie chart: roughly 50% of costs go to system and
acceptance testing plus module and integration testing;
the rest to other development activities.]
Typical scale: up to 60 communicating embedded controllers,
up to 1,000,000 lines of software code, various buses,
hundreds of messages
  • Testing is too resource intensive
  • high costs
  • hence: test automation
28
Evolutionary Testing
Test Planning
Test Case Design
Test Execution
Monitoring
Test Evaluation
29
Evolutionary Testing
Test Planning
Test Case Design
Test Execution
Monitoring
Test Evaluation
30
Evolutionary Testing
Test Planning
Test Case Design by Means of EA
Test Execution
Monitoring
Test Evaluation
31
Evolutionary Testing
[Animated diagram, built up over slides 31-36, showing the
EA loop applied to test case design. An initial population
of test-data vectors is generated at random, e.g.
1: 19 65 30 99 44
2:  4 13 22 17 56
3: 29 48 23 49 78
4: 89 34 59 39 90
...
N: 23 62 69 43 67
The individuals are interpreted as test data and executed
against the test object; monitoring of the execution yields
a fitness value per individual (e.g. 1: 0.51, 2: 0.75,
3: 0.20, 4: 0.21, ..., N: 0.33). Selection, recombination
and mutation then produce offspring (e.g. 7: 29 48 59 49 90,
8: 19 34 23 99 78, ..., N: 23 45 69 43 81), which are
reinserted into the population. The loop repeats until the
termination criterion is met, yielding the test results.]
37
Evolutionary Testing
[The same EA loop diagram.]
Essential: definition of a suitable fitness function
for the test objective
38
Evolutionary Testing
  • automatic test case generation
  • may be used as an independent test method
    specialized in testing non-functional properties

Functional Testing
Structural Testing
Statistical Testing
Mutation Testing
Evolutionary Testing
  • test objective has to be defined numerically and
    transformed into a fitness function
  • test object's input domain forms the search space,
  • fitness assessment for generated individuals is
    based on monitoring test data execution
  • iterative procedure, combining good test data to
    achieve better test data
Safety Testing
Robustness Testing
Temporal Behavior Testing
39
Evolutionary Testing
Different test objectives require different
fitness functions
  • Safety testing → search for test datum
    violating system safety constraints
  • Robustness testing → search for test datum
    stressing fault-tolerance
  • Temporal testing → search for test datum with
    extreme execution time
  • Functional testing → search for test datum
    causing logical error
  • Structural testing → search for test datum
    executing particular construct
  • Mutation testing → search for test datum which
    detects the injected fault
  • usually the search is performed to find one test
    datum
  • however, a single search might yield more than
    one test datum
  • the search process is typically repeated to
    obtain a set of test data

40
Evolutionary Testing
Test Planning
Test Case Design
Test Execution
Monitoring
Test Evaluation
41
Evolutionary Testing
Test Planning
Test Case Design by Means of EA
Test Execution
Monitoring
Test Evaluation
42
Evolutionary Testing
Characteristics
  • input parameters of system under test form the
    variables to be generated by EC
  • system under test is executed, fitness values
    based on monitoring execution
  • complex search spaces (multi-modal, jumps,
    plateaus, multi-dimension)

[Plot: an example search space with several optima
(cost 3, cost 1, cost 3).]
43
Evolutionary Testing - Applications
  • Aim
  • For safety-critical systems, safety constraints
    are specified which should not be violated. If
    test data results in a violation of the safety
    constraints, an error has been found
  • Idea
  • Generate test data in order to violate the safety
    constraints
  • Fitness function defined as the distance from
    violating the safety condition

[Plot: fitness of generated test data against speed, for the
safety condition speed ≤ 150 mph; k is the smallest step
size. If F = 0, the test is successful: the safety condition
has been violated.]
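A minimal sketch of the distance-style fitness suggested by the slide, assuming the safety condition is speed ≤ 150 mph and that k is the smallest step size of the speed input (the function name and the constant K are illustrative):

```python
K = 1  # k: smallest step size of the speed input (assumed)

def safety_fitness(speed, limit=150):
    """Distance from violating `speed <= limit`.
    Positive while the safety condition holds; 0 once it is
    violated, i.e. once an error-revealing test datum is found."""
    return (limit - speed) + K if speed <= limit else 0

print(safety_fitness(100))  # 51: far from a violation
print(safety_fitness(151))  # 0: safety condition violated
```

The search then minimises this fitness; the offset K ensures the value only reaches 0 when the constraint is actually violated, not when it is merely met with equality.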
44
Evolutionary Testing - Applications
  • Aim
  • Temporal behavior of real-time systems is
    erroneous when input situations exist for which
    the computation violates the specified timing
    constraints
  • Idea
  • Find test data with the longest and shortest
    execution times to check whether they cause a
    temporal error
  • Fitness values for individuals based on the
    execution times of the corresponding test data

[Plot: execution times between lower and upper bounds,
compared against the specified time limit.]
45
Evolutionary Testing - Applications
  • Most embedded systems have to achieve timing
    constraints (technical processes to be
    controlled, operational comfort)

[Plot: deceleration curve during a crash, marking the
optimal point in time for triggering the airbag igniter.]
46
Evolutionary Testing - Applications
Results
Variation between ET and RT results when searching
the longest and shortest execution times for
various examples (in %)
  • for all test objects (except Motor VI), ET
    results are superior to RT
  • for several test objects, variances > 50%

Directed ET search is considerably more effective
than RT
47
Evolutionary Testing - Applications
Detailed Analysis of Selected Results
Comparison of test runs for evolutionary testing
and random testing when searching the longest
execution time for railroad electronics example
48
Evolutionary Testing - Applications
ET compared to Functional and Structural Testing
Comparing the longest execution times found by
evolutionary testing (ET), functional and
structural testing (FST), as well as random
testing (RT) for engine control tasks (execution
times in µs)

[Bar chart of execution times per task, with values ranging
from 45.2 to 120.8 µs; the FST result in each case is taken
as 100%.]
49
Evolutionary Testing - Applications
  • Aim
  • Generate set of test data covering structural
    test criteria automatically in order to reduce
    testing effort
  • Fully automated testing of logical behavior if
    test oracle exists, otherwise test evaluation has
    to be performed by tester
  • Idea
  • Coverage oriented approach
  • Test data (individuals) that cover many nodes of
    code receive high fitness values
  • Distance oriented approach
  • Test is partitioned into single sub-goals
  • Separate fitness function for each sub-goal
    measures distance from fulfilling branch
    predicates in the desired way
  • Approaches
  • Coverage oriented: the original proposal
  • Distance oriented: more recent work uses this
    approach

50
Evolutionary Testing - Applications
  • Fitness of individual for statement and branch
    coverage
  • based on the number of statements or branches
    covered by corresponding test datum (Roper,
    Weichselbaum)
  • based on the number of control-dependence graph
    nodes covered by test datum (Pargas et al.)
  • Fitness of individual for path coverage
  • 1/overall_execution_frequency_of_path (Watkins)

[Example: number of covered branches. Individual 1: F = 3;
individual 2: F = 5.]
  • Results
  • promising results, better performance than random
    testing

51
Evolutionary Testing - Applications
Approximation Level
  • Identify relevant branching statements for the
    target node on the basis of the control-flow
    graph
  • Relevant branching statements can lead to a miss
    of the desired target
  • In this sense, the approximation level
    corresponds to the distance from the target

[Diagram: CFG annotated with approximation levels 4 down to
1 along the path to the target node.]
52
Evolutionary Testing - Applications
Some results achieved with the distance oriented
approach (Wegener, Baresel, Sthamer)
[Table of results omitted; equal number of generated test
data in each case.]
53
Conclusion
  • Evolutionary Testing is a promising approach for
    the automation of test case design - contributing
    to better product quality and more efficient
    development
  • Evolutionary Testing is based upon transformation
    of test aim into an optimization problem,
    subsequently solved with the assistance of
    evolutionary computation
  • Evolutionary Testing is employed by various
    researchers to solve different test objectives.
    Very good results were consistently attained
  • May be utilized as an independent test method for
    certain test objectives
  • Can also be employed for the automation of
    traditional test methods

54
Evolutionary Testing
  • SEMINAL: Software Engineering using
    Metaheuristic INnovative Algorithms
  • http://www.discbrunel.org.uk/seminal
  • Evolutionary Testing
  • University of York (Nigel Tracey, John Clark,
    ...) http://www.cs.york.ac.uk/testsig/publications
  • Reliable Software Technologies/Cigital (Christoph
    Michael, Gary McGraw, ...)
    http://www.cigital.com/papers
  • DaimlerChrysler (Harmen Sthamer, Andre Baresel,
    Joachim Wegener, ...)
    http://www.systematic-testing.com
  • Genetic and Evolutionary Algorithms Toolbox
  • http://www.geatbx.com

55
Evolutionary Testing
  • Mark Harman, Bryan Jones.
  • Search Based Software Engineering.
  • Journal of Information and Software Technology,
    43(14):833-839, 2001.

56
Evolutionary Testing
  • Phil McMinn's excellent survey paper is
    available from the course website.
  • Here is the reference in case you ever want to
    cite it:
  • Phil McMinn. Search-based Software Test Data
    Generation: A Survey. Software Testing,
    Verification and Reliability (STVR), Volume 14,
    Number 2 (June 2004), pp. 105-156.
  • ISSN 0960-0833.
  • But note that it is on the course website as a
    PDF document (thanks Phil).

57
Here are some more references
  • Please don't try to read all of these
  • I just provide a selection
  • Choose one or two if you like the sound of them
  • The survey by Phil McMinn should be a good source
    of information
  • Read that first, and maybe only that.

58
Evolutionary Testing
Jones, B., Eyres, D., and Sthamer, H. A Strategy
for using Genetic Algorithms to Automate Branch
and Fault-based Testing. Computer Journal, vol.
41, no. 2, pp. 98-107 (1998).
Tracey, N., Clark, J., Mander, K., and McDermid, J.
An Automated Framework for Structural Test-Data
Generation. Proceedings of the 13th IEEE Conference
on Automated Software Engineering, Hawaii, USA
(1998).
Pargas, R., Harrold, M., and Peck, R. Test data
generation using genetic algorithms. Software
Testing, Verification and Reliability, vol. 9,
no. 4, pp. 263-282 (1999).
Wegener, J., Baresel, A., and Sthamer, H.
Evolutionary Test Environment for Automatic
Structural Testing. Information and Software
Technology, Special Issue devoted to the
Application of Metaheuristic Algorithms to Problems
in Software Engineering, vol. 43, pp. 841-854
(2001).
Michael, C., McGraw, G., and Schatz, M. Generating
Software Test Data by Evolution. IEEE Transactions
on Software Engineering, vol. 27, no. 12, pp.
1085-1110 (2001).
59
Evolutionary Testing
Gross, H.-G., Jones, B., and Eyres, D. Structural
performance measure of evolutionary testing applied
to worst-case timing of real-time systems. IEE
Proceedings: Software, vol. 147, no. 2, pp. 25-30
(2000).
Tracey, N., Clark, J., McDermid, J., and Mander, K.
A Search Based Automated Test-Data Generation
Framework for High-Integrity Systems. Journal of
Software: Practice and Experience, January (2000).
Wegener, J. and Mueller, F. A Comparison of Static
Analysis and Evolutionary Testing for the
Verification of Timing Constraints. Real-Time
Systems, vol. 21, no. 3, pp. 241-268 (2001).
60
Evolutionary Testing
Schultz, A., Grefenstette, J., and De Jong, K. Test
and Evaluation by Genetic Algorithms. IEEE Expert,
vol. 8, no. 5, pp. 9-14 (1993).
61
Tutorial
  • No tutorial this week; try the sample coursework
  • This coursework is NOT ASSESSED
  • But it is good practice.