Title: The Nature of Testing and Analysis
1 The Nature of Testing and Analysis
- Does the software do what it is supposed to do?
- What is the nature of the artifact(s) that have been built?
- What can I count on?
- What should I worry about?
- What are its capabilities and its strengths?
3 [Diagram: development artifacts and their relationships - Requirements Spec., Hi Level and Low Level Design, Code, and Test Plan, with consistency relationships among them]
4 Basic Notions and Definitions
- Consistency determination is fundamental
- Qualities are types of requirements
- Specific requirements are statements of intent
- Product "has" these qualities if its behavior is consistent with (satisfies) statements of intent
- Basic Definitions
  - failure: inconsistency between actual behavior of software and specification of intent
  - fault: software flaw whose execution caused the failure
  - error: human action that results in software containing a fault
5 Quality/Reliability Improvement Approaches
- Fault avoidance: software development techniques that reduce the incidence of faults (e.g., design principles and methods, formal specification, prototyping)
- Fault elimination: software analysis techniques that detect and remove faults (e.g., reviews and inspections, static analysis, testing, formal verification)
- Fault prediction: software analysis techniques that predict the occurrence of faults and direct further efforts (e.g., reliability assessment, metrics)
- Fault tolerance: software execution techniques that detect and correct errors before a failure occurs (e.g., self-checking assertions, recovery blocks, N-version programming)
6 VERIFICATION vs. VALIDATION
[Diagram: Validation compares Formal Specifications against Informal Requirements; Verification compares the Software Implementation against Formal Specifications]
7 V&V must permeate all development activities
- Verification and validation should occur at each phase
  - requirements validated against user needs
  - requirements shown internally consistent
- For each phase
  - validate the current phase against user needs
  - use information from the previous phase to verify the current phase
- Test plans should begin with requirements and be reviewed and refined with each phase
  - test plans should be executed as early as possible to further facilitate early error detection
8 [Diagram: the artifact relationships from slide 3, annotated]
- Requirements Spec.: characteristics of the system to be built must match required characteristics; the views must be consistent
- Hi Level design must show HOW the requirements can be met
- Low Level design and Code: the code must implement the design
- Test Plan: the test plan exercises this code; test results must match required behavior
9 Code-related artifacts
[Diagram: Source and Object code artifacts - parse trees, executables, instrumented source - with relations to low level design and to test cases]
10 [Diagram: Design annotated with requirement types - functional, speed, accuracy, and safety requirements]
11 [Diagram: Requirements (Functional, Safety, Robustness, Accuracy, Performance) alongside a Test Plan (Inputs, Setup, Outputs, Timing, Knockdown)]
12 [Diagram: same as slide 11, annotated - test input/output behavior must match the functional requirements; the timing limit must meet the performance requirement]
13 [Diagram: same as slide 11, annotated - these are the specific timing (accuracy, speed, ...) requirements; the timing limit must meet the performance requirement]
14 [Diagram: Requirements Spec. linked to Test Plan - these are the test cases that are used to test for satisfaction of this requirement]
15 V&V Phases
- Unit/Module
  - Comparing a code unit or module with design specifications
  - planned during coding; done after coding
- Integration
  - Systematic combination of software components and modules to ensure consistency of component interfaces and adherence to design specification
  - planned during design; done after unit/module V&V
- Software System
  - Comparing an integrated software system with software system requirements
  - planned during requirements; done after integration V&V
- System
  - Acceptance evaluation of an integrated hardware and software system
  - planned during informal requirements; done after software system V&V
- Regression
  - reevaluation after changes made during evolution
16 DEVELOPMENT PHASES, TEST PLANNING, AND TESTING PHASES
[Diagram: each development phase produces a test plan that drives a later testing phase - Architecting: System Test Plan; Requirements Specification: Software Sys. Test Plan; Implementation Designing: Integration Test Plan; Coding: Unit Test Plan - with the testing phases then run in reverse order: Unit Testing, Integration Testing, Software Sys Testing, System Testing]
17 More Definitions
- Testing: the systematic search of a program's execution space for the occurrence of a failure
- Debugging: searching for the fault that caused an observed failure
- Analysis: the static examination of a program's textual representation for the purpose of inferring characteristics
- Verification: using analytic inferences to formally prove that all executions of a program must be consistent with intent
18 The Basic Approach
COMPARISON of BEHAVIOR to INTENT
- INTENT
  - Originates with requirements
  - Different types of intent (requirements)
  - Each type is relatively better captured with a different set of formalisms
- BEHAVIOR
  - Can be observed as the software executes
  - Can be inferred from an execution model
  - Different models support different sorts of inferences
- COMPARISON
  - Can be informal: done by human eyeballs
  - Can be done by computers (e.g., comparing text strings)
  - Can be done by formal machines (e.g., FSMs)
  - Can be done by rigorous mathematical reasoning
- Results obtained will vary as a function of the above
19 The Framework
[Diagram: the Development (Synthesis) Process yields a Specification of Actual Behavior; together with the Specification of Intended Behavior it feeds the Evaluation (Analysis) Process, which compares behavior to intent (constraint evaluation) and produces Testing/Analysis Results]
20 Testing
- Behavior determined by examining test execution results
- Intent derived (somehow) from (various) specifications
- Comparison is done by textual examination
- Testing must select test cases likely to reveal failures
- Equivalence partitioning is the typical approach (see the sketch after this list)
  - a test of any value in a given class is equivalent to a test of any other value in that class
  - if a test case in a class reveals a failure, then any other test case in that class should reveal the failure
  - some approaches limit conclusions to some chosen class of faults and/or failures
- Two basic types of testing
  - functional testing is based on the functional specification (black box testing)
  - structural testing is based on the software structure (white box testing)
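The partitioning idea can be made concrete with a small sketch. The following Python fragment is not from the original slides; the partition and the sqrt_or_error routine are illustrative. It picks one representative value per equivalence class and tests only those:

    import math

    def sqrt_or_error(x):
        # Hypothetical routine under test: rejects negative inputs.
        return math.sqrt(x) if x >= 0 else None

    # One representative stands in for every value in its class, on the
    # assumption that all values in a class are processed equivalently.
    partitions = {
        "negative (should be rejected)": -5.0,
        "zero (boundary)": 0.0,
        "positive (normal case)": 42.0,
    }

    for name, representative in partitions.items():
        print(name, "->", sqrt_or_error(representative))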
21 Dynamic Testing
[Diagram: Required Outputs (Specification of Intended Behavior) and Test Execution Results (Specification of Actual Behavior) feed a Result Comparator (Human or Machine), which compares behavior to intent and produces Failure Reporting]
22 Testing Aims to Answer
- Does the software do what it is supposed to do?
- When might it fail?
- How fast does it run?
- How accurate are the results?
- What are its failure modes and characteristics?
- What can I count on?
- What should I worry about?
- What are its strengths and weaknesses?
23 Black Box Testing
[Diagram: the Program maps an Input Space to an Output Space]
24 Testing is Sampling the Input Space
- Key problem: what is the input space?
  - What is the software intended to do?
- Subproblem: the input space is large
  - One dimension for each program input
  - Each dimension can have as many elements as there are legal inputs (e.g., 2^32 different integers)
  - Each input really is different
    - How different? Which differences matter?
- Key problem: how to sample from it?
25 What is the input space?
Specification:
sum_of_roots takes an arbitrarily long sequence of real numbers and computes the sum of their square roots. The real number sequence must be ended with the number 9999.99.
Implementation:
    program sum_of_roots
        real sum, x, r
        sum = 0
        do forever
            input x
            if x == 9999.99 then exit
            else
                r = sqrt(x)
                sum = sum + r
        end do
        print sum
    end
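For readers who want to run the example, here is one possible Python rendering of the pseudocode above (a sketch; the function name and sentinel follow the slide). It exhibits the same latent failures discussed two slides ahead: a negative input raises an exception from sqrt, and a long enough input sequence can overflow the sum:

    import math

    def sum_of_roots(values):
        total = 0.0
        for x in values:
            if x == 9999.99:          # sentinel ends the sequence
                break
            total += math.sqrt(x)     # fails (ValueError) for x < 0
        return total

    print(sum_of_roots([1.0, 4.0, 9.0, 9999.99]))   # prints 6.0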
26 Computing the Input Space
- There are 2^32 possible different values for each input
- If n values are read in, then there are (2^32)^n different points in the input space
- The number of different input values read in is unlimited
- There is no limit (theoretically) to the size of the input space
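The arithmetic is easy to check directly; this small illustrative computation assumes 32-bit inputs:

    values_per_input = 2 ** 32
    n = 3                               # suppose three values are read in
    print(values_per_input ** n)        # (2**32)**3 = 2**96 input points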
27 Some observations about the example program input space
- There is no real need to test every possible combination of input values
- Most executions behave the same
- But some input combinations are different
  - Negative values will produce a failure
  - There is a virtually limitless number of inputs that don't cause the negative square root failure
  - A sufficiently large sequence of input values will cause an overflow failure
Effective selection of test cases requires thought and care
28 Testing is too long and hard to do all at once at the end of development
- Divide the job into subtasks
- Do some activities during development
  - Can do test planning during development
  - And should do so
- Phase testing at the end
  - Using test plans previously developed
29 The Testcase Selection Problem
- Testing (especially assertion-based) lets you put your program under a microscope
  - Can examine minutiae
  - But only for the current execution
- To find faults you need to select test data that cause failures
- Testing can demonstrate the presence of faults (when suitable test cases are selected)
  - But demonstrating the absence of faults requires knowing the behaviors of all executions
  - And there are (virtually) infinitely many possible executions
- So how to sample the inputs representatively?
30 Partitioning the Input Space
- Rationale: all points in the same subdomain are processed equivalently by the program
- But
  - How to determine the partition?
  - How to know how far the equivalence holds?
  - How to select the point(s) within each domain to use as the actual input(s)?
- Active research in the 1970s (e.g., White and Cohen)
- The manual and/or requirements specification can help
- Often called Black Box testing
31 Input Space Partitioning
[Diagram: the Program's Input Space Divided into Domains]
32 Black Box vs. White Box Testing
(CLEAR BOX TESTING)
33 Functional vs. Structural Testing
- Cannot determine if software does what it is supposed to do without considering the intent
  - a special case not handled by the implementation will not be tested unless the specification/requirements are considered
- Cannot ignore what is actually done by the software
  - the program may treat one element in a domain differently than stated in the specification
  - the implementation often employs an algorithmic technique with special characteristics that are not highlighted in the specification
- Both functional and structural testing must be done
Should use all available information in testing
34 Functional (Black Box) Testing Guidelines
- These guidelines result in many test cases
- Some test cases may satisfy many heuristics
  - Keep track of the goal of each test case
- Changes in the specification will cause changes in the functional test cases
- Need a way to organize, use, reuse, and monitor functional testing
- NOTE: many of the functional testing guidelines can also be applied in structural testing (at a more detailed and formal level)
35 Structural (White Box) Testing
- Testcase choices driven by program structure
- The flowgraph is the most commonly used structure
  - Represent statements by nodes
  - If a statement can execute immediately after another, connect the nodes representing them by an edge
  - Every program execution sequence is a path
- Criteria based on coverage of program constructs (Howden and Miller in the early 1970s)
  - All statements (node coverage)
  - All control transitions (edge coverage)
  - All possible paths, loop iterations (path, loop coverage)
- How to generate input data to do this?
  - What exact data sets are used to force these coverages?
  - It matters
36 Example Flowgraph
    totalpay = 0.0
    for i = 1 to last_employee
        if salary[i] < 50000.0 then
            salary[i] = salary[i] * 1.05
        else
            salary[i] = salary[i] * 1.10
        totalpay = totalpay + salary[i]
    end loop
    print totalpay
[Flowgraph: one node per statement - the initialization, the loop header, the if test, the two assignment branches, the totalpay update, end loop, and the print - connected by control-flow edges]
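To make node coverage concrete, here is a minimal Python sketch of the payroll example with hand-inserted probes (the probe helper and the node names are illustrative, not part of the slides). Running it on a single low salary shows which node a coverage criterion still requires us to reach:

    visited = set()

    def probe(node):
        # Records that control flow reached this flowgraph node.
        visited.add(node)

    def total_pay(salaries):
        probe("init"); total = 0.0
        for i in range(len(salaries)):
            probe("loop")
            if salaries[i] < 50000.0:
                probe("then"); salaries[i] *= 1.05
            else:
                probe("else"); salaries[i] *= 1.10
            probe("add"); total += salaries[i]
        probe("print")
        return total

    all_nodes = {"init", "loop", "then", "else", "add", "print"}
    total_pay([40000.0])                       # never takes the else branch
    print("uncovered:", all_nodes - visited)   # also need a salary >= 50000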
37 Improvements in Testing (60s, 70s)
- Specification of Intent
  - Expressed explicitly
  - Increasingly completely
    - Functionality, timing, accuracy, robustness, ...
  - Increasingly rigorously
    - Mathematics, FSAs
  - Ideally arises directly from requirements and design specifications
- Comparison
  - With automatic comparators
- Specification of Behavior
  - Tools to capture test outputs (inputs too)
38 Assertion-Based Testing
- Zoom in on the internal workings of the program
  - Examine behaviors at internal program locations while the program is executing
  - Augments examining only final outputs
- Assertions: specifications of intended relations among the values of program variables
- Development of increasingly elaborate assertion languages (e.g., Anna) in the 70s and 80s
- Comparison: runtime evaluation of assertions
  - Facilities for programming reactions to violations
- Also useful as a debugging aid
39
    <code sequence>
    X = Y * Time
    Y = 2.0 * T
    ASSERT Time > 0.0        (inserted by tester)
    <rest of code>

Automatically processed into:
    if not (Time > 0.0) then call Assertion_violation_handler
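A Python sketch of the same transformation (the variable roles and the handler name are assumptions for illustration): the tester's assertion becomes an inline check whose negation invokes a violation handler:

    def assertion_violation_handler(condition_text):
        print("ASSERTION VIOLATED:", condition_text)

    def code_sequence(Y, T, Time):
        X = Y * Time                    # <code sequence>
        Y = 2.0 * T
        if not (Time > 0.0):            # inserted check for ASSERT Time > 0.0
            assertion_violation_handler("Time > 0.0")
        return X                        # <rest of code>

    code_sequence(Y=4.0, T=1.0, Time=-2.0)   # triggers the handler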
40 Assertion-Based Dynamic Testing
[Diagram: Functional Behavior Assertions (Specification of Intended Behavior) and Intermediate Execution Results (Specification of Actual Behavior) feed Runtime Assertion Checking, which compares behavior to intent and produces Reports on Internal Failures]
41 Mutation Testing
- DeMillo et al. developed this approach through the 80s
- Determines the adequacy of sets of testcases
- Theory: differences in programs should manifest themselves as differences (somewhere) in functioning
- Approach (see the sketch after this list)
  - Produce a family of mutants of the original program
  - Use it to test the adequacy of the program's testcase set
  - Run the mutants and the original program over the set
  - Make sure some testcase produces different results
  - If not, make sure the mutant didn't really change the program
  - If it did, then add a new testcase that forces different results
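The adequacy check can be sketched in a few lines of Python (a toy; real mutation systems generate mutants from the source or object code automatically rather than using hand-written variants as here):

    def original(a, b):
        return a + b

    mutants = [
        lambda a, b: a - b,    # mutated operator
        lambda a, b: a + a,    # mutated operand
    ]

    testcases = [(0, 0), (2, 2)]       # is this testcase set adequate?

    for i, mutant in enumerate(mutants):
        killed = any(mutant(a, b) != original(a, b) for a, b in testcases)
        print(f"mutant {i}:", "killed" if killed else "ALIVE -> add a testcase")
    # Mutant 1 survives: every pair here has a == b, so a + a == a + b.
    # Adding a testcase like (2, 3) forces different results and kills it.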
42 Difficulties in Doing Testing Effectively
- Hard to cover the program execution space effectively
- Hard to select test data effectively
- Hard to tell if test results are right or wrong
  - if the program computes complex function(s)
  - if the number of test cases gets large
- Best to detect errors early, before they become faults
- Testing comes at the end of the lifecycle, when time and budget tend to be short
What do you know when testing is done?
43 Common types of erroneous behavior
- Domain fault: the flow of control through a component is specified incorrectly
  - missing path fault: a special case has not been specified
  - path selection fault: a path exists to handle a case but its domain has been specified incorrectly
- Computation fault: the instructions on a path compute the wrong results
44 This classification is based on the effect of a fault, not necessarily on the fault itself
- e.g., a single wrong operator or operand in a predicate such as if (a > d) then ...
- A single fault could cause many erroneous behaviors
45 Exercising an erroneous instruction may not reveal a failure
- Coincidental correctness is when a fault is executed but no failure is revealed
  - output = a * 2 instead of output = a ** 2
  - If a = 2 is the selected test case, then the fault is not revealed (see the demo below)
- Coincidental correctness is very common; if it were not, statement coverage would be an adequate test selection criterion
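The slide's example runs as-is in Python and shows why the chosen test value matters:

    def faulty(a):
        return a * 2            # fault: the specification calls for a ** 2

    for a in (2, 3):
        # a == 2 executes the fault yet reveals no failure (2*2 == 2**2)
        print(a, "consistent with intent:", faulty(a) == a ** 2)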
46 Test Plans
- The problem: how to devise a strategy for testing that is cost effective?
  - Meets product quality objectives passably well
  - At a cost that is acceptable
- The solution: devise a Test Plan
- A Test Plan is itself a software product
  - Requirements (what are the quality objectives?)
  - Architecture (what is the approach to achieving them?)
  - Design (how to implement the approach)
  - Implementation (specific test cases, analysis problems)
  - Evaluation (online evaluation of how things are going)
47 Example Test Plan Requirements
- Need to be sure key functional capabilities are met
  - Which are key?
  - What about the rest?
  - Assign priorities
- Need to be sure response time is fast enough
  - Need to be specific here
- Software has to fail soft, be crash tested
  - What specifically does that mean?
  - What contingencies are we worried about?
  - What kinds of responses are required?
These are generally very close to the original Product Requirements
48 Test Plan Architecture
- Outlines how these assurances are to be provided
- How to allocate testing/analysis effort between different objectives
  - How much for performance, functionality, robustness, etc. testing
  - How much emphasis given to analysis vs. testing
  - What reactions to which kinds of failures
- What kinds of testing/analysis products will need to be developed or purchased?
  - Test harnesses
  - Test data suites
  - Test tools
49 Test Plan Design
- Specific strategies for orchestrating suites of testcases in order to study specific behaviors
- Algorithmic specifications of how to phase, parallelize, and coordinate the execution of sets of test cases
- Details of how to examine test results to determine whether they indicate failures
- Specific responses to the occurrence of specific kinds of failures
50 Test Plan Implementation
- Consists primarily of
  - Test cases
  - Static analysis problems
  - Verification problems
- Also
  - Support tools
  - Data sets
  - Test harnesses
51 Anatomy of a Test Case
- Goal/Requirements for this test case
- Needed databases/datasets
- Setup procedure
- Input data, which may be
  - fixed, randomly selected, or selected from a list
- Output results required
  - timing
  - functional, which may be
    - a fixed number, a range, or a formula
- Response to failure(s)
- Cleanup/knockdown
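One way to read this anatomy is as a record type. The following Python sketch captures the slide's parts; all field names are illustrative, not a prescribed format:

    from dataclasses import dataclass
    from typing import Any, Callable, List

    @dataclass
    class TestCase:
        goal: str                          # requirement this case checks
        datasets: List[str]                # needed databases/datasets
        setup: Callable[[], None]          # setup procedure
        inputs: List[Any]                  # fixed, random, or from a list
        expected: Any                      # fixed number, range, or formula
        timing_limit_s: float              # required response time
        on_failure: Callable[[str], None]  # response to failure(s)
        cleanup: Callable[[], None]        # cleanup/knockdown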
52 Summary of Dynamic Testing
- Strengths
  - Microscopic examination of execution details
  - Evaluation in the actual runtime environment
  - Oldest approach, most familiar
- Weaknesses
  - Cannot demonstrate the absence of faults
  - Hard to generate test data
  - Hard to know when testsets are adequate
  - Testing aids (e.g., assertion checkers) alter execution
53 Static Analysis
- Technique for demonstrating the absence of faults without the need for execution
- Specification of Intent: derived from requirements
- Specification of Behavior: derived from model(s)
- Comparison: done analytically and mathematically
- Results: theorems about the program (e.g., proofs that certain behaviors are impossible, or mandatory)
54 Inspection
[Diagram: an Informal Specification (Specification of Intended Behavior) and the Source Text (Specification of Actual Behavior) feed a Human Inspector, who compares behavior to intent and produces Informal Error Findings]
55 Early Static Analyzers
- Syntax checker: proves that all executions are syntactically correct
- Static semantics checkers: demonstrate adherence to certain semantic rules and conditions
  - Line-at-a-time checks
  - Combinational checks
    - Type mismatches
    - Argument/parameter list mismatches
56 Syntax Checker
[Diagram: a Syntax Specification (Specification of Intended Behavior) and the Source Text (Specification of Actual Behavior) feed a Parser, which compares behavior to intent and reports Syntax Faults]
57 Static Semantic Analyzer
[Diagram: a Semantic Specification (Specification of Intended Behavior) and the Source Text (Specification of Actual Behavior) feed a Semantic Analyzer, which compares behavior to intent and reports Static Semantic Faults]
58 Dataflow Analysis
- Specification of Intent: sequences of events
- Specification of Behavior: derived from the flowgraph model
  - Nodes annotated with events of interest
  - All possible executions modeled as all sequences of events along all flowgraph paths
- Comparison: an analytic process
  - Are all possible event sequences the desired one(s)?
- Result: theorems demonstrating the absence of event sequence errors
- Examples (see the sketch below)
  - No variable referenced before definition
  - No file read before it is opened
  - The elevator doesn't move until the doors are shut
  - The rocket won't try to fire thrusters after fuel is exhausted
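A minimal Python sketch of the idea (the flowgraph, its event annotations, and the two-state lattice are illustrative): propagate the set of possible states along flowgraph edges to a fixed point, then check whether a "read" node is reachable while "unopened" is still possible:

    cfg = {  # node -> successors; one path reads before opening
        "entry": ["read", "open"],
        "open":  ["read"],
        "read":  ["exit"],
        "exit":  [],
    }
    events = {"open": "open", "read": "read"}   # events of interest at nodes

    states = {n: set() for n in cfg}
    states["entry"] = {"unopened"}
    changed = True
    while changed:                              # fixed-point propagation
        changed = False
        for node, succs in cfg.items():
            out = {"opened" if events.get(node) == "open" else s
                   for s in states[node]}
            for s in succs:
                if not out <= states[s]:
                    states[s] |= out
                    changed = True

    if "unopened" in states["read"]:
        print("some path may read the file before opening it")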
59 Static Dataflow Analysis
[Diagram: required Event Sequences (Specification of Intended Behavior) and Possible Execution Sequences (Specification of Actual Behavior) feed Dataflow Propagation Algorithms, which compare behavior to intent and produce Proofs of the Presence or Absence of Faults]
60 Evaluation of Static Analysis
- Strengths
  - Can demonstrate the absence of faults
  - Proofs can be automatically generated and proven
  - Algorithms are fast (low-order polynomial)
  - No need to generate test data
  - You know when you are done
- Weaknesses
  - The behavior specification is a model with inaccuracies
    - Not all paths are executable
  - Only certain classes of faults are analyzable
    - Mostly sequence specific
    - Weak on functionality
61 Symbolic Execution
- Specification of Intent: formulae, functions
- Specification of Behavior: functions derived from the annotated flowgraph and symbol table
  - Annotate nodes with the function(s) computed there
  - Specify the path to be studied
  - Compute the function(s) computed as composition(s) of the functions at the path's nodes, under the constraints of the path's edges
- Comparison: solving simultaneous constraints, symbolic algebra
- Results: demonstrations that given paths compute the right function(s) (see the sketch below)
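The flavor of the comparison can be sketched with sympy (an anachronism relative to the slides, used purely for illustration; the path, its branch condition, and the intended function are made up):

    import sympy as sp

    a, b = sp.symbols("a b")

    # Suppose the chosen path takes the branch a > 0 and, composing the
    # functions at its nodes, computes out = (a + b) * 2.
    path_condition = sp.Gt(a, 0)
    out = (a + b) * 2

    intended = 2 * a + 2 * b            # the function the spec calls for
    print("path condition:", path_condition)
    print("path computes the intended function:",
          sp.simplify(out - intended) == 0)   # True: consistent on this path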
62 Symbolic Execution
[Diagram: the Function to be Computed (Specification of Intended Behavior) and a Formula Inferred from the Actual Code (Specification of Actual Behavior) feed a Functional Equivalence Theorem Prover, which compares behavior to intent and produces Proofs of Functional Correctness]
63 Example Symbolic Representation
[Table: for the path P1 = (1,2,4,5,6,8), the symbolic path values and path condition at each node n - e.g., node 1: A = a, B = b, condition true - accumulating the path domain D(P1) (a condition on a, conjoined with b > 0) and the path computation C(P1), the symbolic expression for X in terms of a and b]
64 Applications of Symbolic Evaluation
- Symbolic Testing
  - examination of the path domain and computation for detecting failures
  - especially useful for scientific applications
- Path and Test Data Selection
  - select paths to cover structure and determine feasibility of the path condition
  - select data to satisfy the path condition or a revealing condition
- Debugging
  - examine the symbolic representation for faulty data manipulation
- Verification
  - prove consistency of specification assertions
  - inductive assertion method for proving correctness (... I S O ...)
65 Formal Verification
[Diagram: Final and Intermediate First-Order Logic Assertions (Specification of Intended Behavior) and Symbolic Execution of Path Segments (Specification of Actual Behavior) feed First-Order Logic Theorems and Proofs, which compare behavior to intent and yield a Proof of the Absence of All Functional Faults]
66 Formal Verification: Proof of Correctness
- INTENT
  - Usually a specification of functionality: what function(s) does the software compute?
  - Sometimes accuracy, timing, ...
- BEHAVIOR
  - Inferred from a semantically rich program model
  - Generally requires most of the semantics of the programming language
  - Generally uses symbolic execution
- COMPARISON
  - Use of formal mathematics (e.g., predicate logic)
  - Probably the source of the misleading name PROOF of correctness
    - "Proof" is probably OK
    - "Correctness" is dangerously misleading
67 Floyd Method of Inductive Assertions
- Intent: captured by sets of assertions written in predicate logic
- Behavior: inferred by symbolic execution of sequences of program statements
- Comparison: lemmas and theorems in predicate logic
- Strategy
  - Show that all short sequences of statements behave as intended
  - Use induction to prove that all sequences of statements behave as intended
  - Show that the program must terminate
  => The program produces the intended results at the end
68 Use of Assertions
- Assertion: specification of a condition that is intended to be true at a specific given site in the program text
- In Floyd's Method, assertions are written in predicate logic
- In Floyd's Method there are three types of assertions (see the sketch below)
  - Initial, A0: sited at the program's initial statement
  - Final, AF: sited at the program's final statement
  - Intermediate, Ai: often called a "loop invariant"; sited at various internal program locations subject to the rule: EVERY LOOP ITERATION SHALL PASS THROUGH THE SITE OF AT LEAST ONE INTERMEDIATE ASSERTION
- Net effect: every program execution sequence is divided into a finite number of segments of non-looping code bounded on each end by a predicate logic assertion
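The three assertion types can be illustrated on a summing loop. In this Python sketch (illustrative only) they are checked dynamically; in Floyd's Method the segments between them would instead be proved by symbolic execution:

    def sum_first_n(n):
        assert n >= 0                        # A0: initial assertion
        total, i = 0, 0
        while i < n:
            # Ai: intermediate assertion (loop invariant); every iteration
            # passes through this site
            assert total == i * (i - 1) // 2
            total += i
            i += 1
        assert total == n * (n - 1) // 2     # AF: final assertion
        return total

    print(sum_first_n(5))                    # 0+1+2+3+4 = 10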
69 Observations about Formal Verification
- Proofs are long, tedious, tricky; they can be hard
- Assertions are hard to get right
  - Initial assertions define the range of validity; often overlooked or misjudged; often subtle
  - Final assertions largely capture overall intent; this can be difficult to get right (precise, accurate, complete)
    - Example: the assertions for a sort routine
  - Invariants are difficult to get right; they need to be invariant, but also need to support the overall proof strategy
  - Probably worth the effort: they provide intellectual control over loops
- Proofs themselves often require deep program insight
  - The assertions themselves
  - Placement of the assertions
  - Proofs of lemmas
  - This, too, is probably worthwhile, however
70 Deeper Issues
- The final proof shows consistency of intent and behavior, but
  - Assertions of intent may be wrong or incomplete
  - Proof step(s) may be wrong
  - Lemmas may be missing
- An unsuccessful proof attempt => ???
  - Incorrect software
  - Incorrect assertions
  - Incorrect placement of assertions
  - Inept prover
  - Any combination (or all) of the above
  - But failed proofs often indicate which of the above is likely to be true (especially to an astute prover)
- Undecidability of the predicate calculus => no way to be sure when you have a false theorem
- Because of the above: no sure way to know when to quit trying to prove a theorem (and change something)
- Proofs are generally longer (often MUCH longer) than the software being verified
  - Suggests that errors in the proof are more likely than errors in the software being verified
71 Software Tools Can Help
- Proof Checkers
  - Scrutinize proof steps and determine if they are sound
  - Identify the rules, axioms, etc. needed to justify each step
  - How to know the proof checker is right? (verify it? How?)
- Verification Assistants
  - Facilitate precise expression of assertions
  - Accept rules of inference
  - Accept axioms
  - Construct statements of needed lemmas
  - Check proofs
  - Assist in construction of proofs (theorem provers)
- Theorem Proving Assistants
  - Totally automatic theorem proving is progressing slowly
  - Some impressive theorems have been proven
  - "Common sense is a most uncommon commodity"
  - The most successful approach has been human/computer
    - The human architects the proof
    - The computer attempts the proof (by exhaustive search of possible axioms and inferences at each step)
    - Human intervention after the computer has tried for a while
72 Some Pragmatic Issues
- How accurate are inferences about behavior?
  - They rest upon language semantics; how often are those complete and accurate?
  - Computer integers are almost integers (overflow!)
  - Computer reals are truncated power series, not real numbers
  - Assumes no compiler errors
  - Assumes no runtime system errors
  - Assumes no operating system errors
  - Assumes no hardware errors
    - The CLINC verified stack: "verified" compiler, runtime system, OS kernel, VLSI chip design...
- This is a costly approach, because it is human-intensive
  - Can cost thousands of dollars per (verified) line
  - There is a niche for this: security kernels of secure operating systems, etc.
- The paradigm is important even when the complete process is not practical
73 Formal Development
- Start with assertions, develop code to fulfill them
- A top-down approach
- The need to prove lemmas in higher levels of the program dictates the functional requirements (e.g., input/output assertion pairs) of lower level procedures
- Also suggests the use of libraries of reusable verified procedures for commonly needed utilities
- Very popular in Europe; a hard sell in the U.S.
74 Integration of Testing, Analysis, and Formal Methods
- Testing
  - Is dynamic in nature, entailing execution of the program
  - Requires skillful selection of test data to assure good exercising of the program
  - Can show the program executing in its usage environment
  - Can support arbitrarily detailed examination of virtually any program characteristics and behavior
  - Is generally not suitable for showing the absence of faults
- Analysis
  - Is static, operating on abstract program representations
  - Supports definitive demonstration of the absence of faults
  - Generally only for certain selected classes of faults
- Formal Methods
  - Most thorough, rigorous, mathematical
  - Apply primarily to checking functional characteristics
  - Most human- and cost-intensive
The types of capabilities are complementary, which suggests the need for skillful integration
75 Applying Process Technology
Treat software as a PRODUCT produced in a systematic way by a PROCESS designed and implemented to demonstrably achieve explicit quality objectives
- Define the software product formally
- Define the software process formally
- Reason about the product and process formally
- Make program testing and analysis integral steps within the development process
76 The Anatomy of a Quality Determination Process
- Requirements: what do you want to know?
  - What qualities, what aspects of them, to what level of assurance
- Evaluation Criteria: how will you know you have acquired the desired knowledge?
- Architecture: what technologies and tools will you use?
  - Static analyzers, test aids, formal verification, ...
- Design: details of the process
- Implementation: testcases, analysis problems, theorems to prove
- Evaluation: comparing the analysis and testing results actually obtained to the results desired
77 Testing Analysis Process Architecture
[Flowchart: develop code; static analysis for code flaws routes errors found into error feedback (fix errors), with symbolic execution examining paths with flaws; when no errors are found, develop assertions and insert test probes into the code; static analysis for probe removal, symbolic execution, and constraint solution over the paths through the probes yield code with some probes removed; path selection then drives dynamic testing of the fully instrumented code, with verification feedback (get more complete assertions)]
78 Assessing the Quality of Such Processes
- Problem: quality is multifaceted/multidimensional
  - Examples: correctness, speed, comprehensibility, robustness, evolvability, safety, ...
  - In fact, we prefer to say SOFTWARE QUALITIES
  - How to determine the qualities that a software product has?
- Basis for a solution: KNOW WHAT YOU KNOW
  - AXIOM: the goal of this process is to acquire knowledge about your program, and to know that you have acquired it
- Solution: assess how successfully a process pursues the acquisition of software product knowledge
79 Using Process Technology To Do This
- Some suggestions
  - Look for testplan requirements: what do you want to know?
  - Look for testplan evaluation criteria: how can you be sure you have learned what you wanted to learn?
  - Is the testplan executable?
  - Does the testplan incorporate dynamically checkable assertions?
  - Does the testing process produce an execution history trace?
80 Characteristics of a Continuously Improving Quality Determination Process
- Ongoing evaluation of results and progress
- Evolution of goals and requirements
- Evolution of testing plans
GOAL: know more about (what you want to know about) and be sure that you really know it
81 Summary
- Process technology can be used to integrate testing and analysis tools and technologies
- Different environments drive different requirements
  - Which in turn dictate different integrations
- Software product quality determination processes are software too, and they also need assessment and improvement