Title: Theory of Testing and SATEL
1 Theory of Testing and SATEL
2 Presentation Structure
- Theory of testing
- SATEL (Semi-Automatic TEsting Language)
- Test Intentions
- SATEL semantics
- CO-OPN/2c
3 Exhaustive Test Set - Definition
The exhaustive test set for a given specification can be formalized as the set of all observations the specification allows, each paired with its expected outcome.
The exhaustive test set fully describes the expected semantics of the specification, including both valid and invalid behaviors.
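In the style of the Bernot-Gaudel-Marre testing theory on which SATEL builds, this can be sketched as follows (a reconstruction, with assumed notation: HML_SP is the set of HML formulas over the signature of SP, and ⊨ is satisfaction):

```latex
Exhaust_{SP} = \{ \langle f, \mathit{true} \rangle \mid f \in HML_{SP},\; SP \models f \}
         \cup \{ \langle f, \mathit{false} \rangle \mid f \in HML_{SP},\; SP \not\models f \}
```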
4 Specification-Based Test Generation
- Program P satisfies (has the same semantics as) specification SP
- if and only if program P reacts according to test set T_SP (as observed by an oracle O).
5 Pertinence and Practicability
According to the previous slide, the following formula holds:
(P ⊨ SP) ⇔ (P ⊨_O T_SP)
- IF test set T_SP is pertinent, i.e. valid and unbiased:
  - valid: no incorrect programs are accepted
  - unbiased: no correct programs are rejected
But exhaustive test sets are not practicable in the real world (infinite testing time).
6 Test Selection
We thus need a way of reducing the exhaustive (infinite) test set to a practicable test set, while keeping pertinence.
How do we do this?
By stating hypotheses about the behavior of the program: the idea is to find good hypotheses that correctly generalize the behavior of the SUT!
7 Stating Hypotheses
Example: consider as SUT a (simplified) embedded controller for a drink vending machine.
Drinks available: Coke (2 coins), Water (1 coin), Rivella (3 coins).
8 Stating Hypotheses (2)
Hypothesis 1: if the SUT works well for sequences of at most 3 operations, then the whole system works well (regularity)
AND
Hypothesis 2: if the system works well when choosing one kind of drink, then it will work well when choosing any kind (uniformity).
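The two hypotheses above can be sketched operationally: a minimal Python sketch (the operation names and the shape-collapsing rule are assumptions for illustration, not SATEL's actual machinery) that first bounds sequence length (regularity) and then keeps one representative drink per sequence shape (uniformity).

```python
# Sketch: cut the infinite test set of the drink vending machine down
# to a finite one using the regularity and uniformity hypotheses.
from itertools import product

DRINKS = {"coke": 2, "water": 1, "rivella": 3}          # price in coins
OPERATIONS = ["insert_coin"] + [f"select_{d}" for d in DRINKS]

def all_sequences(max_len):
    """Regularity: only test sequences of at most `max_len` operations,
    trusting the SUT to behave the same on longer ones."""
    for length in range(1, max_len + 1):
        yield from product(OPERATIONS, repeat=length)

def apply_uniformity(sequences):
    """Uniformity: one drink stands for all drinks, so keep a single
    representative per shape (all selections collapsed to 'select')."""
    seen, kept = set(), []
    for seq in sequences:
        shape = tuple("select" if op.startswith("select_") else op for op in seq)
        if shape not in seen:
            seen.add(shape)
            kept.append(seq)
    return kept

tests = apply_uniformity(all_sequences(3))
```

With 2 operation shapes (insert or select) and lengths 1 to 3, this yields 2 + 4 + 8 = 14 representative sequences instead of 4 + 16 + 64.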
9 Where to find Hypotheses?
- From the test engineer
  - The test engineer's knowledge about the functioning of the SUT should be used
  - He/she can provide truthful generalizations about the behavior of the SUT!
- In the specification
  - The specification contains an abstraction of all possible behaviors of the SUT
  - It is possible to complement the user's hypotheses automatically if the specification is formal!
10 Specification: Complementing Human Hypotheses
Example: imagine the following scenario from the DVM: the user inserts 2 coins (insertMoney(2)) and then selects a drink (selectDrink(X)). There are then 3 interesting behaviors:
- The buyer doesn't insert enough coins for drink X and gets nothing (Rivella)
- The buyer inserts just enough coins, chooses X and gets it (Coke)
- The buyer inserts too many coins, chooses X, gets it and the change (Water)
Assuming the specification is precise enough, the choice points stated in the operation selectDrink(X) of the specification can be used to add a further behavior classification that can be combined with the hypotheses stated by the test engineer. This is called sub-domain decomposition.
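The decomposition above hinges on comparing the inserted amount with each drink's price. A minimal sketch (function and class names are hypothetical; the prices come from the slide) of how the three sub-domains fall out of that comparison:

```python
# Sketch: sub-domain decomposition of selectDrink(X) after insertMoney(2),
# driven by the price comparison a formal specification would expose.
PRICES = {"water": 1, "coke": 2, "rivella": 3}  # coins per drink

def sub_domain(inserted, drink):
    """Classify one (amount, drink) pair into a behavior class."""
    price = PRICES[drink]
    if inserted < price:
        return "not_enough"   # buyer gets nothing
    if inserted == price:
        return "exact"        # buyer gets the drink
    return "change"           # buyer gets the drink and the change

# Decomposition for insertMoney(2); selectDrink(X):
classes = {drink: sub_domain(2, drink) for drink in PRICES}
```

This reproduces the slide's three cases: Rivella is "not_enough", Coke is "exact", Water is "change".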
11 Applying Hypotheses
- Our idea is to use a formal language (HML) to describe tests, and to define a language for applying constraints (hypotheses) to those tests
- Of course, the final test set will be pertinent (valid and unbiased) only if all the hypotheses correspond to valid generalizations of the behavior of the program!
12 Oracle
The oracle is a decision procedure that decides whether a test was successful or not.
(Diagram: a test, e.g. ⟨⟨select_drink(water), give_drink⟩ ⟨insert_coin, accept_coin⟩, false⟩, and the program, here the Drink Vending Machine, are fed to the oracle, which answers Yes (test passes) or No (test doesn't pass).)
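An oracle of this kind can be sketched in a few lines. The machine below is a hypothetical toy stand-in for the DVM (one coin buys water); the point is only the decision procedure: run the test's ⟨stimulus, response⟩ events against the program and compare the outcome with the test's expected verdict.

```python
# Sketch: an oracle deciding whether a test passes against a program.
class DrinkVendingMachine:
    """Toy program under test (hypothetical): one coin buys water."""
    def initial_state(self):
        return 0  # coins inserted so far

    def step(self, coins, stimulus):
        if stimulus == "insert_coin":
            return coins + 1, "accept_coin"
        if stimulus == "select_drink(water)" and coins >= 1:
            return coins - 1, "give_drink"
        return coins, "refuse"

def oracle(program, events, expected):
    """A test passes iff the program produces every expected response
    exactly when the test's verdict (`expected`) says it should."""
    state = program.initial_state()
    accepted = True
    for stimulus, response in events:
        state, observed = program.step(state, stimulus)
        if observed != response:
            accepted = False
            break
    return accepted == expected

dvm = DrinkVendingMachine()
# The slide's test: selecting before paying must fail, so verdict `false`.
passes = oracle(dvm, [("select_drink(water)", "give_drink")], False)
```

Here the oracle answers Yes: the machine indeed refuses to give a drink before a coin is inserted, which is exactly what the negative test predicts.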
13 Presentation Structure
- Theory of testing
- SATEL (Semi-Automatic TEsting Language)
- Test Intentions
- SATEL semantics
- CO-OPN/2c
14 State of the Art: Running Example
- ATM system with the operations:
  - login(password) / logged, wrongPass, blocked
  - logout
  - withdraw(amount) / giveMoney(amount), notEnoughMoney
- Following the second wrong login, no more operations are allowed
- The ATM distributes 20 or 100 CHF bills
- Initial state:
  - There are 100 CHF in the account
  - a is the right password; b and c are wrong passwords
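The running example can be made concrete with a small sketch. This is a hypothetical implementation of the slide's description, not the CO-OPN model; in particular, answering notEnoughMoney for amounts that are not multiples of 20 is an assumption, since the slide only lists giveMoney and notEnoughMoney as possible responses.

```python
# Sketch: the ATM running example as a plain Python class.
class ATM:
    RIGHT_PASSWORD = "a"           # b and c are wrong passwords

    def __init__(self):
        self.balance = 100         # 100 CHF in the account initially
        self.wrong_logins = 0
        self.logged = False

    def login(self, password):
        if self.wrong_logins >= 2:             # blocked after 2nd wrong login
            return "blocked"
        if password == self.RIGHT_PASSWORD:
            self.logged = True
            return "logged"
        self.wrong_logins += 1
        return "wrongPass" if self.wrong_logins < 2 else "blocked"

    def logout(self):
        self.logged = False

    def withdraw(self, amount):
        # Only 20/100 CHF bills, so only multiples of 20 within the balance.
        if not self.logged or amount > self.balance or amount % 20 != 0:
            return "notEnoughMoney"
        self.balance -= amount
        return f"giveMoney({amount})"
```

For instance, two wrong logins (b, then c) block the machine, after which even the right password a is refused.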
15 SATEL: What are Test Intentions?
A test intention defines both a subset of the SUT behavior and hypotheses about how to test it.
16 SATEL: Recursive Test Intentions and Composition
- One test intention is defined by a set of axioms:
  Variables: f : PrimitiveHML
  T in loginLogout
  f in loginLogout ⇒ f . HML(⟨login(a) with logged⟩ ⟨logout⟩ T) in loginLogout
- Test intentions may be composed:
  f in loginLogout, nbEvents(f) < 4 ⇒ f in 4LessLoginLogout
Resulting tests:
⟨HML(T), true⟩
⟨HML(⟨login(a) with logged⟩ ⟨logout⟩ T), true⟩
⟨HML(⟨login(a) with logged⟩ ⟨logout⟩ ⟨login(a) with logged⟩ ⟨logout⟩ T), true⟩
⟨HML(⟨login(a) with logged⟩ ⟨logout⟩ ⟨login(a) with logged⟩ ⟨logout⟩ ⟨login(a) with logged⟩ ⟨logout⟩ T), true⟩
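The unfolding of such a recursive intention can be sketched in Python. This is an illustrative encoding, not SATEL's semantics: formulas are tuples of event strings ending in "T", and the interpretation of nbEvents as counting ⟨login, logout⟩ unfoldings is an assumption made so that the constraint reproduces the four tests listed above.

```python
# Sketch: unfold the recursive loginLogout intention, then apply the
# composed constraint to obtain the finite intention 4LessLoginLogout.
STEP = ("login(a) with logged", "logout")   # one recursion unfolding

def login_logout(max_unfoldings):
    """Unfold `f in loginLogout => f . <login(a) with logged> <logout> T`,
    starting from the base formula HML(T)."""
    formulas, f = [], ()
    for _ in range(max_unfoldings + 1):
        formulas.append(f + ("T",))         # close the formula with T
        f = f + STEP                        # one more unfolding
    return formulas

def four_less(formulas):
    """Assumption: nbEvents counts <login, logout> unfoldings, so the
    constraint nbEvents(f) < 4 keeps 0..3 repetitions."""
    return [f for f in formulas if (len(f) - 1) // 2 < 4]

tests = four_less(login_logout(10))
```

Even though the recursion is unfolded 10 times here, the constraint cuts the result back to the four tests shown on the slide.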
17 SATEL: Uniformity
⟨HML(⟨login(b) with wrongPass⟩ ⟨login(b) with blocked⟩ T), true⟩
⟨HML(⟨login(a) with wrongPass⟩ T), false⟩
⟨HML(⟨login(b) with wrongPass⟩ ⟨login(a) with blocked⟩ T), false⟩
18 SATEL: Regularity and subUniformity
⟨HML(⟨login(a) with logged⟩ ⟨withdraw(120) with notEnoughMoney⟩ T), true⟩
⟨HML(⟨login(a) with logged⟩ ⟨withdraw(80) with giveMoney(80)⟩ T), true⟩
19 SATEL: Test Intention Definition Mechanisms
20 Presentation Structure
- Theory of testing
- SATEL (Semi-Automatic TEsting Language)
- Test Intentions
- SATEL semantics
- CO-OPN/2c
21 SATEL Semantics: Semantics of a Test Intention
- Unfold the test intention (solve the recursion)
- Calculate the exhaustive test set:
  - Replace all variables exhaustively
  - Generate oracles by validating against the model:
    - Positive tests are annotated with true
    - Negative tests, extracted from test intentions but having an impossible last event, are annotated with false
- Reduce the exhaustive test set by solving all predicates on the variables
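The oracle-generation step above can be sketched against a toy transition-system model (a hypothetical stand-in for the CO-OPN model, with only a couple of ATM transitions): sequences the model fully accepts become positive tests, and sequences whose last event is impossible become negative tests.

```python
# Sketch: annotate event sequences by validating them against a model.
MODEL = {  # state -> {event: next_state}; a toy fragment of the ATM
    "out": {"login(a) with logged": "in", "login(b) with wrongPass": "out"},
    "in": {"logout": "out"},
}

def annotate(sequence, start="out"):
    """Return (sequence, True) if the model accepts every event,
    (sequence, False) if only the *last* event is impossible,
    and None otherwise (not a test of the exhaustive set)."""
    state = start
    for i, event in enumerate(sequence):
        nxt = MODEL[state].get(event)
        if nxt is None:
            return (sequence, False) if i == len(sequence) - 1 else None
        state = nxt
    return (sequence, True)
```

For example, ⟨login(a) with logged⟩ ⟨logout⟩ is annotated true, while a lone ⟨logout⟩ (impossible as a last event) is annotated false.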
22 SATEL Semantics: Annotations
- Annotations correspond to the conditions in the model that allow a state transition
- In CO-OPN, an annotation is a conjunction of conditions reflecting the hierarchy and dynamicity of the model
23 SATEL Semantics: Equivalence Class Calculation
C1: correct password; C2: wrong password
⟨HML(⟨login(a) with logged⟩ T), true⟩   ⟨HML(⟨login(b) with wrongPass⟩ T), true⟩
⟨HML(⟨login(a) with logged⟩ T), true⟩   ⟨HML(⟨login(c) with wrongPass⟩ T), true⟩
24 SATEL Semantics: Equivalence Class Calculation (cont.)
C1: correct password; C2: wrong password; C3: two wrong logins; C4: true; C5: not enough money; C6: enough money
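The class calculation can be sketched by grouping inputs according to the annotation the model produces for them (the probe function below is a hypothetical stand-in for querying the model): inputs that trigger the same annotation land in the same class, so only one representative per class needs testing.

```python
# Sketch: equivalence classes of passwords by the login annotation.
def login_annotation(password):
    """Toy model query: `a` is the right password, anything else is wrong."""
    return "logged" if password == "a" else "wrongPass"

def equivalence_classes(passwords):
    """Group inputs that produce the same annotation (C1, C2, ...)."""
    classes = {}
    for p in passwords:
        classes.setdefault(login_annotation(p), []).append(p)
    return classes

classes = equivalence_classes(["a", "b", "c"])
```

Here b and c fall into the same "wrong password" class (C2), which is why the two true-annotated tests on the previous slide are interchangeable.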
25 Presentation Structure
- Theory of testing
- SATEL (Semi-Automatic TEsting Language)
- Test Intentions
- SATEL semantics
- CO-OPN
26 CO-OPN/2c: ATM Model
(Model figure: shows the loggedIn place; an annotation is made of context, synchronization and conditions)
27 Tools
- Developed an IDE for SATEL's concrete syntax, integrated with CoopnBuilder
- A case study with an industrial partner (CTI) allowed us to begin identifying the methodological capabilities of SATEL