Title: Conformance Test Experiments for Distributed Real-Time Systems
- Rachel Cardell-Oliver
- Complex Systems Group
- Department of Computer Science Software
Engineering - The University of Western Australia
- July 2002
Talk Overview
- Research Goal: to build correct distributed real-time systems
- 1. Distributed Real-Time Systems
- 2. Correctness: Formal Methods and Testing
- 3. Experiments: A New Test Method
1. Distributed Real-Time Systems
Real-Time Reactions
Distributed
System Characteristics
- React or interact with their environment
- Must respond to events within a fixed time
- Distributed over two or more processors
- Fixed network topology
- Each processor runs a set of tasks
- Processors embedded in other systems
- Built with limited HW/SW resources
7Testing Issues for these Systems
- Many sources of non-determinism
- 2 or more processors with independent clocks
- Set of tasks scheduled on each processor
- Independent but concurrent subsystems
- Inputs from an uncontrolled environment, e.g. people
- Limited resources affect test control, e.g. speed
- Our goal: to develop robust test specification and execution methods
2. Correctness: Formal Methods and Testing
Goal: Building Correct Systems
Software Tests
- are experiments designed to answer the question: does this implementation behave as intended?
- Defect tests are tests which try to force the implementation NOT to behave as intended
- Our focus is to specify and execute robust defect tests
Related Work on Test Case Generation
- Chow, TSE 1978: deterministic Mealy FSMs
- Clarke & Lee 1997: timed requirements graphs
- Nielsen, TACAS 2000: event recording automata
- Cardell-Oliver, FACJ 2000: Uppaal timed automata
- Specific experiments are described by a test case: a timed sequence of inputs and outputs
- Non-determinism is not handled well (if at all)
- Not robust enough for our purposes
3. Experiments: A New Test Method
Our Method for Defect Testing
- Identify types of behaviour which are likely to uncover implementation defects (e.g. extreme cases)
- Describe these behaviours using a formal specification language
- Translate the formal test specification into a test program to run on a test driver
- Connect the test driver to the system under test and execute the test program
- Analyse test results (on-the-fly or off-line)
Example System to Test
Step 1: Identify interesting behaviours
- Usually extreme behaviours such as
- Inputs at the maximum allowable rate
- Maximum response time to events
- Timely scheduling of tasks
Example Property to Test
- Whenever the light level changes from low to high
- then the valve starts to open
- within 60cs
- assuming the light level alternates between high
and low every 100cs
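Read as a bounded-response property (cs = centiseconds), one hedged formalization of this slide is:

\[
\forall t.\ \mathit{rise}(\mathit{light}, t) \;\Rightarrow\; \exists\, t' \in [t,\, t+60].\ \mathit{opening}(\mathit{valve}, t')
\]

under the environment assumption that the light level alternates between high and low every 100cs. The predicate names are illustrative, not from the slides.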
Step 2: Choose a formal specification language
- which is able to model
- real-time clocks
- persistent data
- concurrency and communication
- use Uppaal Timed Automata (UTA)
Example UTA for timely response
(Diagram: Uppaal timed automata for the timely-response test; only the state labels "m0" survive in this transcript.)
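The automata themselves are not recoverable here, but a common encoding (an assumption, not necessarily the slide's model) resets an observer clock x when the light goes high and checks an Uppaal query such as:

  A[] (Obs.waiting imply Obs.x <= 60)

where Obs, waiting and x are hypothetical names for the observer template, its waiting location, and its clock.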
Writing Robust Tests with UTA
- Test cases specify all valid test inputs
- no need to test outside these bounds
- Test cases specify all expected test outputs
- if an output doesn't match then it's wrong
- No need to model the implementation explicitly
- Test cases may be concurrent programs
- Test cases are executed multiple times
Step 3: Translate Spec to Exec
- UTA specs are already program-like
- Identify test inputs and how they will be controlled by the driver
- Identify test outputs and how they will be observed by the driver
- then a straightforward translation into NQC (Not Quite C) programs
Example NQC for timely response
  task dolightinput()                    // drive the light-level input
  {
    while (i < MAXRUNS)
    {
      Wait(100);                         // 100cs with the light low
      setlighthigh(OUT_C); setlighthigh(OUT_A);
      record(FastTimer(0), HIGH_LIGHT);  // log time of low-to-high change
      i++;
      Wait(100);                         // 100cs with the light high
      setlightlow(OUT_C); setlightlow(OUT_A);
      record(FastTimer(0), LOW_LIGHT);   // log time of high-to-low change
      i++;
    } // end while
  } // end task

  task monitormessages()                 // observe the SUT's IR messages
  {
    while (i < MAXRUNS)
    {
      monitor (EVENT_MASK(1))            // wait for a message event
      {
        Wait(LONGINTERVAL);
      }
      catch (EVENT_MASK(1))
      {
        record(FastTimer(0), Message()); // log arrival time and content
        i++;
        ClearMessage();
      }
    } // end while
  } // end task
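The slides do not show main() or the definitions of record(), setlighthigh(), setlightlow(), MAXRUNS, LONGINTERVAL and the light tags; the following is a sketch of one plausible completion (all names, values and macro bodies are assumptions, not from the talk):

  #define MAXRUNS       20       // assumed number of test runs
  #define LONGINTERVAL  500      // assumed monitor timeout (10ms ticks)
  #define HIGH_LIGHT    1        // assumed datalog tags for the two changes
  #define LOW_LIGHT     0

  int i;                         // run counter shared by both tasks

  // Drive the piggybacked wires to the SUT's light sensors by
  // switching tester output ports on and off.
  #define setlighthigh(port)  On(port)
  #define setlightlow(port)   Off(port)

  // Log a (time, value) pair to the RCX datalog for off-line analysis.
  #define record(time, value)  { AddToDatalog(time); AddToDatalog(value); }

  task main()
  {
    CreateDatalog(8 * MAXRUNS);                  // room for all logged pairs
    SetEvent(1, Message(), EVENT_TYPE_MESSAGE);  // event 1 fires on IR messages
    i = 0;
    ClearTimer(0);                               // FastTimer(0) is the tester's clock
    start dolightinput;                          // generate the light inputs
    start monitormessages;                       // observe the SUT's responses
  }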
Step 4: Test driver
Step 4: Connect tester and execute tests
Step 5: Analyse Results
Scheduling Deadlines: Test Results
Concluding Observations
- Defect testing requires active test drivers able to control extreme inputs and observe relevant outputs
- Test generation methods must take into account the constraints of executing test cases
- Tests must be robust to non-determinism in the SUT
- Measure what can be measured
- Engineers must design for testability
Results 1: Observation Issues
- Things you can't see
- Probe effect
- Clock skew
- Tester speed
Things you can't see
- Motor outputs can't be observed directly because of power drain
- so we used IR messages to signal motor changes (see the sketch after this list)
- But we can observe:
- touch and light sensors via piggybacked wires
- broadcast IR messages
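A minimal sketch of the SUT-side instrumentation this implies; the function name and message code are hypothetical, not from the slides:

  #define VALVE_OPENING 1        // assumed IR message code

  // When the controller starts the valve motor it also broadcasts an
  // IR message, making an otherwise invisible motor change observable.
  void openValve()
  {
    OnFwd(OUT_A);                // start the valve motor
    SendMessage(VALVE_OPENING);  // signal the change to the tester
  }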
The probe effect
- We can instrument program code to observe program variables
- but the time taken to record results disturbs the timing of the system under test
- Solutions:
- observe only externally visible outputs
- design for testability: allow for probe effects
Clock Skew
- Clocks may differ for local results from two or more processors
- Solutions:
- use observations timed only by the tester
- including tester events gives a partial order
Tester speed
- Tester must be sufficiently fast to observe and record all interesting events
- Beware:
- scheduling and monitoring overheads
- execution time variability
- Solution: use NQC parallel tasks and off-line analysis for speed (see the calibration sketch below)
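One way to check that the tester keeps up is to calibrate its own recording overhead against the same fast timer it uses for observations; a sketch (the counts and datalog size are assumptions):

  int i;

  task main()
  {
    CreateDatalog(202);            // 100 logged pairs plus start/end stamps
    ClearTimer(0);
    AddToDatalog(FastTimer(0));    // loop start time (10ms ticks)
    for (i = 0; i < 100; i++)
    {
      AddToDatalog(FastTimer(0));  // the probe being measured
      AddToDatalog(0);
    }
    AddToDatalog(FastTimer(0));    // loop end time; compare off-line
  }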
Results 2: Input Control Issues
- Input value control
- Input timing control
Input Values can be Controlled
- Touch sensor input (0..1)
- good, via piggybacked wire (see the sketch after this list)
- Light sensor input (0..100)
- OK, via piggybacked wire
- Broadcast IR messages
- good, from the tester
- Also use inputs directly from the environment
- natural light, or a button pushed by hand
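For example, a touch "press" can be simulated by pulsing a tester output port over the piggybacked wire; a sketch in which the port and timing are assumptions:

  task main()
  {
    On(OUT_B);     // hypothetical port wired to the SUT's touch sensor
    Wait(50);      // hold the "press" for 50cs
    Off(OUT_B);    // release
  }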
Input Timing is Hard to Control
- Can't control input timing precisely
- e.g. an input offered just before the SUT task is called
- Solution: run tests multiple times and analyse the average and spread of results (see below)
- Can't predict all system timings for a fully accurate model
- cf. WCET research, but our problem is harder
Conclusions from Experiments
- Defect testing requires active test drivers able to control extreme inputs and observe relevant outputs
- Test generation methods must take into account the constraints of executing test cases
- Tests must be robust to non-determinism in the SUT
- Measure what can be measured
- Engineers must design for testability