Title: Building Correct Distributed Embedded Systems
Slide 1: Building Correct Distributed Embedded Systems
- Rachel Cardell-Oliver
- Department of Computer Science and Software Engineering, The University of Western Australia
- July 2002
Slide 2: Talk Overview
- Research goal: to build correct distributed embedded systems
- 1. Distributed Embedded Systems
- 2. Correctness: Formal Methods and Testing
- 3. Practice: Executing Test Cases
- 4. Future: Distributed Embedded Systems
Slide 3: 1. Distributed Embedded Systems
Slide 4: Embedded
Slide 5: Distributed
Slide 6: Real-Time Reactions
Slide 7: Characteristics of Current DES
- React or interact with their environment
- Must respond to events within fixed time
- Distributed over two or more processors
- Fixed network topology
- Each processor runs a set of tasks
- Often embedded in other systems
- Often built with limited HW/SW resources
Slide 8: 2. Correctness: Formal Methods and Testing
Slide 9: Goal: Building Correct DES
- [Diagram caption fragment: "behaves like this?"]
Slide 10: Software Tests
- are experiments designed to answer the question: does this implementation behave as intended?
- Defect tests
- are tests which try to force the implementation NOT to behave as intended
Slide 11: A New Test Method
Slide 12: Our Method for Defect Testing
- 1. Identify types of behaviour which are likely to uncover implementation defects (e.g. extreme cases)
- 2. Describe these behaviours using a formal specification language
- 3. Translate the formal test specification into a test program to run on a test driver
- 4. Connect the test driver to the system under test and execute the test program
- 5. Analyse test results (on-the-fly or off-line)
Slide 13: Test Generation: some history
- Chow, IEEE SE 1978
  - deterministic Mealy machines
- Clarke and Lee, 1997
  - timed requirements language
- Nielsen, PhD thesis and TACAS 2000
  - event recording automata
- Cardell-Oliver, FACJ 2000
  - Uppaal timed automata
- and many more
Slide 14: Test Execution: some history
- Peters and Parnas, ISSTA 2000
  - test monitors for reliability testing
- Cardell-Oliver, ISSTA 2002
  - test automata for defect testing
Slide 15: Example System to Test
Slide 16: Step 1: Identify interesting behaviours
- Usually extreme behaviours such as
- Inputs at the maximum allowable rate
- Maximum response time to events
- Timely scheduling of tasks
Slide 17: Example Property to Test
- Whenever the light level changes from low to high
- then the valve starts to open
- within 60 cs
- assuming the light level alternates between high and low every 100 cs
- (a sketch of checking this property on the fly is given below)
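The following is only an illustrative sketch of such an on-the-fly check, written for the test driver: it assumes, based on later slides, that the light input is driven from a tester output over a piggybacked wire and that the SUT signals "valve starts to open" with an IR message; LIGHT_OUT, VALVE_MSG and the failure counter are hypothetical names, not taken from the talk.

    // Hypothetical on-the-fly check of the 60 cs response bound: raise the
    // light input, then require the SUT's "valve opening" IR message to
    // arrive within 60 cs (one FastTimer tick = 10 ms = 1 cs on RCX2).
    #define LIGHT_OUT  OUT_A   // tester output wired to the SUT light sensor
    #define VALVE_MSG  1       // IR message taken to mean "valve starts to open"

    int failures;

    task main()
    {
        failures = 0;
        repeat (10)                       // alternate the light level
        {
            Off(LIGHT_OUT);               // light low
            Wait(100);                    // hold low for 100 cs
            ClearMessage();
            ClearTimer(0);
            OnFwd(LIGHT_OUT);             // light changes from low to high
            // wait for the "valve opening" message, up to the 60 cs bound
            until (Message() == VALVE_MSG || FastTimer(0) > 60);
            if (Message() != VALVE_MSG)
            {
                failures++;               // no response within 60 cs
            }
            Wait(100 - FastTimer(0));     // complete the 100 cs high phase
        }
    }

Slide 22 shows the approach actually used in the experiments, which instead records timed events for off-line analysis.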
Slide 18: Step 2: Choose a formal specification language
- which is able to describe
- concurrent tasks
- real time constraints
- persistent data
- communication
- use Uppaal Timed Automata (UTA)
Slide 19: Example UTA for timely response
Slide 20: Writing Tests with UTA
- Test cases specify all valid test inputs
- no need to test outside these bounds
- Test cases specify all expected test outputs
- if an output doesn't match then it's wrong
- No need to model the implementation explicitly
- Test cases may be concurrent programs
- Test cases are executed multiple times
Slide 21: Step 3: Translate Spec to Executable
- UTA specs are already program-like
- Identify test inputs and how they will be controlled by the driver
- Identify test outputs and how they will be observed by the driver
- then straightforward translation into NQC (Not Quite C) programs
Slide 22: Example NQC for timely response

    int i;    // shared loop counter (declaration not shown on the slide)

    // Note: setlighthigh, setlightlow and record are helpers of the test
    // harness whose definitions are not shown on this slide.

    task dolightinput()               // drive the SUT's light inputs
    {
        while (i < 400)
        {
            Wait(100);                // hold the light low for 100 cs
            setlighthigh(OUT_C); setlighthigh(OUT_A);
            record(FastTimer(0), 2000);   // log (time, 2000): light raised
            i++;
            Wait(100);                // hold the light high for 100 cs
            setlightlow(OUT_C); setlightlow(OUT_A);
            record(FastTimer(0), 1000);   // log (time, 1000): light lowered
            i++;
        } // end while
    } // end task

    task monitormessages()            // observe IR messages from the SUT
    {
        while (i < 400)
        {
            // event 1 is assumed to be configured elsewhere (SetEvent) to
            // fire when an IR message arrives
            monitor (EVENT_MASK(1))
            {
                Wait(1000);
            }
            catch (EVENT_MASK(1))
            {
                record(FastTimer(0), Message());  // log (time, message)
                i++;
                ClearMessage();
            }
        } // end while
    } // end task
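The record() helper used above is not defined on the slide. A minimal sketch of what it might look like, assuming the RCX2 datalog is used so that recorded (time, value) pairs can be uploaded for off-line analysis, is:

    // Hypothetical definition (an assumption, not from the talk): append a
    // (timestamp, value) pair to the RCX datalog. The datalog must have been
    // created beforehand, e.g. with CreateDatalog() in the tester's main task.
    #define record(time, value)  { AddToDatalog(time); AddToDatalog(value); }

With a definition of this kind, the two tasks leave a trace of timed events that the off-line analysis of Step 5 can work from.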
Slide 23: Step 4: test driver
Slide 24: Step 4: connect tester and execute tests
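Slides 23 and 24 show the test rig itself in pictures. Purely as an illustrative sketch (the datalog size and the use of timer 0 are assumptions, not from the talk), the tester's start-up code in NQC might wire the earlier example together like this:

    // Hypothetical tester start-up: create the datalog, reset the clock that
    // timestamps all observations, then run the stimulus and monitoring
    // tasks concurrently on the test driver.
    task main()
    {
        CreateDatalog(1600);       // space for the recorded (time, value) pairs
        ClearTimer(0);             // FastTimer(0) timestamps every observation
        start dolightinput;        // drive the SUT's light inputs
        start monitormessages;     // observe IR messages sent by the SUT
    }

Timestamping all observations with the tester's own clock is also what makes the clock-skew workaround described later possible.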
Slide 25: Step 5: Analyse Results
Slide 27: Results 1: Observation Issues
- Things you can't see
- Probe effect
- Clock skew
- Tester speed
Slide 28: Things you can't see
- Motor outputs can't be observed directly because of power drain
  - so we used IR messages to signal motor changes
- But we can observe
  - touch and light sensors via piggybacked wires
  - broadcast IR messages
Slide 29: The probe effect
- We can instrument program code to observe program variables
- but the time taken to record results disturbs the timing behaviour of the system under test (see the sketch below)
- Solutions
  - observe only externally visible outputs
  - design for testability: allow for probe effects
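As a rough way of quantifying that disturbance (an illustrative sketch only, not something described in the talk), the cost of the probe itself can be measured by timing a burst of datalog writes:

    // Hypothetical probe-cost measurement: time 100 datalog writes and log
    // the total elapsed time in 10 ms ticks, so the per-probe overhead can
    // be estimated and allowed for when interpreting recorded timestamps.
    int probecost;

    task main()
    {
        CreateDatalog(101);             // 100 probe entries plus the total
        ClearTimer(0);
        repeat (100)
        {
            AddToDatalog(FastTimer(0)); // the probe being measured
        }
        probecost = FastTimer(0);       // elapsed time for 100 probes
        AddToDatalog(probecost);
    }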
Slide 30: Clock Skew
- Clocks may differ for local results from 2 or more processors
- Solutions
  - use observations timed only by the tester
  - including tester events gives a partial order
Slide 31: Tester speed
- Tester must be sufficiently fast to observe and record all interesting events
- Beware
  - scheduling and monitoring overheads
  - execution time variability
- Solution: NQC parallel tasks and off-line analysis helped here
Slide 32: Results 2: Input Control Issues
- Input value control
- Input timing control
Slide 33: Input Value Control
- SUT touch and light sensors can be controlled by piggybacked wires from the test driver to the SUT
- Test driver sends IR messages to the SUT
- Use inputs directly from the environment, such as natural light or a button pushed by hand
- (a sketch of driving these input channels is given below)
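As an illustration only (the output port, message number and task name are assumptions, not from the talk), the driver side of the two controlled channels might look like:

    // Hypothetical driver stimuli: a piggybacked wire driven from OUT_B
    // simulates a sensor reading on the SUT, and an IR message carries a
    // test input to the SUT over the infrared link.
    task driveinputs()
    {
        repeat (5)
        {
            OnFwd(OUT_B);        // assert the piggybacked wire
            Wait(50);            // hold it for 50 cs
            Off(OUT_B);          // release
            SendMessage(3);      // send a test input over IR
            Wait(100);
        }
    }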
Slide 34: Input Timing Control
- Can't control input timing precisely
  - e.g. an input offered just before the SUT task is called
- Solution: run tests multiple times and analyse the average and spread of results
- Can't predict all SUT timings for a fully accurate model
  - c.f. WCET research, but our problem is harder
Slide 35: Conclusions from Experiments
- Defect testing requires active test drivers able to control extreme inputs and observe relevant outputs
- Test generation methods should take into account the constraints of executing test cases
- Engineers should design for testability
Slide 36: 5. Future Distributed Real-Time Systems
Slide 37: Embedded Everywhere
- IT is on the verge of another revolution
- Wireless networked systems of 1000s of tiny embedded computers will allow information to be collected, shared, and processed in unprecedented ways
- Embedded Everywhere report, 2001
Slide 38: Embedded Everywhere: A Research Agenda for Networked Systems of Embedded Computers
- A study by the Computer Science and Telecommunications Board of the National Research Council (USA)
- For DARPA and the National Institute of Standards and Technology (NIST)
- 236 pages, 2001
- http://books.nap.edu/html/embedded_everywhere/
Slide 39: Future DES Applications
- Precision Agriculture: Motorola weather station, UWA CSSE
Slide 40: Future DES Applications
- Intelligent Inhabited Environments
- http://cswww.essex.ac.uk/Research/intelligent-buildings/
Slide 41: Future DES Applications
- Smart Dust: Prof. Kris Pister, University of California at Berkeley
- http://www-bsac.eecs.berkeley.edu/pister/SmartDust/
Slide 42: Future vs Current (current characteristic -> future characteristic)
- React or interact with their environment -> Reactive
- Must respond to events within fixed time -> Real-time
- Two or more processors -> 10^3 to 10^6
- Fixed network topology -> Dynamic
- Each processor runs a set of tasks -> 1-1
- Often embedded in other systems -> More so
- Often built with limited HW/SW resources -> Tiny scale
Slide 43: Future Challenges
- Scale
  - 1000s of independent processes
- Complexity
  - concurrency, communication, data, time
- Dynamic topologies
  - of wireless networks
  - very long life networks
Slide 44: The End
Slide 45: Traditional Embedded System
[Block diagram; components: Real-Time Computer, Algorithms for Digital Control, Real-Time Clock, Interface, Engineering System, Remote Monitoring System, Data Logging, Database, Data Retrieval and Display, Display Devices, Operator Interface, Operator's Console]
Slide 46: Observations Traces
[Diagram; labels: specification, implementation]