Playing the Devil's Advocate: Verifying Real-Time Systems

1
Playing the Devil's Advocate: Verifying Real-Time Systems
  • Jan Jürjens
  • Software Systems Engineering, TU Munich, Germany
  • juerjens@in.tum.de
  • http://www.jurjens.de/jan

2
Dependability
  • Ability to deliver service that can justifiably
    be trusted.
  • By definition, dependability is supposed to include the attributes Reliability, Safety, and Security, the threats against them, and the means to counter the threats. (Although historically the security and dependability communities have been largely disjoint.)

3
Safety
  • Safety: software execution without contributing to hazards.
  • Safety-critical systems: five failure condition categories: catastrophic, hazardous, major, minor, no effect.
  • Corresponding safety levels A to E (DO-178B standard in avionics).

4
Reliability
  • For safety, need a sufficient level of reliability: the probability of failure-free functioning of a software component for a specified period in a specified environment.
  • Reliability goals given via the maximum allowed failure rate. For a high degree of reliability, testing is not sufficient (e.g. 1 failure per 100,000 years; see the rough calculation below).
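
A rough back-of-the-envelope sketch of why testing alone cannot demonstrate such a failure rate; the confidence argument is deliberately simplified, and only the target figure is taken from the bullet above:

  # Rough sketch: failure-free test time needed to support a target
  # failure rate of about 1 failure per 100,000 years of operation.
  HOURS_PER_YEAR = 24 * 365
  target_rate = 1 / (100_000 * HOURS_PER_YEAR)   # failures per hour

  # To bound the rate below the target with reasonable statistical
  # confidence, one needs on the order of 1/target_rate failure-free
  # test hours (several multiples of it for high confidence).
  required_hours = 1 / target_rate
  print(f"about {required_hours:.1e} test hours "
        f"(roughly {required_hours / HOURS_PER_YEAR:,.0f} years)")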

5
Embedded Systems
  • In particular, embedded software is increasingly used in safety-critical systems (flexibility):
  • Automotive
  • Avionics
  • Aeronautics
  • Robotics, Telemedicine
  • Our treatment of dependable systems in particular
    applies to embedded systems.

6
Fault-tolerance
  • Redundancy model: determines which level of redundancy is provided.
  • Goal: no hazards in the presence of single-point failures.
  • In the following treatment:
  • focus on reliability, in particular for safety,
  • focus on fault-tolerance aspects of reliability.

7
Faults vs. Failures
  • Failures: perceived deviations of output values from expected values.
  • Faults: possible causes of failures, in hardware, code, or other artefacts.
  • For example, a faulty communication line may result in a communication failure.
  • Failures are relative to system requirements (real-time: an unacceptable communication delay is a failure).

8
Faults vs. Failures II
  • Faults in a component cause failures of that component. These are faults of the subsystem containing the component, which lead to failures of the subsystem.
  • ⇒ Fault / failure is a relative distinction; the same event can be both.

9
From UMLsec to UMLdep
  • Reliability = security against stupid adversaries.
  • Security = reliability for paranoids.
  • Adversaries in security correspond to failures in reliability.
  • Replace the adversary model in UMLsec by a failure model to get UMLdep.

10
Fault Semantics Modeling
  • For a redundancy model R, stereotypes s ∈ {«crash/performance», «value»} have a set Faults_R(s) ⊆ {delay(t), loss(p), corrupt(q)}, with interpretation:
  • t: expected maximum time delay,
  • p: probability that a value is not delivered within t,
  • q: probability that a value delivered in time is corrupted
  • (in each case incorporating redundancy). Or use the «risk» stereotype with a {fault} tag.

11
Example
  • Suppose redundancy model R uses a controller with redundancy 3 and takes the fastest result. Then one can define:
  • delay(t): t is the delay of the fastest controller,
  • loss(p): p is the probability that the fastest result is not delivered within t,
  • corrupt(q): q is the probability that the fastest result is corrupted
  • (each wrt. the given fault semantics; a sketch of this encoding follows below).
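
A minimal sketch of how the fault sets Faults_R(s) and the redundancy-3 "fastest result" example above might be encoded; all names, figures, and the independence assumption are illustrative, not the UMLsec/UMLdep tool's code:

  from dataclasses import dataclass

  # Hypothetical encoding of a fault semantics Faults_R(s) per stereotype s.
  @dataclass(frozen=True)
  class FaultSet:
      delay_t: float    # t: expected maximum time delay
      loss_p: float     # p: probability a value is not delivered within t
      corrupt_q: float  # q: probability a value delivered in time is corrupted

  # Redundancy model "none": assumed raw figures for a single link/controller.
  FAULTS_NONE = {"crash/performance": FaultSet(delay_t=5.0, loss_p=0.01,
                                               corrupt_q=0.001)}

  def fastest_of(n: int, single: FaultSet) -> FaultSet:
      """Redundancy model R: n redundant controllers, fastest result used.
      Loss/corruption only if all n copies fail (independence assumed);
      the delay of the fastest controller is approximated here by the
      single-controller figure."""
      return FaultSet(delay_t=single.delay_t,
                      loss_p=single.loss_p ** n,
                      corrupt_q=single.corrupt_q ** n)

  FAULTS_R3 = {s: fastest_of(3, f) for s, f in FAULTS_NONE.items()}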

12
«guarantee»
  • Describes guarantees required from communication dependencies resp. system components.
  • Tag {goal} has as value a subset of {immediate(t), eventual(p), correct(q)}, where
  • t: expected maximum time delay,
  • p: probability that a value is delivered within t,
  • q: probability that a value delivered in time is not corrupted.

13
Reliable Architecture
  • Is this a reliable architecture?

14
«reliable links»
  • Physical layer should meet the reliability requirements on communication, given the redundancy model R.
  • Constraint: for every dependency d stereotyped «guarantee» and each corresponding communication link l with stereotype s (a sketch of this check follows below):
  • if {goal} has immediate(t) as value, then delay(t') ∈ Faults_R(s) implies t' ≤ t,
  • if {goal} has eventual(p) as value, then loss(p') ∈ Faults_R(s) implies p' ≤ 1 - p, and
  • if {goal} has correct(q) as value, then corruption(q') ∈ Faults_R(s) implies q' ≤ 1 - q.
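
A sketch of this constraint as a simple check over the {goal} values of a «guarantee» dependency and the fault set of its link; plain dictionaries stand in for the UML model, which is an assumption about the encoding, not the actual tool:

  # Hypothetical <<reliable links>> check: goals = {goal} values of the
  # <<guarantee>> dependency, faults = Faults_R(s) of the link's stereotype s.
  def reliable_links_ok(goals: dict, faults: dict) -> bool:
      ok = True
      if "immediate" in goals:  # immediate(t): delay(t') implies t' <= t
          ok = ok and faults.get("delay", 0.0) <= goals["immediate"]
      if "eventual" in goals:   # eventual(p): loss(p') implies p' <= 1 - p
          ok = ok and faults.get("loss", 0.0) <= 1 - goals["eventual"]
      if "correct" in goals:    # correct(q): corruption(q') implies q' <= 1 - q
          ok = ok and faults.get("corrupt", 0.0) <= 1 - goals["correct"]
      return ok

  # Cf. the example on the next slide: with redundancy model "none" the goal
  # immediate(10) is met iff the link's expected delay T satisfies T <= 10.
  print(reliable_links_ok({"immediate": 10.0}, {"delay": 5.0}))   # True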

15
Example «reliable links»
  • Given redundancy model none, «reliable links» is fulfilled iff T ≤ t, where delay(T) ∈ Faults_none(«crash/performance»).

16
Reliable Data Structure
  • Assuming immediate(t) ∈ goals(realtime), is the data structure reliable?

17
«reliable dependency»
  • Communication dependencies should respect the reliability requirements on «critical» data.
  • For each reliability level l of «critical» data, have goals(l) ⊆ {immediate(t), eventual(p), correct(q)}.
  • Constraint: for each dependency d from C to D stereotyped «guarantee» (sketched below):
  • goals on data in D are the same as those in C,
  • goals on data in C that also appears in D are met by the guarantees of d.
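
A sketch of this constraint over hypothetical per-component goal tables; the data structures are assumptions, with C the client of dependency d and D its supplier:

  # Hypothetical <<reliable dependency>> check.
  def reliable_dependency_ok(goals_C: dict, goals_D: dict,
                             guarantees_d: set) -> bool:
      # Goals on data in D must be the same as those in C.
      for data, goals in goals_D.items():
          if goals_C.get(data, set()) != goals:
              return False
      # Goals on data appearing in both C and D must be met by d's guarantees.
      for data, goals in goals_C.items():
          if data in goals_D and not goals <= guarantees_d:
              return False
      return True

  # Cf. the example on the next slide: the dependency offers no guarantee
  # for measure(), so the realtime goal immediate(t) is violated.
  print(reliable_dependency_ok(goals_C={"measure()": {"immediate(t)"}},
                               goals_D={"measure()": {"immediate(t)"}},
                               guarantees_d=set()))                # False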

18
Example «reliable dependency»
  • Assuming immediate(t) ∈ goals(realtime), this violates «reliable dependency», since Sensor and the dependency do not provide the realtime goal immediate(t) for measure() required by Controller.

19
«reliable behavior»
  • Ensures that the system behavior in the presence of the failure model provides the required reliability goals.
  • For any execution trace h and any transmission of a value along a communication dependency stereotyped «guarantee», the following constraints should hold, given the reliability goal (a trace-based sketch follows below):
  • immediate(t): every value is delivered after at most t time steps,
  • eventual(p): with probability at least p, every value is eventually delivered,
  • correct(q): the probability that a delivered value is corrupted during transmission is at most 1 - q.
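
A trace-based sketch of this check; the trace format is an assumption, and sampling only approximates the probabilistic goals, which are really defined over the formal execution semantics:

  # Hypothetical estimate of the <<reliable behavior>> goals from sampled
  # transmissions; each record is (delivered, delay_steps, corrupted).
  def reliable_behavior_estimate(transmissions, t, p, q) -> dict:
      delivered = [x for x in transmissions if x[0]]
      n = len(transmissions)
      return {
          # immediate(t): every value delivered after at most t time steps
          "immediate": all(d and steps <= t for d, steps, _ in transmissions),
          # eventual(p): with probability at least p a value is delivered
          "eventual": (len(delivered) / n >= p) if n else True,
          # correct(q): P(delivered value corrupted) at most 1 - q
          "correct": (sum(c for *_, c in delivered) / len(delivered) <= 1 - q)
                     if delivered else True,
      }

  # Usage with three sampled transmissions and goals t=5, p=0.9, q=0.99:
  samples = [(True, 3, False), (True, 5, False), (True, 2, False)]
  print(reliable_behavior_estimate(samples, t=5, p=0.9, q=0.99))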

20
Dependable Interference
  • Acceptable interference between safe and unsafe data?

21
«containment»
  • Prevents indirect corruption of data.
  • Constraint (sketched below):
  • the value of any data element d may only be influenced by data whose requirements attached to «critical» imply those of d.
  • Made precise by referring to the execution semantics (view of the history associated with a dependability level).
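
A sketch of this constraint over a hypothetical influence graph; requirement implication is simplified to set inclusion, whereas the precise version refers to the execution semantics as stated above:

  # Hypothetical <<containment>> check: 'influences' maps each data element
  # to the elements it is computed from; 'reqs' gives the requirements
  # attached to <<critical>> for each element.
  def containment_ok(influences: dict, reqs: dict) -> bool:
      for d, sources in influences.items():
          for s in sources:
              # Requirements of a source must imply (here: include) those of d.
              if not reqs.get(s, set()) >= reqs.get(d, set()):
                  return False
      return True

  # Cf. the example on the next slide: a safe value computed from an
  # unsafe one violates containment.
  print(containment_ok(influences={"safe_value": {"unsafe_value"}},
                       reqs={"safe_value": {"safe"}, "unsafe_value": set()}))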

22
Example containmentÀ
  • Violates containment, because a safe value depends on an unsafe value.
  • This can be checked mechanically.

23
Tool Support: Fault Models
  • q_l^n: messages on link l delayed a further n time units.
  • p_h^n: probability of the fault at the n-th iteration in history h.
  • For a link l stereotyped s where loss(p) ∈ Faults_R(s):
  • the history may give q_l^0 := ∅ (value lost); then append p to p_h,
  • or no change; then append 1 - p.
  • For a link l stereotyped s where corruption(q) ∈ Faults_R(s):
  • the history may give q_l^0 corrupted; then append q,
  • or no change; append 1 - q.
  • For a link l stereotyped s with delay(t) ∈ Faults_R(s) and q_l^0 ≠ ∅: the history may give q_l^n := q_l^0 for some n ≤ t; append 1/t.
  • Then distribute q_l^0 and, for each n, set q_l^n := q_l^(n+1) (a simulation sketch follows below).
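
A small simulation sketch of this fault model as reconstructed above; the queue representation, probabilities, and names are assumptions, not the tool's actual data structures:

  import random

  # Hypothetical per-link simulation step: queue[n] holds the value to be
  # delivered after n more time steps; probs collects the history p_h.
  def step_link(queue: list, faults: dict, probs: list):
      if queue and queue[0] is not None:
          if "loss" in faults:                        # loss(p) in Faults_R(s)
              p = faults["loss"]
              if random.random() < p:
                  queue[0] = None                     # value lost
                  probs.append(p)
              else:
                  probs.append(1 - p)                 # no change
          if queue[0] is not None and "corrupt" in faults:
              q = faults["corrupt"]
              if random.random() < q:
                  queue[0] = ("corrupted", queue[0])  # value corrupted
                  probs.append(q)
              else:
                  probs.append(1 - q)
          if queue[0] is not None and "delay" in faults:
              t = faults["delay"]
              n = random.randint(1, t)                # pushed back n <= t steps
              queue.extend([None] * (n + 1 - len(queue)))
              queue[n], queue[0] = queue[0], None
              probs.append(1 / t)
      # Distribute q_l^0 and shift: q_l^n := q_l^(n+1).
      return queue.pop(0) if queue else None

  # Usage: a value enqueued for immediate delivery on a lossy, slow link.
  q, history = ["value", None], []
  print(step_link(q, {"loss": 0.01, "delay": 3}, history), history)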

24
Other Checks
  • There are other consistency checks, such as:
  • Is the software's response to out-of-range values specified for every input?
  • If input arrives when it shouldn't, is a response specified?
  • and other safety checks from the literature.

25
  • Any questions?

26
Testing Real-Time Systems
  • Very challenging.
  • For a high level of assurance, would need full coverage (test every possible execution).
  • Usually infeasible (especially for reactive systems).
  • Have heuristics for the trade-off between development effort and reliability.
  • Need to ask yourself:
  • How complete is the heuristic?
  • How can I validate it?

27
Recent Trends in Academic Research
  • Model-based testing (e.g. based on Real-time UML). Advantages:
  • Precise measures for completeness.
  • Can be formally validated.
  • Two complementary strategies:
  • conformance testing,
  • testing for criticality requirements.

28
Conformance Testing
  • Classical approach in model-based test generation (much literature).
  • Can be superfluous when using code generation, except to check your code generator, but then only once and for all.
  • Works independently of real-time requirements.

29
Conformance Testing Caveats
  • Complete test coverage still infeasible (although coverage can be measured).
  • Can only test the code against what is contained in the model; usually, the model is more abstract than the code. This may lead to blind spots.
  • For both reasons, critical test cases may be missed; hence criticality testing.

30
Criticality Testing Strategies
  • Internal: ensure that test-case selection from the models does not miss critical cases; select according to information on criticality.
  • External: test the code against possible environment interaction generated from parts of the model (e.g. a deployment diagram with information on the physical environment).

31
Criticality Testing
  • The shortcoming of classical model-based test generation (conformance testing) motivates criticality testing.
  • Goal: model-based test generation adequate for critical real-time systems.

32
Internal Criticality Testing
  • Need a behavioral semantics of the specification language used (precise enough to be understood by a tool).
  • Here: semantics for a simplified fragment of UML in pseudo-code (ASMs).
  • Select test cases according to the criticality annotations in the class diagrams (a selection sketch follows below).
  • Test cases: critical selections of the intended behavior of the system.
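
A sketch of such a selection step; the test-case and annotation formats are assumptions, whereas the actual generation works from the ASM semantics of the UML fragment:

  # Hypothetical internal criticality-based selection: keep generated test
  # cases (message sequences) that exercise operations marked <<critical>>
  # in the class diagrams.
  def select_critical_tests(test_cases, critical_ops):
      return [tc for tc in test_cases
              if any(msg.split("(")[0] in critical_ops for msg in tc)]

  # Usage with two generated sequences and measure() tagged critical:
  tests = [["init()", "measure()", "report()"], ["init()", "selftest()"]]
  print(select_critical_tests(tests, critical_ops={"measure"}))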

33
External Criticality Testing
  • Generate test-sequences representing the
    environment behaviour from the criticality
    information in the deployment diagrams.