Transcript and Presenter's Notes

Title: An Integrated Approach to Security Management


1
An Integrated Approach to Security Management
  • M. Shereshevsky,
  • R. Ben Ayed, A. Mili
  • Monday, March 14, 2005

2
Target: A Prototype for Managing Security Goals
and Claims
  • Composing Claims: Given a set of security
    measures, each resulting in a claim,
  • Can we add up the claims (resulting from
    different security measures) in a comprehensive
    manner?
  • Can we infer specific security properties
    (probability/ cost of specific attacks)?
  • Can we expose redundancies between claims?
  • Can we expose loopholes/ vulnerabilities?

3
Target: A Prototype for Managing Security Goals
and Claims
  • Decomposing Security Goals: Given a security
    goal we want to achieve,
  • How can we decompose it into achievable
    sub-goals?
  • Dispatch sub-goals to layers/ sites to maximize
    impact.
  • Dispatch sub-goals to methods so as to control
    (minimize?) verification effort.
  • Dispatch verification effort to sub-goals so as
    to control (minimize?) failure cost?
  • Issues: Representing and reasoning about claims
    and goals.

4
Outline
  • Dependability: A Multi-Dimensional Attribute
  • An Integrated Approach to Reliability
  • A Uniform Representation for Dependability
    Measures
  • Inference Rules
  • General Applications
  • Specialization: A Prototype for Managing Security

5
Dependability: A Multi-Dimensional Attribute
  • Four Dimensions of Dependability
  • Availability: Probability of providing services
    when needed.
  • Reliability: Probability of Failure-Free
    Operation.
  • Safety: Probability of Disaster-Free Operation.
  • Security: Probability of Interference-Free
    Operation (exposure, intrusion, damage).
  • Conceptually orthogonal, actually interdependent.

6
Availability
  • Depends on
  • Reliability.
  • Repair Time.
  • Hence dependent on Reliability.
  • Related to Security: denial of service (DoS)
    affects availability.

7
Reliability
  • Basic Concepts
  • Fault. System feature that may cause system to
    misbehave.
  • Error. Impact of fault on system state. Early
    warning of misbehavior.
  • Failure. Impact of fault on system
    (mis)behavior. Observable misbehavior.
  • Fault: system feature; Error: state feature;
    Failure: output feature.

8
Reliability, II
  • Basic Means
  • Fault Avoidance. Fault Free Design.
  • Fault Removal. Debugging/ Testing.
  • Fault Tolerance. Error detection and recovery.
  • Three successive, increasingly desperate, lines
    of defense.

9
Safety
  • Key Concepts
  • Hazard. System feature that makes accidents
    possible.
  • Mishap. Operational conditions that make an
    accident imminent.
  • Accident. Unplanned, undesirable event.
  • Damage. Loss that results from an accident.

10
Safety, II
  • Key Measures
  • Hazard Avoidance. Hazard Free design.
  • Hazard Removal. Intervene before hazards cause
    accidents.
  • Damage Limitation. Limit the impact of an
    accident.
  • Three successive lines of defense.

11
Reliability vs. Safety
12
Security
  • Key Concepts
  • Vulnerability. System feature that makes an
    attack possible.
  • Threat. Situational condition that makes an
    attack possible.
  • Exposure/ Attack. Deliberate or accidental loss
    of data and/or resources.

13
Security, II
  • Key Measures
  • Vulnerability Avoidance. Vulnerability Free
    design.
  • Attack Detection and Neutralization. Intervene
    before Attacks cause loss/ damage.
  • Exposure Limitation. Limit the impact of
    attacks/ exposure. Intrusion-tolerance.
  • Three successive lines of defense.

14
Special Role of Security
  • Without security, there can be no reliability or
    safety. All claims about reliability and safety
    become void if the system's data and programs can
    be altered.
  • Without reliability, there can be no security.
    Security measures can be viewed as part of the
    functional requirements of the system.

15
Integrated Approach to Reliability
  • Three broad families of methods: fault
    avoidance, fault removal, fault tolerance.
  • Which works best? Spirited debate, dogmatic
    positions, jokes, etc.
  • Pragmatic position: use all three in concert,
    whenever possible/ warranted.

16
Rationale for Eclectic Approach
  • The Law of Diminishing Returns.
  • Method effectiveness varies greatly according to
    specification.
  • Refinement calculus allows us to compose
    verification efforts/ claims.
  • Refinement calculus allows us to decompose
    verification goals.

17
Mapping Methods to Specifications
  • Proving: Reflexive, transitive relations.
  • Testing: Relations that are good candidates for
    oracles (reliably implemented).
  • Fault Tolerance: Unary relations that are
    reliably and efficiently implemented.

18
Composing Verification Effort
  • All methods must be cast in a common logical
    framework.
  • Refinement calculus (based on relations) offers
    such a common framework.
  • Specifications and programs are relations;
    refinement is an ordering between relations, with
    lattice properties.

19
Modeling Verification Methods
  • Proving: Proving that P is correct with respect
    to specification V:
  • P ⊒ V.
  • Testing: Certification testing, oracle Ω, test
    data D, successful test on D:
  • P ⊒ T,
  • where T = D\Ω (see the sketch below).

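The following is a minimal, illustrative sketch (not from the slides) of this relational view: specifications and programs as finite relations, i.e. sets of (state, state') pairs, with a simplified refinement test based on domain and image containment. All names and the example data are assumptions.

```python
# Illustrative sketch: specifications and programs as finite relations
# (sets of (state, state') pairs), with a simplified refinement test.

def dom(rel):
    """Domain of a relation: the states for which it prescribes an outcome."""
    return {s for (s, _) in rel}

def image(rel, s):
    """The set of outcomes the relation allows for state s."""
    return {t for (u, t) in rel if u == s}

def refines(p, r):
    """P refines R (P ⊒ R): P is defined wherever R is, and on dom(R)
    every behavior of P is allowed by R."""
    return dom(r) <= dom(p) and all(image(p, s) <= image(r, s) for s in dom(r))

# Hypothetical example: V is a specification, P a program that is more
# defined and more deterministic than V requires.
V = {(0, 0), (0, 1), (1, 1)}
P = {(0, 0), (1, 1), (2, 2)}
assert refines(P, V)   # P ⊒ V: the proving claim of the slide, in miniature
```
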
20
Modeling Verification Methods, II
  • Fault Tolerance: Upon each recovery block, check
    condition C, invoke recovery routine R. Because
    we do not know which outcome we have each time,
    all we can claim is
  • P ⊒ F,
  • where F = C ⊓ R.

21
Cumulating Results
  • Proving: P ⊒ V.
  • Testing: P ⊒ T.
  • Fault Tolerance: P ⊒ F.
  • Lattice identity:
  • P ⊒ (V ⊔ T ⊔ F).
  • Cumulating verification results into a
    comprehensive correctness claim (sketch below).

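As a small illustration of the lattice identity, here is a sketch under the same simplified finite-relation model used earlier; the join formula assumes the claims are consistent (the join exists), and all data are made up.

```python
# Illustrative sketch of cumulating claims: if P ⊒ V, P ⊒ T and P ⊒ F, then
# P ⊒ (V ⊔ T ⊔ F). Same finite-relation helpers as in the earlier sketch.

def dom(rel):
    return {s for (s, _) in rel}

def image(rel, s):
    return {t for (u, t) in rel if u == s}

def refines(p, r):
    return dom(r) <= dom(p) and all(image(p, s) <= image(r, s) for s in dom(r))

def join(r1, r2):
    """Least upper bound of r1 and r2 in the refinement ordering (assuming it
    exists): follow r1 outside dom(r2), r2 outside dom(r1), both elsewhere."""
    d1, d2 = dom(r1), dom(r2)
    return ({(s, t) for (s, t) in r1 if s not in d2}
            | {(s, t) for (s, t) in r2 if s not in d1}
            | (r1 & r2))

# Hypothetical claims established by proving (V), testing (T), fault tolerance (F).
V = {(0, 0), (0, 1)}
T = {(1, 1)}
F = {(0, 0), (2, 2)}
P = {(0, 0), (1, 1), (2, 2)}
assert refines(P, V) and refines(P, T) and refines(P, F)
assert refines(P, join(join(V, T), F))   # the cumulated claim P ⊒ (V ⊔ T ⊔ F)
```
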
22
Lattice Properties
23
Decomposing Verification Goals
  • Premises:
  • A complex specification can be decomposed into
    simpler sub-specifications in a
    refinement-compatible manner:
  • S = S1 ⊔ S2 ⊔ ... ⊔ SN.
  • We can consider each Si in turn, mapping it to
    the method that is most efficient for it (see the
    dispatch sketch below).

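A toy sketch of the dispatching idea: given sub-goals and per-method cost estimates, pick the cheapest method for each sub-goal. The sub-goal names, methods and cost figures are purely illustrative.

```python
# Toy sketch: dispatch each sub-goal to the verification method with the lowest
# estimated cost. Sub-goal names, methods and costs are illustrative only.
ESTIMATED_COST = {
    ("S1", "Proving"): 10, ("S1", "Testing"): 4, ("S1", "Fault Tolerance"): 7,
    ("S2", "Proving"): 3,  ("S2", "Testing"): 9, ("S2", "Fault Tolerance"): 6,
}

def dispatch(sub_goals, methods, cost):
    """Map every sub-goal to its cheapest method under the given cost estimates."""
    return {g: min(methods, key=lambda m: cost[(g, m)]) for g in sub_goals}

print(dispatch(["S1", "S2"], ["Proving", "Testing", "Fault Tolerance"], ESTIMATED_COST))
# {'S1': 'Testing', 'S2': 'Proving'}
```
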
24
Mapping Specifications to Methods
25
A Uniform Representation for Dependability
Measures
  • A purely logical representation of verification
    results is unrealistic:
  • Most verification results are best viewed as
    probabilistic, not logical, statements.
  • Most verification results are conditional,
    contingent upon different conditions.
  • Many verification results can be interpreted in
    more than one way.

26
Probabilistic (vs Logical) Claims
  • No absolute certainty.
  • Even highly dependable, totally formal,
    verification systems may fail.
  • We want to quantify level of confidence.

27
Verification Results are Conditional
  • Proving: Conditional on the verification rules
    being consistent with the actual compiler.
  • Testing: Conditional on the testing environment
    being identical to the operating environment.
  • Fault Tolerance: Conditional on the system
    preserving recoverability.

28
Multiple Interpretations
  • Testing, first interpretation: P ⊒ D\Ω, with
    probability 1.0.
  • Testing, second interpretation: P ⊒ Ω, with
    probability p < 1.0, conditional on D being
    representative.
  • Which interpretation do we choose? We do not
    have to choose, in fact. We can keep both, and
    learn to add/ cumulate them.

29
Characterizing Verification Claims
  • Goal. Correctness preservation, recoverability
    preservation, operational attribute.
  • Reference. Specification, Safety requirement,
    security requirement, etc.
  • Assumption. Implicit conditions in each method.
  • Certainty. Quantified by probability.
  • Stake. Cost of failure to meet the goal; applies
    to safety and security.
  • Method. Static vs dynamic, others. May affect
    the cost of performing the verification task.

30
Representing Verification Claims
  • A list (see the record sketch below):
  • Ψ(G, R, A, P, S, M)
  • G: Goal; R: Reference; A: Assumption;
  • P: Probability; S: Stake; M: Method.

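One way such claims could be stored in a knowledge base is sketched below; the six field names follow the slide, while the class name, types and defaults are assumptions of this sketch.

```python
# Illustrative sketch: a record type for claims Ψ(G, R, A, P, S, M).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Claim:
    goal: str                    # G: e.g. correctness or recoverability preservation
    reference: str               # R: specification, safety or security requirement, ...
    assumption: str              # A: conditions under which the claim holds
    probability: float           # P: quantified certainty
    stake: Optional[str] = None  # S: cost of failing the goal (e.g. "HiC", "LowC")
    method: str = ""             # M: Proving, Testing, Fault Tolerance, ...
```
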
31
Sample Representations
  • Certification Testing (instances below)
  • First: Ψ(⊒, D\Ω, true, 1.0, –, Testing),
  • where D is the successful test data.
  • Second: Ψ(⊒, Ω, A, 0.7, –, Testing),
  • where A is the representativeness of D.

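Using the illustrative Claim record from the earlier sketch, the two testing interpretations might be recorded as follows; the strings and the knowledge-base list are assumptions.

```python
# The two interpretations of one successful certification test, kept side by side
# (requires the Claim record sketched above).
first = Claim(goal="correctness", reference="D\\Omega", assumption="true",
              probability=1.0, method="Testing")
second = Claim(goal="correctness", reference="Omega",
               assumption="D is representative of operational use",
               probability=0.7, method="Testing")
knowledge_base = [first, second]    # both claims are retained and cumulated later
```
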
32
Sample Representations
  • Formal Verification
  • First: Ψ(⊒, R, A, 0.99, –, Proving),
  • where R is the system specification and A is the
    condition that the verification method is
    consistent with the operational environment.
  • Second: Ψ(⊒, R1, A, 0.995, HighC, Proving) ∧
    Ψ(⊒, R2, A, 0.97, LowC, Proving).

33
Sample Representations
  • Verifying an Initialization Sequence with respect
    to N
  • Ψ(⊒, R, A ∧ B, P, –, Proving),
  • where R is the system specification,
  • A is the condition that the verification method
    is consistent with the operational environment,
  • B is the condition that the body of the system
    refines the right residual of R by N (the
    solution in X to N∘X ⊒ R).
  • The approach is also applicable to regression
    testing: updating claims after a maintenance
    operation. Open issue: negating, overriding, or
    replacing previous claims?

34
Sample Representation
  • Safety Measure (instance below)
  • Ψ(⊒, S, A, 0.999, HiC, Proving),
  • where S is the safety requirement, A is the
    consistency of the verification method with the
    operational environment, and HiC is the cost of
    failing to meet the condition P ⊒ S.
  • How does security differ from safety/
    reliability: a different goal or a different
    reference?

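The safety-measure claim above, expressed with the same illustrative Claim record; the stake is kept as the symbolic label HiC rather than a numeric cost.

```python
# Sample safety claim as a Claim record (requires the Claim record sketched earlier).
safety_claim = Claim(goal="safety", reference="S (safety requirement)",
                     assumption="verification method consistent with operational environment",
                     probability=0.999, stake="HiC", method="Proving")
```
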
35
Inference Rules
  • Collecting claims is insufficient.
  • Cumulating/Synthesizing claims (as we did with
    logical results) is impractical.
  • Build inference mechanisms that can infer
    conclusions from a set of claims.
  • We will explore applications of this capability,
    subsequently.

36
Inference Rules
  • Orthogonal representation, for the purpose of
    enabling inferences:
  • Pr(S ⊒ R | A) = P.
  • Two additional cost functions:
  • Verification cost
  • VC: Goal × Ref × Meth × Assum → Cost.
  • Failure cost
  • FC: Goal × Ref → Cost.

37
Inference Rules
  • Derived from refinement calculus (bounds
    sketched below):
  • Pr(S ⊒ (R1 ⊔ R2) | A)
    ≤ min(Pr(S ⊒ R1 | A), Pr(S ⊒ R2 | A)).
  • Pr(S ⊒ (R1 ⊔ R2) | A)
    = Pr(S ⊒ R1 | A) × Pr(S ⊒ R2 | A),
    when the two claims are independent.
  • Pr(S ⊒ (R1 ⊓ R2) | A)
    ≥ max(Pr(S ⊒ R1 | A), Pr(S ⊒ R2 | A)).

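A small sketch of how the bounds above could be computed mechanically; it assumes the join or meet of the references exists and takes the individual claim probabilities as inputs.

```python
# Sketch: bounds on claims about composite references, per the rules above.

def join_claim_upper_bound(p1: float, p2: float) -> float:
    """Upper bound on Pr(S ⊒ R1 ⊔ R2 | A) from Pr(S ⊒ R1 | A) and Pr(S ⊒ R2 | A)."""
    return min(p1, p2)

def join_claim_independent(p1: float, p2: float) -> float:
    """Pr(S ⊒ R1 ⊔ R2 | A) when the two claims are statistically independent."""
    return p1 * p2

def meet_claim_lower_bound(p1: float, p2: float) -> float:
    """Lower bound on Pr(S ⊒ R1 ⊓ R2 | A)."""
    return max(p1, p2)

# Example: a proof claim at 0.99 and a testing claim at 0.7 on two sub-goals.
print(join_claim_upper_bound(0.99, 0.7))    # 0.7
print(join_claim_independent(0.99, 0.7))    # 0.693
```
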
38
Inference Rules
  • Derived from probability calculus:
  • R1 ⊒ R2 ⟹ Pr(S ⊒ R1 | A) ≤ Pr(S ⊒ R2 | A).
  • (A1 ⟹ A2) ⟹ Pr(S ⊒ R | A1) ≥ Pr(S ⊒ R | A2).
  • Pr(S ⊒ R | A ∧ B)
    = Pr(S ⊒ R | A) × Pr(S ⊒ R | B) / Pr(S ⊒ R).
  • Bayes' Theorem (combination sketch below).

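A sketch of the Bayes-style combination rule above; the clamp to 1.0 and the example numbers are assumptions of this sketch, and the formula is only meaningful when the two assumptions are independent sources of evidence about the claim.

```python
# Sketch: merging two conditional claims about the same goal/reference into a
# claim under both assumptions, per the Bayes-style rule above.

def combine_assumptions(p_given_a: float, p_given_b: float, p_prior: float) -> float:
    """Pr(S ⊒ R | A ∧ B) ≈ Pr(S ⊒ R | A) * Pr(S ⊒ R | B) / Pr(S ⊒ R)."""
    combined = (p_given_a * p_given_b) / p_prior
    return min(combined, 1.0)   # clamp: keep the estimate a probability

# Example: two independent analyses each raise confidence from a 0.7 prior to 0.8.
print(round(combine_assumptions(0.8, 0.8, 0.7), 3))   # 0.914
```
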
39
Inference Rules
  • Derived from cost functions:
  • R1 ⊒ R2 ⟹ VC(G, R1, M, A) ≥ VC(G, R2, M, A).
  • (A1 ⟹ A2) ⟹
  • VC(G, R, M, A2) ≥ VC(G, R, M, A1).
  • R1 ⊒ R2 ⟹ FC(G, R1) ≥ FC(G, R2).

40
General Applications
  • Providing dependability
  • Deploy eclectic approaches.
  • Dispatch goals to methods to control verification
    costs.
  • Dispatch verification effort to verification
    goals to control failure costs.
  • Budget verification cost.
  • Minimize / assess failure risk (probability,
    severity of failure).

41
General Applications
  • Assessing dependability
  • Deploy eclectic/ diverse approaches.
  • Record all measures in knowledge base.
  • Update the knowledge base as the system evolves.
  • Maximize coverage.
  • Minimize overlap.
  • Budget verification cost.
  • Minimize / assess failure risk (probability,
    severity of failure).

42
General Applications
  • Certifying dependability
  • Deploy eclectic/ diverse approaches.
  • Budget certification cost.
  • Target certification effort (certification goal
    inferred from claims established by certification
    activity).

43
Security Specific Considerations
  • Unbounded Cost.
  • Distributed Measures.
  • Distributed Control.
  • Focus on Components.
  • Refinement by parts: requirements RA, RB;
    components CA, CB.

44
A Prototype for Managing Security
  • Theoretical/ Modeling steps.
  • Adapting/ specializing dependability model to
    security properties.
  • Exploring Security in the broader context of
    dependability (clarifying dependencies).
  • Modeling security measures (vulnerability
    avoidance, attack detection and neutralization,
    intrusion tolerance, exposure limitation, etc).
  • Representing such measures in a uniform knowledge
    base.

45
A Prototype for Managing Security
  • Experimental/ Empirical steps.
  • Designing and coding inference rules.
  • Modeling / Representing Claims, Rules.
  • Specifying query capabilities.
  • Selecting a sample system (modeling security
    aspects, relations to dependability).
  • Experimenting with query capabilities.
  • Build a demo.

46
Sample Queries
  • Can we infer
  • Pr(S ⊒ R | A) ≥ P
  • for a given S, R, A, and P?
  • Provide a lower bound for
  • Pr(S ⊒ R | A).
  • Provide an upper bound for the weighted failure
    cost, for given S, R, and A (query sketch below).

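A sketch of how such queries might run against a base of recorded claims, reusing the illustrative Claim record from earlier; matching on reference/assumption strings and the scalar failure-cost parameter are simplifications of this sketch.

```python
# Sketch: answering the sample queries against a list of Claim records.
from typing import Iterable

def lower_bound(claims: Iterable, reference: str, assumption: str) -> float:
    """Largest probability recorded for this reference under this assumption."""
    matching = [c.probability for c in claims
                if c.reference == reference and c.assumption == assumption]
    return max(matching, default=0.0)

def can_infer(claims, reference, assumption, target_p: float) -> bool:
    """Query: can we infer Pr(S ⊒ R | A) ≥ P from the recorded claims?"""
    return lower_bound(claims, reference, assumption) >= target_p

def weighted_failure_cost_bound(claims, reference, assumption, failure_cost: float) -> float:
    """Upper bound on expected failure cost: (1 - lower bound on Pr) × failure cost."""
    return (1.0 - lower_bound(claims, reference, assumption)) * failure_cost
```
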
47
Conclusion
  • Premise: Consider security in the broader
    context of dependability, on which it depends.
  • Premise: Exploit analogies between aspects of
    dependability to migrate methods.
  • Premise: Capture all measures taken during
    design and maintenance in a uniform
    representation that lends itself to inferences.

48
Prospects
  • Eclectic, yet Integrated, Approach.
  • Allows us to model diverse approaches, and
    combine their results.
  • Allows us to measure claims.
  • Allows us to budget costs and risks.
  • Support Tool.

49
Relevant Wisdom
  • Une science a l'âge de ses instruments de mesure.
  • Louis Pasteur.
  • A Science is as advanced as its measurement tools.