Decentralized Robustness - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
Decentralized Robustness
  • Stephen Chong
  • Andrew C. Myers
  • Cornell University
  • CSFW 19
  • July 6th 2006

2
Information flow
  • Strong end-to-end security guarantees
  • Noninterference [Goguen and Meseguer 1982]
  • Enforceable with type systems
  • [Volpano, Smith, and Irvine 1996] and many others
  • But programs declassify information
  • Declassification breaks noninterference!
  • Need semantic security conditions that hold in the
    presence of declassification

3
Robustness
  • Intuition: An attacker should not be able to
    control the release of information
  • [Zdancewic and Myers, CSFW 2001]
  • Defined a semantic security condition
  • An active attacker should not learn more
    information than a passive attacker
  • [Myers, Sabelfeld, and Zdancewic, CSFW 2004]
  • Language-based setting
  • The data to declassify and the decision to declassify
    should not be influenced by the attacker
  • Type system for enforcement
  • Tracks integrity
  • High-integrity data is not influenced by the attacker

4
Issues with robustness
(Slide figure: four principals Alice, Bob, Charlie, and Damien)
  • There may be many attackers
  • Different attackers have different powers
  • How can we ensure a system is robust against an
    unknown attacker?
  • Different users trust different things
5
Decentralized robustness
  • Define robustness against all attackers
  • A generalization of robustness
  • Not specialized to a particular attacker
  • Uses the decentralized label model [Myers and Liskov
    2000] to characterize the power of different
    attackers
  • Enforce robustness against all attackers
  • Complicated by unbounded, unknown attackers
  • Sound type system
  • Implemented in Jif 3.0
  • Decentralized robustness = robustness + DLM

6
Attackers
  • Language-based setting
  • An attacker A may
  • view certain memory locations
  • know the program text
  • inject code at certain program points
  • (and thus modify memory)
  • The power of an attacker is characterized by security
    labels R_A and W_A
  • R_A is an upper bound on the info A can read
  • W_A is a lower bound on the info A can write
  • Defn: Program c has robustness w.r.t. attacks by
    A with power R_A and W_A if A's attacks cannot
    influence the information released to A.

(Slide figure: security lattice L, with R_A and W_A marked
between "most restrictive" at the top and "least restrictive"
at the bottom)
7
Example
// Charlie can read pub, but cannot read salary[i],
// totalSalary, or avgSalary:
//   pub ⊑ R_Charlie,  avgSalary ⋢ R_Charlie
// Charlie can modify employeeCount:
//   W_Charlie ⊑ employeeCount
totalSalary = 0; i = 0;
while (i < employeeCount) {
  totalSalary = totalSalary + salary[i];
  i = i + 1;
}
avgSalary = totalSalary / i;
pub = declassify(avgSalary,
        (Alice or Bob)                  // ℓ_from
        to (Alice or Bob or Charlie));  // ℓ_to
// Attack: the attacker injects employeeCount = 1, so the
// declassified "average" is exactly salary[0].
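The attack on this slide can be replayed concretely. A minimal Python sketch of the loop above (the salary figures are invented for illustration):

```python
def avg_salary(salary, employee_count):
    """Average the first employee_count salaries.
    employee_count is low-integrity: the attacker may set it."""
    total = 0
    i = 0
    while i < employee_count:
        total += salary[i]
        i += 1
    return total / i

salaries = [50000, 70000, 90000]

# Honest run: the declassified average hides individual salaries.
honest = avg_salary(salaries, len(salaries))   # 70000.0

# Attack: setting employeeCount to 1 makes the declassified
# "average" reveal salary[0] exactly.
leaked = avg_salary(salaries, 1)               # 50000.0
```

Because the attacker influences the data that feeds the declassified value, the program is not robust with respect to Charlie.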
8
Decentralized label model
  • Allows mutually distrusting principals to
    independently specify security policies [Myers
    and Liskov 2000]
  • This work: full lattice and integrity policies
  • Security labels are built from reader policies and
    writer policies
  • Reader policies o→r
  • Owner o allows r (and implicitly o) to read the
    information
  • Any principal that trusts o adheres to the policy
  • (i.e., allows at most o and r to read)
  • Any principal not trusting o gives the policy no
    credence
  • Confidentiality policies
  • Close reader policies under conjunction and
    disjunction

9
Integrity policies
  • Writer policies o→w
  • Owner o allows w (and implicitly o) to have
    influenced (written) the information
  • Any principal that trusts o adheres to the policy
  • (i.e., allows at most o and w to have written)
  • Any principal not trusting o gives the policy no
    credence
  • Integrity policies
  • Close writer policies under conjunction and
    disjunction

10
Semantics of policies
  • Confidentiality
  • readers(p, c) is the set of principals that principal
    p allows to read, based on confidentiality policy c
  • c is no more restrictive than d (written c ⊑_C d)
    if for all p, readers(p, c) ⊇ readers(p, d)
  • ⊑_C forms a lattice: meet is disjunction, join is
    conjunction
  • Integrity
  • writers(p, c) is the set of principals that principal
    p has allowed to write, based on integrity policy c
  • c is no more restrictive than d (written c ⊑_I d)
    if for all p, writers(p, c) ⊆ writers(p, d)
  • ⊑_I forms a lattice: meet is conjunction, join is
    disjunction
  • Dual to confidentiality
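These semantics can be prototyped over a finite universe of principals. A minimal Python sketch, assuming a simplified trust relation (a set of trusted owners per principal, standing in for the DLM's acts-for hierarchy); the principal names and trust sets are invented:

```python
PRINCIPALS = frozenset({"Alice", "Bob", "Charlie", "Damien"})

# Simplified trust relation: the owners each principal trusts.
# Damien trusts no one, so every policy gets no credence from him.
TRUSTS = {
    "Alice": {"Alice"},
    "Bob": {"Alice", "Bob"},
    "Charlie": {"Charlie"},
    "Damien": set(),
}

def readers(p, c):
    """Principals that p allows to read under policy c.
    Policies: ("reader", o, r), ("and", c1, c2), ("or", c1, c2)."""
    tag = c[0]
    if tag == "reader":
        _, o, r = c
        # p adheres to o's policy only if p trusts o; otherwise the
        # policy restricts nothing from p's point of view.
        return frozenset({o, r}) if o in TRUSTS[p] else PRINCIPALS
    if tag == "and":   # conjunction: both policies must be obeyed
        return readers(p, c[1]) & readers(p, c[2])
    if tag == "or":    # disjunction: obeying either one suffices
        return readers(p, c[1]) | readers(p, c[2])
    raise ValueError(f"unknown policy: {tag}")

def leq_C(c, d):
    """c ⊑_C d: c is no more restrictive than d, i.e. every
    principal allows at least as many readers under c as under d."""
    return all(readers(p, c) >= readers(p, d) for p in PRINCIPALS)

ab = ("reader", "Alice", "Bob")            # Alice allows Bob to read
both = ("and", ab, ("reader", "Alice", "Charlie"))
```

Here leq_C(ab, both) holds while leq_C(both, ab) does not: the conjunction is strictly more restrictive, as the lattice structure above predicts.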

11
Labels
  • A label ⟨c, d⟩ is a pair of a confidentiality
    policy c and an integrity policy d
  • ⟨c, d⟩ ⊑ ⟨c′, d′⟩ if and only if c ⊑_C c′ and d ⊑_I d′
  • Labels are an expressive and concise language for
    confidentiality and integrity policies
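The pairwise ordering can be checked extensionally if each label is represented by the reader and writer sets it denotes for every principal. A sketch (this representation and the concrete sets are illustrative, not the paper's syntactic definition):

```python
# A label is modelled extensionally: for each principal p it records
# the readers p allows and the writers p accepts.

def label_leq(l1, l2):
    """⟨c, d⟩ ⊑ ⟨c′, d′⟩ iff c ⊑_C c′ and d ⊑_I d′.
    Confidentiality: reader sets may only shrink going up the lattice.
    Integrity (dual): writer sets may only grow."""
    conf = all(l1["readers"][p] >= l2["readers"][p]
               for p in l1["readers"])
    integ = all(l1["writers"][p] <= l2["writers"][p]
                for p in l1["writers"])
    return conf and integ

low = {"readers": {"Alice": {"Alice", "Bob"}},
       "writers": {"Alice": {"Alice"}}}          # public, trusted
high = {"readers": {"Alice": {"Alice"}},         # fewer readers: secret
        "writers": {"Alice": {"Alice", "Bob"}}}  # more writers: tainted
```

label_leq(low, high) holds and label_leq(high, low) does not, so information may flow from low to high but not back.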

12
Attacker power in the DLM
  • For arbitrary principals p and q, we need to
    describe what p believes is the power of q
  • Define the label R_{p→q} as the least upper bound of the
    labels that p believes q can read:
  • ℓ ⊑ R_{p→q} if and only if q ∈ readers(p, ℓ)
  • Define the label W_{p→q} as the greatest lower bound of the
    labels that p believes q can write:
  • W_{p→q} ⊑ ℓ if and only if q ∈ writers(p, ℓ)
13
Robustness against all attackers
  • Defn: Command c has robustness against all
    attackers if
  • for all principals p and q,
  • c has robustness with respect to
  • attacks by q with power R_{p→q} and W_{p→q}

14
Enforcement
  • Enforcing robustness [Myers, Sabelfeld, and Zdancewic
    2004]
  • If a declassification gives attacker A info, then
    A can't influence the data to declassify or the
    decision to declassify.
  • Enforcing robustness against all attackers
  • For all p and q: if p believes the declassification
    gives q info, then p believes q can't have influenced
    the data to declassify or the decision to declassify.
  • More formally:
  • For all principals p and q,
  • if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q} then
  • W_{p→q} ⋢ pc and W_{p→q} ⋢ ℓ_from
  • Can't use the MSZ type system for all possible
    attackers
  • It would require a different type system for each p
    and q!
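Since ℓ ⊑ R_{p→q} iff q ∈ readers(p, ℓ) and W_{p→q} ⊑ ℓ iff q ∈ writers(p, ℓ), this condition can be checked directly on reader and writer sets, without ever constructing R_{p→q} or W_{p→q}. A Python sketch of such a check; the scenario mirrors the salary example and all names are invented:

```python
def robust_declassify(principals, readers, writers, l_from, l_to, pc):
    """Check: for all p, q, if q learns from the declassification
    (q not in readers(p, l_from) but q in readers(p, l_to)), then q
    must influence neither the decision (q not in writers(p, pc))
    nor the data (q not in writers(p, l_from))."""
    for p in principals:
        for q in principals:
            learns = (q not in readers[p][l_from]
                      and q in readers[p][l_to])
            if learns and (q in writers[p][pc]
                           or q in writers[p][l_from]):
                return False
    return True

PR = ["Alice", "Bob", "Charlie"]
# From every p's point of view: l_from is readable by Alice and Bob,
# l_to also by Charlie; the pc is influenced by Charlie (he controls
# employeeCount in the salary example).
readers = {p: {"l_from": {"Alice", "Bob"},
               "l_to": {"Alice", "Bob", "Charlie"},
               "pc": {"Alice", "Bob", "Charlie"}} for p in PR}
writers = {p: {"l_from": {"Alice"},
               "l_to": {"Alice"},
               "pc": {"Charlie"}} for p in PR}

not_robust = robust_declassify(PR, readers, writers,
                               "l_from", "l_to", "pc")   # False
# If only Alice can influence the pc, the check passes.
writers_fixed = {p: {**writers[p], "pc": {"Alice"}} for p in PR}
robust = robust_declassify(PR, readers, writers_fixed,
                           "l_from", "l_to", "pc")       # True
```

The quantification over all (p, q) pairs is exactly why this is unusable as a typing rule when the set of principals is unbounded; the slides that follow replace it with a principal-free check.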

15
A sound but unusable typing rule

    Γ, pc ⊢ e : ℓ_from
    ℓ_to ⊔ pc ⊑ Γ(v)
    ∀p, q. if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q} then W_{p→q} ⋢ pc
    ∀p, q. if ℓ_from ⋢ R_{p→q} and ℓ_to ⊑ R_{p→q} then W_{p→q} ⋢ ℓ_from
    ------------------------------------------------------------------
    Γ, pc ⊢ v := declassify(e, ℓ_from to ℓ_to)

  ⇕

For all principals p and q, if ℓ_from ⋢ R_{p→q} and
ℓ_to ⊑ R_{p→q} then W_{p→q} ⋢ pc and W_{p→q} ⋢ ℓ_from
16
Sound typing rule

    Γ, pc ⊢ e : ℓ_from
    ℓ_to ⊔ pc ⊑ Γ(v)
    ℓ_from ⊑ ℓ_to ⊔ writersToReaders(pc)
    ℓ_from ⊑ ℓ_to ⊔ writersToReaders(ℓ_from)
    ------------------------------------------
    Γ, pc ⊢ v := declassify(e, ℓ_from to ℓ_to)

  • writersToReaders conservatively converts the writers
    of a label into readers.
  • Used to compare integrity against confidentiality:
  • ∀ℓ. ∀p. writers(p, ℓ) ⊆ readers(p, writersToReaders(ℓ))

  ⇓

  • For all principals p,
  • readers(p, ℓ_from) ⊇ readers(p, ℓ_to) ∩ writers(p, pc)
  • and readers(p, ℓ_from) ⊇ readers(p, ℓ_to) ∩ writers(p, ℓ_from)

  ⇓

For all principals p and q, if ℓ_from ⋢ R_{p→q} and
ℓ_to ⊑ R_{p→q} then W_{p→q} ⋢ pc and W_{p→q} ⋢ ℓ_from
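On the same extensional view (labels as per-principal reader and writer sets), writersToReaders and the rule's confidentiality premises can be prototyped with set operations. A sketch; the representation and the concrete labels are illustrative assumptions, not the paper's definition:

```python
ALL = frozenset({"Alice", "Bob", "Charlie"})

def writers_to_readers(label):
    """Build a label whose readers cover the original writers:
    for all p, writers(p, l) is a subset of
    readers(p, writersToReaders(l))."""
    return {"readers": {p: frozenset(ws)
                        for p, ws in label["writers"].items()},
            "writers": {p: ALL for p in label["writers"]}}

def join_readers(l1, l2, p):
    # Readers of a join: only principals both components allow.
    return l1["readers"][p] & l2["readers"][p]

def premise_holds(l_from, l_to, pc):
    """Confidentiality part of the premise
    l_from <= l_to join writersToReaders(pc): check
    readers(p, l_from) is a superset of the join's readers."""
    wtr = writers_to_readers(pc)
    return all(l_from["readers"][p] >= join_readers(l_to, wtr, p)
               for p in l_from["readers"])

# Salary-style scenario (invented sets): l_from readable by Alice
# and Bob, l_to readable by everyone.
l_from = {"readers": {p: frozenset({"Alice", "Bob"}) for p in ALL},
          "writers": {p: frozenset({"Alice"}) for p in ALL}}
l_to = {"readers": {p: ALL for p in ALL},
        "writers": {p: frozenset({"Alice"}) for p in ALL}}
# pc influenced by Charlie: the premise fails, so the rule
# rejects the declassification.
pc_attacked = {"readers": {p: ALL for p in ALL},
               "writers": {p: frozenset({"Charlie"}) for p in ALL}}
# pc influenced only by Alice: the premise holds.
pc_trusted = {"readers": {p: ALL for p in ALL},
              "writers": {p: frozenset({"Alice"}) for p in ALL}}
```

Note the check mentions no attacker: the quantification over all principals p and q from the unusable rule has been folded into the label comparison, which is what makes the rule implementable in a compiler such as Jif.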
17
Conclusion
  • Decentralized robustness robustness DLM
  • Defined robustness against all attackers
  • Semantic security condition
  • Generalizes robustness to arbitrary attackers
  • Decentralized label model expresses attackers
    powers
  • Sound type system
  • Implemented in Jif 3.0
  • Available at http//www.cs.cornell.edu/jif
  • Paper also considers downgrading integrity
  • Qualified robustness Myers, Sabelfeld, and
    Zdancewic 2004 is generalized to qualified
    robustness against all attackers