1
Erik Jonsson School of Engineering and Computer Science
A Model-Checking In-lined Reference Monitoring System for ActionScript
Meera Sridhar and Kevin W. Hamlen (The University of Texas at Dallas)
Supported in part by grants from AFOSR
Presentation at Adobe Systems Inc., San Francisco, CA, November 19, 2009
FEARLESS Engineering
www.utdallas.edu
2
Beyond Access Control
  • Security Policy: "A security policy for a system
    defines desired secure behavior for that system.
    More formally, consider a computer system to be a
    finite-state automaton with a set of transition
    functions that change state. Then, a security
    policy is a statement that partitions the states
    of the system into a set of authorized, or
    secure, states and a set of unauthorized, or
    nonsecure, states."
  • - Computer Security: Art and Science, by Matt
    Bishop
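Bishop's definition can be made concrete with a small sketch. The following Python fragment (a toy model with hypothetical states and transitions, not taken from the presentation) treats a system as a finite-state automaton and partitions its reachable states into secure and nonsecure sets:

```python
# Toy system: each state is the number of pop-up windows opened so far.
# Transitions model the "open a pop-up" operation (hypothetical numbers).
transitions = {0: [1], 1: [2], 2: [3], 3: [4], 4: [4]}

def is_secure(state):
    # The policy statement: at most 3 pop-ups is authorized.
    return state <= 3

# Compute the reachable states of the automaton.
reachable, frontier = set(), [0]
while frontier:
    s = frontier.pop()
    if s not in reachable:
        reachable.add(s)
        frontier.extend(transitions[s])

# The security policy partitions the state space.
authorized = {s for s in reachable if is_secure(s)}
unauthorized = reachable - authorized
```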

3
Trustworthy Small
  • 2. Trusted Computing Base (TCB): "a small amount
    of software and hardware that security depends on
    and that we distinguish from a much larger amount
    that can misbehave without affecting security."
  • - Lampson et al., Authentication in Distributed
    Systems
  • An important goal in computer security is to
    minimize the TCB.

4
Separation of Concerns
  • 3. code-producer and code-consumer
  • will use this terminology frequently
  • code-producer: the party that wrote the code
  • note that this can differ from the
    code-distributor; for example, in AIR, Adobe is
    the code-distributor, but the code-producer can
    be any person who chooses to create an AIR
    application and release it to the public
  • code-consumer: the end-user who will run the
    application

5
The need for more sophisticated security
enforcement tools for Flash and AIR
  • Existing security mechanisms for ActionScript and
    related technologies (including the AVM, Flash,
    Flex, and AIR) mainly fall into two categories:
  • Code-signing
  • Sandboxing
  • These suffice for certain classes of attacks
    (mainly access control), BUT
  • code-signing
  • places the code-producer in the TCB
  • the consumer can either choose to trust the
    code-producer and run the program, or choose not
    to trust the code-producer; there is no
    in-between
  • sandboxing
  • enforces only a small class of coarse-grained
    access control policies
  • policies are built into the AVM and runtime
    libraries

6
The need for more sophisticated security
enforcement tools for Flash and AIR, contd.
  • Limitations
  • cannot enforce system- and application-specific
    policies
  • e.g., prohibiting write access to files with only
    certain file extensions
  • cannot enforce finer-grained policies
  • e.g., those that constrain arguments to individual
    ActionScript instructions (such as arguments to
    eval)
  • "programs cannot use the file system" is too
    coarse-grained
  • the policy is encoded in the underlying OS/VM, so
  • the code-producer needs to know the policy; the
    policy cannot be modified or specified by the
    code-consumer after receiving the code
  • e.g., the Air Force wants to enforce policies that
    they can't even talk about
  • changes to the policy would need to be reflected
    in the OS/VM by modifying it: expensive and
    inflexible

7
Enter IRMs: What are they?
  • In-Lined Reference Monitors (IRMs)
  • inject runtime security guards directly into
    untrusted binaries
  • guards test whether an impending operation
    constitutes a policy violation
  • if so, some corrective action is taken to prevent
    the violation, such as premature termination
  • the result is self-monitoring code that can be
    safely executed without external monitoring
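As a hedged illustration (in Python rather than ActionScript bytecode, with invented names), the guard-injection idea looks roughly like this: a security-relevant operation is wrapped so that a runtime check runs before it, and a corrective action fires on an impending violation:

```python
import functools

MAX_POPUPS = 3          # the policy bound (hypothetical)
popups_opened = 0

def open_popup(url):
    """Stand-in for the untrusted, security-relevant operation."""
    return "opened " + url

def inject_guard(op):
    """The IRM step: wrap the operation with a runtime security guard."""
    @functools.wraps(op)
    def guarded(url):
        global popups_opened
        if popups_opened >= MAX_POPUPS:
            # Corrective action: here, abort instead of violating the policy.
            raise RuntimeError("policy violation prevented: too many pop-ups")
        popups_opened += 1
        return op(url)
    return guarded

open_popup = inject_guard(open_popup)   # self-monitoring from here on
```

A real IRM performs this rewriting on the binary itself, so no external monitor is needed at runtime.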

8
IRMs, contd.
  • IRMs
  • keep track of the history of security-relevant
    events observed
  • enables enforcement of a powerful class of
    security policies
  • (not precisely enforceable by any purely static
    analysis)
  • do not require any admin/VM-level privileges to
    enforce the policy
  • IRMs also
  • allow code-consumers to specify or modify the
    security policy after receiving the code
  • the code-producer need not know the policy
    (unlike with static analyses, code-signing, and
    sandboxing)
  • changes to the policy don't require modifications
    to the underlying OS/VM

9
A Very Simple Example
  • Let's say the desired security policy is the
    following:
  • The program should not open more than 3 pop-up
    windows.

10
A Very Simple Example, contd.
  • public class RUnsafeURL extends Sprite {
  •   public function RUnsafeURL() {
  •     // create textfield object here
  •     var theField:TextField = new TextField();
  •     theField.text = "Click me!";
  •     theField.border = true;
  •     theField.background = true;
  •     theField.selectable = false;
  •     addChild(theField);
  •     // add event listener here
  •     theField.addEventListener(MouseEvent.CLICK, clickListener);
  •   } // constructor RUnsafeURL
  •   private function clickListener(e:MouseEvent):void {
  •     var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
  •   }
  • }

11
A Very Simple Example, contd.
  • public class RUnsafeURL extends Sprite {
  •   private var securityNumber = 0;
  •   public function RUnsafeURL() {
  •     // create textfield object here
  •     var theField:TextField = new TextField();
  •     theField.text = "Click me!";
  •     theField.border = true;
  •     theField.background = true;
  •     theField.selectable = false;
  •     addChild(theField);
  •     // add event listener here
  •     theField.addEventListener(MouseEvent.CLICK, clickListener);
  •   } // constructor RUnsafeURL
  •   private function clickListener(e:MouseEvent):void {
  •     var url:String = "javascript:window.open('http://www.youtube.com/watch?v=pXT9WSaH8uw&feature=topvideos','title','width=800,height=600,toolbar=no,resizable=no,menubar=no,status=no,scrollbars=no');void(0);";
  •     while (true) { }
  •   }
  • }

12
Binary-Level IRM Architecture
  [Diagram] Original SWF File → ABC Extractor → ABC File → Parser →
  Abstract Syntax Tree (AST) → Rewriter (guided by the Security Policy) →
  Instrumented AST → Code Generator → Instrumented ABC File → ABC Injector
  (recombined with the Original SWF Data) → Safe SWF File
13
  • For more information on what I've presented so
    far, please refer to the following paper:
  • M. Sridhar and K. W. Hamlen. ActionScript In-lined
    Reference Monitoring in Prolog. In Proc. Intl.
    Symposium on Practical Aspects of Declarative
    Languages, 2010. To appear.

14
Two Important Concerns
  • Does the IRM preserve good code?
  • inserted guards have no visible side effect,
    unless the code is policy-violating
  • depends on how strictly one defines "preserve"
  • not applicable to self-modifying code, for example
  • best determined through testing
  • fairly long history of doing this (may not always
    be known as IRM)
  • the Java compiler, for example, inserts an array
    bounds check before any array access
  • the user doesn't notice any difference, unless
    there is an array out-of-bounds violation
  • More important concern: how do we know that the
    rewriter prevents policy violations?
  • testing is not good enough, since an attacker is
    actively trying to find loopholes in the code
  • requires rigorous proof

15
The need for verification
  • ideally, we would formally verify the rewriter
  • we also want to minimize the TCB
  • but rewriters are powerful, and therefore complex
  • we generally have a rewriter tailored to each
    particular security policy class
  • so the combined size of all the rewriters can be
    large
  • rewriters employ many optimization schemes, so
    any change to an optimization means a change to
    the TCB
  • not desirable, since we want to keep the TCB
    fairly stable

16
The need for verification, contd.
  • a verifier (a separate, smaller program) checks
    the rewritten program against the security
    policy, and either
  • deems the program to be policy-adherent
    (accepts), or
  • deems it to be policy-violating (rejects), and
    produces a counterexample: a sequence of events
    that led up to the security policy being violated
    in a certain program state
  • this takes the rewriter out of the TCB
  • typically, a good verifier is general and less
    subject to change, and the verifier code is the
    same across security policy classes
  • therefore, it is much more desirable to have the
    verifier rather than the rewriter in the TCB

17
Verifying IRM Architecture
  [Diagram] The untrusted Original SWF File enters the IRM Framework: ABC
  Extractor → ABC File → Parser → Abstract Syntax Tree (AST) → Rewriter →
  Instrumented AST → Code Generator → Instrumented ABC File. The Verifier
  (its own Parser plus a Model-Checker/Abstract Interpreter, driven by the
  Security Policy) then checks the Instrumented ABC File: on Reject it emits
  a counterexample; on Accept, the Verified ABC File is recombined with the
  Original SWF Data by the ABC Injector to produce the Safe SWF File. The
  Trusted Computing Base contains the Verifier, not the Rewriter.
18
Using Model-Checking for Verification
  • Model-checking is an extremely powerful software
    verification paradigm that can verify properties
  • more complex than those typically expressible by
    type systems
  • ActionScript already has some basic type-checking
  • while type-checking suffices to prove policies
    such as ensuring the argument to a jump is indeed
    a label, it does not prove policies like "do not
    open more than 3 pop-up windows"
  • more semantically flexible and abstract than
    those typically encoded by contracts
  • contracts tend to expect strict syntactic
    adherence
  • not flexible enough for various rewriting
    strategies

19
What is model-checking?
  • Informally, given a model of a system (in this
    case, the ActionScript program) and a safety
    property, a model-checker explores all possible
    states that the system can be in, checking
    whether the property is violated in any state.
  • If no violations occur, then the model-checker
    accepts.
  • If the property is violated in a particular
    state, then the model-checker produces the
    sequence of states that led up to the violating
    state.
  • clearly suited for verification
  • main challenge with model-checking: state-space
    explosion. Since the MC explores each possible
    state, it quickly runs out of memory, especially
    if there are loops
  • tackling this for IRMs has been one of the main
    focuses of our research
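The description above can be sketched as a tiny explicit-state model-checker (a hedged Python illustration, not the actual Prolog implementation): breadth-first exploration of reachable states with a visited set, returning either acceptance or a counterexample trace:

```python
from collections import deque

def model_check(init, step, violates):
    """Return None if no reachable state violates the property,
    otherwise the sequence of states leading to the violation."""
    seen = {init}
    queue = deque([(init, [init])])
    while queue:
        state, trace = queue.popleft()
        if violates(state):
            return trace                      # counterexample
        for nxt in step(state):
            if nxt not in seen:               # visited set tames loops
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None                               # all states checked: accept

# Toy instance: a pop-up counter that can only grow; violation above 3.
counterexample = model_check(0, lambda s: [min(s + 1, 5)], lambda s: s > 3)
```

On an infinite state space this naive exploration would diverge; the memoization and abstraction techniques on the next slides are what make it terminate.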

20
Key Elements of the Verifier
  • the security policy is modeled as a security
    automaton
  • the policy-writer uses a high-level policy
    language
  • can be anything from check-boxes to Linear
    Temporal Logic
  • the high-level policy is reduced to an automaton
  • this idea comes from the literature
  • abstractly interprets untrusted programs,
    reasoning about possible event histories
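As a hedged sketch (invented event names, in Python rather than the verifier's internal representation), the "no more than 3 pop-ups" policy reduces to a security automaton whose missing transition marks the violation:

```python
# Security automaton for "open at most 3 pop-up windows".
# States 0..3 count pop-ups; there is deliberately no transition
# for an "open" event in state 3, so such a trace is rejected.
DELTA = {(0, "open"): 1, (1, "open"): 2, (2, "open"): 3}

def automaton_accepts(events):
    state = 0
    for e in events:
        if e != "open":
            continue                 # security-irrelevant event: self-loop
        if (state, e) not in DELTA:
            return False             # no transition: policy violated
        state = DELTA[(state, e)]
    return True
```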

21
Key Elements of the Verifier
  • uses memoization and co-induction to reason about
    fixed points
  • suppose no loop iteration changes whether the
    security-relevant property is true or false
  • then the infinite number of states can be
    replaced with a single, representative state
  • needed for termination of the model-checker
  • For more information on this aspect specifically,
    please see
  • B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore,
    and M. Sridhar. ActionScript Bytecode
    Verification with Co-logic Programming. In Proc.
    of the ACM SIGPLAN Workshop on Prog. Languages
    and Analysis for Security (PLAS), 2009.

22
Key Elements of the Verifier, contd.
  • another way to combat state-space explosion is
    to use abstract interpretation
  • when certain properties of variable values,
    rather than their concrete values, suffice for
    reasoning about policy-adherence, concrete
    variable values can be replaced with abstract
    values from an abstract domain
  • this can greatly reduce the state space, since
    several concrete states can be mapped onto a
    single abstract state
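A hedged example of such an abstract domain (chosen here for the pop-up policy; the verifier's real domain is derived from the policy automaton): collapse every concrete counter value above the bound into one abstract value, making the state space finite:

```python
# Abstract domain for a pop-up counter: {0, 1, 2, 3, "many"}.
def alpha(n):
    """Abstraction: map a concrete counter value to an abstract value."""
    return n if n <= 3 else "many"

def abstract_open(a):
    """Abstract transfer function for the "open a pop-up" operation."""
    return "many" if a == "many" else alpha(a + 1)

# The infinite concrete chain 0, 1, 2, ... collapses to five abstract
# states, so exhaustive exploration terminates.
explored, a = [], 0
while a not in explored:
    explored.append(a)
    a = abstract_open(a)
```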

23
Key Elements of the Verifier, contd.
  • Abstract Interpretation, contd.
  • the domain is constructed based on knowledge of
    the properties crucial for interpretation
  • of course, one has to take care that this
    abstraction preserves soundness, but also,
  • the precision of the abstraction determines the
    power of our analysis
  • it is possible to create an abstraction that is
    correct but abstracts away too much information;
    this results in a high conservative rejection
    rate. That is, no policy-violating programs are
    accepted, but many policy-adherent programs are
    conservatively rejected.

24
Key Elements of the Verifier, contd.
  • model-checking rewritten code requires abstract
    interpretation of
  • not only program variable values, but also
  • the history of security-relevant events
  • uses a sophisticated abstract interpretation
    scheme
  • by creating an abstract interpretation lattice
    built from the security policy automaton

25
Key Elements of the Verifier, contd.
  • the model-checking step is encoded in the
    operational semantics of the abstract machine,
    where policy violations are modeled as stuck
    states
  • thus, if the abstract machine reaches a stuck
    state, the program is rejected as policy-violating
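A hedged sketch of that encoding (hypothetical machine configurations, not the actual operational semantics): the step function simply has no rule for a policy-violating configuration, and the driver reports a reached stuck state as a rejection:

```python
def step(config):
    """One small-step transition; None means no rule applies (stuck)."""
    count, instr = config
    if instr == "open" and count < 3:
        return (count + 1, "loop")   # opening a pop-up is allowed below the bound
    if instr == "loop":
        return (count, "open")       # go around and try to open another
    return None                      # a 4th "open": no rule, machine is stuck

def run(config, fuel=50):
    for _ in range(fuel):
        nxt = step(config)
        if nxt is None:
            return ("reject", config)   # stuck state = policy violation
        config = nxt
    return ("no violation found", config)
```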

26
Key Elements of the Verifier, contd.
  • We have proved that our method is sound; that
    is, adopting the proposed abstract interpretation
    scheme does not result in any policy-violating
    program being accepted by the model-checker.
  • We have shown convergence: our model-checker
    will always terminate in polynomial time.

27
  • Please refer to the following paper for a
    detailed treatment:
  • M. Sridhar and K. W. Hamlen. Model-checking
    In-lined Reference Monitors. In Proc. Intl. Conf.
    on Verification, Model-Checking and Abstract
    Interpretation, 2010. To appear.

28
Implementation and Results
  • The rewriting system is a prototype, not a fully
    production-level system yet. The model-checking
    system is a very simple system that can verify
    these properties; we don't want a very complex
    model-checker.
  • ABC Extractor/Injector in C
  • The rest is in Prolog (this simplifies the
    implementation, but it doesn't have to be in
    Prolog)
  • Definite Clause Grammars greatly simplify parsing
  • the reversible nature of predicates facilitates
    code generation as reverse-parsing
  • Prolog already has memoization (tabling) and
    coinduction
  • Lines of code
  • binary-rewriters: 400 lines of Prolog code per
    security policy family
  • shared parser/generator: 900 lines
  • verifier: 2000 lines of code

29
Sample Policies
  • redir policy
  • prohibits malicious URL-redirections by ABC ad
    applets
  • redirections are implemented at the bytecode
    level by navigateToURL system calls
  • The policy requires that method check_url(s) be
    called to validate destination s before any
    redirection to s may occur. Method check_url has
    a trusted implementation provided by the ad
    distributor and/or web host, and may incorporate
    dynamic information such as ad hit counts or
    webpage context.
  • Our IRM enforces this policy by injecting calls
    to check_url into untrusted applets. For better
    runtime efficiency, it positions some of these
    calls early in the program's execution (to
    pre-validate certain URLs) and injects runtime
    security state variables that avoid potentially
    expensive duplicate calls by tracking the history
    of past calls.
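The enforcement strategy just described can be sketched in Python (check_url here is a stand-in validator with a hypothetical rule, not the distributor's real implementation): every redirection is preceded by a validation, and an injected state set records already-validated destinations to avoid duplicate checks:

```python
validated = set()          # injected security state: history of past checks

def check_url(s):
    """Stand-in for the trusted validator supplied by the ad distributor."""
    return s.startswith("https://ads.example.com/")    # hypothetical rule

def guarded_navigate(s):
    """Rewritten form of a navigateToURL-style call."""
    if s not in validated:
        if not check_url(s):
            raise RuntimeError("redirection blocked: " + s)
        validated.add(s)   # cache the result; no duplicate check_url call
    return "redirected to " + s
```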

30
Sample Policies, contd.
  • postok policy
  • sanitizes strings entered into message box
    widgets
  • This can be helpful in preventing cross-site
    scripting attacks, privacy violations, and
    buffer-overflow exploits that affect older
    versions of the ActionScript VM.
  • We enforced the policy on the Posty AIR
    application, which allows users to post messages
    to social networking sites such as Twitter,
    Jaiku, Tumblr, and Friendfeed.

31
Sample Policies, contd.
  • flimit policy
  • enforces a resource bound that disallows the
    creation of more than n files on the user's
    machine
  • enforced this policy on the FedEx Desktop AIR
    application, which continuously monitors a user's
    shipment status and sends tracking information
    directly to his or her desktop
  • the IRM implements the policy by injecting a
    counter into the untrusted code that tracks file
    creations
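The injected counter can be sketched as follows (Python stand-ins; the real IRM instruments ABC bytecode around file-creation system calls), with n = 3 as an example bound:

```python
FLIMIT = 3                # the resource bound n (example value)
files_created = 0         # injected security state variable

def guarded_create_file(path):
    """Rewritten form of a file-creation call."""
    global files_created
    if files_created >= FLIMIT:
        raise RuntimeError("flimit: file-creation bound reached")
    files_created += 1
    return "created " + path
```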

32
Experimental Results
Program Tested Policy Enforced Size Before Size After Rewriting Time Verification Time
countdownBadge redir 1.80 KB 1.95 KB 1.429s 0.532s
navToURL redir 0.93 KB 1.03 KB 0.863s 0.233s
fiona redir 58.9 KB 59.3 KB 15.876s 0.891s
calder redir 58.2 KB 58.6 KB 16.328s 0.880s
posty postok 112.0 KB 113.0 KB 54.170s 2.443s
fedex flimit 77.3 KB 78.0 KB 39.648s 1.729s
All tests were performed on an Intel Pentium Core
2 Duo machine running Yap Prolog v5.1.4.
33
  • Our papers are available at
  • www.utdallas.edu/meera.sridhar

34
Bibliography
  1. B. W. DeVries, G. Gupta, K. W. Hamlen, S. Moore,
    and M. Sridhar. ActionScript Bytecode
    Verification with Co-logic Programming. In Proc.
    of the ACM SIGPLAN Workshop on Prog. Languages
    and Analysis for Security (PLAS), 2009.
  2. K. W. Hamlen, G. Morrisett, and F. B. Schneider.
    Computability Classes for Enforcement Mechanisms.
    In ACM Trans. Prog. Languages and Systems, 2006.
  3. F. B. Schneider. Enforceable Security Policies.
    ACM Trans. Information and System Security,
    3(1):30-50, 2000.
  4. M. Sridhar and K. W. Hamlen. Model-checking
    In-lined Reference Monitors. In Proc. Intl. Conf.
    on Verification, Model-Checking and Abstract
    Interpretation, 2010. To appear.
  5. M. Sridhar and K. W. Hamlen. ActionScript In-Lined
    Reference Monitoring in Prolog. In Proc. Intl.
    Symposium on Practical Aspects of Declarative
    Languages, 2010. To appear.

35
Acknowledgements
  • We thank the Air Force Office of Scientific
    Research (AFOSR) for funding part of this
    research.
  • We thank Dr. Gopal Gupta, Dr. Feliks Kluzniak,
    and Dr. R. Chandrasekaran from The University of
    Texas at Dallas for many helpful discussions and
    feedback.
  • Finally, many thanks to Peleus Uhley here for
    giving us this opportunity to share our work with
    you.