An Automated Framework for Validating Firewall Policy Enforcement



1
An Automated Framework for Validating Firewall
Policy Enforcement
  • Adel El-Atawy, Taghrid Samak, Zein Wali, Ehab
    Al-Shaer
  • School of Computer Science, MNLAB
  • DePaul University, Chicago, IL
  • Frank Lin, Christopher Pham, Sheng Li
  • Cisco
  • 3rd Midwest Security Workshop
  • West Lafayette, IN
  • April 21, 2007

2
Agenda
  • Motivation and Challenges
  • Architecture Overview
  • System Components
  • Policy Segmentation / Traffic Generation
  • Augmented Grammar Syntax Specification
  • Policy Generation
  • Architecture Revisited
  • System Features
  • Conclusion

3
Motivation
  • Testing security devices is very important
    (critical for quality assurance) and also
    challenging.
  • Updates and patches are frequent, and many
    aspects need to be addressed → automated tools
    are always needed.
  • Sources of bugs in security devices (e.g.,
    firewalls):
  • New interfaces/features → parsing/CLI errors
  • Policy complexity evolving syntactically and
    semantically → policy refinement/translation
    errors
  • Matching optimization algorithms → filtering
    errors
  • Forwarding errors

4
Challenges of Testing Security Devices
  • Policy Generation
  • Many different configuration parameters to
    consider
  • ACL Keywords, protocols, header options, field
    values, rules overlapping, wild-card vs specific
    value, rule order, policy size, and combination
    of these
  • The generation process needs to be very tunable.
  • The space from which field values are to be
    chosen is huge, and good/selective coverage is
    essential.
  • It must be possible to stress selected dimensions.

5
Challenges of Testing Security Devices
  • Extensible ACL syntax
  • The testing procedure should be totally
    transparent to the different changes in the
    supported syntaxes, keywords, predefined services
    and protocols, ...
  • Traffic Generation
  • Given a policy, exhaustive testing requires
    4×10^13 years when all tuples are used, and 4.5
    when the address domain and other dimensions are
    fixed → simply infeasible.
  • Random sampling requires a number of samples
    exponential in the number of filtering fields and
    the required confidence → high false-negative
    rate and inefficient.

6
Challenges of Testing Security Devices
  • Device Interface
  • Devices have different logging capabilities.
    Moreover, logs in many cases lack the
    completeness needed to perform a thorough
    analysis.
  • Preparing the firewall for testing is, in most
    cases, very OS-specific.

7
Project Objectives
  • Have an Automated Framework for Testing Network
    Security Devices.
  • It should include:
  • Test Policy Generation
  • Syntax is flexible
  • Generation can be tuned
  • Test Case Generation
  • Efficient selection of packets

8
INSPEC Architecture
  • The flow of the testing procedures can be shown
    as follows

9
INSPEC Component Description
10
Grammar Parser and Processor
  • Using a flexible grammar specification model,
    future changes, modifications, and additions to
    ACL syntax/capabilities can be smoothly
    integrated.
  • Most syntax changes can be enforced without
    changing the program/code.
  • Adding features, (e.g., extra protocols with
    their extra fields) can be achieved with minimal
    effort.

11
Grammar Parser and Processor
  • Sample grammar (Standard IOS)

\\Rules
S -> "access-list" acl-num action SrcAddr opt
acl-num\FieldID(100) -> \number(1,99)
action\FieldID(0) -> "permit"\V(1) | "deny"\V(0)
SrcAddr\FieldID(2) -> IPany | IPpair
IPany -> "any"\Translate("IP","operator","any")
IPpair -> \IPvalue\Translate("IP","operator","value") Mask
Mask -> \IPMask\Translate("IP","operator","mask")
opt\FieldID(80) -> "log"\V(1)
\\EndRules

\\Rules
S -> "access-list" acl-num action SrcAddr opt
acl-num -> number(1,99)
action -> "permit" | "deny"
SrcAddr -> IPany | IPpair
IPany -> "any"
IPpair -> IPvalue Mask
Mask -> IPMask
opt -> "log"
\\EndRules
12
Grammar Parser and Processor
  • Sample grammar (Standard IOS)

\\Rules
S -> "access-list" acl-num action proto SrcAddr opt
acl-num\FieldID(100) -> \number(1,99)
action\FieldID(0) -> "permit"\V(1) | "deny"\V(0) | "allow"\V(1)
proto\FieldID(1) -> \Lookup(number,proto.txt)
SrcAddr\FieldID(2) -> IPany | IPpair
IPany -> "any"\Translate("IP","operator","any")
IPpair -> \IPvalue\Translate("IP","operator","value") Mask
Mask -> \IPMask\Translate("IP","operator","mask")
opt\FieldID(80) -> "log"\V(1)
\\EndRules
13
Grammar Parser/Processor
  • A more complex example (extended ACL)

S -> "access-list" Policy-Number action protocol SrcAddr SrcPort DestAddr DestPort icmpquals igmpquals REST
REST -> ACK FIN PSH RST SYN URG Prec Tos Established Logging Fragments
Policy-Number\FieldID(100) -> \number(100,199)
action\FieldID(0) -> "permit"\V(1) | "deny"\V(0)
protocol\FieldID(1) -> \number(0,255) | \Lookup("number","protocols.txt")
SrcAddr\FieldID(2) -> IPaddr
DestAddr\FieldID(3) -> IPaddr
SrcPort\FieldID(4) -> Port
DestPort\FieldID(5) -> Port
IPaddr -> IPany | IPhost | IPpair
IPany -> "any"\Translate("IP","operator","any")
IPhost -> "host" \IPvalue\Translate("IP","operator","IPhost")
IPpair -> \IPvalue\Translate("IP","operator","IPsubnet") \IPmask\Translate("IP","operator","IPmask")
Port\Cond(1,17) -> PortOp1 | PortOp2 | PortOp3 | PortOp4 | PortOp5
Port\Cond(1,6) -> PortOp1 | PortOp2 | PortOp3 | PortOp4 | PortOp5
PortOp1 -> "eq" Y\Translate("port","operator","eq")
PortOp2 -> "lt" Y\Translate("port","operator","lt")
PortOp3 -> "range" Y\Translate("port","operator","ge") Y\Translate("port","operator","le")
PortOp4 -> "gt" Y\Translate("port","operator","gt")
PortOp5 -> "neq" Y\Translate("port","operator","ne")
Y\Cond(1,17) -> \number(1,65535) | \Lookup("number","udpports.txt")
Y\Cond(1,6) -> \number(1,65535) | \Lookup("number","tcpports.txt")
icmpquals\FieldID(6)\Cond(1,1) -> \Lookup("number","icmpquals.txt")
igmpquals\FieldID(19)\Cond(1,2) -> \Lookup("number","igmpquals.txt")
Prec\FieldID(7) -> "precedence" \number(0,7)
Tos\FieldID(8) -> "tos" \number(0,15)
Logging\FieldID(80) -> "log"\V(1)
Established\Cond(1,6)\FieldID(9) -> "established"\V(1)
Fragments\FieldID(10) -> "fragments"\V(1)
ACK\FieldID(11)\Cond(1,6) -> "ack"\V(1)
FIN\FieldID(12)\Cond(1,6) -> "fin"\V(1)
PSH .. RST .. SYN .. URG
14
Policy Generation
Embedded statistics guide the navigation
  • Using the given BNF, rule-by-rule generation
    takes place.
  • The edges of the graph representing the BNF carry
    the statistical guidelines for tunable policy
    structures.
  • Can be provided independently of a specific BNF.
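The rule-by-rule generation described above can be sketched as a weighted random walk over the BNF. The grammar, weights, and helper names below are illustrative assumptions, not the deck's actual BNF or implementation:

```python
import random

# Toy BNF: each nonterminal maps to alternatives; each alternative carries
# a tuning weight (standing in for the "statistical guidelines" carried on
# the edges of the BNF graph).
GRAMMAR = {
    "S":       [(1.0, ["access-list", "acl-num", "action", "SrcAddr"])],
    "acl-num": [(1.0, ["<number 1-99>"])],
    "action":  [(0.7, ["permit"]), (0.3, ["deny"])],   # bias toward permit
    "SrcAddr": [(0.5, ["any"]), (0.5, ["<ip> <mask>"])],
}

def derive(symbol):
    """Expand one symbol by a weighted random choice among its alternatives."""
    if symbol not in GRAMMAR:                  # terminal symbol
        return [symbol]
    weights, alts = zip(*GRAMMAR[symbol])
    chosen = random.choices(alts, weights=weights)[0]
    out = []
    for s in chosen:
        out.extend(derive(s))
    return out

def generate_rule():
    return " ".join(derive("S"))

random.seed(0)
print(generate_rule())
```

Because the weights live on the grammar data, not in the code, the same generator can be tuned, or handed a different BNF, without any program changes.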

15
Policy Segmentation
  • How to identify the different decision paths a
    firewall will take?
  • Different rule interactions within the policy
    create different decision paths
  • Testing every decision path will cover the whole
    operation space of the firewall.

16
Policy Segmentation
  • Each segment will be tested independently with a
    set of tailored packets.
  • The action for such packets is dictated by the
    first rule at the top of the segment's
    intersecting-rules list.

17
Policy Segmentation
  • The interaction between rules within a single
    policy partitions the space into different areas
    (i.e., Segments).
  • Each segment is fully specified by the
    intersection of a specific subset of the policy
    rules.
  • Segments are
  • Disjoint
  • Cover the whole space of possible packet
    configurations (w.r.t. a given policy)
  • The number of segments is not much larger than
    the policy size (experimentally, 2n–5n)
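The segmentation idea can be illustrated in one dimension (ports only); real INSPEC segments span all header fields (the deck later mentions a BDD-based representation). The code below is a toy sketch: overlapping rule intervals partition the space into disjoint segments, each labeled with its intersecting-rules list, and each segment's action comes from the topmost rule in that list:

```python
def segment(rules):
    """rules: list of (name, lo, hi, action), in policy order, inclusive
    bounds. Returns disjoint (lo, hi, [rule names], action) segments."""
    # Every rule boundary is a potential segment boundary.
    points = sorted({p for _, lo, hi, _ in rules for p in (lo, hi + 1)})
    segments = []
    for lo, hi in zip(points, points[1:]):
        # Rules whose interval fully contains this atomic slice.
        covering = [r for r in rules if r[1] <= lo and hi - 1 <= r[2]]
        if covering:
            # Action dictated by the first (topmost) intersecting rule.
            segments.append((lo, hi - 1, [r[0] for r in covering],
                             covering[0][3]))
    return segments

policy = [("r1", 20, 80, "deny"),
          ("r2", 50, 100, "permit")]
for seg in segment(policy):
    print(seg)
# Three disjoint segments: ports 20-49 (r1 only), 50-80 (r1 and r2
# overlap, r1 wins), 81-100 (r2 only).
```

Note how the overlap region 50–80 becomes its own segment with both rules in its list: testing it separately is exactly what exposes rule-interaction bugs.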

18
Segment Weight Analysis
  • The weight of a segment is our estimate of the
    probability of a fault (wrong firewall decision)
    occurring in its space.
  • Factors affecting the weight include:
  • The segment's area
  • The number of rules intersecting over the
    segment's space
  • The number of rules that affected the shape of
    the segment
  • The complexity of the topmost rule (owner rule)
    of the segment
  • The complexity of each overlapping rule
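The slide lists the weight factors but not how they are combined; the function below is purely a hypothetical combination (log-scaled area plus linear terms, with made-up coefficients) to make the shape of such a heuristic concrete:

```python
import math

def segment_weight(area, n_intersecting, n_shaping, owner_complexity,
                   overlap_complexities):
    """Hypothetical fault-probability estimate for one segment.
    All coefficients are illustrative assumptions, not INSPEC's formula."""
    return (math.log2(area + 1)               # segment area (log-scaled)
            + 2.0 * n_intersecting            # rules overlapping the segment
            + 1.0 * n_shaping                 # rules that shaped its boundary
            + owner_complexity                # complexity of the owner rule
            + sum(overlap_complexities))      # complexity of each overlap

w = segment_weight(area=2**16, n_intersecting=3, n_shaping=2,
                   owner_complexity=4, overlap_complexities=[2, 3])
```

The only property that matters downstream is that riskier segments (more overlap, more complex rules) get larger weights, since the test budget is split in proportion to these weights.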

19
Formulation of Segment Weights
  • The weight of a segment is a function of
    different parameters that specify how critical
    this segment is w.r.t. testing.
  • During a test interval of T seconds and using a
    rate of R packets/second, the number of generated
    packets n_i for segment S_i is given by the formula
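The formula itself appears as an image in the original deck and did not survive extraction; given the proportional allocation described on the packet-generation slide (each segment is allocated packets in proportion to its weight), it is presumably of the form:

```latex
n_i = T \cdot R \cdot \frac{w_i}{\sum_j w_j}
```

where \(w_i\) is the weight of segment \(S_i\) and the sum runs over all segments.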

20
Packet Generation
  • Packets are selected independently from each
    segment. Each segment is allocated packets in
    proportion to its weight.
  • Packets are selected with changing high order
    bits (from optional fields) before those of the
    lower order fields (IPs, protocol).
  • Example: a segment with the TCP protocol will
    first select packets with all possible TCP
    control bits before changing the port number (if
    free); once this space is exhausted, the IP will
    be changed accordingly.
  • Example: generic IP segments will go directly to
    changing the DstIP field, as those are the first
    available bits.

21
Spy module
  • Responsible for monitoring the output of the
    firewall and reporting it back to the engine.
  • It can support multiple connecting Engines, and
    multiple simultaneous tests per Engine.
  • Designed to withstand high speed injections.

22
Spy module
  • The Engine requests a new test monitoring task.
  • The Spy's main process creates a thread that
    starts processing this request.
  • The Capture process captures packets and sends
    them to the DeMux process, which redirects each
    packet to the corresponding thread.
  • Once the thread decides the test is over, it
    destroys the pipe and returns any discrepancies
    to the Engine.
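The capture → DeMux → per-test-thread flow can be sketched with queues standing in for the pipes. This is a toy model, not the Spy's code: "packets" are dicts tagged with a test id, a `None` sentinel marks end-of-test, and each test thread simply records what it receives:

```python
import queue
import threading

def spy(packets, test_ids):
    """Demultiplex captured packets to one thread per active test."""
    pipes = {t: queue.Queue() for t in test_ids}   # one pipe per test thread
    results = {}

    def test_thread(tid):
        seen = []
        while True:
            pkt = pipes[tid].get()
            if pkt is None:          # test over: tear down this pipe
                break
            seen.append(pkt)
        results[tid] = seen          # report back to the Engine

    threads = [threading.Thread(target=test_thread, args=(t,))
               for t in test_ids]
    for th in threads:
        th.start()
    for pkt in packets:              # DeMux: route each packet by test id
        pipes[pkt["test"]].put(pkt)
    for t in test_ids:               # signal end of every test
        pipes[t].put(None)
    for th in threads:
        th.join()
    return results

res = spy([{"test": 1, "seq": 0}, {"test": 2, "seq": 0},
           {"test": 1, "seq": 1}], [1, 2])
```

Because each test owns its own queue, multiple simultaneous tests never contend on shared state, which is what lets the real Spy serve several Engines at high injection rates.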

23
INSPEC Architecture
  • The flow of the testing procedures can be shown
    as follows

24
INSPEC Architecture (detailed)
  • (Architecture diagram; labels only) BNF → BNF
    Parser → Graph Information (BNFGraph);
    Configuration Translation; Administration File;
    Policy Generator / Policy Compiler with
    generation options; Test Scenario; Manually
    Entered Policy; Policy Checker / FW Admin
    (injection awaits success of configuration);
    Segmentation; Weight Analyzer (options); Test
    Case Generation / Test Packet Generator
    (options); Injection (Compression, Repeating
    Block); Spy; FIREWALL; Post-test Reporting,
    Decompression, Analysis.
25
Key Features
  • Flexible
  • Customizable parsing
  • Testing engine (test and scenario generation) is
    independent of CLI specifics
  • Highly flexible to accommodate future CLI
    extensions
  • Universal
  • Intermediate representation (BDD Segments)
    abstracts the firewall model and filtering
    language capabilities

26
Key Features
  • Efficient
  • Test Generation
  • Smart test packet selection based on segmentation
  • Controllable based on the test
    complexity/criticality
  • Generated Packets cover all possible decision
    paths of the Filtering Algorithm.
  • Scenario Generation
  • Testing using a widely tunable (e.g.,
    distribution of fields) selection of
    automatically generated policies.
  • To avoid using a huge number of packets for
    reasonable coverage, packets are selected in
    order to exhaust as many dimensions as possible.

27
Key Features
  • Comprehensive/Coverage
  • Scenario Generation
  • Policies generated cover a wide variety of ACEs,
    and their relations.
  • All possible rule (ACE) interactions
  • The generation is guaranteed to produce
    legitimate ACEs for the DUT's syntax.
  • Test Generation
  • Different values/combinations of
    tos/prec/tcp_flags are exhausted before port
    numbers, before IP addresses.
  • Enumerating all constants (e.g., protocol port
    names)
  • Exhaustive testing if possible (reasonable size
    segments).
  • Packets generated such that all possible
    rule-rule interactions are tested
  • Controllable based on the test
    complexity/criticality

28
Key Features
  • Fast
  • The packet generation mechanism guarantees there
    will be no duplicates in the packets selected, in
    an extremely efficient way.
  • Test can be terminated based on time or traffic
  • To avoid using a huge number of packets for
    reasonable coverage, packets are selected in
    order to exhaust as many dimensions as possible.
  • Example: different values/combinations of
    tos/prec/tcp_flags are exhausted before port
    numbers, before IP addresses.

29
Value to the Industry
  • No manual inspection can beat INSPEC
  • Easy to use and extend
  • Comprehensive: works across multiple dimensions
  • Intelligent
  • Fast
  • Nothing else out there does anything similar,
    particularly automatic policy generation
  • INSPEC's business value: it decreases
  • Time-to-market for new products and new features
  • MTTR (mean time to repair)
  • Man-hours

30
Discussion / Q&A
31
Grammar Parser and Processor
  • Sample grammar (Standard IOS)

\\Rules
S -> "access-list" acl-num action SrcAddr opt
acl-num\FieldID(100) -> \number(1,99)
action\FieldID(0) -> "permit"\V(1) | "deny"\V(0)
SrcAddr\FieldID(2) -> IPany | IPpair
IPany -> "any"\Translate("IP","operator","any")
IPpair -> \IPvalue\Translate("IP","operator","value") Mask
Mask -> \IPMask\Translate("IP","operator","mask")
opt\FieldID(80) -> "log"\V(1)
\\EndRules
32
Evaluation of the Segmentation-based testing
33
Traffic / Test Generation
  • Main idea behind segmentation:
  • Utilize the information provided by the policy
    and network to maximize test coverage (decision
    paths) while minimizing the number of test
    packets used.
  • The policy address space is divided into
    independent segments.
  • A heuristic-based approach gives each segment a
    weight based on its space, contributing rules,
    rule complexity, intersection/interaction with
    other segments, etc.
  • Segment weights reflect the significance of each
    segment in testing (significance ≈ fault
    susceptibility).
  • Test intensity is proportional to the segment
    weights.

34
Evaluation of the Segmentation-based testing
  • In most problems with firewall implementations,
    the fault appears over whole rules and segments.
  • This makes segmentation even more effective: a
    single packet per segment is theoretically enough
    to discover the problem.
  • This is guaranteed in our test-case selection
    model.
  • The probability of finding the error is 100%.