

1
Hash, Don't Cache: Fast Packet Forwarding for
Enterprise Edge Routers
SIGCOMM WREN '09
  • Minlan Yu
  • Princeton University
  • minlanyu@cs.princeton.edu
  • Joint work with Jennifer Rexford

2
Enterprise Edge Router
  • Enterprise edge routers
  • Connect upstream providers and internal routers
  • Only a few outgoing links
  • A small data structure for each next hop

[Figure: an enterprise network connected to Provider 1 and Provider 2 through its edge router]
3
Challenges of Packet Forwarding
  • Full-route forwarding table (FIB)
  • For load balancing, fault tolerance, etc.
  • More than 250K entries, and growing
  • Increasing link speed
  • Over 10 Gbps
  • Requires large, expensive memory
  • Expensive, complicated high-end routers
  • More cost-efficient, less power-hungry solution?
  • Perform fast packet forwarding in a small SRAM

4
Using a Small SRAM
  • Route caching is not a viable solution
  • Store the most frequently used entries in cache
  • Bad performance during cache miss
  • Low throughput and high packet loss
  • Bad performance under worst-case workloads
  • Malicious traffic with a wide range of
    destinations
  • Route changes, link failures
  • Our solution should be workload-independent
  • Fit the entire FIB in the small SRAM

5
Bloom Filter
  • Bloom filters in fast memory (SRAM)
  • A compact data structure for a set of elements
  • To store an element x, compute several hash functions of x and set the corresponding bits
  • Membership checks hash the element the same way and test those bits
  • Reduce memory at the expense of false positives (sketch below)
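
A minimal Bloom filter sketch in Python (the class name, hash scheme, and parameters below are assumptions for illustration, not the implementation from the paper):

    import hashlib

    class BloomFilter:
        def __init__(self, m, k):
            self.m, self.k = m, k              # m bits, k hash functions
            self.bits = bytearray(m)           # one byte per bit, for clarity

        def _positions(self, x):
            # Derive k bit positions from salted hashes of the element.
            for i in range(self.k):
                digest = hashlib.sha1(f"{i}:{x}".encode()).hexdigest()
                yield int(digest, 16) % self.m

        def add(self, x):
            for pos in self._positions(x):
                self.bits[pos] = 1

        def __contains__(self, x):
            # Never misses an added element; may falsely report membership.
            return all(self.bits[pos] for pos in self._positions(x))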

6
Bloom Filter Forwarding
  • One Bloom filter (BF) per next hop
  • Store all addresses forwarded to that next hop
  • Consider flat addresses in this talk
  • See paper for extensions to longest prefix match

[Figure: the packet destination is queried against one Bloom filter per next hop (Nexthop 1 … Nexthop T); a hit identifies the outgoing link. T is small for enterprise edge routers.]
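
A sketch of this per-next-hop organization, building on the BloomFilter class above (T, install_route, and lookup are illustrative names, not the paper's):

    T = 10                                      # small number of next hops
    filters = {hop: BloomFilter(m=1 << 20, k=8) for hop in range(T)}

    def install_route(dst, hop):
        filters[hop].add(dst)                   # store dst in that hop's filter

    def lookup(dst):
        # Normally exactly one filter matches; false positives can add
        # spurious matches, handled later in the talk.
        return [hop for hop, bf in filters.items() if dst in bf]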
7
Contributions
  • Make efficient use of limited fast memory
  • Formulate and solve optimization problem to
    minimize false-positive rate
  • Handle false positives
  • Leverage properties of enterprise edge routers
  • Adapt Bloom filters for routing changes
  • Leverage counting Bloom filter in slow memory
  • Dynamically adjust Bloom filter size

8
Outline
  • Optimize memory usage
  • Handle false positives
  • Handle routing dynamics

9
Outline
  • Optimize memory usage
  • Handle false positives
  • Handle routing dynamics

10
Memory Usage Optimization
  • Consider a fixed forwarding table
  • Goal: minimize the overall false-positive rate
  • Probability that one or more BFs have a false positive
  • Input
  • Fast memory size M
  • Number of destinations per next hop
  • The maximum number of hash functions
  • Output: the size of each Bloom filter
  • Larger BFs for next hops with more destinations (formulation sketched below)
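
Under the standard Bloom filter approximation (an assumption of this sketch; the notation is chosen to match the inputs above), the problem can be written as

    minimize    1 - \prod_{i=1}^{T} (1 - f_i),
                where f_i \approx (1 - e^{-k n_i / m_i})^k
    subject to  \sum_{i=1}^{T} m_i \le M,   k \le k_max

with n_i the number of destinations forwarded to next hop i, m_i the size in bits of its Bloom filter, and k the number of hash functions shared by all filters.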

11
Constraints and Solution
  • Constraints
  • Memory constraint
  • Sum of all BF sizes ≤ fast memory size M
  • Bound on number of hash functions
  • To bound CPU calculation time
  • Bloom filters share the same hash functions
  • Proved to be a convex optimization problem
  • An optimal solution exists
  • Solved by IPOPT (Interior Point OPTimizer); a toy version is sketched below
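
A toy version of the optimization in Python, using scipy's SLSQP solver as a stand-in for IPOPT (the destination counts and all variable names are made-up examples, not the paper's data):

    import numpy as np
    from scipy.optimize import minimize

    M = 2 * 8 * 2**20        # 2 MB of fast memory, expressed in bits
    k = 8                    # shared number of hash functions
    n = np.array([50e3, 40e3, 30e3, 25e3, 20e3, 15e3, 10e3, 5e3, 3e3, 2e3])

    def overall_fp_rate(m):
        f = (1 - np.exp(-k * n / m)) ** k       # per-filter false-positive rate
        return 1 - np.prod(1 - f)               # prob. any filter false-positives

    res = minimize(overall_fp_rate,
                   x0=np.full(len(n), M / len(n)),          # start from an equal split
                   bounds=[(1e3, M)] * len(n),
                   constraints=[{"type": "ineq",
                                 "fun": lambda m: M - m.sum()}],  # sum(m) <= M
                   method="SLSQP")
    print(res.x)             # optimized bits per Bloom filter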

12
Evaluation of False Positives
  • A FIB with 200K entries and 10 next hops
  • 8 hash functions
  • Solving the optimization takes at most 50 msec

13
Outline
  • Optimize memory usage
  • Handle false positives
  • Handle routing dynamics

14
False Positive Detection
  • Multiple matches in the Bloom filters
  • One of the matches is correct
  • The others are caused by false positives

[Figure: the packet destination query hits multiple Bloom filters (Nexthop 1 … Nexthop T); one hit is the correct next hop, the others are false positives.]
15
Handle False Positives on Fast Path
  • Leverage the multi-homing of enterprise edge routers
  • Send to a random matching next hop (sketched below)
  • Packets still reach the destination, even if they occasionally leave through a less-preferred outgoing link
  • No extra traffic, but may cause packet loss
  • Send duplicate packets
  • Send a copy of the packet to all matching next hops
  • Guarantees reachability, but introduces extra traffic
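
A sketch of the two fast-path options, reusing lookup() from the earlier sketch (send() is a hypothetical transmit helper):

    import random

    def forward(dst):
        matches = lookup(dst)                   # Bloom filter hits, at least one
        if len(matches) == 1:
            send(dst, matches[0])
            return
        # Option 1: pick a random matching next hop (no extra traffic).
        send(dst, random.choice(matches))
        # Option 2 (alternative): duplicate to every matching next hop,
        # guaranteeing reachability at the cost of extra traffic:
        #     for hop in matches:
        #         send(dst, hop)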

16
Prevent Future False Positives
  • For a packet that experiences a false positive
  • Perform a conventional lookup in the background
  • Cache the result (sketch below)
  • Subsequent packets
  • No longer experience false positives
  • Compared to a conventional route cache
  • Much smaller (only holds false-positive destinations)
  • Not easily invalidated by an adversary

17
Outline
  • Optimize memory usage
  • Handle false positives
  • Handle routing dynamics

18
Problem of Bloom Filters
  • Routing changes
  • Add/delete entries in BFs
  • Problem of Bloom Filters (BF)
  • Do not allow deleting an element
  • Counting Bloom Filters (CBF)
  • Use a counter instead of a bit in the array
  • CBFs can handle adding/deleting elements (sketch below)
  • But they require more memory than BFs
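
A counting Bloom filter sketch, building on the BloomFilter class above (again illustrative, not the paper's code):

    class CountingBloomFilter:
        def __init__(self, m, k):
            self.m, self.k = m, k
            self.counters = [0] * m             # a counter per position, not a bit

        _positions = BloomFilter._positions     # reuse the same hash positions

        def add(self, x):
            for pos in self._positions(x):
                self.counters[pos] += 1

        def delete(self, x):
            for pos in self._positions(x):
                self.counters[pos] -= 1         # deletion is now possible

        def __contains__(self, x):
            return all(self.counters[pos] > 0 for pos in self._positions(x))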

19
Update on Routing Change
  • Use CBF in slow memory
  • Assist BF to handle forwarding-table updates
  • Easy to add/delete a forwarding-table entry

[Figure: deleting a route decrements counters in the CBF in slow memory; the corresponding bit in the BF in fast memory is cleared when a counter drops to zero, as sketched below.]
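
A sketch of the update path, assuming for simplicity that a next hop's BF and CBF are the same size (the next slide relaxes this):

    def delete_route(dst, cbf, bf):
        # The CBF in slow memory absorbs the deletion; a bit in the fast-memory
        # BF is cleared only when its counter reaches zero.
        for pos in cbf._positions(dst):
            cbf.counters[pos] -= 1
            if cbf.counters[pos] == 0:
                bf.bits[pos] = 0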
20
Occasionally Resize BF
  • Under significant routing changes
  • Number of addresses in BFs changes significantly
  • Re-optimize BF sizes
  • Use CBF to assist resizing BF
  • Large CBF and small BF
  • Easy to expand BF size by contracting CBF

[Figure: a small BF is hard to expand in place, but the large CBF is easy to contract (e.g., fold down to size 4) to produce a BF of the new size, as sketched below.]
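
A folding sketch for deriving a resized BF from the large CBF. It assumes the new size divides the CBF size and that hash positions are taken modulo the filter size, as in the sketches above; under those assumptions the folded BF equals one built directly at the new size:

    def bf_from_cbf(cbf, m_new):
        assert cbf.m % m_new == 0               # fold only to a divisor size
        bf = BloomFilter(m_new, cbf.k)
        for pos, count in enumerate(cbf.counters):
            if count > 0:
                bf.bits[pos % m_new] = 1        # collapse counters onto fewer bits
        return bf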
21
BF-based Router Architecture
22
Prototype and Evaluation
  • Prototype in kernel-level Click
  • Experiment environment
  • 3.0 GHz 64-bit Intel Xeon
  • 2 MB L2 data cache, used as fast memory size M
  • Forwarding table
  • 10 next hops, 200K entries
  • Peak forwarding rate
  • 365 Kpps for 64-byte packets
  • 10x faster than conventional lookup

23
Conclusion
  • Improve packet forwarding for enterprise edge
    routers
  • Use Bloom filters to represent forwarding table
  • Only require a small SRAM
  • Optimize usage of a fixed small memory
  • Multiple ways to handle false positives
  • Leverage properties of enterprise edge routers
  • React quickly to FIB updates
  • Leverage Counting Bloom Filter in slow memory

24
Ongoing Work: BUFFALO
  • Bloom filter forwarding in large enterprise networks
  • Deploy BF-based switches in the entire network
  • Forward all the packets on the fast path
  • Gracefully handle false positives
  • Randomly select a matching next hop
  • Techniques to avoid loops and bound path stretch

www.cs.princeton.edu/minlanyu/writeup/conext09.pdf
25
Thanks
  • Questions?