Building big router from lots of little routers - PowerPoint PPT Presentation
Transcript and Presenter's Notes



1
Building big router from lots of little routers
Nick McKeown Assistant Professor of Electrical
Engineering and Computer Science, Stanford
University nickm_at_stanford.edu http//www.stanford.
edu/nickm
2
What I'd like
[Diagram: a single NxN router, each port at rate R]
The building blocks I'd like to use
[Diagram: several smaller routers, each port at rate R]
3
Why this might be a good idea
  • Larger overall capacity
  • Reduces aggregate capacity of any one router
  • Slower memory per router
  • Lower power per router
  • Faster line rates
  • Redundancy
  • Familiarity
  • After all, this is how the Internet is built

4
Larger overall capacity: Is it still interesting?
[Chart: processing power vs. link speed over time]
5
Larger overall capacity: Is it still interesting?
[Chart: fiber capacity (Gbit/s), log scale 0.1 to 10000, 1985-2000. Processing power grows 2x every 2 years; link speed over fiber grows 2x every 7 months, first with TDM, then DWDM.]
Source: SPEC95Int; David Miller, Stanford.
6
Larger overall capacity: What limits capacity today?
  • Memory bandwidth for packet buffers
  • Shared memory: B = 2NR
  • Input queued: B = 2R
  • Would like B << R
  • Perhaps via intelligent load-balancing?
  • Memory bandwidth for IP lookups
  • Must perform a lookup for each packet
  • B ∝ R
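The buffer-bandwidth figures above can be checked with a small sketch; the port count and line rate below are illustrative values, not from the slides:

```python
# Buffer-memory bandwidth required per architecture (from the slide):
#   shared memory: B = 2*N*R  (every packet is written and read once,
#                              and all N ports share one memory)
#   input queued:  B = 2*R    (each input writes and reads its own
#                              memory at line rate)
def shared_memory_bw(n_ports, line_rate_gbps):
    return 2 * n_ports * line_rate_gbps

def input_queued_bw(line_rate_gbps):
    return 2 * line_rate_gbps

N, R = 32, 10  # hypothetical: 32 ports at 10 Gb/s each
print(shared_memory_bw(N, R))  # 640 Gb/s for the one shared memory
print(input_queued_bw(R))      # 20 Gb/s per input memory
```

The point of the comparison: shared-memory bandwidth grows with N, while input-queued bandwidth stays constant per memory, which is why the slide asks whether load-balancing can push the requirement below R.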

7
Why this might be a bad idea
  • Manageability: it's harder to manage, maintain,
    and upgrade a large number of small systems than
    one large one.
  • The total space and power might be larger.
  • The interconnect between the routers might be
    highly redundant.
  • How will it perform? (Throughput)
  • Can it provide delay guarantees? (QoS)

8
The interconnect might be highly redundant and
wasteful
[Diagram: N routers, each at rate R, fully interconnected; the excess links contribute to power]
Today, big routers are limited by their overall
power. Chip-to-chip connections make up approximately
50% of the power.
9
I'll be considering load-balancing architectures
[Diagram: external ports at rate R load-balanced over routers 1, 2, ..., k, with internal links at rate R/k]
10
Method 1: Random packet load-balancing
  • Method: As packets arrive, they are randomly
    distributed, packet by packet, over the routers.
  • Advantages
  • Load-balancer is simple
  • Load-balancer needs no packet buffering
  • Disadvantages
  • Random fluctuations in traffic → each router is
    loaded differently
  • Packets within a flow may become mis-sequenced
  • It is hard to predict the system performance
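A toy simulation of the random fluctuations mentioned above; the router count, packet count, and seed are all made up for illustration:

```python
import random

def random_packet_balance(num_packets, k, seed=0):
    """Spread the packets of one flow uniformly at random over k
    routers and return each router's resulting queue length."""
    rng = random.Random(seed)
    loads = [0] * k
    for _ in range(num_packets):
        loads[rng.randrange(k)] += 1  # per-packet random choice
    return loads

loads = random_packet_balance(1000, 4)
print(loads)  # queue lengths fluctuate around 250, not exactly equal
```

Because consecutive packets of one flow can land on routers with different queue lengths, a later packet on a short queue can overtake an earlier packet on a long one, which is the mis-sequencing problem the slide lists.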

11
Method 2: Random flow load-balancing
  • Method: Each new flow (e.g. TCP connection) is
    randomly assigned to a router. All packets in a
    flow follow the same path.
  • Advantages
  • Load-balancer is simple (e.g. hashing of the flow
    ID).
  • Load-balancer needs no packet buffering.
  • No mis-sequencing of packets within a flow.
  • Disadvantages
  • Random fluctuations in traffic → each router is
    loaded differently
  • It is hard to predict the system performance
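The flow hashing mentioned above can be as simple as the sketch below; the 5-tuple fields are the conventional choice of flow ID, not something the slide specifies:

```python
import hashlib

def assign_router(flow_id, k):
    """Hash a flow identifier (e.g. the TCP/IP 5-tuple) to one of k
    routers. Every packet of the flow hashes to the same router, so
    packet order within the flow is preserved."""
    digest = hashlib.sha256(repr(flow_id).encode()).digest()
    return int.from_bytes(digest[:8], "big") % k

# Hypothetical flow: (src addr, src port, dst addr, dst port, proto)
flow = ("10.0.0.1", 1234, "10.0.0.2", 80, "tcp")
print(assign_router(flow, 8) == assign_router(flow, 8))  # True
```

The load-balancer keeps no per-flow state and needs no buffering, but whole flows land on routers at random, so the per-router load still fluctuates with the traffic mix.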

12
Observations
  • Random load-balancing: It's hard to predict
    system performance.
  • Flow-by-flow load-balancing: Worst-case
    performance is very poor.
  • If designers, system builders, network operators,
    etc. need to know the worst-case performance,
    random load-balancing will not suffice.

13
Method 3: Intelligent packet load-balancing
  • Goal: Each new flow (e.g. TCP connection) is
    carefully assigned to a router so that:
  • Packets are not mis-sequenced.
  • The throughput is maximized and understood.
  • The delay of packets can be controlled.
  • We call this Parallel Packet Switching.

14
Method 3: Intelligent packet load-balancing
(Parallel Packet Switching)
[Diagram: a router with N external ports at rate R; a bufferless load-balancer spreads packets over k internal paths at rate R/k]
A packet keeps a link at speed R/k busy for k
times longer than a link of speed R.
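The note above is just serialization time: a packet of B bits occupies a link of rate R/k for k times as long as a link of rate R. A quick check, with an illustrative packet size and line rate:

```python
def serialization_time_ns(packet_bits, rate_gbps):
    # bits divided by (Gb/s) gives nanoseconds
    return packet_bits / rate_gbps

B, R, k = 8000, 10.0, 4  # hypothetical: 1000-byte packet, 10 Gb/s line, k = 4
t_fast = serialization_time_ns(B, R)      # on the full-rate link
t_slow = serialization_time_ns(B, R / k)  # on one internal R/k link
print(t_fast, t_slow)   # 800.0 ns vs 3200.0 ns
print(t_slow / t_fast)  # 4.0, i.e. k times longer
```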
15
Parallel Packet Switching
  • Advantages
  • Single stage of buffering
  • No excess link capacity (maybe)
  • Larger k → lower power per subsystem
  • Larger k → lower memory bandwidth
  • Larger k → lower lookup rate (maybe)

16
Parallel Packet Switch
  • Questions
  • Switching: What is the performance?
  • IP Lookups: How do they work?

17
A Parallel Packet Switch
[Diagram: N inputs at rate R; each arriving packet is tagged with its egress port and sent to one of k parallel Output Queued Switches, whose outputs are combined back onto N output links at rate R]
18
Performance Questions
  • Can its outputs be busy all the time, i.e. can it
    be work-conserving? Can it achieve 100%
    throughput?
  • Can it emulate a single big shared memory switch?
  • Can it support delay guarantees,
    strict priorities, WFQ, etc.?

19
Work Conservation and 100% Throughput
[Diagram: N external ports at rate R connected through links at rate R/k to k parallel shared memory switches. The links into the layers impose an Input Link Constraint; the links out of the layers impose an Output Link Constraint.]
20
Work Conservation
[Diagram: numbered packets spread over the k layers' R/k links, illustrating how the Output Link Constraint limits which packets can reach the output in time]
21
Work Conservation
[Diagram: the same architecture with every internal link sped up to S(R/k), i.e. a speedup of S on the links into and out of the k shared memory switches]
22
Precise Emulation of a Shared Memory Switch
[Diagram: a single N-port shared memory switch, the reference system the parallel packet switch must emulate]
23
Parallel Packet Switch: Theorems
  • 1. If S > 2k/(k+2) ≅ 2, then a parallel packet switch
    can be work-conserving for all traffic.
  • 2. If S > 2k/(k+2) ≅ 2, then a parallel packet switch
    can precisely emulate a FCFS shared memory switch
    for all traffic.

24
Parallel Packet Switch: Theorems
  • 3. If S > 3k/(k+3) ≅ 3, then a parallel packet
    switch can precisely emulate a switch with WFQ,
    strict priorities, and other types of QoS, for
    all traffic.

25
Parallel Packet Switch: Theorems
  • 4. If S = 1, then a parallel packet switch with a
    small co-ordination buffer at rate R can
    precisely emulate a FCFS shared memory switch for
    all traffic.
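The speedup thresholds in the theorems above, 2k/(k+2) for work conservation and FCFS emulation and 3k/(k+3) for QoS emulation, can be tabulated for a few values of k; note that both stay below their limits of 2 and 3 for any finite k:

```python
def fcfs_speedup(k):
    # Theorems 1-2: S > 2k/(k+2) suffices for work conservation
    # and for precise FCFS shared-memory emulation
    return 2 * k / (k + 2)

def qos_speedup(k):
    # Theorem 3: S > 3k/(k+3) suffices to emulate WFQ,
    # strict priorities, and similar QoS disciplines
    return 3 * k / (k + 3)

for k in (2, 4, 8, 16, 64):
    print(k, round(fcfs_speedup(k), 3), round(qos_speedup(k), 3))
```

So a speedup of 2 (respectively 3) on the internal links is always enough, regardless of how many layers k the switch uses.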

26
Co-ordination buffers
[Diagram: the parallel packet switch with a co-ordination buffer of size Nk, running at rate R, at each external port, in front of the k shared memory switches and their R/k links]
27
Parallel Packet Switch
  • Questions
  • Switching: What is the performance?
  • Forwarding Lookups: How do they work?

28
Parallel Packet Switch: Lookahead Lookups
The packet is tagged with its egress port at the next router.
The lookup is performed in parallel at rate R/k.
29
Parallel Packet Switch
[Diagram: the complete parallel packet switch: N ports at rate R spread over k parallel routers]
  • Possibly >100 Tb/s aggregate capacity
  • Line rates in excess of 100 Gb/s