Terabit Backbones: A Reality Check - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
Terabit BackbonesA Reality Check
  • Vijay Gill <vgill@mfnx.net>

2
Agenda
  • Current State of the Internet
  • Side detour through
  • VPNs
  • DiffServ/QoS/CoS
  • The Converged Core (hype machine that goes to 11)

3
State Of the Internet Address
Reality-Based Internet Economics
  • The amount of state at any level should be
    constrained and must not exceed Moore's Law for
    economically viable solutions.
  • Ideally, growth of state should trail Moore's
    Law
  • We're in trouble
  • "If you're not scared, you don't understand"
    – Mike O'Dell

4
Growth of State
  • Recent trends show high growth in Internet state
    (routes, prefixes, etc.)
  • Isolate this growth as a predictor of future
    growth
  • Compare growth to Moore's Law

5
[Chart: routing table growth] Source: Tony Li (Procket Networks)
6
[Chart: routing table growth] Source: Tony Li (Procket Networks)
7
The Very Bad News
  • Growth rate is increasing
  • Hyper-exponential growth
  • Will eventually outgrow Moore's Law
  • Moore's Law may fail
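The crossover the slide warns about can be sketched numerically. Every constant below is invented for illustration (the base table size, the 18-month doubling period, and the accelerating growth rate are assumptions, not measured data):

```python
# Toy model: hyper-exponential routing-state growth vs. Moore's-Law capacity.
# All constants are invented for illustration.
def moore(years, base=100_000):
    """Capacity that doubles every 18 months (Moore's Law)."""
    return base * 2 ** (years / 1.5)

def state(years, base=100_000, rate=0.45, accel=0.05):
    """State whose annual growth *rate* itself rises: hyper-exponential."""
    total = base
    for y in range(int(years)):
        total *= 1 + rate + accel * y   # the growth rate climbs each year
    return total

# First whole year at which state overtakes capacity under these assumptions.
crossover = next(y for y in range(1, 50) if state(y) > moore(y))
print(crossover)  # 7
```

The point is structural, not the specific year: any growth rate that itself increases must eventually cross any fixed-rate exponential, no matter how the constants are chosen.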

8
[Chart: routing table growth] Source: Tony Li (Procket Networks)
9
The Real Problems
  • If we don't fix these, the other problems won't
    matter
  • Hyper-exponential growth will exceed Moore's Law
  • Safety margins are at risk
  • We need a concerted effort on a new routing
    architecture
  • Multi-homing must not require global prefixes
  • Example: IPv6 plus EIDs

10
BGP Advertisement Span
  • Nov 1999: 16,000 addresses spanned per
    advertisement (average)
  • Dec 2000: 12,100 addresses spanned per
    advertisement (average)
  • Increase in the average prefix length from /18.03
    to /18.44
  • Drivers: dense peering (rise of exchange points)
    and multi-homing

Source: Geoff Huston
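The two address figures are consistent with the average span of a single advertisement at the cited prefix lengths: 2^(32-18.03) is roughly 16,000 and 2^(32-18.44) is roughly 12,100. A quick check:

```python
# Average IPv4 addresses spanned by one BGP advertisement, given the
# average prefix length cited on the slide.
def span_per_prefix(avg_len):
    return 2 ** (32 - avg_len)

before = span_per_prefix(18.03)   # ~16,000 (the Nov 1999 figure)
after = span_per_prefix(18.44)    # ~12,100 (the Dec 2000 figure)
shrink = 1 - after / before       # each route covers ~25% fewer addresses
print(round(before), round(after))
```

So a shift of less than half a bit in average prefix length means each route covers about a quarter fewer addresses: the table grows faster than the address space it describes.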
11
State Now
  • Number of paths vs. number of prefixes
  • Large amounts of peering
  • CPU needed to crunch the RIB to populate the FIB
  • More state requires more CPU time
  • Leads to delayed convergence
  • BGP is TCP rate-limited, so just adding pure CPU
    isn't the entire answer
  • The issue is with propagating state around

12
Convergence Times
13
Problem With State
  • Issues with interactions of increased state, CPU,
    and message processing
  • Run-to-completion processing <-> missed hellos
  • IGP meltdowns
  • Network time diameter exceeds hold-down
  • Pegged CPU on the primary causes the slave to
    initiate takeover
  • Fix: decouple hello processing from the routing
    process
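The decoupling on this slide can be sketched with two threads. This is an illustrative toy (the timer values and the `long_spf` stand-in are invented, not any vendor's implementation): if hellos shared a thread with a run-to-completion SPF, they would be starved for longer than the hold time and the adjacency would drop; on their own thread they keep flowing.

```python
# Toy sketch: hello processing on its own thread survives a
# run-to-completion route computation. Timer values are hypothetical.
import threading
import time

HELLO_INTERVAL = 0.05   # hypothetical hello timer
HOLD_TIME = 0.5         # hypothetical hold-down

last_hello = time.monotonic()

def hello_loop(stop):
    """Dedicated hello thread: keepalives flow even while SPF runs."""
    global last_hello
    while not stop.is_set():
        last_hello = time.monotonic()   # "send" a hello
        time.sleep(HELLO_INTERVAL)

def long_spf():
    """Stand-in for a run-to-completion computation that takes
    longer than the hold time (the slide's pegged-CPU scenario)."""
    time.sleep(3 * HOLD_TIME)

stop = threading.Event()
threading.Thread(target=hello_loop, args=(stop,), daemon=True).start()
long_spf()                              # routing process busy the whole time
adjacency_up = time.monotonic() - last_hello < HOLD_TIME
stop.set()
print("adjacency survived SPF:", adjacency_up)
```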

14
VoIP? What VoIP?
  • IGPs converge, on average, an order of
    magnitude faster than BGP
  • Leads to temporary black-holing
  • Router reboots (like that ever happens)
  • The IGP converges around it; BGP sessions are
    torn down
  • The router comes back up
  • The IGP converges and places the router in the
    forwarding path
  • BGP is still converging
  • Packets check in, but they don't check out

15
VPNs - Operational Reality Check
  • Vendors can barely keep one routing table
    straight
  • Customer Enragement Feature, IBGP withdraw bugs
  • Into this mess, we're going to throw another
    couple of hundred routing tables, like some VPN
    proposals suggest?
  • Potential for several thousand internal customer
    prefixes inside our edge routers
  • Revenge of RIP
  • Provider-provisioned VPNs: Just Say No

16
What Is Going to Work
  • Some people will optimize for high-touch edges:
    provider-provisioned VPNs, etc.
  • But if they are talking with the rest of the
    world, welcome to the new reality: it sucks
  • For the rest:

17
"Already, data dominates voice traffic on our
networks"
- Fred Douglis, AT&T Labs
These exhibits were originally published in Peter
Ewens, Simon Landless, and Stagg Newman, "Showing
some backbone," The McKinsey Quarterly, 2001
Number 1, and can be found on the publication's
Web site, www.mckinseyquarterly.com. Used by
permission.
18
What to optimize for
  • Optimize for IP
  • Parallel backbones
  • Some ISPs already have to do this based on the
    volume of IP traffic alone
  • Do not cross the streams
  • Voice traffic has well-known properties
  • Utilize them
  • Optical network: utilize DWDM and OXCs to
    virtualize the fiber

19
Solution
[Diagram: Internet (IP), VPN, Voice/Video, and CES services, each riding a common Multi-Service Optical Transport Fabric]
20
NEWS FLASH
  • Simple & Stupid Trumps Complex & Smart Every Time
  • Networks Powered by PowerPoint
  • Stuff looks good on slides; then we try to hire
    people to implement and operate it
  • Operational Reality Beats PowerPoint every time

21
The Converged Core
  • For the fortunate few:
  • Utilize OXCs & DWDM to impose arbitrary
    topologies onto fiber
  • For the rest, trying to run IP over voice:
  • Nice knowing you
  • Voice: use SONET as normal; it's not growing
    very fast, so don't mess with it
  • WCOM, T

22
Network Design Principle
  • The main problem is SCALING
  • Everything else is secondary
  • If we can scale, we're doing something right
  • State mitigation
  • Partition state
  • What you don't know can't hurt you

23
Common Backbone
  • Application Unaware
  • Rapid innovation
  • Clean separation between transport, service, and
    application
  • Allows new applications to be constructed without
    modification to the transport fabric.
  • Less Complex (overall)

24
Why A Common Backbone?
  • Spend once, use many
  • Easier capacity planning and implementation
  • Elastic demand
  • An increase of N at the edge necessitates 3-4× N
    core growth
  • Flexibility in upgrading bandwidth allows you to
    drop pricing faster than rivals

25
By carrying more traffic, a carrier can lower
costs by up to 64%
These exhibits were originally published in Peter
Ewens, Simon Landless, and Stagg Newman, "Showing
some backbone," The McKinsey Quarterly, 2001
Number 1, and can be found on the publication's
Web site, www.mckinseyquarterly.com. Used by
permission.
26
Bandwidth Blender - Set on Frappe
[Chart: historical and forecast market price vs. unit cost of a transatlantic STM-1 circuit (on a 25-year IRU lease); price per STM-1 (m), 1996-2005]
Source: KPCB
27
Problem
  • We keep hearing the phrase "bandwidth glut"
  • So are we experiencing a glut or not?
  • No matter how many terabits of core bandwidth
    get turned up:
  • Capacity constraints are at the edges
  • Go drop 2-4 racks in colocation facilities
  • The Q in QoS stands for Quantity, not Quality
  • We don't need to boil the ocean; all we want is
    a poached fish

28
How To Build A Stupid Backbone
  • Optical backbones cannot scale at the STS-1 level
  • A high-speed backbone reduces complexity and
    increases manageability
  • Impose a hierarchy
  • Optical backbone provides high-speed
    provisioning/management (OC-192/48)
  • Sub-rate clouds multiplex lower-speed traffic
    onto core lightpaths

29
Regional-Core Network Infrastructure
[Diagram: a core network interconnecting three metro subnetworks]
30
Requirements
  • Support multiple services
  • Voice, VPN, Internet, private line
  • Improve service availability with stable
    approaches where possible
  • Use MPLS if your SONET ring builds are taking too
    long (anyone still building SONET rings for
    data?)
  • If you have to use MPLS…

31
Stabilize The Edge
  • LSPs re-instantiated as p2p links in the IGP
  • e.g., an ATL-to-DEN LSP looks like a p2p link
    with metric XYZ
  • Helps obviate BGP black-holing issues
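The mechanism can be sketched with plain SPF over a toy graph. The topology and metrics below are hypothetical: once the LSP is advertised as a p2p link, ordinary shortest-path computation prefers it whenever its metric beats the hop-by-hop path.

```python
# Hypothetical topology: an ATL->DEN LSP advertised into the IGP as a
# point-to-point link with its own metric, then picked up by plain SPF.
import heapq

def spf(graph, src):
    """Dijkstra over {node: [(neighbor, metric), ...]}."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, m in graph.get(u, []):
            if d + m < dist.get(v, float("inf")):
                dist[v] = d + m
                heapq.heappush(heap, (d + m, v))
    return dist

# Physical hops only: ATL -> DFW -> DEN, metric 10 per hop.
igp = {"ATL": [("DFW", 10)], "DFW": [("DEN", 10)], "DEN": []}
print(spf(igp, "ATL")["DEN"])   # 20, via DFW

# Re-instantiate the ATL->DEN LSP as a p2p link with metric 15 (the "XYZ").
igp["ATL"].append(("DEN", 15))
print(spf(igp, "ATL")["DEN"])   # 15, over the LSP
```

The metric on the re-instantiated link is the knob: set it below the physical path cost to pull traffic onto the LSP, above it to keep the LSP as a standby.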

32
Stabilize The Core
  • Global instability is propagated via BGP
  • Fate-sharing with the global Internet
  • All decisions are made at the edge, where the
    traffic comes in
  • Rethink the functionality of BGP in the core

33
LSP Distribution
  • LDP alongside RSVP
  • Routers on the edge of the RSVP domain do fan-out
  • Multiple levels of label stacking
  • Backup LSPs
  • Primary and backup in the RSVP core
  • Speeds convergence
  • Removes hold-down issues (signaling too fast in a
    bouncing network)
  • The protect path should be separate from the
    working path
  • There are other ways, including RSVP end-to-end

34
Implementation
  • IP + Optical
  • Virtual fiber
  • Mesh protection
  • Overlay
  • We already know where the big traffic will be
  • NFL cities, London, Paris, Frankfurt, Amsterdam
  • DWDM + Routers

35
IP + Optical
[Diagram: layered stack - IP/routers over optical switching over DWDM/3R over fiber]
  • Virtual fiber
  • Embed an arbitrary fiber topology onto the
    physical fiber
  • Mesh restoration
  • Private line
  • Increased velocity of service provisioning
  • Higher cost, added complexity
36
[Diagram: metro collectors feeding backbone fiber via DWDM terminals and optical switches, spanning core and edge]
37
IP + Optical Network
[Diagram: big flows bypassing intermediate router hops]
Out of port capacity or switching speed on routers?
Bypass intermediate hops.
38
Dual Network Layers
  • Optical core (DWDM fronted by OXCs)
  • Fast lightpath provisioning
  • Remember: routers are very expensive OEO devices
  • Metro/sub-rate collectors
  • Multiservice platforms, edge optical switches
  • Groom into lightpaths or dense fiber
  • Demux in the PoP (light or fiber)
  • Eat our own dog food
  • Utilize customer private-line provisioning
    internally to run the IP network

39
Questions
  • Vijay "Route around the congestion, we must" Gill

Many thanks to Tellium (Bala Rajagopalan and
Krishna Bala) for providing icons at short notice!
Nota bene: This is not a statement of direction
for MFN!