UIUCnet Status Report - PowerPoint PPT Presentation

1
UIUCnet Status Report
  • Charley Kline
  • UIUCnet Architect
  • CITES

2
Current Status
  • Architecture Review
  • Current Issues
  • Current Projects

3
Architecture
Core
Node Distribution
Node
Building
Building Collection
(or Building Distribution)
Access
4
Architecture
  • Redundancy in the core only
  • Each node distribution switch is dual-connected
    to both core devices
  • Backup STP paths form triangles
  • No node device redundancy (but plenty within a
    device)

5
Current Issues - Redundancy
  • The current design has but one distribution
    router at each node
  • The idea was that these were distributed enough
    that a card failure would not affect other
    connections... this has not been borne out
  • Our total uptime is thus very sensitive to the
    health of the individual distribution devices
  • First priority to address

6
Current Issues - Dist1-1 Problem
  • (actually a vulnerability in all the node
    devices, but it hits dist1-1 the worst)
  • Extremely hard to isolate... utterly
    unreproducible
  • Vendor spent significant resources to debug
    including special software builds and extra
    hardware probes

7
Current Issues - Dist1-1 Problem
  • We believe we have a root cause now: a certain
    traffic pattern seen on some networks
  • Vendor has implemented a fix in a new software
    load and provided configuration changes to
    ameliorate the effect of the traffic
  • Will schedule this to load during Summer I and
    test

8
Current Issues - Dist1-1 Problem
  • Also redoing connectivity for critical services
    fed through this device to remove single point of
    failure
  • DNS
  • VPN / Wireless
  • CITES Data Center

9
Current Issues - Multicast
  • IP Multicast is increasingly important both for
    on-LAN services such as disk imaging and also for
    netwide services such as video distribution
  • We're getting closer to offering this as a true
    service, but we're not quite there yet
  • It's difficult to debug and ill-understood
  • Need to set a new target date for the service

10
Current Projects
  • Full Multicast deployment
  • Research Network Experiment
  • Will work with one or two departments on the need
    to provide very high performance networking (near
    10 Gb/s clear channel) to other research
    institutions
  • Improving resiliency of critical services
  • DNS anycast
  • Router based (not VLAN based) redundancy for
    critical nets such as the CITES data center and
    node-based services such as dialup

11
Timeline: A (Quick) Trip Through UIUCnet History
12
1985
  • NCSA receives grant
  • NCSA Startup with CSO
  • CSO ad hoc networking P10, P80 in VMEbus Sun
    workstations
  • Sytek, UUCP, Bitnet

13
(No Transcript)
14
1986
  • NCSA CRAY XMP-24
  • Hyperchannel
  • NSFnet goes live (two 56kb lines)
  • Start of what would be NOC
  • UIC (56kb) and other research nets
  • Sytek, UUCP, Bitnet, Hosts

15
(No Transcript)
16
1987
  • NCSA/CSO divorce
  • New Phone system, B-jacks and fiber
  • Proteon multibus gateways arrive
  • CEN ring becomes part of UIUCnet
  • Proteon 3s and Cs temperature problem
  • More national nets
  • More campus buildings
  • Sytek, UUCP, Bitnet, Hosts, Class B, Decnet,
    phone noise, gateway memory

17
(No Transcript)
18
(No Transcript)
19
1988
  • Finally get official access to 88k ft of fiber
  • CEN funds more engineering buildings
  • More national nets
  • More campus buildings
  • Sytek, UUCP, Bitnet, Hosts, Class B, Decnet,
    phone noise, DCL v3, policy

20
1989
  • netIllinois
  • CICnet
  • Network Design Office
  • More national nets
  • More campus buildings
  • Sytek, UUCP, Bitnet, Hosts, Class B, Decnet, DCL
    v3

21
1990
  • CCSP
  • SLIP testing on terminal servers
  • Start of FDDI rings on Proteon CNXs
  • More campus buildings
  • Sytek, UUCP, Bitnet, Hosts, Class B, Decnet, DCL
    v3

22
1992
  • ISDN voice pilot
  • Prairienet
  • Even more terminal servers
  • Residence hall networking continues
  • More campus buildings
  • Telecom and CSO merge
  • Funding, netIllinois, decnet, appletalk

23
1993
  • ATM
  • ISDN data testing
  • Yet more terminal servers
  • Residence hall networking continues
  • More campus buildings
  • Funding, more growing pains
  • Decnet, Appletalk, IPX

24
UIUCnet Looking Forward
  • Charley Kline
  • UIUCnet Architect
  • CITES

25
Current Major Projects
  • ICCN (Intercampus Communication Network)
  • Next Generation UIUCnet Core Architecture
  • Network Upgrade / Convergence

26
ICCN Update
  • ICCN Defined
  • Overview of the Optical Layer
  • Overview of the Routing/Switching Layer
  • ICCN Services, Current and Future

27
Origins (from CCSP 1999)
  • Proposal to build a triangle of OC-3 links
    between all three campuses
  • AITS would use this as their own University-wide
    backbone to deploy ERP and other UA services
  • Each campus benefits as well by taking advantage
    of increased reliability to the Internet and MREN
    (redundant links)

28
...as envisioned in 1999
29
ICCN as Implemented Today
  • Provide a facility to all three Illinois campuses
    for interconnectivity and access to both
    commodity ISPs and to national research networks
  • Bandwidth to accommodate all current and
    foreseeable applications
  • Must be expandable and extendable throughout its
    20-year lifespan

30
ICCN as Implemented Today
  • Bidirectional dark fiber ring composed of four
    strands of fiber leased from several vendors
  • POPs on UIUC, UIC, UIS campuses (two each) plus
    at strategic locations in Chicago to pick up
    research network and inexpensive ISP access
  • Two interlocking 10G Ethernet rings plus three
    dedicated 10G Ethernets for research connectivity

31
ICCN as Implemented Today
  • DWDM technology
  • Provides in theory up to 40 10G channels
  • Channels can be provisioned from any location to
    any other location
  • Amplifies and regenerates signals as necessary to
    compensate for loss and chromatic dispersion
  • ICCN uses four channels to implement five services

32
ICCN Fiber Map
33
ICCN Optical Channel Map
34
ICCN IP Layer Connectivity
35
ICCN IP Layer Connectivity
  • ICCN routers implement MPLS-IP, which:
  • Routes packets across the network very
    efficiently
  • Lets us advertise routes to peers as we need to
  • Allows for private IP networks in parallel with
    the main network (think of an IP-layer VLAN)
  • Allows for the tunneling of Ethernet networks
    from one endpoint to another without having to
    actually switch Ethernet on the ICCN backbone

36
ICCN IP Layer Connectivity
37
ICCN Services
  • High-speed connectivity between campuses
  • Fast and highly reliable backbone for AITS
  • Bulk-rate ISP service to all three campuses
  • Transport service to allow subnets of one campus
    network to appear on another
  • Independent clear-channel transport for research
    efforts
  • Other independent transports
  • Alien waves (pure optical transport)

38
UIUCnet Backbone Futures
  • Redundancy Options
  • Multi-VRF architecture
  • MPLS
  • VLL

39
Redundancy Options
  • Current backbone architecture is well-positioned
    to allow considerable redundancy for departments
  • ...but we didn't implement redundancy at the
    distribution layer, only in the core itself

40
Redundancy Options
41
Redundancy Within a Node
  • We can easily add additional node distribution
    devices and use additional node-to-building fiber
    to increase resiliency for building-routed
    subnets (not VLANs!)

42
Redundancy Within a Node
  • Adding new distribution routers does not alter
    the core architecture at all
  • At the very least, we will be pursuing this for
    some fraction of UIUCnet-connected buildings

43
Redundancy Between Nodes
  • We can do the same trick to connect a building to
    different nodes
  • ...but this doesn't gain too much and uses up
    internodal fiber, which is rather precious

44
Redundancy Between Nodes
  • We can think about this for really critical
    connections like the CITES data center and some
    UA/AITS sites, but it probably isn't worth the
    consumption of internode fiber in the vast
    majority of cases
  • Remember: nearly all outages have been
    device-related, not node-related

45
Multi-VRF Architectures
  • We currently use VLANs to isolate Ethernet
    traffic inside one common set of equipment
  • 802.1q tags keep VLANs separate across multi-VLAN
    links
  • Extremely powerful tool for Layer-2 delivery
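The 802.1q tagging described above can be sketched in a few lines of Python. This is a toy model, not switch firmware: the frame bytes and field packing are illustrative, though the 4-byte tag (TPID 0x8100 plus a 16-bit TCI carrying priority and VLAN ID) does sit right after the destination/source MAC addresses.

```python
import struct

TPID_DOT1Q = 0x8100  # EtherType value that marks an 802.1q tag

def tag_frame(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert the 4-byte 802.1q tag after the 12 bytes of dst/src MAC."""
    assert 0 <= vlan_id < 4096, "VLAN IDs are 12 bits"
    tci = (pcp << 13) | vlan_id  # priority code point + VLAN ID
    return frame[:12] + struct.pack("!HH", TPID_DOT1Q, tci) + frame[12:]

def vlan_of(frame: bytes):
    """Return the frame's VLAN ID, or None if it is untagged."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    return (tci & 0x0FFF) if tpid == TPID_DOT1Q else None
```

A switch trunk port does exactly this in hardware: the tag travels only on multi-VLAN links and is stripped again at the access port.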

46
Multi-VRF Architectures
  • What is missing is a mechanism to isolate
    separate Layer-3 IP networks in the same way
  • As it stands now, once traffic leaves its subnet
    via its router, it's all mixed into the campus IP
    backbone and can no longer be treated specially.
  • Router features we are looking at can address
    that.

47
Multi-VRF Architectures
  • The equivalent of a VLAN in the IP world is
    called a Virtual Routing and Forwarding
    instance, or VRF
  • VLAN is to Ethernet as VRF is to IP
  • The router keeps a separate IP routing table and
    separate routing-protocol instances for each
    VRF, and each routed interface is assigned to a
    VRF just as each Ethernet port is in a VLAN
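The per-VRF table idea can be modeled in a short Python sketch. Everything here is invented for illustration (the class name, interface names, and next-hop labels); the point is only that a lookup consults just the table of the ingress interface's VRF, so the same prefix can resolve differently in different VRFs.

```python
import ipaddress

class MultiVrfRouter:
    """Toy model: one routing table per VRF, the way a switch
    keeps one MAC table per VLAN."""

    def __init__(self):
        self.if_vrf = {}   # interface name -> VRF name
        self.tables = {}   # VRF name -> list of (prefix, next_hop)

    def assign(self, interface, vrf):
        self.if_vrf[interface] = vrf
        self.tables.setdefault(vrf, [])

    def add_route(self, vrf, prefix, next_hop):
        self.tables[vrf].append((ipaddress.ip_network(prefix), next_hop))

    def lookup(self, ingress_if, dst):
        """Longest-prefix match confined to the ingress VRF's table."""
        table = self.tables[self.if_vrf[ingress_if]]
        addr = ipaddress.ip_address(dst)
        matches = [(net, nh) for net, nh in table if addr in net]
        return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None
```

With two VRFs each carrying 10.0.0.0/8, a packet for 10.1.2.3 goes to a different next hop depending on which interface (and hence which VRF) it arrived on.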

48
Multi-VRF Architectures
  • Across shared links, we need an equivalent to the
    VLAN tag to keep traffic in separate VRFs from
    mixing
  • This tag is called a label and is a part of a
    data-carrying mechanism called Multiprotocol
    Label Switching or MPLS

49
MPLS
  • In normal IP routing, the destination IP address
    is mapped into a next-hop router address and the
    packet is forwarded hop by hop
  • In MPLS, a label edge router takes information
    about the packet (destination IP but also VRF,
    QoS, etc) and maps that into a class of packets
    all to be treated the same
  • That class is then assigned a label

50
MPLS
  • The LER then prepends that label to the packet
    and sends it on to the next MPLS router
  • The cool thing is that each MPLS router now needs
    only examine the label to determine how to treat
    the packet
  • This greatly simplifies the job of core routing

51
MPLS
  • At the LER on the far side of the network, the
    label is popped off and the last hop to the
    destination is taken via normal IP routing
    mechanisms
  • Note that because packets are forwarded based on
    their label and not the destination IP address,
    traffic in separate VRFs is kept isolated
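The push/swap/pop life cycle across these three slides can be sketched as follows. This is a toy model: the table contents, labels, and field names are made up, but the shape matches the description above: the ingress LER classifies on packet attributes, and the core routers touch only the label.

```python
# Toy label-switched path: ingress LER pushes a label chosen from
# packet attributes (the FEC), each core LSR forwards on the label
# alone with no IP lookup, and the egress LER pops it.

def ingress_push(packet, fec_to_label):
    """LER: classify (VRF, dst prefix) into a FEC, push its label."""
    fec = (packet["vrf"], packet["dst_prefix"])
    packet["labels"].append(fec_to_label[fec])
    return packet

def lsr_forward(packet, label_table):
    """Core LSR: swap the top label and pick the egress interface,
    looking only at the label -- never at the IP header."""
    out_label, out_if = label_table[packet["labels"][-1]]
    packet["labels"][-1] = out_label
    return out_if

def egress_pop(packet):
    """Far-side LER: pop the label; the last hop uses normal IP routing."""
    packet["labels"].pop()
    return packet
```

Because two VRFs map to different FECs and hence different labels, their packets can never be confused in the core even if their IP addresses overlap.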

52
Multi-VRF and MPLS
  • So what can we do with this?
  • Multiple security zones: some VRFs may exit the
    campus without traversing the campus firewalls,
    while others may not be allowed to exit the
    campus at all except via a proxy server
  • Research network: a special VRF may allow traffic
    to the national research backbones to bypass the
    firewalls for performance reasons while sending
    other Internet traffic out the normal way
  • Some VRFs may be given less Internet bandwidth
    than others for various reasons

53
Another 1999 CCSP Slide
54
Multi-VRF and MPLS
  • What's the downside?
  • MPLS is very complex and will create a whole new
    set of things for us to engineer and debug
  • Just cutting our teeth now on the ICCN
  • MPLS-capable routers are more expensive than
    traditional IP routers

55
MPLS VLLs
  • One last nice feature of an MPLS core is support
    for virtual leased line or VLL
  • This is a way to take a physical Ethernet port
    and connect it to a remote Ethernet port on the
    other side of an MPLS network
  • The transport is not Ethernet: frames are
    encapsulated inside MPLS-IP packets and routed
    over the MPLS network
  • Much more reliable than Ethernet switching
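A simplified sketch of the encapsulation: VLL-style pseudowires are commonly built with two stacked labels, an outer tunnel label that steers the packet across the MPLS core and an inner circuit label that identifies which port-to-port connection the frame belongs to. The byte layout below follows the standard 4-byte MPLS label stack entry, but the label values and framing details are illustrative.

```python
import struct

def mpls_entry(label: int, s_bit: int, ttl: int = 64) -> bytes:
    """Pack one 4-byte MPLS stack entry: label(20) TC(3) S(1) TTL(8)."""
    return struct.pack("!I", (label << 12) | (s_bit << 8) | ttl)

def vll_encap(eth_frame: bytes, tunnel_label: int, vc_label: int) -> bytes:
    """Outer tunnel label (S=0) crosses the core; inner circuit
    label (S=1, bottom of stack) names the port-to-port VLL."""
    return mpls_entry(tunnel_label, 0) + mpls_entry(vc_label, 1) + eth_frame

def vll_decap(packet: bytes):
    """Egress: read the circuit label, strip both entries,
    and recover the original Ethernet frame intact."""
    vc_label = struct.unpack("!I", packet[4:8])[0] >> 12
    return vc_label, packet[8:]
```

The frame itself is never examined in transit, which is why the far port behaves like the near end of a leased line rather than a switched segment.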

56
MPLS VLLs
  • The downside here is that all this is point to
    point and not a substitute for the current VLAN
    architecture (there is a more direct substitute
    called VPLS, but it's younger and a little
    scarier)
  • So we can think about bridging an Ethernet
    network across campus... not as generically as we
    do today, but as we get rid of big VLANs this may
    be able to address some corner cases

57
Convergence
  • With voice and video increasingly integrating
    with the data network, there may be new
    requirements placed on UIUCnet

58
Convergence Requirements
  • Campus phone system over IP
  • Higher reliability requirements of voice imply a
    change in uptime requirements of UIUCnet
    (currently about three 9s; needs to be about
    4.5)
  • QoS needs to be more formally implemented (it
    just works now, but voice SLAs will require
    well-defined packet delivery mechanisms)
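A quick back-of-the-envelope check of what that jump in nines means; the three-9s and 4.5-9s figures come from the slide, the arithmetic is standard.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: float) -> float:
    """Yearly downtime allowed at a given number of 'nines'
    (3 nines = 99.9% availability, 4.5 nines = 99.997%)."""
    unavailability = 10 ** (-nines)
    return MINUTES_PER_YEAR * unavailability

print(round(downtime_minutes_per_year(3.0)))  # 526 minutes (~8.8 hours)
print(round(downtime_minutes_per_year(4.5)))  # 17 minutes
```

So the voice requirement shrinks the yearly outage budget roughly thirtyfold, from most of a working day to about a quarter hour.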

59
Convergence Requirements
  • Cable TV delivery over the data network can only
    reasonably happen via MPEG-2 over IP Multicast
  • Good news is, as long as we can leverage
    multicast, there should be plenty of available
    bandwidth to support this (about 4-5 Mb/s per
    channel)
  • Multicast IP distribution over UIUCnet is almost
    there, but we have work yet to do (can't commit
    to a highly available service just yet)
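To see why multicast makes the bandwidth claim plausible, here is a toy comparison. The per-channel rate is the upper end of the slide's 4-5 Mb/s estimate; the viewer and channel counts are made-up numbers for illustration.

```python
channel_mbps = 5              # upper end of the slide's MPEG-2 estimate
viewers, channels = 2000, 80  # hypothetical campus-wide figures

# Unicast: every viewer pulls a private copy of their channel.
unicast_mbps = viewers * channel_mbps

# Multicast: one stream per distinct channel, shared by all of
# that channel's viewers, regardless of audience size.
multicast_mbps = channels * channel_mbps

print(unicast_mbps, multicast_mbps)  # 10000 400
```

Unicast delivery to 2000 viewers would by itself fill a 10G backbone link; multicast holds the load to a few hundred Mb/s no matter how many sets are tuned in.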

60
Convergence Requirements
  • Life-safety (E911) telephony imposes requirements
    upon the network such that it can reliably
    communicate the location of a handset to the PSAP
  • Wired IP phones will require stringent jack and
    cable management
  • Wireless phones will require a location-by-AP
    service which doesn't exist yet
  • Third-party VoIP systems will simply have to come
    with a stern warning

61
Questions / Discussion