1
London Internet Exchange Point Update
  • Keith Mitchell, Executive Chairman
  • NANOG15 Meeting
  • Denver, Jan 1999

2
LINX Update
  • LINX now has 63 members
  • Second site now in use
  • New Gigabit backbone in place
  • Renumbered IXP LAN
  • Some things we have learned!
  • Statistics
  • What's coming in 1999

3
What is the LINX?
  • UK National IXP
  • Not-for-profit co-operative of ISPs
  • Main aim to keep UK domestic Internet traffic in
    UK
  • Increasingly keeping EU traffic in EU

4
LINX Status
  • Established Oct 94 by 5 member ISPs
  • Now has 7 FTE dedicated staff
  • Sub-contracts co-location to 2 neutral sites in
    London Docklands
  • Telehouse
  • TeleCity
  • Traffic doubling every 4-6 months!

5
LINX membership
  • Now totals 63
  • 10 since Oct 98
  • Recent UK members
  • RedNet
  • XTML
  • Mistral
  • ICLnet
  • Dialnet
  • Recent non-UK members
  • Carrier 1
  • GTE
  • Above Net
  • Telecom Eireann
  • Level 3

6
LINX Members by Country
7
Second Site
  • Existing Telehouse site full until 99Q3 extension
    ready
  • TeleCity is a new dedicated co-lo facility, 3
    miles from Telehouse
  • Awarded LINX contract by open tender (8
    submissions)
  • LINX has 16-rack suite
  • Space for 800 racks

8
Second Site
  • LINX has diverse dark fibre between sites (5km)
  • Same switch configuration as Telehouse site
  • Will have machines to act as hot backups for the
    servers in Telehouse
  • Will have a K.root server behind a transit router
    soon

9
LINX Traffic Issues
  • Bottleneck was inter-switch link between Catalyst
    5000s
  • Cisco FDDI could no longer cope
  • 100baseT nearly full
  • Needed to upgrade to Gigabit backbone within
    existing site 98Q3

10
Gigabit Switch Options
  • Looked at 6 vendors
  • Cabletron/Digital, Cisco, Extreme, Foundry,
    Packet Engines, Plaintree
  • Some highly cost-effective options available
  • But needed non-blocking, modular, future-proof
    equipment, not workgroup boxes

11
Old LINX Infrastructure
  • 5 Cisco Switches
  • 2 x Catalyst 5000, 3 x Catalyst 1200
  • 2 Plaintree switches
  • 2 x WaveSwitch 4800
  • FDDI backbone
  • Switched FDDI ports
  • 10baseT and 100baseT ports
  • Media convertors for fibre Ethernet (>100m)

12
Old LINX Topology
13
New Infrastructure
  • Catalyst and Plaintree switches no longer in use
  • Catalyst 5000s appeared to have broadcast scaling
    issues regardless of Supervisor Engine
  • Plaintree switches had proven too unstable and
    unmanageable
  • Catalyst 1200s at end of useful life

14
(No Transcript)
15
New Infrastructure
  • Packet Engines PR-5200
  • Chassis based 16 slot switch
  • Non-blocking 52Gbps backplane
  • Used for our core, primary switches
  • One in Telehouse, one in TeleCity
  • Will need a second one in Telehouse within this
    quarter
  • Supports 1000LX, 1000SX, FDDI and 10/100 ethernet

16
New Infrastructure
  • Packet Engines PR-1000
  • Small version of PR-5200
  • 1U switch: 2x SX and 20x 10/100
  • Same chipset as 5200
  • Extreme Summit 48
  • Used for second connections
  • Gives vendor resiliency
  • Excellent edge switch - low cost per port
  • 2x Gigabit, 48x 10/100 ethernet

17
New Infrastructure
  • Topology changes
  • Aim to survive a major failure in one switch
    without affecting member connectivity
  • Aim to survive major failures on inter-switch
    links without affecting connectivity
  • Ensure that inter-switch connections are not
    bottlenecks

18
New backbone
  • All primary inter-switch links are now gigabit
  • New kit on order to ensure that all inter-switch
    links are gigabit
  • Inter-switch traffic minimised by keeping all
    primary and all backup traffic on their own
    switches

19
IXP Switch Futures
  • Vendor claims of proprietary 1000base optics with
    50km range are interesting
  • Need abuse prevention tools
  • port filtering, RMON
  • Need traffic control tools
  • member/member bandwidth limiting and measurement
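
The measurement half of that wish list can be approximated by sampling
per-port octet counters and turning the deltas into bit rates. A minimal
sketch in Python, assuming 64-bit ifHCInOctets-style counters; the sample
values are hypothetical:

    COUNTER_MAX = 2 ** 64   # assuming 64-bit SNMP octet counters

    def mbit_per_sec(octets_then, octets_now, interval_sec):
        # Convert two octet-counter samples into an average Mbit/s,
        # tolerating a single counter wrap between the samples.
        delta = (octets_now - octets_then) % COUNTER_MAX
        return delta * 8 / interval_sec / 1_000_000

    # Hypothetical samples taken 300 seconds apart:
    print(mbit_per_sec(10_000_000_000, 21_250_000_000, 300))   # 300.0 Mbit/s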

20
Address Transition
  • Old IXP LAN was 194.68.130/24
  • New allocation 195.66.224/19
  • New IXP LAN 195.66.224/23
  • Striped allocation on new LAN
  • 2 addresses per member, same last octet
  • About 100 routers involved
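
One reading of "striped allocation, 2 addresses per member, same last
octet" is that member N takes 195.66.224.N and 195.66.225.N, one address
in each half of the /23. A minimal sketch of that mapping (the
member-to-octet assignment is an assumption, not from the slide):

    import ipaddress

    NEW_LAN = ipaddress.ip_network("195.66.224.0/23")

    def member_addresses(last_octet):
        # Two addresses per member, same last octet, one in each /24
        # half of the 195.66.224/23 peering LAN.
        if not 1 <= last_octet <= 254:
            raise ValueError("not a usable host octet")
        return (ipaddress.ip_address(f"195.66.224.{last_octet}"),
                ipaddress.ip_address(f"195.66.225.{last_octet}"))

    addrs = member_addresses(42)
    assert all(a in NEW_LAN for a in addrs)
    print(addrs)   # 195.66.224.42 and 195.66.225.42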

21
Address Migration Plan
  • Configured new address(es) as secondaries
  • Brought up peerings with their new addresses
  • When all peers are peering on new addresses,
    stopped old peerings
  • Swap over the secondary to the primary IP address
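
A minimal sketch of the "all peers are peering on new addresses" check,
assuming each member's established BGP sessions can be dumped as a set of
peer IPs; the data structure and names here are hypothetical:

    NEW_LAN_PREFIXES = ("195.66.224.", "195.66.225.")

    def safe_to_drop_old_peerings(established_sessions):
        # established_sessions: dict of member name -> set of peer
        # addresses with which a BGP session is currently established.
        # Old peerings should only be stopped once every member has at
        # least one session up on the renumbered 195.66.224/23 LAN.
        return all(
            any(peer.startswith(NEW_LAN_PREFIXES) for peer in peers)
            for peers in established_sessions.values()
        )

    sessions = {"example-isp": {"194.68.130.17", "195.66.224.17"}}
    print(safe_to_drop_old_peerings(sessions))   # True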

22
Address Migration Plan
  • Collector dropped peerings with old
    194.68.130.0/24 addresses
  • Anyone not migrated at this stage lost direct
    peering with AS5459
  • Eventually, old addresses no longer in use

23
What we have learned
  • ... the hard way!
  • Problems after renumbering
  • Some routers still using /24 netmask
  • Some members treating the /23 network as two /24s
  • Big problem if proxy ARP is involved!
  • Broadcast traffic bad for health
  • We have seen >50 ARP requests per second at worst
    times
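
A minimal sketch of the netmask pitfall using Python's ipaddress module:
an interface configured with a /24 mask on the 195.66.224/23 LAN treats
half the members as off-subnet, which is exactly where proxy ARP starts
to hurt (the addresses below are illustrative):

    import ipaddress

    LINX_LAN = ipaddress.ip_network("195.66.224.0/23")

    def off_subnet_peers(my_addr, prefixlen, peer_addrs):
        # Peers on the exchange LAN that a router with this interface
        # config would NOT ARP for directly (it would look for a
        # gateway, or rely on someone proxy-ARPing on their behalf).
        iface = ipaddress.ip_interface(f"{my_addr}/{prefixlen}")
        return [p for p in peer_addrs
                if ipaddress.ip_address(p) in LINX_LAN
                and ipaddress.ip_address(p) not in iface.network]

    peers = ["195.66.224.20", "195.66.225.20"]
    print(off_subnet_peers("195.66.224.10", 24, peers))   # ['195.66.225.20']
    print(off_subnet_peers("195.66.224.10", 23, peers))   # []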

24
ARP Scaling Issues
  • Renumbering led to lots of ARP requests for
    unused IP addresses
  • ARP no-reply retransmit timer is a fixed time-out
  • Maintenance work led to groups of routers going
    down/up together
  • Synchronised waves of ARP requests
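
The usual cure for synchronised retransmission waves is to de-correlate
the timers. A minimal sketch of an ARP retry delay with random jitter
rather than a fixed time-out; the interval and jitter values are
hypothetical:

    import random

    BASE_RETRY = 1.0    # seconds between ARP retries (hypothetical)
    JITTER = 0.5        # spread retries by up to +/- 50%

    def next_arp_retry_delay():
        # With a fixed time-out, routers that went down/up together
        # re-ARP in lock-step, producing waves of broadcast traffic.
        # Randomising each delay spreads the retries out.
        return BASE_RETRY * (1.0 + random.uniform(-JITTER, JITTER))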

25
New MoU Prohibitions
  • Proxy ARP
  • ICMP redirects
  • Directed broadcasts
  • Spanning Tree
  • IGP broadcasts
  • All non-ARP MAC layer broadcasts
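
One way to police these prohibitions is to watch the exchange LAN for the
traffic classes the MoU forbids. A minimal sketch using scapy to count
ARP broadcasts, non-ARP broadcasts and spanning-tree BPDUs in a capture
file; this is my own illustration, not a LINX tool, and the capture file
name is hypothetical:

    from scapy.all import rdpcap, Ether, ARP   # pip install scapy

    BROADCAST = "ff:ff:ff:ff:ff:ff"
    STP_MCAST = "01:80:c2:00:00:00"   # destination MAC of 802.1D BPDUs

    def audit(pcap_path):
        counts = {"arp": 0, "non_arp_broadcast": 0, "stp_bpdu": 0}
        for pkt in rdpcap(pcap_path):
            if Ether not in pkt:
                continue
            dst = pkt[Ether].dst.lower()
            if dst == BROADCAST:
                # Only ARP is tolerated at the MAC broadcast address;
                # anything else breaches the MoU.
                counts["arp" if ARP in pkt else "non_arp_broadcast"] += 1
            elif dst == STP_MCAST:
                counts["stp_bpdu"] += 1   # spanning tree is prohibited too
        return counts

    print(audit("linx-lan-sample.pcap"))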

26
Statistics
  • LINX total traffic
  • 300 Mbit/sec avg, 405 Mbit/sec peak
  • Routing table
  • 9,200 out of 55,000 routes
  • k.root-servers
  • 2.2 Mbit/sec out, 640 Kbit/sec in
  • nic.uk
  • 150 Kbit/sec out, 60 Kbit/sec in
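
For scale, 9,200 out of 55,000 routes works out to roughly 17% of the
global routing table as it stood at the time.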

27
Statistics and looking glass at
http://www2.linx.net/
28
Things planned for '99
  • Infrastructure spanning tree implementation
  • Completion of Stratum-1 NTP server
  • Work on an ARP server
  • Implementation of route server
  • Implementation of RIPE NCC test traffic box