SCinet: The Annual Convergence of Advanced Networking and High Performance Computing

Transcript and Presenter's Notes
1
SCinet: The Annual Convergence of Advanced
Networking and High Performance Computing
  • Steve Corbató, Internet2
  • MasterWorks track
  • 14 November 2001

2
SC99 GNAP Demo Network
  • 15-18 November 1999
  • Portland, Oregon

3
Outline
  • SCinet
  • Wide area connectivity
  • Fiber
  • Wireless
  • Infrastructure
  • Operations, Measurement, Security
  • Events
  • Xnet, Bandwidth Challenge, SC Global
  • Trends
  • Q&A

4
SCinet is 4 networks
  • Production commodity network
  • Ubiquitous wireless network
  • High-performance/availability exhibit floor
    network
  • Bleeding-edge testbed - Xnet

5
SCinet is people (and employers)
  • Basil Decina
  • Bill Iles
  • Bill Kramer
  • Bill Nickless
  • Bill Wing
  • Bob Stevens
  • Brad Pope
  • Brent Sweeny
  • Caren Litvanyi
  • Chris Wright
  • Chuck Fisher
  • Dave Koester
  • Davey Wheeler
  • David Mitchell
  • David Richardson
  • Debbie Montano
  • Dennis Duke
  • Doug Luce
  • Doug Nordwall
  • Linda Winkler
  • Martin Swany
  • Marvin Drake
  • Matt Zekauskas
  • Paola Grosso
  • Patrick Dorn
  • Paul Daspit
  • Paul Love
  • Paul Reisinger
  • Rex Duncan
  • Rick Bagwell
  • Rick Mauer
  • Riki Kurihara
  • Rob Jaeger
  • Robert Riehl
  • Roland Gonzalez
  • Russ Wolf
  • Seth Viddal
  • Stanislav Shalunov

6
(No Transcript)
7
SC2001 Leadership
  • Bill Wing, ORNL chair
  • Jim Rogers, CSC vice chair
  • Dennis Duke, FSU incoming chair
  • Chuck Fisher, ORNL hardware
  • Jeff Mauth, PNNL fiber
  • Martin Swany, UTK monitoring
  • Eli Dart, NERSC security
  • Bill Nickless, ANL routing
  • Tim Toole, SNL wireless
  • David Koester, Mitre Xnet
  • Jon Dugan, NCSA net mgmt
  • Bill Kramer, NERSC Bandwidth Challenge
  • Greg Goddard, UFl monitoring
  • Kevin Oberman, LBL Denver fiber
  • Steve Corbató, Internet2 WAN
  • Debbie Montano, Qwest Denver connectivity
  • Linda Winkler, ANL SC Global

8
SCinet Committee process
  • Conference calls: biweekly → weekly
  • Planning meetings (x3)
  • Venue recon trips (fiber, wireless)
  • Staging (3 weeks before SCxy)
  • Build (starts Monday before SCxy)
  • Booth drops (36 hours before gala reception)
  • Operate network for 6 days
  • Tear-down (starts Thursday 4:01 p.m.)
  • Rest, do day jobs for four months, and then start
    again

9
Staging
10
Wide area connectivity
  • Denver: 15 Gbps total (tallied in the sketch
    below this list)
  • 2xOC-48c Abilene (Denver)
  • 2xGigE StarLight (Chicago)
  • 1xOC-48c Pacific/Northwest Gigapop (Seattle)
  • 2xOC-48c ESnet (Sunnyvale & Chicago)
  • Level(3) provided wide area connectivity
  • Qwest provided local dark fiber
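
A quick Python tally, as an illustration, of how the circuits above add
up to roughly the quoted 15 Gbps. The per-circuit line rates are
standard nominal figures (OC-48c SONET ≈ 2.488 Gbps, GigE = 1 Gbps),
not numbers taken from the slides.

# Rough tally of the SC2001 wide area circuits into Denver.
OC48C_GBPS = 2.488   # nominal OC-48c SONET line rate
GIGE_GBPS = 1.0      # Gigabit Ethernet

circuits = [
    ("Abilene (Denver)",            2 * OC48C_GBPS),
    ("StarLight (Chicago)",         2 * GIGE_GBPS),
    ("Pacific/Northwest Gigapop",   1 * OC48C_GBPS),
    ("ESnet (Sunnyvale & Chicago)", 2 * OC48C_GBPS),
]

total = sum(gbps for _, gbps in circuits)
for name, gbps in circuits:
    print(f"{name:30s} {gbps:5.2f} Gbps")
print(f"{'Total':30s} {total:5.2f} Gbps  (~15 Gbps as quoted)")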

11
WAN Bandwidth trends
  • SC98 (Orlando) 200 Mbps
  • SC99 (Portland) 1.3 Gbps
  • SC2000 (Dallas) 10 Gbps
  • SC2001 (Denver) 15 Gbps
  • SC2002 (Baltimore) Nx10-Gbps λs?
  • Increasing focus on BW utilization

12
(No Transcript)
13
Abilene SCxy
  • Escalating bandwidth
  • SC99 Portland OC-12c SONET (622 Mbps)
  • SC2000 Dallas OC-48c SONET (2.5 Gbps)
  • SC2001 Denver 2xOC-48c SONET (5 Gbps)
  • SCxy transit connectivity offered to domestic and
    international R&E nets
  • Backbone MTU raised to 9K bytes
  • Traffic engineering for SC2001
  • End-to-end performance: GigaTCP testing (see the
    sketch below)
  • SC2002 Baltimore 10-Gbps λ (planned)
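
To see why the 9K MTU and end-to-end testing matter, here is a minimal
back-of-the-envelope sketch in Python. The OC-48c payload rate and the
70 ms coast-to-coast RTT are illustrative assumptions, not measured
SCinet values.

# Why jumbo frames and large windows matter for wide-area TCP.
link_gbps = 2.4   # assumed usable OC-48c payload rate (illustrative)
rtt_s = 0.070     # assumed coast-to-coast round-trip time (illustrative)

# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
bdp_bytes = link_gbps * 1e9 / 8 * rtt_s
print(f"Required TCP window: {bdp_bytes / 1e6:.1f} MB")  # ~21 MB

# Packet rate at 1500-byte vs 9000-byte MTU: jumbo frames cut the
# per-packet work (interrupts, header processing) by roughly 6x.
for mtu in (1500, 9000):
    pps = link_gbps * 1e9 / 8 / mtu
    print(f"MTU {mtu}: {pps:,.0f} packets/s")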

14
Abilene traffic engineering SC2001
15
Fiber (Jeff Mauth)
  • 60 miles of fiber deployed in the exhibit hall
  • 0.3 FTE-year of effort (cross-checked in the
    sketch below)
  • 1.5 fiber-miles/hour pull rate
  • 120 fiber drops (90 multimode)
  • Pirelli 24-strand MM fiber used since '98
  • Deployment custom engineered to the venue
    selected for SCxy
  • ST fiber connectors standard
  • Will review choice for SC2002
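
A rough Python cross-check of the figures above. The 2,000 hours per
FTE-year is a common convention assumed here, not a number from the
slides.

# Back-of-the-envelope check on the fiber deployment numbers.
fiber_miles = 60
pull_rate_miles_per_hr = 1.5
fte_year_hours = 2000   # assumed hours in one FTE-year

pull_hours = fiber_miles / pull_rate_miles_per_hr   # 40 crew-hours
total_hours = 0.3 * fte_year_hours                  # 600 hours

print(f"Pulling alone: {pull_hours:.0f} crew-hours")
print(f"Total effort:  {total_hours:.0f} hours")
print(f"Implied overhead (termination, testing, planning): "
      f"{total_hours - pull_hours:.0f} hours")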

16
Fiber timeline SC2001
  • 5 scouting trips
  • Tue 11/6 9p: gained access to 2/3 of hall
  • Thu 11/8 6p: gained access to rest of hall
  • Fri 11/9 a.m.: fiber done
  • Sun 11/11 a.m.: equipment patching
  • Sun 11/11 p.m.: booth drops start
    (drop teams used wireless HP Jornadas)
  • Mon 11/12 noon: drops complete
  • Mon 11/12 7p: gala opening (D-Day)
  • DANGER: carpet layers (20-30 fiber cuts this year)

17
Wireless (Tom Hutton)
  • Significant 802.11b effort this year
  • 35 Cisco wireless access points (13 in exhibit
    hall)
  • One on DCC roof pointed at Embassy Suites
  • Wireless still requires a lot of "wires" work
  • 5000' of wiring in the exhibition hall
  • Several site surveys over the year
  • Totally flat LAN (3.5 Gbps switched BW)
  • Wireless really helps show set-up
  • Booth drop teams, booth connectivity prior to
    fiber
  • Clients seen: 618 peak, 246 average

18
Infrastructure (Chuck Fisher)
  • SC98
  • Core routing provided by traditional Cisco 7500
    series routers
  • First "production" use of Gigabit Ethernet (only
    1 customer drop requested)
  • Most booth service was 10Base-FL and 100-FX
    provided via Fore Power Hubs
  • Limited use of network monitoring and statistics

19
An earlier topology
20
Infrastructure trends - II
  • SC99
  • Core routing provided by Cisco GSR series routers
  • Concept of a routing core and a layer of L3
    distribution switches adopted
  • Extensive use of DWDM hardware to provide WAN
    bandwidth
  • Xnet introduced as a showcase for "bleeding edge"
    hardware

21
Infrastructure trends - III
  • SC2000
  • Core routing provided by Cisco and Juniper
  • Increased focus on network monitoring and
    statistics
  • First Xnet demonstration of 10 Gigabit Ethernet
  • Bandwidth Challenge introduced to SC

22
SCinet 2001 Network Topology
23
Infrastructure trends - IV
  • SC2001 Contributing Hardware Vendors
  • Cisco
  • Juniper
  • Marconi
  • Nortel
  • Spirent
  • Force10
  • Foundry
  • ONI
  • LuxN
  • Equivalent to a 3-5 building advanced campus
    network on major R&E backbones

24
Operations
  • Servers
  • DNS, DHCP, NTP, performance, beacons (see the
    health-check sketch below)
  • Database
  • Network monitoring
  • Help desk
  • Trouble ticket system
  • Routing support (unicast, multicast, v6)
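
For a flavor of what keeping these servers up involves, here is a
minimal, stdlib-only reachability sweep of the kind a NOC might script.
The hostnames are hypothetical placeholders, not real SCinet hosts, and
a real NTP check would speak UDP rather than TCP.

# Minimal service reachability sweep (illustrative sketch).
import socket

CHECKS = [
    ("ns1.scinet.example", 53),    # DNS (hypothetical hostname)
    ("ntp.scinet.example", 123),   # NTP is UDP in practice; TCP used here for brevity
    ("www.scinet.example", 80),    # help desk / status web server
]

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    status = "up" if tcp_reachable(host, port) else "DOWN"
    print(f"{host}:{port} {status}")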

25
Measurement and Security
26
(No Transcript)
27
Security monitoring
28
Xnet
29
TeraGrid Distributed Backplane - NCSA, ANL,
SDSC, Caltech
[Diagram: the DTF backplane (4 λs, 40 Gbps) links Los Angeles and San
Diego to Chicago, where the StarLight International Optical Peering
Point (see www.startap.net) interconnects Starlight/NW Univ, UIC, Ill
Inst of Tech, ANL, Univ of Chicago, and NCSA/UIUC (Urbana) via multiple
10 GbE over Qwest and over I-WIRE dark fiber, plus OC-48 (2.5 Gb/s)
Abilene connectivity through Indianapolis (Abilene NOC) and multiple
carrier hubs]
  • Solid lines in place and/or available by October
    2001
  • Dashed I-WIRE lines planned for summer 2002

Source: Charlie Catlett, Argonne
30
Xnet
31
Trends
  • or: what might we see in Baltimore?

32
Optical networking
  • Dense Wavelength Division Multiplexing (DWDM)
  • Current systems can support >160 10-Gbps λs (1.6
    Tbps!; see the check below)
  • Optical growth can overwhelm Moore's Law
    (routers)
  • Costs scale dramatically with distance
  • Three possible scenarios for the future
  • Enhanced IP transport (higher BW and circuit
    multiplicity)
  • Fine-grained traffic engineering
  • p2p links between campuses, HPC centers, and
    Gigapops
  • Physical e2e switched circuits (à la ATM SVCs)
  • Evolution of optical switching will be critical
  • Don't write off OEO
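
The headline capacity figure is simple arithmetic; a one-line Python
check, using the wavelength count and per-λ rate from the bullet above:

# Headline DWDM capacity: >160 wavelengths at 10 Gbps each.
wavelengths = 160
gbps_per_lambda = 10
print(f"{wavelengths} λs x {gbps_per_lambda} Gbps = "
      f"{wavelengths * gbps_per_lambda / 1000:.1f} Tbps")
# This aggregate optical capacity is what the slide contrasts with the
# slower growth of router forwarding capability.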

33
Future of Abilene
  • Extension of Qwest's original commitment to
    Abilene for another 5 years, to 10/01/2006
  • Originally set to expire in March 2003
  • Upgrade of Abilene backbone to optical transport
    capability - λs
  • 4x increase in the core backbone bandwidth
  • OC-48c SONET (2.5 Gbps) to 10-Gbps DWDM
  • Capability for flexible provisioning of 10-Gbps
    λs to support future point-to-point
    experimentation and other projects
  • Emphasis on v6 and network measurement
    capabilities

34
SC2002/Baltimore crystal ball
  • Strong local networking community
  • MAX Gigapop (University of Maryland)
  • DARPA Supernet (ISI-East, NRL)
  • Dark fiber network presences in region
  • Abilene is aiming for 10-Gbps λ connectivity
  • Increased focus on e2e performance and multicast
    reliability
  • More wireless (add 802.11a), less ATM?
  • 10 Gigabit Ethernet should be standardized
  • Optical switch in Xnet?

35
Conclusion
  • SCinet is
  • a diverse group of very committed and talented
    people and companies, working very hard under
    extreme time constraints and trying conditions to
    make both the expected and the seemingly
    impossible in SCxy networking happen for one week
    in November, and then returning to do it again
    the next year