1
ESnet Joint Techs, Feb. 2005
  • William E. Johnston, ESnet Dept. Head and Senior
    Scientist
  • R. P. Singh, Federal Project Manager
  • Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads
  • Gizella Kapus, Resource Manager
  • and the ESnet Team
  • Lawrence Berkeley National Laboratory

2
ESnet's Mission
  • Support the large-scale, collaborative science of DOE's Office of Science
  • Provide high reliability networking to support
    the operational traffic of the DOE Labs
  • Provide network services to other DOE facilities
  • Provide leading-edge network and Grid services to
    support collaboration
  • ESnet is a component of the Office of Science infrastructure critical to the success of its research programs (the program is funded through the Office of Advanced Scientific Computing Research / MICS and is managed and operated by ESnet staff at LBNL)

3
ESnet Physical Network, mid-2005: High-Speed Interconnection of DOE Facilities and Major Science Collaborators

(Network map. 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), Joint sponsored (3), Laboratory sponsored (6), Other sponsored (NSF LIGO, NOAA). Link types: international (high speed), 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (> 10 Gb/s), OC12 ATM (622 Mb/s), OC12 / GigEthernet, OC3 (155 Mb/s), and 45 Mb/s and less. The ESnet IP core is a packet-over-SONET optical ring; the map marks IP core hubs, SDN core hubs, peering points, and high-speed peering points, with hubs at SEA, CHI-SL, SNV, and SNV SDN and exchange points such as QWEST ATM, MAE-E, Equinix, and PAIX-PA.)
4
ESnet Logical Network: Peering and Routing Infrastructure

(Peering diagram. ESnet peering points (connections to other networks) include university and international R&E peers such as GEANT (Germany, France, Italy, UK, etc.), SInet (Japan), KEK, Russia (BINP), Australia, CAnet4, Taiwan (TANet2), Singaren, CERN, France, GLORIAD, Kreonet2, MREN, Netherlands, StarTap, Taiwan (ASCC), and TANet2, as well as commercial peers. The peerings are distributed across hubs and exchange points including PNW-GPOP, the SEA hub, the distributed 6TAP, Abilene, CalREN2, the NYC hubs, LBNL, MAX GPOP, PAIX-W, MAE-W, FIX-W, NGIX, TECHnet, LANL, CENIC/SDSC, and the ATL hub.)
  • ESnet supports collaboration by providing full Internet access
  • manages the full complement of global Internet routes (about 150,000 IPv4 routes from 180 peers) at 40 general/commercial peering points
  • high-speed peerings with Abilene and the international R&E networks
  • This is a lot of work and is very visible, but it provides full Internet access for DOE.

5
Drivers for the Evolution of ESnet
August 2002 workshop organized by the Office of Science. Mary Anne Scott (chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, and Vicky White. Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett
  • The network and middleware requirements to
    support DOE science were developed by the OSC
    science community representing major DOE science
    disciplines
  • Climate simulation
  • Spallation Neutron Source facility
  • Macromolecular Crystallography
  • High Energy Physics experiments
  • Magnetic Fusion Energy Sciences
  • Chemical Sciences
  • Bioinformatics
  • (Nuclear Physics)
  • The network is essential for
  • long-term (final stage) data analysis
  • control loop data analysis (influence an
    experiment in progress)
  • distributed, multidisciplinary simulation

Available at www.es.net/research
6
Evolving Quantitative Science Requirements for
Networks
Science Areas | End-to-End Throughput Today | End-to-End Throughput in 5 Years | End-to-End Throughput in 5-10 Years | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | high bulk throughput
Climate (data and computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | high bulk throughput
SNS NanoScience | not yet started | 1 Gb/s | 1000 Gb/s + QoS for control channel | remote control and time-critical throughput
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB per 20 sec. burst) | N x 1000 Gb/s | time-critical throughput
Astrophysics | 0.013 Gb/s (1 TBy/week) | N x N multicast | 1000 Gb/s | computational steering and collaborations
Genomics (data and computation) | 0.091 Gb/s (1 TBy/day) | 100s of users | 1000 Gb/s + QoS for control channel | high throughput and steering
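The Gb/s entries relate to the parenthetical data volumes by simple unit conversion. The short check below is illustrative only and is not part of the original slide; the helper name is ours.

# Illustrative unit-conversion check for the table above (not from the original slide).
def volume_to_gbps(terabytes, seconds):
    """Average rate needed to move `terabytes` of data in `seconds`."""
    return terabytes * 8e12 / seconds / 1e9   # TBy -> bits, then bits/s -> Gb/s

print(f"1 TBy/week   -> {volume_to_gbps(1, 7 * 86400):.3f} Gb/s")   # ~0.013 (Astrophysics today)
print(f"1 TBy/day    -> {volume_to_gbps(1, 86400):.3f} Gb/s")       # ~0.09  (Genomics today)
print(f"500 MB/20 s  -> {volume_to_gbps(0.0005, 20):.3f} Gb/s")     # ~0.2   (Fusion 5-year burst)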
7
ESnet is Currently Transporting About 350 Terabytes/mo.

(Chart: ESnet monthly accepted traffic, Jan. 1990 - Dec. 2004, in TBytes/month.)
Annual growth over the past five years has been about 2.0x.
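Taking the two figures on this slide at face value (about 350 TBy/month now and roughly 2.0x growth per year), a back-of-the-envelope projection looks like the sketch below; it is purely illustrative arithmetic, not an ESnet forecast.

# Illustrative compound-growth projection from ~350 TBy/month at ~2.0x per year.
current_tby_per_month = 350.0   # accepted traffic, late 2004
annual_growth = 2.0

for years_ahead in range(1, 6):
    projected = current_tby_per_month * annual_growth ** years_ahead
    avg_gbps = projected * 8e12 / (30 * 86400) / 1e9   # TBy/month -> average Gb/s
    print(f"+{years_ahead} yr: ~{projected:7,.0f} TBy/month  (~{avg_gbps:5.1f} Gb/s average)")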
8
A Small Number of Science Users Account for a Significant Fraction of All ESnet Traffic

(Chart: top ESnet host-to-host flows over two months, 30-day averaged, in TBytes/month, broken out as DOE Lab - international R&E, Lab - U.S. R&E, domestic, Lab - Lab, and international traffic.)
  • Total ESnet traffic (Dec. 2004): 330 TBy
  • Top 100 host-to-host flows: 99 TBy

Note that this data does not include intra-Lab traffic. ESnet ends at the Lab border routers, so science traffic on the Lab LANs is invisible to ESnet.
9
Top Flows: ESnet Host-to-Host, 2 Mo., 30-Day Averaged

(Chart, in TBytes/month; the largest host-to-host flows are:)
Fermilab (US) → IN2P3 (FR)
SLAC (US) → INFN CNAF (IT)
SLAC (US) → RAL (UK)
Fermilab (US) → WestGrid (CA)
Fermilab (US) → WestGrid (CA)
SLAC (US) → RAL (UK)
SLAC (US) → IN2P3 (FR)
BNL (US) → IN2P3 (FR)
SLAC (US) → IN2P3 (FR)
FNAL → Karlsruhe (DE)
LIGO → Caltech
FNAL → Johns Hopkins
NERSC → NASA Ames
NERSC → NASA Ames
LLNL → NCAR
LBNL → U. Wisc.
FNAL → MIT
FNAL → SDSC
?? → LBNL
NERSC → LBNL
FNAL → MIT
NERSC → LBNL
NERSC → LBNL
NERSC → LBNL
NERSC → LBNL
NERSC → LBNL
BNL → LLNL
BNL → LLNL
BNL → LLNL
BNL → LLNL
10
ESnet Traffic
  • Since BaBar (the SLAC high energy physics experiment) went into production, the top 100 ESnet flows have consistently accounted for 30-50% of ESnet's total monthly traffic
  • As LHC (the CERN high energy physics accelerator) data starts to move, this will increase a lot (200-2000 times); see the rough arithmetic after this list
  • Both LHC tier 1 centers (the primary U.S. experiment data centers) are at DOE Labs: Fermilab and Brookhaven
  • U.S. tier 2 (experiment data analysis) centers will be at universities; when they start pulling data from the tier 1 centers, the traffic distribution will change a lot
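As a rough, illustrative reading of the 200-2000x figure (our arithmetic, not an ESnet forecast), scaling the roughly 99 TBy/month carried today by the top 100 flows gives:

# Illustrative scaling of today's large-flow traffic by the 200-2000x range above.
top_flows_tby_per_month = 99.0   # top 100 host-to-host flows, Dec. 2004

for factor in (200, 2000):
    tby_month = top_flows_tby_per_month * factor
    avg_gbps = tby_month * 8e12 / (30 * 86400) / 1e9   # TBy/month -> average Gb/s
    print(f"x{factor:>4}: ~{tby_month / 1000:5.0f} PBy/month  (~{avg_gbps:5.0f} Gb/s average)")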

11
Monitoring DOE Lab ↔ University Connectivity
  • Current monitor infrastructure (red/green) and target infrastructure
  • Uniform distribution around ESnet and around Abilene

(Map: the ESnet and Abilene backbones and their hubs, with legend entries for DOE Labs with monitors, universities with monitors, network hubs, high-speed cross connects (ESnet ↔ Internet2/Abilene), and initial site monitors. Sites shown include LBNL, FNAL, BNL, ORNL, SDSC, and OSU, with international connections to CERN, Europe, Japan, and Asia-Pacific.)
12
ESnet Evolution
  • With the current architecture, ESnet cannot address:
  • the increasing reliability requirements
  • Labs and science experiments are insisting on
    network redundancy
  • the long-term bandwidth needs
  • LHC will need dedicated 10/20/30/40 Gb/s into and
    out of FNAL and BNL
  • Specific planning drivers include HEP, climate, SNS, ITER, and SNAP, among others
  • The current core ring cannot handle the
    anticipated large science data flows at
    affordable cost
  • The current point-to-point tail circuits are
    neither reliable nor scalable to the required
    bandwidth

13
ESnet Strategy: A New Architecture
  • Goals derived from science needs
  • Fully redundant connectivity for every site
  • High-speed access to the core for every site (at
    least 20 Gb/s)
  • 100 Gbps national bandwidth by 2008
  • Three-part strategy:
  • 1) Metropolitan Area Network (MAN) rings to provide dual site connectivity and much higher site-to-core bandwidth (the sketch after this list illustrates the dual-connectivity idea)
  • 2) A Science Data Network core for
  • large, high-speed science data flows
  • multiply connecting MAN rings for protection
    against hub failure
  • a platform for provisioned, guaranteed bandwidth
    circuits
  • alternate path for production IP traffic
  • 3) A High-reliability IP core (e.g. the current
    ESnet core) to address Lab operational
    requirements
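A minimal sketch of the dual-connectivity idea behind the MAN rings, using a small hypothetical topology (the hub and site names are made up, not the actual ESnet design): each site sits on a ring that reaches two hubs, so no single link failure cuts it off from the core.

# Hypothetical ring topology (illustrative only, not the actual ESnet MAN design):
# two core hubs joined through the backbone, plus a MAN ring HUB-1 - Site-A - Site-B - HUB-2.
from collections import deque

edges = [
    ("HUB-1", "HUB-2"),    # hubs joined through the IP/SDN core
    ("HUB-1", "Site-A"),   # MAN ring segment
    ("Site-A", "Site-B"),  # MAN ring segment
    ("Site-B", "HUB-2"),   # MAN ring segment
]
sites, core = ["Site-A", "Site-B"], "HUB-1"

def reachable(src, dst, removed):
    """Breadth-first search over `edges`, ignoring the one removed link."""
    adj = {}
    for a, b in edges:
        if {a, b} == set(removed):
            continue
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# With the ring in place, every site still reaches the core after any single link failure.
for failed_link in edges:
    for site in sites:
        assert reachable(site, core, failed_link), (site, failed_link)
print("every site survives any single link failure")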

14
ESnet MAN Architecture

(Diagram: a metropolitan area ring of 2-4 x 10 Gbps channels connecting Lab sites to two core routers, one on the ESnet production IP core (with R&E and international peerings) and one on the ESnet SDN core. Switches managing multiple lambdas carry ESnet-managed lambda / circuit services, either natively or tunneled through the IP backbone, alongside the ESnet production IP service and ESnet management and monitoring. Each Lab connects its site LAN and site equipment through a site gateway router.)
15
New ESnet Strategy: Science Data Network + IP Core + MANs

(Map: the existing ESnet IP core and a second Science Data Network core linking hubs at Seattle (SEA), Sunnyvale (SNV), Chicago (CHI), New York (AOA), Washington, DC (DC), Atlanta (ATL), Albuquerque (ALB), and El Paso (ELP), with metropolitan area rings and core loops serving the primary DOE Labs, and international connections to CERN, GEANT (Europe), and Asia-Pacific. The map distinguishes existing IP core hubs, SDN hubs, new hubs, and possible new hubs.)
16
Tactics for Meeting Science Requirements, 2007/2008
  • 10 Gbps enterprise IP traffic
  • 40-60 Gbps circuit-based transport

(Map: the ESnet Science Data Network as a second core (30-50 Gbps, National Lambda Rail), the ESnet IP core (>10 Gbps ??), and metropolitan area rings, with hubs at SEA, SNV, CHI, NYC, DEN, DC, ALB, ATL, SDG, and ELP, and international connections to CERN, Europe, Japan, Australia, and Asia-Pacific. Legend: production IP ESnet core (10 Gb/s / 30 Gb/s / 40 Gb/s), high-impact science core (2.5 Gb/s / 10 Gb/s), Lab-supplied links, future phases, and major international links.)
17
ESnet Services Supporting Science Collaboration
  • In addition to the high-bandwidth network
    connectivity for DOE Labs, ESnet provides several
    other services critical for collaboration
  • That is, ESnet provides several science services: services that support the practice of science
  • Access to collaborators (peering)
  • Federated trust
  • identity authentication
  • PKI certificates
  • crypto tokens
  • Human collaboration: video, audio, and data conferencing

18
DOEGrids CA Usage Statistics
User Certificates          1386 | Total No. of Certificates              3569
Service Certificates       2168 | Total No. of Requests                  4776
Host/Other Certificates      15 | Internal PKI SSL Server Certificates     36

Report as of Jan. 11, 2005. FusionGRID CA certificates are not included here.
19
DOEGrids CA Usage - Virtual Organization Breakdown

(Chart: certificate usage broken down by virtual organization, including DOE-NSF collaborations.)
20
ESnet Collaboration Services: Production Services
  • Web-based registration and audio/data bridge
    scheduling
  • Ad-Hoc H.323 and H.320 videoconferencing
  • Streaming on the Codian MCU using QuickTime or REAL
  • Guest access to the Codian MCU via the
    worldwide Global Dialing System (GDS)
  • Over 1000 registered users worldwide

21
ESnet Collaboration Services: H.323 Video Conferencing
  • Radvision and Codian
  • 70 ports on Radvision available at 384 kbps
  • 40 ports on Codian at 2 Mbps, plus streaming (a quick capacity calculation follows this list)
  • Usage has leveled off, but an increase is expected in early 2005 as new groups join ESnet Collaboration
  • Radvision capacity will increase to 200 ports at 384 kbps by mid-2005
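A quick, illustrative capacity calculation from the port counts above, assuming every port runs at the per-port rate quoted on this slide:

# Aggregate videoconferencing bandwidth implied by the port counts above (illustrative).
radvision_ports, radvision_kbps_per_port = 70, 384
codian_ports, codian_mbps_per_port = 40, 2

print(f"Radvision: ~{radvision_ports * radvision_kbps_per_port / 1000:.1f} Mb/s aggregate")  # ~26.9 Mb/s
print(f"Codian:     {codian_ports * codian_mbps_per_port} Mb/s aggregate, plus streaming")   # 80 Mb/s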

22
Conclusions
  • ESnet is an infrastructure critical to DOE's science mission, and it serves all of DOE
  • ESnet is working to meet the networking requirements of DOE mission science through several new initiatives and a new architecture
  • ESnet today is very different from the past in both its planning and business approach and its goals