ESnet Status and Plans, Internet2 All Hands Meeting, Sept. 28, 2004

1
ESnet Status and Plans
Internet2 All Hands Meeting, Sept. 28, 2004
  • William E. Johnston, ESnet Dept. Head and Senior
    Scientist
  • R. P. Singh, Federal Project Manager
  • Michael S. Collins, Stan Kluz, Joseph Burrescia,
    and James V. Gagliardi, ESnet Leads
  • Gizella Kapus, Resource Manager
  • and the ESnet Team
  • Lawrence Berkeley National Laboratory

2
ESnet Provides Full Internet Service to DOE
Facilities and Collaborators with High-Speed
Access to Major Science Collaborators
[Map: ESnet mid-2004. Core hubs (SEA HUB, SNV HUB) and peering/exchange points (NY-NAP, MAE-E, MAE-W, PAIX-E, PAIX-W, FIX-W, Equinix) on the Qwest ATM core, with links to Japan.]
42 end user sites:
  • Office of Science sponsored (22)
  • NNSA sponsored (12)
  • Joint sponsored (3)
  • Other sponsored (NSF LIGO, NOAA)
  • Laboratory sponsored (6)
Link types: International (high speed), OC192 (10 Gb/s optical), OC48 (2.5 Gb/s optical), Gigabit Ethernet (1 Gb/s), OC12 ATM (622 Mb/s), OC12, OC3 (155 Mb/s), T3 (45 Mb/s), T1-T3, T1 (1.5 Mb/s)
ESnet core: Packet over SONET optical ring and hubs, with high-speed peering points.
3
ESnet's Peering Infrastructure Connects the DOE
Community With its Collaborators
[Map: ESnet peering points (SEA HUB/PNW-GPOP, NYC HUBS, SNV HUB, PAIX-W, MAE-W, FIX-W, MAX GPOP, ATL HUB) and domestic peers including Abilene, CalREN2, CENIC, SDSC, TECHnet, LANL, LBNL, and the distributed 6TAP; per-point peer counts range from 1 to 39. International peers include: CA*net4, CERN, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC and TANet2), Australia, SingAREN, GEANT (Germany, France, Italy, UK, etc.), SInet (Japan), KEK, Russia (BINP), KDDI (Japan), France.]
ESnet provides access to all of the Internet by
managing the full complement of global Internet
routes (about 150,000) at 10 general/commercial
peering points, plus high-speed peerings with
Abilene and the international R&E networks. This
is a lot of work and is very visible, but it
provides full Internet access for DOE.
Legend: ESnet peering (connections to other networks): university, international, commercial.
4
Major ESnet Changes in FY04
  • Dramatic increase in International traffic as
    major large-scale science experiments start to
    ramp up
  • CERNlink connected at 10 Gb/s
  • GEANT (the main European R&E network, analogous to Abilene and ESnet) connected at 2.5 Gb/s
  • Abilene-ESnet high-speed cross-connects (3 @ 2.5 Gb/s and 1 @ 10 Gb/s)
  • In order to meet the Office of Science program
    needs, a new architectural approach has been
    developed
  • Science Data Network (a second core network for
    high-volume traffic)
  • Metropolitan Area Networks (MANs)

5
Predictive Drivers for Change
August 13-15, 2002
Organized by the Office of Science: Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White
Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett
  • Focused on science requirements that drive
  • Advanced Network Infrastructure
  • Middleware Research
  • Network Research
  • Network Governance Model
  • The requirements for DOE science were developed
    by the OSC science community representing major
    DOE science disciplines
  • Climate
  • Spallation Neutron Source
  • Macromolecular Crystallography
  • High Energy Physics
  • Magnetic Fusion Energy Sciences
  • Chemical Sciences
  • Bioinformatics

Available at www.es.net/research
6
Evolving Quantitative Science Requirements for
Networks
7
Observed Drivers for Change
ESnet Inter-Sector Traffic Summary, Jan 2003 /
Feb 2004: 1.7X overall traffic increase, 1.9X OSC
increase. (The international traffic is
increasing due to BaBar at SLAC and the LHC tier
1 centers at FNAL and BNL.)
[Diagram: traffic flows between DOE sites, ESnet, commercial peers, R&E (mostly universities), international (almost entirely R&E sites), and the peering points. Each pair of numbers is the percent of total ingress/egress traffic in Jan 2003 / Feb 2004; green marks traffic coming into ESnet, blue marks traffic leaving ESnet.]
DOE is a net supplier of data, because DOE
facilities are used by universities and
commercial entities as well as by DOE researchers.
Note that more than 90% of the ESnet traffic is
OSC traffic. ESnet Appropriate Use Policy
(AUP): all ESnet traffic must originate and/or
terminate on an ESnet site (no transit traffic
is allowed).
8
ESnet Top 20 Data Flows, 24 hr. avg., 2004-04-20
A small number of science users account for a
significant fraction of all ESnet traffic
SLAC (US) → IN2P3 (FR)
Fermilab (US) → CERN
SLAC (US) → INFN Padua (IT)
Fermilab (US) → U. Chicago (US)
U. Toronto (CA) → Fermilab (US)
Helmholtz-Karlsruhe (DE) → SLAC (US)
CEBAF (US) → IN2P3 (FR)
INFN Padua (IT) → SLAC (US)
Fermilab (US) → JANET (UK)
SLAC (US) → JANET (UK)
DOE Lab → DOE Lab
Argonne (US) → Level3 (US)
DOE Lab → DOE Lab
Fermilab (US) → INFN Padua (IT)
Argonne → SURFnet (NL)
IN2P3 (FR) → SLAC (US)
(Chart scale marker: 1 Terabyte/day.)
  • Since BaBar production started, the top 20 ESnet
    flows have consistently accounted for > 50% of
    ESnet's monthly total traffic (130 of 250
    TBy/mo)
  • As LHC data starts to move, this will increase a
    lot (200-2000 times)
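The "> 50%" claim follows directly from the slide's own figures; a quick arithmetic sketch (all inputs are the slide's 130 and 250 TBy/mo and its 200-2000x LHC projection):

```python
# Back-of-the-envelope check of the slide's numbers.
top20_tby_per_month = 130.0   # top 20 flows, TBy/mo (from the slide)
total_tby_per_month = 250.0   # total ESnet traffic, TBy/mo (from the slide)

top20_share = top20_tby_per_month / total_tby_per_month
print(f"Top 20 flows carry {top20_share:.0%} of monthly traffic")  # 52%

# The slide projects a 200-2000x increase as LHC data starts to move.
low, high = 200 * top20_tby_per_month, 2000 * top20_tby_per_month
print(f"LHC-era top-flow traffic: {low / 1000:.0f}-{high / 1000:.0f} PBy/mo")  # 26-260 PBy/mo
```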

9
ESnet Top 10 Data Flows, 1 week avg., 2004-07-01
  • The traffic is not transient: daily and weekly
    averages are about the same.

SLAC (US) → INFN Padua (IT): 5.9 Terabytes
SLAC (US) → IN2P3 (FR): 5.3 Terabytes
FNAL (US) → IN2P3 (FR): 2.2 Terabytes
CERN → FNAL (US): 1.3 Terabytes
FNAL (US) → U. Nijmegen (NL): 1.0 Terabytes
SLAC (US) → Helmholtz-Karlsruhe (DE): 0.9 Terabytes
U. Toronto (CA) → Fermilab (US): 0.9 Terabytes
FNAL (US) → Helmholtz-Karlsruhe (DE): 0.6 Terabytes
U. Wisc. (US) → FNAL (US): 0.6 Terabytes
FNAL (US) → SDSC (US): 0.6 Terabytes
10
ESnet and Abilene
  • Abilene and ESnet together provide most of the
    nation's transit networking for science
  • Abilene provides national transit networking for
    most of the US universities by interconnecting
    the regional networks (mostly via the GigaPoPs)
  • ESnet connects the DOE Labs
  • ESnet and Abilene have recently established
    high-speed interconnects and cross-network
    routing
  • Goal is that DOE Lab ↔ Univ. connectivity should
    be as good as Lab ↔ Lab and Univ. ↔ Univ.
  • Constant monitoring is the key

11
Monitoring DOE Lab ↔ University Connectivity
  • Current monitor infrastructure (red) and target
    infrastructure
  • Uniform distribution around ESnet and around
    Abilene
  • Need to set up similar infrastructure with GEANT

[Map: ESnet and Abilene backbones with monitor locations. Hubs: SEA, CHI, NYC, DEN, SNV, DC, KC, IND, LA, ATL, ALB, SDG, ELP, HOU; international connections to AsiaPac, Europe, CERN/Europe, and Japan. DOE Labs with monitors: LBNL, FNAL, BNL, ORNL. Universities with monitors: OSU, NCSU, SDSC. Legend: DOE Labs with monitors, universities with monitors, network hubs, high-speed cross-connects (ESnet ↔ Internet2/Abilene), initial site monitors.]
12
Initial Monitoring is with OWAMP One-Way Delay
Tests
  • These measurements are very sensitive: e.g., an
    NCSU metro DWDM reroute of about 350 microseconds
    is easily visible

[Plot: one-way delay in ms (y-axis from 41.5 to 42.0) vs. time, showing a clear step at the fiber re-route.]
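A 350 microsecond delay shift corresponds to a substantial change in fiber path length. A rough conversion, assuming a group index of about 1.47 for standard single-mode fiber (the index is an assumed typical value, not a figure from the presentation):

```python
# Convert a one-way delay change into an approximate fiber path-length
# change. Group index ~1.47 is assumed, not from the slides.
C_VACUUM_KM_S = 299_792.458      # speed of light in vacuum, km/s
GROUP_INDEX = 1.47               # assumed for standard single-mode fiber

delay_change_s = 350e-6          # 350 microseconds, from the OWAMP plot
v_fiber = C_VACUUM_KM_S / GROUP_INDEX        # ~204,000 km/s in fiber
path_change_km = delay_change_s * v_fiber
print(f"~{path_change_km:.0f} km change in one-way fiber path")  # ~71 km
```

So a metro reroute of a few tens of kilometers is plainly visible in the one-way delay data, which is why OWAMP-style measurement is sensitive enough for this kind of monitoring.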
13
Initial Monitor Results (http://measurement.es.net)
14
ESnet, GEANT, and CERNlink
  • GEANT plays a role in Europe similar to Abilene
    and ESnet in the US: it interconnects the
    European national research and education
    networks, to which the European R&E sites connect
  • GEANT currently carries essentially all ESnet
    international traffic (LHC use of CERNlink to DOE
    labs is still ramping up)
  • GN2 is the second phase of the GEANT project
  • The architecture of GN2 is remarkably similar to
    the new ESnet Science Data Network IP core
    network model
  • CERNlink will be the main CERN-to-US LHC data
    path
  • Both US LHC tier 1 centers are on ESnet (FNAL
    and BNL)
  • ESnet connects directly to CERNlink at 10 Gb/s
  • The new ESnet architecture (Science Data Network)
    will accommodate the anticipated 40 Gb/s from LHC
    to the US
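To put the anticipated 40 Gb/s in perspective, a simple unit conversion shows what that rate means in sustained daily volume (decimal units, 1 Gb = 10^9 bits):

```python
# Sustained data volume of a fully utilized 40 Gb/s path.
rate_gbps = 40
bytes_per_day = rate_gbps * 1e9 / 8 * 86_400   # bits/s -> bytes/s -> bytes/day
print(f"{bytes_per_day / 1e12:.0f} TB/day")    # 432 TB/day
```

That is hundreds of times the roughly 1 TB/day of the largest 2004 flows, consistent with the 200-2000x growth projection earlier in the deck.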

15
GEANT and CERNlink
  • A recent meeting between ESnet and GEANT produced
    proposals in a number of areas designed to ensure
    robust and reliable science data networking
    between ESnet and Europe
  • A US-EU joint engineering task force (ITechs)
    should be formed to coordinate US-EU science data
    networking
  • Will include, e.g., ESnet, Abilene, GEANT, CERN
  • Will develop joint operational procedures
  • ESnet will collaborate in GEANT development
    activities to ensure some level of compatibility
  • Bandwidth-on-demand (dynamic circuit setup)
  • Performance measurement and authentication
  • End-to-end QoS and performance enhancement
  • Security
  • 10 Gb/s connectivity between GEANT and ESnet will
    be established by mid-2005, and a backup 2.5 Gb/s
    link will be added

16
New ESnet Architecture Needed to Accommodate OSC
  • The essential DOE Office of Science requirements
    cannot be met with the current telecom-provided
    hub-and-spoke architecture of ESnet

[Diagram: ESnet core ring with hubs at Chicago (CHI), New York (AOA), Washington, DC (DC), Atlanta (ATL), El Paso (ELP), and Sunnyvale (SNV); DOE sites attach via point-to-point tail circuits.]
  • The core ring has good capacity and resiliency
    against single point failures, but the
    point-to-point tail circuits are neither reliable
    nor scalable to the required bandwidth

17
A New ESnet Architecture
  • Goals
  • full redundant connectivity for every site
  • high-speed access for every site (at least 10
    Gb/s)
  • Three part strategy
  • 1) MAN rings provide dual site connectivity and
    much higher site-to-core bandwidth
  • 2) A Science Data Network core for
  • multiply connected MAN rings for protection
    against hub failure
  • expanded capacity for science data
  • a platform for provisioned, guaranteed bandwidth
    circuits
  • alternate path for production IP traffic
  • carrier circuit and fiber access neutral hubs
  • 3) An IP core (e.g., the current ESnet core) for
    high reliability
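The "full redundant connectivity for every site" goal amounts to requiring at least two disjoint paths from each site into the core. A toy sketch of how to verify that on a topology model, using unit-capacity max-flow to count edge-disjoint paths (the topology below is hypothetical, not ESnet's actual one; the algorithm is standard, not from the slides):

```python
from collections import defaultdict, deque

def edge_disjoint_paths(edges, src, dst):
    """Count edge-disjoint paths src->dst via unit-capacity max-flow (BFS)."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:                 # undirected links
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {src: None}           # BFS for an augmenting path
        q = deque([src])
        while q and dst not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if dst not in parent:
            return flow
        v = dst
        while parent[v] is not None:   # augment along the found path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Hypothetical MAN ring: a site reaching the core through two hubs.
links = [("site", "hub_east"), ("site", "hub_west"),
         ("hub_east", "core"), ("hub_west", "core")]
print(edge_disjoint_paths(links, "site", "core"))  # 2: survives any single cut
```

A hub-and-spoke tail circuit yields a count of 1, which is exactly the single point of failure the MAN rings are meant to remove.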

18
A New ESnet Architecture: Science Data Network + IP Core
[Diagram: ESnet Science Data Network (2nd core) and ESnet IP core, joined by Metropolitan Area Rings, with hubs at Chicago (CHI), New York (AOA), Washington, DC (DC), Sunnyvale (SNV), Atlanta (ATL), and El Paso (ELP); international connections to CERN, Asia-Pacific, and GEANT (Europe). Legend: existing hubs, new hubs, possible new hubs, DOE/OSC Labs.]
19
ESnet Long-Term Architecture
[Diagram: a typical site connects via 10 Gigabit Ethernet switch(es) (ESnet management domain) and a site router (site management domain); ESnet management and monitoring equipment; core router (ESnet management domain); one or more independent fiber pairs to the ESnet SDN core ring and the ESnet IP core ring; ESnet Metropolitan Area Networks; ESnet hub (typ.); optical channel (λ) equipment (carrier management domain). Services carried: production IP; provisioned circuits carried over optical channels/lambdas; provisioned circuits tunneled through the IP core via MPLS.]
20
ESnet New Architecture, Part 1: MANs
  • The MAN architecture is designed to provide
  • At least one redundant path from sites to ESnet
    hub
  • Scalable bandwidth options from sites to ESnet
    hub
  • The first step in point-to-point provisioned
    circuits
  • With endpoint authentication, these are private
    and intrusion resistant circuits, so they should
    be able to bypass site firewalls if the endpoints
    trust each other
  • End-to-end provisioning will initially be
    provided by a combination of Ethernet switch
    management of λ paths in the MAN and MPLS paths
    in the ESnet POS backbone (OSCARS project)
  • Provisioning will initially be provided by manual
    circuit configuration, on-demand in the future
    (OSCARS)
  • Cost savings over two or three years, once future
    site needs for increased bandwidth are included
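On-demand provisioning ultimately reduces to admission control: a circuit request is accepted only if every link on its path has spare capacity for the whole reservation window. A minimal sketch of that check (the data model, link names, and 10 Gb/s capacity are hypothetical illustrations, not the actual OSCARS interface):

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    path: tuple        # ordered link names, e.g. ("FNAL-CHI", "CHI-AOA")
    gbps: float        # reserved bandwidth
    start: int         # reservation window, in hours (simplified)
    end: int

LINK_CAPACITY_GBPS = 10.0   # assumed per-link capacity

def admits(existing, req):
    """Accept req only if no link on its path exceeds capacity in its window."""
    for link in req.path:
        for t in range(req.start, req.end):
            load = req.gbps + sum(
                r.gbps for r in existing
                if link in r.path and r.start <= t < r.end)
            if load > LINK_CAPACITY_GBPS:
                return False
    return True

booked = [Reservation(("FNAL-CHI", "CHI-AOA"), 7.0, 0, 4)]
print(admits(booked, Reservation(("FNAL-CHI",), 5.0, 0, 2)))  # False: 12 > 10
print(admits(booked, Reservation(("FNAL-CHI",), 5.0, 4, 8)))  # True: disjoint windows
```

The manual configuration mentioned above performs this same reasoning by hand; OSCARS automates it per request.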

21
ESnet MAN Architecture, logical (Chicago, e.g.)
[Diagram: MAN ring connecting ANL and FNAL (each with site gateway router, site equipment, and site LAN) to the Qwest hub, StarLight, the ESnet IP core, the ESnet SDN core, international peerings, and CERN (DOE funded link). Services on the ring: ESnet managed λ / circuit services; ESnet managed λ / circuit services tunneled through the IP backbone; ESnet management and monitoring; ESnet production IP service.]
22
ESnet Metropolitan Area Network rings (MANs)
  • In the near term, MAN rings will be built in the
    San Francisco and Chicago areas
  • In the long term there will likely be MAN rings on
    Long Island, in the Newport News, VA area, in
    northern New Mexico, in Idaho-Wyoming, etc.
  • San Francisco Bay Area MAN ring progress
  • Feasibility has been demonstrated with an
    engineering study from CENIC
  • A competitive bid and best-value source
    selection methodology will select the ring
    provider within two months

23
SF Bay Area MAN
[Map: SF Bay Area MAN ring connecting the Joint Genome Institute, LBNL, NERSC, LLNL, SNLL, and SLAC, with the Qwest/ESnet hub and the Level 3 hub; links to the ESnet Science Data Network core (toward Seattle and Chicago), NLR / UltraScienceNet, and the ESnet IP core ring (toward LA, San Diego, and El Paso).]
24
Proposed Chicago MAN
ESnet CHI-HUB: Qwest, NBC Building, 455 N Cityfront Plaza Dr, Chicago, IL 60611
StarLight: 910 N Lake Shore Dr, Chicago, IL 60611
FNAL: Feynman Computing Center, Batavia, IL 60510
ANL: 9700 S Cass Ave, Lemont, IL 60439
25
ESnet New Architecture, Part 2: Science Data Network
  • SDN (second core) rationale
  • Add major points of presence in carrier-circuit-
    and fiber-access-neutral facilities at Sunnyvale,
    Seattle, San Diego, and Chicago
  • Enable UltraSciNet cross-connect with ESnet
  • Provide access to NLR and other fiber-based
    networks
  • Allow for more competition in acquiring circuits
  • Initial steps toward Science Data Network (SDN)
  • Provide a second, independent path between major
    northern route hubs
  • Alternate route for ESnet core IP traffic
  • Provide for high-speed paths on the West Coast to
    reach PNNL, GA, and AsiaPac peering
  • Increase ESnet connectivity to other R&E networks

26
ESnet New Architecture Goal FY05: Science Data Network Phase 1 and SF BA MAN
[Map: FY05 target topology. CERN connected at 2x10 Gb/s; international links to AsiaPac, Europe, and Japan. The new SDN core joins the existing ESnet core at hubs SEA, CHI, SNV, NYC, DEN, DC, ALB, SDG, ATL, and ELP, with MANs attaching major sites. Legend: current ESnet hubs, new ESnet hubs, high-speed cross-connects with Internet2/Abilene, major DOE Office of Science sites, ESnet IP core (Qwest), UltraSciNet, ESnet SDN core, 2.5 Gb/s and 10 Gb/s links, lab-supplied links, future phases, major international links.]
27
ESnet New Architecture Goal FY06: Science Data Network Phase 2 and Chicago MAN
[Map: FY06 target topology. CERN connected at 3x10 Gb/s; international links to AsiaPac, Europe, and Japan. SDN and IP cores at hubs SEA, CHI, SNV, NYC, DEN, DC, ALB, SDG, ATL, and ELP, with MANs attaching major sites. Legend: current ESnet hubs, new ESnet hubs, high-speed cross-connects with Internet2/Abilene, major DOE Office of Science sites, ESnet IP core (Qwest), UltraSciNet, ESnet SDN core, 2.5 Gb/s and 10 Gb/s links, lab-supplied links, future phases, major international links.]
28
ESnet Beyond FY07
[Map: projected topology beyond FY07. Hubs: SEA, CHI, SNV, NYC, DEN, DC, ALB, ATL, SDG, ELP; international links to AsiaPac, CERN, Europe, and Japan; MANs at major sites. Legend: ESnet IP core (Qwest) hubs, ESnet SDN core hubs, high-speed cross-connects with Internet2/Abilene, major DOE Office of Science sites, production IP ESnet core at 10 Gb/s / 30 Gb/s / 40 Gb/s, high-impact science core, 2.5 Gb/s and 10 Gb/s links, lab-supplied links, future phases, major international links.]