1
The Advanced Networks and Services Underpinning
Modern, Large-Scale Science: DOE's ESnet
  • William E. Johnston, ESnet Dept. Head and Senior
    Scientist
  • Lawrence Berkeley National Laboratory

2
Science Drivers
  • Modern, large-scale science is dependent on
    networks for:
  • Data sharing
  • Collaboration
  • Distributed data processing
  • Distributed simulation
  • Data management

3
Complicated Workflow, Many Sites
4
Data Management Intensive
  • Cosmic Microwave Background is another key tool
    in understanding the cosmology of the universe
  • Its study requires combined observations from
    many instruments in order to isolate the
    extremely weak signals of the CMB
  • Many different observations represent the
    material between us and the CMB
  • The ability to federate such astrophysical survey
    data is enormously important: the data are
    collected from different instruments and are
    stored and curated at many different institutions
  • Accessing and integrating the many data sets is
    done with Grid-based approaches that provide a
    uniform interface for all of the different data
    formats and locations (e.g. the National Virtual
    Observatory project)

(Julian Borrill, NERSC, LBNL)
5
Multidisciplinary Simulation
A complete approach to climate modeling
involves many interacting models and data that
are provided by different groups at different
locations
[Figure: interacting model components across three
timescales (minutes-to-hours, days-to-weeks,
years-to-centuries): chemistry (CO2, CH4, N2O,
ozone, aerosols); climate (temperature,
precipitation, radiation, humidity, wind);
exchanges of heat, moisture, momentum, CO2, CH4,
N2O, VOCs, and dust; biogeophysics and
biogeochemistry (carbon assimilation,
aerodynamics, decomposition, mineralization,
water, energy, microclimate, canopy physiology);
phenology (bud break, leaf senescence); hydrology
(intercepted water, soil water, snow, evaporation,
transpiration, snow melt, infiltration, runoff);
production and respiration (gross primary
production, plant respiration, microbial
respiration, nutrient availability); ecosystems
(species composition, ecosystem structure,
nutrient availability, water); watersheds (surface
water, subsurface water, geomorphology);
disturbance (fires, hurricanes, ice storms,
windthrows); vegetation dynamics; the hydrologic
cycle.]
(Courtesy Gordon Bonan, NCAR. Ecological
Climatology: Concepts and Applications. Cambridge
University Press, Cambridge, 2002.)
6
CERN / LHC High Energy Physics Data Provides One
of Science's Most Challenging Data Management
Problems
[Figure: the LHC tiered data-distribution model.
The CERN LHC CMS detector (15m x 15m x 22m, 12,500
tons, $700M) feeds the online system at ~1
PByte/sec; event reconstruction at the CERN Tier
0+1 center (with HPSS mass storage) and event
simulation at ~100 MBytes/sec feed Tier 1 regional
centers (FermiLab USA, plus French, German, and
Italian regional centers) over 2.5 Gbits/sec
links; Tier 2 analysis centers connect at 0.6-2.5
Gbps, with Tier 3 institutes (~0.25 TIPS) below
them.]
  • 2000 physicists in 31 countries are involved in
    this 20-year experiment in which DOE is a major
    player.
  • Grid infrastructure spread over the US and Europe
    coordinates the data analysis

[Tier 4: institutes and workstations connect to
physics data caches at 100 - 1000 Mbits/sec.]
(Courtesy Harvey Newman, CalTech)
7
Predictive Drivers for the Evolution of ESnet
August 2002 Workshop organized by the Office of
Science. Mary Anne Scott, chair; Dave Bader, Steve
Eckstrand, Marvin Frazier, Dale Koelling, Vicky
White. Workshop panel chairs: Ray Bair,
Deb Agarwal, Bill Johnston, Mike Wilde,
Rick Stevens, Ian Foster, Dennis Gannon,
Linda Winkler, Brian Tierney, Sandy Merola, and
Charlie Catlett
  • The network and middleware requirements to
    support DOE science were developed by the OSC
    science community representing major DOE science
    disciplines
  • Climate simulation
  • Spallation Neutron Source facility
  • Macromolecular Crystallography
  • High Energy Physics experiments
  • Magnetic Fusion Energy Sciences
  • Chemical Sciences
  • Bioinformatics
  • The network is essential for
  • long term (final stage) data analysis
  • control loop data analysis (influence an
    experiment in progress)
  • distributed, multidisciplinary simulation

Available at www.es.net/research
8
The Analysis was Driven by the Evolving Process
of Science
9
Evolving Quantitative Science Requirements for
Networks
10
Interlude How Do IP Networks Work?
  • Accessing a service, Grid or otherwise, such as a
    Web server, FTP server, etc., from a client
    computer and client application (e.g. a Web
    browser) involves the steps below (sketched in
    code after this list):
  • Target host names
  • Host addresses
  • Service identification
  • Routing
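To make those steps concrete, here is a minimal
sketch using only the Python standard library (the
host name and port are illustrative, not a
specific ESnet service):

    import socket

    host = "www.es.net"   # target host name (illustrative)
    port = 80             # service identification: port 80 = HTTP

    # DNS resolves the host name to one or more host addresses
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        print("resolved address:", sockaddr)

    # Open a TCP connection to the service; hop-by-hop routing through
    # gateway, border, core, and peering routers is handled entirely
    # by the network layer.
    with socket.create_connection((host, port), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        print(conn.recv(120).decode(errors="replace"))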

11
How Do IP Networks Work?
  • core routers
  • focus on high-speed packet forwarding
  • peering routers
  • exchange reachability information ("routes")
  • implement/enforce routing policy for each
    provider
  • provide cyberdefense
  • border/gateway routers
  • implement separate site and network provider
    policy (including site firewall policy)

[Figure: the path of a packet from a site (e.g.
LBNL) through its gateway and border routers,
across the core routers of ESnet (a transit
network), and out through peering routers to other
providers (e.g. a big ISP such as SprintLink, or
Google, Inc.), with DNS supporting the name
lookup.]
12
ESnet Core is a High-Speed Optical Network
[Figure: an ESnet site's LAN and site IP router
sit at the site/ESnet network policy demarcation
(DMZ); a tail circuit (local loop) connects the
site to an ESnet IP router at an ESnet hub on the
10GE ESnet core, an optical fiber ring.]
  • Wave division multiplexing
  • today typically 64 x 10 Gb/s optical channels per
    fiber
  • channels (referred to as "lambdas") are usually
    used in bi-directional pairs
  • Lambda channels are converted to electrical
    channels
  • usually SONET data framing or Ethernet data
    framing
  • can be clear digital channels (no framing, e.g.
    for digital HDTV)

A ring topology network is inherently reliable:
all single point failures are mitigated by
routing traffic in the other direction around the
ring.
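A toy model of that failover property (not ESnet
code; the hub names are taken from the core ring
shown on later slides):

    # Between any two hubs on a ring there are two link-disjoint paths,
    # one in each direction, so any single failure leaves a route intact.
    ring = ["SNV", "CHI", "AOA", "DC", "ATL", "ELP"]

    def two_paths(src, dst):
        """Return the clockwise and counter-clockwise paths from src to dst."""
        rot = ring[ring.index(src):] + ring[:ring.index(src)]
        j = rot.index(dst)
        return rot[:j + 1], [src] + rot[:j:-1] + [dst]

    def uses_link(path, link):
        return frozenset(link) in {frozenset(hop) for hop in zip(path, path[1:])}

    cw, ccw = two_paths("SNV", "DC")
    failed_span = ("CHI", "AOA")            # any single fiber cut on the ring
    survivor = ccw if uses_link(cw, failed_span) else cw
    print("traffic rerouted via:", " -> ".join(survivor))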
13
ESnet's Role in DOE
  • Mission
  • Provide interoperable, effective, reliable, and
    high-performance communications infrastructure
    and leading-edge network and Grid services that
    support missions of the Department of Energy,
    especially the science mission of the Office of
    Science
  • Vision
  • Provide seamless and ubiquitous access to DOE
    facilities, data, and colleagues by enabling
    shared collaborative information and
    computational environments
  • Role
  • A component of the Office of Science
    infrastructure critical to the success of its
    research programs (program funded through
    ASCR/MICS; managed and operated by ESnet staff at
    LBNL)

14
What Does ESnet Provide? Full Internet Service to
DOE Facilities and Collaborators with High-Speed
Access to all Major Science Collaborators
[Map: the ESnet IP backbone. The core is a Qwest
packet-over-SONET optical ring with hubs (e.g. SEA
HUB, SNV HUB); peering points and high-speed
peering points include the Chicago and NY NAPs,
MAE-E, MAE-W, PAIX-E, PAIX-W, FIX-W, and Equinix;
international peers include CAnet4, MREN,
Netherlands, Russia, StarTap, Taiwan (ASCC), and
Japan. 42 end user sites: Office of Science
sponsored (22), NNSA sponsored (12), joint
sponsored (3), other sponsored (NSF LIGO, NOAA),
laboratory sponsored (6). Link types range from T1
(1.5 Mb/s) and T3 (45 Mb/s) through OC3 (155
Mb/s), OC12 (622 Mb/s, some ATM), Gigabit Ethernet
(1 Gb/s), and OC48 (2.5 Gb/s optical) to OC192 (10
Gb/s optical).]
15
What Does ESnet Provide?
  • An architecture tailored to accommodate DOE's
    large-scale science
  • move huge amounts of data between a small number
    of sites
  • High bandwidth access to DOE's primary science
    collaborators: Research and Education
    institutions in the US, Europe, Asia Pacific, and
    elsewhere
  • Full access to the global Internet for DOE Labs
  • Comprehensive user support, including "owning"
    all trouble tickets involving ESnet users
    (including problems at the far end of an ESnet
    connection) until they are resolved; 24x7
    coverage
  • Grid middleware and collaboration services
    supporting collaborative science
  • trust, persistence, and science-oriented policy

16
What is ESnet Today?
  • ESnet builds a comprehensive IP network
    infrastructure (routing, IPv4, IP multicast,
    IPv6) based on commercial circuits
  • ESnet purchases telecommunications services
    ranging from T1 (1.5 Mb/s) to OC192 SONET (10
    Gb/s) and uses these to connect core routers and
    sites to form the ESnet IP network
  • ESnet purchases peering access to commercial
    networks to provide full Internet connectivity
  • Essentially all of the national data traffic
    supporting US science is carried by two networks:
    ESnet and Internet2/Abilene (which plays a
    similar role for the university community)

17
ESnet is Architected and Operated to Move a Lot
of Data
ESnet is currently transporting about 300
terabytes/mo. (300,000,000 MBytes/mo.)
[Graph: ESnet monthly accepted traffic through
Sept. 2004, in TBytes/month. Annual growth over
the past five years has been about 2.0x per year.]
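As a quick check on what 2.0x annual growth
implies (this arithmetic is mine, not from the
slide), with traffic doubling every year,

    T(t) = T_0 \cdot 2^{t}, \qquad
    T(5) = 300\ \mathrm{TBy/mo} \times 2^{5}
         = 9600\ \mathrm{TBy/mo} \approx 9.6\ \mathrm{PBy/mo}

i.e. five more years of the historical trend would
put ESnet near 10 petabytes/month.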
18
ESnet is Architected and Operated to Move a Lot
of Data
ESnet Monthly Accepted Traffic Through Sept. 2004
[Graph: the same traffic data on a log10
(TBytes/month) scale; steady exponential growth of
about 2.0x annually appears as a straight line.]
19
Connecting DOE Science Labs to the R&E Community
is a Primary Goal of ESnet
ESnet Inter-Sector Traffic Summary, Jan 2003 /
Feb 2004: 1.7X overall traffic increase, 1.9X OSC
increase

[Figure: traffic flows between DOE sites, the
commercial sector, R&E networks (mostly
universities), and international networks (almost
entirely R&E sites), via ESnet and its peering
points. Percentage pairs (Jan 2003 / Feb 2004, as
shares of total ingress or egress traffic) label
the flows: 72/68, 21/14, 14/12, 25/18, 17/10,
10/13, 53/49, 9/26 (DOE collaborator traffic,
inc. data), and 4/6. Traffic coming into ESnet is
shown in green, traffic leaving ESnet in blue.]

DOE is a net supplier of data because DOE
facilities are used by universities and
commercial entities, as well as by DOE researchers

Note that more than 90% of the ESnet traffic is
OSC traffic. ESnet Appropriate Use Policy (AUP):
all ESnet traffic must originate and/or terminate
on an ESnet site (no transit traffic is allowed)
20
ESnet is Tailored to Accommodate DOE's
Large-Scale Science: A Small Number of Science
Users at Major Facilities Account for a
Significant Fraction of all ESnet Traffic
ESnet Top 20 Data Flows, 24 hr. avg., 2004-04-20
[Chart; scale marker: 1 Terabyte/day]
SLAC (US) → IN2P3 (FR)
Fermilab (US) → CERN
SLAC (US) → INFN Padua (IT)
Fermilab (US) → U. Chicago (US)
U. Toronto (CA) → Fermilab (US)
Helmholtz-Karlsruhe (DE) → SLAC (US)
CEBAF (US) → IN2P3 (FR)
INFN Padua (IT) → SLAC (US)
Fermilab (US) → JANET (UK)
SLAC (US) → JANET (UK)
Argonne (US) → Level3 (US)
Fermilab (US) → INFN Padua (IT)
Argonne → SURFnet (NL)
IN2P3 (FR) → SLAC (US)
DOE Lab → ?
DOE Lab → ?
  • Since BaBar production started, the top 20 ESnet
    flows have consistently accounted for > 50% of
    ESnet's monthly total traffic (130 of 250
    TBy/mo)
  • As LHC data starts to move, this will increase a
    lot (200-2000 times)

21
ESnet Top 10 Data Flows, 1 week avg., 2004-07-01
  • The traffic is not transient: daily and weekly
    averages are about the same.
SLAC (US) → INFN Padua (IT): 5.9 Terabytes
SLAC (US) → IN2P3 (FR): 5.3 Terabytes
FNAL (US) → IN2P3 (FR): 2.2 Terabytes
FNAL (US) → U. Nijmegen (NL): 1.0 Terabytes
SLAC (US) → Helmholtz-Karlsruhe (DE): 0.9 Terabytes
CERN → FNAL (US): 1.3 Terabytes
U. Toronto (CA) → Fermilab (US): 0.9 Terabytes
FNAL (US) → Helmholtz-Karlsruhe (DE): 0.6 Terabytes
U. Wisc. (US) → FNAL (US): 0.6 Terabytes
FNAL (US) → SDSC (US): 0.6 Terabytes
22
Top 50 Traffic Flows Monitoring, 24 hr, at 1
International Peering Point
10 flows > 100 GBy/day each
More than 50 flows > 10 GBy/day
  • These observed large science data flows show that
    the projections of the science community are
    reasonable and are being realized.

23
ESnet and Internet2/Abilene
  • Abilene and ESnet together provide most of the
    nation's transit networking for science
  • Abilene provides national transit networking for
    most of the US universities by interconnecting
    the regional networks (mostly via the GigaPoPs)
  • ESnet connects the DOE Labs
  • ESnet and Abilene have recently established four
    direct high-speed interconnects
  • Goal is that DOE Lab ↔ Univ. connectivity should
    be as good as Lab ↔ Lab and Univ. ↔ Univ.
    connectivity
  • Constant monitoring is the key
  • ESnet and Abilene have a joint engineering team
    that meets two to three times a year

24
Monitoring DOE Lab ↔ University Connectivity
  • Current monitor infrastructure and target
    infrastructure
  • Uniform distribution around ESnet and around
    Abilene
  • Need to set up similar infrastructure with GEANT

[Map: monitoring infrastructure. Initial site
monitors are deployed at DOE Labs (LBNL, FNAL,
BNL, ORNL) and universities (OSU, NCSU, SDSC);
ESnet and Abilene network hubs shown include SEA,
CHI, NYC, DEN, SNV, DC, KC, IND, LA, ATL, ALB,
SDG, ELP, and HOU, with ESnet ↔ Internet2/Abilene
high-speed cross connects and links to AsiaPac,
Japan, Europe, and CERN/Europe.]
25
ESnet, GEANT, and CERNlink
  • GEANT plays a role in Europe similar to Abilene
    and ESnet in the US
  • interconnects the European National Research and
    Education networks, to which the European R&E
    sites connect
  • GEANT currently carries essentially all ESnet
    international traffic (LHC use of CERNlink to DOE
    labs is still ramping up)
  • GN2 is the second phase of the GEANT project
  • The architecture of GN2 is remarkably similar to
    the new ESnet Science Data Network + IP core
    network model
  • CERNlink will be the main CERN/LHC data path to
    the US
  • Both US LHC tier 1 centers are DOE Labs on ESnet
    (FNAL and BNL)
  • ESnet directly connects to CERNlink at 10 Gb/s
  • The new ESnet architecture (Science Data Network)
    will accommodate the anticipated 40 Gb/s from LHC
    to the US
  • ESnet, GEANT, and CERN are forming a joint
    engineering team

26
Scalable Operation is Essential
  • R&E networks typically operate with a small staff
  • The key to everything that the network provides
    is scalability
  • How do you manage a huge infrastructure with a
    small number of people?
  • This issue dominates all others when looking at
    whether to support new services (e.g. Grid
    middleware)
  • Can the service be structured so that its
    operational aspects do not scale as a function of
    the user population?
  • If not, then it cannot be offered as a service

27
Automated, real-time monitoring of traffic levels
and operating state of some 4400 network entities
is the primary network operational and diagnosis
tool
[Figure: monitored views include network
configuration, performance, OSPF metrics (internal
routing and connectivity), SecureNet, hardware
configuration, and the IBGP mesh (WAN routing and
connectivity).]
28
Scalable Operation is Essential
  • The entire ESnet network is operated by fewer
    than 15 people

  • Core Engineering Group (5 FTE)
  • 7x24 On-Call Engineers (7 FTE)
  • 7x24 Operations Desk (2-4 FTE)
  • Science Services (middleware and collaboration
    tools) (5 FTE)
  • Management, resource management, circuit
    accounting, group leads (4 FTE)
  • Infrastructure (6 FTE)
29
The Hardware Configuration Database Allows Remote
Repairs
  • Equipment wiring detail for two modules at the
    AOA, NYC Hub
  • This allows "smart hands" (e.g., Qwest
    personnel at the NYC site) to replace modules
    for ESnet

30
What Does this Equipment Actually Look Like?
[Photo: equipment rack detail at the NYC Hub,
32 Avenue of the Americas (one of the 10 Gb/s
core optical ring sites).]
31
Typical Equipment of an ESnet Core Network Hub
ESnet core equipment @ Qwest 32 AofA HUB, NYC,
NY (about $1.8M list):
  • Juniper T320 AOA-CR1 (core router) ($1,133,000
    list)
  • Juniper OC192 Optical Ring Interface (the AOA end
    of the OC192 to CHI) ($195,000 list)
  • Juniper OC48 Optical Ring Interface (the AOA end
    of the OC48 to DC-HUB) ($65,000 list)
  • Juniper M20 AOA-PR1 (peering router) ($353,000
    list)
  • Cisco 7206 AOA-AR1 (low speed links to MIT,
    PPPL) ($38,150 list)
  • Qwest DS3 DCX
  • AOA Performance Tester ($4,800 list)
  • Lightwave Secure Terminal Server ($4,800 list)
  • Sentry power 48V 30/60 amp panel ($3,900 list)
  • Sentry power 48V 10/25 amp panel ($3,350 list)
  • DC / AC converter ($2,200 list)
32
Operating Science Mission Critical Infrastructure
  • ESnet is a visible and critical piece of DOE
    science infrastructure
  • if ESnet fails, 10s of thousands of DOE and
    University users know it within minutes if not
    seconds
  • Requires high reliability and high operational
    security in the systems that are integral to the
    operation and management of the network
  • Secure and redundant mail and Web systems are
    central to the operation and security of ESnet
  • trouble tickets are by email
  • engineering communication by email
  • engineering database interfaces are via Web
  • Secure network access to Hub routers
  • Backup secure telephone modem access to Hub
    equipment
  • 24x7 help desk and 24x7 on-call network engineer
  • trouble@es.net (end-to-end problem resolution)

33
Disaster Recovery and Stability
  • Engineers, 24x7 Network Operations Center,
    generator backed power
  • Spectrum (net mgmt system)
  • DNS (name → IP address translation)
  • Eng database
  • Load database
  • Config database
  • Public and private Web
  • E-mail (server and archive)
  • PKI cert. repository and revocation lists
  • collaboratory authorization service
  • Remote Engineer
  • partial duplicate infrastructure

[Map: engineers and partial duplicate
infrastructure (remote engineers, DNS) are
geographically distributed. Full replication of
the NOC databases and servers and Science Services
databases is currently being deployed in the NYC
Qwest carrier hub.]
  • The network must be kept available even if, e.g.,
    the West Coast is disabled by a massive
    earthquake, etc.
  • high physical security for all equipment
  • non-interruptible core - ESnet core operated
    without interruption through
  • the N. Calif. power blackout of 2000
  • the 9/11/2001 attacks, and
  • the Sept. 2003 NE States power blackout
  • Reliable operation of the network involves
  • remote Network Operation Centers (3)
  • replicated support infrastructure
  • generator backed UPS power at all critical
    network and infrastructure locations

34
ESnet WAN Security and Cybersecurity
  • Cybersecurity is a new dimension of ESnet
    security
  • Security is now inherently a global problem
  • As the entity with a global view of the network,
    ESnet has an important role in overall security

30 minutes after the Sapphire/Slammer worm was
released, 75,000 hosts running Microsoft's SQL
Server (port 1434) were infected. ("The Spread of
the Sapphire/Slammer Worm," David Moore (CAIDA &
UCSD CSE), Vern Paxson (ICIR & LBNL), Stefan
Savage (UCSD CSE), Colleen Shannon (CAIDA),
Stuart Staniford (Silicon Defense), Nicholas
Weaver (Silicon Defense & UC Berkeley EECS),
http://www.cs.berkeley.edu/~nweaver/sapphire)
Jan. 2003
35
Maintaining Science Mission Critical
Infrastructure in the Face of Cyberattack
  • A Phased Response Security Architecture protects
    the network and the ESnet sites
  • The phased response ranges from blocking certain
    site traffic to a complete isolation of the
    network which allows the sites to continue
    communicating among themselves in the face of the
    most virulent attacks
  • Separates ESnet core routing functionality from
    external Internet connections by means of a
    peering router that can have a policy different
    from the core routers
  • Provide a rate-limited path to the external
    Internet that will ensure site-to-site
    communication during an external denial of
    service attack
  • Provide "lifeline" connectivity for downloading
    of patches, exchange of e-mail, and viewing web
    pages (i.e. e-mail, dns, http, https, ssh, etc.)
    with the external Internet prior to full
    isolation of the network (a sketch of this
    escalation logic follows below)
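A schematic sketch of that escalation logic
(illustrative only, not ESnet's implementation;
the packet fields and phase numbering are
invented):

    # Each escalation phase narrows what may cross the boundary between
    # ESnet and the external Internet; site-to-site traffic always flows.
    LIFELINE_PORTS = {25, 53, 80, 443, 22}   # e-mail, dns, http, https, ssh

    def permitted(phase, pkt):
        if pkt["site_to_site"]:          # ESnet sites keep talking to each other
            return True
        if phase == 0:                   # normal operation
            return True
        if phase == 1:                   # block certain site traffic on request
            return not pkt["matches_site_block"]
        if phase == 2:                   # filter traffic arriving from outside
            return not pkt["matches_attack_signature"]
        # phase 3: full isolation -- only rate-limited "lifeline" services
        return pkt["dst_port"] in LIFELINE_PORTS

    pkt = {"site_to_site": False, "dst_port": 443,
           "matches_site_block": False, "matches_attack_signature": False}
    print(permitted(3, pkt))             # True: https is a lifeline service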

36
Cyberattack Defense
[Figure: phased cyberattack defense across
gateway, border, and peering routers. Lab first
response: filter incoming traffic (e.g. attack
traffic aimed at LBNL) at the site's ESnet gateway
router. ESnet first response: apply filters at
ESnet routers to assist a site. ESnet second
response: filter traffic from outside of ESnet.
ESnet third response: shut down the main peering
paths and provide only limited bandwidth paths for
specific "lifeline" services.]

  • The Sapphire/Slammer worm infection created a
    Gb/s of traffic on the ESnet core until filters
    were put in place (both into and out of sites) to
    damp it out.

37
ESnet Services Supporting Science
  • In addition to the high-bandwidth network
    connectivity for DOE Labs, ESnet provides several
    other services critical for collaboration
  • Access to collaborators (peering)
  • Federated trust
  • identity authentication
  • PKI certificates
  • crypto tokens
  • Human collaboration

38
Connecting the DOE Community With its
Collaborators: ESnet's Peering Infrastructure

[Map: ESnet peering (connections to other
networks), spanning university, international, and
commercial peers. International peers include
CAnet4, CERN, MREN, Netherlands, Russia (incl.
BINP), StarTap, Taiwan (ASCC and TANet2),
Australia, Singaren, GEANT (Germany, France,
Italy, UK, etc.), SInet (Japan), KEK, and KDDI
(Japan). Peerings concentrate at hubs and exchange
points such as PNW-GPOP, the SEA HUB, the
distributed 6TAP (19 peers), the NYC HUBS, the SNV
HUB, PAIX-W (26 peers), MAX GPOP, MAE-W (22
peers), FIX-W (6 peers), CENIC/SDSC, TECHnet, and
the ATL HUB, with additional peer groups at other
exchanges, multiple direct Abilene peerings,
CalREN2, LANL, and university connections.]

ESnet provides access to all of the Internet by
managing the full complement of Global Internet
routes (about 150,000) at 10 general/commercial
peering points, plus high-speed peerings with
Abilene and the international R&E networks. This
is a lot of work, and is very visible, but
provides full access for DOE.
39
What is Peering?
  • At peering points, networks exchange routing
    information: each peer announces, in effect,
    "these are the destinations I can get packets
    closer to"
  • ESnet daily peering report (top 20 of about 100)
  • This is a lot of work

Peering with this outfit is not random; it
carries routes that ESnet needs (e.g. to the
Russian Backbone Net). A toy route lookup is
sketched below.
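A toy longest-prefix-match lookup, which is the
heart of what a router does with those routes (the
routes below are invented for illustration):

    import ipaddress

    routes = {
        ipaddress.ip_network("0.0.0.0/0"):       "default: commercial transit",
        ipaddress.ip_network("198.51.100.0/24"): "Abilene peering",
        ipaddress.ip_network("198.51.100.0/25"): "direct R&E peering",
    }

    def next_hop(dst):
        """Pick the most specific route whose prefix contains dst."""
        addr = ipaddress.ip_address(dst)
        best = max((net for net in routes if addr in net),
                   key=lambda net: net.prefixlen)
        return routes[best]

    print(next_hop("198.51.100.7"))   # most specific match wins: the /25
    print(next_hop("203.0.113.9"))    # falls through to the default route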
40
What is Peering?
  • Why so many routes? So that when I want to get
    to someplace out of the ordinary, I can get
    there. For example:
    http://www-sbras.nsc.ru/eng/sbras/copan/microel_main.html
    (Technological Design Institute of Applied
    Microelectronics, Novosibirsk, Russia)

41
Support for Grid and Collaboration: Science
Environments
  • ESnet provides several "science services":
    services that support the practice of science
  • A number of such services have an organization
    like ESnet as the natural provider
  • ESnet is trusted, persistent, and has a large
    (almost comprehensive within DOE) user base
  • ESnet has the facilities to provide reliable
    access and high availability through assured
    network access to replicated services at
    geographically diverse locations
  • However, a service must be scalable in the sense
    that as its user base grows, ESnet interaction
    with the users does not grow (otherwise it is not
    practical for a small organization like ESnet to
    operate)

42
Grid Middleware: Federated Trust Services
  • Managing cross-site trust agreements among many
    organizations is crucial for science
    collaboration
  • ESnet negotiates the cross-site,
    cross-organization, and international trust
    relationships to provide policies that are
    tailored to collaborative science in order to
    permit sharing computing and data resources, and
    other Grid services
  • X.509 identity certificates and Public Key
    Infrastructure provide the basis of secure,
    cross-site authentication of people and Grid
    systems (www.doegrids.org); a verification
    sketch appears at the end of this slide
  • Certification Authority (CA) issues certificates
    after validating request against policy
  • This service was the basis of the first routine
    sharing of HEP computing resources between US and
    Europe
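A minimal sketch of the trust check that Grid
middleware performs against a CA such as DOEGrids
(assumes PEM files on disk and an RSA CA key; uses
the third-party Python "cryptography" package):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import padding

    ca_cert = x509.load_pem_x509_certificate(open("ca.pem", "rb").read())
    user_cert = x509.load_pem_x509_certificate(open("user.pem", "rb").read())

    # The CA's public key must verify the signature over the user
    # certificate's "to-be-signed" bytes; raises an exception on failure.
    ca_cert.public_key().verify(
        user_cert.signature,
        user_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),                  # assumes an RSA CA key
        user_cert.signature_hash_algorithm,
    )

    # The certificate must also be within its validity period.
    now = datetime.datetime.utcnow()
    assert user_cert.not_valid_before <= now <= user_cert.not_valid_after
    print("trusted identity:", user_cert.subject.rfc4514_string())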

43
Grid Middleware: Federated Trust Services
  • Strong authentication is needed to address
    identity theft (the mechanism of the successful
    attacks on US supercomputers last spring)
  • crypto tokens (e.g. SecurID cards) are
    effective, but every site uses a different
    approach
  • ESnet is developing a crypto token service to
    serve a federation role
  • Token-based requests for Grid access are sent to
    an ESnet service that identifies the token and
    contacts the issuing site for validation; the
    result is then returned to the Grid application
    (a hypothetical sketch of this round trip
    follows below)
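The round trip described in that last bullet might
look like the following; every URL, endpoint, and
field name here is hypothetical, since the service
was still under development:

    import json, urllib.parse, urllib.request

    def validate_grid_token(token_id, one_time_code):
        # 1) A (hypothetical) ESnet federation service identifies
        #    which site issued this token.
        q = urllib.parse.urlencode({"token": token_id})
        with urllib.request.urlopen(
                "https://tokens.example.es.net/issuer?" + q) as r:
            issuing_site = json.load(r)["site"]

        # 2) The issuing site is asked to validate the one-time code.
        q = urllib.parse.urlencode({"token": token_id, "code": one_time_code})
        with urllib.request.urlopen(
                "https://" + issuing_site + "/validate?" + q) as r:
            verdict = json.load(r)["valid"]

        # 3) The result is returned to the Grid application.
        return verdict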

44
Science Services: Public Key Infrastructure
Registration Authorities: ANL, LBNL, ORNL, DOESG
(DOE Science Grid), ESG (Climate), FNAL, PPDG
(HEP), Fusion Grid, iVDGL (NSF-DOE HEP collab.),
NERSC, PNNL
Report as of July 15, 2004
45
Voice, Video, and Data Tele-Collaboration Service
  • Another highly successful ESnet Science Service
    is the audio, video, and data teleconferencing
    service to support human collaboration
  • Seamless voice, video, and data teleconferencing
    is important for geographically dispersed
    scientific collaborators
  • Provides the central scheduling essential for
    global collaborations
  • ESnet serves more than a thousand DOE researchers
    and collaborators worldwide
  • H.323 (IP) videoconferences (4000 port hours per
    month and rising)
  • audio conferencing (2500 port hours per month)
    (constant)
  • data conferencing (150 port hours per month)
  • Web-based, automated registration and scheduling
    for all of these services

46
Enabling the Future: ESnet's Evolution over the
Next 10-20 Years
  • Upgrading ESnet to accommodate the anticipated
    increase from the current 100%/yr traffic growth
    to 300%/yr over the next 5-10 years is priority
    number 7 out of 20 in DOE's "Facilities for the
    Future of Science: A Twenty Year Outlook"
  • Based on the requirements of the OSC Network
    Workshops, ESnet must address
  • Capable, scalable, and reliable production IP
    networking
  • University and international collaborator
    connectivity
  • Scalable, reliable, and high bandwidth site
    connectivity
  • Network support of high-impact science
  • provisioned circuits with guaranteed quality of
    service (e.g. dedicated bandwidth)
  • Science Services to support Grids,
    collaboratories, etc

47
New ESnet Architecture Needed to Accommodate OSC
  • The essential DOE Office of Science requirements
    cannot be met with the current, telecom-provided,
    hub-and-spoke architecture of ESnet

[Figure: the current ESnet core ring links hubs at
Chicago (CHI), New York (AOA), Washington, DC
(DC), Atlanta (ATL), El Paso (ELP), and Sunnyvale
(SNV); DOE sites hang off the ring on
point-to-point tail circuits.]
  • The core ring has good capacity and resiliency
    against single point failures, but the
    point-to-point tail circuits are neither reliable
    nor scalable to the required bandwidth

48
Evolving Requirements for DOE Science Network
Infrastructure
[Figure: requirements evolve across four time
horizons. 1-3 yr: storage (S), compute (C), and
instruments (I) linked by guaranteed bandwidth
paths. 2-4 yr: 1-40 Gb/s, end-to-end. 3-5 yr:
richer meshes including cache & compute (CC)
nodes. 4-7 yr: 100-200 Gb/s, end-to-end.]
49
A New ESnet Architecture
  • Goals
  • Fully redundant connectivity for every site
  • High-speed access for every site (at least 20
    Gb/s)
  • Three part strategy
  • 1) Metropolitan Area Network (MAN) rings provide
    dual site connectivity and much higher
    site-to-core bandwidth
  • 2) A Science Data Network core for
  • multiply connected MAN rings for protection
    against hub failure
  • expanded capacity for science data
  • a platform for provisioned, guaranteed bandwidth
    circuits
  • alternate path for production IP traffic
  • carrier circuit and fiber access neutral hubs
  • 3) High-reliability IP core (e.g. the current
    ESnet core)

50
ESnet MAN Architecture
[Figure: ESnet MAN ring architecture. A
metropolitan ring connects site gateway routers
and site LANs (e.g. ANL and FNAL site equipment)
through StarLight and a Qwest hub to the ESnet IP
core, the ESnet SDN core, international peerings,
and the CERN link (DOE funded). The ring carries
ESnet production IP service, ESnet-managed λ /
circuit services (some tunneled through the IP
backbone), and ESnet management and monitoring.]
51
A New ESnet Architecture: Science Data Network +
IP Core

[Figure: the existing ESnet IP core ring
(Sunnyvale (SNV), Chicago (CHI), New York (AOA),
Washington, DC (DC), Atlanta (ATL), El Paso (ELP))
is paired with a second core, the ESnet Science
Data Network, and joined to the DOE/OSC Labs by
metropolitan area rings; existing, new, and
possible new hubs link out to CERN, Asia-Pacific,
and GEANT (Europe).]
52
Goals for ESnet FY07
[Map: FY07 goals. The production IP ESnet core
grows from 10 Gb/s to 30 Gb/s and then 40 Gb/s; a
high-impact science core runs at 2.5-10 Gb/s; with
MANs, Qwest and SDN ESnet hubs (SEA, CHI, SNV,
NYC, DEN, DC, ALB, ATL, SDG, ELP), high-speed
cross connects with Internet2/Abilene, major DOE
Office of Science sites, lab-supplied links,
future phases, and major international links to
AsiaPac, CERN, Europe, and Japan.]
53
Conclusions
  • ESnet is an infrastructure that is critical to
    DOE's science mission
  • Focused on the Office of Science Labs, but serves
    many other parts of DOE
  • ESnet is implementing a new network architecture
    in order to meet the science networking
    requirements of DOE's Office of Science
  • Grid middleware services for large numbers of
    users are hard but they can be provided if
    careful attention is paid to scaling

54
Reference -- Planning Workshops
  • High Performance Network Planning Workshop,
    August 2002
  • http://www.doecollaboratory.org/meetings/hpnpw
  • DOE Workshop on Ultra High-Speed Transport
    Protocols and Network Provisioning for
    Large-Scale Science Applications, April 2003
  • http://www.csm.ornl.gov/ghpn/wk2003
  • Science Case for Large Scale Simulation, June
    2003
  • http://www.pnl.gov/scales/
  • DOE Science Networking Roadmap Meeting, June 2003
  • http://www.es.net/hypertext/welcome/pr/Roadmap/index.html
  • Workshop on the Road Map for the Revitalization
    of High End Computing, June 2003
  • http://www.cra.org/Activities/workshops/nitrd
  • http://www.sc.doe.gov/ascr/20040510_hecrtf.pdf
    (public report)
  • ASCR Strategic Planning Workshop, July 2003
  • http://www.fp-mcs.anl.gov/ascr-july03spw
  • Planning Workshops: Office of Science
    Data-Management Strategy, March & May 2004
  • http://www-conf.slac.stanford.edu/dmw2004
    (report coming soon)