Grids, Networks and ICFA SCIC - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Grids, Networks and ICFA SCIC


1
  • Grids, Networks and ICFA SCIC
  • Harvey B. Newman
  • California Institute of Technology
    HICB, Toronto, February 17, 2002
  • http://l3www.cern.ch/newman/HICBGridsNets_hbn021702.ppt

2
Next Generation Networks for Experiments:
Goals and Needs
  • Providing rapid access to event samples and
    subsets from massive data stores
  • From 500 Terabytes in 2001, Petabytes by 2002,
    100 Petabytes by 2007, to 1 Exabyte by 2012.
  • Providing analyzed results with rapid turnaround,
    by coordinating and managing the LIMITED
    computing, data handling and NETWORK resources
    effectively
  • Enabling rapid access to the data and the
    collaboration
  • Across an ensemble of networks of varying
    capability
  • Advanced integrated applications, such as Data
    Grids, rely on seamless high performance
    operation of our WANs and LANs
  • With reliable, quantifiable (monitored), high
    performance
  • For Grid-enabled event processing and data
    analysis, and collaboration

3
Transatlantic Net WG (HN, L. Price)
Bandwidth Requirements

Installed BW; Maximum Link Occupancy of 50%
Assumed. See http://gate.hep.anl.gov/lprice/TAN
4
Total U.S. Internet Traffic
[Chart: U.S. Internet traffic on a log scale from 10 bps to 100 Pbps.
ARPA and NSF data to '96 grew at 2.8X/Year; new measurements show
4X/Year growth; voice crossover in August 2000; traffic projected at
4X/Year toward the limit of the same GDP share as voice.]
Source: Roberts et al., 2001
5
AMS-IX Internet Exchange Throughput:
Accelerating Growth in Europe (NL)
Monthly Traffic: 2X Growth from 8/00 - 3/01;
2X Growth from 8/01 - 12/01
[Chart: Hourly traffic on 2/03/02, peaking near 6.0 Gbps]
6
ICFA Standing Committee on Interregional
Connectivity (SCIC)
  • Created by ICFA in July 1998 in Vancouver
  • CHARGE
  • Make recommendations to ICFA concerning the
    connectivity between the Americas, Asia and
    Europe
  • As part of the process of developing these
    recommendations, the committee should
  • Monitor traffic
  • Keep track of technology developments
  • Periodically review forecasts of future
    bandwidth needs, and
  • Provide early warning of potential problems
  • Create subcommittees when necessary to meet the
    charge
  • The chair of the committee should report to ICFA
    once per year. Reports: February, July and
    October 2002

7
Internet2 HENP WG
  • Mission: To help ensure that the required
  • National and international network
    infrastructures(end-to-end)
  • Standardized tools and facilities for high
    performance and end-to-end monitoring and
    tracking, and
  • Collaborative systems
  • are developed and deployed in a timely manner,
    and used effectively to meet the needs of the
    US LHC and other major HENP Programs, as well as
    the at-large scientific community.
  • To carry out these developments in a way that is
    broadly applicable across many fields
  • Formed an Internet2 WG as a suitable framework
    Oct. 26 2001
  • Co-Chairs: S. McKee (Michigan), H. Newman
    (Caltech); Secy: J. Williams (Indiana)
  • Website: http://www.internet2.edu/henp; also see
    the Internet2 End-to-end Initiative:
    http://www.internet2.edu/e2e

8
ICFA SCIC Topics
  • Network Status and Upgrade Plans in Each Country
  • Bandwidth and Performance Evolution in Each
    Country, and Transatlantic
  • Performance World-Overview (Les Cottrell, IEPM
    Project)
  • Specific Topics (Examples)
  • Bulk transfer, VoIP, Collaborative Systems, QoS,
    Security
  • Identification of problem areas; ideas on how to
    improve, or encourage others to improve
  • Last Meeting was December 8, 2001 at CERN
  • Executive Summary. Plus individual country
    reports, performance report, and DataTAG Report
    as appendices.
  • http://www.slac.stanford.edu/grp/scs/trip/notes-icfa-dec01-cottrell.html

9
Daily, Weekly, Monthly and Yearly Statistics on
155 Mbps US-CERN Link
BW Upgrades Quickly Followed by Upgraded
Production Use
20 - 75 Mbps Used Routinely in '01
10
Monitoring UK-US Traffic
[Charts: UKERNA traffic data in Kbit/s, 10-minute averages.
Blue: traffic from US; Maroon: traffic to US.
17 Jul 2000: ~300 Mbit/s; 7 Dec 2001: ~900 Mbit/s;
peak of 930 Mbit is ~88% of total BW.]
11
Monitoring UK-US Traffic (R. Hughes-Jones)
[Chart: Trends Jan '99 - Nov '01; UKERNA US traffic,
daily totals in Mbytes; peak load ~88%; weekday vs.
weekend pattern, with dips at Xmas '99 and Xmas '01.
J. Macallister]
12
RNP Brazil (to 20 Mbps)
FIU Miami (to 80 Mbps)
STARTAP/Abilene OC3 (to 80 Mbps)
13
Nat'l and Int'l Network Backbones and Major
Transoceanic Links are Advancing Rapidly
  • Next generation 10 Gbps national network
    backbones are starting to appear in the US,
    Europe and Japan
  • Major transoceanic links are/will be at 2.5 - 10
    Gbps in 2002-3
  • Removing regional and last-mile bottlenecks and
    compromises in network quality is now
    on the critical path

14
2.5 Gbps Backbone
201 Primary Participants: All 50 States, D.C. and
Puerto Rico; 75 Partner Corporations and
Non-Profits; 14 State Research and Education Nets;
15 GigaPoPs Support 70% of Members
15
Rapid Advances of Nat'l Backbones:
Next Generation Abilene
  • Abilene partnership with Qwest extended through
    2006
  • Backbone is being upgraded to 10-Gbps in three
    phases, to be Completed by October 2003
  • GigaPoP Upgrade started in February 2002
  • Capability for flexible λ provisioning in support
    of future experimentation in optical networking
  • In a multi-λ infrastructure

16
Baseline BW for the US-CERN Link: HENP
Transatlantic WG (DOE+NSF)
Transoceanic Networking Integrated with the
Abilene, TeraGrid, Regional Nets and Continental
Network Infrastructures in US, Europe, Asia,
South America
Evolution typical of major HENP links 2001-2006
US-CERN Link: 2 X 155 Mbps Now; Plans: 622 Mbps
in April 2002; DataTAG 2.5 Gbps Research Link in
Summer 2002; 10 Gbps Research Link in 2003 or
Early 2004
17
Networks in Canada (Dean Karlen)
www.hepnet.carleton.ca
  • CA*net3: Now 8 OC192 wavelengths
  • Federal Announcement 1/02 of CA*net4: $120M for
    dark fiber across Canada
  • Connectivity: Acceptable inside CA; to ESNet,
    UK, DE Good-to-Acceptable; to Italy Good; to
    CERN Excellent; to US universities Very
    Variable

[Chart annotations: Congestion?  Improvement!]
18
National R&E Network Example: Germany (DFN)
TransAtlantic Connectivity Q1 2002
  • 2 X OC12 Now NY-Hamburg and NY-Frankfurt
  • ESNet peering at 34 Mbps
  • Upgrade to 2 X OC48 expected in Q1 2002
  • Direct Peering to Abilene (US) and Canarie
    (Canada) expected
  • UCAID will add (?) another 2 OC48s Proposing a
    Global Terabit Research Network (GTRN)
  • FSU Connections via satellite: Yerevan, Minsk,
    Almaty, Baikal
  • Speeds of 32 - 512 kbps
  • SILK Project (2002) NATO funding
  • Links to Caucasus and Central Asia (8
    Countries)
  • Currently 64-512 kbps
  • Propose VSAT for 10-50 X BW; NATO + State
    Funding

19
International Connections of Super/SINET
  • 5 X 155 Mbps to US/EU (IP Over ATM) and 75 Mbps
    to GEANT in Europe
  • Contains HEP Links (but no more dedicated PVCs)
  • KEK-ESnet 20 Mbps
  • KEK-CERN 40 Mbps
  • KEK-DESY 10 Mbps
  • Upgrades being studied WDM ? OC48 POS ?
  • HEP dedicated lines KEK-
  • Taiwan(AS) 1.5 Mbps
    FrameRelay
  • Novosibirsk(BINP) 128 Kbps → 256 or 512
    Kbps Soon
  • KEK-Beijing(IHEP) 128 Kbps ?
  • APAN links

To 2 X 622 Mbps to the US
20
ICFA SCIC December 2001 Backbone and
International Link Progress
  • GEANT Pan-European backbone
    (http://www.dante.net/geant) now interconnects
    31 countries; includes many trunks at OC48 and OC192
  • Abilene: upgrade from 2.5 to 10 Gbps; additional
    lambdas on demand planned for targeted
    applications
  • SuperSINET (Japan): Two OC12 Links, to Chicago
    and Seattle; plan upgrade to 2 X OC48 connection
    to US West Coast in 2003
  • CA*net4: Interconnect customer-owned dark fiber
    nets across Canada, starting in 2003
  • RENATER (France): Connected to GEANT at OC48;
    CERN link to OC12
  • GWIN (Germany): Connection to Abilene to 2 X OC48
    expected in 2002
  • SuperJANET4 (UK): Mostly OC48 links, connected to
    academic MANs typically at OC48
    (http://www.superjanet4.net)
  • US-CERN link (2 X OC3 Now) to OC12 this Spring;
    OC192 by 2005; DataTAG research link OC48 Summer
    2002, to OC192 in 2003-4
  • SURFnet (NL): link to US at OC48
  • ESnet: 2 X OC12 backbone, with OC12 to HEP labs;
    plans to connect to STARLIGHT using Gigabit
    Ethernet

21
HENP Networks Outlook and High Performance Issues
  • Higher speeds are soon going to reach limits of
    existing protocols
  • TCP/IP 25 years old built for 64 kbps
  • Ethernet 20 years old built for 10 Mbps
  • We need to understand how to use and deploy new
    network technologies in the 1 to 10 Gbps range
  • Optimize throughput: large windows, perhaps many
    streams (see the sketch after this list)
  • Will then need new concepts of fair sharing and
    managed use of networks
  • New sometimes expensive hardware and new
    protocols
  • GigE and soon 10 GigE on some WAN paths
  • MPLS/GMPLS for network policy and QoS
  • Alternatives to TCP?? (e.g. UDP/RTP + FEC)
  • DWDM and management of Lambdas at 2.5 then 10 Gbps
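
To make the window-sizing point above concrete, here is a minimal sketch (not from the original slides) of the bandwidth-delay product calculation; the ~170 msec RTT is the transatlantic value quoted later in the talk, and the link rates are illustrative.

```python
# Minimal sketch: TCP window needed to keep a long, fat pipe full.
# Assumes the ~170 ms transatlantic RTT quoted in the talk; link
# rates are illustrative. If the window is smaller than BW x RTT,
# the sender idles waiting for ACKs -- hence large windows and/or
# many parallel streams.

RTT_S = 0.170  # round-trip time in seconds (CERN-Caltech scale)

# Without TCP window scaling, the classic 64 KB window caps a single
# stream at roughly 64 KB per RTT.
CEILING_MBPS = 64 * 1024 * 8 / RTT_S / 1e6  # ~3 Mbps

def window_bytes(link_bps: float, rtt_s: float = RTT_S) -> float:
    """Bandwidth-delay product: bytes in flight to fill the link."""
    return link_bps * rtt_s / 8.0

for name, bps in [("OC3 155 Mbps", 155e6), ("OC12 622 Mbps", 622e6),
                  ("2.5 Gbps", 2.5e9), ("10 Gbps", 10e9)]:
    print(f"{name}: window of ~{window_bytes(bps) / 1e6:.1f} MB needed "
          f"(vs. ~{CEILING_MBPS:.0f} Mbps with plain 64 KB windows)")
```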

22
Throughput News (155 Mbps WAN)
  • 8/01: 105 Mbps reached with 30 Streams,
    SLAC-IN2P3
  • 9/1/01: 102 Mbps in One Stream, CIT-CERN
  • 11/5/01: 125 Mbps in One Stream (modified
    kernel), CIT-CERN
  • 1/02: 190 Mbps Balancing traffic on two
    OC3s, CIT-CERN




Also see http://www-iepm.slac.stanford.edu/monitoring/bulk/
and the Internet2 E2E Initiative:
http://www.internet2.edu/e2e
23
Maximizing TCP Throughput (S. Ravot, Caltech)
  • TCP Protocol Study: Limits
  • We determined precisely:
  • The parameters which limit the throughput over
    a high-BW, long delay (170 msec) network
  • How to avoid intrinsic limits and unnecessary
    packet loss
  • Methods Used to Improve TCP
  • Linux kernel programming in order to
    tune TCP parameters
  • We modified the TCP algorithm
  • A Linux patch will soon be available
  • Result The Current State of the Art for
    Reproducible Throughput (at OC3)
  • 125 Mbps between CERN/Caltech
  • 135 Mbps between CERN/Chicago
  • 190 Mbps on two links CERN/Chicago
  • Status Ready for Tests at Higher Bandwidth on
    OC12 Link in Spring 02

[Plot: Congestion window behavior of a TCP connection
over the transatlantic line. Reproducible 125 Mbps
between CERN and Caltech/CACR.]
24
Key Network Issues and Challenges
  • Net Infrastructure Requirements for High
    Throughput
  • Packet Loss must be Zero (Well below 0.01%)
  • I.e. No Commodity networks
  • Need to track down uncongested packet loss
  • No Local infrastructure bottlenecks
  • Gigabit Ethernet clear paths between selected
    host pairs are needed (some being set up) now
  • To 10 Gbps Ethernet by 2003 or 2004
  • TCP/IP stack configuration and tuning Absolutely
    Required (see the sketch after this list)
  • Large Windows; Possibly Multiple Streams
  • Careful Router configuration and monitoring
  • Server and Client CPU, I/O and NIC throughput
    sufficient
  • End-to-end monitoring and tracking of performance
  • Close collaboration with local and regional
    network staffs
  • TCP Does Not Scale to the 1-10 Gbps Range
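
The sketch below illustrates the kind of per-socket window tuning the stack-configuration bullet refers to; it is an assumption-laden example, not the tuning actually used on the links above. The buffer size is a placeholder, and on Linux the system-wide limits (e.g. net.core.rmem_max, net.ipv4.tcp_rmem) must also allow buffers this large.

```python
# Illustrative per-socket TCP buffer tuning (not the actual setup used
# on the US-CERN links). The buffer size follows the BW x RTT sizing
# shown earlier; OS-level limits must also permit buffers this large.
import socket

BDP_BYTES = 4 * 1024 * 1024  # ~4 MB, roughly OC3 x 170 ms with headroom

def tuned_socket() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Large send/receive buffers let TCP advertise a big window; they
    # must be requested before connect() for window scaling to apply.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)
    return s

if __name__ == "__main__":
    sock = tuned_socket()
    print("granted SO_RCVBUF:",
          sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    sock.close()
```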

25
TeraGrid (www.teragrid.org): NCSA, ANL, SDSC, Caltech
A Preview of the Grid Hierarchy and Networks of the LHC Era
[Network map: DTF backplane (4 x λ, ~40 Gbps) linking Pasadena
(Caltech), San Diego (SDSC), Urbana (NCSA/UIUC) and the Chicago area
(Starlight / NW Univ, UIC, Ill Inst of Tech, Univ of Chicago, ANL),
with Indianapolis (Abilene NOC) and multiple carrier hubs; links
include OC-48 (2.5 Gb/s, Abilene), multiple 10 GbE (Qwest) and
multiple 10 GbE over I-WIRE dark fiber.]
  • Solid lines: in place and/or available in 2001
  • Dashed I-WIRE lines: planned for Summer 2002

Source: Charlie Catlett, Argonne
26
OMNInet Technology Trial January 2002
  • A four-site network in Chicago -- the first 10GE
    service trial!
  • A test bed for all-optical switching and advanced
    high-speed services
  • Partners: SBC, Nortel, iCAIR at Northwestern,
    EVL, CANARIE, ANL

27
DataTAG Project
[Diagram: λ triangle New York - STARLIGHT (Chicago) -
GENEVA (CERN), with connections to ABILENE, ESNET,
MREN and STAR-TAP.]
  • EU-Solicited Project. CERN, PPARC (UK), Amsterdam
    (NL) and INFN (IT), with US (DOE/NSF: UIC, NWU
    and Caltech) partners
  • Main Aims
  • Ensure maximum interoperability between US and EU
    Grid Projects
  • Transatlantic Testbed for advanced network
    research
  • 2.5 Gbps wavelength-based US-CERN Link 6/2002 (10
    Gbps 2003 or 2004)

28
National Research Networks in Japan
  • SuperSINET
  • Started operation January 4, 2002
  • Support for 5 important areas
  • HEP, Genetics, Nano-Technology,
    Space/Astronomy, GRIDs
  • Provides 10 λs
  • 10 Gbps IP connection
  • Direct intersite GbE links
  • Some connections to 10 GbE in JFY2002
  • HEPnet-J
  • Will be re-constructed with MPLS-VPN in
    SuperSINET
  • Proposal Two TransPacific 2.5 Gbps
    Wavelengths, and KEK-CERN Grid Testbed

[SuperSINET topology map: WDM paths and IP routers
linking Tokyo, Nagoya and Osaka, with sites including
NIFS, Nagoya U, NIG, Osaka U, Kyoto U, ICR Kyoto-U,
NII Hitotsubashi, U Tokyo, ISAS, IMS, NAO, and
connections to the Internet.]
29
True End to End Experience (see
http://www.internet2.edu/e2e)
  • User perception
  • Application
  • Operating system
  • Host IP stack
  • Host network card
  • Local Area Network
  • Campus backbone network
  • Campus link to regional network/GigaPoP
  • GigaPoP link to Internet2 national backbones
  • International connections

30
Starting a Global Grid Operations Center (GGOC)
31
Rest of world by TLD
  • Russia: poor to bad; China: poor

32
Throughput quality improvements:
BW_TCP < MSS / (RTT * Sqrt(Loss))
(see the worked example below)
80% Improvement/Year → Factor of 10 In 4 Years
China: Recent Improvement
Eastern Europe: Keeping Up
See "Macroscopic Behavior of the TCP
Congestion Avoidance Algorithm", Mathis, Semke,
Mahdavi, Ott, Computer Communication Review
27(3), 7/1997
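
As a worked example of the Mathis et al. bound quoted above, the sketch below evaluates MSS/(RTT * sqrt(Loss)) for a few loss rates on an assumed ~170 ms transatlantic path; the MSS and loss values are illustrative assumptions, not measurements from the talk.

```python
# Worked example of the bound above: BW_TCP < MSS / (RTT * sqrt(Loss)).
# MSS, RTT and loss rates are illustrative assumptions.
import math

MSS_BYTES = 1460   # typical Ethernet-path MSS
RTT_S = 0.170      # transatlantic round trip, as quoted in the talk

def mathis_bound_mbps(loss_rate: float,
                      mss_bytes: int = MSS_BYTES,
                      rtt_s: float = RTT_S) -> float:
    """Upper bound on single-stream TCP throughput, in Mbps."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate)) / 1e6

for loss in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"loss {loss:.0e}: <= {mathis_bound_mbps(loss):6.1f} Mbps")
```

Even at a loss rate of 1e-6 the single-stream bound on this path is only about 70 Mbps, consistent with the earlier point that packet loss must stay far below 0.01% to sustain 100+ Mbps transfers.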
33
HENP Networks and Remote Regions
  • The increased bandwidth availability in NA,
    Europe, and Japan has changed our viewpoint
  • Faster progress in applications (e.g. Grids)
  • The Digital Divide with the Rest of the World
    is widening
  • Network improvements are especially needed in
    South America, Central Asia, and some other world
    regions
  • BRAZIL, Chile, India, Pakistan, China, FSU,
    Africa
  • Remote regions need help from SCIC, ICFA, the
    Internet2 HENP Net WG, the Grid Projects,
    HICB/HJTB, GGF
  • Better contacts, and better network monitoring
  • Installations, workshops, training
  • Identify funding programs, partner and help
    with proposals; demos, prototypes, etc.
  • Particular attention is needed in the near term
    for countries planning Tier1 or Tier2 Regional
    Centres

34
Networks, Grids and HENP
  • Grids are starting to change the way we do
    science and engineering
  • Successful use of Grids requires reliable
    national and international networks with
    quantifiable, high performance
  • Grid projects will have a substantial impact on
    the demands for high BW, high quality networks
  • Grid architects are still grappling with
    understanding networks
  • The degree and nature of the impact is not yet
    understood
  • Getting high (reliable) Grid performance across
    networks means:
  • End-to-end network monitoring; a coherent
    approach
  • Getting high performance (TCP) toolkits into
    users' hands
  • Working in concert with Internet2 E2E, I2 HENP
    WG, DataTAG; working with the Grid projects and
    GGF

35
Networks, Grids and HENP: Future Interactions
  • Network technology advances will continue:
    fibers, dynamic use of wavelengths,
    security
  • This will fuel a continuing development cycle
    between
  • Grid services, Grid Applications, and HENP's
    software and data analysis environment

[Cycle diagram linking: HEP Software and Data Analysis
Environment, Grid Applications, Grid Services,
Network Technologies]
36
Upcoming Grid Challenges Global Workflow
Management and Optimization
  • Workflow Management, Balancing Policy Versus
    Moment-to-moment Capability to Complete Tasks
  • Balance High Levels of Usage of Limited Resources
    Against Better Turnaround Times for Priority
    Jobs
  • Goal-Oriented According to (Yet to be Developed)
    Metrics
  • Maintaining a Global View of Resources and System
    State
  • Global System Monitoring, Modeling,
    Quasi-realtime simulation; feedback on the
    Macro- and Micro-Scales
  • Adaptive Learning: new paradigms for execution
    optimization and Decision Support (eventually
    automated)
  • This is the Job of a Grid Applications Layer
  • To be designed, written and deployed by each
    experiment
  • Common Elements, such as some higher level
    services ?
  • Who is doing/going-to-do this, and when ?

37
Extra Slides Follow
38
ICFA-SCIC Related URLs
  • ICFA-SCIC Homepage (Will be updated February)
  • http://www.hep.net/ICFA/index.html
  • ICFA Network Task Force (NTF) Homepage
    (1997-1998)
  • http://nicewww.cern.ch/davidw/icfa/icfa-ntf.html
  • ICFA-NTF July98 Reports
  • http://nicewww.cern.ch/davidw/icfa/July98Report.html
  • http://l3www.cern.ch/newman/icfareq98.html

39
ICFA and International Networking
NTF
  • ICFA Statement on Communications in Int'l HEP
    Collaborations of October 17, 1996. See
    http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
  • ICFA urges that all countries and institutions
    wishing to participate even more effectively
    and fully in international HEP Collaborations
    should
  • Review their operating methods to ensure they
    are fully adapted to remote participation
  • Strive to provide the necessary communications
    facilities and adequate international bandwidth

40
The ICFA Network Task Force
NTF
  • Approved by ICFA in July 1997
  • David O. Williams (CERN) Chair
  • NTF Charge
  • Document and Evaluate the status of networking
    used by the whole ICFA community, and analyze
    its likely evolution until the turn-on of the LHC
  • Make specific recommendations or proposals to
    ICFA to improve the overall situation for HEP
    networking
  • Working Groups
  • Monitoring; Present Status; Requirements
    Analysis; Remote Regions; Recommendations
  • Final Report to ICFA July 1998
  • http://nicewww.cern.ch/davidw/icfa/FinalReport.doc

41
ICFA NTF Working Groups
NTF
  • Monitoring WG Cottrell (SLAC), Chair
  • Deployed and enhanced the SLAC/HEPNRC monitoring
    packages, with active participation from Europe,
    Japan,..
  • Present Status WG Tirler (SACLAY), chair
  • Prepared a status report listing the many
    networks used by the ICFA community then (in
    1997-8).
  • Requirements Analysis WG Newman (CALTECH),
    chair
  • The requirements group has carried out the
    difficult job of consolidating a statement of
    our needs for the next 5-10 years (1998 -
    2008).
  • Remote Regions WG Frese (DESY), Chair
  • This group has gathered information on
    connectivity to Russia, India and other remote
    regions, and investigated the use of satellite
    technology.
  • Recommendations WG Kaletka (FNAL), Chair

42
Bandwidth Requirements Projection
(Mbps) ICFA-NTF (1998)
NTF
100-1000 X Bandwidth Increase Foreseen for
1998-2005. See the ICFA-NTF Requirements
Report: http://l3www.cern.ch/newman/icfareq98.html
43
ICFA SCIC Outlook Next Meeting March 9 at CERN
  • Review Membership: Do we have the global coverage
    we need?
  • Track National and International Infrastructures
    Status
  • Technology Services Pricing
  • Remote Regions
  • Generic Problems Special Cases
  • Action Items: Help encourage advanced R&D; help
    with proposals
  • Advanced Technology Topics
  • Highest Throughput: In testbed demonstrations and
    in production
  • 10 GigE Ethernet in the First Mile
  • New Protocols and Equipment IP/MPLS, and GMPLS
  • Wireless (3G and 4G)
  • Paradigm Shifts
  • Grids
  • Ubiquitous and Mobile Communications (and
    Computing)
  • New Requirements Report
  • In the context of Grids and Lambda-Provisioning
  • Schedule ? Content ? Specific Points ?

44
High Speed Bulk Throughput: BaBar Example and LHC
  • Driven by
  • Data intensive science, e.g. data grids
  • HENP data rates, e.g. BaBar 400 TB/year,
    collection doubling yearly, i.e. PBytes in a couple
    of years; ~40 PB/yr more at LHC
  • Data rate from experiment >20 MBytes/s, ~200
    GBytes/d; 5-75 Times More at LHC
  • Multiple regional computer centers (e.g.
    Lyon-FR, RAL-UK, INFN-IT, LBNL-CA, LLNL-CA,
    Caltech-CA) need copies of data
  • Boeing 747 has high throughput, BUT poor latency
    (2 weeks) and is very people intensive
  • Therefore Need high-speed networks and the
    ability to utilize them fully
  • High speed today: 100 GB - 1 TB/day (12 to 120
    Mbps, up to an OC3); see the conversion sketch below

[Chart: data volume growth vs. Moore's law]
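
The conversion sketch referenced in the last bullet: a small, illustrative calculation (not from the original slides) turning sustained daily transfer volumes into the average link rate required.

```python
# Sketch: convert sustained daily volumes to average Mbps, matching
# the "100 GB - 1 TB/day (12 to 120 Mbps)" figure above.
SECONDS_PER_DAY = 86_400

def mbps_for(bytes_per_day: float) -> float:
    """Average rate in Mbps to move this many bytes in one day."""
    return bytes_per_day * 8 / SECONDS_PER_DAY / 1e6

for label, volume in [("100 GB/day", 100e9), ("1 TB/day", 1e12)]:
    print(f"{label}: ~{mbps_for(volume):.0f} Mbps sustained")
# Prints ~9 Mbps and ~93 Mbps, i.e. roughly the 12-120 Mbps range
# quoted on the slide (the exact numbers depend on GB vs. GiB).
```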
45
ICFA NTF Assessment of Future Network Needs
NTF
  • We expect that the next generation of
    experiments, such as BaBar, Run II at Fermilab,
    CLEO II and RHIC, which will start to run in the
    period 1999-2001, will require an increase in
    network capacity relative to 1998 by a factor of
    10-30 from today's levels. By the time LHC
    experiments start operating in 2005, we will be
    needing an increase by a factor of 100-1000 from
    today's levels

46
ICFA NTF Recommendations Concerning
International Links
NTF
  • ICFA should encourage the provision of
    considerable extra bandwidth, especially over
    the Atlantic
  • ICFA participants should make concrete proposals
    to
  • recommend how to provide increased bandwidth
    across the Atlantic
  • recommend a common ICFA approach to QoS
  • propose ways to improve co-operation with other
    disciplines and agencies
  • Consider whether and how to approach funding
    agencies for resources to implement QoS services
    on transoceanic links.
  • The bandwidth to Japan also needs to be upgraded
  • All organizations involved in providing
    intercontinental Internet service should be
    encouraged to regard integrated end-to-end
    connectivity as a primary requirement

47
ICFA-SCIC Membership
  • The chair is appointed directly by ICFA ?
  • Each of the major user laboratories, CERN, DESY,
    FERMILAB, KEK and SLAC, should appoint one
    member each ?
  • ECFA, DPF jointly with IPP, and ACFA, should
    appoint two members each ?
  • ICFA will appoint one member from the Russian
    federation and one member from South America ?

48
Installed Links (Gbps) Required to US Labs
and Transatlantic
Maximum Link Occupancy of 50% Assumed
49
LHC Data Grid Hierarchy
CERN/Outside Resource Ratio ~1:2;
Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1
[Tier diagram:
  • Online System at the Experiment: ~PByte/sec;
    100-400 MBytes/sec to Tier 0
  • Tier 0 +1 (CERN): ~700k SI95, ~1 PB Disk, Tape
    Robot, HPSS
  • Tier 1 (2.5 Gbits/sec links): FNAL (~200k SI95,
    600 TB), IN2P3 Center, INFN Center, RAL Center
  • Tier 2 (2.5 Gbps links)
  • Tier 3: Institutes (~0.25 TIPS) at 100 - 1000
    Mbits/sec
  • Tier 4: Physics data caches and Workstations
Physicists work on analysis channels; each institute
has ~10 physicists working on one or more channels.]
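
As a quick illustrative check (not part of the original slide), the sketch below expresses the 100-400 MBytes/sec flow from the online system in Gbps and compares it with the 2.5 Gbps tier links shown in the hierarchy.

```python
# Sketch: express 100-400 MBytes/sec in Gbps and compare with the
# 2.5 Gbps links appearing in the hierarchy above (illustrative only).
TIER_LINK_GBPS = 2.5

def to_gbps(mbytes_per_sec: float) -> float:
    """Convert MBytes/sec to Gbits/sec."""
    return mbytes_per_sec * 8 / 1000.0

for rate in (100, 400):
    gbps = to_gbps(rate)
    print(f"{rate} MBytes/s = {gbps:.1f} Gbps "
          f"({gbps / TIER_LINK_GBPS:.2f} x a 2.5 Gbps link)")
# 0.8 - 3.2 Gbps: the same order of magnitude as the Tier 0 - Tier 1
# links shown above.
```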
50
HENP Related Data Grid Projects
  • Projects
  • PPDG I: USA, DOE, $2M, 1999-2001
  • GriPhyN: USA, NSF, $11.9M + $1.6M, 2000-2005
  • EU DataGrid: EU, EC, 10M, 2001-2004
  • PPDG II (CP): USA, DOE, $9.5M, 2001-2004
  • iVDGL: USA, NSF, $13.7M + $2M, 2001-2006
  • DataTAG: EU, EC, 4M, 2002-2004
  • GridPP: UK, PPARC, >15M, 2001-2004
  • LCG (Ph1): CERN, MS, 30 MCHF, 2002-2004
  • Many Other Projects of interest to HENP
  • Initiatives in US, UK, Italy, France, NL,
    Germany, Japan,
  • US and EU networking initiatives AMPATH, I2,
    DataTAG
  • US Distributed Terascale Facility ($53M, 12
    TeraFlops, 40 Gb/s network)

51
TEN-155 and GEANT: European A&R Networks 2001-2002
Project 2000 - 2004
GEANT (from 9/01): 10 and 2.5 Gbps trunks
TEN-155: OC12 Core
European A&R Networks are Advancing Rapidly;
31 Countries Are Now Connected to GEANT
52
(No Transcript)
53
California Optical Network Initiative (ONI)
  • Connect 46 Major Research and Ed. Institutions
  • Eventually all 125 Community Colleges and
    1135 K-12 Districts
  • Target Start Date 10/2001; Phase 1 Done in 2002
  • Tier1: Bleeding Edge Research; Emphasis on the
    Use of Dark Fibers
  • Tier2 and Tier3: Multi-Gbps Production Networks

CENIC ONI
54
GriPhyN + iVDGL Map Circa 2002-2003: US, UK,
Italy, France, Japan, Australia
  • International Virtual-Data Grid Laboratory
  • Conduct Data Grid tests at scale
  • Develop Common Grid infrastructure
  • National, international scale Data Grid
    tests, leading to managed ops (iGOC)
  • Components
  • Tier1, Selected Tier2 and Tier3 Sites
  • Distributed Terascale Facility (DTF)
  • 0.6 - 10 Gbps networks
  • Planned New Partners
  • Brazil T1
  • Russia T1
  • Pakistan T2
  • China T2

55
STARLIGHT: The Next Generation Optical STARTAP
StarLight, the Optical STAR TAP, is an advanced
optical infrastructure and proving ground for
network services optimized for high-performance
applications. In partnership with CANARIE
(Canada), SURFnet (Netherlands), and soon CERN.
  • Started last July
  • Existing Fiber: Ameritech, AT&T, Qwest; Soon: MFN,
    Teleglobe, Global Crossing and Others
  • Main distinguishing features
  • Neutral location (Northwestern University)
  • 40 racks for co-location
  • 1/10 Gigabit Ethernet based
  • Optical switches for advanced experiments
  • GMPLS, OBGP
  • See http://www.startap.net/starlight
  • Developed by EVL at UIC, iCAIR at NWU, ANL/MCS
    Div.