Networks for HENP and ICFA SCIC - PowerPoint PPT Presentation

1
  • Networks for HENP and ICFA SCIC

Harvey B. Newman, California Institute of Technology
APAN High Energy Physics Workshop
January 21, 2003
2
Next Generation Networks for Experiments Goals
and Needs
Large data samples explored and analyzed by
thousands of globally dispersed scientists, in
hundreds of teams
  • Providing rapid access to event samples, subsets
    and analyzed physics results from massive data
    stores
  • From Petabytes by 2002, 100 Petabytes by 2007,
    to 1 Exabyte by 2012.
  • Providing analyzed results with rapid turnaround,
    by coordinating and managing the large but
    LIMITED computing, data handling and NETWORK
    resources effectively
  • Enabling rapid access to the data and the
    collaboration
  • Across an ensemble of networks of varying
    capability
  • Advanced integrated applications, such as Data
    Grids, rely on seamless operation of our LANs
    and WANs
  • With reliable, monitored, quantifiable high
    performance

3
Four LHC Experiments The
Petabyte to Exabyte Challenge
  • ATLAS, CMS, ALICE, LHCb: Higgs, new particles,
    Quark-Gluon Plasma, CP violation

Data stored: 40 Petabytes/year and up
CPU: 0.30 Petaflops and up
0.1 Exabyte (2007) to 1 Exabyte (2012?)
(1 EB = 10^18 bytes) for the LHC experiments
4
LHC Data Grid Hierarchy
[Diagram] CERN/outside resource ratio ~1:2;
Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1
  • Online System (experiment, ~1 PByte/sec raw) feeds
    Tier 0+1 at CERN (~700k SI95, ~1 PB disk, tape
    robot, HPSS) at 100-400 MBytes/sec
  • Tier 1 centers (FNAL, IN2P3 Center, INFN Center,
    RAL Center), linked at 2.5 Gbps
  • Tier 2 centers, linked at 2.5 Gbps; Tier 3:
    institutes (~0.25 TIPS) with physics data caches
  • Tier 4: workstations, at 0.1 to 10 Gbps
Tens of Petabytes by 2007-8. An Exabyte within ~5
years later.
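As a rough cross-check of the link speeds in this hierarchy, a minimal sketch (assuming a link runs fully utilized year-round, which real links do not) of how much data a single 2.5 Gbps Tier1 link can carry per year:

```python
# Sustained yearly capacity of a fully utilized 2.5 Gbps Tier1 link,
# to compare against the ~40 PB/year stored at CERN.
# Assumes 100% utilization year-round (an idealization, not a real link).
gbps = 2.5
bytes_per_sec = gbps * 1e9 / 8             # 0.3125 GBytes/sec
seconds_per_year = 365 * 24 * 3600
pb_per_year = bytes_per_sec * seconds_per_year / 1e15
print(f"A {gbps} Gbps link sustains ~{pb_per_year:.0f} PB/year")
# -> A 2.5 Gbps link sustains ~10 PB/year
```

So even an ideal single 2.5 Gbps link moves only a fraction of the yearly LHC data volume, which is one reason the hierarchy distributes the load across many centers and links.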
5
ICFA and Global Networks for HENP
  • National and International Networks, with
    sufficient (rapidly increasing) capacity and
    capability, are essential for
  • The daily conduct of collaborative work in both
    experiment and theory
  • Detector development and construction on a global
    scale; data analysis involving physicists from
    all world regions
  • The formation of worldwide collaborations
  • The conception, design and implementation of
    next generation facilities as global networks
  • Collaborations on this scale would never have
    been attempted, if they could not rely on
    excellent networks

6
ICFA and International Networking
  • ICFA Statement on Communications in Int'l HEP
    Collaborations of October 17, 1996. See
    http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
  • ICFA urges that all countries and institutions
    wishing to participate even more effectively and
    fully in international HEP Collaborations should
  • Review their operating methods to ensure they
    are fully adapted to remote participation
  • Strive to provide the necessary communications
    facilities and adequate international bandwidth

7
ICFA Network Task Force 1998 Bandwidth
Requirements Projection (Mbps)
NTF
100-1000 X bandwidth increase foreseen for
1998-2005. See the ICFA-NTF Requirements
Report: http://l3www.cern.ch/newman/icfareq98.html
8
ICFA Standing Committee on Interregional
Connectivity (SCIC)
  • Created by ICFA in July 1998 in Vancouver
    Following ICFA-NTF
  • CHARGE
  • Make recommendations to ICFA concerning the
    connectivity between the Americas, Asia and
    Europe (and network requirements of HENP)
  • As part of the process of developing these
    recommendations, the committee should
  • Monitor traffic
  • Keep track of technology developments
  • Periodically review forecasts of future
    bandwidth needs, and
  • Provide early warning of potential problems
  • Create subcommittees when necessary to meet the
    charge
  • The chair of the committee should report to ICFA
    once per year, at its joint meeting with
    laboratory directors (Feb. 2003)
  • Representatives: major labs, ECFA, ACFA, NA
    users, S. America

9
ICFA-SCIC Core Membership
  • Representatives from major HEP laboratories:
  • W. von Rüden (CERN)
  • Volker Guelzow (DESY), Vicky White (FNAL),
    Yukio Karita (KEK), Richard Mount (SLAC)
  • User representatives: Richard Hughes-Jones
    (UK), Harvey Newman (USA),
  • Dean Karlen (Canada)
  • For Russia: Slava Ilyin (MSU)
  • ECFA representatives:
  • Denis Linglin (IN2P3, Lyon), Federico Ruggieri
    (INFN Frascati)
  • ACFA representatives:
  • Rongsheng Xu (IHEP Beijing),
  • H. Park, D. Son (Kyungpook Nat'l University)
  • For South America: Sergio F. Novaes
    (University of Sao Paulo)

10
SCIC Sub-Committees
  • Web page: http://cern.ch/ICFA-SCIC/
  • Monitoring: Les Cottrell
    (http://www.slac.stanford.edu/xorg/icfa/scic-netmon),
    with Richard Hughes-Jones (Manchester), Sergio
    Novaes (Sao Paulo), Sergei Berezhnev (RUHEP),
    Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain
    Ravot (Caltech), Shawn McKee (Michigan)
  • Advanced Technologies: Richard Hughes-Jones, with
    Vladimir Korenkov (JINR, Dubna), Olivier
    Martin (CERN), Harvey Newman
  • The Digital Divide: Alberto Santoro (Rio, Brazil),
  • with Slava Ilyin, Yukio Karita, David O. Williams
  • Also: Dongchul Son (Korea), Hafeez Hoorani
    (Pakistan), Sunanda Banerjee (India), Vicky
    White (FNAL)
  • Key Requirements: Harvey Newman
  • Also: Charlie Young (SLAC)

11
Transatlantic Net WG (HN, L. Price)
Bandwidth Requirements

BW Requirements Increasing Faster Than
Moore's Law. See http://gate.hep.anl.gov/lprice/TAN
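To give the "faster than Moore's Law" statement some scale, a minimal sketch comparing the two growth factors over the 1998-2005 span of the NTF projection (the 18-month doubling time for Moore's Law is an assumption, not a figure from the slides):

```python
# Growth-factor comparison over 1998-2005 (7 years).
# The 18-month doubling time is an assumed Moore's-law rate.
years = 7.0
doubling_time = 1.5  # years, assumed
moore_factor = 2 ** (years / doubling_time)
print(f"Moore's-law factor over {years:.0f} years: ~{moore_factor:.0f}x")
# -> Moore's-law factor over 7 years: ~25x
print("NTF-projected bandwidth growth over the same span: 100-1000x")
```

Under that assumption, capacity growing with Moore's Law would fall one to two orders of magnitude short of the projected requirements.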
12
History: one large research site generated much of
the traffic: SLAC → IN2P3/RAL/INFN via
ESnet, France, Abilene and CERN.
Current traffic: ~400 Mbps (ESnet limitation).
Projections: 0.5 to 24 Tbps by 2012.
13
Tier0-Tier1 Link Requirements Estimate for
Hoffmann Report 2001
  • Tier1 ↔ Tier0 data flow for analysis: 0.5 - 1.0
    Gbps
  • Tier2 ↔ Tier0 data flow for analysis: 0.2 - 0.5
    Gbps
  • Interactive collaborative sessions (30 peak):
    0.1 - 0.3 Gbps
  • Remote interactive sessions (30 flows peak): 0.1
    - 0.2 Gbps
  • Individual (Tier3 or Tier4) data transfers:
    0.8 Gbps (limit to 10 flows of 5 MBytes/sec
    each)
  • TOTAL per Tier0 - Tier1 link: 1.7 - 2.8 Gbps
  • NOTE
  • Adopted by the LHC experiments; given in the
    upcoming Hoffmann Steering Committee Report as
    1.5 - 3 Gbps per experiment
  • Corresponds to a 10 Gbps baseline BW installed on
    the US-CERN link
  • Hoffmann Panel also discussed the effects of
    higher bandwidths
  • For example all-optical 10 Gbps Ethernet across
    WANs
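The component estimates above can be cross-checked with a short sketch (all figures in Gbps, copied from this slide):

```python
# Sum the per-link bandwidth components from the Hoffmann estimate.
# Values are (low, high) estimates in Gbps, as listed on this slide.
components = {
    "Tier1-Tier0 analysis flow": (0.5, 1.0),
    "Tier2-Tier0 analysis flow": (0.2, 0.5),
    "Interactive collaborative sessions": (0.1, 0.3),
    "Remote interactive sessions": (0.1, 0.2),
    "Individual (Tier3/4) transfers": (0.8, 0.8),  # capped at 10 flows
}
low = sum(lo for lo, hi in components.values())
high = sum(hi for lo, hi in components.values())
print(f"Total per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")
# -> Total per Tier0-Tier1 link: 1.7 - 2.8 Gbps
```

With four LHC experiments at the adopted 1.5 - 3 Gbps each, the 10 Gbps baseline on the US-CERN link follows directly.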

14
Tier0-Tier1 BW Requirements Estimate for
Hoffmann Report 2001
  • Does not include the more recent ATLAS data
    estimates:
  • 270 Hz at 10^33 instead of 100 Hz
  • 400 Hz at 10^34 instead of 100 Hz
  • 2 MB/event instead of 1 MB/event
  • Does not allow fast download to Tier 3/4 of
    small object collections
  • Example: download 10^7 events of AODs (10^4 bytes
    each) → 100 GBytes. At 5 MBytes/sec per person
    (above) that's ~6 hours!
  • This is still a rough, bottom-up, static, and
    hence conservative model.
  • A dynamic distributed DB or Grid system with
    caching, co-scheduling, and pre-emptive data
    movement may well require greater bandwidth
  • Does not include Virtual Data operations:
    derived data copies; data-description
    overheads
  • Further MONARC Computing Model Studies are Needed
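The AOD download example above works out as follows (a minimal sketch; event count, event size and per-person rate are the figures quoted on this slide):

```python
# Time to download 10^7 AOD events of 10^4 bytes each at 5 MBytes/sec.
events = 1e7           # events in the collection
event_size = 1e4       # bytes per AOD event
rate = 5e6             # bytes/sec (the 5 MBytes/sec per-person limit)
total_bytes = events * event_size        # 1e11 bytes = 100 GBytes
hours = total_bytes / rate / 3600
print(f"{total_bytes / 1e9:.0f} GBytes at 5 MBytes/sec: {hours:.1f} hours")
# -> 100 GBytes at 5 MBytes/sec: 5.6 hours
```

That is the "~6 hours" quoted on the slide, and it is the motivation for the fast-download requirement above.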

15
ICFA SCIC Meetings and Topics
  • Focus on the Digital Divide This Year
  • Identification of problem areas work on ways to
    improve
  • Network Status and Upgrade Plans in Each Country
  • Performance (Throughput) Evolution in Each
    Country, and Transatlantic
  • Performance Monitoring World-Overview (Les
    Cottrell, IEPM Project)
  • Specific technical topics (examples):
  • Bulk transfer, new protocols; collaborative
    systems, VoIP
  • Preparation of reports to ICFA (Lab Directors'
    Meetings)
  • Last report: World Network Status and Outlook,
    Feb. 2002
  • Next report: Digital Divide, Monitoring,
    Advanced Technologies; Requirements Evolution,
    Feb. 2003
  • Seven meetings in 2002; the December meeting at
    KEK was the 13th.

16
Network Progress in 2002 and Issues for Major
Experiments
  • Backbones major links advancing rapidly to 10
    Gbps range
  • Gbps end-to-end throughput data flows have been
    tested; will be in production soon (in 12
    to 18 months)
  • Transition to multi-wavelengths in 1-3 yrs. in
    the most favored regions
  • Network advances are changing the view of the
    networks' roles
  • Likely to have a profound impact on the
    experiments' Computing Models and bandwidth
    requirements
  • A more dynamic view: GByte to TByte data
    transactions; dynamic path provisioning
  • Net R&D driven by advanced integrated
    applications, such as Data Grids, that rely on
    seamless LAN and WAN operation
  • With reliable, quantifiable (monitored), high
    performance
  • All of the above will further open the Digital
    Divide chasm. We need to take action

17
ICFA SCIC: R&E Backbone and International Link
Progress
  • GEANT Pan-European Backbone
    (http://www.dante.net/geant)
  • Now interconnects >31 countries; many trunks at
    2.5 and 10 Gbps
  • UK SuperJANET Core at 10 Gbps
  • 2.5 Gbps NY-London, with 622 Mbps to ESnet and
    Abilene
  • France (IN2P3) 2.5 Gbps RENATER backbone from
    October 2002
  • Lyon-CERN Link Upgraded to 1 Gbps Ethernet
  • Proposal for dark fiber to CERN by end 2003
  • SuperSINET (Japan) 10 Gbps IP and 10 Gbps
    Wavelength Core
  • Tokyo to NY links: 2 X 2.5 Gbps started; peering
    with ESnet by Feb.
  • CAnet4 (Canada) Interconnect customer-owned
    dark fiber nets across Canada at 10 Gbps,
    started July 2002
  • Lambda-Grids by 2004-5
  • GWIN (Germany): 2.5 Gbps core; connect to US at
    2 X 2.5 Gbps; support for SILK Project satellite
    links to FSU republics
  • Russia: 155 Mbps links to Moscow (typ. 30-45 Mbps
    for science)
  • Moscow-Starlight link to 155 Mbps (US NSF and
    Russia support)
  • Moscow-GEANT and Moscow-Stockholm Links 155 Mbps

18
R&E Backbone and Int'l Link Progress
  • Abilene (Internet2) Upgrade from 2.5 to 10 Gbps
    in 2002
  • Encourage high throughput use for targeted
    applications: FAST
  • ESNET Upgrade to 10 Gbps As Soon as Possible
  • US-CERN:
  • To 622 Mbps in August; move to STARLIGHT
  • 2.5G research triangle STARLIGHT-CERN-NL
    from 8/02; to 10G in 2003. 10 Gbps
    SNV-Starlight link on loan from Level(3)
  • SLAC IN2P3 (BaBar)
  • Typically 400 Mbps throughput on US-CERN,
    Renater links
  • 600 Mbps Throughput is BaBar Target for Early
    2003 (with ESnet and Upgrade)
  • FNAL ESnet Link Upgraded to 622 Mbps
  • Plans for dark fiber to STARLIGHT, proceeding
  • NY-Amsterdam donation from Tyco, September 2002,
    arranged by IEEAF: 622 Mbps + 10 Gbps research
    wavelength
  • US National Light Rail proceeding; startup
    expected this year

19
(No Transcript)
20
2.5 → 10 Gbps backbone
>200 primary participants; all 50 states, D.C.
and Puerto Rico; 75 partner corporations and
non-profits; 23 state research and education nets
15 GigaPoPs support 70% of members
21

2003: OC192 and OC48 links coming into service.
Need to consider links to US HENP labs.
22
National R&E Network Example: Germany (DFN)
Transatlantic Connectivity 2002
  • 2 X OC48 NY-Hamburg and NY-Frankfurt
  • Direct Peering to Abilene (US) and Canarie
    (Canada)
  • UCAID said to be adding another 2 OC48s in a
    Proposed Global Terabit Research Network (GTRN)
  • Virtual SILK Highway Project (from 11/01): NATO
    (2.5 M) and partners (1.1 M)
  • Satellite Links to South Caucasus and
    Central Asia (8 Countries)
  • In 2001-2 (pre-SILK) BW 64-512 kbps
  • Proposed VSAT to get 10-50 X BW for same cost
  • See www.silkproject.org
  • Partners: CISCO, DESY, GEANT, UNDP, US
    State Dept., World Bank, UC London, Univ.
    Groningen

23
National Research Networks in Japan
  • SuperSINET
  • Started operation January 4, 2002
  • Support for 5 important areas: HEP, genetics,
    nano-technology, space/astronomy, GRIDs
  • Provides 10 λs (wavelengths)
  • 10 Gbps IP connection
  • Direct intersite GbE links
  • 9 universities connected
  • January 2003: two transpacific 2.5 Gbps
    wavelengths (to NY); Japan-US-CERN Grid
    testbed soon

[Map] SuperSINET topology: IP routers and WDM paths
linking NIFS, NIG, Nagoya U, Osaka U, Kyoto U, ICR
(Kyoto-U), U Tokyo, NII (Hitot.), ISAS, IMS and NAO
via Nagoya, Osaka and Tokyo, with a gateway to the
Internet.
24
SuperSINET Updated Map October 2002


25
APAN Links in Southeast Asia, January 15, 2003
