Title: Networking for LHC
1 Networking for LHC
- Danish Grid Forum
- 13th September 2005
- David Foster
- Communication Systems Group Leader
2 Communications Systems Group
- Mission: provision of all the electronic communications services of the laboratory:
  - Internal campus and desktop networks.
  - External connectivity to all collaborating institutes.
  - Telephony, fixed and mobile (GSM).
  - Access control systems for the accelerator.
  - VHF communications (Fire Service).
- This includes:
  - Design and implementation of the network infrastructure.
  - Network services: configuration management, DNS, DHCP, etc.
  - VoIP, PABX redundancy, red phones, leaky feeder, etc.
  - Contracts for cabling and network interventions.
  - Provisioning of all network equipment.
  - Operational problem determination and resolution.
3 Leaky Feeder (CNGS)
4 A large campus infrastructure
- Enterasys Xpedition routers: 14 XP-8600, 50 XP-8000
- 1200 subnets
- 650 switches, 15,000 ports
- 860 Ethernet hubs, 20,000 ports
- 400 servers with Gigabit Ethernet attachment, mainly in the computer centre
- 15,000 active connections
- 35,000 sockets; 1200 km of UTP cable
- 170 star points (from 20 to 1000 outlets)
- 2500 km of fibre
- 1500 requests for moves, adds and changes per month
- Multi-vendor site using only standards
- A major upgrade will start in 2006 and run for three years
5 Wireless
- On-demand service
- About 150 base stations in operation:
  - Meeting rooms
  - Public areas, hotel
  - Some experimental areas
- But many operational issues (interference) with private installations
6 LHC Accelerator Infrastructure
- Technical network backbone fully operational
- 4500 Ethernet ports installed (including PS, SPS)
- Fast network expansion under way (hundreds of switches this year)
- VDSL is available in some areas of the LHC tunnel
- Leaky feeder for GSM communications being installed
7 Networking for the Experiments
- All four LHC experiments use IT/CS services for:
  - Network cabling: work has started in many buildings, with a lot of plugs
  - Network equipment selection
  - Installation
  - Operation
- Adaptation of the IT/CS management services will be required to accommodate the new requirements
- Network device configuration, cabling, fibre, monitoring and user access are all based on management tools
8 CNIC: towards network partitioning
- Computing and Network Infrastructure for Controls
- Protect fragile equipment against attacks or unexpected traffic
- Enforce connection and connectivity policies
- Several domains: GPN, TN, ALICE, ATLAS, CMS, LHCb
- Access control for every device, based on credentials
- Inter-domain communication rules
- Intra-domain traffic restrictions
- IT/CS will implement the network framework and tools and enforce the agreed policies, which will be decided by the other services and management
- Implementation:
  - Pilot by end 2005 on the TN
  - Big impact on the end users
- http://cern.ch/wg-cnic/
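The inter- and intra-domain rules above amount to a policy lookup keyed on (source domain, destination domain). A minimal sketch of such a check, using the domain names from the slide but with an invented rule table (the actual CNIC policy is decided by the services and management, not shown here):

```python
# Hypothetical sketch of a CNIC-style inter-domain connection check.
# Domain names (GPN, TN, ...) come from the slide; the rule table
# itself is an illustrative assumption, not the real policy.

ALLOWED = {
    ("GPN", "GPN"): True,   # general-purpose network: open internally
    ("TN", "TN"): True,     # technical network: open internally
    ("GPN", "TN"): False,   # campus hosts may not reach controls gear
    ("TN", "GPN"): False,
}

def connection_allowed(src_domain: str, dst_domain: str) -> bool:
    """Return True if the policy table permits src -> dst traffic.
    Unlisted domain pairs are denied by default (fail closed)."""
    return ALLOWED.get((src_domain, dst_domain), False)

print(connection_allowed("GPN", "TN"))  # False: blocked by policy
print(connection_allowed("TN", "TN"))   # True: intra-domain traffic
```

Failing closed on unlisted pairs matches the protective intent: new or unknown domains get no connectivity until a rule is agreed.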
9 Networking for the LCG
- LCG will require:
  - Several thousand Gigabit ports in the computer centre
  - Hundreds of Ten Gigabit Ethernet connections in the computer centre
  - 10 Ten Gigabit Ethernet links to the Tier-1s (WAN)
  - 8 Ten Gigabit Ethernet links to the experiments
- Challenges:
  - Operation of the system as ONE entity
  - Ensuring the security and protection of the system
10 LCG cluster network
[Diagram: LCG cluster network. A 2.4 Tbps core connects the WAN, the experimental areas and the campus network through a distribution layer to roughly 2000 tape and disk servers and 6000 CPU servers. Links are Gigabit Ethernet, Ten Gigabit Ethernet and double Ten Gigabit Ethernet; one double link is not shown.]
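A back-of-envelope check on the figures in the diagram shows why the core must run at multi-terabit scale. The per-server attachment speed of 1 Gb/s is an assumption for illustration (the slide only states Gigabit Ethernet attachment for servers in general):

```python
# Rough capacity check for the LCG cluster network.
# Server counts and the 2.4 Tbps core come from the slide;
# the uniform 1 Gb/s per-server attachment is an assumption.

cpu_servers = 6000
storage_servers = 2000          # tape and disk servers
gbps_per_server = 1             # assumed Gigabit Ethernet NIC

edge_demand_tbps = (cpu_servers + storage_servers) * gbps_per_server / 1000
core_tbps = 2.4                 # quoted core capacity

oversubscription = edge_demand_tbps / core_tbps
print(f"edge demand: {edge_demand_tbps} Tb/s")                 # 8.0 Tb/s
print(f"oversubscription vs core: {oversubscription:.1f}:1")   # 3.3:1
```

An oversubscription of roughly 3:1 between the edge and the core is plausible for a batch-analysis cluster, where not all servers transmit at line rate simultaneously.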
11 CERN's external networking in a nutshell
- Four main connections to the Internet:
  - SWITCH (the Swiss national research and education network, NREN): 10 Gb/s
  - GÉANT (pan-European backbone co-funded by the European Union and European NRENs): 10 Gb/s. There will be 7-14 dedicated 10 Gb/s circuit connections to the Tier-1 centres by early 2006 as part of GÉANT2.
  - US Line Consortium (USLIC): a 10 Gb/s connection to the StarLight Internet exchange point in Chicago. This will grow to 2 or 4 connections by mid-2006 and become the main infrastructure for LHC traffic to the USA (LHCNet).
  - CERN's neutral Internet Exchange Point (CIXP): 1 → 10 Gb/s. A distributed neutral Internet exchange point in Geneva, in collaboration with Telehouse.
- Other mission/project-oriented circuits:
  - IN2P3: dark fibre (multi-10 Gb/s) to CC-IN2P3 (Lyon), expected to be installed in summer 2005
  - WHO (World Health Organization)
  - Part of the Commodity Internet Consortium (CIC) with CRI74 (Département de la Haute-Savoie)
12 CERN's Neutral Internet Exchange Point (CIXP)
- Telecom operators and dark fibre providers:
  - Cablecom, COLT, France Telecom, FibreLac/Intelcom, Global Crossing, LDCom, Deutsche Telekom/T-Systems, Interoute, KPN, MCI/WorldCom, SIG, Sunrise, Swisscom (Switzerland), Swisscom (France), Thermelec, VTX.
- Internet Service Providers include:
  - Infonet, AT&T Global Network Services, Cablecom, Callahan, COLT, DFI, Deckpoint, Deutsche Telekom, Easynet, FibreLac, France Telecom/OpenTransit, Global-One, InterNeXt, IS-Productions, LDCom, Nexlink, PSI Networks (IProlink), MCI/WorldCom, Petrel, SIG, Sunrise, IP-Plus, VTX/Smartphone, UUnet, Vianetworks.
- Others:
  - SWITCH, Swiss Confederation, Conseil Général de la Haute-Savoie.
[Diagram: multiple ISPs and telecom operators interconnect at the CIXP; the CERN LAN sits behind the CERN firewall.]
14 GLIF Map
- The Global Lambda Integrated Facility (GLIF). Map from GLIF.
15 HENP Bandwidth Roadmap for Major Links (in Gbps)
- Continuing trend: 1000 times bandwidth growth per decade.
- HEP: co-developer as well as application driver of global networks.
Harvey Newman
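The roadmap's "1000 times per decade" headline implies a specific annual growth rate; the short calculation below makes that implicit figure explicit:

```python
# Arithmetic behind "1000x bandwidth growth per decade":
# the implied annual growth factor and the doubling time.
import math

growth_per_decade = 1000
annual_factor = growth_per_decade ** (1 / 10)
print(f"annual growth factor: {annual_factor:.2f}")   # ~2.00

doubling_years = math.log(2) / math.log(annual_factor)
print(f"doubling time: {doubling_years:.1f} years")   # ~1.0
```

In other words, 1000× per decade is equivalent to roughly doubling the bandwidth of major links every year.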
16 International Workshops on HEP Networking, Grids and Digital Divide Issues for Global e-Science (ICFA-SCIC)
- Workshop goals:
  - Review the current status, progress and barriers to effective use of major national, continental and transoceanic networks.
  - Review progress, strengthen opportunities for collaboration, and explore the means to deal with key issues in Grid computing and Grid-enabled data analysis, for high energy physics and other fields of data-intensive science.
  - Exchange information and ideas, and formulate plans to develop solutions to specific problems related to the Digital Divide, with a focus on the Asia-Pacific region, as well as Latin America, Russia and Africa.
  - Continue to advance a broad program of work on reducing or eliminating the Digital Divide, and ensuring global collaboration, as related to all of the above aspects.
http://icfa-scic.web.cern.ch/ICFA-SCIC/
Harvey Newman
17 PingER World View from SLAC, CERN
- S.E. Europe and Russia: catching up. Latin America and China: keeping up. India, the Middle East and Africa: falling behind.
- Central Asia, Russia, S.E. Europe, Latin America, the Middle East and China are 4-7 years behind; India and Africa are 7-8 years behind.
R. Cottrell
18 21st Century Opportunities
- The increase in cost-effective networking has opened up many opportunities for collaborative science.
- Sharing of computer-centre resources provides increased involvement and engagement of the world science community.
- Engineering and science investment is now providing real network capabilities for the science community.
- The Grid is providing the vehicle, tying together the will (collaborative science) and the way (networks) to produce a global infrastructure for science and engineering.