1
Multi-Gigabit Trials on GÉANT: Collaboration with Dante
Richard Hughes-Jones, The University of Manchester
www.hep.man.ac.uk/rich/ then "Talks"
2
Outline
  • What is GÉANT2
  • Why is it interesting?
  • 10 Gigabit Ethernet
  • UDP memory-2-memory flows
  • Options using GÉANT Development Network
  • 10 Gbit SDH Network
  • Options Using the GÉANT LightPath Service
  • PoP Location for Network tests
  • PCs and Current 10 Gbit Tests
  • PC Servers
  • Some Test results

3
GÉANT2 Topology
4
GÉANT2: The Convergence Solution
(Diagram: NREN access via the existing IP routers)
5
From PoS to Ethernet
  • More Economical Architecture
  • Highest Overall Network Availability
  • Flexibility (VLAN management)
  • Highest Network Performance (Latency)

6
What do we want to do?
  • Set up a 4 Gigabit Lightpath between GÉANT PoPs
  • Collaboration with Dante
  • PCs in their PoPs with 10 Gigabit NICs
  • VLBI Tests
  • UDP performance: throughput, jitter, packet loss, 1-way delay,
    stability (a measurement sketch follows this list)
  • Continuous (days) data flows: VLBI_UDP and
    multi-Gigabit TCP performance with current kernels
  • Experience for FPGA Ethernet packet systems
  • Dante Interests
  • Multi-Gigabit TCP performance
  • The effect of (Alcatel) buffer size on bursty TCP
    when using bandwidth-limited Lightpaths
  • Need a Collaboration Agreement

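The UDP metrics listed above are the sort measured by UDPmon-style memory-to-memory tests. A minimal sketch of such a test is given below; it is illustrative only, not the actual UDPmon or VLBI_UDP code, and the packet size, port and packet count are placeholder values (one-way delay is omitted since it needs synchronised clocks).

```python
# Minimal UDP memory-to-memory test sketch (illustrative only, not UDPmon/VLBI_UDP).
# Run "recv" on one PC, then the sender on the other; sizes/port/counts are placeholders.
import socket, struct, sys, time

PKT_SIZE = 8000        # UDP payload bytes (assumes a jumbo-frame path or IP fragmentation)
N_PKTS   = 100000      # packets per trial
WAIT_S   = 0.0         # inter-packet wait ("wait 0us" style test)
PORT     = 14196       # arbitrary placeholder port

def sender(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytearray(PKT_SIZE)
    for seq in range(N_PKTS):
        struct.pack_into("!I", payload, 0, seq)    # sequence number in the first 4 bytes
        s.sendto(payload, (host, PORT))
        if WAIT_S:
            time.sleep(WAIT_S)

def receiver():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", PORT))
    s.settimeout(5.0)                 # stop when the flow has finished
    got = nbytes = lost = 0
    jitter = 0.0                      # smoothed inter-arrival variation (RFC 3550 style)
    last_seq, t0, last_t, prev_gap = -1, None, None, None
    try:
        while True:
            data, _ = s.recvfrom(PKT_SIZE + 64)
            now = time.time()
            if t0 is None:
                t0 = now
            seq = struct.unpack_from("!I", data, 0)[0]
            if last_seq >= 0:
                lost += max(0, seq - last_seq - 1)          # gaps in the sequence numbers
                gap = now - last_t
                if prev_gap is not None:
                    jitter += (abs(gap - prev_gap) - jitter) / 16.0
                prev_gap = gap
            last_seq, last_t = seq, now
            got += 1
            nbytes += len(data)
    except socket.timeout:
        pass
    dur = (last_t - t0) if last_t and last_t > t0 else 1e-9
    print(f"received {got} pkts, ~{lost} lost, "
          f"{nbytes * 8 / dur / 1e9:.2f} Gbit/s, jitter {jitter * 1e6:.1f} us")

if __name__ == "__main__":
    receiver() if sys.argv[1] == "recv" else sender(sys.argv[2])
```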
7
Options Using the GÉANT Development Network
  • 10 Gigabit SDH backbone
  • Alcatel 1678 MCC
  • Node locations
  • London
  • Amsterdam
  • Paris
  • Prague
  • Frankfurt
  • Can do traffic routing, so long-RTT paths can be made
  • Available Dec/Jan 07
  • Less pressure for long-term tests

8
Options Using the GÉANT LightPaths
  • Set up a 4 Gigabit Lightpath between GÉANT PoPs
  • Collaboration with Dante
  • PCs in Dante PoPs
  • 10 Gigabit SDH backbone
  • Alcatel 1678 MCC
  • Node locations
  • Budapest
  • Geneva
  • Frankfurt
  • Milan
  • Paris
  • Poznan
  • Prague
  • Vienna
  • Can do traffic routing, so long-RTT paths can be made
  • Ideal: London to Copenhagen

9
4 Gigabit GÉANT LightPath
  • Example of a 4 Gigabit Lightpath Between GÉANT
    PoPs
  • PCs in Dante PoPs
  • 26 VC-4s: 4180 Mbit/s

10
  • PCs and Current Tests

11
Test PCs Have Arrived
  • Boston/Supermicro X7DBE
  • Two Dual Core Intel Xeon Woodcrest 5130
  • 2 GHz
  • Independent 1.33 GHz FSBs
  • 530 MHz FD Memory (serial)
  • Chipsets: Intel 5000P MCH (PCIe and memory), ESB2
    (PCI-X, GE etc.)
  • PCI
  • 3 x 8-lane PCIe buses
  • 3 x 133 MHz PCI-X
  • 2 x Gigabit Ethernet
  • SATA

12
Bandwidth Challenge wins Hat Trick
  • The maximum aggregate bandwidth was >151 Gbit/s
  • 130 DVD movies in a minute
  • serve 10,000 MPEG2 HDTV movies in real-time
  • 22 x 10 Gigabit Ethernet waves to the Caltech and
    SLAC/FERMI booths
  • In 2 hours transferred 95.37 TByte (average rates
    are worked out below)
  • 24 hours moved 475 TBytes
  • Showed real-time particle event analysis
  • SLAC / Fermi / UK Booth
  • 1 x 10 Gbit Ethernet to the UK over NLR and UKLight
  • transatlantic HEP disk-to-disk
  • VLBI streaming
  • 2 x 10 Gbit links to SLAC
  • rootd low-latency file access application for
    clusters
  • Fibre Channel StorCloud
  • 4 x 10 Gbit links to Fermi
  • dCache data transfers

(Plot: aggregate traffic into and out of the booth)
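As a back-of-the-envelope check on the volumes quoted above, the sustained average rates can be computed directly. A small sketch, assuming decimal TBytes and ignoring protocol overhead:

```python
# Rough average-rate check for the Bandwidth Challenge figures quoted above.
# Assumes decimal units (1 TByte = 1e12 bytes) and ignores protocol overhead.
def avg_gbps(tbytes, hours):
    return tbytes * 1e12 * 8 / (hours * 3600) / 1e9

print(f"95.37 TByte in 2 h -> {avg_gbps(95.37, 2):.0f} Gbit/s average")   # ~106 Gbit/s
print(f"475 TByte in 24 h  -> {avg_gbps(475, 24):.0f} Gbit/s average")    # ~44 Gbit/s
```

Both averages sit comfortably below the >151 Gbit/s peak aggregate quoted above.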
13
SC05 Seattle-SLAC 10 Gigabit Ethernet
  • 2 Lightpaths
  • Routed over ESnet
  • Layer 2 over Ultra Science Net
  • 6 Sun V20Z systems per λ
  • dCache remote disk data access
  • 100 processes per node
  • Node sends or receives
  • One data stream: 20-30 Mbit/s
  • Used Neterion NICs and Chelsio TOE
  • Data also sent to StorCloud using Fibre Channel
    links
  • Traffic on the 10 GE link for 2 nodes: 3-4 Gbit/s
    per node, 8.5-9 Gbit/s on the trunk

14
Lab Tests 10 Gigabit Ethernet
  • 10 Gigabit test lab being set up in Manchester
  • Cisco 7600
  • Cross-campus λ, <1 ms
  • Server-quality PCs
  • Neterion NICs
  • Chelsio being purchased
  • Back-to-back performance so far
  • SuperMicro X6DHE-G2
  • Kernel (2.6.13) and driver dependent!
  • One iperf TCP data stream: 4 Gbit/s (a test-driver
    sketch follows this list)
  • Two bi-directional iperf TCP data streams: 3.8 and
    2.2 Gbit/s
  • UDP disappointing
  • Propose to install Fedora Core 5 with kernel 2.6.17
    on the new Intel dual-core PCs

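A small wrapper like the sketch below can repeat such iperf memory-to-memory tests and log the output. It assumes the classic iperf (v2) options -c, -t, -P and -f; the host name, duration and stream counts are placeholders.

```python
# Sketch of a driver for repeated iperf (v2) TCP memory-to-memory tests.
# Assumes iperf2 options -c/-t/-P/-f; host name and test matrix are placeholders.
import subprocess, time

HOST = "remote-test-pc"          # placeholder: far-end PC already running "iperf -s"
DURATION_S = 20
STREAMS = (1, 2, 4)              # numbers of parallel TCP streams to try

for n in STREAMS:
    cmd = ["iperf", "-c", HOST, "-t", str(DURATION_S), "-P", str(n), "-f", "m"]
    print(">>", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.strip())             # iperf prints per-stream and summary Mbit/s
    time.sleep(5)                            # let the link and TCP state settle between runs
```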
15
10 Gigabit Ethernet TCP Data transfer on PCI-X
  • Sun V20z, 1.8 GHz to 2.6 GHz dual Opterons
  • Connect via 6509
  • XFrame II NIC
  • PCI-X mmrbc 4096 bytes, 66 MHz
  • Two 9000 byte packets back-to-back
  • Average rate 2.87 Gbit/s
  • Bursts of packets, length 646.8 µs
  • Gap between bursts 343 µs (a duty-cycle estimate
    follows this list)
  • 2 interrupts / burst

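The burst/gap structure also fixes the relation between the average rate and the rate while a burst is in progress. A rough duty-cycle estimate, using only the figures quoted above:

```python
# Duty-cycle estimate from the burst/gap figures quoted above (illustrative arithmetic only).
burst_us, gap_us, avg_gbps = 646.8, 343.0, 2.87

duty = burst_us / (burst_us + gap_us)       # fraction of time the link is actually sending
in_burst_gbps = avg_gbps / duty             # implied rate while a burst is in progress

print(f"duty cycle {duty:.2f}, implied in-burst rate {in_burst_gbps:.1f} Gbit/s")
# -> duty cycle ~0.65, in-burst rate ~4.4 Gbit/s
```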
16
10 Gigabit Ethernet UDP Data transfer on PCI-X
  • Sun V20z, 1.8 GHz to 2.6 GHz dual Opterons
  • Connect via 6509
  • XFrame II NIC
  • PCI-X mmrbc 2048 bytes, 66 MHz
  • One 8000 byte packet
  • 2.8 µs for CSRs
  • 24.2 µs data transfer: effective rate 2.6 Gbit/s
    (the arithmetic is sketched below)
  • 2000 byte packets, wait 0 µs: 200 ms pauses
  • 8000 byte packets, wait 0 µs: 15 ms between data
    blocks

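The 2.6 Gbit/s effective rate follows directly from the measured transfer time; the arithmetic, including the extra effect of the CSR accesses, is sketched below (illustrative only):

```python
# Effective-rate arithmetic for the PCI-X UDP transfer figures quoted above.
payload_bytes = 8000
csr_us        = 2.8      # time spent on CSR accesses before the data transfer
transfer_us   = 24.2     # measured data-transfer time on the PCI-X bus

rate_data_only = payload_bytes * 8 / (transfer_us * 1e-6) / 1e9
rate_with_csrs = payload_bytes * 8 / ((transfer_us + csr_us) * 1e-6) / 1e9

print(f"data transfer only: {rate_data_only:.2f} Gbit/s")   # ~2.6 Gbit/s, as on the slide
print(f"including CSRs:     {rate_with_csrs:.2f} Gbit/s")   # ~2.4 Gbit/s
```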
17
  • Any Questions?

18
More Information Some URLs 1
  • UKLight web site: http://www.uklight.ac.uk
  • MB-NG project web site: http://www.mb-ng.net/
  • DataTAG project web site: http://www.datatag.org/
  • UDPmon / TCPmon kit writeup:
    http://www.hep.man.ac.uk/rich/net
  • Motherboard and NIC Tests:
    http://www.hep.man.ac.uk/rich/net/nic/GigEth_tests_Boston.ppt
    http://datatag.web.cern.ch/datatag/pfldnet2003/
  • Performance of 1 and 10 Gigabit Ethernet Cards with
    Server Quality Motherboards, FGCS Special Issue, 2004
  • http://www.hep.man.ac.uk/rich/
  • TCP tuning information may be found at:
    http://www.ncne.nlanr.net/documentation/faq/performance.html
    http://www.psc.edu/networking/perf_tune.html
  • TCP stack comparisons: Evaluation of Advanced TCP
    Stacks on Fast Long-Distance Production Networks,
    Journal of Grid Computing, 2004
  • PFLDnet: http://www.ens-lyon.fr/LIP/RESO/pfldnet2005/
  • Dante PERT: http://www.geant2.net/server/show/nav.00d00h002

19
More Information Some URLs 2
  • Lectures, tutorials etc. on TCP/IP
  • www.nv.cc.va.us/home/joney/tcp_ip.htm
  • www.cs.pdx.edu/jrb/tcpip.lectures.html
  • www.raleigh.ibm.com/cgi-bin/bookmgr/BOOKS/EZ306200/CCONTENTS
  • www.cisco.com/univercd/cc/td/doc/product/iaabu/centri4/user/scf4ap1.htm
  • www.cis.ohio-state.edu/htbin/rfc/rfc1180.html
  • www.jbmelectronics.com/tcp.htm
  • Encyclopaedia
  • http://www.freesoft.org/CIE/index.htm
  • TCP/IP Resources
  • www.private.org.il/tcpip_rl.html
  • Understanding IP addresses
  • http://www.3com.com/solutions/en_US/ncs/501302.html
  • Configuring TCP (RFC 1122)
  • ftp://nic.merit.edu/internet/documents/rfc/rfc1122.txt
  • Assigned protocols, ports etc. (RFC 1010)
  • http://www.es.net/pub/rfcs/rfc1010.txt and /etc/protocols

20
  • Backup Slides

21
Bandwidth on Demand: Our Long-Term Vision
(Diagram: applications, e.g. GRID, connected over Ethernet through 1678 MCC switches)
22
  • SuperComputing

23
10 Gigabit Ethernet UDP Throughput
  • 1500 byte MTU gives 2 Gbit/s
  • Used 16144 byte MTU, max user length 16080
    (packet-rate arithmetic follows this list)
  • DataTAG Supermicro PCs
  • Dual 2.2 GHz Xeon CPU, FSB 400 MHz
  • PCI-X mmrbc 512 bytes
  • wire rate throughput of 2.9 Gbit/s
  • CERN OpenLab HP Itanium PCs
  • Dual 1.0 GHz 64 bit Itanium CPU, FSB 400 MHz
  • PCI-X mmrbc 4096 bytes
  • wire rate of 5.7 Gbit/s
  • SLAC Dell PCs
  • Dual 3.0 GHz Xeon CPU, FSB 533 MHz
  • PCI-X mmrbc 4096 bytes
  • wire rate of 5.4 Gbit/s

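Part of the reason the large MTU helps is the per-packet cost: for a given throughput, the packet (and hence per-packet setup) rate falls in proportion to the packet size. An illustrative calculation using the payload sizes quoted above (1472 bytes is the usual UDP payload for a 1500 byte MTU):

```python
# Packet rate needed for a given UDP throughput at different payload sizes (illustrative).
def pkt_rate(gbit_per_s, payload_bytes):
    return gbit_per_s * 1e9 / (payload_bytes * 8)   # packets per second, payload only

for name, size, rate in [("1500 B MTU (~1472 B payload)", 1472, 2.0),
                         ("16144 B MTU (16080 B payload)", 16080, 5.7)]:
    print(f"{name}: {rate} Gbit/s needs ~{pkt_rate(rate, size):,.0f} packets/s")
# ~170k pkt/s at the standard MTU vs ~44k pkt/s at 16144 bytes,
# so per-packet overheads matter far less with the larger MTU.
```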
24
10 Gigabit Ethernet Tuning PCI-X
  • 16080 byte packets every 200 µs
  • Intel PRO/10GbE LR Adapter
  • PCI-X bus occupancy vs mmrbc (a simple occupancy
    model is sketched below)
  • Measured times
  • Times based on PCI-X times from the logic
    analyser
  • Expected throughput 7 Gbit/s
  • Measured 5.7 Gbit/s
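Bus occupancy versus mmrbc can be illustrated with a very simple model: each packet is moved as ceil(size / mmrbc) PCI-X read bursts, and each burst pays a fixed setup cost. The bus parameters and per-burst overhead below are illustrative assumptions, not the values measured with the logic analyser.

```python
# Very simple PCI-X occupancy model: per-packet time = data time + per-burst setup overhead.
# The setup overhead and bus width/clock are illustrative assumptions, not measured values.
import math

BUS_BYTES_PER_CLK = 8        # 64-bit PCI-X bus
BUS_CLK_MHZ       = 133.0
SETUP_CLKS        = 20       # assumed arbitration + address-phase cost per read burst

def effective_gbps(pkt_bytes, mmrbc):
    bursts    = math.ceil(pkt_bytes / mmrbc)         # read bursts needed per packet
    data_clks = pkt_bytes / BUS_BYTES_PER_CLK
    total_us  = (data_clks + bursts * SETUP_CLKS) / BUS_CLK_MHZ
    return pkt_bytes * 8 / (total_us * 1e-6) / 1e9

for mmrbc in (512, 1024, 2048, 4096):
    print(f"mmrbc {mmrbc:4d} B -> ~{effective_gbps(16080, mmrbc):.1f} Gbit/s on the bus")
# Larger mmrbc means fewer bursts per packet, so less setup overhead and higher occupancy.
```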