Title: Grant Visit 2002
1. Grants 2002
Richard Hughes-Jones, University of Manchester
2. Evaluating Motherboards and Gigabit NICs
- PCI Activity
- Logic Analyzer with
- PCI Probe cards in sending PC
- Gigabit Ethernet Fiber Probe Card
- PCI Probe cards in receiving PC
- Latency
- Round-trip times of UDP frames
- Slope gives the sum of the individual end-to-end data transfer rates
- Histograms
- UDP Throughput
- Send a controlled stream of UDP frames spaced at regular intervals (sketched below)
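A minimal sketch of these two measurement methods, assuming a simple UDP echo server on the remote host (the host name and port here are hypothetical placeholders):

    import socket, time

    HOST, PORT = "remote.example.org", 5001   # hypothetical echo endpoint

    def rtt_us(sock, size):
        # One round trip of a UDP frame of `size` bytes, in microseconds.
        t0 = time.perf_counter()
        sock.send(bytes(size))
        sock.recv(65536)                      # wait for the echoed frame
        return (time.perf_counter() - t0) * 1e6

    def send_stream(sock, size, wait_us, count):
        # Send a controlled stream of UDP frames spaced wait_us apart.
        payload = bytes(size)
        for _ in range(count):
            t_next = time.perf_counter() + wait_us * 1e-6
            sock.send(payload)
            while time.perf_counter() < t_next:   # spin for precise spacing
                pass

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect((HOST, PORT))
    # Latency: fit RTT against frame size; the slope is the sum of the
    # per-byte transfer times of the hops the data crosses.
    for size in range(64, 1473, 128):
        print(size, rtt_us(sock, size))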
3. Gigabit Ethernet Probe Card Prototype
4. Latency Tests: B2B PCs with SysKonnect SK-9843 Gigabit Ethernet
- NIC SysKonnect SK-9843
- Motherboard SuperMicro 370DLE
- Chipset ServerWorks III LE
- CPU PIII 800 MHz, PCI 64 bit 33 MHz
- RedHat 7.1 Kernel 2.4.14
- Latency low and well behaved
- Latency slope 0.0252 µs/byte
- Expect: PCI 0.00758 + GigE 0.008 + PCI 0.00758 = 0.0236 µs/byte (see the worked check below)
- Collaboration with industry: Boston Ltd., Watford, UK
- www.hep.man.ac.uk/rich/net/nic/GigEth_tests_Boston.ppt
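A worked version of the "Expect" arithmetic above: the per-byte cost of a transfer is the sum of the per-byte times of each bus the data crosses. The GigE figure follows from the line rate; the PCI per-byte cost is the figure quoted on the slide.

    GIGE_US_PER_BYTE = 1e6 / (1e9 / 8)    # 1 Gbit/s -> 0.008 us per byte
    PCI_US_PER_BYTE  = 0.00758            # slide figure for the PCI bus

    slope = PCI_US_PER_BYTE + GIGE_US_PER_BYTE + PCI_US_PER_BYTE
    print(f"{slope:.4f} us/byte")         # 0.0232, close to the 0.0236 quoted;
                                          # small extra per-byte costs presumably
                                          # account for the difference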
5. UDP Throughput: SysKonnect SK-9843 B2B
- Motherboard SuperMicro 370DLE, Chipset ServerWorks III LE
- CPU PIII 800 MHz, PCI 64 bit 33 MHz
- RedHat 7.1 Kernel 2.4.14
- Max throughput 690 Mbit/s
- No packet loss, except during the throughput drop
6. PCI Activity: SysKonnect SK-9843
- Motherboard SuperMicro 370DLE, Chipset ServerWorks III LE
- CPU PIII 800 MHz, PCI 64 bit 66 MHz
- RedHat 7.1 Kernel 2.4.14
- SK300
- 1400 bytes sent
- Wait 100 µs
- 8 µs for send or receive
[Logic analyser traces: send data, memory → NIC; send PCI; receive PCI; Gigabit Ethernet frame, 11.6 µs on the wire (checked below); receive data, NIC → memory]
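A hedged check of the 11.6 µs frame time seen on the analyser: a 1400-byte UDP payload plus UDP (8), IP (20) and Ethernet header/CRC (18) bytes, at 8 ns per byte on a 1 Gbit/s wire.

    PAYLOAD_BYTES  = 1400
    OVERHEAD_BYTES = 8 + 20 + 18    # UDP + IP + Ethernet header/CRC
    NS_PER_BYTE    = 8              # 10^9 bit/s = 8 ns per byte

    frame_us = (PAYLOAD_BYTES + OVERHEAD_BYTES) * NS_PER_BYTE / 1000
    print(f"{frame_us:.1f} us")     # 11.6 us
    # With a requested wait below this wire time (next slide, SK303:
    # 10 us), the frames go out back-to-back at line speed.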
7. PCI Activity: SysKonnect SK-9843
- Motherboard SuperMicro 370DLE, Chipset ServerWorks III LE
- CPU PIII 800 MHz, PCI 64 bit 66 MHz
- RedHat 7.1 Kernel 2.4.14
- SK301
- 1400 bytes sent
- Wait 20 µs
- SK303
- 1400 bytes sent
- Wait 10 µs
- Frames are back-to-back
- Can drive at line speed
- Cannot go any faster!
[Logic analyser trace: Gigabit Ethernet frames back-to-back]
8. Latency Tests: B2B PCs with Intel Pro/1000 Gigabit Ethernet
- NIC Intel Pro/1000
- Motherboard SuperMicro 370DLE
- Chipset ServerWorks III LE
- CPU PIII 800 MHz, PCI 64 bit 66 MHz
- RedHat 7.1 Kernel 2.4.14
- Latency high but well behaved
- Indicates interrupt coalescence
- Slope 0.0187 µs/byte
- Expect: PCI 0.00188 + GigE 0.008 + PCI 0.00188 = 0.0118 µs/byte
9. The SuperMicro P4DP6 Motherboard
- Dual Xeon Prestonia (2 CPUs/die)
- 400 MHz front-side bus
- Intel E7500 Chipset
- 6 PCI-X slots
- 4 independent PCI buses
- Can select
- 64 bit 66 MHz PCI
- 100 MHz PCI-X
- 133 MHz PCI-X
- 2 × 100 Mbit Ethernet
- Adaptec AIC-7899W dual channel SCSI
- UDMA/100 bus master EIDE channels
- Burst data transfer rates of 100 MB/s
- Collaboration
- Boston Ltd. (Watford): SuperMicro motherboards, CPUs, Intel GE NICs
- Brunel University: Peter Van Santen
- University of Manchester: Richard Hughes-Jones
10. Network Monitoring Components
11. UDP Throughput: Intel Pro/1000 on B2B P4DP6
- Motherboard SuperMicro P4DP6, Chipset Intel E7500 (Plumas)
- CPU Dual Xeon Prestonia (2 CPUs/die) 2.2 GHz, Slot 4: PCI 64 bit 66 MHz
- RedHat 7.2 Kernel 2.4.14
- Max throughput 950 Mbit/s
- Some throughput drop for packets > 1000 bytes
- Loss is NIC dependent
- Loss not due to user → kernel moves
- Traced to discards in the receiving IP layer ??? (one way to confirm is sketched below)
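A minimal sketch of how such IP-layer discards can be confirmed on a Linux receiver: the kernel exposes the SNMP IP counters in /proc/net/snmp, and the InDiscards field counts datagrams the IP layer dropped.

    def ip_in_discards(path="/proc/net/snmp"):
        with open(path) as f:
            lines = [l.split() for l in f if l.startswith("Ip:")]
        header, values = lines[0], lines[1]   # first Ip: line names the fields
        return int(values[header.index("InDiscards")])

    before = ip_in_discards()
    # ... run the UDP stream test here ...
    after = ip_in_discards()
    print(f"IP-layer discards during test: {after - before}")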
12. PCI Activity: Intel Pro/1000
- Motherboard SuperMicro 370DLE, Chipset ServerWorks III LE
- CPU PIII 800 MHz, PCI 64 bit 66 MHz
- RedHat 7.1 Kernel 2.4.14
- IT66M212
- 1400 bytes sent
- Wait 11 µs
- 4.7 µs on send PCI bus
- PCI bus 45% occupancy
- 3.25 µs on PCI for data recv
- IT66M212
- 1400 bytes sent
- Wait 11 µs
- Packets lost
- Action of pause packet?
13. PCI Activity: Intel Pro/1000 on P4DP6
- Motherboard SuperMicro P4DP6, Chipset Intel E7500 (Plumas)
- CPU Dual Xeon Prestonia (2 CPUs/die) 2.2 GHz, Slots 3-5: PCI 64 bit 66 MHz
- RedHat 7.2 Kernel 2.4.14
- ITP4010
- 1400 bytes sent
- Wait 8 µs
- 5.14 µs on send PCI bus
- PCI bus 68% occupancy (arithmetic sketched below)
- 2 µs on PCI for data recv
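A hedged sketch of the occupancy arithmetic behind slides 12 and 13: the send-side PCI bus is busy for the DMA of each packet out of every packet interval. The interval is taken here as the requested wait; the true interval differs slightly, which presumably accounts for the gap to the quoted figure.

    def pci_occupancy_pct(busy_us, interval_us):
        # Fraction of each packet interval the bus spends transferring data.
        return 100.0 * busy_us / interval_us

    print(f"{pci_occupancy_pct(5.14, 8.0):.0f}%")   # ~64%, quoted as 68%
    print(f"{pci_occupancy_pct(4.7, 11.0):.0f}%")   # ~43%, quoted as 45%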
14. European Topology: NRNs and GEANT
15. Gigabit UDP Throughput on the Production WAN
- Manc-RAL: 570 Mbit/s
- 91% of the 622 Mbit access link between SuperJANET4 and RAL
- 1472 bytes, propagation 21 µs (checked below)
- Manc-UvA (SARA): 750 Mbit/s
- via SuperJANET4, GEANT, SURFnet
- Manc-CERN: 460 Mbit/s
- The CERN PC had a 32-bit PCI bus
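A quick numerical check of two figures on this slide, taking the quoted rates at face value; the ~21 µs quoted for 1472 bytes matches the frame's serialisation time at the achieved rate.

    RATE_MBIT   = 570     # achieved Manc-RAL UDP throughput
    LINK_MBIT   = 622     # SuperJANET4-RAL access link capacity
    FRAME_BYTES = 1472

    # bits divided by Mbit/s comes out directly in microseconds
    print(f"utilisation: {100 * RATE_MBIT / LINK_MBIT:.1f} %")   # 91.6 %
    print(f"frame time : {FRAME_BYTES * 8 / RATE_MBIT:.1f} us")  # 20.7 us, ~21 us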
16. Gigabit TCP Throughput on the Production WAN
- Throughput vs TCP buffer size
- TCP window sizes in Mbytes calculated from RTT × bandwidth (see the sketch below)
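A minimal bandwidth-delay product sketch: the TCP window needed to keep a path full is RTT × bandwidth. The 180 ms RTT below is an assumed transatlantic value, chosen because it reproduces the 14 Mbytes quoted on slide 19 for the 622 Mbit path.

    def window_mbytes(rtt_ms, bw_mbit):
        # ms * Mbit/s / 8 / 1000 -> Mbytes
        return rtt_ms * bw_mbit / 8 / 1000

    print(f"{window_mbytes(180, 622):.0f} Mbytes")   # ~14 Mbytes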
17. Gigabit TCP on the Production WAN: Man-CERN
- Throughput vs n-streams
- Default buffer size: slope 25 Mbit/s/stream up to 9 streams, then 15 Mbit/s/stream
- With larger buffers the rate of increase per stream is larger
- Plateaus at about 7 streams, giving a total throughput of 400 Mbit/s
18. UDP Throughput: SLAC - Man
- SLAC-Manc: 470 Mbit/s
- 75% of the 622 Mbit access link
- SuperJANET4 peers with ESnet at 622 Mbit in NY
19. Gigabit TCP Throughput: Man-SLAC
- Throughput vs n-streams
- Much less than for the European links
- Buffer required: RTT × BW (622 Mbit) = 14 Mbytes
- With buffers larger than the default, the rate of increase per stream is 5.4 Mbit/s/stream
- No plateau
20. UDPmon: The Works
- How it works
- The Zero_stats request also provides an interlock against concurrent tests (sketched below)
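The slide does not spell the protocol out; this is a hypothetical sketch of the interlock idea only (class and method names invented here): the monitor honours one Zero_stats request at a time, so a second concurrent test is refused rather than corrupting the counters.

    import threading

    class Monitor:
        def __init__(self):
            self._lock = threading.Lock()
            self.stats = {"received": 0, "lost": 0}

        def zero_stats(self):
            # Handle a Zero_stats request: claim the monitor and reset the
            # counters. Returns False (request refused) if a test is running.
            if not self._lock.acquire(blocking=False):
                return False
            self.stats = {"received": 0, "lost": 0}
            return True

        def end_test(self):
            # Release the interlock so the next tester can run.
            self._lock.release()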
21. UDPmon and RIPE: MAN-SARA, from 25 May 02
[Plots: UDPmon loss and throughput (Mbit/s), MAN-UvA, ~750 Mbit/s; RIPE one-way delay, UvA → RAL (ms); RIPE one-way delay, RAL → UvA (ms)]
22. iperf, PingER, UDPmon: UK-Lyon, 25 May 02 to 04 Jun 02
[Plots: UDPmon loss and throughput, Man → Lyon, ~140 Mbit/s; ping Man → Lyon, ~40 ms RTT; iperf TCP throughput, Man → Lyon, ~47 Mbit/s with a 1048576-byte buffer]
23. Packet Loss: Where?
- Intel Pro/1000 on 370DLE
- 1472-byte packets
- Expected loss in the transmitter!
[Diagram: packet counters along the path: N_Gen (sender), N_Transmit (sending NIC), N_Received (receiving NIC), IpDiscards (receiving IP layer), N_Lost; compared in the sketch below]
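A minimal sketch of the accounting the diagram implies: comparing the counters at each stage localises where packets disappear. Counter names follow the slide; how each counter is read (application totals, NIC statistics, the IP-layer InDiscards shown earlier) is left out.

    def locate_loss(n_gen, n_transmit, n_received, ip_discards, n_delivered):
        # Differences between adjacent counters attribute the loss to a stage.
        return {
            "lost in sender":        n_gen - n_transmit,
            "lost on the wire":      n_transmit - n_received,
            "discarded by IP layer": ip_discards,
            "lost above IP":         n_received - ip_discards - n_delivered,
        }

    # Example: all loss in the transmitter, as the slide expects.
    print(locate_loss(n_gen=10000, n_transmit=9990, n_received=9990,
                      ip_discards=0, n_delivered=9990))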
24. Interrupt Coalescence: Latency
- Intel Pro/1000 on 370DLE, 800 MHz CPU
25. Interrupt Coalescence: Throughput
26. E-science core project
- Project to investigate and pilot:
- End-to-end traffic engineering and management over multiple administrative domains: MPLS in the core, DiffServ at the edges
- Managed bandwidth and Quality-of-Service provision (Robin T)
- High performance, high bandwidth data transfers (Richard HJ)
- Demonstrate end-to-end network services to CERN using Dante and EU-DataGrid, and to the US using DataTAG
- Partners: Cisco, CLRC, Manchester, UCL, UKERNA, plus Lancaster and Southampton (IPv6)
- Status
- Project is running, with people in post at Manchester and UCL
- Project tasks have been defined; detailed planning in progress
- Kit list for the routers given to Cisco
- Test PCs ordered and delivered
- UKERNA organising core network and access links; SJ4 10 Gbit upgrade
- Strong links with GGF
27. MB-NG SuperJANET4 Development Network (22 Mar 02)
[Topology diagram: SJ4 Dev C-PoP routers at Warrington, Reading and London (12416) and SJ4 Dev ULCC (12016); OC48/POS-SR-SC and OC48/POS-LR-SC links over WorldCom circuits; MAN and Leeds sites attached via OSM-4GE-WAN-GBIC Gigabit Ethernet; 2.5 Gbit POS access and 2.5 Gbit POS core; MPLS admin domains; dark fibre (SSE) POS]
28. The EU DataTAG Project
- EU transatlantic Grid project
- Status: well under way; people in post, link expected Jul 02
- Partners: CERN / PPARC / INFN / UvA; IN2P3 sub-contractor
- US partners: Caltech, ESnet, Abilene, PPDG, iVDGL
- The main foci are:
- Grid network research, including:
- Provisioning (CERN)
- Investigations of high performance data transport (PPARC)
- End-to-end inter-domain QoS, BW / network resource reservation
- Bulk data transfer and monitoring (UvA)
- Interoperability between Grids in Europe and the US
- PPDG, GriPhyN, DTF, iVDGL (USA)
29. DataTAG Possible Configuration: multi-platform, multi-vendor