Title: Protocols Progress with Current Work
Slide 1: Protocols Progress with Current Work
Richard Hughes-Jones, The University of Manchester
www.hep.man.ac.uk/rich/ then Talks
Slide 2: vlbi_udp UDP on the WAN
- iGrid2002 monolithic code
- Convert to use pthreads
  - Control
  - Data input
  - Data output
- Code branch for Simon's file transfer tests
- Work on vlbi_recv (threading sketch after this list)
  - Output thread polled for data in the ring buffer: burned CPU
  - Input thread signals output thread when there is work to do, else it waits on a semaphore: packet loss at high rate, variable throughput
  - Output thread uses sched_yield() when there is no work to do: CPU still used
- Add code for MarkV card and PCEVN interface
- Measure throughput, packet loss, re-ordering, 1-way delay
- Multi-flow network performance tests being set up Nov/Dec 06
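The threading structure described above is a standard producer/consumer pair. The code below is a minimal sketch, not the vlbi_recv source: buffer sizes, slot handling and names are assumptions, and a real receiver must also guard against ring overrun and drive the MarkV/PCEVN output path.

/*
 * Minimal sketch of the threading structure described above, not the
 * vlbi_recv source: an input thread fills a ring buffer and signals an
 * output thread through a counting semaphore, so the output thread
 * sleeps instead of polling.  Sizes and names are illustrative.
 */
#include <pthread.h>
#include <semaphore.h>
#include <string.h>

#define RING_SLOTS 1024
#define SLOT_BYTES 1500

static char  ring[RING_SLOTS][SLOT_BYTES];
static int   head, tail;          /* head: input thread only; tail: output thread only */
static sem_t slots_filled;        /* counts packets ready for the output thread */

static void *input_thread(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... recvfrom() a UDP packet into the next free slot ... */
        memset(ring[head], 0, SLOT_BYTES);   /* stand-in for the packet data */
        head = (head + 1) % RING_SLOTS;
        sem_post(&slots_filled);             /* wake the output thread */
    }
    return NULL;
}

static void *output_thread(void *arg)
{
    (void)arg;
    for (;;) {
        sem_wait(&slots_filled);             /* block when idle: no busy polling */
        /* ... write ring[tail] to disk or the MarkV/PCEVN interface ... */
        tail = (tail + 1) % RING_SLOTS;
    }
    return NULL;
}

int main(void)
{
    pthread_t in_tid, out_tid;

    sem_init(&slots_filled, 0, 0);
    pthread_create(&in_tid,  NULL, input_thread,  NULL);
    pthread_create(&out_tid, NULL, output_thread, NULL);
    pthread_join(in_tid, NULL);
    pthread_join(out_tid, NULL);
    return 0;
}

As the slide notes, blocking on the semaphore removed the polling CPU burn but showed packet loss and variable throughput at high rates, which motivated the sched_yield() variant.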
Slide 3: vlbi_udp B2B UDP Tests
- Kernel 2.6.9
- vlbi_recv with sched_yield()
- Wait time 12 us between packets (pacing sketch after this list)
- Stable throughput
  - 999 Mbit/s, variation less than 1 Mbit/s
  - No packet loss
- Inter-packet time
- Processing time
  - mean 0.1005, sigma 0.1438
- CPU load
  - Cpu0: 0.0 us, 0.0 sy, 0.0 ni, 99.7 id, 0.3 wa, 0.0 hi, 0.0 si
  - Cpu1: 11.3 us, 88.7 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si
  - Cpu2: 0.3 us, 0.0 sy, 0.0 ni, 99.3 id, 0.3 wa, 0.0 hi, 0.0 si
  - Cpu3: 9.3 us, 15.6 sy, 0.0 ni, 37.5 id, 0.0 wa, 1.3 hi, 36.2 si
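A rough illustration of how a fixed 12 us inter-packet wait produces this kind of steady, near-1 Gbit/s flow is sketched below; the socket setup, target address and payload size are placeholders, not the actual udpmon/vlbi_send configuration.

/*
 * Illustrative sketch of udpmon/vlbi_send-style packet pacing: send a
 * UDP packet, then busy-wait on the clock until the fixed inter-packet
 * time (12 us here) has elapsed.
 */
#include <string.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define PKT_BYTES 1472            /* example UDP payload */
#define WAIT_NS   12000           /* 12 us inter-packet wait */

static long long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5000);                       /* example port   */
    inet_pton(AF_INET, "192.168.0.2", &dst.sin_addr);   /* example target */

    char pkt[PKT_BYTES] = {0};
    long long next = now_ns();

    for (int i = 0; i < 100000; i++) {
        sendto(s, pkt, sizeof(pkt), 0, (struct sockaddr *)&dst, sizeof(dst));
        next += WAIT_NS;          /* time the next packet is due */
        while (now_ns() < next)
            ;                     /* spin: keeps the spacing tight but loads one CPU */
    }
    return 0;
}

Spinning on the clock, or calling sched_yield() in a tight loop as vlbi_recv does, keeps one core busy even when there is little work, which is consistent with the fully loaded core in the CPU-load figures above.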
Slide 4: vlbi_udp Multi-site Streams
[Diagram: Gbit-link topology for multi-site streams, connecting Chalmers University of Technology (Gothenburg), Metsähovi, Onsala (Sweden), Jodrell Bank (UK), Torun (Poland), Dwingeloo and Medicina (Italy), including a dedicated Gbit link and a DWDM link]
Slide 5: TCP tcpdelay How does TCP move CBR data?
- Want to examine how TCP moves Constant Bit Rate (CBR) data
- VLBI application protocol
- tcpdelay, a test program (sketch of the idea after this list)
  - Instrumented TCP program that emulates sending CBR data
  - Records relative 1-way delay
  - Web100 records TCP stack activity
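The sketch below shows the general shape of such a tool, not the actual tcpdelay code: fixed-size messages are sent over TCP on a constant-bit-rate schedule, each carrying a sequence number and send timestamp so the receiver can compute a relative 1-way delay per message. Message size, wait time, address and wire format are illustrative.

/*
 * Sketch of the tcpdelay idea, not the actual tool.  Error checks are
 * omitted for brevity.
 */
#include <string.h>
#include <stdint.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#define MSG_BYTES 1448            /* one message per TCP segment payload  */
#define WAIT_NS   22000           /* 22 us per message, about 525 Mbit/s  */

struct msg_hdr {                  /* carried at the front of each message */
    uint32_t seq;                 /* message number                       */
    uint64_t send_ns;             /* sender clock at send time            */
};

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5001);                       /* example port     */
    inet_pton(AF_INET, "192.168.0.2", &dst.sin_addr);   /* example receiver */
    connect(s, (struct sockaddr *)&dst, sizeof(dst));

    char msg[MSG_BYTES] = {0};
    struct timespec due, now;
    clock_gettime(CLOCK_MONOTONIC, &due);

    for (uint32_t seq = 0; seq < 1000000; seq++) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        struct msg_hdr h = { seq,
            (uint64_t)now.tv_sec * 1000000000ULL + now.tv_nsec };
        memcpy(msg, &h, sizeof(h));
        send(s, msg, sizeof(msg), 0);       /* blocks when Cwnd / buffer is full */

        due.tv_nsec += WAIT_NS;             /* next CBR slot */
        if (due.tv_nsec >= 1000000000L) { due.tv_sec++; due.tv_nsec -= 1000000000L; }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &due, NULL);
    }
    close(s);
    return 0;
}

Because the sender and receiver clocks are not synchronised, the arrival time minus send_ns gives only a relative 1-way delay, which is what the plots on the following slides show.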
Slide 6: TCP tcpdelay Visualising the Results (Stephen Kershaw)
- If throughput is NOT limited by the TCP buffer size / Cwnd, maybe we can re-sync with the CBR arrival times
- Need to store the CBR messages in the TCP buffer during the Cwnd drop
- Then transmit faster than the CBR rate to catch up (sketch after this list)
[Plot: arrival time vs. message number / time]
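One way to realise this catch-up behaviour in the sender is sketched below, building on the CBR send loop above; it is an illustration under the same assumptions, not the method actually implemented.

/*
 * Catch-up variant of the CBR send loop: keep a nominal CBR schedule;
 * if send() blocked during a Cwnd dip and we are now behind schedule,
 * skip the wait so queued messages go out faster than the CBR rate
 * until the schedule is caught up.
 */
#include <time.h>
#include <sys/socket.h>

#define MSG_BYTES 1448
#define WAIT_NS   22000                       /* nominal CBR slot: 22 us */

void cbr_send_with_catchup(int sock, const char *msg)
{
    struct timespec due, now;
    clock_gettime(CLOCK_MONOTONIC, &due);     /* first message is due now */

    for (;;) {
        send(sock, msg, MSG_BYTES, 0);        /* may block while Cwnd recovers */

        due.tv_nsec += WAIT_NS;               /* advance the nominal schedule */
        if (due.tv_nsec >= 1000000000L) { due.tv_sec++; due.tv_nsec -= 1000000000L; }

        clock_gettime(CLOCK_MONOTONIC, &now);
        int behind = now.tv_sec > due.tv_sec ||
                    (now.tv_sec == due.tv_sec && now.tv_nsec >= due.tv_nsec);
        if (!behind)
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &due, NULL);
        /* if behind: no sleep, i.e. transmit faster than CBR to catch up */
    }
}

While send() blocks during the Cwnd dip the schedule falls behind; once sending resumes, the loop skips its waits so the backlog drains faster than the CBR rate until the nominal schedule is caught up.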
Slide 7: TCP tcpdelay JB-Manc
- Message size 1448 bytes
- Wait time 22 us
- Data rate 525 Mbit/s
- Route JB-Manc
- RTT 1 ms
- TCP buffer 2 MB
- Drop 1 in 10,000 packets
- 2.5-3 ms increase in time for about 2000 messages, i.e. a 44 ms span
- Classic Cwnd behaviour
- Cwnd dip corresponds to 1.2 Mbytes of data delayed (810 packets); arithmetic check after this list
- Peak throughput 620 Mbit/s
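These figures are mutually consistent (my arithmetic, not from the slide):
- 1448 bytes every 22 us is 1448 × 8 / 22 us = 527 Mbit/s, matching the quoted 525 Mbit/s CBR rate
- 810 packets × 1448 bytes = 1.17 Mbytes, the roughly 1.2 Mbytes delayed during the Cwnd dip
- 2000 messages × 22 us = 44 ms, the duration of the delay excursion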
Slide 8: Arrival Times, UKLight JB-JIVE-Manc
Presented at the Haystack Workshop
- Message size 1448 bytes
- Wait time 22 us
- Data rate 525 Mbit/s
- Route JB-UKLight-JIVE-UKLight-Manc
- RTT 27 ms
- TCP buffer 32 Mbytes
- BDP @ 512 Mbit/s is 1.8 Mbytes (arithmetic check after this list)
- Estimate: catch-up possible if loss < 1 in 1.24M packets
- Data needed for JIVE-Manc (27 ms) and Chicago-Manc (120 ms) tests
- Have 30 GBytes!!!
Stephen Kershaw
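Rough cross-checks of the numbers above (my arithmetic, not from the slide):
- 512 Mbit/s × 27 ms / 8 = 1.7 Mbytes, consistent with the quoted 1.8 Mbyte BDP
- the same rate over the 120 ms Chicago-Manc path would give a BDP of about 7.7 Mbytes
- 30 GBytes at 525 Mbit/s is roughly 460 s, i.e. about 7-8 minutes of continuous CBR data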
Slide 9: TCP Stacks, Sharing, Reverse Traffic
- Delayed by provision of the UKLight link Manc to Starlight
- PCs installed at Starlight and Manchester, Sep 06
- udpmon tests good
  - Plateau 990 Mbit/s wire rate
  - No packet loss
  - Same in both directions
- TCP studies now in progress
Slide 10: DCCP The Application View
- Stephen and Richard, with help from Andrea
- Had problems with Fedora Core 6 using stable kernel 2.6.19-rc1
  - DCCP data packets never reached the receiving TSAP!
  - Verified with tcpdump
- Now using 2.6.19-rc5-g73fd2531-dirty
- Ported udpmon to dccpmon (socket-setup sketch after this list)
  - Some system calls don't work
- dccpmon tests
  - Plateau 990 Mbit/s wire rate
  - No packet loss
  - Receive system crashed!
- iperf tests
  - 940 Mbit/s back-to-back
- Need more instrumentation in DCCP
  - E.g. a line in /proc/net/snmp
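For reference, the minimal socket setup a dccpmon-style tool needs on Linux looks roughly like the sketch below; it is a hypothetical example, not the dccpmon code. The service code, port and address are arbitrary, and the constants follow <linux/dccp.h> but are guarded in case older libc headers do not yet define them.

/*
 * Hypothetical sketch of minimal DCCP client socket setup on Linux.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

#ifndef SOCK_DCCP
#define SOCK_DCCP 6
#endif
#ifndef IPPROTO_DCCP
#define IPPROTO_DCCP 33
#endif
#ifndef SOL_DCCP
#define SOL_DCCP 269
#endif
#ifndef DCCP_SOCKOPT_SERVICE
#define DCCP_SOCKOPT_SERVICE 2
#endif

int main(void)
{
    int s = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
    if (s < 0) { perror("socket"); return 1; }

    /* DCCP requires a service code to be set before connect() or listen() */
    uint32_t service = htonl(0x12345678);               /* arbitrary example code */
    if (setsockopt(s, SOL_DCCP, DCCP_SOCKOPT_SERVICE,
                   &service, sizeof(service)) < 0) {
        perror("setsockopt DCCP_SOCKOPT_SERVICE");
        return 1;
    }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(5001);                       /* example port   */
    inet_pton(AF_INET, "192.168.0.2", &dst.sin_addr);   /* example target */

    if (connect(s, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect");       /* CCID / feature negotiation happens here */
        return 1;
    }

    char buf[1448] = {0};
    send(s, buf, sizeof(buf), 0);                       /* one DCCP datagram */
    close(s);
    return 0;
}

Apart from the SOCK_DCCP socket type and the mandatory service code, the send/receive path looks much like UDP, although, as noted above, some system calls did not work in these kernels.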
Slide 11: 10 Gigabit Ethernet Lab
- 10 Gigabit test lab now set up in Manchester
  - Cisco 7600
  - Cross campus: < 1 ms
  - Neterion NICs
  - 4 Myricom 10 Gbit NICs, delivery this week
  - Chelsio being purchased
  - Boston/Supermicro X7DBE PCs
    - Two dual-core Intel Xeon Woodcrest 5130, 2 GHz
    - PCI-e and PCI-X
- B2B performance so far
  - SuperMicro X6DHE-G2
  - Kernel (2.6.13): driver dependent!
  - One iperf TCP data stream: 4 Gbit/s
  - Two bi-directional iperf TCP data streams: 3.8 and 2.2 Gbit/s
  - UDP disappointing
  - Installed Fedora Core 5 kernels 2.6.17, 2.6.18 (web100 packet drop) and 2.6.19 on the Intel dual-core PCs
Slide 12: ESLEA-FABRIC 4 Gbit flows over GÉANT
- Set up a 4 Gigabit Lightpath between GÉANT PoPs
  - Collaboration with Dante
  - GÉANT Development Network London to Amsterdam, and GÉANT Lightpath service CERN to Poznan
  - PCs in their PoPs with 10 Gigabit NICs
- VLBI tests
  - UDP performance: throughput, jitter, packet loss, 1-way delay, stability
  - Continuous (days) data flows with VLBI_UDP, and multi-Gigabit TCP performance with current kernels
  - Experience for FPGA Ethernet packet systems
- Dante interests
  - Multi-Gigabit TCP performance
  - The effect of (Alcatel) buffer size on bursty TCP using bandwidth-limited Lightpaths
Slide 13: Options Using the GÉANT LightPaths
- Set up a 4 Gigabit Lightpath between GÉANT PoPs
  - Collaboration with Dante
  - PCs in Dante PoPs
- 10 Gigabit SDH backbone
  - Alcatel 1678 MCC
- Node locations
  - Budapest
  - Geneva
  - Frankfurt
  - Milan
  - Paris
  - Poznan
  - Prague
  - Vienna
- Can do traffic routing, so long-RTT paths can be made
- Ideal: London to Copenhagen
Slide 14: Network/PC Booking System
- Based on Meeting Room Booking System
- Divide into Links and End systems
- Hard work by Stephen Kershaw
- Testing with VLBI