Title: An Analysis of AIMD Algorithm with Decreasing Increases
1. An Analysis of AIMD Algorithm with Decreasing Increases
- Yunhong Gu, Xinwei Hong, and Robert L. Grossman
National Center for Data Mining
2. Outline
- TCP's inefficiency in grid applications
- Improvements on AIMD
- AIMD with decreasing increases (DAIMD)
- The UDT algorithm
- Experimental results
- Conclusion and future work
3. TCP and AIMD
- AIMD (Additive Increase Multiplicative Decrease); the update rule is given below
- Fair: max-min fairness
- Stable: globally asymptotically stable
- But inefficient and not scalable in grid networks (high bandwidth-delay product)
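For reference, the standard AIMD update that these properties refer to (with increase parameter α > 0 and decrease factor 0 < β < 1; standard TCP uses α = 1 segment, β = 1/2):

    w \leftarrow w + \alpha \quad \text{(each RTT without loss)}, \qquad w \leftarrow (1 - \beta)\, w \quad \text{(on loss)}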
4. Efficiency of TCP
On a 1 Gb/s link with 200 ms RTT (e.g., between Tokyo and Chicago), TCP takes 28 minutes to recover from a single loss.
On a 10 Gb/s link with 200 ms RTT, it will take 4 hours 43 minutes to recover from a single loss.
TCP's throughput model: it needs an extremely low loss rate on high bandwidth-delay product networks (a short derivation follows).
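A short sketch of where these numbers come from (my arithmetic, assuming 1500-byte segments and recovery of one segment per RTT after the window is halved):

    W = \frac{BW \cdot RTT}{MSS}, \qquad T_{recover} \approx \frac{W}{2} \cdot RTT

For 1 Gb/s: W ≈ 16,700 segments, so T ≈ 8,350 × 0.2 s ≈ 28 minutes; for 10 Gb/s: W ≈ 167,000 segments, so T ≈ 4.6 hours. The response model referred to above is

    \text{Throughput} \approx \frac{MSS}{RTT} \cdot \sqrt{\frac{3}{2p}}

(Mathis et al.), so sustaining roughly 10 Gb/s at 200 ms RTT needs a loss probability p on the order of 10^{-10}.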
5. Improvements of TCP
- A fixed increase parameter (e.g., 1 segment per RTT) is not scalable and hence inefficient
- 32 segments per RTT works fine for a 1 Gb/s link, but how about its performance on a 40 Gb/s link or a 1.5 Mb/s link?
- Increasing the increase parameter as the congestion window increases (see the sketch after this list)
- E.g., Scalable TCP and HighSpeed TCP
- These cause fairness and convergence problems
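A rough sketch (mine, not from the slides) contrasting the per-RTT window growth of standard TCP with Scalable TCP's window-proportional growth; the Scalable TCP constants (0.01 per ACK, 0.125 decrease) are those proposed for that protocol:

#include <cstdio>

// Per-RTT window growth in congestion avoidance (window w in segments).
double tcp_per_rtt(double w)  { return w + 1.0; }    // standard TCP: +1 segment per RTT
double stcp_per_rtt(double w) { return w * 1.01; }   // Scalable TCP: ~+1% per RTT (0.01 per ACK)

int main() {
    const double full = 1e10 * 0.2 / (1500 * 8);     // 10 Gb/s, 200 ms RTT, 1500-byte segments
    const double rtt  = 0.2;
    struct Flow { const char* name; double (*step)(double); double w; };
    Flow flows[] = {
        {"standard TCP", tcp_per_rtt,  full * 0.5},   // window after TCP's halving
        {"Scalable TCP", stcp_per_rtt, full * 0.875}, // window after Scalable TCP's 12.5% cut
    };
    for (Flow& f : flows) {
        long rtts = 0;
        while (f.w < full) { f.w = f.step(f.w); ++rtts; }
        std::printf("%s regrows the window in about %.0f s\n", f.name, rtts * rtt);
    }
    return 0;
}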
6. AIMD with Decreasing Increases
- To reach high efficiency, the increase parameter of an AIMD-based algorithm should be correlated with the link capacity and the available bandwidth.
- XCP uses the available bandwidth and the number of concurrent flows to calculate the next increase parameter.
- The increase parameter should be large at the beginning and decrease as the sending rate increases (a generic sketch follows).
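A generic sketch (my illustration) of the control class the analysis covers: additive increase with a step α(x) that is non-increasing in the current rate x, plus the usual multiplicative decrease. The particular α used here is only for illustration; UDT's concrete choice appears on the following slides.

#include <algorithm>

struct DAIMD {
    double x;        // current sending rate (Mb/s)
    double L;        // end-to-end link capacity (Mb/s); how it is obtained is protocol-specific
    double beta;     // multiplicative decrease factor, e.g. 1/8

    // alpha(x): large when far from capacity, shrinking as x approaches L.
    // The exact shape is what distinguishes concrete algorithms (illustrative choice here).
    double alpha() const { return std::max((L - x) * 0.01, 0.001); }

    void on_interval_no_loss() { x = std::min(x + alpha(), L); }     // additive increase per interval
    void on_loss()             { x = x * (1.0 - beta); }             // multiplicative decrease
};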
7. AIMD with Decreasing Increases
8. UDT - UDP based Transport Protocol
- Application level, built above UDP
- End-to-end approach
- Rate based control
- The sending rate is tuned per constant interval (SYN); a minimal pacing sketch follows
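A minimal sketch (my illustration, not UDT source) of rate-based control: packets are paced by an inter-packet gap derived from the sending rate, and the rate itself is re-tuned only once per SYN interval. SYN = 0.01 s follows the UDT specification; the placeholders stand in for the real socket I/O and the update rule on later slides.

#include <chrono>
#include <thread>

constexpr double SYN = 0.01;                          // rate-control interval (s)

void send_one_udp_packet() { /* placeholder: sendto() on a UDP socket */ }
double retune_rate(double r) { return r; }            // placeholder: DAIMD update

void sender_loop(double rate_pps) {
    auto next_tune = std::chrono::steady_clock::now() + std::chrono::duration<double>(SYN);
    while (true) {
        send_one_udp_packet();
        std::this_thread::sleep_for(std::chrono::duration<double>(1.0 / rate_pps)); // pacing gap
        if (std::chrono::steady_clock::now() >= next_tune) {
            rate_pps  = retune_rate(rate_pps);        // tune once per SYN, not per packet/ACK
            next_tune += std::chrono::duration<double>(SYN);
        }
    }
}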
9. UDT Algorithm
- UDT considers the end-to-end link capacity L
- It is hard to estimate the number of concurrent flows and the real-time available bandwidth
- UDT tunes the increase parameter according to L - C, where C is the current sending rate
10. UDT Algorithm
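The equations on this slide do not survive in the transcript. The central one, the per-SYN rate increase, can be reconstructed from the UDT paper and is consistent with the table on the next slide (β = 1.5 × 10^{-6} and SYN = 0.01 s are the published UDT constants; S is the packet size in bytes, L and C are in bits per second):

    \mathrm{inc} =
    \begin{cases}
      \max\left( 10^{\lceil \log_{10}(L - C) \rceil} \cdot \beta / S,\ 1/S \right) & L > C \\
      1/S & L \le C
    \end{cases}
    \quad \text{packets per SYN}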
11. UDT Algorithm
L = 10 Gbps, S = 1500 bytes

C (Mbps)            L - C (Mbps)       Increment (pkts/SYN)
[0, 9000)           (1000, 10000]      10
[9000, 9900)        (100, 1000]        1
[9900, 9990)        (10, 100]          0.1
[9990, 9999)        (1, 10]            0.01
[9999, 9999.9)      (0.1, 1]           0.001
[9999.9, 10000)     < 0.1              0.00067
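As a sanity check, a small program (mine) that evaluates the reconstructed rule from the previous slide and reproduces the increments in this table:

#include <cmath>
#include <cstdio>

// inc = max(10^ceil(log10(L - C)) * beta / S, 1/S) packets per SYN,
// with L, C in bits/s, S in bytes, beta = 1.5e-6 (reconstruction, see previous slide).
double increment(double L_bps, double C_bps, double S_bytes) {
    const double beta = 1.5e-6;
    if (L_bps <= C_bps) return 1.0 / S_bytes;            // at/above capacity: one byte per SYN
    double step = std::pow(10.0, std::ceil(std::log10(L_bps - C_bps))) * beta / S_bytes;
    return std::max(step, 1.0 / S_bytes);
}

int main() {
    const double L = 10e9, S = 1500;
    for (double C_mbps : {5000.0, 9500.0, 9950.0, 9995.0, 9999.5, 9999.95})
        std::printf("C = %8.2f Mbps  ->  inc = %.5f pkts/SYN\n", C_mbps, increment(L, C_mbps * 1e6, S));
    return 0;
}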
12. UDT Algorithm
13. UDT Efficiency and Fairness Characteristics
- Takes 7.5 seconds to reach 90% of the link capacity, independent of BDP (a quick check follows this list)
- Satisfies max-min fairness if all the flows have the same end-to-end link capacity
- Otherwise, any flow will obtain at least half of its fair share
- Does not take more bandwidth than a concurrent TCP flow as long as
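A quick check of the 7.5-second claim using the table on slide 11 (my arithmetic): while C is below 90% of L, the increment is 10 packets per SYN, so

    \Delta\text{rate} = \frac{10 \times 1500 \times 8\ \text{bits}}{0.01\ \text{s}} = 12\ \text{Mb/s per SYN}, \qquad \frac{9000\ \text{Mb/s}}{12\ \text{Mb/s}} = 750\ \text{SYNs} \times 0.01\ \text{s} = 7.5\ \text{s}

and because the increment depends only on L - C (not on RTT), this time is independent of the bandwidth-delay product.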
14. Experiment - Setup
15. Experiment - Results
16. Conclusion
- Standard TCP is inefficient for grid applications in high bandwidth-delay product networks.
- We argued that the increase parameter should be correlated with information such as the link capacity and the available bandwidth.
- We analyzed a class of AIMD-based control algorithms whose increase parameter decreases as the sending rate increases, and proved that it is fair and stable.
- Based on this analysis, we designed a new control algorithm that uses the estimated link capacity and the current sending rate as hints to update the increase parameter.
- This algorithm has been implemented in our UDT protocol, and the experiments have demonstrated very good performance.
17. Future Work
- Bandwidth Estimation
- Currently UDT uses packet pairs to estimate the link capacity (a minimal sketch of the idea follows)
- We will consider more methods to deal with cross traffic and NIC interrupt coalescence
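For context, a minimal sketch (my illustration, not UDT code) of the basic packet-pair idea: two back-to-back packets leave the bottleneck separated by the time needed to transmit one packet, so the receiver-side dispersion gives a capacity estimate. UDT's actual estimator additionally filters samples to cope with the cross traffic and interrupt coalescence mentioned above.

#include <cstdio>

// Packet-pair capacity estimation: capacity ≈ S * 8 / dispersion, where S is the
// packet size in bytes and dispersion is the receiver-side gap between the pair.
double estimate_capacity_bps(double pkt_size_bytes, double arrival_gap_s) {
    return pkt_size_bytes * 8.0 / arrival_gap_s;
}

int main() {
    // Example: a 1500-byte pair arriving 12 microseconds apart -> ~1 Gb/s.
    std::printf("estimated capacity: %.2f Mb/s\n",
                estimate_capacity_bps(1500, 12e-6) / 1e6);
    return 0;
}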
18. Thank you!
- Questions and comments are welcome!
- For more information, please visit
- http://www.ncdm.uic.edu
- http://udt.sf.net