Title: A Priority-Layered Approach to Transport Protocols for Long Fat Networks

1. A Priority-Layered Approach to Transport Protocols for Long Fat Networks
- Vidhyashankar Venkataraman
- Cornell University
2. TCP: Transmission Control Protocol
[Figure: NSFNet backbone, 1991 (1.5 Mbps) vs. Abilene backbone, 2007 (10 Gbps)]
- TCP is the ubiquitous end-to-end protocol for reliable communication
- Networks have evolved over the past two decades
- TCP has not
- TCP is inadequate for current networks
3. Long Fat Networks (LFNs)
- Bandwidth-delay product (BDP)
  - BW × Delay = max. amount of data in the pipe
  - Max. data that can be sent in one round-trip time
  - High value in long fat networks
- Optical networks, e.g. Abilene/I2
- Satellite networks
  - E.g. between two satellites, a 0.5 s RTT, 10 Gbps radio link can carry up to 625 MB/RTT
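The BDP figures above are simple arithmetic; a minimal sketch, using the link rates and RTTs from the slides:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: max. data in flight during one RTT (in bytes)."""
    return bandwidth_bps * rtt_s / 8  # divide by 8 to convert bits to bytes

# Satellite example from the slide: 10 Gbps link, 0.5 s RTT.
sat = bdp_bytes(10e9, 0.5)
print(sat / 1e6)  # 625.0 (MB per RTT)

# NSFNet-era numbers for contrast: 1.5 Mbps, 100 ms RTT.
print(bdp_bytes(1.5e6, 0.1))  # 18750.0 bytes, a few packets
```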
4. TCP Basics
[Figure: window size vs. time, showing the SS ramp-up, AI growth, and MD cutbacks]
- Reliability, in-order delivery
- Congestion-aware
- Slow Start (SS): increase window size (W) from 1 segment
- Additive Increase, Multiplicative Decrease (AIMD)
  - AI: conservative increase of 1 segment/RTT
  - MD: drastic cutback of the window by half on loss
- AIMD ensures a fair throughput share across network flows
5. TCP's AIMD revisited (adapted from Nick McKeown's slides)
- Rule for adjusting W:
  - AI: if an ACK is received, W ← W + 1/W
  - MD: if a packet is lost, W ← W/2
- Only W packets may be outstanding
[Figure: source–bottleneck–destination pipe; window size vs. time showing SS, AI growth, an early cutback (MD) on loss, and multiple cutbacks after a timeout]
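The AI/MD update rules above can be sketched as a toy window simulator (a sketch of the rules on the slide, not TCP's actual implementation):

```python
def on_ack(w: float) -> float:
    """AI: each ACK grows the window by 1/W segments (~ +1 segment per RTT)."""
    return w + 1.0 / w

def on_loss(w: float) -> float:
    """MD: halve the window on loss, never dropping below 1 segment."""
    return max(w / 2.0, 1.0)

w = 10.0
for _ in range(10):   # roughly one RTT's worth of ACKs
    w = on_ack(w)     # window grows to ~11 segments
w = on_loss(w)        # a single loss cuts it back to ~5.5
```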
6. TCP's inadequacies in LFNs
- W = 10^5 KB or more in LFNs
- Two problems:
  - Sensitivity to transient congestion and random losses
  - Ramping back up to a high W takes a long time (AI)
- Detrimental to TCP's throughput
  - Example: a 10 Gbps link with 100 ms RTT and a loss rate of 10^-5 yields only about 10 Mbps of throughput!
- Another problem: slow start makes short flows take longer to complete
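The example follows from the well-known loss-throughput relation (throughput ∝ MSS/(RTT·√p)). A sketch using the Mathis et al. approximation with an assumed 1500-byte MSS (the slide's exact constant may differ, but the order of magnitude is the point):

```python
import math

def tcp_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: T ~ (MSS/RTT) * sqrt(3/2) / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss_rate)

# Slide example: 100 ms RTT, loss rate 1e-5, on a 10 Gbps link.
t = tcp_throughput_bps(1500, 0.1, 1e-5)
print(t / 1e6)  # tens of Mbps -- orders of magnitude below the 10 Gbps capacity
```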
7. Alternate Transport Solutions
Taxonomy based on the congestion signal delivered to the end host:
- Congestion control in LFNs
  - Explicit: explicit notification from routers
    - XCP
  - Implicit: end-to-end (like TCP); general idea: a window growth curve better than AIMD
    - Loss as the congestion signal
      - CUBIC, HS-TCP, STCP
    - Delay: an RTT increase as the congestion signal (queue builds up)
      - FAST
8. Problems with existing solutions
- These protocols strive to achieve both:
  - Aggressiveness: ramping up quickly to fill the pipe
  - Fairness: friendly to TCP and to other flows of the same protocol
- Issues:
  - Unstable under frequent transient congestion events
  - Achieving both goals at the same time is difficult
  - Slow-start problems still exist in many of the protocols
- Examples:
  - XCP: needs new router hardware
  - FAST TCP, HS-TCP: stability is scenario-dependent
9. A new transport protocol
- Need good aggressiveness without loss in fairness
  - "Good" = near-100% bottleneck utilization
- Strike this balance without requiring any new network support
10. Our approach: Priority-Layered Transport (PLT)
[Figure: Src1 → bottleneck → Dst1, with subflow 1 (legacy TCP) and subflow 2 sharing the path]
- Separate aggressiveness from fairness: split each flow into two subflows
  - Send TCP (SS/AIMD) packets over subflow 1 (fair)
  - Blast packets to fill the pipe over subflow 2 (aggressive)
- Requirement: the aggressive stream shouldn't affect TCP streams in the network
11. Prioritized Transfer
[Figure: window size vs. time; subflow 2 fills the troughs between WB (W + buffer) and W (pipe capacity)]
- Subflow 1 is strictly prioritized over subflow 2
  - Meaning: subflow 2 fills the pipe whenever subflow 1 cannot, and does so quickly
- Routers can support strict priority queuing (DiffServ)
  - Deployment issues are discussed later
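Strict priority queuing at the router can be sketched with two FIFO queues (class and method names here are hypothetical; real routers implement this in hardware, e.g. keyed on DiffServ code points):

```python
from collections import deque

class StrictPriorityQueue:
    """Always drains the high-priority queue before touching the low-priority one."""

    def __init__(self):
        self.high = deque()   # subflow-1 (TCP) packets
        self.low = deque()    # subflow-2 (aggressive) packets

    def enqueue(self, pkt, high_priority: bool):
        (self.high if high_priority else self.low).append(pkt)

    def dequeue(self):
        if self.high:
            return self.high.popleft()  # subflow 1 served strictly first
        if self.low:
            return self.low.popleft()   # subflow 2 only fills the troughs
        return None
```

Because subflow-2 packets are served only when the high-priority queue is empty, they consume exactly the capacity TCP leaves unused.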
12. Evident benefits from PLT
- Fairness
  - Inter-protocol fairness: TCP-friendly
  - Intra-protocol fairness: as fair as TCP
- Aggression
  - Overcomes TCP's limitations with slow start
- Requires no new network support
- Congestion-control independence at subflow 1
  - Subflow 2 supplements the performance of subflow 1
13. PLT Design
- A scheduler assigns packets to subflows
- High-priority Congestion Module (HCM): TCP
  - The module handling subflow 1
- Low-priority Congestion Module (LCM)
  - The module handling subflow 2
- The LCM is lossy
  - Packets can get lost or starved when the HCM saturates the pipe
- The LCM sender learns from the receiver which packets were lost and which were received
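The split between the two modules can be sketched as a scheduler that fills the HCM's window first and spills the remainder into the LCM (the function and its interface are illustrative assumptions, not the paper's actual scheduler):

```python
def schedule(packets: list, hcm_space: int, lcm_space: int):
    """Assign buffered packets to subflows: fill the fair HCM (TCP) window
    first, then spill the rest into the lossy, aggressive LCM."""
    hcm_batch = packets[:hcm_space]
    lcm_batch = packets[hcm_space:hcm_space + lcm_space]
    leftover = packets[hcm_space + lcm_space:]  # stays in the input buffer
    return hcm_batch, lcm_batch, leftover

# 10 queued packets, HCM window has room for 3, LCM for 5:
hcm, lcm, rest = schedule(list(range(10)), 3, 5)
```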
14. The LCM
- Is naïve, no-holds-barred sending enough?
- No! It can lead to congestion collapse
  - Wastage of bandwidth on non-bottleneck links
  - Outstanding windows could grow large and simply cripple the flow
- Congestion control is necessary
15. Congestion control at the LCM
- Simple, loss-based, aggressive
- Multiplicative Increase, Multiplicative Decrease (MIMD)
- Loss-rate based
  - The sender keeps ramping up as long as it incurs tolerable loss rates
  - More robust to transient congestion
- The LCM sender monitors the loss rate p periodically
  - Max. tolerable loss rate: µ
  - p < µ ⇒ cwnd ← α·cwnd (MI, α > 1)
  - p > µ ⇒ cwnd ← β·cwnd (MD, β < 1)
  - A timeout also results in MD
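The MI/MD rules translate directly into code; a sketch in which the values of α, β, and µ are illustrative defaults, not the paper's tuned constants:

```python
def lcm_update(cwnd: float, loss_rate: float, mu: float = 0.01,
               alpha: float = 1.5, beta: float = 0.5) -> float:
    """MIMD: multiplicative increase while loss stays tolerable (p < mu),
    multiplicative decrease once it is not (p > mu, or on timeout)."""
    if loss_rate < mu:
        return cwnd * alpha          # MI, alpha > 1
    return max(cwnd * beta, 1.0)     # MD, beta < 1, floor of 1 segment
```

Because the decrease is triggered by the measured loss *rate* rather than any single loss, one stray drop does not collapse the window, which is what makes the LCM robust to transient congestion.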
16. Choice of µ
- Too high: wastage of bandwidth
- Too low: the LCM is less aggressive and less robust
- Decide from the expected loss rate over the Internet
  - Preferably kernel-tuned in the implementation
  - Predefined in the simulations
17. Sender throughput in the HCM and LCM
[Figure: per-module throughput over time; the LCM fills the pipe in the desired manner, and the LCM cwnd drops to 0 when the HCM saturates the pipe]
18. Simulation study
- Simulation study of PLT against TCP, FAST, and XCP
- 250 Mbps bottleneck
- Window size: 2500
- Drop-tail queuing policy
19. FAST TCP
- Popular delay-based congestion control for LFNs
- Congestion signal: an increase in delay
- Ramps up much faster than AI
  - If queuing delay builds up, the increase factor is reduced
- Uses a parameter to decide the reduction of the increase factor
  - Its ideal value depends on the number of flows in the network
- TCP-friendliness is scenario-dependent
- Though an equilibrium exists, convergence is difficult to prove
20. XCP (baseline)
- Requires explicit feedback from routers
  - Routers are equipped to provide cwnd increments
- Converges quite fast
- TCP-friendliness requires extra router support
21. Single-bottleneck topology
22. Effect of random loss
[Figure: goodput vs. loss rate]
- PLT: near-100% goodput if the loss rate < µ
- TCP, FAST, and XCP underperform at high loss rates
23. Short PLT flows
[Figure: frequency distribution of flow completion times]
- Flow sizes are Pareto-distributed (max. size 5 MB)
- Most PLT flows finish within 1 or 2 RTTs
24. Effect of flow dynamics
[Figure: per-flow throughput over time, 3 flows in the network]
- When flows 1 and 2 leave, the remaining flow ramps up quickly
- Congestion in the LCM due to another flow's arrival
25. Effect of cross traffic
26. Effect of cross traffic
- Aggregate goodput of flows
- FAST yields poor goodput even with low-rate UDP bursts
- PLT yields 90% utilization even with 50 Mbps bursts
27. Conclusion
- PLT: a layered approach to transport
  - Prioritizes fairness over aggressiveness
  - Adds aggression on top of a legacy congestion control
- Simulation results are promising
  - PLT is robust to random losses and transient congestion
- We have also tested PLT-FAST, and the results are promising!
28. Issues and challenges ahead
- Deployability challenges
  - PEPs in VPNs
  - Applications over PLT
  - PLT-shutdown
- Other issues
  - Fairness issues
  - Receive-window dependencies
29. Future work: deployment (figure adapted from Nick McKeown's slides)
[Figure: a PLT connection running between PEP boxes at the network edge]
- How could PLT be deployed?
  - In VPNs and wireless networks
  - Performance-Enhancing Proxy (PEP) boxes sitting at the edge
- Different applications?
  - LCM traffic could be a little jittery
  - Performance of streaming protocols / IPTV
30. Deployment: PLT-SHUTDOWN
- In the wide area, PLT should be disabled if there is no priority queuing
  - Otherwise it is unfriendly to fellow TCP flows!
- We need methods to detect priority queuing at the bottleneck in an end-to-end manner
- To be implemented and tested on the real Internet
31. Receive-window dependency
- PLT needs larger outstanding windows
  - The LCM is lossy: aggression leads to starvation
  - Waiting time for retransmitting lost LCM packets
- The receive window could become the bottleneck
- The LCM should cut back if the HCM is restricted
- Should be explored more
32. Fairness considerations
- Inter-protocol fairness: TCP-friendliness
- Intra-protocol fairness: HCM fairness
- Is LCM fairness necessary?
  - The LCM is more dominant in loss-prone networks
  - Can provide relaxed fairness
- Effect of queuing disciplines
34. Analyses of TCP in LFNs
- Some known analytical results
  - At loss rate p, p·(BW·RTT)^2 > 1 ⇒ small throughput
  - Throughput ∝ 1/RTT
  - Throughput ∝ 1/√p
  - (Padhye et al. and Lakshman et al.)
- Several solutions proposed for modified transport
35. Fairness
[Figure: average goodputs of PLT and TCP flows with small buffers]
- Confirms that PLT is TCP-friendly
36. PLT Architecture
[Figure: PLT sender and receiver. Sender side: socket interface, input buffer, HCM, LCM, and an LCM retransmission buffer. Receiver side: HCM-R and LCM-R. Exchanged signals: HCM packets and HCM ACKs, LCM packets, strong ACKs, and dropped-packet notifications.]
37. Other work: Chunkyspread
- Bandwidth-sensitive peer-to-peer multicast for live streaming
- A scalable solution
  - Robust to churn, latency, and bandwidth constraints
- Heterogeneity-aware random graph
- Multiple trees provide robustness to churn
- Balances load across peers
- IPTPS '06, ICNP '06