Title: EECS 122: Introduction to Computer Networks - TCP Variations
EECS 122: Introduction to Computer Networks - TCP Variations
- Computer Science Division 
- Department of Electrical Engineering and Computer 
 Sciences
- University of California, Berkeley 
- Berkeley, CA 94720-1776
Today's Lecture: 11

[Figure: the five-layer protocol stack (Application, Transport, Network (IP), Link, Physical), annotated with the lecture numbers that cover each layer.]
Outline
- TCP congestion control 
- Quick Review 
- TCP flavors 
- Equation-based congestion control 
- Impact of losses 
- Cheating 
- Router-based support 
- RED 
- ECN 
- Fair Queueing 
- XCP
Quick Review
- Slow start: cwnd = cwnd + 1 upon every new ACK 
- Congestion avoidance: AIMD if cwnd > ssthresh 
- ACK: cwnd = cwnd + 1/cwnd 
- Drop: ssthresh = cwnd/2 and cwnd = 1 
- Fast Recovery 
- duplicate ACKs: cwnd = cwnd/2 
- Timeout: cwnd = 1 
- (a short sketch of these rules follows below)
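A minimal sketch of the window rules summarized above, with Reno-style fast recovery; cwnd and ssthresh are in packets, and the class and method names are illustrative only.

    # Minimal sketch of the TCP window rules above (illustrative names).
    # cwnd and ssthresh are measured in packets.
    class RenoWindow:
        def __init__(self):
            self.cwnd = 1.0
            self.ssthresh = 64.0

        def on_new_ack(self):
            if self.cwnd < self.ssthresh:        # slow start
                self.cwnd += 1.0
            else:                                # congestion avoidance (AIMD)
                self.cwnd += 1.0 / self.cwnd

        def on_triple_dupack(self):              # fast recovery
            self.ssthresh = self.cwnd / 2.0
            self.cwnd = self.ssthresh

        def on_timeout(self):
            self.ssthresh = self.cwnd / 2.0
            self.cwnd = 1.0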
TCP Flavors
- TCP-Tahoe 
- cwnd = 1 whenever a drop is detected 
- TCP-Reno 
- cwnd = 1 on timeout 
- cwnd = cwnd/2 on dup ACK 
- TCP-NewReno 
- TCP-Reno + improved fast recovery 
- TCP-Vegas, TCP-SACK
TCP Vegas
- Improved timeout mechanism 
- Decrease cwnd only for losses in packets sent at the current rate 
- avoids reducing the rate twice 
- Congestion avoidance phase 
- compare Actual rate (A) to Expected rate (E) 
- if E - A > threshold, decrease cwnd linearly 
- if E - A < threshold, increase cwnd linearly 
- rate measurements come from delay measurements 
- see textbook for details! 
- (a rough sketch follows below)
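A rough sketch of the congestion-avoidance rule above, assuming E and A are estimated from the minimum and most recent RTT samples; the single threshold mirrors the slide (Vegas itself uses two thresholds, usually called alpha and beta), and the function name and units are illustrative.

    # Rough sketch of the Vegas congestion-avoidance rule (illustrative only).
    # base_rtt is the smallest RTT observed so far; last_rtt is the most recent sample.
    def vegas_update(cwnd, base_rtt, last_rtt, threshold):
        expected = cwnd / base_rtt      # E: rate achievable with no queueing
        actual = cwnd / last_rtt        # A: rate actually being achieved
        if expected - actual > threshold:
            return cwnd - 1             # too much queueing: decrease linearly
        if expected - actual < threshold:
            return cwnd + 1             # room to grow: increase linearly
        return cwnd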
TCP-SACK
- SACK = Selective Acknowledgements 
- ACK packets identify exactly which packets have 
 arrived
- Makes recovery from multiple losses much easier
Standards?
- How can all these algorithms coexist? 
- Don't we need a single, uniform standard? 
- What happens if I'm using Reno and you are using Tahoe, and we try to communicate?
Equation-Based CC
- Simple scenario 
- assume a drop every kth RTT (for some large k) 
- window per RTT: w, w+1, w+2, ..., w+k-1, DROP, (w+k-1)/2, (w+k-1)/2 + 1, ... 
- Observations 
- In steady state w = (w+k-1)/2, so w = k-1 
- Average window = 1.5(k-1) 
- Total packets between drops = 1.5k(k-1) 
- Drop probability p = 1/(1.5k(k-1)) 
- Throughput T ~ (1/RTT) * sqrt(3/(2p)) 
- (a quick numeric check follows below)
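A quick numeric check of the throughput equation above; the RTT and loss-rate values are made-up illustrations.

    import math

    # Throughput predicted by the TCP equation T(p) = (1/RTT) * sqrt(3/(2p)),
    # in packets per second (RTT in seconds, p = per-packet drop probability).
    def tcp_throughput(p, rtt):
        return (1.0 / rtt) * math.sqrt(3.0 / (2.0 * p))

    # Example (illustrative values): RTT = 100 ms, p = 1% loss
    # -> sqrt(3/0.02) is about 12.2 packets per RTT, i.e. about 122 packets/s.
    print(tcp_throughput(0.01, 0.1))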
Equation-Based CC
- Idea 
- forget complicated increase/decrease algorithms 
- use this equation T(p) directly! 
- Approach 
- measure drop rate (don't need ACKs for this) 
- send drop rate p to the source 
- source sends at rate T(p) 
- Good for streaming audio/video that can't tolerate the high variability of TCP's sending rate
Question!
- Why use the TCP equation? 
- Why not use just any equation for T(p)? 
Cheating
- Three main ways to cheat 
- increasing cwnd faster than 1 per RTT 
- using a large initial cwnd 
- opening many connections 
Increasing cwnd Faster
- x increases by 2 per RTT; y increases by 1 per RTT 
- Limit rates: x = 2y 
[Figure: phase plot of rates x and y against the capacity line C; the trajectory converges to the line x = 2y.]
Increasing cwnd Faster
Larger Initial cwnd
- x starts slow start with cwnd = 4; y starts slow start with cwnd = 1 
Open Many Connections
- Assume 
-  A starts 10 connections to B 
-  D starts 1 connection to E 
-  Each connection gets about the same throughput 
- Then A gets 10 times more throughput than D
Cheating and Game Theory
- Payoffs (A's throughput, D's throughput):

                        D increases by 1    D increases by 5
    A increases by 1        22, 22              10, 35
    A increases by 5        35, 10              15, 15

- Too aggressive → losses → throughput falls 
- Individual incentives: cheating pays 
- Social incentives: better off without cheating 
- Classic Prisoner's Dilemma: resolution depends on accountability 
Lossy Links
- TCP assumes that all losses are due to congestion 
- What happens when the link is lossy? 
- Recall that throughput is proportional to 1/sqrt(p), where p is the loss probability 
- This applies even for non-congestion losses
Example
[Figure: throughput over time for loss rates p = 0, p = 1%, and p = 10%.]
What can routers do to help?
Paradox
- Routers are in the middle of the action 
- But traditional routers are very passive in terms of congestion control 
- FIFO 
- Drop-tail
FIFO: First-In First-Out
- Maintain a queue to store all packets 
- Send the packet at the head of the queue 
[Figure: a FIFO queue with queued packets, an arriving packet joining the tail, and the head packet next to transmit.]
Tail-drop Buffer Management
- Drop packets only when the buffer is full 
- Drop the arriving packet 
[Figure: a full FIFO queue; the arriving packet is dropped at the tail while the head packet is next to transmit.]
Ways Routers Can Help
- Packet scheduling: non-FIFO scheduling 
- Packet dropping 
- not drop-tail 
- not only when the buffer is full 
- Congestion signaling 
Question!
- Why not use infinite buffers? 
- no packet drops!
The Buffer Size Quandary
- Small buffers 
- often drop packets due to bursts 
- but have small delays 
- Large buffers 
- reduce number of packet drops (due to bursts) 
- but increase delays 
- Can we have the best of both worlds? 
Random Early Detection (RED)
- Basic premise 
- the router should signal congestion when the queue first starts building up (by dropping a packet) 
- but the router should give flows time to reduce their sending rates before dropping more packets 
- Therefore, packet drops should be 
- early: don't wait for the queue to overflow 
- random: don't drop all the packets in a burst, but space the drops out 
RED
- FIFO scheduling 
- Buffer management 
- Probabilistically discard packets 
- Probability is computed as a function of average 
 queue length (why average?)
[Figure: discard probability (0 to 1) as a function of the average queue length, with min_th, max_th, and queue_len marked on the axis.]
RED (cont'd)
- min_th = minimum threshold 
- max_th = maximum threshold 
- avg_len = average queue length 
- avg_len = (1 - w) * avg_len + w * sample_len 
RED (cont'd)
- If (avg_len < min_th) → enqueue packet 
- If (avg_len > max_th) → drop packet 
- If (avg_len > min_th and avg_len < max_th) → drop the packet with probability P (enqueue with probability 1 - P) 
RED (cont'd)
- P = max_P * (avg_len - min_th) / (max_th - min_th) 
- Improvements to spread the drops (see textbook) 
- (a short sketch follows below)
[Figure: the discard probability rises linearly from 0 at min_th to max_P at max_th; P is the value read off this line at the current avg_len.]
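A minimal sketch of the RED decision for one arriving packet, combining the EWMA average and the linear drop probability above; the parameter values and function name are illustrative assumptions.

    import random

    # Minimal RED sketch (illustrative parameter values).
    MIN_TH, MAX_TH, MAX_P, W = 5, 15, 0.1, 0.002

    avg_len = 0.0

    def red_arrival(queue_len):
        """Return True to enqueue the arriving packet, False to drop it."""
        global avg_len
        # Exponentially weighted moving average of the queue length.
        avg_len = (1 - W) * avg_len + W * queue_len
        if avg_len < MIN_TH:
            return True                      # enqueue
        if avg_len > MAX_TH:
            return False                     # drop
        # Drop probability rises linearly from 0 at MIN_TH to MAX_P at MAX_TH.
        p = MAX_P * (avg_len - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() >= p          # enqueue with prob. 1 - p, drop with prob. p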
Average vs. Instantaneous Queue
RED Advantages
- High network utilization with low delays 
- Average queue length small, but capable of 
 absorbing large bursts
- Many refinements to basic algorithm make it more 
 adaptive (requires less tuning)
Explicit Congestion Notification
- Rather than dropping packets to signal congestion, the router can send an explicit signal 
- Explicit congestion notification (ECN) 
- instead of (optionally) dropping the packet, the router sets a bit in the packet header 
- If a data packet has the bit set, then the ACK has the ECN bit set 
- Backward compatibility 
- a bit in the header indicates whether the host implements ECN 
- note that not all routers need to implement ECN
Picture
[Figure illustrating ECN marking at the router and the echo back to the source in the ACK.]
ECN Advantages
- No need for retransmitting optionally dropped 
 packets
- No confusion between congestion losses and 
 corruption losses
Remaining Problem
- The Internet is vulnerable to congestion-control cheaters! 
- A single CC standard can't satisfy all applications 
- EBCC might answer this point 
- Goal 
- make the Internet invulnerable to cheaters 
- allow end users to use whatever congestion control they want 
- How?
One Approach: Nagle (1987)
- Round-robin among different flows 
- one queue per flow 
Round-Robin Discussion
- Advantages: protection among flows 
- Misbehaving flows will not affect the performance of well-behaved flows 
- Misbehaving flow = a flow that does not implement any congestion control 
- FIFO does not have such a property 
- Disadvantages 
- More complex than FIFO: per-flow queue/state 
- Biased toward large packets: a flow receives service proportional to its number of packets
Solution?
- Bit-by-bit round robin 
- Can you do this in practice? 
- No, packets cannot be preempted (why?) 
- we can only approximate it 
Fair Queueing (FQ)
- Define a fluid flow system: a system in which flows are served bit-by-bit 
- Then serve packets in the increasing order of their deadlines 
- Advantages 
- Each flow will receive exactly its fair rate 
- Note 
- FQ achieves max-min fairness
Max-Min Fairness
- Denote 
- C = link capacity 
- N = number of flows 
- ri = arrival rate of flow i 
- Max-min fair rate computation 
- 1. compute C/N 
- 2. if there are flows i such that ri < C/N, update C ← C - (sum of those ri) and N ← N - (number of such flows) 
- 3. if not, f = C/N; terminate 
- 4. go to 1 
- A flow can receive at most the fair rate, i.e., min(f, ri)
Example
- C = 10; r1 = 8, r2 = 6, r3 = 2; N = 3 
- C/3 = 3.33 → r3 < 3.33, so C = C - r3 = 8, N = 2 
- C/2 = 4 → f = 4 
- (the sketch below reproduces this computation)
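A minimal sketch of the max-min fair-share computation above; the helper name is illustrative, and it returns the fair share f (each flow then receives min(f, ri)).

    # Minimal max-min fair share computation (sketch of the procedure above).
    def max_min_fair_share(capacity, rates):
        rates = list(rates)
        while True:
            n = len(rates)
            if n == 0:
                return float('inf')          # every flow is already satisfied
            f = capacity / n
            small = [r for r in rates if r < f]
            if not small:
                return f                     # remaining flows share C/N equally
            # Flows below the current share keep their own rate; remove them and repeat.
            capacity -= sum(small)
            rates = [r for r in rates if r >= f]

    # Example from the slide: C = 10, rates 8, 6, 2 -> f = 4.
    print(max_min_fair_share(10, [8, 6, 2]))    # prints 4.0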
Implementing Fair Queueing
- Idea: serve packets in the order in which they would have finished transmission in the fluid flow system
Example
[Figure: arrival traffic for Flow 1 (packets 1-6) and Flow 2 (packets 1-5), the order in which they finish service in the fluid flow system, and the resulting packet-by-packet transmission order.]
System Virtual Time V(t)
- Measure service, instead of time 
- Slope of V(t) = rate at which every active flow receives service 
- C = link capacity 
- N(t) = number of active flows in the fluid flow system at time t 
[Figure: V(t) over time for the previous example; its slope is C/N(t), changing as flows become active or idle.]
Fair Queueing Implementation
- Define 
- F_i^k = finishing time of packet k of flow i (in the system virtual time reference system) 
- a_i^k = arrival time of packet k of flow i 
- L_i^k = length of packet k of flow i 
- The finishing time of packet k+1 of flow i is 
- F_i^(k+1) = max(F_i^k, V(a_i^(k+1))) + L_i^(k+1) 
- (a short sketch follows below)
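A simplified sketch of this bookkeeping, assuming the virtual time V(a) of each arrival is already known (computing V(t) itself is omitted); packet lengths stand in for their transmission time in the fluid system, and all names are illustrative.

    import heapq

    # Simplified Fair Queueing sketch: compute finish tags and serve packets
    # in increasing finish-tag order. Assumes V(arrival) is supplied.
    last_finish = {}                     # flow id -> finish tag of its previous packet

    def finish_tag(flow, v_arrival, length):
        f = max(last_finish.get(flow, 0.0), v_arrival) + length
        last_finish[flow] = f
        return f

    # Enqueue packets as (finish_tag, flow, length); always transmit the
    # packet with the smallest finish tag next.
    queue = []
    heapq.heappush(queue, (finish_tag("A", 0.0, 100), "A", 100))
    heapq.heappush(queue, (finish_tag("B", 0.0, 60), "B", 60))
    print(heapq.heappop(queue))          # flow B's shorter packet finishes first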
FQ Advantages
- FQ protects well-behaved flows from ill-behaved flows 
- Example: 1 UDP flow (10 Mbps) and 31 TCP flows sharing a 10 Mbps link
Alternative Implementations of Max-Min
- Deficit round-robin 
- Core-stateless fair queueing 
- label packets with their flow's rate 
- drop according to the labeled rates 
- check at the ingress to make sure rates are truthful 
- Approximate fair dropping 
- keep a small sample of previous packets 
- estimate rates based on this sample 
- apply dropping as above 
- wins because there are few large flows 
- per-elephant state, not per-mouse state 
- RED-PD: not max-min, but punishes big cheaters
Big Picture
- FQ does not eliminate congestion → it just manages the congestion 
- You need both end-host congestion control and 
 router support for congestion control
- end-host congestion control to adapt 
- router congestion control to protect/isolate
Explicit Rate Signaling (XCP)
- Each packet contains cwnd, RTT, and a feedback field 
- Routers indicate to flows whether to increase or decrease 
- give explicit rates/amounts for the increase or decrease 
- feedback is carried back to the source in the ACK 
- Separation of concerns 
- aggregate load 
- allocation among flows
XCP (continued)
- Aggregate 
- measures spare capacity (S) and average queue size (Q) 
- computes the desired aggregate change D = a*R*S - b*Q (R = average RTT) 
- Allocation 
- uses AIMD 
- positive feedback is the same for all flows 
- negative feedback is proportional to the current rate 
- when D = 0, reshuffle bandwidth 
- all changes are normalized by RTT 
- want equal rates, not equal windows
XCP (continued)
- Challenge 
- how to give per-flow feedback without per-flow state? 
- do you keep track of which flows you've signaled and which you haven't? 
- Solution 
- figure out the desired change 
- divide by the expected number of packets from the flow in the time interval 
- give each packet its share of the rate adjustment 
- the flow totals up all the rate adjustments 
- (a rough sketch follows below)
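A rough, simplified sketch of this idea for a single router control interval; the gain constants, the explicit NUM_FLOWS count, and the way the feedback is split per packet are simplifying assumptions here, not the full XCP specification (real XCP avoids counting flows by doing the division with the cwnd and RTT fields carried in each packet header).

    # Rough sketch of XCP-style feedback (simplified; not the full XCP spec).
    A, B = 0.4, 0.226            # illustrative gain constants
    NUM_FLOWS = 10               # assumed known here purely for illustration

    def aggregate_feedback(avg_rtt, spare_bw, queue_len):
        # Desired aggregate change in traffic per control interval: D = a*R*S - b*Q
        return A * avg_rtt * spare_bw - B * queue_len

    def per_packet_feedback(D, flow_rate, total_rate, expected_pkts_from_flow):
        # Positive feedback: equal additive increase per flow.
        # Negative feedback: decrease proportional to the flow's current rate.
        # Either way, the flow's share is spread over the packets it sends in
        # the interval, and the source adds up the feedback from its ACKs.
        if D >= 0:
            share = D / NUM_FLOWS
        else:
            share = D * (flow_rate / total_rate)
        return share / expected_pkts_from_flow

    # Illustrative use: positive aggregate feedback split across a flow's packets.
    D = aggregate_feedback(avg_rtt=0.1, spare_bw=1000.0, queue_len=50)
    print(per_packet_feedback(D, flow_rate=200.0, total_rate=1000.0,
                              expected_pkts_from_flow=20))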