1
MPAT: Aggregate TCP Congestion Management as a
Building Block for Internet QoS
  • Manpreet Singh, Prashant Pradhan and Paul Francis


2
By default, each TCP flow gets an equal share of bandwidth
3
Our goal: enable bandwidth apportionment among
TCP flows in a best-effort network
4
  • Transparency
  • No network support
  • ISPs, routers, gateways, etc.
  • Clients unmodified
  • TCP-friendliness
  • Total bandwidth consumed should be the same as with standard TCP flows

5
Why is it so hard?
  • The fair share of a TCP flow keeps changing
    dynamically over time.


[Figure: a server and a client separated by a bottleneck link carrying a lot of cross-traffic.]
6
Why not open extra TCP flows?
  • pTCP scheme [Sivakumar et al.]
  • Open more TCP flows for a high-priority
    application
  • Resulting behavior is unfriendly to the network
  • A large number of flows active at a bottleneck
    leads to significant unfairness in TCP

7
Why not modify the AIMD parameters?
  • mulTCP scheme [Crowcroft et al.]
  • Use different AIMD parameters for each flow
  • Increase more aggressively on successful
    transmission.
  • Decrease more conservatively on packet loss.
  • Drawback: unfair to the background traffic
  • Drawback: does not scale to larger differentials
  • Large number of timeouts
  • Two mulTCP flows running together compete
    with each other

8
Properties of MPAT
  • Key insight: send the packets of one flow
    through the open congestion window of another
    flow.
  • Scalability
  • Substantial differentiation between flows
    (demonstrated up to 95:1)
  • Holds fair share (demonstrated up to 100 flows)
  • Adaptability
  • Changing performance requirements
  • Transient network congestion
  • Transparency
  • Changes only at the server side
  • Friendly to other flows

9
MPAT: an illustration
[Figure: a server running MPAT sends two flows to an unmodified client.]
Total congestion window: 10

  Flow     Congestion window   Target (4:1)
  Flow 1   5                   8
  Flow 2   5                   2
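
The arithmetic behind the split is simple proportional apportionment; a minimal sketch (hypothetical function name, not from the MPAT implementation):

    # Split an aggregate congestion window among flows according to
    # target weights (illustrative only).
    def apportion(total_cwnd, weights):
        total = sum(weights)
        return [total_cwnd * w // total for w in weights]

    # The slide's example: total window 10, target ratio 4:1 -> [8, 2]
    print(apportion(10, [4, 1]))  # [8, 2]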
10
MPAT transmit processing
  • Send three additional red packets through the
    congestion window of the blue flow.

[Figure: red packets 1-5 occupy TCP1's cwnd; red packets 6-8 ride in TCP2's cwnd alongside blue packets 1-2.]
11
MPAT implementation
Maintain a virtual mapping (see the table and sketch below)
  • New variable: MPAT window
  • Actual window = min(MPAT window, recv window)
  • Map each outgoing packet to one of the congestion
    windows.

  Seqno (flow)   Congestion window
  1 (red)        Red
  2 (red)        Red
  3 (red)        Red
  4 (red)        Red
  5 (red)        Red
  6 (red)        Blue
  7 (red)        Blue
  8 (red)        Blue
  1 (blue)       Blue
  2 (blue)       Blue
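
A minimal sketch of the transmit-side bookkeeping (hypothetical Python structures, not the authors' code): each outgoing packet is charged to whichever congestion window currently has room, and the (flow, seqno) -> window mapping is recorded for use at ACK time.

    # Illustrative MPAT transmit side (all names are assumptions).
    class Window:
        def __init__(self, name, cwnd):
            self.name = name        # e.g. "red" or "blue"
            self.cwnd = cwnd        # packets this window may have in flight
            self.in_flight = 0

        def has_room(self):
            return self.in_flight < self.cwnd

    def transmit(packet_key, windows, mapping):
        # Charge one packet to the first window with room, if any.
        for w in windows:
            if w.has_room():
                w.in_flight += 1
                mapping[packet_key] = w   # remember which window carried it
                return w
        return None                       # aggregate window is full

    red, blue = Window("red", 5), Window("blue", 5)
    mapping = {}
    for seq in range(1, 9):              # the red flow has 8 packets to send
        transmit(("red", seq), [red, blue], mapping)
    # Result matches the table: red 1-5 charged to Red, 6-8 to Blue.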
12
MPAT receive processing
For every ACK received on a flow, update the
congestion window through which that packet was
sent.
  Seqno (flow)   Window charged
  1-5 (red)      Red
  6-8 (red)      Blue
  1-2 (blue)     Blue

[Figure: incoming ACKs for red packets 1-8 and blue packets 1-2; each ACK credits the cwnd (TCP1 or TCP2) that carried the packet.]
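
Continuing the hypothetical sketch from the transmit side: the ACK arrives on the packet's own flow, but the window credit is applied to whichever cwnd carried the packet.

    # Illustrative MPAT receive side (same assumed structures as above).
    def on_ack(packet_key, mapping):
        w = mapping.pop(packet_key)
        w.in_flight -= 1
        w.cwnd += 1.0 / w.cwnd    # additive increase, ~one packet per RTT

    on_ack(("red", 6), mapping)   # red packet 6 credits the *blue* window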
13
TCP-friendliness
Invariant: each congestion window
experiences the same loss rate.
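
A back-of-the-envelope reading of why this invariant gives friendliness, using the standard TCP response function (a gloss, not the paper's proof): if each of the aggregate's N windows sees the same loss rate p as the competing standard TCP flows, each window claims one fair share, so the aggregate claims exactly N shares no matter how MPAT apportions them among its flows.

    % Per-window throughput at loss rate p (standard TCP response function)
    T_i \approx \frac{1}{\mathrm{RTT}} \sqrt{\frac{3}{2p}}
    % N windows, each at the same p as a competing standard TCP flow
    \sum_{i=1}^{N} T_i \approx N \cdot \frac{1}{\mathrm{RTT}} \sqrt{\frac{3}{2p}}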
14
MPAT decouples reliability from congestion control
  • The red flow remains responsible for the
    reliability of all red packets
    (e.g. buffering, retransmission), as sketched below.
  • Does not break the end-to-end principle.
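
Continuing the hypothetical sketch: on a loss, the congestion penalty goes to the window that carried the packet, while retransmission stays with the packet's own flow (the Flow class and retransmit method are illustrative assumptions).

    class Flow:
        # Hypothetical per-flow reliability state (illustrative only).
        def __init__(self, name):
            self.name = name

        def retransmit(self, seqno):
            print(f"{self.name}: retransmitting packet {seqno}")

    def on_loss(packet_key, mapping, flows):
        flow_name, seqno = packet_key
        w = mapping.pop(packet_key)
        w.in_flight -= 1
        w.cwnd = max(1.0, w.cwnd / 2)       # congestion penalty: carrying window
        flows[flow_name].retransmit(seqno)  # reliability: owning flow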

15
Experimental Setup
  • Wide-area network testbed: PlanetLab
  • Experiments over the real Internet
  • User-level TCP implementation
  • Unconstrained buffer at both ends
  • Goal
  • Test the fairness and scalability of MPAT

16
Bandwidth Apportionment
[Graph] MPAT can apportion available bandwidth among
its flows, irrespective of the total fair share.
17
Scalability of MPAT
[Graph] A 95:1 throughput differential achieved in experiments
18
Responsiveness
MPAT adapts quickly to dynamically
changing performance requirements
19
Fairness
  • 16 MPAT flows
  • Target ratio 1 : 2 : 3 : ... : 15 : 16
  • 10 standard TCP flows in background

20
Applicability in the real world
  • Deployment
  • Enterprise network
  • Grid applications
  • Gold vs Silver customers
  • Background transfers

21
Sample enterprise network (runs over the
best-effort Internet)
[Diagram: New York (web server), New Delhi (application server), San Jose (database server), and Zurich (transaction server) connected over the best-effort Internet.]
22
Background transfers
  • Data that humans are not waiting for
  • Non-deadline-critical
  • Examples
  • Pre-fetched traffic on the Web
  • File system backup
  • Large-scale data distribution services
  • Background software updates
  • Media file sharing
  • Grid Applications

23
Future work
  • Benefit short flows
  • Map multiple short flows onto a single long flow
  • Warm start
  • Middlebox
  • Avoid changing all the senders
  • Detect shared congestion
  • Subnet-based aggregation

24
Conclusions
  • MPAT is a very promising approach for bandwidth
    apportionment
  • Highly scalable and adaptive
  • Substantial differentiation between flows
    (demonstrated up to 95:1)
  • Adapts very quickly to transient network
    congestion
  • Transparent to the network and clients
  • Changes only at the server side
  • Friendly to other flows

25
Extra slides
26
Reduced variance
[Graph] MPAT exhibits much lower variance in throughput
than mulTCP
27
Fairness across aggregates
[Graph] Multiple MPAT aggregates cooperate with
each other
28
Multiple MPAT aggregates running simultaneously
cooperate with each other
29
Congestion Manager (CM)
Goal: to ensure fairness
[Diagram: CM architecture. The CM sits at the sender between the flows and the network; a congestion controller and per-flow scheduler maintain per-aggregate statistics (cwnd, ssthresh, rtt, etc.); flows interact with CM through an API, callbacks, and feedback from the receiver.]
  • An end-system architecture for congestion
    management.
  • CM abstracts all congestion-related info into
    one place.
  • Separates reliability from congestion control.

30
Issues with CM
CM maintains one congestion window per
aggregate
[Diagram: five flows (TCP1-TCP5) multiplexed over a single Congestion Manager aggregate.]
The aggregate as a whole gets one TCP fair share, so
bandwidth allocation to CM flows is unfair
31
mulTCP
  • Goal: design a mechanism to give N times more
    bandwidth to one flow than another.
  • TCP throughput ≈ f(α, β) / (RTT · √p)
    (the algebra is sketched after this list)
  • α: additive increase factor
  • β: multiplicative decrease factor
  • p: loss probability
  • RTT: round-trip time
  • Set α = N and β = 1 − 1/(2N)
  • Increase more aggressively on successful
    transmission.
  • Decrease more conservatively on packet loss.
  • Drawback: does not scale with N
  • The loss process induced differs greatly from that
    of N standard TCP flows.
  • The controller becomes unstable as N increases.
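
Filling in the steady-state AIMD algebra behind these bullets (a textbook sketch, not taken from the slides):

    % Steady-state AIMD(α, β) throughput at per-packet loss rate p
    T(\alpha, \beta) \approx \frac{1}{\mathrm{RTT}}
        \sqrt{\frac{\alpha (1 + \beta)}{2 p (1 - \beta)}}
    % Standard TCP: α = 1, β = 1/2
    T(1, \tfrac{1}{2}) \approx \frac{1}{\mathrm{RTT}} \sqrt{\frac{3}{2p}}
    % mulTCP: α = N, β = 1 − 1/(2N)
    T\left(N, 1 - \tfrac{1}{2N}\right)
        \approx \frac{1}{\mathrm{RTT}} \sqrt{\frac{4N^2 - N}{2p}}
        \approx N \cdot \frac{1}{\mathrm{RTT}} \sqrt{\frac{2}{p}}

So in the loss-rate model a mulTCP flow gets roughly N times a standard flow's throughput; the timeouts and instability noted above are what break this scaling in practice.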

32
Gain in throughput of mulTCP
33
Drawbacks of mulTCP
  • Does not scale with N
  • Large number of timeouts
  • The loss process induced by a single mulTCP flow
    differs greatly from that of N standard TCP flows
  • Amplitude of window oscillations, and hence
    variance in throughput, increases with N
  • Unstable controller as N grows
  • Two mulTCP flows running together compete
    with each other

34
TCP Nice
  • Two-level prioritization scheme
  • Can only give less bandwidth to low-priority
    applications
  • Cannot give more bandwidth to deadline-critical
    jobs