1
Decongestion Control
  • By Nabil Samir Sorial
  • 27 December 2007

2
Outline
  • Congestion Control
  • Principles of Congestion Control.
  • TCP Congestion Control.
  • Fairness.
  • Decongestion control
  • Introduction
  • Online codes
  • CHOKe - A stateless active queue management
    scheme for approximating fair bandwidth
    allocation
  • Selfish behavior and stability of the internet:
    A Game-Theoretic Analysis of TCP
  • Benefits
  • Design
  • Challenges
  • References

3
Principles of Congestion Control
  • The causes and the costs of congestion
  • Three increasingly complex scenarios of
    congestion
  • Costs of congestion: the network is not fully
    utilized and performance is poor
  • Scenario 1: Two senders, a router with infinite
    buffers
  • λin < R/2 → throughput = sending rate (λin),
    finite delay
  • λin > R/2 → throughput = R/2, infinite delay
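  • A one-line sketch (our own illustration, not
    from the slides) of the scenario 1 relationship:
    per-connection throughput tracks the sending
    rate only up to half of the shared router rate R:

    # Per-connection throughput in scenario 1: two senders share a router
    # of outgoing rate R with infinite buffers. Beyond R/2 the extra
    # traffic only increases queuing delay, not throughput.

    def scenario1_throughput(lambda_in, R):
        return min(lambda_in, R / 2.0)

    print(scenario1_throughput(0.3, 1.0))   # 0.3: below R/2, throughput = send rate
    print(scenario1_throughput(0.8, 1.0))   # 0.5: capped at R/2, delay grows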

4
Principles of Congestion Control
  • Scenario 2: Two senders, a router with finite
    buffers
  • Packets will be dropped when the buffer is full
  • If a packet is dropped, the sender will
    retransmit it
  • Timeout large enough → R/3 original data, R/6
    retransmitted data
  • Retransmitting a packet that is not yet lost →
    the packet is forwarded twice, throughput R/4

5
Principles of Congestion Control
  • Scenario 3 Four senders, a router with finite
    buffers, multihop paths
  • Timeout/retransmission mechanism is used
  • If λin is large, the arrival rate of B-D traffic
    at R2 can be much larger than that of A-C traffic
  • The buffer at R2 fills with B-D packets, and the
    throughput of the A-C connection at R2 goes to
    zero
  • The work done by the first router is wasted

6
TCP Congestion Control
  • TCP uses end-to-end congestion control, IP layer
    provides no explicit feedback to the end systems
    regarding network congestion
  • Each sender limits the sending rate as a function
    of perceived network congestion
  • AIMD (Additive-Increase, Multiplicative-Decrease)
  • Additive increase: increase CongWin by 1 MSS
    every RTT
  • Multiplicative decrease: cut CongWin in half
    after a loss
  • Saw-toothed pattern

7
TCP Congestion Control
  • SS (Slow Start)
  • Instead of increasing its rate linearly, the TCP
    sender increases its rate exponentially during
    the initial phase (CongWin initialized to 1 MSS)
  • This continues until there is a loss event; then
    CongWin is cut in half and grows linearly
  • Reaction to a loss event
  • Triple duplicate ACKs → CongWin is cut in half,
    then increases linearly
  • Timeout → slow start phase until CongWin reaches
    one half of the value it had before the timeout,
    then CongWin grows linearly (TCP Reno)
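  • A minimal sketch (our own illustration, in
    Python) of a TCP Reno-style window update under
    these rules, assuming losses are reported either
    as triple duplicate ACKs or as a timeout:

    # Simplified TCP Reno-style congestion window update (units of MSS).
    # Assumption: losses are signalled to us as either "dupack" (triple
    # duplicate ACKs) or "timeout"; real TCP tracks much more state.

    class RenoWindow:
        def __init__(self):
            self.cong_win = 1.0      # start with 1 MSS (slow start)
            self.ssthresh = 64.0     # arbitrary initial threshold

        def on_ack(self):
            if self.cong_win < self.ssthresh:
                self.cong_win += 1.0                   # slow start: doubles per RTT
            else:
                self.cong_win += 1.0 / self.cong_win   # AIMD: +1 MSS per RTT

        def on_loss(self, kind):
            self.ssthresh = self.cong_win / 2.0
            if kind == "dupack":
                self.cong_win = self.ssthresh          # cut in half, then grow linearly
            else:  # timeout
                self.cong_win = 1.0                    # back to slow start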

8
Fairness
  • K TCP connections, passing through a bottleneck
    link with transmission rate R bps
  • Congestion control is fair if the average
    transmission rate of each connection is
    approximately R/K
  • Is TCP's AIMD algorithm fair?
  • Different TCP connections may start at different
    times and may have different window sizes at a
    given point in time
  • Assume two connections have the same MSS and RTT
  • If they have the same window size, then they have
    the same throughput

9
Fairness
  • The goal is to have the throughputs fall in
    somewhere near the intersection of the equal
    bandwidth line and the full bandwidth utilization
    line
  • Point A → jointly consumed bandwidth < R, both
    connections increase their windows by 1 MSS per
    RTT (equal-increase line from A toward B)
  • Point B → jointly consumed bandwidth > R, packet
    loss occurs, and connections 1 and 2 decrease
    their windows by a factor of 2 (point C)
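  • A small simulation sketch (an illustration with
    made-up numbers, not from the slides) of two
    AIMD flows sharing a link of rate R: both add
    1 MSS per RTT and both halve on a shared loss,
    so their windows drift toward the equal-bandwidth
    line:

    # Two AIMD flows sharing a bottleneck of capacity R (MSS per RTT).
    # When their combined window exceeds R, both halve (synchronized
    # loss), so the gap between them shrinks over time.

    R = 100.0
    w1, w2 = 10.0, 70.0          # unequal starting windows
    for rtt in range(200):
        w1 += 1.0                # additive increase
        w2 += 1.0
        if w1 + w2 > R:          # shared loss event at the bottleneck
            w1 /= 2.0            # multiplicative decrease
            w2 /= 2.0
    print(round(w1, 1), round(w2, 1))   # windows end up nearly equal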

10
Fairness
  • We assumed that only TCP connections traverse the
    bottleneck link and that connections have the
    same RTT value
  • These conditions are typically not met
  • The connections with a smaller RTT are able to
    grab available bandwidth more quickly and will
    enjoy higher throughput than the connections with
    larger RTTs

11
Decongestion control
  • Introduction
  • Congestion control is fundamental to network
    design; it provides traffic stability,
    performance, and fairness
  • TCP and TCP-like congestion control limit the
    transmission rate to explicitly avoid congestion
  • Fairness has been achieved either by using fair
    queuing at routers or by using a common
    congestion control protocol at end hosts
  • Fair queuing is expensive to implement, and
    end-host congestion control is far from optimal
  • Decongestion control instead relies upon greedy,
    high-speed transmission and aims to achieve
    better performance and fairness than TCP

12
Decongestion control
  • With efficient, high-speed erasure coding, there
    is no need to avoid packet loss
  • Decongestion control depends on fair dropping and
    erasure-coded data streams to be both efficient
    and fair
  • Each sender simply sends as fast as possible. If
    congested routers drop packets in a fair manner,
    each flow will receive its max-min fair
    throughput
  • If flows use efficient erasure coding, they will
    achieve goodput almost equal to their throughput,
    fully utilizing network bandwidth.
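  • A hedged sketch of the intuition (the demands,
    capacity, and 3% coding overhead below are
    hypothetical, not from the paper): with fair
    dropping, a flow's throughput is roughly its
    max-min fair share of the link, and with a
    near-ideal erasure code goodput falls only
    slightly below that throughput:

    # Max-min fair shares on one link via progressive filling (sketch).
    # 'demands' are the rates greedy senders push into the link (Mbps);
    # 'capacity' is the link rate; coding_overhead stands in for the
    # small goodput loss from erasure-coding redundancy.

    def max_min_shares(demands, capacity):
        shares = [0.0] * len(demands)
        active = list(range(len(demands)))
        remaining = capacity
        while active:
            level = remaining / len(active)
            # flows demanding less than the equal share are fully satisfied
            satisfied = [i for i in active if demands[i] <= level]
            if not satisfied:
                for i in active:
                    shares[i] = level
                break
            for i in satisfied:
                shares[i] = demands[i]
                remaining -= demands[i]
                active.remove(i)
        return shares

    coding_overhead = 0.03
    for rate in max_min_shares([2.0, 10.0, 10.0], 10.0):
        print(rate, "Mbps throughput ->",
              round(rate * (1 - coding_overhead), 2), "Mbps goodput")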

13
Decongestion control
  • Related issues
  • Online codes: an efficient, high-speed erasure
    code that may be used by a decongestion control
    protocol
  • CHOKe, a stateless active queue management
    scheme for approximating fair bandwidth
    allocation: decongestion control may use CHOKe
    as one of the mechanisms that achieve fairness
  • Selfish behavior and stability of the internet
    (a game-theoretic analysis of TCP): decongestion
    control uses a game-theoretic argument to show
    that greedy, high-speed transmission can achieve
    better performance

14
Online codes
  • The setting is exchanging information over a
    channel that may lose packets, for example using
    UDP over the Internet
  • FEC (forward-error correction) is a solution when
    there is a bound on how many packets the channel
    can lose
  • With FEC, the sender prepares an encoding of the
    original message and sends it over the channel
  • The receiver is assured to be able to recover the
    original message from the packets that it ends up
    receiving
  • The encoded message is bigger than the original
    message to make up for the expected losses
  • The receiver recovers the original message as
    soon as it receives a part of the encoding at
    least as large as the original message

15
Online codes
  • Consider a channel that loses no more than a
    fraction d of the packets, also referred to as a
    channel of capacity 1 - d
  • There are codes of rate R = 1 - d for
    transmission over such channels. This means that
    a message of n blocks can be encoded into n/R
    transmission blocks
  • Any n of the transmission blocks can then be used
    to decode the original message
  • Elias' codes take at least O(n log n) time to
    encode and decode
  • Tornado codes require linear encoding and
    decoding time
  • Online codes are linear encoding/decoding time
    codes, similar to Tornado codes
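  • A tiny worked example (the loss fraction and
    message size are hypothetical) of the fixed-rate
    sizing above:

    # Fixed-rate erasure code sizing (illustrative numbers only).
    d = 0.2                      # worst-case loss fraction of the channel
    R = 1.0 - d                  # code rate
    n = 100                      # message size in blocks
    transmitted = n / R          # blocks actually sent
    print(transmitted)           # 125.0: any 100 of these 125 blocks decode the message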

16
Online codes
  • Online codes target what is called the free
    erasure channel, which has no constraints on its
    loss rate
  • No fixed-rate erasure code could be used for this
    channel
  • Online codes have two key properties: local
    encodability and ratelessness
  • Local encodability is the property that each
    block of the encoding of a message can be
    computed independently from the others in
    constant time
  • Ratelessness is the property that each message
    has an encoding of practically infinite size
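  • A minimal sketch of these two properties (an
    approximation of the idea only, not Maymounkov's
    actual construction, which uses a carefully
    chosen degree distribution and an outer code):
    each check block is the XOR of a few randomly
    chosen message blocks, so it can be produced
    independently in constant time, and the stream
    of check blocks never runs out:

    import os, random

    BLOCK = 16
    message = [os.urandom(BLOCK) for _ in range(8)]   # 8 message blocks

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def check_block(seed):
        """Produce the check block named by 'seed', independently of all others."""
        rng = random.Random(seed)
        degree = rng.choice([1, 2, 3])                # toy degree distribution
        chosen = rng.sample(range(len(message)), degree)
        block = message[chosen[0]]
        for i in chosen[1:]:
            block = xor(block, message[i])
        return chosen, block

    # Rateless: any seed yields a valid check block, so the encoding
    # is effectively infinite in size.
    for seed in range(5):
        indices, block = check_block(seed)
        print(seed, indices, block.hex()[:8])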

17
CHOKe - A stateless active queue management
scheme for approximating fair bandwidth allocation
  • The problem is providing a fair bandwidth
    allocation to each of n flows that share the
    outgoing link of a congested router
  • The buffer at the outgoing link is a simple FIFO,
    shared by packets belonging to the n flows. CHOKe
    is a simple packet dropping scheme
  • CHOKe is a queue management algorithm that is
    stateless and easy to implement
  • There are two types of router algorithms:
    scheduling algorithms and queue management
    algorithms
  • Scheduling algorithms can provide a fair
    bandwidth allocation, but they are complex to
    implement

18
CHOKe - A stateless active queue management
scheme for approximating fair bandwidth allocation
  • RED (Random Early Detection)
  • RED is one of the queue management algorithms
  • It uses a single FIFO shared by all flows and
    drops arriving packets at random during periods
    of congestion
  • The drop probability increases with the level of
    congestion
  • RED is unable to penalize unresponsive flows
  • This is because the percentage of packets dropped
    from each flow over a period of time is almost
    the same.
  • Misbehaving traffic can take up a large
    percentage of the link bandwidth
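  • A minimal sketch of how RED's drop probability
    grows with congestion (illustrative thresholds;
    real RED also uses an exponentially weighted
    average of the queue length and per-drop
    counters):

    # RED-style drop probability as a function of queue occupancy (sketch).
    def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
        if avg_queue < min_th:
            return 0.0                    # no congestion: admit everything
        if avg_queue >= max_th:
            return 1.0                    # severe congestion: drop everything
        return max_p * (avg_queue - min_th) / (max_th - min_th)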

19
CHOKe - A stateless active queue management
scheme for approximating fair bandwidth allocation
  • CHOKe
  • CHOKe drops more of a misbehaving flow's packets
  • CHOKe (CHOose and Keep for responsive flows,
    CHOose and Kill for unresponsive flows)
  • When a packet arrives at a congested router,
    CHOKe draws a packet at random from the FIFO
    buffer and compares it with the arriving packet
  • If they both belong to the same flow, then both
    are dropped; otherwise the randomly chosen packet
    is left intact
  • The arriving packet is then admitted into the
    buffer with a probability that depends on the
    level of congestion
  • This probability is computed exactly as in RED

20
CHOKe - A stateless active queue management
scheme for approximating fair bandwidth allocation
  • The CHOKe algorithm (the slide's flowchart is
    not reproduced here; see the sketch below)
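  • A sketch of the admission logic from the
    previous slide (simplified: real CHOKe reuses
    RED's full probability calculation and queue
    thresholds; flow_of and red_drop_probability are
    placeholders here):

    import random

    # CHOKe decision for one arriving packet. 'queue' is the shared FIFO;
    # flow_of(pkt) extracts a flow identifier; red_drop_probability()
    # stands in for RED's congestion-based drop probability.

    def choke_arrival(pkt, queue, flow_of, red_drop_probability):
        if queue:
            victim = random.choice(queue)          # draw a packet at random
            if flow_of(victim) == flow_of(pkt):    # same flow: drop both
                queue.remove(victim)
                return False                       # arriving packet dropped too
        if random.random() < red_drop_probability():
            return False                           # dropped as in RED
        queue.append(pkt)                          # otherwise admit
        return True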

21
Selfish behavior and stability of the internet: A
Game-Theoretic Analysis of TCP
  • This paper attempts to answer the following
    fundamental question: if network end-points
    behaved in a selfish manner, would the stability
    of the Internet be endangered?
  • TCP Game
  • Each flow attempts to maximize the throughput it
    achieves by modifying its congestion control
    behavior; a combination of analysis and
    simulation determines whether the network
    operates efficiently
  • A TCP Game is defined in which TCP flows in a
    network can adjust their AIMD congestion behavior
  • These flows are allowed to freely change and set
    their congestion control parameters (α, β), where
    α is the additive-increase component and β is the
    multiplicative-decrease component
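  • A one-line sketch of the parameterized update
    each selfish flow tunes (our illustration; here β
    is read as the factor the window is multiplied
    by on loss, so standard TCP is roughly (1, 0.5)):

    # Generalized AIMD window update with parameters (alpha, beta),
    # the knobs a flow adjusts in the TCP Game.

    def aimd_step(window, alpha, beta, loss_this_rtt):
        return window * beta if loss_this_rtt else window + alpha

    w = 10.0
    w = aimd_step(w, alpha=1.0, beta=0.5, loss_this_rtt=False)   # 11.0
    w = aimd_step(w, alpha=1.0, beta=0.5, loss_this_rtt=True)    # 5.5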

22
Selfish behavior and stability of the internet: A
Game-Theoretic Analysis of TCP
  • The goal is to determine the congestion
    parameters (αE, βE) at the Nash equilibrium, and
    to evaluate behavior and efficiency there by
    measuring the average goodput and the average
    loss rate
  • The Nash equilibrium reflects a balance between
    the gains and the costs of aggressive behavior
  • Results from analysis and simulation of the TCP
    Game
  • Greedy end-point behavior can result in efficient
    network operation when TCP-Reno loss recovery is
    used in a network of drop-tail routers
  • With TCP-SACK loss recovery and RED, the result
    is low network goodput, high drop rates, or both,
    but very simple dropping algorithms such as CHOKe
    can restore efficiency

23
Decongestion Control
  • Benefits
  • Fairness and efficiency
  • Senders always transmit at the maximum available
    rate; fairness is ensured by dropping policies at
    congested routers
  • The coding rate between sender and receiver is
    tuned by dynamically adjusting it based on recent
    throughput rates
  • Simplified core infrastructure
  • Decongestion control enables significantly
    simpler router designs; idealized decongestion
    control only requires a fair dropping mechanism
  • Erasure coding can reduce the need for queuing in
    the network; coding schemes can provide similar
    goodput with coding buffer sizes on the same
    order as router buffers

24
Decongestion Control
  • Incentive compatibility
  • Decongestion control is more robust to malicious
    behavior due to its time independence
  • Senders adjust coding rates based upon reported
    throughput, not individual packet events
  • So they are not as sensitive to short-term packet
    behaviors as TCP

25
Design
  • Design of a decongestion control protocol, Achoo
  • Achoo sends erasure-coded packets as fast as
    possible between a sender and a receiver
  • Packets are labeled with unique, increasing
    sequence numbers
  • The receiver periodically acknowledges packet
    reception with information about the rate of
    reception
  • All routers implement a fair dropping policy to
    ensure that each flow receives its fair share of
    link bandwidth
  • Achoo's transmission behavior is controlled by
    two components at the sender: the decongestion
    controller and the transmission controller

26
Design
  • Decongestion controller
  • All data to be sent is divided into caravans
  • Each caravan consists of n fixed-size (1 KB, say)
    data blocks; n is picked dynamically
  • The role of the decongestion controller is to
    select the appropriate rate of transmission, rate
    of coding, and caravan size
  • Caravan size
  • The controller starts with a fixed-size caravan
    and begins the transmission loop
  • When a caravan is successfully delivered, the
    controller doubles the size of the next caravan
  • If after some fixed timeout (likely a function of
    the RTT) there is insufficient data in the socket
    buffer to fill a caravan, the caravan size is
    halved
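  • A hedged sketch of this caravan-sizing rule (the
    class and method names are our own; the slides
    only outline the policy): double after a
    successful delivery, halve when the socket
    buffer cannot fill a caravan in time:

    # Caravan sizing for the decongestion controller (sketch, in blocks).
    class CaravanSizer:
        def __init__(self, initial_blocks=64, min_blocks=1):
            self.size = initial_blocks    # current caravan size in blocks
            self.min = min_blocks

        def on_delivered(self):
            self.size *= 2                # caravan fully received and decoded

        def on_underfilled(self):
            # not enough buffered data within the RTT-derived timeout
            self.size = max(self.min, self.size // 2)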

27
Design
  • Type and rate of coding
  • Many strategies can be used for each caravan
  • Small caravans consist of duplicate data
    (ordinary redundancy)
  • Large caravans use rateless erasure codes
  • Senders will adjust the rate of coding in
    response to the successful delivery rates
    reported by the receivers
  • Transmission rate
  • Each physical interface has a maximum achievable
    rate
  • The controller determines which flows can put
    that capacity to the most effective use; the nth
    flow on an interface is given 1/n of the link
    capacity

28
Design
  • Transmission rate (cont)
  • It is possible that the current transmission
    rates for some of these flows are insufficient to
    capture available capacity (unbottlenecked)
  • If the reception rate for the last caravan was
    less than the transmission rate, the controller
    attempts a rate decrease
  • The newly available capacity is distributed among
    all unbottlenecked flows
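  • A rough sketch of the rate bookkeeping described
    on slides 27 and 28 (the split into bottlenecked
    and unbottlenecked flows follows the text, but
    the function and the example rates are
    hypothetical): a flow observed to be bottlenecked
    is pinned at its reception rate, and the capacity
    it leaves behind is split among the rest:

    # Per-interface transmission rate assignment (sketch, Mbps).
    def assign_rates(capacity, flows, reception_rate):
        """flows: flow ids; reception_rate: id -> last observed rate or None."""
        rates = {f: capacity / len(flows) for f in flows}      # start with 1/n each
        bottlenecked = {f: r for f, r in reception_rate.items()
                        if r is not None and r < rates[f]}
        free = capacity - sum(bottlenecked.values())
        unbottlenecked = [f for f in flows if f not in bottlenecked]
        for f in flows:
            if f in bottlenecked:
                rates[f] = bottlenecked[f]
            else:
                rates[f] = free / len(unbottlenecked) if unbottlenecked else 0.0
        return rates

    print(assign_rates(10.0, ["A-C", "A-D"], {"A-C": None, "A-D": 2.0}))
    # {'A-C': 8.0, 'A-D': 2.0}: the A-D flow is bottlenecked at 2 Mbps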

29
Design
  • Transmission controller
  • The job of the transmission controller is simple:
    to ensure that each caravan is delivered
    successfully
  • Each caravan is streamed (using the rate and
    coding specified by the decongestion controller)
  • Until the sender receives an ACK indicating that
    the entire caravan has been successfully received
    and decoded
  • The transmission controller can then start
    sending the next caravan
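  • A minimal sketch of that loop (all function
    names are placeholders for the sender machinery,
    not part of the slides): keep streaming coded
    packets for the current caravan at the chosen
    rate until an ACK reports it fully decoded, then
    move on:

    import time

    # Transmission controller loop (sketch). send_coded_packet,
    # caravan_acked and next_caravan are hypothetical hooks; 'rate' is
    # packets per second as chosen by the decongestion controller.
    def transmit(caravan, rate, send_coded_packet, caravan_acked, next_caravan):
        while caravan is not None:
            while not caravan_acked(caravan):      # stream until fully decoded
                send_coded_packet(caravan)
                time.sleep(1.0 / rate)             # pace at the chosen rate
            caravan = next_caravan()               # then start the next caravan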

30
Design
  • Example
  • A and B attempt to send data to C simultaneously
    using Achoo
  • This results in two flows of 10 Mbps each
    arriving at a router R
  • Both A's flow and B's flow achieve an end-to-end
    goodput of about 7 Mbps
  • A then decides to start a flow to D
  • A divides its 10 Mbps between its two flows, so
    the B-C flow can consume the remaining 9 Mbps on
    the R-C link
  • The A-D flow is bottlenecked (its path capacity
    is 2 Mbps while A sends at 5 Mbps)
  • A decreases the bandwidth of the A-D flow until
    the A-C flow becomes bottlenecked again at 7 Mbps

31
Challenges
  • What about coding overhead? Coding increases the
    end-to-end delay, packet overhead, and
    computational cost of decongestion control
  • What about the control channel? Connection
    establishment and teardown for Achoo are the same
    as for TCP; the challenge is to ensure that
    control messages are not lost in the fray of
    competing data packets
  • What about unconventional routing? In recent
    research, flows can be redirected along different
    paths as traffic conditions change, and packets
    can be sent across multiple paths simultaneously,
    so coding and transmission rates need to be
    rediscovered after route changes

32
References
  • Barath Raghavan and Alex C. Snoeren. Decongestion
    Control. University of California, San Diego.
  • P. Maymounkov. Online Codes. Technical Report
    TR2002-833, New York University, 2002.
  • A. Akella, R. Karp, C. Papadimitriou, S. Seshan,
    and S. Shenker. Selfish Behavior and Stability of
    the Internet: A Game-Theoretic Analysis of TCP.
    In Proceedings of ACM SIGCOMM, 2002.
  • R. Pan, B. Prabhakar, and K. Psounis. CHOKe - A
    Stateless Active Queue Management Scheme for
    Approximating Fair Bandwidth Allocation. In
    Proceedings of IEEE INFOCOM, 2000.
  • James F. Kurose and Keith W. Ross. Computer
    Networking: A Top-Down Approach.

33
Thanks!