1
A Low-State Packet Marking Framework for
Approximate Fair Bandwidth Allocation
  • Authors: Abhimanyu Das, Debojyoti Dutta, and Ahmed Helmy
  • Source: IEEE Communications Letters, vol. 8, issue 9, Sept. 2004, pp. 588–590
  • Reporter: Chao-Yu Kuo
  • Date: 2005/05/14

2
Outline
  • Background
  • Introduction
  • Our Architecture
  • Lightweight Distributed Fair Algorithm
  • Evaluations
  • Conclusions
  • Comments
  • Reference

3
Background
  • Local fair queueing
  • Global max-min fairness algorithm
  • Bitmap algorithms for counting the number of flows
  • Core-Stateless Fair Queueing (CSFQ)
  • RIO (RED with In/Out): twin RED algorithms for dropping
    packets, one for in packets and one for out packets

4
Introduction
  • The amount of non-congestion-reactive traffic is
    on the rise
  • This causes poor performance for well-behaved flows
  • One way to control misbehaving traffic is to
    enforce local fairness among flows
  • Locally fair policies are inadequate to
    simultaneously control misbehaving traffic and
    provide high network utilization
  • The goal is globally fair bandwidth allocation

5
Our Architecture
  • Edge routers: packets are marked as in or out
  • Core routers: twin RED algorithms for dropping packets,
    one for in packets and one for out packets
6
Our Architecture
  • Core routers
  • Use limited feedback from bottleneck routers
  • RIO (RED with In/Out) queueing

7
Our Architecture
RIO (RED with In/Out) queueing
  • avg_in: the average queue size for the in packets
  • avg_total: the average total queue size for all arriving
    packets (both in and out)
  • It drops out packets much earlier than it drops in packets
  • Thresholds: min_in > min_out, pmax_out > pmax_in,
    max_in > max_out (see the sketch after this list)
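A minimal Python sketch of the twin-RED drop decision; the concrete threshold values and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of RIO's twin-RED drop decision. Threshold values are
# assumptions chosen only to satisfy min_in > min_out,
# max_in > max_out, and pmax_out > pmax_in.

def red_drop_prob(avg, min_th, max_th, p_max):
    """Standard RED drop probability as a function of the average queue size."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return p_max * (avg - min_th) / (max_th - min_th)

def rio_drop_prob(is_in, avg_in, avg_total):
    """In packets are judged against avg_in with lenient thresholds,
    out packets against avg_total with aggressive ones, so out packets
    are dropped much earlier than in packets."""
    if is_in:
        return red_drop_prob(avg_in, min_th=40, max_th=80, p_max=0.02)
    return red_drop_prob(avg_total, min_th=10, max_th=40, p_max=0.10)
```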

8
Our Architecture
  • Edge routers
  • Based on stateless fair packet marking
  • Use a Core-Stateless Fair Queueing (CSFQ)-style scheme for
    aggregate packet marking
  • The queue, output link speed, and drop probability are
    replaced by the token bucket, token-bucket rate, and
    (1 − token allocation probability); see the sketch after
    this list
  • Use the Lightweight Distributed Fair Algorithm to configure
    the target token rates for the markers
  • Use a bitmap to count the number of flows
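As a rough illustration of the CSFQ analogy above: a packet is marked in with the token-allocation probability min(1, target_rate / est_rate) and out otherwise. The names est_rate and target_rate below are assumptions, not the authors' code.

```python
# Sketch of probabilistic in/out marking at the edge. A flow sending
# at or below the marker's target token rate has all packets marked
# in; a faster flow has the excess fraction marked out.

import random

def mark_packet(est_rate, target_rate):
    """est_rate: the flow's estimated arrival rate; target_rate: the
    token rate configured for this marker. Returns 'in' or 'out'."""
    p_in = min(1.0, target_rate / est_rate)  # token allocation probability
    return "in" if random.random() < p_in else "out"
```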

9
Lightweight Distributed Fair Algorithm
10
Lightweight Distributed Fair Algorithm
11
Lightweight Distributed Fair Algorithm
Receives feedback: M = {1, 2, 3}; Bw1 = Bw2 = Bw3 = 10;
N1 = 3, N2 = 4, N3 = 2
est_rate1 = 10/3 ≈ 3.3; est_rate2 = 10/4 = 2.5; est_rate3 = 10/2 = 5
tightlink = 2; fairrate2 = 2.5; M = M − {2} = {1, 3}
P = union(P, DS_tightlink)
temp = 1 (the intersection of the flows of bottlenecks 2 and 1):
Bw1 = 10 − 1 × 10/4 = 7.5; N1 = 3 − 1 = 2
temp = 1 (the intersection of the flows of bottlenecks 2 and 3):
Bw3 = 10 − 1 × 10/4 = 7.5; N3 = 2 − 1 = 1
12
Lightweight Distributed Fair Algorithm
Second round: M = {1, 3}; Bw1 = Bw3 = 7.5; N1 = 2, N3 = 1
est_rate1 = 7.5/2 = 3.75; est_rate3 = 7.5/1 = 7.5
tightlink = 1; fairrate1 = 3.75; M = M − {1} = {3}
P = union(P, DS_tightlink)
temp = 0 (the intersection of the flows of bottlenecks 1 and 3)
13
Lightweight Distributed Fair Algorithm
Third round: M = {3}; Bw3 = 7.5; N3 = 1
est_rate3 = 7.5/1 = 7.5
tightlink = 3; fairrate3 = 7.5; M = M − {3} = {}
P = union(P, DS_tightlink)
14
Lightweight Distributed Fair Algorithm
Outer loop: fairrate1 = 3.75, fairrate2 = 2.5, fairrate3 = 7.5
Markerrate1 = 3 × fairrate1 = 11.25
Markerrate2 = 1 × fairrate2 = 2.5
Markerrate3 = 1 × fairrate3 = 7.5
On receiving a new packet p: MarkerSet = {1, 2, 3};
MarkerBottleneck = 2 (the bottleneck whose fairrate_i is minimum);
Mark(p, 2.5)
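A minimal Python sketch of the iterative max-min computation traced in the example above. The concrete flow sets are assumptions chosen only to reproduce the example's numbers (one flow shared between bottlenecks 1 and 2, one between 2 and 3, none between 1 and 3).

```python
# Sketch of the iterative max-min fair-rate computation.
# bw: {link: capacity}; flows: {link: set of flow ids}.

def maxmin_fair_rates(bw, flows):
    bw = dict(bw)
    flows = {l: set(f) for l, f in flows.items()}
    remaining = set(bw)                      # M: unresolved bottlenecks
    fairrate = {}
    while remaining:
        # est_rate = remaining bandwidth / remaining flow count
        est = {l: bw[l] / len(flows[l]) for l in remaining if flows[l]}
        if not est:
            break
        tight = min(est, key=est.get)        # tightlink
        fairrate[tight] = est[tight]
        remaining.discard(tight)
        for l in remaining:                  # subtract flows shared with tightlink
            shared = flows[l] & flows[tight]     # "temp" in the example
            bw[l] -= len(shared) * est[tight]
            flows[l] -= shared
    return fairrate

bw = {1: 10, 2: 10, 3: 10}
flows = {1: {"a", "b", "c"}, 2: {"c", "d", "e", "f"}, 3: {"f", "g"}}
print(maxmin_fair_rates(bw, flows))
# -> {2: 2.5, 1: 3.75, 3: 7.5}, matching the walkthrough above
```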
15
Evaluations - topology
All the other links have a capacity of 50 Mb/s. Each link has a
propagation delay of 5 ms and a buffer size of 50 packets. We use
TCP Reno with a window size of 20 packets.
16
Evaluations - four UDP flows
All the flows are 1 Mb/s UDP sources
17
Evaluations - topology
4 TCP flows
2–10 1-Mb/s UDP flows
All the other links have a capacity of 50 Mb/s. Each link has a
propagation delay of 5 ms and a buffer size of 50 packets. We use
TCP Reno with a window size of 20 packets.
18
Evaluations - TCP and UDP
(Figure: throughput versus the number of UDP flows)
TCP: (1 − 0.2)/4 = 0.2 Mb/s per flow
19
Conclusions
  • We introduced a lightweight architecture for efficient
    network-wide max-min fair bandwidth allocation
  • It achieves high network utilization
  • Our framework
  • punishes misbehaving flows
  • dramatically improves the throughput of well-behaved TCP
    flows

20
Comments
  • Counting the number of flows does not take the traffic
    volume of each flow into account
  • Congestion arising around the edge routers is still a
    problem
  • Packets of a flow can travel along different paths, so the
    flow can get more bandwidth than its fair share

21
Reference Bitmap
  • Direct bitmap
  • b: the size of the bitmap
  • n: the number of active flows
  • The probability that a given flow hashes to a given bit is
    p = 1/b
  • The probability that no flow hashes to a given bit is
    p_z = (1 − p)^n ≈ (1/e)^(n/b)
  • E[z] = b · p_z ≈ b · (1/e)^(n/b)
  • Estimate of the number of active flows: n = b · ln(b/z)
  • P.S. substituting z = E[z] recovers n:
    b · ln(b/E[z]) = b · ln(e^(n/b)) = n
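A small Python sketch of the direct-bitmap estimator above; the hash choice is an assumption. Each flow id sets one of b bits, and the count of zero bits z gives the estimate n ≈ b · ln(b/z).

```python
# Sketch of direct-bitmap flow counting: hash each flow id to one of
# b bits, then estimate n ~= b * ln(b / z) from the z unset bits.

import hashlib
import math

def estimate_active_flows(flow_ids, b=1024):
    bitmap = bytearray(b)
    for fid in flow_ids:
        h = int(hashlib.sha1(str(fid).encode()).hexdigest(), 16) % b
        bitmap[h] = 1
    z = b - sum(bitmap)          # number of zero bits
    if z == 0:                   # bitmap saturated: estimate unusable
        return float("inf")
    return b * math.log(b / z)

# 500 distinct flows against a 1024-bit bitmap: estimate is close to 500.
print(round(estimate_active_flows(range(500))))
```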

22
Reference Bitmap
  • Algorithm for computing the estimate of the number of
    active flows for a multiresolution bitmap: first pick the
    base component that gives the best accuracy, then add
    together the estimates for the number of flows hashing to
    it and all higher-resolution components, and finally
    extrapolate

23
Reference Core-Stateless Fair Queueing
The queue, output link speed, and drop probability are replaced by
the token bucket, token-bucket rate, and
(1 − token allocation probability)
  • C: the token-bucket rate
  • prob = 1 − token allocation probability
  • drop(p) becomes: p is marked as out
24
Reference Core-Stateless Fair Queueing
  • estimate_rate(r_i, p)
  • t_i^k: arrival time of the k-th packet of flow i
  • T_i^k = t_i^k − t_i^(k−1)
  • K is a constant
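The slide gives only the definitions; the update rule itself comes from the CSFQ paper, which estimates the rate by exponential averaging: r_i ← (1 − e^(−T/K)) · (l/T) + e^(−T/K) · r_i, where l is the packet length. A minimal Python sketch under that assumption:

```python
# Sketch of CSFQ-style per-flow rate estimation by exponential
# averaging. On the k-th packet of flow i:
#   T = t_k - t_{k-1}
#   r_i = (1 - e^(-T/K)) * (len / T) + e^(-T/K) * r_i

import math

class FlowRateEstimator:
    def __init__(self, K=0.1):
        self.K = K                # averaging time constant (seconds)
        self.rate = 0.0           # current estimate r_i
        self.last_arrival = None  # t_i^(k-1)

    def estimate_rate(self, arrival_time, pkt_len):
        if self.last_arrival is None or arrival_time <= self.last_arrival:
            self.last_arrival = arrival_time
            return self.rate
        T = arrival_time - self.last_arrival
        self.last_arrival = arrival_time
        w = math.exp(-T / self.K)
        self.rate = (1 - w) * (pkt_len / T) + w * self.rate
        return self.rate
```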