Title: Packet Networks 2
Packet Networks (2)
- More routing algorithms
- ATM Fundamentals
- Congestion in networks
- Fair share queuing in Multiplexers
- Congestion Control
- Open loop controls
- Closed loop controls
Congestion and Packet Loss
- Traffic is assumed to be in connections or flows
- In a network connection through N routers:
- Competing traffic enters at each router
- Packets may be lost at each router due to buffer overflow
- A single flow can hog the available buffering and capacity
- Buffering strategy can control which traffic is lost:
- First-in, first-out (FIFO) queues
- Other priority techniques
FIFO Queuing
(a) Single class: arriving packets enter a packet buffer feeding the transmission link; packets are discarded when the buffer is full.
(b) Two class: class 1 packets are discarded only when the buffer is full; class 2 packets are discarded once a buffer threshold is exceeded.
- Lower loss for class 1 than for class 2
Figure 7.42
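The two-class discard rule can be sketched in a few lines of Python (the helper name and the capacity/threshold values are made-up examples):

```python
from collections import deque

def enqueue(buffer, packet, cls, capacity, threshold):
    """Two-class FIFO discard sketch.

    Class 1 is dropped only when the buffer is full; class 2 is
    dropped as soon as occupancy reaches the threshold.
    """
    limit = capacity if cls == 1 else threshold
    if len(buffer) >= limit:
        return False               # packet discarded
    buffer.append((cls, packet))
    return True

buf = deque()
# With capacity 4 and threshold 2, class 2 is shut out first:
accepted = [enqueue(buf, p, cls, capacity=4, threshold=2)
            for p, cls in [(0, 2), (1, 2), (2, 2), (3, 1), (4, 1)]]
print(accepted)  # → [True, True, False, True, True]
```

The third class-2 packet is refused because occupancy has reached the threshold, while class 1 packets still fit until the full capacity is reached.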
Head-of-Line Queuing
High-priority and low-priority packets enter separate buffers, each discarding when full; the low-priority queue is served only when the high-priority queue is empty.
- High priority is served first (lowest delay and loss)
- No guarantee that low priority is served at all!
- Subject to hogging among high-priority flows
Figure 7.43
Priority Queuing
Arriving packets pass through a tagging unit into a sorted packet buffer feeding the transmission link; packets are discarded when the buffer is full.
- Packets queued in order by tag
- Different tagging strategies:
- Priority
- Urgency (a function of priority and arrival time)
Figure 7.44
Fair Share Queuing
Packet flows 1 through n each have their own buffer; an approximated bit-level round-robin server shares the transmission link of C bits/second among them.
- Each flow has a separate buffer
- Each flow is guaranteed a proportional share of the output (no hogging)
- Flows can get a higher share if not every flow produces enough packets to fill its share
- Also called fluid flow
- Must be approximated packet by packet (using tags)
Figure 7.45
Fluid Flow and Approximation
One packet arrives at queue 1 and one at queue 2 at t = 0. In the fluid-flow system both packets are served at rate 1/2 and both complete service at t = 2. In the packet-by-packet system, queue 1's packet is served first at rate 1 while queue 2's packet waits; queue 2's packet is then served at rate 1.
Figure 7.46
Approximating Fluid Flow
- Generalize so R(t) is continuous, not discrete
- R(t) grows at a rate inversely proportional to N_active(t)
- R(t) counts rounds of 1 bit each from the active flows
- The round count R(t) increases by C·Δt / N_active
- Recomputed when needed or when N_active changes
- Arriving packets are tagged with the estimated round count at completion
- Equal to the round count when the packet starts plus the packet length:
- F(i,k) = max(F(i,k-1), R) + P(i,k)
- Subscripts denote flow and packet numbers
- P(i,k) is the length of the kth packet in the ith flow
Figure 7.47
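As a sketch (assuming the round count R at each arrival is already known; packet lengths are made-up), the finish-tag rule can be written as:

```python
def finish_tag(prev_finish, current_round, packet_len):
    # F(i,k) = max(F(i,k-1), R) + P(i,k): service can begin no earlier
    # than the current round R or the flow's previous finish tag.
    return max(prev_finish, current_round) + packet_len

# Two flows with packets arriving while R = 0:
f1a = finish_tag(0, 0, 100)    # flow 1, first 100-bit packet  -> 100
f1b = finish_tag(f1a, 0, 100)  # flow 1, second 100-bit packet -> 200
f2a = finish_tag(0, 0, 300)    # flow 2, one 300-bit packet    -> 300
# Packets are transmitted in increasing tag order:
order = sorted([("f1a", f1a), ("f1b", f1b), ("f2a", f2a)], key=lambda x: x[1])
print([name for name, _ in order])  # → ['f1a', 'f1b', 'f2a']
```

Sorting by tag gives the service order the fluid-flow system would produce, without simulating bit-by-bit rounds.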
Approximating Fair Queuing
At t = 0 queue 1 holds a one-unit packet and queue 2 a two-unit packet. In the fluid-flow system both packets are served at rate 1/2; queue 1's packet completes at t = 2, after which queue 2's packet is served at rate 1 and completes at t = 3. In packet-by-packet fair queueing, queue 1's packet (smaller finish tag) is served first at rate 1 while queue 2's packet waits; queue 2's packet is then served at rate 1 and completes at t = 3.
Figure 7.48
Weighted Fair Share
- Gives each flow a weight: the number of bits sent in each round
- Ideally, each flow i gets W_i/W of the output (where W is the sum of the weights of all active flows)
- Approximated by applying weights to the tags:
- F(i,k) = max(F(i,k-1), R) + P(i,k)/W_i
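A sketch of the weighted tag rule (assuming R = 0 at arrival; the packet lengths and weights are made-up):

```python
def weighted_finish_tag(prev_finish, current_round, packet_len, weight):
    # F(i,k) = max(F(i,k-1), R) + P(i,k)/W_i: a larger weight shrinks
    # the tag increment, so that flow finishes rounds sooner.
    return max(prev_finish, current_round) + packet_len / weight

# Equal-length packets; flow 2 has three times flow 1's weight:
tag1 = weighted_finish_tag(0, 0, 1.0, weight=1)  # -> 1.0
tag2 = weighted_finish_tag(0, 0, 1.0, weight=3)  # -> 1/3
# tag2 < tag1, so flow 2's packet is transmitted first.
print(tag2 < tag1)  # → True
```

This matches the packet-by-packet behavior in Figure 7.49, where the weight-3 queue is served before the weight-1 queue.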
Weighted Fair Queuing
Queue 1 has weight 1 and queue 2 has weight 3; each holds a one-unit packet at t = 0. In the fluid-flow system, queue 2's packet is served at rate 3/4 and queue 1's at rate 1/4; when queue 2's packet completes, queue 1's packet is served at rate 1 and completes at t = 2. In packet-by-packet weighted fair queueing, queue 2's packet is served first at rate 1 while queue 1's packet waits; queue 1's packet is then served at rate 1.
Figure 7.49
Network Congestion
- Packet arrivals at a node exceed its outgoing capacity
- Will cause loss with any finite buffer
- Will cause unacceptable delay
Figure 7.50
Consequences of Congestion
Throughput vs. offered load, with controlled and uncontrolled curves.
- Capacity spent delivering excessively delayed or lost packets is wasted
- Wasted capacity doesn't contribute to throughput
- Without controls, throughput decreases as load increases beyond a point. Unstable!
Figure 7.51
Types of Load Control
- Open loop load control
- Control is applied before traffic is allowed to enter
- Source specifies its traffic characteristics before it starts to send
- Appropriate with virtual circuit service, and most appropriate with virtual circuit routing
- Network must ensure sources comply with the contracted-for limits
- Closed loop control
- Network users monitor congestion and react to control loads when needed
- Can be used with datagram or VC networking
- Subject to dynamic instability
Characterizing a Traffic Flow
Bits per second vs. time, showing the peak rate, the average rate, and the burst time.
- A variable rate flow has many interesting parameters:
- average rate
- peak rate
- duration and shape of traffic bursts
Figure 7.52
Leaky Bucket Algorithms
Water is poured in irregularly; the leaky bucket drains at a constant rate.
- The bucket averages the traffic flow
- The drain rate is the allowed average flow rate
- The bucket size is the allowed peak rate times the burst duration
- The bucket overflows when either:
- the average rate exceeds the allowed rate
- the burst size exceeds that allowed
Figure 7.53
A Leaky Bucket Program
On arrival of a packet at time ta:
- X' = X - (ta - LCT)
- If X' < 0, set X' = 0
- If X' > L, the packet is nonconforming
- Otherwise the packet is conforming: set X = X' + I and LCT = ta
Where:
- X is the value of the leaky bucket counter
- X' is an auxiliary variable
- LCT is the last conformance time
- I = 1/sustainable rate (the increment per conforming packet)
- L characterizes the allowed burst size
Figure 7.54
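The flowchart above translates almost line for line into Python (variable names follow the slide; the arrival times and parameter values below are made-up examples):

```python
def leaky_bucket_police(arrivals, I, L):
    """Classify each arrival time as conforming or nonconforming.

    X   - leaky bucket counter
    LCT - last conformance time
    I   - increment per packet (1 / sustainable rate)
    L   - limit characterizing the allowed burst size
    """
    X, LCT = 0.0, 0.0
    verdicts = []
    for ta in arrivals:
        Xp = X - (ta - LCT)        # drain since the last conforming packet
        if Xp < 0:
            Xp = 0.0               # the bucket cannot go below empty
        if Xp > L:
            verdicts.append("nonconforming")  # would overflow: tag or drop
        else:
            X, LCT = Xp + I, ta    # admit the packet and update state
            verdicts.append("conforming")
    return verdicts

# A burst of back-to-back packets against I = 4, L = 6:
print(leaky_bucket_police([0, 1, 2, 3], I=4, L=6))
# → ['conforming', 'conforming', 'conforming', 'nonconforming']
```

The fourth packet finds the bucket too full (X' = 9 > L = 6) and is declared nonconforming, exactly the overflow case in the flowchart.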
Leaky Bucket Example
Bucket content rises by I at each conforming packet arrival and drains linearly between arrivals; a packet that would push the content past L + I is nonconforming and dropped.
Figure 7.55
Parameter Relationships
- MBS: maximum burst size
- MBS = 1 + L/(I - T)
- T: the interval between packets at the peak rate
- I: the interval between packets at the average rate
- L: characterizes the allowed burst size
Figure 7.56
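A quick worked example (the parameter values are made up; one common form of the formula adds a floor to keep the packet count integral):

```python
import math

def max_burst_size(L, I, T):
    # MBS = 1 + floor(L / (I - T)): one packet is always allowed,
    # plus one more for each I - T of burst tolerance L.
    return 1 + math.floor(L / (I - T))

# T = 1 (peak-rate spacing), I = 4 (average-rate spacing), L = 6:
print(max_burst_size(L=6, I=4, T=1))  # → 3
```

So a conforming source may send at most 3 back-to-back packets at the peak rate before the bucket forces it down to the sustainable rate.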
Dual Leaky Bucket for Policing ATM
Incoming traffic first passes leaky bucket 1 (PCR and CDVT), which polices variation in the peak rate; nonconforming cells are tagged or dropped. Untagged traffic then passes leaky bucket 2 (SCR and MBS), which polices the normal traffic; again, nonconforming cells are tagged or dropped.
PCR = Peak Cell Rate, SCR = Sustainable Cell Rate, MBS = Maximum Burst Size, CDVT = Cell Delay Variation Tolerance
Figure 7.57
Traffic Shaping
Three flows over the same interval: (a) a steady 10 Kbps stream, (b) 50 Kbps bursts, (c) 100 Kbps bursts.
- The same average rate (10 kbps) can have very different peaks
- Redistributing packets in time (shaping) can reduce peak/average variations
- Not always acceptable
Figure 7.58
Policing and Shaping in Networks
Traffic is shaped on exit from one network and policed on entry to the next (networks a, b, and c).
- Each network has a policy on the traffic allowed in a flow
- Traffic is policed (i.e., checked for conformance and rejected if nonconforming) on entry
- To avoid problems, traffic is shaped (i.e., bursts smoothed) before entry to a network
- The result is more predictable traffic flows and better loss and delay performance
Leaky Bucket Traffic Shaper
Incoming traffic enters a packet buffer of size N; a server drains it as shaped traffic.
- Arriving packets enter the bucket
- Packets are released at a constant rate
- Creates a constant rate flow (with gaps)
- Does not accommodate bursty traffic
Figure 7.59
Token Bucket Traffic Shaper
Tokens arrive periodically into a token bucket of size K; incoming traffic enters a packet buffer of size N, and the server releases a packet only when a token is available.
- Tokens represent permission to send
- Tokens accumulate in the bucket
- Accumulated tokens can authorize release of a burst
- The resulting flow is still bursty, but the burst size is smoothed
Figure 7.60
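A minimal sketch of the shaper (hypothetical helper; one token releases one packet, and the bucket starts full):

```python
def token_bucket_release_times(arrivals, K, rate):
    """Release times for packets shaped by a token bucket.

    K    - bucket size (max tokens, so max burst in packets)
    rate - tokens per second
    """
    tokens, last_t = float(K), 0.0   # bucket starts full
    out = []
    for ta in arrivals:              # arrivals must be sorted
        now = max(ta, last_t)        # FIFO: cannot pass earlier packets
        tokens = min(K, tokens + (now - last_t) * rate)
        if tokens < 1.0:             # wait for the next whole token
            now += (1.0 - tokens) / rate
            tokens = 1.0
        tokens -= 1.0
        last_t = now
        out.append(now)
    return out

# A burst of 4 packets at t = 0 with K = 2 and 1 token/second:
print(token_bucket_release_times([0.0, 0.0, 0.0, 0.0], K=2, rate=1.0))
# → [0.0, 0.0, 1.0, 2.0]
```

The first K packets go out immediately as a burst; the rest are paced at the token rate, which is exactly the "bursty but limited" behavior the slide describes.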
Token Bucket Shaper Flow
Cumulative output vs. time: b bytes may leave instantly, then r bytes per second.
- The shaper can release b bytes instantly (packet size times K)
- The shaper releases r bytes/second long term (packet size times the token arrival rate)
- Traffic out over any interval of length t is ≤ b + rt bytes
Figure 7.61
Traffic Through 2 Multiplexers
(a) The arriving traffic is bounded by A(t) = b + rt; later multiplexers see no backlog of packets. (b) Buffer occupancy at multiplexer 1 starts at b during a burst and drains to empty.
- The first multiplexer smooths the flow, provided its guaranteed rate R exceeds r
- Packets are buffered during a burst, with a maximum delay of b/R
- Second (and later) multiplexers need no buffering
- Packets exit as they arrive
Figure 7.62
Buffering Delay Through Fair Multiplexers
- If the multiplexers are fair and guarantee a rate R > r:
- The first multiplexer delays packets by at most b/R
- The others do not delay
- The delay for the chain is at most b/R
- If the multiplexers are packet-by-packet weighted fair:
- Packets can be delayed behind other packets using the link
- Total delay D < b/R + (H - 1)m/R + Σj M/Rj
- where m and M are the maximum packet sizes in the flow and in the network
- Rj is the rate of the jth link
- H is the number of hops (links)
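The delay bound is easy to evaluate numerically; a sketch with made-up parameter values (bits and bits/second):

```python
def wfq_delay_bound(b, R, H, m, M, link_rates):
    # D < b/R + (H - 1) * m/R + sum over links j of M/Rj:
    # burst drain time, per-hop packetization delay, and per-link
    # transmission time of a maximum-size network packet.
    return b / R + (H - 1) * m / R + sum(M / Rj for Rj in link_rates)

# 16 kb burst, 1 Mb/s guaranteed rate, 3 hops, 8 kb flow packets,
# 12 kb max network packets, three 10 Mb/s links:
D = wfq_delay_bound(b=16e3, R=1e6, H=3, m=8e3, M=12e3,
                    link_rates=[10e6, 10e6, 10e6])
print(D)
```

With these numbers the bound works out to about 35.6 ms, dominated by the burst and packetization terms at the guaranteed rate R.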
Closed Loop Control -- TCP
- TCP uses a flow control window to limit traffic
- Flow control window = min(receiver window, congestion window)
- TCP adjusts the congestion window via slow start:
- Window starts at 1 maximum-size segment (PDU)
- Window is increased for every acknowledgement
- Send one, get an ack, window goes to 2
- Send two, get 2 acks, window goes to 4
- Send 4, get 4 acks, window goes to 8
- When the window exceeds a congestion threshold, it is increased only when every packet in the window is acknowledged
- Send 8, get 8 acks, window goes to 9
- Threshold and window are adjusted when acks are lost
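The two growth regimes above can be sketched per round trip (a simplified model with no losses; the round count and threshold are example values):

```python
def congestion_window_growth(rounds, threshold, start=1):
    """Window size per round trip: doubles below the threshold
    (slow start), then grows by one segment per round
    (congestion avoidance)."""
    window, history = start, []
    for _ in range(rounds):
        history.append(window)
        if window < threshold:
            window *= 2        # every ack opens the window: doubling
        else:
            window += 1        # whole window acked: +1 segment
    return history

print(congestion_window_growth(rounds=7, threshold=8))
# → [1, 2, 4, 8, 9, 10, 11]
```

The exponential phase reaches the threshold in log2(threshold) rounds, after which growth is linear, matching the send-8-get-9 example on the slide.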
Window Variance in Time
Congestion window (in segments) vs. round-trip times: the window grows exponentially during slow start until it reaches the threshold, then grows linearly during congestion avoidance, and drops back when congestion occurs.
Figure 7.63
Traffic Types for ATM
The total bandwidth over time is divided among CBR, VBR, and ABR traffic, with ABR filling the remaining capacity.
- Constant Bit Rate (CBR): constant arrival rate, guaranteed service
- Variable Bit Rate (VBR): variable but characterized arrival rate, guaranteed service
- Available Bit Rate (ABR): variable rate, served only when capacity is available, subject to flow control
Figure 7.64
Controlling ABR Traffic
Source S sends data cells and forward RM cells through the switches to destination D; backward RM cells carry congestion information back to the source. (S = source, D = destination)
- The flow must be regulated to avoid buffer overflow
- Resource management (RM) cells accumulate congestion information
- Returned RM cells are used to regulate the flow
Figure 7.65
Detecting and Relaying Congestion
Data cells leave the source with EFCI = 0; a congested switch sets EFCI = 1 in passing data cells, and the indication is carried back to the source in RM cells whose CI bit changes from 0 to 1.
- The switch detects congestion through queue length or occupancy
- The source reacts to the backward RM flow:
- CI = 0: increase the flow
- CI = 1: reduce the flow
Figure 7.66
Explicit Rate Control
An RM cell leaves the source with ER = 10 Mbps; a switch that can only support 5 Mbps reduces it to ER = 5 Mbps, the next switch, able to support only 1 Mbps, reduces it to ER = 1 Mbps, and ER = 1 Mbps is returned to the source.
- Each switch determines the rate it can support
- Ideally a fair share of the available bandwidth, B/n
- ER is set to min(ER, switch rate)
- The source regulates its flow to the ER in the returned cells
Figure 7.67
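The min-along-the-path rule is a one-liner in spirit; a sketch with the rates from the figure:

```python
def explicit_rate(source_er, switch_rates):
    # Each switch on the path lowers ER to min(ER, rate it can support),
    # so the returned ER is the bottleneck rate of the path.
    er = source_er
    for rate in switch_rates:
        er = min(er, rate)
    return er

# RM cell starts at 10 Mbps; switches can support 5 and 1 Mbps:
print(explicit_rate(10, [5, 1]))  # → 1
```

The source then sends at the returned bottleneck rate, 1 Mbps in this example.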
Some Limitations
- Congestion happens and clears very quickly
- ATM cell transmit times are VERY short compared to latencies; high speed IP networks have the same problem
- Feedback indications (explicit, or implied from lost packets) are old data
- Dynamic routing of packets impacts congestion detection
- Not all packets go through the same switches
- Flow control changes can trigger routing changes (i.e., if link costs are based on congestion or delay)
- Closed loop control algorithms easily become unstable
- Alternating between wide open and shut down
- Still a good area for research work
For Next Week
- Read Chapter 8, TCP/IP Networking
- We will spend some time at the beginning of the class reviewing the exams and exam results