Title: Congestion Control
Congestion Control
- Too many packets in part of the network
- Performance degrades: this is congestion
- The network layer provides congestion control to ensure timely delivery of packets from source to destination
Congestion Control
- Causes of congestion
- Types of congestion control schemes
- Solving congestion
Causes of Congestion
- 1. Exhaustion of buffer space
- 2. Deadlock
- Congestion can be long-lived or transient
Exhaustion of Buffer Space
- Routers maintain packet queues (or buffers)
- Buffers fill up if
- Routers are too slow, OR
- The combined input traffic rate exceeds the outgoing traffic rate
- Insufficient buffer space leads to congestion
Exhaustion of Buffer Space (contd)
[Figure: multiple 50 Mbps input links feed packets into a router's buffer; the combined input rate exceeds the 50 Mbps outgoing rate, so the buffer fills]
Deadlock (or lockup)
- The first router cannot proceed until the second router does something, and the second router cannot proceed until the first router does something
- Both routers come to a complete halt and stay that way forever
Types of Deadlock
- 1. Store and Forward Lockup
- Direct Store and Forward Lockup
- Indirect Store and Forward Lockup
- 2. Reassembly Lockup
Direct Store and Forward Lockup
- Simplest lockup between two routers
- Example
- Suppose router A has five buffers, all of which are queued for output to router B
- Similarly, router B has five buffers, all of which are queued for output to router A
- If there is flow control on the link between routers A and B, then neither router can accept any incoming packets from the other. They are both stuck.
Direct Store and Forward Lockup (contd)
[Figure: routers A and B, each also connected to other routers; every buffer in A is queued for output to B and every buffer in B is queued for output to A, so neither can accept the other's packets]
Indirect Store and Forward Lockup
- The same thing can happen on a larger scale, around a cycle of several routers, each waiting for the next to free buffer space
Cycles in a Dependency Graph
- If we view the network as a graph with nodes and edges, a deadlock occurs when there is a cycle of dependencies in the graph (as sketched below)
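As an illustration (not part of the original slides), the cycle test can be written as a small depth-first search over a buffer-dependency graph; the graph shape and router names here are hypothetical.

```python
# Sketch: detect a cycle of buffer dependencies among routers.
# Edge A -> B means "router A cannot proceed until router B accepts packets".
def has_deadlock(deps):
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on current path / finished
    color = {node: WHITE for node in deps}

    def dfs(node):
        color[node] = GRAY
        for nxt in deps.get(node, []):
            if color.get(nxt, WHITE) == GRAY:         # back edge => cycle => deadlock
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in deps)

# Direct store-and-forward lockup: A waits on B and B waits on A.
print(has_deadlock({"A": ["B"], "B": ["A"]}))              # True
# Indirect lockup: a longer cycle A -> B -> C -> A.
print(has_deadlock({"A": ["B"], "B": ["C"], "C": ["A"]}))  # True
```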
Reassembly Lockup
- In some network layer implementations, the sending router must split messages into multiple network layer packets
- Receiving routers reassemble the split-up packets into a single packet
- If the receiving router's buffer fills up with incomplete packets, it cannot reassemble any more packets
Reassembly Lockup (contd)
[Figure: a reassembly buffer of 16 packet segments is full of segments from four inputs (sources s0-s3), but no packet has all of its parts present, so none can be reassembled; notation: sN = source N, pM = part M]
Types of Congestion Control
- Preventive
- The hosts and routers attempt to prevent congestion before it can occur
- Reactive
- The hosts and routers respond to congestion after it occurs and then attempt to stop it
Preventive and Reactive Control
- Preventive Techniques
- Resource reservation
- Leaky/Token bucket
- Isarithmic control
- Fair Queuing
- Reactive Techniques
- Load shedding
- Choke packets
Solving Congestion
- Host based
- Isarithmic Control
- Leaky Bucket
- Token Bucket
- Router Based
- Load Shedding
- Choke Packets
- Resource Reservation
- Fair Queuing
Load Shedding
- When a router becomes inundated with packets, it simply drops some; this is called load shedding
Intelligent Load Shedding
- Discarding packets does not need to be done randomly
- The router should take other information into account
- Possibilities
- Total packet dropping
- Priority discarding
- Age biased discarding
Total Packet Dropping
- When the buffer fills and a packet segment is dropped, drop all the rest of the segments from that packet, since they will be useless anyway
- Only works with routers that segment and reassemble packets
Priority Discarding
- Sources specify the priority of their packets
- When a packet must be discarded, the router chooses a low-priority packet
- Requires hosts to participate by labeling their packets with priority levels
Age-Biased Discarding
- When the router has to discard a packet, it chooses the oldest one in its buffer (a victim-selection sketch follows below)
- This works well for multimedia traffic, which requires short delays
- This may not work so well for data traffic, since more packets will need to be retransmitted
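A minimal sketch of victim selection under these intelligent load-shedding policies; the packet fields ("priority", "age") and the policy names are illustrative assumptions, not any specific router's API.

```python
import random

# Sketch: choosing which buffered packet to drop once the buffer is full.
def choose_victim(buffer, policy):
    if policy == "priority":
        return min(buffer, key=lambda p: p["priority"])   # drop a low-priority packet
    if policy == "age":
        return max(buffer, key=lambda p: p["age"])        # drop the oldest packet
    return random.choice(buffer)                          # plain (random) load shedding

buffer = [{"id": 1, "priority": 3, "age": 5},
          {"id": 2, "priority": 1, "age": 2}]
print(choose_victim(buffer, "priority")["id"])   # 2: lowest priority goes first
print(choose_victim(buffer, "age")["id"])        # 1: oldest goes first
```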
Random Early Detection (RED)
- TCP detects packet loss and slows the sending rate accordingly
- When the router queues start to fill, randomly drop some packets so that senders slow down before the queues overflow (see the sketch below)
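A hedged sketch of a RED-style drop decision, using the common textbook formulation: an exponentially weighted average queue length and a drop probability that rises linearly between two thresholds. All parameter values here are illustrative.

```python
import random

# Sketch of a RED-style drop decision (parameter values are illustrative).
MIN_TH, MAX_TH = 5, 15       # thresholds on the average queue length (packets)
MAX_P, WEIGHT = 0.1, 0.002   # maximum drop probability and averaging weight

avg_queue = 0.0              # exponentially weighted average queue length

def red_should_drop(current_queue_len):
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False                     # short queue: never drop
    if avg_queue >= MAX_TH:
        return True                      # long queue: always drop
    # in between: drop with probability rising linearly toward MAX_P,
    # so senders are nudged to slow down before the queue overflows
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```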
Choke Packets
- Each router monitors the utilization of each of its output lines
- Associated with each line is a variable u, which reflects the utilization of that line
- Whenever u moves above a given threshold value, the output line enters a warning state
- For each newly arriving packet, the router checks whether its output line is in the warning state
- If so, the router sends a choke packet back to the source
Choke Packets (contd)
- The data packet is tagged (by setting a bit in its header) so that it will not generate any more choke packets at downstream routers
- When the source host receives the choke packet, it is required to reduce its traffic generation rate to the specified destination by X
- Since other packets aimed at the same destination are probably already on their way to the congested location, the source host should ignore choke packets for that destination for a fixed time interval. After that, it resumes its response to choke packets. (A sketch of the utilization check and tagging follows.)
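A minimal sketch of the choke-packet mechanism. Maintaining u as an exponentially weighted average of instantaneous line activity (u = a*u + (1-a)*f) is a common formulation; the smoothing constant, threshold, and packet fields are illustrative assumptions.

```python
# Sketch of the choke-packet mechanism (constants and fields are illustrative).
A, THRESHOLD = 0.9, 0.8

class OutputLine:
    def __init__(self):
        self.u = 0.0                          # smoothed utilization of this line

    def sample(self, line_busy):
        # line_busy is 1 if the line was transmitting during the last sample, else 0
        self.u = A * self.u + (1 - A) * line_busy

    def in_warning_state(self):
        return self.u > THRESHOLD

def on_packet_arrival(packet, line, send_choke_to):
    # Send a choke packet back to the source only if the output line is in the
    # warning state and the packet has not already triggered one upstream.
    if line.in_warning_state() and not packet.get("choke_tag"):
        send_choke_to(packet["src"])          # ask the source to reduce its rate
        packet["choke_tag"] = True            # tag so later routers stay quiet
```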
Choke Packets Example
[Figure: a host sends packets through two routers; at the congested router (u > threshold) a choke packet is sent back to the source, and the data packets are tagged to prevent further choke-packet generation at later routers]
Isarithmic Control
- Each host is initially allocated a pool of permits
- Data packets carry permits between hosts
- A host cannot send a packet unless it has at least one permit
- If no permit is available, the packet waits in the host until a permit becomes available.
Isarithmic Control Example
[Figure: at time 1, host A holds 3 permits and host B holds 0; A sends a data packet (spending one permit), so at time 2 A holds 2 permits; when the packet arrives at time 3, A holds 2 permits and B holds 1]
Isarithmic Control (contd)
- Isarithmic control is implemented at the hosts, not the routers
- Isarithmic control guarantees that the number of packets in the network will never exceed the number of permits initially present
Isarithmic Control Problem
- If a host does not send any packets, i.e. it just holds onto its permits, then the network will not be utilized fully
- Solution: place a limit on the number of permits a host may keep. If the host has more than its allowed number of permits, it must transmit the excess permits to other hosts by piggybacking them onto other data packets or by using special permit-transport packets (a host-side sketch follows below)
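A host-side sketch of isarithmic control under the rules above: each transmitted packet spends a permit, each received packet delivers one, and permits beyond a maximum are passed on. The class shape, MAX_PERMITS value, and network interface (deliver, send_permit) are hypothetical.

```python
from collections import deque

# Host-side sketch of isarithmic control (interface and constants illustrative).
MAX_PERMITS = 4

class Host:
    def __init__(self, initial_permits):
        self.permits = initial_permits
        self.waiting = deque()                # packets waiting for a permit

    def send(self, packet, network):
        self.waiting.append(packet)
        while self.waiting and self.permits > 0:
            self.permits -= 1                 # each data packet carries one permit
            network.deliver(self.waiting.popleft())

    def receive(self, packet, network):
        self.permits += 1                     # the arriving packet hands over its permit
        if self.permits > MAX_PERMITS:        # never hoard more than the allowed number
            self.permits -= 1
            network.send_permit()             # ship the excess in a permit-transport packet
```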
Isarithmic Control Example
[Figure: with Max-Permits = 4, at time 1 host A holds 1 permit and host B holds 3; A sends a data packet, so at time 2 B holds 4 permits and reaches the limit; at time 3 B returns the excess permit in a permit-transport packet, leaving A with 1 permit and B with 3]
Resource Reservation
- Primarily for connection-oriented networks
- During connection setup
- Request resources (e.g., buffer space, connection bandwidth) from the network
- If the network has enough available resources to support the new connection, the connection will be established (an admission-control sketch follows below)
- Otherwise, the connection will be rejected
Resource Reservation Example
[Figure: Src -> Router -> Router -> Dest, with 4 Mbps, 6 Mbps, and 10 Mbps available on the three links]
Case 1: the source attempts to connect to the destination and reserve 4 Mbps for the connection. Result: connection accepted; there is enough bandwidth available, and the available link bandwidths are updated.
Case 2: the source attempts to connect to the destination and reserve 5 Mbps for the connection. Result: failure; there is not enough bandwidth available on one of the links.
Resource Reservation (contd)
- Once a connection is accepted, the host must use only the amount of resources reserved. It may not use more than that.
- What if the host is malicious and attempts to use more network resources than it reserved?
Leaky Bucket
- Used in conjunction with resource reservation to police the host's reservation
- At the host-network interface, allow packets into the network at a constant rate
- Packets may be generated in a bursty manner, but after they pass through the leaky bucket, they enter the network evenly spaced (a sketch follows below)
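A minimal leaky-bucket sketch: bursty arrivals are buffered (or dropped on overflow) and released at a constant rate. The rate, capacity, and send() callback are illustrative assumptions.

```python
import time
from collections import deque

# Minimal leaky-bucket sketch (rate, capacity, and interface are illustrative).
class LeakyBucket:
    def __init__(self, rate_pps, capacity):
        self.interval = 1.0 / rate_pps        # constant time between departures
        self.capacity = capacity              # bucket (buffer) size in packets
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)         # buffer the bursty arrival
        # else: the bucket overflows and the packet is dropped

    def drain(self, send):
        while self.queue:
            send(self.queue.popleft())        # packets enter the network evenly spaced
            time.sleep(self.interval)
```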
Leaky Bucket Analogy
[Figure: packets from the host drip through a leaky bucket into the network at a steady rate]
Leaky Bucket (contd)
- The leaky bucket is a traffic shaper: it changes the characteristics of the packet stream
- Traffic shaping makes traffic more manageable and more predictable
- Usually the network tells the leaky bucket the rate at which it may send packets when the connection begins
Leaky Bucket: Does Not Allow Bursty Transmissions
- In some cases, we may want to allow short bursts of packets to enter the network without smoothing them out
- For this purpose we use a token bucket, which is a modified leaky bucket
Token Bucket
- The bucket holds tokens instead of packets
- Tokens are generated and placed into the token bucket at a constant rate
- When a packet arrives at the token bucket, it is transmitted if there is a token available. Otherwise it is buffered until a token becomes available.
- The token bucket has a fixed size, so when it becomes full, subsequently generated tokens are discarded (a sketch follows below)
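A minimal token-bucket sketch: tokens accumulate at a fixed rate up to the bucket size, a packet is sent only when a token is available, and surplus tokens are discarded. The parameters and calling convention are illustrative assumptions.

```python
import time

# Minimal token-bucket sketch (parameters and interface are illustrative).
class TokenBucket:
    def __init__(self, tokens_per_sec, bucket_size):
        self.rate = tokens_per_sec
        self.size = bucket_size               # tokens beyond this are discarded
        self.tokens = bucket_size             # a full bucket lets a short burst through
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1                  # spend one token per transmitted packet
            return True
        return False                          # caller buffers the packet until a token appears
```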
Token Bucket (contd)
[Figure: packets from the host pass through a token bucket fed by a token generator that produces one token every T seconds, then enter the network]
Token Bucket vs. Leaky Bucket
Case 1: short burst arrivals
[Figure: timelines (time units 0-6) comparing arrival times at the bucket with departure times from a leaky bucket (rate 1 packet per 2 time units, size 4 packets) and from a token bucket (rate 1 token per 2 time units, size 2 tokens)]
Token Bucket vs. Leaky Bucket (contd)
Case 2: large burst arrivals
[Figure: timelines (time units 0-6) comparing arrival times at the bucket with departure times from a leaky bucket (rate 1 packet per 2 time units, size 2 packets) and from a token bucket (rate 1 token per 2 time units, size 2 tokens)]
Router-Based Congestion Management
- Isarithmic control, the token bucket, and the leaky bucket manage traffic from the end host.
- But what if we do not trust the incoming traffic to behave?
- We must manage congestion within the router
Fair Queuing
- If one source is sending too many packets, can we allow other sources to continue and just drop the bad source's packets?
- First cut: round-robin
Round-Robin
- Advantages: protection among flows
- Misbehaving flows will not affect the performance of well-behaving flows
- Misbehaving flow: a flow that does not implement congestion control
- Disadvantages
- More complex: must maintain a queue per flow per output instead of a single queue per output
- Biased toward large packets: a flow receives service proportional to the number of packets it sends, not the number of bytes (a per-flow scheduler sketch follows below)
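A minimal sketch of per-flow round-robin queuing at an output port; identifying a flow by its (source, destination) pair is an illustrative assumption.

```python
from collections import deque

# Per-flow round-robin sketch for one output port.
class RoundRobinScheduler:
    def __init__(self):
        self.queues = {}                      # one queue per flow
        self.rotation = deque()               # flows currently backlogged

    def enqueue(self, packet):
        flow = (packet["src"], packet["dst"])
        if flow not in self.queues:
            self.queues[flow] = deque()
            self.rotation.append(flow)
        self.queues[flow].append(packet)      # a misbehaving flow only fills its own queue

    def dequeue(self):
        if not self.rotation:
            return None                       # nothing to send
        flow = self.rotation.popleft()
        packet = self.queues[flow].popleft()
        if self.queues[flow]:
            self.rotation.append(flow)        # still backlogged: keep it in the rotation
        else:
            del self.queues[flow]
        return packet
```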
Idealized Flow Model
- For N sources, we would like to give each host or input source 1/N of the link bandwidth
- Imagine we could squeeze fractions of bits onto the link at once (fluid model)
- This solution would interleave the bits of all flows on the link at the same instant
Fluid Model vs. Packet Model
Approximate Bit-by-Bit RR
- Virtual time is incremented each time a bit is transmitted for all flows
- If we have 3 active flows and transmit 3 bits, we increment virtual time by 1.
- At each packet arrival, compute the virtual time at which the packet would have exited the router under bit-by-bit round-robin.
- This is the packet's finish time
Approximate Bit-by-Bit RR (contd)
- The algorithm selects the queue with the packet that has the lowest finish time
- This is fair queuing (a sketch follows below)