Title: COM 360
1 COM 360
2 Chapter 6: Congestion Control and Resource Allocation
3 Allocating Resources
- How do we effectively allocate resources among a collection of competing users?
- These resources include the bandwidth of the links and the buffers on the routers or switches where packets are queued awaiting transmission.
- Packets contend at a router for the use of a link.
- When too many packets are queued waiting for the same link, the queue overflows and packets are dropped. When this happens often, the network is said to be congested.
- Most networks provide congestion-control mechanisms.
4 Allocating Resources
- Congestion control and resource allocation are two sides of the same coin.
- If a network actively allocates resources, such as scheduling a virtual circuit, then congestion may be avoided.
- Allocating network resources is difficult because the resources are distributed throughout the network.
- On the other hand, a host can send as much data as it wants and the network can recover from congestion if it occurs. This is the easier approach, but it can be disruptive.
- Thus congestion control and resource allocation involve both hosts and network elements, such as routers, as well as queuing algorithms.
5 Issues in Resource Allocation
- Resource allocation is complex and is partially implemented in routers or switches and partially in the transport protocol running on the end hosts.
- End systems use signaling protocols to convey their resource requirements to network nodes, which reply with information about availability.
6 Terminology
- Resource allocation is the process by which network elements try to meet the competing demands that applications have for network resources.
- Congestion control describes the effort network nodes make to respond to overload conditions.
- Flow control involves keeping a fast sender from overflowing a slow receiver.
- Congestion control is intended to keep a set of senders from sending too much data into the network when resources are lacking at some point.
7 Network Model
- Packet-switched network: the problem is the same for routers or switches on a network or an internet.
- A source may have sufficient capacity to send a packet on its outgoing link, but an intermediate link may have heavy traffic.
- For example, two high-speed links may feed into a low-speed link, as seen in the next diagram.
8 Congestion in a Packet-Switched Network
9 Congestion Control
- Congestion control is not the same as routing, and routing around a congested link does not always solve the problem.
- In the previous example, it is not possible to route around the congested router; such a router is referred to as a bottleneck.
10 Connectionless Flows
- In the Internet model, IP provides a connectionless datagram delivery service, and TCP implements an end-to-end connection abstraction.
- Datagrams are switched independently, but usually a stream between a particular pair of hosts flows through a particular set of routers.
- The idea of a flow (a sequence of packets following the same route) is an important abstraction in congestion control.
11 Connectionless Flows
- Flows can be defined host-to-host or process-to-process.
- A flow is similar to a channel, but a flow is visible to routers inside the network, whereas a channel is an end-to-end abstraction.
- A flow can be implicitly defined or explicitly established, like a connection.
12 Multiple Flows
Multiple flows passing through a set of routers.
13 Taxonomy
- Resource allocation mechanisms can be characterized as:
- Router-Centric versus Host-Centric
- Reservation-Based versus Feedback-Based
- Window-Based versus Rate-Based
14 Router-Centric vs. Host-Centric
- In a router-centric design, each router takes responsibility for deciding when packets are forwarded and selecting which packets are dropped, as well as for informing the hosts generating the traffic how many packets they are allowed to send.
- In a host-centric design, the end hosts observe network conditions and adjust their behavior accordingly.
- These two designs are not mutually exclusive.
15 Reservation-Based versus Feedback-Based
- Resource allocation mechanisms are sometimes classified according to whether they use reservations or feedback.
- In a reservation-based system, the end host asks the network for a certain capacity at the time a flow is established. Each router allocates enough resources or rejects the flow.
- In a feedback-based system, the end hosts begin sending data and adjust their sending rate according to the feedback they receive.
16 Window-Based versus Rate-Based
- Both flow-control and resource allocation mechanisms need a way to express to the sender how much data it can transmit. They do this with either a window or a rate.
- In a window-based transport, such as TCP, the receiver advertises a window to the sender. This limits how much data can be sent: a form of flow control.
- A rate can also be used to control the sender's behavior: the receiver says it can process a certain number of bits per second, and the sender adheres to this rate.
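The two expressions above are interconvertible given the round-trip time: a window of W bytes outstanding per RTT implies a rate of W / RTT. A minimal sketch of this conversion, with illustrative numbers:

```python
# Window-based and rate-based limits are two views of the same thing:
# a window of W bytes outstanding over a round-trip time RTT sustains
# a sending rate of W / RTT bytes per second.

def window_to_rate(window_bytes: float, rtt_seconds: float) -> float:
    """Sending rate (bytes/second) implied by a sliding window."""
    return window_bytes / rtt_seconds

def rate_to_window(rate_bytes_per_s: float, rtt_seconds: float) -> float:
    """Window (bytes) needed to sustain a given rate."""
    return rate_bytes_per_s * rtt_seconds

# A 64 KB window over a 100 ms RTT sustains 655,360 bytes/s.
print(window_to_rate(64 * 1024, 0.100))  # 655360.0
```

This is why a fixed window throttles a sender more on long-RTT paths than on short ones.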
17 Evaluation Criteria
- How does a network effectively and fairly allocate its resources?
- Effectiveness and fairness are the two criteria by which we can evaluate whether a resource allocation mechanism is a good one.
18 Effective Resource Allocation
- Consider the two principal network metrics: throughput and delay (latency).
- It may appear that increasing throughput means reducing delay, but that is not always the case.
- One way to increase throughput is to admit as many packets as possible, driving the utilization up to 100%.
- But increasing the number of packets increases the length of the queues, which means packets are delayed longer in the network.
19 Power of a Network
- The power of the network describes this relationship of throughput and delay:
- Power = Throughput / Delay
- This is based on M/M/1 queues (one server and a Markovian distribution of packet arrivals and service).
- The model assumes infinite queues, but real networks have finite buffers and occasionally drop packets.
- The objective is to maximize this ratio, which is a function of the load on the network.
- Ideally, the resource allocation mechanism operates at the peak of this curve.
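The ratio above can be made concrete for an M/M/1 queue, where mean delay is 1 / (mu - lambda) for service rate mu and arrival rate lambda. A sketch with illustrative units (the specific rates are assumptions for the example):

```python
# Power = Throughput / Delay for an M/M/1 queue.
# With arrival rate lam and service rate mu, delay = 1 / (mu - lam),
# so power = lam * (mu - lam), which peaks at lam = mu / 2.

def power(lam: float, mu: float) -> float:
    if lam >= mu:
        return 0.0                 # unstable: queue (and delay) grows without bound
    delay = 1.0 / (mu - lam)       # mean M/M/1 delay
    throughput = lam
    return throughput / delay

mu = 1.0
loads = [i / 10 for i in range(10)]          # offered loads 0.0 .. 0.9
best = max(loads, key=lambda lam: power(lam, mu))
print(best)  # 0.5: power peaks at half the service rate
```

This illustrates the slide's point: pushing utilization toward 100% does not maximize power, because delay grows faster than throughput past the knee of the curve.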
20 Power Curve
21 Effective Resource Allocation
- Ideally, we want to avoid the throughput going to zero because the system is thrashing.
- We want a system that is stable, where packets continue to get through the network even when the network is operating under heavy load.
- If the mechanism is not stable, the network may experience congestion collapse.
22 Fair Resource Allocation
- Fairness presumes that a fair or equal share of the bandwidth is allocated to each flow.
- Raj Jain has proposed a metric to quantify the fairness of a congestion-control mechanism (see the formula on p. 461).
- Should we consider the length of the paths being compared?
- What is fair when one four-hop flow is compared with three one-hop flows?
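The metric referenced above is Jain's fairness index, (sum x_i)^2 / (n * sum x_i^2) for n flow throughputs x_i. A small sketch:

```python
# Jain's fairness index for n flows with throughputs x_1..x_n:
#   f = (sum x_i)^2 / (n * sum x_i^2)
# It is 1.0 when all flows get equal shares and approaches 1/n
# when a single flow takes everything.

def jain_fairness(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    return (s * s) / (n * sum(x * x for x in throughputs))

print(jain_fairness([1, 1, 1, 1]))               # 1.0: perfectly fair
print(jain_fairness([4, 0.001, 0.001, 0.001]))   # close to 0.25: one flow dominates
```

Note the index is ratio-based, so it is independent of the absolute scale of the throughputs.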
23 Fairness
One four-hop flow competing with three one-hop flows.
24 Queuing Disciplines
- Each router must implement some queuing algorithm that governs how packets are buffered while waiting to be transmitted.
- The queuing algorithm allocates both bandwidth (which packets get transmitted) and buffer space (which packets get discarded).
- It also directly affects delay (latency) by determining how long a packet waits to be transmitted.
- Two common queuing algorithms are FIFO and Fair Queuing (FQ).
25 FIFO
- FIFO (first-in, first-out): the first packet into the router is the first to be transmitted.
- Since the amount of buffer space is finite, if a packet arrives and the buffer is full, the router discards it.
- This is sometimes called tail drop, since packets that arrive at the tail end of the FIFO are dropped.
26 FIFO
a) FIFO queuing; b) tail drop at a FIFO queue.
27 FIFO and Priority
- FIFO is the simplest algorithm and is currently the most widely used in Internet routers.
- A simple variation is a priority queue. The idea is to mark each packet with a priority (in the IP Type of Service (TOS) field).
- The routers implement multiple FIFO queues, one for each priority class, and transmit from the highest-priority queue first.
- This can cause starvation, where low-priority packets never get serviced.
- Priority queuing is used to give routing update packets the highest priority.
28 Fair Queuing (FQ)
- Fair queuing maintains a separate queue for each flow currently being handled by the router.
- The router services these queues in round-robin order, giving each a chance in turn.
- Since the traffic sources do not know the state of the router, FQ must still be used in conjunction with a congestion-control mechanism.
29 Fair Queuing (FQ)
A separate queue is maintained for each flow.
30 Fair Queuing Example
- Packets with earlier finishing times are sent first.
- Sending of a packet already in progress is completed.
The algorithm selects both packets in a) from flow 1 to be transmitted, because of their earlier finishing times. In b) the router has already begun to send a packet from flow 2 when the packet from flow 1 arrives.
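The finish-time bookkeeping behind this example can be sketched as follows. This is a simplification that uses each packet's length directly and ignores the bit-by-bit round-number refinement of full fair queuing; the packet lengths and arrival times are illustrative:

```python
# Simplified fair-queuing finish times: for each flow,
#   F_i = max(F_{i-1}, A_i) + P_i
# where A_i is the packet's arrival time and P_i its length.
# The router transmits the queued packet with the smallest F next.

def finish_time(prev_finish: float, arrival: float, length: float) -> float:
    return max(prev_finish, arrival) + length

# Flow 1: two packets of length 100, both arriving at t = 0.
f1_a = finish_time(0, 0, 100)     # 100
f1_b = finish_time(f1_a, 0, 100)  # 200
# Flow 2: one packet of length 250, arriving at t = 0.
f2_a = finish_time(0, 0, 250)     # 250

# Both flow-1 packets have earlier finish times, so they go first,
# matching case a) in the slide.
order = sorted([("f1_a", f1_a), ("f1_b", f1_b), ("f2_a", f2_a)],
               key=lambda p: p[1])
print([name for name, _ in order])  # ['f1_a', 'f1_b', 'f2_a']
```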
31 TCP Congestion Control
- TCP sends packets into the network without a reservation and then reacts to observable events that occur.
- TCP assumes FIFO queues, but works with FQ as well.
- TCP is said to be self-clocking, since it uses ACKs to pace the transmission of packets.
- It also maintains variables such as CongestionWindow and MaxWindow, and increases and decreases the window size.
32 Packets in Transit
Additive increase: one packet is added during each RTT.
33 TCP Sawtooth Pattern
Typical TCP sawtooth pattern of continually increasing and decreasing the window as a function of time, rather than increasing and decreasing by one as in pure additive increase.
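The sawtooth comes from additive increase combined with multiplicative decrease (AIMD): grow the window by one packet per RTT, halve it on a loss. A sketch in units of packets, with an illustrative loss pattern:

```python
# AIMD sketch: CongestionWindow grows by one packet per RTT
# (additive increase) and is halved when a loss is detected
# (multiplicative decrease). Loss timing here is illustrative.

def aimd(rtts: int, loss_at: set, cwnd: float = 1.0):
    trace = []
    for t in range(rtts):
        if t in loss_at:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease
        else:
            cwnd += 1.0                # additive increase
        trace.append(cwnd)
    return trace

print(aimd(8, loss_at={5}))
# [2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0] -- the sawtooth shape
```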
34 Slow Start
TCP provides another mechanism, slow start, used to increase the congestion window rapidly from a cold start: it sends one packet, then two, then four, etc.
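Because the window grows by one packet per ACK received, it effectively doubles each RTT until a threshold is reached. A sketch in units of packets (the threshold value is illustrative):

```python
# Slow-start sketch: the congestion window doubles each RTT
# (one extra packet per ACK received) until it reaches a threshold.

def slow_start(threshold: int, cwnd: int = 1):
    trace = [cwnd]
    while cwnd < threshold:
        cwnd = min(threshold, cwnd * 2)  # double per RTT, capped at threshold
        trace.append(cwnd)
    return trace

print(slow_start(16))  # [1, 2, 4, 8, 16]
```

Compare this exponential ramp-up with the linear, one-packet-per-RTT growth of additive increase.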
35 Behavior of TCP Congestion Control
Blue line: the value of CongestionWindow over time. Bullets at top: timeouts. Hash marks at top: times when each packet is transmitted. Vertical bars: times when a packet that is eventually retransmitted was first transmitted.
36 Fast Retransmit and Fast Recovery
- Fast retransmit was added to TCP to trigger a retransmission sooner than the regular timeout mechanism.
- When a data packet is received, the receiver sends an ACK. When a packet arrives out of order, it cannot be acknowledged, because the earlier packet has not yet arrived, so TCP sends the same ACK it sent last time: a duplicate ACK.
- When the sender receives a duplicate ACK, it knows that a packet was missing and retransmits.
- TCP waits for 3 duplicate ACKs before retransmitting.
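The duplicate-ACK pattern can be sketched as follows; segment numbers and the loss position are illustrative:

```python
# Fast-retransmit sketch: the receiver repeats its last cumulative ACK
# for every out-of-order segment. After three duplicate ACKs, the
# sender retransmits without waiting for the timeout.

def acks_for(received_segments):
    """Cumulative ACK emitted for each arriving segment."""
    acks, expected, delivered = [], 1, set()
    for seg in received_segments:
        delivered.add(seg)
        while expected in delivered:
            expected += 1
        acks.append(expected - 1)  # highest in-order segment seen so far
    return acks

acks = acks_for([1, 2, 4, 5, 6, 7])  # segment 3 was lost
print(acks)                          # [1, 2, 2, 2, 2, 2]
dup_count = acks.count(2) - 1        # duplicates of ACK 2
print(dup_count >= 3)                # True: fast retransmit fires for segment 3
```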
37 Fast Retransmit
Fast retransmit based on duplicate ACKs.
38 TCP with Fast Retransmit
Blue line: the value of CongestionWindow over time. Bullets at top: timeouts. Hash marks at top: times when each packet is transmitted. Vertical bars: times when a packet that is eventually retransmitted was first transmitted.
39 Congestion Avoidance Mechanisms
- TCP's strategy is to control congestion once it happens, as opposed to avoiding congestion in the first place.
- TCP repeatedly increases the load on the network to find the point at which congestion occurs, then backs off from this point. (It finds the available bandwidth.)
- An alternative is to predict when congestion is about to happen and to reduce the rate at which hosts send packets just before packets start being discarded; this is congestion avoidance.
40 Congestion Avoidance Mechanisms
- Three different avoidance mechanisms put additional functionality into the router to anticipate congestion:
- DECbit splits responsibility between the router and the end nodes. The router sets a bit if the average queue length is 1 or greater when a packet arrives.
- Random Early Detection (RED): each router monitors its own queue length and implicitly notifies the source of congestion by dropping packets.
- Source-based congestion avoidance attempts to avoid congestion from the end nodes, watching for a sign from the network that some router's queue is building up.
41 Average Queue Length
Computing the average queue length at a router.
42 Weighted Average Queue Length
43 RED Thresholds on a FIFO Queue
If the average queue length is smaller than the lower (Min) threshold, no action is taken. If it is larger than the upper (Max) threshold, the packet is dropped. If it is between the two thresholds, the packet is dropped with some probability P.
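The two-threshold rule above, together with the weighted average queue length, can be sketched as follows. This is a simplification: the parameter values are illustrative, and full RED also adjusts the probability by how many packets have been queued since the last drop, which is omitted here:

```python
# RED sketch: keep a weighted running average of queue length
#   AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
# and drop arriving packets with a probability that rises linearly
# from 0 to MaxP between the two thresholds.
import random

WEIGHT, MIN_TH, MAX_TH, MAX_P = 0.002, 5.0, 15.0, 0.02  # illustrative values

def update_avg(avg_len: float, sample_len: int) -> float:
    return (1 - WEIGHT) * avg_len + WEIGHT * sample_len

def drop_probability(avg_len: float) -> float:
    if avg_len < MIN_TH:
        return 0.0        # below lower threshold: no action
    if avg_len >= MAX_TH:
        return 1.0        # above upper threshold: always drop
    return MAX_P * ((avg_len - MIN_TH) / (MAX_TH - MIN_TH))

def on_arrival(avg_len: float, queue_len: int):
    """Returns (new average, whether this packet is dropped)."""
    avg_len = update_avg(avg_len, queue_len)
    return avg_len, random.random() < drop_probability(avg_len)

print(drop_probability(3.0))   # 0.0
print(drop_probability(10.0))  # 0.01 (halfway between the thresholds)
print(drop_probability(20.0))  # 1.0
```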
44 Drop Probability Function for RED
45 Source-Based Congestion Avoidance
Congestion window vs. observed throughput rate. Top: congestion window; middle: observed throughput; bottom: buffer space taken up at the router.
46 TCP Vegas Congestion Avoidance Mechanism
47 Quality of Service
- Packet-switched networks have promised support for multimedia applications, which combine audio, video, and data.
- One obstacle to this has been the need for higher-bandwidth links.
- Improvements in coding and the increasing speed of links are bringing this about.
48 Real-Time Applications
- Real-time applications are sensitive to the timeliness of data delivery; they need assurance from the network that data will arrive on time.
- Non-real-time applications use retransmission to be sure data arrives correctly, but this only adds to the delay.
- Timely delivery must be provided by the network itself (the routers) and not just the hosts.
49 Quality of Service
- Applications that are happy with best-effort service should also be able to use the new service model, which provides timeliness assurances.
- This implies that the network will treat some packets differently from others.
- A network that can provide different levels of service is said to support Quality of Service (QoS).
50 Application Requirements
- Divide applications into real-time and non-real-time (traditional data) applications.
- Non-real-time applications (such as Telnet, FTP, email, and web browsing) are also called elastic, since they are able to stretch in the face of increased delay.
- They do not become unusable with increased delay (users just become frustrated!).
51 Real-Time Audio
- In audio, data is generated by collecting samples from a microphone and digitizing them using an analog-to-digital (A-to-D) converter.
- The digital samples are placed in packets, which are then transmitted across the network to a receiver at the other end.
- At the receiver, the data must be played back at some appropriate rate, usually the same rate at which it was recorded.
- If data arrives after its playback time, it is useless.
52 An Audio Application
53 Real-Time Audio
- One way to make a voice application work would be to require all samples to take the same amount of time to traverse the network, but this is difficult to do.
- The way to deal with variable delay at the receiver end is to buffer some amount of data, providing a store of packets waiting to be played back at the right time.
- A packet that arrives after a short delay is buffered until its playback time arrives; a packet delayed a longer time need not be buffered as long.
- We have effectively added a constant offset to the playback time, called the playback point.
54 A Playback Buffer
55 Delay Distribution on the Internet
97% of packets had latency < 100 ms. An average of 3 out of every 100 packets would arrive too late to be used. The tail of the graph extends to 200 ms, which would have to be the playback time to ensure that all packets arrive on time.
56 Taxonomy of Real-Time Applications
- Characteristics of real-time applications:
- Tolerance for loss of data (some audio can sustain the loss of a late packet, but a robot arm cannot function if it loses its command to stop).
- Adaptability: some audio applications may be able to extend their playback time to adapt to network delays; these are called delay-adaptive.
- Rate-adaptive applications can trade off bit rate versus quality (as in some video applications).
57 Taxonomy of Applications
58 Token Buckets
Illustrates how a token bucket can be used to characterize a flow's bandwidth requirements. Shows two flows with the same average rate but different token bucket descriptions.
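A token bucket is described by a rate r (tokens accumulate at r per second) and a depth B (the bucket holds at most B tokens); sending n bytes consumes n tokens, so over any interval t a flow can send at most B + r*t bytes. A minimal sketch with illustrative parameters:

```python
# Token-bucket sketch: tokens accrue at `rate` per second up to `depth`;
# sending n bytes consumes n tokens. A packet that finds too few tokens
# is nonconforming (delay or drop it, per policy). Values illustrative.

class TokenBucket:
    def __init__(self, rate: float, depth: float):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0   # start with a full bucket

    def try_send(self, nbytes: float, now: float) -> bool:
        # Accrue tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate=100.0, depth=200.0)  # 100 B/s average, 200 B burst
print(bucket.try_send(200, now=0.0))  # True: a burst can use the full depth
print(bucket.try_send(150, now=1.0))  # False: only 100 tokens have accrued
print(bucket.try_send(100, now=1.0))  # True
```

Two flows can have the same rate r but different depths B, which is exactly the distinction the figure draws.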
59 Reservations
Making reservations on a multicast tree.
60 Assured Forwarding (AF)
RED with "in" and "out" drop probabilities.
61 Summary
- Resource allocation is both central to networking and a very hard problem.
- Congestion control is concerned with preventing degradation of service when the demand for service exceeds the network's available supply.
- Different qualities of service can be provided to applications that need more assurance than the best-effort model provides.
62 Open Issue
- The larger question we should ask: how much can we expect from the network, and how much responsibility will ultimately fall to the end hosts?
63 Figure 6.27
64 Figure 6.28
65 Figure 6.29