Chapter 6 Congestion Control and Resource Allocation
1
Chapter 6 Congestion Control and Resource
Allocation
  • When packets contend at a router for the use of a
    link, they are placed in a waiting queue
  • If too many packets build up in a queue, some are
    dropped
  • The network is then said to be congested
  • The resources being allocated are bandwidth and
    buffer space

2
Factoids
  • Because of the bursty nature of Internet traffic,
    queues can become full very quickly, and then
    become empty again. (page 477)
  • The complexity of routing in the Internet is such
    that simply obtaining a reasonably direct,
    loop-free route is about the best that you can
    hope for. Routing around congestion is icing on
    the cake. (page 449)

3
More Factoids
  • The essential strategy of TCP is to send packets
    into the network without a reservation and then
    to react to observable events that occur
    (congestion). (page 464)
  • The main reason that packets are dropped and not
    delivered is congestion, not errors during
    transmission. (page 465)

4
Congestion Control: a Definition
  • We use the term congestion control to describe
    the efforts made by network nodes to prevent or
    respond to overload conditions. (page 448)

5
[Figure: Source 1 (on a 10-Mbps Ethernet) and Source 2 (on a 100-Mbps FDDI ring) both send through a router to a destination over a 1.5-Mbps T1 link.]
6
[Figure: Sources 1, 2, and 3 send through several routers to Destinations 1 and 2, contending for shared links.]
7
Router-Centric vs. Host-Centric
  • Throughout chapter 6 there is a recurring design
    choice: should the router or the host be (more)
    responsible for deciding which packets to drop
    and how to adjust the network's behavior?

8
[Figure: ratio of throughput to delay as a function of offered load, with the optimal load marked.]
9
Queuing
  • FIFO (with tail drop) is the most widely used
    approach in Internet routers
  • This approach has the effect of turning over the
    responsibility for most congestion control in the
    Internet to the edges of the network, where TCP
    runs (see the sketch below)
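As a rough illustration of the tail-drop behavior described above (not actual router code), here is a minimal sketch: a fixed-capacity FIFO queue that drops arriving packets once all buffers are full. The class name and the buffer count are made up for the example.

```python
from collections import deque

MAX_BUFFERS = 8  # hypothetical buffer pool size for the example

class FifoTailDrop:
    """FIFO queue with tail drop: arrivals are dropped once the buffers are full."""

    def __init__(self, max_buffers=MAX_BUFFERS):
        self.max_buffers = max_buffers
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.max_buffers:
            self.dropped += 1          # no free buffer: drop the arriving packet
            return False
        self.queue.append(packet)      # place packet in the next free buffer
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None  # next to transmit

# A burst of 12 arrivals against 8 buffers: the last 4 are tail-dropped.
q = FifoTailDrop()
for i in range(12):
    q.enqueue(f"pkt{i}")
print(len(q.queue), "queued,", q.dropped, "dropped")
```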

10
[Figure: (a) a FIFO queue, with the arriving packet placed in the next free buffer behind the queued packets and the next packet to transmit at the head; (b) tail drop, where the arriving packet is dropped because no free buffer remains.]
11
[Figure: round-robin service among four flows (Flow 1 through Flow 4).]
12
[Figure: fair queuing example with two flows (Flow 1, Flow 2). Packets carry finishing times F = 10, 8, 5, and 2; (a) shows packets arriving, (b) shows the output transmitting them in order of increasing finishing time.]
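The finishing times (F values) in the figure come from fair queuing's per-flow bookkeeping. A minimal sketch of the finish-tag rule from the textbook, F_i = max(F_{i-1}, A_i) + P_i, where A_i is the packet's arrival time and P_i its length in transmission-time units; the function name and the example values below are illustrative only.

```python
def finish_tag(prev_finish, arrival, length):
    """Fair queuing finish tag: F_i = max(F_{i-1}, A_i) + P_i."""
    return max(prev_finish, arrival) + length

# Per-flow example; across all flows the router transmits in order of increasing F.
flow = [(0, 4), (1, 4), (12, 2)]   # (arrival time, packet length) pairs, illustrative
prev = 0
for arrival, length in flow:
    prev = finish_tag(prev, arrival, length)
    print("F =", prev)
```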
13
TCP Congestion Control
  • Section 6.3 is a large and important topic
  • Extends the chapter 2 discussion of sliding
    windows
  • ACKs are used to pace the transmission of packets
  • TCP is said to be self-clocking
  • Available bandwidth changes over time, so a
    source must adjust the number of transmitted
    packets
  • Sliding windows are used by TCP for congestion
    control (a sketch follows below)
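As a rough illustration of how a source adjusts its window as conditions change, here is a sketch of the additive-increase/multiplicative-decrease (AIMD) rule section 6.3 describes; the MSS constant and function names are placeholders for the example, not TCP's actual configuration.

```python
MSS = 1460  # bytes; a typical maximum segment size, used here only for illustration

def on_ack(cwnd):
    """Additive increase: grow by roughly one MSS per round-trip (MSS/cwnd per ACK)."""
    return cwnd + MSS * (MSS / cwnd)

def on_loss(cwnd):
    """Multiplicative decrease: halve the window when congestion (loss) is detected."""
    return max(cwnd / 2, MSS)

cwnd = 10 * MSS
for _ in range(20):            # a stretch of successfully ACKed segments
    cwnd = on_ack(cwnd)
cwnd = on_loss(cwnd)           # a drop signals congestion
print(round(cwnd), "bytes")
```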

14
TCP Congestion Control in Action
  • The next few slides show TCP in action
  • First, additive increase in the number of packets
    sent
  • Second, a typical TCP transmission pattern
  • Slow start (see the sketch after this list)
  • A graph of the typical behavior of TCP congestion
    control
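Slow start, mentioned above, grows the window much faster than additive increase: roughly one MSS per ACK, which doubles the congestion window each round-trip until a threshold is reached or a loss occurs. A minimal sketch, assuming a hypothetical ssthresh value:

```python
MSS = 1460
ssthresh = 64 * 1024   # hypothetical slow-start threshold in bytes

def on_ack(cwnd):
    if cwnd < ssthresh:
        return cwnd + MSS                 # slow start: +1 MSS per ACK, doubling per RTT
    return cwnd + MSS * (MSS / cwnd)      # then switch to additive increase

cwnd = MSS
for rtt in range(8):
    acks = int(cwnd // MSS)               # roughly one ACK per segment in the window
    for _ in range(acks):
        cwnd = on_ack(cwnd)
    print(f"after RTT {rtt + 1}: cwnd ~ {int(cwnd)} bytes")
```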

15
[Figure: packets in transit between a source and a destination, illustrating additive increase.]
17
[Figure: packets in transit between a source and a destination during slow start.]
19
Congestion Avoidance
  • In general: predict when congestion is about to
    happen and then reduce the rate at which hosts
    send data (page 475)
  • Not widely adopted yet
  • Three different methods are discussed
  • The first two depend on adding a small amount of
    additional logic to the routers

20
Queue Length/Buffer of the Router
  • Buffer space is one of the two main scarce
    resources of the Internet (bandwidth is the
    other)
  • The router passes information about its queue
    length (equivalent to remaining buffer space)
    onward to eventually end up at the sending host
  • The sending host can change its packet sending
    rate based on this information (a sketch of this
    idea follows below)
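One concrete form of this idea is the DECbit scheme the textbook describes: the router sets a congestion bit in packets when its average queue length reaches a threshold, and the host adjusts its window based on the fraction of marked ACKs. A rough sketch, with the 50% threshold and the increase/decrease factors taken from that description; the function names and example numbers are illustrative.

```python
def router_marks(avg_queue_len, threshold=1.0):
    """Router side: set the congestion bit when the average queue length reaches the threshold."""
    return avg_queue_len >= threshold

def adjust_window(window, marked_fraction):
    """Host side: shrink if >= 50% of the last window's ACKs carried the bit, else grow by one."""
    if marked_fraction >= 0.5:
        return window * 0.875      # multiplicative decrease
    return window + 1              # additive increase

window = 8.0
for avg_len, fraction in ((0.4, 0.0), (1.2, 0.6), (0.8, 0.1)):   # illustrative samples
    print("router sets bit:", router_marks(avg_len))
    window = adjust_window(window, fraction)
    print("window:", round(window, 2))
```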

21
Measuring Congestion/Buffers/Queues
  • Following are several slides that show various
    graphs
  • Some are measurements
  • Some represent thresholds and probability

22
[Figure: queue length plotted against time; the average queue length is computed over an averaging interval that spans the previous cycle and the current cycle, up to the current time.]
24
[Figure: queue thresholds MinThreshold and MaxThreshold, compared against the average queue length (AvgLen).]
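These thresholds match the RED (Random Early Detection) scheme the textbook describes for the second router-based method: the router keeps a weighted running average of its queue length and, between MinThreshold and MaxThreshold, drops arriving packets with a probability that grows with the average. A sketch of that calculation; the weight and MaxP values are illustrative, not mandated constants.

```python
import random

WEIGHT = 0.002        # EWMA weight for the average queue length (illustrative)
MIN_THRESH = 5        # MinThreshold, in packets (illustrative)
MAX_THRESH = 15       # MaxThreshold, in packets (illustrative)
MAX_P = 0.02          # drop probability as AvgLen approaches MaxThreshold (illustrative)

def update_avg(avg_len, sample_len):
    """AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen."""
    return (1 - WEIGHT) * avg_len + WEIGHT * sample_len

def should_drop(avg_len):
    """Drop nothing below MinThreshold, everything above MaxThreshold, probabilistically in between."""
    if avg_len < MIN_THRESH:
        return False
    if avg_len >= MAX_THRESH:
        return True
    p = MAX_P * (avg_len - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
    return random.random() < p

avg = 10.0
for sample in (2, 8, 12, 20):     # instantaneous queue lengths seen on packet arrivals
    avg = update_avg(avg, sample)
    print(f"AvgLen={avg:.3f} drop={should_drop(avg)}")
```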
28
Quality of Service
  • Real-time applications need some assurance
    (guarantee) from the network that the data is
    likely (certainly) going to arrive on time.
    (page 488)
  • First we start with application requirements

29
[Figure: an audio application. A microphone feeds a sampler / A-to-D converter; at the receiving end a buffer feeds a D-to-A converter and a speaker.]
30
Playback Buffer
  • Study figure 6.21
  • Packet generation is at a constant rate
  • Packet playback is at a constant rate
  • In-between generation and playback is the network
  • Packet arrival has a random component and appears
    wavy because of variable network delay (a sketch
    follows below)
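A playback buffer can be sketched as a fixed offset applied at the receiver: each packet is generated at time t, arrives at t plus a variable network delay, and is played back at t plus a fixed playback delay; packets arriving after their playback time are useless. The delay values below are made up for illustration.

```python
import random

PLAYBACK_DELAY = 100   # ms between generation and playback (illustrative)

random.seed(1)
late = 0
for seq in range(10):
    generated = seq * 20                                   # packets generated every 20 ms
    arrival = generated + random.randint(30, 140)          # variable network delay
    playback = generated + PLAYBACK_DELAY                  # constant-rate playback schedule
    if arrival > playback:
        late += 1                                          # missed its playback point: discarded
print(late, "of 10 packets arrived too late to be played")
```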

31
[Figure: sequence number versus time, showing the packet generation, packet arrival, and playback curves; the gap between arrival and playback is the buffer, and the gap between generation and arrival is the network delay.]
33
Overview of Real-Time Applications (Taxonomy)
  • Figure 6.23 tells it best
  • Study it along with the text on pages 492-494

34
[Figure 6.23: taxonomy of applications]
  • Applications
    • Real time
      • Tolerant
        • Adaptive (Delay-adaptive, Rate-adaptive)
        • Nonadaptive
      • Intolerant (Rate-adaptive, Nonadaptive)
    • Elastic (Interactive, Interactive bulk,
      Asynchronous)
35
QOS Mechanisms
  • We will only cover this briefly
  • The textbook is also brief about this rather
    extensive topic
  • Many different ideas are being tried out

36
[Figure: bandwidth (MBps) versus time (seconds) for two flows, Flow A and Flow B.]
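In the textbook, a figure like this (two flows with the same average rate but different burstiness) motivates the token bucket flow descriptor, one of the mechanisms used to characterize a flow's bandwidth needs. A minimal sketch under that assumption, with illustrative rate and depth values; a packet may be sent only if enough tokens have accumulated.

```python
class TokenBucket:
    """Token bucket descriptor: tokens accumulate at `rate` per second up to `depth`."""

    def __init__(self, rate, depth):
        self.rate = rate          # token fill rate (e.g. bytes per second)
        self.depth = depth        # bucket depth (maximum burst size)
        self.tokens = depth
        self.last = 0.0

    def allow(self, size, now):
        # Add tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size   # spend tokens and admit the packet
            return True
        return False              # not enough tokens: the burst exceeds the descriptor

# Flow descriptor of 1 MBps with a 1 MB burst allowance (illustrative numbers).
tb = TokenBucket(rate=1_000_000, depth=1_000_000)
print(tb.allow(2_000_000, now=0.5))   # a 2 MB burst early on is rejected
print(tb.allow(500_000, now=1.0))     # a smaller packet later is admitted
```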