Transcript and Presenter's Notes

Title: TCP congestion control


1
TCP congestion control
  • Roughly speaking, TCP operates as follows:
  • Data packets reaching a destination are
    acknowledged by sending an appropriate message
    to the sender.
  • Upon receipt of the acknowledgement, data sources
    increase their send rate, thereby probing the
    network for available bandwidth, until congestion
    is encountered.
  • Network congestion is deduced from the loss of
    data packets (receipt of duplicate ACKs or
    non-receipt of ACKs), and results in sources
    reducing their send rate drastically (by half).
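The additive-increase/multiplicative-decrease behaviour described above can be sketched in a few lines. This is an illustrative toy, not a faithful TCP implementation: the capacity threshold and the per-RTT increment are assumed values chosen for the example.

```python
# Toy sketch of TCP's AIMD behaviour: additive increase while probing
# for bandwidth, multiplicative decrease (halving) on inferred loss.
# `capacity` is an assumed loss threshold, not part of any standard.

def aimd(rounds, capacity=20, cwnd=1.0):
    """Return the congestion-window trace over a number of RTTs."""
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= capacity:   # loss inferred: congestion encountered
            cwnd = cwnd / 2    # multiplicative decrease (halve)
        else:
            cwnd = cwnd + 1    # additive increase (probe for bandwidth)
    return trace

trace = aimd(50)
```

Plotting the trace gives the familiar sawtooth pattern: linear growth to the capacity, then a halving, repeated.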

2
TCP congestion control
  • Congestion control is necessary for a number of
    reasons, so that:
  • catastrophic collapse of the network is avoided
    under heavy loads;
  • each data source receives a fair share of the
    available bandwidth;
  • the available bandwidth B is utilised in an
    optimal fashion;
  • interactions of the network sources do not
    cause destabilising side effects such as
    oscillations or instability.

3
TCP congestion control
  • Hespanha's hybrid model of TCP traffic.
  • Loss of packets is caused by queues filling at the
    bottleneck link.
  • TCP sources have two modes of operation:
  • Additive increase
  • Multiplicative decrease
  • Packet loss is detected at the sources one RTT
    after the loss occurs.

4
TCP congestion control
5
TCP congestion control
6
Modelling the queue not full state
  • The rate at which the queue grows is easy to
    determine.
  • While the queue is not full

7
Modelling the queue full state
  • When the queue is full
  • One RTT later the sources are informed of
    congestion

8
TCP congestion control
9
TCP congestion control Example (Hespanha)
10
TCP congestion control Example (Fairness)
11
Modelling of dynamic systems Part 3System
Identification
Robert N. Shorten Douglas Leith The Hamilton
Institute NUI Maynooth
12
Building our first model
  • Example: Malthus's law of population growth
  • Government agencies use population models to
    plan.
  • What do you think would be a good simple model
    for population growth?
  • Malthus's law states that the growth rate of an
    unperturbed population (Y) is proportional to
    the population present.

Introduction
17
Modelling
  • Modelling is usually necessary for two reasons:
    to predict and to control. However, to build
    models we need to do a lot of work.
  • Postulate the model structure (most physical
    systems can be classified as belonging to the
    system classes that you have already seen)
  • Identify the model parameters
  • Experiment design
  • Parameter estimation
  • Validate the parameters (later)
  • Solve the equations to use the model for
    prediction and analysis (now)

Introduction
18
What is parameter estimation?
  • Parameter identification is the identification of
    the unknown parameters of a given model.
  • Usually this involves two steps. The first step
    is concerned with obtaining data that allow us to
    identify the model parameters.
  • The second step usually involves using some
    mathematical technique to infer the parameters
    from the observed data.

19
Linear in parameter model structures
  • The parameter estimation task is simple when the
    model is in linear in parameters form.
  • For example, in the equation, the unknown
    parameters appear as coefficients of the
    variables (and offset).
  • The parameters of such equations are estimated
    using the principle of least squares.

20
The principle of least squares
  • Carl Friedrich Gauss (the greatest mathematician
    after Hamilton) invented the principle of least
    squares to determine the orbits of planets and
    asteroids.
  • Gauss stated that the parameters of the models
    should be chosen such that the sum of the
    squares of the differences between the observed
    and the computed values is a minimum.
  • For linear in parameter models this principle can
    be applied easily.

22
The principle of least squares
24
The principle of least squares The algebra
  • For our example we want to minimize
  • Hence, we need to solve the following equations
    for the parameters a,b.

25
A linear model
  • Example Find the least squares line that fits
    the following data points.

X    Y
-1   10
 0    9
 1    7
 2    5
 3    4
 4    3
 5    0
 6   -1
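The fit for these data points can be computed directly. This is a sketch using NumPy's least-squares solver; any method of solving the normal equations gives the same line y = a·x + b.

```python
# Least-squares line fit y = a*x + b for the data points above.
import numpy as np

x = np.array([-1, 0, 1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([10, 9, 7, 5, 4, 3, 0, -1], dtype=float)

# Regressor matrix: one column for x (slope a), one of ones (offset b).
X = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
print(a, b)   # a = -45/28 ≈ -1.607,  b = 121/14 ≈ 8.643
```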
29
A polynomial model
  • Least squares can be used whenever we suspect a
    linear in parameters model. Find the least
    squares polynomial fit to the following data
    points.

X          Y
 1.0000    2.9218
 2.0000    5.9218
 3.0000   10.9218
 4.0000   17.9218
 5.0000   26.9218
 6.0000   37.9218
 7.0000   50.9218
 8.0000   65.9218
 9.0000   82.9218
10.0000  101.9218
30
A polynomial model
  • By proceeding exactly as before

X          Y
 1.0000    2.9218
 2.0000    5.9218
 3.0000   10.9218
 4.0000   17.9218
 5.0000   26.9218
 6.0000   37.9218
 7.0000   50.9218
 8.0000   65.9218
 9.0000   82.9218
10.0000  101.9218
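A polynomial model y = a·x² + b·x + c is still linear in the parameters a, b, c, so the same least-squares machinery applies. A sketch using `numpy.polyfit`:

```python
# Quadratic least-squares fit y = a*x**2 + b*x + c for the data above.
# The model is linear in the parameters a, b, c even though it is
# non-linear in x.
import numpy as np

x = np.arange(1.0, 11.0)
y = np.array([2.9218, 5.9218, 10.9218, 17.9218, 26.9218,
              37.9218, 50.9218, 65.9218, 82.9218, 101.9218])

a, b, c = np.polyfit(x, y, deg=2)
print(a, b, c)   # a ≈ 1.0, b ≈ 0.0, c ≈ 1.9218
```

The fit recovers y = x² + 1.9218 almost exactly, since the data were generated from a quadratic.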
31
Building our first model
  • Example: Malthus's law of population growth
  • Government agencies use population models to
    plan.
  • What do you think would be a good simple model
    for population growth?
  • Malthus's law states that the growth rate of an
    unperturbed population (Y) is proportional to
    the population present.

Introduction
32
An exponential model (the first lecture)
  • The solution to the differential equation is not
    linear in parameters.
  • However, there is a change of variables to make
    it linear in parameters.
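Taking logarithms is the change of variables in question: with Y(t) = Y₀·exp(a·t), we get ln Y = ln Y₀ + a·t, which is linear in the parameters (ln Y₀, a). A sketch with synthetic data (the values Y₀ = 3 and a = 0.2 are illustrative assumptions):

```python
# Malthus model Y(t) = Y0*exp(a*t) is not linear in (Y0, a), but
# ln Y = ln Y0 + a*t is linear in (ln Y0, a): fit it by least squares.
import numpy as np

t = np.arange(10.0)
Y = 3.0 * np.exp(0.2 * t)              # synthetic population data

A = np.column_stack([t, np.ones_like(t)])
(a, lnY0), *_ = np.linalg.lstsq(A, np.log(Y), rcond=None)
Y0 = np.exp(lnY0)
```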

35
Matrix formulation of least squares
  • The least squares parameters can be derived by
    solving a set of simultaneous linear equations.
    This technique is effective but tedious for
    complicated linear in parameter models. A much
    more effective solution to the least squares
    problem can be found using matrices.
  • Suppose that we wish to find the parameters of
    the following linear in parameters model and that
    we have m measurements.

36
Matrix formulation of least squares
  • All m-measurements can be written in matrix form
    as follows
  • or more compactly as

37
Matrix formulation of least squares
  • The matrix is known as the matrix of
    regressors. This matrix (here an m×3 matrix) is
    usually not invertible. To find the least squares
    solution we multiply both sides of the equation
    by the transpose of the regressor matrix.
  • It can be shown that the least squares solution
    is given by the above equation.
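In matrix form the least-squares solution is θ = (XᵀX)⁻¹Xᵀy, obtained by solving the normal equations XᵀXθ = Xᵀy. A sketch with synthetic data (the model and parameter values here are illustrative assumptions):

```python
# Least-squares solution via the normal equations X'X theta = X'y,
# for an assumed three-parameter linear in parameters model.
import numpy as np

rng = np.random.default_rng(0)
m = 50                                  # number of measurements
X = np.column_stack([rng.normal(size=m), rng.normal(size=m), np.ones(m)])
theta_true = np.array([2.0, -1.0, 0.5])
y = X @ theta_true                      # noise-free measurements

theta = np.linalg.solve(X.T @ X, X.T @ y)   # solve normal equations
```

In practice one solves the normal equations (or uses a QR/SVD-based solver) rather than explicitly inverting XᵀX.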

38
A linear model
  • Example Find the least squares line that fits
    the following data points.

X    Y
-1   10
 0    9
 1    7
 2    5
 3    4
 4    3
 5    0
 6   -1
39
A linear model
  • The regressor is given by
  • Hence

reg =
  [ -1  1
     0  1
     1  1
     2  1
     3  1
     4  1
     5  1
     6  1 ]

reg' · reg =
  [ 92  20
    20   8 ]
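The slide's numbers can be checked directly: the entries of regᵀ·reg are Σx² = 92, Σx = 20, and the number of data points, 8.

```python
# Verify the regressor product reg' * reg for the line-fit example.
import numpy as np

x = np.array([-1, 0, 1, 2, 3, 4, 5, 6], dtype=float)
reg = np.column_stack([x, np.ones_like(x)])   # regressor matrix
G = reg.T @ reg
print(G)
# [[92. 20.]
#  [20.  8.]]
```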
40
Summary Linear least squares
  • To do a least squares fit we start by expanding
    the unknown function as a linear sum of basis
    functions
  • We have seen that the basis functions can be
    linear or non-linear. The linear parameters can
    be found using

41
Discrete time dynamic systems
  • Our examples work beautifully for static systems.
    What about identifying the parameters of dynamic
    systems? Dynamic systems are in principle no
    different from static systems. We define our
    regressors and solve the regression problem.
  • Consider the following problem. We wish to build
    a model of the relationship between the throttle
    and the speed of an automobile. We begin by
    collecting data from an experiment.

42
Discrete time dynamic systems
43
Discrete time dynamic systems
  • A good choice for the model structure is first
    order
  • We can solve for the parameters by solving
  • yielding
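For a first-order model of the form y[k+1] = a·y[k] + b·u[k], the regressors at step k are (y[k], u[k]) and the target is y[k+1], so the same least-squares machinery applies. A sketch with synthetic data standing in for the car experiment (the "true" values a = 0.9, b = 0.5 are illustrative assumptions):

```python
# Identify a first-order model y[k+1] = a*y[k] + b*u[k] by least
# squares: stack regressors (y[k], u[k]) and regress onto y[k+1].
import numpy as np

rng = np.random.default_rng(1)
N = 200
u = rng.normal(size=N)                  # throttle input (excitation)
y = np.zeros(N)
for k in range(N - 1):
    y[k + 1] = 0.9 * y[k] + 0.5 * u[k]  # assumed "true" system

Phi = np.column_stack([y[:-1], u[:-1]])         # regressor matrix
a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
```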

44
Recursive identification
  • The algorithms that we have looked at so far are
    called batch algorithms.
  • Sometimes we want to estimate model parameters
    recursively so that the parameters can be
    estimated on-line.
  • Also, if system parameters change over time, then
    we need to continually estimate and verify the
    model parameters.

45
Recursive least squares
  • The least squares algorithm invented by Gauss can
    be arranged in such a way that the results
    obtained at time index k-1 can be used to obtain
    the parameter estimates at time index k. To see
    this we use
  • and note that

46
Recursive least squares
  • With a little manipulation (show) we get
  • where
  • More complicated versions of the algorithm are
    available that avoid matrix inversion.

47
Recursive least squares (car example)
48
Recursive least squares (car example)
49
The matrix inversion lemma
  • One not-so-nice feature of the RLS formula is the
    presence of a matrix inversion at each step. This
    can be removed using the matrix inversion lemma
    (the Sherman-Morrison formula).

50
The RLS algorithm
  • Application of the lemma results in the standard
    RLS algorithm.

51
Time-varying systems
  • Much of the appeal of the RLS algorithm is that
    we can potentially deal with time-varying
    systems.
  • Example: Suppose that a rocket ascends from the
    surface of the earth propelled by a thrust force
    generated through the ejection of mass. If we
    assume that the rate of change of mass of the
    fuel is um and the exhaust velocity is ve, then
    the physical equations governing the rocket are

52
Forgetting factors
  • For time-varying systems we must estimate the
    parameters recursively. How can we modify the
    basic RLS algorithm?
  • To estimate time-varying parameters we would like
    to forget past data points. The only place in the
    above formula that depends on past data points is
    the covariance matrix.

53
Forgetting factors
  • For time-varying systems we must estimate the
    parameters recursively. How can we modify the
    basic RLS algorithm?
  • This corresponds to minimising the time-varying
    cost function

54
The RLS algorithm
  • Application of the matrix inversion lemma results
    in the standard RLS algorithm with a forgetting
    factor.
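With a forgetting factor 0 < λ ≤ 1, past data are discounted geometrically, which lets the estimator track slowly varying parameters. A sketch of the update (λ = 0.95 and the demonstration system are illustrative assumptions):

```python
# RLS with forgetting factor lam: old data is geometrically discounted,
# so the estimator can track time-varying parameters.
import numpy as np

def rls_forget(theta, P, phi, y, lam=0.95):
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam    # discounting inflates P
    return theta, P

rng = np.random.default_rng(4)
theta_true = np.array([1.0, 2.0])
theta, P = np.zeros(2), 1e6 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    theta, P = rls_forget(theta, P, phi, phi @ theta_true)
```

λ = 1 recovers ordinary RLS; smaller λ forgets faster but makes the estimate noisier.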

55
Example
  • Consider the dynamic system
  • where the parameters ak, bk vary as shown.

56
Example
57
Numerical issues
  • The RLS algorithm is of great theoretical
    importance. However, it suffers from one very big
    disadvantage: it is numerically unstable.
  • The numerical instability stems from the
    equation
  • If no information enters the system, P becomes
    singular and the estimator returns garbage.

58
Numerical issues
59
Persistence of excitation
  • One final thought: persistence of excitation.
  • Persistence of excitation has a strict
    mathematical definition.
  • Roughly speaking, PE means that the input signal
    has been chosen such that the least squares
    estimate is unique.
  • The really interested student should consult
    Astrom for more on this topic.

60
Error surfaces and gradient methods
  • All the examples that we have looked at so far
    involved linear in parameter models. In this case
    finding the least squares solution was easy
    because the error surface is quadratic.
  • Huh! What is meant by a quadratic cost function?
  • Consider the example of line fitting. We were
    trying to minimize

61
Least mean squares and gradient methods
  • To make life simple, let's assume that we have
    two observations (m = 2) and that b = 0. Then:
  • Remember we are trying to find the parameter a
    that minimises this function. But the function is
    quadratic in a.

62
Least mean squares and gradient methods
  • The quadratic surface looks like the following
    for a single parameter.

63
Least mean squares and gradient methods
  • With two parameters we get something like

64
A word on gradient methods
  • Another way of estimating the best parameters is
    to estimate the parameters in an iterative
    manner in the direction of the gradient.
  • For linear in parameter structures the batch
    version of least squares is better. However, the
    above idea can be extended to deal with model
    structures that are not linear in parameters
    (Doug will tell you all about this).
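One simple instance of this idea is the least-mean-squares (LMS) update, which takes a small step along the gradient of the squared prediction error for each new measurement. A sketch (the step size mu and the demonstration data are illustrative assumptions):

```python
# LMS: gradient-step estimation. For squared error (y - phi@theta)**2
# the (negative) gradient step is theta += mu * phi * (y - phi@theta).
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([2.0, -1.0])
theta, mu = np.zeros(2), 0.05           # mu: assumed step size
for _ in range(2000):
    phi = rng.normal(size=2)
    y = phi @ theta_true                # noise-free measurement
    theta += mu * phi * (y - phi @ theta)
```

Unlike batch least squares, each step is cheap and needs no matrix algebra, at the cost of slower convergence, and the same update applies when the model is not linear in the parameters.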