1
Improving QOS in IP Networks
  • Thus far: making the best of best effort
  • Future: next-generation Internet with QoS
    guarantees
  • RSVP: signaling for resource reservations
  • Differentiated Services: differential guarantees
  • Integrated Services: firm guarantees
  • simple model for sharing and congestion
    studies

2
Principles for QOS Guarantees
  • Example: a 1 Mbps IP phone and an FTP transfer
    share a 1.5 Mbps link.
  • bursts of FTP can congest router, cause audio
    loss
  • want to give priority to audio over FTP

Principle 1
packet marking needed for router to distinguish
between different classes and new router policy
to treat packets accordingly
3
Principles for QOS Guarantees (more)
  • what if applications misbehave (audio sends at a
    higher rate than declared)?
  • policing: force source adherence to bandwidth
    allocations
  • marking and policing at network edge
  • similar to ATM UNI (User Network Interface)

Principle 2
provide protection (isolation) for one class from
others
4
Principles for QOS Guarantees (more)
  • Allocating fixed (non-sharable) bandwidth to a
    flow: inefficient use of bandwidth if the flow
    doesn't use its allocation

Principle 3
While providing isolation, it is desirable to use
resources as efficiently as possible
5
Principles for QOS Guarantees (more)
  • Basic fact of life: cannot support traffic
    demands beyond link capacity

Principle 4
Call Admission: flow declares its needs; the network
may block the call (e.g., busy signal) if it cannot
meet them
6
Summary of QoS Principles
Let's next look at mechanisms for achieving this.
7
Scheduling And Policing Mechanisms
  • scheduling: choose the next packet to send on the
    link; allocate link capacity and output queue
    buffers to each connection (or connections
    aggregated into classes)
  • FIFO (first-in, first-out) scheduling: send in
    order of arrival to the queue
  • discard policy: if a packet arrives to a full
    queue, which packet to discard?
  • Tail drop: drop the arriving packet
  • Priority: drop/remove on a priority basis
  • Random: drop/remove randomly

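A minimal sketch of a FIFO output queue with a
tail-drop discard policy; the class name and the
fixed buffer size are illustrative, not from the
slides:

    from collections import deque

    class FifoTailDropQueue:
        """FIFO output queue: serve in arrival order, drop arrivals when full."""
        def __init__(self, capacity=64):
            self.capacity = capacity      # max queued packets (output buffers)
            self.queue = deque()

        def enqueue(self, packet):
            if len(self.queue) >= self.capacity:
                return False              # tail drop: the arriving packet is discarded
            self.queue.append(packet)
            return True

        def dequeue(self):
            # FIFO service: the next packet to send is the oldest one
            return self.queue.popleft() if self.queue else None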
8
Need for a Scheduling Discipline
  • Why do we need a non-trivial scheduling
    discipline?
  • Per-connection delay, bandwidth, and loss are
    determined by the scheduling discipline
  • The network element (NE) can allocate different
    mean delays to different connections by its
    choice of service order
  • it can allocate different bandwidths to
    connections by serving at least a certain number
    of packets from a particular connection in a
    given time interval
  • Finally, it can allocate different loss rates to
    connections by giving them more or fewer buffers

9
FIFO Scheduling
  • The disadvantage of strict FIFO scheduling is
    that the scheduler cannot differentiate among
    connections -- it cannot explicitly allocate some
    connections lower mean delays than others
  • A more sophisticated scheduling discipline can
    achieve this objective (but at a cost)
  • The conservation law
  • the sum of the mean queueing delays received by
    the set of multiplexed connections, weighted by
    their fair share of the link's load, is
    independent of the scheduling discipline

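In symbols, this is Kleinrock's conservation law: if
connection i contributes a fraction rho_i of the
link's load and sees mean queueing delay q_i, then
for any work-conserving scheduling discipline

    sum over i of (rho_i * q_i) = constant

so lowering one connection's mean delay necessarily
raises another's.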
10
Requirements
  • A scheduling discipline must satisfy four
    requirements
  • Ease of implementation -- a packet must be picked
    every few microseconds, so a scheduler that takes
    O(1) rather than O(N) time is needed
  • Fairness and Protection (for best-effort
    connections) -- FIFO does not offer any
    protection because a misbehaving connection can
    increase the mean delay of all other connections.
    Round-robin scheduling?
  • Performance bounds -- deterministic or
    statistical; common performance parameters are
    bandwidth, delay (worst-case, average),
    delay-jitter, and loss
  • Ease and efficiency of admission control -- to
    decide, given the current set of connections and
    the descriptor for a new connection, whether it
    is possible to meet the new connection's
    performance bounds without jeopardizing the
    performance of existing connections

11
Schedulable Region
12
Designing a scheduling discipline
  • Four principal degrees of freedom
  • the number of priority levels
  • whether each level is work-conserving or
    non-work-conserving
  • the degree of aggregation of connections within a
    level
  • service order within a level
  • Each feature comes at some cost
  • for a small LAN switch -- a single-priority FCFS
    scheduler or at most a 2-priority scheduler may
    be sufficient
  • for a heavily loaded wide-area public switch with
    possibly noncooperative users, a more
    sophisticated scheduling discipline may be
    required.

13
Work conserving and non-work conserving
disciplines
  • A work-conserving scheduler is idle only when
    there is no packet awaiting service
  • A non-work-conserving scheduler may be idle even
    if it has packets to serve
  • makes the traffic arriving at downstream switches
    more predictable
  • reduces buffer size necessary at output queues
    and the delay jitter experienced by a connection
  • allows the switch to send a packet only when the
    packet is eligible
  • for example, if the (k+1)th packet on connection
    A becomes eligible for service only i seconds
    after the service of the kth packet, the
    downstream switch receives packets on A no faster
    than one every i seconds.

14
Eligibility times
  • By choosing eligibility times carefully, the
    output from a switch can be made more predictable
    (so that bursts won't build up in the network)
  • Two approaches: rate-jitter and delay-jitter
  • rate-jitter: peak-rate guarantee for a connection
  • E(1) = A(1); E(k+1) = max(E(k) + Xmin, A(k+1)),
    where Xmin is the time taken to serve a
    fixed-size packet at the peak rate (a sketch of
    this regulator follows below)
  • delay-jitter: at every switch, the input arrival
    pattern is fully reconstructed
  • E(0,k) = A(0,k); E(i+1,k) = E(i,k) + D + L,
    where D is the delay bound at the previous switch
    (switch i) and L is the largest possible delay on
    the link between switches i and i+1

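A minimal sketch of a rate-jitter regulator that
computes these eligibility times; the class and
variable names are illustrative, and fixed-size
packets are assumed:

    class RateJitterRegulator:
        """Packets become eligible no faster than one every xmin seconds."""
        def __init__(self, xmin):
            self.xmin = xmin              # service time of one packet at the peak rate
            self.last_eligible = None     # E(k) of the previous packet

        def eligibility_time(self, arrival_time):
            # E(1) = A(1); E(k+1) = max(E(k) + Xmin, A(k+1))
            if self.last_eligible is None:
                eligible = arrival_time
            else:
                eligible = max(self.last_eligible + self.xmin, arrival_time)
            self.last_eligible = eligible
            return eligible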
15
Pros and Cons
  • Reduces delay jitter. Con -- jitter can also be
    removed at the endpoints with an elasticity
    buffer. Pro -- reduces the (expensive) buffering
    needed at the switches
  • Increases mean delay. Is that a problem? Pro --
    for playback applications, which delay packets
    until the delay-jitter bound anyway, a larger
    mean delay does not affect perceived performance
  • Wastes bandwidth. Is that a problem? Pro -- the
    switch can serve best-effort packets when there
    are no eligible packets to serve
  • Needs accurate source descriptors -- no rebuttal
    from the non-work-conserving camp

16
Priority Scheduling
  • transmit highest priority queued packet
  • multiple classes, with different priorities
  • class may depend on marking or other header info,
    e.g., IP source/dest addresses, port numbers, etc.

17
Priority Scheduling
  • The scheduler serves a packet from priority level
    k only if there are no packets awaiting service
    in levels k+1, k+2, ..., n
  • at least 3 levels of priority in an integrated
    services network?
  • Starvation? Appropriate admission control and
    policing to restrict service rates from all but
    the lowest priority level
  • Simple implementation

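A minimal sketch of a strict-priority scheduler over
per-level FIFO queues; the names are illustrative,
and the last queue is taken as the highest priority:

    from collections import deque

    class PriorityScheduler:
        """Serve level k only when all higher-priority levels are empty."""
        def __init__(self, levels):
            self.queues = [deque() for _ in range(levels)]   # index 0 = lowest priority

        def enqueue(self, packet, level):
            self.queues[level].append(packet)

        def next_packet(self):
            for q in reversed(self.queues):    # scan from the highest priority down
                if q:
                    return q.popleft()
            return None                        # no packet awaiting service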
18
Round Robin Scheduling
  • multiple classes
  • cyclically scan class queues, serving one from
    each class (if available)
  • provides protection against misbehaving sources
    (also guarantees a minimum bandwidth to every
    connection)

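A minimal sketch of round-robin scheduling over
per-class queues (names are illustrative):

    from collections import deque

    class RoundRobinScheduler:
        """Cyclically scan class queues, serving one packet from each non-empty class."""
        def __init__(self, num_classes):
            self.queues = [deque() for _ in range(num_classes)]
            self.turn = 0                      # class to visit next

        def enqueue(self, packet, cls):
            self.queues[cls].append(packet)

        def next_packet(self):
            for _ in range(len(self.queues)):  # at most one full cycle
                q = self.queues[self.turn]
                self.turn = (self.turn + 1) % len(self.queues)
                if q:
                    return q.popleft()
            return None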
19
Max-Min Fair Share
  • Fair Resource allocation to best-effort
    connections?
  • A fair share gives a user with a small demand
    what it wants and evenly distributes the unused
    resources among the bigger users.
  • Maximize the minimum share of a source whose
    demand is not fully satisfied.
  • Resources are allocated in order of increasing
    demand
  • no source gets a resource share larger than its
    demand
  • sources with unsatisfied demands get an equal
    share of the resource (see the sketch below)
  • A Generalized Processor Sharing (GPS) server will
    implement max-min fair share

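A minimal sketch of computing the max-min fair
allocation by progressive filling; the function and
variable names are illustrative:

    def max_min_fair_share(capacity, demands):
        """Allocate capacity so that no source exceeds its demand and
        sources with unsatisfied demands get equal shares."""
        alloc = [0.0] * len(demands)
        remaining = sorted(range(len(demands)), key=lambda i: demands[i])
        cap_left = float(capacity)
        while remaining:
            share = cap_left / len(remaining)    # equal share of what is left
            i = remaining[0]                     # smallest remaining demand
            if demands[i] <= share:              # it can be fully satisfied
                alloc[i] = demands[i]
                cap_left -= demands[i]
                remaining.pop(0)
            else:                                # all remaining demands stay unsatisfied
                for j in remaining:
                    alloc[j] = share
                return alloc
        return alloc

    # Example: max_min_fair_share(10, [2, 2.6, 4, 5]) -> [2, 2.6, 2.7, 2.7]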
20
Weighted Fair Queueing
  • generalized Round Robin (offers differential
    service to each connection/class)
  • each class gets weighted amount of service in
    each cycle

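A standard property of this discipline: with link
rate R and per-class weights w_i, a backlogged class
i is guaranteed a service rate of at least

    R * w_i / (sum of w_j over all classes)

which is the sense in which the service is
"weighted".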
21
Policing Mechanisms
  • Goal: limit traffic so that it does not exceed
    its declared parameters
  • Three commonly used criteria:
  • (Long-term) Average Rate: how many packets can be
    sent per unit time (in the long run)
  • crucial question: what is the interval length?
    100 packets per sec and 6000 packets per min have
    the same average!
  • Peak Rate: e.g., 6000 pkts per min (ppm) avg.;
    1500 ppm peak rate
  • (Max.) Burst Size: max number of packets sent
    consecutively (with no intervening idle time)

22
Traffic Regulators
  • Leaky bucket controllers
  • Token bucket controllers

23
Policing Mechanisms
  • Token Bucket: limit input to a specified Burst
    Size and Average Rate
  • bucket can hold b tokens
  • tokens generated at rate r tokens/sec unless the
    bucket is full
  • over an interval of length t, the number of
    packets admitted is less than or equal to
    (r t + b).

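A minimal sketch of a token-bucket policer with rate
r and bucket depth b; the names are illustrative,
and time is read from time.monotonic():

    import time

    class TokenBucket:
        """Admit a packet only if a token is available; refill at r tokens/sec up to b."""
        def __init__(self, r, b):
            self.r = r                        # token generation rate (tokens/sec)
            self.b = b                        # bucket depth (max burst size)
            self.tokens = b
            self.last = time.monotonic()

        def conforms(self, tokens_needed=1):
            now = time.monotonic()
            self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
            self.last = now
            if self.tokens >= tokens_needed:
                self.tokens -= tokens_needed  # packet conforms: consume tokens
                return True
            return False                      # non-conforming: drop, mark, or delay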
24
Policing Mechanisms (more)
  • token bucket, WFQ combine to provide guaranteed
    upper bound on delay, i.e., QoS guarantee!

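For a single node, with the flow policed by a token
bucket of depth b and rate r and served by WFQ at a
guaranteed rate R >= r, the standard bound (ignoring
packetization effects) on the maximum queueing delay
is

    d_max = b / R

since the largest backlog a conforming flow can
present to the scheduler is b, and it is drained at
a rate of at least R.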
25
IETF Integrated Services
  • architecture for providing QOS guarantees in IP
    networks for individual application sessions
  • resource reservation: routers maintain state info
    (a la VC) about allocated resources and QoS
    requirements
  • admit/deny new call setup requests

Question: can a newly arriving flow be admitted with
performance guarantees while not violating the QoS
guarantees made to already-admitted flows?
26
Intserv QoS guarantee scenario
  • Resource reservation
  • call setup, signaling (RSVP)
  • traffic, QoS declaration
  • per-element admission control (request / reply)

27
RSVP
  • Multipoint Multicast connections
  • Soft state
  • Receiver initiated reservations
  • identifies a session using a combination of
    destination address, transport-layer protocol
    type, and destination port number
  • each RSVP operation applies only to packets of a
    particular session
  • not a routing protocol: merely used to reserve
    resources along the existing route set up by
    whichever underlying routing protocol is in use

28
RSVP Messages
  • Path message: originates from the traffic sender
  • to install reverse routing state in each router
    along the path
    to provide receivers with information about the
    characteristics of the sender's traffic and the
    end-to-end path so that they can make appropriate
    reservation requests
  • Resv message: originates from the receivers
  • to carry reservation requests to the routers
    along the distribution tree between receivers and
    senders
  • PathTear, ResvTear, ResvErr, PathErr

29
PATH Message
  • Phop: the address of the last RSVP-capable node
    to forward this Path message
  • Sender Template: a filter specification
    identifying the sender
  • Sender Tspec: defines the sender's traffic
    characteristics
  • Optional Adspec: information about the end-to-end
    path, used by the receivers to compute the level
    of resources that must be reserved

30
RESV Message
  • Rspec: reservation specification comprising the
    value R of bandwidth to be reserved in each
    router, and a slack term for end-to-end delay
  • reservation style: FF, WF, or SE
  • Filterspec: identifies the senders
  • Flowspec: comprises the Rspec and a traffic
    specification (set equal to the Sender Tspec)
  • optional ResvConf

31
Call Admission
  • Arriving session must
  • declare its QOS requirement
  • R-spec defines the QOS being requested
  • characterize traffic it will send into network
  • T-spec defines traffic characteristics
  • signaling protocol needed to carry R-spec and
    T-spec to routers (where reservation is required)
  • RSVP

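A minimal sketch of a per-link admission test for
rate reservations; this is an illustrative
simplification (the names are invented here), not
the actual Intserv/RSVP admission algorithm:

    class LinkAdmission:
        """Admit a new reservation only if total reserved bandwidth stays within capacity."""
        def __init__(self, capacity_bps):
            self.capacity = capacity_bps
            self.reserved = {}                 # flow id -> reserved rate R (bps)

        def admit(self, flow_id, rspec_rate_bps):
            if sum(self.reserved.values()) + rspec_rate_bps <= self.capacity:
                self.reserved[flow_id] = rspec_rate_bps
                return True                    # reservation installed
            return False                       # would jeopardize existing guarantees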
32
Intserv QoS: service models (RFC 2211, RFC 2212)
  • Guaranteed service
  • worst-case traffic arrival: leaky-bucket-policed
    source
  • simple (mathematically provable) bound on delay
    [Parekh 1992, Cruz 1988]
  • Controlled load service
  • "a quality of service closely approximating the
    QoS that same flow would receive from an unloaded
    network element."

33
Chapter 6 outline
  • 6.1 Multimedia Networking Applications
  • 6.2 Streaming stored audio and video
  • RTSP
  • 6.3 Real-time, Interactive Multimedia Internet
    Phone Case Study
  • 6.4 Protocols for Real-Time Interactive
    Applications
  • RTP, RTCP
  • SIP
  • 6.5 Beyond Best Effort
  • 6.6 Scheduling and Policing Mechanisms
  • 6.7 Integrated Services
  • 6.8 RSVP
  • 6.9 Differentiated Services

34
IETF Differentiated Services
  • Concerns with Intserv
  • Scalability: signaling and maintaining per-flow
    router state is difficult with large numbers of
    flows
  • Flexible service models: Intserv has only two
    classes; qualitative service classes are also
    wanted
  • "behaves like a wire"
  • relative service distinctions: Platinum, Gold,
    Silver
  • Diffserv approach
  • simple functions in network core, relatively
    complex functions at edge routers (or hosts)
  • Don't define service classes; provide functional
    components to build service classes