Title: An End-to-end Architecture for Quality-Adaptive Streaming Applications in Best-effort Networks
1. An End-to-end Architecture for Quality-Adaptive Streaming Applications in Best-effort Networks
- Reza Rejaie
- reza_at_isi.edu
- USC/ISI
- http://netweb.usc.edu/reza
- April 13, 1999
2. Motivation
- Rapid growth in deployment of realtime streams (audio/video) over the Internet
- TCP is inappropriate for realtime streams
- The Internet requires end systems to react to congestion properly and promptly
- Streaming applications require a sustained consumption rate to deliver acceptable and stable quality
3. Best-effort Networks (The Internet)
- Shared environment
- Bandwidth is not known a priori
- Bandwidth changes during a session
- Seemingly random losses
- TCP-based traffic dominates
- End-to-end congestion control is crucial for stability, fairness, and high utilization
- End-to-end congestion control in a TCP-friendly fashion is the main requirement in the Internet
4. Streaming Applications
- Delay-sensitive
- Semi-reliable
- Rate-based
- Require QoS from the end-to-end point of view
(Figure: end-to-end path from encoder, adaptation, and source at the server, over TCP connections through the Internet, to buffer, decoder, and display at the client)
5. The Problem
- Designing an end-to-end congestion control mechanism
- Delivering acceptable and stable quality while performing congestion control
6. Outline
- The End-to-end Architecture
- Congestion Control (The RAP protocol)
- Quality Adaptation
- Extending the Architecture
- Multimedia Proxy Caching
- Contributions
- Future Directions
7. The End-to-end Architecture
(Figure: server side with archive, buffer manager, transmission and adaptation buffers, error control, quality adaptation, and congestion control; client side with acker, buffer manager, playback buffer, and decoder; data and control paths run between them across the Internet)
8. Outline
- The End-to-end Architecture
- Congestion Control (The RAP Protocol)
- Quality Adaptation
- Extending the Architecture
- Multimedia Proxy Caching
- Contributions
- Future Directions
9. Previous Work on Congestion Control
- Modified TCP
- Jacob et al. 97; SCP, Cen et al. 98
- TCP equation
- Mathis et al. 97; Padhye et al. 98
- Additive Increase, Multiplicative Decrease
- LDA, Sisalem et al. 98
- NETBLT (Lixia Zhang)
- Challenge: TCP is a moving target
10. Overview of RAP
- Decision Function
- Increase/Decrease Algorithm
- Decision Frequency
- Goal: to be TCP-friendly

(Figure: rate vs. time sawtooth illustrating the decision function, the increase/decrease algorithm, and the decision frequency)
11. Congestion Control Mechanism
- Adjust the rate once per round-trip time (RTT)
- Increase the rate periodically if no congestion
- Decrease the rate when congestion occurs
- Packet loss signals congestion
- Cluster loss: grouping losses per congestion event
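The once-per-RTT mechanism above can be sketched as a simple AIMD loop. The packet size, initial rate, and decrease factor below are illustrative assumptions, not RAP's actual constants.

```python
# Sketch of coarse-grain AIMD rate adaptation, once per RTT, in the
# spirit of RAP. All constants here are illustrative assumptions.

PACKET_SIZE = 1000  # bytes; assumed

def adjust_rate(rate, rtt, congestion_detected, beta=0.5):
    """Return the new transmission rate (bytes/sec) for the next RTT."""
    if congestion_detected:         # packet loss signals congestion
        return rate * beta          # multiplicative decrease
    return rate + PACKET_SIZE / rtt # additive increase: one extra packet per RTT

# One adjustment per round-trip time:
rate = 10_000.0
for loss in [False, False, True, False]:
    rate = adjust_rate(rate, rtt=0.1, congestion_detected=loss)
```

In a real sender the loss signal would come from gap detection or timeouts, with clustered losses within one RTT counted as a single congestion event, as the slide notes.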
12. Rate Adaptation Algorithm
- Coarse-grain rate adaptation
- Additive Increase, Multiplicative Decrease (AIMD)
- Extensive simulations revealed:
- TCP's behavior varies substantially with network conditions, e.g. retransmission timeouts, burstiness
- TCP is responsive to transient congestion
- AIMD only emulates window adjustment in TCP
13. Rate Adaptation Algorithm (cont'd)
- Fine-grain rate adaptation
- The ratio of short-term to long-term average RTT
- Emulates ACK-clocking in TCP
- Increases responsiveness to transient congestion
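The RTT-ratio idea can be sketched as two exponential averages of the same RTT samples; the class name and EWMA weights are illustrative assumptions, not the thesis parameters.

```python
# Sketch of fine-grain rate adaptation: scale the coarse-grain rate by
# the ratio of long-term to short-term average RTT, so a rising
# short-term RTT (transient congestion) immediately lowers the rate.

def ewma(avg, sample, weight):
    """Exponentially weighted moving average update."""
    return (1 - weight) * avg + weight * sample

class FineGrainScaler:
    def __init__(self, rtt0):
        self.srtt = rtt0  # short-term average RTT (fast-moving)
        self.lrtt = rtt0  # long-term average RTT (slow-moving)

    def on_ack(self, rtt_sample):
        self.srtt = ewma(self.srtt, rtt_sample, 0.3)   # assumed weight
        self.lrtt = ewma(self.lrtt, rtt_sample, 0.05)  # assumed weight

    def scaled_rate(self, coarse_rate):
        # Rate shrinks when the short-term RTT exceeds the long-term RTT.
        return coarse_rate * self.lrtt / self.srtt
```

Because the short-term average tracks queue build-up within one congestion epoch, this scaling reacts between the coarse once-per-RTT decisions, mimicking ACK-clocking.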
14. Coarse vs. Fine-grain RAP
(Figure: impact of fine-grain rate adaptation)
15. RAP Simulation
- RAP against Tahoe, Reno, NewReno, and SACK
- Inter-dependency among parameters
- Configuration parameters:
- Bandwidth per flow
- RTT
- Number of flows
- Fairness ratio = avg. RAP BW / avg. TCP BW

(Figure: simulation topology; TCP and RAP sources feed a bottleneck between two switches toward TCP and RAP sinks, with background TCP and RAP traffic)
16. Fairness ratio across the parameter space without fine-grain adaptation
17. Fairness ratio across the parameter space with fine-grain adaptation
18. Impact of RED switches on the fairness ratio
19. Summary of RAP Simulations
- RAP achieves TCP-friendliness over a wide range
- Fine-grain rate adaptation extends inter-protocol fairness to a wider range
- Occasional unfairness against TCP traffic is mainly due to divergence of TCP congestion control from AIMD
- More pronounced for Reno and Tahoe
- The bigger TCP's congestion window, the closer its behavior is to AIMD
- RED gateways can improve inter-protocol sharing, depending on how well RED is configured
- RAP is a TCP-friendly, congestion-controlled UDP
20. Outline
- The End-to-end Architecture
- Congestion Control (The RAP protocol)
- Quality Adaptation
- Extending the Architecture
- Multimedia Proxy Caching
- Contributions
- Future Directions
21. Quality Adaptation
(Figure: the end-to-end architecture of slide 7, repeated; quality adaptation sits at the server beside error control and congestion control)
22. The Problem
- Delivering acceptable and stable quality while performing congestion control
- Seemingly random losses result in random, potentially wide variations in bandwidth
- Streaming applications are rate-based
23. Role of Quality Adaptation
- Buffering only absorbs short-term variations
- A long-lived session could result in buffer overflow or underflow
- Quality adaptation is complementary to buffering
- Adjust the quality with long-term variations in bandwidth

(Figure: BW(t) vs. time)
24. Mechanisms to Adjust Quality
- Adaptive encoding (Ortega 95, Tan 98)
- CPU-intensive
- Switching between multiple encodings
- High storage requirement
- Layered encoding (McCanne 96, Lee 98)
- Inter-layer decoding dependency
- When/how much to adjust the quality?
25. Assumptions & Goals
- Assumptions
- AIMD variations in bandwidth (rate)
- Linear layered encoding
- Constraint
- Obeying the congestion-controlled rate limit
- Goal
- To control the level of smoothing
26. Layered Quality Adaptation
(Figure: a linear layered stream. Each layer i is sent at bw_i(t) into a buffer buf_i and consumed at rate C by the decoder and display; the total BW(t) crosses the Internet. Filling and draining phases show quality adaptation against the consumption rate.)
27. Buffering Tradeoff
- Each buffering layer can contribute at most C (bps)
- Buffering for more layers provides higher stability
- Buffered data for a dropped layer is useless for recovery
- Buffering for lower layers is more efficient
- What is the optimal buffer distribution for a single back-off scenario?

(Figure: per-layer buffers buf_0..buf_2, each drained at up to C bps against the available bandwidth BW(t))
28. Optimal Inter-layer Buffer Allocation
- The optimal buffer state depends on the time of the back-off
- The draining pattern depends on the buffer state
- Back-offs occur randomly
- Keep the buffer state as close to the optimal as possible during the filling phase

(Figure: filling and draining phases; buffered data and bandwidth share for layers L0, L1, and L2)
29. Adding & Dropping
- Add a layer when buffering is sufficient for a single back-off
- Drop a layer when buffering is insufficient for recovery
- Random losses could result in frequent adds and drops: unstable quality
- Conservative adding results in smooth changes in quality

(Figure: BW(t) over time with buffered data for L0, L1, and L2)
30. Smoothing
- Conservative adding:
- When the average bandwidth is sufficient
- When there is sufficient buffering for K back-offs
- The buffer constraint is preferred and sufficient
- Directly relates the time of adding to the buffer state
- Effectively utilizes the available bandwidth
- K is a smoothing factor
- Short-term quality vs. long-term smoothing
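The add/drop rule with a smoothing factor K can be sketched as follows. The buffer-demand estimate (a recovery triangle per back-off with an assumed additive recovery slope) is an illustrative model, not the thesis formula; all names and constants are assumptions.

```python
# Sketch of the conservative add/drop decision: add a layer only when
# buffered data could absorb K back-offs with the extra layer active;
# drop the top layer when buffering cannot cover even one back-off.

def buffer_needed(rate, consumption, backoffs, alpha=1000.0):
    """Bytes drained while recovering from `backoffs` successive rate
    halvings, assuming additive recovery at `alpha` bytes/s^2 (assumed)."""
    total = 0.0
    for _ in range(backoffs):
        rate /= 2                                  # multiplicative back-off
        deficit = max(consumption - rate, 0.0)
        total += deficit * (deficit / alpha) / 2   # area of the recovery triangle
        rate = consumption                         # fully recovered before next back-off
    return total

def decide(n_layers, rate, layer_rate, buffered, k):
    """Conservative add/drop decision with smoothing factor k."""
    consumption = n_layers * layer_rate
    if buffered >= buffer_needed(rate, consumption + layer_rate, k):
        return "add"   # buffering would survive K back-offs with one more layer
    if buffered < buffer_needed(rate, consumption, 1):
        return "drop"  # cannot recover from even a single back-off
    return "hold"
```

Raising `k` raises the adding threshold, trading short-term quality improvement for long-term smoothness, as the slide describes.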
31. Smooth Filling & Draining
(Figure: filling and draining cycle through buffer states able to recover from 1, 2, ..., K back-offs; a layer is added at the top of the cycle and dropped at the bottom)
32. Effect of the Smoothing Factor
(Figure: transmission rate, quality, and per-layer buffer occupancy for L0..L3 over 40 seconds, for K = 2 and K = 4)
33. Adapting to Network Load
(Figure: transmission rate, quality, and per-layer buffer occupancy over 90 seconds with K = 4)
34. Number of Dropped Layers
35. Summary of the QA Results
- The quality adaptation mechanism can efficiently control the quality
- The smoothing factor allows the server to trade short-term improvement for long-term smoothing
- Buffer requirement is low
- Deployable for live but non-interactive sessions
36. Limitation of the E2E Approach
- Delivered quality is limited by the average bandwidth between the server and client
- Solutions:
- Mirror servers
- Multimedia proxy caching

(Figure: clients and server across the Internet; quality (layer) vs. time)
37. Outline
- The End-to-end Architecture
- Congestion Control (The RAP protocol)
- Quality Adaptation
- Extending the Architecture
- Multimedia Proxy Caching
- Contributions
- Future Directions
38. Multimedia Proxy Caching
- Assumptions
- The proxy can perform:
- End-to-end congestion control
- Quality adaptation
- Goals
- Improve delivered quality
- Low-latency VCR functions
- Natural benefits of caching

(Figure: clients, proxy, Internet, server)
39. Challenge
- Cached streams have variable quality
- Layered organization provides an opportunity for adjusting the quality

(Figure: quality (layer) vs. time for layers L0..L4 of a cached stream)
40. Issues
- Delivery procedure
- Relaying on a cache miss
- Pre-fetching on a cache hit
- Replacement algorithm
- Determining popularity
- Replacement pattern
41. Cache Miss Scenario
- Stream is located at the original server
- Playback from the server through the proxy
- Proxy intercepts and caches the stream
- No benefit in a miss scenario

(Figure: clients, proxy, Internet, server)
42. Cache Hit Scenario
- Playback from the proxy cache
- Lower latency
- May have better quality!
- As the available bandwidth allows:
- Lower quality playback
- Higher quality playback

(Figure: clients, proxy, Internet, server)
43. Lower Quality Playback
- Missing pieces of the active layers are pre-fetched on demand
- Required pieces are identified by QA
- Results in smoothing

(Figure: quality (number of active layers) vs. time, layers L0..L4)
44. Higher Quality Playback
- Pre-fetch higher layers on demand
- Pre-fetched data is always cached
- Must pre-fetch a missing piece before its playback time
- Tradeoff

(Figure: quality (number of active layers) vs. time, layers L0..L4)
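The deadline constraint on pre-fetching can be sketched as a feasibility check over the missing pieces. The function name, piece format, and fixed-bandwidth assumption are all illustrative, not the thesis mechanism.

```python
# Sketch of hit-scenario pre-fetching: a missing piece is only worth
# fetching if its transfer can finish before its playback time.

def schedule_prefetch(missing, bandwidth, now):
    """missing: list of (playback_time, size_bytes), earliest deadline
    first. Returns the pieces that can arrive in time at `bandwidth`
    (bytes/sec), fetched back to back starting at `now`."""
    fetched, t = [], now
    for play_time, size in missing:
        finish = t + size / bandwidth
        if finish <= play_time:          # arrives before it is needed
            fetched.append((play_time, size))
            t = finish
        # otherwise skip: too late to be useful for this playback,
        # though a real proxy would still cache it for future hits
    return fetched
```

The skipped pieces illustrate the tradeoff on the slide: spending bandwidth on a piece that misses its deadline improves the cache but not the current playback.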
45. Replacement Algorithm
- Goal: converge the cache state to the optimal
- Average quality of a cached stream depends on:
- Popularity
- Average bandwidth between the proxy and recently interested clients
- Variation in quality inversely depends on:
- Popularity

(Figure: clients, proxy, Internet, server)
46. Popularity
- Number of hits during an interval
- User's level of interest (including VCR functions)
- Potential value of a layer for quality adaptation
- Calculate whit on a per-layer basis
- Layered encoding guarantees a monotonic decrease in popularity across layers

whit = PlaybackTime (sec) / StreamLength (sec)
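The per-layer weighted-hit bookkeeping might look like the following sketch, built around the whit formula above; the accumulation scheme and names are illustrative assumptions.

```python
# Sketch of per-layer popularity: each hit contributes a weight
# whit = playback_time / stream_length, so partial viewing (including
# VCR-style skipping) counts less than a full playback.

def whit(playback_time, stream_length):
    """Weighted hit: fraction of the stream actually played back."""
    return playback_time / stream_length

def layer_popularity(hits, n_layers):
    """hits: list of (layers_delivered, playback_time, stream_length).
    Returns accumulated popularity for layers 0..n_layers-1."""
    pop = [0.0] * n_layers
    for layers, play, length in hits:
        w = whit(play, length)
        for l in range(min(layers, n_layers)):
            pop[l] += w  # a hit counts only for the layers it used
    return pop
```

Since a layered stream never delivers a higher layer without all lower ones, the resulting per-layer popularity is monotonically non-increasing with layer number, matching the guarantee on the slide.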
47. Replacement Pattern
- Multi-valued replacement decision for multimedia objects
- Coarse-grain flushing: on a per-layer basis
- Fine-grain flushing: on a per-segment basis

(Figure: cached segments across layers (quality vs. time), illustrating fine-grain and coarse-grain flushing)
48. Summary of Multimedia Caching
- Exploited characteristics of multimedia objects
- Proxy caching mechanism for multimedia streams
- Pre-fetching
- Replacement algorithm
- Adaptively converges the state of the cache to the optimal
49. Contributions
- End-to-end architecture for delivery of quality-adaptive multimedia streams
- RAP, a TCP-friendly congestion control mechanism over a wide range of network conditions
- Quality adaptation mechanism that adjusts the delivered quality with a desired degree of smoothing
- Proxy caching mechanism for multimedia streams that effectively improves the delivered quality of popular streams
50. Future Directions
- End-to-end congestion control
- RAP's behavior in the presence of web-like traffic
- Emulating TCP's timer-driven regime
- Bi-directional RAP connections; reverse vs. forward path congestion control
- Experiments over CAIRN and the Internet
- Integration of RAP and the congestion manager
- Adopting RAP into class-based QoS
- Using RAP for multicast congestion control
- Congestion control over wireless networks
51. Future Directions (cont'd)
- Quality adaptation
- Extending to other rate adaptation mechanisms
- Multimedia proxy caching
- Other replacement patterns and popularity functions (e.g. chunk-based)
- Traffic measurement and characterization
- Empirical evaluation of streaming applications
52. An End-to-end Architecture for Quality-Adaptive Streaming Applications in Best-effort Networks
- Reza Rejaie
- reza_at_isi.edu
- USC/ISI
- http://netweb.usc.edu/reza
- April 7, 1999
53. Thank You
Reza Rejaie
reza_at_isi.edu
http://netweb.usc.edu/reza
54. Target Environment
(Figure: TCP traffic in the target environment)
55. Optimal Buffer Allocation
- The optimal buffer state is not unique; S1 and S2 are extreme cases
- S1 requires more buffering layers
- S2 requires more buffer share per layer
- Buffer allocation for S1 can recover from S2, but not vice versa

(Figure: back-off scenarios 1, 2, and 3)