1
Cost-Effective Video Streaming Techniques
  • Kien A. Hua
  • School of EE and Computer Science
  • University of Central Florida
  • Orlando, FL 32816-2362
  • U.S.A.

2
Server Channels
  • Videos are delivered to clients as a continuous
    stream.
  • Server bandwidth determines the number of video
    streams that can be supported simultaneously.
  • Server bandwidth can be organized and managed as
    a collection of logical channels.
  • These channels can be scheduled to deliver
    various videos.

3
Using Dedicated Channels
[Figure: a video server delivering a separate dedicated stream to each client. Too expensive!]
4
Batching
  • FCFS (First Come First Served)
  • MQL (Maximum Queue Length First)
  • MFQ (Maximum Factored Queue Length)

Can multicast provide true VoD?
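
As a rough illustration of how a batching server might pick which waiting queue to serve when a channel frees up, here is a minimal Python sketch of the MFQ rule (queue length divided by the square root of the video's access frequency, so cold videos are not starved by popular ones). The data structures and names (queues, access_freq) are illustrative assumptions, not taken from the slides.

```python
# Minimal sketch of MFQ batch scheduling (queue length weighted by access
# frequency). The queue layout and access-frequency estimates are
# illustrative assumptions, not taken from the slides.
import math

def pick_queue_mfq(queues, access_freq):
    """Return the video whose waiting queue has the largest factored length:
    len(queue) / sqrt(access frequency), so popular titles do not starve
    the unpopular ones. `queues` maps video -> list of waiting clients."""
    best_video, best_score = None, 0.0
    for video, waiting in queues.items():
        if not waiting:
            continue
        score = len(waiting) / math.sqrt(access_freq.get(video, 1.0))
        if score > best_score:
            best_video, best_score = video, score
    return best_video  # serve this whole queue with one multicast stream
```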
5
Challenges: Conflicting Goals
  • Low latency: requests must be served immediately
  • High efficiency: each multicast must still be
    able to serve a large number of clients

6
Some Solutions
  • Patching [Hua98]
  • Range Multicast [Hua02]

7
Patching
[Figure: a regular multicast stream delivers the video to client A]
8
Proposed Technique: Patching
[Figure: a client arriving t time units after the regular multicast began receives the missed prefix (up to the skew point) on a separate patching stream while also joining the regular multicast]
9
Proposed Technique: Patching
[Figure: client B buffers the regular multicast while playing the patching stream; the skew point is absorbed by the client buffer feeding the video player]
10
Client Design
11
Server Design
  • Server must decide when to schedule a regular
    stream or a patching stream

[Figure: a timeline of requests A through G; r marks a regular stream, p a patching stream, and each regular stream anchors its own multicast group]
12
Two Simple Approaches
  • If no regular stream for the same video exists, a
    new regular stream is scheduled
  • Otherwise, two policies can be used to make the
    decision: Greedy Patching and Grace Patching

13
Greedy Patching
  • A patching stream is always scheduled

[Figure: Greedy Patching timeline; every later request patches onto the existing regular stream]
14
Grace Patching
  • If the client buffer is large enough to absorb
    the skew, a patching stream is scheduled;
    otherwise, a new regular stream is scheduled
    (both policies are sketched below)

[Figure: Grace Patching timeline; a new regular stream starts once the skew exceeds the client buffer]
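
To make the server-side decision concrete, here is a minimal Python sketch of the Greedy and Grace policies described on slides 12 to 14. The bookkeeping (now, last_regular_start, buffer_seconds) is a hypothetical illustration, not the authors' implementation.

```python
# Minimal sketch of the Greedy vs. Grace patching decision.
# The bookkeeping (now, last_regular_start, buffer_seconds) is an
# illustrative assumption, not the authors' implementation.

def schedule(policy, now, last_regular_start, buffer_seconds):
    """Return 'regular' or 'patch' for a request arriving at time `now`."""
    if last_regular_start is None:
        return "regular"                 # no ongoing multicast of this video
    skew = now - last_regular_start      # how much of the video was missed
    if policy == "greedy":
        return "patch"                   # always patch onto the ongoing stream
    if policy == "grace":
        # patch only if the client buffer can absorb the skew;
        # otherwise start a fresh regular multicast
        return "patch" if skew <= buffer_seconds else "regular"
    raise ValueError(policy)
```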
15
Performance Study
  • Compared with conventional batching
  • Maximum Factored Queue Length (MFQ) is used
  • Performance metric is average service latency

16
Simulation Parameters
17
Effect of Server Bandwidth
18
Effect of Client Buffer
19
Effect of Request Rate
20
Optimal Patching
[Figure: the same request timeline (A through G) with a patching window drawn after each regular stream r; requests arriving within the window are served with patching streams p]
What is the optimal patching window?
21
Optimal Patching Window
  • D is the mean total amount of data transmitted
    by a multicast group
  • Minimize the server bandwidth requirement, D/W,
    under various W values

[Figure: a patching window W positioned along the video length; the client buffer size bounds the skew a patch can cover]
22
Optimal Patching Window
  • Compute D, the mean amount of data transmitted
    for each multicast group
  • Determine Δ, the average time duration of a
    multicast group
  • The server bandwidth requirement is D/Δ, which is
    a function of the patching period
  • Find the patching period that minimizes the
    bandwidth requirement (a numerical sketch follows)
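
To illustrate the optimization, the sketch below numerically minimizes D(W)/Δ(W) over the patching window W. The concrete model (Poisson request arrivals at rate λ, video length L, expected patching data λW²/2 per group, and mean group duration W + 1/λ) is an assumption made for illustration, not a formula taken from the slides.

```python
# Minimal sketch: numerically find the patching window W that minimizes
# the per-group bandwidth requirement D(W) / Delta(W).
# The traffic model (Poisson arrivals at rate lam, video length L) is an
# illustrative assumption, not taken from the slides.

def bandwidth_requirement(W, L, lam):
    D = L + lam * W * W / 2.0          # full video + expected patching data
    delta = W + 1.0 / lam              # expected time between regular streams
    return D / delta

def optimal_window(L, lam, step=0.01):
    best_w, best_b = 0.0, bandwidth_requirement(0.0, L, lam)
    w = step
    while w <= L:
        b = bandwidth_requirement(w, L, lam)
        if b < best_b:
            best_w, best_b = w, b
        w += step
    return best_w, best_b

# Example: a 120-minute video with one request per minute on average.
print(optimal_window(L=120.0, lam=1.0))
```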

23
Candidates for Optimal Patching Window
24
Piggybacking [Golubchik96]
  • Slow down an earlier service and speed up the new
    one to merge them into one stream
  • Limited stream sharing due to long catch-up delay
  • Implementation is complicated

25
Concluding Remarks
  • Unlike conventional multicast, requests can be
    served immediately under patching
  • Patching makes multicast more efficient by
    dynamically expanding the multicast tree
  • Under patching, most streams deliver only the
    first few minutes of video data
  • Patching is very simple and requires no
    specialized hardware

26
Patching on the Internet
  • Problem: the current Internet does not support
    multicast
  • A solution: deploy an overlay of software routers
    on the Internet; multicast is implemented on this
    overlay using only IP unicast

27
Content Routing
  • Each router forwards its Find messages to
    other routers in a round-robin manner.

28
Removal of an Overlay Node
The departing node informs its child nodes to reconnect to their grandparent.
29
Failure of Parent Node
  • Data stop coming from the parent
  • Reconnect to the server

30
Slow Incoming Stream
Reconnect upward to the grandparent
31
Downward Reconnection
  • When a reconnection reaches the server, future
    reconnections of this link go downward.
  • Downward reconnection is done through a sibling
    node selected in a round-robin manner.
  • When a downward reconnection reaches a leaf node,
    future reconnections of this link go upward
    again (these rules are sketched below).
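
The reconnection rules of slides 28 to 31 can be summarized in a short sketch. The OverlayNode structure and method names below are an illustrative reading of those rules, not the actual protocol implementation.

```python
# Illustrative sketch of the overlay reconnection rules on slides 28-31.
# The node structure and method names are assumptions, not the real protocol.

class OverlayNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self._rr = 0                      # round-robin cursor over children

    def reconnect_to(self, new_parent):
        if self.parent is not None and self in self.parent.children:
            self.parent.children.remove(self)
        self.parent = new_parent
        if new_parent is not None:
            new_parent.children.append(self)

    def leave(self):
        """Planned removal: ask each child to reconnect to its grandparent."""
        for child in list(self.children):
            child.reconnect_to(self.parent)

    def on_parent_failure(self, server):
        """Data stopped arriving from the parent: reconnect to the server."""
        self.reconnect_to(server)

    def on_slow_stream(self):
        """Incoming stream too slow: reconnect upward to the grandparent."""
        grandparent = self.parent.parent if self.parent else None
        if grandparent is not None:
            self.reconnect_to(grandparent)

    def pick_child_round_robin(self):
        """Downward reconnection: hand the link to one child at a time."""
        if not self.children:
            return None                   # a leaf; reconnection turns upward again
        child = self.children[self._rr % len(self.children)]
        self._rr += 1
        return child
```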

32
Limitation of Patching
  • The performance of Patching is limited by the
    server bandwidth.
  • Can we scale the application beyond the physical
    limitation of the server?

33
Chaining [Hua97]
  • Using a hierarchy of multicasts
  • Clients multicast data to other clients
    downstream
  • Demand on server bandwidth is substantially
    reduced

34
Chaining
  • Highly scalable and efficient
  • Implementation is complex

35
Range Multicast [Hua02]
  • Deploying an overlay of software routers on the
    Internet
  • Video data are transmitted to clients through
    these software routers
  • Each router caches a prefix of the video streams
    passing through it
  • This cache can be used to provide the entire
    video to subsequent clients arriving within a
    buffer-size period

36
Range Multicast Group
  • Four clients join the same server stream at
    different times without delay
  • Each client sees the entire video

Buffer size: each router can cache 10 time units of video data. Assumption: no transmission delay.
37
Multicast Range
  • All members of a conventional multicast group
    share the same play point at all times
  • They must join at the multicast time
  • Members of a range multicast group can have a
    range of different play points
  • They can join at their own time

Multicast range at time 11: [0, 11]
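
As an illustrative reading of slides 35 to 37, the sketch below checks whether a new client can still join a router's range multicast group (i.e., whether offset 0 of the video is still in the router's prefix cache) and computes the range of play points of the current members. The 10-time-unit cache size comes from the example on slide 36; the function names are assumptions.

```python
# Illustrative sketch of range multicast membership (slides 35-37).
# Function names and structure are assumptions, not the real router code.

CACHE_UNITS = 10   # slide 36: each router caches 10 time units of video

def can_join_range(now, stream_start_at_router, cache_units=CACHE_UNITS):
    """A new client can be served the entire video by this router as long
    as offset 0 is still in the router's cache, i.e. the client arrives
    within a buffer-size period of the stream reaching the router."""
    elapsed = now - stream_start_at_router
    return 0 <= elapsed <= cache_units

def play_point_range(now, join_times):
    """Members of a range multicast group have a *range* of play points,
    e.g. joins at times 0..11 give the range [0, 11] at time 11."""
    points = [now - t for t in join_times if t <= now]
    return (min(points), max(points)) if points else None
```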
38
Network Cache Management
  • Initially, a cache chunk is free.
  • When a free chunk is dispatched for a new stream,
    the chunk becomes busy.
  • A busy chunk becomes hot if its content matches a
    new service request.
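
Below is a minimal sketch of the chunk life cycle on this slide (free, then busy, then hot). The Chunk class and its fields are an illustrative reading of the slide, not the actual cache-manager code.

```python
# Sketch of the cache-chunk states from slide 38 (free -> busy -> hot).
# The chunk structure and transitions are an illustrative reading of the
# slide, not the actual cache-manager implementation.

class Chunk:
    def __init__(self):
        self.state = "free"      # initially, a cache chunk is free
        self.video = None
        self.start_offset = None

    def dispatch(self, video, start_offset):
        """A free chunk assigned to cache a new stream becomes busy."""
        assert self.state == "free"
        self.state = "busy"
        self.video, self.start_offset = video, start_offset

    def match_request(self, video, offset):
        """A busy chunk becomes hot when its content matches a new request."""
        if self.state in ("busy", "hot") and self.video == video \
                and self.start_offset == offset:
            self.state = "hot"
            return True
        return False
```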

39
RM vs. Proxy Servers
40
2-Phase Service Model (2PSM) [Hua99]: Browsing Videos in a Low-Bandwidth Environment
41
Search Model
  • Use similarity matching or keyword search to look
    for the candidate videos.
  • Preview some of the candidates to identify the
    desired video.
  • Apply VCR-style functions to search for the video
    segments.

42
Conventional Approach
1. Download S0
2. Download S1 while playing S0
3. Download S2 while playing S1
...
Advantage: reduces wait time
Disadvantage: unsuitable for searching video
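
The pipeline above can be expressed in a few lines: prefetch the next segment on a worker thread while the current one plays. In the sketch below, download() and play() are hypothetical placeholders, not functions from the presentation.

```python
# Sketch of the conventional pipelined segment delivery on this slide.
# download() and play() are hypothetical placeholders.
from concurrent.futures import ThreadPoolExecutor

def pipelined_playback(segments, download, play):
    """Download segment i+1 while segment i is playing."""
    if not segments:
        return
    with ThreadPoolExecutor(max_workers=1) as pool:
        data = download(segments[0])            # 1. download S0
        for nxt in segments[1:]:
            future = pool.submit(download, nxt) # prefetch S(i+1)...
            play(data)                          # ...while S(i) is playing
            data = future.result()
        play(data)
```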
43
Search Techniques
  • Use extra preview files to support the preview
    function
    - Requires more storage space
    - Downloading the preview file adds delay
  • Use separate fast-forward and fast-reverse files
    to provide the VCR-style operations
    - Requires more storage space
    - Server can become a bottleneck

44
Challenges
How to download the preview frames for FREE?
  - No additional delay
  - No additional storage requirement
How to support VCR operations without VCR files?
  - No overhead for the server
  - No additional storage requirement
45
2PSM Preview Phase
46
2PSM Playback Phase
47
Remarks
1. It requires no extra files to provide the
preview feature.
2. Downloading the preview frames is free.
3. It requires no extra files to support the VCR
functionality.
4. Each client manages its own VCR-style
interaction. Server is not involved.
48
2PSM Video Browser