1
Peer-To-Peer Multimedia Streaming Using BitTorrent
  • Purvi Shah, Jehan-François Pâris
  • University of Houston, Houston, TX

2
Problem Definition
One server, many customers: transferring videos is a resource-intensive task!
  • Objectives
  • Customer satisfaction: minimize customer waiting time
  • Cost effectiveness: reduce operational costs (mostly hardware costs)

3
Transferring Videos
  • Video download
  • Just like any other file
  • Simplest case: file downloaded using a conventional protocol
  • Playback does not overlap with the transfer
  • Video streaming from a server
  • Playback starts while the video is still being downloaded
  • No need to wait until the download is completed
  • New challenge: ensuring on-time delivery of data
  • Otherwise the client cannot keep playing the video

4
Why use P2P Architecture?
  • Infrastructure-based approach (e.g., Akamai)
  • Most commonly used
  • Client-server architecture
  • Expensive: huge server farms
  • Best effort delivery
  • Client upload capacity completely unutilized
  • Not suitable for flash crowds

5
Why use P2P Architecture?
  • IP Multicast
  • Highly efficient bandwidth usage
  • Several drawbacks so far
  • Infrastructure-level changes make most administrators reluctant to provide it
  • Security flaws
  • No effective, widely accepted transport protocol on top of IP multicast

6
P2P Architecture
  • Leverage power of P2P networks
  • Multiple solutions are possible
  • Tree-based structured overlay networks
  • Leaf clients' bandwidth goes unutilized
  • Less reliable
  • Complex overlay construction
  • Content bottlenecks
  • Fairness issues

7
Our Solution
  • Mesh-based unstructured overlay
  • Based on widely-used BitTorrent content
    distribution protocol
  • A P2P protocol introduced in 2002
  • Linux distributors such as Lindows offer software
    updates via BT
  • Blizzard uses BT to distribute game patches
  • Films are starting to be distributed through BT this year

9
BitTorrent (I)
10
BitTorrent (II)
  • Has a central tracker
  • Keeps information on peers
  • Responds to requests for that information
  • Service subscription
  • Built-in incentives: rechoking
  • Give preference to cooperative peers: tit-for-tat exchange of content chunks
  • Random search: optimistic unchoke (sketched in the code below)
  • When all chunks are downloaded, peers can
    reconstruct the whole file
  • Not tailored to streaming applications
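
A minimal Python sketch of the rechoking behavior described above; the peer attributes (`interested`, `download_rate_from`, `choked`) and the slot count are our own illustrative names, not part of the BT specification:

```python
import random

def rechoke(neighbors, regular_slots=3):
    """Periodic rechoke: unchoke the peers that upload to us fastest
    (tit-for-tat) plus one random peer (optimistic unchoke)."""
    interested = [p for p in neighbors if p.interested]
    # Tit-for-tat: prefer the peers that have been giving us the best rates.
    by_rate = sorted(interested, key=lambda p: p.download_rate_from, reverse=True)
    unchoked = set(by_rate[:regular_slots])
    # Optimistic unchoke: give one randomly chosen peer a free try,
    # so newcomers can bootstrap and faster partners can be discovered.
    remaining = [p for p in interested if p not in unchoked]
    if remaining:
        unchoked.add(random.choice(remaining))
    for p in neighbors:
        p.choked = p not in unchoked
    return unchoked
```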

11
Evaluation Methodology
  • Simulation-based
  • Answers depend on many parameters
  • Hard to control in measurements or to model
  • Java-based discrete-event simulator (event loop sketched below)
  • Models queuing delay and transmission delay
  • Remains faithful to the BT specifications
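
The simulator itself is Java-based; the discrete-event loop it relies on can be sketched in a few lines of Python (the event and delay values below are illustrative, not taken from the study):

```python
import heapq

class EventSimulator:
    """Minimal discrete-event loop: events are (time, action) pairs
    processed in timestamp order."""
    def __init__(self):
        self.now = 0.0
        self._queue = []

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.now + delay, id(action), action))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

# Example: model one chunk transfer as queuing delay + transmission delay.
sim = EventSimulator()
sim.schedule(0.05 + 256e3 * 8 / 1e6,  # 50 ms queuing + 256 KB over a 1 Mb/s link
             lambda: print(f"chunk delivered at t={sim.now:.2f}s"))
sim.run(until=10.0)
```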

12
BT Limitations
  • BT does not account for the real-time needs of streaming applications
  • Chunk selection
  • Peers do not download chunks in sequence
  • Neighbor selection
  • Incentive mechanism makes too many peers wait too long before joining the swarm

13
Chunk Selection Policy
  • Replace BT's rarest-first policy with a sliding window policy (a code sketch follows the diagram below)
  • The forward-moving window is equal to the viewing delay

(Diagram: download window sliding forward; labels: chunk not yet received, missed chunk, received chunk, playback start, playback delay)
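
A rough sketch of the sliding-window bookkeeping, assuming chunks are indexed sequentially and `window_size` is the viewing delay expressed in chunks (all names are ours):

```python
class SlidingWindow:
    """Download window that moves forward with playback. Chunks that fall
    behind the playback position without being received count as missed."""
    def __init__(self, window_size, total_chunks):
        self.window_size = window_size
        self.total_chunks = total_chunks
        self.received = set()
        self.missed = set()
        self.playback_pos = 0   # index of the chunk currently being played

    def window(self):
        start = self.playback_pos
        return range(start, min(start + self.window_size, self.total_chunks))

    def mark_received(self, chunk):
        self.received.add(chunk)

    def advance_playback(self):
        if self.playback_pos not in self.received:
            self.missed.add(self.playback_pos)   # too late: chunk is skipped
        self.playback_pos += 1
```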
14
Two Options
  • Sequential policy
  • Peers first download the chunks at the beginning of the window
  • Limits the opportunity to exchange chunks among peers
  • Rarest-first policy
  • Peers first download the chunks within the window that are least replicated among their neighbors
  • Preserves the feasibility of swarming by diversifying the chunks available among peers (both options are sketched below)
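
The two options can be contrasted in a short sketch; `window` is the current download window and `replica_count` is a hypothetical map from chunk index to the number of neighbors holding that chunk:

```python
def pick_sequential(window, received, pending):
    """Sequential: request the earliest chunk in the window we still need."""
    for chunk in window:
        if chunk not in received and chunk not in pending:
            return chunk
    return None

def pick_rarest_first(window, received, pending, replica_count):
    """Rarest-first inside the window: request the needed chunk held by the
    fewest neighbors, keeping the chunks exchanged in the swarm diverse."""
    candidates = [c for c in window if c not in received and c not in pending]
    if not candidates:
        return None
    return min(candidates, key=lambda c: replica_count.get(c, 0))
```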

15
(Results figure: 'Best' and 'Worst' curves)
17
Discussion
  • Switching to a sliding window policy greatly improves quality of service
  • Must use a rarest-first policy inside the window
  • This change alone does not suffice to achieve a satisfactory quality of service

18
Neighbor Selection Policy
  • BT tit-for-tat policy
  • Peers select other peers according to their
    observed behaviors
  • A significant number of peers suffer from a slow start
  • Randomized tit-for-tat policy (sketched below)
  • At the beginning of playback, each peer selects its neighbors at random
  • Rapid diffusion of new chunks among peers
  • Gives free tries to a larger number of peers in the swarm to download chunks
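
One possible reading of the randomized tit-for-tat policy as code, reusing the hypothetical peer attributes from the rechoking sketch above; this is an illustration, not the authors' implementation:

```python
import random

def choose_unchoked(neighbors, slots, playback_just_started):
    """Randomized tit-for-tat: at the start of playback every slot is filled
    at random, so new chunks diffuse quickly and many peers get a free try;
    afterwards the usual rate-based tit-for-tat takes over."""
    interested = [p for p in neighbors if p.interested]
    if playback_just_started:
        return set(random.sample(interested, min(slots, len(interested))))
    best = sorted(interested, key=lambda p: p.download_rate_from, reverse=True)
    return set(best[:slots])
```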

21
Discussion
  • Should combine our neighbor selection policy
    with our sliding window chunk selection policy
  • Can then achieve excellent QoS with playback delays as short as 30 s, as long as the video consumption rate does not exceed 60% of the network link bandwidth (worked example below)
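
As a purely illustrative calculation of that 60% rule (the 600 kb/s bit rate is our own example):

```python
# Illustration only: the 60% figure is from the slides, the bit rate is ours.
consumption_rate_kbps = 600                       # video consumption rate
required_link_kbps = consumption_rate_kbps / 0.60
print(f"Link bandwidth needed: {required_link_kbps:.0f} kb/s")  # -> 1000 kb/s
```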

22
Comparison with Client-Server Solutions
23
Chunk size selection
  • Small chunks
  • Result in faster chunk downloads
  • Incur more processing overhead
  • Larger chunks
  • Cause slow starts for every sliding window
  • Our simulations indicate that 256 KB is a good compromise (see the illustration below)
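
A back-of-the-envelope illustration of the trade-off; apart from the 256 KB figure, the numbers are illustrative:

```python
def chunks_per_window(window_seconds, video_kbps, chunk_kb):
    """How many chunks one download window spans: smaller chunks mean more
    pieces to schedule (more overhead), larger chunks mean fewer pieces and
    slower starts each time the window slides."""
    window_kb = window_seconds * video_kbps / 8   # window size in kilobytes
    return max(1, round(window_kb / chunk_kb))

# e.g. a 30 s window of a 600 kb/s video with 256 KB chunks
print(chunks_per_window(30, 600, 256))   # about 9 chunks per window
```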

25
Premature Departures
  • Peer departures before the end of the session
  • Can be voluntary or caused by network failures
  • When a peer leaves the swarm, it tears down the connections to its neighbors
  • Each of these neighbors then loses one of its active connections (a handling sketch follows)
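
A sketch of how a departure event could be handled; `tracker.get_peers` and the target degree are hypothetical stand-ins for a request to the central tracker:

```python
def on_neighbor_departed(self_peer, departed, tracker, target_degree=40):
    """When a neighbor leaves (voluntarily or after a network failure),
    drop the dead connection and ask the tracker for replacements so the
    peer keeps enough active connections."""
    self_peer.neighbors.discard(departed)
    missing = target_degree - len(self_peer.neighbors)
    if missing > 0:
        for candidate in tracker.get_peers(count=missing):
            if candidate is not self_peer and candidate not in self_peer.neighbors:
                self_peer.connect(candidate)
```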

26
  • Can tolerate the loss of at least 60% of the peers

27
Future Work
  • Current work
  • On-demand streaming
  • Robustness
  • Detect malicious and selfish peers
  • Incorporate a trust management system into the
    protocol
  • Performance evaluation
  • Conduct a comparison study

28
Thank You! Questions?
  • Contact: purvi@cs.uh.edu
  • paris@cs.uh.edu

These addresses are obsolete. Please use jfparis@uh.edu
29
Extra slides
30
nVoD (near video-on-demand)
  • Dynamics of client participation, i.e., churn
  • Clients do not synchronize their viewing times
  • Serve many peers even if they arrive according to
    different patterns

31
Admission control policy (I)
  • Determine if a new peer request can be accepted
    without violating the QoS requirements of the
    existing customers
  • Based on a server-oriented staggered broadcast scheme
  • Combining P2P streaming and staggered broadcasting ensures high QoS
  • Beneficial for popular videos

32
Admission control policy (II)
  • Use the tracker to batch clients arriving close in time into a session
  • Closeness is determined by a threshold parameter
  • Service latency, though server-oriented, is independent of the number of clients
  • Can handle flash crowds
  • Dedicate a fixed number of channels to each video, so the worst-case service latency w is at most the video duration D divided by the number of channels (sketched below)
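
Reading the latency bound as code, with illustrative names (`channels` for the number of dedicated channels, `video_duration_s` for D):

```python
def worst_service_latency(video_duration_s, channels):
    """Staggered broadcasting: starts are spaced video_duration / channels
    apart, so no admitted client waits longer than that for its batch."""
    return video_duration_s / channels

# Illustration: a 2-hour video over 8 dedicated channels
print(worst_service_latency(2 * 3600, 8) / 60)   # -> 15.0 minutes
```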

33
Results
  • We use the M/D/? queuing model to estimate the
    effect on the playback delay experienced by the
    peers