Title: P2PMoD: Peer-to-Peer Movie-on-Demand
1. P2PMoD: Peer-to-Peer Movie-on-Demand
- GCH1
- Group members
- Cheung Chui Ying
- Lui Cheuk Pan
- Wong Long Sing
- Supervised by
- Professor Gary Chan
2. Presentation Flow
- Introduction
- System Design
- Results
- Conclusion
- Q&A and Demo
3. Technical Challenges
- Asynchronous play times
- Movie-on-Demand is not TV program broadcast
- Viewers start watching at different times
- Peer dynamics
- Network topology may change over time
- Viewers may go on and off
- Interactivity
- Support for pause and jump
4. Related Work
- Traditional server-to-client
- Server loading grows linearly → not scalable
- Multicasting
- Special network support needed
- Interactivity is not supported
- BitTorrent
- Unpredictable download order → playback cannot start before the download finishes
- Interactivity is not supported
5. What is P2PMoD?
- A peer-to-peer (P2P) based interactive movie streaming system that brings movies to your home
- Scalable
- Low server bandwidth requirement
- Decentralized control
- Support for user interactivity
- Resilience to node/link failure
- Short playback delay
6. Why is P2PMoD Important?
- Overcomes the limitations of the server-to-client movie streaming architecture
- Shapes the future of the movie-watching experience
- Commercial deployment helps fight illegal movie downloading via BitTorrent
7. System Architecture: PRIME
[Architecture diagram. Components: GUI, Control, off-the-shelf media player, RTSP server (RTSP/RTP), Internal Logic, Statistics, DHT, Buffering, Communication, Buffer]
8. RTSP Server
[Diagram: the off-the-shelf media player sends RTSP (RFC 2326) commands to the RTSP server; the Internal Logic supplies movie data, which is streamed back over RTP. An example exchange is sketched below]
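A minimal sketch of the RTSP exchange an off-the-shelf player drives against the server. The methods are standard RFC 2326; the URL, ports, and session ID are hypothetical. The PAUSE request and the second PLAY with a Range header are how pause and jump map onto the protocol:

  DESCRIBE rtsp://127.0.0.1:8554/movie RTSP/1.0
  CSeq: 1

  SETUP rtsp://127.0.0.1:8554/movie/track1 RTSP/1.0
  CSeq: 2
  Transport: RTP/AVP;unicast;client_port=5000-5001

  PLAY rtsp://127.0.0.1:8554/movie RTSP/1.0
  CSeq: 3
  Session: 12345678
  Range: npt=0-

  PAUSE rtsp://127.0.0.1:8554/movie RTSP/1.0
  CSeq: 4
  Session: 12345678

  PLAY rtsp://127.0.0.1:8554/movie RTSP/1.0
  CSeq: 5
  Session: 12345678
  Range: npt=600-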
9. RTP Packetizer
- Plays on any RTP-compatible media player
- Abstraction
- No change is needed in PRIME to support different movie formats
- RFC 2250
- Index file maps playback time to byte offset, e.g. 0 ms → byte 0, 1000 ms → byte 164452, 2000 ms → byte 299501 (see the sketch below)
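A minimal sketch of how such an index could support seeking. The entries mirror the example above; the in-memory structure and function name are assumptions:

  import bisect

  # Hypothetical index: (playback time in ms, byte offset into the movie file)
  INDEX = [(0, 0), (1000, 164452), (2000, 299501)]

  def seek_offset(ms):
      """Byte offset of the last indexed point at or before the seek time."""
      i = bisect.bisect_right([t for t, _ in INDEX], ms) - 1
      return INDEX[max(i, 0)][1]

  seek_offset(1500)  # -> 164452: resume packetizing here after a jump to 1.5 s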
10. Director Backend
- Responsible for the actual movie data retrieval process
- Provides a programming interface for stream management and interactivity control
- Implementation goals
- Scalable and fast: collaboration between peers
- Efficient: minimize control communication overhead
11. Director Backend: Implementation
- Use the concept of a virtual time slot to find potential parents
- Use a DHT to achieve decentralized control communication
12. Moving Virtual Time Slot
[Timeline diagram: slot boundaries every 3 minutes of time since publishing (00:00:00, 00:03:00, 00:06:00, ..., 00:24:00) across a movie running from 00:00:00 to 00:42:39]
- The slot boundaries keep advancing along with real time.
- Peers stay in the same slot once they start playing, unless the user seeks to another position.
- Peers in the same or an earlier virtual time slot can help us in streaming.
- How do we identify these potential parents? The DHT comes into play (see the sketch below).
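A minimal sketch of the slot bookkeeping, assuming the 3-minute slot length from the diagram (the function names are hypothetical):

  import time

  SLOT_LEN = 180  # assumed slot length: 3 minutes, as in the diagram

  def slot_at_join(publish_time):
      """Slot a peer lands in, from the time elapsed since publishing;
      the peer keeps this slot until the user seeks."""
      return int((time.time() - publish_time) // SLOT_LEN)

  def candidate_parent_slots(my_slot):
      """Peers in the same or an earlier slot started earlier, so they are
      further along in the movie and hold the data we will need next."""
      return range(my_slot, -1, -1)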
13. DHT Key
- We construct <movie hash, virtual time slot, random number> as the DHT key (construction sketched below)
- Example keys: <titanic, 1, 91>, <titanic, 2, 34>, <titanic, 2, 72>, <titanic, 3, 23>, <mi3, 1, 2>, <mi3, 5, 65>, <mi3, 6, 99>, <matrix, 2, 2>, <matrix, 3, 12>, <matrix, 3, 82>, <matrix, 4, 71>
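A minimal sketch of the key construction (the hash function and random range are assumptions):

  import hashlib, random

  def dht_key(movie_title, slot):
      """<movie hash, virtual time slot, random number>. The random component
      spreads registrations of the same movie/slot across the ring."""
      movie_hash = hashlib.sha1(movie_title.encode()).hexdigest()[:8]
      return (movie_hash, slot, random.randint(0, 99))

  dht_key("titanic", 2)  # e.g. ('a0f2b6c1', 2, 34)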
14. How to Retrieve the Data?
- Implemented two versions of the director
- Both use FreePastry as the DHT
- Initial version
- Movie data are carried over Scribe
- Scribe: an application-level multicast infrastructure built on top of FreePastry
- Revised version
- Out-of-band transfer
- Employs a multiple-parents scheme to transfer movie data
15. Director: Initial Version
- The publisher feeds movie data into per-slot Scribe topics.
- Clients subscribe to the slots they are interested in, i.e. the slots covered by the pre-buffer range.
- For each slot topic, one node becomes the root, determined by its ID (e.g. the Slot 6 (00:15:00) and Slot 7 (00:18:00) topic roots, each with its own member set).
- By the nature of the DHT, it usually takes several hops for node A to contact node B, which also means messages sometimes pass through off-topic nodes.
- Also by the nature of the DHT, slot root nodes are uniformly distributed around the ring.
16. Director: Revised Version
- Direct data connections, in contrast to the multi-hop transfer overlay in Scribe
- Less likely to suffer problems induced by link failure
- Faster, due to reduced IP and processing overhead
- If a parent jumps, the child can still stream smoothly from its other parents
- Peers can schedule frame requests intelligently to achieve load balancing
17. Finding Parents
- Recall that each node carries an IP list of its N immediate neighbors.
- By routing a message to the <movie, slot> key, a node can obtain a list of potential parents (see the sketch below).
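A minimal sketch of the lookup, assuming a hypothetical dht.lookup() call that returns the peers registered under a key:

  def find_parents(dht, movie_hash, my_slot, want=5):
      """Collect candidate parents from the same slot, then earlier slots."""
      parents = []
      for slot in range(my_slot, -1, -1):           # same slot first, then earlier
          parents += dht.lookup((movie_hash, slot))  # assumed DHT API
          if len(parents) >= want:
              break
      return parents[:want]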
18. Director: Scheduling
- Uses a buffer map that shows the frame availability of each node
- Continuity: fetch the frames with the closest playback deadline first, so the stream stays smooth
- Load sharing: fetch the frames held by the fewest nodes first, to obtain rare pieces for redistribution and to relieve the peers holding them
- Efficiency: stream from multiple parents at the same time (see the sketch below)
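A minimal sketch combining the three policies; the data structures are assumptions, not PRIME's actual ones:

  def schedule(missing, buffer_maps, deadline):
      """Order frames by playback deadline, break ties by rarity, then
      assign each frame to the least-loaded parent that holds it."""
      rarity = lambda f: sum(f in bm for bm in buffer_maps.values())
      load = {p: 0 for p in buffer_maps}
      plan = {}
      for f in sorted(missing, key=lambda f: (deadline[f], rarity(f))):
          holders = [p for p, bm in buffer_maps.items() if f in bm]
          if holders:
              parent = min(holders, key=load.get)  # balance across parents
              plan[f] = parent
              load[parent] += 1
      return plan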
19. Graphical User Interface
20. Results
- Deployed P2PMoD on 71 nodes in PlanetLab
- Configuration: 1 server and 70 peers
- 40 KBps stream for 10 minutes
- Measurement metrics
- User experience
- Efficiency
21. Results: User Experience
- Measures continuity
- Playback delay: time required to start the stream
- Stall occurrences: number of times the stream pauses to buffer more data
- Stall ratio: ratio of paused time to streaming time (see the sketch below)
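A minimal sketch of the three metrics, computed from a hypothetical playback event log (the log format is an assumption):

  def user_experience(events, total_time):
      """events: list of (kind, duration) with kind in {'startup', 'stall'}."""
      delay  = sum(d for k, d in events if k == 'startup')  # playback delay
      stalls = [d for k, d in events if k == 'stall']
      return {
          'playback_delay': delay,
          'stall_occurrences': len(stalls),
          'stall_ratio': sum(stalls) / total_time,          # paused / streaming
      }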
22-24. Results: User Experience (charts)
25. Results: User Experience
- Playback delay: over 90% of peers have < 6 seconds of delay
- Stall occurrences: over 90% have < 2 occurrences
- Stall ratio: over 90% stall for < 3% of total time
26. Results: Efficiency
- Peer
- Overhead caused by control messages
- Server
- Bandwidth required
27. Results: Efficiency (chart)
28. Results: Efficiency
- Peer
- Ratio of stream data to all input data: 90%
- Server
- Data output rate: 275 KBps
- Output bandwidth equivalent to about 7 streams (275 KBps / 40 KBps ≈ 7)
- Uses about 10% of the bandwidth of the traditional server-to-client model, which would need 70 × 40 KBps = 2800 KBps
29. Practical Issues
- Network traversal
- Routers and NATs are common
- Until IPv6 lands
- Universal Plug and Play, hole punching
- RTSP and RTP compatibility
- Glitches are common and expected
30. Network Positioning
- GNP and Vivaldi could potentially be used
- Map network latency to coordinates in R^n
- Even with n → ∞, never perfect due to triangle inequality violations
- GNP: landmark selection and reselection
- Vivaldi: no fixed reference; coordinates are updated continuously (spinning), as sketched below
- Ping time does not reflect transfer rate
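A minimal sketch of one Vivaldi update step. It follows the published algorithm, not PRIME code; the constants and names are assumptions:

  def vivaldi_update(my_pos, my_err, peer_pos, peer_err, rtt, cc=0.25, ce=0.25):
      """Nudge our coordinate so predicted distance approaches measured RTT."""
      dist = sum((a - b) ** 2 for a, b in zip(my_pos, peer_pos)) ** 0.5 or 1e-9
      err = rtt - dist                      # prediction error
      w = my_err / (my_err + peer_err)      # move more when we are less certain
      my_err = abs(err) / rtt * ce * w + my_err * (1 - ce * w)
      unit = [(a - b) / dist for a, b in zip(my_pos, peer_pos)]
      my_pos = [a + cc * w * err * u for a, u in zip(my_pos, unit)]
      return my_pos, my_err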
31. Future Work
- Fixed data cache instead of a moving slot
- Parents' interactivity would not affect availability
- Searching for / refreshing next-slot parents can be slow
- Frame popularity
- More movie formats and handheld devices to be supported
- Error-correction codes
32. Conclusion
- Peer-to-peer is the way to go: it makes use of users' increasing bandwidth while reducing server resources
- PRIME: a working P2P MoD implementation
- Workload reduced by adopting open standards and using an off-the-shelf player
33. Thank You
34. Pastry Ring
[Diagram: a Pastry ring of nodes with hex IDs 0x0002, 0x22AF, 0x3529, 0x591A, 0x62C8, 0x7F52, 0x8392, 0x9A92, 0xA125, 0xCB95, 0xDF41]
35. Pastry Routing Knowledge
- Leaf set: the N immediately neighboring nodes on the ring
- Routing table (prefix routing is sketched below)
[Diagram: the same ring, highlighting one node's leaf set and routing table]
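A minimal sketch of Pastry-style prefix routing. The routing_table layout is an assumption; real FreePastry differs in detail. IDs are fixed-width uppercase hex strings, and local delivery (key == my_id) is assumed handled elsewhere:

  def shared_prefix_len(a, b):
      """Common hex-digit prefix length of two IDs, e.g. '22AF' vs '22C8' -> 2."""
      n = 0
      for x, y in zip(a, b):
          if x != y:
              break
          n += 1
      return n

  def next_hop(my_id, key, leaf_set, routing_table):
      """Deliver within the leaf set if the key falls in its range; otherwise
      forward to a node sharing one more prefix digit with the key."""
      if min(leaf_set) <= key <= max(leaf_set):
          return min(leaf_set, key=lambda n: abs(int(n, 16) - int(key, 16)))
      p = shared_prefix_len(my_id, key)
      return routing_table.get((p, key[p]))  # assumed: (prefix_len, digit) -> node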
36. Pastry Object Storage
- An object is duplicated to the N immediately neighboring nodes of its key (see the sketch below)
[Diagram: key 0x3530 stored at its numerically closest node 0x3529 and that node's neighbors on the same ring]
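A minimal sketch of the replication rule; N, the ring representation, and the wrap-around-free distance are simplifying assumptions:

  def store(ring, key, obj, n=4):
      """ring: node ID (hex str) -> that node's local store (dict).
      Replicate the object to the n nodes whose IDs are closest to the key."""
      closest = sorted(ring, key=lambda nid: abs(int(nid, 16) - int(key, 16)))
      for nid in closest[:n]:
          ring[nid][key] = obj  # survives up to n-1 node failures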
37. PRIME?
- PRIME stands for Peer-to-peer Interactive Media-on-demand