CoopNet: Cooperative Networking - PowerPoint PPT Presentation
1
CoopNet: Cooperative Networking
  • Venkat Padmanabhan
  • Microsoft Research
  • September 2002

2
Acknowledgements
  • Microsoft Research
  • Phil Chou
  • Helen Wang
  • Interns
  • Kay Sripanidkulchai (CMU)
  • Karthik Lakshminarayanan (Berkeley)

3
Outline
  • CoopNet
  • motivation and overview
  • web content distribution
  • streaming media content distribution
  • multiple description coding
  • multiple distribution trees
  • related work
  • summary and ongoing work
  • Other networking projects at MSR

4
Motivation
  • A flash crowd can easily overwhelm a server
  • often due to news event of widespread interest
  • but not always (e.g., Webcast of birthday
    party)
  • can affect relatively obscure sites (e.g.,
    election.dos.state.fl.us, firestone.com,
    nbaa.org)
  • site becomes unreachable precisely when popular!
  • affects Web content as well as streaming content
  • infrastructure-based CDNs aren't for everyone
  • too expensive even for big sites (e.g., CNN)
  • uninteresting for CDN to support small sites
  • Goal: solve the flash crowd problem without
    requiring new infrastructure!

5
Cooperative Networking
(figure: CoopNet sits between client-server and pure peer-to-peer)
  • CoopNet complements client-server system
  • client-server operation in normal times
  • P2P content distribution invoked on demand to
    alleviate server overload
  • clients participate only while interested in the
    content
  • server still plays a critical role

6
CoopNet Tradeoffs
  • Avoids dependence on expensive CDN infrastructure
  • but no performance guarantees
  • P2P network size scales with load
  • Availability of resourceful server simplifies
    many P2P tasks
  • but is the server a potential bottleneck?

7
Flash Crowd Characteristics
  • Any overload at a server is a flash crowd for
    our purposes
  • often the load is > 10X the normal level
  • but could still be small in absolute terms
    (e.g., Webcast of a birthday party from home)
  • Sometimes anticipated but often not
  • More widespread than you might think
  • aopa.org, nbaa.org, firestone.com, usgs.gov,
    election.dos.state.fl.us (Bill LeFebvre, CNN)
  • Could be due to content of various types
  • Web content
  • streaming content: live and on-demand

8
Flash Crowd Characteristics
  • Lasted many hours
  • Traffic > 10x normal

An MSNBC server on Sep 11, 2001
9
Where is the bottleneck?
  • Disk?
  • no, usually requests are for popular content
  • MSNBC: 90% of requests were for 141 files
  • CPU?
  • perhaps for dynamic content
  • a single server node can pump out > 1 Gbps
  • Bandwidth?
  • yes, most likely close to the server
  • 65% of servers have bottleneck bandwidths of less
    than 1.5 Mbps (Stefan Saroiu, U.W.)

10
CoopNet for Web Content
  • When overloaded, the server redirects new clients
    to clients that have previously downloaded the
    content
  • Huge bandwidth savings (100X)
  • 200 B redirect instead of 20 KB page
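The redirection step above can be sketched in a few lines of Python. This is an illustrative sketch only: the class name, load threshold, and return values are assumptions, not the actual CoopNet implementation.

```python
import random

class RedirectingServer:
    """Illustrative sketch of CoopNet-style redirection (hypothetical API).

    Below the load threshold the server serves content itself; when
    overloaded it answers with a tiny redirect naming peers that have
    already downloaded the same URL."""

    def __init__(self, overload_threshold):
        self.overload_threshold = overload_threshold
        self.current_load = 0
        self.recent_peers = {}  # url -> peers known to hold the content

    def handle_request(self, url, client_addr):
        peers = self.recent_peers.setdefault(url, [])
        if self.current_load > self.overload_threshold and peers:
            # ~200 B redirect instead of a ~20 KB page: ~100x savings
            chosen = random.sample(peers, min(len(peers), 3))
            peers.append(client_addr)  # the client will soon hold it too
            return ("302 Redirect", chosen)
        peers.append(client_addr)      # normal client-server operation
        self.current_load += 1
        return ("200 OK", "<page content>")
```

Because each redirected client is itself recorded as a future peer, the set of servers grows with the flash crowd, which is what makes the P2P network size scale with load.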

11-19
Operation of CoopNet
(animation across slides 11-19: clients A-E arrive at the server; the
first clients A, B, C are served directly and recorded, the server's
redirection list growing from A to A, B to A, B, C; later clients are
redirected to those peers)
20
Issues
  • Peer selection
  • network proximity: BGP address prefix,
    delay-based coordinates
  • matching peer bandwidth
  • Server bottleneck
  • large # of CoopNet peers → large volume of
    redirects
  • small # of CoopNet peers → server remains
    overloaded
  • cooperation is still beneficial, but the initial
    redirect can take long
  • solution: form peer groups
  • often 30 peers in a group suffices
  • greatly simplifies distributed search
  • fall back to server-based redirect upon miss
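The prefix-based proximity heuristic can be approximated as follows. This is a sketch under simplifying assumptions: it compares raw IPv4 addresses bit by bit rather than consulting real BGP prefix tables, and the helper names are hypothetical.

```python
import ipaddress

def common_prefix_len(a, b):
    """Number of leading bits two IPv4 addresses share."""
    diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
    return 32 - diff.bit_length()

def closest_peers(client, peers, k=3):
    """Rank candidate peers by shared address-prefix length, a crude
    stand-in for matching on BGP address prefixes."""
    return sorted(peers, key=lambda p: common_prefix_len(client, p),
                  reverse=True)[:k]
```

A server could apply this ranking before issuing a redirect, so that clients are pointed at nearby peers rather than arbitrary ones.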

21
Finding content (optimistic)
  • High locality during flash crowd
  • Most content can be found amongst peer groups
    of size 5-30

22
Finding content (pessimistic)
Clients go back to the server only 15% of the
time.
23
Alternative approaches
  • Proxy caching
  • deployment barriers
  • not effective when clients are scattered across
    the Internet
  • Commercial CDNs
  • not cost-effective for small sites
  • P2P system of servers
  • feasible in practice?

24
CoopNet for Live Streaming
  • Server can be overwhelmed more easily
  • Key issue robustness
  • peers are not dedicated servers → potential
    disruption due to
  • node departures and failures
  • higher priority traffic
  • traditional application-level multicast (ALM) is
    not sufficient

25
Traditional Application-level Multicast
26
CoopNet Approach to Robustness
  • Add redundancy in data
  • multiple description coding (MDC)
  • and in network paths
  • multiple, diverse distribution trees

27
Multiple Description Coding
Layered coding
MDC
  • Unlike layered coding, there isn't an ordering of
    the descriptions
  • Every subset of descriptions must be decodable
  • Modest penalty relative to layered coding

28
Multiple Description Coding
  • Simple MDC
  • every Mth frame forms a description
  • More sophisticated MDC combines
  • layered coding
  • Reed-Solomon coding
  • priority encoded transmission
  • optimized bit allocation
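The "every Mth frame" scheme can be sketched directly. The helper names are hypothetical, and real MDC adds the Reed-Solomon and priority-encoding machinery the slide lists; this only shows the frame striping.

```python
def encode_descriptions(frames, M):
    """Simple MDC as on the slide: every Mth frame forms a description."""
    return [frames[i::M] for i in range(M)]

def decode_descriptions(received, M, num_frames):
    """Reassemble from any subset of descriptions; frames from missing
    descriptions stay None (a player would conceal these gaps, e.g. by
    repeating the previous frame)."""
    out = [None] * num_frames
    for i, frames in received.items():  # i = description index
        out[i::M] = frames
    return out
```

Any subset of descriptions decodes to a playable (if degraded) stream, which is the MDC property that layered coding lacks.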

29
Multiple Distribution Trees
30
MDC Analysis
  • Key parameters
  • number of nodes (N)
  • number of descriptions (M)
  • out-degree of each node
  • repair time
  • node departure rate
  • Two scenarios of interest
  • large N, high churn → multiple node failures in
    repair interval
  • small N, stable → occasional, single node
    failures
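Under a toy independence assumption (each of the M trees disrupted independently, which is not the paper's full analysis), the number of descriptions a client receives follows a binomial distribution:

```python
from math import comb

def desc_distribution(M, p_fail):
    """P(a client receives exactly m of M descriptions), assuming each
    distribution tree is independently disrupted with probability
    p_fail -- a simplified model for intuition only."""
    return [comb(M, m) * (1 - p_fail) ** m * p_fail ** (M - m)
            for m in range(M + 1)]
```

This makes the MDC benefit visible: even with substantial per-tree disruption, the probability of losing all M descriptions at once falls off geometrically in M.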

31
Quality During Multiple Failures
32
Quality During Single Failure
33
Tree Management
  • Goals
  • short and wide trees
  • efficiency
  • diversity
  • quick join and leave processing
  • scalability
  • CoopNet approach: centralized protocol anchored
    at the server
  • single point of failure
  • but source of data anyway

34
Centralized Tree Management
  • Basic protocol
  • nodes inform server of their arrival/departure
  • server tracks node capacity and tells new nodes
    where to join
  • each node monitors its packet loss rate and takes
    action when the loss rate becomes too high
  • simple, should scale to 1000 joins/leaves per
    sec.
  • Optimizations
  • delay-based coordinates to estimate node
    proximity (à la GeoPing)
  • achieving efficiency and diversity
  • migrate stable nodes to a higher level in the
    tree
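The basic join handling can be sketched as follows. The data structures are hypothetical; the real protocol also handles departures, loss monitoring, proximity, and keeping a node interior in few trees for diversity.

```python
class TreeManager:
    """Sketch of the centralized join protocol anchored at the server."""

    def __init__(self, num_trees, root_degree):
        # one tree per description; the root has spare slots in each
        self.spare = [{"server": root_degree} for _ in range(num_trees)]

    def join(self, node, out_degree):
        """Attach a new node in every tree at the first member with a
        free out-degree slot; returns the chosen parent per tree."""
        parents = []
        for tree in self.spare:
            parent = next(p for p, slots in tree.items() if slots > 0)
            tree[parent] -= 1            # consume one of the parent's slots
            tree[node] = out_degree      # new node can now serve children
            parents.append(parent)
        return parents
```

Because the server holds the whole membership map, a join is a single round trip, consistent with the slide's claim of handling on the order of 1000 joins/leaves per second.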

35
Achieving Efficiency and Diversity
(figure: server S and a supernode, with nodes located in NY, SEA, and SF)
36
Performance Evaluation
  • MSNBC access logs from Sep 11, 2001
  • Live streaming
  • 18,000 simultaneous clients
  • 180 joins/leaves per second on average; peak
    rate of 1000 per second
  • 70% of clients tuned in for less than a minute
  • On-demand streaming
  • 300,000 requests in a 2-hour period

37
Live Streaming
  • Key questions
  • how beneficial is MDC?
  • how well is diversity preserved as trees evolve?
  • how does repair time impact performance?

38
MDC versus SDC
Based on MSNBC traces from Sep 11
39
Random Trees vs. Evolved Trees
40
Impact of Repair Time
41
CoopNet for On-demand Streaming
  • Distributed streaming of multiple descriptions
  • Improves robustness and load distribution

42
On-demand Streaming
  • Key results
  • server bandwidth requirement drops from 20 Mbps
    to 300 Kbps
  • peer bandwidth requirement
  • average over all peers is 45 Kbps
  • average over active peers is 465 Kbps
  • storage requirement at a peer is less than 100
    MB
  • probability of finding a peer in the same BGP
    prefix cluster is under 20%

43
CoopNet Transport Architecture
(block diagram)
Server: Embedded Stream → Parse (GOF) → Optimize (M, p(m); RD curve →
break points) → Packetize (ZSF) → RS Encoder → M descriptions → Internet
Client: m ≤ M descriptions → RS Decoder → Depacketize → Reformat →
Embedded Stream (truncated) → Decode → Render → GOF (quality depends on
descriptions received)
44
Related Work
  • Infrastructure-based CDNs
  • Akamai, Digital Island
  • P2P CDNs
  • Pseudo-serving, PROOFS, Backslash
  • SpreadIt, Allcast, vTrails
  • Application-level multicast
  • ALMI, Narada, Scattercast
  • Bayeux, Scribe
  • Multi-path content delivery
  • Byers et al. 1999, Nguyen & Zakhor 2002,
    Apostolopoulos et al. 2002

45
Summary
  • Client-server applications can benefit from
    selective use of peer-to-peer communications
  • Availability of server simplifies system design
  • Web content
  • high degree of locality
  • server-based redirection plus small peer group
  • Streaming content
  • robustness to dynamic membership is the key
    challenge
  • MDC with multiple, diverse distribution trees
    improves robustness in peer-to-peer media
    streaming
  • centralized tree management is efficient and can
    scale

46
Ongoing Work
  • Prototype implementation
  • Dealing with client heterogeneity for live
    streaming
  • combine MDC with layering
  • More info
  • research.microsoft.com/padmanab/projects/CoopNet
  • Papers at IPTPS '02 and NOSSDAV '02

47
Networking Research at MSR
  • Internet measurement and performance
  • Passive Network Tomography
  • IP2Geo: Internet Geography
  • PeerMetric: broadband network performance
  • Peer-to-Peer networking
  • Herald: scalable event notification system
  • CoopNet: P2P content distribution
  • Wireless networking
  • UCoM: energy-efficient networking
  • Mesh Networks: multi-hop wireless access network

48
PeerMetric
  • Goal: characterize broadband network performance
  • DSL, cable modem, satellite, etc.
  • P2P as well as client-server performance
  • Deployment on 25 distributed nodes underway
  • none in Atlanta → volunteers welcome!
  • Joint work with Karthik Lakshminarayanan (MSR
    intern from Berkeley)

49
Issues
  • Firewall and NAT traversal
  • Digital Rights Management issues
  • ISP pricing policies
  • Enterprise scenarios

50
Research Activities
  • Web flash crowd alleviation (with Kay
    Sripanidkulchai)
  • evaluation using Sep 11 traces from MSNBC
  • prototype implementation done
  • paper @ IPTPS '02
  • MDC-based streaming media distribution
  • evaluation using Sep 11 traces from MSNBC,
    Akamai, Digital Island
  • implementation in progress
  • paper @ NOSSDAV '02
  • patent process in progress
  • initial discussions with Digital Media Division

51
Research Activities (contd.)
  • PeerMetric (with Karthik Lakshminarayanan)
  • characterize broadband network performance
  • P2P as well as client-server performance
  • working with Xbox Online (Mark VanAntwerp)
  • deployment on 25 distributed nodes underway
  • eventual deployment on 300 Xbox Live beta users
  • Future directions
  • CoopNet in a Wireless Mesh Network
  • good synergy: saves Internet bandwidth, improves
    robustness