1
An Evaluation of Scalable Application-level
Multicast Using Peer-to-peer Overlays
  • Miguel Castro, Michael B. Jones, Anne-Marie
    Kermarrec, Antony Rowstron, Marvin Theimer,
    Helen Wang and Alec Wolman
  • Presented by Ricky Taing

Authors
2
Outline
  • Motivations and Goals of Paper
  • Overview of Overlay Networks
  • Overview of p2p Multicast Implementations
  • Experimental Methodology
  • Results
  • Conclusions

3
Motivations and Goals
  • Lack of IP multicast deployment has spurred
    development of application-level multicast
  • Many systems are available, built on different
    overlays with different implementations
  • Goal: determine which overlay and multicast
    implementation performs best

4
P2P Overlay Networks
  • Two main approaches
  • Divide and conquer in a ring
  • Pastry, Chord, Tapestry
  • Cartesian hyper-space
  • CAN

5
Pastry
  • Routes by nodeId
  • Circular 128-bit namespace
  • Reaches the destination in about log_(2^b) N hops
    (routing sketch below)
  • b is configurable, usually b = 4 (hexadecimal digits)
  • Each node maintains a leaf set
  • pointers to the l nodes with numerically closest nodeIds
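The hop bound above follows from prefix routing: every hop resolves at least one more digit of the key. Below is a minimal, illustrative Python sketch of that next-hop choice, not the authors' code; the (row, digit)-keyed table and helper names are assumptions made for this example.

# Minimal sketch of Pastry-style prefix routing (illustrative only).
# NodeIds are 128-bit integers read as base-2^b digits; with b = 4
# each digit is one hexadecimal nibble.

B = 4                     # bits per digit (b = 4 -> hex digits)
DIGITS = 128 // B         # digits in a 128-bit nodeId

def digit(node_id, i):
    """Return the i-th most significant base-2^b digit of node_id."""
    shift = (DIGITS - 1 - i) * B
    return (node_id >> shift) & ((1 << B) - 1)

def shared_prefix_len(a, b):
    """Number of leading digits shared by nodeIds a and b."""
    n = 0
    while n < DIGITS and digit(a, n) == digit(b, n):
        n += 1
    return n

def next_hop(my_id, key, routing_table):
    """Pick a routing-table entry that shares a longer prefix with key.

    routing_table: dict mapping (row, next_digit) -> nodeId, a simplified
    view of Pastry's table, where row = length of the prefix shared with
    my_id.  Returns None if no suitable entry exists (real Pastry then
    falls back to the leaf set).
    """
    row = shared_prefix_len(my_id, key)
    if row == DIGITS:
        return None       # this node already owns the key
    return routing_table.get((row, digit(key, row)))

Because each hop extends the shared prefix by at least one digit, a lookup needs about log_(2^b) N hops, which is the bound quoted on this slide.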

6
CAN
  • Routes greedily across multiple dimensions
    (see the sketch below)
  • Each node is assigned a particular zone
  • Many optimizations
  • Neighbor with lowest network delay
  • Multiple nodes per zone
  • Uniform partitioning
  • Landmark-based placement
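As a rough illustration of how CAN forwards a message toward a point in the coordinate space, here is a hedged Python sketch of greedy next-hop selection; the neighbor bookkeeping and function names are assumptions, not CAN's actual implementation.

# Illustrative sketch of CAN-style greedy forwarding in a d-dimensional
# coordinate space (assumption: each node knows its own point and the
# zone-center points of its neighbors).

import math

def distance(p, q):
    """Euclidean distance between two points in the coordinate space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def greedy_next_hop(my_point, key_point, neighbors):
    """Forward toward the neighbor whose zone center is closest to the key.

    neighbors: dict of neighbor nodeId -> zone-center coordinates.
    Returns None when no neighbor is closer than this node, i.e. the key
    falls within this node's own zone and routing stops here.
    """
    best_id, best_dist = None, distance(my_point, key_point)
    for node_id, point in neighbors.items():
        d = distance(point, key_point)
        if d < best_dist:
            best_id, best_dist = node_id, d
    return best_id

The "neighbor with lowest network delay" optimization listed above would, among the neighbors that make progress toward the key, prefer the one with the lowest measured delay rather than the geometrically closest one.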

7
Landmark based placement
  • Set of well known landmarks
  • ordered by distance
  • placed into evenly sized bins
  • Nodes with same landmark ordering end up close to
    each other
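A small Python sketch of the binning idea, under the assumption that each joining node measures an RTT to every landmark; the helper names and the m!-bin layout shown here are illustrative, not the paper's exact construction.

# Sketch of landmark-based placement (illustrative only).

from itertools import permutations

def landmark_ordering(rtts):
    """Order landmark indices by measured round-trip time.

    rtts: rtts[i] is this node's RTT to landmark i.  Nodes that measure
    the same ordering are likely close in the physical network.
    """
    return tuple(sorted(range(len(rtts)), key=lambda i: rtts[i]))

def bin_for(rtts):
    """Map a node to one of m! evenly sized bins, one per ordering.

    The bin index decides which region of the CAN coordinate space the
    joining node is placed into, so nodes with the same ordering land
    near each other in the overlay.
    """
    m = len(rtts)
    orderings = sorted(permutations(range(m)))   # all m! possible orderings
    return orderings.index(landmark_ordering(rtts))

For example, with 3 landmarks there are 3! = 6 bins; two nodes that both measure landmark 2 as closest, then 0, then 1, fall into the same bin and are therefore placed in the same region of the CAN space.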

8
P2P Multicast Implementations
  • Flooding
  • A separate overlay per group
  • Only nodes in the group receive group messages
  • CAN
  • broadcasts to neighbors, uses sequence numbers to
    suppress duplicates (sketch below)
  • Pastry
  • Forwards copies to all nodes in the routing table
  • each copy notes its routing-table level; receivers
    forward only to greater levels
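The sequence-number mechanism mentioned for CAN can be sketched in a few lines of Python; the send/deliver callbacks and message layout below are assumptions for illustration only.

# Sketch of duplicate suppression when flooding the per-group CAN
# overlay: every message carries (source, sequence number) and a node
# re-floods only messages it has not seen before.

def make_flooder(my_id, neighbors, send, deliver):
    """Return an on_message handler that floods with duplicate suppression.

    neighbors: iterable of neighbor nodeIds in the per-group overlay.
    send(neighbor_id, msg) / deliver(msg): assumed to exist elsewhere.
    """
    seen = set()                       # (source, seq) pairs already handled

    def on_message(msg, from_id):
        key = (msg["source"], msg["seq"])
        if key in seen:
            return                     # duplicate: drop, do not re-flood
        seen.add(key)
        deliver(msg)                   # hand the payload to the application
        for n in neighbors:
            if n != from_id:           # do not echo straight back
                send(n, msg)

    return on_message

The Pastry flooding scheme on this slide avoids per-message state by instead tagging each copy with a routing-table level and forwarding only to greater levels; the duplicates discussed in the results arise when routing tables have holes.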

9
Tree-based
  • Scribe
  • Reverse path forwarding builds one tree per group
  • Joins are routed toward the groupId (the root)
  • Each node along the route registers the sender as a
    child; if it is already in the tree, the join stops
    there (sketch below)
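A minimal Python sketch of how a Scribe join might be handled at an intermediate node; the route_next_hop and forward_join helpers are hypothetical stand-ins for the overlay's routing and messaging.

# Minimal sketch of Scribe-style tree construction by reverse path
# forwarding (illustrative only).

children = {}          # groupId -> set of child nodeIds on this node

def on_join(group_id, child_id, route_next_hop, forward_join):
    """Handle a JOIN message being routed toward the group's root.

    route_next_hop(key): next overlay hop toward key, or None when this
    node is the root (numerically closest nodeId to the groupId).
    forward_join(next_hop_id, group_id): sends the JOIN one hop further.
    """
    already_in_tree = group_id in children
    children.setdefault(group_id, set()).add(child_id)   # register child

    if already_in_tree:
        return            # the path from here to the root already exists
    next_hop_id = route_next_hop(group_id)
    if next_hop_id is not None:
        forward_join(next_hop_id, group_id)   # keep routing toward root

Because the join stops at the first node already in the tree, the resulting tree is formed from the reverse of the paths that joins take toward the groupId, which is what reverse path forwarding means here.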

10
Evaluation
  • Simulation
  • Five different topologies
  • 5,050 routers, 80,000 end nodes
  • Two sets of experiments
  • Single multicast group, all nodes are members
  • Large number of groups (1500)

11
Criteria
  • Relative Delay Penalty
  • RMD: ratio of the maximum delay using application-level
    multicast to the maximum delay using IP multicast
  • RAD: ratio of the average delay using application-level
    multicast to the average delay using IP multicast
  • Link Stress
  • number of identical packets sent over a physical link
    (both metrics are sketched below)
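To make the two ratios concrete, here is a small Python sketch of how RMD, RAD, and link stress could be computed from measured delays and per-link packet counts; the input shapes are assumptions, not the paper's simulator.

# Sketch of the evaluation metrics, assuming we have the delay from the
# source to every group member over the overlay and over IP multicast,
# and a per-link packet count for one message.

def delay_penalties(overlay_delays, ip_delays):
    """Return (RMD, RAD) for one multicast group.

    RMD: ratio of maximum overlay delay to maximum IP-multicast delay.
    RAD: ratio of average overlay delay to average IP-multicast delay.
    """
    rmd = max(overlay_delays) / max(ip_delays)
    rad = (sum(overlay_delays) / len(overlay_delays)) / (
        sum(ip_delays) / len(ip_delays))
    return rmd, rad

def link_stress(packets_per_link):
    """Average and maximum number of identical packets per physical link."""
    counts = list(packets_per_link.values())
    return sum(counts) / len(counts), max(counts)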

12
Criteria (2)
  • Node Stress
  • Number of entries a node keeps in its routing table
  • Number of messages a node receives during a join
  • Duplicates
  • Number of duplicate messages received by end nodes

13
CAN Flooding Results
  • Landmark-based placement and NDR (lowest network
    delay routing) performed best
  • Benefits from increased routing-table state are uneven
  • Link stress for 80,000 joins is very high
  • and increases with state size
  • Link stress for sent messages is significantly lower

14
Link stress: CAN flooding (figure)
15
CAN Tree Results
  • Landmark-based assignment of nodes is better
  • Delay is better than flooding by a factor of 2 to 3
  • Link stress for joins and sends is similar

16
Pastry TART and TOP
  • varied b from 1 to 4
  • TART
  • Topology-aware routing table construction
  • the default optimization
  • nodes probe each other to estimate delay
  • TOP
  • Topology-aware nodeId assignment
  • without TOP, nodeIds are assigned uniformly at random

17
Pastry Flooding Results
  • delay decreases as b increases
  • delay with b = 4 is 50% lower than with b = 1
  • TART and TOP both decrease delay
  • TOP reduces average link stress by a factor of 3
    and max link stress by a factor of 30
  • Large number of duplicates when b is large (16% with
    b = 4)
  • due to holes in the routing tables

18
Pastry Tree Results
  • delay decreases as b increases, and with TART and TOP
  • similar to the flooding results
  • TART reduces both max and average link stress
  • TOP reduces average but increases max link stress
  • Pastry with TART and without TOP is best for
    tree-based multicast

19
Multiple multicast groups
  • Tree-based Pastry had the lowest delay
  • Interestingly, the CAN flooding average-delay CDF
    curve is not tightly bounded
  • Flooding does better in the 1500-group experiment
    than with a single group

20
Max Delay Penalties
21
Average Delay Penalties
22
Evaluation
  • For delay, Pastry is 20-50% better than CAN
  • Average link stress for Pastry was 15% lower
  • Max link stress for CAN was 25% lower

23
Conclusion
  • Separate per-group overlays are only better if you want
    to limit which nodes route group traffic
  • The flooding approach has more overhead
  • Tree-based multicast can reuse the same overlay for
    multiple groups
  • Lower delay; joins and sends are more lightweight
  • Multicast trees built on Pastry perform better than
    those built on CAN