Transcript and Presenter's Notes

Title: Supporting Wide-area Applications


1
Supporting Wide-area Applications
  • Complexities of global deployment
    • Network unreliability: BGP converges slowly, path
      redundancy goes unexploited
    • Management of large-scale resources / components:
      locate and utilize resources despite failures
  • Decentralized Object Location and Routing (DOLR)
    • Wide-area overlay application infrastructure
    • Self-organizing, scalable
    • Fault-tolerant routing and object location
    • Efficient (bandwidth, latency) data delivery
    • Extensible, supports application-specific
      protocols
  • Recent work: Tapestry, Chord, CAN, Pastry, Kademlia,
    Viceroy, etc.

2
Tapestry: Decentralized Object Location and Routing
Zhao, Kubiatowicz, Joseph, et al.
  • Mapping keys to the physical network
    • Large, sparse ID space N
    • Nodes in the overlay network have NodeIDs ∈ N
    • Given k ∈ N, the overlay deterministically maps k
      to its root node (a live node in the network)
  • Base API
    • Publish / Unpublish (ObjectID)
    • RouteToNode (NodeID)
    • RouteToObject (ObjectID)
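
The base API above can be pictured as a small Java interface. The following is a minimal sketch only; the names (DolrApi, NodeId, ObjectId, DolrMessage) are hypothetical and do not correspond to the actual Tapestry classes.

// Hypothetical identifier types: values drawn from the large, sparse ID space N.
final class NodeId {
    final String hexDigits;                      // e.g., "EF31"
    NodeId(String hexDigits) { this.hexDigits = hexDigits; }
}

final class ObjectId {
    final String hexDigits;                      // object key k from the same space
    ObjectId(String hexDigits) { this.hexDigits = hexDigits; }
}

final class DolrMessage {
    final byte[] payload;
    DolrMessage(byte[] payload) { this.payload = payload; }
}

// Sketch of the DOLR base API listed on this slide.
interface DolrApi {
    // Announce that the local node stores a replica of the object; the overlay
    // installs location pointers along the route toward the object's root node.
    void publish(ObjectId id);

    // Remove the location pointers installed by a prior publish().
    void unpublish(ObjectId id);

    // Route a message to the live node that the given key deterministically
    // maps to (its root node).
    void routeToNode(NodeId dest, DolrMessage msg);

    // Route a message to a replica of the object, redirecting at the first
    // node that holds a location pointer for it.
    void routeToObject(ObjectId id, DolrMessage msg);
}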

3
Tapestry Mesh: Incremental prefix-based routing
[Figure: Tapestry routing mesh over example NodeIDs (0xEF97, 0xEF32, 0xE399, 0x43FE, 0xEF37, 0xEF44, ...); each routing hop resolves one more digit of the destination ID.]
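
One hop of this incremental prefix-based routing can be sketched as follows. This is an illustrative reconstruction (the PrefixRouter class and its table layout are assumptions, not the Tapestry implementation), and it omits details such as surrogate routing when a table entry is missing.

import java.util.Map;

// Sketch: node IDs are hex strings; routingTable.get(level) maps the next hex
// digit to a neighbor whose ID shares `level` leading digits with localId.
final class PrefixRouter {
    private final String localId;                                    // e.g., "EF31"
    private final Map<Integer, Map<Character, String>> routingTable;

    PrefixRouter(String localId, Map<Integer, Map<Character, String>> routingTable) {
        this.localId = localId;
        this.routingTable = routingTable;
    }

    // Pick the next hop for destId, matching one more digit per hop; e.g. a
    // route toward 0xEF31 might go 0x43FE -> 0xE932 -> 0xEF44 -> 0xEF37 -> 0xEF31.
    String nextHop(String destId) {
        int shared = 0;
        while (shared < localId.length() && localId.charAt(shared) == destId.charAt(shared)) {
            shared++;
        }
        if (shared == localId.length()) {
            return localId;                      // local node is the destination/root
        }
        Map<Character, String> level = routingTable.get(shared);
        if (level == null) {
            return localId;                      // no entries at this level (simplification)
        }
        // Real Tapestry falls back to surrogate routing when the exact entry is empty.
        return level.getOrDefault(destId.charAt(shared), localId);
    }
}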
4
Object Location: Randomization and Locality
5
Single Node Architecture
[Figure: layered single-node software architecture]
  • Applications: Decentralized File Systems, Application-Level Multicast, Approximate Text Matching
  • Application Interface / Upcall API
  • Routing Table / Object Pointer DB, Dynamic Node Management, Router
  • Network Link Management
  • Transport Protocols
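
The Upcall API layer above can be pictured as a per-hop callback from the router into the application (file system, multicast, text matching). A minimal sketch, reusing the hypothetical NodeId/DolrMessage types from the earlier API sketch; UpcallHandler and its method are assumptions, not the actual Tapestry interface.

// Sketch of an upcall handler registered with the router. The router invokes
// it at every hop for messages it is forwarding through this node.
interface UpcallHandler {
    // Return true to let the router keep forwarding the message toward dest;
    // return false to consume it locally (e.g., an application-level multicast
    // node that duplicates the message and re-routes copies to its children).
    boolean deliver(NodeId dest, DolrMessage msg);
}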
6
Status and Deployment
  • PlanetLab global network
    • 98 machines at 42 institutions in North America,
      Europe, and Australia (~60 machines utilized)
    • 1.26 GHz PIII (1 GB RAM), 1.8 GHz PIV (2 GB RAM)
    • North American machines (2/3) on Internet2
  • Tapestry Java deployment
    • 6-7 nodes on each physical machine
    • IBM Java JDK 1.3
    • Node virtualization inside JVM and SEDA
    • Scheduling between virtual nodes increases latency

7
Node to Node Routing
  • Ratio of end-to-end routing latency to shortest
    ping distance between nodes
  • All node pairs measured, placed into buckets
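
A minimal sketch of how this latency-stretch ratio could be computed, assuming "buckets" groups node pairs by their direct ping distance; the LatencySample and RoutingStretch names are hypothetical, not taken from the Tapestry codebase.

import java.util.List;

// One measured node pair: direct ping time and end-to-end overlay routing time.
final class LatencySample {
    final double pingMs;
    final double overlayMs;
    LatencySample(double pingMs, double overlayMs) {
        this.pingMs = pingMs;
        this.overlayMs = overlayMs;
    }
}

final class RoutingStretch {
    // Mean ratio of overlay latency to ping latency, per ping-distance bucket.
    static double[] meanRatioPerBucket(List<LatencySample> samples,
                                       double bucketWidthMs, int numBuckets) {
        double[] sum = new double[numBuckets];
        int[] count = new int[numBuckets];
        for (LatencySample s : samples) {
            int bucket = Math.min(numBuckets - 1, (int) (s.pingMs / bucketWidthMs));
            sum[bucket] += s.overlayMs / s.pingMs;   // stretch for this node pair
            count[bucket]++;
        }
        double[] mean = new double[numBuckets];
        for (int b = 0; b < numBuckets; b++) {
            mean[b] = count[b] > 0 ? sum[b] / count[b] : Double.NaN;
        }
        return mean;
    }
}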

8
Object Location
90th percentile: 158
  • Ratio of end-to-end latency for object location to
    the shortest ping distance between client and object
  • Each node publishes 10,000 objects; lookups are then
    performed on all objects

9
Parallel Insertion Latency
  • Dynamically insert nodes in unison into an existing
    Tapestry network of 200 nodes
  • Shown as a function of insertion group size /
    network size
  • Node virtualization effect: CPU scheduling
    contention causes timeouts of ping measurements,
    resulting in high deviation