1 A blueprint for introducing disruptive technology into the Internet
- by
- L. Peterson, Princeton
- T. Anderson, UW
- D. Culler, T. Roscoe, Intel Research Berkeley
- HotNets-I (Infrastructure panel), 2002
- Presenter: Shobana Padmanabhan
- Discussion leader: Michael Wilson
- Mar 3, 2005
- CS7702 Research seminar
2 Outline
- Introduction
- Architecture
- PlanetLab
- Conclusion
3 Introduction
(figures contrast the Internet "until recently" with the Internet "recently")
- Widely-distributed applications make their own forwarding decisions
  - Network-embedded storage, peer-to-peer file sharing, content distribution networks, robust routing overlays, scalable object location, scalable event propagation
- Network elements (layer-7 switches, transparent caches) do application-specific processing
- But the Internet is ossified..
Figures courtesy planet-lab.org
4 This paper proposes using overlay networks to address this ossification..
5 Overlay network
- A virtual network of nodes and logical links, built atop the existing network, to implement a new service
- Provides an opportunity for innovation, since no changes to the Internet are required
- Eventually, the weight of these overlays will cause a new architecture to emerge
  - Similar to how the Internet itself (an overlay) drove the evolution of the underlying telephony network
This paper speculates on what this new architecture will look like..
Figure courtesy planet-lab.org
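The idea above can be made concrete with a toy overlay: logical links between overlay nodes, each riding over a multi-hop path in the underlying network. The node names, paths, and relay choice below are illustrative assumptions, not details from the paper.

```python
# Toy overlay: logical links A-C and C-E, each mapped onto a multi-hop
# path through underlying routers. All names and paths are invented.

underlay_paths = {
    ("A", "C"): ["A", "r1", "r2", "C"],   # physical route of link A-C
    ("C", "E"): ["C", "r3", "E"],         # physical route of link C-E
}

def overlay_route(src, dst):
    """Return the sequence of overlay hops from src to dst.

    The overlay makes its own forwarding decision (relay via C);
    the underlay only ever carries packets between adjacent hops."""
    if (src, dst) in underlay_paths:      # direct logical link exists
        return [src, dst]
    return [src, "C", dst]                # otherwise relay through C

def underlay_hops(route):
    """Expand an overlay route into the physical hops it traverses."""
    hops = []
    for a, b in zip(route, route[1:]):
        hops.extend(underlay_paths[(a, b)][:-1])
    return hops + [route[-1]]

print(overlay_route("A", "E"))            # ['A', 'C', 'E']
print(underlay_hops(["A", "C", "E"]))     # ['A', 'r1', 'r2', 'C', 'r3', 'E']
```

The overlay reaches E from A even though no direct A-E route exists underneath, which is exactly why new services can deploy without changing the Internet.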
6 Outline
- Introduction
- Architecture
- PlanetLab
- Conclusion
7 Goals
- Short-term: support experimentation with new services - a testbed
  - Experiment at scale (1000s of sites)
  - Experiment under real-world conditions
    - diverse bandwidth / latency / loss
    - wide-spread geographic coverage
  - Potential for real workloads and users
  - Low cost of entry
- Medium-term: support continuous services that serve clients - a deployment platform
  - Support seamless migration of an application from prototype to service, through design iterations, as it continues to evolve
- Long-term: a microcosm for the next-generation Internet!
8 Architecture
- Design principles
- Slice-ability
- Distributed control of resources
- Unbundled (overlay) management
- Application-centric interfaces
9 Slice-ability
- A slice is a horizontal cut of global resources across nodes
  - Processing, memory, storage..
- Each service runs in a slice
  - A service is a set of programs delivering some functionality
- Node slicing must
  - be secure
  - use a resource control mechanism
  - be scalable
- A slice is a network of VMs
Figure courtesy planet-lab.org
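One way to picture "a slice is a network of VMs" is as a service name bound to one VM's worth of resources on each of several nodes. The class layout, node names, and resource quantities below are invented for illustration.

```python
# Sketch: a slice as a horizontal cut of global resources - one VM per
# node, all belonging to a single service. Names/numbers are assumptions.

from dataclasses import dataclass, field

@dataclass
class VM:
    node: str
    cpu_share: float   # fraction of the node's CPU granted to this VM
    memory_mb: int
    disk_mb: int

@dataclass
class Slice:
    service: str                          # the service running in this slice
    vms: list = field(default_factory=list)

    def add_vm(self, vm):
        self.vms.append(vm)

    def nodes(self):
        """The set of nodes this slice cuts across."""
        return [vm.node for vm in self.vms]

s = Slice("demo-service")
for n in ["planet1.princeton", "planet2.uw", "planet3.berkeley"]:
    s.add_vm(VM(node=n, cpu_share=0.1, memory_mb=256, disk_mb=1024))

print(s.nodes())   # the slice spans all three nodes
```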
10 Virtual Machine
- A VM is the environment in which a program implementing some aspect of the service runs
- Each VM runs on a single node and uses some of the node's resources
- A VM must make programs no harder to write, provide protection from other VMs, share resources fairly, and restrict traffic generation
- Multiple VMs run on each node, with a VMM (Virtual Machine Monitor) arbitrating the node's resources
11 Virtual Machine Monitor (VMM)
- A kernel-mode driver running in the host operating system
- Has access to the physical processor and manages resources between the host OS and the VMs
- Prevents malicious or poorly designed applications running in a virtual server from requesting excessive hardware resources from the host OS
- With virtualization, there are now two interfaces
  - An API for typical services
  - A protection interface used by the VMM
- The VMM used here is Linux VServer..
12 A node..
Figure courtesy planet-lab.org
13 Across nodes (i.e. across the network)
- Node manager (one per node, part of the VMM)
  - When service managers present valid tickets, allocates resources, creates VMs, and returns a lease
- Resource monitor (one per node)
  - Tracks the node's available resources (using the VMM's interface)
  - Tells agents about available resources
- Agents (centralized)
  - Collect resource monitor reports
  - Advertise tickets
  - Issue tickets to resource brokers
- Resource broker (per service)
  - Obtains tickets from agents on behalf of service managers
- Service managers (per service)
  - Obtain tickets from the broker
  - Redeem tickets with node managers to create VMs
  - Start the service
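The chain above (resource monitor → agent → broker → service manager → node manager) can be sketched end-to-end. Everything here - the ticket fields, the "signature" check, the lease contents - is an illustrative assumption, not the actual PlanetLab protocol.

```python
# Sketch of slice creation: an agent turns resource reports into tickets,
# and a node manager redeems a valid ticket by creating a VM and
# returning a lease. Data shapes are invented for illustration.

class Agent:
    """Collects resource-monitor reports and issues tickets."""
    def __init__(self):
        self.reports = {}                 # node -> free CPU units reported

    def collect(self, node, free_cpu):
        self.reports[node] = free_cpu

    def issue_ticket(self, node, cpu):
        """Issue a ticket only if the node reported enough free capacity."""
        if self.reports.get(node, 0) >= cpu:
            return {"node": node, "cpu": cpu, "signed_by": "agent"}
        return None

class NodeManager:
    """Part of the VMM; redeems valid tickets for a VM lease."""
    def redeem(self, ticket):
        if ticket and ticket.get("signed_by") == "agent":
            return {"lease": f"vm-on-{ticket['node']}", "cpu": ticket["cpu"]}
        raise ValueError("invalid ticket")

agent = Agent()
agent.collect("nodeA", free_cpu=4)        # resource monitor's report

# The broker obtains a ticket on behalf of the service manager...
ticket = agent.issue_ticket("nodeA", cpu=2)
# ...which the service manager redeems with the node manager for a lease.
lease = NodeManager().redeem(ticket)
print(lease)
```

Note how the agent can refuse (returns `None`) when a node lacks capacity, and the node manager rejects unsigned tickets - the two checkpoints the slide's components imply.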
14-23 Obtaining a Slice
(animation frames: resource monitors report to the agent; the agent issues tickets; the broker passes tickets to the service manager, which redeems them with node managers)
- Diagram labels: Agent, Broker, Resource Monitor, Service Manager, Node Manager, ticket
Courtesy Jason Waddle's presentation material
24 Architecture
- Design principles
- Slice-ability
- Distributed control of resources
- Unbundled (overlay) management
- Application-centric interfaces
25 Distributed control of resources
- Because of the testbed's dual role, there are two types of users
  - Researchers
    - Likely to dictate how services are deployed and the properties of the nodes used
  - Node owners / clients
    - Likely to restrict what services run on their nodes and how resources are allocated to them
- Control is decentralized between the two
  - A central authority provides credentials to service developers
  - Each node independently grants or denies a request, based on local policy
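The split above - central credentials plus per-node local policy - might look like the following sketch. The credential format, policy fields, and rules are all assumptions made for illustration.

```python
# Sketch: a central authority issues credentials; each node applies its
# own local policy before granting a request. Fields/rules are invented.

CENTRAL_AUTHORITY = "central-authority"

def make_credential(researcher):
    """Credential handed out centrally to a service developer."""
    return {"user": researcher, "issuer": CENTRAL_AUTHORITY}

def node_decides(request, local_policy):
    """Each node grants or denies independently, per local policy."""
    cred = request["credential"]
    if cred.get("issuer") != CENTRAL_AUTHORITY:
        return False                                  # untrusted issuer
    if request["service"] in local_policy["banned_services"]:
        return False                                  # owner restriction
    return request["cpu"] <= local_policy["max_cpu_per_slice"]

# One node owner's local policy (hypothetical):
policy = {"banned_services": {"bulk-scanner"}, "max_cpu_per_slice": 2}

req = {"credential": make_credential("alice"),
       "service": "measurement", "cpu": 1}
print(node_decides(req, policy))   # granted: valid credential, within limits
```

The same request could be granted on one node and denied on another, which is the point of decentralizing control to node owners.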
26 Architecture
- Design principles
- Slice-ability
- Distributed control of resources
- Unbundled (overlay) management
- Application-centric interfaces
27 Unbundled (overlay) management
- Independent sub-services, each running in its own slice
  - Discover the set of nodes in the overlay and learn their capabilities
  - Monitor the health and instrument the behavior of these nodes
  - Establish a default topology
  - Manage user accounts and credentials
  - Keep the software running on each node up-to-date
  - Extract tracing and debugging info from a running node
- Some are part of the core system (user accounts..)
  - Single, agreed-upon version
- Others can have alternatives, with a default, replaceable over time
- Unbundling requires appropriate interfaces
  - E.g. hooks in the VMM interface to get the status of each node's resources
- Sub-services may depend on each other
  - E.g. a resource discovery service may depend on the node monitor service
28 Architecture
- Design principles
- Slice-ability
- Distributed control of resources
- Unbundled (overlay) management
- Application-centric interfaces
29 Application-centric interfaces
- Promote application development by letting applications run continuously (a deployment platform)
- Problem: it is difficult to simultaneously create the testbed and use it for writing applications
- The API should remain largely unchanged while the underlying implementation changes
- If an alternative API emerges, new applications must be written to it, but the original should be maintained for legacy applications
30 Outline
- Introduction
- Architecture
- PlanetLab
- Conclusion
31 PlanetLab
- Phases of evolution
  - Seed phase
    - 100 centrally managed machines
    - Pure testbed (no client workload)
    - Researchers as clients
  - Scale testbed to 1000 sites
    - Continuously running services
    - Attracting real clients
  - Non-researchers as clients
32 PlanetLab today
- Services
  - Berkeley's OceanStore: RAID-style storage distributed over the Internet
  - Intel's NetBait: detect and track worms globally
  - UW's Scriptroute: an Internet measurement tool
  - Princeton's CoDeeN: an open content distribution network
Courtesy planet-lab.org
33 Related work
- Internet2 (Abilene backbone)
  - Closed commercial routers -> no new functionality in the middle of the network
- Emulab
  - Not a deployment platform
- Grid (Globus)
  - Glues together a modest number of large computing assets with high-bandwidth pipes, but PlanetLab emphasizes scaling less bandwidth-hungry applications across a wider collection of nodes
- ABONE (from active networks)
  - Focuses on extensibility of the forwarding function, but PlanetLab is more inclusive, i.e. applications throughout the network, including those with a storage component
- XBONE
  - Supports IP-in-IP tunneling, with a GUI for specific overlay configurations
- Alternative: package the system as a desktop application
  - E.g. Napster, KaZaA
  - Needs to be immediately widely popular
  - Difficult to modify the system once deployed, unless applications are compelling
  - Not secure: KaZaA exposed all files on the local system
34 Conclusion
- An open, global network testbed for pioneering novel planetary-scale services (deployment).
- A model for introducing innovations (a service-oriented network architecture) into the Internet through overlays.
- Whether a single winner emerges and gets subsumed into the Internet, or services continue to define their own routing, remains a subject of speculation..
35 References
- "PlanetLab: An Overlay Testbed for Broad-Coverage Services" by B. Chun et al., Jan 2003
36 Backup slides
37 Overlay construction problems
- Dynamic changes in group membership
  - Members may join and leave dynamically
  - Members may die
- Dynamic changes in network conditions and topology
  - Delay between members may vary over time due to congestion and routing changes
- Knowledge of network conditions is member-specific
  - Each member must determine network conditions for itself
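A common way to handle the membership churn described above is heartbeat-based failure suspicion: each member tracks when it last heard from each peer and drops those silent for too long. The timeout value, class shape, and peer names below are illustrative assumptions.

```python
# Sketch: heartbeat-based membership tracking. A peer silent longer than
# `timeout` is suspected dead. Timeout/names are invented for illustration.

class Membership:
    def __init__(self, timeout=3.0):
        self.timeout = timeout
        self.last_seen = {}       # peer -> time of last heartbeat

    def heartbeat(self, peer, now):
        """Record that we just heard from `peer` (joins are implicit)."""
        self.last_seen[peer] = now

    def alive(self, now):
        """Peers heard from within the timeout window."""
        return {p for p, t in self.last_seen.items()
                if now - t <= self.timeout}

m = Membership(timeout=3.0)
m.heartbeat("planet1", now=0.0)
m.heartbeat("planet2", now=0.0)
m.heartbeat("planet1", now=5.0)   # planet2 falls silent after t=0

print(sorted(m.alive(now=6.0)))   # ['planet1'] - planet2 is suspected dead
```

Because each member runs this locally, its view of the group is member-specific - matching the slide's point that knowledge of network conditions cannot be global.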
38 The testbed's mode of operation as a deployment platform