P2P Networking - PowerPoint PPT Presentation


Transcript and Presenter's Notes

Title: P2P Networking


1
P2P Networking
2
What is peer-to-peer (P2P)?
  • Peer-to-peer is a way of structuring distributed
    applications such that the individual nodes have
    symmetric roles. Rather than being divided into
    clients and servers each with quite distinct
    roles, in P2P applications a node may act as both
    a client and a server. -- Charter of the
    Peer-to-Peer Research Group, IETF/IRTF, June 24,
    2004 (http://www.irtf.org/charters/p2prg.html)

3
Client/Server Architecture
Client
Server
4
Disadvantages of C/S Architecture
  • Single point of failure
  • Requires a strong, expensive server
  • Dedicated maintenance (a sysadmin)
  • Not scalable - more users require more servers

5
The Client Side
  • Today's clients can perform more roles than just
    forwarding users' requests
  • Today's clients have
  • more computing power
  • more storage space
  • Thin client → Fat client

6
Evolution at the Client Side
  • '70s: DEC's VT100 - no storage
  • '80s: IBM PC @ 4.77 MHz - 360 KB diskettes
  • 2007: a PC @ 4 GHz - 100 GB HD
7
What Else Has Changed?
  • The number of home PCs is increasing rapidly
  • Most of the PCs are fat clients
  • As Internet usage grows, more and more PCs are
    connected to the global net
  • Most of the time PCs are idle
  • How can we use all this?

8
Resource Sharing
  • What can we share?
  • Computer resources
  • Shareable computer resources:
  • CPU cycles - SETI@home, GIMPS
  • Data - Napster, Gnutella
  • Bandwidth - PPLive, PPStream
  • Storage space - OceanStore, CFS, PAST

9
SETI@home
  • SETI: Search for Extra-Terrestrial Intelligence
  • @home: on your own computer
  • A radio telescope in Puerto Rico scans the sky
    for radio signals
  • Fills a 35 GB DAT tape in 15 hours
  • That data has to be analyzed

10
SETI@home (cont.)
  • The problem: analyzing the data requires a huge
    amount of computation
  • Even a supercomputer cannot finish the task on
    its own
  • Accessing a supercomputer is expensive
  • What can be done?

11
SETI@home (cont.)
  • Can we use distributed computing?
  • Yes!
  • Fortunately, the problem can be solved in
    parallel - for example, by
  • Analyzing different parts of the sky
  • Analyzing different frequencies
  • Analyzing different time slices

12
SETI@home (cont.)
  • The data can be divided into small segments
  • A PC is capable of analyzing a segment in a
    reasonable amount of time
  • An enthusiastic UFO searcher will lend their spare
    CPU cycles for the computation
  • When? Via a screensaver
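
As a small illustration of the idea above, here is a minimal sketch in Python that splits a recorded data set into independent work units for volunteer PCs; the work-unit size is an assumption, not a SETI@home constant.

    # A minimal sketch: divide the data into small segments ("work units")
    # that volunteer PCs can analyze independently, e.g. while idle.
    def split_into_work_units(total_bytes, unit_bytes):
        """Yield (offset, length) segments covering the whole data set."""
        offset = 0
        while offset < total_bytes:
            length = min(unit_bytes, total_bytes - offset)
            yield offset, length
            offset += length

    tape = 35 * 10**9                  # one ~35 GB tape of recorded data
    unit = 350 * 10**3                 # per-client work unit size (assumption)
    units = list(split_into_work_units(tape, unit))
    print(len(units), units[0], units[-1])   # 100000 units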

13
SETI@home - Example
14
SETI@home - Summary
  • SETI@home reverses the C/S model
  • Clients can also provide services
  • Servers can be weaker, used mainly for storage
  • Distributed peers serving the center
  • Not yet P2P, but we're close
  • Outcome - great results
  • Thousands of unused CPU hours tamed for the
    mission
  • 3 million users

15
(No Transcript)
16
(No Transcript)
17
(No Transcript)
18
Google -- Larry Page and Sergey Brin
19
Cloud computing
20
  • MUREX: A Mutable Replica Control Scheme for
    Peer-to-Peer Storage Systems

21
MUREX: Basic Concept
22
Peer-to-Peer Video Streaming
23
Peer-to-Peer Video Streaming
24
Napster -- Shawn Fanning
25
(No Transcript)
26
History of Napster (1/2)
  • 5/99: Shawn Fanning (freshman, Northeastern
    University) founds Napster Online (supported by
    Groove)
  • 12/99: first lawsuit
  • 3/00: 25% of Univ. of Wisconsin traffic is on Napster

27
History of Napster (2/2)
  • 2000: estimated 23M users
  • 7/01: 160K simultaneous online users
  • 6/02: files for bankruptcy
  • 10/03: Napster 2 (supported by Roxio); users
    pay $9.99/month

1984-2000: 23M domain names were registered, vs. 23M
Napster-style names registered at Napster in just 16
months
28
Napster Sharing Style: hybrid (center + edge)

Directory on the central server:
  Title       User        Speed
  song1.mp3   beastieboy  DSL
  song2.mp3   beastieboy  DSL
  song3.mp3   beastieboy  DSL
  song4.mp3   kingrook    T1
  song5.mp3   kingrook    T1
  song5.mp3   slashdot    28.8
  song6.mp3   kingrook    T1
  song6.mp3   slashdot    28.8
  song7.mp3   slashdot    28.8

1. Users launch Napster and connect to the Napster
   server
2. Napster creates a dynamic directory from users'
   personal .mp3 libraries
3. beastieboy enters search criteria ("song5")
4. Napster displays matches to beastieboy
5. beastieboy makes a direct connection to kingrook
   to transfer song5.mp3
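
A minimal sketch of the hybrid model above, using the table's data: a central index answers searches, while the file itself is fetched directly from a peer. This is an illustration, not Napster's actual protocol.

    # Central index: title -> list of (user, connection speed)
    index = {}

    def register(user, speed, titles):
        for t in titles:
            index.setdefault(t, []).append((user, speed))

    def search(title):
        return index.get(title, [])

    register("beastieboy", "DSL",  ["song1.mp3", "song2.mp3", "song3.mp3"])
    register("kingrook",   "T1",   ["song4.mp3", "song5.mp3", "song6.mp3"])
    register("slashdot",   "28.8", ["song5.mp3", "song6.mp3", "song7.mp3"])

    print(search("song5.mp3"))   # beastieboy then connects directly to a
                                 # listed peer (e.g. kingrook on T1) to download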

29
Gnutella History
  • Gnutella was written by Justin Frankel, the
    21-year-old founder of Nullsoft.
  • (Nullsoft acquired by AOL, June 1999)
  • Nullsoft (maker of WinAmp) posted Gnutella on the
    Web, March 14, 2000.
  • A day later AOL yanked Gnutella, at the behest of
    Time Warner.
  • Too late: 23K users were already on Gnutella
  • People had already downloaded and shared the
    program.
  • Gnutella continues today, run by independent
    programmers.

30
Gnutella -- Justin Frankel and Tom Pepper
31
The Animal GNU: either of two large African
antelopes (Connochaetes gnou or C. taurinus)
having a drooping mane and beard, a long tufted
tail, and curved horns in both sexes. Also
called wildebeest.
GNU (recursive acronym): GNU's Not Unix.

Gnutella = GNU + Nutella

Nutella: a hazelnut chocolate spread produced by
the Italian confectioner Ferrero.
32
GNU
  • GNU's Not Unix
  • 1983: Richard Stallman (MIT) announced the GNU
    Project and went on to establish the Free
    Software Foundation
  • Free software is not freeware
  • Free software is open source software
  • GPL: GNU General Public License

33
About Gnutella
  • No centralized directory servers
  • Pings the net to locate Gnutella friends
  • File requests are broadcast to friends
  • Flooding, breadth-first search (see the sketch
    below)
  • When a provider is located, the file is
    transferred via HTTP
  • History
  • 3/14/00: released by AOL, almost immediately
    withdrawn
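
The sketch below illustrates flooding search with a TTL and duplicate suppression; it is a toy model, not Gnutella's wire protocol, and the topology and file placement are made up.

    from collections import deque

    topology = {                        # node -> neighbors ("friends")
        "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
        "D": ["B", "C"], "E": ["C"],
    }
    files = {"E": {"xyz.mp3"}}          # only E holds the file

    def flood_search(start, filename, ttl=4):
        """Breadth-first flooding: forward the query to all neighbors."""
        seen = {start}
        queue = deque([(start, ttl)])
        while queue:
            node, hops_left = queue.popleft()
            if filename in files.get(node, ()):
                return node             # provider found; transfer goes via HTTP
            if hops_left == 0:
                continue
            for nbr in topology[node]:
                if nbr not in seen:     # do not re-forward a query already seen
                    seen.add(nbr)
                    queue.append((nbr, hops_left - 1))
        return None

    print(flood_search("A", "xyz.mp3"))   # 'E'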

34
Peer-to-Peer Overlay Network
Focus at the application layer
35
Peer-to-Peer Overlay Network
(Figure: end systems form the overlay; one overlay
hop is an end-to-end TCP connection through the
Internet)
36
Topology of a Gnutella Network
37
Gnutella: Issue a Request
xyz.mp3 ?
38
Gnutella: Flood the Request
39
Gnutella: Reply with the File
Fully distributed storage and directory!
xyz.mp3
40
So Far
n = number of participating nodes
  • Centralized
  • - Directory size: O(n)
  • - Number of hops: O(1)
  • Flooded queries
  • - Directory size: O(1)
  • - Number of hops: O(n)

41
We Want
  • Efficiency: O(log n) messages per lookup
  • Scalability: O(log n) state per node
  • Robustness: surviving massive failures

42
How Can It Be Done?
  • How do you search in O(log(n)) time?
  • Binary search
  • You need an ordered array
  • How can you order nodes in a network and data
    objects?
  • Hash function!
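
As a small aside on the point above, a sketch of binary search over an ordered key list using Python's bisect module; the keys are hypothetical.

    import bisect

    keys = sorted([4, 9, 17, 23, 42, 57, 63])   # an ordered array of keys

    def find(key):
        """Each probe halves the candidate range: O(log n) comparisons."""
        i = bisect.bisect_left(keys, key)
        return keys[i] if i < len(keys) and keys[i] == key else None

    print(find(23))   # 23
    print(find(24))   # None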

43
Example of Hashing
H("Shark") → 194.90.1.5:8080
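
A minimal sketch of the hashing step, assuming SHA-1 truncated to a 32-bit key space; the node address shown on the slide is only an example of what the key would then be mapped to.

    import hashlib

    def hash_key(name, bits=32):
        """Hash a name into a fixed numeric key space, deterministically."""
        digest = hashlib.sha1(name.encode()).hexdigest()
        return int(digest, 16) % (2 ** bits)

    print(hash_key("Shark"))   # every peer computes the same key for "Shark";
                               # the key is then mapped to a responsible node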
44
Basic Idea
(Figure: peer x joins the P2P network with key H(x);
object y is published with key H(y))
  • Objects have hash keys
  • Peer nodes also have hash keys in the same hash
    space
  • Place an object at the peer with the closest hash
    key
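
A minimal sketch of this placement rule: peers and objects are hashed into the same key space, and each object goes to the peer with the closest key. Plain numeric distance is used here for brevity (real DHTs use ring distance), and all names are made up.

    import hashlib

    BITS = 32

    def H(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** BITS)

    peers = ["peer-x", "peer-a", "peer-b", "peer-c"]
    objects = ["object-y", "song5.mp3"]

    def responsible_peer(obj):
        key = H(obj)
        # place the object at the peer whose hash key is closest to the object's
        return min(peers, key=lambda p: abs(H(p) - key))

    for o in objects:
        print(o, "->", responsible_peer(o))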
45
Mapping Keys to Nodes
(Figure: the hash space from 0 to M, with both nodes
and data objects mapped onto it)
46
Viewed as a Distributed Hash Table
(Figure: the hash space 0 to 2^128-1 viewed as a hash
table whose buckets are distributed across peer nodes)
47
DHT
  • Distributed Hash Table
  • Input: a key (e.g., a file name); Output: a value
    (e.g., the file location)
  • Each node is responsible for a range of the hash
    table, according to the node's hash key. Objects
    are placed in (managed by) the node with the
    closest key
  • It must be adaptive to dynamic node joining and
    leaving (see the sketch below)
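
A minimal sketch, not any particular DHT protocol, of the two properties above: key-to-value lookup, and adaptivity when a node joins (only the keys that now map to the newcomer are moved). Names and the closeness metric are simplifying assumptions.

    import hashlib

    BITS = 32
    def H(s): return int(hashlib.sha1(s.encode()).hexdigest(), 16) % (2 ** BITS)

    class TinyDHT:
        def __init__(self, node_names):
            self.nodes = {n: {} for n in node_names}     # node -> local store

        def _owner(self, key):
            # the node with the hash key closest to the object's key
            return min(self.nodes, key=lambda n: abs(H(n) - H(key)))

        def put(self, key, value):
            self.nodes[self._owner(key)][key] = value

        def get(self, key):
            return self.nodes[self._owner(key)].get(key)

        def join(self, new_node):
            self.nodes[new_node] = {}
            for n, store in list(self.nodes.items()):
                if n == new_node:
                    continue
                for k in list(store):
                    if self._owner(k) == new_node:            # key is now closer
                        self.nodes[new_node][k] = store.pop(k)  # to the newcomer

    dht = TinyDHT(["node-a", "node-b", "node-c"])
    dht.put("xyz.mp3", "194.90.1.5:8080")
    dht.join("node-d")                 # only affected keys are handed over
    print(dht.get("xyz.mp3"))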

48
How to Find an Object?
(Figure: the same hash space 0 to 2^128-1 distributed
across peer nodes; which peer holds a given key?)
49
Simple Idea
  • Track peers which allow us to move quickly across
    the hash space
  • A peer p tracks those peers responsible for hash
    keys (p + 2^(i-1)) mod 2^m, i = 1, ..., m

(Figure: the hash space 0 to 2^128-1, with the tracked
keys lying at exponentially increasing distances from
the peer)
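
A tiny sketch of the tracking rule above, assuming a 128-bit hash space; the peer key is an arbitrary example value.

    m = 128                                  # bits in the hash space
    p = 0x1234_5678_9ABC_DEF0                # example peer key (assumption)

    # The peer tracks whoever is responsible for each of these m keys,
    # which sit at exponentially increasing distances around the ring.
    tracked = [(p + 2 ** (i - 1)) % 2 ** m for i in range(1, m + 1)]
    print(len(tracked), hex(tracked[0]), hex(tracked[-1]))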
50
DHT Example: Chord -- Ring Structure
N8 knows of only six other nodes.
Circular 6-bit ID space
O(log n) states per node
51
DHT Example: Chord -- Ring Structure
O(log n)-hop query cost
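
A rough sketch of Chord-style greedy lookup over finger tables on the 6-bit ring from the figure. Node IDs and the looked-up key are illustrative, every node is given a global view for brevity, and this is not the full Chord protocol (no joins, no stabilization).

    M = 6                         # 6-bit identifier space, as on the slide
    RING = 2 ** M

    def in_interval(x, a, b):
        """True if x lies in the circular interval (a, b]."""
        if a < b:
            return a < x <= b
        return x > a or x <= b

    def successor(key, nodes):
        """First node clockwise from key: the node responsible for key."""
        return next((n for n in nodes if n >= key), nodes[0])

    def fingers(node, nodes):
        """finger[i] = successor(node + 2^i), i = 0..M-1."""
        return [successor((node + 2 ** i) % RING, nodes) for i in range(M)]

    def lookup(start, key, nodes):
        """Greedily forward to the farthest finger that precedes the key."""
        current, hops = start, 0
        while True:
            succ = successor((current + 1) % RING, nodes)
            if in_interval(key, current, succ):
                return succ, hops          # succ is responsible for key
            nxt = current
            for f in reversed(fingers(current, nodes)):
                if in_interval(f, current, (key - 1) % RING):
                    nxt = f
                    break
            if nxt == current:             # no useful finger: step to successor
                nxt = succ
            current, hops = nxt, hops + 1

    nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])
    print(lookup(8, 54, nodes))            # (56, 2): found in O(log n) hops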
52
Classification of P2P systems
  • Hybrid P2P: preserves some of the traditional
    C/S architecture. A central server links
    clients, stores index tables, etc.
  • Napster
  • Unstructured P2P: no control over topology and
    file placement
  • Gnutella, Morpheus, KaZaA, etc.
  • Structured P2P: topology is tightly controlled
    and placement of files is not random
  • Chord, CAN, Pastry, Tornado, etc.

53
What's next?
  • P2P NVEs (Networked Virtual Environments)
  • P2P MMOGs (Massively Multiplayer Online Games)
  • P2P 3D Scene Streaming

54
P2P-NVE: Peer-to-Peer Networked Virtual
Environment
  • Some of the following slides are adapted from
    www.movesinstitute.org/mcgredo/NVE.ppt

55
NVE Examples
  • Commercial
  • FPS (first-person shooter): America's Army
  • MMOG: Quake, Unreal, EverQuest, World of Warcraft
    (WoW), Lineage, Second Life
  • Research: NPSNET, DIVE, MASSIVE
  • Military: Close Combat Tactical Trainer, SIMNET

56
America's Army
57
Massively Multiplayer Online Games
  • MMOGs are growing quickly
  • 8 million registered users for World of Warcraft
  • Over 100,000 concurrent players
  • Billion-dollar business

58
59
60
(No Transcript)
61
(No Transcript)
62
63
Close Combat Tactical Trainer
64
NVE (1)
  • Networked Virtual Environments (NVEs) are
    computer-generated virtual worlds in which
    multiple geographically distributed users can
    assume virtual representatives (or avatars) to
    interact with each other concurrently
  • A.K.A. Distributed Virtual Environments (DVEs)

65
NVE (3)
  • 3D virtual world with
  • People (avatar)
  • Objects
  • Terrain
  • Agents
  • Each avatar can perform many operations
  • Move
  • Chat
  • Other actions

66
NVE Components
  • Graphic display
  • User input and communication
  • Processing/CPU
  • Data network

67
NVE Components Graphics
  • The display and CPU have become astonishingly
    cheap in the last few years
  • We typically need to draw in 3D on the display.
    The great thing about standards is that there are
    so many to choose from.
  • OpenGL, X3D, Java3D
  • Varying degrees of realism, with FPS emphasizing
    photorealism and others like EverQuest or Sims
    Online sacrificing graphics for better gameplay
    in other aspects

68
NVE User Interfaces
  • You can have all sorts of input/output devices at
    the user's location.
  • With commercial games it is often a display,
    keyboard, and mouse.
  • Military simulations may have more elaborate user
    environments, such as a mockup of an M1 tank
    interior
  • Some fancy UIs, such as head-mounted displays,
    caves, haptic feedback, data gloves, etc.

69
Processing/CPU
  • Used for physics, AI, agent behavior, some
    graphics, networking
  • Still on Moore's law curve. 4 GHz machines are
    cheap and plentiful.

70
Data Network
  • Networks are a real bottleneck in NVE design.
    Why?
  • Players send out position updates at 30/sec x
    (50 bytes/update + 42 bytes/packet overhead) x
    8 bits/byte ≈ 22,000 bits/sec per player
  • Even a 1 Mbit/sec connection can run out of
    bandwidth after 40-50 players (see the arithmetic
    below)
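
A quick check of the arithmetic above.

    updates_per_sec = 30
    payload_bytes = 50
    overhead_bytes = 42

    bits_per_player = updates_per_sec * (payload_bytes + overhead_bytes) * 8
    print(bits_per_player)                 # 22,080 bits/sec per player

    link_bps = 1_000_000                   # a 1 Mbit/sec connection
    print(link_bps // bits_per_player)     # ~45 players before saturation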

71
Data Network
  • Latency is another big implementation problem.
    The position updates that arrive at hosts are
    always out of date, so we only know where the
    object was in the past
  • This may be a big problem (wide area network
    across a satellite link) or a small problem
    (everyone on a LAN)

72
Issues for NVEs
  • Scalability
  • To accommodate as many participants as possible
  • Consistency
  • All participants have the same view of object
    states
  • Persistency
  • All contents (object states) in NVE need to exist
    persistently
  • Reliability
  • Need to tolerate H.W. and S.W. failures
  • Security
  • To prevent cheating and to keep user information
    and game state confidential

73
Architectures
  • NVE architectures fall between two extreme poles:
    peer-to-peer and client-server
  • In a Client-Server architecture, a server is
    responsible for sending out updates to the other
    hosts in the NVE
  • In a P2P architecture, all hosts communicate with
    other hosts

74
Architectures (Client-Server)
Popular with commercial game engines
75
Architectures (P2P)
More popular in research and military
76
The Scalability Problem (1)
  • Client-server has an inherent resource limit

Resource limit
77
The Scalability Problem (2)
  • Peer-to-Peer: use the clients' resources

Resource limit
78
You only need to know some participants
Area of Interest (AOI)
  • self
  • AOI neighbors

79
Voronoi-based Overlay Network (VON)
  • Observation
  • For virtual environment applications, the
    contents we want are messages from AOI neighbors
  • Content discovery is a neighbor discovery problem
  • Solve the neighbor discovery problem in a
    fully-distributed, message-efficient manner
  • Specific goals
  • Scalable → limit and minimize message traffic
  • Responsive → direct connection with AOI neighbors

80
Voronoi Diagram
  • A 2D plane is partitioned into regions by sites;
    each region contains all the points closest to
    its site

(Figure: a Voronoi diagram showing a site, its
region, and its neighbors)
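
A minimal sketch of this definition using SciPy: build a Voronoi diagram from site coordinates and list each site's neighbors, i.e. the sites whose regions share an edge. The coordinates are made up.

    import numpy as np
    from scipy.spatial import Voronoi

    sites = np.array([[0, 0], [4, 1], [1, 5], [6, 6], [3, 3], [7, 2]])
    vor = Voronoi(sites)

    neighbors = {i: set() for i in range(len(sites))}
    for a, b in vor.ridge_points:          # each ridge separates two sites
        neighbors[a].add(b)
        neighbors[b].add(a)

    for site, nbrs in neighbors.items():
        print(site, sorted(nbrs))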
81
Design Concepts
Use Voronoi diagrams to solve the neighbor discovery
problem
  • Each node constructs a Voronoi diagram of its
    neighbors
  • Identify enclosing and boundary neighbors
  • Mutual collaboration in neighbor discovery

(Figure: node i and the big circle is its AOI; the
legend marks enclosing neighbors, boundary neighbors,
nodes that are both, normal AOI neighbors, and
irrelevant nodes)
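
The sketch below is a rough approximation of these concepts, not VON's exact rules: AOI neighbors lie within the AOI radius, enclosing neighbors share a Voronoi edge with the node, and an enclosing neighbor is treated here as a boundary neighbor when its shared edge reaches outside the AOI or is unbounded. Coordinates and the radius are made up.

    import numpy as np
    from scipy.spatial import Voronoi

    AOI = 5.0
    self_idx = 0
    pts = np.array([[0, 0], [2, 1], [-1, 3], [4, -2], [6, 6], [-4, -1], [1, -4]])
    vor = Voronoi(pts)

    dist = np.linalg.norm(pts - pts[self_idx], axis=1)
    aoi_neighbors = {i for i in range(len(pts))
                     if i != self_idx and dist[i] <= AOI}

    enclosing, boundary = set(), set()
    for (a, b), ridge in zip(vor.ridge_points, vor.ridge_vertices):
        if self_idx not in (a, b):
            continue
        other = b if a == self_idx else a
        enclosing.add(other)               # shares a Voronoi edge with self
        # -1 marks a ridge vertex at infinity (an unbounded edge)
        outside = any(
            v == -1 or np.linalg.norm(vor.vertices[v] - pts[self_idx]) > AOI
            for v in ridge
        )
        if outside:
            boundary.add(other)            # its region reaches beyond the AOI

    print("AOI neighbors:      ", sorted(aoi_neighbors))
    print("enclosing neighbors:", sorted(enclosing))
    print("boundary neighbors: ", sorted(boundary))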
82
Procedure (JOIN)
  • 1) The joining node sends its coordinates to any
    existing node
  • The join request is forwarded to the acceptor
  • 2) The acceptor sends back its own neighbor list
  • The joining node connects with the other nodes on
    the list

(Figure: the joining node and the acceptor's region)
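
A rough, simplified sketch of the JOIN steps above. Forwarding here is greedy by Euclidean distance, a stand-in for "forward to the acceptor whose region contains the joining point"; all node IDs, positions, and neighbor sets are made up.

    import math

    # node id -> (position, set of neighbor ids)
    world = {
        1: ((0, 0), {2, 3}),
        2: ((5, 1), {1, 3}),
        3: ((2, 6), {1, 2}),
    }

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def find_acceptor(entry, pos):
        """Forward toward the node closest to pos until no neighbor is closer."""
        current = entry
        while True:
            pos_c, nbrs = world[current]
            closer = min(nbrs, key=lambda n: dist(world[n][0], pos),
                         default=current)
            if dist(world[closer][0], pos) >= dist(pos_c, pos):
                return current
            current = closer

    def join(new_id, pos, entry_node):
        acceptor = find_acceptor(entry_node, pos)        # step 1
        neighbor_list = world[acceptor][1] | {acceptor}  # step 2: acceptor replies
        world[new_id] = (pos, set(neighbor_list))        # connect to listed nodes
        for n in neighbor_list:
            world[n][1].add(new_id)
        return acceptor

    print(join(4, (4, 5), entry_node=1))                 # acceptor is node 3 here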
83
Procedure (MOVE)
  • 1) Positions are sent to all neighbors; messages
    to boundary neighbors (B.N.) are marked
  • Each B.N. checks for overlaps between the mover's
    AOI and its enclosing neighbors (E.N.)
  • 2) Connect to new nodes upon notification by a
    B.N.

(Figure: boundary neighbors and the new neighbors)
84
Procedure (LEAVE)
  • 1) Simply disconnect
  • 2) The others then update their Voronoi diagrams
  • A new B.N. is discovered via existing B.N.s

(Figure: the leaving node, which is also a B.N., and
the new boundary neighbor)
85
  • Q&A