Title: Infrastructures and Architectures
1 Infrastructures and Architectures
- Distributed Computing
- Spring 2007
2 Focus
- Christophe Diot (Sprint ATL) and Laurent Gautier (INRIA), "A Distributed Architecture for Multiplayer Interactive Applications on the Internet", IEEE Network, 1999 (MiMaze)
- Rajesh Krishna Balan (CMU), Maria Ebling, Paul Castro, and Archan Misra (IBM), "Matrix: Adaptive Middleware for Distributed Multiplayer Games", Middleware 2005 (Matrix)
3 Organization
- Compare aims and contents of the papers
- Motivations
- Identify takeaways from the two papers
- Points to focus on during the discussion
- Discuss separately the concepts in the papers
- Results and Evaluations
5 Peer-to-Peer vs. Client-Server Architectures
- P2P: Robustness
  - Players fail independently
  - There is no server to act as a single point of failure
- P2P: Scalability
  - Server capacity is not a bottleneck
  - Server costs with scaling do not come into the picture
- P2P: The amount of data is large, since each client may have to talk to every other client; however, multicast takes care of a small part of the problem
- P2P: Theoretically reduced network delay, since there is no intervening server between the clients' communication
- CS: Can have multiple servers and dynamic server provisioning
- CS: Costs are involved when addressing scalability
- CS: The amount of data in the network is reduced, since the server notifies each client of the state and each client talks only to the server
- CS: Due to the localized presence of servers, network delay may be reduced to some extent
6 Peer-to-Peer vs. Client-Server Architectures
- P2P: Synchronization is tougher to achieve; in MiMaze this issue is addressed by one algorithm (of the several existing)
- P2P: Accounting is tough; not addressed in MiMaze
- P2P: Cheating is easier; MiMaze doesn't address this
- P2P: The game is closely linked with the infrastructure
- P2P: Does not handle hotspots well (consistency is harder)
- CS: The architecture introduces natural global / local consistency at the servers for synchronization issues
- CS: The cost model and accounting are simpler
- CS: Cheating is not easy
- CS: The game can be separated from the infrastructure; CS (Matrix) can address this
7 MiMaze Communication Architecture
8 MiMaze Concerns and Solutions
- Network delay; dynamic tree (players can join / leave anytime); distributed system architecture for scalability and robustness
- Continuity: real-time behaviour of game objects (avatars) in the face of packet losses and delays
- USP: distributed synchronization
- Multicast distributed tree (based on the MBone)
- Dead reckoning
- Bucket synchronization
9 Bucket Synchronization
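This slide's figure illustrated the mechanism; below is a minimal Python sketch of bucket synchronization. The 40 ms bucket length matches the game period given on the Experiments slide; the 100 ms playout delay and the data structures are assumptions for illustration, not MiMaze's exact implementation.

```python
# Minimal sketch of bucket synchronization. BUCKET_LEN_MS matches the 40 ms
# game period mentioned later in the deck; PLAYOUT_DELAY_MS is an assumed
# value chosen to absorb typical network delays (< 100 ms in the experiments).

BUCKET_LEN_MS = 40      # one game state computation per bucket
PLAYOUT_DELAY_MS = 100  # assumed fixed delay before an ADU is consumed

class BucketQueue:
    def __init__(self):
        self.buckets = {}   # bucket index -> list of ADUs

    def bucket_of(self, emission_time_ms):
        # An ADU issued at time t is played out in the bucket covering
        # t + PLAYOUT_DELAY_MS, so all receivers evaluate it at the same tick.
        return (emission_time_ms + PLAYOUT_DELAY_MS) // BUCKET_LEN_MS

    def insert(self, adu, emission_time_ms):
        self.buckets.setdefault(self.bucket_of(emission_time_ms), []).append(adu)

    def drain(self, local_time_ms):
        # Called once per bucket period: consume the ADUs scheduled for the
        # current bucket; ADUs arriving later than this are lost for the tick
        # and dead reckoning (next slide) fills the gap.
        return self.buckets.pop(local_time_ms // BUCKET_LEN_MS, [])
```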
10 Dead Reckoning
- Absence of an ADU for a particular avatar in the current bucket (due to loss / delay)
- Go back to previous buckets and collect the previous state of the avatar
- Act on it (extrapolation); see the sketch below
11 Experiments
- Tested with MiMaze on the MBone with up to 25 players
- 1600 traces of 15-20 minutes each were collected
- Average network delays were always less than 100 ms
- Metric: drift distance, the absolute value of the distance between the position of an avatar as displayed by its local entity and the position of the same avatar as displayed by a remote entity. The game is consistent if the drift distance is zero, but a tolerance of shifts in the drift distance corresponding to the avatar radius is assumed.
- Avatar radius is 32 units; speed is 32 units per 40 ms. Errors up to 50 units on a moving avatar are not significant (see the sketch below).
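The drift-distance metric and its tolerance translate directly into code; this sketch only restates the slide's definition (32-unit avatar radius, 50-unit tolerance for a moving avatar).

```python
# Drift distance as defined on this slide: the distance between an avatar's
# position as displayed by its local entity and as displayed by a remote entity.
import math

AVATAR_RADIUS = 32       # units
MOVING_TOLERANCE = 50    # units; errors up to 50 units on a moving avatar
                         # are considered not significant

def drift_distance(local_pos, remote_pos):
    return math.dist(local_pos, remote_pos)

def is_consistent(local_pos, remote_pos, moving=True):
    tolerance = MOVING_TOLERANCE if moving else AVATAR_RADIUS
    return drift_distance(local_pos, remote_pos) <= tolerance
```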
12 Informal Analysis
- Since neither the number of players (max 25) nor the network delay (the average was always less than 100 ms) affected the quality of the game, giving preference to interactivity at the expense of consistency is a good choice
- Participant disconnection and NTP synchronization failures affected only the victim of the problem and not the other participants, thanks to the distributed architecture
13 Results and Conjectures
- Delay distribution between 2 nodes (longest path in the tree) on a region of the MBone
- More than 15% of the ADUs experience network delays higher than 100 ms
- Standard deviation is 50.44 ms; mean delay is 55.47 ms, very close to the average delay measured during the experiments
- Reducing the standard deviation would reduce the number of late ADUs (?)
14 Delay Distribution and Clock
- Figure: (a) percentage and (b) distribution of late/lost ADUs on the MBone
15 Bucket Synchronization Efficiency: Consistency in MiMaze (Drift)
- Drift is less than 50 units 97% of the time
- It is less than 20 units in 85% of the buckets
- In 65% of the cases, remote entities display the exact position of the avatar
16 Bucket Synchronization Efficiency: Impact of the Synchronization Mechanism
- Synchronization reduces the drift for long delays (because it introduces another delay (?))
- Synchronization does not have a great advantage in high-loss conditions
- Synchronization reduces consistency in a no-loss, low-delay environment (maybe because of clock synchronization and the playout delays)
17 Discussion
- "Only" 65% of buckets deliver the exact position of a given avatar. At the same time, players were very satisfied during the entire game session. This indicates that this type of application is more tolerant to network impairments than the numerical observations would tend to show.
- Parameters that could also have been involved: characteristics of the avatar (a slow avatar is more sensitive to error), game nature (no terrain limits implies difficulty in dead reckoning; constrained trajectories make it easier)
- This deliberate lack of precision (due to the unreliability of the architecture) allows more scalability, provides real-time interaction between participants, and does not alter participants' satisfaction
18 Takeaways
- Relaxed reliability is tolerated
- No study of scalability (today's games have much larger participation)
- Would an increase in computational time at a node, due to more complex game semantics and a more complex dead-reckoning algorithm, affect consistency and the bucket synchronization parameters?
19 Matrix - Pitch
- Static partitioning schemes while provisioning servers for the client base do not help
- Tradeoff between client response latency and consistency (the larger the user base and network topology, the longer it takes to maintain consistency). But consistency is also linked to user satisfaction.
- MMOGs today are nearly decomposable systems (the number of interactions among subsystems, in some geometric space, is of a lower order of magnitude than the number of interactions within an individual subsystem)
- Radius or zone of visibility for a game player (cf. a MiMaze game object)
20 Promises: Middleware Architecture
- Scalable
- Low latency (both local and occasional global communication)
- Provides pockets of local consistency
- Claims an easy-to-use API which requires minimal changes to existing MMOGs
- Can handle transient hot spots and dynamic loads (load spikes)
- Needs no change in the security model (P2P would lower the ability to handle cheating and denial of service)
- Uses the preferred client-server architecture
- Supports multiple gaming platforms
21 Providing Local Consistency
- Assignment of partitions of the MMOG's spatial map among game servers
- Consistency set: updates to any point in the spatial map handled by server 1 that lies in server 2's radius of visibility (peripheral points) must be applied consistently to both servers (this happens via the parent Matrix servers); see the sketch below
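A sketch of determining a point's consistency set, assuming rectangular server partitions and a circular radius of visibility; the data layout is hypothetical, not the Matrix API.

```python
# Hypothetical partition layout: {server_id: (xmin, ymin, xmax, ymax)}.
# A server belongs to the consistency set of a point if its partition lies
# within the radius of visibility around that point (a "peripheral point").

def dist_point_to_rect(px, py, rect):
    xmin, ymin, xmax, ymax = rect
    dx = max(xmin - px, 0, px - xmax)   # 0 if the point is inside horizontally
    dy = max(ymin - py, 0, py - ymax)   # 0 if the point is inside vertically
    return (dx * dx + dy * dy) ** 0.5

def consistency_set(point, partitions, visibility_radius):
    px, py = point
    return {sid for sid, rect in partitions.items()
            if dist_point_to_rect(px, py, rect) <= visibility_radius}
```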
22 Consistency, contd.
- If the radius of visibility is small compared to the size of a partition, updates are restricted to the game server; if it is infinite, global updates are required
- Overlap regions: groups of points which have a non-empty consistency set. This information is maintained at the Matrix servers.
23 Architecture
- Clients are mobile, so they must be able to switch game servers transparently (handled by Matrix servers and Matrix coordinators)
- Users must always be identified by globally unique IDs
- Game servers are on the same machine as their parent Matrix servers
- All client packets are spatially tagged by game servers and sent to Matrix servers
- If a server is overloaded, the Matrix server splits the game world between the overloaded game server and a new game server, and forwards game-specific state and clients to the new game server via Matrix. The old Matrix server becomes the parent of the new Matrix server.
- Each Matrix server maintains an overlap table of the regions of overlap between the space it manages and that of its Matrix peers. This information is sent by the Matrix Coordinator.
- Space split: "split to left" convention (see the sketch below)
- The game state corresponding to clients is minimal in current games and can be transferred efficiently. The large state regarding map textures is static and can be pre-cached on all new servers.
- Servers that are no longer needed when the client base shrinks / moves are reclaimed
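The split step could look roughly like the following sketch. The slide only names a "split to left" convention, so the exact rule is assumed here: the overloaded region is halved along its wider axis and the left (or lower) half goes to the new game server.

```python
# Assumed interpretation of the "split to left" convention: halve the
# overloaded server's rectangle along its wider axis; the left/lower half is
# handed to the newly provisioned game server, the rest stays with the old one.

def split_region(rect):
    xmin, ymin, xmax, ymax = rect
    if (xmax - xmin) >= (ymax - ymin):
        mid = (xmin + xmax) / 2
        return (xmin, ymin, mid, ymax), (mid, ymin, xmax, ymax)
    mid = (ymin + ymax) / 2
    return (xmin, ymin, xmax, mid), (xmin, mid, xmax, ymax)

def handle_overload(old_rect):
    new_rect, remaining_rect = split_region(old_rect)
    # Clients whose coordinates fall in new_rect and their (small) per-client
    # game state are forwarded to the new game server via Matrix; static map
    # textures are pre-cached there, per the slide.
    return new_rect, remaining_rect
```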
24 Architecture, contd.
- The Matrix Coordinator (MC) calculates overlap tables using geometric algorithms for computing bounding boxes between spatial regions (see the sketch below)
- This avoids making the MC a bottleneck: each Matrix server usually does an O(1) lookup to determine the consistency set of the point in consideration. Only in the case of non-proximal interaction is the MC contacted regarding the consistency set; otherwise the MC is contacted only during splits and reclamations, which are infrequent. Mainly packet forwarding is latency-critical, and the MC need not be contacted for this most of the time.
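One plausible reading of the bounding-box computation is sketched below: two servers have overlapping regions (a non-empty set of peripheral points) if one partition's bounding box, grown by the radius of visibility, intersects the other's. This is an assumption for illustration, not the paper's exact algorithm.

```python
# Sketch of an overlap-table computation at the Matrix Coordinator, assuming
# rectangular partitions (xmin, ymin, xmax, ymax) per game server.

def grow(rect, r):
    xmin, ymin, xmax, ymax = rect
    return (xmin - r, ymin - r, xmax + r, ymax + r)

def rects_intersect(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def overlap_table(partitions, visibility_radius):
    # partitions: {server_id: rect}; the result maps each server to the peers
    # it must stay consistent with for its peripheral points.
    table = {sid: set() for sid in partitions}
    for a, ra in partitions.items():
        for b, rb in partitions.items():
            if a != b and rects_intersect(grow(ra, visibility_radius), rb):
                table[a].add(b)
    return table
```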
25 Results
26 Other Results (not in this paper)
- Matrix overheads were acceptable
- Matrix scaling (via splitting and reclamation) was completely transparent and sustained performance levels
- Matrix scaling was limited by the maximum I/O capacity of individual servers