Model Checking Nash Equilibria in MAD Distributed Systems



1
Model Checking Nash Equilibria in MAD Distributed
Systems
Federico Mari, Igor Melatti, Ivano Salvo, Enrico Tronci
Dep. of Computer Science, University of Roma La Sapienza, Roma, Italy
Lorenzo Alvisi, Allen Clement, Harry Li
Dep. of Computer Science, University of Texas at Austin, Austin, Texas, USA
FMCAD 2008, Formal Methods in Computer Aided Design, Portland, OR, USA, November 17-20, 2008
2
SAD Distributed Systems
  • In a Single Administrative Domain (SAD)
    Distributed System all nodes belong to the same
    administrative domain.

3
MAD Distributed Systems
  • In a Multiple Administrative Domain (MAD)
    Distributed System each node owns its resources.

4
Examples of MAD Systems
Internet Routing (e.g., each router is an administrative domain)
Wireless Mesh Routing (e.g., each node is an administrative domain)
File Distribution (e.g., each PC is an administrative domain)
Cooperative Backup (e.g., each PC is an administrative domain)
Archival Storage (e.g., each PC is an administrative domain; see e.g. http://www.oracorp.com/Research/p2pStorage.html)
5
Node Behaviours in SAD Systems
  • Altruistic (or correct, or obedient) nodes, that is, nodes that faithfully follow the proposed protocol.
  • Byzantine nodes, that is, nodes that may arbitrarily deviate from the proposed protocol, for example because of hardware failures, software failures, or malicious attacks.
6
SAD Correctness
  • A protocol P for a SAD distributed system S is expected to tolerate up to f Byzantine nodes. Thus, correctness for SAD systems is typically a statement of the form:
  • Protocol P for system S satisfies property φ as long as there are no more than f Byzantine nodes in S.

7
Node Behaviours in MAD Systems
  • Altruistic nodes (as in SAD).
  • Byzantine nodes (as in SAD).
  • Rational (or selfish) nodes, that is, nodes whose administrators are selfishly intent on maximizing their own benefits from participating in the system.
  • Rational nodes may arbitrarily change the protocol if that is to their advantage. In particular, rational nodes may change their hardware or software if that is to their advantage.
8
MAD Correctness (1)
  • Problem: In a MAD system any node may behave selfishly. This rules out the classical approach of showing that a given property holds when there are no more than f Byzantine nodes.
  • Solution: Show BAR (Byzantine, Altruistic, Rational) tolerance. Namely, a protocol is BAR tolerant if it guarantees the desired property despite the presence of Byzantine and rational players.
9
MAD Correctness (2)
  • It is sufficient to show the following:
  • 1. Show correctness when there are only Byzantine and altruistic players.
  • 2. Show that no rational node has an interest in deviating from the proposed protocol.
  • Note that:
  • Point 1 above is SAD correctness (and can be done using well-known model checking techniques).
  • Point 2 above amounts to showing that the proposed protocol is a Nash equilibrium ... in a suitable sense. This is our focus here.

10
Outline
  • Formal definition of Proposed Protocol and
    Mechanism.
  • Formal definition of Nash equilibrium for
    mechanisms.
  • Symbolic algorithm verifying that a given
    proposed protocol is a Nash equilibrium for a
    given mechanism.
  • Experimental results showing feasibility of
    proposed approach.

11
Mechanism
  • An n player mechanism M is a tuple ⟨S, I, A, T, B, h, β⟩ such that:
  • States S = ⟨S1, ..., Sn⟩
  • Initial states I = ⟨I1, ..., In⟩
  • Actions A = ⟨A1, ..., An⟩
  • Underlying (Byzantine) behaviour B = ⟨B1, ..., Bn⟩, with Bi : S × Ai × Si → Boole s.t.
  •   No deadlock: ∀s ∃ai ∃s'i s.t. Bi(s, ai, s'i)
  •   Deterministic: Bi(s, ai, s'i) ∧ Bi(s, ai, s''i) → (s'i = s''i)
  • Proposed Protocol T = ⟨T1, ..., Tn⟩, with Ti : S × Ai → Boole s.t.
  •   Realizability: Ti(s, ai) → ∃s'i Bi(s, ai, s'i)
  •   Nonblocking: ∀s ∃ai Ti(s, ai)
  • Reward h = ⟨h1, ..., hn⟩ with hi : S × A → ℝ
  • Discount β = ⟨β1, ..., βn⟩ with βi ∈ (0, 1)
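As a concrete (if naive) reading of the definition, the tuple can be encoded explicitly. The following sketch uses our own names and an explicit-state encoding, not the paper's symbolic one:

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

State = Tuple[int, ...]   # global state: one local state per agent
Action = Tuple[str, ...]  # joint action: one local action per agent

@dataclass
class Mechanism:
    n: int                            # number of agents
    states: Sequence[State]           # S
    initial: Sequence[State]          # I
    actions: Sequence[Sequence[str]]  # A_i: local actions of each agent
    # B_i(s, a_i, s_i'): underlying (Byzantine) behaviour of agent i
    B: Callable[[int, State, str, int], bool]
    # T_i(s, a_i): proposed protocol of agent i (each T_i move must be a B_i move)
    T: Callable[[int, State, str], bool]
    # h_i(s, a): reward of agent i
    h: Callable[[int, State, Action], float]
    beta: Sequence[float]             # discount factors, beta_i in (0, 1)
```

The no-deadlock, determinism, realizability, and nonblocking conditions are side constraints on B and T that a checker would have to verify separately.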

12
Mechanism Transition Relation
Let Z ⊆ {1, ..., n} be the set of Byzantine agents.

BTi(Z, s, ai, s'i) =
  Bi(s, ai, s'i)                (Byzantine agent, i ∈ Z)
  Bi(s, ai, s'i) ∧ Ti(s, ai)    (altruistic agent, i ∉ Z)

Agents move synchronously: BT(Z, s, a, s') = BT1(Z, s, a1, s'1) ∧ ... ∧ BTn(Z, s, an, s'n)
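The two cases above translate into a small explicit-state check. Here B[i] and T[i] are hypothetical set-based encodings of the relations (triples (s, a_i, s_i') and pairs (s, a_i) over global states), not the paper's BDD representation:

```python
def BT(B, T, Z, s, a, s_next):
    """BT(Z, s, a, s'): every agent's move must be allowed by B_i, and
    agents outside the Byzantine set Z must additionally follow T_i."""
    n = len(B)
    for i in range(n):
        if (s, a[i], s_next[i]) not in B[i]:       # B_i must hold for everyone
            return False
        if i not in Z and (s, a[i]) not in T[i]:   # altruistic agents also obey T_i
            return False
    return True
```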
13
Example of Mechanism
[Figure: an example mechanism for one agent, with local states 0, 1, 2 and actions sleep, work, gain, reset; the proposed protocol Ti permits a subset of the transitions allowed by the underlying behaviour Bi.]
14
Paths in Mechanisms
A path π in (M, Z) is a finite or infinite sequence π = s(0) a(0) s(1) a(1) ... s(t) a(t) s(t+1) ... s.t. BT(Z, s(t), a(t), s(t+1)) holds.
  • Let n = 2 and Z = {1}. An example of an (M, Z) path is
  • π = ⟨0, 0⟩ ⟨sleep, work⟩ ⟨2, 1⟩ ⟨reset, gain⟩ ⟨0, 0⟩ ⟨work, work⟩ ⟨1, 1⟩ ⟨gain, gain⟩ ⟨0, 0⟩

Value of path π for agent i: vi(π) = Σ_{t≥0} βi^t hi(s(t), a(t))

Value of π for agent 1: v1(π) = Σ_{t=0}^{3} β1^t h1(s(t), a(t)) = 1·0 + 0.5·0 + 0.25·(-1) + 0.125·4 = 0.25

Value of π for agent 2: v2(π) = Σ_{t=0}^{3} β2^t h2(s(t), a(t)) = 1·(-1) + 0.5·0 + 0.25·(-1) + 0.125·4 = -0.75
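The discounted sum is straightforward to compute. The reward streams below are our transcription of the numbers on the slide:

```python
def path_value(rewards, beta):
    """v_i(pi) = sum over t of beta**t * r(t), given agent i's
    reward stream r(t) = h_i(s(t), a(t)) along the path."""
    return sum((beta ** t) * r for t, r in enumerate(rewards))

# Agent 1 along the example path (rewards 0, 0, -1, 4 and beta_1 = 0.5):
v1 = path_value([0, 0, -1, 4], 0.5)   # 0.25, as on the slide
# Agent 2 along the same path (rewards -1, 0, -1, 4 and beta_2 = 0.5):
v2 = path_value([-1, 0, -1, 4], 0.5)  # -0.75
```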
15
Strategies
A strategy is a finite or infinite sequence σ of local actions for a given player. For example, σ = ⟨sleep, reset, work, gain⟩ is a strategy for player 1.

A strategy σ for player i agrees with (is associated to) a path π iff agent i's actions along π are exactly those of σ (for example, with n = 2 and Z = {1}, the strategy above agrees with the example path of the previous slide).
16
Value of a Strategy
The value of strategy σ in state s for player i, vi(Z, s, σ), is the minimum value over the paths (of the same length as σ) that agree with σ. That is:

vi(Z, s, σ) = min { vi(π) | π is an (M, Z) path that agrees with strategy σ of agent i }

In other words, we assume that all other players play against i (pessimistic view). Namely, they try to minimize i's gain.

For example, let σ = ⟨work, gain⟩. Then:
v1(∅, ⟨0, 0⟩, σ) = v1({1}, ⟨0, 0⟩, σ) = 1·(-1) + 0.5·4 = 1
v1({2}, ⟨0, 0⟩, σ) = v1({1, 2}, ⟨0, 0⟩, σ) = 1·(-1) + 0.5·0 = -1
17
Value of a State
The value of state s at horizon k for player i, vi^k(Z, s), is the value of the best strategy of length k for i starting at s. That is:

vi^k(Z, s) = max { vi(Z, s, σ) | σ is a strategy of length k for agent i }

For example:
v1^2(∅, ⟨0, 0⟩) = v1^2({1}, ⟨0, 0⟩) = 1·(-1) + 0.5·4 = 1 (witness ⟨work, gain⟩)
v1^2({2}, ⟨0, 0⟩) = 1·(-1) + 0.5·0 = -1 (witness ⟨work, gain⟩)
v1^2({1, 2}, ⟨0, 0⟩) = 1·0 + 0.5·0 = 0 (witness ⟨sleep, reset⟩)
18
Worst Case Value of a State
The worst case value of state s at horizon k for player i, ui^k(Z, s), is the value of the worst strategy of length k for i starting at s. That is:

ui^k(Z, s) = min { vi(Z, s, σ) | σ is a strategy of length k for agent i }

For example:
u1^2(∅, ⟨0, 0⟩) = 1·(-1) + 0.5·4 = 1 (witness ⟨work, gain⟩)
u1^2({1}, ⟨0, 0⟩) = 1·0 + 0.5·0 = 0 (witness ⟨sleep, reset⟩)
u1^2({2}, ⟨0, 0⟩) = u1^2({1, 2}, ⟨0, 0⟩) = 1·(-1) + 0.5·0 = -1 (witness ⟨work, gain⟩)

We omit the superscript k when k = ∞.
19
Computing Values of States
Proposition. The value vi^k(Z, s) of state s at horizon k for player i can be computed using a dynamic programming approach. Likewise, the worst case value ui^k(Z, s) of state s at horizon k for player i can be computed using a dynamic programming approach.
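The backward induction can be sketched explicitly. The transition structure below is a made-up toy, not the slide's example or the paper's symbolic algorithm: trans[s][a] lists the (reward, next state) outcomes of action a, and the other players resolve the remaining nondeterminism against the agent:

```python
def value(trans, beta, k, s, best=True):
    """v_i^k (best=True) or u_i^k (best=False) at state s: agent i picks its
    local action, while the other players resolve nondeterminism so as to
    minimize i's value (pessimistic view)."""
    if k == 0:
        return 0.0
    choices = []
    for a in trans[s]:
        # adversarial environment: worst outcome of this action for the agent
        worst = min(r + beta * value(trans, beta, k - 1, s2, best)
                    for r, s2 in trans[s][a])
        choices.append(worst)
    return max(choices) if best else min(choices)

# A made-up two-state example (not the slide's mechanism):
trans = {0: {"work": [(-1, 1)], "sleep": [(0, 0)]},
         1: {"gain": [(4, 0), (0, 0)]}}
```

Memoizing on (k, s) turns this recursion into the bottom-up dynamic program the proposition refers to.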
20
Nash
Intuitively, a mechanism M is ε-f-Nash if, as long as the number of Byzantine agents is no more than f, no rational agent has an interest greater than ε in deviating from the proposed protocol.

Pf(Q) = the set of subsets of Q of size at most f.
  • Definition. Let M be an n player mechanism, f ∈ {0, 1, ..., n} and ε > 0.
  • M is ε-f-Nash for player i if
  • ∀Z ∈ Pf({1, ..., n} - {i}) ∀s ∈ I, ui(Z, s) + ε ≥ vi(Z ∪ {i}, s)
  • M is ε-f-Nash if it is ε-f-Nash for each player i ∈ {1, ..., n}.
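The quantifiers in the definition translate directly into a brute-force check. Here u and v are hypothetical precomputed value tables indexed by (player, Byzantine set, state):

```python
from itertools import combinations

def is_eps_f_nash(n, f, eps, initial, u, v):
    """Direct check of the definition: for every player i, every Byzantine
    set Z of size <= f with i not in Z, and every initial state s,
    u_i(Z, s) + eps >= v_i(Z union {i}, s) must hold."""
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(f + 1):
            for Z in combinations(others, size):
                Zset = frozenset(Z)
                for s in initial:
                    if u[(i, Zset, s)] + eps < v[(i, Zset | {i}, s)]:
                        return False  # player i gains more than eps by deviating
    return True
```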

21
Finite and Infinite Paths
[Figure: an example mechanism for agent 1 with β1 = 0.5, local states 0 to 5, and transitions labelled action/reward: a/-1, b/0, c/1, d/3, e/-3, f/-1, g/-3, h/3; the proposed protocol Ti permits a subset of the transitions allowed by Bi.]

Strategy a d e d e ...:
u1^k(∅, 0) = -1 + 3/2 - 3/4 + 3/8 - ... = -1 + (3/2) Σ_{t=0,...,k-2} (-1/2)^t = (-1)^k / 2^{k-1}

v1^k({1}, 0) = 1 / 2^{k-1}, witnessed by strategy a (d e)^ω when k is even and by c (g h)^ω when k is odd.

Thus, if k is odd then u1^k(∅, 0) < v1^k({1}, 0), and if k is even then u1^k(∅, 0) = v1^k({1}, 0). Thus there is no k' > 0 s.t. for all k > k', u1^k(∅, 0) ≥ v1^k({1}, 0).

In other words, although the above mechanism is ε-0-Nash, there is no k' > 0 s.t. the ε-0-Nash property can be proved by only looking at finite prefixes of length at most k'.
22
Main Theorem
  • Let M be an n player mechanism, f ∈ {0, 1, ..., n}, ε > 0 and α > 0. Furthermore, for each agent i let:
  • Mi = max { |hi(s, a)| : s ∈ S and a ∈ A }
  • Ei(k) = βi^k Mi / (1 - βi)
  • Δi(k) = max { vi^k(Z ∪ {i}, s) - ui^k(Z, s) : s ∈ I and Z ∈ Pf({1, ..., n} - {i}) }
  • λ1(i, k) = Δi(k) - 2Ei(k)
  • λ2(i, k) = Δi(k) + 2Ei(k)
  • For each agent i, let ki be s.t. 4Ei(ki) < α. Then:
  • If for all i, ε - λ2(i, ki) > 0, then M is ε-f-Nash.
  • If for some i, 0 < ε ≤ λ1(i, ki), then M is not ε-f-Nash.
  • Otherwise, M is (ε + α)-f-Nash.

[Diagram: as ε ranges over the real line, the region ε ≤ λ1(i, k) gives "not ε-f-Nash", the region ε > λ2(i, k) gives "ε-f-Nash", and the interval between λ1(i, k) and λ2(i, k) gives the weaker verdict "(ε + α)-f-Nash".]

Proof idea: compute an upper bound on the error made by only considering paths of length up to k.
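The three-way case split can be phrased as a small decision routine. Here deltas[i] and E[i] stand for Δi(ki) and Ei(ki) computed elsewhere; this is a sketch of the theorem's statement, not of the paper's tool:

```python
def classify(eps, alpha, deltas, E):
    """Apply the theorem's case split. deltas[i] = Delta_i(k_i) and
    E[i] = E_i(k_i), where each k_i satisfies 4 * E[i] < alpha."""
    lam1 = [d - 2 * e for d, e in zip(deltas, E)]
    lam2 = [d + 2 * e for d, e in zip(deltas, E)]
    if any(0 < eps <= l1 for l1 in lam1):
        return "not eps-f-Nash"     # some player provably gains more than eps
    if all(eps - l2 > 0 for l2 in lam2):
        return "eps-f-Nash"         # no player gains more than eps
    return "(eps+alpha)-f-Nash"     # inconclusive within the error bound
```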
23
Symbolic Algorithm
  • for i = 1, ..., n do
  •   let k be s.t. 4Ei(k) < α
  •   let b = ⟨b1, ..., bn⟩ and Ci(b) ≡ (Σ_{j=1,...,n, j≠i} bj) ≤ f
  •   vi^0(b, s) := 0; ui^0(b, s) := 0
  •   for t = 1, ..., k do
  •     vi^t(b, s) := max_{ai ∈ Ai} min { hi(s, ⟨ai, a-i⟩) + βi vi^{t-1}(b, s') | BT(b[bi := 1], s, ⟨ai, a-i⟩, s') ∧ Ci(b), a-i ∈ A-i }
  •     ui^t(b, s) := min_{ai ∈ Ai} min { hi(s, ⟨ai, a-i⟩) + βi ui^{t-1}(b, s') | BT(b[bi := 0], s, ⟨ai, a-i⟩, s') ∧ Ci(b), a-i ∈ A-i }
  •   Δi := max { vi^k(b, s) - ui^k(b, s) | Init(s) ∧ Ci(b) }
  •   λ1(i) := Δi - 2Ei(k); λ2(i) := Δi + 2Ei(k)
  •   if (ε < λ1(i)) return FAIL
  • if (for all i, λ2(i) < ε) return (PASS with ε)
  • else return (PASS with ε + α)
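The first step of the loop, choosing k with 4·Ei(k) < α, can be done by simple iteration, assuming Ei(k) = βi^k · Mi / (1 - βi) as in the theorem:

```python
def horizon(beta_i, M_i, alpha):
    """Smallest k with 4 * E_i(k) < alpha,
    where E_i(k) = beta_i**k * M_i / (1 - beta_i)."""
    k = 0
    while 4 * (beta_i ** k) * M_i / (1 - beta_i) >= alpha:
        k += 1
    return k
```

Since Ei(k) shrinks geometrically in k, the loop terminates for any βi in (0, 1), and k grows only logarithmically as α decreases.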

24
Experimental Results (1)
Agent 1 task sequence: 0, 1, ..., q-1
Agent 2 task sequence: 1, 2, 3, ..., q-1, 0
Agent 3 task sequence: 2, 3, 4, ..., q-1, 0, 1

An agent incurs a cost by working towards the completion of its currently assigned task. Once an agent has completed a task, it waits for its reward (if any) before it considers working on the next task in its sequence. As soon as an agent receives its reward, it moves on to the next task in its list.

A job is completed if, for each task it needs, there exists at least one agent that has completed that task. In that case, each such agent receives a reward. Note that even if two or more agents have completed the same task, all of them get a reward.
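The rotated task sequences above can be generated with one line; a sketch using 0-based agent indices (the slide numbers agents from 1):

```python
def task_sequence(agent, q):
    """Task list of an agent: agent j (0-based) starts at task j and
    proceeds round-robin through the q tasks."""
    return [(agent + t) % q for t in range(q)]
```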
25
Experimental Results on a 64-bit Dual Quad Core
3GHz Intel Xeon Linux PC with 8GB of RAM
Agents  Tasks  Byz  Nash  CPU (sec)  Mem (MB)  Max BDD    Present state bits  Action bits
5       2      1    Pass  14.9       79.9      5.88e+05   15                  5
5       2      2    Fail  23.9       74.1      5.42e+05   15                  5
6       2      2    Pass  209        81.8      6.15e+05   18                  6
6       2      3    Fail  299        86        4.33e+05   18                  6
6       3      3    Pass  1,680      277       5.62e+06   24                  6
6       3      4    Fail  1,140      217       5.85e+05   24                  6
7       3      3    Pass  19,100     1,850     2.29e+07   28                  7
7       3      4    Fail  22,200     2,280     5.64e+07   28                  7
8       3      2    Pass  80,300     4,660     5.53e+07   32                  8
8       3      3    N/A   >127,000   >8,000    >3.45e+07  32                  8
26
Conclusions
  • We presented:
  • A formal definition of Proposed Protocol and Mechanism.
  • A formal definition of Nash equilibrium for mechanisms.
  • A symbolic algorithm verifying that a given proposed protocol is a Nash equilibrium for a given mechanism.
  • Experimental results showing the feasibility of our approach.

27
Thanks