Title: Combinatorial Agency, Michal Feldman, Hebrew University
1. Combinatorial Agency
Michal Feldman (Hebrew University)
- Joint with
- Moshe Babaioff (UC Berkeley)
- Noam Nisan (Hebrew University)
2. Hidden Actions
- Algorithmic Mechanism Design: computational mechanisms that handle private information.
- (Classical) Mechanism Design:
- Private Information
- Hidden Actions
- We study hidden actions in multi-agent computational settings.
3. Example: Quality of Service (QoS) Routing [FCSS05]
- The principal has some value from message delivery.
- Each agent controls an edge:
- it succeeds with low probability by default,
- it succeeds with high probability if the agent exerts costly effort.
- The message is delivered if there is a successful source-sink path.
- Effort is not observable, only the final outcome.
[Figure: a source-sink network in which each edge is controlled by an agent]
4. Modeling: the Principal-Agent Model
[Figure: a single agent facing the principal]
- If the agent exerts effort (cost c > 0), the project succeeds with high probability.
- If the agent does not exert effort (cost 0), the project succeeds with low probability.
- Goal: motivating rational agents to exert costly effort toward the welfare of the principal, when she cannot contract on the effort level, only on the final outcome.
- Success-contingent contract: the agent gets a high payment if the project succeeds and a low payment if it fails.
- Our focus is on multi-agent technologies.
5. Our Model
- n agents.
- Each agent has two actions (binary action):
- effort (a_i = 1), with cost c > 0 (c_i(1) = c),
- no effort (a_i = 0), with cost 0 (c_i(0) = 0).
- There are two possible outcomes (binary outcome):
- the project succeeds and the principal gets value v (her input parameter),
- the project fails and the principal gets value 0.
- A monotone technology function t maps an action profile to a success probability:
- t: {0,1}^n -> [0,1], where t(a_1, ..., a_n) is the success probability under (a_1, ..., a_n),
- for every i, t(1, a_{-i}) > t(0, a_{-i}) (monotonicity).
- The principal designs a contract for each agent (her design parameter, used to induce the desired equilibrium):
- if the project succeeds, agent i receives p_i (otherwise he gets 0).
- Players' utilities under action profile a = (a_1, ..., a_n) and value v:
- agent i: u_i(a) = t(a) p_i - c_i(a_i),
- principal: u(a, v) = t(a) (v - Σ_i p_i).
- The agents are in a game and reach a Nash equilibrium. (A code sketch of these payoffs follows.)
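The following is a minimal sketch of the payoff definitions above. The concrete AND technology and the parameter values (GAMMA, COST) are illustrative assumptions, not part of the model.

```python
# Minimal sketch of the model's payoffs; the AND technology and the parameters
# below are assumptions for illustration only.
GAMMA = 0.25   # assumed individual success probability without effort (1 - GAMMA with effort)
COST = 1.0     # assumed effort cost c

def t_and(a):
    """A monotone technology: an AND of the agents' individual tasks."""
    p = 1.0
    for ai in a:
        p *= (1 - GAMMA) if ai == 1 else GAMMA
    return p

def agent_utility(i, a, payments, t=t_and):
    """u_i(a) = t(a) * p_i - c_i(a_i)."""
    return t(a) * payments[i] - (COST if a[i] == 1 else 0.0)

def principal_utility(a, v, payments, t=t_and):
    """u(a, v) = t(a) * (v - sum_i p_i)."""
    return t(a) * (v - sum(payments))

print(agent_utility(0, (1, 1), [2.0, 2.0]), principal_utility((1, 1), 10.0, [2.0, 2.0]))
```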
6. Example: Read-Once Networks
- A graph with a given source and sink.
- Each agent controls an edge and independently succeeds or fails in his individual task (delivering on his edge):
- he succeeds with probability γ without effort,
- he succeeds with probability 1 - γ (> ½) with effort.
- The project succeeds if the successful edges form a source-sink path.
- Example (edge x1 in series with the parallel edges x2, x3), under a = (1, 1, 0):
t(1, 1, 0) = Pr[x1 ∧ (x2 ∨ x3) = 1] = (1 - γ)(1 - γ(1 - γ)),
since Pr[x1 = 1] = Pr[x2 = 1] = 1 - γ and Pr[x3 = 1] = γ. (A short numeric check follows.)
[Figure: the three-edge source-sink network]
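A small check of this example by direct enumeration; γ is a free parameter of the model, set here to an assumed 0.25.

```python
# Check of the read-once example above: x1 in series with (x2 parallel x3).
from itertools import product

GAMMA = 0.25  # assumed no-effort success probability

def edge_success_prob(ai):
    return 1 - GAMMA if ai == 1 else GAMMA

def t_network(a):
    """Success probability of the network x1 AND (x2 OR x3), by enumeration."""
    prob = 0.0
    for x in product([0, 1], repeat=3):              # realization of the three edges
        p = 1.0
        for xi, ai in zip(x, a):
            q = edge_success_prob(ai)
            p *= q if xi == 1 else (1 - q)
        if x[0] == 1 and (x[1] == 1 or x[2] == 1):   # a successful source-sink path exists
            prob += p
    return prob

assert abs(t_network((1, 1, 0)) - (1 - GAMMA) * (1 - GAMMA * (1 - GAMMA))) < 1e-12
```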
7. Nash Equilibrium
- Agent i's utility:
- if he exerts effort: u_i(1, a_{-i}) = p_i t(1, a_{-i}) - c,
- if he does not exert effort: u_i(0, a_{-i}) = p_i t(0, a_{-i}).
- The principal's best contract for inducing an equilibrium a = (a_1, ..., a_n):
- p_i = c / Δ_i(a_{-i}) for every agent i with a_i = 1, where Δ_i(a_{-i}) = t(1, a_{-i}) - t(0, a_{-i}),
- p_i = 0 for every agent i with a_i = 0. (A code sketch of this payment rule follows.)
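A sketch of the payment rule above; the AND technology and parameter values are the same illustrative assumptions as before.

```python
# The principal's cheapest payments that make a target profile a Nash equilibrium:
# p_i = c / Delta_i(a_{-i}) for contracted agents, 0 otherwise.
GAMMA, COST = 0.25, 1.0

def t_and(a):
    p = 1.0
    for ai in a:
        p *= (1 - GAMMA) if ai == 1 else GAMMA
    return p

def best_payments(a, t=t_and, c=COST):
    payments = []
    for i, ai in enumerate(a):
        if ai == 0:
            payments.append(0.0)                        # non-contracted agents get nothing
        else:
            a_one = a[:i] + (1,) + a[i + 1:]
            a_zero = a[:i] + (0,) + a[i + 1:]
            payments.append(c / (t(a_one) - t(a_zero)))  # c / Delta_i(a_{-i})
    return payments

print(best_payments((1, 1)))   # each agent is paid c / (t(1,1) - t(0,1))
```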
8. Optimal Contract
- The principal chooses a profile a*(v) that maximizes her optimal equilibrium utility
u(a, v) = t(a) (v - Σ_i p_i),
where t(a) is the probability of success and Σ_i p_i is the total payment.
9. Research Questions
- How does the technology affect the structure of the optimal contracts?
- Several examples (AND, OR, Majority)
- General technologies
- What is the damage to society due to the inability to monitor individual actions?
- Price of unaccountability
- What is the complexity of computing the optimal contract?
- Can the principal gain utility from mixed strategies?
- Can the principal gain utility from a priori removing edges from the graph?
10. Optimal Contracts: a simple AND technology
[Figure: two edges x1, x2 in series between s and t]
- 2 agents, γ = ¼, c = 1
- t(0,0) = γ² = (¼)² = 1/16
- t(1,0) = t(0,1) = γ(1 - γ) = 3/16
- Δ_0 = t(1,0) - t(0,0) = 3/16 - 1/16 = 1/8
- t(1,1) = (1 - γ)² = 9/16
- Δ_1 = t(1,1) - t(0,1) = 9/16 - 3/16 = 3/8
- Principal's utility:
- 0 agents exert effort: u((0,0), v) = t(0,0) v = v/16
- 1 agent exerts effort: u((1,0), v) = t(1,0)(v - c/Δ_0) = (3/16)(v - 8) = (3/16)v - 3/2
- 2 agents exert effort: u((1,1), v) = t(1,1)(v - 2c/Δ_1) = 9v/16 - 3
- At value v = 6 there is a jump from 0 to 2 contracted agents (checked numerically below).
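A quick numeric check of the computation above, using exactly the parameters of this slide.

```python
# Sweep v around the transition of the 2-agent AND example (gamma = 1/4, c = 1):
# the optimal contract jumps from 0 to 2 contracted agents at v = 6.
GAMMA, COST = 0.25, 1.0

def t_and(a):
    p = 1.0
    for ai in a:
        p *= (1 - GAMMA) if ai == 1 else GAMMA
    return p

def principal_utility(a, v):
    total_payment = 0.0
    for i, ai in enumerate(a):
        if ai == 1:
            a_zero = a[:i] + (0,) + a[i + 1:]
            total_payment += COST / (t_and(a) - t_and(a_zero))   # c / Delta_i
    return t_and(a) * (v - total_payment)

for v in [5.9, 6.0, 6.1]:
    best = max([(0, 0), (1, 0), (1, 1)], key=lambda a: principal_utility(a, v))
    print(v, best, round(principal_utility(best, v), 4))
```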
11. Optimal Contract Transitions in AND and OR
[Figure: for the two-agent AND (series) and OR (parallel) networks, the number of contracted agents as a function of the value v and of γ]
12. Optimal Contract Transitions in AND and OR
- Theorem: For any AND technology, there is only one transition, from 0 to n contracted agents.
- Theorem: For any OR technology, there are always n transitions (any number of agents is optimal for some value). (See the numeric illustration below.)
- We characterize all technologies with 1 transition and with n transitions.
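A numeric illustration of the two theorems (not a proof), under assumed parameters γ = 0.25, c = 1 and n = 3.

```python
# Sweep v and record how many agents the optimal contract uses, for AND and for OR.
from itertools import product

GAMMA, COST, N = 0.25, 1.0, 3

def t_net(a, kind):
    succ = [(1 - GAMMA) if ai else GAMMA for ai in a]
    if kind == "AND":                        # all individual tasks must succeed
        p = 1.0
        for s in succ:
            p *= s
        return p
    p_fail = 1.0                             # OR: at least one task succeeds
    for s in succ:
        p_fail *= (1 - s)
    return 1 - p_fail

def opt_num_contracted(v, kind):
    best, best_u = 0, float("-inf")
    for a in product([0, 1], repeat=N):
        pay = 0.0
        for i, ai in enumerate(a):
            if ai:
                a0 = a[:i] + (0,) + a[i + 1:]
                pay += COST / (t_net(a, kind) - t_net(a0, kind))
        u = t_net(a, kind) * (v - pay)
        if u > best_u:
            best_u, best = u, sum(a)
    return best

for kind in ("AND", "OR"):
    orbit, v = [], 0.1
    while v < 5000:                          # coarse multiplicative sweep over values
        k = opt_num_contracted(v, kind)
        if not orbit or orbit[-1] != k:
            orbit.append(k)
        v *= 1.01
    print(kind, orbit)                       # AND: [0, 3]; OR: [0, 1, 2, 3]
```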
13. Proof Idea: AND's single transition
- Observation (monotonicity): the number of contracted agents is monotonically non-decreasing in v.
- Proof for AND's single transition:
- At the indifference value between 0 and n agents, contracting with any 0 < k < n agents yields lower utility.
- By the above observation, there is a single transition (at the 0/n indifference value).
14. Transitions in AND and OR
- Proof (AND): let k be the number of contracted agents and consider the principal's utility as a function of k.
- This function has a single minimum point, so it is maximized at one of the extremes, k = 0 or k = n.
15. Proof Idea: OR's n transitions
- Let v_k be the indifference point between k and k+1 agents (u(k, v_k) = u(k+1, v_k)).
- We show that for OR, v_{k+1} > v_k.
- This ensures that k is optimal from v_{k-1} to v_k.
[Figure: v_0, the 0/1 indifference value, is smaller than v_1, the 1/2 indifference value]
16. Transitions in AND and OR
- Let k be the number of contracted agents. Solve u(k, v) = u(k+1, v) for v, let v(k) be the solution, and show that v(k+1) > v(k).
[Figure: e.g., for n = 3, the values v(0) < v(1) < v(2)]
17. Majority, 5 agents
[Figure: optimal contracts for the majority technology with 5 agents]
18. General Technologies
- In general we need to know which agents exert effort in the optimal contract.
- Potentially, any subset of agents (out of the 2^n subsets) could be the set exerting effort in the optimal contract for some v.
- Which subsets can we actually get as an optimal contract?
19. And-of-Ors (AOO) Technology
- Example: the 2x2 AOO technology.
- Theorem: The optimal contract in any AOO network (with identical OR components) has the same number of agents in each OR component.
- Proof by induction, based on the following lemmas:
- Decomposition lemma: if S = T ∪ R is optimal for f = h ∧ g at some v, then T is optimal for h at v·t_g(R) and R is optimal for g at v·t_h(T).
- Component monotonicity lemma: the function v ↦ t_h(T) is monotone non-decreasing (and the same for v ↦ t_g(R)).
[Figure: the orbit of the 2x2 AOO technology as v grows: ∅, then {A1, B1}, then {A1, B1, A2, B2}]
20. Decomposition Lemma
If S = T ∪ R is optimal for f = h ∧ g at some v, then T is optimal for h at v·t_g(R) and R is optimal for g at v·t_h(T).
21. Component Monotonicity Lemma
The function v ↦ t_h(T) is monotone non-decreasing (and the same for v ↦ t_g(R)).
- Proof:
- Let S1 = T1 ∪ R1 be optimal at v1 and S2 = T2 ∪ R2 be optimal at v2, with v1 > v2.
- By the monotonicity lemma, f(S1) ≥ f(S2).
- Since f = g·h, f(S1) = h(T1)·g(R1) ≥ h(T2)·g(R2) = f(S2).
- Assume for contradiction that h(T1) < h(T2). Since h(T1)·g(R1) ≥ h(T2)·g(R2), we get g(R1) > g(R2).
- By the decomposition lemma, T1 is optimal for h at v1·g(R1) and T2 is optimal for h at v2·g(R2).
- As v1 > v2 and g(R1) > g(R2), T1 is optimal for h at a larger value than T2.
- Thus, by the monotonicity lemma, h(T1) ≥ h(T2), a contradiction.
[Figure: f is the conjunction of components h (agent sets T1, T2) and g (agent sets R1, R2)]
22. And-of-Ors
- Theorem: The optimal contract in any AOO network composed of n_c OR components (each of size n_l) contracts with the same number of agents in each OR component. Thus, |orbit(AOO)| ≤ n_l + 1.
- Proof by induction on n_c.
- Base case n_c = 2: assume (k1, k2) is optimal at some v, and assume by contradiction that k1 > k2 (wlog), so h(k1) > h(k2). By the decomposition lemma, k1 is optimal for h at v·h(k2) and k2 is optimal for h at v·h(k1) > v·h(k2); but if k2 is optimal at a larger value, then k2 ≥ k1, a contradiction.
23. And-of-Ors
[Figure: the AOO network with OR components h, the conjunction h2 of the first two components, and contracted counts k1, k2, k3]
- Induction step: assume the claim holds for any smaller number of OR components.
- Assume the 1st component has k1 contracted agents.
- Let g be the conjunction of the other (n_c - 1) components.
- By the decomposition lemma, the contract on g is optimal at v·h(k1); thus by the induction hypothesis it has the same number of agents, k2, in each OR component.
- Let h2 be the conjunction of the first two components.
- By the decomposition lemma, the contract on h2 is optimal for some value, and by the induction hypothesis it has the same number of agents, k3, in each of its components.
- We get k1 = k3 (in the first component k1 agents are contracted) and k2 = k3 (in the second component k2 agents are contracted), thus k1 = k2.
24. The Collection of Optimal Contracts
- Given t, we wish to understand how the optimal contract changes with v (the orbit).
- Monotonicity Lemma: the optimal contract's success probability t(a*(v)) is monotone non-decreasing in v.
- So are the utility of the principal and the total payment.
- Thus, there are at most 2^n - 1 changes to the optimal contract (|Orbit(t)| ≤ 2^n).
- Is there a structure on the collection of optimal contracts of t?
25. The Collection of Optimal Contracts
- Observation 1: in the observable-actions case, only one set of each size k can be optimal (the set with the highest probability of success).
- Observation 2: not all 2^n subsets can be obtained.
- Only a single set of size 1 can be optimal (the set with the highest probability of success).
- Can a technology have exponentially many different optimal contracts?
- Theorem: there exists a technology with exponentially many optimal contracts.
- Open question 1: is there a read-once network with an exponential number of optimal contracts?
26. Exponential Number of Optimal Contracts (1)
- Theorem: there exists a technology with exponentially many optimal contracts.
- Proof sketch:
- Lemma 1: all k-size sets in any k-admissible collection can be obtained as optimal contracts of some t.
- Lemma 2: for any k, there exists a large k-admissible collection of k-size sets (based on an error-correcting code).
- Lemma 3: for k = n/2 this collection has exponential size, as required.
- (A k-admissible collection is a collection of sets of size k in which every two sets differ by at least two elements.)
27. Proof of Lemma 1
- Construct t so that, for every set S in the collection, t(S) = ½ - ε_S and t(S\i) = ½ - 2ε_S for every i ∈ S.
- Thus the marginal contribution of i ∈ S is t(S) - t(S\i) = ε_S.
- Claim: at v_S = (c·k) / (2ε_S²), the set S is optimal:
- S is better than any other set in the collection (by the derivative of u(S, v)),
- S is better than any other set not in the collection (too high payments).
28-29. Proof (k-orbit)
- Let Φ be an admissible collection of k-size sets, and let Z = {S : S ∈ Φ} ∪ {S\i : S ∈ Φ, i ∈ S}.
- For every S ∈ Φ, pick ε_S and set t(S) = ½ - ε_S and t(S\i) = ½ - 2ε_S.
- For sets T outside Z, define t(T) so that the marginal contribution of each agent is very small.
- Let v_S be the value at which S is chosen as the optimal contract.
[Figure: 3-size sets S1, ..., S4 with ε_S picked in (0.17, 0.2)]
30. Proof (cont'd)
- We need to show that S yields higher utility at v_S than any other set S'.
- If |S'| ≤ k - 1:
- if S' is not of the form S\i, then t(S') is tiny, so u(S', v_S) is small;
- if S' = S\i, then t(S') = ½ - 2ε_S < t(S) = ½ - ε_S.
- If |S'| ≥ k and S' is not in the collection: at least one agent is paid on the order of c/ε, so pick ε such that this payment exceeds v_S.
[Figure: the utilities u(S, v) of sets of sizes k and k - 1 around the values v_S]
31. Exponential Number of Optimal Contracts (1)
- (An admissible collection is one in which every two sets differ by at least two elements; the k-orbit is the collection of optimal sets of size exactly k.)
- Theorem: there exists a technology with exponentially many optimal contracts.
- Proof sketch (constructive):
- Lemma 1: any admissible collection can be obtained as the k-orbit of some t.
- Define t as follows:
- for every set S in the collection, pick ε_S and define t(S) = ½ - ε_S and t(S\i) = ½ - 2ε_S (thus the marginal contribution of i ∈ S is ε_S);
- for every set not in the collection, define t to ensure that the marginal contribution of each agent is very small.
- Claim: at v_S = (c·k) / (2ε_S²), the set S is optimal:
- S is better than any other set in the collection (by the derivative of u(S, v)),
- S is better than any other set not in the collection (too high payments).
32. Exponential Number of Optimal Contracts (2)
- Lemma: for any n ≥ k, there exists a large admissible collection of k-size sets.
- Proof: take an error-correcting code that corrects 1 error.
- Hamming distance ≥ 3 implies the collection is admissible.
- Known: there exist such codes with Ω(2^n / n) code words.
- Construct a code with sufficiently many k-weight words:
- XOR every code word with a random word r; each word then has weight k with probability C(n,k) / 2^n.
- The expected number of k-weight code words is therefore Ω(C(n,k) / n).
- There exists an r for which the expectation is achieved or exceeded. (A small illustration follows.)
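A small, self-contained illustration of this construction; the choice of the Hamming(7,4) code and k = 3 is an assumption made for the example (the lemma itself is asymptotic).

```python
# Build the Hamming(7,4) code (minimum distance 3), shift it by the best word r, and
# collect the codewords of weight k. Equal-weight words of a distance-3 code are at
# distance >= 4, i.e. every two resulting k-sets differ by at least two elements.
from itertools import product, combinations

N, K = 7, 3

def hamming_code(n=7):
    """All length-7 words whose parity checks vanish (columns of H = binary 1..7)."""
    H = [[(j >> b) & 1 for j in range(1, n + 1)] for b in range(3)]
    return [w for w in product([0, 1], repeat=n)
            if all(sum(h * x for h, x in zip(row, w)) % 2 == 0 for row in H)]

code = hamming_code()
best_sets = []
for r in product([0, 1], repeat=N):                      # try every shift r
    shifted = [tuple(w ^ ri for w, ri in zip(c, r)) for c in code]
    sets = [frozenset(i for i, bit in enumerate(w) if bit) for w in shifted]
    k_sets = [s for s in sets if len(s) == K]
    if len(k_sets) > len(best_sets):
        best_sets = k_sets

print(f"{len(best_sets)} admissible {K}-sets out of {len(code)} codewords")
assert all(len(a ^ b) >= 4 for a, b in combinations(best_sets, 2))
```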
33. Research Questions
- How does the technology affect the structure of the optimal contracts?
- What is the damage to society / to the principal due to the inability to monitor individual actions?
- Price of unaccountability
- What is the complexity of computing the optimal contract?
- Can the principal gain utility from mixed strategies?
- Can the principal gain utility from a priori removing edges from the graph?
34. Observable-Actions Benchmark (first best)
- Actions are observable.
- Payment: an agent that exerts effort is paid his cost (c).
- Principal's utility: u(a, v) = v·t(a) - Σ_{i : a_i = 1} c.
- Principal's utility = social welfare sw(a, v).
- The principal chooses a*_OA, the profile with maximum social welfare. (A short sketch follows.)
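A minimal sketch of the first-best choice, under the same illustrative parameters as before (2-agent AND, γ = 1/4, c = 1).

```python
# With observable actions the principal simply maximizes social welfare
# t(a) * v - c * (number of agents exerting effort).
from itertools import product

GAMMA, COST, N = 0.25, 1.0, 2

def t_and(a):
    p = 1.0
    for ai in a:
        p *= (1 - GAMMA) if ai == 1 else GAMMA
    return p

def first_best(v):
    return max(product([0, 1], repeat=N),
               key=lambda a: t_and(a) * v - COST * sum(a))

print(first_best(3.0), first_best(5.0))   # 0 agents below the welfare threshold, 2 above
```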
35. Social Price of Unaccountability
- Definition: The Social Price of Unaccountability (POU_S) of a technology is the worst ratio (over v) between the social welfare in the observable-actions case and the social welfare in the hidden-actions case:
POU_S(t) = sup_v sw(a*_OA(v), v) / sw(a*(v), v),
where a*(v) is the optimal contract for v in the hidden-actions case and a*_OA(v) is the optimal contract for v in the observable-actions case.
- Example: AND of 2 agents. (A numeric sketch follows.)
[Figure: for the two-agent AND network, the transition from 0 to 2 contracted agents happens at a lower value of v under observable actions than under hidden actions]
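A numeric sketch of POU_S for this example, reusing the assumed parameters of slide 10 (γ = 1/4, c = 1).

```python
# The worst ratio for this technology is 11/3, approached as v tends to the
# hidden-action transition value v = 6 from below.
GAMMA, COST = 0.25, 1.0
PROFILES = [(0, 0), (1, 0), (0, 1), (1, 1)]

def t_and(a):
    return ((1 - GAMMA) if a[0] else GAMMA) * ((1 - GAMMA) if a[1] else GAMMA)

def welfare(a, v):
    return t_and(a) * v - COST * sum(a)

def hidden_action_welfare(v):
    """Welfare of the contract the principal picks when actions are hidden."""
    def principal_utility(a):
        pay = sum(COST / (t_and(a) - t_and(a[:i] + (0,) + a[i + 1:]))
                  for i, ai in enumerate(a) if ai)
        return t_and(a) * (v - pay)
    return welfare(max(PROFILES, key=principal_utility), v)

ratios = [(max(welfare(a, v) for a in PROFILES) / hidden_action_welfare(v), v)
          for v in [4 + 0.001 * i for i in range(2000)]]
print(max(ratios))   # ratio close to 11/3, at the top of the sweep just below v = 6
```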
36. Principal's Price of Unaccountability
- Definition: The Principal's Price of Unaccountability (POU_P) of a technology is the worst ratio (over v) between the principal's utility in the observable-actions case and the principal's utility in the hidden-actions case:
POU_P(t) = sup_v u(a*_OA(v), v) / u(a*(v), v),
where a*(v) is the optimal contract for v in the hidden-actions case and a*_OA(v) is the optimal contract for v in the observable-actions case.
37. Price of Unaccountability: Results
- Theorem: The POU of the AND technology is
- unbounded for any fixed n ≥ 2, when γ → 0,
- unbounded for any fixed γ, as n grows.
- Theorem: The POU of the OR technology is bounded by 2.5 for any n.
38. Research Questions
- How does the technology affect the structure of the optimal contracts?
- What is the damage to society due to the inability to monitor individual actions?
- Price of unaccountability
- What is the complexity of computing the optimal contract?
- Can the principal gain utility from mixed strategies?
- Can the principal gain utility from a priori removing edges from the graph?
39. Complexity of Finding the Optimal Contract
- Input: value v, description of t.
- Output: optimal contract (a, p).
- Theorem: There exists a polynomial-time algorithm to compute (a, p) if t is given by a table (exponential input). (A brute-force sketch follows.)
- Theorem: If t is given by a black box, exponentially many queries may be required to find (a, p).
- Theorem: For read-once networks, the optimal contract problem is #P-hard (under Turing reductions); the proof is by reduction from the network reliability problem.
- Open problem 3: is it polynomial for series-parallel networks?
- Open problem 4: does it have a good approximation?
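A brute-force sketch consistent with the table case: check every profile and pay c/Δ_i to the contracted agents, which is polynomial in the size of the table. The 2-agent OR example and its parameters are assumptions for illustration.

```python
from itertools import product

def optimal_contract(t, n, v, c=1.0):
    """t: dict mapping action profiles (tuples of 0/1) to success probabilities."""
    best = None
    for a in product([0, 1], repeat=n):
        payments = []
        for i, ai in enumerate(a):
            if ai == 0:
                payments.append(0.0)
            else:
                delta_i = t[a] - t[a[:i] + (0,) + a[i + 1:]]   # marginal contribution
                payments.append(c / delta_i)
        utility = t[a] * (v - sum(payments))
        if best is None or utility > best[0]:
            best = (utility, a, payments)
    return best

# Example: a 2-agent OR technology with gamma = 0.25 (assumed), at v = 20.
gamma = 0.25
t = {a: 1 - (gamma if a[0] else 1 - gamma) * (gamma if a[1] else 1 - gamma)
     for a in product([0, 1], repeat=2)}
print(optimal_contract(t, 2, 20.0))
```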
40. Complexity of Finding the Optimal Contract
- Input: value v, description of t.
- Output: optimal contract (a, p).
- Theorem: There exists a polynomial-time algorithm to compute (a, p) if t is given by a table (exponential input).
- Theorem: If t is given by a black box, exponentially many queries may be required to find (a, p).
- Proof: for an appropriate value v, a hidden set S of size k = n/2 is the optimal contract.
- Any algorithm must query all sets of size k = n/2 to find S in the worst case.
41. Complexity of Finding the Optimal Contract
- Input: value v, description of t.
- Output: optimal contract (a, p).
- Theorem: For read-once networks, the optimal contract problem is #P-hard.
- Proof: reduction from the network reliability problem.
- Open problem 3: is it polynomial for series-parallel networks?
- Open problem 4: does it have a good approximation?
42. Best Contract Computation in Read-Once Networks
- Proof (sketch): an algorithm for this problem can be used to compute t(E), the probability of success of the network.
- Consider the network G with an additional edge x whose success parameter γ_x is close to ½; player x will enter the contract only at a very large value of v (only after all other agents are contracted). Call this value v_c.
- At v_c, the principal is indifferent between contracting E and E ∪ {x}, and this indifference condition yields t(E).
[Figure: the original network G extended with the extra edge x before the sink t]
43. Research Questions
- How does the technology affect the structure of the optimal contracts?
- What is the damage to society due to the inability to monitor individual actions?
- Price of unaccountability
- What is the complexity of computing the optimal contract?
- Can the principal gain utility from mixed strategies?
- Can the principal gain utility from a priori removing edges from the graph?
44. Mixed Strategies
Can mixed strategies help the principal? What is the price of purity?
- In the non-strategic case: NO (a mixed profile is a convex combination of pure profiles).
- What about the agency case?
- Extended game:
- q_i = probability that agent i exerts effort,
- t(q_i, q_{-i}) = q_i t(1, q_{-i}) + (1 - q_i) t(0, q_{-i}),
- marginal contribution: Δ_i(q_{-i}) = t(1, q_{-i}) - t(0, q_{-i}) > 0.
45. Nash Equilibrium in Mixed Strategies
- Claim: agent i's best response is to mix with probability q_i ∈ (0,1) only if she is indifferent between 0 and 1.
- Agent i's utility:
- high effort: u_i(1, q_{-i}) = p_i t(1, q_{-i}) - c_i,
- low effort: u_i(0, q_{-i}) = p_i t(0, q_{-i}).
- Principal's utility: u(q, v) = t(q)(v - Σ_i p_i).
46. Example: OR with Two Agents
- Optimal contract for v = 110:
- pure strategies: both agents contracted, u = 88.12...
- mixed strategies: q1 = q2 = 0.96..., u = 88.24...
- Two observations:
- q1 = q2 in the optimal contract,
- the principal's utility is improved, but only slightly.
- How general are these observations? (A numeric check of this example follows.)
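A numeric check of this example. The slide does not state γ and c; γ = 1/4 and c = 1 are assumed here, and with these values the quoted numbers are reproduced.

```python
GAMMA, COST, V = 0.25, 1.0, 110.0

def t_or(q1, q2):
    """Success probability when agent i exerts effort with probability qi."""
    s1 = q1 * (1 - GAMMA) + (1 - q1) * GAMMA   # individual success probabilities
    s2 = q2 * (1 - GAMMA) + (1 - q2) * GAMMA
    return 1 - (1 - s1) * (1 - s2)

def principal_utility(q1, q2):
    """Indifference payments p_i = c / (t(1, q_-i) - t(0, q_-i)), paid on success."""
    p1 = COST / (t_or(1, q2) - t_or(0, q2))
    p2 = COST / (t_or(q1, 1) - t_or(q1, 0))
    return t_or(q1, q2) * (V - p1 - p2)

print(principal_utility(1, 1))                       # pure strategies: 88.125 (88.12...)
best_q = max((i / 1000 for i in range(1, 1001)),
             key=lambda q: principal_utility(q, q))
print(best_q, principal_utility(best_q, best_q))     # mixed: q ~ 0.96, u ~ 88.246 (88.24...)
```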
47. Optimal Contract in OR Technology
- Lemma: For any anonymous OR (any γ, n, c, v), in the optimal mixed contract k ∈ {0, 1, ..., n} agents exert effort with equal probabilities q_1 = ... = q_k ∈ (0, 1], and the remaining n - k agents shirk; i.e., the optimal profile is (0^{n-k}, q^k).
- Proof (sketch): suppose by contradiction that a profile (q_i, q_j, q_{-ij}) with q_i, q_j ∈ (0, 1) and q_i ≠ q_j is optimal.
- For a sufficiently small ε perturbation, the success probability increases and the total payments decrease, contradicting optimality.
[Figure: the perturbation of (q_i, q_j) in the q_i, q_j plane]
48. Optimal Contract in OR Technology
[Figure: example, OR with 2 agents]
49. Price of Purity (POP)
- Definition: POP is the ratio between the principal's utility under the optimal mixed contract and under the optimal pure contract.
50. Price of Purity
- Definition: technology t exhibits
- increasing returns to scale (IRS) if for any i and any b ≥ a: t(b_i, b_{-i}) - t(a_i, b_{-i}) ≥ t(b_i, a_{-i}) - t(a_i, a_{-i}),
- decreasing returns to scale (DRS) if for any i and any b ≥ a: t(b_i, b_{-i}) - t(a_i, b_{-i}) ≤ t(b_i, a_{-i}) - t(a_i, a_{-i}).
- Observation: AND exhibits IRS, OR exhibits DRS. (A small check follows.)
- Theorem: for any technology that exhibits IRS, the optimal contract is obtained in pure strategies (e.g., AND).
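A small check of the IRS/DRS observation on pure profiles, under assumed parameters (γ = 0.25, n = 3): AND's marginal contributions weakly grow with the others' effort, OR's weakly shrink.

```python
from itertools import product

GAMMA, N = 0.25, 3

def succ(ai):
    return 1 - GAMMA if ai else GAMMA

def t(a, kind):
    probs = [succ(ai) for ai in a]
    if kind == "AND":
        out = 1.0
        for p in probs:
            out *= p
        return out
    out = 1.0
    for p in probs:
        out *= 1 - p
    return 1 - out

def marginal(i, others, kind):
    """Marginal contribution of agent i given the others' pure profile."""
    hi = others[:i] + (1,) + others[i:]
    lo = others[:i] + (0,) + others[i:]
    return t(hi, kind) - t(lo, kind)

for kind, cmp_ok in (("AND", lambda d_b, d_a: d_b >= d_a - 1e-12),
                     ("OR", lambda d_b, d_a: d_b <= d_a + 1e-12)):
    ok = True
    for i in range(N):
        for a in product([0, 1], repeat=N - 1):
            for b in product([0, 1], repeat=N - 1):
                if all(bj >= aj for aj, bj in zip(a, b)):      # only compare b >= a
                    ok &= cmp_ok(marginal(i, b, kind), marginal(i, a, kind))
    print(kind, "IRS" if kind == "AND" else "DRS", "holds:", ok)
```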
51. Price of Purity
- For any anonymous DRS technology, POP ≤ n.
- For the anonymous OR with n agents, POP ≤ 1.154...
- For any anonymous technology with 2 agents, POP ≤ 1.5.
- For any technology (not necessarily anonymous, but with identical costs) with 2 agents, POP ≤ 2.
- Observation: the payment to each agent in a mixed profile is greater than the minimal payment in a pure profile and smaller than the maximal payment in a pure profile.
52. Research Questions
- How does the technology affect the structure of the optimal contracts?
- What is the damage to society due to the inability to monitor individual actions?
- Price of unaccountability
- What is the complexity of computing the optimal contract?
- Can the principal gain utility from mixed strategies?
- Can the principal gain utility from a priori removing edges from the graph?
53. Free Labor
- So far, the technology was exogenously given.
- Now, suppose the principal has control over the technology in that she can ex-ante remove some agents from the graph.
- Example: OR with 2 agents.
- The action set of agent i is a_i ∈ {1, 0, ⊥}:
- 1: exert effort, succeed with probability 1 - γ, cost c (as before),
- 0: do not exert effort, succeed with probability γ, cost 0 (as before),
- ⊥: do not participate, succeed with probability 0, cost 0.
- Action ⊥ wastes free labor, since action 0 increases the success probability at no additional cost.
54. Free Labor
Are there scenarios in which the principal gains utility from wasting free labor?
- The answer is YES.
- Example: OR technology, n = 2, γ = 0.2.
- Theorem: for technologies with increasing marginal contribution (e.g., AND), utilizing all free labor is always optimal.
[Figure: the number of contracted agents in the optimal contract as a function of v, including a region in which one edge is removed]
55. Analysis of OR
- Lemma: for any OR with n agents and γ small enough, there exists a value for which, in the optimal contract, one agent exerts effort and no other agent participates.
[Figure: the optimal contracts as a function of v for γ = 0.49, γ = 0.25, and γ = 0.01]
56. A Version of Braess's Paradox
- A project is composed of 2 essential components, A and B.
- And-of-Ors (AOO): allows interaction between teams; the project succeeds if at least one of the pairs (A1,B1), (A1,B2), (A2,B1), (A2,B2) succeeds.
- Or-of-Ands (OOA): does not allow interaction between teams; the project succeeds if at least one of the pairs (A1,B1), (A2,B2) succeeds.
- Obviously, AOO is superior in terms of success probability.
57. A Version of Braess's Paradox
- Example: γ = 0.2, v = 110; the middle edge has γ = 1.
[Figure: the And-of-Ors network (with the middle edge) and the Or-of-Ands network (without it), with agents A1, A2, B1, B2 between s and t]
- u(2,2) = 75.59..., u(1,1) = 74.17...
- Or-of-Ands wastes free labor. Could the principal gain utility from removing the middle edge?
- Conclusion: it may be beneficial for the principal to isolate the teams.
58. Summary
- Combinatorial Agency: hidden actions in combinatorial settings.
- Computing the optimal contract is hard in general.
- Natural research directions:
- technologies whose optimal contract can be computed in polynomial time,
- approximation algorithms.
- Many open questions remain.
59. Thank You
- mfeldman_at_cs.huji.ac.il
60. Related Literature
- Winter (2004): Incentives and discrimination
- The effect of technology on the optimal contract (full implementation)
- Winter (2005): Optimal incentives with information about peers
- Ronen (2005); Smorodinsky and Tennenholtz (2004, 2005)
- Multi-party computation with costly information
- Holmstrom (1982): Moral hazard in teams
- Budget-balanced sharing rules