Title: Illusion of Control in Minority and Parrondo Games
Slide 1: Illusion of Control in Minority and Parrondo Games
- Jeffrey Satinover (1), Didier Sornette (2)
- (1) Condensed Matter Physics Laboratory, University of Nice, France; Dept. of Politics, Princeton University; jsatinov_at_princeton.edu
- (2) Chair of Entrepreneurial Risk, Swiss Federal Institute of Technology, Zurich, Switzerland; dsornette_at_ethz.ch
Slide 2: I. Message
- Optimization often yields perverse results
- (in economic policy-making: the Law of Unintended Consequences)
- but not always. When and why?
- We attempt to formally characterize the conditions that yield perverse outcomes under optimization
Slide 3: II. Overview: THMG
- Time-Horizon MG (THMG): pros and cons
- In general, agents underperform their strategies for reasonable τ (no impact accounting)
- Agent performance declines with d_H
- Under agent evolution, d_H → 0
- Counteradaptive agents perform best
Slide 4: III. Parrondo Games, Briefly
- The effect: two losing games win if alternated
- History-dependent games
- Attempting to optimize this effect inverts it
- Previously shown in an unusual multi-player setting
- Here, in the natural single-player setting
Slide 5: IV. Other Topics, Briefly
- Cycle decomposition of the THMG
- A cycle predictor for real-world 1D series
- The Status Minority Game
Slide 6: A. Time-Horizon MG (THMG): Pros and Cons
- Pro
- The standard MG has an unreasonably long equilibration time t_eq
- Many real-world series are not stationary
- Many real-world trading strategies use a short or declining-valued τ (exponential damping)
- Certain kinds of tractability follow from a reasonable τ
- Con
- Far from equilibrium
- Arguendo, many real-world series are effectively at equilibrium (high-frequency data?)
- Analytic solutions are more difficult for finite τ
- Very complex finite-size effects, e.g., σ² periodic in τ
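To make the setup concrete, here is a minimal THMG sketch in Python (not the authors' code; N = 31 and S = 2 follow the talk's examples, while m = 2 and τ = 10 are illustrative). Each agent holds S fixed random strategies, plays the one with the best virtual score over the last τ steps, and the minority side wins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; the talk's examples use N = 31, S = 2.
N, m, S, tau = 31, 2, 2, 10    # agents, memory bits, strategies per agent, horizon
P = 2 ** m                     # number of distinct m-bit histories

# Each strategy maps every possible history to an action in {-1, +1}.
strategies = rng.choice([-1, 1], size=(N, S, P))
window = []                    # rolling window of per-strategy virtual payoffs
scores = np.zeros((N, S))      # virtual scores over the last tau steps
history = int(rng.integers(P))
agent_gain = np.zeros(N)
strategy_gain = np.zeros((N, S))

for step in range(2000):
    best = scores.argmax(axis=1)                  # each agent plays its best strategy
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()                             # aggregate action; minority wins
    agent_gain += -np.sign(A) * actions           # realized payoffs
    payoff = -np.sign(A) * strategies[:, :, history]
    strategy_gain += payoff                       # hypothetical ("virtual") payoffs
    window.append(payoff)
    if len(window) > tau:                         # finite time horizon
        window.pop(0)
    scores = np.sum(window, axis=0)
    history = ((history << 1) | int(A < 0)) % P   # shift in the winning side's bit

print("mean agent gain per step:   ", agent_gain.mean() / 2000)
print("mean strategy gain per step:", strategy_gain.mean() / 2000)
```

Because the minority payoff -sign(A)·a_i sums to -|A| over agents, total realized gain is strictly negative every step; the slides' claim is that the strategies' virtual scores nonetheless typically look better than the agents' realized gains.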
Slide 7: THMG Markov Chain (EPJB, B07270)
Slides 8-11: THMG Markovian
Slide 12: B. Agents underperform their strategies for reasonable τ (no impact)
- (figure: results for all N, m, S)
Slide 13: (continued) {m, S, N} = {2, 2, 31}
Slides 14-23: (continued)
Slide 24: B. Agents underperform their strategies for reasonable τ (no impact)
- Do we underestimate the extent to which real-world financial systems are so difficult simply because they are far from equilibrium?
- In a THMG composed entirely of impact-accounting agents, with N = 31 and S = 2, a near-equilibrium state is attained for 10 < τ < 100. For τ = 1 or 10, strategies outperform their agents, as described above. For τ = 100, the reverse is true.
Slide 25: C. Agent performance declines with d_H (the Hamming distance between an agent's strategies)
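As a reading aid (our assumption, not stated on the slide): d_H can be taken as the normalized Hamming distance between an agent's two strategy tables, i.e., the fraction of histories on which they prescribe different actions. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
P = 2 ** m                         # number of possible m-bit histories

# Two random strategy tables: one action in {-1, +1} per history.
s1, s2 = rng.choice([-1, 1], size=(2, P))

# Normalized Hamming distance: the fraction of histories on which
# the two strategies prescribe different actions.
d_H = float(np.mean(s1 != s2))
print(d_H)
```

d_H = 0 means the agent's strategies are identical (no effective choice); d_H = 1 means they are opposites.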
Slide 26: D. At all α, agent performance declines with d_H
Slide 27: D. Agent Evolution
- If agents are allowed to evolve their strategies (e.g., adaptive evolution, a genetic algorithm)
- then d_H → 0
Slide 28: Agent performance declines with d_H, but
- for the MG proper (at equilibrium), for α > α_c,
- agent performance increases with d_H
- and d_H → 1
Slide 29: E. Counteradaptive agents perform best
Slide 30: E. Counteradaptive agents perform best (they choose their worst strategy)
- Carefully designed privileges can yield superior results for a subset of agents
- An important question, which we pose carefully so as to avoid introducing either privileged agents or learning: is the illusion of control so powerful that inverting the optimization rule could yield equally unanticipated, and opposite, results?
- The answer is yes. If the fundamental optimization rule of the MG is symmetrically inverted for a limited subset of agents, who choose their worst-performing strategy instead of their best, those agents systematically outperform both their strategies and the other agents. They can also attain positive gain.
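A sketch of the inverted rule in a minimal THMG of the kind described on Slide 6 (illustrative parameters, not the authors' code): a small subset of agents picks its worst-scoring strategy via argmin instead of argmax, everything else unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

N, m, S, tau = 31, 2, 2, 10        # illustrative THMG parameters
P = 2 ** m
n_inv = 5                          # the first n_inv agents invert the rule

strategies = rng.choice([-1, 1], size=(N, S, P))
window = []                        # rolling window of virtual payoffs
scores = np.zeros((N, S))
history = int(rng.integers(P))
gain = np.zeros(N)

for step in range(5000):
    pick = scores.argmax(axis=1)                    # adaptive: choose best strategy
    pick[:n_inv] = scores[:n_inv].argmin(axis=1)    # counteradaptive: choose worst
    actions = strategies[np.arange(N), pick, history]
    A = actions.sum()
    gain += -np.sign(A) * actions                   # minority side wins
    window.append(-np.sign(A) * strategies[:, :, history])
    if len(window) > tau:                           # finite time horizon
        window.pop(0)
    scores = np.sum(window, axis=0)
    history = ((history << 1) | int(A < 0)) % P

print("counteradaptive agents, mean gain:", gain[:n_inv].mean())
print("adaptive agents, mean gain:       ", gain[n_inv:].mean())
```

The slide's claim is that, averaged over many strategy draws, the counteradaptive subset systematically outperforms; a single run with one seed is only suggestive.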
Slides 31-33: (continued)
Slide 34: E. Counteradaptive agents
Slide 35: Parrondo Games (Physica A 386, 339-344)
- The effect: two losing games win if alternated
- Capital-dependent → history-dependent
- Attempting to optimize this effect inverts it
- Previously shown in an unusual multi-player setting
- Here (ref.), in the natural single-player setting
- Choosing the worst strategy partially restores the Parrondo effect
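A self-contained illustration of the basic effect with a history-dependent game B, using the standard probabilities from the Parrondo literature (9/10, 1/4, 1/4, 7/10, each reduced by a small ε; these exact values are an assumption, not taken from the slides):

```python
import random

random.seed(42)
EPS = 0.003            # small bias making both games losing on their own

# Win probabilities of the history-dependent game B, keyed by the last
# two outcomes (0 = loss, 1 = win). Standard Parrondo-literature values.
B_PROBS = {(0, 0): 0.9 - EPS, (0, 1): 0.25 - EPS,
           (1, 0): 0.25 - EPS, (1, 1): 0.7 - EPS}

def play_A():
    return random.random() < 0.5 - EPS     # slightly unfavorable coin flip

def play_B(hist):
    return random.random() < B_PROBS[hist]

def run(policy, steps=500_000):
    capital, hist = 0, (1, 0)              # arbitrary initial two-game history
    for t in range(steps):
        won = play_A() if policy(t) == 'A' else play_B(hist)
        capital += 1 if won else -1
        hist = (hist[1], int(won))
    return capital

only_A = run(lambda t: 'A')
only_B = run(lambda t: 'B')
mixed = run(lambda t: random.choice('AB'))  # alternate the two games at random
print(only_A, only_B, mixed)
```

For ε > 0, game A and game B each drift downward on their own, while random alternation between them yields a positive drift: the Parrondo effect.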
Slides 36-37: (continued)
Slide 38: Parrondo Games (continued)
- Under optimization (choose best): an 8 × 8 transition matrix
- Under choose-worst
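The choose-best analysis on Slide 38 uses an 8 × 8 transition matrix over enlarged states (presumably the two-outcome history augmented by strategy-score information); reconstructing that exact matrix is beyond what the slides show. The method itself can be sketched on the 4-state chain of game B alone, with the standard history-dependent Parrondo probabilities (an assumption): build the chain, find its stationary distribution, and read off the expected gain per step.

```python
import numpy as np

EPS = 0.003
# Win probability of the history-dependent game B as a function of the
# last two outcomes (0 = loss, 1 = win); standard Parrondo-literature values.
p = {(0, 0): 0.9 - EPS, (0, 1): 0.25 - EPS,
     (1, 0): 0.25 - EPS, (1, 1): 0.7 - EPS}

states = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = np.zeros((4, 4))                              # row-stochastic transition matrix
for i, (a, b) in enumerate(states):
    T[i, states.index((b, 1))] = p[(a, b)]        # win: shift in a 1
    T[i, states.index((b, 0))] = 1 - p[(a, b)]    # lose: shift in a 0

# Stationary distribution: the left eigenvector of T for eigenvalue 1.
vals, vecs = np.linalg.eig(T.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# Expected gain per step under the stationary distribution.
drift = sum(pi[i] * (2 * p[s] - 1) for i, s in enumerate(states))
print("stationary distribution:", pi)
print("gain per step:", drift)                    # negative: game B alone loses
```

At ε = 0 this chain is exactly fair (gain per step 0); any ε > 0 makes the drift negative, which is what a simulation of game B alone also shows.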
Slide 39: IV. Other Topics, Briefly
- Cycle decomposition of the THMG
- A cycle predictor for real-world 1D series
- The Status Minority Game
Slide 40: Status MG (LMG → SMG): mobile agents, competition for the top, a simple definition of social status
- Boundary conditions: reflective, random, fixed, but NOT circular
- Neighborhood size, heterogeneity
- A role for different neighborhood functions