Transcript and Presenter's Notes



1
The Psychology of Avoiding Disaster Readiness
Disasters
  • Robin Dillon-Merrill
  • Catherine H. Tinsley
  • The McDonough School of Business
  • Georgetown University

2
Precursors to Catastrophes
  • People are confronted by the same threats year
    after year
  • Hurricanes along the Southeastern coast
  • Floods and tornados in the Midwest
  • Wildfires in the West
  • Mud slides and earthquakes in California
  • When catastrophes occur that were preceded by
    near-miss events, the question becomes:
  • Were the near-miss events ignored?

3
Anecdotal Evidence
  • Governor Haley Barbour of Mississippi: "hurricane
    fatigue"
  • He feared that his constituents were not
    evacuating in response to the Katrina threat
    because they had successfully weathered earlier
    storms.
  • A former FEMA official described an agency that
    was responding "business as usual"
  • (i.e., treating Katrina like past hurricanes)
  • Individual statements: "I survived Camille; my
    house is sturdy; I am staying put."
  • Organizational decision making: "This is how we
    have responded to hurricane warnings in the
    past." 1
  • 1 Quotes from The Washington Post, September 11,
    2005, pp. A6-A7

4
Gap in Current Disaster Research
  • Research has shown that the level of preparedness
    is significantly linked to personal experience
    with disasters
  • (Lindell and Perry, 2000; Wenger, 1980; Dooley
    et al., 1992)
  • But these experiences can lead either to greater
    awareness and preparedness or to greater
    complacency and fatalism, offering no conclusions
    as to why the variation exists
  • (Tierney, Lindell, and Perry, 2001; Jackson,
    1981; Mileti and O'Brien, 1992)
  • It is precisely people's interpretations of the
    outcomes of prior disasters, and of why these
    outcomes unfolded, that will influence their
    subsequent perceptions of, and preparations for,
    future disaster events
  • (Lindell and Perry, 1992)

5
Opportunities for New Orleans to have Learned
Prior to Katrina
  • Hurricane Ivan
  • 2004, Category 4-5 (140-155 mph winds)
  • Predicted 25% chance of staying on a direct track
    to New Orleans (actual landfall in Mobile Bay,
    Alabama, 2 am Sept. 16)
  • By noon Sept. 15 (when the storm turned), an
    estimated 600,000 of 1.2 million residents had
    evacuated New Orleans
  • 2/3 of non-evacuees (with the means to evacuate)
    didn't evacuate because they felt safe in their
    homes. Others were discouraged by negative
    experiences with past evacuations
  • 120,000 New Orleans residents did not have cars
  • The Superdome was used to shelter non-evacuees

6
Opportunities for New Orleans to have Learned
Prior to Katrina
  • Hurricane Pam simulation, conducted July 2004
  • 8-day table-top exercise with over 250 officials
    participating
  • Assumed a slow-moving 120 mph (Category 3) storm
  • Assumed more than 1 million evacuated
  • Recognized that the levees would be overtopped
  • Recognized the need to rely on state resources
    for shelters for 3-5 days
  • Focused recommendations on managing the
    aftermath of the catastrophe (e.g., search and
    rescue, debris removal) rather than on
    minimizing the magnitude of the catastrophe
    (i.e., improving evacuation and sheltering
    strategies remained open issues)
  • A second exercise in the summer of 2005 didn't
    take place because of a lack of funding

7
Precursors Influence
  • Decision Makers attend to near-misses
  • Near-miss information is incorporated into
    decision calculus
  • Near-misses will systematically bias decision
    making
  • Towards more risk
  • Near-misses can be evidence of a system's
    vulnerability or of a system's resilience
  • Resilience > Vulnerability
  • Good Fortune is Discounted

8
What is a Near-Miss?
  • Near-miss
  • An event that has some probability of a negative
    (even fatal) outcome and some probability of a
    positive (safe) outcome, and the actual outcome
    is non-hazardous
  • A success that could have been a failure except
    for good luck

9
What is a Near-Miss?
Definitions (see the classification sketch below):
  • Lx < CMIN: Success
  • Cx > LMAX: Success
  • CMIN < Lx < LMAX and Lx > Cx: Hit
  • CMIN < Lx < LMAX and Lx < Cx: Near-miss
Examples:
  • L1 < CMIN: Success
  • C3 > LMAX: Success
  • L3 > C1: Hit
  • L2 < C2: Near-miss
  • L2 < C1: Near-miss
  • L3 < C2: Near-miss
[Diagram: example values L1, L2, L3 and C1, C2, C3 plotted against
the CMIN and LMAX thresholds, with regions labeled SUCCESS and
NEAR MISS or HIT.]
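For concreteness, the classification above can be written as a small
decision rule. The sketch below is illustrative only: it assumes Lx
denotes the load (severity) an event places on the system and Cx the
system's capacity, an interpretation not spelled out on the slide, and
the numeric example values are hypothetical.

```python
# Minimal sketch of the slide's classification rule (assumes L = event
# load/severity and C = system capacity; example numbers are hypothetical).

def classify(L: float, C: float, c_min: float, l_max: float) -> str:
    """Return the slide's category for one event."""
    if L < c_min or C > l_max:
        return "success"                        # Lx < CMIN, or Cx > LMAX
    if c_min < L < l_max:
        return "hit" if L > C else "near-miss"  # within the risky range, outcome hinges on capacity
    return "unclassified"                       # region not covered by the slide's definitions

# Hypothetical values mirroring the slide's examples:
CMIN, LMAX = 1.0, 10.0
print(classify(0.5, 5.0, CMIN, LMAX))  # L1 < CMIN -> success
print(classify(8.0, 4.0, CMIN, LMAX))  # L3 > C1   -> hit
print(classify(3.0, 6.0, CMIN, LMAX))  # L2 < C2   -> near-miss
```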
10
First Studies
  • Simulation of a Mars Rover mission
  • Limited battery life (8 days)
  • 5 travel days to destination
  • Rewarded 5 extra dollars for each extra battery
    day
  • Weather forecast for each day:
  • Mild weather, or a 95% chance of a severe storm
  • Severe dust storms can cause catastrophic failure
  • 40% chance of catastrophic failure if the rover
    drives through a severe storm
  • 100% safe if it stops and deploys wheel guards
  • Operational decisions (stop/go) for days 6-13
  • Decide to drive, or stop and deploy wheel guards
    (see the decision sketch after this list)
  • Manipulation check, risk propensity, and
    engagement measures
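To make the stop/go trade-off concrete, here is a minimal sketch of the
decision calculus implied by the slide's stated probabilities. The
payoff framing (stopping costs one battery day's $5 bonus) is an
assumption drawn from the bullets above, not the actual experiment
materials.

```python
# Minimal sketch of the rover stop/go decision implied by the slide
# (assumed framing; not the actual experiment code).

P_SEVERE = 0.95                   # forecast: 95% chance of a severe storm today
P_FAIL_IF_DRIVE_IN_STORM = 0.40   # 40% catastrophic failure if driving through a severe storm

def p_catastrophe(drive: bool) -> float:
    """Chance of losing the rover today, given the stop/go choice."""
    if not drive:
        return 0.0                                # stopping and deploying wheel guards is 100% safe
    return P_SEVERE * P_FAIL_IF_DRIVE_IN_STORM    # 0.95 * 0.40 = 0.38

if __name__ == "__main__":
    print(f"Drive: {p_catastrophe(True):.0%} chance of catastrophic failure")
    print(f"Stop:  {p_catastrophe(False):.0%} chance, at the cost of one battery day ($5 bonus)")
```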

11
Manipulation
  • Near-miss condition: of the 5 days before you
    started operating the rover, 3 had severe storms,
    and the rover had driven successfully through them
  • Control condition: of the 5 days before you
    started operating the rover, all had mild weather

12
Results-- NASA
13
Results-- Students
14
Results, Experiment 2: Those who USED probability
information
15
Results, Experiment 2: Those who did NOT use
probability information
16
Second Studies
  • Given that these biases exist, how do they
    influence how managers are evaluated within an
    organization?
  • Failures and successes are attributed to poor or
    good decision making, respectively
  • Is there another variable?
  • Loma Prieta, 1989 (Friday afternoon rush hour)
  • Northridge, 1994 (4:30 am on a holiday)
  • If all outcomes are a function of decision
    quality and luck, how do we evaluate others'
    decision processes?

17
Biases in Decision making
  • Outcome bias (Baron & Hershey, 1988; Allison
    et al., 1996)
  • The outcome systematically influences people's
    evaluations of the quality of the decision making
  • Hindsight bias (Fischhoff, 1982)
  • Anchor on outcomes
  • Exaggerate what could have been anticipated at
    the time of the decision
  • Misremember one's own predictions to be
    consistent with now-known outcomes
  • Suggests we will anchor on outcomes

18
Hypothesis 1
  • H1a: Managers whose decisions result in a miss
    (organizational success) will have their decision
    making evaluated in a significantly more
    favorable light than managers whose decisions
    result in a hit (organizational failure)
  • H1b: Managers whose decisions result in a miss
    (organizational success) will be judged to be
    more competent, to be more intelligent, to have
    more leadership ability, and to be more
    promotable than managers whose decisions result
    in a hit (organizational failure)

19
What happens with near-misses?
  • Recall that a near-miss is both
  • Evidence of a system's resilience
  • Evidence of a system's vulnerability
  • And what if we know the outcome was derived, in
    part, from good luck?
  • Prospect theory: reference points
  • Norm theory:
  • Immutable features give you the class of events
    used to categorize something
  • Mutable features (easily imagined as different)
    give you contrast events
  • What is the most easily imagined (mutable)
    feature?
  • Failure
  • Thus a near-miss is categorized as a miss, and a
    near-miss is contrasted with failure
  • Suggests near-misses are more likely to be coded
    as successes than as failures
  • Suggests we will discount others' good luck

20
Hypothesis 2
  • H2a: Managers whose decisions result in a
    near-miss will have their decision making
    evaluated more favorably than managers whose
    decisions result in a hit and judged less
    favorably than managers whose decisions result in
    a miss.
  • H2b: Managers whose decisions result in a
    near-miss will be judged more competent, more
    intelligent, to have more leadership ability, and
    to be more promotable than managers whose
    decisions result in a hit and judged less
    competent, less intelligent, to have less
    leadership ability, and to be less promotable
    than managers whose decisions result in a miss.

21
Hypothesis 3
  • H3: Managers whose decisions result in a
    near-miss will be judged closer to those whose
    decisions ended in a miss than to those whose
    decisions ended in a hit.

22
Method
  • Case study loosely based on development details
    from past unmanned NASA missions
  • Development problems:
  • Challenges interacting across NASA development
    centers
  • A skipped peer review
  • Mission not delayed over a last-minute,
    potentially fatal problem (considered highly
    unlikely)
  • Three different outcomes:
  • Success: Launch and deployment successful (no
    problem shortly after launch)
  • Failure: Problem shortly after launch; because of
    the spacecraft's orientation to the sun, the
    problem is catastrophic
  • Near-miss: Problem shortly after launch; because
    of the spacecraft's orientation to the sun, it is
    not a problem, and data collection is successful

23
Participants
  • 89 undergraduate students
  • 98 MBA students
  • 24 NASA managers

24
Sample Differences
  • In general, NASA managers tended to be a bit
    easier on Chris
  • Rated the decision to launch higher (p < .05)
  • NASA mean 3.7, MBA mean 3.4, UG mean 3.0
  • Were marginally more likely to promote Chris
    (p = .1)
  • NASA mean 3.8, MBA mean 3.3, UG mean 3.3
  • Were significantly less likely to fire Chris
    (p < .001)
  • NASA mean 3.0, MBA mean 4.2, UG mean 4.3
  • No significant interaction effects between sample
    and condition

25
All Participants
[Chart panels comparing the Failure, Near-miss, and Success conditions
on rated competence, intelligence, decision-making ability, and
leadership ability (scale: 2 = not at all, 4 = somewhat, 6 = greatly),
and on ratings of the decision to proceed without a peer review, the
decision to launch without redesign, the decision to promote, and the
decision to fire (scale: 2 = very bad, 4 = neutral, 6 = very good).
Reported significance levels range from p < .05 to p < .001, except
p = .11 for the decision to fire.]
26
Summary
  • Rated managers whose decisions resulted in
    organizational success significantly more
    favorably than managers whose decisions resulted
    in failures
  • Rated managers whose decisions, BUT FOR LUCK,
    would have resulted in failures more favorably
    than those whose decisions resulted in failure
  • Did not hold managers accountable for faulty
    decision making if it resulted in a good
    organizational outcome, EVEN WHEN THE SUCCESS WAS
    DUE TO LUCK

27
Implications for organizations
  • Near-misses are categorized as misses rather than
    hits, meaning organizations fail to take
    advantage of learning opportunities
  • Near-misses generally lack a formal failure
    investigation board
  • The near-miss bias may make organizations more
    risk-seeking
  • May explain the normalization of deviance
    (Vaughan, 1996): without obvious failures, events
    that once caused concern become accepted as
    normal occurrences
  • If those experiencing near-misses are promoted
    through organizational ranks, given that they
    make riskier subsequent decisions, organizations
    will come to embrace more and more risk

28
What to do about all this?
  • Knowledge and recognition that biases exist
  • Hindsight, outcome, and near-miss biases
  • Decisions do have a luck component
  • Developing an effective Lessons Learned system
  • The effectiveness of Lessons Learned systems
    depends on the completeness of the data
  • A complete data set requires noticing both
    failures and successes and being able to
    distinguish near-misses
  • How can you increase the chances of acknowledging
    both successes and failures?
  • Improve group decision making: groupthink,
    escalation, Abilene paradox

29
Avoiding Groupthink
  • Monitor team size (< 10)
  • Provide face-saving mechanisms for dissent and
    for changing one's mind
  • Don't be a bystander because you fear appearing
    foolish (evaluation apprehension)
  • Discuss risks before benefits
  • Discuss how things might have failed
  • Encourage and track alternative viewpoints
  • Get external observers

30
Avoiding Escalation
  • All advice for avoiding groupthink, plus
  • Set resource limits up front
  • Recognize sunk costs

31
Avoiding Abilene Paradox
  • All advice for avoiding groupthink, plus
  • Generate solution alternatives without evaluation
    (brainstorming)
  • Conduct a private vote (Delphi)
  • Create norms for the expression of controversial
    views (rotating devil's advocate)

32
Future Work
  • Determine what factors may help mitigate a
    near-miss bias
  • Determine what effect the accumulation of the
    near-miss bias may have as an inhibitor to
    organizational learning