Do Humans make Good Observers and can they Reliably Fuse Information?

Transcript and Presenter's Notes
1
Do Humans make Good Observers and can they
Reliably Fuse Information?
  • Dr. Mark Bedworth
  • MV Concepts Ltd.
  • mark.bedworth@mv-concepts.com

2
What we will cover
  • The decision making process
  • The information fusion context
  • The reliability of the process
  • Where the pitfalls lie
  • How not to get caught out
  • Suggestions for next steps

3
What we will not cover
  • Systems design and architectures
  • Counter-piracy specifics
  • Inferencing frameworks
  • Tracking
  • Multi-class problems
  • Extensive mathematics
  • In fact most of the detail!

4
Our objectives
  • Understanding of the context of data fusion for
    decision making
  • Quantitative grasp of a few key theories
  • Appreciation of how to put the theory into
    practice
  • Knowledge of where the gaps in theory remain

5
Warning
  • This presentation contains audience
    participation experiments

6
Decision Making
  • To make an informed decision
  • Obtain data on the relevant factors
  • Reason within the domain context
  • Understand the possible outcomes
  • Have a method of implementation

7
Boyd Cycle
  • This is captured more formally as a fusion
    architecture
  • Observe: acquire data
  • Orient: form perspective
  • Decide: determine course of action
  • Act: put into practice
  • Also called the OODA loop

8
OODA loop
[Diagram: the Observe, Orient, Decide, Act cycle]
9
Adversarial OODA Loops
[Diagram: two opposing OODA loops. Own and adversary cycles each observe,
orient, decide and act on the shared physical world, each drawing on its
own information.]
10
Winning the OODA Game
  • To achieve dominance
  • Make better decisions
  • In a more timely manner
  • And implement more effectively

11
Dominance History
  • Action dominance (the Act in OODA)
  • Longer range, more destructive, more accurate
    weapons
  • Observation dominance (the Observe in OODA)
  • Longer range, more robust, more accurate sensors
  • Information dominance (the Orient and Decide in
    OODA)
  • More timely and relevant information with better
    support to the decision maker

12
Information Dominance, Part One: Orientation
  • Having acquired relevant data, to undertake
    reasoning about the data within the domain
    context to form a perspective of the current
    situation, so that an informed decision can
    subsequently be made

13
A number of approaches
  • Fusion of hard decisions
    • Majority rule
    • Weighted voting
    • Maximum a posteriori fusion
    • Behaviour knowledge space
  • Fusion of soft decisions
    • Probability fusion
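
A minimal sketch of two of the hard-decision fusers named above; the
labels, weights and example inputs are assumed for illustration:

    # Majority rule and weighted voting over local hard decisions.
    from collections import Counter

    def majority_rule(decisions):
        # Return the label reported by the most local decision-makers.
        return Counter(decisions).most_common(1)[0][0]

    def weighted_vote(decisions, weights):
        # Accumulate each source's weight behind its label; heaviest wins.
        tally = Counter()
        for label, weight in zip(decisions, weights):
            tally[label] += weight
        return tally.most_common(1)[0][0]

    print(majority_rule(["threat", "no threat", "threat"]))    # threat
    print(weighted_vote(["threat", "no threat"], [0.4, 0.6]))  # no threat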

14
Reasoning Frameworks
  • Boolean
  • Truth and falsehood
  • Fuzzy (Zadeh)
  • Vagueness
  • Evidential (Dempster-Shafer)
  • Belief and ignorance
  • Probabilistic (Bayesian)
  • Uncertainty

15
Probability theory
  • 0 ≤ P(H) ≤ 1
  • If P(H) = 1 then H is certain to occur
  • P(H) + P(¬H) = 1: either H or not-H is certain
    to occur (negation rule)
  • P(G,H) = P(G|H) P(H) = P(H|G) P(G): the joint
    probability is the conditional probability
    multiplied by the prior (conjunction rule)

16
Bayes Theorem
P(H|X) = P(X|H) P(H) / P(X)
  • Posterior probability: P(H|X)
  • Likelihood: P(X|H)
  • Prior probability: P(H)
  • Marginal likelihood: P(X)
17
Perspective Calculation
  • Usually the marginal likelihood is awkward to
    compute
  • But is not needed since it is independent of the
    hypothesis
  • Compute the products of the likelihoods and
    priors then normalise over hypotheses
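
As a minimal sketch (the likelihood values are assumed for illustration),
the whole computation is a product followed by a normalisation:

    # Normalise likelihood-times-prior products over the hypotheses;
    # the marginal likelihood itself is never needed.
    priors = {"threat": 0.05, "no threat": 0.95}       # P(H)
    likelihoods = {"threat": 0.90, "no threat": 0.20}  # P(X|H), assumed
    products = {h: likelihoods[h] * priors[h] for h in priors}
    total = sum(products.values())                     # plays the role of P(X)
    posteriors = {h: p / total for h, p in products.items()}
    print(posteriors)  # {'threat': 0.191..., 'no threat': 0.808...}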

18
Human Fusion Experiment (1)
  • A threat is present 5% of the time it is looked
    for
  • Observers A and B both independently look for the
    threat
  • Both report an absence of the threat, with
    posterior probabilities 70% and 80%
  • What is the fused probability that the threat is
    absent?

19
Human Fusion Experiment (2)
  • Threat absent: the hypothesis (H)
  • P(¬H) = 0.05
  • P(H) = 0.95
  • P(H|XA) = 0.70
  • P(H|XB) = 0.80
  • P(H|XA,XB) = ?

20
Human Fusion Experiment (3)
[Chart: probability that the threat is absent, on a scale of 0 to 1.00]
  • Prior: P(H) = 0.95
  • Report A: P(H|XA) = 0.70
  • Report B: P(H|XB) = 0.80
21
Conditional Independence
  • Assume the data to be conditionally independent
    given the class: P(XA,XB|H) = P(XA|H) P(XB|H)
  • Note that this does not necessarily imply
    unconditional independence: in general
    P(XA,XB) ≠ P(XA) P(XB)
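
A tiny numeric check of this point; the distributions are assumed example
values, chosen so the data factorise given the class but not overall:

    # P(XA,XB|H) factorises by construction, yet P(XA,XB) != P(XA) P(XB).
    p_h = {0: 0.5, 1: 0.5}                            # class prior P(H)
    p_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # P(XA|H)
    p_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # P(XB|H)

    def joint(a, b):
        # P(XA=a, XB=b): sum over h of P(a|h) P(b|h) P(h)
        return sum(p_a[h][a] * p_b[h][b] * p_h[h] for h in p_h)

    def marginal(a):
        # P(XA=a); by symmetry P(XB=a) is identical here
        return sum(joint(a, b) for b in (0, 1))

    print(joint(1, 1))                # 0.41
    print(marginal(1) * marginal(1))  # 0.25, so not independent overall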

22
Conditionally independent
[Scatter plot illustration]
23
Conditionally independent
[Scatter plot illustration]
24
Not conditionally independent
[Scatter plot illustration]
25
Not conditionally independent
[Scatter plot illustration]
26
Fusion Product Rule (1)
  • We require P(H|XA,XB)
  • From Bayes theorem:
    P(H|XA,XB) = P(XA,XB|H) P(H) / P(XA,XB)

27
Fusion Product Rule (2)
  • We assume conditional independence, so may write
    P(XA,XB|H) = P(XA|H) P(XB|H)
28
Fusion Product Rule (3)
  • Applying Bayes theorem again:
    P(Xi|H) = P(H|Xi) P(Xi) / P(H)
  • And collecting terms:
    P(H|XA,XB) = [P(H|XA) P(H|XB) / P(H)]
                 × [P(XA) P(XB) / P(XA,XB)]

29
Fusion Product Rule (4)
  • We may drop the marginal likelihoods again and
    normalise over hypotheses:
  • P(H|XA,XB) ∝ P(H|XA) P(H|XB) / P(H)
  • The fused posterior probability is built from the
    two single-source posterior probabilities and the
    prior probability
30
Multisource Fusion Rule
  • The generalisation of this fusion rule to
    multiple sources X1, ..., XN:
    P(H|X1,...,XN) ∝ P(H) × ∏i [P(H|Xi) / P(H)]
  • This is commutative
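
A minimal sketch of this rule for a binary hypothesis; the function name
and example usage are illustrative only:

    # Fuse single-source posteriors P(H|Xi) under conditional independence:
    # P(H|X1..XN) is proportional to P(H) times the product of P(H|Xi)/P(H).
    def fuse_posteriors(posteriors, prior):
        for_h = prior
        against_h = 1.0 - prior
        for p in posteriors:
            for_h *= p / prior
            against_h *= (1.0 - p) / (1.0 - prior)
        return for_h / (for_h + against_h)  # normalise over H and not-H

    # The human fusion experiment: prior 0.95, reports 0.70 and 0.80
    print(round(fuse_posteriors([0.70, 0.80], prior=0.95), 2))  # 0.33

Because the rule is a pure product, the sources can be fused in any order,
or in stages, with the same result.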

31
Commutativity of Fusion (1)
[Diagram: single-stage and multi-stage fusion architectures yielding the
same fused posterior]
32
Commutativity of Fusion (2)
  • The probability fusion rule commutes
  • It doesn't matter what the architecture is
  • It doesn't matter if it is single stage or
    multi-stage

33
Experiment Results
  • The unnormalised products are
    0.70 × 0.80 / 0.95 = 0.589 for H
    and 0.30 × 0.20 / 0.05 = 1.200 for ¬H
  • Normalising gives
  • P(H|XA,XB) = 0.33 and P(¬H|XA,XB) = 0.67

34
Human Fusion Experiment (3)
[Chart: probability that the threat is absent, on a scale of 0 to 1.00]
  • Prior: P(H) = 0.95
  • Report A: P(H|XA) = 0.70
  • Report B: P(H|XB) = 0.80
  • Fusion A,B: P(H|XA,XB) = 0.33
35
Why was that so hard?
  • Most humans find it difficult to intuitively fuse
    uncertain information
  • Not because they are innumerate
  • But because they cannot comfortably balance the
    evidence (likelihood) with their predisposition
    (prior)

36
Prior Sensitivity (1)
  • If the issue is with the priors, do they matter?
  • Can we ignore the priors?
  • Do we get the same final decision if we change
    the priors?

37
Prior Sensitivity (2)
  • If P(H|XA) = P(H|XB)
  • What value of P(H) makes P(H|XA,XB) = 0.5?
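
The answer follows directly from the fusion product rule; a small sketch
(the report values p are illustrative):

    # Setting p*p/P(H) equal to (1-p)*(1-p)/(1-P(H)) and solving gives
    # the prior at which two equal reports p fuse to exactly 0.5:
    #   P(H) = p**2 / (p**2 + (1-p)**2)
    for p in (0.6, 0.7, 0.8, 0.9):
        prior = p**2 / (p**2 + (1 - p)**2)
        print(p, round(prior, 3))  # e.g. p = 0.8 needs a prior of 0.941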

38
Prior Sensitivity (3)
[Graph: the prior P(H) needed for the fused posterior to reach 0.5, as a
function of the common report value P(H|XA) = P(H|XB)]
39
Prior Sensitivity (4)
  • For 0.2 < P(H|XA) < 0.8 the prior has a
    significant effect
  • Carefully define the domain over which the prior
    is evaluated
  • Put effort into using a reasonable value

40
Sensitivity to Posterior Probability
  • What about the posterior probabilities delivered
    to the fusion centre?
  • Can we endure errors here?
  • Which types of errors hurt most?

41
Probability Experiment (1)
  • 10 estimation questions
  • Write down a lower and upper bound
  • So that you are 90% sure it covers the actual
    value
  • All questions relate to the highest point in
    various countries (in metres)

42
Probability experiment (2)
  • Winner defined as
  • Person with most answers correct
  • Tie-break decided by smallest sum of ranges (for
    all 10 questions)
  • Pick a range big enough
  • But not too big!

43
The questions
  1. Australia
  2. Chile
  3. Cuba
  4. Egypt
  5. Ethiopia
  6. Finland
  7. Hong Kong
  8. India
  9. Lithuania
  10. Poland

44
The answers
  1. Australia (2228m)
  2. Chile (6893m)
  3. Cuba (1974m)
  4. Egypt (2629m)
  5. Ethiopia (4550m)
  6. Finland (1324m)
  7. Hong Kong (958m)
  8. India (8586m)
  9. Lithuania (294m)
  10. Poland (2499m)

45
Overconfidence (1)
  • Large trials show that most people get fewer than
    40% correct
  • Should be 90% correct!
  • People are often overconfident (even when primed
    that they are being tested!)

46
Overconfidence (2)
[Calibration plot: declared probability against actual probability, with
off-diagonal regions marked overconfident, underconfident and wrong]
47
Confidence Amplification (1)
[Graph: fused class probability against input class probability]
48
Confidence Amplification (2)
[Graph: fused class probability response surface for two sources]
49
Veto Effect
  • If any local decision-maker outputs a probability
    of close to zero for a class, then the fused
    probability is close to zero
  • Even if all the other decision-makers output a
    high probability (see the sketch below)
  • About 40% of the response surface for two sensors
    is either <0.1 or >0.9
  • This rises to 50% for three sensors and nearly
    60% for four
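
A self-contained sketch of the veto, with a uniform prior assumed so the
prior factors cancel; the report values are illustrative:

    # With a uniform prior the fused posterior is proportional to the
    # product of the reports, so one near-zero report overrules the rest.
    reports = [0.95, 0.95, 0.0001]
    for_h, against_h = 1.0, 1.0
    for p in reports:
        for_h *= p
        against_h *= 1.0 - p
    print(for_h / (for_h + against_h))  # about 0.035, despite two 0.95s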

50
Moderation of probabilities
  • If we suspect that the posterior probabilities
    are overconfident then we should moderate them
  • By building it into automatic techniques
  • By allowing for it if this is not possible

51
Gaussian Moderation
  • For Gaussian classifiers the Bayesian correction
    is analytically tractable
  • By integrating over the mean and variance rather
    than taking the maximum likelihood value

52
Student t-distribution (1)
  • For Gaussian data this predictive likelihood is
    p(x|D) = ∫ p(x|μ,σ²) p(μ,σ²|D) dμ dσ²
  • Which is a Student t-distribution
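
A sketch of the effect, assuming a noninformative prior (for which the
moderated predictive is the standard Student t result); the data are
assumed example measurements:

    import numpy as np
    from scipy import stats

    # Plugging in maximum-likelihood parameters gives an overconfident
    # Gaussian; integrating out the mean and variance gives a
    # heavier-tailed Student t predictive.
    data = np.array([1.2, 0.7, 1.9, 1.1, 0.4])
    n, mean, s = len(data), data.mean(), data.std(ddof=1)

    x = 3.0                                        # a new measurement value
    plug_in = stats.norm.pdf(x, loc=mean, scale=s)
    moderated = stats.t.pdf(x, df=n - 1, loc=mean,
                            scale=s * np.sqrt(1 + 1 / n))
    print(plug_in, moderated)  # the t assigns more density far from the mean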

53
Student t-distribution (2)
[Graph: likelihood of data against measurement value]
54
Student t-distribution (3)
[Graphs: probability of class 1 (two panels)]
55
Approximate Moderation (1)
  • We can get a similar effect at the fusion centre
    using the posteriors
  • Convert back to likelihoods by dividing by the
    prior
  • Add a constant to everything
  • Convert back to posteriors by multiplying by
    the prior
  • Renormalise
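
A rough sketch of this recipe; the correction constant epsilon is an
assumption here, to be learned per source as the next slide suggests:

    # Moderate overconfident posteriors: divide by the prior, add a
    # constant, multiply by the prior again, then renormalise.
    def moderate(posteriors, priors, epsilon=0.1):
        likelihoods = {h: posteriors[h] / priors[h] for h in posteriors}
        damped = {h: like + epsilon for h, like in likelihoods.items()}
        unnormalised = {h: damped[h] * priors[h] for h in damped}
        total = sum(unnormalised.values())
        return {h: p / total for h, p in unnormalised.items()}

    print(moderate({"threat": 0.99, "no threat": 0.01},
                   {"threat": 0.50, "no threat": 0.50}))
    # the 0.99 report is pulled back to about 0.95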

56
Approximate Moderation (2)
  • How much to add depends on the source of the
    posterior probabilities
  • Correction factor for each source
  • Learned from data

57
Other Issues
  • Conditional independence not holding
  • Information incest
  • Missing data
  • Communication errors
  • Asynchronous information

58
Information Dominance, Part Two: Decision
  • Having reasoned about the data to form a
    perspective of the current situation, to make an
    informed decision which optimises the
    desirability of the outcome

59
Deciding what to do
  • Decision theory is trivial, apart from the
    details
  • Select an action that maximises the expected
    utility of the outcome

60
Utility functions?
  • A utility function describes how desirable each
    possible outcome is
  • People are sometimes irrational
  • Desirability cannot be captured by a
    single-valued function
  • Allais paradox

61
Utility Experiment (1)
  1. Guaranteed 1 million
  2. 89% chance of 1 million, 10% chance of 5
     million, 1% chance of nothing

62
Utility Experiment (2)
  1. 89% chance of nothing, 11% chance of 1 million
  2. 90% chance of nothing, 10% chance of 5 million

63
Utility Experiment (3)
  • If you prefer 1 to 2 on the first slide, you
    should prefer 1 to 2 on the second slide as well
  • If not, you are acting irrationally

64
Decision Theory
  • Assume we are able to construct a utility
    function (or at least get our superior to define
    one!)
  • Enumerate the possible actions
  • Use our fused probabilities to weight the utility
    of the possible outcomes
  • Choose the action for which the expected utility
    of the outcome is greatest
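
A minimal sketch of this procedure. The actions and utility table are
assumed example values; the probabilities echo the fused result from the
earlier experiment (threat absent with probability 0.33):

    # Choose the action maximising expected utility under the fused
    # posteriors. U(action, hypothesis) values are assumed examples.
    probs = {"threat": 0.67, "no threat": 0.33}
    utilities = {
        "respond": {"threat": 10.0, "no threat": -5.0},
        "ignore":  {"threat": -20.0, "no threat": 1.0},
    }
    expected = {action: sum(probs[h] * u[h] for h in probs)
                for action, u in utilities.items()}
    print(max(expected, key=expected.get), expected)
    # respond: 0.67 * 10 - 0.33 * 5 = 5.05, beating ignore at -13.07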

65
Timing the decision
  • What about timing?
  • When should the decision be made?
  • If we wait then maybe the (fused) probabilities
    will be more accurate
  • Or the action will be more effective

66
Explore versus Exploit
  • By waiting you can explore the situation
  • By stopping you can exploit the situation
  • Stopping rule
  • Sequential analysis
  • SPRT
  • Bayesian optimal stopping

67
Experiment with timing
  • I will show you 20 numbers
  • They are drawn from the same (uniform)
    distribution
  • Select the highest value
  • But no going back
  • A bit like Allá tú!

68
Experiment with timing (1)
  • 131

69
Experiment with timing (2)
  • 16

70
Experiment with timing (3)
  • 125

71
Experiment with timing (4)
  • 189

72
Experiment with timing (5)
  • 105

73
Experiment with timing (6)
  • 172

74
Experiment with timing (7)
  • 39

75
Experiment with timing (8)
  • 94

76
Experiment with timing (9)
  • 57

77
Experiment with timing (10)
  • 133

78
Experiment with timing (11)
  • 52

79
Experiment with timing (12)
  • 69

80
Experiment with timing (13)
  • 7

81
Experiment with timing (14)
  • 242

82
Experiment with timing (15)
  • 148

83
Experiment with timing (16)
  • 163

84
Experiment with timing (17)
  • 23

85
Experiment with timing (18)
  • 139

86
Experiment with timing (19)
  • 146

87
Experiment with timing (20)
  • 211

88
The answer
  • How many people chose 242?
  • Balance between collecting data on how big the
    numbers might be (exploration) and actually
    picking a big number (exploitation)

89
The 1/e Law (1)
  • Consider a rule of the form: observe M values and
    remember the best value (V); observe the
    remaining N-M and pick the first that exceeds V

90
The 1/e Law (2)
  • It can be shown that the optimum value for M is
    N/e
  • And that for this rule the probability of
    selecting the maximum is at least 1/e
  • Even for huge values of N
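
A quick simulation sketch of this rule, with N = 20 as in the experiment
(function and variable names are illustrative):

    import math
    import random

    # Observe the first m values, remember the best (V), then take the
    # first later value exceeding V, or the last value if none does.
    def observe_then_pick(values, m):
        best_seen = max(values[:m])
        for v in values[m:]:
            if v > best_seen:
                return v
        return values[-1]

    n, trials, hits = 20, 100_000, 0
    m = round(n / math.e)  # the optimal cut-off, about N/e
    for _ in range(trials):
        values = [random.random() for _ in range(n)]
        hits += observe_then_pick(values, m) == max(values)
    print(hits / trials)   # about 0.38, above the 1/e = 0.368 guarantee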

91
Time Pressure (1)
  • Individuals tend to make the decision too early
  • Committees tend to leave the decision too late

92
Time Pressure (2)
  • Lecturers tend to overrun their time slot!

93
Time Pressure (3)
  • Apologies for skipping over so much of the detail
  • Some of the other areas that warrant mention
  • Game theory
  • Sensor management
  • Graphical models
  • Cognitive inertia
  • Inattentional blindness

94
Please feel free to contact me
  • mark.bedworth@mv-concepts.com
  • www.mv-concepts.com
  • Or just come and introduce yourself

95
Thank you! Questions?