1
Bayesian Inference for Signal Detection Models of
Recognition Memory
  • Michael Lee
  • Department of Cognitive Sciences
  • University of California, Irvine
  • mdlee@uci.edu

  • Simon Dennis
  • School of Psychology
  • University of Adelaide
  • simon.dennis@adelaide.edu.au
2
The Task
  • Study a list of words
  • Tested on another list of words, some of which
    appeared on the first list
  • Subjects have to say whether each word is an old
    or new word
  • Data take the form of counts (a small sketch follows below):
    • Hits
    • Misses
    • False Alarms
    • Correct Rejections

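A small sketch of the data layout with made-up numbers (not from the slides): each subject in each condition contributes four counts, from which a hit rate and a false-alarm rate follow.

```python
# Illustrative counts for one subject in one condition.
hits, misses = 35, 5                       # old test words called "old" / "new"
false_alarms, correct_rejections = 8, 32   # new test words called "old" / "new"

n_old = hits + misses                      # number of old test words
n_new = false_alarms + correct_rejections  # number of new test words

hit_rate = hits / n_old                    # 0.875
false_alarm_rate = false_alarms / n_new    # 0.2
```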
3
Signal Detection Model
4
Unofficial SDT Analysis Procedure
  • Calculate a hit rate and false alarm rate per
    subject per condition
  • Add a bit to false-alarm rates of 0 and subtract a bit from hit rates of 1 to avoid an infinite d' (a sketch of this calculation follows below)
  • Throw out any subjects that are inconsistent with
    your hypothesis
  • Run ANOVA
  • While p > 0.05:
    • Collect more data
    • Run ANOVA
  • Publish

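For contrast with the Bayesian proposal that follows, here is a minimal sketch of the conventional point-estimate calculation this slide is poking at; the 1/(2N) form of the "add a bit" correction is one common convention, assumed here rather than taken from the slides.

```python
from scipy.stats import norm

def conventional_d_prime(hits, n_old, false_alarms, n_new):
    """Point-estimate d' with the ad hoc 'add a bit' edge correction."""
    hit_rate = hits / n_old
    fa_rate = false_alarms / n_new
    # Clamp rates of 0 and 1 so the z-transform stays finite.
    hit_rate = min(max(hit_rate, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    fa_rate = min(max(fa_rate, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# A perfect hit rate and zero false alarms no longer give an infinite d'.
print(conventional_d_prime(hits=40, n_old=40, false_alarms=0, n_new=40))
```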
5
What we would like
  • Sampling variability: variability estimates should depend on how many samples you have per cell
  • Edge corrections should follow from the analysis assumptions
  • Excluding subjects should be done in a principled way
  • Evidence in favour of the null hypothesis should be expressible
  • The method should be usable iteratively without violating the likelihood principle
  • It should apply to small sample sizes, not only in the large-sample limit

6
A Proposal: The Individual Level
  • Assume hits and false alarms are drawn from a
    binomial distribution (which allows us to
    generate a posterior distribution for the
    underlying rates that generated the data)
  • Assume that both hits and false alarms are possible (given this, the least informative prior over them is uniform)
  • Assume d' and C are independent (which is true iff the hit rate and false alarm rate are independent)
  • With these assumptions, d' and C will be distributed as Gaussians (a sketch of the computation follows below)

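A minimal sketch of this individual-level analysis, assuming the usual equal-variance SDT transform d' = z(H) - z(F) and C = -(z(H) + z(F))/2 and a simple Monte Carlo implementation (numpy/scipy are implementation choices, not something the slides prescribe):

```python
import numpy as np
from scipy.stats import norm

def individual_posterior(hits, n_old, false_alarms, n_new, n_samples=20000, seed=0):
    """Posterior samples for d' and C from one subject's counts.

    Uniform (Beta(1,1)) priors plus binomial likelihoods give Beta posteriors
    over the hit and false-alarm rates; d' and the criterion C are derived
    from those rate samples, so they get posterior distributions too.
    """
    rng = np.random.default_rng(seed)
    hit_rate = rng.beta(1 + hits, 1 + n_old - hits, size=n_samples)
    fa_rate = rng.beta(1 + false_alarms, 1 + n_new - false_alarms, size=n_samples)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # discriminability
    criterion = -0.5 * (z_hit + z_fa)  # response criterion C
    return d_prime, criterion

d, c = individual_posterior(hits=35, n_old=40, false_alarms=8, n_new=40)
print(d.mean(), d.std(), c.mean(), c.std())
```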
7
A Proposal: The Group Level
  • Within subjects model
  • Error Model
  • Error Effect Model

8
List Length Data (Dennis & Humphreys, 2001)
  • Is the list length effect in recognition memory a
    necessary consequence of interference between
    list items?
  • List types:
    • Long:        ---------Study---------Filler--Test--
    • Short Start: -Study--------Filler-----------Test--
    • Short End:   ----Filler------Study-Filler--Test--

9
Sampling Variability: Rate Parameterization Posteriors
10
Sampling Variability: Discriminability and Criterion Posteriors
11
Sampling Variability: Discriminability and Log-Bias Posteriors
12
Edge Corrections
  • We always work with a Beta posterior distribution over the rates, never a single number
  • The assumption of uniform priors provides a principled method for determining the degree of correction (illustrated below)

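A small illustration under the same uniform-prior assumption: with zero false alarms out of N new items the posterior over the false-alarm rate is Beta(1, N+1), and its mean 1/(N+2) plays the role that the ad hoc "add a bit" correction plays in the conventional analysis (numbers below are illustrative):

```python
from scipy.stats import beta

n_new, false_alarms = 40, 0
posterior = beta(1 + false_alarms, 1 + n_new - false_alarms)   # Beta(1, 41)
print(posterior.mean())          # 1/42 ~ 0.024: the implied "correction"
print(posterior.interval(0.95))  # a full posterior interval, not a single number
```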
13
Excluding Subjects
Guess vs SDT Model in the Short-Start Condition
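The slide compares a guessing account against the SDT account for each subject. As a rough illustration of how such a comparison could be made in the same framework, here is a sketch that assumes a specific guessing model (respond "old" with probability 0.5 to every item; the slides do not specify the form of the guessing model) and compares it to the uniform-prior SDT model by marginal likelihood:

```python
import math

def log_evidence_guess(hits, n_old, false_alarms, n_new):
    """Marginal likelihood if the subject says "old" with probability 0.5
    to every test item, old or new (one possible guessing model)."""
    return (math.log(math.comb(n_old, hits))
            + math.log(math.comb(n_new, false_alarms))
            + (n_old + n_new) * math.log(0.5))

def log_evidence_sdt(hits, n_old, false_alarms, n_new):
    """Marginal likelihood under the earlier SDT assumptions: independent
    uniform priors on the rates make each count Beta-Binomial, i.e.
    uniform over outcomes, so P(k) = 1 / (n + 1)."""
    return -math.log(n_old + 1) - math.log(n_new + 1)

# Positive values favour SDT; near-chance subjects come out negative and
# are candidates for principled exclusion.
counts = dict(hits=22, n_old=40, false_alarms=18, n_new=40)
print(log_evidence_sdt(**counts) - log_evidence_guess(**counts))
```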
14
List Length Data (Kinnell & Dennis, in preparation)
  • Does contextual reinstatement create a length
    effect?
  • List types:
    • Long Filler:     ---------Study---------Filler--Test--
    • Short Filler:    -Study--------Filler-----------Test--
    • Long No Filler:  ---------Study-----------Test--
    • Short No Filler: -Study--------Filler----Test--

15
Evidence in Favour of Null: Filler, Error Model
16
Evidence in Favour of Null: Filler, Error Effect Model
17
Evidence in Favour of Null: No Filler, Error Model
18
Evidence in Favour of Null: No Filler, Error Effect Model
19
Evidence in Favour of Null: Frequency, Error Model
20
Evidence in Favour of Null: Frequency, Error Effect Model
21
Evidence in Favour of Null Hypothesis: Summary
22
Used Iteratively
  • Likelihood principle: inference should depend only on the outcome of the experiment, not on the design of the experiment
  • Conventional statistical inference procedures violate the likelihood principle
  • They cannot safely be used iteratively, because increasing the sample size changes the design
  • Bayesian methods (like ours) avoid this problem (see the sketch below)

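A small sketch of why iterative use is safe under these assumptions: with a Beta-Binomial model the posterior depends only on the accumulated counts, not on how many times data collection was paused to look at the results (numbers are illustrative):

```python
from scipy.stats import beta

# First batch: 12 hits out of 15 old items, analysed with a uniform prior.
post1 = beta(1 + 12, 1 + 3)

# Collect more data (9 further hits out of 12 old items) and update post1...
post2 = beta(post1.args[0] + 9, post1.args[1] + 3)

# ...which is exactly the posterior from analysing all 27 trials at once.
post_all = beta(1 + 21, 1 + 6)
print(post2.args == post_all.args)   # True
```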
23
Small Sample Sizes
  • No asymptotic assumptions
  • Applicable even with small samples
  • Note: there could still be problems if there are strong violations of the assumptions

24
Conclusions
  • Sampling variability: variability estimates should depend on how many samples you have per cell
  • Edge corrections should follow from the analysis assumptions
  • Excluding subjects should be done in a principled way
  • Evidence in favour of the null hypothesis should be expressible
  • The method should be usable iteratively without violating the likelihood principle
  • It should apply to small sample sizes, not only in the large-sample limit

25
Evidence in Favour of Null Hypothesis: Filler d'
26
Evidence in Favour of Null Hypothesis: No Filler d'
27
Evidence in Favour of Null Hypothesis: Frequency d'