1
The Interactive Activation Model
2
Ubiquity of the Constraint Satisfaction Problem
  • In sentence processing
  • I saw the Grand Canyon flying to New York
  • I saw the sheep grazing in the field
  • In comprehension
  • Margie was sitting on the front steps when she
    heard the familiar jingle of the Good Humor
    truck. She remembered her birthday money and ran
    into the house.
  • In reaching, grasping, typing

3
(No Transcript)
4
Graded and variable nature of neuronal responses
5
Lateral Inhibition in the Eye of Limulus (Horseshoe Crab)
6
Findings Motivating the IA Model
  • Reicher's experiment
  • Used pairs of 4-letter words differing by one
    letter
  • READ ROAD
  • The critical letter is the letter that differs.
  • Critical letters occur in all four positions.
  • Same critical letters occur alone or in scrambled
    strings
  • _E__ _O__ EADR EODR
  • The word superiority effect (Reicher, 1969)
  • Subjects identify letters in words better than
    single letters or letters in scrambled strings.
  • The pseudoword advantage
  • The advantage over single letters and scrambled
    strings extends to pronounceable non-words (e.g.
    LEAT LOAT)
  • The contextual enhancement effect
  • Increasing the duration of the context or of the
    target letter facilitates correct identification.

(Figure: Percent Correct for words (W), pseudowords (PW), scrambled strings (Scr), and single letters (L))
7
READ
READ
8
The Contextual Enhancement Effect
(Figure: Percent Correct; Ratio)
9
Questions
  • Can we explain the Word Superiority Effect and
    the Contextual Enhancement Effect as a
    consequence of a synergistic combination of
    top-down and bottom-up influences?
  • Can the same processes also explain the
    Pseudoword advantage?
  • What specific assumptions are necessary to
    capture the data?
  • What can we learn about these assumptions from
    the study of model variants and effects of
    parameter changes?
  • Can we derive novel predictions?
  • What do we learn about the limitations as well as
    the strengths of the model?

10
Approach
  • Draw on ideas from the way neurons work
  • Keep it as simple as possible

11
The Interactive Activation Model
  • Feature, letter and word units.
  • Activation is the system's only currency
  • Mutually consistent items on adjacent levels
    excite each other
  • Mutually exclusive alternatives inhibit each
    other.
  • Response selected from the letter units in the
    cued location according to the Luce choice rule
    (the formula is sketched below)
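The equation for the rule did not survive transcription. A standard statement of the Luce choice rule as used in the IA model (McClelland & Rumelhart, 1981), with response strength based on a time-averaged activation \bar a_i and a scaling parameter \mu, is

    p(R_i) = e^{\mu \bar a_i} / \sum_j e^{\mu \bar a_j}

where the sum runs over all letter units in the cued position; the exact symbols on the original slide may differ.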

12
IAC Activation Function
Calculate net input to each unit:
    net_i = \sum_j o_j w_{ij}
Set outputs:
    o_j = a_j
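The rule that turns net input into a change of activation appeared only as an image on this slide. The sketch below is the standard IAC update from McClelland and Rumelhart (1981) and the PDP handbook, not a transcription of the slide; the parameter names and default values (rest, min_act, max_act, decay, step) are illustrative.

import numpy as np

def iac_step(a, W, ext, rest=-0.1, min_act=-0.2, max_act=1.0, decay=0.1, step=0.1):
    # a   : current activations of all units
    # W   : weight matrix, W[i, j] = weight from unit j to unit i
    # ext : external (stimulus-driven) input to each unit
    o = a.copy()                      # slide: o_j = a_j (the published model transmits max(a_j, 0))
    net = W @ o + ext                 # net_i = sum_j o_j w_ij plus external input
    # Excitatory net input drives a unit toward its maximum, inhibitory net
    # input toward its minimum, and decay pulls it back toward its rest level.
    delta = np.where(net > 0,
                     net * (max_act - a),
                     net * (a - min_act)) - decay * (a - rest)
    return np.clip(a + step * delta, min_act, max_act)

Iterating this update, with positive weights between mutually consistent units on adjacent levels and negative weights between mutually exclusive alternatives, gives the interactive, competitive dynamics described on the previous slide.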
13
The Interactive Activation Model
14
How the Model Works: Words vs. Single Letters
15
Rest levels for features and letters: -0.1.
Rest level for words: frequency dependent, between -0.001 and -0.05.
16
Word and Letter Level Activations for Words and
Pseudowords
Idea of a conspiracy effect, rather than
consistency with rules, as the basis of
performance on regular items.
17
Role of Pronounceability vs. Neighbors
  • Three kinds of pairs
  • Pronounceable SLET-SPET
  • Unpronounceable/good SLCT-SPCT
  • Unpronounceable/bad XLQJ-XPQJ

18
Simulation of Contextual Enhancement Effect
19
The Multinomial IA Model
  • Very similar to Rumelhart's 1977 formulation
  • Based on a simple generative model of displays in
    letter perception experiments (see the sketch
    after this list)
  • Experimenter selects a word,
  • Selects letters based on word, but with possible
    random errors
  • Selects features based on letters, again with
    possible random error, AND/OR
  • Visual system registers features with some
    possibility of error
  • Some features may be missing, as in the WOR?
    example above
  • Units without parents have biases equal to log of
    prior
  • Weights, defined top-down, correspond to the log
    of p(C|P), where C = child, P = parent
  • Units take on probabilistic activations based on
    softmax function
  • only one unit allowed to be active within each
    set of mutually exclusive hypotheses
  • A state corresponds to one active word unit and
    one active letter unit in each position, together
    with the provided set of feature activations.
  • If the priors and weights correspond to those
    underlying the generative model, then states are
    sampled in proportion to their posterior
    probability
  • The state of the entire system is a sample from
    the joint posterior
  • The state of the word or letter units in a given
    position is a sample from the marginal posterior
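A minimal sketch of the generative model described above, assuming a toy lexicon, a made-up 5-bit feature code, and illustrative error rates (none of these specifics come from the slides); registration noise and missing features are omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)

lexicon = ["WORK", "WORD", "WEAR", "CARE"]   # toy lexicon (illustrative)
prior = np.array([0.4, 0.3, 0.2, 0.1])       # p(word): experimenter's choice probabilities
letter_error = 0.05                          # p(wrong letter | word)
feature_error = 0.02                         # p(feature flipped | letter)
alphabet = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def letter_features(letter):
    # Stand-in feature code: a 5-bit pattern derived from the letter's index
    idx = ord(letter) - ord("A")
    return np.array([(idx >> b) & 1 for b in range(5)], dtype=float)

def generate_display():
    word = rng.choice(lexicon, p=prior)      # experimenter selects a word
    letters = [l if rng.random() > letter_error
               else rng.choice(alphabet)     # letters based on the word, with random errors
               for l in word]
    features = []
    for l in letters:                        # features based on letters, again with random error
        f = letter_features(l)
        flip = rng.random(f.size) < feature_error
        features.append(np.where(flip, 1.0 - f, f))
    return word, letters, features

With biases equal to the log priors and top-down weights equal to log p(C|P), sampling unit states with the softmax described above would, as the slide states, produce states in proportion to their posterior probability under this generative model.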

Subscript i indexes one member of a set of
mutually exclusive hypotheses; the summation index
i' runs over all members of the set of mutually
exclusive alternatives.
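The softmax itself appeared only as an image. In its standard form (writing the summation index as i', which is an assumption here) it is

    p_i = e^{net_i} / \sum_{i'} e^{net_{i'}}

with exactly one unit in each set of mutually exclusive hypotheses active on any given sample.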
20
Input and activation of units in PDP models
  • General form of unit update
  • Simple version used in cube simulation
  • An activation function that links PDP models to
    Bayesian ideas

(Figure: a_i or p_i plotted as a function of net_i)
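The equations on this slide did not survive transcription. The activation function usually cited as the link between PDP models and Bayesian ideas is the logistic function (an assumption here, not a transcription of the slide):

    a_i = 1 / (1 + e^{-net_i})

If net_i is the log posterior odds for the hypothesis coded by unit i (a bias equal to the log prior odds plus weighted evidence terms), then a_i equals the posterior probability p_i of that hypothesis, which would explain the axis label "a_i or p_i" in the figure.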