1
Studying Visual Attention with the Visual Search Paradigm
Marc Pomplun
Department of Computer Science
University of Massachusetts at Boston
E-mail: marc@cs.umb.edu
Homepage: http://www.cs.umb.edu/marc/
2
Studying Visual Attention with the Visual Search Paradigm
  • Overview
  • The Feature Integration Theory
  • Visual Search
  • The Guided Search Theory
  • The Area Activation Model

3
The Binding Problem
  • Different features of the visual scene are coded
    by separate systems
  • e.g., direction of motion, location, color and
    orientation
  • How do we know this?
  • Anatomical and neurophysiological evidence
  • Brain Imaging (fMRI & PET)
  • So how do we experience a coherent world?

4
Feature Integration Theory (Treisman et al.)
  • Attention is used to bind features together
  • Code one object at a time on the basis of its
    location
  • Bind together whatever features are attended at
    that location

5
Feature Integration Theory
  • Sensory features (color, size, orientation,
    etc.) are coded in parallel by specialized modules
  • Modules form two kinds of maps
  • Feature maps (e.g., color maps, orientation maps
    etc.)
  • A master map of locations

6
Feature Integration Theory
  • Feature maps contain two kinds of information
  • - presence of a feature anywhere in the field
    (there's something red out there)
  • - implicit spatial information about the feature
  • Activity in the feature maps can tell us which
    features are contained in the visual scene.
  • It cannot tell us which other features the green
    blob has.
  • The master map codes the location of features.

7
Feature Integration Theory
  • The basic idea of the FIT is that visual
    attention is used for
  • Locating features
  • Binding appropriate features together
  • There are two stages of object perception
  • Preattentive stage: Individual features are
    extracted in parallel across the whole visual
    scene.
  • Attentive stage: When attention is directed to
    a location, the local features are combined
    to form a whole (see the sketch below).
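A minimal sketch of this two-stage idea in Python (hypothetical toy code, not from the presentation; the scene layout and feature values are invented): features are first coded in parallel into separate maps, and only attending to a location binds the features found there into one object description.

```python
# Toy scene: item locations with their (color, orientation) features.
scene = {
    (1, 2): {"color": "red",   "orientation": "vertical"},
    (3, 0): {"color": "green", "orientation": "horizontal"},
}

# Preattentive stage: each feature dimension is coded in its own map,
# in parallel across the whole scene (simulated here by separate passes).
color_map       = {loc: feats["color"]       for loc, feats in scene.items()}
orientation_map = {loc: feats["orientation"] for loc, feats in scene.items()}

# The feature maps alone can signal *that* a feature is present somewhere...
print("red present anywhere:", "red" in color_map.values())   # True

# ...but they cannot say which orientation the red item has. Binding
# requires the attentive stage: select one location on the master map
# and read out every feature map at that location.
def attend(location):
    """Attentive stage: bind all features registered at one location."""
    return {"color": color_map.get(location),
            "orientation": orientation_map.get(location)}

print(attend((1, 2)))   # {'color': 'red', 'orientation': 'vertical'}
```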

8
(No Transcript)
9
Feature Integration Theory
  • Attention moves within the location map
  • Focus of attention selects whatever features are
    linked to that location
  • Features of other objects are excluded
  • Attended features are then entered into the
    current temporary object representation

10
Feature Integration Theory
  • Empirical evidence for the FIT has been obtained
    through
  • Visual search tasks
  • Illusory conjunctions
  • We will focus on the paradigm of visual search.

11
Visual Search
12
Feature Search
  • Is there a red T in the display?

(Search display: an array of T's; the target is defined by a single feature.)
According to FIT, this should not demand attention.
The target should pop out.
13
Conjunction Search
  • Is there a red T in the display?

(Search display: an array of T's and X's; the target is now defined by its shape and color.)
This involves binding features and so should demand attention.
One needs to attend to each item until the target is found.
14
Feature Search
  • Changing the number of distractors

15
Conjunction Search
  • Changing the number of distractors

16
Visual Search Experiments
  • Record time taken to determine whether target is
    present or not
  • Vary the number of distractors
  • Search for features should be independent of the
    number of distractors
  • Conjunction search should get slower with more
    distractors (see the sketch below)
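A toy illustration of these predictions (a hypothetical sketch; timing parameters such as base_ms and per_item_ms are invented, not fitted values): feature search is modeled as parallel detection with roughly constant time, conjunction search as a serial self-terminating scan whose expected cost grows with the number of items.

```python
import random

def feature_search_rt(n_items, base_ms=450):
    """Parallel pop-out: RT is roughly independent of display size."""
    return base_ms + random.gauss(0, 20)

def conjunction_search_rt(n_items, base_ms=450, per_item_ms=40,
                          target_present=True):
    """Serial self-terminating search: on target-present trials roughly
    half of the items are inspected on average, on target-absent trials
    all of them."""
    inspected = (n_items + 1) / 2 if target_present else n_items
    return base_ms + per_item_ms * inspected + random.gauss(0, 20)

for n in (4, 8, 16, 32):
    print(n, round(feature_search_rt(n)), round(conjunction_search_rt(n)))
# Feature RTs stay flat; conjunction RTs grow linearly with set size,
# and the target-absent slope is about twice the target-present slope.
```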

17
Visual Search
Feature targets pop out → flat display size function.
Conjunction targets demand serial search → significant slope.
18
Problem with FIT: Pop-Out of Conjunction Targets
  • A moving X pops out of a display of moving O's
    and static X's

Target is defined by a conjunction of movement
and form
At least some conjunctions do not require
focal attention
19
Guided Search Theory
  • The Guided Search Theory (GST) is similar to the
    FIT in that it also assumes two successive stages
    of visual search performance:
  • a preattentive, parallel stage
  • an attentive, serial stage
  • However, the main difference from FIT is that GST
    assumes that the preattentive stage obtains spatial
    saliency information, which is then used to guide
    attention in the serial stage.

20
Guided Search Theory
  • According to GST, saliency is encoded in an
    additional map, called the saliency map.
  • The saliency map is created during the
    preattentive stage and can combine multiple
    features if necessary.
  • In the subsequent serial search process,
    attention is first directed to the highest peak
    in the saliency map, then to the second-highest,
    and so on (see the sketch below).
  • This visual guidance allows efficient search even
    for some conjunction targets.
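A minimal sketch of this guidance scheme (hypothetical code, not an implementation of GST itself; the maps and weights are invented): feature maps are combined into a saliency map preattentively, and attention then visits locations in order of decreasing saliency until the target is verified.

```python
import numpy as np

def build_saliency(feature_maps, weights):
    """Preattentive stage: weighted combination of feature activation maps."""
    return sum(w * m for m, w in zip(feature_maps, weights))

def guided_search(saliency, is_target):
    """Serial stage: visit peaks of the saliency map in descending order."""
    order = np.argsort(saliency, axis=None)[::-1]      # most salient first
    for flat_idx in order:
        loc = tuple(int(i) for i in np.unravel_index(flat_idx, saliency.shape))
        if is_target(loc):                             # attentive check
            return loc
    return None

# Toy example: the color map strongly favors the target's location (2, 3).
color_map = np.zeros((5, 5))
color_map[2, 3] = 1.0
color_map[0, 1] = 0.4
orient_map = np.random.default_rng(0).random((5, 5)) * 0.2
saliency = build_saliency([color_map, orient_map], weights=[1.0, 0.5])
print(guided_search(saliency, is_target=lambda loc: loc == (2, 3)))  # (2, 3)
```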

21
Guided Search Theory
  • Support for the GST comes from eye-movement
    research.
  • Eye-movement recording allows researchers to
    determine the items that a subject looks at
    during visual search.

22
Guided Search Theory
23
Guided Search Theory
  • In the previous example,
  • 80% of fixations were closest to an item
    sharing color with the target,
  • 20% of fixations were closest to an item
    sharing orientation with the target.
  • It seems that the color dimension is guiding the
    subject's visual search process.
  • Of course, due to the imprecision of eye movements
    and their measurement, better statistics are
    necessary to determine the guiding dimension (see
    the sketch below).
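One way such saccadic selectivity could be computed from raw data (a hypothetical sketch with made-up item and fixation coordinates): assign each fixation to its nearest display item and tabulate which target feature that item shares.

```python
import math
from collections import Counter

# Each item: screen position plus the target feature it shares.
items = [((100, 120), "color"), ((300, 80), "orientation"),
         ((220, 240), "color"), ((400, 300), "orientation")]
fixations = [(110, 130), (210, 250), (305, 90), (95, 115)]

def nearest_item(fix):
    """Assign a fixation to the display item closest to it."""
    return min(items, key=lambda it: math.dist(fix, it[0]))

# Saccadic selectivity: proportion of fixations landing closest to items
# that share each feature dimension with the target.
counts = Counter(nearest_item(f)[1] for f in fixations)
total = sum(counts.values())
for feature, n in counts.items():
    print(f"{feature}: {100 * n / total:.0f}% of fixations")
```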

24
Guided Search Theory
  • In visual search tasks, subjects are usually
    guided by one target feature or a combination of
    target features.
  • This supports the idea of GST that preattentively
    derived information from multiple dimensions
    guides and thereby facilitates the subsequent
    serial search process.

25
Guided Search Theory
  • There are two problems with GST
  • According to GST, grouping the guiding
    distractors should result in reduced guidance
    (less bottom-up activation). However, the
    opposite happens.
  • There is no quantitative implementation of a
    Guided Search model that could predict
    guidance, i.e., saccadic selectivity for a given
    search task.
  • To overcome these problems, we proposed the Area
    Activation Model of saccadic selectivity in
    visual search tasks.

26
Area Activation
  • Assumptions
  • Processing resources during a fixation are
    distributed like a two-dimensional Gaussian
    function centered at fixation.
  • Fixation positions are chosen to allow a maximum
    of information processing according to the
    assumed processing resources.
  • Scan paths are chosen in such a way that they
    connect the optimal fixation positions with
    minimal eye-movement cost (path length); see the
    sketch below.
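A rough sketch of these three assumptions (hypothetical code, not the published Area Activation implementation; grid size, sigma, and the suppression radius are invented): item positions are blurred with a 2D Gaussian to form an activation map, fixation positions are taken at its highest peaks, and a scan path connects them greedily to keep path length low.

```python
import math
import numpy as np

def activation_map(item_positions, shape=(60, 80), sigma=6.0):
    """Sum of 2D Gaussians centered on the display items: how much
    information a fixation at each position could process."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    act = np.zeros(shape)
    for (iy, ix) in item_positions:
        act += np.exp(-((ys - iy) ** 2 + (xs - ix) ** 2) / (2 * sigma ** 2))
    return act

def choose_fixations(act, n_fixations):
    """Pick the n highest peaks as fixation positions (greedy selection
    with simple suppression around each chosen peak)."""
    act = act.copy()
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(act), act.shape)
        fixations.append((int(y), int(x)))
        act[max(0, y - 10):y + 10, max(0, x - 10):x + 10] = 0  # suppress region
    return fixations

def scan_path(start, fixations):
    """Order fixations by greedy nearest neighbor to keep path length low."""
    path, remaining = [start], list(fixations)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(path[-1], p))
        path.append(nxt)
        remaining.remove(nxt)
    return path

items = [(10, 15), (12, 60), (45, 20), (50, 65), (30, 40)]
act = activation_map(items)
fixes = choose_fixations(act, n_fixations=3)   # fixation count given in advance
print(scan_path((30, 0), fixes))
```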

27
Area Activation - Strong Guidance
28
Area Activation - Strong Guidance
29
Area Activation - Weak Guidance
30
Area Activation - Weak Guidance
31
Area Activation - Empirical Results
32
Area Activation
  • Problems with the Area Activation Model
  • The empirical number of fixations per trial needs
    to be known in advance.
  • Only very basic factors influencing visual
    search have been implemented so far.
  • Nevertheless, Area Activation can be considered a
    first step towards a quantitative model of
    visual search.

33
Conclusions
We have discussed how the visual search paradigm
can be employed to investigate the mechanisms of
visual attention. Various models of attention
have been developed and evaluated with visual
search tasks; in more recent studies, this was
done based on eye-movement data. In the next
lecture, we will look at slightly different
paradigms, which are aimed at identifying factors
that determine visual scan paths. See you then!