Paying Attention

Transcript and Presenter's Notes



1
Chapter 3
  • Paying Attention

2
Gameplan
Given our class layout (2 hrs Monday, 1 hr
Wednesday), I am going to flip the order in which
I present the two major topics of Chapter 3.
First we will discuss the issue of dividing
attention, which covers such questions as: what
happens when we have to divide our attention
among multiple tasks? When can we do it, and when
can we not? Where does practice fit in? Then we
will discuss the issue of selective attention,
which focuses primarily on how we selectively
attend to certain inputs, and to what extent we
nonetheless process unattended inputs.
3
Dividing Attention - An Example
Consider the following example for context.
Given the popularity of cell phones, it has
become very common for people to drive cars
while talking on the phone to someone who has
little notion of the driving context. Here, the
driver is dividing their attention between the
task of driving and the task of talking on the
phone. Is this dangerous? If so, under what
conditions is it most dangerous? Why is this an
issue at all?
4
Limitations
Most of us realize that there is at least
something dangerous about driving while talking
on the phone, but why? It is dangerous because we
know that human attention is limited; for some
reason we cannot attend to everything at once.
In fact, we sometimes have difficulty attending
to even two things at once. Why? That is, what is
it about the human information-processing system
that causes such limitations? Specifically, what
is the cause of the limitation, and is there any
way around it? These issues are the focus of
today's lecture.
5
The Notion of Resources
In attempting to tackle these issues, cognitive
psychologists have often drawn on the notion of
limited resources. Consider a money analogy.
Let's say you have $100 in your pocket and you
want to buy stuff. One of the things you want is
$70, another is $60. With your $100 you cannot
buy both; you can buy one or the other, but if
you want both you have to buy one first, then
wait until you have more resources to buy the
other. Perhaps our attention works the same way.
That is, perhaps we have limited attentional
resources and, if we pay so many resources to
attend to one thing, we do not have enough left
over to also attend to something else.
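The money analogy maps directly onto a divisible resource pool. Here is a minimal sketch of the analogy as code; the `ResourcePool` class and its interface are hypothetical illustrations (only the $100/$70/$60 numbers come from the lecture):

```python
# The lecture's money analogy as code: one divisible pool of
# attentional "dollars"; a task is funded only if enough remains.

class ResourcePool:
    def __init__(self, total):
        self.available = total

    def allocate(self, cost):
        """Fund a task; return True iff sufficient resources remained."""
        if cost > self.available:
            return False
        self.available -= cost
        return True

attention = ResourcePool(100)
print(attention.allocate(70))  # True: the $70 task is funded
print(attention.allocate(60))  # False: only $30 left, must wait
```

The second request fails not because the task is impossible, but because the pool is already mostly spent, which is the intuition behind dual-task interference on this view.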
6
Attention, Effort and Arousal
There is a fascinating book by Kahneman (1973)
where he presents his ideas on all this. He
believes that we have some limited mental
resource, the quantity of which is linked to our
state of arousal (higher arousal -> more
resources). He further believes that certain
mental tasks are very effortful, meaning they
draw heavily on these resources. -> Steve's
pupil demonstration. Thus, only so much deep
mental processing can occur at any given time,
and it is this that is the source of our
limitations. -> Back to the car-and-phone
example.
7
General vs. Specific Resources
Given the possibility of limited resources,
another important distinction is between general
versus task-specific resources. The notion of
general resources is that we have some general
pool of resources that we can only divide up
among so many tasks, even if the tasks are very
different from one another; the specifics of the
task are irrelevant. In contrast, there may also
be task-specific resources, such that the
resources available to certain mental processes
are limited. On this view, there will only be
interference between tasks if they draw on the
same processes; tasks drawing on different
processes will not interfere with one another.
8
Evidence for Task Specific Resources
A study by Allport, Antonis, & Reynolds (1972)
had subjects shadow a stream of words presented
to one ear while a second set of items was
simultaneously presented. The second set of
items was presented either 1) as words
auditorily presented in the other ear, 2) as
words visually presented on a screen, or 3) as
pictures visually presented on a screen. A
memory test for this second set of items
revealed the best memory in condition 3, then
condition 2, and the worst memory in condition
1. This pattern is consistent with the
expectations of a task-specific resource notion.
9
Evidence for General Resources
There is a great deal of evidence for the notion
of general resources from everyday life, like
the cell-phone example. There are also several
laboratory demonstrations of this, but what say
we try our own. -> Counting backwards while
clapping: easy vs. hard. Given this, it seems
we have both general resource limitations and
task-specific resource limitations. But is the
money analogy really correct, or do these
limitations really reflect a more mechanistic
view?
10
Pools versus Mechanisms
Until this point we have been talking about
resources as if they were some pool that could be
divided into little pieces, like money or a pool
of water (or gasoline?). However, another
possibility is that the limitations arise as a
result of two tasks needing to share a common
mechanism, a mechanism that can only do one thing
at a time. This is cast in the text as a
difference between divisible versus unitary
resources and, though the distinction is somewhat
subtle, it is important if we are to understand
human cognition. Given this, let's be more
concrete about this distinction, then take a
close look at a unitary resource idea.
11
The Computer as an Analogy
The computers we use also run into the issue of
limited resources, causing them to sometimes do
things more slowly than we would like. In fact,
both sorts of resource limitations affect
computers. RAM (random-access memory) is a very
fast memory store that allows a computer to get
at currently relevant information very rapidly;
it is a divisible resource. The processor itself
is where the commands are run; it can only work
on one set of commands at a time and, therefore,
must time-share among the current jobs, switching
from one to the other; it is a unitary resource.
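The unitary-resource half of the analogy can be sketched as a round-robin scheduler: one processor, several jobs, and progress only by switching. This is a hypothetical toy, not how any real operating system schedules work:

```python
# One unitary processor time-sharing jobs round-robin: it can only work
# on one job at a time, so it alternates, giving each a small quantum.

def run_round_robin(jobs, quantum=2):
    """jobs: dict of name -> units of work left. Returns finish order."""
    finish_order = []
    while jobs:
        for name in list(jobs):          # snapshot: we mutate jobs below
            jobs[name] -= quantum        # the processor works on one job
            if jobs[name] <= 0:          # job complete
                del jobs[name]
                finish_order.append(name)
    return finish_order

# "drive" needs 5 units of work, "talk" needs 3: the processor
# alternates between them rather than doing both at once.
print(run_round_robin({"drive": 5, "talk": 3}))  # ['talk', 'drive']
```

Neither job finishes any sooner by being run "simultaneously"; the single mechanism just switches back and forth, which is the unitary-resource intuition.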
12
A Unitary Response Selector - 1
Imagine the following experiment: across a
number of trials the following events occur.
[Diagram: S1 (e.g., the digit 2) calls for
response R1 ("respond 1 or 2"); shortly
afterwards, S2 (e.g., the letter B) calls for
response R2 ("respond A or B").]
If only S2 had been presented, let's say that an
R2 response could typically be made in 300 ms.
If S2 is presented just after S1 (say, 150 ms
later), R2 responses would slow, maybe to
something like 400 ms.
13
A Unitary Response Selector - 2
This slowdown of R2 responses is not surprising,
given that we know humans have difficulty
processing and responding to two stimuli at once.
But what is causing the slowdown? Said another
way, where is the bottleneck? One hypothesis is
that the bottleneck is caused by a
response-selector mechanism that examines the
information passed on to it by basic perceptual
mechanisms, then assigns the appropriate response
to be made; the hypothesis is that this mechanism
can only deal with one stimulus at a time.
According to this view, R2 responses are slow
because, after basic perceptual processing of S2,
the system must wait for the response selector to
be free before it can emit R2.
14
A Prediction
If this is true, and the bottleneck is caused by
a response-selector mechanism, then a
manipulation that slows the perceptual processing
of S2 should have little effect, because the
extra processing could be dealt with in the slack
period that occurs while waiting for the response
selector. Specifically, say that degrading the
stimulus would usually slow an A/B decision by
about 100 ms (e.g., a clear B is categorized in
300 ms, a degraded B in 400 ms).
15
The Prediction Confirmed
When the same degradation manipulation is
performed in the context of the
S2-on-the-heels-of-S1 technique, the effect of
degradation is much smaller, or even nonexistent.
Why? According to the response-selector view,
this is because the degraded perceptual input of
S2 can be cleaned up while the processing of S2
waits for the response selector to be finished
with S1; thus, S2 can undergo pre-bottleneck
processing while it waits, it just can't go
beyond that. This suggests that at least some of
our limitations may be caused by a need to share
critical one-task-at-a-time mechanisms. It
remains unclear whether our limitations also
reflect resource-pool issues, or just shared
mechanisms.
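The response-selector account can be captured in a tiny timing model. This is a hypothetical sketch; all stage durations are my own illustrative assumptions, chosen only to reproduce the lecture's example numbers:

```python
# Single-bottleneck sketch of the dual-task slowdown. Times are in ms,
# on a clock that starts at S1 onset; stage durations are illustrative.

def r2_latency(soa, perceive2=100, select2=150, execute2=50,
               selector_free_at=350):
    """R2 reaction time measured from S2 onset.

    soa: delay between S1 and S2 onsets.
    selector_free_at: when S1 releases the unitary response selector.
    S2's perceptual stage runs in parallel with S1's processing, but
    S2's response selection must wait for the selector to be free.
    """
    s2_perceived = soa + perceive2
    start_selection = max(s2_perceived, selector_free_at)
    return start_selection + select2 + execute2 - soa

# S2 alone (selector free): 100 + 150 + 50 = 300 ms.
print(r2_latency(soa=10_000, selector_free_at=0))       # 300

# S2 arrives 150 ms after S1: selection waits until 350 ms -> 400 ms.
print(r2_latency(soa=150))                              # 400

# Degrading S2 adds 100 ms of perceptual work. Alone, RT rises to
# 400 ms, but at a 150 ms SOA the extra work fits into the slack while
# waiting for the selector, so RT is still 400 ms: the effect vanishes.
print(r2_latency(soa=10_000, selector_free_at=0, perceive2=200))  # 400
print(r2_latency(soa=150, perceive2=200))                         # 400
```

The key qualitative point is the last pair of lines: pre-bottleneck work is free whenever it fits inside the slack period, which is exactly the prediction the degradation experiment confirmed.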
16
Back to Phones and Cars
Thus, it is possible that the interference we
observe when we try to do two things at once may
be due to the need to share some mechanism, such
as a response selector. But why is that sometimes
a problem and sometimes not? For example, often
we can talk and drive without any seeming
interference; both tasks seem to be performed
well. However, when the driving situation changes
to one that is different from typical driving,
interference occurs. Why? In order to understand
the complexities of task interference, we now
need to turn to the role of practice.
17
Practice - Time for Another Experiment
[Display sequence: first the targets for this
trial, e.g., J T (can be one or more); then a
fixation point; then a search array, e.g.,
P W D S.]
Is one of the targets present in the search
array, yes or no?
Participants performed this task over and over
and over, and two different conditions were
critical ...
18
Consistent versus Varied Mappings
In the varied mapping condition, which letters
were targets and which were distractors varied
from trial to trial:
  • trial 52: search for F G in R T G Q (yes)
  • trial 53: search for Q in F Z X R (no)
  • trial 54: search for X T in S P T W (yes)
In the consistent mapping condition, certain
letters were always targets, and other letters
were always distractors:
  • trial 32: search for R in R D F W (yes)
  • trial 33: search for G P E in F Q W O (no)
  • trial 34: search for E P in O P F Q (yes)
Predictions from the audience?
19
Results
In the varied mapping condition, participants did
not get much faster with practice, and they
always showed a large effect of the number of
targets (searching more slowly when looking for
more targets suggests serial search). However, in
the consistent mapping condition, participants
became very fast with practice, and by the end of
practice the number of targets was irrelevant
(participants could search for three targets as
fast as one). This suggests that participants had
learned to automatically associate certain
stimuli (the targets) with certain responses
(yes) and could now make such responses without
the need of a response selector; evidence from
switching the mappings supports this.
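The two RT patterns can be caricatured with a toy model. The intercept and slope below are hypothetical numbers of my own, meant only to show the qualitative contrast between serial comparison and flat, automatic detection:

```python
# Toy model of the two search conditions (all parameters are
# illustrative assumptions, not the actual data).

def varied_rt(n_targets, set_size, base=400, per_comparison=40):
    # serial search: compare each memorized target against each
    # item in the display, so RT grows with the product of the loads
    return base + per_comparison * n_targets * set_size

def consistent_rt(n_targets, set_size, base=400):
    # after extended consistent-mapping practice, target detection is
    # automatic: RT is unaffected by memory load or display size
    return base

print(varied_rt(1, 4), varied_rt(3, 4))          # 560 880: load matters
print(consistent_rt(1, 4), consistent_rt(3, 4))  # 400 400: load irrelevant
```

The flat function in the consistent condition is the behavioral signature of automaticity discussed on the next slide.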
20
The Notion of Automaticity
Thus, it appears that if we consistently make
some certain response in some specific stimulus
context, we can eventually come to emit the
response without using the response selector.
Such automatic (or reflexive) responses are
generally assumed to be:
-> very fast
-> generally non-interfering with other tasks
-> not requiring of awareness
OK, so let us again return to the cars-and-phones
example; now we can explain why sometimes driving
interferes with talking, and sometimes does not.
21
Where does the response come from?
But if the response selector is no longer making
the response, what part of the cognitive system
is? To answer this, consider the following
questions: -> What is 4x8? 3x7? 5x3? 127x4?
A relevant experiment (Logan, 1988) had
participants learn to do a new form of arithmetic
called alphabet arithmetic:
-> B + 2 = D? G + 2 = J? K + 2 = M? or ...
-> C + 5 = I? Q + 5 = V? E + 5 = J?
With practice, participants became very fast at
making true/false decisions to these, and the
effect of difficulty disappeared. Why?
22
Efficiency or Memory?
Two possibilities come to mind: 1) Participants
simply become very efficient at selecting the
appropriate response; thus, it is the same basic
process, it just gets faster and better. 2)
Participants retrieve the response from memory,
rather than using the limited-capacity
response-selector process; this frees up the
response selector for other things and allows
efficient responding (though it may be prone to
errors). Given the fact that the difficulty
effect disappears (as opposed to simply getting
smaller), Logan has argued that the second of
these options is correct, though it is still an
active issue.
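Logan's memory-based account can be sketched as a race simulation. Every numeric choice here (the retrieval-time distribution, the algorithm times, the instance counts) is an illustrative assumption; only the qualitative race logic follows Logan (1988):

```python
# Sketch of an instance-theory race (after Logan, 1988): each prior
# encounter stores an instance; observed RT is the minimum of the
# algorithm's time and the fastest instance retrieval.
import random

def mean_rt(n_instances, algorithm_rt, n_sims=500):
    rng = random.Random(42)
    total = 0.0
    for _ in range(n_sims):
        retrievals = [rng.uniform(200, 2000) for _ in range(n_instances)]
        total += min(retrievals + [algorithm_rt])  # fastest process wins
    return total / n_sims

# Before practice (no stored instances) the algorithm runs, so harder
# problems (longer algorithm time) are slower.
print(mean_rt(0, algorithm_rt=800), mean_rt(0, algorithm_rt=1400))

# After practice (many instances) memory nearly always wins the race,
# so the difficulty effect disappears rather than merely shrinking.
print(mean_rt(60, algorithm_rt=800), mean_rt(60, algorithm_rt=1400))
```

Note that retrieval time does not depend on `algorithm_rt` at all, which is why the difficulty effect vanishes once memory dominates: exactly the pattern Logan took as evidence against the "same process, just faster" account.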
23
The Current View then is ...
The first few times some response to a stimulus
is made, the response is generated via a
limited-capacity response selector. However, if
the response made in some given context remains
constant, memory may take over and automatically
provide the response to the stimulus. These
automatic memory-based responses are much faster,
but are detached from current goals; thus, they
are not as easy to control. This means we seem to
have two modes of responding to familiar stimulus
contexts: a slow but controlled response-selector
mode, or a fast but relatively uncontrolled
memory mode.
24
The Stroop Task as an Example
First described by Stroop (1935), the following
task shows how hard it is (impossible?) to
control a response that has become largely
automatic. The task is simple: colour words will
appear in coloured ink. All you have to do is say
aloud the colour of the ink the word is written
in, while ignoring the word itself. Easy, right?
As we will see, it is very hard to ignore words
because, for us, reading is such an automatic
process that the sound of a word automatically
comes to mind even when we don't want it to,
making it difficult to name ink colours when the
word is incongruent with the ink colour.
25
The Stroop Task as an Example
  • RED
  • GREEN
  • BLUE
  • BLUE
  • RED
  • BLUE
  • GREEN
  • RED
  • RED

These are termed congruent stimuli because the
word and the ink color are both associated with
the same verbal response.
26
The Stroop Task as an Example
  • RED
  • RED
  • GREEN
  • BLUE
  • RED
  • BLUE
  • BLUE
  • GREEN
  • RED

These stimuli are termed incongruent because the
response associated with the word is different
from that associated with the ink colour.
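The congruency logic can be made concrete with a two-line toy model. The base time and interference cost are hypothetical numbers, not Stroop's (1935) data:

```python
# Toy Stroop model: naming the ink colour incurs an extra cost when the
# (automatically read) word pulls toward a different response.
# Numbers are illustrative assumptions.

def naming_rt(word, ink_colour, base=600, interference=120):
    congruent = (word == ink_colour)
    return base if congruent else base + interference

print(naming_rt("RED", "RED"))    # 600: congruent
print(naming_rt("RED", "BLUE"))   # 720: incongruent
```

The cost appears only on incongruent items because only there does the automatic word-reading response conflict with the required colour-naming response.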
27
OK, on to Selective Attention
So far we have been talking about the limitations
that occur when humans try to process two bits of
incoming information at once. Given that these
limitations do occur, and that we are aware of
them, it is not surprising that we often try to
focus our processing on some subset of the
available stimuli, typically the stimuli that are
deemed most relevant. Thus, we must selectively
attend to the relevant stimuli while attempting
to ignore the irrelevant stimuli. What are the
processes that allow us to do this? Investigators
interested in selective attention attempt to
answer this question.
28
A Brief History of Selective Attention
Early studies of selective attention tended to
use the dichotic listening paradigm in which
stimuli (sometimes words, sometimes sentences)
are presented to both the left and right
ears. Subjects are asked to shadow (repeat) the
stimuli presented to one of the ears (the
attended ear) while ignoring the stimuli
presented to the other ear (the unattended
ear). With this paradigm we can use both online
and memory measures to assess the processing of
the attended and unattended information.
29
Broadbent's Conclusions
One of the first to create a theory of attention
based on dichotic listening results was Donald
Broadbent. Broadbent's technique was to alter the
character of the unattended message in some
manner; afterwards, he would ask subjects if they
noticed any change. He found that subjects only
noticed gross changes in the physical qualities
of the unattended stimulus (e.g., if the gender
of the voice changed) but did not notice deeper
changes, such as a switch of language or even a
switch to backward speech. He concluded that
unattended information is filtered at a basic
perceptual level (so-called early selection).
30
The Leaky Filter Hypothesis
A number of subsequent results suggested that the
filter was not as early and complete as Broadbent
proposed. Moray showed that people would notice
their own names if they were presented in the
unattended channel, even when the names had the
same physical characteristics as the other
unattended info. Other studies suggested that
rude or nasty words were also noticed (e.g.,
SHIT, FUCK, BASTARD). These studies suggested
that highly salient stimuli could make it past
the perceptual filter; clearly, unattended
stimuli were processed to some extent beyond just
perceptual features.
31
Evidence for Deeper Processing Yet!
A number of subsequent studies then suggested
that unattended information was actually
processed fairly deeply, perhaps including deep
semantic processing. A famous example is
Treisman's demonstration that subjects can be
enticed to switch the ear they are shadowing if
the semantic content of the attended ear moves to
the unattended ear. Other demonstrations include
Eich's demonstration with homophonic words (e.g.,
READ vs. REED) and a similar study where the
perceived meaning of polysemous words (e.g., BAT)
is biased by unattended information.
32
Attenuation versus Inhibition
Up to this point, the debate is really about the
degree to which unattended information is
processed, with the extremes termed early
selection and late selection theories,
respectively. However, the debate changed
substantially in the late '70s and early '80s
with the discovery of a phenomenon termed
negative priming. The basic negative-priming
phenomenon is that subjects respond
disproportionately slowly to attended items that
were previously unattended; this suggests that
unattended info may be inhibited in some manner
(a qualitative difference).
33
Negative Priming
[Example Stroop lists: in list 2, the
ignored-repetition list, each ink colour to be
named matches the word that was ignored on the
previous item; list 1, the control list, has no
such relationship.]
The negative-priming phenomenon was first
discovered by Dalrymple-Alford & Budayr (1966)
but lay largely dormant until work by Tipper and
colleagues in the '80s (e.g., Tipper, 1985).
Generally speaking, the phenomenon is that people
are slower to make some response if the stimulus
associated with that response was just
experienced as a distractor. For example, in the
Stroop context, responses would be slower to list
2 (the ignored-repetition list) than to list 1
(the control list).
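The ignored-repetition logic can be sketched the same way as the earlier Stroop toy; the RT numbers are hypothetical:

```python
# Toy negative-priming model: naming an ink colour is slowed when that
# colour was the ignored word on the previous trial. Numbers are
# illustrative assumptions.

def probe_rt(target_ink, prior_ignored_word, base=650, np_cost=40):
    ignored_repetition = (target_ink == prior_ignored_word)
    return base + (np_cost if ignored_repetition else 0)

# Prime trial: name the ink while ignoring the word "GREEN".
print(probe_rt("GREEN", prior_ignored_word="GREEN"))  # 690: slower
print(probe_rt("BLUE", prior_ignored_word="GREEN"))   # 650: control
```

The slowdown attaches to whatever was just ignored, which is why it was taken as evidence that ignored information is actively inhibited rather than merely receiving less activation.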
34
A Dual-Process Selection Mechanism
The claim that arose was that selective attention
involves a dual-process mechanism whereby
relevant information is activated whereas
distracting information is inhibited. According
to this view, the reason for the slowdown in the
Stroop context is that the distracting word (and
all of its associated response information) is
inhibited and, as a result, when that inhibited
response is required on the next trial, it is
slower than usual. This view stands in contrast
to the early- and late-selection accounts, which
generally assumed that distracting information
was activated, just to a lesser extent than
attended information.
35
A Debate in Process
There is currently an active debate concerning
whether negative-priming results truly reflect a
dual-process selective attention mechanism. Yours
truly is one of the debaters (against the
dual-process notion), and one of the experiments
we have conducted to test the notion is one where
we force participants to attend to the
distracting information prior to selectively
responding. If negative priming is due to having
to respond to a previously unattended stimulus,
then it should disappear when attention is forced
onto the distractor before it occurs as a target
on the subsequent trial. It does not, as shown on
the next slide.
36
Non-Relative versus Relative Procedures
(MacDonald, Joordens, & Seergobin, 1998)
Example displays (word pairs): HOUR MONTH;
SECOND MONTH; MINUTE HOUR; MINUTE HOUR.
Non-relative procedure: name the green item
(30 ms slowdown).
Relative procedure: name the item corresponding
to the longer unit of time (100 ms slowdown).
37
Selective Attention vs. Responding
All of this illustrates that there was a constant
confound in previous designs: the stimulus that
was not attended to was also not responded to, so
it is unclear whether the slowdown was due to the
lack of attention or to the need to withhold a
response. In our study, the distractors were
attended to but not responded to, and huge
negative priming was observed. This suggests that
negative priming really reflects inhibition at
the response-selection level, not at the
selective-attention level, and that the most
negative priming is observed when one must
withhold a response to an item that was attended.
38
Summary of Chapter 3
There are resource limitations on human
perception. These limitations are likely due to
the need to time-share various processes, such as
the response-selector process. Practice can get
around these limitations by retrieving responses
from memory instead of using the response
selector. These limitations also force us to
selectively attend to our environment; currently
it seems as though the mechanism that does this
allows the most activation from stimuli we are
attending to, while still allowing some
activation to flow in from the unattended
channels as well (though at an attenuated level
of activation).