1
Chapter 6
  • Schedules of reinforcement

2
Schedules of reinforcement
Continuous Reinforcement Schedule
Every response is followed by the delivery of a reinforcer (can also be called FR 1: one reward for one response).
Partial (Intermittent) Reinforcement Schedule
Not every response is followed by the delivery of a reinforcer; that is, responses are reinforced "intermittently" according to the rule specified by the schedule.
3
Four Simple Partial Reinforcement Schedules
1. Fixed Interval
2. Variable Interval
3. Fixed Ratio
4. Variable Ratio
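The sketch below (Python; not part of the original slides, and all class and method names are illustrative) shows one way these four rules could be programmed: each object answers the question "should this response be reinforced?" The VR requirements are drawn from a uniform range and the VI intervals from an exponential distribution purely for illustration.

import random

class FixedRatio:
    """FR n: every nth response is reinforced."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def record_response(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True            # deliver the reinforcer
        return False

class VariableRatio:
    """VR n: the required number of responses varies, averaging n."""
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.requirement = random.randint(1, 2 * n - 1)   # illustrative choice

    def record_response(self):
        self.count += 1
        if self.count >= self.requirement:
            self.count = 0
            self.requirement = random.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """FI t: the first response after t seconds have elapsed is reinforced."""
    def __init__(self, t):
        self.t = t
        self.start = 0.0

    def record_response(self, now):
        if now - self.start >= self.t:
            self.start = now       # timing restarts after reinforcement
            return True
        return False

class VariableInterval:
    """VI t: like FI, but the required interval varies around a mean of t."""
    def __init__(self, t):
        self.t = t
        self.start = 0.0
        self.interval = random.expovariate(1.0 / t)        # illustrative choice

    def record_response(self, now):
        if now - self.start >= self.interval:
            self.start = now
            self.interval = random.expovariate(1.0 / self.t)
            return True
        return False

# e.g., schedule = VariableRatio(25); each peck calls schedule.record_response()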
4
(Figure: FR, VR, FI, and VI schedules)
5
Comparison of ratio and interval schedules
  • both fixed ratio and fixed interval schedules have a postreinforcement pause
  • both FR and FI schedules produce high rates of responding just before delivery of the next reinforcer
  • both VR and VI schedules maintain steady rates of responding, without predictable pauses
  • BUT, there are differences between ratio and interval schedules
  • ratio schedules produce higher response rates than interval schedules

6
(Figure: FR, VR, FI, and VI schedules)
7
Comparison of ratio and interval schedules
VR schedules produce higher response rates (responses per min) than VI schedules.
One possibility: response rate is higher when reinforcement rate (reinforcers per min) is higher.
Will VR still produce a higher response rate if the rate of reinforcement is equated on the two schedules?
8
Reynolds (1975) Experiment
Compared responding on a VI schedule yoked to a VR schedule.
One pigeon was reinforced on a VR schedule.
One pigeon was on a VI schedule yoked to the VR pigeon, so that when the VR pigeon was one response short of the VR requirement, the next response by both birds produced food.
9
Reynolds (1975) Experiment
The yoked pigeon was on a VI schedule because
  • food availability depended on the time it took the VR bird to complete its response requirement.
  • this time interval varied from one reinforcer to the next (it depended on the number of responses the VR bird had to make and how long it took the VR bird to make them).

10
Reynolds (1975) Experiment
Both birds received food at approximately the same time, and therefore the rate of reinforcement (i.e., reinforcers per min) was the same for both birds.
Results: Despite the effort to equate rate of reinforcement, the VR bird pecked much more rapidly than the VI bird.
Thus, differences in reinforcement rate do not account for differences in response rate.
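A rough sketch, in the same illustrative Python style as the earlier schedule sketch, of the yoking rule described above: when the VR bird is one response short of its current ratio, food is armed for the yoked bird, so the next response by either bird produces food and the two birds' reinforcement times stay roughly matched. The class and method names are hypothetical, not the authors' procedure code.

import random

class ReynoldsYoke:
    """Sketch of reinforcement-time yoking: the yoked (VI) bird's food becomes
    available at the moment the VR bird is one response short of its ratio."""
    def __init__(self, mean_ratio):
        self.mean_ratio = mean_ratio
        self.requirement = random.randint(2, 2 * mean_ratio)   # illustrative
        self.vr_count = 0
        self.vi_food_armed = False

    def vr_response(self):
        """Call for each response by the VR bird; returns True if food is delivered."""
        self.vr_count += 1
        if self.vr_count == self.requirement - 1:
            self.vi_food_armed = True        # yoked bird's next response will pay off
        if self.vr_count >= self.requirement:
            self.vr_count = 0
            self.requirement = random.randint(2, 2 * self.mean_ratio)
            return True                      # VR bird completes its ratio
        return False

    def vi_response(self):
        """Call for each response by the yoked (VI) bird."""
        if self.vi_food_armed:
            self.vi_food_armed = False
            return True                      # food delivered at roughly the same time
        return False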
12
Another possible reason for higher response rates on VR than VI
  • on a VR schedule, a certain number of responses must be made to obtain each reward
  • however, on a VI schedule, only one response must be made to obtain each reward (once the interval has elapsed)
  • if the number of responses emitted to obtain each reinforcer were the same on the two schedules, then perhaps the rate of responding would be the same

13
Experiment by Catania et al. (1977)
This study replicated Reynolds' finding (by equating reinforcement rate) and also tested whether equating the number of responses per reinforcer matters, by yoking the VR schedule to the number of responses made by the VI subject.
i.e., the number of responses the VR bird had to make to obtain each reinforcer depended on the number of responses the VI bird had made during the interval to obtain its reinforcer.
14
Experiment by Catania et al. (1977)
Again, even when the birds made the same
number of responses per reinforcer, the VR birds
responded at a higher rate than the VI birds.
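For contrast with the earlier sketch, a brief illustrative sketch of the response-count yoking used by Catania et al. (1977): here the VI bird earns food on its own schedule, and the number of responses it emitted for that reinforcer becomes the VR bird's next ratio requirement. The names are hypothetical.

class CataniaYoke:
    """Sketch of response-count yoking: the VR bird's next ratio requirement is
    however many responses the VI bird just emitted to earn its reinforcer."""
    def __init__(self):
        self.vi_responses_since_food = 0
        self.vr_requirement = 1              # placeholder until the first yoke
        self.vr_count = 0

    def vi_response(self, reinforced):
        """Call for each VI-bird response; `reinforced` is True when the VI
        schedule delivered food for that response."""
        self.vi_responses_since_food += 1
        if reinforced:
            self.vr_requirement = self.vi_responses_since_food
            self.vi_responses_since_food = 0

    def vr_response(self):
        self.vr_count += 1
        if self.vr_count >= self.vr_requirement:
            self.vr_count = 0
            return True                      # VR bird earns food after the yoked count
        return False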
15
(Figure: cumulative responses over time (min), replicating Reynolds (1975). Bird 414 on VR 25; Bird 402 on VI 30 s; Bird 410 on VR, yoked so food comes after the same number of responses as for Bird 402; Bird 406 on VI, yoked so food comes at the same time as for Bird 414.)
16
So, the higher rate of responding on ratio schedules than on interval schedules is not due to
  • differences in the rate of reinforcement on the two schedules
  • differences in the number of responses on the two schedules

Why do ratio schedules produce higher rates of
responding than interval schedules?
17
A better way to explain the difference in response rates between ratio and interval schedules is based on the inter-response time (IRT): the interval, or pause, between responses.
18
Consider the probability of receiving a reward following a given response
  • on interval schedules, the probability of reward increases with longer IRTs
  • that is, the slower the animal responds, the more likely it is that the next response will be reinforced
  • BECAUSE the next response is always closer to the end of the interval
  • this is not true for ratio schedules
  • a low response rate under ratio schedules does not change the probability that the next response will produce reward
  • in fact, long IRTs postpone reinforcement, because reward delivery is determined exclusively by the ratio requirement, not the passage of time

19
On a VR schedule, short interresponse times (IRTs) are more likely to be reinforced; thus, rapid responding is reinforced.
On a VI schedule, long IRTs are more likely to be reinforced; thus, pausing (less rapid responding) is reinforced.
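A small numerical illustration of this IRT argument, assuming (for simplicity) a VI schedule with exponentially distributed intervals: on VI the probability that the response ending an IRT of length τ is reinforced grows with τ, roughly as 1 − exp(−τ/t), while on VR it stays at about 1/n no matter how long the pause. The function names are illustrative.

import math

def p_reinforced_vi(irt, mean_interval):
    """Chance that the response ending an IRT of `irt` seconds is reinforced on a
    VI schedule with exponentially distributed intervals (mean `mean_interval` s)."""
    return 1.0 - math.exp(-irt / mean_interval)

def p_reinforced_vr(irt, mean_ratio):
    """On a VR schedule the IRT is irrelevant: each response has roughly a
    1-in-mean_ratio chance of completing the ratio, however long the pause."""
    return 1.0 / mean_ratio

for irt in (1, 5, 15, 60):
    print(f"IRT {irt:>2} s:  VI 30 s -> {p_reinforced_vi(irt, 30):.2f}   "
          f"VR 25 -> {p_reinforced_vr(irt, 25):.2f}")
# Longer IRTs raise the probability of reinforcement on VI but not on VR,
# so VI differentially reinforces pausing and VR does not.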
20
Ratio schedules produce higher rates of responding than interval schedules, but neither schedule requires that animals respond at a specific rate.
There are, however, procedures that specifically require that a subject respond at a particular rate to get reinforced:
Response-rate schedules
21
Differential Reinforcement of Low Rates of Responding (DRL)
  • a response is rewarded only after a certain amount of time has elapsed since the last response
  • DRL 15
  • responses that are at least 15 seconds apart will be reinforced (IRT ≥ 15 s)
  • responses that occur with a shorter IRT (<15 seconds) will restart the timer
  • a maximum of 4 reinforced responses/min (60 s / 15 s)
  • different from interval schedules because the timer is reset

22
Differential Reinforcement of High Rates of Responding (DRH)
  • a response is rewarded only if it occurs very quickly after the last response
  • DRH 5
  • a response is reinforced only if it occurs within 5 s of the last response
  • requires 12 responses/min or more (60 s / 5 s)
  • if the response rate drops below that, no reinforcement (i.e., respond 6 or 7 seconds after the last response, then no reward)

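A minimal sketch of these two response-rate rules, keeping only the IRT check; `now` is the time of the current response in seconds, and the 15 s and 5 s values match the DRL 15 and DRH 5 examples above. Class names are illustrative.

class DRL:
    """DRL t: reinforce a response only if at least t seconds have passed since
    the previous response; an early response restarts the clock."""
    def __init__(self, t):
        self.t = t
        self.last_response = None

    def record_response(self, now):
        reinforced = (self.last_response is not None
                      and now - self.last_response >= self.t)
        self.last_response = now             # the clock restarts from every response
        return reinforced

class DRH:
    """DRH t: reinforce a response only if it occurs within t seconds of the
    previous response (only short IRTs pay off)."""
    def __init__(self, t):
        self.t = t
        self.last_response = None

    def record_response(self, now):
        reinforced = (self.last_response is not None
                      and now - self.last_response <= self.t)
        self.last_response = now
        return reinforced

# DRL(15) reinforces at most 60 / 15 = 4 responses per minute;
# DRH(5) requires at least 60 / 5 = 12 responses per minute to earn any food.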
23
Choice Behavior: Concurrent Schedules
24
Measures of Choice Using Concurrent Schedules of Reinforcement
Typically two levers or keys, with a schedule of reinforcement associated with each. Choice is then assessed by comparing an animal's rate of responding on one lever with its rate of responding on the other. e.g.,
Lever A: VI 1 min
Lever B: VI 3 min
25
Concurrent Schedules of Reinforcement
  • usually, reward on each lever is programmed independently
  • this means that if an interval schedule is programmed on lever A, then while the animal is responding on lever B, the timer for lever A is running and reward availability is becoming more likely
  • thus, with interval schedules, the more time spent responding on the other lever, the more likely the next response on the interval lever will be reinforced

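A sketch of two independently programmed VI timers on a concurrent VI 1-min VI 3-min pair (illustrative code, with exponential intervals and an arbitrary 75/25 response allocation): each lever's timer keeps running while the animal works the other lever, so a neglected lever's next response is increasingly likely to be reinforced.

import random

class VITimer:
    """One lever's VI timer: reinforcement 'sets up' after a variable interval
    and then waits until the next response on that lever collects it."""
    def __init__(self, mean_s):
        self.mean_s = mean_s
        self.setup_at = random.expovariate(1.0 / mean_s)

    def respond(self, now):
        if now >= self.setup_at:             # the reward was sitting there waiting
            self.setup_at = now + random.expovariate(1.0 / self.mean_s)
            return True
        return False

# Concurrent VI 1-min (lever A) and VI 3-min (lever B); both timers keep running
# regardless of which lever the animal is currently working.
lever_a, lever_b = VITimer(60), VITimer(180)
rewards = {"A": 0, "B": 0}
t = 0.0
for _ in range(3600):                        # one response per second for an hour
    t += 1.0
    choice = "A" if random.random() < 0.75 else "B"   # arbitrary allocation
    lever = lever_a if choice == "A" else lever_b
    if lever.respond(t):
        rewards[choice] += 1
print(rewards)   # close to the programmed maxima of about 60 (A) and 20 (B) per hour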
26
Typically there is a limited time frame, e.g., the session is 60 min and the animal has to obtain as many reinforcers as possible in that time.
Thus, if it waits too long to respond on a lever (the next reward sits there waiting), it may not get the maximum number of rewards allotted for that lever in the time allowed.
27
A formulation that describes the way animals distribute their responding on the two levers is the MATCHING LAW
  • the relative rate of responding on a particular lever equals the relative rate of reinforcement on that lever

Responses on A / (Responses on A + Responses on B) = Rewards on A / (Rewards on A + Rewards on B)
N.B. Reinforcement is what the animal actually receives, NOT what it could receive.
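A worked example under assumed numbers: if the animal actually obtains about 60 rewards/hr on the VI 1-min lever (A) and about 20 rewards/hr on the VI 3-min lever (B), the matching law predicts 60 / (60 + 20) = 0.75, i.e., about 75% of responses on lever A.

def predicted_response_proportion(rewards_a, rewards_b):
    """Matching law: relative responding on A equals the relative rate of
    reinforcement actually obtained on A."""
    return rewards_a / (rewards_a + rewards_b)

# Assumed obtained rates for the VI 1-min vs. VI 3-min example above.
print(predicted_response_proportion(60, 20))   # 0.75 -> about 75% of responses on A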