Title: Schedules of Reinforcement
1. Schedules of Reinforcement
- The Effects of Intermittently Reinforcing Behavior
2. Schedules of Reinforcement
- Behavior is not necessarily reinforced every time it occurs
- In real life, behavior is often not reinforced each time it occurs
- A reinforcement schedule is a rule stating which instances of a behavior, if any, will be reinforced
- Intermittent reinforcement refers to reinforcement that is not administered to each instance of a response
3. Advantages of Intermittent Reinforcement
- Economizes on time and reinforcers, since reinforcement does not have to be administered for each instance of a behavior
- Builds persistent behavior that is much more resistant to extinction
- Delays the effects of satiation, since fewer reinforcers need to be delivered
4. Types of Schedules
- Continuous reinforcement: every instance of a behavior is reinforced
- Ratio schedules: reinforcement is based on the number of responses required
- Interval schedules: reinforcement is based on the passage of time
- Duration schedules: reinforcement is based on the continued performance of a response for a period of time
- Fixed schedules: the requirements for reinforcement are always the same
- Variable schedules: the requirements for reinforcement change randomly
5. Schedules of Reinforcement
- Continuous reinforcement refers to reinforcement being administered to each instance of a response
- Intermittent reinforcement lies between continuous reinforcement and extinction
6. An Example of Continuous Reinforcement
- Each instance of a smile is reinforced
7. Fixed Ratio Reinforcement
- A fixed number of responses is required for each reinforcement
- These schedules are designated FR n, where n = the number of responses required
- These schedules usually produce rapid rates of responding with short post-reinforcement pauses
- The length of the pause is directly proportional to the number of responses required
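As a concrete illustration of the FR rule, a few lines of Python can mark which responses in a run meet the ratio requirement (a hypothetical sketch, not from the slides):

```python
def fixed_ratio(n, responses):
    """Return the indices of responses reinforced under an FR n schedule.

    Under FR n, every nth response is reinforced; the responses
    in between go unreinforced, and the count resets after each
    reinforcement.
    """
    reinforced = []
    count = 0
    for i in range(responses):
        count += 1
        if count == n:              # ratio requirement met
            reinforced.append(i)
            count = 0               # reset after each reinforcement
    return reinforced

# FR 4, as in the smile example below: every fourth response
# (indices 3, 7, 11, ...) is reinforced
print(fixed_ratio(4, 12))  # [3, 7, 11]
```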
8. An Example of Fixed Ratio Reinforcement
- Every fourth instance of a smile is reinforced
9. Graph of Fixed Ratio Responding
10. Fixed Interval Reinforcement
- These schedules require the passage of a specified amount of time before reinforcement will be delivered, contingent on a response
- No response made during the interval is reinforced
- The first response following the end of the interval is reinforced
- This schedule usually produces a scalloped pattern of responding: little behavior is produced early in the interval, but the rate of responding increases as the interval nears its end
- It also produces an overall low rate of responding
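The FI rule can also be sketched in a few lines of Python (hypothetical code, not from the slides): responses during the interval earn nothing, and only the first response after the interval has elapsed is reinforced, restarting the timer.

```python
def fixed_interval(interval, response_times):
    """Return the response times reinforced under an FI schedule.

    A response is reinforced only if it is the first response after
    the current interval has elapsed; the timer then restarts from
    that reinforced response.
    """
    reinforced = []
    next_available = interval          # reinforcement becomes available here
    for t in sorted(response_times):
        if t >= next_available:        # first response after the interval ends
            reinforced.append(t)
            next_available = t + interval
    return reinforced

# FI 10: responses at t=4 and t=8 fall inside the interval and earn
# nothing; the first responses after t=10 and t=21 are reinforced.
print(fixed_interval(10, [4, 8, 11, 15, 22]))  # [11, 22]
```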
11. Graph of Fixed Interval Responding
12. Variable Schedules of Reinforcement
- Variable schedules differ from fixed schedules in that the behavioral requirement for reinforcement varies randomly from one reinforcement to the next
- This usually produces a more consistent pattern of responding, without post-reinforcement pauses
- Variable ratio schedules produce an overall high, consistent rate of responding
- Variable interval schedules produce an overall low, consistent rate of responding
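The "requirement varies randomly" rule can be sketched in Python as a variable ratio schedule (hypothetical code; drawing the requirement uniformly around the mean is an assumption — real VR schedules may use other distributions):

```python
import random

def variable_ratio(mean_n, responses, seed=0):
    """Simulate a VR schedule with mean requirement mean_n.

    After each reinforcement, the ratio requirement is redrawn at
    random, so the subject cannot predict which response will be
    reinforced -- the source of the steady, pause-free responding.
    """
    rng = random.Random(seed)
    # uniform draw on [1, 2*mean_n - 1] has mean mean_n (an assumption)
    draw = lambda: rng.randint(1, 2 * mean_n - 1)
    reinforced = []
    requirement = draw()
    count = 0
    for i in range(responses):
        count += 1
        if count >= requirement:
            reinforced.append(i)
            count = 0
            requirement = draw()   # new, unpredictable requirement
    return reinforced
```

With a fixed seed the run is reproducible, but unlike the FR case the spacing between reinforced responses varies from one reinforcement to the next.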
13. An Example of Variable Ratio Reinforcement
- Random instances of the behavior are reinforced
14. Graph of Variable Ratio Responding
15. Graph of Variable Interval Responding
16. Fixed and Variable Duration Schedules
- The response is required to continue for a specified or variable period of time for reinforcement to be delivered
- These schedules produce a continuous rate of behavior, since that is the requirement for reinforcement
17. Extinction of Intermittently Reinforced Behavior
- The less often and the more inconsistently a behavior is reinforced, the longer it will take to extinguish, other things being equal
- Behaviors reinforced on a thin schedule are more resistant to extinction than behaviors reinforced on a denser schedule
- Behavior reinforced on a variable schedule is more resistant to extinction than behavior reinforced on a fixed schedule
18. Reducing Reinforcer Density
- Large amounts of behavior can be obtained with very little reinforcement using intermittent schedules
- Initially, behavior needs a dense schedule of reinforcement to establish it, preferably continuous reinforcement
- As the behavior is strengthened, reinforcement can be gradually reduced in frequency
- Start with as low a density as the behavior can tolerate and decrease the density as responding is strengthened
19. Reducing Reinforcer Density (cont'd)
- If density is reduced too quickly, signs of extinction may be observed:
  - The response rate may slow down
  - Inconsistent responding may be seen
  - An increase in other responses may be seen
- This is known as schedule strain
- If this happens, retreat to a denser reinforcement schedule
- Adding a conditioned reinforcer in between reinforcements can help bridge the gap
20. Variations of Reinforcement Schedules I: Limited Hold
- This is applied when a faster rate of responding is desired with a fixed interval schedule
- The response rate can slow down if the response is not made soon after the end of the interval
- By limiting how long the reinforcer is available following the end of the interval, responding can be speeded up
- If the response is not made within that period, the reinforcement is lost and another is not available until the end of the next interval
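The limited-hold rule can be sketched by extending the plain FI simulation (hypothetical code, assuming — as the last bullet states — that a missed window forfeits that reinforcer and the next one is not available until the following interval ends):

```python
def fi_with_limited_hold(interval, hold, response_times):
    """FI schedule with a limited hold.

    Once the interval elapses, the reinforcer stays available only for
    `hold` time units; a response after that window misses it, and the
    next interval must elapse before reinforcement is available again.
    """
    reinforced = []
    available_at = interval
    for t in sorted(response_times):
        while t >= available_at + hold:      # hold window missed: reinforcer lost
            available_at += interval
        if t >= available_at:                # response lands inside the window
            reinforced.append(t)
            available_at = t + interval
    return reinforced

# FI 10 with a 2-unit hold: the response at t=11 catches the [10, 12)
# window; the response at t=25 misses the [21, 23) window, so that
# reinforcer is lost and nothing is available again until t=31.
print(fi_with_limited_hold(10, 2, [5, 11, 14, 25]))  # [11]
```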
21. Variations of Reinforcement Schedules II: Concurrent Schedules
- Two or more basic schedules are operating independently at the same time for two or more different behaviors
- The organism has a choice of behaviors and schedules
- This provides a better analog for real-life situations, because reinforcement is often available for more than one response class, from more than one source, or both
22. Concurrent Schedules (cont'd)
- When similar reinforcement is scheduled for each of the concurrent responses:
  - the response receiving the higher frequency of reinforcement will increase in rate
  - the response requiring the least effort will increase in rate
  - the response providing the most immediate reinforcement will increase in rate
23. Matching Law and Maximizing
- Matching law: the proportion of responses made to each schedule will match the proportion of reinforcers available under each schedule
- Maximizing: subjects switch back and forth between alternatives to receive the maximum number of reinforcers
- Concurrent ratio schedules: little switching back and forth
- Concurrent interval schedules: subjects can earn close to all of the reinforcements on both schedules
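The matching relation in the first bullet is usually written B1/(B1 + B2) = R1/(R1 + R2), with B the responses and R the reinforcers on each schedule. A minimal Python sketch (hypothetical function, not from the slides) computes the predicted response proportions from the reinforcers obtained on each schedule:

```python
def matching_proportions(reinforcers):
    """Matching law: the proportion of responses allocated to each
    alternative matches the proportion of reinforcers obtained there.

    B_i / sum(B) = R_i / sum(R)
    """
    total = sum(reinforcers)
    return [r / total for r in reinforcers]

# Two concurrent schedules delivering reinforcers at a 3:1 ratio:
# responding is predicted to split 75% / 25%
print(matching_proportions([30, 10]))  # [0.75, 0.25]
```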
24. Example of the Matching Law
- You are speaking to two people, providing them with information that is of interest to them
- When you look at each person, person A is looking at you (the reinforcement) about 75% of the time and looking away about 25% of the time
- Person B is looking at you about 25% of the time and looking away about 75% of the time
- About what percent of the time will you be looking at each person?
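Assuming eye contact acts as the reinforcer and the matching law from the previous slide holds, the predicted split can be computed directly (a hypothetical sketch; the slides leave the question open):

```python
# Treat each person's rate of looking at you as the reinforcement
# rate for looking at that person (an assumption of this sketch)
rates = {"A": 0.75, "B": 0.25}
total = sum(rates.values())
looking = {who: r / total for who, r in rates.items()}
print(looking)  # {'A': 0.75, 'B': 0.25}
```

Matching thus predicts you will look at person A about 75% of the time and at person B about 25% of the time.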
25. Variations of Reinforcement Schedules III: Chained Schedules
- Two or more basic schedule requirements are in place, one schedule occurring at a time but in a specified sequence
- There is usually a cue that is correlated with a specific schedule and is present as long as that schedule is in effect
- Reinforcement for responding in the first component is the presentation of the second
- Reinforcement does not occur until the final component is performed
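A chain of FR components can be sketched in Python (hypothetical code, not from the slides): completing an early link only presents the next component's cue, and the terminal reinforcer arrives only after the final link.

```python
def run_chain(components, responses):
    """Simulate a chain of FR requirements, e.g. [3, 2] = FR 3 -> FR 2.

    Returns a log of events: finishing an early component merely
    advances the chain (presenting the next cue); the terminal
    reinforcer is delivered only after the final component, after
    which the chain restarts.
    """
    events = []
    link, count = 0, 0
    for _ in range(responses):
        count += 1
        if count == components[link]:
            count = 0
            if link == len(components) - 1:
                events.append("terminal reinforcer")
                link = 0                     # chain restarts
            else:
                link += 1
                events.append(f"cue for component {link + 1}")
    return events

# FR 3 -> FR 2: three responses present the second cue,
# two more earn the terminal reinforcer
print(run_chain([3, 2], 5))  # ['cue for component 2', 'terminal reinforcer']
```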
26. Variations of Reinforcement Schedules IV: Conjunctive Schedules
- The requirements for two or more schedules must be met simultaneously
- Task/interval interactions:
  - When the task requirements are high and the interval is short, steady work throughout the interval will result
  - When the task requirements are low and the interval is long, many nontask behaviors will be observed