Chapter 7:
Schedules and Theories
of Reinforcement
Introduction to Learning and Behavior, 3e
by Russell A. Powell, Diane G. Symbaluk, and P. Lynne Honey
Copyright © 2009 Wadsworth Publishing, a division of Cengage Learning. All rights reserved.
Schedule of Reinforcement
• indicates exactly what has to be done for the reinforcer to be delivered.
• Example:
– How many lever presses are required for the
food pellet to be presented?
• Different response requirements can have
dramatically different effects on behavior.
Continuous Reinforcement
Schedule (CRF)
• Each specified response is reinforced.
• Example:
– Each time a rat presses the lever, it obtains a
food pellet.
• It is very useful when a behavior is first
being shaped or strengthened.
Intermittent Reinforcement
Schedule
• Only some responses are reinforced.
• Example:
– Only some of the rat’s lever presses result in a food
pellet.
• This obviously characterizes much of everyday
life.
• Examples:
– Not all concerts we attend are enjoyable.
– Not every person we invite out on a date accepts.
Four Basic Intermittent Schedules
• fixed ratio
• variable ratio
• fixed interval
• variable interval
Fixed Ratio Schedules
• Reinforcement is contingent upon a fixed,
predictable number of responses.
• Examples:
– On a fixed ratio 5 schedule (FR 5), a rat has
to press the lever 5 times to obtain food.
– On an FR 50 schedule, a rat has to press the lever 50 times to obtain food.
• An FR 1 schedule is the same as a CRF
schedule.
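The counting logic of a fixed ratio schedule can be sketched in a few lines of Python (a hypothetical illustration, not from the text): the reinforcer is delivered on every Nth response, and the counter resets each time.

```python
def fr_schedule(ratio):
    """Fixed ratio (FR): reinforce every `ratio`-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == ratio:
            count = 0          # counter resets after each reinforcer
            return True        # reinforcer delivered
        return False           # no reinforcer yet
    return respond

lever_press = fr_schedule(5)                      # FR 5
outcomes = [lever_press() for _ in range(10)]     # presses 5 and 10 pay off
```

Note that `fr_schedule(1)` reinforces every response, which is exactly the CRF case described above.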
Responses to FR Schedules
• Usually a high rate of response with a short
postreinforcement pause.
• Example:
– On an FR 25 schedule, a rat rapidly emits 25 lever
presses, munches down the food pellet it receives,
and then snoops around before emitting more lever
presses.
• A postreinforcement pause is a short pause
following the attainment of each reinforcer.
Responses to FR Schedules,
continued
• Higher ratio requirements produce longer
postreinforcement pauses.
• Example:
– You will take a longer break after completing a
long assignment than after completing a short
one.
• There may be little or no pausing on an FR 1 or FR 2 schedule because the reinforcer is so close.
“Stretching the Ratio”
• moving from a low ratio requirement (a
dense schedule) to a high ratio
requirement (a lean schedule).
• It should be done gradually to avoid ratio
strain or burnout.
• Example:
– A rapid increase in your workload will more
easily cause you to burn out than a gradual
increase in your workload.
Variable Ratio Schedules
• Reinforcement is contingent upon a varying,
unpredictable number of responses.
• Example:
– On a variable ratio 5 (VR 5) schedule, a rat has to
emit an average of 5 lever presses for each food
pellet.
• They generally produce a high and steady rate
of response with little or no postreinforcement
pause.
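A variable ratio schedule differs from the fixed version only in that the required count is redrawn unpredictably around a mean. One way to sketch this (a hypothetical model; the uniform draw is an assumption, not the textbook's procedure):

```python
import random

def vr_schedule(mean_ratio, rng=None):
    """Variable ratio (VR): the required count varies around `mean_ratio`,
    so the responder cannot predict which response will pay off."""
    rng = rng or random.Random()
    def draw():
        return rng.randint(1, 2 * mean_ratio - 1)  # uniform draw, mean = mean_ratio
    required = draw()
    count = 0
    def respond():
        nonlocal count, required
        count += 1
        if count >= required:
            count, required = 0, draw()
            return True
        return False
    return respond

press = vr_schedule(5, random.Random(42))          # VR 5
payoffs = sum(press() for _ in range(10_000))      # roughly 1 in 5 presses reinforced
```

Because any press might be the winning one, there is no "safe" moment to pause, which is one way to see why VR schedules produce steady responding.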
Real Examples of VR Schedules
• Only some of a cheetah’s attempts at
chasing down prey are successful.
• Only some acts of politeness receive an
acknowledgment.
• Only some CDs that we buy are enjoyable.
VR Schedules & Maladaptive
Behaviors
• VR schedules help account for the
persistence of certain maladaptive
behaviors.
• Example:
– Gambling - The unpredictable nature of
gambling results in a very high rate of
behavior.
VR Schedules & Abusive
Relationships
• At the start, the couple typically provide
each other with an enormous amount of
positive reinforcement.
• As the relationship progresses, one
person provides reinforcement on an
intermittent basis, while the other person
works hard to obtain that reinforcement.
• This happens gradually and both usually
are unaware of the change.
Fixed Interval Schedules
• Reinforcement is contingent upon the first
response after a fixed, predictable period of
time.
• Example:
– On a fixed interval 30-second (FI 30-sec) schedule,
the first lever press after a 30-second interval has
elapsed results in a food pellet.
– Trying to phone a friend who is due to arrive home in
exactly 30 minutes will be effective only after the 30
minutes have elapsed.
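An interval schedule keys on elapsed time rather than a response count. A minimal sketch (hypothetical; the convention that the interval restarts at each reinforcer is an assumption):

```python
def fi_schedule(interval):
    """Fixed interval (FI): the first response after `interval` time units
    have elapsed (since the last reinforcer) is reinforced."""
    last = 0.0
    def respond(t):
        nonlocal last
        if t - last >= interval:
            last = t           # interval restarts at reinforcement
            return True
        return False
    return respond

press = fi_schedule(30)   # FI 30-sec
early = press(10)         # interval not yet elapsed -> False
on_time = press(35)       # first press after 30 s -> True (reinforced)
again = press(40)         # new interval just started -> False
```

Note that extra responses before the interval elapses accomplish nothing, which is why early responding drops out.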
Responses to FI Schedules
• Responses consist of a postreinforcement pause
followed by a gradually increasing rate of
response as the interval draws to a close.
• Example:
– A rat on an FI 30-sec schedule will emit no lever presses at the start of the 30-second interval, but its rate of responding will gradually increase as the interval draws to a close.
– Your studying was probably minimal at the beginning of the semester and will intensify as the semester draws to a close.
Variable Interval Schedules
• Reinforcement is contingent upon the first
response after a varying, unpredictable period of
time.
• Example:
– On a variable interval 30-second (VI 30-sec)
schedule, the first lever press after an average
interval of 30 seconds will result in a food pellet.
– Looking down the street for the bus will be reinforced
after a varying, unpredictable period of time.
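The variable interval case can be sketched by arming the reinforcer at an unpredictable time and letting the next response collect it (a hypothetical model; the uniform wait is an assumption):

```python
import random

def vi_schedule(mean_interval, rng=None):
    """Variable interval (VI): the reinforcer becomes available after an
    unpredictable wait averaging `mean_interval`; the next response collects it."""
    rng = rng or random.Random()
    armed_at = rng.uniform(0, 2 * mean_interval)
    def respond(t):
        nonlocal armed_at
        if t >= armed_at:
            armed_at = t + rng.uniform(0, 2 * mean_interval)
            return True
        return False
    return respond

look = vi_schedule(30, random.Random(7))   # VI 30-sec
# Checking once per second for ~17 minutes: the reward rate is capped by
# the clock, so responding faster adds almost nothing.
rewards = sum(look(t) for t in range(1000))
```

That cap is one way to see why VI schedules sustain a moderate, steady rate rather than the frantic responding of ratio schedules.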
Responses to VI Schedules
• They produce a moderate, steady rate of
response with little or no
postreinforcement pause.
• Example:
– If you need to contact a friend who always
arrives home between 6:00 p.m. and 6:30
p.m., a good strategy would be to phone
every few minutes throughout that time
period.
Responses to the Four Basic
Intermittent Schedules
Ratio Schedules
• are entirely “response contingent”.
• They depend entirely on the number of
responses emitted.
• Thus, they result in high rates of response.
Fixed Schedules
• produce postreinforcement pauses.
• Attaining one reinforcer means that the
next reinforcer is necessarily some
distance away.
Duration Schedules
• reinforcement is contingent on performing a
behavior continuously throughout a period of
time.
• Fixed duration (FD) schedule means the
behavior must be performed continuously for a
fixed, predictable period of time.
• Example: FD 60-sec schedule for lever pressing
• Variable duration ( VD) schedule means the
behavior must be performed continuously for a
varying, unpredictable period of time.
• Example: VD 60-sec schedule for lever pressing
Duration Schedules &
Human Behavior
• Duration schedules are sometimes useful in
modifying certain human behaviors.
• However, in some ways they are rather
imprecise.
• What constitutes “continuous performance of
behavior”?
• Also, reinforcing the mere performance of an
activity with no regard to level of performance
can undermine a person’s intrinsic interest in
that activity.
Response-Rate Schedules
• Reinforcement is directly contingent upon
the organism’s rate of response.
• These are differential reinforcement
schedules.
• Differential reinforcement means simply
that one type of response is reinforced
while another is not.
Differential Reinforcement of High
Rates (DRH)
• Reinforcement is provided for a high rate
of response and not for a low rate.
• Example:
– Food pellets are provided when the rat
presses the lever several times in a short
period of time.
– Winning a running or swimming race is contingent on a fast rate of response.
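A DRH contingency can be sketched as a sliding-window count (a hypothetical illustration; the window mechanism is one common way to operationalize "several responses in a short period"):

```python
def drh_schedule(required, window):
    """DRH: reinforce only when `required` responses fall within the last
    `window` time units -- fast responding pays, slow responding does not."""
    times = []
    def respond(t):
        times.append(t)
        recent = [x for x in times if t - x <= window]
        if len(recent) >= required:
            times.clear()
            return True
        return False
    return respond

press = drh_schedule(3, 5)                   # 3 presses within 5 s
burst = [press(t) for t in (0, 1, 2)]        # fast burst: third press reinforced
slow = [press(t) for t in (10, 20, 30)]      # slow presses: never reinforced
```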
Differential Reinforcement of Low
Rates (DRL)
• Reinforcement is provided for responding at a
slow rate.
• Example:
– A rat might receive a food pellet only if it waits at least
10 seconds between lever presses.
– Preparing your homework slowly and carefully will
result in a better grade.
• Responses that occur during the interval have
an adverse effect in that they prevent the
reinforcement from occurring.
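The DRL contingency, including the way premature responses reset the clock, can be sketched as follows (a hypothetical model; reinforcing the very first response is a modeling choice, not a claim from the text):

```python
def drl_schedule(min_gap):
    """DRL: a response is reinforced only if at least `min_gap` time units
    have passed since the previous response; premature responses reset the clock."""
    last = None
    def respond(t):
        nonlocal last
        reinforced = last is None or (t - last) >= min_gap
        last = t                  # every response restarts the wait
        return reinforced
    return respond

press = drl_schedule(10)
first = press(0)     # first press: reinforced (a modeling choice)
early = press(5)     # only a 5 s gap -> not reinforced, and the clock resets
still = press(12)    # 7 s since the premature press -> still nothing
patient = press(25)  # 13 s gap -> reinforced
```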
Differential Reinforcement of Paced
Responding (DRP)
• Reinforcement is provided for responding
neither too fast nor too slow.
• Example:
– Musical activities require that the relevant
actions be performed at a specific pace.
– Long-distance runners have to maintain a certain pace to win.
Noncontingent Schedules
• The reinforcer is delivered independently
of any response.
• A response is not required for the
reinforcer to be obtained.
• There are two types of response-independent schedules: fixed time and variable time.
Fixed Time (FT) Schedule
• The reinforcer is delivered following a fixed,
predictable period of time, regardless of the
organism’s behavior.
• Example:
– On a fixed time 30-second (FT 30-sec) schedule, a
pigeon receives food every 30 seconds regardless of
its behavior.
– People receive Christmas gifts each year regardless
of their behavior on an FT 1-year schedule.
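Since an FT schedule ignores behavior entirely, it reduces to pure clockwork (a hypothetical sketch):

```python
def ft_deliveries(total_time, interval):
    """Fixed time (FT): reinforcers arrive on the clock alone;
    no response is required (or even consulted)."""
    return list(range(interval, total_time + 1, interval))

pellets = ft_deliveries(120, 30)   # FT 30-sec over 2 minutes -> [30, 60, 90, 120]
```

Notice the function takes no record of responses at all, which is the defining feature of a noncontingent schedule.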
Variable Time (VT) Schedule
• The reinforcer is delivered following a
varying, unpredictable period of time,
regardless of the organism’s behavior.
• Example:
– A pigeon receives food after an average
interval of 30 seconds (VT 30-sec schedule).
Superstitious Behavior
• Noncontingent reinforcement may account for
some forms of superstitious behavior.
• Behaviors may be accidentally reinforced by the coincidental presentation of reinforcers.
• Example:
– Students were placed in a booth that contained three
levers and a counter and were told to earn as many
points as possible.
– Most students developed at least temporary patterns
of superstitious lever pulling.
Superstitious Behaviors, continued
• This is also true of athletes and gamblers.
• Unusual events that precede a fine
performance may be quickly identified and
then deliberately reproduced in the hopes
of reproducing that performance.
• Superstitious behavior can be seen as an
attempt to make an unpredictable situation
more predictable.
The Downside of Noncontingent
Schedules
• When a noncontingent schedule is superimposed on a response-contingent schedule, the rate of response on the response-contingent schedule will decrease.
• Example:
– The work schedules of people on welfare
– Pigeons on both a noncontingent and a response-contingent schedule
The Upside of Noncontingent
Schedules
• Noncontingent schedules can be an effective
means of reducing the frequency of maladaptive
behaviors.
• Example:
– Giving children who act out a sufficient amount of
attention on a noncontingent basis may reduce their
maladaptive behavior.
• Carl Rogers’ unconditional positive regard can
be seen as a form of noncontingent social
reinforcement, which can indeed have beneficial
effects.
Complex Schedules of
Reinforcement
• a combination of two or more simple
schedules.
• They include:
– Conjunctive Schedules
– Adjusting Schedules
– Chained Schedules
– Multiple Schedules
– Concurrent Schedules
Conjunctive Schedule
• type of complex schedule in which the
requirements of two or more simple schedules
must be met before a reinforcer is delivered.
• Example:
– An FI 2-minute FR 100 schedule for lever pressing
means reinforcement is contingent upon completing
100 lever presses and completing at least one lever
press following a 2-minute interval.
– Earning wages at your job is another example.
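The "both requirements must be met" rule of a conjunctive schedule combines the FI and FR mechanics directly (a hypothetical sketch with small numbers for illustration):

```python
def conjunctive_fi_fr(interval, ratio):
    """Conjunctive schedule: BOTH an FI requirement (time elapsed) and an
    FR requirement (response count) must be satisfied for reinforcement."""
    start, count = 0.0, 0
    def respond(t):
        nonlocal start, count
        count += 1
        if t - start >= interval and count >= ratio:
            start, count = t, 0
            return True
        return False
    return respond

press = conjunctive_fi_fr(interval=10, ratio=3)
quick = [press(t) for t in (1, 2, 3)]   # 3 presses but only 3 s elapsed: nothing
late = press(11)                        # 4th press, 11 s elapsed: both met -> reinforced
```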
Adjusting Schedule
• The response requirement changes as a
function of the organism’s performance while
responding for the previous reinforcer.
• Example:
– An FR 100 schedule may increase to 110 responses
(FR 110) if the rat performs well.
• Shaping usually works on an adjusting schedule
where gradually more is required to receive the
reinforcer.
Chained Schedule
• sequence of two or more simple schedules,
each of which has its own SD and the last of
which results in a terminal reinforcer.
• Example:
– A pigeon is presented with a VR 20 schedule on a
green key, followed by an FI 10-sec schedule on a red
key, which then leads to the terminal reinforcer of
food.
Green key (SD): Peck (R) on VR 20 → Red key (SR/SD): Peck (R) on FI 10-sec → Food (SR)
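The link-by-link structure of a chain can be sketched as a tiny state machine. This is a simplified, hypothetical model (fixed-count links stand in for the VR 20 and FI 10-sec links of the example): completing a link switches the stimulus, and the final link delivers food.

```python
def chained_schedule(link_lengths):
    """Chained schedule sketch: each link requires a fixed number of pecks in
    the presence of its own stimulus; finishing a link switches the stimulus
    (a conditioned reinforcer and the next SD); the final link delivers food."""
    state = {"link": 0, "count": 0}
    def peck():
        state["count"] += 1
        if state["count"] >= link_lengths[state["link"]]:
            state["count"] = 0
            state["link"] += 1
            if state["link"] == len(link_lengths):
                state["link"] = 0
                return "food"            # terminal reinforcer
            return "stimulus change"     # SR/SD: signals the next link
        return None
    return peck

peck = chained_schedule([2, 3])          # two short links for brevity
events = [peck() for _ in range(5)]
# -> [None, 'stimulus change', None, None, 'food']
```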
Responses to a Chained Schedule
• The earlier links of the chain are associated with
weaker responding.
• The terminal reinforcer is more immediate and
hence more influential.
• The goal gradient effect is an increase in the
strength and/or efficiency of responding as one
draws near to the goal.
• Example:
– Rats tend to run faster and make fewer wrong turns
running through a maze as they near the goal box.
Backward Chaining
• is training the final link first and the initial link
last, in order to make the chain more effective.
• The sight of each stimulus is both a secondary
reinforcer for the previous behavior and a
discriminative stimulus for the next behavior.
• In this manner, very long chains of behavior can
be established.
• Shaping and chaining are thus the basic means
by which circus and marine animals are trained.
Chaining & Human Behavior
• Most human endeavors involve response
chains, some of which are very long.
• For humans, response chains are often
established through instructions.
• Example:
– Reading the text
– Completing this course
• It is best to chart your progress because terminal
reinforcers are often extremely distant, so
behavior is easily disrupted during the early part
of the chain.
Theories of Reinforcement
• The Drive Reduction Theory
• The Premack Principle
• The Response Deprivation Hypothesis
• The Behavioral Bliss Point Approach
Drive Reduction Theory
• an event is reinforcing to the extent that it is
associated with a reduction in some type of
physiological drive.
• Example:
– Food deprivation produces a “hunger drive,” which
then propels the animal to behave in order to receive
food.
• Most theorists no longer believe that drive
reduction theory can offer a comprehensive
account of reinforcement, and it has now been
largely abandoned.
Hull’s View
• According to Hull, all reinforcers are associated,
either directly or indirectly, with some type of
drive reduction.
• However, not all behaviors appear associated
with a reduction in a physiological drive.
• Example:
– A chimpanzee will press a button so that it can obtain
a peek into another room.
• It seems as though the motivation for some
behaviors exists more in the reinforcing stimulus
than in some type of internal state.
Incentive Motivation
• motivation that is derived from some property of
the reinforcer, as opposed to an internal drive
state.
• Examples:
– Playing a video game for the fun of it
• Even events that seem to be clearly associated
with drive reduction can be strongly affected by
incentive factors.
• Example:
– Eating at a restaurant that serves spicy food
The Premack Principle
• states that a high-probability behavior can be
used to reinforce a low-probability behavior.
• The process of reinforcement can be
conceptualized as a sequence of two behaviors:
1. the behavior that is being reinforced, followed by
2. the behavior that is the reinforcer.
• Example:
– Lever pressing is reinforced by eating food.
The Premack Principle, continued
• By focusing on the relative probabilities of
behaviors, the Premack principle allows us to
quickly identify potential reinforcers in the real
world.
• Example:
– Kaily spends only a few minutes each morning doing
chores, but at least an hour reading comic books.
– The opportunity to read comic books can be used to
reinforce doing chores.
Response Deprivation Hypothesis
• states that a behavior can serve as a
reinforcer when
1. access to the behavior is restricted and
2. its frequency thereby falls below its
preferred level of occurrence.
• The preferred level is its baseline level of
occurrence when the animal can freely
engage in that activity.
Response Deprivation Hypothesis
Example
• A rat typically runs for 1 hour a day whenever it
has free access to a running wheel.
• If the rat is then allowed free access to the wheel
for only 15 minutes per day, it will be unable to
reach this preferred level.
• The rat will be in a state of deprivation with
regard to running.
• The rat will now be willing to work to obtain
additional time on the wheel.
Contingencies of Reinforcement
• Contingencies of reinforcement are effective to
the extent that they create a condition in which
the organism is confronted with the possibility of
a certain response falling below its baseline
level.
• Example:
– Kaily, who enjoys reading comic books each day, is
presented with a contingency in which she has to do
her chores before reading.
– Reading comic books is a reinforcer because the
contingency pushes free comic book reading to below
its preferred rate of occurrence.
Behavioral Bliss Point Approach
• An organism with free access to alternative
activities will distribute its behavior in such a way
as to maximize overall reinforcement.
• Example:
– A rat that can freely choose between running in a
wheel and exploring a maze might spend 1 hour per
day running in the wheel and 2 hours exploring the
maze.
• The optimal distribution of behavior is based on
the notion that each activity is freely available.
Behavioral Bliss Point Approach,
continued
• What will happen when the rat cannot reach its
bliss point because a contingency won’t allow it?
– It will compromise by distributing its activities in such
a way as to draw as near as possible to its behavioral
bliss point.
• Likewise most of us are forced to spend several
more hours working and several fewer hours
enjoying ourselves than we would if we were
free to do whatever we wanted.
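"Drawing as near as possible to the bliss point" can be modeled as a constrained minimization. This is a hypothetical sketch under assumed details (a shared time budget as the contingency, squared distance as the measure of "nearness"):

```python
def closest_to_bliss(bliss, total_budget, step=0.01):
    """Behavioral bliss point sketch: with only `total_budget` hours to split
    between two activities, pick the split nearest the free-access bliss point
    (minimizing squared distance -- one simple way to model the compromise)."""
    bx, by = bliss
    candidates = [(i * step, total_budget - i * step)
                  for i in range(int(total_budget / step) + 1)]
    return min(candidates, key=lambda p: (p[0] - bx) ** 2 + (p[1] - by) ** 2)

# Bliss point: 1 h of wheel running, 2 h of maze exploring.
# Contingency: only 2 h available in total.
wheel, maze = closest_to_bliss((1.0, 2.0), 2.0)   # roughly (0.5, 1.5)
```

The result splits the shortfall between the two activities rather than sacrificing either one entirely, mirroring the compromise described above.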
Summary
• A schedule of reinforcement is the response
requirement that must be met to obtain a
reinforcer.
• Different types of schedules produce different
schedule effects.
• Continuous vs. Intermittent schedules
• The basic schedules include FR, VR, FI, and VI.
• Fixed vs. Variable
• Ratio vs. Interval
Summary, continued
• Duration Schedules
• Response-rate schedules, including DRH, DRL,
and DRP
• Noncontingent Schedules
• A complex schedule consists of two or more
simple schedules.
• Complex schedules include conjunctive
schedules, adjusting schedules, and chained
schedules.
Summary, continued
• The Drive Reduction Theory
• The Premack Principle
• The Response Deprivation Hypothesis
• The Behavioral Bliss Point Approach