Terms and Definitions - FIT ABA Materials: Eb Blakely

BCBA Terminology For 3rd Task List
Content Area 1: Ethics
1. Approval of programs - For restrictive programs, informed consent should be obtained where
appropriate; informed consent may be circumvented if the client is in serious danger, if the treatment
has a reasonable chance of success, and if there are procedural safeguards in place. Approval should
also be obtained from a CBA or BPRC chair, as specified in HRSM 160-4.
2. Bill of Rights of the Retarded & other basic rights - Developmentally disabled individuals should be
afforded basic rights, including the right to medical care, privacy, central file, habilitation plan, personal
belongings, due process, freedom from unnecessary restraints or cruel and unusual punishment, etc.
3. Confidentiality - This refers to the attempt to keep a client's personal information away from public view.
Thus, program data, film, and files should be kept private, and should be released only with consent by the
client or guardian.
4. Program Review Process - LRC, PRC, HRAC - The LRC is the Local Review Committee, which
reviews and approves restrictive behavior programs in its DCF district. The PRC (Peer Review
Committee), which comprises several experts in the field, provides technical assistance and advice to
facilities in the state, as well as to the Developmental Services Program Office in Tallahassee. The
HRAC (Human Rights Advocacy Committee) serves in an advisory capacity, and attempts to ensure
that basic client rights are protected. HRAC members usually sit on the LRC.
5. Procedure selection - A behavioral programmer should consider the following issues in selecting a
procedure: use the least restrictive procedure; use the most normal procedure; ensure staff are trained to
implement the procedure; ensure a competent behavior analyst is available to supervise; and ensure there is
support for the procedure from significant others.
a. Least restrictive treatment - When considering a treatment for a behavior problem, one
should attempt to use the "least restrictive treatment" that has a reasonable chance of
success. "Least restrictive" means the treatment that minimizes suspension of basic rights or
freedoms.
b. Normalization - The principle of normalization holds that living environments and treatment
procedures should be used that are most like those applied to normal populations (e.g., single
family dwellings are more "normal" than large institutional settings - thus, many
developmentally disabled people now live in group homes that approximate normal family
arrangements)
c. Informed consent - Informed consent must be given when restrictive procedures (those that
involve suspending some basic rights) are used. Consent can be given by the client, if he/she
is competent, or by the parent/guardian. To give informed consent, the person must be 1)
"informed," or appraised of the cost/benefits of the procedure and alternatives 2) capable of
giving consent (understands the pros and cons) and 3) voluntarily giving consent (the person
can not be coerced). The person must also have the opportunity to withdraw his/her consent
at any time.
d. The procedure should be consistent with principles of behavior and published research.
e. The procedures should be such that they can be implemented by the staff.
f. The intervention is reasonable given the available resources.
g. The intervention should involve client choice whenever possible.
h. The intervention must be linked to the assessment!
i. Social validity - Social validity refers to whether goals, procedures, and outcomes are acceptable.
For example, to examine the validity of goals and task analyses, you may ask experts or competent
individuals to rate the goal (e.g., ask a fireman whether the fire safety skills are appropriate). To
examine the validity of the procedures, you may ask community members, parents, and the client
whether a timeout procedure that you are using is acceptable. To examine the outcome validity, you
may ask parents or the client if the behavior changes in their child are important and worthwhile.
j. The intervention should be consistent with 65B.
k. If there are troublesome lifestyle problems, those should be addressed before a plan is
implemented. Such problems include poor curricula in a classroom, boring jobs, and
unpleasant living arrangements.
l. Medical concerns – these should be addressed prior to, or concomitant with, behavioral
interventions.
6. Emergency situations - Emergency situations are those that present a risk to the client (SIB) or to
others (aggression). In the BCBA task list, there are 3 kinds of situations:
a. A problem behavior that is totally unexpected
b. A problem behavior that is low frequency: <6/year
c. A problem behavior that must be managed until a program is in place
7. Emergency procedures - There are two essential emergency procedures - immediate restraint or
immediate isolation. They are designed to protect the person, others, or environment from immediate
harm. They are not designed necessarily as therapeutic interventions. Components and limitations:
a. Should be accompanied by active programming strategies
b. Use only when absolutely necessary
c. Behaviors and precursors should be clearly defined
d. Train staff in how to implement - consider logistical problems (e.g., what to do at the dance)
e. Ensure adequate staff
f. Ensure reporting, monitoring, and evaluation
g. Avoid or minimize negative side effects - avoid dangerous confrontations when possible
8. HRSM 160-4 - The Behavior Management Guidelines maintained by Developmental Services of HRS.
HRSM 160-4 specifies the range of aversive procedures that can be used, and approval requirements thereof.
Now superseded by 65B.
Content Area 2: Definition and Characteristics
1. Applied behavior analysis - There are several characteristics of applied behavior analysis, listed
below.
a. Effective – the applied behavior analyst attempts to produce effects large enough to have an
impact on the person’s life.
b. Technological - Provides written detail of procedures to permit replication of techniques in
other settings.
c. Conceptually systematic - The procedures are tied to the basic principles of behavior.
d. Generality - Attempts to identify techniques that can be successful with other individuals,
with other behavior problems, and in other situations. (see external validity)
e. Analytic - Scientifically based experimental designs are used to assess the effectiveness of
the interventions under study.
f. Applied - focuses on behavior with social significance
g. Behavioral - behavior is the focus, and not some hypothetical entity
2. Applied and experimental analysis - In applied behavior analysis, behaviors of social significance to
the person are investigated. In the experimental analysis of behavior, the behaviors studied need not be
of social significance to the subject under investigation. Both sets of procedures involve operational
definitions of the independent
variable, objectively defined dependent variables, systematic manipulations, and data analysis of
individual organisms.
2a. Behavioral technologies – the collection of procedures that have arisen from EAB and ABA research, and
are then applied to practical problems by practitioners and non-behavioral service providers. For example,
behavioral momentum procedures have been developed by several researchers, and this set of procedures
(technologies) is now implemented by many service providers in clinics, schools, and homes.
3. Assumptions of science as applied to behavior analysis
a. Determinism - it is assumed that behavior is caused by some event(s)
b. Empiricism - information is collected by objective observations
c. Philosophic doubt - conclusions of science are tentative and can be revised as new data
comes to light
d. Law of Parsimony - when possible, the simplest explanation of behavior should be provided,
all else being equal
e. Scientific manipulation - To see if an event affects behavior, that event is systematically
manipulated and the effects on behavior are noted.
f. Replication – repeating the experiments that are done in ABA. The experiment may be
repeated using the exact same methods (direct replication) or the experiment may be changed
in some way (systematic replication).
4. Behaviorism - Behaviorism is a philosophy of behavior that assumes behavior is a function of
current and past environments, as well as genetics. Other inadequate explanations of behavior are
rejected (see below).
5. Inadequate explanations of behavior
a. Nominal fallacy - Attempting to explain a behavior by merely naming or classifying it ("That person is
pica!" to explain eating inedible objects).
b. Teleology - Explaining behavior by appealing to future, unexperienced events ("I am going to class
to get a CBA certificate.").
c. Reification - Explaining behavior by appealing to a non-existent entity (e.g., Id, Ego, Superego, the
Self).
d. Explanatory fiction/circular reasoning - Explaining behavior by appealing to some entity, the
evidence for which lies in the behavior itself (e.g., "Dave is aggressive because he has an
aggressive trait." How do you know he has an aggressive trait? "Because he has aggressive
behavior." Thus, the only evidence of the trait is the presence of the behavior.)
6. Mentalistic explanations of behavior
a. Explanations that appeal to mental, unobservable processes. For example, “He was aggressive
because his thought processes led him to an inevitable conclusion that aggression was required.” Or,
“The child became depressed due to his frustration with school.”
7. Role of private events in behavior analysis:
a. Private events involve behavior and/or stimuli that can only be observed by the person. These
include private stimuli such as headaches, or behavior such as operant or conditioned seeing. These
behaviors and stimuli still must be explained by appealing to a history of environmental contingencies
(see operant seeing) or biological processes (as in headaches). Skinner proposed ways that we come
to respond to our own private stimuli.
Content Area 3: Principles, Processes, and Concepts
1. Operant conditioning - A kind of learning in which behavior is modified by changing its consequences. The
behavior may be increased, decreased, or brought under stimulus control (see stimulus control)
a. Antecedent - An event(s) that occurs before a behavior. Antecedents will include discriminative
stimuli and motivational operations.
b. Behavior - The action of the muscle groups and/or glands.
c. Consequence - An event that occurs after a behavior.
2. Behavior - the interaction of a person and his/her environment - the actions of the muscles and glands – the
activity of an organism. A response is a single instance of a behavioral class.
2a. Operant (also called a functional class, or behavioral class) – a collection of responses with a common
effect on the environment. For example, a kid may do a variety of things to obtain attention, such as tell jokes,
raise his hand, and curse.
3. Environment - the entire constellation of stimuli (see below) that can affect a person. This includes
both internal and external stimuli.
4. Contingency - A dependency among behavior and stimuli, or among stimuli. A contingency can be
expressed as an IF-THEN statement. For example, "If Darcy has a tantrum, she will lose her car ride."
Or "When you deliver food, always deliver praise." Or "If Steve completes 10 earrings, he will earn a
soda." Or, among stimuli might be “When the light comes on, there will be a loud bang.”
5. Free operant - An operant behavior that can occur at any time, given some motivation. For example,
a client may have a temper tantrum at any time to procure attention; in other words, the behavior is not
"opportunity-based" such as following directions. In following directions, there must first be a direction
to follow before there can be compliance or non-compliance. This latter kind of behavior is called a
discriminated operant.
6. Discriminated operant - An operant that requires some "opportunity" or punctate antecedent to occur. For
example, compliance is a discriminated operant because such behavior first requires a direction to be followed.
(see Discrete Trials below)
7. Topography of response - The form of a response. For example, aggression can take many different
topographies, such as hitting, kicking, or biting.
8. Target behavior - The behavior to be changed. Acceleration targets are those that are to be increased,
deceleration targets are those to be decreased.
9. Stimulus - an energy change in the environment that affects a person through his/her senses.
9a. Stimulus class – a collection of stimuli with a common characteristic. For example, a stimulus class could
be any stimulus that evokes tantrums. Or, a class could be any stimulus of a certain wavelength (e.g., 550
nm).
10. Social learning theory - A theory of learning that posits learning occurs as a result of observations that
subsequently affect the person through cognitive mediational processes. For example, a child may see another
child receive tokens for on-task behavior, and as a result, engage in similar on-task behavior. Presumably, the
child has engaged in some cognitive processes that allow this effect to occur.
11. Discrete trials - An instructional method wherein the client is presented with a formal opportunity to perform
some behavior, and a consequence is provided depending on the performance. Prompts may be provided
depending on whether the response is correct or incorrect. Percentage correct data are often collected.
12. Response definitions - A description of a response that is in objective and observable terms.
a. Functional response definition - A response definition that includes a description of the form, or
topography, of the response, as well as a description of the behavior's functional antecedents and/or
consequences.
b. Topographical response definition - A response definition that includes only a description of the
form, or topography, of the response.
13. Fundamental properties of behavior - 1) temporal locus 2) temporal extent 3) repeatability 4) temporal
locus and repeatability
14. Cycle - A behavior cycle specifies when a behavior begins and when it ends (e.g., a tantrum begins when
the person cries and ends when the person stops crying for 1 minute).
15. Dead man's test - A test for evaluating whether a goal or objective is viable. If a dead man can do it (e.g.,
staying in bed), then it may not be a functional, useful goal.
16. Collateral measures - Measures of behaviors other than the primary target behaviors. For example, when
attempting to decrease aggression, an observer would record instances of aggression, but perhaps also
negative self-statements, use of free time, changes in the behavior of other individuals or in the supervisors,
etc.
17. Positive reinforcer - A stimulus that when presented after a behavior, increases the rate of the behavior.
Note that the IRT will decrease.
18. Positive reinforcement – the process in which a stimulus is presented after a behavior, and the rate of the
behavior increases.
19. Negative reinforcer - A stimulus that when withdrawn after a behavior, increases the rate of the behavior.
Note that the IRT will decrease.
20. Negative reinforcement – The process in which a stimulus is withdrawn after a behavior, and the rate of the
behavior increases.
21. Conditioned reinforcer (punisher) - A consequence that increases (or decreases) the rate of behavior
because it has been paired with another reinforcer (punisher).
22. Unconditioned reinforcer - A reinforcer that is effective without previous experience (e.g., food, drinks)
23. Avoidance/escape behavior - Avoidance behavior is behavior that is reinforced by the
postponement or avoidance of an aversive stimulus (negative reinforcer). Escape behavior is behavior
that is reinforced by escaping from an aversive stimulus (negative reinforcer).
24. Adventitious reinforcement - A term that refers to "accidental" reinforcement. For example, a baseball
pitcher may wear some colorful socks and then pitch an excellent game. Wearing the socks may then become
"accidentally" reinforced (the socks had nothing to do with pitching the excellent game), and the pitcher
may then wear the socks every time he pitches. Such behavior is called superstitious behavior.
25. Extinction - The withholding of a stimulus that normally occurs after a behavior, resulting in a decrease in
the rate of the behavior. There are several kinds of extinction, including sensory, escape, and social.
26. Spontaneous recovery - Following an extinction session, a temporary re-appearance of the behavior in the
beginning of the next extinction session. It is thought that the re-appearance is due to the relative novelty of
the "beginning of the session" that was only briefly experienced in the previous session.
27. Resistance to extinction - The extent to which behavior persists, and in what pattern, when the
maintaining reinforcer is withheld. Typically, behavior under continuous reinforcement will extinguish
much faster than that under intermittent reinforcement.
28. Escape extinction – extinction of a negatively reinforced behavior. Withholding escape.
29. Social extinction – extinction of a behavior maintained by social reinforcers. Withholding social
reinforcement.
30. Sensory extinction – extinction of behavior maintained by sensory reinforcers.
30a. Extinction side effects: bursting, emotional behavior, aggression, increase in variety of topographies,
increase in intensity of behavior.
31. Positive punisher - A stimulus that when presented after a behavior, decreases the rate of the behavior.
32. Negative punisher - A stimulus that when withdrawn after a behavior, decreases the rate of the behavior.
33. Stimulus control - The extent to which behavior occurs when an antecedent stimulus is presented. A
stimulus has "stimulus control" over a behavior when the behavior tends to occur only when the stimulus is
present.
34. Generalization - Stimulus generalization is when the effects of some contingency spread to stimuli that
have not yet been associated with the contingency. Response generalization is when the effects of some
contingency spread to responses that have not yet been associated with the contingency. The ways to
program generalization are found below.
35. Motivative operation (2 effects) – A motivative operation 1) changes the reinforcing effectiveness of
some stimulus, and 2) changes the strength of behavior that has produced that stimulus in the past
(e.g., food deprivation makes food more reinforcing, and evokes food-seeking behavior). (in the past,
this was an establishing operation or EO)
A. Establishing operations: 1) increases the reinforcing effectiveness of some stimulus, and 2)
increases the strength of behavior that has produced that stimulus in the past
B. Abolishing operations: 1) decreases the reinforcing effectiveness of some stimulus, and 2)
decreases the strength of behavior that has produced that stimulus in the past
C. Conditioned motivative operations: CMOs have the same 2 effects that MOs have, but they
are due to a conditioning history. There are 3 kinds:
1. Surrogate – surrogates have their effects because of a history of pairing with an
MO, and these effects mimic those of the MO.
2. Reflexive – Reflexives have their effects because their presence signals a
“worsening” or “improvement” of conditions. In the former, their offset is reinforcing.
In the latter, their offset is punishing.
3. Transitive – Transitives change the reinforcing value of some other stimulus, and
change the strength of behavior that has produced that stimulus in the past.
36. Deprivation - The absence of a reinforcer for a period of time, thereby making that event more reinforcing.
37. Satiation - A decrease in responding due to the reduced effectiveness of the reinforcer, because the person
has received too much of it.
38. Discrimination - Discrimination refers to a change in observed behavior when antecedent stimuli are
changed. Discriminations are generally trained through differential reinforcement.
a. SD - A stimulus that 1) evokes a behavior 2) because that behavior has been reinforced in the
presence of the stimulus.
b. SP (SD-) - A stimulus that decreases or suppresses a behavior because that behavior has been
punished in the presence of the stimulus.
c. S-delta - A stimulus that suppresses a behavior because that behavior has been extinguished in the
presence of the stimulus.
39. Discrimination training - Reinforcing a behavior in the presence of some antecedent, and extinguishing (or
punishing) the behavior in the absence of the antecedent. (e.g., reinforcing standing up when you say "stand
up" and not reinforcing standing up when you do not say "stand up").
40. Respondent (classical) conditioning - A kind of learning in which one stimulus is paired with a second
stimulus, and as a result, the first comes to elicit the same or similar response that the second elicits. (see
below)
a. US - Unconditioned stimulus
b. UR - Unconditioned response
c. NS - Neutral stimulus
d. CS - Conditioned stimulus
e. CR - Conditioned response
BEFORE PAIRING:  US (food) --------> UR (salivation)
PAIRING:         NS (bell) + US (food) --------> UR (salivation)
AFTER PAIRING:   CS (bell) --------> CR (salivation)
41. Respondent extinction - A decrease in the strength of CR as a result of presenting the CS alone.
42. Generalization gradient - a graph that shows the frequency of a behavior in various stimulus conditions, one
of which is the "training" situation and then other similar but untrained situations (i.e., "test" situations). The
behavior occurs most frequently in the training situation, and then drops off as the test situations become more
and more different compared to the training situation.
43. Skinner's verbal behavior - a system of language that classifies verbal behavior according to its function.
a. Echoic - verbal behavior under the antecedent control of a prior verbal stimulus, and with point-to-point correspondence between the antecedent stimulus and the response ("imitative" behavior)
b. Mand - verbal behavior that specifies its reinforcer and is evoked by some establishing operation
("asking for")
c. Tact - verbal behavior that is evoked by some non-verbal environmental stimulus ("naming")
d. Textual - verbal behavior that is evoked by some written stimulus, and with some point-to-point
correspondence.
e. Intraverbal - verbal behavior evoked by some antecedent verbal stimulus, but without point-to-point
correspondence ("red, white, and ____").
44. Rules and rule-governed behavior - Rules are "contingency-specifying stimuli" that describe relations
between stimuli ("When the light goes on, you will get shocked.") or between stimuli and behavior ("When you
get home, make your bed and we can go out."). The behavior resulting from these rules is "rule-governed."
Rules engender behavior without direct exposure to the contingencies that they describe. Thus, they can
shorten training time, because long, drawn-out exposure to contingencies may not be necessary. It is said by
some that rules, in effect, alter the function of stimuli (i.e., Rules create new CSs, SDs, conditioned reinforcers,
etc.). (see contingency-shaped behavior).
45. Contingency-shaped behavior - Behavior that occurs because it has resulted from direct exposure
to contingencies. For example, you may learn to assemble a small engine by doing it, and being
exposed to the natural contingencies involved in the process. Such behavior is contrasted with rule-governed behavior, which results from direct exposure to rules, not exposure to contingencies. (see
rules).
Content Area 4: Behavioral Assessment
1. Behavioral assessment - An assessment that examines the person's entire life, including medical conditions,
nutrition, sleep patterns, behavioral records, family life, ABC records, test scores, etc. The goal is to identify
potential causes of the behavior in question, whether the behavior results from a medical condition, task
difficulty, family tragedy, or some environmental variable. There are 2 kinds of assessments: descriptive
assessment and functional analysis.
Descriptive assessment:
a. Records review - look for patterns, causes
b. Interview client and significant others - look for patterns, potential causes of behavior (quantify in a
MAS or FAST)
c. Direct observations - ABC data, pattern analysis, narrative accounts, response rates, latencies,
IRTs, etc.
d. Goal: To identify patterns of behavior, topographies, and frequencies. When a descriptive
assessment is conducted, the practitioner will then be able to develop a hypothesis.
Functional Analysis:
a. Multi-component manipulations - arranging for particular contingencies to occur and measuring the
behavior therein (e.g., in task, alone, toys present).
b. Test hypotheses - formulate a hypothesis and formulate a design to test it (e.g., if the behavior is
thought to occur for attention, arrange for situations with low attention vs. high attention and record
frequencies in each)
c. Goal: To confirm hypotheses in the descriptive assessment stage. Functional analysis is more
precise and provides more convincing evidence that a functional relation exists between problem
behavior and some independent variable.
2. Contextual variables - Also called setting events, these are more general, continuously present
variables that are not necessarily manipulated as part of a behavior change program. Such variables
include the number of staff and clients present, room temperature, lighting, medical status of the
person, task variation (see below), choice availability (see above), curriculum, schedules, overall
appearance of the environment, and ambient noise. These variables may indeed influence the efficacy
of behavioral procedures. Note that "contextual variables" is a generic term that may actually refer to the
effects of other variables such as motivative operations.
3. Sequence analysis - Identifying events that typically precede and follow a target behavior. A sequence
analysis (also called an ABC analysis) will provide hypotheses of the functional antecedents and consequences
for the behavior.
4. Simple, direct solutions to behavior problems - Solutions to solving behavior problems that do not involve
formal behavior programs that teach new behaviors. Rather, the solution involves rectifying an underlying
medical condition that caused the behavior, removing an antecedent stimulus that evoked the behavior,
changing a schedule of activities, changing some feature of the physical environment (e.g., lighting,
temperature), etc.
5. Pattern analysis - Looking for patterns of behavior. This involves noting any kind of correlation of behavior
and some other factor. This other factor could be time of day, diet, curriculum, staff, etc. A commonly used
pattern analysis technique is the scatterplot, which shows at which times of day behaviors occur.
6. Baseline - A pre-intervention assessment that is used to refine recording procedures, design the
intervention, and provide data with which to compare intervention data when evaluating intervention effects.
There can be different kinds of baseline, such as simply recording behavior in its natural condition, graduated
guidance baselines, and positive reinforcement baselines.
7. Multi-component manipulations - This kind of assessment, first systematized by Brian Iwata (Univ. of
Florida), involves placing the client in several different situations such as in task, alone, in an enriched
environment, and with no attention. Then, potential reinforcers are provided for problem behavior, and data
from each situation are then compared, and marked differences are noted. If behaviors cluster in the task
condition, then there is a possibility that the behaviors are escape-maintained. If, however, the behaviors occur
mostly in the contingent attention condition, then it is likely that they are maintained by attention.
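As an illustration of how data from such an assessment might be summarized, here is a minimal Python sketch that compares mean response rates across analogue conditions. The condition names and rates are hypothetical, and the sketch is not part of any published protocol.

```python
# Minimal sketch: summarizing multielement functional analysis data.
# Condition names and rates below are hypothetical illustration values.

from statistics import mean

# Responses per minute for each session, grouped by condition
sessions = {
    "attention": [2.1, 1.8, 2.4, 2.0],
    "demand":    [0.3, 0.5, 0.2, 0.4],
    "alone":     [0.1, 0.0, 0.2, 0.1],
    "play":      [0.0, 0.1, 0.0, 0.0],   # control condition
}

# Compare mean rates across conditions; a clearly elevated condition
# suggests (but does not by itself prove) the maintaining contingency.
means = {cond: mean(rates) for cond, rates in sessions.items()}
for cond, m in sorted(means.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cond:>9}: {m:.2f} responses/min")

likely = max(means, key=means.get)
print(f"Highest rate in the '{likely}' condition.")
```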
8. Testing hypotheses - Hypotheses are generated from other assessment data, such as ABC, interviews, or
direct observation. For example, if ABC data suggest that the behavior occurs in task, then the client may be
tested in two situations: one in which they are put in a task, and the second in which they are put in free time.
Or perhaps, the client may be exposed to two kinds of tasks, one that is difficult and one that is easy. If the
assessment data suggest that the behavior is maintained by attention, then you might arrange for two
situations, one in which the behavior receives lots of reprimands, etc. and a second in which the behavior is
ignored.
9. Kinds of interventions that follow from assessment
a. Ecological changes - changing schedules, staffing patterns, activities, diet, medical status
b. Antecedent manipulation - adding or removing antecedents that evoke behaviors
c. Replacement skills - teaching new skills that replace maladaptive behaviors and obtain the same reinforcer.
d. Change consequences of appropriate and inappropriate behaviors - fine tune the environment such that
these behaviors receive the proper incentives.
e. Emergency procedures - Emergency procedures are used for relatively low frequency behaviors that
require some intervention to prevent injury or harm, but do not require a formal behavioral program that
involves a programmed punishment procedure.
f. Motivational operations – a) increasing or decreasing the value of a reinforcer and b) evoking or
suppressing behavior that in the past has obtained the reinforcer
example: food deprivation increases value of food, and evokes food-seeking behavior.
10. Complete behavioral support plan – Program designer should consider elements of 4 components:
motivational operations, discriminative control, replacement behaviors, consequence manipulations.
Content Area 5: Experimental Evaluations of Interventions
1. Confounding variable - Uncontrolled variables or events that influence the outcome of an experiment. Such
variables often accompany the independent variable, and thus are indistinguishable from same. For example,
a change in medication (confound) may accompany a new treatment program (intervention), and therefore, any
observed changes may not be confidently attributed to the intervention.
2. Functional relation - A functional relation is said to be present when an independent variable lawfully affects
a dependent variable. In behavior analysis, a functional relation between behavior and intervention occurs when
that intervention systematically affects the behavior.
3. External validity - the extent to which an intervention can be successfully applied to other people, other
situations, or other behaviors. Also, may be termed “generality.”
4. Internal validity - whether the changes in behavior can be attributed to the intervention. ABAB designs have
a high degree of internal validity, while AB designs do not. There are many threats to internal validity (actually,
threats are really potential confounds), which bring into question whether the changes in behavior resulted from
the treatment. Threats include maturation of the subject, inaccurate or biased recording, poor implementation
of the treatment, and unplanned, subtle environmental changes.
5. Experimental designs - A sequence of conditions that permit conclusions about whether the changes in
behavior resulted from the intervention.
a. Withdrawal design - A design in which baseline conditions are alternated with intervention
conditions. The minimum of alternations are ABA (baseline, intervention, baseline) or BAB
(intervention, baseline, intervention).
b. Reversal design - A design in which an intervention is applied to a behavior, then withdrawn from
that behavior and applied to another behavior, and then withdrawn from that behavior and re-applied to
the first behavior (in a DRO and DRI control).
c. Alternating treatment (multi-element) design - A design wherein two or more treatments,
each with its own distinctive signal, alternate across time. Typically, all treatments are
presented each day, but in a random order.
e. Component analysis - Taking a treatment apart and identifying which component is the effective
component. This can be accomplished by slowly taking each element out, one-by-one, or by starting
with a single component, and slowly adding each element, one-by-one.
f. Parametric analysis - Studying different values or "levels" of a treatment. For example, you might
test the suppressive effects of different timeout durations, or the accelerative effects of FR 1, FR 5, FR
10 schedules of reinforcement. Parametric analyses can be accomplished by 1) randomly presenting
the different values in an ABCDEF design that is varied across participants or 2) presenting the values
in an ascending/descending series in an ABCDEDCBA design.
g. Multiple baseline (3 kinds) - A design in which baseline data are collected on two or more behaviors
(or situations, or subjects); the intervention is then applied to the first, then to the first and second,
then to the first, second, and third, and so on, in a staggered fashion.
h. Changing criterion design - A design in which the criterion for reinforcement is systematically
changed. Experimental control is shown when changes in behavior "shadow" changes in criterion.
i. Multiple probe - Essentially a multiple baseline design, but untreated behaviors are not assessed
every session, but only occasionally (these are called probes) until they receive the intervention.
j. Withdrawal with probe design - A standard ABAB design, except the return to the B condition is very
brief, constituting a probe. Probes may be as brief as 1 session.
6. Correlation - when two events co-vary. One may cause the second, the second may cause the first, or both
may be caused by a third variable.
7. Replication - Direct replication refers to repeating the exact same experiment with the same or similar
subjects. Direct replication, when done with the same subject, speaks to internal validity. Systematic
replication refers to purposefully changing elements of the experiment, and then repeating the new experiment.
Elements that might be changed include the species used, settings, or procedures. Systematic replication
speaks to external validity.
8. Integrity of the independent variable - The integrity of the independent variable refers to the extent to which
the treatment is implemented as intended.
9. Deductive processes - testing a hypothesis by collecting data in a systematic manipulation format. For
example, you may guess that a behavior occurs for attention. To test this hypothesis, you could observe the
behavior in high-attention vs. low-attention conditions.
10. Inductive processes - Generating a hypothesis from data that has already been collected. For
example, say that you have no idea about why a behavior occurs. You then just work with the person, collect
some data, and then hypotheses emerge. This hypothesis may then be tested using deductive processes (see
above).
11. Generality - the extent to which the results or functional relations will be observed if the experiment is
changed in some way (e.g., using different populations of subjects, settings, or procedures).
12. Independent variable - In behavior analysis, the intervention or treatment. (see dependent variable)
13. Dependent variable - In behavior analysis, the behavior of interest, expressed in some measure (e.g., rate,
% correct, latency). In research design, changes in the dependent variable are measured as the
independent variable (treatment) is manipulated.
14. Practical issues with designs:
a. W/D and reversal – requires counter therapeutic change; not appropriate for irreversible
changes; SIB can make this design dangerous.
b. MB – requires untreated behaviors, participants, settings which could cause concern. Internal
validity can be unclear when generalization occurs.
c. ATD – effects of one Tx can be seen in other Tx conditions due to rapid alternation. Tx
procedures may not be discriminable, and therefore differences between the Tx may not be
evident in the data.
d. Changing criterion – not all Tx can be studied with this design. There may not be a parameter
that can be varied.
Content Area 6: Measurement of Behavior
1. Measures of behavior
a. Rate - Number of responses per unit of time (e.g., responses/min, responses/hr).
b. Latency - The duration of time between a stimulus and the beginning of a response.
c. IRT - Interresponse time. The time between the end of a response and the beginning of another
response.
d. Duration - The time between the beginning of a response and the end of that response.
e. Percentage correct - The number of correct responses divided by the total number of responses
(correct+incorrect), multiplied by 100.
f. Frequency - The number of times a behavior occurs (count).
g. Intensity - The force of the behavior, which could be measured in decibels (loudness) or #lbs/sq.in
(pressure).
h. Trials to criterion - The number of trials required for a behavior to meet some criterion. For example,
you may count how many trials it takes a kid to complete a task without errors.
i. Celeration - A measure of the change in behavior over time. If behavior occurs 10/min, 20/min, and
40/min over 3 consecutive weeks, then the behavior is "doubling" or is increasing at a "x2." If the
behavior occurs 10/min, 1/min, and .1/min over 3 weeks, then it is decreasing at a ÷10.
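The measures above can be illustrated with a short computational sketch; the timestamps, prompt time, and counts below are hypothetical.

```python
# Minimal sketch computing several of the measures defined above
# from hypothetical session data.

# Each response recorded as (start_time, end_time) in seconds from session start
responses = [(12.0, 14.5), (30.0, 31.0), (55.0, 58.0), (80.0, 82.0)]
session_minutes = 5.0
prompt_time = 10.0                     # time the instruction was given

rate = len(responses) / session_minutes                       # responses per minute
latency = responses[0][0] - prompt_time                       # prompt to start of first response
durations = [end - start for start, end in responses]         # start to end of each response
irts = [nxt[0] - prev[1] for prev, nxt in zip(responses, responses[1:])]  # end of one response to start of the next

correct, incorrect = 18, 2
percent_correct = correct / (correct + incorrect) * 100

print(f"rate = {rate:.1f}/min, latency = {latency:.1f} s")
print(f"mean duration = {sum(durations)/len(durations):.1f} s, mean IRT = {sum(irts)/len(irts):.1f} s")
print(f"percentage correct = {percent_correct:.0f}%")
```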
2. Recording procedures - Methods for recording behavior that produce data transposable into a measure. For
example, event and duration recording can generate data that can be converted into a rate measure.
a. Partial interval recording - A recording procedure in which a time period is divided into "bins" or
smaller intervals, and then a "+" is recorded in each bin if a behavior occurs at all during that bin.
Otherwise, a "-" is recorded, meaning the behavior did not occur at all during that bin. The dependent
variable is the percentage of intervals in which the behavior occurred. This measure tends to
overestimate actual rates, and is therefore used for reduction targets (see the comparison sketch after this list).
b. Whole interval recording - A recording procedure in which a time period is divided into "bins" or
smaller intervals, and a "+" is recorded if a behavior occurred during the entire bin. Otherwise, a "-" is
recorded, meaning the behavior did not occur during the entire bin. The dependent variable is the
percentage of intervals in which the behavior occurred for the entire interval. This recording procedure
tends to underestimate the occurrence of behavior, and is therefore often used with acquisition targets.
c. Momentary time sampling - A recording procedure in which a time period is divided into "bins" or
smaller intervals, and a "+" is recorded if a behavior occurred at the end of the bin. Otherwise, a "-" is
recorded, meaning the behavior did not occur at the end of the bin. The dependent variable is the
percentage of intervals in which the behavior occurred at the end of the interval. There is no
systematic bias of this measure.
d. Event recording - A recording procedure in which the number of occurrences of a response are
recorded.
e. Duration recording - Using some timing device, recording the duration of time (e.g., seconds,
minutes).
f. ABC recording - Recording antecedent-behavior-consequence streams. Such recording is used to
identify potential functional antecedents, as well as reinforcers and punishers that may be affecting the
behavior.
g. Direct observation - Observing behavior directly, instead of assessing behavior indirectly through
testing (i.e., indirect observation).
h. Narrative recording - An on-line description of behavior and accompanying
antecedents/consequences as they occur, which can be converted into ABC data. The writing style is
usually in prose.
i. Permanent product recording - The recording of the effects of behavior, not the behavior
itself. For example, recording whether or not a bed is made properly, rather than observing the
person actually making the bed.
j. Continuous vs. sample recording - Continuous recording is when recording occurs uninterrupted - or,
the person's behavior is being observed continuously. In sample recording, the person's behavior is
being observed only occasionally or for short periods of time.
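The biases noted above for partial interval, whole interval, and momentary time sampling can be seen in a small simulation; the behavior stream, bin size, and observation length below are hypothetical.

```python
# Minimal sketch comparing interval recording methods against the same
# hypothetical behavior stream (1-second resolution, 60 s observation,
# 10-second bins). Illustrates the biases described above.

occurring = [False] * 60
for start, end in [(3, 9), (24, 26), (47, 58)]:   # seconds during which behavior occurs
    for t in range(start, end):
        occurring[t] = True

bin_size = 10
bins = [occurring[i:i + bin_size] for i in range(0, 60, bin_size)]

partial   = sum(any(b) for b in bins) / len(bins) * 100     # scored if behavior occurs at all in the bin
whole     = sum(all(b) for b in bins) / len(bins) * 100     # scored only if behavior fills the entire bin
momentary = sum(b[-1] for b in bins) / len(bins) * 100      # scored only at the end of the bin

true_pct = sum(occurring) / len(occurring) * 100
print(f"true % of time: {true_pct:.0f}%")
print(f"partial interval:  {partial:.0f}%  (overestimates)")
print(f"whole interval:    {whole:.0f}%  (underestimates)")
print(f"momentary sample:  {momentary:.0f}%  (no systematic bias)")
```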
3. Reactivity - The extent to which the very act of recording influences behavior. For example, many people
behave differently when they know that they are being observed, as compared to when they do not know that
they are being observed.
4. Recalibration - Re-training an observer to increase accuracy. Used to decrease or correct observer drift.
5. Training observers - Observers can be trained in many ways, among them explanations, modeling,
viewing videotapes, and providing feedback for recording behavior (tell the observer when he/she has correctly
recorded behavior). Observers can be "calibrated" by having them record samples of behavior in which the
frequencies of behavior are already known (e.g., on a prepared videotape).
6. Observer drift - The tendency for an observer's recording to gradually change across time. Can be caused
by subtle changes in the response definitions (the definition may become more stringent, or less stringent).
Observer drift can be pinpointed to the time when an observer's scores begin to diverge from those of a 2nd
observer.
7. Inter-observer agreement (IOA) - The extent to which two observers' data agree. For example, if observer
#1 records 5 behaviors, and observer #2 records 10 behaviors, the index of agreement would be 50% (5/10
then multiply by 100). For time-sampling records, IOA would be the # of agreements/# total intervals then
multiply by 100.
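A minimal sketch of the two IOA computations described above, using hypothetical observer records:

```python
# Minimal sketch of the two IOA computations described above,
# with hypothetical observer records.

# Total-count IOA: smaller count / larger count * 100
obs1_count, obs2_count = 5, 10
total_count_ioa = min(obs1_count, obs2_count) / max(obs1_count, obs2_count) * 100
print(f"total-count IOA = {total_count_ioa:.0f}%")        # 50%

# Interval-by-interval IOA: agreements / total intervals * 100
obs1 = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = behavior scored in that interval
obs2 = [1, 0, 0, 1, 0, 1, 1, 0]
agreements = sum(a == b for a, b in zip(obs1, obs2))
interval_ioa = agreements / len(obs1) * 100
print(f"interval-by-interval IOA = {interval_ioa:.0f}%")  # 75%
```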
8. Accuracy – Accuracy is assumed to be present when there is agreement between 2 trained observers.
However, this may not necessarily be the case. A more consistent assay of accuracy is comparing an
observer’s data with “true values.” (see below)
9. True values – data in which extraordinary measures have been taken to eliminate sources of error. For
example, true values can be obtained by taking videotapes of a sample of behavior and having observers
repeatedly score the sample. These scores should approximate the true measures of the behavior in the
sample.
10. Reliability – the extent to which the same measurement result will be obtained when the same sample of
behavior is measured repeatedly. This is assessed by having an observer repeatedly score the same sample of
behavior and comparing the data. A measure of correlation can be generated as an index of reliability.
11. Behavior definitions – behavior should be defined in observable and measurable terms. For example,
“frustration” is not observable or measurable. However, striking another person with a closed fist is
observable, and you could count the number of times that this occurred. It is therefore measurable.
12. Observation times for high rate: can be brief
13. Observation times for lower rate: must be longer duration to catch the behavior
Content Area 7: Displaying and Interpreting Behavioral Data
1. Graph - A visual display of data, using two intersecting axes, that shows trends and variability over time. A
graph is used for a visual display of data, for decision making, and for comparisons of different treatments.
a. Vertical axis label - The measure of behavior (e.g., "Rate" "Percentage correct")
b. Horizontal axis label - Some unit of time (e.g., "Days" "Weeks").
c. No chance day - A day wherein the behavior could not occur (e.g., no training trials were conducted),
thus, the previous data point and the one following are not connected.
d. Ignored day - A day wherein the behavior did have a chance to occur, but no data were collected,
thus, the previous data point and the one following are connected.
e. Condition change line - A solid vertical line in a graph that indicates when a change in treatment
conditions occurred (e.g., separating baseline from intervention). A vertical dashed line is used to
indicate unplanned environmental changes (e.g., changes in staff).
h. Bar graphs - graphs used to show average # of behaviors or other measures such as the number of
males vs females in a classroom. They are not appropriate for showing daily frequencies in real time.
i. Cumulative record - A graph, of sorts, that shows the cumulative number of responses over time.
The rate of response at any given time is represented by the slope of the cumulative record.
j. Characteristics of graphed data - Trend is the direction of the data points, as described by a "trend
line" (see below). Level is the general height of the points, as described by the arithmetic mean (or
median) of points. Level is often shown with a horizontal line through the points (see below).
Variability is the extent to which data points vary wildly from day to day. Variability is often expressed
as the range of the data points.
k. Equal interval graphs – These graphs have a Y axis on which the distance between consecutive units
(e.g., between 1-2, 3-4, 5-6) is the same. Thus, if you start at a place on the Y axis and repeatedly add a
constant (e.g., 5), the resulting increments are equidistant from each other.
2. Split middle method – a method for drawing a trend line: split the data into two halves, find each half's
mid-session and mid-rate, and draw the line through those two points (a computational sketch follows the
example graphs below).
[Figure: example graph of number of behaviors (0-20) across sessions (0-7), with a trend line drawn through the data points.]
[Figure: example graph of number of tantrums (0-15) across days (0-15).]
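The basic step of the split middle method can be sketched as follows; the session rates are hypothetical, and the final step of sliding the line up or down so that half the data points fall above it and half below is omitted.

```python
# Minimal sketch of the split middle trend line (quarter-intersect step):
# split the data in half, find each half's mid-session and mid-rate,
# and draw the line through those two points.

from statistics import median

rates = [4, 6, 5, 8, 7, 10, 9, 12]          # hypothetical responses/min per session
sessions = list(range(1, len(rates) + 1))

half = len(rates) // 2
x1, y1 = median(sessions[:half]), median(rates[:half])    # mid-session, mid-rate of first half
x2, y2 = median(sessions[half:]), median(rates[half:])    # mid-session, mid-rate of second half

slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1
print(f"trend line: rate = {slope:.2f} * session + {intercept:.2f}")
```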
3. Standard chart concepts
a. X axis – calendar days
b. Y axis – Count per minute, from 1/1000 per minute to 1000 per minute
c. Darker vertical lines – Sunday lines
d. Right hand Y axis – time
e. Rate data points go up – rate is increasing
f. Rate data points go down – rate is decreasing
g. Duration data points go up – Duration is decreasing
h. Duration data points go down – duration is increasing
i. Celeration - the rate of change. Computed by drawing in a best fit line. Then, get the rate on a
given Sunday, and the rate on the previous Sunday. Divide the large by the small. If data are
increasing, then affix a “x” to the number (e.g., x2). If the data are decreasing, then affix a “÷” to the
number (e.g., ÷ 2)
j. Record floor - A dash on a particular day on a standard chart that shows the duration that the person
was observed (can be read from the right vertical axis, or computed as 1 divided by the number of minutes of
observation - observing for 10 minutes gives a record floor of .1; see the sketch below).
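A minimal sketch of two standard chart computations described above (the record floor and the Sunday-to-Sunday celeration); the counts, rates, and observation times are hypothetical.

```python
# Minimal sketch of two standard chart computations described above,
# using hypothetical counts and observation times.

# Record floor: 1 / number of minutes observed
observation_minutes = 10
record_floor = 1 / observation_minutes            # e.g., 10 minutes -> .1 per minute
count_today = 7
charted_rate = count_today / observation_minutes  # the day's count per minute

# Weekly celeration: divide the larger Sunday rate by the smaller and
# prefix "x" if the rate is increasing, or the division sign if decreasing.
previous_sunday, this_sunday = 2.0, 4.0           # counts per minute read from the trend line
factor = max(previous_sunday, this_sunday) / min(previous_sunday, this_sunday)
sign = "x" if this_sunday >= previous_sunday else "÷"

print(f"record floor = {record_floor:g}/min, charted rate = {charted_rate:g}/min")
print(f"weekly celeration = {sign}{factor:g}")
```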
4. Scatterplot – a datasheet that shows at what time of day a behavior is occurring. This device is good for
seeing patterns of behavior. See the example below.
[Example scatterplot datasheet: rows are time blocks (9a-10a, 10a-11a, 11a-12p, 12p-1p, 1p-2p) and columns are target behaviors (SIB, aggression, vomiting); an X is marked for each occurrence within a block (e.g., XXXX during 9a-10a, XXXXX during 11a-12p, XXXXX during 12p-1p, none during 10a-11a or 1p-2p).]
Content Area 8: Selecting Intervention Outcomes and Strategies
1. Task analysis - When a task is broken up into smaller elements. The elements should be stated in their
correct order.
2. Ultimate outcomes – goals that relate to safety, health, choice, access to positive reinforcers, avoiding
aversive events, and quality of life
3. Intermediate outcomes – those goals that lead to ultimate outcomes, e.g., learning to dress, ride the bus,
ask for assistance, job skills
4. Behavioral goal - A statement of when a behavioral program will be successful. Specific behaviors are
included, but without detail on precise criteria or conditions. Behavioral goals should be age-appropriate; that is,
they should be consistent with the chronological age of the person.
5. Behavioral objective (5 elements) - A precise description of when a program will be successful, including the
measure, criterion for success, antecedents, behaviors, and consequences.
6. Constructional approach (by Goldiamond) - An approach to decreasing inappropriate behavior that focuses
on building in new behaviors to replace the inappropriate behavior.
7. Functional goals - Goals that if accomplished, will improve the life of the client and allow more independence
and choices in his/her life. Achieving such goals will permit the client to access more reinforcers available in
the natural environment (learning to dress will allow the client to live in a group home and attend a workshop).
Moreover, functional goals are those that, if not accomplished, will require caregivers to perform the activity
for the person.
8. "Fair pair" (Decreasing behavior) - There are two elements in the approach to decrease behavior. The first
element is to identify the inappropriate behavior, and program a procedure to directly decrease it. The second,
and most important, is to identify a replacement behavior and then teach it. This second element will be most
successful if the reinforcer that the replacement behavior obtains is the same as that which is obtained by the
inappropriate behavior. These two elements constitute a "fair pair."
9. Foundational skills - Skills that must be taught before other skills can be taught. For example, you
may teach eye contact before teaching speech; or you may teach compliance before teaching job
skills.
10. Choice availability - This refers to the extent to which clients are given choices about their lives and events
therein. Some research suggests that when choices are frequently provided, fewer problem behaviors may be
exhibited.
11. Recommendations regarding target outcomes: should consider client preferences, task analysis
information, the client’s current repertoire, how supportive the environment is, environmental
constraints (e.g., staff), social validity issues, assessments, and what is best practice in the
scientific community.
12. Recommendations regarding interventions: should consider client preferences, task analysis
information, the client’s current repertoire, how supportive the environment is, environmental
constraints (e.g., staff), social validity issues, assessments, and what is best practice in the
scientific community.
13. Weakening behavior: follow the constructional approach and always select a replacement skill. It
should be easy to emit and have the same function as the inappropriate behavior.
14. Environmental changes to reduce the need for ABA Tx: When appropriate, consider changes in
the client’s environment that will eliminate the need for a behavior program, such as finding
interesting jobs, satisfying places to live and recreate, and a network of friends. Ensure choices
over important elements of the client’s life.
15. Program design related to implementers – design the Tx while keeping in mind the contingencies
controlling the implementers’ behavior. Some questions to consider: is implementing the Tx in the
staff’s job description? Will the Tx be monitored? Will the staff receive feedback? Will
management ensure the feedback is effective?
Content Area 9: Behavior Change Procedures
1. Reinforcer menu - A visual display of several reinforcers, from which a client may choose. A menu may
utilize pictures, written descriptions, or the actual items.
2. Reinforcer sampling - Requiring a person to sample various reinforcers, such that he/she has sufficient
experience with them to choose the preferred reinforcer. Thus, a person is presented with a sample of
potential reinforcers non-contingently to ensure the person has experienced them. In the future, these can be
made contingent.
3. Preference assessment procedures:
a. Survey – ask people about their preferences
b. Single stimulus – present stimulus and record the latency of contact, or % of trials in which contact was
made. You may also record the duration of contact.
c. Multiple stimulus with (or without) replacement – present an array and record how often an item is
selected. The without-replacement version can be used to rank-order preference.
d. Forced choice – present the person with pairs of reinforcers, and note which one is selected. Pair each
reinforcer with all of the others on the list of possible reinforcers. Rank-order the person's preferences by
analyzing the choices to determine the most and least preferred items. You can compute a "preference
percentage" for any item by (a) calculating the number of times the person selected item X, (b) dividing that
figure by the total number of pairs in which item X appeared, and (c) multiplying the answer by 100 (a
computational sketch follows this list).
e. Reinforcer sampling – exposing the person to various reinforcers for brief periods of time. At the start
of your assessment, give the person a brief opportunity to sample each reinforcer.
- If the reinforcer is a food item, the person is given a tiny taste of the food or beverage.
- If the reinforcer is an activity such as working on the computer, the person has 5-10 seconds to
engage in the activity.
- If the reinforcer is access to a preferred object (e.g., a stuffed toy), the person has 5-10 seconds
of access to the object.
f. Observation – observe the person in free time and record what they do
3a. Reinforcer assessment – present the possible reinforcer contingent on a behavior and see if the behavior
is strengthened
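A minimal sketch of the forced-choice preference percentage described above; the item names and recorded selections are hypothetical.

```python
# Minimal sketch of the forced-choice (paired-stimulus) preference
# percentage described above. Item names and choices are hypothetical.

from itertools import combinations

items = ["juice", "music", "puzzle", "ball"]

# Hypothetical record: which item the person selected in each pair
selections = {
    ("juice", "music"): "juice",
    ("juice", "puzzle"): "juice",
    ("juice", "ball"): "ball",
    ("music", "puzzle"): "puzzle",
    ("music", "ball"): "ball",
    ("puzzle", "ball"): "ball",
}

for item in items:
    pairs_with_item = [p for p in combinations(items, 2) if item in p]
    times_chosen = sum(selections[p] == item for p in pairs_with_item)
    pct = times_chosen / len(pairs_with_item) * 100    # (selections / pairs presented) * 100
    print(f"{item:>7}: {pct:.0f}% preference")
```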
4. Relation between reinforcer effectiveness and delay, amount, quality, deprivation, and variety -
Reinforcer effectiveness increases with shorter delays, larger amounts, higher quality, greater
deprivation, and greater variety.
5. Superstitious behavior - Behavior that occurs as a result of "accidental" or adventitious reinforcement.
(See above).
6. Premack principle - A procedure in which a high probability behavior can be used to reinforce low probability
behavior, and low probability behavior can punish high probability behavior.
7. Response deprivation procedures – A set of procedures that involves depriving an organism the opportunity
to emit a response, and then using the opportunity to emit the response as a potential reinforcer for other
behavior. A deprivation of the response would function as a motivational operation, or more specifically, an
establishing operation. The Premack Principle is said to be an example of this procedure, as the person
typically does not have free access to the response in question.
8. Simple schedules of reinforcement - A rule that specifies when a reinforcer is delivered. The rule may
include time and/or response requirements (see the sketch after this list).
a. Continuous reinforcement - A term that refers to a fixed-ratio (FR) 1 schedule of reinforcement
wherein every response produces a reinforcer.
b. FR - Fixed-ratio schedule of reinforcement. When a reinforcer is delivered after a fixed number of
responses. This schedule produces a steady, high rate of response with pauses after reinforcement.
(Or, a break-and-run pattern.)
c. FI - Fixed-interval schedule of reinforcement. When a reinforcer is delivered after the first response
after a fixed amount of time has elapsed. This schedule produces a scalloped rate of response (low at
first, and then gradually increases).
d. VR - Variable-ratio schedule of reinforcement. When a reinforcer is delivered after an average
number of responses. This schedule produces a steady, very high rate of response with very brief, if
any, pauses after reinforcement.
e. VI - Variable-interval schedule of reinforcement. When a reinforcer is delivered after the first
response after an average amount of time has elapsed. This schedule produces a steady, medium
rate of response with very brief, if any, pauses after reinforcement.
k. Adjusting ratio - A ratio schedule in which the size of the ratio increases as responding
becomes more rapid and consistent, but decreases when responding deteriorates.
l. Progressive ratio - A ratio schedule in which the ratio size gradually increases over time.
m. Schedule thinning - Gradually decreasing the rate of reinforcement. In a ratio schedule, increasing
the size of the ratio; in an interval schedule, increasing the time requirement.
n. Post-reinforcement pause - A brief pause in responding immediately after reinforcement
under fixed-ratio or fixed-interval schedules. The length of the pause increases as the ratio
(or interval) size increases.
o. Ratio strain - A decrease in responding under a ratio schedule because the ratio size is too large, or
was increased too rapidly.
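A minimal simulation sketch of two of the schedule rules above (FR and VI); the ratio value, mean interval, and response times are hypothetical, and the VI intervals are drawn at random.

```python
# Minimal sketch of two simple schedule rules described above:
# FR delivers a reinforcer after a fixed number of responses; VI delivers
# it for the first response after a (variable) amount of time has elapsed.
# Values are hypothetical.

import random

def run_fr(ratio, n_responses):
    """Return the response numbers at which an FR schedule delivers reinforcement."""
    return [i for i in range(1, n_responses + 1) if i % ratio == 0]

def run_vi(mean_interval_s, response_times):
    """Return response times reinforced under a VI schedule (the first response
    after each randomly scheduled interval elapses)."""
    reinforced = []
    next_available = random.expovariate(1 / mean_interval_s)
    for t in sorted(response_times):
        if t >= next_available:
            reinforced.append(t)
            next_available = t + random.expovariate(1 / mean_interval_s)
    return reinforced

print("FR 5 reinforces responses:", run_fr(5, 20))
print("VI 30 s reinforces responses at:", run_vi(30, [5, 12, 33, 40, 61, 90, 95, 130]))
```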
9. Side effects of reinforcement – When using negative reinforcement, side effects would be similar to those
with punishment (see below). When using positive reinforcement, one may see some similar effects, but with a
different valence. For example, there may be emotional behavior, but it would be considered desirable
emotions. Instead of escape/avoidance, one may see approach to the source of reinforcement. However,
there may be some other unfortunate side effects of using positive reinforcement. For example, 1) schedule-induced
aggression may occur, 2) very frequent requests for the reinforcer may occur, 3) the person may
"shadow" the source of reinforcement to an extent that is disruptive, or 4) the schedule may have aversive
properties and the person may attempt to terminate it or escape. See "schedule-induced behavior" for more
details.
10. Extinction-induced aggression - Aggressive behavior that occurs when a behavior is being extinguished.
11. Punishment side effects
a. Escape from punishing agent
b. Aggression towards punishing agent
c. Emotional behavior
d. Modeling by observers
e. Inappropriate generalization - person is literally "afraid" to do anything
12. Social validity - Social validity refers to whether goals, procedures, and outcomes are acceptable. For
example, to examine the validity of goals and task analyses, you may ask experts or competent individuals to
rate the goal (e.g., ask a fireman whether the fire safety skills are appropriate). To examine the validity of the
procedures, you may ask community members or parents whether the timeout procedures that you are using
are acceptable. To examine the outcome validity, you may ask parents if the behavior changes in their child
are important and worthwhile.
13. Probe trials - a method of measuring generalization in which the behavior is measured in untrained
situations.
14. Shaping - Gradually changing the form, or topography, of a behavior by reinforcing successive
approximations to the correct, final topography. For example, one may increase voice volume by first
reinforcing speaking above 25 dB, then above 35 dB, 45 dB, etc. (see also stimulus shaping)
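A minimal Python sketch of the voice-volume example above follows. It is illustrative only; the step sizes, terminal criterion, and "mastery run" of five consecutive reinforced responses are assumptions, not values from the source.

```python
def shaping_criterion(start_db=25, step_db=10, final_db=65, mastery_run=5):
    """Return a function that judges each response against the current dB criterion
    and raises the criterion after `mastery_run` consecutive reinforced responses."""
    criterion, consecutive = start_db, 0
    def evaluate(response_db):
        nonlocal criterion, consecutive
        reinforced = response_db >= criterion              # successive approximation met?
        consecutive = consecutive + 1 if reinforced else 0
        if reinforced and consecutive >= mastery_run and criterion < final_db:
            criterion = min(final_db, criterion + step_db) # raise the bar
            consecutive = 0
        return reinforced, criterion
    return evaluate

# Usage:
# evaluate = shaping_criterion()
# evaluate(30)  # -> (True, 25); after 5 reinforced responses in a row, criterion -> 35
```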
15. Complex schedules of reinforcement
a. Differential reinforcement - When a reinforcement contingency depends on 1) the presence or
absence of a feature of a response (e.g., loud vocalizations produce food, soft ones do not) or 2) the
presence or absence of an antecedent stimulus, as in teaching a discrimination (e.g., engaging in
assignments is reinforced when the direction is given, but not when the direction is absent).
b. Concurrent schedule (Conc) - Two or more schedules are available simultaneously that can be
selected (choose to work in workshop or watch TV)
c. Multiple schedule (Mult) - Two or more independent schedules that are presented successively,
each with their own signal (1st period has FR 10 attention for tasks, 2nd period with a different teacher
has EXT for task completion). Each schedule has its own set of contingencies.
d. Mixed schedule (Mix) - Two or more independent schedules that are presented successively, but
each does not have its own signal (Schedule changes from FR 10 to EXT, but without notification).
Each schedule has its own set of contingencies.
e. Chain schedule (Chain) - Two or more schedules that are presented successively, each with its own
signal. A reinforcer is given only at the end of the sequence (FR 10 - FI 1' - VR 20 - reinforcer.)
f. Tandem schedule (Tan) - Two or more schedules that are presented successively, but there is no
signal for each. A reinforcer is given only at the end of the sequence (FR 10 - FI 1' - VR 20 - reinforcer.)
g. Alternative schedule (Alt) - A reinforcer is given when one of two schedules is completed. There is
only one response option. E.g., food is given when Bill completes an FI 1' or an FR 50, whichever comes
first.
h. Conjunctive schedule (Conj) - A reinforcer is given when both of two schedules are completed.
There is only one response option. E.g., food is given when Bill completes both an FI 1' and an FR 50.
i. Fixed time (FT) - A reinforcer is delivered after a fixed amount of time, irrespective of behavior. Also
called “response independent” schedules.
j. Variable time (VT) - A reinforcer is delivered after a variable amount of time, although the times
average some nominal value (VT 15' - delivered, on the average, every 15').
k. Limited hold - When a reinforcer is available for the next response, that response has a limited
amount of time to occur, or the reinforcer is lost (e.g., FI 1' LH 10" means that the first response after 1'
is reinforced, but it must occur within 10" after the 1' has elapsed). LH tends to increase response
rates.
16. Schedule-induced (adjunctive) behavior - Behavior that emerges as a by-product of a schedule of
reinforcement, even though the schedule does not require it. For example, some organisms will exhibit
aggression under FR 50 schedules of food delivery, and rats will drink copiously when exposed to FT 1'
schedules of food delivery.
17. Behavioral momentum - Behavioral momentum is a laboratory phenomenon in which subjects'
behavior patterns and characteristics tend to persist (albeit temporarily), even when the contingencies
are changed. In the applied arena, researchers have shown that compliance with difficult (low-probability)
directions can be increased when these directions are preceded by several easy (high-probability)
directions, with reinforcers given after each.
18. Group contingencies (a minimal decision sketch follows this list)
a. Dependent – when the reinforcer for a group depends on the behavior of a single person or a
small number of people
b. Interdependent – when a reinforcer is available if all people in the group meet a minimum
criterion, or if the group's overall performance meets a criterion (e.g., if the class as a whole
achieves 90%)
c. Independent – when a reinforcer is available for any person whose behavior meets a
criterion
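The three group contingencies differ only in whose behavior the reinforcer decision is based on. The Python sketch below is illustrative only; the names, scores, and the 90% criterion are assumptions, not data from the source.

```python
def dependent(scores, target_person, criterion):
    """Dependent: the whole group earns the reinforcer if the target person meets the criterion."""
    return scores[target_person] >= criterion

def interdependent(scores, criterion):
    """Interdependent: the group earns the reinforcer only if every member meets the criterion.
    (A group-average version would compare the mean of scores.values() to the criterion.)"""
    return all(score >= criterion for score in scores.values())

def independent(scores, criterion):
    """Independent: each person whose own behavior meets the criterion earns the reinforcer."""
    return {name: score >= criterion for name, score in scores.items()}

# Usage:
# scores = {"Ann": 92, "Bo": 85, "Cy": 95}
# interdependent(scores, 90)   # False: Bo is below criterion, so no group reinforcer
# independent(scores, 90)      # {'Ann': True, 'Bo': False, 'Cy': True}
```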
Prompting and Transfer of stimulus control
1. Prompts - An extra antecedent stimulus that is used to evoke a behavior, such that it can then be reinforced.
Extra-stimulus prompts are those that are "outside" the SD, such as physical guidance to prompt hand
washing, verbal guidance to prompt the steps of making a salad, etc. Within-stimulus prompts are those
that are contained within the SD, such as isolating and exaggerating the critical difference between an "E" and an
"F" (i.e., the bottom horizontal stroke).
a. Combined - Prompts are given at the same time, or just after, the SD.
b. Delayed - Prompts are given after a period of time elapses after the SD (i.e., give the person a
chance to perform the skill independently).
c. Graduated guidance - Give prompts whenever they are required, but immediately fade when person
begins to perform the response.
d. Kinds of prompts - Physical guidance, gestural, verbal, written, imitation (modeling)
2. Fading - The gradual withdrawal of prompts, such that the SD alone comes to evoke the desired behavior.
a. Most-to-least - Present the SD and prompt at the same time (or prompt just after the SD), and then
gradually use a less and less intense prompt. Less intense can be less physical force, or going from
physical to gestural and then to verbal.
b. Least-to-most - Give SD and then wait for response to be performed. If it is not, give the least
intrusive prompt first. If this is unsuccessful, then the 2nd least intrusive prompt, 3rd least intrusive,
etc.
c. Shadowing - When the trainer moves his/her hands along with the client's hands as he/she performs a
skill. Shadowing can be a step in most-to-least prompt fading, or a fading step in graduated guidance.
d. Spatial fading - Gradually changing the spatial locus of a prompt during fading. For example,
prompting at the hand, then the wrist, forearm, elbow, shoulder, etc. This is probably an example of
most-to-least fading.
e. Graduated guidance - Give prompts whenever necessary, and then immediately fade it when client
begins to perform the skill.
f. Differential reinforcement - Reinforce trials that are performed with a less intensive prompt, and do
not present reinforcement for trials that require more intensive prompts. As fading progresses, the
criterion for reinforcement also changes (i.e., the person must perform the response more and more
independently to obtain a reinforcer).
3. Transfer of stimulus control - When one stimulus can evoke a response, and then that capacity is
transferred to a second stimulus.
a. Delayed prompting - Impose a time delay between the SD and the prompt, thereby providing an
opportunity for the person to perform the response without the prompt. In some procedures, the delay
is gradually increased as more and more correct trials are obtained, but decreased as incorrect trials
appear. In others, the time delay remains fixed over time (e.g., 4 seconds between the SD and the
prompt if the person does not respond). (A minimal sketch appears after this list.)
b. Fading - see "Fading" procedures
c. See "errorless learning."
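The progressive variant of delayed prompting described above can be sketched as a simple update rule. The Python fragment below is illustrative only; the starting delay, step size, and ceiling are assumptions, not values from the source.

```python
def progressive_time_delay(start_s=0.0, step_s=1.0, max_s=4.0):
    """Return a function that updates the SD-to-prompt delay after each trial:
    lengthen it after correct unprompted responses, shorten it after errors."""
    delay = start_s
    def next_trial(responded_before_prompt, correct):
        nonlocal delay
        if correct and responded_before_prompt:
            delay = min(max_s, delay + step_s)   # fade the prompt further away in time
        elif not correct:
            delay = max(0.0, delay - step_s)     # back up toward immediate prompting
        return delay                             # delay (seconds) to use on the next trial
    return next_trial

# A constant (fixed) time-delay procedure would simply use the same delay
# (e.g., 4 seconds) on every trial.
```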
4. Stimulus shaping - This involves a transfer of stimulus control from an already effective stimulus to a new
stimulus. For example, you may be teaching a child to name the number "2" by first beginning with a picture of
2 apples, and then slowly changing the picture of the apples to a picture of the number "2".
5. Stimulus over-selectivity - The tendency of lower functioning individuals to attend to one, and only one,
element of a complex SD. For example, if asked to discriminate between a red "A" and a blue "B", the person
may merely attend to the difference in colors, and learn nothing about the difference in the letters. This
tendency can make it difficult to fade prompts, because the person attends to the prompt and never attends to
the SD.
6. Errorless discrimination - Teaching a discrimination with few or no errors. One procedure involves slowly
fading in an S-delta, or incorrect stimulus. Another involves taking an already learned discrimination,
superimposing a new set of stimuli, and then fading out the already learned stimuli. This procedure has,
unfortunately, been termed the "Terrace effect."
7. Concept formation - Generalization within a class of stimuli, and discrimination between classes. For
example, learning to identify all canines as "dogs" and discriminating dogs from cats.
8. Stimulus equivalence - When a class of stimuli evokes the same responses or, more generally, has the
same effects on behavior. For example, there is a class of stimuli that evoke the response "dog," which would
include 1) the word "dog" 2) a picture of a dog 3) the sight of a dog and 4) the sound of a barking dog. This is
an "equivalence class" because the stimuli are equivalent in the sense that they all evoke the response "dog."
Stimuli can be added to this equivalence class by rules (A person tells you that "Perro means dog in Spanish.")
and by matching to sample (a child could learn to match a card with "Perro" written on it to a card with a picture
of a dog on it).
Stimulus equivalence is often represented by some basic facts in math. For example, if A = B, and B = C, then
B = A, C = B, and A = C. The letters can be identified with specific stimuli described above. Thus, A could be
the word “dog”, B is a picture of a dog and C could be the word dog in Spanish or “perro.”
a. Transitivity – If A = B and B = C, then A = C
b. Symmetry – if A = B, then B = A
c. Reflexivity – A = A (each stimulus is matched to itself)
Experiment:
Training:
1. Train kids to match “dog” to a picture
2. Train kids to match picture of dog to Spanish word “perro”
Testing: (these are called emergent relations)
1. See if kids can match picture to “dog”
2. See if kids can match “perro” to picture of dog
3. See if kids can match “dog” to “perro”
4. See if kids can match each stimulus to itself
5. If all of these relations are present, then an equivalence class has been formed (a minimal code sketch of this logic follows)
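The trained and emergent relations in the experiment above can be checked mechanically. The Python sketch below is illustrative only; the stimulus labels are the ones used in the example, and the closure function is an assumption about how one might represent the test, not a description of an actual experimental apparatus.

```python
trained = {("dog", "picture"), ("picture", "perro")}     # trained relations: A->B, B->C

def derived_relations(trained_pairs):
    """Close the trained relations under reflexivity, symmetry, and transitivity."""
    relations = set(trained_pairs)
    stimuli = {s for pair in trained_pairs for s in pair}
    relations |= {(s, s) for s in stimuli}                 # reflexivity: each stimulus to itself
    relations |= {(b, a) for (a, b) in trained_pairs}      # symmetry: B->A, C->B
    changed = True
    while changed:                                         # transitivity: A->C, and combinations
        changed = False
        new = {(a, d) for (a, b) in relations for (c, d) in relations if b == c}
        if not new <= relations:
            relations |= new
            changed = True
    return relations

# Emergent (untrained but expected) relations from the test list above:
tests = {("picture", "dog"), ("perro", "picture"), ("dog", "perro")}
print(tests <= derived_relations(trained))   # True -> an equivalence class has formed
```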
Chaining
1. Chaining - Systematically linking together individual skills into a larger chain of skills (see total task training; a sketch of forward and backward chaining follows this section)
a. Forward chaining - Teaching a sequence of responses by initially training the first response of the
chain, then the first and second, then the first, second, and third, etc. In the training, the reinforcer is
presented after the required number of steps is completed. For example, if the learner is working on
steps 1, 2, & 3, then the reinforcer is delivered after step #3.
b. Backward chaining - Teaching a sequence of responses by initially training the last response of the
chain, then the second-to-last and the last, then the third-to-last, second-to-last, and last, etc. In the
training, the reinforcer is presented after the required number of steps is completed. For example, if
the learner is working on steps 4 & 5, then the reinforcer is delivered after step #5.
2. Total task training - When an entire task is trained at once, instead of implementing a chaining procedure. In
total task training, graduated guidance is often used (see Prompts - graduated guidance)
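The difference between the two chaining variants is simply which steps are practiced and where the reinforcer falls. The Python sketch below is illustrative only; the five hand-washing step names are assumptions used for the example.

```python
steps = ["wet hands", "apply soap", "scrub", "rinse", "dry"]   # hypothetical task analysis

def forward_chain_targets(steps, n_trained):
    """Forward chaining: train steps 1..n_trained; reinforce after the last trained step."""
    trained = steps[:n_trained]
    return trained, trained[-1]          # (steps practiced, step followed by the reinforcer)

def backward_chain_targets(steps, n_trained):
    """Backward chaining: train the last n_trained steps; reinforce after the final step."""
    trained = steps[-n_trained:]
    return trained, steps[-1]

# forward_chain_targets(steps, 3)   -> (['wet hands', 'apply soap', 'scrub'], 'scrub')
# backward_chain_targets(steps, 2)  -> (['rinse', 'dry'], 'dry')
```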
Behavioral Skills Training Procedures
1. Modeling – demonstrating a response for another person to imitate. For example, demonstrating how to fix a bike.
2. Instructions – verbal descriptions of behavior and antecedents/consequences. For example, a written
description of how to deliver reinforcement. Also called “rules.”
3. Rehearsal – practicing a behavior to be learned. For example, having a child practice conflict
resolution skills every day.
4. Feedback – Providing information contingent on a behavior. Feedback can function as reinforcement
or punishment, depending on the nature of the information.
5. Incidental learning/teaching - This is learning that occurs in naturally occurring activities, and not as a result
of programmed, artificial learning trials. For example, you may teach correct signing in the context of a
workshop task - the person completes a task, and then must communicate what he desires. Instruction on
signing can occur in this naturally occurring communication opportunity.
6. Task variation - The extent to which tasks are varied in a block of time. There is some research that
suggests rapidly varying the tasks may engender improved learning.
7. Direct instruction - A method of teaching material such as reading and math that involves scripted
presentations, active student participation, and immediate feedback from the teacher.
8. Behavioral momentum - see above
9. Fair pair - see above
10. Personalized System of Instruction (PSI) - A system for teaching college classes, developed by
Fred Keller. In this system, the material is broken down into small units, and each unit has its own
study objectives. The student works through the units at his/her own pace, studying the material
and then taking an exam. The student who needs assistance can work with a proctor, who is
usually an upper level student who has successfully completed the class. If a student does not
meet a mastery criterion on an exam, he/she re-examines the material and then takes a parallel
form of the same exam.
11. Precision teaching – using sound behavioral teaching methods and the standard chart (see
standard chart in the graphing section).
Precision Teaching boils down to "basing educational decisions on changes in continuous self-monitored
performance frequencies displayed on 'standard celeration charts'" (Lindsley, 1992a, p. 51). As such, it does not
prescribe what should be taught or even how to teach it: "Precision teaching is not so much a method of
instruction as it is a precise and systematic method of evaluating instructional tactics and curricula" (West &
Young, 1992, p. 114).
Here are some links to concepts in precision teaching:
1. Focus on Directly Observable Behavior
2. Frequency as a Measure of Performance
3. The Standard Celeration Chart
4. The Learner Knows Best
Differential reinforcement
1. Schedules of reinforcement used indirectly to decrease behavior (a minimal timing sketch follows this list)
a. DRO - Differential reinforcement of other behavior. When a reinforcer is delivered when a response
does not occur for a fixed amount of time (also, VDRO - when a reinforcer is delivered for the absence
of a response after a variable amount of time).
b. DRI - Differential reinforcement of incompatible behavior. When a reinforcer is delivered contingent
on a response that is chosen because it is incompatible with another response that is a deceleration
target behavior.
c. DRA - Differential reinforcement of alternative behavior. When a reinforcer is delivered contingent
on a response that is chosen because it is an alternative to another response that is a deceleration
target behavior.
d. Momentary DRO - A DRO schedule in which a reinforcer is delivered if the target behavior is
not occurring at the moment the DRO interval terminates. Thus, if the DRO interval is 10', a
reinforcer is delivered as long as the target behavior is not occurring at the end of the 10' interval,
regardless of whether it occurred earlier in the interval.
e. DRL - Differential reinforcement of low rates of behavior. When a reinforcer is delivered for no more
than a fixed number of responses in a time period, or when a reinforcer is delivered after an IRT
greater than some criterion amount of time. This schedule is used to decrease the rate of some
behavior.
f. DRH - Differential reinforcement of high rates of behavior. When a reinforcer is delivered for more
than a fixed number of responses in a time period, or when a reinforcer is delivered after an IRT less
than some criterion amount of time. This schedule is used to increase the rate of some behavior.
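The timing logic of DRO, IRT-based DRL, and IRT-based DRH can be sketched as simple checks. The Python fragment below is illustrative only; the parameter names and the use of timestamp lists are assumptions made for the example.

```python
def dro(interval_s, target_behavior_times, check_time):
    """Whole-interval DRO: reinforce if no target behavior occurred during the interval."""
    window_start = check_time - interval_s
    return not any(window_start < t <= check_time for t in target_behavior_times)

def drl_irt(irt_criterion_s, time_since_last_response_s):
    """IRT-based DRL: reinforce a response only if the IRT exceeds the criterion (slows responding)."""
    return time_since_last_response_s >= irt_criterion_s

def drh_irt(irt_criterion_s, time_since_last_response_s):
    """IRT-based DRH: reinforce a response only if the IRT is shorter than the criterion (speeds responding)."""
    return time_since_last_response_s < irt_criterion_s

# Momentary DRO would instead check only the instant the interval ends:
# reinforce if the target behavior is not occurring at `check_time`.
```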
Antecedent manipulations
1. Antecedent control procedures
2. Establishing operation (motivating operation – see above)
3. Present SDs for appropriate behavior
4. Remove SDs for inappropriate behavior
5. Increase response effort for inappropriate behavior
Response cost & timeout
1. Timeout - A shorthand term for "timeout from reinforcement," meaning signaling the removal of the
opportunity to earn reinforcement for a period of time, contingent on some behavior. It is used in an attempt to
decrease behavior, and when successful, can be classified as a punisher.
2. Response cost - Contingent on some inappropriate behavior, the removal of a reinforcing object (e.g., token,
radio, magazine).
3. Facial screening - Briefly covering the eyes, or restricting visual input in some way, contingent on a behavior.
4. Isolation timeout - A timeout from reinforcement in which the person is placed in another location away from
others. The person is unable to leave the location until some criterion is met.
5. Exclusion timeout - A timeout from reinforcement in which the person is removed from the immediate
situation, but kept in the general area (e.g., corner timeout, chair timeout).
6. Planned ignoring - When behavior is maintained by social reinforcers, those reinforcers are withheld for a
given period of time contingent on the behavior.
7. Contingent observation - Contingent on a behavior, the person is removed from ongoing activities and
permitted only to observe those activities.
8. Punishment guidelines for efficacy
a. Immediate - deliver the punisher immediately after the target behavior
b. Consistent - punish every response (FR 1)
c. Provide an alternative behavior that obtains the same reinforcer
d. Do not allow a reinforcer to follow too closely after the punisher
e. Use a high intensity punisher - do not start with a weak punisher and then slowly increase the
intensity
f. Withhold all reinforcers that can be produced by the target behavior
g. The particular punisher should be linked to assessment data
Other Punishment procedures
1. Restitutional over-correction - Contingent on some inappropriate behavior, requiring the person to restore
the environment to a condition superior to that before the behavior occurred. For example, if the person turned
over a table, he/she would right the table, and then straighten up the other tables or furniture in the room.
2. Positive practice over-correction - Contingent on some inappropriate behavior, requiring the person to
practice the appropriate behavior that should have occurred. For example, if the person wet his/her pants, then
he/she would practice signaling "bathroom" and using the toilet.
3. Required relaxation - Contingent on some inappropriate behavior, requiring the person to lie down and relax
in a quiet area (e.g., bedroom) for a period of time. (IN CONTRAST - Progressive relaxation - A technique of
relaxation wherein the person learns to relax various muscle groups. When completed, the person is then able
to totally relax all major muscle groups, and can do so under the control of a cue such as "relax." Such a
procedure is then used in situations that normally evoke disabling anxiety.)
4. Contingent effort - Any one of several procedures that involve requiring, contingent on a response, the client
to engage in an effortful activity. The activity may be positive practice (see above), restitution (see above), or
physical exercise.
5. Negative practice - Contingent on some inappropriate behavior, requiring the client to engage in that
behavior repeatedly. It is designed to decrease the rate of the behavior.
Programming Generalization and maintenance
1. Artificial (contrived) vs. natural contingencies - Given a choice, a behavioral programmer should
select contingencies that approximate those in the natural environment, rather than artificial
contingencies. When artificial contingencies must be used, however, they should be changed to more
normal contingencies whenever possible.
2. Behavioral rehearsal - Practicing a skill under simulated conditions. For example, if a client is to
learn assertiveness skills, then he/she may rehearse the skills under conditions set up by a trainer, and
receive reinforcers for exhibiting correct behavior. The simulated conditions should approximate those
in the natural environment.
3. Generalization - Stimulus generalization is when the effects of some contingency spread to stimuli that
have not yet been associated with the contingency. Response generalization is when the effects of some
contingency spread to responses that have not yet been associated with the contingency. The ways to
program generalization are found below.
a. Instructions - After training a response to criterion, give the client instructions to encourage
generalization (e.g., "Why don't you try this at home?")
b. Train in many stimulus conditions - Train the response in several new situations, such that the client
will be better prepared for new situations in the future. This kind of training will broaden the
generalization gradient.
c. Design supportive environments - If possible, design untrained situations to encourage and support
generalized skills.
d. Use simulations – program natural stimuli and contingencies when training.
e. Train loosely - During training, vary the environment such that there is not narrow, or irrelevant,
stimulus control over the skill. As a result, it is more likely that the stimuli in the new situations will
evoke the response. This may be particularly important with lower-level clients. For example, wear
different clothing, use different tones of voice, vary the reinforcers and schedule, etc. Or, to
encourage response generalization, require slightly different topographies over time.
f. Programming common stimuli - Identify the stimuli present in the situations in which the skill must be
performed. Then include those stimuli in the training situation, such that the behavior will be evoked by
them in both the training and the new situations.
g. Delayed/intermittent reinforcement - Gradually increase the delay to reinforcement, and thin the
reinforcement schedule, during generalization training, as reinforcers are normally more delayed and less
frequent in natural, untrained environments. Thus, the training and untrained environments are less
discriminable, and generalization is more likely to occur.
h. Self-management - Implementing any of the self-management techniques (see self-management)
after initial training is completed. By doing so, the client "takes with him" a package of procedures that
can be applied in any environment, and therefore promotes generalization.
i. Vary the prosthetic devices to encourage response generalization - when prosthetics are required
(e.g., eating utensils), use variations of the instruments (e.g., use different forks and spoons).
4. Molar/Level systems - A level system is a system wherein clients begin at a beginning level, and then work
their way up to higher levels. Each level has its own behavioral criteria for entry and its own collection of
reinforcers. Presumably, the higher levels have more stringent behavioral criteria and more powerful/desirable
reinforcers. Good for homogeneous groups of clients, and saves time on staff training. Not good for
heterogeneous groups.
5. Target setting - The setting to which a client will be placed after behavioral programming has finished. The
setting to which generalization efforts are directed.
6. Maintenance - The extent to which a procedure can produce durable changes in behavior. Or, a phase of
acquisition that uses specially designed procedures (e.g., self-management procedures, thinner schedules of
reinforcement) to maintain an already-learned response. Ways to encourage maintenance are:
a. Train to fluency
b. Use naturally occurring stimuli
c. Fade out artificial stimuli
d. Use delayed consequences
e. Use self control repertoires
f. Use intermittent schedules of reinforcement
Self-management
1. Self control - Self-control involves procedures that are implemented, at least in part, by the client.
Such procedures can involve self-recording, self-delivery of consequences, goal setting (setting one's
own goals), and self-evaluation (determining when behavior meets criterion for reinforcement).
Accurate self control procedures typically require some external source of contingency management.
For example, a child who is recording his own behavior may receive reinforcement for accuracy, and
lose reinforcement for incorrect recording. Or, a person designing his/her own self control program
may make a public commitment, and thereby enlist the contingency management support of friends
or family that will maintain the self control repertoire. The term self control is actually a misnomer, as
the "self" is not the ultimate source of control - instead the environment assumes the ultimate control.
2. Self-management - another term for self control that does not have the philosophical assumptions.
3. Self-recording - a procedure wherein the client decides if and when their own behavior meets a
criterion, and then records the behavior if it does. This procedure typically requires some external
contingency for accurate implementation.
4. Self-reinforcement - a procedure wherein a person decides if his/her behavior meets a criterion for
reinforcement, and then delivers the reinforcer if it does. This procedure typically requires some
external contingency for accurate implementation.
5. Self-punishment - a procedure wherein a person decides if his/her behavior meets a criterion for
punishment, and then delivers the punisher if it does. This procedure typically requires some
external contingency for accurate implementation.
Token economies
1. Tokens - Generalized conditioned reinforcers that, when earned, can be exchanged for other reinforcers, or
backup reinforcers. Tokens have advantages, as they can be quickly and easily delivered, and their efficacy
does not depend on a particular deprivation because they can be exchanged for a variety of backup
reinforcers.
2. Backup reinforcer - A reinforcer that is obtained by exchanging a token for it. Used in token systems.
3. Generalized reinforcer - A reinforcer that is effective in many situations because it can be exchanged for a
wide variety of backup reinforcers. Its effectiveness does not depend on one particular deprivation. Tokens
are generalized reinforcers.
4. Rules for designing a token system (a minimal ledger sketch follows this list)
a. Base it on functional assessments
b. Identify tokens that are easily used
c. Identify target behaviors and rules for obtaining tokens
d. Identify schedule of token exchanges
e. Identify how tokens will be conditioned as reinforcers
f. Field test the system and fine tune as needed
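A token system boils down to earning rules, exchange prices, and a running balance. The Python sketch below is illustrative only; the class name, behaviors, token values, and backup prices are assumptions, not recommendations from the source.

```python
class TokenBank:
    """A minimal token-economy ledger: earn tokens for target behaviors,
    exchange them for backup reinforcers."""
    def __init__(self, earning_rules, backup_prices):
        self.earning_rules = earning_rules    # behavior -> tokens earned
        self.backup_prices = backup_prices    # backup reinforcer -> token cost
        self.balance = 0

    def record_behavior(self, behavior):
        self.balance += self.earning_rules.get(behavior, 0)
        return self.balance

    def exchange(self, backup):
        cost = self.backup_prices[backup]
        if self.balance < cost:
            return False                      # not enough tokens yet
        self.balance -= cost
        return True

# Usage:
# bank = TokenBank({"completes task": 2}, {"10 min music": 5})
# bank.record_behavior("completes task")   # balance = 2
# bank.exchange("10 min music")            # False until 5 tokens have been earned
```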
Imitation and language development
1. Generalized imitation - Imitation that occurs even with untrained models. For example, a person may
imitate a behavior that he/she has never before been reinforced for imitating.
2. Modeling - providing a model for another person to imitate. It can be used as a form of prompting, which is
designed to evoke a behavior so that the behavior might be reinforced. The likelihood of the person imitating
the model will depend on the history of reinforcement for imitating, the model's "prestige," the "attractiveness"
of the model, etc.
3. Model - some antecedent stimulus that is topographically identical to the behavior to be strengthened.
4. How to teach verbal behavior using transfer of stimulus control
a. Teach echoics or textuals
b. Use echoics or textuals as prompts when teaching mands, tacts, intraverbals
c. Fade use of echoics or textuals as prompts
5. Delayed imitation - when a person imitates a model, but the model is no longer present.
6. Model characteristics - There are several characteristics that might influence whether a model's behavior will
be imitated. These include: model similarity, prestige of the model, emphasis of modeled behaviors, how
nurturing the model is, and instructions.
Miscellaneous
1. Behavior contrast - In positive contrast, a decrease in behavior in a changed situation (e.g., because
reinforcement there is reduced) is accompanied by an increase in the behavior in an unchanged situation. In
negative contrast, an increase in behavior in a changed situation is accompanied by a decrease in the behavior
in an unchanged situation.
2. Contingency (behavioral) contract - A contingency contract is an agreement between a client and
behavior programmer. The agreement states specific behaviors by the client, and what consequences
will be forthcoming for each behavior. A contract often includes bonus clauses for excellent
performance, and the opportunity to withdraw from the contract.
3. Matching law – The matching law is an equation that expresses a fundamental functional relation.
To wit, the rate of a response will be sensitive to the rate of reinforcement for that response, as well as
the rate of reinforcement for other responses. The equation is as follows:
R1 / (R1 + R2) = r1 / (r1 + r2)
Here, R1 is the rate of response for a particular option, R2 is the rate of a second response option, r1 is the
rate of reinforcement for the first option, and r2 is the rate of reinforcement for the second option. R2 and r2
can refer to the total sum of other responses and reinforcers, respectively. Other factors can influence the
distribution of responses, such as the magnitude and delay of the reinforcers, and whether or not there is a
changeover delay (COD). With a COD in effect, a reinforcer for R1 cannot occur immediately after an R2 (i.e.,
right after switching between options).
3a. Two ways to reduce R1 using the matching law: 1) decrease the rate of reinforcement for R1, and 2) increase
the rate of reinforcement for R2 (see the worked sketch below).
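The equation above can be worked through with a short calculation. The Python sketch below is illustrative only; the reinforcement rates (4/hour and 1/hour) are made-up numbers, not data from the source.

```python
def predicted_proportion(r1, r2):
    """Matching law prediction: proportion of responding allocated to option 1 = r1 / (r1 + r2)."""
    return r1 / (r1 + r2)

# If problem behavior (R1) earns 4 reinforcers/hour and an alternative response (R2)
# earns 1/hour, matching predicts 4 / (4 + 1) = 80% of responding goes to R1.
# Reducing r1 to 1 and raising r2 to 9 predicts 1 / (1 + 9) = 10% -- the two
# strategies listed in 3a.
print(predicted_proportion(4, 1))   # 0.8
print(predicted_proportion(1, 9))   # 0.1
```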
Content Area 10: Systems Support
Staff Training
1. Competency-based training - The kind of training that is essential in staff training and management. This
training involves a needs assessment, learning objectives, performance criteria, training procedures
(instruction, simulations, in vivo training), and on-line feedback. Training systems observe the principles of
behavior that are found in CBA class.
2. Information sharing/display - Information about behavior analysis services should be provided to those
directly involved (clients, trainers, parents), and to those who have a legitimate interest
(educational/governmental officials, administrators). When sharing data with non-professionals, the display
should be easily interpreted (avoid 6-cycle graphs; consider bar graphs).
3. Countercontrol - Countercontrol refers to attempts by the subjects of behavior programming to
change the behavior of the programmer. For example, students learned to train their teachers to
deliver more praise and positive comments.
4. Support for behavior analysis services - A behavior analyst should enlist support for her/his technology from
those who are directly affected by the services and by those only indirectly affected, but who may have decision
power over them (administrators, educational/government officials, advocacy committee, HRS, popular media).
Such support can be obtained by educational programs, and feedback/outcome measures that show cost
effectiveness of the technology.
5. Performance monitoring systems - These systems are designed to encourage and maintain appropriate
staff behavior. They involve objectively defined job descriptions, sufficient training in the job, frequent
on-line feedback, and a system of incentives for excellent performance.
6. Obtain support from others: to maintain a client’s behavior, you should secure support from those in their
natural environment. You should also work in collaboration with others who are involved with the client.
7. Procedural integrity – collecting data on the extent to which the program is being implemented correctly.
With those data, you can then arrange for effective contingencies to maintain and shape the behavior of the
implementers.