Recent reinforcement-schedule research and applied behavior analysis.
Journal of Applied Behavior Analysis, 29, 213-230 (February 1996).
DOI: 10.1901/jaba.1996.29-213



JOURNAL OF APPLIED BEHAVIOR ANALYSIS 1996, 29, 213-230 NUMBER 2 (SUMMER 1996)

RECENT REINFORCEMENT-SCHEDULE RESEARCH AND APPLIED BEHAVIOR ANALYSIS

KENNON A. LATTAL AND NANCY A. NEEF
WEST VIRGINIA UNIVERSITY AND UNIVERSITY OF PENNSYLVANIA

Reinforcement schedules are considered in relation to applied behavior analysis by examining several recent laboratory experiments with humans and other animals. The experiments are drawn from three areas of contemporary schedule research: behavioral history effects on schedule performance, the role of instructions in schedule performance of humans, and dynamic schedules of reinforcement. All of the experiments are discussed in relation to the role of behavioral history in current schedule performance. The paper concludes by extracting from the experiments some more general issues concerning reinforcement schedules in applied research and practice.

DESCRIPTORS: reinforcement schedules, behavioral history, verbal behavior, dynamic schedules, application of basic processes

We selected three areas of contemporary reinforcement-schedule research as the topic of our article for this series on the potential applications of recent developments in the experimental analysis of behavior: the effects of behavioral history on reinforcement schedule performance, the role of instructions in schedule performance of humans, and dynamic schedules of reinforcement. These areas share several interesting similarities, beyond representing some current research directions in schedules of reinforcement. First, all of the research bears directly on human behavior, which is a strong trend in the experimental analysis of behavior (Hyten & Reilly, 1992). Even the nonhuman animal (hereafter, animal) experiments on behavioral history that we discuss are rooted in earlier investigations of behavioral history effects with humans. Second, the research in each area illustrates how the immediate schedule circumstances interact with other variables to control behavior. These variables may be thought of broadly as historical ones, whether they involve what happened on the preceding cycle in the case of dynamic schedules, in preceding conditions in the case of history effects, or in the person's verbal repertoire, which often is considered another kind of history. Third, the research in each area comments on and qualifies the reinforcement process, albeit in different ways.

We begin by discussing the definitions and roles of reinforcement schedules in contemporary behavior analysis. This is followed by a review of some representative experiments in each area mentioned above and a discussion of their potential relevance for applied behavior analysis. We conclude with some more general issues concerning basic research involving schedules of reinforcement in relation to applied behavior analysis.

Lattal thanks Mark Reilly for many helpful discussions about progressive reinforcement schedules. Reprints can be obtained from Kennon A. Lattal, Department of Psychology, West Virginia University, Morgantown, West Virginia 26506-6040 (E-mail: lattal@wvnvm.wvnet.edu) or from Nancy A. Neef, Graduate School of Education, University of Pennsylvania, 3700 Walnut Street, Philadelphia, Pennsylvania 19104-6216 (E-mail: nancyn@nwfs.gse.upenn.edu).

REINFORCEMENT SCHEDULES IN BEHAVIOR ANALYSIS
Reinforcement schedules have been defined as prescriptions for arranging reinforcers in time and in relation to responses, as the rules used to present reinforcing stimuli (Zeiler, 1977), or as "specification(s) of the criteria by which responses become eligible to produce reinforcers" (Catania, 1992, p. 394). Such definitions are useful in specifying the form or structure of the schedule, but they fall short in other domains. First, they are mute on the dynamic interplay between the initial prescription or rule and subsequent behavior. This dynamic nature of at least some reinforcement schedules affects the subsequent arrangement between time, responses, and reinforcers that in turn leads to changes in performance. Second, they do not consider how schedule-controlled performance may be tempered by such factors as prior history or the operation of other contextual factors such as, in the case of humans, instructions that may either compete with or complement the rules specified by the schedule. Third, the definitions understate the role of schedules in natural settings where they also may be presumed to operate, but not always in a manner prescribed or imposed by an agent or specified by an a priori rule, as the definitions imply.

The ubiquity of schedules has made them a focal point of behavior analysis. They sometimes have been labeled the "amino acids of behavior" and often have been discussed as fundamental determinants of behavior (Morse & Kelleher, 1977). Reinforcement schedules are central in the experimental analysis of behavior because of what can be learned about the reinforcement process from them and because they serve as useful baselines for the study of other behavioral processes (Zeiler, 1984). Reinforcement schedules serve similar purposes in applied behavior analysis and also, either directly or indirectly, are embedded in most treatment programs.

CURRENT SCHEDULE PERFORMANCE AND SCHEDULE HISTORY

An assumption of behavior analysis is that operant behavior is controlled by current reinforcement schedules. Recent investigations with both animal and human subjects have systematically explored how previous experiences also influence current schedule-controlled behavior. The experiments illustrate techniques for establishing functional relations between explicitly arranged past experiences and current performance, offer a broader context for discussing reinforcement schedule performance, and raise important issues for applied behavior analysts concerning how historical variables are conceptualized and studied in relation to applied problems.

Historical variables have been examined in basic research by establishing different baseline histories of responding under separate schedules and evaluating subsequent performance under a common third schedule as a function of prior schedule experience. For example, Freeman and Lattal (1992) examined behavioral history effects in three experiments with pigeons. In the first experiment, subjects initially were exposed daily to a fixed-ratio (FR) schedule during one session and to a differential-reinforcement-of-low-rate (DRL) schedule during the other session, each under different stimulus conditions. Each schedule was presented for 50 or more sessions, and established a history of responding at a rate that was about 4.5 times higher under the FR than under the DRL schedule. Subsequently, identical fixed-interval (FI) schedules were implemented under the respective stimulus conditions during both of the sessions for 60 days. Response rates in the former FR condition remained higher than response rates in the former DRL condition, and they tended to converge only after prolonged (approximately 15 to 40 sessions) exposure to the FI schedule. Similar results were obtained in a second experiment that examined the effects of high-rate (FR schedule) and low-rate (DRL schedule) histories of responding on subsequent performance under a variable-interval (VI) schedule of reinforcement, except that the effects of history were not as persistent. These results were replicated in a third experiment using a multiple schedule to generate high and low response rates within individual baseline sessions.
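The schedule types at issue here (FR, DRL, FI) can be expressed, in the spirit of Catania's (1992) definition, as rules that determine when a response becomes eligible to produce a reinforcer. The following is a minimal sketch, not taken from any of the studies discussed; the class names and method signatures are ours, for illustration only.

```python
class FixedRatio:
    """FR n: every nth response produces a reinforcer."""
    def __init__(self, n):
        self.n = n
        self.responses = 0

    def response(self, now):
        self.responses += 1
        if self.responses == self.n:
            self.responses = 0
            return True   # this response produces a reinforcer
        return False


class DRL:
    """DRL t: a response is reinforced only if at least t seconds have
    elapsed since the previous response (reinforces low-rate responding)."""
    def __init__(self, t):
        self.t = t
        self.last_response = None

    def response(self, now):
        eligible = (self.last_response is not None and
                    now - self.last_response >= self.t)
        self.last_response = now
        return eligible


class FixedInterval:
    """FI t: the first response after t seconds have elapsed since the
    previous reinforcer produces the next reinforcer."""
    def __init__(self, t):
        self.t = t
        self.last_reinforcer = 0.0

    def response(self, now):
        if now - self.last_reinforcer >= self.t:
            self.last_reinforcer = now
            return True
        return False
```

Under FR 5, for example, a run of 10 responses yields two reinforcers regardless of their spacing, whereas under DRL 10 s only responses separated by 10 s or more are reinforced; the same response stream thus meets very different criteria under the two rules.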
These studies demonstrated that if behavior has been established under stimulus control in the past, then that past schedule performance affects current responding in the presence of the stimuli. They also suggest that certain histories of reinforcement can be relatively persistent (e.g., DRL on subsequent FI performance). Does an intervening history mitigate the persistence in behavior of more remote histories? LeFrancois and Metzger (1993) compared rates of bar presses by two groups of rats under FI schedules. For both groups, responses first were conditioned under a DRL schedule, but for one group, exposure to an FR schedule preceded the FI schedule. Performance under the FI schedule was affected by immediate history, and DRL schedule histories did not affect FI performance for subjects with an intervening FR schedule history.

Humans past the age of 5 or 6 years often differ from animals in patterns of responses and control by schedules of reinforcement. For example, whereas the scalloped or "break-and-run" pattern predicted by FI schedules readily occurs with rats and pigeons (i.e., postreinforcement pause followed by positively accelerated response rates), human performance, particularly human performance that is instructed (e.g., Catania, Matthews, & Shimoff, 1982), is resistant to control by the temporal variables that are implicit in those schedules. Thus, one could expect some history effects (e.g., those reported by Freeman & Lattal, 1992) to be even more persistent or to manifest differently with humans. This in turn has led to other investigations of variables related to behavioral history that might account for such differences.

For example, it has been hypothesized that, for humans, variable-ratio (VR) schedules that generate high-rate responding could interfere with subsequent FI performance (Wanchisen, Tatham, & Mooney, 1989); however, the results of a study by Baron and Leinenweber (1995) suggested that such a VR history does not by itself account for FI performance differences between humans and other animals. That study examined performance of 18 rats under an FI 30-s schedule, half of whom had a conditioning history of high-rate responding under single or compound VR schedules. As in previous studies, the high response rates initially established in the rats exposed to VR schedules progressively diminished with continued exposure to the FI schedule. Of particular note, however, was that the pattern of responding within each FI was similar for subjects with and without a VR history. That is, subjects with and without a prior VR history showed similar development of the characteristic FI scallop and postreinforcement pauses that are indicative of temporal control by FI schedules. Thus, although history effects for rats with prior exposure to VR schedules were evident in high overall response rates under the FI schedule, within-interval response rates were similar to those of rats without a VR history. Regardless of history, then, rats' responding was controlled by the schedule in a way that is not always characteristic of adult human performance.
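The logic shared by these history designs — establish different baseline rates under two schedules, then test both under a common schedule — can be illustrated with a deliberately simple toy model. The model is ours, not drawn from any study discussed, and fabricates no data; it only assumes that response rate drifts gradually toward whatever rate the current schedule supports.

```python
def simulate_transition(initial_rate, target_rate, sessions, adaptation=0.1):
    """Toy model: each session, response rate moves a fixed fraction of
    the way toward the rate supported by the current schedule.
    Returns the rate after each session (arbitrary units)."""
    rates = []
    rate = initial_rate
    for _ in range(sessions):
        rate += adaptation * (target_rate - rate)
        rates.append(rate)
    return rates

# Two hypothetical histories tested under a common FI schedule that
# supports a moderate rate: an FR history (high rate) and a DRL
# history (low rate). All numbers are arbitrary illustrations.
fr_history = simulate_transition(initial_rate=90.0, target_rate=40.0, sessions=40)
drl_history = simulate_transition(initial_rate=10.0, target_rate=40.0, sessions=40)
```

Early in testing the two trajectories differ widely (the history effect); with continued exposure both converge on the schedule-supported rate, which is the sense in which history effects can be treated as transition states rather than permanent differences.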
That a history of responding established under certain schedules affects current performance under a different reinforcement schedule for some period of time after the original conditions have changed has also been observed in applied studies with human subjects. For example, in a study of concurrent schedule reinforcement of academic behavior, Mace, Neef, Shade, and Mauro (1994) reported that "changes in the concurrent VI schedules failed to generate patterns of time allocation that matched the relative rates of reinforcement. Instead subjects generally tended to persist in their allocation patterns of the previous schedule condition" (p. 593).

Considering that the control of current schedule performance by past experiences appears to be a robust finding in basic research, it is somewhat surprising that the residual or carryover effects of prior experiences have not been observed more often in applied studies. It may be that carryover effects are more common than it would appear from the literature. For example, because multiple baseline and reversal designs that are common in applied research require immediate changes in behavior to demonstrate experimental control, studies in which behavior patterns persist from one condition to the next may not be submitted, or may be rejected, for publication.

Another possibility for the absence of history effects in applied studies is that changes in independent variables in these studies are often correlated with unique discriminative stimuli, which may mitigate the effects of behavioral history. This is supported by the results of studies by Freeman and Lattal (1992) indicating stimulus control of history effects, and by Hanna, Blackman, and Todorov (1992) showing that discriminated responding following changes in concurrent VI schedules occurred sooner in pigeons when each schedule was uniquely correlated with a discriminative stimulus (although see Mace et al., 1994, for an exception).

The extent to which history effects persist with humans cannot be predicted with certainty from those involving animals, as the results of Baron and Leinenweber's (1995) experiment illustrate. Perone, Galizio, and Baron (1988) discussed a number of differences between human and animal studies that may contribute to the differences in schedule performance as a function of schedule history. Specifically, in the case of differences in behavioral history effects, unlike the animal subjects' relatively simple and brief (e.g., 2 months or less) "simulated" histories, the behaviors of individuals treated by applied behavior analysts often have a history of many years' duration. The duration of a history might affect its persistence and may, for example, partially account for Lovaas' finding that the success of even prolonged intensive treatment for autism was related to the age of the child (Lovaas, 1993).

The results of LeFrancois and Metzger (1993) indicating that current performance is affected more by immediate history than by remote history have implications for applied research on functional analysis. In a sense, functional analyses involve efforts to determine the effects of immediate history by identifying the contingencies that maintain current responding. But doing so involves implementing contingencies and schedules of reinforcement that also create immediate histories for those behaviors. It is possible that the behaviors measured may come to be controlled more by the intervening histories arranged by the functional analysis than by the histories the functional analysis is designed to assess, especially if the analysis is prolonged.

Although the above studies indicate the control of current performance by previous schedules, they also demonstrate the diminishment of this control as experience under other schedules increases. History effects therefore can be conceptualized as transition states (Sidman, 1960). Indeed, the assumption that history effects, for the most part, will not persist indefinitely as behavior adapts to the present environment is the raison d'être for applied behavior analysis.
It underlies our successes as well as our treatment failures such as relapse (Lerman, Iwata, Smith, Zarcone, & Vollmer, 1994).

Paradoxically, applied behavior analysts have regarded the role of behavioral history as both paramount and irrelevant. On the one hand, a tenet of behavior analysis is that history profoundly affects human behavior. In fact, it could be argued that for applied behavior analysts, arranging conditions to alter subsequent behavior is itself a matter, and goal, of generating a different history that will produce durable changes in the targeted behavior. On the other hand, until the development of functional analysis methods (Iwata, Dorsey, Slifer, Bauman, & Richman, 1982/1994), behavior analysts generally disregarded the historical conditions under which behavior developed. The relative lack of emphasis on the history that subjects bring to the situation is, in part, what distinguishes our field (e.g., from psychoanalytic models). One reason was practical: We can never know with certainty the history that led to the development of a particular problem behavior because we were not able to observe it (nor, obviously, to change it). We therefore relegated history to the role of an inevitable source of behavior variability, recognizing that "any difference in behavior will always be subject to interpretation as a product of some currently unknowable fluctuation in those unknown variables" (Baer, 1977, p. 168).

Another reason for disregarding behavioral history was a belief that history is irrelevant, either because the problem behavior, when we encounter it, might be affected by different conditions than those that contributed to its development, or because the arrangement of different conditions (immediate or proximal history) can override the effects of prior conditions (remote or distal history). As noted by Iwata et al. (1982/1994), "behavioral researchers and clinicians generally have dismissed the importance of etiology, since the conditions that are necessary to develop or maintain a response may be totally unrelated to the conditions that are sufficient to alter or eliminate it" (p. 198). It may be inconsistent or arbitrary to assume, however, that the relevance of history depends on whether or not we have arranged it. For example, sometimes we seek to establish a history that will render a desired behavior insensitive to immediate contingencies (as implied by the definition of generalization) so that it will be maintained in the natural environment. If we can arrange treatment conditions for desirable behavior to persist in the presence of disruptive events, then we also must appreciate that other conditions in the natural environment create a history in which problem behavior is resistant to change by our treatment conditions. By understanding how and under what conditions history affects schedule-controlled behavior, applied behavior analysts may be able to design interventions that mitigate or optimize those influences.

HUMAN VERBAL BEHAVIOR AND SCHEDULE PERFORMANCE

The studies just discussed illustrate that schedule performance is not determined exclusively by contemporary requirements for reinforcement. Human schedule performance also may be affected by typically long-standing histories of verbal behavior. Hackenberg and Joker (1994) examined choices of adult humans when the correspondence between instructions and contingencies was made progressively less accurate by gradual shifts in the schedule of reinforcement. The purposes of their investigation were to examine instructional control under conflicting schedule requirements, the transition from instructional to schedule control, and the effects of a history of inaccurate instructions on compliance.
The procedure involved presenting two different stimuli on a computer screen, one of which was associated with a fixed-time (FT) schedule and the other with a progressive-time (PT) schedule. Neither schedule required a response; the reinforcers were simply delivered at the end of the scheduled time period. After each successive reinforcer, subjects selected one or the other schedule, which then remained in effect throughout that trial. If a subject initially selected the stimulus associated with the PT schedule, a point was delivered immediately (0 s), and each successive choice of the PT schedule gradually increased the time to point delivery in fixed increments. If a subject initially selected the stimulus associated with the FT schedule, a point was delivered after 60 s and the PT schedule was reset to 0 s.

Subjects were given the same set of instructions throughout the experiment. Under the first experimental condition, the instructions accurately characterized the sequence of PT and FT choices that would produce the most reinforcement (i.e., the schedule and the instructions were identical). The size of the increments (step size) in the PT schedule was altered gradually across successive experimental conditions (in ascending and then descending order), such that the same instructions gradually became less (and then more) accurate in describing the optimal choice sequence. This unique procedure permitted examination of the transition from instructional to schedule control on a continuum of changing stimulus conditions, and examination of schedule control as a function of history.

Instructional control was established quickly in the first condition, in which instructions accurately described the schedules. However, conformity with instructions necessarily constrained the range of behavior (choice patterns), thereby both precluding contact with changes in the schedule and reducing subsequent control of responding. Choices continued to be controlled by instructions as the PT step size increased across several experimental conditions. As the instructions became progressively more inaccurate, however, choice patterns became more variable and produced more reinforcement, resulting in an abrupt transition from instructional to schedule control. The extent to which choices were controlled by the schedule contingencies as PT step sizes decreased (in the descending sequence) varied according to the point at which instructional control first broke down in the ascending sequence, suggesting that the history of consequences for following inaccurate instructions can have enduring effects on behavior.
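The PT and FT contingencies just described can be sketched in a few lines. The sketch below is our illustration only, not the authors' software; the function and parameter names are ours, and only the 60-s FT delay, the 0-s PT starting value, and the reset rule come from the description above.

```python
def delays_for_choices(choices, step_size, ft_delay=60.0):
    """Return the delay (s) to point delivery for each choice in a sequence.

    choices   -- sequence of 'PT' or 'FT' selections, one per trial
    step_size -- fixed increment added to the PT delay after each PT choice
    """
    pt_delay = 0.0                    # PT schedule starts at 0 s
    delays = []
    for choice in choices:
        if choice == "PT":
            delays.append(pt_delay)   # point delivered after the current PT delay
            pt_delay += step_size     # each PT choice lengthens the next PT delay
        else:
            delays.append(ft_delay)   # FT: point delivered after 60 s
            pt_delay = 0.0            # choosing FT resets the PT schedule
    return delays
```

With a 10-s step, for example, the sequence PT, PT, PT, FT, PT yields delays of 0, 10, 20, 60, and 0 s. The optimal mix of PT runs and FT resets depends on the step size, which is why gradually changing the step size across conditions could make a fixed instruction progressively less accurate.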
The schedules in Hackenberg and Joker's (1994) experiment are analogous to many situations in the natural environment in which conditions gradually change (e.g., the progression of an illness, potency of a drug, the economy) such that instructions from others (e.g., physicians, financial advisers) that initially described the schedule become progressively less accurate. Hackenberg and Joker's results suggest that the history and degree of correspondence between instructions and consequences of behavior may affect the extent to which choices are determined by those changing conditions or continue to be controlled by instructions. In some situations, instructional control and weak control by changing schedule conditions are desirable because they increase the probability that a response will persist despite short-term punishers, extinction, or increasing work requirements (e.g., a physician's instructions when a medical program takes time to work). In other situations, control by instructions is maladaptive (albeit not always for the person giving them). Scam artists, for example, strengthen instructional control by initially arranging consequences that support their instructions (such as ensuring that the victim receives "profits" for a period of time) so that instructions will continue to control escalated investments in a bogus operation in which the effectiveness of the instructions depends on the victim's behavior becoming insensitive to the changing consequences (e.g., when the profits no longer arrive).

One direct implication of Hackenberg and Joker's (1994) findings for applied behavior analysts involves interventions to decrease noncompliance, one of the most pervasive childhood behavior problems (e.g., Neef, Shafer, Egel, Cataldo, & Parrish, 1983; Parrish, Cataldo, Kolko, Neef, & Egel, 1986). Establishing a history of following adult instructions may also have the disadvantage of reducing the direct schedule control of the behavior of children. For example, Parrish et al. (1986) found that reinforcing compliant behavior produced collateral reductions in inappropriate behavior (e.g., aggression, disruption) for which there were no scheduled consequences. Similarly, Neef et al. (1983) demonstrated that reinforcing compliance to a subset of instructions increased compliance with instructions of similar types of requests ("do" or "don't") that were not reinforced. The demonstration of generalized response classes (i.e., that reinforcement of instruction following had collateral effects on unreinforced behaviors) in both of these studies necessarily also shows that the unreinforced members were not controlled by the reinforcement schedule. Although these particular results had obvious advantages, such histories might also have long-term disadvantages if they produce rigid rule following that is not regulated by environmental contingencies.

Such rigidity effects were demonstrated in a recent study by Wulfert, Greenway, Farkas, Hayes, and Dougher (1994) on the effects of instructional histories and measures of behavioral rigidity on sensitivity to schedule contingencies. In the first experiment, the responding of subjects who were given accurate, specific instructions under a multiple DRL FR schedule of reinforcement showed more persistence when the condition was changed to extinction than did the responding of subjects who were not given specific instructions. Persistence was most pronounced for subjects who previously had been classified as "rigid."

In a second experiment, subjects were given accurate, specific instructions with an FR schedule in effect. When the schedule was changed to DRL, half of the subjects were instructed accurately and the other half were instructed inaccurately. All subjects who had been given accurate instructions responded accordingly. When instructions were inaccurate, all subjects initially responded according to the instructions (which described the previous FR schedule), but the subjects who had previously been characterized as rigid persisted in this pattern of behavior; responses of most of the "nonrigid" subjects eventually conformed to the schedule.

Some recent research suggests that self-generated rules may control nonverbal behavior similarly to instructions from others (e.g., Rosenfarb, Newland, Brannon, & Howey, 1992). For example, in some preliminary work on matching by Neef and colleagues, a student whose completion of different sets of math problems was reinforced on a concurrent VI 30-s VI 60-s schedule devoted her time exclusively to one set of problems, saying, "I should finish what I start before doing something else." Obviously in many situations control of behavior by that rule (or the history that is reflected in the rule) would be adaptive in obtaining reinforcers that are associated with successful task completion. But because that rule competed with the optimal response strategy in this situation, her behavior could not come into contact with the programmed contingencies. Her exclusive responding to a single alternative that yielded a relatively low rate of reinforcement persisted over numerous sessions until adjunct procedures were added (e.g., modeling, count-down timers) that nonverbally described the contingencies.
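Matching in these studies refers to the generalized matching relation, which is commonly written (our notation) with B1 and B2 the response rates and R1 and R2 the obtained reinforcer rates on the two alternatives:

```latex
\log\!\left(\frac{B_1}{B_2}\right) = a \,\log\!\left(\frac{R_1}{R_2}\right) + \log b
```

where a indexes sensitivity to relative reinforcement and b indexes bias; a = b = 1 corresponds to strict matching. Exclusive responding to a single alternative, as in the example above, never contacts the relative reinforcer rates that the equation describes.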
Similarly, the extent to which faulty or imprecise rules may have been generated by other subjects in studies on matching by Neef, Mace, and Shade (1993), Neef, Mace, Shea, and Shade (1992), and Mace et al. (1994) could have contributed to the initial lack of control of their responding by concurrent VI schedules until adjunct procedures were used to establish matching.

The effects of self-generated rules on schedule performance were also examined by Horne and Lowe (1993) in a series of six experiments with 30 adults. Because matching had been reported in a number of studies with humans by Bradshaw, Szabadi, and Bevan (1976, 1977, 1979), Horne and Lowe used a similar procedure. Two computer keys could be pressed for points (exchangeable for money) on concurrent VI schedules. On one key, six different-valued VI schedules were randomly rotated every 10 min separated by 5-min rest periods; responses to the other key were reinforced on the same VI schedule throughout. Each VI schedule was correlated with a different stimulus (e.g., geometric shape). At the end of each experiment, subjects completed a questionnaire asking them to describe the schedules that had been in effect and the factors that had influenced their choices. These performance rules then were compared to the subjects' schedule performance. The choices of only 13 of the 30 subjects conformed to the generalized matching equation and approximated the typically reported performance of animal subjects. Significantly, actual performance of 29 of the subjects corresponded closely to their descriptions of the performance rules they had generated, whether or not those rules accurately described the reinforcement schedules (see also Rosenfarb et al., 1992).

These data suggest that adult humans' own verbal behavior may influence their behavior in the presence of reinforcement schedules, although in other studies the relation between subjects' descriptions and accompanying verbal responses has been less clear (Hackenberg & Axtell, 1993; Jacobs & Hackenberg, 1996). The relation between verbal description and schedule performance may be represented as a continuum. On the one hand, it is likely that schedule performance influences the verbal description. On the other, schedules might control performance only indirectly, to the extent that they affect individuals' verbal behavior in the form of rules that govern behavior. This latter observation has interesting implications for treatment. For example, naturally existing reinforcement schedules are often ambiguous, and even in treatment settings it is difficult to arrange them with the consistency and precision that are characteristic of laboratory research. To the extent that individuals formulate and follow faulty rules, nonverbal behavior that is controlled by those rules may be maladaptive or restricted such that it does not come into contact with schedules that might lead to other rules. In counseling situations, for example, clients are often encouraged to articulate their covert verbal behavior so that inaccurate contingency descriptions can be observed and challenged; the therapist can then shape more appropriate performance rules or prompt clients to follow testable rules that will contact contingencies that support alternative behaviors.

Early applied research addressed the role of verbal behavior directly in relations between verbal and nonverbal correspondence (e.g., Deacon & Konarski, 1987; also see Israel, 1978, for a review) or indirectly in self-instruction training (Meichenbaum & Goodman, 1971). However, there have been few studies in JABA on the role of verbal behavior over the past decade. For the most part, behavior analysts have treated verbal behavior and pretreatment history similarly: Because neither history nor covert verbal behavior
REINFORCEMENT SCHEDULES 221
havior can be directly observed, and reports of those events cannot be presumed to be reliable or valid, these variables are often considered to be incidental to prediction and management of behavior. In fact, in a seminal article that continues to define our field, Baer, Wolf, and Risley (1968) stated that applied research

    usually studies what subjects can be brought to do rather than what they can be brought to say.... Accordingly, a subject's verbal description of his own nonverbal behavior usually would not be accepted as a measure of his actual behavior unless it were independently substantiated. Hence, there is little applied value in the demonstration that an impotent man can be made to say that he is no longer impotent. The relevant question is not what he can say, but what he can do. (p. 93)

Although the focus of behavior analysis appropriately remains on what an individual does, the research by Horne and Lowe (1993) and others (e.g., Catania et al., 1982; Hayes, Brownstein, Zettle, Rosenfarb, & Korn, 1986) suggests that behavior analysts need to consider that "what subjects can be brought to do" may, in many situations, be a function of "what they can be brought to say." It seems that the analysis of verbal behavior in relation to reinforcement schedules may warrant a more central role in applied behavioral research.

DYNAMIC REINFORCEMENT SCHEDULES

Much of the research in the areas described above concerns the role of distal experiences, either those with previous reinforcement schedules or aspects of a history of rule following, on current schedule performance. The control of schedule performance by more contemporary, or proximal, events is more characteristic of behavior-analytic research.

Reinforcement schedules typically involve repetitive, basically static, arrangements whereby the same requirements for reinforcement are in effect on successive cycles. With an FR schedule, for example, after every reinforcer the same fixed response requirement is repeated. But even here, there may be an interaction between schedule performance at a given point in a session and subsequent performance in that session. For example, during an FR schedule, reinforcement rate is determined by response rate such that more rapid responding yields a higher rate of reinforcement than does lower rate responding. Similar relations have been described for other schedules (e.g., Baum, 1973, 1989; Nevin, 1984). Such effects may be considered on the same continuum as those in the previously cited studies of historical influence on schedule performance, the difference being the time scale over which the prior experience is measured. The interaction of humans' rules and schedules also illustrates dynamic schedule effects. For example, if a rule and the requirements of a schedule are in conflict, over time the subject's behavior may conform to the schedule even though it was initially controlled by the rule (Galizio, 1979). Self-generated rules, if and when they occur, might interact in a similar way with schedule-controlled behavior to change the latter from time to time. As the person's rules change, within or between sessions, behavior changes, which leads to further changes in the rules and so on.

Dynamic reinforcement schedules provide a procedure for explicitly studying the types of interactions suggested in the preceding paragraphs. With these dynamic schedules, the requirements for reinforcement change after each reinforcer, or some sequence of reinforcers, as a function of either an a priori algorithm or some aspect of the organism's
previous performance on the schedule. Theoretically, such dynamic schedules are of interest because of the insights they may reveal into the adaptation of behavior to rapidly changing circumstances. They also may be of interest to applied behavior analysts because they represent an attempt by basic researchers to address complex situations without dissecting them into more elemental schedules. Such dissection in fluid, applied settings sometimes is undesirable, difficult, or even impossible. We note too that we do not wish to proliferate an already complex taxonomy of schedules. Thus, the distinctions that we describe below are only for didactic purposes. We do not propose these terms as substitutes for descriptions of schedules that we believe are adequate (e.g., Lattal, 1991).

Algorithm-Based Dynamic Schedules

One dynamic arrangement occurs when the requirements for reinforcement are changed according to an algorithm. When an algorithm is used, the requirements change independently of behavior. Progressive schedules of reinforcement involve systematic, gradually incrementing response requirements for reinforcement in the case of progressive-ratio (PR) schedules, or time between reinforcement availability for a response in the case of progressive-interval (PI) schedules. The most common algorithms for incrementing the schedule requirements are arithmetic (i.e., a constant amount is added to each successive interval) and geometric (i.e., each successive interval is increased by a constant proportion of the preceding one) progressions. Because the changes in schedule requirements occur after each reinforcer (or block of reinforcers) and do so without regard to the organism's behavior, they exemplify an algorithm-driven dynamic reinforcement schedule.

Dougherty, Cherek, and Roache (1994) investigated PI schedule performance of human subjects. Subjects were seated at a console and earned points by pushing a button according to a chained PI t-s FI 20-s schedule. In the presence of the letter A on the computer screen, the first response after t s on a button changed the stimulus on the screen to B, indicating that an FI 20-s schedule was in effect. The first response after 20 s was reinforced in the presence of B. The FI schedule remained in effect until five reinforcers (points on a counter that were exchangeable for money) were earned. At that time, the letter A reappeared on the screen and the PI t-s schedule was incremented. This A-B cycle repeated throughout each 1-hr session. The size of t was 20 s, 40 s, 80 s, or 160 s in different conditions of the experiment. Increments in t were made according to either geometric or arithmetic progressions in different conditions. Human PI performance was characterized by diminishing response rates and increasing postreinforcement pause durations as a function of progressively increasing interval requirements. Postreinforcement pauses tended to increase arithmetically under the arithmetic progressions and geometrically under the geometric ones. Under both types of progressions, response rates decreased across increasing PI requirements. These results were similar to those reported by Harzem (1969), with rats as subjects. In a second experiment, Dougherty et al. used PI schedules to assess temporal control of behavior in humans as a function of different doses of marijuana. The findings of that second study are less important to the present theme than the fact that the investigators first analyzed a schedule with unknown effects on human behavior and then were able to use the schedule to assess another behavioral process: temporal control under a drug.

In PR schedules, it is customary to continue increasing the ratio requirement for reinforcement until responding ceases for a preestablished time period (e.g., 10 min, but
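The arithmetic and geometric progression algorithms described above can be sketched in a few lines of code. This is a minimal illustration only; the starting interval, step, and ratio below are hypothetical values chosen for the example, not parameters taken from Dougherty, Cherek, and Roache (1994):

```python
def arithmetic_intervals(start, step, n):
    """Arithmetic progression: a constant amount is added to each successive interval."""
    return [start + step * i for i in range(n)]

def geometric_intervals(start, ratio, n):
    """Geometric progression: each successive interval is a constant multiple of the preceding one."""
    return [start * ratio ** i for i in range(n)]

# First five programmed PI values (in seconds), incremented after each
# reinforcer (or block of reinforcers) without regard to responding.
print(arithmetic_intervals(20, 20, 5))  # [20, 40, 60, 80, 100]
print(geometric_intervals(20, 2, 5))    # [20, 40, 80, 160, 320]
```

Because each new interval is computed from the progression alone, the sequence exemplifies an algorithm-driven dynamic schedule: the requirement changes independently of the organism's behavior.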
the actual duration varies from one experiment to another), described as the break point (but see also Thompson, 1972, and Keesey & Goldstein, 1968, for other criteria). Break points have not been studied with PI schedules, raising a question about how PR and PI schedules compare in terms of engendering response persistence. Lattal, Reilly, and Kohn (1996) compared PI and PR performance directly by yoking PR and PI schedules such that, in successive pairs of sessions, a PR schedule in one session was followed by a PI schedule with a matched (yoked) distribution of reinforcers to the PR schedule. In each of 4 birds in almost every pair of sessions, PI responding continued beyond the point at which the break point had been reached on the PR schedule.

A progressive reinforcement schedule can be likened to an applied setting in which the level of difficulty of material being taught increases independently of the subject's behavior, as when instructor-paced changes in material are used. That is, regardless of the individual student's performance, according to a specified set of rules, objectives, or plans (often based on some performance measure based on group averages or modes), the requirements for reinforcement (successful task completion) increase systematically. Such progressions occur in many educational systems, and it is not uncommon under these conditions for a student's performance to deteriorate as the difficulty level or work requirements increase. Conversely, the difficulty level may remain constant but the subject's performance improves over time. The traditional negatively accelerated learning curve expresses the latter relation.

Progressive-ratio schedules also have been used to assess response persistence (Stewart, 1975). For example, Hodos (1961) showed a correlation between food deprivation and the break point on PR schedules. Mace et al. (1988, 1990) have provided a valuable series of experiments and thoughtful conceptual analyses that have elucidated the issues surrounding the resistance of targeted behavior to change in applied settings. Performance on PR schedules could be a useful complement to other measures of resistance to change in applied settings.

Interactive Dynamic Schedules

Another dynamic schedule arrangement is one in which the requirements for upcoming reinforcers change directly as a function of the organism's current or past behavior. The last schedule described by Ferster and Skinner (1957) was an adjusting schedule in which the response requirement of a ratio schedule was increased or decreased after each reinforcer as a function of how long the animal paused before responding after each reinforcement. These adjusting procedures sometimes have been described as titration procedures (e.g., Lea, 1976; Weiss & Laties, 1959).

Hackenberg and Axtell (1993) used an interactive dynamic reinforcement schedule to study the control of human behavior by long- and short-term consequences. They provided human subjects with choices between a PI schedule and an FI schedule that also, in some conditions, reset the PI schedule to 0 s. We will limit our description to the first of their three experiments, because it contained the features critical to the present discussion and yielded findings that were replicated and elaborated on in the subsequent experiments. On each of a series of trials, subjects chose one of two schedules (the unchosen schedule at the start of each trial was rendered ineffective for the remainder of the trial), each correlated with a distinct stimulus on a computer monitor. All operant responses were made on a computer keyboard space bar and, according to the schedules described below, yielded points that could be exchanged for money. Each trial started immediately after the preceding reinforcer with the simultaneous presenta-
tion of a red and a blue stimulus on the monitor. The red stimulus was correlated with an FI schedule, and the blue stimulus was correlated with a PI schedule that increased by 5 s after each point was delivered on that schedule. In different phases of the experiment, the FI was 15, 30, or 60 s. Two conditions were compared in each phase. In a no-reset condition, "PI requirements were independent of FI choices, escalating with successive PI choices" (Hackenberg & Axtell, 1993, p. 448). In a reset condition, choices of the FI schedule allowed a point to be produced by the operant response and reset the PI schedule to 0 s.

All but 1 subject switched between the schedules more frequently during the reset than during the no-reset condition. During the reset conditions, the subjects switched more frequently between the schedules when the FI value was shorter. Hackenberg and Axtell (1993) then asked whether the switching patterns were better predicted by taking into account only the immediately preceding PI interval, or whether they were better predicted by taking into account some aggregate of previous PI intervals. This was done by comparing the data to predictions derived from an optimality model (Charnov, 1976) and from a model based on the cumulative effects of one or more delays (PIs) to reinforcement. Based on these comparisons, they concluded that choice in diminishing returns situations (i.e., PI schedules) is determined by an aggregate of reinforcers in time over multiple trials, and not only by the immediately preceding PI value.

The shaping of a response through the differential reinforcement of successive approximations is a simpler example of a dynamic interactive schedule. Shaping involves systematic and progressive changes in the requirements for reinforcement as the subject meets successive behavioral objectives in relation to the target behavior. As the behavior more and more closely approximates the target response, the conditions necessary for reinforcement are adjusted. Thus, the subject's behavior alters the scheduling of reinforcers. The optimal relation between behavior and changing requirements for reinforcement (i.e., the schedule) during shaping has been a matter of considerable speculation but limited experimentation.

There is general agreement that the contingencies should be changed "gradually," but the shaping of new behavior largely remains an art. One exception to this latter observation is the work of Eckerman, Hienz, Stern, and Kowlowitz (1980), who quantified the shaping process and suggested that large, rapidly changing requirements for reinforcement led to the fastest shaping of a key-peck response of a pigeon to a particular location on a 10-in. wide response strip. Similar conclusions about the dynamic changes in shaping have been reached by Platt and his colleagues (e.g., Alleman & Platt, 1973; Kuch & Platt, 1976) in the shaping of interresponse times. Galbicka (1994) recently discussed some of the applied implications of Platt's shaping procedures.

The control of human behavior by long- and short-term consequences is another example of an interactive dynamic schedule, in that what a person does in the present may either positively or negatively affect future reinforcers. In Hackenberg and Axtell's (1993) experiment, the value of the PI schedule, and the subsequent overall rate of reinforcement, depended on the choices of either of two schedules over trials. Applied behavior analysis historically has emphasized the immediate consequences of behavior, but further development of the type of analysis and theory derived from Hackenberg and Axtell's analysis of a dynamic interactive schedule illustrates how one might conceptualize and assess the effects of longer term consequences (e.g., reinforcement rate over
extended time periods) on both individual and group (systems) behavior.

In terms of individual behavior, one example, which has been of considerable controversy among applied behavior analysts, is the use of aversive (shock) procedures as a means of controlling self-injurious behavior (e.g., Iwata, 1988). Some have argued that the short-term benefits of reduced maladaptive behavior are not compensated for by the long-term negative effects of shock on the child. Although Hackenberg and Axtell's (1993) study of dynamic reinforcement schedules obviously does not bear directly on this issue, it offers a framework for placing the problem of competing outcomes of individual behavior or individual treatment programs on a continuum with other problems that involve the analysis of short- and long-term consequences.

The same framework described above can be useful in behavior-analytic applications involving aggregates of people. In a business, for example, changing to a new system for providing a service may produce short-term losses of revenue that result from the time that must be devoted to employee training, but these losses may be more than compensated for in the long run by having employees who are better trained to deliver the services offered by the business.

It is perhaps too much of a stretch to claim that experiments like that of Hackenberg and Axtell (1993) are precisely analogous to the complex examples described in the preceding two paragraphs; however, their experiment, and others related to it, are valuable in providing a bridge between applied behavior analysts and basic researchers that allows the use of similar descriptions and techniques to account for human behavior through reinforcement schedule analyses.

CONCLUDING COMMENTS

We conclude by extracting from the experiments reviewed above some more general observations about reinforcement schedules and their relation to issues in applied behavior analysis.

1. Often, the term contingency (or contingencies) is used in applied work or in talking about naturally occurring behavior to describe the interrelations among stimuli, responses, and reinforcers. The term schedule is used less often, perhaps because the latter term has come to connote greater precision than contingencies. Nonetheless, it is important to recognize that, in the noted uses, the two terms describe the same phenomena. Sometimes contingency is used to describe the fact that the reinforcer depends on a response, but one of us has suggested that dependency is preferable for that use (Lattal, 1995; Lattal & Poling, 1981). Terms are important, lest applied behavior analysts consider that they are not studying or using reinforcement schedules. They are, perhaps not with the precise specification of the laboratory, but the schedules are operative nonetheless. This latter observation reinforces our initial observation and the theme of this paper that basic research on reinforcement schedules, even the esoteric ones, is a rich vein for applied behavior analysts to mine.

2. Most basic reinforcement-schedule research has involved and continues to almost exclusively involve positive reinforcement. Positive reinforcement procedures have been meticulously honed over many years by many investigators. Such a history gives behavior analysts a powerful research tool, as the reviewed studies illustrate. At least in part because positive reinforcement procedures are so thoroughly analyzed, reliable, and of such proven value, there has been relatively little study of the aversive control of behavior, involving schedules of negative reinforcement and schedules of punishment. This is an unfortunate omission from the standpoint of applied behavior analysis because the controversies that surround the use
of aversive control in applied settings invite a better basic understanding of aversive control and its by-products (e.g., Iwata, 1987).

3. Reinforcement schedules have not generated the same enthusiasm in applied work as they have at some points in the history of the experimental analysis of behavior. Indeed, many complex schedules have been avoided or regarded as esoteric or even useless by a number of applied behavior analysts. Similarly, Zeiler (1984) reserved his strongest criticism of schedule research for basic researchers who are caught up in the minutiae of schedules. Although these criticisms have merit, each research problem described in the preceding sections suggests exciting adaptations of reinforcement schedules to interesting problems for applied behavior analysts: elucidating the role of behavioral history in current performance, gleaning insights into the relation between verbal and direct contingency control of behavior, and disentangling the dynamic processes that often operate in applied settings. The questions regarding schedules and schedule use need to be framed functionally and pragmatically rather than structurally: How can the schedule be used to address an important applied question? We earlier noted that at least part of the utility of schedules may result from how the schedule concept is used. Applied, or basic, situations and problems need not, and do not, always reduce down to single, simple schedules. They often can be conceptualized usefully, however, along the lines suggested in some of the experiments on dynamic reinforcement schedules.

4. Reinforcement schedule performance is essential to discussions about mechanisms of reinforcement. For example, in a VR schedule, do high response rates occur because of the differential reinforcement of short interresponse times or because higher response rates increase the overall reinforcement rate? The former position is characterized as molecular and the latter as molar, and each has strong proponents (e.g., Baum, 1973, 1989; Peele, Casey, & Silberberg, 1984). One purpose of Hackenberg and Axtell's (1993) experiment was to determine whether reinforcers operate over extended, molar time frames or whether reinforcement effects are more local.

The body of evidence and the arguments concerning the merits and limitations of each position are beyond the scope of this paper (see Williams, 1983, for an informative critique of the debate), but one general point about the debate is important in relation to applied work. The debate about the mechanisms of reinforcement may be separated from the practical issues of using reinforcement. In applied work, the appropriate level of analysis might be selected on practical grounds alone: One selects a level of analysis that yields the behavioral control necessary to achieve the behavior change. If a molecular approach leads to behavioral control, it should be used. If a molar approach does so, then use it. This is not to undermine the importance and value of understanding reinforcement mechanisms, but only to iterate the idea that many things work even when the precise mechanism of their operation cannot be isolated. For example, Darwin (1859/1964) articulated the theory of natural selection in the absence of any evidence of the genetic mechanisms that make such selection possible.

5. Discussions of both reinforcement schedule performance in the basic literature and reinforcement theory increasingly utilize quantitative description and analysis. Such analyses and their advantages have been summarized succinctly by Nevin (1984) and Shull (1991). Applied behavior analysis has not yet been strongly affected by these developments in the experimental analysis of behavior. Research on the matching law (e.g., McDowell, 1988) and on behavioral momentum (Nevin, Mandell, & Atak,
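The generalized matching law underlying this quantitative work can be written log(B1/B2) = a log(R1/R2) + log b, where B1/B2 is the response ratio, R1/R2 the obtained reinforcement ratio, a a sensitivity parameter, and b a bias parameter. A minimal sketch follows; the parameter values are illustrative only, not estimates from any study cited here:

```python
import math

def predicted_response_ratio(r1, r2, sensitivity=1.0, bias=1.0):
    """Generalized matching law: log(B1/B2) = a*log(R1/R2) + log(b).

    r1, r2 are obtained reinforcement rates on the two alternatives;
    sensitivity (a) and bias (b) are free parameters ordinarily fitted
    to data (the defaults here are illustrative, not fitted values).
    Returns the predicted ratio of responses, B1/B2.
    """
    log_ratio = sensitivity * math.log10(r1 / r2) + math.log10(bias)
    return 10 ** log_ratio

# Strict matching (a = 1, b = 1): a 4:1 reinforcement ratio predicts
# a 4:1 response ratio; undermatching (a < 1) predicts a smaller ratio.
print(round(predicted_response_ratio(40, 10), 2))                   # 4.0
print(round(predicted_response_ratio(40, 10, sensitivity=0.8), 2))  # 3.03
```

Fitting a and b to obtained choice data, rather than assuming them, is the quantitative step that this kind of analysis adds.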
1983) is grounded in quantitative analysis, but both have been imported into applied behavior analysis primarily at a conceptual level, largely in the absence of the quantitative framework in which they developed.

6. The study of human behavior in the laboratory has largely, but, of course, not exclusively, been a matter of studying schedule control of behavior. Issues of prior histories, both in and out of the immediate experimental situation, and of the role of verbal behavior are common themes in this research. The question of how to study and account for the effects of self-generated rules on human schedule performance is important. It does, however, raise some knotty issues, for if this source of control is acknowledged, we must rethink some fundamental assumptions behavior analysts have made about the ways in which behavior is controlled.

Questions of similarities and differences between human and animal behavior abound. Some attribute much of the difference to procedural differences between human and animal research procedures (Perone et al., 1988), but others view the differences as being more fundamental. The role of rules and instructions in the schedule control of behavior is a critical issue for both basic researchers and applied behavior analysts. Similarly, the more general question of the relation between the controlling variables of animal and human operant behavior cuts to the quick of the relevance of basic research in the experimental analysis of behavior to applied problems.

There was a time in our early history when the study of reinforcement schedules could be considered a distinct area of research, and their elaboration and development was of general concern to many basic researchers. Although research designed to elucidate the effects of schedules of reinforcement still appears regularly in the Journal of the Experimental Analysis of Behavior, schedule research is more often conceptualized around other problems than those posed by the reinforcement schedule per se. For example, Hackenberg and Axtell (1993) used a complex schedule to analyze the effects of diminishing returns on behavior, and several investigators have used different schedules as a way of studying problems related to behavioral history. Despite the changed role of schedules in basic research, reinforcement schedules are woven deeply into the fabric of the experimental analysis of behavior. In applied behavior analysis, reinforcement schedules have not sustained the degree of interest that they have among basic researchers. Nonetheless, we hope that our discussion will bring applied behavior analysts to realize that "a student of any problem in psychology ... ignores the consequences of the precise scheduling arrangements of his experiments [or, we would add, applications] at his peril" (Dews, 1963, p. 148).

REFERENCES

Alleman, H. D., & Platt, J. R. (1973). Differential reinforcement of interresponse times with controlled probability of response per reinforcement. Learning and Motivation, 4, 40-73.
Baer, D. M. (1977). Perhaps it would be better not to know everything. Journal of Applied Behavior Analysis, 10, 167-172.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97.
Baron, A., & Leinenweber, A. (1995). Effects of a variable-ratio conditioning history on sensitivity to fixed-interval contingencies in rats. Journal of the Experimental Analysis of Behavior, 63, 97-110.
Baum, W. M. (1973). The correlation-based law of effect. Journal of the Experimental Analysis of Behavior, 20, 137-153.
Baum, W. M. (1989). Quantitative prediction and molar description of the environment. The Behavior Analyst, 12, 167-176.
Bradshaw, C. M., Szabadi, E., & Bevan, P. (1976). Behavior of humans in variable-interval schedules of reinforcement. Journal of the Experimental Analysis of Behavior, 26, 135-141.
Bradshaw, C. M., Szabadi, E., & Bevan, P. (1977). Effect of punishment on human variable-interval performance. Journal of the Experimental Analysis of Behavior, 27, 275-279.
Bradshaw, C. M., Szabadi, E., & Bevan, P. (1979). The effect of punishment on free-operant choice behavior in humans. Journal of the Experimental Analysis of Behavior, 31, 71-81.
Catania, A. C. (1992). Learning (3rd ed.). New York: Prentice Hall.
Catania, A. C., Matthews, B. A., & Shimoff, E. (1982). Instructed versus shaped human verbal behavior: Interactions with nonverbal responding. Journal of the Experimental Analysis of Behavior, 38, 233-248.
Charnov, E. L. (1976). Optimal foraging: The marginal value theorem. Theoretical Population Biology, 9, 129-136.
Darwin, C. (1964). On the origin of species. Cambridge, MA: Harvard University Press. (Original work published 1859)
Deacon, J. R., & Konarski, E. A., Jr. (1987). Correspondence training: An example of rule-governed behavior? Journal of Applied Behavior Analysis, 20, 391-400.
Dews, P. B. (1963). Behavioral effects of drugs. In S. M. Farber & R. H. L. Wilson (Eds.), Conflict and creativity (pp. 138-153). New York: McGraw Hill.
Dougherty, D. M., Cherek, D. R., & Roache, J. D. (1994). The effects of smoked marijuana on progressive-interval schedule performance in humans. Journal of the Experimental Analysis of Behavior, 62, 73-87.
Eckerman, D. A., Hienz, R. D., Stern, S., & Kowlowitz, V. (1980). Shaping the location of a pigeon's peck: Effect of rate and size of shaping steps. Journal of the Experimental Analysis of Behavior, 33, 299-310.
Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.
Freeman, T. J., & Lattal, K. A. (1992). Stimulus control of behavioral history. Journal of the Experimental Analysis of Behavior, 57, 5-15.
Galbicka, G. (1994). Shaping in the 21st century: Moving percentile schedules into applied settings. Journal of Applied Behavior Analysis, 27, 739-760.
Galizio, M. (1979). Contingency-shaped and rule-governed behavior: Instructional control of human loss avoidance. Journal of the Experimental Analysis of Behavior, 31, 53-70.
Hackenberg, T. D., & Axtell, S. A. M. (1993). Humans' choices in situations of time-based diminishing returns. Journal of the Experimental Analysis of Behavior, 59, 445-470.
Hackenberg, T. D., & Joker, V. R. (1994). Instructional versus schedule control of humans' choices in situations of diminishing returns. Journal of the Experimental Analysis of Behavior, 62, 367-383.
Hanna, E. S., Blackman, D. E., & Todorov, J. C. (1992). Stimulus effects on concurrent performance in transition. Journal of the Experimental Analysis of Behavior, 58, 335-347.
Harzem, P. (1969). Temporal discrimination. In R. M. Gilbert & N. S. Sutherland (Eds.), Animal discrimination learning (pp. 299-334). New York: Academic Press.
Hayes, S. C., Brownstein, A. J., Zettle, R. D., Rosenfarb, I., & Korn, Z. (1986). Rule-governed behavior and sensitivity to changing consequences of responding. Journal of the Experimental Analysis of Behavior, 45, 237-256.
Hodos, W. (1961). Progressive ratio as a measure of reward strength. Science, 134, 943-944.
Horne, P. J., & Lowe, C. F. (1993). Determinants of human performance on concurrent schedules. Journal of the Experimental Analysis of Behavior, 59, 29-60.
Hyten, C., & Reilly, M. P. (1992). The renaissance of the experimental analysis of human behavior. The Behavior Analyst, 15, 109-114.
Israel, A. C. (1978). Some thoughts on correspondence between saying and doing. Journal of Applied Behavior Analysis, 11, 271-276.
Iwata, B. A. (1987). Negative reinforcement in applied behavior analysis: An emerging technology. Journal of Applied Behavior Analysis, 20, 361-378.
Iwata, B. A. (1988). The development and adoption of controversial default technologies. The Behavior Analyst, 11, 149-157.
Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209. (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3-20, 1982)
Jacobs, E. A., & Hackenberg, T. D. (1996). Humans' choices in situations of time-based diminishing returns: Effects of fixed-interval duration and progressive-interval step size. Journal of the Experimental Analysis of Behavior, 65, 5-19.
Keesey, R. E., & Goldstein, M. D. (1968). Use of progressive fixed ratio procedures in the assessment of ICS reinforcement. Journal of the Experimental Analysis of Behavior, 11, 293-301.
Kuch, D. O., & Platt, J. R. (1976). Reinforcement rate and interresponse time differentiation. Journal of the Experimental Analysis of Behavior, 26, 471-486.
Lattal, K. A. (1991). Scheduling positive reinforcers. In I. H. Iversen & K. A. Lattal (Eds.), Techniques in the behavioral and neural sciences: Vol. 6. Experimental analysis of behavior (Part 1, pp. 87-134). Amsterdam: Elsevier.
Lattal, K. A. (1995). Contingency and behavior analysis. The Behavior Analyst, 18, 209-224.
REINFORCEMENT SCHEDULES 229
Lattal, K. A., & Poling, A. D. (1981). Describing re- Neef, N. A., Shafer, M. S., Egel, A. L., Cataldo, M.
sponse-event relations: Babel revisited. The Behav- F., & Parrish, J. M. (1983). The class specific
ior Analyst, 4, 143-152. effects of compliance training with "do" and
Lattal, K. A., Reilly, M. P., & Kohn, J. P. (1996). "don't" requests: Analogue analysis and classroom
Separating temporal and response requirements for application. Journal of Applied Behavior Analysis,
reinforcement in progressive schedules. Manuscript 16, 81-99.
submitted for publication. Nevin, J. A. (1984). Quantitative analysis. Journal of
Lea, S. E. G. (1976). Titration of schedule parameters the Experimental Analysis of Behavior, 42, 421-
by pigeons. Journal of the Experimental Analysis of 434.
Behavior, 25, 43-54. Nevin, J. A., Mandell, C., & Atak, J. R. (1983). The
LeFrancois, J. R., & Metzger, B. (1993). Low-re- analysis of behavioral momentum. Journal of the
sponse-rate conditioning history and fixed-interval Experimental Analysis of Behavior, 39, 49-59.
responding in rats. Journal of the Experimental Parrish, J. M., Cataldo, M. F, Kolko, D. J., Neef, N.
Analysis ofBehavior, 59, 543-549. A., & Egel, A. L. (1986). Experimental analysis
Lerman, D. C., Iwata, B. A., Smith, R. G., Zarcone, of response covariation among compliant and in-
J. R., & Vollmer, T. R. (1994). Transfer of be- appropriate behaviors. Journal ofApplied Behavior
havioral function as a contributing factor in treat- Analysis, 19, 241-254.
ment relapse. Journal ofApplied Behavior Analysis, Peele, D. B., Casey, J., & Silberberg, A. (1984). Pri-
27, 357-370. macy of interresponse-time reinforcement in ac-
Lovaas, 0. I. (1993). The development of a treatment counting for rate differences under variable-ratio
research project for developmentally disabled and and variable-interval schedules. Journal of Experi-
autistic children. Journal ofApplied BehaviorAnal- mental Psychology: Animal Behavior Processes, I0,
ysis, 26, 617-630. 149-167.
Mace, F C., Hock, M. L., Lalli, J. S., West, B. J., Perone, M., Galizio, M., & Baron, A. (1988). The
Belfiore, P., Pinter, E., & Brown, D. K. (1988). relevance of animal-based principles in the labo-
Behavioral momentum in the treatment of non- ratory study of human operant conditioning. In
compliance. Journal of Applied Behavior Analysis, G. Davey & C. Cullen (Eds.), Human operant
21, 123-141. conditioning and behavior modification (pp. 59-
Mace, F C., Lalli, J. S., Shea, M. C., Lalli, E. P., West, 85). London: Wiley & Sons.
B. J., Roberts, M., & Nevin, J. A. (1990). The Rosenfarb, I. S., Newland, M. C., Brannon, S. E., &
momentum of human behavior in a natural set- Howey, D. S. (1992). Effects of self-generated
ting. Journal of the Experimental Analysis ofBehav- rules on the development of schedule-controlled
ior, 54, 163-172. behavior. Journal of the Experimental Analysis of
Mace, F C., Neef, N. A., Shade, D., & Mauro, B. C. Behavior, 58, 107-121.
(1994). Limited matching on concurrent-sched- Shull, R. L. (1991). Mathematical description of op-
ule reinforcement of academic behavior. Journal of erant behavior: An introduction. In I. H. Iversen
Applied Behavior Analysis, 27, 585-596. & K. A. Lattal (Eds.), Techniques in the behavioral
McDowell, J. J. (1988). Matching theory in natural and neural sciences: Vol. 6 Experimental analysis of
human environments. The Behavior Analyst, 11, behavior (Part 2, pp. 243-282). Amsterdam: El-
95-109. sevier.
Meichenbaum, D., & Goodman, J. (1971). Training Sidman, M. (1960). Tactics ofscientific research. New
impulsive children to talk to themselves: A means York: Basic Books.
of developing self-control. Journal of Abnormal Stewart, W. J. (1975). Progressive reinforcement
Psychology, 77, 115-126. schedules: A review and evaluation. Australian
Morse, W H., & Kelleher, R. T. (1977). Determi- Journal of Psychology, 27, 9-22.
nants of reinforcement and punishment. In W. K. Thompson, D. M. (1972). Enhancement of progres-
Honig & J. E. R. Staddon (Eds.), Handbook of sive ratio performance by chlordiazepoxide and
operant behavior (pp. 174-200). New York: Pren- phenobarbital. Journal of the ExperimentalAnalysis
tice Hall. of Behavior, 18, 287-293.
Neef, N. A., Mace, F. C., & Shade, D. (1993). Im- Wanchisen, B. A., Tatham, T. A., & Mooney, S. E.
pulsivity in students with serious emotional dis- (1989). Variable-ratio conditioning history pro-
turbance: The interactive effects of reinforcer rate, duces high- and low-rate fixed-interval perfor-
delay, and quality. Journal of Applied Behavior mance in rats. Journal ofthe Experimental Analysis
Analysis, 26, 37-52. of Behavior, 52, 167-179.
Neef, N. A., Mace, F. C., Shea, M., & Shade, D. Weiss, B., & Laties, V. (1959). Titration behavior on
(1992). Effects of reinforcer rate and reinforcer various fractional escape programs. Journal of the
quality on time allocation: Application of match- Experimental Analysis of Behavior, 2, 227-248.
ing theory to educational settings. Journal ofAp- Williams, B. A. (1983). Revising the principle of re-
plied Behavior Analysis, 25, 691-699. inforcement. Behaviorism, 11, 63-85.
230 KENNON A. LA TTAL and NANCYA. NEEF
Wulfert, E., Greenway, D. E., Farkas, P., Hayes, S. C., Zeiler, M. D. (1984). Reinforcement schedules: The
& Dougher, M. J. (1994). Correlation between sleeping giant. Journal of the Experimental Analysis
self-reported rigidity and rule-governed insensitiv- ofBehavior, 42, 485-493.
ity to operant contingencies. Journal ofApplied Be-
haviorAnalysis, 27, 659-671.
Received February 6, 1996
Initial editorial decision February 8, 1996
Final acceptance February 16, 1996
Action Editor, David P. Wacker
