JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR, 1995, 64, 313-329, NUMBER 3 (NOVEMBER)

ASSESSING PREFERENCE FOR REINFORCERS USING DEMAND CURVES, WORK-RATE FUNCTIONS, AND EXPANSION PATHS

R. DON TUSTIN

A behavioral economic model that explains the choice and allocation of work rate is used to predict performance patterns in three contexts: with single schedules, with concurrent schedules when total reinforcement is low, and with concurrent schedules when reinforcement increases. Performance in the three contexts is predicted to change in orderly ways depending on how the subject evaluates the reinforcers earned. Quadrant diagrams are used to generate reinforcer demand functions, work-rate supply functions, and reinforcement-rate expansion paths. Preference between reinforcers is viewed as being a variable, with preference reversing in some situations.

Key words: preference, constrained optimization, reinforcer evaluation function, work-rate evaluation, isovalue function, demand curve, work-rate supply, expansion path, revealed preference

The contribution of Peter Morgan in developing the ideas presented in this paper is acknowledged with gratitude. Any errors remain the responsibility of the author. Correspondence can be addressed to Don Tustin, P.O. Box 427, Glenelg, Adelaide 5045, Australia.

Behavioral economic analyses have been used to understand how schedules of reinforcement influence operant performance. One advance involves the concept of a reinforcer demand curve, which relates rate of obtained reinforcement to schedule requirement or price when a single schedule is available (Hursh, 1984). This paper extends previous analyses by predicting performance patterns when different reinforcers are available in three schedule arrangements: single schedules, concurrent schedules when total reinforcement is low, and concurrent schedules as reinforcement increases. Although previous analyses have explained performance in some of these situations, no previous analysis has predicted performance under all three schedule arrangements. Testable predictions are made using isovalue functions, which illustrate how a subject's evaluations of reinforcers can be made observable. Patterns of performance are predicted to vary in orderly ways according to the subject's evaluation of a reinforcer. The principle of revealed preference is used to deduce a subject's evaluations of reinforcers from consistent patterns of performance.

The model assumes that schedule performance involves an exchange, as responding or work is exchanged for reinforcers. It is assumed that subjects evaluate the events that are exchanged by assigning positive values to some events and negative values to other events. It is further assumed that subjects are able to rank the relative values of events so as to form preferences that are stable when conditions are invariant. Preference is assumed to be a variable that is influenced by other variables, rather than being a result of static properties of reinforcers. Because preference is a variable, suitable measures of preference are required. Three measures of preference are proposed for different schedule arrangements.

BEHAVIORAL ECONOMIC APPROACHES

Behavioral economic approaches explain performance using constrained optimization models that assume that subjects attempt to maximize the value of certain variables (called choice variables) while acting under restrictions imposed by other variables (called constraints). Choice variables are controlled by the subject, and may be said to reflect the subject's motivation. Constraints are limitations that either are introduced by the experimenter or are inherent to a subject.

Behavioral economic models vary according to the choice variables and constraints they assume. Rachlin, Kagel, and Battalio (1980) and Rachlin, Battalio, Kagel, and Green (1981) focused on how subjects allocate time between schedules, producing time-allocation models. On the other hand, Staddon and Motheral (1978), Battalio, Green, and Kagel (1981), and Tustin and Morgan (1985) offered models that explain the allocation of work between schedules; thus, work rate was the choice variable.

Each of these models assumes that the
schedule of reinforcement is a constraint, because the schedule determines the relationship between the subject's performance and the rate of reinforcement that is delivered. In work-rate models, the subject emits work and receives reinforcers in return. The schedule sets the rate of exchange between work and reinforcement, which may be expressed in either of two ways. A schedule can be seen as defining a wage rate, in that it sets the rate of reinforcement (R) delivered for each unit of work rate (W) (wage = R/W). Alternatively, a schedule defines the price of a reinforcer, in that it sets the number of responses that must be emitted to gain a unit of the reinforcer (price = W/R). Price is the inverse of wage.

The present paper extends one work-rate model that assumes that a subject's total work rate is itself a variable (Tustin & Morgan, 1985) by examining choices involving qualitatively different reinforcers. Other models have assumed either that total work rate is constant (Rachlin & Burkhard, 1978; Staddon & Motheral, 1978) or that subjects exchange reinforcers for leisure (Battalio, Green, & Kagel, 1981). Only variable work-rate models predict both the choice of a total work rate and the allocation of work between alternative schedules (Tustin & Morgan, 1985). Work rate (W) is defined as the product of rate of instrumental responding (I) and work (w) required to emit each response (work rate = Iw), where work is defined in turn as the product of force (F) and distance (d) (w = Fd).

The next section illustrates how the constrained optimization approach is applied in a work-rate model.

A Work-Rate Model

The choice variables assumed in the work-rate model are work rate and reinforcement rate. It is assumed that a subject will maximize the overall value of those combinations of work and reinforcement that are available given the schedule constraints. This paper discusses applications when three simplifying assumptions are made: (a) All work is negatively valued, (b) consequences are reinforcers that are positively valued, and (c) schedules require an exchange of units of work in return for units of reinforcers. Although the model is still applicable when these assumptions are varied, space prevents discussion of these variations.

The relation set by a schedule between rate of work and rate of reinforcement is plotted in a graph whose axes are rate of reinforcement (R) and rate of work (W). Schedules are objective variables that can be drawn in the graph. Fixed-ratio (FR) schedules can be represented accurately by straight lines. However, functions for variable-interval (VI) schedules are considerably more complex (Morgan & Tustin, 1992).

ISOVALUE FUNCTIONS

Before a subject's choice variables can be represented graphically, it is necessary to transform the subject's evaluations of different combinations of work and reinforcement from subjective evaluations into objective variables. This transformation is achieved using the concept of an isovalue function. An isovalue function is a set of combinations of choice variables that are given the same overall evaluation by a subject (Green, Kagel, & Battalio, 1987; Thurstone, 1931). This paper uses two different isovalue functions. The first set of isovalue functions involves combinations of work and reinforcement that are given the same value by a subject; these are called work-reinforcement or W-R isovalue functions. The second set of isovalue functions involves combinations of two reinforcers that are given the same value by the subject; these are called reinforcer-reinforcer or R-R isovalue functions.

The question arises of how one may identify the shape of an isovalue function. The shape of a subject's isovalue function for combinations of work and reinforcement can be derived from the shapes of hypothetical functions that describe the subject's evaluations of increasing rates of reinforcement and increasing rates of work, respectively. This derivation has not previously been reported.

Reinforcer Evaluation Functions

A function that describes how a subject evaluates increasing rates of a reinforcer is called a reinforcer evaluation function. Figure 1 illustrates several possible shapes of reinforcer evaluation functions. The upper panel of Figure 1 shows functions for Reinforcers 1 and 2.
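Before the evaluation functions are examined graphically, the objective quantities introduced above (wage = R/W, price = W/R, and work rate = Iw with w = Fd) can be pinned down in a short numerical sketch. All values below are hypothetical illustrations, not data from this article:

```python
# Sketch of the schedule-as-exchange definitions from the text.
# Numbers are invented for illustration only.

def work_rate(I, F, d):
    """Work rate W = I * w, where work per response w = F * d."""
    return I * (F * d)

def wage(R, W):
    """Wage set by a schedule: reinforcement delivered per unit of work rate."""
    return R / W

def price(R, W):
    """Price of a reinforcer: work emitted per unit of reinforcement."""
    return W / R

W = work_rate(I=60.0, F=0.2, d=0.5)   # e.g., 60 responses/min at 0.2 units force, 0.5 units travel
R = 3.0                                # e.g., 3 reinforcers/min

# Price is the inverse of wage, as stated in the text.
assert abs(price(R, W) - 1.0 / wage(R, W)) < 1e-12
```

The sketch only restates the definitional identities; the substantive content of the model lies in how the subject evaluates these observable quantities.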

Fig. 1. Reinforcer evaluation functions are shown for five reinforcers. Objective rate of reinforcement (R) is plotted against the subject's hypothesized evaluations of these rates of reinforcement (VR).

The function for Reinforcer 1 is higher than the function for Reinforcer 2, showing that units of Reinforcer 1 are valued more highly than units of Reinforcer 2. The two functions have similar shapes, in that they increase rapidly and then level off when moving to the right, giving the function a gentle curvature. A gently curved shape is interpreted as showing that the first few units of each reinforcer are highly valued, with increasing units being given a positive value, but a positive value that is less than the value of the earlier units. In other words, a negatively accelerated curve reflects a pattern of diminishing marginal evaluation of increasing units of the reinforcer.

Two further reinforcer evaluation functions are shown in the middle panel of Figure 1 for Reinforcers 3 and 4. The functions for Reinforcers 3 and 4 increase very steeply when the first few units of the reinforcers are provided, showing that the initial units of these reinforcers are very highly valued. However, turning points are reached, at which point the functions level off and become almost flat. A flat section in a reinforcer evaluation function shows that additional units of the reinforcer are given an almost zero evaluation by the subject.

The lower panel of Figure 1 shows yet another function, this time for Reinforcer 5. The function for Reinforcer 5 increases steadily in an almost linear manner, with each additional unit of the reinforcer being given a similar positive value to previously obtained units. In other words, Reinforcer 5 shows very little effect of diminishing marginal evaluation, because the evaluation of units of the reinforcer is almost independent of the rate at which the reinforcer is obtained.

Figure 1 illustrates shapes for reinforcer evaluation functions that differ in three ways: (a) the curvature or acceleration of evaluation functions, (b) the location of any turning point, and (c) asymptotic levels of functions. Reinforcers are said to be qualitatively different if their evaluation functions under the same levels of deprivation differ on any of these indices.

Characteristics of a reinforcer evaluation function can be interpreted. Because the curvature of a reinforcer evaluation function measures the marginal increase in value to the subject when a further unit of a reinforcer is obtained, the function assesses whether evaluations of marginal units of a reinforcer are affected by the existing level of the reinforcer. If all units of a reinforcer are given a similar evaluation regardless of the current rate of reinforcement, then the evaluation function for the reinforcer will be linear. On the other hand, if a subject's evaluation of an additional unit of reinforcement depends on the rate at which the reinforcer is delivered, then the evaluation function will be curved, showing an effect of diminishing evaluations of increasing rates of reinforcement.

A straight evaluation function means that there is no satiation effect as the rate of a reinforcer increases.
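The three shape features just listed (curvature, turning point, and asymptote) can be made concrete with hypothetical evaluation functions. The functional forms and parameters below are illustrative assumptions, not equations from the article:

```python
import math

# Hypothetical reinforcer evaluation functions V_R(R), sketching the shapes
# discussed in the text. All parameters are invented for illustration.

def v_diminishing(R, a=10.0, k=0.5):
    """Negatively accelerated: early units highly valued, later units less so."""
    return a * (1.0 - math.exp(-k * R))

def v_turning(R, need=2.0, a=8.0):
    """Steep rise up to a needed rate, then an almost-flat plateau (cf. Reinforcers 3 and 4)."""
    return a * min(R, need) / need

def v_linear(R, c=1.0):
    """Straight function: every unit valued alike, no satiation effect (cf. Reinforcer 5)."""
    return c * R

def marginal(v, R, dR=1e-6):
    """Numerical marginal evaluation dV/dR at rate R."""
    return (v(R + dR) - v(R)) / dR

# Diminishing marginal evaluation: marginal value falls as the rate rises.
assert marginal(v_diminishing, 1.0) > marginal(v_diminishing, 5.0)
# Past the turning point, additional units are valued at almost zero.
assert marginal(v_turning, 3.0) < 1e-9
# A linear function shows no such effect: marginal value is constant.
assert abs(marginal(v_linear, 1.0) - marginal(v_linear, 5.0)) < 1e-6
```

Comparing the marginal evaluations at low and high rates is the numerical analogue of inspecting the curvature of the functions in Figure 1.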

On the other hand, a turning point in a reinforcer evaluation function indicates that a marked change in evaluation of units of the reinforcer occurs at this point. The steep section of the function shows that lower rates of the reinforcer are very highly valued by the subject. A turning point may occur because a certain rate of the reinforcer is required to satisfy a "need" for the reinforcer. The needed rate of reinforcement is given by the location of the turning point. Once the needed rate is achieved, higher rates of the reinforcer are given a low evaluation, producing the plateau.

Asymptotic levels of functions show the evaluations given to very high rates of a reinforcer. Reinforcers may reach an asymptote at different levels. In particular, a reinforcer with a sharply turning evaluation function (showing that low rates of the reinforcer are highly valued) may reach an asymptote at a lower level than another reinforcer that has no satiation effect.

The shape of a reinforcer evaluation function may be affected by motivational variables. Any intervention that increases the value of a reinforcer to a subject will raise the reinforcer evaluation function. For example, if a subject is deprived of water, then the evaluation function for water will be higher and will have both a higher asymptote and a sharper turning point than if the subject were sated with water.

Work-Rate Evaluation Functions

Figure 2 illustrates one possible shape of a work-rate evaluation function (W1) for a given response. The work-rate evaluation function falls below the origin, showing that all units of work are negatively valued, as is implied by the law of least effort (Solomon, 1948). The function has a gentle downward slope, showing that higher work rates are given progressively more negative evaluations. Although only one work-rate function is used to illustrate the model, the model applies equally if differently shaped work-rate functions are assumed.

Fig. 2. A subject's work-rate evaluation function (W1) for Response 1. Work rate (W) is plotted against the subject's hypothesized evaluations of these work rates (Vw).

W-R Isovalue Functions

Figure 3 shows how reinforcer evaluation functions and work-rate evaluation functions are combined to produce a work-reinforcer (W-R) isovalue function. Figure 3 uses a quadrant diagram to combine a subject's evaluation functions for work and reinforcement, a technique that has not previously been reported.

Figures 1 and 2 showed the subject's evaluations of choice variables on one axis. The task now is to remove these unobservable variables from the analysis. Panel A of Figure 3 reproduces a reinforcer evaluation function, and Panel B reproduces a work-rate evaluation function. Panel C uses a four-quadrant diagram to show how these two evaluation functions can be transformed so as to yield an isovalue function. The axes in Figure 3C are the reinforcement rate (R), work rate (W), and the subject's evaluations of reinforcement rates (VR) and work rates (Vw).

The aim of the exercise is to generate a function in R/W space that reflects the shapes of the functions in R/VR space and W/Vw space. One begins in the upper-right segment of Panel C, in VR/Vw space, which represents the subject's evaluations of the two choice variables. A straight line (V = V') is drawn in VR/Vw space, passing through the origin. The straight line indicates that all points on the line are equally valued, so this is an isovalue line in VR/Vw space. Because the isovalue line passes through the origin, it is valued at V = 0 units. The task is to find combinations of work rate and reinforcement rate that the subject also values at 0 units. The task is accomplished by moving both horizontally and vertically away from any point on the isovalue line in VR/Vw space into the two adjacent quadrants until one intersects the evaluation function, and then moving into the lower left quadrant representing R/W space until the lines intersect.

Two example points are illustrated, labeled 1 and 2.
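The quadrant-diagram procedure can also be sketched numerically. Assuming, as the straight isovalue line through the origin in VR/Vw space implies, that overall value is the sum V = VR + VW, each work rate W is paired with the reinforcement rate R whose positive evaluation exactly offsets the negative evaluation of that work, tracing the V = 0 contour in R/W space. The functional forms and parameters below are hypothetical:

```python
import math

A, K = 10.0, 0.5    # parameters of the hypothetical reinforcer evaluation function
C = 1.5             # slope of the hypothetical (negative) work-rate evaluation

def v_r(R):
    """Hypothetical reinforcer evaluation: V_R = A(1 - exp(-K R))."""
    return A * (1.0 - math.exp(-K * R))

def v_w(W):
    """Hypothetical work-rate evaluation: all work is negatively valued."""
    return -C * W

def r_on_contour(W, v_target=0.0):
    """Invert v_r to find the R whose positive value offsets v_w(W), so that
    v_r(R) + v_w(W) == v_target (defined while the needed value stays below A)."""
    needed = v_target - v_w(W)
    return -math.log(1.0 - needed / A) / K

# Trace the V = 0 isovalue function in R/W space.
ws = [0.0, 1.0, 2.0, 3.0, 4.0]
rs = [r_on_contour(W) for W in ws]

# Positive slope: equal value requires more reinforcement as work increases.
assert all(r2 > r1 for r1, r2 in zip(rs, rs[1:]))
# Positive acceleration: each extra unit of work demands a larger increment in R.
diffs = [r2 - r1 for r1, r2 in zip(rs, rs[1:])]
assert all(d2 > d1 for d1, d2 in zip(diffs, diffs[1:]))
```

Under these illustrative shapes the derived contour has the positive slope and positive acceleration that the text attributes to W-R isovalue functions.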

Fig. 3. A procedure for transforming the shape of a subject's isovalue curve from valuation space (VR/Vw) to reinforcement-rate/work-rate (R/W) space is illustrated. Panel A shows objective rates of reinforcement (R) plotted against a subject's evaluations of these rates of reinforcement (VR). Panel B shows objective rates of work (W) plotted against the subject's evaluations of these work rates (Vw). Panel C is a four-quadrant diagram in which the axes are VR, Vw, R, and W. The dotted lines show how points are transformed from an isovalue curve V = V' in VR/Vw space to an isovalue function in R/W space. Panel D reproduces the isovalue curve in R/W space.
If the process is repeated several times, one establishes a set of work rates and reinforcement rates that is given the same value of 0 units, which reflects the subject's evaluation functions for the particular response and reinforcer. In Panel C, points are transposed from the upper right quadrant into the lower left quadrant, where the axes are reinforcement rate (R) and work rate (W). Because both reinforcement rate and work rate are observable quantities, the exercise has identified the shape of a subject's isovalue function in terms of observable variables.

Panel D redraws the isovalue function that was derived in Panel C in a more convenient way, with reinforcement rate on the vertical axis and work rate on the horizontal axis. The isovalue function V = V' has a positive slope with a positive acceleration as W increases.

The shape of a W-R isovalue function is interpreted as follows. A positive slope in a W-R isovalue function shows that the subject maintains a constant overall value by exchanging units of work for units of reinforcement. This is compatible with the view that reinforcers are positively valued and work is negatively valued. The slope of a W-R isovalue function at any point gives the rate of exchange of work for reinforcers that maintains a constant value for the subject. Curvature in a W-R isovalue function shows that the equal-value rate of exchange of work for reinforcement is not a constant, because the equal-value exchange rate depends on current levels of work and reinforcement. The positive acceleration shows that, as work rate increases, higher rates of reinforcement are required to compensate for each additional unit increase in work rate.

Figure 3 derived the shape of a subject's W-R isovalue function from the shapes of the subject's evaluation functions for a reinforcer and a response. Rachlin and Burkhard (1978) described an alternative procedure for deriving the shape of W-R isovalue functions. Their analysis required two restrictive assumptions that are not made in the present paper: (a) that all responses have a positive value, and (b) that total output remains constant so that any increase in one response necessarily reduces another response. In the present model, the use of a quadrant diagram enabled us to identify the shapes of isovalue functions when one event was positively valued and the other event was negatively valued. No restriction was assumed about total response rate.

The shapes of W-R isovalue curves differ between the analysis of Rachlin et al. (1981, Figure 7) and the present analysis, because isovalue functions have a negative slope in Rachlin's analysis and a positive slope in the present analysis. The different slopes reflect a different conceptualization of the choice problem. Rachlin et al. produced a negative slope because they assumed that subjects exchange leisure for reinforcers, where leisure is the inverse of work.

Fig. 4. Panel A is a four-quadrant diagram that transforms three isovalue curves (V1 > V2 > V3) from one space to another. The axes in Panel A are rates of reinforcement (R), rates of work (W), the subject's evaluations of reinforcement rates (VR), and the subject's evaluations of work rates (Vw). The three isovalue curves are reproduced in Panel B, in which the axes are R and W.

A Set of Isovalue Functions

The process illustrated in Figure 3 identified the shape of a subject's isovalue function for a reinforcer and a response.
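The construction extends naturally to a family of ranked curves: under the same hypothetical assumptions as before (additive value, a negatively accelerated reinforcer evaluation, a linear negative work evaluation), varying the target value V' generates contours that are ordered in R/W space:

```python
import math

A, K, C = 10.0, 0.5, 1.5   # hypothetical parameters, invented for illustration

def r_at(W, v_prime):
    """R on the isovalue curve V = v_prime at work rate W, assuming additive
    value V = A(1 - exp(-K R)) - C W (both evaluation functions are illustrative)."""
    needed = v_prime + C * W        # positive evaluation the reinforcer must supply
    assert needed < A, "curve undefined where the needed value exceeds the asymptote"
    return -math.log(1.0 - needed / A) / K

# Three ranked curves, V1 > V2 > V3, compared at the same work rate.
W = 2.0
r3, r2, r1 = r_at(W, 0.0), r_at(W, 2.0), r_at(W, 4.0)

# The more highly valued curve pairs the same work rate with a higher
# reinforcement rate, so V1 lies above V2, which lies above V3.
assert r1 > r2 > r3
```

This is the numerical counterpart of the set of curves in Figure 4: a higher intercept in VR/Vw space maps to a higher curve in reinforcement-rate/work-rate space.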
Figure 4 repeats the process, but in this case a set of W-R isovalue functions is generated that have different values, where V1 > V2 > V3. The function V1 is valued more highly than V2 because it has a higher intercept in VR/Vw space. When transformed into reinforcement-rate/work-rate space, V1 is higher than V2, indicating that the subject gives a higher evaluation to combinations in which higher reinforcement rates are produced for the same work rate. Figure 4 shows how a quadrant diagram can be used to generate sets of W-R isovalue functions with different values when one reinforcer is delivered at different rates.

Isovalue Functions for Different Reinforcers

The next step is to extend the approach so as to identify the shapes of W-R isovalue functions for different reinforcers. The transformation process is repeated in Figures 5 and 6.

Fig. 5. Panel A is a four-quadrant diagram that identifies the shapes of isovalue curves for Reinforcers 1 and 2. Axes are rate of reinforcement (R), work rate (W), the subject's evaluations of increasing reinforcement rates (VR), and the subject's evaluations of increasing work rates (Vw). The isovalue functions derived in Panel A are reproduced in Panel B.

Fig. 6. Panel A is a four-quadrant diagram that identifies the shapes of isovalue curves for Reinforcers 4 and 5. See Figure 5 for further details.

Figure 5 shows the transformation of evaluation functions for two reinforcers, Reinforcer 1 and Reinforcer 2, where units of Reinforcer 1 are more highly valued than units of Reinforcer 2 at every rate of reinforcement. The aim is to provide isovalue functions for different reinforcers that are evaluated at the same level of V = V1, both in VR/Vw space and in R/W space. The reinforcer evaluation functions have similar shapes in R/VR space, but the evaluation function for units of Reinforcer 1 is higher than the function for units of Reinforcer 2. When these functions are transformed into R/W space to show equally valued combinations of work and reinforcement, the isovalue function for Reinforcer 2 is higher than the isovalue function for Reinforcer 1, showing that more units are required of Reinforcer 2 than of Reinforcer 1 to provide the same value to the
subject. Note that both functions represent the same value, because both functions were derived from the same isovalue function where V = V1. The shapes of the two functions in R/W space are similar, except that the function for Reinforcer 1 has a lower slope. The lower slope of the function for Reinforcer 1 indicates that, at any given work rate, the subject is willing to emit a higher marginal work rate to obtain an additional unit of Reinforcer 1 than of Reinforcer 2, showing that additional units of Reinforcer 1 are more highly valued.

Note that the slope of an isovalue function in R/W space measures the number of units of the reinforcer a subject requires in return for a unit increase in work rate so as to remain at the same overall value. A lower slope indicates that fewer units of reinforcement are required to compensate for a unit increase in work rate, showing that each unit of the reinforcer is more highly valued.

Any intervention that changes the value of a reinforcer for a subject will have the effect of displacing the reinforcer evaluation function in a manner similar to that illustrated in Figure 5.

Figure 6 repeats the process for two reinforcers, Reinforcer 4 and Reinforcer 5, where the reinforcer evaluation function for Reinforcer 5 has a gentle slope and the reinforcer evaluation function for Reinforcer 4 has a sharp turning point. The two reinforcer evaluation functions cross. When transformed into R/W space, the isovalue function for Reinforcer 4 has a sharp curvature, whereas the isovalue function for Reinforcer 5 has a gentle slope. The two isovalue functions cross in R/W space.

Summary

This section explained how W-R isovalue functions are derived from information about the shapes of a subject's reinforcer evaluation functions and work-rate evaluation functions. Differently shaped W-R isovalue functions were generated for different reinforcers as the shapes of reinforcer evaluation functions varied. Although isovalue functions are not observable, they are plotted on a graph whose axes are observable and quantifiable. The following sections will show that observable differences in performance are predicted in different contexts, based on the shapes of isovalue functions.

SOLUTIONS FOR SINGLE-SCHEDULE PROBLEMS

Constrained optimization approaches assume that subjects will select the point that is the most highly valued of the available choice points; this point is called the optimal point. The model indicates that the optimal point is determined jointly by the choice variables and by the constraints. The constraints determine which alternatives are available. The subject's evaluations of the choice variables indicate which alternatives are most highly valued.

When a single schedule is available, the subject must select one combination of work rate and reinforcement rate from all of the alternatives made available by the schedule. This section explains how a constrained optimization approach is used to solve the choice problem posed by a single schedule of reinforcement. Rachlin and Burkhard (1978) offered an explanation that is formally similar, although their model differed in important details, such as relating the value of responses to their duration and measuring performance in terms of relative duration of responses rather than rate of discrete responses.

The choice variables in the model are reinforcement rate and work rate. One constraint is the schedule of reinforcement. A second constraint is the subject's maximum possible rate of work. A third constraint is the time available for responding, because this restricts total work output. The first two constraints are illustrated in Figure 7. The subject's evaluations of choice variables are represented by a set of isovalue functions equivalent to those derived in Figure 4, with V1 > V2 > V3. The subject's maximum work capacity is shown by W. The schedule represents a constraint, because the subject can choose only points on the line. Points above and below the schedule line are unattainable. The subject selects different points along the schedule line by varying response rate.

The subject's evaluation of different points is represented by the W-R isovalue functions.
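The optimization the model attributes to the subject can be sketched as a coarse numerical search along the schedule line: among the attainable (R, W) pairs, pick the one with the greatest overall value. The additive value function, the FR-like wage line, and all parameters below are hypothetical:

```python
import math

def v_r(R):
    """Hypothetical reinforcer evaluation (diminishing marginal value)."""
    return 10.0 * (1.0 - math.exp(-0.5 * R))

def v_w(W):
    """Hypothetical work-rate evaluation (all work negatively valued)."""
    return -1.5 * W

WAGE = 2.0     # schedule line: R = WAGE * W (so price = 1 / WAGE)
W_MAX = 10.0   # the subject's maximum work capacity

# Search the schedule line, up to the work capacity, for the most highly
# valued attainable point (overall value assumed additive: V = VR + VW).
candidates = [w / 100.0 for w in range(0, int(W_MAX * 100) + 1)]
w_star = max(candidates, key=lambda w: v_r(WAGE * w) + v_w(w))
r_star = WAGE * w_star

# The optimum is interior: below it, the marginal value of reinforcement
# exceeds the marginal cost of work; past it, the reverse holds.
assert 0.0 < w_star < W_MAX
```

At the maximizing point the marginal gain in reinforcement value just offsets the marginal cost of work, which is the tangency condition described in the text.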
Fig. 7. The solution to the choice problem posed by a single schedule is illustrated. Axes are reinforcement rate (R) and work rate (W). A fixed-ratio schedule constraint (OS) is shown together with three isovalue curves (V1 > V2 > V3). The subject's maximum work capacity is W.

The points on the line V1 are valued more highly than points on the isovalue functions V2 and V3, because V1 provides higher reinforcement rates for the same work rate. However, points on the isovalue function V1 are unattainable, because they all lie above the schedule line. Points such as b and c on the isovalue function V3 are attainable, but they are not the most highly valued of the available points. The subject can attain a higher value than that provided by b by increasing its response rate, thereby moving upwards along the schedule line so as to gain access to a more highly valued isovalue function. The subject is predicted to increase its overall value by moving onto more highly valued isovalue functions until it reaches a. If the subject moves past a, then it will move onto isovalue functions with lower values.

The subject is assumed to follow a marginalist approach of evaluating the marginal gains or returns in reinforcement rate against the marginal costs of responding. If a subject is at b on the schedule line, then it will find that the positive value of the last unit of reinforcement exceeds the negative value of the last unit of work required to obtain the reinforcer. The point of equilibrium occurs at a, because this is the point at which the negative value of the last unit of work is exactly offset by the positive value of the last unit of reinforcer. At this point, the net value of the exchange is zero.

Point a is the optimal point because it is the most highly valued of the attainable points. At the optimal point, the subject obtains the reinforcement-rate/work-rate combination R*/W*. The optimal point occurs at the point at which the schedule line is tangent to the most highly valued isovalue function.

The marginalist formulation assumes that the subject is sensitive both to the slope of the schedule function and to the slope of the isovalue function, because the optimal point is the point at which these two slopes are equal. The slope of the schedule function at any response rate measures the marginal productivity of the schedule, because it gives the marginal rate of reinforcement obtained per unit of responding. The slope of the W-R isovalue function measures the equal-value rate of exchange of work for reinforcers.

The suggestion that subjects are sensitive to marginal productivities of schedules has concerned critics for two reasons. The first issue involves concern about whether the information-processing skills of subjects enable them to discriminate marginal productivities of schedules, because animal subjects are able to learn about marginal productivities of schedules only by observing the outcomes of their own behavior (Hinson & Staddon, 1983; Prelec, 1982). The second issue involves the question of how an experimenter might determine whether a subject has attempted to select the optimal point.

It has been established that subjects working on two concurrent unequal VI schedules of reinforcement spend most of their time on the richer schedule while periodically sampling the leaner schedule (Tustin & Davison, 1979). Subjects might learn about the relative marginal productivities of schedules by comparing schedules for the numbers of responses required or times spent responding before reinforcers are delivered. Tustin and Davison examined times between changeovers on concurrent VI schedules and found that when relative reinforcement rates varied between conditions, subjects adjusted the frequency of sampling the leaner schedule while maintaining the duration of this sample constant at about 3 s. In other words, subjects adjusted to changes in the relative rate of reinforcement by varying the time spent in the richer schedule before making a changeover into the leaner schedule while responding at a constant rate on both schedules. This sampling approach enables subjects to compare concurrent schedules for numbers of rein-
forcers obtained per unit of responding or per unit of time spent responding. Comparisons are made within specific time periods or "time horizons" (Timberlake, Gawley, & Lucas, 1988).

The second issue involves procedures that will show whether subjects have identified and approximated the optimal point. To assess whether a subject has discriminated marginal productivities of schedules between experimental conditions, a fuller description of the nature of the discrimination problem presented by the schedules is required. Morgan and Tustin (1992) described one approach for assessing the difficulty of the discrimination problem posed by concurrent VI schedules. They reanalyzed data from an experiment by Hunter and Davison (1982) in which pigeons' response rates varied considerably between conditions due to changes made both to force requirements on response keys and to scheduled rates of concurrently available reinforcement. Data were obtained from 6 subjects working on five VI schedules that were presented concurrently with a second schedule. In this case, the time horizon is assumed to be equivalent to an experimental session.

Morgan and Tustin (1992) plotted all rates of reinforcement obtained from each VI

the overall reinforcement rate obtained from their chosen work rate. Performance is said to be efficient if subjects achieve the highest possible reinforcement rate from their chosen work rate. Efficient performance can be achieved only if subjects first discriminate the marginal productivities of schedules and then allocate responses so as to maximize reinforcers obtained per unit of work. Morgan and Tustin used the theoretical schedule functions to compute the maximum rates of reinforcement that could be earned by the most efficient allocation across the two response keys of the total rate of work chosen by each subject. Measures of the level of efficiency achieved by each subject were made by computing differences between these theoretical maximum reinforcement rates and the reinforcement rates actually obtained. Results showed a high level of efficiency, in that 94% of allocations achieved at least 97% of the maximum available reinforcement rate. It was concluded that subjects both were able to discriminate the marginal productivities of schedules and were motivated to maximize the rate of reinforcement obtained per unit of work.

The difficulty of discriminating marginal productivities of schedules can easily be underestimated. For example, Ettinger, Reid,
schedule against the response rates that and Staddon (1987) commented that a VI
earned the reinforcers. Lines of best fit were schedule can be represented by a line that is
computed, producing functions that de-
scribed relations between responding and re- positively accelerated with an asymptote at
inforcement from each schedule. The mar- the scheduled maximum reinforcement rate.
ginal productivities of schedules for any given However, as noted by Morgan and Tustin
response rate are given by the slopes of these (1992), VI schedules produce considerable
schedule functions. Although Morgan and variability in reinforcement rate for any given
Tustin were able to compute lines of best fit, response rate, making it difficult to estimate
they noted that a number of reinforcement the marginal productivities of these sched-
rates fell a considerable distance away from ules. VI schedules are static schedules, be-
the line of best fit for most response rates cause the properties of the schedule do not
(see their Figure 1). Because reinforcement vary as a function of the subject's own behav-
rates varied considerably for any given re- ior. Even static VI schedules provide a sub-
sponse rate, it is more difficult for subjects to stantial discrimination problem for subjects.
estimate the marginal productivity of the The difficulty of discriminating the marginal
schedule by assessing the rate of reinforce- productivity of a schedule increases consid-
ment obtained from any given response rate. erably if a dynamic schedule, in which the
The question arises as to whether subjects are properties of the schedule are dependent on
able to solve these discrimination problems. the subject's own behavior, is used. Nonethe-
Morgan and Tustin (1992) used the con- less, some experimenters have used dynamic
cept of efficiency to assess the dual questions schedules to test the ability of subjects to dis-
of (a) whether subjects were able to solve the criminate the marginal productivities of
discrimination problems and (b) whether sub- schedules. For example, Ettinger et al. (1987)
jects allocated responding so as to maximize provided reinforcement only if subjects met
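The efficiency measure described above can be sketched numerically. This is an illustrative reconstruction, not the authors' analysis: the hyperbolic feedback function `vi_feedback` and all parameter values are assumptions chosen for the sketch, whereas Morgan and Tustin (1992) fitted empirical schedule functions to the obtained rates.

```python
# Illustrative sketch (assumed feedback function, not Morgan and Tustin's
# fitted lines): find the best split of a fixed response output across two
# VI schedules, then compare an observed split against that maximum.

def vi_feedback(b, r):
    """Reinforcers/hr earned by b responses/hr on a VI whose maximum rate is r
    (hyperbolic approximation, an assumption for illustration)."""
    return r * b / (b + r) if b > 0 else 0.0

def best_allocation(total_b, r1, r2, steps=10000):
    """Grid-search the allocation of total_b that maximizes total reinforcement."""
    best_rate, best_b1 = 0.0, 0.0
    for i in range(steps + 1):
        b1 = total_b * i / steps
        rate = vi_feedback(b1, r1) + vi_feedback(total_b - b1, r2)
        if rate > best_rate:
            best_rate, best_b1 = rate, b1
    return best_rate, best_b1

total_b = 3000.0                                  # the subject's chosen work rate
max_rate, b_star = best_allocation(total_b, r1=60.0, r2=20.0)
obtained = vi_feedback(2000.0, 60.0) + vi_feedback(1000.0, 20.0)  # an observed split
efficiency = obtained / max_rate                  # proportion of attainable rate
```

With these assumed values the observed split earns over 99% of the attainable rate, echoing the high efficiencies the reanalysis reported.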
The difficulty of discriminating marginal productivities of schedules can easily be underestimated. For example, Ettinger, Reid, and Staddon (1987) commented that a VI schedule can be represented by a line that is negatively accelerated with an asymptote at the scheduled maximum reinforcement rate. However, as noted by Morgan and Tustin (1992), VI schedules produce considerable variability in reinforcement rate for any given response rate, making it difficult to estimate the marginal productivities of these schedules. VI schedules are static schedules, because the properties of the schedule do not vary as a function of the subject's own behavior. Even static VI schedules provide a substantial discrimination problem for subjects. The difficulty of discriminating the marginal productivity of a schedule increases considerably if a dynamic schedule, in which the properties of the schedule are dependent on the subject's own behavior, is used. Nonetheless, some experimenters have used dynamic schedules to test the ability of subjects to discriminate the marginal productivities of schedules. For example, Ettinger et al. (1987) provided reinforcement only if subjects met the combined requirements of two interlocking schedules.

Before one can reasonably comment on whether subjects have responded to the marginal productivities of schedules, it is necessary to examine the nature of the discrimination task required by the schedules. A number of experiments that have supposedly examined the question of whether subjects were sensitive to marginal productivities have failed to report any assessment of the marginal productivities of the schedule functions used in the experiment. To examine the nature of the discrimination problem, experimenters must display the reinforcement rates that were obtained from each response rate, so as to show both the slope of the schedule function and any variability about a line of best fit. Current findings that subjects have not adjusted their performance according to the productivities of schedule functions may indicate simply that the subjects were unable to solve the discrimination problem posed by the schedules, and may tell us nothing about the motivation of subjects to allocate performance according to the marginal productivities of schedules.

In summary, this section has proposed that subjects' performance can be described as a marginalist approach to select a point on a schedule line. The chosen point is predicted to be optimal, because it provides the highest possible reinforcement rate for the chosen work rate. At the optimal point, the slope of the isovalue function is equal to the slope of the schedule function. The question of whether subjects' behavior approximates the optimal point is assessed by examining concurrent schedule performance and by measuring the efficiency with which subjects obtained the maximum reinforcement rate that was available from the chosen work rate. Efficient allocations can occur only if subjects are able to discriminate the productivities of schedules.

MODELS OF REINFORCER DEMAND AND WORK SUPPLY

The previous section showed how the constrained optimization approach is used to identify the optimal point in a single-schedule situation. This section uses isovalue functions to generate a locus of optimal points when schedule requirements change.

Figure 8A shows a set of FR schedule lines and a set of W-R isovalue curves tangent to the schedule lines, yielding a set of optimal points. Different schedule lines represent different schedule requirements (FR n), with the slope of the schedule line decreasing as the requirement increases. W-R isovalue curves have a similar shape to those derived in Figure 3.

Figure 8A shows how the loci of a subject's optimal points will change as the schedule requirement increases for a given response and reinforcer. As the schedule requirement increases from n1 to n4, the optimal reinforcement rate (R*) decreases monotonically and the optimal work rate (W*) first increases and then decreases, as predicted by Staddon (1979) and Rachlin et al. (1981).

Figures 8B and 8C reproduce the reinforcement-rate and work-rate performance patterns from Figure 8A, plotted against schedule requirement. The schedule requirement determines both the price of a reinforcer (I/R) and the wage rate for working (R/I), where I is the rate of instrumental responding. In Figure 8B, optimal reinforcement rates are plotted against increasing schedule requirement or increasing price, giving a reinforcer demand curve. In Figure 8C, optimal work rates are plotted against increasing schedule requirement or decreasing wage rate, giving a labor or work-rate supply curve. Figure 8 illustrates the relationship between reinforcer demand curves and work-rate supply curves, as derived using isovalue curves. Other analyses have separately derived either demand curves (Hursh, 1980, 1984; Hursh & Bauman, 1987) or work-rate supply functions (Battalio, Green, & Kagel, 1981; Battalio, Kagel, & Green, 1979). The present derivation, showing the relationship between the functions, has not previously been reported.
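The tangency construction of Figure 8 can be imitated with a small numerical sketch. The functional forms below are assumptions chosen only for their qualitative shapes (a hyperbolically saturating reinforcer evaluation and a linear work cost); the paper itself deliberately avoids committing to particular equations.

```python
# Illustrative sketch of the constrained-optimization derivation (assumed
# functional forms, not the paper's): reinforcer value saturates
# hyperbolically, work carries a linear cost, and an FR n schedule
# constrains the reinforcement rate to R = W/n.

def net_value(w, n, v_max=100.0, k=10.0, cost=1.0):
    r = w / n                                  # FR n schedule line
    return v_max * r / (r + k) - cost * w      # reinforcer value minus work cost

def optimum(n):
    w_grid = [i * 0.01 for i in range(1, 20000)]
    w_star = max(w_grid, key=lambda w: net_value(w, n))
    return w_star / n, w_star                  # (R*, W*)

requirements = [1, 2, 3, 4, 5]
demand = []   # optimal reinforcement rate R* against price (the requirement n)
supply = []   # optimal work rate W* against the same requirements
for n in requirements:
    r_star, w_star = optimum(n)
    demand.append(r_star)
    supply.append(w_star)

# The demand curve falls monotonically as price rises ...
assert all(a > b for a, b in zip(demand, demand[1:]))
# ... while the work-rate supply function is bitonic: it rises, then falls.
assert supply[0] < supply[2] and supply[2] > supply[4]
```

Under these assumptions the optimum reproduces the pattern the text describes: R* decreases monotonically with the requirement while W* first increases and then decreases.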
Fig. 8. The axes of Panel A are optimal reinforcement rates (R*) and optimal work rates (W*). Four isovalue functions are shown tangent to schedule lines with differing requirements (ni), generating optimal solutions for four schedules. In Panel B optimal reinforcement rates are plotted against price or schedule requirement. In Panel C optimal work rates are plotted against schedule requirement or wage rate.

The present analysis derives the slope of a demand curve from the intersection of isovalue functions and schedule lines. If the shapes of schedule lines are known, then it follows that the shape of an empirical demand curve reflects information about the shape of a subject's isovalue function. The principle of revealed preference can be used to deduce how a subject evaluates reinforcers, based on the shape of the subject's demand curve.

The work-rate function derived in Figure 8C is bitonic, in that it first increases and then decreases as wage rate changes. A bitonic work-rate function has also been predicted by other analyses (Allison & Boulter, 1982; Battalio, Green, & Kagel, 1981; Battalio et al., 1979; Green, Kagel, & Battalio, 1982, 1987; Kagel, Battalio, Winkler, & Fisher, 1977; Rachlin et al., 1981; Rachlin & Burkhard, 1978; Staddon, 1979). A bitonic relation between response rate and reinforcement rate is not consistent with the notion that reinforcers always strengthen behavior, because the strengthening principle implies a monotonic relation between response rate and reinforcement rate (Herrnstein, 1970; McDowell & Wood, 1985). Neither is the prediction of a bitonic relation consistent with the hypothesis that total response rate remains constant (Herrnstein, 1974, 1979). Although current data support the notion of a bitonic relation between response rate and reinforcement rate (Battalio et al., 1979; Hanson & Timberlake, 1983; Staddon, 1979; Timberlake, 1977), the relationship warrants further investigation.
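A minimal numeric check of this point, using an assumed closed-form bitonic supply curve of the kind just discussed: if W* is bitonic across schedule requirements while R* falls monotonically, then response rate cannot be a monotonic function of reinforcement rate.

```python
# Minimal check under assumed forms (illustration only): with a bitonic
# work-rate supply W*(n) and a monotonically falling demand R*(n), the
# response rate is not a monotonic function of the reinforcement rate.

import math

def w_star(n, a=1000.0, k=10.0):
    return math.sqrt(a * n) - k * n      # assumed bitonic supply curve

def r_star(n, a=1000.0, k=10.0):
    return w_star(n, a, k) / n           # reinforcement rate on FR n

pairs = sorted((r_star(n), w_star(n)) for n in range(1, 11))
rates = [w for _, w in pairs]            # W* ordered by increasing R*
rising = any(b > a for a, b in zip(rates, rates[1:]))
falling = any(b < a for a, b in zip(rates, rates[1:]))
assert rising and falling                # non-monotonic: reinforcement does not
                                         # simply "strengthen" responding
```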
Fig. 9. Solutions for choice problems involving single schedules are illustrated for Reinforcers i, ii, and iii. The first column shows differently shaped reinforcer evaluation functions for each reinforcer. Solutions to the choice problems are illustrated in the second column. Resultant reinforcer demand functions are shown in the third column, in which optimal reinforcement rates (R*) are plotted against the prices (P) of reinforcers or schedule requirements (n). Associated work-rate supply functions are shown in the fourth column, in which optimal work rates (W*) are plotted against wage rate or schedule requirement.

The analysis is now extended to show how the shape of a subject's reinforcer evaluation function will affect the shapes of both the reinforcer demand curve and the work-rate supply curve. Demand curves and work-rate curves are derived in Figure 9 for three differently shaped reinforcer evaluation functions, when the same schedule constraints are used. Note that the price of reinforcers increases as schedule requirement increases, whereas wage rate decreases as schedule requirement increases.

The subject's evaluation functions for alternative reinforcers are shaped as follows: the evaluation function for Reinforcer i in Panel A has a gentle slope; the evaluation function for Reinforcer ii in Panel E is nearly rectangular with a turning point at R1; and the evaluation function for Reinforcer iii in Panel J is almost linear. (These differently shaped evaluation functions were introduced in Figure 1.)
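The contrast between a sharply turning and an almost linear evaluation function can be explored with hypothetical functions (the parameter choices below are mine, not the paper's): an FR n constraint R = W/n, a linear work cost, and a grid search for the optimal work rate.

```python
# Hypothetical evaluation functions (assumptions for illustration only):
#   "sharp turning point": value saturates abruptly at R0 reinforcers/hr
#   "almost linear":       value grows in proportion to reinforcement rate
# Work carries a linear cost and an FR n schedule imposes R = W/n.

def optimum(value_fn, n, cost=1.0):
    grid = [i / 20 for i in range(4001)]          # candidate work rates 0..200
    w = max(grid, key=lambda w: value_fn(w / n) - cost * w)
    return w / n, w                               # (R*, W*)

sharp = lambda r: 40.0 * min(r, 10.0)             # turning point at R0 = 10
linear = lambda r: 5.0 * r                        # almost linear evaluation

sharp_curve = [optimum(sharp, n) for n in (1, 2, 3, 4)]
linear_curve = [optimum(linear, n) for n in (1, 2, 3, 4, 6)]

# Sharp turning point: demand pinned near R0 (a flat, inelastic demand
# curve) while the work rate rises with the schedule requirement.
assert all(abs(r - 10.0) < 0.1 for r, _ in sharp_curve)
assert all(b > a for (_, a), (_, b) in zip(sharp_curve, sharp_curve[1:]))
# Almost linear: full work at low prices, none once the price is too high,
# so demand falls steeply and work does not rise with the requirement.
assert linear_curve[0][1] > 199.0 and linear_curve[-1][1] < 1e-9
```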
Figure 9 shows how the shape of a reinforcer evaluation function affects performance patterns. The demand curve for Reinforcer i has a moderate negative slope and a moderate intercept on the R* axis; the work-rate function is bitonic. The demand curve for Reinforcer ii is almost flat, with a lower intercept; the work-rate function increases monotonically and peaks near the right side of the graph. The demand curve for Reinforcer iii has an intercept high on the R* axis and a steep slope; the work-rate function decreases monotonically, with a peak towards the left of the graph.

Figure 9 shows that markedly different patterns of performance are predicted for the same changes in requirement of a single schedule of reinforcement, depending on the shape of the subject's set of reinforcer evaluation functions. The predicted work-rate functions varied from a monotonically increasing function to a bitonic function and then to a monotonically decreasing function. The shapes of the associated demand curves varied greatly.

PERFORMANCE ON CONCURRENT SCHEDULES

Earlier sections have illustrated the solutions to choice problems posed by single schedules of reinforcement. This section extends the analysis to the more complex situation in which two schedules are concurrently available. When two schedules are available, the theory must explain the allocation of work or reinforcers between schedules as well as the selection of a work rate.

A subject's total response output can be controlled by procedures such as ending an experimental session when a set number of responses has been emitted or when a certain number of reinforcers has been earned. Procedures that control a subject's response output are predicted to move performance along a locus of optimal points called an expansion path. An expansion path can be defined either in response-rate space or in reinforcement-rate space. It has been conventional to measure preference for a reinforcer from the ratio of responses allocated to the schedule producing that reinforcer, because the ratio of reinforcers obtained is almost invariant when VI schedules are used.

One important question is whether preference remains constant as response output changes along an expansion path. Figure 10A examines this question using a quadrant diagram. The upper left quadrant shows an almost linear reinforcer evaluation function for Reinforcer 1, and the lower right quadrant shows a sharply curved reinforcer evaluation function for Reinforcer 2. Three different isovalue functions are shown, with V''' > V'' > V'. Isovalue functions in V1/V2 space are straight lines with a slope of -1, because a constant overall value is maintained by decreasing one reinforcer by the same amount that the alternative reinforcer is increased. When transformed into a graph on which the axes are the rates of Reinforcer 1 and Reinforcer 2, the more highly valued isovalue function V''' lies to the right of V''. The isovalue curves are reproduced in reinforcement-rate space in Panel B. Note that because these isovalue functions describe choices between two reinforcers, they are called reinforcer-reinforcer (R-R) isovalue curves.

Fig. 10. An expansion path in reinforcement-rate (R1/R2) space is derived in Panel A, in which Reinforcers 1 and 2 have differently shaped evaluation functions. Panel A is a four-quadrant diagram in which axes are objective rates of reinforcement (R1 and R2) and the subject's evaluations of these reinforcer rates (V1 and V2). Three isovalue lines are shown in V1/V2 space, with V''' > V'' > V'. The isovalue curves are transformed into R1/R2 space by following the dotted lines. The isovalue curves are reproduced in R1/R2 space in Panel B. An expansion path (OM) is obtained from the points of tangency between unlabeled iso-TW lines and the highest isovalue curves.
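The transformation from V1/V2 space into R1/R2 space can be sketched directly. The evaluation functions below are assumptions with the stated qualitative shapes; holding V1(R1) + V2(R2) constant traces one R-R isovalue curve, whose convexity can then be checked.

```python
# Sketch of the quadrant-diagram transformation, with assumed evaluation
# functions: V1 is (almost) linear in R1, V2 saturates sharply in R2.
# Points on one isovalue curve satisfy V1(R1) + V2(R2) = K.

v1 = lambda r: 2.0 * r                    # nearly linear evaluation
v2 = lambda r: 100.0 * r / (r + 5.0)      # sharply curved evaluation

K = 100.0                                 # total value held constant
curve = []                                # the isovalue curve in R1/R2 space
for r2 in range(20, 0, -1):               # walk R2 downward one unit at a time
    r1 = (K - v2(r2)) / 2.0               # R1 needed to keep total value at K
    curve.append((r1, r2))

# Convexity toward the origin: each one-unit loss of R2 takes a larger and
# larger increment of R1 to compensate.
increments = [b[0] - a[0] for a, b in zip(curve, curve[1:])]
assert all(y > x for x, y in zip(increments, increments[1:]))
```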
The slope of an R-R isovalue curve measures the marginal rate of substitutability of two reinforcers. If two reinforcers are perfect substitutes, then the isovalue function will be a straight line, because a unit of one reinforcer completely replaces a unit of the other reinforcer. R-R isovalue functions will have a negative slope, because a constant value is maintained by exchanging one reinforcer for the other. In Figure 10B, the R-R isovalue function is curved convexly to the origin. The convex curvature shows that as the rate of one reinforcer decreases, higher and higher increments of the other reinforcer are required to maintain a constant overall value. The degree of convexity of the curvature measures the substitutability of the reinforcers.

A set of unmarked lines is drawn tangent to the R-R isovalue curves, which represent iso-total-work-rate functions (iso-TW). An iso-TW shows how a constant number of responses may be allocated between two schedules. A fuller discussion of how expansion paths are generated using iso-TW functions is given in Tustin and Morgan (1985). In this case, iso-TWs have a slope of -1, because every response removed from one schedule is allocated to the other schedule. Iso-TWs farther from the origin represent higher response outputs. Optimal points occur where the iso-TW lines are tangent to the highest isovalue functions, because reinforcement rates are obtained for the minimum work rate at these points. An expansion path (OM) is drawn through the points of tangency, representing the locus of optimal points that will be chosen as response output increases.

The point of interest in Figure 10 is whether the expansion path (OM) moves along a straight line radiating from the origin, as would occur if preference remains constant when response output increases. The expansion path in Figure 10 is not a straight line, because it lies closer to the R2 axis when response output is low and then moves closer to the R1 axis as total reinforcement rate increases. This indicates that preference will not remain constant as reinforcement rate increases in a case in which the two reinforcers have differently shaped evaluation functions. If preference is measured from the ratio of the obtained reinforcement rates, then preference is predicted to reverse as the total reinforcement rate increases; Reinforcer 2 is preferred at low reinforcement rates, and Reinforcer 1 is preferred at high reinforcement rates.

A reversal of preference is predicted for any choice situation in which the evaluation function of one reinforcer has a sharp turning point and the evaluation function of the other reinforcer is almost linear. The reinforcer with the sharply turning evaluation function is predicted to be preferred at low rates of reinforcement, but the reinforcer with a linear evaluation function is predicted to be preferred as the level of income of reinforcers increases. The reversal of preference is a direct outcome of the differences in shapes of the evaluation functions for the two reinforcers.

The model of choice between two differing reinforcers presented here is equivalent to the economic theory of consumer demand for two goods when there is a constraint on income (Battalio, Kagel, Rachlin, & Green, 1981; Kagel, Battalio, Green, & Rachlin, 1980; Kagel, Battalio, Rachlin, & Green, 1981; Kagel et al., 1975; Rachlin, Green, Kagel, & Battalio, 1976). This paper extends previous analyses by assessing the effect on preference of changing total response output. It is predicted that preference will not always remain constant, and that preference may reverse in conditions in which reinforcers have differently shaped evaluation functions. An example of a reversal in preference is given by Tustin (1994). The prediction that preference between qualitatively different reinforcers will change and even reverse as a subject's total reinforcement rate increases is not made by other theories of performance.
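The expansion-path prediction can be illustrated with a short numerical sketch (assumed evaluation functions, not the paper's): splitting a growing total of obtainable reinforcement between an almost linear and a sharply curved reinforcer, along iso-TW lines of slope -1, produces the predicted reversal of preference.

```python
# Sketch of the expansion-path prediction, with assumed evaluation functions:
# Reinforcer 1 is valued almost linearly, Reinforcer 2 with a sharp bend.
# A total T is split between the two (an iso-TW constraint of slope -1),
# and the split maximizing total value is found numerically.

v1 = lambda r: 2.0 * r
v2 = lambda r: 100.0 * r / (r + 5.0)

def best_split(total, steps=2000):
    r1 = max((total * i / steps for i in range(steps + 1)),
             key=lambda r: v1(r) + v2(total - r))
    return r1, total - r1

path = [best_split(t) for t in (2, 5, 10, 20, 40)]  # growing response output

# At low output the sharply curved Reinforcer 2 takes everything; as output
# grows, the linear Reinforcer 1 overtakes it: a reversal of preference.
low_r1, low_r2 = path[0]
high_r1, high_r2 = path[-1]
assert low_r2 > low_r1        # Reinforcer 2 preferred when income is low
assert high_r1 > high_r2      # Reinforcer 1 preferred when income is high
```

Under these assumptions the expansion path is not a straight ray from the origin: Reinforcer 2's rate stops growing once its evaluation function bends, and all further income flows to Reinforcer 1.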
CONCLUSION

This paper describes a new way to conceptualize preference. It is proposed that preference is a variable that is best measured using appropriate functions. This represents a significant change from the proposal made by Premack (1965) that preference can be accurately estimated by observing performance in a single situation in which responses are freely available.

This argument has been presented in a number of steps. Evaluation functions for reinforcers were represented graphically. Quadrant diagrams were used to combine evaluation functions for reinforcers and for work so as to produce sets of isovalue functions. Isovalue functions were then used to predict performance in a number of schedule arrangements. The choice of a work rate was predicted when a single schedule was provided. A locus of optimal points was generated as schedule requirements changed, yielding both reinforcer demand curves and work-rate supply curves. The shapes of both reinforcer demand curves and work-rate supply curves were predicted to vary as a function of the shape of the subject's isovalue function. Predictions were made about concurrent schedule performance both when reinforcement was low and when reinforcement increased. It was predicted that, in some circumstances, preference between reinforcers will not be constant but will vary and may even reverse as total reinforcement increases. No other theory has predicted performance across these three schedule arrangements, and no other theory of performance has predicted a reversal of preference as reinforcement increases.

To date, these predictions remain untested, because no experiment has examined performance under the variety of conditions that are required. To test the predictions, a subject will work for two reinforcers that are expected to have differently shaped evaluation functions in three conditions: in single-schedule arrangements, in concurrent-schedule arrangements in which reinforcement is limited, and in concurrent-schedule arrangements in which total reinforcement increases progressively. The schedule requirement will vary within each condition. The type of schedule and instrumental response will remain constant. Analyses will be facilitated if simpler schedules (e.g., FR schedules) are used, so that marginal productivities can be more easily discriminated by subjects.

It is predicted that reinforcers with differently shaped evaluation functions will generate distinctive patterns of performance. Reinforcers whose evaluation functions have sharp turning points will generate demand curves with a flat slope and an intercept that is close to the free-operant rate, will generate work-rate supply curves that increase as schedule requirements increase, and will be preferred in concurrent-schedule experiments when total reinforcement is low but will become less preferred as total reinforcement increases. In contrast, reinforcers whose evaluation function is almost linear are predicted to generate demand curves with a steep slope, to generate work-rate functions with a steep negative slope as schedule requirements increase, and to be not preferred in concurrent-schedule experiments when reinforcement is low but to become more preferred as total reinforcement increases. A reinforcer that might be expected to have a sharp turning point is water, and a reinforcer that might be expected to have a nearly linear evaluation function is light, for animals deprived of these reinforcers.

It was hypothesized that the principle of revealed preference can be used to deduce the shape of a subject's evaluation function from consistent patterns of performance in relevant conditions. Three measures of performance are expected to be related, because it is assumed that they are all determined by the shape of the isovalue function, which is in turn determined by the shape of the subject's evaluation function for a reinforcer. These three measures of performance are reinforcer demand curves, work-rate supply functions, and reinforcer expansion paths.

Although the present analysis is similar to previous behavioral economic analyses, there are some crucial differences. The present model uses variables that are directly observable, because subjects are assumed to exchange work for reinforcers. Some models have introduced unobservable variables, for example by assuming that subjects exchange reinforcers for leisure (Rachlin et al., 1981). Because the present model assumes that total work rate is a variable, it is able to predict changes in preference when total response output varies.

One limitation of this paper is that it does not provide any mathematical specifications to describe subjects' evaluations of reinforcers. Mathematical specifications will permit more precise predictions of the effects of different evaluations of reinforcers.

REFERENCES

Allison, J., & Boulter, P. (1982). Wage rate, nonlabor income, and labor supply in rats. Learning and Motivation, 13, 324-342.
Battalio, R. C., Green, L., & Kagel, J. (1981). Income-leisure tradeoffs of animal workers. The American Economic Review, 71, 621-632.
Battalio, R. C., Kagel, J. H., & Green, L. (1979). Labor supply behavior of animal workers: Towards an experimental analysis. In V. L. Smith (Ed.), Research in experimental economics (Vol. 1, pp. 231-253). Greenwich, CT: JAI Press.
Battalio, R. C., Kagel, J. H., Rachlin, H., & Green, L. (1981). Commodity choice behavior with pigeons as subjects. Journal of Political Economy, 89, 67-91.
Ettinger, R. H., Reid, A. K., & Staddon, J. E. R. (1987). Sensitivity to molar feedback functions: A test of molar optimality theory. Journal of Experimental Psychology: Animal Behavior Processes, 13, 366-375.
Green, L., Kagel, J. H., & Battalio, R. C. (1982). Ratio schedules of reinforcement and their relation to economic theories of labor supply. In M. L. Commons, R. J. Herrnstein, & H. C. Rachlin (Eds.), Quantitative analyses of behavior: Vol. 2. Matching and maximizing accounts (pp. 395-429). Cambridge, MA: Ballinger.
Green, L., Kagel, J. H., & Battalio, R. C. (1987). Consumption-leisure tradeoffs in pigeons: Effects of changing marginal wage rates by varying amount of reinforcer. Journal of the Experimental Analysis of Behavior, 47, 17-28.
Hanson, J. H., & Timberlake, W. (1983). Regulation during challenge: A general model of learned performance under schedule constraint. Psychological Review, 90, 261-282.
Herrnstein, R. J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243-266.
Herrnstein, R. J. (1974). Formal properties of the matching law. Journal of the Experimental Analysis of Behavior, 21, 159-164.
Herrnstein, R. J. (1979). Derivatives of matching. Psychological Review, 86, 486-495.
Hinson, J. M., & Staddon, J. E. R. (1983). Matching, maximizing, and hill-climbing. Journal of the Experimental Analysis of Behavior, 40, 321-331.
Hunter, I. W., & Davison, M. C. (1982). Independence of response force and reinforcement rate on concurrent variable-interval schedule performance. Journal of the Experimental Analysis of Behavior, 37, 183-198.
Hursh, S. R. (1980). Economic concepts for the analysis of behavior. Journal of the Experimental Analysis of Behavior, 34, 219-238.
Hursh, S. R. (1984). Behavioral economics. Journal of the Experimental Analysis of Behavior, 42, 435-452.
Hursh, S. R., & Bauman, R. A. (1987). The behavioral analysis of demand. In L. Green & J. Kagel (Eds.), Advances in behavioral economics (Vol. 1, pp. 117-165). Norwood, NJ: Ablex.
Kagel, J. H., Battalio, R. C., Green, L., & Rachlin, H. (1980). Consumer demand theory applied to choice behavior of rats. In J. E. R. Staddon (Ed.), Limits to action: The allocation of individual behavior (pp. 237-267). New York: Academic Press.
Kagel, J. H., Battalio, R. C., Rachlin, H., & Green, L. (1981). Demand curves for animal consumers. The Quarterly Journal of Economics, 96, 1-15.
Kagel, J. H., Battalio, R. C., Rachlin, H., Green, L., Basmann, R. L., & Klemm, W. R. (1975). Experimental studies of consumer demand using laboratory animals. Economic Inquiry, 13, 22-38.
Kagel, J. H., Battalio, R. C., Winkler, R. C., & Fisher, E. B., Jr. (1977). Job choice and total labor supply: An experimental analysis. Southern Economic Journal, 44, 13-24.
McDowell, J. J., & Wood, H. M. (1985). Confirmation of linear system theory prediction: Rate of change of Herrnstein's k as a function of response-force requirement. Journal of the Experimental Analysis of Behavior, 43, 61-71.
Morgan, P. B., & Tustin, R. D. (1992). The perception and efficiency of labour supply choices by pigeons. The Economic Journal, 102, 1134-1148.
Prelec, D. (1982). Matching, maximizing, and the hyperbolic reinforcement schedule function. Psychological Review, 89, 187-225.
Premack, D. (1965). Reinforcement theory. In D. Levine (Ed.), Nebraska symposium on motivation (Vol. 13, pp. 123-188). Lincoln: University of Nebraska Press.
Rachlin, H., Battalio, R., Kagel, J., & Green, L. (1981). Maximization theory in behavioral psychology. Behavioral and Brain Sciences, 4, 371-417.
Rachlin, H., & Burkhard, B. (1978). The temporal triangle: Response substitution in instrumental conditioning. Psychological Review, 85, 22-47.
Rachlin, H., Green, L., Kagel, J. H., & Battalio, R. C. (1976). Economic demand theory and psychological studies of choice. In G. Bower (Ed.), The psychology of learning and motivation (Vol. 10, pp. 129-154). New York: Academic Press.
Rachlin, H. C., Kagel, J. H., & Battalio, R. C. (1980). Substitutability in time allocation. Psychological Review, 87, 355-374.
Solomon, R. L. (1948). The influence of work on behavior. Psychological Bulletin, 45, 1-40.
Staddon, J. E. R. (1979). Operant behavior as adaptation to constraint. Journal of Experimental Psychology: General, 108, 48-67.
Staddon, J. E. R., & Motheral, S. (1978). On matching and maximizing in operant choice experiments. Psychological Review, 85, 436-444.
Thurstone, L. L. (1931). The indifference function. Journal of Social Psychology, 2, 139-167.
Timberlake, W. (1977). The application of the matching law to simple ratio schedules. Journal of the Experimental Analysis of Behavior, 27, 215-217.
Timberlake, W., Gawley, D. J., & Lucas, G. A. (1988). Time horizons in rats: The effect of operant control of access to future food. Journal of the Experimental Analysis of Behavior, 50, 405-417.
Tustin, R. D. (1994). Preference for reinforcers under varying schedule arrangements: A behavioral economic analysis. Journal of Applied Behavior Analysis, 27, 597-606.
Tustin, R. D., & Davison, M. C. (1979). Choice: Effects of changeover schedules on concurrent performance. Journal of the Experimental Analysis of Behavior, 32, 75-91.
Tustin, R. D., & Morgan, P. (1985). Choice of reinforcement rates and work rates with concurrent schedules. Journal of Economic Psychology, 6, 109-141.

Received October 14, 1994
Final acceptance June 6, 1995