Data Envelopment Analysis (DEA) - Thirty Years On. European Journal of Opera

You might also like

Download as pdf or txt
Download as pdf or txt
You are on page 1of 17

Available online at www.sciencedirect.

com

European Journal of Operational Research 192 (2009) 1–17


www.elsevier.com/locate/ejor

Invited Review

Data envelopment analysis (DEA) – Thirty years on


Wade D. Cook a,*, Larry M. Seiford b
a
Department of Operations Management and Information Systems, Schulich School of Business, York University, Toronto, Ontario, Canada M3J 1P3
b
Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States

Received 19 January 2008; accepted 22 January 2008


Available online 3 February 2008

Abstract

This paper provides a sketch of some of the major research thrusts in data envelopment analysis (DEA) over the three decades since
the appearance of the seminal work of Charnes et al. (1978) [Charnes, A., Cooper, W.W., Rhodes, E.L., 1978. Measuring the efficiency of
decision making units. European Journal of Operational Research 2, 429–444]. The focus herein is primarily on methodological devel-
opments, and in no manner does the paper address the many excellent applications that have appeared during that period. Specifically,
attention is primarily paid to (1) the various models for measuring efficiency, (2) approaches to incorporating restrictions on multipliers,
(3) considerations regarding the status of variables, and (4) modeling of data variation.
! 2008 Elsevier B.V. All rights reserved.

Keywords: DEA; Models; Multiplier restrictions; Data variation

1. Introduction into any satisfactory measure of efficiency. These inade-


quate approaches included forming an average productiv-
Efficiency measurement has been a subject of tremen- ity for a single input (ignoring all other inputs), and
dous interest as organizations have struggled to improve constructing an index of efficiency in which a weighted
productivity. Reasons for this focus were best stated fifty average of inputs is compared with output. Responding
years ago by Farrell (1957) in his classic paper on the mea- to these inadequacies of separate indices of labor produc-
surement of productive efficiency. tivity, capital productivity, etc., Farrell proposed an acti-
vity analysis approach that could more adequately deal
‘‘The problem of measuring the productive efficiency of an
with the problem. His measures were intended to be appli-
industry is important to both the economic theorist and the
cable to any productive organization; in other words,
economic policy maker. If the theoretical arguments as to
‘‘from a workshop to a whole economy.” Unfortunately,
the relative efficiency of different economic systems are to
he confined his numerical examples and discussion to single
be subjected to empirical testing, it is essential to be able
output situations, although he was able to formulate a mul-
to make some actual measurements of efficiency. Equally,
tiple output case.
if economic planning is to concern itself with particular
Twenty years after Farrell’s seminal work, and building
industries, it is important to know how far a given industry
on those ideas, Charnes et al. (1978), responding to the
can be expected to increase its output by simply increasing
need for satisfactory procedures to assess the relative effi-
its efficiency, without absorbing further resources.”
ciencies of multi-input multi-output production units,
Farrell further stated that the primary reason that all introduced a powerful methodology which has subse-
attempts to solve the problem had failed, was due to a fail- quently been titled data envelopment analysis (DEA).
ure to combine the measurements of the multiple inputs The original idea behind DEA was to provide a method-
ology whereby, within a set of comparable decision
*
Corresponding author. Tel.: +1 416 736 2100/33573. making units (DMUs), those exhibiting best practice
E-mail address: wcook@schulich.yorku.ca (W.D. Cook). could be identified, and would form an efficient frontier.

0377-2217/$ - see front matter ! 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.ejor.2008.01.032
2 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17
P P
Furthermore, the methodology enables one to measure the eo ¼ max ur y ro = vi xio
r i
level of efficiency of non-frontier units, and to identify P P
benchmarks against which such inefficient units can be s:t: ur y rj " vi xij 6 0; all j ð2:1Þ
r i
compared. ur ; vi P e; all r; i:
Since the advent of DEA in 1978, there has been an
impressive growth both in theoretical developments and where e is a non-archimedian value designed to enforce
applications of the ideas to practical situations. The pur- strict positivity on the variables. We point out that this
pose of the current paper is to provide a sketch of the model involving the ratio of outputs to inputs is referred
major directions in methodological developments (as to as the input-oriented model. One could, as well, invert
opposed to a discussion of applications), in this important this ratio and solve the corresponding output-oriented min-
field during the past three decades. The coverage is by no imization problem. We will generally deal with the input-
means complete as the volume of literature is enormous, oriented model herein.
and beyond the current scope. Problem (2.1) is referred to as the CCR (Charnes, Coop-
Section 2 reviews the various DEA models, including er and Rhodes) model, and provides for constant returns to
those that go beyond the usual definition of DEA, specifi- scale (CRS). It is observed that in their original 1978 paper,
cally the free disposal hull (FDH) model, cross evaluation, the authors simply restricted the variables to be non-nega-
and minimum distance models. Section 3 goes beyond the tive (e = 0); the imposition of a strictly positive lower limit
single level models of Section 2 and examines multilevel (e > 0) was introduced in a follow-up paper, Charnes et al.
models. Section 4 discusses various forms of multiplier (1981). For convenience we refer to (2.1) as the original
restrictions used to constrain the frontier. In Section 5 CCR model.
the status of different types of variables is reviewed. These Applying the Charnes and Cooper (1962) theory of frac-
include non-discretionary, non-controllable, categorical, tional programming, making the P change of variables
"1
and ordinal variables. As well, we consider the issue lr = tur and ti = tvi, where t ¼ ð i mi xio Þ , problem (2.1)
regarding uncertainty as to the input versus output status can be converted to the linear programming (LP) model:
of variables. Data variation is explored in Section 6. This P
e0 ¼ max lr y ro
includes sensitivity analysis, probability-based models, r
P
window analysis, Malmquist models for capturing times s:t: ti xio ¼ 1
series impacts on efficiency, and statistical inference issues P
i
P ð2:2Þ
surrounding the efficient frontier. Concluding remarks fol- lr y rj " ti xij 6 0; 8j
r i
low in Section 7.
lr ; ti P e; all r; i:
By duality, this problem is equivalent to the linear pro-
2. The models gramming problem:
! "
P þ P "
2.1. The constant returns to scale (CRS) model min ho " e sr þ si
r i
P
Consider a set of n DMUs, with each DMU j, s:t: kj xij þ s"i ¼ h o xio ; i ¼ 1; . . . ; m
j
(j = 1, . . ., n) using m inputs xij(i = 1, . . ., m) and generating P ð2:3Þ
s outputs yrj (r = 1, . . ., s). If the prices or multipliers !ur ; !vi kj y rj " sþ
r ¼ y ro ; r ¼ 1; . . . ; s
j
associated with outputs r and inputs i, respectively, are
kj ; s "
i ; sr P 0;
þ
8i; j; r
known, then borrowing from conventional benefit/cost the-
ory, one could express the efficiency ēj of DMUj as the ratio ho unconstrained:
of weighted outputs to weighted inputs, i.e. Problem (2.3) is referred to as the envelopment or primal
X X problem, and (2.2) the multiplier or dual problem.
ur y rj =
! !vi xij : The constraint space of (2.3) defines the production pos-
r i
sibility set T. That is,
( )
This benefit/cost ratio is, of course, the basis for the X X
standard engineering ratio of productivity. T ¼ ðX ; Y Þ j X P kj X j ; Y 6 kj Y j ; kj P 0 :
j j
In the absence of known multipliers, Charnes et al.
(1978) proposed deriving appropriate multipliers for a To get a geometric appreciation for the CRS model, one
given DMU by solving a particular non-linear program- can represent problem (2.3) in a form such as pictured in
ming problem. Specifically, if DMUo is under consider- Fig. 1.
ation, the Charnes et al model for measuring the This figure provides an illustration of a single output
technical efficiency of that DMU is given by the solution single input case. If we solve (2.3) for each of the DMUs,
to the fractional programming problem: this amounts to projecting that DMU to the left, to a point
on the frontier. In the case of DMU #3, for example, its
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 3

DMU 1 2 3 4 5 6 7 the resulting projected value h&E xE is simply the frontier


X 2 3 6 9 5 4 10
DMU B. We may refer to B as a ‘‘benchmark” for DMU
E. In the case of DMU G, its projected (frontier) value is
Y 2 5 7 8 3 1 7
represented by the point K, hence B and C are appropriate
benchmarks for DMU G. As illustrated by this example,
10
the CCR model is appropriately referred to as providing
8
A a radial projection. Specifically, each input is reduced by
3* 4 (9,8)
3 (6,7) 7 (10,7) Efficiency the same proportionality factor h.
6 Measure = A
B2 (3,5) In the example of Fig. 2, in solving (2.3) for DMUs A,
Y

B
4 B, C, D, E, F, G, all slack variables (in this case s"
1 ; s2 ) will
"

5 (5,3) be zero. Fig. 3 is a redrawn version of Fig. 2 with an addi-


2 1 (2,2)
tional DMU H added. For the DMU H, however, the pro-
6 (4,1)
jected point lies on an extension of the frontier at H*, and
2 4 6 8 10 12 14
not on the frontier proper. In this case, the slack corre-
X
sponding to input #1 ðs" 1 Þ will be positive, and equal to
Fig. 1. Constant returns to scale projection in the single input single the distance represented by the line segment from H* to
output case. D. We say that DMU H is improperly enveloped. A
DMU located at point H* would be deemed weakly effi-
projection to the frontier is represented by the point 3*. cient, as opposed to points such as A, B, C, D which are
Intuitively, one would reasonably measure the efficiency strongly efficient or simply efficient (DEA-efficient). For a
of DMU #3 as the ratio A/B = 4.2/6 = .70 or 70%. The complete discussion of efficiency classes, the reader is
solution of (2.3) for this DMU results in h&3 ¼ :70. It is use- referred to Charnes et al. (1986, 1991).
ful to note that had we defined performance by the recipro-
cal ratio of inputs to outputs, the resulting value of that 2.2. The variable returns to scale (VRS) model
ratio would be 1.43, and we would deem the ‘‘efficiency
to be 1/1.43 = .70”, which is the same as arrived at above. Banker et al. (1984) (BCC), extended the earlier work of
Pictorially, solving this output-oriented model involves a Charnes et al. (1978) by providing for variable returns to
vertical projection from DMU 3 up to the frontier, rather scale (VRS). This is pictured in the redrawn version of
than the horizontal projection to the left as shown in the Fig. 1 in the form of Fig. 4.
figure. Shown are the original CRS frontier, and the VRS fron-
An alternative geometric view of model (2.3) is provided tier, here represented by the line segments 1–2, 2–3 and 3–4.
in Fig. 2. The BCC ratio model differs from (2.1), by way of an addi-
Here, we have two inputs (and we assume a single com- tional variable, i.e.
mon output value for all DMUs). In solving (2.3) we find # $
P P
that DMUs A, B, C and D are efficient, i.e., eo ¼ max
&
ur y ro " uo = vi xio
r i
hA = hB = hC = hD = 1. For DMU E h&E ¼ ð83:3%Þ, and P P
s:t: ur y rj " uo " vi xij 6 0; j ¼ 1; . . . ; n
r i
ur P e; vi P e; 8i; r
INPUT A B C D E F G
BUDGET (X1 ) 3 5 8 12 6 8 10 uo unrestricted in sign:
COND (X2 ) 90 70 55 50 84 80 60
ð2:4Þ
Efficiency of point E = OB
OE
10 10
E Efficiency of point F = OK E
A F A
OF F
8 8
G G
B B
6 K 6
X2
X2

C C
4 D 4 D

2 2

0
2 4 6 8 10 12 14 2 4 6 8 10 12 14
X1 X1

Fig. 2. A two input illustration of the DEA projection. Fig. 20 . Impact of assurance region restrictions.
4 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

10
It is noted that (2.6) differs from (2.3) in that P
it has the
E additional convexity constraint on the kj, namely j kj ¼ 1.
A F
8
In reference to Fig. 4, that portion of the frontier from
point 1 up to (but not including) point 2, constitutes the
G
6
B increasing returns to scale portion of the frontier; point 2
K H
is experiencing constant returns to scale; all points on the
X2

C
4 D H* frontier to the right of 2 (i.e., the segments from 2 to 3
and from 3 to 4) make up the decreasing returns to scale
2 portion of the frontier. As with the CRS model, a DMUo
is BCC-efficient in the VRS sense if there exists a solution
0 to (2.6) such that h&o ¼ 1 and all slacks s"&i ; sr
þ&
are zero in
2 4 6 8 10 12 14
value. Clearly, any CCR-efficient DMU is also BCC-
X1
efficient.
Fig. 3. DEA projection for an improperly enveloped DMU. The returns to scale (RTS) classification of DMUs has
been the subject of study by numerous authors, including
Banker (1984) (using the most productive scale size concept
and letting the sum of lambda values dictate the RTS),
10 Banker et al. (1984) (using the free variable in (2.5)), and
Färe et al. (1994), (applying their scale efficiency index
8
method). A problem in classifying RTS is the existence of
4 multiple optima, meaning that the classification may be a
3 7 function of the particular solution selected by the optimiza-
6
tion software. Various attempts have been made to provide
Y

2
a more definitive RTS classification assignment for a given
4
DMU, including developing intervals for the various free
5
variables arising from the multiple optima. Zhu and Shen
2 1
(1995) suggest a remedy for the CCR RTS method under
6 multiple optima. Seiford and Zhu (1997, 1999b) review
the various methods and suggest computationally simple
2 4 6 8 10 12 14
X methods to characterize RTS, and to circumvent the need
for exploring all alternate optimal solutions.
Fig. 4. The variable returns to scale Frontier.

2.3. The additive model


The linear programming equivalent of (2.4) is
The previous two efficiency models are radial projection
P constructs. Specifically, in the input-oriented case, inputs
e&o ¼ max lr y ro " lo
r
P are proportionally reduced while outputs remain fixed.
s:t: ti xio ¼ 1 (For the output-oriented case, outputs are proportionally
i
P P increased while inputs are held constant.) Charnes et al.
lr y rj " lo " ti xij 6 0; j ¼ 1; . . . ; n (1985b) introduced the additive or Pareto–Koopmans
r i
(PK) model which, to an extent, combines both orienta-
lr P e; ti P e; 8i; r; lo unrestricted:
tions. Fig. 5 illustrates this idea wherein any direction in
ð2:5Þ the quadrant formed by B–A–C is permitted.

the dual for which is given by


C
10
! "
P P þ
min ho " e s"
i þ sr 8 4 (9,8)
i r B 3 (6,7)
P A
7 (10,7)
6
s:t: kj xij þ s"i ¼ ho xio ; i ¼ 1; . . . ; m
Y

j 2 (3,5)
P 4
kj y ro " sþ
r ¼ y ro ; r ¼ 1; . . . ; s: ð2:6Þ 5 (5,3)
j 2 1 (2,2)
P 6 (4,1)
kj ¼ 1
j
2 4 6 8 10 12 14
kj ; s "
i ; sr P 0
þ
8i; r; j X

ho unrestricted: Fig. 5. Additive model projection (constant returns to scale).


W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 5

There are several versions of the additive model, the While the non-linearity of Ro poses a computational
most basic being given by the linear optimization problem inconvenience, the resulting measure 1 " Ro does possess
shown as (2.7). The convexity condition on the kj variables the property of being on the unit scale [0,1], hence serving
implies that we are using the VRS technology. The frontier as a legitimate efficiency score.
generated by model (2.7) is identical to that arising from Tone (2001) introduced the so-called slacks-based mea-
the corresponding VRS structure (2.6), hence a DMU is sure (SBM) which is invariant to the units of measurement
additive-efficient or PK efficient (all slacks equal to zero and is monotone increasing in each input and output slack.
at the optimum in (2.7)) if and only if it is VRS-efficient. The SBM is derived from the solution of the fractional pro-
Clearly, the CRS production possibility set can be used gramming problem
as well (and is the one illustrated in Fig. 5). 1
P"
1"m si xio
P " P þ min p ¼ 1þ1 Pi sþ =y
P o ¼ max si þ sr s r ro ð2:9Þ
i r r
P
s:t: kj xij þ s"i ¼ xio ; i ¼ 1; . . . ; m Subject to the constraints of ð2:7Þ:
j
P Clearly, 0 6 p 6 1, and is therefore a legitimate PK effi-
kj y rj " sþ
r ¼ y ro ; r ¼ 1; . . . ; s ð2:7Þ
j ciency score in the spirit of the CCR and BCC models. It is
P shown in Tone (2001) that (2.9) can be transformed into a
kj ¼ 1
j linear programming problem.
kj ; s "
i ; sr P 0;
þ
8j; i; r:
2.5. The Russell measure
Since the various inputs and outputs may be measured
in non-commensurate units, (Russell, 1988), it may not The Russell measure model, named by Färe and Lovell
be practical in certain contexts to use the simple sum of (1978), and later revisited by Pastor et al. (1999) (referring
slacks as the objective in (2.7). Moreover, model (2.7) does to it as the enhanced Russell measure), is equivalent to
not provide for an actual measure of inefficiency as in the Tone’s SBM, as discussed in Cooper et al. (2006). Specifi-
case for the BCC and CCR models. To overcome this latter cally, the model is
problem, Charnes et al. (1985b) proposed the use of Qo, # $
P P
where Ro ¼ min ðhi =mÞ= ður =sÞ
! i r
P
X
"
X
þ s:t: kj xij 6 hi xio ; i ¼ 1; . . . ; m
Qo ¼ d si =xio þ sr =y ro j
i r P
kj y rj P ur y ro ; r ¼ 1; . . . ; s
j
subject to the constraints as in (2.7). A suggested value for P
kj ¼ 1
d was 1/(m + s). The division of the s" i and sr by xio and
þ
j
yro, respectively, is intended to render these slacks units kj P 0; 0 6 hi 6 1; ur P 1; all i; j; r:
invariant (i.e., commensurate), while multiplying by d con-
ð2:10Þ
trols the overall scale. In order to maintain consistency
with the sense of efficiency in the CCR and BCC models,
Sueyoshi (1990) offers 1 " Qo as such a measure. The prob- 2.6. Other non-radial models
lem, as acknowledged in a later paper by Chang and Sue-
yoshi (1991), is that 0 6 1 " Qo 6 1 may not necessarily There are several other non-radial models. One of these
hold, and may in fact be negative. is the RAM (range adjusted measure) model of Cooper
et al. (1999a), and is similar to the additive model with
the additional feature that the score lies on the [0, 1] scale.
2.4. Slacks-based measures There are also non-radial models employed as a second
stage in a two-stage efficiency analysis, after a projection
To address the above shortcomings in the additive point has been identified for a given DMU. See Tone
model, Green et al. (1997) propose as a measure of (2001), Cooper et al. (2001), Portela and Thanassoulis
efficiency (2007), Portela et al. (2003), and others.
" #
1 X " X
Ro ¼ s =xio þ sþ þ
r =ðy ro þ sr Þ ;
2.7. Alternative views
sþr i i r
2.7.1. FDH – The free disposal Hull model
and recommend solving the problem A production possibility set (PPS) or reference technol-
ogy can be thought of as a declaration of the totality of
max Ro production activities that might plausibly have been
ð2:8Þ
Subject to the constraints of ð2:7Þ: observed on the evidence of the activities actually observed.
6 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

DEA uses the frontier of the PPS, defined in terms of In a game sense, suppose one player DMUd is given an effi-
observed activities deemed efficient, and specific linear/con- ciency score ad, and that another player DMUj then tries to
vex combinations thereof, to evaluate the observed activi- maximize its own efficiency, given that ad cannot be
ties. Deprins et al. (1984) and Tulkens (1993) take an decreased. The authors present an algorithm that continu-
alternate view, whereby the assumption is that only the ally updates the ad, arriving at a final set of scores that are,
observed DMUs make up the frontier, not linear or convex in a competitive sense, best for the set of DMUs.
combinations of those observed units.
The model that will generate this frontier is simply the 2.8. Least distance projections
VRS model of Banker et al. (1984), but with the additional
restriction that the kj 2 {0, 1}. While Thrall (1999) chal- A number of authors have examined the problem of
lenged this concept, from an economic theory perspective, deriving the least distance projection to the efficient fron-
this was strongly rebutted by Cherchye et al. (2000), and tier; note that this is the opposite criterion to that of the
FDH remains an attractive, but potentially underutilized additive model that searches for the greatest distance. Frei
approach to efficiency measurement. See also Green and and Harker (1999) proposed using the Euclidean norm to
Cook (2004). define the closest point. Charnes et al. (1992, 1996) and
Briec (1999) obtain the minimum city block distance to
2.7.2. Cross efficiency the weak efficient frontier. Gonzalez and Alvarez (2001)
The cross efficiency score of a given DMU is obtained minimize input contractions, while Portela et al. (2003),
by computing for that DMU the set of n technical efficiency Cherchye and Van Puyenbroeck (2001), approach this by
scores (using the n sets of optimal weights corresponding to identifying all the efficient facets. In a recent paper Apari-
the n DMUs), and then averaging those scores. Thus, cross cio et al. (2007) present a set of models for obtaining least
efficiency goes beyond pure self-evaluation inherent in con- distance projections.
ventional DEA analysis, and combines this with the other
(n " 1) scores arising from the optimal peer multipliers. 2.9. Invariance to data alterations
This approach was originated by Sexton et al. (1986),
and was further investigated by Doyle and Green (1994), Either out of necessity or convenience, the modeler is
and others. Cross efficiency provides an efficiency ordering sometimes called upon to alter or transform the data to
among all the DMUs to differentiate between good and be used in a DEA analysis. For example, it may be more
poor performers. It eliminates the need for incorporating convenient for scale purposes to represent a resource in
additional weight restrictions on multipliers, thereby avoid- thousands of dollars rather than in dollars. If certain profit
ing potentially unrealistic weighting schemes (Anderson figures can take negative values it may be desirable to
et al., 2002). One can find many uses of cross efficiency translate the data by adding a fixed number to the value
for example, R&D project selection (Oral et al., 1991), of that variable for each DMU (thereby rendering all val-
preference voting (Doyle et al., 1996) and others. As ues positive). An important consideration is whether or
pointed out by Doyle and Green (1994), the non-unique- not such alterations made to original data, influence the
ness of the DEA optimal weights possibly reduces the use- outcomes arising from the application of the various effi-
fulness of cross efficiency. To combat this problem, those ciency measurement models discussed above. This question
authors have proposed various secondary goals such as of invariance has been a subject of importance in the DEA
given by the aggressive and benevolent models. Liang literature, and is discussed, for example, in Ali and Seiford
et al. (2008) further improve on the idea of the cross effi- (1990), Thrall (1996), Pastor (1996) and Cooper et al.
ciency score, using game theoretic constructs. To imple- (2006).
ment their idea, one views DMUs as players in a game, In the case of a given factor (e.g., xij), two specific forms
and defines the efficiency score as that arising from (2.1). of data alteration are of particular significance:

Table 2.1
DEA models and their respective invariance properties (adapted with permission from Cooper et al. (2006), p. 105)
Model CCR-I CCR-O BCC-I BCC-O ADD SBM
Data X Semi-p Semi-p Semi-p Free Free Semi-p
Y Free Free Free Semi-p Free Free
Trans. X No No No Yes Yesa No
Invariance "Y No No Yes No Yesa No
Units invariance Yes Yes Yes Yes No Yes
h* [0, 1] [0, 1] (0, 1] (0, 1] No [0, 1]
Tech. or mix Tech. Tech. Tech. Tech. Mix Mix
Returns to scale CRS CRS VRS VRS C(V)RSb C(V)RS
a
The Additive model is translation invariant only when the convexity constraint is added.
b
C(V)RS means Constant or Variable returns to scale according to whether or not the convexity constraint is included.
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 7

(1) Scaling using a common multiplier a, i.e., transform- impacts later stages. Here, intermediate products
ing xij to ^xij ¼ axij , are accounted for, meaning that the outputs of one
(2) Translation using a common coefficient a, transform- stage become inputs to a later stage.
ing xij to !xij ¼ a þ xij . (3) Technology adoption: This model allows one, for
example, to examine production on different proces-
If a data transformation of the form (1) is undertaken, sors (e.g. machines). Inputs are allocated among the
and if the outcomes from a model do not change from what processors to allow one to determine which technol-
they would have been under the original data, the model in ogy to adopt.
question is said to possess the property of units invariance.
If outcomes are not affected by translation of the data (#2), 3.1.2. Supply chains
then the model exhibits translation invariance. Several DEA-based approaches have been used to
Table 2.1 adapted from Cooper et al. (2006) provides a examine buyer–supplier supply chain settings, leading to
summary of the extent to which these properties hold in the efficiency evaluation. The important issue is that of deriv-
various models. ing a measure of overall efficiency as opposed to looking
only at the efficiencies of individual members of the chain.
3. Multilevel models Therein, of course, lies the difficulty in that a recommended
efficiency improvement for one member of the chain can
The models of the previous section generally pertain to lead to a decrease in the efficiency of the other members.
single level situations in which we wish to evaluate the effi- Seiford and Zhu (1999c) and Chen and Zhu (2004) provide
ciency status of each member of a given set of decision two approaches in modeling two-stage processes. Zhu
making units at a given point in time. A number of effi- (2003b) presents a DEA-based supply chain model to both
ciency measurement situations can involve having to look measure the overall efficiency, and that of its members.
at what might be regarded as multiple levels. Some of the Fig. 7 captures the type of structure examined by Zhu
model structures for such situations are briefly discussed (2003b).
herein. A number of supply chain approaches due to Liang
et al. (2006) are built on game theoretic constructs. Liang’s
3.1. Multistage/serial models two principal models are (1) a non-cooperative model and
(2) a cooperative model. In the non-cooperative model, he
3.1.1. Network DEA views the seller as the leader and buyer as the follower. In
This body of work originated by Fare and Grosskopf the first stage, one optimizes the leader’s efficiency score
(1996) is built around the concept of sub-technologies and then maximizes (in the second stage) that of the fol-
within the ‘‘black box” of DEA. This approach allows lower, with the constraint that the multipliers used must
one to examine in more detail the inner workings of the be such that the first stage (leader) score remains
production process, potentially leading to a greater under- unchanged. The resulting model is a non-linear parametric
standing of that process. Färe et al present three general programming problem. In the cooperative game model, no
network models: leader – follower assumption is made.

(1) A static network model in which a finite set of sub- 3.2. Multicomponent/parallel models
technologies or activities are connected to form a net-
work. Its main feature is that it allows one to analyze The multilevel settings of the previous subsection are gen-
the allocation of intermediate products. Fig. 6 illus- erally directed toward what can be termed serial processes.
trates the concept. Some situations can involve multiple components that can
(2) Dynamic network model: This structure permits one be regarded as operating in parallel. In Cook et al. (2000) a
to examine a sequence of production technologies study of bank branch performance is discussed but where
where a decision at one stage (e.g. a time period) in each branch activities can be grouped under two headings
– sales activities and service activities. While one could con-
ceivably consider looking at the two sets of activities as
4y involving separate analyses, the complication that arises is
1
P1 that of dividing up the shared inputs such as support staff.
1x
0
3x
3y
1 4y
Cook et al. (2000) instead propose an aggregate efficiency
x 0
0
P3
3
4 y
3y
2x 2
XA YA YB
0
P2 4y Seller Buyer
2 XB

Fig. 6. A network technology (adapted with permission from Färe and Fig. 7. Seller–buyer supply chain (adapted with permission from Zhu
Grosskopf (2000)). (2003b)).
8 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

model that allows one to evaluate both the component level for more realistic multipliers, they proposed imposing a set
efficiencies as well as the overall or aggregate efficiency. of linear restrictions that define a convex cone. Specifically,
Another approach to multicomponent situations in organi- the feasible region for say the input multiplier vector
zations is given by Portela et al. (2007). t = (t1, . . ., tm) is defined to be in the polyhedral convex cone
spanned by a set of k admissible non-negative direction vec-
3.3. Hierarchical/nested models tors (a‘), ‘ = 1, . . ., k. Thus, a feasible t can be expressed as
X
t¼ a‘ a‘ ; with a‘ P 0; 8‘: ð3:1Þ
Some multilevel efficiency measurement situations can

involve hierarchical or nested structures. Cook et al.
(1998), Cook and Green (2005) examine a set of power Let the resulting polyhedral cone be denoted by V, the
plants, wherein each plant is made up of a set of individu- set of all t satisfying (3.1). Letting U be a similar cone
ally operating power units. At one level, it is necessary to defining the set of feasible output multiplier vectors
consider measuring the efficiency of each power unit rela- l = (l1, . . ., ls), the CCR cone ratio model is then given by
tive to the full set of power units across all plants; here, P
max lr y ro
the power units are the DMUs. At the same time, at the Pr
next level up in the hierarchy, one can evaluate the effi- s:t: ti xio ¼ 1
ciency of each plant or group of units, against all other P i
P ð3:2Þ
plants. There can be inputs and outputs at one level that lr y rj " ti xij 6 0; j ¼ 1; . . . ; n
r i
are not part of the analysis at another level. Cook et al.
teV ; leU :
(1990) give another example of such a structure in consid-
ering the efficiency of highway maintenance crews or There is, of course, the corresponding dual of this model
patrols. Here, patrols (level 1 DMUs) are grouped under which can be found in Charnes et al. (1990).
districts (level 2 DMUs), which are further grouped under An important generalization in (3.2) is that given by
the different regions (level 3 DMUs) within the province or Thompson et al. (1995), wherein the spaces for l and t
state. are connected by way of ‘‘linked” cones. Those authors
apply their general cone ratio model to a problem involving
4. Multiplier restrictions Illinois coal mining.

Within the DEA literature, there is a body of research 4.3. Assurance regions
aimed at addressing the problem of unacceptable weighting
schemes. We refer to the collection of methodologies here A special case of the cone ratio idea is what Thompson
as involving multiplier restrictions, although as discussed et al. (1986, 1990) termed an assurance region (AR). The
below, some of the tools do this in an indirect rather than AR concept was developed to prohibit large differences in
direct way (e.g., via constrained facet analysis). the values of multipliers, and imposes constraints on the
relative magnitudes of those multipliers. For example,
4.1. Absolute multiplier restrictions one might add a constraint on the ratio of multipliers for
a pair of inputs 1 and 2, in the form:
Some of the earliest work here involved imposing abso- L12 6 t2 =t1 6 U 12 ; ð3:3Þ
lute lower and upper bounds on input and output multipli-
ers, that is where L12, U12 are lower and upper bounds, respectively,
on the ratio t2/t1.
P 1r 6 lr 6 P 2r ; Q1i 6 ti 6 Q2i :
Generally, the imposition of multiplier restrictions,
Roll et al. (1991) examined the use of such absolute lim- whether it is through absolute bounds, cone ratio con-
its in the context of evaluating highway maintenance units straints, or AR constraints, leads to a worsening of effi-
(see also Cook et al. (1990)). In an earlier paper by Dyson ciency scores. Referring again to Fig. 2, a redrawn version
and Thanassoulis (1988), a similar approach was proposed. is shown in Fig. 20 . The slopes of the facets in this figure
Implementing absolute bounds can prove difficult, in that are related to the relative values of t1 and t2. When restric-
appropriate levels for the Pkr, Qki are very much a function tions say of the form (3.3) are imposed on the DEA model,
of the scales used for the variables. Only after running an certain slopes may no longer be admissible. This has the
‘‘unbound” model will the range of possible multiplier val- affect of ‘‘bending” the frontier out (giving it less curvature),
ues be known in relation to the scales adopted. as illustrated by the dashed line in Fig. 20 . Thus, DMUs that
are efficient in an unrestricted setting, e.g. DMU D in Fig. 2,
4.2. Cone ratio restrictions may be rendered inefficient as in Fig. 20 . In their recent
book, Cooper et al. (2006), p. 172 provide a useful alterna-
Charnes et al. (1990), in their study of large industrial tive pictorial view of this idea in the multiplier space.
banks, recognized that undesirable weighting schemes are Many applications of the AR form of the various DEA
a natural outcome in many DEA applications. To provide models can be found in the literature. These help to
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 9

enlighten the reader on the practicalities of deriving appro- As an example, the method of Green et al. (1996) is a
priate bounds. In some cases, these bounds are presented three-stage procedure. In stage 1, the standard CCR model
by the author as being ‘‘illustrative” only, and one is left is solved to determine the full set of efficient units E [ E0 .
with the question as to how the ‘‘right” values could be Note that E is composed of extreme efficient DMUs (cor-
derived. In other circumstances, bounds may fall naturally ners of facets) while E0 are the non-extreme (interior to fac-
out of the available data. In Cook et al. (2000), for exam- ets) efficient units. Stage 2 then follows Charnes et al.
ple, in studying bank branch efficiency, outputs are various (1986) to partition this full set of units into the two subsets.
classes of branch transactions (deposits, withdrawals, etc.), Finally, in the third stage one solves the mixed integer pro-
and their multipliers are transaction processing times, in gramming problem,
minutes or hours. While exact times are not given, since P
max eo ¼ lr y ro
these can vary from branch to branch, from one employee r
P P
to another etc., ranges with established lower and upper s:t: lr y re " ti xie 6 0; eeE;
limits are available. It is these ranges that lead to concrete P
r
P
i

limits of the form illustrated in (3.3). lr y re " ti xie þ Mze P 0; eeE;


r i
Various generalizations on the AR concept appear in P ð4:1Þ
the literature. Allen et al. (1997) and Thanassoulis et al. ti xio ¼ 1;
i
(1998) present a global type of restriction on the weighted P
ze ¼j E j "ðm þ s " 1Þ;
values for each DMU. For example, if input #1 is to rep- eeE
resent at least 10%, and at most 20% of the total weighted lr P 0r ¼ 1; . . . ; s; ti P 0 i ¼ 1; . . . ; m;
input for P any DMU, then constraints such as ze ef0; 1geeE; M ' 0:
:10 6 t1 x1j =ð i ti xij Þ 6 :20 would be appropriate. Note
that separate constraints for each DMU would result from This model guarantees that exactly m + s " 1 of the
this. m + s constraints will be satisfied as equalities. This means
Cook and Zhu (2008) present a context-dependent that at most one slack variable will be positive, hence all lr,
assurance region DEA (CAR-DEA) model which provides ti variables will be forced to be strictly positive, meaning
for restrictions of the form (3.3) that may differ from one that the DMU in question is projected against a full-dimen-
subset of DMUs to another. Thus, if there are K groups sional facet. The method is illustrated by Fig. 8.
of DMUs, and if we wish to impose restrictions on say out-
puts of the form 4.5. Generating unobserved DMUs
ckrL 6 l1 =lr 6 ckrU ; k ¼ 1; . . . ; K;
It is noted that in extending facets to eliminate weakly
it is necessary to account for potential inconsistency (infea- efficient projections, new ‘‘unobserved” DMUs will be gen-
sibility) if all K sets are imposed simultaneously. If we as- erated. In Fig. 8, points where the rays from the origin to
sume that output #1 is used as the numeraire against improperly enveloped DMUs (P4 and P7) intersect the
which other outputs are compared, the approach taken extended facet, define such DMUs. Thanassoulis and Allen
by Cook and Zhu (2007) is to replace the set of K groups (1998) present a formal procedure for producing new unob-
of AR restrictions by a single set of AR constraints, appli- served DMUs, thus creating the means for extending
cable to all K classes of DMUs. observed facets. Their approach amounts to obtaining

4.4. Facet models

Significant work has been done relating to facet exten-


sion and facet identification, to address the inherent prob-
lem involving the occurrence of zero weights (or e-weights)
in the multiplier models, as indicated above. This is equiv-
alent to projection to weakly efficient facets or non-full-
dimensional facets. Bessent et al. (1988) were the first to
introduce the idea of constrained facet analysis (CFA). In
the event that a given unit is projected to a weakly efficient
facet, CFA involves extending a selected Pareto-efficient
(full-dimensional) facet, and then projecting the given
DMU on to that extended facet. Lang et al. (1995)
improved on this idea by adopting a two-stage approach
which ultimately amounts to finding the ‘‘closest” full-
dimensional facet to which to project the DMU in ques-
tion. Other similar approaches have been suggested by Fig. 8. Facet extension in DEA (adapted with permission from Green
Green et al. (1996), and by Olesen and Petersen (1996). et al. (1996)).
10 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

information from the decision maker as to his/her esti- It is noted that for non-discretionary i 2 ND, ti P 0
mates of the tradeoff between pairs of factors. rather than ti P e. Correspondingly, in (4.1) it is only those
Another line of research in a similar direction is due to input slacks related to discretionary factors that appear in
Golany and Roll (1994) who introduced the idea of incor- the objective function. See Cooper et al. (2006), pp. 210–
porating standard DMUs into the DEA structure. Cook 211 for a full discussion of this.
and Zhu (2005) extended this work by way of incorporat- More recently Ruggiero (1996) has pointed out that in
ing production standards (as opposed to standard DMUs). certain cases, the Banker and Morey model can over esti-
mate technical efficiency by allowing production impossi-
5. Special considerations regarding the status of variables bilities into the referent set. Ruggiero’s approach restricts
weights to zero for production possibilities with higher lev-
As originally conceived, the DEA model involves the els of the non-discretionary inputs, and as a result, produc-
s m
generation of s outputs fy rj gr¼1 using m inputs fxij gi¼1 . In tion impossibilities are appropriately excluded from the
a structure such as that in (2.3) all inputs are projected referent set. Recent simulation analyses in Syrjanen
radially to the efficient frontier, all variables are assumed (2004) and Muñiz et al. (2006) demonstrate that Ruggiero’s
to be quantitative, and the collection of DMUs under eval- method performs relatively well in evaluating efficiency.
uation is assumed to form a relatively homogeneous group See also Ruggiero (1998, 2007).
(all are comparable to one another). As new applications of
DEA have arisen, it has become necessary to expand the 5.2. Non-controllable variables
original model structures to accommodate new situations,
hence relaxing a number of those original assumptions. In the non-discretionary variable model (4.1) we note
In this section we discuss some of the more important that the slacks s"
i ; i 2 ND are permitted to be strictly posi-
developments in this regard. tive. This means that at the optimum, a DMUo may end up
being compared to a linear combination of peers wherein
5.1. Non-discretionary variables the value of a non-discretionary variable for that combina-
tion is less than its value in DMUo. In certain situations
In many applications of DEA, certain of the input vari- there can be inputs (and outputs) whose values must
ables may not be under the direct control of management. remain fixed, and can only be compared to DMUs whose
In a DEA analysis of bank branch efficiency, for example, linear combinations are at the same levels as those fixed
an input variable such as fixed expenditures (rent, utilities, factors. Such variables have been labeled as non-controlla-
etc.) could not be proportionally reduced as would be the ble. Let N1, N2 denote non-controllable inputs and outputs
case for variable expenditures such as staff. Thus, it is respectively, and N 1 ; N 2 denote regular (controllable)
important to identify those variables that are discretionary inputs and outputs, respectively. The non-controllable var-
(staff) versus non-discretionary (fixed costs). iable model is then:
!
Banker and Morey (1986a) introduced the first DEA P " P þ
model that allowed for non-discretionary inputs by modi- min ho " e si þ sr
i2N 1 r2N 2
fying the input constraints to disallow input reduction on P
the fixed factor. Letting D denote the subset of inputs s:t: kj xij þ s"
i ¼ ho xio ; i 2 N 1
j
ie{1, 2, . . ., m} that are discretionary, and ND, the non-dis- P
cretionary inputs, the Banker and Morey model becomes kj xij ¼ xio ; i 2 N 1
j
! " P
P " P þ kj y rj " s"
min ho " e si þ sr r ¼ y ro ; r 2 N 2
i2D r j
P P
s:t: kj xij þ s" i ¼ h o xio ; i 2 D
kj y rj ¼ y ro ; r 2 N 2
j j
P ð4:1Þ
kj xij þ s" i ¼ xio ; i 2 ND
kj P 0; 8j ; s"
i P 0; i 2 N 1 ; sr P 0; r 2 N 2 :
þ

j
P ð4:3Þ
kj y rj " sþr ¼ y ro ; r ¼ 1; . . . ; s
j

kj ; s "
i ; sr P 0;
þ
8i; r; j; h0 unrestricted: 5.3. Categorical variables (categorical DMUs)
The corresponding dual problem is
P P There are situations in which DMUs fall into natural
max lr y ro " ti xio categories. An example would be when we are evaluating
r i2ND a set of retail establishments wherein different levels of
P P P
s:t: lr y rj " ti xij " ti xij 6 0; j ¼ 1; . . . n competition exist from one establishment to another. To
r i2D i2ND
provide a fair evaluation of each DMU, it can be argued
lr P e 8r; ti P e i 2 D; ti P 0; i 2 ND:
that a DMU in any given category should be compared
ð4:2Þ only to those other units in the same or less-advantaged
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 11

categories. A DMU under heavy competition would be for rank data is equivalent to the Cook et al. (1993, 1996)
unfairly penalized if compared to units in significantly methodology.
more favorable competitive environments.
Banker and Morey (1986b) presented the first model to 5.5. Modelling undesirable factors
deal with such situations. Their model saw the introduction
of categorical inputs xij, i = m0 + 1, . . ., m in addition to reg- The usual variables in DEA are such that more is better
ular controllable inputs xij, i = 1, . . . ,m0 . In the case where for outputs, and less is better for inputs. In some situations,
a categorical variable xij is not controllable by manage- however, a factor can behave opposite to this; consider, for
ment, their treatment involves replacing that variable by example, air pollution as one of the outputs from power
ðkÞ
a set of binary variables d ij ; k ¼ 1; . . . ; K i , with Ki being plants. A number of authors have addressed this issue, in
the number of categories. Arranging the categories in particular, Scheel (2001), Seiford and Zhu (2002), Fare
ðkÞ
decreasing order of favorability one sets d ij ¼ 1; k 6 k o , and Grosskopf (2004) and Hua and Bin (2007).
ðkÞ
and d ij ¼ 0; k > k o , if DMU i is in category
P ko. The usual Approaches range from linear transformations of the origi-
non-discretionary input constraint kj xij þ s"
jP i ¼ xio is nal data to the use of directional distance functions.
ðkÞ ðkÞ
then replaced by a set of Ki constraints j kj d ij 6 d io .
In this way, a DMU is compared only to other DMUs in 5.6. Flexible measures – Classifying inputs and outputs
the same or less favorable categories.
In the case of a controllable categorical variable, Banker In the standard applications of DEA it is assumed that
and Morey (1986b) presented a mixed integer LP formula- the input versus output status of each performance mea-
tion. However, as pointed out by Kamakura (1988), the sure related to the DMUs is known. In some situations,
Banker and Morey model was flawed due to a mis-specified however, the role of a variable may be flexible. Consider
constraint, and a revised model was given. The Kamakura the example of measuring power plant efficiency as dis-
model presented its own difficulties, however, and these cussed in Cook et al. (1998) and Cook and Green (2005),
were addressed in a later paper by Rousseau and Semple where one of the outputs is a function of what is termed
(1993). The latter authors dealt with both input and output ‘outages.’ This measure is designed to represent the per-
categorical variables, and were able to reduce the integer centage of time that a plant is available to be in operation,
problem to a more conventional LP approach. and can, therefore, be viewed as a type of accomplishment
(output) on the part of management. At the same time, it is
5.4. Ordinal variables/data reasonable to view this variable as an environmental input
that has a direct influence on plant performance.
DEA analyses are generally based on a set of quantita- The incorporation of such flexible variables into the
tive output and input factors. In certain settings, however, DEA structure present a problem in that there is a need
qualitative variables may be present. For a factor such as to make allowance for them on both the input and output
management competence, for example, one may be able sides of the model. Beasley (1995) dealt with a similar prob-
to provide only a ranking of the DMUs from best to worst. lem, and presented a formulation for a situation where the
The capability of providing a more precise, quantitative variable ‘research funding’ was counted as both an input
measure reflecting such a factor is often not feasible. In and an output in evaluating universities in the UK. Cook
some situations such factors can be ‘‘quantified”, but often et al. (2006) later showed that Beasley’s model was flawed
such quantification is superficially forced as a modeling and gave an alternative, corrected version. The flexible var-
convenience. iable problem is not one of counting the influence in both
The original DEA models incorporating rank order places, but rather counting it in the most appropriate place.
variables are due to Cook et al. (1993, 1996). To capture Suppose there exist L flexible measures, whose input/
such rank order variables within the DEA structure, the output status we wish to determine. Denote the values
authors proceed as follows. For such an output r, for assumed by these measures as w‘j for DMUj (‘ = 1,..,L).
example, assume a DMU k can be assigned to one of L For each measure ‘, introduce the binary variables
rank positions (L 6 n). One can view the assignment of d‘ 2 {0, 1}, where d‘ = 1 designates that factor ‘is an out-
DMUk to position d on ordinal output r, as having put, and d‘ = 0 designates it as an input. Let c‘ be the
assigned that k an output value or worth yr(d). weight for each measure ‘. One approach, presented by
More recently, Cooper et al. (1999b) examined the DEA Cook and Zhu (2007), to deciding on the appropriate sta-
structure in the presence of what they termed imprecise tus of a flexible variable, is to view the problem from the
data (IDEA). Zhu (2003a) and others have extended the perspective of the individual DMU. Specifically, adopt
Cooper et al. (1999a) model. While various forms of impre- the position that for any given DMU the status should
cision are looked at under the umbrella of IDEA, the prin- be that which maximizes that DMU’s efficiency score.
cipal focus is on rank order data. In a recent paper by Cook and Zhu (2007) establish the following mathematical
Cook and Zhu (2006), rank order variables and IDEA programming model (4.4). Here each DMU is allowed to
are revisited, and both discrete and continuous projection select the status of each variable that will credit it with
models are discussed. It is shown that the IDEA approach the highest possible score. The authors then suggest taking
12 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

a majority rule position, giving each variable the status et al. (1992). It examined the following problem relating
(input or output) preferred by the majority of the DMUs. to the additive model, and involving an inefficient DMU:
An alternate model is given that views the problem from
the perspective of the aggregate or composite of all the max d
P "
DMUs. The status of the flexible variable is that which s:t: : kj xij þ s"
i ¼ xio " dd i ; i ¼ 1; . . . ; m
j
maximizes the efficiency score of that composite DMU P
kj y rj " sþ þ ð5:1Þ
r ¼ y ro þ dd r ; r ¼ 1; . . . ; s
P P
ly þ d cw j
P r r roP r ‘ ‘ ‘o
max m x þ ð1"d P
i io ‘ Þc‘ w‘o
Pr Pr kj ¼ 1;
lr y rj þ d ‘ c‘ w‘j ð4:4Þ j
s:t: P mrx þP ð1"d r
Þc w
6 1 j ¼ 1; 2; . . . ; n
i ij ‘ ‘ ‘j
r r
d" þ
i ; d r ¼ 1 or 0; depending on whether a dimension is to be
d ‘ 2 f0; 1g; 8‘ lr ; mi ; c‘ P 0; 8r; i; ‘:
included or excluded from the perturbation d. This model
looks at the maximum improvement in the status of an
6. Data variation inefficient DMU (increase in outputs/decrease in inputs)
before it is rendered efficient. For the case of an efficient
In the methodology developments discussed above, it is DMU, Charnes et al examine the model:
assumed that the data values are fixed and known. Signif- min d
icant literature has, however, been dedicated to situations P "
s:t: : kj xij þ s"
i ¼ xio " dd i ; i ¼ 1; . . . ; m
wherein the data may exhibit ‘‘variation” or ‘‘uncertainty”; j6¼o
we briefly discuss various lines of research relating to such P þ ð5:2Þ
kj y rj " sþ
r ¼ y ro þ dd r ; r ¼ 1; . . . ; s
situations. j6¼o
P
kj ¼ 1:
j6¼o
6.1. Sensitivity analysis
Here, we remove that DMUo under evaluation, from the
This body of work addresses the question ‘‘if certain convexity considerations, much in the spirit of super-effi-
parameters/data within a model such as (2.1) and (2.2), ciency (Anderson and Petersen 1993).
or (2.3) are altered, how does this influence the efficiency These problems have been revisited by various authors
status of DMUs?” Several directions have been taken here. including Cooper et al. (2001), Seiford and Zhu (1998a,
1999a), and others.
6.1.1. Problem size issues
Various studies have examined the sensitivity of DMU 6.1.4. Super-efficiency
efficiency to the addition of DMUs to or extraction of An important problem in the DEA literature is that of
DMUs from the analysis. See Wilson (1995). A number ranking those DMUs deemed efficient by the DEA model,
of simulation studies (e.g. Banker et al., 1996) have related all of which have a score of unity. One approach to the rank-
to the impact on the efficiency generated, for varying num- ing problem is that provided by the super efficiency model of
bers of DMUs and of inputs and outputs. Andersen and Petersen (1993), as mentioned above. See also
Banker et al. (1989). The super-efficiency model involves
executing the standard DEA models (CRS or VRS), but
6.1.2. Direct data perturbations under the assumption that the DMU being evaluated is
Charnes and Neralic (1992) and Neralic (1997, 2004) excluded from the reference set. In the input-oriented case,
addressed the subject of the impact on efficiency of pertur- the model provides a measure of the proportional increase
bations to the values of inputs and outputs. We refer to this in the inputs for a DMU that could take place without
as involving direct data perturbation. That is, the research destroying the ‘‘efficient” status of that DMU relative to
pertains to the derivation of ranges of variation in data the frontier created by the remaining DMUs.
over which matrix inversion in the simplex algorithm is The super-efficiency score can also be thought of as a
unaffected. measure of stability. That is, if input data for instance, is
subject to error or change over time, the super-efficiency
6.1.3. Indirect data perturbation – Radius of stability score provides a means of evaluating the extent to which
An alternative to the above direct perturbation such changes could occur without violating that DMU’s
approaches, are what we should call indirect approaches. status as an efficient unit. Hence, the score yields a measure
A number of studies have focused on the question of a of stability.
maximum radius that will maintain the efficiency status. In addition to being a tool for ranking, the super-effi-
That is, ‘‘for a given DMU, what is the maximum allow- ciency concept has been used in other situations; for exam-
able increase in outputs or decrease in inputs such that ple, two-person ratio efficiency games (Rousseau and
its efficiency status (efficient or inefficient) is unaltered?” Semple, 1995), and acceptance decision rules (Seiford and
The first work in this direction was initiated by Charnes Zhu, 1998b), among others.
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 13

It is well known that under certain conditions, the super- failing to meet the constraints. The value bo is referred to
efficiency DEA model may not have feasible solutions for as an ‘‘aspiration level,” and specifies the desired efficiency
efficient DMUs (see, e.g., Zhu, 1996; Seiford and Zhu, level for DMUo.
1998a,b; Dulá and Hickman, 1997; Seiford and Zhu,
1999a,b). As shown in Seiford and Zhu (1999a,b), infeasi- 6.3. Time series data – Window analysis
bility must occur in the case of the variable returns to scale
(VRS) super-efficiency model. Although infeasibility Time series data represent an important format in which
implies a form of stability in DEA sensitivity analysis (Sei- ‘‘data variability” occurs. Specifically, in many applica-
ford and Zhu, 1998b), limited efforts have been made to tions, data for a DMU are available at different points in
provide numerical super-efficiency scores for those efficient time, for example, in each of a set of quarters over several
DMUs for which feasible solutions are unavailable in the years. While one can perform static DEA analyses on the
VRS super-efficiency model. Lovell and Rouse (2003) data for each quarter, and then apply standard regression
developed a standard DEA approach to the super-effi- concepts to study efficiency changes, such an approach
ciency model by scaling up the inputs (scaling down the often proves rather unsatisfactory, generally failing to cap-
outputs) of a DMU under evaluation. As a result, a feasible ture important interactions from period to period. Window
solution can be found for efficient DMUs that do not have analysis as introduced by Charnes et al. (1985a) is a model
such (feasible) solutions in the standard VRS super-effi- structure that tries to bring a more robust treatment to effi-
ciency model. The super-efficiency scores for all efficient ciency changes in a time series sense. The idea is to choose a
DMUs without feasible solutions are then equal to the ‘‘window” of k observations for each DMU (say k = 4
user-defined scaling factor. Chen (2004, 2005) suggests quarters), and treat these as if they represented k different
using both the input- and output-oriented VRS super-effi- DMUs. Hence, in any analysis, a total of n ( k ‘‘DMUs”
ciency models to quantify the super-efficiency when infeasi- are evaluated; kdifferent scores for each DMU are then cre-
bility occurs. However, Chen’s approach will fail if both ated. One then moves the window by one period (e.g.
the input- and output-oriented VRS super-efficiency mod- instead of quarters Q1 to Q4, one uses Q2 to Q5) and repeats
els are infeasible. the analysis.
Recently, Cook et al. (2008) posed an alternative Window analysis allows the analyst to observe both the
approach to resolving the issue of infeasibility. Unlike the stability of a DMU for any point in time across different
standard input-oriented and output-oriented super-effi- data sets, as well as trends across the k observations for a
ciency models, each of which has a specific orientation DMU, within the same data set. Cooper et al. (2006) dis-
(input or output), this model provides for the minimum cuss a number of weaknesses in conventional window anal-
movement in both directions needed to reach the frontier ysis, one of which is the fact that beginning and ending
generated by the remaining DMUs. Viewed another way, periods are not tested as frequently as is the case for other
in the case of infeasibility, the model derives the minimum periods. Sueyoshi (1992) has attempted to remedy this sit-
change needed to project a data point, classified as an uation by the use of a ‘‘round robin” approach. This pro-
extremity, to a non-extreme position. ceeds by first looking at each DMU in one period, then
two, . . ., etc., up to k periods. This gives a more complete
6.2. Data uncertainty and probability-based models picture of stability and trends, but with the disadvantage
of becoming computationally burdensome as the number
A number of researchers have concentrated on the prob- of combinations grows exponentially.
lem of modeling technical efficiency when the data for the
inputs and outputs are random variables. Thore (1987) 6.4. Time series data – The Malmquist index
and Land et al. (1992, 1994) looked at the application of
chance constrained programming (CCP) to DEA. Cooper The Malmquist index was first suggested by Malmquist
et al. (1996, 2004) demonstrated the use of CCP in the form (1953) as a quantity for use in the analysis of consumption
of a satisficing model. Specifically, the CCR model (2.1) is of inputs. Färe et al. (1994) developed a DEA-based Malm-
replaced by the CCP model quist productivity index which measures productivity
P ! change over time. The index can be decomposed into two
ur ~y ro
components, with one measuring the change in the technol-
max P P
r
P bo
v ~x i io ogy frontier and the other the change in technical
Pi ! efficiency.
ur ~y rj ð5:3Þ
s:t: P P v ~x 6 bj
r
P 1 " aj ; j ¼ 1; . . . ; n To describe the method, let xtij ; y trj denote the input and
i ij output levels for a DMUj at any given point in time t. The
i

ur ; v i P 0 8r; i: Malmquist index calculation requires two single period and


two mixed period measures. The two single period mea-
Here the random variables ~y rj ; ~xij are assumed to have sures can be obtained by using the CRS DEA model. Thus,
known probability distributions, and aj is a scalar in the for period t we solve following CRS DEA model which cal-
unit range [0, 1] that specifies the allowable likelihood of culates the efficiency in time period t, as displayed in (5.4)
14 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

hto ðxto ; y to Þ ¼ min ho sion dates back to Farrell (1957); this was subsequently
P t extended by Aigner and Chu (1968) with their corrected
s:t: kj xj 6 ho xto ;
j ordinary least squares model. Later this approach was pre-
P t ð5:4Þ
kj y j P y to ; sented in a more formalized statistical format by Aigner
j et al. (1977), and has been labeled the ‘‘composed error”
kj P 0; j ¼ 1; . . . ; n approach.
More recently Banker and Maindiratta (1992), Banker
In a similar way, using t + 1 instead of t for the above
(1993) and Banker and Natarasan (2004) approached this
model, we get htþ1 tþ1 tþ1
o ðxo ; y o Þ the technical efficiency score
issue from a DEA perspective. They show that DEA pro-
for DMUo in time period t + 1.
vides a consistent estimator of arbitrary monotone and
The first of the mixed period measures, which is defined
concave production functions when the (one-sided) devia-
as hto ðxotþ1 ; y otþ1 Þ for each DMUo, is computed as the optimal
tions from such a production function are regarded as sto-
value to the following linear programming problem:
chastic variations in technical inefficiency. Convergence is
hto ðxotþ1 ; y otþ1 Þ ¼ min ho ; slow, however, since, as is shown by Korostolev et al.
P t
s:t: kj xj 6 ho xtþ1 (1995), the DEA likelihood estimator in the single output
o ;
j – m input case converges at the rate n"2/(1+m) and no other
P ð5:5Þ
kj y tj P y tþ1
o ;
estimator can converge at a faster rate.
j The above approaches treat only the single output –
kj P 0; j ¼ 1; . . . ; n: multiple input case. Simar and Wilson (1998) turn to
‘‘bootstrap methods” which enable them to deal with the
This model compares xtþ1 o to the frontier at time t. In a sim-
case of multiple outputs and inputs. In this manner, the
ilar way, we can obtain the other mixed period measure,
sensitivity of h*, the efficiency score obtained from the
htþ1 t t t
o ðxo ; y o Þ which compares xo to the frontier at time t + 1.
BCC model, can be tested by repeatedly sampling from
The (input-oriented) Malmquist productivity index can
the original samples. A sampling distribution of h* values
be expressed as
is then obtained from which confidence intervals may be
" #12
derived and statistical tests of significance developed.
hto ðxto ; y to Þ htþ1 t t
o ðxo ; y o Þ
M o ¼ t tþ1 tþ1 tþ1 : In some respects, the output-oriented VRS model is a
ho ðxo ; y o Þ ho ðxotþ1 ; y otþ1 Þ
non-parametric version of the ordinary least squares model
Mo measures the productivity change between periods t of Aigner and Chu (1968). This was alluded to in Banker
and t + 1. Productivity declines if Mo > 1, remains un- and Maindiratta (1992) and Kuosmanen (2006).
changed if Mo = 1 and improves if Mo < 1. The following For a thorough coverage of stochastic frontier analysis,
modification of Mo makes it possible to measure the and other approaches to efficiency evaluation, see Coelli
change of technical efficiency, and the movement of the et al. (1998), Kumbhakar and Lovell (2000).
frontier in terms of a specific DMUo.
7. Conclusions
# tþ1 tþ1 tþ1 tþ1 t t $12
ht ðxt ; y t Þ h ðxo ; y o Þ ho ðxo ; y o Þ
M o ¼ tþ1o o o ) ot tþ1 : This paper has attempted to provide a brief sketch of
ho ðxtþ1
o ; y tþ1 Þ
o
ho ðxo ; y tþ1
o Þ hto ðxto ; y to Þ
some of the important areas of research in DEA that have
The first term on the right hand side measures the mag- emerged over the past three decades. The focus here is on
nitude of technical efficiency change between periods t and those topics that, in the authors’ estimation, have attracted
t t t <
t+1. TECo ¼ htþ1hoðxðxtþ1
o ;y o Þ
tþ1 ¼ 1 indicating that technical effi- the most attention. At the same time it is acknowledged
o o ;y o Þ >
ciency improves, remains the same, or declines. The second that, due to limited space, and possibly to ignorance on
h tþ1 tþ1 tþ1 tþ1 t t i12 our part, many important works in DEA may not have
term FSo ¼ hhot ðxðxtþ1
o ;y o Þ ho ðxo ;y o Þ
t
;y tþ1 Þ h ðxt ;y t Þ
measures the shift in the been highlighted. Some of these topics include the model-
o o o o o o
frontier between periods t and t + 1. A value of FSo greater ing of integer variables, issues of congestion, handling
than unity indicates regress in the frontier technology, a missing data, allocation of fixed inputs across DMUs,
value of FSo less than unity indicates progress in the fron- resource constrained DEA, analysis of composite DMUs,
tier technology, and a value of FSo equal to unity indicates directional derivatives, and the connections relating to
no shift in the frontier technology. DEA and general multiple criteria decision models. In a
number of these situations the topic has received some,
6.5. Stochastic data-statistical inference but not significant attention, and may be a direction for
future work.
Another approach to treating data variations involves
the characterization of the production function by way of Acknowledgment
classical statistical inference methodology. Two lines of
research have emerged around this issue; stochastic frontier The authors wish to thank Professor Robert Dyson,
analysis, and a DEA approach. Stochastic frontier regres- Editor, European Journal of Operational Research, for
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 15

his constructive comments on an earlier version of this Charnes, A., Cooper, W.W., Rhodes, E.L., 1981. Evaluating program and
article. managerial efficiency: An application of data envelopment analysis to
program follow through. Management Science 27, 668–697.
Charnes, A., Clarke, C., Cooper, W.W., Golany, B., 1985a. A develop-
References ment study of DEA in measuring the effect of maintenance units in the
U.S. Air Force. Annals of Operations Research 2, 95–112.
Aigner, D.J., Chu, S.F., 1968. On estimating the industry production Charnes, A., Cooper, W.W., Golany, B., Seiford, L.M., Stutz, J., 1985b.
frontiers. American Economic Review 56, 826–839. Foundations of data envelopment analysis and Pareto–Koopmans
Aigner, D.J., Lovell, C.A.K., Schmidt, P., 1977. Formulation and empirical production functions. Journal of Econometrics 30, 91–
estimation of stochastic frontier production models. Journal of 107.
Econometrics 6, 21–37. Charnes, A., Cooper, W.W., Thrall, R.M., 1986. Classifying and
Ali, A.I., Seiford, L.M., 1990. Translation invariance in data envelopment characterizing inefficiencies in data envelopment analysis. Operations
analysis. Operations Research Letters 9, 403–405. Research Letters 5/3, 105–110.
Allen, R., Athanasopoulos, A., Dyson, R.G., Thanassoulis, E., 1997. Charnes, A., Cooper, W.W., Huang, Z.M., Sun, D.B., 1990. Polyhedral
Weights restrictions and value judgments in data envelopment cone-ratio DEA models with an illustrative application to large
analysis: Evolution, development and future directions. Annals of commercial banks. Journal of Econometrics 46, 73–91.
Operations Research 73, 13–34. Charnes, A., Cooper, W.W., Thrall, R.M., 1991. A structure for
Andersen, P., Petersen, N.C., 1993. A procedure for ranking efficient units classifying and characterizing efficiencies and inefficiencies in DEA.
in DEA. Management Science 39, 1261–1264. Journal of Productivity Analysis 2, 197–237.
Anderson, T.R., Hollingsworth, K.B., Inman, L.B., 2002. The fixed Charnes, A., Haag, S., Jaska, P., Semple, J., 1992. Sensitivity of efficiency
weighting nature of a cross-evaluation model. Journal of Productivity calculations in the additive model of data envelopment analysis.
Analysis 18 (1), 249–255. International Journal of System Sciences 23, 789–798.
Aparicio, J., Ruiz, J., Sirvent, I., 2007. Closest targets and minimum Charnes, A., Rousseau, J., Semple, J., 1996. Sensitivity and stability of
distance to the Pareto-efficient frontier in DEA. Journal of Produc- efficiency classifications in data envelopment analysis. Journal of
tivity Analysis 28, 209–218. Productivity Analysis 7, 5–18.
Banker, R.D., 1984. Estimating most productive scale size using data Chen, Y., 2004. Ranking efficient units in DEA. OMEGA 32, 213–219.
envelopment analysis. European Journal of Operational Research 17, Chen, Y., 2005. Measuring super-efficiency in DEA in the presence of
35–44. infeasibility. European Journal of Operational Research 161, 545–551.
Banker, R.D., 1993. Maximum likelihood, consistency and data envelop- Chen, Y., Zhu, J., 2004. Measuring information technology’s indirect
ment analysis: A statistical foundation. Management Science 39, 1265– impact on firm performance. Information Technology & Management
1273. Journal 5 (1–2), 9–22.
Banker, R.D., Maindiratta, A., 1992. Maximum likelihood estimation of Cherchye, L., Van Puyenbroeck, T., 2001. A comment on multistage DEA
monotone and concave production frontiers. Journal of Productivity methodology. Operational Research Letters 28 (2), 143–149.
Analysis 3, 401–415. Cherchye, L., Kuosmanen, T., Post, T., 2000. What is the economic
Banker, R.D., Morey, R., 1986a. Efficiency analysis for exogenously fixed meaning of FDH? A reply to Thrall. Journal of Productivity Analysis
inputs and outputs. Operations Research 34 (4), 513–521. 13, 263–267.
Banker, R.D., Morey, E.C., 1986b. The use of categorical variables in data Coelli, T., Rao, D.S.P., Battese, G.E., 1998. An Introduction to Efficiency
envelopment analysis. Management Science 32 (12), 1613–1627. and Productivity Analysis. Kluwer Academic Publishers, Boston, MA.
Banker, R.D., Natarasan, R., 2004. Statistical tests based on DEA Cook, W.D., Green, R.H., 2005. Evaluating power plant efficiency: A
efficiency scores. In: Cooper, W.W., Seiford, L.M., Zhu, J. (Eds.), hierarchical model. Computers and Operations Research 32, 813–823.
Handbook on Data Envelopment Analysis. Kluwer Academic Pub- Cook, W.D., Zhu, J., 2005. Building performance standards into DEA
lishers, Norwell, MA (Chapter 11). structures. IIE Transactions 37, 267–275.
Banker, R.D., Charnes, A., Cooper, W.W., 1984. Some models for Cook, W.D., Zhu, J., 2006. Rank order data in DEA: A general
estimating technical and scale inefficiencies in data envelopment framework. European Journal of Operational Research 174, 1021–
analysis. Management Science 30, 1078–1092. 1038.
Banker, R., Das, S., Datar, S., 1989. Analysis of cost variances for Cook, W.D., Zhu, J., 2007. Classifying inputs and outputs in data
management control in hospitals. Research in Governmental and envelopment analysis. European Journal of Operational Research 180
Nonprofit Accounting 5, 268–291. (2), 692–699.
Banker, R.D., Chang, H., Cooper, W.W., 1996. Simulation studies of Cook, W.D., Zhu, J., 2008. CAR-DEA: Context dependent assurance
efficiency, returns to scale and misspecification with nonlinear func- regions in DEA. Operations Research, forthcoming.
tions in DEA. Annals of Operations Research 66, 233–253. Cook, W.D., Roll, Y., Kazakov, A., 1990. A DEA model for measuring
Beasley, J., 1995. Determining teaching and research efficiencies. Journal the relative efficiency of highway maintenance patrols. INFOR 28,
of the Operational Research Society 46, 441–452. 113–124.
Bessent, A., Bessent, W., Elam, J., Clark, T., 1988. Efficiency frontier Cook, W.D., Kress, M., Seiford, L.M., 1993. On the use of ordinal data in
determination by constrained facet analysis. Journal of Operational data envelopment analysis. Journal of the Operational Research
Research Society 36, 785–796. Society 44, 133–140.
Briec, W., 1999. Holder distance function and measurement of technical Cook, W.D., Kress, M., Seiford, L.M., 1996. Data envelopment analysis
efficiency. Journal of Productivity Analysis 11 (2), 111–131. in the presence of both quantitative and qualitative factors. Journal of
Chang, Y., Sueyoshi, T., 1991. An interactive application of DEA in the Operational Research Society 47, 945–953.
microcomputers. Computer Science in Economics and Management 4 Cook, W.D., Chai, D., Doyle, J., Green, R.H., 1998. Hierarchies and
(1), 51–64. groups in DEA. Journal of Productivity Analysis 10, 177–198.
Charnes, A., Cooper, W.W., 1962. Programming with linear fractional Cook, W.D., Hababou, M., Tuenter, H., 2000. Multi-component
functionals. Naval Research Logistics Quarterly 9, 67–88. efficiency measurement and shared inputs in data envelopment
Charnes, A., Neralic, L., 1992. Sensitivity analysis in data envelopment analysis: An application to sales and service performance in bank
analysis. Glasnik Matematicki 27, 191–201. branches. Journal of Productivity Analysis 14, 209–224.
Charnes, A., Cooper, W.W., Rhodes, E.L., 1978. Measuring the efficiency Cook, W.D., Green, R., Zhu, J., 2006. Dual role factors in DEA. IIE
of decision making units. European Journal of Operational Research Transactions 38, 1–11.
2, 429–444.
16 W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17

Cook, W.D., Liang, L., Zha, Y., Zhu, J., 2008. A modified super-efficiency Hua, Z., Bin, Y., 2007. DEA with undesirable factors. In: Zhu, J., Cook,
DEA model for infeasibility. Journal of the Operational Research W.D. (Eds.), Modeling Data Irregularities and Structural Complexities
Society, forthcoming. in Data Envelopment Analysis. Springer Science Series (Chapter 6).
Cooper, W.W., Huang, Z., Li, S., 1996. Satisficing DEA models under Kamakura, W.A., 1988. A note on the use of categorical variables in
chance constraints. Annals of Operations Research 66, 279–295. data envelopment analysis. Management Science 34 (10), 1273–
Cooper, W.W., Park, K., Pastor, J.T., 1999a. RAM: Range adjusted 1276.
measure of inefficiency for use with additive models and relations to Korostolev, A.P., Simar, L., Tsybakov, A.B., 1995. Efficient estimation of
other models and measures in DEA. Journal of Productivity Analysis monotone boundaries. Annals of Statistics 23, 476–489.
11 (1), 5–42. Kumbhakar, S.C., Lovell, C.A.K., 2000. Stochastic Frontier Analysis.
Cooper, W.W., Park, K.S., Yu, G., 1999b. IDEA and AR-IDEA: Models Cambridge University Press.
for dealing with imprecise data in DEA. Management Science 45, 597– Kuosmanen, T., 2006. Stochastic nonparametric envelopment of data:
607. Combining virtues of SFA and DEA in a unified framework. MTT
Cooper, W.W., Li, S., Seiford, L.M., Tone, K., Thrall, R.M., Zhu, J., Discussion Paper.
2001. Sensitivity and stability analysis in DEA: Some recent develop- Land, K.C., Lovell, C.A.K., Thore, S., 1992. Productive efficiency under
ments. Journal of Productivity Analysis 15, 217–246. capitalism and state socialism: The chance constrained programming
Cooper, W.W., Huang, Z., Li, S., 2004. Chance constraint DEA. In: approach. Public Finance in a World of Transition 47, 109–121.
Cooper, W.W., Seiford, L.M., Zhu, J. (Eds.), Handbook on Data Land, K.C., Lovell, C.A.K., Thore, S., 1994. Production efficiency under
Envelopment Analysis. Kluwer Academic Publishers, Norwell, MA. capitalism and state socialism: An empirical inquiry using chance-
Cooper, W.W., Seiford, L.M., Tone, K., 2006. Introduction to Data constrained data envelopment analysis. Technological Forecasting and
Envelopment Analysis and its Uses. Springer Science, p. 351. Social Change 46, 139–152.
Deprins, L., Simar, L., Tulkens, H., 1984. Measuring labor efficiency in Lang, P., Yolalan, O.R., Kettani, O., 1995. Controlled envelopment by
post offices. In: Marchand, M., Pestieau, P., Tulkens, H. (Eds.), The face extension in DEA. Journal of Operational Research Society 46
Performance of Public Enterprises: Concepts and Measurement. North (4), 473–491.
Holland, Amsterdam, pp. 243–267. Liang, L.F., Yang, F., Cook, W.D., Zhu, J., 2006. DEA models for supply
Doyle, J., Green, R., 1994. Efficiency and cross efficiency in DEA: chain efficiency evaluation. Annals of Operations Research 145 (1), 35–
Derivations, meanings and the uses. Journal of the Operational 49.
Research Society 45 (5), 567–578. Liang, L.F., Wu, J., Cook, W.D., Zhu, J., 2008. The DEA cross efficiency
Doyle, J., Green, R., Cook, W.D., 1996. Preference voting and project model and its Nash equilibrium. Operations Research, forthcoming.
ranking using DEA and cross-evaluation. European Journal of Lovell, C.A.K., Rouse, A.P.B., 2003. Equivalent standard DEA models to
Operational Research 90, 461–472. provide super-efficiency scores. Journal of the Operational Research
Dulá, J.H., Hickman, B.L., 1997. Effects of excluding the column being Society 54 (1), 101–108.
scored from the DEA envelopment LP technology matrix. Journal of Malmquist, S., 1953. Index numbers and indifference surfaces. Trabajos de
the Operational Research Society 48, 1001–1012. Estatistica 4, 209–242.
Dyson, R.G., Thanassoulis, E., 1988. Reducing weight flexibility in DEA. Muñiz, M., Paradi, J., Ruggiero, J., Yang, Z.J., 2006. Evaluating
Journal of Operational Research Society 39 (6), 563–576. alternative DEA models used to control for non-discretionary inputs.
Färe, R.S., Grosskopf, S., 2000. Network DEA. Socio-Economic Journal Computers & Operations Research 33, 1173–1183.
5 (1-2), 9–22. Neralic, L., 1997. Sensitivity in data envelopment analysis for arbitrary
Fare, R., Grosskopf, S., 1996. Intertemporal Production Frontiers: With perturbations of data. Glasnik Matematicki 32, 315–335.
Dynamic DEA. Kluwer Academic, Boston, MA. Neralic, L., 2004. Preservation of efficiency and inefficiency classification
Fare, R., Grosskopf, S., 2004. Modelling undesirable factors in efficiency in data envelopment analysis. Mathematical Communications 9, 51–
evaluation: Comment. European Journal of Operational Research 157, 62.
242–245. Olesen, O.B., Petersen, N.C., 1996. Indicators of ill-conditioned data sets
Färe, R.S., Lovell, C.A.K., 1978. Measuring the technical efficiency of and model misspecification in data envelopment analysis: An extended
production. Journal of Economic Theory 19, 150–162. facet approach. Management Science 42 (2), 205–219.
Färe, R.S., Grosskopf, S., Lovell, C.A.K., 1994. Production Frontiers. Oral, M., Kettani, O., Lang, P., 1991. A methodology for collective
Cambridge University Press. evaluation and selection of industrial R&D projects. Management
Farrell, M.J., 1957. The measurement of productive efficiency. Journal of Science 37 (7), 871–883.
the Royal Statistical Society, Series A, General 120 (3), 253–281. Pastor, J.T., 1996. Translation invariance in DEA: A generalization.
Frei, F., Harker, P., 1999. Projections onto efficient frontiers: Theoretical Annals of Operations Research 66, 93–102.
and computational extensions to DEA. Journal of Productivity Pastor, J.T., Ruiz, J.L., Sirvent, I., 1999. An enhanced DEA Russell graph
Analysis 11, 275–300. efficiency measure. European Journal of Operational Research 115,
Gonzalez, E., Alvarez, A., 2001. From efficiency measurement to efficiency 596–607.
improvement: The choice of relevant benchmarks. European Journal Portela, M., Castro, P., Thanassoulis, E., 2003. Finding closest targets in
of Operational Research 133, 512–520. non-oriented DEA models: The case of convex and non-convex
Golany, B., Roll, Y., 1994. Incorporating standards via data envelopment technologies. Journal of Productivity Analysis 19, 251–269.
analysis. In: Charnes, A., Cooper, W.W., Lewin, A., Seiford, L. (Eds.), Portela, M., Thanassoulis, E., 2007. Developing a decomposable measure
Data Envelopment Analysis: Theory, Methodology and Applications. of profit efficiency using DEA. Journal of the Operational Research
Kluwer Academic, Boston, MA, pp. 313–328. Society 58 (4), 481–490.
Green, R.H., Cook, W.D., Doyle, J., 1997. A note on the additive data Roll, Y., Cook, W.D., Golany, B., 1991. Controlling factor weights in
envelopment analysis model. Journal of the Operational Research data envelopment analysis. IIE Transactions 23, 2–9.
Society 48 (4), 446–448. Rousseau, J.J., Semple, J.H., 1993. Categorical outputs in data envelop-
Green, R., Cook, W.D., 2004. A free disposal hull approach to efficiency ment analysis. Management Science 39 (3), 384–386.
measurement. Journal of the Operational Research Society 55, 1059– Rousseau, J.J., Semple, J.H., 1995. Two-person ratio efficiency games.
1063. Management Science 41, 435–441.
Green, R., Doyle, J., Cook, W.D., 1996. Efficiency bounds in data Ruggiero, J., 1996. On the measurement of technical efficiency in the
envelopment analysis. European Journal of Operational Research 89, public sector. European Journal of Operational Research 90, 553–
482–490. 565.
W.D. Cook, L.M. Seiford / European Journal of Operational Research 192 (2009) 1–17 17

Ruggiero, J., 1998. Non-discretionary inputs in data envelopment Syrjanen, M.J., 2004. Non-discretionary and discretionary factors and
analysis. European Journal of Operational Research 111, 461–469. scale in data envelopment analysis. European Journal of Operational
Ruggiero, J., 2007. Non-discretionary inputs. In: Zhu, J., Cook, W.D. Research 158, 20–33.
(Eds.), Modeling Data Irregularities and Structural Complexities in Thanassoulis, E., Allen, R., 1998. Simulating weights restrictions in data
Data Envelopment Analysis. Springer Science Series (Chapter 5). envelopment analysis by means of unobserved DMUs. Management
Russell, R.R., 1988. Measures of technical efficiency. Journal of Economic Science 44 (4), 586–594.
Theory 35, 109–126. Thompson, R.G., Singleton Jr., F.D., Thrall, R.M., Smith, B.A., 1986.
Scheel, H., 2001. Undesirable outputs in efficiency valuations. European Comparative site evaluations for locating a high-energy physics lab in
Journal of Operational Research 132, 400–410. Texas. Interfaces 16, 35–49.
Seiford, L.M., Zhu, J., 1997. An investigation of returns to scale in data Thompson, R.G., Langemeir, L.N., Lee, C., Lee, E., Thrall, R.M., 1990.
envelopment analysis. OMEGA 27, 1–11. The role of multiplier bounds in efficiency analysis with application to
Seiford, L.M., Zhu, J., 1998a. Sensitivity analysis of DEA models for Kansas farming. Journal of Econometrics 46, 93–108.
simultaneous changes in all of the data. Journal of the Operational Thompson, R.G., Dharmapala, S., Thrall, R.M., 1995. Linked-cone DEA
Research Society 49, 1060–1071. profit ratios and technical inefficiencies with applications to Illinois
Seiford, L.M., Zhu, J., 1998b. An acceptance system decision rule with coal mines. International Journal of Production Economics 39, 99–
data envelopment analysis. Computers and Operations Research 25, 115.
329–332. Thore, S., 1987. Chance-constrained activity analysis. European Journal
Seiford, L.M., Zhu, J., 1999a. Infeasibility of super-efficiency data of Operational Research 30, 267–269.
envelopment analysis models. INFOR 37, 174–187. Thrall, R.M., 1996. The lack of invariance of optimal dual solutions under
Seiford, L.M., Zhu, J., 1999b. An investigation of returns to scale in data translation. Annals of Operations Research 66, 103–108.
envelopment analysis. OMEGA 27, 1–11. Thrall, R.M., 1999. What is the economic meaning of FDH? Journal of
Seiford, L.M., Zhu, J., 1999c. Profitability and marketability of the top 55 Productivity Analysis 11, 243–250.
US commercial banks. Management Science 45 (9), 1270–1288. Tone, K., 2001. A slacks-based measure of efficiency in data envelopment
Seiford, L., Zhu, J., 2002. Modelling undesirable factors in efficiency analysis. European Journal of Operational Research 130, 498–509.
evaluation. European Journal of Operational Research 142, 16– Tulkens, H., 1993. On FDH efficiency analysis: Some methodological
20. issues and applications to retail banking, courts and urban transit.
Sexton, T.R., Silkman, R.H., Hogan, A.J., 1986. Data envelopment Journal of Productivity Analysis 4, 183–210.
analysis: Critique and extensions. In: Silkman, R.H. (Ed.), Measuring Wilson, P.W., 1995. Detecting influential observations in data envelop-
Efficiency: An Assessment of Data Envelopment Analysis, vol. 32. ment analysis. Journal of Productivity Analysis 6, 27–46.
Jossey-Bass, San Francisco, pp. 73–105. Zhu, J., 1996. Robustness of the efficient DMUs in data envelopment
Simar, L., Wilson, P.W., 1998. Sensitivity analysis of efficiency scores: analysis. European Journal of Operational Research 90, 451–460.
How to bootstrap in nonparametric frontier models. Management Zhu, J., 2003a. Imprecise data envelopment analysis (IDEA): A review
Science 44, 49–61. and improvement with an application. European Journal of Opera-
Sueyoshi, T., 1990. A special algorithm for the additive model in DEA. tional Research 144, 513–529.
Journal of the Operational Research Society 41 (3), 249–257. Zhu, J., 2003b. Quantitative Models for Performance Evaluation and
Sueyoshi, T., 1992. Comparisons and analyses of managerial efficiency Benchmarking: Data Envelopment Analysis with Spreadsheets. Klu-
and returns to scale of telecommunication enterprises by using DEA/ wer Academic Publishers.
WINDOW. Communications of the Operations Research Society of Zhu, J., Shen, H.Z., 1995. A discussion of testing DMUs’ returns to scale.
Japan 37, 210–219. European Journal of Operational Research 81, 590–596.

You might also like