
ECONOMETRICS OF PLANNING AND EFFICIENCY

ADVANCED STUDIES IN THEORETICAL AND APPLIED ECONOMETRICS


VOLUME 11

Managing Editors:
J.P. Ancot, Netherlands Economic Institute, Rotterdam, The Netherlands
A.J. Hughes Hallett, University of Newcastle, U.K.

Editorial Board:
F.G. Adams, University of Pennsylvania, Philadelphia, U.S.A.
P. Balestra, University of Geneva, Switzerland
M.G. Dagenais, University of Montreal, Canada
D. Kendrick, University of Texas, Austin, U.S.A.
J.H.P. Paelinck, Netherlands Economic Institute, Rotterdam, The Netherlands
R.S. Pindyck, Sloan School of Management, M.I.T., U.S.A.
H. Theil, University of Florida, Gainesville, U.S.A.
W. Welfe, University of Lodz, Poland

For a complete list of volumes in this series see final page of this volume.
Econometrics
of Planning
and Efficiency

edited by

Jati K. Sengupta
Professor of Economics and Operations Research,
University of California, Santa Barbara

Gopal K. Kadekodi
Professor of Economics, Institute of Economic Growth, Delhi, India

1988 KLUWER ACADEMIC PUBLISHERS.


DORDRECHT / BOSTON / LANCASTER
Distributors

for the United States and Canada: Kluwer Academic Publishers, P.O. Box 358,
Accord Station, Hingham, MA 02018-0358, USA
for the UK and Ireland: Kluwer Academic Publishers, MTP Press Limited,
Falcon House, Queen Square, Lancaster LA1 1RN, UK
for all other countries: Kluwer Academic Publishers Group, Distribution Center,
P.O. Box 322, 3300 AH Dordrecht, The Netherlands

Library of Congress Cataloging in Publication Data

Econometrics of planning and efficiency / edited by Jati K. Sengupta
and Gopal K. Kadekodi.
p. cm. -- (Advanced studies in theoretical and applied
econometrics; v. 11)
ISBN 9024736021
1. Econometrics. 2. Economic policy. 3. Efficiency, Industrial.
I. Sengupta, Jatikumar. II. Kadekodi, Gopal K. III. Series.
HB139.E32 1987
330'.028--dc19 87-21305
CIP

ISBN-13: 978-94-010-8146-7 e-ISBN-13: 978-94-009-3677-5


DOI: 10.1007/978-94-009-3677-5

Copyright

© 1988 by Martinus Nijhoff Publishers, Dordrecht.


Softcover reprint of the hardcover 1st edition 1988
All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, mechanical,
photocopying, recording, or otherwise, without the prior written permission of
the publishers,
Martinus Nijhoff Publishers, P.O. Box 163, 3300 AD Dordrecht,
The Netherlands.
Table of Contents

List of Contributors vii

J. K. Sengupta and G. K. Kadekodi / Presentation of the
Volume ix

PART I: METHODOLOGY 1

G. K. Kadekodi, T. K. Kumar, and J. K. Sengupta / The
Scientific Work of Gerhard Tintner 3
K. A. Fox / Econometrics Needs a History: Two Cases of
Conspicuous Neglect 23

PART II: PLANNING MODELS 49

G. K. Kadekodi / Energy Pricing in a Welfare Framework:
Issues in Equity, Efficiency and Exhaustibility 51
J. B. Nugent / Foreign Aid, Economic Development and
Income Distribution: Some Inferences from a CGE Model
for Egypt 63

PART III: ANALYSIS OF EFFICIENCY 91

J. K. Sengupta / The Active Approach of Stochastic Optimiza-
tion with New Applications 93
P. van Moeseke / Efficient Portfolios: Risk Shares and Mone-
tary Policy 109
A. Charnes, B. Golany, M. Keane, and J. Rousseau / Extremal
Principle Solutions of Games in Characteristic Function
Form: Core, Chebychev and Shapley Value Generalizations 123
T. V. S. R. Rao / Econometric Estimation of Decision Models
under Uncertainty 135

J. H. Gapinski and T. K. Kumar / On Estimating the Elasticity
of Factor Substitution by Nonlinear Least Squares 157
J. K. Sengupta / Recent Nonparametric Measures of Produc-
tive Efficiency 169
List of Contributors

A. CHARNES, University of Texas at Austin, Texas, U.S.A.
K. A. FOX, Iowa State University, Ames, Iowa, U.S.A.
J. H. GAPINSKI, Florida State University, Tallahassee, Florida,
U.S.A.
B. GOLANY, University of Texas at Austin, Texas, U.S.A.
G. K. KADEKODI, Institute of Economic Growth, Delhi, India
M. KEANE, University of Texas at Austin, Texas, U.S.A.
T. K. KUMAR, University of Hyderabad, Hyderabad, India
J. B. NUGENT, University of Southern California, Los Angeles,
California, U.S.A.
T. V. S. R. RAO, Indian Institute of Technology, Kanpur, India
J. ROUSSEAU, University of Texas at Austin, Texas, U.S.A.
J. K. SENGUPTA, University of California at Santa Barbara, Cali-
fornia, U.S.A.
P. VAN MOESEKE, Massey University, Palmerston North, New
Zealand
GERHARD TINTNER (1907-1983)
Presentation of the Volume

This volume, dedicated to the memory of Gerhard Tintner (1907-
1983), presents a selected set of contributions on the three topics
to which he himself made significant contributions. These topics are:
the methodology of econometrics, economic planning and efficiency
analysis.
Born of Austrian parents on September 29, 1907 in Nuremberg,
Gerhard Tintner became one of the most versatile econometricians of
his time, with seminal contributions to the theory of stochastic
economics, dynamic economic theory and stochastic programming. As
a distinguished teacher and versatile scholar he had a profound
influence on all the students and colleagues who came in close touch
with him. Much ahead of his time in many respects, he made pioneering
attempts toward the development of a theory of uncertainty based
on the three disciplines of probability theory, econometrics and
operations research. He took the field of economic development as a
challenge where the techniques of stochastic planning need to be increasingly
applied and implemented. The generation of students he trained
bears adequate testimony to his profound expertise and universal
appeal. The philosophy and culture of India held a special fascination
for him, due in part to the intellectual link he established with
some of the most eminent Indian statisticians of his time.
All the contributions of this memorial volume have been prepared
as a token of deep appreciation, lasting friendship and esteem for a
dedicated teacher who honored the profession by his original research
and universal outlook in teaching. To many of his students such as
ourselves his memory provides a perennial source of inspiration,
dedication to research and singular faith in the advancement of
knowledge for a better future.

JATI K. SENGUPTA
University of California, Santa Barbara,
California, U.S.A.

GOPAL K. KADEKODI
Institute of Economic Growth,
Delhi, India
PART I
METHODOLOGY
The Scientific Work of Gerhard Tintner

G. K. KADEKODI
Institute of Economic Growth, Delhi, India
T. K. KUMAR
University of Hyderabad, Hyderabad, India
and
J. K. SENGUPTA
University of California at Santa Barbara, California, U.S.A.

1. INTRODUCTION

Gerhard Tintner (1907-1983) published his first scientific studies in
the early 1930s, after completing his doctorate in economics, statistics
and law at the University of Vienna in 1929. As a distinguished
scholar in three major branches: economics, mathematics and statis-
tics, he spent over fifty years of his active life in founding, shaping and
extending some of the basic contents of econometrics, its theory and
empirical applications to planning, efficiency and economic develop-
ment. Born in Nuremberg of Austrian parents on September 29, 1907
he became one of the most versatile econometricians of his time,
making fundamental contributions in both economic theory and
econometrics. His original contributions to economic theory comprise
such topics as: general equilibrium models in a dynamic context,
demand and production studies under dynamic conditions and a
stochastic approach to economics, whereby the mathematical theory of
stochastic processes could be integrated in a fundamental sense with
the deterministic approach to economic modeling. On seeing the
current progress of economic theory in the direction of dynamic and
stochastic phenomena, one is struck with a sense of wonder at the
vision of Tintner's work.
His original contributions in econometrics and the theory of opera-
tions research included the areas of information theory and econo-
metrics, the theory of probability and the approach of stochastic
programming. His view of stochastic programming emphasized the
active decision approach and the conditional distribution of the
optimal outcome, both of which make theory highly applied and
flexible. His ceaseless efforts in applying this optimizing technique
in sectoral investment decisions in planning and in improving the
resource allocation processes in less developed countries provide a
glorious testimony to his abiding faith in applying the modern tools of
econometrics to the benefit of mankind. He made consistent attempts
at disseminating his econometric ideas through textbooks, mono-
graphs, graduate teaching and research publications of over two
hundred articles. For most of his life he lived in the USA, teaching
and conducting his research, but he traveled widely in Asia (India,
Japan), Europe (England, Poland, U.S.S.R., Germany, France, Austria,
Spain, the Netherlands, Sweden, Ireland, Hungary, Italy, Norway),
Australia, New Zealand and Latin America (Mexico, Chile, Colombia,
Venezuela) to deliver lectures, to serve as an adviser and to address
professional gatherings.
Gerhard Tintner started his professional career at a time when the
economics discipline was being engulfed by the econometric revolu-
tion. In fact, he was one of the principal architects responsible for
bringing the econometric tradition into economic reasoning.
Today some of the statistical and econometric tools often applied in
economics have become so routine and standard that one easily overlooks
the pioneering contributions he made in multivariate and
time series analysis, stochastic systems of equations, errors in variables,
multicollinearity, stochastic processes and control theory. All
these emerged out of his concern with integrating economics with
empiricism, which gave respectability and maturity to econometrics.

2. GENERAL METHODOLOGY

Three aspects of Tintner's professional outlook must be noted at the
outset. First, he viewed both economics and statistics from a very
broad perspective and used mathematics and empirical observations
to knit the two subjects together. Second, he realized the research potential
of integrating all three disciplines, including operations research. And
finally, he, more than his critics, realized the tentativeness of both
mathematical theorising and empirical verification. That is what
made him concentrate on stochastic economics as an integration of
mathematics, economics and statistics (or econometrics, if one wants to
call it that).
As a pioneer and first-generation econometrician, he spent considerable
time and effort defining the boundaries of econometrics.
According to him,

'It consists in the application of mathematical economic theory
and statistical procedures to economic data in order to establish
numerical results in the field of economics to verify economic
theorems': Tintner, Econometrics (1952).

In other words, for him an econometrician is one who treats
mathematics and statistics only as tools but returns to economic
theory for policy applications.
With a firm conviction that economics is a progeny of science, while
science itself emerged out of a confrontation with history and phi-
losophy, Tintner joined other social scientists like Morgenstern, von
Neumann, Abraham Wald, and Marschak to put economics ahead of
science in many respects.
Tintner attached the highest importance to historical development and
metaphysical concepts. If economics is not able to solve many of the
present-day problems, it is partly due to the influence of ideological
biases. Whether it is Adam Smith's 'invisible hand' or the Utopia of Sir
Thomas More, Plato's 'state' or Aumann's markets with a 'continuum
of traders', the development of the theoretical foundations of economic
science has relied heavily on metaphysical concepts. Likewise, the
emergence of Newtonian physics and deterministic economic theory
or mercantile prophecy were not independent of the historical roots and
thoughts prevailing in those periods. Tintner therefore always emphasized
the study of economics in a historical perspective.
Tintner launched a major attack on the deterministic approach to
economic problems, not merely because of the limitations of classical
physics but because of the distinct characteristics of economic behavior
and problems. He untiringly argued that while contemporary physics
has switched over to stochastic physics (even though the law of large
numbers may be quite valid in certain domains of physics),
contemporary economic theory is still by and large deterministic
(even though economic observations form only a small sample from the
universe). Tintner's own contributions to this quest on the nature of
stochastic economic behavior have been enormous, covering applications
of discrete diffusion probability models, continuous stochastic
processes, stochastic programming models, control-theoretic models
and stochastic variate difference methods, all of them with concrete
applications in economic planning. He constantly argued that deter-
ministic economics must be considered only as dealing with the mean
values (i.e. mathematical expectations) of the random variables which
really characterize the empirical economic system.

3. ECONOMIC THEORY

Three major branches of economic theory interested him most,
namely the theory of choice, the economics of uncertainty and business
cycles. He made the point that, more than measurability (Bernoulli) or
non-measurability (Pareto) of utility, the basic issues in the theory of
choice are risk and uncertainty and their impact on the stream of
anticipated income and prices. It is still possible to derive
implications for choice, provided one is able to formulate the model
in terms of expected utility rather than any preassigned utility (measurable
or not).
He made original contributions toward the development of a theory
of uncertainty by utilizing the three disciplines of probability, eco-
nomics and operations research. His view of probability is much
broader than the usual frequentist concept and he showed in his
seminal article 'Foundations of Probability and Statistical Inference'
(1949) that a structural concept similar to Carnap's degree of
confirmation is likely to prove more useful in the social and behavioral
field. It was one of Tintner's major convictions that the tools of
cybernetics and systems theory ought to be adapted and applied to
economic and social modeling. Thus his emphasis on information
theory and stochastic processes was as much a part of his broad
philosophy as of the empirical foundations of applied economics.
Tintner's approach to business cycle problems was Schumpeterian
in conceptual framework but statistical in application. He believed in
the simultaneity of theory and empiricism. Good theory needs to be
tested, as he often said while referring to Karl Popper's falsifiability
test. It is only through useful applications that one gains more insight
into the behavioral processes by which the economic models can
be enriched. Thus through his empirical analysis of business cycle
phenomena he was led to develop the variate difference method as a
technique for filtering the systematic component of the statistical time
series. Through his central emphasis on the need to understand the
dynamics of behavioral adjustment which underlie most economic
time series, he pioneered in emphasizing the policy applications of
stochastic process models. To him the variate difference method
appeared both logical and empirical. In his later work he made extensive
use of optimal control theory in a stochastic framework, which
he pioneered at a time when few others could visualize these connections.
His empirical application of the variate difference method to the
study of business cycles finally led him to inquire into the Austrian
theory of capital originally proposed by Böhm-Bawerk. Thus he
argued that the fluctuations one may find along an optimal production
trajectory following a turnpike do in fact coincide with the concept of
an optimal period of production in Böhm-Bawerk's theory, when
suitably defined. This result, in a way, bridges the conceptual gap
between the classical and neoclassical approaches to capital theory.
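To fix ideas, the basic logic of the variate difference method can be sketched in modern notation (the symbols below are ours, not Tintner's). Write the observed series as $x_t = m_t + u_t$, where $m_t$ is a smooth systematic component and $u_t$ a random component with mean zero, variance $\sigma^2$ and no serial correlation. If $m_t$ behaves locally like a low-degree polynomial, the $k$-th difference $\Delta^k x_t$ is dominated by the random part, and since

$$\operatorname{Var}(\Delta^k u_t) = \sigma^2 \sum_{j=0}^{k} \binom{k}{j}^2 = \binom{2k}{k}\,\sigma^2,$$

the average of $(\Delta^k x_t)^2 / \binom{2k}{k}$ estimates $\sigma^2$; stability of this estimate as $k$ increases is the signal that the systematic component has been filtered out.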
Apart from a large number of textbooks in econometrics, mathematical
economics and statistics, two of his books in the fields of
stochastic and mathematical economics are quite outstanding. In the
book Stochastic Economics (jointly with J. K. Sengupta, 1972) he
deals with various aspects of stochasticity in economic behavior and
adjustment. The theoretical developments cover both discrete and
continuous economic phenomena in the time domain, and the consistent
and optimizing stochastic process models appropriate for analyzing
economic behavior and for decision-making. The earlier book Methodology
of Mathematical Economics and Econometrics, published in
1968, is a systematic survey of the development of these subjects.
Tintner has left a large number of methodological propositions for
economic theorists to ponder. For instance, even though the
Austrian concept of the period of production corresponds to the period of the
production cycle (or fluctuations), the result holds only under the
assumption of linear dynamic production systems. Nonlinear dynamics
and nonlinear stochastic processes need far more attention. Although
Tintner did apply some types of nonlinear diffusion processes to
economic systems, e.g., the lognormal process, his estimation method was
only preliminary and approximate. He firmly believed that the advance
in computer capabilities would make it possible to obtain more
reliable estimates of the nonlinear models. Recent trends in applying
the concepts of bifurcation phenomena in dynamic disequilibrium
models in economics show the correctness of Tintner's conjectures.
Secondly, the stochastic basis of economic theory needs to be investi-
gated in an essential and fundamental way, according to Tintner's
philosophy, and this means that we have to examine the various factors
which influence in a stochastic sense the various probabilities which
characterize the nonlinear Chapman-Kolmogorov equations of a dy-
namic economic system. When these probabilities can be interpreted
as transition probabilities, they can be related to other policy or
instrument variables and hence can be used for optimal decision-
making through decentralization. In other cases the probabilities may
be related to search models or disequilibrium phenomena, the latter in
terms of the divergence of the time-dependent (short-run) probability
path from the steady-state path. For Tintner the major obstacle to a
satisfactory theory of economic planning in a stochastic environment
was the problem of aggregation of microeconomic data and
the lack of any satisfactory statistical or econometric theory of data
compression. Although he suggested the use of methods such as
information theory, Hotelling's principal components and linear rules
of aggregation, he was more aware than anyone else of their short-
comings. It is still an open problem.
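For the reader's convenience, the Chapman-Kolmogorov equations invoked above may be written, in one standard form for a Markov process with transition density $f(x, t \mid y, s)$ (notation ours), as

$$f(x, t \mid y, s) = \int f(x, t \mid z, u)\, f(z, u \mid y, s)\, dz, \qquad s < u < t,$$

so that the research programme sketched here amounts to modeling how policy or instrument variables shift these transition densities.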

4. OPERATIONS RESEARCH AND ECONOMETRICS

Tintner's favorite definition of operations research was 'the econo-
metrics of enterprise', and he firmly believed that econometrics as
traditionally practiced should be integrated with operations research
as it is customarily understood in business schools. He deplored
the gap existing between the two disciplines and worked hard to
emphasize their complementarity.
His fundamental contributions in the theory of stochastic linear
programming emphasize this complementarity in a remarkable way.
The early literature on linear programming had dealt exclusively with
the static and deterministic case. Tintner (1955) was the first to
develop the basic theory of optimal decision-making under various
stochastic cases in which inputs, outputs, technical coefficients and the
intertemporal constraints are subject to random disturbances. His
classic distinction between the active ('here and now') and the
passive ('wait and see') decision rules, his attempt at specifying
and estimating the conditional distribution of the optimal objective
function in the active approach and, most of all, his emphasis on the
need to understand the dynamics of the optimal decision process
when the system is influenced by a stochastic process are still valid
today as much as they were when he pioneered this approach. Since
Tintner's pioneering work, the theory of stochastic programming has
been developed by many scholars, e.g., Charnes and Cooper, and
Tintner's students and others have also made important contributions to the
field. If Kantorovich, a Nobel Laureate, is to be remembered for the
use of linear programming in economic decision-making, Tintner is to
be remembered for facilitating the same in real-life situations, which
are full of stochastic elements.
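As an illustration of this distinction, the following minimal sketch (our own construction, not Tintner's original formulation; the coefficients, the normal perturbation of the returns and the use of scipy's LP solver are all illustrative assumptions) contrasts the passive ('wait and see') and active ('here and now') treatments of a small linear programme with random returns.

# Monte Carlo sketch: maximize c'x subject to Ax <= b, x >= 0, with c random.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])      # technical coefficients (held fixed here)
b = np.array([10.0, 15.0])      # resource limits
c_mean = np.array([4.0, 3.0])   # mean unit returns
n_draws = 2000

# Passive ('wait and see'): optimize after each realization of c is observed,
# giving the distribution of the optimal objective value.
passive_optima = []
for _ in range(n_draws):
    c = c_mean + rng.normal(scale=0.5, size=2)
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    if res.success:
        passive_optima.append(-res.fun)

# Active ('here and now'): commit to the plan that is optimal for the mean
# returns, then study the conditional distribution of the realized outcome.
res_mean = linprog(-c_mean, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
x_active = res_mean.x
active_outcomes = (c_mean + rng.normal(scale=0.5, size=(n_draws, 2))) @ x_active

print("passive: mean of optimal values =", np.mean(passive_optima))
print("active : mean realized outcome  =", np.mean(active_outcomes))

The passive distribution describes what could be achieved with perfect foresight, while the active calculation traces the conditional distribution of the outcome for a plan fixed in advance; it is this conditional distribution of the optimal objective that Tintner's active approach sought to specify and estimate.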
In econometrics he emphasized the need to study nonnormal and
nonlinear stochastic processes. He considered it a fundamental challenge
that the Pareto-Lévy class of stable distributions
would have a significant impact on our econometric thinking in the
future. Most econometricians probably think that the use of normal
distributions is justified, in a fashion, by the appeal to some kind of
central limit theorem. This is indeed true for a great variety of
statistical problems, but this might, in his opinion, prove an illusion. For
if the sample is drawn from a Cauchy distribution, the distribution of
the sample mean is exactly the same as the distribution of each
individual item in the sample. In this case we gain nothing by computing
the arithmetic mean of the sample; we might just as well pick
out any one of the items in our sample at random. He realized
that this certainly calls for fundamentally new methods of approach.
Recent trends in the theory of nonparametric production frontiers
might well agree with Tintner's prognosis.
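The Cauchy example can be made precise with a one-line characteristic-function argument (a standard fact, stated here in our own notation). If $X_1, \ldots, X_n$ are independent Cauchy variables with characteristic function $\varphi(t) = e^{-\gamma|t|}$, the sample mean $\bar{X}_n = n^{-1}\sum_{i} X_i$ has characteristic function

$$\varphi_{\bar{X}_n}(t) = \left[\varphi(t/n)\right]^{n} = \left(e^{-\gamma|t|/n}\right)^{n} = e^{-\gamma|t|} = \varphi(t),$$

so $\bar{X}_n$ has exactly the same distribution as a single observation, and averaging yields no gain in precision.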

5. OTHER CONTRIBUTIONS

Tintner was a leading contributor to Econometrica during its first
two decades (1933-52), an associate editor from 1951 to 1978, and
the book review editor from 1951 to 1972. In the spirit of a true
international scholar he always kept himself in touch with new and
promising developments in econometrics and the related fields such as
operations research, stochastic processes and information theory. He
was much ahead of his time in his prognosis of the future of
econometrics and economic theory. Dedicated research for nearly half
a century in some of the less explored tracts of economics and related
fields has earned for Gerhard Tintner a singular distinction of being a
pioneer among economists, mathematicians, and statisticians belonging
to his generation and the generations to come. An author of more
than a dozen books and over two hundred articles, a teacher for more
than forty years, Gerhard Tintner leaves the academic community with
several unfinished works; among them we found one entitled Some
Topics in the History of Economics, in which he seemed to have
thought of writing a chapter on the 'Future of Economics'.

SELECTED WORKS OF PROFESSOR GERHARD TINTNER

Books

1. Prices in the Trade Cycle, Vienna, Springer, 1935, (XII and 204 pp., 60
graphs, mathematical appendix, French and German resume).
2. The Variate Difference Method, published jointly by the Cowles Commission
for Research in Economics and the Department of Economics and Sociology,
Iowa State College in 1940, Principia Press, Bloomington, Indiana (175 pp.,
three figures, 48 tables, appendix, etc.).
3. Econometrics, published by John Wiley & Sons, New York City, and
Chapman & Hall Ltd., London 1952. (Second printing 1955, XIII and 370
pp.), pocket book ed. 1965. Japanese translation: Bun Ga Do Publishers,
Tokyo,Japan 1961.
4. Mathematics and Statistics for Economists, published by Holt, Rinehart &
Winston Inc., New York City 1953. Second Printing 1954. English edition:
Constable & Co. Ltd. London. Japanese translation: Chikurashobe Publishing
Co., Tokyo, Japan (XIV and 363 pages). French Translation: Mathematiques
et Statistiques pour les Economistes (XV and 488 pp.), Paris: Dunod 1965,
nouveau tirage 1969, 508 pages. Korean translation: Il-Jo-Ka publishers,
Introduction to Economic Mathematics, 315 pp., translation by Moon Sik
Kim. Second Edition (with C. B. Millham), XX and 485 pp., 1970. Spanish
translation: Matematica y estadistica para economistas, Interamaericana, Mexico
1973.
5. Handbuch der Okonometrie (Handbook of Econometrics), has been included
in the series Encyclopaedie der Rechts- und Staatswissenschaften (Springer
Verlag, Berlin, 1960, XI and 328 pp.). Russian translation: Vvedenie v
ekonometriyu Izdatelstove Statistika, 360 pp., Moscow 1965.
6. Introduccion a la Econometria, 128 pp., has been published by the Central
University of Quito and the Junta Nacional de Planificacion y Coordinacion
Economica, Quito, Ecuador. Portuguese translation: Elementos de Econometria,
117 pp., 1965, published by Enio Matheus Guazelle & Cia., Sao Paulo, Brazil.
7. Topics in Econometrics, A course of Lectures delivered at the Institute of
Agricultural Research Statistics, June - August 1965, Indian Council of
Agricultural Research, New Delhi, India 1966, 125 pp.
8. The Econometrics of Development Planning, Cambridge University Press,
Cambridge, England (with J. K. Sengupta), 1970.
9. Methodology of Mathematical Economics and Econometrics, Encyclopaedia of
Unified Sciences, VIII and 113 pp., 1968, Chicago, Illinois, University of
Chicago Press. Japanese translation: Tokyo 1971.
10. Stochastische Elemente der Okonomie, Peter Hanstin, Bonn, Germany, 1971.
11. Stochastic Economics, (with J. K. Sengupta), Academic Press, New York City
1972. XI and 315 pp.
12. Metody Ekonomicheskikh Issledovanii, (with E. Fels), Moscow, Progress, 1971.
13. Studies in Correlation: Multivariate Analysis and Econometrics, (with A. Afriat
and M. V. Rama Sastry), Göttingen: Vandenhoeck & Ruprecht 1975.
14. New Results in the Variate Difference Method, (with J. N. K. Rao and H.
Strecker), Göttingen: Vandenhoeck & Ruprecht 1978.
15. Some Econometric Studies in Indian Agriculture, (with M. Patel), Göttingen:
Vandenhoeck & Ruprecht 1978.

Articles

1931 'Die Allgemeine Preisbildung, 1880-1913', Schmollers Jahrbuch pp. 95 ff.
1935 'Die Nachfrage im Monopolgebiet', Zeitschrift für Nationaloekonomie, 6: pp.
536 ff.
1936a 'A note on distribution of income over time', Econometrica 4: pp. 60 ff.
1936b 'Internationale Konjunkturforschung', Der Oesterreichische Volkswirt, 29:
pp. 738 ff.
1937 'Monopoly over Time', Econometrica 5: pp. 160 ff.
1938a 'The maximization of utility over time', Econometrica 6: pp. 154 ff.
1938b 'The theoretical derivation of dynamic demand curves', Econometrica 6:
pp. 375 ff.
1939a 'Notes on the problems of bilateral monopoly', Journal of Political Economy
47: pp. 263 ff.
1939b 'On tests of significance in time series', Annals of Mathematical Statistics
10: pp. 139 ff.
1939c 'Elasticities of expenditure in the dynamic theory of demand', Econometrica
7: pp. 266 ff.
1939d 'The theory of measurement of demand', Journal of Farm Economics 21,
pp. 606 ff.
1939e 'The distribution of symmetric quadratic forms in normal and independent
variables', Iowa State College Journal of Science 13, pp. 231 ff.
1940a 'The analysis of economic time series', Journal of the American Statistical
Association 35: pp. 93 ff.
1940b 'A contribution to the non-static theory of production', in Lange et al. (eds.),
Studies in Mathematical Economics and Econometrics, pp. 92 ff.
1941a 'The theory of choice under subjective risk and uncertainty', Econometrica
9: pp. 298 ff.
1941b 'The pure theory of production under technological risk and uncertainty',
Econometrica 9: pp. 305 ff.
1942a 'A contribution to the non-static theory of choice', Quarterly Journal of
Economics 51: pp. 274 ff.
1942b 'A simple theory of business fluctuations', Econometrica 10, pp. 317 ff.
1942c 'The theory of production under non-static conditions', Journal of Political
Economy 50, pp. 645 ff.
1944a 'An application of the variate difference method to multiple regression',
Econometrica 12, pp. 566 ff.
1944b 'The simple theory of business fluctuations: a tentative verification', Review
ofEconomic Statistics 26, pp. 148 ff.
1944c (With O. H. Brownlee), 'Production functions derived from farm records',
Journal of Farm Economics 26, pp. 566 ff.
1944d 'A note on the derivation of production functions from farm records',
Econometrica 12, pp. 26 ff.
1944e 'A note on economic aspects of the theory of errors in time series',
Quarterly Journal ofEconomics 53, pp. 141 ff.
1945 'A note on rank, multicollinearity and multiple regression', Annals of
Mathematical Statistics 16, pp. 304 ff.
1946a 'A note on welfare economics', Econometrica 14, pp. 69 ff.
1946b 'Multiple regression for systems of equations', Econometrica 14, pp. 5 ff.
1946c 'Some applications of multivariate analysis to economic data', Journal of the
American Statistical Association 41, pp. 472 ff.
1948a 'Une theorie simple des fluctuations economiques', Revue d'Economie
Politique, Vol. 57, pp. 209 ff.
1948b 'Homogeneous systems in mathematical economics', Econometrica 16, pp.
273 ff.
1948c 'Scope and method of econometrics, illustrated by application to american
agriculture', Statistical and Social Enquiry Society ofIreland 18, pp. 161 ff.
1949a 'Foundations of probability and statistical inference', Journal of the Royal
Statistical Society 112, pp. 251 ff.
1949b 'Static macro-economic models and their economic verification', Metro-
economica 1, pp. 48 ff.
1949c 'La position de l'econometrie dans la hierarchie des sciences sociales',
Revue d'Economie Politique 59, pp. 634 ff.
1950a 'Some Formal Relations in Multivariate Analysis', Journal of the Royal
Statistical Society, Series B (Methodological), Vol. 12, pp. 95 ff.
1950b 'Die Identifikation: Ein Problem der Okonometrie', Statistische Viertel-
jahreschrift 3, pp. 7 ff.
1950c 'A test for linear relations between weighted regression coefficients', Journal
of the Royal Statistical Society, Series B (Methodological), Vol. 12, pp. 273
ff.
1951 'Static econometric models and their empirical verification illustrated by a
study of the American Meat Market', Metroeconomica 2, pp. 33 ff.
1952a 'Some formal relations in multivariate analysis', Proceedings, and Inter-
national Congress of Mathematics, 1950, Providence, Rhode Island, Vol. 1,
p.584.
1952b 'Abraham Wald's Contributions to Econometrics', Annals of Mathematical
Statistics 23, pp. 21 ff.
1952c 'Die Anwendung der variate difference Methode auf die Probleme der
gewogenen Regression und der Multikollinearität', Mitteilungsblatt für
Mathematische Statistik 4, pp. 159 ff.
1952d 'Complementarity and shifts in demand', Metroeconomica 4, p. 1.
1952e 'Econometrica', A Revista de Perquisas Economica-sociales 2, pp. 361 ff.
1953a 'Econometrics', Scientia 88, pp. 250 ff.
1953b 'Econometrie', Scientia 88, Supplement, pp. 148 ff.
1953c 'Econometria', El Trimestre Economico 20, pp. 75 ff.
1953d 'The Definition of Econometrics', Econometrica 21, pp. 31 ff.
1953e (With 0. H. Brownlee), 'The Production Functions Derived from Farm
Records - A Correction', Journal ofFarm Economics 35, pp. 123 ff.
1954a 'The Teaching of Econometrics', Econometrica 22, pp. 77.
1954b 'Ein Beitrag zur Nicht - Statischen Wertheorie', Zeitschrift /iir Nationalo-
konomie 14, pp. 358 ff.
1954c 'The teaching of econometrics, UNESCO - The University of Social
Sciences, Economics, Paris, pp. 282 ff.
1954d 'The definition of econometrics', (Japanese translation), Bulletin of the
Bureau of Statistics, Office of the Prime Minister Tokyo, Japan, Vol. 3, No.
6, pp. 57 ff.
1954e 'The Use of Mathematical Methods of Econometrics and Economic
Statistics', International Social Science Bulletin, UNESCO, Vol. 6, No.4,
pp.640ff.
1955a 'Stochastic Linear Programming with Applications to Agricultural Econom-
ics', Symposium on Linear Programming, National Bureau of Standards,
Washington, D.C., Vol. 1, pp. 197 ff.
1955b 'The distribution of the variance of variate differences in the circular case',
Metron 17, p. 1 ff.
1955c 'Programmazione lineare stocastica con applicazioni a problemi di economia
agraria', Giornale degli Economisti, pp. 3 ff.
1955d (With M. M. Babbar and E. Heady), 'Programming with consideration of
variation in input coefficients', Journal ofFarm Economics 37, pp. 33 ff.
1955e 'Einige Grund - Probleme der Okonometrie', Zeitschrift Fuer die Gesamte
Staats wissenschaft 111, pp. 601 ff.
1956 'Statistik und Okonometrie', Statistische Viertel-Jahrschrift 9, pp. 92 ff.
1957a 'Les programmes lineaires stochastiques', Revue d' Economie Politique 67,
pp. 601 ff.
1957b 'La theorie probabiliste de Carnap et son Application aux problemes de
l'Econometrie', Economie Appliques 10, pp.19 ff.
1957c 'Mathematics and Statistics for Economists', Metroeconomica 8, pp. 146 ff.
1957d 'Game Theory, Linear Programming and Input-output Analysis', Zeitschrift
fuer Nationaloekonomie 17, p. 1 ff.
1957e 'La Teoria dei Giochi, La Programmazione Lineare e I' Analisi delle
Interdependenze Structurali', Industria, pp. 505 ff, 653 ff.
1957f 'Strategische Spieltheorie und ihre Anwendung in den Sozialwissenschaf-
ten', Allgemeines Statistiches Archiv, pp. 242 ff.
1957g 'Makroekonomische Modelle fUr die Osterreichische Wirtschaft', Zeitschrift
fuer Nationaloekonomie 17, pp. 262 ff.
1957h 'Produktions-Funktionen für die österreichische Landwirtschaft',
Zeitschrift für Nationaloekonomie 17, pp. 426 ff.
1958a 'The Use of Mathematics in Econometrics and Economic Statistics',
(Arabic translation) Readings in the Social Sciences, translation prepared
for the UNESCO Middle East Science Cooperation Office, Cairo, Egypt,
Vol. 1, pp. 105 ft.
1958b 'Eine neue Methode zur Schatzung der logistischen Funktion', Metrika 1,
pp. 154 ft.
1958c 'Angebots - und Nachfrage - Funktionen fur einige Fleischsorten in
Osterreich', Zeitschrift/iir Nationalokonomie 17, pp. 443 ft.
1959a 'Einkommenselastizitiiten im Osterreichischen Konsum', Monatsberichte des
Osterreichischen Institutes /iir Wirtschaftsforschung 32, Beilage 57, 18 pp.
1959b 'The Application of Decision Theory of Probability to a Simple Inventory
Problem', Trabajos de Estadistica 10, pp. 239 ft.
1960a 'External Economies in Consumption', Essays in Economics and Econo-
metrics, University of North Carolina Press, Chapel Hill, N.C., 1960, pp.
106 ft.
1960b (With a Murteria), 'Un Modelo 'Input-output' Simplificado para a Eco-
nomia Portuguesa', Colectanea de Estudos 8, pp. 1-14.
1960c 'Jeux Strategiques, Programmes Lineaires, et Analyse des Input-output',
Cahiers de l'Institut de Science Economiques Appliquee, No. 98, pp. 490 ft.
1960d 'A note on stochastic linear programming', Econometrica 28, pp. 490 ft.
1960e 'Eine Anwendung der Wahrscheinlichkeitstheorie von Carnap auf ein Prob-
lem der Untnehmungsforschung', Unternehmungsforschung 4, pp. 164 ft.
1960f 'The Use of Stochastic Linear Programming in Planning', Indian Economic
Review 5, pp. 159 ft.
1960g 'Application of the Theory of Information to the Problem of Weighted
Regression', Studi in Onore di Corrado Gini 1, pp. 373 ft.
1961a 'A stochastic theory of economic development and fluctuation', in N.
Hegelund (ed.), Money, Growth and Methodology, Lund, Sweden, pp. 59 ft.
1961b 'The logistic law of economic development', Arthaniti 4, pp. 75 ft.
1961c 'The Statistical Work of Oskar Anderson', Journal of the American Statis-
tical Association 56, pp. 273 ft.
1961d (With G. V. L. Narasimhan, L. Patil, and N. W. Raghavan), 'A logistic trend
for Indian agricultural income', Indian Journal of Economics 42, pp. 79-
83.
1962a (With B. von Hohenbalken), 'Econometric Models of the OEEC Countries,
the United States and Canada and their Application to Economic Policy',
Weltwirtschaftliches Archiv 89, pp. 29 ft.
1962b (With 1. K. Sengupta and V. Y. Rao), 'An Application of Stochastic Linear
Programming to Development Planning', Metroeconomica 14, pp. 25 ft.
1962c 'Ein Test für Singularität einer Matrix', IFO-Studien 8, pp. 1-14.
1962d (With 1. K. Sengupta), 'Some aspects of the design and the use of a
generalized growth model', Indian Economic Review 6, pp. 1-21.
1962e (With J. N. K. Rao), 'The distribution of the ratio of the variance of the
variate differences in the circular case', Sankhya, Series A, Vol. 24, pp.
385-394.
1963a (With J. K. Sengupta), 'Ein verallgemeinerter Geburten- und Todesprozess
zur Erklärung der Entwicklung des Deutschen Volkseinkommens, 1851-
1939', Metrika 6, pp. 143-147.
1963b (With J. K. Sengupta), 'On some economic models of development planning',
Economia Internazionale 16,1963, pp. 1-19.
1963c (With 1. K. Sengupta), 'A stochastic programming - interpretation of the
Domar type growth model', Arthaniti 6, pp. 1-11.
1963d (With 1. K. Sengupta), 'On some aspects of trend in the aggregative models
of economic growth', Kyklos 16, pp. 47-61.
1963e (With E. 1. Thomas), 'Un modele stochastique de development economique
avec application a l'industrie anglaise', Revue d'Economie Politique 73, pp.
278-280.
1963f (With C. Millham and 1. K. Sengupta), 'A weak duality theorem for
stochastic linear programming', Unternehmungsforschung 7, pp. 1-8.
1963g (With 1. K. Sengupta and B. Morrison), 'Stochastic linear programming with
applications to economic models', Economica 30, pp. 262-276.
1963h (With 1. N. K. Rao), 'Die variate difference Methode im Falle einer
zirkularen Grundgesamtheit', Zur Anwendung Statistischen Methoden.
Festschrift Zum 75. Geburtstag von Prof. D.D.D.H.C. Felix Burkhart,
Wissenschaftliche Zeitschrift der Karl Marx Universitiit Leipzig. Mathe-
matisch-Naturwissenschaftliche Reine. Sonderband 11, pp. 189-193.
1963i (With 1. N. K. Rao), 'On the variate difference method', Australian Journal
ofStatistics 5, pp. 106-116.
1963j (With G. V. L. Narasimham, N. S. Raghavan and L. Patil), 'A simple
stochastic process for the explication of the trend of regional development',
Colloquim of Applications of Mathematics to Economics, Budapest 1963,
Publishing House of the Hungarian Academy of Science, Budapest, pp.
355-358.
1963k (With 1. K. Sengupta and C. Millham), 'On some theorems of stochastic
linear programming with applications', Management Science 10, pp. 143-
159.
1964a (With P. Van Moeseke), 'Base duality theorem for stochastic and para-
metric linear programming', Unternehmungsforschung 8, pp. 73-79.
1964b (With O. Davila), 'Un Modelo Econometrico parael Ecuador', Junta
Nacional de Planificati6n y Coordinacion Econ6mica, Quito, Ecuador,
1964.
1964c (With 1. K. Sengupta), 'An approach to a stochastic theory of economic
development with applications', Problems of Economic Dynamics and
Planning; Essays in honor of Michal Kalecki, PWN, Warsaw, pp. 373-
397.
1964d (With 1. K. Sengupta), 'Stochastic Linear Programming and Its Applications
to Economic Planning', On Political Economy and Econometrics. Essays in
honor of Oskar Lange, PWN, Warsaw, pp. 601-618.
1964e 'A note on the relation between Mahalanobis distance and weighted
regression', Contributions of Statistics, (presented to Professor P. C.
Mahalanobis on the occasion of his 70th birthday), London: Pergamon
Press, pp. 481 ff.
1964f 'Lineare Programme und Input-Output-Analyse', Statistische Hefte, Cahiers
Statistiques-Statistical Papers 1/2 pp. 50-54.
1964g (With V. Mukerjee and R. Narayanan), 'A generalized poisson process with
applications to Indian Data', Arthaniti 7, pp. 156-164.
1964h 'Test NA Osobliwosc Macierni', Przegl, Statyst., Warsaw, Poland, Vol. 11,
No.4, pp. 387-389.
1965a Introduccion a la Econometria - has been published by the Central
University of Quito, Ecuador and the Junta Nacional de Planificacion y
Coordinacion Economica, Quito, Ecuador; A Portuguese translation
published Elementos de Econometria, by Enio Matheus Guazzelle & Cia.,
Ltd., Sao Paulo, Brazil.
1965b (With 1. K. Sengupta), 'Flexibility and adaptability of the Domartype growth
model', Metroeconomica 17, pp. 3-16.
1965c (With 1. K. Sengupta and C. Millham), 'On the Stability of Solutions under
Error in Stochastic Linear Programming', Metrika 9, pp. 47-60.
1965d (With O. Davila), 'Applicaciones de la Econometria a la Planificaci6n', El
Trimestre Economico 32(4), pp. 717-724.
1965e 'Stochastic Linear Porgramrning with Dlustrations', in R. Henn (ed.),
Operations Research Verfahren 2, Meisenhein am Glan: Verlag Anton
Hain, pp.180-221.
1966a Topics in Econometrics - A course of Lectures delivered at the Institute of
Agricultural Research Statistics, June-August 1965, Indian Council of
Agricultural Research, New Delhi, India.
1966b (With R. C. Patel), 'A log-normal diffusion process applied to the economic
development oflndia', Indian Economic Journal 13, pp. 465-467.
1966c (With 1. K. Sengupta), 'On the stability of solutions under recursive
programming', Unternehmungsforschung 10, pp. 1-14.
1966d 'Some thoughts about the state of econometrics', in S. R. Krupp (ed.), The
Structure of Economic Science, Englewood Cliffs, New Jersey, Prentice-
Hall, Inc., pp. 114-128.
1966e (With M. Patel), 'Evaluation of Indian Fertilizer Projects: An Application of
Consumer's and Producer's Surplus', Journal of Farm Economics 48, pp.
104-110.
1966f (With R. Narayanan), 'A Multi-dimensional Stochastic Process for the
Explanation of Economic Development', Metrika 11, pp. 85-90.
1966g (With L. Patil and V. Mukerjee), 'Multivariate exponential model of growth
and transactions for international trade', OPSEARCH, New Delhi, India,
Vol., 3, pp. 63-70.
1966h (With G. V. L. Narasirnham), 'Trend movements in aggregate economic
growth models', Metroeconomica 18 pp. 31-39.
1966i (With B. von Hohenbalken), 'Okonometrische Modelle der Mitgliedstaaten
der OEEC, der USA and Kanadas und ihre Anwendung auf die Wirts-
chaftpolitik', in E. and M. Streissler (eds.), Konsum and Nachfrage Cologne
Kiepenheuer und Wits, pp. 421-444.
1966j (With 1. K. Sengupta and E. J. Thomas), 'Application of the theory of
Stochastic Processes to Economic Development', I. Adelman and E.
Thorbecke (eds.), The Theory and Design of Economic Development,
Baltimore: Johns Hopkins Press, pp. 99-110.
1966k 'Estudios Sobre Modelos Econometricos', Revista de Economia 22: pp.
3-43.
19661 (With R. C. Patel), 'A Log-normal diffusion process applied to the develop-
ment of Indian agriculture with some considerations on economic policy',
Journal ofthe Indian Society ofAgricultural Statistics 18.
1966m 'Modern Decision Theory', Journal of the Indian Society of Agricultural
Statistics 18.
1967a (With G. V. L. Narasimham), 'Lineare Stochastische Difterentzengleichungen
fUr das Volkseinkommen 1851-1930', Die Statistik in der Wirtschafts-
forschung Festgabe fUr Rolf WagenfUhr Zum 60. Geburtstag. Berlin: Dunker
& Humbolt, p. 451 ft.
1967b (With E. Fels), 'methodik der Wirtschaftswissenschaft', Enzyclopiidie der
Geisteswissenschaftlichen Arbeitsmethoden, in M. Thiel (ed.), Vol. 8, Munich,
Oldenbourg, pp. 3-94.
1967c (With S. A. Farghali), 'The application of stochastic programming to the
UAR First Five Year Plan', Kylos 20: pp. 749-758.
1967d (With R. Zind), 'Un Modele Keynesien Simplifie de I'Economie Marro-
caine', Revue d'Economie Politique 77: pp. 51-161.
1967e (With W. Gensurowsky), 'Integration of input-output and programming in
land resources supply analysis', Unternehmungsforschung 11: pp. 259-
277.
1967f (With E. Fels), 'Mathematical economics in the Soviet Union', Communist
Affairs 5: pp. 3-8.
1968a 'International encyclopedia of the social sciences', in D. L. Sills (ed.),
Auspitz, Rudolf and Lieben Richard, New York, Macmillan and Free Press,
Vol. 1, pp. 470-471. Time Series (General), Vol. 16, pp. 47-59.
1968b (With I. Bello), 'Application de un Proceso Stocastico de Diffusion
Logaritmico Normal al Crecimiento Economico de Chile', Trabajos de
Estadistica 19: Cuaderno I & II, pp. 83-97.
1968c (With Y. Aureille), 'Les Problems de la Methodologie du Niveau de
I'Activite en France', Economie Appliquee, Archives de I'Isea 21: No.2,
pp.337-367.
1968d (With W. Pollan), 'Ein einfaches Makr6konomisches Modell fUr Osterreich',
Jahrbiicher fUr NationaLOkonomie und Statistik 181: pp. 397-405.
1968e 'Mathemetical Economics and Econometrics', A. A. Brown, E. Neuberger,
M. Palmatier ed., Perspectives in Economics, New York, MacGraw Hall,
pp.57-74.
1968f (With T. V. S. Ramamohan Rao), 'Investment allocation, relative price
stability and stable growth', Artha Vijnana, (Gokhale Institute of Eco-
nomics and Politics, Poona, India) Vol. 10, pp. 1-10.
1968g (With B. von Hohenbalken), 'Zur Existenz und Eindeutigkeit eines
Gleichgewichts im Walras-Cassel Modell', in H. Henn (ed.), Operations
Research Verfahren 5, Meisenheim, Verlag Anton Hain, pp. 415-422.
1968h (With W. Pollan), 'Ein logarithmisch-normaler DiffusionsprozeB mit
Anwendung auf die wirtschaftliche Entwicklung Österreichs', Wissenschaft-
liche Zeitschrift der Karl Marx Universität Leipzig, Germany, Vol. 7, pp.
843-844.
1968i (With T. V. S. Ramamohan Rao), 'Investment Allocation Relative Price
Stability and Growth', (Abstracts), Econometrica 36, suppl. p. 53.
1969a (With M. Patel) 'A Log-normal diffusion process applied to the growth of
yields of some agricultural crops in India', Journal of Development Studies
6: p. 49.
1969b 'What does control theory have to offer', American Journal of Agricultural
Economics 51: No. 2,pp. 383-393.
1969c The pure theory of production under technological risk and uncertainty', in
D. K. Coyer, R. M. Finley, and 1. C. Headley (eds.), Readings in Production
Economics, Educational Publ. Co., New York, pp. 88-95.
1969d (With O. H. Brownlee), 'Production functions derived from farm records',
in D. K. Coyer, R. M. Finley, and 1. C. Headley (eds.), Readings in
Production Economics, Educational Publ. Co., New York, pp. 79-84.
196ge (With 1. K. Sengupta and E. 1. Thomas), 'Applications of the Theory of
Stochastic Processes to Economic Development', in I. Adelman and E.
Thorbecks (eds.), The Theory and Design of Economic Development,
Baltimore, John Hopkins University Press, (paperback), pp. 99-110.
1969f (With M. V. Rama Sastry), 'Information Theory and the Statistical
Estimation of Economic Relations', Multivariate Analysis 2, N.Y. Academic
Press, pp. 687-696.
1969g (With M. Patel), The Use of Consumer's Surplus in the Evaluation of
Projects Applied to Indian Agriculture', in G. R. Seth (ed.), Contribution in
Statistics and Agricultural Sciences, New Delhi, Indian Society of Agri-
cultural Satistics, pp. 323-330.
1969h (With C. Glezakops), 'Macro-economic Model of the Greek Economy',
Spoudi Athens, Greece, No.1, pp. 1-18.
1970a (With J. Licari), 'The Stochastic view of economics', American Economist
14:No.4.
1970b (With N. S. Raghavan), 'Stochastic linear programming applied to a
dynamic planning model for India', Economia Internazionale 23: No.1,
Febr. pp. 105-117.
1970c (With I. Consigliere and 1. Carneiro), 'Un Modelo Econometrico Aplicado
a Economia Brasiliera', Revista Brasiliera Economica 24: No.1, Jan.!
March pp. 5-29.
1970d 'Lineare Methoden der Nationalokonomie und die Produktionsperiode von
Boehm-Bawerk', Zeitschrift fUr NationalOkonomie 30: pp. 1-5.
1970e (With 1. K. Sengupta), 'Stochastic linear programming with applications to
planning in India', Jahrubiich der Wirtschaft Osteuropas I, Munich, Olzog,
pp.188-200.
1970f (With R. Raghavan), 'Stochastic linear programming applied to a dynamic
model for India (Abstract)" Econometric Annual, Indian Economic Journal
17:No.6,781.
1970g (With R. Raghavan), 'Stochastic linear programming applied to a dynamic
model for India (Abstract)" Econometrica 38: p. 87.
1970h (With G. Kadekodi and S. Thompson), 'Statistical estimation of the logistic
and Gompertz functions applied to the problem of long term population
trends (Abstract), Econometrica 38: p. 184.
1970i (With I. Jimbo), 'An equilibrium model of economic growth', Metro-
economica XXII: pp. 1-15.
1971a (With J. K. Sengupta), 'A review of stochastic linear programming', Review
ofInternational Statistical Institute 39: No.2, pp. 197-223.
1971b 'Systematic planning and decision procedures', Jahrbiich der Wirtschaft
Osteuropas (Yearbook of the East European Economics), Vol. 2, Munich,
pp.115-189.
1971c (With G. Kadekodi) 'Note on the transformation of variables in simul-
taneous equation systems', Journal of the Indian Society of Agricultural
Research Statistics 23: pp. 163-173.
1972a (With G. Kadekodik and S. Thompson) 'Use of transformations and the
statistical estimation of long term population trends' in T. A. Bancroft (ed.),
Statistical Papers in Honor of George W. Snedecor, Iowa State University
Press, Ames, Iowa, pp. 285-292.
1972b (With M. V. Rama Sastry) 'A note on the use of nonparametric statistics in
stochastic linear programming', Management Science 19: No.2, pp. 205-210.
1972c (With A. F. M. Kandeel), 'Economic appraisal of the aswam high dam',
Festschift fUr W. G. Waffenschmidt, Meisenheim, Hain, pp. 180-190.
1973a 'Stochastic Economics', Essays in Honor of Gulseppe Ugo papi, CEDAM,
Padova.pp.341-355.
1973b (With R. C. Patel) 'Multivariate Log-normal Diffusion Process of Economic
Development', Operations Research Verfahren 14: p. 232.
1973c 'Some aspects of stochastic economics', Stochastics I: pp. 71-86.
1973d (With S. A. S. Ferghali), 'A note on stochastic programming in a develop-
ment plan-reply', Kyklos 26: pp. 399-401.
1973e (With G. Kadekodi), 'A note on the use of differences and transformation
in the estimation of econometric relations', Sankhya, ser. B, Vol. 35, pp.
268-277.
1974a (With G. Kadekodi and Rama Sastry M. V.) 'A Macro-model of the
economy for the explanation of trend and cycle with applications to India',
in W. Sellekaerts (ed.), Econometrics and Economic Theory, Essays in
Honor of Jan Tinbergen, MacMillan, London, p. 115.
1974b 'Control theory: balance and perspective', Bulletin of the International
Statistical Institute 45: No.4, pp. 302.
1974c (With B. Von Hohenbalken) 'Mathematische programmierung und ihre
anwendung auf die wirtschaft', Zeitschrift fUr NationalOkonomie 34: pp.
1-44.
1974d (With E. Deutsch and R. Rieder) 'A production function for Austria
emphasizing energy', in Franz-Lothar Altmann, Oldrich Kyn, and Hans-
Jurgen Wagener (eds.), On the measures of Factor Productivities, Papers
and Proceeding of the 2nd Reisenburg Symposium, June 23-27, 1974,
Vanden-holck & Ruprecht, Gottingen.
1974e 'Linear Economics and the Bohrn-Bawerk Period of Production', Quarterly
Journal ofEconomics 188:pp.127-132.
1974f 'Application de la methode des differences finies et de la transformation
des variables a l'estimation des systemes econometriques', Publications
Econometriques 7: pp. 1-12.
1974g (With R. Rieder and G. Kadekodi), 'Transformation von variablen und
transformationen durch differenzenbildung', Mitteilungsblatt der Osterrei-
chischen gesellschajt fUr Statistik und Informatik 4: Heft 15, pp. 102-112.
1975a 'Probabilistic economics', International Statistical Institute, International
Association for the Study ofProbability and Statistics, Warszawa, Vol. 9.
1975b (With R. Rieder and G. Kadekodi), 'Transformationen bei der Schatzung
von okonometrischen Beziehungen:, lahrbiicher fUr nationalokonomie und
Statistik, G. Fischer Verlag, Stuttgart, Heft 1/2, pp. 127-138.
1975c 'Stochastic programming and stochastic control', Instituto de Investigacion
Operativa y Estadistica XXVI Cuadernos, 1,2,3, Madrid, pp. 487-499.
1975d (With E. Deutsch, R. Rieder and P. Rosner), 'Ein energiekrisenmodell',
Empirica 2, ZeitschriJt des Osterreichischen Institutes fiir Wirtschaftsfors-
chung, pp. 125-164.
1975e (With G. Wiirgotter), 'The demand for energy in private households in
Austria', in William D. Nordhaus (ed.), Proceedings of the Workshop on
Energy Demand, May 22-23, International Institute for Applied Systems
Analysis, 2361 Laxenburg, Austria, pp. 330-334.
1976a 'The stochastic approach to planning', The Royal Institute of International
Affairs, Oxford University Press, pp. 233-249.
1976b (With P. Peek), 'Marginalism and linear economics: East and West', History
ofPolitical Economy 8: No.3, pp. 367-379.
1976c (With G. Worgotter), 'Zur Energienachfrage private Haushalte', Empirica
2, Zeitschrift des Osterr. Institutes fiir Wirtschaftsforschung, pp. 281-288.
1976d 'Einige Aspekte der statistischen Behandlung okonomischer Zeitreihen',
Sonderbruck aus den lahrbiichern fUr Nationalokonomie und Statistik,
Band 190, Heft 5, G. Fischer Verlag, Stuttgart, pp. 404-427.
1977a (With A. Sonderman) 'Statistical Aspects of Economic Aggregation', in R.
Henn and O. Moeschlin (eds.), Mathematical Economics and Game Theory,
Essays in Honor of Oscar Morgenstern.
1977b (With E. Deutsch and R. Rieder) 'A production Function for Austria
emphasising energy', De Economist 125: No.1, pp. 75-94.
1977d (With B. Bohm and R. Rieder), 'Stabilitatskonzepte am Beispiel Oster-
reichs', Empirica, No.1.
1977e (With G. Worgotter), 'Konsumfunktionen fiir Osterreich', Quantitative
Wirtschaftsforschung, Krelle-Festschrift, Mohr Verlag, Tubingen.
1977f (With G. L. Gomez) 'Sobre una aplicacion de la econometrica a la politica
economica de Colombia', Comercio Exterior 27: Num. 3, Mexico, Marz,
pp.286-292.
1977g 'Risk and Uncertainty. The Importance for the Homogeneity of Demand
and Supply Functions and the Dichotomy between Real and Monetary
Economies', in G. Schwodiaur (ed.), Equilibrium and Disequilibrium in
Economic Theory, pp. 277-294.
1977h (With E. Deutsch and R. Rieder), 'Großtechnische Energienutzung und
menschlicher Lebensraum', Hrsg. K. Strnadt und H. Porias.
1977i (With W. Pollan), 'The Short-Term Responsiveness of Consumer Expendi-
tures in the Austrian Economy', Empirica 2178, Zeitschrift des Osterr.
Institutes fur Wirtschaftsforschung.
1978a (With G. Worgotter), 'Une appliquation de la methode des differences
variees a I'hypothese du revenu permanent - Une investigation avec des
donnees Autrichiennes de 1955-1975', Structures Economiques et Econo-
metric, Lyon.
1978b (With G. L. Gomez), 'Sobre una applicacion de la econometrica a la eco-
nornia columbiana', Cuardenos de economia 6: No. 15, enero-abril.
1979a (With B. Bohm and R. Rieder), 'Is the Austrian Economy Stable?', in 1. M.
L. Janssen, F. L. Pau, and A. Straszek (ed.), Models and Decision Making
in National Economies, North-Holland Pub!. Co.
1979b (With G. Worgotter), 'Ein empirischer Text der Annahme der permanenten
Einkommenshypothese von Milton Friedman', Empirica 1/79, Zeitschrift
des Osterr. Institutes fur Wirtschaftsforschung.
1979c (With G. Worgotter), 'Une application de la methode des differences
variees a I'hypothese du revenue permanent et son utilisation a l'etude des
donnees Autrichienne de 1955-1975', Publications Economerriques XII,
Fasc. 1, Ed. Sirey.
1979d (With E. Deutsch), 'Methodische Probleme der Energiefrage', Wirtschafts-
politische Blatter, Energie und Wirtschaftstheorie, 26, Jg., Nr. 3.
197ge (With B. Bohm), 'Ein okonometrisches Ungleichgewichtsmodell fur den
aggregierten osterreichischen Arbeitsmarke', Beitrage zur Diskussion and
Kritik der neoklassischen okonomie Festschrift fur Kurt W. Rothschild und
Josef Steindl, Hrsg. K. Laski, E. Matzner, E. Nowotny, Springer Verlag,
Berlin, Heidelberg.
1979f (With A. Worgotter and G. Worgotter), 'Ein K1eines monetares Modell fur
Osterreich', Empirische Wirtschaftsforschung und monetare okonomik,
Festschrift fiir Stephan Koren zum 60. Gerburtstag, Hrsg. W. Clement und
K. Socher, Duncker & Humblot, Berlin.
1979g 'Stochastische Programme', Handworterbuch der Mathematischen Wirts-
chaftswissenschaften, Gabler.
1979h 'Okonometrie - Entwicklung und Aspekte der Zukunft', Das Wissen-
schaftsunimagazin 'thema', Dez, Nr. 11.
1979i (With G. L. Gomez), 'Applicacion de los procesos de difusion a la
planificacion del desarrollo economico', Trabajos de Estadistica y de
Investigaci6n Operativa 30, Num. 2, pp. 33-35.
1980a (With G. L. Gomez), 'The application of diffusion processes in problems of
developmental economic planning: A case study (Columbia)', in Prof. H.
Los (ed.), Studies in Economic Theory and Practice, North Holland Pub!.
Co., pp. 177-194.
1980b (With B. Bohm and R. Rieder), 'A system of demand equations for
Austria', Empirical Economics 5, pp. 129-142, Vienna.
1981a (With M. Deistler), 'Identifiability', in J. Gani and V. K. Rohatgi (eds.),
Contributions to Probability, Academic Press, pp. 231-246.
1981b (With E. Deutsch), 'Aggregation und empirische Modelle: Untersuchungen
anhand eines Produktionsmodells', Jahrb. f. Nationalökonomie und
Statistik, Bd. (Vol.) 196/4, Fischer, Stuttgart.
1982a (With A. and G. Wörgötter), 'Makroökonomische Modellbildung', in
Angewandte Statistik und Wirtschaftsforschung heute, Vandenhoeck &
Ruprecht, Göttingen.
1982b 'A 'Simple' theory of business fluctuations', in Richard H. Day (ed.),
Economic Analysis and Agricultural Policy, The Iowa State University
Press, Ames.
1983 'Aus der Geschichte der modernen Statistik', Mitteilungsblatt der Öster-
reichischen Statistischen Gesellschaft, 13. Jg., 63-66.
1984 (With P. Rosner, A. and G. Wörgötter), 'Lohnbestimmung, außenwirtschaft-
liche Stabilität und internationale Stagnation', Jahrbücher für Nationalö-
konomie und Statistik, Vol. 199/3, 193-212.
1985 (With P. Rosner, A. and G. Wörgötter), 'Lohnzurückhaltung bei fixen und
flexiblen Wechselkursen', Kredit und Kapital, 18. Jg., 299-319.
1986 'Variate difference method', Encyclopedia of statistical sciences, John Wiley
& Sons, New York (forthcoming).
Econometrics Needs a History:
Two Cases of Conspicuous Neglect

KARL A. FOX*
Iowa State University, Ames, Iowa 50011, U.S.A.

Econometrics needs a history. Our knowledge of our own past is very
fragmentary. Instead of history we have oral traditions, and the tradi-
tions of different schools have glaring inconsistencies and incredible
gaps. No civilized community would be content with this state of
affairs, and if we econometricians are a community and are civilized,
neither should we.
In this paper I cite two examples of conspicuous neglect of impor-
tant aspects of our history. The first involves the brilliant contributions
of Evans, Roos, and Tintner to the theory of optimization over time.
The second concerns the leading role of agricultural economists and
their close associates in the development of applied econometrics.

PART I.
THE MATHEMATICS AND ECONOMICS OF OPTIMIZATION OVER TIME:
EVANS, ROOS, AND TINTNER, 1922-1942

Recently I undertook to write a brief biographical sketch about
Gerhard Tintner for The New Palgrave: A Dictionary of Economic
Theory and Doctrine (Fox, 1987b). To understand what Tintner
contributed to economic theory, I should have known a great deal
about (1) the state of economic theory in 1930, (2) the flow of
new theory from 1930 through 1960, (3) the substantive merits of
Tintner's own articles, and (4) the influence of his articles upon the
thinking and writing of others. From the vantage point of 1986, I
should also have known enough about recent (and future?) develop-
ments to say that Tintner's articles had received less (or more?)
recognition than was their due.

* Distinguished Professor in Sciences and Humanities and Professor of Economics,
Iowa State University, Ames, Iowa 50011, U.S.A. I am indebted to Arnold Faden,
Stanley Johnson, Michael Lawlor, Charles Meyer, Mary Morgan, Mark Perlman, and
Herbert Simon for comments on an earlier draft but I take full responsibility for all
interpretations.
This task should have been tackled by a professional historian of
economic thought. After a generation of neglect following World War
II, the field has enjoyed a resurgence symbolized in part by the journal
History of Political Economy, established in 1969. A few young
people with strong training in mathematical economic theory and
econometrics have entered the field very recently and during the next
few years they may contribute the historical articles I sought, but did
not find, in 1986. Let me describe some of the problems of under-
standing Tintner's contributions.
From 1936 through 1942, Tintner published nearly a dozen articles
in which he used the calculus of variations to formulate dynamic
versions of various branches of static economic theory (Tintner, 1936,
1937, 1938a, b, 1939, 1941a, b, 1942a, b, c; Tintner, 1946 is a
logical sequel to the series). He cited Griffith C. Evans (1930) and
Charles F. Roos (1934) as sources of inspiration and examples of the
fruitful application of calculus of variations to dynamic economics.
I had met Evans in 1937 (when I was 20) and I had seen Roos at a
session of the Econometric Society about 1954. I knew that Roos had
joined with Ragnar Frisch in 1928 to seek Irving Fisher's support in
organizing a new society for the advancement of economic theory in
its relation to mathematics and statistics. The enthusiasm of the two
younger men convinced Fisher, and the Econometric Society held its
organizational meeting in December 1930. For this role if nothing
else, Roos deserves a prominent place in the history of econometrics.
As of 1937 I knew that Evans was a respected professor of mathe-
matics at Berkeley who taught courses in calculus of variations and
occasionally in mathematical economics. Many years later I saw a
ten-story building on the Berkeley campus called Evans Hall and
formed the impression that Evans' long chairship of the Mathematics
Department had contributed greatly to the department's international
stature. During the 1970s and early 1980s at least, Evans Hall also
sheltered the mathematically-oriented wing of Berkeley's Economics
Department.
Why was Tintner so impressed by Evans and Roos? I leafed
through the two books specifically cited by Tintner (Evans, 1930 and
Roos, 1934). These led me to a series of articles on mathematical
economics by Evans beginning in 1922 and Roos in 1925, which I
also scanned. How did these two talented mathematicians get inter-
ested in economics? For Roos, a plausible answer was available; he
had completed his Ph.D. at Rice Institute in 1926 (at age 25) under
the guidance of Evans. But what about Evans? Who Was Who in
America indicates that he was born in 1887, completed his Ph.D. at
Harvard in 1910 (at age 23), and spent 1910-1912 studying at the
University of Rome. Other evidence indicated that in Rome he
worked very closely with Vito Volterra, who had originated the theory
of functionals in 1887 and also did pathbreaking work on integral and
integro-differential equations.
I happened to own a copy of a book on functionals by Evans, based
on lectures Evans had given in 1916; it contained far more references
to Volterra than to any other author. Volterra (1930) credits Evans
with having published the first book with ''functionals'' in its title;
Volterra's bibliographies contain many references to Evans, and
several to Roos.
Evans (1922, pp. 379-380) notes that a producer is interested in
maximizing his total profit over a period of time of considerable extent
and with reference to cost functions which are themselves changing as
a whole with respect to time. "The mathematical discipline which
enables us to find functions which make a maximum or a minimum
quantities which depend upon them throughout periods of time is the
calculus of functionals, or in special case the calculus of variations.
But the quantity which we want to make a maximum over a period of
time need not be the total profit; it may be the total production, or
whatever other quantity we wish to take as a desirable characteristic of
the social system we discuss. The author regrets that at the present
time he can refer only to his lecture courses for a further treatment of
this point of view. Nevertheless it seems the most fruitful way that a
really theoretical economics may be developed".
In reviewing Evans (1930), Hotelling (1931b) comments that "of all
the distinguished economists who have begun as mathematicians,
none has gone so far in mathematics as has Professor Evans. ...
Hence it is not surprising that this work involves more mathematics
than its predecessors and goes into significant economic problems
which heretofore have been almost untouched. ...
Many economic problems seem to require formulation in terms of
functionals, on which Professor Evans is a leading authority. This book
helps to lay a groundwork upon which future contributions to political
economy of first-rate importance may be expected to be based."
Hotelling (1931a, p. 102) cites Evans (1924), which uses the calculus
of variations, as the earliest article on economic theory to use "mathe-
matics more advanced than the most elementary parts of calculus".
At this point I was sufficiently informed concerning Evans' stature
as a mathematician, and the reasonableness of his approach to
dynamic economics, to understand Tintner's enthusiasm. I would still
like to know the patterns of interaction among Italian mathematical
economists during 1910-1912, their prestige among Italian scientists
generally, and their associations with Italian mathematicians. Was
mathematical economics the most exciting game in Italian science as of
1910-1912? How could it otherwise have attracted a young mathe-
matician of Evans' capacity?
Hotelling (1958, p. 1194) describes Charles F. Roos as 'a unique
and outstanding figure'. Born in 1901, he completed B.A., M.A., and
Ph.D. degrees in mathematics at Rice Institute in 1921, 1924, and
1926 respectively, with Evans as his principal adviser. Roos spent
1926-1928 as a National Research Fellow in Mathematics at Chi-
cago, Princeton, and Cornell. During his graduate student and post-
doctoral years, he published several brilliant articles on dynamic
economic theory (Roos, 1925, 1927a, b, c, 1928). Roos spent 1928-
1931 as an assistant professor of mathematics at Cornell, resigning in
1931 to become permanent secretary of the American Association for
the Advancement of Science. In 1933 he resigned that post on receiv-
ing a Guggenheim fellowship to work in England on mathematical
economics.
Within three months the economic crisis in the United States, and
the New Deal response to it, led him to return and become principal
economist and director of research for the National Recovery Admin-
istration (NRA). Most of the chapters in his book Dynamic Eco-
nomics (1934) had been written in the summer of 1930 and during
his three months in England; they were supplemented by some
analyses from his job at NRA. Integral equations were very much in
evidence, often as starting points for highly concrete applications.
When the Act creating the NRA was invalidated by the U.S.
Supreme Court in 1934, Roos became Director of Research at the
Cowles Commission (then in Colorado Springs, Colorado); Tintner
was a Research Fellow there in 1936-1937. Roos left for New York
in 1937, served briefly as research director for an investment manage-
ment firm, and then organized the Institute for Applied Econometrics,
whose name was later shortened to the Econometric Institute, Inc.
Roos had served as the first Secretary of the Econometric Society
during 1931-1936; he later served as the Society's Vice-President
(1947) and President (1948). A full-page photograph of him appears
as the frontispiece of the July 1952 issue of Econometrica.
It seems clear that Tintner's enthusiasm as of 1936-1942 for Roos'
work was scientifically appropriate. But I would like to have known
much more about the interactions among Evans, Roos, and several
other Americans who had been trained to the PhD. level in mathe-
matics, including Harold Hotelling, Harold T. Davis, Edwin B. Wilson,
and Irving Fisher. Wilson and Fisher had both written their doctoral
dissertations under the guidance of 1. Williard Gibbs, probably the
greatest scientist the United States had produced prior to 1900.
Fisher, of course, became primarily an economist and university
professor. Roos became a full-time econometrician active (from 1937)
in the private sector. Hotelling moved through economics to mathe-
matical statistics. Collectively, had these six or more trained mathe-
maticians stimulated enough graduate students and younger colleagues
to provide a readership for Tintner's work? I don't believe the needed
history has been written.
My impression is that Tintner's 1936-1942 articles received much
less recognition than their scientific merits justify. Moreover, econo-
mists who failed to cite Tintner also failed to cite Evans and Roos.
Samuelson (1947) says a great deal about dynamical theory and
functional equations but makes no mention of Evans, Roos, or
Tintner. This may have been an oversight, as in his enlarged edition
Samuelson (1983, p. xxv) includes Griffith C. Evans along with Frisch,
Tinbergen, Hotelling, Wilson, Leontief, and Georgescu-Roegen as his
'then living elders' whose works he had pored over during the 1930s
and early 1940s.
Baumol's Economic Dynamics (1951) cites two articles by Tintner
(1941a and b); the references are in an appendix to Chapter 5
supplied by Ralph Turvey: 'Uncertainty and the equilibrium of the
firm'. Turvey also supplied an appendix to Chapter 8 entitled 'A note
on functionals' but makes no reference to Evans or Roos. I scanned a
collection of articles on modern growth theory published in the 1950s
and 1960s, all of which involved optimization over time; I found no
references to Evans, Roos, or Tintner.
Roos (1927b, p. 280) wrote: "In a previous paper I have shown
how the dynamic problem of competition is related to a dynamic
generalization of the static theory of economic equilibrium of Walras
and Pareto". The paper cited was Roos (1927c); in it Roos outlined
the several sets of equations that constituted Walras' complete system,
extended them to include additional variables involving cumulations,
rates of change, and other dynamic elements, and stated the conditions
required for solution of the dynamic system. Several of Tintner's
papers also imply dynamic formulations of major subsets of Walras'
system.
In leafing through some classics of general equilibrium theory
published in the 1950s I formed the following impressions:
(1) Ideas involving topology and convexity were introduced to
economists in the mid-1930s by gifted professional mathematicians,
notably Wald and von Neumann, who identified formal mathematical
deficiencies in Walras' system;
(2) Some gifted and personally impressive young economists
undertook to reformulate Walras' system to satisfy the highest contem-
porary standards of mathematical rigor;
(3) In asserting the superiority of these new ideas over calculus for
some purposes, they may have left the impression with graduate
students that they were superior to calculus for all purposes;
(4) The inferiority of differential calculus presumably extended to
all other forms or branches of calculus.
I find some support for my impressions in Allais (1978). After
citing passages from articles by Debreu, Allais (p. 141) summarizes
Debreu's points as follows:

"... the new theory [based on topology and convexity K.F.] claims
to be more suggestive, more general, more realistic, and more
rigorous than the earlier theories based on differential calculus and,
in light of the results to which it leads, it concludes that differential
calculus should never be used in studying equilibrium conditions
and maximum efficiency situations".

With this conclusion Allais deeply disagrees. He concedes advan-


tages to the new theory on some points, but charges it (p. 144) with
major internal weaknesses, fundamental logical inconsistencies, and
some postulates that formally conflict with "the data of introspection
on fields of choice and ... the observed data on fields of production.
Last, in many economic analyses there is no substitute for the use of
differential calculus. In fact it is irreplaceable".
Allais' detailed arguments in favor of differential calculus in static
models of general equilibrium would evidently apply to calculus of
variations in their dynamic extensions. The contributions of Evans,
Roos, and Tintner may yet bear fruit in the decade ahead.
Several additional references for Part I have been suggested by
friends who read my preliminary draft. Edgeworth (1881) described
calculus of variations as the highest branch of mathematics and
suggested its application to a number of static optimization problems
in the social sciences, each problem involving a rather simple function.
Volterra (1887, 1930) emphasized dynamic problems involving the
optimal time-paths of several or many variables. Evans and Roos in
their journal publications were wholly concerned with dynamics; Roos
(1927c, p. 632) took as his starting point the one passage in Cournot
which refers to movements of prices and demand functions within a
given year.
Simon (1987) emphasizes other contributions of Evans to mathe-
matical economics both directly and through his students. Fox (1987)
supplies a biographical sketch on Roos written on short notice at the
request of an editor of The New Palgrave. Morgan (1985a) cites two
articles by Evans and two by Roos.
Davis (1941) makes eleven page references each to Roos and
Tintner and nine to Evans. In his preface (p. xiii), Davis asserts his
particular indebtedness to Tintner for many useful suggestions: "His
[Tintner's] wide acquaintance with the literature of economics has
made his criticism ... of great value".
But Davis' esteem for Tintner, Evans, and Roos had little impact, as
Davis himself was rarely cited by mainstream economists. Christ
(1952) includes photographs of Davis and Roos and biographical
sketches on Davis, Roos, and Tintner. Davis was associated with the
Cowles Commission during 1932-1937; Roos was the Commission's
research director from September 1934 to January 1937; Tintner was
a Research Fellow at the Commission during 1936-37. As new
people moved into the Cowles Commission after 1939, no trace of the
influences of Davis, Roos, and Tintner appears to have survived.
Weintraub (1983) provides an excellent history of work during
1930-1954 on existence proofs for competitive equilibrium. On page
21 he quotes Morgenstern and von Neumann as scorning the 'primi-
tive mathematics' of Hicks' Value and Capital. This supports my
impressions and the considered judgment of Allais. The pursuit of
mathematical rigor at the expense of operationality may also have
delayed serious work on computable general-equilibrium models for a
couple of decades (see Shoven and Whalley, 1984 for a survey and
history of such models).

PART II.
AGRICULTURAL ECONOMISTS AS WORLD LEADERS IN
APPLIED ECONOMETRICS, 1917-1933

During the 1950s and 1960s many young economists shared a com-
mon set of misconceptions about econometrics: (1) econometrics
consisted of the limited information maximum likelihood (LIML)
method of estimation; (2) structural equations estimated by LIML
were right, all others were wrong; (3) all structural equations con-
tained disturbances and the disturbances in all possible pairs of
equations were significantly correlated; (4) all economic variables were
measured without error; and (5) the history of econometrics began in
the 1940s with solution of the identification problem, which had been
formulated by E. J. Working in 1927.
Not being curious about history, they did not realize that E. J.
Working's article was part of a very extensive literature; they had
found an island and missed a continent. A little further searching in
the same journal (QJE) would have disclosed articles by Holbrook
Working (1925) and Mordecai Ezekiel (1928) reflecting a sophis-
ticated understanding of the identification problem posed by E. J.
Working and noting a solution for it in most practical problems
involving agricultural products. Serious effort would have led them to
Holbrook Working (1922) who found substantial levels of error in
important variables and to F. V. Waugh (1923) who encountered
errors and disturbances in the same equation. A semester's explora-
tion of the related literatures might have converted them to a mature,
realistic, and professional conception of the place of quantitative
methods in economic research. Only the most rigid could have drawn
back at the last moment, saying "if the world does not conform to
LIML specifications, so much the worse for the world".
The coincidence in 1985 of the 100th anniversary of the American
Economic Association and the 75th anniversary of the American
Agricultural Economics Association has temporarily stimulated the
interest of each group in its own history. As econometrics is currently
the common property of economists and agricultural economists, the
history of applied econometrics from about 1908 to 1933 should be
of interest to both groups.
During the 1920s, U.S. agricultural economists and their close
associates led the world in the creation and application of what are
now known as econometric techniques. The most important single
figure in this development was Mordecai Ezekiel. Howard R. Tolley,
Holbrook Working, Frederick V. Waugh, Louis H. Bean, and E. J.
Working made substantial contributions. W. J. Spillman and John D.
Black helped prepare the way for Tolley and Ezekiel. Henry A.
Wallace was the first agriculturist to pay attention to H. L. Moore's
seminal work on demand analysis. Sewall Wright, a geneticist in the
U.S. Bureau of Animal Industry during 1915-1925, interacted with
Wallace and Ezekiel on problems of statistical method. Henry Schultz,
Moore's one disciple, remained somewhat aloof from the agricultural
economists but they were continually aware of each other's work as
Schultz, like Moore, did virtually all of his empirical research on
agricultural commodities.
During the period of most intense activity in 'agricultural econo-
metrics', from 1922 to 1930, most of the contributors were in their
twenties or early thirties. They made dazzling impressions on some
slightly younger contemporaries who would later be among the first
recipients of the Nobel Prize in Economic Science. In a 1985 letter to
the author, Tinbergen said that his own early heroes included "Henry
Schultz and several agricultural economists". Leontief (1971, p. 5), in
his 1970 presidential address to the American Economic Association,
praised agricultural economists for providing "an exceptional example
of a healthy balance between theoretical and empirical analysis. . ..
They also were the first among economists to make use of the
advanced methods of mathematical statistics".
Formal recognition of the professional quality of the agricultural
econometricians came from the American Statistical Association and
the Econometric Society. Ezekiel became a vice-president of the ASA
in 1929 (at age 30) and a Fellow by 1935; Tolley, Waugh, H.
Working, and Wright became ASA Fellows in the 1940s. The
Econometric Society elected its first group of Fellows in 1933; Ezekiel
was included in the second group (1935), and Waugh and the two
Workings became Fellows in the 1940s. Moore and Henry Schultz
also became Fellows of both associations.
Why did agricultural economists and their associates take such a
commanding lead in the 1920s? On the negative side, the various
groups of general economists had disqualified themselves from any
activity of this sort. J. N. Keynes (1917, p. 253) observed that "to
some economists the very idea of a mathematical treatment of
economic problems is not only repugnant, but seems even absurd".
This attitude characterized the great majority of general economists in
the United States and Germany at the time and a substantial number
even in England.
Many American economists took a dim view of economic theory of
any kind, whether mathematical or verbal, but attached great impor-
tance to the systematic development of economic data. The efforts of
Wesley Clair Mitchell and his associates at the National Bureau of
Economic Research (founded in 1920) were particularly notable but
provided little material for econometric analysis until the 1930s. The
leading English economists set great store by theoretical demand
curves but doubted whether Jevons' hope of constructing empirical
demand curves by statistics was capable of realization. Both groups
were convinced that efforts to confront theory with data would be
fruitless.
The general economics profession had arrived at an absolute
impasse. H. L. Moore (1908, 1911, 1914, 1917, 1919) made a heroic
effort to break it from within the profession; he received a scattering
of polite applause but no other general economist was moved to
emulation - apart from Moore's own graduate student, Henry
Schultz, who would complete his Ph.D. in 1925.
In contrast, Henry A. Wallace, the brilliant associate editor of
Wallaces' Farmer, wrote Moore in 1915 expressing great interest in
his work on 'laws of demand' for farm crops and asking Moore to
direct him to a book which explained in a simple arithmetical way how
to apply the method of least squares. Wallace's intention was to derive
similar laws of demand for cattle, hogs, and sheep. These statistical
demand curves were needed to explore the desirability of a forward-
pricing policy that would stabilize the incomes of livestock producers.
Wallace proceeded to derive such demand curves and explore such
policies in his 1920 book, Agricultural Prices.
Glenn L. Johnson (1986a, 1986b) distinguishes between three types
of research - disciplinary, subject matter, and problem-solving. The
latter two are inherently multidisciplinary. As of 1920, general
economists were deadlocked over methodological issues of interest
only to members of their own discipline; Moore addressed his
research to, and sought his rewards from, them. Wallace's research
involved a subject matter phase (the scientific study of factors involved
in the production, marketing and pricing of livestock) and a problem-
solving phase (designing a program to stabilize the incomes of
livestock producers). His intentions were stated as follows (Wallace,
1920, p. 3):
"This book is written for all those who may be interested in the
matter of prices of agricultural products, but more especially for
farmers of the corn belt states, students of agricultural colleges, county
agricultural agents, and leaders of farm organizations. Its purpose is to
promote a better understanding of the factors which influence prices
of farm products and stimulate an intelligent interest in statistical
economics ...".
Moore experienced great difficulty in selling his ideas to professors
of general economics; Wallace had little trouble in selling them to
students of agricultural economics. The latter were aware that several
applied sciences were relevant to the production and marketing of
(say) corn and hogs; these sciences were represented in the teaching
and research programs of the colleges of agriculture and their asso-
ciated agricultural experiment stations. Observation, measurement,
and experimentation were part of the culture.
Moreover, the agricultural experiment stations of the various states
were loosely coordinated into a national system with ties to the U.S.
Department of Agriculture. In and near Washington, D.C., the USDA
had concentrations of research scientists in bureaus corresponding to
major program areas in the state experiment stations. Between 1915
and 1922, several economic and statistical activities in USDA were
combined to form a new bureau - the Bureau of Agricultural Eco-
nomics (BAE).
The BAE as of 1922 provided an unprecedentedly rich environ-
ment for economic research. It included by far the largest group of
research-oriented economists that had ever been assembled in the
United States. Its Chief, Henry C. Taylor (formerly of the University
of Wisconsin), understood the importance of freedom of inquiry for
creative research.
The BAE was large enough and self-confident enough to be eclectic
in some of its recruiting. Tolley taught advanced statistics in the
USDA Graduate School during 1921-1922 and soon found that the
dominant intellect among his 40 or 50 students was a young Census
Bureau employee named Mordecai Ezekiel. Ezekiel had no farming
experience, and he had a B.S. degree in technical agriculture (not
economics). But he was brilliant, so Tolley offered him a job; Ezekiel
accepted because he admired Tolley. Louis Bean was a Harvard
M.B.A., but he had particular talents that BAE could use. Some
agricultural economists were recruited at the B.S. level and learned on
the job; among them was O. V. Wells, whom Tolley referred to as the
only other person he knew who could think as fast as Ezekiel.
The BAE during the 1920s was in a class by itself as a home for
applied econometrics; the land grant universities shared some of its
advantages. The size, economic organization, and political importance
of U.S. agriculture created a demand for comprehensible results that
could be relied on (within carefully-stated limits of applicability) by
farmers and policy-makers. These results were in the public domain
and they were supposed to reflect the state of the arts in the relevant
basic sciences. Analogous conditions were not met during the 1920s
in any agency, university department, or institute staffed by general
economists.
Additional historical and institutional background is given in Fox
(1987c). I turn now to some contributions of agricultural economists
and their close associates to applied econometrics. The contributions
involve multiple regression techniques, errors in variables, identifica-
tion problems, and examples of substantive research. Space permits
little more than an enumeration of these.

Multiple Regression Techniques, 1923-1935

The first problem tackled by Tolley and Ezekiel (1923) was the design
of efficient and accurate methods for computing linear multiple regres-
sion and correlation measures by the method of least squares. Their
version of the Doolittle method greatly extended the accessibility of
least squares regression techniques to economists.
The second problem tackled was that of curvilinear multiple regres-
sion for any number of variables. Its solution is credited exclusively to
Ezekiel (1924), though the need for it arose during the joint research
effort reported in Tolley et al. (1924). Faced with the problem of
estimating a production function in which the output would be
expected to show diminishing returns to each of three inputs, Ezekiel
computed a linear multiple regression equation by least squares and
then improved the fit with curvilinear net regressions drawn freehand
subject to a priori restrictions on shape; each net regression was
adjusted in turn by an iterative procedure which might be carried
through two or more cycles until the sum of squared residuals from
the curved surface was approximately minimized.
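For readers who want a concrete picture of this successive-adjustment logic, the following is a minimal modern sketch in Python, using simulated data and a crude nearest-neighbour averaging smoother in place of Ezekiel's freehand curves; it illustrates the iterative ('backfitting') idea only and is not a reconstruction of his graphic procedure.

```python
# A hedged sketch of the successive-adjustment ("backfitting") idea: start from the
# linear least-squares fit, then adjust each net regression in turn against the partial
# residuals until the residual sum of squares settles down. The data and the smoother
# are illustrative assumptions, not Ezekiel's graphic method.
import numpy as np

def smooth(x, r, frac=0.3):
    """Average the partial residuals r over the nearest `frac` share of observations,
    ordered by x -- a rough stand-in for a curve drawn freehand subject to shape rules."""
    n = len(x)
    k = max(2, int(frac * n))
    order = np.argsort(x)
    rs = r[order]
    fitted_sorted = np.array([rs[max(0, i - k // 2): max(0, i - k // 2) + k].mean()
                              for i in range(n)])
    fitted = np.empty(n)
    fitted[order] = fitted_sorted
    return fitted

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(0.0, 1.0, size=(n, 3))
y = 2.0 + np.sqrt(X[:, 0]) + np.log1p(X[:, 1]) - X[:, 2] ** 2 + rng.normal(0.0, 0.1, n)

# Step 1: linear multiple regression by least squares.
Z = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
alpha = beta[0] + beta[1:] @ X.mean(axis=0)
f = [beta[j + 1] * (X[:, j] - X[:, j].mean()) for j in range(3)]

# Step 2: adjust each net regression in turn, for "two or more cycles".
for cycle in range(4):
    for j in range(3):
        partial = y - alpha - sum(f[k] for k in range(3) if k != j)
        f[j] = smooth(X[:, j], partial)
        f[j] -= f[j].mean()               # keep the overall level in alpha
    alpha = (y - sum(f)).mean()
    rss = np.sum((y - alpha - sum(f)) ** 2)
    print(f"cycle {cycle + 1}: residual sum of squares = {rss:.3f}")
```

In Ezekiel's hands the 'smoother' was a curve drawn by eye subject to prior restrictions on shape, and the arithmetic was carried out by clerks rather than by matrix routines.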
Ezekiel and Waugh did considerable work on 'joint functions', in
which the partial curvilinear regressions of X1 on X2 changed in shape
and level as functions of X3 and other variables. Examples are given in
Ezekiel (1930a).
As of 1929 there was no way to estimate the probable errors of
forecasts from graphic curvilinear regressions. Ezekiel (1930b) reports
the results of what we would now call Monte Carlo experiments, in
which statistical clerks applied graphic methods to a large number of
samples in which the dependent variable contained random errors
from normal distributions with known variances.
Louis Bean (1929b) developed a short-cut graphic method which
became very popular. F. V. Waugh used both graphic and Doolittle
methods in the 1920s. During 1932-1933, he spent several months
working with Ragnar Frisch in Oslo and became very much interested
in matrix methods. Waugh published a procedure for computing
multiple regression constants using the inverse matrix in 1935.

Problems of Errors in Variables, 1912-1932

The presence of errors in variables was recognized from the begin-
ning. Tolley's applications of least squares as a computer at the
U.S. Coast and Geodetic Survey during 1912-1915 were motivated
exclusively by the existence of errors of observation. Waugh (1923;
reprinted 1984), in his publication on factors affecting the price of
New Jersey potatoes, listed six sources of error in his regression
estimates of prices. Three of these were errors, not necessarily
random, in the dependent and two independent variables; two others
would be classed as 'disturbances' in later terminology; and the sixth
source was computational errors of about one percent in two of the
regression coefficients.
Charles F. Sarle (1927, 1932) was trained as an economist but was
assigned to the Crop Reporting Service within BAE. In 1927 he
published a major bulletin on the adequacy and reliability of the
bureau's price estimates for crops at both state and national levels,
and in 1932 he published a similar analysis of its estimates of crop
yields. Both bulletins contained estimates of the standard errors of the
published data.
Holbrook Working (1922, 1925), Mordecai Ezekiel (1930a), and
Henry Schultz (1925, 1928) were very much aware of the approxi-
mate magnitudes of errors in specific variables, of the biasing effects
of random errors in the independent variables, and (in Schultz's case
particularly) of the availability of techniques for estimating lines and
planes of 'best fit' which assumed the presence of errors in all
variables.
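The biasing effect just mentioned can be stated compactly in modern notation (the standard attenuation result, not these authors' own formulation): if the true regressor x is observed as x* = x + e, with the measurement error e uncorrelated with x and with the equation disturbance, the least-squares slope satisfies

\[
\operatorname{plim}\ \hat{\beta} \;=\; \beta \,\frac{\sigma_{x}^{2}}{\sigma_{x}^{2} + \sigma_{e}^{2}} ,
\]

so random error in an independent variable pulls the estimated coefficient toward zero, which is why the approximate magnitudes of such errors mattered so much to Working, Ezekiel, and Schultz.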

Identification Problems, 1913-1934

Arguments about the nature of causation are as old as philosophy.
Experimental scientists usually content themselves with short causal
chains. Tolley (1917) published simple and partial correlations com-
puted from cross-section data on farm inputs and outputs. He com-
mented that "it is generally known before the analysis is attempted
which factors are causal and which resultant, and consequently there
should be little difficulty in interpreting the coefficients correctly".
(p.13)
Time series data presented nontrivial identification problems. The
identification problem for simultaneous demand and supply curves
was stated and solved by Lenoir in 1913. Moore (1914) was following
a causal chain which ran from rainfall (spring and summer) to yield
and production (fall) to price (winter) for some major crops. He
correctly chose production as his independent variable and price
as his dependent and obtained very reasonable downward-sloping
demand curves for corn, oats, hay, and potatoes. According to Stigler
(1962), three reviewers of Moore's 1914 book pointed out that his
procedure would yield valid demand curves only if the true demand
curves were quite stable while the corresponding supplies fluctuated
erratically - as was indeed the case. One of the reviewers, P. G.
Wright (1915), acknowledged help from his son, Sewall.
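The reviewers' point can be put in modern notation (a sketch, not Moore's own algebra). Write the stable demand relation as p_t = a - b q_t + u_t with a small disturbance u_t, while q_t is driven by weather and therefore fluctuates independently of u_t. The least-squares regression of price on quantity then satisfies

\[
\widehat{\text{slope}} \;\xrightarrow{\;p\;}\; -\,b \;+\; \frac{\operatorname{Cov}(q_t, u_t)}{\operatorname{Var}(q_t)} \;=\; -\,b ,
\]

so the fitted line traces out the demand curve. Had the demand disturbance also shifted quantity, the same regression would have mixed the demand and supply relations.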
I have not come upon any study made by leading agricultural
econometricians using annual data which failed to recognize and solve
the demand-supply identification problem. Moore (1919) and Bean
(1929a) estimated supply functions using last year's price as one of the
independent variables. Haas and Ezekiel (1926) recognized that
unanticipated price changes could lead producers to market their hogs
a week or two earlier (or later) than usual, resulting in a moderate
decrease (or increase) in average weight per hog. But changes in the
numbers of sows bred and pigs raised were a much more important
kind of supply response, and it required a full year.
E. J. Working's 1927 paper has received undue attention from
general economists. The identification problem was already well
understood by Ezekiel (1928) and other agricultural econometricians.
Estimation of demand and supply curves in the truly simultaneous
case would have required two additional 'shift variables', one for each
curve. From 1914 until the early 1930s, lack of data on 'shift
variables' for demand functions forced extensive use of time trends.
Kuznets' annual series on the U.S. national income from 1919 to
1932 was first published in the mid-1930s. The Commerce Depart-
ment issued its first official estimates of national income in 1934; they
covered the years 1929 through 1932 and were never extended back
to earlier years (see Duncan and Shelton, 1978). Bean (1933) devel-
oped a monthly index of industrial workers' income, 1919-1932, and
Wells and Bean (1933) constructed a more broadly based monthly
index of urban consumers' income from 1919 through 1933. These
two papers are reproduced as Appendices A and B of Bean, Bollinger
and Wells (1937). Bean and Wells designed these indexes as measures
of changes in consumer demand (demand curve 'shifters') to improve
BAE's price analysis and outlook work.
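The 'two additional shift variables' mentioned above correspond to what would later be called identification by exclusion restrictions. A sketch of the truly simultaneous case, with an income shifter z^d entering only the demand equation and a supply shifter z^s (weather or lagged price) entering only the supply equation:

\[
\text{demand:}\quad q_t = \alpha_1 - \beta_1 p_t + \gamma_1 z^{d}_t + u_t ,
\qquad
\text{supply:}\quad q_t = \alpha_2 + \beta_2 p_t + \gamma_2 z^{s}_t + v_t .
\]

Because each equation excludes the other's shifter, each satisfies the order condition and both curves can be estimated; the income indexes constructed by Bean and Wells supplied exactly the demand-side shifter that had been missing.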
Bean and Thorne (1932) showed that demand functions for beef
and pork which included a time trend seriously overestimated beef
and pork prices in 1930 (the first year of the Great Depression); when
the time trend was replaced by an index of factory payrolls, the
revised equations estimated 1930 prices quite accurately. Ezekiel
(1933) improved the two equations from a causal ordering standpoint.
He also observed (pp. 179-180) that simultaneous-equation methods
of some kind would be needed in cases where "values are actually
neither dependent nor independent, but are determined by some sort
of mutual interdependence".
As a practical matter, many apparent identification problems were
solved by spatial and temporal disaggregation and by the develop-
ment of data systems which distinguished between different market
levels and between different components of commodity supply and
distribution.
Had they needed to meet complicated identification problems head
on, the agricultural econometricians would most likely have called on
Sewall Wright. We have already noted that he had been familiar with
the demand-supply identification problem since 1915. Wright and
Ezekiel were well acquainted, and Wright's 1921 article, 'Correlation
and Causation', was cited by Ezekiel, Elliott (1927), Henry Schultz,
and others during the mid-twenties. Wright's method of 'path co-
efficients' was capable of quantifying the implications of causal
hypotheses for each separate path in very complicated systems. In his
1925 USDA bulletin, Corn and Hog Correlations, Wright reduced a
system of 510 observed correlations to a central system of 14 path
coefficients connecting corn production, corn price, the summer price
of hogs, the winter price of hogs, and the amount of hog breeding in
successive years. Wright (1934) devoted several pages to applications
of path analysis to supply-demand models.
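The core device of path analysis is easy to state in the simplest case (a sketch, not Wright's corn-hog system). For standardized variables linked in a causal chain x → y → z with no other connecting paths, the path coefficients p_yx and p_zy are standardized regression coefficients, and the implied correlation between the end points is their product,

\[
r_{xz} \;=\; p_{zy}\, p_{yx} ,
\]

so a hypothesized causal diagram implies a system of such equations that can be solved from, or checked against, the observed correlations; Wright's reduction of 510 corn-hog correlations to 14 path coefficients is this logic on a large scale.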

Impacts and Potentials of the Contributions

In March 1933, at the depths of the Great Depression, Roosevelt
became President of the United States, committed to a program of
action. Henry A. Wallace became Secretary of Agriculture and chose
Ezekiel and Bean as economic advisers. Elliott and Wells became
program planners and analysts in the Agricultural Adjustment Admin-
istration; Tolley accepted administrative responsibilities in the AAA
and later (1936-1938) became its top administrator. E. G. Nourse
(1953, p. 83) has said that "the use of professional economists by top-
level policy-makers in the Department of Agriculture at this time
surpassed anything seen before in any Federal Department".
Creativity and realism in applied econometrics were the bases for a
superb tradition of policy analysis which persisted in the BAE and its
successor agencies for decades. But policy analysis during the New
Deal and World War II years left little time for journal publication.
The first issue of Econometrica appeared in January 1933, and
it became the vehicle for disciplinary research dedicated to "the
Advancement of Economic Theory in its Relation to Statistics and
Mathematics". If Ezekiel, Waugh and other agricultural econometri-
cians had had the leisure to maintain a continuous presence in
Econometrica from 1933 through 1945 it is possible that the LIML
epidemic would have run a shorter and less disruptive course.
The agricultural econometrics tradition regained its momentum in
the 1950s. Examples include Ezekiel and Fox (1959) and Fox (1953,
1954, and 1958).
Concluding Remarks

Where should we look for the history of econometrics? Our first
thought might be that it is all in Econometrica. But most of the articles
by Roos and Evans appeared in the American Mathematical Monthly,
the American Journal of Mathematics, and the Proceedings of the
National Academy of Sciences. Presumably the editors of those
journals were not much interested in economics but they could
recognize good mathematics and/or check the credentials of mathe-
matician-authors. The most that could be said for some editors of
economic journals at the time was that they could recognize mathe-
matics - and they wanted none of it!
Roos (1934, p. xiii) says that in 1928 or 1929 he had written a 65-
page paper which involved mathematics, economics and statistics. The
editor of a mathematical journal was willing to publish the mathe-
matical parts if Roos would delete the economics. The editor of a
statistical journal would publish the statistical and economic parts if
Roos would delete the mathematics. Finally, Roos submitted his paper
to an economic journal edited at a university where he had spent some
time as a National Research Fellow. It was accepted but publication
was delayed. After waiting for a year an editor suggested that Roos
write an abstract about a dozen pages in length. He did this and as a
consequence produced a paper that he himself had great difficulty in
reading four years later.
Econometrica was created (in 1933) because it was clearly needed;
before that, econometric articles were published in less appropriate
places, sometimes in distorted or truncated forms. Some econometric
articles were no doubt rejected by editors solicitous of their readers'
comfort and support. (Even in the 1960s, a long-time subscriber
complained that his association's journal had recently become so
mathematical that the only page he could read was the photograph of
the past-president!)
Gerhard Tintner published most of his 1936-1942 articles in the
right journal. The lack of response to them poses a more general
problem concerning the nature of scholarly influence in the develop-
ment of a field of research. It is not sufficient to be first and brightest,
though it is necessary to be early and bright. It depends also upon
one's determination to make one's self heard on a particular subject
and even more upon whether a sufficient number of other scholars
are, for reasons of their own, prepared to listen.
During 1936-1939 the agenda of economic theory and econo-
metrics was more crowded than it had ever been, with new macro-
economic paradigms proposed by Keynes, Myrdal, Kuznets, Leontief,
and Tinbergen competing for attention with innovations in value theory
and the theory of the firm by Hicks and Allen, Chamberlin, and
Robinson. No program chairman in his right mind would have
scheduled all these and Tintner for simultaneous sessions, but that
(figuratively) was what happened; moreover, Tintner was the youngest
and perhaps the most modest and the least well-known. From 1940
through 1945, the scholarly agenda was disrupted by war. These are
conjectures; they are not substitutes for historical scholarship.
Econometrica tells us next to nothing about agricultural econo-
metrics. Most of the leading figures were employed in the U.S. Bureau
of Agricultural Economics and/or the land grant universities. These
institutions were intended to promote the well-being of 30 million
farm people, nearly 30 percent of the total U.S. population as of 1920.
Much of the research done by agricultural economists dealt with
observable processes having measurable inputs and outputs. Those
who studied production processes on individual farms were particu-
larly aware of the genetics, physiology, and technology underlying
input, output, and cost data. Those who like Elliott (1927) studied the
responses of groups of farmers to market prices could visualize the
reactions of individuals with different sets of alternatives; Elliott
himself became the nation's leading authority on the delineation of
relatively homogeneous type-of-farming areas.
The same emphasis on direct observation and measurement was
extended to the study of marketing processes and even of price
formation. Waugh (1929) demonstrated that 'quality' in the cases of
several kinds of fresh vegetables in the Boston wholesale market
could be represented by three or four measurable attributes which
accounted for about half of the total variance among prices of
individual lots on any given day.
In Johnson's terms, most agricultural econometricians were engaged
in subject-matter and problem-solving research. Those among them
who created new statistical methods and new theory usually did so
primarily to improve the quality of the two types of applications. The
most creative, including Ezekiel and Waugh, were masters in fusing
theory and reality. Given their emphasis on operationality and lucidity,
their use of mathematics seldom exceeded the tolerance limits of the
editors of QJE and JASA; our bibliography includes at least three
articles from the former journal and five from the latter. It should not
surprise us to find parts of our history in them.
Eight or nine of our references are to bulletins of the U.S.
Department of Agriculture and various state agricultural experiment
stations. I believe few general economists are aware of the existence of
such publications, let alone their vast number and in many cases their
high quality and high prestige among the scientific communities for
which they were written. Great numbers are footnoted in Taylor and
Taylor (1952), most of them published during the years with which we
are concerned. An important part of our history is contained in them.
It is perhaps most astonishing to find one of the earliest and
greatest articles on causal ordering and identification in the U.S.
Department of Agriculture's Journal of Agricultural Research (Wright,
1921a). The article was at first rejected not on its merits but because
the editor believed that Wright, an animal geneticist, had no business
writing an article on statistics; it was reinstated through the interven-
tion of a maize geneticist in the Bureau of Plant Industry.
The formidable analysis of corn and hog correlations reported in
Wright (1925) was apparently done by him during World War I
(1917-1918) as a member of a committee appointed to assign hog
production quotas to the various states. Later, he wrote it up and
submitted it for publication as a USDA bulletin; this manuscript was
also rejected by the editorial office as out of Wright's field. Still later,
the manuscript was published through the intervention of Henry A.
Wallace, whose father, Henry C., had meanwhile become Secretary of
Agriculture. (My comments in this and the preceding paragraph are
based on a 1977 letter from Wright to the author.)
The editors' quibbles seem quite ludicrous when we realize that
Wright was soon recognized as a world-class scientist and one of the
'Big Three' co-founders (with R. A. Fisher and J. B. S. Haldane) of the
field of population genetics. Wright (1921b) had a major impact on
animal breeding and Wright (1968-1978) must rank among the most
impressive scientific treatises ever written by a single author.
We have called attention to two traditions in theoretical and applied
econometrics which could greatly enrich the teaching and practice of
economic analysis today if they were widely known. The econometric
paradigm which coalesced during the 1940s and early 1950s incor-
porated an extremely narrow range of ideas. Errors in variables were
excluded from the paradigm for mathematical convenience, and the
exclusion was perpetuated in a set of computing instructions carefully
taught to several cohorts of graduate students. Only in the 1970s did
econometricians trained in the LIML tradition begin to take explicit
account of errors in variables; only then did their empirical work
begin to approach the levels of reliability established by agricultural
economists in the 1920s (cf. Fox, 1982).
The econometric paradigm also embodied a particular view of
probability which had been developed in connection with the natural
sciences. I had no quarrel with this view in my own econometric work,
but I note that Tintner (1949) was already advocating a broader view
of probability for the social sciences based on Carnap's concept of
degree of confirmation. Did the dominant paradigm also preclude
serious attention to Tintner's ideas in this area?
It appears to me that Tintner directly anticipated some important
results in modern growth theory and the non-static theories of pro-
duction, choice, and technological risk and uncertainty that were
independently discovered by others a decade or two later. Sewall
Wright's 1921 article completely satisfied my curiosity about the
identification problem when I studied it in 1941. Waugh's 1929 book
anticipated other econometric work on the dollar values of quality
attributes by more than three decades.
We would be richer by far if we had full access to our own history.
We need to recognize, and put in perspective, the contributions of
Tintner and many others. The retrospective evaluation of those con-
tributions will improve our future research. To make such evaluations
we need first-rate historians of econometric ideas and practices. Mary
Morgan (1984) of the University of York is evidently the first to come
forward with a book-length treatment of the field; there are indications
that others will soon be joining her. Morgan (1985a, 1985b) are
related articles on their way to publication.

REFERENCES

Part 1. The Mathematics and Economics of Optimization over Time:


Evans, Roos, and Tintner, 1922-1942

Allais, M. (1978), 'Theories of general economic equilibrium and maximum


efficiency', in G. Schwodiauer (ed.), Equilibrium and Disequilibrium in Economic
Theory, Dordrecht: D. Reidel Publ. Co.
Baumol, W. J. (1951), Economic Dynamics, New York: Macmillan.
Christ, C. F. (1952), 'History of the Cowles Commission, 1932-1952', in Economic
Theory and Measurement: A Twenty Year Research Report, 1932-1952, Cowles
Commission for Research in Economics, University of Chicago, Chicago, Illinois.
Davis, H. T. (1941), The Theory of Econometrics, Bloomington, Indiana: Principia
Press.
Edgeworth, F. Y. (1881), Mathematical Psychics, London: Kegan Paul.
Evans, G. C. (1918), Functionals and Their Applications, Part I of The Cambridge
Colloquium, 1916, New York, American Mathematical Society. Reprinted
separately by Dover Publications, New York, 1964.
Evans, G. C. (1922), 'A simple theory of competition', American Mathematical
Monthly 29: 371-380.
Evans, G. C. (1924), 'The dynamics of monopoly', American Mathematical Monthly
31: 77-83.
Evans, G. C. (1930), Mathematical Introduction to Economics, New York: McGraw-
Hill.
Fox, K. A. (1987a), Charles F. Roos, in John Eatwell, Murray Milgate, and Peter
Newman (eds.), The New Palgrave: A Dictionary of Economic Theory and
Doctrine, London: Macmillan (forthcoming).
Fox, K. A. (1987b), Gerhard Tintner, in John Eatwell, Murray Milgate, and Peter
Newman (eds.), The New Palgrave: A Dictionary of Economic Theory and
Doctrine, London: Macmillan (forthcoming).
Hotelling, H. (1931a), Review of Mathematical Introduction to Economics, by
Griffith C. Evans, American Mathematical Monthly 38: 101-103.
Hotelling, H. (1931b), Review of Mathematical Introduction to Economics, by
Griffith C. Evans, Journal of Political Economy 39: 107-109.
Hotelling, H. (1958), 'C. F. Roos, econometrician and mathematician', Science 128:
1194-1195.
Morgan, M. S. (1985a), Finding a Satisfactory Empirical Model, Department of
Economics, University of York, England. Draft paper for the J. J. Klant
Symposium, Amsterdam, December.
Roos, C. F. (1925), 'A mathematical theory of competition', American Journal of
Mathematics 47: 163-175.
Roos, C. F. (1927a), 'Dynamical economics', Proceedings of the National Academy
of Sciences 13: 145-150.
Roos, C. F. (1927b), 'A dynamical theory of economic equilibrium', Proceedings of
the National Academy of Sciences 13: 280-285.
Roos, C. F. (1927c), 'A dynamical theory of economics', Journal of Political
Economy 35: 632-656.
Roos, C. F. (1928), 'A mathematical theory of depreciation and replacement',
American Journal ofMathematics 50: 147-157.
Roos, C. F. (1934), Dynamic Economics, Cowles Commission monograph, No. 1,
Bloomington, Indiana: Principia Press.
Samuelson, P. A. (1947), Foundations of Economic Analysis, Cambridge, Mass.:
Harvard University Press.
Samuelson, P. A. (1983), Foundations of Economic Analysis: Enlarged Edition,
Cambridge, Mass.: Harvard University Press.
Shoven, J. B. and Whalley, J. (1984), 'Applied general-equilibrium models of taxation
and international trade: An introduction and survey', Journal of Economic
Literature 22: 1007-1051.
Simon, H. A. (1987), Griffith C. Evans, in John Eatwell, Murray Milgate, and Peter
Newman (eds.), The New Palgrave: A Dictionary of Economic Theory and
Doctrine, London: Macmillan (forthcoming).
Tintner, G. (1936), 'A note on distribution of income over time', Econometrica 4:
60-66.
Tintner, G. (1937), 'Monopoly over time', Econometrica 5: 160-170.
Tintner, G. (1938a), 'The maximization of utility over time', Econometrica 6: 154-
158.
Tintner, G. (1938b), 'The theoretical derivation of dynamic demand curves',
Econometrica 6: 375-380.
Tintner, G. (1939), 'Elasticities of expenditure in the dynamic theory of demand',
Econometrica 7: 266-270.
Tintner, G. (1941a), 'The theory of choice under subjective risk and uncertainty',
Econometrica 9: 298-304.
Tintner, G. (1941b), 'The pure theory of production under technological risk and
uncertainty', Econometrica 9: 305-312.
Tintner, G. (1942a), 'A contribution to the non-static theory of choice', Quarterly
Journal of Economics 56: 274-306.
Tintner, G. (1942b), 'A contribution to the non-static theory of production', in
Oscar Lange et al. (eds.), Studies in Mathematical Economics and Econometrics,
in Memory of Henry Schultz, Chicago, University of Chicago Press.
Tintner, G. (1942c), 'The theory of production under non-static conditions', Journal
of Political Economy 50: 645-667.
Tintner, G. (1946), 'A note on welfare economics', Econometrica 14: 69-78.
Volterra, V. (1887), 'Sopra le funzioni che dipendono da altre funzioni', R. Acc. dei
Lincei, Rend., Volume VI (3 notes).
Volterra, V. (1930), Theory of Functionals, and of Integral and Integro-Differential
Equations, London: Blackie and Son Limited.
Weintraub, E. R. (1983), 'On the existence of a competitive equilibrium: 1930-
1954', Journal of Economic Literature 21: 1-39.

Part II. Agricultural Economists as World Leaders in Applied
Econometrics, 1917-1933

Bean, L. H. (1929a), 'The farmers' response to price', Journal of Farm Economics
11: 368-385.
Bean, L. H. (1929b), 'A simplified method of graphic curvilinear correlation', Jour.
Amer. Stat. Assoc. 24: 386-397.
Bean, L. H. (1933), 'Money income of farmers and industrial workers, and selected
retail expenditures', The Agricultural Situation, Bureau of Agricultural Eco-
nomics, February. Reprinted as Appendix A, pp. 35-38, in Bean, Bollinger, and
Wells (1937).
Bean, L. H. and Thorne, G. B. (1932), 'The use of 'trends in residuals' in
constructing demand curves', Jour. Amer. Stat. Assoc. 27: 61-67.
Bean, L. H., Bollinger, P. H., and Wells, O. V. (1937), Nonagricultural Income as a
Measure of Domestic Demand, Program Planning Division, Agricultural Adjust-
ment Administration, U.S. Department of Agriculture, Washington, DC: Govern-
ment Printing Office.
Duncan, J. W. and Shelton, W. C. (1978), Revolution in United States Government
Statistics, 1926-1976, U.S. Department of Commerce, Office of Federal Statistical
Policy and Standards, Washington, DC: Government Printing Office.
Elliott, F. F. (1927), Adjusting Hog Production to Market Demand, Illinois Univer-
sity Agr. Exp. Sta. Bul. 293.
Ezekiel, M. (1924), 'A method of handling curvilinear correlation for any number of
variables', Jour. Amer. Stat. Assoc. 19: 431-453.
Ezekiel, M. (1928), 'Statistical analyses and the 'laws' of price', Quarterly Journal of
Economics 42: 199-227.
Ezekiel, M. (1930a), Methods ofCorrelation Analysis, New York, Wiley.
Ezekiel, M. (1930b), 'The sampling variability of linear and curvilinear regressions',
Annals of Math. Stat. 1: 275-333.
Ezekiel, M. (1933), 'Some considerations on the analysis of the prices of competing
or substitute commodities', Econometrica 1: 172-180.
Ezekiel, M. and Fox, K. A. (1959), Methods of Correlation and Regression Analysis:
Linear and Curvilinear, 3rd ed., New York: Wiley.
Fox, K. A. (1953), The Analysis of Demand for Farm Products, Washington, DC,
USDA Tech. Bul. 1081.
Fox, K. A. (1954), 'Structural analysis and the measurement of demand for farm
products', Review of Econ. and Stat. 36: 57-66.
Fox, K. A. (1958), Econometric Analysis for Public Policy, Ames, Iowa State
University Press. Re-issued 1977.
Fox, K. A. (1982), 'Structural analysis and the measurement of demand for farm
products: foresight, insight, and hindsight in the choice of estimation techniques',
in Richard H. Day (ed.), Economic Analysis and Agricultural Policy, Ames: Iowa
State University Press.
Fox, K. A. (1987c), 'Agricultural economics', in John Eatwell, Murray Milgate, and
Peter Newman (eds.), The New Palgrave: A Dictionary of Economic Theory and
Doctrine, London: Macmillan (forthcoming).
Haas, G. C. and Ezekiel, M. (1926), Factors Affecting the Price of Hogs, Washington,
DC, USDA Bul. 1440.
Johnson, G. L. (1986a), Research Methodology for Economists. New York:
Macmillan.
Johnson, G. L. (1986b), 'Holistic modeling of multidisciplinary subject matter and
problem domains', in Karl A. Fox and Don G. Miles (eds.), Systems Economics:
Concepts, Models and Multidisciplinary Perspectives, Ames: Iowa State University
Press.
Keynes, J. N. (1917), The Scope and Method of Political Economy, 4th ed., London:
Macmillan.
Lenoir, M. (1913), Etudes sur la Formation et le Mouvement des Prix. Paris: M.
Giard.
Leontief, W. W. (1971), 'Theoretical assumptions and non-observed facts', Amer.
Econ. Review 61: 1-7.

Moore, H. L. (1908), 'The statistical complement of pure economics', Quarterly
Journal of Economics 22: 1-33.
Moore, H. L. (1911), Laws of Wages: An Essay in Statistical Economics, New York:
Macmillan. Reprinted 1967, New York: Kelley.
Moore, H. L. (1914), Economic Cycles: Their Law and Cause, New York,
Macmillan. Reprinted 1967, New York: Kelley.
Moore, H. L. (1917), Forecasting the Yield and the Price of Cotton, New York,
Macmillan. Reprinted 1967, New York: Kelley.
Moore, H. L. (1919), 'Empirical laws of demand and supply and the flexibility of
prices', Political Science Quarterly 34: 546-567.
Morgan, M. S. (1984), The History of Econometric Thought: Analysis of the Main
Problems of Relating Theory to Data in the First Half of the Twentieth Century.
Ph.D. Thesis, University of London: England (to be published as a book).
Morgan, M. S. (1985b), Correspondence Problems and the History of Econometrics,
Department of Economics, University of York, England, September.
Nourse, E. G. (1953), Economics in the Public Service, New York: Harcourt, Brace.
Sarle, C. F. (1927), Reliability and Adequacy of Farm-Price Data, Washington, DC,
USDA Bul. 1480.
Sarle, C. F. (1932), Adequacy and Reliability of Crop-Yield Estimates, Washington,
DC, USDA Tech. Bul. 311.
Schultz, H. (1925), 'The statistical law of demand as illustrated by the demand for
sugar', Journal of Political Economy 33: 481-504 and 577-637.
Schultz, H. (1928), Statistical Laws of Demand and Supply with Special Application
to Sugar, Chicago: University of Chicago Press.
Stigler, G. J. (1962), 'Henry L. Moore and statistical economics', Econometrica 30:
1-21.
Taylor, H. C. and Taylor, A. D. (1952), The Story of Agricultural Economics in the
United States, 1840-1932, Ames: Iowa State University Press.
Tintner, G. (1949), 'Foundations of probability and statistical inference', Journal of
the Royal Statistical Society, Series A (General) 112, Part III: 251-279.
Tolley, H. R. (1917), The Theory of Correlation as Applied to Farm-Survey Data on
Fattening Baby Beef, Washington, DC: USDA Bul. 504.
Tolley, H. R. and Ezekiel, M. (1923), 'A method of handling multiple correlation
problems', Jour. Amer. Stat. Assoc. 18: 994-1003.
Tolley, H. R., Black, J. D., and Ezekiel, M. (1924), Input as Related to Output in
Farm Organization and Cost of Production Studies, Washington, DC: USDA Bul.
1277.
Wallace, H. A. (1920), Agricultural Prices, Des Moines, Iowa: Wallace Publ. Co.
Waugh, F. V. (1923), Factors Influencing the Price of New Jersey Potatoes on the
New York Market, New Jersey Dept. of Agr. Circular 66. Reprinted in Waugh
(1984).
Waugh, F. V. (1929), Quality as a Determinant of Vegetable Prices, New York:
Columbia University Press.
Waugh, F. V. (1935), 'A simplified method of determining multiple regression
constants', Jour. Amer. Stat. Assoc. 30: 694-700.
Waugh, F. V. (1984), Selected Writings on Agricultural Policy and Economic
Analysis, J. P. Houck and M. E. Abel (eds.), Minneapolis: University of
Minnesota Press.

Wells, O. V. and Bean, L. H. (1933), Income of Urban Consumers, 1919-33.
Prepared in the U.S. Bureau of Agricultural Economics, October. First published
as Appendix B, pp. 39-45, in Bean, Bollinger, and Wells (1937).
Working, E. J. (1927), 'What do statistical 'demand curves' show?', Quarterly Journal
of Economics 41: 212-235.
Working, H. (1922), Factors Determining the Price of Potatoes in St. Paul and
Minneapolis, University of Minnesota Agr. Exp. Sta. Tech. Bul. 10.
Working, H. (1925), 'The statistical determination of demand curves', Quarterly
Journal of Economics 39: 503-543.
Wright, P. G. (1915), 'Review of Henry L. Moore', Economic Cycles: Their Law and
Cause, in Quarterly Journal of Economics 35: 631-641. Reviewer assisted by
Sewall Wright.
Wright, S. (1921a), 'Correlation and causation', Journal of Agricultural Research 20:
557-585.
Wright, S. (1921b), 'Systems of mating', Genetics 6: 111-178.
Wright, S. (1925), Corn and Hog Correlations, Washington, DC, USDA Bul. 1300.
Wright, S. (1934), 'The method of path coefficients', Annals of Math. Stat. 5: 161-
215.
Wright, S. (1968-1978), Evolution and the Genetics of Populations, Chicago:
University of Chicago Press. (4 volumes).
PART II
PLANNING MODELS
Energy Pricing in a Welfare Framework: Issues in
Equity, Efficiency and Exhaustibility

GOPAL K. KADEKODI
Institute of Economic Growth, Delhi, India

Abstract. This paper develops a theoretical framework for a pricing system for
energy products. In doing so, whether in a market or a planned economy, issues of
equity and exhaustibility are accounted for along with efficiency measures. The
dynamic path of optimal energy prices so derived calls for corrections to marginal
cost pricing rules. Both interest rate changes and tax (or subsidy) measures are
required, in a judicious way, throughout the period of energy production out of a stock
of exhaustible resources. In the long run, energy extraction is equivalent to capital
investment, while only for short-run planning can it be treated under the conservation
ethic, namely that resource conserved is equivalent to capital investment underground.

1. INTRODUCTION

Pricing of a product, as per received economic theory, is generally
based on some efficiency considerations. If one accepts, for the
moment, profit maximisation as an efficiency criterion, we get the well
known pricing rule saying that price minus marginal cost per unit of
price (i.e. the price-cost margin) is inversely related to the absolute value
of the price elasticity of demand. This efficiency pricing rule is applicable
to any market condition, be it competitive or monopolistic. When it
comes to a non-renewable resource, because of the dynamic links
between reserve exploration, extraction and inter-generational issues,
production scheduling and pricing based on efficiency rules alone
need not be socially optimum. 1
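In symbols, writing P for price, m for marginal cost and η (< 0) for the price
elasticity of demand facing the seller, the familiar mark-up condition referred to
here is

\[
\frac{P - m}{P} = \frac{1}{|\eta|},
\]

so that the price-cost margin shrinks as demand becomes more elastic.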
The problem of optimal pricing of exhaustible resources over time
was first dealt with in detail by Hotelling (1931). Starting with a known
quantity of an exhaustible resource of the same quality and a dynamic
demand behaviour, the inter-temporal adjustment in the royalty (or
profit) rate is derivable under a royalty maximisation condition. The
so called Hotelling's r percent rule implies that the royalty must rise
along the optimum price path at a discount rate r. This will be the
case irrespective of whether the extraction cost is constant or varying.
In a welfare framework, as against revenue maximisation, the inter-
temporal choice of extraction rates implies pricing of such resources
covering not only the marginal extraction cost but also a scarcity rent.
This rental or degradation cost as coined by Solow and Wan (1976)
reflects the impact of current rate of extraction on future resource
cost. Under a back-stop technology and increasing marginal extraction
cost, Heal (1976) shows that contrary to the usual notions (not
attributable to Hotelling), the scarcity rent decreases over time and
reaches zero by the time the resource is depleted and a switching is
done to a substitute or additional resource.
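The mechanics of the r per cent rule can be illustrated with a minimal numerical
sketch in Python; the discount rate, marginal cost and initial royalty below are
purely hypothetical and are not taken from Hotelling or from this paper.

import numpy as np

# Minimal sketch of Hotelling's r per cent rule with illustrative numbers:
# the royalty (price minus marginal extraction cost) grows at the discount rate r.
r, m, royalty0 = 0.05, 10.0, 2.0      # hypothetical discount rate, marginal cost, initial royalty
t = np.linspace(0, 40, 9)             # years
royalty = royalty0 * np.exp(r * t)    # royalty rises at rate r along the optimum path
price = m + royalty                   # resource price = extraction cost + royalty
for ti, pi in zip(t, price):
    print(f"t = {ti:4.1f}  price = {pi:6.2f}")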
In addition to exhaustibility and economic efficiency arguments, two
specific welfare aspects are also important in pricing of resources.
First, the distribution of income among people and the demand
patterns are uneven at any given point of time, and are also varying
over time. Therefore, the utility of income over time and across people
can affect the demand pattern and hence the time preference or
discount rates. Secondly, the social marginal utility of income from the
production of exhaustible resources may have to be adjudged while
determining the optimum price path. Both these distributional equity
factors affect the social welfare benefits of the resource. In a price
determination model then, the objective itself can be social welfare
rather than profit.
In this paper a welfare maximisation model is formulated with the
intention of deriving optimum price paths for energy production,
taking exhaustibility and distributional equity into consideration.
Section 2 develops a price determination model with efficiency,
exhaustibility and equity considerations. The next section is devoted to
interpreting the results and analysing the price paths under static and
dynamic conditions with alternative assumptions on distributional
patterns. Finally, comparative static analyses of the price scenarios for
changes in cost of energy production and discount rate are examined
for an optimal set of tax, subsidy and interest rate policies.

2. A MODEL OF PRICE DETERMINATION

Let there be only two commodities, a consumption good (Q_c) and an
exhaustible energy good (Q_r). There are N individuals in a society,
each earning some income y according to a probability law f(y), to be
called the income distribution. Let U(P_c, P_r, y) be the indirect utility
function of an individual with income y and facing market prices P_c
and P_r for the consumption and energy goods respectively. The
intertemporal welfare choice is stated as maximising the discounted
sum of individual utilities, subject to exhaustibility and constraints on
profit generation from the production of the energy good. 2 The profit
generation constraint ensures that the discounted surplus (i.e. price minus cost)
over the course of resource extraction is at least equal to the fixed
cost F on account of investment on energy extraction projects. This is
an efficiency condition. The exhaustibility condition puts a restriction on
the extraction rate such that the total resources extracted, either in
finite or infinite time, equal the total available resource R_0.
The optimisation model in mathematical form can be stated, using the
Lagrangian principle, as:

\[
\text{Maximise } L = \int_0^\infty \left[ N \int_y U(P_c, P_r, y)\, f(y)\, dy \right] e^{-\delta t}\, dt
+ \lambda_1 \left[ R_0 - \int_0^\infty Q_r\, dt \right]
+ \lambda_2 \left[ \int_0^\infty (P_r - m)\, Q_r\, e^{-\delta t}\, dt - F \right] \tag{1}
\]

where λ_1 and λ_2 are the Lagrangian multipliers, interpreted as the shadow
price of the resource (the scarcity rent of the unutilised resources) and
the shadow price of surplus profit respectively; m is the marginal cost of producing
energy; F is the fixed or investment cost of the energy installation,
which may or may not depend upon the energy production rate Q_r; and δ is
the social discount rate, which depends upon the social marginal utility of
income and is assumed to be constant throughout the period of resource
extraction.
Here a weighted sum of all individuals' utilities over the span of
energy production is to be maximised subject to exhaustibility and
profit conditions. The choices in the welfare model are the energy
production rate Q_r and its price P_r. In view of the fact that demand
for energy is a function of its price, the basic optimisation can be done
with respect to the energy price P_r, given its demand structure.
The optimisation process then implies that at each time t the
following condition be satisfied by the energy price to be so
determined. 3

au -e)/ aQ,
N
Jy
-a
P,
. f (y) e dy - At -a-
P,
+
(2)

(P, - m) a
Q
+ 11.2
1 [
Q, e -or = O.
ap,,+]

At this stage, certain behavioral properties and conditions should be


mentioned for further analyses. First, from the property of indirect
utility functions, we have 4

\[
\frac{\partial U}{\partial P_r} = -q_r \cdot \frac{\partial U}{\partial y}, \qquad
\frac{\partial U}{\partial P_c} = -q_c \cdot \frac{\partial U}{\partial y}, \tag{3}
\]

where q_r = q_r(P_r, y) and q_c = q_c(P_c, y) are the energy and consump-
tion goods demanded by an individual with income y and facing
market prices P_r and P_c, respectively. Second, it is assumed that
the price elasticities are constants, at least in the feasible ranges of
demand. Third, we note that:

\[
N \int_y q_r \cdot f(y)\, dy = Q_r; \qquad N \int_y q_c \cdot f(y)\, dy = Q_c. \tag{4}
\]

The total demands for the consumption and energy products are the sums
of the individual demands. Fourth, it may be noted that the individual's
price elasticities are also those for the aggregates:

\[
\eta_r = \frac{\partial Q_r}{\partial P_r} \cdot \frac{P_r}{Q_r}
       = \frac{\partial q_r}{\partial P_r} \cdot \frac{P_r}{q_r}, \quad \text{etc.} \tag{5}
\]
Finally, an average social marginal utility of income, measured in terms
of the shadow price of profit or surplus, is defined as:

\[
B = \int_y \frac{\partial U}{\partial y} \cdot \frac{1}{\lambda_2} \cdot f(y)\, dy. \tag{6}
\]

Following Feldstein (1972), Ahmed and Stern (1981), Murty
(1983), and Kadekodi (1985), a measure of the distributional charac-
teristic of energy is defined which takes note of both the income
distributional aspects and the social marginal utility of income:
\[
R_r = \frac{\displaystyle\int_y q_r\, \frac{\partial U}{\partial y}\, \frac{1}{\lambda_2}\, f(y)\, dy}
{\left[ \displaystyle\int_y q_r\, f(y)\, dy \right]\left[ \displaystyle\int_y \frac{\partial U}{\partial y}\, \frac{1}{\lambda_2}\, f(y)\, dy \right]}
= \frac{N \displaystyle\int_y q_r\, \frac{\partial U}{\partial y}\, \frac{1}{\lambda_2}\, f(y)\, dy}{B \cdot Q_r} \tag{7}
\]
This measure expresses an association between the social marginal utility
of income and the demand pattern for energy products, resulting from
an income distribution f(y) and a price structure (P_r, P_c). Specifically,
R_r is the ratio of the weighted average demand for energy over all income
classes (with weights being the social marginal utilities of the different
income classes) to the average demand Q_r/N corresponding to equal
social marginal utilities of income. Furthermore, B · R_r can be inter-
preted as a weighted average social marginal utility over all income
classes, with consumption of energy as weights.
Specifically, if we assume (i) the income distribution to be log-
normal with V(log y) = σ², (ii) the commoditywise income elasticities
of demand to be constants (say α_c and α_r), and (iii) the elasticity of the social
marginal utility of income to be a constant ε, then the expressions (6)
and (7) can be written as: 5

\[
B = e^{\frac{1}{2}\varepsilon(1+\varepsilon)\sigma^{2}} \tag{6a}
\]
\[
R_r = e^{-\frac{1}{2}\varepsilon^{2}\alpha_r\sigma^{2}} \tag{7a}
\]
It can then be seen that for increased income inequality, i.e. σ²
increasing, the average social marginal utility of income (for the
society as a whole) increases whereas the income distributional
characteristic decreases. In the limit we have the following: for
extreme inequality, when σ² → ∞ and B → ∞, note that R_r → 0.
Similarly, for a perfect equality of income, both B and R_r tend to
unity. Any commodity which is of the nature of an essential good
is expected to have a high distributional characteristic (due to low
values of α_r and α_c). Luxury goods will have low distributional
characteristics.
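The behaviour of B and R_r described here can be checked directly from the
definitions (6) and (7). The short Python sketch below does so by simulation for a
lognormal income distribution, using hypothetical values of σ², ε and α_r (none of
them taken from the paper) and normalising mean income to one.

import numpy as np

rng = np.random.default_rng(0)
sigma2, eps, alpha_r = 0.5, 2.0, 1.5     # hypothetical inequality and elasticity parameters
y = rng.lognormal(mean=-sigma2 / 2, sigma=np.sqrt(sigma2), size=1_000_000)  # E[y] = 1

smu = y ** (-eps)          # social marginal utility of income (up to the 1/lambda_2 scaling)
q_r = y ** alpha_r         # individual energy demand at fixed prices, up to a constant

B = smu.mean()                                   # average social marginal utility, Eq. (6)
R_r = (q_r * smu).mean() / (q_r.mean() * B)      # distributional characteristic, Eq. (7)
print(f"B = {B:.3f}, R_r = {R_r:.3f}, B*R_r = {B * R_r:.3f}")

Raising sigma2 in this sketch increases B and lowers R_r, in line with the limits
discussed above.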

Using (6a) and (7a) and the definitions of price elasticities, one can
rewrite (2) as:

\[
\lambda_2\, Q_r\, B R_r\, e^{-\delta t}
+ \lambda_1\, \eta_r \frac{Q_r}{P_r}
- \lambda_2 \left[ (P_r - m)\, \eta_r \frac{Q_r}{P_r} + Q_r \right] e^{-\delta t} = 0. \tag{2a}
\]


Defining λ = λ_1/λ_2 as the scarcity rent of the energy resource measured in
terms of the shadow price of profit or surplus value, we have the price
path for energy products as:

\[
P_r = \frac{(\lambda e^{\delta t} + m)\, \eta_r}{1 + \eta_r - B \cdot R_r} \tag{8}
\]
The price path follows Hotelling's rule, with the scarcity value of
energy increasing at the discount rate δ. The non-negativity of prices
is assured by imposing the assumption that 1 + η_r − B · R_r < 0. The
initial price P_r at t = 0 depends upon the intrinsic or opportunity
value of the resource in reserve before extraction, as well as the
income distributional and demand characteristics.
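As a purely illustrative check on Equation (8), the following Python sketch
evaluates the price path for hypothetical values of λ, m, δ, η_r, B and R_r (chosen
only so that 1 + η_r − B · R_r < 0); in the model itself λ is determined jointly with
the exhaustibility condition introduced below.

import numpy as np

# Illustrative evaluation of the optimal energy price path, Eq. (8):
#   P_r(t) = (lambda_ * exp(delta * t) + m) * eta_r / (1 + eta_r - B * R_r)
# All parameter values are hypothetical.
lambda_, m, delta = 1.0, 5.0, 0.05    # scarcity rent at t = 0, marginal extraction cost, discount rate
eta_r, B, R_r = -1.5, 1.0, 0.9        # price elasticity, avg. social marginal utility, distr. characteristic

t = np.arange(0, 31, 5)
P_r = (lambda_ * np.exp(delta * t) + m) * eta_r / (1 + eta_r - B * R_r)
print(np.round(P_r, 2))               # the whole path shifts up as the scarcity rent grows at rate delta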
Following the same approach, the optimal price of consumption
goods can be derived by maximising the welfare function in (1) subject
to budgetary conditions. The price of consumption goods can be
shown to be:

\[
P_c = \frac{\eta_{rc}\, \dfrac{Q_r}{Q_c}\, (B \cdot R_r - 1)}
{\eta_r \left[ B \cdot R_c + \dfrac{1 + \eta_c}{1 + \eta_r} \right]} \cdot P_r, \tag{9}
\]

where η_rc is the cross-price elasticity of energy products with respect
to the price of consumption goods, and R_c is the distributional charac-
teristic of consumption goods, defined on the lines of R_r. An analysis
of these price paths is attempted in Section 3.

3. OPTIMAL ENERGY PRICES, TAXES, SUBSIDIES AND INTEREST RATES

The price path for the exhaustible energy resource can be analysed
under different scenarios, assumptions and comparative static para-
metric variations. Clearly, both Hotelling's basic result of inter-
temporal δ percent increase in prices and Feldstein's static pricing
rule incorporating income distributional and social marginal utility
characteristics are special cases of this general pricing rule.
In the case of perfect competition in the resource market, the
optimising pricing rule (8) simplifies to:

\[
\lim_{\eta_r \to -\infty} P_r = \lambda e^{\delta t} + m. \tag{10}
\]

In other words, the optimal price of an exhaustible resource, net of its
marginal cost of extraction, rises at the rate δ. This is so because,
under competitive conditions, income distribution tends to be equal
and hence both B and R_r also tend to unity. Under such equitable
conditions only the scarcity value and the social discount rate deter-
mine the price path. In the case of extreme inequality, the income
elasticity of demand for energy products is generally quite high, with
the result that for values of ε around 2 the expression (1 + ε − εα_r) is
likely to be negative. Then B · R_r = e^{½εσ²(1 + ε − εα_r)} tends to zero as
σ² → ∞. In other words, in the case of extreme inequality the price
of energy products approaches a limiting value:

\[
P_r = (\lambda e^{\delta t} + m)\left( \frac{\eta_r}{1 + \eta_r} \right). \tag{11}
\]

Clearly, the resultant price is much higher than the price under the
competitive case.
Removal of the budgetary condition amounts to a modified Hotelling's
rule which also accounts for income distributional characteristics: 6
\[
P_r = \frac{\lambda_1 e^{\delta t}\, |\eta_r|}{B \cdot R_r}. \tag{12}
\]

By removing the exhaustibility condition in (1), we have Feldstein's
static pricing rule, namely

\[
\frac{P_r - m}{P_r} = \frac{1 - B \cdot R_r}{|\eta_r|}. \tag{13}
\]
The price-cost margin is inversely proportional to the price elasticity
of demand and the average social marginal utility of all income classes.
As much as the price path of the resource depends upon socio-
economic characteristics like the price elasticity of demand (η_r), the average
social marginal utility of income (B), the distributional characteristic
of the energy resource (R_r), and the social discount rate (δ), the
shadow price of the resource λ also depends on these and several
other parameters. Therefore, apart from comparing the shadow price
with the optimum price to arrive at an optimal mix of taxes and
subsidies, their behaviours can also be analysed for changes in the above-
mentioned attributes of social articulation.
For further analysis of the shadow price or scarcity rental and the
optimum price path, a demand function for energy is specified as:

Q_r = A P_r^{η_r},   A > 0,   η_r < 0.          (14)
The exhaustibility condition can be written using the expression for
optimal price (8) and the demand specification (14) as:

\[
\int_0^\infty \left[ (m + \lambda e^{\delta t})\, \frac{1 + \eta_r - B \cdot R_r}{\eta_r} \right]^{\eta_r} dt = \frac{R_0}{A}. \tag{15}
\]
Since an explicit solution for the energy scarcity rental is not easy to
obtain from Equation (15), its behaviour is deduced using implicit
function rules. Equation (15) and the shadow price of the resource λ
can be expressed as:

\[
\int_0^\infty \left[ (m + \lambda e^{\delta t})\, \frac{1 + \eta_r - B \cdot R_r}{\eta_r} \right]^{\eta_r} dt - \frac{R_0}{A} = 0, \tag{16}
\]

λ = ψ(m, η_r, B, R_r, δ).          (17)



One can study the behaviour of λ in a comparative static frame-
work for changes in m, η_r, B, R_r, and δ. The following results follow
using implicit function rules.
\[
\frac{\partial \lambda}{\partial B} =
\frac{-R_0 R_r \left[ \dfrac{1 + \eta_r - B \cdot R_r}{\eta_r} \right]^{\eta_r - 1}}
{\eta_r\, A \displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1} e^{\delta t}\, dt} > 0. \tag{18}
\]

The shadow price of energy increases with the average social marginal
utility of income.

\[
\frac{\partial \lambda}{\partial R_r} =
\frac{-R_0 B \left[ \dfrac{1 + \eta_r - B \cdot R_r}{\eta_r} \right]^{\eta_r - 1}}
{\eta_r\, A \displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1} e^{\delta t}\, dt} > 0. \tag{19}
\]

The shadow price of energy increases with the distributional charac-
teristic.

\[
\frac{\partial \lambda}{\partial m} =
\frac{-\displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1}\, dt}
{\displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1} e^{\delta t}\, dt} < 0. \tag{20}
\]

The shadow price of energy decreases for increases in the marginal
cost of energy extraction.

\[
\frac{\partial \lambda}{\partial \delta} =
\frac{-\displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1}\, \lambda\, t\, e^{\delta t}\, dt}
{\displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1} e^{\delta t}\, dt} < 0. \tag{21}
\]

The shadow price of energy decreases for higher values of the social
discount rate.
Using these results, it can be shown that there exist t = t* and
t = t** such that 7

dP_r/dm > 0 for t ≤ t*,    dP_r/dm < 0 for t > t*;          (22)
dP_r/dδ < 0 for t ≤ t**,   dP_r/dδ > 0 for t > t**.

Hotelling's result that the price of the energy resource increases at the
rate δ is easy to see so long as the extraction cost m is zero or
constant. But whenever the extraction cost is also increasing (either as
a function of the cumulated resource extracted or otherwise), the price
path must be analysed under specifications on the nature of extraction costs.
From the above results it follows that in the long run (i.e. for t > t*)
the rental or shadow price of energy as well as the optimal price shift
downwards for increased extraction costs. In other words, in a world
of increasing extraction costs, in the initial phase of energy production
the optimal price starts well above marginal extraction cost, converg-
ing towards the marginal cost and thereby reducing the rental to zero. In
such a situation, it is logical to levy a tax on the resource initially to the
extent of the scarcity rent, which can be gradually withdrawn in the
course of resource depletion.
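The switch point t* can be illustrated numerically. Under the demand
specification (14), the exhaustibility condition (15) pins down the scarcity rent λ,
and the bracketed term in the expression of note 7 then changes sign where e^{δt}
equals the ratio of the two integrals appearing there. The Python sketch below does
this with hypothetical parameters and a finite horizon approximating the infinite
integrals; it shows only the mechanics and does not reproduce any result of the paper.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical parameters, chosen only so that 1 + eta - B*R < 0 (see Eq. (8)).
m, delta, eta, B, R = 5.0, 0.05, -1.5, 1.0, 0.9
A, R0 = 100.0, 1200.0        # demand scale and total resource stock, Eqs. (14)-(15)
T = 200.0                    # long horizon approximating the infinite integrals

def cumulative_extraction(lam):
    # total extraction implied by the price path (8) and the demand function (14)
    price = lambda t: (lam * np.exp(delta * t) + m) * eta / (1 + eta - B * R)
    return quad(lambda t: A * price(t) ** eta, 0, T)[0]

lam = brentq(lambda l: cumulative_extraction(l) - R0, 1e-6, 50.0)  # Eq. (15) solved for the rent

I1 = quad(lambda t: (m + lam * np.exp(delta * t)) ** (eta - 1), 0, T)[0]
I2 = quad(lambda t: (m + lam * np.exp(delta * t)) ** (eta - 1) * np.exp(delta * t), 0, T)[0]
t_star = np.log(I2 / I1) / delta     # where dP_r/dm changes sign (note 7)
print(f"scarcity rent = {lam:.3f}, switch point t* = {t_star:.1f}")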
The effects of an increased discount rate upon the price of energy as
well as its scarcity rental are positive after a short time phase, i.e. for
t > t**. This leads to increased resource conservation via the nega-
tive effect of price on demand. In other words, in the long run energy
resources are to be treated as capital goods and extraction of energy is
equivalent to investment in capital. An increased discount rate reduces
the incentive to invest (i.e. to extract) and hence results in a lower
rate of energy extraction. Alternatively, the same can be argued by
interpreting the discount rate as the socially optimum rate of return on
any of the competing portfolios. Whenever the present interest rate is
lower than this discount rate, it pays to postpone extraction to a future
date and earn better returns. Likewise, if the discount rate is lower
than the present interest rate, waiting is not worthwhile and hence over-
extraction of the resource follows.
In the case of a finite horizon model with a substitute resource as a
backstop technology, Farzin (1984) shows that changes in the discount
rate bring about two counteracting effects. Conventionally, the discount
rate reflects a pure time preference, in which case a reduction in the
discount rate is an incentive to postpone resource extraction. This is
the conservation effect. Here conservation of resources is interpreted
as investment underground; therefore at lower interest rates the
motivation to invest underground is higher, and hence rates of
present-day production are lower. Farzin calls the other effect a disinvestment
effect, in which resource production is treated as capital investment;
hence a reduction in the discount rate is interpreted as a lower cost of
capital borrowing and hence faster energy resource production. The
net result of these two effects, of course, depends on the availability of
substitute resources and their costs of recovery.
The optimal energy price path in our case, as shown in (22), is
influenced by the discount rate differently in two distinct time phases.
In the early phase of energy production (i.e. for t ≤ t**), only the
conventional conservation effect dominates the price path. In that
phase a change in the interest rate alone may lead to conservation, i.e.
to postponing energy production. If conservation is to be
achieved, it is possible by reducing the interest rate and taxing the
energy resource over and above its marginal cost, so as to raise the price
of the resource. In the long run, however, a lowering of the interest rate may
enhance resource exploitation (due to the dominant disinvestment effect).
Therefore, it may be logical to raise the interest rate and tax the
resource simultaneously, so as to keep the conservation going. The
welfare policy package will, of course, have to be reversed for an anti-
conservation motive.

NOTES

1. See Dasgupta and Heal (1978), Solow (1974), Solow and Wan (1976),
Herfindahl (1967), Nordhaus (1974), and Pindyck (1978) for detailed discussion
of these issues.
2. In order to keep the model specification in a manageable form we do not
introduce a profit constraint on the production of consumer goods; it can
be added if necessary without loss of generality.
3. Similar condition can be derived for the price of consumer goods also.
4. The utility U(P_c, P_r, y) is maximised subject to the budget constraint P_c q_c +
P_r q_r = y.
5. See Kadekodi (1985) for details. The assumptions made here are not far from
reality, at least for the highly aggregated commodities considered here.
6. Here the income distributional characteristic and the average social marginal utility
of income are to be redefined without λ_2.

7. These follow from differentiating (8) with respect to m and δ:

\[
\frac{dP_r}{dm} = \left[ \frac{\eta_r}{1 + \eta_r - B \cdot R_r} \right]
\left[ 1 - e^{\delta t}\, \frac{\displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1}\, dt}
{\displaystyle\int_0^\infty (m + \lambda e^{\delta t})^{\eta_r - 1} e^{\delta t}\, dt} \right].
\]

REFERENCES

Ahmed, E. and Stern, N. H. (1981), 'On the evaluation of indirect tax systems: an
application to India', (mimeo), Development Economic Research Centre, Univer-
sity of Warwick.
Dasgupta, P. and Heal, G. M. (1978), Economics of Exhaustible Resources,
Cambridge University Press.
Farzin, Y. H. (1984), 'The effect of the discount rate on depletion of exhaustible
resources', Journal of Political Economy 92: 841-851.
Feldstein, M. S. (1972), 'The pricing of public intermediate goods', Journal of Public
Economics 1: 45-72.
Heal, G. (1976), 'The relationship between price and extraction cost for a resource
with a backstop technology', Bell Journal of Economics 7: 371-378.
Herfindahl, O. C. (1967), Depletion and Economic Theory, University of Wisconsin
Press.
Hotelling, H. (1931), 'The economics of exhaustible resources', Journal of Political
Economy 39: 137-175.
Kadekodi, Gopal K. (1985), 'A welfare approach to energy pricing: a case study for
India', Energy Journal 6.
Murty, M. N. (1983), 'Efficiency and distributional equity and optimal structure of
prices for public electricity supply in India', Institute of Economic Growth,
Working paper No. E/86/83.
Nordhaus, W. D. (1974), 'Resources as a constraint for growth', American
Economic Review 64.
Pindyck, R. S. (1978), 'The optimal exploration and production of non renewable
resources', Journal of Political Economy, Vol. 5.

Solow, R. M. (1974), 'The economics of resources or the resources of economics',
American Economic Review 64: 1-14.
Solow, R. M. and Wan, F. Y. (1976), 'Extraction costs in the theory of exhaustible
resources', Bell Journal of Economics 7: 359-370.
Foreign Aid, Economic Development and
Income Distribution: Some Inferences
from a CGE Model for Egypt

JEFFREY B. NUGENT*
University of Southern California, Los Angeles, U.S.A.

Abstract. This paper examines the sensitivity of quantitative assessments of the
effects of foreign aid on the level of income and its distribution across income
groups to alternative modelling and parametric assumptions. It does so with the use
of a computable general equilibrium model for Egypt, by subjecting the model to
alternative assumptions. Although the results are (as expected) sensitive to the
assumptions made, they indicate that, at least potentially (for example, when foreign
aid takes an appropriate form and is accompanied by appropriate offsetting policies),
it can be a useful tool for achieving development at minimum social cost.
The purpose of this paper is to examine the sensitivity of quantitative assessments
of the effects of foreign aid on the level of income and its distribution across income
groups to different modelling and parametric assumptions. It does so within an
analytical framework that is considerably more general and flexible than that which
has typically been used for such purposes. The sensitivity of the results is determined
by subjecting a computable general equilibrium (CGE) model of an important
foreign aid-receiving country, namely, Egypt, to a series of different simulation
experiments.
The presentation is organized as follows: Section 1 reviews the considerable
change in attitude to the efficacy of foreign aid in promoting economic development
and an equitable distribution of its benefits; Section 2 briefly describes the CGE
model which has been employed and its flexibility features; Section 3 outlines the
simulation experiments and presents the results; finally, the conclusions are
presented in Section 4.

1. CHANGING PERSPECTIVES ON THE USEFULNESS OF FOREIGN AID

Thirty-five years ago there was considerable hope that foreign aid
could play a vital role in the economic development of less developed
countries (LDCs). This hope was buoyed by the following considera-
tions: (1) LDCs were conceived of as being constrained almost exclu-
sively by the shortage of capital, but to have excess supplies of other


resources, namely of labor, or of land or natural resources. (2) Based
on the familiar Harrod-Domar model which served as the basis for the
early development plans, foreign aid (savings) was viewed as being
strictly additive to domestic savings. Given the capital-output ratio,
this would imply that any increase in the ratio of foreign aid to GNP
would lead to a proportional increase in the rate of economic growth.
(3) The foreign aid provided to Europe in the late 1940's under the
Marshall Plan was universally acclaimed as being highly successful. (4)
Because of the numerous equilibrating mechanisms underlying real
world economies, ranging from trade through migration and capital
mobility, there was considerable confidence that development, once
initiated, could be counted upon also to spread and trickle down to
the poor. (5) Likewise, thanks to the development of investment
criteria and social cost-benefit analysis, foreign aid, once provided,
could be assured of being allocated in an efficient way. (6) Based on
the favorable experience in private philanthropy with matching grants
and in governmental support programs with conditionality, on the one
hand, and the political and other costs associated with desirable policy
reforms, on the other hand, foreign aid was even considered to be an
indispensable tool for eliciting the desired reforms.
Over the last several decades, however, the case for each of these
beneficial effects of foreign aid on development and equitable distribu-
tion has been severely challenged, and perhaps even altogether
dashed. Two-gap models and other forms of analysis have frequently
shown the capital constraint not to be the binding one in the LDCs. 1
Empirical studies have demonstrated that foreign aid frequently serves
as a substitute for, rather than a complement to, domestic savings. 2
The analytical paradigm underlying mainstream development economics
has been shifting in such a way as to view the classical adjustment
mechanisms as being far less equilibrating than had previously been
conceived, and in some cases as even being disequilibrating. Most
foreign aid allocations have been decided on political rather than
economic grounds and allocated without having been subjected to
formal social cost-benefit analysis. Finally, rather than inducing desir-
able reforms, foreign aid seems to have allowed recipient governments
to postpone or defeat these reforms. 3
While many of the findings of deleterious consequences of foreign
aid are far from definitive and by no means necessary, in the light of
the remarkable shift in attitudes concerning the usefulness of foreign
aid in fostering development and equitable distribution, it is hardly
surprising that developed countries have drastically lowered their
provision of foreign aid in relation to GNP.
Existing studies of the effects of foreign aid have invariably employed
one or another of the following methods: (1) case studies, (2) highly
aggregated and simplified theoretical models, (3) empirically based
macroeconomic models of the two-gap variety, and (4) development
planning models, mostly of the programming type.
Not surprisingly, each different type of analysis has emphasized
different types of effects and has its own benefits and costs. 4 For
example, the case studies tend to be broad in scope but of course
lacking in generality.5 The theoretical studies are rigorous but typically
analyze only a single issue, such as the terms of trade, which is
not particularly central to the relevant controversies.6 The macro-
economic, cross-section and planning model studies generally concen-
trate on a few macroeconomic consequences such as those on savings
and the balance of payments. 7
In addition to the differences in emphasis and in the issues raised
between the different types of analysis that have been applied in the
literature, there are also very considerable variations within any such
type of analysis from one study to the next and to some extent also
from one vintage of such analysis to the next. It is for this reason that
in the present study, the emphasis is on investigating the sensitivity
of the effects of foreign aid to different modelling and behavioral
assumptions within a common and rather general modelling frame-
work.

2. THE CGE MODEL OF EGYPT

Because of its advantages as a practical tool for doing general equi-
librium analysis in the realistic context of multiple regions, products
and factors, in which supplies and demands may be sensitive to relative
prices, and for capturing distributional effects, the present study uses a
computable general equilibrium (CGE) model of Egypt.
Since textbook treatments of CGE models and their advantages and
disadvantages are now easily accessible, e.g., Dervis et al. (1982) and
Ginsburgh and Waelbroeck (1981), we proceed directly to a brief
outline of the CGE model for Egypt which has been employed in this
study.

The equations of the model are specified in Table 1 below. The


general construction of the model and parameter estimates have been
designed to be consistent with the 1976 Social Accounting Matrix
(SAM table) of the Egyptian economy provided by Eckaus et al.
(1978), though modified as described in Nugent and Williams (1984).
Twelve productive sectors are distinguished, four of which are rural
((1) staple food, (2) non-staple food, (3) cotton and (4) other agri-
culture), and eight of which are urban ((5) food processing, (6)
textiles, (7) other manufacturing, (8) construction, (9) crude oil and
petroleum products, (10) transport and communications, (11) housing
and (12) other services). Also distinguished are six variable factors of
production, namely, three income-level/skill labor classes ('lower 60%',
'next 30%', 'upper 10%') in both urban and rural areas, designated as
classes 1 through 6, respectively.
As is conventional in CGE models, the specification of the model
and the subsequent computation of the model's solutions are divided
into a series of blocs. The presentation begins in Section 1 of Bloc A,
where the output levels and inputs are determined for a given set of
initial conditions consisting of the prices of all commodities and
factors of production and the stocks of all fixed factors of production,
namely, those of sector-specific capital.
The production function for the gross output of each sector s, X_s,
and that for the value added component of that output, V_s, are given in
Equations (1a) and (1b), respectively. The latter is assumed to be of
the familiar CES form with respect to the variable and fixed factors,
R_s and K_s, respectively; the former is based on fixed coefficients of
input i in sector s, which in turn are disaggregated in Equation (2) into
two components, the domestically supplied component d_is and the
imported component m_ks. While the a_is coefficients are assumed to be
technologically fixed within any given period (but flexible over time),
the decomposition of these coefficients into their d_is and m_ks com-
ponents is assumed to be flexible, depending in part on the relative
prices of the two components as indicated in Equation (3). The fixity
of the input-output coefficients implies also the fixity of the domestic
value added-output coefficients dv_s defined in Equation (4). Section 1
is completed by the definitions of the total interindustry demands for
the domestically supplied and imported inputs in Equations (5) and
(6).
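As an illustration of the production nesting described in Equations (1a)-(1b), the
short Python sketch below evaluates value added and gross output for a single
sector; all parameter values and input levels are purely hypothetical.

import numpy as np

# Hypothetical single-sector illustration of Eqs. (1a)-(1b):
# value added is CES in the variable factor R and fixed capital K, and gross output
# is Leontief in value added and the (domestic plus imported) intermediate inputs.
gamma, a_r, rho, nu = 1.0, 0.6, 1.0, 1.0      # CES parameters (sigma = 1/(1+rho) = 0.5)
R, K = 40.0, 60.0                              # variable factor and fixed capital stock
V = gamma * (a_r * R**(-rho) + (1 - a_r) * K**(-rho)) ** (-nu / rho)   # Eq. (1b)

a_v = 0.45                                     # value added required per unit of gross output
a = np.array([0.20, 0.35])                     # intermediate input coefficients
inputs = np.array([30.0, 55.0])                # available intermediate inputs by commodity
X = min(V / a_v, *(inputs / a))                # Eq. (1a): the binding requirement determines output
print(f"value added = {V:.2f}, gross output = {X:.2f}")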
We move now to Section 2. The price index of domestic value
added Pv_s is defined in Equation (7) as the difference between the
Table 1. Specification of the Egyptian CGE Model

Bloc A
I. Production Relationships

Production Function (1a) Xs=Min [ : ' XIs +M1s ... XiS +Mks ... X ns + Mus ] for each s, s = 1, ... , 12.
us a ls ais ans
for Gross Output
Production Function (1b) vs = ys[ar sR-Prks
s
+ (1 - a r)K-PrkSj-V/PrkS
ss for each s.
for Value Added

Fixity of Input-Output (2) XiS + M ks _ Xis + Mks = dis + mks for each i, s, = 1, ... , 12.
ais = X - X X
Coefficients s s s

mks - 0
mks [ Pdis ] 0",,1.
Substitution between (3) - -= ----=-0 -- for each k, s.
Imported and Domestic dis dis Pmks
Input Components

Domestic Value Added (4) dvs = ( I- ~ ais ) = ( 1- ~ dis - ~ mks ) for each S = 1, ... , 12.
Per Unit of Gross Output

Interindustry Demand (5) X: =L Xis for each i = 1, ... , 12.


for Domestically Produced
Commodity i

Interindustry Demand (6) M~= L M ks for each k = 1, ... , 12.


for Imported Commodity k
Table 1 (continued)

II. Price and Profit Relationships

IS
Ps - L~ d PdIS - L mksPmks p*
i k =_s
Effective Price of Value (7) Pvs = drs for each s.
Added 1- L diS - L mks
k

Unit Profits (8) 1ls = Pvsdvs - L resPres . for each s.

Sectoral Profits (9) TIs = 1lsX s = PvsdvsXs - L ResPre = PvsdvsXs - RsPrs for each s.

Sectoral Profit Rate (10) 1l~ = (lls/Ks) for each s, s = 1, ... , 12.

Average Profit Rate (11) 1l~ve=LTIsILKs


s s

Sector-Specific Price (12) Prs = L b~sPre for each s.


Index of Variable
Resource Input
Purchasers' Prices of (13) Pdij = Ps(I + td;j) for each i, j.
Domestically Produced
Purchasers' Prices of (14) Pmkj = PmkwPfe(I + tmkj ) for each k, j.
Imported Goods
Table 1 (continued)

Aggregate GDP (15) P = L e~Pvs.


Deflator at Producers
Prices

Price Index of Domestic (16) Pdf = L !?Pdm .


Investment Goods

Price Index of Imported (17) Pm, = L g~Pmkn.


Investment Goods k

Price Deflator for (18) P, = (J°pd, + (1- (JO)Pm,.


Investment
Commodity-Specific (19) PC;e = J.l:ePd;e + I-l iePmke for each i and each c.
Consumption Price

Aggregate Consumption (20) PCe = L viPC;c foreachc= 1, ... ,6.


Price Deflator for i

Household Class c
III. Factor Demand and Factor Supply

(21) Rf = [ (1- ars)K7"p~:P+l ]-l/P


Demand Function for PvP/P+1yP/P+l / for each s.
s s a p/p+l - arsp~sP+l
rs .
Aggregate Variable
Resources

Demand Function for (22) R D = R D \ hO [ Prs ]or,r,. for each c and s.


cs s I... cs
Variable Resource Pre
Type c
Table 1 (continued)

Aggregate Demand for ( 23) RD


c = L R csD for each c.
Variable Resource
Type c
Fixed Aggregate Supply (24) R~ =R~ for each c.
of Variable Resource
(Labor) Type c

c
Variable Supply of (24a) R~ - R~o r w;;r l~Pr' [ :c~~c, ]~Y'd [ PrclPrco ]~pr,o.
Labor Type c co c,o Pr/Pljo
Prco
Pc,"

Full Employment (25) R~ = R~ for each c.


Equilibrium

BlocB
IV. Factor Income Generation and Expenditure Allocation

Disposable Income of (26) Yc = (1 - iyc)prcR~ + L TR hc for each c, where h = c!, ... , C6' n, G, F
Household Class c h

Disposable Income of (27) Yn = (1 - in)s L n s + L TR hn where h = c!, . .. , C6' n, G, F.


the Enterprise Sector
Table 1 (continued)

Disposable Income of (28) YG = I I tdis(Pdjs - tdjs)Xis +


The Government Sector i s

+I I tmks(Pm ks - tmks)Mks +
k s

+I I td;c(Pdjc -tdjc)Cd;c+
e

+I td;c(Pd;G - tdjG)Cd;G+

+I td;(Pd; -td;)ld;+

+I tmkn(Pmkn - tmkn)Imk +
k

+I I tm kc (Pm kc - tmkC> Cm kc +
k e

+I tmkc(PmkG -tmkG)CmkG+
k

+I tdj£ (Pd;£ - tdjdE; +

+I tYe Pre R ~ + I inns + I TR hG •


h

Table 1 (continued)

P,d iE "'-D'i2
E .
Sectoral Exports (29) Ei = f io [ pte. Pews ] WI
for each i.

Consumption Expenditure (30) CiePC;e = YiePC;e + cie [ ~. - ~ yj'Pc;,.] for each i and each c.
of Household Class c
of Goods Type i

(30a) Ce = Ceo + c - Yc + d(IR, - lRo)


Pc,

Consumption Breakdown (31) :kC = [ m~c ] [ Pdjc ]Omd •

Ie d it Pmks
into Domestic and
Imported Components

Import Component of mkc/djc ] C for each k and c.


(32) Cmke = [ 1 + (mkc/djc) IC
Consumption of Class k
by Household Type c
Domestic Component of (33) Cdit = (Cie - Cmke ).
Consumption of Class i
By Household Class c
Table 1 (continued)

Government Consumption (34) CdiG = CdiG .


and Breakdown into
Domestic and Imported
Components
(35) CmkG = Cm kG .
V. Savings, Investment and Trade Balances

Household Savings (36) Sc = Yc - L PCicC;c for each c.

Enterprise Savings (37) Sn = Yn'

Government Savings (38) SG = YG - L Pd;GCdiG - L PmkGCmkG'


k

Foreign Savings (39) S; = MN - EN - L TRFh - NFY . PNFY '


h

Total Savings (40) S = L Sc +SG +Sf +5".

Real Investment (41) 1=SIPi •

GDP, fGDI' [ _
fR ' ] fiR
Investment Function [
(41a) 1=10 GDP ] fRo'
o
-.J
W
Table 1 (continued)
-.)
~

Nominal Imports (42) MN = L L M~Pm"'kPfe + L L CmkcPmwkPfe+


s k c k

+ L CmkGPmw, Pfe + L ImwPmwkPfe.


k k

Nominal Exports (43) EN = L E;pd jE •

Investment Breakdown (44) Ii = ai/I.


by Commodity
il

Domestically Produced (45) f. [ PmkJr


Id' = d,I, r
pdill
Investment Goods
Imported Investment (46) Im k = Ij - Idj •
VI. Commodity Balances

Aggregate Demand for (47) X~ = ic


L Xis + E; + L Cd + Cd;G + Idi .
Domestically Produced
Commodity i
Aggregate Supply of (48) X~=X~ for each s.
Demand Equilibrium

Imports of Class k (49) Mk = L Mks + L Cmkc + CmkG + Imk for each k


s

Aggregate Imports (50) M= LMk .


k

price of the finished product P_s and the weighted average of the
domestically produced and imported commodity input prices, Pd_is and
Pm_ks, respectively, per unit of value added, dv_s. There is no loss in
generality for present purposes in normalizing the dv_s and hence Pv_s
by setting dv_s = 1 for each s.
Equations (8), (9), (10), and (11) define unit profits, sectoral
profits, the sector-specific rate of profit, and the overall average profit
rate, respectively. Equation (12) defines the sector-specific price index
of labor costs, Pr_s, as the weighted average of the price indexes of the
different kinds of labor, Pr_c, the weights b_cs representing the relative
quantity shares of each type of labor c in the aggregate wage bill of
sector s. The matrices of domestic prices and import prices, Pd_ij and
Pm_kj, respectively, are defined in Equations (13) and (14).
Equation (15) defines the aggregate GDP price deflator as a
weighted average of the sectoral prices, the weights being the base
period quantity shares of value added of sector s in total product or
value added of the whole economy. Similarly, Equations (16), (17),
(18), (19), and (20) define price indexes for their respective expendi-
ture types, i.e., for domestically produced investment goods, Pd_I,
imported investment goods, Pm_I, overall investment, P_I, consumption
goods of type i by household type c, PC_ic, and the overall price
deflator of consumption for each household type, Pc_c, as weighted
averages of their component price indexes, the weights being the base
period quantity shares.
Equation (21) consists of the profit-maximizing demand functions
for variable inputs. Each such function has been derived by maxi-
mizing profits, i.e., Equation (9), subject to the constraint of the CES
production function, with the sector-specific capital stock assumed to
be fixed in the short run at K_s. The advantage of these functional
forms is not only their generality but also their ease of estimation.
Specifically, one can assume the technical efficiency parameter, γ_s, to
be equal to unity and the distribution parameters a_rs and (1 − a_rs) to
be approximated by the shares of aggregate variable resources (labor)
and capital in sectoral value added. Once again, these values can be
taken directly from the Egyptian SAM. The parameter ρ_rk is, of
course, related to the elasticity of substitution σ_rk by σ = 1/(1 + ρ).
Equation (22) generates the sector and skill-specific demand for
labor. By setting the values of h_cs = 0 for c = 1, 2, 3, and s = 1, 2, 3,
4 and for c = 4, 5, 6 and s = 5, 6, ..., 12, rural-urban dualism in
labor markets is imposed.
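A minimal Python sketch of the calibration step described above: the CES
distribution parameters are approximated by the labor shares in sectoral value
added taken from the SAM. The figures below are made up for illustration and are
not taken from the 1976 Egyptian SAM.

import numpy as np

# Calibration sketch for Eq. (1b): a_rs and (1 - a_rs) are approximated by the
# labor and capital shares in sectoral value added. Hypothetical SAM entries.
labor_payments   = np.array([120.0, 80.0, 200.0])   # wage bill by sector
capital_payments = np.array([ 60.0, 40.0, 300.0])   # operating surplus by sector
value_added = labor_payments + capital_payments

a_rs = labor_payments / value_added                 # labor share = CES distribution parameter
print(np.round(a_rs, 3))                            # e.g. [0.667 0.667 0.4]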

Equation (23) aggregates these demands across sectors. Equation


(24) states that the aggregate supply of each variable resource type c
is considered to be fixed in any period. Alternatively, however, this
assumption can be relaxed by using the variable labor supply function
given by Equation (24a) which allows the labor supply of class c to
respond to changes in its real wage rate, its real income and/or its real
wage relative to that of a relevant alternative wage rate, e.g., the urban
wage relative to the rural wage of unskilled labor.
Section 3 is completed with the full employment equilibrium condi-
tion for R_c, Equation (25). The assumption of full employment,
implying that variable resource prices, i.e., the wage rates of all skill
types, are flexible in both directions, is admittedly a strong one for low
income, high population LDCs like Egypt. As an alternative specifica-
tion, the 'Lewis case', we specify the wage rate of unskilled labor to be
set exogenously, with excess supply prevailing in the market for that
skill type.
Bloc B and Section 4 of Table 1 begin by generating the dis-
posable incomes of the variable factors, i.e., of the six household types
c = 1, ..., 6, Y_c, in Equation (26), of the enterprises, Y_n, in Equation
(27), and of the government sector, Y_G, in Equation (28). Since these
equations are essentially definitions, we feel it unnecessary to explain
them further. Note that there are quite a number of different sources
of government revenue involving a large number of different tax rate
instruments, thereby increasing the policy relevance of the present
specification of the model.
Equation (29) is a demand function for exports of Cobb-Douglas
form. While its two elasticity parameters could be empirically estimated
without great difficulty, there is a considerable amount of available
information from other studies about such parameter values, making it
appropriate to assume values for these parameters and then sub-
sequently to subject them to sensitivity analysis.
The system of Equations (30) is the extended linear expenditure
system (ELES), which is especially convenient for empirical estimation
in multisectoral models such as the present one (Phlips, 1976; Lluch
et al., 1977), although quite demanding in terms of data and estimation
technique (requiring the imposition of constraints across equations).
In the case of the Egyptian SAM, however, values of the required
parameters can be obtained easily if one is prepared to assume that workers in
the lowest skill groups, i.e., the lowest 60% of the rural and urban
labor forces, respectively, can be regarded as operating at the sub-
sistence margin. If so, their consumption patterns can be regarded as
providing the subsistence components (γ_ic, valued at the prices PC_ic). The
marginal propensities to consume c_ic can therefore be regarded as being out of
discretionary income, i.e. that amount of income above the subsistence
level. Equation (30a) is an alternative (Keynesian-neo-classical) func-
tion for generating aggregate consumption. In this case, the commodity
breakdown is accomplished by a modified version of (30) in which the
term in brackets is replaced by C_c as generated by Equation (30a).
As mentioned above, Section 5 is concerned with savings, invest-
ment and the trade balance. Equations (36)-(39) define savings of the
household sectors, S_c, of the enterprise sector, S_n, of the government
sector, S_G, and of the foreign sector, S_f, respectively. While the first
three types of savings are assumed to be endogenously determined,
consistent with the basic purpose of the model, the level of foreign
savings, S_f, is exogenous. Equation (39) requires that solutions for
imports and exports, both evaluated in nominal terms, are consistent
with the exogenous S_f. Equation (40) defines aggregate savings and
Equation (41) converts aggregate savings (in nominal values) into real
investment by deflating by the price index of investment goods, P_I,
which was defined in Equation (18) above.
Equation (42) defines the nominal value of imports as the sum of
the pretax values of intermediate goods imports, consumption goods
imports, government imports and investment goods imports. Likewise,
Equation (43) defines the nominal value of total exports. In the
absence of an independent investment equation, according to Equa-
tion (41) investment is essentially savings-determined. An alternative
is to use the independent investment demand function (41a). In either
case, as in the case of the input-output coefficients, aggregate invest-
ment, I, is broken down into specific investment goods, I_i, and then
subsequently into the domestic and imported components, Id_i and
Im_k, in Equations (44), (45), and (46).
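The savings-driven closure of Equations (40)-(46) can be summarised in a few
lines of Python; the figures are hypothetical and the price-responsive import split of
Equation (45) is collapsed to fixed shares for brevity.

import numpy as np

# Hypothetical illustration of the savings-determined investment closure, Eqs. (40)-(46).
S_households, S_enterprises, S_government, S_foreign = 300.0, 250.0, -50.0, 654.0
S = S_households + S_enterprises + S_government + S_foreign   # Eq. (40)

P_I = 1.15                         # price index of investment goods
I = S / P_I                        # Eq. (41): real investment is savings-determined

a_iI = np.array([0.1, 0.5, 0.4])   # commodity composition of investment, Eq. (44)
import_share = np.array([0.0, 0.3, 0.6])
I_i  = a_iI * I
Im_k = import_share * I_i          # imported component (fixed shares here for brevity)
Id_i = I_i - Im_k                  # domestically supplied component, Eq. (46)
print(round(I, 1), np.round(Id_i, 1), np.round(Im_k, 1))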
Finally, in Section 6 are the commodity balance equations. These
begin with the aggregate demand for domestically produced commo-
dity s, X_s^D, in Equation (47). The supply of s, X_s^S, from Equation (1a)
above is set equal to the aggregate demand for s in Equation (48).

Equation (49) defines the aggregate demand for imports of type k (in
real terms) and, finally, Equation (50) aggregates over all commodity
types to obtain total imports.

3. SOME SIMULATION EXPERIMENTS OF THE EFFECTS OF FOREIGN AID WITH THE CGE MODEL FOR EGYPT

In this section we demonstrate the potential usefulness of CGE


models for evaluating the effects of different levels of foreign savings
and related policies under alternative assumptions concerning behav-
ioral relationships, functional forms and parameter values. We do so
by presenting and analysing the results of several types of simulation
experiments with respect to the Egyptian CGE model.
The parameters and nature of each of the several hundred simula-
tion runs conducted on the static version of the model are defined in
Table 2. The different simulations reflect different assumptions about

Table 2. Characterization of the Alternative Simulation Runs

Foreign Savings Options


Base: The observed level of 654 million Egyptian pounds. Results given in odd-
numbered rows
+10: The observed level plus 10% = 719.4 million Egyptian pounds. Results
given in even numbered Rows.

Labor Supply Options


Fixed: The labor supply of each region and income distributional group is fixed
at the level observed in the base year. Results given in Rows 1-2.
Variable: The labor supply of each region and sector is variable and computed
with the use of Equation (24a). Results given in Rows 3-16.
Lewis-Fixed: The labor supply of labor class 1 is computed via Equation (24a).
The labor supplies of all other sectors are fixed. Results given in Rows 17-30.
Lewis- Variable: The labor supply of labor class 4 is set arbitrarily high and those
of the other classes are computed via Equation (24a). Results given in Rows
31-46.

Consumption Function Options


Linear Consumption Function: Consumption computed as in Equation (30a).
Results given in Columns A-E.
Extended Linear Expenditure System: Consumption computed via Equation (30).
Results given in Column F.

Table 2 (continued)

Investment Function Options


None; i.e., investment automatically assumes the value given by aggregate savings
(investment is savings-determined). Results given in Columns A and F.
Full Investment Function: Gross investment computed according to Equation
(41 a). Results given in Column B.
Investment Function Insensitive to GDP: Gross investment computed according
to Equation (41a) but with f_GDP = 0. Results given in Column C.
Investment Function Insensitive to Interest Rate: Gross investment computed
according to Equation (41a) but with fiR = O. Results given in Column D.
Exogenous Investment: Gross investment is set equal to its observed base year
value of 1589 million Egyptian pounds. Results given in Column E.

Production Function Parameter Value Options


a_i, the share of labor in value added in sector i = the value observed in the
base year.
a_i + 10 = the base value of a_i plus 10%. Results given in Rows 5, 6, 19, 20, 33,
34.
γ_i, technical efficiency parameter = the value observed in the base year.
γ_i + 5 = the base value of γ_i plus 10%. Results given in Rows 7, 8, 21, 22, 35,
36.
ν_i, the economies of scale parameter = 1 in all sectors, i.e., signifying constant
returns to scale.
ν_i = 0.9 in all sectors, i.e., decreasing returns to scale. Results given in Rows 9,
10, 23, 24, 37, 38.
ν_i = 1.2 in sector 7, i.e., economies of scale, otherwise = 1.0. Results given in
Rows 11, 12, 25, 26, 39, 40.
σ_rk, the elasticity of substitution in production between capital and labor services
= 0.5 in all sectors.
σ = 0.4 in all sectors. Results given in Rows 13, 14, 27, 28, 41, 42.
σ = 0.6 in all sectors. Results given in Rows 15, 16, 29, 30, 43, 44.
σ = 0.4 in urban sectors, = 0.6 in rural sectors. Results given in Rows 45, 46.

Note. All the equation numbers referred to are those given in Table 1.

(a) the level of foreign savings (Sf), (b) the supplies of labor of the
different region and income-skill groups, (c) the consumption func-
tions of these region and income-skill classes, (d) the investment
function, and (e) the parameters of the sectoral production functions
which indeed play such an important role in generating the factor
demands and hence the incomes of the different income-skill classes.
While the alternatives identified cover only a tiny fraction of the
sensitivity tests that could be performed, they include many of the

changes which a priori could be considered to make the most


difference in the results, including the domestic savings and investment
functions, and the demand and supply functions for labor. As noted in
Table 2, the effect of foreign saving in Tables 3 and 4 below can be
assessed most directly by comparing the results of any column and
odd-numbered row which pertains to the base level of Sf (654 million
Egyptian pounds at 1976 prices) with that in the even-numbered row
entries immediately below it, the even-numbered rows containing the
results for the case in which Sf is 10% higher (719.4 million Egyptian
pounds).
Because both the development planner's interest and the two-gap
models have traditionally focussed on the effects of foreign savings on
GDP and gross investment (I), in Table 3 at least, we concentrate on
the effect of Sf on these variables.
We begin in entry AI, i.e., Row 1, Column A, with what we call the
base version of the model, namely, a version of the model with the
following options: fixed labor supply, no investment function (implying
that investment is entirely savings-determined), the linear consumption
function, i.e., Equation (30a) in Table 1, for determining aggregate real
consumption, the breakdown of aggregate consumption expenditures
being determined by the linear expenditure system LES (a modified
version of Equation (30) in Table 1 above), and the base level of Sf
The corresponding simulation in which all the other assumptions are
identical but the level of Sf is 10% higher, i.e., 719.4, instead of 654,
million Egyptian pounds, is labelled A2. By comparing the solution
values of run A2 with the values of the corresponding variables of A1,
which are given in rows 1 and 2 under Column A of Table 3, one can
determine the effects of the higher level of foreign savings on all
variables in the system.
By comparing the solution values for GDP and I of run Al with
those of run A2 given in the first two rows of column A in Table 3,
one can see that GDP is raised by 6 and I by 58 as a result of the
increased Sf of 65.4. Since Sd = I − Sf, it can readily be seen that
the estimate of the coefficient of Sf on Sd implied by these estimates
is −0.113, considerably below the empirical estimates obtained by
Weisskopf (1972b) and Griffin and Enos (1970). Since this result
derives from one particular and untested set of modelling assumptions,
we in no way mean to suggest that our estimate is more realistic.
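For concreteness, the implied coefficient quoted above can be recovered directly from the two figures just cited (a simple check, using Sd = I − Sf):

\[
\frac{\Delta S_d}{\Delta S_f} \;=\; \frac{\Delta I - \Delta S_f}{\Delta S_f}
\;=\; \frac{58 - 65.4}{65.4} \;\approx\; -0.113 .
\]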
Indeed, the main purpose of the sensitivity experiments is to draw at
least moderately upon the considerable flexibility of the modeling
Table 3. Sensitivity Analysis of the Effects on Gross Domestic Product (GDP) and Gross Investment (I) of Different Levels of
Foreign Savings to Alternative Sets of Assumptions

Row        A           B           C           D           E           F
         GDP    I    GDP    I    GDP    I    GDP    I    GDP    I    GDP    I

1 6583 1589 6583 1589 6583 1589 6583 1589 6583 1589 6583 1589
2 6589 1647 6590 1631 6590 1631 6590 1591 6590 1589 6589 1621
3 6584 1590 6584 1590 6584 1590 6585 1591 6585 1589 6592 1597
4 6673 1711 6683 1687 6685 1681 6710 1620 6722 1589 6684 1687
5 6515 1552 6513 1558 6511 1562
6 6605 1672 6613 1653 6614 1652
7 6752 1674 6756 1663 6761 1653
8 6843 1799 6858 1765 6865 1750
9 6148 1313 6132 1366 6121 1400
10 6226 1424 6219 1447 6211 1473
11 6661 1588 6678 1622 6680 1616
12 Sno 1748 6780 1720 6784 1709
13 6568 1581 6568 1582 6567 1583
14 6650 1697 6658 1674 6660 1670
15 6592 1592 6593 1592 6593 1591
16 6687 1717 6698 1693 6702 1687
17 6581 1590 6580 1590 6580 1590 6581 1588 6581 1589 6582 1589
18 6603 1644 6606 1630 6611 1616 6614 1596 6615 1589 6617 1606
19 6407 1437 6400 1472 6396 1486
20 6432 1481 6427 1503 6424 1515
21 6831 1756 6838 1727 6842 1716
22 6852 1820 6863 1777 6869 1759
23 6443 1588 6445 1578 6443 1588
Table 3 (continued)

24 6459 1653 6464 1625 6463 1634


25 6606 1562 6604 1571 6604 1570
26 6630 1608 6631 1606 6638 1604
27 6415 1427 6407 1465 6404 1479
28 6435 1475 6429 1499 6427 1510
29 6697 1707 6703 1682 6705 1674
30 6722 1766 6732 1727 6734 1718
31 6581 1588 6580 1589 6589 1596 6580 1590 6580 1589 6580 1589
32 6718 1621 6733 1589 6787 1697 6673 1711 6685 1687 6688 1681
33 6530 1555 6527 1560 6525 1564
34 6624 1676 6634 1657 6635 1654
35 6728 1669 6734 1658 6739 1649
36 6824 1791 6842 1762 6850 1748
37 6122 1308 6103 1361 6089 1397
38 6204 1420 6196 1443 6186 1471
39 6675 1626 6677 1622 6680 1610
40 6774 1749 6786 1722 6792 1710
41 6593 1587 6593 1588 6593 1587
42 6680 1704 6690 1682 6693 1675
43 6571 1590 6572 1589 6571 1590
44 6671 1716 6684 1691 6688 1696
45 6677 1384 6577 1572 6600 1371
46 6776 1477 6689 1665 6704 1443

Notes. All figures are in millions of Egyptian pounds at 1976 prices. For explanation of modelling assumptions and parameter
values see Table 2 and text. Blank entries indicate simulation not performed.

Table 4. Index of Income Inequality: Ratio of the Real Wage Rate of the Highest
Income Group in the Urban Sector to that of the Lowest Income Group in the Rural
Sector for Alternative Assumptions About Levels of Foreign Savings, Functional
Forms and Parameter Values

A B C D E F

1 0.961 0.962 0.961 0.962 0.961 0.959


2 1.025 1.016 1.015 0.987 0.986 1.004
3 0.961 0.961 0.961 0.961 0.961 0.960
4 0.985 0.979 0.977 0.961 0.953 0.980
5 0.954 0.955 0.955
6 0.976 0.972 0.971
7 0.976 0.974 0.971
8 1.001 0.993 0.989
9 0.934 0.947 0.955
10 1.102 0.962 0.969
11 1.018 0.979 0.978
12 1.005 0.998 0.995
13 0.944 0.945 0.946
14 0.966 0.960 0.960
15 0.977 0.977 0.977
16 1.004 0.997 0.995
17 0.949 0.950 0.950 0.950 0.950 0.960
18 1.085 1.089 1.045 1.097 1.098 1.100
19 1.176 1.167 1.164
20 1.340 1.334 1.331
21 0.795 0.802 0.806
22 0.914 0.923 0.928
23 0.525 0.526 0.524
24 0.603 0.607 0.606
25 1.067 1.065 1.065
26 1.218 1.219 1.219
27 1.203 1.197 1.195
28 1.324 1.321 1.320
29 0.711 0.719 0.721
30 0.858 0.871 0.873
31 0.948 0.948 0.948 0.948 0.948 0.950
32 0.987 0.986 0.986 0.985 0.984 0.988
33 0.994 0.994 0.994
34 1.033 1.033 1.033
35 0.909 0.909 0.909
36 0.946 0.946 0.946
37 0.854 0.855 0.857
38 0.886 0.887 0.888

Table 4 (continued)

39 0.977 0.977 0.977


40 1.919 1.017 1.017
41 1.004 1.004 1.005
42 1.040 1.039 1.039
43 0.901 0.901 0.901
44 0.944 0.944 0.988
45 1.030 1.032 1.032 1.157
46 1.045 1.050 1.050 1.144

Footnotes: 1. Odd-numbered rows pertain to the base level of foreign savings, i.e.,
654 million Egyptian pounds. Even-numbered rows assume a 10% higher level of
foreign savings, i.e., 719.4 million Egyptian pounds. 2. For explanation of other rows
and column numbers see Table 2.

package so as to determine how these results would be affected by


alternative assumptions about closing rules, functional forms and
parameter values.
Without going into many examples, it should be obvious that the
results of Table 3 reveal considerable sensitivity to the
assumptions concerning the modeling of labor supply, consumption,
investment and the parameters of the production function. For exam-
ple, with respect to the effects on gross investment I that result from
the simulated increase in Sf of 65.4, the increases vary from 0 in all
such comparisons in column E to 160 in A11-A12. The zero gains
in I obtained from column E are, of course, immediately attributable
to the assumption of column E that real investment is exogenously set
at the base level. Since this particular case would seem to be especially
artificial, it can perhaps be ignored for the purpose of identifying a
realistic range for such an effect. The next lowest increases of I (of
between 2 and 8) which are registered by comparing the results of
simulation runs D1 and D2 and those of runs D17 and D18 can also
be ruled out for essentially the same reason. 8 Likewise, the largest
increases which occur rather frequently in column A of between 111
and 160 are also rather unrealistic since these results are generated
under the assumptions (a) that investment is entirely savings-deter-
mined, and (b) that other rather favorable circumstances prevail, such
as that the labor supply is rather variable, economies of scale prevail
in the investment goods sector, and/or the elasticity of substitution in
production is relatively high.

Excluding these extreme cases leaves us with a more realistic range


of increases in I from 17 between runs F25 and F26 to 106
between runs B13 and B14 and between runs B43 and B44. Most of
the alternative increases are in the still narrow range of 44-102. If
these results are at all representative, they suggest that it is not
impossible that domestic savings can be increased by foreign saving 9
rather than decreased by such savings as the increasingly pessimistic
interpretations of empirical evidence on the two-gap model would
have us believe. Not surprisingly, the largest increases in I are
obtained when the labor supply is most variable, when savings play a
dominant role in the determination of investment, and when the
production function assumptions are such that fairly sizeable increases
in real GDP are obtained.
While the range of absolute increases in real GDP associated with
the ten percent increase in Sf is also rather large, varying from lows of
6 or 7 between the corresponding entries of rows 1 and 2 to a high of
153 between E31 and E32, in terms of percentage changes the range
is rather narrow, namely from 0.1 to 2.3%. The vast majority of such
estimates, moreover, is between O.~ and 1.7%. While these increases
are certainly far from overwhelming, they are not zero as much of the
two-gap literature has assumed them to be, and in absolute terms they
are not inconsequential.
Similar tables could be constructed from the results of the com-
parable runs for all other endogenous variables in the model, such as
the sectoral levels of output, employment, imports, exports and so on.
Given our focus on income distribution and the apparently disappoint-
ing experience of foreign aid in that respect which was referred to in
Section 1 above, in Table 4 we present the simulation results for one
simple and convenient indicator of income inequality appropriate to
the case in which there are very few income groups (making the
computation of Gini or Theil coefficients rather meaningless). Bearing
in mind also the general equilibrium character of the model wherein
the burden of adjustment is on wage rates rather than on the levels of
employment, our measure of income inequality is the ratio of the real
wage10 of the highest income group, class 3 ('urban upper 10%'), to that
of the lowest income group, class 4 ('rural lower 60%').
Once again, the results show a fair amount of sensitivity to the
alternative parameter values. Note that in the table as a whole, the
ratio of the two wage rate indexes varies from a low of 0.524 in C23
to a high of 1.340 in A20. While the former indicates a substantial

decrease in the real wage rate differential between these two groups
relative to that of the base run, the latter ratio indicates a substantial
widening of the wage differentials, again relative to the base run, i.e.,
what actually transpired according to the 1976 Egyptian SAM.
By reflecting on the characteristics (assumptions) of these runs
which yield the extreme values of these relative wage rate indexes, it
becomes easy to understand how these results arise.
Given our primary objective of measuring the effects of external
assistance, we once again concentrate on comparisons with respect to
the different levels of foreign assistance represented by the value of
the ratio in a particular column of an odd-numbered row with that of
the corresponding cell of the even-numbered row immediately below
it. As the reader can easily see, such comparisons show that an
increased level of foreign savings almost invariably raises the degree of
income inequality. The reason is that the investment goods sectors,
sectors 7 and 8 ('other industry' and 'construction'), whose demand, and
hence output, is increased by virtue of the sizeable rise in real gross
investment induced by the additional foreign savings, employ only urban
workers (according to the Egyptian SAM utilized in these simulations). The
increase in the demand for urban labor, even if most of it were in the
form of the lowest skilled class for that region, has the effect of raising
the real wage rates of urban labor relative to those of rural labor and
hence also the inequality index employed here. The only exceptions to
this rule occur between the cells of row 45 and the corresponding
ones of row 46. The difference in these cases is attributable to the fact
that the elasticity of substitution (σ_i) is assumed to be considerably
lower in these sectors (i.e., all urban sectors) than in the rural sectors,
requiring a greater reduction (or less of an increase) in the real wages
of urban labor in order to increase employment and hence output in
these sectors than in the rural ones.
While the results of Table 3 seemed to offer grounds for greater
optimism with respect to the effects of foreign savings on both income
and investment and hence the rate of growth of income, the results of
Table 4 indicate that, unless certain offsetting policies or programs are
adopted, increased levels of foreign savings would have the effect of
increasing the degree of income inequality. These results, of course,
call attention to the need for specific programs and policies for
offsetting the inequality-increasing effect of higher Sf.
The reader should be reminded, however, that increased intergroup

inequality in real wage rates need not imply overall increased income
inequality among households both because intragroup income inequal-
ity may be reduced by fuller employment of those in that income class
as a whole and because fuller employment of the low income surplus
labor group may well allow household incomes of this class to be
increased substantially even without increases in wage rates. Likewise,
even if the increased real wage inequality result would carry over to
inequality in real disposable income, as indeed our results (not shown)
indicate that it does, increased income inequality need not imply
decreased absolute income or increased poverty among the poorest
group (the lowest income group in the rural sector, income class 4).
Indeed, the results for real disposable income of the rural poor (not
shown) demonstrate that this is not the case. Real disposable income
of the lowest income group invariably increases, though generally only
modestly, with increased external assistance.

4. CONCLUSIONS AND SUGGESTIONS FOR FURTHER RESEARCH

The primary intention of the preceding section is to illustrate the


potential usefulness of CGE models in analysing the actual or poten-
tial effect of foreign assistance. While all of the simulations with the
model described above have pertained to the static model, i.e., are
those of the comparative static type, dynamic experiments could also
be performed. In this respect, one could easily follow the differential
effects of the different levels of Sf successively through one or more
different future periods. For example, any higher levels of investment
made possible by higher levels of Sf could be used to increase the
sector-specific capital stocks in the next period and then the compara-
tive static experiments could be undertaken once again. Likewise, the
values of various other exogenous variables and parameters could be
updated so as to make them more representative of the expected future
conditions with and/or without the effects of extra foreign aid of the
previous periods.
Because in principle the dynamic effects of higher levels of foreign
savings could be positive or negative, depending on the relative
importance of the capital-enhancing effects, on the one hand, and the
distortion-increasing ones on the other hand, simulations of this sort
could be of great help in identifying the degree of sensitivity of the

results to alternative assumptions about the underlying parameter


values, etc. As such, the CGE results could be essential in the identifi-
cation of priorities in statistics gathering and empirical estimation.
Another important use of the model could be investigating the
effects of alternative forms which Sf might take, such as education
(and hence changing the mix of workers between the three different
skill classes in each region), and/or projects which would affect
parameters in the production function of specific sectors. So too, the
model could be advantageous in investigating the effects of accom-
panying policy reforms. Indeed, in some of our own thus far un-
reported experiments along these lines we have found that such
policies can be of considerable help in offsetting the otherwise
unwanted effects of foreign assistance, such as those on income
distribution.
Before concluding this exposition of our simulation experiments, it
should be emphasized that the results presented only begin to take
advantage of all the flexibility provided in the modelling system. Our
experience goes well beyond the results reported. In certain cases the
outcomes explain why certain other combinations of assumptions were
not reported in the sensitivity analysis runs provided in Table 3. For
example, the reason why results are not presented for another column,
say column G, which would have combined an investment function as
in the column E runs with the ELES consumption-savings functions
used in the runs of column F was that the results were found to be
identical to those of column F. This is the case because, with the
ELES option in force, no form of savings is sensitive to the interest
rate. Hence, savings are determined independent of this interest rate,
implying that, if investment, which according to Equation (41a) is
sensitive to both GDP and interest rate, should be different from
savings, the interest rate (which is affected only by savings-investment
disequilibrium) would adjust in such a way as to equate investment
with savings. Therefore, any interest rate-sensitive investment function
is redundant when the ELES option is in effect. On the other hand,
with both the ELES savings function and an investment function
which is not sensitive to interest rate in operation simultaneously,
savings and investment will normally be inconsistent and convergent
solutions cannot be achieved. This explains why no results are pre-
sented for this case. Another valuable benefit provided by simulation
experiments of this sort, therefore, is to help the user understand the
economics of the system by forcing the user to come up with explana-
tions for why solutions are not affected by certain changes or why in
certain situations equilibrium solutions cannot be obtained.
Besides these primarily methodological lessons, one cannot help but
conclude with also a substantive message concerning the effects of
foreign aid. While the generally disappointing experience with foreign
aid programs and projects, and the possibilities of harmful aid-induced
biases of various sorts cannot and should not be denied, the results,
nevertheless, show that, when aid is properly designed, managed and
implemented and the unwanted effects such as greater income inequal-
ity or increased capital intensity are accompanied by policies designed
to offset these effects, foreign aid remains a potentially useful tool for
assisting LDCs in achieving their development objectives at minimum
social cost.

NOTES

* The author expresses his appreciation for the programming assistance of
Charles Williams, who is the co-author and co-designer of the CGE modelling
package used in this paper, and to the editors for their useful comments and
suggestions. Gerhard Tintner's pioneering work in modelling some of the
issues discussed here, much of which was done at the University of Southern
California, was also an inspiration to this work.
1. See, e.g., Chenery and Strout (1966), Landau (1971), and Weisskopf (1972a,
b).
2. Among the most important studies of this type are Houthakker (1965), Griffin
(1970), Griffin and Enos (1970).
3. See especially Bauer (1981) and Tendler (1975).
4. One should also bear in mind that foreign aid may take many different forms,
each form possibly giving rise to different kinds of effects. For a relatively
comprehensive but outdated textbook analysis see Mikesell (1968).
5. For a nice example of the case study approach as well as numerous references
to other case studies see Tendler (1975).
6. This kind of an issue has a rich tradition in economics. For a recent and
especially relevant example see Chichilnisky (1980).
7. See, e.g., Adelman and Robinson (1978), Dervis et ai. (1982), and de Melo
(1982).
8. I is virtually exogenous in these cases since, from Equation (41a), with the
elasticity of investment with respect to the interest rate set to zero, it can be
influenced only by the rate of change in GDP which, because of the fixed
labor supply and other assumptions of these cases, is minimal, i.e., 0.1 %.
9. This result pertains to the static results only. If the foreign aid takes the form
of loans, future net savings may be reduced, thereby giving the present value
of the net saving stream an ambiguous sign.

10. The real wage rates are the nominal wage rates divided by the class-specific
price indexes.

REFERENCES

Adelman, I. and Robinson, S. (1978), Income Distribution Policy in Developing Countries, Stanford: Stanford University Press.
Bauer, P. T. (1981), Equality, the Third World and Economic Delusion, Cambridge: Harvard University Press, esp. Ch. 5-7.
Chenery, H. B. and Strout, A. M. (1966), 'Foreign assistance and economic development', American Economic Review 56: 679-732.
Chichilnisky, G. (1980), 'Basic goods, the effects of commodity transfers and the international economic order', Journal of Development Economics 7: 505-520.
de Melo, M. H. (1982), 'A simulation of development strategies in an economy-wide model', Economic Development and Cultural Change 30: 335-350.
Dervis, K., de Melo, J., and Robinson, S. (1982), General Equilibrium Models for Development Policy, Cambridge: Cambridge University Press.
Eckaus, R. H., McCarthy, F. D., and Mohie-Eldin, A. (1981), 'A social accounting matrix for Egypt, 1976', Journal of Development Economics 9: 183-204.
Ginsburgh, V. A. and Waelbroeck, J. L. (1981), Activity Analysis and General Equilibrium Modelling, Amsterdam: North-Holland.
Griffin, K. B. (1970), 'Foreign capital, domestic savings and economic development', Bulletin of the Oxford University Institute of Economics and Statistics 32: 99-112.
Griffin, K. B. and Enos, J. L. (1970), 'Foreign assistance: objectives and consequences', Economic Development and Cultural Change 18: 313-327.
Houthakker, H. S. (1965), 'On some determinants of saving in developed and underdeveloped countries', in E. A. G. Robinson (ed.), Problems in Economic Development.
Landau, L. (1971), 'Savings functions for Latin America', in H. B. Chenery (ed.), Studies in Development Planning, Cambridge: Harvard University Press.
Lluch, C., Powell, A. A., and Williams, R. A. (1977), Patterns in Household Demand and Saving, New York: Oxford University Press.
Mikesell, R. F. (1968), The Economics of Foreign Aid, Chicago: Aldine.
Nugent, J. B. and Williams, C. S. (1984), A User's Guide to Economy-wide Modeling of External Assistance Effects, Resource Allocation and Growth, Los Angeles: University of Southern California.
Phlips, L. (1974), Applied Consumption Analysis, Amsterdam: North-Holland Publ. Co.
Tendler, J. (1975), Inside Foreign Aid, Baltimore and London: Johns Hopkins University Press.
Weisskopf, T. A. (1972a), 'An econometric test of alternative constraints of the growth of underdeveloped countries', Review of Economics and Statistics 54: 67-78.
Weisskopf, T. A. (1972b), 'The impact of foreign capital inflow on domestic savings in underdeveloped countries', Journal of International Economics 2: 25-38.
PART III
ANALYSIS OF EFFICIENCY
The Active Approach of Stochastic Optimization
with New Applications

JATI K. SENGUPTA
University of California, Santa Barbara, California, U.S.A.

Abstract. The use of the active approach in characterizing robust solutions in a linear


stochastic production system is analyzed here through minimax and other solutions.
Some policy implications are also discussed.

1. INTRODUCTION

An ordinary linear programming (LP) problem


    max z = c'x,   x ∈ R = {x | Ax ≤ b, x ≥ 0}                      (1.1)

becomes stochastic when the set θ = (A, b, c) of parameters depends
on the random states of nature, i.e., θ = θ(s), s ∈ W, where W is the
index set. How to define an optimal decision in the face of this
uncertain situation is the critical issue in many policy models. Two
major approaches to this issue are known as the passive and the active
approach. Let W₁ be a subset of the set W of states of nature such
that for all s ∈ W₁ the following two sets X = {x(s): s ∈ W₁},
X* = {x*(s): s ∈ W₁} of feasible and optimal feasible vectors exist
and are nonempty. Then the passive approach requires the decision-
maker (DM) to wait for a sufficient number of sample observations
from the set X* and the associated set Z* of optimal profits where
Z* = {z*(s): s ∈ W₁, z*(s) = c'(s)x*(s)}. Let F_θ(x*) be the cumulative
distribution of x* = x*(s) for s ∈ W₁ and let F_θ(z*) be the associated
distribution of optimal profits z* = z*(s), s ∈ W₁. These two
distributions F_θ(x*), F_θ(z*) are used in the passive approach to choose
a terminal optimal decision vector x⁰, say. For example x⁰ may be the
mode of the distribution F_θ(x*). Likewise the distribution F_θ(z*)
may be used to estimate the probability of a ruin (or a security) level
z₀ of profits, i.e., Prob(z* ≤ z₀), where z₀ is known or preassigned.
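As a concrete, purely illustrative sketch of the passive approach, the following Python fragment draws repeated states of nature, solves the resulting LPs, and tabulates the empirical distribution of optimal profits z*. The coefficient values, the security level z₀, the use of scipy, and the choice to randomize only c and b (for brevity) are all assumptions of the sketch, not part of the text:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c_bar = np.array([3.0, 5.0])                 # mean objective coefficients (assumed)
A_bar = np.array([[2.0, 1.0], [1.0, 3.0]])   # mean constraint matrix (assumed, held fixed here)
b_bar = np.array([10.0, 15.0])               # mean resource vector (assumed)

z_star = []
for s in range(2000):                        # sampled states of nature s
    c = c_bar + rng.normal(0.0, 0.3, 2)
    b = b_bar + rng.normal(0.0, 1.0, 2)
    res = linprog(-c, A_ub=A_bar, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    if res.success:
        z_star.append(-res.fun)              # optimal profit z*(s)

z_star = np.array(z_star)
z0 = 20.0                                    # a preassigned security level (assumed)
print("mean z* =", z_star.mean().round(3), "  P(z* <= z0) =", (z_star <= z0).mean().round(3))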

The active approach proposed by Tintner [6, 7] in the theory of
stochastic programming develops an adaptive control procedure,
which allows a cautious policy and a method of updating as more new
information becomes available. Unlike the passive approach it starts
from a given information structure and converts the original stochastic
problem into a two-stage decision process. For instance the resource
vector b(s) is transformed to β(s) by the allocation matrix U = (u_ij),
u_ij ≥ 0, Σ_j u_ij = 1, where u_ij denotes the proportion of resource b_i
allocated to producing the output x_j. Let U be the set

    U = {U : Σ_{j=1}^{n} u_ij = 1,  u_ij ≥ 0,  i = 1, 2, ..., m;  j = 1, 2, ..., n}       (1.2)

of all feasible allocations. Given any allocation U₀ ∈ U, the DM
considers in the first stage the two conditional distributions F_θ(x*|U₀),
F_θ(z*|U₀), e.g., their mean, mode or variance, in order to assess the
riskiness of the allocation choice U₀. Then at the second stage the DM selects
a terminal allocation decision from the set U.
Our object here is two-fold. One is to extend the information-
adaptive characteristics of the active approach so as to show that a
robust solution can be characterized in this framework. Secondly, we
consider an application in the new area of data envelopment analysis
[1, 2] which measures Pareto-efficiency of a set of decision-making
units each using several inputs to produce a single output. The
problem here is to characterize efficiency through suitable LP models,
when the input and output data admit of stochastic variations.

2. ROBUST SOLUTIONS

Consider first a simple example of the active approach with two
outputs x₁, x₂ and two inputs b₁, b₂:

    max z = x₁ + x₂
    subject to  a₁₁x₁ + a₁₂x₂ ≤ b₁
                a₂₁x₁ + a₂₂x₂ ≤ b₂                                  (2.1)
                x₁ ≥ 0,  x₂ ≥ 0.

The input availabilities b_i and the production coefficients a_ij are
assumed to be random:

    a_ij = ā_ij + α_ij,    b_i = b̄_i + β_i                          (2.2)

where bar denotes mean values and the errors α_ij and β_i are assumed
for simplicity to be mutually independent with zero means and finite
variances. In the passive approach for example, we replace a_ij and b_i
by their mean values ā_ij, b̄_i and solve the LP model (2.1) at the mean.
Let x₀ be the optimal solution vector in this case and z₀ be the
associated optimal profit. The risk associated with this policy x₀,
measured for example by the variance of z* conditional on x = x₀, has
to be evaluated before one can recommend x₀ as the final decision.
Let V(z*|x₀) be this conditional variance. It is clear that if V(z*|x₀) is
very high, then x₀ would be very risky and the DM has to look for an
alternative solution. One way to generate alternative solutions is to
adopt the active approach by introducing the allocation ratios u_ij for
the resources and analyzing the implications of selecting them at
alternative levels. For instance, the constraints of the LP model (2.1)
get transformed as:

    a₁₁x₁ ≤ u₁₁b₁,    a₁₂x₂ ≤ (1 − u₁₁)b₁
    a₂₁x₁ ≤ u₂₁b₂,    a₂₂x₂ ≤ (1 − u₂₁)b₂                           (2.2)
    x_j ≥ 0,  j = 1, 2;    u₁₁, u₂₁ ≥ 0.

Now assume that the errors α_ij, β_i in the set D₁ defined by (2.2)
satisfy the following equations at the optimal basis for a specific set
(u°₁₁, u°₂₁) of allocation ratios:

    x₁ = {(b̄₁ + β₁)u°₁₁} / (ā₁₁ + α₁₁)
                                                                    (2.3)
    x₂ = {(b̄₂ + β₂)(1 − u°₂₁)} / (ā₂₂ + α₂₂).
On expanding the right-hand sides, assuming the errors to be sym-
metric and taking expectations we obtain

    E(x₁) = u°₁₁b̄₁/ā₁₁ + u°₁₁b̄₁σ²₁₁/(ā₁₁)³ · [1 + 3σ²₁₁/(ā₁₁)²]
                                                                    (2.4)
    E(x₂) = (1 − u°₂₁)b̄₂/ā₂₂ + (1 − u°₂₁)b̄₂σ²₂₂/(ā₂₂)³ · [1 + 3σ²₂₂/(ā₂₂)²]

where σ²_ii is the variance of α_ii. If we assumed instead zero errors,



then the optimal solutions are

    x₁⁰ = u°₁₁b̄₁/ā₁₁,    x₂⁰ = (1 − u°₂₁)b̄₂/ā₂₂.
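The expectations in (2.4) come from expanding the reciprocal terms in (2.3); a sketch of that step, which additionally treats the fourth moment of the errors as 3σ⁴ (exact for normal errors, an assumption beyond what is stated above), is:

\[
E\!\left[\frac{1}{\bar a + \alpha}\right]
= \frac{1}{\bar a}\,E\!\left[1 - \frac{\alpha}{\bar a} + \frac{\alpha^{2}}{\bar a^{2}} - \frac{\alpha^{3}}{\bar a^{3}} + \frac{\alpha^{4}}{\bar a^{4}} - \cdots\right]
\approx \frac{1}{\bar a}\left(1 + \frac{\sigma^{2}}{\bar a^{2}} + \frac{3\sigma^{4}}{\bar a^{4}}\right),
\qquad
E(x_2) = (1-u^{0}_{21})\,\bar b_{2}\,E\!\left[\frac{1}{\bar a_{22}+\alpha_{22}}\right],
\]

which reproduces the second line of (2.4); the first line follows in the same way.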

Since it follows from (2.4) that E(x_i) > x_i⁰ for i = 1, 2, it is clear
that it pays to have information on the probability distribution of x₁
and x₂. For a specific choice of the allocation ratios (u°₁₁, u°₂₁), the gain
in higher profits must of course be evaluated against any higher risk
due to higher variance of profits. The trade-off between expected
profit and its risk is explored in the active approach by varying the
allocations U₀ ∈ U. This may be called the mean variance efficiency
frontier. The selection of a point on the mean variance frontier as the
terminal decision may be made by the DM either on the basis of his
attitudes to risk or by the risk premium prevailing in the current
capital market.
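To illustrate this trade-off numerically, the following sketch (illustrative coefficient values, assumed for the example; numpy assumed available) simulates the two-output example above for a few allocation choices (u₁₁, u₂₁) and estimates the conditional mean and variance of optimal profit:

import numpy as np

rng = np.random.default_rng(0)
a_bar = np.array([[2.0, 1.0], [1.0, 3.0]])   # mean input coefficients (assumed values)
b_bar = np.array([10.0, 12.0])               # mean resource availabilities (assumed values)
sd_a, sd_b = 0.2, 1.0                        # error standard deviations (assumed values)

def conditional_profit(u11, u21, n_draws=5000):
    """Active approach: fix the allocation (u11, u21) and simulate optimal profit z*.
    Under constraints (2.2) each output uses only its allotted resource shares, so
    x_j is the largest level feasible under both of its allotted constraints."""
    shares = np.array([[u11, 1 - u11], [u21, 1 - u21]])
    z = np.empty(n_draws)
    for k in range(n_draws):
        a = a_bar + rng.normal(0.0, sd_a, size=(2, 2))
        b = b_bar + rng.normal(0.0, sd_b, size=2)
        x = np.min(shares * b[:, None] / a, axis=0)   # binding allotted resource per output
        z[k] = x.sum()                                # objective of (2.1): z = x1 + x2
    return z.mean(), z.var()

for u11, u21 in [(0.3, 0.7), (0.5, 0.5), (0.7, 0.3)]:
    m, v = conditional_profit(u11, u21)
    print(f"u11={u11:.1f}, u21={u21:.1f}:  E(z*)={m:.3f}, Var(z*)={v:.3f}")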
As a specific example consider that the vector c alone in the
objective function z = c'x is random and the risk averse DM
minimizes the risk R(z) = (x'Vx)/2, i.e.,

    min R(z) = (1/2)x'Vx
    s.t.  Ax = b                                                    (3.1)
          m'x = r

where c is distributed with mean vector m and the covariance matrix
V and x = x(U₀) is conditional on a specific choice of the allocation
matrix U₀ ∈ U. Note that we have deliberately ignored the non-
negativity constraints on x. For a fixed positive value of r and
b = b(U₀), the optimal solution x* and the associated mean variance
efficiency frontier may then be directly calculated as

    σ*² = kr² − kgr + khg + b'Q⁻¹b                                  (3.2)

where

    k = m'[V⁻¹{I − A'Q⁻¹AV⁻¹}]m,    h = m'V⁻¹A'Q⁻¹b,
    Q = AV⁻¹A',    g = b'Q⁻¹AV⁻¹m + km'V⁻¹A'Q⁻¹b

and

    x* = V⁻¹[k(r − h)m + A'Q⁻¹(b − k(r − h)AV⁻¹m)].
It is clear that the quadratic function σ*² = σ*²(r) in (3.2) specifies the
efficiency frontier as the value of r is varied in the positive orthant.
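A small numerical sketch of this frontier (illustrative data, not from the text; numpy assumed): rather than evaluating the closed-form constants k, g, h above, it solves the equality-constrained quadratic program (3.1) directly through its first-order (KKT) linear system for a grid of r values.

import numpy as np

# Illustrative problem data (assumed for the sketch): 3 outputs, 1 resource constraint.
V = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.08, 0.03],
              [0.01, 0.03, 0.12]])      # covariance matrix of c
m = np.array([1.0, 1.2, 0.9])           # mean of c
A = np.array([[1.0, 2.0, 1.5]])         # resource-use coefficients
b = np.array([10.0])                    # allocated resource level b(U0)

def frontier_point(r):
    """Solve min (1/2)x'Vx s.t. Ax = b, m'x = r via the KKT system."""
    C = np.vstack([A, m])                       # stack both equality constraints
    K = np.block([[V, -C.T],
                  [C, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(3), b, [r]])
    sol = np.linalg.solve(K, rhs)
    x = sol[:3]
    return x, x @ V @ x                         # optimal x* and sigma*^2

for r in [9.0, 10.0, 11.0, 12.0]:
    x_star, var = frontier_point(r)
    print(f"r = {r:4.1f}   sigma*^2 = {var:.4f}   x* = {np.round(x_star, 3)}")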

Some implications of the efficiency frontier (3.2) may be noted. First,
as the level r of expected profits increases, the optimal risk σ*² may
tend to rise or fall, depending on the probability distribution of vector
c. Usually however after a point the higher r will lead to higher
optimal risk σ*², i.e., this is the case when the value of k is positive and
σ*² is strictly convex in r. Hence the value r* of r at which σ*² is
minimized may be easily calculated as:

    r* = g/2,  when  σ*²(r*) = kgh + b'Q⁻¹b − kg²/4.                (3.3)
Secondly, let us consider two allocation processes U₁, U₂ belonging
to U which transform the resource vector b to b(1), b(2) respectively.
Let σ*²(i) = σ*²(b(i)) be the corresponding variance frontier (3.2) for
i = 1, 2. If the two variance frontiers intersect and it holds that
σ*²(1) < σ*²(2) to the right (left) in a local neighborhood, then the
allocation U₁ ∈ U may be said to dominate U₂ ∈ U in terms of lower
risk. Since the set U is compact, there must exist an allocation process,
denoted for example by U_p ∈ U, which is not dominated in some local
neighborhood D₁, i.e., σ*²(p) ≤ σ*²(i) for U_i ∈ U, i = 1, 2, ...,
p − 1. In this case the allocation U_p ∈ U is efficient in D₁ in the
sense that it minimizes the optimal risk for a given value of r. Thirdly,
the allocation matrices U ∈ U of the active approach may be used to
select m out of n outputs (n > m), so that we have a total of K
selections where K is the combination of n outputs taking m at a
time. Let x(j), j = 1, 2, ..., K denote a particular selection with
parameters V(j), m(j), A(j) and b(j). Then we can set up the
following optimization problem:

    Max R(z) = (1/2)x(j)'V(j)x(j)
    s.t.  B(j)x(j) = b(j)                                           (3.4)
          m(j)'x(j) = r,   all j = 1, 2, ..., K

where B(j) is the basis matrix in A(j) = [B(j) N(j)] with N(j) as the
nonbasic matrix. Let x*(j) be the optimal solution of (3.4) with the
maximal risk level denoted by σ*²_j = σ²(x*(j)) = x*(j)'V(j)x*(j).
Since K is finite, we have at least one selection x*(0) with the property
min_j σ*²_j = σ*²₀ for j = 1, 2, ..., K, where x*(0) is the minimax
solution. This minimax solution is robust in the sense that it minimizes
the maximum risk. Since the allocation set U is compact, this minimax
solution must always exist. It has two features of caution.
One is that it defines a best-of-the-worst selection rule, i.e., for each
selection j it considers the most pessimistic outcome. The other
feature is that it is directly comparable with the LP optimal solutions
where at most m of the n outputs can be positive. Clearly, if more
than m outputs are to be included in every selection, we can redefine
the vector x(j) in (3.4) as having m + i dimensions (i = 1, 2, ...,
n − m) and solve the optimization problem analogous to (3.4), where
the total number of selections would vary of course. Let x*(0, m + i)
be the minimax solution of order m + i where i = 1, 2, ..., n − m,
and σ*²₀(m + i) be the associated risk level. Then the effect of
diversification may be measured by the relative decrease of
σ*²₀(m + i) as i is increased from 1 to (n − m).
One disadvantage of the above formulation of a robust solution is
that it cannot easily be generalized to the case when the parameters
A, b other than the vector c are also random. For the general case of
the LP model max z(s) = c(s)'x, x ∈ R(s) = {x | A(s)x ≤ b(s),
x ≥ 0}, where s ∈ W is a particular state of nature, we consider the
game-theoretic equivalents:

    min λ(s)
    s.t.  Σ_{j=1}^{n} a_ij(s)q_j ≤ λ(s)b_i(s),   i = 1, 2, ..., m
                                                                    (4.1)
          Σ_{j=1}^{n} q_j = 1,   q_j ≥ 0,   j = 1, 2, ..., n

and

    max μ(s)
    s.t.  Σ_{i=1}^{m} p_i a_ij(s) ≥ μ(s)c_j(s),   j = 1, 2, ..., n
                                                                    (4.2)
          Σ_{i=1}^{m} p_i = 1,   p_i ≥ 0,   i = 1, 2, ..., m

Here q = (q_j) and p = (p_i) are the two mixed strategy vectors corresponding to the
primal-dual variables of the LP model. For every s ∈ W₁ we denote
the optimal solutions of (4.1) by q°(s) = (q°_j(s)), λ°(s) and those of
(4.2) by p°(s) = (p°_i(s)), μ°(s). Since the set X* of optimal feasible
vectors is assumed to be nonempty, we have λ°(s) = μ°(s) for every
given s ∈ W₁ by the strong duality principle. In the passive approach
for example we wait for the sample observations on s; thus we may
form the mean estimates ā_ij, b̄_i, c̄_j and solve for the optimal solutions
denoted by q°, λ°, p°, μ°. The risk associated with the solution q° may
be evaluated from the optimal basis equation

    Σ_{j=1}^{n} (ā_ij + ε_ij)q°_j = λ°(b̄_i + ε_i)                   (4.3)

where the errors ε_ij, ε_i are small perturbations around the mean
values. Clearly the conditional variance V(λ°|q°) of λ° given q° = (q°_j)
can be evaluated in principle and if this is too high, the DM has to
select another terminal decision vector different from q°.
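To make the construction concrete, the following sketch (illustrative mean coefficients, assumed for the example; scipy assumed available) solves the game form (4.1) at the mean values with linprog, treating (q_1, ..., q_n, λ) as the decision variables:

import numpy as np
from scipy.optimize import linprog

# Mean coefficients (assumed illustrative values): 2 resources, 3 activities.
a_bar = np.array([[2.0, 1.0, 3.0],
                  [1.0, 2.0, 1.0]])
b_bar = np.array([8.0, 6.0])

m, n = a_bar.shape
# Variables: (q_1,...,q_n, lam).  Objective: minimize lam.
c = np.zeros(n + 1); c[-1] = 1.0
# Constraints (4.1):  sum_j a_ij q_j - lam * b_i <= 0  for each i.
A_ub = np.hstack([a_bar, -b_bar.reshape(-1, 1)])
b_ub = np.zeros(m)
# sum_j q_j = 1
A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * n + [(None, None)]          # q_j >= 0, lam free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
q0, lam0 = res.x[:n], res.x[-1]
print("q° =", np.round(q0, 4), "  λ° =", round(lam0, 4))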
The active approach introduces allocation mechanisms through the
resource matrices u ∈ U so that each b_i(s) is changed to β_i(s) =
b_i(s|u) and hence b̄_i changed to β̄_i. Thus for each allocation mecha-
nism u = u_t it introduces a sequence of optimal solutions denoted as
(q°(t), λ°(t)), for which a sequence of conditional variances V°(t) =
V(λ°(t)|q°(t)) can be calculated as in (4.3). Let T be a finite set of
selections t = 1, 2, ..., T of the allocation mechanism u_t ∈ U. The
mean variance efficiency frontier is then specified by (λ°(t), V°(t);
t = 1, 2, ..., T). Knowledge of this frontier may be utilized by the
DM in several ways. First, he may apply the method in (3.3) if there is
a region over which V°(t) is a strictly convex function of λ°(t).
Secondly, the observed risk premium in the market may be used to
determine an equilibrium point on the mean variance frontier, just as
the capital market line in portfolio theory identifies a market equi-
librium. Thirdly, he may evaluate the higher conditional moments of
the distribution of λ°(t) over and above the mean and the variance
and decide on the final solution vector. This will of course depend on
his risk attitudes and the form of the conditional probability distribu-
tion of the payoff λ°(t). If he has any subjective goal, A say, then the
criterion of minimizing the distance d²(t) = (A − λ°(t))²/V°(t) over
t = 1, 2, ..., T would help in selecting a terminal decision vector
which is very close to his subjective goal.
Let us now consider two sets W₀ and W_A to which the state s can
belong. The set W₀ is a reference set, whereas W_A is a comparison set;
both the sets are assumed to satisfy the optimal basis equations (4.3)
with m-dimensional vectors q°. The strategy vector q°(W₀) and the
associated value λ°(W₀) are then said to be robust relative to the
comparison set W_A, if the optimality of [q°(W₀), λ°(W₀)] is not altered
by any choice from the set W_A. If the alternative set W_A coincides with
the entire set W of states of nature, then q°(W₀) becomes absolutely
robust. Restricting ourselves to the mean variance frontiers we can
characterize the robust solutions very easily. For example, denote the
two efficiency frontiers as

    V°(W₀) = f(λ°(W₀)) = f(λ₀)
    V°(W_A) = f(λ°(W_A)) = f(λ_A).

Normally they would be strictly convex functions as increasing the
mean would augment the variances also. If the two curves intersect at
some value of λ°, then λ°(W₀) would be optimal (i.e., relatively robust) if
V°(W₀) ≤ V°(W_A) to the right (or left) of the intersection point.
λ°(W₀) would be absolutely robust if the curve f(λ₀) lies below that of
f(λ_A) for all relevant values of λ°.

3. EFFICIENCY IN STOCHASTIC SYSTEMS

We consider now an application to the recent field of efficiency


measurement by data envelopment analysis (DEA) pioneered by
Charnes, Cooper and his associates [1, 2]. The DEA model measures
the relative efficiency of a set of decision-making units (DMUs) by
stipulating that a given DMU is not efficient in producing its outputs
from given amounts of inputs, if it can be shown that some other
DMU or, combination of DMUs can produce more of some output
without producing less of any other output and without utilizing more
of any input. This technique is most useful for application to public
sector DMUs, where the concept of profit cannot be defined due to

lack of input and output prices. Let x_ij be the quantity of input
i = 1, 2, ..., m and y_j be the single output for the j-th DMU, where
j = 1, 2, ..., n. Then the DEA model may be simply represented by
the following LP model:

    min_β g_k = Σ_{i=1}^{m} β_i x_ik

    subject to
                                                                    (5.1)
    Σ_{i=1}^{m} β_i x_ij ≥ y_j,   j = 1, 2, ..., n

    β_i ≥ 0,   i = 1, 2, ..., m

By letting the reference DMU index k vary over 1, 2, ..., n one may
determine n optimal weights β*(k), k = 1, 2, ..., n for the inputs.
For the more general case of s outputs (s ≥ 1), the DEA model
becomes

    min_{β,α} g_k = Σ_{i=1}^{m} β_i x_ik

    subject to

    Σ_{i=1}^{m} β_i x_ij ≥ Σ_{r=1}^{s} α_r y_rj,   j = 1, 2, ..., n          (5.2)

    Σ_{r=1}^{s} α_r y_rk = 1

    β_i ≥ 0,  i = 1, 2, ..., m;   α_r ≥ 0,  r = 1, 2, ..., s.

In empirical applications the observed input and output quantities
denoted by X = (x_ij), Y = (y_j) are generally positive for all i and j. In
such cases the LP model above is always feasible, as is obvious from
the dual:

    max_λ z = Σ_{j=1}^{n} y_j λ_j

    subject to
                                                                    (5.3)
    Σ_{j=1}^{n} x_ij λ_j ≤ x_ik,   i = 1, 2, ..., m

    λ_j ≥ 0,   j = 1, 2, ..., n.

Hence there must exist an optimal feasible vector β* = β*(k) for any
given k.
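A compact numerical illustration of (5.1) (made-up data for four DMUs with two inputs and one output, and scipy's linprog, both assumptions of this sketch) computes the weight vector β*(k) for each reference unit k:

import numpy as np
from scipy.optimize import linprog

# Illustrative data (assumed): rows = inputs i, columns = DMUs j.
X = np.array([[4.0, 2.0, 3.0, 5.0],
              [3.0, 5.0, 2.0, 4.0]])
y = np.array([1.0, 0.9, 1.2, 1.1])       # single output of each DMU
m, n = X.shape

for k in range(n):
    # Model (5.1): min beta'x_k  s.t.  beta'x_j >= y_j (all j),  beta >= 0.
    res = linprog(c=X[:, k],                 # objective coefficients x_ik
                  A_ub=-X.T, b_ub=-y,        # flip sign: -X'beta <= -y
                  bounds=[(0, None)] * m,
                  method="highs")
    beta = res.x
    # The k-th DMU is efficient when the optimal value g_k equals y_k
    # (cf. the potential-output discussion below).
    print(f"DMU {k + 1}:  g_k = {res.fun:.3f}   y_k = {y[k]:.3f}   beta* = {np.round(beta, 3)}")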
The primal and dual problems may be written in vector-matrix
terms as

    min_β g_k = x_k'β
    subject to  X'β ≥ Y                                             (5.4)
                β ≥ 0

and

    max_λ z = z_k = Y'λ
    subject to  Xλ ≤ x_k                                            (5.5)
                λ ≥ 0.

Here prime denotes transpose, x_k is the m-element column vector
(x_ik), X = (x_ij) the m by n input-output matrix and Y = (y_j) is the n-
element column vector. The decision variables are the column vectors
β and λ with m and n elements respectively. Two types of interpreta-
tions have been given for the decision vectors such as β. One is that
they represent a set of nonnegative weights which provide the effi-
ciency ranking and comparison of DMUs in the cluster of n units. A
second approach is to interpret the β* coefficients as the parameters
of suitable production frontiers. Thus if the k-th DMU is efficient in
terms of the model (5.1) we would have y_k = Σ_{i=1}^{m} β*_i x_ik = y*_k, where
β* ≥ 0; but for any other DMU (say j), which is not efficient, we
would have y*_j = Σ_{i=1}^{m} β*_i x_ij > y_j, so that its relative inefficiency is
measured by the ratio y_j / y*_j, where y*_j may be termed the potential
output for the j-th DMU. One aspect of the DEA method not yet fully
explored is the manner in which the stochastic variations in input-
output data affect the parameter β* used in efficiency measurement.
Two methods may be suggested for defining a robust solution vector
such as β*. One is to replace the objective function of (5.4) by
min ḡ = x̄'β, where x̄ is the mean vector of inputs with x̄_i =
(1/n) Σ_{j=1}^{n} x_ij. The LP model then takes the form:

    min_{β∈R} ḡ = x̄'β;    R = {β : X'β ≥ Y, β ≥ 0}                   (5.6)

Let β* be the optimal solution of (5.6) and assume that there are data
variations (i.e., ε_k) around the mean, i.e., x_k = x̄ + ε_k, within a certain
neighborhood N_e such that the optimality of β* is preserved for all
ε_k ∈ N_e. Then the solution β* is robust in N_e and it may be used to
rank the n DMUs in order of efficiency. Several implications of
characterizing such a robust solution β* have been discussed by
Sengupta [4, 5] and applied empirically.
The second method is to apply a minimax procedure to estimate the
production frontier from the observed data set (X, Y), on the analogy
of a regression procedure based on the least squares (LS) approach.
Let e_j(β) = Σ_{i=1}^{m} β_i x_ij − y_j be the error in y_j associated with a vector
β; then the Chebyshev method of estimation consists in finding a
vector point β⁰ at which the minimax level L(β⁰) of loss is achieved,
where

    L = L(β⁰) = min_β max_{1≤j≤n} |e_j(β)|.                          (6.1)

So long as the errors e_j(β) are bounded, the solution β⁰, also known
as the Chebyshev solution, always exists, even when the inequalities
e_j(β) ≥ 0, j = 1, 2, ..., n are mutually inconsistent. The objec-
tive function (6.1) is closely related to that of the least absolute
value (LAV) method of estimation which minimizes the loss function
Σ_{j=1}^{n} |e_j(β)| with respect to β. Let β̂ be an LAV estimator. It is
known [3] that the LAV estimator has a significantly smaller standard
error than the LS estimator for any regression model with high-
kurtosis disturbances and that the error (β̂ − β) in the LAV estimator
is approximately distributed as a normal variate with mean zero and
covariance matrix k²·(X'X)⁻¹ where k²/n is the variance of the
median of a sample of size n from the error distribution and X is the
data matrix.

On using the minimax objective function (6.1) we obtain the trans-
formed LP model

    min β_{m+1}

    subject to  Σ_{i=1}^{m} β_i x_ij − y_j ≤ β_{m+1},   j = 1, 2, ..., n        (6.2)

                β_i ≥ 0,   i = 1, 2, ..., m + 1

where β_{m+1} is a nonnegative scalar. One could also adjoin the
additional constraints such as

    e_j(β) = Σ_{i=1}^{m} β_i x_ij − y_j ≥ 0,   j = 1, 2, ..., n.                (6.3)

It is clear that if the feasible set R = {β : e_j(β) ≥ 0, j = 1, 2, ..., n}
is non-empty, then an optimal solution of (6.2) must exist. Let β̂ be an
optimal solution of (6.2). Then β̂ is robust in the same sense as the
LAV estimator mentioned before. Moreover it preserves the linearity
of the DEA model, unlike the LS approach which would make it
quadratic. By adjoining the condition Σ_{i=1}^{m} β_i = 1 to (6.2) one could
interpret the m-tuple vector β as mixed strategies of a suitable zero-
sum two-person game.
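The transformed model (6.2)-(6.3) is again an ordinary LP; a minimal sketch (randomly generated illustrative data and scipy, both assumptions of the sketch) is:

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 2, 12                                  # inputs and DMUs (illustrative sizes)
X = rng.uniform(1.0, 5.0, size=(m, n))        # input data x_ij
y = 0.8 * X[0] + 0.5 * X[1] + rng.uniform(0.0, 0.6, n)   # single outputs y_j

# Variables: (beta_1, ..., beta_m, beta_{m+1}); minimize beta_{m+1}.
c = np.zeros(m + 1); c[-1] = 1.0
# (6.2):  sum_i beta_i x_ij - y_j <= beta_{m+1}
A1 = np.hstack([X.T, -np.ones((n, 1))]); b1 = y
# (6.3):  sum_i beta_i x_ij - y_j >= 0   <=>   -sum_i beta_i x_ij <= -y_j
A2 = np.hstack([-X.T, np.zeros((n, 1))]); b2 = -y
res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
              bounds=[(0, None)] * (m + 1), method="highs")
beta_hat, cheb_level = res.x[:m], res.x[-1]
print("beta_hat =", np.round(beta_hat, 3), "  minimax error =", round(cheb_level, 4))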
To show the use of the active approach we now consider the dual
formulation (5.5) rewritten as a maximin model:

    max λ_{n+1}

    s.t.  x_i − Σ_{j=1}^{n} x_ij λ_j ≥ λ_{n+1}
                                                                    (6.4)
          x_i = u_i Σ_{j=1}^{n} x_ij,   u_i ≥ 0,   Σ_{i=1}^{m} u_i = 1

          i = 1, 2, ..., m;   j = 1, 2, ..., n.

Here u_i is the nonnegative proportion of total input of type i. Clearly
if u_i = 1/n, then x_i = x̄_i is the mean level of input i. In general the m-

tuple vector x can be represented in the homoscedastic case by

    x = x̄ + e                                                       (6.5)

where x̄ = (x̄_i) is the mean (or median) input vector (i.e., for the mean,
x̄_i = (1/n) Σ_{j=1}^{n} x_ij) and e is a random vector which may follow any of
several multivariate probability distributions. If the probability distri-
bution of e is such that it has mean zero and a finite variance-
covariance matrix V_e, then one can easily characterize how closely the
input vector points x are scattered around the mean x̄. This is done
through the multivariate distance function D² = D²(x; x̄, V_e) defined
as follows:

    D² = (x − x̄)'V_e⁻¹(x − x̄)                                        (6.6)

where it is assumed that the variance-covariance matrix V_e is non-
singular. Thus if all the vector points x in the cluster are very close to
the mean, the value of D² will be close to zero. The higher the value
of D², the farther the points are scattered away from the mean level.
The role of the variance-covariance matrix V_e is more subtle however.
It has two practical implications. One is that it acts as a filtering
device, whereby with noisy data any difference x_i − x̄_i is deflated or
corrected in terms of its standard errors, so that the squared distance
between x and x̄ can be more reliably computed. Secondly, if the
underlying probability distribution of the errors e can be assumed to
be normal (e.g., under certain conditions by the central limit theorem),
then the random quantity D² in (6.6) is known to follow a chi-square
distribution with m degrees of freedom if the variance-covariance
matrix V_e is known. Thus one may choose a level of significance α
and determine a suitable positive scalar c = c_α such that x̄ defines a
confidence region

    R = {x : (x − x̄)'V_e⁻¹(x − x̄) ≤ c_α}                             (6.7)

in the sense that for any set of points x belonging to region R we have
Prob(x ∈ R) = α. By varying α one can therefore vary the size of
the neighborhood around x̄. This formulation now represents the
industry manager's efficiency problem comprising n units or enter-
prises in an explicit form. Thus he may choose points which are close
to x̄ in the sense defined by (6.7), thus generating new solutions which
are either the same as before or different. In the first case we have
robust solutions within the neighborhood defined by (6.7). In the

second case we introduce a decision region N(x̄; x) around x̄ as

    N(x̄; x) = {x : x_a ≤ x ≤ x_b}                                    (6.8)

where x_a, x_b are two suitable fixed points such that for any point x
belonging to N(x̄; x), the original solution vector β* associated with
mean inputs x̄ remains optimal. Now let B be the intersection of the
two sets R and N(x̄; x), where x_a, x_b are fixed and α is varied in (6.7)
such that B is a nonempty set. It is easy to show [5] that such a
construction is always feasible for a given solution vector β*. Thus a
robust solution vector could be specified through suitable variations in
the allocation ratios u_i.
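A short sketch of the screening device in (6.6)-(6.7) follows (illustrative data; numpy and scipy.stats assumed available; the sample covariance is used in place of the known V_e, an assumption of the sketch):

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, m_inputs = 30, 2                                  # DMUs and inputs (illustrative)
X = rng.normal(loc=[5.0, 8.0], scale=[1.0, 2.0], size=(n, m_inputs))   # rows = input vectors x

x_bar = X.mean(axis=0)                               # mean input vector
V_e = np.cov(X, rowvar=False)                        # sample variance-covariance matrix
V_inv = np.linalg.inv(V_e)

alpha = 0.95
c_alpha = chi2.ppf(alpha, df=m_inputs)               # cutoff for the region (6.7)

d2 = np.einsum("ki,ij,kj->k", X - x_bar, V_inv, X - x_bar)   # D^2 of (6.6) for each point
inside = d2 <= c_alpha
print(f"c_alpha = {c_alpha:.3f};  {inside.sum()} of {n} input vectors fall in the region R")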
An important implication of the active approach formulated in this
way is that it can be used to characterize a nonempty core of the many-
player game associated with (6.4). Assume that the n DMUs form a
set N = {1, 2, ..., n} of n players who can form coalitions (or
mergers) of different sizes. Let S be a coalition of less than n players
and N is the grand coalition of all the n players. For the LP problem

    max z = Σ_{j∈S} y_j λ_j

    s.t.  Σ_{j∈S} x_ij λ_j ≤ x̄_i,   i = 1, 2, ..., m                  (7.1)

          λ_j ≥ 0,   j ∈ S

we may form the characteristic function v(S) for the coalition S ⊂ N
as

    v(S) = max Σ_{j∈S} y_j λ_j

    s.t.  Σ_{j∈S} x_ij λ_j ≤ x_i(S),   i = 1, 2, ..., m               (7.2)

          λ_j ≥ 0,   j ∈ S

where x_i(S) is a particular allocation of input i from the total amount
Σ_{j∈S} x_ij the coalition has. It is easy to show that this input-allocation

game has a nonempty core and hence a nonempty set of allocations.
Thus in practical terms one could analyze the various optimal solu-
tions resulting from the merger of n DMUs which form coalitions.
Thus let S₁ and S₂ be two mutually disjoint coalitions of the grand
coalition N; then so long as v(S₁) > v(S₂) > 0, one can improve the
total payoff by a reallocation process. If the payoffs for the grand
coalition N and any other coalition S are v(N) and v(S) respectively,
the associated optimal vectors (λ*(N), β*(N)) and (λ*(S), β*(S)) may
be used to characterize the efficiency ranking and measurement. The
implications of additional information in the form of new data on
(x_ij, y_j), as they affect these solution vectors (λ*, β*) and the respective
payoffs v(S) and v(N), may be evaluated. Such evaluations are helpful
in (a) calculating the incentives for forming coalitions by the different
DMUs, (b) estimating criteria of fairness and equity in deciding the
allocation rules of the active approach, and (c) allowing risk
aversion by different coalitions or their members.
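A brief sketch of how v(S) in (7.2) could be computed for candidate coalitions (illustrative data and a simple proportional input-allocation rule, both assumptions of this sketch; scipy assumed available):

import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0, 3.0, 5.0],      # x_ij: rows = inputs, columns = DMUs
              [3.0, 5.0, 2.0, 4.0]])
y = np.array([2.0, 1.5, 1.8, 2.5])       # outputs y_j

def v(S, share=0.8):
    """Characteristic function (7.2) for coalition S: max sum_{j in S} y_j lam_j
    s.t. sum_{j in S} x_ij lam_j <= x_i(S), lam_j >= 0.  Here x_i(S) is taken as a
    fixed share of the coalition's own total input (an assumed allocation rule)."""
    S = list(S)
    x_S = share * X[:, S].sum(axis=1)
    res = linprog(c=-y[S], A_ub=X[:, S], b_ub=x_S,
                  bounds=[(0, None)] * len(S), method="highs")
    return -res.fun

for S in [(0, 1), (2, 3), (0, 1, 2, 3)]:
    print(f"v({tuple(j + 1 for j in S)}) = {v(S):.3f}")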

4. CONCLUDING REMARKS

We may make a few comments on the various robust solutions we


have proposed by using the active approach. First, the game-theoretic
models (6.2) and (6.4) which provide Chebyshev solutions in an LP
framework can be further analyzed for sensitivity to stochastic varia-
tions in the data matrix (x_ij). In particular the Bayesian methods of
updating would prove helpful, although they may lead to nonline-
arities. Second, a quadratic method of obtaining a Chebyshev solution
may also be applied by noting that m out of n DMUs (i.e., m < n)
are selected by the LP models. Let K be the total number of such
selections and a typical selection be indexed by k. Then the optimal
basis for any selection k for example would appear in case of (5.1)
m

L f31(k)xjj (k) = Yj(k);


j-1
j = 1,2, ... , m.

The mean and variance of optimal output for selection k would be


fl(k) = ~;:1 f3;ij(k) and a 2(k) = i'(k)V(k)i(k) where V(k) is the
variance-covariance matrix of vectors f3* = (f3i). We now define the
108

minimax model
Min flo
kEK

subject to a 2( k) ~ flo
#(k) ~ r
where flo and r are scalar positive quantities. By using a distance
function
d(k) = i(k), V(k)i(k) - wfl*'(k)x(k)
this may be put in the form
Min Max d(k)
p' i

where w is a nonnegative weight preassigned. By analogy with the


theory of optimal statistical design, the above solution may be termed
the Chebyshev minimax solution.
Third, the use of the active approach to define an imputation in the
core may be easily related to the organizational system of hierarchical
decision levels and such problems are much more important in
dynamic problems of intertemporal optimization.

REFERENCES

[1] Charnes, A. and Cooper, W. W. (1985), 'Preface to topics in data envelopment
analysis', Annals of Operations Research 2: 59-94.
[2] Charnes, A., Cooper, W. W., and Rhodes, E. (1978), 'Measuring the efficiency
of decision-making units', European Journal of Operational Research 2: 429-444.
[3] Dielman, T. and Pfaffenberger, R. (1982), 'Least absolute value estimation in
linear regression: a review', Studies in Management Sciences 19: 31-52.
[4] Sengupta, J. K. (1986), Stochastic Optimization and Economic Models,
Dordrecht: D. Reidel Publ. Co.
[5] Sengupta, J. K. (1986), 'Data envelopment analysis for efficiency measurement
in the stochastic case', to be published in Computers and Operations Research.
[6] Tintner, G. (1960), 'A note on stochastic linear programming', Econometrica
28: 490-495.
[7] Tintner, G. and Sengupta, J. K. (1972), Stochastic Economics: Stochastic
Processes, Control and Programming, New York: Academic Press.
Efficient Portfolios: Risk Shares and Monetary Policy

PAUL VAN MOESEKE


Massey University, Palmerston North, New Zealand

Abstract. The paper first groups some key results in the choice of E, V efficient
portfolios by homogeneous programming. The optimal portfolio and the risk share
accruing to the financial intermediary are determined by the rate of interest. This
portfolio solves a two-person game. Existence theorems are established for Nash-
type equilibria in imperfect capital markets. This is done first for financial institu-
tions acting on given monetary policy parameters. The result is then generalized to
allow interaction with the instruments of monetary policy.

1. INTRODUCTION: THE TRUNCATED MINIMAX

Among the first, and to my knowledge the very first, contributions to


what is now the vast field of Stochastic Programming, were the
seminal papers by the late Professor Gerhard Tintner (1955a, b;
1957a, b; 1960) on stochastic linear programs, i.e. linear programs
with random coefficients. He proposed both the passive approach,
where the distributions of solutions are derived from those of the
coefficients; and the active approach, where explicit decision rules
intervene, directly or indirectly, in the selection of decisions with
uncertain outcomes.
The fact that Stochastic Programming is now recognized as a full-
fledged subdiscipline (No. 90 C15) in the Mathematics Subject Classi-
fication of Mathematical Reviews and Zentralblatt für Mathematik
bears witness to the reach of Tintner's insights. Compendia of con-
tributions to stochastic programming appear in the literature from
time to time, see inter al. Kolbin (1971), Bereanu (1972), Dempster
(1980), and Sengupta (1980). The scope of Tintner's oeuvre is of
course much wider than the economics of uncertainty and colleagues
far more qualified than myself have gathered here in print to illustrate
the full breadth of his contribution. See also the surveys by Fox (1969,
1987) and Sengupta (1969).

Perhaps I can pay no better tribute to my mentor, albeit on a more


modest level, than to exemplify his impact on his students by retracing
the path of my own development in this area. My introduction to the
field dates back to my M.S. thesis at Iowa State University under
Tintner's supervision in 1959-60. His suggestion to pursue the active
approach led to the 'truncated minimax' decision criterion, formulated
in that thesis (Moeseke, 1960), formally proposed at the 1962
Econometric Society Meeting (Moeseke, 1963) and taken up by
Baumol (1963) in the context of portfolio selection. I was later to
collaborate with Tintner on the passive approach to base duality in
parametric programming (Tintner and Moeseke, 1964). It was typical
of Gerhard Tintner's generous attitude to former students that
throughout the further development of these and related ideas we kept
in touch until my last visit to the Vienna Institute for Advanced
Studies in 1980.
The approach of my M.S. thesis was further developed in my 1964
Ph.D. thesis at Yale University under the supervision of the late
Professor Tjalling C. Koopmans and published in the Yale Economic
Essays (Moeseke, 1965a). If for f: X × Y → R only the choice of the
first argument x ∈ X is determined by the decision maker, while
y ∈ Y is selected by nature, most existing criteria are indeterminate:
the standard minimax criterion e.g. is indeterminate in 'games against
nature' whenever

    min_y f(x, y) = −∞,   all x                                     (1.1)

and impractical whenever the left-hand side in (1.1) is extremely unfavourable
for all x.
While in the former case a choice of x is pointless, in the latter case
the choice hardly matters. If Y is measurable and the distribution of f,
now considered as a random function f(x), can be derived for all x,
suppose for simplicity that f(x) is normally distributed with first and
second moments denoted Ex and σ²x; then the truncated minimax

    φ(x) ≡ Ex − mσx                                                 (1.2)

is a maximand 'truncating' minima at a confidence level given by the
'risk attitude' m ≥ 0: comparing decisions x by (1.2) lifts the
indeterminacy of (1.1). The risk attitude is the weight m attached to
the standard deviation relative to expected returns.

The criterion is equivalent to

    max_{k∈K} k

where

    K ≡ {k | Prob(f(x) ≥ k) ≥ a;  x ∈ X}

and, for normal distributions for instance,

    a = (2π)^(−1/2) ∫_{−∞}^{m} exp(−t²/2) dt.

Again, the criterion is evidently not limited to normal distributions
since the probability statement expressed by the Bienaymé-Tcheby-
cheff inequality

    Prob[|f(x) − Ex| ≥ mσx] ≤ 1/m²                                  (1.3)

can always be made, cf. Leblanc and Moeseke (1979) on minimizing
the probability of failure by (1.2).
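A toy illustration of (1.2)-(1.3) (made-up candidate decisions and numbers, all assumptions of this sketch; numpy not required):

# Candidate decisions with assumed expected return Ex and standard deviation sigma_x.
candidates = {"x1": (10.0, 6.0), "x2": (8.0, 2.0), "x3": (9.0, 3.5)}
m = 1.5                                   # risk attitude (cf. the empirical values cited below)

def phi(Ex, sigma):
    return Ex - m * sigma                 # truncated minimax maximand (1.2)

best = max(candidates, key=lambda name: phi(*candidates[name]))
for name, (Ex, s) in candidates.items():
    # By (1.3), P(f <= Ex - m*sigma) <= 1/m^2 regardless of the distribution.
    print(f"{name}: phi = {phi(Ex, s):5.2f}   Chebyshev bound on P(f <= phi) = {1 / m**2:.3f}")
print("chosen decision:", best)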
To avoid confusion, and since usage is not uniform, we note
explicitly that the term '(truncated) minimax' refers to maximization in
X over (truncated) minima in Y, rather than the other way around.
(This is the more conservative attitude: if X is a polytope, for instance,
taking a truncated minimum in Y over maximizers in X can be limited
to the comparison of vertices in x.) There is, of course, an immense
literature on decision criteria under risk. For a recent survey and
axiomatic approach see Yaari (1984, 1985).
Sections 2, 3 summarize and group results allowing the selection of
efficient and optimal portfolios by the truncated minimax. Section 4
adapts the Kakutani theorem to establish the existence of equilibrium
in imperfect capital markets where optimal portfolios of financial
institutions are determined interdependently and depend on monetary
policy as well.

2. STOCHASTIC LINEAR AND LINEAR HOMOGENEOUS PROGRAMS

We shall consider the standard linear program


    max_x cx                                                        (S)

where

    X = {x ≥ 0 | Ax ≤ b}

and where only the c vector is random, with known (subjective)


distribution. By the truncated-minimax criterion the program generates
another:
max(Ex - max) (H)
x

where Ex is short for (Ec)x and ax for (XVX)I!2 while V is the


covariance matrix of the coordinates of c.
The truncated minimax has transformed the stochastic linear pro-
gram (S) into a linear homogeneous program (H) so that both the
duality theorems of homogeneous programming (Moeseke, 1965a,
1974; Eisenberg, 1963; Leblanc and Moeseke, 1976) and the relevant
algorithms (Moeseke, 1965a; Dreze and Moeseke, 1974) apply.
By the duality theorem of homogeneous programming, for any
primal solution x* and any dual solution v*,

E_{x^*} - m\sigma_{x^*} = v^* b

or

E_{x^*} = v^* b + m\sigma_{x^*}.  (2.1)

In homogeneous, as in linear, programming, revenue is distributed
exactly over factor rewards. For the typical production program with
revenue E_{x*}, resources b and resource rents v*, under competitive
conditions the latter tend to coincide with market factor rewards, so
that by (2.1) the term mσ_{x*} may be interpreted as the return to a new
factor, the risk-bearing one. Institutionally this may be an insurance
company, a risk-bearing intermediary, or the entrepreneur himself.
Furthermore,

m = (E_{x^*} - v^* b)/\sigma_{x^*}

so that the risk attitude m can, in principle, be measured objectively
as the coefficient of variation of net returns.
Empirically plausible orders of magnitude found for m were 1.5 for
Midwest farmers and 0.5 for sharemarket investors; cf. Moeseke
(1965a) and Moeseke and Hohenbalken (1974).
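
As a numerical illustration, the following minimal sketch solves a small instance of program (H) with a general-purpose nonlinear solver rather than the homogeneous-programming algorithms cited above; the data Ec, V, A, b and the risk attitude m are purely illustrative assumptions, not values from the chapter.

```python
# A minimal sketch of program (H):  max_x (Ec)x - m*sigma_x  s.t.  Ax <= b, x >= 0,
# where sigma_x = sqrt(x'Vx).  All data below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

Ec = np.array([1.10, 1.25, 1.40])            # expected returns Ec (assumed)
V = np.array([[0.010, 0.002, 0.000],         # covariance matrix of c (assumed)
              [0.002, 0.040, 0.010],
              [0.000, 0.010, 0.090]])
A = np.array([[1.0, 1.0, 1.0]])              # a single resource row (assumed)
b = np.array([1.0])
m = 1.5                                      # e.g. the Midwest-farmer order of magnitude

def neg_phi(x):
    # negative of the maximand E_x - m*sigma_x, since the solver minimizes
    return -(Ec @ x - m * np.sqrt(x @ V @ x))

constraints = [{"type": "ineq", "fun": lambda x: b - A @ x}]   # Ax <= b
res = minimize(neg_phi, np.full(3, 1 / 3), bounds=[(0.0, None)] * 3,
               constraints=constraints, method="SLSQP")
print("x* =", res.x.round(4), "  E_x - m*sigma_x =", round(-res.fun, 4))
```

Raising m in this sketch shifts weight toward the lower-variance activities, in line with the risk-averting interpretation of (H); m = 0 reduces it to an ordinary linear program.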

A program x* is called E, σ efficient if for no x ∈ X

E_x \geq E_{x^*}, \quad \sigma_x \leq \sigma_{x^*}, \quad (E_x, \sigma_x) \neq (E_{x^*}, \sigma_{x^*}),

i.e., E can only be increased by increasing σ and σ
can only be reduced by lowering E. Clearly, the definition is equiva-
lent to Markowitz (1959) E, V efficiency. We have proved elsewhere
(Moeseke, 1966, 1968), as an application of the general theory of
efficiency, that:
(1) for all m > 0 (respectively ≥ 0), every (respectively at least
one) solution x* of (H) is an efficient portfolio;
(2) conversely, that varying m ≥ 0 reveals the complete set of
efficient portfolios as solutions to (H);
(3) for changes in m necessarily Δm Δσ_x ≤ 0, i.e. the standard
deviation of a solution to (H) is a non-increasing function of the
relative weight m attached to it.
One notes, finally, that program (H) reflects a risk-averting, expec-
tational, or risk-seeking attitude according as m > 0, = 0, or < 0;
that it is a linear program for m = 0; and that it becomes a 'safety
first' criterion as m → ∞.

3. OPTIMAL PORTFOLIOS

Program (S) is a portfolio program if X is a budget set:

X = \{x \geq 0 \,|\, ux \leq 1\}  (3.1)

where u is the n-tuple of units, x_j the fraction of the budget dollar
invested in security j, and c_j the net return (dividends plus capital
gains) per dollar's worth of the security.
In the particular case of portfolio selection we shall call optimal a
portfolio that is, in a sense to be made precise, the most cautious one
possible among efficient portfolios. We define it as a solution to the
program

\max_x \min_M \phi(x, m)  (3.2)

where

\phi(x, m) \equiv E_x - m\sigma_x,
M \equiv \{m \geq 0 \,|\, E_x - m\sigma_x \geq r\},

and r is the rate of interest on deposits. To see this, recall that by the
duality theorem of homogeneous programming any solution x* to

\max_x\, (E_x - m\sigma_x) = \max_x \phi(x, m)

satisfies

E_{x^*} - m\sigma_{x^*} = \lambda^* = \phi(x^*, m)  (3.3)

where λ* is the optimal value of the dual variable λ assigned to the
single budget constraint ux ≤ 1, i.e. the budget dollar's marginal value
to the investor for given m.
Define

G(m) \equiv \max_x \phi(x, m).  (3.4)

We showed in Moeseke (1968, 1980) that G(m) is convex non-


increasing in m. If m were sufficiently large for (3.3) to drop below r
then the marginal value of investment would be worth less, to the
investment trust (financial intermediary) with risk attitude m, than the
rate on deposits, an inconsistent position.
Hence, among efficient portfolios an optimal one allocates the
budget with maximal caution subject to the restriction that the
marginal value of the budget dollar is not exceeded by its marginal
cost, viz. the rate the financial intermediary has to pay on deposits.
Economically trivial cases are excluded by assuming that
G(m) > r for m = 0,  (a1)
G(m) < r for m large enough.  (a2)
Unless (a1) is satisfied no portfolio has a higher expected yield than r
(as G is nonincreasing) and there is no point in attracting deposits.
Further, unless (a2) is satisfied the budget dollar's marginal value by
(3.3) would exceed or equal its cost r in the estimation of even the
most conservative investors: they would consequently bid up r until
(a2) were satisfied.
As G(m) is nonincreasing and convex it follows at once from (a1),

(a2) that

G(m^+) = r

for a unique m+ such that 0 < m+ < m̄ (explicit proofs in Moeseke,
1968, 1980) and that, if x+ is a maximizer of φ(x, m+), the pair
(x+, m+) solves (3.2).
We have shown elsewhere (Moeseke, 1980) that (3.2) is a two-
person convex game with kernel φ defined on the strategy spaces
X × M̄ where M̄ = [0, m̄]. Both X and M̄ are convex, compact and
φ is concave in x, linear in m. Hence there is a solution (x+, m+),
where m+ is unique, such that

\min_{\bar M} \max_X \phi = \max_X \min_{\bar M} \phi = \phi(x^+, m^+) = r.

Empirical examples illustrating the selection of optimal portfolios


by major trust funds are found in Moeseke and Hohenbalken (1974)
and Young (1985).
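
A minimal computational sketch of this construction is given below: under assumed returns data and an assumed deposit rate r, it locates m+ by bisection, using the fact shown above that G is non-increasing; G(m) itself is evaluated by solving the budget-constrained program (H) numerically, and conditions (a1), (a2) are assumed to hold on the chosen bracket.

```python
# A sketch of locating m+ with G(m+) = r by bisection on a non-increasing G(m);
# the returns data, the deposit rate r and the bracket [0, 10] are assumptions.
import numpy as np
from scipy.optimize import minimize

Ec = np.array([1.10, 1.25, 1.40])
V = np.array([[0.010, 0.002, 0.000],
              [0.002, 0.040, 0.010],
              [0.000, 0.010, 0.090]])
r = 1.12                                     # assumed rate on deposits

def G(m):
    # G(m) = max_x E_x - m*sigma_x over the budget set {x >= 0, ux <= 1}
    obj = lambda x: -(Ec @ x - m * np.sqrt(x @ V @ x))
    cons = [{"type": "ineq", "fun": lambda x: 1.0 - x.sum()}]
    res = minimize(obj, np.full(3, 1 / 3), bounds=[(0.0, None)] * 3,
                   constraints=cons, method="SLSQP")
    return -res.fun

m_lo, m_hi = 0.0, 10.0                       # (a1), (a2) assumed to hold on this bracket
for _ in range(50):
    m_mid = 0.5 * (m_lo + m_hi)
    if G(m_mid) > r:
        m_lo = m_mid                         # G still above r: move the bracket right
    else:
        m_hi = m_mid
print("m+ is approximately", round(0.5 * (m_lo + m_hi), 4))
```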

4. IMPERFECT CAPITAL MARKETS: EXISTENCE THEOREMS

We are now in a position to state existence proofs for alternative,


relatively realistic, equilibrium models of capital markets where the
players in Model I are financial institutions, while Models II and III
include the monetary authority as well. The latter determines discount
rates, hence interest rates on deposits, as well as the reserve coeffi-
cients fixing the budget fractions the various financial institutions must
hold in 'riskless assets' (government bonds).¹
The capital markets we envisage are imperfect in the sense that
none of the three requirements of the perfect market are satisfied:
financial institutions are not infinitesimal and their investment deci-
sions measurably interact; the institutions do not have perfect informa-
tion and their subjective distributions and risk attitudes differ; finally,
entry into any category of finance companies is subject to legal and
institutional restrictions.
The existence of equilibria is proved for three models and the
method of proof, which is a generalization of the Nash theorem, is of
some interest in its own right. Investors' subjective distributions, hence

1. On the increasing interest-rate volatility of treasuries see, however, Stein (1986).



their respective mini-maximands φ_i, now depend not only on their


own, but also on their competitors' investment decisions. In Model I
interest rates ri payable on deposits are given for every financial
institution, directly or indirectly, by the discount policy of the mone-
tary authority. In Models II and III investors' subjective distributions
also depend on the instruments of monetary policy so that the
monetary authority is one of the participants in the game with no
other objective, however, than to accommodate the capital market, a
purely monetarist position.
Proofs are based on the Kakutani fixed-point theorem: an upper
semi-continuous (usc) convex correspondence
f: Y → Y

taking a convex compact Y into itself has a fixed point y*:

y* ∈ f y*.

Notation. In what follows non-indexed capitals denote Cartesian set
products over all i:

X = \prod_i X_i, all i.

Non-indexed variables, parameters and functions x, y, r, β, m, φ are
finite sequences over all i: thus

x = (x_1, x_2, x_3, x_4, ...);

and variables with a negative index −i are sequences with the i-th
member omitted: thus

x_{-i} = (x_1, ..., x_{i-1}, x_{i+1}, ...).
Financial institutions or investors are indexed i. When we say below


that r_i is the interest rate, and β_i the reserve coefficient, set by the
monetary authority for i, one may, of course, assume that the r_i,
respectively the β_i, are equal for all i within a certain category of
institutions (merchant banks, pension funds, savings and loan associa-
tions etc.).

4.1. Modell

The series r of interest rates is given by the monetary authority. Every


investor i selects an optimal portfolio. However, the subjective distri-
bution for i is now conditional upon decisions x_{-i} and we replace
(3.4) by

G_i(m_i \,|\, x_{-i}) \equiv \max_{X_i} \phi_i(x_i, m_i \,|\, x_{-i}),

where

X_i \equiv \{x_i \geq 0 \,|\, ux_i \leq 1\},

and in (a2) write G_i(m_i | x_{-i}). Define

M_i \equiv [0, \bar m_i].
PROPOSITION 4.1. There exist sequences x*, m* such that all
investors hold optimal portfolios, i.e. portfolios satisfying
\phi_i(x_i^*, m_i^* \,|\, x_{-i}^*) = \min_{M_i} \max_{X_i} \phi_i(x_i, m_i \,|\, x_{-i}^*), \quad \text{all } i.  (4.1)

Proof. Define the correspondence Γ_1: X → X as follows. For
arbitrary x ∈ X write x' ∈ Γ_1 x if

\phi_i(x_i', m_i' \,|\, x_{-i}) = \min_{M_i} \max_{X_i} \phi_i(x_i, m_i \,|\, x_{-i}), \quad \text{all } i.  (4.2)

The correspondence Γ_1 sending x into x' takes the convex compact X
into itself and is clearly convex and closed: it is consequently convex
usc. By the Kakutani theorem there is an x* ∈ Γ_1 x*, i.e. such that
(4.1) holds after replacing x_i', m_i' by x_i*, m_i* and x_{-i} by x_{-i}* in
(4.2). QED.

Observe that m_i' is uniquely determined in (4.2) for a given distribu-
tion, i.e. for given x_{-i} (cf. Section 3).
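
The following stylized sketch illustrates the fixed-point logic of Proposition 4.1 rather than its proof: three institutions with fixed, assumed risk attitudes repeatedly best-respond to one another, and the dependence of subjective distributions on x_{-i} is represented, purely for illustration, by expected returns that fall with rivals' aggregate holdings. The proposition itself only asserts existence of a fixed point and also determines the m_i* as in Section 3; neither convergence of this iteration nor that determination is captured here.

```python
# A stylized best-response iteration for Model I; the interaction term ("impact"),
# the risk attitudes m_i and the returns data are all assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

base_Ec = np.array([1.10, 1.25, 1.40])
V = np.array([[0.010, 0.002, 0.000],
              [0.002, 0.040, 0.010],
              [0.000, 0.010, 0.090]])
m_i = [0.5, 1.0, 1.5]                        # fixed risk attitudes of three institutions (assumed)
impact = 0.05                                # assumed depression of returns by rivals' holdings

def best_response(i, x_all):
    # investor i's conditional program, given the other investors' portfolios
    others = sum(x_all[j] for j in range(len(x_all)) if j != i)
    Ec = base_Ec - impact * others           # subjective distribution conditional on x_{-i}
    obj = lambda x: -(Ec @ x - m_i[i] * np.sqrt(x @ V @ x))
    cons = [{"type": "ineq", "fun": lambda x: 1.0 - x.sum()}]
    res = minimize(obj, np.full(3, 1 / 3), bounds=[(0.0, None)] * 3,
                   constraints=cons, method="SLSQP")
    return res.x

x_all = [np.full(3, 1 / 3) for _ in range(3)]
for _ in range(50):
    x_all = [best_response(i, x_all) for i in range(3)]
print([x.round(3) for x in x_all])
```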

4.2. Model II

Discount strategies r of the monetary authority, as well as investors'


decisions x, now codetermine equilibrium. Subjective distributions are
influenced by other investors as well as by the interest rates r: we
must determine two equilibrium series, viz. x* and r*.
The strategy sets R_i are defined as the closed segments

R_i \equiv [G_i(0), G_i(\bar m_i)].

PROPOSITION 4.2. There exist sequences r*, m*, x* such that all
investors hold optimal portfolios, i.e. portfolios satisfying

\phi_i(x_i^*, m_i^* \,|\, r^*, x_{-i}^*) = \min_{M_i} \max_{X_i} \phi_i(x_i, m_i \,|\, r^*, x_{-i}^*), \quad \text{all } i.  (4.3)

Proof. Define the correspondence Γ_2, taking the compact convex
(R × X) into itself, as follows. For arbitrary (r, x) ∈ (R × X) write
(r', x') ∈ Γ_2(r, x) if

\phi_i' \equiv \phi_i(x_i', m_i' \,|\, r, x_{-i}) = \min_{M_i} \max_{X_i} \phi_i(x_i, m_i \,|\, r, x_{-i}), \quad \text{all } i.

Observe that φ_i' = G_i(m_i') = r_i by the argument in Section 3. The
correspondence Γ_2 sending (r, x) into (φ', x') is convex usc. By the
Kakutani theorem there is a (r*, x*) ∈ Γ_2(r*, x*), i.e. such that (4.3)
holds. QED.

4.3. Model III

Finally, we incorporate another instrument of monetary policy, viz. the


reserve coefficient(s), into the equilibrium model by specifying that i
has to invest at least a fraction β_i of the budget in government bonds
with fixed return c_i^0. The fraction β_i ∈ [0, 1] is the reserve coefficient.
The new mini-maximand for i is

\phi_i(x_i^0, x_i, m_i \,|\, r, \beta, x_{-i}) \equiv c_i^0 x_i^0 + E_{x_i} - m_i\sigma_{x_i}  (4.4)

and the new budget set

X_i \equiv \{(x_i^0, x_i) \geq 0 \,|\, x_i^0 + ux_i \leq 1,\; x_i^0 \geq \beta_i\},

where x_i^0 is the budget fraction invested by i in government bonds.
Regarding the latter as riskless assets we derived in Leblanc and
Moeseke (1979) a number of separation theorems generalizing Tobin's
(1958) well-known result. In the same article we derived elasticity and
substitution theorems relating the different monetary instruments. We
concluded by showing that in this model our optimal portfolio solves
\max_{x_i}\; (cx_i - c_i^0)/\sigma_{x_i}.  (4.5)

It is known that the solution to (4.5) is the perfectly competitive


market portfolio (cf. Levy and Sarnat, 1972; Lintner, 1965; Sharpe,

1964), which, assuming complete agreement (homogeneous expecta-


tions) among investors, will be held by every one of them.²
In contrast, of course, Model III looks at equilibrium among
(categories of) investors and the monetary authority using various
instruments in an imperfect capital market. The strategy sets B_i are
defined as closed segments:

B_i = [0, 1], where β_i ∈ B_i, all i.

Note that distributions, hence the φ_i, now depend on decisions r, β by
the monetary authority, as well as on decisions (x_{-i}^0, x_{-i}) by other
investors.
We showed in Leblanc and Moeseke (1979) that, for any m_i ≥ 0,
if (4.4) has a maximum in X_i then it has a maximum such that
x_i^0 ∈ {β_i, 1}. Henceforth we ignore the trivial case x_i^0 = 1, which
clearly can occur only if φ_i = c_i^0: this would mean that prospects in
the security market are so poor compared with the yield c_i^0 on
governments that the investor would voluntarily invest the entire
budget in the latter, so that the reserve constraint becomes inoperative.
Hence we take x_i^0 = β_i, all i.

PROPOSITION 4.3. There exist sequences x^{0*}, x*, m*, r*, β* such
that all investors hold optimal portfolios, i.e. portfolios satisfying

\phi_i(x_i^{0*}, x_i^*, m_i^* \,|\, r^*, \beta^*, x_{-i}^*) = \min_{M_i} \max_{X_i} \phi_i(x_i^0, x_i, m_i \,|\, r^*, \beta^*, x_{-i}^*), \quad \text{all } i.  (4.6)

Proof. Define the correspondence Γ_3, taking the compact convex
(R × B × X) into itself, as follows. For arbitrary (r, β, x) in that set
write (φ', x^{0'}, x') ∈ Γ_3(r, β, x) if

\phi_i' \equiv \phi_i(x_i^{0'}, x_i', m_i' \,|\, r, \beta, x_{-i}) = \min_{M_i} \max_{X_i} \phi_i(x_i^0, x_i, m_i \,|\, r, \beta, x_{-i}), \quad \text{all } i.

2. See however Hogan et ai. (1979) for an econometric study of the effects of
banking regulations.

The correspondence Γ_3 sending (r, β, x) into (φ', x^{0'}, x') is convex
usc and by the Kakutani theorem there is an (r*, x^{0*}, x*), where
r* = φ* and x^{0*} = β*, satisfying (4.6). QED.

REFERENCES

Baumol, W. (1963), 'An expected gain-confidence limit criterion for portfolio selection', Management Science 10: 174-182.
Bereanu, B. (1972), Problèmes linéaires et stochastiques et quelques-unes de leurs applications, Lyon: Publications Econométriques.
Dempster, M. A. H. (1980), 'Introduction to stochastic programming', in Dempster, M. A. H. (ed.), Stochastic Programming, London: Academic Press, pp. 3-59.
Drèze, J. and Moeseke, P. van (1974), 'A finite algorithm for homogeneous portfolio programming', in Moeseke, P. van (ed.), Mathematical Programs for Activity Analysis, Amsterdam: North-Holland, pp. 79-91.
Eisenberg, E. (1963), 'Duality in homogeneous programming', Proceedings of the American Mathematical Society 12: 783-787.
Fox, K. A. (1969), 'The invisible revolution in economics: emergence of a mathematical science', in Fox, K. A., Sengupta, J. K., and Narasimham, G. V. L. (eds.), Economic Models, Estimation and Risk Programming: Essays in Honor of Gerhard Tintner, Berlin: Springer, pp. 2-19.
Fox, K. A. and Tintner, Gerhard (1987), in Eatwell, J., Milgate, M., and Newman, P. (eds.), The New Palgrave: A Dictionary of Economic Theory and Doctrine, London: Macmillan (forthcoming).
Hogan, W., Sharpe, I., and Volker, P. (1979), Regulation, Risk and the Pricing of Australian Bank Shares, 1957-76, Working Papers in Economics No. 29, University of Sydney.
Kolbin, V. V. (1971), 'Stochastic programming', in Gamkrelidze, R. V. (ed.), Progress in Mathematics: Probability Theory, Mathematical Statistics, and Theoretical Cybernetics, New York: Plenum Press, pp. 1-75 (vol. 11).
Leblanc, G. and Moeseke, P. van (1976), 'The Le Chatelier principle in convex programming', Review of Economic Studies 43: 143-147.
Leblanc, G. and Moeseke, P. van (1979), 'Portfolios with reserve coefficient', Metroeconomica 31: 103-118.
Levy, H. and Sarnat, M. (1972), Investment and Portfolio Analysis, New York: Wiley.
Lintner, J. (1965), 'The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets', Review of Economics and Statistics 47: 13-37.
Markowitz, H. (1959), Portfolio Selection, New York: Wiley.
Moeseke, P. van (1960), 'Stochastic linear programming', M.S. Thesis, Ames: Iowa State University.
Moeseke, P. van (1963), 'Minimax-maximax solution to linear programming under risk' (abstract), Econometrica 31: 749-750.
Moeseke, P. van (1965a), 'Stochastic linear programming', Yale Economic Essays 5: 197-253.
Moeseke, P. van (1965b), 'A general duality theorem of convex programming', Metroeconomica 17: 161-170.
Moeseke, P. van (1966), 'Ordre d'efficacité et portefeuilles efficaces', Cahiers du Séminaire d'Econométrie (Université de Paris) 9: 67-82.
Moeseke, P. van (1968), 'Towards a theory of efficiency', in Quirk, J. and Zarley, A. (eds.), Papers in Quantitative Economics, Lawrence: Kansas University Press, pp. 1-30 (vol. 1).
Moeseke, P. van (1974), 'Saddlepoint in homogeneous programming without Slater condition', Econometrica 42: 593-596.
Moeseke, P. van (1980), 'Stochastic portfolio programming: the game solution', in Dempster, M. A. H. (ed.), Stochastic Programming, London: Academic Press, pp. 497-505.
Moeseke, P. van and Hohenbalken, B. von (1974), 'Efficient and optimal portfolios by homogeneous programming', Zeitschrift für Operations Research 18: 205-214.
Sengupta, J. K. (1969), 'The econometric work of Gerhard Tintner', in Fox, K. A., Sengupta, J. K., and Narasimham, G. V. L. (eds.), Economic Models, Estimation and Risk Programming: Essays in Honor of Gerhard Tintner, Berlin: Springer, pp. 20-34.
Sengupta, J. K. (1980), 'Stochastic programming: a selective survey of recent economic applications', in Dempster, M. A. H. (ed.), Stochastic Programming, London: Academic Press, pp. 525-536.
Sharpe, W. (1964), 'Capital asset prices: a theory of market equilibrium under conditions of risk', Journal of Finance 19: 425-442.
Stein, J. L. (1986), The Effects of Interest Rate and Stock Index Futures upon the Rate of Capital Formation, Working Paper Series No. 86-2, Providence: Brown University.
Tintner, G. (1955a), 'Stochastic linear programming with application to agricultural economics', Second Symposium on Linear Programming, Washington, D.C.: National Bureau of Standards, pp. 197 ff. (vol. 1).
Tintner, G. (1955b), 'Programmazione lineare stocastica con applicazioni a problemi di economia agraria', Giornale degli Economisti, pp. 3 ff.
Tintner, G. (1957a), 'Les programmes linéaires stochastiques', Revue d'Economie Politique 67: pp. 601 ff.
Tintner, G. (1957b), 'La théorie probabiliste de Carnap et son application aux problèmes de l'économétrie', Economie Appliquée 10: pp. 19 ff.
Tintner, G. (1960), 'A note on stochastic linear programming', Econometrica 28: pp. 490 ff.
Tintner, G. and Moeseke, P. van (1964), 'Base duality theorem for stochastic and parametric linear programming', Unternehmensforschung 8: 73-79.
Tobin, J. (1958), 'Liquidity preference as behaviour towards risk', Review of Economic Studies 25: 65-86.
Yaari, M. E. (1984), 'Univariate and multivariate comparisons of risk aversion: a new approach', Research Memorandum No. 64, Jerusalem: The Hebrew University.
Yaari, M. E. (1985), 'Risk aversion without diminishing marginal utility and the dual theory of choice under risk', Research Memorandum No. 65, Jerusalem: The Hebrew University.
Young, M. (1985), 'Portfolio selection by homogeneous programming', M.A. Thesis, Palmerston North: Massey University.
Extremal Principle Solutions of Games
in Characteristic Function Form: Core,
Chebychev and Shapley Value Generalizations

A. CHARNES, B. GOLANY, M. KEANE, AND J. ROUSSEAU


University of Texas at Austin, Texas, u.s.A.

INTRODUCTION

In 1966, W. Lucas [1] exhibited a 10 person game with no von
Neumann-Morgenstern solution. D. Schmeidler [2] then originated the
nucleolus, proved it exists for every game, is unique and is contained
in the kernel and the bargaining set (thereby establishing the first
elementary proof of their existence). Charnes' idea of defining solu-
tions by use of uni- or poly-extremal principles involving comparison
of coalitional worths and payoffs thereto was specialized by Charnes
and Kortanek [3] to non-Archimedean linear programs characterizing
and generalizing the nucleolus and to the class of convex nucleus
solutions given by minimizing a convex function of the coalitional
excesses subject to simple conditions like the payoff vector being a
division of the grand coalition value or an imputation.
Although Charnes and Keane presented convex nucleus solutions
characterizing the core and the Shapley value at the 1970 Inter-
national Congress of Mathematicians in Nice and in [4, 5], and
although the Charnes-Cooper Entropic solution was explored in L.
Seiford's Ph.D. thesis with Charnes [6], due to various pre-emptive
emergencies none of these were submitted for publication in the
literature. Today, active research on modifications and extensions of
the characteristic function to reflect likely results of an implicit
bargaining stage as in Gately [7], Littlechild and Vaidya [8], Charnes,
Rousseau and Seiford [9], Michener [10], Charnes and Golany [11]
make these early unpublished results particularly interesting and
relevant to research involving non-superadditive generalizations of
solutions or situations wherein some coalitional values are unknown.

Thus in the following we present our extremal principle generaliza-


tions of the core and Shapley value, which include a new formula for
the latter. We also present a new notion, the "Chebychev" solution.
From these the mathematical simplicity and efficiency of the extremal
principles versus axiomatic definitions may be apparent. In particular,
the relevance of these to current work on the 'homocore' [11], the
'Harsanyi-Selten valuation' [12] and to Charnes' original idea of a
polyextremal game form somewhere between the extensive form and
the characteristic function form in which the coalitional worths are
implicit or endogenous may be appreciated.

THE CORE GENERALIZATION

We define our generalized solution to the game (N, v) as an optimal


solution x* to the convex programming problem:

\min_x \sum_{S \subseteq N} |x(S) - v(S)|  (GC)

subject to x(N) = v(N), where x(S) \equiv \sum_{i \in S} x_i.

Notice that this problem is well defined and soluble whether or not
(N, v) is super-additive. Indeed, the concept may be further extended
by weighting the coalitional excesses which appear in the absolute
values. The relation of this extremal principle to the core is rendered
in the following:

THEOREM 1. For any x with x(N) = v(N),

\sum_{S \subseteq N} |x(S) - v(S)| \;\geq\; K_n \equiv 2^{n-1} v(N) - \sum_{S \subseteq N} v(S),

with equality iff x is in the core.
(We call K_n 'Keane's constant'.)

Proof.

\sum_{S \subseteq N} |x(S) - v(S)| = \sum_{x(S) \geq v(S)} [x(S) - v(S)] + \sum_{x(S) < v(S)} [v(S) - x(S)]

= \sum_{S \subseteq N} [x(S) - v(S)] + 2 \sum_{x(S) < v(S)} [v(S) - x(S)].

Now

\sum_{S \subseteq N} x(S) = \sum_{|S|=1} x(S) + \cdots + \sum_{|S|=k} x(S) + \cdots + \sum_{|S|=n} x(S).

Recall that x(S) = \sum_{i \in S} x_i and that each sum contains the x_i for
every player i the same number of times. Thus, the number of times
one has x_1 + x_2 + \cdots + x_n in \sum_{|S|=k} x(S) is the number of times
player 1 appears in coalitions of size k. Clearly, this is \binom{n-1}{k-1}. Hence

\sum_{|S|=k} x(S) = \binom{n-1}{k-1} v(N), \quad \text{since } x(N) = v(N).

So

\sum_{S \subseteq N} x(S) = \left[ \binom{n-1}{0} + \binom{n-1}{1} + \cdots + \binom{n-1}{k-1} + \cdots + \binom{n-1}{n-1} \right] v(N).

But the bracketed sum of binomial coefficients equals 2^{n-1}. So

\sum_{S \subseteq N} x(S) = 2^{n-1} v(N).

Hence, for x(N) = v(N),

\sum_{S \subseteq N} |x(S) - v(S)| = 2^{n-1} v(N) - \sum_{S \subseteq N} v(S) + 2 \sum_{x(S) < v(S)} [v(S) - x(S)]

= K_n + 2 \sum_{x(S) < v(S)} [v(S) - x(S)].

Thus, the left side equals K_n iff \{S : x(S) < v(S)\} = \emptyset, i.e. x is in the
core of (N, v). Q.E.D.
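
A small numerical check of Theorem 1 can be coded directly; the three-person characteristic function below is an assumed example, not one from the text.

```python
# Verifying Theorem 1 on an assumed 3-person game: the (GC) objective equals
# Keane's constant K_n exactly at a core imputation and exceeds it elsewhere.
from itertools import combinations

n = 3
players = [1, 2, 3]
v = {(1,): 0, (2,): 0, (3,): 0, (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}
coalitions = [c for k in range(1, n + 1) for c in combinations(players, k)]

def total_deviation(x):
    # sum over all coalitions S of |x(S) - v(S)|, the (GC) objective
    return sum(abs(sum(x[i] for i in S) - v[S]) for S in coalitions)

K_n = 2 ** (n - 1) * v[(1, 2, 3)] - sum(v[S] for S in coalitions)   # Keane's constant

x_core = {1: 3, 2: 2, 3: 1}     # a core point of this game: x(S) >= v(S) for every S
x_other = {1: 6, 2: 0, 3: 0}    # divides v(N) but violates the (2,3) coalition
print("K_n =", K_n)
print("deviation at core point    =", total_deviation(x_core))    # equals K_n
print("deviation at non-core point =", total_deviation(x_other))  # strictly larger
```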

THE CHEBYCHEV SOLUTION

We define the Chebychev solution to be any optimal solution x* of


\min_{(\lambda, x)} \lambda \quad \text{subject to} \quad \lambda \geq |x(S) - v(S)|, \;\; \forall S \subseteq N, \;\text{and}\; x(N) = v(N).  (CH)
Evidently no super-additivity of (N, v) is needed. Interestingly, this
solution may be related to the core via Keane's constant as follows:

THEOREM 2.
\lambda^* \geq K_n/(2^n - 1)
with equality iff x* is in the core of (N, v).
Proof. Sum the inequalities for |S| = k to obtain

\binom{n}{k} \lambda \geq \sum_{|S|=k} |x(S) - v(S)|.

Hence

\sum_{k=1}^{n} \binom{n}{k} \lambda \geq \sum_{k=1}^{n} \sum_{|S|=k} |x(S) - v(S)|

or

(2^n - 1)\,\lambda \geq \sum_{S \subseteq N} |x(S) - v(S)|.

Thus

\lambda^* \geq \frac{K_n}{2^n - 1}, \quad \text{with equality iff} \quad \sum_{S \subseteq N} |x^*(S) - v(S)| = K_n,

iff x* is in the core = \{x : x(S) \geq v(S), \;\forall S \subseteq N\}. Q.E.D.
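
Since (CH) is linear in (x, λ) once each absolute value is split into two inequalities, it can be handed to any LP solver. The sketch below does this with scipy.optimize.linprog for the same assumed three-person game used above, and prints the bound K_n/(2^n − 1) of Theorem 2 for comparison.

```python
# The Chebychev solution (CH) as a linear program in (x, lambda);
# the 3-person game v is an assumed example.
import numpy as np
from scipy.optimize import linprog

n = 3
players = [1, 2, 3]
v = {(1,): 0, (2,): 0, (3,): 0, (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}

c = np.zeros(n + 1); c[-1] = 1.0                      # minimize lambda only
A_ub, b_ub = [], []
for S, vS in v.items():
    ind = np.array([1.0 if i in S else 0.0 for i in players])
    A_ub.append(np.append(ind, -1.0)); b_ub.append(vS)     #  x(S) - v(S) <= lambda
    A_ub.append(np.append(-ind, -1.0)); b_ub.append(-vS)   # -x(S) + v(S) <= lambda
A_eq = [np.append(np.ones(n), 0.0)]; b_eq = [v[(1, 2, 3)]] # x(N) = v(N)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(None, None)] * n + [(0.0, None)])
print("Chebychev solution x* =", res.x[:n], " lambda* =", res.x[-1])
print("Theorem 2 lower bound K_n/(2^n - 1) =",
      (2 ** (n - 1) * v[(1, 2, 3)] - sum(v.values())) / (2 ** n - 1))
```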

THE GENERALIZED SHAPLEY VALUE

We define a generalized Shapley value for the game (N, w) as any


optimal solution x* to the quadratic extremal problem

\min_x \sum_{S \subseteq N} [w(S) - x(S)]^2\, m(S) \quad \text{subject to} \quad x(N) = w(N)  (GS)

where m(S) = m(|S|) \equiv m(s).

THEOREM 3. The problem (GS) has the unique solution

x_i^* = \frac{1}{\beta}\left[ \mu_i + \frac{\beta w(N) - M}{n} \right]

where

\mu_i = \sum_{S \ni i} w(S)\, m(S), \qquad \beta = \sum_{k=1}^{n-1} m(k) \binom{n-2}{k-1}, \qquad M = \sum_{i=1}^{n} \mu_i.

Proof. Considering the Lagrangian L(x, λ) and setting its gradient
equal to zero, we have

L(x, \lambda) = \sum_{S \subseteq N} [w(S) - x(S)]^2 m(S) + 2\lambda\,[x(N) - w(N)]

and

0 = -2 \sum_{S \ni i} [w(S) - x(S)]\, m(S) + 2\lambda, \quad \text{all } i,

0 = x(N) - w(N).

Thus

\lambda = \sum_{S \ni i} [w(S) - x(S)]\, m(S) \quad \text{and} \quad x(N) = w(N).

In terms of the above defined μ_i and s = |S|, we have

\mu_i - \lambda = \sum_{S \ni i} x(S)\, m(s)

= \sum_{k=1}^{n-1} m(k) \sum_{\substack{|S|=k \\ S \ni i}} x(S), \quad \text{noting } x(N) = w(N),

= \sum_{k=1}^{n-1} m(k) \left[ \binom{n-2}{k-1} x_i + \binom{n-2}{k-2} w(N) \right]

= \beta x_i + \eta\, w(N),

where

\eta \equiv \sum_{k=2}^{n-1} m(k) \binom{n-2}{k-2}.

Summing on i and using \sum_i x_i = w(N), we obtain

\lambda = \frac{1}{n}\,[M - \beta w(N) - n\eta\, w(N)],

and, substituting λ back into μ_i − λ = βx_i + ηw(N),

x_i^* = \frac{1}{\beta}\left[ \mu_i + \frac{\beta w(N) - M}{n} \right]. \qquad Q.E.D.

Notice that this solution, in common with the Shapley value, is linear
in the w( S). Thus a wide class of solutions, which are additive over
general games, has been specified. The Shapley value, as will be
shown, is the special case in which the coalitional weights

m(S) = \binom{n-2}{s-1}^{-1} \quad \text{and} \quad \beta = n - 1.

We obtain

THEOREM 4. The Shapley value, specified by the imputation

\phi_i = \frac{1}{n} \sum_{S \ni i} \binom{n-1}{s-1}^{-1} [\,w(S) - w(S - \{i\})\,]

is the (GS)-solution specified by

x_i^* = \frac{1}{n-1}\left[ \mu_i + \frac{(n-1)\,w(N) - M}{n} \right]

arising from the weights m(S) = \binom{n-2}{s-1}^{-1} and \beta = n - 1.

Proof. Since both Shapley value and (GS)-solution are additive over
games, it suffices to match them over all 'Rth unit vector' games, i.e.
games with

w(R) = 1, \qquad w(S) = 0 \;\text{for}\; S \neq R.

For such a game one obtains

\mu_i = m(r)\, w(R) = m(r), \;\; i \in R; \qquad \mu_i = 0, \;\; i \notin R,

and

M = \sum_{i=1}^{n} \mu_i = r\, m(r)\, w(R) = r\, m(r).

Then

x_i^* = \frac{m(r)}{\beta}\,\frac{n - r}{n}, \;\; i \in R; \qquad x_i^* = -\frac{m(r)}{\beta}\,\frac{r}{n}, \;\; i \notin R.

For the Rth unit vector game

\phi_i = \frac{1}{n}\binom{n-1}{r-1}^{-1}, \;\; i \in R; \qquad \phi_i = -\frac{1}{n}\binom{n-1}{r}^{-1}, \;\; i \notin R.

Setting x_i^* = \phi_i,

\frac{m(r)}{\beta} = \frac{1}{n-r}\binom{n-1}{r-1}^{-1} = \frac{1}{n-1}\binom{n-2}{r-1}^{-1}, \qquad 1 \leq r \leq n - 1.

Normalizing by setting m(1) = 1, we must have β = n − 1. To
check that this is consistent with our definition of β, note

\beta = \sum_{k=1}^{n-1} m(k) \binom{n-2}{k-1} = \sum_{k=1}^{n-1} \binom{n-2}{k-1}^{-1} \binom{n-2}{k-1} = n - 1.

Thereby we obtain the above, new, expression for φ_i of the Shapley
value. Q.E.D.
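
The closed form of Theorem 3 with the weights of Theorem 4 can be checked numerically against the classical Shapley value computed from marginal contributions; the three-person game below is an assumed example. The S = N term is omitted from μ_i in the sketch since, under x(N) = w(N), it does not affect the solution.

```python
# Comparing the (GS) closed form under the Theorem 4 weights m(s) = C(n-2, s-1)^(-1)
# with the classical Shapley value; the game w is an assumed 3-person example.
from itertools import combinations, permutations
from math import comb, factorial

n = 3
players = [1, 2, 3]
w = {(): 0, (1,): 1, (2,): 0, (3,): 0, (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}
coalitions = [c for k in range(1, n + 1) for c in combinations(players, k)]

m = lambda s: 1.0 / comb(n - 2, s - 1)                 # weights of Theorem 4
beta = sum(m(k) * comb(n - 2, k - 1) for k in range(1, n))    # equals n - 1
mu = {i: sum(w[S] * m(len(S)) for S in coalitions if i in S and len(S) < n)
      for i in players}
M = sum(mu.values())
x_gs = {i: (mu[i] + (beta * w[(1, 2, 3)] - M) / n) / beta for i in players}

def shapley(i):
    # average marginal contribution of player i over all orderings
    total = 0.0
    for order in permutations(players):
        pos = order.index(i)
        before = tuple(sorted(order[:pos]))
        total += w[tuple(sorted(before + (i,)))] - w[before]
    return total / factorial(n)

print("GS solution  :", x_gs)
print("Shapley value:", {i: shapley(i) for i in players})   # the two should coincide
```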

EXTREMAL PRINCIPLES FOR
INCOMPLETE CHARACTERISTIC FUNCTIONS

A major difficulty in application of characteristic function theory is


generally the lack of knowledge of v(S), ∀S ⊆ N. With n players
there are 2^n − 1 possible coalitions, a tremendous number of values of
v(S) to know even for small n. The extremal principle solutions
defined above can, however, be easily modified to involve only the
relatively few actually possible coalitions through use of zero weights.
For example,

\min_{x(N) = v(N)} \sum_{S \subseteq N} w(S)\, |x(S) - v(S)|  (WGC)

\min_{x(N) = v(N)} \lambda, \qquad \lambda - w(S)\,|x(S) - v(S)| \geq 0, \quad \forall S \subseteq N,  (WCH)

where w(S) is the weight for coalition S.



For the generalized Shapley values, the extremal principle solution


has already been given. We note that an argument similar to that for GC
and CH will yield corresponding theorems for WGC and WCH.
But the flexibility of the extremal approach and the Chebychev
metric also applies to obtaining a new simple solution using the basic
idea of 'incremental propensity to disrupt'.
Thus, instead of complicated nucleolus impositions on the homo-
mollifier game, one can directly employ

\min_{x(N) = v(N)} \lambda, \qquad \lambda \geq w(S)\left[ \frac{s}{n}\,\bar v(S) + \frac{n-s}{n}\, v(S) - x(S) \right]  (HMCH)

where the weights w(S) may be arbitrarily specified and

\frac{s}{n}\,\bar v(S) + \frac{n-s}{n}\, v(S), \qquad \bar v(S) \equiv v(N) - v(N - S),

is the homomollifier of (N, v).


Clearly, theorems similar to the above will result for HMCH. The
relationship of these to current, more complicated, homocore and
monocore notions is yet to be explored. Evidently there will be close
coincidence for some simple games.

REFERENCES

[1] Lucas, W. F. (1968), 'A game with no solution', Bull. Am. Math. Soc. 74: 237-239.
[2] Schmeidler, D. (1963), 'The nucleolus of a characteristic function game', Research Memorandum 23, Mathematics Dept., The Hebrew University of Jerusalem.
[3] Charnes, A. and Kortanek, K. O. (1970), 'On classes of convex and pre-emptive nuclei for n-person games', in H. W. Kuhn (ed.), Proc. 1967 Princeton Symposium on Mathematical Programming, Princeton, N.J.
[4] Charnes, A. and Keane, M. (1969), 'Convex nuclei and the Shapley value', Center for Cybernetic Studies Research Report 12, The University of Texas, Austin.
[5] Charnes, A. and Keane, M. (1970), 'Convex nuclei and the Shapley value (abstract)', Proc. Int. Congress of Mathematicians, Nice.
[6] Seiford, L. (1977), 'Entropic solutions and disruption solutions for n-person games', Ph.D. Dissertation, Department of Mathematics, University of Texas, Austin.
[7] Gately, D. (1974), 'Sharing the gains from regional cooperation: a game theoretic application to planning investment in electric power', Int. Economic Review 15: 195-208.
[8] Littlechild, S. C. and Vaidya, K. G. (1976), 'The propensity to disrupt and the disruption nucleolus of a characteristic function game', Int. Jour. Game Theory 5: 151-161.
[9] Charnes, A., Rousseau, J., and Seiford, L. (1977), 'Complements, mollifiers and the propensity to disrupt', Int. Jour. Game Theory 7: 37-50.
[10] Michener, H., Yuen, K., and Sakurai, M. (1981), 'Experimental games', Int. Jour. Game Theory 10: 75.
[11] Charnes, A. and Golany, B. (1983), 'Homocores, cores and operational inefficiency in superadditive n-person games', Int. J. Systems Sci. 14: 877-893.
[12] Charnes, A., Rousseau, J., and Seiford, L. (1982), 'Mollifiers for games in normal form and the Harsanyi-Selten valuation function', Int. Jour. Game Theory 11: 163-174.
Econometric Estimation of Decision Models
Under Uncertainty

T. V. S. RAMAMOHAN RAO¹
Indian Institute of Technology, Kanpur

Abstract. Endogenous randomness of the demand curves is endemic to firms in


monopolistic competition. Since the costs of resolving uncertainty are prohibitive
they may be at best able to obtain some information about the demand curves.
Confronted with such random demand they may pursue a policy of fixing the price
and/or quantity and face the fluctuating market in such a way as to maximize
expected profits. This makes the estimation of the ex ante demand curves from ex
post behavior quite hopeless. The major accomplishment of this paper is the
development of a method of estimating the ex ante parameters. Sensitivity of the
parameter estimates to the variations in the capacity level of output, average
production level, and price have been exhibited for various behavioral patterns of
such firms.

1. NATURE OF THE PROBLEM

Consider a firm operating in a market which can be characterized as


monopolistic competition. The demand curve of such a firm is often
influenced by factors outside the market (exogenous variation which
may also be random) and by the actions and reactions of rival firms
(endogenous randomness). In more recent literature on the theory of
the firm it has been recognized that for any one firm the information
and search costs 2 may be so prohibitive as to prevent it from making
attempts to resolve endogenous randomness. Perforce the manage-
ment has to determine its choices in such a stochastic milieu.
In particular, the management has to determine the optimal change
in the price and the quantity of output to be produced and offered on
the market whenever a change in the demand curve is brought about
by an exogenous and/or endogenous variation. 3 The following exam-
ple is of obvious contemporary relevance. Since the video boom is
expected to alter the demand patterns for cinema theaters it would be

necessary to examine the changes that should be brought about in the


prices and the entertainment tax. An analysis of this nature necessi-
tates estimation of the ex ante demand and cost functions. 4
The econometric estimation problem can be described concretely
only by considering the decision-making patterns of firms when they
are confronted with demand uncertainty. Leland (1972) has shown
that four different alternatives are available depending on the market
environment:
(i) Determine the price and the quantity to be produced and sold
only after the demand curve gets to be known,s
(ii) Fix a price ex ante and accept the quantity demanded as the
production level after the randomness is resolved every period of
time,6
(iii) Fix a production level ex ante and allow the price to be
determined by the market as it unfolds, and
(iv) Fix a price as well as the production level ex ante but allow the
volume of sales to vary on the basis of endogenous randomness. This
alternative may be modified by an inventory policy whenever it is
feasible.?
In decision situations of type (i) it will be generally observed that
the ex post price and quantity choices over time will be positively
correlated and each of the observations is on a different demand
curve. However, the estimation of the ex ante demand curve is jeop-
ardized since the factors responsible for the endogenous randomness
cannot be obtained by adopting conventional regression procedures.
An identification problem of this nature has not been examined in
econometric theory so far. Further, the estimation problem, in all its
generality, appears to be hopeless a priori. Consequently, the present
study is an attempt to outline a useful, albeit somewhat preliminary,
approach to this problem.

2. SPECIFICATION OF THE PROBLEM FOR ESTIMATION

In order to sharply focus on the problem involved, while keeping the


analysis simple and manageable, it would be convenient to proceed in
the following manner.
Let the demand curve for the firm be represented by

p = α − βX + u,

where
p = price per unit sold,
X = quantity sold at any point of time, and
u = random variable.
That is, the price which can be obtained on the market for any
given volume of sales is uncertain.
However, once the firm chooses the quantity X of output pro-
duced,8 the costs of production are well-defined. Let
C=F+mX, where
C = total cost of producing X units,
F = fixed cost, and
m = marginal cost 9 of producing a unit of X.
Now, for illustrative purposes, consider the decision process of type
(iv) detailed in Section 1. For this case the following decision proce-
dure is optimal:
(a) Fix a price p*,
(b) Fix a quantity X* which the firm will produce during each unit
of time, and
(c) Allow the actual X sold at price p* to be determined by the
market during each unit of time. 10
Hence, the observations generated in the actual operation of such a
decision process consist of:
(a) A fixed price p*,
(b) Actual quantity sold, X, which varies over time, and
(c) The total cost C. This will be mostly constant if X* is unaltered
over the time period under consideration.
The econometric estimation problem consists of obtaining usable
estimates of α, β as well as F and m. This is the crux of the identifica-
tion problem encountered in dealing with decision making situations
under uncertainty.
Clearly, there is no direct theoretical resolution 11 of the identifica-
tion problem. An indirect approach to the problem of parameter
estimation has to be developed. This is the main task of the rest of the
present study.

3. THE CERTAINTY CASE

Leland (1972) labels a decision situation of type (i) as the certainty



case. Such a firm generally waits until it gets to know the realized
value of the random variable u. It decides p and/or X only after this
information is available. That is, for any specific time period, it
chooses
2βX = (α + u) − m, and p = m + βX.
However, note that over time the value of u varies and this is the
primary source of variation in the observations generated for this firm.
Further, as noted earlier, an increase in u causes both X and p to rise.
The observed X, and consequently the total cost of production,
varies over time. Hence, the regression procedure can be utilized to
obtain estimates of F and m which are the parameters in the cost
curve. In the sequel it will be presumed that an estimate of the
parameter m is available.
For purposes of concreteness in the estimation process it will be
postulated that the random variable u is uniformly distributed over
the interval (0, 2A) where A is an unknown parameter. 12 Reconsider
the choice of X where u = 2fJX - (a - m).
Then, by the method of moments, it can be shown that
A = 2βX̄ − (α − m),
where
X̄ = sample mean of the observed values of X.
Similarly, β can be obtained from the equation
p̄ = m + βX̄,
where
p̄ = sample mean of the observed values of p.
Consequently, given an estimator of m,

β̂ = (p̄ − m)/X̄, and α̂ = 2p̄ − m − Â,

provided an estimate Â of A can be obtained from the sample.
The economic theory concerning the decision making process of the
type (i) firm does not provide any further information. Hence, it is
necessary to extend the method of moments to the second moment as
well. This results in
A² = θ²β², where θ² = (12/n) Σ (X − X̄)²,

n = number of observations in the sample, and the summation is over


all these observations.
From this it follows that
Â = θβ̂,
where θ is defined by the preceding equation.¹³ This completes a
solution to the problem of estimating the desired parameters.
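
A compact numerical sketch of these moment estimators is given below; the sample is simulated under assumed 'true' parameter values purely to exercise the formulas, and m is treated as known from the cost regression.

```python
# Method-of-moments estimators for the certainty (type (i)) case; all 'true'
# parameter values below are assumptions used only to generate a sample.
import numpy as np

rng = np.random.default_rng(0)
alpha_true, beta_true, A_true, m = 10.0, 0.01, 0.5, 2.0
u = rng.uniform(0.0, 2.0 * A_true, size=200)           # u ~ U(0, 2A)
X = ((alpha_true + u) - m) / (2.0 * beta_true)          # from 2*beta*X = (alpha + u) - m
p = m + beta_true * X                                   # p = m + beta*X

X_bar, p_bar = X.mean(), p.mean()
theta = np.sqrt((12.0 / len(X)) * np.sum((X - X_bar) ** 2))   # from A^2 = theta^2 * beta^2
beta_hat = (p_bar - m) / X_bar
A_hat = theta * beta_hat
alpha_hat = 2.0 * p_bar - m - A_hat
print("beta_hat =", beta_hat, " A_hat =", A_hat, " alpha_hat =", alpha_hat)
```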
Figure 1 is drawn to exhibit the sensitivity of the parameter estimates¹⁴ to variations in X̄, p̄, and m, holding the observed second moment at a constant value of 100.
"
fX
... -~_ . -_.
10'6 ..... - 1.
- m=2·0

"
"
10'3 m~2'5

J ·014

,
10'0
400 450 500
X "- ..... m=2'0
"
A ........ ,/
"- ....
·44 ·010 ~ ,
'9

,,
,, '008
400 450 500
X
,
" m=2·0
·36 ,I
"- , ,

·2 B I--_-..L.._ _-'---_---L-_ X
400 450 500
Fig.1a.

"
Q'

13
I'
"
"
(3 , 11
m=2·0

\ ,,-" "
~"

,
,
,,
~"
·012 m=2·0 I 9 ..... "
....
I
\/
....

·010 I 7 P
I
I 6 7 8

I
I "A
I
'5
I'

- "

.....
P .....

"
·006 m~2·0
'4
6 7 8
\...... " .....

'"
,." ..... m:02·5

·2
6 7 8
is

Fig. lb.

The following salient aspects may be noted:¹⁵
(a) When the observed average price (p̄) = marginal cost (m), the
estimates of β and A will both be zero. That is, the market will
approximate a competitive market with every firm having perfect
information regarding the market price. Consequently, there is no
market uncertainty.
(b) For a given marginal cost of production, an increase in p̄
signals an increase in uncertainty, a greater departure from competi-
tiveness, and an increase in demand. α̂ is most sensitive to variations
in p̄.
(c) Increases in m tend to reduce β̂ (that is, increase competitive-
ness) and enable the firms to offer the same price only if uncertainty is
reduced. But it is far more important to note that the firm would be

charging the same price at a lower marginal cost only if the market
demand is substantially higher.

4. THE PRICE SETTING FIRM

Consider a price setting firm next. That is, it corresponds to the type
(ii) decision structure alluded to earlier. For this case it would be
necessary to define the process of fixing the price.
Let the firm be risk neutral. 16 That is, the firm will be presumed to
maximize expected profits. To proceed with the analysis systematically
note that the
Expected revenue = p* E,
and
Expected cost = F + mE,
where
E = (α − p* + A)/β,
and
p* = parametrically chosen price.
Clearly, E represents the expected market demand at the price chosen
by the firm.
From these expressions the choice of p* implied by the expected
profit maximization can be written as
2p* = α + A + m.
Further, as in the previous section, it can be shown that
β̂ = (p* − m)/X̄,
Â = θβ̂,
and
α̂ = 2p* − Â − m,
where
θ² = (3/n) Σ (X − X̄)².
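
For comparison, the corresponding calculation for the price-setting firm uses only the observed quantities, the posted price p* and the outside estimate of m; the sample values below are assumed purely for illustration.

```python
# A companion sketch for the price-setting (type (ii)) firm: only X varies in the
# sample; p* and m are assumed illustrative inputs.
import numpy as np

X_sample = np.array([410.0, 455.0, 438.0, 472.0, 401.0, 463.0])   # observed quantities (assumed)
p_star, m = 6.5, 2.0

X_bar = X_sample.mean()
theta = np.sqrt((3.0 / len(X_sample)) * np.sum((X_sample - X_bar) ** 2))
beta_hat = (p_star - m) / X_bar
A_hat = theta * beta_hat
alpha_hat = 2.0 * p_star - A_hat - m
print("beta_hat =", beta_hat, " A_hat =", A_hat, " alpha_hat =", alpha_hat)
```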
Figure 2 provides a numerical illustration of the sensitivity of the parameter estimates to variations in the observed X̄, p*, and m. The sample variance is fixed at 100, as in the previous section.¹⁷
[Fig. 2a. Sensitivity of α̂, Â, and β̂ to variations in X̄, for m = 2.0 and m = 2.5.]

The following aspects are noteworthy:
(a) Even in this case the competitive market approximation with
no uncertainty is obtained when p* = m; that is, when the firm sets
price at marginal cost.
(b) All the three parameter estimates are highly sensitive to
variations in p*. Variations in X̄ and m have only a marginal effect on
β̂ and Â. However, the estimator α̂ is highly sensitive to changes in
the marginal cost of production.

[Fig. 2b. Sensitivity of α̂, Â, and β̂ to variations in p*, for m = 2.0 and m = 2.5.]
The foregoing analysis of the price fixing firm assumed that the firm
does not encounter capacity limits in fulfilling market demand. It
would be more realistic to postulate the existence of such limits.¹⁸
Assume further that neither can the product be stored nor can orders
be backlogged. Under these conditions it may be far more meaningful
to hypothesize that the firm fixes a price and capacity level of output
but adjusts production level of each time period on the basis of
market demand. I9
The solution in this alternative case would be somewhat different.
For, in this decision making milieu, there are two possible states of
demand at the parametrically chosen price level:
(a) If X < X*, X is offered and the cost incurred accordingly, or
(b) If X ≥ X*, X* is sold on the market.

It may now be verified that

Expected revenue
= (p*u*/2βA)(α − p* + 0.5u*) + (p*X*/2A)(2A − u*),

where

u* = βX* − α + p*,

and

Expected cost
= (u*/2Aβ)(Fβ + αm − p*m) + (mu*²/4Aβ) + (1/2A)(F + mX*)(2A − u*).

Maximizing the expected profits with respect to p* and X* and
proceeding as in the above case it can be shown that²⁰

β̂ = (p* − m)/X̄,
α̂ = 2p* − m − Â,

and

Â = β̂(X* − X̄).
Figure 3 conveys an approximate idea regarding the sensitivity of
the estimates to variations in X, p*, and m. The following observations
are pertinent: 21
(a) As X approaches X* and full utilization of capacity is recorded
at each point of time the market uncertainty is eliminated. Hence, the
estimate of A becomes zero in the limit. The simulated results confirm
this expectation. However, note that the stability of the market
demand curve can signify either the emergence of a competitive
market organization or the possibility of significant monopoly power
to the firm. β̂ reduces with X̄ but appears to tend to zero only for
high values of m.
(b) Similarly, it was noted that as m tends to p* the estimates of Â
and β̂ become zero and the value of α̂ tends to p*. This is also in
agreement with apriori expectations.
(c) However, it was found that all the three parameters, viz.
α, β, and A, are far more sensitive to X̄ and p* than they are to
variations in the marginal cost itself. 22

[Fig. 3a. Sensitivity of α̂, Â, and β̂ to variations in X̄.]

5. THE QUANTITY SETTING FIRM

A few commodities would be such that the firm will have to sell all the
output produced at the going market price. Only some agricultural
commodities and style goods will belong to this classification. How-
ever, this case must be considered for logical completeness in a
discussion of decision models under uncertainty.
A quantity setting firm fixes X* to be produced and sold on the
market. In this process the firm is postulated to accept whatever price
it can obtain on the market. The expected profit for such a firm can be
written as
Eπ = (α + A − m)X* − βX*² − F,

[Fig. 3b. Sensitivity of α̂, Â, and β̂ to variations in p*.]

so that the optimal choice of X* is given by


X* = (α + A − m)/2β.

Even in this case the model is somewhat underdetermined. Hence,
utilizing the first and second moments it can be shown that

Â = θ,
α̂ = 2p̄ − Â − m,

and

β̂ = (p̄ − m)/X*,

where

θ² = (3/n) Σ (p − p̄)².
As before, if the price at each point of time equals the marginal cost,
then both the estimates of β and A would be zero. Further, the
estimate of a would be equal to m so that the market environment
can be described as competitive with none of the firms experiencing
any market uncertainty.
Notice that, as in the previous sections, it has so far been assumed
that an independent estimate of the parameter m is available. How-
ever, since X* and C are practically invariant over the sample, an
estimate of m cannot be obtained by a regression procedure. Instead,
an estimate of F will have to be obtained from a knowledge of the
capital costs, life of machines, and the value of m calculated on this
basis.²³
The sensitivity of the parameter estimates to the observed variations
in X*, p, and m can again be examined.
The following aspects may be noted:
(a) Neither Â nor α̂ depends upon the observed X*. Variations in
X* can only alter the estimated value of β. Quite clearly, β̂ decreases
with an increase in X*.
(b) Similarly, Â is invariant with respect to changes in p̄ and m.
(c) Both α̂ and β̂ vary directly with p̄.
Given these observations it may be concluded that the estimation
problem in the context of type (iii) decision procedures is the most
uninteresting of the various cases examined so far.

6. THE CASE OF LARGE MANUFACTURING ORGANIZATIONS

Large manufacturing organizations, where there are significant eco-


nomies of scale in operations, generally tend to maintain a significantly
high level of production to obtain a cost advantage over the com-
petitors. However, they are still subject to random demand curves due
to the reactions of the rivals. It may be necessary for such firms to fix
a price as well in the hope of stabilizing their demand curve over time
at the predefined level of production. 24
In consonance with the analysis of the previous sections let both the
demand and the cost curve be linear. With a p* fixed apriori, the
firm would sell X units of output if X < X*, and sell X* units if

X ≥ X*. Consequently, the expected revenue can be represented by


Expected revenue
= (p*u*/2Aβ)(α − p*) + (p*u*²/4Aβ) + (p*X*/2A)(2A − u*),

where

u* = βX* − α + p*.
Maximization of the expected profits gives two equations in p* and
X*. Further, since
p* = α − βX + u,
it follows that
p* = α − βX̄ + A.
The requisite parameters, obtained from these three equations, are
β̂ = A(1 − 2m/p*)/(X* − X̄),
α̂ = p* + β̂X̄ − A,

and A is obtained from the equation

A(m − p*)² = p*²[(m − p*) + β̂X*].
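
Because β̂ and A appear in each other's estimating equations, they can be recovered jointly; the sketch below treats the two equations as a linear system in (A, β) and solves it directly, with X*, X̄, p* and m set to assumed values in the range used for Figure 4.

```python
# Recovering A and beta for the large-firm case by solving the two estimating
# equations as a 2x2 linear system; the inputs below are assumed illustrative values.
import numpy as np

X_star, X_bar, p_star, m = 500.0, 400.0, 6.5, 2.0
H = 1.0 - 2.0 * m / p_star

# Unknowns z = (A, beta):
#   -H*A + (X* - X_bar)*beta = 0                       from  beta = A*H/(X* - X_bar)
#   (m - p*)^2 * A - p*^2 * X* * beta = p*^2 (m - p*)  from the implicit equation for A
coef = np.array([[-H, X_star - X_bar],
                 [(m - p_star) ** 2, -p_star ** 2 * X_star]])
rhs = np.array([0.0, p_star ** 2 * (m - p_star)])
A_hat, beta_hat = np.linalg.solve(coef, rhs)

alpha_hat = p_star + beta_hat * X_bar - A_hat
print("A_hat =", A_hat, " beta_hat =", beta_hat, " alpha_hat =", alpha_hat)
```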
Figure 4 provides an idea regarding the sensitivity of the estimated
values of the parameters for X, p*, and m. It may be noted that 25
(a) X̄ tends to X* if each of the observed values of sales approxi-
mates X*. Under such conditions the market demand would have
stabilized. Consequently, much of the uncertainty is eliminated and Â
tends to zero quite rapidly. However, for a given p*, it is only larger
values of m which keep the firms competitive. A small marginal cost
perforce indicates that the large firm may have cost advantages which
confer some monopoly power to it. This, in fact, is the reason for the
increase in β̂ as X̄ approaches X* when the m value is smaller.
(b) For sufficiently large values of m, both Â and β̂ tend to zero as
X̄ approximates X*. It may be observed that under these conditions
the estimated α approaches p* and the market is competitive.
(c) Variations in p* create a somewhat parallel effect on the
parameters a and {3. However, the estimated uncertainty remains high
for all values of p*. In fact, for small values of m the occurrence of a
149

"
~
m=2·0
... ........ ........ .
8
",
,
\ ... y

f3"
4
·016
m=2·5
0 X .... I
-..::
"
A ·012 ""

8,5 ·008 I
X
400

4-5

Fig.4a.

larger p* choice signifies a greater degree of monopoly power rather


than endogenous randomness of market demand.

7. INVENTORIES AND STOCHASTIC DEMAND

In most practical situations the firm has the flexibility of smoothing


out fluctuations in demand by holding inventory. Generally this can be
a conscious decision of the firm so that the desired level of inventory
is also a decision variable. However, for purposes of the present study
the variations in inventory will be considered to be involuntary.26
Suppose that at the beginning of any time period the firm has a
stock of inventory determined by I. As such the total supply which the
firm plans to offer is X* + I. Then, if the actual demand X, at the
predetermined price p*, falls short of X* + I the actual sales will be
X and an amount X* + I − X is added to the inventory. On the
other hand, X* + I will be sold²⁷ whenever X ≥ X* + I.

[Fig. 4b. Sensitivity of α̂, Â, and β̂ to variations in p*.]

Expected revenue
= (p*u*/2Aβ)(α − p*) + (p*u*²/4Aβ) + (p*/2A)(X* + I)(2A − u*),

where

u* = β(X* + I) − α + p*.

The inventory cost of the decision process can be shown to be


inventory cost = iu*²/4Aβ,
where
i = cost of holding one unit of inventory for one unit of time
Similarly, since the level of production is fixed, the total costs of
production can be written as
C = F + mX*.
Maximizing the expected profit results in two equations from which
p* and (X* + I) can be chosen uniquely. In other words, the firm
would prefer to choose a level of production for each period of time
in such a way that the total supply is invariant. Consequently, in the
rest of the analysis, X* and I will have to be viewed as sample
averages.
With this interpretation in perspective the parameters of the model
can be estimated by

H = 1 − 2(m + i)/(p* + i),
β̂ = AH/(X* − X̄),
α̂ = p* + β̂(X̄ + I) − A,

and A is estimated from the equation

A(m − p*)² = (p* + i)²[(m − p*) + β̂(X* + I)].

The results of numerical calculation, with respect to changes in


X, p*, and m are analogous to those of the previous section. In
particular,
(a) α̂ and Â are far more sensitive to changes in X̄ than they are
to variations in p*,
(b) β̂ is relatively more sensitive to X̄ though the overall variation
remains relatively low, and
(c) the firm has monopoly power for low values of m when X̄
approaches X*. However, the monopoly power is eliminated for
values of m which are close to p*.
In the present case the changes in the levels and costs of inventory
provide a distinctive feature for parameter estimation. Figure 5
provides a basis to reflect on these aspects. 28 It may be noted that

[Fig. 5a. Sensitivity of α̂, Â, and β̂ to variations in the inventory level I.]

(a) the estimates of α̂ and Â are far more sensitive to variations in
m than they are to either I or i,
(b) in general, a reduction in i and/or an increase in I tends to
increase the stability of the market demand curve and reduce the
extent of uncertainty, and
(c) increases in i, to the extent that they are similar to changes
in m, indicate a similar pattern of variations in β̂. However, the effect
of I on β̂ may be much more important compared to that of i itself.

8. SOME FURTHER OBSERVATIONS

In the foregoing analysis it was shown that estimation of the ex ante


parameters is possible for all the alternative decision structures in
[Fig. 5b. Sensitivity of α̂, Â, and β̂ to variations in the inventory holding cost i.]

models of decision making under uncertainty. In particular, it was


expected a priori that a reduction in endogenous randomness may be a
characteristic of either a competitive market organization or of signifi-
cant monopoly power. The estimated results indicate that for the price
setting firm the tendency is indicative of competitiveness whereas for
the quantity setting firm it is closer to a reflection of monopoly power.
This result must be qualified by referring to the relation between
marginal cost and observed price. When the marginal costs are high in
relation to price, competitiveness is the more likely outcome even in
the case of the quantity setting firm. As expected, the existence of
inventory policy generally adds to the degree of monopoly power of
the firm.
For a given β, a higher market price is indicative of both a higher α
and a higher degree of uncertainty. The results of the present study

indicate that endogenous randomness is the more influential causative


factor.
However, it has not been possible to get similar results if more
general functional forms are utilized. Further, the multiproduct firm
would be difficult to deal with even under the simplified assumptions.
This problem is acute in the context of the quantity setting firm.
Meaningful confidence intervals for the parameter estimates are not
as yet available. Similarly, there is no obvious method of generating
measures of goodness of fit. The problem is particularly severe in the
case of firms which fix both p* and X* ex ante.
The underdetermined nature of the simpler models suggests that
there can be attempts on the part of the firm to adaptively learn about
the nature of the endogenous randomness and eliminate it as far as
possible. A meaningful economic theory of this aspect of the behavior
would be most helpful. The case where p* and X* are fixed did not
pose a similar problem because the behavior already reflects an
attempt to reduce randomness by adopting alternative strategies.
Hence, though the present study is a useful beginning to the
problem there are still several dimensions of the specification which
must be examined in detail.

NOTES

1. I benefitted from discussions with R. R. Barthwal and Ashok Mittal. However,


the responsibility for the contents is my own.
2. Generally these two aspects of the specification are designated as transaction
costs.
3. Firstly, it is evident that the actions and reactions of rival firms may result in a
systematic shift in the demand curve for the firm. But, for the present
purposes, such changes will be characterized as purely stochastic. A few
modifications of this will be taken up in the sequel. Secondly, the exogenous
variation can be in the cost curves as well as the demand curves.
4. Rao et al. (1984) examined this problem partially.
5. In the original formulations of Chamberlin the firm is postulated to utilize the
experience of the most recent past rather than make attempts to guess or
resolve randomness. However, ignoring any additional information that can be
obtained is never really optimal. Hence, this behavioral mode is not relevant in
the present context.
6. The capacity of the firm may be a constraint on this decision. This will be
reexamined presently.
7. Firstly, there are a few other variations on these themes as well. A notable
example occurs in freight transportation. Generally, freight is consolidated at a

truck terminal or a railway yard so as to reduce the extent of demand


variability and/or uncertainty. This also suggests that even in other contexts
there may be an adaptive attempt to elicit information about endogenous
randomness with a view to reducing it over time to the extent it is possible.
Secondly, some aspects of multiproduct decisions can also be taken up in the
framework of the present study though the related results are not reported
here.
8. In some cases it is necessary to distinguish between the quantity sold and the
quantity produced. Appropriate changes will be made in the sequel on the
basis of analytical requirements.
9. It may be argued that this specification of the cost curve is unrealistic. But
only a few other functional forms can be accommodated in the estimation
methods outlined in the present study. The general problem remains elusive.
10. The modifications introduced by an inventory policy will be considered in a
subsequent section.
11. Only a resolution by appropriate economic theory is implied in this context.
Artificial statistical conditions of econometric theory are largely irrelevant in
such practical problems.
12. During the initial stages of this work the conventional normality assumption
was made in the hope that all the inference procedures can be carried over
from the classical linear estimation process. But this adds an extra parameter
for estimation, results in nonlinear estimators, and complicates inference. The
alternative presented here is manageable though it is not as yet possible to
develop all the statistical inference procedures.
13. The model is somewhat under determined. This is evident from the fact that
the second moment had to be utilized to obtain the estimators for the
parameters. However, somewhat more realistically, the firm may be expected
to make an attempt to estimate the randomness adaptively. Such an alternative
approach may be more satisfactory from the viewpoint of economic theory.
Unfortunately, it has not been possible to provide a concrete specification of
this alternative. Part of the reason for this is of course the difficulty associated
with the assignment of transaction costs. It may also be remarked that Leland's
(1972) framework did not envisage this as a possible behavioral response.
14. Since there is a closed form solution to the estimation problem this may
appear to be a redundant exercise. However, there will be cases in subsequent
sections where a closed solution is not possible. Consequently, these numerical
estimates are being presented to facilitate comparison.
15. fi = 6.5 in Figure 1(a). Similarly, in Figure l(b) the value of X is fixed as 400.
16. It is possible to estimate the degree of risk aversion in these models by
utilizing the procedures of this section. This will, however, be ignored for
simplicity of exposition.
17. In Figure 2(a) p* is set equal to 6.5. Similarly, in Figure 2(b) the value of X is
chosen to be 400.
18. The analysis with a limit on capacity which is exogenously fixed does not, in
itself, offer any additional insights.
19. The choice of the capacity level of output has implications for the fixed cost.
However, since the output decision X is variable over time the observed
variations in the costs can still be utilized to obtain an estimate of the marginal
cost.
20. Notice that in this case there are three equations involving the observed
decisions, viz., X*, p*, and X. This system is exactly determined and the
second moment is no longer necessary to obtain the estimators.
21. Figure 3(a) was drawn by fixing X* = 500, and p* = 6.5. In Figure 3(b) while
X* was fixed at 500, X was chosen to be 400.
22. It would have been instructive to compare the estimated degree of uncertainty
    with and without capacity limits. However, the estimates are not comparable.
23. It should be remarked that this procedure will not be adequate in the context
of multi-product firms. The system will remain underdetermined unless the
proportions in which the outputs will be produced can be determined
endogenously and utilized for estimation of the marginal cost parameters.
24. In the context of power generation, where the output cannot normally be
stored, the power plants generally choose a policy of fixed operations levels as
well as tariffs a priori. The possibility of the firm holding inventory to smooth
out sales over time will be examined in the next section.
25. In Figure 4(a) the assumed value for p* is 6.5. Similarly, in Figure 4(b) X* is
fixed at 500 while X takes the value 400.
26. This is in the tradition of Mills' model of decisions under uncertainty.
27. The assumption that there can be no backlogging of orders is being main-
tained. It should be easy enough to see that these assumptions can be relaxed
further without having to alter the basic analytical framework of this section.
28. In Figure 5(a), X* = 500, X = 400, p* = 6.5, and i = 0.08. Similarly, in
    Figure 5(b), I = 25 while i is allowed to vary.

REFERENCES

Leland, H. E. (1972), 'Theory of the firm facing uncertain demand', American
    Economic Review 62: 278-291.
Rao, T. V. S. R. et al. (1984), 'Equity and efficiency in entertainment tax determina-
    tion', Indian Economic Journal, 45-54.
On Estimating the Elasticity of Factor Substitution
by Nonlinear Least Squares

JAMES H. GAPINSKI
Florida State University, Tallahassee, U.S.A.
and
T. KRISHNA KUMAR
University of Hyderabad, Hyderabad, India

All this time the Guard was looking at her, first


through a telescope, then through a microscope,
and then through an opera-glass. At last he said,
"You're travelling the wrong way," and shut up
the window, and went away.
Lewis Carroll
Through the Looking-Glass

Abstract. After briefly reviewing the recent history of estimating the elasticity of
factor substitution by nonlinear least squares from the CES production function, this
note activates the elasticity estimates generated by a previous Monte Carlo study and
processes them to yield relative frequency distributions of the elasticity estimator.
The note then deals with the issue of estimating the 'location parameter' of the
underlying sampling distribution. Mean, mode and median are compared as
estimators of the 'location parameter'. It is shown that all the estimators are biased,
and biased towards unity - the Cobb-Douglas case. Among these point
estimators the median is shown to have the least percent bias. The paper
highlights the limitations of the CES production function specification and of
nonlinear least squares estimation of the elasticity of factor substitution; but if
one must use the CES specification and NLS estimation, it shows how to
cumulate evidence from comparable studies.

DIRECT ESTIMATION OF CES PRODUCTION FUNCTION

Nonlinear least squares (NLS) is a frequently used tool for estimating
the elasticity of factor substitution, σ, from the CES production
function. Since NLS extracts the σ estimate directly from the function
without recourse to marginal conditions, it has obvious appeal to
researchers interested in the production structures of planned econo-
mies. Weitzman [15], for example, pressed it into service during his
1970 investigation of the Russian growth performance. Three years
later Asher and Kumar [1] relied on it in their look at the growth
records of six countries, half (namely, Hungary, Russia, and Yugoslavia)
being planned in orientation. NLS also appeals in those instances
where the marginal conditions, while valid, are difficult to invoke
either because the data on prices have limitations or, more funda-
mentally, because the economic units under examination have a non-
competitive nature. Such consideration led Ryan [11] to adopt it in his
1973 work on British manufacturing industries. Even in the absence
of special circumstances, NLS appeals as the 1970 inquiry by Tsurumi
[14] into the production side of Canadian manufacturing attests.
Support for NLS in the CES setting continues and can be found in the
new generation of econometrics texts, notably in the 1980 volume by
Judge et al. [5].

LIMITATIONS OF DIRECT ESTIMATION OF


ELASTICITY OF SUBSTITUTION

The fame of NLS in estimating σ has not come untarnished, however;
indeed, its successes seem to be outnumbered by its failures. In 1967
Bodkin and Klein [2] applied the CES, specified first with multiplica-
tive and then with additive disturbances, to data on the US economy
using NLS, but discovered such large standard errors for the estimates
of ρ, ρ = (1 - σ)/σ, that they rejected both fits. Asher and Kumar [1]
likewise observed imprecision in the NLS estimates of σ during their
aforementioned 1973 effort and highlighted that fault a year later [6]
in commenting on the Weitzman piece. Another year later Tsang and
Persky [13] made a similar observation in their study of US manufac-
turing industries: the estimates were decidedly insignificant. Even the
probe of British manufacturing by Ryan [11] and the examples of
search procedures by Judge et al. [5] (pp. 732, 737, 739) left traces of
the anomaly.
This imprecision soon became the subject of several Monte Carlo
studies, among the first of which were the twin investigations by
Kumar and Gapinski [7, 8] appearing in 1974. Kumar and Gapinski
(KG) argued that the imprecision of the σ estimator, σ̂, was due to
flatness of the error sum-of-squares (ESSQ) surface in the σ̂ direction
for σ̂ > 1 approximately, and they suggested that the flatness was due
either to high intercorrelation among the explanatory variables or to
insensitivity of the ESS function to large changes in σ̂ [8] (p. 566).
Tsang and Persky in their 1975 article attributed the imprecision of
the estimator to high intercorrelation among the partial derivatives of
the CES with respect to its parameters and the resulting ill-condi-
tioning of the Jacobian matrix [13]. This finding is equivalent to the
earlier finding of Kumar and Gapinski just cited, as the flatness in ESS
function and the likelihood surface is due to the ill-conditioning of the
matrix of the partial derivatives of the CES function. Trouble from
that same quarter was noted again in 1980, this time by Judge et al. [5]
(p. 724). An alternative explanation for the unreliability came from the
1980 work by Thursby [12] extending the KG endeavors: either the
estimator had no expected value or, if it did, the estimator's variance
had an inherent tendency to be large.

FURTHER INSIGHT REGARDING DIRECT ESTIMATION OF


'σ' USING NLS

From the foregoing review it should be apparent that much has


already been learnt about the reliability of the NLS estimator σ̂.
However, additional insights can be gained by resurrecting the σ
estimates, σ̂_j, generated by KG and by examining them in detail over
the unrestricted range of values produced. This tactic would serve to
broaden the KG inquiry inasmuch as it would wrest new information
from the outlying σ̂_j. Moreover, it would throw additional light on the
Thursby research by exploring the properties of a σ̂ released from an
arbitrarily imposed bound of 30. 1 Since it is possible to obtain time
series of cross-section data from the same production structure, a
detailed examination of our frequency distributions can throw impor-
tant light on pooling sampling evidence on the elasticity of factor
substitution, when the same is obtained using the NLS method with
different data sets.

ESTIMATION PROCEDURE

At the heart of the KG analysis lie the two production models

    Q_i = γ e^{λi} [δK_i^{-ρ} + (1 - δ)L_i^{-ρ}]^{-β/ρ} u_i              (1)

and

    Q_i = γ e^{λi} [δK_i^{-ρ} + (1 - δ)L_i^{-ρ}]^{-β/ρ} + w_i            (2)

where, of course, σ = 1/(1 + ρ). Data on capital, K_i, and labor, L_i,
consist of annual figures for the US private domestic economy and
cover a twenty-year period. The parameters in the deterministic
portions of each specification carry the assignments γ = 1.00, λ =
0.03, δ = 0.40, σ = 0.35 and 1.35, with β = 0.80 and 1.20; hence
they give rise to four parameter combinations depending upon the
values of σ and β. The disturbances u_i and w_i obey lognormal and
normal distributions respectively. A total of 150 sets of 20 values for
u_i are drawn randomly from the lognormal along with 150 sets of 20
values for w_i from the normal, fashioning 150 samples of data on
Equation (1) and an equal number on Equation (2) for any parameter
combination. Applying NLS to these data therefore yields 150 σ̂_j from
each model for each parameter combination. The Error Sum of
Squares (ESS) is a nonlinear function of the unknown parameters.
Hence the estimation by the method of least squares does not yield
a closed-form solution as in the ordinary least squares. The NLS
method developed by Marquardt [10] is essentially a gradient or
steepest descent method for minimization of ESS. It starts with an
initial set of estimates of the parameters (guesses) and expresses the
ESS as a Taylor Series linear approximation around the initial guesses.
The second round estimates are obtained by adding the optimally
determined differences or increments to the initial guesses. The
optimal differences are those that minimise the linearised ESS using
the standard matrix algebra of OLS. The second round estimates are
used as reference points to arrive at a revised linear approximation
of the ESS. This procedure is repeated iteratively until either the
parameter estimates converge or until the ESS reaches a minimum
level as determined relative to pre-specified tolerance limits. To help
shorten the convergence process and to give the regression package
every opportunity for producing reliable σ̂_j, the true parameter values
are used as the initial estimates. 2
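As an illustration of this design, the sketch below replicates one cell of the experiment in Python: it simulates data from the multiplicative model (1) under the stated parameter values and re-estimates the parameters by NLS with a Levenberg-Marquardt routine from scipy. The capital and labor series, the lognormal noise scale, and the sample-handling details are placeholder assumptions, not the original KG data or program.

    # Sketch of one Monte Carlo cell: simulate the CES model with a multiplicative
    # lognormal disturbance (fitted in logs) and re-estimate it by nonlinear least
    # squares, starting at the true parameter values as in the text.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    gamma, lam, delta, beta = 1.00, 0.03, 0.40, 0.80
    sigma_true = 1.35
    rho = 1.0 / sigma_true - 1.0                    # sigma = 1/(1 + rho)

    t = np.arange(1, 21)                            # twenty annual observations
    K = 100.0 * 1.04 ** t                           # placeholder input series
    L = 80.0 * 1.02 ** t

    def ces_log(params, t, K, L):
        g, l, d, b, r = params
        return np.log(g) + l * t - (b / r) * np.log(d * K ** (-r) + (1 - d) * L ** (-r))

    def residuals(params, t, K, L, logQ):
        return logQ - ces_log(params, t, K, L)

    estimates = []
    for _ in range(150):                            # 150 samples per cell
        u = rng.lognormal(mean=0.0, sigma=0.3, size=20)
        logQ = ces_log([gamma, lam, delta, beta, rho], t, K, L) + np.log(u)
        try:
            fit = least_squares(residuals, x0=[gamma, lam, delta, beta, rho],
                                args=(t, K, L, logQ), method="lm")
        except Exception:
            continue                                # skip numerically failed fits
        if np.isfinite(fit.x).all():
            estimates.append(1.0 / (1.0 + fit.x[4]))   # sigma_hat = 1/(1 + rho_hat)

    print("median:", np.median(estimates), " mean:", np.mean(estimates))

Because the estimated ρ can drift toward -1, a few σ̂_j can land far out in the right tail, which is exactly the pathology examined below.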

ANALYSIS OF FREQUENCY DISTRIBUTIONS OF


ELASTICITY ESTIMATOR

Eight σ̂_j series emerge from this exercise, and their relative frequency
distributions appear in Table 1. When σ = 0.35 most σ̂_j for either
model fall below 5 while only a handful passes 100. A similar pattern
arises when σ = 1.35. For that σ, however, the distributions are less
skewed. The right tails of the distributions are thicker for σ = 1.35
than for σ = 0.35. Notice that this is consonant with the earlier finding
of a flat ESSQ surface in the σ̂ direction for σ̂ roughly greater than
one. The σ estimates, initiated at the true parameter values, are more
likely to encounter flatness when σ = 1.35 than when σ = 0.35.
Consequently they are more likely to be propelled to the tail in the
former circumstance. 3
Since the first cell always contains the bulk of the estimates, it
merits extra attention. Table 2 presents frequency distributions perti-
nent to the abbreviated range. Those curves resemble the previous
ones in the sense that their probabilities fall mainly in the beginning
partitions; specifically, more than 96% of the σ̂_j lie at or below 1.50
when σ = 0.35 and more than 73% do the same when σ = 1.35. But
the distinguishing feature of the results in Table 2 is the fact that the bulk
of the distribution lies within a 0.75 range of σ = 1.0, namely between
0.25 and 1.75. The probability mass in this range varied between 60
and 80% in the eight cases. Thus Table 2 further advises that the
estimator σ̂ has a bias toward unity. 4

ESTIMATION OF LOCATION OR CENTRAL TENDENCY PARAMETER

We have sufficient information already to suggest that the NLS


estimator of σ has a skewed distribution with a fat right-hand tail. This
is an unpalatable situation for applied economists. This suggests that
we either look for alternate estimators of the elasticity of factor
substitution with better distributional properties or develop better
methods for making inferences in such pathological situations. One
hypothetical situation we might consider is that we have several
independent data sets, all generated by a single CES production
structure. Each data set would give one NLS estimate of the elasticity
of factor substitution. How do we then pool the sample evidence on σ
obtained from different samples? This problem is equivalent to the
estimation of the location - or central tendency - parameter of a
skewed distribution with a fat right tail. 5 We have three typical candi-
dates, the mean, the mode, and the median. We present below the
properties of these three estimators.
Table 2. Relative frequency distribution for the σ estimator in the abbreviated range

                             Model (1)                           Model (2)
Partition            σ=0.35  σ=1.35  σ=0.35  σ=1.35     σ=0.35  σ=1.35  σ=0.35  σ=1.35
                     β=0.80  β=0.80  β=1.20  β=1.20     β=0.80  β=0.80  β=1.20  β=1.20

0.00 < σ̂_j < 0.25     0.329   0.075   0.252   0.035      0.345   0.013   0.283   0.000
0.25 < σ̂_j < 0.50     0.200   0.150   0.343   0.141      0.183   0.169   0.276   0.098
0.50 < σ̂_j < 0.75     0.057   0.100   0.084   0.071      0.063   0.078   0.152   0.146
0.75 < σ̂_j < 1.00     0.336   0.088   0.273   0.129      0.345   0.104   0.262   0.085
1.00 < σ̂_j < 1.25     0.050   0.213   0.014   0.329      0.014   0.325   0.028   0.341
1.25 < σ̂_j < 1.50     0.014   0.113   0.007   0.094      0.014   0.078   0.000   0.122
1.50 < σ̂_j < 1.75     0.000   0.063   0.000   0.012      0.000   0.091   0.000   0.037
1.75 < σ̂_j < 2.00     0.000   0.025   0.000   0.024      0.000   0.013   0.000   0.061
2.00 < σ̂_j < 2.25     0.007   0.038   0.007   0.012      0.007   0.000   0.000   0.024
2.25 < σ̂_j < 2.50     0.000   0.025   0.000   0.035      0.014   0.013   0.000   0.012
2.50 < σ̂_j < 2.75     0.000   0.025   0.007   0.012      0.000   0.026   0.000   0.012
2.75 < σ̂_j < 3.00     0.007   0.013   0.000   0.012      0.000   0.026   0.000   0.000
3.00 < σ̂_j < 3.25     0.000   0.000   0.007   0.012      0.000   0.000   0.000   0.000
3.25 < σ̂_j < 3.50     0.000   0.013   0.000   0.000      0.000   0.013   0.000   0.012
3.50 < σ̂_j < 3.75     0.000   0.013   0.007   0.000      0.000   0.026   0.000   0.012
3.75 < σ̂_j < 4.00     0.000   0.000   0.000   0.024      0.014   0.000   0.000   0.000
4.00 < σ̂_j < 4.25     0.000   0.013   0.000   0.012      0.000   0.013   0.000   0.024
4.25 < σ̂_j < 4.50     0.000   0.013   0.000   0.024      0.000   0.000   0.000   0.000
4.50 < σ̂_j < 4.75     0.000   0.013   0.000   0.024      0.000   0.000   0.000   0.012
4.75 < σ̂_j < 5.00     0.000   0.013   0.000   0.000      0.000   0.013   0.000   0.000
0.00 < σ̂_j < 5.00     1.000   1.000   1.000   1.000      1.000   1.000   1.000   1.000

Note: Total counts for the four parameter combinations are 140, 80, 143, and 85
respectively in model (1) and 142, 77, 145, and 82 respectively in model (2).

Table 3 presents the properties of the mean, when it is com-
puted using three different cut-off levels for the outliers (σ̂_max). If a low
value is used for the outlier norm (σ̂_max = 5), the mean is biased towards
unity - the Cobb-Douglas case. When this cut-off level is increased
this tendency of bias towards Cobb-Douglas is blurred. Even when
the cut-off level is fixed at a moderate level (σ̂_max = 10) the percent
bias is quite large and it varies between 46 and 106%.
Table 4 presents the mode and median estimators along with the
percent biases associated with them. In most cases with a moderate
value for the outlier cut-off (σ̂_max = 10) the mode estimator has a lower
percent bias than the mean estimator. For the cut-off norm used by
Thursby, the mode estimator is uniformly superior to the mean estimator.
Except for the σ = 0.35 and β = 1.2 combination with
multiplicative disturbance, in all other cases the median estimator has a
lower percentage bias than the mean and mode estimators. Hence the
median estimator gets the gold, the mode the silver, and the mean does not
even qualify. When the true value of σ is greater than unity, where the
NLS estimator gives rise to unpleasant pathologies, the median estima-
tor stands out as the best estimator with percent bias ranging between
7 and 15%. Both the mode and median estimators demonstrate a clear
bias towards unity, the Cobb-Douglas case.

WHERE DO WE STAND?

The major finding of various independent investigations is that the NLS
estimator of the elasticity of factor substitution is imprecise and that the
sampling distribution of this estimator is skewed with a peak near
unity and with a fat tail on the right. A detailed study of mean, mode,
and median as the point estimators of the location parameter of such a
sampling distribution demonstrated that the median is the best of the
three and that it can be used, particularly when the true σ is suspected
to be greater than unity.
In real life situations we do come across independent comparable
samples coming from the same production structure. We may have
estimates of σ from different studies. The problem then is how to
cumulate the evidence on σ provided by those studies to arrive at an
estimate of σ. The options open are: (i) take the average of all
estimates, (ii) take that estimate which occurs most frequently, or (iii)
take that value which separates all the estimates evenly, one half being
less and the other half being more than it. 6
Table 3. Properties of the mean as an estimator

                         Model (1)                           Model (2)
                 σ=0.35  σ=1.35  σ=0.35  σ=1.35     σ=0.35  σ=1.35  σ=0.35  σ=1.35
                 β=0.80  β=0.80  β=1.20  β=1.20     β=0.80  β=0.80  β=1.20  β=1.20

Outlier norm σ̂_max = 5
Mean              0.582   1.275   0.580   1.321      0.625   1.262   0.518   1.258
Percent Bias      66.29   -5.55   65.71   -2.15      78.57   -6.52   48.00   -6.81

Outlier norm σ̂_max = 10
Mean              0.696   1.976   0.580   1.998      0.721   2.245   0.518   2.029
Percent Bias      98.86   46.37   65.71   48.00     106.00   66.30   48.00   50.30

Outlier norm σ̂_max = 30
Mean              1.111   3.526   1.168   4.635      0.983   4.194   0.518   3.153
Percent Bias     217.43  161.18  233.71  243.33     180.86  210.67   48.00  133.56


Table 4. Properties of the mode and median estimators

                         Model (1)                           Model (2)
                 σ=0.35  σ=1.35  σ=0.35  σ=1.35     σ=0.35  σ=1.35  σ=0.35  σ=1.35
                 β=0.80  β=0.80  β=1.20  β=1.20     β=0.80  β=0.80  β=1.20  β=1.20

Mode              0.875   1.125   0.375   1.125      0.625   1.125   0.125   1.125
Percent Bias     150.00  -16.67    7.14  -16.67      78.57  -16.67  -64.29  -16.67
Median            0.471   1.245   0.420   1.200      0.488   1.219   0.439   1.159
Percent Bias      34.57   -7.78   20.00  -11.11      39.43   -9.70   25.43  -14.15

The variation between estimates of different studies can be due to
random errors, measurement errors, differences in the range of varia-
tion of the explanatory variables, etc. The practical significance of our
analysis is that if the variation between the estimates is only due to
random disturbances then the median of the values can be taken as
the cumulated evidence provided by all the estimates. But to the
extent that the variation is not entirely due to random disturbances
one must correct for the variation attributable to other factors before
using the median estimator. Special tools of meta-analysis, such as
those developed by Hunter et al. for a simple correlation coefficient
[4], are needed for this purpose.
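A minimal sketch of this pooling rule, assuming the only variation across studies is random: trim estimates beyond an outlier cut-off and report the median rather than the mean. The cut-off value and the individual estimates below are illustrative, not figures taken from the tables above.

    # Pool several NLS estimates of sigma from comparable studies by the median,
    # after discarding estimates beyond an outlier cut-off (10 is one of the
    # cut-off norms discussed in the text).  The estimate values are made up.
    import numpy as np

    sigma_hats = np.array([0.42, 0.95, 1.18, 1.31, 1.47, 2.60, 27.5])  # hypothetical
    cutoff = 10.0

    kept = sigma_hats[sigma_hats <= cutoff]
    print("mean  :", kept.mean())      # pulled up by the fat right tail
    print("median:", np.median(kept))  # preferred pooled estimate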

NOTES

1. In Thursby's article [12] (p. 297) any σ̂_j found to exceed 30 from the KG regression
   technique is treated as equaling 30.
2. Equation (1) is fitted in logs, and both equations are fitted subject to parametric
   transformations prompted by the inability of the regression program to impose
   the parametric inequalities implicit in forms (1) and (2). Those transformations,
   which include the replacement of ρ by (1 - σ)/σ, enable the σ estimates and
   their standard errors to be calculated directly by the program. The numerical
   method of estimation is that of Marquardt [10]. More particulars concerning the
   experimental design can be found in [7] (pp. 258-260) and [8] (pp. 563-564).
3. For the record, the ESSQ surface remains flat when σ̂ exceeds one regardless of
   whether σ = 0.35 or 1.35 [8] (p. 566).
4. A bias toward one is virtually a tradition in elasticity estimation. Griliches [3]
   (pp. 286-297), for instance, discussed it, while Thursby [12] (p. 299)
   demonstrated it at the start of the eighties in the context of nonlinear models.
5. For a detailed treatment of point estimation of a location parameter under a
   variety of distributional assumptions the reader may see Lehmann [9].
6. There are other variants. For example one may take the mean value of all
   estimates found to be significant. The reader may refer to Hunter et al. [4] for
   some interesting illustrations of cumulating evidence across studies.

REFERENCES

[1]  Asher, Ephraim and Kumar, T. Krishna (1973), 'Capital-labor substitution
     and technical progress in planned and market oriented economies: a compara-
     tive study', Southern Economic Journal 40: 103-109.
[2]  Bodkin, Ronald G. and Klein, Lawrence R. (1967), 'Nonlinear estimation of
     aggregate production functions', Review of Economics and Statistics 49: 28-44.
[3]  Griliches, Zvi (1967), 'Production functions in manufacturing: some prelimi-
     nary results', in Murray Brown (ed.), The Theory and Empirical Analysis of
     Production, Studies in Income and Wealth, Vol. 31, New York: National
     Bureau of Economic Research, pp. 275-322.
[4]  Hunter, John E., Schmidt, Frank L., and Jackson, Gregg B. (1982), Meta-
     Analysis: Cumulating Research Findings Across Studies, Sage Publications.
[5]  Judge, George G., Griffiths, William E., Hill, R. Carter, and Lee, Tsoung-Chao
     (1980), The Theory and Practice of Econometrics, New York: John Wiley and
     Sons.
[6]  Kumar, T. Krishna and Asher, Ephraim (1974), 'Soviet postwar economic
     growth and capital-labor substitution: comment', American Economic Review
     64: 240-242.
[7]  Kumar, T. Krishna and Gapinski, James H. (1974), 'Nonlinear estimation of
     the CES production function: sampling distributions and tests in small
     samples', Southern Economic Journal 41: 258-266.
[8]  Kumar, T. Krishna and Gapinski, James H. (1974), 'Nonlinear estimation of
     the CES production parameters: a Monte Carlo study', Review of Economics
     and Statistics 56: 563-567.
[9]  Lehmann, E. L. (1983), Theory of Point Estimation, New York: Wiley.
[10] Marquardt, Donald W. (1963), 'An algorithm for least-squares estimation of
     nonlinear parameters', SIAM Journal on Applied Mathematics 11: 431-441.
[11] Ryan, Terence M. (1973), 'C.E.S. production functions in British manufac-
     turing industry: a cross-section study', Oxford Economic Papers, New Series,
     25: 241-250.
[12] Thursby, Jerry (1980), 'Alternative CES estimation techniques', Review of
     Economics and Statistics 62: 295-299.
[13] Tsang, Herbert H. and Persky, Joseph J. (1975), 'On the empirical content of
     CES production functions', Economic Record 51: 539-548.
[14] Tsurumi, Hiroki (1970), 'Nonlinear two-stage least squares estimation of CES
     production functions applied to the Canadian manufacturing industries,
     1926-1939, 1946-1967', Review of Economics and Statistics 52: 200-207.
[15] Weitzman, Martin L. (1970), 'Soviet postwar economic growth and capital-
     labor substitution', American Economic Review 60: 676-692.
Recent Nonparametric Measures of
Productive Efficiency

JATI K. SENGUPTA
University of California, Santa Barbara, California, U.S.A.

1. INTRODUCTION

Since Farrell (1957) introduced a nonparametric method of estimating


a production frontier, there have occurred three major developments
in recent years for measuring productive efficiency in a nonparametric
way. One is the attempt by Afriat (1972) to apply a nonparametric
method for testing whether a fInite body of input-output data (or, in
some cases price quantity data) is consistent with an optimal produc-
tion (or profIt) behavior. His characterizations associate a production
function with a given input output data set subject to the limitation
that the production functions have a certain property e.g., quasi-
concavity, monotonicity and that the data points are represented as
efficent either exactly or as nearly as possible. This line of economic
consistency tests has been followed up by several authors in recent
times e.g., Hanoch and Rothschild (1972), Diewert and Parkan (1983)
and Varian (1984). A second major attempt is by Johansen (1972)
who proposed a linear programming (LP) model of deriving an
industry production frontier from input output data of individual
fIrms. By explicitly introducing a statistical distribution of the input
output coefficients (Le. the capacity distribution) and a capacity utiliza-
tion function he derived the conditions under which the aggregate
production frontier will have some functional forms such as Cobb-
Douglas. Several authors have followed up this line of work e.g., Sato
(1975), Forsund and Jansen (1985).
The third approach is by Charnes et al. (1978), who generalized the
Farrell method of efficiency measurement in terms of vector inputs and
vector outputs and showed its equivalence to Pareto efficiency. This
line of work, termed 'data envelopment analysis' (DEA), has been
very widely applied in management science and operations research

particularly to public sector organizations where the data on input and


output prices are frequently unavailable and the profit maximization
criterion inapplicable. The DEA model has provided in recent times a
very active field of research for many authors e.g., Sengupta and Sfeir
(1986) and Banker and Maindiratta (1986).
The parametric theory of production frontier has chosen to specify
a particular form of one-sided probability distribution of errors asso-
ciated with a specific form of the production function, e.g. Aigner et al.
(1977). Recently Jondrow et al. (1982) have extended this approach to
a stochastic production frontier

    y_i = g(x_i, β) + ε_i
    ε_i = v_i - u_i;   v_i ~ N(0, σ_v²),   u_i ≥ 0,                     (1.1)
    i = 1, 2, ..., N,

where the nonnegative error term u_i measures technical inefficiency in
the sense that it specifies the shortfall of output (y_i) from its maximum
possible value [g(x_i, β) + v_i]. They have discussed a method of
decomposing the estimate ε̂_i of the residual term into two separate
estimates v̂_i and û_i, in the case where u follows either a half-normal or
exponential density. Thus the technical inefficiency can be measured
for each unit or observation as in the Farrell approach and also as an
average over all the units or observations.
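For the half-normal case, a common way of carrying out this decomposition in practice is to evaluate the conditional mean E[u_i | ε_i]. The sketch below applies that formula, as usually stated in the stochastic frontier literature, for given residuals and assumed values of σ_u and σ_v; it is an illustration, not a reproduction of the authors' own computations.

    # Half-normal decomposition of frontier residuals via the conditional mean
    # E[u_i | eps_i]; sigma_u, sigma_v and the residuals are assumed values.
    import numpy as np
    from scipy.stats import norm

    def expected_inefficiency(eps, sigma_u, sigma_v):
        """E[u | eps] for eps = v - u, v ~ N(0, sigma_v^2), u half-normal."""
        sigma = np.hypot(sigma_u, sigma_v)
        lam = sigma_u / sigma_v
        sigma_star = sigma_u * sigma_v / sigma
        a = eps * lam / sigma
        return sigma_star * (norm.pdf(a) / (1.0 - norm.cdf(a)) - a)

    eps_hat = np.array([-0.30, -0.05, 0.10, 0.25])   # illustrative residuals
    print(expected_inefficiency(eps_hat, sigma_u=0.20, sigma_v=0.15))

Averaging these unit-level values gives the overall inefficiency measure mentioned in the text.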
Our objective here is two-fold: to attempt a critical review of the
three nonparametric measures above and to develop a nonparametric
characterization of the efficiency distribution along the production
frontier and its implications.

2. RECENT NONPARAMETRIC THEORY

Consider n data points D_j = (x_j, y_j), j = 1, 2, ..., n for each firm or
decision-making unit (DMU), where y is a single good produced by the
vector x of m inputs. If the inputs and outputs in the data set are
nonnegative, then Afriat (1972) showed that there must exist a function

    F(x) = max [ Σ_j y_j λ_j : Σ_j x_j λ_j ≤ x, Σ_j λ_j ≤ 1, λ_j ≥ 0 ]   (2.1)

which is non-decreasing and concave and such that y_j ≤ F(x_j). Further-


more, there exists a non-decreasing concave function f(x) satisfying
y_j = f(x_j) for all j, if and only if

    Σ_{j=1}^n x_j λ_j ≤ x_s   implies   Σ_{j=1}^n y_j λ_j ≤ y_s.        (2.2)

This condition (2.2) holds if and only if y_j = F(x_j). The function F(x)
represents a nonparametric production frontier such that it is every-
where not greater than any other non-decreasing concave function.
The observed data set D = (D_j: j = 1, 2, ..., n) is said to be
consistent with the frontier function F(x) if it satisfies the conditions
(2.2). The data consistency problem thus reduces to solving the linear
programming (LP) problem:

    F(x) = max [ Σ_j λ_j : Σ_j a_j λ_j ≤ x, λ_j ≥ 0, j = 1, 2, ..., n ]  (2.3)

where a_j = x_j/y_j is the input coefficient vector for each j and x is any
one of the n input vectors. However the observed data set D may fail
this consistency test in at least two ways. One arises when the non-
negativity conditions are dropped, e.g. the inputs and outputs are
allowed to vary over negative and positive domains, as with dummy
variables or in the framework of statistical designs. A second situation
occurs when the data set is subject to a stochastic generating mecha-
nism. In this case, a part of the data set D may satisfy the consistency
requirement with a probability p, while the remaining part may fail the
consistency test i.e. may allow no feasible solution to the LP problem
(2.3). If the above probability p is very low (e.g. less than 0.25), then
the frontier function F(x) would have a very low probability of
realization. We may thus define the following.

DEFINITION 1. The frontier production function F(x) p-rationalizes


the observed data set D in a weak sense, if for all points in D the
consistency requirement holds with a probability of at least p (0 ≤ p ≤ 1).

DEFINITION 2. The frontier production function F(x) p-rationalizes


the observed data set D in a strong sense, if for all points in D the
probability of satisfying the consistency requirement is at its maximum
possible value p.

Note that these definitions can be applied to the dual of the LP


model (2.3):
    G(x) = min [β′x : β′a_j ≥ 1, j = 1, 2, ..., n;  β ≥ 0]              (2.4)

where β is the m-element dual vector and prime denotes its transpose.
Let u_j be the marginal probability of feasibility:

    Prob(β′a_j ≥ 1, β ≥ 0) ≥ u_j,   0 < u_j ≤ 1                         (2.5)

and assume it to be positive for all j belonging to the data subset D.
We then define the programming model:

    H(x, p*) = min_β max_p [β′x : β ∈ S_p]                              (2.6)

where

    S_p = {β : p = min_{1≤j≤n} u_j and (2.5) holds},
    p* = optimal value of p,  0 < p ≤ 1.

DEFINITION 3. The frontier production function H(x, p*) p*-


rationalizes the observed data set D, if for all points in D we have the
minimax solution (β*, p*) in (2.6), where β* is the optimal value of the
vector β.
We note some consequences of these definitions. First, the con-
sistency tests, stochastic as they are, may not always maintain the LP
structure. For instance the chance-constrained model (6.9) is generally
nonlinear for random variations of the input coefficient vectors a_j, and
various types of decision rules, e.g. zero-order, first-order, second-
order, may be used as suboptimal solutions.
Second, the consistency (or efficiency) hypothesis may hold only for
a proper subset D̄ of D in the sense that for all points not belonging
to D̄, either the consistency requirement fails or holds only with a
probability less than p. This implies that one may characterize the
'statistical contamination' of the subset D̄ when it is enlarged to
include more and more points of D not belonging to D̄. The standard
regression approach estimates the model y_j = β′x_j + ε_j, where the
disturbance term ε is assumed to be symmetric around the mean value
of zero and j belongs to D. But the nonparametric production frontier
approach screens the data set D to identify a subset D̄ such that for
each k ∈ D̄ we must have β*′x_k = y_k, β* ≥ 0 in the LP problem
(2.4), i.e. if for any j we get β*′x_j > y_j, β* ≥ 0, then the unit j is not
efficient, i.e. j ∉ D̄. It is thus clear that the statistical divergence of the
regression model and the nonparametric model can be tested in terms of
the contamination of the efficiency subset D̄. Third, the probabilistic
nature of the consistency (or efficiency) hypothesis implies that the ad
hoc procedure of replacing the random variables by their mean values
and solving the LP model at the mean may not have a very high
probability of realization in many situations. For example, consider the
LP model

    min [β_1 + β_2 : aβ_1 + β_2 ≥ 7,  bβ_1 + β_2 ≥ 4,  β_1, β_2 ≥ 0]

where a, b are uniformly distributed over {1 ≤ a ≤ 4} and {1/3 ≤ b ≤ 1}.
On using the mean values of a and b in the LP model above
we get β_1* = 18/11, β_2* = 32/11. If we compute the probability of the
event that this optimal solution is feasible with respect to the original
problem, then we get

    Prob{(a, b): aβ_1* + β_2* ≥ 7, bβ_1* + β_2* ≥ 4 and β_1*, β_2* ≥ 0}
        = Prob{(a, b): a ≥ 5/2, b ≥ 2/3} = 0.25.

Thus this optimal solution (β_1*, β_2*) is infeasible with a probability as
high as 0.75. It is clear that in this case the optimal solution at the
mean is more often infeasible than not. This type of criticism applies
equally well to the LP method of efficiency estimation by Timmer
(1971), who used the mean inputs in the objective function as:

    G(x̄) = min [β′x̄ : β′a_j ≥ 1, β ≥ 0, j = 1, 2, ..., n]              (2.7)

where x̄ = (x̄_i) is the mean input vector. On the basis of these
definitions above one can state some theoretical results.
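Before turning to these results, a quick numerical check of the example above may be useful (a sketch: the 0.25 figure follows from the independence of a and b; the simulation size is arbitrary).

    # Verify the LP-at-the-mean example: solve min b1+b2 s.t. a*b1+b2 >= 7,
    # b*b1+b2 >= 4 at the mean values of (a, b), then estimate the probability
    # that this solution stays feasible when a ~ U[1,4], b ~ U[1/3,1].
    import numpy as np
    from scipy.optimize import linprog

    a_mean, b_mean = 2.5, 2.0 / 3.0
    # linprog handles <= constraints, so multiply the >= rows by -1.
    res = linprog(c=[1, 1],
                  A_ub=[[-a_mean, -1], [-b_mean, -1]],
                  b_ub=[-7, -4],
                  bounds=[(0, None), (0, None)])
    b1, b2 = res.x                                   # 18/11 and 32/11

    rng = np.random.default_rng(0)
    a = rng.uniform(1, 4, 100_000)
    b = rng.uniform(1 / 3, 1, 100_000)
    feasible = (a * b1 + b2 >= 7) & (b * b1 + b2 >= 4)
    print(b1, b2, feasible.mean())                   # feasibility rate near 0.25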

THEOREM 1. If the elements of the data set D = {D_j: j = 1, ..., n}
are all positive, then there must exist a nonempty subset D̄ of D and a
frontier production function F(x), such that F(x) p-rationalizes the
data subset D̄ in a weak sense for some positive p (0 < p ≤ 1).
Furthermore, there must exist another subset D̃ of D and a frontier
function H(x, p*) such that H(x, p*) p*-rationalizes the data subset
D̃.
Proof. Since the input vector a_j is always positive, the LP model
(2.3) is always feasible for any positive input vector x. Let D̄ be the
region of feasibility and T be the region in the data space where the
objective function of (2.1) attains its maximum. We may then con-
struct a subset of D by taking the intersection of the two regions D̄
and T. This intersection cannot be empty and, since it is a subset of D,
it must have a positive probability p. By a similar argument it can be
shown that a subset D̃ exists such that it is p*-rationalized by some
frontier function H(x, p*).

THEOREM 2. Let D_{m+1} be the set of subsets of order m + 1 (i.e. one
output and m inputs) of the data set D and D_{m+1}(p*) be an efficiency
subset of D_{m+1} that is p*-rationalized in a strong sense. Then any
enlargement of the efficiency subset D_{m+1} by adding extra points from
the set D cannot increase the probability p*.
Proof. Let p_k denote the relative frequency at which the unit k
(k ∈ D_{m+1}) turns out to be efficient in terms of the LP model (2.4),
in the sense {Σ_{i=1}^m a_ki β_i* − 1 = 0, β* ≥ 0}. For any fixed set D_{m+1}
we can arrange the n DMU (units) in a decreasing order in terms of
the maximum values of p_k achieved. Thus the order p_(1) ≥ p_(2) ≥
... ≥ p_(n) > 0 indicates a rearrangement of the n units in a
decreasing scale of efficiency frequency, i.e. DMU(1) is the most often
efficient and DMU(n) is the least often. Let p* be defined such that
p* = max u, u = {min p_(k): k ∈ D_{m+1}}, and D_{m+1}(p*) is the efficiency
subset which includes only those units which are efficient at least with
a probability p* > 0. Hence the result.
Two implications of this result are of some practical value. One is
that it develops a method of clustering the data set into a modal
efficiency subset D_{m+1}(p*), and its explanatory value can be easily
assessed by a simple linear regression model with dummy variables
(i.e. zero-one categorical variables) as regressors explaining the output
response (Sengupta, 1987a). Secondly, the divergence between the
average practice production function and the best practice one can be
directly analyzed in terms of truncation or enlargement of the effi-
ciency subset D_{m+1}(p*). Some empirical applications by Sengupta and
Sfeir (1986, 1987) suggest that the gains from such clustering methods
may be quite significant.

2.1. Industry Production Function

Using two inputs (i = 1, 2), one output (y) and n production
units (j = 1, 2, ..., n), Johansen (1972) sets up the following LP
model to determine the short-run industry production function Y =
F(V_1, V_2):

    Max Y = Σ_j y_j                                                     (2.8a)

    subject to   Σ_{j=1}^n a_ij y_j ≤ V_i,   i = 1, 2                   (2.8b)

                 0 ≤ y_j ≤ ȳ_j,   j = 1, 2, ..., n                      (2.8c)

where Y is aggregate output and V_1, V_2 are the two current inputs for
the industry as a whole, and j = 1, 2, ..., n refers to plants (or
units) with a capacity ȳ_j assumed given. The necessary first-order
conditions are:

    1 − Σ_i β_i a_ij   > 0   when  y_j = ȳ_j
                       = 0   when  y_j ∈ [0, ȳ_j]                        (2.9)
                       < 0   when  y_j = 0.

The variables β_1, β_2 are the shadow prices of the two current inputs,
the optimal values of which denote the marginal productivities of the
inputs in the industry function. On taking the most realistic situation
of the three cases in (2.9), we obtain the dual LP model for deter-
mining the industry production function

    Min C = Σ_i β_i V_i   subject to   β ∈ R(β)                         (2.10a)

where

    R(β) = {β : Σ_i β_i a_ij ≥ 1, j = 1, 2, ..., n;  β_i ≥ 0}.          (2.10b)

It is clear that the inputs can be increased from 2 to m, in which case


the LP model is very similar to the one considered in (2.4), except for
two differences. One is that the criterion of maximum industry output
is used in this formulation, implying a two-stage screening process of a
decentralized system. Thus each firm to be efficient must first be
efficient in its own technical production function and then be efficient
among all other firms in the industry as a whole. Secondly, the
statistical distribution of the input coefficients (a_ij), which is called the
'capacity distribution', and their realized values determine the efficient
level of the maximum industry output.
Let a_1, a_2 be the two input coefficients distributed over the n
production units according to a bivariate probability density function
f(a_1, a_2), and let G(a) = G(a_1, a_2) = {(a_1, a_2): β_1 a_1 + β_2 a_2 ≤ 1,
β_1 ≥ 0, β_2 ≥ 0} be the utilization region in the parameter space
describing the pattern of exploitation of capacity with the two input
coefficients a_1, a_2. Then we could define the aggregate output (Y =
Σ y_j) and the two aggregate inputs V_1, V_2 as:

    Y   = ∫∫_{G(a)} f(a_1, a_2) da_1 da_2       = g(β_1, β_2)
    V_1 = ∫∫_{G(a)} a_1 f(a_1, a_2) da_1 da_2   = h_1(β_1, β_2)         (2.11)
    V_2 = ∫∫_{G(a)} a_2 f(a_1, a_2) da_1 da_2   = h_2(β_1, β_2)

where the functions g(β_1, β_2), h_1(β_1, β_2), and h_2(β_1, β_2) represent
output and the two inputs corresponding to any given set of feasible
values of β_1 and β_2 belonging to the utilization region G(a_1, a_2).
Assuming invertibility and other regularity conditions we may solve
for β_1 and β_2 from (2.11):

    β_1 = h_1^{-1}(V_1, V_2),   β_2 = h_2^{-1}(V_1, V_2)

and substituting these values in the first equation of (2.11) we obtain
the macro production function:

    Y = g(h_1^{-1}(V_1, V_2), h_2^{-1}(V_1, V_2)) = F(V_1, V_2).        (2.12)
Two remarks may be made about this derivation. One is that there are
two types of aggregation problem as pointed out by Seierstad (1985),
one given by the capacity distribution f(a_1, a_2) within the utilization
region G(·) as above and the other by the empirically given distribu-
tion of only those micro units which are identified by some optimal
solution of LP model (2.10a). The first type of aggregation implies that
the capacities of all micro units with the same input coefficients can be
added together for computing the total capacity of the industry. The
second one constructs a core subset of efficiency units from the
optimal solutions of the LP models (2.10) when j belongs to a subset
D̄ of the entire set of data points D. Thus the second aggregation
explicitly recognizes the gap in efficiency between the two subsets D̄
and the complement of D̄ in D, but characterizes this gap in terms of
the observed probability of the set of points belonging to the subset D̄.
The second point to note is that the macro production function
F(V_1, V_2) in (2.12) need not be linear (or piecewise linear) even
though the LP models (2.8) and (2.10) generate the macro function
through suitable aggregation. This is due to the initial distribution
assumed (or empirically derived) for the input coefficients, i.e. the size
and shape of the capacity distribution f(a_1, a_2). Thus Houthakker
(1956) found that if the capacity distribution follows a generalized
Pareto distribution f(a_1, a_2) = A a_1^{α_1 - 1} a_2^{α_2 - 1}, where A, α_1, α_2 are
constants, then the macro production function (2.12) takes the Cobb-
Douglas form:

    ln F(V_1, V_2) = ln B + γ_1 ln V_1 + γ_2 ln V_2

where

    γ_1 = α_1/(1 + α_1 + α_2),   γ_2 = α_2/(1 + α_1 + α_2),
    B = a constant.
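As a numerical illustration of this aggregation result (a sketch with arbitrary values of A, α_1, α_2 and a small grid of shadow prices, not figures from the text), one can compute the aggregates in (2.11) by direct integration and verify that ln Y is linear in ln V_1 and ln V_2 with the stated coefficients.

    # Check Houthakker's result: with f(a1, a2) = A*a1**(alpha1-1)*a2**(alpha2-1)
    # on the utilization region {beta1*a1 + beta2*a2 <= 1}, the aggregates
    # (Y, V1, V2) of (2.11) satisfy ln Y = ln B + g1*ln V1 + g2*ln V2 with
    # g_i = alpha_i / (1 + alpha1 + alpha2).
    import numpy as np
    from scipy.integrate import dblquad

    A, alpha1, alpha2 = 1.0, 1.5, 2.0

    def aggregates(b1, b2):
        f = lambda a2, a1: A * a1 ** (alpha1 - 1) * a2 ** (alpha2 - 1)
        lo = lambda a1: 0.0
        hi = lambda a1: (1 - b1 * a1) / b2            # a2 runs from 0 to this
        Y, _ = dblquad(f, 0, 1 / b1, lo, hi)
        V1, _ = dblquad(lambda a2, a1: a1 * f(a2, a1), 0, 1 / b1, lo, hi)
        V2, _ = dblquad(lambda a2, a1: a2 * f(a2, a1), 0, 1 / b1, lo, hi)
        return Y, V1, V2

    rows = [aggregates(b1, b2) for b1 in (0.5, 0.8, 1.1) for b2 in (0.6, 0.9, 1.3)]
    Y, V1, V2 = np.log(np.array(rows)).T

    X = np.column_stack([np.ones_like(V1), V1, V2])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print("fitted gammas:", coef[1:])
    print("theoretical :", np.array([alpha1, alpha2]) / (1 + alpha1 + alpha2))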
Suppose we now intend to compare the sequence of LP formula-
tions in the Afriat model (2.4) and the single LP formulation in the
Johansen model (2.10). Assume that the input coefficients a_ij are given
and let D_A be the efficiency subset of the data set for which the
efficient inputs are denoted by x*(A) and the associated price vectors
by β*(A). Let V̄ = (V̄_i) be the mean vector of the aggregate inputs
V = (V_i), and let D_J be the efficiency subset of the data set in the
Johansen model for which the efficient inputs are V̄ and β*(J) the
corresponding efficient price vectors. Then we can state the following
result.

THEOREM 3. If the intersection set D̄ of the two efficiency subsets
D_A and D_J is not empty, then there exists some efficient price vector
β* ∈ D̄ such that it equals both β*(A) and β*(J). Furthermore such
an efficient price vector β* always exists if the data set D contains only
positive inputs and outputs.
Proof. Minimizing Σ_i β_i V̄_i is equivalent to minimizing Σ_i β_i V_i in the
Johansen model (2.10), and hence the optimal price vector β*(J)
would remain efficient in a certain neighborhood N_J of the mean input
vector V̄. Since the intersection set D̄ of the two efficiency subsets D_A
and D_J is nonempty, there must exist an input vector x*(A) belonging
to the neighborhood N_J of the mean vector V̄. Hence it follows that
β* = β*(A) = β*(J) for some β* ∈ D̄. In case of positive inputs and
outputs the intersection set D̄ cannot be empty, hence the result.
Remark 1. It is possible to construct from the sequence of efficient
solutions of the Afriat model (2.4) an equivalent aggregate model by
adding the inputs of the efficient units only.
Remark 2. If the intersection set D̄ and the associated probability
Prob(β* ∈ D̄) are not small, then they can be used to characterize
the robustness of the efficiency measure β* generated by the mean LP
model: min Σ_i β_i V̄_i subject to (2.10b).
Next we consider the problem of nonlinear efficiency measures in
the Johansen model but not through the capacity distribution and the
associated aggregation process. We assume the input coefficients to be
given but the aggregate input vector varying due to interaction effects
from pairs of inputs like i and k. Now consider the efficiency subset
of the Afriat model (2.4) and assume that its inputs x*(A) are related
to the mean vector v̄ = (v̄_i), v̄_i = V_i/n, of the Johansen model (2.10)
as

    x*(A) = v̄ + ε                                                       (2.13)

where ε is a random vector with a given probability distribution. If the
distribution of ε is such that it has mean zero and a finite variance-
covariance matrix V_ε, then one can easily characterize how closely the
input vector points x*(A) are scattered around the mean v̄. This is
done through the multivariate distance function D² = D²(x*(A); v̄, V_ε)
defined as follows:

    D² = (x*(A) − v̄)′ V_ε^{-1} (x*(A) − v̄).                            (2.14)

Thus, if all inputs x* in the efficiency subset denoted by x*(A) are
very close to the mean vector v̄ of the Johansen model, then the value
of D² will be close to zero, i.e. the higher the value of D², the farther
the points x* are scattered away from the mean vector v̄ of the
Johansen model. If we assume normality then the distance function
(2.14) above can be given a more precise probabilistic interpretation
as follows:

THEOREM 4. Let the input vectors x* = x*(A) be normally dis-
tributed with mean v̄ and variance-covariance matrix V_ε defined in
(2.13), and let α be the probability Prob(Σ_i β_i x_i* ≤ g) for some fixed
level of cost g = g_α. If there exists an α in the interval 0.50 ≤ α ≤
1.0, then there must exist an optimal vector β* = β*_α which solves the
following convex program

    Min_β  g = v̄′β + q √(β′V_ε β)
    subject to                                                          (2.15)
    Σ_i β_i a_ij ≥ 1,   β_i ≥ 0,   q = F^{-1}(α).

For α = 0.50 this optimal vector β* = β*_{0.5} reduces to the linear
efficiency measure.
Proof. Since

    Prob(β′x* ≤ g) = Prob( (β′x* − β′v̄)/√(β′V_ε β) ≤ (g − β′v̄)/√(β′V_ε β) ) = α,

we obtain

    g = v̄′β + q √(β′V_ε β)

where q = F^{-1}(α) ≥ 0 if α ≥ 0.50. Clearly g is a convex function of
β if α ≥ 0.50. For α = 0.50, q is zero and g becomes the linear
function g = v̄′β of the Johansen model (2.10).
Remark 1. This result is useful in showing that the nonlinear
efficiency measure β* and its associated probability α may be more
representative (more diversified) than the linear measures.
Remark 2. This result may be viewed as a stochastic generalization
of the aggregation rule defined in Theorem 3. Thus if x*(A) converges
to the mean vector v̄ in (2.13) in the mean square sense, then the
linear efficiency vector β*(A) of the Afriat model tends asymptotically
to β*(J) of the Johansen model.
Remark 3. The nonlinear part of the objective function of (2.15)
may be interpreted as the cost of divergence from the mean input level
v̄ of the Johansen model.
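A small sketch of the convex program (2.15), assuming a two-input example with made-up values of v̄, V_ε and the coefficient matrix (a_ij); q = F^{-1}(α) is taken from the standard normal distribution.

    # Solve Min v'b + q*sqrt(b'Vb) subject to a_j'b >= 1 for all j and b >= 0,
    # the program (2.15), for illustrative data.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    v_bar = np.array([3.0, 2.0])                 # mean input vector (assumed)
    V_eps = np.array([[0.20, 0.05],
                      [0.05, 0.10]])             # covariance of x*(A) (assumed)
    A = np.array([[0.5, 0.8, 0.3],               # a_ij, one column per unit j
                  [0.9, 0.4, 0.7]])
    alpha = 0.90
    q = norm.ppf(alpha)                          # q = F^{-1}(alpha)

    def objective(b):
        return v_bar @ b + q * np.sqrt(b @ V_eps @ b)

    constraints = [{"type": "ineq", "fun": lambda b, j=j: A[:, j] @ b - 1.0}
                   for j in range(A.shape[1])]
    res = minimize(objective, x0=np.array([1.0, 1.0]), method="SLSQP",
                   bounds=[(0, None), (0, None)], constraints=constraints)
    print(res.x, objective(res.x))               # efficiency vector beta* and cost g

Setting alpha = 0.50 makes q = 0 and reduces the objective to the linear Johansen criterion, as stated in the theorem.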

2.2. Data Envelopment Analysis

The approach of data envelopment analysis (DEA) sets up a sequence


of n LP models, one for each reference unit or DMU and tests if it is
technically efficient or not. Let k be the reference unit in the following
model:
    Min_β  g_k = Σ_{i=1}^m x_ik β_i

    subject to   Σ_{i=1}^m β_i x_ij ≥ y_j,   j = 1, ..., n              (2.16)

                 β_i ≥ 0,   i = 1, 2, ..., m.

For a fixed k, where k belongs to the index set I_n = {1, 2, ..., n}, let
β*(k) be the optimal solution of the above LP model (2.16). Then the
unit k is efficient in the DEA model if it holds that Σ_i x_ik β_i*(k) = y_k
and s_k = ŷ_k − y_k = 0, where s_k is the slack variable representing the
excess of potential output ŷ_k = Σ_i x_ik β_i*(k) over actual output y_k. By
varying k over the set I_n the DEA model generates the whole
efficiency surface in piecewise linear form, which may be charac-
terized by the set B* = {β*(k), k ∈ I_n}.
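A minimal sketch of this sequence of LPs with scipy's linprog, on a made-up single-output, two-input data set: each reference unit k gets its own objective, the constraints are common to all runs, and a unit is flagged efficient when its slack is numerically zero.

    # DEA model (2.16): for each reference unit k, minimize sum_i x_ik*beta_i
    # subject to sum_i beta_i*x_ij >= y_j for all j and beta >= 0.
    # The data are illustrative only.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0, 5.0],      # input 1 for units j = 1..4
                  [3.0, 2.0, 5.0, 4.0]])     # input 2
    y = np.array([1.0, 1.0, 1.5, 1.3])       # single output

    m, n = X.shape
    A_ub = -X.T                              # -(sum_i beta_i x_ij) <= -y_j
    b_ub = -y

    for k in range(n):
        res = linprog(c=X[:, k], A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * m)
        slack = X[:, k] @ res.x - y[k]       # potential minus actual output
        print(f"unit {k + 1}: beta* = {np.round(res.x, 3)}, "
              f"efficient = {bool(np.isclose(slack, 0.0))}")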
Two features of this DEA model (2.16) when viewed in the light of the
Afriat and Johansen models must be noted here. First, the aggregate
industry model of Johansen-type form is easily obtained here by
replacing the objective function g_k by its aggregate g = Σ_{k=1}^n g_k, and
this is equivalent to minimizing Σ_i x̄_i β_i, where x̄_i = Σ_{j=1}^n x_ij/n is the
mean level of input i. Let β̄* be the optimal solution in this case. To
what extent this can adequately represent the set B* associated with
the efficiency surface depends on its robustness. For instance, if the
optimal basis associated with β̄* remains invariant for a subset B̄* of
the set B*, where B̄* = {β*(k), k ∈ I_s} and I_s = {1, 2, ..., s}, s ≤ n,
then it is robust in some sense if the proportion p = s/n is large.
Clearly p can range from zero to one. For large n the limiting value
p̄ = lim_{n→∞}(s/n) of this sample proportion p may be used as a
measure of the asymptotic stability of the efficiency vector β̄*. We


note however that this is not the only measure of robustness. An
alternative approach is to use a minimax or Chebyshev approach as
follows. Consider a traditional production function as

    y_j = g(x_j, w_j),   j ∈ I_n                                         (2.17)

where the m-element vector x_j represents the inputs controlled by
each unit j and w_j are the nondiscretionary inputs serving as external
conditions affecting output, e.g., they are like human capital, mana-
gerial ability or a disembodied form of technical change. One simple
way to model the production function (2.17) is to write it as

    y_j = h(w_j) · y_j*                                                  (2.18)

where y_j* = f(x_1j, x_2j, ..., x_mj) = Σ_{i=1}^m x_ij β_i for example and
0 < h(w_j) ≤ 1 for all j. In this case the term h(w_j) acts like Hicks-neutral
technical change, i.e. y_j = y_j* if h(w_j) = 1 for any efficient subset to
which j belongs and y_j < y_j* for h(w_j) < 1 when j does not belong
to the efficient production surface. The efficiency gap between y_j and
y_j* is expressed above as a ratio, but if we write it as a difference we
obtain

    y_j = y_j* − β_0(w_j),   β_0(w_j) ≥ 0,   j ∈ I_n                     (2.19)

where the nonnegative variable β_0 = β_0(w_j) now represents the
efficiency gap between the two sets, the efficient set (y_j*, x_j) and the
rest which are not efficient. An optimal estimate β_0* of β_0(w_j) may be
obtained from the linearized version of the minimax or Chebyshev
model as follows:

    Min β_0   s.t.   0 ≤ Σ_i β_i x_ij − y_j ≤ β_0,
                     β_i ≥ 0,   β_0 ≥ 0.                                 (2.20)

It is clear that this efficiency gap β_0* would be the lowest when the
sample includes only those units which are most often efficient with a
high frequency.
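A sketch of the linearized Chebyshev program (2.20), reusing the illustrative data from the DEA sketch above; the decision vector is (β_0, β_1, ..., β_m).

    # Chebyshev model (2.20): minimize beta_0 subject to
    # 0 <= sum_i beta_i*x_ij - y_j <= beta_0 for all j, beta >= 0, beta_0 >= 0.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0, 5.0],
                  [3.0, 2.0, 5.0, 4.0]])
    y = np.array([1.0, 1.0, 1.5, 1.3])
    m, n = X.shape

    c = np.zeros(m + 1)                       # variables z = (beta_0, beta_1..m)
    c[0] = 1.0                                # minimize beta_0
    upper = np.hstack([-np.ones((n, 1)), X.T])   # x_j'beta - beta_0 <= y_j
    lower = np.hstack([np.zeros((n, 1)), -X.T])  # -x_j'beta <= -y_j
    A_ub = np.vstack([upper, lower])
    b_ub = np.concatenate([y, -y])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + 1))
    beta0, beta = res.x[0], res.x[1:]
    print("efficiency gap beta_0* =", round(beta0, 4), " beta* =", np.round(beta, 3))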
A second feature of the DEA model (2.16) is that the constraint set
remains fixed, while the objective function varies over the n units. But
the production function implicit in the constraints can be written as

    y_j = β′x_j − ε_j;   ε_j ≥ 0,   j ∈ I_n

where the component ε_j is the random disturbance term. In many
practical situations the input vectors x_j are under the control of the
DMUs and hence nonstochastic, but the outputs y_j are partly stochas-
tic. Since the major effect of the constraints ε_j ≥ 0 is to alter the
statistical distribution of the constrained output from that of the
unconstrained response, it is less meaningful to select objective func-
tions such as the mean cost Σ_i x̄_i β_i or the cost for a particular
reference unit. To see the implications consider the output response
model

    y = β′x − ε,   β′x ≥ 0                                               (2.21)

where the errors ε = (ε_j) are assumed to be continuously distributed
such that they are statistically independent with a constant mean and
constant variance, and μ(x, β) = β′x is assumed to be nonnegative
with x being nonrandom. The constrained or limited dependent
variable ỹ is then defined by ỹ = max(0, y). It is clear
that the mean μ̃ and variance σ̃² of ỹ are related to the mean μ and
variance σ² of the unconstrained variable y by

    μ̃ ≥ μ,   σ̃² ≤ σ²

and hence the constrained model is more stable in terms of the
coefficient of variation, i.e. σ̃/μ̃ ≤ σ/μ. Clearly the output response ỹ
in the constrained model has a mixed probability distribution with a
point mass of probability Prob(y ≤ 0) at zero. For a given data
set D = {(x, y): (x_j, y_j), j ∈ I_n} suppose there exists a subset
D̄ = {(x, y): (x_j, y_j), j ∈ I_s, s ≤ n} for which the probability
Prob(y ≥ 0) = u_0 holds for some u_0, 0 < u_0 ≤ 1; then the following
results can be stated:

THEOREM 5. There exist a maximum probability level u* and the
associated efficiency vector β* which solve the following program for
some j ∈ D̄:

Proof. Since u_0 exists and y has a continuous distribution, Prob(y ≥
0) = u_0 implies

    Prob(ε ≤ β′x/σ) = F(β′x/σ) = u_0.

Since the cumulative distribution F(·) is continuous and increasing,
we get β′x/σ = F^{-1}(u_0). Now the domain u_0 ≤ u ≤ 1 of u is
compact and the inverse function F^{-1}(u) is continuous in this domain.
Hence there exists a maximum probability level u* and the associated
efficiency vector β*.
Remark 1. If the form of the distribution function F(·) is known,
e.g. normal, then the value q = F^{-1}(u) can be numerically calculated
for all u, 0 ≤ u ≤ 1. Thus for the normal distribution q is positive if
u > 0.50.
Remark 2. If for the entire data set D the above result holds for
u* = 1, then the data set D may be said to be insensitive to
stochastic variations.

THEOREM 6. For every positive level c there exists a unique optimal
solution β* = β*(c) which minimizes the variance σ̃² of the con-
strained output ỹ under the restrictions R = {β | μ = β′x̄ ≥ c, Σ_i β_i =
1, β ≥ 0}, if the error ε is normally distributed with mean zero and a
constant variance in (2.21). Furthermore there exists some subset D̄ for
which the LP model: Min μ = β′x̄, β ∈ R̄ = {β | Σ_i β_i x_ij ≥ y_j,
j ∈ D̄, Σ β_i = 1, β_i ≥ 0, all i} has an optimal solution β* such that
μ* = β*′x̄ would also minimize the variance σ̃² of the constrained
output at μ* = c.
Proof. On setting σ = 1.0 without any loss of generality, the
mean μ̃ and variance σ̃² of the constrained output ỹ can be written as:

    μ̃ = f(μ) + μF(μ)
    σ̃² = μ̃(μ − μ̃) + F(μ)

where f(·) is the probability density of a unit normal variate and
F(·) is the corresponding cumulative distribution. By direct differen-
tiation it can be easily verified that the second derivative of σ̃² with
respect to μ is negative for any positive μ; also σ̃² is an increasing
function of μ such that σ̃² tends to σ² as μ → ∞. Thus the unique
optimal solution β* = β*(c) exists for every fixed level of positive c.
Moreover the subset D̄ is not empty and hence the LP model
{Min μ = β′x̄, β ∈ R̄} has an optimal solution β* for some j ∈ D̄.
Let c* = β*′x̄. Then β* is feasible in R and since σ̃² is a strictly
concave function of c, it attains its minimum value at c = c*.
Remark 1. It is possible to derive a similar result when the errors ε
follow any arbitrary continuous distribution, except that for the strict
concavity of σ̃² we may need some minor modifications.
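A quick simulation check of the censored-output moments used in the proof (a sketch assuming σ = 1 and a few arbitrary values of μ): the closed-form expressions μ̃ = f(μ) + μF(μ) and σ̃² = μ̃(μ − μ̃) + F(μ) should match the sample mean and variance of max(0, y).

    # Compare the formulas for the mean and variance of y_tilde = max(0, y),
    # y ~ N(mu, 1), with Monte Carlo estimates.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    for mu in (0.5, 1.0, 2.0):
        mu_t = norm.pdf(mu) + mu * norm.cdf(mu)        # formula for the mean
        var_t = mu_t * (mu - mu_t) + norm.cdf(mu)      # formula for the variance
        y = np.maximum(0.0, rng.normal(mu, 1.0, 1_000_000))
        print(mu, round(mu_t, 4), round(y.mean(), 4),
              round(var_t, 4), round(y.var(), 4))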
Remark 2. Since the mean constrained output μ̃ is a strictly convex
function of μ for every positive level of μ, the optimal solution β* of
the LP model can be used to compute the associated value of μ̃ =
μ̃(μ*), μ* = β*′x̄. By varying the sample sizes in the subset D̄, one
can thus observe how μ̃ and σ̃² change as μ and hence β* change.
This would be of great value in sensitivity analysis. For example one
may consider the mean LP model

    Min_β Σ_i β_i x̄_i   s.t.   Σ_i β_i x_ij ≥ y_j,   j ∈ D̄              (2.22)
                               β ≥ 0

for a subset D̄ of the data set and examine how robust the optimal
efficiency vector β* in (2.22) is. As an empirical application we refer to
the educational production function studies for California reported
elsewhere (Sengupta, 1987b) for 25 public school districts with four
inputs, where two tests were made. In one the sample size n was
varied as n = 9, 12, 20, 25 and the mean LP model (2.22) was run.
The optimal values β*′ = (0.247, 0.523, 0.626, 0.179) were the same
in each case. In the second case the sample set of n = 25 units was
first arranged in a decreasing order of efficiency, as measured by the
optimal solution of the LP model (2.22) with n = 25, and then the
sample set was divided into six groups with 4 units in each group except
the last group, which contained 5 units. Thus the first group contained the
100% efficient units in the sense Σ_{i=1}^4 β_i* x_ij = y_j, the second group
contained the next four efficient (but less than 100% efficient) units
and so on. For each of the first three groups the optimal efficiency
vector β* turned out to be the same as before, i.e. β*′ = (0.247,
0.523, 0.626, 0.179). This shows the relative insensitivity of the
optimal efficiency vector β* to such data variations. Since the
objective function in (2.22) may be interpreted as the criterion of
minimizing the sum of absolute deviations of errors, i.e. Min Σ_j |ε_j| =
Min[Σ_j Σ_i β_i x_ij − y_j] = n Min Σ_i β_i x̄_i (apart from the constant
Σ_j y_j) since Σ_i β_i x_ij ≥ y_j, the estimate β*′x̄ is in the form of a
generalized median and hence less sensitive to outlying observations.

3. NONPARAMETRIC ANALYSIS OF EFFICIENCY

Even in the nonparametric theory of the production frontier, the use
of the statistical distribution of efficiency has invariably followed a
parametric form of a specific distribution; e.g. the capacity distribution
in the Johansen model is assumed to have specific forms such as
uniform or generalized Pareto. Also the estimate of the efficiency
distribution is not based on any normative hypothesis such as a
production frontier. We propose here a nonparametric method of
estimating the distribution of the efficiency specified by the DEA
model (2.16), which can also be utilized in the Afriat and Johansen
models.
Let us refer to the DEA model (2.16) and its n LP solutions
denoted by the set B* = {β*(k), k ∈ I_n}. For any reference unit k we
denote the efficiency ratio e_j(k) = y_j/y_j*(k), where y_j*(k) = Σ_i β_i*(k) x_ij
is the optimal output for the selection k in the objective function. The
samples {e(1), e(2), ..., e(n)} of the efficiency ratio may be viewed as
random drawings from a parent population, which we assume for
simplicity to have an unknown absolutely continuous probability
density with a positive domain [a, b]. If the unknown density, say g(e),
has infinite range, we only consider estimating the truncated density

    f(e) = g(e) / ∫_a^b g(t) dt,   a ≤ e ≤ b,

due to practical considerations. We partition the closed interval [a, b]
by a = t_0 < t_1 < ... < t_k = b and consider histogram estimators
f_H(t) of the form:

    f_H(t) = { c_r       for t_r ≤ t < t_{r+1},  r = 0, 1, ..., k−1
             { c_{k−1}   for t = t_k = b                                 (3.1)
             { 0         otherwise

where f_H(t) ≥ 0 and ∫_a^b f_H(t) dt = 1. To estimate the population
histogram of the form (3.1), we consider the entire sample space
{e_j(k), j ∈ I_n, k ∈ I_n} and count the number of observations falling
in the r-th interval. Let n_r be this number; then the population
parameters c_r of (3.1) can be estimated by

    ĉ_r = n_r / (n(t_{r+1} − t_r))   for r = 0, 1, ..., k − 1
186

and hence
~'fort, ~ t < t'+1>r=O,1, ... ,k-1
lH(t) =

C,-l for tk = b
otherwise.
(3.1)

The intuitive appeal of the sample estimate f̂_H(t) of the population
density f_H(t) is clear. The number of observations falling into each of
the intervals is a multinomial variate, and hence the sample proportion
n_r/n estimates ∫_{t_r}^{t_{r+1}} f_H(t) dt, which for small Δt = t_{r+1} − t_r
approximately equals f_H(t)Δt; hence n_r/(nΔt) estimates the population
density f_H(t). Rosenblatt (1956) has proved that the sample estimator
f̂_H(t) is a consistent estimator of the population density f_H(t) = f_H(e)
under three regularity conditions: f(t) has continuous derivatives up to
order three except at the endpoints of [a, b], it is bounded on [a, b], and
the spacing number h_n defined by 2h_n = t_{r+1,n} − t_{r,n} is such that, as
n → ∞, h_n → 0 and nh_n → ∞. For most applied situations these
regularity conditions would hold, and hence the sample histogram
estimate f̂_H of the population density f(t), in its histogram form f_H(t),
can be easily applied. Also, by using the multinomial distribution the
likelihood function L = L(n_0, n_1, ..., n_{k−1} | c_0, c_1, ..., c_{k−1}) can be
written as
    L = ∏_{r=0}^{k−1} (c_r)^{n_r}                                      (3.3)

where c_r has to be positive for each r for a nondegenerate likelihood
function. It is clear that by maximizing the log-likelihood function
ln L = Σ_{r=0}^{k−1} n_r ln c_r subject to Σ_{r=0}^{k−1} c_r(t_{r+1} − t_r) = 1 we obtain the
sample estimate f̂_H(t) of the population density function f_H(t), as given in
(3.2). Thus the estimate f̂_H(t) of the efficiency distribution is both
nonparametric and maximum likelihood, and the latter has the con-
sistency property under suitable large-sample conditions.
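
The whole construction can be summarized in a short computational sketch (Python; not the authors' code). It assumes a DEA-type LP of the form min Σ_i x_ik β_i subject to Σ_i β_i x_ij ≥ y_j, β ≥ 0, as suggested by the objective (3.5) and the constraints quoted in this section (the exact specification of (2.16) appears earlier in the chapter); the pooled ratios e_j(k) are binned and the histogram heights are normalized by the total number of pooled observations so that the estimate integrates to one. Data and names are illustrative.

import numpy as np
from scipy.optimize import linprog

def dea_beta(X, y, k):
    # LP for reference unit k (assumed form): min x_k' beta  s.t.  sum_i beta_i x_ij >= y_j, beta >= 0
    m, n = X.shape
    res = linprog(c=X[:, k], A_ub=-X.T, b_ub=-y, bounds=[(0, None)] * m)
    return res.x

def efficiency_ratios(X, y):
    # Pool e_j(k) = y_j / y_hat_j(k) over all reference units k and all units j
    ratios = []
    for k in range(X.shape[1]):
        y_hat = X.T @ dea_beta(X, y, k)          # frontier outputs for selection k
        ratios.extend(y / y_hat)
    return np.array(ratios)

def histogram_mle(e, k_bins=10):
    # Bin edges t_r and heights c_hat_r = n_r / (N * (t_{r+1} - t_r)); N is the pooled count
    n_r, edges = np.histogram(e, bins=k_bins)
    return edges, n_r / (n_r.sum() * np.diff(edges))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(1.0, 5.0, size=(4, 25))      # 4 inputs, 25 units (illustrative data)
    y = rng.uniform(1.0, 3.0, size=25)
    edges, c_hat = histogram_mle(efficiency_ratios(X, y))
    print(np.round(edges, 3), np.round(c_hat, 3))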
Two important uses of this nonparametric characterization of the
efficiency distribution may now be indicated. The first shows that a
modal efficiency subset can be defined by using the efficiency density
f̂_H(t) to cluster the observed units into groups on a scale of efficiency
frequency. The difference between the production frontier (i.e. the best-
practice production function) and the average-practice production
function can then be explained in terms of contamination of the modal
efficiency subset, introduced by including units which are either
inefficient most frequently or efficient rarely. The second type of use
looks at the divergence between two distributions, one given by
f̂_H(t) = f̂_H(e) as above and the other associated with the criterion of
minimum absolute deviations, i.e. min Σ_j |ε_j| = min Σ_j |Σ_i β_i x_ij −
y_j|. If the latter density is denoted by f̃_H(e), then one applied measure
of divergence of the two distributions is given by
    d_12 = Σ_{r=0}^{k−1} (ĉ_r^{1/2} − c̃_r^{1/2})²                      (3.4)

where c̃_r is the sample estimate corresponding to f̃_H(t). If the two
densities are close (not close) to each other, d_12 tends to be small
(large). Thus the statistic d_12 can be used as a measure of affinity
between the two distributions.
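
A small companion sketch (Python, illustrative) computes such a divergence for two samples of efficiency ratios, assuming shared bin edges and the square-root form in which (3.4) has been reconstructed here; if a different form of (3.4) is intended, only the last line changes.

import numpy as np

def histogram_density(e, edges):
    # c_r = n_r / (N * (t_{r+1} - t_r)), as in the histogram estimate above
    n_r, _ = np.histogram(e, bins=edges)
    return n_r / (n_r.sum() * np.diff(edges))

def divergence_d12(e_dea, e_mad, k_bins=10):
    # Divergence between the DEA-based and the MAD-based efficiency densities
    edges = np.linspace(min(e_dea.min(), e_mad.min()),
                        max(e_dea.max(), e_mad.max()), k_bins + 1)
    c_hat, c_tilde = histogram_density(e_dea, edges), histogram_density(e_mad, edges)
    # Square-root (Matusita/Hellinger-type) form assumed for (3.4)
    return float(np.sum((np.sqrt(c_hat) - np.sqrt(c_tilde)) ** 2))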
Consider the objective function of the DEA model (2.16) and write
it as

    min g_k = Σ_{i=1}^{m} (x̄_i + u_ik) β_i                             (3.5)

where x_ik = x̄_i + u_ik is viewed as the sum of the mean input level x̄_i
and the deviation or error component u_ik. Let X*(k) be the optimal
nondegenerate basis of order m and β*(k) be the associated efficiency
vector for k ≤ n, and let (X̄*, β̄*) be the corresponding set when u_ik is
zero for each i and k. Now we rank the set {X*(k): k ≤ n} of
optimal bases in terms of their frequency of occurrence. Let X*_(0) be
the modal optimal basis, whose efficiency vector β*_(0) is observed to
have the highest frequency of occurrence. Clearly, if the scatter (x_ik)
of inputs is closely centered around the mean level x̄_i, then the set
{X*(k): k ≤ n} would be very close to the optimal basis X*_(0) with a
probability of occurrence close to one. For the next highest frequency
of occurrence the optimal set may be denoted by (X*_(1), β*_(1)). Thus
if p_0, p_1, p_2, ... are the consecutive relative frequencies of occurrence
of efficiency with p_0 ≥ p_1 ≥ p_2 ≥ ..., we have the optimal bases X*(k)
ordered as X*_(0), X*_(1), X*_(2), ... with their corresponding efficiency
vectors β*_(0), β*_(1), β*_(2), ..., and so on. To evaluate the role of this
modal vector β*_(0) in efficiency comparisons we may use a dummy
variable method of stratification of the entire data set before we
perform an ordinary regression; e.g., we regress the observed output (y)
on m dummy variables, one for each input, as follows:


    y_j = a_0 + Σ_{i=1}^{m} a_i D_ij + error,    j = 1, 2, ..., n

where

    D_ij = { 1, if j belongs to X*_(0)                                 (3.6)
           { 0, otherwise.

The role of 'contamination' can now be introduced by extending the
coverage of j in D_ij to include X*_(1), X*_(2), and so on. In other words,
we could observe the effect of the less frequently efficient units in our
sample, i.e. the optimal production function would tend towards an
average production function through such contamination. Some
empirical calculations reported elsewhere (Sengupta, 1987b) show the
remarkable power of this stratification of the data set in terms of
improving the R² value of the regression functions.
There is a second way of looking at the dummy variable regres-
sions. Since y(k) = X*(k)β*(k) holds for the optimal basis X*(k), we
can denote by ȳ_k the conditional expectation E(y | X) when X = X*_(k).
Thus

    E(y | X) = { ȳ_0 if X = X*_(0)
               { ȳ_1 if X = X*_(1)                                     (3.7)
               { ...

where k = 1, 2, ..., K ≤ n runs over the distinct optimal bases. Define the
following dummy variable regressors

    D_k = { 1 if X = X*_(k)                                            (3.8)
          { 0 otherwise;

then the conditional regression model (3.7) can be rewritten in the linear
form

    y = Σ_k ȳ_k D_k + error.                                           (3.9)
Since this is in the form of a regression model, the importance of the
modal efficiency set X*_(0) may be directly tested from the explanatory
power (i.e. the R² value) of the regression equation

    y = ȳ_0 D_0 + constant + error.

Thus if R² is low (high), the modal efficiency set is less (more)
important in the estimation of the production frontier.
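
A minimal sketch of this R² test is given below (Python; not the authors' code). Because a generic LP solver does not report the optimal basis directly, the modal efficiency set X*_(0) is proxied here by the most frequently occurring set of binding constraints across the n reference-unit LPs; the LP form, the tolerance and the data handling are illustrative assumptions.

import numpy as np
from collections import Counter
from scipy.optimize import linprog

def binding_set(X, y, k, tol=1e-6):
    # Units j whose constraint sum_i beta_i x_ij >= y_j binds at the optimum for reference unit k
    m, n = X.shape
    res = linprog(c=X[:, k], A_ub=-X.T, b_ub=-y, bounds=[(0, None)] * m)
    return frozenset(np.flatnonzero(X.T @ res.x - y < tol))

def modal_set(X, y):
    # X*_(0): the basis (here, the binding set) with the highest frequency of occurrence
    counts = Counter(binding_set(X, y, k) for k in range(X.shape[1]))
    return counts.most_common(1)[0][0]

def r2_modal_dummy(X, y):
    # Regress y on a constant and the 0-1 dummy marking membership in the modal set
    D0 = np.isin(np.arange(len(y)), list(modal_set(X, y))).astype(float)
    Z = np.column_stack([np.ones_like(y), D0])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

A high R² from this single-dummy regression indicates that the modal efficiency set carries most of the explanatory power, in line with the test described above.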
Now we consider the divergence of the two density functions f̂_H(e)
and f̃_H(e), measured for example by (3.4). Consider a data set D for
which the density function f̃_H(e) corresponds to the case of minimum
absolute deviations (MAD):

    f̃_H(e) = f̃_H(t) = { c̃_r       for t_r ≤ t < t_{r+1},  r = 0, 1, ..., k−1
                       { c̃_{k−1}   for t = t_k = b
                       { 0         otherwise.
Let g*_min and g*_max be defined as g*_min = min_k g*_k and g*_max = max_k g*_k
in (3.5), where

    g*_min ≤ g* ≤ g*_max

and g* is the minimum objective function value associated with the MAD
solution. If the inputs X_k = {x_ik} are normally and independently
distributed, then the optimal values g*_k would follow a normal distribu-
tion, e.g. with mean μ and variance σ². As the sample size n increases,
the MAD optimal solution g* would tend to converge to the popula-
tion mean μ with probability one. Thus there must exist some N such
that for n ≥ N the sequence g*_k converges stochastically to μ, i.e.

    lim_{n→∞} Prob{|g*_k − μ| > ε} = 0.                                (3.9)

Hence for any subset D̄ for which (3.9) holds, we would have
robustness of the type associated with the MAD solution. The close-
ness of g*_k and g* can also be measured by the rate at which the
divergence measure d_12 defined in (3.4) tends to zero in the limit.

4. GENERAL IMPLICATIONS

Two general implications of the efficiency distribution above may be
briefly discussed. First of all, one may identify the interval on which
the density functions f̂_H(t) and f̃_H(t) attain their maximum values, i.e.
their mode, and then compare the closeness of these two densities in
terms of this location measure. If the underlying distributions are
normal (or tend to normality by the conditions of the central limit
theorem), then this amounts to a comparison of their means, and
hence the usual statistical tests for the difference of means can be
performed. As an empirical example consider the educational produc-
tion function with four inputs reported by Sengupta (1987b), where
only the nondegenerate optimal LP solutions (i.e., 11 out of 25) are taken:

LPs (unit k)      β*_1     β*_2     β*_3     β*_4     Remark

 1                0.359    0.146    0.350    0.079
 2                0.157    0.790    0.898    0.262
 5 (g*)           0.247    0.523    0.626    0.179
 7                0.359    0.146    0.350    0.079
 8                0.157    0.790    0.898    0.262
12                0.206    4.035    0.700    1.214    outlier
13                0.117    0.906    1.022    0.299
16                0.247    0.523    0.626    0.179
18                0.359    0.146    0.350    0.079
23                0.206    4.035    0.700    1.214    outlier
25                0.247    0.523    0.626    0.179

Mean (restricted) 0.250    0.499    0.638    0.177

Clearly, if we exclude the two outliers (i.e. units 12 and 23), the mean
of the remaining optimal solutions is very close to the MAD solution g*,
which is identical with the LP solution for unit 5.
A second implication of the conditional regression model (3.7) is
that one can apply the theory of optimal statistical design in this
framework. For instance, write the optimal basis equations as

    X*(k)β*(k) = y(k)

where the optimal basis B*(k) is denoted by the square matrix X*(k) of order m.
Assuming nondegeneracy of the optimal basis we obtain β*(k) =
X*^{-1}(k)y(k), where the vector β*(k) has nonnegative elements. We now
assume that the output vector y(k) is random as follows:

    y(k) = μ(k) + u(k)

where u(k) is the error vector, assumed to be normally distributed
with zero mean and a variance-covariance matrix V fixed for all k =
1, 2, ..., n. Thus we obtain the pseudo-regression equation

    y(k) = X*(k)β*(k) + u(k)                                           (3.10)

where we assume that the conditional expectation E(y(k) | X*(k)) of
y(k) equals X*(k)β*(k), since E(u(k) | X*(k)) is assumed to be zero.
In OLS (ordinary least squares) models this assumption is necessarily
fulfilled, but in this case it may not be; hence we term these the pseudo-
normal equations of the pseudo-regression model, where the popula-
tion mean vector μ(k) equals X*(k)β*(k). Note that the errors u(k)
in (3.10) are disturbances constrained under the optimal basis X*(k).
Since our LP estimate β̂*(k) is obtainable in the nondegenerate case
as

    β̂*(k) = X*^{-1}(k)y(k) = X*^{-1}(k)[X*(k)β*(k) + u(k)],

its variance-covariance matrix is

    E[(β̂*(k) − β*(k))(β̂*(k) − β*(k))′] = X*^{-1}(k) V X*′^{-1}(k),

where prime denotes transpose. Which of the n LP estimators β̂*(k)
would we now accept as the best? By the optimal design criterion we
minimize the generalized variance

    min_k d = |X*^{-1}(k) V X*′^{-1}(k)| = |(X*(k)′X*(k))^{-1}| · |V|

to obtain the best estimator, β̂_(0) say. But since V is not estimable due
to lack of degrees of freedom, this estimator β̂_(0) is obtained by
maximizing the determinant of [X*(k)′X*(k)]. As Chernoff (1972)
has shown, the design associated with β̂_(0), also called the D-optimal
design, is equivalent under certain regularity and continuity conditions
to the so-called A-optimal design based on the Chebyshev criterion,
which minimizes the maximum variance of prediction among the
linear unbiased estimators of β*(k).
As an example consider three DMUs each with one unit of output
and the following two inputs:

             DMU_1    DMU_2    DMU_3    Average

Input x_1      2        4        3        3
Input x_2      2        1        2        1.6
Clearly the third unit is relatively inefficient, since it is dominated by
DMU_1, which can produce the same level of output with one unit less
of input x_1. The value of the determinant D_k = |X*(k)′X*(k)| turns
out to be as follows:

Basis (DMUs)    1-2     2-3     3-1     1-Average

D_k              36      25       4      7.84

Thus the best LP estimator β̂_(0) is given by the vector (1/6, 1/3)′, where
the two units DMU_1 and DMU_2 are in the optimal basis. It turns out that
in this case the optimal basis X*_(0) associated with β̂_(0) is
the modal efficiency set defined in Section 2; this is also identical to
the single LP model associated with MAD.
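
The determinant comparison and the resulting estimator can be reproduced in a few lines (Python; not from the text). The fourth candidate basis pairs DMU_1 with the average input vector, which is how the '1-4' column of the table is read here.

import numpy as np

units = {"DMU1": [2.0, 2.0], "DMU2": [4.0, 1.0], "DMU3": [3.0, 2.0], "Avg": [3.0, 1.6]}
y = np.array([1.0, 1.0])                        # each unit produces one unit of output
bases = [("DMU1", "DMU2"), ("DMU2", "DMU3"), ("DMU3", "DMU1"), ("DMU1", "Avg")]

best = None
for pair in bases:
    X = np.array([units[u] for u in pair])      # rows j of the basis: sum_i beta_i x_ij = y_j
    D = np.linalg.det(X.T @ X)
    print(pair, round(D, 2))                    # 36, 25, 4, 7.84 as in the table above
    if best is None or D > best[1]:
        best = (pair, D, X)

pair, D, X = best
print(pair, np.linalg.solve(X, y))              # ('DMU1', 'DMU2') with beta_(0) approx. [0.167, 0.333]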
We now summarize our main results.
1. The data consistency problem in the nonparametric theory of the
   production frontier has to allow for stochastic feasibility, since
   some input-output data may fail to maintain feasibility with a high
   probability.
2. The stochastic nature of the data aggregation problem when there
   are input constraints leads us to consider a nonparametric theory
   of the distribution of efficiency. According to this theory the distribu-
   tion of efficiency may be analyzed in relation to that associated
   with the minimum absolute deviation (MAD) approach.
3. Under certain general conditions the MAD approach has some
   robustness properties, which can be utilized to characterize suitable
   subsets (called efficiency subsets) of the overall data set. Such
   characterizations are useful in running regressions with (0, 1)
   dummy variable regressors and testing for contamination of the
   efficiency subset.

REFERENCES

Afriat, S. N. (1972), 'Efficiency estimation of production functions', International Economic Review 13: 568-598.
Aigner, D. J., Lovell, C. A. K., and Schmidt, P. (1977), 'Formulation and estimation of stochastic frontier production function models', Journal of Econometrics 6: 21-38.
Banker, R. D. and Maindiratta, A. (1986), 'Piecewise loglinear estimation of efficient production surfaces', Management Science 32: 126-135.
Charnes, A., Cooper, W. W., and Rhodes, E. (1978), 'Measuring the efficiency of decision-making units', European Journal of Operational Research 2: 429-444.
Chernoff, H. (1972), Sequential Analysis and Optimal Design, Society for Industrial and Applied Mathematics, Philadelphia.
Diewert, W. E. and Parkan, C. (1983), 'Linear programming tests of regularity conditions for production functions', in Quantitative Studies on Production and Prices, Physica-Verlag, Austria.
Farrell, M. J. (1957), 'The measurement of productive efficiency', Journal of the Royal Statistical Society, Series A 120: 253-290.
Forsund, F. R. and Jansen, E. S. (1985), 'The interplay between sectoral models based on micro data and models for the national economy', in Production, Multi-Sectoral Growth and Planning, North-Holland, Amsterdam.
Hanoch, G. and Rothschild, M. (1972), 'Testing the assumptions of production theory: a nonparametric approach', Journal of Political Economy 80: 256-275.
Houthakker, H. S. (1956), 'The Pareto distribution and the Cobb-Douglas production function in activity analysis', Review of Economic Studies 23: 27-31.
Johansen, L. (1972), Production Functions, North-Holland, Amsterdam.
Jondrow, J., Lovell, C. A. K., Materov, I. S., and Schmidt, P. (1982), 'On the estimation of technical efficiency in the stochastic frontier production function model', Journal of Econometrics 19: 233-238.
Rosenblatt, M. (1956), 'Remarks on some nonparametric estimates of a density function', Annals of Mathematical Statistics 27: 832-837.
Sato, K. (1975), Production Functions and Aggregation, North-Holland, Amsterdam.
Seierstad, A. (1985), 'Properties of production and profit functions arising from the aggregation of a capacity distribution of micro units', in Production, Multi-Sectoral Growth and Planning, North-Holland, Amsterdam.
Sengupta, J. K. and Sfeir, R. E. (1986), 'Production frontier estimates of scale in public schools in California', Economics of Education Review 5: 121-137.
Sengupta, J. K. and Sfeir, R. E. (1987), 'Efficiency measurement by data envelopment analysis with econometric applications', Applied Economics (to appear).
Sengupta, J. K. and Sfeir, R. E. (1986a), 'Data envelopment analysis for efficiency measurement in the stochastic case', Computers and Operations Research (to appear).
Sengupta, J. K. (1987a), 'Efficiency measurement in nonmarket systems through data envelopment analysis', International Journal of Systems Science (to appear).
Sengupta, J. K. (1987b), 'Production frontier estimation to measure efficiency: a critical evaluation in the light of data envelopment analysis', Managerial and Decision Economics (to appear).
Sengupta, J. K. (1987b), 'The measurement of productive efficiency: a robust minimax approach', Managerial and Decision Economics (to appear).
Timmer, C. P. (1971), 'Using a probabilistic frontier function to measure technical efficiency', Journal of Political Economy 79: 776-794.
Varian, H. (1984), 'The nonparametric approach to production analysis', Econometrica 52: 579-597.