

Department of Economics

Faculty of Economics and Management


doc. Ing. Iveta Zentková, PhD.
07 / 2006

Microeconomics

Essays by

Alexander Frech
Felix Hötzinger
Olaf Löbl
Eckart Margenfeld
Michael Mirz
Serge Schäfers
Content

I. Classical and intertemporal models of consumer behaviour 3
II. Choice under uncertainty 11
III. Asset markets and risk assets 18
IV. The neoclassical theory of production in short-run and long-run 26
V. The neoclassical theory of cost in short-run and long-run 33
VI. The theory of the firm in perfect competition and monopolistic competition 46
VII. The theory of oligopoly and monopoly 57
VIII. Input markets 64
IX. General equilibrium 70

References

I. Classical and intertemporal models of consumer behaviour

1. Consumer Preferences

1.1 Consumption Bundles

The economic model of consumer behaviour is very simple: people choose the
best things they can afford. The objects of consumer's choice are called
consumption bundles, e.g.

(x1, x2) = X or (y1, y2) = Y

The consumer will rank these bundles according to their desirability. In other
words, the consumer decides whether he strictly prefers one over the other
(i.e. X > Y) or is indifferent between X and Y (X ≈ Y). If the consumer
prefers X to Y or is indifferent between them, we say he weakly prefers X to Y
(X ≥ Y). Bundles satisfying X ≈ Y can be shown graphically in a so-called
indifference curve, with goods x1 and x2 on the two axes. The shaded
area above and to the right of the indifference curve represents the weakly
preferred set:

[Figure: an indifference curve in (x1, x2) space; the weakly preferred set is the shaded area above and to the right of the curve.]

1.2 Assumption about Preferences

It would be inconsistent for a consumer to hold X > Y and, at the same time,
Y > X. Therefore, assumptions about the consistency of consumers' preferences
are made. Some of these assumptions are so fundamental that they are referred
to as "axioms" of consumer theory:

Complete
Any two bundles can be compared, that is X ≥ Y or Y ≥ X, or both, in which case
the consumer is indifferent between the two bundles. This assumption is mild
and rarely restrictive in microeconomics.

Reflexive
Any bundle is at least as good as itself (X ≥ X).

Transitive
If X ≥ Y and Y ≥ Z, then the assumption is made that X is at least as good as Z.

Monotonicity
More is better: a bundle X = (y1, y2 + one unit) is preferred to the bundle
Y = (y1, y2).

Satiation
The satiation point or bliss point is the overall best bundle for the consumer
in terms of his own preferences. Too much of something, just like too little of
it, fails to satisfy the consumer.

2. Utility

Utility today is seen as a way to describe consumer preferences. A utility
function is a way of assigning a number to every possible consumption bundle
such that more preferred bundles get assigned larger numbers than less
preferred bundles. That is, a bundle (x1, x2) is preferred to a bundle (y1, y2)
if and only if the utility of X is larger than the utility of Y:

u (x1, x2) > u (y1, y2)

2.1 Ordinal Utility

All that matters about a utility assignment is how it orders (ranks) the
bundles of goods. The size of the utility difference between any two
consumption bundles does not matter. Because of this emphasis on ordering,
this kind of utility is referred to as ordinal utility.

Bundle u1 u2
A 3 17
B 2 10
C 1 .002
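The table can be checked in a few lines: only the ordering produced by u1 and u2 matters, and it is the same for both. This is a minimal sketch using the utility numbers from the table above.

```python
# Ordinal utility: only the ranking matters. u1 and u2 are the two
# assignments from the table; both order the bundles A > B > C.

u1 = {"A": 3, "B": 2, "C": 1}
u2 = {"A": 17, "B": 10, "C": 0.002}

rank1 = sorted(u1, key=u1.get, reverse=True)
rank2 = sorted(u2, key=u2.get, reverse=True)

print(rank1)           # ['A', 'B', 'C']
print(rank1 == rank2)  # True: both functions represent the same preferences
```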

2.2 Cardinal Utility

Cardinal utility theories attach significance to the magnitude of utility. The
size of the utility difference between two bundles is supposed to carry some
meaning, e.g.: "I am willing to pay twice as much for bundle A as for bundle
B", which a utility function could express as u(A) = 4 > u(B) = 2. Specific
functional forms of utility functions include perfect substitutes, quasilinear
preferences, Cobb-Douglas preferences, etc.

2.3 Marginal Utility

The rate at which utility changes when the consumer gets a little more of good
1 in a bundle (x1, x2) is called the marginal utility of good 1, MU1.
Mathematically: MU1 = ΔU / Δx1.
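A finite-difference sketch of this definition, assuming a hypothetical Cobb-Douglas-style utility u(x1, x2) = x1^0.5 · x2^0.5 (not from the text):

```python
# Approximate MU1 = ΔU / Δx1 numerically for an assumed utility function.

def u(x1, x2):
    return x1 ** 0.5 * x2 ** 0.5

def marginal_utility_1(x1, x2, dx=1e-6):
    # small increase in good 1, holding good 2 fixed
    return (u(x1 + dx, x2) - u(x1, x2)) / dx

mu1 = marginal_utility_1(4.0, 9.0)
print(round(mu1, 3))  # 0.75, i.e. 0.5 * (x2 / x1) ** 0.5
```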

3. Choice

3.1 Optimal Choice

Consumers choose the most preferred bundle from their budget set. Below, a
budget set and several indifference curves of the consumer are drawn in the
diagram.

[Figure: a budget line with several indifference curves; the optimal choice (x1*, x2*) lies where the highest attainable indifference curve just touches the budget line.]
In the diagram, the bundle (x1*, x2*) on the highest indifference curve that
just touches the budget line is the optimal choice for the consumer: it is the
best bundle the consumer can afford. Bundles on indifference curves above the
budget line are preferred, but they are not affordable. As the graph shows, at
the optimal choice the indifference curve does not cross the budget line but is
tangent to it. Such a point is also called an "interior optimum".

3.2 Substitutes and Complements

The optimal choice of goods 1 and 2 at given prices and income is called the
consumer's "demanded bundle". When prices or income change, the consumer's
optimal choice will change.
Different preferences will lead to different demand functions. A demand function
is a function that relates the optimal choice – the quantities demanded – to the
different values of prices and income. In other words, the demand function shows
the optimal amounts of each of the goods as a function of the prices and income
faced by the consumer.

x1 = x1(p1, p2, m)

x2 = x2(p1, p2, m)

The left hand side of the equation stands for the quantity demanded. The right
hand side of the equation is the function that relates the prices and income to that
quantity. If two goods are perfect substitutes, then a consumer will purchase the
cheaper one.
Perfect complements are goods of which the consumer will always buy an equal
amount of each. The most obvious example is a pair of shoes. Again, here the
optimal choice will be on the budget line. We therefore can solve
mathematically:

p1·x + p2·x = m, so x1 = x2 = x = m / (p1 + p2)

The demand function for the optimal choice here is quite obvious. Since the two
goods are always consumed together, it is just as if the consumer is spending all
his money on a single good that has the price p1 + p2.
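The demand rule just derived can be sketched directly; the prices and income below are illustrative values, not from the text.

```python
# Perfect-complements demand: with x1 = x2 = x, the budget constraint
# p1*x + p2*x = m gives x = m / (p1 + p2).

def demand_perfect_complements(p1, p2, m):
    x = m / (p1 + p2)
    return x, x  # equal amounts of both goods

x1, x2 = demand_perfect_complements(p1=2.0, p2=3.0, m=100.0)
print(x1, x2)  # 20.0 20.0
```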

There are cases where the consumer spends all of his money on the goods he
likes and nothing on "neutral" or even "bad" goods. Thus, if commodity 1 is a
"good" and commodity 2 is "neutral" or "bad", the demand functions are:

x1 = m / p1; x2 = 0

Concave preferences always lead to a boundary choice: the consumer always
spends his entire budget on only one of the two goods, never on both.

4. Demand

Research on how a choice responds to changes in the economic environment is
known as comparative statics: two situations are compared, before and after
the change in the economic environment. Only two things affect the optimal
choice in our model, prices and income. Therefore, the investigation focuses
on how demand changes when prices and income change.
"Normal goods" are goods for which the demand increases when income increases.
For a normal good, the quantity demanded always changes in the same direction
as income:

∆ x1 / ∆ m > 0

"Inferior goods" are goods for which the demand decreases when income
increases (e.g. low-quality goods, fast food, etc.).
Whether a good is inferior depends on the level of income under examination:
at low incomes an inferior good may well be consumed more as income rises, but
beyond a certain income level demand for it will usually decline. An income
offer curve or income expansion path (shown below) illustrates the bundles of
goods that are demanded at different levels of income. If both goods are
normal goods, the income expansion path will have a positive slope.

[Figure: an income offer curve passing through the tangency points of successive budget lines and indifference curves.]

5. Slutsky equation

It is possible to construct examples where the optimal demand for a good
decreases when its price falls. A good with that property is called a "Giffen
good".

There are really two effects at work when the price of a good changes: the
rate at which you can exchange one good for another changes, and your total
purchasing power is altered. The first part, the change in demand due to the
change in the rate of exchange between the two goods, is called the
substitution effect. The second, the change in demand due to having more (or
less) purchasing power, is called the income effect.

In order to illustrate this, it makes sense to break the price movement into
two steps: first, change the relative price and adjust money income so as to
hold purchasing power constant; second, adjust purchasing power while holding
relative prices constant. Graphically, this gives two movements, a pivot and a
shift, when the price of good 1 changes and income stays fixed.

First, the budget line pivots around the original choice, moving demand from X
to Y. Then a parallel shift of the budget line occurs as income changes while
relative prices remain constant, moving demand from Y to Z. The figure below
illustrates the two movements of the budget line.

[Figure: Slutsky pivot and shift of the budget line, with original choice X, intermediate choice Y on the pivoted line, and final choice Z.]
5.1 The substitution effect

The economic meaning of the pivoted budget line is that it holds the
consumer's purchasing power constant in the sense that the original bundle of
goods is just affordable on the new pivoted line. The required income
adjustment is:

∆ m = ∆ p1 * x1

Note that if the price of good 1 goes down, the adjustment of income will be
negative: when a price goes down, the consumer's purchasing power goes up, so
one has to decrease the consumer's income in order to keep purchasing power at
its original level.
The optimal choice on the pivoted budget line in the figure above is point Y.
This is the optimal bundle when the price changes and income is adjusted
accordingly. The movement from X to Y is the substitution effect.

5.2 The income effect

A parallel shift of the budget line occurs when income changes while relative
prices remain constant. This is also called income effect, changing income while
keeping prices fixed at the new price. The above figure illustrates this by moving
from point Y to Z. More precisely, the income effect is the change in the demand
for good 1 when we change income from m’ to m, holding the price of good 1
fixed at p’1:

∆x1^n = x1(p'1, m) – x1(p'1, m')


The income effect will either tend to increase or decrease the demand for good 1,
depending on whether we have a normal good or an inferior good.

5.3 Slutsky Identity

Putting the above together, the total change in demand equals the substitution
effect plus the income effect. This decomposition is called the "Slutsky
identity".
The substitution effect always moves opposite to the price change, so it is
negative in that sense, while the income effect can go either way. Thus, the
total effect may be either positive or negative. For a normal good, however,
the income and substitution effects work in the same direction.
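The two-step decomposition above can be sketched numerically, assuming a hypothetical demand function x1(p1, m) = 0.5·m/p1 (a consumer who spends half of income on good 1); the prices and income are illustrative.

```python
# Slutsky identity: total effect = substitution effect + income effect.

def x1(p1, m):
    return 0.5 * m / p1  # assumed demand for good 1

p1_old, p1_new, m = 2.0, 1.0, 100.0

# Pivot: adjust income so the old bundle is just affordable at the new price.
m_adjusted = m + x1(p1_old, m) * (p1_new - p1_old)

substitution = x1(p1_new, m_adjusted) - x1(p1_old, m)
income = x1(p1_new, m) - x1(p1_new, m_adjusted)
total = x1(p1_new, m) - x1(p1_old, m)

print(substitution, income, total)  # 12.5 12.5 25.0
```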

5.4 Hicks Substitution Effect

For the Hicks substitution effect, the budget line is rolled around the
original indifference curve rather than pivoted around the original choice. In
this way, the consumer faces a new budget line that has the same relative
prices as the final budget line but a different income. The consumer's
purchasing power on the new budget line is no longer sufficient to purchase
his original bundle of goods, but it is sufficient to purchase a bundle that
is just indifferent to the original one. This is illustrated in the chart
below.

[Figure: Hicks decomposition with original and final budget lines, original and final choices, and the substitution and income effects marked.]

The Slutsky substitution effect gives the consumer just enough money to get back
to his old level of consumption while the Hicks substitution effect gives the
consumer just enough money to get back to his old indifference curve.

6. Intertemporal Choice

Choices of consumption over time are known as intertemporal choices. The shape
of an indifference curve indicates the consumer's tastes for consumption at
different times. An indifference curve with a slope of -1 would represent the
tastes of a consumer who does not care whether he consumes today or tomorrow.
An indifference curve for perfect complements would indicate that the consumer
wants to consume equal amounts today and tomorrow; such a consumer would be
unwilling to substitute consumption from one period to the other.
In reality, however, it is most common that consumers are willing to
substitute some amount of consumption today for consumption tomorrow, i.e. to
save.
The optimal choice for consumption can be examined in each of the two periods:

• if the consumer chooses a pattern where c1 < m1, he is a lender
• if the consumer chooses a pattern where c1 > m1, he is a borrower

Graphically shown as:

[Figure: intertemporal choice of a borrower (c1 > m1) and a lender (c1 < m1), each with endowment (m1, m2) and chosen consumption (c1, c2).]

Whether a consumer is a borrower or a lender can change as interest rates
change. If the consumer is a lender and the interest rate increases, he will
remain a lender; if the consumer is a borrower and the interest rate declines,
he will remain a borrower. The converse need not hold: if a consumer is a
lender and interest rates decrease, he may very well switch to become a
borrower.
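The lender/borrower classification above can be sketched as follows; the endowment and consumption values are illustrative.

```python
# Classify a consumer by first-period consumption c1 versus endowment m1.

def classify(c1, m1):
    if c1 < m1:
        return "lender"    # consumes less than today's endowment, saves
    if c1 > m1:
        return "borrower"  # consumes more than today's endowment
    return "neither"

print(classify(c1=60.0, m1=100.0))   # lender
print(classify(c1=120.0, m1=100.0))  # borrower
```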

II. Choice under uncertainty

Randomness in economic theory is often divided into two distinct categories:
first, situations in which the decision-maker can assign mathematical
probabilities to the randomness he faces,1 and second, situations in which it
is impossible to express the randomness in mathematical probabilities.2
According to this distinction, theories can be divided between those which use
the assignment of probabilities and those which do not. Corresponding to this
standard distinction, this chapter will focus on the Von Neumann-Morgenstern
theory in the case of "risk" and on Subjective Expected Utility theory in the
case of "uncertainty".3

1 Expected utility theory

1.1. Bernoulli utility function

We define "Expected Utility Theory" (EUT) as the theory of decision-making
under risk based on a set of preferences. The basics of this theory go back to
Daniel Bernoulli (1732).4 He showed with the so-called St. Petersburg paradox
that the principle of maximizing the expected outcome is not a useful concept
for decision-making.

In the St. Petersburg game people were asked how much they would pay for
the following prospect: if tails comes out of the first toss of a fair coin, to
receive nothing and stop the game, and in the complementary case to
receive two guilders and stay in the game; if tails come out of the second
toss of the coin, to receive nothing and stop the game, and in the
complementary case to receive four guilders and stay in the game; and so
on ad infinitum. The expected monetary value of this prospect is infinite.
Since the people always set a definite, possibly quite small upper value on
the St. Petersburg prospect, it follows that they do not price it in terms of its
expected monetary value.5

Bernoulli developed the idea of using the expected utility of money outcomes
as a measure for decision making. He states that each possible outcome has a
different utility for the decision maker, and the alternative with the highest
expected utility will be chosen, which is not necessarily the one with the
highest absolute outcome. In other words: players' decisions are based not on
statistical outcomes but on expected utilities. And since expected utilities
are subject to individual preferences, each expected utility function can be
considered unique.
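Bernoulli's point can be sketched numerically. The block below assumes the standard St. Petersburg payoffs of 2^(k-1) guilders when the first tails appears on toss k: the truncated expected monetary value grows without bound, while expected log-utility converges.

```python
import math

def expected_value(n_rounds):
    # each term contributes 0.5 guilders, so the sum grows linearly
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, n_rounds + 1))

def expected_log_utility(n_rounds):
    # with u(x) = log(x), the series converges (to log 2)
    return sum((0.5 ** k) * math.log(2 ** (k - 1)) for k in range(1, n_rounds + 1))

print(expected_value(10), expected_value(100))  # 5.0 50.0, unbounded growth
print(round(expected_log_utility(1000), 4))     # 0.6931, i.e. log(2)
```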

1 Referred to as "risk". See also Knight, F.H., chapter I.I.26
2 Referred to as "uncertainty"
3 Knight, F.H., chapter I.I.26
4 Daniel Bernoulli (*8.2.1700, †17.3.1782)
5 Mongin, P. (1997), p. 342-350

In general, by Bernoulli’s logic, the valuation of any risky venture takes the
expected utility form:

E(u | p, X) = Σ x∈X p(x) · u(x)

1.2. Von Neumann-Morgenstern expected utility function

The foundations of classical expected utility theory were laid by John von
Neumann and Oskar Morgenstern (1947), who used Bernoulli's concept to develop
the expected utility function, combining mathematical probabilities with
expected utility. They attempted to axiomatize6 Bernoulli's hypothesis in
terms of agents' preferences over different ventures with random prospects
(lotteries). In other words: the decision-maker's problem is to choose among
lotteries (sets of probabilities) and to find the best lottery. Von Neumann
and Morgenstern showed that if an agent has preferences defined over
lotteries, then there is a utility function

U: ∆(X)→R

that assigns a utility to every lottery p ∈ ∆(X) and represents these
preferences. They claim that this theory describes rational decision making.
The expected utility theorem formulates several assumptions which, together
with a set of axioms, form the cornerstone of decision making: within this
framework, rational decision making chooses the alternative which maximizes
expected utility.7
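A minimal sketch of expected utility maximization over lotteries, assuming a hypothetical Bernoulli utility u(x) = √x and illustrative lotteries (lists of probability-outcome pairs):

```python
import math

def u(x):
    return math.sqrt(x)  # assumed concave (risk-averse) utility

def expected_utility(lottery):
    return sum(p * u(x) for p, x in lottery)

safe  = [(1.0, 49.0)]               # 49 for sure
risky = [(0.5, 100.0), (0.5, 0.0)]  # 100 or 0, each with probability 1/2

best = max([safe, risky], key=expected_utility)
print(expected_utility(safe), expected_utility(risky))  # 7.0 5.0
print(best is safe)  # True: this agent picks the sure thing
```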

1.3. Comparative statics using revealed preferences8

The term comparative statics refers to the method of comparing two different
states or outcomes of a choice. To rationalize observed consumer behavior,
several definitions of revealed preference are made:

(1) When, out of two possible options (x, y), x was chosen instead of y, we
can deduce that the utility of x is at least as large as the utility of y. In
this case x is directly revealed preferred to y.
(2) If x is linked to y by a sequence of such comparisons, x is said to be
(indirectly) revealed preferred to y.
(3) If x is revealed preferred to a strictly cheaper bundle x', then with a
locally nonsatiated utility function choosing x' while x is affordable would
contradict utility maximization; x is strictly directly revealed preferred to
x'.
Using these observations, the generalized axiom of revealed preference can be
derived as a consequence of utility maximization. Based on this axiom, several
comparative-statics results about compensated demand9 can be shown.

6 Axioms of preference are: independence, transitivity, completeness, the Archimedean axiom (continuity)
7 Fishburn, P.C. (1989), p. 127-158
8 Varian, H. (1984), p. 141-145

Comparative statics typically involves calculations designed to show the
direction in which changes in the environment move people's optimal decisions.
Convincing comparative-statics results are those that hold even when only weak
restrictions are imposed on preferences. That is why the methods of
comparative statics sometimes tend to be sophisticated.10

2 Money lotteries and risk aversion

2.1 Arrow-Pratt measure of risk aversion

Risk aversion is a concept used in several fields, such as economics, finance,
and psychology, to explain the behaviour of consumers and investors under
uncertainty. Risk aversion is the reluctance of a person to accept a bargain
with an uncertain payoff rather than another bargain with a more certain but
possibly lower expected payoff. The inverse of a person's risk aversion is
sometimes called their risk tolerance.11

For example: a person is given the choice between a bet paying either $100 or
nothing, each with a probability of 50%, and a certain payment. He is risk
averse if he would rather accept a payoff of less than $50 (for example, $40)
with certainty than take the bet; risk neutral if he is indifferent between
the bet and a certain $50 payment; and risk-loving if the certain payment must
exceed $50 (for example, $60) to induce him to take it over the bet. The
average payoff of the bet, its expected value, is $50. The certain amount
accepted in place of the bet is called the certainty equivalent; the
difference between it and the expected value is called the risk premium.
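The certainty equivalent and risk premium for the $100-or-nothing bet can be sketched numerically, assuming a hypothetical concave utility u(x) = √x (a risk-averse agent):

```python
import math

outcomes = [(0.5, 100.0), (0.5, 0.0)]
expected_value = sum(p * x for p, x in outcomes)               # 50.0
expected_utility = sum(p * math.sqrt(x) for p, x in outcomes)  # 5.0

# u(CE) = E[u]  =>  CE = E[u] ** 2 for u(x) = sqrt(x)
certainty_equivalent = expected_utility ** 2
risk_premium = expected_value - certainty_equivalent

print(certainty_equivalent, risk_premium)  # 25.0 25.0
```

For this utility the agent would accept any sure payment above $25 in place of the bet, which illustrates accepting "less than $50" in the example above.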

In the theory of Arrow (1965) and Pratt (1964), risk aversion is characterized
by the concavity of the utility function over money income. The diminishing
marginal utility of wealth helps to explain aversion to large-scale risk. In
other words: one Euro that helps us avoid poverty is more valuable than a Euro
that helps us become very rich.12

2.2 Relative risk aversion and absolute risk aversion

2.2.1 Absolute risk aversion

The higher the curvature of the utility function u(c), the higher the risk
aversion. Since utility functions are only defined up to a positive affine
transformation, a measure that stays constant under such transformations is
needed. This measure is the Arrow-Pratt measure of absolute risk aversion, or
coefficient of absolute risk aversion, defined as

9 e.g. the Hicksian compensation or the Slutsky compensation. Varian, H. (1984), p. 144
10 Peters, M. (2005), p. 1-13
11 Peters, M. (2005), p. 8
12 Rabin, M. (2000), p. 4

ru(c) = -u''(c) / u'(c)

The following terms relate to this measure:

• constant absolute risk aversion (CARA) if ru(c) is constant with respect to c
• decreasing/increasing absolute risk aversion (DARA/IARA) if ru(c) is decreasing/increasing in c

2.2.2 Relative risk aversion

The Arrow-Pratt measure of relative risk aversion, or coefficient of relative
risk aversion, is defined as

rr(c) = c · ru(c) = -c · u''(c) / u'(c)

As for absolute risk aversion, the corresponding terms constant, decreasing,
and increasing relative risk aversion are used. This measure has the advantage
of remaining a valid measure of risk aversion even if the utility function
changes from risk-averse to risk-loving as c varies, i.e. is not strictly
concave or convex over all c.13
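Both coefficients can be computed numerically. The sketch below assumes a hypothetical CRRA utility u(c) = c^(1-g)/(1-g), for which ru(c) = g/c and the relative coefficient equals g; the derivatives are approximated by finite differences.

```python
def make_crra(g):
    # assumed CRRA utility; g is the relative risk-aversion parameter
    return lambda c: c ** (1 - g) / (1 - g)

def absolute_risk_aversion(u, c, h=1e-4):
    u1 = (u(c + h) - u(c - h)) / (2 * h)             # u'(c)
    u2 = (u(c + h) - 2 * u(c) + u(c - h)) / h ** 2   # u''(c)
    return -u2 / u1

u = make_crra(2.0)
c = 5.0
r_abs = absolute_risk_aversion(u, c)
r_rel = c * r_abs

print(round(r_abs, 4), round(r_rel, 4))  # 0.4 2.0, matching g/c and g
```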

2.3 Jensen’s inequality

Jensen's inequality, named after the Danish mathematician Johan Jensen,
relates the value of a convex function of an integral to the integral of the
convex function. The inequality can be stated generally using measure theory14
or using probability theory15; the two statements say the same thing.
If preferences admit an expected utility representation with the Bernoulli
utility function u(x), it follows from the definition of risk aversion that
the decision maker is risk averse if:

13 Mas-Colell, A. (1995), p. 167 f.
14 Let (Ω, A, µ) be a measure space such that µ(Ω) = 1. If g is a real-valued µ-integrable function and φ is a convex function on the range of g, then φ(∫ g dµ) ≤ ∫ φ(g) dµ.
15 In the terminology of probability theory, µ is a probability measure, the function g is replaced by a real-valued random variable X, and the integral over Ω with respect to µ becomes an expected value. The inequality then says that for any convex function φ, φ(E[X]) ≤ E[φ(X)].

∫u (x)dF(x) ≤ u (∫ x dF(x))

In the context of expected utility theory, risk aversion is equivalent to the
concavity of the utility function u(·).16
One application of Jensen's inequality can be found in investment management.
The basic idea is that analyzing the performance of an investment requires
looking not only at the overall return but also at the risk. For example, out
of two mutual funds with the same returns, a rational investor would choose
the less risky fund. Jensen's measure is a way to determine whether a
portfolio earns a proper return for its level of risk.
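The inequality can be checked by simulation; the payoff distribution and the concave utility (a square root) below are illustrative assumptions.

```python
import math
import random

# Jensen for concave u: E[u(X)] <= u(E[X]).
random.seed(0)
draws = [random.uniform(0.0, 100.0) for _ in range(100_000)]

expected_utility = sum(math.sqrt(x) for x in draws) / len(draws)
utility_of_expectation = math.sqrt(sum(draws) / len(draws))

print(expected_utility <= utility_of_expectation)  # True
```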

3 Subjective probability function

In the von Neumann-Morgenstern theory, probabilities are assumed to be
"objective". In this respect, they followed the classical view that randomness
and probabilities exist inherently in Nature and cannot be influenced by the
agent. This point of view is controversial, and some statisticians and
philosophers have long objected to it, arguing that randomness is not an
objectively measurable phenomenon but rather a "knowledge" phenomenon.

In this view, a coin toss is not necessarily characterized by randomness: if
we knew the shape and weight of the coin, the strength of the tosser, the
atmospheric conditions of the room in which the coin is tossed, the distance
of the coin-tosser's hand from the ground, etc., we could predict with
certainty whether it would be heads or tails. However, as this information is
commonly missing, it is convenient to assume it is a random event and ascribe
probabilities to heads or tails.

In short, probabilities are really a measure of the lack of knowledge about
the conditions which might affect the coin toss and thus merely represent our
beliefs about the experiment.17
Other economists, such as Irving Fisher (1906) and Frank P. Ramsey (1926),
asserted instead that probability is related to the knowledge possessed by an
individual rather than to general knowledge. In Ramsey's opinion, it is
personal belief that governs probabilities, not disembodied knowledge. As a
consequence, "probability" is subjective.18
The problem with the subjectivist point of view is that it seemed impossible
to derive mathematical expressions for probabilities from personal beliefs.
However, Frank Ramsey's great contribution in his 1926 paper was to suggest a
way of deriving a consistent theory of choice under uncertainty that could
isolate beliefs from preferences while still maintaining subjective
probabilities. In so

16
Mas-Colell, A. (1995), p. 185-186
17
As Knight expressed it, “if the real probability reasoning is followed out to its conclusion, it
seems that there is ‘really’ no probability at all, but certainty, if knowledge is complete.”
(Knight, 1921:219)
18
Economists following these opinions are often referred to as „subjectivists“.

doing, Ramsey provided the first attempt for an axiomatization of choice under
uncertainty.19
The subjective nature of probability assignments becomes clearer in situations
like a horse race. Here most spectators face more or less the same lack of
knowledge about the horses, the track, the jockeys, etc. Yet, while sharing
the same "knowledge", different people place different bets on the winning
horse. The basic idea behind the Ramsey-de Finetti derivation is that by
observing the bets people make, one can presume that these reflect their
personal beliefs about the outcome of the race. Thus, Ramsey and de Finetti
argued, subjective probabilities can be inferred from observation of people's
actions.
Leonard Savage (1954) succeeded in giving a simple axiomatic basis to expected
utility with subjective uncertainty, based on the ideas of Ramsey and de
Finetti and the assumptions of transitivity20, order21, invariance22,
dominance23, cancellation24 and continuity25.
According to this theory, the decision is made on the basis of subjective
expected utility (SEU). The subjective expected utility is the sum, over all
consequences, of the subjective utility of each consequence multiplied by its
subjective probability of realization:

SEU = Σ (subjective utility) × (subjective probability of realization)

Concluding, it can be said that the main difference is the treatment of
"uncertainty" and "utility" as subjective variables rather than objective
probabilities.
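The SEU formula above can be sketched as follows; the utilities and the two bettors' subjective beliefs are purely illustrative, echoing the horse-race example.

```python
# Same outcomes and utilities, but each agent applies her own
# (subjective) probabilities.

def seu(subjective_probs, utilities):
    return sum(p * u for p, u in zip(subjective_probs, utilities))

utilities = [10.0, 2.0]     # utility of "horse A wins" vs "horse A loses"
optimist  = [0.75, 0.25]    # one bettor's beliefs
pessimist = [0.25, 0.75]    # another bettor's beliefs

print(seu(optimist, utilities))   # 8.0
print(seu(pessimist, utilities))  # 4.0: same facts, different bets
```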

4 State preference approach

The "state preference" approach to uncertainty was introduced by Kenneth J.
Arrow (1953) and further detailed by Gerard Debreu (1959: Ch. 7).
Its basic principle is that choices under uncertainty can be reduced to a
conventional choice problem by changing the commodity structure appropriately.
The state-preference approach is thus distinct from the conventional
"microeconomic" treatment of choice under uncertainty, such as that of von
Neumann and Morgenstern (1944), in that preferences are not formed over
"lotteries" directly but over state-contingent commodity bundles. In its
reliance on states, and on choices of actions which are effectively functions
from states to outcomes, it is much closer in spirit to Leonard Savage (1954).
It differs from Savage in not relying on the assignment of subjective
probabilities, although such a derivation can be made.
The basic proposition of the state preference approach to uncertainty is that
commodities can be differentiated not only by their physical properties and
The basic proposition of the state preference approach to uncertainty is that
commodities can be differentiated not only by their physical properties and

19 Independently of Ramsey, Bruno de Finetti (1931, 1937) had suggested a similar derivation of subjective probability.
20 Meaning a consistent rank order of preferences (if A is preferred to B and B to C, then A is preferred to C).
21 Meaning a clear preference for one out of two possibilities.
22 Meaning that the decision maker is not affected by the way the alternatives are presented.
23 Meaning that the choice with greater utility dominates preferences.
24 Meaning that identical probabilities with the same utility leave the decision to chance.
25 Meaning that a gamble is preferred to "sure outcomes" if the odds are high enough.

location in space and time but also by their location in "state". By this we
mean that "ice cream when it is rainy" is a different commodity than "ice
cream when it is sunny"; the two are treated differently by agents and can
command different prices. Thus, letting S be the set of mutually exclusive
"states of nature" (e.g. S = {rainy, sunny}), we can index every commodity by
the state of nature in which it is received and thus construct a set of
"state-contingent" markets.26
Insurance is a natural application of the state-preference approach precisely
because an insurance policy is an explicit "state-contingent" contract.
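The idea of state-contingent consumption can be sketched with a hypothetical insurance example; the wealth, loss size, and loss probability below are assumed values.

```python
# Two states of nature: "no accident" and "accident" (loss of 40).
# One unit of cover pays 1 in the accident state; the premium is
# actuarially fair, i.e. equal to the accident probability.

wealth, loss, p_accident = 100.0, 40.0, 0.25
premium_per_unit = p_accident

def consumption(cover):
    """Consumption in each state as a function of insurance bought."""
    c_no_accident = wealth - premium_per_unit * cover
    c_accident = wealth - loss - premium_per_unit * cover + cover
    return c_no_accident, c_accident

print(consumption(0.0))   # (100.0, 60.0): the uninsured bundle
print(consumption(40.0))  # (90.0, 90.0): full insurance equalizes the states
```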

5 Concluding considerations

The expected utility model was first proposed by Daniel Bernoulli as a
solution to the St. Petersburg paradox. Bernoulli argued that the paradox
could be resolved if decision makers displayed risk aversion. Building on
these ideas, the first important use of expected utility theory was that of
John von Neumann and Oskar Morgenstern, who used the assumption of expected
utility maximization in their formulation of game theory. The expected utility
theorem says that a von Neumann-Morgenstern utility function exists if the
agent's preference relation on the space of simple lotteries satisfies four
axioms: completeness, transitivity, continuity, and independence. Independence
is probably the most controversial of the axioms, and a variety of generalized
expected utility theories have arisen, most of which drop or relax it. The
Arrow-Pratt measures of risk aversion for von Neumann-Morgenstern utility
functions discussed in chapter 2 have become a standard tool for analyzing
problems in the microeconomics of uncertainty. They have been used to
characterize the qualitative properties of demand in insurance and asset
markets and to examine the properties of risk taking in taxation models, to
name just a few applications. The limitations of classical expected utility
theory are outlined in chapter 3. The findings of Savage led to a normative
theory of decision-making based on subjective utility expectations, while the
state-preference approach differentiates commodities by states of nature
(replacing "lotteries") and can be related to Savage and to the theories of
risk aversion.

26 Yaari, M. (1969), p. 315-329

III. Asset Markets and Risk Assets

Return

The term "return on investment", or simply "return", refers to any of a number
of metrics of the change in an asset's or portfolio's accumulated value over
some period of time. Accumulated value can be measured in different ways. In
investment management, a distinction is drawn between total returns and net
returns. The former are calculated from accumulated values reflecting only
price appreciation and income from dividends or interest. The latter are
calculated from accumulated values that also reflect items such as management
fees, custody fees, transaction costs, taxes, and perhaps even inflation.
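The total-versus-net distinction can be sketched with illustrative numbers (a hypothetical starting value, income, and fee level):

```python
# Total return reflects price appreciation plus income; net return
# also deducts items such as fees.

start_value, end_price_value, income, fees = 1000.0, 1050.0, 20.0, 5.0

total_return = (end_price_value + income - start_value) / start_value
net_return = (end_price_value + income - fees - start_value) / start_value

print(total_return)  # 0.07, i.e. 7%
print(net_return)    # 0.065, i.e. 6.5%
```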

Risk

Definition:

Risk is the potential impact (positive or negative) on an asset or some
characteristic of value that may arise from some present process or from some
future event. In everyday usage, "risk" is often used synonymously with
"probability" and restricted to negative risk or threat. In professional risk
assessments, risk combines the probability of an event occurring with the impact
that event would have. Financial risk is often defined as the unexpected variability
or volatility of returns, and thus includes both potential worse-than-expected and
better-than-expected returns. References to negative risk below should be read as
applying to positive impacts or opportunity (e.g. for loss read "loss or gain")
unless the context precludes this.

Risk is often mapped to the probability of some event which is seen as
undesirable. Usually the probability of that event and some assessment of its
expected harm must be combined into a believable scenario (an outcome) which
combines the set of risk, regret and reward probabilities into an expected value for
that outcome. In statistical decision theory, the risk function of an estimator δ(x)
for a parameter θ, calculated from some observables x, is defined as the
expectation value of the loss function L:

R(θ, δ) = Eθ[ L(θ, δ(x)) ]

where δ(x) is the estimator and θ is the parameter of the estimator.

There are many informal methods used to assess or to measure risk, although it is
not usually possible to measure risk directly. Formal methods measure the value
at risk.
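As an illustration of the risk function of an estimator (our own sketch, assuming squared-error loss and normally distributed observations), the expected loss can be approximated by Monte Carlo simulation:

```python
import random

def risk(estimator, theta, n_obs=10, n_sim=20000, seed=1):
    """Monte Carlo approximation of R(theta, delta) = E[L(theta, delta(x))]
    under squared-error loss L(theta, d) = (theta - d)**2, for samples of
    n_obs normal observations centred at theta (an assumed model)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        x = [rng.gauss(theta, 1.0) for _ in range(n_obs)]
        total += (theta - estimator(x)) ** 2
    return total / n_sim

sample_mean = lambda x: sum(x) / len(x)
# For the sample mean of n normal observations, the risk is sigma^2 / n = 0.1:
print(round(risk(sample_mean, 3.0), 2))   # ≈ 0.1
```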

In scenario analysis risk is distinct from threat. A threat is a very low-probability
but serious event, to which some analysts may be unable to assign a probability in
a risk assessment because it has never occurred, and for which no effective
preventive measure (a step taken to reduce the probability or impact of a possible
future event) is available. The difference is most clearly illustrated by the
precautionary principle, which seeks to reduce threat by requiring it to be reduced
to a set of well-defined risks before an action, project, innovation or experiment is
allowed to proceed.

RAROC

RAROC is a risk-based profitability measurement framework for analysing risk-
adjusted financial performance and providing a consistent view of profitability
across businesses. RAROC is defined as the ratio of risk-adjusted return to
economic capital. Economic capital is a function of market risk, credit risk, and
operational risk. This use of capital based on risk improves the capital allocation
across different functional areas of a bank. The RAROC system allocates capital
for two basic reasons: 1) risk management and 2) performance evaluation.

For risk management purposes, the main goal of allocating capital to individual
business units is to determine the bank's optimal capital structure (i.e. economic
capital allocation is closely correlated with individual business risk).

As a performance evaluation tool, it allows banks to assign capital to business
units based on the economic value added of each unit.
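A one-line sketch of the definition (taking risk-adjusted return to be expected return minus expected losses is one common convention, an assumption on our part):

```python
def raroc(expected_return, expected_loss, economic_capital):
    """RAROC = risk-adjusted return / economic capital, where the
    risk-adjusted return is assumed here to be expected return net of
    expected losses."""
    return (expected_return - expected_loss) / economic_capital

# A business unit earning 12, with expected losses of 2, against
# economic capital of 50 has a RAROC of 20%:
print(raroc(12, 2, 50))   # 0.2
```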

Risk Premium

A risk premium is the minimum difference between the expected value of an
uncertain bet that a person is willing to take and the certain value that he is
indifferent to.
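As an illustration (our own, with log utility as an assumed preference), the risk premium of a gamble can be computed as its expected value minus its certainty equivalent:

```python
import math

def risk_premium(outcomes, probs, u=math.log, u_inv=math.exp):
    """Risk premium = E[W] - certainty equivalent, where the certainty
    equivalent CE satisfies u(CE) = E[u(W)]. Log utility is assumed."""
    expected_value = sum(p * w for p, w in zip(probs, outcomes))
    expected_utility = sum(p * u(w) for p, w in zip(probs, outcomes))
    certainty_equivalent = u_inv(expected_utility)
    return expected_value - certainty_equivalent

# A 50/50 gamble between wealth 50 and 150:
# E[W] = 100, CE = exp(0.5*ln 50 + 0.5*ln 150) = sqrt(50*150) ≈ 86.6
print(round(risk_premium([50, 150], [0.5, 0.5]), 1))   # 13.4
```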

Risk premium in finance

In finance, the risk premium can be the expected rate of return above the risk-free
interest rate.

Debt: In terms of bonds it usually refers to the credit spread (the difference
between the bond interest rate and the risk-free rate).

Equity: In the equity market it is the return of a company stock, a group of
company stocks, or a portfolio of all stock market company stocks, minus the
risk-free rate. The return from equity is the dividend yield plus capital gains. The
risk premium for equities is also called the equity premium.

Standard deviation

In probability and statistics, the standard deviation is the most common measure
of statistical dispersion. Simply put, standard deviation measures how spread out
the values in a data set are. More precisely, it is a measure of the average distance
of the data values from their mean. If the data points are all close to the mean,
then the standard deviation is low (closer to zero). If many data points are very
different from the mean, then the standard deviation is high (further from zero). If
all the data values are equal, then the standard deviation will be zero. The
standard deviation has no maximum value, although it is bounded for most data
sets.

The standard deviation is defined as the square root of the variance. This means it
is the root mean square (RMS) deviation from the arithmetic mean. The standard
deviation is always a positive number (or zero) and is always measured in the
same units as the original data. For example, if the data are distance
measurements in meters, the standard deviation will also be measured in meters.

A distinction is made between the standard deviation σ (sigma) of a whole
population or of a random variable, and the standard deviation s of a sample
drawn from a population. The formulae are given below.

Definition and calculation

The standard deviation of a random variable X is defined as:

σ = √( E[ (X − E(X))² ] )

where E(X) is the expected value of X.

Not all random variables have a standard deviation, since these expected values
need not exist. If the random variable X takes on the values x1, ..., xN (which are
real numbers) with equal probability, then its standard deviation can be computed
as follows. First, the mean of X, denoted x̄, is defined as:

x̄ = (x1 + x2 + ... + xN) / N

Next, the standard deviation simplifies to:

σ = √( (1/N) Σi (xi − x̄)² )

In other words, the standard deviation of a discrete uniform random variable X
can be calculated as follows:

For each value xi calculate the difference between xi and the average
value x̄.

Calculate the squares of these differences. Find the average of the squared
differences; this quantity is the variance σ². Finally, take the square root of the
variance to obtain the standard deviation σ.
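The step-by-step calculation above can be sketched in Python (this is the population standard deviation; the sample values are illustrative):

```python
import math

def std_dev(values):
    """Population standard deviation, following the steps in the text:
    mean, squared deviations, variance, then its square root."""
    n = len(values)
    mean = sum(values) / n                                # x-bar
    squared_diffs = [(x - mean) ** 2 for x in values]     # (xi - x-bar)^2
    variance = sum(squared_diffs) / n                     # sigma^2
    return math.sqrt(variance)                            # sigma

# The values 2, 4, 4, 4, 5, 5, 7, 9 have mean 5 and variance 4:
print(std_dev([2, 4, 4, 4, 5, 5, 7, 9]))   # 2.0
```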

Capital market theory

The capital asset pricing model (CAPM) is used in finance to determine a
theoretically appropriate required rate of return (and thus the price, if expected
cash flows can be estimated) of an asset, if that asset is to be added to an already
well-diversified portfolio, given that asset's non-diversifiable risk. The CAPM
formula takes into account the asset's sensitivity to non-diversifiable risk (also
known as systematic risk or market risk), in a number often referred to as beta (β)
in the financial industry, as well as the expected return of the market and the
expected return of a theoretical risk-free asset.

The model was introduced by Jack Treynor, William Sharpe, John Lintner and
Jan Mossin independently, building on the earlier work of Harry Markowitz on
diversification and modern portfolio theory. Sharpe received the Nobel Memorial
Prize in Economics (jointly with Harry Markowitz and Merton Miller) for this
contribution to the field of financial economics. According to the CAPM, the
relation between the expected return on a given asset i, and the expected return on
a proxy portfolio m (here, the market portfolio) is described as:

E(ri) = rf + βim (E(rm) − rf)

Where:

 E(ri) is the expected return on the capital asset
 rf is the risk-free rate of interest
 βim (the beta) is the sensitivity of the asset returns to market returns:
βim = Cov(ri, rm) / Var(rm)
 E(rm) is the expected return of the market

E(rm) − rf is sometimes known as the market premium or risk premium (the
difference between the expected market rate of return and the risk-free rate of
return).
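A minimal sketch of the CAPM relation (the rate, beta and market-return figures are illustrative):

```python
def capm_expected_return(rf, beta, expected_market_return):
    """CAPM: E(ri) = rf + beta * (E(rm) - rf)."""
    return rf + beta * (expected_market_return - rf)

# With a 4% risk-free rate, a 10% expected market return and beta = 1.5,
# the required return is 4% + 1.5 * 6% = 13%:
print(round(capm_expected_return(0.04, 1.5, 0.10), 4))   # 0.13
```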

For the full derivation see Modern portfolio theory.

Asset pricing

Once the expected return, E(ri), is calculated using CAPM, the future cash flows
of the asset can be discounted to their present value using this rate to establish the
correct price for the asset. In theory, therefore, an asset is correctly priced when
its observed price is the same as its value calculated using the CAPM derived
discount rate. If the observed price is higher than the valuation, then the asset is
overvalued (and undervalued when the observed price is below the CAPM
valuation). Alternatively, one can "solve for the discount rate" for the observed
price given a particular valuation model and compare that discount rate with the
CAPM rate. If the discount rate in the model is lower than the CAPM rate then
the asset is overvalued (and undervalued for a too high discount rate).

Asset-specific required return

The CAPM returns the asset-appropriate required return or discount rate - i.e. the
rate at which future cash flows produced by the asset should be discounted given
that asset's relative riskiness. Betas exceeding one signify more than average
"riskiness"; betas below one indicate lower than average. Thus a more risky stock
will have a higher beta and will be discounted at a higher rate; less sensitive
stocks will have lower betas and be discounted at a lower rate. The CAPM is
consistent with intuition - investors (should) require a higher return for holding a
more risky asset.

Since beta reflects asset-specific sensitivity to non-diversifiable, i.e. market, risk,
the market as a whole, by definition, has a beta of one. Stock market indices are
frequently used as local proxies for the market, and in that case (by definition)
have a beta of one. An investor in a large, diversified portfolio (such as a mutual
fund) therefore expects performance in line with the market.

Risk and diversification

The risk of a portfolio comprises systematic risk and specific risk.
Systematic risk refers to the risk common to all securities, i.e. market risk.
Specific risk is the risk associated with individual assets. Specific risk can be
diversified away (specific risks "average out"); systematic risk (within one
market) cannot. Depending on the market, a portfolio of approximately 15 (or
more) well selected shares might be sufficiently diversified to leave the portfolio
exposed to systematic risk only.

A rational investor should not take on any diversifiable risk, as only non-
diversifiable risks are rewarded. Therefore, the required return on an asset, that is,
the return that compensates for risk taken, must be linked to its riskiness in a
portfolio context - i.e. its contribution to overall portfolio riskiness - as opposed to
its "stand alone riskiness." In the CAPM context, portfolio risk is represented by
higher variance i.e. less predictability.

The efficient (Markowitz) frontier


The CAPM assumes that the risk-return profile of a portfolio can be optimized -
an optimal portfolio displays the lowest possible level of risk for its level of
return. Additionally, since each additional asset introduced into a portfolio further
diversifies the portfolio, the optimal portfolio must comprise every asset,
(assuming no trading costs) with each asset value-weighted to achieve the above
(assuming that any asset is infinitely divisible). All such optimal portfolios, i.e.,
one for each level of return, comprise the efficient (Markowitz) frontier.

Because unsystematic risk is diversifiable, the relevant (systematic) risk of a
portfolio can be measured by beta.

The market portfolio

An investor might choose to invest a proportion of his wealth in a portfolio of
risky assets with the remainder in cash, earning interest at the risk-free rate (or
indeed may borrow money to fund his purchase of risky assets, in which case
there is a negative cash weighting). Here, the ratio of risky assets to risk-free asset
determines overall return; this relationship is clearly linear. It is thus possible to
achieve a particular return in one of two ways:

1) by investing all of one's wealth in a risky portfolio, or 2) by investing a
proportion in a risky portfolio and the remainder in cash (either borrowed or
invested).

For a given level of return, however, only one of these portfolios will be optimal
(in the sense of lowest risk). Since the risk-free asset is, by definition,
uncorrelated with any other asset, option 2) will generally have the lower variance
and hence be the more efficient of the two.

This relationship also holds for portfolios along the efficient frontier: a higher
return portfolio plus cash is more efficient than a lower return portfolio alone for
that lower level of return. For a given risk free rate, there is only one optimal
portfolio which can be combined with cash to achieve the lowest level of risk for
any possible return. This is the market portfolio.

Assumptions of CAPM:

 All investors have rational expectations.
 All investors are risk averse.
 There are no arbitrage opportunities.
 Returns are distributed normally.
 Fixed quantity of assets.
 Perfect capital markets.
 Separation of financial and production sectors; thus, production plans are
fixed.
 Risk-free rates exist with limitless borrowing capacity and universal
access.

Shortcomings of CAPM

The model does not appear to adequately explain the variation in stock returns.
Empirical studies show that low beta stocks may offer higher returns than the
model would predict. Some data to this effect was presented as early as a 1969
conference in Buffalo, New York in a paper by Fischer Black, Michael Jensen,
and Myron Scholes. Either that fact is itself rational (which saves the efficient
markets hypothesis but makes CAPM wrong), or it is irrational (which saves
CAPM, but makes EMH wrong – indeed, this possibility makes volatility
arbitrage a strategy for reliably beating the market).

Capital Market Line

James Tobin (1958) added the notion of leverage to portfolio theory by
incorporating into the analysis an asset which pays a risk-free rate. By combining
a risk-free asset with a portfolio on the efficient frontier, it is possible to construct
portfolios whose risk-return profiles are superior to those of portfolios on the
efficient frontier. The capital market line is the tangent line to the efficient frontier
that passes through the risk-free rate on the expected return axis.

[Figure: Capital Market Line]

In this graphic, the risk-free rate is assumed to be 5%, and a tangent line, called
the capital market line, has been drawn to the efficient frontier passing through
the risk-free rate. The point of tangency corresponds to a portfolio on the efficient
frontier. That portfolio is called the super-efficient portfolio.
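Portfolios on the capital market line satisfy the standard CML relation E(r) = rf + σ · (E(rm) − rf) / σm; a sketch with illustrative numbers (the 11% and 15% tangency-portfolio figures are our own assumptions):

```python
def cml_expected_return(rf, sigma, market_return, market_sigma):
    """Capital market line: the risk-free rate plus the market price of
    risk, (E(rm) - rf) / sigma_m, times the portfolio's risk sigma."""
    return rf + sigma * (market_return - rf) / market_sigma

# Risk-free rate 5%, tangency portfolio with 11% expected return and 15%
# risk; a CML portfolio with 10% risk earns 5% + (6% / 15%) * 10% = 9%:
print(round(cml_expected_return(0.05, 0.10, 0.11, 0.15), 4))   # 0.09
```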

Arbitrage Pricing Theory

Arbitrage pricing theory (APT) holds that the expected return of a financial asset
can be modelled as a linear function of various macro-economic factors or
theoretical market indices, where sensitivity to changes in each factor is
represented by a factor specific beta coefficient. The model derived rate of return
will then be used to price the asset correctly - the asset price should equal the
expected end of period price discounted at the rate implied by model. If the price

diverges, arbitrage should bring it back into line. The theory was initiated by the
economist Stephen Ross in 1976.

If APT holds, then a risky asset can be described as satisfying the following
relation:

E(rj) = rf + bj1·RP1 + bj2·RP2 + ... + bjn·RPn

rj = E(rj) + bj1·F1 + bj2·F2 + ... + bjn·Fn + εj

where

 E(rj) is the risky asset's expected return,
 RPk is the risk premium of factor k,
 rf is the risk-free rate,
 Fk is the macroeconomic factor,
 bjk is the sensitivity of the asset to factor k, also called factor loading,
 and εj is the risky asset's idiosyncratic random shock with mean zero.

That is, the uncertain return of an asset j is a linear relationship among n factors.
Additionally, every factor is also considered to be a random variable with mean
zero.

Note that there are some assumptions and requirements that have to be fulfilled
for the latter to be correct: There must be perfect competition in the market, and
the total number of factors may never exceed the total number of assets.
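A minimal sketch of the APT expected-return relation (the factor names, loadings and risk premia are illustrative):

```python
def apt_expected_return(rf, loadings, risk_premia):
    """APT: E(rj) = rf + sum over factors of bjk * RPk."""
    return rf + sum(b * rp for b, rp in zip(loadings, risk_premia))

# Two factors (say, industrial production and inflation surprises) with
# risk premia of 2% and 1%, and factor loadings of 1.2 and 0.5:
print(round(apt_expected_return(0.03, [1.2, 0.5], [0.02, 0.01]), 4))   # 0.059
```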

IV. The neoclassical theory of production in short-run and long-run

1. Introduction

From a microeconomic point of view, production is nothing other than the
conversion of inputs into outputs. In other words, it is an economic process that
uses factor resources to create a commodity that is suitable for exchange. The
process can include manufacturing, storing, transportation, and packaging.
As a process, production occurs through time and space and is measured as a
“rate of output per period of time”. Therefore we have three aspects to production
processes:

• the quantity of commodity produced
• the form of the good produced
• the temporal and spatial distribution of the commodity produced.

In this way the production process can be defined as any activity that increases
the similarity between the pattern of demand for goods and the quantity, form,
and distribution of these goods available to the market place.
The inputs or resources used in any production process are called factors of
production. Classical economics distinguishes between three factors:

• Land or natural resources – naturally occurring goods such as soil and
minerals that are used in the creation of products.
• Labour – human effort used in production, which includes technical and
marketing expertise.
• Capital goods – human-made goods or means of production which are
used for the production of other goods. These include machinery, tools
and buildings.

Capital goods are those that have previously undergone a production process.
They are previously produced means of production and are sometimes referred to
as “technology” as a factor of production. Investment in capital goods is
important for the future growth of the economy.
These factors were codified originally in the analysis of Adam Smith (1776),
David Ricardo (1817), and the later contributions of Karl Marx, who called these
factors the “holy trinity” of political economy.
This classical view was further developed, and today two additional factors of
production are commonly distinguished:

• Entrepreneurs and managerial skills (people who organize and manage
other productive resources to make goods and services).
• Human capital (the quality of labour resources, which can be improved
through investments, education, and training).

The translation of demands for commodities into demand for factor services
necessitates some clearly defined technologies which tell us how commodities are
produced and how factors are distributed and, in addition, how much the process
of conversion from services to commodities costs. In this view production is a
matter of indirect exchange, and so we extend the tools of analysis derived in the
context of pure exchange to analysing production.

This seminar paper analyses the neoclassical approach to production theory,
concerning the importance of technology in connection with production
functions.

2. The Neoclassical Theories of Production

The above-mentioned idea that production is an indirect exchange was the heart
of the theory of the Lausanne School of Léon Walras (1874) and Vilfredo Pareto
(1896, 1906). These Lausanne theories of production were embedded in the
general equilibrium system. As a result, the basic production unit – the “firm” –
was relegated to a subsidiary role. Indeed, Walras ignored the decision-making
role of producers entirely.
Profit-maximization, the choice of factor inputs and the marginal productivity
theory of distribution are found to a good part in the work of other scholars,
notably the “Paretian” school during its height, where it was consolidated by
Jacob Viner (1931), John Hicks (1939) and Paul Samuelson (1947). To these
approaches belongs the integration of the theory of production into the Paretian
general equilibrium theory as well.
After World War II, the theory of production veered off in another direction,
exploiting the activity analysis and linear programming methods developed by the
Cowles Commission. The “Neo-Walrasian” theory of production (Koopmans,
1951; Debreu, 1959) covers much of the same ground as the Paretian theory,
albeit using somewhat different methods, though the Neo-Walrasians have
asserted the greater “generality” of their methods.
In all of these approaches, however, capital is assumed to be an endowed factor
of production rather than a produced factor of production.

3. The Properties of Production Function

The production function is a mathematical function of input factors that
summarizes the process of conversion into a particular commodity. The
relationship of output to inputs is non-monetary, i.e. a production function relates
physical inputs to physical outputs. Prices and costs are not considered. The
analysis of the output technologically possible from a given set of inputs abstracts
away from the engineering and managerial problems inherently associated with a
particular production process. The engineering and managerial problems of
technical efficiency are assumed to be solved, so that the analysis can focus on
the problems of allocative efficiency.
The general form of a production function was first proposed by P. Wicksteed
(1894) and can be expressed as

Y = f(X1, X2, X3, ..., Xm)

which relates a single output Y to a series of input factors X of production. In the
neoclassical way we have a production technology for the one-output/two-inputs
case. The two inputs we call L (labour) and K (capital). The production set is
essentially the set of technically feasible combinations of output Y and inputs K
and L:

Y = f(K, L)

This form excludes joint production, i.e. a particular process of production yields
no more than one output (no multiple co-products).
The technology's production function states the maximum amount of output
possible from an input bundle and has the form

Y = f(X1, ..., Xn)

3.1 Characteristics

The function f(x) is continuous throughout, single valued and has continuous 1st,
2nd, and 3rd order partial derivatives. The functions presuppose technical
efficiency and state the maximum attainable output from each (X1, ….. Xn).
Inputs and outputs are rates of flow per unit of time t0, where t0 is sufficiently
long to allow for completion of technical process.

The production function (one input, one output):

[Figure: output level Y = f(X); Y' = f(X') is the maximal output level obtainable
from X' input units.]

3.2 Total, average, and marginal physical product

The total physical product of a variable input factor identifies what outputs are
possible using various levels of the variable input. The diagram shows a typical
total product curve: output increases as more input is employed up until point A,
and the maximum output possible is Ym.

[Figure: total product curve rising to its maximum Ym at point A.]

The average physical product is the total product divided by the number of units
of variable input employed. It is the output per unit of input. For example, if 10
employees produce 50 units per day, the average product of the variable labour
input is 5 units per day.
The marginal physical product of a variable input is the rate of change of the
output level as the level of that input changes, holding all other input levels fixed.
Typically the marginal product of one input depends upon the amount used of
other inputs, and it is diminishing if it becomes smaller as the level of the input
increases.
This states that as you add more and more of a variable input, you will reach a
point beyond which the resulting increase in output starts to diminish. This
concept is also known as the law of diminishing marginal returns.
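A minimal numerical sketch of average and marginal product (the example technology Y = 10·√L, with capital held fixed, is our own assumption):

```python
def average_product(f, x):
    """AP = total product / units of the variable input employed."""
    return f(x) / x

def marginal_product(f, x, h=1e-6):
    """MP = rate of change of output as the variable input changes
    (central-difference numerical derivative), other inputs held fixed."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Example technology: Y = 10 * sqrt(L), capital fixed in the background.
f = lambda L: 10 * L ** 0.5

print(round(average_product(f, 25), 2))    # 2.0  (50 units / 25 workers)
print(round(marginal_product(f, 25), 2))   # 1.0  (dY/dL = 5 / sqrt(L))
```

With this technology the marginal product 5/√L falls as L rises, illustrating diminishing marginal returns.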

3.3 Homogeneous and homothetic production functions

There are two special types of production functions, which are seldom found in
reality. The production function Y = f(X1, X2) is said to be homogeneous of
degree n if, given any positive constant k, f(kX1, kX2) = k^n f(X1, X2).

When n > 1, the function exhibits increasing returns;
when n < 1, the function exhibits decreasing returns;
when n = 1, the function exhibits constant returns.

Homothetic functions are monotonic transformations of homogeneous functions;
along any ray from the origin, the marginal rate of technical substitution is
constant.

3.4 Returns-to-Scale

Marginal products describe the change in output level as a single input level
changes.
Returns-to-scale describes how the output level changes as all input levels change
in direct proportion (e.g. all input levels doubled, or halved). When all input
levels are increased proportionately, there need be no diminution of marginal
products, since each input will always have the same amount of the other inputs
with which to work. Input productivities need not fall, and so returns-to-scale can
be constant or increasing.
The elasticity of production measures the sensitivity of total product to a change
in an input in percentage terms (E = %∆Y / %∆L).

3.5 The long-run and the short-runs

The long-run is the circumstance in which a firm is unrestricted in its choice of all
input levels. If all inputs are allowed to be varied, then the diagram would express
outputs relative to total inputs, and the production function would be a long-run
production function.
The short-run is a circumstance in which a firm is restricted in some way in its
choice of at least one input level. There are several reasons for that, such as:

• temporarily being unable to install or remove machinery
• being required by law to meet affirmative action quotas
• having to meet domestic content regulations
• temporarily being unable to cancel contracts

A useful way to think of the long-run is that the firm can choose as it pleases in
which short-run circumstance to be. It is a managerial task to use economic
analysis to make business decisions involving the best allocation of the firm's
scarce resources in order to achieve the firm's goals – in particular, to maximize
profit. The decision agent acts rationally in pursuit of his goal, which is to
maximize profits. He has perfect knowledge of technical production relationships
and of input and product price relationships.

4. Technology

Rather than comparing inputs to outputs, or the choice between two outputs as
shown in the elasticity of substitution, it is also possible to assess the mix of
inputs employed in production.
You can use a lot of labour with a minimal amount of capital, or vice versa, or
any combination in between. For most goods, there are more than just two inputs,
but for better understanding and illustration we use the two-input case.
A technology is a process by which inputs are converted to an output. Usually
several technologies will produce the same product. The question is which is the
best technology, and how do we compare technologies.
Xi denotes the amount used of input i. An input bundle is a vector of the input
levels (X1, X2, ..., Xn). A production plan is an input bundle and an output level
Y. A production plan is feasible if Y ≤ f(X1, ..., Xn). The collection of all feasible
production plans is the technology set.

4.1 Technical rate of substitution

[Figure: perfect-substitutes isoquants – linear, parallel lines in the (X1, X2)
plane.]

Perfect substitution shows the situation of linear and parallel isoquants. The rate
of substitution answers the question: at what rate will a firm substitute one input
for another without changing its output level?

4.2 Fixed proportions technologies (Leontief Technology)

If there is no flexibility in technique, we have fixed input requirements in order to
produce a single unit of output. Consequently we need vY units of capital and uY
units of labour. In other words, K = vY are the capital requirements and L = uY
are the labour requirements. As a result the only technique is L/K = u/v. In other
words, there is a particular fixed proportion of capital and labour required to
produce output. There are constant returns to scale but no substitution is possible.

[Figure: Leontief (no-substitution) isoquants – L-shaped isoquants Y* and Y'
with corners at (K*, L*) and (K', L') along the ray L/K = u/v.]

The technical rate of substitution is the rate at which input 2 must be given up as
input 1 increases so as to keep the output level constant. It is the slope of the
isoquant.

[Figure: an isoquant Y = 100 in the (X1, X2) plane; the TRS is its slope.]

4.3 The Cobb-Douglas technology

In contrast to Leontief, the Cobb-Douglas production function allows for
substitution. The production function has the form Y = a·L^b·K^c.
The original version, Y = a·L^b·K^(1-b), with constant returns to scale
(b + 1 - b = 1), was introduced by Cobb and Douglas in 1928. They estimated the
production function of U.S. manufacturing output for the years 1899-1922.

If b + c = 1, there are constant returns;
if b + c > 1, increasing returns;
if b + c < 1, decreasing returns to scale.

The Cobb-Douglas function is homothetic, i.e. if all inputs are multiplied by λ,
the output is multiplied by a function of λ; in fact it is homogeneous. If relative
prices change, producers will want to change the combination of inputs, and the
technology permits this (unlike Leontief).

The elasticity of substitution measures how easily inputs can be substituted if
relative prices change: the percentage change of the ratio of inputs for a given
percentage change of the price ratio. For a Cobb-Douglas production function, the
elasticity of substitution is 1, i.e. if the relative prices change by 1%, the ratio of
inputs will change by 1%. This is so because for a Cobb-Douglas production
function, in the optimal combination of inputs, the ratio of total expenditures for
the inputs is constant.
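A quick numerical check of the Cobb-Douglas form and its returns to scale (the parameter values a = 1, b = 0.3, c = 0.7 are our own illustration):

```python
def cobb_douglas(L, K, a=1.0, b=0.3, c=0.7):
    """Cobb-Douglas technology Y = a * L**b * K**c; the parameter values
    are illustrative assumptions."""
    return a * L ** b * K ** c

# Returns to scale: with b + c = 1, doubling both inputs doubles output,
# since f(2L, 2K) = 2**(b + c) * f(L, K).
y1 = cobb_douglas(10, 20)
y2 = cobb_douglas(20, 40)
print(round(y2 / y1, 6))   # 2.0
```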

For the Cobb-Douglas technology, all isoquants are hyperbolic, asymptotically
approaching, but never touching, either axis.

[Figure: a Cobb-Douglas isoquant in the (X1, X2) plane, hyperbolic and
asymptotic to both axes.]

4.4 Well-behaved technologies

A well-behaved technology is monotonic and convex. Monotonicity means: more
of any input generates more output.
Convexity means: if the input bundles X′ and X″ both provide Y units of output,
then the mixture tX′ + (1-t)X″ provides at least Y units of output, for any
0 < t < 1. Convexity implies that the TRS increases (becomes less negative) as
X1 increases (the isoquant is convex to the origin).

[Figure: left – convex isoquants Y = 100 and Y = 120 in the (X1, X2) plane;
right – monotonic and non-monotonic production functions.]

5. Summary

The production theory as shown simplifies the production reality of a firm, yet it
is useful for understanding the characteristics of production processes. But there
are some critical annotations to be made. Especially in the decision-making
process, the neoclassical theory restricts its producer to only one person making a
decision. Today we have many modern multi-owner corporations in which
hundreds of shareholders with conflicting desires have decision-making power.
The Paretian firm is owned by a single entrepreneur who has an unchallenged
power of decision over all aspects such as product, technique of production,
hiring of factors, etc., but not over any prices; he considers prices as "given".
Today producers take prices as variables and not as parameters for their decisions
and are often confronted with imperfect competition. Therefore we have to
analyse and consider the cost functions within a production process as well.

V. The neoclassical theory of cost in short-run and long-run

Introduction

Neoclassical economics refers to a general approach (a "metatheory") to
economics based on supply and demand which depends on individuals (or any
economic agent) operating rationally, each seeking to maximize their individual
utility or profit by making choices based on available information. Mainstream
economics is largely neoclassical in its assumptions, at least at the microeconomic
level. There have been many critiques of neoclassical economics, often
incorporated into newer versions of neoclassical theory as circumstances change.

Overview

Neoclassical economics is a common element of several schools of thought in
economics. There is not complete agreement on what is meant by neoclassical
economics, and the result is a wide range of neoclassical approaches to various
problem areas and domains, ranging from neoclassical theories of labor to
neoclassical theories of demographic changes.
As expressed by E. Roy Weintraub, neoclassical economics rests on three
assumptions, although certain branches of neoclassical theory may have different
approaches:

• People have rational preferences among outcomes that can be identified
and associated with a value.
• Individuals maximize utility and firms maximize profits.
• People act independently on the basis of full and relevant information.

From these three assumptions, neoclassical economists have built a structure to
understand the allocation of scarce resources among alternative ends; in fact,
understanding such allocation is often considered the definition of economics to
neoclassical theorists. Here is how William Stanley Jevons presented the basic
problem of economics:

Given, a certain population, with certain needs and powers of production, in
possession of certain lands and other sources of material: required, the mode of
employing their labour which will maximize the utility of their produce.

From the basic assumptions of neoclassical economics comes a wide range of
theories about various areas of economic activity. For example, profit
maximization lies behind the neoclassical theory of the firm, while the derivation
of demand curves leads to an understanding of consumer goods, and the supply
curve allows an analysis of the factors of production. Utility maximization is the
source for the neoclassical theory of consumption, the derivation of demand
curves for consumer goods, and the derivation of factor supply curves and
reservation demand.
Neoclassical economics emphasizes equilibria, where equilibria are the solutions
of individual maximization problems. Regularities in economies are explained by
methodological individualism, the doctrine that all economic phenomena can be
ultimately explained by aggregating over the behavior of individuals. The
emphasis is on microeconomics. Institutions, which might be considered as prior
to and conditioning individual behavior, are de-emphasized. Economic
subjectivism accompanies these emphases. See also general equilibrium.

Production and cost

• Production relationship: Q = f(L, K)
• Inputs (L and K) and output (Q) are all expressed as flows (per unit time),
not stocks
• Capital usually refers to flows of "services" provided by durable producer
goods (which are themselves the outputs of production)

Example: Beer production

• five ingredients: water, yeast, hops, grain, and malted barley
• labor: brewmaster, Laverne and Shirley
• capital: brewing vats, bottling/canning facilities, delivery trucks
• IP: recipe, business model, trademark, brand image
• NB: the recipe is 5,000 years old!

Production function:

• You can think of this production function as the case in which all inputs
besides labor are held constant.
• The production function is the "efficient" frontier of production possibilities.

Average and marginal products: AP_L = Q/L (output per unit of labor) and
MP_L = ∆Q/∆L (output added by the last unit of labor).

Relation among TP, AP, MP: where MP_L > AP_L, AP_L is rising; where MP_L < AP_L,
AP_L is falling; MP_L = AP_L at the maximum of AP_L.

Looking ahead - cost function with a single input: a cost function C(Q) gives the
minimum cost of producing each possible quantity of output.

Case of a single input: C(Q) = w·L(Q), where L(Q) is the minimum labor needed to
produce Q, so C(Q) = w·f^(-1)(Q). Example: Q = f(L) = L^(1/2).

Inverting the production function: L(Q) = f^(-1)(Q) = Q^2, so C(Q) = w·Q^2.
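This inversion can be sketched numerically. A minimal Python sketch of the text's example Q = L^(1/2); the wage w = 10 is an assumed illustrative value:

```python
# Single-input cost function for the example Q = f(L) = L**0.5.
# Inverting f gives the labor requirement L(Q) = Q**2, so C(Q) = w * Q**2.

def labor_requirement(Q):
    """Minimum labor needed to produce Q when Q = L**0.5."""
    return Q ** 2

def cost(Q, w):
    """Minimum cost of producing Q at wage w: C(Q) = w * L(Q)."""
    return w * labor_requirement(Q)

w = 10.0  # assumed wage per unit of labor
for Q in (1.0, 2.0, 4.0):
    print(Q, cost(Q, w))  # cost rises with the square of output
```

Doubling output quadruples cost here, a first hint of the diminishing returns to the single input.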

Cobb-Douglas production: Q = a·L^α·K^β, where a > 0, α > 0, and β > 0.

Isoquants: all possible combinations of inputs that exactly produce a given output
level: all (L, K) such that: f(L, K) = Q0 (constant).

Example Cobb-Douglas:
Q = K^(1/2)·L^(1/2)  ⇒  Q^2 = K·L  ⇒  K(L; Q) = Q^2/L

Marginal rate of technical substitution

Definition: MRTS measures the additional amount of one input, L, that the firm
requires in place of one unit less of another input, K, to be able to produce the
same output as before: MRTS_L,K = -dK/dL (for constant output). Note: marginal
products and the MRTS are related; along an isoquant,

MP_L·dL + MP_K·dK = 0  ⇒  MP_L / MP_K = -dK/dL = MRTS_L,K

• NB: similarity with indifference curves and MRS!


• If both marginal products are positive, the slope of the isoquant is negative
and so MRTS > 0.
• If we have diminishing marginal returns, we also have a diminishing marginal
rate of technical substitution: dMRTS_L,K / dL < 0.
• Cobb-Douglas: MRTS_L,K = αK / (βL)
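The Cobb-Douglas formula can be checked against a finite-difference slope of the isoquant; the exponents and the point (L, K) = (4, 9) below are assumed for illustration:

```python
# Verify MRTS = alpha*K/(beta*L) for Q = L**alpha * K**beta by computing
# -dK/dL along the isoquant with a central finite difference.

alpha, beta = 0.5, 0.5  # assumed Cobb-Douglas exponents

def f(L, K):
    return L ** alpha * K ** beta

def K_on_isoquant(L, Q0):
    # Solve Q0 = L**alpha * K**beta for K
    return (Q0 / L ** alpha) ** (1.0 / beta)

L0, K0 = 4.0, 9.0
Q0 = f(L0, K0)                 # isoquant through (4, 9)
h = 1e-6
slope = (K_on_isoquant(L0 + h, Q0) - K_on_isoquant(L0 - h, Q0)) / (2 * h)
mrts_numeric = -slope          # MRTS = -dK/dL holding output constant
mrts_formula = alpha * K0 / (beta * L0)
print(mrts_numeric, mrts_formula)  # both approximately 2.25
```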

Elasticity of substitution

Definition: The elasticity of substitution, σ, measures how the capital-labor
ratio, K/L, responds to a change in the ability to trade off inputs (i.e.,
MRTS_L,K):

σ = %∆(K/L) / %∆MRTS_L,K = [d(K/L) / dMRTS_L,K] · [MRTS_L,K / (K/L)].

Note: σ is a pure number that measures the ease with which a firm can substitute
one input for another.

Example: Cobb-Douglas. Since MRTS_L,K = (α/β)(K/L), we have K/L = (β/α)·MRTS_L,K,
so d(K/L)/dMRTS_L,K = β/α and MRTS_L,K/(K/L) = α/β. Hence

σ = [d(K/L) / dMRTS_L,K] · [MRTS_L,K / (K/L)] = (β/α)·(α/β) = 1.
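Because K/L is proportional to the MRTS for any Cobb-Douglas function, the log-elasticity is exactly one whatever values are used; the exponents and MRTS points below are assumed for illustration:

```python
# sigma = d ln(K/L) / d ln(MRTS), computed from two points. For Cobb-Douglas,
# K/L = (beta/alpha) * MRTS, so the elasticity is 1.

import math

alpha, beta = 0.3, 0.7  # assumed exponents

def capital_labor_ratio(mrts):
    return (beta / alpha) * mrts

m1, m2 = 2.0, 3.0
sigma = (math.log(capital_labor_ratio(m2)) - math.log(capital_labor_ratio(m1))) / (
    math.log(m2) - math.log(m1)
)
print(sigma)  # approximately 1.0
```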

Returns to scale

Definition: will output increase more or less proportionately when ALL inputs
increase by the same percentage amount?

RTS = [%∆Q]/[%∆(all inputs)]

If a 1% increase in all inputs results in … then we have returns to scale of this kind:
• less than a 1% increase in output → decreasing (DRTS)
• exactly a 1% increase in output → constant (CRTS)
• greater than a 1% increase in output → increasing (IRTS)

Visualizing returns to scale: starting from a bundle that produces Q0, double all
inputs and call the new output Q1. Check: if Q1 > 2Q0 then IRTS; if Q1 = 2Q0 then
CRTS; if Q1 < 2Q0 then DRTS.


Example: Cobb-Douglas production, Q = L^α·K^β:

- if α + β > 1 then IRTS
- if α + β = 1 then CRTS
- if α + β < 1 then DRTS
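These three cases can be checked directly by doubling both inputs; the exponents and the base input bundle are assumed for illustration:

```python
# Classify returns to scale for Q = L**alpha * K**beta by doubling all inputs.

def f(L, K, alpha, beta):
    return L ** alpha * K ** beta

def rts(alpha, beta, L=2.0, K=3.0):
    Q0 = f(L, K, alpha, beta)
    Q1 = f(2 * L, 2 * K, alpha, beta)  # scale every input by 2
    if Q1 > 2 * Q0 + 1e-12:
        return "IRTS"
    if Q1 < 2 * Q0 - 1e-12:
        return "DRTS"
    return "CRTS"

print(rts(0.6, 0.6), rts(0.5, 0.5), rts(0.3, 0.4))  # IRTS CRTS DRTS
```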

Extreme production functions

1. Linear: Q = f(L, K) = aL + bK
• MRTS constant
• constant returns to scale
• σ = ∞

2. Fixed proportions: Q = f(L, K) = min{aL, bK}
• L-shaped isoquants
• MRTS varies (0, ∞, or undefined at the kink)
• σ = 0
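Both extremes are easy to see numerically; the coefficients a = 2, b = 3 are assumed for illustration:

```python
# Perfect substitutes Q = aL + bK (sigma = infinity) versus fixed
# proportions Q = min(aL, bK) (sigma = 0, L-shaped isoquants).

a, b = 2.0, 3.0  # assumed coefficients

def linear(L, K):
    return a * L + b * K

def leontief(L, K):
    return min(a * L, b * K)

# Substitutes: an all-labor and an all-capital bundle can yield the same output.
print(linear(3.0, 0.0), linear(0.0, 2.0))  # 6.0 6.0

# Fixed proportions: capital beyond the required ratio adds nothing.
print(leontief(3.0, 2.0), leontief(3.0, 5.0))  # 6.0 6.0
```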

Summary I.

1. Production function relates output to the efficient use of all possible input
levels.
2. The effect of an input on production can be measured by its average and
marginal products.
3. An isoquant gives all input combinations that generate same level of
output.
4. The ability to substitute one input for another is measured by the
Marginal Rate of Technical Substitution (MRTS), which normally
satisfies diminishing MRTS.
5. The production function, isoquant and MRTS have a direct counterpart in
consumer theory to the utility function, the indifference curve and MRS.

6. We can summarize ability to tradeoff one input for another by the
elasticity of substitution.
7. Returns to scale measure how output increases with the proportionate
increase in all inputs.

Cost minimization

Significance and meaning of economic cost

Why care about costs?


• Should drive business decisions regarding price, production, investment,
etc.
• Affects which companies and technologies succeed, and which ones fail.
• Determines the size of firms.
• Determines the level, structure and trends in prices paid for goods and
services.

Meaning of economic cost:


• Measure the use of resources in the production of goods and services.
• Accountants measure only explicit expenses (and sometimes not even those).
• "Opportunity cost" includes the value of employed resources in their best
alternative use; what matters is the expenditure affected by a decision.
• There may also be "external costs" borne by those not involved in production
(e.g. pollution).

Taxonomy of costs

Total v. Average v. Marginal


• AC = TC / Q, also referred to as "unit costs"
• MC = ∆TC / ∆Q, sometimes called "incremental" cost

Fixed versus variable

• how costs vary with level of output


• [total cost] = [variable cost] + [fixed cost]

Short run versus long run

• as before, depends on the time period considered, and hence on whether
inputs are "fixed" or "variable"
• note the distinction between "fixed cost" and "fixed factor"

Sunk versus avoidable (non-sunk)

• the difference depends on whether a cost can be avoided by some decision
• e.g., a fixed cost can be avoided by shutting down
• usually we treat variable costs as avoidable and fixed costs as sunk
• because they are unavoidable, "sunk" costs should be ignored when making
decisions

Other cost distinctions

• production versus transaction costs
• one-time versus recurring costs

Cost minimization

The firm's problem: A profit-maximizing firm won't spend more to produce its
output than it has to: minimize_L,K TC = rK + wL, subject to f(L, K) = Q0.

The solution:

In words: find the cheapest input combination that produces the desired level of
output.
Iso-quant curve: input combinations that produce the same quantity of output;
slope of the iso-quant = -MRTS_L,K = -MP_L/MP_K.
Iso-cost lines: input combinations that cost the same amount: wL + rK = C (a
constant), so K = (C - wL)/r; the slope of an iso-cost line = ∆K/∆L (along the
line) = -w/r. Compare: budget lines.

Putting the two together: MRTS_L,K = MP_L / MP_K = w / r

Solution, another view: MP_L / w = MP_K / r

• 1/w = the amount of labor that can be purchased for $1.


• MPL = the amount of output that can be produced with last unit of labor.
• MPL/w = output derived from last $ spent on labor.
• Similar interpretation for capital.
• Therefore, equate incremental output of last $ spent on each input across
inputs.
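A worked sketch of the tangency condition for the symmetric Cobb-Douglas case Q = L^(1/2)·K^(1/2); the output target and factor prices below are assumed. Here MRTS = K/L = w/r combined with the output constraint gives the input demands in closed form:

```python
# Cost minimization for Q = L**0.5 * K**0.5: tangency K/L = w/r plus
# f(L, K) = Q gives L* = Q*sqrt(r/w), K* = Q*sqrt(w/r), cost = 2*Q*sqrt(w*r).

import math

def input_demands(Q, w, r):
    L = Q * math.sqrt(r / w)
    K = Q * math.sqrt(w / r)
    return L, K

Q, w, r = 10.0, 4.0, 9.0  # assumed output target and factor prices
L, K = input_demands(Q, w, r)
total_cost = w * L + r * K
print(L, K, total_cost)  # approximately 15.0, 6.667, 120.0

# "Another view": the last dollar spent on each input buys the same output.
MPL = 0.5 * (K / L) ** 0.5
MPK = 0.5 * (L / K) ** 0.5
assert abs(MPL / w - MPK / r) < 1e-9
```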

Comparative statics

• Output Expansion Path: L(Q) and K(Q) are labor and capital levels that
minimize cost.
• Plot optimal cost-minimizing input combinations as output increases (i.e.,
moves in the northeast direction).
• If the cost-minimizing quantity of an input rises (falls) with output, then it
is a “normal” (“inferior”) input.
• Compare: income-consumption curve.

Factor Price Change: L(w; Q) gives labor that minimizes cost for each wage rate –

• increase the price of one input (e.g., wage).


• factor substitution: All else equal, an increase in w must decrease labor and
increase capital due to diminishing MRTSL,K.
• compare: price-consumption curve.

Short-run costs

What is the “short run”?


• period over which one input (e.g., capital) is "fixed".
• short-run production function: Q = f(L, K̄).
• payment to the "fixed factor" K̄ becomes a fixed cost.

Short-run cost: C(Q; K̄) = w·L(Q; K̄) + r·K̄

• variable cost: just the variable/labor expense, SRVC = w·L(Q; K̄)
• fixed cost: unavoidable expense of the "fixed factor", SRFC = r·K̄
• input demand functions are the solutions to the short-run cost minimization
problem
• so demand for the variable input depends on the availability of the "fixed
factor"

Short run average costs

SRATC = C(Q; K̄) / Q = [w·L(Q; K̄) + r·K̄] / Q

Decomposed into fixed and variable components:
• variable: SRAVC = w·L(Q; K̄) / Q
• fixed: SRAFC = r·K̄ / Q

Properties

• SRAVC typically has the “U shape”


• since TC > VC, SRATC > SRAVC (where difference is SRAFC)
• SRAFC falls to 0 as Q increases

Short run marginal cost

• addition to cost of producing the last unit of output: SRMC = ∆C/∆Q = ∂C/∂Q
• note that any fixed cost is irrelevant to marginal cost: SRMC = ∆VC / ∆Q

Properties
• diminishing marginal product ⇒ SRMC rising (and eventually SRAVC rising)
• when SRMC > SRATC, SRATC is rising (and vice versa)
• consequently, SRMC = SRATC at the minimum of SRATC; similarly, SRMC = SRAVC
at the minimum of SRAVC
• compare with the marginal product: SRMC = ∆(w·L(Q; K̄))/∆Q = w·(∆Q/∆L)^(-1)
= w / MP_L
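These short-run properties can be illustrated with Q = (L·K̄)^(1/2), for which L(Q; K̄) = Q^2/K̄; the factor prices and K̄ below are assumed. SRMC should cross SRATC exactly at the minimum of SRATC:

```python
# Short-run costs when Q = (L * Kbar)**0.5, so L(Q; Kbar) = Q**2 / Kbar:
#   SRVC = w*Q**2/Kbar,  SRFC = r*Kbar,
#   SRATC = w*Q/Kbar + r*Kbar/Q,  SRMC = 2*w*Q/Kbar.

import math

w, r, Kbar = 4.0, 9.0, 10.0  # assumed wage, rental rate, fixed capital

def sratc(Q):
    return w * Q / Kbar + r * Kbar / Q

def srmc(Q):
    return 2 * w * Q / Kbar

Qstar = Kbar * math.sqrt(r / w)  # minimizer of SRATC
print(Qstar, sratc(Qstar), srmc(Qstar))  # 15.0 12.0 12.0 — MC meets ATC at its minimum
```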

Comparing short and long run

Recall
• SRTC = w·L(Q; K̄) + r·K̄ (i.e., one input fixed)
• LRTC = w·L(Q) + r·K(Q) (i.e., all inputs vary freely)

Consequences of flexibility
• SRTC ≥ LRTC and SRATC ≥ LRATC for all Q, with equality at the output level for
which K̄ is the cost-minimizing capital stock
• SRMC is steeper than LRMC

Scaleability

The problem:

• business model works for limited market: certain “lot size,” geographic area,
customer type, and so on.
• can it be replicated for broader market?

Success Stories:

• geographic expansion: McDonald’s, Domino’s, Starbucks, AOL


• product expansion: GE, Dell, Staples
• both: Walmart

Failures:

• professional services, video rentals


• grocery delivery (viz., WebVan)

Summary II.

1. Opportunity cost is the relevant notion of economic cost.


2. A profit-maximizing firm will minimize the cost of producing its chosen
level of output.
3. Costs are minimized when the MRTS equals the input price ratio.
4. The input demand functions show how the cost minimizing quantities of
inputs vary with the quantity of the output and the input prices.
5. The short-run cost minimization problem solves the firm's problem when
one or more inputs are fixed. Returns to scale have a counterpart in the
shape of the cost function, captured by the degree of "economies of scale."

Criticisms of neoclassical economics

Neoclassical economics is sometimes criticised for having a normative bias. In
this view, it does not focus on explaining actual economies, but instead on
describing a "utopia" in which Pareto optimality obtains. Key assumptions of
neoclassical economics which are widely criticized as unrealistic include:
The focus on individuals in the economy may obscure analysis of wider long term
issues, such as whether the economic system is desirable and stable on a finite
planet of limited natural capital.
The assumption that individuals act rationally may be viewed as ignoring
important aspects of human behavior. Many see "economic man" as demonstrably
different from real human beings, and the model is increasingly criticized on
that ground. The assumption of rational expectations, which has been introduced
in some more modern neoclassical models (sometimes also called new classical),
may likewise be strongly criticized on the grounds of realism.
Large corporations might perhaps come closer to the neoclassical ideal of profit
maximisation, but this is not necessarily viewed as desirable if this comes at the
expense of a "locust-like" neglect of wider social issues.
There are also problems with making neoclassical general equilibrium theory
compatible with an economy that develops over time and includes capital goods.
This was explored in a major debate in the 1960s - the Cambridge Capital Controversy -
about the validity of neoclassical economics, with an emphasis on the economic
growth, capital, aggregate theory, and the marginal productivity theory of
distribution. There were also internal attempts by neoclassical economists to
extend the Arrow-Debreu model to disequilibrium investigations of stability and
uniqueness. However, a result known as the Sonnenschein-Mantel-Debreu
theorem suggests that the assumptions that must be made to ensure that the
equilibrium is stable and unique are quite restrictive.
In the opinion of some, these developments have found fatal weaknesses in
neoclassical economics. Economists, however, have continued to use highly
mathematical models, and many equate neoclassical economics with economics,
unqualified. Mathematical models include those in game theory, linear
programming, and econometrics, many of which might be considered non-
neoclassical. So economists often refer to what has evolved out of neoclassical
economics as "mainstream economics". Critics of neoclassical economics are
divided into those who think that the highly mathematical method is inherently
wrong and those who think that the mathematical method is potentially good even
if contemporary methods have problems.
The basic theory of the downward-sloping aggregate demand curve for any
product is criticized for the allegedly too strong assumption that individual
consumers have identical preferences which do not change when the wealth of an
individual changes (some critics claim that these assumptions are not revealed
to young students until their faith in the discipline is strong enough). In
general, allegedly unrealistic assumptions are among the most common criticisms
of neoclassical economics. For example, many theories assume perfect knowledge
for market actors, and the most common theory of financial markets assumes that
debts are always repaid and that any actor can borrow as much as he wants at any
given point in time.

The basic theory of production in neoclassical economics is criticized for
assuming the wrong rationale for producers. According to the theory, increasing
production costs are the reason producers do not produce beyond a certain
amount. Some empirical counter-arguments claim that most producers in the
economy do not make their production decisions in the light of increasing
production costs (for example, they often have additional capacity that could be
brought into use if producing more were desirable).
Variables such as supply and demand, which are independent at the individual
level, are (allegedly wrongly) assumed to be independent at the aggregate level
as well. This criticism has been applied to many central theories of
neoclassical economics.
The theory of perfect competition is criticized on the claim that it wrongly
assumes the demand curve facing one firm is flat, while in fact it must be
(very) slightly downward sloping, since the demand curve for an individual firm
is a part of the aggregate demand curve, which is not flat. Taking this into
account, the critics argue, would ruin the theory.
The critique of the assumption of rationality is not confined to social theorists and
ecologists. Many economists, even contemporaries, have criticized this vision of
economic man. Thorstein Veblen put it most sardonically:
"a lightning calculator of pleasures and pains, who oscillates like a
homogeneous globule of desire of happiness under the impulse of stimuli that
shift him about the area, but leave him intact."
Herbert Simon's theory of bounded rationality has probably been the most
influential of the heterodox approaches. Is economic man a first approximation to
a more realistic psychology, an approach only valid in some sphere of human
lives, or a general methodological principle for economics? Early neoclassical
economists often leaned toward the first two approaches, but the latter has
become prevalent.
Neoclassical economics is also often seen as relying too heavily on complex
mathematical models, such as those used in general equilibrium theory, without
enough regard to whether these actually describe the real economy. Many see an
attempt to model a system as complex as a modern economy by a mathematical
model as unrealistic and doomed to failure. A famous answer to this criticism is
Milton Friedman's claim that theories should be judged by their ability to
predict events rather than by the realism of their assumptions. Naturally, many
claim that neoclassical economics (as well as other branches of economics) has
not been very good at predicting events.
Critics of neoclassical models accuse them of copying 19th-century mechanics and
the "clockwork" model of society, which seems to justify elite privileges as
arising "naturally" from a social order based on economic competition. This is
echoed by modern critics in the anti-globalization movement, who often blame
neoclassical theory, as applied by the IMF in particular, for inequities in
global debt and trade relations. They assert it ignores the complexity of nature
and of human creativity, and seeks mechanical ideas like equilibrium:
And in Poinsot's Elements de Statique..., which was a textbook on the theory of
mechanics bristling with systems of simultaneous equations to represent, among
other things, the mechanical equilibrium of the solar system, Walras found a
pattern for representing the catallactic equilibrium of the market system.
(William Jaffé)

VI. Theory of the firm in perfect competition and monopolistic competition

1. Theory of the firm

The “theory of the firm” is a relatively modern economic construct and one with
several variants. Firms produce goods and provide services and, as such, they play
an important part in the supply/demand interaction that characterizes the
economic order of non-socialist societies. A business firm’s costs, prices, and
economic power naturally matter to theoretical economists as well as to those
shaping practical economic policy. Economists since Adam Smith have been
committed to the notion that markets allocate resources better than alternative
means do.27
One early and popular perception of the modern firm linked the size and structure
of business enterprises to the state of technology. Technological advances make
new forms of production and organization possible. The productive genius of
Henry Ford, for example, was possible because technology made it feasible to
mass produce and market automobiles. Advances in transportation,
communications, and production technologies in the late nineteenth century
invited, if not compelled, the growth of interstate business firms in the United
States.
Neoclassical economic models could capture the characteristics of perfectly
competitive and monopolistic markets as outlined in chapter 2 and 3. Since most
industries fit neither model, however, economists also explored the world of
imperfectly competitive markets, oligopolistic industries, and “monopolistic
competition.” The behaviour and performance of firms in markets with these
characteristics cannot be predicted with the same certainty as that of producers
in perfectly competitive markets or their monopolistic alternative, but it can
be analyzed using the tools of neoclassical economics.28

1.1 Profit maximization and loss minimization

The competitive firm tries to maximise profits, given that it is a price taker.29
Price is therefore exogenous and the firm chooses only quantity. It therefore
tries to maximise π = pq − c(q) by choice of q, where c(q) is the (total) cost
function of the firm.30 From this behaviour the optimization rule can be deduced,
which states that the marginal revenue of each action must equal its marginal
cost. Or in other words: at the profit-maximizing level of output, marginal
revenue and marginal cost are exactly equal.

27
Garvey, G. E. (2003), page 525-540

28
In many sectors of the economy markets are best described by the term oligopoly - where a few
producers dominate the majority of the market and the industry is highly concentrated. In a
duopoly two firms dominate the market although there may be many smaller players in the
industry.

29
Meaning that the price of the firm’s output is the same regardless of the quantity that the firm
decides to produce.
30
A firm's costs reflect its production process.

To extend this analysis of profit maximization the cost curves can be considered:
The firm’s marginal-cost curve (MC) is upward sloping31, the average-total-cost
(ATC) curve is U-shaped. And the marginal-cost curve crosses the average-total-
cost curve at the minimum of average total costs.

[Figure 1.1 plots costs and revenue against quantity: the horizontal price line
P = AR = MR crosses the rising MC curve at QMAX; at Q1 marginal cost MC1 lies
below marginal revenue, and at Q2 marginal cost MC2 lies above it. The U-shaped
ATC and AVC curves are also shown. The firm maximizes profit by producing the
quantity at which marginal cost equals marginal revenue.]

Figure 1.1: Profit maximization for a competitive firm32

The market price (P) equals marginal revenue (MR) and average revenue (AR).
At the quantity Q1, marginal revenue MR1 exceeds marginal cost MC1, so raising
production increases profit. At the quantity Q2 marginal cost MC2 is above
marginal revenue MR2, so reducing production increases profit. The profit
maximizing quantity Qmax is found where the horizontal price line intersects the
marginal-cost curve.33
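A minimal numeric sketch of the P = MC rule; the quadratic cost function and all numbers are illustrative assumptions, not the figure's values:

```python
# Price-taking firm with c(q) = q**2 + F: MC(q) = 2q, so P = MC gives q* = P/2.

def profit(q, p, F):
    return p * q - (q ** 2 + F)

p, F = 20.0, 50.0  # assumed market price and fixed cost
q_star = p / 2     # solves p = MC(q) = 2q
print(q_star, profit(q_star, p, F))  # 10.0 50.0

# Producing one unit more or one unit less lowers profit, as the MR/MC argument says.
assert profit(q_star, p, F) > profit(q_star - 1, p, F)
assert profit(q_star, p, F) > profit(q_star + 1, p, F)
```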

1.2 Economies and diseconomies of scale

As outlined, cost considerations are crucial for a profit-maximizing firm, and
in this context economies of scale are important.34 The shape of the long-run
average-total-cost curve (see figure 1.2) conveys important information about
the technology of producing a good. When long-run average total cost declines as
output increases, there are said to be economies of scale. When long-run average
total cost rises as output increases, there are said to be diseconomies of scale. And

31
Under perfect market conditions with the firm being price taker the marginal-revenue equals the
market price.
32
Mankiw, N.G. (1998), p. 296
33
Varian, R. (1984), p. 22-46
34
Adam Smith identified the division of labor and specialization as the two key means to achieve
a larger return on production. Through these two techniques, employees would not only be able
to concentrate on a specific task, but with time, improve the skills necessary to perform their
jobs and production level increases.

when long-run average total cost does not change, there are said to be constant
returns to scale.35

Figure 1.2: Long-run average total cost curve

2. Theory of the firm under perfect competition

The degree to which a market or industry can be described as competitive
depends in part on how many suppliers are seeking the demand of consumers and
the ease with which new businesses can enter and exit a particular market in the
long run.
The spectrum of competition ranges from highly competitive markets where there
are many sellers, each of whom has little or no control over the market price - to a
situation of pure monopoly where a market or an industry is dominated by one
single supplier who enjoys considerable benefits in setting prices, unless subject
to some form of direct regulation by the government.
Competitive markets operate on the basis of a number of assumptions. When
these assumptions are dropped - we move into the world of imperfect
competition. These assumptions are discussed below:36

1) Many suppliers each with an insignificant share of the market – this means that
each firm is too small relative to the overall market to affect price via a change in
its own supply – each individual firm is assumed to be a price taker.

2) An identical output produced by each firm – in other words, the market
supplies homogeneous or standardised products that are perfect substitutes for
each other. Consumers perceive the products to be identical.

3) Consumers have perfect information about the prices all sellers in the market
charge – so if some firms decide to charge a price higher than the ruling market
price, there will be a large substitution effect away from this firm.

35
Mankiw, N.G. (1998), page 284
36
E.g.: Mankiw, N.G. (1998), page 291 following

4) All firms (industry participants and new entrants) are assumed to have equal
access to resources (technology, other factor inputs) and improvements in
production technologies achieved by one firm can spill-over to all the other
suppliers in the market.

5) There are assumed to be no barriers to entry & exit of firms in long run – which
means that the market is open to competition from new suppliers – this affects the
long run profits made by each firm in the industry. The long run equilibrium for a
perfectly competitive market occurs when the marginal firm makes normal profit
only in the long term.

6) No externalities in production and consumption, so that there is no
divergence between private and social costs and benefits.

2.1 Short Run Price and Output for the Competitive Industry and Firm

In the short run the equilibrium market price is determined by the interaction
between market demand and market supply. Figure 2.1 shows that price P1 is the
market-clearing price, and this price is taken by each of the firms. Because
the market price is constant for each unit sold, the AR curve also becomes the
marginal revenue curve (MR). A firm maximises profits when marginal revenue
= marginal cost. In the figure, the profit-maximising output is Q1. The
firm sells Q1 at price P1. The shaded area is the economic profit made in the
short run, because the ruling market price P1 is greater than average total cost.37

Figure 2.1: Economic Profit38

Not all firms make supernormal profits in the short run. Their profits depend on
the position of their short run cost curves. Some firms may be experiencing sub-
normal profits because their average total costs exceed the current market price.
Other firms may be making normal profits where total revenue equals total cost
(i.e. they are at the break-even output). In diagram 2.2, the firm shown has high
37
Varian, R. (1984), p. 22-46
38
Mankiw, N.G. (1998), page 285 following

short run costs such that the ruling market price is below the average total cost
curve. At the profit maximising level of output, the firm is making an economic
loss.

Diagram 2.2: Economic Loss39

2.2 The Effects of a change in Market Demand

Figure 2.3 describes the increase in market demand. This causes an increase in
market price and quantity traded. The firm's average revenue curve shifts up to
AR2 (=MR2) and the profit maximising output expands to Q2, with the MC curve
as the firm's supply curve. Higher prices cause an expansion along the supply
curve. Following the increase in demand, total profits have increased. An inward
shift in market demand would have the opposite effect.40

Diagram 2.3: Increased Market Demand

39
Mankiw, N.G. (1998), page 285 following

40
The effect of a change in market supply (perhaps arising from a cost-reducing technological
innovation available to all firms in a competitive market) is a ceteris paribus consideration not
further outlined in this work.

2.3 The adjustment process of a perfectly competitive industry towards
the long run equilibrium

If most firms are making abnormal profits in the short run there will be an
expansion of the output of existing firms and new firms might enter the market.
Firms are responding to the profit motive and supernormal profits act as a signal
for a reallocation of resources within the market. The addition of new suppliers
causes an outward shift in the market supply curve, as shown in figure 2.4.41

Figure 2.4: expansion of output

Making the assumption that the market demand curve remains unchanged, higher
market supply will reduce the equilibrium market price until the price = long run
average cost. At this point each firm is making normal profits only. There is no
further incentive for movement of firms in and out of the industry and a long-run
equilibrium has been established. The entry of new firms shifts the market supply
curve to MS2 and drives down the market price to P2. At the profit-maximising
output level Q3 only normal profits are being made. There is no incentive for
firms to enter or leave the industry. Thus a long-run equilibrium is established.
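The zero-profit long-run condition can be illustrated numerically; the cost function c(q) = q^2 + F and the value of F are assumed for illustration. Free entry drives the price to the minimum of average total cost, where economic profit is exactly zero:

```python
# Long-run equilibrium under free entry: price falls to min ATC.
# With c(q) = q**2 + F, ATC(q) = q + F/q is minimized at q = sqrt(F).

import math

F = 36.0                      # assumed fixed cost
q_lr = math.sqrt(F)           # output at minimum ATC
p_lr = q_lr + F / q_lr        # long-run price = min ATC
lr_profit = p_lr * q_lr - (q_lr ** 2 + F)
print(q_lr, p_lr, lr_profit)  # 6.0 12.0 0.0 — normal profit only
```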

Figure 2.5: entry of new firms and the new long-run equilibrium42

41
Mankiw, N.G. (1998), page 285 following
42
Mankiw, N.G. (1998), page 285 following

2.4 Perfect competition and economic efficiency

Perfect competition is used as a benchmark for comparison with other market
structures (such as monopoly and oligopoly) because it displays high levels of
economic efficiency.
In both the short and long run, price is equal to marginal cost (P=MC) and
therefore allocative efficiency43 is achieved – the price that consumers are paying
in the market reflects the factor cost of resources used up in producing / providing
the good or service.
Productive efficiency occurs when price is equal to average cost at its minimum
point. This is not achieved in the short run – firms can be operating at any point
on their short run average total cost curve, but productive efficiency is attained in
the long run because the profit maximising output is achieved at a level where
average (and marginal) revenue is tangential to the average total cost curve. The
long run of perfect competition, therefore, exhibits optimal levels of static
economic efficiency.44

43
Allocative efficiency defines a state where all resources are allocated to their highest valued use
(no other possibility exist where they would make greater profit). Whereas productive
efficiency describes a way of producing in the lowest cost manner.
44
Another form of economic efficiency – dynamic efficiency – relates to aspects of market
competition such as the rate of innovation in a market, the quality of output provided over time
and is not subject to these considerations.

3. Theory of the firm under monopolistic competition

Monopolistic competition is a common market form. Many markets can be
considered monopolistically competitive, often including the markets for
books, clothing, films and service industries in large cities.
Monopolistically competitive markets have the following characteristics:

• There are many producers and many consumers in a given market.
• Consumers have clearly defined preferences and sellers attempt to
differentiate their products from those of their competitors; the goods and
services are heterogeneous.
• There are no barriers to entry and exit.

The characteristics of a monopolistically competitive market are almost exactly
the same as in perfect competition, with the exception of heterogeneous products,
and that monopolistic competition involves non-price competition (based on
subtle product differentiation). This gives the company influence over the market;
it can raise its prices without losing all the customers, owing to brand loyalty.
This means that an individual firm's demand curve is downward sloping, in
contrast to perfect competition, which has a perfectly elastic demand schedule.
A monopolistically competitive firm acts like a monopolist in that the firm is able
to influence the market price of its product by altering the rate of production of
the product. Unlike in perfect competition, monopolistically competitive firms
produce products that are not perfect substitutes. In the short-run, the
monopolistically competitive firm can exploit the heterogeneity of its brand so as
to reap positive economic profit.
In the long-run, the distinguishing characteristic that enables one firm to gain
monopoly profits will be duplicated by competing firms. This competition will
drive the price of the product down and, in the long-run, the monopolistically
competitive firm will make zero economic profit.
Unlike in perfect competition, the monopolistically competitive firm does not
produce at the lowest attainable average total cost. Instead, the firm produces at
an inefficient output level, reaping more in additional revenue than it incurs in
additional cost versus the efficient output level.45

3.1 Short run price and output for the monopolistic industry and firm

In the short run, the monopolistically competitive firm faces limited competition.
There are other firms that sell products that are good, but not perfect, substitutes
for the firm's own product. In other words: every firm has a monopoly of its own
product. When the product is differentiated, that means the firm has some
monopoly power, and that means we must use the monopoly analysis, as in Figure
3.1 below.

45 Luis M. B. (2000), page 84-85

Figure 3.1: short run monopolistic revenues46

We see that the marginal revenue is less than the price. The firm will set its
output so as to make marginal cost equal to marginal revenue, and charge the
corresponding price on the demand curve, so that in this example, the monopoly
sells 1000 units of output (per week, perhaps) for a price of $85 per unit.
But this is just a short run situation. We see that the price is greater than the
average cost (which is $74 per unit, in this case) giving a profit of $11,000 per
week. This profitable performance will attract new competition in the long run.
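The arithmetic behind this profit figure can be checked with a small sketch (the price, average total cost and quantity are the numbers quoted from Figure 3.1; the variable names are our own):

```python
# Profit check for the short-run situation in Figure 3.1:
# profit per week = (price - average total cost) * quantity sold.
price = 85.0     # dollars per unit, read off the demand curve
atc = 74.0       # average total cost per unit at this output
quantity = 1000  # units sold per week

profit = (price - atc) * quantity
print(profit)  # 11000.0 dollars per week, as stated in the text
```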

3.2 Long run price and output for the monopolistic industry and firm

In monopolistic competition, when one firm or product variety is profitable, it will
attract more competition -- more substitutes and closer substitutes for the
profitable product type. Thus, demand will shift downward and costs will
increase. This will go on as long as the firm and its product type remain
profitable. A new "long run equilibrium" is reached when (economic) profits have
been eliminated. This is shown in Figure 3.2:

Figure 3.2: Long run equilibrium47

46 Mankiw, G.N. (1998), page 291 following
47 Mankiw, G.N. (1998), page 291 following

In this example, the firm can break even by selling 935 units of output at a price
of $76 per unit. The profit -- zero -- is the greatest profit the firm can make, so
profit is being maximized with the output that makes MC=MR.
Zero (economic) profit is also the condition for long run equilibrium in a
perfectly competitive industry. But this monopolistically competitive equilibrium
is not the ideal that the long run equilibrium of a perfectly competitive industry is.

3.3 Critique of monopolistic competition

While monopolistically competitive firms are inefficient, it is usually the case that
the costs of regulating prices for every product that is sold in monopolistic
competition by far exceed the benefits; the government would have to regulate all
firms that sold heterogeneous products - an impossible proposition in a market
economy.
Another concern of critics of monopolistic competition is that it fosters
advertising and the creation of brand names. Critics argue that advertising induces
customers into spending more on products because of the name associated with
them rather than because of rational factors. This is refuted by defenders of
advertising who argue that (1) brand names can represent a guarantee of quality,
and (2) advertising helps reduce the cost to consumers of weighing the tradeoffs
of numerous competing brands.
There are unique information and information processing costs associated with
selecting a brand in a monopolistically competitive environment. In a monopoly
industry, the consumer is faced with a single brand and so information gathering
is relatively inexpensive. In a perfectly competitive industry, the consumer is
faced with many brands. However, because the brands are virtually identical,
again information gathering is relatively inexpensive. Faced with a
monopolistically competitive industry, to select the best out of many brands the
consumer must collect and process information on a large number of different
brands. In many cases, the cost of gathering information necessary to selecting the
best brand can exceed the benefit of consuming the best brand (versus a randomly
selected brand).

4. Concluding considerations

Based on the assumption of profit-maximizing companies, it was shown that it is
impossible for a firm in perfect competition to earn abnormal profit in the long
run, which is to say that a firm cannot make any more money than is necessary to
cover its costs. If a firm is earning abnormal profit in the short term, this will act
as a trigger for other firms to enter the market. They will compete with the first
firm, driving the market price down until all firms are earning normal profit. On
the other hand, if firms are making a loss, then some firms will leave the industry,
reduce the supply and increase the price. Therefore, all firms can only make
normal profit in the long run.
Perhaps the closest thing to a perfectly competitive market would be a large
auction of identical goods with all potential buyers and sellers present. By design,
a stock exchange closely approximates this, though there is no way to guarantee
atomicity. As perfect competition is a theoretical absolute, there are no real-world
examples of a perfectly competitive market.
Therefore the described model of monopolistic competition is by far closer to
reality, although it brings suboptimal results, mainly because the rule P = MR =
MC (price equals marginal revenue equals marginal cost) holds true for a
competitive firm only, whereas for a monopoly firm price exceeds marginal cost
(P > MR = MC).
In conclusion, it can be said that this theoretical approach assumes profit
maximization, but can also be used to show how a change in objectives48 (for
example from profit maximisation to revenue maximisation) affects the price and
output of a business.

48 Other objectives can be: (1) Satisficing: satisficing behaviour involves the owners setting
minimum acceptable levels of achievement in terms of business revenue and profit, e.g. a
target rate of growth of sales, or an acceptable rate of return on capital. (2) Sales revenue
maximisation: annual salaries and other perks might be more closely correlated with total
sales revenue rather than profits. Revenue is maximised when marginal revenue (MR) = zero.
(3) Constrained sales revenue maximisation: shareholders of a business may introduce a
constraint on the decisions of managers.

VII. The theory of oligopoly and monopoly

Considerations about market situations very often concern the question under
what circumstances markets reach equilibrium. Which market constellations bring
demand and supply together, and what prices will consumers accept so that
markets clear?
The degree of competition between suppliers is clearly a decisive influencing
factor. Many considerations and reflections concern markets under conditions of
perfect competition, where many suppliers act on the market.
The following reflections concern market constellations under oligopoly and
monopoly conditions, where just one or a few actors supply customer demand.

1. Monopoly

1.1 General consideration about monopoly constellation

A monopoly is an industry in which there is one seller. Because it is the only
seller, the monopolist faces a downward-sloping demand curve, the industry
demand curve. The downward-sloping demand curve means that if the monopolist
wants to sell more, it must lower its price (assuming that price discrimination is
not possible, i.e. that the firm can charge only one price). Because the monopolist
must lower price to sell more, the extra or marginal revenue it gets from selling
another unit is less than the price it charges. Thus, its marginal revenue curve lies
below its demand curve. In contrast, for a seller who is a price taker, demand is
identical with marginal revenue.
A first approach to the case of monopoly is shown by the following table.
Marginal cost is the value of the additional resources needed to produce another
unit of output. The marginal benefit to consumers is the price that consumers are
willing to pay for each unit. This column should be recognized as a demand
curve. The maximization principle leads to the fact that the economically efficient
amount to produce is five, the amount that gives consumers the greatest value. To
produce the first unit, the firm takes resources that have a value of 100 and turns
them into something with a value of 121. Because this transformation has
increased value, producing the first unit is more economically efficient than
producing none. By this logic, producing the sixth unit would decrease economic
efficiency because the firm would take resources with a value of 100 and
transform them into something with a value of only 98.

Table: Example of monopoly constellation

Output   Marginal cost   Marginal benefit (buyers)   Marginal benefit (seller)
1        100             121                         121
2        100             116                         111
3        100             111                         106
4        100             108                         105
5        100             101                         94
6        100             98                          95

The monopolist, however, will find it most profitable to produce only four units
because it does not see marginal benefit the same way that buyers see it. For the
seller, the extra benefit of the second unit is only 111. It sells the second unit for
116, but to sell the second unit, it had to reduce the price it charged by 5. Thus, it
"lost" 5 money units on the first unit, so the net increase in its revenue was only
111.
Using the maximization principle, one can see that producing beyond the fourth
unit is not in the interests of the firm. The fifth unit brings in added benefits of
only 94 to the firm (it sells for 101, but to sell it, the firm lowers price on other
units), but costs an added 100. From the point of view of the buyers, however, the
fifth unit should be produced. It brings them more added benefits than it uses
resources.
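The decision rules just described can be traced directly from the table's numbers, a minimal sketch (the function name and structure are our own, not from the source): each side keeps producing as long as its marginal benefit covers the marginal cost of 100.

```python
# Decision rule from the table above: keep producing while the marginal
# benefit of the next unit covers the marginal cost of 100.
marginal_cost = 100
mb_buyers  = {1: 121, 2: 116, 3: 111, 4: 108, 5: 101, 6: 98}
mb_sellers = {1: 121, 2: 111, 3: 106, 4: 105, 5: 94,  6: 95}

def last_profitable_unit(mb):
    """Highest output q such that every unit up to q has MB >= MC."""
    q = 0
    for unit in sorted(mb):
        if mb[unit] >= marginal_cost:
            q = unit
        else:
            break
    return q

print(last_profitable_unit(mb_sellers))  # 4: the monopolist's output
print(last_profitable_unit(mb_buyers))   # 5: the efficient output
```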
The discussion above is illustrated below. The seller attempts to set marginal cost
equal to marginal revenue, or to produce at q0. From the consumers' viewpoint,
the best amount to produce would be q1. The monopolist restricts output because
of a divergence between marginal benefit as the firm perceives it and marginal
benefit as buyers perceive it. Producing beyond q0 is not in the interests of the
firm because the extra benefit it sees, the marginal revenue curve, is less than the
extra cost of production, shown by the marginal cost curve. Extra output is in the
interests of buyers because the extra benefit they get, shown by the demand curve,
is greater than the extra costs of production.

In terms of the production-possibilities frontier shown below, an economy with
some industries competitive (all transactors are price takers) and others
monopolized (sellers are price searchers) will produce at point b. However,
consumers would be better off at point a because the gain of x amount of
monopolized goods has a greater value than the loss of y amount of competitive
goods. Because a monopolist restricts production from what a competitive
industry would do, too many resources are being used in the competitive industry
and not enough in the monopolized industry. Thus, the existence of monopoly
violates product-mix efficiency. Because marginal rates of substitution are not
equal to marginal rates of transformation, the economy produces the wrong mix
of products.

The unexploited value of monopoly leads to two questions. First, one can ask
whether people have found ways to capture this value. Because the search for
value occupies a great deal of people's talents and energies in a market economy,
it should not surprise us to learn that there is a commonly-used way to capture this
value. Sellers can capture it with price discrimination.
Second, one can ask if the government can eliminate the unexploited value with
some sort of intervention. Economists have suggested two important ways for the
government to intervene, through antitrust actions and through regulation.

1.2 Amoroso-Robinson equation

As shown above, price elasticity becomes very important under the monopoly
constellation, where the supplier manages prices and steers output towards
maximum profit. The supplying company has to take the price elasticity of
demand into account when changing output and output prices. Price elasticity of
demand in this case can be seen as an aggregated consumption relation: by how
much does demand increase or decrease if prices are lowered or raised by one
percent? Profit can be raised through higher prices only if elasticity is not so high
that the induced fall in output makes total sales shrink by more than the price
increase adds.
The availability of goods, and especially of substitute products, always has an
effect on elasticity, so under the monopoly constellation (without adequate
substitutes) elasticity is lower than under a constellation with many suppliers.
The price elasticity of sales equals one plus the price elasticity of demand, so the
price elasticity of sales goes to 0 as the price elasticity of demand goes to -1;
total sales reach their maximum where the elasticity of sales is 0.
Writing sales as U = p*x, the price (p) elasticity (E) of sales (U), EU,p, can be
transformed in the following steps:

[1] U = p*x

[2] EU,p = (dU/dp)*(p/U)

[3] dU/dp = x + p*(dx/dp)

[4] EU,p = (x + p*(dx/dp))*p/(p*x) = 1 + (dx/dp)*(p/x)

[5] Ex,p = (dx/dp)*(p/x), the price elasticity of demand, so EU,p = 1 + Ex,p

[6] dU/dx = p + x*(dp/dx)

[7] dU/dx = p*(1 + (x/p)*(dp/dx))

[8] dU/dx = p*(1 + 1/Ex,p)

Equation [8], the Amoroso-Robinson equation, gives the marginal revenue and
answers the question of how much sales grow with another unit of output x.
In combination with marginal cost, the equation leads to the Cournot point on the
price-sales function, which marks the company's optimal price-output combination.
The price-sales function under the monopoly constellation can be seen as the
function of customers' demand for the good supplied by the monopolist. Because
of this, all theoretical considerations about household demand can be applied here.
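As a minimal numerical check of the Amoroso-Robinson relation MR = p*(1 + 1/E), one can compare marginal revenue with that expression for a hypothetical linear inverse demand p(x) = p0 - b*x (p0, b and the evaluation point are made-up values):

```python
# Check MR = p*(1 + 1/E) for a linear inverse demand p(x) = p0 - b*x.
p0, b = 100.0, 2.0

def price(x):      return p0 - b * x
def mr(x):         return p0 - 2 * b * x             # d(p*x)/dx for linear demand
def elasticity(x): return (-1 / b) * price(x) / x    # E = (dx/dp)*(p/x)

x = 20.0
lhs = mr(x)
rhs = price(x) * (1 + 1 / elasticity(x))
print(lhs, rhs)  # both 20.0: the identity holds at this point
```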

Because of the monopolist's possibility to use price as a variable for action,
without the risk of losing all sales by raising prices as under the competitive
constellation, the monopolist can be seen as a price decision maker, in
comparison to the output decision maker of the competitive constellation.

2. Oligopoly

2.1 General consideration about oligopoly (duopoly) constellation

An oligopoly is a market form in which a market is dominated by a small number
of sellers (oligopolists). The word is derived from the Greek for few sellers.
Because there are few participants in this type of market, each oligopolist is aware
of the actions of the others. Oligopolistic markets are characterised by
interactivity. The decisions of one firm influence, and are influenced by, the
decisions of other firms. Strategic planning by oligopolists always involves taking
into account the likely responses of the other market participants.
As a quantitative description of oligopoly, the four-firm
concentration ratio is often utilized. This measure expresses the market share of
the four largest firms in an industry as a percentage. Using this measure, an
oligopoly very often is defined as a market in which the four-firm concentration
ratio is above 40%.
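As a small illustration of this measure (the market shares below are invented), the four-firm concentration ratio simply sums the four largest shares:

```python
# Four-firm concentration ratio (CR4): the combined market share,
# in percent, of the four largest firms in the industry.
def cr4(shares):
    return sum(sorted(shares, reverse=True)[:4])

shares = [22, 18, 9, 6, 5, 5, 4, 31]  # invented shares, percent of sales
print(cr4(shares))  # 80: well above 40, so this market counts as an oligopoly
```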
The problem of interdependence has thwarted economists' attempts to develop a
good theory of oligopoly. When there are only a few sellers, each recognizes that
his decisions affect others who may react to what he does.
This problem of interdependence can be shown in terms of game theory in a
situation that is identical to the prisoners' dilemma. The table below shows the
pricing options of the only two gas stations in an isolated town. The payoffs in the
center are profits. If both stations charge high prices, the joint profits are
maximized. But then each has a temptation to cut prices to get to a more favorable
corner. If one of them gives in to this temptation, it may start a gas price war as
the other firm must retaliate. They then end up in the least favorable position of
lowest joint profits.

Table: Oligopoly (duopoly) as prisoner's dilemma

                              Supplier 1
Pricing strategy              100%            90%
Supplier 2    100%            100/100         10/130
              90%             130/10          20/20

(Payoffs are shown as Supplier 2's profit / Supplier 1's profit.)
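The dilemma can be verified mechanically from the payoff table above, a sketch using its numbers (labels are our own: "H" for keeping the price at 100%, "L" for cutting it to 90%; the game is symmetric, so one payoff function suffices):

```python
# Best-response check on the symmetric pricing game above.
# payoff[(mine, theirs)] = my profit.
payoff = {("H", "H"): 100, ("H", "L"): 10,
          ("L", "H"): 130, ("L", "L"): 20}

def best_response(theirs):
    return max(["H", "L"], key=lambda mine: payoff[(mine, theirs)])

# Cutting the price is the best reply to either strategy...
print(best_response("H"), best_response("L"))  # L L
# ...so both firms end up at (L, L) with only 20 each: the dilemma.
```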

There may be no equilibrium solution in a situation of this sort. Rather, there may
be a period of collusion in which firms agree (though it may be an unspoken
agreement) to keep prices high. Then, the collusion may disintegrate as firms
begin cheating and finally a new period of collusion may begin. Whether sellers
collude or compete will depend on many factors that can be difficult to measure
and put into a theory, such as the number of sellers, their personalities, whether
they have equal or unequal shares of the market, whether their costs are the same,
the ease of cheating and of detecting cheating, and whether the sellers can
compete on nonprice bases such as service and quality.
A thorough examination of the possibilities of oligopolistic strategies and how
well they fit observed behavior of real-world oligopolies is a large and
controversial subject that is beyond the scope of these readings. The important
point concerning economic efficiency is that if oligopolists perceive their demand
curves as downward-sloping (that is, if they take into account that the amount
they produce will have a significant effect on the price they can charge), their
marginal revenue curves will lie below their demand curves and they will restrict
output relative to what an industry of price takers would. Thus, there will be an
efficiency loss involved. Most economists use the term "market power" to
describe the ability of any price maker to set price.
When the possession of market power is profitable, it should attract new entrants
into the industry. If entry is easy, then the existence of very few or even only one
firm may not result in economic inefficiency. The threat of potential entry may be
enough competition to keep the industry operating at or close to the competitive
solution. In this case, the market is a contestable market. However, if entry is not
easy but there are significant barriers to entry, the threat of competition is less.
Barriers to entry exist when there are sunk costs, expenses that cannot be
recovered once a firm has entered the industry. Where these costs are high, the
industry probably operates as the theory of monopoly suggests it will.

2.2 Approaches to oligopoly theory (exemplarily Cournot, von Stackelberg)

There are two general types of theories of oligopoly: conjectural variation models
on the one hand and limit pricing models on the other.
In conjectural variation models the firms in the industry are taken as given and
each firm makes certain assumptions about what the other firms' reactions will be
to its own actions. For example, in the Cournot model each firm assumes there
will be no reaction on the part of the other firms. In the limit pricing models one
firm chooses its action taking into account the possible entry or exit of
competitors to or from the market.

In the Cournot Model each firm presumes no reaction on the part of the other
firms to a change in its output. Thus, ∂Q/∂qi = 1. Therefore the first order
condition for a maximum profit of the i-th firm is:

p0 - b*(Qoi + 2qi) = Ci1

where Qoi is the output of the firms other than the i-th. When this is solved for qi
the result is:

qi = (p0 - Ci1)/(2b) - Qoi/2

However it is more convenient to represent the first order condition and its
solution as:

p0 - b*(Q + qi) = Ci1 and qi = (p0 - Ci1)/b - Q.

Now we can sum the above equation over the n firms. The result is:

Q = n(p0/b) - C1/b - n*Q

where C1 is the sum of the Ci1. The solution for Q is:

Q = [n/(n+1)](p0/b) - [1/(n+1)]C1/b

When this output is substituted into the inverse demand function the result is:

p = [1/(n+1)]p0 + [1/(n+1)]C1

or if we let c1=C1/n:

p = [1/(n+1)]p0 + [n/(n+1)]c1

where c1 represents the average of the marginal costs of the n firms. It can be seen
from the last term that as the number of firms increases without bound, the market
price approaches c1.
If one follows through with this model, one would have to take into consideration
that the firms with above-average marginal cost would be making a loss on
variable costs and would cease production.
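This limiting behaviour can be illustrated numerically with the price formula derived above (p0 and c1 are hypothetical values):

```python
# Cournot market price p = [1/(n+1)]*p0 + [n/(n+1)]*c1 for a growing
# number of firms n (hypothetical demand intercept p0 and average MC c1).
p0, c1 = 100.0, 40.0

def cournot_price(n):
    return p0 / (n + 1) + n * c1 / (n + 1)

for n in (1, 2, 10, 100, 1000):
    print(n, round(cournot_price(n), 2))
# n = 1 reproduces the monopoly price (p0 + c1)/2 = 70.0; as n grows
# without bound, the price approaches the average marginal cost c1 = 40.
```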

Heinrich von Stackelberg proposed a model of oligopoly in which one firm, a
follower, takes the output of the other firm as given (a Cournot type oligopolist)
and adjusts its output accordingly. The other firm, a leader, takes into account the
adjustment which the follower firm will make. The output decision of a Cournot
oligopolist is given by the equations above. Thus if a leader firm increases its
output qL by 1 unit the follower firm will decrease its output by one half of a unit.
The term ∂Q/∂qL = 1/2 for the leader firm so the first order condition for the
leader firm is:

∂UL/∂qL = (p0 - b*Q) -b*(1/2)*qL - CL1 = 0

qL = (p0 - CL1)*(2/(3b)) - 2*QoL/3.

Carrying through with the analysis as shown below indicates that the market price
will be:

p = [1/(n+2)]p0 + [(n+1)/(n+2)]c1

where c1 is now the weighted average of the marginal costs of the firms, with all
of the follower firms given an equal weight and the leader firm given a weight of
twice that of the follower firms. The leader firm has the effect on the industry of
two follower firms. Otherwise the result is the same as in the case of the Cournot
oligopoly.
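Under the same hypothetical demand and cost numbers, the Cournot and Stackelberg price expressions derived in this section can be compared directly (a sketch; p0 and c1 are invented, and c1 is taken as the relevant average of marginal costs in both formulas):

```python
# Comparing the market prices implied by the two formulas above.
p0, c1 = 100.0, 40.0

def cournot_price(n):
    # p = [1/(n+1)]*p0 + [n/(n+1)]*c1
    return p0 / (n + 1) + n * c1 / (n + 1)

def stackelberg_price(n):
    # p = [1/(n+2)]*p0 + [(n+1)/(n+2)]*c1: the leader counts like two firms
    return p0 / (n + 2) + (n + 1) * c1 / (n + 2)

for n in (2, 5, 10):
    print(n, round(cournot_price(n), 2), round(stackelberg_price(n), 2))
# With p0 > c1 the price falls as the effective number of firms rises,
# so the Stackelberg price lies below the Cournot price for every n.
```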

VIII. Input markets

Market

Definition: A market is, as defined in economics, a social arrangement that allows
buyers and sellers to discover information and carry out a voluntary exchange.
Along with a right to own property, it is one of the two key institutions that
organize trade. The existence of markets is one of the key components of
capitalism. Though markets are often viewed as being located in a physical
marketplace that allows a face-to-face meeting, markets may exist in any medium
that allows social interaction, such as through mail or over the Internet.

Structure: The information function of a market requires, at a minimum, that the
buyer and seller are both aware of what is being sold and whether a voluntary
transaction is possible. Economic models assume that such knowledge is perfect,
including knowledge of alternatives and other factors affecting the proposed
sale/purchase.
Markets rely on adjustments to price to coordinate individual decision making
relating to supply and demand. For example, suppose that more buyers want a
certain good than is available from sellers at a given price. The solution requires
either that buyers reduce their demand for the good, or that sellers produce more of
the good. These results are accomplished by a rise in the price of that good: some
buyers will refuse to pay the higher price, while more sellers are willing to offer
the good for the increased price. In cases where more of an item is available than
people will buy, the reverse effect (a drop in price) will make the choices of
buyers and sellers compatible. Markets are thus efficient, in the economic sense,
in that the buyers who value a good most highly will buy from sellers most
willing to sell.

While barter markets exist, most markets require the existence of currency or
other form of money. An economic system in which goods and services (and
resources required to produce those goods and services) are mediated by markets
is called a market economy. Critics of the market economy have tried or proposed
a command economy or other non-market economy. The attempt to mix socialism
with the incentives created by a market is known as market socialism, which
includes the relatively recent socialism with Chinese characteristics, though some
argue that socialism and markets are fundamentally incompatible.

Input Markets

Definition: Input markets are markets for goods and services needed in a
production process.

On a higher level, there are two different kinds of inputs in a production process:
1. Production factors and
2. Preliminary products; these have mostly themselves been produced with other
production factors

Production factors

In classical economics (since Adam Smith and David Ricardo), the factors labour,
capital, land and knowledge are designated as production factors. The factor
labour is represented by the individual human being.
Primarily, the factor land referred to farmland. Later it was extended to all kinds
of natural resources like crude oil, minerals etc. Because of the increasing
scarcity of production factors like water and other commodities, this production
factor is now often simply called nature.
The production of all goods starts with the production factor nature. But there are
no ready-for-use goods in nature; there are only commodities which must first be
converted, and work is needed for this conversion. Over time, people have
learned to multiply their power by using tools and machines. These production
factors are called capital. Unlike nature, capital is called a derivative or produced
production factor. Some economists argue that knowledge and information
should also be added to the production factors.

The Five Capitals Model

The Five Capitals Model of sustainable development was developed by the
organization Forum for the Future. The model groups together:
 Natural capital
 Social capital
 Human capital
 Manufactured capital
 Financial capital

Preliminary product

Preliminary products measure the value of the used, transformed and processed
goods and services.

Factors of production

Factors of production are resources used in the production of goods and services
in economics. Classical economics distinguishes between three factors:

 Land or natural resources - naturally-occurring goods such as oil and
minerals that are used in the creation of products. The payment for land is
rent.
 Labour - human effort used in production which also includes technical
and marketing expertise. The payment for labour is a wage.
 Capital goods - human-made goods (or means of production) which are
used in the production of other goods. These include machinery, tools and
buildings. In a general sense, the payment for capital is called interest.

These were codified originally in the analyses of Adam Smith, 1776, David
Ricardo, 1817, and the later contributions of Karl Marx and John Stuart Mill as
part of one of the first coherent theories of production in political economy. Marx
refers in “Das Kapital” to the three factors of production as the "holy trinity" of
political economy.

In the classical analysis, working capital was generally viewed as being a stock of
physical items such as tools, buildings and machinery. This view was explicitly
rejected by Marx. Modern economics has become increasingly uncertain about
how to define and theorise capital.
With the emergence of the knowledge economy, more modern analysis often
distinguishes this physical capital from other forms of capital such as "human
capital" and intellectual capital, which require intangible, management-orientated
techniques to manage, such as the Balanced Scorecard, Risk Management,
Business Process Reengineering, Knowledge Management, and Intellectual
Capital Management.
Prior to the Information Age, land, labour, and capital were used to create
substantial wealth due to their scarcity. Following the Information Age (circa
1971-1991), the Knowledge Age (circa 1991 to 2002) and the current Intangible
Economy (circa 2002+), the primary factors of production today are intangible.
These factors of production are knowledge, collaboration, process-engagement,
and time quality. According to economic theory, a "factor of production" is used
to create value and economic performance. As the four factors of production
today are all intangible, the current economic age is called the Intangible
Economy. Intangible factors of production are subject to network effects and to
contrary economic laws such as the law of increasing returns. It is therefore
important to differentiate between conventional (tangible) economics and
intangible economics when discussing issues related to factors of production,
which change according to the economic era that society is experiencing. For
example, land was a key factor of production in the Agricultural Age.
Some economists mention enterprise, entrepreneurship, individual capital or just
"leadership" as a fourth factor. However, this seems to be a form of labor or
"human capital." When differentiated, the payment for this factor of production is
called profit. This is when entrepreneurs think of ideas, organise the other three
factors of production, and take risks with their own money and the financial
capital of others.
In a market economy, entrepreneurs combine land, labor, and capital to make a
profit. In a planned economy, central planners decide how land, labor, and capital
should be used to provide for maximum benefit for all citizens.
The classical theory, further developed, remains useful to the present day as a
basis of microeconomics. Some further terms that deal with factors of production
are as follows:
Entrepreneurs are people who organize other productive resources to make goods
and services. Economists regard entrepreneurs as a specialist form of labor
input. The success and/or failure of a business often depends on the quality of
entrepreneurship.
Capital has many meanings, including the finance raised to operate a business.
Normally, though, capital means investment in goods that can produce other
goods in the future. It can also refer to machines, roads, factories, schools, and
office buildings which humans have produced in order to produce other goods
and services. Investment is important if the economy is to achieve economic
growth in the future.
Human Capital is the quality of labour resources which can be improved through
investments, education, and training.
Fixed capital - this includes machinery, work plants, equipment, new technology,
factories, buildings, and goods that are designed to increase the productive
potential of the economy for future years.

Working capital - this includes the stocks of finished and semi-finished goods
that will be economically consumed in the near future or will be made into a
finished consumer good in the near future.

Human capital

Human capital is a way of defining and categorizing peoples' skills and abilities as
used in employment and as they otherwise contribute to the economy. Many early
economic theories refer to it simply as labour, one of three factors of production,
and consider it to be a commodity -- homogeneous and easily interchangeable.
Other conceptions of labour are more sophisticated.

Knowledge and capital

The introduction of the term is explained and justified by the unique
characteristics of knowledge. Unlike physical labour (and the other factors of
production), knowledge is:
Expandable and self-generating with use: as doctors get more experience, their
knowledge base will increase, as will their endowment of human capital. The
economics of scarcity is replaced by the economics of self-generation.
Transportable and shareable: knowledge is easily moved and shared. This transfer
does not prevent its use by the original holder. However, the transfer of
knowledge may reduce its scarcity-value to its original possessor.

Human capital and labour power

In some way, the idea of "human capital" is similar to Karl Marx's concept of
labour power: to him, under capitalism workers had to sell their labour-power in
order to receive income (wages and salaries). But long before Mincer or Becker
wrote, Marx pointed to "two disagreeably frustrating facts" with theories that
equate wages or salaries with the interest on human capital.

Classical view as the base of microeconomic theory

Although it did not deal substantially with the complex issues of a sophisticated
modern economy, the classical theory is useful as the basis of microeconomics,
however many further distinctions one cares to make, and whatever macro-theory
or political economy one chooses to apply in trading the factors off or setting
their valuations in society at large.
Land has become natural capital, imitative aspects of labour have become
instructional capital, and creative or inspirational aspects ("Enterprise") have
become individual capital (in some analyses), while social capital has become
increasingly important. The classical relationship of financial capital and
infrastructural capital is still recognized as central, but there is a wider debate on
means of production and various means of protection, or "rights", to secure their
reliable use.

Resource-based view

The Resource-Based View (RBV) is an economic tool used to determine the
resources available to a firm, which ought to be exploited in order for that firm to
develop a strategy for achieving sustainable competitive advantage. Barney
(1991) formalised this theory, although it was Wernerfelt (1984) who introduced
the idea of resource position barriers being roughly analogous to entry barriers in
the positioning school.
The key points of the theory are:

1) Identify the firm’s potential key resources.
2) Evaluate whether these resources fulfil the following (VRIN) criteria:

Valuable - they enable a firm to implement strategies that improve its efficiency
and effectiveness
Rare - not available to other competitors
Imperfectly imitable - not easily implemented by others
Non-substitutable - not able to be replaced by some other non-rare resource
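To make the screening in step 2 concrete, the VRIN test can be sketched as a simple filter over candidate resources. The resource names and flag values below are invented for illustration and are not drawn from Barney (1991):

```python
# Hypothetical sketch of the VRIN screen; the resources and their flag
# values are invented examples, not data from the RBV literature.
resources = [
    {"name": "brand reputation", "valuable": True, "rare": True,
     "inimitable": True, "non_substitutable": True},
    {"name": "standard ERP system", "valuable": True, "rare": False,
     "inimitable": False, "non_substitutable": False},
]

def passes_vrin(r):
    """A resource supports sustained advantage only if all four criteria hold."""
    return all([r["valuable"], r["rare"],
                r["inimitable"], r["non_substitutable"]])

strategic = [r["name"] for r in resources if passes_vrin(r)]
print(strategic)  # ['brand reputation']
```

Only a resource passing all four tests is a candidate source of sustained competitive advantage; failing any one of them (as the generic ERP system does) disqualifies it.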

Definitions

Resources: Firm resources include all assets, capabilities, organizational
processes, firm attributes, information, knowledge, etc. controlled by a firm that
enable the firm to conceive of and implement strategies that improve its efficiency
and effectiveness. (Barney)
Resources are inputs into a firm's production process, such as capital, equipment,
the skills of individual employees, patents, finance, and talented managers.
Resources are either tangible or intangible in nature. (Kay)

Competitive advantage: A firm achieves competitive advantage when it is able to
implement a “value creating strategy” not simultaneously being implemented by
any current or potential competitors. (Barney)

Criticisms

Priem and Butler made four key criticisms:


1) The RBV is tautological
2) Different resource configurations can generate the same value for firms
and thus would not be competitive advantage
3) The role of product markets is underdeveloped in the argument
4) The theory has limited prescriptive implications

Further criticisms are:


It is perhaps difficult (if not impossible) to find a resource which satisfies all of
Barney's VRIN criteria. There is the assumption that a firm can be profitable
in a highly competitive market as long as it can exploit advantageous resources,
but this may not necessarily be the case. It ignores external factors concerning the
industry as a whole; Porter’s Industry Structure Analysis ought also to be
considered.

Cost-of-production theory of value

In economics, the cost-of-production theory of value is the belief that the value of
an object is decided by the resources that went into making it. The cost can be

composed of any of the factors of production including labour, capital, land, or
technology.
Two of the most common cost-of-production theories are the medieval just price
theory and the classical labour theory of value. The labour theory of value, which
interprets labour-value as the determinant of prices, was first developed by Adam
Smith and later expanded by David Ricardo and Karl Marx. Most classical
economists (as well as nearly all Marxists) subscribe to it. However, Marx's
theory is not a true cost-of-production theory since the value of a commodity
contains a component of surplus value unrelated to the physical cost of producing
it. The magnitude of this surplus value may be unrelated to production-costs. A
somewhat different theory of cost-determined prices is provided by the "neo-
Ricardian school" of Piero Sraffa and his followers.
The most common counterpoint to this is the marginal theory of value which
asserts that economic value is set by the consumer's marginal utility. This is the
view most commonly held by the majority of contemporary mainstream
economists.
The Polish economist Michał Kalecki distinguished between sectors with "cost-
determined prices" (such as manufacturing and services) and those with "demand-
determined prices" (such as agriculture and raw material extraction).
In microeconomics, production is the act of making things, in particular the act of
making products that will be traded or sold commercially. Production decisions
concentrate on what goods to produce, how to produce them, the costs of
producing them, and optimising the mix of resource inputs used in their
production. This production information can then be combined with market
information (like demand and marginal revenue) to determine the quantity of
products to produce and the optimum pricing.
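As a minimal numeric sketch of that last step, suppose an assumed linear inverse demand p = a - b*q and a constant marginal cost (all numbers are hypothetical, not from the text); combining the cost and demand information pins down the quantity and the price:

```python
# Assumed inverse demand p = a - b*q and constant marginal cost mc.
a, b, mc = 10.0, 1.0, 2.0

# For linear demand, marginal revenue is MR = a - 2*b*q.
# The profit-maximizing output sets MR = MC.
q_star = (a - mc) / (2 * b)   # 4.0 units
p_star = a - b * q_star       # price read off the demand curve: 6.0

print(q_star, p_star)
```

The same logic, with a demand and cost specification estimated from market data, is how production information is combined with marginal revenue to choose quantity and price.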

Production theory basics

In microeconomics, production is simply the conversion of inputs into outputs. It
is an economic process that uses resources to create a commodity that is suitable
for exchange. This can include manufacturing, storing, shipping, and packaging.
Some economists define production broadly as all economic activity other than
consumption. They see every commercial activity other than the final purchase as
some form of production.
Production is a process, and as such it occurs through time and space. Because it
is a flow concept, production is measured as a “rate of output per period of time”.

There are three aspects to production processes:

 the quantity of the commodity produced,
 the form of the good produced,
 the temporal and spatial distribution of the commodity produced.

A production process can be defined as any activity that increases the similarity
between the pattern of demand for goods, and the quantity, form, and distribution
of these goods available to the market place.

IX. General equilibrium

[Figure: General equilibrium with (linear) supply and demand curves. This
diagram is based on Walras' analysis.]

Introduction

General equilibrium theory is a branch of theoretical microeconomics. It seeks to
explain production, consumption and prices in a whole economy.

General equilibrium tries to give an understanding of the whole economy using a
bottom-up approach, starting with individual markets and agents.
Macroeconomics, as developed by the Keynesian economists, uses a top-down
approach where the analysis starts with larger aggregates. Since modern
macroeconomics has emphasized microeconomic foundations, this distinction has
been slightly blurred. However, many macroeconomic models simply have a
'goods market' and study its interaction with for instance the financial market.
General equilibrium models typically model a multitude of different goods
markets. Modern general equilibrium models are typically complex and require
computers to help with numerical solutions.

In a market system, the prices and production of all goods, including the price of
money and interest, are interrelated. A change in the price of one good, say bread,
may affect another price, for example, the wages of bakers. If bakers differ in
tastes from others, the demand for bread might be affected by a change in bakers'
wages, with a consequent effect on the price of bread. Calculating the equilibrium
price of just one good, in theory, requires an analysis that accounts for all of the
millions of different goods that are available.
Introduction to general equilibrium theory / The Walras-Cassel system

The "Walras-Cassel" model refers to the general equilibrium model with
production introduced in Léon Walras's Elements of Pure Economics (1874). The
Walrasian model fell into disuse soon after 1874 as general equilibrium theorists,
particularly in the 1930s in the English-speaking world, opted for the Paretian
system. The Walrasian model was resurrected in Gustav Cassel's Theory of Social
Economy (1918), but even after that, its analysis was confined to the German-
speaking world, notably in the Vienna Colloquium in the 1930s, where it was
corrected and expanded by Abraham Wald (1936). It only really broke through
the English-speaking barrier in the 1950s, when there was a resurgence of interest
in general equilibrium with linear production technology and existence of
equilibrium questions. However, in the dextrous hands of Arrow, Debreu,
Koopmans and the Cowles Commission, the Walras-Cassel model was quickly
replaced by the more nimble "Neo-Walrasian" model, which fused aspects of
Walrasian and Paretian traditions.

As outlined by Walras, the basics of the model are the following: individuals are
endowed with factors and demand produced goods; firms demand factors and
produce goods with a fixed coefficients production technology. General
equilibrium is defined as a set of factor prices and output prices such that the
relevant quantities demanded and supplied in each market are equal to each other,
i.e. both output and factor markets clear. Competition ensures that prices equal
the cost of production for every production process in operation.

Despite its superficial resemblance to some elements of Classical Leontief-Sraffa
models (e.g. fixed production coefficients, price-cost equalities, steady-state
growth, etc.), the Walras-Cassel model is inherently and completely Neoclassical.
Equilibrium is still identified where market demand is equal to market supply in
all markets rather than being conditional on replication and cost-of-production
conditions. The Walras-Cassel model yields a completely Neoclassical subjective
theory of value based on scarcity, rather than a Classical objective theory of value
based on cost. Furthermore, in the Walras-Cassel system equilibrium prices and
quantities are only obtained jointly by solving the system simultaneously, whereas
the Classicals would solve for prices and quantities separately.

It might be worthwhile to run through a quick preliminary description of the
Walras-Cassel model in order to build up intuition for what is to follow. Let v
denote factors, x denote produced outputs, w be factor prices and p denote output
prices. Individuals are endowed with factors and desire produced outputs. They
decide upon their supply of factors (which we call F(p, w)) and their demand for
outputs (which we call D(p, w)) by solving their utility-maximizing problem.
Firms have no independent objective function: they mechanically take the factors
supplied to them by consumers and convert them to the produced goods the
consumers desire via a fixed set of production coefficients, which we denote B.

We face two further sets of equations which form the heart of the Walras-Cassel
system: one set makes factor supply equal to factor demand by firms ("factor
market clearing") and is written as v = B′x; a second set says that the output
price equals cost of production for each production process ("perfect
competition") and is written p = Bw. We shall refer to both of these as the linear
production conditions of the Walras-Cassel model. It is important to note that
these are not functions, but rather equilibrium conditions.

Notice then what is given: consumers' preferences (utility), endowments of factors
and production technology. From these components we should be able to derive
in equilibrium: (1) factor prices, w*; (2) output prices, p*; (3) quantity of factors,
v* and (4) quantity of produced outputs, x*. An equilibrium is defined when these
components are such that (1) households maximize utility; (2) firms do not violate
perfect competition; (3) factor and output markets clear.

The four sets of equations we have outlined connect the entire system together in
equilibrium. Their functions can be outlined as follows:
(i) D(p, w) connects output prices and output quantities;
(ii) F(p, w) connects factor prices and factor quantities;
(iii) v = B′x connects output quantities and factor quantities;
(iv) p = Bw connects output prices and factor prices.
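Under an assumed linear specification for the two behavioural functions (output demand x = a - p, factor supply v = c + w, with invented numbers), the four sets of equations can be solved jointly; a small sketch:

```python
import numpy as np

# A minimal Walras-Cassel system with 2 goods and 2 factors.
# Hypothetical functional forms (not from the text):
#   output demand   x = a - p      (downward-sloping)
#   factor supply   v = c + w      (upward-sloping)
# Equilibrium conditions of the model:
#   p = B w        (price = unit cost, perfect competition)
#   v = B'x        (factor market clearing)
B = np.array([[1.0, 2.0],   # factor requirements per unit of good 1
              [2.0, 1.0]])  # factor requirements per unit of good 2
a = np.array([10.0, 10.0])
c = np.array([1.0, 1.0])

# Substituting x = a - Bw into v = B'x and equating with v = c + w
# gives (I + B'B) w = B'a - c, a linear system in factor prices alone.
I = np.eye(2)
w = np.linalg.solve(I + B.T @ B, B.T @ a - c)
p = B @ w            # output prices from the competition condition
x = a - p            # equilibrium outputs from the demand function
v = B.T @ x          # factor use from market clearing

print(w, p, x, v)    # w = p(factors) = 2.9 each; p = 8.7; x = 1.3; v = 3.9
```

Because the system is simultaneous, factor prices, output prices and all quantities emerge together from one linear solve; none of the four items is determined "before" the others.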

To ground our intuition more clearly, we can appeal to Figure 1, where we
schematically depict the logic of the Walras-Cassel equations. Heuristically
speaking, suppose we have two markets, one for factors (on the left) and one for
outputs (on the right). Note that supply of factors F(p, w) on the left is upward-
sloping with respect to factor prices w, while demand for outputs D(p, w) on the
right is downward-sloping with respect to output prices p. The elasticities of
factor supply and output demand curves reflect the impact of prices and wages on
household utility-maximizing decisions.
[Two caveats: firstly, yes, these are all supposed to be vectors and, yes, Figure 1
makes no sense in that context; but the diagram is merely a heuristic device, not a
graphical depiction of the true model; secondly, the output demand function is
also a function of w and the factor supply function is also a function of p, so there
is interaction between the diagrams which will cause the curves to shift around;
for simplicity, we shall suppress these cross-effects by assuming that factor
supplies do not respond to p and output demands do not respond to w.]

Figure 1 - Schematic Depiction of the Walras-Cassel Model

It is important to note how the factor supply and output demand decisions of
households sandwich this entire problem, with the linear production conditions
sitting passively in the middle. Fixing any one of the four items (w, p, x or v) at

its equilibrium value, we can determine the rest [although to do so, we must
assume that output demand and factor supply functions are invertible: e.g. given
v, we can determine what w is by the factor supply function F(p, w) and given x,
we can determine what p is by the output demand function D(p, w); naturally, this
is a very strong assumption and not a very clear one in the manner it is stated].

It might be worthwhile to go through it "algorithmically" from some starting point
(trace this with the arrows in Figure 1). Suppose equilibrium output prices, p*, are
given. From p*, we get x* by the output demand function D(p, w) and we obtain
w* by the competition condition p = Bw. In their turn, x* gives us v* via the
factor market clearing condition v = B′x, while w* gives us v* via the factor
supply function, F(p, w). If this is truly an equilibrium, then it had better be that
the v*s computed via the two different channels are identical to each other.

Equivalently, suppose we start from equilibrium output demands, x*. Thus, given
x*, we get p* by inverting the output demand function D(p, w) and v* by the
factor market clearing condition v = B′x. In their turn, p* gives us w* by the
competition condition p = Bw and v* gives us w* by the factor supply function
F(p, w). For equilibrium, we need both of the w* to be the same. We go through
analogous stories when we start with equilibrium factor quantities, v*, or
equilibrium factor prices, w*.
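This two-channel consistency check can be traced numerically. The sketch below uses a hypothetical linear specification (output demand x = a - p, factor supply v = c + w, and an invented coefficient matrix B); starting from candidate output prices p*, the factor quantities are computed through both channels and compared:

```python
import numpy as np

# Fixed-coefficient technology: row i of B = factor requirements of good i.
B = np.array([[1.0, 2.0],
              [2.0, 1.0]])
a = np.array([10.0, 10.0])   # demand intercepts: x = a - p (assumed)
c = np.array([1.0, 1.0])     # factor supply: v = c + w (assumed)

p_star = np.array([8.7, 8.7])          # candidate equilibrium output prices

x_star = a - p_star                    # outputs from the demand function
w_star = np.linalg.solve(B, p_star)    # factor prices from p = Bw (competition)

v_via_clearing = B.T @ x_star          # channel 1: market clearing v = B'x
v_via_supply = c + w_star              # channel 2: factor supply function

# At a true equilibrium both channels must deliver the same factor quantities.
print(v_via_clearing, v_via_supply)
```

Had we started from a non-equilibrium p*, the two computed factor vectors would disagree, which is precisely the test the text describes.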

The main lesson is this: in the Walras-Cassel system, there is no necessary
direction of determination from one thing to another. The Walras-Cassel system is
a completely simultaneous system where equilibrium prices (w*, p*) and
equilibrium quantities (v*, x*) are determined jointly. It does not matter whether
we say "prices determine cost of production" or "cost of production determines
prices", etc. In equilibrium, price equals cost of production, but this is obtained as
a solution to a simultaneous system, not by causal direction. The only exogenous
data are preferences of households, endowments and technology.

Properties and characterization of general equilibrium

Basic questions

Basic questions in general equilibrium analysis are concerned with the conditions
under which an equilibrium will be efficient, which efficient equilibria can be
achieved, when an equilibrium is guaranteed to exist and when the equilibrium
will be unique and stable.
First Fundamental Theorem of Welfare Economics

The first theorem states that any equilibrium is Pareto efficient.

The technical condition for the result to hold is the fairly weak one that consumer
preferences are locally nonsatiated, which rules out a situation where all
commodities are "bads". Additional implicit assumptions are that consumers are
rational, markets are complete, there are no externalities and information is
perfect. While these assumptions are certainly unrealistic, what the theorem
basically tells us is that the sources of inefficiency found in the real world are not
due to the decentralized nature of the market system, but rather have their sources
elsewhere.
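A small numerical illustration of the theorem, using a hypothetical two-agent Cobb-Douglas exchange economy (all parameters invented): at the competitive equilibrium both agents' marginal rates of substitution equal the price ratio, which is exactly the interior Pareto-efficiency condition.

```python
# Two agents with Cobb-Douglas utilities u_i = x^alpha_i * y^(1-alpha_i).
# A is endowed with one unit of good x, B with one unit of good y;
# good y is the numeraire (price 1) and p is the price of good x.
alpha_A, alpha_B = 0.6, 0.3   # assumed taste parameters

# Cobb-Douglas demand for x is alpha * income / p; incomes are m_A = p, m_B = 1.
# Market clearing for x: alpha_A + alpha_B / p = 1, so
p = alpha_B / (1 - alpha_A)               # 0.75

xA, yA = alpha_A, (1 - alpha_A) * p       # A's equilibrium bundle
xB, yB = alpha_B / p, (1 - alpha_B)       # B's equilibrium bundle

# MRS of a Cobb-Douglas consumer: (alpha / (1 - alpha)) * (y / x)
mrs_A = alpha_A / (1 - alpha_A) * yA / xA
mrs_B = alpha_B / (1 - alpha_B) * yB / xB
print(p, mrs_A, mrs_B)                    # all three equal 0.75
```

Since both MRSs coincide (with the price ratio), no feasible reallocation can raise one agent's utility without lowering the other's: the market outcome is Pareto efficient, as the theorem asserts.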

Second Fundamental Theorem of Welfare Economics

While every equilibrium is efficient, it is clearly not true that every efficient
allocation of resources will be an equilibrium. However, the Second Theorem
states that every efficient allocation can be supported by some set of prices. In
other words, all that is required to reach a particular outcome is a redistribution of
initial endowments of the agents, after which the market can be left alone to do its
work. This suggests that the issues of efficiency and equity can be separated and
need not involve a trade-off. However, the conditions for the Second Theorem are
stronger than those for the First, as now we need consumers' preferences to be
convex (convexity roughly corresponds to the idea of diminishing marginal
utility, or to preferences where "averages are better than extrema").

Existence

Even though every equilibrium is efficient, neither of the above two theorems says
anything about the equilibrium existing in the first place. To guarantee that an
equilibrium exists we once again need consumer preferences to be convex
(although with enough consumers this assumption can be relaxed both for
existence and the Second Welfare Theorem). Similarly, but less plausibly,
feasible production sets must be convex, excluding the possibility of economies of
scale.
Proofs of the existence of equilibrium generally rely on fixed point theorems such
as Brouwer fixed point theorem or its generalization, the Kakutani fixed point
theorem. In fact, one can quickly pass from a general theorem on the existence of
equilibrium to Brouwer’s fixed point theorem. For this reason many mathematical
economists consider proving existence a deeper result than proving the two
Fundamental Theorems.

Uniqueness

Although generally (assuming convexity) an equilibrium will exist and will be
efficient, the conditions under which it will be unique are much stronger. While
the issues are fairly technical the basic intuition is that the presence of wealth
effects (which is the feature that most clearly delineates general equilibrium
analysis from partial equilibrium) generates the possibility of multiple equilibria.
When a price of a particular good changes there are two effects. First, the relative
attractiveness of various commodities changes, and second, the wealth
distribution of individual agents is altered. These two effects can offset or
reinforce each other in ways that make it possible for more than one set of prices
to constitute an equilibrium.
A result known as the Sonnenschein-Mantel-Debreu Theorem states that the
aggregate (excess) demand function inherits only certain properties of individuals'
demand functions, and that these (continuity, homogeneity of degree zero,
Walras' law, and boundary behavior when prices are near zero) are not enough to
guarantee uniqueness.
There has been much research on conditions when the equilibrium will be unique,
or which at least will limit the number of equilibria. One result states that under
mild assumptions the number of equilibria will be finite (see Regular economy)
and odd (see Index Theorem). Furthermore, if an economy as a whole, as
characterized by an aggregate excess demand function, has the revealed
preference property (which is a much stronger condition than revealed preference
for a single individual) or the gross substitute property, then likewise the
equilibrium will be unique. All methods of establishing uniqueness can be thought
of as establishing that each equilibrium has the same positive local index, in
which case by the index theorem there can be but one such equilibrium.

Determinacy

Given that equilibria may not be unique it is of some interest whether any
particular equilibrium is at least locally unique. This means that comparative
statics can be applied as long as the shocks to the system are not too large. As
stated above, in a regular economy equilibria will be finite, hence locally unique.
One reassuring result, due to Debreu, is that "most" economies are regular.
However, recent work by Michael Mandler (1999) has challenged this claim. The
Arrow-Debreu model is neutral between models of production functions as
continuously differentiable and as formed from (linear combinations of) fixed
coefficient processes. Mandler accepts that, under either model of production, the
initial endowments will not be consistent with a continuum of equilibria, except
for a set of Lebesgue measure zero. However, endowments change with time in
the model and this evolution of endowments is determined by the decisions of
agents (e.g., firms) in the model. Agents in the model have an interest in equilibria
being indeterminate:
"Indeterminacy, moreover, is not just a technical nuisance; it undermines the
price-taking assumption of competitive models. Since arbitrary small
manipulations of factor supplies can dramatically increase a factor's price, factor
owners will not take prices to be parametric." (Mandler 1999, p. 17)
When technology is modeled by (linear combinations) of fixed coefficient
processes, optimizing agents will drive endowments to be such that a continuum
of equilibria exist:
"The endowments where indeterminacy occurs systematically arise through time
and therefore cannot be dismissed; the Arrow-Debreu model is thus fully subject
to the dilemmas of factor price theory." (Mandler 1999, p. 19)
Critics of the general equilibrium approach have questioned its practical
applicability based on the possibility of non-uniqueness of equilibria. Supporters
have pointed out that this aspect is in fact a reflection of the complexity of the real
world and hence an attractive realistic feature of the model.

Stability

In a typical general equilibrium model the prices that prevail "when the dust
settles" are simply those that coordinate the demands of various consumers for

various goods. But this raises the question of how these prices and allocations
have been arrived at and whether any (temporary) shock to the economy will
cause it to converge back to the same outcome that prevailed before the shock.
This is the question of stability of the equilibrium, and it can be readily seen that
it is related to the question of uniqueness. If there are multiple equilibria then
some of them will be unstable. Then, if an equilibrium is unstable and there is a
shock, the economy will wind up at a different set of allocations and prices once
the converging process completes. However, stability depends not only on the
number of equilibria but also on the type of process that guides price changes
(for a specific type of price adjustment process see tatonnement). Consequently,
some researchers have focused on plausible adjustment processes that will
guarantee system stability, that is, prices and allocations always converging to
some equilibrium; when more than one exists, which equilibrium is reached will
depend on where one begins.
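A minimal tatonnement sketch, using an assumed two-good exchange economy (Cobb-Douglas demands summing to 0.6 + 0.3/p against a unit endowment of good 1) and an arbitrary adjustment speed; none of the numbers come from the text:

```python
# Excess demand for good 1 in a hypothetical exchange economy where the
# agents' Cobb-Douglas demands sum to 0.6 + 0.3/p against a total
# endowment of 1 (good 2 is the numeraire).
def excess_demand(p):
    return 0.6 + 0.3 / p - 1.0

p, speed = 2.0, 0.5          # arbitrary starting price and adjustment speed
for _ in range(200):
    p += speed * excess_demand(p)   # raise p when demand exceeds supply

print(round(p, 6))   # converges to the market-clearing price 0.75
```

With this excess demand function the adjustment map is a contraction, so the process converges to the unique clearing price; with multiple equilibria, or a less well-behaved adjustment rule, convergence is not guaranteed, which is the point of this section.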

Problems and computable general equilibrium

Unresolved problems in general equilibrium

Research building on the Arrow-Debreu model has revealed some problems with
the model. The Sonnenschein-Mantel-Debreu results show that, essentially, almost
no restrictions can be placed on the shape of aggregate excess demand functions.
Some think
this implies that the Arrow-Debreu model lacks empirical content. At any rate,
Arrow-Debreu equilibria cannot be expected to be unique, or stable.
A model organized around the tatonnement process has been said to be a model of
a centrally planned economy, not a decentralized market economy. Some research
has tried, not very successfully, to develop general equilibrium models with other
processes. In particular, some economists have developed models in which agents
can trade at out-of-equilibrium prices and such trades can affect the equilibria to
which the economy tends. Particularly noteworthy are the Hahn process, the
Edgeworth process, and the Fisher process.

The data determining Arrow-Debreu equilibria include initial endowments of
capital goods. If production and trade occur out of equilibrium, these endowments
will be changed, further complicating the picture.

In a real economy, however, trading, as well as production and consumption, goes
on out of equilibrium. It follows that, in the course of convergence to equilibrium
(assuming that occurs), endowments change. In turn this changes the set of
equilibria. Put more succinctly, the set of equilibria is path dependent... [This path
dependence] makes the calculation of equilibria corresponding to the initial state
of the system essentially irrelevant. What matters is the equilibrium that the
economy will reach from given initial
endowments, not the equilibrium that it would have been in, given initial
endowments, had prices happened to be just right (Franklin Fisher, as quoted by
Petri (2004)).
The Arrow-Debreu model of intertemporal equilibrium, in which forward markets
exist at the initial instant for goods to be delivered at each future point in time,
can be transformed into a model of sequences of temporary equilibrium.
Sequences of temporary equilibrium contain spot markets at each point in time.

Roy Radner found that in order for equilibria to exist in such models, agents (e.g.,
firms and consumers) must have unlimited computational capabilities.

Although the Arrow-Debreu model is set out in terms of some arbitrary
numeraire, the model does not encompass money. Frank Hahn, for example, has
investigated whether general equilibrium models can be developed in which
money enters in some essential way. The goal is to find models in which
existence of money can alter the equilibrium solutions, perhaps because the initial
position of agents depends on monetary prices.
Some critics of general equilibrium modeling contend that much research in these
models constitutes exercises in pure mathematics with no connection to actual
economies. "There are endeavors that now pass for the most desirable kind of
economic contributions although they are just plain mathematical exercises, not
only without any economic substance but also without any mathematical value"
(Nicholas Georgescu-Roegen 1979). Georgescu-Roegen cites as an example a
paper that assumes more traders in existence than there are points in the set of real
numbers.

Although modern models in general equilibrium theory demonstrate that under
certain circumstances prices will indeed converge to equilibria, critics hold that
the assumptions necessary for these results are extremely strong. As well as
stringent restrictions on excess demand functions, the necessary assumptions
include perfect rationality of individuals; complete information about all prices
both now and in the future; and the conditions necessary for perfect competition.
However some results from experimental economics suggest that even in
circumstances where there are few, imperfectly informed agents, the resulting
prices and allocations often wind up resembling those of a perfectly competitive
market.
Frank Hahn defends general equilibrium modeling on the grounds that it serves a
negative function: general equilibrium models show what the economy would
have to be like for an unregulated economy to be Pareto efficient.

Computable general equilibrium

Until the 1970s, general equilibrium analysis remained theoretical. However, with
advances in computing power and the development of input-output tables, it
became possible to model national economies, or even the world economy, and
solve for general equilibrium prices and quantities under a range of assumptions.

References
Arrow, K., Hahn, F. H.: "General Competitive Analysis", San Francisco, 1971
Arrow K. J., G. Debreu: "The Existence of an Equilibrium for a Competitive Economy" Econometrica, vol.
XXII, 265-90, 1954
Bergman, U. Michael: Properties of Production Functions, University of Copenhagen, 2005
Debreu, G.: "Theory of Value", New York, 1956
Demmler, Horst: Grundzüge der Mikroökonomie, 4th edition, München, 2000
Eatwell, John: "Walras's Theory of Capital", The New Palgrave: A Dictionary of Economics (Edited by
Eatwell, J., Milgate, M., and Newman, P.), London, 1989
Feess, Eberhard: Mikroökonomie, 4th edition, Marburg, 2004
Fishburn, P.C.: Retrospective on the utility theory of Von Neumann and Morgenstern, in Journal of Risk and
Uncertainty 2, 1989, p. 127-158
Frank, Robert H.: Microeconomics and Behaviour, McGraw-Hill, 2003
Fraser, Iain: The Cobb-Douglas Production Function: An Antipodean Defence?, Economic Issues, 2002
Garvey, E.G.: The theory of the firm and catholic social teaching, in Journal of markets and morality, Vol. 6,
No.2, 2003, page 525-540
Georgescu-Roegen, Nicholas: "Methods in Economic Science", "Journal of Economic Issues", V. 13, N. 2
(June): 317-328, 1979
Grandmont, J. M.: "Temporary General Equilibrium Theory", "Econometrica", V. 45, N. 3 (Apr.): 535-572,
1977
Jaffe, William: "Walras' Theory of Capital Formation in the Framework of his Theory of General
Equilibrium", Economie Appliquee, V. 6 (Apr.-Sep.): 289-317, 1953
Knight, F.H.: Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company 1921
Lorenz, Wilhelm: MikroOnline, www.mikroo.de, Weringerode, 2006
Luis, M.B.: Introduction to Industrial Organisation , Massachusetts Institute of Technology Press, 2000, Page
84-85
Mandler, Michael: Dilemmas in Economic Theory: Persisting Foundational Problems of Microeconomics,
Oxford, 1999
Mankiw, G.N.: Principles of Economics, Second Edition, New York 1998
Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory, Oxford University Press, 2005
Mongin, P.: Expected Utility Theory, in: J.Davis, W. Hands and U. Maki, Edward Elgar (publisher):
Handbook of Economic Methodology, London 1997
Peters, M.: Expected Utility and Risk Aversion, St. Gallen 2005
Petri, Fabio: General Equilibrium, Capital, and Macroeconomics: A Key to Recent Controversies in
Equilibrium Theory, London, 2004
Pindyck, Robert S. / Rubinfeld, David L.: Mikroökonomie, 6th edition, München, 2005
Rabin, M.: Diminishing marginal utility of wealth cannot explain risk aversion, Berkeley, 2000
Sandilands, R. and Hillary, D.: The Cobb-Douglas Production Function, Oct. 1999
Schumann, Jochen et al.: Grundzüge der mikroökonomischen Theorie, Heidelberg, 1999
Varian, H.R.: Microeconomic Analysis, New York 1984
Varian, H. R.: Grundzüge der Mikroökonomik, 3rd edition, München, 1995
Yaari, M.: Some measures of risk aversion and their uses, in Journal of Economic Theory, Vol.1, p. 315-329
