Microeconomics: Department of Economics Faculty of Economics and Management Doc. Ing. Iveta Zentková, Phd. 07 / 2006
Microeconomics
Essays by
Alexander Frech
Felix Hötzinger
Olaf Löbl
Eckart Margenfeld
Michael Mirz
Serge Schäfers
I. Classical and intertemporal models of consumer behaviour
1. Consumer Preferences
The economic model of consumer behaviour is very simple: people choose the best things they can afford. The objects of consumer's choice are called consumption bundles, i.e. complete lists of the goods involved, such as X = (x1, x2).
The consumer will rank these bundles according to their desirability. In other words, the consumer will decide whether he strictly prefers one over the other (e.g. X > Y) or is indifferent between X and Y (X ≈ Y). If the consumer either prefers X to Y or is indifferent between them, we say he weakly prefers X to Y (X ≥ Y). Bundles satisfying X ≈ Y can be shown graphically in a so-called indifference curve, with goods x1 and x2 on the two axes. The shaded area above and to the right of the indifference curve represents the weakly preferred set:
[Figure: an indifference curve in (x1, x2) space; the shaded area above and to the right of the curve is the weakly preferred set.]
It would be inconsistent for a consumer to hold X > Y and, at the same time, Y > X.
Therefore, assumptions about the consistency of consumers’ preferences are
made. Some of these assumptions are so fundamental, that they are referred to as
“axioms” of consumer theory:
Complete
Any two bundles can be compared: X ≥ Y or Y ≥ X, or both, in which case the consumer is indifferent between the two bundles.
Reflexive
Any bundle is at least as good as an identical bundle (X ≥ X).
Transitive
If X ≥ Y and Y ≥ Z, then the assumption is made that X is at least as good as Z (X ≥ Z).
Monotonicity
More is better: bundle X = (x1, x2 + one unit) is preferred over bundle Y = (x1, x2).
Satiation
The satiation point or bliss point is the overall best bundle for the consumer in terms of his own preferences. Too much of something, just like too little of it, leaves the consumer less satisfied than at the bliss point.
2. Utility
The only thing that matters about a utility assignment is how it orders / ranks the bundles of goods. The size of the utility difference between any two consumption bundles does not matter. Because of this emphasis on ordering bundles of goods, this kind of utility is referred to as ordinal utility.
Bundle    u1    u2
A          3    17
B          2    10
C          1    0.002
Cardinal utility theories attach significance to the magnitude of utility. The size of the utility difference between two bundles is supposed to have some sort of meaning, e.g.: "I am willing to pay twice as much for bundle A as for bundle B". This can also be expressed through a utility function with u(A) = 4 > u(B) = 2. Common specific forms of utility functions include those for perfect substitutes, quasilinear preferences, and Cobb-Douglas preferences.
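Since only the ranking matters, any monotonic transformation of a utility function represents the same ordinal preferences. A minimal sketch, with an illustrative Cobb-Douglas utility and hypothetical bundles (both are assumptions, not from the text):

```python
import math

def u(x1, x2):
    """One utility assignment (illustrative Cobb-Douglas form)."""
    return math.sqrt(x1 * x2)

def v(x1, x2):
    """A monotonic transformation of u: v = ln(u) + 10."""
    return math.log(u(x1, x2)) + 10.0

bundles = [(3, 17), (2, 10), (1, 2)]  # hypothetical bundles A, B, C

# Both assignments order the bundles identically:
rank_u = sorted(bundles, key=lambda b: u(*b), reverse=True)
rank_v = sorted(bundles, key=lambda b: v(*b), reverse=True)
assert rank_u == rank_v  # only the ordering matters, not the magnitudes
```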
2.3 Marginal Utility
The rate of change in utility as the consumer gets a little more of good 1 in a bundle (x1, x2) is called the marginal utility with respect to good 1, mathematically: MU1 = ∆U / ∆x1.
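The definition can be sketched numerically with a finite difference, assuming an illustrative utility function that is not specified in the text:

```python
def utility(x1, x2):
    # Illustrative utility function (an assumption, not from the text)
    return x1 ** 0.5 * x2 ** 0.5

def marginal_utility_1(x1, x2, dx=1e-6):
    """MU1 = change in U / change in x1, holding x2 fixed."""
    return (utility(x1 + dx, x2) - utility(x1, x2)) / dx

# Analytically, MU1 = 0.5 * x1**-0.5 * x2**0.5 = 0.75 at the bundle (4, 9).
assert abs(marginal_utility_1(4.0, 9.0) - 0.75) < 1e-4
```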
3. Choice
Consumers choose the most preferred bundle from their budget set. Below, a
budget set and several indifference curves of the consumer are drawn in the
diagram.
[Figure: a budget line and several indifference curves; the optimal bundle (x1*, x2*) lies where the highest attainable indifference curve just touches the budget line.]
In the diagram, the bundle (x1*, x2*) on the highest indifference curve that just touches the budget line is the optimal choice for the consumer: the best bundle the consumer can afford. Bundles on indifference curves that lie above the budget line without intersecting it would be preferred, but they are not affordable. As the graph shows, at the optimal choice the indifference curve does not cross the budget line but is tangent to it. Such a choice is called an "interior optimum".
The optimal choice of goods 1 and 2 at given prices and income is called the consumer's "demanded bundle". Hence, when prices or income change, the consumer's optimal choice will change.
Different preferences will lead to different demand functions. A demand function
is a function that relates the optimal choice – the quantities demanded – to the
different values of prices and income. In other words, the demand function shows
the optimal amounts of each of the goods as a function of the prices and income
faced by the consumer.
x1 = x1(p1, p2, m)
x2 = x2(p1, p2, m)
The left hand side of the equation stands for the quantity demanded. The right
hand side of the equation is the function that relates the prices and income to that
quantity. If two goods are perfect substitutes, then a consumer will purchase the
cheaper one.
Perfect complements are goods of which the consumer will always buy an equal amount of each. The most obvious example is a pair of shoes. Again, the optimal choice will be on the budget line, so we can solve for it mathematically as:
x1 = x2 = m / (p1 + p2)
The demand function for the optimal choice here is quite intuitive. Since the two goods are always consumed together, it is just as if the consumer were spending all his money on a single good that has the price p1 + p2.
There are cases where the consumer spends all of his money on the goods he likes and nothing on "neutral" or even "bad" goods. Thus, if commodity 1 is "good" and commodity 2 is "neutral" or "bad", the demand function becomes:
x1 = m / p1; x2 = 0
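The demand functions above can be sketched in code; the case split for perfect substitutes and the formulas for perfect complements and neutral goods follow the text:

```python
def demand_substitutes(p1, p2, m):
    """Perfect substitutes: spend all income on the cheaper good."""
    if p1 < p2:
        return m / p1, 0.0
    if p2 < p1:
        return 0.0, m / p2
    return m / (2 * p1), m / (2 * p1)  # equal prices: any split works

def demand_complements(p1, p2, m):
    """Perfect complements: x1 = x2 = m / (p1 + p2)."""
    x = m / (p1 + p2)
    return x, x

def demand_neutral(p1, p2, m):
    """Good 1 is a good, good 2 is neutral or bad: x1 = m/p1, x2 = 0."""
    return m / p1, 0.0

assert demand_substitutes(5, 10, 100) == (20.0, 0.0)
assert demand_complements(10, 10, 100) == (5.0, 5.0)
```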
4. Demand
"Normal goods" are goods for which demand increases when income increases:
∆ x1 / ∆ m > 0
"Inferior goods" are goods for which demand decreases when income increases (e.g. low-quality goods, fast food, etc.).
Whether a good is inferior or not depends on the level of income under examination. It might very well be the case that an inferior good is consumed more after income increases; beyond a certain income level, however, demand for that good will usually decline. An income offer curve or income expansion path (shown below) illustrates the bundles of goods that are demanded at different levels of income. If both goods are normal goods, the income expansion path will have a positive slope.
[Figure: income offer curve connecting the tangency points of budget lines and indifference curves as income rises.]
5. Slutsky equation
There are really two effects at work when the price of a good changes: the rate at which you can exchange one good for another changes, and your total purchasing power is altered. The first part – the change in demand due to the change in the rate of exchange between the two goods – is called the substitution effect. The second effect – the change in demand due to having more (or less) purchasing power – is called the income effect.
In order to illustrate this, it makes sense to break the price movement into two
steps: first, to change the relative price and adjust money income so as to hold the
purchasing power constant; second, adjust purchasing power while holding
relative prices constant. This can be seen graphically in two phases, pivot and shift, when the price of good 1 changes and income stays fixed. First, the budget line pivots around the original choice (X to Y); then a parallel shift of the budget line occurs as income changes while relative prices remain constant (Y to Z). The figure below illustrates the two movements of the budget line.
[Figure: pivot and shift of the budget line. The original choice X lies on the original budget line; the pivot (substitution effect) leads to choice Y on the pivoted line; the shift (income effect) leads to the final choice Z.]
The economic meaning of the pivoted budget line is that it holds the consumer's purchasing power constant: the original bundle of goods is just affordable on the new pivoted line. The required adjustment of money income reads as follows:
∆ m = ∆ p1 * x1
Note that if the price of good 1 goes down, the adjustment of income will be negative. When a price goes down, the consumer's purchasing power goes up, so one has to decrease the consumer's income in order to keep purchasing power at its original level.
The optimal choice on the pivoted budget line in the figure above is point Y. This bundle is the optimal bundle of goods when the price changes and income is adjusted. Once again, the movement from X to Y is called the substitution effect.
A parallel shift of the budget line occurs when income changes while relative prices remain constant. This is the income effect: changing income while keeping prices fixed at the new price. The figure above illustrates this by the movement from point Y to Z. More precisely, the income effect is the change in the demand for good 1 when income changes from m' to m, holding the price of good 1 fixed at p1':
∆ x1n = x1(p1', m) − x1(p1', m')
Putting the above together, the total change in demand equals the substitution effect plus the income effect:
∆ x1 = ∆ x1s + ∆ x1n
This is also called the "Slutsky identity".
While the substitution effect is always negative – opposite to the change in price – the income effect can go either way. Thus, the total effect may be either positive or negative. For a normal good, however, income and substitution effects work in the same direction.
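The decomposition can be sketched numerically. The Cobb-Douglas demand x1 = a·m/p1 below is an illustrative assumption; the pivot uses the income adjustment ∆m = ∆p1·x1 from the text:

```python
def x1(p1, m, a=0.5):
    """Illustrative Cobb-Douglas demand for good 1: x1 = a*m/p1."""
    return a * m / p1

def slutsky_decomposition(p1, p1_new, m):
    x_old = x1(p1, m)
    # Pivot: adjust income so the original bundle is just affordable
    m_pivot = m + (p1_new - p1) * x_old
    substitution = x1(p1_new, m_pivot) - x_old   # movement X -> Y
    income = x1(p1_new, m) - x1(p1_new, m_pivot) # movement Y -> Z
    total = x1(p1_new, m) - x_old
    return substitution, income, total

s, i, t = slutsky_decomposition(p1=2.0, p1_new=1.0, m=40.0)
assert abs(s + i - t) < 1e-9  # Slutsky identity: total = subst. + income
assert s > 0  # price fell, so the substitution effect raises demand
```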
For the Hicks substitution effect, the budget line is pivoted around the indifference curve rather than around the original choice. In this way, the consumer faces a new budget line with the same relative prices as the final budget line but a different income. The consumer's purchasing power with the new budget line is no longer sufficient to purchase his original bundle of goods; it is, however, sufficient to purchase a bundle that is just indifferent to the original one. This is illustrated in the chart below.
[Figure: Hicks substitution effect. The pivoted budget line is tangent to the original indifference curve; the original and final budget lines with the original and final choices are shown.]
The Slutsky substitution effect gives the consumer just enough money to get back
to his old level of consumption while the Hicks substitution effect gives the
consumer just enough money to get back to his old indifference curve.
6. Intertemporal Choice
Choices of consumption over time are known as intertemporal choices. The shape
of an indifference curve will indicate the consumers’ tastes for consumption at
different times. An indifference curve with the slope of -1 would represent tastes
of consumers who did not care whether they consumed today or tomorrow.
An indifference curve for perfect complements would indicate that consumers want to consume equal amounts today and tomorrow. Such consumers would be unwilling to substitute consumption from one period to the other. In reality, however, it is most common that consumers are willing to substitute some amount of consumption today for consumption tomorrow (savings).
The optimal choice for consumption can be examined in each of the two periods:
[Figure: optimal intertemporal choice in (c1, c2) space for a borrower, whose chosen c1 exceeds the endowment m1, and for a lender, whose chosen c1 falls short of m1.]
II. Choice under uncertainty
In the St. Petersburg game people were asked how much they would pay for
the following prospect: if tails comes out of the first toss of a fair coin, to
receive nothing and stop the game, and in the complementary case to
receive two guilders and stay in the game; if tails come out of the second
toss of the coin, to receive nothing and stop the game, and in the
complementary case to receive four guilders and stay in the game; and so
on ad infinitum. The expected monetary value of this prospect is infinite.
Since the people always set a definite, possibly quite small upper value on
the St. Petersburg prospect, it follows that they do not price it in terms of its
expected monetary value.5
1 Referred to as "risk". See also Knight, F.H., chapter I.I.26
2 Referred to as "uncertainty"
3 Knight, F.H., chapter I.I.26
4 Daniel Bernoulli (*8.2.1700 – 17.3.1782)
5 Mongin, P. (1997), p. 342-350
In general, by Bernoulli’s logic, the valuation of any risky venture takes the
expected utility form:
E(u | p, X) = ∑x∈X p(x) u(x)
The very foundations of classical utility theory were laid by John von Neumann and Oskar Morgenstern (1947), who used Bernoulli's concept to develop the expected utility function, combining mathematical probabilities with expected utility. They attempted to axiomatize6 Bernoulli's hypothesis in terms of agents' preferences over different ventures with random prospects (lotteries). In other words: the decision-maker's problem is to choose among lotteries (sets of probabilities) and to find the best lottery. Von Neumann and Morgenstern showed that if an agent has preferences defined over lotteries, then there is a utility function
U: ∆(X)→R
that assigns a utility to every lottery p ∈ ∆(X) and that represents these preferences.
They claim that this theory describes rational decision making. The expected utility theorem formulates several assumptions which, together with a set of axioms, form the cornerstone of decision making. Within this framework, a rational decision maker opts for the alternative which maximizes expected utility.7
(1) When, out of two possible options (x, y), x is chosen instead of y, we can deduce that the utility of x is at least as large as the utility of y. In this case x is directly revealed preferred to y.
(2) If x is linked to y by a sequence of such revealed preference comparisons, x is said to be (indirectly) revealed preferred to y.
(3) With locally nonsatiated utility functions, if the chosen bundle x costs strictly more than another affordable bundle x', then x is strictly directly revealed preferred to x'; observing the reverse ranking would contradict utility maximization.
Using these observations, the generalized axiom of revealed preference8 can be derived as a consequence of utility maximization. Based on this axiom, several forms of demand compensation9 can be constructed.
6 Axioms of preference are: independence, transitivity, completeness, the Archimedean axiom (continuity)
7 Fishburn, P.C. (1989), p. 127-158
8 Varian, H. (1984), p. 141-145
Comparative statics typically involves calculations designed to show the direction in which changes in the environment move people's optimal decisions. Convincing comparative statics results are those that hold even if only weak restrictions are imposed on preferences. That is why the methods of comparative statics sometimes have to be sophisticated.10
For example: a person is given the choice between a bet paying either $100 or nothing, each with a probability of 50%, or a certain payment instead. He is risk averse if he would rather accept a payoff of less than $50 (for example, $40) with certainty than take the bet, risk neutral if he is indifferent between the bet and a certain $50 payment, and risk-loving (risk-proclive) if the certain payment must exceed $50 (for example, $60) to induce him to take it over the bet. The average payoff of the bet, its expected value, is $50. The certain amount accepted instead of the bet is called the certainty equivalent, and the difference between it and the expected value is called the risk premium.
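The certainty equivalent and risk premium for the $100-or-nothing bet can be sketched numerically, assuming for illustration a concave (risk-averse) utility u(c) = √c, which is not specified in the text:

```python
def expected_utility(u, outcomes, probs):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def certainty_equivalent(u, u_inv, outcomes, probs):
    """Certain amount whose utility equals the bet's expected utility."""
    return u_inv(expected_utility(u, outcomes, probs))

u = lambda c: c ** 0.5   # concave Bernoulli utility (illustrative)
u_inv = lambda v: v ** 2

outcomes, probs = [100.0, 0.0], [0.5, 0.5]
ev = sum(p * x for x, p in zip(outcomes, probs))      # expected value: 50
ce = certainty_equivalent(u, u_inv, outcomes, probs)  # 25 for sqrt utility
risk_premium = ev - ce

assert ce < ev               # risk aversion: CE lies below expected value
assert risk_premium == 25.0  # the premium demanded for bearing the risk
```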
The greater the curvature of the utility function u(c), the higher the risk aversion. Since utility functions are defined only up to positive affine transformations, a measure that stays constant under such transformations is needed. This measure is the Arrow-Pratt measure of absolute risk aversion, or coefficient of absolute risk aversion, defined as
A(c) = − u''(c) / u'(c)
9 e.g. the Hicksian compensation or the Slutsky compensation. Varian, H. (1984), p. 144
10 Peters, M. (2005), p. 1-13
11 Peters, M. (2005), p. 8
12 Rabin, M. (2000), p. 4
As for absolute risk aversion, the corresponding terms constant relative risk aversion and decreasing/increasing relative risk aversion are used for the measure of relative risk aversion, R(c) = c · A(c) = − c · u''(c) / u'(c). This measure has the advantage that it is still a valid measure of risk aversion even if the utility function changes from risk-averse to risk-loving as c varies, i.e. is not strictly concave or convex over all c.13
Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. The inequality can be stated generally using measure theory14 or, equivalently, using probability theory15; the two statements say the same thing.
If preferences admit an expected utility representation with the Bernoulli utility function u(x), it follows from the definition of risk aversion that the decision maker is risk averse if:
13 Mas-Colell, A. (1995), p. 167 f.
14 Let (Ω, A, µ) be a measure space such that µ(Ω) = 1. If g is a real-valued function that is µ-integrable, and if φ is a convex function on the range of g, then φ(∫ g dµ) ≤ ∫ φ ∘ g dµ.
15 In the terminology of probability theory, µ is a probability measure. The function g is replaced by a real-valued random variable X (just another name for the same thing, as long as the context remains one of "pure" mathematics). The integral of any function over the space Ω with respect to the probability measure µ becomes an expected value. The inequality then says that if φ is any convex function, then φ(E[X]) ≤ E[φ(X)].
∫u (x)dF(x) ≤ u (∫ x dF(x))
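This risk-aversion condition can be checked numerically for a discrete distribution, assuming an illustrative concave utility u(c) = √c:

```python
# Discrete check of the risk-aversion (Jensen) condition for concave u:
# E[u(X)] <= u(E[X]).
u = lambda c: c ** 0.5  # concave Bernoulli utility (illustrative)

xs = [1.0, 4.0, 9.0, 16.0]   # possible outcomes
ps = [0.1, 0.2, 0.3, 0.4]    # their probabilities

e_u = sum(p * u(x) for x, p in zip(xs, ps))  # E[u(X)] = 3.0
u_e = u(sum(p * x for x, p in zip(xs, ps)))  # u(E[X]) = sqrt(10)
assert e_u <= u_e  # the concave case of Jensen's inequality
```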
In short, probabilities are really a measure of the lack of knowledge about the conditions which might affect the coin toss and thus merely represent our beliefs about the experiment.17
Other economists, such as Irving Fisher (1906) or Frank P. Ramsey (1926), asserted instead that probability is related to the knowledge possessed by an individual alone rather than to general knowledge. In Ramsey's opinion, it is personal belief that governs probabilities, not disembodied knowledge. As a consequence, "probability" is subjective.18
The problem with the subjectivist point of view is that it seemed impossible to
derive mathematical expressions for probabilities from personal beliefs.
However, Frank Ramsey's great contribution in his 1926 paper was to suggest a
way of deriving a consistent theory of choice under uncertainty that could isolate
beliefs from preferences while still maintaining subjective probabilities. In so
16 Mas-Colell, A. (1995), p. 185-186
17 As Knight expressed it, "if the real probability reasoning is followed out to its conclusion, it seems that there is 'really' no probability at all, but certainty, if knowledge is complete." (Knight, 1921:219)
18 Economists following these opinions are often referred to as "subjectivists".
doing, Ramsey provided the first attempt at an axiomatization of choice under uncertainty.19
The subjective nature of probability assignments can be made clearer by thinking of situations like a horse race. In this case most spectators face more or less the same lack of knowledge about the horses, the track, the jockeys, etc. Yet, while sharing the same "knowledge", different people place different bets on the winning horse. The basic idea behind the Ramsey–de Finetti derivation is that by observing the bets people make, one can presume these reflect their personal beliefs about the outcome of the race. Thus, Ramsey and de Finetti argued, subjective probabilities can be inferred from observation of people's actions.
Leonard Savage (1954) succeeded in giving a simple axiomatic basis to expected utility with subjective uncertainty, building on the ideas of Ramsey and de Finetti and the assumptions of transitivity20, order21, invariance22, dominance23, cancellation24 and continuity25.
According to this theory, the decision is made on the basis of subjective expected utility (SEU): the sum of the utilities of all single consequences, each multiplied by the subjective probability of its realization.
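A minimal sketch of an SEU calculation; the horse-race bet below is an invented example in the spirit of the Ramsey–de Finetti discussion, with hypothetical beliefs and payoffs:

```python
def subjective_expected_utility(beliefs, utilities):
    """SEU: utilities of the consequences, each weighted by the
    decision maker's subjective probability of its realization."""
    assert abs(sum(beliefs.values()) - 1.0) < 1e-9  # beliefs sum to one
    return sum(beliefs[s] * utilities[s] for s in beliefs)

# Hypothetical two-horse race; probabilities are personal beliefs.
beliefs = {"horse_a_wins": 0.7, "horse_b_wins": 0.3}
bet_on_a = {"horse_a_wins": 10.0, "horse_b_wins": -5.0}
bet_on_b = {"horse_a_wins": -5.0, "horse_b_wins": 10.0}

seu_a = subjective_expected_utility(beliefs, bet_on_a)
seu_b = subjective_expected_utility(beliefs, bet_on_b)
assert seu_a > seu_b  # the agent picks the act maximizing SEU
```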
19 Independently of Ramsey, Bruno de Finetti (1931, 1937) had also suggested a similar derivation of subjective probability.
20 Meaning a consistent rank order of preferences (if A is preferred to B and B to C, then A is preferred to C).
21 Meaning a clear preference for one out of two possibilities.
22 Meaning that the decision maker is not affected by the way the alternatives are presented.
23 Meaning that the choice with greater utility dominates preferences.
24 Meaning that identical probabilities with the same utility leave the decision to chance.
25 Meaning that a gamble is preferred to "sure outcomes" if the odds are high enough.
Commodities are distinguished not only by their location in space and time but also by their location in "state". By this we mean that "ice cream when it is rainy" is a different commodity than "ice cream when it is sunny", and the two are treated differently by agents and can command different
prices. Thus, letting S be the set of mutually-exclusive ”states of nature” (e.g. S
={rainy, sunny}), then we can index every commodity by the state of nature in
which it is received and thus construct a set of “state-contingent” markets.26
Insurance is a natural application of the state-preference approach precisely because it is an explicit "state-contingent" contract.
5. Concluding considerations
The expected utility model was first proposed by Daniel Bernoulli as a solution to the St. Petersburg paradox. Bernoulli argued that the paradox could be resolved if decision makers displayed risk aversion. Building on these ideas, the first important use of expected utility theory was that of John von Neumann and Oskar Morgenstern, who used the assumption of expected utility maximization in their formulation of game theory. The expected utility theorem says that a von Neumann-Morgenstern utility function exists if the agent's preference relation on the space of simple lotteries satisfies four axioms: completeness, transitivity, convexity/continuity, and independence. Independence is probably the most controversial of the axioms, and a variety of generalized expected utility theories have arisen, most of which drop or relax it. The Arrow-Pratt measures of risk aversion for von Neumann-Morgenstern utility functions discussed in chapter 2 have become standard in analyzing problems in the microeconomics of uncertainty. They have been used to characterize the qualitative properties of demand in insurance and asset markets, and to examine the properties of risk taking in taxation models, to name just a few applications. The limitations of classical expected utility considerations are outlined in chapter 3. The findings of Savage led to a normative theory of decision making based on subjective utility expectations, while the state-preference approach distinguishes commodities by states of nature (which substitute for "lotteries") and can be related both to Savage and to the theories of risk aversion.
26 Yaari, M. (1969), p. 315-329
III. Asset Markets and Risk Assets
Return
Risk
Definition:
There are many informal methods used to assess or to measure risk, although it is not usually possible to measure risk directly. Formal methods measure the value at risk.
In scenario analysis, risk is distinct from threat. A threat is a very low-probability but serious event, to which some analysts may be unable to assign a probability in a risk assessment because it has never occurred, and for which no effective preventive measure (a step taken to reduce the probability or impact of a possible future event) is available. The difference is most clearly illustrated by the
precautionary principle which seeks to reduce threat by requiring it to be reduced
to a set of well-defined risks before an action, project, innovation or experiment is
allowed to proceed.
RAROC
For risk management purposes, the main goal of allocating capital to individual business units is to determine the bank's optimal capital structure (i.e. economic capital allocation is closely correlated with individual business risk).
Risk Premium
In finance, the risk premium is the expected rate of return on an asset in excess of the risk-free interest rate.
Debt: In terms of bonds it usually refers to the credit spread (the difference
between the bond interest rate and the risk-free rate).
Standard deviation
In probability and statistics, the standard deviation is the most common measure
of statistical dispersion. Simply put, standard deviation measures how spread out
the values in a data set are. More precisely, it is a measure of the average distance
of the data values from their mean. If the data points are all close to the mean,
then the standard deviation is low (closer to zero). If many data points are very
different from the mean, then the standard deviation is high (further from zero). If
all the data values are equal, then the standard deviation will be zero. The
standard deviation has no maximum value, although it is limited for most data sets.
The standard deviation is defined as the square root of the variance. This means it
is the root mean square (RMS) deviation from the arithmetic mean. The standard
deviation is always a positive number (or zero) and is always measured in the
same units as the original data. For example, if the data are distance
measurements in meters, the standard deviation will also be measured in meters.
Not all random variables have a standard deviation, since the necessary expected values need not exist. If the random variable X takes on the values x1, ..., xN (which are real numbers) with equal probability, then its standard deviation can be computed as follows. First, the mean of X, written x̄, is defined as:
x̄ = (x1 + x2 + ... + xN) / N
For each value xi, calculate the difference xi − x̄ between xi and the average value. Calculate the squares of these differences and find their average. This quantity is the variance:
σ² = (1/N) ∑ (xi − x̄)²
The standard deviation is then σ = √σ².
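The steps above can be sketched in code (this is the population standard deviation, i.e. the equal-probability case described in the text):

```python
import math

def std_dev(values):
    """Population standard deviation: square root of the average
    squared deviation from the mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / n
    return math.sqrt(variance)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
assert std_dev(data) == 2.0        # mean is 5, variance is 4
assert std_dev([3.0, 3.0]) == 0.0  # equal values give zero deviation
```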
Capital market theory
The model was introduced by Jack Treynor, William Sharpe, John Lintner and
Jan Mossin independently, building on the earlier work of Harry Markowitz on
diversification and modern portfolio theory. Sharpe received the Nobel Memorial
Prize in Economics (jointly with Harry Markowitz and Merton Miller) for this
contribution to the field of financial economics. According to the CAPM, the relation between the expected return on a given asset i and the expected return on a proxy portfolio m (here, the market portfolio) is described as:
E(ri) = rf + βi · (E(rm) − rf), where βi = Cov(ri, rm) / Var(rm)
Here rf is the risk-free interest rate and E(rm) is the expected return of the market.
Asset pricing
Once the expected return, E(ri), is calculated using CAPM, the future cash flows
of the asset can be discounted to their present value using this rate to establish the
correct price for the asset. In theory, therefore, an asset is correctly priced when
its observed price is the same as its value calculated using the CAPM derived
discount rate. If the observed price is higher than the valuation, then the asset is
overvalued (and undervalued when the observed price is below the CAPM
valuation). Alternatively, one can "solve for the discount rate" for the observed
price given a particular valuation model and compare that discount rate with the
CAPM rate. If the discount rate in the model is lower than the CAPM rate then
the asset is overvalued (and undervalued for a too high discount rate).
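A minimal sketch of CAPM-based pricing; the beta, risk-free rate, expected market return, and cash flow below are hypothetical numbers for illustration:

```python
def capm_expected_return(rf, beta, erm):
    """CAPM required return: E(ri) = rf + beta_i * (E(rm) - rf)."""
    return rf + beta * (erm - rf)

def present_value(cash_flow, rate, years):
    """Discount a single future cash flow at the CAPM-derived rate."""
    return cash_flow / (1.0 + rate) ** years

# Hypothetical asset: beta 1.2, risk-free rate 5%, market return 10%
r = capm_expected_return(rf=0.05, beta=1.2, erm=0.10)
assert abs(r - 0.11) < 1e-9  # required return of 11%

value = present_value(cash_flow=111.0, rate=r, years=1)
# An observed price below this valuation would mean the asset is
# undervalued; a price above it, overvalued.
assert abs(value - 100.0) < 1e-6
```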
Asset-specific required return
The CAPM returns the asset-appropriate required return or discount rate - i.e. the
rate at which future cash flows produced by the asset should be discounted given
that asset's relative riskiness. Betas exceeding one signify more than average
"riskiness"; betas below one indicate lower than average. Thus a more risky stock
will have a higher beta and will be discounted at a higher rate; less sensitive
stocks will have lower betas and be discounted at a lower rate. The CAPM is
consistent with intuition - investors (should) require a higher return for holding a
more risky asset.
A rational investor should not take on any diversifiable risk, as only non-
diversifiable risks are rewarded. Therefore, the required return on an asset, that is,
the return that compensates for risk taken, must be linked to its riskiness in a
portfolio context - i.e. its contribution to overall portfolio riskiness - as opposed to
its "stand alone riskiness." In the CAPM context, portfolio risk is represented by
higher variance i.e. less predictability.
Efficient Frontier
The CAPM assumes that the risk-return profile of a portfolio can be optimized -
an optimal portfolio displays the lowest possible level of risk for its level of
return. Additionally, since each additional asset introduced into a portfolio further
diversifies the portfolio, the optimal portfolio must comprise every asset,
(assuming no trading costs) with each asset value-weighted to achieve the above
(assuming that any asset is infinitely divisible). All such optimal portfolios, i.e.,
one for each level of return, comprise the efficient (Markowitz) frontier.
Because unsystematic risk is diversifiable, the portion of a portfolio's total risk that matters for pricing is its systematic risk, measured by beta.
For a given level of return, however, only one such portfolio will be optimal (in the sense of lowest risk). Since the risk-free asset is, by definition, uncorrelated with any other asset, a combination of the risk-free asset with a risky portfolio will generally have the lower variance and hence be the more efficient choice.
This relationship also holds for portfolios along the efficient frontier: a higher
return portfolio plus cash is more efficient than a lower return portfolio alone for
that lower level of return. For a given risk free rate, there is only one optimal
portfolio which can be combined with cash to achieve the lowest level of risk for
any possible return. This is the market portfolio.
Assumptions of CAPM: all investors are rational, risk-averse mean-variance optimizers with a common one-period horizon; they hold homogeneous expectations about asset returns; they can borrow and lend unlimited amounts at the risk-free rate; there are no taxes or transaction costs; assets are infinitely divisible; and all information is available to all investors at the same time.
Shortcomings of CAPM
The model does not appear to adequately explain the variation in stock returns.
Empirical studies show that low beta stocks may offer higher returns than the
model would predict. Some data to this effect was presented as early as a 1969
conference in Buffalo, New York in a paper by Fischer Black, Michael Jensen,
and Myron Scholes. Either that fact is itself rational (which saves the efficient
markets hypothesis but makes CAPM wrong), or it is irrational (which saves
CAPM, but makes EMH wrong – indeed, this possibility makes volatility
arbitrage a strategy for reliably beating the market).
In a typical graphical illustration, the risk-free rate is assumed to be, say, 5%, and a tangent line, called the capital market line, is drawn to the efficient frontier passing through the risk-free rate. The point of tangency corresponds to a portfolio on the efficient frontier; that portfolio is called the super-efficient portfolio.
Arbitrage pricing theory (APT) holds that the expected return of a financial asset
can be modelled as a linear function of various macro-economic factors or
theoretical market indices, where sensitivity to changes in each factor is
represented by a factor specific beta coefficient. The model derived rate of return
will then be used to price the asset correctly - the asset price should equal the
expected end of period price discounted at the rate implied by model. If the price
diverges, arbitrage should bring it back into line. The theory was initiated by the
economist Stephen Ross in 1976.
If APT holds, then the return on a risky asset j can be described as satisfying the following relation:
rj = aj + βj1 F1 + βj2 F2 + … + βjn Fn + εj
where aj is a constant for asset j, each Fk is a systematic factor, βjk is the sensitivity of asset j to factor k, and εj is the asset's idiosyncratic random shock with mean zero. That is, the uncertain return of asset j is a linear function of the n factors. Additionally, every factor is also considered to be a random variable with mean zero.
Note that there are some assumptions and requirements that have to be fulfilled for the latter to be correct: there must be perfect competition in the market, and the total number of factors may never surpass the total number of assets.
IV. The neoclassical theory of production in the short run and long run
1. Introduction
The production process can be defined as any activity that increases the similarity between the pattern of demand for goods and the quantity, form, and distribution of these goods available to the market place.
The inputs or resources used in any production process are called factors of production. Classical economics distinguishes three such factors: land, labour, and capital.
Capital goods are those that have previously undergone a production process. They are previously produced means of production and are sometimes referred to as "technology" as a factor of production. Investment in them is important for the future growth of the economy.
These factors were codified originally in the analysis of Adam Smith (1776) and David Ricardo (1817), and in the later contributions of Karl Marx, who called these factors the "holy trinity" of political economy.
But this classical view was further developed and we have until these present days
some more means that deals with factors of production:
The translation of demand for commodities into demand for factor services
necessitates some clearly defined technologies which tell us how commodities are
produced, how factors are distributed and, in addition, how much the process
of conversion from services to commodities costs. In this view production is a
matter of indirect exchange, and so we extend the tools of analysis derived in the
context of pure exchange to analysing production.
The above-mentioned idea that production is an indirect exchange was at the heart
of the theory of the Lausanne School of Léon Walras (1874) and Vilfredo Pareto
(1896, 1906). These Lausanne theories of production were embedded in the
general equilibrium system. As a result, the basic production unit – the “firm” –
was relegated into a subsidiary role. Indeed, Walras ignored the decision-making
role of producers entirely.
The treatment of profit maximization, the choice of factor inputs and the marginal
productivity theory of distribution is found to a good part in other scholars,
notably the "Paretian" school at its height, where it was consolidated by Jacob
Viner (1931), John Hicks (1939) and Paul Samuelson (1947). The integration of
the theory of production into Paretian general equilibrium theory belongs to these
approaches as well.
After World War II, the theory of production veered off in another direction,
exploiting the activity analysis and linear programming methods developed by the
Cowles Commission. The “Neo-Walrasian” theory of production (Koopmans,
1951; Debreu, 1959) covers much of the same ground as the Paretian theory,
albeit using somewhat different methods, though the Neo-Walrasians have asserted
the greater "generality" of their approach.
In all of these theories, however, capital is assumed to be an endowed factor of
production rather than a produced one.
The production function is essentially the set of technically feasible combinations
of output Y and inputs, K and L:
Y = f (K,L)
This form excludes joint production, i.e. it assumes that a particular process of
production yields no more than one output (no multiple co-products).
The technology's production function states the maximum amount of output
possible from an input bundle and has the form
Y = f(X1, …, Xn)
3.1 Characteristics
The function f(x) is continuous throughout, single-valued, and has continuous 1st-,
2nd-, and 3rd-order partial derivatives. The function presupposes technical
efficiency and states the maximum attainable output from each bundle (X1, …, Xn).
Inputs and outputs are rates of flow per unit of time t0, where t0 is sufficiently
long to allow for completion of the technical process.
[Figure: total product curve Y = f(X), with output level Y on the vertical axis and input X on the horizontal axis]
The total physical product of a variable input-factor identifies what outputs are
possible using various levels of variable input. The diagram shows a typical total
product curve. Output increases as more inputs are employed up until point A.
The maximum output possible is Ym.
The average physical product is the total product divided by the number of units
of variable input employed; it is the output per unit of input. For example, if 10
employees produce 50 units per day, the average product of the variable labour
input is 5 units per day.
The marginal physical product of a variable input is the rate of change of the
output level as the level of that input changes, holding all other input levels fixed.
Typically the marginal product of one input depends upon the amounts used of the
other inputs, and it is diminishing if it becomes smaller as the level of the input
increases.
This means that as you add more and more of a variable input, you will reach a
point beyond which the resulting increases in output start to diminish. This
concept is also known as the law of diminishing marginal returns.
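The product concepts above can be sketched numerically. The short-run production function Y = 20·sqrt(L) is an illustrative assumption, chosen only because its marginal product visibly diminishes:

```python
import math

# Illustrative short-run production function (an assumption, not from
# the text): Y = f(L) = 20 * sqrt(L), all other inputs held fixed.
def total_product(L):
    return 20 * math.sqrt(L)

def average_product(L):
    # output per unit of the variable input
    return total_product(L) / L

def marginal_product(L, dL=1e-6):
    # numerical rate of change of output as L changes
    return (total_product(L + dL) - total_product(L)) / dL

for L in (1, 4, 9):
    print(L, round(average_product(L), 2), round(marginal_product(L), 2))
# marginal product falls as L rises: the law of diminishing marginal returns
```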
There are two special types of production functions, which are seldom found in
reality. The production function Y = f(X1, X2) is said to be homogeneous of degree
n if, given any positive constant k, f(kX1, kX2) = k^n f(X1, X2).
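A quick numerical check of the homogeneity definition, using an assumed Cobb-Douglas form f(X1, X2) = X1^0.3 · X2^0.7, which is homogeneous of degree n = 0.3 + 0.7 = 1:

```python
# Numerical check (illustrative) that a Cobb-Douglas function is
# homogeneous of degree n = a + b: f(k*X1, k*X2) == k^n * f(X1, X2)
# for any positive constant k.
def f(x1, x2, a=0.3, b=0.7):
    return x1**a * x2**b

k, x1, x2 = 2.0, 4.0, 9.0
lhs = f(k * x1, k * x2)
rhs = k**(0.3 + 0.7) * f(x1, x2)
print(abs(lhs - rhs) < 1e-9)  # True: degree of homogeneity n = 1
```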
3.4 Returns-to-Scale
Marginal products describe the change in output level as a single input level
changes.
Returns-to-scale describes how the output level changes as all input levels change
in direct proportion (e.g. all input levels doubled, or halved). When all input
levels are increasing proportionately, there need be no diminution of marginal
products since each input will always have the same amount of other inputs with
which to work. Input productivities need not fall and so returns-to-scale can be
constant or increasing.
The elasticity of production measures the sensitivity of total product to a change
in an input in percentage terms (E = %∆Y / %∆L).
The long-run is the circumstance in which a firm is unrestricted in its choice of all
input levels. If all inputs are allowed to be varied, then the diagram would express
outputs relative to total inputs, and the production function would be a long run
production function.
The short-run is a circumstance in which a firm is restricted in some way in its
choice of at least one input level. There are several reasons for this, such as:
• temporarily being unable to install or remove machinery
• being required by law to meet affirmative action quotas
• having to meet domestic content regulations
• temporarily being unable to cancel contracts
A useful way to think of the long-run is that the firm can choose at its pleasure
which short-run circumstance to be in. It is a managerial task to use economic
analysis to make business decisions involving the best allocation of the firm's
scarce resources in order to achieve the firm's goals – in particular, to maximize
profit. The decision agent acts rationally in pursuit of this goal. He has perfect
knowledge of technical production relationships and of input and product price
relationships.
4. Technology
Rather than comparing inputs to outputs, or the choice between two outputs as
shown for the elasticity of substitution, it is also possible to assess the mix of
inputs employed in production.
You can use a lot of labour with a minimal amount of capital, or vice versa, or any
combination in between. For most goods there are more than just two inputs, but
for ease of understanding and illustration we use the two-input case.
A technology is a process by which inputs are converted to an output. Usually
several technologies will produce the same product. The question is which is the
best technology and how do we compare technologies.
Xi denotes the amount used of input i. An input bundle is a vector of the input
levels (X1, X2, …, Xn). A production plan is an input bundle and an output level Y.
A production plan is feasible if Y ≤ f(X1, …, Xn). The collection of all feasible
production plans is the technology set.
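The feasibility condition Y ≤ f(X1, …, Xn) can be sketched directly; the square-root technology below is an assumption for illustration only:

```python
# A production plan (input bundle plus output level) is feasible when
# Y <= f(X1, ..., Xn). Sketch with an assumed two-input technology.
def f(x1, x2):
    return (x1 * x2) ** 0.5   # illustrative technology, not from the text

def is_feasible(plan_output, *inputs):
    """A plan belongs to the technology set iff Y <= f(inputs)."""
    return plan_output <= f(*inputs)

print(is_feasible(5.0, 4.0, 9.0))   # f(4, 9) = 6, so Y = 5 is feasible: True
print(is_feasible(7.0, 4.0, 9.0))   # Y = 7 exceeds f(4, 9) = 6: False
```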
The Rate of Substitution answers the question: At what rate will a firm substitute
one input for another without changing its output level?
[Figure: Leontief (no-substitution) isoquants – L-shaped isoquants Y* and Y' in (K, L) space, with kinks along the fixed-proportions ray with slope u/v]
The Cobb-Douglas function is homothetic, i.e. if all inputs are multiplied by λ, the
output is multiplied by a function of λ. If relative prices change, producers will
want to change their combination of inputs, and the Cobb-Douglas technology
permits this (unlike the Leontief technology).
A Cobb-Douglas example: [Figure: Cobb-Douglas isoquants, e.g. Y = 100 and Y = 120, smooth curves convex to the origin in (X1, X2) space]
5. Summary
The production theory as presented simplifies the production reality of a firm, yet
it is useful for understanding the characteristics of production processes. Some
critical annotations must be made, however. Especially in the decision-making
process, the neoclassical theory restricts its producer to a single decision maker.
Today we have many modern multi-owner corporations in which hundreds of
shareholders with conflicting desires hold decision-making power.
The Paretian firm is owned by a single entrepreneur who has unchallenged power
of decision over all aspects – product, technique of production, hiring of factors,
etc. – but not over any prices; he considers prices as "given".
Today producers often treat prices as decision variables rather than parameters
and are rarely confronted with perfect competition. Therefore we have to analyse
and consider the cost functions within a production process as well.
V. The neoclassical theory of cost in short-run and long-run
Introduction
Overview
Production and cost
Production function:
• You can think of this production function as the case in which all inputs
besides labor are held constant.
• The production function is the "efficient" frontier of production possibilities.
Relation among TP, AP, MP:
Looking ahead – cost function with a single input: a cost function C(Q) gives the
minimum cost of producing each possible quantity of output.
Case of a single input: C(Q) = w*L(Q), where L(Q) is the minimum labor needed
to produce Q.
Isoquants: all possible combinations of inputs that exactly produce a given output
level: all (L, K) such that f(L, K) = Q0 (constant).
Example, Cobb-Douglas:
Q = K^(1/2) L^(1/2), so Q^2 = KL and K(L; Q) = Q^2/L.
Definition: the MRTS measures the amount of an input L the firm would require in
place of one unit less of another input, K, to be able to produce the same output as
before: MRTSL,K = -dK/dL (for constant output). Note that marginal products and
the MRTS are related: MRTSL,K = MPL / MPK.
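Using the Cobb-Douglas example above, a small numerical check (with illustrative values L = 4, Q = 6) that the slope of the isoquant K(L; Q) = Q²/L reproduces MRTS = MP_L/MP_K = K/L:

```python
# For the Cobb-Douglas example Q = K^(1/2) L^(1/2), the isoquant is
# K(L; Q) = Q^2 / L. Check numerically that MRTS_{L,K} = -dK/dL
# equals MP_L / MP_K = K / L (the point (L, Q) is illustrative).
def isoquant_K(L, Q):
    return Q**2 / L

def mrts_from_isoquant(L, Q, dL=1e-6):
    # numerical slope of the isoquant, sign-flipped
    return -(isoquant_K(L + dL, Q) - isoquant_K(L, Q)) / dL

L, Q = 4.0, 6.0
K = isoquant_K(L, Q)                                        # 9.0
print(round(mrts_from_isoquant(L, Q), 4), round(K / L, 4))  # both ≈ 2.25
```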
Elasticity of substitution: σ measures the percentage change in the input ratio K/L
induced by a one-percent change in the MRTS along an isoquant.
Example: for the Cobb-Douglas function, σ = 1.
Returns to scale
Definition: returns to scale ask whether output increases more or less than
proportionately when ALL inputs increase by the same percentage amount.
Extreme production functions
1. Linear: Q = f(L, K) = aL + bK
• MRTS constant
• Constant returns to scale
• σ = ∞.
Summary I.
1. Production function relates output to the efficient use of all possible input
levels.
2. The effect of an input on production can be measured by its average and
marginal products.
3. An isoquant gives all input combinations that generate the same level of
output.
4. The ability to substitute one input for another is measured by the
Marginal Rate of Technical Substitution (MRTS), which normally
satisfies diminishing MRTS.
5. The production function, isoquant and MRTS have a direct counterpart in
consumer theory to the utility function, the indifference curve and MRS.
6. We can summarize ability to tradeoff one input for another by the
elasticity of substitution.
7. Returns to scale measure how output increases with the proportionate
increase in all inputs.
Cost minimization
Taxonomy of costs
Short run versus long run
Cost minimization
The firm’s problem: A profit maximizing firm won’t spend more to produce its
output than it has to: MinimizeL,K TC = rK + wL, Subject to: f(L,K) = Q0
The solution:
In words: find the cheapest input combination that produces the desired level of
output.
Iso-quant curve: input combinations that produce the same quantity of output;
slope of iso-quant = - MRTSL,K = - MPL/MPK.
Iso-cost lines: input combinations that cost the same amount: wL + rK = C (a
constant), i.e. K = (C - wL)/r; slope of an iso-cost line = ∆K/∆L (along the line) =
- w/r; compare: budget lines.
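A sketch of the cost-minimization problem for the Cobb-Douglas case Q = K^(1/2)L^(1/2); the functional form and the prices w, r are assumptions for illustration. Solving the problem gives the closed form L* = Q·sqrt(r/w), K* = Q·sqrt(w/r), and the tangency condition (slope of isoquant = -w/r, i.e. K/L = w/r) is verified numerically:

```python
import math

# Cost minimization for the assumed technology Q = K^(1/2) L^(1/2):
# Minimize wL + rK subject to f(L, K) = Q. Substituting K = Q^2/L into
# the cost and setting the derivative to zero yields the closed form below.
def cost_minimizing_inputs(Q, w, r):
    L = Q * math.sqrt(r / w)
    K = Q * math.sqrt(w / r)
    return L, K

Q, w, r = 10.0, 4.0, 1.0
L, K = cost_minimizing_inputs(Q, w, r)
print(round(L, 3), round(K, 3))      # 5.0 20.0
print(math.isclose(K / L, w / r))    # tangency MP_L/MP_K = K/L = w/r: True
print(round(w * L + r * K, 3))       # minimized cost = 2*Q*sqrt(w*r) = 40.0
```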
Comparative statics
• Output Expansion Path: L(Q) and K(Q) are labor and capital levels that
minimize cost.
• Plot optimal cost-minimizing input combinations as output increases (i.e.,
moves in the northeast direction).
• If the cost-minimizing quantity of an input rises (falls) with output, then it
is a “normal” (“inferior”) input.
• Compare: income-consumption curve.
Factor price change: L(w; Q) gives the labor that minimizes cost for each wage
rate.
Short-run costs
Properties
• diminishing marginal product => SRAC and SRMC rising
• when SRMC > SRATC, then SRATC rising (and vice versa)
• consequently, SRMC = SRATC when SRATC minimum, similarly SRMC =
SRAVC when SRAVC minimum
• compare with marginal product: SRMC = ∆(wL(Q,K)) / ∆Q = w (∆Q/∆L)^(-1)
= w / MPL
Recall
• SRTC = w L(Q; K̄) + r K̄ (i.e., one input fixed at K̄)
• LRTC = w L(Q) + r K(Q) (i.e., all inputs vary freely)
Consequences of Flexibility
• SRTC ≥ LRTC and SRATC ≥ LRATC for all Q (equal only where the fixed
input happens to be optimal)
• SRMC steeper than LRMC
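The flexibility result can be illustrated with the same assumed Cobb-Douglas technology Q = (KL)^(1/2) and prices w = r = 1; the fixed capital stock K̄ = 10 is an arbitrary choice:

```python
import math

# Illustration (assumed Q = (K*L)^(1/2), w = r = 1) that SRTC(Q) >= LRTC(Q),
# with equality only at the output for which the fixed capital K_bar
# happens to be the long-run optimum.
w, r, K_bar = 1.0, 1.0, 10.0

def srtc(Q):
    L = Q**2 / K_bar                # labor needed when capital is fixed
    return w * L + r * K_bar

def lrtc(Q):
    return 2 * Q * math.sqrt(w * r)  # cost when both inputs vary freely

for Q in (5.0, 10.0, 20.0):
    print(Q, srtc(Q), lrtc(Q), srtc(Q) >= lrtc(Q))
# at Q = 10, K_bar = 10 is exactly the long-run optimum, so SRTC = LRTC
```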
Scalability
The problem:
• business model works for limited market: certain “lot size,” geographic area,
customer type, and so on.
• can it be replicated for broader market?
Success Stories:
Failures:
Summary II.
Criticisms of neoclassical economics
The basic theory of production in neoclassical economics is criticized for
assuming the wrong rationales for producers. According to the theory, increasing
production costs are the reason producers do not produce beyond a certain
amount. Some empirical counter-arguments claim that most producers in the
economy do not make their production decisions in the light of increasing
production costs (for example, they often have additional capacity that could be
brought into use if producing more were desirable).
Variables such as supply and demand, which are independent at the individual
level, are (allegedly wrongly) assumed to be independent at the aggregate level as
well. This criticism has been applied to many central theories of neoclassical
economics.
The theory of perfect competition is criticized on the ground that it wrongly
assumes the demand curve for a single firm to be flat, while in fact it has to be
(very) slightly downward sloping, since in that theory the demand curve for the
individual firm is a part of the aggregate demand curve, which is not flat. Taking
this into account would ruin the theory.
The critique of the assumption of rationality is not confined to social theorists and
ecologists. Many economists, even contemporaries, have criticized this vision of
economic man. Thorstein Veblen put it most sardonically, describing economic
man as a "lightning calculator of pleasures and pains, who oscillates like a
homogeneous globule of desire of happiness under the impulse of stimuli that shift
him about the area, but leave him intact."
Herbert Simon's theory of bounded rationality has probably been the most
influential of the heterodox approaches. Is economic man a first approximation to
a more realistic psychology, an approach only valid in some sphere of human
lives, or a general methodological principle for economics? Early neoclassical
economists often leaned toward the first two approaches, but the last has
become prevalent.
Neoclassical economics is also often seen as relying too heavily on complex
mathematical models, such as those used in general equilibrium theory, without
enough regard to whether these actually describe the real economy. Many see an
attempt to model a system as complex as a modern economy by a mathematical
model as unrealistic and doomed to failure. A famous answer to this criticism is
Milton Friedman's claim that theories should be judged by their ability to predict
events rather than by the realism of their assumptions. Naturally, many claim
that neoclassical economics (as well as other branches of economics) has not been
very good at predicting events.
Critics of neoclassical models accuse them of copying nineteenth-century
mechanics and the "clockwork" model of society, which seems to justify elite
privileges as arising "naturally" from a social order based on economic
competition. This is echoed by modern critics in the anti-globalization movement,
who often blame neoclassical theory, as it has been applied by the IMF in
particular, for inequities in global debt and trade relations. They assert that it
ignores the complexity of nature and of human creativity, and seeks mechanical
ideals like equilibrium:
And in Poinsot's Éléments de statique..., which was a textbook on the theory of
mechanics bristling with systems of simultaneous equations to represent, among
other things, the mechanical equilibrium of the solar system, Walras found a
pattern for representing the catallactic equilibrium of the market system. (William
Jaffé)
VI. Theory of the firm in perfect competition and monopolistic competition
The “theory of the firm” is a relatively modern economic construct and one with
several variants. Firms produce goods and provide services and, as such, they play
an important part in the supply/demand interaction that characterizes the
economic order of non-socialist societies. A business firm’s costs, prices, and
economic power naturally matter to theoretical economists as well as to those
shaping practical economic policy. Economists since Adam Smith have been
committed to the notion that markets allocate resources better than alternative
means do.27
One early and popular perception of the modern firm linked the size and structure
of business enterprises to the state of technology. Technological advances make
new forms of production and organization possible. The productive genius of
Henry Ford, for example, was possible because technology made it feasible to
mass produce and market automobiles. Advances in transportation,
communications, and production technologies in the late nineteenth century
invited, if not compelled, the growth of interstate business firms in the United
States.
Neoclassical economic models could capture the characteristics of perfectly
competitive and monopolistic markets as outlined in chapters 2 and 3. Since most
industries fit neither model, however, economists also explored the world of
imperfectly competitive markets, oligopolistic industries, and "monopolistic
competition." The behaviour and performance of firms in markets with these
characteristics cannot be predicted with the same certainty as those of producers
in perfectly competitive markets or their monopolistic alternative, but they can be
analyzed using the tools of neoclassical economics.28
The competitive firm tries to maximise profits, given that it is a price taker.29
Price is therefore exogenous and the firm chooses only quantities. It therefore tries
to maximise π = pq - c(q) by choice of q, where c(q) is the (total) cost function of
the firm.30 From this behaviour the optimization rule can be deduced, which states
that the marginal revenue of each action must equal its marginal cost. In other
words: at the profit-maximizing level of output, marginal revenue and marginal
cost are exactly equal.
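A minimal sketch of this rule, assuming a hypothetical quadratic cost function c(q) = 0.5q² + 10 (not from the source). With MC(q) = q, the optimum satisfies p = MC:

```python
# Competitive firm's profit with an assumed cost function c(q) = 0.5*q^2 + 10,
# so that MC(q) = q and the profit-maximizing output solves p = MC(q).
def profit(q, p):
    return p * q - (0.5 * q**2 + 10)

p = 8.0
q_star = p   # from the first-order condition p = MC(q) = q
# profit at q* beats profit at neighbouring outputs
print(profit(q_star, p), profit(q_star - 1, p), profit(q_star + 1, p))  # 22.0 21.5 21.5
```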
27 Garvey, G. E. (2003), pp. 525–540.
28 In many sectors of the economy markets are best described by the term oligopoly – where a few producers dominate the majority of the market and the industry is highly concentrated. In a duopoly two firms dominate the market, although there may be many smaller players in the industry.
29 Meaning that the price of the firm's output is the same regardless of the quantity that the firm decides to produce.
30 A firm's costs reflect its production process.
To extend this analysis of profit maximization, the cost curves can be considered:
the firm's marginal-cost curve (MC) is upward sloping31, the average-total-cost
(ATC) curve is U-shaped, and the marginal-cost curve crosses the average-total-
cost curve at the minimum of average total cost.
[Figure: MC, ATC and price line, with quantities Q1, QMAX and Q2 marked on the horizontal axis]
Figure 1.1: Profit maximization for a competitive firm32
The market price (P) equals marginal revenue (MR) and average revenue (AR).
At the quantity Q1, marginal revenue MR1 exceeds marginal cost MC1, so raising
production increases profit. At the quantity Q2 marginal cost MC2 is above
marginal revenue MR2, so reducing production increases profit. The profit
maximizing quantity Qmax is found where the horizontal price line intersects the
marginal-cost curve.33
As outlined, cost considerations are crucial for a profit-maximizing firm, and
considerations of economies of scale are important here.34 The shape of the
long-run average-total-cost curve (see figure 1.1) conveys important information
about the technology of producing a good. When long-run average cost declines as
output increases, there are said to be economies of scale. When long-run average
total cost rises as output increases, there are said to be diseconomies of scale. And
31 Under perfect market conditions, with the firm being a price taker, marginal revenue equals the market price.
32 Mankiw, N.G. (1998), p. 296.
33 Varian, R. (1984), pp. 22–46.
34 Adam Smith identified the division of labor and specialization as the two key means to achieve a larger return on production. Through these two techniques, employees would not only be able to concentrate on a specific task but, with time, improve the skills necessary to perform their jobs, so that the production level increases.
when long-run average total cost does not change, there are said to be constant
returns to scale.35
1) Many suppliers each with an insignificant share of the market – this means that
each firm is too small relative to the overall market to affect price via a change in
its own supply – each individual firm is assumed to be a price taker.
2) Homogeneous products – each supplier produces an identical product, so the
output of one firm is a perfect substitute for the output of any other firm in the
market.
3) Consumers have perfect information about the prices all sellers in the market
charge – so if some firms decide to charge a price higher than the ruling market
price, there will be a large substitution effect away from this firm.
35 Mankiw, N.G. (1998), p. 284.
36 E.g. Mankiw, G.N. (1998), p. 291 ff.
4) All firms (industry participants and new entrants) are assumed to have equal
access to resources (technology, other factor inputs) and improvements in
production technologies achieved by one firm can spill-over to all the other
suppliers in the market.
5) There are assumed to be no barriers to entry & exit of firms in long run – which
means that the market is open to competition from new suppliers – this affects the
long run profits made by each firm in the industry. The long run equilibrium for a
perfectly competitive market occurs when the marginal firm makes normal profit
only in the long term.
2.1 Short Run Price and Output for the Competitive Industry and Firm
In the short run the equilibrium market price is determined by the interaction
between market demand and market supply. Diagram 1 shows that price P1
is the market-clearing price, and this price is taken by each of the firms. Because
the market price is constant for each unit sold, the AR curve also becomes the
Marginal Revenue curve (MR). A firm maximises profits when marginal revenue
= marginal cost. In the diagram below, the profit-maximising output is Q1. The
firm sells Q1 at price P1. The area shaded is the economic profit made in the short
run because the ruling market price P1 is greater than average total cost.37
Not all firms make supernormal profits in the short run. Their profits depend on
the position of their short-run cost curves. Some firms may be experiencing
subnormal profits because their average total costs exceed the current market
price. Other firms may be making normal profits, where total revenue equals total
cost (i.e. they are at the break-even output). In diagram 2, the firm shown has high
37 Varian, R. (1984), pp. 22–46.
38 Mankiw, G.N. (1998), p. 285 ff.
short run costs such that the ruling market price is below the average total cost
curve. At the profit maximising level of output, the firm is making an economic
loss.
Figure 2.3 describes the increase in market demand. This causes an increase in
market price and quantity traded. The firm's average revenue curve shifts up to
AR2 (=MR2) and the profit maximising output expands to Q2, with the MC curve
as the firm's supply curve. Higher prices cause an expansion along the supply
curve. Following the increase in demand, total profits have increased. An inward
shift in market demand would have the opposite effect.40
39 Mankiw, G.N. (1998), p. 285 ff.
40 The effect of a change in market supply (perhaps arising from a cost-reducing technological innovation available to all firms in a competitive market) is a ceteris paribus consideration not further outlined in this work.
2.3 The adjustment process of a perfectly competitive industry towards
the long run equilibrium
If most firms are making abnormal profits in the short run there will be an
expansion of the output of existing firms and new firms might enter the market.
Firms are responding to the profit motive and supernormal profits act as a signal
for a reallocation of resources within the market. The addition of new suppliers
causes an outward shift in the market supply curve, as shown in figure 2.4.41
Making the assumption that the market demand curve remains unchanged, higher
market supply will reduce the equilibrium market price until the price = long run
average cost. At this point each firm is making normal profits only. There is no
further incentive for movement of firms in and out of the industry and a long-run
equilibrium has been established. The entry of new firms shifts the market supply
curve to MS2 and drives down the market price to P2. At the profit-maximising
output level Q3 only normal profits are being made. There is no incentive for
firms to enter or leave the industry. Thus a long-run equilibrium is established.
41 Mankiw, G.N. (1998), p. 285 ff.
42 Mankiw, G.N. (1998), p. 285 ff.
2.4 Perfect competition and economic efficiency
43 Allocative efficiency defines a state where all resources are allocated to their highest-valued use (no other allocation exists in which they would yield greater profit), whereas productive efficiency describes producing in the lowest-cost manner.
44 Another form of economic efficiency – dynamic efficiency – relates to aspects of market competition such as the rate of innovation in a market and the quality of output provided over time, and is not subject to these considerations.
3. Theory of the firm under monopolistic competition
3.1 Short run price and output for the monopolistic industry and firm
In the short run, the monopolistically competitive firm faces limited competition.
There are other firms that sell products that are good, but not perfect, substitutes
for the firm's own product. In other words, every firm has a monopoly on its own
product. When the product is differentiated, the firm has some monopoly power,
and we must therefore use the monopoly analysis, as in Figure 3.1 below.
45 Luis M. B. (2000), pp. 84–85.
Figure 3.1: short run monopolistic revenues46
We see that the marginal revenue is less than the price. The firm will set its
output so as to make marginal cost equal to marginal revenue, and charge the
corresponding price on the demand curve, so that in this example the monopoly
sells 1000 units of output (per week, perhaps) for a price of $85 per unit.
But this is just a short run situation. We see that the price is greater than the
average cost (which is $74 per unit, in this case) giving a profit of $11,000 per
week. This profitable performance will attract new competition in the long run.
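The arithmetic of this example can be verified directly from the numbers given in the text:

```python
# Verifying the short-run example: 1000 units sold at $85 each,
# average cost $74 per unit, so profit per unit is $11.
quantity, price, avg_cost = 1000, 85, 74
profit = (price - avg_cost) * quantity
print(profit)  # 11000, i.e. $11,000 per week
```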
3.2 Long Run Price and Output for the Monopolistic Industry and Firm
46 Mankiw, G.N. (1998), p. 291 ff.
47 Mankiw, G.N. (1998), p. 291 ff.
In this example, the firm can break even by selling 935 units of output at a price
of $76 per unit. The profit – zero – is the greatest profit the firm can make, so
profit is being maximized at the output that makes MC = MR.
Zero (economic) profit is also the condition for long-run equilibrium in a
perfectly competitive industry. But this equilibrium is not the ideal that the long-
run equilibrium in a perfectly competitive industry is.
While monopolistically competitive firms are inefficient, it is usually the case that
the costs of regulating prices for every product that is sold in monopolistic
competition by far exceed the benefits; the government would have to regulate all
firms that sold heterogeneous products - an impossible proposition in a market
economy.
Another concern of critics of monopolistic competition is that it fosters
advertising and the creation of brand names. Critics argue that advertising induces
customers into spending more on products because of the name associated with
them rather than because of rational factors. This is refuted by defenders of
advertising who argue that (1) brand names can represent a guarantee of quality,
and (2) advertising helps reduce the cost to consumers of weighing the tradeoffs
of numerous competing brands.
There are unique information and information processing costs associated with
selecting a brand in a monopolistically competitive environment. In a monopoly
industry, the consumer is faced with a single brand and so information gathering
is relatively inexpensive. In a perfectly competitive industry, the consumer is
faced with many brands. However, because the brands are virtually identical,
again information gathering is relatively inexpensive. Faced with a
monopolistically competitive industry, to select the best out of many brands the
consumer must collect and process information on a large number of different
brands. In many cases, the cost of gathering information necessary to selecting the
best brand can exceed the benefit of consuming the best brand (versus a randomly
selected brand).
4. Concluding considerations
atomicity. As perfect competition is a theoretical absolute, there are no real-world
examples of a perfectly competitive market.
The described model of monopolistic competition is therefore far closer to reality,
although it brings suboptimal results. This is mainly because the rule P = MR =
MC (price equals marginal revenue equals marginal cost) holds true for a
competitive firm only, whereas for a firm with monopoly power price exceeds
marginal cost (P > MR = MC).
In conclusion it can be said that this theoretical approach assumes profit
maximization, but it can also be used to show how a change in objectives48 (for
example from profit maximisation to revenue maximisation) affects the price and
output of a business.
48 Other objectives can be: (1) Satisficing: satisficing behaviour involves the owners setting minimum acceptable levels of achievement in terms of business revenue and profit, e.g. a target rate of growth of sales or an acceptable rate of return on capital. (2) Sales revenue maximisation: annual salaries and other perks might be more closely correlated with total sales revenue than with profits; revenue is maximised when marginal revenue (MR) = zero. (3) Constrained sales revenue maximisation: shareholders of a business may introduce a constraint on the decisions of managers.
VII. The theory of oligopoly and monopoly
Considerations about market situations very often revolve around the question of
the circumstances under which markets come to equilibrium: which market
constellations bring demand and supply together, and which prices are accepted
by consumers so as to clear markets?
It can safely be assumed that the degree of competition between suppliers is a
decisive influencing factor. Many considerations and reflections concern markets
under conditions of perfect competition, where many suppliers act on the market.
The following reflections concern the constellations of markets under oligopoly
and monopoly conditions, where just one or a few actors supply customers'
demand.
1. Monopoly
The monopolist, however, will find it most profitable to produce only four units,
because it does not see marginal benefit the same way that buyers see it. For the
seller, the extra benefit of the second unit is only 111: it sells the second unit for
116, but to sell the second unit it had to reduce the price it charges by 5. Thus it
"lost" 5 money units on the first unit, so the net increase in its revenue was only
111.
Using the maximization principle, one can see that producing beyond the fourth
unit is not in the interest of the firm. The fifth unit brings in added benefits of
only 94 to the firm (it sells for 101, but to sell it, the firm lowers the price on the
other units), yet costs an added 100. From the point of view of the buyers,
however, the fifth unit should be produced: it brings them more added benefit
than it uses resources.
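The marginal-revenue logic of the example can be computed from a price schedule. Only the two prices implied for the first two units (121 and 116, since selling the second unit required a price cut of 5) are used here; the text does not reproduce the full schedule:

```python
# Marginal revenue of unit n for a single-price monopolist:
# MR_n = total revenue at n units - total revenue at n-1 units.
# The schedule [121, 116] reproduces the example's second-unit MR of 111.
def marginal_revenue(prices):
    mr = []
    for n in range(1, len(prices) + 1):
        tr_n = prices[n - 1] * n
        tr_prev = prices[n - 2] * (n - 1) if n > 1 else 0
        mr.append(tr_n - tr_prev)
    return mr

print(marginal_revenue([121, 116]))  # [121, 111]: MR of unit 2 < its price 116
```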
The discussion above is illustrated below. The seller attempts to set marginal cost
equal to marginal revenue, or to produce at q0. From the consumers' viewpoint,
the best amount to produce would be q1. The monopolist restricts output because
of a divergence between marginal benefit as the firm perceives it and marginal
benefit as buyers perceive it. Producing beyond q0 is not in the interests of the
firm because the extra benefit it sees, the marginal revenue curve, is less than the
extra cost of production, shown by the marginal cost curve. Extra output is in the
interests of buyers because the extra benefit they get, shown by the demand curve,
is greater than the extra costs of production.
The unexploited value of monopoly leads to two questions. First, one can ask
whether people have found ways to capture this value. Because the search for
value occupies a great deal of people's talents and energies in a market economy,
it should not surprise us to learn that there is a commonly-used way to capture this
value. Sellers can capture it with price discrimination.
Second, one can ask if the government can eliminate the unexploited value with
some sort of intervention. Economists have suggested two important ways for the
government to intervene, through antitrust actions and through regulation.
As shown above, price elasticity becomes very important under monopoly, where
the supplier manages the price and steers output towards the profit maximum or
optimum. When changing output and output prices, the supplying company has to
take the price elasticity of demand into account: by how much does demand
increase or decrease if the price is lowered or raised by one percent? Profit can
be raised by a price increase only if elasticity is low enough that the resulting
drop in quantity sold does not shrink total revenue by more than the higher price
adds.
The availability of goods, and especially of substitute products, always affects
elasticity, so under monopoly (without adequate substitutes) demand is less
elastic than in a market with many suppliers.
The price elasticity of sales revenue is tied to the price elasticity of demand:
revenue elasticity falls to 0 when the price elasticity of demand reaches (minus)
one, and at that point total revenue is at its extremum, i.e. its maximum.
Written as an equation, the price (p) elasticity (E) of sales revenue (U), EU,p, can
be transformed through the following steps:
[1] EU,p = (dU/dp) · (p/U)
[2] U(p) = p · x(p)
[3] dU/dp = x + p · (dx/dp)
[4] EU,p = [x + p · (dx/dp)] · p/(p·x) = 1 + (dx/dp) · (p/x)
[5] Ex,p = (dx/dp) · (p/x), hence EU,p = 1 + Ex,p
[6] MR = dU/dx = p + x · (dp/dx)
[7] MR = p · [1 + (x/p) · (dp/dx)]
[8] MR = p · (1 + 1/Ex,p)
The final step [8] is the equation of Amoroso-Robinson; it gives the marginal
revenue and answers the question of how much revenue grows with one additional
output unit x.
Combined with marginal cost, the equation leads to Cournot's point on the
price-sales function, which marks the company's optimal price-quantity
combination.
Under monopoly, the price-sales function can be read directly as the customers'
demand function for the good supplied by the monopolist. All the theoretical
results on household demand can therefore be carried over.
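A minimal numeric sketch of the Cournot point, using the Amoroso-Robinson relation MR = p·(1 + 1/Ex,p) with an assumed linear demand x = a - b·p and an assumed constant marginal cost (all numbers are illustrative, not taken from the text):

```python
a, b, mc = 100.0, 2.0, 10.0   # illustrative demand intercept, slope, marginal cost

def price(x):                 # inverse demand p(x) = (a - x)/b
    return (a - x) / b

def elasticity(x):            # price elasticity of demand E = (dx/dp)*(p/x)
    return -b * price(x) / x

def marginal_revenue(x):      # Amoroso-Robinson: MR = p*(1 + 1/E)
    return price(x) * (1 + 1 / elasticity(x))

# Cournot point: MR = MC; for linear demand this is x* = (a - b*mc)/2
x_star = (a - b * mc) / 2
assert abs(marginal_revenue(x_star) - mc) < 1e-9
```

At x* = 40 the monopolist charges p = 30, and demand there is elastic (E = -1.5), as it must be at a monopoly optimum.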
2. Oligopoly
                          Supplier 1
Pricing strategy        100%        90%
Opponent    100%      100/100     10/130
            90%       130/10      20/20
(payoffs: Opponent / Supplier 1)
There may be no equilibrium solution in a situation of this sort. Rather, there may
be a period of collusion in which firms agree (though it may be an unspoken
agreement) to keep prices high. Then, the collusion may disintegrate as firms
begin cheating and finally a new period of collusion may begin. Whether sellers
collude or compete will depend on many factors that can be difficult to measure
and put into a theory, such as the number of sellers, their personalities, whether
they have equal or unequal shares of the market, whether their costs are the same,
the ease of cheating and of detecting cheating, and whether the sellers can
compete on nonprice bases such as service and quality.
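Reading the payoff numbers above as (row player's payoff / column player's payoff), which is an assumption about the table's orientation, the instability of collusion can be checked directly: cutting the price is the best reply to either strategy of the rival.

```python
# Payoffs from the pricing game (assumed order: row player / column player)
payoff = {
    ("100%", "100%"): (100, 100),
    ("100%", "90%"):  (10, 130),
    ("90%",  "100%"): (130, 10),
    ("90%",  "90%"):  (20, 20),
}

def best_reply(rival_strategy):
    # row player's payoff-maximising price given the rival's price
    return max(["100%", "90%"], key=lambda s: payoff[(s, rival_strategy)][0])

assert best_reply("100%") == "90%"   # cheat on a colluding rival: 130 > 100
assert best_reply("90%") == "90%"    # cut if the rival cuts: 20 > 10
```

Since both firms reason this way, the game ends at 20/20 even though 100/100 is better for both, which is why collusion tends to form, disintegrate, and re-form in cycles.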
A thorough examination of the possibilities of oligopolistic strategies and how
well they fit observed behavior of real-world oligopolies is a large and
controversial subject that is beyond the scope of these readings. The important
point concerning economic efficiency is that if oligopolists perceive their demand
curves as downward-sloping (that is, if they take into account that the amount
they produce will have a significant effect on the price they can charge), their
marginal revenue curves will lie below their demand curves and they will restrict
output relative to what an industry of price takers would. Thus, there will be an
efficiency loss involved. Most economists use the term "market power" to
describe the ability of any price maker to set price.
When the possession of market power is profitable, it should attract new entrants
into the industry. If entry is easy, then the existence of very few or even only one
firm may not result in economic inefficiency. The threat of potential entry may be
enough competition to keep the industry operating at or close to the competitive
solution. In this case, the market is a contestable market. However, if entry is not
easy but there are significant barriers to entry, the threat of competition is less.
Barriers to entry exist when there are sunk costs (expenses that cannot be
recovered once a firm has entered the industry). Where these costs are high, the
industry probably operates as the theory of monopoly suggests it will.
There are two general types of theories of oligopoly: Conjectural Variation
Models on the one hand and Limit Pricing Models on the other.
In conjectural variation models the firms in the industry are taken as given and
each firm makes certain assumptions about what the other firms' reactions will be
to its own actions. For example, in the Cournot model each firm assumes there
will be no reaction on the part of the other firms. In the limit price models one
firm chooses its action taking into account the possible entry or exit of
competitors to or from the market.
In the Cournot model each firm presumes no reaction on the part of the other
firms to a change in its output; thus ∂Q/∂qi = 1. With linear inverse demand
p = p0 - bQ and constant marginal cost ci, the first order condition for a
maximum profit of the i-th firm is:
p0 - b·Qoi - 2b·qi - ci = 0
where Qoi is the output of the firms other than the i-th. When this is solved for qi
the result is:
qi = (p0 - b·Qoi - ci) / (2b)
However it is more convenient to represent the first order condition and its
solution as:
p0 - bQ - b·qi - ci = 0,   i.e.   qi = (p - ci)/b   where p = p0 - bQ
Now we can sum this equation over the n firms (with C1 = Σci) and substitute
p = p0 - bQ. The result is:
Q = [n/(n+1)](p0/b) - [1/(n+1)]C1/b
When this output is substituted into the inverse demand function the result is:
62
p = [1/(n+1)]p0 + [1/(n+1)]C1
or if we let c1=C1/n:
p = [1/(n+1)]p0 + [n/(n+1)]c1
where c1 represents the average of the marginal costs of the n firms. It can be seen
from the last term that as the number of firms increase without bound the market
price approaches c1.
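The convergence of the Cournot price to average marginal cost can be checked numerically (p0 and c1 below are illustrative values):

```python
p0, c1 = 120.0, 20.0   # illustrative demand intercept and average marginal cost

def cournot_price(n):
    # p = [1/(n+1)]*p0 + [n/(n+1)]*c1 for n firms
    return p0 / (n + 1) + n * c1 / (n + 1)

assert cournot_price(1) == 70.0                 # monopoly: halfway between p0 and c1
assert abs(cournot_price(10**6) - c1) < 1e-3    # price -> marginal cost as n grows
```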
If one follows through with this model, one has to take into consideration that
the firms with above-average marginal cost would be making a loss on variable
costs and would cease production.
Carrying through with the analysis for a market with a leader firm indicates that
the market price will be:
p = [1/(n+2)]p0 + [(n+1)/(n+2)]c1
where c1 is now the weighted average of the marginal costs of the firms, with all
of the follower firms given an equal weight and the leader firm given a weight of
twice that of the follower firms. The leader firm has the effect on the industry of
two follower firms. Otherwise the result is the same as in the case of the Cournot
oligopoly.
VIII. Input markets
Market
While barter markets exist, most markets require the existence of currency or
other form of money. An economic system in which goods and services (and
resources required to produce those goods and services) are mediated by markets
is called a market economy. Critics of the market economy have proposed, or
attempted to implement, a command economy or other non-market economy. The
attempt to mix socialism with the incentives created by a market is known as
market socialism, which includes the relatively recent socialism with Chinese
characteristics, though some argue that socialism and markets are fundamentally
incompatible.
Input Markets
Definition: Input markets are markets for goods and services needed in a
production process.
On a higher level, there are two different kinds of inputs in a production process:
1. Production factors and
2. Preliminary products, which have themselves mostly been produced with other
production factors
Production factors
In classical economics, the factors labour, capital, land and knowledge have been
designated as production factors (since Adam Smith and David Ricardo). The
factor labour is represented by the individual human being.
Originally, the factor land referred to farmland. Later it was extended to all
kinds of natural resources such as crude oil, minerals etc. Because of the
increasing scarcity of resources such as water and other commodities, this
production factor is now often simply called nature.
The production of all goods starts with the production factor nature. But there are
no ready-to-use goods in nature; there are only commodities which must first be
converted, and work is needed for this conversion. Over time, people have learned
to multiply their power by using tools and machines. These production factors are
called capital. Unlike nature, capital is called a derivative or produced
production factor. Some economists argue that knowledge and information should
also be counted among the production factors.
Preliminary product
Preliminary products measure the value of the goods and services that are used,
transformed and processed in production.
Factors of production
Factors of production are resources used in the production of goods and services
in economics. Classical economics distinguishes between three factors: land,
labour and capital.
These were codified originally in the analyses of Adam Smith, 1776, David
Ricardo, 1817, and the later contributions of Karl Marx and John Stuart Mill as
part of one of the first coherent theories of production in political economy. Marx
refers in “Das Kapital” to the three factors of production as the "holy trinity" of
political economy.
In the classical analysis, working capital was generally viewed as being a stock of
physical items such as tools, buildings and machinery. This view was explicitly
rejected by Marx. Modern economics has become increasingly uncertain about
how to define and theorise capital.
With the emergence of the knowledge economy, more modern analysis often
distinguishes this physical capital from other forms of capital such as "human
capital" and intellectual capital, which require intangible, management-orientated
techniques such as the Balanced Scorecard, Risk Management, Business Process
Reengineering, Knowledge Management, and Intellectual Capital Management.
Prior to the Information Age the land, labour, and capital were used to create
substantial wealth due to their scarcity. Following the Information Age (circa
1971-1991), and the Knowledge Age (circa 1991 to 2002) and the current
Intangible Economy (circa 2002+) the primary factors of production today are
intangible. These factors of production are knowledge, collaboration, process-
engagement, and time quality. According to economic theory, a "factor of
production" is used to create value and economic performance. As the four factors
of production today are all intangible, the current economic age is called the
Intangible Economy. Intangible factors of production are subject to network
effects and to contrary economic laws such as the law of increasing returns. It is
therefore important to differentiate between conventional (tangible) economics
and intangible economics when discussing issues related to factors of production,
which change according to the economic era that society is experiencing. For
example, land was a key factor of production in the Agricultural Age.
Some economists mention enterprise, entrepreneurship, individual capital or just
"leadership" as a fourth factor. However, this seems to be a form of labor or
"human capital." When differentiated, the payment for this factor of production is
called profit. This is when entrepreneurs think of ideas, organise the other three
factors of production, and take risks with their own money and the financial
capital of others.
In a market economy, entrepreneurs combine land, labor, and capital to make a
profit. In a planned economy, central planners decide how land, labor, and capital
should be used to provide for maximum benefit for all citizens.
The classical theory, further developed, remains useful to the present day as a
basis of microeconomics. Some further terms that deal with factors of production
are as follows:
Entrepreneurs are people who organise other productive resources to make goods
and services. Economists regard entrepreneurs as a specialist form of labour
input. The success and/or failure of a business often depends on the quality of
entrepreneurship.
Capital has many meanings, including the finance raised to operate a business.
Normally, though, capital means investment in goods that can produce other goods
in the future. It can also refer to machines, roads, factories, schools, and
office buildings which humans have produced in order to produce other goods and
services. Investment is important if the economy is to achieve economic growth in
the future.
Human Capital is the quality of labour resources which can be improved through
investments, education, and training.
Fixed Capital: this includes machinery, work plants, equipment, new technology,
factories, buildings, and goods that are designed to increase the productive
potential of the economy for future years.
Working Capital: this includes the stocks of finished and semi-finished goods that
will be economically consumed in the near future or will be made into a finished
consumer good in the near future.
Human capital
Human capital is a way of defining and categorizing people's skills and abilities as
used in employment and as they otherwise contribute to the economy. Many early
economic theories refer to it simply as labour, one of three factors of production,
and consider it to be a commodity: homogeneous and easily interchangeable.
Other conceptions of labour are more sophisticated.
In some way, the idea of "human capital" is similar to Karl Marx's concept of
labour power: to him, under capitalism workers had to sell their labour-power in
order to receive income (wages and salaries). But long before Mincer or Becker
wrote, Marx pointed to "two disagreeably frustrating facts" with theories that
equate wages or salaries with the interest on human capital.
Resource-based view
Barney (1991) formalised this theory, although it was Wernerfelt (1984) who introduced
the idea of resource position barriers being roughly analogous to entry barriers in
the positioning school.
The key points of the theory are that a firm's resources must be:
Valuable - they enable a firm to implement strategies that improve its efficiency
and effectiveness
In economics, the cost-of-production theory of value is the belief that the value of
an object is decided by the resources that went into making it. The cost can be
composed of any of the factors of production including labour, capital, land, or
technology.
Two of the most common cost-of-production theories are the medieval just price
theory and the classical labour theory of value. The labour theory of value, which
interprets labour-value as the determinant of prices, was first developed by Adam
Smith and later expanded by David Ricardo and Karl Marx. Most classical
economists (as well as nearly all Marxists) subscribe to it. However, Marx's
theory is not a true cost-of-production theory since the value of a commodity
contains a component of surplus value unrelated to the physical cost of producing
it. The magnitude of this surplus value may be unrelated to production-costs. A
somewhat different theory of cost-determined prices is provided by the "neo-
Ricardian school" of Piero Sraffa and his followers.
The most common counterpoint to this is the marginal theory of value which
asserts that economic value is set by the consumer's marginal utility. This is the
view most commonly held by the majority of contemporary mainstream
economists.
The Polish economist Michał Kalecki distinguished between sectors with "cost-
determined prices" (such as manufacturing and services) and those with "demand-
determined prices" (such as agriculture and raw material extraction).
In microeconomics, production is the act of making things, in particular the act of
making products that will be traded or sold commercially. Production decisions
concentrate on what goods to produce, how to produce them, the costs of
producing them, and optimising the mix of resource inputs used in their
production. This production information can then be combined with market
information (like demand and marginal revenue) to determine the quantity of
products to produce and the optimum pricing.
A production process can be defined as any activity that increases the similarity
between the pattern of demand for goods, and the quantity, form, and distribution
of these goods available to the market place.
IX. General equilibrium
General Equilibrium (linear) supply and demand curves. This diagram is based on
Walras' analysis.
Introduction
In a market system, the prices and production of all goods, including the price of
money and interest, are interrelated. A change in the price of one good, say bread,
may affect another price, for example, the wages of bakers. If bakers differ in
tastes from others, the demand for bread might be affected by a change in bakers'
wages, with a consequent effect on the price of bread. Calculating the equilibrium
price of just one good, in theory, requires an analysis that accounts for all of the
millions of different goods that are available.
Introduction in general equilibrium theory / The Walras-Cassel system
As outlined by Walras, the basics of the model are the following: individuals are
endowed with factors and demand produced goods; firms demand factors and
produce goods with a fixed coefficients production technology. General
equilibrium is defined as a set of factor prices and output prices such that the
relevant quantities demanded and supplied in each market are equal to each other,
i.e. both output and factor markets clear. Competition ensures that price equals
the cost of production for every production process in operation.
We face two further sets of equations which form the heart of the Walras-Cassel
system: one set makes factor supply equal to factor demand by firms ("factor
market clearing") and is written as v = B′x; a second set says that the output
price equals cost of production for each production process ("perfect
competition") and is written p = Bw. We shall refer to both of these as the linear
production conditions of the Walras-Cassel model. It is important to note that
these are not functions, but rather equilibrium conditions.
The four sets of equations we have outlined connect the entire system together in
equilibrium. Their functions can be outlined as follows:
(i) D(p, w) connects output prices and output quantities;
(ii) F(p, w) connects factor prices and factor quantities;
(iii) v = B′x connects output quantities and factor quantities;
(iv) p = Bw connects output prices and factor prices.
It is important to note how the factor supply and output demand decisions of
households sandwich this entire problem, with the linear production conditions
sitting passively in the middle. Fixing any one of the four items (w, p, x or v) at
its equilibrium value, we can determine the rest [although to do so, we must
assume that output demand and factor supply functions are invertible: e.g. given
v, we can determine what w is by the factor supply function F(p, w), and given x,
we can determine what p is by the output demand function D(p, w); naturally, this
is a very strong assumption and not a very clear one in the manner it is stated].
Equivalently, suppose we start from equilibrium output quantities, x*. Given
x*, we get p* by the output demand function D(p, w) and v* by the factor market
clearing condition v = B′x. In their turn, p* gives us w* by the competition
condition p = Bw, and v* gives us w* by the factor supply function F(p, w). For
equilibrium, we need both of these w* to be the same. We go through
analogous stories when we start with equilibrium factor quantities, v*, or
equilibrium factor prices, w*.
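The logic can be illustrated with a one-good, one-factor Walras-Cassel toy economy; the functional forms and the coefficient below are illustrative assumptions, not part of the text:

```python
# One good, one factor, fixed coefficient b: v = b*x and p = b*w
b = 2.0                               # units of factor per unit of output
def demand(p): return 100 - p         # output demand D(p)   (illustrative)
def factor_supply(w): return 10 + w   # factor supply F(w)   (illustrative)

# Clearing requires b*D(b*w) = F(w); solve for w by bisection,
# since excess factor demand falls as w rises.
lo, hi = 0.0, 100.0
for _ in range(100):
    w = (lo + hi) / 2
    excess = b * demand(b * w) - factor_supply(w)
    lo, hi = (w, hi) if excess > 0 else (lo, w)

p, x, v = b * w, demand(b * w), factor_supply(w)
assert abs(b * x - v) < 1e-6          # both markets clear at (p, w, x, v)
```

Here the equilibrium is w = 38 and p = 76, with 24 units of output absorbing the 48 units of factor supplied; fixing any one of the four magnitudes pins down the other three, just as described above.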
Basic questions
Basic questions in general equilibrium analysis are concerned with the conditions
under which an equilibrium will be efficient, which efficient equilibria can be
achieved, when an equilibrium is guaranteed to exist and when the equilibrium
will be unique and stable.
First Fundamental Theorem of Welfare Economics
The First Theorem states that any competitive equilibrium is Pareto efficient.
The technical condition for the result to hold is the fairly weak one that consumer
preferences are locally nonsatiated, which rules out a situation where all
commodities are "bads". Additional implicit assumptions are that consumers are
rational, markets are complete, there are no externalities and information is
perfect. While these assumptions are certainly unrealistic, what the theorem
basically tells us is that the sources of inefficiency found in the real world are not
due to the decentralized nature of the market system, but rather have their sources
elsewhere.
Second Fundamental Theorem of Welfare Economics
While every equilibrium is efficient, it is clearly not true that every efficient
allocation of resources will be an equilibrium. However, the Second Theorem
states that every efficient allocation can be supported by some set of prices. In
other words all that is required to reach a particular outcome is a redistribution of
initial endowments of the agents after which the market can be left alone to do its
work. This suggests that the issues of efficiency and equity can be separated and
need not involve a trade off. However, the conditions for the Second Theorem are
stronger than those for the First, as now we need consumers' preferences to be
convex (convexity roughly corresponds to the idea of diminishing marginal
utility, or to preferences where "averages are better than extrema").
Existence
Even though every equilibrium is efficient, neither of the above two theorems say
anything about the equilibrium existing in the first place. To guarantee that an
equilibrium exists we once again need consumer preferences to be convex
(although with enough consumers this assumption can be relaxed both for
existence and the Second Welfare Theorem). Similarly, but less plausibly,
feasible production sets must be convex, excluding the possibility of economies of
scale.
Proofs of the existence of equilibrium generally rely on fixed point theorems such
as Brouwer fixed point theorem or its generalization, the Kakutani fixed point
theorem. In fact, one can quickly pass from a general theorem on the existence of
equilibrium to Brouwer’s fixed point theorem. For this reason many mathematical
economists consider proving existence a deeper result than proving the two
Fundamental Theorems.
Uniqueness
In general there may be more than one equilibrium. The Sonnenschein-Mantel-
Debreu results show that only mild restrictions can be placed on aggregate excess
demand functions, and that these (Continuity, Homogeneity of degree zero,
Walras' law, and boundary behavior when prices are near zero) are not enough to
guarantee uniqueness.
There has been much research on conditions when the equilibrium will be unique,
or which at least will limit the number of equilibria. One result states that under
mild assumptions the number of equilibria will be finite (see Regular economy)
and odd (see Index Theorem). Furthermore if an economy as a whole, as
characterized by an aggregate excess demand function, has the revealed
preference property (which is a much stronger condition than revealed preferences
for a single individual) or the gross substitute property then likewise the
equilibrium will be unique. All methods of establishing uniqueness can be thought
of as establishing that each equilibrium has the same positive local index, in
which case by the index theorem there can be but one such equilibrium.
Determinacy
Given that equilibria may not be unique it is of some interest whether any
particular equilibrium is at least locally unique. This means that comparative
statics can be applied as long as the shocks to the system are not too large. As
stated above in a Regular economy equilibria will be finite, hence locally unique.
One reassuring result, due to Debreu, is that "most" economies are regular.
However recent work by Michael Mandler (1999) has challenged this claim. The
Arrow-Debreu model is neutral between models of production functions as
continuously differentiable and as formed from (linear combinations of) fixed
coefficient processes. Mandler accepts that, under either model of production, the
initial endowments will not be consistent with a continuum of equilibria, except
for a set of Lebesgue measure zero. However, endowments change with time in
the model and this evolution of endowments is determined by the decisions of
agents (e.g., firms) in the model. Agents in the model have an interest in equilibria
being indeterminate:
"Indeterminacy, moreover, is not just a technical nuisance; it undermines the
price-taking assumption of competitive models. Since arbitrary small
manipulations of factor supplies can dramatically increase a factor's price, factor
owners will not take prices to be parametric." (Mandler 1999, p. 17)
When technology is modeled by (linear combinations of) fixed coefficient
processes, optimizing agents will drive endowments to be such that a continuum
of equilibria exist:
"The endowments where indeterminacy occurs systematically arise through time
and therefore cannot be dismissed; the Arrow-Debreu model is thus fully subject
to the dilemmas of factor price theory." (Mandler 1999, p. 19)
Critics of the general equilibrium approach have questioned its practical
applicability based on the possibility of non-uniqueness of equilibria. Supporters
have pointed out that this aspect is in fact a reflection of the complexity of the real
world and hence an attractive realistic feature of the model.
Stability
In a typical general equilibrium model the prices that prevail "when the dust
settles" are simply those that coordinate the demands of various consumers for
various goods. But this raises the question of how these prices and allocations
have been arrived at and whether any (temporary) shock to the economy will
cause it to converge back to the same outcome that prevailed before the shock.
This is the question of stability of the equilibrium, and it can be readily seen that
it is related to the question of uniqueness. If there are multiple equilibria then
some of them will be unstable. Then, if an equilibrium is unstable and there is a
shock, the economy will wind up at a different set of allocations and prices once
the converging process completes. However, stability depends not only on the
number of equilibria but also on the type of process that guides price changes
(for a specific type of price adjustment process see Tatonnement). Consequently
some researchers have focused on plausible adjustment processes that will
guarantee system stability, that is, prices and allocations always converging to
some equilibrium, though when there exists more than one, which equilibrium it
is will depend on where one begins.
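A minimal tatonnement sketch for a single market (with illustrative linear demand and supply): the auctioneer raises the price when there is excess demand and lowers it when there is excess supply.

```python
def demand(p): return 50 - 2 * p      # illustrative demand
def supply(p): return 10 + 3 * p      # illustrative supply

p, step = 1.0, 0.05                   # starting price and adjustment speed
for _ in range(2000):
    p += step * (demand(p) - supply(p))   # price follows excess demand

assert abs(demand(p) - supply(p)) < 1e-6  # converged to the market-clearing price
```

With a small enough adjustment speed this process converges here; with several interdependent markets, or a larger step, it need not, which is exactly the stability question discussed above.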
Research building on the Arrow-Debreu model has revealed some problems with
the model. The Sonnenschein-Mantel-Debreu results show that, essentially, any
restrictions on the shape of excess demand functions are stringent. Some think
this implies that the Arrow-Debreu model lacks empirical content. At any rate,
Arrow-Debreu equilibria cannot be expected to be unique, or stable.
A model organized around the tatonnement process has been said to be a model of
a centrally planned economy, not a decentralized market economy. Some research
has tried, not very successfully, to develop general equilibrium models with other
processes. In particular, some economists have developed models in which agents
can trade at out-of-equilibrium prices and such trades can affect the equilibria to
which the economy tends. Particularly noteworthy are the Hahn process, the
Edgeworth process, and the Fisher process.
Roy Radner found that in order for equilibria to exist in such models, agents (e.g.,
firms and consumers) must have unlimited computational capabilities.
Until the 1970s, general equilibrium analysis remained theoretical. However, with
advances in computing power and the development of input-output tables, it
became possible to model national economies, or even the world economy, and
solve for general equilibrium prices and quantities under a range of assumptions.
References
Arrow, K., Hahn, F. H.: "General Competitive Analysis", San Francisco, 1971
Arrow K. J., G. Debreu: "The Existence of an Equilibrium for a Competitive Economy" Econometrica, vol.
XXII, 265-90, 1954
Bergman, U. Michael: Properties of Production Functions, University of Copenhagen, 2005
Debreu, G.: "Theory of Value", New York, 1956
Demmler, Horst: Grundzüge der Mikroökonomie, 4th edition, München, 2000
Eatwell, John: "Walras's Theory of Capital", The New Palgrave: A Dictionary of Economics (Edited by
Eatwell, J., Milgate, M., and Newman, P.), London, 1989
Feess, Eberhard: Mikroökonomie, 4th edition, Marburg, 2004
Fishburn, P.C.: Retrospective on the utility theory of Von Neumann and Morgenstern, in Journal of Risk and
Uncertainty 2, 1989, p. 127-158
Frank, Robert H.: Microeconomics and Behaviour, McGraw-Hill, 2003
Fraser, Iain: The Cobb-Douglas Production Function: An Antipodean Defence?, Economic Issues, 2002
Garvey, E.G.: The theory of the firm and catholic social teaching, in Journal of markets and morality, Vol. 6,
No.2, 2003, page 525-540
Georgescu-Roegen, Nicholas: "Methods in Economic Science", "Journal of Economic Issues", V. 13, N. 2
(June): 317-328, 1979
Grandmont, J. M.: "Temporary General Equilibrium Theory", "Econometrica", V. 45, N. 3 (Apr.): 535-572,
1977
Jaffe, William: "Walras' Theory of Capital Formation in the Framework of his Theory of General
Equilibrium", Economie Appliquee, V. 6 (Apr.-Sep.): 289-317, 1953
Knight, F.H.: Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company 1921
Lorenz, Wilhelm: MikroOnline, www.mikroo.de, Wernigerode, 2006
Cabral, Luís M.B.: Introduction to Industrial Organization, Massachusetts Institute of Technology Press, 2000,
p. 84-85
Mandler, Michael: Dilemmas in Economic Theory: Persisting Foundational Problems of Microeconomics,
Oxford, 1999
Mankiw, G.N.: Principles of Economics, Second Edition, New York 1998
Mas-Colell, A., Whinston, M.D., Green, J.R.: Microeconomic Theory, Oxford University Press, 2005
Mongin, P.: Expected Utility Theory, in: J.Davis, W. Hands and U. Maki, Edward Elgar (publisher):
Handbook of Economic Methodology, London 1997
Peters, M.: Expected Utility and Risk Aversion, St. Gallen 2005
Petri, Fabio: General Equilibrium, Capital, and Macroeconomics: A Key to Recent Controversies in
Equilibrium Theory, London, 2004
Pindyck, Robert S. / Rubinfeld, David L.: Mikroökonomie, 6th edition, München, 2005
Rabin, M.: Diminishing marginal utility of wealth cannot explain risk aversion, Berkeley 2000
Sandilands, R. and Hillary, D.: The Cobb-Douglas Production Function, Oct. 1999
Schumann, Jochen et al.: Grundzüge der mikroökonomischen Theorie, Heidelberg, 1999
Varian, H.R.: Microeconomic Analysis, New York 1984
Varian, H. R.: Grundzüge der Mikroökonomik, 3rd edition, München, 1995
Yaari, M.: Some measures of risk aversion and their uses, in Journal of Economic Theory, Vol.1, p. 315-329