Quantitative Methods For Finance
Composite functions
Types of functions
Foundations of Calculus 2
Warm-up: a few definitions
FUNCTIONS ON ℝ: A function f of a real variable x with domain D is a rule that assigns a unique real number f(x) to each number x in D
Basic geometric properties of a function
The basic geometric properties of a function are whether it is
increasing or decreasing and the location of its local and
global minima and maxima (if any)
A function f is increasing if x2 > x1 implies f(x2) > f(x1), while a function f is decreasing if x2 > x1 implies f(x2) < f(x1)
Figure 1
The function depicted in Figure 1 is decreasing over (−∞, 0] and increasing over [0, +∞)
The point where the function turns
from decreasing to increasing is a
(global) minimum for this function,
in this case zero
Basic geometric properties of a function
The function depicted in Figure 2 is increasing over (−∞, 0] and decreasing over [0, +∞)
The point where the function turns from increasing to decreasing is a (global) maximum for this function, here zero
If a function f changes from decreasing to increasing at x0, the point (x0, f(x0)) is a local minimum of the function f; if f(x0) ≤ f(x) for all x then the point is a global minimum
If a function f changes from increasing to decreasing at x0, the point (x0, f(x0)) is a local maximum of the function f; if f(x0) ≥ f(x) for all x then the point is a global maximum
We will come back to these notions when we speak of
optimization
Different functions
The simplest functions are the monomials, those functions which can be written as y = ax^k for some number a (the coefficient) and some positive integer k, where k is said to be the degree of the monomial
For instance, y = 6x² is a monomial of the second order
A polynomial is a function formed by adding up different monomials; the degree of the polynomial is the highest degree of any monomial that appears in the function
For instance, y = 6x³ + 2x is a polynomial of the third order
Rational functions are ratios of two polynomials, e.g., y = P(x)/Q(x) for polynomials P and Q
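Polynomial evaluation can be sketched in a few lines of Python; here Horner's rule is applied to the third-order example y = 6x³ + 2x (coefficients reconstructed from the example above):

```python
def polyval(coeffs, x):
    """Evaluate a polynomial via Horner's rule.

    coeffs lists the coefficients from the highest degree down,
    e.g. [6, 0, 2, 0] encodes y = 6x^3 + 2x.
    """
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

print(polyval([6, 0, 2, 0], 2))  # 6*8 + 2*2 = 52.0
```

Horner's rule avoids computing explicit powers, which is why it is the standard way to evaluate polynomials numerically.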
Examples of popular functions (1/2)
Type | Description | Example
Constant function (polynomial of degree zero) | y = a | y = 5
Linear function (polynomial of degree one) | Straight line: y = ax + b | y = 5x + 3
Quadratic function (polynomial of degree two) | Parabola: y = ax² + bx + c | y = x² + x + 2
Rational function | Hyperbola: y = a/x | y = 1/x
Power function | y = ax^k (monomial of degree k) | y = x^(1/2)
Examples of popular functions (2/2): [figure: graphs of power functions omitted]
Foundations of Calculus
Limits: Definition
Limits: an example
Continuity: Definition
Limits: Definition
Note: we do not need the function to be defined at x = a in order for the limit lim_{x→a} f(x) to exist
For instance, sin(x)/x is not defined at x = 0, yet lim_{x→0} sin(x)/x = 1
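This can be probed numerically; a small Python sketch, using sin(x)/x as the classic illustration:

```python
import math

# sin(x)/x is undefined at x = 0, yet the ratio approaches 1
# as x shrinks towards 0.
for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x)
```

Each successive ratio gets closer to 1 even though substituting x = 0 directly would divide by zero.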
Rules for Limits
If lim_{x→a} f(x) = A and lim_{x→a} g(x) = B, then:
lim_{x→a} [f(x) ± g(x)] = A ± B
lim_{x→a} [f(x) · g(x)] = A · B
lim_{x→a} f(x)/g(x) = A/B, provided B ≠ 0
Continuity: a definition
There are several (equivalent) definitions of continuity. A “naïve” definition of continuity states that a function is continuous if an approximate knowledge of x is sufficient to approximate f(x): for every ε > 0 there is a δ > 0 such that
|f(x) − f(a)| < ε
whenever
|x − a| < δ
Continuity: a definition
Alternative definition: f is continuous at a if
lim_{x→a} f(x) = f(a)
Continuity: a definition
Example of a function
that is NOT continuous
Foundations of Calculus
Interpretation of derivatives
Differentiability
The slope of functions
Start from a geometric
interpretation: when we
study the graph of a
function we would like
to have a measure of the
steepness of the graph at
a point or several points
If the function is linear, this
is easy: the slope is given by
the coefficient that multiplies x
However, this is less trivial for a non-linear function such as the one depicted above
We can define the steepness of a curve at a particular point
as the slope of the straight line that just touches the curve at
that point, i.e., that is tangent to the curve at that point
(point P in the figure)
The slope of functions
The slope of the tangent to the graph at point P, with coordinates (a, f(a)), is called the derivative of f at a and is denoted by f’(a)
How can we find the slope of the tangent of f at the point P,
with coordinates (a, f(a))?
Consider a point Q that is also on the graph of f and is close
to P, i.e., suppose that the x-coordinate of Q is a+h where h is
a small number different from zero
The slope of functions
When Q moves towards P (Q tends to P) along the graph of f, the secant PQ tends to the tangent to the graph at P
The x-coordinate a + h must tend to a, so that h must tend to zero
Therefore, we define the slope of the tangent to the graph at P as the number to which the ratio [f(a + h) − f(a)]/h approaches when h goes to zero
Hence, the derivative of a function f at a point a of its domain is
f’(a) = lim_{h→0} [f(a + h) − f(a)]/h
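A numerical sketch of this definition, using f(x) = x² at a = 3 (an arbitrary illustrative choice), where the true slope is f’(3) = 6:

```python
def f(x):
    return x ** 2

a = 3.0
# The difference quotient (f(a + h) - f(a)) / h approaches f'(a) = 6
# as h shrinks towards zero.
for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, (f(a + h) - f(a)) / h)
```

The printed quotients drift from 6.1 down towards 6 as h shrinks, mirroring the secant-to-tangent argument above.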
Interpretation of derivatives
In economics, other (non geometric) interpretations of the
derivative are often more useful
It is a rate of change!
If C(x) is a function that expresses the cost of producing x units of a good, then we interpret C’(x) as the marginal cost at x
C’(x) = lim_{h→0} [C(x + h) − C(x)]/h, i.e., the cost that we face when we make a “small” increment to the production level
Note: do not forget that the slope of a non-linear function is NOT CONSTANT! The derivative depends on the point where you compute it (meaning that if the cost function is non-linear, increasing the production from 10,000 to 10,001 units is not the same as increasing the production from 50,000 to 50,001)
Rules for computing derivatives
Theorem (for a proof see BS, chapter 2):
For any positive integer k, the derivative of y = x^k at x is
y’ = k x^(k−1)
CHAIN RULE: if y = f(g(x)), then y’ = f’(g(x)) · g’(x)
Differentiability
A function f is differentiable at x0 if the limit
lim_{h→0} [f(x0 + h) − f(x0)]/h
exists and is the same for each sequence h which converges to 0.
If a function is differentiable at every x0 in its domain we say that the function is differentiable
Example of a function that is continuous but NOT differentiable: no derivative at x0 = 0
Higher order derivatives
The derivative of a function f is often called the first derivative
of f. If f’ is also differentiable, we can differentiate f’ to get the
second derivative, f’’. The derivative of f’’ (if it exists) is called
the third derivative of f and so on. Typically, for our applications first and second derivatives are enough, but higher-order derivatives may be computed
Foundations of Calculus
Linear and polynomial
approximations
In this video
Polynomial approximations
Linear approximation and differentials
Linear functions are very easy to manipulate: therefore it is
natural to try to find a “linear approximation” to a given
function
Consider a function f(x) that is differentiable at x = a
The tangent to the graph at (a, f(a)) follows the equation
y = f(a) + f’(a)(x − a) for x close to a
Therefore, a linear approximation of the function f around a is given by
f(x) ≈ f(a) + f’(a)(x − a) for x close to a
or Δy ≈ f’(a)Δx, provided that Δx is small
Linear approximation and differentials
The less the slope of f varies around a, the more precise is the approximation; in addition, the larger Δx is, the less precise the approximation
We shall see this point in more depth with an example: consider the function f(x) = ln(x)
Starting from ln 10 ≈ 2.3, compare the actual change in y given a certain change in x with its linear approximation, that is, Δy ≈ Δx/10
Linear approximation and differentials
Note: a change Δx = 100 is quite big if we are at x = 10, leading to a rather imprecise approximation (see previous slide)
But what if we are at x = 10000?
Starting from ln 10000 ≈ 9.21, compare the actual change in y given a certain change in x with its linear approximation, that is, Δy ≈ Δx/10000
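The two comparisons can be sketched in Python (Δx = 100 at both base points, matching the slides):

```python
import math

def actual_change(a, dx):
    # Exact change of y = ln(x) when x moves from a to a + dx
    return math.log(a + dx) - math.log(a)

def linear_approx(a, dx):
    # dy ~ f'(a) * dx, with f'(x) = 1/x for f(x) = ln(x)
    return dx / a

for a in (10, 10_000):
    dx = 100
    print(a, actual_change(a, dx), linear_approx(a, dx))
# At a = 10 the approximation (10.0) is far from the truth (~2.4);
# at a = 10_000 it is within about half a percent of the truth.
```

The contrast makes the slide's point concrete: the same Δx is "big" relative to x = 10 but "small" relative to x = 10000.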
Polynomial Approximations
If approximations provided by linear functions are not sufficiently accurate, it is natural to try quadratic approximations or approximations by polynomials of higher order
The quadratic approximation to f(x) about x = a is
f(x) ≈ f(a) + f’(a)(x − a) + (1/2) f’’(a)(x − a)²
More generally, we can approximate f(x) about x = a by an nth-degree polynomial (the nth-order Taylor polynomial)
Polynomial Approximations
Consider a function f(x) and see what happens when we try to approximate it around x = 2 with linear vs. quadratic approximations:
Foundations of Calculus
Natural exponential functions
Consider the following function:
f(n) = (1 + 1/n)^n
Natural exponential functions
Alternatively, if the interest is compounded semi-annually, we will get
(1 + 50%)(1 + 50%) = (1 + 1/2)²
More generally, with n compounding periods per year,
f(n) = (1 + 1/n)^n
which tends to e ≈ 2.71828 as n → ∞
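The convergence towards e can be seen directly:

```python
import math

# (1 + 1/n)^n rises towards e = 2.71828... as compounding
# becomes more and more frequent.
for n in (1, 2, 10, 100, 1_000_000):
    print(n, (1 + 1 / n) ** n)
print(math.e)
```

With n = 2 (semi-annual compounding) the value is exactly 2.25, and by n = 1,000,000 it agrees with e to about six decimal places.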
Natural logarithmic functions
The inverse of the natural exponential function is the natural logarithmic function, y = ln(x)
Natural logarithmic functions
Sometimes economists prefer to represent (and study) a function y = f(x) in log-log terms
This means that they apply a change to the variables such that Y = ln(y) and X = ln(x); therefore
dY/dX = (dy/dx) · (x/y) = elasticity of y with respect to x
Foundations of Linear Algebra
Transpose of a matrix
Important definitions
Transpose of a matrix
The transpose of a matrix A, denoted by A’ or Aᵀ, is obtained by interchanging the rows and the columns of the matrix, e.g.,

A = | 6 0 4 |        A’ = | 6 2 |
    | 2 9 2 |             | 0 9 |
                          | 4 2 |

Note that if A is n x m then A’ is m x n
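In Python, `zip(*A)` reads off the columns of a nested-list matrix, so transposing the example above takes one line:

```python
A = [[6, 0, 4],
     [2, 9, 2]]

# zip(*A) yields the columns of A, i.e. the rows of the transpose
A_t = [list(col) for col in zip(*A)]
print(A_t)  # [[6, 2], [0, 9], [4, 2]]
```

Note how the 2 x 3 matrix becomes 3 x 2, as the slide states.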
Important definitions
An identity matrix (typically denoted by I) is a square matrix with 1 on its principal diagonal and 0 everywhere else, e.g.,

    | 1 0 0 |
I = | 0 1 0 |
    | 0 0 1 |

The identity matrix is the matrix counterpart of the number 1, because AI = A and IA = A, where A is a generic n x m matrix

     | 1 |        | 0 |        | 0 |
e1 = | 0 | , e2 = | 1 | , e3 = | 0 |
     | 0 |        | 0 |        | 1 |

The vectors that contain one element equal to 1 and the other elements equal to zero are called unit vectors
The N unit vectors are a basis of ℝᴺ
The rank of a matrix tells us the maximum number of linearly
independent rows or columns of the matrix
If the matrix is n x m the maximum rank is min(n, m)
Foundations of Linear Algebra
Matrix operations
In this video …
Matrix operations
Checking singularity
4 x | 2 1 | = | 8  4  |
    | 5 3 |   | 20 12 |
Matrix Operations
In order to be conformable for addition, two matrices must have the same size
ADDITION OF TWO MATRICES

| a11 … a1m |   | b11 … b1m |   | a11+b11 … a1m+b1m |
| …   …  …  | + | …   …  …  | = | …       …      …  |
| an1 … anm |   | bn1 … bnm |   | an1+bn1 … anm+bnm |

For instance,

| 3 2 |   | 7  4  |   | 10 6 |
| 5 1 | + | -2 -1 | = | 3  0 |
Matrix Operations
In order to be conformable for subtraction, two matrices must have the same size
SUBTRACTION OF TWO MATRICES

| a11 … a1m |   | b11 … b1m |   | a11-b11 … a1m-b1m |
| …   …  …  | - | …   …  …  | = | …       …      …  |
| an1 … anm |   | bn1 … bnm |   | an1-bn1 … anm-bnm |

For instance,

| 3 2 |   | 7  4  |   | -4 -2 |
| 5 1 | - | -2 -1 | = |  7  2 |
Matrix Operations
In order to be conformable for multiplication, if the size of the first matrix is n x m, the size of the second matrix must be m x h; the result of the multiplication has size n x h
MULTIPLICATION OF MATRICES

| a11 … a1m |   | b11 … b1h |   | c11 … c1h |
| …   …  …  | x | …   …  …  | = | …   …  …  |
| an1 … anm |   | bm1 … bmh |   | cn1 … cnh |

LEAD MATRIX      LAG MATRIX

c11 is the result of the multiplication of the first row of the lead matrix “with” the first column of the lag matrix
…
cnh is the result of the multiplication of the n-th row of the lead matrix “with” the h-th column of the lag matrix
Matrix Operations
As an example of multiplication of matrices try
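A minimal Python sketch of the row-by-column rule (the two matrices reuse those from the addition example, an assumption, since the original worked example did not survive):

```python
def matmul(A, B):
    """Multiply an n x m matrix A by an m x h matrix B (result: n x h)."""
    n, m, h = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "matrices are not conformable"
    # Entry (i, j) is the i-th row of A "times" the j-th column of B
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(h)]
            for i in range(n)]

A = [[3, 2], [5, 1]]
B = [[7, 4], [-2, -1]]
print(matmul(A, B))  # [[17, 10], [33, 19]]
```

For instance the (1, 1) entry is 3·7 + 2·(−2) = 17, exactly the first-row, first-column product described above.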
Division: the inverse of a matrix
The inverse of a matrix A, denoted as A⁻¹, exists only if the matrix is a square matrix
The inverse of matrix A satisfies the conditions AA⁻¹ = I and A⁻¹A = I
Not every square matrix has an inverse (being a square matrix is a necessary but not sufficient condition); a square matrix with no inverse is said to be singular
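For the 2 x 2 case the inverse has a simple closed form, which also shows concretely how singularity arises (a sketch; larger matrices are better left to library routines):

```python
def inverse_2x2(A):
    """Invert a 2 x 2 matrix; raise ValueError when it is singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse exists")
    return [[d / det, -b / det],
            [-c / det, a / det]]

A = [[2, 1], [5, 3]]          # det = 2*3 - 1*5 = 1
print(inverse_2x2(A))         # [[3.0, -1.0], [-5.0, 2.0]]

try:
    inverse_2x2([[1, 2], [2, 4]])   # proportional rows -> det = 0
except ValueError as err:
    print(err)
```

A zero determinant is exactly the "square but not invertible" case mentioned above.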
Properties of the determinant
Some standard properties: det(A’) = det(A); det(AB) = det(A)·det(B); and a square matrix A is invertible (non-singular) if and only if det(A) ≠ 0
The inverse matrix in Excel
We do not want to go into the technicalities of matrix inversion (the interested reader may refer to Chang, Chapter 5)
However, again Excel can help us with the computation
The relevant function is MINVERSE
MINVERSE (C5:E7)
Foundations of Linear Algebra
Systems of equations
In this video …
Systems of linear equations
In general, an equation is said to be linear if it has the form
a1x1 + a2x2 + … + anxn = d
The coefficients a1, a2, …, an and d are fixed numbers and therefore they are called parameters
x1, x2, …, xn stand for variables
A system of linear equations consists of a set of linear
equations that must be solved simultaneously
Other ways to solve systems of linear equations
Another way to solve a system of linear equations (without
using matrix algebra) is Gaussian Elimination
Start from a linear system and obtain another linear system that has the same solution as the original one but that is simpler to solve
Let us see the following example
Other ways to solve systems of linear equations
Now we have
Now you eliminate the x from the third equation by subtracting the first equation
Other ways to solve systems of linear equations
We obtain a triangular system
Now the system is in triangular form and this is easy to solve using «back substitution», i.e., solve the last equation, then the second, then the first:
z = 1
y = 2
x = -3
We can check the solution using the matrix formulation

| 1 2 1 | | x |   | 2 |
| 2 6 1 | | y | = | 7 |
| 1 1 4 | | z |   | 3 |
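The elimination procedure can be sketched in Python; running it on the system above reproduces x = −3, y = 2, z = 1:

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination plus back substitution."""
    n = len(A)
    # Augmented matrix, so that row operations act on b as well
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: move the largest entry in this column up
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution: solve the last equation first
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

A = [[1, 2, 1], [2, 6, 1], [1, 1, 4]]
b = [2, 7, 3]
print(gauss_solve(A, b))  # [-3.0, 2.0, 1.0]
```

Pivoting is a numerical-stability refinement beyond the hand procedure on the slides, but the triangularization and back-substitution steps are the same.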
A Review of Optimization Methods
Introduction to optimization
In this video …
Optimization: Statement of the Problem
Optimization means maximizing (or minimizing) some objective function, y = f(x), by picking one or more appropriate values of the control (aka choice) variable x
The most common criterion of choice among alternatives in
economics (and finance) is the goal of maximizing something
(like the profit of a firm) or minimizing something (like costs
or risks)
For instance, think of a risk-averse investor who wants to
maximize a mean-variance objective by picking an
appropriate set of portfolio weights
Maxima and minima are also called extrema and may be
relative (or local, that is, they represent an extremum in the
neighborhood of the point only) or global
Key assumption: f(x) is n times continuously differentiable
Optimization: Statement of the Problem
In the leftmost graph, optimization is trivial: the function is a constant and as such all points are at the same time maxima and minima, in a relative sense
In the second plot, f(x) is monotonically increasing: there is no finite maximum if the set of nonnegative real numbers is the domain (as the picture implies)
The points E and F on the right are examples of relative (local) extrema
A function can well have several relative extrema, some of
which may be maxima while others are minima
Candidate points: The First-Derivative Test
As a first step we want to identify the “candidate” points to solve
the optimization problem, i.e., all the local extrema
Indeed, global extrema must also be local extrema or end points
of f(x) on its domain
• If we know all the relative maxima, it is necessary only to select the
largest of these and compare it with the end points in order to
determine the absolute maximum
Key Result 1 (First-Derivative Test): If a relative extremum of the
function occurs at x = x0, then either f'(x0) = 0, or f'(x0) does not
exist; this is a necessary condition (but NOT sufficient)
Candidate points: The First-Derivative Test (2/2)
Key Result 1 Qualified: If f’(x0) = 0 then the value of f(x0) will be:
(a) A relative maximum if the derivative f'(x) changes its sign
from > 0 to <0 from the immediate left of the point x0 to its
immediate right
(b) A relative minimum if f'(x) changes its sign from negative
to positive from the immediate left of x0 to its immediate right
(c) Neither a relative maximum nor a relative minimum if f'(x)
has the same sign on both the immediate left and right of point
x0 (inflection point)
NOTE: we are assuming that the function is continuous and possesses continuous derivatives => for smooth functions, relative extreme points can occur only when the first derivative has a zero value
[Figure: inflection point]
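Cases (a), (b), and (c) can be sketched numerically by checking the sign of f' on either side of the critical point; x², −x², and x³ (illustrative test functions, not from the slides) each land in a different case:

```python
def classify(fprime, x0, eps=1e-4):
    """Classify a critical point x0 by the sign change of f' around it."""
    left, right = fprime(x0 - eps), fprime(x0 + eps)
    if left > 0 > right:
        return "relative maximum"        # case (a): + to -
    if left < 0 < right:
        return "relative minimum"        # case (b): - to +
    return "neither (inflection point)"  # case (c): same sign

print(classify(lambda x: -2 * x, 0.0))      # f(x) = -x^2 -> relative maximum
print(classify(lambda x: 2 * x, 0.0))       # f(x) = x^2  -> relative minimum
print(classify(lambda x: 3 * x ** 2, 0.0))  # f(x) = x^3  -> neither
```

x³ is the standard example of a zero first derivative that is not an extremum: f'(x) = 3x² stays positive on both sides of 0.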
Concavity, Convexity, and Second-Order Derivatives
A strictly concave (convex) function is such that if we pick any pair of points M and N on the function and join them by a straight line, the line segment MN must lie entirely below (above) the curve, except at points M and N
[Figure: concave and convex functions, inflection point]
A Review of Optimization Methods
Partial derivatives
Hessian Matrix
Functions with more than one variable
We are now going to generalize the earlier results to optimization problems for functions of several variables, i.e., functions f: ℝⁿ → ℝ, y = f(x1, x2, …, xn)
In fact, functions from ℝⁿ to ℝ will be popping up very often in your future studies
For instance, the return of a portfolio is a linear function of the returns of the n assets that compose the portfolio:
Rp = w1R1 + w2R2 + … + wnRn
Another example is a utility function u(x1, x2, …, xn) of a bundle of consumption goods
However, we first need to generalize the concept of derivative
to the case of functions of several variables
This leads us to the introduction of partial derivatives and of Jacobian derivatives
Partial derivatives and the Jacobian
Definition: Let f: ℝⁿ → ℝ. Then for each variable xi at each point x⁰ = (x1⁰, x2⁰, …, xn⁰) in the domain of f, the partial derivative with respect to xi is
∂f/∂xi (x⁰) = lim_{h→0} [f(x1⁰, …, xi⁰ + h, …, xn⁰) − f(x⁰)]/h
if the limit exists. Only the ith variable changes, while the others stay constant
Second Order Derivatives and Hessians
If the n partial derivative functions of f are continuous functions at the point x⁰ in ℝⁿ we say that f is continuously differentiable at x⁰
If all the n partial derivatives ∂f/∂xi are themselves differentiable we can compute their partial derivatives:
∂²f/(∂xj∂xi) = ∂/∂xj (∂f/∂xi)
One Example
Consider the function f(x, y) = 3x²y² + 4xy³ + 7y
Let us compute the Hessian matrix; we already computed
∂f/∂x = 6xy² + 4y³ ,  ∂f/∂y = 6x²y + 12xy² + 7
∂²f/∂x² = 6y² ;  ∂²f/∂y² = 6x² + 24xy ;  ∂²f/∂x∂y = 12xy + 12y²
The Hessian matrix is

| 6y²           12xy + 12y² |
| 12xy + 12y²   6x² + 24xy  |
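The analytic second derivatives can be cross-checked with finite differences; the evaluation point (1, 2) below is an arbitrary choice:

```python
def f(x, y):
    return 3 * x**2 * y**2 + 4 * x * y**3 + 7 * y

def fxx(x, y, h=1e-4):
    # Central second difference approximates d2f/dx2
    return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2

def fxy(x, y, h=1e-4):
    # Mixed central difference approximates d2f/(dx dy)
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)

x, y = 1.0, 2.0
print(fxx(x, y), 6 * y**2)                # both close to 24
print(fxy(x, y), 12 * x * y + 12 * y**2)  # both close to 72
```

Agreement of the numeric and analytic values (24 and 72 at (1, 2)) confirms the Hessian entries above.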
Optimization: the case of n-variable functions
Now we are ready to generalize optimization to the case of n-
variable functions
The strategy remains looking for candidate points (relative
extrema) and then try to isolate global ones among them
x* is a critical point for f if it fulfills
∇f(x*) = 0,
which means that
∂f/∂xi (x*) = 0 for each i
Step 1: solve the system of first-order conditions, e.g., ∂f/∂x = 3x² − 3y = 0 together with ∂f/∂y = 0
Solving that is non-trivial and time consuming
You get three critical points:
A(0,0); B(·,·); C(·,·)
Step 2: compute the Hessian matrix at each critical point
A Review of Optimization Methods
Hints of Constrained Optimization
Up to this point, all control variables have been independent of each other: the decision made regarding one variable does not impinge upon the choices of the remaining variables
• E.g., a two-product firm can choose any value for Q1 and any Q2 it
wishes, without the two choices limiting each other
• If the firm in the example is somehow required to fulfill a restriction (e.g., a production quota) in the form of Q1 + Q2 = k, however, the independence between the choice variables will be lost
• The new optimum satisfying the production quota constitutes a constrained optimum, which, in general, may be expected to differ from the free optimum
Key Result: A constrained maximum can never exceed the free maximum
Hints of Constrained Optimization
In general, a constrained maximum can be expected to achieve a
lower value than the free maximum, although, by coincidence, the
two maxima may happen to have the same value
o Had we added another constraint intersecting the first constraint at a single point in the xy plane, the two constraints together would have restricted the domain to that single point
o Then the locating of the extremum would become a trivial matter
• In a meaningful problem, the number and the nature of the
constraints should be such as to restrict, but not eliminate, the
possibility of choice
o Generally, the number of constraints should be less than the
number of choice variables
Under C < N equality constraints, when we can write a sub-set of the choice variables as an explicit function of all the others, the former can be substituted out:
max f(x1, x2, …, xN) over (x1, x2, …, xN)
s.t. the C constraints expressing the last C choice variables as explicit functions of the first N − C
Hint: Lagrange Multiplier Method
becomes: max over (x1, x2, …, x_{N−C}) of the objective with the constrained variables substituted out, i.e., an unconstrained problem
However, the direct substitution method cannot be applied when the C constraints do not allow us to re-write the objective function in terms of N − C free control variables
• Even if some of the variables become implicit functions of others,
it would be complex to proceed because the objective would
become “highly composite”
In such cases, we often resort to the method of Lagrange
(undetermined) multipliers
• The goal is to convert a constrained extremum problem into a
form such that the first-order condition of the free extremum
problem can still be applied
• For instance, consider an objective function z = f(x, y) subject to the constraint g(x, y) = c, where c is a constant
Hint: Lagrange Multiplier Method
The Lagrangian problem is:
max_{x, y, λ} Z = f(x, y) − λ[g(x, y) − c]
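As a worked sketch (with a hypothetical objective f(x, y) = xy and constraint x + y = 6, not taken from the slides), the first-order conditions of the Lagrangian can be solved and verified directly:

```python
# L = x*y - lam * (x + y - 6)
# FOCs: dL/dx = y - lam = 0, dL/dy = x - lam = 0, dL/dlam = -(x + y - 6) = 0
# The first two give x = y = lam; the constraint then forces x = y = 3.
x = y = lam = 3.0

# All three first-order conditions hold at the candidate point
assert y - lam == 0 and x - lam == 0 and x + y - 6 == 0

# The candidate beats other feasible points on the constraint x + y = 6
assert all(t * (6 - t) <= x * y for t in [0, 1, 2, 4, 5, 6])
print(x * y)  # constrained maximum value: 9.0
```

Note how treating λ as an extra choice variable lets the free-extremum first-order condition recover the constrained optimum, which is exactly the point of the multiplier method.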
Elementary choice under uncertainty
- dominance
Podcast 15: Elementary choice under uncertainty - mean-variance criterion
Choice under uncertainty: mean-variance (dominance)
A security MV-dominates another security if it is characterized by a higher expectation and by a lower variance of payoffs
When this is the case, the best-known approach at this point consists of summarizing the distributions of asset returns through their mean and variance:
Podcast 16: Preference representation theorem and its meaning
Utility-Based Choice Under Certainty
Modern microeconomic theory describes individual behavior as the
result of a process of optimization under constraints
o The objective is determined by individual preferences
o Constraints depend on an investor’s wealth and on market prices
To develop such a rational theory of choice under certainty, we postulate the existence of a preference relation, represented by the symbol ≿
Utility-Based Choice Under Certainty
Under the axioms of choice, there exists a continuous, time-invariant, real-valued ordinal utility function u(·) that represents the preferences
Transitivity: for bundles a, b, and c, if a ≿ b and b ≿ c, then a ≿ c
Utility-Based Choice Under Certainty
Given u(·) and a monotone increasing transformation v(·), the function
v(u(·)) represents the same preferences as the original u(·)
o Different investors will be characterized by heterogeneous preferences and as such will express different utility functions, as identified by heterogeneous shapes and features of their u(·) functions
o However, because a ≿ b if and only if u(a) ≥ u(b), any monotone increasing transformation v(·) will be such that v(u(a)) ≥ v(u(b)); because a monotone increasing v(·) cannot change the ranking induced by u(·), the function v(u(·)) represents the same preferences
o E.g., if u(a) ≥ u(b), then (u(a))³ ≥ (u(b))³
The expected utility theorem
Podcast 17: Expected utility theorem
Utility-Based Choice Under Uncertainty
o Ranking vectors of monetary payoffs involves more than pure elements
of taste or preferences
o E.g., when selecting between some stock A that pays out well during recessions and poorly during expansions, and some stock B that pays out according to an opposite pattern, it is essential to forecast the probabilities of recessions and expansions
Disentangling pure preferences from probability assessments is a
complex problem that simplifies to a manageable maximization
problem only under special assumptions, that is when the expected
utility theorem (EUT) applies
Under the EUT, an investor's ranking over assets with uncertain
monetary payoffs may be represented by an index combining, in the
most elementary way (i.e., linearly):
a preference ordering on the state-specific payoffs
the state probabilities associated with these payoffs
The EUT simplifies the complex interaction between probabilities and preferences over payoffs in a linear way, i.e., by a simple sum of products
The Expected Utility Theorem
Under the assumptions of the EUT, one ranks assets/securities on the
basis of the expectation of the utility of their payoffs across states
Under the six axioms specified below, there exists a cardinal,
continuous, time-invariant, real-valued Von Neumann-Morgenstern
(VNM) felicity function of money U(·), such that for any two
lotteries/gambles/securities (i.e., probability distributions of
monetary payoffs) x and y,
x ≿ y if and only if E[U(x)] ≥ E[U(y)]
where for a generic lottery z (e.g., one that pays out either x or y), E[U(z)] = Σ_s π_s U(z_s), the probability-weighted sum of the utilities of the state-specific payoffs
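A small Python sketch of this ranking; the payoffs, probabilities, and log felicity are hypothetical choices, with both lotteries sharing the same expected payoff of 82:

```python
import math

def expected_utility(payoffs, probs, U=math.log):
    # Linear combination of state utilities: sum_s pi_s * U(z_s)
    return sum(p * U(z) for z, p in zip(payoffs, probs))

probs = [0.5, 0.5]                          # two equally likely states
eu_x = expected_utility([100, 64], probs)   # lottery x: less dispersed
eu_y = expected_utility([128, 36], probs)   # lottery y: more dispersed
print(eu_x, eu_y)  # a log-utility investor ranks x above y
```

Because both lotteries have the same mean, the ranking here is driven entirely by the concavity of the felicity function, previewing risk aversion below.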
The Expected Utility Theorem: Supporting Axioms
The axioms supporting the EUT, stated for lotteries called p and q (or z, l, and h), include:
o Completeness: for any two lotteries z and l, either z ≿ l or l ≿ z (or both)
o Transitivity: for any lotteries z, l, and h, if z ≿ l and l ≿ h, then z ≿ h
o Continuity: let x, y, and z be outcomes such that x > y > z; then there exists a probability α ∈ [0, 1] such that the agent is indifferent between y and the lottery paying x with probability α and z with probability 1 − α
o These rankings extend to payoffs received under conditions of uncertainty, through a lottery
Completeness of EUT-Induced Rankings
Different VNM felicity functions may induce rather different rankings of
lotteries/securities/portfolios, but these will always be complete
This example shows that the type of felicity function assumed for an
investor may matter a lot
Instead of a log-utility function, assume a quadratic felicity function of returns, U(Ri) = −(Ri − …)²
o Such a felicity function can also capture aversion to risk
o In principle, utility is defined over consumption bundles u(x1, x2, …, xM); in finance we summarize it with a felicity function of wealth, U(W)
We shall always assume non-satiated individuals, U’(W) > 0
o Gordon Gekko’s greed, https://www.youtube.com/watch?v=VVxYOQS6ggk
To understand what risk aversion means, consider a bet where the
investor either receives an amount h with probability ½ or must pay
an amount h with probability ½, so the bet in expectation is fair
The intuitive notion of “being averse to risk” is that for any level of wealth W, an investor would not wish to enter in such a bet:
marginal utility U’(W) decreases as W grows larger
If U’(W) > 0 and U’’(W) < 0, the utility function is concave:
o Positive deviations from a fixed average wealth do not help as much as the negative ones hurt
o The segment connecting U(W − h) and U(W + h) lies below the utility function
Podcast 20: Defining and measuring risk aversion
Defining Risk Aversion
A risk-averse investor is one who always prefers the utility of the expected value of a fair bet to the expectation of the utility of the same bet; his or her utility function is concave
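This definition can be checked with a concrete concave felicity; log utility and the wealth numbers here are illustrative:

```python
import math

W, h = 100.0, 50.0
U = math.log                  # a concave felicity function

# Utility of the expected value of the fair bet vs expected utility
u_of_mean = U(0.5 * (W - h) + 0.5 * (W + h))   # equals U(W)
mean_of_u = 0.5 * U(W - h) + 0.5 * U(W + h)
print(u_of_mean, mean_of_u)   # the first exceeds the second
```

This is Jensen's inequality at work: for a concave U, the chord between U(W − h) and U(W + h) lies below the function, so the fair bet is rejected.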
Other Risk Preference Types
A risk-lover’s utility function is convex (a risk-neutral investor’s is linear)
We obtain risk-loving behavior when the marginal utility U’(W) increases as W grows larger
If U’(W) > 0 and U’’(W) > 0:
o Positive deviations from a fixed average wealth give more happiness than the unhappiness caused by negative deviations
The case of risk-neutral investors obtains if U’(W) is a positive constant b
o From standard integration of the marginal utility function, it follows U(W) = a + bW, a linear utility function
Absolute and relative risk aversion coefficients
Podcast 21: Absolute and relative risk aversion coefficients
Absolute and Relative Risk Aversion Coefficients
How can we manage to measure risk aversion and compare the risk
aversion of different decision makers?
Given that under mild conditions risk aversion is equivalent to
U''(W) < 0 for all wealth levels, one simplistic idea is to measure risk
aversion by the size of |U''(W)|
o E.g., John is more risk averse than Mary iff |UJohn''(W)| > |UMary''(W)|
Unfortunately, this approach leads to an inconsistency: when
UJohn(W) = a + bUMary(W) with b > 0 and b ≠ 1, clearly UJohn''(W) =
bUMary''(W) ≠ UMary''(W), so that, e.g., |UJohn''(W)| > |UMary''(W)| whenever b > 1
But we know that, by construction, John and Mary have the same
preferences over risky gambles, and therefore it makes no sense to
state that John is more risk averse than Mary
Two famous measures that escape these drawbacks are the
coefficients of absolute/relative risk aversion:
ARA(W) ≡ –U''(W)/U'(W) (ABSOLUTE)
RRA(W) ≡ –W·U''(W)/U'(W) = W·ARA(W) (RELATIVE)
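The invariance of ARA (and hence RRA) to affine transformations can be checked numerically; the sketch below uses log utility and an arbitrary affine transform as illustrative choices:

```python
import math

def ara(U, W, eps=1e-2):
    # Numerical ARA(W) = -U''(W)/U'(W) via central finite differences
    U1 = (U(W + eps) - U(W - eps)) / (2 * eps)          # ~ U'(W)
    U2 = (U(W + eps) - 2 * U(W) + U(W - eps)) / eps**2  # ~ U''(W)
    return -U2 / U1

U_mary = math.log                           # log utility: ARA(W) = 1/W, RRA(W) = 1
U_john = lambda W: 5.0 + 2.0 * math.log(W)  # affine transform: same preferences

W = 100.0
# |U''| differs by the factor b = 2, yet ARA (hence RRA = W * ARA) coincides:
print(round(ara(U_mary, W), 4), round(ara(U_john, W), 4))  # both ~ 1/W = 0.01
```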
ARA and RRA and the Odds of Accepting a Bet
o π(W; h) – ½ can be interpreted as the
“mark-up” in the odds of the bet that the investor requires to tolerate it
o π(W; h) depends on the size of the bet, h, in a very precise way: for
small bets, the required mark-up is proportional to h × ARA(W)
An increase in either absolute risk aversion or in the size of the bet
has identical effects on the mark-up
When ARA(W) is constant in wealth, π(W; h) turns out to be independent of wealth
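The wealth-independence claim can be verified directly. The sketch below assumes the CARA form U(W) = –e^(–θW) (an illustrative choice with ARA(W) = θ constant) and solves exactly for the winning probability p that makes a ±h bet acceptable:

```python
import math

def required_odds(theta, W, h):
    # Smallest winning probability p making a +/- h bet acceptable under
    # CARA utility U(W) = -exp(-theta*W): solve p*U(W+h) + (1-p)*U(W-h) = U(W)
    U = lambda x: -math.exp(-theta * x)
    return (U(W) - U(W - h)) / (U(W + h) - U(W - h))

theta = 0.05  # constant ARA(W) = theta
for W in (100.0, 500.0, 1000.0):
    print(W, round(required_odds(theta, W, 10.0), 6))  # same p at every wealth level

# The mark-up p - 1/2 grows with both the bet size h and the risk aversion theta:
print(required_odds(theta, 100.0, 20.0) > required_odds(theta, 100.0, 10.0))
print(required_odds(0.10, 100.0, 10.0) > required_odds(0.05, 100.0, 10.0))
```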
Two Examples
John is characterized by a VNM utility function whose ARA(W) is constant in wealth; therefore his π(W; h) turns out to be independent of wealth
Applications to Real-Life Examples
Because casino are for-profit companies and hence they «rig» chance
games in their favor, a gambler structu-
W; h) for all h implied by the gambles
Therefore no risk-averse agent should ever walk into a casino, ever!
However, not all risk-
risk-lover, and are both negative and therefore W; h
W; h) to accept risky, unfair gambles
In short, constant ARA agents care for the absolute size (h) of
gambles, while constant RRA care for their relative size ( )
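A small sketch of why a risk-averse agent stays out of the casino; log utility and even-money roulette-style odds (18/38) are illustrative assumptions:

```python
import math

U = math.log                              # any concave (risk-averse) utility works
W, h, p_win = 1000.0, 10.0, 18.0 / 38.0   # unfair bet: win h with prob < 1/2

EU_play = p_win * U(W + h) + (1.0 - p_win) * U(W - h)
# The gamble is both unfair (negative expected payoff) and risky,
# so a risk-averse agent strictly prefers her current wealth:
print(EU_play < U(W))  # True
```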
Podcast 23: ARA and RRA and the Risk Premium
ARA and RRA and the Risk Premium
The certainty equivalent of a risky bet is the (maximum) amount of
money one is willing to pay for the risky bet, which for a risk-averse
investor is less than its expected value
The other interpretation of ARA and RRA is that they relate to the size of
the risk premium characterizing a gamble/lottery/security
o This derives from the very definition of risk aversion and is simply an
application of the standard Jensen’s inequality: E[U(W)] ≤ U(E[W]),
or CE(W, H) ≤ E[W + H]
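Jensen's inequality can be confirmed by simulation; the concave utility (square root) and wealth distribution below are illustrative choices:

```python
import math
import random

random.seed(0)
U = math.sqrt                                                  # concave utility
draws = [random.uniform(50.0, 150.0) for _ in range(100_000)]  # random final wealth

EW = sum(draws) / len(draws)           # E[W]
EU = sum(U(w) for w in draws) / len(draws)  # E[U(W)]
print(EU < U(EW))  # True: E[U(W)] < U(E[W]) for concave U (Jensen)
```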
ARA and RRA and the Risk Premium
The risk premium measures the difference between the expected value
of a bet and the certainty equivalent an investor is willing to pay for it
The difference between the expected value of a risky prospect and its
certainty equivalent is the risk premium Π(W, H):
Π(W, H) = E[W + H] – CE(W, H)
It represents the maximum amount the agent would be willing to pay
to avoid the gamble implied by the risky asset
Π(W, H) must be s.t.: U(E[W + H] – Π(W, H)) = E[U(W + H)]
o In the figure, the length of both red segments is the same, from which the
result follows
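The defining equation for the risk premium can be solved directly when the utility function has a known inverse; a sketch with log utility and a fair ±h gamble (illustrative choices):

```python
import math

U, U_inv = math.log, math.exp   # log utility and its inverse
W, h = 100.0, 20.0              # fair +/- h gamble around W

EW = 0.5 * (W + h) + 0.5 * (W - h)        # E[W + H] = 100
EU = 0.5 * U(W + h) + 0.5 * U(W - h)      # E[U(W + H)]
CE = U_inv(EU)                            # certainty equivalent: U(CE) = E[U(W + H)]
risk_premium = EW - CE                    # Pi(W, H) = E[W + H] - CE(W, H)
print(round(CE, 2), round(risk_premium, 2))  # -> 97.98 2.02
```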
ARA and RRA and the Risk Premium
For small risks, the risk premium is proportional to ARA or RRA,
interacted with the variance of the gamble, i.e., the perceived quantity of risk:
Π(W, H) ≅ ½ × σ²H × ARA(W)
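The small-risk (Arrow-Pratt) approximation can be compared with the exact premium; log utility (so ARA(W) = 1/W) and a ±h gamble (variance h²) are illustrative assumptions:

```python
import math

W = 100.0
ARA = 1.0 / W                       # for U = log W: ARA(W) = 1/W
for h in (1.0, 5.0, 20.0):
    EU = 0.5 * math.log(W + h) + 0.5 * math.log(W - h)
    exact = W - math.exp(EU)        # Pi = E[W + H] - CE, with E[W + H] = W
    approx = 0.5 * (h ** 2) * ARA   # Pratt: Pi ~ (1/2) * Var(H) * ARA(W); Var = h^2
    # exact and approximate premia track closely for small h
    print(h, round(exact, 4), round(approx, 4))
```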
ARA and RRA and the Risk Premium
The approximation reads (Risk premium) ≅ (Price of risk) × (Quantity of risk)
o As before, because ARA(W) = RRA(W)/W, we can re-write the result as:
Π(W, H) ≅ ½ × σ²H × RRA(W)/W
o E.g., if for a given bet h John’s risk premium is ΠJohn(W; h) = 5 euros on
an expected payoff of 100 euros, then CE = 95 euros
o Let’s check what the definition U(E[W + h] – Π(W, h)) = E[U(W + h)]
yields: U(100 – 5) = U(95) = E[U(W + h)]
A Different Definition of Risk Premium
It is possible to convert these ideas into the classical definition of a
percentage risk premium to be added to asset returns to compensate
a decision-maker for the risk she runs
Any risky gamble H can be re-scaled by wealth into a risky return H/W,
so that the premium is expressed in percentage terms
These functions are called linear risk tolerance (LRT) utility functions
(alternatively, HARA utility functions, where HARA stands for
hyperbolic absolute risk aversion, since ARA(W) defines a hyperbola)
LRT utility functions have many attractive properties:
o Quadratic utility: IARA
o Linear utility: risk-neutral, U(W) = a + bW with b > 0
Quadratic utility poses a few problems: e.g., the investor is not
non-satiated at all wealth levels; she is satiated above the bliss
point, where U'(W) = 0
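A sketch of the quadratic-utility problems, assuming the common textbook form U(W) = W – (b/2)W² with a hypothetical parameter b:

```python
b = 0.005                             # hypothetical curvature parameter
U   = lambda W: W - 0.5 * b * W ** 2  # quadratic utility
U1  = lambda W: 1.0 - b * W           # marginal utility U'(W)
ARA = lambda W: b / (1.0 - b * W)     # -U''/U' = b/(1 - bW): increasing in W (IARA)

bliss = 1.0 / b                       # U'(bliss) = 0: satiation at W = 200
print(U1(100.0) > 0, U1(300.0) < 0)   # non-satiated below bliss, satiated above
print(ARA(150.0) > ARA(50.0))         # True: absolute risk aversion rises with wealth
```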
Podcast 24: A Few Common Utility of Wealth Functions
From the Density of Wealth to the Density of U(W)