Lecture 1
What Is Market Risk Management?
Riccardo Rebonato
1 Plan of the Course
• to choose the most appropriate risk measurement method given the task
at hand;
This first lecture will be more theoretical than the rest of the course.
The reason for dwelling on more theoretical aspects is that the conceptual foun-
dations that we will lay today should remain with us, perhaps in the background,
for the rest of the course when we get down to work and measure market risk.
I will introduce these concepts in a heuristic and intuitive manner in the first
lecture.
We will only seldom explicitly use these concepts again, but please don’t erase
them from your memory.
I will argue that if we want to understand what went wrong in risk man-
agement practices in 2007-2008, focussing on failures of risk measurement
may not be the best place to start.
I will briefly sketch the origins of VaR as the risk measure of choice to
determine prudential capital.
• Finally I will explain why the high-dimensional joint distribution of risk
factors and the univariate distribution of profits and losses (P&L) play
such a crucial role in risk measurement and management.
For instance, the concept of ‘coherent’ risk measures, which we will encounter
later in this lecture, was developed after the VaR statistic had been enshrined
in the risk regulatory framework — and it so happens that VaR is not a coherent
measure of risk after all.
The current practical approach to risk management developed in substantial
disregard of some fundamental insights about the nature of risk from finance
theory and asset pricing.
This is not necessarily a bad thing, but we should not assume that risk mea-
surement (let alone risk management) stands on the same solid theoretical and
empirical grounds as, say, microeconomics.
Notwithstanding the quote above by Greenspan, the links with ‘the writings of
the University of Chicago’s Harry Markowitz’ are in reality rather tenuous.
The ‘story’ behind the emergence of the risk measurement discipline is much
‘messier’ than the usual text-book reconstructions allow one to perceive.
4 The Historical Origins of Value at Risk (VaR)
Source: Damodaran, A. (date unknown), Value at Risk (VaR), working paper, NYU
What this work has in common with VaR is the focus on the volatility of, and
correlation among, risk factors.
• In the 1980s the US Securities and Exchange Commission (SEC) made the
first link between the capital held by financial firms and the potential losses
that would be incurred at the 95% confidence level over a thirty-day
period. This capital was referred to as a 'haircut', not as VaR.
Most banks co-ordinated their efforts and fulfilled the regulatory require-
ment via the use of VaR.
• In 1999 the Basel II Accord set out for the first time the possibility to calcu-
late the capital component pertaining to market risk using VaR calculated
using a firm’s internal methodology (to be approved by the regulators).
The crisis of 2007-2008 has raised important questions about what went wrong
in the practice of risk management.
For instance:
• Was it a failure of risk measurement (were the calculations of whatever
risk metric was deemed useful faulty?)
• Were people using the wrong risk measures (ie, would the focus on risk
measures other than VaR have given rise to different outcomes)?
(Greenspan, 2009):
• the payoffs that accrue to the various economic agents who are involved
in the management of risk;
Note: the existence of subsidies is not the same as the existence of exter-
nalities. There are two linked but different reasons why subsidized SIFIs
‘matter’, only one of which has a direct systemic dimension.
• What is the relevance, for the risk management practices that regulation
enforces, of the depositor guarantee granted by governments to systemically
important deposit-taking institutions? How can moral hazard be curbed?
• The ‘covenant’ between the government and the banks. [Discussion here]
How was this web of interests, subsidies and externalities reflected in the pre-
crisis regulatory framework?
7 The Risk Management Failure of 2007-2008
Revisited
As we shall see, the conceptual framework that put market risk measurement
centre stage in the calculation of regulatory capital rests on the twin pillars of
enlightened self-interest, and the superior ability of the private sector to come
up with the technology to capture, measure and manage risk.
Here is what Greenspan (2009) wrote:
The argument here seems to be that the enlightened self-interest of the pri-
vate decision-makers of subsidized, systemically important institutions can
produce an outcome that is good for 'everybody' (shareholders, managers of a
bank, taxpayers, ...).
Is it tenable that there should exist one ‘good’ form of risk management, inde-
pendent of who will use it?
If it did, then we should just go to the guys who can measure risk best (and
these were supposed to be the banks), and ask them to get on with it.
Consider:
• Risk management from the perspective of the shareholders / the managers
of capital-regulated financial institutions.
How can we look in a unified manner at all these different perspectives of risk?
• different economic agents are impacted by the failures of banks, and benefit
from the profits banks make, to very different extents and in very different
circumstances;
We have therefore concluded that it seems unlikely that such different payoffs
will give rise to identical responses to the management of systemic financial
risk.
• the monetary value of these payoffs in each state of the world (by the way,
1. and 2. are what traditional risk measurement is all about);
If one had full confidence in this approach, all questions about risk (and reward,
and a lot more) would be answered by applying this formalism.
In reality the expected utility formalism has been subject to several different
sources of criticism, such as:
• it may be correct in theory, but this is not how people ‘really’ make choices
in reality;
• it is conceptually flawed.
We will not attack the problem of market risk management from the utility-
function perspective.
(We should ask ourselves, however, why is the idea of managing risk using EU
suspect, but the idea of doing so by looking at one percentile sound...)
However, EU provides a very useful tool to frame questions about risk, and
therefore we will take a brief look at what EU theory says.
We will not use the output of EU theory to get risk numbers out, but to help
our intuition.
12 Choice Theory: Risk Aversion in Rational Choice Theory
In this section we want to understand how risk and reward are looked at in
finance theory: ie, through the prism of rational choice and utility theory.
1. given any two prospects, A and B, we can always say whether we prefer
A (A ≻ B) or B (B ≻ A), or whether we are indifferent (A ∼ B).
if A ≻ B =⇒ A ≻ (1 − ε)B + εC for ε small enough (1)
(This means that there are no 'cliffs' in preferences.)
Under certainty we can say that
L1 ≻ L2 =⇒ u(L1) > u(L2)
So, if lottery LA has outcomes (L_{1A}, L_{2A}, L_{3A}) with probabilities (π_{1A}, π_{2A}, π_{3A})
and lottery LB has outcomes (L_{1B}, L_{2B}, L_{3B}) with probabilities (π_{1B}, π_{2B}, π_{3B}),
then Jenny's choices over lotteries can be described as if she first computed in
her mind
EU(LA) = Σ_{i=1,3} π_{iA} u(L_{iA}) (4)
and
EU(LB) = Σ_{i=1,3} π_{iB} u(L_{iB})
and then picked LA if the expected utility of LA, EU(LA), was greater than
the expected utility of LB, EU(LB), and vice versa.
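As a purely illustrative sketch of Equation (4), the Python fragment below computes the expected utilities of two hypothetical three-outcome lotteries and picks the preferred one; the outcomes, the probabilities and the choice of a logarithmic utility are all assumptions made only for this example.

    # A minimal sketch of the expected-utility comparison in Equation (4).
    # The lotteries, their probabilities and the logarithmic utility function
    # are purely illustrative assumptions.
    import math

    def expected_utility(outcomes, probs, u):
        """EU(L) = sum_i pi_i * u(L_i) for a discrete lottery."""
        return sum(p * u(x) for x, p in zip(outcomes, probs))

    u = math.log  # an illustrative concave utility function

    EU_A = expected_utility([0.8, 1.0, 1.3], [0.2, 0.5, 0.3], u)  # hypothetical L_A
    EU_B = expected_utility([0.7, 1.0, 1.4], [0.3, 0.4, 0.3], u)  # hypothetical L_B

    # Jenny behaves as if she picked the lottery with the higher expected utility
    print(EU_A, EU_B, "choose L_A" if EU_A > EU_B else "choose L_B")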
13 Adding Empirical Information
Nice as they are these results do not take us very far, unless we add some
empirical flesh to the beautiful but dry theoretical bones.
We can model this by saying that people assign a ‘degree of pleasure’ (‘utility’)
to consumption that keeps on increasing with how much they consume, but
that does not increase one-for-one with the amount of consumption.
Again, the fourth slice of cake never tastes as good as the first.
How can we model this?
A concave function such as the log or the square root function displays this
behaviour.
These two simple ingredients are enough to show that Jenny should be risk
averse: that is, that she will prefer a certain consumption to entering a lottery
with the same expectation.
• risk premium
• certainty equivalent
[Figure: power utility functions u(c) plotted against consumption for several values of the risk-aversion parameter (β = 0.25, 0.5, 2, 4, 6), together with the logarithmic and the risk-neutral cases.]
Figure 4: Power utility functions for different values of the risk aversion coeffi-
cient. The limit for β → 0 corresponds to risk neutrality; the limit for β → 1
corresponds to the logarithmic utility function.
A risk premium, π, associated with a zero-expected-value lottery, ε, is the
amount of money you would be willing to pay to avoid entering the lottery.
[What can it depend on?]
So
u(c − π) = E[u(c + ε)] (5)
You are just as happy if your wealth decreases for sure by π or if you are forced
to enter the fair lottery.
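A minimal numerical sketch of Equation (5), assuming (only for illustration) logarithmic utility, wealth c = 1 and a fair lottery ε = ±0.25 with probability one half each; the certainty equivalent mentioned above is obtained along the way.

    # A numerical sketch of Equation (5), u(c - pi) = E[u(c + eps)].
    # Assumptions (for illustration only): log utility, wealth c = 1,
    # and a fair lottery eps = +/-0.25 with probability 1/2 each.
    import math

    c = 1.0
    eps = [-0.25, +0.25]
    probs = [0.5, 0.5]

    eu_lottery = sum(p * math.log(c + e) for e, p in zip(eps, probs))

    # For an invertible utility the certainty equivalent is u^{-1}(EU);
    # for the log utility u^{-1} is the exponential.
    certainty_equivalent = math.exp(eu_lottery)
    risk_premium = c - certainty_equivalent      # the pi of Equation (5)

    print(certainty_equivalent, risk_premium)    # ~0.968 and ~0.032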
In general, calculating the expected utility of a risky prospect is tantamount
to multiplying the utility corresponding to each different monetary outcome by
the real-world probability of that outcome occurring.
Let’s take the power utility function as our starting point. This is given by
u(c) = (1/β) c^β (6)
In the limit for β going to 1 this becomes a straight line, utility is the same as
wealth/consumption, and we have risk neutrality. [Why?]
Now, utility functions imply the same ordering of preferences up to a linear
transformation (ie, if we change u(c) into a + b × u(c), with b > 0). Therefore
let's change the power utility function to
u(c) = (1/β) c^β − 1/β = (c^β − 1)/β (7)
Consider an agent with logarithmic utility (the β → 0 limit of the power utility
above) and initial wealth 1. This agent is faced with a lottery that will increase
her wealth to 1.25 with probability 0.5 and will decrease her wealth to 0.75 with
probability 0.5.
What is the expected utility, eu(bet), from entering the fair bet? We have
eu(bet) = (1/2) u(0.75) + (1/2) u(1.25) = −0.03227
This is less than the investor’s original utility, so she will not willingly enter the
bet.
Consider now the case of a risk-neutral investor, whose utility is given by the
limit for β going to 1 of
u(c) = (c^β − 1)/β
ie, by u(c) = c − 1. (The logarithmic utility used above corresponds instead to
the limit for β going to 0: log(c) = lim_{β→0} (c^β − 1)/β.)
For the risk-neutral investor the expected utility of the lottery is the same as
the utility of her wealth before the bet:
(1/2) lim_{β→1} (0.75^β − 1)/β + (1/2) lim_{β→1} (1.25^β − 1)/β = 0 = u(1).
We can ask the question:
by how much does the 'bad-state' probability, p(bad), have to increase for the
expected utility of a risk-neutral investor to be the same as the expected utility
of the logarithmic-utility (risk-averse) investor?
A quick calculation gives
p(bad) lim_{β→1} (0.75^β − 1)/β + [1 − p(bad)] lim_{β→1} (1.25^β − 1)/β = −0.03227 =⇒
p(bad) = 0.56454
We can call this probability the individual risk-neutral probability, or the individual-
Q-measure probability.
So, we have
Probabilities    P-measure (real world)    Q-measure (risk-neutral world)
p(good)          0.5                       0.43546
p(bad)           0.5                       0.56454
Note how the bad-state probability has increased under Q: Q-probabilities are
more pessimistic.
The important take-away is the following: when we make choices under risk,
real-world probabilities get deformed.
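The numbers in the table can be reproduced in a few lines. The sketch below assumes the same set-up as the worked example (logarithmic utility, initial wealth 1, outcomes 0.75 and 1.25 with real-world probability 0.5 each) and solves for the deformed bad-state probability.

    # A sketch that reproduces the probability 'deformation' above. Set-up as in
    # the example: log utility, initial wealth 1, outcomes 0.75 and 1.25 with
    # real-world probability 0.5 each.
    import math

    good, bad = 1.25, 0.75
    p_real = 0.5

    eu_log = p_real * math.log(bad) + (1.0 - p_real) * math.log(good)   # -0.03227

    # Risk-neutral utility u(c) = c - 1: solve
    #   p_bad*(bad - 1) + (1 - p_bad)*(good - 1) = eu_log   for p_bad.
    p_bad_q = (good - 1.0 - eu_log) / (good - bad)

    print(eu_log, p_bad_q, 1.0 - p_bad_q)   # -0.03227, 0.56454, 0.43546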
15 The same example with a twist
Consider now two agents with the same logarithmic utility function, the same
real-world probabilities, but with the payoffs reversed: in the state where player
1 gets 1.25, player 2 gets 0.75, and vice versa.
Probabilities    p(state 1)    p(state 2)
Player 1         0.56454       0.43546        (10)
Player 2         0.43546       0.56454
Now there no longer is a univocal 'good' and 'bad' state for everyone: there
are state 1 and state 2 instead, and the state that is good for one player is bad
for the other.
We know already that in making choices about risk, true (real-world) proba-
bilities — the ones that risk measurement techniques strive to determine — tell
only a part of the story.
But the new point is that, even if all the agents affected by a set of outcomes
had the same aversion to risk and agreed on the real-world probabilities, the
differences in payoffs generate different 'deformations' of the subjective real-
world probabilities.
Both players would reject the fair bet, but, if allowed to improve their expected
utility by hedging (risk management), they would hedge against very different
things.
• [Case study: the Head of Risk Management and the CEO looking at sub-prime
exposure of their bank in July 2007.]
Novel interesting approaches are emerging to enrich this view (Hersh Shefrin,
Behavioural Risk Management, 2015), but it is too early to tell how fruitful
they will be.
17 Different Measures for Risk Management — 2
Important as the cognitive shortcomings of economic agents may be, the analy-
sis of subsidies, externalities and attaching payoffs should make clear that even
perfectly rational agents will take different risk management decisions when
faced with the same measured real-world P-distribution of returns.
To summarize: there are two distinct reasons why the real-world probability
distribution of outcomes may be deformed:
• one is linked to our risk aversion, and is totally compatible with a rational
(but not risk-neutral) economic agent — this is the P-to-Q transformation
alluded to above. Even if we learn to ‘think straight’ (ie, if we eliminate
our cognitive biases) these deformations will remain.
• the second brings into play cognitive deficiencies and various ‘errors’ in as-
sessing probabilities. [Example in class of overconfidence: the 10-question
game]
The deformations of the real-world probabilities caused by cognitive biases can
be severe.
According to some authors, in some contexts these biases cause the largest
deformations of the ‘true’ probabilities. However, as we have seen, even in
the absence of cognitive limitations and biases, each non-risk-neutral economic
agent will deform the real-world probability distribution in what is for her the
‘pessimistic’ direction.
And if the payoffs of various economic agents are different, the deformations
they impose on the real-world probability distribution will be very different.
• yet, despite their superior technical ability to measure and manage risk,
financial institutions should still be 'goaded' by 'capital incentives' to
embark on a sophisticated programme of risk discovery, quantification and
management;
• regulators should set standards for risk management, but not prescribe
how risk management should be carried out: again, "private firms know
best".
19 A Quote
"I think we all agree that regulators should not manage the financial
risk of banks — banks should."
[Comment]
20 Why do enlightened firms need encouragement?
Jorion (1997):
• Why did they not do it of their own accord in the first place?
• How does all of this fit with the 'enlightened self-interest' of banks?
21 Stepping Back: Why VaR? What other measures of market risk are there?
The historical account given at the beginning of this lecture of how VaR became
the risk-capital measure of choice from a regulatory perspective stresses the
almost serendipitous series of events that made VaR first a key tool to capture
and aggregate market exposure, and then the basis of the link with regulatory
capital.
This prompts the obvious question of whether there are other — and possi-
bly better — risk measures (understood as a mapping from a profit-and-loss
distribution to a single number ).
• why do we need risk measures at all, if rational investors are utility max-
imizers? Why don’t economic agents just maximize their utility? [Same
utility functions for everyone?]
There are several reasons why risk measures are preferred to expected utility
maximization:
Given the discussion above — given, that is, that we do not want to go down
the utility maximization route — and given that we have assumed that we know
the probabilities attaching to all the possible outcomes for these portfolios, we
would like to use a risk measure instead. What properties should a desirable
risk measure enjoy?
A reasonable risk measure should have the following properties:
Suppose that we have a fixed set, Ω = {ω1, ω2, ..., ωn}, of possible market
outcomes, and a number of possible portfolios, z ∈ Z.
(Formally we can say that Z is a real linear vector space, but we don't need to
go into that.)
Suppose that we can associate to each portfolio, z, its discounted values, {x},
one for each possible market outcome: x = x (z; Ω) .
Next, consider a function, ρ, associated with the known set of possible market
outcomes, that takes in one or more portfolios, transforms these input portfolios
into their (discounted) payoffs, x (ω), and returns a real number.
So, take portfolio z. The function(al) ρ takes in z, first transforms it 'internally'
into its payoffs, x(z; Ω), in all the possible states of the world, and, from all
these numbers, returns a 'risk number', ρ(z).
Any function that does all of this is called a monetary measure of risk.
24.2 Properties of Coherent Risk Measures
What we have asked so far of this risk function is too vague, however, to be of
much use.
What properties should we ask of the monetary measure of risk function, ρ, for
it to reflect our intuition about the risk of a portfolio?
How should one define monetary risk measures in such a way that
we can decide which risks we should accept and which we should reject?
This is the question that Artzner, Delbaen, Eber and Heath asked themselves
when they introduced coherent risk measures, and this is what they came up with.
Every monetary measure of risk generates a set of input portfolios (of positions)
which are acceptable (say, to a regulator).
More precisely, the acceptance set is the set, Ω_Z, of all positions such that
ρ(z) ≤ k (14)
Coherent risk measures are supposed to generate acceptance sets that make
good sense for a regulator.
By looking at coherent risk measures through the acceptability lens, we can also
understand better the translational invariance condition: adding cash does not
decrease the uncertainty associated with a portfolio, but it does increase the
acceptability of the portfolio.
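As a sketch of these ideas, the fragment below uses the 99% VaR of a simulated P&L distribution as the monetary risk measure ρ and checks both the acceptance condition (14) and the effect of adding cash; the Gaussian P&L, the confidence level and the threshold k are illustrative assumptions.

    # A sketch of a monetary risk measure and its acceptance set. Here rho is
    # taken to be the 99% VaR of a simulated P&L; the Gaussian P&L, the
    # confidence level and the threshold k are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    pnl = rng.normal(loc=0.0, scale=10.0, size=100_000)   # hypothetical portfolio P&L

    def rho_var(pnl, level=0.99):
        """VaR at 'level': the loss (a positive number) exceeded with probability 1 - level."""
        return -np.quantile(pnl, 1.0 - level)

    k = 20.0                          # hypothetical acceptability threshold
    print(rho_var(pnl) <= k)          # False: rho(z) ~ 23.3 > k, position not acceptable
    print(rho_var(pnl + 5.0) <= k)    # True: adding 5 of cash lowers rho by 5
                                      # (translational invariance) and makes it acceptable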
26 Stochastic Dominance
Are there criteria of choice between two portfolios that use the expected-utility
framework in a model independent way?
Answering this question leads to the concept of Stochastic Dominance. The
definition first.
A F SD B (16)
Consider now the probability distribution of the losses, x, associated with Port-
folio A and Portfolio B. One can easily show that this condition is satisfied
iff
Φ_A(x) ≤ Φ_B(x) for all x (17)
[Figure: the cumulative distribution functions (top panel) and the probability densities (bottom panel) of two prospects, one of which first-order stochastically dominates the other.]
First-order stochastic dominance is nice, but rarely met in practice — after all,
what we have described is very close to the definition of arbitrage!
1. Portfolio X: gain $180 with certainty;
2. Portfolio Y: gain $100 with probability 0.5 and $200 with probability 0.5
[Discuss]
Now suppose that your utility function is always increasing and concave (ie,
u(x) is a concave function of x). I know nothing else about your utility
function. (So, I know that you prefer more to less, and that you are risk averse -
aren't we all?)
Then we have
E[u(X)] = E[u(180)] ≥ E[u(150)] ≥ (1/2) (E[u(200)] + E[u(100)]) = E[u(Y)] (18)
Figure 7: [The cumulative distribution functions of Portfolio X and Portfolio Y.]
One can show that any risk-averse, non-satiable investor will always prefer
prospect A to prospect B iff
A SSD B ⇐⇒ ∫_{−∞}^{x} Φ_A(t) dt ≤ ∫_{−∞}^{x} Φ_B(t) dt for all x
[Figure 11: distributions illustrating second-order stochastic dominance.]
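Both dominance conditions can be checked numerically. The sketch below does so for the two portfolios of the example (X paying $180 for sure, Y paying $100 or $200 with probability one half), using the cumulative distribution functions of the payoffs; the grid and the discretization of the integrals are illustrative choices.

    # A sketch that checks the two dominance conditions for the portfolios of
    # the example: X pays $180 for sure, Y pays $100 or $200 with prob. 1/2.
    import numpy as np

    grid = np.linspace(0.0, 350.0, 3501)

    def cdf(outcomes, probs, x):
        """Phi(x) = P(payoff <= x) for a discrete prospect."""
        return sum(p for o, p in zip(outcomes, probs) if o <= x)

    phi_X = np.array([cdf([180.0], [1.0], x) for x in grid])
    phi_Y = np.array([cdf([100.0, 200.0], [0.5, 0.5], x) for x in grid])

    dx = grid[1] - grid[0]
    fsd = np.all(phi_X <= phi_Y)                                   # first-order condition
    ssd = np.all(np.cumsum(phi_X) * dx <= np.cumsum(phi_Y) * dx)   # second-order condition

    print(fsd, ssd)   # False, True: X does not FSD Y, but X does SSD Y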
Practitioners prefer to use risk measures — ie, "risk numbers". To compute them, we need to know:
• which possible values our portfolios can assume over the set Ω;
• a joint probability distribution of the (possibly very large, but finite) number
of risk factors that can affect the value of each of our portfolios;
• a mapping from each of these joint realizations to the value (P&L) of our
portfolio. This is our first essential requirement.
So, the plan of action is to obtain a univariate distribution of profits and losses
from the impossibly high-dimensional joint distribution of risk factors. This requires:
• an accurate mapping from any joint realization of the risk factors to the
value of the portfolio of interest;
• the use of this univariate probability distribution to get the risk measure
of our choice (or the value of utility).
28 How does the univariate P&L distribution get manipulated?
Different risk measures ‘take in’ and ‘do different things’ to the P&L probability
distribution. For instance:
• Conditional expected shortfall calculates the average ‘in the tail’ (ie, past a
given percentile). We use more information, but only in the tail. Since we
are only looking at information in the tail, it goes without saying that we
cannot say anything about risk/reward trade-offs. The strategic decisions
on investments are made on another floor.
• Utility maximization takes into account every little bit of the full distribu-
tion. One can ask questions about risk/return trade-offs. Pity that only a
selected breed of financial professionals likes it!
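A small sketch may make the contrast concrete: from one and the same (simulated, purely illustrative) univariate P&L distribution, VaR reads off a single percentile while expected shortfall averages the losses beyond it.

    # A sketch of how different risk measures 'take in' the same univariate P&L
    # distribution. The simulated P&L (normal returns plus rare large losses) is
    # purely illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    pnl = rng.normal(0.0, 1.0, 100_000)
    pnl[rng.random(100_000) < 0.01] -= 8.0        # rare large losses in the left tail

    level = 0.99
    var_99 = -np.quantile(pnl, 1.0 - level)       # VaR: a single percentile of the distribution
    es_99 = -pnl[pnl <= -var_99].mean()           # expected shortfall: average loss beyond VaR

    print(var_99, es_99)   # ES > VaR, because ES 'sees' the tail beyond the percentile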
29 A last word of caution
A few words on the volatility, σ. Since we almost invariably deal with Gaussian
models, the volatilities are ‘absolute’ or ‘normal’ (not percentage) volatilities.
So, we write a volatility as σ_r = 0.0120, never as σ_r = 1.2%. As for yields, we
often refer to one hundredth of a 0.01 volatility as a ‘basis point volatility’. So,
we sometimes say ‘the volatility of the short rate is 120 basis points’ instead of
writing σr = 0.0120. The important thing to remember is that, whatever units
we may want to express volatilities in, they will always have the dimensions of
[$] [t]^(−3/2). This would not be the case if we used a different (but still affine)
model, such as the one by Cox, Ingersoll and Ross (1985a, 1985b).
Suppose now that the time-t value of the process is xt, and that this value does
not coincide with the reversion level, θ . If we neglect the stochastic term, how
quickly does the process halve its distance to θ? It is easy to show (see Chapter
8) that the ‘half life’, H(κ), (as this time is called) is given by
H(κ) = (log 2) / κ. (22)
In arriving at this result we have neglected the stochastic term, ie, we have
assumed that the process (19) was actually of the form
dx_t = κ (θ − x_t) dt.
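With Equation (22), a (hypothetical) reversion speed of κ = 0.25 per year, for example, gives H(κ) = (log 2)/0.25 ≈ 2.77 years: absent shocks, the process covers half of its remaining distance to θ in just under three years.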
Let's now move to discrete time, and let's consider the trusted work-horse of
time series analysis, the autoregressive, order-1, AR(1), process
x_{t+1} = µ + φ x_t + ν ε_t,   ε_t ∼ N(0, 1) (24)
Note that Equation (24) is just a regression, with the values of the variable at
time t as independent variables (the ‘right-hand’ variables) and the values of
the variable at time t + 1 as the dependent variables (the ‘left-hand’ variables).
Then φ is the slope and µ the intercept.
The parameter φ is important, because it determines the stability of the process:
if |φ| < 1, the system is stable, and the further back in time a disturbance has
occurred, the less it will affect the present. Random walks are a special (limiting)
case, with φ = 1: yesterday's increment does not affect in any way today's
increment, and full memory is therefore retained of any shock that occurred in
the past. As the shock 'persists' (in expectation), an AR(1) process with φ = 1
is said to be persistent.
If |φ| > 1 the system is unstable, which is a less scary way to say that it blows
up.
One can show that the (theoretical) mean, m, of the AR(1) process x_t is given
by
m = µ / (1 − φ), (26)
its (theoretical) variance, Var(x_t), is given by
Var(x_t) = ν² / (1 − φ²) (27)
and that the (theoretical) serial correlation, corr_n, between realizations n pe-
riods apart is given by
corr_n = φ^n. (28)
Finally, for future reference we state without proof that the time-t expectation
of the process x_t i steps ahead of time t, E_t[x_{t+i}], is given by
E_t[x_{t+i}] = φ^i x_t + µ (1 − φ^i) / (1 − φ) (29)
Exercise 2 Derive Equation (29) and show that it coincides with Equation (26)
when the time i goes to infinity.
30.3 Parallels between AR(1) Processes and the Ornstein-Uhlenbeck Process*
With these properties and definitions under the belt, we can establish a par-
allel between a discrete AR(1) process and the discrete-time versions of the
continuous-time mean-reverting process (19). We go back to the discretization
of the Ornstein-Uhlenbeck process obtained above:
x_{t+1} = κθ∆t + x_t (1 − κ∆t) + σ √∆t ε_t.
Comparing this expression with the definition of a discrete-time AR(1) process,
this means that a discretized Ornstein-Uhlenbeck process is just a special
AR(1) process with
µ = κθ∆t (30)
φ = 1 − κ∆t (31)
ν² = σ²∆t (32)
Given a time series (xt) the parameters of the AR(1) process can be estimated
by regressing the left-hand variable (the ‘y’ variable), xt+1, against the right-
hand variable (the ‘x’ variable, the regressor), xt. Once again, the independent
variable (the regressor) is the lagged left-hand variable: the intercept will give
µ, and the slope will give φ.
In principle, using Equations (30) to (32) we can then relate the regression
slope, the intercept and the variance of the error to the parameters of the
mean-reverting, continuous-time process, (19). In practice, this brute-force
approach usually requires some tender loving care before it can be made to
yield reasonable results, but we won’t go into this here.
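The sketch below illustrates the procedure just described: it simulates a discretized Ornstein-Uhlenbeck process, estimates the AR(1) parameters by regressing x_{t+1} on x_t, and maps them back to (κ, θ, σ) using Equations (30) to (32); the parameter values, and the plain OLS fit, are illustrative assumptions.

    # Simulate a discretized Ornstein-Uhlenbeck process, fit the AR(1) regression,
    # and map the estimates back to (kappa, theta, sigma) via Equations (30)-(32).
    import numpy as np

    kappa, theta, sigma, dt = 0.5, 0.03, 0.01, 1.0 / 252   # illustrative values
    rng = np.random.default_rng(1)

    x = np.empty(100_000)
    x[0] = theta
    for t in range(len(x) - 1):
        x[t + 1] = kappa * theta * dt + x[t] * (1.0 - kappa * dt) \
                   + sigma * np.sqrt(dt) * rng.standard_normal()

    # OLS regression of x_{t+1} (left-hand variable) on x_t (the regressor)
    phi_hat, mu_hat = np.polyfit(x[:-1], x[1:], 1)       # slope, intercept
    nu2_hat = (x[1:] - (mu_hat + phi_hat * x[:-1])).var()

    kappa_hat = (1.0 - phi_hat) / dt                     # from phi = 1 - kappa*dt
    theta_hat = mu_hat / (kappa_hat * dt)                # from mu = kappa*theta*dt
    sigma_hat = np.sqrt(nu2_hat / dt)                    # from nu^2 = sigma^2*dt

    print(kappa_hat, theta_hat, sigma_hat)   # close (up to estimation noise) to 0.5, 0.03, 0.01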
In the rest of the book we will eclectically and opportunistically move between
the continuous-time and the discrete-time (AR(1)) formulations of a process, so
as best to serve our needs.
31 Appendix 3 — Some Results from Stochastic
Calculus*
We present without proof some results from stochastic calculus that we will
use when we switch from a discrete-time to a continuous-time treatment. For
those readers who do not like seeing rabbits being pulled out of hats, a non-
scary introduction to stochastic calculus is given in Klebaner (2005). And for
those more demanding readers who do not want to take any shortcuts, and for
whom only the best will do, Karatzas and Shreve (1988) is the place to go. It
is so good, that one day I will finish it myself.
31.1 Ito’s lemma
Consider now a function, yt, of the stochastic quantity, xt, that evolves ac-
cording to Equation (33): yt = f (xt). Let’s assume that the function f (xt)
can be differentiated twice with respect to xt and at least once with respect to
time. What will the process for the variable yt look like?
Ito’s lemma gives us the answer: given the process for the state variable in
Equation (33), the process of the new variable, yt, will have the following form:
dy_t = [∂y_t/∂t + µ(x_t, t) ∂y_t/∂x_t + (1/2) σ²(x_t, t) ∂²y_t/∂x_t²] dt + σ(x_t, t) ∂y_t/∂x_t dz_t. (34)
For readers who are familiar with 'traditional', but not stochastic, calculus, the
only surprising term is the 'convexity' term (1/2) σ²(x_t, t) ∂²y_t/∂x_t².
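As a quick illustration (with a function chosen here only for familiarity), take y_t = log x_t, so that ∂y_t/∂t = 0, ∂y_t/∂x_t = 1/x_t and ∂²y_t/∂x_t² = −1/x_t². Equation (34) then gives
d(log x_t) = [µ(x_t, t)/x_t − (1/2) σ²(x_t, t)/x_t²] dt + σ(x_t, t)/x_t dz_t,
and the second term in the drift is exactly the 'convexity' correction just mentioned.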
dt · dt = 0 (37)
dz · dt = 0 (38)
Another useful result is the following∗. Let’s integrate over time (from time t
to time T ) the diffusive process for xt. What we have is called an Ito integral,
I_t^T,
I_t^T = ∫_t^T dx_s = ∫_t^T µ(s) ds + ∫_t^T σ(x_s, s) dz_s. (41)
Its value at time T is not known at time t, because it will depend on the path
followed by dzs from time t to time T . However its expectation is known at
time t, and is given by
E_t[I_t^T] = E_t[∫_t^T µ(s) ds + ∫_t^T σ(x_s, s) dz_s] = ∫_t^T µ(s) ds (42)
∗ See Klebaner (2005), page 94.
because, for any σ (xs, s),
E_t[∫_t^T σ(x_s, s) dz_s] = 0. (43)
31.3.1 The Normal Case
From this we can obtain some interesting results. Take a process, xt. If its
drifts and volatilities are of the form
σ_t^T ≡ (1/(T − t)) ∫_t^T σ²(s) ds. (48)
As for the (conditional) expectation of the process, E [xT |xt], from the result
above about the expectation of an Ito integral we have:
E[x_T | x_t] = x_t + ∫_t^T µ(s) ds. (49)
t
31.3.2 The Log-Normal Case
Note carefully: we started with the expectation of the square of the integral of a
stochastic quantity (the value of the path-dependent integral, ∫_t^T σ(x_s, s) dz_s)
and we ended with the integral of the expectation of the square of the 'volatility'.
If the volatility does not depend on the stochastic variable, σ (xs, s) = σ (s),
this gives
E_t[(∫_t^T σ(s) dz_s)²] = ∫_t^T σ(s)² ds (57)
which says something even more amazing, namely that the time-t expectation
of [∫_t^T σ(s) dz_s]² is a deterministic quantity, and that this quantity is just
equal to the ‘variance delivered’ from time t to time T . This property means
that there are no ‘lucky’ paths, and that, for a Brownian process, along each and
every path, no matter how short or how long, the variance is always known at
the outset. This is the property upon which replication strategies for derivatives
pricing are built, but we will not go into that.
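These two results lend themselves to a simple Monte Carlo check. The sketch below simulates many paths of the Ito integral for a deterministic (and purely illustrative) σ(s), and verifies that the sample mean is close to zero (Equation (43)) and that the sample mean of its square is close to the delivered variance of Equation (57).

    # A Monte Carlo sketch of Equations (43) and (57) for a deterministic (and
    # purely illustrative) sigma(s): the Ito integral has sample mean close to
    # zero, and the mean of its square is close to the 'delivered variance'.
    import numpy as np

    rng = np.random.default_rng(7)
    T, n_steps, n_paths = 1.0, 500, 20_000
    dt = T / n_steps
    s = np.linspace(0.0, T, n_steps, endpoint=False)
    sigma = 0.2 + 0.1 * s                        # a hypothetical deterministic sigma(s)

    dz = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
    ito_integral = (sigma * dz).sum(axis=1)      # ~ integral of sigma(s) dz_s over [t, T]

    print(ito_integral.mean())                   # ~0, as in Equation (43)
    print((ito_integral ** 2).mean())            # ~ integral of sigma(s)^2 ds, Equation (57)
    print((sigma ** 2 * dt).sum())               # the deterministic 'delivered variance'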