
Interest Rate Volatility and No-Arbitrage Term Structure Models

Scott Joslin (University of Southern California, Marshall)
Anh Le (University of North Carolina, Kenan-Flagler)

SoFiE 5th Annual Conference, June 20, 2012
Overview

Basic Question
What is the role of no-arbitrage restrictions in the estimation of affine term structure (TS) models with stochastic volatility (SV)?

Motivation: no-arbitrage has little impact on the estimation of affine Gaussian TS models:
  Duffee (2011),
  Joslin, Singleton, and Zhu (2011),
  Joslin, Le, and Singleton (2012).
We find just the opposite result for TS models with SV.
The main tool for reaching this conclusion is that volatility must be a positive process:
  it must always stay positive under the historical measure (P),
  it must always stay positive under the risk-neutral measure (Q).
Assumptions

One stochastic volatility factor (A_1(N) TS):
  it is possible to extend our analysis to allow for multiple volatility factors.
The volatility factor is spanned:
  all we need is that interest rate volatility is not completely unspanned.
Basic Argument

Key building blocks of affine TS models:

X_{t+1} = K_0^P + K_1^P X_t + ε_{t+1}^P,
X_{t+1} = K_0^Q + K_1^Q X_t + ε_{t+1}^Q,
r_t = ρ_0 + ρ_1' X_t,
Y_t = A + B X_t,

where X_t = (V_t, G_t')'.

Standard rotations allow us to rotate X_t into yield portfolios P_t (think level, slope, curvature).
Basic Argument

Therefore:

P_{t+1} = K_0^P + K_1^P P_t + ε_{t+1}^P,    (1)
P_{t+1} = K_0^Q + K_1^Q P_t + ε_{t+1}^Q,    (2)
Y_t = A + B P_t,    (3)
V_t = α + β' P_t.    (4)

We can scale α and β such that:

E_t[(ε_{1,t+1}^P)^2] = α + β' P_t.    (5)
Basic Argument

This suggests a natural way to estimate V_t through two time-series regressions:

P_{t+1} = K_0^P + K_1^P P_t + ε_{t+1}^P,
E_t[(ε_{1,t+1}^P)^2] = α + β' P_t.

Issue: not all βs guarantee that V_t is strictly positive.
We need parameter constraints to ensure V_t > 0.
These are what Dai and Singleton call admissibility conditions.
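The two regressions above can be sketched in a few lines of numpy. This is only an illustration of the unconstrained estimator (function name and OLS mechanics are ours; the slides use MLE): nothing below enforces the admissibility conditions, so the fitted V_t can go negative.

```python
import numpy as np

def estimate_volatility_factor(P):
    """Sketch of the two time-series regressions: (i) a VAR(1) for the
    yield portfolios P_t, (ii) a regression of the squared first-portfolio
    innovations on P_t to recover the affine volatility proxy
    V_t = alpha + beta' P_t. Unconstrained OLS: nothing forces V_t > 0."""
    X = np.hstack([np.ones((len(P) - 1, 1)), P[:-1]])   # regressors [1, P_t]
    Y = P[1:]                                           # P_{t+1}
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    K1P = coef[1:].T                                    # feedback matrix K_1^P
    eps = Y - X @ coef                                  # VAR innovations
    ab, *_ = np.linalg.lstsq(X, eps[:, 0] ** 2, rcond=None)
    alpha, beta = ab[0], ab[1:]
    V = alpha + P @ beta                                # fitted volatility factor
    return K1P, alpha, beta, V
```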
What βs are admissible?

β must guarantee that V_t is autonomous: E_t^P[V_{t+1}] must depend on P_t only through V_t, i.e.

E_t^P[V_{t+1}] = c_0 + c_1 V_t.

The LHS: E_t^P[V_{t+1}] = E_t^P[α + β' P_{t+1}] = β' K_1^P P_t + constant.
The RHS: c_0 + c_1 V_t = c_1 β' P_t + constant.
Matching terms: (K_1^P)' β = c_1 β, so β must be an eigenvector of (K_1^P)'.
Analogously, and importantly, β must be an eigenvector of (K_1^Q)'.
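The admissibility condition is easy to check numerically. A minimal sketch (function name ours): the candidate βs are exactly the real eigenvectors of the transposed feedback matrix, normalized here so the loadings sum to one.

```python
import numpy as np

def admissible_betas(K1, tol=1e-10):
    """Candidate volatility loadings: V_t = alpha + beta' P_t is autonomous
    under a measure with feedback matrix K1 only if (K1)' beta = c * beta,
    i.e. beta is an eigenvector of K1 transposed. Complex eigenvectors are
    discarded: they cannot define a real volatility process."""
    eigvals, eigvecs = np.linalg.eig(K1.T)
    betas = []
    for i in range(K1.shape[0]):
        v = eigvecs[:, i]
        if np.all(np.abs(np.imag(v)) < tol) and abs(np.real(v).sum()) > tol:
            betas.append(np.real(v) / np.real(v).sum())  # loadings sum to 1
    return betas
```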
Summary

In identifying β, MLE will rely on information from:
1. Time series: E_t[(ε_{1,t+1}^P)^2] = α + β' P_t,
2. P-admissibility: β is an eigenvector of (K_1^P)',
3. No-arb restrictions: β is an eigenvector of (K_1^Q)'.

The final estimate of β represents a trade-off among 1), 2), and 3).
We argue that the cross-sectional information gives very precise estimates of K_1^Q, so 3) will be the most important restriction.
What do we know about K_1^Q?

We know that:

E_t^Q[y_{t+1}] = f_t.

Therefore:

E_t^Q[P_{t+1}] = W E_t^Q[y_{t+1}] = W f_t = P_t^f.

Combined with

E_t^Q[P_{t+1}] = K_0^Q + K_1^Q P_t,

we have

P_t^f = K_0^Q + K_1^Q P_t.
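Given the portfolio weights W, this last relation says K_1^Q is recoverable from a simple regression of forward portfolios on yield portfolios. A numpy sketch (the construction of the forward rates f_t from zero yields is taken as given, and measurement noise and convexity are ignored, as on the slide):

```python
import numpy as np

def estimate_KQ(P, Pf):
    """Regress forward portfolios P^f_t = W f_t on yield portfolios
    P_t = W y_t: P^f_t = K_0^Q + K_1^Q P_t + noise. Rows of P and Pf
    are time periods; returns (K_0^Q, K_1^Q)."""
    X = np.hstack([np.ones((len(P), 1)), P])
    coef, *_ = np.linalg.lstsq(X, Pf, rcond=None)
    return coef[0], coef[1:].T
```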
What do we know about K_1^Q?

Difference of P vs. Q, ignoring measurement noise and convexity:
1. Under P, we observe:
   P_{t+1} (realization) = K_0^P + K_1^P P_t (expectation) + ε_{t+1}^P (innovation).
2. Under Q, we observe:
   P_t^f (forward) = K_0^Q + K_1^Q P_t (expectation).
3. We observe the risk-neutral expectation rather than infer it from a regression!
Putting the pieces together

1. No-arb restrictions require a Q measure and hence K_1^Q.
2. Cross-sectional information very precisely identifies K_1^Q.
3. The volatility factor V_t = α + β' P_t must be autonomous, thus β must be an eigenvector of (K_1^Q)'.
4. As a result, the no-arbitrage restrictions give very strong identifying information about what the volatility factor should look like.
Empirical Results

We estimate 3-factor models using:
  3-month, 6-month, and 1- through 5-year Treasury zeros,
  monthly data from March 1984 to December 2006.
Estimation precision of K_1^P

To estimate K_1^P, we can run the 3-factor VAR(1):

P_{t+1} = constant + K_1^P P_t + ε_{t+1}^P.

Given the standard errors of K_1^P, we can test whether a particular linear combination β' P_t can serve as the volatility factor, i.e. whether β' P_t is autonomous under the P measure.
We roam over all possible βs and report the p-values of all the tests.
The idea: if the standard errors of K_1^P are small, they will reject many βs and accept only a small number of potential βs.
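One way such a test could be implemented, sketched under our own assumptions (three factors, a known covariance for vec(K_1^P), a Wald statistic; the slides do not spell out the test): for a fixed β, the autonomy restriction is linear in K_1^P, it has 2 degrees of freedom in a 3-factor model, and the χ²(2) survival function is exp(−W/2).

```python
import numpy as np

def autonomy_pvalue(beta, K1_hat, cov_vecK1):
    """Wald test of H0: (K1_hat)' beta is proportional to beta, i.e.
    beta' P_t is autonomous. cov_vecK1 is the covariance of vec(K1_hat),
    stacked column by column. With 3 factors there are 2 restrictions;
    the chi-square(2) p-value is exp(-W/2)."""
    beta = np.asarray(beta, float)
    N = len(beta)
    # Orthonormal basis of the complement of beta: the restriction is that
    # the component of (K1)'beta orthogonal to beta is zero.
    Q, _ = np.linalg.qr(beta.reshape(-1, 1), mode='complete')
    M = Q[:, 1:]                                   # N x (N-1)
    g = M.T @ (K1_hat.T @ beta)                    # ~0 under H0
    G = M.T @ np.kron(np.eye(N), beta[None, :])    # Jacobian wrt vec(K1)
    W = g @ np.linalg.solve(G @ cov_vecK1 @ G.T, g)
    return float(np.exp(-W / 2))
```

Roaming over a grid of normalized βs and mapping these p-values reproduces the kind of plot the next slide describes.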
Estimation precision of K_1^P

[Figure: p-values testing whether β' P_t is P-autonomous. Loadings β are normalized so that they sum to 1.]
Estimation precision of K_1^Q

To estimate K_1^Q, we can run the regression:

P_t^f = constant + K_1^Q P_t + noise.

Given the standard errors of K_1^Q, we can test whether a particular linear combination β' P_t can serve as the volatility factor, i.e. whether β' P_t is autonomous under the Q measure.
We roam over all possible βs and report the p-values of all the tests.
The idea: if the standard errors of K_1^Q are small, they will reject many βs and accept only a small number of potential βs.
Estimation precision of K_1^Q

[Figure: p-values testing whether β' P_t is Q-autonomous. Loadings β are normalized so that they sum to 1.]
Different Estimates of K_1^Q

1. Regression:
   P_t^f = constant + K_1^Q P_t + noise
   - free of volatility considerations.
2. Estimate an A_0(3) model and compute the model-implied K_1^Q
   - conditional volatility is constant.
3. Estimate an A_1(3) model and compute the model-implied K_1^Q
   - conditional volatility is time-varying.
Different Estimates of K_1^Q

Regression:
[ 1.0066  0.1230  0.4078 ]
[ 0.0082  0.9659  0.4510 ]
[ 0.0020  0.0008  0.7589 ]

Implied by A_0(3):
[ 1.0084  0.1088  0.4111 ]
[ 0.0105  0.9786  0.4491 ]
[ 0.0042  0.0098  0.7851 ]

Implied by A_1(3):
[ 1.0085  0.1102  0.4084 ]
[ 0.0104  0.9781  0.4479 ]
[ 0.0042  0.0088  0.7870 ]
Different Estimates of K_1^Q

The closeness between the K_1^Q's implied by the A_0(3) and A_1(3) models is striking!
What can we make of it?
For A_0(3), K_1^Q is chosen in a very simple manner:
1. No-arb restrictions: fitting bond prices well!
For A_1(3), recall that there are potential trade-offs among:
1. Time series
2. P-admissibility
3. No-arb restrictions
Time-series information and P-admissibility restrictions must not matter much in the estimation of the A_1(3) model.
The volatility factor V_t is strongly pinned down by the cross-sectional, not the time-series, information.
Comparison with Gaussian models

This is very different from the Gaussian case:
Duffee (2011) and Joslin, Singleton, and Zhu (2011) find that no-arbitrage doesn't help us forecast.
Joslin, Le, and Singleton (2011) find that no-arbitrage tells us nothing about volatility when time-series average pricing errors are zero.
No-arbitrage and the CS regression

We prove that the no-arb restriction on β might explain the failure of the A_M(N) class of models to rationalize the Campbell-Shiller (CS) regressions, as shown by Dai and Singleton (2002).
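For reference, the CS regressions themselves can be sketched as follows (our own illustration on a hypothetical yield panel where column j holds the (j+1)-period zero yield; under the expectations hypothesis each slope coefficient equals 1, whereas the coefficients in the data are negative and decline with maturity):

```python
import numpy as np

def cs_coefficients(yields):
    """Campbell-Shiller regressions: for each maturity n >= 2, regress
    y^{n-1}_{t+1} - y^n_t on the scaled slope (y^n_t - y^1_t)/(n-1).
    yields[t, j] is the (j+1)-period zero-coupon yield at time t."""
    T, J = yields.shape
    phis = {}
    for n in range(2, J + 1):
        lhs = yields[1:, n - 2] - yields[:-1, n - 1]
        rhs = (yields[:-1, n - 1] - yields[:-1, 0]) / (n - 1)
        X = np.column_stack([np.ones(T - 1), rhs])
        phis[n] = np.linalg.lstsq(X, lhs, rcond=None)[0][1]
    return phis
```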
No-arbitrage and the CS regression

[Figure: Campbell-Shiller regression coefficients against years to maturity (1-10) for the data and the A_0(3), A_1(3), A_2(3), F_1(3), and F_2(3) models.]
Conclusions

No-arb restrictions provide very strong identifying information for the volatility factor.
Good news or bad news?
Good news if the TS model is reasonably well-specified.
If the TS model is not well specified, the no-arb restrictions may misguide quite a bit. In this case, the first place to start fixing the model is the Q dynamics.