Perturbation and Projection Methods for Solving DSGE Models
Methods for Solving DSGE Models
Lawrence J. Christiano
Discussion of projections taken from Christiano‐Fisher, ‘Algorithms for Solving Dynamic Models with Occasionally
Binding Constraints’, 2000, Journal of Economic Dynamics and Control.
Discussion of perturbations primarily taken from Judd’s textbook. Also:
Wouter J. den Haan and Joris de Wind, ‘How well‐behaved are higher‐order perturbation solutions?’ DNB working
paper number 240, December 2009.
For pruning, see Kim, Jinill, Sunghyun Kim, Ernst Schaumburg, and Christopher A. Sims, 2008, “Calculating and
using second‐order accurate solutions of discrete time dynamic equilibrium models,” Journal of Economic
Dynamics and Control, 32(11), 3397–3414. Also, Lombardo, 2011, ‘On approximating DSGE models by series
expansions,’ European Central Bank.
Outline
• A Toy Example to Illustrate the basic ideas.
– Functional form characterization of model solution.
– Projections and Perturbations.
• Neoclassical model.
– Projection methods
– Perturbation methods
• Stochastic Simulations and Impulse Responses
– Focus on perturbation solutions of order two.
– The need for pruning.
Simple Example
• Suppose that x is some exogenous variable
and that the following equation implicitly
defines y:
h(x, y) = 0, for all x ∈ X
• Let the solution be defined by the ‘policy rule’, g:
y = g(x)
‘Error function’
• satisfying
R(x; g) ≡ h(x, g(x)) = 0
• for all x ∈ X
The Need to Approximate
• Finding the policy rule, g, is a big problem
outside special cases
– ‘Infinite number of unknowns (i.e., one value of g
for each possible x) in an infinite number of
equations (i.e., one equation for each possible x).’
• Two approaches:
– projection and perturbation
Projection
• Find a parametric function, ĝ(x; γ), where γ is a vector of parameters chosen so that ĝ imitates the property of the exact solution, R(x; g) = 0 for all x ∈ X, as well as possible.
• Choose values for γ so that R(x; ĝ(·; γ)) is as close to zero as possible, for all x ∈ X.
• The method is defined by the meaning of ‘close to zero’ and by the parametric function, ĝ(x; γ), that is used.
Projection, continued
• Spectral and finite element approximations
– Spectral functions: functions, ĝ(x; γ), in which each parameter in γ influences ĝ(x; γ) for all x ∈ X. Example:
ĝ(x; γ) = Σ_{i=0}^{n−1} γ_i H_i(x),
H_i(x) = x^i ~ ordinary polynomial (not computationally efficient)
H_i(x) = T_i(φ(x)),
T_i(z): [−1, 1] → [−1, 1], i‐th order Chebyshev polynomial
φ: X → [−1, 1]
[Figure: Chebyshev polynomials, with zeros marked (±0.71).]
Projection, continued
– Finite element approximations: functions, ĝ(x; γ), in which each parameter in γ influences ĝ(x; γ) over only a subinterval of x ∈ X.
[Figure: piecewise‐linear (finite element) approximation of ĝ(x; γ) over subintervals of X, with parameters γ_1, …, γ_7.]
Projection, continued
• ‘Close to zero’: two methods
• Collocation: for n values of x: x_1, x_2, …, x_n ∈ X, choose the n elements of γ = (γ_1, …, γ_n) so that
R̂(x_i; γ) ≡ h(x_i, ĝ(x_i; γ)) = 0, i = 1, …, n
– how you choose the grid of x_i’s matters…
• Weighted residual: for m > n values of x: x_1, x_2, …, x_m ∈ X, choose the n γ_i’s so that weighted sums of the residuals are zero:
Σ_{j=1}^m w_{ij} R̂(x_j; γ) = 0, i = 1, …, n
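To make collocation concrete, here is a minimal sketch in Python. The implicit function h(x, y) = y + y³ − x and the interval X = [0, 2] are made up for illustration (they are not from the lecture); the method is the one just described: force the residual to zero at Chebyshev nodes and fit the spectral coefficients γ.

```python
import numpy as np

# Toy functional equation h(x, y) = 0 (illustrative choice; h is monotone in y).
def h(x, y):
    return y + y**3 - x

def h_y(x, y):          # dh/dy, used by Newton's method
    return 1.0 + 3.0 * y**2

n = 8                   # number of collocation nodes = number of parameters
a, b = 0.0, 2.0         # the interval X

# Chebyshev nodes (zeros of T_n on [-1, 1]), mapped into X = [a, b]
z = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
x_nodes = 0.5 * (a + b) + 0.5 * (b - a) * z

# Collocation: at each node solve h(x_i, y) = 0 by Newton's method,
# which pins down ghat(x_i; gamma) exactly at the nodes.
y_nodes = np.zeros(n)
for i, x in enumerate(x_nodes):
    y = 0.5
    for _ in range(50):
        y -= h(x, y) / h_y(x, y)
    y_nodes[i] = y

# Spectral coefficients gamma in ghat(x) = sum_i gamma_i T_i(phi(x))
gamma = np.polynomial.chebyshev.chebfit(z, y_nodes, n - 1)

def ghat(x):
    phi = (2 * x - (a + b)) / (b - a)   # phi: X -> [-1, 1]
    return np.polynomial.chebyshev.chebval(phi, gamma)

# Check the error function R(x; gamma) = h(x, ghat(x)) on a fine grid.
xs = np.linspace(a, b, 201)
max_resid = np.max(np.abs(h(xs, ghat(xs))))
print(max_resid)
```

The last step is the check emphasized later in the slides: evaluate the error function over all of X, not just at the nodes.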
Example of Importance of Grid Points
• Here is an example, taken from a related problem, the problem
of interpolation.
– You get to evaluate a function on a set of grid points that you
select, and you must guess the shape of the function
between the grid points.
• Consider the ‘Runge’ function:
f(k) = 1/(1 + k²), k ∈ [−5, 5]
– The violent tail oscillations described below are called the ‘Runge phenomenon’, discovered by Carl Runge in 1901.
• Next slide shows what happens when you select 11 equally‐
spaced grid points and interpolate by fitting a 10th order
polynomial.
– As you increase the number of grid points on a fixed interval
grid, oscillations in tails grow more and more violent.
• Chebyshev approximation theorem: distribute more points in
the tails (by selecting zeros of Chebyshev polynomial) and get
convergence in sup norm.
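The Runge example is easy to reproduce. A sketch with NumPy, using the same setup as the slides (11 points, 10th order polynomial, interval [−5, 5]):

```python
import numpy as np

f = lambda k: 1.0 / (1.0 + k**2)          # the Runge function
a, b, n = -5.0, 5.0, 11                   # 11 points, 10th order polynomial
fine = np.linspace(a, b, 2001)            # grid for measuring the sup norm

# 1) Interpolate at 11 equally-spaced points.
x_eq = np.linspace(a, b, n)
p_eq = np.polyfit(x_eq, f(x_eq), n - 1)
err_eq = np.max(np.abs(np.polyval(p_eq, fine) - f(fine)))

# 2) Interpolate at the zeros of the Chebyshev polynomial, mapped to [-5, 5].
x_ch = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(
    (2 * np.arange(n) + 1) * np.pi / (2 * n))
p_ch = np.polyfit(x_ch, f(x_ch), n - 1)
err_ch = np.max(np.abs(np.polyval(p_ch, fine) - f(fine)))

print(err_eq, err_ch)   # equally spaced: large tail oscillations; Chebyshev: small
```

With equally-spaced points the sup-norm error is on the order of 2; with Chebyshev nodes it falls to about 0.1, and it shrinks further as n grows.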
How You Select the Grid Points Matters
Figure from Christiano‐Fisher, JEDC, 2000
Perturbation
• Projection uses the ‘global’ behavior of the functional
equation to approximate solution.
– Problem: requires finding zeros of non‐linear equations.
Iterative methods for doing this are a pain.
– Advantage: can easily adapt to situations where the policy rule is not
continuous or is non‐differentiable (e.g., occasionally
binding zero lower bound on interest rate).
• Perturbation method uses Taylor series expansion
(computed using implicit function theorem) to approximate
model solution.
– Advantage: can implement procedure using non‐iterative
methods.
– Possible disadvantages:
• Global properties of Taylor series expansion not necessarily very good.
• Does not work when there are important non‐differentiabilities (e.g.,
occasionally binding zero lower bound on interest rate).
Taylor Series Expansion
• Let f: R → R be k+1 times differentiable on the open interval and continuous on the closed interval between a and x.
– Then,
f(x) = P_k(x) + R_k(x)
– where the Taylor series expansion about x = a is
P_k(x) = f(a) + f′(a)(x − a) + (1/2!) f″(a)(x − a)² + … + (1/k!) f^(k)(a)(x − a)^k
– and the remainder is
R_k(x) = (1/(k+1)!) f^(k+1)(ξ)(x − a)^(k+1), for some ξ between x and a
– Question: is the Taylor series expansion a good
approximation for f?
Taylor Series Expansion
• It’s not as good as you might have thought.
• The next slide exhibits the accuracy of the
Taylor series approximation to the Runge
function.
– In a small neighborhood of the point where the
approximation is computed (i.e., 0), higher order
Taylor series approximations are increasingly
accurate.
– Outside that small neighborhood, the quality of
the approximation deteriorates with higher order
approximations.
Taylor Series Expansions about 0 of Runge Function
[Figure: Taylor series expansions of the Runge function about 0 (a 5th order expansion is shown). Increasing the order of approximation leads to improved accuracy in a neighborhood of zero and reduced accuracy further away.]
Taylor Series Expansion
• Another example: the log function
– It is often used in economics.
– Surprisingly, its Taylor series expansion does not provide a great global approximation.
– The expansion about x = a diverges as N → ∞ for x such that |x − a|/a ≥ 1.
Taylor Series Expansion of Log Function
[Figure: Taylor series approximations of log(x) about x = 1, of orders 2, 5, 10, and 25. The approximation deteriorates without bound for x > 2 as the order of approximation increases.]
Taylor Series Expansion
• In general, cannot expect Taylor series
expansion to converge to actual function,
globally.
– There are some exceptions, e.g., the Taylor series expansions of
f(x) = e^x, cos(x), sin(x) about x = 0
converge to f(x) even for x far from 0.
– Problem: in general it is difficult to say for what
values of x the Taylor series expansion gives a
good approximation.
Taylor Versus Weierstrass
• Problems with the Taylor series expansion do not represent a problem with polynomials per se as approximating functions.
• Weierstrass approximation theorem: for every continuous function, f(x), defined on [a,b], and for every ε > 0, there exists a finite‐order polynomial, p(x), on [a,b] such that
|f(x) − p(x)| ≤ ε, for all x ∈ [a, b]
• Weierstrass: polynomials may approximate well, even if sometimes the Taylor series expansion is not very good.
– Our analysis of the Runge function illustrates the point: The
Taylor series expansion of the Runge function diverges over a
large portion of the domain, while the sequence of polynomials
associated with the zeros of the Chebyshev polynomials
converges to the Runge function in the sup norm.
• Ok, we’re done with the digression on the
Taylor series expansion.
• Now, back to the discussion of the
perturbation method.
– It approximates a solution using the Taylor series
expansion.
Perturbation Method
• Suppose there is a point, x* ∈ X, where we know the value taken on by the function, g, that we wish to approximate:
g(x*) = g*, some x*
• Use the implicit function theorem to approximate g in a neighborhood of x*.
• Note:
R(x; g) = 0 for all x ∈ X
→
R^(j)(x; g) ≡ (d^j/dx^j) R(x; g) = 0 for all j, all x ∈ X.
Perturbation, cnt’d
• Differentiate R with respect to x and evaluate the result at x = x*:
R′(x*) = (d/dx) h(x, g(x))|_{x=x*} = h_1(x*, g*) + h_2(x*, g*) g′(x*) = 0
→ g′(x*) = − h_1(x*, g*)/h_2(x*, g*)
• Do it again!
R″(x*) = (d²/dx²) h(x, g(x))|_{x=x*} = h_11(x*, g*) + 2h_12(x*, g*) g′(x*) + h_22(x*, g*)[g′(x*)]² + h_2(x*, g*) g″(x*) = 0,
which can be solved for g″(x*), and so on for higher‐order terms.
• Check the error function over x ∈ X to verify that it is everywhere close to zero (or, at least, in the region of interest).
Example of Implicit Function Theorem
h(x, y) = ½x² + ½y² − 8 = 0
[Figure: the solution set is a circle of radius 4 in the (x, y) plane, shown with the tangent‐line approximation at (x*, g*):]
g(x) ≃ g* − (x*/g*)(x − x*)
g′(x*) = − h_1(x*, g*)/h_2(x*, g*) = − x*/g*
(h_2 had better not be zero!)
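A quick numerical check of the implicit‐function‐theorem slope for a circle example of this kind, h(x, y) = ½x² + ½y² − 8 = 0, taking the upper branch g(x) = √(16 − x²) and the point x* = 2 (both choices are made here for illustration):

```python
import math

def h1(x, y):   # dh/dx for h(x, y) = 0.5*x**2 + 0.5*y**2 - 8
    return x

def h2(x, y):   # dh/dy
    return y

def g(x):       # upper branch of the solution set: y = sqrt(16 - x^2)
    return math.sqrt(16.0 - x**2)

xstar = 2.0
gstar = g(xstar)

# Implicit function theorem: g'(x*) = -h1/h2 = -x*/g*
slope_ift = -h1(xstar, gstar) / h2(xstar, gstar)

# Finite-difference check of the same slope
eps = 1e-6
slope_fd = (g(xstar + eps) - g(xstar - eps)) / (2 * eps)

print(slope_ift, slope_fd)   # both approximately -0.5774
```

At the points where h_2 = 0 (here y = 0, the left and right edges of the circle) the formula, and the finite difference, both blow up, which is exactly the warning on the slide.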
Neoclassical Growth Model
• Objective:
E_0 Σ_{t=0}^∞ β^t u(c_t), u(c_t) = (c_t^{1−γ} − 1)/(1 − γ)
• Constraints:
c_t + exp(k_{t+1}) ≤ f(k_t, a_t), t = 0, 1, 2, …
a_t = ρ a_{t−1} + σ ε_t, ε_t ~ E(ε_t) = 0, E(ε_t²) = V
k_t, a_t ~ given numbers
• Here, ε_{t+1} ~ random variable; time‐t choice variable: k_{t+1}
• Parameter, σ, indexes a set of models, with the model of interest corresponding to σ = 1.
Solution
• A policy rule,
k_{t+1} = g(k_t, a_t, σ).
• With the property (Euler equation):
E_t{u′(c_t) − β u′(c_{t+1}) f_K(k_{t+1}, a_{t+1})} = 0,
c_t = f(k_t, a_t) − exp(k_{t+1}), k_{t+1} = g(k_t, a_t, σ), a_{t+1} = ρa_t + σε_{t+1}
• for all a_t, k_t, σ.
Projection Methods
• Let
ĝ(k_t, a_t, σ; γ)
– be a function with finite parameters (could be either spectral or finite element, as before).
• Choose parameters, γ, to make
R(k_t, a_t, σ; ĝ)
– as close to zero as possible, over a range of values of the state.
– use weighted residuals or collocation.
Occasionally Binding Constraints
• Suppose we include a non‐negativity constraint on investment.
– Lagrangian approach. Add a multiplier, λ_t ≥ 0, to the set of functions to be computed, and add the (‘complementary slackness’) equation:
λ_t [exp(g(k_t, a_t, σ)) − (1 − δ) exp(k_t)] = 0,
where the bracketed term is investment, which must satisfy exp(g(k_t, a_t, σ)) − (1 − δ) exp(k_t) ≥ 0.
• Conceptually straightforward to apply preceding
solution method. For details, see Christiano‐Fisher,
‘Algorithms for Solving Dynamic Models with
Occasionally Binding Constraints’, 2000, Journal of
Economic Dynamics and Control.
– This paper describes a wide range of strategies, including
those based on parameterizing the expectation function,
that may be easier, when constraints occasionally bind.
Perturbation
• Conventional application of the perturbation approach, as in the toy example, requires knowing the value taken on by the policy rule at a point.
• The overwhelming majority of models used in macro do have this property.
– In these models, can compute the non‐stochastic steady state without any knowledge of the policy rule, g.
– The non‐stochastic steady state is the k* satisfying β f_K(k*, 0) = 1, i.e.,
k* = (1/(1 − α)) log(αβ/(1 − β(1 − δ))).
Perturbation
• Error function:
R(k_t, a_t, σ) ≡ E_t{u′(c_t) − β u′(c_{t+1}) f_K(k_{t+1}, a_{t+1})} = 0
– for all values of k_t, a_t, σ.
• So, all order derivatives of R with respect to its arguments are zero (assuming they exist!).
Four (Easy to Show) Results About Perturbations
• Taylor series expansion of policy rule:
g(k_t, a_t, σ) ≃ k + g_k(k_t − k) + g_a a_t + g_σ σ   [linear component of policy rule]
+ ½[g_kk(k_t − k)² + g_aa a_t² + g_σσ σ²] + g_ka(k_t − k)a_t + g_kσ(k_t − k)σ + g_aσ a_t σ + …
– g_σ = 0: to a first order approximation, ‘certainty equivalence’
– All terms found by solving linear equations, except the coefficient on the past endogenous variable, g_k, which requires solving for eigenvalues
– To a second order approximation, the slope terms are certainty equivalent:
g_kσ = g_aσ = 0
– Quadratic, higher order terms computed recursively.
First Order Perturbation
• Working out the following derivatives and evaluating at k_t = k*, a_t = 0, σ = 0:
R_k(k_t, a_t, σ; g) = R_a(k_t, a_t, σ; g) = R_σ(k_t, a_t, σ; g) = 0
• Implies:
R_k = u″(f_k − e^g g_k) − β u′ f_Kk g_k − β u″(f_k g_k − e^g g_k²) f_K = 0   (the g_k² is the ‘problematic term’)
R_a = u″(f_a − e^g g_a) − β u′(f_Kk g_a + f_Ka ρ) − β u″[f_k g_a + f_a ρ − e^g(g_k g_a + g_a ρ)] f_K = 0
R_σ = {−u″ e^g + β u″(f_k − e^g g_k) f_K + β u′ f_Kk} g_σ = 0   (source of certainty equivalence, g_σ = 0, in the linear approximation)
The absence of arguments in these functions reflects that they are evaluated at k_t = k*, a_t = 0.
Technical notes for next slide
• Start from R_k = 0:
u″(f_k − e^g g_k) − β u′ f_Kk g_k − β u″(f_k g_k − e^g g_k²) f_K = 0
• Divide by u″:
(f_k − e^g g_k) − β (u′/u″) f_Kk g_k − β (f_k g_k − e^g g_k²) f_K = 0
• Divide by e^g, and use f_k = e^g f_K and β f_K = 1:
1/β − [1 + 1/β + (u′ f_Kk)/(u″ e^g f_K)] g_k + g_k² = 0
• Simplify this further using:
β f_K = 1 ~ steady state equation
f_K = α K^(α−1) exp(a) + 1 − δ, K ≡ exp(k)
  = α exp((α − 1)k + a) + 1 − δ
f_k = α exp(αk + a) + (1 − δ) exp(k) = exp(k) f_K = e^g f_K (since k = g in steady state)
f_Kk = α(α − 1) exp((α − 1)k + a)
f_KK = α(α − 1) K^(α−2) exp(a) = α(α − 1) exp((α − 2)k + a) = f_Kk e^(−g)
• to obtain the polynomial on the next slide.
First Order, cont’d
• Rewriting the R_k = 0 condition (using f_KK = f_Kk e^(−g)):
g_k² − [1 + 1/β + (u′ f_KK)/(u″ f_K)] g_k + 1/β = 0
• There are two solutions, 0 < g_k < 1 and g_k > 1.
– Theory (see Stokey‐Lucas) tells us to pick the smaller one.
– In general, could be more than one eigenvalue less than unity: multiple solutions.
• Conditional on the solution for g_k, g_a is solved for linearly using the R_a = 0 equation.
• These results all generalize to the multidimensional case.
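The whole first order computation fits in a few lines. A sketch in Python, implementing the steady‐state formula and the g_k quadratic from the preceding slides; the parameter values (α = 0.36, β = 0.99, δ = 0.02, γ = 2) are assumptions made here for illustration, not taken from the lecture:

```python
import math

# Illustrative parameter values (assumed, not from the lecture)
alpha, beta, delta, gamma = 0.36, 0.99, 0.02, 2.0

# Non-stochastic steady state: beta*f_K = 1 implies
#   k* = log(alpha*beta / (1 - beta*(1 - delta))) / (1 - alpha)
kstar = math.log(alpha * beta / (1.0 - beta * (1.0 - delta))) / (1.0 - alpha)
K = math.exp(kstar)

# Steady-state objects, with f(k, a) = exp(alpha*k + a) + (1 - delta)*exp(k):
c = math.exp(alpha * kstar) - delta * K          # consumption: f - exp(k')
fK = alpha * K**(alpha - 1.0) + 1.0 - delta      # equals 1/beta at steady state
fKK = alpha * (alpha - 1.0) * K**(alpha - 2.0)

# u(c) = (c^(1-gamma) - 1)/(1-gamma), so u'/u'' = -c/gamma
u1_over_u2 = -c / gamma

# Quadratic for g_k:  g_k^2 - (1 + 1/beta + u'*f_KK/(u''*f_K)) g_k + 1/beta = 0
b = 1.0 + 1.0 / beta + u1_over_u2 * fKK / fK
disc = math.sqrt(b * b - 4.0 / beta)
roots = ((b - disc) / 2.0, (b + disc) / 2.0)

# Theory says: pick the root inside the unit circle.
gk = min(roots)
print(kstar, roots, gk)
```

Note that the product of the two roots is 1/β > 1, so one root is always inside the unit circle and the other outside, matching the 0 < g_k < 1 and g_k > 1 statement above.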
Numerical Example
• Parameters taken from Prescott (1986).
• Second order perturbation:
ĝ(k_t, a_t, σ) = k* + g_k(k_t − k*) + g_a a_t + g_σ σ
+ ½[g_kk(k_t − k*)² + g_aa a_t² + g_σσ σ²] + g_ka(k_t − k*)a_t + g_kσ(k_t − k*)σ + g_aσ a_t σ
• Numeric coefficient values, in the order displayed: 3.88, 0.98, 0.996, 0.06, 0.07, 0; 0.014, 0.00017, 0.067, 0.079, 0.000024, 0.00068, 1; −0.035, −0.028, 0, 0.
Comparing 1st and 2nd Order
Approximation
• In practice, 1st order approximation is often used.
Want to know if results change much, going from
1st to 2nd order.
– They appear to change very little in our example.
• Following is a graph that compares the policy
rules implied by the first and second order
perturbation.
• The graph itself corresponds to the baseline
parameterization, and results are reported in
parentheses for risk aversion equal to 20.
‘If initial capital is 20 percent away from steady state, then capital
choice differs by 0.03 (0.035) percent between the two approximations.’
‘If shock is 6 standard deviations away from its mean, then capital
choice differs by 0.14 (0.18) percent between the two approximations’
[Figure: two panels plotting 100·(k_{t+1}(2nd order) − k_{t+1}(1st order)). Left panel: against 100·(k_t − k*), percent deviation of initial capital from steady state. Right panel: against 100·a_t, percent deviation of initial shock from steady state. Curves shown for γ = 2 and γ = 20; numbers in parentheses at top correspond to γ = 20.]
Outline Done!
• A Toy Example to Illustrate the basic ideas.
– Functional form characterization of model solution.
– Projections and Perturbations.
• Neoclassical model.
– Projection methods Done!
– Perturbation methods
Next
• Stochastic Simulations and Impulse Responses
– Focus on perturbation solutions of order two.
– The need for pruning.
Next
• Stochastic simulations
– Mapping from shocks and initial conditions to data.
– 2nd order approximation and simulation.
– Pruning: a way to do 2nd order approximation.
– Naïve simulation versus pruning.
– Spurious steady state.
– Extended example of simulation and pruning.
• Impulse response functions
– Conditional impulse response function (IRF).
– Unconditional IRF.
Simulations
• Artificial data simulated from a model can be
used to compute second moments and other
model statistics.
– These can be compared with analog statistics
computed in the data to evaluate model fit.
• Simulated model data can be compared with
actual data, to evaluate model fit.
• The computation of impulse responses, when the
solution is nonlinear, requires simulations.
Shock and initial conditions → simulated data
• Mapping from (k_t, a_t, σ, ε_{t+1}) to k_{t+2}.
• Mapping from (k_t, a_t, σ, ε_{t+1}, ε_{t+2}) to k_{t+3}.
• Similarly, obtain the mapping from (k_t, a_t, σ, ε_{t+1}, …, ε_{t+j}) to k_{t+j+1}, for j = 1, 2, … .
2nd order Taylor expansion and simulation
• We do not have the exact policy rule, g, and so must do some sort of approximation in computing the simulations.
• Consider the 2nd order expansion of the mapping from (k_t, a_t, σ, ε_{t+1}) to k_{t+2}:
k_{t+2} = g(g(k_t, a_t, σ), ρa_t + σε_{t+1}, σ)
– After some (painful!) algebra (y_t ≡ k_t − k*):
y_{t+2} = g_k² y_t + (g_k g_a + g_a ρ) a_t + g_a σ ε_{t+1}
+ ½[(g_kk g_k² + g_k g_kk) y_t² + (g_kk g_a² + 2g_ka g_a ρ + g_k g_aa + g_aa ρ²) a_t² + g_aa σ² ε²_{t+1} + g_σσ(g_k + 1)σ²]
+ g_k(g_kk g_a + g_ka(ρ + 1)) y_t a_t + g_ka g_k σ y_t ε_{t+1} + (g_ka g_a + g_aa ρ)σ a_t ε_{t+1}
Pruning: a way to do 2nd order approximation
• The second order approximation of the mapping from (k_t, a_t, σ, ε_{t+1}, ε_{t+2}) to k_{t+3} is even more algebra‐intensive. The mappings to k_{t+4}, k_{t+5}, … are worse still.
• Turns out there is a simple way to compute these mappings, called pruning.¹
– First, draw ε_{t+1}, ε_{t+2}, ε_{t+3}, …
– Then, solve the linear system:
ỹ_{t+j+1} = g_k ỹ_{t+j} + g_a a_{t+j}, a_{t+j} = ρ a_{t+j−1} + σ ε_{t+j}, j = 1, 2, 3, …
– Finally,
y_{t+j+1} = g_k y_{t+j} + g_a a_{t+j} + ½[g_kk ỹ²_{t+j} + g_aa a²_{t+j} + g_σσ σ²] + g_ka ỹ_{t+j} a_{t+j}
¹ Kim, Kim, Schaumburg and Sims, 2008.
Pruning: a way to do 2nd order approximation
• Brute force substitution verifies that pruning delivers the second order approximation on the previous slide.
Technical note to previous slide
• Taylor series expansion of k_{t+2} = g(g(k_t, a_t, σ), ρa_t + σε_{t+1}, σ) about k_t = k*, a_t = ε_{t+1} = σ = 0.
– First order terms:
k_t: g_k · g_k = g_k²
a_t: g_k g_a + g_a ρ
ε_{t+1}: g_a σ
σ: g_k g_σ + g_σ = 0
– Second order terms:
k_t²: g_kk g_k² + g_k g_kk
a_t²: g_kk g_a² + 2g_ka g_a ρ + g_k g_aa + g_aa ρ²
ε²_{t+1}: g_aa σ²
σ²: g_σσ(g_k + 1) (the terms involving g_σ and g_kσ vanish)
k_t a_t: g_kk g_a g_k + g_ka ρ g_k + g_k g_ka
k_t ε_{t+1}: g_ka g_k σ
k_t σ: g_kk g_σ g_k + g_kσ g_k + g_k g_kσ = 0
a_t ε_{t+1}: (g_ka g_a + g_aa ρ)σ
a_t σ: 0 (every term involves g_σ, g_kσ or g_aσ)
ε_{t+1} σ: g_ka g_σ σ + g_aσ σ + g_a = g_a
Pruning: a way to do 2nd order approximation
• Note that if y_{t+j} (rather than ỹ_{t+j}) were used in the quadratic terms, then powers higher than two would appear, and the result would not be a second order expression.
Pruning: a way to do 2nd order approximation
• By using ỹ_t instead of y_t to remove higher order terms, you are in effect doing what a gardener does who removes (prunes) diseased or unwanted parts of plants.
Spurious Steady State
• The naive (unpruned) second order approximation has a second, spurious, steady state in y = k − k* (computed with g_σσ ignored).
• This corresponds to the following value of the capital stock:
y = k − k* = log(K/K*) → K = exp(y + k*) ≈ exp(2.86 + 3.9) ≈ 790.3 (after rounding)
• This is very large, probably not worth worrying about in the neoclassical model, but the analog object in medium‐sized New Keynesian models is closer to the actual steady state.
The Spurious Steady State in 2nd Order Approximation of Neoclassical Model
• Because of the scale problem, it is hard to see the policy rule when graphed in the ‘natural way’, k_{t+1} against k_t.
• Instead, will graph the rule in deviations from steady state.
[Figure: the second order policy rule in deviations, for a_t = 0 and a_t = −0.01, over k_t − k* from 0 to 3.5; the impact of a_t is reversed at high k.]
Stylized Representation of 2nd Order Approximation of Policy Rule, a = 0
[Figure: stylized plot of K_{t+1} against K_t, with the 45‐degree line.]
• The shape of the policy rule to the left of the first positive steady state corresponds to what we know qualitatively.
• In the neoclassical model, the second steady state seems far enough to the right that (perhaps) we don’t have to worry about it.
• This (spurious) steady state marks a transition to unstable dynamics. Simulations would explode if the capital stock got large enough. Convexity implies you explode really fast if you get into this region.
Next
Done
• Stochastic simulations
– Mapping from shocks and initial conditions to data.
– 2nd order approximation and simulation.
– Pruning: a way to do 2nd order approximation.
– Naïve simulation versus pruning.
– Spurious steady state.
– Extended example of simulation and pruning.
Next
• Impulse response functions
– Conditional impulse response function (IRF).
– Unconditional IRF.
Extended Example of Simulation and Pruning
• Simplified version of the 2nd order approximation to the neoclassical model solution:
y_t = ρy_{t−1} + αy²_{t−1} + ε_t, ρ = 0.8, α = 0.5, E(ε_t²) = σ², σ = 0.10
[Figure: y_{t+1} plotted against y_t, together with the 45 degree line, for ε_t = 2σ, 0, −2σ. After a few big positive shocks, this process will explode. The explosion is big because of convexity.]
Simulations, std dev = σ/10
[Figure: simulated y_{t+1} against y_t, range roughly ±0.04.]
solid line: y_t = ρy_{t−1} + αy²_{t−1} + ε_t
*‐line: y_t = ρy_{t−1} + ε_t
The two lines are virtually the same for small shocks.
Simulations, std dev = σ
[Figure: simulated y_t over 500 periods.]
*‐line: y_t = ρy_{t−1} + ε_t
By period 71, the quadratic simulation explodes to MATLAB’s Inf and remains there.
The two lines are wildly different for large shocks… naive simulations explode!
Pruning
• Procedure for simulating artificial data, using the second order Taylor series expansion of simulated data as a function of the shocks and initial conditions.¹
• First, draw a sequence, ε_1, ε_2, …, ε_T.
• Next, solve for ỹ_1, ỹ_2, …, ỹ_T in the linear component of the process:
ỹ_t = ρỹ_{t−1} + ε_t
• The (‘pruned’) solution to the 2nd order difference equation is y_1, y_2, …, y_T in
y_t = ρy_{t−1} + αỹ²_{t−1} + ε_t
• Note that the y_t’s cannot explode.
¹ For pruning with higher order perturbations, see Lombardo (2011).
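The naive‐versus‐pruned comparison is a few lines of code. A sketch for the scalar example, with the parameter values 0.8, 0.5, and σ = 0.10 (the shock draw itself is of course not the one used for the figures):

```python
import random

random.seed(0)
rho, alph, sigma, T = 0.8, 0.5, 0.10, 500
eps = [random.gauss(0.0, sigma) for _ in range(T)]

# Naive simulation of the 2nd order difference equation:
#   y_t = rho*y_{t-1} + alph*y_{t-1}^2 + eps_t
y_naive = [0.0]
for e in eps:
    y = y_naive[-1]
    y_naive.append(rho * y + alph * y * y + e)

# Pruned simulation: the quadratic term uses the LINEAR solution ytil,
#   ytil_t = rho*ytil_{t-1} + eps_t
#   y_t    = rho*y_{t-1} + alph*ytil_{t-1}^2 + eps_t
ytil, y_pruned = [0.0], [0.0]
for e in eps:
    y_pruned.append(rho * y_pruned[-1] + alph * ytil[-1] ** 2 + e)
    ytil.append(rho * ytil[-1] + e)

# The pruned path is a sum of stationary pieces, so it stays bounded;
# the naive path can explode once y_t drifts past the unstable fixed point.
print(max(abs(v) for v in y_pruned))
```

Whether the naive path actually explodes depends on the particular shock draw; the pruned path is bounded for every draw, which is the point of the procedure.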
Simulations, std dev = σ/10
[Figure: pruned simulation; y_{t+1} against y_t, range roughly ±0.04.]
Simulations, std dev = σ
[Figure: pruned simulation of y_t over 500 periods, range roughly ±0.5.]
*‐line: ỹ_t = ρỹ_{t−1} + ε_t. The two lines now differ a lot.
Next
done
• Stochastic simulations
– Mapping from shocks and initial conditions to data.
– 2nd order approximation and pruning.
– Naïve simulation of 2nd order approximation, versus
pruning.
– Spurious steady state.
– Extended example. done
• Impulse response functions
– Conditional impulse response function (IRF).
– Unconditional IRF.
Impulse Response Function
• (Conditional) impulse response function:
– Impact of a shock on the expectation of future variables:
E[y_{t+j} | Ω_{t−1}, shock_t ≠ 0] − E[y_{t+j} | Ω_{t−1}, shock_t = 0], j = 0, 1, 2, …
– Impulse responses are useful for building intuition about the economic properties of a model.
– When you condition on initial information, Ω_{t−1}, you can compare responses in recessions versus booms.
– Can also be used for model estimation, if you have the empirical analogs from VAR analysis.
Impulse Response Function, cnt’d
• Example:
y_t = ρy_{t−1} + ε_t, y_{t−1} ∈ Ω_{t−1}
• Obviously:
E[y_t | Ω_{t−1}, ε_t ≠ 0] − E[y_t | Ω_{t−1}, ε_t = 0] = ε_t
• Also:
y_{t+1} = ρy_t + ε_{t+1} = ρ²y_{t−1} + ρε_t + ε_{t+1}
• So that:
E[y_{t+1} | Ω_{t−1}, ε_t] = ρ²y_{t−1} + ρε_t, E[y_{t+1} | Ω_{t−1}, ε_t = 0] = ρ²y_{t−1}
→ E[y_{t+1} | Ω_{t−1}, ε_t] − E[y_{t+1} | Ω_{t−1}, ε_t = 0] = ρε_t
• In general:
E[y_{t+j} | Ω_{t−1}, ε_t ≠ 0] − E[y_{t+j} | Ω_{t−1}, ε_t = 0] = ρ^j ε_t
Impulse Responses, cnt’d
• Easy in the linear system!
– Impulse responses are not even a function of Ω_{t−1}.
• Different story in our 2nd order approximation, especially because of the spurious steady state.
• Example (same form as our 2nd order approximation):
y_t = ρy_{t−1} + αy²_{t−1} + ε_t
y_{t+1} = ρ(ρy_{t−1} + αy²_{t−1} + ε_t) + α(ρy_{t−1} + αy²_{t−1} + ε_t)² + ε_{t+1}
– So,
E[y_{t+1} | Ω_{t−1}, ε_t] − E[y_{t+1} | Ω_{t−1}, ε_t = 0] = ρε_t + αε_t² + 2α(ρy_{t−1} + αy²_{t−1})ε_t
– Ouch! Much more complicated… it is a function of elements of Ω_{t−1}.
IRF’s, cnt’d
• Too hard to compute IRF’s by analytic formulas, when equations are not linear:
y_t = ρy_{t−1} + αy²_{t−1} + ε_t
• What we need:
– Fix a value for Ω_{t−1} (in our example, y_{t−1}, which determines ρy_{t−1} + αy²_{t−1}).
– Compute:
E[y_{t+j} | Ω_{t−1}, ε_t], j = 1, 2, 3, …, T
– Subtract:
E[y_{t+j} | Ω_{t−1}, ε_t] − E[y_{t+j} | Ω_{t−1}, ε_t = 0], j = 1, 2, 3, …, T
IRF’s, cnt’d
• Computational strategy:
– From a random number generator, draw:
ε_{t+1}^(1), ε_{t+2}^(1), …, ε_{t+T}^(1)
– Using the stochastic equation, y_t = ρy_{t−1} + αy²_{t−1} + ε_t, and the given Ω_{t−1}, compute (by pruning):
y_{t+1}^(1), y_{t+2}^(1), …, y_{t+T}^(1)
– Repeat this, over and over again, R (big) times, to obtain
y_{t+1}^(1), …, y_{t+T}^(1); …; y_{t+1}^(R), …, y_{t+T}^(R)
– Finally,
E[y_{t+j} | Ω_{t−1}, ε_t] ≈ (1/R) Σ_{l=1}^R y_{t+j}^(l), j = 1, 2, …, T
IRF’s, cnt’d
• To get E[y_{t+j} | Ω_{t−1}, ε_t = 0], j = 1, 2, 3, …, T, just repeat the preceding calculations, except set ε_t = 0.
• To do the previous calculations, need R and T.
– Dynare will do these calculations.
– In the stoch_simul command,
• R is set by including the argument replic=R.
• T is set by including irf=T.
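The strategy above is a dozen lines of code. A sketch for the scalar example, using pruning; the conditioning state y_{t−1} = 0.1 and the date‐t shock ε_t = 0.1 are values assumed here for illustration, and the same future shock draws are reused for the ε_t ≠ 0 and ε_t = 0 branches (common random numbers):

```python
import random

random.seed(1)
rho, alph, sigma = 0.8, 0.5, 0.10
R, T = 2000, 10
y0, shock = 0.1, 0.1     # conditioning state y_{t-1} and the date-t shock

def simulate(eps_t, future):
    """Pruned path y_t, ..., y_{t+T} given the date-t shock and future shocks."""
    ytil, y = y0, y0                       # initialize both systems at y_{t-1}
    path = []
    for e in [eps_t] + future:
        y = rho * y + alph * ytil**2 + e   # pruned 2nd order equation
        ytil = rho * ytil + e              # linear system
        path.append(y)
    return path

irf = [0.0] * (T + 1)
for _ in range(R):
    future = [random.gauss(0.0, sigma) for _ in range(T)]
    hit = simulate(shock, future)    # contributes to E[y_{t+j} | Omega, eps_t]
    base = simulate(0.0, future)     # contributes to E[y_{t+j} | Omega, eps_t = 0]
    for j in range(T + 1):
        irf[j] += (hit[j] - base[j]) / R

print(irf[0], irf[1])
```

With common random numbers the future shocks cancel path by path, so the Monte Carlo average converges very quickly; on impact the IRF equals the shock itself, and at horizon one it equals ρε_t + α(ỹ²_shocked − ỹ²_base), which depends on y_{t−1}, exactly as derived above.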
Next
done
• Stochastic simulations
– Mapping from shocks and initial conditions to data.
– 2nd order approximation and pruning.
– Naïve simulation of 2nd order approximation, versus
pruning.
– Spurious steady state.
– Extended example. done
• Impulse response functions
– Conditional impulse response function (IRF). done
– Unconditional IRF.
A Different Type of Impulse Response Function
• The previous concept of an impulse response function required specifying the information set, Ω_{t−1}.
– How to specify this is not often discussed… in part because with linear solutions it is irrelevant.
– With nonlinear solutions, Ω_{t−1} makes a difference.
– How to choose Ω_{t−1}?
– One possibility: nonstochastic steady state.
– Another possibility: stochastic mean.
Unconditional Impulse Response Functions
• Note that
E[y_{t+j} | Ω_{t−1}, ε_t] − E[y_{t+j} | Ω_{t−1}, ε_t = 0], j = 1, 2, 3, …, T
– is a function of Ω_{t−1}.
– Evaluate the IRF at the mean of Ω_{t−1} as follows.
• Suppose there is a date 0, a date t and a date T, where T > t and t is itself large.
• Draw R sets of shocks (no need to draw ε_t).
• Compute two sets of simulations (by pruning), l = 1, …, R:
y_0^(l), y_1^(l), …, y_{t−1}^(l), y_t^(l), y_{t+1}^(l), y_{t+2}^(l), …, y_{t+T}^(l)
one with ε_t equal to the shock of interest, the other with ε_t = 0.
• The period t+j IRF is computed by averaging across l = 1, …, R, for given t+j, j = 0, 1, …, T. Then, subtract, as before.
• In Dynare, t is set with the drop=t parameter in the stoch_simul command.
Conclusion
• For modest US‐sized fluctuations and for aggregate quantities, it may be
reasonable to work with first order perturbations.
– This assumption deserves much further testing.
– Can do this by studying the error function.
– Also, try fancier approximations and see if it changes your results.
• One alternative to first order perturbations is higher order perturbations.
– These must be handled with care, as they are characterized by spurious steady
states, which may be the transition point to unstable dynamics.
– Must do some sort of pruning to compute IRF’s, or just to simulate data.
• An alternative is to apply projection methods.
– Perhaps these have fewer problems with spurious steady states.
– Computation of solutions is more cumbersome in this case.
• First order perturbation: linearize (or, log‐linearize) equilibrium conditions
around non‐stochastic steady state and solve the resulting system.
– This approach assumes ‘certainty equivalence’. Ok, as a first order
approximation.
Solution by Linearization
• (log) Linearized equilibrium conditions:
E_t[α_0 z_{t+1} + α_1 z_t + α_2 z_{t−1} + β_0 s_{t+1} + β_1 s_t] = 0
– z_t: list of endogenous variables determined at t.
• Posit a linear solution:
z_t = A z_{t−1} + B s_t, s_t − P s_{t−1} − ε_t = 0   (s_t: exogenous shocks)
• To satisfy the equilibrium conditions, A and B must satisfy:
α_0 A² + α_1 A + α_2 I = 0, F ≡ (β_0 + α_0 B)P + β_1 + (α_0 A + α_1)B = 0
• If there is exactly one A with eigenvalues less than unity in absolute value, that’s the solution. Otherwise, multiple solutions.
• Conditional on A, solve the linear system for B.
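In the scalar case the two conditions can be solved directly. A sketch (the numeric values of α_0, α_1, α_2, β_0, β_1, P are made up for illustration):

```python
import math

# Scalar version of E_t[a0*z_{t+1} + a1*z_t + a2*z_{t-1} + b0*s_{t+1} + b1*s_t] = 0
# with s_t = P*s_{t-1} + eps_t.  Illustrative coefficients:
a0, a1, a2 = 1.0, -2.5, 1.0
b0, b1, P = 0.0, 1.0, 0.9

# Step 1: A solves a0*A^2 + a1*A + a2 = 0; keep the root with |A| < 1.
disc = math.sqrt(a1 * a1 - 4.0 * a0 * a2)
roots = ((-a1 - disc) / (2.0 * a0), (-a1 + disc) / (2.0 * a0))
A = [r for r in roots if abs(r) < 1.0][0]

# Step 2: conditional on A, B solves (b0 + a0*B)*P + b1 + (a0*A + a1)*B = 0,
# which is linear in B.
B = -(b0 * P + b1) / (a0 * P + a0 * A + a1)

# Verify: substituting z_t = A*z_{t-1} + B*s_t into the equilibrium condition,
# the coefficients on z_{t-1} and on s_t must both vanish.
coef_z = a0 * A * A + a1 * A + a2
coef_s = a0 * A * B + a0 * B * P + a1 * B + b0 * P + b1
print(A, B, coef_z, coef_s)
```

In the matrix case, step 1 is the eigenvalue problem mentioned on the slide (solved, e.g., by a Schur or QZ decomposition) and step 2 is a linear system in the elements of B; the logic is unchanged.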