Question 1

a)
Yes. It is a square matrix describing the probabilities of moving from one state to another in a dynamic system: every entry is non-negative and each row sums to one, so it is a valid transition matrix.

b)

π_1 = π_0 P = [0.1 0.8 0.1] [ 0.95 0.05 0.00 ]
                            [ 0.15 0.75 0.10 ] = [0.225 0.605 0.170]
                            [ 0.10 0.00 0.90 ]

π_1 = [0.225 0.605 0.170]
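This one-step update can be double-checked with a few lines of plain Python (no external libraries):

```python
# One-step distribution update pi_1 = pi_0 * P for the Markov chain in part (b).
pi0 = [0.1, 0.8, 0.1]
P = [
    [0.95, 0.05, 0.00],
    [0.15, 0.75, 0.10],
    [0.10, 0.00, 0.90],
]

# Row vector times matrix: pi1[j] = sum_i pi0[i] * P[i][j]
pi1 = [sum(pi0[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Since π_0 and every row of P sum to one, π_1 must also sum to one, which is a quick sanity check on the arithmetic.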


c)

P = [ 0.75 0.15 0.10 ]
    [ 0.05 0.95 0.00 ]
    [ 0.00 0.10 0.90 ]

d)
Let a row vector X = (x_1, x_2, x_3) be a stationary distribution. Then X = XP must hold. Solving this system directly, we deduce that there exists a real number s such that

x_1 = s, x_2 = 5s, x_3 = s,

There is a second interpretation of the limiting distribution π = (π_0, π_1, …, π_N) that plays a major role in many models: π_j also gives the long-run mean fraction of time that the process {X_n} spends in state j.
When solving πP = π for the matrix P of part (c), I obtain the system

0.75π_1 + 0.05π_2 = π_1

0.15π_1 + 0.95π_2 + 0.10π_3 = π_2

0.10π_1 + 0.90π_3 = π_3

The first equation gives π_2 = 5π_1 and the third gives π_3 = π_1. Since the probabilities must sum to 1,

π_1 + π_2 + π_3 = 1

⟶ π_1 + 5π_1 + π_1 = 7π_1 = 1

⟶ π_1 = 1/7, π_2 = 5/7, π_3 = 1/7,

so

π = [1/7 5/7 1/7]
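As a numerical cross-check, the stationary distribution of the matrix P from part (c) can be approximated by repeatedly applying P to an arbitrary starting distribution (a power-iteration sketch in plain Python; the iteration count is an arbitrary choice):

```python
# Power iteration: pi_{n+1} = pi_n * P converges to the stationary distribution
# for this irreducible, aperiodic chain (P is the matrix from part (c)).
P = [
    [0.75, 0.15, 0.10],
    [0.05, 0.95, 0.00],
    [0.00, 0.10, 0.90],
]
pi = [1/3, 1/3, 1/3]   # arbitrary starting distribution
for _ in range(10000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
# pi is now very close to (1/7, 5/7, 1/7)
```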

Question 2
a.
(A remark on the Markov-chain notation of Question 1: if w is the pmf of X_t, defined on a finite state space {x_1, …, x_v}, the row vector n defined as

n_j = Σ_{i=1}^{v} w_i T(x_i, x_j)

is itself a pmf (i.e., 0 ≤ n_i ≤ 1 for all i ∈ {1, …, v} and Σ_{i=1}^{v} n_i = 1) and must therefore correspond to the distribution of a random variable. That justifies the vector notation π_X^{(t+1)} = π_X^{(t)} T.)

Denoting savings by s_t and assuming that individuals start their lives without assets, the choice of individuals has to satisfy

max_{c_t, c_{t+1}, s_t}  (c_t^{1−θ} − 1)/(1−θ) + β (c_{t+1}^{1−θ} − 1)/(1−θ)

subject to the budget constraint

c_t + s_t = w_t

in the first period such that consumption and savings add up to wage income in the first period.
Note that savings can also be negative such that individuals go into debt. In the second period,
the choice of consumption has to satisfy
c_{t+1} = w_{t+1} + (1+r) s_t

such that consumption in the second period equals wage income in the second period, plus savings from the previous period on which interest is paid. Substituting s_t = w_t − c_t obtained from the first equation and plugging it into the second equation yields

c_{t+1} = w_{t+1} + (1+r)(w_t − c_t).

Rearranging all terms involving consumption to appear on the left-hand side leads to the
lifetime budget constraint
c_t + c_{t+1}/(1+r) = w_t + w_{t+1}/(1+r).
The left-hand side comprises total discounted lifetime consumption expenditures, while the
right-hand side comprises total discounted lifetime income. By using this equation as the
constraint, we can solve the given simple dynamic optimization problem by following our recipe/cookbook procedure.
The Lagrangian is

L = (c_t^{1−θ} − 1)/(1−θ) + β (c_{t+1}^{1−θ} − 1)/(1−θ) + λ [ w_t + w_{t+1}/(1+r) − c_t − c_{t+1}/(1+r) ].
The FOCs are

L_{c_t} = c_t^{−θ} − λ = 0,

L_{c_{t+1}} = β c_{t+1}^{−θ} − λ/(1+r) = 0,

L_λ = w_t + w_{t+1}/(1+r) − c_t − c_{t+1}/(1+r) = 0.
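Combining the first two FOCs gives the Euler equation c_{t+1} = [β(1+r)]^{1/θ} c_t, which together with the lifetime budget constraint pins down both consumption levels. A minimal numerical sketch (the parameter values θ, β, r, w_t, w_{t+1} below are illustrative assumptions, not taken from the question):

```python
# Two-period consumption choice with CRRA utility: a numerical sketch.
# All parameter values are illustrative assumptions.
theta, beta, r = 2.0, 0.96, 0.04   # risk aversion, discount factor, interest rate
w_t, w_t1 = 1.0, 1.0               # wage income in the two periods

# Euler equation from the first two FOCs: c_{t+1} = [beta*(1+r)]^(1/theta) * c_t
growth = (beta * (1.0 + r)) ** (1.0 / theta)

# Lifetime budget constraint: c_t + c_{t+1}/(1+r) = w_t + w_{t+1}/(1+r)
lifetime_income = w_t + w_t1 / (1.0 + r)
c_t = lifetime_income / (1.0 + growth / (1.0 + r))
c_t1 = growth * c_t
s_t = w_t - c_t                     # first-period savings (may be negative)
```

By construction, the computed plan satisfies both the lifetime budget constraint and the second-period constraint c_{t+1} = w_{t+1} + (1+r)s_t.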

The consumption model is now expressed as follows:

max_{u} J(u) = ∫_0^1 log(4uy) dt

s.t. ẏ = 4y(1 − u) and y(0) = 1, y(1) = e^2,

where u(t) is the consumption and y(t) is the total output of the economy over the time interval [0, 1], while ẏ = dy/dt is the variation in the total output (i.e., the aggregate investment expenditures).
The objective is to maximize the economy's utility function.
The Hamiltonian is given by:

H = log(4uy) + λ [4y(1 − u)].

Let us check as usual the first-order condition to find the maximum of H:

∂H/∂u = 4y/(4uy) − 4λy = 1/u − 4λy = 0 ⇒ u*(t) = 1/(4λy).

The second derivative is:

∂²H/∂u² = −1/u² < 0,

so the stationary point is indeed a maximum.

b.
In this problem, we find that current consumption is inversely related to the total aggregate output and the costate variable.
From the equation of motion for λ(t) we obtain:

dλ/dt = −∂H/∂y = −[ 1/y + 4λ(1 − u) ].

As u(t) = 1/(4λy), the above equation of motion becomes:

dλ/dt = −∂H/∂y = −[ 1/y + 4λ(1 − 1/(4λy)) ] = −[ 1/y + 4λ − 1/y ] = −4λ,

which has general solution:

λ*(t) = k e^{−4t}.
Now, from the equation of motion ẏ = 4y − 4yu we have the following ordinary linear differential equation to solve:

dy/dt = 4y − 4y · 1/(4λy) = 4y − 1/λ,

which becomes:

dy/dt − 4y = −1/λ.

As λ*(t) = k e^{−4t}, we have:

dy/dt − 4y = −(1/k) e^{4t},
which is a first-order linear differential equation with general solution:

y(t) = e^{∫4 dt} [ c − (1/k) ∫ e^{−∫4 dt} e^{4t} dt ] = e^{4t} [ c − (1/k) ∫ dt ] = e^{4t} [ c − t/k ] = c e^{4t} − (t/k) e^{4t}.

From the boundary conditions y(0) = 1, y(1) = e^2 we can now determine the two constants of integration as:

y(0) = 1 ⇒ c = 1,

y(1) = e^2 ⇒ e^4 − e^4/k = e^2 ⇒ k = e^4/(e^4 − e^2) ≅ 1.1565.
The optimal y*(t) is then determined as follows:

y*(t) = e^{4t} − (t/1.1565) e^{4t},

while the costate variable is:

λ*(t) = 1.1565 e^{−4t}.
We finally obtain u*(t) as:

u*(t) = 1/(4λy) = 1/( 4k e^{−4t} ( e^{4t} − (t/k) e^{4t} ) ) = 1/( 4(k − t) ) = 1/( 4 · 1.1565 − 4t ) = 1/( 4.626 − 4t ).
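As a sanity check, the closed-form y*(t) and u*(t) can be verified numerically against the state equation ẏ = 4y(1 − u) and the boundary conditions (a small sketch using a central finite difference; the step size and tolerances are arbitrary choices):

```python
import math

# Closed-form candidates from the derivation above.
k = math.exp(4) / (math.exp(4) - math.exp(2))   # ≈ 1.1565

def y_star(t):
    return math.exp(4 * t) * (1.0 - t / k)

def u_star(t):
    return 1.0 / (4.0 * (k - t))

# Boundary conditions: y(0) = 1 and y(1) = e^2.
bc0 = y_star(0.0)
bc1 = y_star(1.0)

# State equation residual dy/dt - 4*y*(1 - u), with dy/dt from a central difference.
h = 1e-6
def ode_residual(t):
    dy = (y_star(t + h) - y_star(t - h)) / (2 * h)
    return dy - 4.0 * y_star(t) * (1.0 - u_star(t))

max_residual = max(abs(ode_residual(t / 10)) for t in range(1, 10))
```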

c. Approach:
Policy evaluation: calculate utilities for some fixed policy (not the optimal utilities!) until convergence.
Policy improvement: update the policy using a one-step look-ahead, with the resulting converged (but not optimal!) utilities as future values.
Repeat both steps until the policy converges.
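The three steps above can be sketched in code; the tiny two-state, two-action MDP below (transition probabilities, rewards, discount factor) is an illustrative assumption, not taken from the question:

```python
# Policy iteration sketch on a toy 2-state, 2-action MDP
# (all numbers are illustrative assumptions).
GAMMA = 0.9
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}

def q_value(s, a, V):
    # One-step look-ahead value of taking action a in state s.
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in transitions[s][a])

def policy_iteration(tol=1e-10):
    policy = {s: 0 for s in transitions}       # arbitrary initial policy
    while True:
        # 1) Policy evaluation: iterate utilities of the *fixed* policy to convergence.
        V = {s: 0.0 for s in transitions}
        while True:
            delta = 0.0
            for s in transitions:
                v_new = q_value(s, policy[s], V)
                delta = max(delta, abs(v_new - V[s]))
                V[s] = v_new
            if delta < tol:
                break
        # 2) Policy improvement: one-step look-ahead with the converged utilities.
        new_policy = {s: max(transitions[s], key=lambda a, s=s: q_value(s, a, V))
                      for s in transitions}
        # 3) Repeat until the policy stops changing.
        if new_policy == policy:
            return policy, V
        policy = new_policy

policy_opt, V_opt = policy_iteration()
```

The inner loop is the evaluation step, the dictionary comprehension is the improvement step, and the outer loop repeats both until the policy is stable.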
d.
The update step in value iteration is very similar to the update step in the policy iteration algorithm. The only difference is that value iteration takes the maximum over all possible actions. Instead of evaluating and then improving, the value iteration algorithm updates the state value function in a single step, by computing the one-step look-ahead value of every action and keeping the best one. The value iteration algorithm is guaranteed to converge to the optimal values.
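In code, this single combined update replaces the two-phase structure of policy iteration with one Bellman-optimality backup per state (the toy 2-state MDP below is an illustrative assumption, not from the question):

```python
# Value iteration sketch: evaluation and improvement merged into one
# Bellman-optimality update (toy 2-state MDP; all numbers are illustrative).
GAMMA = 0.9
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}

def value_iteration(tol=1e-10):
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            # Maximum over all actions of the one-step look-ahead value.
            best = max(
                sum(p * (r + GAMMA * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V_opt = value_iteration()
```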

e.
After a certain number of iterations (in this case k = 2000), the policy stops improving, and hence the optimal policy is obtained.
One major drawback of policy iteration is the computational cost involved in evaluating policies. This cost is reduced in value iteration, which stops policy evaluation after k = 1 and updates the policy at every step thereafter.
Value iteration: unlike policy iteration, it merges the policy evaluation and improvement steps into one and performs an iterative update using the value function of the Bellman optimality equation.

f.
Scenario 1: E(a) converges and then remains constant.
Scenario 2: E(a) diverges by a relatively low margin and then remains constant.
Scenario 3: E(a) diverges by a higher margin and then remains constant.
