Probability for Data Science – Master’s Degree in Data Science – A.Y. 2022/2023
Academic Staff: Francesca Collet, Paolo Dai Pra

PROBLEM SET 8
Discrete-time Markov chains IV: limiting probabilities, first step analysis and MCMC methods

P13.1. Consider a Markov chain with state space S = {0, 1, 2, 3, 4}. Suppose P04 = 1; and suppose
that when the chain is in state i, i > 0, the next state is equally likely to be any of the states
0, 1, . . . , i − 1. Find the limiting probabilities of this Markov chain.
Solution. From the text we deduce that, if i > 0, the transition probabilities are given by

    P_{ij} = 1/i   if j ∈ {0, 1, . . . , i − 1},
    P_{ij} = 0     otherwise,
and the transition matrix results in

          ( 0     0     0     0     1 )
          ( 1     0     0     0     0 )
    P =   ( 1/2   1/2   0     0     0 )
          ( 1/3   1/3   1/3   0     0 )
          ( 1/4   1/4   1/4   1/4   0 ).

From the transition matrix P, we deduce the transition graph shown in the figure aside. [Transition graph: 0 → 4 with probability 1; each state i > 0 points to 0, 1, . . . , i − 1 with probability 1/i.] All states communicate with each other (0 → 4 → 3 → 2 → 1 → 0), therefore the chain has only one communication class. Since periodicity is a class property, it suffices to analyze one single state. Let us focus on state 0. Observe that P²_{00} > 0 (0 → 4 → 0) and P³_{00} > 0 (0 → 4 → 1 → 0). Therefore, since gcd(2, 3) = 1, we obtain d₀ = 1 and the chain is aperiodic.
Since the chain is positive recurrent (as the state space is finite), irreducible and aperiodic, the convergence theorem holds. Therefore, the limiting probabilities are the entries of the stationary distribution π = (π0, π1, π2, π3, π4), that is

    lim_{n→+∞} P^n_{ij} = πj   for all i, j ∈ S.

To determine the stationary distribution we have to solve the following system of linear equations

    π = πP,   Σ_{j=0}^{4} πj = 1,

which is equivalent to

    π0 = π1 + (1/2)π2 + (1/3)π3 + (1/4)π4
    π1 = (1/2)π2 + (1/3)π3 + (1/4)π4
    π2 = (1/3)π3 + (1/4)π4
    π3 = (1/4)π4
    π4 = π0
    π0 + π1 + π2 + π3 + π4 = 1,

that is, π1 = (1/2)π0, π2 = (1/3)π0, π3 = (1/4)π0, π4 = π0 and the normalization gives π0(1 + 1/2 + 1/3 + 1/4 + 1) = (37/12)π0 = 1, whence

    π0 = 12/37,  π1 = 6/37,  π2 = 4/37,  π3 = 3/37,  π4 = 12/37.

We obtained π = (12/37, 6/37, 4/37, 3/37, 12/37) and then it follows that

    lim_{n→+∞} P^n_{i0} = π0 = 12/37,   lim_{n→+∞} P^n_{i1} = π1 = 6/37,   lim_{n→+∞} P^n_{i2} = π2 = 4/37,
    lim_{n→+∞} P^n_{i3} = π3 = 3/37    and    lim_{n→+∞} P^n_{i4} = π4 = 12/37,

for all i ∈ S.
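This fixed-point computation is easy to double-check numerically. The sketch below (a sanity check, not part of the original solution) verifies with NumPy that π is stationary and that the rows of Pⁿ converge to π.

```python
import numpy as np

# Transition matrix of P13.1 (states 0, 1, 2, 3, 4).
P = np.array([
    [0,   0,   0,   0,   1],
    [1,   0,   0,   0,   0],
    [1/2, 1/2, 0,   0,   0],
    [1/3, 1/3, 1/3, 0,   0],
    [1/4, 1/4, 1/4, 1/4, 0],
])

# Stationary distribution derived above.
pi = np.array([12, 6, 4, 3, 12]) / 37

# Stationarity: pi P = pi.
assert np.allclose(pi @ P, pi)

# Convergence theorem: every row of P^n approaches pi.
Pn = np.linalg.matrix_power(P, 200)
assert np.allclose(Pn, np.tile(pi, (5, 1)))
```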

P13.2 (difficult). Consider a Markov chain (Xn)n∈N with state space S = {0, 1, . . . , k} and with transition probabilities

    P_{ij} = i/k         if j = i − 1,
    P_{ij} = (k − i)/k   if j = i + 1,
    P_{ij} = 0           otherwise,

if i ∈ {1, 2, . . . , k − 1}, and P_{00} = P_{kk} = 0, P_{01} = P_{k,k−1} = 1. Determine the stationary distribution and, if possible, the limiting probabilities.
Solution. From the given transition probabilities we deduce the transition graph shown in the figure below. [Transition graph: a birth-death chain on {0, 1, . . . , k}; state i jumps to i − 1 with probability i/k and to i + 1 with probability (k − i)/k, with reflection at the endpoints 0 and k.]

All states communicate with each other (0 → i → k → 0, for any i ∈ {1, . . . , k − 1}); the chain has only one communication class, meaning it is irreducible. Moreover, as the state space is a finite set, it is positive recurrent. As a consequence, the stationary distribution exists and is unique.
To determine the stationary distribution π = (π0, π1, . . . , πk), we solve the system consisting of the following linear equations

    Σ_{i=0}^{k} πi P_{ij} = πj,   for j = 0, . . . , k,   (1)
    Σ_{j=0}^{k} πj = 1.   (2)

Observe that, from (1), we get

• for j = 0:
    π0 = π1 P_{10} = (1/k) π1 ⇔ π1 = k π0,

• for j = 1:
    π1 = π0 P_{01} + π2 P_{21} = π0 + (2/k) π2 ⇔ π2 = (k/2)(π1 − π0) = [k(k − 1)/2] π0,

• for j = 2:
    π2 = π1 P_{12} + π3 P_{32} = [(k − 1)/k] π1 + (3/k) π3 ⇔ π3 = (k/3)(π2 − [(k − 1)/k] π1) = [k(k − 1)(k − 2)/6] π0,

• . . .
We prove by induction that πj = C(k, j) π0, for all j = 0, . . . , k, where C(k, j) = k!/(j!(k − j)!) denotes the binomial coefficient. Of course the identity is true for j = 0. We suppose it holds for every j ≤ r and we prove that it holds also for j = r + 1, yielding π_{r+1} = C(k, r + 1) π0. From (1), by setting j = r, we obtain

    πr = π_{r−1} P_{r−1,r} + π_{r+1} P_{r+1,r} = [(k − r + 1)/k] π_{r−1} + [(r + 1)/k] π_{r+1},

which is equivalent to

    π_{r+1} = [k/(r + 1)] (πr − [(k − r + 1)/k] π_{r−1})
            = [k/(r + 1)] (C(k, r) − [(k − r + 1)/k] C(k, r − 1)) π0    (by inductive hypothesis πr = C(k, r) π0 and π_{r−1} = C(k, r − 1) π0)
            = [k/(r + 1)] (C(k, r) − (k − 1)!/[(k − r)!(r − 1)!]) π0
            = {k!/[(r + 1)!(k − r)!]} (k − r) π0
            = C(k, r + 1) π0,

as wanted. To conclude and obtain the value of π0, we exploit the normalization condition (2). We have

    1 = Σ_{j=0}^{k} C(k, j) π0 = π0 Σ_{j=0}^{k} C(k, j) 1^j 1^{k−j} = 2^k π0 ⇔ π0 = 1/2^k,

where we used Newton’s binomial theorem, Σ_{j=0}^{k} C(k, j) 1^j 1^{k−j} = (1 + 1)^k = 2^k. Therefore, the stationary distribution is given by π = (π0, π1, . . . , πk), where πj = 2^{−k} C(k, j), for each j = 0, 1, . . . , k.

The Markov chain (Xn)n∈N is periodic. Every transition moves the chain from an even state to an odd one or vice versa, so the chain always needs an even number of steps to reenter a given state, implying that it has period 2. As a consequence, the convergence theorem does not hold.
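The binomial form of π and the period-2 claim can be checked numerically for a particular chain size. The sketch below (not part of the original solution) uses k = 6, an arbitrary choice for illustration.

```python
import numpy as np
from math import comb

k = 6  # arbitrary chain size used for the check
P = np.zeros((k + 1, k + 1))
P[0, 1] = P[k, k - 1] = 1.0
for i in range(1, k):
    P[i, i - 1] = i / k        # step down with probability i/k
    P[i, i + 1] = (k - i) / k  # step up with probability (k-i)/k

# Candidate stationary distribution pi_j = 2^{-k} C(k, j).
pi = np.array([comb(k, j) for j in range(k + 1)]) / 2**k
assert np.allclose(pi @ P, pi)

# Period 2: after an odd number of steps the chain cannot be back
# where it started, so every diagonal entry of an odd power vanishes.
assert np.allclose(np.diag(np.linalg.matrix_power(P, 7)), 0.0)
```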

P13.3. Consider a Markov chain (Xn )n∈N with state space S = {1, 2, 3, 4, 5} and with transition
matrix

          ( 0     1/2   0     1/2   0   )
          ( 1/2   0     1/2   0     0   )
    P =   ( 0     0     1     0     0   )
          ( 1/3   0     1/3   0     1/3 )
          ( 0     0     0     0     1   ).
(a) Draw the transition graph.
(b) Use the first step analysis to determine the probability that the chain, starting from state 1,
will eventually reach state 5.
Solution.
(a) From the transition matrix P we deduce the transition graph shown in the figure below.
[Transition graph: 1 → 2 and 1 → 4 (probability 1/2 each); 2 → 1 and 2 → 3 (probability 1/2 each); 4 → 1, 4 → 3 and 4 → 5 (probability 1/3 each); states 3 and 5 are absorbing.]
(b) Let Pi denote the probability that, starting from state i, the chain will eventually reach state 5. We want to determine P1. By conditioning on the outcome of the first transition of the chain, we obtain

    P1 = (1/2)P2 + (1/2)P4
    P2 = (1/2)P1 + (1/2)P3
    P3 = 0
    P4 = (1/3)P1 + (1/3)P3 + (1/3)P5
    P5 = 1,

that is, P2 = (1/2)P1 and P4 = (1/3)P1 + 1/3. Substituting into the first equation gives P1 = (1/4)P1 + (1/6)P1 + 1/6, from which it follows P1 = 2/7.
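The first-step equations form a small linear system, so the value P1 = 2/7 can be confirmed by solving that system directly (a sketch, not part of the original solution):

```python
import numpy as np

# Unknowns x = (P1, P2, P4); P3 = 0 and P5 = 1 are already known.
A = np.array([
    [1.0, -1/2, -1/2],   # P1 = (1/2)P2 + (1/2)P4
    [-1/2, 1.0,  0.0],   # P2 = (1/2)P1
    [-1/3, 0.0,  1.0],   # P4 = (1/3)P1 + 1/3  (the P3 term vanishes)
])
b = np.array([0.0, 0.0, 1/3])

P1, P2, P4 = np.linalg.solve(A, b)
assert abs(P1 - 2/7) < 1e-12
```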

E13.4. There is a mouse in our flat! See figure below. It moves from room to room randomly.
From any room, it chooses one of the adjacent rooms with equal probability and runs there. The
movements occur at times n ∈ N.
TOILET HALL KITCHEN

LIVING ROOM BEDROOM

GARDEN

We set two traps; one is installed in the bedroom and the other one in the kitchen. As soon as
the mouse enters a room with a trap, it is caught and it will never ever run into another room.
Moreover, if the mouse leaves the flat and enters the garden, it will never come inside again.
Let Xn be the position of the mouse at time n. The process (Xn )n∈N is a homogeneous discrete-time
Markov chain.
(a) Determine the state space and the transition matrix of the chain.
(b) Draw the transition graph of the chain.
(c) Specify the communication classes of the chain and determine whether they are transient or
recurrent.
(d) Compute the probability that the mouse, starting in the toilet, is in the living room after two
moves.
(e) Use the first-step analysis to determine the probability that the mouse, starting in the toilet,
escapes the flat before being caught in a trap.
Solution.
(a,b) The states of the Markov chain are the rooms of our flat: toilet (T), hall (H), kitchen (K),
living room (L), bedroom (B) and garden (G); therefore, in short, we have S = {T, H, K, L, B, G}.
The transition matrix (rows and columns ordered as T, H, K, L, B, G) is

          ( 0     1     0     0     0     0   )
          ( 1/3   0     1/3   1/3   0     0   )
    P =   ( 0     0     1     0     0     0   )
          ( 0     1/3   0     0     1/3   1/3 )
          ( 0     0     0     0     1     0   )
          ( 0     0     0     0     0     1   )
and the corresponding transition graph is shown in the figure below.
[Transition graph: T ↔ H; H → K and H → L (probability 1/3 each); L → H, L → B and L → G (probability 1/3 each); K, B and G are absorbing (self-loops of probability 1).]
(c) Observe that no state other than K itself is accessible from state K, and the same goes for states B and G; as a consequence, each of these states forms a class of its own. The states T, H and L communicate with each other and are not accessible from any other state, so they are in the same class. Thus, this Markov chain consists of the four communication classes {T, H, L}, {K}, {B} and {G}. Being absorbing, the classes {K}, {B} and {G} are recurrent, while the remaining class is transient.
(d) Recall that two-step transition probabilities are the entries of the matrix P². We have to determine P²_{TL} = (P²)_{TL}. Since H is the only room reachable from T in one step, we obtain

    P²_{TL} = Σ_{s∈S} P_{Ts} P_{sL} = P_{TH} P_{HL} = 1 · (1/3) = 1/3.

(e) Let Pi denote the probability that, starting from state i ∈ S, the chain will eventually reach state G. We want to determine P_T. We know that P_K = P_B = 0 and P_G = 1. By conditioning on the outcome of the first transition of the chain, we obtain

    P_T = P_H
    P_H = (1/3)P_T + (1/3)P_K + (1/3)P_L = (1/3)P_T + (1/3)P_L
    P_L = (1/3)P_H + (1/3)P_B + (1/3)P_G = (1/3)P_H + 1/3
    P_K = P_B = 0
    P_G = 1.

Substituting P_T = P_H into the second equation gives P_L = 2P_H; combined with the third equation this yields P_H = 1/5, from which it follows P_T = 1/5.
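The value P_T = 1/5 can also be estimated by simulating the mouse directly. The following Monte Carlo sketch (an illustration, not part of the original solution) encodes the adjacency of the flat and counts escapes into the garden.

```python
import random

# Adjacent rooms for the transient states; K, B and G are absorbing.
rooms = {'T': ['H'], 'H': ['T', 'K', 'L'], 'L': ['H', 'B', 'G']}

def escapes(start='T'):
    """Run the mouse until absorption; True iff it reaches the garden."""
    pos = start
    while pos not in ('K', 'B', 'G'):
        pos = random.choice(rooms[pos])
    return pos == 'G'

random.seed(0)
n = 100_000
estimate = sum(escapes() for _ in range(n)) / n
# First-step analysis gives P_T = 1/5; the estimate should be close.
assert abs(estimate - 0.2) < 0.01
```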

P13.5 (v From past exam). Consider a discrete-time Markov chain with state space S =
{0, 1, 2, 3, 4, 5}. Suppose Pi,i−1 = 1, for i > 0, and suppose that, when the chain is in state 0, the
next state is equally likely to be any of the states 1, 2, 3, 4 and 5.
(a) Determine the transition probability matrix of the chain and draw the corresponding transition
graph.
(b) Specify the communication classes and determine whether they are transient or recurrent. Is
the Markov chain irreducible?
(c) Compute the proportion of time that the chain spends in an odd state in the long run.
Moreover, if it exists (explain why or why not!), compute lim_{n→+∞} (P^n_{21} + P^n_{31} + P^n_{13} + P^n_{15}).
(d) Modify the transition probability matrix of the chain so that the states 2 and 4 become
absorbing states. Use the first-step analysis to determine the probability that the chain,
starting at state 0, reaches state 4.
Solution.
(a) The only non-null transition probabilities are

    P_{i,i−1} = 1   and   P_{0,i} = 1/5,

for all i ∈ {1, 2, 3, 4, 5}. As a consequence, the transition probability matrix results in

          ( 0   1/5   1/5   1/5   1/5   1/5 )
          ( 1   0     0     0     0     0   )
    P =   ( 0   1     0     0     0     0   )
          ( 0   0     1     0     0     0   )
          ( 0   0     0     1     0     0   )
          ( 0   0     0     0     1     0   )
and the corresponding transition graph is shown in the figure below.
[Transition graph: 0 → i with probability 1/5 for each i ∈ {1, 2, 3, 4, 5}; i → i − 1 with probability 1 for each i ≥ 1.]

(b) All states communicate with each other (0 → 5 → 4 → 3 → 2 → 1 → 0), therefore the chain
has only one communication class and hence it is irreducible. As the Markov chain has a
finite state space, the class must be recurrent.
(c) In the long run, the proportion of time spent by the chain in an odd state (i.e., in state 1 or state 3 or state 5) is π1 + π3 + π5, where the πi’s are the entries of the stationary distribution π = (π0, π1, π2, π3, π4, π5). To determine the stationary distribution, we solve the following system of linear equations

    π = πP,   Σ_{i=0}^{5} πi = 1,

which is equivalent to

    π0 = π1
    π1 = (1/5)π0 + π2
    π2 = (1/5)π0 + π3
    π3 = (1/5)π0 + π4
    π4 = (1/5)π0 + π5
    π5 = (1/5)π0
    π0 + π1 + π2 + π3 + π4 + π5 = 1,

that is, π1 = π0, π2 = (4/5)π0, π3 = (3/5)π0, π4 = (2/5)π0, π5 = (1/5)π0 and the normalization gives π0(1 + 1 + 4/5 + 3/5 + 2/5 + 1/5) = 4π0 = 1, whence

    π0 = 1/4,  π1 = 1/4,  π2 = 1/5,  π3 = 3/20,  π4 = 1/10,  π5 = 1/20.

In conclusion, we obtain π1 + π3 + π5 = 1/4 + 3/20 + 1/20 = 9/20.

We check that the chain is aperiodic. Since periodicity is a class property, it suffices to analyze one single state. Let us focus on state 0. Observe that P²_{00} > 0 (0 → 1 → 0) and P³_{00} > 0 (0 → 2 → 1 → 0). Therefore, since gcd(2, 3) = 1, we obtain d0 = 1 and the conclusion follows.
Since the chain is positive recurrent (as the state space is finite), irreducible and aperiodic, the convergence theorem holds. Thus, the limiting probabilities are the entries of the stationary distribution, that is

    lim_{n→+∞} P^n_{ij} = πj   for all i, j ∈ S.

As a consequence, it yields

    lim_{n→+∞} (P^n_{21} + P^n_{31} + P^n_{13} + P^n_{15}) = 2π1 + π3 + π5 = 1/2 + 3/20 + 1/20 = 7/10.
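Both the stationary distribution and the limit 7/10 can be verified by taking a high matrix power (a quick numerical sanity check, not part of the original solution):

```python
import numpy as np

# Transition matrix of P13.5 (states 0..5).
P = np.zeros((6, 6))
P[0, 1:] = 1/5            # from 0, jump uniformly to 1, ..., 5
for i in range(1, 6):
    P[i, i - 1] = 1.0     # from i > 0, step down deterministically

pi = np.array([1/4, 1/4, 1/5, 3/20, 1/10, 1/20])
assert np.allclose(pi @ P, pi)                  # stationarity
assert np.isclose(pi[1] + pi[3] + pi[5], 9/20)  # odd-state proportion

# The chain is aperiodic, so P^n converges entrywise to pi.
Pn = np.linalg.matrix_power(P, 500)
total = Pn[2, 1] + Pn[3, 1] + Pn[1, 3] + Pn[1, 5]
assert abs(total - 7/10) < 1e-8
```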
(d) We modify the evolution of the chain so that the states 2 and 4 become absorbing states. From the transition probability matrix P, we derive the new transition probability matrix P̄, given by

          ( 0   1/5   1/5   1/5   1/5   1/5 )
          ( 1   0     0     0     0     0   )
    P̄ =   ( 0   0     1     0     0     0   )
          ( 0   0     1     0     0     0   )
          ( 0   0     0     0     1     0   )
          ( 0   0     0     0     1     0   ).

The corresponding transition graph is shown in the figure aside. [Transition graph: 0 → i with probability 1/5 for each i ∈ {1, 2, 3, 4, 5}; 1 → 0, 3 → 2 and 5 → 4 with probability 1; states 2 and 4 are absorbing.]
Let Pi denote the probability that, starting from state i, the chain will eventually reach state 4.
We want to determine P0 . By conditioning on the outcome of the first transition of the chain, we
obtain

    P0 = (1/5)P1 + (1/5)P2 + (1/5)P3 + (1/5)P4 + (1/5)P5
    P1 = P0
    P2 = P3 = 0
    P4 = P5 = 1,

that is, P0 = (1/5)P0 + 2/5, from which it follows P0 = 1/2.
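Since 2 and 4 are absorbing, the probability of ending in state 4 is the limit of (P̄ⁿ)₀₄, which a high matrix power approximates well (a sketch, not part of the original solution):

```python
import numpy as np

# Modified chain of P13.5(d): states 2 and 4 are absorbing.
Pbar = np.zeros((6, 6))
Pbar[0, 1:] = 1/5
Pbar[1, 0] = 1.0
Pbar[2, 2] = 1.0   # absorbing
Pbar[3, 2] = 1.0
Pbar[4, 4] = 1.0   # absorbing
Pbar[5, 4] = 1.0

row0 = np.linalg.matrix_power(Pbar, 200)[0]
# Starting from 0, the chain is absorbed in state 4 with probability 1/2 ...
assert abs(row0[4] - 1/2) < 1e-10
# ... and in state 2 with the remaining probability 1/2.
assert abs(row0[2] - 1/2) < 1e-10
```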

P13.6. Let S = {−1, +1}^n and let i = (i1, . . . , in) be a generic element of S. We induce an
undirected graph structure on the state space S; we use S as vertex set and we introduce a concept
of neighboring elements of S: i ∼ j (neighbors) if and only if i and j differ in exactly one entry.
(a) Show that, in the induced graph, every element of S has n neighbors.
(b) Consider the Markov chain with state space S and such that, when the chain is in state i,
the next state is equally likely to be any of the neighboring elements of i. Determine the
transition matrix M of this chain.
(c) Let

    π(i) = e^{−βF(i)} / Σ_{j∈S} e^{−βF(j)}   (i ∈ S),

where β is a strictly positive constant and F(i) = Σ_{v=1}^{n} iv. Describe a Metropolis-Hastings Markov chain having π as stationary distribution; use the matrix M found in (b) as reference transition matrix.
Solution.
(a) Given i ∈ S and v ∈ {1, . . . , n}, let i^v = (i1, . . . , i_{v−1}, −iv, i_{v+1}, . . . , in) denote the string obtained from i by flipping the sign of the v-th entry. Observe that j ∼ i if and only if j = i^v for some v ∈ {1, . . . , n}. As a consequence, an element i ∈ S has n neighbors.
(b) Since only transitions between neighboring elements are allowed and since each state has n neighbors, we obtain the transition probabilities

    M_{ij} = 1/n   if j = i^v for some v ∈ {1, . . . , n},
    M_{ij} = 0     otherwise,

for i, j ∈ S.

The transition matrix M = (Mij )i,j∈S is symmetric due to the symmetry of the relation ∼.
Moreover, it is irreducible, as by flipping one entry at a time any string can be transformed
into any other in the state space.
(c) First we have to choose the symmetric and irreducible transition matrix working as reference matrix for the Metropolis-Hastings algorithm. We use the matrix M found in item (b)∗. Since flipping the v-th entry changes F by F(i^v) − F(i) = −2iv, the acceptance probability for the transition from i to i^v is

    A_{i,i^v} = min{1, π(i^v)/π(i)} = min{1, e^{−β[F(i^v)−F(i)]}} = min{1, e^{2βiv}},

and the Metropolis-Hastings chain is given by

    P_{ij} = (1/n) min{1, e^{2βiv}}   if j = i^v for some v ∈ {1, . . . , n},
    P_{ij} = 1 − Σ_{k≠i} P_{ik}       if j = i,
    P_{ij} = 0                        otherwise,

for i, j ∈ S.
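As a sanity check (not part of the original solution), the chain above can be simulated for a small n and its empirical mean of F(i) = Σ_v iv compared with the exact value −n tanh β, which follows because under π the entries are i.i.d. with mean −tanh β. The values n = 4 and β = 0.3 are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 4, 0.3   # small illustrative values

state = np.ones(n, dtype=int)
total = 0.0
burn, steps = 1_000, 200_000
for t in range(burn + steps):
    v = rng.integers(n)  # uniform neighbour proposal (matrix M)
    # accept the flip with probability min(1, e^{2*beta*i_v})
    if rng.random() < min(1.0, np.exp(2 * beta * state[v])):
        state[v] = -state[v]
    if t >= burn:
        total += state.sum()

# Under pi each entry has mean -tanh(beta), so E[F] = -n*tanh(beta).
avg = total / steps
assert abs(avg - (-n * np.tanh(beta))) < 0.1
```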

P13.7 (Ising model† ). Let Λ ⊂ Z2 be a finite subset of the 2-dimensional integer lattice; and
let {1, 2, . . . , |Λ|} be an ordering of the vertices of Λ. At each vertex i ∈ Λ, we attach a random
variable σi ∈ {−1, +1} and we obtain the configuration space
    SΛ = {−1, +1}^Λ = {σ = (σ1, . . . , σ_{|Λ|}) : σi ∈ {−1, +1}, for i = 1, . . . , |Λ|},

the set of all possible strings of length |Λ| with entries ±1. The Ising model is defined through the
probability measure
    π(σ) = e^{−βH(σ)} / Σ_{σ′∈SΛ} e^{−βH(σ′)}   (σ ∈ SΛ),
where β is a positive real parameter and the function H : SΛ → R is given by
    H(σ) = − Σ_{i,j∈Λ : i∼j} σi σj.

The notation i ∼ j indicates that vertices i and j are adjacent in Λ.


(a) Describe a Metropolis-Hastings Markov chain having π as stationary distribution; construct
the reference matrix M as suggested in Problem 13.6.
(b) (Optional) Let m := |Λ|^{−1} Σ_{i∈Λ} σi. Simulate the chain and use the Ergodic Theorem to give
an estimate of Eπ(m²), where Eπ denotes the expectation with respect to the probability
distribution π. Plot Eπ(m²) against β.
(A possible) Solution. Observe that the variables/spins σi tend to align: for every fixed
β, the configurations in which adjacent variables have the same sign have higher probability (by
definition of π). Moreover, the larger the parameter β is chosen, the stronger this imitative
tendency is. An overall net magnetization (m ≠ 0) appears for sufficiently large values of β. See the plot
at the end of the exercise.
(a) First we have to choose the symmetric and irreducible transition matrix M working as reference matrix for the Metropolis-Hastings algorithm. We mimic the strategy we used in Problem 13.6. We induce an undirected graph structure on SΛ as follows: σ ∼ σ′ if and only if σ and σ′ differ in exactly one entry. Given σ ∈ SΛ and i ∈ {1, . . . , |Λ|}, let σ^i = (σ1, . . . , σ_{i−1}, −σi, σ_{i+1}, . . . , σ_{|Λ|}) denote the state obtained from σ by flipping (only) the i-th entry. Observe that σ′ ∼ σ if and only if σ′ = σ^i for some i ∈ {1, . . . , |Λ|}. As a consequence, an element σ ∈ SΛ has |Λ| neighbors. Since we allow only transitions between neighboring elements and since each state has |Λ| equally likely neighbors, the transition matrix M has elements

    M_{σσ′} = |Λ|^{−1}   if σ′ = σ^i for some i = 1, . . . , |Λ|,
    M_{σσ′} = 0          otherwise,

for σ, σ′ ∈ SΛ.
∗ Why are we suggested to make this choice? Letting the reference chain move uniformly at random between all
configurations, by taking M = (2^{−n})_{i,j∈S}, is not always appropriate numerically; if |S| = 2^n becomes large, the
entries of the matrix M easily become very small, producing rounding errors. A simple and more convenient strategy
is to allow only transitions between “close” states, that is, strings that differ in one single entry. Doing so, at
each time step, the tentative next state is chosen uniformly at random among n neighbors.
† The Ising model is a mathematical model of ferromagnetism in statistical mechanics. The model consists of

discrete variables that represent magnetic moments of atomic spins that can be in one of two states. The spins
are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. See the webpage
https://en.wikipedia.org/wiki/Ising_model for further details.
The acceptance probability for the transition from σ to σ^i is

    A_{σσ^i} = min{1, π(σ^i)/π(σ)} = min{1, e^{−β[H(σ^i)−H(σ)]}} = min{1, e^{−2βσi Σ_{j∈Λ: j∼i} σj}}

and, therefore, the Metropolis-Hastings chain is described by

    P_{σσ′} = |Λ|^{−1} min{1, e^{−2βσi Σ_{j∈Λ: j∼i} σj}}   if σ′ = σ^i for some i = 1, . . . , |Λ|,
    P_{σσ′} = 1 − Σ_{ν≠σ} P_{σν}                           if σ′ = σ,
    P_{σσ′} = 0                                            otherwise.

(b) You may find a detailed step by step explanation on how to simulate the Ising model in
Python in the video available at https://www.youtube.com/watch?v=K--1hlv9yv0. The
notebook with the corresponding code can be found at https://github.com/lukepolson/youtube_
channel/blob/main/Python%20Metaphysics%20Series/vid14.ipynb. The present solution
is a simplified version of the aforementioned code.

import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import convolve, generate_binary_structure

Generate an initial condition by choosing a random 2d grid of ±1 variables. Take a 50 × 50 grid. At each vertex i in the grid, attach σi = −1 with probability 0.25 and σi = +1 with probability 0.75.

# 50 by 50 grid
N = 50

# initial condition
init_random = np.random.random((N,N))
lattice_n = np.zeros((N, N))
lattice_n[init_random>=0.25] = 1
lattice_n[init_random<0.25] = -1

Show the initial condition. A yellow (resp. violet) pixel represents a +1 (resp. −1) variable.

plt.figure()
plt.imshow(lattice_n)
Define a function to get the energy of a configuration. It takes in a configuration σ, i.e. a 2d grid of ±1 variables, and computes the function H(σ) = − Σ_{i∼j} σi σj.

# compute the energy of a configuration
def get_energy(lattice):
    # applies the nearest-neighbours summation via a 4-neighbour cross kernel
    kern = generate_binary_structure(2, 1)
    kern[1][1] = False
    arr = -lattice * convolve(lattice, kern, mode='constant', cval=0)
    return arr.sum()

Define the Metropolis-Hastings algorithm. It takes in the initial configuration, the number of
time steps to run algorithm for, the parameter β and the energy of the initial configuration.
It returns a vector collecting the value of m2 at each time step.

def metropolis(spin_arr, times, B, energy):
    spin_arr = spin_arr.copy()
    net_spins = np.zeros(times-1)
    for t in range(0, times-1):
        # pick random point on array and flip spin
        x = np.random.randint(0, N)
        y = np.random.randint(0, N)
        spin_i = spin_arr[x, y]  # initial spin
        spin_f = spin_i * -1     # proposed spin flip

        # compute change in energy (only the four nearest neighbours matter)
        E_i = 0
        E_f = 0
        if x > 0:
            E_i += -spin_i * spin_arr[x-1, y]
            E_f += -spin_f * spin_arr[x-1, y]
        if x < N-1:
            E_i += -spin_i * spin_arr[x+1, y]
            E_f += -spin_f * spin_arr[x+1, y]
        if y > 0:
            E_i += -spin_i * spin_arr[x, y-1]
            E_f += -spin_f * spin_arr[x, y-1]
        if y < N-1:
            E_i += -spin_i * spin_arr[x, y+1]
            E_f += -spin_f * spin_arr[x, y+1]

        # accept the flip with the Metropolis probability min(1, exp(-B*dE))
        dE = E_f - E_i
        if dE <= 0 or np.random.random() < np.exp(-B*dE):
            spin_arr[x, y] = spin_f
            energy += dE

        net_spins[t] = (spin_arr.sum()/N**2)**2

    return net_spins

Define a function to estimate§ Eπ(m²) for many different values of the parameter β. It takes in a configuration σ and a set of values for the parameter β. It returns a vector collecting the estimates of Eπ(m²) for the different values of β. The estimates are obtained as averages over 700 000 time steps.

def get_spin(lattice, Bs):
    ms = np.zeros(len(Bs))
    for i, bj in enumerate(Bs):
        spins = metropolis(lattice, 700000, bj, get_energy(lattice))
        ms[i] = spins[-700000:].mean()
    return ms

Run the previous function to generate the estimates of Eπ (m2 ) for different values of β. The
parameter β ranges from 0.1 to 2 with a step size of 0.05.

Bs = np.arange(0.1, 2, 0.05)
ms_n = get_spin(lattice_n, Bs)

Show the estimate of Eπ (m2 ) as a function of the parameter β.

plt.figure(figsize=(8,5))
plt.plot(Bs, ms_n, 'o--', label='75% of spins started positive')
plt.xlabel(r'$\beta$')
plt.ylabel(r'$E_{\pi}(m^2)$')
plt.legend(facecolor='white', framealpha=1)
plt.show()

§ Let N_it denote the number of time steps the Metropolis-Hastings algorithm is run for and let σ^(j) be the configuration obtained as output of the j-th iteration of the algorithm. If N_it is sufficiently large, by applying the Ergodic Theorem, we have the following approximation:

    Eπ(m²) ≈ N_it^{−1} Σ_{j=1}^{N_it} (|Λ|^{−1} Σ_{i∈Λ} σi^(j))².
