optimisationCW


2022/11/24 18:43 optimisationCW - Jupyter Notebook

ii):

In [90]:

import numpy as np

def gradient_method(x0, N, a, d, gamma):
    # Build the matrix A0 (Azero) and the vector b for the
    # regularised least-squares problem, then solve it in closed form.
    x = x0
    Azero = np.zeros((N + 1, N + 1))
    A = np.zeros((N, N))
    blist = np.zeros(N)
    b = np.zeros(N + 1)
    for n in range(N):
        blist[n] = (a ** (N - n - 1)) * d
    for p in range(N + 1):
        b[p] = x * (a ** p)
    # A is lower triangular: d on the diagonal, powers of a below it.
    for i in range(N):
        for j in range(N):
            if j < i:
                A[i, j] = blist[N - i + j - 1]
            if j == i:
                A[i, j] = d
    for m in range(N):
        Azero[m + 1, 1:] = A[m, :]
    I = (gamma / 2) * np.eye(N + 1)  # regularisation term (gamma/2) * I
    # Solve (A0^T A0 + (gamma/2) I) u = -A0^T b
    u = np.dot(np.linalg.inv(np.dot(Azero.T, Azero) + I), np.dot(Azero.T, -b))
    u = u[1:]
    return u

In [102]:

gamma = [0.001, 0.01, 0.1, 1]
u1 = gradient_method(1, 40, 1, -0.01, 0.001)
u2 = gradient_method(1, 40, 1, -0.01, 0.01)
u3 = gradient_method(1, 40, 1, -0.01, 0.1)
u4 = gradient_method(1, 40, 1, -0.01, 1)

import matplotlib.pyplot as plt

plt.title(r'The optimal u* for different $\gamma$')
plt.xlabel("N")
plt.ylabel("u*")
plt.plot(u1, '-b', label=r'$\gamma$=0.001')
plt.plot(u2, '-r', label=r'$\gamma$=0.01')
plt.plot(u3, '-y', label=r'$\gamma$=0.1')
plt.plot(u4, '-g', label=r'$\gamma$=1')
plt.legend()
plt.show()


In [92]:

def gradient_method2(x0, N, a, d, gamma):
    # Same construction as gradient_method, but return the
    # trajectory x = A0 u + b instead of the control u.
    x = x0
    Azero = np.zeros((N + 1, N + 1))
    A = np.zeros((N, N))
    blist = np.zeros(N)
    b = np.zeros(N + 1)
    for n in range(N):
        blist[n] = (a ** (N - n - 1)) * d
    for p in range(N + 1):
        b[p] = x * (a ** p)
    for i in range(N):
        for j in range(N):
            if j < i:
                A[i, j] = blist[N - i + j - 1]
            if j == i:
                A[i, j] = d
    for m in range(N):
        Azero[m + 1, 1:] = A[m, :]
    I = (gamma / 2) * np.eye(N + 1)  # regularisation term (gamma/2) * I
    u = np.dot(np.linalg.inv(np.dot(Azero.T, Azero) + I), np.dot(Azero.T, -b))
    x = np.dot(Azero, u) + b
    return x

In [103]:

gamma = [0.001, 0.01, 0.1, 1]
x1 = gradient_method2(1, 40, 1, -0.01, 0.001)
x2 = gradient_method2(1, 40, 1, -0.01, 0.01)
x3 = gradient_method2(1, 40, 1, -0.01, 0.1)
x4 = gradient_method2(1, 40, 1, -0.01, 1)

import matplotlib.pyplot as plt

plt.title(r'The associated optimal trajectories for different $\gamma$')
plt.xlabel("N")
plt.ylabel("optimal trajectories")
plt.plot(x1, '-b', label=r'$\gamma$=0.001')
plt.plot(x2, '-r', label=r'$\gamma$=0.01')
plt.plot(x3, '-y', label=r'$\gamma$=0.1')
plt.plot(x4, '-g', label=r'$\gamma$=1')
plt.legend()
plt.show()

Looking first at the graph 'The optimal u* for different $\gamma$', we can see that the larger $N$ is, the lower $u_i$ is, and for every $\gamma$ the value of $u_i$ converges to 0 for large $N$. As $\gamma$ increases, the starting value of $u_i$ decreases and $u_i$ tends to 0 faster.

Turning to the graph of the trajectories for different $\gamma$, we can see that for every $\gamma$ the trajectory $x_i$ starts from the initial value 1 and then decreases towards 0. As $\gamma$ increases, so does the speed at which $x_i$ tends to 0.

iii):

We can see from the graph of the optimal $x$ that imposing the bound $u_{max}$ does not change the trajectories much, since the lines are similar: they start from the same point and tend to the same value. The only difference is that the unconstrained line lies below the $u_{max}$ one almost all the time.

For the optimal $u$, however, there is a clear difference. With the bound in place, the starting value becomes $u_{max}$, and the two lines quickly become similar; the bounded control simply takes fewer distinct values at the start.

Here we have the objective function. I set up the function and its gradient using the matrix $S$ and the vector $b$ obtained previously, then use the gradient-descent method to solve the problem and obtain the optimal $x$ and $u$. Finally, I set $\gamma$ to each value and plot the results.
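The gradient-descent solver itself is not reproduced on this page, so the following is only a minimal sketch of the step described above, under assumptions: the objective is taken to be the same regularised least-squares cost solved in closed form earlier, $f(u) = \|A_0 u + b\|^2 + (\gamma/2)\|u\|^2$ with $S = A_0^T A_0$, whose gradient is $2A_0^T(A_0 u + b) + \gamma u$. The function name, the eigenvalue-based step size, and the optional `u_max` clipping (for the bounded case in part iii) are illustrative, not the notebook's actual code.

```python
import numpy as np

def gradient_descent_control(Azero, b, gamma, u_max=None, iters=20000):
    """Minimise ||Azero @ u + b||^2 + (gamma/2)*||u||^2 by gradient descent.

    If u_max is given, each iterate is clipped into [-u_max, u_max]
    (projected gradient descent), mimicking the bounded-control case.
    Returns the control u and the trajectory x = Azero @ u + b.
    """
    n = Azero.shape[1]
    # Hessian of the objective; its largest eigenvalue gives a safe step size.
    H = 2 * Azero.T @ Azero + gamma * np.eye(n)
    lr = 1.0 / np.linalg.eigvalsh(H).max()
    u = np.zeros(n)
    for _ in range(iters):
        grad = 2 * Azero.T @ (Azero @ u + b) + gamma * u
        u = u - lr * grad
        if u_max is not None:
            u = np.clip(u, -u_max, u_max)
    return u, Azero @ u + b
```

With `u_max=None` the iterates should converge to the same $u$ as the closed-form solve `(A0.T @ A0 + (gamma/2) * I)^{-1} @ A0.T @ (-b)` used in `gradient_method` above, since the objective is strongly convex.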
