HW#10 Solutions

1. For an n × m real matrix X with n > m (a tall matrix) and full rank, we define the pseudo-inverse X† by X† = (XᵀX)⁻¹Xᵀ.

(a) Show that w ∈ ℝⁿ is in the range R(X) of X if and only if XX†w = w.

(b) Show that XX† is an orthogonal projection operator onto the range space R(X) of X. That is, show that for any z ∈ ℝⁿ, the vector z1 = XX†z is in R(X) and z − z1 is in R(X)⊥ := {w | wᵀv = 0 for all v ∈ R(X)}.

(c) Let X1 and X2 respectively be full-rank matrices of dimensions n × m1 and n × m2, where n > max(m1, m2). Use the above results to show that y ∈ R(X1) ∩ R(X2) if and only if both (X1X1† − X2X2†)y = 0 and X1X1†y = y.

Solution:

(a) Let w ∈ R(X). This implies there is a v such that w = Xv. Therefore XX†w = XX†Xv = X(XᵀX)⁻¹XᵀXv = Xv = w.
Conversely, let XX†w = w; this implies w ∈ R(X) since w = Xv0 where v0 = X†w.

(b) z1 is of the form Xv0 where v0 = X†z, and therefore z1 ∈ R(X). Note that z − z1 = (I − XX†)z. Therefore, for any w ∈ R(X),

⟨z − z1, w⟩ = ⟨(I − XX†)z, w⟩ = ⟨(I − XX†)z, XX†w⟩ = zᵀ(I − X(XᵀX)⁻¹Xᵀ)X(XᵀX)⁻¹Xᵀw = 0,

where the second equality uses w = XX†w from part (a), and the last equality holds since (I − XX†)X = X − X = 0.

(c) Let y ∈ R(X1) ∩ R(X2); from the above, this implies y = X1X1†y = X2X2†y, which implies (X1X1† − X2X2†)y = 0 and X1X1†y = y.
Conversely, let (X1X1† − X2X2†)y = 0 and X1X1†y = y. This implies y = X1X1†y = X2X2†y, which implies y ∈ R(X1) and y ∈ R(X2); that is, y ∈ R(X1) ∩ R(X2).
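These projection facts are easy to check numerically. Below is a minimal sketch; the 4 × 2 matrix X is a hypothetical example, not part of the problem:

```python
import numpy as np
import numpy.linalg as la

def pinv_tall(X):
    # pseudo-inverse of a tall full-rank matrix: X† = (XᵀX)⁻¹Xᵀ
    return la.inv(X.T @ X) @ X.T

X = np.array([[2.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])  # hypothetical tall matrix
P = X @ pinv_tall(X)  # P = XX†, the claimed orthogonal projector onto R(X)

z = np.array([1.0, 2.0, 3.0, 4.0])
z1 = P @ z
assert np.allclose(P @ z1, z1)           # z1 is in R(X): applying P again changes nothing
assert np.allclose(X.T @ (z - z1), 0.0)  # z - z1 is orthogonal to every column of X
```

The two asserts correspond exactly to parts (a) and (b): P is idempotent and symmetric, so it is an orthogonal projector.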
2. (You have solved part of this problem and have solutions for some of the parts.) Consider an LTI system given by

    ẋ = Ax + Bu
    y = Cx,

where x ∈ ℝ⁴ and

    A = [ -1  4  0  0 ]      B = [ 2  0 ]
        [  0  1  0  0 ]          [ 1  0 ]      C = [   0     1    0   0 ]
        [  0  0 -1  0 ]          [ 0  1 ]          [ -1/3  -1/3   0   1 ].
        [  0  2  0 -1 ]          [ 1  0 ]

You can use Matlab to answer the following questions.

(a) Find the Controllability matrix for this system and determine if the system is control-
lable.
(b) Find the Observability matrix for this system and determine if the system is observable.
(c) Determine the uncontrollable modes and the unobservable modes by using the Hautus-Rosenbrock test (here, modes correspond to both eigenvalues and eigenvectors).
(d) Is this system stabilizable? Is it detectable?
(e) Compute the Kalman Controllable and Observable Canonical realizations for this sys-
tem.
(f) Determine a minimal realization by forming an observable canonical realization from the controllable canonical form.
(g) Determine the states that are unobservable and controllable.
(h) Characterize the states that are unobservable but not controllable.

Solution:
Python Code
Import the numpy library and linalg library and give convenient aliases

[51]: import numpy as np


import numpy.linalg as la

Defining Matrices A, B, C

[52]: A=np.array([[-1,4,0,0],[0,1,0,0],[0,0,-1,0],[0,2,0,-1]])
B=np.array([[2,0],[1,0],[0,1],[1,0]])
C=np.array([[0,1,0,0],[-1/3,-1/3, 0, 1]])

Solution to (a)
Defining Controllability Matrix

[53]: Con=np.hstack([B,A@B,A@A@B,A@A@A@B])
print('Con',Con)

Con [[ 2 0 2 0 2 0 2 0]
[ 1 0 1 0 1 0 1 0]
[ 0 1 0 -1 0 1 0 -1]
[ 1 0 1 0 1 0 1 0]]
Finding rank of the controllability matrix

[54]: n_1=la.matrix_rank(Con)
print('n_1:',n_1)

n_1: 2
(a) Answer: The controllability matrix has rank 2, which is less than 4. Thus the system is not controllable.

1 Solution to (b)

Defining the observability Matrix Obs

[55]: Obs=np.vstack([C,C@A,C@A@A,C@A@A@A])
print('Obs',Obs)

Obs [[ 0. 1. 0. 0. ]
[-0.33333333 -0.33333333 0. 1. ]
[ 0. 1. 0. 0. ]
[ 0.33333333 0.33333333 0. -1. ]
[ 0. 1. 0. 0. ]
[-0.33333333 -0.33333333 0. 1. ]
[ 0. 1. 0. 0. ]
[ 0.33333333 0.33333333 0. -1. ]]

[56]: n_2=la.matrix_rank(Obs)
print('n_2:',n_2)

n_2: 2
(b) Answer: The observability matrix has rank 2, which is less than 4. Thus the system is not observable.

2 Solution to part (c)

2.1 Uncontrollable modes

We find the uncontrollable modes. The controllability matrix has rank 2, so we need to find two uncontrollable modes. We compute the eigenvalues and left eigenvectors of A (which are the right eigenvectors of Aᵀ).
[57]: from numpy.linalg import eig

[58]: AT=A.T
e,f=la.eig(AT)
print('eigenvalues e:',e)
print('eigenvectors f:',f)

eigenvalues e: [ 1. -1. -1. -1.]


eigenvectors f: [[ 0. 0.4472136 0. 0. ]
[ 1. -0.89442719 0. -0.70710678]
[ 0. 0. 1. 0. ]
[ 0. 0. 0. 0.70710678]]
Extracting the four eigenvectors t1, t2, t3, t4 as columns of f

[59]: t1=f[:,[0]]
t2=f[:,[1]]
t3=f[:,[2]]
t4=f[:,[3]]

[60]: I=np.identity(4)
ellc=1
testMatc=np.hstack([ellc*I-A,B])
print(t1.T@testMatc)

[[0. 0. 0. 0. 1. 0.]]
As t1ᵀ[λI − A  B] ≠ 0, the mode (1, t1) is controllable.

[61]: ellc=-1
testMatc=np.hstack([ellc*I-A,B])
print(t2.T@testMatc)

[[0. 0. 0. 0. 0. 0.]]
As t2ᵀ[λI − A  B] = 0, the mode (−1, t2) is not controllable.
[62]: ellc=-1
testMatc=np.hstack([ellc*I-A,B])
print(t3.T@testMatc)

[[0. 0. 0. 0. 0. 1.]]
As t3ᵀ[λI − A  B] ≠ 0, the mode (−1, t3) is controllable.

[63]: ellc=-1
testMatc=np.hstack([ellc*I-A,B])
print(t4.T@testMatc)

[[0. 0. 0. 0. 0. 0.]]
As t4ᵀ[λI − A  B] = 0, the mode (−1, t4) is not controllable.
Thus (1, t1) and (−1, t3) are controllable, whereas (−1, t2) and (−1, t4) are uncontrollable.
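The same count can be cross-checked with the rank form of the PBH test: λ is an uncontrollable eigenvalue exactly when rank[λI − A  B] < 4, and the rank deficiency equals the number of independent uncontrollable directions at λ. A quick sketch with the matrices above:

```python
import numpy as np
import numpy.linalg as la

A = np.array([[-1, 4, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 2, 0, -1]])
B = np.array([[2, 0], [1, 0], [0, 1], [1, 0]])
I = np.identity(4)

# rank of [lI - A, B] at each distinct eigenvalue of A
ranks = {ell: la.matrix_rank(np.hstack([ell * I - A, B])) for ell in (1, -1)}
# rank 4 at l = 1: no uncontrollable direction there;
# rank 2 at l = -1: two independent uncontrollable modes, matching t2 and t4
```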

2.2 Finding Unobservable modes

[64]: w,v=eig(A)
print('eigenvalues:w',w)
print('eigenvectors:v',v)

eigenvalues:w [-1. -1. 1. -1.]


eigenvectors:v [[1. 0. 0.81649658 0. ]
[0. 0. 0.40824829 0. ]
[0. 0. 0. 1. ]
[0. 1. 0.40824829 0. ]]
There are two distinct eigenvalues: −1, repeated three times, and 1. There are four eigenvectors: three associated with −1 and one with 1. ta, tb, tc, td are the four eigenvectors, extracted from v as below.
[65]: ta=v[:,[0]]
tb=v[:,[1]]
tc=v[:,[2]]
td=v[:,[3]]

ta, tb, td are associated with eigenvalue -1 and tc with eigenvalue 1


We now use the PBH test to see which modes are not observable
[66]: ell=-1
I=np.identity(4)
testMat=np.vstack([ell*I-A,C])

print(testMat@ta)

[[ 0. ]
[ 0. ]
[ 0. ]
[ 0. ]
[ 0. ]
[-0.33333333]]
From the above, ta is observable since [λI − A; C]ta ≠ 0. Thus (−1, ta) is an observable mode.

[67]: print(testMat@tb)

[[0.]
[0.]
[0.]
[0.]
[0.]
[1.]]
From the above, tb is observable since [λI − A; C]tb ≠ 0. Thus (−1, tb) is an observable mode.

[68]: print(testMat@td)

[[0.]
[0.]
[0.]
[0.]
[0.]
[0.]]
From the above, td is not observable since [λI − A; C]td = 0. Thus (−1, td) is an unobservable mode.
Note that tc is an eigenvector with eigenvalue 1. We check whether it is observable with λ set to 1.

[69]: ell=1
testMat=np.vstack([ell*I-A,C])
print(testMat@tc)

[[0.00000000e+00]
[0.00000000e+00]
[0.00000000e+00]
[0.00000000e+00]
[4.08248290e-01]
[5.55111512e-17]]
From the above, tc is observable since [λI − A; C]tc ≠ 0. Thus (1, tc) is an observable mode.
Thus, of the four eigenvectors ta, tb, tc and td, only td is unobservable. However, there should be two unobservable modes. Note that a linear combination of eigenvectors sharing the same eigenvalue is again an eigenvector. Let te = 3ta + tb.

[70]: te=3*ta+tb
ell=-1
testMat=np.vstack([ell*I-A,C])
print(testMat@te)

[[0.]
[0.]
[0.]
[0.]
[0.]
[0.]]
Thus (−1, te) is an unobservable mode.
In summary, (−1, td) and (−1, te) are unobservable modes.
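As with controllability, the rank form of the PBH test confirms this count: the rank deficiency of [λI − A; C] at λ = −1 equals the number of independent unobservable directions. A quick sketch:

```python
import numpy as np
import numpy.linalg as la

A = np.array([[-1, 4, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 2, 0, -1]])
C = np.array([[0, 1, 0, 0], [-1/3, -1/3, 0, 1]])
I = np.identity(4)

# rank of [lI - A; C] at each distinct eigenvalue of A
ranks = {ell: la.matrix_rank(np.vstack([ell * I - A, C])) for ell in (1, -1)}
# rank 4 at l = 1 (observable eigenvalue);
# rank 2 at l = -1: two unobservable directions, matching td and te
```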

3 Solution to part (d)

The unstable mode with eigenvalue 1 is both controllable and observable. Thus the system is stabilizable and detectable.

4 Solution to part (e)

4.1 Determining the Observable Canonical Form.

[71]: print('obs:',Obs)

obs: [[ 0. 1. 0. 0. ]
[-0.33333333 -0.33333333 0. 1. ]
[ 0. 1. 0. 0. ]
[ 0.33333333 0.33333333 0. -1. ]
[ 0. 1. 0. 0. ]
[-0.33333333 -0.33333333 0. 1. ]
[ 0. 1. 0. 0. ]
[ 0.33333333 0.33333333 0. -1. ]]
Finding two independent vectors in the null space of Obs

[72]: X2=np.array([[3,0],[0,0],[0,1],[1,0]])
print('X2',X2)

X2 [[3 0]
[0 0]
[0 1]
[1 0]]

Ensuring that columns in X2 are in the null space of Obs

[73]: ObsX2=Obs@X2
print('ObsX2',ObsX2)

ObsX2 [[0. 0.]


[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]]
Choosing two more columns to define an invertible Q whose last two columns are those of X2

[74]: Q1T=np.array([[0,1,0,0]])
Q1=Q1T.T
Q2T=np.array([[1,0,0,0]])
Q2=Q2T.T
Q3=X2[:,[0]]
Q4=X2[:,[1]]
Q=np.hstack([Q1,Q2,X2])
print('Q',Q)

Q [[0 1 3 0]
[1 0 0 0]
[0 0 0 1]
[0 0 1 0]]
Finding the transformed matrices

[75]: invQ=la.inv(Q)
Abar=invQ@A@Q
Bbar=invQ@B
Cbar=C@Q
print('Abar',Abar)
print('Bbar',Bbar)
print('Cbar',Cbar)

Abar [[ 1. 0. 0. 0.]
[-2. -1. 0. 0.]
[ 2. 0. -1. 0.]
[ 0. 0. 0. -1.]]
Bbar [[ 1. 0.]
[-1. 0.]
[ 1. 0.]
[ 0. 1.]]
Cbar [[ 1.          0.          0.          0.        ]
 [-0.33333333 -0.33333333  0.          0.        ]]
Picking the leading 2 × 2 blocks of Abar, Bbar, and Cbar to extract the observable part of the realization

[76]: AbarO=Abar[np.ix_([0,1],[0,1])]
BbarO=Bbar[np.ix_([0,1],[0,1])]
CbarO=Cbar[np.ix_([0,1],[0,1])]
print('AbarO',AbarO)
print('BbarO',BbarO)
print('CbarO',CbarO)

AbarO [[ 1.  0.]
 [-2. -1.]]
BbarO [[ 1.  0.]
 [-1.  0.]]
CbarO [[ 1.          0.        ]
 [-0.33333333 -0.33333333]]
Thus ĀO , B̄O , C̄O with D̄O = 0 provides an observable realization.

4.2 Determining the Controllable Canonical form

[77]: print('Con',Con)

Con [[ 2 0 2 0 2 0 2 0]
[ 1 0 1 0 1 0 1 0]
[ 0 1 0 -1 0 1 0 -1]
[ 1 0 1 0 1 0 1 0]]
Note that the rank of the controllability matrix is n1 = 2. Thus we need to find two independent columns of the controllability matrix and choose two more columns to form a matrix P such that P is invertible. We choose the first two columns of the controllability matrix to form X1.
[78]: X1=np.hstack([Con[:,[0]],Con[:,[1]]])
print('X1=',X1)

X1= [[2 0]
[1 0]
[0 1]
[1 0]]
Forming the transformation matrix P

[79]: P3=np.array([[1,0,0,0]]).T
P4=np.array([[0,0,0,1]]).T
P=np.hstack([X1,P3,P4])
print('determinant(P)=',la.det(P))

determinant(P)= 1.0

Determining matrices under transformation using P

[80]: Atilde=la.inv(P)@A@P
Btilde=la.inv(P)@B
Ctilde=C@P
print('Atilde=',Atilde)
print('Btilde=',Btilde)
print('Ctilde=',Ctilde)

Atilde= [[ 1. 0. 0. 0.]
[ 0. -1. 0. 0.]
[ 0. 0. -1. 0.]
[ 0. 0. 0. -1.]]
Btilde= [[1. 0.]
[0. 1.]
[0. 0.]
[0. 0.]]
Ctilde= [[ 1.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[ 1.11022302e-16 0.00000000e+00 -3.33333333e-01 1.00000000e+00]]
Extracting the 2 × 2 controllable canonical realization

[81]: AtildeC=Atilde[np.ix_([0,1],[0,1])]
BtildeC=Btilde[np.ix_([0,1],[0,1])]
CtildeC=Ctilde[np.ix_([0,1],[0,1])]
print('AtildeC=',AtildeC)
print('BtildeC=',BtildeC)
print('CtildeC=',CtildeC)

AtildeC= [[ 1. 0.]
[ 0. -1.]]
BtildeC= [[1. 0.]
[0. 1.]]
CtildeC= [[1.00000000e+00 0.00000000e+00]
[1.11022302e-16 0.00000000e+00]]
Ãc , B̃c , C̃c , D̃c = 0 provides a controllable realization.

5 Solution to part (f)

We will first check the observability of Ãc , B̃c , C̃c , D̃c = 0 which is a controllable realization.

[82]: obs=np.vstack([CtildeC,CtildeC@AtildeC])
print('rank(obs)=',la.matrix_rank(obs))

rank(obs)= 1

Rank(obs) is 1 while Ãc has dimension 2 × 2. Thus the realization Ãc , B̃c , C̃c , D̃c = 0 is not observable. We will proceed to identify a vector in the null space of the observability matrix.

[83]: print('obs=',obs)

obs= [[1.00000000e+00 0.00000000e+00]


[1.11022302e-16 0.00000000e+00]
[1.00000000e+00 0.00000000e+00]
[1.11022302e-16 0.00000000e+00]]

The vector [0 1]ᵀ is in the null space of obs.
We form the transformation matrix Q̃ with the first column chosen independent of [0 1]ᵀ (which will be the second column of Q̃).
[84]: Qtilde=np.identity(2)
print('Qtilde=',Qtilde)

Qtilde= [[1. 0.]


[0. 1.]]
As the transformation matrix is the identity, the original realization Ãc , B̃c , C̃c , D̃c = 0 is already in the observable canonical form. We need to pick the (1, 1) element of each of Ãc , B̃c , C̃c .
The (1, 1) elements give a = 1, b = 1, c = 1. Thus the observable realization of the controllable realization is given by a = 1, b = 1, c = 1, d = 0. The transfer function is thus

    G(s) = c(sI − a)⁻¹ b + d = 1/(s − 1).
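As a sanity check on the minimal realization, one can evaluate the full transfer matrix C(sI − A)⁻¹B of the original system at a test point s and confirm that only the (1,1) entry is nonzero and equals 1/(s − 1); a sketch:

```python
import numpy as np
import numpy.linalg as la

A = np.array([[-1, 4, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 2, 0, -1]])
B = np.array([[2, 0], [1, 0], [0, 1], [1, 0]])
C = np.array([[0, 1, 0, 0], [-1/3, -1/3, 0, 1]])

s = 2.0  # arbitrary test point away from the eigenvalues of A
H = C @ la.solve(s * np.identity(4) - A, B)  # H(s) = C (sI - A)^{-1} B
# only the (1,1) channel survives, and it matches 1/(s-1)
assert np.allclose(H, [[1 / (s - 1), 0.0], [0.0, 0.0]])
```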
(g) The controllable space Sc is spanned by [2 1 0 1]ᵀ and [0 0 1 0]ᵀ (note that dim(range(Controllability)) = 2, and therefore any two independent columns of the controllability matrix will span the controllable subspace). That is, Sc = R(X1) where

    X1 = [ 2  0 ]
         [ 1  0 ]
         [ 0  1 ]
         [ 1  0 ].

Also note that if V represents the observability matrix then rank(V) = 2 and therefore dim(null(V)) = 2. Thus the unobservable space Sō = R(X2), where

    X2 = [ 3  0 ]
         [ 0  0 ]
         [ 0  1 ]
         [ 1  0 ]

(as V X2 = 0 and the columns of X2 are independent). Therefore the controllable and unobservable space Scō = R(X1) ∩ R(X2) = N(X1X1† − X2X2†) ∩ N(I − X1X1†). Now

    X1X1† − X2X2† = [ -0.2333   0.3333   0   0.0333 ]
                    [  0.3333   0.1667   0   0.1667 ]
                    [  0        0        0   0      ]
                    [  0.0333   0.1667   0   0.0667 ]

and

    I − X1X1† = [  0.3333  -0.3333   0  -0.3333 ]
                [ -0.3333   0.8333   0  -0.1667 ]
                [  0        0        0   0      ]
                [ -0.3333  -0.1667   0   0.8333 ].

Let y ∈ N(X1X1† − X2X2†) ∩ N(I − X1X1†), which can be shown to be the span of [0 0 1 0]ᵀ. Therefore Scō = span{[0 0 1 0]ᵀ}.

(h) Now Sō = span{[3 0 0 1]ᵀ, [0 0 1 0]ᵀ}, whereas Scō = span{[0 0 1 0]ᵀ}. Therefore Sc̄ō is characterized by Sō ∩ (Scō)ᶜ, the set of vectors in Sō that are not in Scō (this is not a subspace).
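The projector computation in (g) is easy to reproduce numerically; a sketch that also confirms [0 0 1 0]ᵀ lies in both null spaces:

```python
import numpy as np
import numpy.linalg as la

X1 = np.array([[2.0, 0], [1, 0], [0, 1], [1, 0]])  # spans the controllable subspace
X2 = np.array([[3.0, 0], [0, 0], [0, 1], [1, 0]])  # spans the unobservable subspace

P1 = X1 @ la.inv(X1.T @ X1) @ X1.T  # orthogonal projector onto R(X1)
P2 = X2 @ la.inv(X2.T @ X2) @ X2.T  # orthogonal projector onto R(X2)

e3 = np.array([0.0, 0, 1, 0])
# e3 is in N(X1X1† - X2X2†) and in N(I - X1X1†), so it spans the
# controllable-and-unobservable subspace Scō
assert np.allclose((P1 - P2) @ e3, 0.0)
assert np.allclose((np.identity(4) - P1) @ e3, 0.0)
```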
3. The system ẋ = Ax + Bu; y = Cx is controlled using static output feedback u = Hy + r.
Show that the resulting closed-loop system

ẋ = (A + BHC)x + Br, y = Cx

has the same controllability/observability properties as the original system. [Hint: Use the
Hautus-Rosenbrock test to show that the uncontrollable (and the unobservable) modes of the
open-loop and the static-output-feedback closed-loop systems are the same.]

Solution: Let (λ, z) represent an uncontrollable mode of [A B; C 0]; that is, zᵀ[λI − A  B] = 0; therefore zᵀA = λzᵀ and zᵀB = 0. Now consider

    zᵀ[λI − (A + BHC)  B] = [zᵀ(λI − A) − (zᵀB)HC   zᵀB] = [0  0],

since zᵀ(λI − A) = 0 and zᵀB = 0. This implies that (λ, z) is an uncontrollable mode of the closed-loop system [A + BHC  B; C  0].

Similarly, let (μ, v) represent an unobservable mode of [A B; C 0]; that is,

    [μI − A; C] v = 0,

that is, (μI − A)v = 0 and Cv = 0. Now consider

    [μI − (A + BHC); C] v = [(μI − A)v − BH(Cv); Cv] = 0.

This implies that (μ, v) is an unobservable mode of the closed-loop system [A + BHC  B; C  0].
Therefore the uncontrollable (and unobservable) modes of the open-loop and the static-output-feedback closed-loop systems are the same.
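Numerically, the PBH null spaces of the open- and closed-loop pencils coincide for any gain; a sketch using the matrices of Problem 2 with a hypothetical gain H (not from the problem):

```python
import numpy as np
import numpy.linalg as la

A = np.array([[-1, 4, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 2, 0, -1]])
B = np.array([[2, 0], [1, 0], [0, 1], [1, 0]])
C = np.array([[0, 1, 0, 0], [-1/3, -1/3, 0, 1]])
H = np.array([[1.0, 2.0], [3.0, 4.0]])  # hypothetical static output-feedback gain
Acl = A + B @ H @ C
I = np.identity(4)

# PBH ranks of the open- and closed-loop pencils agree at every eigenvalue of A
same = all(
    la.matrix_rank(np.hstack([ell * I - A, B]))
    == la.matrix_rank(np.hstack([ell * I - Acl, B]))
    and la.matrix_rank(np.vstack([ell * I - A, C]))
    == la.matrix_rank(np.vstack([ell * I - Acl, C]))
    for ell in (1.0, -1.0)
)
assert same
```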
4. Consider the dynamics

    p̈ + (ω0/Q) ṗ + ω0² p = ω0² f.

(a) Cast the system in state-space form. Find the eigenvalues of the A matrix of the state-space realization.
(b) Let ω0 = 1 rad/s and let Q = 20. Compute the poles of the transfer function from f to y.
(c) Compute a control strategy f that will drive the system from an initial state (0, 1)ᵀ to (0, 0)ᵀ in one second (here the state is (p, ṗ)ᵀ).
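No solution is worked out for this problem here, but parts (a) and (b) reduce to writing the companion-form A matrix and computing its eigenvalues. A hedged sketch, assuming the state x = (p, ṗ)ᵀ and output y = p:

```python
import numpy as np
import numpy.linalg as la

w0, Q = 1.0, 20.0
# pddot = -(w0/Q) pdot - w0**2 p + w0**2 f, with state x = (p, pdot)
A = np.array([[0.0, 1.0], [-w0**2, -w0 / Q]])
B = np.array([[0.0], [w0**2]])

poles = la.eig(A)[0]
# roots of s^2 + (w0/Q) s + w0^2 = 0: a lightly damped complex pair for Q = 20,
# with real part -w0/(2Q) and modulus w0
```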
 
5. Consider a single-input single-output state-space system [A B; C D], where

    A = [  0  1 ]     B = [ 1 ]
        [ -1  0 ],        [ 1 ],   and D = 0.

(a) Assume all the states are individually measured. Determine K for the feedback design such that the closed-loop system has poles at {−1, −1}.
(b) Assume only one state is measured, that is, C = [1 0]. Determine an observer (i.e., determine the observer gain L) that places the observer poles at {−1, −1}. Use the estimated states to implement the control signal u = Kx̂ + r. Determine the state-space realization [Ak Bk; Ck Dk] of the resulting controller. Also determine the state-space realization [Acl Bcl; Ccl Dcl] of the closed-loop system from the input r to the output y.
Solution:
Python Code

[2]: import numpy as np


import numpy.linalg as la

(a) Finding state feedback u = Kx + r such that A + BK has eigenvalues at (−1, −1)

[4]: A=np.array([[0,1],[-1,0]])
B=np.array([[1],[1]])
print('A=',A)
print('B=',B)

A= [[ 0 1]
[-1 0]]
B= [[1]
[1]]

[6]: Con=np.hstack([B, A@B])


print('Rank(Con)=',la.matrix_rank(Con))

Rank(Con)= 2
Desired polynomial is Δd(s) = (s + 1)² = s² + 2s + 1. Computing Δd(A) = A² + 2A + I.

[7]: DeltaDA=A@A+2*A+np.identity(2)

[8]: en=np.array([[0],[1]])

Note that if the control is of the form u = Kx + r, then Ackermann's formula gives K = −enᵀ Con⁻¹ Δd(A). (If the control is of the form u = −Kx + r, then K = +enᵀ Con⁻¹ Δd(A).) The question asks for u = Kx + r.

[10]: K=-(en.T)@(la.inv(Con))@(DeltaDA)
print('K=',K)

K= [[-1. -1.]]

[15]: from numpy.linalg import eig

With u = Kx + r the closed-loop matrix is Acl = A + BK.

[16]: Acl=A+B@K
ell,vecs=eig(Acl)
print('Closed-loop Eigenvalues=',ell)

Closed-loop Eigenvalues= [-1. -1.]


(b) Finding L such that A + LC has eigenvalues at (−1, −1). Note that the eigenvalues of A + LC are the same as the eigenvalues of (A + LC)ᵀ = Aᵀ + CᵀLᵀ. We will place the eigenvalues of Aᵀ + CᵀLᵀ at the desired locations.
Letting Ā = Aᵀ, B̄ = Cᵀ, and K̄ = Lᵀ, the problem becomes choosing K̄ such that Ā + B̄K̄ has eigenvalues at (−1, −1). We will proceed to design K̄.

[19]: C=np.array([[1,0]])
Abar=A.T
Bbar=C.T
ConBar=np.hstack([Bbar,Abar@Bbar])
DeltaDAbar=Abar@Abar +2*Abar + np.identity(2)

[23]: Kbar=-(en.T)@(la.inv(ConBar))@(DeltaDAbar)
print('Kbar=',Kbar)

Kbar= [[-2. 0.]]

[24]: L=Kbar.T
print('L=',L)

L= [[-2.]
[ 0.]]
Check Eigenvalues of A + LC

[25]: e,vecs=eig(A+L@C)
print('Eigenvalues of A+LC=',e)

Eigenvalues of A+LC= [-1. -1.]

If we use the controller u = K x̂ = −x̂1 − x̂2, then the controller system is described by

    x̂˙ = (A + BK + LC)x̂ − Ly
    u = K x̂,

that is, the controller system is given by

    [ Ak  Bk ]   [ A + BK + LC  −L ]   [ -3   0   2 ]
    [ Ck  Dk ] = [      K        0 ] = [ -2  -1   0 ]
                                       [ -1  -1   0 ].

The closed-loop system is given by

    ẋ = Ax + BK(x − x̃) + Br
    x̃˙ = (A + LC)x̃
    y = Cx,

where x̃ = x − x̂. That is, the closed-loop system is given by

    [ Acl  Bcl ]   [ A + BK  −BK     B ]   [ -1   0   1   1   1 ]
    [ Ccl  Dcl ] = [   0     A + LC  0 ]   [ -2  -1   1   1   1 ]
                   [   C     0       0 ] = [  0   0  -2   1   0 ]
                                           [  0   0  -1   0   0 ]
                                           [  1   0   0   0   0 ].

Note that the eigenvalues of the closed-loop system are {−1, −1, −1, −1}.
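The block-triangular structure makes the closed-loop eigenvalues the union of eig(A + BK) and eig(A + LC); a quick check in the (x, x̃) coordinates:

```python
import numpy as np
import numpy.linalg as la

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-1.0, -1.0]])
L = np.array([[-2.0], [0.0]])

# closed loop in (x, xtilde) coordinates is block upper triangular
Acl = np.block([[A + B @ K, -B @ K], [np.zeros((2, 2)), A + L @ C]])
vals = la.eig(Acl)[0]
assert np.allclose(vals, -1.0, atol=1e-5)  # all four eigenvalues at -1
```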
