
Linear System Theory and Design SA01010048 LING QING

2.1 Consider the memoryless system with characteristics shown in Fig 2.19, in which u denotes
the input and y the output. Which of them is a linear system? Is it possible to introduce a new
output so that the system in Fig 2.19(b) is linear?

Figure 2.19


Answer: The input-output relation in Fig 2.19(a) can be described as:

y = a * u

where a is a constant. The system is memoryless, and it is easy to verify that it is linear.

The input-output relation in Fig 2.19(b) can be described as:

y = a * u + b

where a and b are constants. Test the additivity property. Let:

y1 = a * u1 + b
y2 = a * u2 + b

Then:

y1 + y2 = a * (u1 + u2) + 2b

while the response to u1 + u2 is a * (u1 + u2) + b. So the system does not have the additivity property and therefore is not linear. But we can introduce a new output so that the system becomes linear. Let:

z = y - b = a * u

With z as the new output, it is easy to verify that the system is linear.
The input-output relation in Fig 2.19(c) can be described as:

y = a(u) * u

where a(u) is a function of the input u. Choose two different inputs u1 and u2 at which the slopes differ:

y1 = a1 * u1
y2 = a2 * u2

with a1 ≠ a2. Then in general:

y1 + y2 = a1 * u1 + a2 * u2 ≠ a(u1 + u2) * (u1 + u2)

So the system does not have the additivity property and therefore is not linear.
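The additivity check for y = a*u + b and the linearity of the shifted output z = y - b can be verified numerically. A minimal sketch (the slope a and intercept b below are arbitrary illustrative values, not read off the figure):

```python
# Problem 2.1 check: y = a*u + b violates additivity; z = y - b is linear.
A_SLOPE, B_OFFSET = 2.0, 1.0   # illustrative constants

def y(u):
    return A_SLOPE * u + B_OFFSET

def z(u):
    return y(u) - B_OFFSET      # new output: z = a * u

u1, u2 = 3.0, 5.0
# Additivity fails for y: the responses differ by the extra intercept b.
assert y(u1 + u2) != y(u1) + y(u2)
# The shifted output z satisfies both additivity and homogeneity.
assert z(u1 + u2) == z(u1) + z(u2)
assert z(4.0 * u1) == 4.0 * z(u1)
```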

2.2 The impulse response of an ideal lowpass filter is given by

g(t) = sin w(t - t0) / (pi * (t - t0))

for all t, where w and t0 are constants. Is the ideal lowpass filter causal? Is it possible to build the filter in the real world?

Answer: Consider two times ts and tr with ts < tr. The value g(ts - tr) denotes the output at time ts excited by an impulse input applied at time tr, and it is nonzero. This indicates that the system output at time ts depends on the future input at time tr; in other words, the system is not causal. Every physical system must be causal, so it is impossible to build the filter in the real world.

2.3 Consider a system whose input u and output y are related by

y(t) = (Pa u)(t) := u(t) for t <= a
                    0    for t > a

where a is a fixed constant. The system is called a truncation operator, which chops off the input after time a. Is the system linear? Is it time-invariant? Is it causal?

Answer: Consider the input-output relation at any time t <= a:

y = u

and at any time t > a:

y = 0

In both cases the relation is linear, so for any time the system is linear.

Consider whether it is time-invariant. Let the input start at time t0, so the input is u(t) for t >= t0, and let t0 < a. The output is:

y(t) = u(t) for t0 <= t <= a
       0    for other t

Shift the initial time to t0 + T with t0 + T > a; the input becomes u(t - T) for t >= t0 + T, and the system output is:

y'(t) = 0

If u(t) is not identically zero, then y'(t) is not equal to y(t - T). According to the definition, this system is not time-invariant.

For any time t, the system output y(t) is decided exclusively by the current input u(t), so it is a causal system.
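The two conclusions above can be checked on a sampled time grid. A small sketch (the cutoff a, the grid, and the test signals are illustrative choices):

```python
import numpy as np

# Problem 2.3 check: the truncation operator Pa is linear but not
# time-invariant.
a = 1.0
t = np.linspace(0.0, 3.0, 301)

def P(u):
    """Truncation operator: keep u(t) for t <= a, zero afterwards."""
    return np.where(t <= a, u, 0.0)

# Linearity holds sample by sample.
u1, u2 = np.sin(t), t ** 2
assert np.allclose(P(2 * u1 + 3 * u2), 2 * P(u1) + 3 * P(u2))

# Time invariance fails: shift a unit step by T = 2 > a.
T = 2.0
step = np.ones_like(t)                               # u(t): step at 0
shifted_in = np.where(t >= T, 1.0, 0.0)              # u(t - T)
y = P(step)                                          # 1 on [0, a]
y_shifted = np.where((t >= T) & (t <= T + a), 1.0, 0.0)  # y(t - T)
assert np.all(P(shifted_in) == 0.0)                  # actual output is zero
assert not np.allclose(P(shifted_in), y_shifted)     # not the shifted output
```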

2.4 The input and output of an initially relaxed system can be denoted by y = Hu, where H is some mathematical operator. Show that if the system is causal, then

Pa y = Pa Hu = Pa H Pa u

where Pa is the truncation operator defined in Problem 2.3. Is it true that Pa Hu = H Pa u?
Answer: Since y = Hu:

Pa y = Pa Hu

Take the initial time to be 0; since the system is initially relaxed, both the input and the output begin at time 0.

If a <= 0, then u, Hu and Pa u all vanish for t < 0, so Pa u, Pa Hu and Pa H Pa u retain nothing (except possibly the single instant t = 0), and the equality Pa Hu = Pa H Pa u holds trivially.

If a > 0, we can divide u into two parts:

p(t) = u(t) for 0 <= t <= a        q(t) = u(t) for t > a
       0    for other t                   0    for other t

so that u(t) = p(t) + q(t). Because the system is causal, the output excited by q(t) cannot affect the output before time a; that is, the system output from 0 to a is decided only by p(t). Since Pa Hu chops off Hu after time a, it follows that Pa Hu = Pa H p(t). Noticing that p(t) = Pa u, we again have:

Pa Hu = Pa H Pa u

So under either condition the following equation holds:

Pa y = Pa Hu = Pa H Pa u

Pa Hu = H Pa u is false in general. Consider the delay operator H with Hu(t) = u(t - 2), let a = 1, and let u(t) be a unit step beginning at time 0. Then Hu begins at time 2, so Pa Hu is identically zero, while Pa u is nonzero on [0, 1] and H Pa u is nonzero on [2, 3].
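A discrete-time sketch of the same argument, using a 2-sample delay as the causal operator H and a truncation time a = 1 (the grid length is an illustrative choice):

```python
import numpy as np

# Problem 2.4 check: for a causal H, Pa H u = Pa H Pa u, while in general
# Pa H u != H Pa u (delay-operator counterexample from the text).
n = np.arange(10)
a = 1

def P(u):
    """Truncation: chop off the signal after time a."""
    return np.where(n <= a, u, 0.0)

def H(u):
    """Delay by 2 samples -- a causal operator."""
    return np.concatenate([np.zeros(2), u[:-2]])

u = np.ones(10)                              # unit step starting at time 0
assert np.allclose(P(H(u)), P(H(P(u))))      # causality identity holds
assert not np.allclose(P(H(u)), H(P(u)))     # the two orders differ
```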


2.5 Consider a system with input u and output y. Three experiments are performed on the system
using the inputs u1(t), u2(t) and u3(t) for t>=0. In each case, the initial state x(0) at time t=0 is
the same. The corresponding outputs are denoted by y1,y2 and y3. Which of the following
statements are correct if x(0) ≠ 0?
1. If u3=u1+u2, then y3=y1+y2.
2. If u3=0.5(u1+u2), then y3=0.5(y1+y2).
3. If u3=u1-u2, then y3=y1-y2.
Answer: A linear system has the superposition property: the pair of initial state and input

a1 x1(t0) + a2 x2(t0),  a1 u1(t) + a2 u2(t) for t >= t0

excites the output a1 y1(t) + a2 y2(t) for t >= t0.

In case 1: a1 = 1, a2 = 1, so the required initial state is

a1 x1(t0) + a2 x2(t0) = 2 x(0) ≠ x(0)

but all three experiments start from the same state x(0). So y3 ≠ y1 + y2.

In case 2: a1 = 0.5, a2 = 0.5, so

a1 x1(t0) + a2 x2(t0) = x(0)

which matches the actual initial state. So y3 = 0.5 (y1 + y2); this statement is correct.

In case 3: a1 = 1, a2 = -1, so

a1 x1(t0) + a2 x2(t0) = 0 ≠ x(0)

So y3 ≠ y1 - y2.

2.6 Consider a system whose input and output are related by

y(t) = u^2(t)/u(t-1) if u(t-1) ≠ 0
       0             if u(t-1) = 0

for all t. Show that the system satisfies the homogeneity property but not the additivity property.
Answer: Suppose the system is initially relaxed, and scale the input by any real constant a:

p(t) = a * u(t)

Then the system output q(t) is:

q(t) = p^2(t)/p(t-1) if p(t-1) ≠ 0      =  a * u^2(t)/u(t-1) if u(t-1) ≠ 0
       0             if p(t-1) = 0         0                 if u(t-1) = 0

so q(t) = a * y(t), and the system satisfies the homogeneity property.

To see that additivity fails, consider inputs m(t) and n(t) with m(0) = 1, m(1) = 2, n(0) = -1, n(1) = 3. The outputs at time 1 are:

r(1) = m^2(1)/m(0) = 4
s(1) = n^2(1)/n(0) = -9

For the sum input, m(0) + n(0) = 0, so by definition the output is:

y(1) = 0

But r(1) + s(1) = -5 ≠ 0 = y(1), so the system does not satisfy the additivity property.
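The same counterexample runs directly in code; the sample values m(0)=1, m(1)=2, n(0)=-1, n(1)=3 are the ones used above:

```python
# Problem 2.6 check: y(t) = u(t)^2/u(t-1) is homogeneous but not additive.
def response(u, t):
    return u(t) ** 2 / u(t - 1) if u(t - 1) != 0 else 0.0

m = {0: 1.0, 1: 2.0}.get
n = {0: -1.0, 1: 3.0}.get

def scaled(u, a):
    return lambda t: a * u(t)

def added(u, v):
    return lambda t: u(t) + v(t)

# Homogeneity: scaling the input by a scales the output by a.
for a in (2.0, -3.0, 0.5):
    assert response(scaled(m, a), 1) == a * response(m, 1)

# Additivity fails: the sum input hits the u(t-1) = 0 branch.
r, s = response(m, 1), response(n, 1)        # 4.0 and -9.0
assert response(added(m, n), 1) == 0.0
assert r + s != 0.0
```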

2.7 Show that if the additivity property holds, then the homogeneity property holds for all rational numbers a. Thus if a system has the continuity property, then additivity implies homogeneity.

Answer: Any rational number a can be written as

a = m/n

where m and n are integers, n ≠ 0. First, if the system maps input x to output y (written x -> y), then repeated application of additivity gives:

m x -> m y

Second, we show that x/n -> y/n. Suppose:

x/n -> u

Applying additivity n times:

x = n * (x/n) -> n u

so y = n u, that is:

u = y/n

and therefore:

x/n -> y/n

Combining the two steps:

(m/n) x -> (m/n) y,  that is,  a x -> a y

which is the homogeneity property for every rational a.

2.8 Let g(t,T) = g(t+a, T+a) for all t, T and a. Show that g(t,T) depends only on t - T.

Answer: Define:

x = t + T,  y = t - T

so that:

t = (x + y)/2,  T = (x - y)/2

Then, choosing a = (y - x)/2 in the given identity:

g(t, T) = g((x + y)/2, (x - y)/2)
        = g((x + y)/2 + (y - x)/2, (x - y)/2 + (y - x)/2)
        = g(y, 0)

So:

dg(t, T)/dx = dg(y, 0)/dx = 0

which proves that g(t, T) depends only on y = t - T.

2.9 Consider a system with impulse response as shown in Fig2.20(a). What is the zero-state
response excited by the input u(t) shown in Fig2.20(b)?

Fig2.20




Answer: Write out the functions g(t) and u(t):

g(t) = t      for 0 <= t <= 1
       2 - t  for 1 <= t <= 2

u(t) = 1   for 0 <= t <= 1
       -1  for 1 <= t <= 2

Then y(t) equals the convolution integral:

y(t) = integral from 0 to t of g(r) u(t - r) dr

If 0 <= t <= 1, then 0 <= r <= t and 0 <= t - r <= 1, so g(r) = r and u(t - r) = 1:

y(t) = integral from 0 to t of r dr = t^2 / 2
If 1 <= t <= 2, split the integral at r = t - 1 and r = 1:

y(t) = integral from 0 to t-1 + integral from t-1 to 1 + integral from 1 to t
     = y1(t) + y2(t) + y3(t)

Calculate each integral separately:

y1(t): for 0 <= r <= t - 1 we have 1 <= t - r <= 2, so g(r) = r and u(t - r) = -1:

y1(t) = - integral from 0 to t-1 of r dr = -(t - 1)^2 / 2

y2(t): for t - 1 <= r <= 1 we have 0 <= t - r <= 1, so g(r) = r and u(t - r) = 1:

y2(t) = integral from t-1 to 1 of r dr = 1/2 - (t - 1)^2 / 2

y3(t): for 1 <= r <= t we have 0 <= t - r <= 1, so g(r) = 2 - r and u(t - r) = 1:

y3(t) = integral from 1 to t of (2 - r) dr = 2(t - 1) - (t^2 - 1)/2

Adding the three terms:

y(t) = y1(t) + y2(t) + y3(t) = -(3/2) t^2 + 4t - 2
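The closed forms can be cross-checked by convolving the two signals numerically on a fine grid (the step size is an illustrative choice):

```python
import numpy as np

# Problem 2.9 check: numeric convolution vs the closed-form answers
# y = t^2/2 on [0,1] and y = -(3/2) t^2 + 4t - 2 on [1,2].
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
g = np.where(t <= 1, t, 2 - t)          # triangular impulse response
u = np.where(t <= 1, 1.0, -1.0)         # step up, then down
y = np.convolve(g, u)[: len(t)] * dt    # Riemann-sum convolution integral

y_exact = np.where(t <= 1, t ** 2 / 2, -1.5 * t ** 2 + 4 * t - 2)
assert np.max(np.abs(y - y_exact)) < 1e-2
```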


2.10 Consider a system described by

y'' + 2 y' - 3 y = u' - u

What are the transfer function and the impulse response of the system?

Answer: Applying the Laplace transform to the input-output equation, supposing that the system is initially relaxed:

s^2 Y(s) + 2 s Y(s) - 3 Y(s) = s U(s) - U(s)

The system transfer function is:

G(s) = Y(s)/U(s) = (s - 1)/(s^2 + 2s - 3) = 1/(s + 3)

The impulse response is:

g(t) = L^-1[G(s)] = L^-1[1/(s + 3)] = e^(-3t)
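The pole-zero cancellation and the impulse response can be cross-checked numerically with scipy (a sketch; the time grid is an illustrative choice):

```python
import numpy as np
from scipy import signal

# Problem 2.10 check: (s-1)/(s^2+2s-3) reduces to 1/(s+3), whose
# impulse response is e^(-3t).
sys_full = signal.TransferFunction([1, -1], [1, 2, -3])
sys_red = signal.TransferFunction([1], [1, 3])
T = np.linspace(0.0, 2.0, 200)
_, h_full = signal.impulse(sys_full, T=T)
_, h_red = signal.impulse(sys_red, T=T)

assert np.allclose(h_full, np.exp(-3 * T), atol=1e-6)
assert np.allclose(h_red, np.exp(-3 * T), atol=1e-6)
```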

2.11 Let y(t) be the unit-step response of a linear time-invariant system. Show that the impulse response of the system equals dy(t)/dt.

Answer: Let m(t) be the impulse response and G(s) the system transfer function. Then:

Y(s) = G(s) * (1/s)

M(s) = G(s) = s Y(s)

Since the system is initially relaxed, y(0) = 0, and taking the inverse Laplace transform gives:

m(t) = dy(t)/dt
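This relation can be checked numerically by differentiating a simulated step response (using the system of Problem 2.10 as an illustrative example):

```python
import numpy as np
from scipy import signal

# Problem 2.11 check: d/dt of the unit-step response equals the
# impulse response, demonstrated on G(s) = 1/(s+3).
sys = signal.TransferFunction([1], [1, 3])
T = np.linspace(0.0, 2.0, 2001)
_, y_step = signal.step(sys, T=T)
_, g = signal.impulse(sys, T=T)

dy = np.gradient(y_step, T)          # numerical derivative of the step response
assert np.max(np.abs(dy - g)) < 1e-2
```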

2.12 Consider a two-input and two-output system described by

D11(p) y1(t) + D12(p) y2(t) = N11(p) u1(t) + N12(p) u2(t)
D21(p) y1(t) + D22(p) y2(t) = N21(p) u1(t) + N22(p) u2(t)

where the Nij and Dij are polynomials of p := d/dt. What is the transfer matrix of the system?

Answer: For any polynomial N(p) of p, its Laplace transform (with zero initial conditions) is N(s). Applying the Laplace transform to the input-output equations:

D11(s) Y1(s) + D12(s) Y2(s) = N11(s) U1(s) + N12(s) U2(s)
D21(s) Y1(s) + D22(s) Y2(s) = N21(s) U1(s) + N22(s) U2(s)

In matrix form:

[D11(s) D12(s); D21(s) D22(s)] [Y1(s); Y2(s)] = [N11(s) N12(s); N21(s) N22(s)] [U1(s); U2(s)]

[Y1(s); Y2(s)] = [D11(s) D12(s); D21(s) D22(s)]^-1 [N11(s) N12(s); N21(s) N22(s)] [U1(s); U2(s)]

So the transfer matrix is:

G(s) = [D11(s) D12(s); D21(s) D22(s)]^-1 [N11(s) N12(s); N21(s) N22(s)]

provided that the inverse of [D11(s) D12(s); D21(s) D22(s)] exists.

2.13 Consider the feedback systems shown in Fig 2.5. Show that the unit-step responses of the positive-feedback system are as shown in Fig 2.21(a) for a = 1 and in Fig 2.21(b) for a = 0.5. Show also that the unit-step responses of the negative-feedback system are as shown in Fig 2.21(c) and 2.21(d), respectively, for a = 1 and a = 0.5.

Fig 2.21


Answer: First consider the positive-feedback system. Its impulse response is:

g(t) = sum over i >= 1 of a^i * delta(t - i)

Using the convolution integral, the response to an input r(t) is:

y(t) = sum over i >= 1 of a^i * r(t - i)

When the input is the unit-step signal:

y(n) = sum from i = 1 to n of a^i,    y(t) = y(n) for n <= t < n + 1

It is easy to draw the staircase response curves for a = 1 and a = 0.5, as shown in Fig 2.21(a) and Fig 2.21(b) respectively.

Next consider the negative-feedback system. Its impulse response is:

g(t) = sum over i >= 1 of (-a)^i * delta(t - i)

Using the convolution integral, the unit-step response is:

y(n) = sum from i = 1 to n of (-a)^i,    y(t) = y(n) for n <= t < n + 1

as shown in Fig 2.21(c) and Fig 2.21(d) for a = 1 and a = 0.5 respectively.
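The staircase values can be generated directly from the partial sums above (a short sketch; the number of steps is an illustrative choice):

```python
import numpy as np

# Problem 2.13 check: unit-step response values y(n) = sum_{i=1}^{n} a^i
# of the delay-feedback loop, for n <= t < n+1.
def step_response(a, n_max=6):
    return np.cumsum([a ** i for i in range(1, n_max + 1)])

# Positive feedback, a = 1: staircase 1, 2, 3, ... growing without bound.
assert list(step_response(1.0)) == [1, 2, 3, 4, 5, 6]
# Positive feedback, a = 0.5: staircase converging toward 1.
assert np.allclose(step_response(0.5),
                   [0.5, 0.75, 0.875, 0.9375, 0.96875, 0.984375])
# Negative feedback with a = 1 (weights (-1)^i): alternating -1, 0, -1, ...
assert list(step_response(-1.0)) == [-1, 0, -1, 0, -1, 0]
```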

2.14 Draw an op-amp circuit diagram for

x' = [-2 4; 0 -5] x + [2; 4] u

y = [3 -10] x - 2u

2.15 Find state equations to describe the pendulum systems in Fig 2.22. The systems are useful to model one- or two-link robotic manipulators. If θ, θ1 and θ2 are very small, can you consider the two systems as linear?


Answer: For Fig 2.22(a), applying Newton's law to the two linear movements yields:

f cos θ - m g = m (d^2/dt^2)(l cos θ) = m l (-θ'' sin θ - (θ')^2 cos θ)

u - f sin θ = m (d^2/dt^2)(l sin θ) = m l (θ'' cos θ - (θ')^2 sin θ)

Assuming θ and θ' to be small, we can use the approximations sin θ = θ, cos θ = 1. By retaining only the linear terms in θ and θ', we obtain f = m g and:

θ'' = -(g/l) θ + u/(m l)

Select state variables x1 = θ, x2 = θ' and output y = θ:

x' = [0 1; -g/l 0] x + [0; 1/(m l)] u

y = [1 0] x
For Fig 2.22(b), applying Newton's law to the linear movements yields:

f1 cos θ1 - f2 cos θ2 - m1 g = m1 (d^2/dt^2)(l1 cos θ1)
                             = m1 l1 (-θ1'' sin θ1 - (θ1')^2 cos θ1)

f2 sin θ2 - f1 sin θ1 = m1 (d^2/dt^2)(l1 sin θ1)
                      = m1 l1 (θ1'' cos θ1 - (θ1')^2 sin θ1)

f2 cos θ2 - m2 g = m2 (d^2/dt^2)(l1 cos θ1 + l2 cos θ2)
                 = m2 l1 (-θ1'' sin θ1 - (θ1')^2 cos θ1) + m2 l2 (-θ2'' sin θ2 - (θ2')^2 cos θ2)

u - f2 sin θ2 = m2 (d^2/dt^2)(l1 sin θ1 + l2 sin θ2)
              = m2 l1 (θ1'' cos θ1 - (θ1')^2 sin θ1) + m2 l2 (θ2'' cos θ2 - (θ2')^2 sin θ2)

Assuming θ1, θ2 and θ1', θ2' to be small, we can use the approximations sin θ1 = θ1, sin θ2 = θ2, cos θ1 = 1, cos θ2 = 1. By retaining only the linear terms in θ1, θ2 and θ1', θ2', we obtain f2 = m2 g, f1 = (m1 + m2) g and:

θ1'' = -((m1 + m2) g/(m1 l1)) θ1 + (m2 g/(m1 l1)) θ2

θ2'' = ((m1 + m2) g/(m1 l2)) θ1 - ((m1 + m2) g/(m1 l2)) θ2 + u/(m2 l2)

Select state variables x1 = θ1, x2 = θ1', x3 = θ2, x4 = θ2' and output y = [y1; y2] = [θ1; θ2]:

x' = [0                    1  0                     0;
      -(m1+m2)g/(m1 l1)    0  m2 g/(m1 l1)          0;
      0                    0  0                     1;
      (m1+m2)g/(m1 l2)     0  -(m1+m2)g/(m1 l2)     0] x + [0; 0; 0; 1/(m2 l2)] u

y = [1 0 0 0; 0 0 1 0] x

2.17 The soft landing phase of a lunar module descending on the moon can be modeled as shown in Fig 2.24. The thrust generated is assumed to be proportional to the derivative of m, where m is the mass of the module. Then the system can be described by

m y'' = -k m' - m g

where g is the gravity constant on the lunar surface. Define the state variables of the system as:

x1 = y, x2 = y', x3 = m, u = m'

Find a state-space equation to describe the system.
Answer: The system is not linear, so we linearize it about the free-fall trajectory. Suppose:

y = -g t^2/2 + y~,  y' = -g t + y~',  y'' = -g + y~''

m = m0 + m~,  m' = m~'

Substituting into m y'' = -k m' - m g:

(m0 + m~)(-g + y~'') = -k m~' - (m0 + m~) g

-m0 g - m~ g + m0 y~'' ≈ -k m~' - m0 g - m~ g

(dropping the second-order term m~ y~''), so:

m0 y~'' = -k m~'

Define the state variables x1 = y~, x2 = y~', x3 = m~ and the input u = m~'. Then:

x' = [0 1 0; 0 0 0; 0 0 0] x + [0; -k/m0; 1] u

y = [1 0 0] x

2.19 Find a state equation to describe the network shown in Fig 2.26. Find also its transfer function.

Answer: Select the state variables as:

x1: voltage across the left capacitor
x2: voltage across the right capacitor
x3: current through the inductor

Applying Kirchhoff's current and voltage laws:

L x3' = x2 - R x3
u = C x1' + x1/R
u = C x2' + x3
y = x2

With the element values in Fig 2.26 all equal to 1 (R = C = L = 1), these equations give:

x1' = -x1/(C R) + u/C = -x1 + u
x2' = -x3/C + u/C = -x3 + u
x3' = x2/L - R x3/L = x2 - x3
y = x2

They can be combined in matrix form as:

x' = [-1 0 0; 0 0 -1; 0 1 -1] x + [1; 1; 0] u

y = [0 1 0] x
Use MATLAB to compute transfer function. We type:
A=[-1,0,0;0,0,-1;0,1,-1];
B=[1;1;0];
C=[0,1,0];
D=[0];
[N1,D1]=ss2tf(A,B,C,D,1)
which yields:
N1 =
0 1.0000 2.0000 1.0000
D1 =
1.0000 2.0000 2.0000 1.0000
So the transfer function is:

G(s) = (s^2 + 2s + 1)/(s^3 + 2s^2 + 2s + 1) = (s + 1)/(s^2 + s + 1)
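The MATLAB ss2tf call above has a direct scipy counterpart, which reproduces the same numerator and denominator coefficients:

```python
import numpy as np
from scipy import signal

# Problem 2.19 check: state-space to transfer-function conversion,
# mirroring the MATLAB ss2tf computation in the text.
A = [[-1, 0, 0], [0, 0, -1], [0, 1, -1]]
B = [[1], [1], [0]]
C = [[0, 1, 0]]
D = [[0]]
num, den = signal.ss2tf(A, B, C, D)

assert np.allclose(num, [[0, 1, 2, 1]])   # s^2 + 2s + 1
assert np.allclose(den, [1, 2, 2, 1])     # s^3 + 2s^2 + 2s + 1
```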


2.20 Find a state equation to describe the network shown in Fig 2.2. Compute also its transfer matrix.

Answer: Select the state variables as shown in Fig 2.2. Applying Kirchhoff's voltage and current laws:

u1 = R1 C1 x1' + x1 + x2 + L1 x3' + R2 x3
C1 x1' + u2 = C2 x2'
x3 = C2 x2'
y = L1 x3' + R2 x3

From these equations we get:

x1' = x3/C1 - u2/C1
x2' = x3/C2
x3' = -x1/L1 - x2/L1 - (R1 + R2) x3/L1 + u1/L1 + R1 u2/L1
y = -x1 - x2 - R1 x3 + u1 + R1 u2

They can be combined in matrix form as:

x' = [0 0 1/C1; 0 0 1/C2; -1/L1 -1/L1 -(R1+R2)/L1] x + [0 -1/C1; 0 0; 1/L1 R1/L1] [u1; u2]

y = [-1 -1 -R1] x + [1 R1] [u1; u2]

Applying the Laplace transform (zero initial conditions) to the loop equation yields:

y^(s) = (L1 s + R2)/(L1 s + R1 + R2 + 1/(C1 s) + 1/(C2 s)) u1^(s)
        + (L1 s + R2)(R1 + 1/(C2 s))/(L1 s + R1 + R2 + 1/(C1 s) + 1/(C2 s)) u2^(s)

So the transfer matrix is:

G(s) = [ (L1 s + R2)/(L1 s + R1 + R2 + 1/(C1 s) + 1/(C2 s)),
         (L1 s + R2)(R1 + 1/(C2 s))/(L1 s + R1 + R2 + 1/(C1 s) + 1/(C2 s)) ]


2.18 Find the transfer functions from u to y1 and from y1 to y of the hydraulic tank system shown in Fig 2.25. Does the transfer function from u to y equal the product of the two transfer functions? Is this also true for the system shown in Fig 2.14?

Answer: Write out the equations relating u, y1 and y:

y1 = x1/R1
y = x2/R2
A1 dx1/dt = u - y1
A2 dx2/dt = y1 - y

Applying the Laplace transform:

y1^/u^ = 1/(1 + A1 R1 s)
y^/y1^ = 1/(1 + A2 R2 s)
y^/u^ = 1/[(1 + A1 R1 s)(1 + A2 R2 s)]

So:

y^/u^ = (y1^/u^) * (y^/y1^)

But this is not true for Fig 2.14, because of the loading problem between the two tanks.

3.1 Consider Fig 3.1. What is the representation of the vector x with respect to the basis {q1, i2}? What is the representation of q1 with respect to {i2, q2}?

Answer: If we draw from x two lines in parallel with i2 and q1, they intersect at (1/3) q1 and (8/3) i2 as shown. Thus the representation of x with respect to the basis {q1, i2} is [1/3 8/3]'. This can be verified from:

x = [1; 3] = [q1 i2] [1/3; 8/3] = [3 0; 1 1] [1/3; 8/3]

To find the representation of q1 with respect to {i2, q2}, we draw from q1 two lines in parallel with q2 and i2; they intersect at -2 i2 and (3/2) q2. Thus the representation of q1 with respect to {i2, q2} is [-2 3/2]'. This can be verified from:

q1 = [3; 1] = [i2 q2] [-2; 3/2] = [0 2; 1 2] [-2; 3/2]

3.2 What are the 1-norm, 2-norm, and infinite-norm of the vectors x1 = [2 -3 1]', x2 = [1 1 1]'?

Answer:

||x1||_1 = |2| + |-3| + |1| = 6,  ||x2||_1 = 1 + 1 + 1 = 3

||x1||_2 = (2^2 + (-3)^2 + 1^2)^(1/2) = sqrt(14),  ||x2||_2 = (1^2 + 1^2 + 1^2)^(1/2) = sqrt(3)

||x1||_inf = max_i |x1i| = 3,  ||x2||_inf = max_i |x2i| = 1

3.3 Find two orthonormal vectors that span the same space as the two vectors in Problem 3.2.

Answer: By the Schmidt orthonormalization procedure:

u1 = x1 = [2 -3 1]',   q1 = u1/||u1|| = (1/sqrt(14)) [2 -3 1]'
u2 = x2 - (q1' x2) q1 = x2,   q2 = u2/||u2|| = (1/sqrt(3)) [1 1 1]'

The two orthonormal vectors are q1 = (1/sqrt(14)) [2 -3 1]' and q2 = (1/sqrt(3)) [1 1 1]'. In fact, the vectors x1 and x2 are already orthogonal because x1' x2 = x2' x1 = 0, so we only need to normalize the two vectors.
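The norms of Problem 3.2 and the orthonormal pair above can be verified with numpy:

```python
import numpy as np

# Problems 3.2-3.3 check: vector norms and orthonormalization.
x1 = np.array([2.0, -3.0, 1.0])
x2 = np.array([1.0, 1.0, 1.0])

assert np.isclose(np.linalg.norm(x1, 1), 6)
assert np.isclose(np.linalg.norm(x1, 2), np.sqrt(14))
assert np.isclose(np.linalg.norm(x1, np.inf), 3)
assert np.isclose(x1 @ x2, 0)           # x1 and x2 are already orthogonal

q1 = x1 / np.linalg.norm(x1)
q2 = x2 / np.linalg.norm(x2)
assert np.isclose(q1 @ q1, 1) and np.isclose(q2 @ q2, 1)   # unit length
assert np.isclose(q1 @ q2, 0)                              # orthonormal pair
```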

3.4 Consider an n x m matrix A with n >= m. If all columns of A are orthonormal, then A'A = Im. What can you say about AA'?

Answer: Let A = [a1 a2 ... am], where the ai are the n x 1 columns of A. If all columns of A are orthonormal, that is:

ai' aj = 0 if i ≠ j,   ai' ai = 1

then:

A'A = [ai' aj] = Im

On the other hand:

AA' = a1 a1' + a2 a2' + ... + am am'

is an n x n matrix of rank m; it is the orthogonal projection onto the range space of A, and in general AA' ≠ In when m < n. If A is a square matrix (n = m), then A'A = Im implies that A is orthogonal, A' = A^-1, and hence AA' = In as well.
3.5 Find the ranks and nullities of the following matrices:

A1 = [0 1 0; 0 0 0; 0 0 1]   A2 = [4 1 1; 3 2 0; 1 1 0]   A3 = [1 2 3 4; 0 1 2 2; 0 0 0 1]

Answer:

rank(A1) = 2,  rank(A2) = 3,  rank(A3) = 3

nullity(A1) = 3 - 2 = 1,  nullity(A2) = 3 - 3 = 0,  nullity(A3) = 4 - 3 = 1
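These ranks and nullities can be confirmed with numpy (using the matrices as printed above):

```python
import numpy as np

# Problem 3.5 check: rank and nullity (nullity = number of columns - rank).
A1 = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 1]])
A2 = np.array([[4, 1, 1], [3, 2, 0], [1, 1, 0]])
A3 = np.array([[1, 2, 3, 4], [0, 1, 2, 2], [0, 0, 0, 1]])

assert np.linalg.matrix_rank(A1) == 2
assert np.linalg.matrix_rank(A2) == 3
assert np.linalg.matrix_rank(A3) == 3
assert A1.shape[1] - np.linalg.matrix_rank(A1) == 1
assert A2.shape[1] - np.linalg.matrix_rank(A2) == 0
assert A3.shape[1] - np.linalg.matrix_rank(A3) == 1
```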

3.6 Find bases of the range spaces of the matrices in Problem 3.5.

Answer: The last two columns of A1 are linearly independent, so the set {[1 0 0]', [0 0 1]'} can be used as a basis of the range space of A1. All columns of A2 are linearly independent, so the set {[4 3 1]', [1 2 1]', [1 0 0]'} can be used as a basis of the range space of A2.

Let A3 = [a1 a2 a3 a4], where ai denotes the i-th column of A3. The columns a1, a2 and a4 are linearly independent, and the third column can be expressed as a3 = -a1 + 2 a2, so the set {a1, a2, a4} can be used as a basis of the range space of A3.

3.7 Consider the linear algebraic equation

[2 -1; -3 3; -1 2] x = [1; 0; 1] =: y

It has three equations and two unknowns. Does a solution x exist in the equation? Is the solution unique? Does a solution exist if y = [1 1 1]'?

Answer: Let A = [2 -1; -3 3; -1 2] = [a1 a2]. Clearly a1 and a2 are linearly independent, so rank(A) = 2. y is the sum of a1 and a2, that is, rank([A y]) = 2 = rank(A), so a solution x exists in A x = y.

Nullity(A) = 2 - rank(A) = 0

so the solution x = [1 1]' is unique. If y = [1 1 1]', then rank([A y]) = 3 ≠ rank(A); that is to say, no solution exists in A x = y.

3.8 Find the general solution of

[1 2 3 4; 0 1 2 2; 0 0 0 1] x = [3; 2; 1]

How many parameters do you have?

Answer: Let A = [1 2 3 4; 0 1 2 2; 0 0 0 1] and y = [3; 2; 1]. We can readily obtain rank(A) = rank([A y]) = 3, so this y lies in the range space of A, and xp = [-1 0 0 1]' is a particular solution. Nullity(A) = 4 - 3 = 1, which means the dimension of the null space of A is 1, so the number of parameters in the general solution is 1. A basis of the null space of A is n = [1 -2 1 0]'. Thus the general solution of A x = y can be expressed as:

x = xp + a n = [-1; 0; 0; 1] + a [1; -2; 1; 0]

for any real a; a is the only parameter.

3.9 Find the solution in Example 3.3 that has the smallest Euclidean norm.

Answer: The general solution in Example 3.3 is:

x = [0; -4; 0; 0] + a1 [1; 1; 1; 0] + a2 [0; 2; 0; 1]

for any real a1 and a2. The Euclidean norm of x satisfies:

||x||^2 = a1^2 + (a1 + 2 a2 - 4)^2 + a1^2 + a2^2 = 3 a1^2 + 5 a2^2 + 4 a1 a2 - 8 a1 - 16 a2 + 16

Setting the partial derivatives to zero:

(1/2) d||x||^2/da1 = 3 a1 + 2 a2 - 4 = 0
(1/2) d||x||^2/da2 = 2 a1 + 5 a2 - 8 = 0

gives a1 = 4/11 and a2 = 16/11. So

x = [4/11; -8/11; 4/11; 16/11]

has the smallest Euclidean norm.

3.10 Find the solution in Problem 3.8 that has the smallest Euclidean norm.

Answer: The general solution is:

x = [-1; 0; 0; 1] + a [1; -2; 1; 0]

for any real a. The Euclidean norm of x satisfies:

||x||^2 = (a - 1)^2 + (2a)^2 + a^2 + 1 = 6 a^2 - 2a + 2

Setting the derivative to zero:

(1/2) d||x||^2/da = 6a - 1 = 0,   a = 1/6

so

x = [-5/6; -1/3; 1/6; 1]

has the smallest Euclidean norm.
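The minimum-norm solution can be cross-checked with the pseudoinverse, which returns exactly the minimum-norm solution of a consistent system:

```python
import numpy as np

# Problems 3.8/3.10 check: pinv(A) @ y equals xp + (1/6) n, the solution
# found by minimizing ||x||^2 = 6a^2 - 2a + 2.
A = np.array([[1.0, 2, 3, 4], [0, 1, 2, 2], [0, 0, 0, 1]])
y = np.array([3.0, 2, 1])
xp = np.array([-1.0, 0, 0, 1])          # particular solution
n = np.array([1.0, -2, 1, 0])           # basis of the null space

assert np.allclose(A @ xp, y)
assert np.allclose(A @ n, 0)

x_min = np.linalg.pinv(A) @ y
assert np.allclose(x_min, xp + n / 6)   # a = 1/6
assert np.isclose(np.linalg.norm(x_min) ** 2, 11 / 6)
```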

3.11 Consider the equation

x[n] = A^n x[0] + A^(n-1) b u[0] + A^(n-2) b u[1] + ... + A b u[n-2] + b u[n-1]

where A is an n x n matrix and b is an n x 1 column vector. Under what conditions on A and b will there exist u[0], u[1], ..., u[n-1] to meet the equation for any x[n] and x[0]?

Answer: Write the equation in the form:

x[n] - A^n x[0] = [b, Ab, ..., A^(n-1) b] [u[n-1]; u[n-2]; ...; u[0]]

where [b, Ab, ..., A^(n-1) b] is an n x n matrix and x[n] - A^n x[0] is an n x 1 column vector. From this equation we can see that u[0], u[1], ..., u[n-1] exist to meet the equation for any x[n] and x[0] if and only if

rank [b, Ab, ..., A^(n-1) b] = n

3.12 Given

A = [2 1 0 0; 0 2 1 0; 0 0 2 0; 0 0 0 1],   b = [0; 0; 1; 1],   b~ = [1; 2; 3; 1]

what are the representations of A with respect to the basis {b, Ab, A^2 b, A^3 b} and the basis {b~, A b~, A^2 b~, A^3 b~}, respectively?

Answer: For the first basis:

Ab = [0; 1; 2; 1],   A^2 b = [1; 4; 4; 1],   A^3 b = [6; 12; 8; 1],   A^4 b = [24; 32; 16; 1]

The characteristic polynomial of A is (x - 2)^3 (x - 1) = x^4 - 7 x^3 + 18 x^2 - 20 x + 8, so by the Cayley-Hamilton theorem:

A^4 b = -8 b + 20 Ab - 18 A^2 b + 7 A^3 b

Thus we have:

A [b Ab A^2 b A^3 b] = [Ab A^2 b A^3 b A^4 b] = [b Ab A^2 b A^3 b] [0 0 0 -8; 1 0 0 20; 0 1 0 -18; 0 0 1 7]

and the representation of A with respect to the basis {b, Ab, A^2 b, A^3 b} is:

A_bar = [0 0 0 -8; 1 0 0 20; 0 1 0 -18; 0 0 1 7]

For the second basis:

A b~ = [4; 7; 6; 1],   A^2 b~ = [15; 20; 12; 1],   A^3 b~ = [50; 52; 24; 1],   A^4 b~ = [152; 128; 48; 1]

and again, by the Cayley-Hamilton theorem:

A^4 b~ = -8 b~ + 20 A b~ - 18 A^2 b~ + 7 A^3 b~

Thus the representation of A with respect to the basis {b~, A b~, A^2 b~, A^3 b~} is the same companion matrix:

A_bar = [0 0 0 -8; 1 0 0 20; 0 1 0 -18; 0 0 1 7]

3.13 Find Jordan-form representations of the following matrices:

A1 = [1 4 10; 0 2 0; 0 0 3],   A2 = [0 1 0; 0 0 1; -2 -4 -3],   A3 = [1 0 1; 0 1 0; 0 0 2],   A4 = [0 4 3; 0 20 16; 0 -25 -20]

Answer: The characteristic polynomial of A1 is Δ1(λ) = det(λI - A1) = (λ - 1)(λ - 2)(λ - 3), so the eigenvalues of A1 are 1, 2, 3. They are all distinct, so the Jordan-form representation of A1 is diagonal. The eigenvectors associated with λ = 1, 2, 3 can be any nonzero solutions of:

A1 q1 = q1      =>  q1 = [1 0 0]'
A1 q2 = 2 q2    =>  q2 = [4 1 0]'
A1 q3 = 3 q3    =>  q3 = [5 0 1]'

Thus the Jordan-form representation of A1 with respect to {q1, q2, q3} is A1^ = diag(1, 2, 3).

The characteristic polynomial of A2 is Δ2(λ) = det(λI - A2) = λ^3 + 3λ^2 + 4λ + 2 = (λ + 1)(λ + 1 + j)(λ + 1 - j), so A2 has eigenvalues -1, -1 + j and -1 - j. The eigenvectors associated with -1, -1 + j and -1 - j are, respectively, [1 -1 1]', [1 -1+j -2j]' and [1 -1-j 2j]'. Then we have:

Q = [1 1 1; -1 -1+j -1-j; 1 -2j 2j]   and   A2^ = diag(-1, -1+j, -1-j) = Q^-1 A2 Q

The characteristic polynomial of A3 is Δ3(λ) = det(λI - A3) = (λ - 1)^2 (λ - 2), so the eigenvalues of A3 are 1, 1 and 2. The eigenvalue 1 has multiplicity 2, and nullity(A3 - I) = 3 - rank(A3 - I) = 3 - 1 = 2, so A3 has two linearly independent eigenvectors associated with 1:

(A3 - I) q = 0      =>  q1 = [1 0 0]',  q2 = [0 1 0]'
(A3 - 2I) q3 = 0    =>  q3 = [1 0 1]'

Thus we have:

Q = [1 0 1; 0 1 0; 0 0 1]   and   A3^ = diag(1, 1, 2) = Q^-1 A3 Q

The characteristic polynomial of A4 is Δ4(λ) = det(λI - A4) = λ^3. Clearly A4 has only one distinct eigenvalue 0 with multiplicity 3, and nullity(A4 - 0I) = 3 - 2 = 1, so A4 has only one independent eigenvector associated with 0. We can compute the generalized eigenvectors of A4 from the equations:

A4 v1 = 0     =>  v1 = [1 0 0]'
A4 v2 = v1    =>  v2 = [0 4 -5]'
A4 v3 = v2    =>  v3 = [0 -3 4]'

Then the representation of A4 with respect to the basis {v1, v2, v3} is:

A4^ = [0 1 0; 0 0 1; 0 0 0] = Q^-1 A4 Q   where   Q = [1 0 0; 0 4 -3; 0 -5 4]
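The eigenvalue claims and the Jordan chain for A4 can be verified numerically:

```python
import numpy as np

# Problem 3.13 check: A1 has distinct eigenvalues 1, 2, 3; A4 is nilpotent
# with the generalized-eigenvector matrix Q bringing it to a Jordan block.
A1 = np.array([[1.0, 4, 10], [0, 2, 0], [0, 0, 3]])
A4 = np.array([[0.0, 4, 3], [0, 20, 16], [0, -25, -20]])

assert np.allclose(np.sort(np.linalg.eigvals(A1).real), [1, 2, 3])
assert np.allclose(np.linalg.eigvals(A4), 0, atol=1e-3)  # defective, so loose tol
assert np.allclose(A4 @ A4 @ A4, 0)                      # A4^3 = 0: nilpotent

Q = np.array([[1.0, 0, 0], [0, 4, -3], [0, -5, 4]])
J = np.linalg.inv(Q) @ A4 @ Q
assert np.allclose(J, [[0, 1, 0], [0, 0, 1], [0, 0, 0]])
```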
3.14 Consider the companion-form matrix

A = [-a1 -a2 -a3 -a4; 1 0 0 0; 0 1 0 0; 0 0 1 0]

Show that its characteristic polynomial is given by

Δ(λ) = λ^4 + a1 λ^3 + a2 λ^2 + a3 λ + a4

Show also that if λi is an eigenvalue of A, or a solution of Δ(λ) = 0, then [λi^3 λi^2 λi 1]' is an eigenvector of A associated with λi.

Proof:

Δ(λ) = det(λI - A) = det [λ+a1 a2 a3 a4; -1 λ 0 0; 0 -1 λ 0; 0 0 -1 λ]

Expanding along the first row, each cofactor is triangular (or reduces to a triangular determinant), which gives:

Δ(λ) = (λ + a1) λ^3 + a2 λ^2 + a3 λ + a4 = λ^4 + a1 λ^3 + a2 λ^2 + a3 λ + a4

If λi is an eigenvalue of A, that is, Δ(λi) = 0, then λi^4 = -a1 λi^3 - a2 λi^2 - a3 λi - a4, and:

A [λi^3; λi^2; λi; 1] = [-a1 λi^3 - a2 λi^2 - a3 λi - a4; λi^3; λi^2; λi]
                      = [λi^4; λi^3; λi^2; λi]
                      = λi [λi^3; λi^2; λi; 1]

That is to say, [λi^3 λi^2 λi 1]' is an eigenvector of A associated with λi.
3.15 Show that the Vandermonde determinant

det [λ1^3 λ2^3 λ3^3 λ4^3; λ1^2 λ2^2 λ3^2 λ4^2; λ1 λ2 λ3 λ4; 1 1 1 1]

equals the product over 1 <= i < j <= 4 of (λj - λi). Thus we conclude that the matrix is nonsingular or, equivalently, that the eigenvectors are linearly independent, if all the eigenvalues are distinct.

Proof: For k = 1, 2, 3 in turn, subtract λ1 times row k+1 from row k. These row operations do not change the determinant; they make every entry of the first column zero except the bottom 1, and turn column j (j = 2, 3, 4) into:

[λj^2 (λj - λ1); λj (λj - λ1); λj - λ1; 1]

Expanding along the first column and factoring (λj - λ1) out of each remaining column (keeping track of the cofactor and row-reversal signs) gives:

det = (λ2 - λ1)(λ3 - λ1)(λ4 - λ1) det [λ2^2 λ3^2 λ4^2; λ2 λ3 λ4; 1 1 1]

Repeating the same reduction on the 3 x 3 determinant, and finally on the 2 x 2 determinant, yields:

det = product over 1 <= i < j <= 4 of (λj - λi)

If all the eigenvalues are distinct, every factor (λj - λi) is nonzero, so the determinant is nonzero and the matrix is nonsingular.

By Problem 3.14, the columns qi = [λi^3; λi^2; λi; 1]' are eigenvectors of the companion-form matrix associated with λ1, λ2, λ3, λ4. Suppose:

a1 q1 + a2 q2 + a3 q3 + a4 q4 = [q1 q2 q3 q4] [a1; a2; a3; a4] = 0

Since [q1 q2 q3 q4] is exactly the Vandermonde matrix above, which is nonsingular when the eigenvalues are distinct, it follows that a1 = a2 = a3 = a4 = 0. Therefore the eigenvectors are linearly independent if all the eigenvalues are distinct.

3.16 Show that the companion-form matrix in Problem 3.14 is nonsingular if and only if a4 ≠ 0. Under this assumption, show that its inverse equals

A^-1 = [0 1 0 0; 0 0 1 0; 0 0 0 1; -1/a4 -a1/a4 -a2/a4 -a3/a4]

Proof: As we know, the characteristic polynomial is

Δ(λ) = det(λI - A) = λ^4 + a1 λ^3 + a2 λ^2 + a3 λ + a4

Setting λ = 0:

det(-A) = (-1)^4 det(A) = det(A) = a4

so A is nonsingular if and only if a4 ≠ 0. Under this assumption, direct multiplication verifies:

[0 1 0 0; 0 0 1 0; 0 0 0 1; -1/a4 -a1/a4 -a2/a4 -a3/a4] [-a1 -a2 -a3 -a4; 1 0 0 0; 0 1 0 0; 0 0 1 0] = I4

and likewise in the other order, so:

A^-1 = [0 1 0 0; 0 0 1 0; 0 0 0 1; -1/a4 -a1/a4 -a2/a4 -a3/a4]

3.17 Consider

A = [λ λT λT^2/2; 0 λ λT; 0 0 λ]

with λ ≠ 0 and T > 0. Show that [0 0 1]' is a generalized eigenvector of grade 3 and that the three columns of

Q = [λ^2 T^2 λT^2/2 0; 0 λT 0; 0 0 1]

constitute a chain of generalized eigenvectors of length 3. Verify:

Q^-1 A Q = [λ 1 0; 0 λ 1; 0 0 λ]

Proof: Clearly A has only one distinct eigenvalue λ with multiplicity 3, and

A - λI = [0 λT λT^2/2; 0 0 λT; 0 0 0]

With v = [0 0 1]':

(A - λI)^2 v = [0 0 λ^2 T^2; 0 0 0; 0 0 0] [0; 0; 1] = [λ^2 T^2; 0; 0] ≠ 0

(A - λI)^3 v = 0    (since (A - λI)^3 = 0)

These two relations imply that [0 0 1]' is a generalized eigenvector of grade 3. The chain it generates is:

v3 = [0; 0; 1]
v2 = (A - λI) v3 = [λT^2/2; λT; 0]
v1 = (A - λI) v2 = [λ^2 T^2; 0; 0],   (A - λI) v1 = 0

that is, the three columns of Q = [v1 v2 v3] constitute a chain of generalized eigenvectors of length 3. Finally:

A Q = λ Q + (A - λI) Q = [λ v1, v1 + λ v2, v2 + λ v3] = Q [λ 1 0; 0 λ 1; 0 0 λ]

so:

Q^-1 A Q = [λ 1 0; 0 λ 1; 0 0 λ]

3.18 Find the characteristic polynomials and the minimal polynomials of the following matrices:

(a) [λ1 1 0 0; 0 λ1 1 0; 0 0 λ1 0; 0 0 0 λ2]
(b) [λ1 1 0 0; 0 λ1 1 0; 0 0 λ1 0; 0 0 0 λ1]
(c) [λ1 1 0 0; 0 λ1 0 0; 0 0 λ1 0; 0 0 0 λ1]
(d) [λ1 0 0 0; 0 λ1 0 0; 0 0 λ1 0; 0 0 0 λ1]

Answer:

(a) Δ1(λ) = (λ - λ1)^3 (λ - λ2),   ψ1(λ) = (λ - λ1)^3 (λ - λ2)
(b) Δ2(λ) = (λ - λ1)^4,   ψ2(λ) = (λ - λ1)^3
(c) Δ3(λ) = (λ - λ1)^4,   ψ3(λ) = (λ - λ1)^2
(d) Δ4(λ) = (λ - λ1)^4,   ψ4(λ) = (λ - λ1)

3.19 Show that if λ is an eigenvalue of A with eigenvector x, then f(λ) is an eigenvalue of f(A) with the same eigenvector x.

Proof: Let A be an n x n matrix. By Theorem 3.5, for any function f we can define a polynomial

h(λ) = b0 + b1 λ + ... + b(n-1) λ^(n-1)

which equals f on the spectrum of A, and f(A) = h(A). If λ is an eigenvalue of A with eigenvector x, then A x = λ x implies A^k x = λ^k x for every k, so:

f(A) x = h(A) x = (b0 I + b1 A + ... + b(n-1) A^(n-1)) x
       = b0 x + b1 λ x + ... + b(n-1) λ^(n-1) x
       = h(λ) x = f(λ) x

which implies that f(λ) is an eigenvalue of f(A) with the same eigenvector x.

3.20 Show that an n x n matrix A has the property A^k = 0 for k >= m if and only if A has eigenvalue 0 with multiplicity n and index m or less. Such a matrix is called a nilpotent matrix.

Proof: If A has eigenvalue 0 with multiplicity n and index m or less, then the Jordan-form representation of A is

A^ = diag(J1, J2, ..., Jl)

where each Ji is an ni x ni nilpotent Jordan block (zeros on the diagonal, ones on the superdiagonal), with sum of the ni equal to n and max ni <= m. From the nilpotent property of a Jordan block, Ji^k = 0 for k >= ni. So if k >= m >= max ni, then Ji^k = 0 for all i = 1, 2, ..., l, and

A^k = Q diag(J1^k, J2^k, ..., Jl^k) Q^-1 = 0

Conversely, suppose A^k = 0 for k >= m, and let the Jordan form of A be A^ = diag(J1, ..., Jl) with blocks Ji of size ni associated with eigenvalues λi. The k-th power of a Jordan block with eigenvalue λi has λi^k on its diagonal, so Ji^k = 0 forces λi = 0; and a nilpotent Jordan block of size ni vanishes only for powers k >= ni, so ni <= m. This implies that A has only one distinct eigenvalue 0, with multiplicity n and index m or less.

3.21 Given

A = [1 1 0; 0 0 1; 0 0 1]

find A^10, A^103, and e^{At}.

Answer: The characteristic polynomial of A is Δ(λ) = det(λI − A) = λ(λ − 1)^2. Let h(λ) = β0 + β1 λ + β2 λ^2. For f(λ) = λ^10, on the spectrum of A we have

f(0) = h(0):  0^10 = 0 = β0
f(1) = h(1):  1^10 = 1 = β0 + β1 + β2
f'(1) = h'(1):  10 = β1 + 2β2

Then β0 = 0, β2 = 9, β1 = −8, and

A^10 = −8A + 9A^2 = −8[1 1 0; 0 0 1; 0 0 1] + 9[1 1 1; 0 0 1; 0 0 1] = [1 1 9; 0 0 1; 0 0 1]

To compute A^103: f(0) = 0 = β0, f(1) = 1 = β1 + β2, f'(1) = 103 = β1 + 2β2, so β2 = 102, β1 = −101, and

A^103 = −101A + 102A^2 = [1 1 102; 0 0 1; 0 0 1]

To compute e^{At}: f(0) = 1 = β0, f(1) = e^t = β0 + β1 + β2, f'(1) = t e^t = β1 + 2β2. Solving gives β0 = 1, β1 = 2e^t − t e^t − 2, β2 = t e^t − e^t + 1, and

e^{At} = β0 I + β1 A + β2 A^2 = [e^t, e^t − 1, t e^t − e^t + 1; 0, 1, e^t − 1; 0, 0, e^t]
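The results above can be spot-checked numerically; the following is a sketch using NumPy/SciPy (not part of the original solution):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]])

# A^10 = -8A + 9A^2 and A^103 = -101A + 102A^2, as derived above.
A10 = np.linalg.matrix_power(A, 10)
A103 = np.linalg.matrix_power(A, 103)
assert np.array_equal(A10, -8 * A + 9 * (A @ A))
assert np.array_equal(A103, -101 * A + 102 * (A @ A))

# e^{At} = I + b1*A + b2*A^2 with the beta coefficients derived above.
t = 0.7
b1 = 2 * np.exp(t) - t * np.exp(t) - 2
b2 = t * np.exp(t) - np.exp(t) + 1
E = expm(A * t)
assert np.allclose(E, np.eye(3) + b1 * A + b2 * (A @ A))
```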

3.22 Use two different methods to compute e^{At} for the matrices A1 and A4 in Problem 3.13.

Method 1 (Jordan form): The Jordan-form representation of

A1 = [1 4 10; 0 2 0; 0 0 3]

with respect to the basis {[1 0 0]', [4 1 0]', [5 0 1]'} is Â1 = diag{1, 2, 3}. So, with

Q = [1 4 5; 0 1 0; 0 0 1],  Q^{-1} = [1 −4 −5; 0 1 0; 0 0 1]

e^{A1 t} = Q diag(e^t, e^{2t}, e^{3t}) Q^{-1} = [e^t, 4(e^{2t} − e^t), 5(e^{3t} − e^t); 0, e^{2t}, 0; 0, 0, e^{3t}]

For A4 = [0 4 3; 0 20 16; 0 −25 −20], the Jordan form is the nilpotent block N = [0 1 0; 0 0 1; 0 0 0], that is A4 = Q N Q^{-1} with

Q = [1 0 0; 0 4 −3; 0 −5 4],  Q^{-1} = [1 0 0; 0 4 3; 0 5 4]

Since e^{Nt} = [1 t t^2/2; 0 1 t; 0 0 1],

e^{A4 t} = Q e^{Nt} Q^{-1} = [1, 4t + 5t^2/2, 3t + 2t^2; 0, 1 + 20t, 16t; 0, −25t, 1 − 20t]

Method 2 (polynomial on the spectrum): The characteristic polynomial of A1 is Δ(λ) = (λ − 1)(λ − 2)(λ − 3). Let h(λ) = β0 + β1 λ + β2 λ^2. On the spectrum of A1,

f(1) = h(1):  e^t = β0 + β1 + β2
f(2) = h(2):  e^{2t} = β0 + 2β1 + 4β2
f(3) = h(3):  e^{3t} = β0 + 3β1 + 9β2

which gives

β0 = 3e^t − 3e^{2t} + e^{3t}
β1 = −(5/2)e^t + 4e^{2t} − (3/2)e^{3t}
β2 = (1/2)(e^t − 2e^{2t} + e^{3t})

With A1^2 = [1 12 40; 0 4 0; 0 0 9],

e^{A1 t} = β0 I + β1 A1 + β2 A1^2 = [e^t, 4(e^{2t} − e^t), 5(e^{3t} − e^t); 0, e^{2t}, 0; 0, 0, e^{3t}]

which agrees with Method 1. The characteristic polynomial of A4 is Δ(λ) = λ^3. On the spectrum of A4,

f(0) = h(0):  1 = β0
f'(0) = h'(0):  t = β1
f''(0) = h''(0):  t^2 = 2β2

With A4^2 = [0 5 4; 0 0 0; 0 0 0],

e^{A4 t} = I + t A4 + (t^2/2) A4^2 = [1, 4t + 5t^2/2, 3t + 2t^2; 0, 1 + 20t, 16t; 0, −25t, 1 − 20t]
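Both closed forms can be compared against a general-purpose matrix exponential; a NumPy/SciPy sketch (the sample time t is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A1 = np.array([[1, 4, 10], [0, 2, 0], [0, 0, 3]], dtype=float)
A4 = np.array([[0, 4, 3], [0, 20, 16], [0, -25, -20]], dtype=float)

t = 0.3
e1, e2, e3 = np.exp(t), np.exp(2 * t), np.exp(3 * t)
# Closed-form e^{A1 t} from the solution above.
E1 = np.array([[e1, 4 * (e2 - e1), 5 * (e3 - e1)],
               [0, e2, 0],
               [0, 0, e3]])
# Closed-form e^{A4 t}; A4 is nilpotent (A4^3 = 0), so the series terminates.
E4 = np.array([[1, 4 * t + 2.5 * t**2, 3 * t + 2 * t**2],
               [0, 1 + 20 * t, 16 * t],
               [0, -25 * t, 1 - 20 * t]])
assert np.allclose(expm(A1 * t), E1)
assert np.allclose(expm(A4 * t), E4)
```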
3.23 Show that functions of the same matrix commute; that is, f(A)g(A) = g(A)f(A). Consequently we have A e^{At} = e^{At} A.

Proof: Let n be the order of A and write (using Theorem 3.5)

f(A) = α0 I + α1 A + ... + α_{n−1} A^{n−1}
g(A) = β0 I + β1 A + ... + β_{n−1} A^{n−1}

Multiplying out and collecting powers of A,

f(A)g(A) = Σ_{k=0}^{2n−2} ( Σ_i αi β_{k−i} ) A^k

Since the scalar coefficients commute and powers of A commute with one another, exactly the same expression is obtained for g(A)f(A); hence f(A)g(A) = g(A)f(A). Letting f(A) = A and g(A) = e^{At}, we have A e^{At} = e^{At} A.

3.24 Let

C = [λ1 0 0; 0 λ2 0; 0 0 λ3]

Find a B such that e^B = C. Show that if λi = 0 for some i, then B does not exist. Is it true that, for any nonsingular C, there exists a matrix B such that e^B = C?

Answer: Let f(λ) = ln λ, so that e^{f(λ)} = λ and e^{f(C)} = C. For diagonal C,

B = ln C = [ln λ1 0 0; 0 ln λ2 0; 0 0 ln λ3]

which is well defined as a real matrix when λi > 0 for i = 1, 2, 3. If λi = 0 for some i, then ln λi does not exist, so B does not exist (indeed e^B is always nonsingular, while such a C would be singular). If C is nonsingular but some λi < 0, then ln λi is not defined as a real number, so a real B need not exist. We conclude that it is not true that for every nonsingular C there exists a real B such that e^B = C.
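For the positive-diagonal case, the entrywise logarithm can be compared with SciPy's general matrix logarithm; a small sketch (the diagonal entries are an arbitrary example):

```python
import numpy as np
from scipy.linalg import expm, logm

# C = diag(l1, l2, l3) with li > 0; then B = diag(ln li) satisfies e^B = C.
C = np.diag([1.0, 2.0, 3.0])
B = np.diag(np.log(np.diag(C)))     # ln of each diagonal entry
assert np.allclose(expm(B), C)

# SciPy's general matrix logarithm agrees on this example.
assert np.allclose(logm(C), B)
```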

3.25 Let

(sI − A)^{-1} = (1/Δ(s)) Adj(sI − A)

and let m(s) be the monic greatest common divisor of all entries of Adj(sI − A). Verify for the matrix A3 in Problem 3.13 that the minimal polynomial of A equals Δ(s)/m(s).

Verification:

A3 = [1 0 1; 0 1 0; 0 0 2],  Δ(s) = (s − 1)^2 (s − 2)

sI − A3 = [s−1 0 −1; 0 s−1 0; 0 0 s−2]

Adj(sI − A3) = [(s−1)(s−2), 0, (s−1); 0, (s−1)(s−2), 0; 0, 0, (s−1)^2]

so m(s) = s − 1, and

Δ(s)/m(s) = (s − 1)(s − 2)

which is easily checked to be the minimal polynomial ψ(s) of A3 (indeed (A3 − I)(A3 − 2I) = 0).

3.26 Define

(sI − A)^{-1} = (1/Δ(s)) [R0 s^{n−1} + R1 s^{n−2} + ... + R_{n−2} s + R_{n−1}]

where Δ(s) = det(sI − A) := s^n + α1 s^{n−1} + α2 s^{n−2} + ... + αn and the Ri are constant matrices. This definition is valid because the degree in s of the adjoint of (sI − A) is at most n − 1. Verify

α1 = −tr(A R0)/1,        R0 = I
α2 = −tr(A R1)/2,        R1 = A R0 + α1 I = A + α1 I
α3 = −tr(A R2)/3,        R2 = A R1 + α2 I = A^2 + α1 A + α2 I
...
α_{n−1} = −tr(A R_{n−2})/(n−1),  R_{n−1} = A R_{n−2} + α_{n−1} I = A^{n−1} + α1 A^{n−2} + ... + α_{n−2} A + α_{n−1} I
αn = −tr(A R_{n−1})/n,   0 = A R_{n−1} + αn I

where tr stands for the trace of a matrix and is defined as the sum of all its diagonal entries. This process of computing αi and Ri is called the Leverrier algorithm.

Verification: Using the recursion Ri = A R_{i−1} + αi I,

(sI − A)[R0 s^{n−1} + R1 s^{n−2} + ... + R_{n−2} s + R_{n−1}]
 = R0 s^n + (R1 − A R0) s^{n−1} + (R2 − A R1) s^{n−2} + ... + (R_{n−1} − A R_{n−2}) s − A R_{n−1}
 = I (s^n + α1 s^{n−1} + α2 s^{n−2} + ... + α_{n−1} s + αn)
 = Δ(s) I

which implies

(sI − A)^{-1} = (1/Δ(s)) [R0 s^{n−1} + R1 s^{n−2} + ... + R_{n−2} s + R_{n−1}]

where Δ(s) = s^n + α1 s^{n−1} + α2 s^{n−2} + ... + αn.
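The Leverrier recursion is easy to run numerically; a NumPy sketch (the example matrix is arbitrary, not from the text):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -4.0]])
n = A.shape[0]

R = np.eye(n)                       # R0 = I
coeffs = [1.0]                      # leading coefficient of det(sI - A)
for k in range(1, n + 1):
    a = -np.trace(A @ R) / k        # a_k = -tr(A R_{k-1}) / k
    coeffs.append(a)
    R = A @ R + a * np.eye(n)       # R_k = A R_{k-1} + a_k I

# The coefficients match the characteristic polynomial, and the last
# recursion step gives A R_{n-1} + a_n I = 0.
assert np.allclose(coeffs, np.poly(A))
assert np.allclose(R, 0)
```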

3.27 Use Problem 3.26 to prove the Cayley–Hamilton theorem.

Proof: From Problem 3.26,

R0 = I
R1 − A R0 = α1 I
R2 − A R1 = α2 I
...
R_{n−1} − A R_{n−2} = α_{n−1} I
−A R_{n−1} = αn I

Multiplying these equations by A^n, A^{n−1}, A^{n−2}, ..., A, I respectively yields

A^n R0 = A^n
A^{n−1} R1 − A^n R0 = α1 A^{n−1}
A^{n−2} R2 − A^{n−1} R1 = α2 A^{n−2}
...
A R_{n−1} − A^2 R_{n−2} = α_{n−1} A
−A R_{n−1} = αn I

Summing all of these, the left-hand sides telescope to zero, while the right-hand sides sum to

A^n + α1 A^{n−1} + α2 A^{n−2} + ... + α_{n−1} A + αn I = Δ(A)

so Δ(A) = 0.
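A quick numerical instance of Δ(A) = 0 (the example matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 4.0, 10.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
c = np.poly(A)                      # [1, a1, ..., an] of det(sI - A)
n = len(c) - 1
# Evaluate Delta(A) = A^n + a1 A^{n-1} + ... + an I.
Delta_A = sum(c[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
assert np.allclose(Delta_A, 0)
```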

3.28 Use Problem 3.26 to show

(sI − A)^{-1} = (1/Δ(s)) [A^{n−1} + (s + α1) A^{n−2} + (s^2 + α1 s + α2) A^{n−3} + ... + (s^{n−1} + α1 s^{n−2} + ... + α_{n−1}) I]

Proof: From Problem 3.26, Rk = A^k + α1 A^{k−1} + ... + α_{k−1} A + αk I, so

(sI − A)^{-1} = (1/Δ(s)) [R0 s^{n−1} + R1 s^{n−2} + ... + R_{n−2} s + R_{n−1}]
 = (1/Δ(s)) [I s^{n−1} + (A + α1 I) s^{n−2} + (A^2 + α1 A + α2 I) s^{n−3} + ...
     + (A^{n−2} + α1 A^{n−3} + ... + α_{n−2} I) s + (A^{n−1} + α1 A^{n−2} + ... + α_{n−2} A + α_{n−1} I)]

Collecting the coefficient of each power of A instead of each power of s gives

 = (1/Δ(s)) [A^{n−1} + (s + α1) A^{n−2} + (s^2 + α1 s + α2) A^{n−3} + ... + (s^{n−1} + α1 s^{n−2} + ... + α_{n−1}) I]

3.29 Let all eigenvalues of A be distinct and let qi be a right eigenvector of A associated with λi, that is, A qi = λi qi. Define Q = [q1 q2 ... qn] and define

P := Q^{-1} = [p1; p2; ...; pn]

where pi is the i-th row of P. Show that pi is a left eigenvector of A associated with λi, that is, pi A = λi pi.

Proof: Since all eigenvalues of A are distinct and qi is a right eigenvector associated with λi, Q = [q1 q2 ... qn] is nonsingular and

Q^{-1} A Q = Â = diag(λ1, λ2, ..., λn)

that is, PA = ÂP. Writing this out row by row,

[p1 A; p2 A; ...; pn A] = [λ1 p1; λ2 p2; ...; λn pn]

so pi A = λi pi; that is, pi is a left eigenvector of A associated with λi.
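This is easy to confirm numerically; a NumPy sketch (the example matrix, with distinct eigenvalues, is an assumption):

```python
import numpy as np

A = np.array([[1.0, 4.0, 5.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
lam, Q = np.linalg.eig(A)           # columns of Q are right eigenvectors
P = np.linalg.inv(Q)                # rows of P are left eigenvectors
for i in range(3):
    # p_i A = lambda_i p_i
    assert np.allclose(P[i] @ A, lam[i] * P[i])
```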

3.30 Show that if all eigenvalues of A are distinct, then (sI − A)^{-1} can be expressed as

(sI − A)^{-1} = Σ_i (1/(s − λi)) qi pi

where qi and pi are right and left eigenvectors of A associated with λi.

Proof: If all eigenvalues of A are distinct, then Q = [q1 q2 ... qn] is nonsingular and P = Q^{-1} = [p1; p2; ...; pn], where pi is a left eigenvector of A associated with λi (Problem 3.29). From PQ = QP = I we have pi qj = 1 for i = j and 0 for i ≠ j, and

I = QP = Σ_i qi pi,  A = Q Â P = Σ_i λi qi pi,  so  sI − A = Σ_j (s − λj) qj pj

Therefore

[Σ_i (1/(s − λi)) qi pi] (sI − A) = Σ_i Σ_j ((s − λj)/(s − λi)) qi (pi qj) pj = Σ_i qi pi = I

that is, (sI − A)^{-1} = Σ_i (1/(s − λi)) qi pi.
3.31 Find the M to meet the Lyapunov equation in (3.59) with

A = [0 1; −2 −2],  B = 3,  C = [3; 3]

What are the eigenvalues of the Lyapunov equation? Is the Lyapunov equation singular? Is the solution unique?

Answer: Here AM + MB = C reads (A + 3I)M = C, that is,

[3 1; −2 1] M = [3; 3],  so  M = [0; 3]

The eigenvalues of A follow from Δ_A(λ) = det(λI − A) = (λ + 1 − j)(λ + 1 + j), i.e., λ1,2 = −1 ± j, and the eigenvalue of B is μ = 3. The eigenvalues of the Lyapunov equation are

η1 = λ1 + μ = −1 + j + 3 = 2 + j,  η2 = λ2 + μ = −1 − j + 3 = 2 − j

Since no ηi is zero, the Lyapunov equation is nonsingular, and for every C there is a unique M satisfying the equation.
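The solution can be checked with SciPy's Sylvester solver, which handles AM + MB = C directly; a sketch:

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0], [-2.0, -2.0]])
B = np.array([[3.0]])               # scalar B as a 1x1 matrix
C = np.array([[3.0], [3.0]])
M = solve_sylvester(A, B, C)        # solves AM + MB = C
assert np.allclose(M, [[0.0], [3.0]])
assert np.allclose(A @ M + M @ B, C)
```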

3.32 Repeat Problem 3.31 for

A = [0 1; −1 −2],  B = 1,  C1 = [3; 3],  C2 = [3; −3]

with the two different C.

Answer: Δ_A(λ) = (λ + 1)^2 and Δ_B(μ) = μ − 1. The eigenvalues of the Lyapunov equation are η1 = η2 = −1 + 1 = 0; the Lyapunov equation is singular because it has zero eigenvalues. Here AM + MB = C reads (A + I)M = C:

[1 1; −1 −1] M = C1 = [3; 3]:  no solution, because C1 does not lie in the range space of the equation.

[1 1; −1 −1] M = C2 = [3; −3]:  solutions exist but are not unique; M = [α; 3 − α] for any α.

In general, if C lies in the range space of a singular Lyapunov equation, then solutions exist and are not unique.

3.33 Check to see if the following matrices are positive definite or semidefinite:

[2 3 2; 3 1 0; 2 0 2],  [0 0 −1; 0 0 0; −1 0 2],  [a1a1 a1a2 a1a3; a2a1 a2a2 a2a3; a3a1 a3a2 a3a3]

1. The leading principal minor det[2 3; 3 1] = 2 − 9 = −7 < 0, so the first matrix is not positive definite, nor is it positive semidefinite.

2. det(λI − M) = λ(λ − 1 − √2)(λ − 1 + √2); the matrix has a negative eigenvalue 1 − √2, so the second matrix is not positive definite, nor is it positive semidefinite.

3. The third matrix's principal minors of order 1 are a1a1 ≥ 0, a2a2 ≥ 0, a3a3 ≥ 0, and all principal minors of order 2 and 3 vanish, e.g.

det[a1a1 a1a2; a2a1 a2a2] = 0,  det of the full matrix = 0

That is, all the principal minors of the third matrix are zero or positive, so the matrix is positive semidefinite.
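An eigenvalue check confirms these conclusions; a NumPy sketch (the sample values of a1, a2, a3 for the third matrix are my own, the conclusion holds for any real values):

```python
import numpy as np

M1 = np.array([[2, 3, 2], [3, 1, 0], [2, 0, 2]], dtype=float)
M2 = np.array([[0, 0, -1], [0, 0, 0], [-1, 0, 2]], dtype=float)
a = np.array([1.0, -2.0, 3.0])
M3 = np.outer(a, a)                  # [a_i a_j]: rank one

assert np.linalg.eigvalsh(M1).min() < 0                       # indefinite
assert np.isclose(np.linalg.eigvalsh(M2).min(), 1 - np.sqrt(2))
assert np.linalg.eigvalsh(M3).min() > -1e-9                   # all >= 0
```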
3.34 Compute the singular values of the following matrices:

A1 = [1 0 1; 2 1 0],  A2 = [−1 2; 2 4]

Answer:

A1 A1' = [1 0 1; 2 1 0][1 2; 0 1; 1 0] = [2 2; 2 5]

Δ(λ) = (λ − 2)(λ − 5) − 4 = λ^2 − 7λ + 6 = (λ − 1)(λ − 6)

The eigenvalues of A1 A1' are 6 and 1; thus the singular values of A1 are √6 and 1.

A2' A2 = [−1 2; 2 4][−1 2; 2 4] = [5 6; 6 20]

Δ(λ) = (λ − 5)(λ − 20) − 36 = (λ − 25/2 − (3/2)√41)(λ − 25/2 + (3/2)√41)

The eigenvalues of A2' A2 are 25/2 ± (3/2)√41; thus the singular values of A2 are (25/2 + (3/2)√41)^{1/2} = 4.70 and (25/2 − (3/2)√41)^{1/2} = 1.70.
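The values agree with a direct SVD; a NumPy sketch:

```python
import numpy as np

A1 = np.array([[1.0, 0.0, 1.0], [2.0, 1.0, 0.0]])
A2 = np.array([[-1.0, 2.0], [2.0, 4.0]])

# Singular values are returned in descending order.
assert np.allclose(np.linalg.svd(A1, compute_uv=False), [np.sqrt(6), 1.0])
s = np.linalg.svd(A2, compute_uv=False)
assert np.allclose(s, np.sqrt(12.5 + np.array([1, -1]) * 1.5 * np.sqrt(41)))
```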

3.35 If A is symmetric, what is the relationship between its eigenvalues and singular values?

Answer: If A is symmetric, then A'A = AA' = A^2. Let qi be an eigenvector of A associated with eigenvalue λi, that is, A qi = λi qi (i = 1, 2, ..., n). Then

A^2 qi = A(λi qi) = λi A qi = λi^2 qi  (i = 1, 2, ..., n)

which implies that λi^2 is an eigenvalue of A'A = A^2 for i = 1, 2, ..., n (n is the order of A). So the singular values of A are |λi| for i = 1, 2, ..., n, where λi are the eigenvalues of A.

3.36 Show

det( I_n + [a1; a2; ...; an][b1 b2 ... bn] ) = 1 + Σ_{m=1}^{n} am bm

Proof: Let A = [a1; a2; ...; an] and B = [b1 b2 ... bn]; A is n×1 and B is 1×n. Using (3.64) we can readily obtain

det(I_n + AB) = det(I_1 + BA) = 1 + Σ_{m=1}^{n} am bm

3.37 Show (3.65); that is, s^n det(sI_m − AB) = s^m det(sI_n − BA).

Proof: Let

N = [I_m A; 0 sI_n],  Q = [I_m 0; B sI_n],  P = [sI_m −A; −B I_n]

Then we have

NP = [sI_m − AB, 0; −sB, sI_n],  QP = [sI_m, −A; 0, sI_n − BA]

Because

det(NP) = det N det P = s^n det P
det(QP) = det Q det P = s^n det P

we have det(NP) = det(QP). Evaluating the two block-triangular determinants,

det(NP) = det(sI_m − AB) det(sI_n) = s^n det(sI_m − AB)
det(QP) = det(sI_m) det(sI_n − BA) = s^m det(sI_n − BA)

and equating them yields (3.65).
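A random numerical instance of the identity (dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

s = 1.7
# s^n det(sI_m - AB) = s^m det(sI_n - BA)
lhs = s**n * np.linalg.det(s * np.eye(m) - A @ B)
rhs = s**m * np.linalg.det(s * np.eye(n) - B @ A)
assert np.isclose(lhs, rhs)
```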

3.38 Consider Ax = y, where A is m×n and has rank m. Is x = (A'A)^{-1}A'y a solution? If not, under what condition will it be a solution? Is x = A'(AA')^{-1}y a solution?

Answer: A is m×n with rank m, so m ≤ n, A'A is an n×n matrix, and rank(A'A) ≤ rank A = m ≤ n.

If m < n, then rank(A'A) < n, so (A'A)^{-1} does not exist and (A'A)^{-1}A'y is not a solution of Ax = y.

If m = n, then A is nonsingular and rank(A'A) = n; in this case

A[(A'A)^{-1}A'y] = A A^{-1}(A')^{-1}A'y = y

so (A'A)^{-1}A'y is a solution.

Since rank A = m, we have rank(AA') = m, so the m×m matrix AA' is nonsingular and (AA')^{-1} exists. Then

A[A'(AA')^{-1}y] = (AA')(AA')^{-1}y = y

that is, A'(AA')^{-1}y is always a solution of Ax = y.
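A NumPy sketch of the full-row-rank case (the example A and y are my own):

```python
import numpy as np

# A is 2x3 with rank 2 (m < n), so AA' is invertible but A'A is not.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0]])
y = np.array([3.0, -1.0])

x = A.T @ np.linalg.inv(A @ A.T) @ y    # x = A'(AA')^{-1} y
assert np.allclose(A @ x, y)            # it solves Ax = y

# A'A is 3x3 with rank 2, hence singular, so (A'A)^{-1}A'y is not defined.
assert np.linalg.matrix_rank(A.T @ A) == 2
```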


PROBLEMS OF CHAPTER 4
4.1 An oscillation can be generated by

ẋ = [0 1; −1 0] x

Show that its solution is

x(t) = [cos t, sin t; −sin t, cos t] x(0)

Proof: x(t) = e^{At} x(0) with A = [0 1; −1 0]; the eigenvalues of A are j and −j. Let h(λ) = β0 + β1 λ. If f(λ) = e^{λt}, then on the spectrum of A,

h(j) = β0 + β1 j = e^{jt} = cos t + j sin t
h(−j) = β0 − β1 j = e^{−jt} = cos t − j sin t

so β0 = cos t and β1 = sin t. Then

e^{At} = h(A) = β0 I + β1 A = cos t [1 0; 0 1] + sin t [0 1; −1 0] = [cos t, sin t; −sin t, cos t]

and x(t) = e^{At} x(0) = [cos t, sin t; −sin t, cos t] x(0).
4.2 Use two different methods to find the unit-step response of

ẋ = [0 1; −2 −2] x + [1; 1] u
y = [2 3] x

Answer: assuming the initial state is zero.

Method 1: we use (3.20) to compute

(sI − A)^{-1} = [s −1; 2 s+2]^{-1} = (1/(s^2 + 2s + 2)) [s+2 1; −2 s]

then

e^{At} = L^{-1}[(sI − A)^{-1}] = e^{−t} [cos t + sin t, sin t; −2 sin t, cos t − sin t]

and

Y(s) = C(sI − A)^{-1} B U(s) = 5s/((s^2 + 2s + 2)s) = 5/(s^2 + 2s + 2)

so y(t) = 5 e^{−t} sin t for t ≥ 0.

Method 2:

y(t) = C ∫_0^t e^{A(t−τ)} B u(τ) dτ = C ( ∫_0^t e^{Aτ} dτ ) B = C A^{-1}(e^{At} − I) B

 = [2 3] [−1 −0.5; 1 0] ( [e^{−t}(cos t + 2 sin t); e^{−t}(cos t − 3 sin t)] − [1; 1] )
 = 5 e^{−t} sin t  for t ≥ 0
4.3 Discretize the state equation in Problem 4.2 for T = 1 and T = π.

Answer:

x[k+1] = e^{AT} x[k] + ( ∫_0^T e^{Aτ} dτ ) B u[k]
y[k] = C x[k] + D u[k]

For T = 1, use MATLAB:
[ab,bd]=c2d(a,b,1)
ab = 0.5083    0.3096
    -0.6191   -0.1108
bd = 1.0471
    -0.1821

x[k+1] = [0.5083 0.3096; −0.6191 −0.1108] x[k] + [1.0471; −0.1821] u[k]
y[k] = [2 3] x[k]

For T = π, use MATLAB:
[ab,bd]=c2d(a,b,3.1415926)
ab = -0.0432    0.0000
     -0.0000   -0.0432
bd = 1.5648
    -1.0432

x[k+1] = [−0.0432 0; 0 −0.0432] x[k] + [1.5648; −1.0432] u[k]
y[k] = [2 3] x[k]
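The same discretization can be reproduced without the Control toolbox; a NumPy/SciPy sketch of what c2d computes (using Bd = A^{-1}(e^{AT} − I)B, valid here since A is nonsingular):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -2.0]])
B = np.array([[1.0], [1.0]])
T = 1.0

Ad = expm(A * T)                                  # e^{AT}
Bd = np.linalg.inv(A) @ (Ad - np.eye(2)) @ B      # integral_0^T e^{At} dt B

assert np.allclose(Ad, [[0.5083, 0.3096], [-0.6191, -0.1108]], atol=1e-4)
assert np.allclose(Bd, [[1.0471], [-0.1821]], atol=1e-4)
```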
4.4 Find the companion-form and modal-form equivalent equations of

ẋ = [−2 0 0; 1 0 1; 0 −2 −2] x + [1; 0; 1] u
y = [1 −1 0] x

Answer: use [ab,bb,cb,db,p]=canon(a,b,c,d,'companion'). We get the companion form

ab = 0  0 -4      bb = 1      cb = 1 -4  8      db = 0
     1  0 -6           0
     0  1 -4           0
p  = 1.0000  1.0000       0
     0.5000  0.5000 -0.5000
     0.2500       0 -0.2500

that is,

ẋ̄ = [0 0 −4; 1 0 −6; 0 1 −4] x̄ + [1; 0; 0] u
y = [1 −4 8] x̄

Use [ab,bb,cb,db,p]=canon(a,b,c,d); we get the modal form

ab = -1  1  0     bb = -3.4641     cb = 0 -0.5774 0.7071     db = 0
     -1 -1  0           0
      0  0 -2           1.4142
p  = -1.7321 -1.7321 -1.7321
          0   1.7321       0
      1.4142       0       0

that is,

ẋ̄ = [−1 1 0; −1 −1 0; 0 0 −2] x̄ + [−3.4641; 0; 1.4142] u
y = [0 −0.5774 0.7071] x̄
4.5 Find an equivalent state equation of the equation in Problem 4.4 so that all state variables have their largest magnitudes roughly equal to the largest magnitude of the output. If all signals are required to lie inside ±10 volts and if the input is a step function with magnitude a, what is the permissible largest a?

Answer: first we use MATLAB to find its unit-step response. We type

a=[-2 0 0;1 0 1;0 -2 -2]; b=[1 0 1]'; c=[1 -1 0]; d=0;
[y,x,t]=step(a,b,c,d)
plot(t,y,'.',t,x(:,1),'*',t,x(:,2),'-.',t,x(:,3),'--')

[Figure: unit-step responses of the output y and the state variables x1, x2, x3.]

From the figure we read off max|y| = 0.55, max|x1| = 0.5, max|x2| = 1.05, and max|x3| = 0.52 for a unit-step input. Define x̄1 = x1, x̄2 = 0.5 x2, x̄3 = x3; then

ẋ̄ = [−2 0 0; 0.5 0 0.5; 0 −4 −2] x̄ + [1; 0; 1] u
y = [1 −2 0] x̄

and all state variables now have largest magnitudes roughly equal to max|y|. The largest permissible a is 10/0.55 = 18.2.

4.6 Consider

ẋ = [λ 0; 0 λ̄] x + [b1; b̄1] u,  y = [c1 c̄1] x

where the overbar denotes complex conjugate. Verify that the equation can be transformed into

ẋ̄ = Ā x̄ + B̄ u,  y = C̄ x̄

with

Ā = [0 1; −λλ̄, λ + λ̄],  B̄ = [0; 1],  C̄ = [−2 Re(λ̄ b1 c1), 2 Re(b1 c1)]

by using the transformation x = Q x̄ with

Q = [−λ̄ b1, b1; −λ b̄1, b̄1]

Answer: let x = Q x̄; then Q ẋ̄ = A Q x̄ + B u and y = C Q x̄, so ẋ̄ = Q^{-1}AQ x̄ + Q^{-1}B u. Since det Q = (λ − λ̄) b1 b̄1,

Q^{-1} = (1/((λ − λ̄) b1 b̄1)) [b̄1, −b1; λ b̄1, −λ̄ b1]

Direct multiplication gives

Q^{-1} A Q = (1/((λ − λ̄) b1 b̄1)) [b̄1, −b1; λ b̄1, −λ̄ b1] [λ 0; 0 λ̄] [−λ̄ b1, b1; −λ b̄1, b̄1] = [0 1; −λλ̄, λ + λ̄] = Ā

Q^{-1} B = (1/((λ − λ̄) b1 b̄1)) [b̄1 b1 − b1 b̄1; λ b̄1 b1 − λ̄ b1 b̄1] = [0; 1] = B̄

C Q = [c1 c̄1] [−λ̄ b1, b1; −λ b̄1, b̄1] = [−(λ̄ b1 c1 + λ b̄1 c̄1), b1 c1 + b̄1 c̄1] = [−2 Re(λ̄ b1 c1), 2 Re(b1 c1)] = C̄
4.7 Verify that the Jordan-form equation

ẋ = [λ 1 0 0 0 0; 0 λ 1 0 0 0; 0 0 λ 0 0 0; 0 0 0 λ̄ 1 0; 0 0 0 0 λ̄ 1; 0 0 0 0 0 λ̄] x + [b1; b2; b3; b̄1; b̄2; b̄3] u
y = [c1 c2 c3 c̄1 c̄2 c̄3] x

can be transformed into

ẋ̄ = [Ā I2 0; 0 Ā I2; 0 0 Ā] x̄ + [B̄1; B̄2; B̄3] u,  y = [C̄1 C̄2 C̄3] x̄

where Ā, B̄i, and C̄i are as defined in Problem 4.6 and I2 is the unit matrix of order 2.

Proof: Change the order of the state variables from [x1 x2 x3 x4 x5 x6] to [x1 x4 x2 x5 x3 x6]. The equation then becomes three 2×2 pairs diag(λ, λ̄) chained by identity blocks, the i-th pair carrying the input vector [bi; b̄i] and the output row [ci c̄i]. Applying the equivalence transformation of Problem 4.6 to each pair yields

ẋ̄ = [Ā I2 0; 0 Ā I2; 0 0 Ā] x̄ + [B̄1; B̄2; B̄3] u,  y = [C̄1 C̄2 C̄3] x̄
4.8 Are the two sets of state equations

ẋ = [2 1 2; 0 2 2; 0 0 1] x + [1; 1; 0] u,  y = [1 −1 0] x

and

ẋ = [2 1 1; 0 2 1; 0 0 −1] x + [1; 1; 0] u,  y = [1 −1 0] x

equivalent? Are they zero-state equivalent?

Answer:

ĝ1(s) = C(sI − A)^{-1}B
 = (1/((s−2)^2(s−1))) [1 −1 0] [(s−2)(s−1), (s−1), 2(s−1); 0, (s−2)(s−1), 2(s−2); 0, 0, (s−2)^2] [1; 1; 0]
 = (s−1)/((s−2)^2(s−1)) = 1/(s−2)^2

ĝ2(s) = C(sI − A)^{-1}B
 = (1/((s−2)^2(s+1))) [1 −1 0] [(s−2)(s+1), (s+1), (s−1); 0, (s−2)(s+1), (s−2); 0, 0, (s−2)^2] [1; 1; 0]
 = (s+1)/((s−2)^2(s+1)) = 1/(s−2)^2

Obviously ĝ1(s) = ĝ2(s), so the two equations are zero-state equivalent; but they are not equivalent, since the two matrices have different eigenvalues (2, 2, 1 versus 2, 2, −1).

4.9 Verify that the transfer matrix in (4.33) has the following realization:

ẋ = [−α1 Iq, Iq, 0, ..., 0; −α2 Iq, 0, Iq, ..., 0; ...; −α_{r−1} Iq, 0, 0, ..., Iq; −αr Iq, 0, 0, ..., 0] x + [N1; N2; ...; N_{r−1}; Nr] u
y = [Iq 0 ... 0] x

This is called the observable canonical form realization and has dimension rq. It is dual to (4.34).

Answer: define

C(sI − A)^{-1} = [Z1 Z2 ... Zr]

then

C = [Z1 Z2 ... Zr](sI − A)

Writing this out block column by block column gives

s Z1 + Σ_{i=1}^{r} αi Zi = Iq,  and  s Zi = Z_{i−1} for i = 2, ..., r,  i.e., Zi = Z1/s^{i−1}

Substituting into the first equation and solving,

Z1 = (s^{r−1}/d(s)) Iq,  Z2 = (s^{r−2}/d(s)) Iq,  ...,  Zr = (1/d(s)) Iq

where d(s) = s^r + α1 s^{r−1} + ... + αr. Then

C(sI − A)^{-1} B = Z1 N1 + Z2 N2 + ... + Zr Nr = (1/d(s)) (s^{r−1} N1 + s^{r−2} N2 + ... + Nr)

and this satisfies (4.33).
4.10 Consider the 1×2 proper rational matrix

Ĝ(s) = [d1 d2] + (1/(s^4 + α1 s^3 + α2 s^2 + α3 s + α4)) [β11 s^3 + β21 s^2 + β31 s + β41, β12 s^3 + β22 s^2 + β32 s + β42]

Show that its observable canonical form realization can be reduced from Problem 4.9 as

ẋ = [−α1 1 0 0; −α2 0 1 0; −α3 0 0 1; −α4 0 0 0] x + [β11 β12; β21 β22; β31 β32; β41 β42] u
y = [1 0 0 0] x + [d1 d2] u

Answer: In this case r = 4 and q = 1, so

Iq = 1,  N1 = [β11 β12],  N2 = [β21 β22],  N3 = [β31 β32],  N4 = [β41 β42]

and the observable canonical form realization of Problem 4.9 reduces directly to the equation above.
4.11 Find a realization for the proper rational matrix

Ĝ(s) = [2/(s+1), (2s−3)/((s+1)(s+2)); (s−2)/(s+1), s/(s+2)]
     = [2/(s+1), (2s−3)/((s+1)(s+2)); −3/(s+1), −2/(s+2)] + [0 0; 1 1]
     = (1/(s^2 + 3s + 2)) ( s [2 2; −3 −2] + [4 −3; −6 −2] ) + [0 0; 1 1]

so a realization is

ẋ = [−3 0 −2 0; 0 −3 0 −2; 1 0 0 0; 0 1 0 0] x + [1 0; 0 1; 0 0; 0 0] u
y = [2 2 4 −3; −3 −2 −6 −2] x + [0 0; 1 1] u

4.12 Find a realization for each column of Ĝ(s) in Problem 4.11 and then connect them, as shown in Fig. 4.4(a), to obtain a realization of Ĝ(s). What is the dimension of this realization? Compare this dimension with the one in Problem 4.11.

Answer:

Ĝ1(s) = [2/(s+1); (s−2)/(s+1)] = [0; 1] + (1/(s+1)) [2; −3]

ẋ1 = −x1 + u1
yC1 = [2; −3] x1 + [0; 1] u1

Ĝ2(s) = [(2s−3)/((s+1)(s+2)); s/(s+2)] = [0; 1] + (1/((s+1)(s+2))) ( s [2; −2] + [−3; −2] )

ẋ2 = [−3 −2; 1 0] x2 + [1; 0] u2
yC2 = [2 −3; −2 −2] x2 + [0; 1] u2

These two realizations can be combined as

ẋ = [−1 0 0; 0 −3 −2; 0 1 0] x + [1 0; 0 1; 0 0] u
y = [2 2 −3; −3 −2 −2] x + [0 0; 1 1] u

The dimension of the realization in 4.12 is 3, and that in 4.11 is 4.

4.13 Find a realization for each row of Ĝ(s) in Problem 4.11 and then connect them, as shown in Fig. 4.4(b), to obtain a realization of Ĝ(s). What is the dimension of this realization? Compare this dimension with the ones in Problems 4.11 and 4.12.

Answer:

Ĝ1(s) = (1/((s+1)(s+2))) [2s+4, 2s−3] = (1/(s^2+3s+2)) ( s [2 2] + [4 −3] )

ẋ1 = [−3 1; −2 0] x1 + [2 2; 4 −3] u1
y1 = [1 0] x1 + [0 0] u1

Ĝ2(s) = (1/((s+1)(s+2))) [−3(s+2), −2(s+1)] + [1 1] = (1/((s+1)(s+2))) ( s [−3 −2] + [−6 −2] ) + [1 1]

ẋ2 = [−3 1; −2 0] x2 + [−3 −2; −6 −2] u2
y2 = [1 0] x2 + [1 1] u2

These two realizations can be combined as

ẋ = [−3 1 0 0; −2 0 0 0; 0 0 −3 1; 0 0 −2 0] x + [2 2; 4 −3; −3 −2; −6 −2] u
y = [1 0 0 0; 0 0 1 0] x + [0 0; 1 1] u

The dimension of this realization is 4, equal to that of Problem 4.11; the smallest dimension is that of Problem 4.12.

4.14 Find a realization for

Ĝ(s) = [−(12s + 6)/(3s + 34), (22s + 23)/(3s + 34)]

Answer:

Ĝ(s) = [−4, 22/3] + (1/(s + 34/3)) [130/3, −679/9]

so a one-dimensional realization is

ẋ = −(34/3) x + [130/3, −679/9] u
y = x + [−4, 22/3] u
4.16 Find fundamental matrices and state transition matrices for

ẋ = [0 1; 0 t] x  and  ẋ = [−1 e^{2t}; 0 −1] x

Answer: for the first case,

ẋ2 = t x2  ⟹  x2(t) = x2(0) e^{t²/2}
ẋ1 = x2   ⟹  x1(t) = x1(0) + ∫_0^t x2(τ) dτ

For x(0) = [1; 0] we get x(t) = [1; 0], and for x(0) = [0; 1] we get x(t) = [∫_0^t e^{τ²/2} dτ; e^{t²/2}]. The two initial states are linearly independent; thus

X(t) = [1, ∫_0^t e^{τ²/2} dτ; 0, e^{t²/2}]

is a fundamental matrix, and

Φ(t, t0) = X(t) X^{-1}(t0) = [1, e^{−t0²/2} ∫_{t0}^{t} e^{τ²/2} dτ; 0, e^{(t² − t0²)/2}]

For the second case,

ẋ2 = −x2  ⟹  x2(t) = x2(0) e^{−t}
ẋ1 = −x1 + e^{2t} x2 = −x1 + e^{t} x2(0)  ⟹  x1(t) = x1(0) e^{−t} + 0.5 x2(0)(e^{t} − e^{−t})

For x(0) = [1; 0], x(t) = [e^{−t}; 0]; for x(0) = [0; 1], x(t) = [0.5(e^{t} − e^{−t}); e^{−t}]. The two initial states are linearly independent; thus

X(t) = [e^{−t}, 0.5(e^{t} − e^{−t}); 0, e^{−t}]

is a fundamental matrix, and

Φ(t, t0) = X(t) X^{-1}(t0) = [e^{−(t−t0)}, 0.5 e^{2t0}(e^{t−t0} − e^{−(t−t0)}); 0, e^{−(t−t0)}]

4.17 Show ∂Φ(t0, t)/∂t = −Φ(t0, t) A(t).

Proof: first we know

∂Φ(t, t0)/∂t = A(t) Φ(t, t0)

Differentiating the identity Φ(t, t0) Φ(t0, t) = I with respect to t gives

0 = [∂Φ(t, t0)/∂t] Φ(t0, t) + Φ(t, t0) [∂Φ(t0, t)/∂t]
  = A(t) Φ(t, t0) Φ(t0, t) + Φ(t, t0) [∂Φ(t0, t)/∂t]
  = A(t) + Φ(t, t0) [∂Φ(t0, t)/∂t]

Premultiplying by Φ(t0, t) = Φ^{-1}(t, t0) yields

∂Φ(t0, t)/∂t = −Φ(t0, t) A(t)

4.18 Given

A(t) = [a11(t) a12(t); a21(t) a22(t)]

show

det Φ(t, t0) = exp( ∫_{t0}^{t} (a11(τ) + a22(τ)) dτ )

Proof: writing Φ = [Φ11 Φ12; Φ21 Φ22], the equation ∂Φ/∂t = A(t)Φ gives

[Φ̇11 Φ̇12; Φ̇21 Φ̇22] = [a11 a12; a21 a22] [Φ11 Φ12; Φ21 Φ22]

so

d/dt (det Φ) = d/dt (Φ11 Φ22 − Φ21 Φ12) = Φ̇11 Φ22 + Φ11 Φ̇22 − Φ̇21 Φ12 − Φ21 Φ̇12
 = (a11Φ11 + a12Φ21)Φ22 + Φ11(a21Φ12 + a22Φ22) − (a21Φ11 + a22Φ21)Φ12 − Φ21(a11Φ12 + a12Φ22)
 = (a11 + a22)(Φ11Φ22 − Φ21Φ12) = (a11 + a22) det Φ

Therefore

det Φ(t, t0) = c · exp( ∫_{t0}^{t} (a11(τ) + a22(τ)) dτ )

and Φ(t0, t0) = I gives c = 1; hence det Φ(t, t0) = exp( ∫_{t0}^{t} (a11(τ) + a22(τ)) dτ ).
4.20 Find the state transition matrix of

ẋ = [−sin t, 0; 0, cos t] x

Answer:

ẋ1 = −sin t · x1  ⟹  x1(t) = x1(0) e^{cos t − 1}
ẋ2 = cos t · x2  ⟹  x2(t) = x2(0) e^{sin t}

The two columns obtained from x(0) = [1; 0] and x(0) = [0; 1] are linearly independent; thus a fundamental matrix is

X(t) = [e^{cos t − 1}, 0; 0, e^{sin t}]

and

Φ(t, t0) = X(t) X^{-1}(t0) = [e^{cos t − cos t0}, 0; 0, e^{sin t − sin t0}]

4.21 Verify that X(t) = e^{At} C e^{Bt} is the solution of

Ẋ = AX + XB,  X(0) = C

Verify: first we know (d/dt)e^{At} = A e^{At} = e^{At} A; then

Ẋ = (d/dt)(e^{At} C e^{Bt}) = (A e^{At}) C e^{Bt} + e^{At} C (e^{Bt} B) = AX + XB
X(0) = e^0 C e^0 = C

Q.E.D.
4.23 Find an equivalent time-invariant state equation of the equation in Problem 4.20.

Answer: let

P(t) = X^{-1}(t) = [e^{1 − cos t}, 0; 0, e^{−sin t}]

and x̄(t) = P(t) x(t). Then

Ā(t) = [P(t)A(t) + Ṗ(t)] P^{-1}(t) = 0

so the equivalent time-invariant equation is ẋ̄(t) = 0.

4.24 Transform a time-invariant (A, B, C) into (0, B̄(t), C̄(t)) by a time-varying equivalence transformation.

Answer: since (A, B, C) is time-invariant, X(t) = e^{At} is a fundamental matrix. Choose

P(t) = X^{-1}(t) = e^{−At}

Then

Ā(t) = 0,  B̄(t) = X^{-1}(t) B = e^{−At} B,  C̄(t) = C X(t) = C e^{At}

and Φ(t, t0) = X(t) X^{-1}(t0) = e^{A(t − t0)}.

4.25 Find a time-varying realization and a time-invariant realization of the impulse response g(t) = t² e^{λt}.

Answer:

g(t, τ) = g(t − τ) = (t − τ)² e^{λ(t − τ)} = (t² − 2tτ + τ²) e^{λt} e^{−λτ}
       = [t² e^{λt}, −2t e^{λt}, e^{λt}] [e^{−λτ}; τ e^{−λτ}; τ² e^{−λτ}]

so a time-varying realization is

ẋ(t) = 0 · x(t) + [e^{−λt}; t e^{−λt}; t² e^{−λt}] u(t)
y(t) = [t² e^{λt}, −2t e^{λt}, e^{λt}] x(t)

For a time-invariant realization,

ĝ(s) = L[t² e^{λt}] = 2/(s³ − 3λs² + 3λ²s − λ³)

Using (4.41), we get the time-invariant realization

ẋ = [3λ, −3λ², λ³; 1, 0, 0; 0, 1, 0] x + [1; 0; 0] u
y = [0 0 2] x

4.26 Find a realization of g(t, τ) = sin t (e^{−(t−τ)}) cos τ. Is it possible to find a time-invariant state equation realization?

Answer: clearly

g(t, τ) = sin t (e^{−(t−τ)}) cos τ = (sin t · e^{−t})(e^{τ} cos τ) = M(t) N(τ)

so using Theorem 4.5 we get the realization

ẋ(t) = e^{t} cos t · u(t)
y(t) = sin t · e^{−t} x(t)

But g(t, τ) cannot be written as a function of t − τ alone, i.e., g(t, τ) ≠ g(t − τ), so it is impossible to find a time-invariant state equation realization.


PROBLEMS OF CHAPTER 5

5.1 Is the network shown in Fig. 5.2 BIBO stable? If not, find a bounded input that will excite an unbounded output.

Answer: From Fig. 5.2, assuming zero initial conditions and applying the Laplace transform to the network equations, we obtain

ĝ(s) = ŷ(s)/û(s) = s/(s² + 1)

so the impulse response is

g(t) = L^{-1}[s/(s² + 1)] = cos t

Because each half-period of |cos t| contributes 2 to the integral, ∫_0^∞ |g(t)| dt = ∫_0^∞ |cos t| dt diverges; g(t) is not absolutely integrable, and the network shown in Fig. 5.2 is not BIBO stable.

If u(t) = sin t, which is bounded, then

y(t) = ∫_0^t g(t − τ) u(τ) dτ = ∫_0^t cos(t − τ) sin τ dτ = (1/2) t sin t

which is not bounded; this bounded input excites an unbounded output.

5.2 Consider a system with an irrational transfer function ĝ(s). Show that a necessary condition for the system to be BIBO stable is that |ĝ(s)| is finite for all Re s ≥ 0.

Proof: If the system is BIBO stable, then

∫_0^∞ |g(t)| dt ≤ M < +∞  for some constant M

Write s = σ + jω. For Re s = σ ≥ 0 we have |e^{−st}| = e^{−σt} ≤ 1 for all t ≥ 0; in particular, the real and imaginary parts ∫_0^∞ g(t) e^{−σt} cos ωt dt and ∫_0^∞ g(t) e^{−σt} sin ωt dt both converge absolutely. Therefore

|ĝ(s)| = | ∫_0^∞ g(t) e^{−st} dt | ≤ ∫_0^∞ |g(t)| |e^{−st}| dt ≤ ∫_0^∞ |g(t)| dt ≤ M < +∞

That is, |ĝ(s)| is finite for all Re s ≥ 0.

5.3 Is a system with impulse response g(t) = 1/(t + 1) BIBO stable? How about g(t) = t e^{−t} for t ≥ 0?

Answer: Because

∫_0^∞ |1/(t + 1)| dt = ∫_0^∞ 1/(t + 1) dt = ln(t + 1) |_0^∞ = ∞

∫_0^∞ |t e^{−t}| dt = ∫_0^∞ t e^{−t} dt = (−t e^{−t} − e^{−t}) |_0^∞ = 1 < +∞

the system with impulse response g(t) = 1/(t + 1) is not BIBO stable, while the system with impulse response g(t) = t e^{−t}, t ≥ 0, is BIBO stable.
5.4 Is a system with transfer function ĝ(s) = e^{−2s}/(s + 1) BIBO stable?

Answer: The Laplace transform has the property that if g(t) = f(t − a), then ĝ(s) = e^{−as} f̂(s). We know L^{-1}[1/(s + 1)] = e^{−t}, t ≥ 0; thus

L^{-1}[e^{−2s}/(s + 1)] = e^{−(t − 2)},  t ≥ 2

So the impulse response of the system is g(t) = e^{−(t−2)} for t ≥ 2 (and zero before), and

∫_0^∞ |g(t)| dt = ∫_2^∞ e^{−(t−2)} dt = 1 < +∞

The system is BIBO stable.
5.5 Show that the negative-feedback system shown in Fig. 2.5(b) is BIBO stable if and only if the gain a has a magnitude less than 1. For a = 1, find a bounded input r(t) that will excite an unbounded output.

Proof: If r(t) = δ(t), then the output is the impulse response of the system,

g(t) = a δ(t − 1) − a² δ(t − 2) + a³ δ(t − 3) − ... = Σ_{i=1}^{∞} (−1)^{i+1} a^i δ(t − i)

The impulse is defined as the limit of the pulse in Fig. 3.2 and can be considered to be positive; thus

∫_0^∞ |g(t)| dt = Σ_{i=1}^{∞} |a|^i = |a|/(1 − |a|) < +∞  if |a| < 1,  and = ∞ if |a| ≥ 1

which implies that the negative-feedback system shown in Fig. 2.5(b) is BIBO stable if and only if the gain a has magnitude less than 1.

For a = 1, choose r(t) = sin(πt); clearly it is bounded. The output excited by this input is

y(t) = ∫_0^t g(τ) r(t − τ) dτ = Σ_{i=1}^{⌊t⌋} (−1)^{i+1} sin(π(t − i)) = Σ_{i=1}^{⌊t⌋} (−1)^{i+1} (−1)^i sin(πt) = −⌊t⌋ sin(πt)

which is not bounded.

5.6 Consider a system with transfer function ĝ(s) = (s − 2)/(s + 1). What are the steady-state responses excited by u(t) = 3, for t ≥ 0, and by u(t) = sin 2t, for t ≥ 0?

Answer: The impulse response of the system is

g(t) = L^{-1}[(s − 2)/(s + 1)] = L^{-1}[1 − 3/(s + 1)] = δ(t) − 3e^{−t},  t ≥ 0

and we have

∫_0^∞ |g(t)| dt ≤ ∫_0^∞ |δ(t)| dt + ∫_0^∞ 3e^{−t} dt = 1 + 3 = 4 < +∞

so the system is BIBO stable. Using Theorem 5.2, we can readily obtain the steady-state responses:

if u(t) = 3 for t ≥ 0, then as t → ∞, y(t) approaches ĝ(0) · 3 = (−2) · 3 = −6;

if u(t) = sin 2t for t ≥ 0, then as t → ∞, y(t) approaches

|ĝ(j2)| sin(2t + ∠ĝ(j2)) = √(8/5) sin(2t + 1.25) = 1.26 sin(2t + 1.25)
5.7 Consider

ẋ = [−1 10; 0 1] x + [−2; 0] u
y = [−2 3] x − 2u

Is it BIBO stable?

Answer: The transfer function of the system is

ĝ(s) = C(sI − A)^{-1}B − 2 = [−2 3] (1/((s+1)(s−1))) [s−1, 10; 0, s+1] [−2; 0] − 2
     = 4/(s + 1) − 2 = (−2s + 2)/(s + 1)

The only pole of ĝ(s) is −1, which lies inside the left-half s-plane, so the system is BIBO stable.

5.8 Consider a discrete-time system with impulse response g[k] = k(0.8)^k for k ≥ 0. Is the system BIBO stable?

Answer: Using Σ_{k=0}^{∞} k r^k = r/(1 − r)² for |r| < 1,

Σ_{k=0}^{∞} |g[k]| = Σ_{k=0}^{∞} k(0.8)^k = 0.8/(1 − 0.8)² = 0.8/0.04 = 20 < +∞

g[k] is absolutely summable on [0, ∞); the discrete-time system is BIBO stable.
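A partial-sum check of the bound (the truncation length is arbitrary; the tail is negligible):

```python
import numpy as np

# sum_{k>=0} k (0.8)^k = 0.8 / (1 - 0.8)^2 = 20
k = np.arange(0, 2000)
g = k * 0.8**k
assert np.isclose(g.sum(), 20.0)
assert np.all(g >= 0)           # so this is also the absolute sum
```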

5.9 Is the state equation in Problem 5.7 marginally stable? Asymptotically stable?

Answer: The characteristic polynomial is

Δ(λ) = det(λI − A) = det[λ+1, −10; 0, λ−1] = (λ + 1)(λ − 1)

The eigenvalue 1 has positive real part; thus the equation is not marginally stable, nor asymptotically stable.

5.10 Is the homogeneous state equation

ẋ = [−1 0 1; 0 0 0; 0 0 0] x

marginally stable? Asymptotically stable?

Answer: The characteristic polynomial is

Δ(λ) = det(λI − A) = λ²(λ + 1)

and the minimal polynomial is ψ(λ) = λ(λ + 1). The matrix has eigenvalues −1, 0, and 0; the eigenvalue 0 is a simple root of the minimal polynomial, thus the equation is marginally stable. The equation is not asymptotically stable because the matrix has zero eigenvalues.

5.11 Is the homogeneous state equation

ẋ = [−1 0 1; 0 0 1; 0 0 0] x

marginally stable? Asymptotically stable?

Answer: The characteristic polynomial is

Δ(λ) = det(λI − A) = λ²(λ + 1)

and the minimal polynomial is ψ(λ) = λ²(λ + 1). The matrix has eigenvalues −1, 0, and 0; the eigenvalue 0 is a repeated root of the minimal polynomial ψ(λ), so the equation is not marginally stable, nor asymptotically stable.

5.12 Is the discrete-time homogeneous state equation

x[k+1] = [0.9 0 1; 0 1 0; 0 0 1] x[k]

marginally stable? Asymptotically stable?

Answer: The characteristic polynomial of the system matrix is Δ(λ) = (λ − 1)²(λ − 0.9), and the minimal polynomial is ψ(λ) = (λ − 1)(λ − 0.9). The equation is not asymptotically stable because the matrix has eigenvalues 1, whose magnitudes equal 1. The equation is marginally stable because all eigenvalues have magnitudes less than or equal to 1 and the eigenvalue 1 is a simple root of the minimal polynomial ψ(λ).
5.13 Is the discrete-time homogeneous state equation

x[k+1] = [0.9 0 1; 0 1 1; 0 0 1] x[k]

marginally stable? Asymptotically stable?

Answer: Its characteristic polynomial is Δ(λ) = (λ − 1)²(λ − 0.9), and its minimal polynomial is ψ(λ) = (λ − 1)²(λ − 0.9). The matrix has eigenvalues 1, 1, and 0.9; the eigenvalue 1, with magnitude equal to 1, is a repeated root of the minimal polynomial ψ(λ). Thus the equation is not marginally stable and is not asymptotically stable.

5.14 Use Theorem 5.5 to show that all eigenvalues of A = [ 0  1 ; −0.5  −1 ] have negative real parts.

Proof: For any given positive definite symmetric matrix N = [ a  b ; b  c ], with a > 0 and ac − b² > 0, the Lyapunov equation A'M + MA = −N can be written as

    [ 0  −0.5 ] [ m11  m12 ] + [ m11  m12 ] [  0    1 ] = − [ a  b ]
    [ 1   −1  ] [ m21  m22 ]   [ m21  m22 ] [ −0.5  −1 ]    [ b  c ]

that is,

    [ −0.5(m12 + m21)        m11 − m12 − 0.5 m22 ] = − [ a  b ]
    [ m11 − m21 − 0.5 m22    m12 + m21 − 2 m22   ]     [ b  c ]

Thus

    m12 = m21 = a,   m22 = a + 0.5c,   m11 = 1.5a + 0.25c − b

and M = [ m11  m12 ; m21  m22 ] is the unique symmetric solution of the Lyapunov equation. Note that a > 0 and ac − b² > 0 imply a > 0 and c > 0, and

    (1.5a + 0.25c)² − ac = (1/16)[(6a + c)² − 16ac] = (1/16)(36a² + c² − 4ac) = (1/16)[(2a − c)² + 32a²] ≥ 0

so (1.5a + 0.25c)² ≥ ac > b² and hence m11 = 1.5a + 0.25c − b > 0. Moreover,

    det M = m11 m22 − m12 m21 = (1.5a + 0.25c − b)(a + 0.5c) − a²
          = (1/8)(4a² + 8ac + c² − 8ab − 4bc)
          = (1/8)[(2a − 2b)² + (c − 2b)² + 8(ac − b²)] > 0

We see M is positive definite. Because for any given positive definite symmetric matrix N the Lyapunov equation A'M + MA = −N has a unique symmetric solution M and M is positive definite, all eigenvalues of A have negative real parts.
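The closed-form M derived above can be spot-checked numerically for one sample N (the values of a, b, c below are illustrative, not from the text):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-0.5, -1.0]])
a, b, c = 2.0, 0.5, 1.0            # sample N = [[a, b], [b, c]] with a > 0, ac - b^2 > 0
N = np.array([[a, b], [b, c]])
# Closed-form solution derived above.
M = np.array([[1.5*a + 0.25*c - b, a],
              [a, a + 0.5*c]])
residual = A.T @ M + M @ A + N     # should be (numerically) zero
eigs_M = np.linalg.eigvalsh(M)
print(residual, eigs_M)
```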

5.15 Use Theorem 5.D5 to show that all eigenvalues of the A in Problem 5.14 have magnitudes less than 1.

Proof: For any given positive definite symmetric matrix N = [ a  b ; b  c ], with a > 0 and ac − b² > 0, we try to find the solution M of the discrete Lyapunov equation M − A'MA = N. Writing M = [ m11  m12 ; m21  m22 ], we have

    M − A'MA = [ m11 − 0.25 m22           m12 + 0.5(m21 − m22) ] = [ a  b ]
               [ m21 + 0.5(m12 − m22)     −m11 + m12 + m21     ]   [ b  c ]

Thus

    m11 = (8a + 3c − 4b)/5,   m12 = m21 = (4a + 4c − 2b)/5,   m22 = (12a + 12c − 16b)/5

and M = [ m11  m12 ; m21  m22 ] is the unique symmetric solution of the discrete Lyapunov equation M − A'MA = N. Since

    (8a + 3c)² − 16ac = 64a² + 9c² + 32ac > 0   and   ac > b²

we have (8a + 3c)² > 16ac > 16b², so 8a + 3c − 4b > 0 and m11 = (8a + 3c − 4b)/5 > 0. Moreover,

    det M = m11 m22 − m12 m21 = (1/25)[(8a + 3c − 4b)(12a + 12c − 16b) − (4a + 4c − 2b)²]
          = (4/5)(4a² + c² + 3b² + 5ac − 8ab − 4bc)
          = (4/5)[(2a − 2b)² + (c − 2b)² + 5(ac − b²)] · (1/5) · 5 > 0

that is, det M = (4/5)·(1/5)·[ (2a − 2b)² + (c − 2b)² + 5(ac − b²) ]·5 > 0; in short, det M > 0 because ac > b². We see M is positive definite. Using Theorem 5.D5, we conclude that all eigenvalues of A have magnitudes less than 1.
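The closed-form solution of the discrete Lyapunov equation can also be spot-checked for one sample N (illustrative a, b, c):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-0.5, -1.0]])
a, b, c = 1.0, 0.0, 1.0
N = np.array([[a, b], [b, c]])
# Closed-form solution derived above.
M = np.array([[(8*a + 3*c - 4*b)/5, (4*a + 4*c - 2*b)/5],
              [(4*a + 4*c - 2*b)/5, (12*a + 12*c - 16*b)/5]])
residual = M - A.T @ M @ A - N     # should be (numerically) zero
eigs_M = np.linalg.eigvalsh(M)
mags_A = np.abs(np.linalg.eigvals(A))
print(residual, eigs_M, mags_A)
```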

5.16 For any distinct negative real λi and any nonzero real ai, show that the matrix

        [  a1²/(−2λ1)         a1 a2/(−(λ1+λ2))   a1 a3/(−(λ1+λ3)) ]
    M = [  a2 a1/(−(λ2+λ1))   a2²/(−2λ2)         a2 a3/(−(λ2+λ3)) ]
        [  a3 a1/(−(λ3+λ1))   a3 a2/(−(λ3+λ2))   a3²/(−2λ3)       ]

is positive definite.

Proof: The λi, i = 1, 2, 3, are distinct negative real numbers and the ai are nonzero real numbers.

(1)  −a1²/(2λ1) > 0, since λ1 < 0.

(2)  det [ −a1²/(2λ1)       −a1a2/(λ1+λ2) ] = a1²a2² ( 1/(4λ1λ2) − 1/(λ1+λ2)² )
         [ −a2a1/(λ2+λ1)    −a2²/(2λ2)    ]

Since (λ1 + λ2)² − 4λ1λ2 = (λ1 − λ2)² > 0, we have (λ1 + λ2)² > 4λ1λ2 > 0, so 1/(4λ1λ2) > 1/(λ1 + λ2)² and this minor is positive.

(3) For det M, note that M satisfies the Lyapunov equation AM + MA' = −bb' with A = diag(λ1, λ2, λ3) and b = [a1  a2  a3]', since (AM + MA')ij = (λi + λj)Mij = −ai aj. A is stable (all λi < 0) and (A, b) is controllable (the λi are distinct and the ai are nonzero), so M = ∫₀^∞ e^{At} bb' e^{A't} dt is positive definite; in particular det M > 0.

From (1), (2) and (3), we can conclude that M is positive definite.
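A numerical spot check (the particular λi and ai below are illustrative choices, not from the text):

```python
import numpy as np

# The matrix M of Problem 5.16 for one choice of distinct negative lam_i
# and nonzero a_i (illustrative values).
lam = np.array([-1.0, -2.0, -3.0])
a = np.array([1.0, -2.0, 0.5])
M = np.array([[-a[i]*a[j]/(lam[i] + lam[j]) for j in range(3)] for i in range(3)])
eigs = np.linalg.eigvalsh(M)
print(eigs)  # all positive
```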

5.17 A real matrix M (not necessarily symmetric) is defined to be positive definite if x'Mx > 0 for any nonzero x. Is it true that M is positive definite if all eigenvalues of M are real and positive, or if all its leading principal minors are positive? If not, how do you check its positive definiteness?

If all eigenvalues of M are real and positive, or if all its leading principal minors are positive, the matrix M may not be positive definite; the following example shows it. Let

    M = [ 1  8 ]
        [ 0  1 ]

Its eigenvalues 1, 1 are real and positive, and all its leading principal minors are positive; but for the nonzero x = [1  −2]' we see x'Mx = −11 < 0. So M is not positive definite.

Because x'Mx is a real number, we have (x'Mx)' = x'M'x = x'Mx, and

    x'Mx = (1/2) x'Mx + (1/2) x'M'x = x' [ (1/2)(M + M') ] x

Thus x'Mx > 0 for all nonzero x if and only if x'[(1/2)(M + M')]x > 0 for all nonzero x; that is, M is positive definite if and only if the matrix (1/2)(M + M') is. The matrix (1/2)(M + M') is symmetric, so we can use Theorem 3.7 to check its positive definiteness.
2

5.18 Show that all eigenvalues of A have real parts less than −μ < 0 if and only if, for any given positive definite symmetric matrix N, the equation A'M + MA + 2μM = −N has a unique symmetric solution M and M is positive definite.

Proof: The equation can be written as (A + μI)'M + M(A + μI) = −N. Let B = A + μI; then the equation becomes B'M + MB = −N. All eigenvalues of B have negative real parts if and only if, for any given positive definite symmetric matrix N, the equation B'M + MB = −N has a unique symmetric solution M and M is positive definite. And we know B = A + μI, so det(λI − B) = det(λI − A − μI) = det((λ − μ)I − A); that is, the eigenvalues of A are the eigenvalues of B shifted by −μ. So all eigenvalues of A have real parts less than −μ < 0 if and only if all eigenvalues of B have negative real parts, equivalently, if and only if, for any given positive definite symmetric matrix N, the equation A'M + MA + 2μM = −N has a unique symmetric solution M and M is positive definite.

5.19 Show that all eigenvalues of A have magnitudes less than ρ if and only if, for any given positive definite symmetric matrix N, the equation ρ²M − A'MA = ρ²N has a unique symmetric solution M and M is positive definite.

Proof: The equation can be written as M − (A/ρ)'M(A/ρ) = N. Let B = A/ρ; then det(λI − B) = det(λI − A/ρ) = ρ^{−n} det(ρλI − A); that is, the eigenvalues of B are the eigenvalues of A divided by ρ. So all eigenvalues of A have magnitudes less than ρ if and only if all eigenvalues of B have magnitudes less than 1, equivalently, if and only if, for any given positive definite symmetric matrix N, the equation ρ²M − A'MA = ρ²N has a unique symmetric solution M and M is positive definite.

5.20 Is a system with impulse response g(t, τ) = e^{−2|t|−|τ|}, for t ≥ τ, BIBO stable? How about g(t, τ) = sin t (e^{−(t−τ)}) cos τ?

For the first system,

    ∫_{t0}^{t} |e^{−2|t|−|τ|}| dτ = e^{−2|t|} ∫_{t0}^{t} e^{−|τ|} dτ
        = e^{−2t}(e^{−t0} − e^{−t})         if t ≥ t0 ≥ 0
        = e^{−2t}(2 − e^{t0} − e^{−t})      if t0 < 0 and t ≥ 0
        = e^{2t}(e^{t} − e^{t0})            if t0 ≤ t < 0

In every case the integral is bounded by 2 < +∞ for all t and t0, so this system is BIBO stable.

For the second system,

    ∫_{t0}^{t} | sin t · e^{−(t−τ)} · cos τ | dτ ≤ |sin t| ∫_{t0}^{t} e^{−(t−τ)} dτ = |sin t| (1 − e^{t0−t}) ≤ 1 < +∞

Both systems are BIBO stable.

5.21 Consider the time-varying equation ẋ = 2tx + u, y = e^{−t²} x. Is the equation BIBO stable? Marginally stable? Asymptotically stable?

From ẋ = 2tx we get x(t) = x(0)e^{t²}, so X(t) = e^{t²} is a fundamental matrix and the state transition matrix is Φ(t, t0) = e^{t² − t0²}.

Its impulse response is

    g(t, τ) = C(t)Φ(t, τ)B(τ) = e^{−t²} · e^{t² − τ²} · 1 = e^{−τ²}

and

    ∫_{t0}^{t} |g(t, τ)| dτ = ∫_{t0}^{t} e^{−τ²} dτ ≤ ∫_{−∞}^{+∞} e^{−τ²} dτ = 2 ∫_{0}^{+∞} e^{−τ²} dτ = √π < +∞

so the equation is BIBO stable.

|Φ(t, t0)| = e^{t² − t0²} is not bounded in t; that is, there does not exist a finite constant M such that |Φ(t, t0)| ≤ M < +∞. So the equation is not marginally stable and is not asymptotically stable.
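The bound used above, ∫ e^{−τ²} dτ = √π over the whole line, can be checked numerically; a quick sketch (plain Riemann sum over a wide window, since the integrand is negligible beyond |τ| = 10):

```python
import numpy as np

tau = np.linspace(-10.0, 10.0, 200001)
dt = tau[1] - tau[0]
integral = np.exp(-tau**2).sum() * dt
print(integral, np.sqrt(np.pi))
```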

5.22 Show that the equation in Problem 5.21 can be transformed, by using x̄ = p(t)x with p(t) = e^{−t²}, into

    x̄' = 0·x̄ + e^{−t²} u ,   y = x̄

Is the equation BIBO stable? Marginally stable? Asymptotically stable? Is the transformation a Lyapunov transformation?

Proof: We have found a fundamental matrix of the equation in Problem 5.21: X(t) = e^{t²}. From Theorem 4.3 we know that, letting P(t) = X^{−1}(t) = e^{−t²} and x̄ = P(t)x, we have

    Ā(t) = 0 ,   B̄(t) = P(t)B(t) = e^{−t²} ,   C̄(t) = C(t)P^{−1}(t) = e^{−t²} e^{t²} = 1 ,   D̄(t) = D(t) = 0

and the equation is transformed into x̄' = 0·x̄ + e^{−t²}u, y = x̄. The impulse response is invariant under an equivalence transformation, and so is BIBO stability; the equation is BIBO stable.

From x̄' = 0 we get x̄(t) = x̄(0), so X̄(t) = 1 is a fundamental matrix and the state transition matrix is Φ̄(t, t0) = 1. Since |Φ̄(t, t0)| = 1 ≤ 1 < +∞, the equation is marginally stable. As t → ∞, |Φ̄(t, t0)| does not approach zero, so the equation is not asymptotically stable.

Marginal stability is not preserved here (the equation in Problem 5.21 is not marginally stable), so from Theorem 5.7 we know the transformation is not a Lyapunov transformation.
5.23 Is the homogeneous equation

    ẋ = [   −1      0 ] x ,   for t0 ≥ 0
        [ e^{−3t}   0 ]

marginally stable? Asymptotically stable?

From ẋ1 = −x1 we get x1(t) = x1(0)e^{−t}; from ẋ2 = e^{−3t} x1 = x1(0)e^{−4t} we get x2(t) = −(1/4)x1(0)e^{−4t} + x2(0) + (1/4)x1(0). So

    X(t) = [      e^{−t}       0 ]
           [ −(1/4)e^{−4t}     1 ]

is a fundamental matrix of the equation, and

    Φ(t, t0) = X(t)X^{−1}(t0) = [             e^{−(t−t0)}              0 ]
                                [ −(1/4)e^{−4t+t0} + (1/4)e^{−3t0}     1 ]

is the state transition matrix of the equation. For all t ≥ t0 ≥ 0 we have 0 < e^{−(t−t0)} ≤ 1 and 0 ≤ −(1/4)e^{−4t+t0} + (1/4)e^{−3t0} ≤ 1/4 (since e^{−4t+t0} ≤ e^{−3t0} ≤ 1). Every entry of Φ(t, t0) is bounded, so the equation is marginally stable.

Because the (2,2) entry of Φ(t, t0) does not approach zero as t → ∞, the equation is not asymptotically stable.
6.1 Is the state equation

    ẋ = [  0   1   0 ] x + [ 0 ] u ,   y = [1  2  1] x
        [  0   0   1 ]     [ 0 ]
        [ −1  −3  −3 ]     [ 1 ]

controllable? Observable?

    ρ([B  AB  A²B]) = ρ( [ 0   0   1 ] ) = 3
                         [ 0   1  −3 ]
                         [ 1  −3   6 ]

thus the state equation is controllable, but

    ρ( [ C   ] ) = ρ( [  1   2   1 ] ) = 1
       [ CA  ]        [ −1  −2  −1 ]
       [ CA² ]        [  1   2   1 ]

so it is not observable.
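The two rank computations can be verified numerically; a quick sketch:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 2.0, 1.0]])

ctrb = np.hstack([B, A @ B, A @ A @ B])
obsv = np.vstack([C, C @ A, C @ A @ A])
rc = np.linalg.matrix_rank(ctrb)
ro = np.linalg.matrix_rank(obsv)
print(rc, ro)  # 3 and 1
```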

6.2 Is the state equation

    ẋ = [ 0  1  0 ] x + [ 0  1 ] u ,   y = [1  0  1] x
        [ 0  0  1 ]     [ 1  0 ]
        [ 0  2  1 ]     [ 0  0 ]

controllable? Observable?

ρ(B) = 2, and n − ρ(B) + 1 = 3 − 2 + 1 = 2, so using the corollary we have

    ρ([B  AB]) = ρ( [ 0  1  1  0 ] ) = 3
                    [ 1  0  0  0 ]
                    [ 0  0  2  0 ]

Thus the state equation is controllable.

    ρ( [ C   ] ) = ρ( [ 1  0  1 ] ) = 3
       [ CA  ]        [ 0  3  1 ]
       [ CA² ]        [ 0  2  4 ]

Thus the state equation is observable.
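The same checks in numerical form (a sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 2.0, 1.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0, 1.0]])

# With rank(B) = 2, the corollary says [B  AB] suffices for controllability.
rc = np.linalg.matrix_rank(np.hstack([B, A @ B]))
ro = np.linalg.matrix_rank(np.vstack([C, C @ A, C @ A @ A]))
print(rc, ro)  # 3 and 3
```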

6.3 Is it true that the rank of [B  AB  ⋯  A^{n−1}B] equals the rank of [AB  A²B  ⋯  A^{n}B]? If not, under what condition will it be true?

It is not true in general that the rank of [B  AB  ⋯  A^{n−1}B] equals the rank of [AB  A²B  ⋯  A^{n}B]; for example, A = 0 and B ≠ 0 give ranks ρ(B) and 0. If A is nonsingular, then

    ρ([AB  A²B  ⋯  A^{n}B]) = ρ(A[B  AB  ⋯  A^{n−1}B]) = ρ([B  AB  ⋯  A^{n−1}B])

(see Equation (3.62)), so then it will be true.

6.4 Show that the state equation

    ẋ = [ A11  A12 ] x + [ B1 ] u
        [ A21  A22 ]     [ 0  ]

is controllable if and only if the pair (A22, A21) is controllable.

Proof (a sketch via the PBH rank test, under the assumption that B1 has full row rank): (A, B) is controllable if and only if [sI − A  B] has full row rank at every eigenvalue s of A. If (A22, A21) is not controllable, there exist λ and a nonzero row vector q with qA22 = λq and qA21 = 0; then the nonzero row vector [0  q] satisfies

    [0  q] [ λI − A11    −A12     B1 ] = [ −qA21   q(λI − A22)   0 ] = 0
           [  −A21     λI − A22   0  ]

so (A, B) is not controllable. Conversely, if (A, B) is not controllable, there exist λ and [p  q] ≠ 0 with p(λI − A11) = qA21, q(λI − A22) = pA12, and pB1 = 0; since B1 has full row rank, p = 0, and then qA21 = 0 and qA22 = λq with q ≠ 0, so (A22, A21) is not controllable.

6.5 Find a state equation to describe the network shown in Fig. 6.1 and then check its controllability and observability.

The state variables x1 and x2 are chosen as shown; then we have ẋ1 + x1 = u, ẋ2 + x2 = 0, and y = x2 + 2u. Thus a state equation describing the network can be expressed as

    [ ẋ1 ] = [ −1   0 ] [ x1 ] + [ 1 ] u ,   y = [0  1] [ x1 ] + 2u
    [ ẋ2 ]   [  0  −1 ] [ x2 ]   [ 0 ]                  [ x2 ]

Checking the controllability and observability:

    ρ([B  AB]) = ρ( [ 1  −1 ] ) = 1 ,   ρ( [ C  ] ) = ρ( [ 0   1 ] ) = 1
                    [ 0   0 ]              [ CA ]        [ 0  −1 ]

The equation is neither controllable nor observable.
6.6 Find the controllability index and observability index of the state equations in Problems 6.1 and 6.2.

The state equation in Problem 6.1 is controllable, so its controllability index μ satisfies n/p ≤ μ ≤ min(n, n − p + 1), where n = 3 and p = ρ(B) = 1; hence 3 ≤ μ ≤ 3, that is, μ = 3. Since

    ρ( [ C   ] ) = ρ( [ C  ] ) = ρ(C) = 1
       [ CA  ]        [ CA ]
       [ CA² ]

the rank stops increasing at C itself, so ν = 1. The controllability index and observability index of the state equation in Problem 6.1 are 3 and 1, respectively.

The state equation in Problem 6.2,

    ẋ = [ 0  1  0 ] x + [ 0  1 ] u ,   y = [1  0  1] x
        [ 0  0  1 ]     [ 1  0 ]
        [ 0  2  1 ]     [ 0  0 ]

is both controllable and observable, so we have

    n/p ≤ μ ≤ n − p + 1   and   n/q ≤ ν ≤ n − q + 1

where n = 3, p = ρ(B) = 2, and q = ρ(C) = 1; hence 3/2 ≤ μ ≤ 2 and 3 ≤ ν ≤ 3, which give μ = 2 and ν = 3. The controllability index and observability index of the state equation in Problem 6.2 are 2 and 3, respectively.

6.7 What is the controllability index of the state equation ẋ = Ax + Iu, where I is the unit matrix?

Solution:

    C = [B  AB  A²B  ⋯  A^{n−1}B] = [I  A  A²  ⋯  A^{n−1}]

C is an n × n² matrix, so ρ(C) ≤ n; and clearly the first n columns of C are linearly independent, so ρ(C) = n. The equation is controllable with p = ρ(B) = n, and n/p ≤ μ ≤ n − p + 1 gives 1 ≤ μ ≤ 1. Thus the controllability index of the state equation ẋ = Ax + Iu is μ = 1.

6.8 Reduce the state equation

    ẋ = [ −1  4 ] x + [ 1 ] u ,   y = [1  1] x
        [  4 −1 ]     [ 1 ]

to a controllable one. Is the reduced equation observable?

Solution: Because

    ρ(C) = ρ([B  AB]) = ρ( [ 1  3 ] ) = 1 < 2
                           [ 1  3 ]

the state equation is not controllable. Choosing

    Q = P^{−1} = [ 1  1 ]
                 [ 1  0 ]

and letting x̄ = Px, we have

    Ā = PAP^{−1} = [ 0   1 ] [ −1  4 ] [ 1  1 ] = [ 3   4 ]
                   [ 1  −1 ] [  4 −1 ] [ 1  0 ]   [ 0  −5 ]

    B̄ = PB = [ 0   1 ] [ 1 ] = [ 1 ] ,   C̄ = CP^{−1} = [1  1] [ 1  1 ] = [2  1]
              [ 1  −1 ] [ 1 ]   [ 0 ]                          [ 1  0 ]

Thus the state equation can be reduced to the controllable subequation

    x̄'_c = 3 x̄_c + u ,   y = 2 x̄_c

and this reduced equation is observable.
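The full and reduced equations should have the same transfer function; a numerical spot check (sketch):

```python
import numpy as np

A = np.array([[-1.0, 4.0], [4.0, -1.0]])
b = np.array([[1.0], [1.0]])
c = np.array([[1.0, 1.0]])

def g(s):
    # transfer function c (sI - A)^{-1} b of the full equation
    return (c @ np.linalg.solve(s * np.eye(2) - A, b))[0, 0]

def g_red(s):
    # reduced controllable subequation: x' = 3x + u, y = 2x
    return 2.0 / (s - 3.0)

ok = all(abs(g(s) - g_red(s)) < 1e-9 for s in (1.0j, 2.0 + 1.0j, 5.0))
print(ok)
```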

6.9 Reduce the state equation in Problem 6.5 to a controllable and observable equation.

Solution: The state equation in Problem 6.5 is

    ẋ = [ −1   0 ] x + [ 1 ] u ,   y = [0  1] x + 2u
        [  0  −1 ]     [ 0 ]

and it is neither controllable nor observable. From the form of the state equation we can readily see that the output is independent of the controllable state variable x1, so the reduced controllable state equation is independent of the output; the equation can be further reduced to

    y = 2u

6.10 Reduce the state equation

         [ λ1  1   0   0   0  ]     [ 0 ]
         [ 0   λ1  1   0   0  ]     [ 1 ]
    ẋ =  [ 0   0   λ1  0   0  ] x + [ 0 ] u ,   y = [0  1  1  0  1] x
         [ 0   0   0   λ2  1  ]     [ 0 ]
         [ 0   0   0   0   λ2 ]     [ 1 ]

to a controllable and observable equation.

Solution: The state equation is in Jordan form, and the state variables associated with λ1 are independent of the state variables associated with λ2. Thus we can decompose the state equation into two independent state equations, reduce each to a controllable and observable equation, and then combine the two reduced equations associated with λ1 and λ2 into one equation, which is what we want.

Decompose the equation:

    ẋ1 = [ λ1  1   0  ] x1 + [ 0 ] u ,   y1 = [0  1  1] x1
         [ 0   λ1  1  ]      [ 1 ]
         [ 0   0   λ1 ]      [ 0 ]

    ẋ2 = [ λ2  1  ] x2 + [ 0 ] u ,   y2 = [0  1] x2
         [ 0   λ2 ]      [ 1 ]

where x = [x1; x2] and y = y1 + y2.

Deal with the first subequation. Its third state variable is neither excited by the input (its b-entry is zero) nor coupled into the first two state variables, so the controllable part consists of the first two state variables:

    x̄'_{1c} = [ λ1  1  ] x̄_{1c} + [ 0 ] u ,   y1 = [0  1] x̄_{1c}
              [ 0   λ1 ]          [ 1 ]

This controllable part is not observable, because

    ρ( [ C  ] ) = ρ( [ 0   1  ] ) = 1 < 2
       [ CA ]        [ 0   λ1 ]

Its transfer function is 1/(s − λ1), so the reduced controllable and observable equation of the subequation associated with λ1 is

    x̄'_{1co} = λ1 x̄_{1co} + u ,   y1 = x̄_{1co}

Deal with the second subequation:

    ρ([B  AB]) = ρ( [ 0   1  ] ) = 2 ,  so it is controllable;
                    [ 1   λ2 ]

    ρ( [ C  ] ) = ρ( [ 0   1  ] ) = 1 < 2 ,  so it is not observable.
       [ CA ]        [ 0   λ2 ]

Its transfer function is 1/(s − λ2), so it can be reduced to the controllable and observable equation

    x̄'_{2co} = λ2 x̄_{2co} + u ,   y2 = x̄_{2co}

Combining the two controllable and observable state equations, we obtain the reduced controllable and observable state equation of the original equation:

    x̄'_{co} = [ λ1  0  ] x̄_{co} + [ 1 ] u ,   y = [1  1] x̄_{co}
              [ 0   λ2 ]          [ 1 ]
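A numerical spot check that the combined reduced equation has the same transfer function as the original, with illustrative values λ1 = −1 and λ2 = −2 (assumed for the check only):

```python
import numpy as np

l1, l2 = -1.0, -2.0  # sample values for lambda_1, lambda_2
A = np.array([[l1, 1, 0, 0, 0],
              [0, l1, 1, 0, 0],
              [0, 0, l1, 0, 0],
              [0, 0, 0, l2, 1],
              [0, 0, 0, 0, l2]])
b = np.array([[0.0], [1.0], [0.0], [0.0], [1.0]])
c = np.array([[0.0, 1.0, 1.0, 0.0, 1.0]])

def g_full(s):
    return (c @ np.linalg.solve(s * np.eye(5) - A, b))[0, 0]

def g_red(s):
    # reduced equation: diag(l1, l2), b = [1, 1]', c = [1, 1]
    return 1.0 / (s - l1) + 1.0 / (s - l2)

ok = all(abs(g_full(s) - g_red(s)) < 1e-9 for s in (1.0, 2.0 + 1.0j, -0.5 + 3.0j))
print(ok)
```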

6.11 Consider the n-dimensional state equation

    ẋ = Ax + Bu ,   y = Cx + Du

The rank of its controllability matrix is assumed to be n1 < n. Let Q1 be an n × n1 matrix whose columns are any n1 linearly independent columns of the controllability matrix. Let P1 be an n1 × n matrix such that P1Q1 = I_{n1}, where I_{n1} is the unit matrix of order n1. Show that the following n1-dimensional state equation

    x̄'_1 = P1AQ1 x̄1 + P1Bu ,   y = CQ1 x̄1 + Du

is controllable and has the same transfer matrix as the original state equation.

Proof: Extend Q1 to a nonsingular matrix Q = [Q1  Q2] and let

    P = Q^{−1} := [ P1 ]
                  [ P2 ]

where P1 is an n1 × n matrix and P2 is an (n − n1) × n matrix. Then

    PQ = [ P1Q1  P1Q2 ] = [ I_{n1}     0      ] = I_n ,  in particular  P1Q1 = I_{n1} ,  P2Q2 = I_{n−n1}
         [ P2Q1  P2Q2 ]   [ 0      I_{n−n1}   ]

The equivalence transformation x̄ = Px transforms the original state equation into

    [ x̄'_c ] = [ P1 ] A [Q1  Q2] [ x̄_c ] + [ P1B ] u ,   y = C[Q1  Q2] [ x̄_c ] + Du
    [ x̄'_c̄ ]   [ P2 ]            [ x̄_c̄ ]   [ P2B ]                     [ x̄_c̄ ]

             = [ P1AQ1  P1AQ2 ] [ x̄_c ] + [ P1B ] u = [ Ā_c   Ā12 ] [ x̄_c ] + [ B̄_c ] u
               [ P2AQ1  P2AQ2 ] [ x̄_c̄ ]   [ P2B ]     [ 0     Ā_c̄ ] [ x̄_c̄ ]   [ 0   ]

    y = [CQ1  CQ2] [ x̄_c ] + Du = [ C̄_c  C̄_c̄ ] [ x̄_c ] + Du
                   [ x̄_c̄ ]                     [ x̄_c̄ ]

where Ā_c = P1AQ1, B̄_c = P1B, C̄_c = CQ1, and Ā_c is n1 × n1. This is the controllability decomposition, so the reduced n1-dimensional state equation

    x̄'_1 = P1AQ1 x̄1 + P1Bu ,   y = CQ1 x̄1 + Du

is controllable and has the same transfer matrix as the original state equation.
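The recipe can be sketched numerically on the uncontrollable equation of Problem 6.8 (the second column of Q below is an arbitrary completion to a nonsingular matrix):

```python
import numpy as np

A = np.array([[-1.0, 4.0], [4.0, -1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

ctrb = np.hstack([B, A @ B])
n1 = np.linalg.matrix_rank(ctrb)          # = 1
Q = np.array([[1.0, 1.0], [1.0, 0.0]])    # first column from ctrb, second arbitrary
P = np.linalg.inv(Q)
P1, Q1 = P[:n1, :], Q[:, :n1]

Ar = float((P1 @ A @ Q1)[0, 0])           # reduced state matrix
Br = float((P1 @ B)[0, 0])
Cr = float((C @ Q1)[0, 0])
print(Ar, Br, Cr)  # 3.0 1.0 2.0
```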

6.12 In Problem 6.11, the reduction procedure reduces to solving for P1 in P1Q1 = I. How do you solve P1?

Solution: From the proof of Problem 6.11 we can solve P1 in this way. Since

    ρ(C) = ρ([B  AB  ⋯  A^{n−1}B]) = n1 < n

we form the n × n matrix Q = [q1  ⋯  q_{n1}  ⋯  q_n], where the first n1 columns are any n1 linearly independent columns of C, and the remaining columns are chosen arbitrarily so that Q is nonsingular. Let

    P = Q^{−1} := [ P1 ]
                  [ P2 ]

where P1 consists of the first n1 rows of Q^{−1}; then P1Q1 = I_{n1}.

6.13 Develop a similar statement as in Problem 6.11 for an unobservable state equation.

Solution: Consider the n-dimensional state equation

    ẋ = Ax + Bu ,   y = Cx + Du

The rank of its observability matrix is assumed to be n1 < n. Let P1 be an n1 × n matrix whose rows are any n1 linearly independent rows of the observability matrix. Let Q1 be an n × n1 matrix such that P1Q1 = I_{n1}, where I_{n1} is the unit matrix of order n1. Then the following n1-dimensional state equation

    x̄'_1 = P1AQ1 x̄1 + P1Bu ,   y = CQ1 x̄1 + Du

is observable and has the same transfer matrix as the original state equation.

6.14 Is the Jordan-form state equation

         [ 2  1  0  0  0  0  0 ]     [ 2  1  0 ]
         [ 0  2  0  0  0  0  0 ]     [ 2  1  1 ]       [ 2  2  1  3  1  1  1 ]
    ẋ =  [ 0  0  2  0  0  0  0 ] x + [ 1  1  1 ] u , y = [ 1  1  1  2  0  0  0 ] x
         [ 0  0  0  2  0  0  0 ]     [ 3  2  1 ]       [ 0  1  1  1  1  1  0 ]
         [ 0  0  0  0  1  1  0 ]     [ 1  0  1 ]
         [ 0  0  0  0  0  1  0 ]     [ 1  0  1 ]
         [ 0  0  0  0  0  0  1 ]     [ 1  0  0 ]

controllable and observable?

Solution: The eigenvalue 2 has three Jordan blocks (of orders 2, 1, 1) and the eigenvalue 1 has two Jordan blocks (of orders 2, 1). Using Theorem 6.8, we check the rows of B corresponding to the last row of each Jordan block:

    ρ( [ 2  1  1 ] ) = 3   (eigenvalue 2)   and   ρ( [ 1  0  1 ] ) = 2   (eigenvalue 1)
       [ 1  1  1 ]                                   [ 1  0  0 ]
       [ 3  2  1 ]

so the state equation is controllable. However, it is not observable, because the columns of C corresponding to the first column of each Jordan block of the eigenvalue 2 give

    ρ( [ 2  1  3 ] ) = 2 ≠ 3
       [ 1  1  2 ]
       [ 0  1  1 ]

6.15 Is it possible to find a set of bij and a set of cij such that the state equation

         [ 1  1  0  0  0 ]     [ b11  b12 ]
         [ 0  1  0  0  0 ]     [ b21  b22 ]         [ c11  c12  c13  c14  c15 ]
    ẋ =  [ 0  0  1  1  0 ] x + [ b31  b32 ] u , y = [ c21  c22  c23  c24  c25 ] x
         [ 0  0  0  1  0 ]     [ b41  b42 ]         [ c31  c32  c33  c34  c35 ]
         [ 0  0  0  0  1 ]     [ b51  b52 ]

is controllable? Observable?

Solution: The matrix is in Jordan form with the single eigenvalue 1 and three Jordan blocks (of orders 2, 2, 1). It is impossible to find a set of bij such that the state equation is controllable, because by Theorem 6.8 the rows of B corresponding to the last row of each Jordan block must be linearly independent, and

    ρ( [ b21  b22 ] ) ≤ 2 < 3
       [ b41  b42 ]
       [ b51  b52 ]

It is possible to find a set of cij such that the state equation is observable, because the columns of C corresponding to the first column of each Jordan block satisfy

    ρ( [ c11  c13  c15 ] ) ≤ 3
       [ c21  c23  c25 ]
       [ c31  c33  c35 ]

and the equality may hold for some sets of cij, such as

    [ c11  c13  c15 ]
    [ c21  c23  c25 ] = I
    [ c31  c33  c35 ]

The state equation is observable if and only if this matrix has rank 3.

6.16 Consider the state equation

         [ λ1   0    0    0    0  ]     [ b1  ]
         [ 0    α1   β1   0    0  ]     [ b11 ]
    ẋ =  [ 0   −β1   α1   0    0  ] x + [ b12 ] u ,   y = [c1  c11  c12  c21  c22] x
         [ 0    0    0    α2   β2 ]     [ b21 ]
         [ 0    0    0   −β2   α2 ]     [ b22 ]

It is the modal form discussed in (4.28). It has one real eigenvalue and two pairs of complex conjugate eigenvalues; it is assumed that they are distinct. Show that the state equation is controllable if and only if b1 ≠ 0 and bi1 ≠ 0 or bi2 ≠ 0 for i = 1, 2; it is observable if and only if c1 ≠ 0 and ci1 ≠ 0 or ci2 ≠ 0 for i = 1, 2.

Proof: Controllability and observability are invariant under any equivalence transformation, so we introduce a nonsingular matrix P such that the equation is transformed into Jordan (diagonal) form:

        [ 1    0       0      0       0    ]            [ 1   0    0    0    0 ]
        [ 0   0.5    −0.5j    0       0    ]            [ 0   1    1    0    0 ]
    P = [ 0   0.5     0.5j    0       0    ] ,  P^{−1} = [ 0   j   −j    0    0 ]
        [ 0    0       0     0.5    −0.5j  ]            [ 0   0    0    1    1 ]
        [ 0    0       0     0.5     0.5j  ]            [ 0   0    0    j   −j ]

    Ā = PAP^{−1} = diag( λ1 ,  α1 + jβ1 ,  α1 − jβ1 ,  α2 + jβ2 ,  α2 − jβ2 )

    B̄ = PB = [ b1 ,  0.5(b11 − jb12) ,  0.5(b11 + jb12) ,  0.5(b21 − jb22) ,  0.5(b21 + jb22) ]'

    C̄ = CP^{−1} = [ c1 ,  c11 + jc12 ,  c11 − jc12 ,  c21 + jc22 ,  c21 − jc22 ]

Since the eigenvalues are assumed distinct, by Theorem 6.8 the state equation is controllable if and only if every entry of B̄ is nonzero:

    b1 ≠ 0 ;  0.5(bi1 ∓ jbi2) ≠ 0  for i = 1, 2

equivalently, if and only if b1 ≠ 0 and bi1 ≠ 0 or bi2 ≠ 0 for i = 1, 2. The state equation is observable if and only if every entry of C̄ is nonzero:

    c1 ≠ 0 ;  ci1 ± jci2 ≠ 0  for i = 1, 2

equivalently, if and only if c1 ≠ 0 and ci1 ≠ 0 or ci2 ≠ 0 for i = 1, 2.

6.17 Find two- and three-dimensional state equations to describe the network shown in Fig. 6.12, and then discuss their controllability and observability.

Solution: With the state variables chosen as shown, the network equations can be arranged as

    ẋ1 = −(2/11) x1 + (2/11) u
    ẋ2 = −(3/22) x1 + (3/22) u
    ẋ3 = −(1/22) x1 + (1/22) u
    y = x3 = x1 − x2

So we can find a two-dimensional state equation to describe the network as follows:

    [ ẋ1 ] = [ −2/11   0 ] [ x1 ] + [ 2/11 ] u ,   y = x3 = x1 − x2
    [ ẋ2 ]   [ −3/22   0 ] [ x2 ]   [ 3/22 ]

It is not controllable, because

    ρ(C) = ρ( [ 2/11   −(2/11)(2/11) ] ) = 1 < 2
              [ 3/22   −(2/11)(3/22) ]

However, it is observable, because

    ρ(O) = ρ( [   1     −1 ] ) = 2
              [ −1/22    0 ]

We can also find a three-dimensional state equation to describe the network:

    [ ẋ1 ]   [ −2/11  0  0 ] [ x1 ]   [ 2/11 ]
    [ ẋ2 ] = [ −3/22  0  0 ] [ x2 ] + [ 3/22 ] u ,   y = [0  0  1] x
    [ ẋ3 ]   [ −1/22  0  0 ] [ x3 ]   [ 1/22 ]

It is neither controllable nor observable, because

    ρ([B  AB  A²B]) = 1 < 3   and   ρ( [ C   ] ) = 2 < 3
                                      [ CA  ]
                                      [ CA² ]

6.18 Check controllability and observability of the state equation obtained in Problem 2.19. Can you give a physical interpretation directly from the network?

Solution: The state equation obtained in Problem 2.19 is

    [ ẋ1 ]   [ −1   0   0 ] [ x1 ]   [ 1 ]
    [ ẋ2 ] = [  0   0   1 ] [ x2 ] + [ 1 ] u ,   y = [0  1  0] x
    [ ẋ3 ]   [  0  −1  −1 ] [ x3 ]   [ 0 ]

It is controllable but not observable, because

    ρ([B  AB  A²B]) = ρ( [ 1  −1   1 ] ) = 3 ,   ρ( [ C   ] ) = ρ( [ 0   1   0 ] ) = 2 < 3
                         [ 1   0  −1 ]              [ CA  ]        [ 0   0   1 ]
                         [ 0  −1   1 ]              [ CA² ]        [ 0  −1  −1 ]

A physical interpretation I can give is this: from the network we can see that if u ≡ 0, x1(0) = a ≠ 0, and x2(0) = x3(0) = 0, then y ≡ 0; that is, any x(0) = [a  0  0]' with u(t) ≡ 0 yields the same output y(t) ≡ 0. Thus there is no way to determine the initial state uniquely, and the state equation describing the network is not observable. On the other hand, the input has an effect on all three state variables, so the state equation describing the network is controllable.

6.19 Consider the continuous-time state equation in Problem 4.2 and its discretized equations in Problem 4.3 with sampling periods T = 1 and T = π. Discuss controllability and observability of the discretized equations.

Solution: The continuous-time state equation in Problem 4.2 is

    ẋ = [  0   1 ] x + [ 1 ] u ,   y = [2  3] x
        [ −2  −2 ]     [ 1 ]

The discretized equation is

    x[k+1] = [ (cos T + sin T)e^{−T}         sin T e^{−T}      ] x[k] + Bd u[k] ,   y[k] = [2  3] x[k]
             [    −2 sin T e^{−T}       (cos T − sin T)e^{−T}  ]

For T = 1:

    Ad = [  0.5083   0.3096 ] ,   Bd = [  1.0471 ]
         [ −0.6191  −0.1108 ]         [ −0.1821 ]

For T = π:

    Ad = [ −0.0432     0    ] ,   Bd = [  1.5648 ]
         [    0     −0.0432 ]         [ −1.0432 ]

The continuous-time state equation is controllable and observable, and the system it describes is single-input, so the discretized equation is controllable and observable if and only if |Im(λi − λj)| ≠ 2πm/T for m = 1, 2, … whenever Re(λi − λj) = 0. The eigenvalues of A are −1 + j and −1 − j, so |Im(λi − λj)| = 2. For T = 1, 2 ≠ 2πm for any m = 1, 2, …, so the discretized equation is controllable and observable. For T = π, 2πm/T = 2m, which equals 2 for m = 1, so the discretized equation is neither controllable nor observable (indeed, for T = π the matrix Ad is diagonal with equal entries and the equation is single-input).

6.20 Check controllability and observability of

    ẋ = [ 0   1 ] x + [ 0 ] u ,   y = [0  1] x
        [ 0  −t ]     [ 1 ]

Solution: With M0 = B = [0  1]', M1 = −A(t)M0 + (d/dt)M0 = [−1  t]'. The determinant of the matrix

    [M0  M1] = [ 0  −1 ]
               [ 1   t ]

is 1, which is nonzero for all t. Thus the state equation is controllable at every t.

Its state transition matrix is

    Φ(t, t0) = [ 1   ∫_{t0}^{t} e^{−0.5(τ² − t0²)} dτ ]
               [ 0          e^{−0.5(t² − t0²)}        ]

and

    C(t)Φ(t, t0) = [0  1] Φ(t, t0) = [ 0   e^{−0.5(t² − t0²)} ]

so

    Wo(t0, t1) = ∫_{t0}^{t1} [       0            ] [ 0   e^{−0.5(τ² − t0²)} ] dτ = ∫_{t0}^{t1} [ 0         0        ] dτ
                             [ e^{−0.5(τ² − t0²)} ]                                             [ 0   e^{−(τ² − t0²)} ]

We see that Wo(t0, t1) is singular for all t0 and t1, thus the state equation is not observable at any t0.

6.21 Check controllability and observability of

    ẋ = [ 0   0 ] x + [   1    ] u ,   y = [0  e^{−t}] x
        [ 0  −1 ]     [ e^{−t} ]

Solution: x1(t) = x1(0) and x2(t) = x2(0)e^{−t}, so a fundamental matrix is X(t) = diag(1, e^{−t}) and the state transition matrix is

    Φ(t, t0) = X(t)X^{−1}(t0) = [ 1        0       ]
                                [ 0   e^{−(t−t0)}  ]

Then

    Φ(t1, τ)B(τ) = [ 1        0       ] [   1    ] = [    1     ]
                   [ 0   e^{−(t1−τ)}  ] [ e^{−τ} ]   [ e^{−t1}  ]

which is a constant vector, independent of τ, so

    Wc(t0, t1) = ∫_{t0}^{t1} [    1     ] [ 1   e^{−t1} ] dτ = (t1 − t0) [    1        e^{−t1}  ]
                             [ e^{−t1}  ]                                [ e^{−t1}    e^{−2t1}  ]

Its determinant is identically zero for all t0 and t1, so the state equation is not controllable at any t0. Similarly,

    C(t)Φ(t, t0) = [0  e^{−t}] Φ(t, t0) = [ 0   e^{−2t + t0} ]

    Wo(t0, t1) = ∫_{t0}^{t1} [      0       ] [ 0   e^{−2τ + t0} ] dτ = ∫_{t0}^{t1} [ 0          0        ] dτ
                             [ e^{−2τ+t0}   ]                                       [ 0   e^{−4τ + 2t0}   ]

Its determinant is identically zero for all t0 and t1, so the state equation is not observable at any t0.

6.22 Show that (A(t), B(t)) is controllable at t0 if and only if (−A'(t), B'(t)) is observable at t0.

Proof: Let Φ(t, t0) be the state transition matrix of ẋ = A(t)x. From Φ(t, t0)Φ(t0, t) = I we have

    0 = (∂/∂t)[Φ(t, t0)Φ(t0, t)] = A(t)Φ(t, t0)Φ(t0, t) + Φ(t, t0)(∂/∂t)Φ(t0, t) = A(t) + Φ(t, t0)(∂/∂t)Φ(t0, t)

so (∂/∂t)Φ(t0, t) = −Φ(t0, t)A(t), and therefore (∂/∂t)Φ'(t0, t) = −A'(t)Φ'(t0, t). Thus Φ'(t0, t) is the state transition matrix of ż = −A'(t)z.

(A(t), B(t)) is controllable at t0 if and only if there exists a finite t1 > t0 such that

    Wc(t0, t1) = ∫_{t0}^{t1} Φ(t1, τ)B(τ)B'(τ)Φ'(t1, τ) dτ

is nonsingular. (−A'(t), B'(t)) is observable at t0 if and only if there exists a finite t1 > t0 such that

    Wo(t0, t1) = ∫_{t0}^{t1} [Φ'(t0, τ)]' [B'(τ)]' B'(τ) Φ'(t0, τ) dτ = ∫_{t0}^{t1} Φ(t0, τ)B(τ)B'(τ)Φ'(t0, τ) dτ

is nonsingular. Since Wo(t0, t1) = Φ(t0, t1) Wc(t0, t1) Φ'(t0, t1) and Φ(t0, t1) is nonsingular, one matrix is nonsingular if and only if the other is, which establishes the equivalence.

For time-invariant systems, show that (A, B) is controllable if and only if (−A, B) is controllable. Is this true for time-varying systems?

Proof: For a time-invariant system (A an n × n matrix), (A, B) is controllable if and only if

    ρ(C1) = ρ([B  AB  ⋯  A^{n−1}B]) = n

and (−A, B) is controllable if and only if

    ρ(C2) = ρ([B  −AB  A²B  ⋯  (−1)^{n−1}A^{n−1}B]) = n

We know that multiplying any column of a matrix by a nonzero constant does not change the rank of the matrix, so ρ(C1) = ρ(C2): the two conditions are identically the same, and thus we conclude that (A, B) is controllable if and only if (−A, B) is controllable. For time-varying systems, this is not true.
7.1 Given ĝ(s) = (s − 1)/[(s² − 1)(s + 2)], find a three-dimensional controllable realization and check its observability.

Solution:

    ĝ(s) = (s − 1)/[(s² − 1)(s + 2)] = (s − 1)/(s³ + 2s² − s − 2)

Using (7.9) we can find a three-dimensional controllable realization as follows:

    ẋ = [ −2   1   2 ] x + [ 1 ] u ,   y = [0  1  −1] x
        [  1   0   0 ]     [ 0 ]
        [  0   1   0 ]     [ 0 ]

This realization is not observable, because (s − 1) and (s² − 1)(s + 2) are not coprime.
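A numerical spot check that the realization reproduces ĝ(s) (a sketch):

```python
import numpy as np

A = np.array([[-2.0, 1.0, 2.0],
              [1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
b = np.array([[1.0], [0.0], [0.0]])
c = np.array([[0.0, 1.0, -1.0]])

def g_rational(s):
    return (s - 1) / ((s**2 - 1) * (s + 2))

def g_state(s):
    return (c @ np.linalg.solve(s * np.eye(3) - A, b))[0, 0]

ok = all(abs(g_state(s) - g_rational(s)) < 1e-9 for s in (0.5, 2.0 + 1.0j, -3.0))
print(ok)
```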

7.2 Find a three-dimensional observable realization for the transfer function in Problem 7.1 and check its controllability.

Solution: Using (7.14) we can find a three-dimensional observable realization for the transfer function:

    ẋ = [ −2   1   0 ] x + [  0 ] u ,   y = [1  0  0] x
        [  1   0   1 ]     [  1 ]
        [  2   0   0 ]     [ −1 ]

This realization is not controllable, because (s − 1) and (s² − 1)(s + 2) are not coprime.

7.3 Find an uncontrollable and unobservable realization for the transfer function in Problem 7.1. Find also a minimal realization.

Solution:

    ĝ(s) = (s − 1)/[(s² − 1)(s + 2)] = 1/[(s + 1)(s + 2)] = 1/(s² + 3s + 2)

A minimal realization is

    ẋ = [ −3  −2 ] x + [ 1 ] u ,   y = [0  1] x
        [  1   0 ]     [ 0 ]

An uncontrollable and unobservable realization is

    ẋ = [ −3  −2   0 ] x + [ 1 ] u ,   y = [0  1  0] x
        [  1   0   0 ]     [ 0 ]
        [  0   0   1 ]     [ 0 ]
7.4 Use the Sylvester resultant to find the degree of the transfer function in Problem 7.1.

Solution: With D(s) = s³ + 2s² − s − 2 and N(s) = s − 1,

        [ −2  −1   0   0   0   0 ]
        [ −1   1  −2  −1   0   0 ]
    S = [  2   0  −1   1  −2  −1 ]
        [  1   0   2   0  −1   1 ]
        [  0   0   1   0   2   0 ]
        [  0   0   0   0   1   0 ]

rank(S) = 5. Because all three D-columns of S are linearly independent, we conclude that S has only two linearly independent N-columns; thus deg ĝ(s) = 2.
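Building S and checking its rank numerically (a sketch of the same computation):

```python
import numpy as np

# Sylvester resultant of D(s) = s^3 + 2s^2 - s - 2 and N(s) = s - 1,
# coefficients arranged in ascending powers as in the display above.
D = [-2.0, -1.0, 2.0, 1.0]   # D0, D1, D2, D3
N = [-1.0, 1.0, 0.0, 0.0]    # N0, N1, N2, N3
S = np.zeros((6, 6))
for shift in range(3):                     # three D/N column pairs
    S[shift:shift + 4, 2 * shift] = D
    S[shift:shift + 4, 2 * shift + 1] = N
rank = np.linalg.matrix_rank(S)
deg = rank - 3                             # independent N-columns = rank - (# of D-columns)
print(rank, deg)  # 5 and 2
```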

7.5 Use the Sylvester resultant to reduce (2s − 1)/(4s² − 1) to a coprime fraction.

Solution: With D(s) = 4s² − 1 and N(s) = 2s − 1,

        [ −1  −1   0   0 ]
    S = [  0   2  −1  −1 ]
        [  4   0   0   2 ]
        [  0   0   4   0 ]

rank(S) = 3, so there is only one linearly independent N-column. Solving Sz = 0 yields (up to a scalar) z = [−1/2  1/2  0  1]', which states that

    D(s)·(−1/2) + N(s)·(1/2 + s) = 0

Thus N̄(s) = 1/2 + 0·s, D̄(s) = 1/2 + s, and

    (2s − 1)/(4s² − 1) = (1/2)/(s + 1/2) = 1/(2s + 1)
s+2
7.6 form the Sylvester resultant of g ( s ) = 2
by arranging the coefficients of
s + 2 s
N (s ) and D(s ) in descending powers of s and then search linearly independent columns in
order from left to right . is it true that all D-columns are linearly independent of their LHS

columns ?is it true that the degree of g ( s ) equals the number of linearly independent N-columns?

s+2
g (s) = 2 N (s ) D(s ) s Sylvester resultant
s + 2 s
D- LHS g ( s )

N-
s+2 D( s)
Solution g ( s ) = 2 := deg g ( s ) = 1
s + 2 s N ( s )

D( s ) = D2 S 2 + D1 S + D0
N (s) = N 2 S 2 + N1 S + N 0

from the Sylvester resultant :

1 0 0 0
2 1 1 0
s= rank ( s ) = 3 u = 2 (the number of linearly independent
0 2 2 1

0 0 0 2
N-columns

7.7 Consider

    ĝ(s) = (β1 s + β2)/(s² + α1 s + α2) = N(s)/D(s)

and its realization

    ẋ = [ −α1  −α2 ] x + [ 1 ] u ,   y = [β1  β2] x
        [  1    0  ]     [ 0 ]

Show that the state equation is observable if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.

Proof: The state equation is a controllable canonical form realization of ĝ(s). From Theorem 7.1 we know the state equation is observable if and only if D(s) and N(s) are coprime.

From the formation of the Sylvester resultant we can conclude that D(s) and N(s) are coprime if and only if the Sylvester resultant is nonsingular. Thus the state equation is observable if and only if the Sylvester resultant of D(s) and N(s) is nonsingular.

7.8 Repeat Problem 7.7 for a transfer function of degree 3 and its controllable-form realization.

Solution: A transfer function of degree 3 can be expressed as

    ĝ(s) = (β̄0 s³ + β̄1 s² + β̄2 s + β̄3)/(s³ + α1 s² + α2 s + α3) = N(s)/D(s)

where D(s) and N(s) are coprime. Writing ĝ(s) as

    ĝ(s) = β̄0 + [(β̄1 − β̄0 α1)s² + (β̄2 − β̄0 α2)s + (β̄3 − β̄0 α3)] / (s³ + α1 s² + α2 s + α3)

we can obtain a controllable canonical form realization of ĝ(s):

    ẋ = [ −α1  −α2  −α3 ] x + [ 1 ] u
        [  1    0    0  ]     [ 0 ]
        [  0    1    0  ]     [ 0 ]

    y = [β̄1 − β̄0 α1   β̄2 − β̄0 α2   β̄3 − β̄0 α3] x + β̄0 u

The Sylvester resultant of D(s) and N(s) is nonsingular because D(s) and N(s) are coprime; thus the state equation is observable.

7.9 Verify Theorem 7.7 for ĝ(s) = 1/(s + 1)².

Verification: ĝ(s) = 1/(s + 1)² is a strictly proper rational function of degree 2, and it can be expanded into an infinite power series as

    ĝ(s) = 0·s^{−1} + s^{−2} − 2s^{−3} + 3s^{−4} − 4s^{−5} + 5s^{−6} − ⋯

The Hankel matrices of its Markov parameters satisfy

    ρT(1, 1) = ρ([0]) = 0 ,   ρT(1, 2) = ρT(2, 1) = 1
    ρT(2, 2) = ρT(2 + k, 2 + l) = 2   for every k, l = 1, 2, …

which agrees with Theorem 7.7, since deg ĝ(s) = 2.

7.10 Use the Markov parameters of ĝ(s) = 1/(s + 1)² to find an irreducible companion-form realization.

Solution: From

    ĝ(s) = 0·s^{−1} + s^{−2} − 2s^{−3} + 3s^{−4} − 4s^{−5} + 5s^{−6} − ⋯

we have

    T(2, 2) = [ 0   1 ] ,   ρT(2, 2) = 2 = deg ĝ(s)
              [ 1  −2 ]

    Ã = T̃(2, 2)T^{−1}(2, 2) = [  1  −2 ] [ 0   1 ]^{−1} = [  1  −2 ] [ 2  1 ] = [  0   1 ]
                               [ −2   3 ] [ 1  −2 ]        [ −2   3 ] [ 1  0 ]   [ −1  −2 ]

    b = [ 0 ] ,   c = [1  0]
        [ 1 ]

The triplet (Ã, b, c) is an irreducible companion-form realization.
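The realization can be checked against the Markov parameters (a sketch; h(m) = cA^{m−1}b should reproduce the series coefficients 0, 1, −2, 3, −4, 5, …):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])

# Markov parameters h(m) = c A^{m-1} b for m = 1..6.
h = []
Ak = np.eye(2)
for m in range(1, 7):
    h.append(float((c @ Ak @ b)[0, 0]))
    Ak = Ak @ A
print(h)  # [0.0, 1.0, -2.0, 3.0, -4.0, 5.0]
```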

7.11 Use the Markov parameters of ĝ(s) = 1/(s+1)^2 to find an irreducible balanced-form realization of ĝ(s).

Solution: T(2,2) = [0 1; 1 −2] and the shifted Hankel matrix is T̃(2,2) = [1 −2; −2 3].

Using Matlab I type:

  t = [0 1; 1 -2]; tt = [1 -2; -2 3];
  [k, s, l] = svd(t); sl = sqrt(s);
  o = k * sl; c = sl * l';
  A = inv(o) * tt * inv(c)
  b = c(:, 1), c = o(1, :)

This yields the following balanced realization (the signs of b depend on the sign convention used by the SVD):

  ẋ = [−1.7071 0.7071; −0.7071 −0.2929]x + [−0.5946; 0.5946]u
  y = [0.5946 0.5946]x
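The same computation can be done in Python; numpy's SVD may flip signs of b and c together, which does not change the transfer function. A sketch:

```python
import numpy as np

T  = np.array([[0.0, 1.0], [1.0, -2.0]])    # T(2,2)
Tt = np.array([[1.0, -2.0], [-2.0, 3.0]])   # shifted Hankel

U, sv, Vh = np.linalg.svd(T)
S2 = np.diag(np.sqrt(sv))
O = U @ S2            # observability factor,  T = O C
C = S2 @ Vh           # controllability factor

A = np.linalg.inv(O) @ Tt @ np.linalg.inv(C)
b = C[:, [0]]
c = O[[0], :]

s = 2.0
g = (c @ np.linalg.inv(s * np.eye(2) - A) @ b)[0, 0]
print(np.linalg.eigvals(A))                  # both eigenvalues at -1
print(abs(g - 1.0 / (s + 1.0) ** 2) < 1e-8)  # realization of 1/(s+1)^2
```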

7.12

Show that the two state equations

  ẋ = [2 1; 0 −1]x + [1; 0]u,  y = [2 2]x

and

  ẋ = [2 0; 1 −1]x + [1; 2]u,  y = [2 0]x

are realizations of (2s+2)/(s^2 − s − 2). Are they minimal realizations? Are they algebraically equivalent?

Proof: (2s+2)/(s^2 − s − 2) = 2(s+1)/((s+1)(s−2)) = 2/(s−2).

For the first equation,

  [2 2](sI − A)^−1[1; 0] = [2 2][1/(s−2)  1/((s−2)(s+1)); 0  1/(s+1)][1; 0] = 2/(s−2)

and for the second,

  [2 0](sI − A)^−1[1; 2] = [2 0][1/(s−2)  0; 1/((s+1)(s−2))  1/(s+1)][1; 2] = 2/(s−2)

Thus the two state equations are both realizations of (2s+2)/(s^2 − s − 2). The degree of the transfer function is 1, while the two state equations are both two-dimensional, so they are not minimal realizations.

They are not algebraically equivalent, because no nonsingular P satisfies

  P[2 1; 0 −1]P^−1 = [2 0; 1 −1],  P[1; 0] = [1; 2],  [2 2]P^−1 = [2 0]

simultaneously: the last two conditions force the first column of P to be [1; 2] and the first row of P to be [1 1], and then the (1,2) entries of P[2 1; 0 −1] and [2 0; 1 −1]P are 0 and 2 respectively, a contradiction.
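Both realizations can be checked by evaluating c(sI − A)^−1 b at a few frequencies (matrix signs as reconstructed above); a sketch:

```python
import numpy as np

def tf(A, b, c, s):
    """Evaluate c (sI - A)^{-1} b at the complex frequency s."""
    return (c @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ b)[0, 0]

# the two (non-minimal) realizations
A1 = np.array([[2.0, 1.0], [0.0, -1.0]]); b1 = np.array([[1.0], [0.0]]); c1 = np.array([[2.0, 2.0]])
A2 = np.array([[2.0, 0.0], [1.0, -1.0]]); b2 = np.array([[1.0], [2.0]]); c2 = np.array([[2.0, 0.0]])

ok = all(abs(tf(A1, b1, c1, s) - 2.0 / (s - 2.0)) < 1e-9 and
         abs(tf(A2, b2, c2, s) - 2.0 / (s - 2.0)) < 1e-9
         for s in (3.0, -4.0, 1j))
print(ok)   # True: both give (2s+2)/(s^2-s-2) = 2/(s-2)
```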

7.13 Find the characteristic polynomials and degrees of the following proper rational matrices:

  Ĝ1(s) = [1/s  (s+3)/(s+1); 1/(s+3)  s/(s+1)]

  Ĝ2(s) = [1/(s+1)^2  1/((s+1)(s+2)); 1/(s+2)  1/((s+1)(s+2))]

  Ĝ3(s) = [1/s  (s+3)/(s+1)  1/(s+2); 1/(s+3)  (s+1)/(s+4)  1/(s+5)]

Note that each entry of Ĝ3(s) has different poles from the other entries.

Solution: The matrix Ĝ1(s) has 1/s, (s+3)/(s+1), 1/(s+3), s/(s+1) and det Ĝ1(s) = 0 as its minors. Thus the characteristic polynomial of Ĝ1(s) is s(s+1)(s+3) and δĜ1(s) = 3.

The matrix Ĝ2(s) has 1/(s+1)^2, 1/((s+1)(s+2)), 1/(s+2), 1/((s+1)(s+2)) and det Ĝ2(s) = −(s^2+s−1)/((s+1)^3(s+2)^2) as its minors. Thus the characteristic polynomial of Ĝ2(s) is (s+1)^3(s+2)^2 and δĜ2(s) = 5.

Every entry of Ĝ3(s) has poles that differ from those of all other entries, so the characteristic polynomial of Ĝ3(s) is the product of all entry denominators, s(s+1)(s+2)(s+3)(s+4)(s+5), and δĜ3(s) = 6.


7.14 Use the left fraction

  Ĝ(s) = [s 1; −s s]^−1 [1; −1]

to form a generalized resultant as in (7.83), and then search its linearly independent columns in order from left to right. What is the number of linearly independent N-columns? What is the degree of Ĝ(s)? Find a right coprime fraction of Ĝ(s). Is the given left fraction coprime?

Solution: Ĝ(s) = D̄^−1(s)N̄(s) with

  D̄(s) = [s 1; −s s] = [0 1; 0 0] + [1 0; −1 1]s,  N̄(s) = [1; −1]

Thus we have the generalized resultant

  S = [ 0  1  1  0  0  0
        0  0 −1  0  0  0
        1  0  0  0  1  1
       −1  1  0  0  0 −1
        0  0  0  1  0  0
        0  0  0 −1  1  0],   rank S = 5

The number of linearly independent N-columns is 1, that is, μ = 1, and the monic null vector of S is

  z = [1 0 0 0 0 −1]'

whose entries correspond, in order, to N0 (two entries), D0, N1 (two entries), D1. So we read off

  D(s) = 0 + 1·s = s,  N(s) = [1; 0] + [0; 0]s = [1; 0]

  Ĝ(s) = [1; 0]s^−1,  deg Ĝ(s) = μ = 1

Since deg det D̄(s) = 2 > 1 = deg Ĝ(s), the given left fraction is not coprime.

7.15 Are all D-columns in the generalized resultant in Problem 7.14 linearly independent of their LHS columns? Now, in forming the generalized resultant, the coefficient matrices of D̄(s) and N̄(s) are arranged in descending powers of s, instead of ascending powers of s as in Problem 7.14. Is it true that all D-columns are linearly independent of their LHS columns? Does the degree of Ĝ(s) equal the number of linearly independent N-columns? Does Theorem 7.M4 hold?

Solution: Because D̄1 is nonsingular, all the D-columns in the generalized resultant in Problem 7.14 are linearly independent of their LHS columns.

Forming the generalized resultant by arranging the coefficient matrices of D̄(s) and N̄(s) in descending powers of s gives

      [D̄1 N̄1  0   0 ]   [ 1  0  0  0  0  0
  S = [D̄0 N̄0 D̄1 N̄1] = [−1  1  0  0  0  0
      [ 0   0 D̄0 N̄0]     0  1  1  1  0  0
                          0  0 −1 −1  1  0
                          0  0  0  0  1  1
                          0  0  0  0  0 −1],   rank S = 5

We see that the first D-column in the second column block (column 4) equals the N-column to its left (column 3), so it is linearly dependent on its LHS columns. It is therefore not true that all D-columns are linearly independent of their LHS columns.

The number of linearly independent N-columns is 2, while the degree of Ĝ(s) is 1, as found in Problem 7.14. So the degree of Ĝ(s) does not equal the number of linearly independent N-columns, and Theorem 7.M4 does not hold for this arrangement.

7.16 Use the right coprime fraction of Ĝ(s) obtained in Problem 7.14 to form a generalized resultant as in (7.89), search its linearly independent rows in order from top to bottom, and then find a left coprime fraction of Ĝ(s).

Solution: Ĝ(s) = N(s)D(s)^−1 with D(s) = s and N(s) = [1; 0].

The generalized resultant is

  T = [0 1 0
       1 0 0
       0 0 0
       0 0 1
       0 1 0
       0 0 0],   rank T = 3,  ν1 = 1, ν2 = 0

Searching its rows in order from top to bottom, the left null vectors of the submatrices ending at the primary dependent rows are

  [0 0 1][0 1 0; 1 0 0; 0 0 0] = 0
  [−1 0 0 1][0 1 0; 1 0 0; 0 0 1; 0 1 0] = 0

From these null vectors we obtain

  D̄(s) = [0 0; 0 1] + [1 0; 0 0]s = [s 0; 0 1],  N̄(s) = [1; 0]

Thus a left coprime fraction of Ĝ(s) is

  Ĝ(s) = [s 0; 0 1]^−1 [1; 0]
3
7.17 Find a right coprime fraction of

  Ĝ(s) = [(s^2+1)/s^3  (2s+1)/s^2; (s+2)/s^2  2/s]

and then a minimal realization of Ĝ(s).

Solution:

  Ĝ(s) = [s^3 0; 0 s^2]^−1 [s^2+1  s(2s+1); s+2  2s] =: D̄^−1(s)N̄(s)

where

  D̄(s) = [s^3 0; 0 s^2] = 0 + 0·s + [0 0; 0 1]s^2 + [1 0; 0 0]s^3
  N̄(s) = [s^2+1  s(2s+1); s+2  2s] = [1 0; 2 0] + [0 1; 1 2]s + [1 2; 0 0]s^2

The generalized resultant (column blocks ordered [D N | D N | D N]) is

  S = [0 0 1 0 0 0 0 0 0 0 0 0
       0 0 2 0 0 0 0 0 0 0 0 0
       0 0 0 1 0 0 1 0 0 0 0 0
       0 0 1 2 0 0 2 0 0 0 0 0
       0 0 1 2 0 0 0 1 0 0 1 0
       0 1 0 0 0 0 1 2 0 0 2 0
       1 0 0 0 0 0 1 2 0 0 0 1
       0 0 0 0 0 1 0 0 0 0 1 2
       0 0 0 0 1 0 0 0 0 0 1 2
       0 0 0 0 0 0 0 0 0 1 0 0
       0 0 0 0 0 0 0 0 1 0 0 0
       0 0 0 0 0 0 0 0 0 0 0 0]

  rank S = 9,  μ1 = 2, μ2 = 1

The monic null vectors of the submatrices that consist of the columns up to the primary dependent N2-column and N1-column give the coefficient rows, ordered as [N0 D0 N1 D1 N2 D2]:

  column 1:  [0.5  2.5 | 0  −0.5 | 1  1 | 0.5  0 | 0  0 | 1  0]
  column 2:  [2.5  2.5 | 0  −0.5 | 0  0 | 0.5  1 | 0  0 | 0  0]

so that

  D(s) = [0 0; −0.5 −0.5] + [0.5 0.5; 0 1]s + [1 0; 0 0]s^2 = [s^2+0.5s  0.5s; −0.5  s−0.5]
  N(s) = [0.5 2.5; 2.5 2.5] + [1 0; 1 0]s = [s+0.5  2.5; s+2.5  2.5]

Thus a right coprime fraction of Ĝ(s) is

  Ĝ(s) = [s+0.5  2.5; s+2.5  2.5][s^2+0.5s  0.5s; −0.5  s−0.5]^−1

We define H(s) = [s^2 0; 0 s] and L(s) = [s 0; 1 0; 0 1]; then we have

  D(s) = Dhc H(s) + Dlc L(s),  Dhc = [1 0.5; 0 1],  Dlc = [0.5 0 0; 0 −0.5 −0.5]
  N(s) = [1 0.5 2.5; 1 2.5 2.5] L(s)

  Dhc^−1 = [1 −0.5; 0 1],  Dhc^−1 Dlc = [0.5 0.25 0.25; 0 −0.5 −0.5]

Thus a minimal realization of Ĝ(s) is

  ẋ = [−0.5 −0.25 −0.25; 1 0 0; 0 0.5 0.5]x + [1 −0.5; 0 0; 0 1]u
  y = [1 0.5 2.5; 1 2.5 2.5]x
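The three-dimensional realization above can be verified against the given transfer matrix entry by entry; a sketch:

```python
import numpy as np

A = np.array([[-0.5, -0.25, -0.25],
              [ 1.0,  0.0,   0.0 ],
              [ 0.0,  0.5,   0.5 ]])
B = np.array([[1.0, -0.5],
              [0.0,  0.0],
              [0.0,  1.0]])
C = np.array([[1.0, 0.5, 2.5],
              [1.0, 2.5, 2.5]])

def G(s):   # the given transfer matrix
    return np.array([[(s**2 + 1) / s**3, (2*s + 1) / s**2],
                     [(s + 2) / s**2,     2 / s          ]])

ok = all(np.allclose(C @ np.linalg.inv(s * np.eye(3) - A) @ B, G(s))
         for s in (2.0, -3.0, 1 + 1j))
print(ok)   # True: a 3-dimensional (hence minimal, deg G = 3) realization
```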
8.1 Given

  ẋ = [2 1; −1 1]x + [1; 2]u,  y = [1 1]x

find the state feedback gain k so that the feedback system has −1 and −2 as its eigenvalues. Compute k directly without using any equivalence transformation.

Solution: Introducing state feedback u = r − [k1 k2]x, we obtain

  ẋ = ([2 1; −1 1] − [1; 2][k1 k2])x + [1; 2]r

The new A-matrix has characteristic polynomial

  Δf(s) = (s−2+k1)(s−1+2k2) + (1−k2)(1+2k1)
        = s^2 + (k1+2k2−3)s + (k1−5k2+3)

The desired characteristic polynomial is (s+1)(s+2) = s^2 + 3s + 2. Equating

  k1 + 2k2 − 3 = 3  and  k1 − 5k2 + 3 = 2

yields k1 = 4 and k2 = 1, so the desired state feedback gain is k = [4 1].
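The gain can be confirmed by checking the closed-loop eigenvalues; a sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 1.0]])
b = np.array([[1.0], [2.0]])
k = np.array([[4.0, 1.0]])

eigs = np.sort(np.linalg.eigvals(A - b @ k).real)
print(eigs)   # [-2. -1.]
```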

8.2 Repeat Problem 8.1 by using (8.13).

Solution:

  Δ(s) = (s−2)(s−1) + 1 = s^2 − 3s + 3
  Δf(s) = (s+1)(s+2) = s^2 + 3s + 2
  k̄ = [3−(−3)  2−3] = [6  −1]

  C = [b Ab] = [1 4; 2 1],  C^−1 = (−1/7)[1 −4; −2 1] = [−1/7 4/7; 2/7 −1/7]

C is nonsingular, so (A, b) is controllable. Using (8.13), with C̄ = [1 3; 0 1],

  k = k̄ C̄ C^−1 = [6 −1][1 3; 0 1][−1/7 4/7; 2/7 −1/7] = [4 1]

8.3 Repeat Problem 8.1 by solving a Lyapunov equation.

Solution: (A, b) is controllable. Selecting

  F = [−1 0; 0 −2],  k̄ = [1 1]

and solving AT − TF = b k̄ gives

  T = [0 1/13; 1 9/13],  T^−1 = [−9 1; 13 0]

so that

  k = k̄ T^−1 = [1 1][−9 1; 13 0] = [4 1]
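The Lyapunov-equation procedure can be reproduced column by column: with diagonal F, AT − TF = b k̄ decouples into the linear systems (A − f_i I)t_i = k̄_i b. A sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 1.0]])
b = np.array([1.0, 2.0])
F_diag = [-1.0, -2.0]      # desired eigenvalues on the diagonal of F
kbar = [1.0, 1.0]

# solve AT - TF = b*kbar column by column: (A - f_i I) t_i = kbar_i * b
T = np.column_stack([np.linalg.solve(A - f * np.eye(2), kb * b)
                     for f, kb in zip(F_diag, kbar)])
k = np.array(kbar) @ np.linalg.inv(T)

print(np.round(k, 6))                                        # [4. 1.]
closed = np.sort(np.linalg.eigvals(A - np.outer(b, k)).real)
print(closed)                                                # [-2. -1.]
```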

1 1 2 1

8.4 find the state foodback gain for the state equation x = 0 1 1 x + 0 u

0 0 1 1
so that the resulting system has eigenvalues 2 and 1+j1 . use the method you think is the
simplest by hand to carry out the design .
-2 -1+j1 /

solution : ( A, b) is controllable

( s ) = ( s 1) 3 = s 3 3s 2 + 3s 1
f ( s ) = ( s + 2)( s + 1 + j1)( s + 1 j1) = s 3 + 4s 2 + 6s + 4
k = [4 (3) 6 3 4 (1)] = [7 3 5]
1 3 3 1 3 6
C = 0 1 3
1
C = 0 1 3
0 0 1 0 0 1
1 1 2 1 1 0
C = 0 1 2 C 1 = 2 3 2
1 1 1 1 2 1

1 3 6 1 1 0

Using(8.13) k = k C C [7 3 5]0 1 3 2 3 2 = [15 47 8]
1

0 0 1 1 2 1
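Since the system is single-input, the pole-placing gain is unique and easy to verify; a sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, -2.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  1.0]])
b = np.array([[1.0], [0.0], [1.0]])
k = np.array([[15.0, 47.0, -8.0]])

eigs = np.sort_complex(np.linalg.eigvals(A - b @ k))
print(eigs)   # -2, -1-1j, -1+1j (up to ordering)
```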

8.5 Consider a system with transfer function

  ĝ(s) = (s−1)(s+2)/((s+1)(s−2)(s+3))

Is it possible to change the transfer function to

  ĝf(s) = (s−1)/((s+2)(s+3))

by state feedback? Is the resulting system BIBO stable? Asymptotically stable?

Solution:

  ĝf(s) = (s−1)/((s+2)(s+3)) = (s−1)(s+2)/((s+2)^2(s+3))

State feedback leaves the zeros unchanged and can move the poles to −2, −2, −3, so it is possible to change ĝ(s) to ĝf(s) by state feedback. The resulting system is asymptotically stable, and consequently BIBO stable.

8.6 Consider a system with transfer function

  ĝ(s) = (s−1)(s+2)/((s+1)(s−2)(s+3))

Is it possible to change the transfer function to ĝf(s) = 1/(s+3) by state feedback? Is the resulting system BIBO stable? Asymptotically stable?

Solution:

  ĝf(s) = 1/(s+3) = (s−1)(s+2)/((s−1)(s+2)(s+3))

Assigning the poles to 1, −2, −3 makes the transfer function reduce to 1/(s+3), so it is possible to change ĝ(s) to ĝf(s) by state feedback, and the resulting system is BIBO stable. However, it is not asymptotically stable, because the canceled unstable pole at s = 1 remains an eigenvalue of the closed-loop A-matrix.

8.7 Consider the continuous-time state equation

  ẋ = [1 1 −2; 0 1 1; 0 0 1]x + [1; 0; 1]u,  y = [2 0 0]x

Let u = p·r − kx. Find the feedforward gain p and state feedback gain k so that the resulting system has eigenvalues −2 and −1 ± j1 and will track asymptotically any step reference input.

Solution:

  ĝ(s) = c(sI − A)^−1 b = (2s^2 − 8s + 8)/(s^3 − 3s^2 + 3s − 1)

With k = [15 47 −8] (obtained in Problem 8.4), the closed-loop characteristic polynomial is

  Δf(s) = s^3 + 4s^2 + 6s + 4

and state feedback does not affect the numerator, so

  ĝf(s) = p(2s^2 − 8s + 8)/(s^3 + 4s^2 + 6s + 4)

Step tracking requires ĝf(0) = 1, which gives

  p = 4/8 = 0.5
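The choice of p can be checked through the closed-loop DC gain; a sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, -2.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  1.0]])
b = np.array([[1.0], [0.0], [1.0]])
c = np.array([[2.0, 0.0, 0.0]])
k = np.array([[15.0, 47.0, -8.0]])
p = 0.5

# DC gain of y = c x, xdot = (A - b k)x + b p r must be 1 for step tracking
dc = float(-c @ np.linalg.inv(A - b @ k) @ b) * p
print(dc)   # unity DC gain
```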

8.8 Consider the discrete-time state equation

  x[k+1] = [1 1 −2; 0 1 1; 0 0 1]x[k] + [1; 0; 1]u[k]
  y[k] = [2 0 0]x[k]

Find the state feedback gain so that the resulting system has all eigenvalues at z = 0. Show that for any initial state the zero-input response of the feedback system becomes identically zero for k ≥ 3.

Solution: (A, b) is controllable.

  Δ(z) = (z−1)^3 = z^3 − 3z^2 + 3z − 1
  Δf(z) = z^3
  k̄ = [3  −3  1]

  k = k̄ C̄ C^−1 = [3 −3 1][1 3 6; 0 1 3; 0 0 1][1 1 0; −2 −3 2; 1 2 −1] = [1 5 2]

The state feedback equation becomes

  x[k+1] = ([1 1 −2; 0 1 1; 0 0 1] − [1; 0; 1][1 5 2])x[k] + [1; 0; 1]r[k]
         = [0 −4 −4; 0 1 1; −1 −5 −1]x[k] + [1; 0; 1]r[k]
  y[k] = [2 0 0]x[k]

Denoting the new A-matrix by Ā, the zero-input response of the feedback system is y_zi[k] = c Ā^k x[0]. The characteristic polynomial of Ā is z^3, so Ā is similar to a nilpotent Jordan block and, by the Cayley-Hamilton theorem,

  Ā^k = 0  for k ≥ 3

Hence we readily obtain y_zi[k] = c·0·x[0] = 0 for k ≥ 3.
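The nilpotency is easy to confirm; a sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0, -2.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  1.0]])
b = np.array([[1.0], [0.0], [1.0]])
k = np.array([[1.0, 5.0, 2.0]])

Abar = A - b @ k
A3 = np.linalg.matrix_power(Abar, 3)
print(A3)   # the zero matrix: Abar is nilpotent, so y_zi[k] = 0 for k >= 3
```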

8.9 Consider the discrete-time state equation in Problem 8.8. Let u[k] = p·r[k] − k x[k], where p is a feedforward gain. For the k in Problem 8.8, find a gain p so that the output will track any step reference input. Show also that y[k] = r[k] for k ≥ 3; thus exact tracking is achieved in a finite number of sampling periods instead of asymptotically. This is possible if all poles of the resulting system are placed at z = 0. This is called the dead-beat design.

Solution:

  ĝ(z) = (2z^2 − 8z + 8)/(z^3 − 3z^2 + 3z − 1)

With u[k] = p·r[k] − k x[k] and k = [1 5 2],

  ĝf(z) = p(2z^2 − 8z + 8)/z^3

All poles of ĝf(z) lie inside the unit circle, so if the reference input is a step of magnitude a, the output y[k] approaches the constant ĝf(1)·a as k → ∞. Thus for y[k] to track any step reference input we need ĝf(1) = 1:

  ĝf(1) = 2p = 1  →  p = 0.5

The resulting system can be described as

  x[k+1] = [0 −4 −4; 0 1 1; −1 −5 −1]x[k] + [0.5; 0; 0.5]r[k] =: Ā x[k] + b̄ r[k]
  y[k] = [2 0 0]x[k]

The response excited by r[k] is

  y[k] = c Ā^k x[0] + Σ_{m=0}^{k−1} c Ā^{k−1−m} b̄ r[m]

As we know, Ā^k = 0 for k ≥ 3, so

  y[k] = c b̄ r[k−1] + c Ā b̄ r[k−2] + c Ā^2 b̄ r[k−3]
       = r[k−1] − 4r[k−2] + 4r[k−3]  for k ≥ 3

For any step reference input r[k] = a the response is

  y[k] = (1 − 4 + 4)a = a = r[k]  for k ≥ 3
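The dead-beat behavior can be simulated directly from an arbitrary initial state; a sketch:

```python
import numpy as np

Abar = np.array([[ 0.0, -4.0, -4.0],
                 [ 0.0,  1.0,  1.0],
                 [-1.0, -5.0, -1.0]])      # A - b k from problem 8.8
bbar = np.array([[0.5], [0.0], [0.5]])     # p*b with p = 0.5
c = np.array([[2.0, 0.0, 0.0]])

x = np.array([[1.0], [-1.0], [2.0]])       # arbitrary initial state
ys = []
for _ in range(6):                          # unit-step reference r[k] = 1
    ys.append(float(c @ x))
    x = Abar @ x + bbar
print(ys)   # exact tracking: y[k] = 1 for every k >= 3
```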

8.10 Consider the uncontrollable state equation

  ẋ = [−2 1 0 0; 0 −2 0 0; 0 0 −1 0; 0 0 0 −1]x + [0; 1; 1; 1]u

Is it possible to find a gain k so that the equation with state feedback u = r − kx has eigenvalues −2, −2, −1, −1? Is it possible to have eigenvalues −2, −2, −2, −1? How about −2, −2, −2, −2? Is the equation stabilizable?

Solution: With the equivalence transformation x̄ = Px, where

  P^−1 = [b  Ab  A^2b  e4] = [0 1 −4 0; 1 −2 4 0; 1 −1 1 0; 1 −1 1 1]

the uncontrollable state equation can be transformed into

  [ẋ̄c; ẋ̄c̄] = [0 0 −4 0; 1 0 −8 0; 0 1 −5 0; 0 0 0 −1][x̄c; x̄c̄] + [1; 0; 0; 0]u
            = [Āc 0; 0 Āc̄][x̄c; x̄c̄] + [b̄c; 0]u

where (Āc, b̄c) is controllable, with characteristic polynomial (s+2)^2(s+1), and Āc̄ = −1 is stable. So the equation is stabilizable. The uncontrollable eigenvalue −1 of Āc̄ is not affected by state feedback, while the eigenvalues of the controllable sub-equation can be arbitrarily assigned (in conjugate pairs).

So by using state feedback it is possible for the resulting system to have eigenvalues −2, −2, −1, −1, and also eigenvalues −2, −2, −2, −1. It is impossible to have −2, −2, −2, −2 as its eigenvalues, because state feedback cannot affect the uncontrollable eigenvalue −1.
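Controllability of each mode can be confirmed with the PBH rank test (signs of A as reconstructed above); a sketch:

```python
import numpy as np

A = np.array([[-2.0, 1.0, 0.0, 0.0],
              [ 0.0,-2.0, 0.0, 0.0],
              [ 0.0, 0.0,-1.0, 0.0],
              [ 0.0, 0.0, 0.0,-1.0]])
b = np.array([[0.0], [1.0], [1.0], [1.0]])

# PBH test: lambda is controllable iff [A - lambda*I, b] has full row rank
ranks = {lam: np.linalg.matrix_rank(np.hstack([A - lam * np.eye(4), b]))
         for lam in (-2.0, -1.0)}
print(ranks)   # rank 4 at -2 (controllable), rank 3 at -1 (uncontrollable but stable)
```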

8.11 Design a full-dimensional and a reduced-dimensional state estimator for the state equation in Problem 8.1. Select the eigenvalues of the estimators from {−3, −2 ± j2}.

Solution: The A, b and c matrices in Problem 8.1 are

  A = [2 1; −1 1],  b = [1; 2],  c = [1 1]

The eigenvalues of the full-dimensional (second-order) estimator must be selected as −2 ± j2. The full-dimensional state estimator can be written as

  dx̂/dt = (A − lc)x̂ + bu + ly

Let l = [l1; l2]; then the characteristic polynomial of A − lc is

  Δ(s) = (s−2+l1)(s−1+l2) + (1+l2)(1−l1)
       = s^2 + (l1+l2−3)s + (3−2l1−l2)

Setting it equal to (s+2)^2 + 4 = s^2 + 4s + 8 yields l1 = −12, l2 = 19. Thus a full-dimensional state estimator with eigenvalues −2 ± j2 is

  dx̂/dt = [14 13; −20 −18]x̂ + [1; 2]u + [−12; 19]y

Designing a reduced-dimensional state estimator with eigenvalue −3:
(1) Select the 1×1 stable matrix F = −3.
(2) Select the 1×1 vector L = 1; (F, L) is controllable.
(3) Solve TA − FT = Lc for T = [t1 t2]:

  [t1 t2][2 1; −1 1] + 3[t1 t2] = [1 1]  →  T = [5/21 4/21]

(4) The one-dimensional state estimator with eigenvalue −3 is

  ż = −3z + (13/21)u + y      (since Tb = 13/21)
  x̂ = [c; T]^−1[y; z] = [1 1; 5/21 4/21]^−1[y; z] = [−4 21; 5 −21][y; z]
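The full-dimensional estimator gain can be verified through the eigenvalues of A − lc; a sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 1.0]])
c = np.array([[1.0, 1.0]])
l = np.array([[-12.0], [19.0]])

eigs = np.sort_complex(np.linalg.eigvals(A - l @ c))
print(eigs)   # -2-2j and -2+2j, as designed
```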
8.12 Consider the state equation in Problem 8.1. Compute the transfer function from r to y of the state feedback system. Compute the transfer function from r to y if the feedback gain is applied to the estimated state of the full-dimensional estimator designed in Problem 8.11. Compute the transfer function from r to y if the feedback gain is applied to the estimated state of the reduced-dimensional state estimator also designed in Problem 8.11. Are the three overall transfer functions the same?

Solution: ẋ = Ax + bu, y = cx, with A = [2 1; −1 1], b = [1; 2], c = [1 1], k = [4 1].

(1) Feedback u = r − kx from the true state:

  ĝ_ry(s) = c(sI − A + bk)^−1 b = [1 1][s+2 0; 9 s+1]^−1[1; 2] = (3s−4)/((s+1)(s+2))

(2) Feedback u = r − kx̂ from the full-dimensional estimate:

  ẋ = Ax − bk x̂ + br
  dx̂/dt = lc x + (A − lc − bk)x̂ + br = [−12 −12; 19 19]x + [10 12; −28 −20]x̂ + [1; 2]r

In composite form,

  [ẋ; dx̂/dt] = [2 1 −4 −1; −1 1 −8 −2; −12 −12 10 12; 19 19 −28 −20][x; x̂] + [1; 2; 1; 2]r
  y = [1 1 0 0][x; x̂]

  ĝ_ry(s) = (3s^3 + 8s^2 + 8s − 32)/(s^4 + 7s^3 + 22s^2 + 32s + 16)
          = (3s−4)(s^2+4s+8)/((s+1)(s+2)(s^2+4s+8)) = (3s−4)/((s+1)(s+2))

(3) Feedback from the reduced-dimensional estimate x̂ = [−4 21; 5 −21][y; z]:

  u = r − [4 1][−4 21; 5 −21][y; z] = r + 11y − 63z

so

  ẋ = Ax + b(r + 11cx − 63z) = [13 12; 21 23]x − [63; 126]z + [1; 2]r
  ż = −3z + (13/21)u + y = [164/21 164/21]x − 42z + (13/21)r

In composite form,

  [ẋ; ż] = [13 12 −63; 21 23 −126; 164/21 164/21 −42][x; z] + [1; 2; 13/21]r
  y = [1 1 0][x; z]

  ĝ_ry(s) = (3s^2 + 5s − 12)/(s^3 + 6s^2 + 11s + 6)
          = (3s−4)(s+3)/((s+1)(s+2)(s+3)) = (3s−4)/((s+1)(s+2))

These three overall transfer functions are the same, which verifies the result discussed in Section 8.5.
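The separation-principle result for cases (1) and (2) can be confirmed by evaluating both transfer functions at a few frequencies; a sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 1.0]])
b = np.array([[1.0], [2.0]])
c = np.array([[1.0, 1.0]])
k = np.array([[4.0, 1.0]])
l = np.array([[-12.0], [19.0]])

def tf(Ac, Bc, Cc, s):
    return (Cc @ np.linalg.inv(s * np.eye(Ac.shape[0]) - Ac) @ Bc)[0, 0]

# (1) feedback from the true state
g1 = lambda s: tf(A - b @ k, b, c, s)

# (2) feedback from the full-dimensional estimate
Ac = np.block([[A,      -b @ k           ],
               [l @ c,   A - l @ c - b @ k]])
g2 = lambda s: tf(Ac, np.vstack([b, b]), np.hstack([c, np.zeros((1, 2))]), s)

ok = all(abs(g1(s) - g2(s)) < 1e-9 and
         abs(g1(s) - (3*s - 4) / ((s + 1) * (s + 2))) < 1e-9
         for s in (2.0, -4.0, 1j))
print(ok)   # True: both equal (3s-4)/((s+1)(s+2))
```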

8.13 Let

  A = [0 1 0 0; 0 0 1 0; 3 1 2 3; 2 1 0 0],  B = [0 0; 0 0; 1 2; 0 2]

Find two different constant matrices K such that (A − BK) has eigenvalues −4 ± 3j and −5 ± 4j.

Solution: (1) Select the 4×4 matrix

  F = [−4 3 0 0; −3 −4 0 0; 0 0 −5 4; 0 0 −4 −5]

whose eigenvalues are −4 ± 3j and −5 ± 4j.

(2) If K̄1 = [1 0 0 0; 0 0 1 0], then (F, K̄1) is observable. Solving AT1 − T1F = B K̄1 yields

  T1 = [0.0013 0.0059 0.0005 0.0042
        0.0126 0.0273 0.0193 0.0191
        0.1322 0.0714 0.1731 0.0185
        0.0006 0.0043 0.2451 0.1982]

  K1 = K̄1 T1^−1 = [606.2000 168.0000 14.2000 2.0000
                   371.0670 119.1758 14.8747 2.2253]

(3) If K̄2 = [1 1 1 1; 0 0 1 0], then (F, K̄2) is observable. Solving AT2 − T2F = B K̄2 yields

  T2 = [0.0046 0.0071 0.0024 0.0081
        0.0399 0.0147 0.0443 0.0306
        0.2036 0.0607 0.3440 0.0245
        0.0048 0.0037 0.2473 0.2007]

  K2 = K̄2 T2^−1 = [252.9824 55.2145 0.0893 3.2509
                   185.5527 59.8815 7.4593 2.5853]
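The Lyapunov-equation design can be reproduced numerically (taking the entries of A and B at face value as printed above; scipy's solve_sylvester solves AX + XB = Q, so we pass −F). A sketch under those assumptions:

```python
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [3.0, 1.0, 2.0, 3.0],
              [2.0, 1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 0.0],
              [1.0, 2.0],
              [0.0, 2.0]])

# F carries the desired eigenvalues -4 +- 3j and -5 +- 4j
F = np.array([[-4.0, 3.0, 0.0, 0.0],
              [-3.0,-4.0, 0.0, 0.0],
              [ 0.0, 0.0,-5.0, 4.0],
              [ 0.0, 0.0,-4.0,-5.0]])
Kbar = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])   # (F, Kbar) observable

# solve A T - T F = B Kbar, then K = Kbar T^{-1}
T = solve_sylvester(A, -F, B @ Kbar)
K = Kbar @ np.linalg.inv(T)

eigs = np.sort_complex(np.linalg.eigvals(A - B @ K))
print(eigs)   # -5-4j, -5+4j, -4-3j, -4+3j (up to ordering)
```

A different choice of K̄ with (F, K̄) observable produces a different K placing the same eigenvalues, which is how the two gains in the text arise.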
