
304-501 LINEAR SYSTEMS

Lecture 14: Linear Dynamical Systems

4.3 Linear Dynamical Systems

In this section, we discuss linear time-varying systems represented by the block diagram shown below.

[Block diagram: the input u(t) enters B(t); the sum B(t)u(t) + A(t)x(t) drives the integrator ∫(⋅)dτ, whose output is the state x(t); the output is y(t) = C(t)x(t) + D(t)u(t), with D(t) a direct feedthrough path from u(t) to y(t).]

That is,

ẋ(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t)          (4.14)

where x(t) ∈ Rⁿ, u(t) ∈ Rᵐ, y(t) ∈ Rᵖ, and A, B, C, D have piecewise continuous entries.
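As a quick illustration, a model of the form (4.14) can be integrated numerically. The matrices A(t), B(t), C(t), D(t) and the input u(t) below are invented for the demo (they are not from the lecture); this is a minimal sketch using SciPy's general-purpose ODE solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical time-varying matrices, chosen only for illustration
A = lambda t: np.array([[0.0, 1.0], [-1.0, -0.1 * t]])
B = lambda t: np.array([[0.0], [1.0]])
C = lambda t: np.array([[1.0, 0.0]])
D = lambda t: np.array([[0.0]])
u = lambda t: np.array([1.0])            # unit-step input

def f(t, x):
    # state equation: x'(t) = A(t) x(t) + B(t) u(t)
    return A(t) @ x + B(t) @ u(t)

sol = solve_ivp(f, (0.0, 5.0), np.zeros(2), rtol=1e-8)
y_final = C(5.0) @ sol.y[:, -1] + D(5.0) @ u(5.0)   # output: y = C x + D u
print(sol.success, y_final.shape)   # True (1,)
```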

4.3.1 Homogeneous Differential Equation ẋ(t) = A(t)x(t)

Consider the homogeneous differential equation:

ẋ(t) = A(t)x(t),   x(t₀) = x₀.   (4.15)

4.3.1.1 Time-Varying Case

Proposition:

The set of all solutions of ẋ(t) = A(t)x(t), x(t₀) = x₀ forms an n-dimensional vector space over R.
Proof:

First, we show that the set of solutions forms a linear space over R. Let x₁(⋅), x₂(⋅) be two distinct solutions of (4.15) (with distinct initial states). Then,


d/dt [α₁x₁(t) + α₂x₂(t)] = α₁ ẋ₁(t) + α₂ ẋ₂(t)
                         = α₁ A(t)x₁(t) + α₂ A(t)x₂(t)
                         = A(t)[α₁x₁(t) + α₂x₂(t)],   ∀α₁, α₂ ∈ R

Next, we show that the solution space has dimension n. Let xᵢ(⋅) be the solutions of (4.15) with xᵢ(t₀) = eᵢ, i = 1,…,n (the canonical unit vectors in Rⁿ). We shall show that these solutions are linearly independent and that every solution can be expressed as a linear combination of {xᵢ(⋅)}ᵢ₌₁ⁿ.

We prove linear independence by contradiction. Suppose {xᵢ(⋅)}ᵢ₌₁ⁿ are linearly dependent. Then

∑ᵢ₌₁ⁿ αᵢ xᵢ(t) = θ,   for some αᵢ ∈ R not all zero, ∀t ∈ T.

At t = t₀:

∑ᵢ₌₁ⁿ αᵢ xᵢ(t₀) = θ = ∑ᵢ₌₁ⁿ αᵢ eᵢ

which implies that {eᵢ}ᵢ₌₁ⁿ are linearly dependent, clearly a contradiction. Hence {xᵢ(⋅)}ᵢ₌₁ⁿ are linearly independent.

Let x(⋅) be a solution to the homogeneous differential equation (4.15), with x(t₀) = e. Then e ∈ Rⁿ can be written as a linear combination of the basis vectors {eᵢ}ᵢ₌₁ⁿ:

e = ∑ᵢ₌₁ⁿ αᵢ eᵢ,   αᵢ ∈ R

⇒ ∑ᵢ₌₁ⁿ αᵢ xᵢ(t) is a solution of (4.15), with initial state e = ∑ᵢ₌₁ⁿ αᵢ xᵢ(t₀).

By the uniqueness of the solution, we conclude that x(⋅) = ∑ᵢ₌₁ⁿ αᵢ xᵢ(⋅).

Definition: Fundamental Matrix

A fundamental set of solutions of ẋ(t) = A(t)x(t), x(t₀) = x₀ is any set {xᵢ(⋅)}ᵢ₌₁ⁿ such that for some t ∈ T, {xᵢ(t)}ᵢ₌₁ⁿ forms a basis of Rⁿ.


An n × n matrix function of t, Ψ(⋅), is said to be a fundamental matrix for ẋ(t) = A(t)x(t) if the n columns of Ψ(⋅) consist of n linearly independent solutions of ẋ(t) = A(t)x(t), i.e.,

ψ̇₁(t) = A(t)ψ₁(t)
⋮
ψ̇ₙ(t) = A(t)ψₙ(t)

where {ψᵢ(t₀)}ᵢ₌₁ⁿ forms a basis for (Rⁿ, R), and Ψ(t) = [ψ₁(t) ⋯ ψₙ(t)].

Example:

Consider the system

0 0
x(t ) =   x(t )
 t 0

That is, x1 (t ) = 0 , x2 (t ) = tx1 (t ) . The solution is:

1 2 1 2
x1 (t ) = x1 (t0 ) , and x2 (t ) = t x1 (t0 ) − t0 x1 (t0 ) + x2 (t0 ) .
2 2

0  x (0)  0 2  x (0) 


Let t0 = 0 and ψ 1 (0) =   =  1  . Then ψ 1 (t ) =   . Let ψ 2 (0) =   =  1  . Then
1
  x (0)
 2  1
   0 x (0)
 2 
2
ψ 2 (t ) =  2  . Thus a fundamental matrix for the system is given by:
t
 

0 2 
Ψ (t ) =  2
1 t 
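The closed-form column ψ₂ above can be cross-checked by numerically integrating ẋ = A(t)x from the stated initial condition; a minimal sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The example system x'(t) = [[0, 0], [t, 0]] x(t)
A = lambda t: np.array([[0.0, 0.0], [t, 0.0]])

def f(t, x):
    return A(t) @ x

# Integrate the second column psi_2 from psi_2(0) = [2, 0]^T up to t = 3
sol = solve_ivp(f, (0.0, 3.0), [2.0, 0.0], rtol=1e-10, atol=1e-12)
psi2_T = sol.y[:, -1]
# Closed form from the example: psi_2(t) = [2, t^2]^T, so psi_2(3) = [2, 9]^T
print(psi2_T)   # approximately [2., 9.]
```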

Proposition:

An n × n matrix Ψ(⋅) is a fundamental matrix for ẋ(t) = A(t)x(t) iff it satisfies

Ψ̇(t) = A(t)Ψ(t),   Ψ(t₀) = H nonsingular for some t₀ ∈ T

Proof: Follows from the previous proposition.

Proposition:

If u ∈ Rⁿ is any constant vector, then x(t) = Ψ(t)u is a trajectory.


Proof: ẋ(t) = Ψ̇(t)u = A(t)Ψ(t)u = A(t)x(t)

Proposition:

N{Ψ(t)} is invariant, i.e., it is the same subspace ∀t ∈ T.

Proof:

Suppose u ∈ N{Ψ(t₁)} for some t₁ ∈ T. We show that u ∈ N{Ψ(t)}, ∀t ∈ T. Let x(t) = Ψ(t)u. Since x(⋅) is a trajectory, x(t₁) = Ψ(t₁)u = θ. But the null trajectory x(t) = θ, ∀t ∈ T passes through the point x(t₁) = θ, and by uniqueness, x(t) = θ, ∀t ∈ T is the only possible trajectory. Therefore,

x(t) = Ψ(t)u = θ, ∀t ∈ T  ⇒  u ∈ N{Ψ(t)}, ∀t ∈ T

Note that for every initial condition, there exists exactly one state trajectory.

Consider the fundamental matrix Ψ(t) = [ψ₁(t) ⋯ ψₙ(t)].

Theorem:

The vectors {ψᵢ(t)}ᵢ₌₁ⁿ form a basis of Rⁿ at some t₀ ∈ T iff {ψᵢ(t)}ᵢ₌₁ⁿ form a basis of Rⁿ at all t ∈ T.

Proof:

{ψᵢ(t)}ᵢ₌₁ⁿ forms a basis at some t₀ ∈ T ⇔ N{Ψ(t)} = {θ} at t₀ ∈ T ⇔ N{Ψ(t)} = {θ} for all t ∈ T (by the above proposition) ⇔ {ψᵢ(t)}ᵢ₌₁ⁿ forms a basis for all t ∈ T.

Corollary:

(i) Ψ⁻¹(t) exists ∀t ∈ T,

(ii) N{Ψ(t)} = {θ}, ∀t ∈ T ⇔ Ψ⁻¹(t) exists, ∀t ∈ T


Definition: State Transition Matrix

The state transition matrix Φ(t, t₀) associated with the system ẋ(t) = A(t)x(t) is that matrix-valued function of t, t₀ which:

(1) Solves the matrix differential equation: Φ̇(t, t₀) = A(t)Φ(t, t₀), ∀t ∈ T, t₀ ∈ T,

(2) Satisfies Φ(t₀, t₀) = I, ∀t₀ ∈ T.

Proposition:

Let Ψ (⋅) be a fundamental matrix satisfying Ψ (t0 ) = I . Then Φ (t , t0 ) = Ψ (t ) .

Proof:

Follows directly from the above definition.

Proposition:

Let Ψ(⋅) be any fundamental matrix of ẋ(t) = A(t)x(t). Then Φ(t, t₀) = Ψ(t)Ψ⁻¹(t₀), ∀t, t₀ ∈ T.

Proof:

We have Φ(t₀, t₀) = Ψ(t₀)Ψ⁻¹(t₀) = I, ∀t₀ ∈ T. Moreover,

Φ̇(t, t₀) = Ψ̇(t)Ψ⁻¹(t₀) = A(t)Ψ(t)Ψ⁻¹(t₀) = A(t)Φ(t, t₀)
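The relation Φ(t, t₀) = Ψ(t)Ψ⁻¹(t₀) can be verified numerically on the earlier example, whose fundamental matrix was Ψ(t) = [0 2; 1 t²]; the cross-check below integrates the matrix differential equation directly. The instants t₀ = 1, t₁ = 2 are arbitrary choices for the demo.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fundamental matrix of the earlier example x'(t) = [[0, 0], [t, 0]] x(t)
Psi = lambda t: np.array([[0.0, 2.0], [1.0, t**2]])

t0, t1 = 1.0, 2.0
Phi = Psi(t1) @ np.linalg.inv(Psi(t0))   # Phi(t1, t0) = Psi(t1) Psi^{-1}(t0)

# Cross-check: integrate the matrix ODE Phi' = A(t) Phi with Phi(t0, t0) = I
A = lambda t: np.array([[0.0, 0.0], [t, 0.0]])
def f(t, p):
    return (A(t) @ p.reshape(2, 2)).ravel()

sol = solve_ivp(f, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
Phi_ode = sol.y[:, -1].reshape(2, 2)
print(np.allclose(Phi, Phi_ode, atol=1e-6))   # True
```

For this system the closed form gives Φ(2, 1) = [1 0; 1.5 1], and both computations agree with it.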

Proposition:

The solution of ẋ(t) = A(t)x(t), x(t₀) = x₀ is given by x(t) = Φ(t, t₀)x₀, ∀t ∈ T.

Proof:

The initial state is x(t₀) = Φ(t₀, t₀)x₀ = x₀. Next, we need to check that x(t) satisfies the differential equation: ẋ(t) = Φ̇(t, t₀)x₀ = A(t)Φ(t, t₀)x₀ = A(t)x(t), ∀t ∈ T.

Note:

The function s (t ; t0 , x0 ) := Φ (t , t0 ) x0 is called the state transition function.


Properties of the state transition matrix:

(1) Φ (t , t ) = I , ∀t ∈ T ,

(2) Φ (t , t0 ) = Φ (t , t1 )Φ (t1 , t0 ) ,

[Figure: a state trajectory in the (x₁, x₂) plane, from Φ(t₀, t₀)x₀ = x₀ at t₀ through Φ(t₁, t₀)x₀ at t₁ and onward under Φ(t, t₁), illustrating the composition property.]

(3) Φ(t, t₀)⁻¹ = [Ψ(t)Ψ⁻¹(t₀)]⁻¹ = Ψ(t₀)Ψ⁻¹(t) = Φ(t₀, t),

(4) (d/dt) Φ(t₀, t) = −Φ(t₀, t)A(t),

(5) If Φ(t, t₀) is the state transition matrix of ẋ(t) = A(t)x(t), then Φᵀ(t₀, t) is the state transition matrix of the system ż(t) = −Aᵀ(t)z(t).

(6) det Φ(t, t₀) = e^{∫_{t₀}^{t} Tr{A(σ)}dσ}, where Tr{A} denotes the trace of matrix A, and it is nonzero if ∫_{t₀}^{t} Tr{A(σ)}dσ is finite.

(7) The Peano-Baker series for the solution (transition matrix) of

Φ̇(t, t₀) = A(t)Φ(t, t₀),   Φ(t₀, t₀) = I

is given by:

Φ(t, t₀) = I + ∫_{t₀}^{t} A(σ₁)dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂)dσ₂ dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) ∫_{t₀}^{σ₂} A(σ₃)dσ₃ dσ₂ dσ₁ + ⋯
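The iterated integrals can be approximated on a grid to check the series numerically. The sketch below builds each term by cumulative trapezoidal integration, M₀ = I and M_{k+1}(t) = ∫_{t₀}^{t} A(σ)M_k(σ)dσ, for an A(t) invented here for the demo, and compares the truncated sum with direct integration of Φ̇ = A(t)Φ.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative time-varying matrix (chosen here, not from the lecture)
A = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])

t0, t1, N = 0.0, 1.0, 2001
ts = np.linspace(t0, t1, N)
As = np.array([A(t) for t in ts])            # A sampled on a fine grid
dt = ts[1] - ts[0]

# Peano-Baker: M_0 = I, M_{k+1}(t) = integral of A(s) M_k(s) over [t0, t]
M = np.broadcast_to(np.eye(2), (N, 2, 2)).copy()
Phi_pb = np.eye(2)
for k in range(8):                           # truncate after 8 iterated integrals
    integrand = As @ M                       # A(s) M_k(s) at each grid point
    # cumulative trapezoidal rule, entrywise
    cum = np.concatenate([np.zeros((1, 2, 2)),
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt, axis=0)])
    M = cum                                  # M_{k+1} on the grid
    Phi_pb = Phi_pb + M[-1]                  # add the term evaluated at t1

# Reference: integrate Phi' = A(t) Phi directly
f = lambda t, p: (A(t) @ p.reshape(2, 2)).ravel()
Phi_ode = solve_ivp(f, (t0, t1), np.eye(2).ravel(), rtol=1e-10).y[:, -1].reshape(2, 2)
print(np.allclose(Phi_pb, Phi_ode, atol=1e-4))   # True
```

Because each iterated integral contributes at most on the order of 1/k!, the truncation after a handful of terms is already accurate on a short interval.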


Note that if A(t) and ∫_{t₀}^{t} A(σ)dσ commute, i.e., if A(t) ∫_{t₀}^{t} A(σ)dσ = [∫_{t₀}^{t} A(σ)dσ] A(t), then,

Φ(t, t₀) = I + ∫_{t₀}^{t} A(σ₁)dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂)dσ₂ dσ₁ + ∫_{t₀}^{t} A(σ₁) ∫_{t₀}^{σ₁} A(σ₂) ∫_{t₀}^{σ₂} A(σ₃)dσ₃ dσ₂ dσ₁ + ⋯

= I + ∫_{t₀}^{t} A(σ₁)dσ₁ + ∫_{t₀}^{t} [∫_{t₀}^{σ₁} A(σ₂)dσ₂] A(σ₁) dσ₁ + ∫_{t₀}^{t} [∫_{t₀}^{σ₁} [∫_{t₀}^{σ₂} A(σ₃)dσ₃] A(σ₂) dσ₂] A(σ₁) dσ₁ + ⋯

= I + ∫_{t₀}^{t} A(σ₁)dσ₁ + (1/2) ∫_{t₀}^{t} (d/dσ₁)[∫_{t₀}^{σ₁} A(σ₂)dσ₂]² dσ₁ + (1/2) ∫_{t₀}^{t} [∫_{t₀}^{σ₁} (d/dσ₂)[∫_{t₀}^{σ₂} A(σ₃)dσ₃]² dσ₂] A(σ₁) dσ₁ + ⋯

= I + ∫_{t₀}^{t} A(σ)dσ + (1/2)[∫_{t₀}^{t} A(σ)dσ]² + (1/2) ∫_{t₀}^{t} [∫_{t₀}^{σ₁} A(σ₂)dσ₂]² A(σ₁) dσ₁ + ⋯

= I + ∫_{t₀}^{t} A(σ)dσ + (1/2)[∫_{t₀}^{t} A(σ)dσ]² + (1/2)·(1/3) ∫_{t₀}^{t} (d/dσ₁)[∫_{t₀}^{σ₁} A(σ₂)dσ₂]³ dσ₁ + ⋯

= I + ∫_{t₀}^{t} A(σ)dσ + (1/2!)[∫_{t₀}^{t} A(σ)dσ]² + (1/3!)[∫_{t₀}^{t} A(σ)dσ]³ + ⋯ + (1/k!)[∫_{t₀}^{t} A(σ)dσ]ᵏ + ⋯

= ∑_{k=0}^{∞} (1/k!)[∫_{t₀}^{t} A(σ)dσ]ᵏ = e^{∫_{t₀}^{t} A(σ)dσ}

and we can check that this transition matrix satisfies the differential equation

(d/dt) e^{∫_{t₀}^{t} A(σ)dσ} = e^{∫_{t₀}^{t} A(σ)dσ} A(t) = A(t) e^{∫_{t₀}^{t} A(σ)dσ}.

The commutative property holds if A(t) is a diagonal matrix or a constant matrix. Note that in general,

e^A e^B ≠ e^{A+B},   A, B ∈ R^{n×n},

unless the matrices commute, i.e., AB = BA (this can be shown by multiplying the series expansions of e^A and e^B).
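The inequality e^A e^B ≠ e^{A+B} for noncommuting matrices is easy to observe numerically; the nilpotent pair below is a standard choice, picked here only for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Two matrices that do not commute
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(A @ B, B @ A))                      # False: AB != BA
print(np.allclose(expm(A) @ expm(B), expm(A + B)))    # False: e^A e^B != e^{A+B}

# A diagonal pair does commute, and equality then holds
D1, D2 = np.diag([1.0, 2.0]), np.diag([3.0, -1.0])
print(np.allclose(expm(D1) @ expm(D2), expm(D1 + D2)))  # True
```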

4.3.1.2 Time-Invariant Case

Consider the time-invariant differential equation:

ẋ(t) = Ax(t),   x(t₀) = x₀.   (4.16)

In this case,

Φ(t, t₀) = Φ(t − t₀, 0) =: Φ(t − t₀)

and

Φ̇(t) = AΦ(t),   Φ(0) = I,   ∀t ∈ T.

Using the unilateral Laplace transform, we obtain:


sΦ(s) − Φ(0) = AΦ(s)
⇒ Φ(s) = (sI − A)⁻¹Φ(0) = (sI − A)⁻¹,   with ROC: Re{s} > max_{i=1,…,n} Re{λᵢ(A)}

Thus, one can obtain the state transition matrix of an LTI state-space system by taking the inverse Laplace transform (entry-by-entry) of (sI − A)⁻¹.
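This recipe can be sketched symbolically: form the resolvent (sI − A)⁻¹, invert the Laplace transform entry by entry, and compare with e^{At}. The matrix A below is an illustrative choice (eigenvalues −1 and −2), not taken from the lecture.

```python
import numpy as np
import sympy as sp
from scipy.linalg import expm

s, t = sp.symbols('s t', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])          # illustrative matrix, eigenvalues -1 and -2

# Resolvent (sI - A)^{-1}, then entrywise inverse Laplace transform
Phi = (s * sp.eye(2) - A).inv().applyfunc(
    lambda e: sp.inverse_laplace_transform(e, s, t))

# Numerical cross-check against the matrix exponential at t = 0.5
Phi_num = np.array(Phi.subs(t, 0.5), dtype=float)
print(np.allclose(Phi_num, expm(np.array([[0.0, 1.0], [-2.0, -3.0]]) * 0.5)))  # True
```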

Properties of the Matrix Exponential

(1) If A ∈ R^{n×n}, then the Peano-Baker series is

Φ(t, t₀) = I + A(t − t₀) + (1/2!) A²(t − t₀)² + ⋯ + (1/k!) Aᵏ(t − t₀)ᵏ + ⋯

and the series converges uniformly and absolutely to Φ(t, t₀) = e^{A(t−t₀)} on every finite interval.

(2) (d/dt) e^{At} = e^{At} A = A e^{At},

(3) The solution of ẋ(t) = Ax(t), x(t₀) = x₀ is x(t) = e^{A(t−t₀)} x₀. We have

Φ(t, t₀) = Ψ(t)Ψ⁻¹(t₀) = e^{At} e^{−At₀} = e^{A(t−t₀)}.
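Property (3) can be illustrated numerically: the closed-form solution e^{A(t−t₀)}x₀ should agree with direct integration of ẋ = Ax. The matrix, initial state, and times below are arbitrary choices for the demo.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative LTI matrix
t0, t1 = 1.0, 3.0
x0 = np.array([1.0, -1.0])

# Closed form: x(t) = e^{A(t - t0)} x0
x_closed = expm(A * (t1 - t0)) @ x0

# Cross-check by direct integration of x' = A x
sol = solve_ivp(lambda t, x: A @ x, (t0, t1), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x_closed, sol.y[:, -1], atol=1e-6))   # True
```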

