
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 71, No. 2, NOVEMBER 1991

New Taylor Series Approach to State-Space Analysis and Optimal Control of Linear Systems 1
P. N. PARASKEVOPOULOS 2, A. S. TSIRIKOS 3, AND K. G. ARVANITIS 3

Communicated by C. T. Leondes

Abstract. A new Taylor series approach is presented which reduces the


problem of determining the state vector coefficient matrix X for time-
invariant systems to an expression involving multiplications of matrices
of small dimensions. This approach is numerically superior to known
techniques and is extended to cover the time-varying case, wherein anal-
ogous expressions are derived. Furthermore, the optimal control prob-
lem is solved using the same technique. Finally, an expression is derived
for the computation of the approximation error involved in computing
X, prior to determining X.

Key Words. Linear systems, Taylor series, state-space analysis, optimal


control, estimation of the approximation error.

1. Introduction

In recent years, orthogonal series have been used for solving a variety
of problems in system analysis and synthesis. The key idea involved is based
on the following expression:

\underbrace{\int_0^t \cdots \int_0^t}_{k \text{ times}} \varphi_r(\sigma)\,(d\sigma)^k \simeq P_r^k \varphi_r(t),   (1a)

\varphi_r(t) = [\varphi_0(t), \varphi_1(t), \ldots, \varphi_{r-1}(t)]^T,   (1b)

1This work was partially supported by the Greek State Scholarship Foundation (IKY).
2Professor, Division of Computer Science, Department of Electrical Engineering, National
Technical University of Athens, Zographou, Athens, Greece.
3Graduate Student, Division of Computer Science, Department of Electrical Engineering,
National Technical University of Athens, Zographou, Athens, Greece.

where \varphi_r(t) \in \mathbb{R}^r is the orthogonal basis vector and P_r \in \mathbb{R}^{r\times r} is called the
operational matrix of integration. The matrix P_r has been determined for
several types of orthogonal series such as Walsh (Ref. 1), block-pulse (Ref.
2), Laguerre (Ref. 3), Chebyshev (Ref. 4), Legendre (Ref. 5), Hermite (Ref.
6), Jacobi (Ref. 7), Bessel (Ref. 8), Fourier sine-cosine (Ref. 9), Fourier
exponential series (Ref. 10), etc.
Nonorthogonal series have also been used to study system analysis and
synthesis problems. A popular such nonorthogonal series is the Taylor series
(Refs. 11-16), for which case relation (1) takes on the form

\underbrace{\int_0^t \cdots \int_0^t}_{k \text{ times}} \psi_r(\sigma)\,(d\sigma)^k \simeq T_r^k \psi_r(t),   (2a)

\psi_r(t) = [1, t/1!, \ldots, t^{r-1}/(r-1)!]^T,   (2b)

where \psi_r(t) \in \mathbb{R}^r and

T_r^k = \begin{bmatrix} 0_{(r-k)\times k} & I_{r-k} \\ 0_{k\times k} & 0_{k\times(r-k)} \end{bmatrix}, \qquad T_r \in \mathbb{R}^{r\times r}.   (3)

For the orthogonal series and the Taylor series, the following relationships
hold (Ref. 16):

\psi_r(t) \simeq M \varphi_r(t) \quad\text{and}\quad T_r \simeq M P_r M^{-1}, \qquad M = \begin{bmatrix} e^T \\ e^T P_r \\ \vdots \\ e^T P_r^{r-1} \end{bmatrix},   (4)

where e is a constant r x 1 vector depending on the particular orthogonal
series.
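Relations (2a)-(3) can be illustrated numerically. The following sketch (Python/NumPy, an editorial illustration rather than part of the original development) builds the Taylor basis vector of (2b) and the nilpotent shift matrix T_r, and checks the single-integration property; only the last basis element is affected by the truncation.

```python
import numpy as np
from math import factorial

r = 6

def psi(t, r):
    # Taylor basis vector psi_r(t) = [1, t/1!, ..., t^{r-1}/(r-1)!]^T of (2b)
    return np.array([t**i / factorial(i) for i in range(r)])

# Operational matrix of integration T_r: integrating psi_i gives psi_{i+1},
# so T_r carries 1's on the superdiagonal and is nilpotent (T_r^r = 0).
T = np.diag(np.ones(r - 1), k=1)

t = 0.3
# exact single integrals of the basis elements: entries t^{i+1}/(i+1)!
exact = np.array([t**(i + 1) / factorial(i + 1) for i in range(r)])
approx = T @ psi(t, r)

# agreement in all components except the truncated last one
print(np.max(np.abs(exact[:r - 1] - approx[:r - 1])))
```

The superdiagonal convention is chosen here so that the k-th power of T_r reproduces the block structure in (3).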
The present paper refers to the problems of state space analysis and
optimal control of linear time-invariant and time-varying systems. These
problems have already been studied via orthogonal series (Refs. 17-23), as
well as via Taylor series (Refs. 11-13). The algorithms reported in Refs. 11-13
and Refs. 17-21 have the disadvantage that they require the solution
of a linear algebraic system with a very large number of equations.
In this paper, a new Taylor series approach to state space analysis and
optimal control is introduced. To facilitate the description of the method,
we refer first to the analysis of time-invariant systems. For this case, the
approach starts by expanding the state vector x(t) in Taylor series. The

Taylor coefficients x^{(i)}(0) in this series are computed by repeatedly using the
state-space equation

\dot{x}(t) = Ax(t) + Bu(t).

This way, an expression for x(t) truncated to its first r Taylor series terms
is derived. This expression involves the system matrices A and B, the initial
condition x(0), the coefficients of the Taylor series expansion of u(t), and
the Taylor series basis vector \psi_r(t). Upon introducing x(t) \simeq X\psi_r(t) and
u(t) \simeq U\psi_r(t) in this expression and by equating coefficients of like powers
of t, an expression for X is readily derived. Clearly, our approach is simpler
and more natural than the approach reported in Refs. 11-13, since our
approximation method is carried out by directly manipulating the Taylor
series expansion for x(t). The expression for X involves multiplications of
matrices of small dimensions, such as A^k, B, T_r, etc. Hence, our approach
is numerically superior (see Remark 2.3) to those reported in Refs. 11-13,
which require the solution of a linear algebraic system with a very large
number of equations, namely rn, where n is the dimension of x. Analogous results
are derived for the case of time-varying systems.
With regard to the optimal control problem, this is readily solved using
the results on the state space analysis, since the optimal control problem is
essentially reduced to an analysis problem of an unforced system.
Finally, an expression is derived which allows the estimation of the
approximation error involved. This result is important because, for any given
r, one may determine the error involved in computing X, prior to determining
X. This is a very important issue in practice: indeed, if the error is not small
enough, one may increase r until a satisfactorily small error is achieved.
These results appear to be the first of their kind in the area of system analysis
and synthesis via orthogonal and Taylor series.

2. State-Space Analysis of Time-Invariant Systems

Consider the linear time-invariant system described in state-space as


follows:

\dot{x}(t) = Ax(t) + Bu(t), \qquad x(t=0) = x(0),   (5a)

y(t) = Cx(t),   (5b)

where x(t) \in \mathbb{R}^n, u(t) \in \mathbb{R}^m, y(t) \in \mathbb{R}^p, and A, B, C are constant matrices
with appropriate dimensions. Suppose that x(t) is C^\infty analytic in the time

interval [0, T]. Then, it can be expanded into the infinite Taylor series

x(t) = x(0) + x^{(1)}(0)\,t/1! + x^{(2)}(0)\,t^2/2! + \cdots.   (6)

An explicit form of the ith time derivative of x(t), denoted as x^{(i)}(t), may be
shown to have the following form:

x^{(i)}(t) = A^i x(t) + \sum_{l=0}^{i-1} A^l B u^{(i-l-1)}(t), \qquad i = 1, 2, \ldots,   (7)

where u(t) is assumed to be C^\infty analytic in the time interval [0, T]. Relation
(7) has been derived by repeatedly differentiating (5a).
Using (7) in (6) and keeping the first r terms, the Taylor series (6) may
be written as

x(t) \simeq x(0) + \sum_{i=1}^{r-1} \left[ A^i x(0) + \sum_{l=0}^{i-1} A^l B u^{(i-l-1)}(0) \right] t^i/i!.   (8)

Relation (8) may also be written in matrix form as follows:

x(t) \simeq \left\{ [x(0) \,\vdots\, Ax(0) \,\vdots\, \cdots \,\vdots\, A^{r-1}x(0)] + [B \,\vdots\, AB \,\vdots\, \cdots \,\vdots\, A^{r-1}B] \begin{bmatrix} 0 & u(0) & u^{(1)}(0) & \cdots & u^{(r-2)}(0) \\ 0 & 0 & u(0) & \cdots & u^{(r-3)}(0) \\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u(0) \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \right\} \psi_r(t).   (9)

Clearly, relation (9) is the Taylor series expansion (6) of x(t), truncated up
to its first r terms. In our opinion, the form (9) explicitly indicates that this
is a more natural approach to solve (5a) via Taylor series as compared to
known techniques (Ref. 11).
The procedure is completed if we perform the following simple step.
Expand x(t) and u(t) in Taylor series to yield
x(t) \simeq X\psi_r(t), \qquad X = [x(0) \,\vdots\, x^{(1)}(0) \,\vdots\, \cdots \,\vdots\, x^{(r-1)}(0)],   (10)

u(t) \simeq U\psi_r(t), \qquad U = [u(0) \,\vdots\, u^{(1)}(0) \,\vdots\, \cdots \,\vdots\, u^{(r-1)}(0)],   (11)

where X \in \mathbb{R}^{n\times r}, U \in \mathbb{R}^{m\times r} are the state vector and the input vector coefficient
matrices, respectively. Introducing (10) in (9) and equating like powers of t
in both sides yields

X = [x(0) \,\vdots\, Ax(0) \,\vdots\, \cdots \,\vdots\, A^{r-1}x(0)] + [B \,\vdots\, AB \,\vdots\, \cdots \,\vdots\, A^{r-1}B] \begin{bmatrix} UT_r \\ UT_r^2 \\ \vdots \\ UT_r^r \end{bmatrix},   (12)
where use was made of (11).
Next, to solve (5b) for y(t), let y(t) \simeq Y\psi_r(t), where Y \in \mathbb{R}^{p\times r} is the
output vector coefficient matrix. Then, using (12), we readily have

Y = CX = [Cx(0) \,\vdots\, CAx(0) \,\vdots\, \cdots \,\vdots\, CA^{r-1}x(0)] + [CB \,\vdots\, CAB \,\vdots\, \cdots \,\vdots\, CA^{r-1}B] \begin{bmatrix} UT_r \\ UT_r^2 \\ \vdots \\ UT_r^r \end{bmatrix}.   (13)

Expression (12) is the expression sought for X, which is expressed
directly in terms of the system matrices A and B, the initial condition x(0),
the coefficient matrix U of the input vector, and the Taylor operational
matrix of integration T_r. We observe that, in order to determine X according
to (12), only multiplications of small matrices are required. Hence, our
approach simplifies the numerical complexity in studying the state-space
analysis problem via Taylor series.
It is mentioned that a relation analogous to (12) has also been derived
for continuous, discrete-time, and bilinear systems (Refs. 22-26), following
a different approach based on the closed-form solution of (5a).

Remark 2.1. Expressions (12) and (13) have two terms, the first
depending on the initial conditions x(0) and the second depending on the
input vector u, and may be written more compactly as

X = D_r + \Pi_r V,   (14a)

Y = CD_r + W_r V,   (14b)

where

D_r = [x(0) \,\vdots\, Ax(0) \,\vdots\, \cdots \,\vdots\, A^{r-1}x(0)],   (15a)

\Pi_r = [B \,\vdots\, AB \,\vdots\, \cdots \,\vdots\, A^{r-1}B],   (15b)

W_r = C\Pi_r,   (15c)

V = \begin{bmatrix} UT_r \\ UT_r^2 \\ \vdots \\ UT_r^r \end{bmatrix}.   (15d)

Assuming that x(0) = 0, then D_r = 0 and Eq. (14b) reduces to Y = W_r V, which
may be thought of as an input-output relation in the Taylor series domain,
where W_r plays a role analogous to that of the transfer matrix in the s-domain.

Remark 2.2. For r = n, the matrices D_r, \Pi_r, W_r take on the special
expressions

D_n = [x(0) \,\vdots\, Ax(0) \,\vdots\, \cdots \,\vdots\, A^{n-1}x(0)],   (16a)

\Pi_n = [B \,\vdots\, AB \,\vdots\, \cdots \,\vdots\, A^{n-1}B],   (16b)

W_n = [CB \,\vdots\, CAB \,\vdots\, \cdots \,\vdots\, CA^{n-1}B].   (16c)

Clearly, the above matrices D_n, \Pi_n, W_n are the well-known matrices of
identifiability, state controllability, and output controllability, respectively.
The above expressions for D_n, \Pi_n, W_n may be derived even for r > n by
using the Cayley-Hamilton theorem. These observations may be useful in
establishing a simpler approach to the structural analysis of
linear systems than that reported in Ref. 27.
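Remark 2.2 is easy to check directly. The sketch below (Python/NumPy, an editorial illustration; the third-order pair (A, b) is the one used later in Example 2.1) forms \Pi_n = [b : Ab : A^{n-1}b] and confirms that it is the full-rank controllability matrix.

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
b = np.array([[1.], [0.], [1.]])
n = 3

# Pi_n = [b : Ab : A^{n-1} b], the state controllability matrix of (16b)
Pi = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
rank = np.linalg.matrix_rank(Pi)
print(rank)  # full rank => (A, b) controllable
```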

Remark 2.3. It is important to compare the numerical effort involved


in (12) with known techniques. To this end, we first determine the total
number of operations required in (12), which alternatively may be written
as

X = [x(0) \,\vdots\, Ax(0) \,\vdots\, \cdots \,\vdots\, A^{r-1}x(0)] + \sum_{i=0}^{r-1} A^i [BUT_r] T_r^i.   (17)

The multiplication of any two matrices with dimensions p \times k and k \times m
requires pkm operations. Hence, the computation of the first term in (17)
and of the expression BUT_r requires (r-1)n^2 and nmr + nr^2 operations, respectively.
The multiplications in the loop for i = 0 to r-1 require
(r-1)n^2 r + (r-1)nr^2 operations. Finally, the total number of operations p
in (17) is given by

p = [(r^2 - 1)n + (r^2 + m)r]n.   (18)

Note that p is of the order of r^2(n + r)n. Now, with regard to the complexity
of the old technique for state-space analysis via orthogonal series (Refs. 4,
5, 7, and 17-21) or nonorthogonal series (Refs. 11-13), we mention that the
main bulk of computations in the old method is due to the inversion of an
nr \times nr matrix. In the general case, matrix inversion algorithms (Gauss
elimination, LU decomposition, etc.) require on the order of N^3 operations, where N is the
size of the matrix. Here, since N = nr, the matrix inversion requires
on the order of n^3 r^3 operations. Hence, our technique involves fewer operations,
particularly as n becomes large.
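The comparison in Remark 2.3 can be tabulated. The following sketch (an editorial transcription of (18), with illustrative values of n, r, m not taken from the paper) contrasts p with the n^3 r^3 cost of the nr x nr matrix inversion required by the older methods.

```python
def p_ops(n, r, m):
    # total multiplications in (17), Eq. (18): [(r^2 - 1)n + (r^2 + m)r] n
    return ((r**2 - 1) * n + (r**2 + m) * r) * n

def inversion_ops(n, r):
    # order of operations to invert the nr x nr matrix of the older methods
    return (n * r) ** 3

for n in (5, 10, 20):
    r, m = 8, 2
    print(n, p_ops(n, r, m), inversion_ops(n, r))
```

The gap widens quickly with n, in agreement with the remark.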

Example 2.1. Consider the time-invariant system of the form (5a),
where (Ref. 22)

A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad b = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \qquad x(0) = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.

Let u(t) = 1 + t. Then, the exact solution x_e(t) is given by

x_e(t) = \begin{bmatrix} 2t + t^2/2 + t^3/6 + t^4/24 \\ 1 + t^2/2 + t^3/6 \\ t + t^2/2 \end{bmatrix}.

Applying the proposed method for r = 5, we have

u(t) \simeq u^T \psi_5(t), \qquad u^T = [1, 1, 0, 0, 0].

Then, using (12), we readily have

X = \begin{bmatrix} 0 & 2 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 \end{bmatrix}.

Now, expanding the exact solution in Taylor series and keeping its first r = 5
terms, we have that

x_e(t) = \begin{bmatrix} 0 & 2 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 & 0 \end{bmatrix} \psi_5(t),

from which it immediately follows that, for the present example, the approximate
solution for r = 5 is identical to the exact solution.
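Formula (12), in the loop form (17), involves only small-matrix arithmetic, so Example 2.1 can be reproduced in a few lines. The sketch below (Python/NumPy, editorial; it assumes the data of the example as stated above) computes X.

```python
import numpy as np

n, r = 3, 5
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
b = np.array([[1.], [0.], [1.]])
x0 = np.array([0., 1., 0.])
U = np.array([[1., 1., 0., 0., 0.]])   # u(t) = 1 + t in the Taylor basis
T = np.diag(np.ones(r - 1), k=1)       # operational matrix of integration

# first term of (17): [x(0) : A x(0) : ... : A^{r-1} x(0)]
X = np.column_stack([np.linalg.matrix_power(A, i) @ x0 for i in range(r)])

# second term of (17): sum_i A^i [B U T_r] T_r^i
M = b @ U @ T
for i in range(r):
    X += np.linalg.matrix_power(A, i) @ M @ np.linalg.matrix_power(T, i)

print(X)
```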

3. State-Space Analysis of Time-Varying Systems

Consider the linear time-varying system described by the state-space


equations

\dot{x}(t) = A(t)x(t) + B(t)u(t), \qquad x(t=0) = x(0),   (19a)

y(t) = C(t)x(t),   (19b)

where x(t) \in \mathbb{R}^n, u(t) \in \mathbb{R}^m, y(t) \in \mathbb{R}^p, and the matrices A(t), B(t), C(t) are of
appropriate dimensions. We assume that all time-varying quantities in (19a)
and (19b) are C^\infty analytic in the time interval [0, T].
The aim of the present section is to derive an expression for x(t) for
the system (19) analogous to the expressions (12) and (13), derived in the
previous section for the system (5). In extending the results of the time-
invariant case to the time-varying case, a difficulty arises in deriving an
expression for x^{(i)}(t) analogous to that of (7). In what follows, a procedure
is presented which overcomes this difficulty, and a rather compact expression
for x^{(i)}(t) for system (19a) is derived. We start presenting this procedure by
first proving the following lemma.

Lemma 3.1. The ith time derivative of the state vector x(t) of system
(19a) is given by
x^{(i)}(t) = S_i(t)x(t) + \sum_{l=0}^{i-1} [S_l(t)B(t)u(t)]^{(i-l-1)},   (20)

where the sequence \{S_i(t)\} is defined as

S_i(t) = S_{i-1}(t)A(t) + S_{i-1}^{(1)}(t), \qquad S_0(t) = I.   (21)



Proof. We use the method of perfect induction. For i = 1, relation (20)
yields the state-space equation (19a),

x^{(1)}(t) = S_1(t)x(t) + S_0(t)B(t)u(t) = A(t)x(t) + B(t)u(t),

where use was made of definition (21). Suppose that (20) is true for i = k,
that is,

x^{(k)}(t) = S_k(t)x(t) + \sum_{l=0}^{k-1} [S_l(t)B(t)u(t)]^{(k-l-1)}.   (22)

We will show that (20) is also true for i = k + 1. To this end, we take the
time derivative of (22) to yield

x^{(k+1)}(t) = S_k^{(1)}(t)x(t) + S_k(t)x^{(1)}(t) + \sum_{l=0}^{k-1} [S_l(t)B(t)u(t)]^{(k-l)}.

Substituting (19a) in the above relation yields

x^{(k+1)}(t) = S_k^{(1)}(t)x(t) + S_k(t)A(t)x(t) + S_k(t)B(t)u(t) + \sum_{l=0}^{k-1} [S_l(t)B(t)u(t)]^{(k-l)}.

Using definition (21) in the above expression, we have

x^{(k+1)}(t) = S_{k+1}(t)x(t) + \sum_{l=0}^{k} [S_l(t)B(t)u(t)]^{(k-l)}.

This completes the proof of the lemma. \square

The following relationship (the Leibniz rule) will be used, and it can be easily derived:

[M(t)u(t)]^{(j)} = \sum_{k=0}^{j} \binom{j}{k} M^{(k)}(t)\, u^{(j-k)}(t), \qquad j = 0, 1, 2, \ldots,   (23)

where M(t) is a C^\infty analytic matrix in the time interval [0, T] with appropriate
dimensions. Introducing (23), for M(t) = S_l(t)B(t), in (20) gives

x^{(i)}(t) = S_i(t)x(t) + \sum_{l=0}^{i-1} \sum_{k=0}^{i-l-1} \binom{i-l-1}{k} [S_l(t)B(t)]^{(k)}\, u^{(i-l-k-1)}(t).   (24)
l=0 k=0

In expression (24), the term S_l(t)B(t) will be replaced by a new


expression which will facilitate the goal of this section. This new expression
is given in the following lemma, proven in the Appendix.

Lemma 3.2. The following relationship holds:

S_l(t)B(t) = \sum_{j=0}^{l} \binom{l}{j} Q_j^{(l-j)}(t), \qquad l = 0, 1, 2, \ldots,   (25)

where the sequence \{Q_k(t)\} is defined as

Q_k(t) = [A(t) - I\,d/dt]^k B(t), \qquad k = 0, 1, 2, \ldots.   (26)
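Lemma 3.2 can be spot-checked symbolically. The sketch below (Python/SymPy, editorial; the pair A(t), B(t) is an arbitrary illustrative choice, not taken from the paper) builds S_l from (21) and Q_k from (26) and verifies (25) for l up to 3.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1], [0, t**2]])   # illustrative C-infinity data
B = sp.Matrix([sp.sin(t), t])

# S_l per (21): S_l = S_{l-1} A + S_{l-1}', S_0 = I
S = [sp.eye(2)]
for _ in range(3):
    S.append(sp.expand(S[-1] * A + S[-1].diff(t)))

# Q_k per (26): Q_k = (A - I d/dt)^k B
Q = [B]
for _ in range(3):
    Q.append(sp.expand(A * Q[-1] - Q[-1].diff(t)))

# (25): S_l B = sum_{j=0}^{l} binom(l, j) Q_j^{(l-j)}
for l in range(4):
    rhs = sp.zeros(2, 1)
    for j in range(l + 1):
        D = Q[j] if j == l else Q[j].diff(t, l - j)
        rhs += sp.binomial(l, j) * D
    assert sp.simplify(S[l] * B - rhs) == sp.zeros(2, 1)
print("Lemma 3.2 verified for l = 0..3")
```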
Using Lemma 3.2, relation (24) becomes

x^{(i)}(t) = S_i(t)x(t) + \sum_{l=0}^{i-1} \sum_{k=0}^{i-l-1} \sum_{j=0}^{l} \binom{i-l-1}{k} \binom{l}{j} Q_j^{(l-j+k)}(t)\, u^{(i-l-k-1)}(t).   (27)

After some rearrangements, the second term in (27) can be written in the
following form:

\sum_{l=0}^{i-1} \sum_{k=0}^{i-l-1} \sum_{j=0}^{l} \binom{i-l-1}{k} \binom{l}{j} Q_j^{(l-j+k)}(t)\, u^{(i-l-k-1)}(t) = \sum_{l=0}^{i-1} \sum_{k=0}^{i-l-1} \left[ \sum_{j=0}^{l} \binom{i-k-j-1}{l-j} \binom{j+k}{k} \right] Q_k^{(l)}(t)\, u^{(i-l-k-1)}(t).   (28)

Between the binomial coefficients, the following relationship can be easily
proven:

\sum_{j=0}^{l} \binom{i-k-j-1}{l-j} \binom{j+k}{k} = \binom{i}{l},   (29)

for i = 0, 1, 2, \ldots; l = 0, 1, \ldots, i-1; and k = 0, 1, \ldots, i-l-1. Use (29) in
(28). The resulting expression is next substituted in (27). Then, the
expression (27) takes on the form

x^{(i)}(t) = S_i(t)x(t) + \sum_{l=0}^{i-1} \sum_{k=0}^{i-l-1} \binom{i}{l} Q_k^{(l)}(t)\, u^{(i-l-k-1)}(t),   (30)

for i = 1, 2, \ldots. Relation (30) is the expression sought for time-varying
systems, and it is quite analogous to the expression (7) for the time-invariant
systems.

Introducing (30) in the Taylor series expansion (6) and keeping the first
r terms, expression (6) can be written as

x(t) \simeq x(0) + \sum_{i=1}^{r-1} \left\{ S_i(0)x(0) + \Pi_i(0) \begin{bmatrix} u^{(i-1)}(0) \\ \vdots \\ u(0) \end{bmatrix} + \binom{i}{1} \Pi_{i-1}^{(1)}(0) \begin{bmatrix} u^{(i-2)}(0) \\ \vdots \\ u(0) \end{bmatrix} + \cdots + \binom{i}{i-1} \Pi_1^{(i-1)}(0)\, u(0) \right\} t^i/i!,   (31)

where the n \times mj time-varying matrix \Pi_j(t) is defined as

\Pi_j(t) = [Q_0(t) \,\vdots\, Q_1(t) \,\vdots\, \cdots \,\vdots\, Q_{j-1}(t)], \qquad j = 1, 2, \ldots.   (32)

Relation (31) may further be written in the more compact form

x(t) \simeq \left\{ [x(0) \,\vdots\, S_1(0)x(0) \,\vdots\, \cdots \,\vdots\, S_{r-1}(0)x(0)] + \sum_{i=0}^{r-1} \Pi_r^{(i)}(0)\, V_i \right\} \psi_r(t),   (33)

where V_i \in \mathbb{R}^{mr\times r} has as its jth column (j = 0, 1, \ldots, r-1) the vector

\binom{j}{i} \begin{bmatrix} u^{(j-i-1)}(0) \\ \vdots \\ u(0) \\ 0_{m(r-j+i)\times 1} \end{bmatrix} \quad \text{for } j \geq i+1, \qquad 0_{mr\times 1} \quad \text{otherwise}.

Expression (33) corresponds to the expression (9) for time-invariant


systems and constitutes the first r terms of the Taylor series expansion (6)
for x(t). To complete the procedure for deriving an expression analogous to

(12), we introduce (10) in (33), and equating like powers of t in both sides
yields

X = [x(0) \,\vdots\, S_1(0)x(0) \,\vdots\, \cdots \,\vdots\, S_{r-1}(0)x(0)] + [\Pi_r(0) \,\vdots\, \Pi_r^{(1)}(0) \,\vdots\, \cdots \,\vdots\, \Pi_r^{(r-1)}(0)] \begin{bmatrix} V_0 \\ V_1 \\ \vdots \\ V_{r-1} \end{bmatrix}, \qquad V_j = \begin{bmatrix} UT_{1,j} \\ UT_{2,j} \\ \vdots \\ UT_{r,j} \end{bmatrix} \in \mathbb{R}^{mr\times r},   (34)

where use was made of definition (11), and where

T_{i,j} = T_r^{i+j} E_{r,j},   (35)

with

E_{r,j} = \mathrm{diag}\left\{ \underbrace{0, \ldots, 0}_{j \text{ elements}}, \underbrace{\binom{j}{j}, \binom{j+1}{j}, \ldots, \binom{r-1}{j}}_{r-j \text{ elements}} \right\},

for i = 1, 2, \ldots, r and j = 0, 1, \ldots, r-1, where E_{r,j}, T_{i,j} \in \mathbb{R}^{r\times r}, with T_{i,j} = 0
for i+j \geq r.
for i+j>_r.
Now, consider solving (19b) for y(t); i.e., consider the problem of
determining the output coefficient matrix Y in the approximation
y(t) \simeq Y\psi_r(t), where Y \in \mathbb{R}^{p\times r}. To this end, two lemmas analogous to
Lemmas 3.1 and 3.2 can be proven.

Lemma 3.3. The ith time derivative of y(t) is given by the following
relationship:
y^{(i)}(t) = L_i(t)x(t) + \sum_{l=0}^{i-1} [L_l(t)B(t)u(t)]^{(i-l-1)}, \qquad i = 1, 2, \ldots,   (36)

where the sequence \{L_i(t)\} is defined as

L_i(t) = L_{i-1}(t)A(t) + L_{i-1}^{(1)}(t), \qquad L_0(t) = C(t).   (37)

Lemma 3.4. The following relation holds:

L_l(t)B(t) = \sum_{k=0}^{l} \binom{l}{k} \Lambda_k^{(l-k)}(t), \qquad l = 0, 1, 2, \ldots,   (38)

where the sequence \{\Lambda_k(t)\} is defined as

\Lambda_k(t) = C(t)Q_k(t), \qquad k = 0, 1, 2, \ldots.   (39)

Following the same procedure for Y as that for X, we finally arrive at
the desired expression,

Y = [C(0)x(0) \,\vdots\, L_1(0)x(0) \,\vdots\, \cdots \,\vdots\, L_{r-1}(0)x(0)] + [W_r(0) \,\vdots\, W_r^{(1)}(0) \,\vdots\, \cdots \,\vdots\, W_r^{(r-1)}(0)] \begin{bmatrix} V_0 \\ V_1 \\ \vdots \\ V_{r-1} \end{bmatrix},   (40)

where the V_j are the mr \times r matrices of (34) and the p \times mi time-varying
matrix W_i(t) is defined as

W_i(t) = [\Lambda_0(t) \,\vdots\, \Lambda_1(t) \,\vdots\, \cdots \,\vdots\, \Lambda_{i-1}(t)], \qquad i = 1, 2, 3, \ldots.   (41)

Remark 3.1. One may easily go from the time-varying expressions
(34) and (40) to the respective time-invariant expressions (12) and (13) by
observing that, if the matrices A, B, C are time-invariant, then

\Pi_r^{(k)}(0) = W_r^{(k)}(0) = 0, \qquad k \geq 1,

E_{r,0} = I, \qquad T_{i,0} = T_r^i, \qquad i = 1, 2, \ldots, r,

S_i(0) = A^i, \qquad L_i(0) = CA^i, \qquad i = 1, 2, \ldots, r-1,

\Pi_r(0) = \Pi_r = [B \,\vdots\, AB \,\vdots\, \cdots \,\vdots\, A^{r-1}B],

W_r(0) = W_r = [CB \,\vdots\, CAB \,\vdots\, \cdots \,\vdots\, CA^{r-1}B].



Example 3.1. Consider the time-varying system of the form (19a),
where

A(t) = \begin{bmatrix} -1 & 0 \\ 0 & -2t \end{bmatrix}, \qquad B(t) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad x(0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad u(t) = \begin{bmatrix} 1 \\ t \end{bmatrix}.

To find X, we apply the proposed technique. Consider r = 6. We have

u(t) \simeq U\psi_6(t) = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \psi_6(t).

Then, (34) yields

X = \begin{bmatrix} 0 & 1 & -1 & 1 & -1 & 1 \\ 0 & 0 & 1 & 0 & -6 & 0 \end{bmatrix}.

The exact solution x_e(t) for x(t) is

x_e(t) = \begin{bmatrix} 1 - \exp(-t) \\ (1 - \exp(-t^2))/2 \end{bmatrix}.

If we expand x_e(t) into Taylor series, we have

x_e(t) = \begin{bmatrix} t/1! - t^2/2! + t^3/3! - t^4/4! + t^5/5! - \cdots \\ t^2/2! - 6t^4/4! + \cdots \end{bmatrix}.

We observe that, if x_e(t) is expanded in Taylor series and the infinite series
is truncated to r = 6 terms, we obtain the same coefficients as derived by our
method for r = 6. Clearly, the present example shows the computational
simplicity of our approach over the method reported in Refs. 12 and 13.
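The claim that the method reproduces the truncated Taylor coefficients of the exact solution can be confirmed by differentiating x_e(t) directly. The sketch below (Python/SymPy, editorial) computes the columns x^{(i)}(0) of X, matching the basis elements \psi_i = t^i/i!.

```python
import sympy as sp

t = sp.symbols('t')
x1 = 1 - sp.exp(-t)              # exact solution, first component
x2 = (1 - sp.exp(-t**2)) / 2     # exact solution, second component

r = 6
# columns of X are the derivatives x^{(i)}(0), i = 0..r-1
X = sp.Matrix([[sp.diff(x, t, i).subs(t, 0) for i in range(r)]
               for x in (x1, x2)])
print(X)
```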

4. Optimal Control Problem

We first consider the time-varying case. At the end of this section, we


present the time-invariant case as a special case. For the time-varying system
of the type (19), the quadratic performance index is assumed to be of the
form

J = (1/2) \int_0^{t_f} [x^T(t)Q(t)x(t) + u^T(t)R(t)u(t)]\,dt,   (42)


where Q(t) is an n \times n positive-semidefinite matrix and R(t) is an m \times m
positive-definite matrix. The optimal feedback control law is

u^*(t) = R^{-1}(t)B^T(t)p(t),   (43)

where p(t) \in \mathbb{R}^n satisfies the following canonical equation:

\begin{bmatrix} \dot{x}(t) \\ \dot{p}(t) \end{bmatrix} = N(t) \begin{bmatrix} x(t) \\ p(t) \end{bmatrix},   (44)

with boundary conditions

x(t=0) = x(0), \qquad p(t=t_f) = 0,   (45)

where the matrix N(t) \in \mathbb{R}^{2n\times 2n} is defined as

N(t) = \begin{bmatrix} A(t) & B(t)R^{-1}(t)B^T(t) \\ Q(t) & -A^T(t) \end{bmatrix}.   (46)
Let

\Phi(t_f, t) = \begin{bmatrix} \Phi_{11}(t_f, t) & \Phi_{12}(t_f, t) \\ \Phi_{21}(t_f, t) & \Phi_{22}(t_f, t) \end{bmatrix}   (47)

be the state transition matrix of (44), partitioned into four n \times n
submatrices. Then, using the relation

\begin{bmatrix} x(t_f) \\ p(t_f) \end{bmatrix} = \begin{bmatrix} x(t_f) \\ 0 \end{bmatrix} = \begin{bmatrix} \Phi_{11}(t_f, t) & \Phi_{12}(t_f, t) \\ \Phi_{21}(t_f, t) & \Phi_{22}(t_f, t) \end{bmatrix} \begin{bmatrix} x(t) \\ p(t) \end{bmatrix},   (48)

we have

p(t) = -\Phi_{22}^{-1}(t_f, t)\Phi_{21}(t_f, t)x(t).   (49)

Substituting (49) in (43), we obtain

u^*(t) = -K(t)x(t),   (50)

where the time-varying gain matrix K(t) is given by

K(t) = R^{-1}(t)B^T(t)\Phi_{22}^{-1}(t_f, t)\Phi_{21}(t_f, t).   (51)

The state transition matrix \Phi(t_f, t) satisfies the following equation:

\dot{\Phi}(t_f, t) = -\Phi(t_f, t)N(t), \qquad \Phi(t_f, t_f) = I.   (52)

Taking the transpose of both sides of (52) yields

\dot{\Phi}^T(t_f, t) = -N^T(t)\Phi^T(t_f, t), \qquad \Phi^T(t_f, t_f) = I.   (53)

Equation (53) may be separated into 2n equations as follows:

\dot{\varphi}_i(t_f, t) = -N^T(t)\varphi_i(t_f, t), \qquad i = 1, 2, \ldots, 2n,   (54)

with boundary conditions

\varphi_i(t_f, t_f) = [0, \ldots, 0, 1, 0, \ldots, 0]^T, \qquad i = 1, 2, \ldots, 2n,   (55)

where the unity appears in the ith position and where \varphi_i(t_f, t) is the ith
column of \Phi^T(t_f, t). We observe that the n equations in (54) for i =
n+1, \ldots, 2n are sufficient to determine \Phi_{22}(t_f, t) and \Phi_{21}(t_f, t), and thus
the optimal gain matrix K(t).

In what follows, we solve the problem (54), (55). To this end, suppose
that \varphi_i(t_f, t) and N(t) are C^\infty analytic in the time interval [0, T] and
t_f \in [0, T]. Then, the vector \varphi_i(t_f, t) can be expanded in Taylor series in the
neighborhood of t_f as follows:

\varphi_i(t_f, t) = \varphi_i(t_f, t_f) + \varphi_i^{(1)}(t_f, t_f)(t - t_f)/1! + \varphi_i^{(2)}(t_f, t_f)(t - t_f)^2/2! + \cdots.   (56)

Equation (54) is an unforced system. Hence, the jth time derivative
\varphi_i^{(j)}(t_f, t) is given by Lemma 3.1 and has the following form:

\varphi_i^{(j)}(t_f, t) = S_j(t)\varphi_i(t_f, t),   (57)

where

S_j(t) = S_{j-1}(t)(-N^T(t)) + S_{j-1}^{(1)}(t), \qquad S_0(t) = I.   (58)

Substituting (57) in (56) and keeping the first r terms, relation (56) may be
written in a form analogous to that of relation (33) to yield

\varphi_i(t_f, t) \simeq [\varphi_i(t_f, t_f) \,\vdots\, S_1(t_f)\varphi_i(t_f, t_f) \,\vdots\, \cdots \,\vdots\, S_{r-1}(t_f)\varphi_i(t_f, t_f)]\,\psi_r(t - t_f) = [\varphi_i(t_f, t_f) \,\vdots\, S_1(t_f)\varphi_i(t_f, t_f) \,\vdots\, \cdots \,\vdots\, S_{r-1}(t_f)\varphi_i(t_f, t_f)]\,Z_r(t_f)\psi_r(t),   (59)

where Z_r(t_f) is the shift operational matrix between the two basis vectors
\psi_r(t - t_f) and \psi_r(t), defined as

\psi_r(t - t_f) = Z_r(t_f)\psi_r(t),   (60)



where

Z_r(t_f) = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ -t_f & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ (-1)^{r-1}t_f^{r-1}/(r-1)! & (-1)^{r-2}t_f^{r-2}/(r-2)! & \cdots & 1 \end{bmatrix}.

To complete the procedure, expand \varphi_i(t_f, t) in terms of the Taylor series
basis vector \psi_r(t - t_f) to yield

\varphi_i(t_f, t) \simeq \hat{\Phi}_i \psi_r(t - t_f) = \hat{\Phi}_i Z_r(t_f)\psi_r(t),   (61)

where use was made of (60). Introducing (61) in (59) and equating like
powers of t in both sides of the resulting equation, we arrive at the expression

\hat{\Phi}_i = [\varphi_i(t_f, t_f) \,\vdots\, S_1(t_f)\varphi_i(t_f, t_f) \,\vdots\, \cdots \,\vdots\, S_{r-1}(t_f)\varphi_i(t_f, t_f)].   (62)

Computing the coefficient matrices \hat{\Phi}_i, i = n+1, \ldots, 2n, we can determine
the submatrices \Phi_{21}(t_f, t) and \Phi_{22}(t_f, t). Finally, using (51), the respective
gain matrix K(t) may be determined.
In the time-invariant case, the matrix N(t) appearing in (44) is time-
independent, and thus it has the form

N = \begin{bmatrix} A & BR^{-1}B^T \\ Q & -A^T \end{bmatrix}.

The operator S_i(t), given in (58), takes on the form

S_i = [-N^T]^i,

and the coefficient matrices \hat{\Phi}_i are computed by

\hat{\Phi}_i = [\varphi_i(t_f, t_f) \,\vdots\, -N^T\varphi_i(t_f, t_f) \,\vdots\, \cdots \,\vdots\, (-N^T)^{r-1}\varphi_i(t_f, t_f)].   (63)

The remaining steps in computing the gain matrix K(t) are identical to those
of the time-varying case mentioned above.

Example 4.1. Consider the time-varying system (Refs. 12 and 19)

\dot{x}(t) = tx(t) + u(t)

and the quadratic performance index

J = (1/2) \int_0^1 [x^2(t) + u^2(t)]\,dt.

Fig. 1. Graphical representation of the exact optimal gain K_e(t) and the approximate gain
K(t) derived via Taylor series for r = 6 (Example 4.1).

Following the proposed method for the time-varying case and for r = 6, we
have

N(t) = \begin{bmatrix} t & 1 \\ 1 & -t \end{bmatrix}, \qquad \varphi_2(1, 1) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad \hat{\Phi}_2 = \begin{bmatrix} 0 & -1 & 0 & -1 & -4 & -7 \\ 1 & 1 & 3 & 5 & 15 & 39 \end{bmatrix}.

Introducing the above expression for \hat{\Phi}_2 in (61) and using (51), we arrive
at the expression sought for the time-varying gain matrix,

K(t) \simeq \frac{[1.058, -1.125, 0.166, -0.500, 3.000, -7.000]\,\psi_6(t)}{[0.966, -0.375, -1.000, 9.500, -24.000, 39.000]\,\psi_6(t)}.

A comparison between the exact optimal gain K_e(t) and the approximate
solution derived by our technique is given in Fig. 1.
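The matrix \hat{\Phi}_2 above is generated by the recursion (58) and formula (62) alone. A short symbolic sketch (Python/SymPy, editorial) reproduces it for t_f = 1.

```python
import sympy as sp

t = sp.symbols('t')
N = sp.Matrix([[t, 1], [1, -t]])

# S_j per (58): S_j = S_{j-1} (-N^T) + S_{j-1}', S_0 = I
S = [sp.eye(2)]
for _ in range(5):
    S.append(sp.expand(S[-1] * (-N.T) + S[-1].diff(t)))

phi2 = sp.Matrix([0, 1])   # boundary value phi_2(t_f, t_f) of (55)
# hat{Phi}_2 per (62), evaluated at t_f = 1
Phi2 = sp.Matrix.hstack(*[S[j].subs(t, 1) * phi2 for j in range(6)])
print(Phi2)
```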

5. Estimation of the Approximation Error

In the study of various problems via orthogonal or nonorthogonal
series, a very important issue has not been studied yet, namely, the issue of
estimating the approximation error involved. In this section, we study this
problem for the case of time-invariant systems. This can easily be accomplished
via the well-known Taylor theorem as follows. Truncate the Taylor
series (6) at its first r terms. Then, the remainder, denoted by \varepsilon_{r,0}(t), is
given in the Cauchy form as

\varepsilon_{r,0}(t) \triangleq x^{(r)}(\gamma)\,[(t - \gamma)^{r-1} t/(r-1)!], \qquad \text{for some } \gamma \in (0, t), \ \forall t \in (0, T).   (64)

Our approach will be focused on finding an upper bound of the approximation
error, denoted by E_{\max}(r) and defined as

E_{\max}(r) \triangleq \sup_{t \in (0,T)} \|\varepsilon_{r,0}(t)\|,   (65)

where \|\cdot\| denotes the Euclidean norm. To this end, we take the norm of
both sides in (64) to yield

\|\varepsilon_{r,0}(t)\| = \|x^{(r)}(\gamma)\,[(t - \gamma)^{r-1} t/(r-1)!]\| \leq \|x^{(r)}(\gamma)\|\, T^r/(r-1)!,   (66)

for some \gamma \in (0, T), where use was made of the inequality 0 < \gamma < T, which
follows from definition (64). The expression for x^{(r)}(\gamma) may be derived using
(7) to yield

x^{(r)}(\gamma) = A^r x(\gamma) + \sum_{l=0}^{r-1} A^l B u^{(r-l-1)}(\gamma), \qquad \gamma \in [0, T].

Taking the norm of both sides of the above expression, we have

\|x^{(r)}(\gamma)\| = \left\| A^r x(\gamma) + \sum_{l=0}^{r-1} A^l B u^{(r-l-1)}(\gamma) \right\| \leq \|A^r x(\gamma)\| + \sum_{l=0}^{r-1} \|A^l B u^{(r-l-1)}(\gamma)\| \leq \|A\|^r \|x(\gamma)\| + \sum_{l=0}^{r-1} \|A\|^l \|B\| \|u^{(r-l-1)}(\gamma)\|, \qquad \gamma \in [0, T],   (67)

where the following definition was used:

\|F\| = \{\sup \|Fx\| : \|x\| = 1, x \in \mathbb{R}^n\}, \qquad \text{for any } F \in \mathbb{R}^{n\times n}.



Next, the two terms appearing in (67) will be studied separately. To
find an upper bound of the first term, consider the solution for x(\gamma); i.e.,
consider the expression

x(\gamma) = \exp(A\gamma)x(0) + \int_0^\gamma \exp[A(\gamma - \sigma)]Bu(\sigma)\,d\sigma.   (68)

Taking the norm of both sides yields

\|x(\gamma)\| = \left\| \exp(A\gamma)x(0) + \int_0^\gamma \exp[A(\gamma - \sigma)]Bu(\sigma)\,d\sigma \right\| \leq \|\exp(A\gamma)x(0)\| + \int_0^\gamma \|\exp[A(\gamma - \sigma)]Bu(\sigma)\|\,d\sigma \leq \|\exp(A\gamma)\| \|x(0)\| + \int_0^\gamma \|\exp[A(\gamma - \sigma)]\| \|B\| \|u(\sigma)\|\,d\sigma.   (69)

Now, let Q be a transformation matrix such that QAQ^{-1} = J, where J is the
Jordan canonical form of A, and let \lambda_{\max} = \max_j \mathrm{Re}(\lambda_j), where \lambda_j are the eigenvalues
of A. Then (Ref. 28),

\|\exp(A\gamma)\| \leq \|Q\| \|Q^{-1}\| \exp(\lambda_{\max}\gamma) \sum_{i=0}^{n-1} \gamma^i/i!,   (70a)

\|\exp[A(\gamma - \sigma)]\| \leq \|Q\| \|Q^{-1}\| \exp[\lambda_{\max}(\gamma - \sigma)] \sum_{i=0}^{n-1} (\gamma - \sigma)^i/i!.   (70b)

Denote

a = \max(\lambda_{\max}, 0),

and use the fact that \sigma \in [0, \gamma] and \gamma \in [0, T]. Then,

\exp(\lambda_{\max}\gamma) \sum_{i=0}^{n-1} \gamma^i/i! \leq \exp(aT) \sum_{i=0}^{n-1} T^i/i!,   (71a)

\exp[\lambda_{\max}(\gamma - \sigma)] \sum_{i=0}^{n-1} (\gamma - \sigma)^i/i! \leq \exp(aT) \sum_{i=0}^{n-1} T^i/i!.   (71b)

Substituting the inequalities (71) in (70) and the resulting inequalities in (69)
yields

\|x(\gamma)\| \leq \|Q\| \|Q^{-1}\| \left[ \|x(0)\| + \|B\| \int_0^\gamma \|u(\sigma)\|\,d\sigma \right] \exp(aT) \sum_{i=0}^{n-1} T^i/i!.   (72)

Denoting by

V_{1,\max} = \sup_{\sigma \in [0,T]} \|u(\sigma)\|,

the integral \int_0^\gamma \|u(\sigma)\|\,d\sigma can be bounded by

\int_0^\gamma \|u(\sigma)\|\,d\sigma \leq V_{1,\max} \int_0^\gamma d\sigma \leq V_{1,\max} T.

We finally substitute the above inequality into (72) to yield

\|x(\gamma)\| \leq \|Q\| \|Q^{-1}\| [\|x(0)\| + \|B\| V_{1,\max} T] \exp(aT) \sum_{i=0}^{n-1} T^i/i!, \qquad \forall\gamma \in [0, T].   (73)

To find an upper bound for the second term in (67), denote

V_{2,\max} = \max_{j=0,1,\ldots} \sup_{\gamma \in (0,T)} \|u^{(j)}(\gamma)\|.

Then, the second term in (67) can be bounded by

\sum_{l=0}^{r-1} \|A\|^l \|B\| \|u^{(r-l-1)}(\gamma)\| \leq \sum_{l=0}^{r-1} \|A\|^l \|B\| V_{2,\max}, \qquad \forall\gamma \in [0, T].   (74)

It can be easily shown that the following inequality holds:

\sum_{l=0}^{r-1} \|A\|^l \leq \sum_{l=0}^{r-1} \|A\|^l (r-1)^l/l! \leq \exp[\|A\|(r-1)],   (75)

for any r = 1, 2, \ldots. Next, substitute Ineq. (75) into (74) to yield

\sum_{l=0}^{r-1} \|A\|^l \|B\| \|u^{(r-l-1)}(\gamma)\| \leq \exp[\|A\|(r-1)] \|B\| V_{2,\max}, \qquad \forall\gamma \in [0, T].   (76)

Now, substitute Ineqs. (73) and (76) into the expression (67) to yield

\|x^{(r)}(\gamma)\| \leq \|A\|^r \|Q\| \|Q^{-1}\| (\|x(0)\| + \|B\| V_{1,\max} T) \exp(aT) \sum_{i=0}^{n-1} T^i/i! + \exp[\|A\|(r-1)] \|B\| V_{2,\max}, \qquad \forall\gamma \in [0, T].   (77)

Finally, substituting (77) into (66) and the resulting expression into (65), an
upper bound of the approximation error E_{\max}(r) will be

E_{\max}(r) \leq \left[ \|A\|^r \|Q\| \|Q^{-1}\| (\|x(0)\| + \|B\| V_{1,\max} T) \exp(aT) \sum_{i=0}^{n-1} T^i/i! + \exp[\|A\|(r-1)] \|B\| V_{2,\max} \right] T^r/(r-1)!,   (78)

for r = 1, 2, \ldots. The above expression is the expression sought for the upper
bound of the error involved in the approximate analysis of time-invariant
systems via Taylor series.
In closing this section, we mention that one may readily estimate the
approximation error for the optimal control by making use of the procedure
presented above.

Example 5.1. Consider the time-invariant system of Example 2.1,
defined in the interval [0, 1]. The matrix A is in the Jordan canonical form.
An upper bound of E_{\max}(r) will be given by (78),

E_{\max}(r) \leq [5/2 + 5\sqrt{2} + 2\sqrt{2}\exp(r-1)]/(r-1)! = [9.571 + 2.828\exp(r-1)]/(r-1)!,

where use was made of the following:

\|A\| = 1, \qquad a = 0, \qquad \|b\| = \sqrt{2}, \qquad \|x(0)\| = 1, \qquad V_{1,\max} = V_{2,\max} = 2.



Fig. 2. Graphical representation of an upper bound for E_{\max}(r) in semilogarithmic scale
(Example 5.1).

In Fig. 2, a graphical representation of an upper bound of E_{\max}(r) with
respect to r is given. From this figure, it follows that, if the desired accuracy
of the approximation is of the order of 10^{-3}, then one must take r \geq 13 terms.
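The bound (78), specialized to Example 5.1, is a one-line function. The sketch below (editorial Python transcription of the specialized bound) locates the smallest r meeting a 10^{-3} tolerance.

```python
import math

def err_bound(r):
    # Eq. (78) for Example 5.1: T = 1, ||A|| = 1, a = 0, ||Q|| ||Q^-1|| = 1,
    # ||x(0)|| = 1, ||b|| = sqrt(2), V_{1,max} = V_{2,max} = 2
    return (5/2 + 5*math.sqrt(2)
            + 2*math.sqrt(2)*math.exp(r - 1)) / math.factorial(r - 1)

r = 1
while err_bound(r) > 1e-3:
    r += 1
print(r, err_bound(r))
```

The factorial in the denominator eventually dominates the exp(r-1) growth in the numerator, so the loop terminates.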

6. Conclusions

In this paper, a new Taylor series approximation approach is presented
for state-space analysis and optimal control of linear systems. This approach
has the following advantages over known techniques:

(a) It appears more natural, since it derives an expression for X working
directly on the Taylor series expansion for x(t).
(b) It is numerically superior to known techniques, since it does not
require the solution of an algebraic system with a very large number of
equations, but rather multiplications of matrices of small dimensions.
(c) It allows the estimation of the error involved in the approximation.

(d) It appears that it can be extended relatively easily to cover other
types of systems (time-delay systems, bilinear systems, nonlinear systems,
etc.).

7. Appendix

Proof of Lemma 3.2. Using the definitions (21) and (26) for S_l(t) and
Q_j(t), respectively, we have

S_l(t)Q_j(t) = [S_{l-1}(t)A(t) + S_{l-1}^{(1)}(t)]Q_j(t)
= S_{l-1}(t)A(t)Q_j(t) - S_{l-1}(t)Q_j^{(1)}(t) + S_{l-1}(t)Q_j^{(1)}(t) + S_{l-1}^{(1)}(t)Q_j(t)
= S_{l-1}(t)[A(t)Q_j(t) - Q_j^{(1)}(t)] + [S_{l-1}(t)Q_j(t)]^{(1)}
= S_{l-1}(t)Q_{j+1}(t) + [S_{l-1}(t)Q_j(t)]^{(1)},   (79)

for any l = 1, 2, \ldots and j = 0, 1, 2, \ldots. The following relationships are obviously
true:

S_0(t)Q_k(t) = Q_k(t), \qquad k = 0, 1, 2, \ldots, j.   (80)

Using (79) in (80), we have

S_1(t)Q_k(t) = Q_{k+1}(t) + Q_k^{(1)}(t), \qquad k = 0, 1, 2, \ldots, j-1.   (81)

Again using (79) in (81), we have

S_2(t)Q_k(t) = Q_{k+2}(t) + 2Q_{k+1}^{(1)}(t) + Q_k^{(2)}(t), \qquad k = 0, 1, 2, \ldots, j-2.

Following this procedure, we arrive at the desired expression (25). \square

References

1. CHEN, C. F., and HSIAO, C. H., Design of Piecewise Constant Gains for Optimal
Control via Walsh Functions, IEEE Transactions on Automatic Control, Vol.
AC-20, pp. 596-603, 1975.
2. SANNUTI, P., Analysis and Synthesis of Dynamic System via Block-Pulse
Functions, Proceedings of the IEE, Vol. 124, pp. 569-571, 1977.
3. KING, R. E., and PARASKEVOPOULOS, P. N., Parameter Identification of
Discrete-Time SISO Systems, International Journal of Control, Vol. 30,
pp. 1023-1029, 1979.

4. PARASKEVOPOULOS, P. N., Chebyshev Series Approach to System Identification,
Analysis, and Optimal Control, Journal of the Franklin Institute, Vol. 316,
pp. 135-157, 1983.
5. PARASKEVOPOULOS, P. N., Legendre Series Approach to Identification and Analysis
of Linear Systems, IEEE Transactions on Automatic Control, Vol. AC-30,
pp. 585-589, 1985.
6. KEKKERIS, G. T., and PARASKEVOPOULOS, P. N., Hermite Series Approach to
Optimal Control, International Journal of Control, Vol. 47, pp. 557-567, 1988.
7. LIU, C. C., and SHIH, Y. P., Systems Analysis, Parameter Estimation, and Optimal
Regulator Design of Linear Systems via Jacobi Series, International Journal
of Control, Vol. 42, pp. 211-224, 1985.
8. PARASKEVOPOULOS, P. N., SKLAVOUNOS, P. G., and GEORGIOU, G. C., The
Operational Matrix of Integration for Bessel Functions, Journal of the Franklin
Institute, Vol. 327, pp. 329-341, 1990.
9. PARASKEVOPOULOS, P. N., SPARIS, P. D., and MOUROUTSOS, S. G., The Fourier
Series Operational Matrix of Integration, International Journal of Systems
Science, Vol. 16, pp. 171-176, 1985.
10. PARASKEVOPOULOS, P. N., The Operational Matrices of Integration and Differentiation
for the Fourier Sine-Cosine and Exponential Series, IEEE Transactions
on Automatic Control, Vol. 32, pp. 648-651, 1987.
11. MOUROUTSOS, S. G., and SPARIS, P. D., Taylor Series Approach to System
Identification, Analysis, and Optimal Control, Journal of the Franklin Institute,
Vol. 319, pp. 359-371, 1985.
12. SPARIS, P. D., and MOUROUTSOS, S. G., Analysis and Optimal Control of Time-
Varying Linear Systems via Taylor Series, International Journal of Control,
Vol. 41, pp. 831-842, 1985.
13. PERNG, M. H., An Effective Approach to the Optimal Control Problem for Time-
Varying Linear Systems via Taylor Series, International Journal of Control,
Vol. 44, pp. 1225-1231, 1986.
14. HORNG, I. R., CHOU, J. H., and TSAI, R. Y., Taylor Series Analysis of Linear
Optimal Control Systems Incorporating Observers, International Journal of
Control, Vol. 44, pp. 1265-1272, 1986.
15. CHEN, C. K., and YANG, C. Y., Analysis and Parameter Identification of Time-
Delay Systems via Polynomial Series, International Journal of Control, Vol. 46,
pp. 111-127, 1987.
16. SPARIS, P. D., and MOUROUTSOS, S. G., A Comparative Study of the Operational
Matrices of Integration and Differentiation for Orthogonal Polynomial Series,
International Journal of Control, Vol. 42, pp. 621-638, 1985.
17. HWANG, C., and CHEN, M. Y., Analysis and Optimal Control of Time-Varying
Linear Systems via Shifted Legendre Polynomials, International Journal of
Control, Vol. 41, pp. 1317-1330, 1985.
18. CHEN, C. F., and HSIAO, C. H., Walsh Series Analysis in Optimal Control,
International Journal of Control, Vol. 21, pp. 881-897, 1975.
19. HSU, N. C., and CHENG, B., Analysis and Optimal Control of Time-Varying
Linear Systems via Block-Pulse Functions, International Journal of Control,
Vol. 33, pp. 1107-1122, 1981.

20. LIU, C. C., and SHIH, Y. P., Analysis and Optimal Control of Time-Varying
Systems via Chebyshev Polynomials, International Journal of Control, Vol. 38,
pp. 1003-1012, 1983.
21. WANG, M. L., CHANG, R. Y., and YANG, S. Y., Analysis and Optimal Control
of Time-Varying Systems via Generalized Orthogonal Polynomials, International
Journal of Control, Vol. 44, pp. 895-910, 1986.
22. PARASKEVOPOULOS, P. N., A New Orthogonal Series Approach to State Space
Analysis and Identification, International Journal of Systems Science, Vol. 20,
pp. 957-970, 1989.
23. PARASKEVOPOULOS, P. N., SKLAVOUNOS, P. G., and ARVANITIS, K. G., A New
Orthogonal Series Approach to State Space Analysis and Optimal Control (to
appear).
24. PARASKEVOPOULOS, P. N., SKLAVOUNOS, P. G., and KARKAS, D. A., A New
Orthogonal Series Approach to Sensitivity Analysis, Journal of the Franklin
Institute, Vol. 327, pp. 429-433, 1990.
25. PARASKEVOPOULOS, P. N., and DIAMANTARAS, K. I., A New Orthogonal Series
Approach to State Space Analysis of 1D and 2D Discrete Systems, Proceedings
of the IEE, Part G, Vol. 137, pp. 205-209, 1990.
26. PARASKEVOPOULOS, P. N., TSIRIKOS, A. S., and ARVANITIS, K. G., A New
Orthogonal Series Approach to State-Space Analysis of Bilinear Systems (to
appear).
27. DING, X., and FRANK, P., Structure Analysis via Orthogonal Functions,
International Journal of Control, Vol. 50, pp. 2285-2300, 1989.
28. GORI-GIORGI, C., and MONACO, S., Observability and State Reconstruction
of Affine Time-Invariant Processes, Proceedings of the International Symposium
MECO 78, Vol. 2, pp. 412-416, 1978.
