State Space Analysis
JOTA: VOL. 71, NO. 2, NOVEMBER 1991
Communicated by C. T. Leondes
1. Introduction
In recent years, orthogonal series have been used for solving a variety
of problems in system analysis and synthesis. The key idea involved is based
on the following expression:

$$\underbrace{\int_0^t \cdots \int_0^t}_{k\ \text{times}} \varphi_r(\sigma)\,(d\sigma)^k \cong P_r^k\,\varphi_r(t), \tag{1a}$$
¹This work was partially supported by the Greek State Scholarship Foundation (IKY).
²Professor, Division of Computer Science, Department of Electrical Engineering, National
Technical University of Athens, Zographou, Athens, Greece.
³Graduate Student, Division of Computer Science, Department of Electrical Engineering,
National Technical University of Athens, Zographou, Athens, Greece.
0022-3239/91/1100-0315$06.50/0 © 1991 Plenum Publishing Corporation
where φ_r(t) ∈ ℝ^r is the orthogonal basis vector and P_r ∈ ℝ^{r×r} is called the
operational matrix of integration. The matrix P_r has been determined for
several types of orthogonal series, such as Walsh (Ref. 1), block-pulse (Ref.
2), Laguerre (Ref. 3), Chebyshev (Ref. 4), Legendre (Ref. 5), Hermite (Ref.
6), Jacobi (Ref. 7), Bessel (Ref. 8), Fourier sine-cosine (Ref. 9), and Fourier
exponential (Ref. 10) series.
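The operational-matrix idea in (1a) is easy to demonstrate numerically. The sketch below is our own illustration, not taken from the references: it constructs the classical block-pulse operational matrix of integration on [0, T] and checks it on f(t) = 1, whose integral t has the interval midpoints as block-pulse coefficients.

```python
import numpy as np

def blockpulse_P(m, T=1.0):
    """Operational matrix of integration for m block-pulse functions on [0, T]."""
    h = T / m
    P = np.triu(np.full((m, m), h), k=1)  # h above the diagonal
    np.fill_diagonal(P, h / 2)            # h/2 on the diagonal
    return P

m, T = 8, 1.0
P = blockpulse_P(m, T)
c = np.ones(m)                 # block-pulse coefficients of f(t) = 1
c_int = c @ P                  # coefficients of its integral, t
midpoints = (np.arange(m) + 0.5) * (T / m)
print(np.allclose(c_int, midpoints))  # True: t averages to the interval midpoints
```

The same pattern (coefficient row times P^k) realizes the k-fold integration of (1a).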
Nonorthogonal series have also been used to study system analysis and
synthesis problems. A popular such nonorthogonal series is the Taylor series
(Refs. 11-16), in which case relation (1) takes on the form

$$\underbrace{\int_0^t \cdots \int_0^t}_{k\ \text{times}} \psi_r(\sigma)\,(d\sigma)^k \cong T_r^k\,\psi_r(t), \tag{2a}$$

where $\psi_r(t) = [1,\ t/1!,\ \ldots,\ t^{r-1}/(r-1)!]^T \in \mathbb{R}^r$ is the Taylor basis vector and

$$T_r^k = \begin{bmatrix} 0_{(r-k)\times k} & I_{r-k} \\ 0_{k\times k} & 0_{k\times(r-k)} \end{bmatrix}, \qquad T_r \in \mathbb{R}^{r\times r}. \tag{3}$$
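Assuming the factorial-normalized Taylor basis ψ_r(t) = [1, t/1!, …, t^{r-1}/(r-1)!]^T as above, T_r is just the superdiagonal shift matrix, and relation (2a) can be checked componentwise in a few lines of numpy (our own sketch):

```python
import numpy as np
from math import factorial

def taylor_basis(t, r):
    """psi_r(t) = [1, t/1!, t^2/2!, ..., t^(r-1)/(r-1)!]^T."""
    return np.array([t ** i / factorial(i) for i in range(r)])

r, k, t = 6, 2, 0.7
T = np.eye(r, k=1)                          # T_r: ones on the superdiagonal
lhs = np.array([t ** (i + k) / factorial(i + k) for i in range(r)])  # exact k-fold integrals
rhs = np.linalg.matrix_power(T, k) @ taylor_basis(t, r)
print(np.allclose(lhs[:r - k], rhs[:r - k]))  # True: exact up to the truncated tail
```

The last k components of the right-hand side vanish, which is exactly the truncation expressed by the ≅ sign in (2a).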
For the orthogonal series and the Taylor series, the following relationships
hold (Ref. 16):
Taylor coefficients x^(i)(0) in this series are computed by repeatedly using the
state-space equation

$$\dot x(t) = Ax(t) + Bu(t).$$

In this way, an expression for x(t), truncated to its first r Taylor series terms,
is derived. This expression involves the system matrices A and B, the initial
condition x(0), the coefficients of the Taylor series expansion of u(t), and
the Taylor series basis vector ψ_r(t). Upon introducing x(t) ≅ Xψ_r(t) and
u(t) ≅ Uψ_r(t) in this expression and equating coefficients of like powers
of t, an expression for X is readily derived. Clearly, our approach is simpler
and more natural than the approach reported in Refs. 11-13, since our
approximation method is carried out by directly manipulating the Taylor
series expansion for x(t). The expression for X involves multiplications of
matrices of small dimensions, such as A^k, B, T_r, etc. Hence, our approach
is numerically superior (see Remark 2.3) to those reported in Refs. 11-13,
which require the solution of a linear algebraic system with a very large
number of equations rn, where n is the dimension of x. Analogous results
are derived for the case of time-varying systems.
With regard to the optimal control problem, this is readily solved using
the results on the state space analysis, since the optimal control problem is
essentially reduced to an analysis problem of an unforced system.
Finally, an expression is derived which allows the estimation of the
approximation error involved. This result is important because, for any given
r, one may determine the error involved in computing X prior to determining
X. This is a very important issue in practice: indeed, if the error is not small
enough, we check larger r until a satisfactorily small error is achieved.
These results appear to be the first in the area of system analysis and synthesis
via orthogonal and Taylor series.
interval [0, T]. Then, it can be expanded into the infinite Taylor series

$$x(t) = x(0) + x^{(1)}(0)\,t/1! + x^{(2)}(0)\,t^2/2! + \cdots. \tag{6}$$
An explicit form of the ith time derivative of x(t), denoted x^(i)(t), may be
shown to have the following form:

$$x^{(i)}(t) = A^i x(t) + \sum_{l=0}^{i-1} A^l B u^{(i-l-1)}(t), \qquad i = 1, 2, \ldots, \tag{7}$$

where u(t) is assumed to be C^∞ analytic in the time interval [0, T]. Relation
(7) has been derived by repeatedly differentiating (5a).
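Relation (7) lends itself to a direct numerical sanity check. The test system below is our own choice: with input u(t) = e^t and the particular initial condition x(0) = (I − A)^{-1}b, the solution is x(t) = (I − A)^{-1}b e^t, so every derivative x^(i)(t) is known in closed form.

```python
import numpy as np

def deriv_i(A, b, x_t, u_derivs_t, i):
    """Relation (7): x^(i)(t) = A^i x(t) + sum_{l=0}^{i-1} A^l b u^(i-l-1)(t)."""
    out = np.linalg.matrix_power(A, i) @ x_t
    for l in range(i):
        out = out + np.linalg.matrix_power(A, l) @ (b * u_derivs_t[i - l - 1])
    return out

# Hypothetical single-input test system: x' = Ax + b u, u(t) = e^t.
# With x(0) = (I - A)^{-1} b, the solution is x(t) = (I - A)^{-1} b e^t,
# so x^(i)(t) = (I - A)^{-1} b e^t for every i.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([0.0, 1.0])
v = np.linalg.solve(np.eye(2) - A, b)
t = 0.3
x_t = v * np.exp(t)
u_derivs = [np.exp(t)] * 6      # u^(j)(t) = e^t for all j
print(all(np.allclose(deriv_i(A, b, x_t, u_derivs, i), v * np.exp(t))
          for i in range(1, 6)))  # True
```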
Using (7) in (6) and keeping the first r terms, the Taylor series (6) may
be written as

$$x(t) \cong [x(0) \;\vdots\; Ax(0) \;\vdots\; \cdots \;\vdots\; A^{r-1}x(0)]\,\psi_r(t) + [B \;\vdots\; AB \;\vdots\; \cdots \;\vdots\; A^{r-1}B] \begin{bmatrix} 0 & u(0) & u^{(1)}(0) & \cdots & u^{(r-2)}(0) \\ 0 & 0 & u(0) & \cdots & u^{(r-3)}(0) \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u(0) \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \psi_r(t). \tag{9}$$
Clearly, relation (9) is the Taylor series expansion (6) of x(t), truncated to
its first r terms. In our opinion, the form (9) explicitly indicates that this
is a more natural approach to solving (5a) via Taylor series, as compared to
known techniques (Ref. 11).
The procedure is completed if we perform the following simple step.
Expand x(t) and u(t) in Taylor series to yield

$$x(t) \cong X\psi_r(t), \qquad X = [x(0) \;\vdots\; x^{(1)}(0) \;\vdots\; \cdots \;\vdots\; x^{(r-1)}(0)], \tag{10}$$

$$u(t) \cong U\psi_r(t), \qquad U = [u(0) \;\vdots\; u^{(1)}(0) \;\vdots\; \cdots \;\vdots\; u^{(r-1)}(0)], \tag{11}$$
where X ∈ ℝ^{n×r} and U ∈ ℝ^{m×r} are the state vector and the input vector coefficient
matrices, respectively. Introducing (10) in (9) and equating like powers of t
on both sides yields

$$X = [x(0) \;\vdots\; Ax(0) \;\vdots\; \cdots \;\vdots\; A^{r-1}x(0)] + [B \;\vdots\; AB \;\vdots\; \cdots \;\vdots\; A^{r-1}B] \begin{bmatrix} UT_r \\ UT_r^2 \\ \vdots \\ UT_r^r \end{bmatrix}, \tag{12}$$

where use was made of (11).
Next, to solve (5b) for y(t), let y(t) ≅ Yψ_r(t), where Y ∈ ℝ^{p×r} is the
output vector coefficient matrix. Then, using (12), we readily have

$$Y = C[x(0) \;\vdots\; Ax(0) \;\vdots\; \cdots \;\vdots\; A^{r-1}x(0)] + C[B \;\vdots\; AB \;\vdots\; \cdots \;\vdots\; A^{r-1}B] \begin{bmatrix} UT_r \\ UT_r^2 \\ \vdots \\ UT_r^r \end{bmatrix}. \tag{13}$$
Remark 2.1. Expressions (12) and (13) have two terms, the first
depending on the initial condition x(0) and the second depending on the
input vector u, and may be written more compactly as

$$X = D_r + \Pi_r V, \tag{14a}$$
$$Y = CD_r + W_r V, \tag{14b}$$

where

$$D_r = [x(0) \;\vdots\; Ax(0) \;\vdots\; \cdots \;\vdots\; A^{r-1}x(0)], \tag{15a}$$
$$\Pi_r = [B \;\vdots\; AB \;\vdots\; \cdots \;\vdots\; A^{r-1}B], \tag{15b}$$
$$W_r = C\Pi_r, \tag{15c}$$
$$V = \begin{bmatrix} UT_r \\ UT_r^2 \\ \vdots \\ UT_r^r \end{bmatrix} \in \mathbb{R}^{mr\times r}. \tag{15d}$$
Assuming that x(0) = 0, then D_r = 0 and Eq. (14b) reduces to Y = W_r V, which
may be thought of as an input-output relation in the Taylor series domain,
where W_r plays a role analogous to that of the transfer matrix in the s-domain.
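A small numpy sketch of the decomposition X = D_r + Π_r V of Remark 2.1. The block structure of V used here (U multiplied by powers of T_r) is our own notation, so the result is cross-checked against the raw recursion x^(i)(0) = Ax^(i-1)(0) + Bu^(i-1)(0) on a hypothetical two-state system:

```python
import numpy as np

r = 5
A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # hypothetical test system
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
U = np.array([[1.0, 1.0, 0.0, 0.0, 0.0]])    # input Taylor coefficients u^(i)(0)

Tr = np.eye(r, k=1)                          # Taylor operational matrix T_r
Dr = np.hstack([np.linalg.matrix_power(A, i) @ x0 for i in range(r)])
Pi = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(r)])
V = np.vstack([U @ np.linalg.matrix_power(Tr, l + 1) for l in range(r)])
X = Dr + Pi @ V                              # relation (12)/(14a)

# Independent check: columns x^(i)(0) via the state-equation recursion
cols = [x0]
for i in range(1, r):
    cols.append(A @ cols[-1] + B * U[0, i - 1])
print(np.allclose(X, np.hstack(cols)))       # True
```

As the remark notes, the computation involves only products of small matrices; no large linear system is inverted.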
Remark 2.2. For r = n, the matrices D_r, Π_r, W_r take on the special
expressions

$$D_n = [x(0) \;\vdots\; Ax(0) \;\vdots\; \cdots \;\vdots\; A^{n-1}x(0)], \tag{16a}$$
Note that p is of the order of r²(n + r)n. Now, with regard to the complexity
of the old technique for state-space analysis via orthogonal series (Refs. 4,
5, 7, and 17-21) or nonorthogonal series (Refs. 11-13), we mention that the
main bulk of the computations in the old method is due to the inversion of an
nr × nr matrix. In the general case, all matrix inversion algorithms (Gaussian
elimination, LU decomposition, etc.) require on the order of N³ operations,
where N is the size of the matrix. Here, since N = nr, the matrix inversion
algorithms require on the order of n³r³ operations. Hence, our technique
involves fewer operations, particularly as n becomes large.
A= IilZj0
0
, b = O,
klJ
x(O)= IZJ
.
u(t) ≅ u^T ψ_5(t),
where
u^T = [1, 1, 0, 0, 0].
Then, using (12), we readily have
X= 0 1 1 .
1 1 0
Now, expanding the exact solution in Taylor series and keeping its first r = 5
terms, we obtain the same coefficient matrix as above, from which it immediately
follows that, for the present example, the approximate solution for r = 5 is
identical to the exact solution.
Lemma 3.1. The ith time derivative of the state vector x(t) of system
(19a) is given by

$$x^{(i)}(t) = S_i(t)x(t) + \sum_{l=0}^{i-1} [S_l(t)B(t)u(t)]^{(i-l-1)}, \qquad i = 1, 2, \ldots, \tag{20}$$
where use was made of definition (21). Suppose that (20) is true for i = k,
that is,

$$x^{(k)}(t) = S_k(t)x(t) + \sum_{l=0}^{k-1} [S_l(t)B(t)u(t)]^{(k-l-1)}. \tag{22}$$

We will show that (20) is also true for i = k + 1. To this end, we take the
time derivative of (22) to yield

$$x^{(k+1)}(t) = S_k^{(1)}(t)x(t) + S_k(t)x^{(1)}(t) + \sum_{l=0}^{k-1} [S_l(t)B(t)u(t)]^{(k-l)},$$
where M(t) is a C^∞ analytic matrix in the time interval [0, T] with appropriate
dimensions. Introducing (23), for M(t) = S_l(t)B(t), in (20) gives

$$x^{(i)}(t) = S_i(t)x(t) + \sum_{l=0}^{i-1} \sum_{k=0}^{i-l-1} \binom{i-l-1}{k} [S_l(t)B(t)]^{(k)}\,u^{(i-l-k-1)}(t). \tag{24}$$
$$S_l(t)B(t) = \sum_{j=0}^{l} \binom{l}{j} Q_j^{(l-j)}(t), \qquad l = 0, 1, 2, \ldots, \tag{25}$$
After some rearrangements, the second term in (27) can be written in the
following form:

$$\sum_{j=0}^{l} \binom{i-k-j-1}{l-j}\binom{j+k}{k} = \binom{i}{l}, \tag{29}$$

for i = 0, 1, 2, …, l = 0, 1, …, i-1, and k = 0, 1, …, i-l-1. Use (29) in
(28). The resulting expression is next substituted in (27). Then, the
expression (27) takes on the form
Introducing (30) in the Taylor series expansion (6) and keeping the first
r terms, expression (6) can be written as

$$x(t) = x(0) + \sum_{i=1}^{r-1} \Biggl\{ S_i(0)x(0) + \Pi(0)\begin{bmatrix} u^{(i-1)}(0) \\ \vdots \\ u(0) \end{bmatrix} + \binom{i}{1}\Pi^{(1)}(0)\begin{bmatrix} u^{(i-2)}(0) \\ \vdots \\ u(0) \end{bmatrix} + \cdots + \binom{i}{i-1}\Pi^{(i-1)}(0)\,u(0) \Biggr\}\,t^i/i!. \tag{31}$$
$$x(t) \cong \Bigl\{ [x(0) \;\vdots\; S_1(0)x(0) \;\vdots\; \cdots \;\vdots\; S_{r-1}(0)x(0)] + \sum_{i=0}^{r-1} \Pi^{(i)}(0)\,V_i \Bigr\} \psi_r(t), \tag{33}$$
where V_i ∈ ℝ^{mr×r} is a block matrix assembled from the zero blocks 0_{m×(i+1)},
0_{m×1}, and 0_{m(i+1)×(r-i-1)}, together with the appropriate input-selection blocks.
As in (12), we introduce (10) in (33), and equating like powers of t on both
sides yields an expression for X involving the stacked blocks

$$\begin{bmatrix} UT_{1,0} \\ UT_{1,1} \\ \vdots \\ UT_{1,r-1} \end{bmatrix} \in \mathbb{R}^{mr\times r},$$
$$E_{i,j} = \operatorname{diag}\Bigl\{ \underbrace{0, \ldots, 0}_{j\ \text{elements}},\ \underbrace{\binom{j}{i},\ \binom{j+1}{i},\ \ldots,\ \binom{r-1}{i}}_{r-j\ \text{elements}} \Bigr\},$$

for i = 1, 2, …, r and j = 0, 1, …, r-1, where E_{i,j}, T_{i,j} ∈ ℝ^{r×r}, with T_{i,j} = 0
for i + j ≥ r.
Now, consider solving (19b) for y(t); i.e., consider the problem of
determining the output coefficient matrix Y in the approximation
y(t) ≅ Yψ_r(t), where Y ∈ ℝ^{p×r}. To this end, two lemmas analogous to
Lemmas 3.1 and 3.2 can be proven.
Lemma 3.3. The ith time derivative of y(t) is given by the following
relationship:

$$y^{(i)}(t) = L_i(t)x(t) + \sum_{l=0}^{i-1} [L_l(t)B(t)u(t)]^{(i-l-1)}, \qquad i = 1, 2, \ldots, \tag{36}$$

where

$$L_i(t) = \sum_{k=0}^{i} \binom{i}{k} C^{(i-k)}(t)\,S_k(t), \qquad i = 0, 1, 2, \ldots. \tag{38}$$
$$Y \cong [L_0(0)x(0) \;\vdots\; L_1(0)x(0) \;\vdots\; \cdots \;\vdots\; L_{r-1}(0)x(0)] + [W_r(0) \;\vdots\; W_r^{(1)}(0) \;\vdots\; \cdots \;\vdots\; W_r^{(r-1)}(0)] \begin{bmatrix} UT_{1,0} \\ UT_{1,1} \\ \vdots \\ UT_{1,r-1} \end{bmatrix}, \tag{40}$$

where the stacked block belongs to ℝ^{mr×r}.
u(t)=[Oll~-U~6(t)=[O1 0 0 0 0 0]
0 0 0 0 0 1/t6(t)"
$$X = \begin{bmatrix} 0 & 1 & -1 & 1 & -1 & 1 \\ 0 & 0 & 1 & 0 & -6 & 0 \end{bmatrix}.$$
The exact solution x_e(t) for x(t) is

$$x_e(t) = \begin{bmatrix} -\exp(-t) + 1 \\ (-\exp(-t^2) + 1)/2 \end{bmatrix}.$$
We observe that, if x_e(t) is expanded in Taylor series and the infinite series
is truncated to r = 6 terms, we obtain the same coefficients as derived by our
method for r = 6. Clearly, the present example shows the computational
simplicity of our approach over the method reported in Refs. 12 and 13.
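Since the exact solution x_e(t) is given above, the Taylor coefficient matrix it induces for r = 6 can be computed directly; the sketch below builds the columns x_e^(i)(0) from the two closed-form components:

```python
import numpy as np
from math import factorial

r = 6
# Row 1: x1 = 1 - e^{-t}  =>  x1^(i)(0) = -(-1)^i for i >= 1
row1 = [0.0] + [-(-1.0) ** i for i in range(1, r)]
# Row 2: x2 = (1 - e^{-t^2})/2 = sum_{k>=1} (-1)^{k+1} t^{2k} / (2 k!)
#        =>  x2^(i)(0) = (-1)^{k+1} (2k)!/(2 k!) when i = 2k, else 0
row2 = [0.0] * r
for k in range(1, r // 2 + 1):
    if 2 * k < r:
        row2[2 * k] = (-1.0) ** (k + 1) * factorial(2 * k) / (2 * factorial(k))
X = np.array([row1, row2])
expected = np.array([[0, 1, -1, 1, -1, 1],
                     [0, 0, 1, 0, -6, 0]], dtype=float)
print(np.allclose(X, expected))  # True
```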
$$\begin{bmatrix} \dot x(t) \\ \dot p(t) \end{bmatrix} = N(t)\begin{bmatrix} x(t) \\ p(t) \end{bmatrix},$$

with boundary conditions

$$x(t=0) = x(0), \qquad p(t=t_f) = 0, \tag{45}$$

where the matrix N(t) ∈ ℝ^{2n×2n} is defined as
where the unity appears in the ith position and where φ_i(t_f, t) is the ith
column of Φ^T(t_f, t). We observe that the n equations in (54) for i =
n+1, …, 2n are sufficient to determine Φ_22(t_f, t) and Φ_21(t_f, t), and thus
the optimal gain matrix K(t).
In what follows, we solve the problem (54), (55). To this end, suppose
that φ(t_f, t) and N(t) are C^∞ analytic in the time interval [0, T] and
t_f ∈ [0, T]. Then, the vector φ(t_f, t) can be expanded in Taylor series in the
neighborhood of t_f as follows:
where
Substituting (57) in (56) and keeping the first r terms, relation (56) may be
written in an analogous form to that of relation (33) to yield
$$\varphi(t_f, t) \cong [\varphi(t_f, t_f) \;\vdots\; S_1(t_f)\varphi(t_f, t_f) \;\vdots\; \cdots \;\vdots\; S_{r-1}(t_f)\varphi(t_f, t_f)]\,\psi_r(t - t_f)$$
$$= [\varphi(t_f, t_f) \;\vdots\; S_1(t_f)\varphi(t_f, t_f) \;\vdots\; \cdots \;\vdots\; S_{r-1}(t_f)\varphi(t_f, t_f)]\,Z_r(t_f)\,\psi_r(t), \tag{59}$$

where Z_r(t_f) is the shift operational matrix between the two basis vectors
ψ_r(t - t_f) and ψ_r(t), defined as
$$[Z_r(t_f)]_{ij} = \begin{cases} (-t_f)^{i-j}/(i-j)!, & i \ge j, \\ 0, & i < j, \end{cases}$$

i.e., Z_r(t_f) is lower triangular Toeplitz with unit diagonal. The operator S_i(t), given in (58), takes on the form

$$S_i = [-N^T]^i,$$
$$J = \frac{1}{2} \int_0^{t_f} [x^2(t) + u^2(t)]\,dt.$$
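For orientation on how the exact gain in such scalar problems arises, the sketch below runs a backward Riccati sweep for a hypothetical stand-in plant ẋ = u (the example's actual system matrices are not reproduced here) under the cost above; for this plant the Riccati solution is P(t) = tanh(t_f − t) and K(t) = P(t).

```python
import numpy as np

def riccati_gain(tf=1.0, steps=1000):
    """Backward RK4 sweep of dP/dt = P^2 - 1 from P(tf) = 0; K(t) = P(t)."""
    dt = tf / steps
    P, Ps = 0.0, [0.0]
    f = lambda P: P * P - 1.0
    for _ in range(steps):
        k1 = f(P); k2 = f(P - 0.5 * dt * k1)
        k3 = f(P - 0.5 * dt * k2); k4 = f(P - dt * k3)
        P -= dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        Ps.append(P)
    return np.array(Ps[::-1])    # gain on the grid t = 0, dt, ..., tf

tf = 1.0
t = np.linspace(0.0, tf, 1001)
K = riccati_gain(tf)
print(np.allclose(K, np.tanh(tf - t), atol=1e-8))  # True (exact: K_e = tanh(tf - t))
```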
Fig. 1. Graphical representation of the exact optimal gain Ke(t) and the approximate gain
K(t) derived via Taylor series for r = 6 (Example 4.1).
Following the proposed method for the time-varying case and for r = 6, we
have
Introducing the above expression for Φ_22 in (61) and using (51), we arrive
at the expression sought for the time-varying gain matrix:

$$K(t) \cong \frac{[1.058 \;\vdots\; -1.125 \;\vdots\; 0.166 \;\vdots\; -0.500 \;\vdots\; 3.000 \;\vdots\; -7.000]\,\psi_6(t)}{[0.966 \;\vdots\; -0.375 \;\vdots\; -1.000 \;\vdots\; 9.500 \;\vdots\; -24.000 \;\vdots\; 39.000]\,\psi_6(t)}.$$
A comparison between the exact optimal gain Ke(t) and the approximate
solution derived by our technique is given in Fig. 1.
where ‖·‖ denotes the Euclidean norm. To this end, we take the norm of
both sides in (64) to yield
for some γ ∈ (0, T), where use was made of the inequality 0 < γ < T, which
follows from definition (64). The expression for x^(r)(γ) may be derived using
(7) to yield

$$x^{(r)}(\gamma) = A^r x(\gamma) + \sum_{l=0}^{r-1} A^l B u^{(r-l-1)}(\gamma), \qquad \gamma \in [0, T].$$
$$\|x^{(r)}(\gamma)\| = \Bigl\| A^r x(\gamma) + \sum_{l=0}^{r-1} A^l B u^{(r-l-1)}(\gamma) \Bigr\|$$
$$\le \|A^r x(\gamma)\| + \sum_{l=0}^{r-1} \|A^l B u^{(r-l-1)}(\gamma)\|$$
$$\le \|A\|^r \|x(\gamma)\| + \sum_{l=0}^{r-1} \|A\|^l \|B\| \|u^{(r-l-1)}(\gamma)\|, \qquad \gamma \in [0, T], \tag{67}$$
$$\|x(\gamma)\| = \Bigl\| \exp(A\gamma)x(0) + \int_0^{\gamma} \exp[A(\gamma-\sigma)]Bu(\sigma)\,d\sigma \Bigr\|$$
$$\le \|\exp(A\gamma)x(0)\| + \Bigl\| \int_0^{\gamma} \exp[A(\gamma-\sigma)]Bu(\sigma)\,d\sigma \Bigr\|. \tag{69}$$
Denote

$$a = \max(\lambda_{\max}, 0),$$

where λ_max denotes the largest real part of the eigenvalues of A, and use the
fact that σ ∈ [0, γ] and γ ∈ [0, T]. Then,

$$\exp(\lambda_{\max}\gamma) \sum_{i=0}^{n-1} \gamma^i/i! \le \exp(aT) \sum_{i=0}^{n-1} T^i/i!, \tag{71a}$$

$$\exp[\lambda_{\max}(\gamma-\sigma)] \sum_{i=0}^{n-1} (\gamma-\sigma)^i/i! \le \exp(aT) \sum_{i=0}^{n-1} T^i/i!. \tag{71b}$$
Substituting the inequalities (71) in (70) and the resulting inequalities in (69)
yields

$$\|x(\gamma)\| \le \Bigl( \|x(0)\| + \|B\| \int_0^T \|u(\sigma)\|\,d\sigma \Bigr) \exp(aT) \sum_{i=0}^{n-1} T^i/i!. \tag{72}$$
Denoting by V_max an upper bound of

$$\sum_{l=0}^{r-1} \|A\|^l \|B\| \|u^{(r-l-1)}(\gamma)\|, \qquad \gamma \in [0, T],$$
Now, substitute Ineqs. (73) and (76) into the expression (67) to yield a bound
on ‖x^(r)(γ)‖.

Finally, substituting (77) into (66) and the resulting expression into (65), an
upper bound of the approximation error E_max(r), with Q the eigenvector
matrix of A, will be

$$E_{\max}(r) \le \frac{T^r}{r!} \Bigl\{ \|A\|^r \|Q\| \|Q^{-1}\| \Bigl( \|x(0)\| + \|B\| \int_0^T \|u(\sigma)\|\,d\sigma \Bigr) \exp(aT) \sum_{i=0}^{n-1} T^i/i! + V_{\max} \Bigr\}, \tag{78}$$
for r = 1, 2, …. The above expression is the one sought for the upper bound
of the error involved in the approximate analysis of time-invariant systems
via Taylor series.
In closing this section, we mention that one may readily estimate the
approximation error for the optimal control by making use of the procedure
presented above.
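The flavor of such a bound can be illustrated on the unforced case u ≡ 0, where the r-term truncation error on [0, T] is dominated by (T^r/r!)‖A‖^r max_t ‖x(t)‖, a crude relative of the bound above. The system below is our own test example:

```python
import numpy as np
from math import factorial

def expm_series(M, t, terms=40):
    """exp(M t) via its power series (adequate here, since ||M t|| is small)."""
    E = np.zeros_like(M)
    P = np.eye(M.shape[0])
    for k in range(terms):
        E = E + P * t ** k / factorial(k)
        P = P @ M
    return E

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable test system (eigenvalues -1, -2)
x0 = np.array([1.0, 1.0])
T, r = 1.0, 8
ts = np.linspace(0.0, T, 101)

def taylor_x(t):
    """r-term Taylor approximation of x(t) = exp(At) x0."""
    return sum(np.linalg.matrix_power(A, i) @ x0 * t ** i / factorial(i) for i in range(r))

errs = [np.linalg.norm(expm_series(A, t) @ x0 - taylor_x(t)) for t in ts]
x_max = max(np.linalg.norm(expm_series(A, t) @ x0) for t in ts)
bound = T ** r / factorial(r) * np.linalg.norm(A, 2) ** r * x_max
print(max(errs) <= bound)  # True: the a priori bound dominates the observed error
```

As the section notes, such a bound can be evaluated before computing X, and r is increased until the bound is acceptably small.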
[Figure: the error measure V(r), plotted on a logarithmic scale from E-04 to E+02.]
6. Conclusions
7. Appendix
Proof of Lemma 3.2. Using the definitions (21) and (26) for S_l(t) and
Q_j(t), respectively, we have

$$S_l(t)Q_j(t) = [S_{l-1}(t)A(t) + S_{l-1}^{(1)}(t)]Q_j(t)$$
$$= S_{l-1}(t)A(t)Q_j(t) - S_{l-1}(t)Q_j^{(1)}(t) + S_{l-1}(t)Q_j^{(1)}(t) + S_{l-1}^{(1)}(t)Q_j(t)$$
$$= S_{l-1}(t)[A(t)Q_j(t) - Q_j^{(1)}(t)] + [S_{l-1}(t)Q_j(t)]^{(1)}$$
$$= S_{l-1}(t)Q_{j+1}(t) + [S_{l-1}(t)Q_j(t)]^{(1)}, \tag{79}$$

for any l = 1, 2, … and j = 0, 1, 2, …. The following relationships are obviously
true:

$$S_0(t)Q_k(t) = Q_k(t), \qquad k = 0, 1, 2, \ldots, j. \tag{80}$$
Using (79) in (80), we have

$$S_1(t)Q_k(t) = Q_{k+1}(t) + Q_k^{(1)}(t), \qquad k = 0, 1, 2, \ldots, j-1. \tag{81}$$

Again using (79) in (81), we have

$$S_2(t)Q_k(t) = Q_{k+2}(t) + 2Q_{k+1}^{(1)}(t) + Q_k^{(2)}(t), \qquad k = 0, 1, 2, \ldots, j-2.$$
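Relations (79)-(81) can be verified mechanically in the scalar case (n = m = 1) by representing A(t) and B(t) as polynomials and implementing the recursions S_l = S_{l-1}A + S_{l-1}^(1) and Q_{j+1} = AQ_j − Q_j^(1); the starting point Q_0 = B and the test polynomials below are our own assumptions.

```python
# Polynomials as coefficient lists [c0, c1, ...]; scalar case n = m = 1.
def pmul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0) for i in range(n)]

def pdiff(p):
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def pneg(p):
    return [-c for c in p]

def peq(p, q):
    return all(abs(c) < 1e-12 for c in padd(p, pneg(q)))

A = [0.0, 1.0]        # A(t) = t          (assumed test data)
B = [1.0, 2.0, 3.0]   # B(t) = 1 + 2t + 3t^2

S = [[1.0]]           # S_0 = 1;  S_l = S_{l-1} A + S_{l-1}^(1)
for _ in range(3):
    S.append(padd(pmul(S[-1], A), pdiff(S[-1])))
Q = [B]               # Q_0 = B;  Q_{j+1} = A Q_j - Q_j^(1)
for _ in range(4):
    Q.append(padd(pmul(A, Q[-1]), pneg(pdiff(Q[-1]))))

# (81):  S_1 Q_k = Q_{k+1} + Q_k^(1)
ok81 = all(peq(pmul(S[1], Q[k]), padd(Q[k + 1], pdiff(Q[k]))) for k in range(3))
# (82):  S_2 Q_k = Q_{k+2} + 2 Q_{k+1}^(1) + Q_k^(2)
ok82 = all(peq(pmul(S[2], Q[k]),
               padd(padd(Q[k + 2], pmul([2.0], pdiff(Q[k + 1]))), pdiff(pdiff(Q[k]))))
           for k in range(2))
print(ok81, ok82)  # True True
```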
References
1. CHEN, C. F., and HSIAO, C. H., Design of Piecewise Constant Gains for Optimal
Control via Walsh Functions, IEEE Transactions on Automatic Control, Vol.
AC-20, pp. 596-603, 1975.
2. SANNUTI, P., Analysis and Synthesis of Dynamic Systems via Block-Pulse
Functions, Proceedings of the IEE, Vol. 124, pp. 569-571, 1977.
3. KING, R. E., and PARASKEVOPOULOS, P. N., Parameter Identification of
Discrete-Time SISO Systems, International Journal of Control, Vol. 30,
pp. 1023-1029, 1979.
20. LIU, C. C., and SHIH, Y. P., Analysis and Optimal Control of Time-Varying
Systems via Chebyshev Polynomials, International Journal of Control, Vol. 38,
pp. 1003-1012, 1983.
21. WANG, M. L., CHANG, R. Y., and YANG, S. Y., Analysis and Optimal Control
of Time-Varying Systems via Generalized Orthogonal Polynomials, International
Journal of Control, Vol. 44, pp. 895-910, 1986.
22. PARASKEVOPOULOS, P. N., A New Orthogonal Series Approach to State Space
Analysis and Identification, International Journal of Systems Science, Vol. 20,
pp. 957-970, 1989.
23. PARASKEVOPOULOS, P. N., SKLAVOUNOS, P. G., and ARVANITIS, K. G., A New
Orthogonal Series Approach to State Space Analysis and Optimal Control (to
appear).
24. PARASKEVOPOULOS, P. N., SKLAVOUNOS, P. G., and KARKAS, D. A., A New
Orthogonal Series Approach to Sensitivity Analysis, Journal of the Franklin
Institute, Vol. 327, pp. 429-433, 1990.
25. PARASKEVOPOULOS, P. N., and DIAMANTARAS, K. I., A New Orthogonal Series
Approach to State Space Analysis of 1D and 2D Discrete Systems, Proceedings
of the IEE, Part G, Vol. 137, pp. 205-209, 1990.
26. PARASKEVOPOULOS, P. N., TSIRIKOS, A. S., and ARVANITIS, K. G., A New
Orthogonal Series Approach to State-Space Analysis of Bilinear Systems (to
appear).
27. DING, X., and FRANK, P., Structure Analysis via Orthogonal Functions,
International Journal of Control, Vol. 50, pp. 2285-2300, 1989.
28. GORI-GIORGI, C., and MONACO, S., Observability and State Reconstruction
of Affine Time-Invariant Processes, Proceedings of the International Symposium
MECO 78, Vol. 2, pp. 412-416, 1978.