S. N. Huang, K. K. Tan, and T. H. Lee

Abstract—In this letter, we solve the problem of decentralized adaptive asymptotic tracking for a class of large-scale systems with significant nonlinearities and uncertainties. Neural networks (NNs) are used as a control part to cancel the effect of the unknown nonlinearity. Semiglobal asymptotic stability results are obtained and the tracking error converges to zero.

Index Terms—Adaptive control, interconnected systems, neural networks (NNs), stability.

I. INTRODUCTION

The study of large-scale systems has been motivated by their wide applicability to many practical systems such as power systems and spacecraft. For such systems, with possibly many interconnected components, finding a centralized controller is often a technically challenging design problem. Research on decentralized control for nonlinear systems is reported in [1]–[4]. Most of these results consider subsystems that are linear in a set of unknown parameters [1], [2], or take the isolated subsystems to be known [3]; [4] presents a decentralized neural network (NN) control to approximate unknown functions in isolated subsystems that may not be linearly parameterized. On the other hand, there is a significant amount of work on control design for systems with interconnection constraints. Most of the literature considers a first-order (linearly bounded) condition on the interconnections [1], [2], [14]. Results on higher-order interconnections are given in [3], [13]. Specifically, [13] proposes an NN decentralized controller that, unlike [3], does not require knowledge of the input gain functions.

In this letter, by introducing a decomposition structure, we obtain a solution to the problem of decentralized adaptive asymptotic tracking for a class of large-scale systems with significant nonlinearities and uncertainties. It is assumed that the states of the isolated subsystems are available for feedback and that the nonlinear systems have full relative degree. NNs are used as a control part to cancel the effect of the nonlinearity pointwise in time. Intensive research on NN modeling and centralized control may be found in [5]–[12]. Our results focus on decentralized adaptive nonlinear control. The interconnection form considered here includes, as special cases, various other forms considered previously in the literature. In contrast to [4], [13], our controller does not depend on a limit for the ith subsystem's input gain rate of change [the derivative of g_i(x_i) in (1)] and can handle interconnections bounded by general nonlinear functions of the states. In addition, the tracking error is proved to converge to zero.

II. PRELIMINARIES

Let us consider the nonlinear interconnected subsystems given in the following form:

  ẋ_{i,1} = x_{i,2}
  ẋ_{i,2} = x_{i,3}
  ⋮
  ẋ_{i,n_i} = f_i(x_i) + g_i(x_i)u_i + Δ_i,   i = 1, 2, …, N   (1)
  y_i = x_{i,1}

where x_i = [x_{i,1}, x_{i,2}, …, x_{i,n_i}]^T, Δ_i denotes the interconnection term, u_i ∈ R is the input, and y_i ∈ R is the output of the ith subsystem. [15] is an example of representing the classes above. Since the functions f_i(x_i), g_i(x_i) are unknown, traditional adaptive control methods are not applicable to such a control system.

Assumption 1: The signs of g_i(x_i) are known, and there exist known continuous functions ḡ_i(x_i) and constants g_{0i} > 0 such that ḡ_i(x_i) ≥ |g_i(x_i)| ≥ g_{0i}, i = 1, 2, …, N.

The above assumption implies that the smooth functions g_i(x_i) are strictly either positive or negative. Define e_i(t) = y_i − y_{di}, ė_i(t) = ẏ_i − ẏ_{di}, …, e_i^{(n_i−1)} = y_i^{(n_i−1)} − y_{di}^{(n_i−1)}, e_i^{(n_i)} = y_i^{(n_i)} − y_{di}^{(n_i)}, and

  e_i = [e_{i,1}, e_{i,2}, …, e_{i,n_i}]^T = [e_i, ė_i, …, e_i^{(n_i−1)}]^T.   (2)

The ith subsystem (1) can be written as

  ė_{i,1} = e_{i,2}
  ė_{i,2} = e_{i,3}
  ⋮
  ė_{i,n_i} = e_i^{(n_i)} = −y_{di}^{(n_i)} + f_i(x_i) + g_i(x_i)u_i + Δ_i.   (3)

By introducing e_si = [e_{i,1}, e_{i,2}, …, e_{i,n_i−1}]^T, the above state equations can be decomposed into the following two equations:

  ė_si = [0 1 0 … 0; 0 0 1 … 0; ⋮; 0 0 0 … 0] e_si + [0; 0; ⋮; 1] e_{i,n_i}   (4)
  ė_{i,n_i} = −y_{di}^{(n_i)} + f_i(x_i) + g_i(x_i)u_i + Δ_i.   (5)

The problem in this letter is to find decentralized controllers u_i that can stabilize the interconnected subsystems of error dynamics (4), (5) and render the equilibrium point stable.

Defining v_i = k_{i,1}e_i + k_{i,2}ė_i + … + k_{i,n_i−1}e_i^{(n_i−2)} and introducing it into (4), we have

  ė_si = A_i e_si + b_i (e_{i,n_i} + v_i)   (6)
  ė_{i,n_i} = −y_{di}^{(n_i)} + f_i(x_i) + g_i(x_i)u_i + Δ_i   (7)

where

  A_i = [0 1 … 0; ⋮; −k_{i,1} −k_{i,2} … −k_{i,n_i−1}],   b_i = [0; ⋮; 1]   (8)

and k_{i,1}, k_{i,2}, …, k_{i,n_i−1} are chosen such that the polynomial k_{i,1} + k_{i,2}s + … + k_{i,n_i−1}s^{n_i−2} + s^{n_i−1} is strictly Hurwitz. This implies that A_i is an asymptotically stable matrix. For a given Q_i > 0, there exists a unique positive definite matrix P_i satisfying

  A_i^T P_i + P_i A_i = −Q_i.   (9)

Assumption 2: The desired trajectory vector x_{di} = [y_{di}, ẏ_{di}, …, y_{di}^{(n_i)}]^T for the ith subsystem is continuous and bounded.

Assumption 3: The interconnections Δ_i are bounded by |Δ_i| ≤ Σ_{j=1}^N γ_ij(x_j), where the γ_ij are smooth functions.
Authorized licensed use limited to: Shri Mata Vaishno Devi University. Downloaded on March 29,2020 at 06:57:40 UTC from IEEE Xplore. Restrictions apply.
244 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 17, NO. 1, JANUARY 2006
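The gain selection and Lyapunov equation in (8)–(9) are easy to check numerically. The sketch below is illustrative only: the gains k_{i,1} = 2, k_{i,2} = 3 (so n_i − 1 = 2) and the choice Q_i = I are assumptions, not values from the letter.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Gains: k1 + k2*s + s^2 = 2 + 3s + s^2 = (s + 1)(s + 2), strictly Hurwitz.
k1, k2 = 2.0, 3.0
A = np.array([[0.0, 1.0],
              [-k1, -k2]])                    # companion matrix A_i of (8)
assert np.all(np.linalg.eigvals(A).real < 0)  # A_i asymptotically stable

Q = np.eye(2)                                 # any Q_i > 0
# Solve A_i^T P_i + P_i A_i = -Q_i, i.e. (9); scipy solves a X + X a^H = q.
P = solve_continuous_lyapunov(A.T, -Q)
assert np.allclose(A.T @ P + P @ A, -Q)       # Lyapunov equation (9) holds
assert np.all(np.linalg.eigvalsh(P) > 0)      # P_i is positive definite
```

Any strictly Hurwitz gain choice works the same way; the unique positive definite P_i then feeds the −2e_si^T P_i b_i term of the controller.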
III. LOCAL CONTROLLER STRUCTURE

In this section, a desired local controller is proposed, which is free from control singularity for the isolated subsystems. For notational convenience in this letter, we denote ϖ_i = e_{i,n_i} + v_i. Since x_i = [x_{i,1}, x_{i,2}, …, x_{i,n_i}]^T and e_{i,n_i} = x_{i,n_i} − y_{di}^{(n_i−1)}, f_i(x_i) can be written as f_i(x̄_i, ϖ_i + λ_{i1}) with x̄_i = [x_{i,1}, x_{i,2}, …, x_{i,n_i−1}]^T and λ_{i1} = y_{di}^{(n_i−1)} − v_i. Note that λ̇_{i1} = y_{di}^{(n_i)} − k_{i,1}ė_i − k_{i,2}ë_i − … − k_{i,n_i−1}e_i^{(n_i−1)}. Let λ_{i2} = −y_{di}^{(n_i)} + k_{i,1}ė_i + k_{i,2}ë_i + … + k_{i,n_i−1}e_i^{(n_i−1)} = −y_{di}^{(n_i)} + v̇_i. Define a smooth scalar function [9]

  V_ev = ∫_0^{ϖ_i} σ / g_i(x̄_i, σ + λ_{i1}) dσ.   (11)

By the mean value theorem, V_ev can be rewritten as V_ev = ϖ_i² / [2 g_i(x̄_i, θ_w ϖ_i + λ_{i1})] with θ_w ∈ (0, 1). This is positive definite, as shown in [9]. Its time derivative (see [9, proof of Lemma 3.1]) is

  V̇_ev = (ϖ_i / g_i(x_i)) ϖ̇_i − ((−y_{di}^{(n_i)} + v̇_i) / g_i(x_i)) ϖ_i + g_i(z_i) ϖ_i   (12)

where g_i(z_i) is given below. To design the local controller, we consider an isolated subsystem with Δ_i = 0 on the compact set

  Ω_{zi} = {(x̄_i, ϖ_i, λ_{i1}, λ_{i2}) | x_i ∈ Ω_{xi}, x_{di} ∈ Ω_{di}}.

IV. DECENTRALIZED CONTROLLER OF INTERCONNECTED SUBSYSTEMS

In the case that the nonlinearities f_i(x_i) and g_i(x_i) are unknown, the desired local controller u_{di}*(z_i) is not available. Based on the desired local control structure (15), we propose the following decentralized NN control for handling the unknown functions in the interconnected subsystems (6) and (7):

  u_i = −k_{i,n}ϖ_i − 2e_si^T P_i b_i − M̂_i sgn(ϖ_i) + Ŵ_i^T Φ_i(z_i)   (17)

with k_{i,n} > 1/2. Here, the neural network term Ŵ_i^T Φ_i is used to deal with the unknown functions, and M̂_i sgn(ϖ_i), which is a sliding mode term (see [16, Ch. 14]), is introduced to counter uncertainties in the NN approximation error and system interconnections. Consider the following update laws:

  dŴ_i/dt = −Γ_{1i} ϖ_i Φ_i(z_i)   (18)
  dM̂_i/dt = Γ_{2i} |ϖ_i|   (19)

where Ŵ_i, M̂_i are the estimates of W_i*, M_i [W_i* is given in (29)], respectively, and Γ_{1i}, Γ_{2i} are positive constants. Define W̃_i = Ŵ_i − W_i* ∈ R^{l_i} and M̃_i = M̂_i − M_i ∈ R.

Introduce the two vectors E_i = [e_si^T, ϖ_i, W̃_i^T, M̃_i]^T and E_{0i} = [e_si^T(0), ϖ_i(0), W̃_i^T(0), M̃_i(0)]^T. Notice that e_si = F_{ei}(x_i, x_{di}) = [e_{i,1}, e_{i,2}, …, e_{i,n_i−1}]^T, where F_{ei}: Ω_{xi} × Ω_{di} → Ω_{ei}, and ϖ_i = F_{ϖi}(x_i, x_{di}) = e_{i,n_i} + k_{i,1}e_{i,1} + … + k_{i,n_i−1}e_{i,n_i−1}, where F_{ϖi}: Ω_{xi} × Ω_{di} → Ω_{ϖi}. Thus, the vector E_i can be viewed as a function of the variables x_i, x_{di}, W̃_i, M̃_i:

  E_i = F_{Ei}(x_i, x_{di}, W̃_i, M̃_i)   (20)

where F_{Ei}: Ω_{xi} × Ω_{di} × Ω_{wi} × Ω_{M̃i} → Ω_{Ei}. Similarly, we have E_{0i} = F_{E0i}(x_i(0), x_{di}(0), W̃_i(0), M̃_i(0)), where F_{E0i}: Ω_{xi}^0 × Ω_{di}^0 × Ω_{wi}^0 × Ω_{M̃i}^0 → Ω_{Ei}^0. Introduce the sets Ω_E = {E = [E_1^T, E_2^T, …, E_N^T]^T | E_i ∈ Ω_{Ei}, 1 ≤ i ≤ N} and Ω_E^0 = {E_0 = [E_{01}^T, E_{02}^T, …, E_{0N}^T]^T | E_{0i} ∈ Ω_{Ei}^0, 1 ≤ i ≤ N}. Introduce the largest ball included in Ω_E, E_BR = {E | ‖E‖ ≤ R}, R > 0, and the largest ball included in Ω_E^0, E_Br = {E_0 | ‖E_0‖ ≤ r}, r > 0.

With the control (17), the ϖ_i-dynamics become

  ϖ̇_i = g_i(x_i)[−k_{i,n}ϖ_i − 2e_si^T P_i b_i − M̂_i sgn(ϖ_i) + Ŵ_i^T Φ_i(z_i)] + f_i(x_i) − y_{di}^{(n_i)} + v̇_i + Δ_i.   (22)

Consider V_si = e_si^T P_i e_si + V_ev. By using (6) and the Lyapunov function (11), the time derivative of V_si is

  V̇_si = −e_si^T Q_i e_si + 2e_si^T P_i b_i ϖ_i + (ϖ_i / g_i(x_i)) ϖ̇_i − ((−y_{di}^{(n_i)} + v̇_i) / g_i(x_i)) ϖ_i + g_i(z_i) ϖ_i.   (23)

Substituting (22) into the above equation produces

  V̇_si ≤ −e_si^T Q_i e_si − k_{i,n}ϖ_i² + Ŵ_i^T Φ_i ϖ_i − u_{di}* ϖ_i − M̂_i |ϖ_i| + (|ϖ_i| / g_{0i}) Σ_{j=1}^N γ_ij(x_j).   (24)
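To see the control (17) and the update laws (18)–(19) in action, the following toy simulation applies them to a single isolated first-order subsystem (n_i = 1, Δ_i = 0), so the e_si / P_i b_i term drops out and ϖ_i reduces to the tracking error. The plant f(x) = sin x with g(x) = 1, the Gaussian basis, the constant setpoint, and all gain values are illustrative assumptions, not values from the letter.

```python
import numpy as np

def rbf(x, centers, width=1.0):
    # Gaussian basis vector Phi(z); the NN input is taken as the state x
    return np.exp(-(x - centers) ** 2 / width ** 2)

f = np.sin                          # plant nonlinearity, unknown to the controller
yd = 1.0                            # constant desired output (its derivatives vanish)
centers = np.linspace(-1.0, 3.0, 15)
W_hat = np.zeros_like(centers)      # NN weight estimate  W^_i
M_hat = 0.0                         # sliding-mode gain estimate  M^_i
k, gamma1, gamma2 = 2.0, 5.0, 0.5   # k_{i,n} and adaptation gains (illustrative)
dt, x = 1e-3, 1.5

for _ in range(60_000):
    e = x - yd                      # tracking error; equals varpi_i when n_i = 1
    phi = rbf(x, centers)
    # Control (17), specialized: no e_si / P_i b_i term when n_i = 1
    u = -k * e - M_hat * np.sign(e) + W_hat @ phi
    W_hat += dt * (-gamma1 * e * phi)   # update law (18)
    M_hat += dt * (gamma2 * abs(e))     # update law (19)
    x += dt * (f(x) + u)                # plant: xdot = f(x) + u (Euler step)

assert abs(x - yd) < 0.05           # tracking error driven toward zero
```

The NN term gradually absorbs the unknown f(x) along the trajectory while the adaptive sliding-mode gain M̂ grows until it dominates the residual, which is the qualitative mechanism behind (30)–(31).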
From the definition (2) and the relationships among x_{j,k}, ϖ_j, and λ_{j1}, we have

  γ_ij(x_j) = γ_ij(e_{j,1} + y_{dj}, e_{j,2} + ẏ_{dj}, …, ϖ_j + λ_{j1}).   (25)

Suppose that E ∈ Ω_E; then e_{j,1}, e_{j,2}, …, ϖ_j are bounded. According to Assumption 2, this implies that e_{j,1} + y_{dj}, e_{j,2} + ẏ_{dj}, …, and ϖ_j + λ_{j1} = ϖ_j + y_{dj}^{(n_j−1)} − k_{j,1}e_{j,1} − k_{j,2}e_{j,2} − … − k_{j,n_j−1}e_{j,n_j−1} are bounded, so each smooth function γ_ij is bounded, i.e., |γ_ij(e_{j,1} + y_{dj}, e_{j,2} + ẏ_{dj}, …, ϖ_j + λ_{j1})| ≤ γ_{M0ij}. Utilizing this bound, the last term of (24) can be written as

  (|ϖ_i| / g_{0i}) Σ_{j=1}^N γ_ij(x_j) ≤ (|ϖ_i| / g_{0i}) Σ_{j=1}^N γ_{M0ij}.   (27)

Hence

  V̇_si ≤ −e_si^T Q_i e_si − k_{i,n}ϖ_i² + Ŵ_i^T Φ_i ϖ_i − u_{di}*(z_i) ϖ_i − M̂_i |ϖ_i| + (|ϖ_i| / g_{0i}) Σ_{j=1}^N γ_{M0ij}.   (28)

In the case that f_i(x_i), g_i(x_i), and hence u_{di}*(z_i) are unknown, u_{di}* can be approximated on the compact set Ω_{zi} by a linear-in-the-weights NN:

  u_{di}*(z_i) = W_i*^T Φ_i(z_i) + ε_i,   z_i ∈ Ω_{zi}   (29)

where W_i* ∈ R^{l_i} is the ideal weight vector, l_i is the number of nodes of the ith NN, Φ_i(z_i) is the basis function vector, and ε_i is the approximation error with |ε_i| ≤ ε_{Mi}. This approximation is considered with linear-in-the-parameters NNs; this is why we add the neural network term in the control (17). Using this approximation property, we have

  V̇_si ≤ −λ_min(Q_i)‖e_si‖² − k_{i,n}ϖ_i² + W̃_i^T Φ_i ϖ_i − M̃_i |ϖ_i|   (30)

where M_i = Σ_{j=1}^N γ_{M0ij} / g_{0i} + ε_{Mi}. Consider the overall Lyapunov function V = Σ_{i=1}^N [V_si + (1/2)W̃_i^T Γ_{1i}^{−1} W̃_i + M̃_i² / (2Γ_{2i})]. For V̇, by using the adaptations (18) and (19), the W̃_i and M̃_i terms cancel and we have

  V̇ ≤ Σ_{i=1}^N [−λ_min(Q_i)‖e_si‖² − k_{i,n}ϖ_i²] ≤ −Σ_{i=1}^N X_i^T Λ_i X_i   (31)

where X_i = [e_si^T, ϖ_i]^T and Λ_i = diag{λ_min(Q_i)I_i, k_{i,n} − 1/2} with I_i the (n_i − 1) × (n_i − 1) unit matrix. Since k_{i,n} > 1/2, we have V̇ ≤ 0. Now, we need to establish that E ∈ Ω_E for all t ≥ 0: we have λ_{T0m}‖E‖² ≤ V(t) ≤ V(0) ≤ λ_{T1M}‖E_0‖² ≤ λ_{T1M}r² ≤ λ_{T0m}R², which implies ‖E‖ ≤ R. This implies that ‖X_i‖ is bounded, and also that x_i is bounded. Since x_{di} is bounded and x_j is ensured to be bounded, the smooth functions γ_ij(x_j), j = 1, 2, …, N, are bounded. Because of the boundedness of all the signals, from (6) and (22), ė_si and ϖ̇_i are bounded. This implies that

  d‖X_i‖²/dt = d(e_si^T e_si + ϖ_i²)/dt = ė_si^T e_si + e_si^T ė_si + 2ϖ_i ϖ̇_i   (32)

is bounded, so ‖X_i‖² is uniformly continuous. Since V̇ ≤ −Σ_i X_i^T Λ_i X_i and V is bounded below, X_i^T Λ_i X_i is integrable; by Barbalat's lemma, lim_{t→∞} ‖X_i(t)‖ = 0. Hence

  lim_{t→∞} e_{i,1}(t) = lim_{t→∞} e_{i,2}(t) = … = lim_{t→∞} e_i^{(n_i−2)}(t) = 0,   lim_{t→∞} ϖ_i(t) = 0.   (33)

Since |e_{i,n_i}| ≤ |ϖ_i| + k_{i,1}|e_{i,1}| + … + k_{i,n_i−1}|e_i^{(n_i−2)}|, this together with (33) concludes that lim_{t→∞} |e_{i,n_i}| = 0. Thus, lim_{t→∞} e_i(t) = lim_{t→∞} e_{i,1}(t) = 0. Q.E.D.

Remark 4.1: For the system (1), [4] and [13] restrict the controllers to depend on the rate of change of the control gain, ġ_i(x), in the isolated subsystem. In [13], the result of [4] is extended to a higher-order polynomial interconnection, that is, Δ_i(x_1, x_2, …, x_N) ≤ Σ_{j=1}^N (γ_{ij0} + γ_{ij1}|s_j|^{p_{ij1}} + γ_{ij2}|s_j|^{p_{ij2}} + … + γ_{ijp}|s_j|^{p_{ijp}}), where s_j is the filtered error. The interconnection constraint in this letter, as shown in Assumption 3, includes as special cases various other forms considered previously in the literature.
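The approximation property (29), that a linear-in-the-weights NN can represent a continuous function on a compact set up to a small residual ε_i, can be illustrated numerically. The target function, basis width, and node count below are arbitrary stand-ins for u_{di}*(z_i) and Φ_i.

```python
import numpy as np

def phi(z, centers, width=0.5):
    # Gaussian basis functions Phi_i(z), one column per node
    return np.exp(-(z[:, None] - centers[None, :]) ** 2 / width ** 2)

z = np.linspace(-2.0, 2.0, 400)            # samples of a compact set Omega_z
target = np.sin(2 * z) + 0.3 * z ** 2      # stand-in for the unknown u*_di(z)
centers = np.linspace(-2.0, 2.0, 25)       # l_i = 25 nodes

# "Ideal" weights W*_i: least-squares projection onto the basis
W_star, *_ = np.linalg.lstsq(phi(z, centers), target, rcond=None)
eps = target - phi(z, centers) @ W_star    # approximation error epsilon_i
assert np.max(np.abs(eps)) < 1e-2          # epsilon_M is small on Omega_z
```

The adaptive sliding-mode gain in (17)–(19) only has to dominate this residual bound ε_{Mi} (plus the interconnection bound), not the unknown function itself.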
Remark 4.2: The condition λ_{T1M} r² ≤ λ_{T0m} R² in Theorem 4.1 is used to ensure that the error E belongs to Ω_E for any time t ≥ 0. Thus, an upper bound is placed on the adaptation gains Γ_{1i}, Γ_{2i}, i = 1, 2, …, N.

Remark 4.3: The controller (17) contains the discontinuous function sgn(ϖ_i), which raises a theoretical issue regarding the existence and uniqueness of solutions. A continuous saturation function sat(ϖ_i/ε) with constant ε (see [16]) can be used to resolve this issue.

REFERENCES

[1] P. A. Ioannou, "Decentralized adaptive control of interconnected systems," IEEE Trans. Autom. Control, vol. 31, no. 4, pp. 291–298, 1986.
[2] L. C. Fu, "Robust adaptive decentralized control of robot manipulators," IEEE Trans. Autom. Control, vol. 37, pp. 106–110, 1992.
[3] L. Shi and S. K. Singh, "Decentralized adaptive controller design for large-scale systems with higher order interconnections," IEEE Trans. Autom. Control, vol. 37, no. 8, pp. 1106–1118, 1992.
[4] J. T. Spooner and K. M. Passino, "Decentralized adaptive control of nonlinear systems using radial basis neural networks," IEEE Trans. Autom. Control, vol. 44, no. 11, pp. 2050–2057, 1999.
[5] F. C. Chen and C. C. Liu, "Adaptively controlling nonlinear continuous-time systems using multilayer neural networks," IEEE Trans. Autom. Control, vol. 39, no. 6, pp. 1306–1310, 1994.
[6] S. Fabri and V. Kadirkamanathan, "Dynamic structure neural networks for stable adaptive control of nonlinear systems," IEEE Trans. Neural Netw., vol. 7, no. 5, pp. 1151–1167, 1996.
[7] F. L. Lewis, A. Yesildirek, and K. Liu, "Multilayer neural-net robot controller with guaranteed tracking performance," IEEE Trans. Neural Netw., vol. 7, no. 2, pp. 387–398, 1996.
[8] M. M. Polycarpou, "Stable adaptive neural control scheme for nonlinear systems," IEEE Trans. Autom. Control, vol. 41, no. 3, pp. 447–451, 1996.
[9] T. Zhang, S. S. Ge, and C. C. Hang, "Stable adaptive control for a class of nonlinear systems using a modified Lyapunov function," IEEE Trans. Autom. Control, vol. 45, no. 1, pp. 129–132, 2000.
[10] Y. Xia and J. Wang, "A recurrent neural network for solving linear projection equations," Neural Netw., vol. 13, pp. 337–350, 2000.
[11] N. Hovakimyan, F. Nardi, A. Calise, and H. Lee, "Adaptive output feedback control of a class of nonlinear systems using neural networks," Int. J. Contr., vol. 72, pp. 1161–1169, 2001.
[12] J. Cao and J. Wang, "Global asymptotic stability of a general class of recurrent neural networks with time-varying delays," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, pp. 34–44, 2003.
[13] S. N. Huang, K. K. Tan, and T. H. Lee, "A decentralized neural network control for a class of interconnected systems," IEEE Trans. Neural Netw., vol. 13, no. 6, pp. 1554–1557, 2002.
[14] K. Narendra and N. Oleng, "Exact output tracking in decentralized adaptive control systems," IEEE Trans. Autom. Control, vol. 47, pp. 390–395,

Recurrent Neural Network as a Linear Attractor for Pattern Association

Ming-Jung Seow and Vijayan K. Asari

Abstract—We propose a linear attractor network based on the observation that similar patterns form a pipeline in the state space, which can be used for pattern association. To model the pipeline in the state space, we present a learning algorithm using a recurrent neural network. A least-squares estimation approach utilizing the interdependency between neurons defines the dynamics of the network. The region of convergence around the line of attraction is defined based on the statistical characteristics of the input patterns. Performance of the learning algorithm is evaluated by conducting several experiments in benchmark problems, and it is observed that the new technique is suitable for multiple-valued pattern association.

Index Terms—Learning rule, linear attractor, pattern association, recurrent neural network.

I. INTRODUCTION

Recurrent neural networks are feedback architectures which can be used as associative memories, where the stored memories are represented by the dynamics of the network convergence. In most models of associative memory, memories are stored as attracting fixed points at discrete locations in the state space [1]. Fixed-point attractors may not be suitable for patterns which exhibit similar characteristics [2]–[5]. As a consequence, it would be more appropriate to represent the pattern association using a linear attractor network that encapsulates the attractive fixed points scattered in the state space with a line of attraction, where fixed points on the line correspond to similar patterns [5], [6]. Brody et al. model a basic mechanism for graded persistent activity utilizing attractor networks in [2]. Stringer [3] and Seung [4] presented the utilization of continuous attractors for modeling biological processes. In this correspondence, we propose a simple methodology to construct a linear attractor network suitable for multiple-valued pattern association, utilizing a fully connected recurrent neural network connecting all neurons to each other. The dynamics of the linear attractor network converge similar patterns and diverge dissimilar ones. Our experiments with human skin exemplar patterns demonstrate that the proposed learning rule can learn and characterize skin color as a linear attractor.

II. LEARNING MODEL

The relationship of each neuron with respect to every other neuron in an N-neuron network for a stimulus-response pair (x^s, y^s), bias value
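The least-squares interdependency idea described in the abstract, estimating each neuron from every other neuron over patterns that lie along a line ("pipeline") in state space, can be sketched as follows. The data generation, the pairwise linear model x_i ≈ w_{ij} x_j + b_{ij}, and the averaging recall step are illustrative assumptions for this sketch, not the authors' actual learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S = 8, 200                    # N neurons, S stimulus patterns
# Similar patterns form a line (a "pipeline") in the N-dim state space.
a = rng.normal(size=N)                     # a point on the line
d = 1.0 + rng.uniform(0.0, 1.0, size=N)    # line direction, kept away from 0
t = rng.uniform(-1.0, 1.0, size=S)
X = a + np.outer(t, d) + 0.01 * rng.normal(size=(S, N))   # S x N patterns

# Pairwise least-squares interdependency: predict neuron i from neuron j
# through the linear model x_i ~ w[i, j] * x_j + b[i, j].
w = np.zeros((N, N))
b = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        A = np.column_stack([X[:, j], np.ones(S)])
        (w[i, j], b[i, j]), *_ = np.linalg.lstsq(A, X[:, i], rcond=None)

# Recall: each neuron averages the predictions it receives from all
# neurons; patterns on the line are then (approximate) fixed points.
x = X[0]
x_hat = (w * x[None, :] + b).mean(axis=1)
assert np.max(np.abs(x_hat - x)) < 0.1   # near fixed point on the line
```

Because every pattern on the line satisfies all N² pairwise relations simultaneously, the whole line acts as an attractor rather than a set of isolated fixed points, which is the behavior the correspondence exploits for multiple-valued association.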