
APPENDIX A

Fundamental Concepts

A.1 GENERALIZED CONCEPT OF STABILITY: BRIEF REVIEW
The stability of a linear system is entirely independent of the input, and the state of a
stable system with zero input always returns to the origin of the state space, regardless
of the finite initial state. In contrast, the stability of a nonlinear system depends on
the type and magnitude of the input and on the initial state. These factors have to be
taken into account in defining the stability of a nonlinear system. In control system
theory, the stability of a nonlinear system is classified into the following categories,
depending on the region of state space in which the state vector ranges: local stability,
finite stability, and global stability.

A.1.1 Local Stability


Consider a nonlinear autonomous system described by the following state equations:

\dot{x} = f(x, u)    (A.1)

y = g(x, u)    (A.2)

where x is the state vector (n×1), u is the vector (r×1) of inputs to the system, and y is
the vector (m×1) of outputs. This nonlinear system is said to be locally stable about
an equilibrium point if, when subjected to a small perturbation (Δx, Δu), it remains
within a small region surrounding the equilibrium point.
If, as t increases, the system returns to the original state, it is said to be asymptotically
stable in the small, i.e., stable under small disturbances. Local stability conditions can
therefore be studied by linearizing the nonlinear system equations about the desired
equilibrium point.

A.1.2 Finite Stability


If the state of a system after a perturbation remains within a finite region R, it is said to
be stable within R. If, further, the state of the system after the perturbation returns to the
original equilibrium point from any initial point x(t0) within R, it is said to be
asymptotically stable within the finite region R (Figure A.1).


[Figure A.1: Stability in a nonlinear system, showing a region R of radius r around the equilibrium and an initial state x(t0). (a) Local stability or finite stability. (b) Asymptotic stability.]

A.1.3 Global Stability


The system is said to be globally stable, or asymptotically stable in the large, if R
includes the entire finite space and the state of the system after a perturbation from
any initial point, however near or far from the equilibrium point within the finite
space, returns to the original equilibrium point as t → ∞.

A.2 ASPECTS OF LINEARIZATION


A.2.1 Linearization of a Nonlinear Function
Consider a nonlinear function y = f(x), as shown in Figure A.2. Assume that it is
necessary to operate in the vicinity of point a on the curve (the operating point),
whose coordinates are (x_a, y_a).
For small perturbations Δx and Δy about the operating point a, let

\Delta x = x - x_a    (A.3)

[Figure A.2: Linearization of a nonlinear function. The curve y = f(x), the operating point a at (x_a, y_a), and the approximate linear relationship between Δx and Δy.]

\Delta y = y - y_a    (A.4)

If the slope of the curve at the operating point is \left.\frac{dy}{dx}\right|_a, then the approximate linear relationship becomes

\Delta y = \left.\frac{dy}{dx}\right|_a \Delta x    (A.5)

i.e.,

y - y_a = \left.\frac{dy}{dx}\right|_a (x - x_a)    (A.6)
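As a quick numerical illustration of equations (A.3)-(A.6), the sketch below (not from the text; the function f and the operating point x_a = 2 are assumed purely for the example) compares the tangent-line prediction with the exact value for a small perturbation:

```python
# Minimal sketch of the first-order (tangent-line) approximation of equation (A.6).
# The nonlinear function f and the operating point x_a are illustrative assumptions.

def f(x):
    return x ** 2                     # hypothetical nonlinear function y = f(x)

x_a = 2.0                             # assumed operating point
y_a = f(x_a)
slope = (f(x_a + 1e-6) - f(x_a - 1e-6)) / 2e-6   # dy/dx evaluated at x_a

dx = 0.1                              # small perturbation Delta x
y_exact = f(x_a + dx)
y_linear = y_a + slope * dx           # y ~ y_a + (dy/dx)|_a (x - x_a)

print(y_exact, y_linear)              # 4.41 vs approximately 4.40
```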

A.2.2 Linearization of a Dynamic System


The behaviour of a dynamic system, such as a power system, may be described in the
following form:

\dot{x} = f(x, u)    (A.7)

y = g(x, u)    (A.8)

where x is the state vector (n×1), u is the vector (r×1) of inputs to the system, and y is
the vector (m×1) of outputs. Here the procedure for linearization of equations (A.7)
and (A.8) is described. Let x_0 be the initial state vector and u_0 the input vector
corresponding to the equilibrium point about which the small-signal performance is
to be investigated. Since x_0 and u_0 satisfy equation (A.7), we have

\dot{x}_0 = f(x_0, u_0) = 0    (A.9)

Let us perturb the system from the above state by letting

x = x_0 + \Delta x, \qquad u = u_0 + \Delta u
The state must satisfy equation (A.7). Hence,

\dot{x} = \dot{x}_0 + \Delta\dot{x} = f[(x_0 + \Delta x), (u_0 + \Delta u)]    (A.10)

As the perturbations are assumed to be small, the nonlinear functions f(x, u) can be
expressed in terms of a Taylor series expansion. With terms involving second and
higher orders of Δx and Δu neglected, we may write

\dot{x}_i = \dot{x}_{i0} + \Delta\dot{x}_i = f_i[(x_0 + \Delta x), (u_0 + \Delta u)]
          = f_i(x_0, u_0) + \frac{\partial f_i}{\partial x_1}\Delta x_1 + \cdots + \frac{\partial f_i}{\partial x_n}\Delta x_n + \frac{\partial f_i}{\partial u_1}\Delta u_1 + \cdots + \frac{\partial f_i}{\partial u_r}\Delta u_r

Since \dot{x}_{i0} = f_i(x_0, u_0), we obtain

\Delta\dot{x}_i = \frac{\partial f_i}{\partial x_1}\Delta x_1 + \cdots + \frac{\partial f_i}{\partial x_n}\Delta x_n + \frac{\partial f_i}{\partial u_1}\Delta u_1 + \cdots + \frac{\partial f_i}{\partial u_r}\Delta u_r

with i = 1, 2, 3, ..., n. In a like manner, from equation (A.8), we have

\Delta y_j = \frac{\partial g_j}{\partial x_1}\Delta x_1 + \cdots + \frac{\partial g_j}{\partial x_n}\Delta x_n + \frac{\partial g_j}{\partial u_1}\Delta u_1 + \cdots + \frac{\partial g_j}{\partial u_r}\Delta u_r

with j = 1, 2, 3, ..., m. Therefore, the linearized forms of equations (A.7) and
(A.8) are
\Delta\dot{x} = A\,\Delta x + B\,\Delta u    (A.11)

\Delta y = C\,\Delta x + D\,\Delta u    (A.12)


where

A = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \cdots & \dfrac{\partial f_n}{\partial x_n} \end{bmatrix}_{n\times n}
\qquad
B = \begin{bmatrix} \dfrac{\partial f_1}{\partial u_1} & \cdots & \dfrac{\partial f_1}{\partial u_r} \\ \vdots & & \vdots \\ \dfrac{\partial f_n}{\partial u_1} & \cdots & \dfrac{\partial f_n}{\partial u_r} \end{bmatrix}_{n\times r}

C = \begin{bmatrix} \dfrac{\partial g_1}{\partial x_1} & \cdots & \dfrac{\partial g_1}{\partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial g_m}{\partial x_1} & \cdots & \dfrac{\partial g_m}{\partial x_n} \end{bmatrix}_{m\times n}
\qquad
D = \begin{bmatrix} \dfrac{\partial g_1}{\partial u_1} & \cdots & \dfrac{\partial g_1}{\partial u_r} \\ \vdots & & \vdots \\ \dfrac{\partial g_m}{\partial u_1} & \cdots & \dfrac{\partial g_m}{\partial u_r} \end{bmatrix}_{m\times r}
In equations (A.11) and (A.12), A is the state or plant matrix, B is the control or input
matrix, C is the output matrix and D is the feed-forward matrix. These partial deriv-
atives are evaluated at the equilibrium point about which the small perturbation is
being analyzed.
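When closed-form partial derivatives are inconvenient, the Jacobian matrices above can be evaluated numerically. The following minimal sketch (not from the text) builds A and B of equation (A.11) by central finite differences for a hypothetical two-state, one-input system; the function f and the equilibrium (x0, u0) are illustrative assumptions.

```python
import numpy as np

def f(x, u):
    """Hypothetical nonlinear state equations x_dot = f(x, u): two states, one input."""
    x1, x2 = x
    return np.array([x2, -np.sin(x1) - 0.5 * x2 + u[0]])

def jacobians(f, x0, u0, eps=1e-6):
    """Finite-difference A = df/dx and B = df/du evaluated at the equilibrium (x0, u0)."""
    n, r = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, r))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(r):
        du = np.zeros(r)
        du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

x0 = np.array([0.0, 0.0])   # assumed equilibrium, f(x0, u0) = 0
u0 = np.array([0.0])
A, B = jacobians(f, x0, u0)
print(A)                    # expected close to [[0, 1], [-1, -0.5]]
print(B)                    # expected close to [[0], [1]]
```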

A.3 SYSTEM MATRIX AND ITS EIGEN PROPERTIES


A.3.1 Eigenvalues and Eigenvectors
The single machine or multimachine linearized dynamic model of a power system
can be written in simple form as
\Delta\dot{X}(t) = A\,\Delta X(t) + E\,\Delta U(t)    (A.13)

where ΔX is the state vector (r×1), r being the total number of states; A is the system
matrix (r×r); E is the input matrix; and ΔU is the input vector.
The eigenvalues of the matrix A are given by the values of the scalar parameter λ
for which there exist non-trivial solutions (i.e., other than φ = 0) of the equation

A\varphi = \lambda\varphi    (A.14)

To find the eigenvalues, equation (A.14) may be written in the form

(A - \lambda I)\varphi = 0    (A.15)

For a non-trivial solution,

\det(A - \lambda I) = 0    (A.16)

Expansion of this determinant gives the 'characteristic equation'. The r solutions
λ = λ_1, λ_2, ..., λ_r of (A.16) are the eigenvalues of A.
For any eigenvalue λ_p, the r-column vector φ_p that satisfies equation (A.14)
is called the right eigenvector of A associated with the eigenvalue λ_p.
Thus, we get

A\varphi_p = \lambda_p \varphi_p, \qquad p = 1, 2, \ldots, r    (A.17)

The right eigenvector has the form

\varphi_p = \begin{bmatrix} \varphi_{1p} & \varphi_{2p} & \cdots & \varphi_{rp} \end{bmatrix}^T

Similarly, the r-row vector ψ_p that satisfies the equation

\psi_p A = \lambda_p \psi_p, \qquad p = 1, 2, \ldots, r    (A.18)

is called the left eigenvector associated with the eigenvalue λ_p.
The left eigenvector has the form

\psi_p = \begin{bmatrix} \psi_{1p} & \psi_{2p} & \cdots & \psi_{rp} \end{bmatrix}

The left and right eigenvectors corresponding to different eigenvalues are orthogonal, i.e.,

\psi_q \varphi_p = 0    (A.19)

when λ_p ≠ λ_q, whereas

\psi_p \varphi_p = a_p    (A.20)

when λ_p = λ_q, where a_p is a non-zero constant. It is normal practice to normalize these
vectors so that

\psi_p \varphi_p = 1    (A.21)
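A minimal numerical sketch of equations (A.14)-(A.21), assuming NumPy and an arbitrary illustrative matrix A (not from the text): the columns returned by numpy.linalg.eig are right eigenvectors, the rows of the inverse of that modal matrix are left eigenvectors, and the normalization ψ_p φ_p = 1 then holds by construction.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])        # illustrative system matrix

eigvals, Phi = np.linalg.eig(A)     # columns of Phi are right eigenvectors phi_p
Psi = np.linalg.inv(Phi)            # rows of Psi are left eigenvectors psi_p,
                                    # normalized so that psi_p * phi_p = 1

for p, lam in enumerate(eigvals):
    phi_p = Phi[:, p]
    psi_p = Psi[p, :]
    assert np.allclose(A @ phi_p, lam * phi_p)   # A phi_p = lambda_p phi_p   (A.17)
    assert np.allclose(psi_p @ A, lam * psi_p)   # psi_p A = lambda_p psi_p   (A.18)
    assert np.isclose(psi_p @ phi_p, 1.0)        # psi_p phi_p = 1            (A.21)

print(eigvals)                      # eigenvalues of A
```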

A.3.2 Effect of Right and Left Eigenvectors on System States


Referring to the state equation (A.13), for the autonomous system (with zero input) the
system equation is given by

\Delta\dot{X}(t) = A\,\Delta X(t)    (A.22)

In order to avoid cross-coupling between the state variables, consider a new state
vector Z related to the original state vector by the similarity transformation

\Delta X = \Phi Z    (A.23)

where Φ is the modal matrix of A, defined by \Phi = [\varphi_1\ \ \varphi_2\ \cdots\ \varphi_r],
and \Phi^{-1} = \Psi = [\psi_1^T\ \ \psi_2^T\ \cdots\ \psi_r^T]^T.

The transformation Φ^{-1}AΦ converts the matrix A into a diagonal matrix Λ with the
eigenvalues λ_1, λ_2, ..., λ_r as its diagonal elements. Therefore, substitution of
equation (A.23) into (A.22) gives

\Phi\dot{Z} = A\Phi Z    (A.24)

The new state equation can be written as

\dot{Z} = \Phi^{-1} A \Phi Z    (A.25)

which becomes

\dot{Z} = \Lambda Z    (A.26)

where Λ is a diagonal matrix consisting of the eigenvalues of matrix A. Equation (A.26)
represents r uncoupled first-order equations

\dot{Z}_p = \lambda_p Z_p, \qquad p = 1, 2, \ldots, r    (A.27)

and the solution of this equation with respect to time t is given by

Z_p(t) = Z_p(0)\, e^{\lambda_p t}    (A.28)

where Z_p(0) is the initial value of the state Z_p.
The response in terms of the original state vector is given by

\Delta X = \Phi Z = [\varphi_1\ \ \varphi_2\ \cdots\ \varphi_r]
\begin{bmatrix} Z_1(t) \\ Z_2(t) \\ \vdots \\ Z_r(t) \end{bmatrix}    (A.29)

Using equation (A.28) in equation (A.29) results in

\Delta X(t) = \sum_{p=1}^{r} \varphi_p Z_p(0)\, e^{\lambda_p t}    (A.30)

Again, from equation (A.29), we get

Z(t) = \Phi^{-1} \Delta X(t)    (A.31)

Z(t) = \Psi \Delta X(t)    (A.32)

This implies that

Z_p(t) = \psi_p \Delta X(t)    (A.33)

With t = 0, it follows that

Z_p(0) = \psi_p \Delta X(0)    (A.34)

Using C_p to denote the scalar product C_p = ψ_p ΔX(0), which represents the magnitude
of the excitation of the p-th mode resulting from the initial condition, equation (A.30)
may be written as

\Delta X(t) = \sum_{p=1}^{r} \varphi_p C_p\, e^{\lambda_p t}    (A.35)

In other words, the time response of the k-th state variable is given by

\Delta X_k(t) = \varphi_{k1} C_1 e^{\lambda_1 t} + \varphi_{k2} C_2 e^{\lambda_2 t} + \cdots + \varphi_{kr} C_r e^{\lambda_r t}    (A.36)

Equation (A.36) indicates that the right-eigenvector entries φ_{kp} (k = 1, 2, ..., r)
measure the relative activity of the state variables participating in the oscillation
of a certain mode (λ_p). For example, the degree of activity of the state variable X_k
in the p-th mode is given by the element φ_{kp} of the right eigenvector φ_p.
Similarly, the effect of the left eigenvector on the system state variables can be illustrated
as follows. The transformed state vector Z is related to the original state vector by the
equation

\Delta X(t) = \Phi Z(t) = [\varphi_1\ \ \varphi_2\ \cdots\ \varphi_r]\, Z(t)    (A.37)

and, by equation (A.32),

Z(t) = \Psi \Delta X(t) = [\psi_1^T\ \ \psi_2^T\ \cdots\ \psi_r^T]^T \Delta X(t)    (A.38)

Again, from equation (A.27), we get

\dot{Z}_p = \lambda_p Z_p, \qquad p = 1, 2, \ldots, r

The variables ΔX_1, ΔX_2, ..., ΔX_r are the original state variables and represent the
dynamic performance of the system. The variables Z_1, Z_2, ..., Z_r are the transformed
state variables, each of which is associated with only one eigenvalue, i.e., they are
directly related to the electromechanical modes. As seen from equation (A.38), the left
eigenvector ψ_p identifies which combination of the original state variables displays
only the p-th mode. Thus, the k-th element of the right eigenvector φ_p measures the
activity of the variable X_k in the p-th mode, and the k-th element of the left eigenvector
ψ_p weighs the contribution of this activity to the p-th mode.
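The modal expansion (A.30)-(A.35) can be verified numerically. The sketch below is illustrative only (assuming NumPy/SciPy and a made-up 2×2 state matrix): it computes the modal excitations C_p = ψ_p ΔX(0) and reconstructs ΔX(t) from the modal terms, comparing the result with the matrix-exponential solution.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])           # illustrative state matrix
dX0 = np.array([1.0, 0.0])             # initial perturbation Delta X(0)

eigvals, Phi = np.linalg.eig(A)        # right eigenvectors (columns of Phi)
Psi = np.linalg.inv(Phi)               # left eigenvectors (rows of Psi)
C = Psi @ dX0                          # modal excitations C_p = psi_p Delta X(0)   (A.34)

t = 2.0
# Modal reconstruction: Delta X(t) = sum_p phi_p C_p exp(lambda_p t)   (A.35)
dX_modal = sum(Phi[:, p] * C[p] * np.exp(eigvals[p] * t) for p in range(len(eigvals)))
dX_exact = expm(A * t) @ dX0           # reference solution

print(np.real_if_close(dX_modal))
print(dX_exact)                        # the two should agree
```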

A.4 WHAT ARE SEMI-DEFINITE PROGRAMMING (SDP) PROBLEMS?
A wide variety of problems in systems and control theory can be formulated as a semi-
definite programming problem of the form

minimize c^T x subject to F(x) \geq 0

where x ∈ R^m is the variable, F(x) = F_0 + \sum_{i=1}^{m} x_i F_i, and the vector c ∈ R^m
and the matrices F_i = F_i^T ∈ R^{n×n}, i = 0, 1, ..., m, are given. The matrix F(x) is
positive semi-definite, and the constraint F(x) ≥ 0 is called a linear matrix inequality (LMI).
SDP problems are convex programming problems with a linear objective function and LMI
constraints.
Semi-definite programming problems can be recast in the form

\tilde{A} x \leq \tilde{b}
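As an illustrative sketch of this problem form (assuming the CVXPY package, which is not mentioned in the text; the data c, F_0, F_1, F_2 are made up for the example), the SDP "minimize c^T x subject to F(x) ≥ 0" can be written almost verbatim:

```python
import numpy as np
import cvxpy as cp

# Made-up problem data: m = 2 variables, 2x2 symmetric matrices.
c = np.array([1.0, 1.0])
F0 = np.array([[2.0, 0.0], [0.0, 2.0]])
F1 = np.array([[1.0, 0.0], [0.0, -1.0]])
F2 = np.array([[0.0, 1.0], [1.0, 0.0]])

x = cp.Variable(2)
F = F0 + x[0] * F1 + x[1] * F2                    # F(x) = F0 + x1 F1 + x2 F2
prob = cp.Problem(cp.Minimize(c @ x), [F >> 0])   # LMI constraint F(x) >= 0
prob.solve()

print(prob.status, x.value)    # for this data the optimum is near (-1.41, -1.41)
```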

A.4.1 What is a linear matrix inequality?


A linear matrix inequality (LMI) has the form

F(x) = F_0 + \sum_{i=1}^{m} x_i F_i > 0    (A.39)

where x ∈ R^m is the variable: x = (x_1, x_2, ..., x_m) is a vector of unknown scalars (the
decision or optimization variables), and the symmetric matrices F_i = F_i^T ∈ R^{n×n},
i = 0, 1, ..., m, are given. Here '> 0' stands for "positive definite", i.e., the smallest
eigenvalue of F(x) is positive.
Note that the constraints F(x) < 0 and F(x) < G(x) are special cases of (A.39), since
they can be rewritten as −F(x) > 0 and G(x) − F(x) > 0, respectively.
The LMI (A.39) is a convex constraint on x, since F(y) > 0 and F(z) > 0 imply that
F((y + z)/2) > 0. As a result:

- Its solution set, called the feasible set, is a convex subset of R^m.
- Finding a solution x to (A.39), if any exists, is a convex optimization problem.

Convexity has an important consequence: even though (A.39) has no analytical
solution in general, it can be solved numerically with a guarantee of finding a solution
when one exists.
In control systems there are a number of problems which lead to the solution of an
LMI. For example:

- Lyapunov equation: A^T P + P A = -Q

Lyapunov theorem: the linear time-invariant dynamical system described by

\dot{x}(t) = A x(t)

where x ∈ R^n is the state vector and A ∈ R^{n×n}, is stable if and only if, given any
positive definite symmetric matrix Q ∈ R^{n×n}, there exists a unique positive definite
symmetric matrix P satisfying the Lyapunov equation

A^T P + P A = -Q < 0    (A.40)

The Lyapunov equation (A.40) is in the form of an LMI and can be rewritten in the form
of (A.39). Indeed, considering n = 2 and defining
P = \begin{bmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{bmatrix}, \quad
P_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad
P_2 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad
P_3 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}

we can write, with x_1 = x_{11}, x_2 = x_{12} = x_{21}, x_3 = x_{22},

P = x_{11}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + x_{12}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + x_{22}\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = x_1 P_1 + x_2 P_2 + x_3 P_3

Therefore,

A^T P + P A = x_1 (A^T P_1 + P_1 A) + x_2 (A^T P_2 + P_2 A) + x_3 (A^T P_3 + P_3 A)
            = -x_1 F_1 - x_2 F_2 - x_3 F_3 < 0

where

F_0 = 0, \quad F_1 = -(A^T P_1 + P_1 A), \quad F_2 = -(A^T P_2 + P_2 A), \quad F_3 = -(A^T P_3 + P_3 A)

Consequently,

x_1 F_1 + x_2 F_2 + x_3 F_3 > 0    (A.41)

This shows that a Lyapunov equation can be written in the form of an LMI.
The LMI (A.41) with A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} can be written out explicitly:

A^T P_1 + P_1 A = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} 2a_{11} & a_{12} \\ a_{12} & 0 \end{bmatrix}

A^T P_2 + P_2 A = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} 2a_{21} & a_{11} + a_{22} \\ a_{11} + a_{22} & 2a_{12} \end{bmatrix}

and similarly

A^T P_3 + P_3 A = \begin{bmatrix} 0 & a_{21} \\ a_{21} & 2a_{22} \end{bmatrix}

Therefore, -x_1 F_1 - x_2 F_2 - x_3 F_3 < 0 gives

\begin{bmatrix} a_{11} & a_{21} & 0 \\ a_{12} & a_{11} + a_{22} & a_{21} \\ 0 & a_{12} & a_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} < \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}

which is in the form of the semi-definite programming problem

\tilde{A} x < \tilde{b}
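The Lyapunov route and its LMI form can be checked numerically. The sketch below is illustrative only (it assumes SciPy and a made-up stable matrix A): it solves A^T P + P A = −Q with scipy.linalg.solve_continuous_lyapunov and verifies that x = (p11, p12, p22) makes x1 F1 + x2 F2 + x3 F3 positive definite, with F_i = −(A^T P_i + P_i A) as defined above.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                  # illustrative stable matrix (eigenvalues -1, -2)
Q = np.eye(2)

# solve_continuous_lyapunov(M, q) solves M X + X M^T = q, so M = A^T gives A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
assert np.allclose(A.T @ P + P @ A, -Q)
assert np.all(np.linalg.eigvalsh(P) > 0)      # P is positive definite, so A is stable

# LMI form (A.41): F_i = -(A^T P_i + P_i A), x = (p11, p12, p22)
P1 = np.array([[1.0, 0.0], [0.0, 0.0]])
P2 = np.array([[0.0, 1.0], [1.0, 0.0]])
P3 = np.array([[0.0, 0.0], [0.0, 1.0]])
F = [-(A.T @ Pi + Pi @ A) for Pi in (P1, P2, P3)]
x = np.array([P[0, 0], P[0, 1], P[1, 1]])
Fx = x[0] * F[0] + x[1] * F[1] + x[2] * F[2]
assert np.all(np.linalg.eigvalsh(Fx) > 0)     # x1 F1 + x2 F2 + x3 F3 > 0 holds
print(P)
```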
A.4.2 Interior-Point method
For the LMI
X
m
Fð x Þ ¼ F0 þ x i Fi > 0 (A.42)
i¼1

where x ∈ R^m is the variable and the symmetric matrices F_i = F_i^T ∈ R^{n×n},
i = 0, 1, ..., m, are given, the function

\varphi(x) =
\begin{cases}
\log\det F^{-1}(x), & F(x) > 0, \\
+\infty, & \text{otherwise},
\end{cases}

is finite if and only if F(x) > 0 and becomes infinite as x approaches the boundary of
the feasible set {x | F(x) > 0}. It can be shown that φ is strictly convex on the feasible
set, so it has a unique minimizer, denoted by x*:

x^* = \arg\min_{x} \varphi(x) = \arg\max_{F(x) > 0} \det F(x)

We define x* as the analytic center of the LMI F(x) > 0; F(x*) has the maximum
determinant among all positive definite matrices of the form F(x).
Newton's method, with appropriate step-length selection, can be used to compute x*
efficiently, starting from a feasible initial point. The iteration for computing x* is

x^{(k+1)} := x^{(k)} - \alpha^{(k)} H\!\left(x^{(k)}\right)^{-1} g\!\left(x^{(k)}\right)    (A.43)

where α^{(k)} is the damping factor of the k-th iteration, and g(x) and H(x) denote the
gradient and Hessian matrix of φ(x), respectively, evaluated at x^{(k)}.
The damping factor is

\alpha^{(k)} =
\begin{cases}
1, & \delta\!\left(x^{(k)}\right) \leq 1/4, \\
1 / \left(1 + \delta\!\left(x^{(k)}\right)\right), & \text{otherwise},
\end{cases}

where

\delta(x) \triangleq \sqrt{g(x)^T H(x)^{-1} g(x)}

is called the Newton decrement of φ at x.
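A compact sketch of the damped Newton iteration (A.43), assuming NumPy; the gradient and Hessian of φ(x) = log det F^{-1}(x) are formed from the standard identities g_i = −tr(F^{-1} F_i) and H_ij = tr(F^{-1} F_i F^{-1} F_j). The helper name analytic_center is hypothetical, and the final lines apply it to Example 1 below.

```python
import numpy as np

def analytic_center(F0, Fs, x0, tol=1e-10, max_iter=50):
    """Damped Newton iteration (A.43) for the analytic center of
    F(x) = F0 + sum_i x_i Fs[i] > 0, starting from a feasible x0."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        F = F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))
        Finv = np.linalg.inv(F)
        g = np.array([-np.trace(Finv @ Fi) for Fi in Fs])            # gradient of phi
        H = np.array([[np.trace(Finv @ Fi @ Finv @ Fj) for Fj in Fs]
                      for Fi in Fs])                                  # Hessian of phi
        step = np.linalg.solve(H, g)
        delta = np.sqrt(g @ step)                  # Newton decrement delta(x)
        alpha = 1.0 if delta <= 0.25 else 1.0 / (1.0 + delta)
        x = x - alpha * step
        if delta < tol:
            break
    return x

# Applied to Example 1 below: F(x) = [[1, x], [x, 1]] > 0, analytic center at x = 0
F0 = np.eye(2)
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(analytic_center(F0, [F1], x0=[0.5]))         # approximately [0.]
```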

EXAMPLE 1
Find the analytic center of

F(x) = \begin{bmatrix} 1 & x \\ x & 1 \end{bmatrix} > 0

We have

\varphi(x) = \log\det F^{-1}(x) = -\log\left(1 - x^2\right)

\frac{d\varphi}{dx} = \frac{2x}{1 - x^2}, \qquad \frac{d^2\varphi}{dx^2} = \frac{2\left(1 + x^2\right)}{\left(1 - x^2\right)^2}

The feasible set on which φ(x) is defined is (−1, 1), and the minimum, which occurs at
x* = 0, is φ(x*) = 0.

EXAMPLE 2
Find the analytic center of

F(x) = \begin{bmatrix} 1 - x_1 & x_2 \\ x_2 & 1 + x_1 \end{bmatrix} > 0

We have

\varphi(x) = \log\det F^{-1}(x) = -\log\left(1 - x_1^2 - x_2^2\right)

\frac{\partial\varphi}{\partial x_1} = \frac{2x_1}{1 - x_1^2 - x_2^2}, \qquad
\frac{\partial^2\varphi}{\partial x_1^2} = \frac{2\left(1 + x_1^2 - x_2^2\right)}{\left(1 - x_1^2 - x_2^2\right)^2},

\frac{\partial\varphi}{\partial x_2} = \frac{2x_2}{1 - x_1^2 - x_2^2}, \qquad
\frac{\partial^2\varphi}{\partial x_2^2} = \frac{2\left(1 - x_1^2 + x_2^2\right)}{\left(1 - x_1^2 - x_2^2\right)^2},

\frac{\partial^2\varphi}{\partial x_1 \partial x_2} = \frac{4 x_1 x_2}{\left(1 - x_1^2 - x_2^2\right)^2}

\begin{vmatrix} \varphi_{x_1 x_1} & \varphi_{x_1 x_2} \\ \varphi_{x_2 x_1} & \varphi_{x_2 x_2} \end{vmatrix}
= \frac{4\left(1 + x_1^2 - x_2^2\right)\left(1 - x_1^2 + x_2^2\right) - 16 x_1^2 x_2^2}{\left(1 - x_1^2 - x_2^2\right)^4}
= \frac{4\left(1 + x_1^2 + x_2^2\right)}{\left(1 - x_1^2 - x_2^2\right)^3} > 0

The feasible set on which φ(x) is defined is x_1^2 + x_2^2 < 1, and the minimizer is
x^* = [0\ \ 0]^T, which is the analytic center of the LMI F(x) > 0.
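A short numerical check of Example 2 (illustrative only, assuming NumPy): the stated first derivatives are compared against finite differences of φ(x) = −log(1 − x1² − x2²) at an interior point, and the gradient vanishes at the origin, confirming x* = [0 0]^T.

```python
import numpy as np

def phi(x1, x2):
    return -np.log(1.0 - x1**2 - x2**2)        # log det F^{-1}(x) for Example 2

def grad(x1, x2):
    d = 1.0 - x1**2 - x2**2
    return np.array([2.0 * x1 / d, 2.0 * x2 / d])   # closed-form derivatives above

x1, x2, h = 0.3, -0.2, 1e-6
numeric = np.array([(phi(x1 + h, x2) - phi(x1 - h, x2)) / (2 * h),
                    (phi(x1, x2 + h) - phi(x1, x2 - h)) / (2 * h)])
assert np.allclose(numeric, grad(x1, x2), atol=1e-6)   # formulas agree

print(grad(0.0, 0.0))    # [0. 0.] -> the analytic center is at the origin
```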
