A New Look at Generalized Leverrier-Faddeev Algorithm

BEKHITI Belkacem¹, BEKHITI Abdullah² & NAIL Bachir³
¹ Institute of Aeronautics and Space Studies, Blida, Algeria.
² Ziane Achour University of Djelfa, Algeria.
³ Faculty of Science and Technology, Tissemsilt, Algeria.
E-mail: Belkacem1988@hotmail.co.uk

Abstract: This paper considers an algorithm for finding the inverse of matrix polynomials over the complex field. The algorithm is based on the theory of lambda-matrices and extends the existing general method (the Leverrier-Faddeev algorithm) for computing the inverse of the resolvent of a matrix. We develop a new route to a powerful and efficient recursive algorithm for inverting matrix polynomials. To the best of our knowledge, no similar work has been reported.

Keywords: Lambda-matrices. Resolvents. Generalized Leverrier-Faddeev Algorithm. Matrix polynomials

Introduction: The study of a constant regular matrix 𝑨 ∈ ℂ^{n×n} has attracted the attention of many mathematical researchers, and various methods are used for computing the usual inverse. One such algorithm is the Leverrier-Faddeev scheme (also called the Souriau-Frame algorithm); many alternatives to the Leverrier-Faddeev algorithm are presented in J.S. Frame et al. 1949, D.K. Faddeev et al. 1963, S. Barnett et al. 1989 and G. Wang et al. 1993. A more general algorithm for computing the generalized (Moore-Penrose) inverse of a given rectangular or singular constant matrix 𝑨 ∈ ℂ^{m×n}, based on the Leverrier-Faddeev algorithm, originated in H.P. Decell et al. 1965. Also, in S. Barnett et al. 1989, a new derivation of the Leverrier-Faddeev algorithm is used to produce a computational scheme for the inverse of a second-degree matrix polynomial 𝑨(𝜆) = 𝜆²𝑰_n + 𝜆𝑨_1 + 𝑨_2. Additionally, an extension of Leverrier's algorithm for computing the inverse of an arbitrary-degree matrix polynomial is introduced in G. Wang et al. 1993, but it is complicated and not well suited to implementation. In N.P. Karampetakis et al. 1997 a representation and an algorithm are derived for the computation of the Moore-Penrose inverse of a non-regular polynomial matrix of arbitrary degree. In J. Jones et al. 1998 an algorithm is described for computing the Moore-Penrose inverse of a singular rational matrix, together with its implementation in the symbolic computation language MAPLE. In this paper we introduce a new approach to, and a new proof of, an efficient and powerful scheme for the computation of the inverse of an arbitrary-degree matrix polynomial.

Roots of a Matrix Polynomial in the Complex Plane: In complex analysis it is well known that if a complex function f(𝜆) is analytic in a region 𝒟 and does not vanish identically, then the function f′(𝜆)/f(𝜆) is called the logarithmic derivative of f(𝜆). The isolated singularities of the logarithmic derivative occur at the isolated singularities of f(𝜆) and, in addition, at the zeros of f(𝜆). The principle of the argument results from an application of the residue theorem to the logarithmic derivative. The contour integral of this logarithmic derivative f′(𝜆)/f(𝜆) equals the difference between the number of zeros and the number of poles of a complex rational function f(𝜆); this is known as Cauchy's argument principle (see Peter Henrici 1974). Specifically, if a complex rational function f(𝜆) is meromorphic inside and on some closed contour 𝒞, and has no zeros or poles on 𝒞, then
\[
Z - P = \frac{1}{2\pi i}\oint_{\mathcal{C}} \frac{f'(\lambda)}{f(\lambda)}\, d\lambda \tag{1}
\]

where Z and P denote, respectively, the number of zeros and the number of poles of f(𝜆) inside the contour 𝒞, each counted with its multiplicity. The argument principle requires that the contour 𝒞 be simple (without self-intersections) and traversed counter-clockwise.

If the complex function f(𝜆) is analytic rather than rational (so that it has no poles inside 𝒞), then the number of zeros inside the contour 𝒞 is given by
\[
Z = \frac{1}{2\pi i}\oint_{\mathcal{C}} \frac{f'(\lambda)}{f(\lambda)}\, d\lambda \tag{2}
\]

Now, by means of matrix theory, we can extend this result to the matrix polynomial case.

Theorem: The number of latent roots of the regular matrix polynomial 𝑨(𝜆) in the domain 𝒟 enclosed by a contour 𝒞 is given by

\[
Z = \frac{1}{2\pi i}\oint_{\mathcal{C}} \operatorname{trace}\!\big(\boldsymbol{A}^{-1}(\lambda)\,\boldsymbol{A}'(\lambda)\big)\, d\lambda,
\qquad \boldsymbol{A}'(\lambda) = \frac{d}{d\lambda}\boldsymbol{A}(\lambda) \tag{3}
\]

Proof: Put ∆(𝜆) = det(𝑨(𝜆)) and let c_{ij} be the cofactor of the (i, j) element of 𝑨(𝜆) in ∆(𝜆), so that
\[
\boldsymbol{c}_i^{T} = \boldsymbol{e}_i^{T}\operatorname{Adj}(\boldsymbol{A}(\lambda)) = [\,c_{i1}\;\; c_{i2}\;\; \cdots\;\; c_{im}\,], \qquad i = 1,2,\dots,m
\]
\[
\operatorname{Adj}(\boldsymbol{A}(\lambda))\,\boldsymbol{A}(\lambda) = \det(\boldsymbol{A}(\lambda))\,\boldsymbol{I}
\;\Longleftrightarrow\;
\boldsymbol{e}_i^{T}\operatorname{Adj}(\boldsymbol{A}(\lambda))\,\boldsymbol{A}(\lambda) = \boldsymbol{e}_i^{T}\det(\boldsymbol{A}(\lambda))
\;\Longleftrightarrow\;
\boldsymbol{c}_i^{T}\boldsymbol{A}(\lambda) = \Delta(\lambda)\,\boldsymbol{e}_i^{T} \tag{4}
\]

where 𝒆_i has a one as its i-th element and zeros elsewhere. We also have
\[
\Delta'(\lambda) = \frac{d}{d\lambda}\Delta(\lambda) = \sum_{i=1}^{m}\Delta_i(\lambda) \tag{5}
\]
where ∆_i(𝜆) is the determinant of the matrix whose i-th column is 𝑨′_{⋆i}(𝜆) and whose remaining columns are those of 𝑨(𝜆). Expanding ∆_i(𝜆) along its i-th column, we have
\[
\Delta_i(\lambda) = \boldsymbol{c}_i^{T}\,\boldsymbol{A}'_{\star i}(\lambda) \tag{6}
\]



Now 𝑨(𝜆)𝑨^{-1}(𝜆) = 𝑰 implies that, provided ∆(𝜆) ≠ 0,
\[
\boldsymbol{A}'(\lambda) = -\boldsymbol{A}(\lambda)\,\big(\boldsymbol{A}^{-1}(\lambda)\big)'\,\boldsymbol{A}(\lambda),
\]
and hence 𝑨′_{⋆i}(𝜆) = −𝑨(𝜆){(𝑨^{-1}(𝜆))′ 𝑨(𝜆)}_{⋆i}. This leads to
\[
\Delta_i(\lambda)
= -\boldsymbol{c}_i^{T}\boldsymbol{A}(\lambda)\big\{(\boldsymbol{A}^{-1}(\lambda))'\boldsymbol{A}(\lambda)\big\}_{\star i}
= -\Delta(\lambda)\,\boldsymbol{e}_i^{T}\big\{(\boldsymbol{A}^{-1}(\lambda))'\boldsymbol{A}(\lambda)\big\}_{\star i} \tag{7}
\]
We then find that
\[
\Delta'(\lambda) = \sum_{i=1}^{m}\Delta_i(\lambda)
= -\Delta(\lambda)\sum_{i=1}^{m}\boldsymbol{e}_i^{T}\big\{(\boldsymbol{A}^{-1}(\lambda))'\boldsymbol{A}(\lambda)\big\}_{\star i}
= -\Delta(\lambda)\operatorname{trace}\!\big((\boldsymbol{A}^{-1}(\lambda))'\boldsymbol{A}(\lambda)\big) \tag{8}
\]


and from the matrix derivative property 𝑨^{-1}(𝜆)𝑨′(𝜆) = −(𝑨^{-1}(𝜆))′𝑨(𝜆) we have
\[
\frac{\Delta'(\lambda)}{\Delta(\lambda)} = \operatorname{trace}\!\big(\boldsymbol{A}^{-1}(\lambda)\boldsymbol{A}'(\lambda)\big)
\;\Longleftrightarrow\;
\frac{d}{d\lambda}\det(\boldsymbol{A}(\lambda)) = \operatorname{trace}\!\Big(\operatorname{Adj}(\boldsymbol{A}(\lambda))\,\frac{d\boldsymbol{A}(\lambda)}{d\lambda}\Big) \tag{9}
\]

Finally, since ∆(𝜆) = det(𝑨(𝜆)) is analytic in any domain of the complex plane, the number of its roots inside a closed contour 𝒞 is
\[
Z = \frac{1}{2\pi i}\oint_{\mathcal{C}} \frac{\Delta'(\lambda)}{\Delta(\lambda)}\,d\lambda
= \frac{1}{2\pi i}\oint_{\mathcal{C}} \operatorname{trace}\!\big(\boldsymbol{A}^{-1}(\lambda)\boldsymbol{A}'(\lambda)\big)\,d\lambda. \qquad\blacksquare
\]
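As an aside (not part of the paper), the contour integral in (3) can be evaluated numerically to count latent roots inside a given circle. The following NumPy sketch uses a simple trapezoidal rule on the circle; the function name and the small quadratic test polynomial are chosen here purely for illustration.

import numpy as np

def latent_root_count(A_coeffs, center=0.0, radius=1.0, n_points=2000):
    # Approximate (1/(2*pi*i)) * contour integral of trace(A^{-1}(lambda) A'(lambda)) d lambda
    # over the circle |lambda - center| = radius, for A(lambda) = sum_i A_i lambda^(l-i).
    l = len(A_coeffs) - 1
    theta = np.linspace(0.0, 2*np.pi, n_points, endpoint=False)
    lam = center + radius*np.exp(1j*theta)          # points on the contour
    dlam = 1j*radius*np.exp(1j*theta)               # d lambda / d theta
    total = 0.0 + 0.0j
    for z, dz in zip(lam, dlam):
        A = sum(Ai * z**(l - i) for i, Ai in enumerate(A_coeffs))
        dA = sum((l - i) * Ai * z**(l - i - 1) for i, Ai in enumerate(A_coeffs[:-1]))
        total += np.trace(np.linalg.solve(A, dA)) * dz
    return (total * (2*np.pi/n_points) / (2j*np.pi)).real

# Quadratic test polynomial A(lambda) = lambda^2 I + lambda A1 + A2 (illustrative data):
A1 = np.array([[0.0, 1.0], [-2.0, 3.0]])
A2 = np.array([[1.0, 0.0], [0.0, 4.0]])
print(latent_root_count([np.eye(2), A1, A2], radius=10.0))   # expect a value close to 4

For this data all n = mℓ = 4 latent roots lie inside the circle of radius 10, so the printed count should be close to 4.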

Problem Statement & Proposition: Given an ℓ-th degree regular matrix polynomial 𝑨(𝜆) = 𝜆^ℓ 𝑰_m + ∑_{i=1}^{ℓ} 𝑨_i 𝜆^{ℓ−i} with 𝑨_i ∈ ℝ^{m×m}, we seek another matrix polynomial 𝑵(𝜆) = ∑_{i=ℓ}^{n} 𝑵_i 𝜆^{n−i} and a scalar polynomial ∆(𝜆) = 𝜆^n + ∑_{i=1}^{n} α_i 𝜆^{n−i} such that
\[
\boldsymbol{A}^{-1}(\lambda) = \frac{\boldsymbol{N}(\lambda)}{\Delta(\lambda)}
\;\Longleftrightarrow\;
\boldsymbol{A}(\lambda)\boldsymbol{N}(\lambda) = \Delta(\lambda)\boldsymbol{I}_m \tag{10}
\]
where 𝑵_i ∈ ℝ^{m×m}, α_i ∈ ℝ and n = mℓ. Expanding the last equation and equating coefficients, we obtain
\[
\boldsymbol{N}_k =
\begin{cases}
\boldsymbol{0}, & k = 0,1,\dots,\ell-1\\[2pt]
\alpha_{k-\ell}\boldsymbol{I}_m - \sum_{i=1}^{\ell}\boldsymbol{A}_i\boldsymbol{N}_{k-i}, & k = \ell,\ell+1,\dots,n\\[2pt]
\boldsymbol{0}, & k = n+1,\dots,n+\ell
\end{cases} \tag{11}
\]
If the coefficients α_1, …, α_n of the characteristic polynomial ∆(𝜆) were known, the last equation would already constitute an algorithm for computing the matrices 𝑵_i. In this paper we propose a recursive algorithm that computes 𝑵_i and α_i in parallel, even though the coefficients α_i are not known in advance.
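For orientation (this special case is not spelled out in the paper), taking ℓ = 1 in (10)-(11), so that n = mℓ = m, recovers the classical setting:
\[
\boldsymbol{A}(\lambda) = \lambda\boldsymbol{I}_m + \boldsymbol{A}_1,\qquad
\boldsymbol{N}_1 = \boldsymbol{I}_m,\qquad
\boldsymbol{N}_k = \alpha_{k-1}\boldsymbol{I}_m - \boldsymbol{A}_1\boldsymbol{N}_{k-1},\quad k = 2,\dots,m,
\]
which is exactly the adjugate recursion of the classical Leverrier-Faddeev (Souriau-Frame) algorithm for the resolvent (𝜆𝑰_m − 𝑨)^{-1} with 𝑨 = −𝑨_1.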

According to D. F. Davidenko 1960 and Peter Lancaster 1964 (Jacobi's formula), we may write
\[
\frac{\Delta'(\lambda)}{\Delta(\lambda)} = \operatorname{trace}\!\big(\boldsymbol{A}^{-1}(\lambda)\boldsymbol{A}'(\lambda)\big)
= \operatorname{trace}\!\big(\boldsymbol{A}'(\lambda)\boldsymbol{A}^{-1}(\lambda)\big)
\;\Longleftrightarrow\;
\lambda\frac{\Delta'(\lambda)}{\Delta(\lambda)} = \operatorname{trace}\!\big(\lambda\boldsymbol{A}^{-1}(\lambda)\boldsymbol{A}'(\lambda)\big) \tag{12}
\]
It is also evident that ℓ × trace(𝑰_m) = n = trace(ℓ 𝑨^{-1}(𝜆)𝑨(𝜆)). Combining the two equations, and using 𝑨^{-1}(𝜆) = 𝑵(𝜆)/∆(𝜆), we get
\[
\lambda\Delta'(\lambda) - n\Delta(\lambda)
= \operatorname{trace}\!\big(\boldsymbol{N}(\lambda)\{\lambda\boldsymbol{A}'(\lambda) - \ell\boldsymbol{A}(\lambda)\}\big)
= \operatorname{trace}\!\big(\boldsymbol{N}(\lambda)\boldsymbol{B}(\lambda)\big)
= \operatorname{trace}\!\big(\boldsymbol{B}(\lambda)\boldsymbol{N}(\lambda)\big) \tag{13}
\]
where 𝑩(𝜆) = 𝜆𝑨′(𝜆) − ℓ𝑨(𝜆) = −∑_{i=0}^{ℓ} i 𝑨_i 𝜆^{ℓ−i} and 𝜆∆′(𝜆) − n∆(𝜆) = −∑_{i=0}^{n} i α_i 𝜆^{n−i}. Expanding the equation and equating identical powers of 𝜆, we obtain
\[
\alpha_{k-\ell} = \frac{1}{k-\ell}\operatorname{trace}\!\Big(\sum_{i=1}^{\ell} i\,\boldsymbol{A}_i\boldsymbol{N}_{k-i}\Big),
\qquad k = \ell+1,\dots,n+\ell, \qquad \alpha_0 = 1 \tag{14}
\]
The Generalized Leverrier-Faddeev Algorithm: In this section a new, efficient algorithm for computing the inverse of a regular matrix polynomial is developed. The results have applications in linear control systems theory, since they are useful in various analysis and synthesis problems for state-space systems. The above developments are summarized in the following algorithm.

Algorithm 1 (1st Generalized Leverrier-Faddeev Algorithm)

Initialization: Give the matrix coefficients 𝑨_0 = 𝑰, 𝑨_1, …, 𝑨_ℓ, 𝑵_1 = 𝑰, and α_0 = 1 (with the convention 𝑵_j = 𝟎 for j ≤ 0).

Result: 𝑵(𝜆) = ∑_{i=0}^{n−ℓ} 𝑵_{i+1} 𝜆^{n−ℓ−i} and ∆(𝜆) = 𝜆^n + ∑_{i=1}^{n} α_i 𝜆^{n−i}

Begin:
for k = 1, 2, …, n do
    α_k = trace(∑_{i=1}^{ℓ} i 𝑨_i 𝑵_{k−i+1}) / k
    if 1 ≤ k ≤ n − ℓ then
        𝑵_{k+1} = α_k 𝑰_m − ∑_{i=1}^{ℓ} 𝑨_i 𝑵_{k−i+1}
    else
        𝑵_{k+1} = 𝟎
    end
end
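The recursion above translates directly into code. The following is a minimal NumPy sketch of Algorithm 1 (the function name and the zero-padding convention for indices below 1 are ours, not the paper's):

import numpy as np

def gen_leverrier_faddeev(A_list):
    # Algorithm 1 sketch: A_list = [A1, ..., Al] for the monic matrix polynomial
    # A(lambda) = lambda^l I + A1 lambda^(l-1) + ... + Al.
    # Returns (N, alpha): N[0..n-l] are N_1..N_{n-l+1} of the paper and
    # alpha[0..n] are the coefficients of Delta(lambda), with alpha[0] = 1 and n = m*l.
    l = len(A_list)
    m = A_list[0].shape[0]
    n = m * l
    N = [np.zeros((m, m)) for _ in range(n + 1)]   # N[j] holds the paper's N_{j+1}
    N[0] = np.eye(m)                               # N_1 = I
    alpha = np.zeros(n + 1)
    alpha[0] = 1.0                                 # alpha_0 = 1
    for k in range(1, n + 1):
        acc = sum(i * A_list[i - 1] @ N[k - i]     # sum_i i * A_i * N_{k-i+1}
                  for i in range(1, l + 1) if k - i >= 0)
        alpha[k] = np.trace(acc) / k
        if k <= n - l:
            S = sum(A_list[i - 1] @ N[k - i]       # sum_i A_i * N_{k-i+1}
                    for i in range(1, l + 1) if k - i >= 0)
            N[k] = alpha[k] * np.eye(m) - S        # N_{k+1} = alpha_k I - sum
        # for k > n - l the corresponding N stays zero
    return N[:n - l + 1], alpha

In this straightforward form each of the n = mℓ steps costs at most 2ℓ products of m×m matrices, i.e. roughly on the order of ℓ²m⁴ scalar operations.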

Example 1: Given a monic matrix polynomial 𝑨(𝜆) = 𝑰_3𝜆³ + 𝑨_1𝜆² + 𝑨_2𝜆 + 𝑨_3 with 𝑨_i ∈ ℝ^{3×3}, by using the above algorithm we can get
\[
\big(\boldsymbol{A}(\lambda)\big)^{-1} = \frac{\boldsymbol{N}(\lambda)}{\Delta(\lambda)}
= \frac{\boldsymbol{N}_1\lambda^{6}+\boldsymbol{N}_2\lambda^{5}+\boldsymbol{N}_3\lambda^{4}+\boldsymbol{N}_4\lambda^{3}+\boldsymbol{N}_5\lambda^{2}+\boldsymbol{N}_6\lambda+\boldsymbol{N}_7}
{\alpha_0\lambda^{9}+\alpha_1\lambda^{8}+\alpha_2\lambda^{7}+\alpha_3\lambda^{6}+\alpha_4\lambda^{5}+\alpha_5\lambda^{4}+\alpha_6\lambda^{3}+\alpha_7\lambda^{2}+\alpha_8\lambda+\alpha_9}
\]
where ℓ = m = 3 and n = 9.

\[
\begin{aligned}
\boldsymbol{N}_1 &= \boldsymbol{I} & \alpha_1 &= \operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_1)\\
\boldsymbol{N}_2 &= \alpha_1\boldsymbol{I} - \boldsymbol{A}_1\boldsymbol{N}_1 & \alpha_2 &= \tfrac{1}{2}\operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_2 + 2\boldsymbol{A}_2\boldsymbol{N}_1)\\
\boldsymbol{N}_3 &= \alpha_2\boldsymbol{I} - (\boldsymbol{A}_1\boldsymbol{N}_2 + \boldsymbol{A}_2\boldsymbol{N}_1) & \alpha_3 &= \tfrac{1}{3}\operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_3 + 2\boldsymbol{A}_2\boldsymbol{N}_2 + 3\boldsymbol{A}_3\boldsymbol{N}_1)\\
\boldsymbol{N}_4 &= \alpha_3\boldsymbol{I} - (\boldsymbol{A}_1\boldsymbol{N}_3 + \boldsymbol{A}_2\boldsymbol{N}_2 + \boldsymbol{A}_3\boldsymbol{N}_1) & \alpha_4 &= \tfrac{1}{4}\operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_4 + 2\boldsymbol{A}_2\boldsymbol{N}_3 + 3\boldsymbol{A}_3\boldsymbol{N}_2)\\
\boldsymbol{N}_5 &= \alpha_4\boldsymbol{I} - (\boldsymbol{A}_1\boldsymbol{N}_4 + \boldsymbol{A}_2\boldsymbol{N}_3 + \boldsymbol{A}_3\boldsymbol{N}_2) & \alpha_5 &= \tfrac{1}{5}\operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_5 + 2\boldsymbol{A}_2\boldsymbol{N}_4 + 3\boldsymbol{A}_3\boldsymbol{N}_3)\\
\boldsymbol{N}_6 &= \alpha_5\boldsymbol{I} - (\boldsymbol{A}_1\boldsymbol{N}_5 + \boldsymbol{A}_2\boldsymbol{N}_4 + \boldsymbol{A}_3\boldsymbol{N}_3) & \alpha_6 &= \tfrac{1}{6}\operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_6 + 2\boldsymbol{A}_2\boldsymbol{N}_5 + 3\boldsymbol{A}_3\boldsymbol{N}_4)\\
\boldsymbol{N}_7 &= \alpha_6\boldsymbol{I} - (\boldsymbol{A}_1\boldsymbol{N}_6 + \boldsymbol{A}_2\boldsymbol{N}_5 + \boldsymbol{A}_3\boldsymbol{N}_4) & \alpha_7 &= \tfrac{1}{7}\operatorname{trace}(\boldsymbol{A}_1\boldsymbol{N}_7 + 2\boldsymbol{A}_2\boldsymbol{N}_6 + 3\boldsymbol{A}_3\boldsymbol{N}_5)\\
& & \alpha_8 &= \tfrac{1}{8}\operatorname{trace}(2\boldsymbol{A}_2\boldsymbol{N}_7 + 3\boldsymbol{A}_3\boldsymbol{N}_6)\\
& & \alpha_9 &= \tfrac{1}{9}\operatorname{trace}(3\boldsymbol{A}_3\boldsymbol{N}_7)
\end{aligned}
\]

Numerical applications:

𝑨1 =[10.3834 7.9702 -7.3731; 𝑨2 =[31.1427 25.0780 -28.6260;


0.3884 14.2775 -4.0121; 0.5948 43.9776 -14.7398;
1.5983 7.0882 2.3391]; 7.6854 23.1468 -1.6814];

𝑨3 =[34.8866 -1.9819 -25.9976;


3.2351 26.7417 -12.1195;
14.4444 0.2493 -3.6046];
The result will be

𝑵2 =[16.6166 -7.9702 7.3731; 𝑵3 =[104.1316 -95.9827 101.9174;


-0.3884 12.7225 4.0121; -7.9160 65.5337 53.5358;
-1.5983 -7.0882 24.6609]; -27.7524 -84.0073 220.2736];
𝑵4 =[299.346 -416.846 540.859; 𝑵5 =[365.23 -773.18 1368.96;
-58.367 189.088 274.614; -195.77 359.87 673.87;
-181.258 -360.000 948.424]; -550.16 -666.03 2105.41];
𝑵6 =[80.717 -521.835 1634.103; 𝑵7 =[-93.372 -13.625 719.241;
-298.468 442.373 783.585; -163.397 249.768 338.704;
-765.723 -468.273 2287.089]; -385.461 -37.324 939.339];

a0=1; a1=27; a2=316.5; a3=2110.5; a4=8805; a5=23786; a6=41496; a7=44951; a8=27343; a9=7087.5;
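As a cross-check (this snippet is ours and relies on the hypothetical gen_leverrier_faddeev sketch given after Algorithm 1), the data of this example can be pushed back through the recursion. The matrices printed above are rounded to four digits, but the identity 𝑨(𝜆)𝑵(𝜆) = ∆(𝜆)𝑰 holds to machine precision for the computed quantities:

import numpy as np

A1 = np.array([[10.3834,  7.9702, -7.3731],
               [ 0.3884, 14.2775, -4.0121],
               [ 1.5983,  7.0882,  2.3391]])
A2 = np.array([[31.1427, 25.0780, -28.6260],
               [ 0.5948, 43.9776, -14.7398],
               [ 7.6854, 23.1468,  -1.6814]])
A3 = np.array([[34.8866, -1.9819, -25.9976],
               [ 3.2351, 26.7417, -12.1195],
               [14.4444,  0.2493,  -3.6046]])

N, alpha = gen_leverrier_faddeev([A1, A2, A3])
print(np.round(alpha, 1))                 # expect approximately [1, 27, 316.5, 2110.5, ...]

s = 1.7                                   # arbitrary test point
A_s = s**3*np.eye(3) + s**2*A1 + s*A2 + A3
N_s = sum(Ni * s**(6 - i) for i, Ni in enumerate(N))
Delta_s = sum(a * s**(9 - i) for i, a in enumerate(alpha))
print(np.allclose(A_s @ N_s, Delta_s*np.eye(3)))   # expect True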

Connection to the Block Companion Forms: In multivariable control systems it is well known that the transfer matrix 𝑯(𝜆) can be obtained either from a state-space realization or from a matrix fraction description. In order to obtain the inverse of 𝑨(𝜆), assume that we are dealing with a MIMO system whose numerator is the identity, that is
\[
\boldsymbol{H}(\lambda) = \boldsymbol{B}(\lambda)\big(\boldsymbol{A}(\lambda)\big)^{-1} = \big(\boldsymbol{A}(\lambda)\big)^{-1}
\quad\text{where}\quad \boldsymbol{B}(\lambda) = \boldsymbol{I} \tag{15}
\]
On the other hand, the rational matrix function 𝑯(𝜆) can be written as
\[
\boldsymbol{H}(\lambda) = \big(\boldsymbol{A}(\lambda)\big)^{-1} = \boldsymbol{C}_c(\lambda\boldsymbol{I}_n - \boldsymbol{A}_c)^{-1}\boldsymbol{B}_c \tag{16}
\]

where
\[
\boldsymbol{C}_c^{T} = \begin{pmatrix}\boldsymbol{I}_p\\ \boldsymbol{0}\\ \vdots\\ \boldsymbol{0}\end{pmatrix},\qquad
\boldsymbol{A}_c = \begin{pmatrix}
\boldsymbol{0} & \boldsymbol{I}_m & \cdots & \boldsymbol{0}\\
\vdots & & \ddots & \vdots\\
\boldsymbol{0} & \boldsymbol{0} & \cdots & \boldsymbol{I}_m\\
-\boldsymbol{A}_{\ell} & -\boldsymbol{A}_{\ell-1} & \cdots & -\boldsymbol{A}_{1}
\end{pmatrix},\qquad
\boldsymbol{B}_c = \begin{pmatrix}\boldsymbol{0}\\ \vdots\\ \boldsymbol{0}\\ \boldsymbol{I}_m\end{pmatrix}
\]
Now let us define
\[
\big(\boldsymbol{A}(\lambda)\big)^{-1} = \frac{\boldsymbol{N}(\lambda)}{\Delta(\lambda)}
\qquad\text{and}\qquad
(\lambda\boldsymbol{I}-\boldsymbol{A}_c)^{-1} = \frac{\operatorname{Adj}(\lambda\boldsymbol{I}-\boldsymbol{A}_c)}{\det(\lambda\boldsymbol{I}-\boldsymbol{A}_c)} = \frac{\boldsymbol{R}(\lambda)}{\Delta(\lambda)} \tag{17}
\]
with 𝑹(𝜆) = ∑_{i=1}^{n} 𝑹_i 𝜆^{n−i}, 𝑵(𝜆) = ∑_{i=1}^{n} 𝑵_i 𝜆^{n−i}, ∆(𝜆) = ∑_{i=0}^{n} α_i 𝜆^{n−i}, α_0 = 1 and 𝑵_i = 𝟎 for i < ℓ.

Then the following results are obtained:
\[
\big(\boldsymbol{A}(\lambda)\big)^{-1} = \boldsymbol{C}_c(\lambda\boldsymbol{I}-\boldsymbol{A}_c)^{-1}\boldsymbol{B}_c
= \frac{\boldsymbol{C}_c\boldsymbol{R}(\lambda)\boldsymbol{B}_c}{\Delta(\lambda)}
= \frac{\sum_{i=1}^{n}(\boldsymbol{C}_c\boldsymbol{R}_i\boldsymbol{B}_c)\lambda^{n-i}}{\Delta(\lambda)}
\;\Longrightarrow\;
\boldsymbol{N}_i = \boldsymbol{C}_c\boldsymbol{R}_i\boldsymbol{B}_c \tag{18}
\]
From the usual Leverrier-Faddeev algorithm we have
\[
\begin{cases}
\boldsymbol{R}_{i+1} = \alpha_i\boldsymbol{I} + \boldsymbol{A}_c\boldsymbol{R}_i, & 1 \le i \le n-1\\
\boldsymbol{0} = \alpha_n\boldsymbol{I} + \boldsymbol{A}_c\boldsymbol{R}_n, & i = n
\end{cases}
\qquad\text{and}\qquad
\begin{cases}
\alpha_0 = 1\\
\alpha_i = -\dfrac{1}{i}\operatorname{trace}(\boldsymbol{A}_c\boldsymbol{R}_i)
\end{cases} \tag{19}
\]
Back-substitution and recursive evaluation of these formulas give
\[
\alpha_k = -\frac{1}{k}\operatorname{trace}\!\Big(\sum_{i=0}^{k-1}\alpha_i\boldsymbol{A}_c^{\,k-i}\Big),
\qquad
\boldsymbol{N}_k = \boldsymbol{C}_c\Big(\sum_{i=0}^{k-1}\alpha_i\boldsymbol{A}_c^{\,k-i-1}\Big)\boldsymbol{B}_c \tag{20}
\]
The above developments are summarized in the following algorithm.

Algorithm 2 (2nd Generalized Leverrier-Faddeev Algorithm)

Initialization: Give the matrix coefficients 𝑨_0 = 𝑰, 𝑨_1, …, 𝑨_ℓ, and α_0 = 1.

Result: 𝑵(𝜆) = ∑_{i=1}^{n} 𝑵_i 𝜆^{n−i} and ∆(𝜆) = ∑_{i=0}^{n} α_i 𝜆^{n−i}

Begin:
Construct the companion matrices 𝑨_c, 𝑩_c and 𝑪_c as in Equation (16).
for k = 1, 2, …, n do
    α_k = −trace(∑_{i=0}^{k−1} α_i 𝑨_c^{k−i}) / k
    𝑵_k = 𝑪_c (∑_{i=0}^{k−1} α_i 𝑨_c^{k−i−1}) 𝑩_c
end
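A compact NumPy sketch of Algorithm 2 follows (the function name is ours); it builds the block companion matrix of Equation (16) and runs the classical recursion (19), which reproduces the closed form (20) without forming powers of 𝑨_c explicitly.

import numpy as np

def gen_leverrier_faddeev_companion(A_list):
    # Algorithm 2 sketch: A_list = [A1, ..., Al] for the monic matrix polynomial
    # A(lambda) = lambda^l I + A1 lambda^(l-1) + ... + Al.
    l = len(A_list)
    m = A_list[0].shape[0]
    n = m * l
    # Block companion matrix: identity blocks above the diagonal,
    # bottom block row [-Al, -A_{l-1}, ..., -A1].
    Ac = np.zeros((n, n))
    if l > 1:
        Ac[:n - m, m:] = np.eye(n - m)
    for j in range(l):
        Ac[n - m:, j*m:(j + 1)*m] = -A_list[l - 1 - j]
    Bc = np.zeros((n, m)); Bc[n - m:, :] = np.eye(m)   # selects the last block column
    Cc = np.zeros((m, n)); Cc[:, :m] = np.eye(m)       # selects the first block row
    alpha = np.zeros(n + 1); alpha[0] = 1.0
    R = np.eye(n)                                      # R_1 = I
    N = []
    for k in range(1, n + 1):
        alpha[k] = -np.trace(Ac @ R) / k               # alpha_k = -(1/k) trace(Ac R_k)
        N.append(Cc @ R @ Bc)                          # N_k = Cc R_k Bc
        R = Ac @ R + alpha[k]*np.eye(n)                # R_{k+1} = Ac R_k + alpha_k I
    return N, alpha

Applied to the data of Example 2 below (where 𝑨_0 = 𝑰), this should reproduce the listed 𝑵_k and a_k up to the rounding of the printed figures.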

Example 2: Given a matrix polynomial 𝑨(𝜆) = 𝑨_0𝜆³ + 𝑨_1𝜆² + 𝑨_2𝜆 + 𝑨_3 with 𝑨_i ∈ ℝ^{3×3}, by using the above algorithm we get
\[
\big(\boldsymbol{A}(\lambda)\big)^{-1} = \frac{\boldsymbol{N}(\lambda)}{\Delta(\lambda)}
= \frac{\boldsymbol{N}_1\lambda^{8}+\boldsymbol{N}_2\lambda^{7}+\boldsymbol{N}_3\lambda^{6}+\boldsymbol{N}_4\lambda^{5}+\boldsymbol{N}_5\lambda^{4}+\boldsymbol{N}_6\lambda^{3}+\boldsymbol{N}_7\lambda^{2}+\boldsymbol{N}_8\lambda+\boldsymbol{N}_9}
{\alpha_0\lambda^{9}+\alpha_1\lambda^{8}+\alpha_2\lambda^{7}+\alpha_3\lambda^{6}+\alpha_4\lambda^{5}+\alpha_5\lambda^{4}+\alpha_6\lambda^{3}+\alpha_7\lambda^{2}+\alpha_8\lambda+\alpha_9}
\]
where ℓ = m = 3 and n = mℓ = 9 (ℓ is the degree of 𝑨(𝜆) and m is the size of the 𝑨_i).

𝑨1 =[26.6527 3.3590 -14.0226; 𝑨2 =[140.295 32.414 -98.168;


15.2183 11.1736 -12.2758; 100.753 49.619 -86.468;
25.3291 3.9304 -10.8264]; 166.468 41.315 -114.010];

𝑨3 =[170.645 69.366 -153.679;


131.606 77.383 -135.327;
215.960 92.055 -195.753];
The result of this will be 𝑵1 = 𝑵2 = 𝟎, 𝑵3 = 𝑰 and

𝑵4 =[0.3473 -3.3590 14.0226; 𝑵5 =[-137.111 -51.163 213.617;


-15.2183 15.8264 12.2758; -246.930 92.912 200.253;
-25.3291 -3.9304 37.8264]; -389.671 -60.991 436.604];
𝑵6 =[-1082.44 -300.66 1258.01; 𝑵7 =[-3447.80 -846.59 3564.41;
1539.45 -238.14 -1255.40; -4582.17 202.80 3757.73;
2308.07 364.44 -2306.40]; -6552.32 -1042.16 6167.00];
𝑵8 =[4984.78 1132.44 -4837.53; 𝑵9 =[-2690.46 -568.28 2505.05;
6474.08 135.50 -5337.94; -3463.08 -215.80 2867.93;
8885.48 1417.62 -8069.01]; -4596.73 -728.42 4076.10];

a0=1; a1=27; a2=316.5; a3=2110.5; a4=8805; a5=23786; a6=41496; a7=44951; a8=27343; a9=7087.5;

Matrix Polynomials in Descriptor Form: Differential-algebraic systems are dynamic systems that can only be described by a mixture of algebraic and differential equations; in other words, the algebraic equations act as constraints on the differential solution. These systems are also known as descriptor or singular systems and arise naturally as linear approximations of system models in many applications, such as electrical networks, dynamics of aeronautical systems, neutral delay systems, chemical and thermal processes and diffusion, range systems, interconnected systems, economics, optimization problems, feedback systems, robotics, biology, etc. (Dragutin Lj. Debeljković et al. 2004).

The matrix transfer function of a MIMO system can be described either by a generalized state-space model or by a polynomial fraction description as
\[
\boldsymbol{H}(\lambda) = \boldsymbol{B}(\lambda)\big(\boldsymbol{A}(\lambda)\big)^{-1} = \boldsymbol{C}(\lambda\boldsymbol{E}-\boldsymbol{A})^{-1}\boldsymbol{B},
\qquad \operatorname{rank}(\boldsymbol{E}) < n \tag{21}
\]
where 𝑨, 𝑬 ∈ ℝ^{n×n}, 𝑪 ∈ ℝ^{p×n} and 𝑩 ∈ ℝ^{n×m}; m is the number of inputs, p the number of outputs and n the number of states.

To obtain the transfer function we need to compute the inverse of the matrix pencil (𝜆𝑬 − 𝑨) or the inverse of 𝑨(𝜆). As is well known, inverting polynomial matrices is in general not an easy task and requires many involved calculations, and it becomes even more difficult for generalized (descriptor) systems. That is why we propose an algorithm that makes this computation easier.

Now, assume that we are dealing with the problem of inverting the matrix polynomial 𝑨(𝜆) = ∑_{i=0}^{ℓ} 𝑨_i 𝜆^{ℓ−i} with rank(𝑨_0) < m. As before, we let
\[
\boldsymbol{H}(\lambda) = \big(\boldsymbol{A}(\lambda)\big)^{-1} = \boldsymbol{C}_c(\lambda\boldsymbol{E}_c-\boldsymbol{A}_c)^{-1}\boldsymbol{B}_c
= \frac{\boldsymbol{C}_c\boldsymbol{R}(\lambda)\boldsymbol{B}_c}{\det(\lambda\boldsymbol{E}_c-\boldsymbol{A}_c)}
= \frac{\boldsymbol{N}(\lambda)}{\Delta(\lambda)} \tag{22}
\]
where 𝑵(𝜆) = ∑_{i=1}^{n} 𝑵_i (𝜆+𝜇)^{n−i}, ∆(𝜆) = det(𝜆𝑬_c − 𝑨_c) = ∑_{i=0}^{n} α_i (𝜆+𝜇)^{n−i} and
\[
\boldsymbol{E}_c = \begin{pmatrix}
\boldsymbol{I}_m & \cdots & \boldsymbol{0} & \boldsymbol{0}\\
\vdots & \ddots & \vdots & \vdots\\
\boldsymbol{0} & \cdots & \boldsymbol{I}_m & \boldsymbol{0}\\
\boldsymbol{0} & \cdots & \boldsymbol{0} & \boldsymbol{A}_0
\end{pmatrix},\qquad
\boldsymbol{A}_c = \begin{pmatrix}
\boldsymbol{0} & \boldsymbol{I}_m & \cdots & \boldsymbol{0}\\
\vdots & & \ddots & \vdots\\
\boldsymbol{0} & \boldsymbol{0} & \cdots & \boldsymbol{I}_m\\
-\boldsymbol{A}_{\ell} & -\boldsymbol{A}_{\ell-1} & \cdots & -\boldsymbol{A}_{1}
\end{pmatrix},\qquad
\boldsymbol{C}_c^{T} = \begin{pmatrix}\boldsymbol{I}_p\\ \boldsymbol{0}\\ \vdots\\ \boldsymbol{0}\end{pmatrix},\qquad
\boldsymbol{B}_c = \begin{pmatrix}\boldsymbol{0}\\ \vdots\\ \boldsymbol{0}\\ \boldsymbol{I}_m\end{pmatrix}
\]

Note: The scalar 𝜇 is called a regularization parameter and is introduced to simplify the calculations.

The adjugate matrix 𝑹(𝜆) can be calculated using the method proposed by P.N. Paraskevopoulos et al. 1983. Once the adjugate is obtained, a back-substitution yields a recursive formula for the 𝑵_i and α_i.

To compute the inverse of the pencil (𝜆𝑬_c − 𝑨_c) the following technique will be used. Find a 𝜇 such that the matrix (𝜇𝑬_c + 𝑨_c) is regular; note that det(𝜇𝑬_c + 𝑨_c) is a polynomial in 𝜇 of degree at most n, so for a regular pencil at most n values of 𝜇 make it singular.

\[
(\lambda\boldsymbol{E}_c-\boldsymbol{A}_c)^{-1}
= (\lambda\boldsymbol{E}_c+\mu\boldsymbol{E}_c-\mu\boldsymbol{E}_c-\boldsymbol{A}_c)^{-1}
= \big((\lambda+\mu)\boldsymbol{E}_c-(\mu\boldsymbol{E}_c+\boldsymbol{A}_c)\big)^{-1}
= \big((\lambda+\mu)\boldsymbol{M}-\boldsymbol{I}\big)^{-1}\boldsymbol{Q} \tag{23}
\]

where 𝑸 = (𝜇𝑬_c + 𝑨_c)^{-1} and 𝑴 = (𝜇𝑬_c + 𝑨_c)^{-1}𝑬_c, which can easily be evaluated since, for a fixed 𝜇, the matrix (𝜇𝑬_c + 𝑨_c) is a known constant matrix of appropriate dimension. Introducing the change of variable (𝜆 + 𝜇) = 1/s, we obtain

\[
(\lambda\boldsymbol{E}_c-\boldsymbol{A}_c)^{-1} = -s(s\boldsymbol{I}-\boldsymbol{M})^{-1}\boldsymbol{Q} \tag{24}
\]
\[
(\lambda\boldsymbol{E}_c-\boldsymbol{A}_c)^{-1} = -s(s\boldsymbol{I}-\boldsymbol{M})^{-1}\boldsymbol{Q}
= -s\left\{\frac{\boldsymbol{R}_n s^{\,n-1}+\dots+\boldsymbol{R}_2 s+\boldsymbol{R}_1}{\alpha_n s^{\,n}+\alpha_{n-1}s^{\,n-1}+\dots+\alpha_1 s+\alpha_0}\right\}\boldsymbol{Q}
= -\left\{\frac{\boldsymbol{R}_1(\lambda+\mu)^{n-1}+\dots+\boldsymbol{R}_{n-1}(\lambda+\mu)+\boldsymbol{R}_n}{\alpha_0(\lambda+\mu)^{n}+\alpha_1(\lambda+\mu)^{n-1}+\dots+\alpha_{n-1}(\lambda+\mu)+\alpha_n}\right\}\boldsymbol{Q}
\]

Next, the Souriau-Frame-Faddeev algorithm is used to compute the term (𝑠𝑰 − 𝑴)^{-1}:

\[
\begin{aligned}
\alpha_n &= 1, & \boldsymbol{R}_n &= \boldsymbol{I}\\
\alpha_{n-1} &= -\operatorname{trace}(\boldsymbol{M}\boldsymbol{R}_n), & \boldsymbol{R}_{n-1} &= \alpha_{n-1}\boldsymbol{I} + \boldsymbol{M}\boldsymbol{R}_n\\
\alpha_{n-2} &= -\tfrac{1}{2}\operatorname{trace}(\boldsymbol{M}\boldsymbol{R}_{n-1}), & \boldsymbol{R}_{n-2} &= \alpha_{n-2}\boldsymbol{I} + \boldsymbol{M}\boldsymbol{R}_{n-1}\\
&\;\;\vdots & &\;\;\vdots\\
\alpha_{1} &= -\tfrac{1}{n-1}\operatorname{trace}(\boldsymbol{M}\boldsymbol{R}_{2}), & \boldsymbol{R}_{1} &= \alpha_{1}\boldsymbol{I} + \boldsymbol{M}\boldsymbol{R}_{2}\\
\alpha_{0} &= -\tfrac{1}{n}\operatorname{trace}(\boldsymbol{M}\boldsymbol{R}_{1}), & \boldsymbol{R}_{0} &= \boldsymbol{0} = \alpha_{0}\boldsymbol{I} + \boldsymbol{M}\boldsymbol{R}_{1}
\end{aligned}
\]
In compact form, with α_n = 1, 𝑹_0 = 𝟎 and 𝑹_n = 𝑰,
\[
\alpha_{n-k} = -\frac{1}{k}\operatorname{trace}\!\Big(\sum_{i=0}^{k-1}\alpha_{n-i}\boldsymbol{M}^{k-i}\Big)
\qquad\text{and}\qquad
\boldsymbol{R}_{n-k} = \sum_{i=0}^{k}\alpha_{n-i}\boldsymbol{M}^{k-i},
\qquad k = 1,\dots,n \tag{25}
\]

The above developments are summarized in the following algorithm.

Algorithm 3 (3rd Generalized Leverrier-Faddeev Algorithm)

Initialization: Give the matrix coefficients 𝑨_0, 𝑨_1, …, 𝑨_ℓ, and α_n = 1.

Result: 𝑵(𝜆) = ∑_{i=1}^{n} 𝑵_i (𝜆+𝜇)^{n−i} and ∆(𝜆) = ∑_{i=0}^{n} α_i (𝜆+𝜇)^{n−i}

Begin:
▪ Construct the matrices 𝑬_c, 𝑨_c, 𝑩_c and 𝑪_c as in Equation (22).
▪ Choose a scalar 𝜇 such that (𝜇𝑬_c + 𝑨_c) is nonsingular.
▪ Construct the matrices 𝑴 = (𝜇𝑬_c + 𝑨_c)^{-1}𝑬_c and 𝑸 = (𝜇𝑬_c + 𝑨_c)^{-1}.
for k = 1, 2, …, n do
    α_{n−k} = −trace(∑_{i=0}^{k−1} α_{n−i} 𝑴^{k−i}) / k
    𝑵_{n−k+1} = −𝑪_c (∑_{i=0}^{k−1} α_{n−i} 𝑴^{k−1−i}) 𝑸 𝑩_c
end
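Finally, a NumPy sketch of Algorithm 3 is given below (the function name, the test matrices and the choice 𝜇 = 2 are ours). It forms the descriptor pencil of Equation (22), builds the matrices 𝑴 and 𝑸, and then runs the recursion behind Equation (25), returning 𝑵_j = −𝑪_c𝑹_j𝑸𝑩_c for j = 1, …, n.

import numpy as np

def gen_leverrier_faddeev_descriptor(A_list, mu):
    # Algorithm 3 sketch: A_list = [A0, A1, ..., Al] for
    # A(lambda) = A0 lambda^l + A1 lambda^(l-1) + ... + Al, where A0 may be singular.
    # Returns (N, alpha) such that
    # A(lambda)^{-1} = sum_j N[j] (lambda+mu)^(n-1-j) / sum_i alpha[i] (lambda+mu)^(n-i).
    l = len(A_list) - 1
    m = A_list[0].shape[0]
    n = m * l
    # Descriptor companion pencil of Eq. (22):
    Ec = np.eye(n); Ec[n - m:, n - m:] = A_list[0]      # diag(I, ..., I, A0)
    Ac = np.zeros((n, n))
    if l > 1:
        Ac[:n - m, m:] = np.eye(n - m)                  # identity super-diagonal blocks
    for j in range(l):
        Ac[n - m:, j*m:(j + 1)*m] = -A_list[l - j]      # bottom row: [-Al, ..., -A1]
    Bc = np.zeros((n, m)); Bc[n - m:, :] = np.eye(m)
    Cc = np.zeros((m, n)); Cc[:, :m] = np.eye(m)
    P = mu*Ec + Ac                                      # mu must make this nonsingular
    Q = np.linalg.inv(P)
    M = Q @ Ec
    alpha = np.zeros(n + 1); alpha[n] = 1.0             # alpha_i multiplies (lambda+mu)^(n-i)
    R = np.eye(n)                                       # R_n = I
    N = [None]*n
    for k in range(1, n + 1):
        alpha[n - k] = -np.trace(M @ R) / k             # alpha_{n-k} = -(1/k) trace(M R_{n-k+1})
        N[n - k] = -Cc @ R @ Q @ Bc                     # N_{n-k+1} = -Cc R_{n-k+1} Q Bc
        R = alpha[n - k]*np.eye(n) + M @ R              # R_{n-k} = alpha_{n-k} I + M R_{n-k+1}
    return N, alpha

# Small check with a hypothetical example whose leading coefficient is singular.
A0 = np.diag([1.0, 0.0]); A1 = np.array([[2.0, 1.0], [0.0, 3.0]]); A2 = np.eye(2)
N, alpha = gen_leverrier_faddeev_descriptor([A0, A1, A2], mu=2.0)
s, n = 0.7, 4
A_s = A0*s**2 + A1*s + A2
N_s = sum(Nj*(s + 2.0)**(n - 1 - j) for j, Nj in enumerate(N))
Delta_s = sum(a*(s + 2.0)**(n - i) for i, a in enumerate(alpha))
print(np.allclose(A_s @ N_s, Delta_s*np.eye(2)))        # expect True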

Conclusion: In this paper we have presented a new look at the Leverrier-Faddeev algorithm for the computation of the inverse of regular and non-regular matrix polynomials. The strength of the proposed algorithms lies in their simplicity of implementation and their efficiency, especially when the degree and the order of the matrix polynomial become large. These algorithms are helpful and practical for the computation of multivariable transfer functions of large-scale systems.
References

[1] P. Lancaster, Algorithms for Lambda-Matrices, Numerische Mathematik 6, 388-394 (1964)


[2] D. F. Davidenko, Inversion of matrices by the method of variation of parameters, Dokl. Soviet
Math, 1960, Volume 131, Number 3, 500–502
[3] P. Lancaster, Some applications of the Newton-Raphson method to non-linear matrix problems, Proc. Roy. Soc. A 271, 324–331 (1963)
[4] P. Lancaster, A generalised Rayleigh-quotient iteration for lambda-matrices. Arch. Rat. Mech.
Anal. 8, 309-322 (1961)
[5] Shui-Hung Hou, A Simple Proof of the Leverrier-Faddeev Characteristic Polynomial Algorithm,
SIAM REV. Vol. 40, No. 3, pp. 706-709, September 1998.
[6] A. S. Householder, The Theory of Matrices in Numerical Analysis, Dover, New York, 1975.
[7] D. K. Faddeev and V. N. Faddeeva, Computational Methods of Linear Algebra, Freeman, San
Francisco, 1963.
[8] F. R. Gantmacher, The Theory of Matrices, Vol. I, Chelsea Publishing Co., New York, 1960.
[9] Peter Henrici, Applied and Computational Complex Analysis, 1974 John Wiley & Sons, Inc.
[10] S. Barnett, Leverrier’s algorithm: a new proof and extensions, SIAM J. Matrix Anal. Appl. 10 (1989), 551–556.
[11] H.P. Decell, An application of the Cayley-Hamilton theorem to generalized matrix inversion, SIAM Review 7, No. 4 (1965), 526–528.
[12] D.K. Faddeev and V.N. Faddeeva, Computational Methods of Linear Algebra, Freeman, San Francisco, 1963.
[13] J.S. Frame, A simple recursion formula for inverting a matrix, Bull. Amer. Math. Soc.
55(1949), 19–45.
[14] J. Jones, N.P. Karampetakis and A.C. Pugh, The computation and application of the generalized inverse via Maple, J. Symbolic Computation 25 (1998), 99–124.
[15] N.P. Karampetakis, Computation of the generalized inverse of a polynomial matrix and
applications, Linear Algebra Appl. 252(1997), 35–60.
[16] G. Wang and Y. Lin, A new extension of the Leverrier’s algorithm, Linear Algebra
Appl.180(1993), 227–238.
