
Fiedler linearizations of polynomial matrices and applications

Dr Vologiannidis Stavros 1,2, Dr Antoniou Efstathios 3

1 DataScouting S.A., Thessaloniki, Greece (www.datascouting.com)
2 Department of Informatics and Communications, Technological Educational Institute of Serres, Greece
3 Department of Sciences, Technological Educational Institute of Thessaloniki, Greece

2 July 2012, City University, London

Dr Vologiannidis Stavros

Introduction

Definition
A linearization of a regular polynomial matrix

T(s) = T_n s^n + T_{n-1} s^{n-1} + ... + T_0,   T_i ∈ C^{p×p}

is a matrix pencil

L(s) = s L_1 − L_0,   L_i ∈ C^{np×np}

such that

L(s) = U(s) diag( T(s), I_{(n-1)p} ) V(s)

where U(s), V(s) are unimodular matrices, i.e. matrices with constant non-zero determinants. Moreover, L(s) is a strong linearization of T(s) if the dual pencil L#(s) = s L_0 − L_1 is a linearization of the dual polynomial matrix T#(s) = s^n T(1/s).


Companion Linearizations
The most common linearizations of T(s) are the well-known first and second companion linearizations P(s) and P̂(s):

           [ I_p            0  ]   [  0    -I_p   ...     0      ]
P(s)  = s  [      ...          ] + [              ...    -I_p    ]
           [ 0           T_n   ]   [ T_0   T_1    ...   T_{n-1}  ]

           [ I_p            0  ]   [  0     0    ...    T_0      ]
P̂(s)  = s  [      ...          ] + [ -I_p   0    ...    T_1      ]
           [ 0           T_n   ]   [       ...          ...      ]
                                   [  0    ...  -I_p   T_{n-1}   ]

P(s), P̂(s) can be constructed by inspection of the coefficient matrices of T(s).

The matrices involved are relatively sparse.
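As a minimal sketch (Python/NumPy), the first companion pencil can be assembled by inspection of the coefficients; the sign convention below (P(s) = sP_1 + P_0 with −I_p on the superdiagonal) is an assumption reconstructed from the slide, and for p = 1 the generalized eigenvalues of the pencil coincide with the roots of T(s):

```python
import numpy as np

def first_companion(T):
    """First companion pencil P(s) = s*P1 + P0 for T(s) = sum_i T[i] s^i.

    T is a list of (p x p) coefficient arrays [T0, T1, ..., Tn].
    Convention assumed here: P1 = diag(I, ..., I, Tn); P0 has -I on the
    block superdiagonal and (T0 T1 ... T_{n-1}) in the last block row.
    """
    n = len(T) - 1
    p = T[0].shape[0]
    P1 = np.eye(n * p)
    P1[(n - 1) * p:, (n - 1) * p:] = T[n]
    P0 = np.zeros((n * p, n * p))
    for i in range(n - 1):                      # -I on the superdiagonal
        P0[i * p:(i + 1) * p, (i + 1) * p:(i + 2) * p] = -np.eye(p)
    for i in range(n):                          # coefficients in the last block row
        P0[(n - 1) * p:, i * p:(i + 1) * p] = T[i]
    return P1, P0

# Scalar check: T(s) = s^2 + 3s + 2 has roots -1 and -2.
P1, P0 = first_companion([np.array([[2.0]]), np.array([[3.0]]), np.array([[1.0]])])
# (s*P1 + P0) v = 0  <=>  (-inv(P1) @ P0) v = s v
eigs = sorted(np.linalg.eigvals(-np.linalg.inv(P1) @ P0).real)
```

Note how only the coefficient blocks of T(s) appear in the pencil, unperturbed.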


Motivation - ARMA models

Consider the discrete time ARMA representation

T_n y_{k+n} + T_{n-1} y_{k+n-1} + ... + T_0 y_k = S_m u_{k+m} + S_{m-1} u_{k+m-1} + ... + S_0 u_k

where u_k ∈ R^q, k = 0, 1, 2, 3, ... is the input vector, y_k ∈ R^p, k = 0, 1, 2, 3, ... is the output vector, T_i ∈ R^{p×p}, i = 0, 1, ..., n and S_i ∈ R^{p×q}, i = 0, 1, ..., m.

The above equation can be rewritten in the more compact form

T(σ) y_k = S(σ) u_k

where T(σ) = Σ_{i=0}^{n} T_i σ^i and S(σ) = Σ_{i=0}^{m} S_i σ^i are polynomial matrices and σ is the forward shift operator, i.e. σ x_k = x_{k+1}. The behaviour of the ARMA model can be studied through the algebraic properties of T(σ).


Motivation - ARMA models

The ARMA model can be transformed into a descriptor system of the form

[ I_p            0  ]           [  0     I_p   ...     0      ]        [  0   ...   0  ]
[      ...          ] x_{k+1} = [               ...    I_p    ] x_k +  [      ...      ] v_k
[ 0           T_n   ]           [ -T_0  -T_1   ...  -T_{n-1}  ]        [ S_0  ...  S_m ]

where v_k = col(u_k, u_{k+1}, ..., u_{k+m}).

The coefficient matrices on the l.h.s. of the above equation are those of the first companion linearization P(σ). The behaviour of the ARMA model can be studied through the properties of the first companion linearization P(σ).
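The equivalence above can be checked numerically; the sketch below is a scalar instance (n = 2, m = 0, so v_k = u_k) with the state arrangement x_k = (y_k, y_{k+1}) assumed; it simulates the ARMA recursion directly and through the descriptor form and compares the outputs:

```python
import numpy as np

# ARMA: T2*y[k+2] + T1*y[k+1] + T0*y[k] = S0*u[k]   (scalar, n = 2, m = 0)
T0, T1, T2, S0 = 0.25, 0.5, 2.0, 1.0
u = np.sin(np.arange(12))          # arbitrary input sequence

# Direct recursion (zero initial conditions)
y = np.zeros(14)
for k in range(12):
    y[k + 2] = (S0 * u[k] - T1 * y[k + 1] - T0 * y[k]) / T2

# Descriptor form E x[k+1] = A x[k] + B u[k], with E and A the coefficient
# matrices of the first companion pencil and assumed state x[k] = (y[k], y[k+1])
E = np.array([[1.0, 0.0], [0.0, T2]])
A = np.array([[0.0, 1.0], [-T0, -T1]])
B = np.array([0.0, S0])
x = np.zeros(2)
y_desc = [x[0]]
for k in range(12):
    x = np.linalg.solve(E, A @ x + B * u[k])   # E is invertible since T2 != 0
    y_desc.append(x[0])

match = np.allclose(y[:13], y_desc)
```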


Motivation - Polynomial Eigenvalue Problem

The Polynomial Eigenvalue Problem (PEP) can be stated as follows: Given a regular polynomial matrix T(λ) = T_n λ^n + T_{n-1} λ^{n-1} + ... + T_0 ∈ C^{p×p}, find λ_0 ∈ C and x_0 ∈ C^p such that T(λ_0) x_0 = 0_{p×1}. The scalar λ_0 is an eigenvalue and x_0 is the associated eigenvector of T(λ).

If P(λ) = λ P_1 − P_0 is the first companion linearization of T(λ), then by solving the Generalized Eigenvalue Problem (GEP)

P(λ_0) z_0 = 0

it is easy to see that λ_0 is also an eigenvalue of T(λ), while the associated eigenvector x_0 can be recovered from the first p components of z_0.
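A small worked instance of this recovery, with the companion sign convention assumed as above and arbitrary illustrative 2×2 coefficients: solve the GEP for the linearization, take the first p components of each eigenvector, and verify that they are eigenvectors of T(λ):

```python
import numpy as np

# PEP: find (lam, x) with T(lam) x = 0 for T(lam) = T2 lam^2 + T1 lam + T0.
T0 = np.array([[1.0, 0.0], [1.0, 1.0]])   # illustrative coefficients
T1 = np.array([[3.0, 1.0], [0.0, 2.0]])
T2 = np.eye(2)
p = 2

# First companion pencil: P(lam) = lam*diag(I, T2) + [[0, -I], [T0, T1]]
P1 = np.block([[np.eye(p), np.zeros((p, p))], [np.zeros((p, p)), T2]])
P0 = np.block([[np.zeros((p, p)), -np.eye(p)], [T0, T1]])

# GEP: P(lam) v = 0  <=>  (-inv(P1) @ P0) v = lam v
lams, V = np.linalg.eig(-np.linalg.inv(P1) @ P0)

# Eigenvectors of T(lam) are the FIRST p components of each pencil eigenvector
residuals = []
for lam, v in zip(lams, V.T):
    x0 = v[:p]
    r = (T2 * lam**2 + T1 * lam + T0) @ x0
    residuals.append(np.linalg.norm(r))
max_res = max(residuals)
```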


Motivation - Polynomial Eigenvalue Problem

Quadratic eigenvalue problem - T(λ) = T_2 λ^2 + T_1 λ + T_0:
  Dynamic analysis of structures (T_0 and T_2 real symmetric and positive semidefinite)
  Vibrations of spinning structures (T_0 = T_0^T, T_2 = T_2^T, T_1 = −T_1^T)
  Modelling of excitation of train tracks (palindromic, sparse, matrix dimensions = 1005)
  Camera surveillance
  ...

Polynomial eigenvalue problem:
  Optimal control
  Stability analysis of vibrating systems under state delay feedback control
  ...

Dr Vologiannidis Stavros


Motivation

  Many applications lead to (large scale) matrix polynomial problems where the coefficients are sparse and structured.
  Any kind of extra structure (arising typically from the properties of the underlying physical problem) should be reflected as much as possible in the method used for solving the problem.
  Reliable numerical algorithms are available for matrix pencils.
  Special techniques exist for structured matrix pencils.
  Out of the infinite number of linearizations, which one should we choose?
    Linearizations have varying eigenvalue condition numbers.
    Solving the PEP with a backward stable algorithm (e.g., QZ) applied to a linearization can be backward unstable for the PEP.
    Structure preservation leads to a gain in efficiency and accuracy.


Different approaches to linearizations

  Multiplicative approach - (Lancaster, Prells) - [5], [6].
  Additive construction approach - (Mackey, Mackey, Mehl, Mehrmann, Tisseur, Higham, ...) - [7], [4].
  Fiedler linearizations - (Fiedler, Antoniou, Vologiannidis) - [3], [1] and [2].
  Extended Fiedler linearizations - (Antoniou, Vologiannidis) - [8].

(Extended) Fiedler linearizations focus on:
  The construction of (companion-like) linearizations using the unperturbed coefficients of T(s).
  Preservation of selected structural properties of the polynomial matrix.
  Direct eigenvector recovery by inspection.


Overview

  Definition of some elementary matrices containing the coefficients of the original polynomial matrix, and presentation of some of their properties.
  A method to construct Fiedler linearizations by multiplying elementary matrices.
  A method to construct block symmetric Fiedler linearizations by multiplying elementary matrices.
  Recovery of the eigenvectors of the original polynomial matrix from those of the linearization.

Elementary matrices

Definition (Elementary matrices)
Let T(s) be a regular polynomial matrix of degree n. We define the following elementary matrices corresponding to T(s):

A_k = diag( I_{p(k-1)}, C_k, I_{p(n-k-1)} ),   k = 1, 2, ..., n-1,

where

C_k = [  0    I_p  ]
      [ I_p  -T_k  ]

and A_0 = diag( -T_0, I_{p(n-1)} ).
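A sketch of these elementary matrices in NumPy (the minus signs are reconstructed, since they do not survive in the source): a builder for A_k, together with a check of the explicit inverse Ĉ_k = [[T_k, I_p], [I_p, 0]] used later in the talk:

```python
import numpy as np

def elementary(k, T, p, n):
    """A_k for T(s) = sum_i T[i] s^i, with the reconstructed convention
    A_0 = diag(-T0, I) and, for 1 <= k <= n-1, active block
    C_k = [[0, I], [I, -T_k]]."""
    A = np.eye(n * p)
    if k == 0:
        A[:p, :p] = -T[0]
    else:
        i = (k - 1) * p
        A[i:i + 2 * p, i:i + 2 * p] = np.block(
            [[np.zeros((p, p)), np.eye(p)], [np.eye(p), -T[k]]])
    return A

rng = np.random.default_rng(0)
p, n = 2, 4
T = [rng.standard_normal((p, p)) for _ in range(n + 1)]

# The inverse of A_k is the "hatted" matrix with block [[T_k, I], [I, 0]]
k = 2
Ahat = np.eye(n * p)
i = (k - 1) * p
Ahat[i:i + 2 * p, i:i + 2 * p] = np.block(
    [[T[k], np.eye(p)], [np.eye(p), np.zeros((p, p))]])
inverse_ok = np.allclose(np.linalg.inv(elementary(k, T, p, n)), Ahat)
```

Note that A_k is invertible for every T_k, and C_k^{-1} again involves only the unperturbed coefficient T_k.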

Products of elementary matrices

Definition
Let I = (i_1, i_2, ..., i_m) be an ordered tuple containing indices from {0, 1, 2, ..., n-1}. Then A_I := A_{i_1} A_{i_2} ··· A_{i_m}.

Theorem [1]
Let i, j ∈ {0, 1, 2, ..., n-1}. Then A_i A_j = A_j A_i if and only if |i − j| ≠ 1.

Definition
Let I_1 and I_2 be two tuples. I_1 will be termed equivalent to I_2 (I_1 ∼ I_2) if and only if A_{I_1} = A_{I_2}.
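The commutativity theorem can be observed numerically with the elementary matrices as reconstructed above (the sign convention is an assumption of this sketch): matrices whose indices differ by more than one act on disjoint block rows and commute, while neighbouring indices do not:

```python
import numpy as np

def elementary(k, T, p, n):
    # A_0 = diag(-T0, I); A_k = diag(I, [[0, I], [I, -T_k]], I) for 1 <= k <= n-1
    A = np.eye(n * p)
    if k == 0:
        A[:p, :p] = -T[0]
    else:
        i = (k - 1) * p
        A[i:i + 2 * p, i:i + 2 * p] = np.block(
            [[np.zeros((p, p)), np.eye(p)], [np.eye(p), -T[k]]])
    return A

rng = np.random.default_rng(1)
p, n = 2, 4
T = [rng.standard_normal((p, p)) for _ in range(n + 1)]
A = [elementary(k, T, p, n) for k in range(n)]

# A_i and A_j commute exactly when |i - j| != 1
commute_02 = np.allclose(A[0] @ A[2], A[2] @ A[0])   # |0-2| = 2: commute
commute_12 = np.allclose(A[1] @ A[2], A[2] @ A[1])   # |1-2| = 1: do not
```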


We will use the following notation. Let k, l ∈ Z. Then

(k : l) := (k, k+1, ..., l) for k ≤ l, and (k : l) := ∅ for k > l, with A_∅ = I_np.

If I = (i_1, i_2, ..., i_m), then the reversed tuple is Ĩ := (i_m, i_{m-1}, ..., i_1).

Definition
A product of elementary matrices A_I will be termed operation free iff the block elements of A_I are either 0, I_p or ±T_i (for generic matrices T_i).

Example

A_0 A_1 A_0 = [  0    -T_0 ]  ⊕  I_{p(n-2)}
              [ -T_0  -T_1 ]

is operation free, while

A_1 A_0 A_1 = [  I_p        -T_1     ]  ⊕  I_{p(n-2)}
              [ -T_1   T_1^2 - T_0   ]

is not.


Successor Infixed Property

We need an easy way to characterize operation free products of elementary matrices.

Definition (Successor Infixed Property)
Let I = (i_1, i_2, ..., i_k) be an index tuple. I will be called successor infixed if and only if for every pair of indices i_a, i_b ∈ I, with 1 ≤ a < b ≤ k, satisfying i_a = i_b, there exists at least one index i_c = i_a + 1 such that a < c < b.

Example
Let I_1 = (1, 2, 0, 1, 3, 2, 0, 1) and I_2 = (1, 2, 0, …, 3, …, 2, 0). Then I_1 has the SIP but I_2 does not. Also note that I_1 ∼ (1, …, …, 1, 3, 2, 0, 1) ∼ (1, 0, 2, …, …, 2, 0, 1) ∼ ...

We also need an easy way to uniquely write index tuples corresponding to equal products of elementary matrices.
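The SIP is a purely combinatorial condition, so it is easy to check programmatically; a direct sketch of the definition, with the slide's example tuple as a test case:

```python
def has_sip(I):
    """Successor Infixed Property: for every pair a < b with I[a] == I[b],
    some index equal to I[a] + 1 occurs strictly between positions a and b."""
    for a in range(len(I)):
        for b in range(a + 1, len(I)):
            if I[a] == I[b] and (I[a] + 1) not in I[a + 1:b]:
                return False
    return True

ok1 = has_sip((1, 2, 0, 1, 3, 2, 0, 1))   # every repeated index is "infixed"
ok2 = has_sip((1, 0, 1))                  # fails: no 2 between the two 1s
```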


Products of elementary matrices

Theorem (Range Product)
The product A_(k:l) is of the form

A_(k:l) = diag( I_{p(k-1)},  [      0                 I_p        ]
                             [ I_{p(l-k+1)}   -col(T_k, ..., T_l) ],  I_{p(n-l-1)} ),   0 < k ≤ l ≤ n-1,

A_(0:l) = diag( [    0          -T_0         ]
                [ I_{pl}   -col(T_1, ..., T_l) ],  I_{p(n-l-1)} ),   k = 0, l ≤ n-1,

where col(T_k, ..., T_l) denotes the block column with entries T_k, ..., T_l, and the zero block in the first block row has size p × p(l-k+1) (resp. p × pl).

Dr Vologiannidis Stavros

Standard forms of products of elementary matrices

Definition (Standard forms)
Column standard form: A_I = Π_{i=n-1}^{0} A_{(c_i : i)}, where c_i ∈ (0 : i) ∪ {∅}.
Row standard form: A_I = Π_{j=0}^{n-1} A_{(r_j : j)}, where r_j ∈ (0 : j) ∪ {∅}.

Example
Consider n = 4 and I = (0, 1, 0, 2, 1, 3, 2, 0, 1, 0). A column standard form is

A_I = (A_0 A_1 A_2 A_3)(A_0 A_1 A_2)(A_0 A_1)(A_0) =

[   0     0     0    -T_0 ]
[   0     0   -T_0   -T_1 ]
[   0   -T_0  -T_1   -T_2 ]
[ -T_0  -T_1  -T_2   -T_3 ]

A row standard form is A_I = (A_0)(A_1 A_0)(A_2 A_1 A_0)(A_3 A_2 A_1 A_0).
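The example above can be reproduced numerically with the reconstructed elementary matrices (scalar case p = 1, generic numeric stand-ins for the coefficients); the product of the column-standard-form factors is exactly the block anti-triangular Hankel-like matrix on the slide:

```python
import numpy as np

def elementary(k, T, n):
    # scalar (p = 1) elementary matrices, signs as reconstructed:
    # A_0 = diag(-T0, I); A_k has the 2x2 block [[0, 1], [1, -T_k]]
    A = np.eye(n)
    if k == 0:
        A[0, 0] = -T[0]
    else:
        A[k - 1:k + 1, k - 1:k + 1] = [[0.0, 1.0], [1.0, -T[k]]]
    return A

n = 4
T = [2.0, 3.0, 5.0, 7.0]              # generic stand-ins for T0..T3
A = [elementary(k, T, n) for k in range(n)]

# I = (0,1,0,2,1,3,2,0,1,0) in column standard form:
# A_I = (A0 A1 A2 A3)(A0 A1 A2)(A0 A1)(A0)
M = (A[0] @ A[1] @ A[2] @ A[3]) @ (A[0] @ A[1] @ A[2]) @ (A[0] @ A[1]) @ A[0]

expected = -np.array([[0.0,  0.0,  0.0,  T[0]],
                      [0.0,  0.0,  T[0], T[1]],
                      [0.0,  T[0], T[1], T[2]],
                      [T[0], T[1], T[2], T[3]]])
operation_free = np.allclose(M, expected)
```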


Characterization of operation free products

Theorem (Characterization of operation free products)
A_I is operation free iff any of the following equivalent statements hold:
1. I satisfies the SIP.
2. A_I can be written in a column standard form.
3. A_I can be written in a row standard form.


Products of elementary matrices

Similar results hold using the inverses of the elementary matrices:

Â_k := A_k^{-1} = diag( I_{p(k-1)}, Ĉ_k, I_{p(n-k-1)} ),   k = 1, ..., n-1,

where

Ĉ_k := C_k^{-1} = [ T_k  I_p ]
                  [ I_p   0  ]

and Â_n := diag( I_{p(n-1)}, T_n ).

  Row and column standard forms can be defined accordingly.
  A similar characterization of operation free products holds.

Example (n = 4)

(Â_4 Â_3 Â_2 Â_1)(Â_4 Â_3 Â_2)(Â_4 Â_3)(Â_4) =

[ T_1  T_2  T_3  T_4 ]
[ T_2  T_3  T_4   0  ]
[ T_3  T_4   0    0  ]
[ T_4   0    0    0  ]


Construction of Linearizations

Let T(s) be a polynomial matrix of degree n with T_0, T_n nonsingular. Choose k ∈ {1, 2, ..., n}.
  Let P be a permutation of the tuple (0 : k-1) and L_P, R_P tuples with elements from (0 : k-2) s.t. (L_P, P, R_P) satisfies the SIP.
  Let N be a permutation of the tuple (k : n) and L_N, R_N tuples with elements from (k+1 : n) s.t. (L_N, N, R_N) satisfies the SIP.
Then the matrix pencil

s A_{(L_N, L_P, N, R_P, R_N)} − A_{(L_N, L_P, P, R_P, R_N)}

is a linearization of T(s) and its coefficients are operation free matrices (the factors with indices in {k, ..., n} being the hatted matrices Â_i).

Notice that T_0 is allowed to be singular if 0 ∉ (L_P, R_P), while the same holds for T_n if n ∉ (L_N, R_N).
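A numerical sanity check of the simplest instance of this construction (k = n, so N = (n) and all of L_P, R_P, L_N, R_N empty, with P a non-trivial permutation of (0 : n-1)); the sign convention and the scalar setting are assumptions of this sketch, not the general theorem:

```python
import numpy as np

def elementary(k, T, n):
    # scalar elementary matrices with the reconstructed sign convention
    A = np.eye(n)
    if k == 0:
        A[0, 0] = -T[0]
    else:
        A[k - 1:k + 1, k - 1:k + 1] = [[0.0, 1.0], [1.0, -T[k]]]
    return A

# T(s) = (s-1)(s-2)(s-3) = s^3 - 6 s^2 + 11 s - 6
T = [-6.0, 11.0, -6.0, 1.0]
n = 3

# k = n = 3, N = (3), P = (1, 0, 2), all of L_P, R_P, L_N, R_N empty:
# pencil  s * A_hat_3 - A_1 A_0 A_2,  with  A_hat_3 = diag(I, T_n)
E = np.eye(n); E[-1, -1] = T[n]
F = elementary(1, T, n) @ elementary(0, T, n) @ elementary(2, T, n)

# L(s) = s E - F = 0  <=>  F v = s E v ; here E is invertible
eigs = sorted(np.linalg.eigvals(np.linalg.solve(E, F)).real)
```

The generalized eigenvalues of the resulting Fiedler pencil recover the roots 1, 2, 3 of T(s), even though the factors A_1, A_0, A_2 are multiplied in a non-companion order.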


Construction of Linearizations

Example
Let n = 5. Using k = 3 and P = (1, 0, 2), N = (3, 5, 4), R_P = (1), L_P = ∅, R_N = ∅, L_N = (4):

  (L_P, P, R_P) = (1, 0, 2, 1) satisfies the SIP,
  (L_N, N, R_N) = (4, 3, 5, 4) satisfies the SIP,

and we get a 5p × 5p pencil P(s) whose blocks are 0, ±I_p or the unperturbed coefficients ±T_0, ..., ±T_5.

Construction of Linearizations

Example
Setting instead R_P = (0, 1) and L_P = (0) (keeping k, N, L_N, R_N as before):

  (L_P, P, R_P) = (0, 1, 0, 2, 1, 0) satisfies the SIP,
  (L_N, N, R_N) = (4, 3, 5, 4) satisfies the SIP,

and we again get an operation free pencil built only from the blocks 0, ±I_p, ±T_i.

Construction of Linearizations

Why were the previous linearizations block symmetric? Can we produce all the block symmetric linearizations in this family?


Block Symmetric Linearizations

Remark: A product A_I is block symmetric iff I ∼ Ĩ.

Let k be the index used in our main result. If k is odd, then choose P = (P_odd, P_even), N = (N_odd, N_even), where

P_odd = (1, 3, ..., k-2),   P_even = (0, 2, ..., k-1),
N_odd = (k, k+2, ...),      N_even = (k+1, k+3, ...),

and L_P = ∅, R_P = P_odd, L_N = N_even, R_N = ∅. If k is even, then L_N, L_P, N, R_P, R_N are chosen accordingly. It can be verified that

s A_{(L_N, L_P, N, R_P, R_N)} − A_{(L_N, L_P, P, R_P, R_N)}

is a block symmetric, operation free linearization of T(s). This linearization can be further extended to produce more block symmetric linearizations by pre- and post-multiplying by the same elementary matrix A_i (always obeying the SIP).

Dr Vologiannidis Stavros

Block Symmetric Linearizations


Example
Setting k = 3 and Podd = (1), Peven = (0, 2), Nodd = (3, 5), Neven = (4), RP = Podd, LN = Neven, LP = RN = ∅ yields the first linearization of the previous example. Pre- and post-multiplying it by A0, we get the second linearization of the example.

Eigenvector recovery of extended Fiedler linearizations


Let P(s) = sA(LN,LP,N,RP,RN) − A(LN,LP,P,RP,RN) be a linearization of T(s) ∈ C^{p×p}[s] constructed above.

Theorem
Let x = [x1^T ⋯ xn^T]^T ∈ C^{np×1} be a right eigenvector of P(s) corresponding to an eigenvalue λ. If A(ci), i = k−1, ..., 0, is the column standard form of A(P,RP) and ci = 0 is such that cj ≠ 0 for j > i, then x_{i+1} is a right eigenvector of T(s) corresponding to λ.

Theorem
Let x = [x1 ⋯ xn] ∈ C^{1×np} be a left eigenvector of P(s) corresponding to an eigenvalue λ. If A(ri), i = 0, ..., k−1, is the row standard form of A(LP,P) and ri = 0 is such that rj ≠ 0 for j > i, then x_{i+1} is a left eigenvector of T(s) corresponding to λ.

Example
Using k = 3 and P = (1, 0, 2), N = (3, 5, 4), RP = (1), LP = ∅, RN = ∅, LN = (4), we get the pencil P(s) = sA(LN,LP,N,RP,RN) − A(LN,LP,P,RP,RN) of the first example.

Since (P, RP) = ((1, 0, 2), (1)) ∼ ((1, 2), (0, 1)), x2 is a right eigenvector of T(s).
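These recovery rules mirror what happens in the simplest case. For the first companion linearization of a quadratic (in the convention sketched below, an assumption about block layout, not necessarily the exact ordering used on the slides), right eigenvectors of the pencil have the form [λx; x], so one block directly recovers a right eigenvector x of T(s):

```python
import numpy as np

# A quadratic matrix polynomial T(s) = T2 s^2 + T1 s + T0 (sample data).
T2 = np.array([[2.0, 0.0], [0.0, 1.0]])
T1 = np.array([[1.0, -1.0], [0.0, 3.0]])
T0 = np.array([[-4.0, 1.0], [1.0, -2.0]])
p = 2
I, Z = np.eye(p), np.zeros((p, p))

# First companion linearization L(s) = s*A1 - A0.
A1 = np.block([[T2, Z], [Z, I]])
A0 = np.block([[-T1, -T0], [I, Z]])

# Generalized eigenproblem A0 v = lam * A1 v (A1 is invertible here).
lams, V = np.linalg.eig(np.linalg.solve(A1, A0))

# Right eigenvectors of L(s) have the form v = [lam*x; x]:
# the bottom p entries recover a right eigenvector x of T(s).
for lam, v in zip(lams, V.T):
    x = v[p:]
    residual = (lam**2 * T2 + lam * T1 + T0) @ x
    assert np.linalg.norm(residual) < 1e-8 * np.linalg.norm(x)
```

The at most 2p eigenvalues of the pencil are exactly the finite eigenvalues of T(s) here, since T2 is nonsingular.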

Eigenvector recovery at ∞ of extended Fiedler linearizations

Let T(s) ∈ C^{p×p}[s] be of degree n with Tn singular. Let also P(s) be a linearization with n ∉ (LN, RN).

Theorem
Let x = [x1^T ⋯ xn^T]^T ∈ C^{np×1} be a right eigenvector of P(s) corresponding to an eigenvalue at ∞. If A(ci), i = 1, ..., n, is the column standard form of A(N,RN) and ci = n is such that cj ≠ n for j > i, then xi is a right eigenvector of T(s) corresponding to ∞.

Theorem
Let x = [x1 ⋯ xn] ∈ C^{1×np} be a left eigenvector of P(s) corresponding to an eigenvalue at ∞. If A(ri), i = n, ..., 1, is the row standard form of A(LN,N) and ri = n is such that rj ≠ n for j > i, then xi is a left eigenvector of T(s) corresponding to ∞.


Example
Using k = 3 and P = (1, 0, 2), N = (3, 5, 4), RP = (1), LP = ∅, RN = ∅, LN = (4), we get the pencil P(s) = sA(LN,LP,N,RP,RN) − A(LN,LP,P,RP,RN) of the first example.

Since (N, RN) = ((3, 5, 4), ∅) ∼ ((3), (5, 4)), x4 is a right eigenvector of T(s) at ∞.
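Recovery at ∞ can likewise be checked in the quadratic case. With the dual pencil defined in the introduction, eigenvectors of the pencil at ∞ are its eigenvectors at 0 as eigenvectors of the dual pencil; for the first companion convention below (an assumption, as before) the top block is a null vector of T2, i.e. a right eigenvector of T(s) at ∞:

```python
import numpy as np

# Quadratic with SINGULAR leading coefficient: eigenvalues at infinity exist.
T2 = np.array([[1.0, 1.0], [1.0, 1.0]])   # rank 1, singular
T1 = np.array([[2.0, 0.0], [0.0, 1.0]])
T0 = np.array([[1.0, 0.0], [0.0, 2.0]])
p = 2
I, Z = np.eye(p), np.zeros((p, p))

# First companion pencil L(s) = s*A1 - A0 (same convention as before).
A1 = np.block([[T2, Z], [Z, I]])
A0 = np.block([[-T1, -T0], [I, Z]])

# Eigenvalues at infinity of s*A1 - A0 are the zero eigenvalues of the
# dual pencil s*A0 - A1, with the same eigenvectors.
mus, V = np.linalg.eig(np.linalg.solve(A0, A1))  # A0 is invertible here
v = V[:, np.argmin(np.abs(mus))].real            # eigenvector for mu = 0
x = v[:p]                                        # top block recovers x
assert np.linalg.norm(T2 @ x) < 1e-8             # T2 x = 0: eigenvector at infinity
```

Here null(A1) = {[z; 0] : T2 z = 0}, so the eigenvector of the pencil at ∞ carries the null vector of T2 in its top block.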

Conclusions

Properties of products of elementary matrices.
A broad family of linearizations.
A method for creating block symmetric linearizations.
Eigenvector recovery is easily established due to the operation free property of Fiedler linearizations.

Bibliography I
E. N. Antoniou and S. Vologiannidis. A new family of companion forms of polynomial matrices. Electron. J. Linear Algebra, 11:78–87 (electronic), 2004.

E. N. Antoniou and S. Vologiannidis. Linearizations of polynomial matrices with symmetries and their applications. Electron. J. Linear Algebra, 15:107–114 (electronic), 2006.

Miroslav Fiedler. A note on companion matrices. Linear Algebra Appl., 372:325–331, 2003.

Nicholas J. Higham, D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Symmetric linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 29(1):143–159 (electronic), 2006.

Bibliography II

Peter Lancaster. Symmetric transformations of the companion matrix. 8:146–148, 1961.

Peter Lancaster and Uwe Prells. Isospectral families of high-order systems. ZAMM Z. Angew. Math. Mech., 87(3):219–234, 2007.

D. Steven Mackey, Niloufer Mackey, Christian Mehl, and Volker Mehrmann. Vector spaces of linearizations for matrix polynomials. SIAM J. Matrix Anal. Appl., 28(4):971–1004 (electronic), 2006.

Bibliography III

S. Vologiannidis and E. Antoniou. A permuted factors approach for the linearization of polynomial matrices. Mathematics of Control, Signals, and Systems, 22:317–342, 2011. doi:10.1007/s00498-011-0059-6.
