
γ-matrices: a new class of simultaneously diagonalizable matrices

Antonio Boccuto, Ivan Gerace, Valentina Giorgetti and Federico Greco

Laboratorio di Matematica Computazionale “Sauro Tulipani”
Dipartimento di Matematica e Informatica
Università degli Studi di Perugia
Via Vanvitelli, 1, I-06123 Perugia (Italy)

arXiv:2107.05890v1 [math.NA] 13 Jul 2021

July 14, 2021

Abstract
In order to precondition Toeplitz systems, we present a new class of simultaneously diagonalizable real matrices, the γ-matrices, which includes both the symmetric circulant matrices and a subclass of the set of all reverse circulant matrices. We define algorithms for the fast computation of the product between a γ-matrix and a real vector and between two γ-matrices. Moreover, we illustrate a technique for approximating a real symmetric Toeplitz matrix by a γ-matrix, and we show that the eigenvalues of the preconditioned matrix are clustered around zero, with the exception of at most a finite number of terms.

1 Introduction
Toeplitz-type linear systems arise in the numerical approximation of differential equations
(see, e.g., [7], [23]). Moreover, Toeplitz matrices often occur in the restoration of blurred
images (see, e.g., [10], [11], [20], [24]). The related linear systems are often ill-conditioned;
thus, preconditioning the system turns out to be very useful for the stability of
the result. In particular, if the linear system is solved by means of the conjugate gradient
method, preconditioning increases the speed of convergence.
Recently, there have been several studies on simultaneously diagonalizable real
matrices. In particular, in investigating the preconditioning of Toeplitz matrices (see,

e.g., [21]), several matrix algebras are considered, among which the class of all circulant
matrices (see, e.g., [6], [15], [16], [17], [26], [32], [42], [45]), the family of all ω-circulant
matrices (see, e.g., [7], [29]), the set of all τ-matrices (see, e.g., [8]) and the family of all
matrices diagonalized by the Hartley transform (see, e.g., [3], [9], [12]).
In this paper we investigate a particular class of simultaneously diagonalizable real
matrices, whose elements we call γ-matrices. Such a class, similarly to that of the matrices
diagonalizable by the Hartley transform (see, e.g., [9]), includes the symmetric circulant
matrices and a subclass of the family of all reverse circulant matrices. Symmetric circulant
matrices have several applications to ordinary and partial differential equations (see,
e.g., [28], [30], [37], [38]), image and signal restoration (see, e.g., [14], [40]), and graph theory
(see, e.g., [18], [23], [31], [33], [35], [36]). Reverse circulant matrices have various
applications, for instance in exponential data fitting and signal processing (see, e.g., [2],
[4], [22], [46], [47]).
We first analyze both spectral and structural properties of this family of matrices.
Subsequently, we deal with the problem of real fast transform algorithms (see, e.g., [1], [3],
[5], [25], [27], [34], [39], [43], [44], [48], [50]). In particular, we define algorithms
for the fast computation of the product between a γ-matrix and a real vector and between two
γ-matrices. We show that the computational cost of a multiplication between a γ-matrix
and a real vector is at most (7/4) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n)
multiplications, while the computational cost of a multiplication between two γ-matrices
is at most (9/2) n log₂ n + o(n log₂ n) additions and (3/2) n log₂ n + o(n log₂ n) multiplications.
Finally, we present a technique for approximating a real symmetric Toeplitz matrix by a
γ-matrix, and we show that the eigenvalues of the preconditioned matrix are clustered
around zero, with the exception of at most a finite number of terms.
The paper is organized as follows. In Section 2 we investigate spectral properties of γ-matrices, and in Section 3
structural properties. In Section 4 we define the real fast transforms by means of which
it is possible to implement the product between a γ-matrix and a real vector. In Section
5 we use the previously defined transforms to implement the multiplication between two
γ-matrices. In Section 6 we deal with the problem of preconditioning a real symmetric
Toeplitz matrix by a γ-matrix.

2 Spectral characterization of γ-matrices


To present a new class of simultaneously diagonalizable matrices, we first define the following matrix. Let n be a fixed positive integer, and let Q_n = (q_{k,j}^{(n)}), k, j = 0, 1, ..., n − 1, where

$$
q_{k,j}^{(n)} =
\begin{cases}
\alpha_j \cos\left(\dfrac{2\pi k j}{n}\right) & \text{if } 0 \le j \le \lfloor n/2 \rfloor,\\[2ex]
\alpha_j \sin\left(\dfrac{2\pi k (n-j)}{n}\right) & \text{if } \lfloor n/2 \rfloor < j \le n-1,
\end{cases}
\tag{1}
$$

$$
\alpha_j =
\begin{cases}
\dfrac{1}{\sqrt{n}} & \text{if } j = 0, \text{ or } j = n/2 \text{ when } n \text{ is even},\\[2ex]
\sqrt{\dfrac{2}{n}} & \text{otherwise},
\end{cases}
\tag{2}
$$
and put

$$
Q_n = \left( q^{(0)} \;\; q^{(1)} \;\; \cdots \;\; q^{(\lfloor n/2 \rfloor)} \;\; q^{(\lfloor (n+1)/2 \rfloor)} \;\; \cdots \;\; q^{(n-2)} \;\; q^{(n-1)} \right),
\tag{3}
$$

where

$$
q^{(0)} = \frac{1}{\sqrt{n}} \left( 1 \;\; 1 \;\; \cdots \;\; 1 \right)^T = \frac{1}{\sqrt{n}}\, u^{(0)},
\tag{4}
$$

$$
q^{(j)} = \sqrt{\frac{2}{n}} \left( 1 \;\; \cos\frac{2\pi j}{n} \;\; \cdots \;\; \cos\frac{2\pi j (n-1)}{n} \right)^T = \sqrt{\frac{2}{n}}\, u^{(j)},
$$

$$
q^{(n-j)} = \sqrt{\frac{2}{n}} \left( 0 \;\; \sin\frac{2\pi j}{n} \;\; \cdots \;\; \sin\frac{2\pi j (n-1)}{n} \right)^T = \sqrt{\frac{2}{n}}\, v^{(j)},
\tag{5}
$$

for j = 1, 2, ..., ⌊(n−1)/2⌋. Moreover, when n is even, set

$$
q^{(n/2)} = \frac{1}{\sqrt{n}} \left( 1 \;\; -1 \;\; 1 \;\; -1 \;\; \cdots \;\; -1 \right)^T = \frac{1}{\sqrt{n}}\, u^{(n/2)}.
\tag{6}
$$
In [41] it is proved that the columns of Q_n are orthonormal, and thus Q_n is an orthogonal
matrix.
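The construction (1)-(2) is easy to check numerically. The following sketch (assuming NumPy; the helper name `gamma_basis` is ours, not from the paper) builds Q_n column by column and verifies its orthogonality.

```python
import numpy as np

def gamma_basis(n):
    """Build Q_n of (1)-(2): column j is a scaled cosine for j <= floor(n/2),
    and a scaled sine (in the reflected index n - j) for larger j."""
    Q = np.empty((n, n))
    k = np.arange(n)
    for j in range(n):
        # alpha_j of (2)
        a = 1 / np.sqrt(n) if j == 0 or (n % 2 == 0 and j == n // 2) else np.sqrt(2 / n)
        if j <= n // 2:
            Q[:, j] = a * np.cos(2 * np.pi * k * j / n)
        else:
            Q[:, j] = a * np.sin(2 * np.pi * k * (n - j) / n)
    return Q

Q = gamma_basis(8)
print(np.allclose(Q @ Q.T, np.eye(8)))   # columns are orthonormal
```

The same check passes for odd n, where only the j = 0 column carries the 1/√n weight.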
Now we define the following function. Given λ ∈ Cⁿ, λ = (λ_0 λ_1 ⋯ λ_{n−1})^T, set

$$
\operatorname{diag}(\lambda) = \Lambda =
\begin{pmatrix}
\lambda_0 & 0 & \cdots & 0 & 0\\
0 & \lambda_1 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & \lambda_{n-2} & 0\\
0 & 0 & \cdots & 0 & \lambda_{n-1}
\end{pmatrix},
$$

where Λ ∈ C^{n×n} is a diagonal matrix.


A vector λ ∈ Rⁿ, λ = (λ_0 λ_1 ⋯ λ_{n−1})^T, is said to be symmetric (resp., asymmetric) iff
λ_j = λ_{(n−j) mod n} (resp., λ_j = −λ_{(n−j) mod n}) for every j = 0, 1, ..., ⌊n/2⌋.
Let Q_n be as in (3), and let G_n be the space of the matrices simultaneously diagonalizable
by Q_n, that is,

G_n = sd(Q_n) = { Q_n Λ Q_n^T : Λ = diag(λ), λ ∈ Rⁿ }.

A matrix belonging to G_n, n ∈ N, is called a γ-matrix. Moreover, we define the following
classes:

C_n = { Q_n Λ Q_n^T : Λ = diag(λ), λ ∈ Rⁿ, λ symmetric },

B_n = { Q_n Λ Q_n^T : Λ = diag(λ), λ ∈ Rⁿ, λ asymmetric },

D_n = { Q_n Λ Q_n^T : Λ = diag(λ), λ ∈ Rⁿ, λ symmetric, λ_0 = 0, and λ_{n/2} = 0 if n is even },

E_n = { Q_n Λ Q_n^T : Λ = diag(λ), λ ∈ Rⁿ, λ_j = 0 for j = 1, ..., n−1, j ≠ n/2 when n is even }.

Proposition 2.1. The class Gn is a matrix algebra of dimension n.

Proof. We prove that G_n is an algebra. Let I_n be the identity n × n matrix. Since Q_n is
orthogonal, we have Q_n I_n Q_n^T = I_n. Hence, I_n ∈ G_n.
If C ∈ G_n is non-singular, C = Q_n Λ Q_n^T and Λ is diagonal, then C⁻¹ = Q_n Λ⁻¹ Q_n^T,
and hence C⁻¹ ∈ G_n, since Λ⁻¹ is diagonal.
Moreover, if C_r ∈ G_n, α_r ∈ R, C_r = Q_n Λ_r Q_n^T and Λ_r is diagonal, r = 1, 2, then α₁C₁ +
α₂C₂ = Q_n(α₁Λ₁ + α₂Λ₂)Q_n^T ∈ G_n, since α₁Λ₁ + α₂Λ₂ is diagonal. Furthermore, C₁C₂ =
Q_n Λ₁ Q_n^T Q_n Λ₂ Q_n^T = Q_n Λ₁Λ₂ Q_n^T ∈ G_n, since Λ₁Λ₂ is diagonal. Therefore, G_n is an algebra.
Now we claim that dim(G_n) = n. By contradiction, let λ^(1) ≠ λ^(2) ∈ Rⁿ be
such that Q_n diag(λ^(1)) Q_n^T = Q_n diag(λ^(2)) Q_n^T = C. Then the elements of λ^(2) are obtained
by a suitable permutation of those of λ^(1). Since the order of the eigenvectors of C has
been fixed, if a component λ_j^(1) of λ^(1) is equal to a component λ_k^(2) of λ^(2), then
q^(j) and q^(k) belong to the same eigenspace, and hence λ_j^(1) = λ_j^(2) = λ_k^(1) = λ_k^(2). This
implies that λ^(1) = λ^(2), a contradiction. This ends the proof.
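A small numerical illustration of the algebra structure (a sketch assuming NumPy; `gamma_basis`, which assembles Q_n as in (1)-(2), is our own helper name): products of γ-matrices are again γ-matrices, with eigenvalues multiplied componentwise, and the products commute because diagonal matrices do.

```python
import numpy as np

def gamma_basis(n):
    # Q_n from (1)-(2)
    Q = np.empty((n, n))
    k = np.arange(n)
    for j in range(n):
        a = 1/np.sqrt(n) if j == 0 or (n % 2 == 0 and j == n//2) else np.sqrt(2/n)
        Q[:, j] = a * (np.cos(2*np.pi*k*j/n) if j <= n//2 else np.sin(2*np.pi*k*(n-j)/n))
    return Q

n = 8
rng = np.random.default_rng(5)
l1, l2 = rng.standard_normal(n), rng.standard_normal(n)
Q = gamma_basis(n)
G1 = Q @ np.diag(l1) @ Q.T
G2 = Q @ np.diag(l2) @ Q.T

# closure: G1 G2 = Q diag(l1 * l2) Q^T, and the algebra is commutative
print(np.allclose(G1 @ G2, Q @ np.diag(l1 * l2) @ Q.T))
print(np.allclose(G1 @ G2, G2 @ G1))
```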

Proposition 2.2. The class C_n is a subalgebra of G_n of dimension ⌊n/2⌋ + 1.

Proof. Obviously, C_n ⊂ G_n. Now we claim that C_n is an algebra. First, note that
I_n = Q_n I_n Q_n^T and I_n = diag(1), where 1 = (1 1 ⋯ 1)^T. So, I_n ∈ C_n, since 1 is
symmetric.
If C ∈ C_n is non-singular, C = Q_n Λ Q_n^T, Λ = diag(λ), and λ = (λ_0 λ_1 ⋯ λ_{n−1})^T is symmetric,
then C⁻¹ = Q_n Λ⁻¹ Q_n^T, and so C⁻¹ ∈ C_n, as Λ⁻¹ = diag(λ′), where λ′ = (1/λ_0 1/λ_1 ⋯ 1/λ_{n−1})^T
is symmetric too.
If C_r ∈ C_n, α_r ∈ R, C_r = Q_n Λ_r Q_n^T, Λ_r = diag(λ^(r)), where λ^(r) = (λ_0^(r) λ_1^(r) ⋯ λ_{n−1}^(r))^T is symmetric,
r = 1, 2, then α₁C₁ + α₂C₂ = Q_n(α₁Λ₁ + α₂Λ₂)Q_n^T ∈ C_n, since α₁Λ₁ + α₂Λ₂ = diag(λ*),
and λ* = (α₁λ_0^(1) + α₂λ_0^(2) ⋯ α₁λ_{n−1}^(1) + α₂λ_{n−1}^(2))^T is symmetric. Furthermore, C₁C₂ =
Q_n Λ₁Λ₂ Q_n^T, since Λ₁Λ₂ = diag(λ*), where λ* = (λ_0^(1)λ_0^(2) ⋯ λ_{n−1}^(1)λ_{n−1}^(2))^T is symmetric.
Therefore, C_n is an algebra.
Now we prove that dim(C_n) = ⌊n/2⌋ + 1. By the definition of C_n, it is possible to choose
at most ⌊n/2⌋ + 1 elements of λ. The proof is analogous to that of the last part of Proposition
2.1.

Proposition 2.3. The class B_n is a linear subspace of G_n of dimension ⌊(n−1)/2⌋.

Proof. First, let us prove that B_n is a linear subspace of G_n. For r = 1, 2, let C_r ∈
B_n, α_r ∈ R, Λ_r = diag(λ^(r)), with λ^(r) asymmetric, and C_r = Q_n Λ_r Q_n^T. Then α₁C₁ +
α₂C₂ = Q_n(α₁Λ₁ + α₂Λ₂)Q_n^T, with α₁Λ₁ + α₂Λ₂ = diag(λ*), and λ* = (α₁λ_0^(1) + α₂λ_0^(2)
⋯ α₁λ_{n−1}^(1) + α₂λ_{n−1}^(2))^T is asymmetric. Therefore, α₁C₁ + α₂C₂ ∈ B_n. So, B_n is a linear
subspace of G_n.
Now we prove that dim(B_n) = ⌊(n−1)/2⌋. By the definition of B_n, it is possible to choose
at most ⌊(n−1)/2⌋ elements of λ, because λ_0 = 0 and λ_{n/2} = 0 when n is even. The proof is
analogous to that of the last part of Proposition 2.1.
Similarly to Propositions 2.1 and 2.2, it is possible to prove that D_n is a subalgebra
of G_n of dimension ⌊(n−1)/2⌋ and that E_n is a subalgebra of G_n of dimension 1 when n is odd and
2 when n is even. Now we prove the following.
Theorem 2.4. One has

G_n = C_n ⊕ B_n,  (7)

where ⊕ is the orthogonal sum with respect to the Frobenius product ⟨·, ·⟩, defined by

⟨G₁, G₂⟩ = tr(G₁^T G₂), G₁, G₂ ∈ G_n,

where tr(G) is the trace of the matrix G.


Proof. Observe that, to prove (7), it is enough to demonstrate the following properties:

2.4.1) C_n ∩ B_n = {O_n}, where O_n ∈ R^{n×n} is the matrix whose entries are all equal to 0;

2.4.2) for any G ∈ G_n, there exist C ∈ C_n and B ∈ B_n with G = C + B;

2.4.3) for any C ∈ C_n and B ∈ B_n, it is C + B ∈ G_n;

2.4.4) ⟨C, B⟩ = 0 for each C ∈ C_n and B ∈ B_n.

2.4.1) Let G ∈ C_n ∩ B_n. Then G = Q_n Λ^(G) Q_n^T, where Λ^(G) = diag(λ^(G)) and λ^(G)
is both symmetric and asymmetric. But this is possible if and only if λ^(G) = 0, where 0
is the vector whose components are all equal to 0. Thus, Λ^(G) = O_n and hence G = O_n. This
proves 2.4.1).
2.4.2) Let G ∈ G_n and Λ^(G) ∈ R^{n×n} be such that G = Q_n Λ^(G) Q_n^T, Λ^(G) = diag(λ^(G)) =
diag(λ_0^(G) λ_1^(G) ⋯ λ_{n−1}^(G)). For j ∈ {0, 1, ..., n−1}, set

$$
\lambda_j^{(C)} = \frac{\lambda_j^{(G)} + \lambda_{(n-j) \bmod n}^{(G)}}{2}, \qquad
\lambda_j^{(B)} = \frac{\lambda_j^{(G)} - \lambda_{(n-j) \bmod n}^{(G)}}{2}.
$$

For r ∈ {C, B}, set Λ^(r) = diag(λ_0^(r) λ_1^(r) ⋯ λ_{n−1}^(r)), C = Q_n Λ^(C) Q_n^T and B = Q_n Λ^(B) Q_n^T.
Observe that λ^(G) = λ^(C) + λ^(B), where λ^(C) is symmetric and λ^(B) is asymmetric.
Hence, C ∈ C_n and B ∈ B_n. This proves 2.4.2).
2.4.3) Let C ∈ C_n and B ∈ B_n. For r ∈ {C, B}, set Λ^(r) = diag(λ_0^(r) λ_1^(r) ⋯ λ_{n−1}^(r)),
C = Q_n Λ^(C) Q_n^T and B = Q_n Λ^(B) Q_n^T. Note that λ^(C) is symmetric and λ^(B) is asymmetric.
We have C + B = Q_n(Λ^(C) + Λ^(B))Q_n^T ∈ G_n.
2.4.4) Choose arbitrarily C ∈ C_n and B ∈ B_n. For r ∈ {C, B}, put Λ^(r) = diag(λ_0^(r)
λ_1^(r) ⋯ λ_{n−1}^(r)), C = Q_n Λ^(C) Q_n^T and B = Q_n Λ^(B) Q_n^T. Observe that λ^(C) is symmetric and
λ^(B) is asymmetric; in particular, λ_0^(B) = 0 and λ_{n/2}^(B) = 0 when n is even. Note that C^T B =
Q_n Λ^(C) Λ^(B) Q_n^T, where Λ^(C) Λ^(B) = diag(λ_0^(C)λ_0^(B) λ_1^(C)λ_1^(B) ⋯ λ_{n−1}^(C)λ_{n−1}^(B)). Thus, we obtain

$$
\langle C, B \rangle = \operatorname{tr}(C^T B)
= \sum_{j=0}^{n-1} \lambda_j^{(C)} \lambda_j^{(B)}
= \sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \left( \lambda_j^{(C)} \lambda_j^{(B)} + \lambda_{n-j}^{(C)} \lambda_{n-j}^{(B)} \right)
= \sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \left( \lambda_j^{(C)} \lambda_j^{(B)} - \lambda_j^{(C)} \lambda_j^{(B)} \right) = 0,
$$

that is 2.4.4). This ends the proof.
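The splitting in 2.4.2) can be checked numerically as follows (a sketch assuming NumPy; `gamma_basis` is our helper building Q_n from (1)-(2)). The call `np.roll(lam[::-1], 1)` realizes the index map j ↦ (n − j) mod n.

```python
import numpy as np

def gamma_basis(n):
    # Q_n from (1)-(2)
    Q = np.empty((n, n))
    k = np.arange(n)
    for j in range(n):
        a = 1/np.sqrt(n) if j == 0 or (n % 2 == 0 and j == n//2) else np.sqrt(2/n)
        Q[:, j] = a * (np.cos(2*np.pi*k*j/n) if j <= n//2 else np.sin(2*np.pi*k*(n-j)/n))
    return Q

n = 8
rng = np.random.default_rng(0)
lam = rng.standard_normal(n)                 # arbitrary eigenvalues -> G in G_n
rev = np.roll(lam[::-1], 1)                  # entry j is lam_{(n-j) mod n}
lam_c = (lam + rev) / 2                      # symmetric part  lambda^(C)
lam_b = (lam - rev) / 2                      # asymmetric part lambda^(B)

Q = gamma_basis(n)
G = Q @ np.diag(lam) @ Q.T
C = Q @ np.diag(lam_c) @ Q.T
B = Q @ np.diag(lam_b) @ Q.T

print(np.allclose(G, C + B))                 # G = C + B
print(np.isclose(np.trace(C.T @ B), 0.0))    # Frobenius orthogonality 2.4.4)
```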


Theorem 2.5. It is

C_n = D_n ⊕ E_n,  (8)

where ⊕ is the orthogonal sum with respect to the Frobenius product.

Proof. Analogously to Theorem 2.4, to get (8) it is sufficient to prove the following
properties:

2.5.1) D_n ∩ E_n = {O_n}, where O_n ∈ R^{n×n} is the matrix whose entries are all equal to 0;

2.5.2) for any C ∈ C_n, there exist C₁ ∈ D_n and C₂ ∈ E_n with C = C₁ + C₂;

2.5.3) for every C₁ ∈ D_n and C₂ ∈ E_n, we get that C₁ + C₂ ∈ C_n;

2.5.4) ⟨C₁, C₂⟩ = 0 for each C₁ ∈ D_n and C₂ ∈ E_n.

2.5.1) Let C ∈ D_n ∩ E_n. Then C = Q_n Λ Q_n^T, where Λ = diag(λ), λ is symmetric
and such that λ_0 = 0 and λ_{n/2} = 0 when n is even, because C ∈ D_n. Moreover, since
C ∈ E_n, we get that λ_j = 0 for j = 1, ..., n−1, j ≠ n/2 when n is even, that is λ = 0.
Thus, Λ = O_n and hence C = O_n. This proves 2.5.1).
2.5.2) Let C ∈ C_n and Λ ∈ R^{n×n} be such that C = Q_n Λ Q_n^T, Λ = diag(λ) = diag(λ_0
λ_1 ⋯ λ_{n−1}). For j ∈ {0, 1, ..., n−1}, set

$$
\lambda_j^{(1)} = \begin{cases} \lambda_j & \text{if } j \ne 0 \text{ and } j \ne n/2,\\ 0 & \text{otherwise}, \end{cases}
\qquad
\lambda_j^{(2)} = \begin{cases} \lambda_j & \text{if } j = 0 \text{ or } j = n/2,\\ 0 & \text{otherwise}. \end{cases}
$$

For r = 1, 2, set Λ_r = diag(λ_0^(r) λ_1^(r) ⋯ λ_{n−1}^(r)) and C_r = Q_n Λ_r Q_n^T. Note that λ = λ^(1) +
λ^(2), where λ^(1) is symmetric. Hence, C₁ ∈ D_n and C₂ ∈ E_n. This proves 2.5.2).
2.5.3) Let C₁ ∈ D_n and C₂ ∈ E_n. For r = 1, 2, there is Λ_r = diag(λ^(r)) = diag(λ_0^(r)
λ_1^(r) ⋯ λ_{n−1}^(r)) such that C_r = Q_n Λ_r Q_n^T, λ^(1) is symmetric, λ_0^(1) = 0 and λ_{n/2}^(1) = 0 when
n is even, and λ_j^(2) = 0 for j = 1, ..., n−1, j ≠ n/2 when n is even. Thus, it is not difficult to
check that λ^(1) + λ^(2) is symmetric. So, C₁ + C₂ = Q_n diag(λ^(1) + λ^(2)) Q_n^T ∈ C_n.
2.5.4) Pick C₁ ∈ D_n and C₂ ∈ E_n. Then for r = 1, 2 there exists Λ_r = diag(λ^(r)) =
diag(λ_0^(r) λ_1^(r) ⋯ λ_{n−1}^(r)) such that C_r = Q_n Λ_r Q_n^T, λ^(1) is symmetric, λ_0^(1) = 0 and λ_{n/2}^(1) = 0
when n is even. Note that C₁^T C₂ = Q_n Λ₁Λ₂ Q_n^T, where

$$
\Lambda_1 \Lambda_2 = \operatorname{diag}\left(\lambda_0^{(1)}\lambda_0^{(2)} \;\; \lambda_1^{(1)}\lambda_1^{(2)} \;\; \cdots \;\; \lambda_{n-1}^{(1)}\lambda_{n-1}^{(2)}\right) = \operatorname{diag}(\mathbf{0}) = O_n.
$$

Therefore, C₁^T C₂ = O_n, and thus we get ⟨C₁, C₂⟩ = tr(C₁^T C₂) = 0, that is 2.5.4). This ends
the proof.
Now we give a consequence of Theorems 2.4 and 2.5.

Corollary 2.5.1. The following result holds:

G_n = B_n ⊕ D_n ⊕ E_n.

Proposition 2.6. Given two γ-matrices G₁, G₂, we get:

2.6.1) if G₁, G₂ ∈ C_n, then G₁G₂ ∈ C_n;

2.6.2) if G₁, G₂ ∈ B_n, then G₁G₂ ∈ C_n;

2.6.3) if G₁ ∈ C_n and G₂ ∈ B_n, then G₁G₂ = G₂G₁ ∈ B_n.

Proof. 2.6.1) It follows immediately from the fact that C_n is an algebra (Proposition 2.2).

2.6.2) If G₁ = Q_n Λ^(G₁) Q_n^T and G₂ = Q_n Λ^(G₂) Q_n^T, then G₁G₂ = Q_n Λ^(G₁) Λ^(G₂) Q_n^T,
where Λ^(G₁) Λ^(G₂) = diag(λ_0^(G₁)λ_0^(G₂) λ_1^(G₁)λ_1^(G₂) ⋯ λ_{n−1}^(G₁)λ_{n−1}^(G₂)). Since the eigenvalues
of G₁ and G₂ are asymmetric, we get that the eigenvalues of G₁G₂ are symmetric.
Hence, G₁G₂ ∈ C_n.
2.6.3) We first note that, since G_n is a commutative algebra (diagonal matrices commute), we get G₁G₂ = G₂G₁. Since the
eigenvalues of G₁ are symmetric and those of G₂ are asymmetric, arguing analogously to
the proof of 2.6.2) it is possible to check that the eigenvalues of G₁G₂ are asymmetric.
Therefore, G₁G₂ ∈ B_n.

3 Structural characterizations of γ-matrices


In this section we show that C_n coincides with the set of all real symmetric circulant
matrices, and that B_n is a particular subclass of the class of all reverse circulant matrices.
We consider the following families of matrices:

L_{n,k} = { A ∈ R^{n×n} : there is a = (a_0 a_1 ... a_{n−1})^T ∈ Rⁿ with a_{l,j} = a_{(j+kl) mod n} },

H_{n,k} = { A ∈ R^{n×n} : there is a symmetric a = (a_0 a_1 ... a_{n−1})^T ∈ Rⁿ with a_{l,j} = a_{(j+kl) mod n} },

J_{n,k} = { A ∈ R^{n×n} : there is a symmetric a = (a_0 a_1 ... a_{n−1})^T ∈ Rⁿ with
∑_{t=0}^{n−1} a_t = 0, ∑_{t=0}^{n−1} (−1)^t a_t = 0 when n is even, and a_{l,j} = a_{(j+kl) mod n} },
where k ∈ {1, 2, . . . , n − 1}.
When k = n − 1, Ln,n−1 is the class of all real circulant matrices, that is the family
of those matrices C ∈ Rn×n such that every row, after the first, has the elements of the
previous one shifted cyclically one place right (see, e.g., [19]).
Given a vector c ∈ Rⁿ, c = (c_0 c_1 ⋯ c_{n−1})^T, let us define

$$
\operatorname{circ}(c) = C =
\begin{pmatrix}
c_0 & c_1 & c_2 & \cdots & c_{n-2} & c_{n-1}\\
c_{n-1} & c_0 & c_1 & \cdots & c_{n-3} & c_{n-2}\\
c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-4} & c_{n-3}\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
c_2 & c_3 & c_4 & \cdots & c_0 & c_1\\
c_1 & c_2 & c_3 & \cdots & c_{n-1} & c_0
\end{pmatrix},
$$

where C ∈ L_{n,n−1}.
If i is the imaginary unit and ω_n = e^{2πi/n}, then the n-th roots of 1 are

$$
\omega_n^j = e^{2\pi i j / n} = \cos\left(\frac{2\pi j}{n}\right) + i \sin\left(\frac{2\pi j}{n}\right), \qquad j = 0, 1, \ldots, n-1.
$$

The Fourier matrix of dimension n × n is defined by F_n = (f_{k,l}^{(n)}), where

$$
f_{k,l}^{(n)} = \frac{1}{\sqrt{n}}\, \omega_n^{kl}, \qquad k, l = 0, 1, \ldots, n-1.
$$

Note that F_n is symmetric and F_n^{−1} = F_n^∗ (see, e.g., [19]).


Let F_n (calligraphic) denote the space of all real matrices simultaneously diagonalizable by F_n, that is,

F_n = sd(F_n) = { F_n Λ F_n^∗ ∈ R^{n×n} : Λ = diag(λ), λ ∈ Cⁿ }.

It is not difficult to see that F_n is a commutative matrix algebra.


Theorem 3.1. ([19, Theorems 3.2.2 and 3.2.3]) The following result holds:

F_n = L_{n,n−1}.

As a consequence of this theorem, we get that the n eigenvectors of every circulant
matrix C ∈ R^{n×n} are given by

$$
w^{(j)} = \left( 1 \;\; \omega_n^j \;\; \omega_n^{2j} \;\; \cdots \;\; \omega_n^{(n-1)j} \right)^T,
$$

and the eigenvalues of a matrix C = circ(c) ∈ F_n are expressed by

$$
\lambda_j = c^T w^{(j)} = \sum_{k=0}^{n-1} c_k\, \omega_n^{jk}, \qquad j = 0, 1, \ldots, n-1.
$$
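As a quick check of the eigenvalue formula (a sketch assuming NumPy; the helper name `circ` is ours), note that NumPy's FFT uses the kernel e^{−2πi/n}, so n · ifft(c) produces exactly Σ_k c_k ω_n^{jk} in the convention above.

```python
import numpy as np

def circ(c):
    """circ(c): row l is c shifted cyclically l places right, as in the text."""
    n = len(c)
    return np.array([[c[(j - l) % n] for j in range(n)] for l in range(n)])

c = np.array([4.0, 1.0, 2.0, 1.0])   # symmetric first row -> symmetric circulant
C = circ(c)

# lambda_j = sum_k c_k omega^{jk} with omega = e^{2 pi i / n}: this is n * ifft(c)
lam = len(c) * np.fft.ifft(c)
print(np.allclose(np.sort(np.linalg.eigvals(C).real), np.sort(lam.real)))
```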

Now we present some results about real symmetric circulant matrices. Observe that,
if C = circ(c) with c ∈ Rⁿ, then C is symmetric if and only if c is symmetric. Thus, the
class of all real symmetric circulant matrices coincides with H_{n,n−1} and has dimension
⌊n/2⌋ + 1 over R.

Theorem 3.2. (see, e.g., [18, §4], [41, Lemma 3]) Let C ∈ H_{n,n−1}. Then, the set of
all eigenvectors of C can be expressed as {q^(0), q^(1), ..., q^(n−1)}, where q^(j), j = 0,
1, ..., n−1, is as in (4), (5) and (6).

Note that from Theorem 3.2 it follows that the set of all real symmetric circulant
matrices is contained in G_n. The next result holds.

Theorem 3.3. (see, e.g., [13, §1.2], [18, §4], [51, Theorem 1]) Let C = circ(c) ∈ H_{n,n−1}.
Then, the eigenvalues λ_j of C, j = 0, 1, ..., ⌊n/2⌋, are given by

λ_j = c^T u^(j).  (9)

Moreover, for j = 1, 2, ..., ⌊(n−1)/2⌋ it is

λ_j = λ_{n−j}.

From Theorem 3.3 it follows that, if C is a real symmetric circulant matrix and λ^(C)
is the vector of its eigenvalues, then λ^(C) is symmetric, thanks to (9). Hence,

H_{n,n−1} ⊂ C_n.  (10)

Now we prove that C_n is contained in the class of all real symmetric circulant matrices
H_{n,n−1}. First, we give the following.

Theorem 3.4. Every matrix C ∈ C_n is circulant, that is,

C_n ⊂ L_{n,n−1}.  (11)
Proof. Let C ∈ C_n, C = (c_{k,l}), and let Λ^(C) = diag(λ_0^(C) λ_1^(C) ⋯ λ_{n−1}^(C)) be such that λ_j^(C) =
λ_{(n−j) mod n}^(C) for every j ∈ {0, 1, ..., n−1} and C = Q_n Λ^(C) Q_n^T. We have

$$
c_{k,l} = \sum_{j=0}^{n-1} q^{(n)}_{k,j}\,\lambda^{(C)}_j\,q^{(n)}_{l,j}.
$$

From this we get, if n is even,

$$
c_{k,l} = \lambda^{(C)}_0 q^{(n)}_{k,0} q^{(n)}_{l,0} + \lambda^{(C)}_{n/2} q^{(n)}_{k,n/2} q^{(n)}_{l,n/2} + \sum_{j=1}^{n/2-1} \lambda^{(C)}_j \left( q^{(n)}_{k,j} q^{(n)}_{l,j} + q^{(n)}_{k,n-j} q^{(n)}_{l,n-j} \right), \tag{12}
$$

and, if n is odd,

$$
c_{k,l} = \lambda^{(C)}_0 q^{(n)}_{k,0} q^{(n)}_{l,0} + \sum_{j=1}^{(n-1)/2} \lambda^{(C)}_j \left( q^{(n)}_{k,j} q^{(n)}_{l,j} + q^{(n)}_{k,n-j} q^{(n)}_{l,n-j} \right). \tag{13}
$$

When n is even, from (1) and (12) we deduce

$$
c_{k,l} = \frac{\lambda^{(C)}_0}{n} + (-1)^{k-l}\frac{\lambda^{(C)}_{n/2}}{n} + \frac{2}{n}\sum_{j=1}^{n/2-1} \lambda^{(C)}_j \left( \cos\frac{2\pi k j}{n}\cos\frac{2\pi l j}{n} + \sin\frac{2\pi k j}{n}\sin\frac{2\pi l j}{n} \right)
= \frac{\lambda^{(C)}_0}{n} + (-1)^{k-l}\frac{\lambda^{(C)}_{n/2}}{n} + \frac{2}{n}\sum_{j=1}^{n/2-1} \lambda^{(C)}_j \cos\frac{2\pi (k-l) j}{n}.
$$

Let c = (c_0 c_1 ⋯ c_{n−1})^T, where

$$
c_t = \frac{\lambda^{(C)}_0}{n} + (-1)^{t}\frac{\lambda^{(C)}_{n/2}}{n} + \frac{2}{n}\sum_{j=1}^{n/2-1} \lambda^{(C)}_j \cos\frac{2\pi t j}{n}, \qquad t \in \{0, 1, \ldots, n-1\}.
$$

Then we get C = circ(c), since for any k, l ∈ {0, 1, ..., n−1} it is c_{k,l} = c_{(k−l) mod n}.
When n is odd, from (1) and (13) we obtain

$$
c_{k,l} = \frac{\lambda^{(C)}_0}{n} + \frac{2}{n}\sum_{j=1}^{(n-1)/2} \lambda^{(C)}_j \left( \cos\frac{2\pi k j}{n}\cos\frac{2\pi l j}{n} + \sin\frac{2\pi k j}{n}\sin\frac{2\pi l j}{n} \right)
= \frac{\lambda^{(C)}_0}{n} + \frac{2}{n}\sum_{j=1}^{(n-1)/2} \lambda^{(C)}_j \cos\frac{2\pi (k-l) j}{n}.
$$

Let c = (c_0 c_1 ⋯ c_{n−1})^T, where

$$
c_t = \frac{\lambda^{(C)}_0}{n} + \frac{2}{n}\sum_{j=1}^{(n-1)/2} \lambda^{(C)}_j \cos\frac{2\pi t j}{n}, \qquad t \in \{0, 1, \ldots, n-1\}.
$$

Hence, C = circ(c), because for each k, l ∈ {0, 1, ..., n−1} it is c_{k,l} = c_{(k−l) mod n}. Therefore,
C_n ⊂ L_{n,n−1}.
A consequence of Theorem 3.4 is the following.

Corollary 3.4.1. The class C_n is the set of all real symmetric circulant matrices, that is,

C_n = H_{n,n−1}.  (14)

Proof. Since every matrix belonging to C_n is symmetric, we get that (14) is a consequence
of (10) and (11).
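Corollary 3.4.1 can be observed numerically: choosing a symmetric λ and forming Q_n diag(λ) Q_n^T yields a symmetric circulant matrix. A sketch assuming NumPy (`gamma_basis` is our helper for (1)-(2)):

```python
import numpy as np

def gamma_basis(n):
    # Q_n from (1)-(2)
    Q = np.empty((n, n))
    k = np.arange(n)
    for j in range(n):
        a = 1/np.sqrt(n) if j == 0 or (n % 2 == 0 and j == n//2) else np.sqrt(2/n)
        Q[:, j] = a * (np.cos(2*np.pi*k*j/n) if j <= n//2 else np.sin(2*np.pi*k*(n-j)/n))
    return Q

n = 8
rng = np.random.default_rng(1)
lam = rng.standard_normal(n)
lam = (lam + np.roll(lam[::-1], 1)) / 2   # symmetrize: lam_j = lam_{(n-j) mod n}

Q = gamma_basis(n)
C = Q @ np.diag(lam) @ Q.T

c = C[0]                                  # first row
circulant = np.array([[c[(j - l) % n] for j in range(n)] for l in range(n)])
print(np.allclose(C, circulant), np.allclose(C, C.T))
```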
If k = 1, then L_{n,1} is the set of all real reverse circulant (or real anti-circulant) matrices,
that is, the class of all matrices B ∈ R^{n×n} such that every row, after the first, has
the elements of the previous one shifted cyclically one place left (see, e.g., [19]). Given a
vector b = (b_0 b_1 ⋯ b_{n−1})^T ∈ Rⁿ, set

$$
\operatorname{rcirc}(b) = B =
\begin{pmatrix}
b_0 & b_1 & b_2 & \cdots & b_{n-2} & b_{n-1}\\
b_1 & b_2 & b_3 & \cdots & b_{n-1} & b_0\\
b_2 & b_3 & b_4 & \cdots & b_0 & b_1\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
b_{n-2} & b_{n-1} & b_0 & \cdots & b_{n-4} & b_{n-3}\\
b_{n-1} & b_0 & b_1 & \cdots & b_{n-3} & b_{n-2}
\end{pmatrix},
$$

with B ∈ L_{n,1}.
Observe that every matrix B ∈ L_{n,1} is symmetric, and the set L_{n,1} is a linear space
over R, but not an algebra. Note that, if B₁, B₂ ∈ L_{n,1}, then B₁B₂, B₂B₁ ∈ L_{n,n−1} (see
[19, Theorem 5.1.2]).
Now we give the next result.

Theorem 3.5. The following inclusion holds:

B_n ⊂ L_{n,1}.

Proof. Let B ∈ B_n, B = (b_{k,l}), and let Λ^(B) = diag(λ_0^(B) λ_1^(B) ⋯ λ_{n−1}^(B)) be such that λ_j^(B) =
−λ_{(n−j) mod n}^(B) for every j ∈ {0, 1, ..., n−1} and B = Q_n Λ^(B) Q_n^T. We have

$$
b_{k,l} = \sum_{j=0}^{n-1} q^{(n)}_{k,j}\,\lambda^{(B)}_j\,q^{(n)}_{l,j}. \tag{15}
$$

Observe that λ_0^(B) = 0 and λ_{n/2}^(B) = 0 if n is even. From this and (15) we get

$$
b_{k,l} = \sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \left( q^{(n)}_{k,j} q^{(n)}_{l,j} - q^{(n)}_{k,n-j} q^{(n)}_{l,n-j} \right), \tag{16}
$$

both when n is even and when n is odd. From (1) and (16) we deduce

$$
b_{k,l} = \frac{2}{n}\sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \left( \cos\frac{2\pi k j}{n}\cos\frac{2\pi l j}{n} - \sin\frac{2\pi k j}{n}\sin\frac{2\pi l j}{n} \right)
= \frac{2}{n}\sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \cos\frac{2\pi (k+l) j}{n}.
$$

Let b = (b_0 b_1 ⋯ b_{n−1})^T, where

$$
b_t = \frac{2}{n}\sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \cos\frac{2\pi t j}{n}, \qquad t \in \{0, 1, \ldots, n-1\}. \tag{17}
$$

Then B = rcirc(b), because for any k, l ∈ {0, 1, ..., n−1} it is b_{k,l} = b_{(k+l) mod n}. Hence,
B_n ⊂ L_{n,1}.
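The structural identity b_{k,l} = b_{(k+l) mod n} can be verified directly (a sketch assuming NumPy; `gamma_basis` is our helper for (1)-(2)):

```python
import numpy as np

def gamma_basis(n):
    # Q_n from (1)-(2)
    Q = np.empty((n, n))
    k = np.arange(n)
    for j in range(n):
        a = 1/np.sqrt(n) if j == 0 or (n % 2 == 0 and j == n//2) else np.sqrt(2/n)
        Q[:, j] = a * (np.cos(2*np.pi*k*j/n) if j <= n//2 else np.sin(2*np.pi*k*(n-j)/n))
    return Q

n = 8
rng = np.random.default_rng(2)
lam = rng.standard_normal(n)
lam = (lam - np.roll(lam[::-1], 1)) / 2   # asymmetrize: lam_0 = lam_{n/2} = 0

Q = gamma_basis(n)
B = Q @ np.diag(lam) @ Q.T

b = B[0]                                  # first row
reverse_circulant = all(
    np.isclose(B[k, l], b[(k + l) % n]) for k in range(n) for l in range(n)
)
print(reverse_circulant)
```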

Theorem 3.6. One has

B_n ⊂ H_{n,1}.

Proof. We recall that

H_{n,1} = { B ∈ R^{n×n} : there is a symmetric b = (b_0 b_1 ... b_{n−1})^T ∈ Rⁿ with b_{k,j} = b_{(j+k) mod n} }.

By Theorem 3.5, we get B_n ⊂ L_{n,1}. Now we prove the symmetry of b.
Let B ∈ B_n be such that there exists Λ^(B) ∈ R^{n×n}, Λ^(B) = diag(λ_0^(B) λ_1^(B) ⋯ λ_{n−1}^(B)),
with B = Q_n Λ^(B) Q_n^T and λ_j^(B) = −λ_{(n−j) mod n}^(B) for all j ∈ {0, 1, ..., n−1}. By Theorem
3.5, b_{k,j} = b_{(j+k) mod n}. Moreover, by arguing as in Theorem 3.5, we get (17), and
hence

$$
b_t = \frac{2}{n}\sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \cos\frac{2\pi t j}{n}
= \frac{2}{n}\sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \cos\left(2\pi j - \frac{2\pi t j}{n}\right)
= \frac{2}{n}\sum_{j=1}^{\lfloor (n-1)/2 \rfloor} \lambda^{(B)}_j \cos\frac{2\pi (n-t) j}{n} = b_{n-t}
$$

for any t ∈ {0, 1, ..., n−1}. Thus, b is symmetric.


Theorem 3.7. Let B = rcirc(b) ∈ B_n. Then, the eigenvalues λ_j^(B) of B, j = 0, 1, ..., ⌊n/2⌋,
can be expressed as

λ_j^(B) = b^T u^(j).  (18)

Moreover, for j = 1, 2, ..., ⌊(n−1)/2⌋, we get

λ_{n−j}^(B) = −λ_j^(B).

Furthermore, it is λ_0^(B) = 0, and λ_{n/2}^(B) = 0 if n is even.

Proof. Since u^(j), j = 0, 1, ..., ⌊n/2⌋, is an eigenvector of B, then

B u^(j) = λ_j^(B) u^(j),

that is, every component of the vector B u^(j) is equal to the respective component of the
vector λ_j^(B) u^(j). In particular, considering the first components, we obtain (18). The last
part of the assertion is a consequence of the asymmetry of the vector λ^(B).

For the general computation of the eigenvalues of reverse circulant matrices, see, e.g.,
[13, §1.3 and Theorem 1.4.1], [49, Lemma 4.1].
Theorem 3.8. The following result holds:

B_n = J_{n,1}.

Proof. First of all, we recall that

J_{n,1} = { B ∈ R^{n×n} : there is a symmetric b = (b_0 b_1 ... b_{n−1})^T ∈ Rⁿ with
∑_{t=0}^{n−1} b_t = 0, ∑_{t=0}^{n−1} (−1)^t b_t = 0 when n is even, and b_{k,j} = b_{(j+k) mod n} }.

We begin with proving that B_n ⊂ J_{n,1}.
Let B ∈ B_n. In Theorem 3.6 we proved that B ∈ H_{n,1}, that is, b is symmetric and
b_{k,j} = b_{(j+k) mod n}.
Now we prove that

∑_{t=0}^{n−1} b_t = 0.  (19)

Since B ∈ B_n, the vector

u^(0) = (1 1 ⋯ 1)^T

is an eigenvector for the eigenvalue λ_0^(B) = 0. Hence, formula (19) is a consequence
of (18).
Again by (18), we get

∑_{t=0}^{n−1} (−1)^t b_t = 0,

since the vector

u^(n/2) = (1 −1 1 −1 ⋯ −1)^T

is an eigenvector for the eigenvalue λ_{n/2}^(B) = 0 if n is even. Thus, B_n ⊂ J_{n,1}.
Now observe that J_{n,1} is a linear space of dimension ⌊(n−1)/2⌋. Thus, by Proposition
2.3, B_n and J_{n,1} have the same dimension. So, B_n = J_{n,1}. This ends the proof.

Theorem 3.9. The following result holds:

D_n = J_{n,n−1}.

Proof. We recall that

J_{n,n−1} = { C ∈ R^{n×n} : there is a symmetric c = (c_0 c_1 ... c_{n−1})^T ∈ Rⁿ with
∑_{t=0}^{n−1} c_t = 0, ∑_{t=0}^{n−1} (−1)^t c_t = 0 when n is even, and c_{k,j} = c_{(j−k) mod n} }.

We first prove that D_n ⊂ J_{n,n−1}. From Theorem 2.5 and Corollary 3.4.1 we deduce that D_n ⊂ C_n =
H_{n,n−1}. Therefore, if C = circ(c) ∈ D_n, then c is symmetric.
Now we prove that

∑_{t=0}^{n−1} c_t = 0.  (20)

Since C ∈ D_n, the vector

u^(0) = (1 1 ⋯ 1)^T

is an eigenvector for the eigenvalue λ_0 = 0. Hence, formula (20) is a consequence of
(9).
Again by (9), we get

∑_{t=0}^{n−1} (−1)^t c_t = 0,

since the vector

u^(n/2) = (1 −1 1 −1 ⋯ −1)^T

is an eigenvector for the eigenvalue λ_{n/2} = 0 if n is even. Thus, D_n ⊂ J_{n,n−1}.
Now observe that J_{n,n−1} is a linear space of dimension ⌊(n−1)/2⌋. Thus, by the remark
following Proposition 2.3, D_n and J_{n,n−1} have the same dimension. So, D_n = J_{n,n−1}. This completes
the proof.

Theorem 3.10. The next result holds:

E_n = P_n = L_{n,n−1} ∩ L_{n,1},

where P_n is defined as follows: if n is even,

P_n = { C ∈ R^{n×n} : there are k₁, k₂ with c_{i,j} = k₁ if i + j is even and c_{i,j} = k₂ if i + j is odd },

while, if n is odd,

P_n = { C ∈ R^{n×n} : there is k with c_{i,j} = k for all i, j = 0, 1, ..., n−1 }.

Proof. We first claim that E_n = P_n.
We begin with the inclusion E_n ⊂ P_n. Let C ∈ E_n, C = (c_{k,l}), and let Λ = diag(λ_0
λ_1 ⋯ λ_{n−1}) be such that λ_j = 0 for every j ∈ {1, 2, ..., n−1} except j = n/2 when n is even,
and C = Q_n Λ Q_n^T. We have

$$
c_{k,l} = \sum_{j=0}^{n-1} q^{(n)}_{k,j}\,\lambda_j\,q^{(n)}_{l,j}.
$$

From this we get

$$
c_{k,l} =
\begin{cases}
\lambda_0 q^{(n)}_{k,0} q^{(n)}_{l,0} + \lambda_{n/2} q^{(n)}_{k,n/2} q^{(n)}_{l,n/2} & \text{if } n \text{ is even},\\[1ex]
\lambda_0 q^{(n)}_{k,0} q^{(n)}_{l,0} & \text{if } n \text{ is odd}.
\end{cases}
\tag{21}
$$

When n is even, from (1) and (21) we deduce

$$
c_{k,l} = \frac{\lambda_0}{n} + (-1)^{k-l}\frac{\lambda_{n/2}}{n}.
$$

Note that

$$
c_{k,l} =
\begin{cases}
\dfrac{\lambda_0}{n} + \dfrac{\lambda_{n/2}}{n} & \text{if } k - l \text{ is even},\\[1ex]
\dfrac{\lambda_0}{n} - \dfrac{\lambda_{n/2}}{n} & \text{if } k - l \text{ is odd},
\end{cases}
$$

and thus C ∈ P_n.
When n is odd, from (1) and (21) we obtain c_{k,l} = λ_0/n for k, l ∈ {0, 1, ..., n−1}. Hence, C ∈ P_n.
Now we prove that P_n ⊂ E_n. First of all, note that P_n ⊂ C_n. Let C ∈ P_n. If n is even,
then rank(C) ≤ 2, and hence C has at least n − 2 eigenvalues equal to 0. By contradiction,
suppose that at least one of the eigenvalues different from λ_0 and λ_{n/2}, say λ_j, is different
from 0. Thus, λ_0 = 0 or λ_{n/2} = 0. If λ_0 = 0, then

∑_{t=0}^{n−1} c_t = 0,

and hence k₁ = −k₂. Therefore, rank(C) ≤ 1, and thus there is at most one non-zero
eigenvalue. This implies that λ_{n/2} = 0, and hence

∑_{t=0}^{n−1} (−1)^t c_t = 0.  (22)

From (22) it follows that k₁ = k₂ = 0. Thus we deduce that C = O_n, which obviously
implies that λ_j = 0. This is absurd, and hence λ_0 ≠ 0.
When λ_{n/2} = 0, then (22) holds, and hence k₁ = k₂. Thus, rank(C) ≤ 1, which implies
that λ_0 = 0, because we know that λ_j ≠ 0. This yields a contradiction. Thus, P_n ⊂ E_n, at
least when n is even.
Now we suppose that n is odd. If C ∈ P_n, then rank(C) ≤ 1. This implies that C
has at most one non-zero eigenvalue. We claim that λ_j = 0 for all j ∈ {1, 2, ..., n−1}.
By contradiction, suppose that there exists q ∈ {1, 2, ..., n−1} such that λ_q ≠ 0. Hence,
λ_0 = 0, and thus

0 = ∑_{t=0}^{n−1} c_t = n k.

This implies that C = O_n. Hence, λ_q = 0, which is absurd. Therefore, P_n ⊂ E_n also
when n is odd.
Now we claim that P_n = L_{n,n−1} ∩ L_{n,1}.
Observe that, if C ∈ P_n, then C is both circulant and reverse circulant, and hence
P_n ⊂ L_{n,n−1} ∩ L_{n,1}. Now we claim that L_{n,n−1} ∩ L_{n,1} ⊂ P_n. Let C ∈ L_{n,n−1} ∩ L_{n,1}. If
c = (c_0 c_1 ... c_{n−1})^T is the first row of C and C = circ(c) = rcirc(c), then we get

c_{1,j} = c_{(j−1) mod n} = c_{(j+1) mod n}, j ∈ {0, 1, ..., n−1}.

If n is even, then c_{2j} = c_0 and c_{2j+1} = c_1 for j ∈ {0, 1, ..., n/2 − 1}, while when n is odd
we have c_j = c_0 for j ∈ {0, 1, ..., n−1}, getting the claim.
Theorem 3.11. The following result holds:

H_{n,1} = B_n ⊕ E_n.

Proof. By Theorem 3.8, we know that

B_n = J_{n,1} = { C ∈ R^{n×n} : there is a symmetric c = (c_0 c_1 ... c_{n−1})^T ∈ Rⁿ with
∑_{t=0}^{n−1} c_t = 0, ∑_{t=0}^{n−1} (−1)^t c_t = 0 if n is even, and c_{i,j} = c_{(j+i) mod n} }.

Moreover, by Theorem 3.10 we have

E_n = { C ∈ R^{n×n} : there is c = (c_0 c_1 ... c_{n−1})^T ∈ Rⁿ with c_{i,j} = c_{(j+i) mod n},
where c_t = k for all t, for some k ∈ R, when n is odd, and c_{2t} = k₁, c_{2t+1} = k₂,
for some k₁, k₂ ∈ R, when n is even }.

First, we show that

H_{n,1} ⊃ B_n ⊕ E_n.  (23)

Let C = rcirc(c) ∈ B_n ⊕ E_n. There are C^(r) = rcirc(c^(r)), r = 1, 2, with C^(1) ∈ B_n, C^(2) ∈ E_n,
C = C^(1) + C^(2) and c = c^(1) + c^(2). Note that c is symmetric, because both c^(1) and c^(2) are.
Thus, (23) is proved.
We now prove the converse inclusion. Let C = rcirc(c) ∈ H_{n,1}. Then, c is symmetric.
Suppose that n is odd and

∑_{t=0}^{n−1} c_t = τ.

Let c^(2) = (τ/n τ/n ... τ/n)^T, c^(1) = c − c^(2), and C^(r) = rcirc(c^(r)), r = 1, 2. Then, C =
C^(1) + C^(2). Note that C^(2) ∈ E_n. Moreover, c^(1) is symmetric, and

∑_{t=0}^{n−1} c_t^(1) = ∑_{t=0}^{n−1} (c_t − τ/n) = 0.

Therefore, C^(1) ∈ B_n.
Now assume that n is even. We get

∑_{t=0}^{n−1} c_t = τ = (n/2)(k₁ + k₂)  (24)

and

∑_{t=0}^{n−1} (−1)^t c_t = γ* = (n/2)(k₁ − k₂).  (25)

Then,

k₁ = (τ + γ*)/n,  k₂ = (τ − γ*)/n.  (26)

Let c^(2) = (k₁ k₂ ... k₁ k₂)^T, c^(1) = c − c^(2), and C^(r) = rcirc(c^(r)), r = 1, 2. Then, C = C^(1) +
C^(2). Note that C^(2) ∈ E_n. Moreover, c^(1) is symmetric, and from (24) and (26) we obtain

∑_{t=0}^{n−1} c_t^(1) = ∑_{t=0}^{n−1} c_t − (n/2)(k₁ + k₂) = τ − τ = 0.

Moreover, from (25) and (26) we deduce

∑_{t=0}^{n−1} (−1)^t c_t^(1) = ∑_{t=0}^{n−1} (−1)^t c_t − (n/2)(k₁ − k₂) = γ* − γ* = 0.

Thus, C^(1) ∈ B_n.
Moreover observe that, by Theorem 2.5, E_n ⊂ C_n, and, thanks to Theorem 2.4, C_n and
B_n are orthogonal. This implies that E_n and B_n are orthogonal. This ends the proof.
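For n even, the constructive part of the proof can be traced numerically (a sketch assuming NumPy): starting from a symmetric first row c, compute τ, γ* and k₁, k₂ as in (24)-(26), and check that the remainder c^(1) satisfies the two zero-sum conditions defining J_{n,1}.

```python
import numpy as np

n = 8
rng = np.random.default_rng(6)
half = rng.standard_normal(n // 2 + 1)
c = np.empty(n)
c[:n//2 + 1] = half
c[n//2 + 1:] = half[1:n//2][::-1]          # symmetric first row: c_t = c_{n-t}

tau = c.sum()                              # (24)
gam = (c * (-1.0) ** np.arange(n)).sum()   # (25), the quantity gamma*
k1, k2 = (tau + gam) / n, (tau - gam) / n  # (26)

c2 = np.where(np.arange(n) % 2 == 0, k1, k2)   # E_n component (k1 k2 k1 k2 ...)
c1 = c - c2                                    # candidate B_n component
print(np.isclose(c1.sum(), 0.0),
      np.isclose((c1 * (-1.0) ** np.arange(n)).sum(), 0.0))
```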

4 Multiplication between a γ-matrix and a real vector

In this section we deal with the problem of computing the product

y = G x,  (27)

where G ∈ G_n. Since G is a γ-matrix, there is a diagonal matrix Λ^(G) ∈ R^{n×n} with G =
Q_n Λ^(G) Q_n^T, where Q_n is as in (1). To compute

y = Q_n Λ^(G) Q_n^T x,

we proceed by performing the following operations in this order:

z = Q_n^T x;  (28)

t = Λ^(G) z;  (29)

y = Q_n t.  (30)

To compute the product in (28), we define a new fast technique, which we call the inverse
discrete sine-cosine transform (IDSCT), while for the operation in (30) we define a new
technique, which we call the discrete sine-cosine transform (DSCT).
From now on, we assume that the involved vectors x belong to Rⁿ, where n = 2^r,
r ≥ 2, and we put m = n/2 and ν = n/4.
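The three steps (28)-(30) can be sketched as follows (assuming NumPy; `gamma_basis` is our helper for (1)-(2)). Here the transforms are applied as dense O(n²) matrix-vector products; the point of the IDSCT/DSCT developed below is to replace them by O(n log n) algorithms.

```python
import numpy as np

def gamma_basis(n):
    # Q_n from (1)-(2)
    Q = np.empty((n, n))
    k = np.arange(n)
    for j in range(n):
        a = 1/np.sqrt(n) if j == 0 or (n % 2 == 0 and j == n//2) else np.sqrt(2/n)
        Q[:, j] = a * (np.cos(2*np.pi*k*j/n) if j <= n//2 else np.sin(2*np.pi*k*(n-j)/n))
    return Q

n = 16                       # n = 2^r, r >= 2
rng = np.random.default_rng(3)
lam = rng.standard_normal(n)
x = rng.standard_normal(n)

Q = gamma_basis(n)
G = Q @ np.diag(lam) @ Q.T

# steps (28)-(30): G x is computed without forming G
z = Q.T @ x                  # (28) IDSCT
t = lam * z                  # (29) diagonal scaling
y = Q @ t                    # (30) DSCT

print(np.allclose(y, G @ x))
```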

4.1 The IDSCT technique

In this subsection we present the technique to compute (28). Given x ∈ Rⁿ, we denote

$$
\mathrm{IDSCT}(x) = Q_n^T x =
\begin{pmatrix}
\alpha_0\, C_0(x)\\
\alpha_1\, C_1(x)\\
\vdots\\
\alpha_m\, C_m(x)\\
\alpha_{m+1}\, S_{m-1}(x)\\
\vdots\\
\alpha_{n-1}\, S_1(x)
\end{pmatrix},
\tag{31}
$$

where α_j, j = 0, 1, ..., n−1, are as in (2),

$$
C_j(x) = x^T u^{(j)}, \qquad j = 0, 1, \ldots, m, \tag{32}
$$

$$
S_j(x) = x^T v^{(j)}, \qquad j = 1, 2, \ldots, m-1. \tag{33}
$$

For x = (x_0 x_1 ⋯ x_{n−1})^T ∈ Rⁿ, let η (resp., ζ): Rⁿ → R^m be the function which associates
to any vector x ∈ Rⁿ the vector consisting of all its even-indexed (resp., odd-indexed) components,
that is,

η(x) = ηx = (x′_0 x′_1 ... x′_{m−1}), where x′_p = x_{2p}, p = 0, 1, ..., m−1;

ζ(x) = ζx = (x″_0 x″_1 ... x″_{m−1}), where x″_p = x_{2p+1}, p = 0, 1, ..., m−1.
Proposition 4.1. (see, e.g., [32]) For k = 0, 1, ..., m, we have

$$
C_k(x) = C_k(\eta x) + \cos\left(\frac{\pi k}{m}\right) C_k(\zeta x) - \sin\left(\frac{\pi k}{m}\right) S_k(\zeta x). \tag{34}
$$

Moreover, for k = 1, 2, ..., ν, it is

$$
S_k(x) = S_k(\eta x) + \cos\left(\frac{\pi k}{m}\right) S_k(\zeta x) + \sin\left(\frac{\pi k}{m}\right) C_k(\zeta x). \tag{35}
$$
Proof. We first prove (34). By using (32), for k = 0, 1, ..., m we have

$$
\begin{aligned}
C_k(x) &= x^T u^{(k)} = \sum_{j=0}^{n-1} x_j \cos\left(\frac{2\pi k j}{n}\right)
= \sum_{j \text{ even}} x_j \cos\left(\frac{2\pi k j}{n}\right) + \sum_{j \text{ odd}} x_j \cos\left(\frac{2\pi k j}{n}\right)\\
&= \sum_{p=0}^{m-1} x_{2p} \cos\left(\frac{4\pi k p}{n}\right) + \sum_{p=0}^{m-1} x_{2p+1} \cos\left(\frac{2\pi k (2p+1)}{n}\right)\\
&= \sum_{p=0}^{m-1} x'_p \cos\left(\frac{2\pi k p}{m}\right) + \sum_{p=0}^{m-1} x''_p \cos\left(\frac{2\pi k p}{m} + \frac{\pi k}{m}\right)\\
&= \sum_{p=0}^{m-1} x'_p \cos\left(\frac{2\pi k p}{m}\right)
+ \cos\left(\frac{\pi k}{m}\right) \sum_{p=0}^{m-1} x''_p \cos\left(\frac{2\pi k p}{m}\right)
- \sin\left(\frac{\pi k}{m}\right) \sum_{p=0}^{m-1} x''_p \sin\left(\frac{2\pi k p}{m}\right)\\
&= C_k(\eta x) + \cos\left(\frac{\pi k}{m}\right) C_k(\zeta x) - \sin\left(\frac{\pi k}{m}\right) S_k(\zeta x).
\end{aligned}
$$

We now turn to (35). Using (33), for k = 1, 2, ..., ν we get

$$
\begin{aligned}
S_k(x) &= x^T v^{(k)} = \sum_{j=0}^{n-1} x_j \sin\left(\frac{2\pi k j}{n}\right)
= \sum_{j \text{ even}} x_j \sin\left(\frac{2\pi k j}{n}\right) + \sum_{j \text{ odd}} x_j \sin\left(\frac{2\pi k j}{n}\right)\\
&= \sum_{p=0}^{m-1} x_{2p} \sin\left(\frac{4\pi k p}{n}\right) + \sum_{p=0}^{m-1} x_{2p+1} \sin\left(\frac{2\pi k (2p+1)}{n}\right)\\
&= \sum_{p=0}^{m-1} x'_p \sin\left(\frac{2\pi k p}{m}\right) + \sum_{p=0}^{m-1} x''_p \sin\left(\frac{2\pi k p}{m} + \frac{\pi k}{m}\right)\\
&= \sum_{p=0}^{m-1} x'_p \sin\left(\frac{2\pi k p}{m}\right)
+ \cos\left(\frac{\pi k}{m}\right) \sum_{p=0}^{m-1} x''_p \sin\left(\frac{2\pi k p}{m}\right)
+ \sin\left(\frac{\pi k}{m}\right) \sum_{p=0}^{m-1} x''_p \cos\left(\frac{2\pi k p}{m}\right)\\
&= S_k(\eta x) + \cos\left(\frac{\pi k}{m}\right) S_k(\zeta x) + \sin\left(\frac{\pi k}{m}\right) C_k(\zeta x).
\end{aligned}
$$

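The splitting identities (34) and (35) can be checked numerically. The sketch below is illustrative only: it assumes 0-based Python lists for the vectors of this section and evaluates the sums (32)–(33) naively in O(n²).

```python
import math

def C(x, k):
    # C_k(x) = x^T u^(k), with u^(k)_j = cos(2*pi*k*j/n), cf. (32)
    n = len(x)
    return sum(x[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))

def S(x, k):
    # S_k(x) = x^T v^(k), with v^(k)_j = sin(2*pi*k*j/n), cf. (33)
    n = len(x)
    return sum(x[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))

def eta(x):   # even-indexed components
    return x[0::2]

def zeta(x):  # odd-indexed components
    return x[1::2]

x = [1.0, -2.0, 3.5, 0.5, -1.0, 2.0, 4.0, -0.5]     # n = 8, m = 4, nu = 2
n, m = len(x), len(x) // 2
for k in range(m + 1):                               # (34), k = 0, ..., m
    rhs = (C(eta(x), k) + math.cos(math.pi * k / m) * C(zeta(x), k)
           - math.sin(math.pi * k / m) * S(zeta(x), k))
    assert abs(C(x, k) - rhs) < 1e-9
for k in range(1, n // 4 + 1):                       # (35), k = 1, ..., nu
    rhs = (S(eta(x), k) + math.cos(math.pi * k / m) * S(zeta(x), k)
           + math.sin(math.pi * k / m) * C(zeta(x), k))
    assert abs(S(x, k) - rhs) < 1e-9
```
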
Lemma 4.2. For k = 0, 1, . . . , ν − 1, let t = m − k. We have

C_t(x) = C_k(η x) − cos(πk/m) C_k(ζ x) + sin(πk/m) S_k(ζ x).   (36)

Furthermore, for k = 1, 2, . . . , ν − 1 we get

S_t(x) = −S_k(η x) + cos(πk/m) S_k(ζ x) + sin(πk/m) C_k(ζ x).   (37)

Proof. We begin with (36). For t = ν + 1, ν + 2, . . . , m, we have

C_t(x) = Σ_{p=0}^{m−1} x_{2p} cos(4πtp/n) + Σ_{p=0}^{m−1} x_{2p+1} cos(2πt(2p+1)/n)
 = Σ_{p=0}^{m−1} x_{2p} cos(4π(m−k)p/n) + Σ_{p=0}^{m−1} x_{2p+1} cos(2π(m−k)(2p+1)/n)
 = Σ_{p=0}^{m−1} x'_p cos(2πp − 2πkp/m) + Σ_{p=0}^{m−1} x''_p cos(π(2p+1) − πk(2p+1)/m)
 = Σ_{p=0}^{m−1} x'_p cos(2πkp/m) − Σ_{p=0}^{m−1} x''_p cos(πk(2p+1)/m)
 = Σ_{p=0}^{m−1} x'_p cos(2πkp/m) − cos(πk/m) Σ_{p=0}^{m−1} x''_p cos(2πkp/m) + sin(πk/m) Σ_{p=0}^{m−1} x''_p sin(2πkp/m)
 = C_k(η x) − cos(πk/m) C_k(ζ x) + sin(πk/m) S_k(ζ x).

Now we turn to (37). For t = ν + 1, ν + 2, . . . , m − 1, it is

S_t(x) = Σ_{p=0}^{m−1} x_{2p} sin(4πtp/n) + Σ_{p=0}^{m−1} x_{2p+1} sin(2πt(2p+1)/n)
 = Σ_{p=0}^{m−1} x'_p sin(2πp − 2πkp/m) + Σ_{p=0}^{m−1} x''_p sin(π(2p+1) − πk(2p+1)/m)
 = −Σ_{p=0}^{m−1} x'_p sin(2πkp/m) + Σ_{p=0}^{m−1} x''_p sin(πk(2p+1)/m)
 = −Σ_{p=0}^{m−1} x'_p sin(2πkp/m) + cos(πk/m) Σ_{p=0}^{m−1} x''_p sin(2πkp/m) + sin(πk/m) Σ_{p=0}^{m−1} x''_p cos(2πkp/m)
 = −S_k(η x) + cos(πk/m) S_k(ζ x) + sin(πk/m) C_k(ζ x).

Note that
C0 (x) = C0 (η x) + C0 (ζ x), (38)

Cm (x) = C0 (η x) − C0 (ζ x). (39)

Since Sν (x) = 0 whenever x ∈ Rm , then

Cν (x) = Cν (η x) − Sν (ζ x) = Cν (η x) (40)

and

Sν (x) = Sν (η x) + Cν (ζ x) = Cν (ζ x). (41)

We call ρ : R^n → R^n the function which reverses all components of a given vector x ∈ R^n but the 0-th component, and σ (resp., α): R^n → R^n the function which associates to every vector x ∈ R^n the double of its symmetric (resp., asymmetric) part, namely

ρ(x) = (x_0 x_{n−1} x_{n−2} · · · x_2 x_1)^T,

σ(x) = σ x = x + ρ(x),   α(x) = α x = x − ρ(x).

Note that

x = (σ x + α x)/2.   (42)
Now we state the next technical results.

Lemma 4.3. Let a = (a_0 a_1 · · · a_{n−1})^T, b = (b_0 b_1 · · · b_{n−1})^T ∈ R^n be such that a is symmetric and b is asymmetric. Then, a^T b = 0.

Proof. First of all, we observe that b_0 = b_{n/2} = 0. So, we have

a^T b = Σ_{j=0}^{n−1} a_j b_j = Σ_{j=1}^{n/2−1} a_j b_j + a_{n/2} b_{n/2} + Σ_{j=n/2+1}^{n−1} a_j b_j
 = Σ_{j=1}^{n/2−1} a_j b_j + Σ_{j=1}^{n/2−1} a_{n−j} b_{n−j} = Σ_{j=1}^{n/2−1} a_j b_j − Σ_{j=1}^{n/2−1} a_j b_j = 0.

Corollary 4.3.1. Let x ∈ Rn be a symmetric vector and k ∈ {0, 1, . . . , n − 1}. Then, we get

Sk (x) = 0. (43)

Proof. Note that Sk (x) = xT v(k) . The formula (43) follows from Lemma 4.3, since x is
symmetric and v(k) is asymmetric.

Corollary 4.3.2. Let x ∈ R^n be an asymmetric vector and k ∈ {0, 1, . . . , n − 1}. Then, we have

C_k(x) = 0.   (44)

Proof. Observe that Ck (x) = xT u(k) . The formula (44) is a consequence of Lemma 4.3,
because u(k) is symmetric and x is asymmetric.

Lemma 4.4. Let x ∈ R^n and k ∈ {0, 1, . . . , m}. Then we get

C_k(σ x) = 2 C_k(x).   (45)

Moreover, if k ∈ {1, 2, . . . , m − 1}, then

S_k(α x) = 2 S_k(x).   (46)

Proof. We begin with proving (45). Since u_j^{(k)} = u_{n−j}^{(k)} for every j ∈ {1, 2, . . . , n − 1}, we have

C_k(σ x) = 2 x_0 + Σ_{j=1}^{n−1} x_j u_j^{(k)} + Σ_{j=1}^{n−1} x_{n−j} u_j^{(k)} = 2 x_0 + 2 Σ_{j=1}^{n−1} x_j u_j^{(k)} = 2 C_k(x).

Now we turn to (46). Since v_0^{(k)} = 0 and v_j^{(k)} = −v_{n−j}^{(k)} for any j ∈ {1, 2, . . . , n − 1}, we get

S_k(α x) = Σ_{j=1}^{n−1} x_j v_j^{(k)} − Σ_{j=1}^{n−1} x_{n−j} v_j^{(k)} = Σ_{j=1}^{n−1} x_j v_j^{(k)} + Σ_{j=1}^{n−1} x_{n−j} v_{n−j}^{(k)} = 2 Σ_{j=1}^{n−1} x_j v_j^{(k)} = 2 S_k(x).
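The doubling relations (45)–(46), together with the decomposition (42), can be verified directly; a minimal sketch, again with naive O(n²) evaluation of the sums C_k and S_k:

```python
import math

def C(x, k):
    n = len(x)
    return sum(x[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))

def S(x, k):
    n = len(x)
    return sum(x[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))

def rho(x):    # reverse all components but the 0-th
    return [x[0]] + x[:0:-1]

def sigma(x):  # twice the symmetric part
    return [a + b for a, b in zip(x, rho(x))]

def alpha(x):  # twice the asymmetric part
    return [a - b for a, b in zip(x, rho(x))]

x = [2.0, 1.0, -3.0, 0.5, 4.0, -1.5, 2.5, 0.0]   # n = 8, m = 4
m = len(x) // 2
for k in range(m + 1):            # (45)
    assert abs(C(sigma(x), k) - 2 * C(x, k)) < 1e-9
for k in range(1, m):             # (46)
    assert abs(S(alpha(x), k) - 2 * S(x, k)) < 1e-9
# (42): x = (sigma(x) + alpha(x)) / 2
assert all(abs(x[j] - (sigma(x)[j] + alpha(x)[j]) / 2) < 1e-12 for j in range(len(x)))
```
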

Observe that from (31), (42), (43) and (44), taking into account that σ x is symmetric and α x is asymmetric, we obtain

IDSCT(x) = Q_n^T x = ( α_0 C_0(σ x)/2, α_1 C_1(σ x)/2, . . . , α_m C_m(σ x)/2, α_{m+1} S_{m−1}(α x)/2, . . . , α_{n−1} S_1(α x)/2 )^T.   (47)

Thus, to compute IDSCT(x), we determine the values C_k(σ x) for k = 0, 1, . . . , m and S_k(α x) for k = 1, 2, . . . , m − 1, and successively we multiply such values by the constants α_k/2, k = 0, 1, . . . , n − 1. Now we give the following

Lemma 4.5. Let x ∈ R^n be symmetric. Then for k = 1, 2, . . . , ν − 1 we get

C_k(x) = C_k(η x) + (1/(2 cos(πk/m))) C_k(σ ζ x).   (48)

Moreover, if t = m − k, then

C_t(x) = C_k(η x) − (1/(2 cos(πk/m))) C_k(σ ζ x).   (49)

Proof. We begin with (48). Since x is symmetric, we have

S_k(x) = S_k(η x) = 0.   (50)

From (35) and (50) we obtain

S_k(ζ x) = −(sin(πk/m)/cos(πk/m)) C_k(ζ x).   (51)

From (34), (45) and (51) we get (48). The equality in (49) follows from (36), (45) and (51).
Now we define the following algorithm:
1: function CS(x, n)
2: if n = 4 then
3: c0 = x0 + 2x1 + x2;
4: c1 = x0 − x2;
5: c2 = x0 − 2x1 + x2;
6: else
7: c̃ = CS(η x, n/2);
8: c = CS(σ ζ x, n/2);
9: for k = 1, 2, . . . , n/4 − 1 do
10: aux = 1/(2 cos(2 π k/n)) ck;
11: ck = c̃k + aux;
12: cn/2−k = c̃k − aux;
13: end for
14: aux = c0/2;
15: c0 = c̃0 + aux;
16: cn/4 = c̃n/4;
17: cn/2 = c̃0 − aux;
18: end if
19: return c
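A direct Python transcription of the function CS can be checked against the naive cosine sums; the helper names eta, zeta and sigma below are hypothetical, and the cosine table assumed in Theorem 4.8 is recomputed on the fly here.

```python
import math

def eta(x):
    return x[0::2]

def zeta(x):
    return x[1::2]

def sigma(x):
    # sigma(v)_0 = 2 v_0, sigma(v)_j = v_j + v_{m-j}
    m = len(x)
    return [2 * x[0]] + [x[j] + x[m - j] for j in range(1, m)]

def cs(x):
    # recursive transcription of the function CS; x symmetric, len(x) = 2^r >= 4
    n = len(x)
    if n == 4:
        return [x[0] + 2 * x[1] + x[2], x[0] - x[2], x[0] - 2 * x[1] + x[2]]
    ct = cs(eta(x))            # line 7: c tilde
    c = cs(sigma(zeta(x)))     # line 8
    out = [0.0] * (n // 2 + 1)
    for k in range(1, n // 4):          # lines 9-13
        aux = c[k] / (2 * math.cos(2 * math.pi * k / n))
        out[k] = ct[k] + aux
        out[n // 2 - k] = ct[k] - aux
    aux = c[0] / 2                      # lines 14-17
    out[0] = ct[0] + aux
    out[n // 4] = ct[n // 4]
    out[n // 2] = ct[0] - aux
    return out

# check against the direct cosine sums C_j(x) = x^T u^(j) for a symmetric x
x = [3.0, 1.0, -2.0, 0.5, 4.0, 0.5, -2.0, 1.0]     # x_j = x_{n-j}, n = 8
n = len(x)
c = cs(x)
for j in range(n // 2 + 1):
    direct = sum(x[k] * math.cos(2 * math.pi * j * k / n) for k in range(n))
    assert abs(c[j] - direct) < 1e-9
```
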

Lemma 4.6. For each symmetric vector x ∈ R^n, the vector η x ∈ R^m is symmetric.

Proof. Let x̃ = η x, where x is a symmetric vector. So, we have

x̃_j = x_{2j} = x_{n−2j} = x_{2(m−j)} = x̃_{m−j}, j = 1, 2, . . . , m − 1.   (52)

Since x̃ ∈ R^m, from (52) it follows that x̃ is symmetric.

Theorem 4.7. Let x ∈ Rn be a symmetric vector and c = CS(x, n). Then,

c j = C j (x) = xT u( j) , j = 0, 1, . . . , m.

Proof. First of all we prove that, when we call the function CS, the first variable is sym-
metric. Concerning the external call, the vector x is symmetric in CS. Now we consider
the internal calls of CS at lines 7 and 8. By Lemma 4.6, η x is symmetric. Hence, line 7
of our algorithm satisfies the assertion. Furthermore, it is readily seen that the assertion is
fulfilled also by line 8.
Now, let us suppose that n = 4. Then,

c0 = xT u(0) = x0 + x1 + x2 + x3 .

Since x is symmetric, we get the formula at line 3 of the algorithm CS. Moreover,

u(1) = (1 0 − 1 0)T ,

and hence

c1 = xT u(1) = x0 − x2 ,

obtaining the formula at line 4 of the algorithm CS. Furthermore,

u(2) = (1 − 1 1 − 1)T ,

and thus we have

c2 = xT u(2) = x0 − 2 x1 + x2 ,

since x is symmetric. So, we obtain the formula at line 5 of the algorithm CS.
Now we consider the case n > 4. Since x is symmetric, from Lemma 4.5 we deduce
that line 11 of the algorithm CS gives the correct value of Ck (x) for k = 1, 2, . . . , ν − 1. As
x is symmetric, from Lemma 4.5 we obtain that line 12 gives the exact value of Cm−k (x)
for k = 1, 2, . . . , ν − 1.
From (38) and (45) we deduce that line 15 of the function CS gives the correct value
of C0 (x), and by (39) and (45) we obtain that line 17 of the function CS gives the exact
value of Cm (x). Furthermore, by virtue of (40), line 16 of the function CS gives the correct
value of Cν (x). This completes the proof.
Theorem 4.8. Suppose to have a library in which the values 1/(2 cos(2 π k/n)) have been stored for n = 2^r, r ∈ N, r ≥ 2, and k ∈ {0, 1, . . . , n − 1}. Then, the computational cost of a call of the function CS is given by

A(n) = (3/4) n log₂ n − (1/2) n + 1,   (53)

M(n) = (1/4) n log₂ n + (1/2) n − 2,   (54)
where A(n) (resp., M(n)) denotes the number of additions (resp., multiplications) re-
quested to compute CS.
Proof. Concerning line 8, we first compute σ ζ x. To do this, ν − 1 sums and 2 multiplications are required. To compute the for loop at lines 9-13, 2ν − 2 sums and ν − 1 multiplications are necessary. Moreover, to compute lines 14-17, 2 sums and one multiplication are necessary. Thus, the total number of additions is given by

A(n) = (3/4) n − 1 + 2 A(n/2),   (55)

while the total number of multiplications is

M(n) = (1/4) n + 2 + 2 M(n/2).   (56)

Concerning the initial case, we have A(4) = 5, M(4) = 2.
For every n = 2^r, with r ∈ N, r ≥ 2, let A(n) be as in (53). We get A(4) = 6 − 2 + 1 = 5. Now we claim that the function A(n) defined in (53) satisfies (55). Indeed, we have

A(n/2) = (3/8) n (log₂ n − 1) − (1/4) n + 1,

and hence

(3/4) n − 1 + 2 A(n/2) = (3/4) n − 1 + (3/4) n (log₂ n − 1) − (1/2) n + 2 = A(n),

getting the claim.
Moreover, for any n = 2^r, with r ∈ N, r ≥ 2, let M(n) be as in (54). One has M(4) = 2 + 2 − 2 = 2. Now we claim that the function M(n) defined in (54) fulfils (56). It is

M(n/2) = (1/8) n (log₂ n − 1) + (1/4) n − 2,

and hence

(1/4) n + 2 + 2 M(n/2) = (1/4) n + 2 + (1/4) n (log₂ n − 1) + (1/2) n − 4 = M(n),

obtaining the claim.
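The closed forms (53)–(54) can be checked against the recursions (55)–(56) and the base costs A(4) = 5, M(4) = 2; a quick numerical sketch:

```python
import math

def A(n):  # (53): additions of one CS call
    return 0.75 * n * math.log2(n) - 0.5 * n + 1

def M(n):  # (54): multiplications of one CS call
    return 0.25 * n * math.log2(n) + 0.5 * n - 2

assert A(4) == 5 and M(4) == 2            # initial case
for r in range(3, 16):                     # recursions (55) and (56)
    n = 2 ** r
    assert abs(A(n) - (0.75 * n - 1 + 2 * A(n // 2))) < 1e-6
    assert abs(M(n) - (0.25 * n + 2 + 2 * M(n // 2))) < 1e-6
```
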
Lemma 4.9. Let x ∈ R^n be asymmetric. Then, for k = 1, 2, . . . , ν − 1 it is

S_k(x) = S_k(η x) + (1/(2 cos(πk/m))) S_k(α ζ x).   (57)

Moreover, if t = m − k, then

S_t(x) = −S_k(η x) + (1/(2 cos(πk/m))) S_k(α ζ x).   (58)
Proof. We begin with (57). As x is asymmetric, we have

C_k(x) = C_k(η x) = 0.   (59)

From (34) and (59) we deduce

C_k(ζ x) = (sin(πk/m)/cos(πk/m)) S_k(ζ x).   (60)

From (35), (46) and (60) we get (57). The relation (58) follows from (37), (46) and (60).
Now we define the following algorithm:
1: function SN(x, n)
2: if n = 4 then
3: s1 = 2 x1;
4: else
5: s̃ = SN(η x, n/2);
6: s = SN(α ζ x, n/2);
7: for k = 1, 2, . . . , n/4 − 1 do
8: aux = 1/(2 cos(2 π k/n)) sk;
9: sk = s̃k + aux;
10: sn/2−k = aux − s̃k;
11: end for
12: sn/4 = 0;
13: for j = 0, 2, . . . , n/4 − 2 do
14: sn/4 = sn/4 + x2j+1 − x2j+3
15: end for
16: sn/4 = 2 sn/4
17: end if
18: return s
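As for CS, the function SN admits a direct Python transcription (hypothetical helper names; lines 7-11 follow Lemma 4.9, lines 12-16 follow Lemma 4.11), checked here against the naive sine sums:

```python
import math

def eta(x):
    return x[0::2]

def zeta(x):
    return x[1::2]

def alpha(x):
    # alpha(v)_0 = 0, alpha(v)_j = v_j - v_{m-j}
    m = len(x)
    return [0.0] + [x[j] - x[m - j] for j in range(1, m)]

def sn(x):
    # recursive transcription of the function SN; x asymmetric, len(x) = 2^r >= 4;
    # returns s with s[j] = S_j(x) for j = 1, ..., n/2 - 1 (s[0] is unused)
    n = len(x)
    if n == 4:
        return [0.0, 2 * x[1]]
    st = sn(eta(x))            # line 5: s tilde
    s = sn(alpha(zeta(x)))     # line 6
    out = [0.0] * (n // 2)
    for k in range(1, n // 4):          # lines 7-11
        aux = s[k] / (2 * math.cos(2 * math.pi * k / n))
        out[k] = st[k] + aux
        out[n // 2 - k] = aux - st[k]
    # lines 12-16: S_nu(x) = 2 (x_1 - x_3 + x_5 - ...)
    out[n // 4] = 2 * sum((-1) ** p * x[2 * p + 1] for p in range(n // 4))
    return out

# check against the direct sine sums S_j(x) = x^T v^(j) for an asymmetric x
x = [0.0, 1.5, -2.0, 0.5, 0.0, -0.5, 2.0, -1.5]    # x_j = -x_{n-j}, n = 8
n = len(x)
s = sn(x)
for j in range(1, n // 2):
    direct = sum(x[k] * math.sin(2 * math.pi * j * k / n) for k in range(n))
    assert abs(s[j] - direct) < 1e-9
```
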

Lemma 4.10. For every asymmetric vector x ∈ R^n, the vector η x ∈ R^m is asymmetric.

Proof. Let x̃ = η x, where x is asymmetric. Then,

x̃_j = x_{2j} = −x_{n−2j} = −x_{2(m−j)} = −x̃_{m−j}, j = 1, 2, . . . , m − 1.   (61)

Since x̃ ∈ R^m, from (61) it follows that x̃ is asymmetric.

Lemma 4.11. Given any asymmetric vector x ∈ R^n, set x̂ = ζ x. Then,

S_ν(x) = 2 Σ_{j=0}^{ν−1} (−1)^j x̂_j.   (62)

Proof. Let x ∈ R^n be any asymmetric vector. Then, x̂ = ζ x ∈ R^m. It is not difficult to check that

x̂_j = −x̂_{m−1−j}, j = 0, 1, . . . , m − 1.   (63)

Since u^{(ν)} = (1 −1 . . . 1 −1)^T whenever u^{(ν)} ∈ R^m, the equality in (62) follows from (41) and (63).

Theorem 4.12. Given an asymmetric vector x ∈ Rn , let s = SN(x, n). We get

s j = S j (x) = xT v( j) , j = 1, 2, . . . , m − 1.

Proof. First we prove that, when we call the function SN, the first variable is asymmetric.
Concerning the external call, we have that x is asymmetric. Now we consider the internal
calls of SN at lines 5 and 6. By Lemma 4.10, η x is asymmetric. Therefore, line 5 of our
algorithm fulfils the assertion. Moreover, it is easy to see that line 6 satisfies the assertion
too.
Now, let us suppose that n = 4. We have

v(1) = (0 1 0 − 1)T ,

and hence

s1 = xT v(1) = x1 − x3 = 2x1 ,

because x is asymmetric. Therefore, we obtain the formula at line 3 of the algorithm SN.
Now we consider the case n > 4. As x is asymmetric, from Lemma 4.9 we get that
line 9 of the function SN gives the right value of Sk (x) for k = 1, 2, . . . , ν − 1. Since x is
asymmetric, from Lemma 4.9 we obtain that line 10 of the function SN gives the exact
value of Sm−k (x) for k = 1, 2, . . . , ν − 1. Thanks to Lemma 4.11 and asymmetry of x,
lines 12-16 give the exact value of Sν (x). This ends the proof.

Theorem 4.13. Suppose to have a library in which the values 1/(2 cos(2 π k/n)) have been stored for n = 2^r, r ∈ N, r ≥ 2, and k ∈ {0, 1, . . . , n − 1}. Then, the computational cost of a call of the function SN is given by

A(n) = n log₂ n − (11/4) n + 3,   (64)

M(n) = (1/4) n log₂ n − (1/4) n,   (65)

where A(n) (resp., M(n)) denotes the number of additions (resp., multiplications) required to compute SN.

Proof. Concerning line 6, we first compute α ζ x. To do this, ν − 1 sums and no multiplications are required. To compute the for loop at lines 7-11, 2ν − 2 sums and ν − 1 multiplications are necessary. Concerning the for loop at lines 13-15, ν sums are necessary. Finally, at line 16, one multiplication is necessary. Thus, the total number of additions is

A(n) = n − 3 + 2 A(n/2),   (66)

and the total number of multiplications is

M(n) = (1/4) n + 2 M(n/2).   (67)

Concerning the initial case, we have A(4) = 0, M(4) = 1.
For each n = 2^r, with r ∈ N, r ≥ 2, let A(n) be as in (64). We get A(4) = 8 − 11 + 3 = 0. Now we claim that the function A(n) defined in (64) satisfies (66). Indeed, we have

A(n/2) = (1/2) n (log₂ n − 1) − (11/8) n + 3.

Therefore,

n − 3 + 2 A(n/2) = n − 3 + n (log₂ n − 1) − (11/4) n + 6 = A(n),

which gives the claim.
Moreover, for every n = 2^r, with r ∈ N, r ≥ 2, let M(n) be as in (65). One has M(4) = 2 − 1 = 1. Now we claim that the function M(n) defined in (65) fulfils (67). Indeed,

M(n/2) = (1/8) n (log₂ n − 1) − (1/8) n,

and hence

(1/4) n + 2 M(n/2) = (1/4) n + (1/4) n (log₂ n − 1) − (1/4) n = M(n),

getting the claim.

Corollary 4.13.1. Let x ∈ R^n, y = IDSCT(x), c = CS(σ x, n) and s = SN(α x, n). Then,

y_j = α_j c_j / 2 if j ≤ m;   y_j = α_j s_{n−j} / 2 otherwise.   (68)

Proof. Since σ x is symmetric, from Theorem 4.7 we obtain

c_j = C_j(σ x), j = 0, 1, . . . , m.

Moreover, as α x is asymmetric, from Theorem 4.12 we get

s_j = S_j(α x), j = 1, 2, . . . , m − 1.

Therefore, from (47) we deduce (68).

To complete the computation of IDSCT(x) as in (47), we have to compute σ x and α x and to multiply every entry of the results of CS(σ x, n) and SN(α x, n) by α_k/2, k = 0, 1, . . . , n − 1. The computational cost of these operations is O(n). Therefore, the total cost for the computation of (28) is (7/4) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n) multiplications.

4.2 Computation of the eigenvalues of a γ-matrix
In this subsection we find an algorithm to compute the expression in (29). We first deter-
mine the eigenvalues of the matrix G in (27), in order to know the matrix Λ(G) in (29).
Since G ∈ Gn and Gn = Cn ⊕ Bn , we determine two matrices C ∈ Cn and B ∈ Bn ,
C =circ(c), B =rcirc(b). Observe that
g0,0 = c0 + b0 , gν,ν = c0 + bm , (69)
g0,m = cm + bm , gν,m+ν = cm + b0 .
By solving the system in (69), we find c0 , b0 , cm , bm . Knowing c0 , from the principal
diagonal of G it is not difficult to determine the numbers b2i , i = 1, 2, . . . , ν − 1. If we
know these quantities, it is possible to determine the numbers c2i , i = 1, 2, . . . , ν − 1, from
the first row of G. Moreover, note that
g0,1 = c1 + b1 , g1,2 = c1 + b3 , (70)
g0,3 = c3 + b3 , gm−1,m+2 = c3 + b1 .
By solving the system in (70), we obtain c1 , b1 , c3 , b3 . If we know c1 , then from the
first diagonal superior to the principal diagonal of G it is not difficult to determine the
numbers b2i+1 , i = 2, 3, . . . , ν − 1. Knowing these values, it is not difficult to find the
quantities c2i+1 , i = 2, 3, . . . , ν − 1, from the first row of G. It is possible to prove that the
computational cost of all these operations is O(n). Note that

G q^{(j)} = C q^{(j)} + B q^{(j)} = (λ_j^{(C)} + λ_j^{(B)}) q^{(j)}, j = 0, 1, . . . , n − 1,   (71)

where the λ_j^{(C)}'s and the λ_j^{(B)}'s are the j-th eigenvalues of C and B, and the orders are given by Theorems 3.3 and 3.7, respectively.
Proposition 4.14. Given two symmetric vectors c, b ∈ R^n such that

Σ_{t=0}^{n−1} b_t = 0 and Σ_{t=0}^{n−1} (−1)^t b_t = 0,

set d^{(C)} = CS(c, n) and d^{(B)} = CS(b, n). Then the eigenvalues of G = circ(c) + rcirc(b) are given by

λ_0^{(G)} = d_0^{(C)};   λ_j^{(G)} = d_j^{(C)} + d_j^{(B)}, j = 1, 2, . . . , m − 1;   (72)
λ_m^{(G)} = d_m^{(C)};   λ_j^{(G)} = d_{n−j}^{(C)} − d_{n−j}^{(B)}, j = m + 1, m + 2, . . . , n − 1.

Proof. Since c and b are symmetric, from Theorem 4.7 we obtain

d_j^{(C)} = C_j(c) = c^T u^{(j)},   d_j^{(B)} = C_j(b) = b^T u^{(j)}, j = 0, 1, . . . , m.

By Theorems 3.3 and 3.7, we get that λ_j^{(C)} = d_j^{(C)} and λ_j^{(B)} = d_j^{(B)} for j = 0, 1, . . . , m, where λ_j^{(C)} (resp., λ_j^{(B)}) are the eigenvalues of circ(c) (resp., rcirc(b)). From Theorems 3.3, 3.7 and (71) we obtain (72).

To complete the computation of (29), we have to multiply the diagonal matrix Λ(G) by z. The cost of this operation consists of n multiplications. Thus, by Theorem 4.8, the total cost to compute (29) is of (3/2) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n) multiplications.

4.3 The DSCT technique
Now we compute y = Q_n t = DSCT(t). Let α and α̃ be as in (2). By (1), for j = 0, 1, . . . , m it is

y_j = α t_0 + (−1)^j α t_{n/2} + α̃ Σ_{k=1}^{m−1} cos(2kπj/n) t_k + α̃ Σ_{k=1}^{m−1} sin(2kπj/n) t_{n−k} = DSCT_j(t),   (73)

and for j = 1, 2, . . . , m − 1 we have

y_{n−j} = α t_0 + (−1)^j α t_{n/2} + α̃ Σ_{k=1}^{m−1} cos(2kπ(n−j)/n) t_k + α̃ Σ_{k=1}^{m−1} sin(2kπ(n−j)/n) t_{n−k}
 = α t_0 + (−1)^j α t_{n/2} + α̃ Σ_{k=1}^{m−1} cos(2kπj/n) t_k − α̃ Σ_{k=1}^{m−1} sin(2kπj/n) t_{n−k} = DSCT_{n−j}(t).   (74)

Now we define the functions ϕ, ϑ : R^n → R^n by

ϕ(t) = t̄ = (t_0 t_1 . . . t_{m−1} t_m t_{m−1} . . . t_1)^T,   (75)
ϑ(t) = t̃ = (0 t_{n−1} t_{n−2} . . . t_{m+1} 0 −t_{m+1} . . . −t_{n−1})^T.

The following result holds.


Theorem 4.15. For j = 1, 2, . . . , m − 1, it is

DSCT_j(t) = (α̃/2)( C_j(t̄) − t_0 − (−1)^j t_m + S_j(t̃) ) + α t_0 + (−1)^j α t_m   (76)

and

DSCT_{n−j}(t) = (α̃/2)( C_j(t̄) − t_0 − (−1)^j t_m − S_j(t̃) ) + α t_0 + (−1)^j α t_m.   (77)

Moreover, one has

DSCT_0(t) = (α̃/2)( C_0(t̄) − t_0 − t_m ) + α t_0 + α t_m;   (78)

DSCT_m(t) = (α̃/2)( C_m(t̄) − t_0 − t_m ) + α t_0 + α t_m.   (79)

Proof. We have

C_j(t̄) = t̄^T u^{(j)} = t_0 + (−1)^j t_m + Σ_{k=1}^{m−1} cos(2kπj/n) t_k + Σ_{k=m+1}^{n−1} cos(2kπj/n) t_{n−k}
 = t_0 + (−1)^j t_m + Σ_{k=1}^{m−1} cos(2kπj/n) t_k + Σ_{k=1}^{m−1} cos(2(n−k)πj/n) t_k
 = t_0 + (−1)^j t_m + 2 Σ_{k=1}^{m−1} cos(2kπj/n) t_k;   (80)

S_j(t̃) = t̃^T v^{(j)} = Σ_{k=1}^{m−1} sin(2kπj/n) t_{n−k} − Σ_{k=1}^{m−1} sin(2(n−k)πj/n) t_{n−k}
 = 2 Σ_{k=1}^{m−1} sin(2kπj/n) t_{n−k}.   (81)

From (73) and (80) we obtain (76), (78) and (79), while from (74) and (81) we get (77).

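The identities (80) and (81) behind the DSCT can be verified numerically; a sketch, with phi and theta transcribing (75) on 0-based lists:

```python
import math

def phi(t):
    # phi(t) = (t_0, t_1, ..., t_{m-1}, t_m, t_{m-1}, ..., t_1), cf. (75)
    m = len(t) // 2
    return t[:m + 1] + t[m - 1:0:-1]

def theta(t):
    # theta(t) = (0, t_{n-1}, ..., t_{m+1}, 0, -t_{m+1}, ..., -t_{n-1})
    n = len(t)
    m = n // 2
    return [0.0] + [t[n - j] for j in range(1, m)] + [0.0] + [-t[m + j] for j in range(1, m)]

t = [1.0, 0.5, -2.0, 3.0, 0.25, -1.0, 2.5, 0.75]   # n = 8, m = 4
n, m = len(t), len(t) // 2
tb, tt = phi(t), theta(t)
for j in range(1, m):
    # (80): C_j(phi(t)) = t_0 + (-1)^j t_m + 2 sum_k cos(2 k pi j / n) t_k
    Cj = sum(tb[k] * math.cos(2 * math.pi * j * k / n) for k in range(n))
    rhs = t[0] + (-1) ** j * t[m] + 2 * sum(math.cos(2 * k * math.pi * j / n) * t[k] for k in range(1, m))
    assert abs(Cj - rhs) < 1e-9
    # (81): S_j(theta(t)) = 2 sum_k sin(2 k pi j / n) t_{n-k}
    Sj = sum(tt[k] * math.sin(2 * math.pi * j * k / n) for k in range(n))
    rhs = 2 * sum(math.sin(2 * k * math.pi * j / n) * t[n - k] for k in range(1, m))
    assert abs(Sj - rhs) < 1e-9
```
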
The computational cost of the evaluation of the functions ϕ and ϑ is linear, and the cost of the calls of the functions CS and SN to determine the values C_j(t̄) for j = 0, 1, . . . , m and S_j(t̃) for j = 1, 2, . . . , m − 1 is of (7/4) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n) multiplications. The remaining computations for DSCT(t) are of O(n). Thus, the total cost to compute DSCT(t) is of (7/4) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n) multiplications. Therefore, the total cost to compute (27) is given by 5 n log₂ n + o(n log₂ n) additions and (3/2) n log₂ n + o(n log₂ n) multiplications.

5 Multiplication between two γ-matrices


In this section we show how to compute the product of two given γ-matrices.

Lemma 5.1. Let G1, G2 be two γ-matrices. Then, the first column of Λ(G1) Λ(G2) Q_n^T is given by

s^{(G1,G2)} = ( α λ_0^{(G1)} λ_0^{(G2)}, α̃ λ_1^{(G1)} λ_1^{(G2)}, . . . , α̃ λ_{n/2−1}^{(G1)} λ_{n/2−1}^{(G2)}, α λ_{n/2}^{(G1)} λ_{n/2}^{(G2)}, 0, . . . , 0 )^T,   (82)

where α and α̃ are as in (2) and λ_j^{(G1)} (resp., λ_j^{(G2)}), j = 0, 1, . . . , n/2, are the first n/2 + 1 eigenvalues of G1 (resp., G2). Moreover, s^{(G1,G2)} can be computed by (3/2) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n) multiplications.
Proof. First, we note that

q^{(n)}_{1,j} = α if j = 0 or j = n/2;   q^{(n)}_{1,j} = α̃ if j = 1, 2, . . . , n/2 − 1;   q^{(n)}_{1,j} = 0 otherwise.   (83)

Thus, the first part of the assertion follows from (83). Moreover, we get

λ_j^{(G1)} = g_1^T u^{(j)} = C_j(g_1), j = 0, 1, . . . , m;
λ_j^{(G2)} = g_2^T u^{(j)} = C_j(g_2), j = 0, 1, . . . , m,

where g_1 and g_2 are the first rows of G1 and G2, respectively.
In order to compute the eigenvalues of G1 and G2, we do two calls of the function CS, obtaining a computational cost of (3/2) n log₂ n + o(n log₂ n) additions and (1/2) n log₂ n + o(n log₂ n) multiplications. Note that the final multiplication cost is not relevant, because it is of type O(n).
Theorem 5.2. Given two γ-matrices G1, G2, where G1 = C1 + B1, G2 = C2 + B2, C_l ∈ Cn, B_l ∈ Bn, l = 1, 2, we have

G1 G2 = circ( Q_n (s^{(C1C2)} + s^{(B1B2)}) ) + rcirc( Q_n (s^{(C1B2)} + s^{(B1C2)}) ).   (84)

Moreover, with the exception of the computation of the functions circ and rcirc, G1G2 can be computed by means of (9/2) n log₂ n + o(n log₂ n) additions and (3/2) n log₂ n + o(n log₂ n) multiplications.

Proof. First, we note that

G1 G2 = C1 C2 + C1 B2 + B1 C2 + B1 B2.

From this it follows that the first column of G1 G2 is equal to

Q_n (s^{(C1C2)} + s^{(B1B2)}) + Q_n (s^{(C1B2)} + s^{(B1C2)}).   (85)

Moreover, observe that every γ-matrix has the property that its first column coincides with the transpose of its first row. From this and (2.6) we obtain (84).
Furthermore note that, when we compute the two DSCT's in (85), from (82) it follows that the vector t̃ in (75) is equal to 0 in both cases. Therefore, two calls of the function CS are needed.
Again, we observe that to compute s^{(C1C2)}, s^{(B1B2)}, s^{(C1B2)} and s^{(B1C2)} it is enough to compute the eigenvalues of C_l, B_l, l = 1, 2, only once. Thus, four calls of the function CS are needed. Hence, G1G2 can be computed with (9/2) n log₂ n + o(n log₂ n) additions and (3/2) n log₂ n + o(n log₂ n) multiplications.
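The decomposition used in the proof (C1C2 and B1B2 circulant, C1B2 and B1C2 reverse circulant) can be checked on small examples. The sketch below builds circ and rcirc from the index rules circ(c)_{k,l} = c_{(l−k) mod n} and rcirc(b)_{k,l} = b_{(k+l) mod n}, which is one standard convention and is an assumption here, since the constructors are defined earlier in the paper.

```python
def circ(c):
    n = len(c)
    return [[c[(l - k) % n] for l in range(n)] for k in range(n)]

def rcirc(b):
    n = len(b)
    return [[b[(k + l) % n] for l in range(n)] for k in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def is_circulant(P):
    n = len(P)
    return all(abs(P[k][l] - P[0][(l - k) % n]) < 1e-9 for k in range(n) for l in range(n))

def is_reverse_circulant(P):
    n = len(P)
    return all(abs(P[k][l] - P[0][(k + l) % n]) < 1e-9 for k in range(n) for l in range(n))

n = 8
c1 = [float((j * 5 + 3) % 7) for j in range(n)]
b1 = [float((j * 3 + 1) % 5) for j in range(n)]
c2 = [float((j * 2 + 4) % 9) for j in range(n)]
b2 = [float((j * 7 + 2) % 6) for j in range(n)]
C1, B1, C2, B2 = circ(c1), rcirc(b1), circ(c2), rcirc(b2)
assert is_circulant(matmul(C1, C2))           # C1 C2 is circulant
assert is_circulant(matmul(B1, B2))           # B1 B2 is circulant
assert is_reverse_circulant(matmul(C1, B2))   # C1 B2 is reverse circulant
assert is_reverse_circulant(matmul(B1, C2))   # B1 C2 is reverse circulant
```
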

6 Toeplitz matrix preconditioning


For each n ∈ N, let us consider the following class:

Tn = {Tn ∈ Rn×n : Tn = (tk, j )k, j ,tk, j = t|k− j| , k, j ∈ {0, 1, . . . , n − 1} }. (86)

Observe that the class defined in (86) coincides with the family of all real symmetric
Toeplitz matrices.
Now we consider the following problem.

Given T_n ∈ Tn, find

Gn(T_n) = arg min_{G ∈ Gn} ||G − T_n||_F,

where Gn(T_n) = Cn(T_n) + Bn(T_n), Cn(T_n) ∈ Cn, Bn(T_n) ∈ Bn, and || · ||_F denotes the Frobenius norm.

Theorem 6.1. Let Ĝn = Sn + Hn,1. Given T_n ∈ Tn, one has

Cn(T_n) + Bn(T_n) = arg min_{G ∈ Ĝn} ||G − T_n||_F = arg min_{G ∈ Gn} ||G − T_n||_F,

where Cn(T_n) = circ(c), with

c_j = ((n − j) t_j + j t_{n−j}) / n, j ∈ {1, 2, . . . , n − 1};   (87)

c_0 = t_0,   (88)

and Bn(T_n) = rcirc(b), where: for n even and j ∈ {1, 2, . . . , n − 1} \ {n/2},

b_j = (1/(2n)) ( ((4j − 2n)/n)(t_j − t_{n−j}) + 4 Σ_{k=0}^{(j−3)/2} ((2k+1)/n)(t_{2k+1} − t_{n−2k−1}) + 4 Σ_{k=0}^{(n−j−3)/2} ((2k+1)/n)(t_{2k+1} − t_{n−2k−1}) ), j odd;   (89)

b_j = (1/(2n)) ( ((4j − 2n)/n)(t_j − t_{n−j}) + 4 Σ_{k=1}^{j/2−1} ((2k)/n)(t_{2k} − t_{n−2k}) + 4 Σ_{k=1}^{(n−j)/2−1} ((2k)/n)(t_{2k} − t_{n−2k}) ), j even;   (90)

for n even,

b_0 = (2/n) Σ_{k=1}^{n/2−1} ((2k)/n)(t_{2k} − t_{n−2k}),   (91)

b_{n/2} = (4/n) Σ_{k=1}^{n/4−1} ((2k)/n)(t_{2k} − t_{n−2k});   (92)

for n odd and j ∈ {1, 2, . . . , n − 1},

b_j = (1/(2n)) ( ((4j − 2n)/n)(t_j − t_{n−j}) + 4 Σ_{k=0}^{(j−3)/2} ((2k+1)/n)(t_{2k+1} − t_{n−2k−1}) + 4 Σ_{k=1}^{(n−j)/2−1} ((2k)/n)(t_{2k} − t_{n−2k}) ), j odd;   (93)

b_j = (1/(2n)) ( ((4j − 2n)/n)(t_j − t_{n−j}) + 4 Σ_{k=1}^{j/2−1} ((2k)/n)(t_{2k} − t_{n−2k}) + 4 Σ_{k=0}^{(n−j−3)/2} ((2k+1)/n)(t_{2k+1} − t_{n−2k−1}) ), j even;   (94)

for n odd,

b_0 = (2/n) Σ_{k=0}^{(n−3)/2} ((2k+1)/n)(t_{2k+1} − t_{n−2k−1}).   (95)

Proof. Let us define

φ(c, b) = ||T_n − circ(c) − rcirc(b)||²_F
for any two symmetric vectors c, b ∈ R^n. If j ∈ {1, 2, . . . , n − 1}, then we get

∂φ(c, b)/∂c_j = −4(n − j) t_j − 4j t_{n−j} + 4 Σ_{h=0}^{n−1} b_h + 4n c_j.   (96)

Furthermore, one has

∂φ(c, b)/∂c_0 = −2n t_0 + 2 Σ_{h=0}^{n−1} b_h + 2n c_0.   (97)

If n is even and j is odd, j ∈ {1, . . . , n − 1}, then, since c_{n−j} = c_j, we have

∂φ(c, b)/∂b_j = −2 ( 2(t_j − c_j) + 4 Σ_{k=0}^{(j−3)/2} (t_{2k+1} − c_{2k+1}) + 2(t_{n−j} − c_{n−j}) + 4 Σ_{k=0}^{(n−j−3)/2} (t_{2k+1} − c_{2k+1}) − 2n b_j ).   (98)

If both n and j are even, j ∈ {1, 2, . . . , n − 1} \ {n/2}, then, by arguing analogously as in the previous case, we deduce

∂φ(c, b)/∂b_j = −2 ( 2(t_j − c_j) + 4 Σ_{k=1}^{j/2−1} (t_{2k} − c_{2k}) + 2(t_{n−j} − c_{n−j}) + 4(t_0 − c_0) + 4 Σ_{k=1}^{(n−j)/2−1} (t_{2k} − c_{2k}) − 2n b_j ).   (99)

Moreover, if n is even, then one has

∂φ(c, b)/∂b_0 = −2 ( t_0 − c_0 + 2 Σ_{k=1}^{n/2−1} (t_{2k} − c_{2k}) − n b_0 ),   (100)

getting (91). Furthermore, for n even, we have

∂φ(c, b)/∂b_{n/2} = −2 ( 2(t_{n/2} − c_{n/2}) + 4 Σ_{k=1}^{n/4−1} (t_{2k} − c_{2k}) − n b_{n/2} ).   (101)

Now, if both n and j are odd, j ∈ {1, 2, . . . , n − 1}, then, taking into account (87), we obtain

∂φ(c, b)/∂b_j = −2 ( 2(t_j − c_j) + 4 Σ_{k=0}^{(j−3)/2} (t_{2k+1} − c_{2k+1}) + 2(t_{n−j} − c_{n−j}) + 4 Σ_{k=1}^{(n−j)/2−1} (t_{2k} − c_{2k}) + 2(t_0 − c_0) − 2n b_j ).   (102)

If n is odd and j is even, j ∈ {1, 2, . . . , n − 1}, then we have

∂φ(c, b)/∂b_j = −2 ( 2(t_j − c_j) + 4 Σ_{k=1}^{j/2−1} (t_{2k} − c_{2k}) + 2(t_{n−j} − c_{n−j}) + 4 Σ_{k=0}^{(n−j−3)/2} (t_{2k+1} − c_{2k+1}) + 2(t_0 − c_0) − 2n b_j ).   (103)

Finally, for n odd, one has

∂φ(c, b)/∂b_0 = −2 ( t_0 − c_0 + 2 Σ_{k=0}^{(n−3)/2} (t_{2k+1} − c_{2k+1}) − n b_0 ).   (104)

It is not difficult to see that the function φ is convex. By [21, Theorem 2.2], φ has exactly one point of minimum. From this it follows that φ admits exactly one stationary point. Now we claim that this point satisfies

Σ_{k=0}^{n/4−1} b_{2k+1} = 0   (105)

and

b_0 + 2 Σ_{k=1}^{n/4−1} b_{2k} + b_{n/2} = 0   (106)

when n is even, and

b_0 + 2 Σ_{j=1}^{(n−1)/2} b_j = 0   (107)

if n is odd, that is Bn(T_n) ∈ Bn. From (105)-(107) and (96)-(97) we get (87)-(88). Furthermore, from (87)-(88) and (98)-(104) we obtain (89)-(95). Finally, (105)-(106) follow from (89)-(92), while (107) is a consequence of (93)-(95).
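The circulant part (87)–(88) coincides with averaging T_n over its circulant diagonals (the classical Frobenius-optimal circulant approximation); this is easy to check numerically on illustrative data:

```python
n = 8
t = [5.0, 2.0, 1.5, 1.0, 0.7, 0.4, 0.2, 0.1]            # t_{k,j} = t_{|k-j|}
T = [[t[abs(k - j)] for j in range(n)] for k in range(n)]
# (87)-(88): c_0 = t_0, c_j = ((n - j) t_j + j t_{n-j}) / n
c = [t[0]] + [((n - j) * t[j] + j * t[n - j]) / n for j in range(1, n)]
for j in range(n):
    diag = [T[k][(k + j) % n] for k in range(n)]         # j-th circulant diagonal of T
    assert abs(c[j] - sum(diag) / n) < 1e-12
# the resulting generator is symmetric, as required
assert all(abs(c[j] - c[n - j]) < 1e-12 for j in range(1, n))
```
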
Now, for n ∈ N, set

T̂n = { T_n ∈ Tn : there is a function f(z) = Σ_{j=−∞}^{+∞} t_j z^j, with z ∈ C, |z| = 1, and such that Σ_{j=−∞}^{+∞} |t_j| < +∞ }.   (108)

Observe that any function defined by a power series as in the first line of (108) is real-valued, and the set of such functions satisfying the condition Σ_{j=−∞}^{+∞} |t_j| < +∞ is called the Wiener class (see, e.g., [9], [16, §3]).
Given a function f belonging to the Wiener class and a matrix T_n ∈ T̂n, T_n( f ) = (t_{k,j})_{k,j} with t_{k,j} = t_{|k−j|}, k, j ∈ {0, 1, . . . , n − 1}, and f(z) = Σ_{j=−∞}^{+∞} t_j z^j, we say that T_n( f ) is generated by f.
We will often use the following property of absolutely convergent series (see, e.g., [9],
[15]).

Lemma 6.2. Let Σ_{j=1}^{∞} t_j be an absolutely convergent series. Then, we get

lim_{n→+∞} (1/n) [ Σ_{k=1}^{n} k |t_k| + Σ_{k=⌈(n+1)/2⌉}^{n} (n − k) |t_k| ] = 0.

Proof. Let S = Σ_{j=1}^{∞} |t_j|. Choose arbitrarily ε > 0. By hypothesis, there is a positive integer n_0 with

Σ_{k=n_0+1}^{∞} |t_k| ≤ ε/4.   (109)

Let n_1 = max{ 2 n_0 S / ε, 2 n_0 }. Taking into account (109), for every n > n_1 it is

0 ≤ (1/n) ( Σ_{k=1}^{n} k |t_k| + Σ_{k=⌈(n+1)/2⌉}^{n} (n − k) |t_k| )
 = (1/n) Σ_{k=1}^{n_0} k |t_k| + (1/n) Σ_{k=n_0+1}^{n} k |t_k| + (1/n) Σ_{k=⌈(n+1)/2⌉}^{n} (n − k) |t_k|
 ≤ (n_0/n_1) Σ_{k=1}^{n_0} |t_k| + 2 Σ_{k=n_0+1}^{∞} |t_k| ≤ n_0 S ε/(2 n_0 S) + 2 ε/4 = ε.

So, the assertion follows.
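A quick numerical illustration of Lemma 6.2, with the absolutely convergent choice t_k = 1/k² (illustrative data only):

```python
import math

def quantity(n, t):
    # (1/n) [ sum_{k=1}^{n} k |t_k| + sum_{k=ceil((n+1)/2)}^{n} (n - k) |t_k| ]
    first = sum(k * abs(t(k)) for k in range(1, n + 1))
    second = sum((n - k) * abs(t(k)) for k in range(math.ceil((n + 1) / 2), n + 1))
    return (first + second) / n

t = lambda k: 1.0 / k ** 2          # an absolutely convergent example series
vals = [quantity(2 ** r, t) for r in (4, 8, 12)]
assert vals[0] > vals[1] > vals[2]  # the quantity decreases towards 0
assert vals[2] < 0.01
```
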


Theorem 6.3. For n ∈ N, given T_n( f ) ∈ T̂n, let Cn( f ) = Cn(T_n( f )), Bn( f ) = Bn(T_n( f )) be as in Theorem 6.1, and set Gn( f ) = Cn( f ) + Bn( f ). Then, one has:

6.3.1) for every ε > 0 there is a positive integer n_0 such that for each n ≥ n_0 and for every eigenvalue λ_j^{(Gn(f))} of Gn( f ), it is

λ_j^{(Gn(f))} ∈ [ f_min − ε, f_max + ε ], j ∈ {0, 1, . . . , n − 1},   (110)

where f_min and f_max denote the minimum and the maximum value of f, respectively;

6.3.2) for every ε > 0 there are k, n_1 ∈ N such that for each n ≥ n_1 the number of eigenvalues λ_j^{((Gn(f))^{−1} Tn(f))} of (Gn( f ))^{−1} T_n( f ) such that |λ_j^{((Gn(f))^{−1} Tn(f))} − 1| > ε is less than k, namely the spectrum of (Gn( f ))^{−1} T_n( f ) is clustered around 1.

Proof. We begin with proving 6.3.1). Choose arbitrarily ε > 0. We denote by λ_j^{(Cn(f))} (resp., λ_j^{(Bn(f))}) the generic j-th eigenvalue of Cn( f ) (resp., Bn( f )), in the order given by Theorem 3.3 (resp., Theorem 3.7). To prove the assertion it is enough to show that property (110) holds (in correspondence with ε/2) for each λ_j^{(Cn(f))}, j = 0, 1, . . . , n − 1, and that

λ_j^{(Bn(f))} ∈ [−ε/2, ε/2] for every n ≥ n_0 and j ∈ {0, 1, . . . , n − 1}.   (111)

Indeed, since Cn( f ), Bn( f ) ∈ Gn, we have

λ_j^{(Gn(f))} = λ_j^{(Cn(f))} + λ_j^{(Bn(f))} for all j ∈ {0, 1, . . . , n − 1},

getting the assertion.
We first consider the case n odd. For every j ∈ {0, 1, . . . , n − 1}, since c_h = c_{n−h} and thanks to (87), one has

λ_j^{(Cn(f))} = Σ_{h=0}^{n−1} c_h cos(2πhj/n) = c_0 + 2 Σ_{h=1}^{(n−1)/2} c_h cos(2πhj/n)
 = t_0 + 2 Σ_{h=1}^{(n−1)/2} t_h cos(2πhj/n) − 2 Σ_{h=1}^{(n−1)/2} (h/n) t_h cos(2πhj/n) + 2 Σ_{h=1}^{(n−1)/2} (h/n) t_{n−h} cos(2πhj/n)   (112)
 ≤ Σ_{h=−(n−1)/2}^{(n−1)/2} t_h e^{i 2πjh/n} + 2 Σ_{h=1}^{(n−1)/2} (h/n) |t_h| + 2 Σ_{h=1}^{(n−1)/2} (h/n) |t_{n−h}|
 ≤ Σ_{h=−∞}^{+∞} t_h e^{i 2πjh/n} + 2 Σ_{h=1}^{(n−1)/2} (h/n) |t_h| + 2 Σ_{h=(n+1)/2}^{n−1} ((n−h)/n) |t_h|.

Note that the first addend of the last term in (112) tends to f(e^{i 2πj/n}) as n tends to +∞, and hence, without loss of generality, we can suppose that it belongs to the interval [f_min − ε/6, f_max + ε/6] for n sufficiently large. By Lemma 6.2, it is

lim_{n→+∞} ( Σ_{h=1}^{(n−1)/2} (h/n) |t_h| + Σ_{h=(n+1)/2}^{n−1} ((n−h)/n) |t_h| ) = 0.
When n is even, we get

λ_j^{(Cn(f))} = Σ_{h=0}^{n−1} c_h cos(2πhj/n) = c_0 + 2 Σ_{h=1}^{n/2−1} c_h cos(2πhj/n) + (−1)^j c_{n/2}
 = t_0 + 2 Σ_{h=1}^{n/2−1} t_h cos(2πhj/n) + (−1)^j t_{n/2} − 2 Σ_{h=1}^{n/2−1} (h/n) t_h cos(2πhj/n) + 2 Σ_{h=1}^{n/2−1} (h/n) t_{n−h} cos(2πhj/n)
 ≤ Σ_{h=−n/2+1}^{n/2} t_h e^{i 2πjh/n} + 2 Σ_{h=1}^{n/2−1} (h/n) |t_h| + 2 Σ_{h=1}^{n/2−1} (h/n) |t_{n−h}|
 ≤ Σ_{h=−∞}^{+∞} t_h e^{i 2πjh/n} + 2 Σ_{h=1}^{n/2−1} (h/n) |t_h| + 2 Σ_{h=n/2+1}^{n−1} ((n−h)/n) |t_h|.

Thus, it is possible to repeat the same argument used in the previous case, getting the desired estimate for the eigenvalues of Cn( f ).
It remains to prove (111). From Theorem 3.7 we obtain

λ_j^{(Bn(f))} = Σ_{h=0}^{n−1} b_h cos(2πjh/n) if j ≤ n/2;
λ_j^{(Bn(f))} = −Σ_{h=0}^{n−1} b_h cos(2π(n−j)h/n) if j > n/2.

So, without loss of generality, it is enough to prove (111) for j ≤ n/2.


We first consider the case when n is even. We get

|λ_j^{(Bn(f))}| = |Σ_{h=0}^{n−1} b_h cos(2πjh/n)| ≤ Σ_{h=0}^{n−1} |b_h|
 = |b_0| + Σ_{h=1}^{n/4−1} |b_{2h}| + Σ_{h=n/4+1}^{n/2−1} |b_{2h}| + |b_{n/2}| + Σ_{h=0}^{n/2−1} |b_{2h+1}|   (113)
 = I_1 + I_2 + I_3 + I_4 + I_5.

So, in order to obtain (111), it is enough to prove that each addend in the last line of (113) tends to 0 as n tends to +∞. We get:

I_1 = |b_0| ≤ Σ_{k=1}^{n/2−1} (4k/n²) |t_{2k}| + Σ_{k=1}^{n/2−1} (4k/n²) |t_{n−2k}|
 = Σ_{k=1}^{n/2−1} (4k/n²) |t_{2k}| + Σ_{k=1}^{n/2−1} ((2n−4k)/n²) |t_{2k}| ≤ ((4n−8)/n²) Σ_{h=1}^{∞} |t_h| ≤ 4S/n,   (114)

where S = Σ_{h=1}^{∞} |t_h|. From (114) it follows that I_1 = |b_0| tends to 0 as n tends to +∞. Analogously it is possible to check that I_4 = |b_{n/2}| tends to 0 as n tends to +∞.
Now we estimate the term I_2 + I_3. We first observe that

(1/n²) Σ_{h=1}^{n/4−1} |4h − n| (|t_{2h}| + |t_{n−2h}|) + (1/n²) Σ_{h=n/4+1}^{n/2−1} |4h − n| (|t_{2h}| + |t_{n−2h}|)
 ≤ (1/n²) Σ_{h=1}^{n/2−1} |4h − n| (|t_{2h}| + |t_{n−2h}|)   (115)
 ≤ (1/n) Σ_{h=1}^{n/2−1} (|t_{2h}| + |t_{n−2h}|) = (2/n) Σ_{h=1}^{n/2−1} |t_{2h}| ≤ 2S/n,

so that, arguing analogously as in (114), the left-hand side of (115) tends to 0 as n tends to +∞.
Furthermore, we have

(2/n) Σ_{h=1}^{n/2−1} Σ_{k=1}^{h−1} (2k/n) |t_{2k}| = Σ_{k=1}^{n/2−2} (4k/n²)(n/2 − 1 − k) |t_{2k}| ≤ Σ_{k=1}^{n/2−2} (2k/n) |t_{2k}|,   (116)

(2/n) Σ_{h=1}^{n/2−1} Σ_{k=1}^{h−1} (2k/n) |t_{n−2k}| = Σ_{k=1}^{n/2−2} (4k/n²)(n/2 − 1 − k) |t_{n−2k}| ≤ Σ_{k=2}^{n/2−1} (2k/n) |t_{2k}|,   (117)

(2/n) Σ_{h=1}^{n/2−1} Σ_{k=1}^{n/2−h−1} (2k/n) |t_{2k}| ≤ Σ_{k=1}^{n/2−2} (2k/n) |t_{2k}|,   (118)

and

(2/n) Σ_{h=1}^{n/2−1} Σ_{k=1}^{n/2−h−1} (2k/n) |t_{n−2k}| ≤ Σ_{k=1}^{n/2−2} (4k/n²)(n/2 − k − 1) |t_{n−2k}|
 = Σ_{k=2}^{n/2−1} ((n − 2k)(2k − 2)/n²) |t_{2k}| ≤ Σ_{k=2}^{n/2−1} (2k/n) |t_{2k}|.   (119)

Summing up (115)-(119), from (90) we obtain

I_2 + I_3 = Σ_{h=1}^{n/4−1} |b_{2h}| + Σ_{h=n/4+1}^{n/2−1} |b_{2h}| ≤ (1/n²) Σ_{h=1}^{n/2−1} 4h |t_{2h}| + (1/n²) Σ_{h=1}^{n/2−1} 4h |t_{n−2h}| + Σ_{k=1}^{n/2−2} (4k/n) |t_{2k}| + Σ_{k=2}^{n/2−1} (4k/n) |t_{2k}|.   (120)

Thus, taking into account Lemma 6.2, it is possible to check that the terms at the right-hand side of (120) tend to 0 as n tends to +∞.
Now we estimate the term I5 . One has
\[
\frac{2}{n} \sum_{h=0}^{n/2-1} \left( \sum_{k=0}^{h-1} \frac{2k+1}{n}\,|t_{2k+1}| \right) \le \sum_{k=0}^{n/2} \frac{2k+1}{n}\,|t_{2k+1}| = \sum_{k=0}^{n/2} \frac{2k}{n}\,|t_{2k+1}| + \sum_{k=0}^{n/2} \frac{1}{n}\,|t_{2k+1}| = J_1 + J_2.
\]
Thanks to Lemma 6.2, it is possible to check that $J_1$ tends to 0 as $n$ tends to $+\infty$. Moreover, we have
\[
0 \le J_2 \le \frac{S}{n}, \tag{121}
\]
and hence $J_2$ tends to 0 as $n$ tends to $+\infty$. Arguing as in the previous case, it is possible to prove that
\[
I_5 = \sum_{k=0}^{n/2-1} |b_{2k+1}| \le \frac{1}{n^2} \sum_{k=0}^{n/2-1} 2\,(2k+1)\,|t_{2k+1}| + \frac{2}{n} \sum_{k=1}^{n/2-2} \left( \frac{n}{2} - 1 - k \right) \frac{2k+1}{n}\,|t_{n-2k-1}| \le \tag{122}
\]
\[
\le \sum_{k=1}^{n/2-2} \frac{2\,(2k+1)}{n}\,|t_{2k+1}| + \sum_{k=2}^{n/2-1} \frac{2\,(2k+1)}{n}\,|t_{2k+1}|.
\]
By virtue of Lemma 6.2 and (121), we get that $I_5$ tends to 0 as $n$ tends to $+\infty$. Therefore, all addends on the right-hand side of (113) tend to 0 as $n$ tends to $+\infty$. Thus, (111) follows from (113), (114), (120) and (122).
When $n$ is odd, it is possible to proceed analogously as in the even case. This proves 6.3.1).
Now we turn to 6.3.2), that is, we prove that the spectrum of $(G_n(f))^{-1} T_n(f)$ is clustered around 1. Since $(G_n(f))^{-1}(T_n(f) - G_n(f)) = (G_n(f))^{-1} T_n(f) - I_n$, where $I_n$ is the identity matrix, it is enough to check that the eigenvalues of $(G_n(f))^{-1}(T_n(f) - G_n(f))$ are clustered around 0.
Choose arbitrarily $\varepsilon > 0$. Since $f$ belongs to the Wiener class, there exists a positive integer $n_0 = n_0(\varepsilon)$ such that
\[
\sum_{j=n_0+1}^{\infty} |t_j| \le \varepsilon.
\]
Proceeding similarly as in the proof of [9, Theorem 3 (ii)], we get
\[
T_n(f) - G_n(f) = T_n(f) - C_n(f) - B_n(f) = E_n^{(n_0)} + W_n^{(n_0)} + Z_n^{(n_0)},
\]
where $E_n^{(n_0)}$, $W_n^{(n_0)}$ and $Z_n^{(n_0)}$ are suitable matrices such that $E_n^{(n_0)}$ and $W_n^{(n_0)}$ agree with the $(n-n_0)\times(n-n_0)$ leading principal submatrices of $T_n(f) - C_n(f)$ and $B_n(f)$, respectively.
We have:
\[
\operatorname{rank}(E_n^{(n_0)}) \le 2 n_0;
\]
\[
\|W_n^{(n_0)}\|_1 \le \frac{2}{n} \sum_{k=1}^{n-n_0-1} k\,|t_{n-k} - t_k| \le \frac{2}{n} \sum_{k=1}^{n_0} k\,|t_k| + 4 \sum_{k=n_0+1}^{\infty} |t_k|; \tag{123}
\]
\[
\|Z_n^{(n_0)}\|_1 \le \sum_{h=0}^{n-1} |b_h|,
\]
where the symbol $\|\cdot\|_1$ denotes the 1-norm of the involved matrix. Let $n_1 > n_0$ be a positive integer with
\[
\frac{1}{n_1} \sum_{k=1}^{n_0} k\,|t_k| \le \varepsilon \quad \text{and} \quad \sum_{h=0}^{n-1} |b_h| \le \varepsilon \ \text{ for every } n \ge n_1. \tag{124}
\]
Note that such an $n_1$ does exist, thanks to Lemma 6.2 and since all terms of (113) tend to 0 as $n$ tends to $+\infty$. From (123) and (124) it follows that
\[
\|W_n^{(n_0)} + Z_n^{(n_0)}\|_1 \le \|W_n^{(n_0)}\|_1 + \|Z_n^{(n_0)}\|_1 \le 8\varepsilon. \tag{125}
\]
From (125) and the Cauchy interlace theorem (see, e.g., [52]) we deduce that the eigenvalues of $T_n(f) - G_n(f)$ are clustered around 0, with the exception of at most $k = 2 n_0$ of them. By virtue of the Courant–Fischer minimax characterization applied to the matrix $(G_n(f))^{-1}(T_n(f) - G_n(f))$ (see, e.g., [52]), we obtain
\[
\lambda_j^{\bigl((G_n(f))^{-1}(T_n(f) - G_n(f))\bigr)} \le \frac{\lambda_j^{(T_n(f) - G_n(f))}}{f_{\min}} \tag{126}
\]
for $n$ large enough. From (126) we deduce that the spectrum of $(G_n(f))^{-1}(T_n(f) - G_n(f))$ is clustered around 0; namely, for every $\varepsilon > 0$ there are $k, n_1 \in \mathbb{N}$ such that, for each $n \ge n_1$, the number of eigenvalues $\lambda_j^{((G_n(f))^{-1} T_n(f))}$ with $\bigl|\lambda_j^{((G_n(f))^{-1} T_n(f))} - 1\bigr| > \varepsilon$ is at most equal to $k$.
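This clustering behaviour can also be observed numerically. The sketch below is illustrative only: as a simplified stand-in for the preconditioner $G_n(f)$ it uses a Strang-type circulant built from the central diagonals of $T_n(f)$, with the Wiener-class symbol chosen as $t_k = 2^{-|k|}$; both the symbol and all variable names are our assumptions, not taken from the paper.

```python
import numpy as np

# Wiener-class symbol coefficients: t_k = 2^{-|k|}, so sum |t_k| < +inf
# and the symbol f is strictly positive (an illustrative choice of ours).
n = 64
t = 0.5 ** np.arange(n)

# Symmetric Toeplitz matrix T_n(f).
J = np.arange(n)
T = t[np.abs(J[:, None] - J[None, :])]

# Strang-type circulant stand-in for the preconditioner: its first
# column copies the central diagonals of T and wraps the rest around.
c = t.copy()
k = np.arange(n // 2 + 1, n)
c[k] = t[n - k]
C = c[(J[:, None] - J[None, :]) % n]

# C^{-1} T is similar to the symmetric matrix C^{-1/2} T C^{-1/2},
# hence its eigenvalues are real (and positive here).
ev = np.linalg.eigvals(np.linalg.solve(C, T)).real

# All but a few eigenvalues lie within 0.05 of 1.
outliers = int(np.sum(np.abs(ev - 1) > 0.05))
```

Rerunning this with larger $n$ leaves the number of outliers bounded while the cluster around 1 tightens, which mirrors the qualitative content of 6.3.2).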
7 Conclusions
We analyzed a new class of simultaneously diagonalizable real matrices, the γ-matrices. Such a class includes both symmetric circulant matrices and a subclass of reverse circulant matrices. We dealt with both spectral and structural properties of this family of matrices. Successively, we defined some algorithms for fast computation of the product between a γ-matrix and a real vector and between two γ-matrices, and we proved that the computational cost of a multiplication between a γ-matrix and a real vector is at most $\frac{7}{4}\, n \log_2 n + o(n \log_2 n)$ additions and $\frac{1}{2}\, n \log_2 n + o(n \log_2 n)$ multiplications, and the computational cost of a multiplication between two γ-matrices is at most $\frac{9}{2}\, n \log_2 n + o(n \log_2 n)$ additions and $\frac{3}{2}\, n \log_2 n + o(n \log_2 n)$ multiplications. Finally, we gave a technique for approximating a real symmetric Toeplitz matrix by a γ-matrix, and we proved that the eigenvalues of the preconditioned matrix are clustered around 0 with the exception of at most a finite number of terms.
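As an informal illustration of where such $O(n \log_2 n)$ costs come from, the sketch below multiplies a vector by the sum of a symmetric circulant and a reverse circulant matrix (the two structures entering a γ-matrix) using FFTs. This is a generic FFT-based stand-in, not the fast trigonometric-transform algorithms developed in the paper, and all function names are ours.

```python
import numpy as np

def circulant_matvec(c, v):
    # C v with C[j, k] = c[(j - k) mod n]: a circular convolution,
    # computed in O(n log n) time via the FFT.
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)).real

def reverse_circulant_matvec(b, v):
    # B v with B[j, k] = b[(j + k) mod n]: a circular correlation,
    # likewise computed via the FFT.
    return np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(v))).real

rng = np.random.default_rng(0)
n = 16
c = rng.standard_normal(n)
c = (c + np.r_[c[0], c[1:][::-1]]) / 2  # enforce c_k = c_{n-k} (symmetric circulant)
b = rng.standard_normal(n)
v = rng.standard_normal(n)

# Dense counterparts, only used to check the fast products.
J = np.arange(n)
C = c[(J[:, None] - J[None, :]) % n]
B = b[(J[:, None] + J[None, :]) % n]

fast = circulant_matvec(c, v) + reverse_circulant_matvec(b, v)
assert np.allclose(fast, (C + B) @ v)
```

Each fast product costs a constant number of length-$n$ FFTs instead of the $O(n^2)$ dense multiplication, which is the mechanism behind the operation counts quoted above.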
References
[1] Ahmed, N., Natarajan, T., Rao, K. R.: Discrete Cosine Transform. IEEE Trans. on
Computers 23, 90–93 (1974)

[2] Andrecut, M.: Applications of left circulant matrices in signal and image processing,
Modern Physics Letters B, 22, 231–241 (2008)

[3] Aricò, A., Serra–Capizzano, S., Tasche, M.: Fast and numerically stable algorithms
for discrete Hartley transforms and applications to preconditioning. Comm. Infor-
mation Systems 6, 21–68 (2005)

[4] Badeau, R., Boyer, R.: Fast multilinear singular value decomposition for structured
tensors. SIAM J. Matrix Anal. Appl. 30, 1008–1021 (2008)

[5] Bergland, G. D.: A radix-eight Fast Fourier Transform Subroutine for real-valued
series. IEEE Trans. on Audio and Electroacoustics 17, 138–144 (1969)

[6] Bertaccini, D.: The spectrum of circulant-like preconditioners for some general linear multistep formulas for linear boundary value problems. SIAM J. Numer. Anal. 40, 1798–1822 (2002)

[7] Bertaccini, D., Ng, M. K.: Block ω-circulant preconditioners for the systems of differential equations. Calcolo 40, 71–90 (2003)

[8] Bini, D., Di Benedetto, F.: A new preconditioner for the parallel solution of positive
definite Toeplitz systems. In: Second Ann. Symp. Parallel Algorithms and Architec-
ture, Crete, Greece, 220–223 (1990)

[9] Bini, D., Favati, P.: On a matrix algebra related to the discrete Hartley transform.
SIAM J. Matrix Anal. Appl. 14, 500–507 (1993)

[10] Boccuto, A., Gerace, I., Martinelli, F.: Half-quadratic image restoration with a non-
parallelism constraint, J. Math. Imaging Vis. 59, 270–295 (2017)

[11] Boccuto, A., Gerace, I., Pucci, P.: Convex Approximation Technique for Interacting
Line Elements Deblurring: A New Approach. J. Math. Imaging Vis. 44, 168–184
(2012)

[12] Bortoletti, A., Di Fiore, C.: On a set of matrix algebras related to discrete Hartley-
type transforms. Linear Algebra Appl. 366, 65–85 (2003)

[13] Bose, A., Saha, K.: Random circulant matrices. CRC Press, Taylor & Francis Group,
Boca Raton-London-New York (2019)

[14] Carrasquinha, E., Amado, C., Pires, A. M., Oliveira, L.: Image reconstruction based on circulant matrices. Signal Processing: Image Communication 63, 72–80 (2018)

[15] Chan, R.: The spectrum of a family of circulant preconditioned Toeplitz systems.
SIAM J. Numer. Anal. 26, 503–506 (1989)
[16] Chan, R. H., Jin, X.-Q., Yeung, M.-C.: The Spectra of Super-Optimal Circulant
Preconditioned Toeplitz Systems. SIAM J. Numer. Anal. 28, 871–879 (1991)

[17] Chan, R. H., Strang, G.: Toeplitz equations by conjugate gradients with circulant
preconditioner. SIAM J. Sci. Stat. Comput. 10, 104–119 (1989)

[18] Codenotti, B., Gerace, I., Vigna, S.: Hardness results and spectral techniques for
combinatorial problems on circulant graphs. Linear Algebra Appl. 285, 123–142
(1998)

[19] Davis, P. J.: Circulant matrices. John Wiley & Sons, New York (1979)

[20] Dell’Acqua, P., Donatelli, M., Serra–Capizzano, S., Sesana, D., Tablino–Possio, C.:
Optimal preconditioning for image deblurring with anti–reflective boundary condi-
tions. Linear Algebra Appl. 502, 159–185 (2016).

[21] Di Fiore, C., Zellini, P.: Matrix algebras in optimal preconditioning. Linear Algebra
Appl. 335, 1–54 (2001)

[22] Ding, W., Qi, L., Wei, Y.: Fast Hankel tensor-vector product and applications to
exponential data fitting. Numerical Linear Algebra with Appl. 22, 814–832 (2015)

[23] Discepoli, M., Gerace, I., Mariani, R., Remigi, A.: A Spectral Technique to Solve the Chromatic Number Problem in Circulant Graphs. In: Computational Science and Its Applications - ICCSA 2004, Lecture Notes in Computer Science 3045, 745–754 (2004)

[24] Donatelli, M., Serra–Capizzano, S.: Antireflective boundary conditions for deblur-
ring problems. J. Electr. Comput. Eng., Art. ID 241467, 18 pp. (2010)

[25] Duhamel, P., Vetterli, M.: Improved Fourier and Hartley transform algorithms: Ap-
plication to cyclic convolution of real data. IEEE Trans. Acoustics, Speech and Sig-
nal Processing 35, 818–824 (1987)

[26] Dykes, L., Noschese, S., Reichel, L.: Circulant preconditioners for discrete ill-posed
Toeplitz systems. Numer. Algor. 75, 477–490 (2017)

[27] Ersoy, O.: Real Discrete Fourier Transform. IEEE Trans. Acoustics, Speech and
Signal Processing 33, 880–882 (1985)

[28] Evans, D. J., Okolie, S. O.: Circulant matrix methods for the numerical solution of
partial differential equations by FFT convolutions J. Computational Appl. Math. 8,
238–241 (1982)

[29] Fischer, R., Huckle, T.: Using ω-circulant matrices for the preconditioning of Toeplitz systems. Selçuk J. Appl. Math. 4, 71–88 (2003)

[30] Gilmour, A. E.: Circulant matrix methods for the numerical solution of partial dif-
ferential equations by FFT convolutions. Appl. Math. Modelling 12, 44–50 (1988)
[31] Gerace, I., Greco, F.: The Travelling Salesman Problem in symmetric circulant matrices with two stripes. Mathematical Structures in Computer Science 18, Special Issue 1: In memory of Sauro Tulipani, 165–175 (2008)

[32] Gerace, I., Pucci, P., Ceccarelli, N., Discepoli, M., Mariani, R.: A Preconditioned
Finite Element Method for the p-Laplacian Parabolic Equation. Appl. Numer. Anal.
Computational Math. 1, 155–164 (2004)

[33] Greco, F., Gerace, I.: The Traveling Salesman Problem in Circulant Weighted
Graphs With Two Stripes. Electronic Notes in Theoretical Computer Science 169,
99–109 (2007)

[34] Gupta, A., Rao, K. R.: A fast recursive algorithm for the discrete sine transform.
IEEE Trans. Acoustics, Speech and Signal Processing 38, 553–557 (1990)

[35] Gutekunst, S. C., Williamson, D. P.: Characterizing the Integrality Gap of the Sub-
tour LP for the Circulant Traveling Salesman Problem. SIAM J. Discrete Math. 33,
2452–2478 (2019)

[36] Gutekunst, S. C., Jin, B., Williamson, D. P.: The Two-Stripe Symmetric Circulant
TSP is in P. https://people.orie.cornell.edu/dpw/2stripe.pdf (2021)

[37] Győri, I., Horváth, L.: Utilization of Circulant Matrix Theory in Periodic Au-
tonomous Difference Equations. Int. J. Difference Equations 9, 163–185 (2014)

[38] Győri, I., Horváth, L.: Existence of periodic solutions in a linear higher order system
of difference equations. Computers and Math. with Appl. 66, 2239–2250 (2013)

[39] Heinig, G., Rost, K.: Hartley transform representations of symmetric Toeplitz matrix
inverses with application to fast matrix-vector multiplication. SIAM J. Matrix Anal.
Appl. 22, 86–105 (2000)

[40] Henriques, J. F.: Circulant structures in computer vision. Ph. D. thesis., Department
of Electrical and Computer Engineering, Faculty of Science and Technology, Coim-
bra (2015)

[41] Lei, Y. J., Xu, W. R., Lu, Y., Niu, Y. R., Gu, X. M.: On the symmetric doubly
stochastic inverse eigenvalue problem. Linear Algebra Appl. 445, 181–205 (2014)

[42] Lv, X.-G., Huang, T.-Z., Ren, Z.-G.: A modified T. Chan’s preconditioner for
Toeplitz systems. Computers and Math. with Appl. 58, 693–699 (2009)

[43] Martens, J.–B.: Discrete Fourier Transform algorithms for real valued sequences.
IEEE Trans. Acoustics, Speech and Signal Processing 32, 390–396 (1984)

[44] Murakami, H.: Real-valued fast discrete Fourier transform and cyclic convolution
algorithms of highly composite even length. Proc. of the international Conference
on Acoustics, Speech, and Signal Processing (ICASSP) 3, 1311–1314 (1996)

[45] Oseledets, I., Tyrtyshnikov, E.: A unifying approach to the construction of circulant
preconditioners. Linear Algebra Appl. 418, 435–449 (2006).
[46] Papy, J. M., De Lathauwer, L., Van Huffel, S.: Exponential data fitting using multilinear algebra: The single-channel and the multi-channel case. Numerical Linear Algebra with Appl. 12, 809–826 (2005)

[47] Qi, L.: Hankel tensors: Associated Hankel matrices and Vandermonde decomposi-
tion. Commun. Math. Sci. 13, 113–125 (2015)

[48] Scarabotti, F.: The discrete sine transform and the spectrum of the finite q-ary tree.
SIAM J. Discrete Math. 19, 1004-1010 (2006)

[49] Serra-Capizzano, S., Sesana, D.: A note on the eigenvalues of g-circulants (and of
g-Toeplitz, g-Hankel matrices). Calcolo 51, 639–659 (2014)

[50] Strang, G.: The Discrete Cosine Transform. SIAM Review 41, 135–147 (1999)

[51] Tee, G. J.: Eigenvectors of block circulant and alternating circulant matrices. New
Zealand J. Math. 36, 195–211 (2007)

[52] Wilkinson, J.: The algebraic eigenvalue problem. Clarendon Press, Oxford (1965)