Wavelet Matrix Operations and Quantum Transforms


Applied Mathematics and Computation 428 (2022) 127179

Contents lists available at ScienceDirect

Applied Mathematics and Computation


journal homepage: www.elsevier.com/locate/amc

Wavelet matrix operations and quantum transforms


Zhiguo Zhang (School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, China; corresponding author)
Mark A. Kon (Department of Mathematics and Statistics, Boston University, Boston, MA 02215, USA)

Article history: Received 14 February 2021; Revised 5 April 2022; Accepted 11 April 2022; Available online 6 May 2022.
Keywords: Wavelet transform; Interpolatory wavelet; Multiresolution analysis; Quantum algorithm; Generalized sampling.

Abstract
The currently studied version of the quantum wavelet transform implements the Mallat pyramid algorithm, calculating wavelet and scaling coefficients at lower resolutions from higher ones, via quantum computations. However, the pyramid algorithm cannot replace wavelet transform algorithms, which obtain wavelet coefficients directly from signals. The barrier to implementing quantum versions of wavelet transforms has been the fact that the mapping from sampled signals to wavelet coefficients is not canonically represented with matrices. To solve this problem, we introduce new inner products and norms into the sequence space l²(Z), based on wavelet sampling theory. We then show that wavelet transform algorithms using L²(R) inner product operations can be implemented in infinite matrix forms, directly mapping discrete function samples to wavelet coefficients. These infinite matrix operators are then converted into finite forms for computational implementation. Thus, via singular value decompositions of these finite matrices, our work allows implementation of the standard wavelet transform with a quantum circuit. Finally, we validate these wavelet matrix algorithms on MRAs involving spline and Coiflet wavelets, illustrating some of our theorems.
© 2022 Elsevier Inc. All rights reserved.

1. Introduction

An important topic in recent years, with both theoretical and applied interest, arises from several areas and applications of quantum computation [1,2]. With the help of quantum computation, any operation that can be properly formulated in terms of unitary matrix products can potentially be carried out more efficiently. This has motivated researchers to design versatile quantum algorithms via representations of classical ones in terms of matrix structures. The resulting work has led to several new areas, including quantum transform theory, which implements function transforms via quantum methods [2–6].
The topic of quantum transforms (such as quantum Fourier and wavelet transforms) has generated strong connections with applied and computational mathematics since the theory was introduced in the 1990s [5]. However, two critical problems are currently recognized as obstructions to the realization of quantum transforms. One is the identification of suitable unitary matrix operations (i.e., quantum transform kernels) for implementing transformations. The other is the design of quantum circuits corresponding to these unitary matrix operations.

Research partially supported by the US National Science Foundation under grant DMS-1736392.

∗ Corresponding author.
E-mail addresses: zhiguozhang@uestc.edu.cn (Z. Zhang), mkon@bu.edu (M.A. Kon).

https://doi.org/10.1016/j.amc.2022.127179
0096-3003/© 2022 Elsevier Inc. All rights reserved.

A milestone in this direction is due to Høyer, who showed in [7] that any finite unitary matrix operation for a quantum transform can be realized efficiently by quantum circuits. More importantly, he proposed a general method for designing such circuits.
Høyer’s significant work allows a partial separation of effort in the design of quantum algorithms, since his algorithm can
always lead to quantum circuits encoding finite unitary operations [8,9]. However, since unitary matrix operations can vary
greatly among different quantum transforms, there are not yet many common approaches to construction of such matrix
operations. Such constructions have become an important topic involving both applied and computational mathematics
[3–5].
Wavelet theory has maintained its growth as a significant area in applications of mathematics [10,11]. It has notably been
used in signal compression [12], transient signal detection [13,14] and noise removal [14–16]. With the help of entropy func-
tion theory, wavelet theory is even applied to neural networks for optimizing activation functions [12,13]. Simultaneously, it
has also led to great progress in varied other areas of time-frequency analysis, including the Stockwell transform [17] and
the offset canonical transform [18].
As an important component of quantum transform theory, quantum wavelet transforms promise to bring some significant
applications in computational science. Høyer [7] has proposed quantum wavelet transforms focusing on the Mallat pyramid
algorithm, and has designed corresponding quantum circuits. Based on these results, Fijany [8] has further produced more
effective circuits for such quantum wavelet transforms. Since then, the properties of Høyer's quantum wavelet transforms have been more thoroughly investigated. As examples, wavelet packet transforms have been analyzed in [9], Argüello has extended Høyer's work to Daubechies wavelets of arbitrary order [19], and in [20] quantum transform kernels have been further improved for Daubechies wavelets.
The primary significance of wavelet transforms lies in their ability to obtain and utilize time-frequency information in signals, for which numerical implementations generally involve computing wavelet coefficients from discrete signal samples. However, Høyer's quantum wavelet transforms are restricted to obtaining scaling and wavelet coefficients at lower resolutions from higher ones.
As mentioned by Høyer in [7], the above difference stems from the fact that his unitary operations are constructed from conjugate mirror filters. Hence, strictly speaking, his algorithm implements the Mallat pyramid algorithm rather than the wavelet transform. This reflects a shortcoming of Høyer's algorithm, in spite of its potential importance in computational science and applied mathematics. In addition, his algorithms are not suitable for nonorthogonal or noncompact wavelet expansions.
Clearly, not all wavelet computations can be represented as unitary operations. On the other hand, the singular value decomposition, which expresses a matrix as a composition of unitary and diagonal matrices, allows any finite matrix operation to be realized in terms of multiple quantum computation procedures.
Thus, it is critical for a quantum wavelet transform that relevant operations can be representable in matrix form. How-
ever, classical wavelet transforms use inner product operations in L2 (R ), typically represented as integral operators involv-
ing continuous functions. This could obstruct the application of quantum computations for efficiently implementing wavelet
transforms via discrete samples.
In wavelet theory it has been shown that discrete function samples can be used directly as wavelet coefficients using so-
called interpolatory wavelets [8,21]. Notably, Donoho and Walter have proposed two methods for constructing interpolatory
wavelets [22,23], allowing customization to different requirements and thus greatly influencing wavelet theory [24–26].
Specifically, Walter’s interpolatory wavelets are constructed as linear combinations of given wavelet structures, spanning
the same spaces as the original wavelets [23]. This in turn allows the interpolatory wavelet coefficients to be obtained di-
rectly from sampled signals in the original wavelet spaces [21,26]. Based on Walter’s result, interpolatory filter banks are
constructed in [21,27] allowing function samples (as interpolatory scaling coefficients) to be transformed into coefficients
of interpolatory wavelets and packets via discrete convolutions. However, interpolatory wavelet coefficients cannot be directly translated into general wavelet coefficients. Hence there still does not exist an effective, simple matrix operation for transforming function samples into general wavelet coefficients. Thus, from the viewpoint of wavelet theory, wavelet transforms still cannot be realized via quantum transforms.
In this paper, we focus on representation of wavelet transforms as finite matrix operations based on wavelet sampling
theory. It is shown that such finite matrix operations can be constructed for mapping from function samples (as interpo-
latory scaling coefficients) to general wavelet and scaling coefficients. Hence, unlike Høyer’s quantum wavelet transforms,
our algorithm can realize wavelet transforms through quantum computations in a fashion that parallels the classical inner
product operations in L2 (R ).
In addition, practical signals are nowadays represented as digital data in computer algorithms. Hence obtaining wavelet and scaling coefficients from discrete samples is an important problem in wavelet analysis. Since our matrix operations are constructed from mappings between wavelet bases, the obtained coefficients are theoretically exact, aside from computational error. Hence for discrete samples, our methods can obtain more accurate coefficients than classic methods that discretize the inner product operation. Thus our methods can also be used in classic wavelet algorithms, e.g. for noise removal [14–16], character identification [13,14] and other applications.
This work is divided into five parts. Section 2 (the first part after the introduction) briefly reviews theories of quantum
computation and wavelet transforms. There, the new inner product and norm appropriate for sampling theory is introduced
in l²(Z). With these, in the second part (Sections 3–4), we propose infinite matrix operations that realize wavelet and scaling transforms via discrete samples. In the third part (Section 5), the infinite matrix operations are converted to finite forms.
The fourth part (Section 6) discusses relations between wavelet transforms and unitary matrix forms, allowing wavelet and
scaling transforms to be implemented via quantum circuits. In the last part (Section 7), simulations show wavelet and scaling
transforms can be practically realized via the new matrix operations.

2. Background

We begin here by briefly reviewing some relevant results from wavelet theory and quantum computation theory. For more comprehensive discussions of wavelets and quantum computation, see e.g. [1,28,29]. In addition, we unify the notation of this paper in Appendix H.

2.1. Classic Wavelet Transform

Let the function ψ(x) denote a standard wavelet, and define

ψ_{j,k}(x) = ψ(2^j x − k).    (1)

The set {ψ_{j,k}(x)}_{k∈Z} forms a Riesz basis for its spanned space W_j ⊂ L²(R) (the wavelet space), inducing an orthogonal decomposition L²(R) = ⊕_j W_j, where ⊕_j denotes a direct sum over all j.
The wavelet ψ(x) is often generated from a scaling function φ(x). The dilations and translations of the scaling function induce a multiresolution analysis (MRA), i.e., a set of closed subspaces {V_j}_{j∈Z}, with V_j (the scaling space) spanned by {φ_{j,k}(x) = φ(2^j x − k)}_{k∈Z}. Furthermore, V_j and W_j are related by

V_{j+1} = V_j ⊕ W_j,    (2)

with ⊕ denoting a direct sum.
Eq. (2) suggests a scheme for decomposing a function f_s(x) ∈ V_J (J ∈ Z), namely,

f_s(x) = ∑_{k=−∞}^{+∞} a_k^o φ_{J,k}(x) = ∑_{k=−∞}^{+∞} c_k^o φ_{J−1,k}(x) + ∑_{k=−∞}^{+∞} b_k^o ψ_{J−1,k}(x) = P_V(x) + P_W(x).    (3)

Here

P_W(x) = ∑_{k=−∞}^{+∞} b_k^o ψ_{J−1,k}(x) and P_V(x) = ∑_{k=−∞}^{+∞} c_k^o φ_{J−1,k}(x)    (4)

respectively denote the projections of f_s(x) into W_{J−1} and V_{J−1}. Then

b_k^o = ⟨f_s, ψ_{J−1,k}⟩_{L²} = 2^{J−1} ∫_{−∞}^{+∞} f_s(x) ψ(2^{J−1}x − k) dx and c_k^o = ⟨f_s, φ_{J−1,k}⟩_{L²} = 2^{J−1} ∫_{−∞}^{+∞} f_s(x) φ(2^{J−1}x − k) dx.    (5)

The notation ⟨·,·⟩_{L²} in (5) denotes the L² inner product. Eq. (5) calculates what are known as the discrete scaling and wavelet transforms, with the values {c_k^o}_{k∈Z} and {b_k^o}_{k∈Z} defining discrete scaling and wavelet coefficients, respectively.
In fact, since φ_{J,k}(x) ∈ V_J and ψ_{J,k}(x) ∈ W_J for k ∈ Z, Eq. (2) implies that there exists a pair of sequences {h_k}_{k∈Z} ∈ l²(Z) and {g_k}_{k∈Z} ∈ l²(Z) (forming a conjugate mirror filter) such that

φ(2^J x − l) = ∑_{k=−∞}^{+∞} (h_{l−2k} φ(2^{J−1}x − k) + g_{l−2k} ψ(2^{J−1}x − k)).    (6)

From (3) and (6), we can further define

c_k^o = ∑_{l=−∞}^{+∞} h_{l−2k} a_l^o and b_k^o = ∑_{l=−∞}^{+∞} g_{l−2k} a_l^o,    (7)

which gives the Mallat pyramid algorithm. From (7), if we can obtain the scaling coefficients {a_k^o}_{k∈Z} of f_s(x) in V_J, then the scaling and wavelet coefficients {c_k^o}_{k∈Z} and {b_k^o}_{k∈Z}, for V_{J−1} and W_{J−1} respectively, can be obtained from discrete convolutions of the scaling coefficients {a_k^o}_k.
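As a concrete sketch of (7), the fragment below applies one pyramid step; the Haar conjugate mirror filter and the short coefficient vector are our own illustrative assumptions, not values from the paper, and boundary handling is simple zero-padding.

```python
import numpy as np

def pyramid_step(a, h, g):
    """One Mallat pyramid step, Eq. (7):
    c_k = sum_l h_{l-2k} a_l  and  b_k = sum_l g_{l-2k} a_l,
    for finitely supported filters and zero-padded coefficients a."""
    n = len(a) // 2
    c, b = np.zeros(n), np.zeros(n)
    for k in range(n):
        for i in range(len(h)):
            l = 2 * k + i                  # so that l - 2k = i
            if l < len(a):
                c[k] += h[i] * a[l]
                b[k] += g[i] * a[l]
    return c, b

s = 1 / np.sqrt(2)
h, g = [s, s], [s, -s]                     # Haar conjugate mirror filter
a = np.array([1.0, 1.0, 2.0, 2.0])         # scaling coefficients a_k^o at level J
c, b = pyramid_step(a, h, g)               # level J-1 scaling / wavelet coefficients
```

Note that this step presupposes the level-J scaling coefficients {a_k^o}; it does not produce wavelet coefficients from raw samples, which is exactly the gap the present paper addresses.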

2.2. Wavelet Sampling and Quantum Computation

2.2.1. Unitary matrix operations and quantum transform


A quantum system formed from n qubits (quantum bits) is structured as a vector space X_n with a basis consisting of N = 2^n elements |x⟩, where the ket |·⟩ is the Dirac notation for labeling state vectors. A quantum system state vector can then be described as

ā = [a_0, ···, a_k, ···, a_{N−1}]^T,    (8)

with

∑_{k=0}^{N−1} |a_k|² = 1.    (9)


Quantum computation operations act on vectors in the qubit space X_n and are represented by unitary matrices. For a natural number n, any N × N unitary matrix U_N with N = 2^n defines a feasible step in a quantum computation, mapping a state ā ∈ X_n to another state

b̄ = U_N · ā ∈ X_n.    (10)

Using the standard representation [7], the operation U_N mapping ā to b̄ can be represented by a quantum circuit. This procedure is a quantum transform, and the unitary operation U_N is the quantum transform kernel [7,8,19].
Høyer represented the Mallat pyramid algorithm (7) in the form of an operator U_N, for orthogonal compact scaling functions and wavelets. He simultaneously realized the unitary operation U_N via quantum circuits, defining what is known as the current version of the quantum wavelet transform.
In fact, based on the singular value decomposition, Eq. (10) implies that any operation implemented in matrix form can be realized via two quantum computation steps. This has motivated us to relate the scaling and wavelet transforms (5) to matrix computations based on wavelet sampling.
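This singular-value-decomposition argument is easy to check numerically. The sketch below (the 2 × 2 matrix is an arbitrary illustrative choice) factors a non-unitary operation as U · diag(s) · Vᴴ, where the factors U and Vᴴ are unitary and hence admissible quantum computation steps; only the diagonal scaling falls outside the unitary framework.

```python
import numpy as np

# A non-unitary finite matrix operation (illustrative values only).
Gamma = np.array([[2.0, 1.0],
                  [0.0, 1.0]])

# Singular value decomposition: Gamma = U @ diag(s) @ Vh.
U, s, Vh = np.linalg.svd(Gamma)

# U and Vh are unitary, so each is a valid quantum computation step.
assert np.allclose(U @ U.conj().T, np.eye(2))
assert np.allclose(Vh @ Vh.conj().T, np.eye(2))
# The factorization reproduces the original operation.
assert np.allclose(U @ np.diag(s) @ Vh, Gamma)
```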

2.2.2. Matrix operations and wavelet sampling


Assume there exists an interpolatory basis {S^φ(x − k)}_{k∈Z} with interpolation points {k}_{k∈Z} in V_0. Clearly, from wavelet sampling theory, S^φ(x) is also a scaling function, and {S^φ_{J,k}(x) = S^φ(2^J x − k)}_{k∈Z} forms an interpolatory basis for V_J with interpolation points {k/2^J}_{k∈Z}. In other words, for any f_s(x) ∈ V_J, the numbers {f_s(k/2^J)}_{k∈Z} form the coefficients of {S^φ_{J,k}(x)}_k in the scaling expansion. Thus the signal f_s(x) ∈ V_J can be represented as

f_s(x) = ∑_{k=−∞}^{+∞} f_s(k/2^J) S^φ(2^J x − k).    (11)

Since {S^φ_{J,k}(x)}_k forms a Riesz basis for V_J ⊂ L²(R), the samples {f_s(k/2^J)}_{k∈Z}, as the coefficients of {S^φ_{J,k}(x)}_k, must be square summable [21,23]. Thus we have

∑_{k=−∞}^{+∞} |f_s(k/2^J)|² < +∞.    (12)

For convenience, based on (12) we will further normalize the target function f_s(x) in our discussions, so that

∑_{k=−∞}^{+∞} |f_s(k/2^J)|² = 1.    (13)
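The normalization (13) parallels the unit-norm condition (9) on quantum states, and amounts to one line on a finite sample vector (the sample values below are hypothetical):

```python
import numpy as np

f = np.array([3.0, 0.0, 4.0])        # hypothetical samples f_s(k / 2^J)
f = f / np.linalg.norm(f)            # enforce Eq. (13): sum_k |f_s(k/2^J)|^2 = 1
```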

Consider the case in which only a finite number of the above samples are non-zero (e.g., when f_s(x) is compactly supported). Eq. (11) can then be written

f_s(x) = ∑_{k=I_0}^{I_1} f_s(k/2^J) S^φ(2^J x − k),    (14)

with I_0, I_1 ∈ Z. In this case we can define a vector

f_ap = [f_s(I_0/2^J), ···, f_s(I_1/2^J)]^T.    (15)

Clearly the wavelet transform (5) based on function samples can be implemented via quantum transforms by taking f_ap as an initial state in X_n, but only if we can identify a finite matrix Γ with properties guaranteeing that

c̄_I^o = Γ · f_ap,    (16)

where c̄_I^o is the vector formed from the sequences {c_k^o}_{k∈Z} and {b_k^o}_{k∈Z} in (5). In the following sections, we show that such a matrix indeed exists and is computable. In order to do this, we first discuss an inner product for the space l²(Z) based on wavelet sampling.

2.3. An Inner Product Based on Sampling Theory

The inner product based on sampling theory is defined via a natural correspondence between {S^φ(x − k)}_{k∈Z} and an orthonormal basis {θ(x − k)}_{k∈Z} in V_0 [29,31]. This correspondence can be written

S^φ(x) = ∑_{k=−∞}^{+∞} λ_k θ(x − k),  θ(x) = ∑_{k=−∞}^{+∞} ξ_k S^φ(x − k),    (17)

with {λ_k}_{k∈Z} ∈ l²(Z) and {ξ_k}_{k∈Z} ∈ l²(Z).
Let f_s(x) ∈ V_J and g_s(x) ∈ V_J. The corresponding samples are then represented as infinite column vectors f_s and g_s, with entries

f_s[n] = f_s(n/2^J) and g_s[n] = g_s(n/2^J), n ∈ Z.    (18)

Simultaneously, we define Q to be an infinite matrix with entries

q_{k,n} = λ_{k−n}^J.    (19)


Here

λ_k^j = 2^{−j/2} λ_k, j ∈ Z, k ∈ Z.    (20)

Let the matrix

Σ = Q^T Q.    (21)

By defining the bivariate function

D_C(f_s, g_s) = (f_s − g_s)^T Σ (f_s − g_s),    (22)

the inner product and norm based on wavelet sampling can be introduced (Theorem 1 in [30,31]).

Theorem 1. The matrix Σ in (21) is positive definite. Hence, for any f_s, g_s ∈ l²(Z),

⟨f_s, g_s⟩_Σ = f_s^T Σ g_s    (23)

is an inner product on l²(Z). Furthermore, we have

⟨f_s, g_s⟩_Σ = ⟨f_s, g_s⟩_{L²}.    (24)

This implies that

||f_s||_Σ = √(f_s^T Σ f_s)    (25)

defines a norm on l²(Z), which we denote the l_Σ²(Z) norm.

Proof. Details are provided in Appendix A.

Clearly, with respect to f_s and g_s, the function

D_C(f_s, g_s) = ||f_s − g_s||_Σ²    (26)

forms a squared distance, since ||f_s||_Σ as in (25) is a norm on l²(Z). Using this norm, it is shown in the next section that the infinite matrix operations for the wavelet and scaling transforms can be constructed from the viewpoint of function approximation.
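A finite-dimensional sketch of this construction (the correspondence filter λ and the 8 × 8 truncation below are hypothetical choices, not values from the paper): build Q from (19), form Σ = QᵀQ as in (21), and observe that the quantity in (23) is positive for nonzero f, since fᵀ Σ f = ||Q f||².

```python
import numpy as np

lam = {-1: 0.25, 0: 1.0, 1: 0.25}    # hypothetical short filter lambda_k
N = 8

# Finite section of Q with entries q_{k,n} = lambda_{k-n}, Eq. (19).
Q = np.zeros((N, N))
for k in range(N):
    for n in range(N):
        if k - n in lam:
            Q[k, n] = lam[k - n]

Sigma = Q.T @ Q                       # Eq. (21)

f = np.random.default_rng(0).standard_normal(N)
ip = f @ Sigma @ f                    # <f, f>_Sigma = f^T Sigma f, Eq. (23)
assert np.isclose(ip, np.linalg.norm(Q @ f) ** 2)
assert np.all(np.linalg.eigvalsh(Sigma) > 0)      # positive definite section
```

Here Q is strictly diagonally dominant, so this finite section is nonsingular and Σ is positive definite, mirroring Theorem 1.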

3. Function Approximations Based on the l_Σ²(Z) Norm

Let

||f(x)||_{L²} = √(⟨f(x), f(x)⟩_{L²})    (27)

denote the L² norm, where ⟨f(x), g(x)⟩_{L²} = ∫_{−∞}^{+∞} f(x) g(x) dx is the inner product in L². Then we define

e(f_s, g_s) = ||f_s(x) − g_s(x)||²_{L²} = ∫_{−∞}^{+∞} |f_s(x) − g_s(x)|² dx    (28)

as the error between functions f_s(x) and g_s(x) in V_J.


For ease of exposition, assume that ϕ(x) denotes a wavelet, scaling function or wavelet packet whose translates and dilations {ϕ(2^j x − k) = ϕ_{j,k}(x)}_{k∈Z} form a Riesz basis for their span U_j ⊆ V_J (here U_j represents a wavelet, scaling or wavelet packet space). We seek an approximation to the target function f_s(x) in the form

f_ob(x) = ∑_{k=−∞}^{+∞} d_k ϕ(2^j x − k) = ∑_{k=−∞}^{+∞} d_k ϕ_{j,k}(x)    (29)

in U_j. From (24), (26) and (28), the approximation error between f_s(x) and f_ob(x) can be written

D_C(f_s, f_ob) = e(f_s, f_ob) = ||f_s − f_ob||²_{L²},    (30)

with f_ob denoting the infinite column vector with entries

f_ob[n] = f_ob(n/2^J).    (31)

On the other hand, if f_Pr(x) denotes the projection of f_s(x) onto U_j, then it follows from (30) and the properties of an inner product that

D_C(f_s, f_ob) = ||f_⊥(x)||²_{L²} + ||f_Pr(x) − f_ob(x)||²_{L²} = ||f_⊥(x)||²_{L²} + e(f_Pr(x), f_ob(x)),    (32)

where f_⊥(x) = f_s(x) − f_Pr(x).
Clearly, for a given signal f_s(x), the term ||f_⊥(x)||²_{L²} in (32) is uniquely determined. Hence (32) implies that minimization of D_C(f_s, f_ob) corresponds to minimization of e(f_Pr(x), f_ob(x)). Since {ϕ_{j,k}(x)}_{k∈Z} forms a Riesz basis for U_j, the approximation f_ob(x) can recover any element of U_j, by frame theory. Thus Eq. (32) implies that D_C(f_s, f_ob) is minimized when e(f_Pr(x), f_ob(x)) = 0. In other words, the approximation f_ob(x) recovers the projection f_Pr(x) by minimizing D_C(f_s, f_ob), for fixed f_s(x). Hence we conclude that the coefficients {d_k}_{k∈Z} in (29) that minimize D_C(f_s, f_ob) are in fact the coefficients {d_k^o}_{k∈Z} of the series expansion of f_Pr(x) in the Riesz basis {ϕ_{j,k}(x)}_{k∈Z}.
Remark. When U_j denotes a wavelet space W_{J−1} (i.e., U_j = W_{J−1} with j = J − 1), the functions f_⊥(x) and f_Pr(x) in (32) respectively correspond to P_V(x) and P_W(x) in (3). In this case, Eq. (32) becomes

D_C(f_s, f_ob) = ||P_V(x)||²_{L²} + e(P_W(x), f_ob(x)).    (33)

Hence, as discussed above, the wavelet coefficients {b_k^o}_{k∈Z} in (4) can be obtained by minimizing D_C(f_s, f_ob) in (33). Similarly, by taking U_j as V_{J−1}, we can calculate the scaling coefficients {c_k^o}_{k∈Z} in (4) with the help of D_C(f_s, f_ob). As shown in (5), since the wavelet and scaling transforms correspond to the calculations of {b_k^o}_{k∈Z} and {c_k^o}_{k∈Z} in (3), Eq. (32) assists in the design of our matrix operations for realizing such transforms, from the standpoint of the l_Σ²(Z) norm.

4. Matrix Wavelet and Scaling Transforms

In this section, we will discuss how to minimize (32) so that the projections PV (x) and PW (x) as in (4) can be recovered.
Let

Φ_ϕ = [ϕ_{j,m}(n/2^J)]_{n,m},    (34)

with ϕ(x) as in (29). Based on (29), the vector f_ob in (31) can be written as

f_ob = Φ_ϕ d̄,    (35)

where d̄ = [···, d_{k−1}, d_k, d_{k+1}, ···]^T.

By (35), D_C(f_s, f_ob) is a quadratic polynomial in the components of d̄. Thus, positivity of D_C(f_s, f_ob) (as seen in Theorem 1) implies that there exists a stationary point d̄^o of D_C(f_s, f_ob), with

d̄^o = [···, d^o_{k−1}, d^o_k, d^o_{k+1}, ···]^T.    (36)

Thus obtaining the coefficients {d_k^o}_{k∈Z} corresponds to solving the matrix equation

∂D_C(f_s, f_ob)/∂d_k = 0.    (37)

From (30) and (35), Eq. (37) implies

Φ_ϕ^T (Σ + Σ^T)(f_s − Φ_ϕ d̄^o) = 0,    (38)

with d̄^o given in (36). Since Σ in (21) is symmetric, Eq. (38) takes the form

Φ_ϕ^T Σ f_s = Φ_ϕ^T Σ Φ_ϕ d̄^o.    (39)

Clearly, the vector d̄^o corresponds to the coefficients of f_Pr(x) in the Riesz basis {ϕ_{j,k}(x)}_{k∈Z}. From frame theory, d̄^o must be unique for a given f_s(x). This in turn implies that a necessary and sufficient condition for determining the coefficients d̄^o of f_Pr(x) is that Φ_ϕ^T Σ Φ_ϕ in (39) have a unique inverse. Based on Theorem 1, we will show that Φ_ϕ^T Σ Φ_ϕ indeed satisfies this condition.

Theorem 2. For {ϕ_{j,k}(x)}_{k∈Z} and Φ_ϕ respectively as in (29) and (34), there exists a unique inverse

(Φ_ϕ^T Σ Φ_ϕ)^{−1} = [η_{m,n}],    (40)

with matrix entries

η_{m,n} = ⟨ϕ^d_{j,m}, ϕ^d_{j,n}⟩_{L²},    (41)

for the operation Φ_ϕ^T Σ Φ_ϕ : l²(Z) → l²(Z), where {ϕ^d_{j,k}(x) = ϕ^d(2^j x − k)}_{k∈Z} forms the dual basis of {ϕ(2^j x − k)}_{k∈Z} in U_j.

Proof. Details are provided in Appendix B.
From Theorem 2, Eq. (39) can be solved as

d̄^o = (Φ_ϕ^T Σ Φ_ϕ)^{−1} Φ_ϕ^T Σ f_s,    (42)

with (Φ_ϕ^T Σ Φ_ϕ)^{−1} as in (40). Clearly, the wavelet and scaling spaces W_{J−1} and V_{J−1} can be treated as U_j. In this case, the wavelet and scaling bases {ψ(2^{J−1}x − k)}_{k∈Z} and {φ(2^{J−1}x − k)}_{k∈Z} correspond to the Riesz basis {ϕ(2^j x − k)}_{k∈Z}, with j = J − 1. With this treatment, Eq. (39) becomes

Ψ^T Σ f_s = Ψ^T Σ Ψ b̄^o and Φ^T Σ f_s = Φ^T Σ Φ c̄^o    (43)

for W_{J−1} and V_{J−1}, with Ψ = [ψ_{J−1,m}(n/2^J)]_{n,m} and Φ = [φ_{J−1,m}(n/2^J)]_{n,m}.


Assume that ψ^d(x) and φ^d(x) respectively denote the dual wavelet of ψ(x) and the dual scaling function of φ(x). Then Theorem 2 implies that the scaling and wavelet coefficients {c_k^o}_{k∈Z} and {b_k^o}_{k∈Z} in (5) can be obtained from the samples {f_s(k/2^J)}_{k∈Z} via the matrix operations

b̄^o = (Ψ^T Σ Ψ)^{−1} Ψ^T Σ · f_s and c̄^o = (Φ^T Σ Φ)^{−1} Φ^T Σ · f_s,    (44)

with b̄^o = [···, b^o_{k−1}, b^o_k, b^o_{k+1}, ···]^T and c̄^o = [···, c^o_{k−1}, c^o_k, c^o_{k+1}, ···]^T. Here

(Ψ^T Σ Ψ)^{−1} = [ν_{m,n}] and (Φ^T Σ Φ)^{−1} = [μ_{m,n}],    (45)

whose matrix entries are

ν_{m,n} = ⟨ψ^d_{J−1,m}, ψ^d_{J−1,n}⟩_{L²} and μ_{m,n} = ⟨φ^d_{J−1,m}, φ^d_{J−1,n}⟩_{L²}.    (46)
In order to ensure computability of (44) via quantum computation, we discuss in the following sections how this formula is implemented with finite samples and realized with finite matrices.

Remark. Eq. (44) implies that the wavelet and scaling coefficients {b_k^o}_{k∈Z} and {c_k^o}_{k∈Z} as in (5) can be obtained by applying (Ψ^T Σ Ψ)^{−1} Ψ^T Σ and (Φ^T Σ Φ)^{−1} Φ^T Σ to the function samples f_s. From wavelet theory, this means that these two matrix operations realize the wavelet and scaling transforms based on f_s, just as (5) does based on inner products of continuous functions. In addition, due to the uniqueness of (Φ_ϕ^T Σ Φ_ϕ)^{−1} in (40), it will be shown in Theorem 5 below that (Ψ^T Σ Ψ)^{−1} and (Φ^T Σ Φ)^{−1} can be transformed into nonsingular finite matrices. This enables us to limit the matrix operations in (45) to finite dimensions.
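As a toy numerical check of (44), with small random stand-ins for the sample matrices rather than actual wavelet values: solving the normal equations Φᵀ Σ Φ c̄ᵒ = Φᵀ Σ f_s gives coefficients whose residual is Σ-orthogonal to the columns of Φ, so Φ c̄ᵒ is the projection of f_s in the sampled inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
N, m = 12, 4
Phi = rng.standard_normal((N, m))    # stand-in for [phi_{J-1,m}(n/2^J)]_{n,m}
Q = np.eye(N) + 0.2 * np.diag(np.ones(N - 1), 1)   # toy correspondence matrix
Sigma = Q.T @ Q                      # positive definite, as in Theorem 1
f = rng.standard_normal(N)           # stand-in sample vector f_s

# Eq. (44): c_o = (Phi^T Sigma Phi)^{-1} Phi^T Sigma f
c_o = np.linalg.solve(Phi.T @ Sigma @ Phi, Phi.T @ Sigma @ f)

# The residual is Sigma-orthogonal to the columns of Phi, i.e.
# Phi @ c_o is the projection of f in the <.,.>_Sigma inner product.
assert np.allclose(Phi.T @ Sigma @ (f - Phi @ c_o), 0)
```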

5. Finite Matrix Operations for Wavelet and Scaling Transforms

5.1. Finite Series Forms of PV (x) and PW (x)

For the purposes of computability, it is necessary to discuss how to realize (44) in a form involving finite matrices. Consider the case (14) in which the finite samples {(k/2^J, f_s(k/2^J))}_{k=I_0,...,I_1} have index k ranging from I_0 to I_1. In this case, based on the parameters I_0 and I_1, we will determine suitable elements of the sets {φ(2^{J−1}x − k)}_{k∈Z} and {ψ(2^{J−1}x − k)}_{k∈Z} for representing P_V(x) and P_W(x) in (4), which optimizes the sizes of the coefficient vectors.
To show this, we again use {ϕ(2^j x − k)}_{k∈Z}, with span U_j ⊆ V_J, for our discussion. Assume there exists an interpolatory basis {S^ϕ(2^j x − k)}_{k∈Z} in U_j, with interpolation points {k/2^j + δ}_{k∈Z} and 0 ≤ δ ≤ 1/2^{j+1}. In this case, the projection of f_s(x) onto U_j can be represented as

f_Pr(x) = ∑_{k=N_0}^{N_1} f_Pr(k/2^j + δ) S^ϕ(2^j x − k).    (47)

Clearly, every sample f_Pr(k/2^j + δ) in (47) corresponds to a unique interpolatory function S^ϕ(2^j x − k), with k ∈ Z. Due to this special property, the optimal size of an interpolatory basis always equals that of the sample range (N_0, N_1), as shown in (47).
On the other hand, since both {S^ϕ(2^j x − k)}_{k∈Z} and {ϕ(2^j x − k)}_{k∈Z} form Riesz bases for U_j, the samples {f_Pr(k/2^j + δ)}_{k∈Z}, as the coefficients of {S^ϕ(2^j x − k)}_{k∈Z}, can be transformed into coefficients of {ϕ(2^j x − k)}_{k∈Z}. This allows us to convert the optimal size (N_0, N_1) for interpolatory bases to that for general ones. From frame theory, such a conversion is closely related to the coefficients

β_n^ϕ = ∫_{−∞}^{+∞} S^ϕ(x) ϕ^d(x − n) dx,    (48)

where {ϕ^d(2^j x − n)}_{n∈Z} forms a dual basis for {ϕ(2^j x − k)}_{k∈Z} in U_j. In fact, if there exist two constants (A_β^ϕ, B_β^ϕ) such that

β_n^ϕ = 0 for n ∉ [A_β^ϕ, B_β^ϕ]    (49)

with n ∈ Z, we have the following lemma:

Lemma 1. The function f_Pr(x) in (47) can be represented as

f_Pr(x) = ∑_{n=N_0+A_β^ϕ}^{N_1+B_β^ϕ} c_n^o ϕ(2^j x − n),    (50)

with A_β^ϕ and B_β^ϕ as in (49), where

c_n^o = 2^{−j} ∑_{m=N_0}^{N_1} β^ϕ_{n−m} f_Pr(m/2^j + δ).    (51)

Proof. Details are provided in Appendix C.


Eq. (50) implies that f_Pr(x) can be well represented by {ϕ(2^j x − n)}_n with the index n ranging from N_0 + A_β^ϕ to N_1 + B_β^ϕ. This means that the optimal sizes of interpolatory bases have been converted to those of {ϕ(2^j x − n)}_n, with the help of (48). In other words, if we can determine the sample range (N_0, N_1) of the projection f_Pr(x), then the size of {ϕ(2^j x − n)}_n for approximating f_Pr(x) can be obtained.
Consider the case j = J − 1 for U_j. We show in the following theorem that the range of the samples {f_Pr(k/2^{J−1})}_{k∈Z} can be obtained from that of {f_s(k/2^J)}_{k=I_0,···,I_1}, with the help of the coefficients

α_n^ϕ = ∫_{−∞}^{+∞} S^φ(2x − n) S_d^ϕ(x) dx,    (52)

where {S_d^ϕ(2^{J−1}x − k)}_{k∈Z} forms a dual basis for {S^ϕ(2^{J−1}x − k)}_{k∈Z} in U_{J−1}.

Theorem 3. Assume ⌈x⌉ and ⌊x⌋ respectively denote the smallest integer larger than x and the largest integer smaller than x. If

α_n^ϕ = 0 for n ∉ [A_α^ϕ, B_α^ϕ], n ∈ Z,    (53)

then we have

N_0 = ⌈(I_0 − B_α^ϕ)/2⌉ and N_1 = ⌊(I_1 − A_α^ϕ)/2⌋    (54)

for the projection f_Pr(x) into U_{J−1}, with (N_0, N_1) as in (47). Thus f_Pr(x) has the form

f_Pr(x) = ∑_{n=Y_0}^{Y_1} c_n^o ϕ(2^{J−1}x − n).    (55)

Here

Y_0 = ⌈(I_0 − B_α^ϕ)/2⌉ + A_β^ϕ and Y_1 = ⌊(I_1 − A_α^ϕ)/2⌋ + B_β^ϕ.    (56)

Proof. Details are provided in Appendix D.
From (47), Eq. (54) implies that the range [I_0, I_1] of the signal samples {f_s(k/2^J)}_k can be converted into the range [N_0, N_1] of the samples {f_Pr(k/2^{J−1} + δ)}_k. This in turn allows us to optimize the sizes of {ϕ(2^j x − n)}_n for approximating f_Pr(x) in U_{J−1}, via Lemma 1. Hence Theorem 3 implies that the sizes of {ϕ(2^{J−1}x − n)}_{n∈Z} for the projection f_Pr(x) can be optimized based on the range [I_0, I_1] of the signal samples {f_s(k/2^J)}_k, as shown in (55) and (56).
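The index bookkeeping of Theorem 3 reduces to integer arithmetic. A small sketch (the support bounds below are hypothetical, and ⌈·⌉, ⌊·⌋ are taken as the usual ceiling and floor):

```python
import math

def coeff_range(I0, I1, A_alpha, B_alpha, A_beta, B_beta):
    """Convert the sample range [I0, I1] at level J into the coefficient
    range [Y0, Y1] at level J-1, via Eqs. (54) and (56)."""
    N0 = math.ceil((I0 - B_alpha) / 2)    # Eq. (54)
    N1 = math.floor((I1 - A_alpha) / 2)
    return N0 + A_beta, N1 + B_beta       # Eq. (56)

# Hypothetical support bounds for the coefficients alpha_n and beta_n.
Y0, Y1 = coeff_range(I0=0, I1=15, A_alpha=-1, B_alpha=2, A_beta=-1, B_beta=1)
```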
Clearly, when there is an interpolatory basis {S^φ(x − k)}_{k∈Z} with interpolation points {k}_{k∈Z} in V_0, an interpolatory basis {S^ψ(2^{J−1}x − k)}_{k∈Z} with interpolation points {k/2^{J−1} + 1/2^J}_k exists in W_{J−1} [21]. Hence {S^ψ(2^{J−1}x − k)}_{k∈Z} and {S^φ(2^{J−1}x − k)}_k can respectively be taken as {S^ϕ(2^{J−1}x − k)}_{k∈Z} (as in (47)), with δ = 1/2^J and δ = 0.
On the other hand, considering (48) and (52), we generally have the coefficients

α_n^φ = ∫_{−∞}^{+∞} S^φ(2x − n) S_d^φ(x) dx,  α_n^ψ = ∫_{−∞}^{+∞} S^φ(2x − n) S_d^ψ(x) dx,
β_n^φ = ∫_{−∞}^{+∞} S^φ(x) φ^d(x − n) dx,  β_n^ψ = ∫_{−∞}^{+∞} S^ψ(x) ψ^d(x − n) dx    (57)

for the wavelets and the scaling functions.


Since the majority of wavelets and scaling functions have localized energy in the time-frequency plane, it will be useful here to assume that our wavelets and scaling functions, together with their dual functions, have compact supports (the arguments also work in the case of approximate compactness). In this case, there generally exist integers (A_α^φ, B_α^φ), (A_α^ψ, B_α^ψ), (A_β^φ, B_β^φ) and (A_β^ψ, B_β^ψ) such that (exactly or approximately)

α_n^φ = 0 for n ∉ [A_α^φ, B_α^φ], n ∈ Z;  α_n^ψ = 0 for n ∉ [A_α^ψ, B_α^ψ], n ∈ Z;
β_n^φ = 0 for n ∉ [A_β^φ, B_β^φ], n ∈ Z;  β_n^ψ = 0 for n ∉ [A_β^ψ, B_β^ψ], n ∈ Z,    (58)

where {S_d^φ(x − k)}_{k∈Z} and {S_d^ψ(x − k)}_{k∈Z} form dual bases for {S^φ(x − k)}_{k∈Z} in V_0 and for {S^ψ(x − k)}_{k∈Z} in W_0, respectively.
These properties of the wavelet and scaling functions imply that we can determine the sizes of the corresponding series for approximating f_Pr(x) in V_{J−1} and W_{J−1} by taking these spaces as U_{J−1}. Comparing (58) with (49) and (52), it follows from (55) that there always exist integers

M_0 = ⌈(I_0 − B_α^φ)/2⌉ + A_β^φ,  M_1 = ⌊(I_1 − A_α^φ)/2⌋ + B_β^φ,
K_0 = ⌈(I_0 − B_α^ψ)/2⌉ + A_β^ψ,  K_1 = ⌊(I_1 − A_α^ψ)/2⌋ + B_β^ψ,    (59)

such that the projections P_V(x) and P_W(x) of f_s(x) into V_{J−1} and W_{J−1} respectively have the representations

P_V(x) = ∑_{n=M_0}^{M_1} c_n^o φ(2^{J−1}x − n) and P_W(x) = ∑_{n=K_0}^{K_1} b_n^o ψ(2^{J−1}x − n).    (60)

Eq. (60) implies that the projections P_V(x) and P_W(x) can always be approximated by finite scaling and wavelet series when the wavelets and scaling functions have good energy concentration. In the following subsection, we discuss the construction of finite matrices for realizing wavelet transforms based on (60).


5.2. Converting the Matrix Transforms to Finite Forms

Since φ(x) and ψ(x) are assumed to have compact supports

[X_φ, Y_φ] and [X_ψ, Y_ψ],    (61)

respectively, there exists a pair of constants (A_min, B_max) such that

χ_n^ψ = 0 and χ_n^φ = 0 for n ∉ [A_min, B_max],    (62)

with A_min ≤ 0 and B_max ≥ 0. Here χ_n^ψ = ∫_{−∞}^{+∞} ψ(x)ψ(x − n) dx and χ_n^φ = ∫_{−∞}^{+∞} φ(x)φ(x − n) dx.
Simultaneously, due to the assumption of compact supports for S^φ(x) and θ(x), there exist two constants |L_O| < +∞ and |L_U| < +∞ such that (17) can be transformed into

S^φ(2^J x) = ∑_{k=L_O}^{L_U} λ_k^J θ_{J,k}(x),    (63)

with L_O, L_U ∈ Z.
Assume that

K_min = min(K_0, M_0) and K_max = max(K_1, M_1),    (64)

with (K_0, K_1) and (M_0, M_1) as in (59). Then it follows from (60) that

P_W(x) = ∑_{n=K_min}^{K_max} b_n^o ψ(2^{J−1}x − n) and P_V(x) = ∑_{n=K_min}^{K_max} c_n^o φ(2^{J−1}x − n),    (65)

where b_n^o = 0 for n ∉ {K_0, ···, K_1} and c_n^o = 0 for n ∉ {M_0, ···, M_1}.
Similarly, let

X_min = min(X_φ, X_ψ) and Y_max = max(Y_φ, Y_ψ),   (66)

with X_φ, Y_φ, X_ψ and Y_ψ as in (61). Simultaneously, define

X = Y_max − X_min, L = L_U − L_O and m = K_max − K_min + B_max − A_min.   (67)

Based on the parameters in (62), (63), (64) and (66), we construct the vectors

ψ_i^σ = [O_{2i×1}, ψ(X_min), ψ(X_min + 1/2), ..., ψ(Y_max − 1/2), ψ(Y_max), O_{2(m−i)×1}]^T
φ_i^σ = [O_{2i×1}, φ(X_min), φ(X_min + 1/2), ..., φ(Y_max − 1/2), φ(Y_max), O_{2(m−i)×1}]^T   (68)

and

λ_n = [O_{1×n}, λ_{L_O}^J, ..., λ_{L_U}^J, O_{1×(2m+2X−n)}]^T,   (69)

with O_{k×l} a k-by-l matrix of zeros, i = 0, ..., m and n = 0, ..., 2m + 2X.
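The padded sample vectors of (68) are simple to assemble numerically. A minimal sketch (Python/NumPy; the helper name is ours, and `samples` stands for the half-integer samples ψ(X_min), ψ(X_min + 1/2), ..., ψ(Y_max)):

```python
import numpy as np

def shifted_sample_vector(samples, i, m):
    """Column psi_i^sigma of (68): the half-integer samples of the
    compactly supported function, padded with 2*i zeros before and
    2*(m - i) zeros after."""
    return np.concatenate([np.zeros(2 * i), samples, np.zeros(2 * (m - i))])
```

With 2X + 1 samples, each vector has length 2m + 2X + 1, matching the row count of (70).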


Furthermore, we use the vectors in (68) and (69) to form the matrices

Ψ̃ = [ψ_0^σ, ..., ψ_m^σ]_{(2m+2X+1)×(m+1)},  Φ̃ = [φ_0^σ, ..., φ_m^σ]_{(2m+2X+1)×(m+1)}   (70)

and

Q̃ = [λ_0, ..., λ_{2m+2X}]_{(2m+2X+L+1)×(2m+2X+1)}.   (71)
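Q̃ in (71) stacks shifted copies of the λ sequence as its columns. A sketch under the convention of (69) (the helper name `build_Q` is ours):

```python
import numpy as np

def build_Q(lam, m, X):
    """Q~ of (71): column n holds the truncated lambda sequence
    lam = [lambda_LO, ..., lambda_LU] shifted down by n rows."""
    L = len(lam) - 1                 # L = LU - LO as in (67)
    ncols = 2 * m + 2 * X + 1
    Q = np.zeros((ncols + L, ncols))
    for n in range(ncols):
        Q[n:n + L + 1, n] = lam      # zeros above and below, as in (69)
    return Q
```

The resulting shape is (2m + 2X + L + 1) by (2m + 2X + 1), as required by (71).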

With the help of these matrices we can state

Theorem 4. Let b_ap = [b_{K_max}^o, ..., b_{K_min}^o]^T and c_ap = [c_{K_max}^o, ..., c_{K_min}^o]^T with {b_n^o}_{n∈Z} and {c_n^o}_{n∈Z} as in (65). Suppose

b_I = [O_{1×B_max}, b_ap^T, O_{1×(−A_min)}]^T,  c_I = [O_{1×B_max}, c_ap^T, O_{1×(−A_min)}]^T   (72)

and

f_I = [O_{1×(2(Y_max+K_max−A_min)−I_1)}, f_ap^T, O_{1×(I_0−2(X_min+K_min−B_max))}]^T   (73)

with f_ap as in (15). Then, Eq. (43) holds if and only if

Ψ̃^T Λ̃ Ψ̃ b_I = Ψ̃^T Λ̃ f_I and Φ̃^T Λ̃ Φ̃ c_I = Φ̃^T Λ̃ f_I,   (74)

where

Λ̃ = Q̃^T Q̃.   (75)


Proof. See Appendix E.


Since the coefficients {b_k^o}_{k∈{K_min,...,K_max}} and {c_k^o}_{k∈{K_min,...,K_max}} in (65) contain all nonzero elements of b^o and c^o in (43), the equivalence between (43) and (74) implies that b^o and c^o can be obtained from b_I and c_I, allowing us to obtain the finite matrix forms. This identifies important conditions for quantum computations to realize wavelet transforms in practice. However, as shown in (72) and (73), since some components of b_I, c_I and f_I may be zero, it is necessary to discuss the invertibility of Ψ̃^T Λ̃ Ψ̃ and Φ̃^T Λ̃ Φ̃ in (74). In fact, due to the equivalence of (43) and (74), the uniqueness of (40) enables us to show nonsingularity of Ψ̃^T Λ̃ Ψ̃ and Φ̃^T Λ̃ Φ̃ (shown in Appendix F). Below we let M_{(m+1)×(m+1)} denote the set of all (m + 1)-by-(m + 1) matrices.

Theorem 5. The matrices Ψ̃^T Λ̃ Ψ̃ ∈ M_{(m+1)×(m+1)} and Φ̃^T Λ̃ Φ̃ ∈ M_{(m+1)×(m+1)} in (74) are nonsingular, so we can solve (74) as

b_I = Γ_ψ f_I and c_I = Γ_φ f_I,   (76)

where Γ_ψ = (Ψ̃^T Λ̃ Ψ̃)^{−1} Ψ̃^T Λ̃ and Γ_φ = (Φ̃^T Λ̃ Φ̃)^{−1} Φ̃^T Λ̃. Thus P_W(x) and P_V(x) in (65) can be represented as

P_W(x) = ψ_{K^o}(x) b_ap and P_V(x) = φ_{K^o}(x) c_ap,   (77)

with K^o = [K_max, ..., K_min], ψ_{K^o}(x) = [ψ_{J−1,K_max}(x), ..., ψ_{J−1,K_min}(x)] and φ_{K^o}(x) = [φ_{J−1,K_max}(x), ..., φ_{J−1,K_min}(x)].

Proof. See Appendix F.

When the wavelets and scaling functions have compact supports (or approximately compact supports), Eqs. (65) and (72) together with Theorem 5 imply that their coefficients for signals can always be obtained from (76). Compared to Theorem 2 (stating that the inverses of the infinite matrices Ψ^T Λ Ψ and Φ^T Λ Φ can be represented as in (40)), Theorem 5 implies that wavelet and scaling function coefficients can be calculated directly by inverting the finite matrices Ψ̃^T Λ̃ Ψ̃ and Φ̃^T Λ̃ Φ̃.

Remark. Eqs. (76) and (77) imply that applications of Γ_ψ and Γ_φ correspond to wavelet and scaling transforms of the target function f_s(x) based on the samples f_I. Due to the form of the finite matrices in (76), Theorem 5 in fact ensures computability of matrix wavelet transforms via quantum computation. With the help of (76), we will show in the following section that these matrix transforms can be realized and accelerated by quantum computation.
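In floating point, (76) is most naturally evaluated by solving the symmetric system Ψ̃^T Λ̃ Ψ̃ b_I = Ψ̃^T Λ̃ f_I rather than forming the inverse explicitly. A sketch (Python/NumPy; the function name is ours, and the arguments stand for the finite matrices of (70) and (75)):

```python
import numpy as np

def wavelet_coeffs(Psi, Lam, f_I):
    """b_I of (76): solve (Psi^T Lam Psi) b_I = Psi^T Lam f_I
    instead of explicitly inverting Psi^T Lam Psi."""
    A = Psi.T @ Lam @ Psi            # (m+1)-by-(m+1), nonsingular by Theorem 5
    return np.linalg.solve(A, Psi.T @ Lam @ f_I)
```

The same call with Φ̃ in place of Ψ̃ yields c_I.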

6. Quantum Computation and the Wavelet Transform

6.1. Relating the Wavelet and Scaling Matrices to Unitary Forms

By convention, only unitary matrix multiplications can be realized as quantum computation operations. Thus we wish to construct unitary transformations from f_I to [b_I, c_I] via (76). Clearly, the matrix operations Γ_ψ and Γ_φ from (76) can be implemented individually in respective realizations of wavelet transforms and scaling transforms. However, decompositions of signals into wavelet and scaling spaces generally play critical roles in scientific computation and signal processing as a whole. Hence, we wish here to unify these two transforms into a single procedure employing quantum computations.
Since (Ψ̃^T Λ̃ Ψ̃)^{−1} Ψ̃^T Λ̃ and (Φ̃^T Λ̃ Φ̃)^{−1} Φ̃^T Λ̃ have the same size, Eq. (76) can be rewritten as

c_I^o = A · B · f_I = M · f_I,   (78)

where c_I^o = [b_I^T, c_I^T]^T with b_I and c_I as in (72), M = A · B,

A = [ (Ψ̃^T Λ̃ Ψ̃)^{−1}   O
      O   (Φ̃^T Λ̃ Φ̃)^{−1} ]   (79)

and

B = [Λ̃ Ψ̃, Λ̃ Φ̃]^T.   (80)

Clearly, the realization of our wavelet transforms via unitary operations is closely related to the rank of M. Since A is nonsingular, the rank of M is determined by that of B. In fact we have:

Theorem 6. B has full row rank.

Proof. See Appendix G.


Clearly, from (79), A is a nonsingular matrix. Since the rows of B are linearly independent by Theorem 6, M has full row rank. Let

a = 2m + 2X + 1 and b = 2m + 2.   (81)

Then from (70) and (75), we have M ∈ M_{b×a}, with M_{b×a} denoting the space of b-by-a matrices. Then, according to the singular value decomposition, there exist unitary matrices V_u ∈ M_{b×b} and W_u ∈ M_{a×a} such that M can be rewritten as

M = V_u · [Σ, O_{b×(2X−1)}] · W_u,   (82)

where Σ = [τ_{i,j}] ∈ M_{b×b} is a diagonal matrix and τ_{i,i}^2 is an eigenvalue of M · M^T with τ_{i,i} ≥ τ_{i+1,i+1} > 0. Then the wavelet and scaling function coefficients (b_I, c_I) can be determined by inserting (82) into (78), i.e.

c_I^o = V_u · Σ_λ · W_u · f_I   (83)

with Σ_λ = [Σ, O_{b×(2X−1)}].

Remark. It follows from (78) that M is a finite matrix operation from the discrete function samples f_I to the wavelet and scaling coefficients c_I^o. Since A and B have full row rank by (79) and Theorem 6, the matrix M also has full row rank. This implies that the diagonal matrix Σ in (83) is nonsingular in the singular value decomposition of M. Since the matrices W_u and V_u are unitary, Eq. (83) implies that the mapping from f_I to c_I^o can be considered as two quantum transform procedures, accompanied by a weighting operation Σ.
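The factorization (82) is exactly what a numerical SVD with full matrices returns, once the singular values are padded into a b-by-a block. A sketch (NumPy; `svd_form` is our helper name, assuming b ≤ a):

```python
import numpy as np

def svd_form(M):
    """Decompose the b-by-a matrix M as in (82): M = Vu @ Sl @ Wu
    with Vu (b-by-b) and Wu (a-by-a) unitary and Sl = [Sigma, O]."""
    b, a = M.shape
    Vu, s, Wu = np.linalg.svd(M, full_matrices=True)
    Sl = np.zeros((b, a))
    Sl[:len(s), :len(s)] = np.diag(s)   # Sigma padded with b-by-(a-b) zeros
    return Vu, Sl, Wu
```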

6.2. Algorithmic Steps for Matrix Transforms

With the above analyses, the wavelet and scaling transforms can be realized via the following scheme for implementing quantum computation, including the major algorithmic elements:

1. Calculate the parameters {α_n^φ}_n, {α_n^ψ}_n, {β_n^φ}_n and {β_n^ψ}_n according to (57).
2. Determine [A_α^φ, B_α^φ], [A_α^ψ, B_α^ψ], [A_β^φ, B_β^φ] and [A_β^ψ, B_β^ψ] according to (58).
3. Determine (A_min, B_max) according to (62).
4. Determine L_O and L_U according to (63).
5. Determine M_0, M_1, K_0 and K_1 according to (59) by using [A_α^φ, B_α^φ], [A_α^ψ, B_α^ψ], [A_β^φ, B_β^φ] and [A_β^ψ, B_β^ψ]. Then obtain K_min and K_max according to (64).
6. Calculate X_min and Y_max in (66) based on [X_φ, Y_φ] and [X_ψ, Y_ψ] in (61). Furthermore, calculate X, L and m in (67).
7. Generate the vector f_ap from the samples {f_s(k/2^J)}_{k=I_0,...,I_1} according to (15). Then construct the sample vector f_I from f_ap according to (73).
8. Generate the matrices Ψ̃ and Φ̃ according to (68) and (70).
9. Generate Λ̃ according to (71) and (75).
10. Calculate (Ψ̃^T Λ̃ Ψ̃)^{−1} and (Φ̃^T Λ̃ Φ̃)^{−1}.
11. Form the matrices A and B in (79) and (80). Then calculate M according to (78).
12. Obtain the matrices V_u, Σ_λ and W_u via singular value decomposition of M.
13. Obtain c_I^o by applying V_u, Σ_λ and W_u to f_I according to (83).
14. Obtain the wavelet and scaling coefficients b_ap and c_ap from c_I^o according to (72) and (78).
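Steps 11-14 above can be condensed into a few matrix operations. The sketch below (NumPy; a classical simulation of the quantum step, with matrix names standing for those of (79), (80) and (83)) forms M = A · B and applies its SVD factors to f_I:

```python
import numpy as np

def apply_transform(Psi, Phi, Lam, f_I):
    """Steps 11-14: build M = A @ B of (78)-(80), then return the
    stacked coefficient vector c_I^o = [b_I; c_I] via the SVD (83)."""
    p, q = Psi.shape[1], Phi.shape[1]
    A = np.block([
        [np.linalg.inv(Psi.T @ Lam @ Psi), np.zeros((p, q))],
        [np.zeros((q, p)), np.linalg.inv(Phi.T @ Lam @ Phi)],
    ])                                        # block-diagonal matrix (79)
    B = np.vstack([Psi.T @ Lam, Phi.T @ Lam]) # stacked matrix (80)
    M = A @ B
    Vu, s, Wu = np.linalg.svd(M, full_matrices=True)
    Sl = np.zeros(M.shape)
    Sl[:len(s), :len(s)] = np.diag(s)         # weighting block Sigma_lambda
    return Vu @ Sl @ Wu @ f_I                 # Eq. (83)
```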

7. Simulation

7.1. Target Function and Samples

Clearly, from the above discussions, the existence of an interpolatory basis for a scaling space is a precondition for our realization of the matrix wavelet transform. On the other hand, we know from classic wavelet sampling theory that such interpolatory bases exist for the sixth-order spline and the second-order Coiflet MRAs. Thus, these two MRAs are good candidates for simulations. Here we use the Matlab R2019b software package to validate properties of the matrix wavelet transform (simulation codes available at http://math.bu.edu/people/mkon/html/s_research.html).
To perform this validation, we select a function of the form

f_s(x) = Σ_{k=−5}^{+5} b_k ψ(x − k) + Σ_{k=−5}^{+5} a_k φ(x − k)   (84)

as the target function in this simulation, where φ(x) and ψ(x) are respectively a scaling function and a corresponding wavelet.
From wavelet theory, the target function f_s(x) in (84) has projections into W_0 and V_0, respectively having the forms

P_W(x) = Σ_{k=−5}^{+5} b_k ψ(x − k) and P_V(x) = Σ_{k=−5}^{+5} a_k φ(x − k).   (85)

Eqs. (84) and (85) imply that P_W(x) ∈ W_0, P_V(x) ∈ V_0 and f_s(x) ∈ V_1. Hence, we here choose the parameter J = 1 in (3). Simultaneously, since the translates {k}_{k∈{−5,...,+5}} in (84) range from −5 to 5 for both {φ(x − k)}_{k∈Z} and {ψ(x − k)}_{k∈Z}, it follows from (84) that

supp(f_s) = [X_min − 5, Y_max + 5],   (86)

with X_min and Y_max as in (66). Thus, since J = 1, we choose

I_0 = 2^J (X_min − 5) = 2(X_min − 5) and I_1 = 2^J (Y_max + 5) = 2(Y_max + 5),   (87)

so that the samples f_ap = [f_s(X_min − 5), ..., f_s(Y_max + 5)] as in (15) cover the support of f_s(x).


In addition, the coefficients {a_k}_{k∈Z} and {b_k}_{k∈Z} in (84) are randomly generated from the standard uniform distribution on [−1, 1]. For each MRA, we repeat the experiment 50 times and compute the corresponding statistics, allowing a comprehensive evaluation of our methods.
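The random coefficient draws of Section 7.1 can be reproduced, for instance, as follows (NumPy; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# 50 trials; 11 wavelet coefficients b_k and 11 scaling coefficients a_k
# per trial (k = -5, ..., 5), drawn uniformly from [-1, 1] as in (84)
b_trials = rng.uniform(-1.0, 1.0, size=(50, 11))
a_trials = rng.uniform(-1.0, 1.0, size=(50, 11))
```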

7.2. Numerical Values for α_n^φ, α_n^ψ, β_n^φ and β_n^ψ

Standard scaling functions φ(x) have been constructed and used for the sixth-order spline and the second-order Coiflet MRAs. In both cases, since the interpolatory scaling function S_φ(x) and wavelet S_ψ(x), with their dual scaling function φ_d(x) and wavelet ψ_d(x), can be related to the scaling function φ(x) [21,23,27], Eq. (57) implies that we can obtain {α_n^φ}_n, {α_n^ψ}_n, {β_n^φ}_n and {β_n^ψ}_n via numerical calculations with φ(x).
The numerical results show that

|α_n^φ| ≤ 6.1558 × 10^−4 when n ∉ [−13, 13], |α_n^ψ| ≤ 7.5507 × 10^−4 when n ∉ [−12, 12]   (88)

|β_n^φ| ≤ 7.8717 × 10^−4 when n ∉ [−9, 9] and |β_n^ψ| ≤ 4.7538 × 10^−4 when n ∉ [−11, 11]   (89)

for the sixth-order spline. The above coefficients (when n is outside the intervals in (88) and (89)) are sufficiently small for our simulations. Thus we choose here

A_α^φ = −13, B_α^φ = 13;  A_α^ψ = −12, B_α^ψ = 12;  A_β^φ = −9, B_β^φ = 9;  A_β^ψ = −11, B_β^ψ = 11   (90)

for (58) above.
Similarly, the coefficients {α_n^φ}_n, {α_n^ψ}_n, {β_n^φ}_n and {β_n^ψ}_n for the second-order Coiflet MRA satisfy

|α_n^φ| ≤ 6.7387 × 10^−4 for n ∉ [−8, 8], |α_n^ψ| ≤ 1.6975 × 10^−4 for n ∉ [−10, 8],   (91)

|β_n^φ| ≤ 1.1022 × 10^−4 for n ∉ [−1, 9], and |β_n^ψ| ≤ 1.1426 × 10^−4 for n ∉ [−4, 2].   (92)

Clearly, the values in (91) and (92) are small enough for this simulation to use the parameters

A_α^φ = −8, B_α^φ = 8;  A_α^ψ = −10, B_α^ψ = 8;  A_β^φ = −1, B_β^φ = 9;  A_β^ψ = −4, B_β^ψ = 2   (93)

in (58).

7.3. Generation of Matrices

7.3.1. Parameters for Generating Matrices

Three matrices are critical for generating the wavelet and scaling coefficients {c_k^o}_{k∈Z} and {b_k^o}_k as in (5). They are respectively Ψ̃, Φ̃ and Λ̃ as in Theorems 4 and 5. Clearly, from Theorem 4, all of these matrices are related to the parameters (X_min, Y_max), (K_min, K_max), (A_min, B_max) and (L_O, L_U). From (62), (63), (64) and (66), these parameters are determined by φ(x) and ψ(x). Since the scaling function φ(x) and the wavelet ψ(x) have been well constructed and used in wavelet theory, we can obtain these parameters via numerical calculations with φ(x) and ψ(x). In fact, we have

(X_min, Y_max) = (−5, 6), (K_min, K_max) = (−27, 28),   (94)

(A_min, B_max) = (−11, 11) and (L_O, L_U) = (−11, 11)   (95)

for the sixth-order spline MRA. Simultaneously, for the second-order Coiflet MRA, we choose

(X_min, Y_max) = (0, 11), (K_min, K_max) = (−12, 29)   (96)

A_min = B_max = 0, and (L_O, L_U) = (−1, 9).   (97)

With these parameters, we generate the appropriate matrices in the following subsections.
It follows from (87), (94) and (96) that

I_0 = −20 and I_1 = 22   (98)

for the spline MRA and

I_0 = −10 and I_1 = 32   (99)

for the Coiflet MRA.


Fig. 1. Matrix images for the sixth-order spline MRA. a) [Ψ̃, Φ̃]^T, b) Λ̃, c) M, d) V_u, e) Σ_λ, f) W_u.

7.3.2. Matrices for the Spline MRA

Substituting (94) into (72) yields

b_I = [O_{1×11}, b_ap^T, O_{1×11}]^T and c_I = [O_{1×11}, c_ap^T, O_{1×11}]^T.   (100)

Simultaneously, from (94) and (95), we have

Y_max + K_max − A_min = 45 and X_min + K_min − B_max = −43.   (101)

Inserting (98) and (101) into (73) gives

f_I = [O_{1×68}, f_ap^T, O_{1×66}]^T   (102)

with f_ap as in (15).
On the other hand, inserting (94) and (95) into (67) yields

X = 6 − (−5) = 11, L = 11 − (−11) = 22 and m = 28 − (−27) + 11 − (−11) = 77.   (103)

Thus, inserting (94) and (103) into (68) yields

ψ_i^σ = [O_{2i×1}, ψ(−5), ψ(−5 + 1/2), ..., ψ(6 − 1/2), ψ(6), O_{2(77−i)×1}]^T
φ_i^σ = [O_{2i×1}, φ(−5), φ(−5 + 1/2), ..., φ(6 − 1/2), φ(6), O_{2(77−i)×1}]^T.   (104)

The matrices Ψ̃ and Φ̃ are then generated via (70) and (104).
We must consider the fact that matrices such as Ψ̃ and Λ̃ are relatively large, so it is difficult to present them here through simple equations. On the other hand, for our algorithms, graphical illustrations of the numerical value distributions of the matrix elements are more informative than the numerical values themselves. Hence, in this simulation, we normalize the values of the matrix elements into the interval [0, 1]. These matrices are depicted graphically by taking element values as pixel intensities (brighter pixels correspond to higher values). The image of the matrix [Ψ̃, Φ̃]^T is shown in Fig. 1-(a).
Then, applying (95) and (103) to (69) gives

λ_i = [O_{1×i}, λ_{−11}^1, ..., λ_{11}^1, O_{1×(176−i)}]^T = [O_{1×i}, 2^{−1}λ_{−11}, ..., 2^{−1}λ_{11}, O_{1×(176−i)}]^T.   (105)

With (71), (75) and (105), we obtain Λ̃ for this simulation; its image is shown in Fig. 1-(b). Then, applying [Ψ̃, Φ̃]^T and Λ̃ to (78) yields M, shown in Fig. 1-(c). By computing the singular value decomposition of M, we obtain V_u, Σ_λ and W_u for this experiment. These matrices are respectively shown in Figs. 1-(d) to (f).
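The normalization used for the matrix images can be sketched as follows (NumPy; `normalize01` is our helper name):

```python
import numpy as np

def normalize01(M):
    """Map matrix-element values into [0, 1] for display as pixel
    intensities (brighter pixels correspond to higher values)."""
    lo, hi = M.min(), M.max()
    if hi == lo:                       # constant matrix: map to zeros
        return np.zeros_like(M)
    return (M - lo) / (hi - lo)
```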


Fig. 2. Matrix images for the second-order Coiflet MRA. a) [Ψ̃, Φ̃]^T, b) Λ̃, c) M, d) V_u, e) Σ_λ, f) W_u.

7.3.3. Matrices for the Coiflet MRA

Since Coiflet wavelets and scaling functions form orthogonal bases, we have A_min = B_max = 0 as shown in (97). Thus it follows from (72) and (97) that

b_I = b_ap and c_I = c_ap.   (106)

Similarly, it follows from (96) and (97) that

Y_max + K_max − A_min = 40 and X_min + K_min − B_max = −12.   (107)

Inserting (99) and (107) into (73) yields

f_I = [O_{1×48}, f_ap^T, O_{1×14}]^T   (108)

with f_ap as in (15). Simultaneously, applying (96) and (97) to (67) yields

X = 11 − 0 = 11, L = 9 − (−1) = 10 and m = 29 − (−12) + 0 − 0 = 41.   (109)

Inserting (96) and (109) into (68) gives

ψ_i^σ = [O_{2i×1}, ψ(0), ψ(1/2), ..., ψ(11 − 1/2), ψ(11), O_{2(41−i)×1}]^T
φ_i^σ = [O_{2i×1}, φ(0), φ(1/2), ..., φ(11 − 1/2), φ(11), O_{2(41−i)×1}]^T.   (110)

Thus, by using (70) and (110), Ψ̃ and Φ̃ can be constructed. After normalizing the element values into [0, 1], the image of the matrix [Ψ̃, Φ̃]^T is shown in Fig. 2-(a). With (97) and (109), Eq. (69) implies

λ_i = [O_{1×i}, λ_{−1}^1, ..., λ_9^1, O_{1×(104−i)}]^T = [O_{1×i}, 2^{−1}λ_{−1}, ..., 2^{−1}λ_9, O_{1×(104−i)}]^T.   (111)

Using (111), the matrix Λ̃ can be obtained from (71) and (75), and is shown in Fig. 2-(b). Clearly, Eqs. (79) and (80) imply that M in (78) can be constructed from [Ψ̃, Φ̃]^T in Fig. 2-(a) and Λ̃ in Fig. 2-(b). The matrix M is shown in Fig. 2-(c). Then, we obtain V_u, Σ_λ and W_u via the singular value decomposition of M. Figs. 2-(d) to (f) respectively show the matrices V_u, Σ_λ and W_u.


Fig. 3. Mean absolute values E_ψ(x) and E_φ(x) for the sixth-order spline MRA. a) Mean absolute value E_ψ(x) of e_ψ(x). b) Mean absolute value E_φ(x) of e_φ(x).

7.3.4. Discussion of Matrices in the Simulation

Figs. 1-(a) and 2-(a) show the matrices formed by juxtaposing Ψ̃ and Φ̃. Since φ(x) and ψ(x) have good energy localization for both the spline and Coiflet MRAs, the large values of [Ψ̃, Φ̃]^T, formed by the samples {φ(k/2)}_{k∈Z} and {ψ(k/2)}_{k∈Z} as in (70), concentrate within a small subset of matrix elements corresponding to the central positions of the supports of φ(x) and ψ(x). Informally, the 'heat map' images of [Ψ̃, Φ̃]^T appear as two lines of larger elements, generated by Ψ̃ and Φ̃ respectively.
Comparing Fig. 1-(b) with Fig. 2-(b), we see that Λ̃ for the Coiflet MRA approximates the identity matrix better than the Λ̃ matrix for the spline MRA does. Clearly, the matrices Λ̃, built from the parameters {λ_k}_{k∈Z} as in (17), (71) and (75), reflect the relation between the interpolatory basis {S_φ(x − k)}_{k∈Z} and the orthonormal basis {θ(x − k)}. The matrix Λ̃ becomes the identity when {S_φ(x − k)}_{k∈Z} also forms an orthonormal basis. This implies that the Coiflet MRA yields greater similarity between S_φ(x) and θ(x) than the spline MRA does.
However, on the whole, Figs. 1 and 2 show that the large values of [Ψ̃, Φ̃]^T and Λ̃ concentrate within small numbers of elements. This also leads to concentrations of the large values in the matrix M for both MRAs, with 'heat maps' forming two bright lines, as shown in Figs. 1-(c) and 2-(c).
As discussed above, M corresponds to one weighting transformation Σ_λ and two rotation transformations V_u and W_u. It should be noted that Σ_λ in Fig. 1-(e) has higher luminosity (large values) near the origin (i.e., the lower right) than that in Fig. 2-(e). This implies that the weighting procedure for the spline MRA is more unbalanced than that for the Coiflet MRA.

7.4. Results for the Matrix Wavelet Transforms

According to (78) and (83), the coefficients (b_I, c_I) can be obtained by applying the matrices V_u, Σ_λ and W_u to f_I. Then, by (72), we can further obtain b_ap and c_ap respectively from b_I and c_I.
On the other hand, the projections P_W(x) and P_V(x) in (84) and (85) can be constructed from b_ap and c_ap using (77). Thus, in this simulation, we first obtain the wavelet coefficients b_ap and the scaling coefficients c_ap by applying V_u, Σ_λ and W_u in Figs. 1 and 2, respectively for the sixth-order spline and the second-order Coiflet MRAs. Then, our method can be verified via analysis of

e_ψ(x) = P_W(x) − ψ_{K^o}(x) b_ap and e_φ(x) = P_V(x) − φ_{K^o}(x) c_ap   (112)

with P_W(x) and P_V(x) as in (85).
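The curves E_ψ(x) and E_φ(x) of Figs. 3 and 4 are pointwise means of |e(x)| over the 50 trials; a sketch on a discretized grid (NumPy; array names are ours):

```python
import numpy as np

def mean_abs_error(P_true, P_est):
    """E(x): mean over trials of |e(x)| on a sample grid.
    P_true, P_est have shape (n_trials, n_grid_points)."""
    return np.abs(P_true - P_est).mean(axis=0)
```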


Fig. 4. Mean absolute values Eψ (x) and Eϕ (x) for the second-order Coiflet MRA. a) Mean absolute value Eψ (x) of eψ (x). b) Mean absolute value Eϕ (x) of
eϕ (x).

• Simulation results for the sixth-order spline MRA

For the sixth-order spline MRA, the mean absolute values E_ψ(x) and E_φ(x) of e_ψ(x) and e_φ(x) over 50 experiments are respectively shown in Figs. 3-(a) and (b).
As shown in Fig. 3, the maximum values of E_ψ(x) and E_φ(x) are respectively 4.7623 × 10^−16 and 9.1527 × 10^−16. In addition, their mean values in Fig. 3 are 1.5476 × 10^−16 and 3.8537 × 10^−16. We note that b_ap and c_ap in this experiment are obtained numerically, so the errors shown in Fig. 3 are reasonable for our simulations. We conclude that the wavelet and scaling coefficients have been correctly obtained for the sixth-order spline MRA by our methods.

• Simulation results for the second-order Coiflet MRA

Similarly, for the Coiflet MRA, the mean absolute values E_ψ(x) and E_φ(x) of e_ψ(x) and e_φ(x) over 50 experiments are respectively shown in Figs. 4-(a) and (b). Fig. 4 shows that E_ψ(x) and E_φ(x) respectively have maximum values 1.5026 × 10^−4 and 8.2606 × 10^−5. In addition, the mean values are respectively 5.0257 × 10^−5 and 3.4041 × 10^−5 for E_ψ(x) and E_φ(x) in Fig. 4. Comparing Fig. 3 to Fig. 4, it follows that E_ψ(x) and E_φ(x) for the Coiflet MRA are much larger than those for the spline MRA.
Clearly, the accuracy of our algorithm largely depends on that of the finite matrix operation M, which is an approximation of the infinite matrix (Φ^T Λ Φ)^{−1} Φ^T Λ under the assumption that φ(x), ψ(x) and S_φ(x) have compact supports. Hence, using M cannot avoid the resulting approximation errors, arising from the difference between (Φ^T Λ Φ)^{−1} Φ^T Λ and M. Such errors are related to parameters such as α_n^φ, α_n^ψ, β_n^φ and β_n^ψ, obtained from φ(x), ψ(x) and S_φ(x).
On the other hand, M is built from Ψ̃, Φ̃ and Λ̃, which are formed by the numbers {φ(k/2)}_{k∈Z}, {ψ(k/2)}_{k∈Z} and {λ_k}_{k∈Z} in (17). Hence, the accuracy of M again depends to a great extent on those of φ(x), ψ(x) and S_φ(x). However, as discussed in [28], only numerical values of the Coiflet MRA functions can be obtained, while we have analytic expressions for the spline MRA functions. This leads to more numerical error for the Coiflet MRA than for the spline MRA, as demonstrated in Figs. 3 and 4.


However, as shown in Fig. 4, such error levels are still reasonable. We conclude that the wavelet and scaling coefficients have been correctly obtained for the second-order Coiflet MRA by our methods.

8. Conclusion

When an interpolatory scaling function exists for an MRA, Theorem 1 has shown that the l^2(Z) norm (Eq. (25)) can be defined on l^2(Z) by introducing the matrix Λ. Simultaneously, via this norm and the matrix Λ, the L^2(R) inner product as well as the approximation error (28) between two functions can be directly and simply related to their samples in l^2(Z). Using the nested space structure of an MRA, this allows construction of the infinite matrix equation (43), yielding an infinite matrix operation (44) which derives wavelet and scaling coefficients directly from samples. For finite samples, Eq. (43) can further be transformed into the finite matrix equation (74). This produces the finite wavelet and scaling matrix operations Γ_ψ and Γ_φ in (76). Combining these two yields a single matrix operation M, which obtains wavelet and scaling coefficients simultaneously from discrete function samples. Since M has full row rank (Theorem 6), the diagonal matrix Σ in (82) for the singular value decomposition of M is nonsingular. From this, as in (83), the matrix transform M can be realized as two quantum transform procedures sandwiching a weighting operation.

Appendix A. Proof of Theorem 1

Proof. As the first step, we show that for any f_s ∈ l^2(Z),

g_s = Q · f_s   (A1)

forms an invertible mapping from l^2(Z) onto l^2(Z).
From (17), we have

Ŝ_φ(w) = (Σ_{k=−∞}^{+∞} λ_k e^{−kwi}) θ̂(w).   (A2)

Since (17) forms an invertible mapping from {S_φ(x − k)}_{k∈Z} to {θ(x − k)}, both of which form Riesz bases for V_0, it follows from frame theory that

Σ_{k=−∞}^{+∞} λ_k e^{−kwi} ≠ 0   (A3)

for any w ∈ R.
On the other hand, according to (19), Eq. (A1) implies that

g_s[k] = Σ_{n=−∞}^{+∞} λ_{k−n}^J f_s[n].   (A4)

Fourier transforming (A4) yields

Σ_{m=−∞}^{+∞} g_s[m] e^{−iwm} = (Σ_{k=−∞}^{+∞} λ_k^J e^{−iwk}) (Σ_{n=−∞}^{+∞} f_s[n] e^{−iwn}).   (A5)

Since f_s[n], λ_k^J ∈ l^2(Z), Eq. (A5) implies that

Σ_{k=−∞}^{+∞} λ_k^J e^{−iwk} ∈ L^2[−π, π] and Σ_{n=−∞}^{+∞} f_s[n] e^{−iwn} ∈ L^2[−π, π]   (A6)

by frame theory. From (A5) and (A6), we have

Σ_{m=−∞}^{+∞} g_s[m] e^{−iwm} ∈ L^2[−π, π].   (A7)

Using frame theory, Eq. (A7) implies that g_s ∈ l^2(Z).
Conversely, if (A3) holds, it follows from (20) and (A5) that

Σ_{n=−∞}^{+∞} f_s[n] e^{−iwn} = (Σ_{m=−∞}^{+∞} g_s[m] e^{−iwm}) (Σ_{k=−∞}^{+∞} λ_k^J e^{−iwk})^{−1}   (A8)

with (Σ_{k=−∞}^{+∞} λ_k^J e^{−iwk})^{−1} ∈ L^2[−π, π]. Similarly, from frame theory, Eq. (A8) implies for any g_s ∈ l^2(Z),

Σ_{n=−∞}^{+∞} f_s[n] e^{−iwn} ∈ L^2[−π, π].   (A9)

This in turn implies that f_s ∈ l^2(Z). Eqs. (A7) and (A9) imply (A1).
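Eq. (A4) says that Q acts by discrete convolution with the sequence {λ_k^J}; for a finitely truncated sequence this is, e.g., NumPy's `convolve` (the numerical values below are hypothetical, for illustration only):

```python
import numpy as np

# (A4): g_s[k] = sum_n lambda^J_{k-n} f_s[n], i.e. g_s = lambda * f_s.
lam = np.array([0.25, 0.5, 0.25])   # hypothetical truncated lambda sequence
f = np.array([1.0, -2.0, 3.0])      # hypothetical finite sample sequence
g = np.convolve(lam, f)             # full convolution, as in (A4)
```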
As the second step, we show that (25) defines a norm on l^2(Z).
It follows from (21) that

f_s^T Λ f_s = (Q f_s)^T Q f_s = ‖Q f_s‖_{l^2}^2 ≥ 0.   (A10)

Clearly, (A10) implies that f_s^T Λ f_s = 0 if and only if Q f_s = 0, which holds when f_s = 0. Eq. (A1) implies that Q in (A10) is an invertible transformation from l^2(Z) onto l^2(Z). Hence we conclude that Q f_s = 0 if and only if f_s = 0. This implies

f_s^T Λ f_s = 0 iff f_s = 0.   (A11)

Eqs. (A10) and (A11) imply that Λ in (A10) is a positive definite matrix, which implies

‖f_s‖_Λ = √(f_s^T Λ f_s) > 0 for f_s ≠ 0 and ‖f_s‖_Λ = √(f_s^T Λ f_s) = 0 for f_s = 0.   (A12)

Simultaneously, for any f_s ∈ l^2(Z) and g_s ∈ l^2(Z),

‖f_s + g_s‖_Λ = √((f_s + g_s)^T Λ (f_s + g_s)) = √((Q f_s + Q g_s)^T (Q f_s + Q g_s))
= ‖Q f_s + Q g_s‖_{l^2} ≤ ‖Q f_s‖_{l^2} + ‖Q g_s‖_{l^2} = √(f_s^T Λ f_s) + √(g_s^T Λ g_s) = ‖f_s‖_Λ + ‖g_s‖_Λ,   (A13)

and

‖a f_s‖_Λ = √((a f_s)^T Λ (a f_s)) = |a| √(f_s^T Λ f_s) = |a| ‖f_s‖_Λ   (A14)

for any constant a ≠ 0.
By the definition of a norm, with (A12), (A13) and (A14), it follows that ‖f_s‖_Λ is a norm on the space l^2(Z). We will denote this as the l_Λ^2(Z) norm.
In the third step, we show that (23) defines an inner product on l^2(Z).
Since Λ is a symmetric matrix, we have

⟨f_s, g_s⟩_Λ = f_s^T Λ g_s = (g_s^T Λ^T f_s)^T = g_s^T Λ f_s = ⟨g_s, f_s⟩_Λ.   (A15)

Simultaneously, for any f_s, g_s and h_s ∈ l^2(Z),

⟨f_s + h_s, g_s⟩_Λ = (f_s + h_s)^T Λ g_s = f_s^T Λ g_s + h_s^T Λ g_s = ⟨f_s, g_s⟩_Λ + ⟨h_s, g_s⟩_Λ.   (A16)

By the definition of an inner product, Eqs. (A15) and (A16) with (A12), (A13) and (A14) lead to (23).
In the fourth step, we show that (24) holds.
From (18) and (21), we have

f_s^T Λ g_s = Σ_{m=−∞}^{+∞} (Σ_{n=−∞}^{+∞} λ_{m−n}^J f_s(n/2^J)) (Σ_{n=−∞}^{+∞} λ_{m−n}^J g_s(n/2^J)).   (A17)

From properties of orthonormal bases, Eq. (A17) implies

f_s^T Λ g_s = ∫_{−∞}^{+∞} (Σ_{m′=−∞}^{+∞} Σ_{n′=−∞}^{+∞} λ_{m′−n′}^J g_s(n′/2^J) θ_{J,m′}(x)) (Σ_{n=−∞}^{+∞} Σ_{m=−∞}^{+∞} λ_{m−n}^J f_s(n/2^J) θ_{J,m}(x)) dx,   (A18)

where {θ_{J,m}(x) = 2^{J/2} θ(2^J x − m)} forms an orthonormal basis for V_J with θ(x) as in (17).
On the other hand, Eq. (17) implies

Σ_{n=−∞}^{+∞} Σ_{m=−∞}^{+∞} λ_{m−n}^J f_s(n/2^J) θ_{J,m}(x) = Σ_{n=−∞}^{+∞} f_s(n/2^J) Σ_{m=−∞}^{+∞} λ_{m−n}^J θ_{J,m}(x) = Σ_{n=−∞}^{+∞} f_s(n/2^J) S_φ(2^J x − n).   (A19)

Since f_s(x) ∈ V_J and {S_φ(2^J x − n)}_{n∈Z} forms an interpolatory basis for V_J, it follows from (11) and (A19) that

Σ_{n=−∞}^{+∞} Σ_{m=−∞}^{+∞} λ_{m−n}^J f_s(n/2^J) θ_{J,m}(x) = f_s(x).   (A20)

Similarly, we have

Σ_{n=−∞}^{+∞} Σ_{m=−∞}^{+∞} λ_{m−n}^J g_s(n/2^J) θ_{J,m}(x) = g_s(x).   (A21)

Inserting (A20) and (A21) into (A18) yields (24).

Appendix B. Proof of Theorem 2

Proof. Let

ϕ⃗_j(x) = [..., ϕ_{j,k−1}(x), ϕ_{j,k}(x), ϕ_{j,k+1}(x), ...]^T
ϕ⃗_j^d(x) = [..., ϕ_{j,k−1}^d(x), ϕ_{j,k}^d(x), ϕ_{j,k+1}^d(x), ...]^T.   (B1)

Step 1: we show

ϕ⃗_j(x) = T_{ϕΛϕ} ϕ⃗_j^d(x).   (B2)


It follows from frame theory that

ϕ_{j,k}(x) = Σ_{l=−∞}^{+∞} ⟨ϕ_{j,k}, ϕ_{j,l}⟩_{L^2} ϕ_{j,l}^d(x),   (B3)

where {ϕ_{j,l}^d(x)}_{l∈Z} forms the dual basis of {ϕ_{j,k}(x)}_{k∈Z} in U_j.
On the other hand, if

ϕ⃗_k = [..., ϕ_{j,k}((n − 1)/2^J), ϕ_{j,k}(n/2^J), ϕ_{j,k}((n + 1)/2^J), ...]^T,   (B4)

then it follows from (B4) and (34) that

Φ_ϕ = [..., ϕ⃗_{k−1}, ϕ⃗_k, ϕ⃗_{k+1}, ...].   (B5)

Substituting (B5) into T_{ϕΛϕ} yields

T_{ϕΛϕ} = [..., ϕ⃗_{k−1}, ϕ⃗_k, ϕ⃗_{k+1}, ...]^T Λ [..., ϕ⃗_{k−1}, ϕ⃗_k, ϕ⃗_{k+1}, ...].   (B6)

Eq. (B6) implies that

T_{ϕΛϕ} = [ς_{k,l}]_{k,l}   (B7)

with

ς_{k,l} = ϕ⃗_k^T Λ ϕ⃗_l.   (B8)

As in (B4), it is clear that ϕ⃗_k is formed by samples of ϕ_{j,k}(x) ∈ V_J. Hence it follows from Theorem 1 that

ς_{k,l} = ⟨ϕ_{j,k}(x), ϕ_{j,l}(x)⟩_{L^2}.   (B9)

Substituting (B9) into (B3) yields

ϕ_{j,k}(x) = Σ_{l=−∞}^{+∞} ς_{k,l} ϕ_{j,l}^d(x),   (B10)

which implies that (B2) holds.

Step 2: we show that the infinite matrix T_{ϕΛϕ} has a unique inverse on l^2(Z).
From (B2), T_{ϕΛϕ} is a linear transformation taking {ϕ_{j,k}^d(x)}_{k∈Z} to {ϕ_{j,k}(x)}_{k∈Z}. From frame theory, both {ϕ_{j,k}(x)}_{k∈Z} and {ϕ_{j,k}^d(x)}_{k∈Z} are Riesz bases for U_j. Hence the linear transformation T_{ϕΛϕ} is unique and invertible for the Riesz bases {ϕ_{j,k}(x)}_{k∈Z} and {ϕ_{j,k}^d(x)}_{k∈Z}.
Assume that (T_{ϕΛϕ})^{−1} denotes the inverse of T_{ϕΛϕ}. Then we have

(T_{ϕΛϕ})^{−1} ϕ⃗_j(x) = ϕ⃗_j^d(x)   (B11)

with ϕ⃗_j(x) and ϕ⃗_j^d(x) as in (B1). Since ϕ_{j,k}(x) denotes a wavelet, scaling function or wavelet packet here, wavelet theory implies that for any {d_k}_{k∈Z} ∈ l^2(Z), there exist unique f_s(x) ∈ U_j and {d̃_k}_{k∈Z} ∈ l^2(Z) such that

f_s(x) = Σ_{k=−∞}^{+∞} d̃_k ϕ_{j,k}^d(x) = (ϕ⃗_j^d(x))^T d̃ = Σ_{k=−∞}^{+∞} d_k ϕ_{j,k}(x) = (ϕ⃗_j(x))^T d,   (B12)

where d̃ = [..., d̃_{k−1}, d̃_k, d̃_{k+1}, ...]^T and d = [..., d_{k−1}, d_k, d_{k+1}, ...]^T.
Substituting (B2) into (B12) yields

(ϕ⃗_j^d(x))^T c̃ = (ϕ⃗_j(x))^T c = (ϕ⃗_j^d(x))^T T_{ϕΛϕ} c.   (B13)

Due to uniqueness of the series expansion (B12), Eq. (B13) holds only when

c̃ = T_{ϕΛϕ} c.   (B14)

Similarly, for any {c̃_k}_{k∈Z} ∈ l^2(Z), the function f_s(x) ∈ V_{J−1} and the coefficients {c_k}_{k∈Z} ∈ l^2(Z) are also unique for (B12). Thus, it follows from (B11) and (B12) that

(ϕ⃗_j(x))^T c = (ϕ⃗_j^d(x))^T c̃ = (ϕ⃗_j(x))^T (T_{ϕΛϕ})^{−1} c̃.   (B15)

From uniqueness of (B12), Eq. (B15) implies that

c = (T_{ϕΛϕ})^{−1} c̃.   (B16)

Due to the arbitrariness of c, c̃ ∈ l^2(Z), it follows from (B14) and (B16) that T_{ϕΛϕ} is also an invertible linear mapping from l^2(Z) to l^2(Z).


Step 3: we show that (41) holds.
From frame theory, both {ϕ_{j,n}(x)}_{n∈Z} and {ϕ_{j,k}^d(x)}_{k∈Z} form Riesz bases for U_j. Hence

ϕ_{j,m}^d(x) = Σ_{n=−∞}^{+∞} ⟨ϕ_{j,m}^d, ϕ_{j,n}^d⟩_{L^2} ϕ_{j,n}(x).   (B17)

Clearly, from (40), (B1) and (B11), we have

ϕ_{j,m}^d(x) = Σ_{n=−∞}^{+∞} η_{m,n} ϕ_{j,n}(x).   (B18)

Due to the unique representation (B17) for given {ϕ_{j,n}(x)}_{n∈Z}, Eq. (B18) results in (41).

Appendix C. Proof of Lemma 1

Proof. Since both {ϕ(2^j x − k)}_{k∈Z} and {S_ϕ(2^j x − k)}_{k∈Z} form Riesz bases for U_j, by frame theory there must exist a unique set of coefficients {β_n^ϕ}_{n∈Z}, as in (48), such that

S_ϕ(2^j x) = 2^{−j} Σ_{n=−∞}^{+∞} β_n^ϕ ϕ(2^j x − n).   (C1)

Inserting (C1) into (47) yields

f_Pr(x) = 2^{−j} Σ_{m=N_0}^{N_1} f_Pr(m/2^j + δ) Σ_{n=−∞}^{+∞} β_n^ϕ ϕ(2^j x − m − n),   (C2)

which implies

f_Pr(x) = 2^{−j} Σ_{n=−∞}^{+∞} (Σ_{m=N_0}^{N_1} β_{n−m}^ϕ f_Pr(m/2^j + δ)) · ϕ(2^j x − n).   (C3)

Eq. (49) implies that β_{n−m}^ϕ = 0 for n > B_β^ϕ + m or n < A_β^ϕ + m. Since N_0 ≤ m ≤ N_1 in (C3), Eqs. (49) and (C3) imply

f_Pr(x) = 2^{−j} Σ_{n=N_0+A_β^ϕ}^{N_1+B_β^ϕ} (Σ_{m=N_0}^{N_1} β_{n−m}^ϕ f_Pr(m/2^j + δ)) · ϕ(2^j x − n),   (C4)

which implies (50) and (51).

Appendix D. Proof of Theorem 3

Proof. Frame theory implies that the projection of f_s(x) onto U_{J−1} can be represented as

f_Pr(x) = Σ_{k∈Z} ⟨f_s(x), S_d^ϕ(2^{J−1}x − k)⟩_{L^2} S_ϕ(2^{J−1}x − k)   (D1)

with S_d^ϕ(x) as in (52).
Inserting (14) into (D1) yields

f_Pr(x) = Σ_{m∈Z} ⟨Σ_{k=I_0}^{I_1} f_s(k/2^J) S_φ(2^J x − k), S_d^ϕ(2^{J−1}x − m)⟩_{L^2} S_ϕ(2^{J−1}x − m).   (D2)

From (52) and (D2), we have

f_Pr(x) = 2^{−(J−1)} Σ_{m∈Z} (Σ_{k=I_0}^{I_1} f_s(k/2^J) α_{k−2m}) S_ϕ(2^{J−1}x − m).   (D3)

Properties of interpolatory functions imply

f_Pr(m/2^{J−1} + δ) = 2^{−(J−1)} Σ_{k=I_0}^{I_1} f_s(k/2^J) α_{k−2m}.   (D4)

It follows from (53) that

f_Pr(m/2^{J−1} + δ) = 0   (D5)

for k − 2m ≤ A_α^ϕ and B_α^ϕ ≤ k − 2m in (D4). Since (D1) means that I_0 ≤ k ≤ I_1 for α_{k−2m} in (D3), Eq. (D5) holds for

m < ⌈(I_0 − B_α^ϕ)/2⌉ or m > ⌊(I_1 − A_α^ϕ)/2⌋.   (D6)

Eqs. (D4) and (D6) imply that (D3) can be represented as

f_Pr(x) = Σ_{m=⌈(I_0−B_α^ϕ)/2⌉}^{⌊(I_1−A_α^ϕ)/2⌋} f_Pr(m/2^{J−1} + δ) S_ϕ(2^{J−1}x − m).   (D7)

Comparing (D7) to (47) yields (54). Inserting (54) into (50) yields (55).


Appendix E. Proof of Theorem 4

Proof. We introduce the matrices


 
B = ψ  K −A T
 K −B , · · · , ψ (E1)
min max max min

and
 T
B = φ Kmin −Bmax , · · · , φ Kmax −Amin (E2)
with
  T
ψ k = · · · , ψJ−1,k ((n − 1 )/2J ), ψJ−1,k (n/2J ), ψJ−1,k ((n + 1 )/2J ), · · ·
 T . (E3)
φ k = · · · , φJ−1,k ((n − 1 )/2J ), φJ−1,k (n/2J ), φJ−1,k ((n + 1 )/2J ), · · ·
As a first step, we prove that

Ψ_B^T Ψ_B b_I = Ψ_B^T f⃗_s and Φ_B^T Φ_B c_I = Φ_B^T f⃗_s      (E4)

hold if and only if (43) holds, with b_I and c_I as in (72).
Clearly Ψ^T f⃗_s in (43) can be rewritten as

Ψ^T f⃗_s = [⋯, ψ⃗_{k−1}, ψ⃗_k, ψ⃗_{k+1}, ⋯]^T f⃗_s,      (E5)

with Ψ = [⋯, ψ⃗_{k−1}, ψ⃗_k, ψ⃗_{k+1}, ⋯]. Since ψ_{J−1,k}(x), f_s(x) ∈ V_J, Eqs. (24) and (E5) imply that

ψ⃗_k^T f⃗_s = ∫_{−∞}^{+∞} ψ_{J−1,k}(x) f_s(x) dx = ⟨ψ_{J−1,k}(x), f_s(x)⟩_{L²}.      (E6)

Since {ψ_{J−1,k}(x)}_{k∈Z} forms a Riesz basis for W_{J−1}, it follows from frame theory that

∫_{−∞}^{+∞} ψ_{J−1,k}(x) f_s(x) dx = ⟨ψ_{J−1,k}(x), f_s(x)⟩_{L²} = ⟨ψ_{J−1,k}(x), P_W(x)⟩_{L²} = ∫_{−∞}^{+∞} ψ_{J−1,k}(x) P_W(x) dx.      (E7)

Inserting (E7) into (E6) yields

ψ⃗_k^T f⃗_s = ∫_{−∞}^{+∞} ψ_{J−1,k}(x) P_W(x) dx.      (E8)

Based on (24), inserting (65) into (E8) yields

ψ⃗_k^T f⃗_s = Σ_{n=Kmin}^{Kmax} b_n^0 ∫_{−∞}^{+∞} ψ(2^{J−1} x − n) ψ_{J−1,k}(x) dx = (1/2^{J−1}) Σ_{n=Kmin}^{Kmax} b_n^0 ∫_{−∞}^{+∞} ψ(x) ψ(x − (n − k)) dx.      (E9)

It follows from (62) and (E9) that

ψ⃗_k^T f⃗_s = 0      (E10)

for k < Kmin − Bmax or k > Kmax − Amin. Based on (E10), Eq. (E5) can be rewritten as

Ψ^T f⃗_s = [O, ψ⃗_{Kmin−Bmax}, ⋯, ψ⃗_{Kmax−Amin}, O]^T f⃗_s = [O, Ψ_B, O]^T f⃗_s = [O, (Ψ_B^T f⃗_s)^T, O]^T,      (E11)

with Ψ_B as in (E1) and O denoting an infinite matrix of zeros.


On the other hand, it follows from (65) that b_o in (43) has the form

b_o = [O, b_{Kmin}^0, ⋯, b_{Kmax}^0, O]^T.      (E12)

Let Ψ_A = [ψ⃗_{Kmin}, ⋯, ψ⃗_{Kmax}]. Then (E12) implies that

Ψ b_o = Ψ [O, b_{Kmin}^0, ⋯, b_{Kmax}^0, O]^T = Ψ_A b_ap,      (E13)

with b_ap as in (72). From (E13) we have

Ψ^T Ψ b_o = Ψ^T Ψ_A b_ap.      (E14)


Clearly, it follows from (24) and (B4) that

ψ⃗_m^T ψ⃗_n = ⟨ψ_{J−1,m}(x), ψ_{J−1,n}(x)⟩_{L²} = 2^{−(J−1)} ∫_{−∞}^{+∞} ψ(x) ψ(x − (n − m)) dx.      (E15)

Eq. (62) implies that ψ⃗_m^T ψ⃗_n = 0 in (E15) when n − m ∉ [Amin, Bmax]. Hence, if Kmin ≤ n ≤ Kmax in (E15), then we have

ψ⃗_m^T ψ⃗_n = 0, when m ∉ [Kmin − Bmax, Kmax − Amin],      (E16)


in (E15). Thus, it follows from (E1), (E15) and (E16) that

Ψ^T Ψ_A = [O, (Ψ_B^T Ψ_A)^T, O]^T.      (E17)

Inserting (E17) into (E14) yields

Ψ^T Ψ b_o = [O, (Ψ_B^T Ψ_A)^T, O]^T b_ap = [O, (Ψ_B^T Ψ_A b_ap)^T, O]^T.      (E18)

Simultaneously, (62) implies Amin ≤ 0 and Bmax ≥ 0. Hence, Kmax − Amin ≥ Kmax and Kmin − Bmax ≤ Kmin. This implies that

Ψ_B b_I = [ψ⃗_{Kmin−Bmax}, ⋯, ψ⃗_{Kmax−Amin}] [O_{1×Bmax}, b_ap^T, O_{1×(−Amin)}]^T = Ψ_A b_ap,      (E19)

with b_I as in (E4) and b_ap as in (72).
Inserting (E19) into (E18) yields

Ψ^T Ψ b_o = [O, (Ψ_B^T Ψ_B b_I)^T, O]^T.      (E20)

Inserting (E11) and (E20) into the first identity of (39) yields

[O, (Ψ_B^T Ψ_B b_I)^T, O]^T = [O, (Ψ_B^T f⃗_s)^T, O]^T.      (E21)

Since every step above can be reversed, Eq. (E21) implies that the first identity of (43) holds if and only if the first identity of (E4) holds. Similarly, for the scaling functions {φ(2^{J−1} x − k)}_{k∈Z}, the second identity of (43) holds if and only if the second identity of (E4) holds.
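The passage from the infinite system (43) to the finite system (E4) amounts to solving ordinary normal equations for the finitely many nonzero coefficients. A toy example (synthetic compactly supported columns, so a direct 2×2 solve suffices):

```python
# Normal equations G b = r with G the Gram matrix of finitely many
# compactly supported columns -- the finite system behind (E4).

def matvec_T(cols, v):       # [col . v for each column]
    return [sum(c * x for c, x in zip(col, v)) for col in cols]

def gram(cols):
    return [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]

def solve2(M, r):            # direct 2x2 solve, enough for the sketch
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(r[0] * M[1][1] - r[1] * M[0][1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

# two shifted copies of a short template, as stand-ins for columns of Psi_B
psi = [1.0, -1.0]
col0 = psi + [0.0, 0.0, 0.0]
col1 = [0.0, 0.0] + psi + [0.0]
b_true = [2.0, -0.5]
f = [b_true[0] * a + b_true[1] * b for a, b in zip(col0, col1)]

b_I = solve2(gram([col0, col1]), matvec_T([col0, col1], f))
assert all(abs(x - y) < 1e-12 for x, y in zip(b_I, b_true))
```

Because f lies in the span of the columns, the normal equations recover the coefficients exactly; this mirrors the equivalence between (43) and (E4) when all nonzero coefficients are retained.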
For the second step, we show that (E4) holds if and only if (74) holds.
Let

G = Q Ψ_B = [g_{k,l}].      (E22)

Then according to (19) and (E1), the element g_{k,l} of the matrix G can be represented as

g_{k,l} = Σ_{n=−∞}^{+∞} λ_{k−n}^J ψ_{J−1,l}(n/2^J) = Σ_{n=−∞}^{+∞} λ_{k−n}^J ψ(n/2 − l).      (E23)

Since supp(ψ) ⊆ [Xmin, Ymax] as in (66), we have ψ_{J−1,l}(n/2^J) = ψ(n/2 − l) = 0 for any l ∈ Z and n ∈ Z with n < 2(Xmin + l) or n > 2(Ymax + l). Hence, if g_{k,l} ≠ 0 in (E23), then we have

2(Xmin + l) ≤ n ≤ 2(Ymax + l)      (E24)

for given l. Clearly, (E1) and (E10) imply that

H_O ≤ l ≤ H_U      (E25)

for (E23), with H_O = Kmin − Bmax and H_U = Kmax − Amin. Hence, when g_{k,l} ≠ 0 in (E23), Eqs. (E24) and (E25) imply

N_O ≤ n ≤ N_U      (E26)

with N_O = 2(Xmin + H_O) and N_U = 2(Ymax + H_U).
On the other hand, if λ_{k−n}^J ≠ 0 in (E23), Eq. (63) also implies

L_O ≤ k − n ≤ L_U      (E27)

for (E22) and (E23). Applying (E26) to (E27) yields

M_O ≤ k ≤ M_U      (E28)

for (E22) and (E23), where M_O = L_O + N_O and M_U = L_U + N_U.
From (E25), (E26) and (E28), we have

g_{k,l} = 0      (E29)

for (E23) when l ∉ [H_O, H_U] or k ∉ [M_O, M_U].
Simultaneously, since supp(ψ) ⊆ [Xmin, Ymax], we can generate the matrices Ψ̃ = [b_{n,l}] as in (70) and Q̃ = [q_{k,n}] as in (71), with q_{k,n} = λ_{k−n}^J and b_{n,l} = ψ(n/2 − l), by respectively applying the limitations (E25), (E26) and (E28) to the subscripts l, n and k. In this case, Eq. (E29) implies that G in (E22) has the form

G = Q Ψ_B = [O, (Q̃ Ψ̃)^T, O]^T      (E30)

with Ψ̃ as in (70) and Q̃ as in (71).


Similarly, we have

Q Φ_B = [O, (Q̃ Φ̃)^T, O]^T      (E31)

with Φ̃ as in (70).

On the other hand, since f_s(x) = P_W(x) + P_V(x), it follows from (64), (65) and (66) that

supp(f_s) = [(Xmin + Kmin)/2^{J−1}, (Ymax + Kmax)/2^{J−1}],      (E32)

which implies that f_s(n/2^J) = 0 for

n < 2(Xmin + Kmin) or n > 2(Ymax + Kmax).      (E33)

Since Amin ≤ 0 and Bmax ≥ 0 as in (62), Eq. (E33) implies that f_s(n/2^J) = 0 for

n < N_O = 2(Xmin + Kmin − Bmax) or n > N_U = 2(Ymax + Kmax − Amin).      (E34)

Let Q f⃗_s = [⋯, a_{k−1}, a_k, a_{k+1}, ⋯]^T. Then, from (18) and (19), we have

a_k = Σ_{n=−∞}^{+∞} λ_{k−n}^J f_s(n/2^J).      (E35)

By (E27) and (E34), we have

a_k = 0 for k < M_O or k > M_U.      (E36)

Eqs. (E34) and (E36) imply

Q f⃗_s = [O, (Q̃ f⃗_I)^T, O]^T,      (E37)

with f⃗_I as in (73).
Based on (21) and (75), inserting (E30) and (E37) into (E4) yields the first identity of (74). Similarly, from (E31) and (E37), we obtain the second identity of (74). Since every step above can be reversed, (E4) holds if and only if (74) holds. This in turn implies that (74) holds if and only if (43) holds. □
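The index bookkeeping in (E24)–(E28) is the familiar fact that the support of a product of banded (finitely supported) factors lies inside the sum of the individual bands. A quick check with illustrative sequences:

```python
# If q[k-n] vanishes outside L_O <= k-n <= L_U and the other factor vanishes
# outside N_O <= n <= N_U, the sum over n vanishes outside
# M_O = L_O + N_O <= k <= M_U = L_U + N_U -- the pattern behind (E26)-(E28).

def support(seq):
    idx = [i for i, v in seq.items() if v != 0]
    return (min(idx), max(idx))

q = {-1: 0.5, 0: 1.0, 2: -0.5}     # illustrative taps: L_O = -1, L_U = 2
g = {3: 1.0, 4: 2.0, 6: 0.25}      # illustrative factor: N_O = 3, N_U = 6

prod = {}
for n, gv in g.items():
    for d, qv in q.items():        # d plays the role of k - n
        prod[n + d] = prod.get(n + d, 0.0) + qv * gv

LO, LU = support(q)
NO, NU = support(g)
KO, KU = support(prod)
assert KO >= LO + NO and KU <= LU + NU
```

The containment can be strict when cancellation occurs; the proof only needs the one-sided bounds, which is why (E29) states vanishing outside [M_O, M_U] rather than an exact support.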

Appendix F. Proof of Theorem 5

Proof. As a first step, we show that for any given f⃗_I as in (73), there exist unique inverse transformations (Ψ̃^T Λ̃ Ψ̃)^{−1} and (Φ̃^T Λ̃ Φ̃)^{−1} for which (76) holds.
From (65), (72) and (73), the vectors b_I and f⃗_I, as subsequences of b_o and f⃗_s in (43), contain all of their nonzero components. Hence, there exists a bijection between b_I and b_o. Similarly, a bijection exists between f⃗_I and f⃗_s. Simultaneously, note that (74) holds if and only if (43) does. This implies that every b_o and f⃗_s for which (43) holds correspond to unique b_I and f⃗_I for which (74) holds.
On the other hand, for any given f⃗_s, there always exists a unique b_o that allows (43) to hold. Thus there exists a unique b_I for which (74) holds, given any f⃗_I. This implies that Ψ̃^T Λ̃ Ψ̃ has a unique inverse (Ψ̃^T Λ̃ Ψ̃)^{−1} in (74) and (76) above. In the parallel case, Φ̃^T Λ̃ Φ̃ also has a unique inverse (Φ̃^T Λ̃ Φ̃)^{−1} in (74) and (76).
As the second step, we show that Ψ̃^T Λ̃ Ψ̃ and Φ̃^T Λ̃ Φ̃ are nonsingular, so that their inverse matrices exist.
Assume that Ψ̃^T Λ̃ Ψ̃ is singular. Then there must exist a nonzero row vector d such that

d · Ψ̃^T Λ̃ Ψ̃ = 0.      (F1)

Multiplying (74) by d and inserting (F1) yields

d · Ψ̃^T Λ̃ f⃗_I = d · Ψ̃^T Λ̃ Ψ̃ b_I = 0.      (F2)

Consider the matrix

Υ = (Ψ̃^T Λ̃ Ψ̃)^{−1} + Δ      (F3)

with Δ = [d^T, ⋯, d^T]^T. Multiplying (F3) by Ψ̃^T Λ̃ f⃗_I yields

Υ · Ψ̃^T Λ̃ f⃗_I = ((Ψ̃^T Λ̃ Ψ̃)^{−1} + Δ) Ψ̃^T Λ̃ f⃗_I = (Ψ̃^T Λ̃ Ψ̃)^{−1} Ψ̃^T Λ̃ f⃗_I + Δ · Ψ̃^T Λ̃ f⃗_I.      (F4)

Applying (F2) to (F4) yields

Υ · Ψ̃^T Λ̃ f⃗_I = (Ψ̃^T Λ̃ Ψ̃)^{−1} Ψ̃^T Λ̃ f⃗_I = b_I.      (F5)

In the case that Ψ̃^T Λ̃ Ψ̃ is singular, Eq. (F5) implies that both (Ψ̃^T Λ̃ Ψ̃)^{−1} and Υ are inverses of Ψ̃^T Λ̃ Ψ̃. This contradicts the fact that (Ψ̃^T Λ̃ Ψ̃)^{−1} is unique. Thus, Ψ̃^T Λ̃ Ψ̃ must be nonsingular. Similarly, Φ̃^T Λ̃ Φ̃ is nonsingular as well.
As the last step, we show that (77) holds.
From (72), b_ap and c_ap in b_I and c_I correspond to the coefficients {b_n^0}_{n∈{Kmin,⋯,Kmax}} and {c_n^0}_{n∈{Kmin,⋯,Kmax}} in (60). Thus it follows from (65) that (77) holds for b_ap and c_ap. □
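Under the reading Λ̃ = Q̃^T Q̃ (an assumption of this sketch, suggested by (21) and (75)), Ψ̃^T Λ̃ Ψ̃ is the Gram matrix of the columns of Q̃Ψ̃, so its nonsingularity reduces to linear independence of those columns. A small numeric illustration with synthetic matrices:

```python
# Synthetic illustration: with Lambda~ = Q~^T Q~ (assumed factorisation),
# Psi~^T Lambda~ Psi~ = (Q~ Psi~)^T (Q~ Psi~) is a Gram matrix, hence
# nonsingular exactly when Q~ Psi~ has linearly independent columns.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

Q = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.0, 0.5, 1.0]]       # illustrative invertible Q~
Psi = [[1.0, 0.0],
       [-1.0, 1.0],
       [0.0, -1.0]]         # illustrative Psi~ with independent columns

QPsi = matmul(Q, Psi)
M = matmul(transpose(QPsi), QPsi)   # = Psi~^T (Q~^T Q~) Psi~
assert abs(det2(M)) > 1e-12         # nonsingular, so a unique inverse exists
```

The names Q, Psi here are stand-ins, not the paper's actual matrices; the point is only the Gram-matrix mechanism behind the nonsingularity argument.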


Appendix G. Proof of Theorem 6

Proof. As the first step, we show that

Ψ̃^T Λ̃ Φ̃ = O.      (G1)

From (E30) and (E31), we have

Ψ̃^T Λ̃ Φ̃ = (Q Ψ_B)^T Q Φ_B = Ψ_B^T Φ_B.      (G2)

On the other hand, it follows from (24) and (E3) that

ψ⃗_k^T φ⃗_l = ∫_{−∞}^{+∞} ψ_{J−1,k}(x) φ_{J−1,l}(x) dx = ⟨ψ_{J−1,k}(x), φ_{J−1,l}(x)⟩_{L²}.      (G3)

Since ψ_{J−1,k}(x) ∈ W_{J−1}, φ_{J−1,l}(x) ∈ V_{J−1} and W_{J−1} ⊕ V_{J−1} = V_J, Eq. (G3) implies that

ψ⃗_k^T φ⃗_l = 0.      (G4)

From (G4), Eqs. (E1) and (E2) imply

Ψ_B^T Φ_B = O.      (G5)

Inserting (G5) into (G2) yields (G1).


As the second step, we show that the rows of B are linearly independent.
Assume that the rows of B are dependent. Then there must exist a nonzero vector c_u = [c_1, c_2] such that

c_u B = [c_1, c_2] [Ψ̃^T Λ̃ ; Φ̃^T Λ̃] = c_1 Ψ̃^T Λ̃ + c_2 Φ̃^T Λ̃ = O.      (G6)

Eq. (G6) implies

c_1 Ψ̃^T Λ̃ = −c_2 Φ̃^T Λ̃.      (G7)

Now we show that

c_1 ≠ O      (G8)

in (G7).
Assume c_1 = O in (G7). Then we have

c_2 Φ̃^T Λ̃ = O.      (G9)

On the other hand, since c_u = [c_1, c_2] ≠ O, the assumption c_1 = O implies c_2 ≠ O in (G9). In this case, multiplying (G9) by Φ̃ yields c_2 Φ̃^T Λ̃ Φ̃ = O, so that Φ̃^T Λ̃ Φ̃ is singular. This contradicts the fact that the matrix Φ̃^T Λ̃ Φ̃ is nonsingular (as shown in (76)). Hence, (G6) and (G9) imply (G8).
Next, we show that (G6) cannot hold when c_u ≠ O.
Multiplying (G7) by Ψ̃ and inserting (G1) yields

c_1 Ψ̃^T Λ̃ Ψ̃ = −c_2 Φ̃^T Λ̃ Ψ̃ = O.      (G10)

Since (G8) holds, Eqs. (G8) and (G10) imply that Ψ̃^T Λ̃ Ψ̃ is singular. This contradicts (76), which states that Ψ̃^T Λ̃ Ψ̃ is invertible.
Hence, the rows of B are independent, which implies that B has full row rank. □
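The rank argument can be mimicked numerically: stack two mutually orthogonal nonzero row blocks and check that the stacked matrix has full row rank. A toy sketch (synthetic rows, not the paper's B):

```python
# Full row rank of a stacked matrix whose blocks are "cross-orthogonal",
# mirroring the Appendix G argument for B = [Psi~^T Lambda~ ; Phi~^T Lambda~].

def rank(M, tol=1e-10):
    """Row-reduction rank of a small dense matrix (lists of lists)."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

top = [[1.0, 1.0, 0.0, 0.0]]      # stand-in for a row block of Psi~^T Lambda~
bottom = [[1.0, -1.0, 0.0, 2.0]]  # stand-in for a row block of Phi~^T Lambda~

assert sum(a * b for a, b in zip(top[0], bottom[0])) == 0.0  # cross-orthogonal
B = top + bottom
assert rank(B) == len(B)                                     # full row rank
```

Full row rank of B is exactly what the singular value decomposition step in the main text requires: it guarantees that all singular values of B are strictly positive.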


Appendix H. List of Symbols

Z : integers
R : real numbers
L²(R) : square integrable functions on R
l²(Z) : finite-energy discrete signals f such that Σ_{k=−∞}^{+∞} |f(k)|² < +∞
V_j : scaling space
W_j : wavelet space
φ(x) : scaling function for V_0
ψ(x) : mother wavelet for W_0
θ(x) : orthogonal scaling function for W_0
S^φ(x) : interpolatory scaling function for V_0
S^ψ(x) : interpolatory wavelet for W_0
P_W(x) : projection of signals onto W_{J−1}
P_V(x) : projection of signals onto V_{J−1}
φ_d(x) : dual scaling function for φ(x)
ψ_d(x) : dual wavelet for ψ(x)
S_d^φ(x) : dual scaling function for S^φ(x)
S_d^ψ(x) : dual wavelet for S^ψ(x)
f̂(w) : Fourier transform of f(x)
|•⟩ : Dirac state vector
x⃗ : column vector
f⃗_ap : column vector defined in (15)
f⃗_s, g⃗_s : column vectors defined in (18)
f⃗_ob : column vector defined in (31)
ψ⃗_{iσ}, φ⃗_{iσ} : vectors defined in (68)
λ⃗_n : vector defined in (69)
f⃗_I : vector defined in (73)
‖·‖ : norm defined in (25)
e(•,•) : approximation error defined in (28)
b_I, c_I, b_ap, c_ap : column vectors defined in (72)
ψ⃗_K^o(x), φ⃗_K^o(x) : function vectors defined in (77)
{λ_k}_{k∈Z}, {ξ_k}_{k∈Z} : coefficients defined in (17)
{λ_k^J}_{k∈Z} : coefficients defined in (20)
α_n^φ, α_n^ψ, β_n^φ, β_n^ψ : coefficients defined in (57)
A_α^φ, B_α^φ, A_α^ψ, B_α^ψ : parameters defined in (58)
A_β^φ, B_β^φ, A_β^ψ, B_β^ψ : parameters defined in (58)
Amin, Bmax : parameters defined in (62)
L_O, L_U : parameters defined in (63)
Kmin, Kmax : parameters defined in (64)
Xmin, Ymax : parameters defined in (66)
O : zero matrix
Ψ, Φ : matrices defined in (43)
Ψ̃, Φ̃ : matrices defined in (70)
A : matrix defined in (79)
B : matrix defined in (80)
M : matrix defined in (78)
Q : matrix defined in (19)
Λ : infinite matrix defined in (21)
Q̃ : matrix defined in (71)
Λ̃ : matrix defined in (75)
Λ_ψ : wavelet matrix defined in (76)
Λ_φ : scaling matrix defined in (76)
Σ : diagonal matrix defined in (82)
W_u, V_u : unitary matrices defined in (82)
⟨·,·⟩_{L²} : inner product for the space L²(R)
⟨·,·⟩ : inner product defined in (23)
D_C(•,•) : operation defined in (22)
