Forum for Interdisciplinary Mathematics
Mani Mehra
Wavelets Theory and Its Applications: A First Course
Forum for Interdisciplinary Mathematics
Editor-in-chief
P. V. Subrahmanyam, Indian Institute of Technology Madras, Chennai, India
Mani Mehra
Department of Mathematics
Indian Institute of Technology Delhi
New Delhi, Delhi, India
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Dedicated
With Love and Regards to
My parents
Preface
There are several books on wavelets available on the market. Some of them are classics: Ten Lectures on Wavelets by I. Daubechies; Wavelets and Filter Banks by G. Strang and T. Nguyen; An Introduction to Wavelets by C. K. Chui; and Wavelet Analysis: The Scalable Structure of Information by H. L. Resnikoff and R. O. Wells Jr. So a natural question to ask is: Why another book? My purpose is different in the following ways:
• A book is needed for students with particular needs: those who want to understand not only wavelet theory but also its applications, so as to gain a broader understanding. Therefore, it is essential to keep a balance between mathematical rigor and practical applications of the wavelet theory.
• I want to prepare the students for more advanced books and research articles in
the wavelet theory.
• I discovered that most of my students find the existing literature quite difficult, mainly because of the presupposed level of mathematical sophistication and the lack of motivation and graphical representation.
I have been teaching undergraduate and graduate courses on wavelets and their applications since 2008 and have delivered lectures at many seminars on various aspects of wavelets. As is typical of such offerings, the courses/seminars drew audiences with widely varying backgrounds and expectations. I always felt the difficulty of presenting the beauty, usefulness, and mathematical depth of wavelets to an audience that wanted ideas to be implemented. The underlying mathematical ideas are explained in a conversational way using remarks and figures (or graphical representations) rather than in the standard theorem–proof fashion. It is my pleasure to have this book closely linked to the wavelet MATLAB toolbox wherever applicable; the rest is provided in the form of MATLAB codes. Although I have done my level best to make this book simple, it would be outrageous to claim that I have been successful in this task. However, through much
trial and error, I came to the conclusion that a better understanding of wavelet applications can ultimately be achieved through the discrete theory of wavelets. This concept makes this book distinct from the existing books, and I hope that others will also find it useful.
This book consists of four parts. Parts I and II are designed for a one-semester course on wavelet theory and its applications for an advanced undergraduate or graduate course in science, engineering, and mathematics. A few chapters end with a set of straightforward problems designed to drive home the concepts just covered. Some applications could be linked to Parts III and IV depending on the audience. Parts III and IV consist of original research and are written in a more advanced style, drawing on the existing literature. This part of the book provides material for the typical first course: a short (10–20 h) introduction to wavelet-based numerical methods for differential equations for researchers who want to pursue research in this area. The different parts of this book discuss the following topics:
(i) Mathematical Foundation
(ii) Introduction to Wavelets
(iii) Wavelet-Based Numerical Methods for Differential Equations
(iv) Applications of Wavelets in Other Fields
Part I (Chaps. 1 and 2) gives a basic introduction to related mathematics that is
subsequently used in the text. Chapter 1 is devoted to the review of some basic
mathematical concepts. Wavelet theory is hard to appreciate without understanding the drawbacks of Fourier theory. Chapter 2 is designed for Fourier
analysis and time–frequency analysis, which is more than just a reference but less
than a “book within a book” on Fourier analysis. Naturally, there are a great many
books on Fourier analysis (e.g., Fourier Series and Boundary Value Problem by
Churchill and Brown) that cover the same material in detail.
Part II (Chaps. 3–5) gives a basic introduction to wavelets. Numerous graphics illustrate the concepts as they are introduced. The basic concepts needed for a clean and fair treatment of the wavelet theory are developed from the beginning without, however, going into such technical detail that this book becomes a mathematical course in itself and loses its stated purpose. Chapter 3 is
devoted to multi-resolution analysis and construction of various types of wavelets
on flat geometries and their properties. Chapter 4 discusses the wavelets on arbitrary
manifolds. Chapter 5 discusses wavelet transforms for both flat geometries and
arbitrary manifolds.
Part III (Chaps. 6–9) is all about differential equations, which describe how quantities change in time or space; they arise naturally in science and engineering, and indeed in almost every field of study where measurements can be taken. Since the
advent of digital computers in the mid-twentieth century, much effort has been
expended in analyzing and applying computational techniques for differential
equations. In fact, the topic has reached such a level of importance that under-
graduate students in mathematics, engineering, and physical sciences are exposed to
one or more courses in this area. Moreover, the field of numerical methods for
differential equations is fortunate to be blessed with several high-quality, compre-
hensive research-level monographs (e.g., Computational Partial Differential
Equations using MATLAB by Jichun Li and Yi-Tung Chen). Therefore, Chap. 6
discusses the traditional numerical methods available in the literature briefly.
Chapter 7 explains the wavelet-Galerkin method for partial differential equations
(PDEs). Chapter 8 explains the wavelet collocation method for PDEs. Section 9.1
explains a wavelet optimized method for PDEs. Chapter 9 discusses other
wavelet-based numerical methods.
Part IV (Chaps. 10 and 11) contains the applications of the wavelet theory. These
chapters aim at making the reader aware of the vast applications of the wavelet
theory across disciplines. Many books on wavelets have been written emphasizing
the applications of the wavelet theory (e.g., Wavelets and Filter Banks by Strang
and Nguyen).
However, this book treats two areas in detail:
• The field of differential equations, which has already been discussed in detail in Part III.
• The applications of wavelets to inverse problems, which are in their infancy; a lot more remains to be extracted, and very few researchers have taken up this application. Therefore, it is discussed in detail in Chap. 10 and could open many ways for researchers to tackle inverse problems.
Chapter 11 briefly discusses a few other important applications of wavelets.
First and foremost, I want to thank my Ph.D. advisor, Prof. B. V. Rathish Kumar,
and course advisor of my first course on wavelet at IIT Kanpur, Prof. P. C. Das, for
encouraging me in taking up the exciting topic of wavelet-based numerical methods
for my Ph.D. dissertation. I thank Prof. Nicholas Kevlahan of McMaster University
for his collaboration on the wavelets on arbitrary manifolds. I thank Kavita Goyal
for her help in producing some of the MATLAB graphics and discussions during
the early stage of this book. I also want to thank my student Kuldip Singh Patel for
producing some of the graphics in Chap. 10. Thanks to my students Ankita Shukla
and Aman Rani for their careful reading of the manuscript. I am indebted to referees
and editorial assistants whose suggestions tremendously improved this book.
Finally, I want to thank my parents, my brothers, my sister-in-law, and my two
beautiful nieces for their love and support. I also want to thank my husband Vivek
Kumar Aggarwal for his constant encouragement throughout the preparation of this
book. My special thanks go to my delightful children, Manvi and Vivaan, who will
someday read their names here and wonder how their mom actually did it by
balancing the academics and the responsibility of bringing up the kids.
Part I
Mathematical Foundation
Chapter 1
Preliminaries
Linear algebra is a branch of mathematics that studies vector spaces, also called
linear spaces. Linear algebra is commonly restricted to the case of finite-dimensional
vector spaces, while the peculiarities of the infinite-dimensional case are traditionally covered in functional analysis. We use the following notations:
• N = {0, 1, 2, . . .} is the set of all natural numbers.
• R is the set of all real numbers.
• Z = {. . . , −1, 0, 1, . . .} is the set of integers.
A vector space $X$ over a field $K$ is called a real vector space if $K = \mathbb{R}$ and a complex vector space if $K = \mathbb{C}$ (the field of complex numbers). The elements of a vector space shall be called vectors (which can be sequences of numbers, functions, etc.). For example, each element of the vector space $l^p$, $p \ge 1$, is a sequence $x = (x_i) = (x_1, x_2, \ldots)$ of numbers such that $\sum_{i=1}^{\infty} |x_i|^p < \infty$.
A basis $B$ of a vector space $X$ over a field $K$ is a linearly independent subset of $X$ that spans (or generates) $X$. In more detail, suppose that $B = \{v_1, \ldots, v_n\}$ is a subset of a vector space $X$ over a field $K$. Then $B$ is a basis if it satisfies the following conditions:
(i) If $c_1 v_1 + \cdots + c_n v_n = 0$, then necessarily $c_1 = \cdots = c_n = 0$ (i.e., $\{v_i : i = 1, 2, \ldots, n\}$ is a linearly independent set).
(ii) For every $x \in X$, $x = c_1 v_1 + \cdots + c_n v_n$. The numbers $c_i$ are called the coordinates of the vector $x$ with respect to the basis $B$ (i.e., $X = \mathrm{span}\{v_i : i = 1, 2, \ldots, n\}$).
There can be several choices of basis for a vector space, but the number of elements in the basis is always the same. The number of elements in a basis is called the dimension of the space. A vector space $X$ with a finite number of elements in its basis is said to be finite-dimensional (e.g., the real line $\mathbb{R}$). If $B$ contains infinitely many elements, the vector space is said to be infinite-dimensional (e.g., the spaces $l^p$, $p \ge 1$).
A normed space $X$ is a vector space with a norm (i.e., a way to gauge the size of elements of the vector space) defined on it. The norm $\|\cdot\| : X \to \mathbb{R}$ satisfies the following axioms:
(i) $\|x\| \ge 0$, and $\|x\| = 0$ iff $x = 0$.
(ii) $\|\alpha x\| = |\alpha| \|x\|$.
(iii) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality).
The norm defined in (1.1) will satisfy all the axioms mentioned above.
Example 1.2.2 The vector space $L^2(\mathbb{R})$... wait: The vector space $L^p(\mathbb{R})$, $1 \le p < \infty$, is the set of all measurable functions such that $\int_{\mathbb{R}} |f(x)|^p\,dx < \infty$. The norm is defined by
$$\|f\|_p = \left(\int_{\mathbb{R}} |f(x)|^p\,dx\right)^{1/p}. \quad (1.2)$$
This norm (defined in (1.2)) will also satisfy all the axioms mentioned above.
Among the main elementary properties of the $L^p(\mathbb{R})$ norm is the Minkowski inequality
$$\|f + g\|_p \le \|f\|_p + \|g\|_p. \quad (1.3)$$
An inner product space $X$ is a vector space with an inner product defined on it. An inner product is a function $\langle \cdot, \cdot \rangle : X \times X \to K$ (where $K = \mathbb{R}$ or $\mathbb{C}$) satisfying the following axioms:
(i) $\langle x, x\rangle \ge 0$, and $\langle x, x\rangle = 0$ iff $x = 0$.
(ii) $\langle \alpha x, y\rangle = \alpha \langle x, y\rangle$.
(iii) $\langle x, y\rangle = \overline{\langle y, x\rangle}$.
(iv) $\langle x + y, z\rangle = \langle x, z\rangle + \langle y, z\rangle$.
An inner product on $X$ defines a norm on $X$ given by $\|x\| = \sqrt{\langle x, x\rangle}$. Moreover, the
norm induced by the inner product satisfies the important parallelogram law
$$\|x + y\|^2 + \|x - y\|^2 = 2\left(\|x\|^2 + \|y\|^2\right); \quad (1.5)$$
if a norm does not satisfy (1.5), it cannot be obtained from an inner product. Therefore, not all normed spaces are inner product spaces. The Cauchy–Schwarz inequality (already discussed for $L^2(\mathbb{R})$ in the previous section) states that for all vectors $x$ and $y$ of an inner product space,
$$|\langle x, y\rangle| \le \|x\| \|y\|.$$
A complete inner product space is called a Hilbert space. Given a basis $\{v_k : k \in I\}$ (a sequence of vectors) or $\{v_k(x) : k \in I\}$ (a sequence of functions) in a Hilbert space $\mathcal{H}$, every vector or function $f \in \mathcal{H}$ can be uniquely represented as
$$f = \sum_{k \in I} c_k(f)\, v_k, \qquad f(x) = \sum_{k \in I} c_k(f)\, v_k(x), \quad (1.6)$$
where the index set $I$ may be finite or countably infinite. If the elements of the Hilbert space are functions, then a frame is also a set $\{v_k(x) : k \in I\}$ in $\mathcal{H}$ which allows every $f$ to be written as in (1.6), but it may be linearly dependent. Thus, the coefficients $c_k(f)$ are not necessarily unique, and one may get a redundant representation. More precisely, a family $\{v_k(x) : k \in I\}$ in a Hilbert space $\mathcal{H}$ is a frame for $\mathcal{H}$ if $\exists$ two constants $m, M$ satisfying $0 < m \le M < \infty$ such that
$$m\|f\|^2 \le \sum_{k \in I} |\langle f, v_k\rangle|^2 \le M\|f\|^2, \quad \forall f \in \mathcal{H} \text{ (frame inequality)}. \quad (1.7)$$
If a frame is a linearly independent set for $\mathcal{H}$, then the frame gives a Riesz basis for $\mathcal{H}$. Therefore, a Riesz basis is a basis which satisfies (1.6) and the frame inequality (1.7). In the case of a Riesz basis, the dual frame will also be a Riesz basis and biorthogonal, i.e., $\langle v_j, \tilde{v}_k \rangle = \delta_{j,k}$, where $\{\tilde{v}_k\}$ denotes the dual Riesz basis.
Moreover, bases are optimal for fast data processing, whereas the redundancy inherent in frames (overcomplete frames) increases flexibility when expressing a signal or approximating a function. It also adds robustness to noise in the data, but usually at the price of a higher computational cost.
Remark 1.4.1 In the case of an orthonormal basis, the frame inequality is satisfied with $m = M = 1$. Therefore, an orthonormal basis is also a Riesz basis. In fact, an orthonormal basis is biorthogonal to itself.
Example 1.4.1 Given that $\{v_k(x) : k \in \mathbb{N}\}$ is an orthonormal basis for $\mathcal{H}$, prove that $\{g_k(x) = \frac{1}{k} v_k(x) : k \in \mathbb{N}\}$ does not form a Riesz basis for $\mathcal{H}$.
Solution: The set $\{g_k(x) : k \in \mathbb{N}\}$ is linearly independent but does not satisfy (1.7), since
$$\sum_{k=1}^{\infty} |\langle f, g_k\rangle|^2 = \sum_{k=1}^{\infty} \left|\frac{1}{k}\langle f, v_k\rangle\right|^2 = \sum_{k=1}^{\infty} \frac{1}{k^2}\, |\langle f, v_k\rangle|^2.$$
It is clear that no $m > 0$ satisfies (1.7) (take $f = v_k$ for large $k$); hence $\{g_k(x) : k \in \mathbb{N}\}$ does not form a Riesz basis for $\mathcal{H}$.
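The failure of the lower frame bound in Example 1.4.1 can also be seen numerically by truncating to finitely many vectors. The following NumPy sketch is purely illustrative (the book's own codes are in MATLAB): it uses the standard unit vectors of $\mathbb{R}^n$ as a stand-in for $\{v_k\}$.

```python
import numpy as np

def frame_ratio(f, family):
    """Return sum_k |<f, v_k>|^2 / ||f||^2 for a finite family of vectors."""
    s = sum(abs(np.vdot(v, f)) ** 2 for v in family)
    return s / np.vdot(f, f).real

n = 6
e = np.eye(n)                            # orthonormal basis of R^n (stand-in for {v_k})
g = [e[k] / (k + 1) for k in range(n)]   # g_k = v_k / k with k = 1, ..., n

# For an orthonormal basis the ratio is exactly 1 for every f (m = M = 1).
f = np.arange(1.0, n + 1)
assert abs(frame_ratio(f, list(e)) - 1.0) < 1e-12

# For the scaled family, choosing f = v_k gives the ratio 1/k^2, which tends
# to 0 as k grows, so no positive lower frame bound m can exist.
ratios = [frame_ratio(e[k], g) for k in range(n)]
print(ratios)
```

The printed ratios decay like $1/k^2$, mirroring the computation in the solution above.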
Example 1.4.2 Given {vk (x) : k ∈ N} is an orthonormal basis for H, prove that the
set {v1 (x), v1 (x), v2 (x), v2 (x), v3 (x), v3 (x), . . .} is a frame but does not form Riesz
basis for H.
1.5 Projection
If $X = Y \oplus Z$, then every $x \in X$ can be written uniquely as $x = y + z$ with $y \in Y$ and $z \in Z$.
In calculus, a function series is a series whose summands are not just real or complex numbers but functions. Examples of function series include power series, Laurent series, Fourier series (explained in detail in Chap. 2), etc. As for sequences of functions, and unlike for series of numbers, there exist many types of convergence for a function series. We mention a few of them as follows:
(i) The infinite series $\sum_{n=1}^{\infty} f_n(x)$ converges to $f(x)$ in $a < x < b$ in the mean-square sense (or $L^2$ sense, i.e., in the $L^2(a, b)$ norm) if
$$\int_a^b |f(x) - s_N(x)|^2\,dx \to 0 \quad \text{as } N \to \infty, \quad (1.10)$$
where $s_N(x) = \sum_{n=1}^{N} f_n(x)$.
∞
(ii) The infinite series f n (x) converges to f (x) pointwise in (a, b) if it converges
n=1
to f (x) for each x ∈ (a, b), i.e.,
Example 1.6.1 Discuss all three notions of convergence mentioned above for the series with terms $f_n(x) = (1 - x)x^{n-1}$, $0 < x < 1$.
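For this example the partial sums telescope: $s_N(x) = \sum_{n=1}^{N}(1-x)x^{n-1} = 1 - x^N$, so the series converges to $f(x) = 1$ pointwise and in the $L^2$ sense on $(0,1)$, but not uniformly, since $\sup_{0<x<1} x^N = 1$ for every $N$. A NumPy sketch of this check (illustrative only; the book's codes are in MATLAB, and the grid is my own choice):

```python
import numpy as np

x = np.linspace(1e-6, 1 - 1e-6, 10_000)   # interior grid of (0, 1)

def s(N, x):
    """Partial sum of sum_n (1 - x) x^(n-1); it telescopes to 1 - x^N."""
    return 1.0 - x ** N

f = np.ones_like(x)                       # the pointwise limit on (0, 1)

l2_errs, sup_errs = [], []
for N in (10, 100, 1000):
    l2_errs.append(np.sqrt(np.mean((f - s(N, x)) ** 2)))  # ~ 1/sqrt(2N+1): -> 0
    sup_errs.append(np.max(np.abs(f - s(N, x))))          # stays near 1: not uniform
print(l2_errs, sup_errs)
```

The $L^2$ errors shrink with $N$ while the sup errors do not, exactly as the three definitions predict.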
Chapter 2
Fourier Analysis
Fourier analysis contains two components: the Fourier series and the Fourier transform. The Fourier series is named in honor of Joseph Fourier, who made important contributions to mathematics. Fourier introduced the series for the purpose of solving the heat equation in a metal plate, publishing his initial results in his 1807 memoir to the Institut de France. Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide variety of mathematical and physical problems. The Fourier series has many applications in different branches of science and engineering. The Fourier series is more universal than the Taylor series, because many discontinuous periodic functions of practical interest can be developed in the form of a Fourier series.
However, many practical problems also involve nonperiodic functions. In that case, the Fourier series is converted to the Fourier integral, and the complex form of the Fourier integral is called the Fourier transform.
The space $L^2(0, 2l)$ is a Hilbert space with the inner product
$$\langle f, g\rangle = \frac{1}{2l}\int_0^{2l} f(x)\,\overline{g(x)}\,dx.$$
It turns out that the family of functions $\{e^{\frac{in\pi x}{l}}\}_{n \in \mathbb{Z}}$ is an orthonormal basis of the space $L^2(0, 2l)$, and hence any $f(x) \in L^2(0, 2l)$ has the following Fourier series representation:
$$f(x) = a_0 + \sum_{n=1}^{\infty}\left(a_n \cos\frac{n\pi x}{l} + b_n \sin\frac{n\pi x}{l}\right), \quad (2.1)$$
where the constants $a_0$, $a_n$, and $b_n$ are called the Fourier coefficients of $f(x)$, defined by
$$a_0 = \frac{1}{2l}\int_0^{2l} f(x)\,dx, \quad a_n = \frac{1}{l}\int_0^{2l} f(x)\cos\frac{n\pi x}{l}\,dx, \quad b_n = \frac{1}{l}\int_0^{2l} f(x)\sin\frac{n\pi x}{l}\,dx. \quad (2.2)$$
The complex form of (2.1) is
$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{\frac{in\pi x}{l}}, \quad (2.3)$$
where the constants $c_n$ are called the Fourier coefficients of $f(x)$, defined by
$$c_n = \frac{1}{2l}\int_0^{2l} f(x)\, e^{-\frac{in\pi x}{l}}\,dx. \quad (2.4)$$
Fig. 2.1 The function f(x) of Example 2.1.1
where $s_N(x) = \sum_{n=-N}^{N} c_n e^{\frac{in\pi x}{l}}$ is the $N$th partial sum of the Fourier series. We observe from the series (2.1) that $f(x)$ (with $l = \pi$, a $2\pi$-periodic function) is decomposed into a sum of infinitely many mutually orthogonal components $w_n(x) = e^{inx}$, $n \in \mathbb{Z}$. This orthonormal basis is generated by integer dilations ($w_n(x) = w(nx)$, $\forall n \in \mathbb{Z}$) of the single function $w(x) = e^{ix}$. It means that $e^{ix}$ (a sinusoidal wave) is the only function required to generate the space $L^2(0, 2\pi)$. This sinusoidal wave has high frequency for large $|n|$ and low frequency for small $|n|$. So, every function in the space $L^2(0, 2\pi)$ is composed of waves of various frequencies. Moreover, the Fourier series defined in (2.1) satisfies Parseval's identity,
$$\int_0^{2l} |f(x)|^2\,dx = 2l \sum_{n=-\infty}^{\infty} |c_n|^2. \quad (2.5)$$
For the function of Example 2.1.1 shown in Fig. 2.1, the Fourier coefficients computed using (2.4) are given by
$$c_n = \frac{e^{-2i\pi n} + 2i\pi n\, e^{-i\pi n} - 1}{4\pi^2 n^2}.$$
The MATLAB function fourier_approximation.m computes $s_N(x)$ for different $N$ using the command fourier_approximation. The partial sums $s_N(x)$ for $N = 64$, $128$, and $256$ are plotted in Fig. 2.2, and the corresponding errors in the $L^2$ norm are $1.26$, $8.91 \times 10^{-1}$, and $6.19 \times 10^{-1}$. It is an interesting fact that the discontinuity of the function in Example 2.1.1 is completely resolved by the series (2.1), even though the individual terms of the series are continuous. However, with only a finite number of terms, this is not the case. Moreover, we observe
Fig. 2.2 The partial sums s_N(x) of the Fourier series of Example 2.1.1
from Fig. 2.2 that the approximation error is not confined to the region near the discontinuity but spreads throughout the domain. This is called the Gibbs phenomenon. The reason for the poor approximation of discontinuous functions lies in the nature of the functions $e^{inx}$: they are all supported on the whole real axis and differ only with respect to the frequency $n$. Now, we state results for the convergence of the Fourier series in the different modes of convergence discussed in Sect. 1.6 (see [1] for details).
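The book's fourier_approximation.m is not reproduced here, and the exact function of Example 2.1.1 is defined only in the book. The following NumPy sketch shows the same Gibbs behavior for a simple stand-in, the 1-periodic sawtooth $f(x) = x - \tfrac12$ on $(0,1)$, whose complex Fourier coefficients (with $2l = 1$) are $c_n = \frac{i}{2\pi n}$, $c_0 = 0$:

```python
import numpy as np

def sawtooth(x):
    """1-periodic sawtooth: f(x) = x - 1/2 on (0, 1)."""
    return (x % 1.0) - 0.5

def partial_sum(x, N):
    """s_N(x) = sum_{|n| <= N} c_n e^{2 pi i n x} with c_n = i/(2 pi n), c_0 = 0."""
    n = np.arange(1, N + 1)
    cn = 1j / (2 * np.pi * n)
    # c_{-n} = conj(c_n) since f is real, so pairs collapse to 2*Re(c_n e^{2 pi i n x}).
    return 2 * np.real(np.exp(2j * np.pi * np.outer(x, n)) @ cn)

x = np.linspace(0.002, 0.998, 4001)
f = sawtooth(x)

overshoots, l2_errs = [], []
for N in (8, 32, 128):
    sN = partial_sum(x, N)
    overshoots.append(np.max(sN))               # max of f itself is only ~0.498
    l2_errs.append(np.sqrt(np.mean((f - sN) ** 2)))
print(overshoots, l2_errs)
# The L^2 error decays with N, but the overshoot near the jump persists:
# the Gibbs phenomenon.
```

The overshoot sits just left of the jump at $x = 1$ and does not shrink as $N$ grows, while the mean-square error steadily decreases, matching the two convergence statements above.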
Theorem 2.1.1 If $f(x)$ and $f'(x)$ are piecewise continuous functions in $(0, 2l)$, then the Fourier series converges at every point $x$. Moreover,
$$\sum_{n=-\infty}^{\infty} c_n e^{\frac{in\pi x}{l}} = \frac{1}{2}\left(f_{ext}(x+) + f_{ext}(x-)\right) \quad \forall\, -\infty < x < \infty,$$
where $f_{ext}(x)$, $-\infty < x < \infty$, is the periodic extension of $f(x)$ defined in $(0, 2l)$.
Theorem 2.1.2 If $f(x) \in L^2(0, 2l)$, then the Fourier series converges to $f(x)$, $x \in (0, 2l)$, in the $L^2$ sense.
Theorem 2.1.3 If $f(x)$ and $f'(x)$ are continuous in $[0, 2l]$ and the value of the Fourier series matches $f(x)$ at the endpoints ($0$ and $2l$), then the Fourier series converges to $f(x)$, $x \in [0, 2l]$, uniformly.
$$\widehat{f'}(\omega) = i\omega\, \hat{f}(\omega).$$
(3) $\hat{f}(\omega) \to 0$ as $\omega \to -\infty$ or $\infty$.
It should be noted that $f \in L^1(\mathbb{R})$ does not imply $\hat{f} \in L^1(\mathbb{R})$, which can be illustrated by the following example:
$$f(x) = \begin{cases} e^{-x} & \text{if } x \ge 0,\\ 0 & \text{if } x < 0.\end{cases}$$
The above function $f(x)$ is in the space $L^1(\mathbb{R})$, but its Fourier transform $\hat{f}(\omega) = (1 + i\omega)^{-1}$ is not in $L^1(\mathbb{R})$.
If $\hat{f} \in L^1(\mathbb{R})$ is the Fourier transform of $f \in L^1(\mathbb{R})$, then the inverse Fourier transform of $\hat{f}$ is defined as
$$(\mathcal{F}^{-1}\hat{f})(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega x}\, \hat{f}(\omega)\,d\omega. \quad (2.7)$$
Remark 2.2.1 The Fourier transform defined in (2.6), decomposes the functions
(signals) into the sum of a (potentially infinite) number of sine and cosine wave fre-
quency components. The frequency domain is a term used to describe the domain
for analysis of mathematical functions (signals) with respect to frequency. Therefore,
the Fourier transform of a function is also called the Fourier spectrum of that func-
tion. The inverse Fourier transform converts the Fourier transform from frequency
domain to the original function under some assumptions. If the original function is
a signal, then the physical representation of the domain will be time domain.
It should be noted that the two components of Fourier analysis, namely the Fourier
series and the Fourier transform are basically unrelated. We will later see that this is
not the case with wavelet analysis.
Example 2.2.1 Evaluate the Fourier transform of the Gaussian function $f(x) = e^{-ax^2}$, $a > 0$.
Solution:
$$\hat{f}(\omega) = \int_{-\infty}^{\infty} e^{-i\omega x}\, e^{-ax^2}\,dx \quad (2.8)$$
$$= \int_{-\infty}^{\infty} e^{-a\left(x + \frac{i\omega}{2a}\right)^2 - \frac{\omega^2}{4a}}\,dx,$$
$$= \frac{e^{-\frac{\omega^2}{4a}}}{\sqrt{a}}\int_{-\infty}^{\infty} e^{-x^2}\,dx,$$
$$= \sqrt{\frac{\pi}{a}}\; e^{-\frac{\omega^2}{4a}}.$$
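The closed form $\sqrt{\pi/a}\, e^{-\omega^2/4a}$ can be checked by discretizing the Fourier integral on a wide grid. A NumPy sketch (illustrative only; the grid and the value of $a$ are my own choices, and this is not among the book's MATLAB codes):

```python
import numpy as np

a = 0.7                                  # any a > 0
x = np.linspace(-30.0, 30.0, 120_001)    # wide grid: e^{-a x^2} is negligible at the ends
h = x[1] - x[0]
f = np.exp(-a * x ** 2)

def ft(omega):
    """Riemann-sum approximation of hat f(w) = int e^{-i w x} f(x) dx."""
    return np.sum(np.exp(-1j * omega * x) * f) * h

for w in (0.0, 1.0, 2.5):
    exact = np.sqrt(np.pi / a) * np.exp(-w ** 2 / (4 * a))
    assert abs(ft(w) - exact) < 1e-8     # matches the derivation above
print(ft(1.0).real, np.sqrt(np.pi / a) * np.exp(-1 / (4 * a)))
```

Because the integrand is smooth and rapidly decaying, the simple Riemann sum is extremely accurate here.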
Now we will see when the inverse Fourier transform defined in (2.7) gives back
the original function f (x). To prove this statement, we introduce convolution first.
Convolution is a mathematical operation on two functions f and g, which involves
multiplication of one function by a delayed or shifted version of another function,
integrating or averaging the product and repeating the process for different delays. It
has applications in the area of image and signal processing, electrical engineering,
probability, statistics, computer vision, and differential equations. More precisely,
the convolution of two functions is defined by
$$(f * g)(x) = \int_{-\infty}^{\infty} f(\tau)\, g(x - \tau)\,d\tau. \quad (2.9)$$
This implies that $\|f * g\|_1 \le \|f\|_1 \|g\|_1$, and hence $f * g \in L^1(\mathbb{R})$. A change of the variable of integration in (2.9) shows that $f * g = g * f$, $\forall f, g \in L^1(\mathbb{R})$, i.e., convolution is commutative. Now, since $f * g \in L^1(\mathbb{R})$, we can define $(f * g) * h$, and a simple manipulation shows that $(f * g) * h = f * (g * h)$, $\forall f, g, h \in L^1(\mathbb{R})$, i.e., convolution is associative. However, no function acts as an identity for the convolution, except in the case of a modified concept of function (a generalized function), which will be discussed in a short while. To prove that no identity function for the convolution exists, we need the following theorem, which states that the Fourier transform of a convolution is the pointwise product of the Fourier transforms.
Theorem 2.2.2 Let $f, g \in L^1(\mathbb{R})$; then $\widehat{(f * g)}(\omega) = \hat{f}(\omega)\,\hat{g}(\omega)$.
Proof:
$$\widehat{(f * g)}(\omega) = \int_{-\infty}^{\infty} e^{-i\omega x}\,(f * g)(x)\,dx$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i\omega x}\, f(\tau)\, g(x - \tau)\,d\tau\,dx$$
$$= \int_{-\infty}^{\infty} f(\tau)\left(\int_{-\infty}^{\infty} e^{-i\omega x}\, g(x - \tau)\,dx\right)d\tau \quad \text{(using Fubini's theorem, as the whole integrand is integrable)}$$
$$= \int_{-\infty}^{\infty} f(\tau)\left(\int_{-\infty}^{\infty} e^{-i\omega(\tau + y)}\, g(y)\,dy\right)d\tau \quad (x - \tau = y)$$
$$= \int_{-\infty}^{\infty} f(\tau)\, e^{-i\omega\tau}\,d\tau \int_{-\infty}^{\infty} e^{-i\omega y}\, g(y)\,dy$$
$$= \hat{f}(\omega)\,\hat{g}(\omega).$$
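Theorem 2.2.2 has a discrete analogue that is easy to check: the DFT of a zero-padded (hence linear rather than circular) convolution of two finite sequences equals the product of their DFTs. A NumPy sketch (illustrative; the sequences are random stand-ins, not from the book):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(50)
g = rng.standard_normal(70)

conv = np.convolve(f, g)       # full linear convolution, length 50 + 70 - 1 = 119
n = conv.size

# Zero-padding f and g to length n makes the circular DFT convolution linear:
lhs = np.fft.fft(conv)
rhs = np.fft.fft(f, n) * np.fft.fft(g, n)

assert np.allclose(lhs, rhs)   # discrete version of hat(f*g) = hat(f) hat(g)
print(np.max(np.abs(lhs - rhs)))
```

This identity is also the basis of FFT-accelerated convolution in practice.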
Suppose $e \in L^1(\mathbb{R})$ were an identity for the convolution, i.e.,
$$f * e = f, \quad \forall f \in L^1(\mathbb{R}), \quad (2.10)$$
which, by Theorem 2.2.2, means
$$\hat{e}(\omega) = 1; \quad (2.11)$$
this contradicts property (3) above ($\hat{e}(\omega) \to 0$ as $\omega \to \pm\infty$), so no such $e$ exists in $L^1(\mathbb{R})$. The Dirac delta function is defined by
$$\delta(x) = 0 \quad \forall x \ne 0 \quad (2.12)$$
and
$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1. \quad (2.13)$$
It was introduced by the theoretical physicist Paul Dirac. In the context of signal processing, it is often referred to as the unit impulse function. It is a continuous analogue of the Kronecker delta, which is usually defined on a finite domain and takes the values 0 and 1. The Dirac delta distribution (the $\delta$ distribution for $x_0 = 0$) can act as a convolution identity because
$$f * \delta = f, \quad \forall f \in L^1(\mathbb{R}), \quad (2.14)$$
since $\hat{\delta}(\omega) = 1$.
We have already seen that we cannot obtain an identity function in L1 (R) for
the convolution. However, we still wish to find an approximation to the function δ
in (2.14), i.e., an approximation of the convolution identity. Consider the Gaussian functions of the form $a\, e^{-\frac{(x-b)^2}{c^2}}$ for some real constants $a$, $b$, $c$. The parameter $a$ is the height of the curve's peak, $b$ is the position of the center of the peak, and $c$ controls the width of the peak. Now we take the particular Gaussian function with $c = 2\sqrt{\alpha}$, $a = \frac{1}{c\sqrt{\pi}}$, and $b = 0$:
$$e_\alpha(x) = \frac{1}{2\sqrt{\pi\alpha}}\, e^{-\frac{x^2}{4\alpha}}, \quad \alpha > 0, \quad (2.15)$$
using the result given in Example 2.2.1 (which yields $\hat{e}_\alpha(\omega) = e^{-\alpha\omega^2}$), it is clear that the function $e_\alpha$ satisfies (2.11) in the limit $\alpha \to 0^+$. The Gaussian function for different $\alpha$ (which governs the height and width of the peak) is shown in Fig. 2.3; one can observe that the width of the peak decreases and the height of the peak increases as $\alpha \to 0^+$. In Fig. 2.4, the Gaussian is plotted for $\alpha = 1/2^{15}$, which is almost the Dirac delta function.

Fig. 2.3 The Gaussian function $e_\alpha(x)$ for several values of $\alpha$ (the black line shows $\alpha = 1/2^5$)
Fig. 2.4 The Gaussian function $e_\alpha(x)$ for $\alpha = 1/2^{15}$

Now, we state the following theorem without proof and refer to [2] for the proof.
Theorem 2.2.4 Let $f \in L^1(\mathbb{R})$; then $\lim_{\alpha \to 0^+} (f * e_\alpha)(x) = f(x)$ at every point $x$ where $f$ is continuous.
A related inversion result holds as well: if in addition $\hat{f} \in L^1(\mathbb{R})$, then $f(x) = (\mathcal{F}^{-1}\hat{f})(x)$ at every point $x$ where $f$ is continuous (for proof one can refer to [2]).
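Theorem 2.2.4 can be observed numerically: discretizing $f * e_\alpha$ on a grid, the smoothed function approaches $f$ as $\alpha \to 0^+$. A NumPy sketch (the test function, grid, and quadrature are my own choices, not the book's MATLAB code):

```python
import numpy as np

def e_alpha(x, alpha):
    """The Gaussian approximate identity of (2.15)."""
    return np.exp(-x ** 2 / (4 * alpha)) / (2 * np.sqrt(np.pi * alpha))

x = np.linspace(-10.0, 10.0, 4001)
h = x[1] - x[0]
f = np.cos(x) * np.exp(-x ** 2 / 8)      # a smooth, rapidly decaying test function

errors = []
for alpha in (1.0, 0.1, 0.01):
    kernel = e_alpha(x, alpha)
    # Grid version of (f * e_alpha)(x); the symmetric kernel is centered at x = 0.
    smoothed = np.convolve(f, kernel, mode="same") * h
    errors.append(np.max(np.abs(smoothed - f)))
print(errors)                            # decreasing toward 0 as alpha -> 0+
assert errors[0] > errors[1] > errors[2]
```

The maximum deviation from $f$ shrinks with $\alpha$, in line with the theorem (the error for small $\alpha$ behaves roughly like $\alpha f''$ for smooth $f$).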
We have defined the Fourier transform of functions in the space $L^1(\mathbb{R})$, and in this space there is no guarantee that the inverse Fourier transform will exist, as already explained in Sect. 2.2. A tool is of no practical good if you cannot revert its effect, so we need to move to a space where at least the inverse Fourier transform exists. The space $L^2(\mathbb{R})$ (the set of square-integrable functions) is much more elegant for defining the Fourier transform, since for $f \in L^2(\mathbb{R})$ the Fourier transform $\hat{f}$ is again in $L^2(\mathbb{R})$; this can be formulated as the following theorem.
Theorem 2.2.6 The Fourier transform $\mathcal{F}$ is a one-to-one map of $L^2(\mathbb{R})$ onto itself; i.e., for every $g \in L^2(\mathbb{R})$ there corresponds one and only one $f \in L^2(\mathbb{R})$ such that $\hat{f} = g$, and
$$f(x) = (\mathcal{F}^{-1} g)(x).$$
Moreover,
$$\|f\|_2^2 = \frac{1}{2\pi}\|\hat{f}\|_2^2 \quad \text{(Parseval's identity)}. \quad (2.17)$$
The detailed theory of Fourier transform of functions in L2 (R) (discussed above in
short) is also known as the Plancherel theory.
Time–frequency analysis is a topic of signal analysis where we study the signal in both the time and frequency domains together; it has a couple of advantages over the Fourier transform, as follows.
• In order to study the signal in the frequency domain from its Fourier transform, discussed in Sect. 2.2, we require a complete description of the signal's behavior over all time (which may include indeterminate future behavior of the signal). The Fourier transform cannot be obtained from local observation of the signal.
• Moreover, if the signal is changed in a small neighborhood of some particular time, then the entire Fourier spectrum is affected.
• Another drawback of classical Fourier analysis is that signals are either assumed
infinite in time (case of Fourier transform) or periodic (case of Fourier series), while
many signals in practice are of short duration (i.e., not defined for infinite time),
and are not periodic. For example, traditional musical instruments do not produce
infinite duration sinusoids, but instead begin with an attack, then gradually decay.
This is poorly represented by Fourier analysis, which motivates time–frequency
analysis.
Time–frequency analysis is a generalization of Fourier analysis to signals whose frequency content (statistics) varies with time. Many signals of interest, such as speech, music, images, and medical signals, have changing frequency content, which again motivates time–frequency analysis.
The drawback of the Fourier transform in time–frequency analysis was first observed by D. Gabor in 1946. He used the concept of a window function to define another transform. A nontrivial function $w \in L^2(\mathbb{R})$ is called a window function if $xw(x)$ is also in $L^2(\mathbb{R})$; this means the function decays to zero rapidly. The center $t^*$ and radius $\Delta_w$ of the window function are defined by
$$t^* = \frac{1}{\|w\|_2^2}\int_{-\infty}^{\infty} x\,|w(x)|^2\,dx, \qquad \Delta_w = \frac{1}{\|w\|_2}\left(\int_{-\infty}^{\infty} (x - t^*)^2\,|w(x)|^2\,dx\right)^{1/2}; \quad (2.18)$$
therefore, the width of the window function is $2\Delta_w$.
Gabor used this Gaussian function to define the Gabor transform for $\alpha > 0$, $f \in L^2(\mathbb{R})$:
$$(G_{b,\alpha} f)(\omega) = \int_{-\infty}^{\infty} \left(e^{-i\omega x} f(x)\right) e_\alpha(x - b)\,dx. \quad (2.19)$$
In (2.19), the Gabor transform localizes the Fourier transform of $f$ around the point $x = b$. Moreover,
$$\int_{-\infty}^{\infty} e_\alpha(x - b)\,db = \int_{-\infty}^{\infty} e_\alpha(t)\,dt = 1. \quad (2.20)$$
It can easily be proved that the Gaussian function discussed in (2.15) is a window function. Since $e_\alpha$ is an even function, the center $t^*$ defined in (2.18) is 0, and hence the radius of the Gaussian window is given by
$$\Delta_{e_\alpha} = \frac{1}{\|e_\alpha\|_2}\left(\int_{-\infty}^{\infty} x^2\, e_\alpha^2(x)\,dx\right)^{1/2} = \sqrt{\alpha} \quad \text{(for proof see [2], p. 51)}.$$
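The values $t^* = 0$ and $\Delta_{e_\alpha} = \sqrt{\alpha}$ can be verified by discretizing (2.18). A NumPy sketch (the grid and quadrature are my own choices; the book's codes are in MATLAB):

```python
import numpy as np

def center_and_radius(w, x):
    """Discretization of (2.18) for a window function w sampled on the grid x."""
    h = x[1] - x[0]
    norm2 = np.sum(np.abs(w) ** 2) * h                      # ||w||_2^2
    t_star = np.sum(x * np.abs(w) ** 2) * h / norm2
    radius = np.sqrt(np.sum((x - t_star) ** 2 * np.abs(w) ** 2) * h / norm2)
    return t_star, radius

alpha = 0.3
x = np.linspace(-20.0, 20.0, 100_001)
w = np.exp(-x ** 2 / (4 * alpha)) / (2 * np.sqrt(np.pi * alpha))   # e_alpha of (2.15)

t_star, radius = center_and_radius(w, x)
print(t_star, radius)
assert abs(t_star) < 1e-9                     # even window: center 0
assert abs(radius - np.sqrt(alpha)) < 1e-6    # radius sqrt(alpha), as stated
```

The same helper can be applied to any candidate window (e.g., the functions in Exercise 10) to estimate its time–frequency localization.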
It means we are windowing the function $f$ by using the window function $w(x) = e^{i\omega x}\, e_\alpha(x - b)$. Therefore,
$$(G_{b,\alpha} f)(\omega) = \langle f, w\rangle = \frac{1}{2\pi}\langle \hat{f}, \hat{w}\rangle, \quad (2.22)$$
where $\hat{w}(\eta) = e^{-ib(\eta - \omega)}\, e^{-\alpha(\eta - \omega)^2} = e^{ib\omega}\, e^{-ib\eta}\, \sqrt{\frac{\pi}{\alpha}}\; e_{\frac{1}{4\alpha}}(\eta - \omega)$. Therefore,
$$(G_{b,\alpha} f)(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat{f}(\eta)\,\overline{\hat{w}(\eta)}\,d\eta = \frac{e^{-ib\omega}}{2\sqrt{\pi\alpha}}\int_{-\infty}^{\infty} \hat{f}(\eta)\, e^{ib\eta}\, e_{\frac{1}{4\alpha}}(\eta - \omega)\,d\eta. \quad (2.23)$$
Fig. 2.5 Time–frequency windows of the Gabor transform (time radius $\sqrt{\alpha}$, frequency radius $\frac{1}{2\sqrt{\alpha}}$)
For $w_1(x) = e_\alpha(x)$, the window Fourier transform (2.25) reduces to the Gabor transform (2.19). It means we are windowing the function $f$ using the window function $w(x) = e^{i\omega x}\, w_1(x - b)$. Therefore,
$$(W_b f)(\omega) = \langle f, w\rangle = \frac{1}{2\pi}\langle \hat{f}, \hat{w}\rangle = \langle \hat{f}, h\rangle, \quad (2.26)$$
The time window corresponding to the window function $w$ is supported in $[b + t^* - \Delta_{w_1},\, b + t^* + \Delta_{w_1}]$, where $t^*$ and $\Delta_{w_1}$ are the center and radius of the window function $w_1$. The frequency window corresponding to the window function $h$ is supported in $[\omega^* + \omega - \Delta_{\hat{w}_1},\, \omega^* + \omega + \Delta_{\hat{w}_1}]$, where $\omega^*$ and $\Delta_{\hat{w}_1}$ are the center and radius of the window function $\hat{w}_1$. The window area is $4\Delta_{w_1}\Delta_{\hat{w}_1}$, which is again constant.
Remark 2.3.1 The uncertainty principle tells us that we cannot find a window smaller than the Gaussian window.
Now the question is how to get $f$ back from its window Fourier transform, which can be done using the following theorem.
Theorem 2.3.2 Let $w_1 \in L^2(\mathbb{R})$ be such that $\|w_1\|_2 = 1$ and both $w_1$ and $\hat{w}_1$ are window functions. Then
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \langle f, w\rangle\, \overline{\langle g, w\rangle}\,db\,d\omega = 2\pi \langle f, g\rangle \quad \text{(note that } w = e^{i\omega x} w_1(x - b)\text{)},$$
is a window function.
Example 2.3.2 Draw the time–frequency window (like Fig. 2.5) and analyze the
function f (x) = sin(πx 2 ) using the window function w1 (t) (given in previous exam-
ples).
Exercises
1. Find the Fourier series of $f(x) = x$, $-\pi < x < \pi$, and discuss its convergence at $x = 0$ and $x = \pi$.
2. Let
$$f(x) = \begin{cases} -(1 + x) & \text{if } -1 < x < 0,\\ 1 - x & \text{if } 0 < x < 1.\end{cases}$$
For
$$s_N(x) = \frac{C_0}{2} + C_1\cos(x) + D_1\sin(x) + C_2\cos(2x) + D_2\sin(2x),$$
what choice of coefficients will minimize the $L^2(-\pi, \pi)$ error?
4. Find the Fourier transform of the square function defined by
$$f(x) = \begin{cases} 1 & \text{if } |x| < \frac12,\\ 0 & \text{if } |x| \ge \frac12.\end{cases}$$
The square function $f(x)$ is in $L^1(\mathbb{R})$. What about its Fourier transform $\hat{f}$: is $\hat{f} \in L^1(\mathbb{R})$ or $\hat{f} \notin L^1(\mathbb{R})$? Verify your answer.
5. Find the Fourier transform of the triangle function defined by
1 − |x| if |x| < 1,
f (x) =
0 if |x| ≥ 1.
6. Let g(x) = f (−x) (reflection of f relative to the origin), prove that F(F f ) =
2πg.
7. Find the Fourier transform of $\mathrm{sinc}\,\frac{x}{2}$ and $\mathrm{sinc}^2\frac{x}{2}$, where
$$\mathrm{sinc}(x) = \begin{cases} \frac{\sin x}{x} & \text{if } x \ne 0,\\ 1 & \text{if } x = 0\end{cases}$$
(use Exercises 4–6).
8. Let f ∈ L1 (R) then prove that fˆ is uniformly continuous on R.
9. Prove that if $\hat{f}(\omega)$ is differentiable and $\hat{f}(0) = \hat{f}'(0) = 0$, then
$$\int_{-\infty}^{\infty} f(x)\,dx = \int_{-\infty}^{\infty} x f(x)\,dx = 0.$$
10. State whether the functions $w_m(x) = \int_0^1 w_{m-1}(x - t)\,dt$, $m \ge 2$, and
$$w_1(x) = \begin{cases} 1 & 0 \le x \le 1,\\ 0 & \text{otherwise}\end{cases}$$
can be used as window functions in the short-time Fourier transform or not. Prove your statement.
11. Prove the following property for the Gabor transformation:
    • (G_{b,α}(a f + b g))(ω) = a(G_{b,α} f)(ω) + b(G_{b,α} g)(ω) (Linearity Property).
12. Let w₁ ∈ L²(ℝ) be so chosen that ‖w₁‖₂ = 1 and both w₁ and ŵ₁ are window functions, and let f ∈ L²(ℝ). Prove that at every point x where f is continuous,
        f(x) = (1/2π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} [e^{iωx} (W_b f)(ω)] w₁(x − b) dω db.
Part II
Introduction to Wavelets
Chapter 3
Wavelets on Flat Geometries
Multiresolution analysis (MRA) (introduced by S. Mallat [5] and Y. Meyer [3]) is the
heart of wavelet theory; it is a mathematical construction that characterizes wavelets
in a general way. The goal of MRA is to express an arbitrary function f ∈ L2 (X )
(X can be any manifold) on multiresolution approximation spaces. Multiresolution
analysis is explained for X = R with the help of multiresolution approximation
spaces as follows.
The goal is to decompose the whole function space (L2 (R)) into subspaces at
different scales. Multiresolution is explained for two subspaces V j and W j , which
are called scaling function space and wavelet space, respectively. First, we focus
on the spaces V_j, which are closed subspaces of L²(ℝ). The spaces V_j are increasing, meaning each V_j is contained in V_{j+1}, i.e.,

V_j ⊂ V_{j+1} for all j ∈ ℤ. (3.1)

© Springer Nature Singapore Pte Ltd. 2018
M. Mehra, Wavelets Theory and Its Applications, Forum for Interdisciplinary Mathematics, https://doi.org/10.1007/978-981-13-2595-3_3
Each subspace has information about the function at different scales. Moving in the
direction of increasing j will take us to L²(ℝ). The completeness of the sequence of subspaces is also required:

closure(∪_{j∈ℤ} V_j) = L²(ℝ), (3.2)

i.e., P_{V_j} f → f as j → ∞.
We now have an increasing and complete scale of spaces. With respect to multiresolution, this also means that V_{j+1} consists of all dilated functions in V_j (see Remark 3.1.1 for more information about the dilation factor).
Instead of dilating, we can shift the function, which is called the translation.
When we move in the direction of decreasing j, the smallest subspace should contain
only the zero function. Hence,
∩_{j∈ℤ} V_j = {0}. (3.3)
Now we collect the results from (3.1), (3.2), and (3.3) and write them in the form of the multiresolution approximation subspaces V_j (closed) which satisfy the following axioms:

(m1) V_j ⊂ V_{j+1} (the approximation subspaces V_j are nested),
(m2) closure(∪_{j∈ℤ} V_j) = L²(ℝ) (completeness),
(m3) there exists a scaling function φ ∈ V_0 such that {φ(x − k) | k ∈ ℤ} is an orthonormal basis of V_0,
(m4) f(x) ∈ V_j ⟺ f(2x) ∈ V_{j+1}.
A standard example of a scaling function is the Haar scaling function,

φ(x) = 1 if 0 ≤ x < 1,
       0 otherwise. (3.4)

Fig. 3.1 The function φ(x) and the function φ(x − 1) (translation of φ(x) by shift 1)

Fig. 3.2 The function φ(2x) (dilation of φ(x) by 2) and the function φ(2x − 1) (translation of φ(2x) by 1)
The φ(x) and φ(x − 1) corresponding to (3.4) are plotted in Fig. 3.1 and will
remain in the space V 0 according to (m3). The functions φ(2x) and φ(2x − 1)
(Fig. 3.2) will be in V 1 according to (m4). One can observe from Figs. 3.1 and 3.2
that
φ(x) = φ(2x) + φ(2x − 1), (3.5)
which can be generalized to any φ(x) (other than Haar scaling function) as follows:
the set {φ(x − k) | k ∈ ℤ} is an orthonormal basis for V_0 (m3); by applying (m4), we get {φ_k^j(x) = 2^{j/2} φ(2^j x − k) | k ∈ ℤ}, which is an orthonormal basis of the space V_j; in particular, {φ_k^1(x) = 2^{1/2} φ(2x − k) | k ∈ ℤ} is an orthonormal basis of the space V_1.
Now φ_0^0(x) = φ(x) ∈ V_0 ⊂ V_1 (m1); hence

φ(x) = √2 Σ_{k=−∞}^{∞} h_k φ(2x − k). (3.6)
Equation (3.6) is called the dilation equation (two-scale relation for scaling function)
and the coefficients h k are called the low-pass filter coefficients.
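For the Haar case the low-pass filter has just two nonzero coefficients, h₀ = h₁ = 1/√2, so that √2 (h₀ φ(2x) + h₁ φ(2x − 1)) = φ(2x) + φ(2x − 1), which is exactly (3.5). The small Python check below verifies the dilation equation pointwise on a grid; it is an illustrative sketch (the grid itself is an arbitrary choice, not code from the book).

```python
import math

# Haar case of the dilation equation (3.6): h_0 = h_1 = 1/sqrt(2), so
# phi(x) = sqrt(2)*(h_0*phi(2x) + h_1*phi(2x-1)) = phi(2x) + phi(2x-1).
# (Illustrative sketch; the test grid is an arbitrary choice.)

def phi(x):
    """Haar scaling function: 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

h = [1 / math.sqrt(2), 1 / math.sqrt(2)]     # low-pass filter coefficients

for x in [i * 0.01 for i in range(-100, 200)]:   # grid over [-1, 2)
    rhs = math.sqrt(2) * sum(h[k] * phi(2 * x - k) for k in range(2))
    assert abs(phi(x) - rhs) < 1e-12
print("dilation equation (3.6) holds for the Haar filters on the grid")
```

The identity is exact pointwise because the half-open intervals [0, 1), [0, 1/2), and [1/2, 1) partition consistently.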
V_j ⊥ W_j, (3.7)

and

V_{j+1} = V_j ⊕ W_j. (3.8)

Iterating (3.8) from a coarse level J₀ up to a fine level J gives

V_J = V_{J₀} ⊕ W_{J₀} ⊕ W_{J₀+1} ⊕ · · · ⊕ W_{J−1}. (3.9)

Therefore, any function in V_J can be analyzed at different scales using (3.9). Continuing the decomposition of (3.9) for J₀ → −∞ and J → ∞, we get

⊕_{j=−∞}^{∞} W_j = L²(ℝ). (3.10)
The information in moving from the space V_0 to the space V_1 is captured by the translations of the function ψ(x). Indeed, for a given MRA there always exists a function ψ_0^0(x) = ψ(x) ∈ W_0 (which is called the mother wavelet) such that {ψ_k^j(x) = 2^{j/2} ψ(2^j x − k) : k ∈ ℤ} is an orthonormal basis for W_j. Now, since ψ(x) ∈ W_0 ⊂ V_1,

ψ(x) = √2 Σ_{k=−∞}^{∞} g_k φ(2x − k). (3.11)
Equation (3.11) is called the wavelet equation (two-scale relation for the wavelet function) and the coefficients g_k are called the high-pass filter coefficients. In the case of the Haar scaling function, the information in moving from the space V_0 to the space V_1 is captured by the translations of the following function ψ(x), shown in Fig. 3.3:
ψ(x) =  1 if 0 ≤ x < 1/2,
       −1 if 1/2 ≤ x < 1, (3.12)
        0 otherwise.
It is easy to verify that ψ(x) = φ(2x) − φ(2x − 1) from Fig. 3.2. The ψ(x) (Haar
wavelet) and ψ(x − 1) are plotted in Fig. 3.4 and will remain in W 0 . The functions
ψ(2x) and ψ(2x − 1) are plotted in Fig. 3.5.
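The Haar high-pass filter in (3.11) is g₀ = 1/√2, g₁ = −1/√2, which gives ψ(x) = φ(2x) − φ(2x − 1). A companion Python sketch (again illustrative only; the grid is an arbitrary choice, not code from the book) verifies the wavelet equation pointwise:

```python
import math

# Haar case of the wavelet equation (3.11): the high-pass filter is
# g_0 = 1/sqrt(2), g_1 = -1/sqrt(2), giving psi(x) = phi(2x) - phi(2x-1).
# (Illustrative sketch; the grid is an arbitrary choice.)

def phi(x):
    """Haar scaling function: 1 on [0, 1), 0 elsewhere."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def psi(x):
    """Haar wavelet as defined in (3.12)."""
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0

g = [1 / math.sqrt(2), -1 / math.sqrt(2)]    # high-pass filter coefficients

for x in [i * 0.01 for i in range(-100, 200)]:
    rhs = math.sqrt(2) * sum(g[k] * phi(2 * x - k) for k in range(2))
    assert abs(psi(x) - rhs) < 1e-12
print("wavelet equation (3.11) holds for the Haar filters on the grid")
```

Note the sign pattern of g relative to h: the high-pass coefficients alternate in sign, which is what makes ψ oscillate.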
In order to construct the mother wavelet ψ using multiresolution analysis, we always need a sequence of subspaces V_j of L²(ℝ) (approximation spaces) satisfying the axioms (m1)–(m4),
Fig. 3.3 The Haar wavelet ψ(x)
Fig. 3.4 The function ψ(x) and the function ψ(x − 1) (translation of ψ(x) by shift 1)
Fig. 3.5 The function ψ(2x) (dilation of ψ(x) by 2) and the function ψ(2x − 1) (translation of ψ(2x) by 1)
where

M_l^p = ∫_{−∞}^{∞} x^p φ(x − l) dx, l, p ∈ ℤ, (3.14)

is the pth moment of φ(x − l). Multiplying (3.13) by ψ(x), integrating, and using (3.7), we get

∫_{−∞}^{∞} x^p ψ(x) dx = 0, p = 0, 1, . . . , M − 1. (3.15)
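As a concrete instance of (3.15), the Haar wavelet has M = 1: its zeroth moment vanishes but its first moment equals −1/4 (integrate x over [0, 1/2) and subtract the integral over [1/2, 1)). The following Python sketch checks both claims numerically; it is illustrative only, not code from the book.

```python
# Haar wavelet moments: (3.15) holds with M = 1, i.e. the zeroth moment
# of psi vanishes while the first moment does not (it equals -1/4).
# (Illustrative numerical sketch.)

def psi_moment(p, n=100000):
    """Midpoint-rule approximation of the pth moment of the Haar wavelet."""
    dx = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        psi = 1.0 if x < 0.5 else -1.0       # Haar wavelet on [0, 1)
        total += (x ** p) * psi * dx
    return total

m0 = psi_moment(0)   # should vanish: one vanishing moment
m1 = psi_moment(1)   # should be -1/4: no second vanishing moment
print(m0, m1)
```

Higher-order Daubechies wavelets are constructed precisely so that more of these moments vanish.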
Remark 3.1.1 One can choose any dilation factor a > 0 other than 2 (i.e., f (.) ∈
V j ⇐⇒ f (a(.)) ∈ V j+1 for all j ∈ N) (see 10.2 and 10.4 in [4] for details).
Here, we look at the behavior of the Fourier transforms of the scaling function and the wavelet. Taking the Fourier transform on both sides of the dilation equation (3.6) and the wavelet equation (3.11), we get

φ̂(ω) = H(ω/2) φ̂(ω/2), where H(ω) = (1/√2) Σ_{k=−∞}^{∞} h_k e^{−ikω}, (3.16)

ψ̂(ω) = G(ω/2) φ̂(ω/2), where G(ω) = (1/√2) Σ_{k=−∞}^{∞} g_k e^{−ikω}, (3.17)

respectively.
The Fourier transform of the Haar scaling function is given by

φ̂(ω) = (1 − e^{−iω})/(iω) = e^{−iω/2} sinc(ω/2) (the sinc function is defined in Exercise 7 of Chap. 2). (3.18)
Similarly, the Fourier transform of the Haar wavelet is given by

ψ̂(ω) = ((1 − e^{−iω/2})/2) φ̂(ω/2). (3.19)

Fig. 3.6 The magnitude of the Fourier transform of the scaling function (|φ̂(ω)|) and of the wavelet (|ψ̂(ω)|)
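The closed form (3.18) is easy to sanity-check: φ̂(ω) is by definition ∫₀¹ e^{−iωx} dx, which a direct Riemann sum can approximate. The Python sketch below compares the two; it is illustrative only (the step count and test frequencies are arbitrary choices, not taken from the book).

```python
import cmath

# Sanity check of (3.18): the Fourier transform of the Haar scaling
# function is phi_hat(omega) = integral_0^1 exp(-i*omega*x) dx
#                            = (1 - exp(-i*omega)) / (i*omega).
# (Illustrative sketch; step count and frequencies are arbitrary.)

def phi_hat_numeric(omega, n=50000):
    """Midpoint-rule approximation of the defining integral."""
    dx = 1.0 / n
    return sum(cmath.exp(-1j * omega * (k + 0.5) * dx) for k in range(n)) * dx

def phi_hat_closed(omega):
    """Closed form from (3.18)."""
    return (1 - cmath.exp(-1j * omega)) / (1j * omega)

for omega in [0.5, 1.0, 3.0, 10.0]:
    assert abs(phi_hat_numeric(omega) - phi_hat_closed(omega)) < 1e-6
print("closed form (3.18) matches direct integration")
```

The same comparison applied to ψ̂ via (3.19) reproduces the oscillatory, slowly decaying magnitude plotted in Fig. 3.6.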
The main ways to start a multiresolution analysis are explained with the help of the Haar wavelet system.

Start with the Spaces V_j

Let V_j be the space of all those functions in L²(ℝ) which are constant on the intervals (k/2^j, (k + 1)/2^j), k ∈ ℤ. This means

V_0 = {f(x) : f(x) ∈ L²(ℝ) and f(x) is piecewise constant on the unit intervals (k, k + 1), k ∈ ℤ},

V_1 = {f(x) : f(x) ∈ L²(ℝ) and f(x) is piecewise constant on the intervals (k2^{−1}, (k + 1)2^{−1}), k ∈ ℤ},

and in general

V_j = {f(x) : f(x) ∈ L²(ℝ) and f(x) is piecewise constant on the intervals (k2^{−j}, (k + 1)2^{−j}), k ∈ ℤ}.
It is very clear that if f(x) ∈ V_0, then f(x) is piecewise constant on the intervals of the form (k, k + 1), and hence certainly piecewise constant on the intervals (k/2, (k + 1)/2), which means f(x) ∈ V_1. We can generalize this concept to get a ladder of subspaces

· · · ⊂ V_{−2} ⊂ V_{−1} ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ · · · , axiom (m1).
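The ladder above can also be explored numerically: the L²-best approximation of f in V_j is obtained by averaging f over each dyadic interval, and the approximation error shrinks as j grows. The Python sketch below demonstrates this under the Haar setting; the test function f(x) = x² and the sample count are arbitrary illustrative choices, not taken from the book.

```python
# Projection onto the Haar space V_j: on each dyadic interval
# (k*2^-j, (k+1)*2^-j) the L2-best piecewise-constant approximation of f
# is its average there. As j grows, the L2([0,1)) error shrinks,
# illustrating the nested ladder ... ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ ...
# (Illustrative sketch; f and the sample count are arbitrary choices.)

def projection_error(f, j, n=4096):
    """L2([0,1)) distance from f to its projection onto V_j (sampled)."""
    dx = 1.0 / n
    per = n // (2 ** j)                      # samples per dyadic interval
    err2 = 0.0
    for k in range(2 ** j):
        block = [f((k * per + i + 0.5) * dx) for i in range(per)]
        avg = sum(block) / per               # constant value on the interval
        err2 += sum((v - avg) ** 2 for v in block) * dx
    return err2 ** 0.5

f = lambda x: x * x
errors = [projection_error(f, j) for j in range(5)]
assert all(errors[j] > errors[j + 1] for j in range(4))
print([round(e, 4) for e in errors])   # strictly decreasing with j
```

The monotone decrease of the errors is exactly the nestedness (m1) at work: enlarging V_j can only improve the best approximation.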