Fundamentals of Wavelets: Theory, Algorithms, and Applications

JAIDEVA C. GOSWAMI
ANDREW K. CHAN
Texas A&M University

A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York / Chichester / Weinheim / Brisbane / Singapore / Toronto

Contents

Preface

1 What This Book Is About

2 Mathematical Preliminaries
   2.1 Linear Spaces
   2.2 Vectors and Vector Spaces
   2.3 Basis Functions
      2.3.1 Orthogonality and Biorthogonality
   2.4 Local Basis and Riesz Basis
   2.5 Discrete Linear Normed Space
   2.6 Approximation by Orthogonal Projection
   2.7 Matrix Algebra and Linear Transformation
      2.7.1 Elements of Matrix Algebra
      2.7.2 Eigenmatrix
      2.7.3 Linear Transformation
      2.7.4 Change of Basis
      2.7.5 Hermitian Matrix, Unitary Matrix, and Orthogonal Transformation
   2.8 Digital Signals
      2.8.1 Sampling of Signal
      2.8.2 Linear Shift-Invariant Systems
      2.8.3 Convolution
      2.8.4 z-Transform
      2.8.5 Region of Convergence
      2.8.6 Inverse z-Transform
   2.9 Exercises
   References

3 Fourier Analysis
   3.1 Fourier Series
   3.2 Examples
      3.2.1 Rectified Sine Wave
      3.2.2 Comb Function and the Fourier Series Kernel K_N(t)
   3.3 Fourier Transform
   3.4 Properties of the Fourier Transform
      3.4.1 Linearity
      3.4.2 Time Shifting and Time Scaling
      3.4.3 Frequency Shifting and Frequency Scaling
      3.4.4 Moments
      3.4.5 Convolution
      3.4.6 Parseval's Theorem
   3.5 Examples of the Fourier Transform
      3.5.1 Rectangular Pulse
      3.5.2 Triangular Pulse
      3.5.3 Gaussian Function
   3.6 Poisson's Sum
      3.6.1 Partition of Unity
   3.7 Sampling Theorem
   3.8 Partial Sum and the Gibbs Phenomenon
   3.9 Fourier Analysis of Discrete-Time Signals
      3.9.1 Discrete Fourier Basis and Discrete Fourier Series
      3.9.2 Discrete-Time Fourier Transform
   3.10 Discrete Fourier Transform
   3.11 Exercises
   References

4 Time-Frequency Analysis
   4.1 Window Function
   4.2 Short-Time Fourier Transform
      4.2.1 Inversion Formula
      4.2.2 Gabor Transform
      4.2.3 Time-Frequency Window
      4.2.4 Properties of STFT
   4.3 Discrete Short-Time Fourier Transform
      4.3.1 Examples of STFT
   4.4 Discrete Gabor Representation
   4.5 Continuous Wavelet Transform
      4.5.1 Inverse Wavelet Transform
      4.5.2 Time-Frequency Window
   4.6 Discrete Wavelet Transform
   4.7 Wavelet Series
   4.8 Interpretations of the Time-Frequency Plot
   4.9 Wigner-Ville Distribution
   4.10 Properties of the Wigner-Ville Distribution
      4.10.1 A Real Quantity
      4.10.2 Marginal Properties
      4.10.3 Correlation Function
   4.11 Quadratic Superposition Principle
   4.12 Ambiguity Function
   4.13 Exercises
   4.14 Computer Programs
      4.14.1 Short-Time Fourier Transform
      4.14.2 Wigner-Ville Distribution
   References

5 Multiresolution Analysis
   5.1 Multiresolution Spaces
   5.2 Orthogonal, Biorthogonal, and Semiorthogonal Decomposition
   5.3 Two-Scale Relations
   5.4 Decomposition Relation
   5.5 Spline Functions
      5.5.1 Properties of Splines
   5.6 Mapping a Function into MRA Space
   5.7 Exercises
   5.8 Computer Programs
      5.8.1 B-Splines
   References

6 Construction of Wavelets
   6.1 Necessary Ingredients for Wavelet Construction
      6.1.1 Relationship Between Two-Scale Sequences
      6.1.2 Relationship Between Reconstruction and Decomposition Sequences
   6.2 Construction of Semiorthogonal Spline Wavelets
      6.2.1 Expression for {g₀(k)}
   6.3 Construction of Orthonormal Wavelets
   6.4 Orthonormal Scaling Functions
      6.4.1 Shannon Scaling Function
      6.4.2 Meyer Scaling Function
      6.4.3 Battle-Lemarié Scaling Function
      6.4.4 Daubechies Scaling Function
   6.5 Construction of Biorthogonal Wavelets
   6.6 Graphical Display of Wavelets
      6.6.1 Iteration Method
      6.6.2 Spectral Method
      6.6.3 Eigenvalue Method
   6.7 Exercises
   6.8 Computer Programs
      6.8.1 Daubechies Wavelet
      6.8.2 Iteration Method
   References

7 Discrete Wavelet Transform and Filter Bank Algorithms
   7.1 Decimation and Interpolation
      7.1.1 Decimation
      7.1.2 Interpolation
      7.1.3 Convolution Followed by Decimation
      7.1.4 Interpolation Followed by Convolution
   7.2 Signal Representation in the Approximation Subspace
   7.3 Wavelet Decomposition Algorithm
   7.4 Reconstruction Algorithm
   7.5 Change of Bases
   7.6 Signal Reconstruction in Semiorthogonal Subspaces
      7.6.1 Change of Basis for Spline Functions
      7.6.2 Change of Basis for Spline Wavelets
   7.7 Examples
   7.8 Two-Channel Perfect Reconstruction Filter Bank
      7.8.1 Spectral-Domain Analysis of a Two-Channel PR Filter Bank
      7.8.2 Time-Domain Analysis
   7.9 Polyphase Representation for Filter Banks
      7.9.1 Signal Representation in the Polyphase Domain
      7.9.2 Filter Bank in the Polyphase Domain
   7.10 Comments on DWT and PR Filter Banks
   7.11 Exercises
   7.12 Computer Programs
      7.12.1 Algorithms
   References

8 Fast Integral Transform and Applications
   8.1 Finer Time Resolution
   8.2 Finer Scale Resolution
   8.3 Function Mapping into the Interoctave Approximation Subspaces
   8.4 Examples
      8.4.1 IWT of a Linear Function
      8.4.2 Crack Detection
      8.4.3 Decomposition of Signals with Nonoctave Frequency Components
      8.4.4 Perturbed Sinusoidal Signal
      8.4.5 Chirp Signal
      8.4.6 Music Signal with Noise
      8.4.7 Dispersive Nature of the Waveguide Mode
   References

9 Digital Signal Processing Applications
   9.1 Wavelet Packets
   9.2 Wavelet Packet Algorithms
   9.3 Thresholding
      9.3.1 Hard Thresholding
      9.3.2 Soft Thresholding
      9.3.3 Percentage Thresholding
      9.3.4 Implementation
   9.4 Interference Suppression
   9.5 Faulty Bearing Signature Identification
      9.5.1 Pattern Recognition of Acoustic Signals
      9.5.2 Wavelets, Wavelet Packets, and FFT Features
   9.6 Two-Dimensional Wavelets and Wavelet Packets
      9.6.1 Two-Dimensional Wavelets
      9.6.2 Two-Dimensional Wavelet Packets
   9.7 Wavelet and Wavelet Packet Algorithms for Two-Dimensional Signals
      9.7.1 Two-Dimensional Wavelet Algorithm
      9.7.2 Wavelet Packet Algorithm
   9.8 Image Compression
      9.8.1 Image Coding
      9.8.2 Wavelet Tree Coder
      9.8.3 EZW Code
      9.8.4 EZW Example
      9.8.5 Spatial-Oriented Tree
      9.8.6 Generalized Self-Similarity Tree
   9.9 Microcalcification Cluster Detection
      9.9.1 CAD Algorithm Structure
      9.9.2 Partitioning of Image and Nonlinear Contrast Enhancement
      9.9.3 Wavelet Decomposition of the Subimages
      9.9.4 Wavelet Coefficient Domain Processing
      9.9.5 Histogram Thresholding and Dark Pixel Removal
      9.9.6 Parametric ART2 Clustering
      9.9.7 Results
   9.10 Multicarrier Communication Systems
      9.10.1 OFDM Multicarrier Communication Systems
      9.10.2 Wavelet Packet-Based MCCS
   9.11 Three-Dimensional Medical Image Visualization
      9.11.1 Three-Dimensional Wavelets and Algorithms
      9.11.2 Rendering Techniques
      9.11.3 Region of Interest
      9.11.4 Summary
   9.12 Computer Programs
      9.12.1 Two-Dimensional Wavelet Algorithms
      9.12.2 Wavelet Packets Algorithms
   References

10 Wavelets in Boundary Value Problems
   10.1 Integral Equations
   10.2 Method of Moments
   10.3 Wavelet Techniques
      10.3.1 Use of Fast Wavelet Algorithm
      10.3.2 Direct Application of Wavelets
      10.3.3 Wavelets in Spectral Domain
      10.3.4 Wavelet Packets
   10.4 Wavelets on the Bounded Interval
   10.5 Sparsity and Error Considerations
   10.6 Numerical Examples
   10.7 Semiorthogonal Versus Orthogonal Wavelets
   10.8 Differential Equations
   10.9 Expressions for Splines and Wavelets
   References

Index
CHAPTER ONE

What This Book Is About

The concept of wavelet analysis has been in place in one form or another since the beginning of the twentieth century. The Littlewood-Paley technique and Calderón-Zygmund theory in harmonic analysis and digital filter bank theory in signal processing can be considered as forerunners of wavelet analysis. However, in its present form, wavelet theory attracted attention in the 1980s through the work of several researchers from various disciplines: Stromberg, Morlet, Grossmann, Meyer, Battle, Lemarié, Coifman, Daubechies, Mallat, and Chui, to name a few. Many other researchers have also made significant contributions.

In applications to discrete data sets, wavelets may be considered as basis functions generated by dilations and translations of a single function. Analogous to Fourier analysis, there are wavelet series (WS) and integral wavelet transforms (IWTs). In wavelet analysis, WS and IWTs are intimately related. The IWT of a finite-energy function on the real line evaluated at certain points in the time-scale domain gives the coefficients for its wavelet series representation. No such relation exists between Fourier series and Fourier transform, which are applied to different classes of functions; the former is applied to finite-energy periodic functions, whereas the latter is applied to functions that have finite energy over the real line. Furthermore, Fourier analysis is global in the sense that each frequency (time) component of the function is influenced by all the time (frequency) components of the function. On the other hand, wavelet analysis is a local analysis. This local nature of wavelet analysis makes it suitable for time-frequency analysis of signals.

Wavelet techniques enable us to divide a complicated function into several simpler ones and study them separately. This property, along with fast wavelet algorithms, which are comparable in efficiency to fast Fourier transform algorithms, makes these techniques very attractive in analysis and synthesis problems. Different types of wavelets have been used as tools to solve problems in signal analysis, image analysis, medical diagnostics, boundary value problems, geophysical signal processing, statistical analysis, pattern recognition, and many others. While wavelets have gained popularity in these areas, new applications are continually being investigated.

A reason for the popularity of wavelets is their effectiveness in the representation of nonstationary (transient) signals. Since most natural and human-made signals are transient in nature, different wavelets have been used to represent this much larger class of signals than the Fourier representation of stationary signals.
Unlike Fourier-based analyses that use global (nonlocal) sine and cosine functions as bases, wavelet analysis uses bases that are localized in time and frequency to represent nonstationary signals more effectively. As a result, a wavelet representation is much more compact and easier to implement. Using the powerful multiresolution analysis, one can represent a signal by a finite sum of components at different resolutions so that each component can be processed adaptively based on the objectives of the application. This capability to represent signals compactly and in several levels of resolution is the major strength of wavelet analysis. In the case of solving partial differential equations by numerical methods, the unknown solution can be represented by wavelets of different resolutions, resulting in a multigrid representation. The dense matrix resulting from an integral operator can be made less dense using wavelet-based thresholding techniques to attain an arbitrary degree of solution accuracy.

There have been many research monographs on wavelet analysis as well as textbooks for specific application areas. However, there seems to be no textbook that provides a systematic introduction to the subject of wavelets and its wide areas of applications. This was the motivating factor for us to write this introductory text. Our aim is (1) to present this mathematically elegant analysis in a formal yet readable fashion, (2) to introduce to readers many possible areas of applications in both signal processing and boundary value problems, and (3) to provide several algorithms and computer codes for basic hands-on practice. The level of writing will be suitable for college seniors and first-year graduate students. However, sufficient details will be given so that practicing engineers without a background in signal analysis will find it useful.

The book is organized logically to develop the concept of wavelets. It is divided into four major parts. Rather than rigorously proving theorems and developing algorithms, the subject matter is developed systematically from the very basics of signal representation using basis functions. Wavelet analysis is explained via a parallel with Fourier analysis and the short-time Fourier transform. Multiresolution analysis is developed to demonstrate the decomposition and reconstruction algorithms. Filter bank theory is incorporated so that readers may draw a parallel between the filter bank algorithm and the wavelet algorithm. Specific applications in signal processing, image processing, electromagnetic wave scattering, boundary value problems, wavelet imaging systems, and interference suppression are included. A detailed chapter-by-chapter outline of the book follows.

Chapters 2 and 3 are devoted to reviewing some basic mathematical concepts and techniques and to setting the tone for time-frequency and time-scale analysis. To have a better understanding of wavelet theory, it is necessary to review the basics of linear functional spaces. Concepts in Euclidean vectors are extended to spaces of higher dimension. Vector projection, basis functions, local and Riesz bases, orthogonality, and biorthogonality are discussed in Chapter 2. In addition, least-squares approximation of functions as well as such mathematical tools as matrix algebra and the z-transform are discussed. Chapter 3 provides a brief review of Fourier analysis to set the foundation for the development of the continuous wavelet transform and discrete wavelet series.
The main objective of this chapter is not to redevelop Fourier theory but to remind readers of some important issues and relations in Fourier analysis that are relevant to later development. The principal properties of Fourier series and Fourier transform are discussed. Lesser-known theorems, including Poisson's sum formulas, partition of unity, the sampling theorem, and the Dirichlet kernel for the partial sum, are developed in this chapter. The discrete-time Fourier transform and discrete Fourier transform are also mentioned briefly for the purpose of comparing them with the continuous and discrete wavelet transforms. Some advantages and drawbacks of Fourier analysis in terms of signal representation are presented.

Development of time-frequency and time-scale analysis forms the core of the second major section of this book. Chapter 4 is devoted to a discussion of the short-time Fourier transform (time-frequency analysis) and the continuous wavelet transform (time-scale analysis). The similarities and differences between these two transforms are pointed out. In addition, window widths as measures of localization of a time function and its spectrum are introduced. This chapter also contains the major properties of the transform, such as perfect reconstruction and uniqueness of the inverse. Discussions of the Gabor transform and Wigner-Ville distribution complete this chapter on time-frequency analysis. Chapter 5 contains an introduction to and discussion of multiresolution analysis. The relationships between nested approximation spaces and wavelet spaces are developed via a derivation of two-scale relations and decomposition relations. Orthogonality and biorthogonality between spaces and between basis functions and their integer translates are also discussed. This chapter also contains a discussion of the semiorthogonal B-spline function as well as techniques for mapping a function onto multiresolution spaces. In Chapter 6, methods and requirements for wavelet construction are developed in detail. Orthogonal, semiorthogonal, and biorthogonal wavelets are constructed via examples to elucidate the procedure. Biorthogonal wavelet subspaces and their orthogonal properties are also discussed in this chapter. A derivation of the formulas used in methods to compute and display the wavelets is presented at the end of this chapter.

The algorithm development for wavelet analysis is contained in Chapters 7 and 8. Chapter 7 provides the construction and implementation of decomposition and reconstruction algorithms. The basic building blocks for these algorithms are discussed at the beginning of the chapter. Formulas for decimation, interpolation, discrete convolution, and their interconnections are derived. Although these algorithms are general for various types of wavelets, special attention is given to the compactly supported semiorthogonal B-spline wavelets. Mapping formulas between the spline spaces and the dual spline spaces are derived. The algorithms for a perfect reconstruction filter bank in digital signal processing are developed via the z-transform in this chapter. The time-domain and polyphase-domain equivalents of the algorithms are discussed. Examples of biorthogonal wavelet construction are given at the end of the chapter. In Chapter 7, limitations of the discrete wavelet algorithms, including the time-variant property of the discrete wavelet transform and the sparsity of data distribution, are pointed out.
To circumvent these difficulties, the fast integral wavelet transform (FIWT) algorithm is developed in Chapter 8 for the semiorthogonal spline wavelet. Starting with an increase in time resolution and ending with an increase in scale resolution, a step-by-step development of the algorithm is presented in this chapter. A number of applications using the FIWT are included to illustrate its importance.

The final section of this book is on the application of wavelets to engineering problems. Chapter 9 includes their applications to signal and image processing, and in Chapter 10 we discuss the use of wavelets in solving boundary value problems. In Chapter 9, the concept of wavelet packets is discussed first as an extension of wavelet analysis to improve the spectral-domain performance of the wavelet. Wavelet packet representation of a signal is seen as a refinement of the wavelet in the spectral domain by further subdividing the wavelet spectrum into subspectra. This is seen to be useful in subsequent discussions of radar interference suppression. Three types of amplitude thresholding are discussed in this chapter and are used in subsequent applications to image compression. Signature recognition on faulty bearings completes the one-dimensional wavelet signal processing. The wavelet algorithms in Chapter 7 are extended to two dimensions for the processing of images. Major wavelet image processing applications included in this chapter are image compression and target detection and recognition. Details of tree-type image coding are not included because of limited space. However, the detection, recognition, and clustering of microcalcifications in mammograms are discussed in moderate detail. The application of wavelet packets to multicarrier communication systems and the application of wavelet analysis to three-dimensional medical image visualization are also included. Chapter 10 concerns wavelets in boundary value problems. The traditional method of moments and the wavelet-based method of moments are developed in parallel. Different ways to use wavelets in the method of moments are discussed. In particular, wavelets on a bounded interval as applied to solving integral equations arising from electromagnetic scattering problems are presented in some detail. These boundary wavelets are also suitable for avoiding edge effects in image processing. Finally, an application of wavelets in the spectral domain is illustrated by applying them to solve a transmission line discontinuity problem.

Most of the material is derived from lecture notes prepared for undergraduate and graduate courses in the Department of Electrical Engineering at Texas A&M University as well as for short courses taught at several conferences. The material in this book can be covered in one semester. Topics can also be selectively amplified to complement other signal processing courses in an existing curriculum. Exercises are included in some chapters for the purpose of practice. A number of figures have been included in each chapter to expound mathematical concepts. Suggestions on computer code generation are also included at the end of some chapters.

CHAPTER TWO

Mathematical Preliminaries

The purpose of this chapter is to familiarize the reader with some of the mathematical concepts and tools that are useful in an understanding of wavelet theory.
Since wavelets are continuous functions that satisfy certain admissibility conditions, we proceed to discuss some definitions and properties of functional spaces. For a more detailed discussion of function spaces, the reader is referred to standard texts on real analysis. The wavelet algorithms discussed in later chapters involve digital processing of signals. A brief review of digital signal processing, such as sampling, the z-transform, linear shift-invariant systems, and discrete convolution, is necessary for a good grasp of wavelet theory. In addition, a brief discussion of linear algebra and matrix manipulations is included that is very useful in the discrete-time-domain analysis of filter banks. Readers already familiar with its contents may skip this chapter.

2.1 LINEAR SPACES

In the broadest sense, a functional space is simply a collection of functions that satisfy a certain mathematical structural pattern. For example, the finite-energy space \(L^2(-\infty, \infty)\) is a collection of functions that are square integrable; that is,

\[ \int_{-\infty}^{\infty} |f(t)|^2 \, dt < \infty. \tag{2.1} \]

Some of the requirements and operational rules on linear spaces are stated as follows:

1. The space S must not be empty.
2. If \(x \in S\) and \(y \in S\), then \(x + y = y + x\).
3. If \(x, y, z \in S\), then \((x + y) + z = x + (y + z)\).
4. There exists in S a unique element 0 such that \(x + 0 = x\).
5. There exists in S another unique element \(-x\) such that \(x + (-x) = 0\).

Besides these simple yet important rules, we also define scalar multiplication \(cx\) such that if \(x \in S\), then \(y = cx \in S\) for every scalar \(c\). We have the following additional rules for the space S:

1. \(c(x + y) = cx + cy\).
2. \((c + d)x = cx + dx\), with scalars \(c\) and \(d\).
3. \((cd)x = c(dx)\).
4. \(1 \cdot x = x\).

Spaces that satisfy these additional rules are called linear spaces. However, up to now we have not defined a measure to gauge the size of an element in a linear space. If we assign a number \(\|x\|\), called the norm of \(x\), to each function in S, this space becomes a normed linear space (i.e., a space with a measure associated with it). The norm of a space must also satisfy certain mathematical properties:

1. \(\|x\| \ge 0\), and \(\|x\| = 0 \Rightarrow x = 0\).
2. \(\|x + y\| \le \|x\| + \|y\|\).
3. \(\|ax\| = |a| \, \|x\|\), where \(a\) is a scalar.

The norm of a function is simply a measure of the distance of the function to the origin (i.e., 0). In other words, we can use the norm

\[ \|x - y\| \tag{2.2} \]

to measure the difference (or distance) between two functions, \(x\) and \(y\).

There are a variety of norms that one may choose as a measure for a particular linear space. For example, the finite-energy space \(L^2(-\infty, \infty)\) uses the norm

\[ \|f\| = \left[ \int_{-\infty}^{\infty} |f(t)|^2 \, dt \right]^{1/2} < \infty, \tag{2.3} \]

which we shall call the \(L^2\)-norm. This norm has also been used to measure the overall difference (or error) between two finite-energy functions. This measure is called the root-mean-square error (RMSE), defined by

\[ \mathrm{RMSE} = \lim_{T \to \infty} \left[ \frac{1}{2T} \int_{-T}^{T} \bigl| f(x) - \tilde{f}(x) \bigr|^2 \, dx \right]^{1/2}, \tag{2.4} \]

where \(\tilde{f}(x)\) is an approximating function to \(f(x)\). The expression in (2.4) is the approximation error in the \(L^2\) sense.

2.2 VECTORS AND VECTOR SPACES

Based on the basic concepts of functional spaces discussed in Section 2.1, we now present some fundamentals of vector spaces. We begin with a brief review of geometric vector analysis.

A vector V in a three-dimensional Euclidean vector space is defined by three complex numbers \(\{v_1, v_2, v_3\}\) associated with three orthogonal unit vectors \(\{a_1, a_2, a_3\}\). The ordered set \(\{v_j\}_{j=1}^{3}\) represents the scalar components of the vector V, where the unit vectors \(\{a_j\}_{j=1}^{3}\) span the three-dimensional Euclidean vector space. Any vector U in this space can be decomposed into three vector components \(\{u_j a_j\}\) (see Figure 2.1a).

The addition and scalar multiplication of vectors in this space are defined by

\[ U + V = (u_1 + v_1, \, u_2 + v_2, \, u_3 + v_3), \qquad kU = (k u_1, \, k u_2, \, k u_3). \]

FIGURE 2.1 Orthogonal decomposition of a vector in Euclidean space.
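As a quick numerical illustration of these operations and of the decomposition pictured in Figure 2.1, the short Python fragment below is a minimal sketch added for this purpose (it is not one of the book's program listings): it adds and scales three-dimensional vectors, splits a vector into its components along the orthonormal unit vectors a1, a2, a3, and reassembles it.

```python
import numpy as np

# Orthonormal unit vectors a1, a2, a3 of three-dimensional Euclidean space.
a = np.eye(3)                      # row k is the unit vector a_(k+1)

U = np.array([2.0, -1.0, 4.0])     # an arbitrary vector U = (u1, u2, u3)
V = np.array([1.0,  3.0, 0.5])     # an arbitrary vector V = (v1, v2, v3)

# Vector addition and scalar multiplication, componentwise as defined above.
print(U + V)                       # (u1 + v1, u2 + v2, u3 + v3)
print(3.0 * U)                     # (3*u1, 3*u2, 3*u3)

# Decompose U into the three vector components u_k * a_k (Figure 2.1)
# and reconstruct it as their sum.
components = [U[k] * a[k] for k in range(3)]
U_rec = components[0] + components[1] + components[2]
print(np.allclose(U, U_rec))       # True: the decomposition is exact
```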
8 MATHEMATICAL PRELIMINARIES In addition to these operations, vectors in a three-dimensional Euclidean space also obey the commutative and associative laws: 1 U+V=V+U. 2, (U+V)+W=U+(V+W). ‘We may represent a vector by a column matrix, v1 v-[5} es %, since all of the mathematical rules above apply to column matrices. We define the length of a vector similar to the definition of the norm of a function, by Wir fardeed 26) ‘The scalar product (inner product) of two vectors is a very important operation in vector algebra that we should consider here. It is defined by en where the superscript 1 indicates matrix transposition and :—- is the symbol for defi- nition. It is known thatthe scalar product obeys the commutative law: U-V = V-U. ‘Two vectors U and V are orthogonal to each other if U - V = 0. We define the ‘Projection of a vector onto another vector by uy Ti = projection of U in the direction of ay ‘component of U in the direction of a, 28) Projection is an important concept which will be used often in later discussions. If ‘one needs to find the component ofa vector in a given direction, simply project the veetorin that direction by taking the scalar product of the vector with the unit vector of the direction desired. BASIS FUNCTIONS 9 We may now extend this concept of basis and projection from the tree- dimensional Buciean space oan N-dimensinal vector space, The components of a vector inthis space form an W x 1 column matrix, whe the bass vectors) form an orthogonal et ach that aay a VkeeZ, es) ‘where d,¢ is the Kronecker delta, defined as bu, a rer 2.10 One can obtain a specific component vy of a vector V (or the projection of Vin the direction ofa) using the inner produet wy =V-ay, em and the vector V is expressed as a linear combination of its vector components, » v=dun a1 kis well known that a vector can be decomposed into elementary vectors along the direction ofthe basis vectors by finding its components one at time. Figure 2.1 illustrates this procedure. The vector V in Figure 2.1a is decomposed into Vp \V ~ vay and its orthogonal complementary vector vsas, The vector Vp is further decomposed into viay and van. Figure 2.14 represents the reconstruction of the vector V from its components. The example shown in Figure 2.1, although elementary, is analogous to the ‘wavelet decomposition and reconstruction algorithm, There the orthogonal compo- nents are wavelet functions at different resolutions. 2.3. BASIS FUNCTIONS We extend the concept of Euclidean geometric vector space to normed linear spaces. ‘That is, instead of thinking about a collection of geometric vectors, we think about a collection of functions. Instead of basis vectors, we have basis functions to represent arbitrary fungtions in that space. These basis functions are basic building blocks for functions in that space. We will use the Fourier series as an example. The topic of, Fourier series is considered in more detail in Chapter 3, Example: Let us recall that a periodic function py (t) can be expanded into a series pre) 3 elm, au) te 10 MATHEMATICAL PRELIMINARIES where 7 is the petiicity ofthe function, ap = 2/T = 2s is the fundamen- tal frequency, and e/"* isthe nth harmonic of the fundamental frequency. Equa- tion (2.13) is identical to (2.12) if we make the equivalence between pr(t) with V, cx with vp, and e/%° with ay. Therefore, the function set eH"), forms the bass et for the Fourier space of discret frequency. Here Z isthe set Of all ntegers, Lee-y 1,0, ly.) 
The coeficient set (cx}eez i8 often referred to as the discrete spectrum. It is well known thatthe discrete Fourier bass is an orthogonal basis. ‘Using the inner product notation for functions tem = [ sonar, eu where the overbar indicates complex conjugation, we express the orthogonality by 1 2p sate ita gy = TJhtp VR Ler 15) ‘We may normalize the basis functions (with respect to unit energy) by dividing them with VT. Hence {e!©'/ V7, forms the orthonormal basis ofthe discrete Fourier space. 2.3.1 Orthogonality and Biorthogonality Orthogonal expansion of a function is an important tool for signal analysis. The coefficients of expansion represent the magnitudes ofthe signal components. Inthe ‘example above, the Fourier coefficients represent the amplitudes of the harmonic frequency of the signal. If for some particular signal processing objective we decide to minimize (oc make zero) certain harmonic frequencies (such as 60-Hz noise), we simply design after at that frequency to reject the noise. Itis therefore meaningful to decompose a signal into components for observation before processing the signal. Orthogonal decompesition of a signal is straightforward, and the compotation of the coeficiens is simple and efficient. Ifa function f(t) € 2? is expanded in terms of a certain orthonormal set (#¢(t)}iex, € L?, we may write f= FS ab. 216 Se ‘We compute the coefficients by taking the inner product of the function withthe basis toyield iron) = [™ seomrtae 7 YP ceoenaear 20 ae BASIS FUNCTIONS 11 = Di cebes He enn Computation of an nner product such a the one in (2.17 requires knowledge ofthe function f() for ll and is not realsime computable. ‘We have seen that an orthonormal bass isan efficent and straightforward way to represent a signal. In some applications, however, the orthonormal basis function ray lack certain desirable signal processing properties, easing inconvenience in processing. Biorthogonal representation isa possible alternative to overcoming the constraint in orthogonality and producing a good approximation toa given function. Let (¢e (hn, € L? be a biorthogonal basis function st. there exist another basis function set {42(0)), <7 € L? such that (oe. = [~ deat = bee, 2.18) the set {9k (0), is called the dual basis of {(x(t)}xez. We may expand a function (0) in terms ofthe biorthogonal basis 8 = Dhan, S and obtain the coefficients by dy = (g. x) 19) = £ . 8eOdn(0 dr. (2.20) ‘On the other hand, we may expand the function g(t) in terms of the dual basis 10 = Sade, en rad ‘and obtain the dual coefficients di by : d= (8,40) 222) = [ora 23) ‘We recall that in an orthogonal basis, all basis functions belong tothe same space. In ‘abiorthogonal basis, however, the dual basis does not have to be in the original space. If the biorthogonal basis and its dual belong to the same space, these bases are called semiorthogonal. Spline functions of an arbitrary order belong to the semiorthogonal class. More details about spline functions are considered in later chapters. 12) MATHEMATICAL PRELIMINARIES gt eo et veto ta 1 le wof m|3 2 form‘a biorthogonal basis in the two-dimensional Euclidean space. The dual of this aa ‘The bases are displayed graphically in Figure 2.2. We can compute the dual basis in this two-dimensional Euclidean space simply by solving a set of simultaneous ‘equations. Let the biorthogonal basis be and the dual basis be be by FIGURE 2.2. Biorhogonal basis in two-dimensional Euclidean space LOCALBASIS AND RIESZ BASIS 13. 
‘The set of simultaneous equations that solves for By and by is (224) This process can be generalized into a finite-dimensional space where the basis vee- tors form an oblique (nonorthogonal) coordinate system. It requires linear matrix transformations to compute the dual basis. This process will not be elaborated upon here. The interested reader may refer to (1). 2.4 LOCAL BASIS AND RIESZ BASIS. We have considered orthogonal bases of a global nature in previous sections [6(t) # € (00, 00)]. Observe that sine and cosine basis functions for Fourier series are defined on the entire real line (—0, 00) and therefore are called global bases. Many ‘bases that exist ina finite imerval ofthe real line [9() : ¢ € (a, b)] satisfy the orthog- ‘onality or biorthogonality requirements. We cal these local bases. The Haar basis is the simplest example of a local basis, Example I: The Haar basis is described by 6u4(0) = np.n(t~ K).k € Z, where 1, Ostet xon0~{ tees 225) is the characteristic function. The Haar basis clearly satisfies the orthogonality con- dition (6ns00,4n40)= f not - Drone=Bat =i 226) ‘To represent a global function f(t), € (—00, 00) with a local basis (1 € (@,), functions that exist outside the finite interval must be represented by inte- ‘er shifts (defkys) of the basis function along the real line, Integer shifts of global functions can also form bases for linear spaces. The Shannon function su) is an ‘example of such a basis. Example 2: The Shannon function is defined by ésu(e) = St, en 14 MATHEMATICAL PRELIMINARIES and the basis formed by sinx(s—b) (y= eB, 28 bsu(t) co) ez (2.28) is an orthonormal basis and is global. The prof ofthe orhonormalit is best shown inthe spectral domain. Let g(t) € L? be expanded into a series with basis functions (6x (*)heez? a) = ead. (2.29) “The bass set (6x (¢))cez is called a Riesz bassif it satisfies the following inequality Ri eels < Wg? < Ro leelife (2.30) 2 Rillealia = [Ean = Rolle. (2.31) 7 where 0 < Ri < Rz < oo are called Riesz bounds. The space ¢? isthe counterpart of L? for diserete sequences with the norm defined as 2 lala = (© rn) <0. (2.32) ‘An equivalent expression for (2.30) inthe spectral domain is 0< R15) [8@+2xb)) sR < om. 2.33) i ‘The derivation of (2.33) is left as an exercise, A hat over a function represents its Fourier transform, a topic discussed in Chapter 3. If R = Ro = |, the basis is orthonormal. The Shannon basis is an example of a Reisz basis that is orthonormal, since the spectrum of the Shannon function is 1 in the interval (=x, 2]. Hence YC [snot 2b)? = 1. (234) c If the basis functions (g(t — k) : k € Z) are not orthonormal, we can obtain an ‘omthonormal basis set (9(r — X) : k € Z) by the elation Fw -—_™ _,,. 235) {E:Be+200?} DISCRETE LINEAR NORMED SPACE 15 ‘The Reis basis also called a stable bassin the sense that if =D 7 al) = rae, al a small difference in the functions results in a small difference in the coefficients, and vice versa In other words, stability implies that 2 small Hau(®)— a2(0)P <=> smal Jaf? — 09 [7 236) 2.5 DISCRETE LINEAR NORMED SPACE A discrete linear normed space is a collection of elements that are discrete sequences of real or complex numbers with a given norm. For a discrete normed linear space, the operation rules in Section 2.1 are applicable to discrete linear normed space as ‘well. An element in an N-dimensional linear space is represented by the sequence x) = Ox), --.2N— Dh, @3n, and we represent a sum of two elements as win) = x(0) + (A) = (x0) +O), X(1) + Dy... 
8 =D FY =D) 238) ‘The inner product and the norm in discrete linear space are defined separately as ((@), yO) = Yo xOFCn), (2.39) (2.40) Orthogonalty and bionhogonality as defined previously apply to discrete bases as ‘wel. The bionhogonal discrete basis satisfies the condition (6:00), 800) = Yo om9y0m) = is ean For an onhonormal bass the spaces ar self-dual thai, $= 4). (2.42) 16 MATHEMATICAL PRELIMINARIES Example 1: The discrete Haar basis, defined as fen = 041 moa 03) 0, otherwise, is an orthonormal basis formed by the even translates of Ho(n). The Haar basis, however, is not complete. That is, there exist certain sequences that cannot be repre- sented by an expansion from this basis. It requires a complementary space to make itcomplete. The complementary space of Haar is 1 +. forn=0 v2 MO=J tenet 44) v2 0, otherwise. ‘The odd translates of H(n) form the complementary space, $o any real sequence ‘can be represented by the Haar basis. Example 2: The sequence 1t¢v3 343 3-V3 1-V3 av" 4V2" 42° av2 is a finite sequence with four members whose integer translates form an orthonormal basis. The proof of orthonormalty is left as an exercise. Dan 245) 2.6 APPROXIMATION BY ORTHOGONAL PROJECTION, Assuming that a vector u(n) is not a member ofthe linear vector space V spanned by (x), we wish to find an approximation» € V. We use the orthogonal projection cof w onto the space V as the approximation. The projection is defined by Mp =D (ue) be (2.46) ‘We remark here thatthe approximation error u ~ up is onhogonal to the space V. Thats, up d)=0 VE. Furthermore, the mean square eror (MSE) of such an approximation is minimum. ‘To prove the minimality of the MSE for any orthogonal projection, consider function g € L2[a,b] that is approximated by using a set of orthonormal basis fune- [APPROXIMATION BY ORTHOGONAL PROJECTION 17 tons {dx : k = 0,...,.N = 1) such that 80 © gett) = J 6), ean = with y= 8.9) (2.48) ‘The pointwise error ¢¢(¢) in the approximation of the function g(t) is wet Eel) = 8) ~ elt) = 8) - Y )0)00). (2.49) = ‘We wish to show that when the coefficient sequence {cj} is obtained by orthogonal projection given by (2.48), the MSE ll (0) is minimum. To show this lt us assume that there is another sequence {d; :j = 0,...,.N — 1) which is obtained other than by orthogonal projection, which also minimizes the error. Then we will show that cj =dj; 7 =0,...,N ~ 1 thus completing our proof. With te sequence (dj) we have Pa BO Ba =D dip; 2.50) is Heaton = wot > wot Not Is —- > a = (+ - L460, se - ae) js is ra xt wet not (e.2)— Yo 4) (8s0.8)— Yo di (s. 4500) + > layl? ra is ad ig. 8) Yo aiej~ Yo Tey + YI 2st = = x! ul cae ‘To complete the square of the lst three terms in (2.51, we ad and subract hed [ey to yield a 2 Nea(x)P = feo Lae af wt Pog = fee 2 owl +L -aP 2.52) js 18 MATHEMATICAL PRELIMINARIES fa =heewor+ Dd -eiP- (2.53) is It s clear that to have a minimum MSE, we must have dy = €)3, and hence the proof. 2.7 MATRIX ALGEBRA AND LINEAR TRANSFORMATION We have already used column matrices to represent vectors in finite-dimensional Euclidean spaces. Matrices are operators in these spaces. We give a bref review of ‘matrix algebra in this section and discuss several types of special matrices that will be useful in the understanding of time-domain analysis of wavelets and filter banks. For details, readers are referred to [2]. 2.7.1 Elements of Matrix Algebra 1. Definition. A matrix A. = [Ay] is a rectangular array of elements. The ele- ‘ments may be real numbers, complex number, or polynomials. 
The first inte- ger index, i, is the row indicator, and the second integer index, j, is the column indicator. A matrix is infinite if, j - 00. An m x n matrix is displayed as Au An AB aa|4n 42 (254) Ast Ann, Ifm =n, Aisa square matrix. An N x 1 column matrix (only one column) represents an N-dimensional vector. 2. Transposition. The transpose of A is A‘, whose element is Ajj. If the dimen- sion of A is m x, the dimension of A' ism xm. The transposition of a column (N x 1) matrix is a row (1 x N) matrix. 3. Matrix sum and difference. Two matrices may be summed if they have the same dimensions: C=A4B => Cy = Ay + By 4, Matrix product, Multiplication of two matrices is meaningful only if their di- ‘mensions are compatible. Compatibility means thatthe number of columns in the first matrix must be the same as the number of rows in the second matrix, If the dimensions of A and B are m x p and p x q, tespectively, the dimension of C = AB is m x q. The element Cy; is given by Cy = Abe. re [MATRIX ALGEBRA AND LINEAR TRANSFORMATION 19 ‘The matrix product is not commutative since p x q is not compatible with, ‘m x p. In general, AB # BA. 5. Identity matrix. An identity matrix is a square matrix whose major diagonal clements are ones and whose off-diagonal elements are zeros. 100000 010000 oo1000 000100 ooo0010 ooo0001 6. Matrix minor. A minor Sy of matrix Aisa submatrix ofA created by deleting the throw and jth column of A. The dimension of Sy is (m— 1) x (t— 1) if the dimension of Ais m x 7. Determinant. The determinant ofa square matrix A isa valve computed suc- cessvely using the definition of minor, We compute the determinant of Square (m x m) matrix by det(A) = S14 Ay dettSi)). ‘The index j canbe any integer between [1m]. 8 Inverse matrix. A~" i the inverse of a square matrix A such that A“!A ‘AAT! We compute the inverse by ott aye Al = aeaycD™ dex. If det(A) = 0, the matrix is singular, and A~' does not exist, 2.7.2 Eigenmatrix A linear transformation is a mapping such that when a vector x € V (a vector space) is transformed, the result ofthe transformation is another vector, y = Ax € V. The vector y, in general, isa scaled, rotated, and translated version of x. In particular, if the output vector y is the only scalar multiple of the input vector, we call this scalar the eigenvalue and the system an eigensystem. Mathematically, we write y=Ax= nx, (255) where A is an N x N matrix, x is an N vector, and 4 is a scalar eigenvalue. We ‘determine the eigenvalues from the solution of the characteristic equation det(A — ul) = 0. 256) 20. MATHEMATICAL PRELIMINARIES Ifxis an N x 1 column matrix, there are N eigenvalues inthis system. These eigen- values may or may not all be distinct. Associated with each eigenvalue is an eigen- ‘vector. Interpretations of the eigenvectors and eigenvalues depend on the nature of the problem at hand. We substitute each eigenvalue 41; into (2.56) to solve for the eigenvector x). We use the following example t illustrate this procedure. Let 3-1 0 : As[-1 2-1 o-1 3 be the transformation matrix. The characteristic equation from (2.56) is a de(A—u) det} -1 2-n 1 0-1 3-a =G-WI@-wG-n)--6-» =6-mu-Nu-4) =0 and the eigenvalues are x = 4, 1, and 3. We substitute jx = 4 into (2.55), [Ie and obtain the following set of linear equations =m 2-5 =0 es 0-n-x =0. ‘This is a linearly dependent set of algebraic equations. We assume that x, = ar and ‘obtain the eigenvector es corresponding to 4. = 4 as : [- exo a3» ‘The reader can compute the other two eigenvectors as an exercise. 2.7.3. 
Linear Transformation Using the example ofthe eigensystem in Section 2.7.2, we have i [MATRIX ALGEBRA AND LINEAR TRANSFORMATION 21 and the eigenvectors corresponding to the eigenvalues are “lh Uh “t= ‘From the definitions of eigenvalue and eigenfunction, we have Aej=njey for j =1,2,3. (2.60) ‘We may rearrange this equation as Afer @ e)=[mer pe use) 261) To be more concise, we put (261) into a more compact matrix form, m0 0 AE=E|0 uz 0 00 ws, =Ex, 2.62) where is a diagonal matrix and K isthe eigenmatrix. Ifthe matrix Kis nonsingular, wwe diagonalize the matrix A by premultiplying (2.62) by E~! ETAE=p. (263) ‘Therefore, we have used the eigenmatrix K in a linear transformation to diagonalize the matrix A. ‘odingoal 2.7.4 Change of Basis ‘One may view the matrix A in the preceding example asa matrix that defines a linear ‘The matrix A is a transformation that maps x € R° to y € R®. The components OF elated that of in aoa dnd by 68), Sime 1. €2, and e3 are linearly independent vectors, they may be used as a basis for R°, ‘Therefore, we may expand the Vector x on this basis: fe tiheties 265) 22 MATHEMATICAL PRELIMINARIES and the coefficient vector x’ is computed by x 2.66) The new coordinates forthe vector y with respect otis new basis become yeEly . Bax = EARX’ cn is (2.67) Equation (2.67) states that we have modified the linear system y = Ax by a change ‘of basis to another system, y’ = wx’, in which the matrix wis diagonal matrix. We call this linear transformation via the eigenmatrix the similarity transformation. 2.7.5 Hermitian Matrix, Unitary Matrix, and Orthogonal Transformation Given a complex-valued matrix H, we can oblain its Hermitian, Ht, by taking the ‘conjugate ofthe transpose of H, namely Hh =H (2.68) The ote ()' =" (GH) = B’G* bis alow fom he defn. Let the basis vectors of an N-dimensional vector space be bi M, where b; is itself a vector of length N. An orthogonal basis means that the inner product of any two basis vectors vanishes: (bj. bi) = [by to For complex-valued basis vectors, the inner product is expressed by (by,b) = [by] Ibid Ifthe norm of b; is 1, this basis is called an orthonormal basis. We form an N x N ‘matrix of transformation P by putting the orthonormal vectors in a row as. Pe[by, ba,-..,bw) 270) =hj Vise 2) [by}* ba = 4.) an) DIGTALSIGNALS 23 it follows that Pret an, and Phaptl, en) In addition to the column-wise onhonormality, if P also satisfies the row-wise or- ‘thonormality, PP* = I, matrix P is called a unitary (or orthonormal) matrix, 2.8 DIGITAL SIGNALS In this section we provide some basic notations and operations pertinent to signal processing techniques. Details may be found in (3). 2.8.1. Sampling of Signal Let x(1) be an energy-timited continuous-time (analog) signal. If we measure the signal amplitude and record the result at regular interval h, we have a discrete-time signal XC) =U), =O. eee em where For simplicity in writing and convenience of computation, we use x(n) with the sampling period h understood. These discretized sample values constitute a signal, called a digital signal In order to have a good approximation to a continuous bandlimited function x(t) ‘rom its samples {x(1)}, the sampling interval h must be chosen such that ‘where 222s the bandwidth ofthe function x ()[i.., (w) = 0 forall jo| > 9). The choice of h above is the Nyquist sampling rate, and the Shannon recovery formula sinx(t nh) x0) = Dox (nmy ED 2.75) 0 = FO Te ay 2.75) enables us to recover the original analog function x(t). 
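As a quick check of the recovery formula (2.75), which rebuilds x(t) from its samples x(nh) using the kernel sin[π(t − nh)/h]/[π(t − nh)/h], the following Python fragment is a rough numerical sketch added here (it is not one of the book's program listings). The test signal, sampling rate, and number of samples are arbitrary choices, and truncating the infinite sum to a finite block of samples is the only source of error.

```python
import numpy as np

# A bandlimited test signal whose highest frequency is 40 Hz.
def f(t):
    return np.sin(2 * np.pi * 10 * t) + 0.5 * np.cos(2 * np.pi * 40 * t)

h = 1.0 / 200.0               # sampling period: 200 samples/s, above the Nyquist rate of 80
n = np.arange(400)            # a finite block of samples (truncates the ideal infinite sum)
samples = f(n * h)

def recover(t):
    # Shannon recovery: samples weighted by the sinc kernel.
    # np.sinc(x) = sin(pi*x)/(pi*x), so np.sinc((t - n*h)/h) matches the kernel in (2.75).
    return np.sum(samples * np.sinc((t - n * h) / h))

# Evaluate between sampling instants, away from the edges of the sample block.
for t in (0.5012, 0.9874, 1.2345):
    print(t, f(t), recover(t))    # the recovered values track f(t) up to truncation error
```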
The proof of this theorem ‘most easily carried out using the Fourier transform and Poisson's sum of Fourier Series. We shall defer this proof until Chapter 3 24 MATHEMATICAL PRELIMINARIES 2.8.2 Linear Shift-Invariant Systems Let us consider a system characterized by its impulse response h(n). We say that the system is linearly sift invariant if the input x(n) and the output y(n) satisfy the following system relations: ‘Shift invariance: x(n) = y(n) : 2.16) x(n—n!) => y(n"). en Linearity: a(n) => u(r) and 22(n) => alr) em xu(a) + mx2(n) => yu(n) + myaln). In general, a linear shift-invariant system is characterized by xylan!) man —n!) 3 (nn) +myxln—n'). (278) 2.8.3 Convolution Discrete convolution, also known as moving averages, defines the input-output re- lationship ofa linear shift-invariant system. The mathematical definition of s linear convolution is Aen) xx(0) = Dk mx) c oe = Dx - neo. 279) 7 ‘We may express the convolution sum in matrix notation as, yD . . : xD 0) +h) hO) WD A-2 - . xO yay |=]. AQ) ha) ho) —D h(-2) x0) ¥@) : hQ) hG)—hO)—-A(=1)AC-2) |] x 2.80) ‘Asan example, if in) = (J, 4.4.4) and x(2) = (1,0, 1.0, 1,0, 1 are causal Sequences, the matrix equation fr the input-output relations is DIGALSIGNAIS 25 0. ‘The oupt signal is seen o be much smoother than the input signal Tn fc, the up is very clowe to te average value ofthe input. We eal this type of lie a smoothing o averaging fer In igaal proesing tems is called» lowpats fier (On he other hand ifthe impulse response ofthe tei hn) = (doef we have a differentiating filter or highpass filter. With the input signal x(n) as before, the output signal is (2.82) ‘The oscillation in the input signal is allowed to pass through the fier, while the average value (dc component is ejected by this ler. Tiss evident frm the near 2ero average of the output, while the average ofthe inpt is 2.8.4 Transform ‘The z-transform isa very useful tool for discrete signal analysis. We will use it often in derivations of the wavelet and filter bank algorithms. It is defined by the infinite 26 MATHEMATICAL PRELIMINARIES sum H@)=Dmoe* fa SAD! + AO) + ADE +h? + 2.83) ‘The variable 27! represents a delay of 1 unit of sampling interval; z~™ means a delay of M units, If one replaces z by e/, the z-transform becomes the discrete- time Fourier transform, discussed in more detail in Chapter 3: Drawer. 284) fat HO)zaeia = HOC) ‘We use these notations interchangeably in future discussions. One important property of the z-transform is that the z-transform of a linear discrete convolution becomes & product inthe z-transform domain: yl) = h(n) « x(n) => Y(2) = HOKE) (2.85) 2.8.5. Region of Convergence “The variable in the z-transform is complex valued, The z-transform, X(2) Sees x(ne-", may not converge for some values of z. The region of conver- ‘ence (ROC) of a z-transform indicates the region inthe complex plane in which Ail values of ¢ make the z-transform converge. Two sequences may have the same ‘transform but with different regions of convergence. Example 1: Find te z-transform of x(n) = a cos(cxn)u(n), where u(r) i the unit step function, defined by un) no 0, cern SOLUTION: From te definition of transform, we have X(@) = Soa coslogn)e™* & let pelo DIGITAL SIGNALS t I i ptt Hy ROC Roc FIGURE 2.3. Sequence and ROC (a = 03), 27 28 MATHEMATICAL PRELIMINARIES ie ROC: el > lal ‘The case where a = 0.9 and op = 100 is shown in Figure 2.3. Special cases: 1. Ifa = 1 and a = 0, we have v@= 2. Ia = 1, wehave X= ROC elo. 1 = cos(on)z = Reoseme Fz 3. 
Ifa = 0, we have xo aE ROC: [el > lal 2.8.6 Inverse z-Transform ‘The formula for recovering the original sequence from its z-transform involves com- plex integration ofthe form ee) fro 2.86 (0) = 55 $ KH, 2.86) where the contour is taken within the ROC of the transform in the counterclock- wise direction. For the purpose of this book, we shall not use (2.86) to recover the sequence. Since the signals and filters that are of interest in this book are rational functions of z, itis more convenient to use partial fractions or long division to re- ‘cover the sequence. Example 2: Determine the sequence x(n) corresponding to the following z- ROC: [zl > 1.2. exERCISES 29 SOLUTION: Using long division, we have 0.8972~? ~ 0.730823 2192 +084 Obviously, x(0) = 0, x(1) = 1, x@) = 0.9, x) = 0.87, ..., x(n) is aright-sided 2. infinite sequence since the ROC is outside a circle of radius < 0.7, x(n) is aleft-sided sequence. We obtain X@) =z 409? +0872 + 2 eite “084-1942 using long division. The sequence {x(n)) becomes a sequence of ascending pow- crs of z and is a left-sided sequence where x(0) = —1/0.84 = 1.19, x(—1) 1/0.84(1 ~ 1.9/0.84) = =1.5, x(-2) = ++ 2.9 EXERCISES 1. Let u = (—4,—5) and v = (12,20) be two vectors in the two-dimensional space. Find —5u, 3u + 2v, —v, and u + v, For arbitrary a,b € R, show that lau) + [bv] > au + bu} 2. Expand the function f(t) = sint in the polynomial basis set (°",)n = 0, 1,2,... Is this an orthogonal set? 3. The following three vectors form a basis set: ¢1 = (1,2, 1); e2 = (1,0, —2); 3 = (0,4,5).Is this an orthonormal basis? If not, form an orthonormal basis through a linear combination of et, = 1,2, 3 4. Let ey = (1,0) and ey = (0, 1) be the unit vectors of a two-dimensional Euclidean space. Let x) = (2,3) and x2 = (1,2) be the unit vector of a ‘nonorthogonal basis. Ifthe coordinates of a point w are (3, 1) with respect to the Euclidean space, determine the coordinates of the point with respect tothe ‘nonorthogonal coordinate basis. 5. Let ex = (0.5,0.5) and e2 = (0, —1) be a biorthogonal basis. Determine the dial of this bass, fay aa] foi] _ few er2] [Or] gop . cy all : & al md —o a Je-[? Form (AB)" and B? A”, and verify that these are the same. Also check if AB is mma 30, MATHEMATICAL PRELIMINARIES 7. Find the eigenvalues and the eigenvectors for matrix A. 310 A=|1 22 023 Form the transform matrix P which makes P~!A Pa diagonal matrix. 8 Find the convolution of x(n) and Da(n) where x(n) = (1,3,0,2,4) form =0,1, ‘and Dz(n) is given in (2.45). Plot x(n) and h(n) = x(n) » Da(n) as sequences oft. ‘9. Find the z-transform of the following sequences and determine the ROC for each of them: cosina), 20 oo fe? io im, lensm © xi ={2m—n, melen smal 0, osherwise, 10, Find the z-transform of the system function forthe following discrete systems: (@) y(n) = 3x(n) ~ Sx(n— 1) +(0—3) (b) y(n) = 48(m) — 118(n = 1) + 58(n — 4), where in 50) = Jo) i REFERENCES 1. E. A. Guillemin, The Mathematics of Cireit Analysis. New York: John Wiley & Sons, 1949, 2. Ben Noble and James W. Daniel, Applied Linear Algebra. Upper Saddle River, NJ: Pren- tice Hall, 1988. 3. A. V.Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Upper Saddle Rivet, 3: Prentice Hall, 1989, CHAPTER THREE Fourier Analysis Since the days of Joseph Fourier, his analysis has been used in all branches of engi- ‘neering science and some areas of social science, Simply stated, the Fourier method is the most powerful technique for signal analysis. 
It transforms the signal from one fe +2nn), G53) where we have assumed T = 2x tobe the period of 2n/T = 1, and the Fourier series representation of f(¢) is ). Consequently, ay) = G54) with the coefficient cx given by co ef =I r= ae f, Sole Mar =a Drermme ma & 44° FOURIER ANALYSIS 1 ue F(t + mye de by peemnar, ass) 2 Sa Jorn ‘where a change of variable £ = 1 + 2rn has been used. Since the summation and the integration limits effectively extend the integration over the entire real line R, we may write ang [sonar ly ag! (k). G56) ‘where the definition of the Fourier transform has been used. Combining (3.53), 3.54), and (3.56), we have Poisson's sum formula, x (t+ 2an) = = 2 Fei. 57) For an arbitrary period 7, the formula is generalized to DY setan= +S Fikanyelto 2.58) If g(t) is a scaled version of f (0), thatis, git) = flat), a >0, (3.59) weave Fo) G.60) Poisson's sum formula for f(a) s x LS ph) aim, DY flart2ren =e LACS) . Gs artis renamed af, we have YS se+2xan = 7G eit 6.62) ‘Two other forms of Poisson's sum will be needed for derivations in subsequent sec- tions. They are stated here without proof. The proofs are left as exercises. POISSON'SSUM 45 fesse) = swe ass) % For )-Drewe ash fa Ri 3.6.1. Partition of Unity AA direct consequence of Poisson’s sum is that a basis may be found so that unity is expressed as a linear sum of the basis. We call this property the partition of unity Leta be 1/2 in (3.62) Poisson's sum formula becomes ¥ se+m DL Faxbe. 65) If the spectrum of a function f(t) € L?(R) is such that FOrk) =o, forkeZ, 3.66) thats, Fons and Foonk)=0 keZ\(0), it follows from (3.65) that Lfetmet. G67), Fa ‘The first and second orders of B-splines are good examples of functions satisfying this propery. First-Order B-Spline MO = x.y Bx) = kim, ye MQxk=0, ke Z\(0), 46 FOURIER ANAS Hence Dmeryst. G08) fe ‘Second-Order B-Spline MD = MO* MO G6) . 1, 110.0) =42-1, tett2) 7) 0, otherwise From the convolution property, we have Ro) = [Mo] @7) <. el Bue = (5*) an) ‘Again, we find ere that RaQ) =1 73) MQxk)=0, ke Z\ {0}. (3.74) Consequently, N2(t) also satisfies the conditions for partition of unity. Infact, from the recursive relation of the B-spline, Nm(t) = Nmn-1(0) *Ni(0) 75) [Mee vas, 675) fo we have Rw) = {1—€7!)/ja™, which aise the requremeot or patton of Uy Hence spies of arbi orders al have at propery, Graphic atons forthe prion fully ae shown in Pigire 33 3.7. SAMPLING THEOREM ‘The sampling theorem is fundamentally important to digital signal analysis, It states that if a signal (0) is bandiimited with bandwidth 20, the signal f() can be re- ‘constructed exactly from its sampled values at equidistant grid points. The distance between adjacent sample points, called the sampling period h, should not exceed 2/2. The function f 1) is recovered by using the formula £0 = fe pio ES @7 SAMPLING THEOREM 47 ee) MO _M-1) M43) =1 0 1 FIGURE 3.8 Partition of unity, If h = 1/2, the sampling frequency f, = 1/h = Om is called the Nyquist rate ‘Theoretically, f(#) can always be reconstructed perfectly from samples if h < 1/2. In practice, however, we cannot recover J (¢) without error due tothe infinite nature ‘ofthe sinc function. Let f(o) be the Fourier transform of f(t): Frey = f™ pererat. ‘The integral canbe approximated using Simpson's rl s Foo) = Fo) = 4D senyeH* G78) tt ‘Using Poisson's sum formula in (3.64), we can rewrite F(w) as Flo) =A fekie ho fat -E7(e24) afer Fier 3) 79) Hence F(w) contains 7(w) plus infinitely many copies of 7) shifted along the -axis. 
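The effect of these shifted spectral copies is easy to see numerically. The short MATLAB fragment below is an illustrative sketch only: the 900-Hz test tone and the two sampling rates are arbitrary choices, not values from the text. Sampling the same cosine at an adequate rate and at an inadequate one shows that in the second case a shifted copy of the spectrum folds into the band of interest, and the peak appears at 100 Hz instead of 900 Hz.

    % Aliasing demonstration: shifted spectral copies caused by sampling
    % (illustrative sketch; tone frequency and sampling rates are arbitrary)
    f0  = 900;                          % test-tone frequency (Hz)
    fs1 = 8000;                         % adequate sampling rate
    fs2 = 1000;                         % inadequate rate: 900 Hz folds to 100 Hz
    N   = 1024;
    x1  = cos(2*pi*f0*(0:N-1)/fs1);
    x2  = cos(2*pi*f0*(0:N-1)/fs2);
    X1  = abs(fft(x1));
    X2  = abs(fft(x2));
    subplot(2,1,1); plot((0:N/2-1)*fs1/N, X1(1:N/2)); title('fs = 8000 Hz: peak at 900 Hz');
    subplot(2,1,2); plot((0:N/2-1)*fs2/N, X2(1:N/2)); title('fs = 1000 Hz: alias at 100 Hz');

Increasing the sampling rate pushes the copies farther apart, which is exactly the condition derived next.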
In order for f() to be disjointed with its copies, the amount of shift, 2¢/h, 48° FOURIER ANALYSIS FIGURE 3.9 (a) Undersampling (S = 2x/h < 22); (b) oversampling (S = 2x/h > 22). ‘must be a least 20 (see Figure 3.9); that is, 2 Ber, hs G30) “To recover the orginal function, we introduce a spectral window Gale leis We) = {v: se G8) and recover Fw) by Fro) = Fwy Ww G82) From the convolution theorem we obiain (1): $= FO* WO 6.3) ‘Since W(t) = sin Qt/rt is well known, we compute F(t) from the inverse Fourier ‘transform ro Zh [7 Draneteeraw Laka ih ~ Ly san [7 eee a0 poe AY fkayaee — kB), 84) fa PARTIAL SUM AND THE GIBBS PHENOMENON 49 ‘where we have used (3.25). The Function J (¢) is recovered by using the convolution formula ro="Y fa [ * ae kW — dr & =1D sanwo—e fe _ sin(2@— ky] ah Oa G85) ‘where we have used S0h = =. We remark here that (3.85) represents an interpolation formula. Since sin{S2(¢ — RA)]/[S2( — kA)] is unity at ¢ = kh and zero at al other sampling points, the function value at kh isnot influenced by other sampled values: sin(0) teh) = fen OE = pean, 3. Fahy & LMP = FH), 3.86) Hence the function f() is reconstructed through interpolation of its sampled values with the sine function a the interpolation kernel. 3.8 PARTIAL SUM AND THE GIBBS PHENOMENON ‘The partial sum of a Fourier series is a least-square-error approximation tothe orig- inal periodic function. Let py(t) be the (2M + 1)-term partial sum of the Fourier series of a periodic function p(t) with period 7: a putt) = > axel, @87) the Fourier coefficients given by 1 pre a= ye a, 88 ntl anh 88) Putting ag back into (3.87), we have the partial sum. Sipe pu(t) = oy 7 L nortan de, G89) (On interchanging the order of summation and integration, we obtain 50 FOURIER ANALYSIS Putt) 1 SO elton ptr) eft de 1 fT? sin(M + 3) — Doo cp A IE ae, 3.90) 7 Lin sin ft — on om ‘which js the convolution between the orginial periodic function with the Fourier series kernel discussed in Section 3.2.2. We can easily see that the oscillatory char- acteristic of Ky is carried into the partial sum. If p(t) isa rectangular pulse train or 2 periodic function with jump discontinuities, the partial Fourier series will exhibit oscillation around the discontinuities. This is known as the Gibbs phenomenon. The percentage of overshoot remains constant regariless of the number of terms taken for the approximation, As M > 00, the sum converges to the midpoint at the dis- continuity (4). 3.9. FOURIER ANALYSIS OF DISCRETE-TIME SIGNALS ‘Since computation of Fourier series coefficients and Fourier transform requires inte- ration, the function must be describable analytically by elementary functions such as sine and cosine functions, exponential functions, and terms from a power series. In general, most signals we encounter in real life are not representable by elementary functions, We must use numerical algorithms to compate the spectrum. Ifthe signals are sampled signals, the discrete Fourier series and discrete-time Fourier transform ‘are computable directly. They produce an approximate spectrum of the original an Jog signal 3.9.1 Discrete Fourier Basis and Discrete Fourier Series For a given periodic sequence with periodicity N, we have Sola mN) = fpin), mez. 691) ‘The Fourier basis for this periodic sequence has only NV basis functions, namely, ex(n) = ek = O,1, WN 3.92) ‘We can easily show the periodicity ofthe basis set aI eee =e 93) FOURIER ANALYSIS OF DISCRETEIME SIGNALS 51 ince e/2*" = 1 for integer n. 
Therefore, the expansion of /p(n) isin the form Pa 1109 =F aie 035 = Fagen, 095 & and then we can compute the coefficients by (spin), e/a) 2 LE jn donee a6 & Equations (3.94) and 3.96) form a transform pair for discrete periodic sequences and their discrete spectra. It is quite easy to see from (3.96) that the Fourier coefficients (ax) are also periodie with W. 4 =Aeimn, — meZ, Example 1: Find the Fourier series coefficients forthe sequence Sin) = cost VSnn). SOLUTION: The given sequence is not aperiodic sequence since we cannot find an integer N such that f(n + N) = f(r). Consequently, f(n) doesnot have a discrete Fourier series representation. Example 2: Find the Fourier series representation of (2) f(r) = cos(nn/5), and ) Fem) = (1,1,0,0). SOLUTION: () Instead of computing the coefficients directly using (3.96), we may represent the cosine function in its exponential form, fa) ; (fn 4 er/2s/10") 91) ‘The periodicity ofthis sequence is seen as N = 10. Since (3.97) is already in the form of an exponential series asin (3.95), we conclude that 52 FOURIER ANALYSIS =9 898) 0, otherwise. (b) We compute the Fourier coefficients using (3:96) to obtain a ay =F (1b 7m"), k=0,1,2,3. Webave . k=0 oy 69) fat), k=3. ‘The sequence and its magnitude spectrum are shown in Figure 3.10. 3.9.2. Discrete-Time Fourier Transform [fa discrete signal is aperiodic, we may consider it tobe a periodic signal with period 'N =o. In this case we extend the discrete Fourier series analysis to the discrete time Fourier transform (DTFD), similar to the extension in the analog domain. In DIFF, the time variable (n) is discretized while the frequency variable (q) is con- tinuous since 2x do= lim > 0 ‘The DTFT pair is given explicitly by Fe= Oo sien @.100) so=H [Form do. 2.10) Example 3: Determine the spectrum of the exponential sequence Son) SOLUTION: Using (3.100) yields VneZt = (01. lal + ‘Therefore, replacing the variable z with e/* yields 1 Trees 7 Fe. FOewel= “The z-transform F(2) and the DTFT [7w) = F(2)lzaei] willbe used interchange- ably in future derivations and discussions of wavelet construction. 3.10. DISCRETE FOURIER TRANSFORM ‘The integral in the inverse DTFT discussed in Section 3,9 must be evaluated to re- cover the original discrete-time signal, Instead of evaluating the integral, we can “obtain a good approximation by a discretization on the frequency () axis. ‘Since the function f(t) is bandlimited (if itis not, we make it so by passing it through a lowpass filter with sufficiently large width), we need to discretize the interval [-, 2} only, namely nn NON a ned 3.103) = Wh zee 6.10) ‘The integral in 3.19) can now be approximated asa series sum, namely mea . Flom) bY foehyen!™* = Fo, @.104) rad where xt Fin =F eke to, 8.105) ra EXERCISES 55 ‘We can easly verify that evaluation of the discrete Fourier transform using (3.105) is an (V2) process. We can compute the disrete Fourier transform (DFT) with an (QW log N) operation with the well-known algorithm ofthe fast Fourier transform (FFT). One of the commonly used FFT algorithms is that of Danielson and Lanczos, according to which, assuming N to be such that it is continuously divisible by 2, a DFT of data length N can be writen as a sum of two discrete Fourier transforms, ‘each of length N/2. This process can be used recursively until we arive atthe DFT of only two data poins. This is known asthe radix-2 FFT algorithm. 
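As a small numerical check of these statements, the fragment below is a sketch (not the book's code) that evaluates the DFT sum of (3.105) directly, an O(N^2) computation, and compares it with MATLAB's built-in fft, which implements the O(N log N) algorithm just described. The test sequence is arbitrary.

    % Direct O(N^2) evaluation of the DFT sum, checked against the O(N log N) FFT
    N = 256;
    f = randn(1, N);                        % arbitrary test sequence
    F = zeros(1, N);
    n = 0:N-1;
    for k = 0:N-1
        F(k+1) = sum(f .* exp(-1j*2*pi*k*n/N));   % the sum in (3.105)
    end
    max(abs(F - fft(f)))                    % agreement up to round-off error

Both computations return the same coefficients; the difference lies only in the operation count.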
Without get- ting into many details ofthe algorithm, which the interested reader can obtain from many excellent books available on these topics, we simply mention here that by ap- proprately aranging the data of length N, where Nis an integer power of 2 (known 4s decimation-in-time and decimation in-frequency arrangements) we can compute the discrete Fourier transform in an O(N log N) operation. If N is not an integer power of 2, we-can always make it so by padding the data sequence with zeros. 3.11 EXERCISES. 1 Verify thatthe order of taking the complex conjugate and the Fourier transform ‘ofa function f € L2(—c0, 00) cam be reversed as follows: Fa =F-m for any n € R. 2 Check that the condition is equivalent tothe moment condition * dyae foray postive tee be 2 Stow tthe Diet kere ifi sin(n + Ju paw = ( : Spouts) = eee Plt te el orn = 6 4. Pine Fours f0) = elon <1 sel) = ee, 0 (4.12) 8s the window function. The Fourier transform of (4.12) is Fal) a>0, 4.13) ‘The window property of ge(t) can be computed using the formulas in Section 4.1 10 = 0, Ag, = Vat, and fe = 1/2Va. Observe that Age Af = 0.5 give tt = 0" attains the lower bound of the uncertainty principle. 62 TIME-FREQUENCY ANALYSIS 4.23 Time-Frequency Window Let us consider the window function (1) in (4.7). If * isthe center and Ag the radius ofthe window function, then (4.7) gives the information of the function f() inthe time window: [reo Ag tb 4 Ay] ay) ‘To derive the corresponding window in the frequency domain, apply Parseval’siden- tity G.41) to (4.7). We have Gf .8) [Lem beta @as) 1 ef” Hod ORE. a= Lett [” Fes@—heM do et [Fete el a =< [Foso-H] © 416) where the symbol “V" represents the inverse Fourier transform, Observe that (4.15) has a form similar to (4.7). If o* is the center and A@ is the radius of the window function $(), then (4.15) gives us information about the function f(«) inthe inter- val fo +8 - ado" +E +48] «9 Because of the similarity of representations in (4.7) and (4.15), the STFT gives in- formation about the function f(t) inthe time-frequency window: [rt o— ao. +b 4 a9] x [or +8- AG ot +E +d]. 418) Figure 43 represents graphically the notion ofthe time-frequency window given by (Gu17), Here we have assumed that #* = «* = 0. 4.24 Properties of STET Linearity Let f(@) = afi(t) + Bf2(®) be a linear combination of two functions ‘Ailt) and felt) with weighs a and B independent ofr. Then the STFT of (0), Gf (6,8) = (Ge fib, 8) + BGy V6.8). 19) isthe linear sum of the STFT of the individual function. Hence STFT is a linear ‘transformation. SHORETIME FOURIER TRANSFORM 63 bo by bn ' FIGURE 4.3. Time-frequency window for shor-time Fourier transform ('* = a" Time Shift Leting f(t) = J — 4). then Go folb,8) = [ F(0 y(t — bye“ dt = f * Fp ~ 6m yeHeH® dt HO Gy f(b — to 8). (420) Equation (4.20) simply means that if the original function f(t) is shifted by an amount fo in the time axis, the location of STFT in the time-frequency domain Will shift by the same amount in time while the frequency location will remain un- ‘changed. Apart from the change in position, there is also a change in the phase ofthe STFT which is direcly proportional to the time shift. Frequency Shift Leting #() be the modulation funtion ofa care signal!" such Lo) = Spel; 20, then the STET of fo(t) is given by 64 TIME-FREQUENCY ANALYSIS Gp folb.8) = is Fe (0 — eM at = Gof (b,§ — a0). 
(4.22) Equation (4.22) implies that both the magnitude and phase of the STFT of fo() remain the same as those of (0), except that the new location in the tc» domain is ‘shifted along the frequency axis by the carrier frequency ay. 4.3. DISCRETE SHORT-TIME FOURIER TRANSFORM ‘Similar tothe discussion of Section 3.10, we can efficiently evaluate the integral of (4.7) asa series sum by appropriately sampling the function f(t) and the window function $(). In its discrete form, the short-time Fourier transform can be repre sented as| xe Gof Onn) © Y” Fedde — bade HE, 423) i where haba hh, 424) and : 2nn = (425) TR? (4.25) In panicular, when h = 1, we have xt GF Fn) & YF Wok = nye FON. (4.26) & 43.1 Examples of STFT ‘We use an example similar to the one used in (2] to show the computation of the 'STFT and the effect of the window width with respect to resolution. The signal S(0) = sin2rvye + sin 2nvot + K [8¢—4)+8@—)) (4.27) consists of two sinusoids at frequencies of v; = 500 Hz and vz = 1000 Hz. and ‘two delta functions occurring at ty = 192 and f = 196 ms. We arbitrarily choose K = 3. We apply a rectangular window to the function and compute the STFT for four different window sizes. The signal and the window function are both sampled at 8 KHz. The window size varies from 16 to 2 ms and the corresponding number DISCRETE GABOR REPRESENTATION 65 Magnitude 512 1024 1536 2088 Time index (8000 somples/sec; 1 unit = 0.125msec) FIGURE 4.4. Signal for which the STFT is shown in Figure 45. ‘of samples in the windows are 128, 64, 32, and 16, respectively. Since the delta functions are separated by 32 samples, window sizes equal to of greater than 32 ‘samples are not narrow enough to resolve the delta functions. ‘Tocompute the STFT, we apply the FFT algorithm to the product ofthe signal and the window function. We compute a 128-point FFT each time the window is moved to the right by one sample. Figure 4.4 shows the function f(t), and the results of these STFTs are given in Figure 4.5. Initially, when the time window is wide, the delta functions are not resolvable at all. However, the two frequencies are well distinguished by the high resolution of the window in the spectral domain. As the window size gets smaller, we begin to see the two delta functions while the frequency resolution progressively worsens. At a ‘window size of 16 samples we can distinguish the delta functions quite easily, but the two frequencies cannot be resolved accurately. To resolve evens inthe frequency ‘and time axes, we must compute the STFT every time we change the window size (Computation load is a serious issue in using STFT for signal processing. 4.4 DISCRETE GABOR REPRESENTATION Formally writing the Gabor transform given in Section 4.2.2, we obtain G..f0,8) = [soar de She 66 TIME-FREQUENCY ANALYSIS a 2g = 16 msec 308 co oo °F 095 co or 08 = 4 msec To hem | FIGURE 4.5. STFT of signal shown in Figure 4.4 with different window widkh (24); ori- zontal ais is time (Second) andthe vertical axis is frequency (Fz). CONTINUOUS WAVELET TRANSFORM 67 for cc < b, £ < co. The Gabor transform is dense over the —f plane. Computation load for the Gabor transform in (4.28) is quite heavy. We may, instead of (4.28), compute the discretized version of (4.28). That is, we compute (4.28) only at a set of points on the -f plane: Gee flbn b= | FO galt = bade I de = (F0. galt ~ bye) = (FO. dna), (4.29) The las expression of (429) isthe inner product of the function with the function nat) = galt — bye! 
The function (0) may be recovered under the restricted condition (3) FO = OY Ge, fbn Fad Balt — dyed (430) we Equation (4,30) is known as the Gabor expansion, in which (Gp, f)(bn. 8) play the role of coefficients inthe recovery formula FO = YG, fbn fedon al) 431) me ‘The function ¢y,x(t) is a Gaussian-modulated sinusoid. Spread of the function is ‘controlled by a, while the oscillation frequency is controlled by &4. These “bullets of the =f plane form the basis of the Gabor expansion, Since the Gaussian function ‘has the minimum size of a time-frequency window, it has the highest concentration ‘of energy in the «~f plane. The Gabor basis ¢,.(0) appears to be a useful basis for signal representation. However, it lacks basic properties such as orthogonality, com- pleteness, and independence needed to achieve simple representations and efficient ‘computation. 4.5 CONTINUOUS WAVELET TRANSFORM ‘The STFT discussed in Section 4.4 provides one of many ways to generate a time frequency analysis of signals. Another linear transform that provides such analy- ses is the integral (or continuous) wavelet transform. The terms continuous wavelet ‘transform (CW) and integral wavelet transform (TWT) willbe used interchangeably throughout this book. Fixed time-frequency resolution of the STFT poses a serious ‘constraint in many applications. In addition, developments on the discrete wavelet ‘transform (DWT) and the wavelet series (WS) make the wavelet approach more stit- able than the STFT for signal and image processing. To clarify our points, let us ‘observe that the radii Ay and Az of the window function for STFT do not depend ‘upon location inthe rw plane. For instance, if we choose (t) = ga(t) asin the Ga- (68 TIME-FREQUENCY ANALYSIS A cD a FIGURE 4.6. Chirp signal with frequency changing linearly with time. bor transform (see Section 4.2.2), once a is fixed, so are Aga and AGa, regardless of the window location in the #~« plane. A typical STFT time-frequency window has been shown in Figure 4.3. Once the window function is chosen, the time-frequency, resolution is fixed throughout the processing. To understand the implications of such {fixed resolution, le us conser the chirp signal shown in Figure 46, in which the frequency of the signal inereses with time. If we choose the parameters ofthe window function 6() [a in the ese of ga(] such that Ag is approximately equal. AB, the STET as computed using (4.7) will be able to resolve the low-frequency portion ofthe signal beter, whi there willbe oor resolution of the high-requency portion. On the other hand, if Ag is approxi thately equal t CD, the low frequency wil not be resolved propery. Observe that if ‘Ag is very small, 4 wl be proportionally lage, and hence the low-frequency part wil be blured. ‘Our objective is to devise’ method that can give good time-frequency resolution aan arbitrary location in the 1a plane. In other words, we must have a window function whose radius increases in time (reduces in frequency) while resolving the low-frequency contents, and decreases in time increases in frequency) while resoly- ing the high-frequency contents of a signal. This objective leads us to the develop- rent of wavelet functions ¥() ‘The integral wavelet transform of 8 function f(#) € L? with respect to some analyzing wavelet yi defined as Wy s.a)= [ /0TraTat (432) a>0. 433) CONTINUOUS WAVELET TRANSFORM 69) ‘The parameters b anda are called translation and dilation parameters, respectively. ‘he normalization factor a~"? 
is inluded so that [Voe} = IVI : For ¥ to be a window function and to recover f(t) from its IWT, (+) must satisfy the following condition: Fo = [* vod =o, 434) {In addition to satisfying (4.34), a wavelet is constucted so that it has a higher order ‘of vanishing moments. A wavelet is said to have vanishing moments of order m if [lve pa 0...mat 435) Stricly speaking, integral wavelet transform provides time-scale analysis and not time-frequency analysis. However, by proper scale-to-frequency transformation (discussed later, one can get an analysis that is very close to time-froquency analy- sis, Observe that in (4.33), by reducing a, the support of Wo, is reduced in time and hence covers a larger frequency range, and vice versa, Therefore, 1/ais a measure of frequency. The parameter b, on the other hand, indicates the location ofthe wavelet ‘window along the time axis. Thus, by changing (b, a), Wy f can be computed on the entire time-frequency plane. Furthermore, because of the condition (4.34), we conclude that all wavelets must oscillate, giving them the nature of small waves and hhence the name wavelets. Recall that such an oscillation is not required for the win- 5. (4.40) fis the center and Ay is the radius of y(0, then Wy f(b, a) contains the infor- mation of f() in the time window [ar +b-ady,ar* +b+ady). aan) Let us apply Parseval’s identity to (4.32) to get an idea of the frequency window: Wy flb.a) = 4 [* sow (2) at 442) _ [pestered =F [L Fereare™ do. 443) (CONTINUOUS WAVELET TRANSFORM 71 From (4.43) itis clear thatthe frequency window is 4.44) The time-frequency window product = 2aAy x 24 = 44 A = constant Figure 4.7 graphically represents the notion of the time-frequency window for the wavelet transform. Comparing Figure 4.7 withthe corresponding Figure 4.3 for STFT, we observe the flexible nature ofthe window in the wavelet transform. For higher frequency (1/a2), the time window is small, whereas for lower frequency (1/a9), the time window is large. For a fixed frequency level, (1/ao), for exam- ple, both the time and frequency windows are fied, Recall hatin STFT the time frequency window is fixed regardless of the frequency level. Example: We perform a continuous wavelet transform on the same function used for computing the STFT. We choose the complex Morlet wavelet, given by we (445) to compute the CWT by + apt" by taut bata or FIGURE 4.7 Time-frequency window for continuous wavelet transform, 72 TIME-FREQUENCY ANALYSIS Frequency (Hz) Time (second) FIGURE 4.8 Continuous wavelet transform ofthe sigal shown in Figure 44 with More's wavelet. “The results are shown in Figure 4.8. The figure indicates good resolution of the events in both the time and frequency axes. If we choose an appropriate range for a, the transform needs to be computed only once to capture most, if not all, ofthe events ‘occurring i the time and frequency domain. 4.6 DISCRETE WAVELET TRANSFORM ‘Similar to the discrete Fourier transform and discrete short-time Fourier transform, ‘we have the discrete wavelet transform (DWT). However, unlike the discretized time and frequency axes shown ealier in Fourier analysis, here we take the discrete values of the scale perameter a and the translation parameter b in a different way. The interest here isto introduce the DWT and show the relationship between DWT and IWT. A detailed discussion of the DWT is presented in Chapter 7. Here we just mention that we will take a to be of the form 2~F and b to be of the form k2~*, where k, s € Z. 
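Before writing this out formally, here is a rough numerical sketch of evaluating the transform (4.32) at such dyadic points. Everything in it is an assumption made for illustration: the test signal, the sampling step, the Riemann-sum quadrature, and the use of a Morlet-type analyzing wavelet exp(-t^2/2) exp(j5t) in place of the exact expression in (4.45).

    % Sketch: W_psi f(k 2^-s, 2^-s) by direct numerical quadrature of (4.32)
    % (illustrative only; signal, wavelet constant, and step size are assumptions)
    dt  = 1/8000;                              % sampling interval of the test signal
    t   = 0:dt:0.25;
    f   = sin(2*pi*500*t) + sin(2*pi*1000*t);  % two-tone signal as in Section 4.3.1
    psi = @(u) exp(-u.^2/2) .* exp(1j*5*u);    % Morlet-type analyzing wavelet
    s   = 9;  a = 2^(-s);                      % dyadic scale; passband near 5/(2*pi*a) Hz
    W   = zeros(1, 11);
    for k = 0:10
        b = k * 2^(-s);                        % dyadic translation b = k 2^-s
        W(k+1) = (1/sqrt(a)) * sum(f .* conj(psi((t - b)/a))) * dt;   % eq. (4.32)
    end
    abs(W)                                     % |IWT| samples along b at this scale

The normalization 1/sqrt(a) is the factor 2^{s/2} that appears in the discretized form given below.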
With these values of a and b, the integral of (4.32) becomes Wy S022 zn f SOW! — Wat (4.46) WAVELETSERIES 73 Lets now drt the function /(). For simpy, assume the sampling ate ‘be I. In that case, the integral of (4.46) can be written as ae Wy fe 1) 27 Fema hy aan Tocompue the wavelet rast fton some poiatnthe timescale ae, te donot ned o know te fant sales ate ine i ll we eds the fncon tthe aes of tine Which te wel stone, Cnsesuety Evaluation ofthe weet tno can be done matinee, We dacs Slats to compte te wavelet weno nach ‘One ofthe impo obcvaton shot (247) ite near nar. The DWT of fancied in ine sue difecm fom th DWT of he oa fncon To explant fare Salt) = f(t). (448) This gives ny sat-n-a2 f° joverra RPAY sen —m vin —h PY Fw [2'n — k — m2"y) ~ Wy [e222] 449) Therefore, we see that for DWT, a shift in time of a function manifests itself in a rather complicated way. Recall that a shift in time of a function appears as a shift in time location by an exact amount in the case of STFT, with an additional phase shift. Also inthe Fourier transform, the shift appears only as a phase change in the frequency domain. 4.7 WAVELET SERIES Analogous tothe Fourier series, we have the wavelet series. Recall that the Fourier ‘series exists for periodic functions only. Here, for any function f(r) € L?, we have its wavelet series representation, given as FO= LY wists, (4.50) oe where Vist) = 2" yt —b, 51) 74 TIME-FREQUENCY ANALYSIS ‘The double summation in (4.50) is due tothe fact that wavelets have (wo parameters: the translation and scale parameters. For a periodic function p(t), its Fourier series is given by peo = rael 432) c Since (ek €Z) isan orthogonal basis of L?(0,2), we can obtain cas 1 ae (Poel). (453) (On a similar line, if (¥a.s(t) : k.s € Z} forms an orthonormal basis of L7(R), we can get wes = (FO Yas )mn 454) kA coew fe =? Wor (Foz) 435) ‘Therefore, the coefficients {w+} in the wavelet series expansion of a function are nothing but the integral wavelet transform ofthe function evaluated a centain dyadic points (k/2*, 1/2"). No such relationship exists between Fourier series and Fourier transform, which are applicable to different classes of functions; Fourier series ap- plies to functions that ae square integrable in (0,2), whereas Fourier transform is for functions that are in £2(R). Both wavelet series and wavelet transform, on the other hand, ae applicable to functions in L2(R). "Tf (¥s()} isnot anonthonormal basis, we can obtain wx. sing the dual wavelet {es} a8 wes = (FO, Ves(O)}- The concept of dual wavelets will appear in subsequent chapters. 4.8 INTERPRETATIONS OF THE TIME-FREQUENCY PLOT Let us briefly discus the significance ofa surface over the time-frequency plane ‘Usually the height ofa point onthe surface represents the magnitude ofthe STFT or the IWT. Suppose the given function is such that its frequency does not change with time; then we should expect a horizontal line parallel tothe time axis in the time— frequency plot, corresponding tothe frequency ofthe function. However, because of| the finite suppor of the window function and the truncation of the sinusoid, instead of getting a line we wil see a (widened line) band near the frequency. To understand ‘tore clearly, let us considera truncated sinusoid of frequency dy. We assume, for the purpose of explaining the time-frequency plot here, tht even though the sinusoid is truncated, its Fourier tansform is represented as 5(0 ~ 0). 
By replacing fw) = 5(e — 09) in (4.16) and (4.43), respectively, we obtain lie [GoF0.8)| = 5 [loo — 8] 456) INTERPRETATIONS OF THE TIME-FREQUENCY PLOT 75 fan [y106.0)| = 2 [Btaon}. asp It is clear from (4.56) and (4.57) that |G f(b, £)| and [Wy f(b, 8)] do not depend ‘pon b. On the frequency axis, since [@(0)| = 1, and assuming that her slow 6, we will get the maximum magnitude of STFT at = aa, Then there will be a band around § = ap, the width of which will depend on Ap the radios of Bo) Interpretation of (4.57) isa litle complicated since, unlike STFT, wavelet trans- form does ot ive a ine -tequnc lt daly Let ws coer pe frequency axis such that » point fom te Fw] = max (|Pe]; w€ 0,009). 58) Focallpaccal purposes, we may tke a = ai. ‘ow if we consider a variable § = w/a and in terms Hivel ates /$/a and rewrite (4.57) of the new (459) ‘Therefore, the maximum value ofthe wavelet trans > «transform (4,57) wil occur at § = hoecsirem = ob, depending on the radius A¥ of the wavelet F(a), ° ‘ournextexample, let (©) = 5(¢~1)- Since ths function has al the frequency components, we soul expecta vertical line in he tine-fequeny pane Substituting f() = 5(¢ ~ tp) in (4.7) and (4.32), we obtain [Go40.8)] = 160 ~6)1 (4.60) [Wy 70.0)] = zi 2 (sty ‘Gitlanation of the STFT is straightforward. As expected, it does not depend on & Wee. Ee vertical ine parallel tothe frequency axis nea b= o withthe time spread ined by Ag. For wavelet transform we observe that it depends on the scale Parameter a. Rewriting (4.61) in terms of the new variable &, we have Eb(Ee-»). ae {Although all the fequency contents ofthe delta function in time ar indicted by (82), itis clear that as we reduce § the time spread increases, Furthermore, the location ofthe maximum will depend on the shape of y(t). Readers are referred ve ~ (4 for more information on the interpretation oftine-frequency plots moe 76 TIME-FREQUENCY ANALYSIS 4.9 WIGNER-VILLE DISTRIBUTION ‘We have considered in previous sections linear time-frequency representations of 4 signal. The STFT and CWT are linear transforms because they satisfy the linear superposition theorem. That is, Thay fi +02 fe] = aT fil + oT Lf (463) where T may represent either the STFT or the CWT, and fi(e) and fo(r) are two different signals inthe same class with coefficients cr and a. These transforms are ‘important because they provide an interpretation to the local spectrum of a signal at the vicinity of time t. In addition, easy implementation and high computation efficiency of their algorithms add to their advantages. On the other hand, these linear transforms do not provide instantaneous energy information of the signal at specific instant of time, Intuitively, we want to consider a transform of the type [lite orem arm [Pee ae, Since it is not easy to determine the energy of a signal at a given time, itis more ‘meaningful to consider the energy within an interval (1~r/2, 1441/2) that is centered around the time location f. For this purpose, the Wigner-Ville distribution (WVD) is defined by win 2 [sed iGepem an a6 ‘The constant 1/2: is a normalization factor for simplicity of computation. We should ‘ote that the linearity property no longer holds forthe equation above. The Wigner— Ville distribution is a nonlinear (or bilinear) time-frequency transform because the signal enters the integral more than once. One may also observe that the Wigner Ville distribution at given time # looks symmetrically to the left and right sides of the signal ata distance t/2. 
Computation of W ;(, ) requires signa information at 4/2 and cannot be carried out in real time. Example 1: Letus considera chirp signal that is modulated by a Gaussian envelope: ayia (a. bet r= (2) oo +a). (465) where exp(—ar?/2) isthe Gaussian term, exp(— jbt?/2) i the chirp signal, and e/°% is.a frequency-shifting term. The Wigner-Ville distribution from (4.64) yields 1 (3) feel ia 4 jt +7 + jao(1 | WIGNER-VILLE DISTRIBUTION 77 sos ]a 12 ait f° (ast ena [oe (= ste + jae on) fy (4.66) Using the Fourier transform of a Gaussian function as given in Chapter 3, the WWD of a Gaussian sinusoid-modulated chirp is a? (@- > ae oma) aon ‘The function and its WVD are shown in Figure 49. Example 2: A sinusoidal modulated chirp signal is given by be LO=e0 (i+ a) : (4.68) FIGURE 4.9 Wignes-Ville distribution of a Gaussian-modulated chip signal, 78 TIME-FREQUENCY ANALYSIS ‘We compute the WWD straightforward to obtain LUE b+ 1/2? = Eff) et vn(+9)] rong joe L [7 csotinee + jooe — jonae = [srt + jour — jo 2 8(0~ 00 b9 46) Wy(t.0) Example 3: We compute the WVD of a pure sinusoidal signal e/*" by setting the chirp parameter b to 2er0. Therefore, 8 <=> 5( — 09). (470) ‘The WYDs of (4.67) and (4.69) om the time-frequency plane area straight line with slope b and a straight line parallel to the time axis, respectively. They are given in Figures 4.10 and 4.12. Figure 4.11 shows the WVD of a Gaussian-modulated sinusoidal function, FIGURE 4.10 Wigner-Ville istibuton of a chirp signal 2 BY ous FIGURE 4.11 Wigner-Ville distribution of sinusoidal function. x sto nue ff ff Mh a ELEN EEO FIGURE 4.12 Wigner-Vlle distribution ofa Gaussian modulated sinusoid 180 TIME-FREQUENCY ANALYSIS 4.10 PROPERTIES OF THE WIGNER-VILLE DISTRIBUTION ‘There are several general properties of WWD that are important for signal represen tation in signal processing. Some of them are discussed in this section. It has been shown [5] thatthe Wigner-Ville distribution has the highest concentration of signal ‘energy in the time-frequency plane. Any other distribution that has higher energy ‘concentration than WVD will be in violation of the uncertainty principle. Further- ‘more, if cannot satisfy the marginal properties discussed in this section. 4.10.1 A Real Quantity ‘The Wigner-Ville distribution is always real, regardless of whether the signal is real cr complex. This can be seen by considering the complex conjugate of the Wigner Ville distribution: Wao = W(t, 0). am ‘Wigner-Ville distribution is always real but not always positive. Figure 4.13 shows the WVD of a function that becomes negative near the center. Consequently, WVD may not be used as a measure of energy density or probability density. 4.10.2 Marginal Properties (Of particular concem to signal processing is the energy conservation. This is ex- ‘pressed by the marginal properties ofthe distribution: Wylt,0) deo = [fF 7) [Ew ie0de= Feo? an) “Marginal (density) expresses the energy density in terms of one ofthe two variables alone. If we wish to find the energy density in terms of f, we simply integrate (sum ‘up) the distribution with respect to , and vice versa. The total energy of the signal ‘can be computed by a two-dimensional integration ofthe Wigner-Ville distribution ‘over the entire time-frequency plane. es ["vora= * Festae= [~ [~Wrt.0) dod at (QUADRATIC SUPERPOSITION PRINCIPLE 81 e EB Jos| FIGURE 4.13. 
Potindeatng tht Wigner dstibaton may be meine 4.10.3 Correlation Function Meat oats Thee functions in the time or frequency domains easily ne) 7 - SOFT de = Wy (0,0) a7 volo = f° © SOMT@FA d= W (0,0). 475) 4.11 QUADRATIC SUPERPOSITION PRINCIPLE ‘We recall that WVD is a nonlinear distribution where the linear superposition prin- ple does not apply. For instance, et a multicomponent signal be FO= DY ho. 476) ‘The Wigner-Ville distribution of this signal is 82 TIME-FREQUENCY ANALYSIS Welia) = watod+ So Fi Waste. — a7 where W j(t, 0) is called the auto-term of the WWD, while W ,e(t, 0) is a eross- term defined by Waatior= x fo al) R05 ‘These cross-terms of the WVD are also called interference terms, which represent the cross coupling of energy between two components of a multicomponent signal ‘These interference terms are undesirable in most signal processing applications, and much research efforthas been devoted to reducing the contribution of these terms. We ‘must remember that these cross-terms [6, 7] are necessary for perfect reconstruction ofthe signal. In signal detection and identification applications, we are interested in discovering only those signal components that have significant energy. The cross- terms are rendered unimportant since reconstruction of the signal is not necessary. ‘In radar signal processing and radar imaging, the signals to be processed have a time-varying spectrum like that ofa linear chirp or quadratic chirp. Using either the STFT oF WT torepresent a chirp signal loses resolution inthe time-frequency plane. However, the WVDs of these signals produce a well-defined concentration of energy in the time-frequency plane, as shown in Figure 4.10. For multicomponent signals. the energy concentration of the WVD wall be far apart in the time-frequency plane if the bandwidths of the components are not overlapped too much (see Figure 4.14). tt de. (4-78) 0 = o o o o mot FIGURE 4.14 Wigner-Vile distribution of « multicomponent signal AMBIGUITY FUNCTION 83 ‘However if this is not the case, certain cross-interference reduction techniques must ‘be applied, and that leads to the reduction of resolution. 4.12. AMBIGUITY FUNCTION ‘The ambiguity function (AF) isthe characteristic function of the Wigner-Ville dis- ‘sibution, defined mathematically as Ayu») LL Pemba) at de (429) While the Wigner-Ville distribution is a time-frequency function that measures the ‘energy density ofthe signal on the time-frequency plane, the ambiguity function is a distribution that measures the energy distribution over a frequency-shift (Doppler) ‘and time-delay plane. This is a very important function in radar signal process particulary in the area of waveform design. We shall see some applications of function toward the end of this book. ‘Apart from a complex constant, we may express the AF in terms ofthe signal as Lae DeD ‘where K is a complex constant. The proof ofthis relationship can be found in (8). FFor further information on the ambiguity function, readers are referred to [9]. Fig- Agno = (4.80) FIGURE 4.15 Ambiguity funtion ofa chip signal. B84 TIME-FREQUENCY ANALYSIS ture 4.15 shows the AF fora chirp signal. This time-frequency representation will be revisited in Chapter 9, where a combination of waveler packets and the Wigner-Ville distribution is applied to radar signal detection, 4.13 EXERCISES 1. Verify that for any function ¥ € L?(—90, 00), the normalized function given by Vast) = 272 2% — b) fork, s © Z, 1 © R, has the seme L? norm as y= [Lwora=f WesPa, keeZ. 2. 
Consider the window function ga(t) = e~*",a > 0. Compute the window ‘widths inthe time and frequency domains and verify the uncertainty principle, 3. The hat function Np is defined by fost! mo [3 , feel ere? eee Compute the time-frequency window for Na). 4. Show that FON? = sep Jf 1Gos(b.6)]° bas. ‘5. Given that f(¢) = sin 1”, and using the raised cosine as the window function Iteosten), Inst 20 \c otherwise, plot the window shifted time functions f(t) = 8G =3)f (0) and f(t) and their spectra. ‘Consider the time-frequency atoms or the kernel Re[(s —Ael**! + 6 6helB™ Re [ote — Ael*** +466 ed]. Plot the spectral energy density of the two time-frequency atoms. Comment on the time-frequency resolution ofthe two atoms. 6. In the CWT, show that the normalization constant 1/-/@ is needed to give WOI= lo]. 7. Show that the energy conservation principle inthe CWT implies that pe pe aoe [roman 2 [7 fm soompremat COMPUTER PROGRAMS 85 8. Show thi the foqueny window width of a wavelet ¥ is (1/a\o%, - AG), (/ayos + AW). 9. Identity the reason for dividing the frequency axis by 2 inthe program wvd.m. 4.14 COMPUTER PROGRAMS: 4.14.1 Short-Time Fourier Transform . ‘PROGRAM st fe.m ® & short-time Fourier Transform using Rectangular window [0,1] 8 generates Figure 4.5 . & signal & frequency ‘Acampling rate tL = 0.192; % location of the delta function £2 = 0.196; = 1:2048; b= Ucn) /ry £ = sin(2*pitvite) + sin(2epitv2tey; ketler fte) = £00 + 37 ket; fe) = £(k) + 3; plot (t, £) axis((0 0.24 -2 21) Sigure(2) 8 STP computation N = 16 8 rectangular window width bot = 0.1; Ri = 0.175; for i for b= 1:2048-Nel fb = E(bibeN-1) (86 TIME-FREQUENCY ANALYSIS Eft = abs (Efe (fb) )2 STFD(b,:) = ££tfb(1:ND2); ena ¥ Plot Ncolor = 256; colormap (gzay (NColor) ); SIFT pin = min(min(STPT) STFT_max = max(max(STPD) SIF? = (SIFT - STPT_max) ‘thmes (0:2048-M)/: freq = (O:Nb2-1) * r / Ny Neolor / (STPT_min - STPTmax); axes ('position’, (0.1 bot 0.8 hi}}) image (tine, freq, STFT") axis({0 0.24 0 2000}) yrickmark = (0 500 1000 1500 2000); set (gea, "YDir', ‘normal’, "ytick’, ¥Rickmark) hold on; NeNt2 bot = bot + 0.225; clear SIFT; clear time; clear freq end set (got, ‘paperposition’, (0.5 0.5 7.5 10)) 4.14.2 Wigner-Ville Distribution [PROGRAM vd. Computes Wigner-Ville Distribution @ signal x = 4000; & sampling rate b= (0:255) / 4; omegai = 2.0 * pi * 500.0; £ = exp(itonegal * t) ‘& WD Computation Nelength(£) 4€ (mod (w,2) f= (£01; COMPUTER PROGRAMS 87 Neueay end Nant = 2 - Nb2 = N / 2; for m= 1. 8 = zeros(1,Namt); (N= (m-4) :Nam1~(-1) ) 8 = conj(flipir(s)).*8; 5 = e(Nb2:Nam1-Nb2); shat = abs(fft(s)); ® & Normalize with the number of overlapping terms . im ce m2 shat = shat / (2 *m~ 1); shat = shat / (2 *N-2* m+); end wvd (m, #) shat (2282) 7 end ¥ Plot time = (0:N-1) / ry freq = (0:Nb2-1) #3 /N/ 2 Neolor = 256; colormap (gray (NColor) ); wed_min = min(min(wva)) ; wvd_max = max(max (wd) ) wed = (wvd ~ wwd max) * NColor / (wvd_min ~ wax); image (time, freq, wa"): & Because of the finite support of the signal, there will % ena effects xlabel ("Time (seconds) '); ylabel (‘Frequency (Hz) "); set (gca, ‘YDir’, ‘normal’) 188 TIME-FREQUENCY ANALYSIS REFERENCES, 1. D, Gabor, “Theory of communication” J. IEE (London), 93, pp. 429-457, 1946, 2. I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Set. Appl. Math. #6. Philadelphia: SIAM, 1992. 3. J.B. Allan and L. R, Rabiner, “A unified approach to STFT analysis and synthess;” Proc. (Of IEEE, 65, pp. 1558-1564, November 1977, 4, A. Grossmann, R. Kroaland-Marinet, and J. 
Morlet, "Reading and understanding continuous wavelet transforms," in Wavelets, Time-Frequency Methods and Phase Space, J. M. Combes, A. Grossmann, and Ph. Tchamitchian (Eds.). Berlin: Springer-Verlag, 1989, pp. 2-20.
5. T. A. C. M. Claasen and W. F. G. Mecklenbräuker, "The Wigner-Ville distribution: a tool for time-frequency signal analysis: I. Continuous-time signals," Philips J. Res., 35, pp. 217-250, 1980.
6. A. Moghaddar and E. K. Walton, "Time-frequency distribution analysis of scattering from waveguide cavities," IEEE Trans. Antennas Propag., 41, pp. 677-680, May 1993.
7. L. Cohen, "Time-frequency distributions: a review," Proc. IEEE, 77, pp. 941-981, 1989.
8. Boualem Boashash, "Time-frequency signal analysis," in Advances in Spectral Estimation and Array Processing, Vol. 1, S. Haykin (Ed.). Upper Saddle River, NJ: Prentice Hall, 1991, Chap. 9.
9. C. E. Cook and M. Bernfeld, Radar Signals. San Diego, Calif.: Academic Press, 1967.

CHAPTER FIVE

Multiresolution Analysis

Multiresolution analysis (MRA) forms the most important building block for the construction of scaling functions and wavelets (Chapter 6) and for the development of algorithms (Chapters 7 and 8). As the name suggests, in multiresolution analysis a function is viewed at various levels of approximation or resolution. The idea was developed by Meyer [1] and Mallat [2, 3]. By applying the MRA we can divide a complicated function into several simpler ones and study them separately. To understand the notion of MRA, let us consider a situation where a function consists of slowly varying and rapidly varying segments, as illustrated in Figure 5.1. If we want to represent this function at a single level of approximation, we have to discretize it using a step size (h) determined by the rapidly varying segment. This leads to a large number of data points. By representing the function using several discretization steps (resolutions), we can significantly reduce the number of data points required for accurate representation. The coarsest approximation of the function, together with the details at every level, completely represents the original function. Observe that with every level (scale), the step size is doubled. This corresponds to octave-level representation, familiar in audio signal processing. In addition to this specific example, there are many situations in signal processing, as well as in computational electromagnetics, where multiresolution analysis can be very useful.

In this chapter we begin with an understanding of the requirements of MRA. Two-scale relations and decomposition relations are explained. Cardinal B-splines, discussed in Section 5.5, generate an MRA and form the basis of most of the wavelets discussed in this book and elsewhere. Finally, in Section 5.6 we discuss how to map a given function into an appropriate subspace before starting an MRA.

5.1 MULTIRESOLUTION SPACES

Let us go back to Figure 5.1. Every time we go down one level by doubling the step size, we remove certain portions of the function, shown on the right-hand-side plots. Then there are the "leftover" parts, which are further decomposed.

[Figure 5.1: Multilevel representation of a function.]

In Figure 5.1 we assign all the functions on the left-hand side to A_s and those on the right-hand side to W_s, where s represents individual scales. Let A_s be generated by the bases {φ_{k,s} := 2^{s/2} φ(2^s t - k) : k ∈ Z} and W_s by {ψ_{k,s} := 2^{s/2} ψ(2^s t - k) : k ∈ Z}.
In other ‘words, any function x,(¢) and y, (0) can be represented as the linear combinations of s(t) and Wr. (, respectively. ‘Observe thatthe functions x,-1(0) € As-1 and ys-1(2) € Ws-1(t) are both derived from x, € Ay. Therefore, we should expect thatthe bases dy..—1 of Asi and Ye..-1 Of W,-1 should somehow be related to the bases dy,» of As. Such relationship wil help in devising an algorithm to obtain the functions x1 and y,-1 from x, more efficiently. ‘To achieve a multiresolution analysis ofa function as shown in Figure 5.1. we must have a finite-energy function (0) € L?(R), called a scaling function, that ‘generates a nested sequence {Aj}, namely (CAL CADCAIC 9, MULTIRESOLUTION SPACES 91 ‘and satisfies a dilation (refinement) equation 60 = Daolk] oar — 8) for some a > 0 and coeficients {gofk}} € #2. We will consider a = 2, which corresponds to octave scales. Observe thatthe function (1) is represented as a su- petpositon of a scaled and translated version of tself—hence the name scaling func- tion. More precisely, Ao is generated by {@(+—) :k € Z) and in generl, Ay, by {1s 'k, 5 € Z}. Consequently, we have the following two obvious results: 20D 6 Ay © 20 € At oy xD EAs @ 0427) CA, 62) ‘There are many functions that generate nested sequences of subspaces. But the properties (5.1) and (5.2), and the dilation equation are unique to MRA. Foreach s, since A, is a proper subspace of A, there is some space left in Ay, called W,, which when combined with A, gives us A, This space {W,) i called the wavelet subspace and is complementary o Ay in Ay. meaning that AW, =(0, se% 63) Ar OW, = Avs. 64) With the condition (5.3), the suramation in (.4 is referred to asa direct sum and the decomposition in (54) as a direct-sum decomposition Subspaces {W,] are generated by y(4) € L?, called the wavele, in the same way as (Ay) i generated by (0). In other words, any x4 (0) € A, can be written as (0) = aes ORK), 5) and any function y, (1) € W, canbe writen as yet) = Dane Ott) 66 cs for some coefficient (a.)tezs (wh sez € @. Since Asti = Ws @Ay WOW.1 8A. =WoW10W28-, en wehave éw. 20 92 MULTIRESOLUTION ANALYSIS Au Aut Wat Auea War Aus [Was FIGURE 5.2 _Spliting of MRA subspaces. Observe thatthe (A,) are nested while the (W,} are mutually orthogonal. Conse- ‘quently, we have AcMAn = Aq m>e WiOWn = (0), &#m AcOWn = (0, €e moe tem tem We LA, Aen Wm = (0) fore %y € Ay, and then we can use yas analyzing and {as synthesiz~ ing wavelets, ‘Tn addition to biorthogonal and orthonormal decomposition, there is another class ‘of decomposition, called semiorthogonal decomposition, for which A, 1 W,. Since in this system, the scaling function and wavelets are nonorthogonal, we still need their duals, $ and ¥. However, unlike the biorthogonal case, there is no dual space. ‘That i, @, 8 € Ay and y, € Wa, for some appropriate scales. In this system itis ‘very easy to interchange the roles of g, yr with those of 6. For semiorthogonal scaling functions and wavelets, we have (o@-b, 80-9) bee bE 6.19) (Wie Fem) =O forj#l and j.k.LmeZ. (5.20) ‘The wavelets (g, y} are related t0 (8, as 4 Go) ) S21 4@ = Bam 621) and Fo = 2 ¥o = Fam (622) with Exley = So w+ 2b = So Anbe, (623) me 10 ‘where Ae(t) is the autocorrelation function of x(t). For a proof of (5.23), see Sec tion 7.6.1. Observe thatthe relation above is slightly different from the orthonormal- ization relation (2.35) in that here we do not have a square root in the denominator. 
In Chapter 6 we discuss the construction of all the scaling functions and wavelets that we have discussed. 96 MULTIRESOLUTION ANALYSIS 5.3. TWO-SCALE RELATIONS ‘Two-seale relations relate the scaling function and the wavelets ata given scale with the scaling function atthe next-higher scale. Since OW EAC AL 624) ve WoC AL, 625) we should be able to write (1) and ¥(¢) in terms of the bases that generate Ay. In other words, there exist two sequences {go(A]},(gi{A)) € & such that $0) = golklo@r — (5.26) 7 We) =D silkl62t - b. 627) T Equation (5.26) and (5.27) ae referred to as rwo-scale relations. In general, for any 4j ©, the relationship between Ay and W, with Ayu is governed by 92/1) =D goth!" — by T wait) =P eile - &. F By taking the Fourier transform ofthe two-scale relations, we have He) = G04 (5) 628) Hw) = G19). (5.29) where Gute) = $ Dawei 630) Gu = 3 rate, (3 with z = ¢~/0”2, Observe that the definitions in (5.30) and (5:31) differ slightly from those used in Chapter 2 for z-transform. An example of a two-scale relation forthe Haar case (#1) is shown in Figure 5.3. Expansions of (5,26) and (5.27) lead to (5.32) DECOMPOSITION RELATION 97 bu € Ao ene Ay ouQr— DEA, FIGURE 5.3. Two-scale relation for Haar case (go(0) = gol1] = 15 a110) = —arlt} = 1s oll] = gi] = 0 for al other). (533) ‘Since the scaling functions exhibit the lowpass filter characteristic [$(0) = 1], all the coefficients (go{k}} add up to 2, whereas because of the bandpas filter characteristic ‘of the wavelets (9(0) = 0), the coefficients (g1[}) add upto 0. 5.4 DECOMPOSITION RELATION Decomposition relations give the scaling function at any scale in terms of the scaling function and the wavelet atthe next-lower scale. Since Ai = Ao + Wo and (21), Gr = 1) € Ai, there exist two sequences ({ho{K]} . (fn {k]}) in & such that G2) = J thol2k]o( — k) + MiL2kW — #)) F o@r-1 YX brol2k — 1166 — &) + [2k - MWe —) ‘Combining these two relations, we have $2t- 0) =D tholk — C1) + mI2k—EIWE-—K)) (534) T 98 MULTIRESOLUTION ANALYSIS eu@ye Ay dH O/2€ Ay va(/2€ Wo gut Ie Ay PH /2€ Ao FIGURE 5.4 Decomposition elation for Haar case (hofO] = ho[—1] = 1/2: Ay{0] = hil) = 1/2: halk] = hy (4) = O for al ther), forall €Z.In gener, wee 40-9 =) fink ap2i—y+hik-eWex—H}. 635) , Figure 5.4 shows an example of decomposition relation for the Haar case (H). 5.5. SPLINE FUNCTIONS (One of the most basic building blocks for wavelet construction involves cardinal B- splines. Complete coverage of spline theory is beyond the scope of this book. In this section we describe briefly spline functions and their properties that are required to understand the topics discussed in this book. For further details one may refer to ‘many excellent books (¢g., (4-8). ‘Spline functions consist of piecewise polynomials (see Figure 5.5) joined together ‘smoothly atthe break points (knots: (0,1, ...), where the degree of smoothness de- pends on the order of splines. For cardinal B-splines, these break points are equally spaced. Unlike polynomials, these form local bases and have many useful properties that can be applied to function approximation, ‘The mth-order cardinal B-spline Nj (#) has the knot sequence {...,—1,0, 1...) and consists of polynomials of order m (degree m — 1) between the knots. Let 'Ni(®, = xio.n(6) be the characteristic function of {0, 1). Then for each integer 'm > 2, the mth-order cardinal B-spline is defined, inductively, by SPLINEFUNCTIONS 99 LO 0 4 n 8 % FIGURES.5 Piecewise polynomial functions. Nat) = (Nnet # NO 636) = [qt -menes 1 -f Nn-i(t = 3) de. 
637 lo ‘A fast computation of Nyt) for m > 2 can be achieved by using the formula (7, p13] Nm eg e+ BS My t= recursively until we arrive atthe first-order B-spline Nj (see Figure 5.6). Splines of orders 2 to 6, along with their magnitude spectra, are shown in Figure 5.7. The most commonly used splines are linear (mt = 2) and cubic (m = 4) splines. Their explicit expressions are fs re (0.1) N= 42-4 Fe [1,2) 638) 0, ‘elsewhere 100 MULTIRESOLUTION ANALYSIS No T FIGURE 5.6 Nth spline of order 1. 1 moO | / \' (| o——_—__—____—__ Q 0 : mee 0 10 2 { Natt) IWa(op1 0 t 1 2 -10 0 10 2 Ns) 1X31 3 -10 0 10 20 J\ ue 3 4 7 ie 0 10 20 J\ tion 3 2 1 2 3 4 6 * -10 0 10 20 Nott) \\ 1Ro(ot 12 3 4 5 6 %e ~10 oO 10 20 FIGURE 5.7 Spline fonctions and their magnitude spectra. 8, 4~ 12 + 127-38, Na(t) = } 44 + 601 — 2407 +315, 64 —48r 4127-0, 0, SPUINEFUNCTIONS 101 re O1 rel re(23) reB.4) elsewhere. 6.39) In many applications we need to compute splines at integer points. able 5.1 gives spline values at integer locations. The symmetry property can be used to get values at other points. ‘To obtain the Fourier transform of Nm (¢), observe that (5.36) can be written as, Nn (t) = (Mie # NO. (6.40) ‘Therefore, (641) 6.42) TABLES.1 Canlinal B-Splines at integer Points k @m—DWn® | FIN) [| FO DNn m=3 malt 1 1 1 1 nm 2 2 1,013 1 3 3 47,840 2 4 4 455,192 m 5 1,310,354 1 1 1 m=i2 2 n 2 1 1 m=6 3 2 2,036 1 1 4 3 132,637 2 26 4 2,208,488 3 66, 1 5 9,738,114 m=7 2 6 15,724,248 1 1 3 2 7 4 3 302 3 102 MULTIRESOLUTION ANALYSIS ‘The important property of splines for our purposes isthe fat that they are scaling functions. That i, there exists a sequence (go{m, k]} © € such that Nm(t) = > gol, k] Ng (2t ~ b). 643) 7 In Chapter 6 we derive an expression for gol, 1 5.5.1 “Properties of Splines ‘Some important properties of splines, relevant tothe topics discussed in this book, are ‘discussed in this section without giving any proof. Proofs of some of the properties are left as exercises. 1. Supp Nix = [0, ] with Nn(0) = Nim) = 0. 2. Nm(t) € C"™?; Chis the space of functions that are k times continuously differentiable. 3. Nnlt-1sy € mts € Z; m4 is the polynomial space of degree k (order k+D. 4. [ENmlOdt 5. Neg) = Nn=1(2) — Nn-i(t = 1). 6. Nm (¢) is symmetric with respect to the center * = m/2, that is, m m Wm (F+t)=Ne(F-1), eR, 48) 7. Na(t) behaves asa lowpass iter [Wi (0) = 1; see Figure 5.7). 8. Nm(t) has mth order of approximation in the sense that Ny(w) satisfies the ‘Strang-Fix condition Ka DINgQnk) =0, keZ\(0) and where D/ denotes the jth-oder derivative. Consequently, Nig) locally re- produces all polynomials of order m (see [8, pp. 114-121}. 9. Sig Nm(t ~ §) = I forall 1. This property is referred to as the partition of tury propery. 10, Total positivity: Nn (t) > 0, for [0, m). By virtue ofthe total positivity (6, P. 71 property of B-spines,coeficients of a B-spline series fllow the shape ‘of the data. For instance, if @(0) = Fj atjNm(t~ j), then aj 20VjaeO20 ay t (increasing) = a) t 4 (Convex) = g(0) conven, MAPPING AFUNCTION INTO MRA SPACE 103 Furthermore, the number of sign changes of g(0) does not exceed that ofthe ‘coefficient sequence {«}. The latter property can be used to idemify the zero crossing ofa signal. 11, Asthe order m increases, Ny (1) approaches a Gaussian function (A, Ag, —> (0.5). 
For instance, in the case of a cubic spline (m = 4), the RMS time~ frequency window product is 0.501 5.6 MAPPING A FUNCTION INTO MRA SPACE {As discussed in Section 5.2, before a signal x(¢) can be decomposed, it must be ‘mapped into an MRA subspace Ay for some appropriate scale M, that x) xu) = Dab! -B) 646) 7 ‘Once we know (ay) We can use fast algorithms to compute {aks} fors < M. Fast, algorithms ae discussed in later chapters. Here we ae concerned with evaluation of the coeficiet (41) If-x(1) is known at every #, we can obtain {ay, 44} by the orthogonal projection (L? rojection) ofthe signal, thai, ann =2™ [008081 — at. 4 However, in practice the signal x(t) is known at some discrete points. The given time sep determines the scale M to which the function can be mapped. For a representa- tion such as (5.46), we want it to satisfy two important conditions: (1) intexpolatory and (2) polynomial reproducibility. By interpolaiory representation we mean that the series should be exact, atleast atthe points at which the function i given, mean- ing that x(k/2M) = xy (k/2). As poimted out before, polynomial reproducibility ‘means thatthe representation is exact at every point for polynomials of order m if, the basis @() has the approximation order m. In other words, x(t) = xy(t) for x(t) € 7q-1. Cardinal B-splines have m order of approximation. In addition, since they ae aTocal basis, the representation (5.46) is also local. By local we mean that (0 ‘oblain the coefficient as, yy for some k, we do not need all the function values; only «few, determined by the suppor ofthe splines, will sufice. The coefficients when (0) = Na(®) and @(¢) = Na(t) are derived below. Linear Splines (m = 2) Suppose that function x() is given at = ¢/2M eZ ‘Then to obtain the spline coefficients (a) for the representation (> 30) = Yan, 2M ~ &), 6.48) ‘We apply the interpolation condition, namely 6.49) 108 MULTIRESOLUTION ANALYSIS By using (5.49) along withthe fact that Ng()=1 and Nak) = keZ\(), 6.50) we get k+l auw=>(Set) «39 ‘The representation (5.48) preserves all polynomials of degree at most 1. Cubic Splines (m= 4) In this case xO xu) = Dra.wNa2Mt —b), (6.52) 7 where [4,p. 117) bg ee =D) veya and n=43 nas otherwise. ‘The representation (5.52) preserves all polynomials of degree at most 3. 5.7 EXERCISES 1. For a given j € Z, a projection Ps,jf(t) of any given function f(e) € L2(—00, 00) onto the hat function space v=| DY ceN2@!t — 8): (erleez € & can be determined by the interpolation conditions Ps, f(k/2/) = ftk/2/) forall ‘ke Z. Find the formulas fr the coefficients (ay) if Pf is writen as Prjf) = YO aaNaQit —n). exERCISES 105, 2. For the Haar wavelet for te0,}) Watt» for tet}. otherwise, define Via) = 2? vat —b, ks eZ. Show the orthogonality relations LL WiaOVamg Od = inadp, — mks w8 7. Due to these relations, we say that the set (Vatts},ge forms an ononormal family in L(—00, 00). ‘3. Show that the Gaussian function $(t) = e~* cannot be the scaling function of a multiresolution analysis. (Hint: Assume that e~* can be written as ef Tao ake“ for a sequence {a4 }eez in €2, which has to be tue if e~" VoC Vi. Then show that this eads toa contradiction by taking Fourier transforms on both sides of the equation and comparing the results.) 4. Show thatthe mth-order B-spline Nt) and its integer translates form a partition ‘of unity that is, XY Nat-® forall x eR. (Hint: Use Poisson’s sum formula.) ‘5. Show the following symmetry property of Na(t): 6. Use Exercise 5 to show that f Nm(t+K)Nm()dt = Nam(m-+k) — forany k € Z. 7. 
Show thatthe hat function M0 2-6 for re [l,2 1 for 1 € (0,11 0, otherwise 106 MULTIRESOLUTION ANALYSIS ‘and the function W(t) are related by convolution: N2(®) = y(t) * Ny(t). Find ‘the defining equations (the polynomial expression) forthe functions given by Na(t) = Nal) «NO, NAD = ND*MO, ER. 5.8 COMPUTER PROGRAMS 5.8.1 B-Splines . ‘© PROGRAM BspLine.m ® § Computes uniform Bsplines function y = Bapline(m,x) y & Characteristic function Higher order a = zeros (1,500); sé m>=2&m< 100 for k= Liml ak) = 0.0; xlex-k¢el: if xl >= 0.0 & xl < 1.0 ak) = xd; REFERENCES 107 (Gear) * a(g)e(peatt-w) ta(@et)) / (pH) REFERENCES, 1. ¥. Meyer, Wavelets: Algorithms and Applications. Philadelphia: SIAM, 1993, 2. S. Mallat, “A theory of multiresolution signal decomposition: the wavelet representation,” {IEEE Trans. Patter Anal. Machine Intell, 1, pp. 674-693, 1989. 3. S. Mallat, Multiresolution representation and wavelets, PhD. thesis, University of Pean- sylvania, Philadelphia, 1988. 4. C.K. Chui, An Introduction to Wavelets, San Diego, Cali: Academic Press, 1992 5. LJ. Schoenberg, Cardinal Spine Interpolation, CBMS Ser. 12. Philadelphia: SIAM, 1973. 6. LL. Schumaker, Spline Functions: Basic Theory. New York: Wiley-Interscience, 1981. 1. C.de Boor, A Practical Guide to Splines. New York: Springer-Verlag, 1978 8. C.K. Chui, Multivariate Splines. CBMS-NSF Ser. 1988, CHAPTER SIX CONSTRUCTION OF WAVELETS In this chapter we are concerned with the construction of orthonormal, semiorthog- ‘onal, and biorthogonal wavelets. The construction problem is tantamount to finding. suitable two-scale and decomposition sequences as introduced in Chapter 5. It tums. ‘out that these coefficients for orthonormal wavelets can easily be derived from those. ‘of semiorthogonal wavelets. Therefore, we frst discuss the semiorthogonal wavelet followed by orthonormal and biorthogonal wavelets. = Recall that for the semiorthogonal wavelet, both (and &() are in Ao, and y(t) and ¥(¢) are in Wo, Consequently, we can write (1) in terms of 6(); similarly for 'W(8). These relations as given by (5.21) and (5.22) are % oo) oo) = Fee) 61) ™ v Fo) = Ko) (6.2) Fo= Remy 62) ‘with the Buler-Frobenius-Laurent polynomial £(e/) given by Eye) := D> f@t2ak= So Agel). 63) ite me ‘We wil therefore, concentrate onthe construction af and only. "As the first step foward constructing wavelets, we express (ol), (hf), and {gi{k)} in terms of (go[k]} so that only {go{k]} and hence the scaling functions need to be constructed. In semiorthogonal cases, all these sequences have differ- tent lengths, in general. Later we will show that for onthonormal cases, al of thes Sequences have the sume length and that there is very simple elation among them ‘which can easily be drive a a special case ofthe relationship for semiorthogoral ‘eases. The construction ofa semiorthogonal wavelets followed by the constrution of several popular orthonormal wavelets the Shannoa, Meyer, Batle-Lemari, a Daubechies wavelets. Finally, we construct biorthogonal Wavelet. [NECESSARY INGREDIENTS FOR WAVELET CONSTRUCTION 109) 6.1 NECESSARY INGREDIENTS FOR WAVELET CONSTRUCTION ‘As pointed out before, we need to obiain the coefficient soquences (golk]} and {ilk} to be able to construct wavelets. In this section our goal is to finda rela tionship among various sequences. This will help usin reducing our task. Here we ceonsider the case of semiorthogonal decomposition ofa multiresolution space. 
6.1.1 Relationship Between Two-Scale Sequences Recall from Chapter 5 thet asa result of the multiresolution properties, the scaling functions and wavelets at one scale (coarser) are related tothe scaling functions at the next-higher scale (finer) by the two-scale relations, namely eO= Leaolkle es “6 4) vO= Dae -b, 635) By taking the Fourier transform ofthe relation above, we have Go) = Good ($) 66) Ho) = 6108 (2). @n where 2 = e/*?? and Gate = 3 Dstt 8 ovo =} Dent 9 Observe that #(t) € Ao, ®(2t) € Ar, and ¥(¢) € Wo. From the nested property of [MRA we know that Ap Ai atid Ao 1. Wo such that Ao Wo = Ai. Orthogonalty ‘of the subspaces Ao and Wo implies that for any & € Z, 0 -9,¥) =0. 6.10) ‘Equation (6.10) can be rewritten using Parseval’s identity as $f FoFodo EL ooTw|a()Pe aw Bee” ecieimla(g)fereo = 110 CONSTRUCTION oF waveets ne “af GoD |B (F +2x8)| Me do 7 ieee =$ i GoGIDEo(ere do, 1) where z = e~/*/?, By partitioning the integration limit [0, 4] into [0, 2x7] and (2x, 47], and with a simple change of variable, itis easy to verify that (6.11) is the same as . FE [" fouo me. + 64-0 Ep(-2) eM do= 0, 6.12) ‘The expression (6.12) holds forall € € Z. What does it mean? To understand this, Jet us recall that an integrable 2xr-periodic function f(t) has the Fourier series rep- resentation (0) = Deel, 13) where 1 I peeye-ite de, com 5 [ foettea (6H From the above it is clear that the quantity on the left of (6.12) represents the th Fourier coeficent of a periodie function Go(2)Gi@Eg(e) + Go(~-2)G(-2Eo(~2). Since all these coefficients are zero, it implies that Gole\Gi@E oe) + Gol-)G-DEp(-2) =O (613) for fe] = ‘The solution of 6.15) gives the relationship between G(z) and Go(2). By direct substitution, we can verify that Gre) = ~ez"G-DEg(-2) 6.16) for any integer m, and a constant c > 0 is a solution (6.15). Without any loss of ‘generality we can set c = 1. The effect of m is to shift the index of the sequence {(gifk)}. Usually, m is chosen such thatthe index begins with 0. 6.1.2 Relationship Between Reconstru and Decomposition Sequences Recall from Chapter 5 that the scaling function ata certain scale (fines) can be ob- tained from the scaling functions and wavelets atthe next lower (coarse) scale. Ia ‘mathematical terms, i means that there exist finite-energy sequences (holR]}, (tu {EI} such that O21 ~ 6) =D thol2k ~ C0 — + mk -WE-H), (617) z [NECESSARY INGREDIENTS FOR WAVELET CONSTRUCTION 111 where, as discussed in Chapter 5, {ho[k]} and (h;[4]} are the decomposition se- quences ‘By taking the Fourier transform of the decomposition relation, we get 13 (a ; 5 15 (%) ¢-veea eG) 0H 36(3) ihetae— ce Aha) + Faas — ce Fp - ne +642) Pine ee (2). {oo Sn 6 4 GY Ok — ee) ‘The equation above reduces to (© hol2k ~aesoronn) Gotz) + (Sac - arvetonn) Ge) T F WeZ, 6.18) ‘Combining the Fourier transforms of the decomposition and two-scale relations, wwe get (Ho) + Ho(-2)Gol2) + HC) + Ha(-DIGAC@) = 3 for event; (6:19) [ilo(2) — Ho(—2)1G0(2) + Ui (2) — Hy(-2)1G4¢2) = 0 for ode, (620) Where z = e~/9/? and 1 Hole) = 5 Dholeaet 1 Ma) = 5 Dhl! These equations lead to HoGoe2)+ MOGVO = Ho(—2)G0(2) + Hi(-2)Gi@) = 0. ‘The lst equation can alo be writen as Ha(2)Gol—2) + Hy(2)G\(~2) =0. (621 112 CONSTRUCTION OF WAVELETS In matrix form we have Golz) Gil) ) [ Hole) $ =|"). (622) Gi) Gia} Lm@) Lo the solution of which gives 1 Gio =3x (623) WW) = 3 * Dec.) ae 1, Go(-2) i(e) = 5 x (6.24 MO = 9 * Rea oh with Ayo) = Go(2)Gi(~2) ~ Go-G1C2). (625) It canbe shown that aus 2) = ez" Eo"), (626) ‘where c > 0 and m is an integer. 
Since o generates a Riesz or stable basis, Eg(2) and hence Agyo, (2) # 0. 6.2. CONSTRUCTION OF SEMIORTHOGONAL SPLINE WAVELETS, ‘The significance of the results obtained in Section 6.1 is that we need to construct only the scaling functions (i.e., we need to find only the sequence (gof])). In this section we obtain these sequences for the semiorthogonal spline wavelets introduced bby Chui and Wang [1]. Here the cardinal B-splines Ny, are chosen to be the scaling functions. We will show that a fnite-energy sequence {go{m, k]} exists such that the scaling relation Nm (0) = 3 gol, KIN (2t — 8) 627) F is satisfied and therefore Ny) is a scaling function. For m = 1, {Ny(t—K) :k € Z) form an orthonormal basis of Ao. For this case we have already seen that go{0] = {o{1] = 1 (see Figure 5.3) In this section we consider cases for which m > 2. For m > 2, the scaling functions {Nj(t ~ k) : k ¢ Z} are no longer orthogonal; that is, f * Nm Nn(t = Ot £ B00, (628) forall €€ Z and m > 2. An example of nonorthogonality of Nat) is shown in Fig ure 6.1. The J, Na(ONa(¢ ~ €) dis shown by the shaded area, whichis nonzero ‘CONSTRUCTION OF SEMIORTHOGONAL SPLINE WAVELETS 113, xO MD ° 1 2 3 FIGURE 6.1 Nonorthogonality of linear spine shown by the shaded zea. 6.2.1 Expression for (gol) Recall from the definition of Nyt) in Chapter 5 that Nal) = (Miso NYO, and that 8, = (SV 629) in S) From the Fourier transform of the two-sale relation, we have 1 Kno) Goto) olm, He = le) 0(@ zane i ear) (6.30) (es arate" oz => x(t 30) By comparing the coefcient of powers of z, we get -mi(™) cp sol] := gol, k} = {7 (7): Osksm 632) 0, otherwise (Once we have {go(k)), the est of the sequences (g1[k]}, (ho(I}, and {hy {&}) can te found by using the relations derived in Section 6.1 The expression of (iI is 114 CONSTRUCTION OF WAVELETS For Nm(t), the Buler-Frobenius~Laurent polynomial Ey, (z) takes the form Ll Gel x Ava (Oz* Eng nt YE Manin + ot (633) with z :-= e~//? and the autocorrelation function Ang) = f N(2)Nm(k +2) = Nam (mn +). (634) Finally by using the relation (6.16, we have nts amaecooE(A)naaei=0, 69 tot O 2. As pointed out befor, the set of basis functions (M(t — 8) : k'€ Z} isnot onogonal for m > 2. The coresponding ‘thonormal sealing fonction W(1) ean be obtained as PL) Rin(o) MeO) = Te ei op ‘The Battle-Lemarié scaling function gz m(¢) is, then, PoLm(O) = Nat), {and the coefficients (go{k}) can be found from 1 Mio) 1 iF = Goe Her) = Nero) (655) pst 0 Frey (655) where z = ¢~//), By combining (6.54) and (6.55), we have a (42) [Lire Nantm tnt] _ ( 2 ) [ees . (656) 124 CONSTRUCTION OF WAVELETS ‘ORTHONORMAL SCALING FUNCTIONS 125 ‘Asan example, consider the linear Battle-Lemarié scaling function, for which m 2. For this case we have ' +2? | P4de+ 7 et) Wsx:2()1 Gots) = A x ST 651) 1 Dake 7 ‘The coefficients {gofk]} can be found by expanding the expression on the right-hand 420 246 0 a er) ‘ide as a polynomial in z and then comparing the coefficients of the like powers ‘of z. These coefficients can also be found by computing the Fourier coefficients of the right-hand side expression. Observe that Go(1) = 1 is satisfied, thus giving the 2 ‘sum of all {go{K]} to be 2. In Tables 6.3 and 6.4 we provide the coefficients of the 1 Var) ' War:2(0)] op || TABLE 6.3. 
Two-Scale Sequence for Linear Bate-Lemarié Scaling Function 691;2 42orees 0 2 9 mm solnd=ol—m_|[m _aotn)=sot2—m)_ | m golnl = gol2—ml 1 11569266904457929 | 14 0c000s24422257478 | 27 —0.0000000053986503, 2 0.5618629285876487 | 15 -0.0000195427343909 | 28 —0.0000000028565276 1 Para(t) 1 Ida1;6(@)| 3 -0.0777235484799832 | 16 —0.0000105279065482 | 29 0,0000000013958989 | 0.0734618133554703 | 17 0,0000099211790830 | 30 0,000000000737460 | | ‘5 _0.0240006843916324 | 18 0,000026388701627 | 31 ~0.0000000003617852 6 00181288346913845 | 19 ~0,0000012477015928 | 32. —0.0700000001908819 49024 6 8 =0 2 yw 7 -0.0054917615831284 | 20 —0.000000666407922 | 33 0.0000000000039600 8 ~0.0031140290154640 | 21 0,0000003180755856 | 34 0.0000000000495170 9 0,0013058436261069 | 22. 0.0000001683729269 | 35 -0.0000000000084878 $a) Pesala! 10 0.0007235625130008 | 23 -0.0000000814519590 | 36 ~0.0000000000128703 1 y a 11 ~0.0003172028555467 | 24 —0.0000000832645262 | 37 0.0000000000063709 12 ~0,0001735046359701 | 25 _0.0000000209364375 | 38 0.0000000000033504 9 13 o.00o07K2ss6648652 | 26 _o.o0c0000110875272 | 39 —0.0000000000016637 4 al 6420246 o 0 2 9 FIGURE 6.8. Batle-Lemarié scaling funtion, the comesponding wave i TABLE 6.4 Two Scale Sequence for Cubic Batle-Lemaré Sealing Function a1; fade spect ee mm soln)=gols—nl_|maoted=sols—ml | m soled =g0l4—) 2 _1.0834715125686560 | 15 _0.0026617387556783 | 28 —0.0000282171646500 i : 2 ieseastaeaaente | 4¢ —Qooseovosezsaies | 29 —0.0000222203943141 ner andcuieBale-Lemarisaling factions. The near Batle-Lemaré scaling 44 —no7os9sasoesaesai | 17 —0.0013112570210308 | 30 0.00001460738678%4 funtion andthe wavelet are shown in Figure 6.8, 5 ~0.1556158437675466 | 18 _0,0007918699951128 | 31 0,000011446755089 6 0.0453692402954247 | 19 0.0006835206221413 | 32 ~0.000007577440788, i . 7 00394936381541212 | 29. -0.0004035035254263 | 33 -0.9900059100089365 64.4 Daubechies Scaling Function 8 -0.0242500785203567 | 21 —0.000328S486043928 | 34 0,0000039378865616 Battle-Lemarié obtained fe fcc i Serene | eam | camara DeLand hen gis yma 10 0.0122428617178522 | 23 0.0001653505502899 | 36 -0.0000020497919302 OLM nee 11 00113986402962103 | 24 ~0.0001060637892378 | 37 ~-0.0000015870262674 ‘Eien the denominator forthe orthonormalization process, the Sequence (fof) 12 —00061572588008633 | 25 ~0.0000846821755263 | 38 0.0000010585342577 somes infil long : 13 —o.onsdons7s46ss009 | 26 0.0000546341264354 | 39 0,0000008247217560 To obtain orthonormality but preserve the finite degree of the (Laurent) poly- 14 _0,0030724782908629 | 27. _0.0000433039057782 | 40 —0.0000005577533684 ‘nomial, Daubechies (7,8] considered the two-seale symbol for the scaling function 9D. 126 CONSTRUCTION OF WAVELETS Lez)" Gots) = Gy) Se), 638) where S(2) € mm-1. So our objective isto find S(z). Fis, observe that since Go(1) = 1, we must have S(1) = 1. Furthermore, we also want S(-1) 7 0, be- cause if (1) =O, then 2 +1 isa factor of 5(2 and hence canbe taken out. Now Go(2) given by (6:58) mus satisfy the orthogonality condition, namely IGP + IGOR =1, zee? 659) > cost SIC)? + sin™ Z1S(—2)P = 1. 6.60) By defining =u xin? and FQ) = 1S@P, (6.60) can be rewriten as (Exercise 11) (=n f@) +x" fl =2) = 3 fe)-d-9 [saa] (m+ k—2 SME te Reco, 661) & where the remainder Ra) is rated = ("ETN atecarya-oF ("EWN at. 00 & fa Since (2) is a polynomial of order m, R(x) = O. Therefore, we have pa (mek =) ok een, Ise@r x( t ) si Poe 7 6.63) ‘The polynomial above can be converted 10 ko Is? = 2+ Yay cos <2, (6.64) BT? 
CORTHONORMAL SCALING FUNCTIONS 127 1 eas) Cis ) as 665) d, pat n kin ‘Our next task is to retrieve $(z) from |$(z)|?. According to Riesz's lemma [10, p. 172], corresponding to a cosine series w Fo =F+ Ya cosko (6.66) i with ao,...,ax € Rand ay #0, there exists poynomial s@)= Done! 667) ms with ap... € R, such that ler? = fle, (6.68) ‘By applying Riesz’s lemma to (6.64) itis easy to verify [9] that S(z) has the following form: S@= K ‘ T]e@-[]e-zoG-%, Kk +2L=m-1, 6.69) ee ‘where {ri} are the nonzero real roots and {z¢} are the complex roots of 2"~"|S(2)|? inside unit cicle and Cis a constant such that $(1) = 1 ‘Once we have S(2), we can substitute this into (6.58) and compare the coefficients of powers of z to get the sequence (go[A]}. We will show the steps to get these sequences with an example. ‘Consider m = 2. For this, we have ap = 4 and ay = —1, which gives a ae Ista)? =2—cos 5 eget pete) sas 6:70) where 7) = 2 — V3. From (6.69), we have 1 SQ) = -2 te) pve -2+ 9. (en) 128 CONSTRUCTION OF WAVELETS So, for m = 2, we get (e001 + golt]e + go(2le? + gol3Iz*) 142), 01 : -(4) x5[a+vde+a-v5] =fe+ 4) (672) Go 7 ‘Since (2) 2m. For m = 2 and 7, the scaling functions and wavelets, along with their magnitude spectra, are shown in Figure 69. Two-scale sequences for some Daubechies scaling « polynomial of order m, the length of two-scale sequence for dp:m is 4 Pozalt) ' oat al ° 1 2 3 i 6 9 2 Yo) q a 0 1 2 5 0 5 10 1 Vos) 7 W200)! { NX a V ol 6 0 5 0 o 0 2» FIGURE 6.9 Daubechies scaling function, the comesponding wavelet, and their magnitude spectra CONSTRUCTION OF BIORTHOGONAL WAVELETS 129 LE 6.5 Two-Scale Sequence forthe Daubechies Scaling Function nm _solm=aol-m)_ | m —golmd=aol-m)_ |» gold = soln) ™ mas 7 © 0.6830127018922193 | 0 o2264180825835584 | 0 0,1100994307456160 1 1.1830127018922192 | 1 o.8ss043s4z70s0283 | 1 0.5607912836254882 2 0.3169872981077807 | 2 L024326a442501967 | 2 1.0311484DI6361415, 3 ~o-1s30127018922192 | 3 0.1957659613478087 | 3 0.6643724822110735, m 4 ~0.3426567153829353 | 4 —0.2035138224626306 0 as7ossr2077841636 | 5 —0.04S601131883S469 | 5 -0.3168350112806179 1 Lasitte91sesi4436 | 6 — 0.1097026586421339 | 6 0:1008464550093839 2 0.65036s0005262323 | 7 —0.0088268001083583 | 6 —0.1008461650003839 3 ~0.1908344155683274 | 8 -o.0177918701010542 | 7 0.1140034451597251 4 ~0.1208322083103963 | 9 0.0047174279300679 | 8 ~0.0537824525896852 5 000498174997368837 m=6 9 ~0.0234399415642046 m 0 o.s77424x20027466 | 10 0.0177497923793598 © o32seossaeosiz9e2 | 1 0.6995038140774233 | 10 0.0177497923793598 1 1.01094s7150918286 | 2 1.0622637S98801800 | 11 0.0006075149984022 2 o.g922001382605015 | 3 0.4458313229311702 | 12 -0.0025479047181871 3 o.039s7s0260356447 | 4 -0.3199865989409083 | 13 0.0005002268531225 4 ~0.2685071673690398 | 5 —0'1835180641065938, 5 0,0836163004751772 | 6 0.1378880929785304 6 —O.osesosco10708818 | 7 0.0389232007078970 7 ~00149e69993303616 | 8 —0.0146637483054601 9 10007832511506546 10 0,0067560623615907 11 ~0.0015235338263795 fanctons are given in Table 6,5. Readers should keep in mind that in some books (eg. 8), there is a factor of V2 in the two-scale sequences. 6.5 CONSTRUCTION OF BIORTHOGONAL WAVELETS In previous sections we discussed semionthogonal and orthonormal wavelets. We developed orthogonal wavelets as a special case of semiorthogonal wavelets by using o=6 vey. 673) (One of the major difficulties with compactly supported orthonormal wavelets i that they Iack spatial symmetry. 
This means that the processing filters are nonsymmetric and do not possess a linear phase property. Lacking this property results in severe “undesirable phase distortions in signal processing. This topic is dealt with in more ‘detail in Chapter 7. Semiorthogonal wavelets, on the other hand, are symmetric but suffer from the drawback that their duals do not have compact support. This i also 130 CONSTRUCTION OF WAVELETS undesirable since truncation of the filter coefficients is necessary for real-time pro- ‘cessing. Biorthogonal wavelets may have both symmetry and compact support. ‘Cohen et al. [11] extended the framework ofthe theory of orthonormal wavelets to the case of biorthogonal wavelets by modification of the approximation space structure. Let us recall that in both the semiorthogonal and orthonomal cases, there exists only one sequence of nested approximation subspaces, (0) CA CALCAQCAICARC+- L674) ‘The wavelet subspace, W,, isthe orthogonal complement to A, within A, such that ALOW, = (0), se2, and Alt We = Asa. 675) ‘This framework implies thatthe approximation space is orthogonal to the wavelet space at any given scales, and the wavelet spaces are orthogonal across scales: WLW, fors # p. (676) In the orthonomal case, the scaling functions and wavelets are orthogonal to their translates at any given scales: (4300) Oam(0) = Bim) (r,s), Vasm(®) = Sten) 67) In the semiorthogonal case, (6.77) no longer holds for ¢ and yr. Instead, they are ‘orthogonal to their respective duals, (x50). Fms(O) = btm (ess Fs) = Stans (6.78) and the duals span dual spaces in the sense that A; := span{dy,s(1)(2'r — m), m,€ Z} and W, := span{ Y(t —m), s,m, ¢ Z]. As described in Chapter 5, semionthogonality implies that A, = A, and W, = Ws. In biorthogonal systems, there exists an additional dual nested space: (0) + CA a CAy CR CA Canc 17. (679) In association with this nested sequence of spaces isa Set of dual wavelet subspaces (not nested) W,, € Z, that complements the nested subspaces A,,s € Z. To be ‘more specific, the relations of these subspaces are Alt We = As (6.80) Kit We = Ken (6a) CONSTRUCTION OF BIORTHOGONAL WAVELETS 131 ‘The orthogonality conditions then become ALLW, (6.82) Rls, (6.83) siving us (G40), Fns() = 0 (6.84) (Gs), Wn.s(0) = 0. (6.85) In addition, the biorthogonality between the scaling functions and the wavelets in (6.78) still holds. The two-scale relations for these bases are oo= Lavtever -b (686) oo) Dhue -» 687) vo DLatkwar -* (688) vo = Dhhtesar— (6.89) ‘The orthogonality and biorthogonality between these bases give the following four conditions on the filtering sequences: (eotk 2m], fafk — 2n}) = (6.90) (eit — 2m), Folk — 2n]) =0 691) (golk — 2m}, ofl) = bn.0 (692) (silk — 2m), Rif) = Bn. 693) Biorthogonal wavelet design consists of finding the filter sequences that satisty (6.90) through (6.93). Because there is quite abit of freedom in designing the biorthogonal wavelets, there are no set steps inthe design procedure. For example, one may begin with gofk] being the two-scale sequence of a B-spline and proceed to determine the ret of the sequences. Another way is to design biorthogonal filter banks and then iterate the sequences to obtain the scaling functions and the wavelet (discussed in Section 6.6). Unlike the orthonormal wavelet, where the analysis filter is « simple time-reversed version of the synthesis filter, one must iterate both the synthesis filter ‘and the analysis filter to get both wavelets and both scaling functions. 
We wil follow this approach and defer our discussion of biorthogonal wavelet design by way of an ‘example in Chapter 7, 132 CONSTRUCTION OF WAVELETS: 6.6 GRAPHICAL DISPLAY OF WAVELETS Many wavelets are mathematical functions that may not be described analytically ‘As examples, the Daubechies compactly supported wavelets are given in terms of tworseale sequences, andthe spine wavelets are described in terms of infinite poly- nomial. Is dificult forthe user to visualize the scaling function and the wavelet. based on parameters and indirect expressions. We describe three methods here to display the graph ofthe scaling function and the wavelet. 6.6.1 Iteration Method ‘This method isthe simplest to implement. We include a Matlab program with this book for practice. Let us write Gnei(t) = J solklbm(2t—K), m= 0,1,2,3,... and compate all values of In practice, we may initialize the program by taking (694) dol) = 30) 695) and seting go(n) = 8(n) = 1. After upsampling by 2, the sequence is convolved With the go] Sequence to give di (n). This Sequence is upsampled and convolved with go(A] again to give da(n), and s0 on. In most cases the procedure usual con- ‘verges within 10 iterations, For biortiogonal wavelets, the convergence time may be longer. Once the scaling function has been obtained, the associated wavelet can be ‘computed and displayed using the two-scale relation forthe wavelet: ¥O = Dailklo@r—¥). {A display indicating the iterative procedure is given in Figure 6.10. The figure indi- cates the number of points in each iteration. To get the corresponding position along. the time axis, the abscissa needs to be divided by 2” for each iteration m. 6.6.2. Spectral Method In this method, the two-scale relation for the scaling function is expressed in the spectral domain Foy =coel*™>G(2), 2 = Golell@™)Gote!@”)6 (2) sera, (696) GRAPHICAL DISPLAY OF WAVELETS. 133 2 m=1 m=2 q q BY 1 2 3 4 Pts aN m=4 q q 2 “ oh hc mas m=6 ef} i 2 2 oo 0 @ 8 10 a m=7 m=8 o q r 2 i a pA pS q Q 2 ~ ‘0 6001000180000 ‘0 000-2000 300000 FIGURE 6.10 Iterative procedure to gt scaling functions. Abscissas need tobe divided by 2 to get the conect position in time. = flenemmra(s) ket Nose . = TT cote). 697) el ‘Since $(0) = 1, we may take the inverse Fourier transform of (6.97) 10 yield wo EL (ffoumrente 49 134 CONSTRUCTION OF WavELETS ‘To compute (6.98), the user has to evaluate the truncated infinite product and then take the fast Fourier transform. 6.6.3 Eigenvalue Method ‘This method converts the two-scale relation into an eigen equation. Let us consider the two-scale relation by setting x = m to yield the following matrix equation: (7) =D g0lkI62n — b) cs = Lgolen —migom) = [go(n, m1 6(m), 699) where the matrix element go(n,m) = go(2n —m). In matrix form, we write (6.99) as 30101 gol—1] gol-2) -| | (0) 0) gol21 golll gol] -} | oc} = 1] @«D |. + gol4) gof3] aol) 42) 2@) ‘The eigenvalue ofthis eigenmatrix is 1, so we can compute ¢(n) forall integers ‘This procedure can be repeated for a twofold increase in resolution, Let x = n/2, and the two-scale relation becomes (5) = Dawtkion 8. (6.100) T By repeating this procedure for x = n/4,n/8,..., we compute the discretized (0) to.an arbitrarily fine resolution. 6.7 EXERCISES 1. Show thatthe support of semiorthogonal wavelet, in(#) = [0, 2m ~ 1} 2. Show that the integer translates of the Shannon wavelet y(t — ) form an or- ‘thonormal bass. 3. Find the cubic polynomial Sg that satisfies the conditions S4(0) = S%(0) = 0, Sa(1) = 1, $4(1) = O, and S4(x) + S4(1 ~ x) = 1. 
Use this polynomial as the smoothing function for Meyers scaling function and compute the two-scale coeflicients. 4, Show thai (6(¢ ~£), k € Z) is aRiesz bass of Vo = {9 — 4) :k © Z), then {es }uez i8 a Riesz basis of Vs = {$x.s(t),k € Z} for a fixed s € Z. That is, EXERCISES 135 e Pw 4D la? “ | DL ae -0| plies that ay wat s| x anol BY lal? Se 1 Se With the same constants A and B. 5 Show thatthe following statements are equivalent: (a) (@(-— A) :k € Z} is an ‘orthonormal family, and (b) S72. 2p [9(@ + 2km)|? = 1 almost everywhere. 6. Prove that (Ni(-— &) : & € Z} is an orthonormal family by using this theorem, that is, by showing that 7. Obtain an algebraic polynomial corresponding to Euler-Frobenius-Laurent polynomial £yj(z) and find its roots, iy > --- > 2g. Check that these zeros are simple, eal, negative, and come in reciprocal pairs, ie. Athe= Aaks = Ash 8, The autocorrelation function F fora given function f « L*(—00, 00) is defined as ray=[" fern Oa, rer. ‘Compute the autocorrelation function of the hat function Nz and compare it to the function Ns introduced in Exercise 7. 9. Construct a linear Battle-Lemarié scaling function to show that for the hat func- tion N2(0), itholds (let z =e 1) that 136 CONSTRUCTION OF WAvELETS ‘The Fourie transform ofthe orthonormalized sealing function N+) i given by (-1/0) a = 22)? (a/og? +4424)" We have shown that the symbol Rw = RH) Got) = LEO 0 = Bei Compute the ato to sow tat he results 142)? 1 (4) ata", where We" +9/4)-16*+ 2/4) T+ G7 +274] Use the power series expansions 1 iw aig, ey) - +n) 14 gnt Lt yt gl 3+ Qn — 3) and yi (nT ® 1 a a+ = 6 w( j ) = ‘as well as the binomial theorem to expand the expression [(1+z)/2]*(1+n)"/? in powers of z and determine the coeficientsg{k] fork 1 Sby comparing the corresponding coefficients of 2~5, ..., z° in Go(z) and [(1+2)/2}°(1+n)'/. ‘You should use symbolic packages such as Mathematica or Maple for these computations. 10. Construction of linear B-spline wavelet: Given the two-scae relation forthe hat fanction moet (men. & ‘we want to determine the two-scale relation for a linear wavelet with minimal ‘support, alt) =D sifklN2@r — 8), F rT EXERCISES 137 ‘using the corresponding Euler-Frobenius-Laurent polynomial E(2) = 2! + 4+ z, It was shown that for the corresponding symbols i Go) 1 z Desoleie aad Gi) = 5 Pails, the orthogonality condition is equivalent to Go@)ET@E(2) + Go(-NEV-DE(-2) = 0 with |z| = 1, We need to determine the polynomial Gy(z) from the equation above. There is no unique solution to this equation. (a) Show that Gy(z) = (—1/3)z*Go(—z)E(—z) is a solution of the equation above. (b) Show that Go(2) = [1 +.2)/2F. (©) Expand Gi(z) = (-1/31)z3Go(—z)E(-—z) in powers of z and thus deter- ‘mine the two-scale relation forthe function yy, by comparing coefficients Geo SDaut = seven. r (@) Graph yr. Complete the missing steps in the derivation of Daubechies wavelet in Sec- tion 644, Note that |S(2)* is a polynomial in cos §. . Use the sequence {—0.102859456942, 0.47859456942, 1.205718913884, (0,544281086116, ~0.102859945694, ~0.022140543058) as the two scale se- quence {g0[n}} in the program iterate.m and view the results. The resultant sealing function is a member of the Coifman wavelet system or coifets [8]. The ‘main feature of this system is that inthis case the scaling functions also have ‘vanishing moment properties. For mth order coiflets [levoan0 9 [rome p= [Looe ‘Construct biorthogonal wavelets beginning withthe wo scale sequance (gO{n}) for linear spline. 
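The eigenvalue method of Section 6.6.3 reduces to just a few lines of Matlab and complements the programs listed in Section 6.8. The sketch below is an illustration only (it is not one of the book's accompanying programs): it assembles the matrix with entries g0(2n − m) of (6.99) for a compactly supported two-scale sequence, extracts the eigenvector belonging to the eigenvalue 1, and normalizes the integer samples of φ so that they sum to 1, in accordance with the partition of unity. The Daubechies m = 2 sequence of Table 6.5 is used as input; any other finite two-scale sequence supported on [0, N] may be substituted.

% PROGRAM eigenphi.m  (illustrative sketch; not one of the book's programs)
% Integer samples of the scaling function by the eigenvalue method:
% phi(n) = sum_k g0[k] phi(2n-k), i.e., [g0(2n-m)] [phi(m)] = [phi(n)].
g0 = [0.68301 1.18301 0.31699 -0.18301];  % two-scale sequence (Table 6.5, m = 2)
N  = length(g0) - 1;                      % support of phi is [0, N]
T  = zeros(N-1);                          % matrix element T(n,m) = g0(2n-m)
for n = 1:N-1
   for m = 1:N-1
      k = 2*n - m;
      if k >= 0 & k <= N
         T(n,m) = g0(k+1);
      end
   end
end
[V,D]   = eig(T);
[tmp,j] = min(abs(diag(D) - 1));          % pick the eigenvalue closest to 1
phi     = V(:,j).';
phi     = [0  phi/sum(phi)  0]            % phi(0), phi(1), ..., phi(N)

For this input the program returns φ(1) ≈ 1.3660 and φ(2) ≈ −0.3660, the values of the Daubechies scaling function at the integers; repeated use of the two-scale relation, as in (6.100), then refines these samples to the half-integers, quarter-integers, and so on.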
138 CONSTRUCTION OF WAVELETS 6.8 COMPUTER PROGRAMS 6.8.1 Daubechies Wavelet a ‘PROGRAM wavelet -m . & Generates Daubechies scaling functions and wavelets 1.18301; 0.31699; 0.18301); ® a i = ELipuatgo) .*(-1).7 gh = Length (gi): ‘© Compute scaling funtion first utter = 10; & interation tine unit = 2°(-107 phi = conv(g0,phi_new) ; n= length(phi): phi_new(1:2:2%n) = phis Length (phi_new) : Ae(L == (utter-1)) phiz = phi, end end 2 ae = 1.0 / (2 * unity; t= (LHength(phiy} * at; subplot (2,1,1), plot (t,phi) title(‘scaling Function’) * compute wavelet using 2-scale relation for 4 = i:ngt a= (cl) * unit +1 b= a+ lengthiphi2) - 1; peize(i,a:b) = phi2 * gl(d); paizs(1,n) = end psi = sum(psi2s) ; REFERENCES 139 at = 1.0 / (2 * unity; © = [O:length(phi)=1] * at - (ng - 2) / 25 subplot (2,1,2), plot(t,pad) titie( ‘Wavelet’ ) 6.8.2 Iteration Method ‘ 4 PROGRAM Sterate.m : & Iterative procedure to get scaling function 4 generates Figure 6.10 ‘ 90 = [0.68301 1.18301 0.31699 -0.183011; Nicer = 10; ¢ number of interation phi_new = 1; @ initialization for i = lintter unit = 2°(i-1); phi = conv(g0,phi_new); n= Length(phi) phi_new(1:2:2*n) = phi; subplot (5,2,4), plot (phi); hold on; heading = sprintf (‘iteration = ¥.43",1) title (heading) : end . REFERENCES 1. C.K. Chui and J.2, Wang, “On compactly supported spine wavelets and a duality prin- ipl" Trans. Am. Math. Soe, 390, pp. 903-915, 1992. 2. C.K Chui and .7. Wang, “High-order orthonormal scaling functions and wavelets give oor time-frequency localization” Fourier Anal. App. 2, pp. 415-426, 1996. 3. Y. Meyer, “Principe d'inertitude, bases Hilbertiennes et algebres d'opérateur,” Semin, Bourbali, 662, 1985-1986. 4. Y.Meyer, Wivelets: Algorithms and Applications, Philadelphia: SIAM, 1993 5. G, Bate, “A block spline construction of ondeletes: I. Lemaré functions.” Commun, ‘Math. Phys. 110, pp. 601-615, 1987 6. P.G, Lemarié, “Une nouvelle base d'ondeetes de L2QR)." J. Math. Pures Appl 67, 1p. 227-236, 1988 7. 1, Daubechies, “Onthonormal bases of compactly supported wavelets” Commun. Pure Appl. Math. 41. pp. 909-996, 1988, 140 CONSTRUCTION OF WAVELETS 8, 1. Daubechies, Ten Lectures on Wavelets. CBMS-NSF Ser, App. Math. 61. Philadelphia: STAM, 1992. Wavelets: A Mathematical Tool for Signal Analysis. Philadelphia: SIAM, 10. C.K. Chui, An fnroduction to Wavelets. San Diego, Calif: Academe Press, 1992. LL A. Cobea, I. Daubechies, and J.C. Feauveau, “Biorthogonal bases of compactly sup- Ported wavelets" Canmmur. Pure Appl. Math. 48, pp. 485-500, 1992. CHAPTER SEVEN Discrete Wavelet Transform and Filter Bank Algorithms ‘The discussion of multiresolution analysis in Chapter 5 prepares readers for an un- derstanding of wavelet construction and algorithms for fast computation ofthe con- tinuous wavelet transform (CWT). The two-scale and decomposition relations are essential for development ofthe fast algorithms. The need for these algorithms is obvious since a straightforward evaluation of the integral in (4.32) puts a heavy ‘computation load on problem solving. The CWT places redundant information on the time-frequency plane. To overcome these deficiencies, the CWTT is discretized and algorithms equivalent to the two-channel filter bank have been developed for signal representation and processing, The perfect reconstruction (PR) constraint is placed on these algorithm developments. 
In this chapter we develop these algorithms ‘in detail Since the semiorthogonal spline functions and compactly supported spline wavelets require their duals in the dual spaces, signal representation and the PR con- dition for this case are developed along with the algorithms for change of bases. Before we develop the algebra for these algorithms, we discuss the basic concepts of, ‘sampling-rate changes through decimation and interpolation. 7.1 DECIMATION AND INTERPOLATION {In signal processing we often encounter signals whose spectrum may vary with time. A linear chirp signal is a good example. To avoid aliasing, this chirp signal must be sampled at least twice at its highest frequency. For a chirp signal with a wide bandwith, this Nyquist rate may be too high for the low-frequency portion ofthe chirp. Consequently, there is a lot of redundant information to be carried around if one uses a fixed rate for the entire chirp. There is the area of multirate signal Processing, which deals with signal representation using more than one sampling ‘ate. The mechanisms for changing the sample rate are decimation and interpolation, ‘We discuss their basic characteristics in the time and spectral domains, 42 _ DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS, 7A.1 Decimation ‘An M-point decimation retains only every Mth sample ofa given signal. Inthe time /M), an M-point decimator will introduce aliasing in its output signal. We will see later that aliasing does indeed occur in ‘a wavelet decomposition tree or a two-channel filter bank decomposition algorithm. ‘However, te aliasing is canceled by carefully designing the reconstruction algorithm to.remove the aliasing and recover the original signal. For M = 2, we decimate a sequence by taking every other data point. From (7.4), wwe obtain 12 Ye= xe!" pee) 4x2) a6) ana Hele) = Melly 4-3-0 an ‘The spectrum of F(e/) is shown in Figure 7.2. For the sake of simplicity in using matrix form, we consider only the case where ‘M = 2. We use | 2 in the subscript to represent decimation by 2. We write bI=G12 78) fre) FIGURE 7.2. Spectral characteristic of decimation by 2, “144 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS y(-2) x(-4) yp] |x-2) yO | |x@ yd) | =|[2@) a . x | [xa ¥) (6) x8) In terms of a matrix operator, we write (7.9) as y-2) 2) 100... y-)| Joo 100 x1) yO) 20:0 1 0 0 x0) x f= 20 01 00 x() (7.10) yQ) 0010 offxa 30) 00 1} {xa : x4) Dl = [DEC,] iI. ‘The shift-variant property ofthe decimator is evident when we shift the input column cither up or down by a given number of position. In addition, the decimation matrix ‘an orthogonal matrix since [pec = [DEC] Consequently, decimation ia tog ransfomatio, 74.2 Interpolation Interpolation of data means inserting additional data points into the sequence to in- crease the sampling rate. Let y(n) be the input to the imerpolator. If we wish to increase the number of sample by Mf folds, we insert M — 1 zeros in between any {wo adjacent samples so thatthe interpolator output gives foe G0 os aa DECIMATION AND INTERPOLATION 145, bm) eo} FIGURE 7.3 An M-point interpolator. ‘The system diagram of an M-point interpolator is shown in Figure 7.3. We can also \write the expression of interpolation in the standard form of a convolution sum, #1) = Syd b(n ~ kan. a1) cs ee Fee) = LTE vse ene de = Dymene Te): 7.13) ‘The <-transform of the interpolator output is X@) =¥CeM). (7.14) Interpolation raises the sampling rate by filing zeros in between samples. 
The output Sequence has M times more points than the input sequence, andthe output spectrum is shrunk by a factor of M on the w-axis. Unlike the decimator, there is no danger of aliasing for interpolator since the output spectrum has a narrower bandwidth than the input spectrum. The spectrum ofa twofold interpolator is given in Figure 7.4, xx 47 FIGURE 7.4 Spectral characteristic of interpolation by 2, 146 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS Using M = 2s an example, we write 20) = ylaya = [202 forneven = [or Sons ou In matrix form we have xD) Jen xen} /o x@ | |¥@ xa falo |. 16) x@ | [yw x@ | [0 x@ | [»@ As before, we represent the interpolator by a linear matrix operator. It turns out that the interpolation matrix is the transpose of the decimation matrix 10 oo x2) o10 (2) xO) 0000 y1) x0) oo10 50) xa 500000 xa) ay x2) 500010 ¥@) ¥@) 00000 3) 10 ya) oo : o1 or we can write 6} = EINT;2] ts 18) ‘The operations of convolution followed by decimation and interpolation followed by convolution are two of the most important building blocks of algorithms. They will be used to build tree algorithms for wavelets and wavelet packets as well as in two- and three-dimensional signal processing. We show only their time-domain identities in the following sections. DECIMATION AND INTERPOLATION 147 stn) un) we of (2) FIGURE 7.5. Convolution followed by decimation, 7.1.3 Convolution Followed by Decimation “Mathematically, we express this operation by yn) = Hn) #xIyo 19) ‘The processing block diagram is given in Figure 7.5. If we label the intermediate ‘output as u(n), it isthe convolution of x(n) and h(n) given by u(r) = x(k yhin—k). ‘The two-point decimation gives y(n) = un) = So x0h2n = 720) 7.14 Interpolation Followed by Convolution ‘The time-domain expression of this operation is given by y(n) = (g(r) * Le} - 21 Using v (n) asthe intermediate output, we have yin) =P v@®e(n —&). c ‘Since u(k) = x(k/2) for even k, we have & m= > +() an 5 rc =DxOen- 29. a2) This process is shown in Figure 7.6. fo) wa) bor ——+(}2)——+} tm) FL FIGURE 7.6 Interpolation followed by convolution. 148 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS 7.2 SIGNAL REPRESENTATION IN THE ‘APPROXIMATION SUBSPACE We have shown in Chapter 5 thatthe approximation subspaces Ay are nested so that the subspace Axo = L2, Aco = (0), and Ay C Ansi for any n € 2. For an arbitrary finite-energy signal x(0, there is no guarantee that this signal isin any of these approximation subspaces. That is, we may not be able to find a coefficient a,x such that, x0) = as@Q't—K) — forsomes, (723) fa ‘To make use of the two-scale relations for processing, a signal must be in one of these nested approximation subspaces. One way of meeting this requirement is by projecting the signal into one of the A, for some s. This is particularly important if ‘one only knows the sampled values of the signal at x(¢ = k/2',k € Z) for some large value of s. “Assuming that the signal x(0) is not in the approximation A,,we wish to find s(t) € Ay such that (0) S5() = Dae sbes) = Daeg - Bs 724) 7 F ‘where ag.s are the scaling function coefficients to be computed from the signal sam- ples. We will show how one can determine a, from the sample data x(t = k/2*) using the orthogonal projection of x(£) onto the Ay space. ‘Since A, is a subspace of L? and x(¢) € L?, we consider x, (0) as the orthogonal projection of x() onto the A, subspace. Then x(¢) —x,(t) is orthogonal to A, and therefore orthogonal tothe basis function ges: (G@)- x40) Ge9) =0 YET. 
728) ‘Consequently, the coefficients are determined from the equation (0), 60s) = (HO, be) (Sasmu0o a0) (726) a ‘We expand the last equality, yielding 28 [” 09@T=Dar =? Na, [[” 604 -ve=at] ons [ [080 — k. The matrix form of (7.27) is mar]. a where we have mad change of i WAVELET DECOMPOSITION ALGORITHM 149 a) a a amr | | 00s Ama) | a on (728) where: [_s0Bt= Wa =m {s the autocorrelation ofthe scaling function #0). If the scaling function is supported. compactly, the autocorrelation matrix is banded with a finite-size diagonal band. If the scaling function and its translates form an orthonormal basis, then Gn = bm0. By assuming an orthonormal basis, the autocorrelation matrix isthe identity matrix and the coefficient are obtained by computing the inner product, Ams = (HCO). Pm) (729) If we are given only the sample values of the signal x(t) at x(¢ = k/2*), we can approximate the integral by a sum. That i, na = 2? |” x05 at s “Ye (5) am (7.30) ‘This equation demonstrates the difference between the scaling function coefficients ‘and the sample values ofthe signal. The former are expansion coefficients of an ana- log signal, while the later are samples of a discrete-time signal. For representation of a given discrete signal in terms of a spline series of orders 2 and 4, we have given formulas in Section 5.6. 7.3 WAVELET DECOMPOSITION ALGORITHM Letus rewrite the expression of the CWT of a signal x(*): Wyx(b,a) aay 150 __ DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS Lets denote the sale a = 1/2 andthe ransationb = 2%, wheres and belong to the integer set Z. The CWT of x(¢) is a number at (k/2*, 1/2*) on the time-scale plane. It represent the comelation between (0) and ¥() at tha time-scale point. We cal this the discrete wavelet ransform (DWT), which generates a sparse tof values onthe time-scale plan. We use wunwr(hd)=[lx0r(GE)u aay to represent the wavelet coefficient at (b = k/2#,a = 1/25). A discrete time-scale ‘map representing the signal x(t) may look like Figure 7. Itisknown that the CWT generates redundant information about the signal on the time-scale plane. By choosing (b = k/2*,a = 1/2, itis much more efficient using the DWT to process a signal. It has been shown that the DWT keeps enough infor- mation of the signal such that it reconstructs the signal perfectly from the wavelet ‘coefficients. Infact, the number of coefficients needed for perfect reconstruction is the same asthe number of data samples. Known as critical sampling, this minimizes redundant information. ‘The decomposition (analysis) algorithm is used most often in wavelet signal pro- ‘essing. Its used in signal compression as well asin signal identification, although in the latter case, reconstruction of the original signal is not always required. The 20 Ps = 10F 4 5 ° 1 L 0 0.2 0.4 0.6 0.8 1.0 b FIGURE 7.7 Typical time-scale grid using the decomposition algorithm. (Reprinted with ‘Permission from {1}, copyright © 1995 by Springer-Verlag.) WAVELET DECOMPOSITION ALGORITHM 151 algorithm separates a signal into components at various scales corresponding to suc- cessive octave frequencies. Each component can be processed individually by a dif- ferent algorithm, In echo cancellation, for example, each component is processed with an adaptive filter ofa different filter length to improve convergence. The impor- {ant issue ofthis algorithm is to retain all pertinent information so thatthe user may recover the original signal (if needed), The algorithm is based on the decomposition relation in MRA discussed in Chapter 5. 
We rewrite several of these relations here for easy reference. Let Rept) € Ag > tH) = atest Ph ee) z 29(0) € Ay, => 44) = Dat seo) elt) € We, => yell Dusviso- Since the MRA requires that Asa thn 33) we have Xea1(t) = x60) + ye) Levsitise) =LatO+ Tmo. 038 ‘We substitute the decomposition relation OQ — 0) = Do [ho 2k ~ C101 k) +h Ak-QVAr—b} (7.35) T imo (7.34) to yield an equation in which all bases ae at resolution s. After inte- changing the order of summations and comparing the coefficients of gx,+(t) and ‘Vu.s(0) on both sides of the equation, we obtain 45 = ho 2k ares wre = hr Rk = Dacor Where the right side of the equations corresponds to decimation by 2 after convolu- tion (see Section 7.1.3). These formulas relate the coefficients ofthe scaling func- tions and wavelets at any scale to coefficients atthe next higher scale. By repeating this algorithm, one obtains signal components at various frequency octaves. This algorithm is depicted in Figure 7.8, where we have used the vector notation tholKD}, and hy = (hulk]} (7.36) = laiel We (Wksh hg = 152 _ DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS. FIGURE7.8 Single-level wavelet decomposition proces. ‘with k € Z, This decomposition block can be applied repeatedly to the scaling func- tion coefficients at lower resolution to build a wavelet decomposition tree as shown in Figure 7.9. ‘The reader should note that the wavelet decomposition tree is not symmetric since only the scaling function coefficients ar further “decomposed” to obtain signal com- ponents at lower resolutions. A symmetric tree may be constructed by decomposing the wavelet coefficients as well. This is the wavelet packet tee discussed in Chap- ter8. oe Lae »}- Oa mh iy 42 wut www FIGURE 7.9. Wavelet decomposition tee. RECONSTRUCTION ALGORITHM 153 7.4 RECONSTRUCTION ALGORITHM ‘It is important for any transform to have a unique inverse such that the original data can be recovered perfectly. For random signals, some transforms have their unique inverses in theory but camot be implemented in reality. There exists a une - ‘verse discrete wavelet transform (or the synthesis transform) such the the original Function can be recovered perfectly from its components at different scales. The 1e- construction algorithm is based on the two-scale relations of the scaling function and the wavelet. We consider a sum ofthese components tthe sth resolution: OF YO) = Lato + Tmt =a. (737) ‘By a substitution ofthe two-scale relations into (7.37) one obtains Dae Dances ~2- o+ Dime Lato -%-6 =Daenoette—o, 738) ‘Comparing the coefficients of 6(2"*1 — £) on both sides of (7.38) yields eet DY (eole - 2kIa,s + silé - 2k]vns}, (7.39) T where the right side ofthe equations corresponds to interpolation followed by convo- lution, as discussed in Section 7.1.4. The reconstruction algorithm of (7.39) is shown sraphically in Figure 7.10. me @) a cies 7 o 8 8 ew a FIGURE 7.10 Signa reconstruction from scaling function and wavelet coefficients, 154 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS ‘We emphasize here that athough the mechanics of computation are carried out in digital signal processing fashion, the decomposition and reconstruction algorithms are actually processing analog signals. The fundamental idea isto represent an analog signal by its components at different scales for efficient processing. 7.5. CHANGE OF BASES ‘The algorithms discussed in Section 7.4 apply to all types of scaling functions and wavelets, including orthonormal, semiorthogonal, and biorthogonal systems. 
‘We have seen in Chapter 6 that the processing sequences (go[k], gi[k]}, and (ol, {1} are finite and equilength sequences for compactly supported orthonor- ‘mal wavelets. In the case of semiorthogonal wavelets such as compactly supported ‘B-spline wavelets, the processing sequences (hofh], hy {k]} are infinitely long. Trun- cation of the sequences is necessary for efficient processing, To avoid using the infinite sequences, it is beter to map the input function into the dual spline space and process the dual spline coefficients with go{k] and gi[A] that have finite lengths This and the next two sections are devoted to modification of the algorithm via a change of bases, ‘We have shown in Chapter 6 that the mth-order spline gm = Nr and the cor- responding compactly supported spline wavelets Yim are semiorthogonal bases. To ‘compute the expansion coefficients of a spline series or a spline wavelet series, it is necessary to make use of the dual spline Gy or the dual spline wavelet Yq. In semiorthogonal spaces, all these bases span the same spline space Sy. For certain real-time applications in wavelet signal processing, i s more desirable o use fnite- length decomposition sequences for efficiency and accuracy. Therefore, itis nec- essary to represent the input signal by dual splines of the same order before the decomposition process, Let us recall the formulation ofthe multiresolution analysis, in which we have the approximation subspace as an orthogonal sum of the wavelet subspaces, tw Ws © Anew @W-2 +O Wa @ Aww (7.40) for any positive integer M’. Consequently, any good approximant xy ¢ Ay of a ‘given function x € L? (for sufficiently large M1) has a unique (orthogonal) decom- position . sw Dea team, aan where x € Ar andy, € W Since and i generate the same MRA wil Yr and ‘a generat the sre wavelet obopce (a poperty not pasate by biortogona CHANGE OF BASES 155 scaling functions and wavelets that are not semiorthogonal), we write 21) = Last —W) = Dy BOK) yw) = Ty mast —W) = Ty Hse for each s € Z. To simplify the implementation, we have not included the normal- ization factor 27 ‘if we apply the decomposition formula (7.36) to the scaling function coefficients, wehave 742 ae, = De ho lk — Cages wea = Deh 2k ~ Maes Since sequences (ho(k}} and (h{k]} are infinitely long for semiorthogonal setting, it wil be more efficient to use sequences {gofk]) and (gi(A)) instead. This change of sequences is valid from the duality principle, which states that (glk},gi(A)) and (io (kh [k} an be interchanged, in the sense that 743) }aolkl < ho [-k] 7.48) Faulk] & ha [-K) oe when dm and Yn are replaced by Gy and Gin. With the aplication of the duality principle, we have = Le soll — ler Ecsite = les However, to take advantage of the duality principle, we need to transform the coef- ficients {ar} to {as}. We recall that both ¢ and g generate the same Ay space, so that ¢ can be represented by a series of 0 = onde -® (7.46) r 745) for some sequence {rr}. We observe that this change-of-bases sequence is a finite ‘sequence ifthe scaling function has compact support. Indeed, by the definition of the dual, we have ” f. oO — Kat. an ‘Therefore, atthe original scale of approximation, with 5 = M, application of (7.46) yields Gm = Do re-caeay (7.48) 7 156 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS in {2)4iow oy yw FIGURE 7.11 Stundard wavelet decomposition process implemented with change of bases, (Reprinted with permission from [1], copyright © 1995 by Springer-Verlag.) 
‘which is an FIR operation, Observe that if we take splines as scaling functions [i., (0) = Nm(Q)), then ru = Nom ~ k); k = 0,31, .. yam — [1]. As we have seen in previous discussions, the sequences (olk}} and (g10)} inthe decomposition algorithm are finite sequences. ‘We can summarize our computation scheme as in Figure 7.11. The computation Of gg, 5 = M 1,4 M— Musing ay as the inp sequence requires 2M" FIR filters. The importance ofthe coeficients i. is that they constitute the CWT of x4 relative tothe analyzing wavelet Yq at certain dyadic points, namely es <2"? (Wom) (& 7.6 SIGNAL RECONSTRUCTION IN SEMIORTHOGONAL SUBSPACES ‘The algorithm described in Section 7.4 concerns the recovery ofthe original data Tn that cas, the original data are the set of scaling function coefficients ae a) at the highest resolution. Since the original input signal is an analog function x(#) ~ xu (0) = Dp ae,u6(2M1— 0), itis necessary to recover the signal by performing the summation. Recall thatthe decomposition algorithm discussed in Section 7.5 pro- duces the spline and wavelet coefficients in the dual spaces, namely ({s). {})- ‘Tose finite-length two-scale sequences forthe reconstruction, we mus express the coefficients in dual spaces in terms of ({ax.} (we, inthe spine and wavelet spaces. In addition, if users need to see the signal component at any intermediate steps in the decomposition, they would have to use the dual spline and dual wavelet ‘series. In both cases one can simplify the problem by a change of basis that maps the dual sequences back to the original space [2]. Since the sequences do not depend on the scale, the second subscript of the coefficients can be arbitrary. Such sequences ane applicable to mapping between any wo diferent scales. SIGNAL RECONSTRUCTION IN SEMIORTHOGONAL SUBSPACES 157 7.6.1 Change of Basis for Spline Functions vr objective isto write 50 = iin 0 = DaeNnlt = 8). (7.50) F F By taking the Fourier transform of (7.50), we get Kee) Kin(a) = ACC!) Nn), as) ‘where, as usual, the hat over a function implies its Fourier transform, and A(e/*) and A(e/*) are the symbols of {ag} and (ax) respectively, defined as Kei) = Dive, ace!) = Dwell. (7.52) c T ‘The dual scaling function Nm is given by 753) where Ey, (22) = |Nu(a-+2nk)|? # 0 for almost all w since (Nw (-—k)} isa stable or Riesz basis of Ao, As discussed in Chapter 6, En, (w) is the Euler-Frobenius Laurent series ands given by Eig (22) = [Nm + 2k) |? SF Nanton +8322, 5% Iki clear that by multiplying (7.54) by 22"-?, we can get a polynomial of degree 2m ~ 1 in 22. The last equality in (7.54) is « consequence of the relation E[[lrweorwa]em, aso PROOF FOR (7.55): Using Parseval’s idemity, we have DY e+26/ Se FO =f se+07Oa Lf $f ioPemae Le pan oe ve 158 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS. os . x f DY [Feo+2em |e deo, (756) Sto 1 is clear the F(O) is the &h Fourier coeficient of a 2x-periodic function Ti 20|Fle+ 2kr)]?. With this relation, (7.55) follows directly. It is easy to Siow that [Cresnataarmimen sn with suppan(t) = (0, 2m]. ‘Combining (7.51), (753), and (7.54) and taking the inverse Fourier transform, we et 44 = (Gan) * pln ®, 758) where et Lolita 7.5) mG iv Ik, tel (759) Itcan be shown that PARI = im s Aa, 20, (7.60) a where A= MTT jas — 4) 761) +»2pm are the roots of (7.54) with lay] < 1 and :Xapast—i for i = 1y..+4 Po Hete tm = (2m — 1)! and pm = m— 1. Observe from (7.54) and (7.82) 1 1 Dll Bp 7 Ee T +1 Nam(n +) here the last equality i «consequence of the partition of unity property of carina! 
B-splines, described in Chapter 5. Roots i; for linear and cubic splines are given below. The coefficients (pk) are given in Tables 7.1 and 7.2. The coeficients pf kl have better decay than (Ao(A} (see Figure 7.12) SIGNAL RECONSTRUCTION IN SEMIORTHOGONAL SUSSPACES 159 TABLE7.1 Coeficiens{p{k}} for the Linear Spline Case (p(t) = pl—t)) z pie) k Pie ° 17320510 8 (046023608 x 10-4 1 -0.46410170 9 0.12331990 x 10-4 2 0.12438570 10 (0.33083470 x 10° 3 033321008 x 107! n ~0.88539724 x 10-6 4 (0.89283381 x 10-2 R (023724151 x 10-6 3 023923414 x 10°? B -0,63568670 x 10-7 6 (0.64102601 x 10-> “ (0417033177 x 10-7 7 0.177624 x 10°? 1s 0.45640265 x 10-8 Linear spline (m = 2): aya-2evi= tb 6) lk = (-1)*"V3 (2 v3)" 7.64) Cubic spline m re sieuecirt= tL ron msn ass) 4g = ~0.5352805 = = ‘TABLE 7.2. Coeticients{pIk}} for Cube Spline Case (pIk] = pl—ED pik k pt ° 0.49647341 15 051056878 x 10-9 1 -0.30910430 16 (027329488 x 10-> 2 0.17079600 0 014628941 x 10-3 3 -0.92078239 8 0.78305879 x 10-4 4 0.49367899 9 -0.51915609 x 10-4 5 =0.26438509 » 0.22436609 x 10-4 6 o1sist619 2 012009880 x 10-4 7 =0.15752318 « 10 2 (064286551 x 10-5 8 040548921 x 10-1 2 0.34411337 x 10-5 9 =0.21705071 x 107 oy (0.18419720 x 10-5 0 (0.11618308 x 10-1 25 0.98597172 x 10-6 u ~0.62190532 x 10- 6 os2777142 x 10- 2 0.33289378 x 10-2 2 -0.28250579 x 10-6 B =0.17819155 x 10-2 28 0.15121984 x 10-6 4 (095382473 x 10° 2 0.80945043 x 10-7 160 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS ‘sss Tog(liafD) s+ lop nt 4» log(|hofk}) aoe osngttD ves log(|pit]) == log(ipikD = log(gikD = letiatkD “9 8 0 15 0 a k k @ ® FIGURE 7.12. Plots of holt], hilR}, p(A), and afk] versus k for (a) linear and (b) cubic spline cases, 7.6.2 Change of Basis for Spline Wavelets ‘Here our objective is to write 1) =O Fnlt - 0 =P wevalt - 8). (7.66) r r Replacing Nm by Yin in (7.53), we get the relationship between Yom and Gp. Pro- ceeding inthe same way as before, we get we = (itn) + gin), (7.67) where — Di Wet 20 P Furthermore, we have Diino + 2b? = Eyg(VENq ZDEve(—2, lel. (7.69) T Datei. (7.68) rs PROOF FOR (7.69): Wit the help ofa two-scale relation, we can write Livatot ann? Ya [ae(P")]x (224) 070) SIGNAL RECONSTRUCTION IN SEMIORTHOGONAL SUBSPACES 161 with Gel? = 5S atklelb?, am 2 ‘Now separating the right-hand side of (770) into parts with even k and odd & and smaking use of th relation (7-54), we can write LVinl+ 2k? = 1G1@)PEnq(2) +1G1(-2)PEng(-2). (7-72) From the relation |G (2)| = |Go(—z)Ewq(~2), with Go(2) defined ina similar way as in (7.71) with gi{k] replaced by go[A], we can write Divo + Auk)? = (I60(-2? Ex Hd + 1Go? Ex} T Eig (Z)ENq (—2): 1.73) Following the steps used o ative at (772), it ean be shown that IGo(—2)? Exs-2) “D+ 1Goe? Exe) Ng) - (774) hich, together with (7.72), gives the desired relation (7.65). ‘The expression for q{k] has the same form as that of pt] with tq = —[2m ~ DIP, pm = 2m—2, and 2; being the roots of (7.69). Observe from (7.68) and (7.69) that 1 dae ep 7.75) sie En, (1) = 1. Roots A and yg] for linear and cubic splines are given Telow. The coetcients are given in Tales 73 and 74. The oefient [have beter decay than (ee Figure 7.12) TABLE7.3 Coefficient (gk}) forthe Linear Spline Case (fk) = l-E)) t ait) z att ° 43301268 8 0.92087740 x 10+ 1 -0.86602539 9 ~0.24663908 x 10-4 2 025317550 10 0.66086895 x 10-$ 3 =0.66321477 x 107! 1 ~0.17707921 x 10-5 4 (0.17879680 x 10! 2 1047448233 x 10-6 5 049830273 x 10-2 B ~0.12713716 x 10° 6 0.12821698 x 10-2 1“ 1034066300 x 10-? 
1 15 0.91280379 x 10-* 162 a BLE 7.4 Coeflcients (q(t) forthe Cubic Spline Case (1 a k 3.823959 18 =13.938340 9 9.076698 20 44465132 21 . 25081881 2 13056690 2B (0,70895731 24 -0.37662071, 25 020042150 6 0.1081 1640 2 o.s794018s x 10-1 8 -0,30094879 x 107! 29 0.16596500 x 10-! 30 -0,88821910 x 10°? 31 (047549186 x 10-2 32 ~0,25450843 x 107? 3 (013623710 x 107? Py o-72923988 x 1073 35 Linear spline (m Blerl= (Cubic spline (m = 4): dy = 8:3698615 x 1078 = neat = 1.5019634 x 10°? = = io Ag = —0.1225546, x is i 02865251 = = DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS ale) : = —0.20894629 x 10-3 O.11184511 x 10-3 —0,59868424 x 10-4 0,32046413 x 10-4 —0,17153812 x 10-4 0,91821012 x 10-5 —0,49149990 x 10-5 0,26309024 x 10-5 0,14082705 x 10-5 0,75381962 x 10-6 0.40350486 x 10-6 021598825 x 10-6 -0.11561428 x 10-6 os -0.33126394 x 10-7 | 0.949154 x 107 776) am (7.78) EXAMPLES 163, -ossaint= aso Ya 7.7 EXAMPLES In Figure 7.13 we have shown decomposition of a music signal with some additive noise. Here the music data are considered to be at integer points. Intermediate ap- proximate functions s; and detail functions r; have been plotted after mapping the en eno Zo fleas. Fo FIGURE7.13 Decomposition of musi signal with noise. 164 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS : 5 ° 01 02 03 04 05 tf) 0 01 02 03 04 05 "0 01 02 03 04 OS 01 02 05 04 05 "0 01 02 03 04 O05 10 10 a8 8 ° ° 01 02 05 04 05 "0 01 02 03 04 OS FIGURE7.14 Magnitude spectrum ofthe decomposed music signal, indicating the lowpass and bandpass filter characteristics of scaling functions and wavelets, respectively 20 | rl «x(t dual spline and wavelet coefficients into the original space with the help of coef- ficients p{k] and q{k] derived in this chapter. To illustrate the lowpass and band- pass characteristics of splines and wavelets, respectively, in Figure 7.14 we show the magnitude spectra of the decomposed signals at various scales. The reconstruc- tion proces is shown in Figure 7.15 using the same sequences ({zofA]) (gi{k]) a8 ‘were used for the decomposition. The original signal s(t) is also plotted next to the reconstructed signal s(t) for the purpose of comparison. ‘To further expound the process of separating a complicated function into several simple ones withthe help of wavelet techniques, we consider a function composed of three sinusoids with diferent frequencies. These frequencies are chosen such that ‘TWO-CHANNEL PERFECT RECONSTRUCTION FLTER BANK 165, : : 2 Fo Madaltatiha. Fo bean 2 . a : « Fo Htahettldidle Fo pene ete s oe « : on = s s » : on = S os FIGURE 7.15 Reconstruction ofthe music signal ater removing the noise. they correspond to octave scales. As can be seen from Figures 7.16 and 7.17, standard ‘wavelet decomposition separates the frequency components fairly wel. 7.8 TWO-CHANNEL PERFECT RECONSTRUCTION FILTER BANK Many applications in digital signal processing require multiple bandpass filters to Separate a signal into components whose spectra occupy different segments of the frequency axis. Examples ofthese applications include a filter bank for Doppler fre- ‘quencies in radar signal processing and tonal equalizer in music signal processing. Figure 7.18 demonstrates the concept of multiband filtering. In this mode of multi- 166 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS. Oi: 20012. 
100027 005: 1000 2000 3000 4000 Sih lbe se FIGURE 7.16 Decomposition of a signal composed of tree sinusoids with different fe- ‘quencies corresponding to octave scales band fitering, the spectral bands corresponding o components ofthe signal may be processed with a different algorithm to achieve a desirable effect on the signal. For Doppler processing and tonal equalizer, there is no need to reconstruct the original signal from the components processed. However, there is another form of filtering that requires the original signal to be recovered from its component: the subband filter bank. The major application of subband filtering is in signal compression, in Which the subband components are coded for archiving or transmission purpose. ‘The original signal can be recovered from the coded components with various de- srees of ielity. "We use abasic two-channel perfect reconstruction (PR) filterbank to illustrate the smain features of this algorithm. Filter bank tre structures ean be constructed using ‘TWO-CHANNEL PERFECT RECONSTRUCTION FILTER BANK 167, 2 2 SLU -2 oor 002 003 «= «oY oR 0S 2 2 ~2 -2 2 2 a Beep -2 2 2 ¥ | + So a -2 2 FIGURE 7.17 Decomposition of signal with thre frequency components (continved from Figure 7.16). this basic two-channel fiter bank. A two-channel filter bank consists of an analysis section and a synthesis section, each consisting of two filters. The analysis section includes a highpass and a lowpass filter that are complementary to each other 50 that information in the input signal is processed by either one ofthe two filters. The block diagram for a two-channel PR filter bank is shown in Figure 7.19. ‘The perfect reconstruction condition is an important condition in filter bank the ‘ory. It establishes the unique relationship between the lowpass and highpass filters of the analysis section. Removal of the aliasing caused by decimation defines the relations between analysis and synthesis filters. We elaborate on these conditions in uch greater detail below. 168 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS Tp 20 Zn x(t) H y(t) Ht yal) FIGURE 7.18 Muluband fer benk. ‘The filters in a two-channel PR filter bank are specially designed so thatthe com- ponent signals may be reconstructed perfectly with no loss of information. The out- put ofthe filter bank is simply a delayed version ofthe input signal. Foratwo-channel filter bank, the filtering operation is exactly the same as the wavelet algorithm. Be- cause of the PR condition and the need to remove the aliasing components in the ‘output, one needs to design only one of the four filters. For further detail on filter ‘banks and how they relate to wavelet theory, readers are referred to [3~7] 7.8.1 Spectral-Domain Analysis of a Two-Channel PR Filter Bank ys Let a discrete signal X (z) be the input to a two-channel PR fiter bank as shown in Figure 7.19 in terms of z-ransforms with intermediate output signals. The analysis ngm}-() 2) ot 12)eam BO) 224 P (12) few) FIGURE 7.19 Tworchannel perfect reconstruction filterbank: xt) a ‘TWO-CHANNEL PERFECT RECONSTRUCTION FILTER BANK 169) section ofthe filter bank consists of a lowpass filter H(z) and highpass filter (2). ‘The convolved output ofthe lowpass filter Ho(2) followed by atwe-point decimation WDis UE) = KG"? Hole!) + X (2!) 
Holey), (7.79) 2 ‘while the highpass filter H(z) with decimation yields SCI MED) +XCI MCR, 7.80) ve) For analysis purposes, we assume thatthe outputs of the analysis bank are not pro- cessed, so that the outputs of the processor labeled U"(2) and V’(2) are V@=U@ V'@) =e. ‘After the interpolator (+ 2) and the synthesis filter bank Go(2) and G (2), the outputs of the filters are U"G@) = 5{X @)Hol2)Gole) + X(—2) Ho(—2)Gol2)] (78!) VIQ=OMOAD+XIMA-IEO). (182) ‘These outputs are combined synchronously so thatthe processed output X*(2) is XO =U"|O+TV"O 1 X/(2)LHo(2)Gole) + Hilz)Gi(2)) 2 FEXCO-DG)+ MOG. — 83) ‘The second term of the expression contains the alas version of the input signal [one that contains X(—2)). For perfect reconstruction, we may choose the filters Go() ‘and Gz) to eliminate the aliasing component. We obtain the aliasing-free condition forthe filter bank Go(z) = £H(—2) Gy) = FHo(-0. 7.84) Once the analysis filters have been designed, the synthesis filters are determined au- ‘omatically. Choosing the upper signs in (7.84), the output of the filter bank becomes 170 _ DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS SX HOC) Fe) — Hy(2)Hol—2)- 78s) xe ‘The perfect reconstruction condition requires that X*(z) can only be a delayed ver- sion of the input X(2) [ue., X*(z) = X(z)z7™ for some integer m]. We obtain the following relations: Ho(2)Go(2) + Hi(z)Gi (2) = Ho(z)Hi(—2) ~ Hi()Ho(—2) (7.86) : = Ho(2)Go(z) — Ho(—2)Go(—2) (7.87) 22. 7.88) We define the transfer function ofthe filter bank: x@ 1 Fey 7 7 lMole)Gote) + MEL] z ‘To simplify the analysis let us also define composite filters Co(z) and'CV(6) as prod- uct filters forthe two filtering pats Co(z) = Ho(2)Go(2) = —Ho(e)#i(—2) C1(2) = Hy(2)Gie) = Hi (2) Hol—2) —Ho(-2)Go(—2) = -Co(-2), 7.89) ‘where we have made use of the aliasing-free condition. In terms of the composite filters, the PR condition becomes Co(z) ~ Cof—2) = 227" (7.90) Te 1 3 ICote) - Col—21 791) I we design a composite filter Cz) that satisfies the condition in (7.90), the analysis filters Ho(z) and Go(z) can be obtained through spectral factorization. We will have ‘numetical examples to demonstrate this procedure in later sections. ‘We note thatthe transfer function 7 (z) is an odd function since 70-2) = fICal-2)- CoO] =-76. an The integer m in (7.90) must be od, which implies that Co(z) must contain only cevemindexed coefiiens except cm = 1, where m is odd. Finding Ho2) and Hi(2) ‘TWO-CHANNEL PERFECT RECONSTRUCTION RUTER BANK 171 [He lence 0 t x FIGURE 7.20 Spectral characteristic of quadrature miror filter. {or Ho(2) and Go(2)] to meet the PR requirement isthe subject of filterbank design. ‘Tao basic approaches emerged in the early development of PR fiter bank theory: (2) the quadrature mirror filter (QMF) approach, and (2) the hal-band filter (HBF) approach, Ia this section we discuss the fundamental ideas in these two approaches. Quadrature Mirror Filter Approach Let us choose H(z) = Ho(2). We have, in the specval domain, By(e!*) = Hole) = Hoel"), 7.93) ‘The spectrum of the highpass filter H;(e!®) isthe mirror image of that of the lowpass filter with the spectral crossover point at « = x/2, as shown in Figure 7.20. The transfer function becomes Te 794) BO — Ha) ‘Suppose that H(z) isa linear phaset FIR filter of order NY, so that PUR - He) ‘A tunction J € £3) has incor phase = HF, Whee as some el constant. The function f has generale linear phase if Fee) = Feri, hee la) isa rea-valued fonction and constants ¢ and are aso rea-vaed. 
To void diario in Signal constuction, ie must hav near or generalized ines pase 172 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS Hole) = H0P¥- |r| sel) ei elo ‘The spectral response of the transfer function becomes Fe) =} erm» Tse! = [race] 795) If (W ~ 1) iseven, Te!® = 0 at the crossover point = m /2! The transfer function produces severe amplitude distortion at this point, and that violates the PR require ‘ment. Therefore, IV must be even. If we wish to eliminate all amplinude distortion for leven N, we must have the spectral amplitude of H(z) and H) (2), satisfying Hace! + |ricel)|* =2. 2.96) | [rice ‘Observe tha the condition (7:96) fers from he nrmalized form by factor of 200 the righthand side. This happens because previously we used a normalizing factor in the definition of -ransform with two sale and decomposition sequences. ‘The trivial solution to (7.96) is the sine and cosine function for Ho(e/”) and H,(e!), which contradict our initial assumption for an FIR filter. Any nontrivial linear phase FIR filter Ho cases amplitude dition. Ifthe right-hand side of (7.96) is moralized to nity, the type offerte sais this normalization she power complementary iter, an TR file hat can be used in UR-PR filter banks Retuning o (7:94), we etic the fiters tobe FI, Ho() can have at most 160 cooicient, tht (2 has only one term with nod power of zis es) see thi this solution Teas to Haar filters. We discuss these filers further when We discuss orthogonal filter bans. Half-Band Filter Approach Observe from (7.89) that if we allow only causal FIR filters forthe analysis filter bank, the composite filter Co is also causal FIR with only ‘one odd-indexed coefficient. To overcome this restriction, we can design anticausal ‘or noncausal filters and then add a delay to make them causal. We first simplify the analysis by adding an “advance” to the composite filter and by making use of the properties of @half-band filter defined below. The composite filter Co is advanced by -m taps, o that S(2) = 2"C0(2), sn where S(z) is a noncausal filter symmetric with respect tothe origin. The PR condi- tion becomes S() + S(-2) =2 (7.98) ‘TWO-CHANNEL PERFECT RECONSTRUCTION FILTER BANK 173, since S(—z) = (~2)"Co(—2) = =2"Co(2) for odd m. All even-indexed coef- ficients in $(z) are zero except s(0) = 1. $() is a half-band filter satisfying the following conditions: 1. sm) 2. 5(0) = constant. 3. s(n) = s(-n). 4, SC!) + S(=* for all even n except n = 0. ‘This half-band filter is capable of being spectrally factorized into a product of two filters. We use examples to discuss the HBF. ‘To find the solution to (7.98), let Hy(z) = becomes " Ho(—2~); the transfer function 2 Te) Flot H-2 — Huey Ho(—2)1 [-Hotenote“ ("+ Holz) Hol]. 7.99) In view of (7.90),m must be odd. We have the expression [Ho(c) Mote") + Ho(—2) Ho(—2 y]. 100) ‘The filter bank has been designed once the half-band filter has been designed. The resultant filters are Fisted as follows: S(z) = Hole) Hote) Cole) = Hole) Hote" ce) Hy) =~ @.101) Go(z) = G@ TQ) = }1S@ + 5-2). ‘The lowpass filter H(z) comes from the spectral factorization of S(z). Example 1: We use the derivation of the Daubechies {5} scaling function coeffi- Cients as an example. Let us recall the conditions on the half-band filter, S@) + S(-2) ‘The simplest form of S(z) other than the Haar filter is S@) = +70 +271P RQ). 7.102) 174 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS, All even coefficients of $(z) must be zero except at 0, where s(0) = 1. 
Let RG) =0z+b+az bbe a noncausal symmetric filter so that $(2) remains symmetric. By carrying out the algebra in (7.102) and using conditions on S(z), we have => Sat 6b=1 =0, ) pera e]firviea-v5y] “This expression is substituted int 7.102) 50 that we can face) a product ‘of two filters, Ho(z) and Ho(z~"). The result of this spectral factorization gives a caus iter: 1 -1 Hye) = pate? [is vita —vae"] (49) + 04 B)ert4 6-8) 7+ (1-8) 29] = 0.12942-9. [Note that these coefficients need to be muikipid by V3 to get the values given i Chapter 6 : zal = 0.4929 + 0.836527! + 0.22412 Biorthogonal filter Bank A linear-phase FIR filter bank is desirable because it rinimizes phase distortion in signal processing. On the other hand, an orthogonal FIR filter bank is also desirable because ofits simplicity. One has o design only one filter, namely Ho(2), and all other filters inthe entire bank are specified. Biorthogonal filter banks ae designed to satisfy the linear-phase requirement. ‘Letus recall the PR and antialiasing conditions with synthesis and analysis filters. They are Ho(2)Go(2) + Hy(2)Gi(e) = 22" Go(2)Ho(—2) + G2) Hi(—2) = 0. ‘We ean solve forthe synthesis filter's Go(z) and G (2) in terms ofthe analysis filter's H(z) and H(z). The result is Gole)] _ 227 [Malay en) aaa [=m ou TWO-CHANNEL PERFECT RECONSTRUCTION FILTER BANK 175, ‘where the transfer matrix is Holz) Hitz) = [no-o ma] If we allow symmetric filters Holz) = Holz!) ¢ ho(n) = ho(—n) and are not concemed with causality atthe moment, we may safely ignore the delay 2°". This is equivalent to designing all filters to be symmetric or antisymmetric about the origin, We aso recall the definitions ofthe composite filters Co(z) = Holz) Gute) C1) = Mi@)Gi). ‘Using the result of (7.103), we write Co(z) = Ho(2)Gole) = (7.104, C12) = MG If we replace ~z for z in the second equation and note that det{Tr(—2)] = — det(Tr(2)], wehave C1@) = Col-2. 7.105) ‘The final result is Coz) + Co(—2) = 2. (7.106) ‘We now have a half-band filter for Co (2), from which we can use spectral fectriza- tion to obtain H(z) and Go(2). There are many choices for spectral factorization and the resulting fiers are also correspondingly different. They may have different filter Jengths forthe synthesis and analysis banks. The resulting filters have linear phase. ‘The user can make a judicious choice to design the analysis bank othe synthesis bank to meet the requirements ofthe problem on hand. We use the example in (3} 10 show different ways of spectral factorization to obtain Ho(z) and Go(2) Let the product filter Cole) = Ho(z)Gote) 'Qt2) 1 = yg b+ 9 + 169 + 92-4 — 2-5), aor 176 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS Since the binomial (1 + z~!)" is symmetical, Q(z) must be symmetrical t make Co(z) symmewical. An advance of :? makes S(z) a half-band filter. The choices of spectral factorization include 1 Hole) = (14274) Gols) = +29) 2 Hoe) = (427! Gola) = +2" 0) 3. Holz) = (+271? Golz) = (+277 (2) or +e) @= 3-2) ord teat V3-2 4 Ho) = +209 Golz) = +2) OG) Goe) = 1+ 772+ V5 = 2). 7.108) 5. Hole) = (14 2-N°@— V3 27 ‘The last choice corresponds to Daubechies’ orthogonal fiters, which do not have linear phase. The 3/5 filter inthe upper line of factorization choice 3 gives ¢ Linear- phase filter, whereas the lower one does not. 7.8.2 Time-Domain Analysis ‘The development of filter bank theory is based primarily on spectral analysis. 
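As a quick numerical sanity check of the spectral-domain conditions above, the following MATLAB sketch verifies the PR condition and the aliasing cancellation for one commonly used linear-phase split of the product filter C0(z) = (-1 + 9z^{-2} + 16z^{-3} + 9z^{-4} - z^{-6})/16. The particular 5-tap/3-tap normalization shown is an assumption (texts scale the two factors differently), so treat it as an illustration rather than the book's exact filters.

% Illustrative check of the two-channel PR and aliasing-free conditions.
% The 5/3 split below is an assumed normalization of one factorization of
% the product filter; it is a sketch, not the only valid scaling.
h0 = [-1  2  6  2 -1] / 8;            % analysis lowpass (5 taps, linear phase)
g0 = [ 1  2  1] / 2;                  % synthesis lowpass (3 taps, linear phase)

c0  = conv(h0, g0);                   % product filter C0(z)
alt = c0 .* (-1).^(0:numel(c0)-1);    % coefficients of C0(-z)

% Half-band (PR) condition: C0(z) - C0(-z) = 2 z^{-3}
disp(c0 - alt)                        % expect [0 0 0 2 0 0 0]

% Aliasing-free choice (one sign choice in (7.84)):
%   H1(z) = G0(-z),  G1(z) = -H0(-z)
h1 =  g0 .* (-1).^(0:numel(g0)-1);
g1 = -h0 .* (-1).^(0:numel(h0)-1);

% Distortion term: H0(z)G0(z) + H1(z)G1(z) should be the pure delay 2 z^{-3}
disp(conv(h0, g0) + conv(h1, g1))     % expect [0 0 0 2 0 0 0]

% Aliasing term: H0(-z)G0(z) + H1(-z)G1(z) should vanish identically
h0m = h0 .* (-1).^(0:numel(h0)-1);
h1m = h1 .* (-1).^(0:numel(h1)-1);
disp(conv(h0m, g0) + conv(h1m, g1))   % expect all zeros

Both displayed product-filter vectors contain a single nonzero entry of 2 at the z^{-3} position and the aliasing term is identically zero, which is the pure-delay behavior required of a PR bank; note also that conv(h1, g1) equals -alt, i.e., C1(z) = -C0(-z), as in (7.89).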
We 5”), we have 80 = TD hohotk +m ae rom (7.98) and the fact thet s(2n) = 0 for all integer n, we have the orthonormality ‘condition required for PR: Yohoibrhok +20) = dno aus ‘This implies the orthogonality ofthe filter on all its even translates. We apply the same analysis to the highpass filter h(n) and get the same condition for hy (n): Shh +29) = an, Thome +20) aury r Interms of wavelet nd approximation function basis, the ortonormalitycondions ven above ae expresied as inner products: {ho(k), ho(k + 2n)) hi (ke), bik + 2n)) (hot), hi (k + 2n)) =5 7.18) where the approximation basis ho(k) and the wavelet basis (K) are orthonormal to their even translates. They are also orthogonal to each other. If we construct an infinite matrix [Ho] using the FIR sequence p(n) such that Fee oe at po tals} 0 0 ol] oll} Aol2] hol3) 0 0 0 0 hol} fof} Aol2) [to] = 0 02 0 mm, C19 o 0 ° 0 ‘TWO-CHANNEL PERFECT RECONSTRUCTION FILTER BANK 179 itis obvious that [eo] = 1 120) using the orthonormality conditions in (7.118). Therefore, [Ho] is an orthogonal ms- trix, We define [#1] in a similar way using the FIR sequence of h(n) and show that CH AY = 1. a2) In addition, the reader can also show that Ui) (Hol = [Ho] [HI = (0) (7.122) Equations in (7.118) constitute the orthogonal conditions imposed on the FIR filters. This type of filter bank is called an orthogonal filter bank. The processing sequences for the Haar scaling function and Haar wavelets are the simplest linear- phase orthogonal filterbank. Indeed, if we denote aio (I these two sequences satisfy the orthogonal conditions in (7.118). We recall that linear-phase FIR filters must be either symmetric or antisymmetric, a condition not "usually satisfied by orthogonal filters. This set of Haar filters isthe only orthogonal set that has linear phase. Two-Channel Biorthogonal Filter Bank in the Time Domain We have shown ‘that the biorthogonal condition on the analysis and synthesis filters is Cole) + Co(—2) = Ho(z)Gol2) + Ho(-2)Go(—2) =2 ‘Writing this equation inthe time domain and using the convolution formula yields the time-domain biorthogonal condition DT hotk)eo(e - B+ 1)" Sho gol — b) = 28¢0- 7.123) F F For noatrivial solution, the equality holds only if ¢is even. Ths results in a biorthog- ‘onal relation between the analysis and synthesis filters: Lhothgot2n — ) = hol), g0(2n — ) 7 = by0- 12 190 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS ‘The biorthogonal condition can also be expressed in terms of H(2) to yield Ln Wai Qn = = thnk), gn -b) = bo. 7.125) ‘The additional biorthogonal relations are aire Bat om eS Bm(k) = am(—K)- a.m = = (ho(k), Botk — 2n)) (hy (k), Blk — 2n)) = 7.128) (ha (k), Bok — 20) (ho(k), Bk — 2n)) 7.9 POLYPHASE REPRESENTATION FOR FILTER BANKS Polyphase representation of a signal isan alternative approach to discrete signal rep- resentation other than in the spectral and time domains. It is an efficient representa- tion for computation. Consider the process of convolution and decimation by 2; we ‘compute all the resulting coefficients and then cast out half of them. In the polyphase approach we decimate the input signal and then convolve with only half of the filter ‘coefficients. This approach increases the computational efficiency by reducing the redundancy. 7.9.1. Signal Representation in the Polyphase Domai Let the z-transform of a discrete causal signal separated into segments of Mf points be writen as X() = x(0) $x()et +x Qe? 
+.2)eF +--+ x(M — DM aM) M x + DMD + xO + 2) $XQM) eM OM + eM + 2M 42)2° OHH 4» MM 2M + 17 OMFD 4. (7.129) wet = yo txee), (7.130) & POLYPHASE REPRESENTATION FOR FILTER BANKS 181 where Xe(2M) is the z-transform of x(n) decimated by M({ Mf). The index € indi- cates the number of sample shifts For the case of M = 2, we have X(@) = Xo) +271) 7.131) 7.9.2 Filter Bank in the Polyphase Domain For a filter H(z) in a two-channel setting, the polyphase representation is exactly the same as in (7.131): H(@) = Hel?) + Holz), (7.132) where H(z2) consists of the even samples of h(n) and H, (22) has all the odd sam- ples. The odd and even parts of the filter are used to process the odd and even co- efficients of the signal separately. To formulate the two-channel filter bank in the polyphase domain, we need the help of two identities: 1. Gi G@=GeG 2 (M)GEM) = GE) (tM) os) A fiter G(@) followed by a two-point decimator is equivalent toa two-point dec- ‘mator followed by G(z). The second identity is useful forthe synthesis filter bank. Let us consider fist the time-domain formulation of the lowpass branch of the analysis filter. Assuming a causal input sequence and causal filter, the output y(n) E200) * f Go] yz is expressed in matrix form as yO 70.0 0 0 0 of /x@ x) 7) f© 0 0 0 ol }xa y@]_] F2) Fa) FO 0 0 O\fs@] Gray 0) £G) £0) fa) FO 0 |} xG)]° xa) £4) $B) 2) Fa) FO) 0} |x@ 6) FS) $4 4B) 2) fF) || x6) yO), » flo). ‘The output coefficients are represented separately by the odd and even parts as [yO] = Del) + (delay) Lye], where ‘yO y@) x) yO), DO]2 = be] = 7.135) 182 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS ‘The even part of y(n) is made up ofthe products of fe(n) with x(n) and fo(n) with s(n) plus a delay. The signal x(n) i divided into the even and odd pats and they are processed by the even and odd parts ofthe filter, respectively In the same way, the highpass branch ofthe analysis Section can be Seen exactly as we demonstrate above. Inthe polyphase domain, the intermediate output from the analysis filter is given by [rei] [sees roe] [se Ui). Hy) Hired} Le“ Xr(@), - ‘Xo(z) [32 ous9 ‘where (H] is the analysis filter in the polyphase domain. In the same manner, we ‘obtain the reconstructed Sequence X'(z) from the synthesis iter bank as q 1 1) [Go0t2?) ne [2S] *@=[e fens) Gn} Le = Wo(ey =[z yea [Pe]. 7.137) ‘The PR condition forthe polyphase processing matrices is (#]{G) 7.40 COMMENTS ON DWT AND PR FILTER BANKS ‘We have shown the parallel between the algorithms of the DWT and the two-channe! filter bank. In terms of numerical computation, the algorithms of both disciplines are exactly the same. We would like to point out several fundamental differences between the two disciplines. 1. Processing domain. Let us represent an analog signal f(t) € L? by an or- thonormal wavelet series FO= LTV wsvisO. (7.138) 7s ‘The coefficients wy., are computed via the inner product wes = (f(0, Vaal). 7.139) In much the same way as the Fourier series coefficients, the wavelet series co- efficients are time (or analog)-domain entities. From this point of view we see that the DWT is a fast algorithm to compute the CWT at a sparse set of points ‘onthe time-scale plane, much like the FFT is a fast algorithm to compute the discrete Fourier transform. 
The DWT is a time-domain transform for analog signal processing. On the other hand, the filter bank algorithms are designed from spectral-domain considerations (i.e., highpass and lowpass design) for processing of signal "samples" (instead of coefficients).

2. Processing goal. We have shown that the wavelet series coefficients are essentially the components (from projection) of the signal in the "direction" of the wavelet $\psi$ at the scale $a = 2^{-s}$ and at the time point $b = k2^{-s}$. This concept of component is similar to that of the Fourier component. The magnitude of the wavelet series coefficient represents the strength of the correlation between the signal and the wavelet at that particular scale and point in time. The processing goal of the filter bank is to separate the high- and low-frequency components of the signal so that they may be processed or refined by different DSP algorithms. Although the DWT algorithms inherently have the same function, the focus of the DWT is on finding the similarity between the signal and the wavelet at a given scale.

3. Design origin. A wavelet is designed primarily via the two-scale relation to satisfy the MRA requirements. Once the two-scale sequences are found, the DWT processing sequences have been set. A wavelet can be constructed and its time and scale window widths can be computed. In general, a filter bank is designed in the spectral domain via spectral factorization to obtain the processing filters. These sequences may or may not serve as the two-scale sequences for the approximation function and the wavelet. The time-scale or time-frequency characteristics of these filters may not be measurable.

4. Application areas. Most signal and image processing applications can be carried out with either DWT or filter bank algorithms. In some application areas, such as non-Fourier magnetic resonance imaging, where the processing pulse required is in the analog domain, a wavelet is more suitable for the job because the data set is obtained directly via projection.

5. Flexibility. Since filter banks may be designed in the spectral domain via spectral factorization, a given half-band filter may result in several sets of filters, each having its own merit vis-a-vis the signal given. In this regard, the filter bank is much more adaptable than wavelets to the processing need.

Wavelet or filter bank? Users must decide for themselves based on the problem at hand and the efficiency and accuracy of using either one or the other.

7.11 EXERCISES

1. For a positive integer $M \ge 2$, set $w_M = \exp(j2\pi/M)$. Show that

$\frac{1}{M}\sum_{k=0}^{M-1} w_M^{km} = 1$ if $m \in M\mathbb{Z}$, and $= 0$ if $m \notin M\mathbb{Z}$.   (7.140)

Using this relation, prove that if $\{y[k]\}$ is obtained from $\{x[k]\}$ by decimation by $M$ (i.e., $y[k] = x[Mk]$), then

$Y(e^{jM\omega}) = \frac{1}{M}\sum_{\ell=0}^{M-1} X\left(e^{j\omega} w_M^{-\ell}\right),$   (7.141)

where $X(z) = \sum_k x[k]z^{-k}$ and $Y(z) = \sum_k y[k]z^{-k}$ are the z-transforms of the sequences $\{x[k]\}$ and $\{y[k]\}$.

2. If the sequence $\{y[k]\}$ is generated from $\{x[k]\}$ by upsampling by $M$, that is,

$y[k] = x[k/M]$ if $k \in M\mathbb{Z}$, and $y[k] = 0$ otherwise,   (7.142)

show that

$Y(e^{j\omega}) = X(e^{jM\omega})$   (7.143)

for the respective transforms.

3. In the QMF solution to the PR condition, it is found that the only solution that can satisfy the condition is the use of Haar filters. Why can no other FIR filters satisfy the PR condition?

4. Using the antialiasing condition and the PR condition, find the filter sequences $h_0(n)$, $h_1(n)$, $g_1(n)$ if $g_0(n)$ is the Daubechies sequence given in the example discussed in this chapter.

5. Show the validity of the identities given in Section 7.9.2.

7.12 COMPUTER PROGRAMS

7.12.1 Algorithms

% PROGRAM algorithm.m
& Decomposes and reconstructs a function using Daubechi % wavelet (m= 2). The initial coefficients are taken as © the function values thenselv . & signal vi = 100; ® frequency ‘Seampling rate COMPUTER PROGRAMS 185 k= 1:100; ts eb / Fi 8 = sin(2pitvitt) + sin(2epitvate) + sin(2*pitvate) 8 Decomposition and reconstruction filters 90 = (0.68301; 1.16301; 0.31699; -0.18301); k= (0; 1; 2; 3) gi = £lipua(go).*(-1) hO = flipud(g0) / 2; bi = Lipud(gi) / 2; ' Decomposition process © First level decomposition x = conv(s,h0); xe(1:2: Length (x) conv (s, ht} xe(1:2! Length (x) '§ Second level decomposition x = conv(a0,h0): al = x(1:2:Length (x) ); x = conv(a0,hL): wh = x(1:2eLength (x) ) + # Plot subplot (3,2,1), pot (s) ylabel ("Signal") ‘subplot (3,2,3),. plot (a0) vlabel ("a0") Subplot(3,2,4), plot (wo) vlabel ("0") ‘subplot (3,2,5), plot (al) ylabel ("a(-1)") Subplot (3,2, 6), plot (wl) ylabel ("w(-1)") set (gcf, ‘paperposition’ , [0.5 0.5 7.5 101) 8 Reconstuction process © Second level reconstruction x = eros (2*2ength(al) ,1); x(1:2:2*length(al)) = al(1:length(al)): y = zeros (2*length(w1) ,1) y(1:2:2*1ength(w)) = w1¢ fength (w1))2 186 DISCRETE WAVELET TRANSFORM AND FILTER BANK ALGORITHMS x = conv(x,g0) + conviy.a1); a0_xec = x(d:length(x)~4) ; @ First level reconstruction y = zeros (2*length(w0), 1); y(1:2:2*length(w0)) = ¥0(1:Length (WO) ) x = reros(2*length(ad_rec), 1); x(1:222*length(ad_rec)) = a0_recr ‘conv (x90): conv(y,g2): x(i:length(y) )+y: rec = y(4:length(y) y y 9 Plot figure(2) subplot (3,2,1), plot (al) ylabe1 (*a_(-1)") subplot (3,2,2) ylabel ('w_(-1)") ‘subplot (3,2,3), plot (a0_rec) ylabel (’Reconatructed a_0°) subplot (3,2,4), plot (w0) ylabel ("w0") subplot (3,2,5), plot (s_rec) ylabel (‘Reconstructed Signal") set (gcf, "paperposition’, (0.5 0.5 7.5 10]) plot (wi) ‘REFERENCES, 1. C.K. Chui, J.C. Goswami, and A. K. Chan, “Fas integral wavelet transform on a dense set of time-scale domain." Muner Math, 70, pp. 283-202, 195. 2, J.C. Goswami, A. K. Chan, and C. K. Chui, "On a spline-hased fast integral wavelet {eansform algorithm,” in Ulira-Wideband Short Pulse Electromagnetics 2, Carin an L. B, Felsen (Eds). New York: Plenum Press, 1995, p. 455-463, 3. G. Strang and T. Nguyen, Wavelets and Filter Banks. Wellesley, Mass: Wellesley- ‘Cambridge Press, 1996. 4. PP. Vaidyanathan, Multrate Systems and Filter Banks. Upper Sale River, NJ: Preatice Hall, 1993. 5. M, Veweli and J. Kovacevic, Wavelets and Subband Coding. Upper Saddle River, NJ: Prentice Hall, 1995, 6, ALN, Akansu and R. A. Haddad, Multesoluion Signal Decomposition. San Diego. (Calif: Academie Press, 1992, 7. AN, Akansu and M. J. Saxth (Eds), Subband and Wavelet Transform. Boston: Kluwer ‘Academic, 199. CHAPTER EIGHT Fast Integral Transform and Applications In Chapter 7 we discussed standard wavelet decomposition and reconstruction algo- rithms. By applying an optimal-order local spine interpolation scheme as described in Section 5.6, we obtain the coefficient sequence a of the desired B-spline series representation. Then, depending on the choice of linear or cubic spline interpola- tion, we apply te change-f-bases sequence (Section 75) to obtain the coefficient sequence & ofthe dual series representation for the purpose of FIR wavelet decom- position ‘A typical time-scale grid obtained by following the implementation scheme de- scribed in Chapter 7 is shown in Figure 7.7. 
In other words, the integral wavelet transform (IW) values ofthe given signal atthe time-scale positions shown in Fig- lure 7.7 can be obtained (in realtime) by following this scheme. However, in many signal analysis applications, such as wideband correlation processing {1] used in some radar and sonar applications this information on the IWT of f on such a sparse set of dyadic points (as shown in Figure 7.7) is insufficient for the desired time-frequency analysis of the signal It becomes necessary to compute the TWT st nondyadie points as wel. By maintaining the same time resoltion at all the bi- nary scales, the aliasing and the time variance difficulties associated with the stan- dard wavelet decomposition algorithm can be circumvented. Furthermore, as will be shown in this chapter, computation only at binary scales may not be appropriate to separate all the frequency contents ofa function. ‘An algorithm for computing the IWT with finer time resolution has been intro- duced and studied by Rioul and Duhamel 2] and Sense [3]. In adltion, there have been some advances in fast computation of the IWT with finer frequency resolution, such as the multivoice per octave (MYVPO) scheme, fist introduced in [4] (ee also (5, pp. 71-72) and later improved with the help of FFT by Rioul and Duhamel (2) However, the computational complexity of the MVPO scheme, with or without FFT, increases with the number of values ofthe scale parameter 2. For example, inthe 188 FAST INTEGRAL TRANSFORM AND APPLICATIONS. FFF-based computational scheme, both the signal and the analyzing wavelet have to be sampled atthe same rate, with the sampling rate determined by the highest frequency content (or the smallest scale parameter) of the signal, and this sampling rate cannot be changed at the subsequent larger scale values for any fixed signal dis- cretization. Furthermore, even atthe highest frequency level, where the width ofthe ‘wavelet is narrowest inthe time domain, the number of sampled data required forthe ‘wavelet willbe significantly larger than the number of decomposition coefficients in the pyramid algorithm. Tn this chapter we discuss the fast integral wavelet transform (FIWT) algorithm {6-8] to compute the integral (continuous) wavelet transform on a dense st of points in the time-scale domain 2.1 FINER TIME RESOLUTION In is seion we ae concer with manning the ape ing reson on tach sae by filing in he “ok along the ne als oe eae at ‘we want to compute Wyxu(n/2™, 1/27), n € Z,s < M. Recall that the stan- Te lpn daca n Chap) ve te IW sales ol at aie poms tna in¢2ys <1) orf eslon wei ours a fr nh tna, by indig be taon en susan (43 we have @2) Now, since zu r% ad (241-8 63) wwe have Mall =ba Mb" +n—b Din dh —B. (4) FINERTIME RESOLUTION 189 ‘Hence we observe from (8.2) thatthe IWT of x4 at (n/2, 1/2°) is the same as that (of iy.» at 0, 1/24). In general, for every k € ZZ, we even have i nee a2? |” sua OF OT Bat a2 [* 2) re. 28 ("au (1+ Gy) VE Dar 2? [ swoy (21 =4— Sr) kOM +n 1 tora (EE 1), as where s < M. Hence, for any fixed s and M with s < M, since every integer ¢ can bbe expressed as K2M—* +n, where n = 0,...,2M-* — 1 and k € Z, we obtain all ae e4 Wot (sae 3p) 2a 66) of xy at (2/2, 1/2), € € ZZ and s < M, by applying the standard wavelet decomposition algorithm of Chapter 7 to the function xyq. The time-scale grid for ‘M—1,M ~2, and M —3, bat only € = 0, ...,3,is given in Figure 8.1 20f ae t rer) ae ted . tas 18 0 02 0.4 06 08 10 > FIGURE 8.1 Filling in “holes” along the time axis. 
Reprinted with permission from [6], ‘copyright © 1995 by Springer-Verlag.) 190 FAST INTEGRAL TRANSFORM AND APPLICATIONS. For implementation, we need the notations weer 80 Be [darn en @n ‘and the notation for the upsampling operations ele mee 68) where” ole : Zap forevenn Gn) = (in) with yp = {792 Torewen™ 69) [As consequence of (8.2) and (7.45), we have, for s = M — Valelanew F 10) Ina similar way itean be shown that Gn, 8.11) ‘That is, in terms of the notations in (8.7) and (8.8), we have Gy = (Go) ¥ a iM go) * ag (6.12) yr = (°F) say ‘To extend this to other lower levels, we rely on the method given in (3), yielding the algorithm t=O f ea, a withs =M,M—1,....M—M’41, (813) BONE) am, A schematic diagram for implementing this algorithm is shown in Figure 8.2. 8.2 FINER SCALE RESOLUTION For the purpose of computing the IWT at certain interoctave scales, we define an Interoctave parameter: 2" — yaw aa N>0 andn 14) FINERSCALERESOLUTION 191 on aul. Leu. Mg ur M jg} —*# tua yaw FIGURE 8.2 Wavelet decomposition process with fine time resolution. (Reprinted with permission from [6], copyright © 1995 by Springer-Verlag.) which gives 2” — 1 additional levels between any two consecutive octave levels, as follows. For each k € ZZ, s < M, to add 2" — 1 levels between the (s ~ 1)st and sth ‘octaves, we introduce the notations Ps) = Qa)! Gant — ) : G5) WE lt) = (a9)? W Raat —B). (Observe that since 1/2 < ae < 1, we have Supp ds Supp of, < SUPP df, PP oe, PP M5 C SUPP HE 5 a9 Supp Wis C supp Wi, C SUP Vf,1- As a consequence of (8.16), the RMS bandwidths of $f» and yfjq are narrower than those of ¢ and yy and wider than those of 4(2:) and y(2-), respectively. “The interoctave scales are described by the subspaces Vp = clos,2(6f, #22). ein Tis clear that for each n, these subspaces also constitute an MRA of 2. In fact, the ‘wo-scale relation remains the same as that of the original scaling function , with the two-scale sequence {go(A]}, namely 48,00 = Dott, (#~ 2) 18) » . Iisalso easy tose that Vf, is othogonal iV Indeed, Wis Vis) = bea, Vas) =0, LkED, (8.19) 192 _FASTINTEGRAL TRANSFORM AND APPLICATIONS for any + € 22 Hence the spaces Wy = closy2(vp,, 2k € ZZ) (8.20) are the orthogonal complementary subspaces of the MRA spaces Vn. In addition, analogous to (8.18), the two-scale relation of ¥., and gf" remains the same as that of y and g, namely V5") =D silkles,, (« = + ) @21) F Since (got. (gi)) remain uochanged for any introctive scale, we can use the same implementation scheme, as shown in Figure 7.11 to compute the IWT values at (k/2%aip, 1/2%an). However, there are still two problems. First, we need to map 0 Vga ond camp th IT ao (a, 123) instead ofthe corer grid (k/20q, 1/2). Let us fst consider tbe second problem. Tats, suppose that x], © Via already been determined, Then we may wate y= DaiyoMat = Lah ydetar—h 622) 7 7 for some sequences {af y,} and {27 44) € ¢?. Then the decomposition algorithm as described by Figure 71 yields i, = Oar)" Oxhy, vA) =2ay f ” eqn ant =H dt bi an)! Ws (seo ae . 623) ‘Now by following the algorithm in (8.13), we can also maintain the same time reso lution along the time axis on each interoctave scale for any fixed n. More precisely, by introducing the notations hts new 624) ‘we have an algoritim for computing the TWT athe interoctave scale levels (oM—* go) + = with :M—M'+1, 625) = OME) aa, However, itis clear from (8.23) thatthe time resolution for each fixed n is 1/2Mé which is less than the one for the original octave scales, in which case the time resolution is 1/2. 
As discussed in Chapter 7, the highest attainable time resolution FINERSCALERESOLUTION 193. OM ay aly % aus [pucaeg) tue her thew FIGURE 8.3 Wavelet decomposition process with finer time-scale retlution, (Reprinted ‘with permission from (6, copyright © 1995 by Springer-Verlag.) in the case of the standard (pyramid) decomposition algorithm is 1/2-". It should be pointed out that the position along the time axis on the interoctave scales is not the same as the original octave levels (i.e., we do not get a rectangular time-scale atid) (see Figure 8.4). The schematic diagram of (8.25) is shown in Figure 8.3. If we begin the index n of (8.14) from 0, then = 0 corresponds to the original octave level. Figure 8.4 represents atypical time-scale grid for s = M — 1M —2, and ‘M—3 with N =2andn=0,....3. ae Octave scales 44, Inter-octave scales 02 0.8 06 08 1.0 FIGURE 8.4 Time-scale grid using the scheme described in Figure 8:3, (Reprinted with permission from [6], copyright © 1995 by Springer-Verlag.) 194 FAST INTEGRAL TRANSFORM AND APPLICATIONS. 8.3 FUNCTION MAPPING INTO THE INTEROCTAVE ‘APPROXIMATION SUBSPACES [Now going back to the first problem of mapping xy to xf,, we observe that since Va # Vij. we cannot expect to have xf, = x4 in general. However, if the MRA spaces {Vj} contain locally all the polynomials up to order m in the sense that for each ¢, OS E, will then He between ceya and ceiaa- Taking all ofthe points above into account, we ean obtain ay from ayy by fol lowing these steps: 2) = Daravat— 8 -t). 629) rs 1. Based on the given discretized function data, determine the stating index of aly. Letitbe ay. 2, Let ay Ue between af and ae. 3 Letafy be shifted from a, toward the ight by & in ime. Then starting with r= 0,compute Oy =~ Masta + Ease (8.30) 4, Increment i, s by 1 and § by n/2+™, '. Repeat steps 3 and 4 until 1-2 < 0. When 1 —2E <0, increment r by Land reset &to.n/2M*™, Increment i, s by 1. 6. Repeat steps 3 to 5 until. ay414.a¢ takes the last index of ayy. Fora general case, the mapping of x to x}, can be obtained following the method ‘described in Sections 5.6 and 7.2. For instance, to apply the Kinear spline inerpola- tory algorithm or the cubic spline interpolatory algorithm, we need to compute the 196 FAST INTEGRAL TRANSFORM AND APPLICATIONS. function values of xu (k/2Man) or xu (k/2M~faq) , € ZL, These values can eas- ily be determined by using any spline evaluation Scheme. More precisely, we have the following: 1. Form = 2 (linear splines), iis clear that whO = Doak yNaMagt with af y =H ( F 2. Form = 4 (cubic splines), we have (0) = Doak Ne OMagt — 4) (832) z with wns. stu (sat): aay : ‘where the weight sequence (uy) is given in Section 5.6 Finally, wo obtain the input coeficient sequence {fy} ftom (a forthe interoctave scale algorithm (8.19), ‘we use the same change-of-bases sequence r as in (7.47). 8.4 EXAMPLES In this section we present a few examples to illustrate the FIWT algorithm dis- ‘cussed inthis chapter. The graphs shown are the centered integral wavelet transform (CIWN), defined with respect to the spline wavelet Ye 88 Wosbee) =a"? J sorvm(Pre) an 39 =: 835) ‘Observe thatthe IWT as defined by (4.32) does not indicate the location of the dis continuity ofa function properly, since the spline wavelets are not symmetrical with respect othe origin. The CIWT circumvents this problem by shifting the location of, the IWT in the time axis toward the right by ar". ‘The integral wavelet transform of a function gives local time-scale information. 
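For reference in the examples that follow, the transform of (8.34) can also be evaluated by direct numerical quadrature, which is how the "direct integration" results of this section are obtained. The MATLAB sketch below is only illustrative: it uses the Mexican-hat wavelet in place of the spline wavelet of the text (because it has a simple closed form, and being symmetric it needs no centering shift), a made-up piecewise-linear test signal, and a simple trapezoidal rule.

% Illustrative brute-force CWT by numerical quadrature (trapezoidal rule).
% The Mexican-hat wavelet and the test signal are stand-ins, not the spline
% wavelet or data of the text.
dt = 1/256;                               % sampling step (assumption)
t  = 0:dt:1;                              % observation interval
x  = 20*min(t, 1 - t);                    % piecewise-linear signal; slope changes at t = 0.5

psi = @(u) (1 - u.^2) .* exp(-u.^2/2);    % Mexican-hat wavelet

scales = 2.^(-(1:5));                     % a = 2^{-s}, s = 1..5 (octave scales)
b      = t;                               % evaluate the transform on the signal grid
W      = zeros(numel(scales), numel(b));

for i = 1:numel(scales)
    a = scales(i);
    for k = 1:numel(b)
        w = (1/sqrt(a)) * psi((t - b(k)) / a);   % a^{-1/2} psi((t - b)/a)
        W(i, k) = trapz(t, x .* w);              % inner product by quadrature
    end
end

imagesc(b, 1:numel(scales), abs(W)); xlabel('b'); ylabel('octave s');

The FIWT algorithm of this chapter produces the same kind of time-scale array, but recursively from the B-spline coefficient sequences rather than by quadrature at every point (b, a).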
To get time-frequency information, we need to map the scale parameter to frequency. There is no general way of doing so; however, as a first approximation, we may consider the mapping given in (8.36), in which $c > 0$ is a calibration constant. In this book the constant $c$ has been determined based on the one-sided center $\omega^{+}_{\hat\psi}$ and one-sided radius $\Delta^{+}_{\hat\psi}$ of the wavelet spectrum $\hat\psi(\omega)$, which are defined in Chapter 4.

For the cubic spline wavelet we get $\omega^{+}_{\hat\psi} = 5.164$ and $\Delta^{+}_{\hat\psi} = 0.931$. The corresponding figures for the linear spline wavelet are 5.332 and 2.360, respectively. Based on these parameters, we choose values of $c$ as 1.1 for the cubic spline and 1.5 for the linear spline cases. It is important to point out that these values of $c$ may not be suitable for all cases; further research in this direction is required. We have chosen $c$ by taking the lower cutoff frequency of $\hat\psi(\omega)$.

8.4.1 IWT of a Linear Function

To compare the results obtained by the method presented in this chapter with the results obtained by evaluating the integral of (8.34), we first take a piecewise-linear function that changes slope, as shown in Figure 8.7. The function is sampled with 0.25 as the step size. For linear splines this means that the function is mapped into $V_2$, whereas for cubic splines the function is mapped into $V_3$. We choose $N = 1$, which gives one additional scale between two consecutive octaves. It is clear from Figures 8.8 and 8.9 that the FIWT algorithm and direct integration give identical results for the wavelet coefficients at octave levels, but there are errors in the results for interoctave levels, as discussed before.

FIGURE 8.7 Linear function whose IWT is shown in Figures 8.8 to 8.10. (Reprinted with permission from [8], copyright © 1995 by IEEE.)
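The one-sided spectral center and radius quoted above for the spline wavelets can be estimated numerically for any wavelet whose samples are available. The MATLAB sketch below is illustrative only: it again uses the Mexican-hat wavelet as a stand-in, and it assumes the usual RMS-moment form of the one-sided center and radius, which may differ in normalization from the exact definitions of Chapter 4.

% Illustrative estimate of the one-sided spectral center and radius of a
% wavelet, using RMS-moment definitions (assumed form; see Chapter 4).
dt  = 1/64;
t   = -20:dt:20;                          % long support so truncation is negligible
psi = (1 - t.^2) .* exp(-t.^2/2);         % Mexican-hat wavelet (stand-in)

N    = 2^nextpow2(16*numel(t));           % zero-padded FFT for a fine frequency grid
Psi  = fftshift(fft(psi, N)) * dt;        % approximate continuous Fourier transform
w    = 2*pi*(-N/2:N/2-1)/(N*dt);          % angular frequency grid

pos  = w > 0;                             % positive-frequency (one-sided) moments
P    = abs(Psi(pos)).^2;
wp   = w(pos);

E        = trapz(wp, P);                                     % one-sided energy
w_center = trapz(wp, wp .* P) / E;                           % one-sided center
w_radius = sqrt(trapz(wp, (wp - w_center).^2 .* P) / E);     % one-sided radius

fprintf('one-sided center = %.4f\n', w_center);
fprintf('one-sided radius = %.4f\n', w_radius);

With these two numbers in hand, a calibration constant of the kind used in (8.36) can be chosen for any analyzing wavelet in the same way as was done above for the linear and cubic spline wavelets.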
