
NESUG 18 Posters

An Introduction to Wavelet Analysis with SAS®


Michael Lane, Consultant, Watertown, MA

ABSTRACT
Wavelet analysis is a mathematical technique used to represent data or functions. The wavelets used in the analysis are functions that possess certain mathematical properties, and break the data down into different scales or resolutions. Wavelets handle spikes and discontinuities better than traditional Fourier analysis, making them well suited to de-noising noisy data.
Traditional applications of wavelets have focused on image compression and analysis, but they are also being used to analyze
time series, biological processes, spectroscopic data of chemical compounds, seismic signals for earthquake prediction, and
atmospheric data for weather prediction. SAS® began introducing tools to perform wavelet analysis in version 8.2. This paper introduces some of the basic concepts of wavelet analysis, and shows how to perform wavelet analysis with SAS IML®.

INTRODUCTION
In this paper, signal data refer to data with some type of time or spatial relationship. The majority of signal data we encounter
in practical situations are a combination of low and high frequency components. The low frequency component is somewhat
stationary over the length of the signal data. An example of a low frequency component is a moving average in a time series,
or a constant background color in a photograph. High frequency components are jump discontinuities or noisy pieces of the
signal, such as outliers in a time series or a shift from the background to a person’s face in the photograph.
Wavelet analysis employs two functions, often referred to as the father and mother wavelets, to generate a family of functions that break up and reconstruct a signal. The father wavelet is similar in concept to a moving average function, while the mother wavelet quantifies the differences between the original signal and the average generated by the father wavelet. The combination of the two functions allows wavelet analysis to analyze both the low and high frequency components in a signal simultaneously.
Image analysis has been the application of choice for wavelets. In 1995 the FBI had roughly 200 million fingerprint records stored as inked impressions on paper cards, which they wanted to convert to digital form. Each card required 10 MB of storage space based on a resolution of 500 pixels per inch with 256 levels of gray-scale, which meant that the FBI needed 2,000,000,000 MB, or 2,000,000 gigabytes, worth of storage space. An algorithm based on wavelets was chosen as the FBI standard for fingerprint image compression, because it yielded compression ratios between 15:1 and 20:1 with minimal loss of image quality. A compression ratio of 20:1 meant that only 100,000 gigabytes of space was needed; while this was still a large amount of storage, it was a substantial savings. The new JPEG-2000 image compression standard is also based on an algorithm that uses wavelets, and has a compression ratio of 20:1.
Image analysis is not the only area of application for wavelet analysis. Wavelets can be used in the analysis and modeling of
other types of signal data. Chemists are using wavelets to help reduce the number of potential predictors and improve the
modeling of spectroscopic signals [4]. Statisticians and mathematicians are using wavelets to analyze time series [2, 10], and
scientists are using wavelets to study the breathing patterns of newborn babies [1].
The basic concepts of wavelet analysis are accessible to those with some exposure to linear algebra and calculus. General introductions can be found in [1, 7] and on numerous websites, while more mathematically detailed introductions can be found in [3, 5, 9]. This paper scratches the surface of wavelet analysis theory, and presents an example of performing wavelet analysis with SAS®.

A VERY SHORT REVIEW OF LINEAR ALGEBRA


The set of vectors {v1, v2, …, vk} is linearly independent if the only solution to the equation c1v1 + c2v2 + … + ckvk = 0 is c1 = c2 = … = ck = 0. A nonempty set of vectors {v1, v2, …, vk} is a basis for a vector space V if it is linearly independent, and every vector in V can be expressed as a linear combination of v1, v2, …, vk. The vectors (1, 0, 0), (0, 1, 0), and (0, 0, 1) form a basis for the space of all vectors in R^3, because they are linearly independent and any vector (x, y, z) can be written as the linear combination x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1) = (x, y, z). Furthermore, the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is orthonormal, because the inner product of any two distinct vectors is zero, and the norm of each vector is one. These are the standard definitions and examples most readers probably encountered in their first course on linear algebra. The same definitions that apply to the set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} in R^3 also apply to sets of functions in more complex vector spaces. For example, the set of functions {1, t, t^2, t^3} is a basis for the space of all polynomials of degree 3 or less.
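These definitions are easy to check numerically. A minimal Python sketch (the helper `dot` and the variable names are just for illustration, not part of the paper's SAS code):

```python
# Check the two orthonormality conditions for the standard basis of R^3:
# the inner product of any two distinct vectors is zero, and each vector
# has norm one (inner product with itself equal to one).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

for i, u in enumerate(basis):
    for j, v in enumerate(basis):
        assert dot(u, v) == (1 if i == j else 0)

# Any vector (x, y, z) is the linear combination x*e1 + y*e2 + z*e3.
x, y, z = 2, -3, 5
combo = tuple(x * c1 + y * c2 + z * c3 for c1, c2, c3 in zip(*basis))
assert combo == (x, y, z)
```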


WHY DEVELOP WAVELETS?


Approximating signal data with functions is not a new concept. Joseph Fourier developed a method using sines and cosines to represent other functions in the early 1800s. Fourier analysis generates a set of basis functions comprised of sines and cosines with different amplitudes and frequencies to approximate a signal. Fourier analysis is very good for analyzing signal data that does not change with time or involve jump discontinuities, because of the smooth and periodic behavior of sines and cosines. The graph on the left in Figure 1 displays a well behaved signal, which might represent sound from a musical instrument. The graph on the right in Figure 1 displays a much more complex signal that includes many jump discontinuities and appears to be dampening with time. Fourier analysis would easily approximate the signal on the left, but not the one on the right.
Figure 1. A well-behaved, repetitive signal (left), and a more complex signal (right).

Throughout the twentieth century scientists and mathematicians discovered and rediscovered the theory and functions that would become wavelet analysis, because they were interested in analyzing more complex signals. A joint effort by Jean Morlet and Alex Grossman in 1984 was the first time that the theory began to synthesize and wavelets took shape. Morlet was a petroleum engineer working on methods for analyzing seismic signals to help geologists locate underground oil deposits. Geologists use sound waves to determine what type of material lies underneath the surface at different depths. If the waves move very fast through one rock layer, then it may be a salt dome which traps a layer of oil underneath. The seismic signals contain many jump discontinuities as the waves pass from one rock layer to the next, resulting in more complex patterns such as the one displayed on the right side of figure 1. Morlet developed his own method of analyzing the seismic signals by creating functions that were localized in space, and asked Alex Grossman, a physicist, to confirm that the use of “wavelets” of constant shape was mathematically sound. Mackenzie [8] contains a very accessible introduction to wavelets, their applications, and more on their history, including work by Morlet, Grossman, Mallat [9], and Daubechies [5].

THE HAAR WAVELET


The father wavelet is usually referred to as the scaling function, φ, while the mother wavelet is simply the wavelet, ψ. The Haar scaling function and wavelet are defined as

φ(x) = 1 if 0 ≤ x < 1, and 0 elsewhere

ψ(x) = 1 if 0 ≤ x < 1/2, -1 if 1/2 ≤ x < 1, and 0 elsewhere

Figure 2 displays the Haar scaling function and wavelet. The Haar scaling function and wavelet are used to generate sets of basis functions, which are used to break up or reconstruct a signal. The basis functions are similar to the original scaling function and wavelet, except that they are shifted and have different heights and widths. For example, φ(x - k) has the same graph as φ(x) but shifted to the right k units, cφ(x) has the same graph as φ(x) but with height c instead of 1, and φ(ax) has the same graph as φ(x) but takes the value 1 on the range [0, 1/a) and 0 elsewhere. Combining these concepts allows us to rewrite the Haar wavelet using the scaling function as ψ(x) = φ(2x) - φ(2x - 1), where φ(2x) and φ(2x - 1) are orthogonal.
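The identity ψ(x) = φ(2x) - φ(2x - 1) can be verified numerically. A short Python sketch using the Haar definitions above (the function names are illustrative, not SAS code):

```python
# Haar scaling function and wavelet, as defined in the text.
def phi(x):
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

# Verify psi(x) = phi(2x) - phi(2x - 1) on a grid covering [-1, 2).
grid = [k / 100.0 for k in range(-100, 200)]
assert all(psi(x) == phi(2 * x) - phi(2 * x - 1) for x in grid)
```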

Figure 2. Graph of the Haar scaling function (left), and wavelet function (right).

Figure 3 illustrates how the Haar scaling function can be used to approximate a signal. The left side of figure 3 displays a signal that contains a small amount of noise and appears to be dampening with time. The right side of figure 3 shows one possible approximation of the signal on the left using building blocks based on the Haar scaling function. Although this is a simple example, it highlights basic concepts of approximating a signal using a multiresolution analysis.
The Haar scaling function and wavelet are easy to understand, and statisticians have shown that they are useful for detecting outliers in time series [2]. On the other hand, the Haar scaling function and wavelet are discontinuous, and do not approximate smooth signals well. Daubechies [5] developed wavelets that are localized in behavior, continuous, and yield better approximations to smooth signals.


Figure 3. Original signal with noise (left), and the same signal approximated with small steps based on the Haar scaling function (right).

MULTIRESOLUTION ANALYSIS AND THE DISCRETE WAVELET TRANSFORM


How can we use the Haar wavelet and scaling function to actually analyze a signal? Let's use the data points a3 = [3 4 20 25 15 5 20 3] to perform a discrete wavelet transform and illustrate the concepts behind a multiresolution analysis.
Wavelet Decomposition Algorithm
Two filters are necessary to decompose a signal using the wavelet decomposition algorithm. The low pass filter, L, is for averaging, and the high pass filter, H, is for differencing. Deriving the low and high pass filters based on the Haar scaling function and wavelet yields L = [0.5 0.5] and H = [-0.5 0.5].
Step 1: Calculate the convolutions of L and H with the signal.
conv(L, a3) = [1.5 3.5 12 22.5 20 10 12.5 11.5 1.5]
conv(H, a3) = [-1.5 -.5 -8 -2.5 5 5 -7.5 8.5 1.5]
Step 2: Discard the odd numbered coefficients, which is called downsampling. The resulting vectors, a2 and b2, are respectively referred to as the scaling and wavelet coefficients.
a2 = [3.5 22.5 10 11.5 ]
b2 = [-.5 -2.5 5 8.5]
The first coefficient in a2 is 3.5, or (3 + 4)/2, and the second coefficient is 22.5, or (20 + 25)/2. The convolution of a3 and the low pass filter yielded the averages of neighboring data points. The first coefficient of b2 is -.5, which is the difference between the first data point, 3, and the average of 3 and 4 (3 - 3.5 = -.5). The last coefficient in b2 is 8.5, which is the difference between 20 and the average of 20 and 3 (20 - 11.5 = 8.5). The convolution of a3 and the high pass filter yielded the differences between neighboring data points and their averages.
The process continues by decomposing the scaling coefficient vector using the same two steps, and finishes when one coefficient remains.
a1 = [13 10.75]
b1 = [-9.5 -.75]
a0 = [11.875]
b0 = [1.125]
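The whole decomposition pyramid above can be reproduced with a few lines of code. This is an illustrative Python sketch, not SAS/IML; `conv` and `decompose_step` are made-up helper names:

```python
# One decomposition step: convolve with the Haar low pass filter
# L = [0.5, 0.5] and high pass filter H = [-0.5, 0.5], then downsample
# by keeping the 2nd, 4th, 6th, ... coefficients.
def conv(f, x):
    # full convolution of filter f with signal x
    y = [0.0] * (len(x) + len(f) - 1)
    for i, fi in enumerate(f):
        for j, xj in enumerate(x):
            y[i + j] += fi * xj
    return y

def decompose_step(a):
    L, H = [0.5, 0.5], [-0.5, 0.5]
    return conv(L, a)[1::2], conv(H, a)[1::2]   # scaling, wavelet coefficients

a3 = [3, 4, 20, 25, 15, 5, 20, 3]
a2, b2 = decompose_step(a3)   # [3.5, 22.5, 10.0, 11.5], [-0.5, -2.5, 5.0, 8.5]
a1, b1 = decompose_step(a2)   # [13.0, 10.75], [-9.5, -0.75]
a0, b0 = decompose_step(a1)   # [11.875], [1.125]
```

The downsampled vectors match the a2, b2, a1, b1, a0, and b0 listed above.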


It is not a coincidence that the original signal data, a3, has 2^3 = 8 data points, or that the first level decompositions contain 2^2 = 4 coefficients. All signals that are analyzed using a discrete wavelet transform must have length equal to some power of 2, which is referred to as dyadic length.
Why not stop with the first level decompositions, a2 and b2? The largest coefficients in b2 are 5 and 8.5, which are associated with the changes from the fifth data point, 15, to the sixth data point, 5, and from the seventh data point, 20, to the eighth data point, 3. If the goal is to detect large changes in the signal, then the first decomposition missed the change from the second data point, 4, to the third data point, 20. The change is detected in the largest coefficient of b1, -9.5, which is associated with the coefficients 3.5 and 22.5 in a2, or the shift from 4 to 20 in the original signal. Multiresolution analysis uses different scales of resolution to build a complete picture of the original signal.
Wavelet Reconstruction Algorithm
A similar process can be used to rebuild the original signal using the wavelet reconstruction algorithm. A new low pass filter,
LT, and a new high pass filter, HT, are needed. LT = [1 1] and HT = [1 -1] based on the Haar scaling function and wavelet.
Step 1: Upsample the scaling and wavelet coefficient vectors by adding zeros.
Up(a0) = [11.875 0]
Up(b0) = [1.125 0]
Step 2: Calculate the convolutions of the scaling coefficients and LT, and the wavelet coefficients and HT.
conv(LT, Up(a0)) = [11.875 11.875]
conv(HT, Up(b0)) = [1.125 -1.125]
Step 3: Add the new average and difference vectors to yield the reconstructed average vector, a1.
ra1 = [11.875 11.875] + [1.125 -1.125] = [13 10.75]
The reconstructed version of a1 and the difference vector, b1, can be used to reconstruct a2.
Up(ra1) = [13 0 10.75 0]
Up(b1) = [-9.5 0 -.75 0]
ra2 = conv(LT, Up(ra1)) + conv(HT, Up(b1)) = [13 13 10.75 10.75] + [-9.5 9.5 -.75 .75] = [3.5 22.5 10 11.5]
Finally, we can reconstruct the original signal.
ra3 = conv(LT, Up(ra2)) + conv(HT, Up(b2)) = [3 4 20 25 15 5 20 3]
The discrete wavelet transform is complete and has zero redundancy. All of the information needed to reconstruct the original
signal exactly was contained in b2, b1, b0, and a0, and no information was duplicated between transform coefficients.
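The reconstruction steps can also be sketched in Python (again with illustrative helper names, not SAS/IML):

```python
# One reconstruction step: upsample by inserting zeros, convolve with
# LT = [1, 1] and HT = [1, -1], and add the two results.
def upsample(v):
    out = []
    for c in v:
        out.extend([c, 0.0])
    return out

def reconstruct_step(a, b):
    ua, ub = upsample(a), upsample(b)
    n = len(ua)
    # convolution with a length-2 filter f: y[i] = f[0]*x[i] + f[1]*x[i-1]
    ca = [ua[i] + (ua[i - 1] if i > 0 else 0.0) for i in range(n)]  # LT = [1, 1]
    cb = [ub[i] - (ub[i - 1] if i > 0 else 0.0) for i in range(n)]  # HT = [1, -1]
    return [x + y for x, y in zip(ca, cb)]

# coefficients from the decomposition example in the text
a0, b0 = [11.875], [1.125]
b1 = [-9.5, -0.75]
b2 = [-0.5, -2.5, 5.0, 8.5]

ra1 = reconstruct_step(a0, b0)    # [13.0, 10.75]
ra2 = reconstruct_step(ra1, b1)   # [3.5, 22.5, 10.0, 11.5]
ra3 = reconstruct_step(ra2, b2)   # [3.0, 4.0, 20.0, 25.0, 15.0, 5.0, 20.0, 3.0]
```

The final vector ra3 reproduces the original signal exactly, as claimed.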
There were 4 coefficients in b2, 2 coefficients in b1, and 1 coefficient in each of a0 and b0. Eight coefficients from the wavelet decomposition were necessary to reconstruct the original signal, which contained 8 data points. What if we set the -.75 in b1 and the -.5 in b2 to zero, and reconstruct the signal based on the updated difference vectors? The reader can verify that the reconstructed signal is [3.5 3.5 20 25 15.75 5.75 19.25 2.25], and that the difference between the reconstructed signal and the original equals [.5 -.5 0 0 .75 .75 -.75 -.75]. We reduced the number of nonzero coefficients by 25%, and the resulting reconstructed signal is nearly equal to the original. This is the idea behind image compression and denoising noisy data. Neighboring pixels in digital images such as FBI fingerprint cards will be very much alike. This means that the average of their 256 gray-scale values will be close to the originals, so the wavelet coefficients will be very small or zero. If we round all of the small coefficients to zero, then the amount of data needed to store the image is greatly reduced.
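The thresholded reconstruction is easy to confirm. For the Haar filters, upsampling, convolving with LT and HT, and adding reduces to emitting a + b and a - b for each coefficient pair; the sketch below (illustrative Python, not SAS/IML) zeroes the -.75 in b1 and the -.5 in b2 and rebuilds the signal:

```python
# Inverse Haar step in closed form: each pair of scaling/wavelet
# coefficients (a, b) expands to the two signal values (a + b, a - b).
def rebuild(a, b):
    out = []
    for ai, bi in zip(a, b):
        out.extend([ai + bi, ai - bi])
    return out

a0, b0 = [11.875], [1.125]
b1_t = [-9.5, 0.0]               # -.75 set to zero
b2_t = [0.0, -2.5, 5.0, 8.5]     # -.5 set to zero

ra3 = rebuild(rebuild(rebuild(a0, b0), b1_t), b2_t)
# ra3 == [3.5, 3.5, 20.0, 25.0, 15.75, 5.75, 19.25, 2.25]

original = [3, 4, 20, 25, 15, 5, 20, 3]
diff = [r - o for r, o in zip(ra3, original)]
# diff == [0.5, -0.5, 0.0, 0.0, 0.75, 0.75, -0.75, -0.75]
```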
Stéphane Mallat developed the theory behind multiresolution analysis, which is used to create more general scaling functions and wavelets that are continuous. The theory is based on a sequence of subspaces of functions {Vj, j = …, -2, -1, 0, 1, 2, …}, with scaling function φ. The set of functions {φ_jk(x) = 2^(j/2) φ(2^j x - k); k = …, -2, -1, 0, 1, 2, …} is an orthonormal basis for Vj, and is comprised of shifts and dilations of the scaling function. The Vj satisfy a series of conditions including nesting and density. Nesting means that any function contained in Vj is contained in Vj+1, or Vj ⊂ Vj+1. Density implies that an approximation of a signal by a function in Vj improves as j increases, and eventually captures all of the details in the original signal. Each Vj can be decomposed as Vj = Vj-1 ⊕ Wj-1, where Wj-1 is the orthogonal complement of Vj-1 in Vj, and the set of functions {ψ_jk(x) = 2^(j/2) ψ(2^j x - k); k = …, -2, -1, 0, 1, 2, …} is an orthonormal basis for Wj. The set of functions that make up the basis for Wj are merely shifts and dilations of the wavelet. Successive decompositions yield Vj = Wj-1 ⊕ Wj-2 ⊕ … ⊕ W0 ⊕ V0. Now we can see the relationship between the more general multiresolution theory and the discrete wavelet transform. Recall that we reconstructed the original signal a3 using only information contained in a0, b0, b1, and b2. The average information contained in a0 is based on the scaling function, and is related to V0. The difference information contained in b0, b1, and b2 is based on the wavelet, and is related to the Wj. I have left out numerous mathematical details of multiresolution analysis theory, and interested readers should refer to [3] for a more thorough explanation.

WAVELET ANALYSIS IN SAS


The nuclear magnetic resonance spectrum in figure 4 was presented in [10], and can be obtained from http://faculty.washington.edu/~dbp/wmtsa.html. The NMR spectrum consists of an underlying low frequency signal, and high frequency oscillations that appear as noise. Wavelet analysis can be used to smooth out the high frequency oscillations without removing interesting pieces of the spectrum.
Figure 4. NMR spectrum.

The following code is used to start up the wavelet analysis tools in SAS IML.
%wavginit;
Proc IML;
%wavinit;

The WavgInit macro defines several macro variables that are used to set default values for graphing. The WavInit macro must be called from within Proc IML, and it defines symbolic macro variables, modules for producing standard wavelet analysis graphs, utility modules, and the WavHelp macro. The WavHelp macro can be used to obtain help about the wavelet analysis macros and modules once the WavInit macro has been called. For example,
%wavhelp(wavginit);

will send information about the WavgInit macro and its arguments to the log. After the WavInit macro is called, the end-user can begin setting up the wavelet analysis options.

sym5 = &waveSpec; /* sym5 = j(1, 4, .); */


sym5[&family] = &symmlet; /* sym5[3] = 2; */
sym5[&member] = 5; /* sym5[4] = 5; */
sym5[&boundary] = &zeroExtension; /* sym5[1] = 0; */
sym5[&degree] = .; /* sym5[2] = .; */

The first line uses the symbolic macro variable waveSpec, and sets sym5 to a vector with 4 missing values. Equivalent code that does not use the symbolic macro variables is included in the comments; the symbolic macro variables make the code easier to read. The next two lines specify that the 5th member of the Symmlet family of wavelets is to be used. Valid values for the member argument are the integers from 4 to 10. The other choice for the wavelet family is &daubechies, with members 1 through 10. The Daubechies family member 1 specification is equivalent to the Haar wavelet. Increasing the member argument will result in smoother approximations, because more data points will be involved in the averaging and differencing process. The member argument is related to the concept of vanishing moments, and the smoothness of the scaling function and wavelet increases with the number of vanishing moments.
Finally, the last two lines specify the method for handling the boundaries. Recall that the discrete wavelet transform requires the signal to have length equal to some power of 2. If the signal length is not a power of 2, then the boundaries must be extended. The signal can be padded with zeros on either end, which is appropriate if the signal is long and the ends do not matter, or if the signal really is abruptly started and stopped. Periodic extension reuses the signal to make it periodic, so that s(k+n) = s(k). For example, if the signal had twelve data points, s1, s2, …, s12, then s13 = s1, s14 = s2, and so on. Reflection involves reflecting the signal about a line, and interpolation extrapolates the signal beyond the ends using a polynomial. Possible values for the boundary option in SAS are &zeroExtension, &periodic, &polynomial, &reflection, and &antisymmetricReflection. The degree option takes values &constant, &linear, and &quadratic, but is only relevant for the polynomial boundary option. The NMR spectrum in figure 4 contains data at 2^10 = 1,024 frequencies, so SAS will not extend the signal.
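The two simplest extension schemes can be illustrated in a few lines (plain Python with a made-up stand-in signal, not the SAS implementation):

```python
# Zero extension and periodic extension of s1..s12 out to length 16,
# the next power of 2.
s = list(range(1, 13))        # stand-in signal s1, s2, ..., s12
target = 16

zero_ext = s + [0] * (target - len(s))

# periodic extension: s[k + n] = s[k], so s13 = s1, s14 = s2, ...
periodic_ext = [s[k % len(s)] for k in range(target)]
assert periodic_ext[12:] == s[:4]
```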
I created a SAS data set, nmrSpec, with the NMR spectrum contained in a variable called absorb. The following IML code is
used to read the NMR signal into a vector named nmrSignal.
use nmrSpec;
read all var{absorb} into nmrSignal;

Now we are ready to create the wavelet decomposition using the NMR signal data and wavelet specification vectors.
call wavft(nmrDecomp, nmrSignal, sym5);

The WavFt module takes the signal data and wavelet specification vectors, and returns a vector containing the decomposition. The decomposition vector is used as an input to the other wavelet analysis modules, such as mraDecomp, which generates a graph that displays the multiresolution decomposition results. Figure 5 contains a multiresolution decomposition of the nmrDecomp vector.
call mraDecomp(nmrDecomp, , 5, 9, , "NMR Spectrum");

Figure 5. Multiresolution decomposition plot.

The wavelet coefficients in figure 5, which correspond to the bj in the discrete wavelet transform example, are displayed for the level 9 through level 5 decompositions within the box titled "Independent Level Scaling". The reconstructed signals at levels 5 and 10 are also shown in the graph. Recall from the discrete wavelet transform example that the reconstructed signals at each level were generated using the wavelet decomposition information below that level, and were referred to as the raj. No thresholding was applied to the reconstructions, so the level 10 signal is identical to the original NMR spectrum.


The high frequency oscillations in the NMR spectrum are captured in the small wavelet coefficients at the higher levels of the decomposition. Figure 6 displays the full set of wavelet coefficients, which were scaled uniformly by calling the coefficientPlot module with the "uniform" option.
call coefficientPlot(nmrDecomp, , , , "uniform", "NMR Spectrum");

Figure 6. Wavelet coefficient plot.

We can see that the detail coefficients at the higher levels, such as 7, 8, and 9, are much smaller in magnitude than those at levels 2, 3, and 4. Thresholding can be used to zero out or shrink the small wavelet coefficients at the higher levels, so that the reconstructed signal will be smoother. Donoho and Johnstone [6] developed an algorithm called SureShrink for thresholding wavelet coefficients. SureShrink is an adaptive thresholding method, which means that coefficients are treated in a level-by-level fashion. In each level, if there is evidence that the wavelet representation of that level is not sparse, a threshold that minimizes Stein's unbiased risk estimate (SURE) is applied; otherwise, a universal type threshold is applied. The WavIft module inverts a wavelet transformation, and returns the reconstructed signal.
call wavift(nmrRecon, nmrDecomp, &SureShrink);
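To give a feel for wavelet shrinkage in general, here is a rough sketch in Python. This is not SAS's SureShrink implementation; it applies the simpler "universal" threshold sqrt(2 ln n) * sigma with soft thresholding, and the coefficient vector is made up:

```python
import math

# Soft thresholding: coefficients with magnitude below the threshold t go
# to zero; larger coefficients shrink toward zero by t.
def soft_threshold(coeffs, sigma=1.0):
    t = sigma * math.sqrt(2.0 * math.log(len(coeffs)))
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# large coefficients survive in shrunken form; small ones vanish
out = soft_threshold([10.0, -0.5, 0.3, -8.0, 0.1, 0.2, -0.4, 6.0])
```

With n = 8 the threshold is about 2.04, so the coefficients 10.0, -8.0, and 6.0 survive shrunken while the small "noise" coefficients are zeroed.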

The nmrRecon vector returned by WavIft contains the reconstructed NMR spectrum, which is displayed in figure 7. The high
frequency oscillations have been removed, leaving the smooth underlying signal and keeping all of the interesting pieces of the
spectrum intact.
Figure 8 contains wavelet scalograms of NMR spectrum decomposition, which were created using the scalogram module.
call scalogram(nmrDecomp, , , , , ,"NMR Spectrum"); /* no thresholding */
call scalogram(nmrDecomp, &SureShrink, , , , ,"NMR Spectrum"); /* SureShrink thresholding */

Scalograms display the time frequency localization property of the discrete wavelet transform. Each individual rectangle represents a wavelet coefficient. Blue rectangles represent coefficients with small magnitudes, and red rectangles represent coefficients with larger magnitudes. The y-axis measures frequency resolution, and the x-axis measures time resolution. The range of frequencies analyzed in the original signal decreases by ½ as the decomposition algorithm progresses from higher to lower levels, and this is communicated in the scalogram by reducing the heights of rectangles in each level by ½. The range of time values analyzed doubles as the decomposition algorithm progresses from higher to lower levels, and this is communicated by doubling the widths of rectangles. Rectangles at the higher levels localize wide frequency ranges but small time ranges, and rectangles at the lower levels localize wide time ranges but small frequency ranges.


Figure 7. Wavelet reconstruction of NMR spectrum based on SureShrink thresholding.


Figure 8. Wavelet scalogram with no thresholding (left), and SureShrink thresholding (right).

The scalogram module amplifies the small wavelet coefficients by scaling the magnitudes of all coefficients to lie in the interval [0, 1], and then raising the scaled magnitudes to a default power of 1/3. Otherwise, the small coefficients at the higher levels would be all blue. The scalogram on the left represents the wavelet coefficients with no thresholding applied. The bar to the left displays the total energy of each level, which is defined as the sum of squares of the wavelet coefficients. The total energy is higher at levels 4, 5, and 6, which is consistent with the results in figure 6. The scalogram on the right represents the wavelet decomposition after applying SureShrink thresholding. Levels 8 and 9 are almost entirely blue, because SureShrink zeroed out or shrank the small detail coefficients.
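The rescaling used by the scalogram module can be sketched in a few lines (illustrative Python with a made-up coefficient vector):

```python
# Map coefficient magnitudes to [0, 1], then raise to the 1/3 power so
# small coefficients remain visible on the color scale.
coeffs = [0.001, 0.05, 0.2, 1.0, -4.0]
mags = [abs(c) for c in coeffs]
top = max(mags)
scaled = [(m / top) ** (1.0 / 3.0) for m in mags]
# the smallest magnitude maps to roughly 0.06 instead of 0.00025
```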

Finally, the original NMR signal and the reconstructed approximation based on the SureShrink algorithm can be compared using the sum of squared differences as follows.
reconErr = ssq(nmrRecon - nmrSignal);
print reconErr; /* reconErr = 2,857.28 */
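The same comparison in plain Python, with short made-up stand-in vectors (the real NMR data lives in the SAS session):

```python
# Sum of squared differences between a signal and its reconstruction.
nmr_signal = [60.0, 40.0, 20.0, 0.0]   # hypothetical values
nmr_recon = [58.5, 41.0, 19.0, 0.5]    # hypothetical reconstruction
recon_err = sum((r - s) ** 2 for r, s in zip(nmr_recon, nmr_signal))
# recon_err == 4.5
```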

Quantifying the difference between the original and reconstructed signals can help decide which wavelet and thresholding specifications yield the best results.

CONCLUSIONS
While wavelet analysis has already established itself as a valuable tool for image analysis and compression, its application to other types of signal data is still being explored. For statisticians and other scientists who analyze data, wavelets are a potentially powerful tool to aid in the analysis and preprocessing of data. Wavelet analysis is well suited for signals that are intermittent, non-stationary, or noisy, and wavelet coefficients effectively reveal characteristics of the data that other techniques might miss. Hopefully, SAS will continue to support and expand the wavelet analysis tools available in SAS IML.

REFERENCES
[1] Addison, P. (2004), “The little wave with the big future”, Physics World, Volume 17, Number 3, 35-39.
[2] Bilen, C., Huzurbazar, S. (2002), “Wavelet-Based Detection of Outliers in Time Series”, Journal of Computational and
Graphical Statistics, Volume 11, Number 2, 311-327.
[3] Boggess, A., Narcowich, F.J. (2001), A First Course in Wavelets with Fourier Analysis. Prentice Hall. Upper Saddle River,
N.J.
[4] Cocchi, M., Seeber, R., Ulrici, A. (2003), “Multivariate calibration of analytical signals by WILMA (wavelet interface to linear
modeling analysis)”, Journal of Chemometrics, Volume 17, 512-527.
[5] Daubechies, I. (1992), Ten Lectures on Wavelets. SIAM.
[6] Donoho, D., Johnstone, I. (1995), “Adapting to Unknown Smoothness via Wavelet Shrinkage”, Journal of the American Statistical Association, Volume 90, 1200-1224.
[7] Graps, A. (1995), “An Introduction to Wavelets”, IEEE Computational Science and Engineering, Volume 2, Number 2, 50-61.
[8] Mackenzie, D. (2001), “Wavelets: Seeing the Forest and the Trees”. The National Academy of Sciences. Washington, DC.
[9] Mallat, S. (1999), A Wavelet Tour of Signal Processing. Academic Press, San Diego, CA.
[10] Percival, D., Walden, A. (2000), Wavelet Methods for Time Series Analysis. Cambridge University Press.

ACKNOWLEDGMENTS
SAS is a Registered Trademark of the SAS Institute, Inc. of Cary, North Carolina.

CONTACT INFORMATION
Your comments and questions are valued and encouraged.
Please feel free to contact the author at:
Michael Lane
Consultant
506 Main Street
Watertown, MA 02472
lanemc7@yahoo.com
