3.7 Trends of a Simple IIR Filter
(Figure: a simple IIR filter with feed-forward coefficients 0.6 and 0.2 and feedback coefficient 1.1, i.e., y[n] = 0.6 x[n] + 0.2 x[n-1] + 1.1 y[n-1]. D denotes a delay element.)

input    output
0        0
1        0.6
0        0.86
0        0.946
0        1.0406
0        1.14466
0        1.259126
The output will continue to get larger in magnitude. This demonstrates a problem with IIR filters: the output may approach infinity (positive or negative). It is also possible for an IIR filter's output to oscillate toward infinity, where the output grows in magnitude even though its sign alternates. A filter is said to be stable when the output approaches zero once the input falls to zero. If the output oscillates at a constant rate, we call the filter conditionally stable. If the output approaches positive or negative infinity (as in the example above), or if it oscillates toward infinity, we say that the IIR filter is unstable.
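The table above can be reproduced with a short difference-equation loop. Here is a Python sketch (outside the book's MATLAB), using coefficients 0.6, 0.2, and 1.1 as read off the figure:

```python
# Difference equation read off the figure (a reconstruction):
#   y[n] = 0.6*x[n] + 0.2*x[n-1] + 1.1*y[n-1]
x = [0, 1, 0, 0, 0, 0, 0]          # the "input" column of the table
y = []
for n in range(len(x)):
    xprev = x[n-1] if n > 0 else 0.0
    yprev = y[n-1] if n > 0 else 0.0
    y.append(0.6*x[n] + 0.2*xprev + 1.1*yprev)
print([round(v, 6) for v in y])
# -> [0.0, 0.6, 0.86, 0.946, 1.0406, 1.14466, 1.259126]
```

Once the input falls back to zero, each output is 1.1 times the previous one, which is why the output keeps growing.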
The general form of the IIR filter is shown in Figure 3.27 [11]. Other texts refer to this as Direct Form I, due to the direct correspondence between the figure and the equation that describes it.
Figure 3.27: The general (Direct Form I) IIR filter, with feed-forward coefficients b[0] through b[K] on delayed copies of the input x[n], feedback coefficients a[1] through a[L] on delayed copies of the output y[n], and D denoting a delay element.
For an impulse function as input, only the initial value of y would be directly impacted by x. That is, assuming that the system is initially at rest (y[-1] = 0), we compute the first output:

y[0] = b x[0] + 0.

For the impulse function, only x[0] has a nonzero value, and x[0] = 1. So the output becomes:

y[0] = b

and

y[n] = a y[n-1],  n > 0,

so the output follows the trend y[n] = b a^n.
How the output looks over time depends directly on the value of a. The following
code gives us the output, shown in Figure 3.29.
a = 0.5;     % try each of 0.5, -0.5, -1, 1, 2, and -2
N = 20;
y(1) = 10;   % initial value, set by the impulse
for n = 2:N+1
    y(n) = a * y(n-1);
end
From the graphs, we see the output either approaches zero (for values a = 0.5 and a = -0.5), oscillates (a = -1), stays constant (a = 1), or gets larger in magnitude for successive outputs (a = 2 and a = -2). Later, in Chapter 8, we will see why this is the case.
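These trends can be checked numerically. A small Python sketch mirroring the MATLAB loop above (starting value 10, as in the code):

```python
def iir_trend(a, y0=10.0, steps=20):
    """Iterate y[n] = a*y[n-1] from y[0] = y0 (mirrors the MATLAB loop above)."""
    y = [y0]
    for _ in range(steps):
        y.append(a * y[-1])
    return y

# |a| < 1 decays toward zero; |a| = 1 keeps a constant magnitude; |a| > 1 grows.
print(iir_trend(0.5)[-1], iir_trend(-1.0)[-1], iir_trend(2.0)[-1])
```

The sign of a controls whether successive outputs alternate in sign; the magnitude of a controls whether they decay, stay level, or grow.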
3.8 Correlation
Correlation determines how much alike two signals are, using convolution. Correlation by itself generates a single number, called a correlation coefficient [13]. A large, positive correlation coefficient indicates a relation between the signals, while a correlation coefficient close to zero means that the two signals are not related (or that they are not lined up properly). A negative number indicates a negative correlation, meaning that one signal tends to decrease as the other increases [14].
The problem occurs when the input signals are not lined up with one another, since a single correlation calculation may give the appearance that the two signals are not related. To fix this, one of the signals should be shifted, but how much to shift it is not known. Of course, a human observer could say how one signal should be shifted to line it up with the other. This is an example of a problem that is easy for a human to solve, but difficult for a computer. Therefore, one signal is shifted up to N times, and the maximum value from the cross-correlation is chosen.
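The shift-and-pick-the-maximum idea can be sketched as follows. This is Python rather than the book's MATLAB, and the raw inner product (divided by N) stands in for the correlation measure:

```python
def best_alignment(x, y):
    """Try every circular shift of y; return (shift, score) with the largest
    raw correlation measure (inner product divided by N)."""
    N = len(x)
    best_k, best_score = 0, float("-inf")
    for k in range(N):
        shifted = y[N-k:] + y[:N-k]        # rotate y right by k positions
        score = sum(a*b for a, b in zip(x, shifted)) / N
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Two copies of the same pulse, misaligned by two samples:
x = [0, 0, 1, 5, 1, -2, -3, -2, 0, 0]
y = [1, 5, 1, -2, -3, -2, 0, 0, 0, 0]
print(best_alignment(x, y))   # the peak picks out the two-sample misalignment
```

The maximum over all shifts, rather than any single shift's value, is what tells us how alike the two signals are.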
116 DSP Using MATLAB and Wavelets
Figure 3.29: Output of y[n] = a*y[n-1] over 20 samples, for a = 0.5, a = -0.5, a = -1, a = 1, a = 2, and a = -2.
The above equation gives an approximation for s_xy[k], since it does not take the signal's average into account, and assumes that it is 0 for both x and y. If the signal averages are nonzero, we need to use the following equation (cross-covariance) instead:
s_xy[k] = (1/N) sum_{n=0}^{N-1} x[n] y[n+k]  -  (1/N^2) (sum_{n=0}^{N-1} x[n]) (sum_{n=0}^{N-1} y[n]).
s_xx = (1/N) sum_{n=0}^{N-1} x[n]^2  -  (1/N^2) (sum_{n=0}^{N-1} x[n])^2.
To find s_yy, use the above equation for s_xx, and replace x with y.
The cross-correlation is estimated to be:

rho_xy[k] = s_xy[k] / sqrt(s_xx * s_yy).
For the autocorrelation, replace y with x. s_xx[k] can be found by substituting x for both parameters in the equation for s_xy[k]:

rho_xx[k] = s_xx[k] / sqrt(s_xx * s_xx) = s_xx[k] / s_xx.
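The s_xx estimate is just the population variance from statistics. A quick Python check (the helper name is mine, not the book's):

```python
import statistics

def s_xx(x):
    """(1/N)*sum(x[n]^2) - (1/N^2)*(sum(x[n]))^2, the variance estimate above."""
    N = len(x)
    return sum(v*v for v in x)/N - (sum(x)/N)**2

data = [6.0, 1.0, 3.0, 5.0, 1.0, 4.0]   # made-up sample values
print(s_xx(data), statistics.pvariance(data))  # the two estimates agree
```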
Note that rho_xy[k] is an estimate of the correlation, since we are dealing with sampled data, not the underlying processes that create this data. As you may have guessed, these formulas come from probability and statistics. The cross-correlation is related to the variance, sigma^2. For more information, see Probability and Statistics for Engineers and Scientists, Third Edition by Walpole and Myers [13].
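Putting the covariance estimates together, the coefficient rho_xy[k] can be sketched in Python (a sketch with my own names, not the book's code):

```python
from math import sqrt

def correlate(k, x, orig_y):
    """Correlation coefficient of x against orig_y circularly shifted by k units,
    built from the cross-covariance and variance estimates above."""
    N = len(x)
    y = orig_y[k:] + orig_y[:k]                     # shift y by k units
    mx, my = sum(x)/N, sum(y)/N
    s_xy = sum(a*b for a, b in zip(x, y))/N - mx*my
    s_xx = sum(a*a for a in x)/N - mx*mx
    s_yy = sum(b*b for b in y)/N - my*my
    return s_xy / sqrt(s_xx * s_yy)

x = [0, 0, 1, 5, 1, -2, -3, -2, 0, 0]
y = [1, 5, 1, -2, -3, -2, 0, 0, 0, 0]
print(max(correlate(k, x, y) for k in range(len(y))))  # close to 1 when aligned
```

Because of the normalization by sqrt(s_xx * s_yy), the result always falls between -1 and 1.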
A few examples about correlation follow. Here we have x and y, two signals that are identical, but shifted. To compute the correlation, we use a simple approximation. First, we find the point-for-point multiplication of x and y, with the code that follows.
Notice that we use the transpose of y (otherwise, we would get an error). Also, after the first computation, we use a different range for y by specifying [y(10), y(1:9)], which concatenates the last value of y with the first nine values. There is no need to find the sum of the point-for-point multiplication with the sum( ) command, since this is done automatically. (Remember that [a b c]*[d e f]' = ad + be + cf, a single value.) Finally, we divide by the number of samples. This code only works because we have been careful to keep x and y the exact same length, and because x and y happen to have an average of zero.
>> x = [ 0 0 1 5 1 -2 -3 -2 0 0 ];
>> y = [ 1 5 1 -2 -3 -2 0 0 0 0 ];
>> x*y.'/length(x)

ans =

   -0.8000

Repeating the computation with y shifted one more position each time gives:

    2.0000
    4.4000
    2.0000
   -0.8000
   -1.9000
   -1.3000
   -0.4000
   -1.3000
   -1.9000
Notice how the answer varied from -1.9 to 4.4, depending on the alignment of the two signals. At the value largest in magnitude (furthest away from 0), 4.4, the signal given for y was actually [y(9:10), y(1:8)].
     0     0     1     5     1    -2    -3    -2     0     0

>> x

x =

     0     0     1     5     1    -2    -3    -2     0     0
As the above output shows, the maximum value occurred when the two signals
were exactly aligned. A second example is given below. Here, we keep x the same,
but we use a negated version of y. Notice that the values generated by our simple
correlation function are the same values as above, except negated. Again, the value
largest in magnitude (-4.4) occurs when the signals are aligned.
yn =

    -1    -5    -1     2     3     2     0     0     0     0

ans =

    0.8000
   -2.0000
   -4.4000
   -2.0000
    0.8000
    1.9000
    1.3000
    0.4000
    1.3000
    1.9000
Technically, the value returned above is not the correlation, but the cross-covariance. One problem is that the shortcut used above only works when the signal has an average of 0. Another problem with this value is that it does not tell us if the two signals are an exact match. That is, in the first example above, the cross-covariance between x and y was found to be 4.4, but if we were to compare signal x with some other signal, could the cross-covariance be higher? Actually, the answer is yes! The following two commands show why this is the case. Taking the signal y and multiplying each value by 10, then finding the cross-covariance, leads to an answer 10 times the original.
y10 =

    10    50    10   -20   -30   -20     0     0     0     0

ans =

    44
%
% correlate.m - How much are 2 signals alike?
%
% Usage: [rho] = correlate(k, x, y);
% inputs: k = shift (integer between 0 and length(y)-1)
%         x, y = the signals to correlate
% output: rho = the correlation coefficient -1..0..1
%
% Note: This function assumes length(x) = length(y).
%
function [rho] = correlate(k, x, orig_y)
% Shift y by k units
y = [orig_y(k+1:length(orig_y)), orig_y(1:k)];
N = length(x);
% Cross-covariance and variance estimates (averages removed)
s_xy = sum(x.*y)/N - (sum(x)/N)*(sum(y)/N);
s_xx = sum(x.^2)/N - (sum(x)/N)^2;
s_yy = sum(y.^2)/N - (sum(y)/N)^2;
% Normalized correlation coefficient
rho = s_xy / sqrt(s_xx * s_yy);
When run, we see that the output rho, the cross-correlation, is a much better value to use, since it gives us an indication (up to 1) of how well the two signals match.

rho =

    -1
The last example shows that a negative correlation exists between x and yn (the
negated version of y). The cross-correlation function takes care of the fact that the
signals may be scaled versions of each other. That is, we found that x and y10
above were a match, even though y10's values were 10 times that of x's.
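This scale invariance is easy to check. Assuming the same normalization as above (a sketch in Python, not the book's code):

```python
from math import sqrt

def rho0(x, y):
    """Zero-shift correlation coefficient of two equal-length signals."""
    N = len(x)
    mx, my = sum(x)/N, sum(y)/N
    s_xy = sum(a*b for a, b in zip(x, y))/N - mx*my
    s_xx = sum(a*a for a in x)/N - mx*mx
    s_yy = sum(b*b for b in y)/N - my*my
    return s_xy / sqrt(s_xx * s_yy)

x = [0, 0, 1, 5, 1, -2, -3, -2, 0, 0]
y10 = [10*v for v in x]            # a scaled copy of x
print(rho0(x, x), rho0(x, y10))    # both are 1: the scale factor drops out
```

Scaling y by 10 multiplies s_xy by 10 and s_yy by 100, so the factor cancels in the normalization.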
Let us look at another example. What if we try to correlate x to a signal that is
not like it? We will call the second signal noise.
my_rho =

  Columns 1 through 6
  ...

max_rho =

    0.4308
The array my_rho stores all correlation coefficients for these two signals. To see how well x and noise match up, we need to examine both the minimum as well as the maximum values. With cross-correlation, we are able to say that two signals match, or that they do not match well.
Example:
The following example comes out of an image processing project. In Figure 3.30, we
see two rectangles. Our goal is to automatically match these two objects, something
people can do easily, but is a difficult pattern-matching process for a computer.
The rectangles are similar, though one is smaller than the other, as well as being
rotated (i.e., one is longer than it is wide, the other is wider than it is long). To have
the computer determine that they are a match, we will trace the boundaries of these
objects, and calculate the center point. From this point, we will measure the distance
to each point that makes up the boundary, from the upper-left corners clockwise
around the object. When we finish, we have a one-dimensional view of these objects,
as seen in Figure 3.31. The solid line represents the object on the left in Figure 3.30,
while the dotted points represent the object on the right.
Notice that the two have the same length. This is necessary for correlation, so the smaller of the two was "stretched" to make it longer. The original signals are d1 and d2, and the code below shows a simple way to stretch the shorter signal d2 to be the same length as d1. The first line creates an array of points at fractional increments. The second line makes a new data signal called d3 that is a copy of d2, with redundant points. Better ways to interpolate a signal exist (see the spline command), but this method suffices for this example.
pts = 1:length(d2)/length(d1):length(d2)+1;
d3 = d2(round(pts(1:length(d1))));
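The same nearest-neighbor stretch can be sketched in Python; d2 here is a made-up stand-in for the boundary-distance signal, not the book's data:

```python
def stretch(short, target_len):
    """Nearest-neighbor stretch: repeat samples of `short` so the result
    has target_len points (rough equivalent of the MATLAB lines above)."""
    ratio = len(short) / target_len
    return [short[min(int(round(i*ratio)), len(short)-1)] for i in range(target_len)]

d2 = [40, 42, 45, 43, 41]     # hypothetical shorter boundary-distance signal
d3 = stretch(d2, 8)           # stretched to the length of a longer d1
print(len(d3), d3)
```

As in the MATLAB version, no new values are invented; existing points are simply repeated until the lengths match.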
Once we have two signals to compare that are the same length, we can use
the correlation function, and take the largest value as an indication of how well
the objects match.
Our code gives us a 0.9994 correlation coefficient, indicating a very good match between the two objects, as expected.
Figure 3.31: Two rectangles represented as the distance from their centers to their edges.
Example:
To show that the previous example was not just a coincidence, we will repeat the
above experiment with a rectangle and a triangle, Figure 3.32.
This time, we see that the resulting one-dimensional signals are different, as we might expect (Figure 3.33). With the rectangle, we see the 1D signal gradually get smaller, then get larger again until it reaches the same height. Next, it gets a bit smaller, then back to its original height. This pattern repeats. From the center, the corners are going to be the furthest points away, and we see this as the top height. As we trace the boundary of the rectangle, the distance to the center gets a little smaller, then larger until we reach the next corner. As we trace the next side, the distance gets smaller and then back to the original distance at the corner. Since the second side is not as long as the first, the gradual dip in height is not as prominent.

With the triangle, the corners are not equidistant from the center (due to the use of a simple averaging function to find the center), thus we see three peaks in the dotted one-dimensional object representation, but two of them are not as far away from the center as the third one. We started from the upper-right corner, though the starting position does not matter, since the correlation calculations consider the shift of the signal.
Figure 3.33: A rectangle and triangle represented as the distance from their centers to their edges.

3.9 Summary

Finite Impulse Response (FIR) filters are presented in this chapter, including their structure and behavior. Three important properties that FIR filters possess are causality, linearity, and time-invariance. Infinite Impulse Response (IIR) filters are also a topic of this chapter. Correlation is covered here as well, which is one application of an FIR filter.
5. Write MATLAB code to compute output (y[n]) for convolution, given any FIR filter coefficients (b[k]) and input (x[n]).

6. Filter input x = [5; 1; 8; 0; 2; 4; 6; 7] with a 2-tap FIR filter of coefficients {1, 1} by hand (do not use a computer).
7. What word describes the operation below? ________ of x[n] and b[n] produces y[n].

   y[n] = sum_{k=0}^{K} b[k] x[n-k]
8. Make up a test signal of at least 10 values, and filter it with the following coefficients. Use MATLAB for this question. What do you observe? Which set of filter values does the best job of "smoothing out" the signal?
9. What do we call h[n]? Why is it significant?

10. What does MAC stand for? What does it mean?
11. For system1 and system2 below, are the systems linear? Are they time-invariant? Are they causal? Explain. Let x[n] be the input and y[n] be the output, where system1 is y1[n] = x[n] - x[n-1], and system2 is y2[n] = 3x[n] + 1.
12. What is the difference between an FIR filter and an IIR filter? Draw one of each.

14. For the input signal x[n] = [6; 1; 3; 5; 1; 4], and a filter h[k] = [-1; 3; 1], find y[n], where y[n] = h[k] * x[n]. Use the algorithm given in section 3.2.
Chapter 4
Sinusoids
Most analog signals are either sinusoids, or a combination of sinusoids (or can be
approximated as a combination of sinusoids). This makes combinations of sinusoids
especially interesting. It is easy to add sinusoids together; pressing keys on a piano
or strumming a guitar adds several sinusoids together (though they do decay, unlike
the sinusoids we usually study). In this chapter, we will investigate sinusoids and see
how they can be added together to model signals. The goal is to gain a better
understanding of sinusoids and get ready for the Fourier transform.
1. According to [16], the Babylonians knew this, but the Greeks were first to prove it.