Complex exponential signals play an important and unique role in the analysis of LTI systems, both in continuous and discrete time.
Complex exponential signals are the eigenfunctions of LTI systems.
The eigenvalue corresponding to the complex exponential signal with frequency \omega_0 is H(\omega_0),
where H(\omega) is the Fourier transform of the impulse response h(\cdot).
This statement is true in both CT and DT and in both 1D and 2D (and higher).
The only difference is the notation for frequency and the definition of complex exponential signal and Fourier transform.
Continuous-Time

x(t) = e^{j 2\pi F_0 t} \rightarrow [LTI, h(t)] \rightarrow y(t) = h(t) * e^{j 2\pi F_0 t} = H_a(F_0) e^{j 2\pi F_0 t}.

Proof: (skip)

y(t) = h(t) * x(t) = \int h(\tau) e^{j 2\pi F_0 (t-\tau)} d\tau = e^{j 2\pi F_0 t} \int h(\tau) e^{-j 2\pi F_0 \tau} d\tau = H_a(F_0) e^{j 2\pi F_0 t},

where

H_a(F) = \int h(t) e^{-j 2\pi F t} dt.
Discrete-Time

x[n] = e^{j \omega_0 n} \rightarrow [LTI, h[n]] \rightarrow y[n] = h[n] * e^{j \omega_0 n} = H(\omega_0) e^{j \omega_0 n} = |H(\omega_0)| e^{j(\omega_0 n + \angle H(\omega_0))}

Could you show this using the z-transform? No, because the z-transform of e^{j \omega_0 n} does not exist!

Proof of eigenfunction property:

y[n] = h[n] * x[n] = \sum_{k=-\infty}^{\infty} h[k] e^{j \omega_0 (n-k)} = \left[ \sum_{k=-\infty}^{\infty} h[k] e^{-j \omega_0 k} \right] e^{j \omega_0 n} = H(\omega_0) e^{j \omega_0 n},

where

H(\omega) = \sum_{n=-\infty}^{\infty} h[n] e^{-j \omega n}.
In the context of LTI systems, H(\omega) is called the frequency response of the system, since it describes how much the system responds to an input with frequency \omega.
This property alone suggests the quantities H_a(F) (CT) and H(\omega) (DT) are worth studying.
Similar in 2D!
Most properties of the CTFT and DTFT are the same. One huge difference:
The DTFT H(\omega) is always periodic: H(\omega + 2\pi) = H(\omega), \forall \omega,
whereas the CTFT is periodic only if x(t) is a train of uniformly spaced Dirac delta functions.
Why periodic? (Preview.) DT frequencies are unique only on [-\pi, \pi). Also H(\omega) = H(z)|_{z=e^{j\omega}}, so the frequency response for any \omega is one of the values of the z-transform along the unit circle, hence 2\pi periodic. (Only useful if the ROC of H(z) includes the unit circle.)
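The eigenfunction property is easy to verify numerically. Below is a minimal NumPy sketch (the notes use MATLAB; the 5-tap filter h and frequency \omega_0 here are arbitrary illustrative choices): after the filter's transient, the output for input e^{j\omega_0 n} is exactly H(\omega_0) e^{j\omega_0 n}.

```python
import numpy as np

# Hypothetical 5-tap FIR impulse response (any h[n] works).
h = np.array([0.2, 0.5, 1.0, 0.5, 0.2])
w0 = 2 * np.pi / 7                       # arbitrary test frequency
n = np.arange(200)
x = np.exp(1j * w0 * n)                  # complex exponential input

# Linear convolution; after the transient (len(h)-1 samples) the
# output equals H(w0) * x[n].
y = np.convolve(h, x)[: len(n)]
H_w0 = np.sum(h * np.exp(-1j * w0 * np.arange(len(h))))  # DTFT of h at w0

assert np.allclose(y[len(h):], H_w0 * x[len(h):])
```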
The following table classifies Fourier representations by signal type.

                     Aperiodic signal              Periodic signal
                     (continuous frequency)        (discrete frequency)
Continuous time      Fourier transform (306)       Fourier series (306)
                                                   (periodic in time)
Discrete time        DTFT (Ch. 4)                  DTFS (Ch. 4) or DFT (Ch. 5)
                     (periodic in frequency)       (periodic in time and frequency);
                                                   FFT (Ch. 6)
Overview
The DTFS is the discrete-time analog of the continuous-time Fourier series: a simple decomposition of periodic DT signals.
The DTFT is the discrete-time analog of the continuous-time FT studied in 316.
Ch. 5, the DFT adds sampling in Fourier domain as well as sampling in time domain.
Ch. 6, the FFT, is just a fast way to implement the DFT on a computer.
Ch. 7 is filter design.
Familiarity with the properties is not just theoretical, but practical too; e.g., after designing a notch filter for 60 Hz, one can use the scaling property to apply the design in another country with a different AC frequency.
Skim / Review
If a continuous-time signal x_a(t) is periodic with fundamental period T_0, then it has fundamental frequency F_0 = 1/T_0.
Assuming the Dirichlet conditions hold (see text), we can represent x_a(t) using a sum of harmonically related complex exponential signals e^{j 2\pi k F_0 t}. The component frequencies (k F_0) are integer multiples of the fundamental frequency. In general one needs an infinite series of such components. This representation is called the Fourier series synthesis equation:

x_a(t) = \sum_{k=-\infty}^{\infty} c_k e^{j 2\pi k F_0 t},

where the Fourier series coefficients are given by the following analysis equation:

c_k = \frac{1}{T_0} \int_{<T_0>} x_a(t) e^{-j 2\pi k F_0 t} dt, \quad k \in \mathbb{Z}.

If x_a(t) is a real signal, then the coefficients are Hermitian symmetric: c_{-k} = c_k^*.
4.1.2 Power density spectrum of periodic signals
The Fourier series representation illuminates how much power there is in each frequency component, due to Parseval's theorem:

Power = \frac{1}{T_0} \int_{<T_0>} |x_a(t)|^2 dt = \sum_{k=-\infty}^{\infty} |c_k|^2.
Example. Consider the periodic signal (t in ms, so T_0 = 4 ms)

x_a(t) = \sum_{k=-\infty}^{\infty} \tilde{x}_a(t - k T_0), where \tilde{x}_a(t) = \begin{cases} t/2 + 1, & |t| < 2 \\ 0, & \text{otherwise.} \end{cases}

The Fourier series coefficients are

c_k = \frac{1}{4} \int_{-2}^{2} (t/2 + 1) e^{-j 2\pi (k/4) t} dt
= \begin{cases} 1, & k = 0 \\ e^{j\pi/2} \frac{(-1)^k}{\pi k}, & k \neq 0 \end{cases}
= \begin{cases} 1, & k = 0 \\ e^{j\pi/2} \frac{1}{\pi k}, & k \neq 0, \text{ even} \\ e^{-j\pi/2} \frac{1}{\pi k}, & k \neq 0, \text{ odd,} \end{cases}

which can be found using the following line of MATLAB and simplifying:

syms t k, pretty(int(1/4 * (1+t/2) * exp(-i*2*pi*k/4*t), -2, 2))
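As an alternative to the symbolic MATLAB computation, the coefficients can be checked by numerical integration. This NumPy sketch approximates the c_k integral with the trapezoid rule and compares against the closed form c_k = e^{j\pi/2} (-1)^k / (\pi k):

```python
import numpy as np

T0 = 4.0
t = np.linspace(-2, 2, 200001)     # one period of the pulse, fine grid
xt = t / 2 + 1                     # x(t) = t/2 + 1 on |t| < 2

def ck(k):
    # c_k = (1/T0) * integral of x(t) e^{-j 2 pi (k/T0) t} over one period,
    # approximated with the trapezoid rule.
    f = xt * np.exp(-2j * np.pi * k / T0 * t)
    return np.sum((f[1:] + f[:-1]) / 2) * (t[1] - t[0]) / T0

assert abs(ck(0) - 1) < 1e-6
for k in (1, 2, 3):
    assert abs(ck(k) - 1j * (-1) ** k / (np.pi * k)) < 1e-6
```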
The synthesis equation then gives

x_a(t) = \sum_{k=-\infty}^{\infty} c_k e^{j 2\pi (k/4) t}
= 1 + \sum_{k \neq 0} e^{j\pi/2} \frac{(-1)^k}{\pi k} e^{j 2\pi (k/4) t}
= 1 + \sum_{k=1, \text{ odd}}^{\infty} \frac{2}{\pi k} \cos\left( 2\pi \frac{k}{4} t - \frac{\pi}{2} \right) + \sum_{k=2, \text{ even}}^{\infty} \frac{2}{\pi k} \cos\left( 2\pi \frac{k}{4} t + \frac{\pi}{2} \right).

[Figure: power density spectrum |c_k|^2 versus F [kHz], with lines at multiples of F_0 = 0.25 kHz; e.g., 1/(4\pi^2) at 0.50 kHz and 1/(9\pi^2) at 0.75 kHz.]
Note that an infinite number of complex exponential components are required to represent this periodic signal.
For practical applications, a truncated series expansion consisting of a finite number of sinusoidal terms is often sufficient. For example, for additive music synthesis (based on adding sine-wave generators), we only need to include the terms with frequencies within the audible range of human hearing.
4.1.3 The Fourier transform for continuous-time aperiodic signals

Analysis equation: X_a(F) = \int_{-\infty}^{\infty} x_a(t) e^{-j 2\pi F t} dt
Synthesis equation: x_a(t) = \int_{-\infty}^{\infty} X_a(F) e^{j 2\pi F t} dF
Example. The unit rectangular pulse has CTFT

X_a(F) = \text{sinc}(F) = \begin{cases} 1, & F = 0 \\ \frac{\sin(\pi F)}{\pi F}, & F \neq 0. \end{cases}

Caution: this definition of sinc is consistent with MATLAB and most DSP books. However, a different definition (without the \pi terms) is used in some signals and systems books.
4.1.4 Energy density spectrum of aperiodic signals
Parseval's relation for the energy of a signal:

Energy = \int_{-\infty}^{\infty} |x_a(t)|^2 dt = \int_{-\infty}^{\infty} |X_a(F)|^2 dF.
Properties of the CTFT:

Linearity:             \alpha x_1(t) + \beta x_2(t)   <->  \alpha X_1(F) + \beta X_2(F)
Time shift:            x_a(t - \tau)                  <->  X_a(F) e^{-j 2\pi F \tau}
Time reversal:         x_a(-t)                        <->  X_a(-F)
Convolution:           x_1(t) * x_2(t)                <->  X_1(F) X_2(F)
Cross-correlation:     x_1(t) \star x_2(t)            <->  X_1(F) X_2^*(F)
Frequency shift:       x_a(t) e^{j 2\pi F_0 t}        <->  X_a(F - F_0)
Modulation (cosine):   x_a(t) \cos(2\pi F_0 t)        <->  \frac{X_a(F - F_0) + X_a(F + F_0)}{2}
Multiplication:        x_1(t) x_2(t)                  <->  X_1(F) * X_2(F)
Freq. differentiation: t x_a(t)                       <->  \frac{j}{2\pi} \frac{d}{dF} X_a(F)
Time differentiation:  \frac{d}{dt} x_a(t)            <->  j 2\pi F X_a(F)
Conjugation:           x_a^*(t)                       <->  X_a^*(-F)
Scaling:               x_a(a t)                       <->  \frac{1}{|a|} X_a\left( \frac{F}{a} \right)
Real signal:           x_a(t) real                    <->  X_a(F) = X_a^*(-F); if also x_a(t) = x_a(-t), then X_a(F) is real
Duality:               X_a(t)                         <->  x_a(-F)
Relation to Laplace:   X_a(F) = X(s)|_{s = j 2\pi F}
Parseval's theorem:    \int x_1(t) x_2^*(t) dt = \int X_1(F) X_2^*(F) dF
Rayleigh's theorem:    \int |x_a(t)|^2 dt = \int |X_a(F)|^2 dF
DC value:              \int x_a(t) dt = X_a(0)
skim
Do these conditions guarantee that taking the FT of a function x_a(t) and then taking the inverse FT will give you back exactly the same function x_a(t)? No! Consider the function

x_a(t) = \begin{cases} 1, & t = 0 \\ 0, & \text{otherwise.} \end{cases}

Since this function (called a null function) has no area, X_a(F) = 0, so the inverse FT \hat{x}_a = F^{-1}[X_a] is simply \hat{x}_a(t) = 0, which does not equal x_a(t) exactly! However, x_a and \hat{x}_a are equivalent in the L^2(\mathbb{R}) sense that

\|x_a - \hat{x}_a\|_2^2 = \int |x_a(t) - \hat{x}_a(t)|^2 dt = 0,

which is more than adequate for any practical problem.
If we restrict attention to continuous functions x_a(t), then it will be true that x = F^{-1}[F[x]].
Most physical functions are continuous, or at least do not have the type of isolated points that the function x_a(t) above has, so the above mathematical subtleties need not deter our use of transform methods.
We can safely restrict attention to functions x_a(t) for which x = F^{-1}[F[x]] in this course.
Lerch's theorem: if f and g have the same Fourier transform, then f - g is a null function, i.e., \int |f(t) - g(t)| dt = 0.
4.2 Frequency Analysis of Discrete-Time Signals
In Ch. 2, we analyzed LTI systems using superposition. We decomposed the input signal x[n] into a train of shifted delta functions:

x[n] = \sum_{k=-\infty}^{\infty} c_k x_k[n] = \sum_{k=-\infty}^{\infty} x[k] \delta[n-k].

The shifted delta decomposition is not the only useful choice for the elementary functions x_k[n]. Another useful choice is the collection of complex-exponential signals \{e^{j\omega n}\} for various \omega.
We have already seen that the response of an LTI system to the input e^{j\omega n} is H(\omega) e^{j\omega n}, where H(\omega) is the frequency response of the system.

Ch. 2: x_k[n] = \delta[n-k] \rightarrow y_k[n] = h[n-k]
Ch. 4: x_k[n] = e^{j\omega_k n} \rightarrow y_k[n] = H(\omega_k) e^{j\omega_k n}

So now we just need to figure out how to decompose x[n] into a weighted combination of complex-exponential signals.
4.2.1 The Fourier series for discrete-time periodic signals

For a DT periodic signal, should an infinite set of frequency components be required? No, because DT frequencies alias to the interval [-\pi, \pi].
Fact: if x[n] is periodic with period N, then x[n] has the following series representation (synthesis, inverse DTFS):

x[n] = \sum_{k=0}^{N-1} c_k e^{j \omega_k n}, where \omega_k = \frac{2\pi}{N} k.    (4-1)

[Picture of the \omega_k's on [0, 2\pi).]

Intuition: if x[n] has period N, then x[n] has N degrees of freedom. The N c_k's in the above decomposition express those degrees of freedom in another coordinate system.
In matrix form, (4-1) says x = W c, where W is the N \times N matrix with entries [W]_{n,k} = e^{j \frac{2\pi}{N} k n}.
To show that the columns are orthogonal, we first need the following simple property:

\frac{1}{N} \sum_{n=0}^{N-1} e^{j \frac{2\pi}{N} n m} = \begin{cases} 1, & m = 0, \pm N, \pm 2N, \ldots \\ 0, & \text{otherwise} \end{cases} = \sum_{k=-\infty}^{\infty} \delta[m - kN].

The case where m is a multiple of N is trivial, since clearly e^{j 2\pi n (kN)/N} = e^{j 2\pi n k} = 1. When m is not a multiple of N, we can apply the finite geometric series formula:

\frac{1}{N} \sum_{n=0}^{N-1} e^{j \frac{2\pi}{N} n m} = \frac{1}{N} \sum_{n=0}^{N-1} \left( e^{j \frac{2\pi}{N} m} \right)^n = \frac{1}{N} \frac{1 - e^{j 2\pi m}}{1 - e^{j \frac{2\pi}{N} m}} = \frac{1}{N} \frac{1 - 1}{1 - e^{j \frac{2\pi}{N} m}} = 0.
Now let w_k and w_l be two columns of the matrix W, for k, l = 0, \ldots, N-1. Then the inner product of these two columns is

\langle w_k, w_l \rangle = \sum_{n=0}^{N-1} w_{nk} (w_{nl})^* = \sum_{n=0}^{N-1} e^{j \frac{2\pi}{N} k n} e^{-j \frac{2\pi}{N} l n} = \sum_{n=0}^{N-1} e^{j \frac{2\pi}{N} (k-l) n} = N \delta[k-l],

proving that the columns of W are orthogonal. This also proves that W_0 = \frac{1}{\sqrt{N}} W is unitary, i.e., its inverse is its Hermitian transpose:

W_0^{-1} = W_0^H, so W^{-1} = (\sqrt{N} W_0)^{-1} = \frac{1}{\sqrt{N}} W_0^{-1} = \frac{1}{\sqrt{N}} W_0^H = \frac{1}{N} W^H.

Thus we can rewrite (4-1) as

[x[0], x[1], \ldots, x[N-1]]^T = W \, [c_0, c_1, \ldots, c_{N-1}]^T,
so
[c_0, c_1, \ldots, c_{N-1}]^T = \frac{1}{N} W^H \, [x[0], x[1], \ldots, x[N-1]]^T.
Alternatively, computing directly:

\frac{1}{N} \sum_{n=0}^{N-1} e^{-j \frac{2\pi}{N} l n} x[n]
= \frac{1}{N} \sum_{n=0}^{N-1} e^{-j \frac{2\pi}{N} l n} \sum_{k=0}^{N-1} c_k e^{j \frac{2\pi}{N} k n}
= \sum_{k=0}^{N-1} c_k \left[ \frac{1}{N} \sum_{n=0}^{N-1} e^{j \frac{2\pi}{N} (k-l) n} \right]
= \sum_{k=0}^{N-1} c_k \delta[k-l] = c_l.

Therefore, replacing l with k, the DTFS coefficients are given by the following analysis equation:

c_k = \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} k n}.    (4-2)
The above expression is defined for all k, but we really only need to evaluate it for k = 0, \ldots, N-1, because the c_k's are periodic with period N.
Proof (uses a simplification method we'll see repeatedly):

c_{k+N} = \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} (k+N) n} = \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} k n} e^{-j 2\pi n} = \frac{1}{N} \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi}{N} k n} = c_k.
Equations (4-1) and (4-2) are sometimes called the discrete-time Fourier series or DTFS.
In Ch. 5 we will discuss the discrete Fourier transform (DFT), which is similar except for scale factors.
Skill: Compute DTFS representation of DT periodic signals.
Example. Consider the periodic signal x[n] = \{4, 4, 0, 0\} with period N = 4, a DT square wave. Then

c_k = \frac{1}{4} \sum_{n=0}^{3} x[n] e^{-j \frac{2\pi}{4} k n} = \frac{1}{4} \left[ 4 + 4 e^{-j \frac{2\pi}{4} k} \right] = 1 + e^{-j \pi k / 2},

so (c_0, c_1, c_2, c_3) = (2, 1-j, 0, 1+j), and the synthesis equation gives

x[n] = 2 + (1-j) e^{j \frac{2\pi}{4} n} + (1+j) e^{j \frac{2\pi}{4} 3n} = 2 + (1-j) e^{j \frac{\pi}{2} n} + (1+j) e^{-j \frac{\pi}{2} n} = 2 + 2\sqrt{2} \cos\left( \frac{\pi}{2} n - \frac{\pi}{4} \right).

This square wave is represented by a DC term plus a single sinusoid with frequency \omega_1 = \pi/2.
From Fourier series analysis, we know that a CT square wave has an infinite number of frequency components. The above signal x[n] could have arisen from sampling a particular CT square wave x_a(t) [picture]. But our DT square wave has only two frequency components: DC and \omega_1 = \pi/2. Where did the extra frequency components go? They aliased to DC, \pm\pi/2, and \pi.
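This example is easy to verify numerically, since the DTFS analysis equation (4-2) is 1/N times the DFT of one period. A NumPy sketch (the notes use MATLAB; NumPy is used here for a self-contained check):

```python
import numpy as np

x = np.array([4.0, 4.0, 0.0, 0.0])   # one period, N = 4
N = len(x)
c = np.fft.fft(x) / N                # DTFS analysis: c_k = (1/N) sum x[n] e^{-j 2 pi k n / N}

assert np.allclose(c, [2, 1 - 1j, 0, 1 + 1j])
# Hermitian symmetry for real x: c_{N-k} = c_k^*
assert np.allclose(c[3], np.conj(c[1]))
# Parseval: average power equals sum of |c_k|^2  (here both are 8)
assert np.isclose(np.mean(np.abs(x) ** 2), np.sum(np.abs(c) ** 2))
```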
Hermitian symmetry
If x[n] is real, then the DTFS coefficients are Hermitian symmetric, i.e., c_{-k} = c_k^*.
But we usually only evaluate c_k for k = 0, \ldots, N-1, due to periodicity.
So a more useful expression is: c_0 is real, and c_{N-k} = c_k^* for k = 1, \ldots, N-1. The preceding example illustrates this.
What is the power in the periodic signal x_k[n] = c_k e^{j \omega_k n}? Recall that power for a periodic signal is

P_k = \frac{1}{N} \sum_{n=0}^{N-1} |x_k[n]|^2 = |c_k|^2.

Parseval's relation expresses the average signal power in terms of the sum of the power of each spectral component:

Power = \frac{1}{N} \sum_{n=0}^{N-1} |x[n]|^2 = \sum_{k=0}^{N-1} |c_k|^2.

Read. Proof via the matrix approach: since x = W c,

\|x\|^2 = \|W c\|^2 = \|\sqrt{N} W_0 c\|^2 = N \|W_0 c\|^2 = N \|c\|^2.
Even though we cannot take the z-transform of a periodic signal x[n], there is still a (somewhat) useful relationship.
Let \tilde{x}[n] denote one period of x[n], i.e., \tilde{x}[n] = x[n] (u[n] - u[n-N]). Let \tilde{X}(z) denote the z-transform of \tilde{x}[n]. Then

c_k = \frac{1}{N} \tilde{X}(z) \Big|_{z = e^{j\omega_k} = e^{j \frac{2\pi}{N} k}}.

Clearly this is only valid if the ROC of the z-transform includes the unit circle. Is this a problem? No, because \tilde{x}[n] is a finite-duration signal, so the ROC of \tilde{X}(z) is all of \mathbb{C} (except 0), which includes the unit circle.
For the preceding square-wave example, \frac{1}{4} \tilde{X}(z) = 1 + z^{-1}, so

c_k = 1 + z^{-1} \Big|_{z = e^{j 2\pi k/4}} = 1 + e^{-j\pi k/2},

matching the earlier calculation.
Now we have seen how to decompose a DT periodic signal into a sum of complex exponential signals.
This can help us understand the signal better (signal analysis).
Example: looking for interference/harmonics on a 60 Hz AC power line.
It is also very useful for understanding the effect of LTI systems on such signals (soon).
We already showed that the response of the system to the input x_k[n] = e^{j \frac{2\pi}{N} k n} is just y_k[n] = H(\omega_k) e^{j \frac{2\pi}{N} k n}, where H(\omega) is the frequency response corresponding to h[n].
Therefore, by superposition, the overall output signal is

y[n] = \sum_{k=0}^{N-1} c_k y_k[n] = \sum_{k=0}^{N-1} \underbrace{c_k H(\omega_k)}_{\text{multiply}} e^{j \frac{2\pi}{N} k n}.

For a periodic input signal, the response of an LTI system is periodic (with the same frequency components).
The DTFS coefficients of the output signal are the product of the DTFS coefficients of the input signal with the corresponding samples of the frequency response H(\omega_k) of the system.
This is the convolution property for the DTFS, since the time-domain convolution y[n] = h[n] * x[n] becomes simply multiplication of DTFS coefficients.
The results are very useful because prior to this analysis we would have had to do this by time-domain convolution.
Could we have used the z-transform to avoid convolution here? No, since X(z) does not exist for eternal periodic signals!
Example. Making violin sound like flute.
picture of x[n], y[n], H(), power density spectrum before and after filtering
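The convolution property above can be sketched numerically: the steady-state periodic output is the circular convolution of one input period with h, and its DTFS coefficients equal c_k H(\omega_k). The input and the 2-tap filter below are arbitrary illustrative choices, not from the notes:

```python
import numpy as np

N = 4
x = np.array([4.0, 4.0, 0.0, 0.0])   # one period of the periodic input
h = np.array([1.0, 0.5])             # hypothetical FIR filter, shorter than N

# Steady-state periodic output: circular convolution of one period with h,
# computed via DFTs.
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)))

# DTFS coefficients multiply: d_k = c_k H(w_k)
c = np.fft.fft(x) / N
wk = 2 * np.pi * np.arange(N) / N
Hk = np.array([np.sum(h * np.exp(-1j * w * np.arange(len(h)))) for w in wk])

assert np.allclose(np.fft.fft(y) / N, c * Hk)
```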
The DTFS allows us to analyze DT periodic signals by decomposing them into a sum of N complex exponentials, where N is the period.
What about aperiodic signals? (Like speech or music.)
Note that in the DTFS, \omega_k = 2\pi k / N. Let N \rightarrow \infty; then the \omega_k's become closer and closer together and approach a continuum.

4.2.3 The Fourier transform of discrete-time aperiodic signals

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}    (4-3)

This is called the discrete-time Fourier transform or DTFT of x[n]. (The book just calls it the Fourier transform.)
We have already seen that this definition is motivated by the analysis of LTI systems.
The DTFT is also useful for analyzing the spectral content of sampled continuous-time signals, which we will discuss later.
Periodicity
The DTFT X(\omega) is defined for all \omega. However, the DTFT is periodic with period 2\pi, so we often just mention \omega \in [-\pi, \pi).
Proof:

X(\omega + 2\pi) = \sum_{n=-\infty}^{\infty} x[n] e^{-j(\omega + 2\pi)n} = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} e^{-j 2\pi n} = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} = X(\omega).

In fact, when you see a DTFT picture or expression like the following

X(\omega) = \begin{cases} 1, & |\omega| \leq \omega_c \\ 0, & \omega_c < |\omega| \leq \pi, \end{cases}

we really are just specifying one period of X(\omega), and the remainder is implicitly defined by the periodic extension.
Some books write X(e^{j\omega}) to remind us that the DTFT is periodic and to solidify the connection with the z-transform.
The Inverse DTFT (synthesis)
If the DTFT is to be very useful, we must be able to recover x[n] from X(\omega). First note that

\frac{1}{2\pi} \int_{-\pi}^{\pi} e^{j\omega m} d\omega = \delta[m] = \begin{cases} 1, & m = 0 \\ 0, & m \neq 0, \end{cases} \quad \text{for } m \in \mathbb{Z}.

Thus

\frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n} d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ \sum_{k=-\infty}^{\infty} x[k] e^{-j\omega k} \right] e^{j\omega n} d\omega = \sum_{k=-\infty}^{\infty} x[k] \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{j\omega(n-k)} d\omega = \sum_{k=-\infty}^{\infty} x[k] \delta[n-k] = x[n].

So the inverse DTFT is

x[n] = \frac{1}{2\pi} \int_{2\pi} X(\omega) e^{j\omega n} d\omega.    (4-4)

Because both e^{j\omega n} and X(\omega) are periodic in \omega with period 2\pi, any 2\pi interval will suffice for the integral.
Lots of DTFT properties, mostly paralleling those of the CTFT. More later.
4.2.6 Relationship of the Fourier transform to the z-transform

X(\omega) = X(z) \big|_{z = e^{j\omega}},

if the ROC of the z-transform includes the unit circle. (Follows directly from the expressions.) Hence some books use the notation X(e^{j\omega}).

Example: x[n] = (1/2)^n u[n] has X(z) = \frac{1}{1 - \frac{1}{2} z^{-1}} with ROC |z| > 1/2, which includes the unit circle, so X(\omega) = \frac{1}{1 - \frac{1}{2} e^{-j\omega}}.
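For this example, sampling X(\omega) densely and applying an inverse FFT recovers (1/2)^n up to negligible time-domain aliasing. A NumPy sketch (the grid size M = 64 is an arbitrary choice):

```python
import numpy as np

# Sample X(w) = 1 / (1 - 0.5 e^{-jw}) on M uniform frequencies and invert.
M = 64
w = 2 * np.pi * np.arange(M) / M
X = 1 / (1 - 0.5 * np.exp(-1j * w))

# ifft of M DTFT samples gives the time-aliased signal sum_r x[n + rM];
# since 0.5^n decays fast, the aliasing (~0.5^64) is negligible here.
x = np.real(np.fft.ifft(X))
n = np.arange(M)
assert np.allclose(x, 0.5 ** n, atol=1e-12)
```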
If the ROC does not include the unit circle, then strictly speaking the FT does not exist!
An exception is the DTFT of periodic signals, which we will define by using Dirac impulses (later).
A sufficient condition for the DTFT to be well behaved is absolute summability:

\sum_{n=-\infty}^{\infty} |x[n]| < \infty.

Any signal that is absolutely summable will have an ROC of X(z) that includes the unit circle, and hence will have a well-defined DTFT for all \omega, and the inverse DTFT will give back exactly the same signal that you started with.
In particular, under the above condition, one can show that the finite sum (which always exists)

X_N(\omega) = \sum_{n=-N}^{N} x[n] e^{-j\omega n}

converges pointwise to X(\omega) for all \omega: X_N(\omega) \rightarrow X(\omega) as N \rightarrow \infty.
Unfortunately the absolute summability condition precludes periodic signals, which were part of our motivation!
To handle periodic signals with the DTFT, one must allow Dirac delta functions in X(\omega). The book avoids this, so I will too for now. For DT periodic signals, we can use the DTFS instead.
However, there are some signals that are not absolutely summable but nevertheless satisfy the weaker finite-energy condition

E_x = \sum_{n=-\infty}^{\infty} |x[n]|^2 < \infty.

For such signals, X_N(\omega) converges to X(\omega) in the mean-square sense:

\int_{-\pi}^{\pi} |X(\omega) - X_N(\omega)|^2 d\omega \rightarrow 0 \text{ as } N \rightarrow \infty.

This is theoretically weaker than pointwise convergence, but it means that the error energy diminishes with increasing N, so practically the two functions become physically indistinguishable.
Fact. Any absolutely summable signal has finite energy:

\sum_n |x[n]| < \infty \implies \left( \sum_n |x[n]| \right) \left( \sum_m |x[m]| \right) < \infty \implies \sum_n |x[n]|^2 + \sum_n \sum_{m \neq n} |x[n]| \, |x[m]| < \infty \implies \sum_n |x[n]|^2 < \infty.
Example. Consider the ideal lowpass filter with cutoff \omega_c = \pi/2:

H(\omega) = \begin{cases} 1, & |\omega| \leq \omega_c \\ 0, & \omega_c < |\omega| \leq \pi. \end{cases}

The inverse DTFT gives the impulse response

h[n] = \frac{1}{2\pi} \int_{-\omega_c}^{\omega_c} e^{j\omega n} d\omega = \frac{\sin(\omega_c n)}{\pi n}, \quad h[0] = \frac{\omega_c}{\pi}.

Is h[n] an FIR or IIR filter? This is IIR. (Nevertheless it could still be practical if it had a rational system function.)

[Figure: H(\omega) (ideal lowpass, cutoff \pi/2) and the sinc-shaped impulse response h(n) for -20 \leq n \leq 20.]

Is this filter BIBO stable? For \omega_c = \pi/2, h[n] = 0 for even n \neq 0, so

\sum_{n=-\infty}^{\infty} |h[n]| = \frac{1}{2} + \frac{2}{\pi} \sum_{k=0}^{\infty} \frac{1}{2k+1} = \infty, so it is unstable.
Can we implement h[n] with a finite number of adds, multiplies, delays? Only if H(z) is in rational form, which it is not! It does not have an exact (finite) recursive implementation.
How to make a practical implementation? A simple, but suboptimal, approach: just truncate the impulse response, since h[n] \rightarrow 0 for large n. But how close will the frequency response of this FIR approximation be to the ideal? Let

h_N[n] = \begin{cases} h[n], & |n| \leq N \\ 0, & \text{otherwise,} \end{cases}

and let H_N(\omega) be the corresponding frequency response:

H_N(\omega) = \sum_{n=-N}^{N} h_N[n] e^{-j\omega n} = \sum_{n=-N}^{N} \frac{\sin(\omega_c n)}{\pi n} e^{-j\omega n}.
[Figure: h_N[n] and H_N(\omega) for N = 10, 20, 80, along with the error |H_N(\omega) - H(\omega)|. The ripples near the cutoff do not shrink as N grows.]

As N increases, h_N[n] \rightarrow h[n]. But H_N(\omega) does not converge to H(\omega) for every \omega. However, the energy difference of the two spectra decreases to 0 with increasing N. Practically, this is good enough.
This effect, where truncating a sum in one domain leads to ringing in the other domain, is called the Gibbs phenomenon.
We have just done our first filter design. It is a poor design though: lots of taps, no recursive form, and a suboptimal approximation to the ideal lowpass for any given number of multiplies.
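The Gibbs behavior can be sketched numerically: the peak error of H_N(\omega) near the cutoff does not shrink with N, while the error energy does. (The frequency grid and the N values below are illustrative choices.)

```python
import numpy as np

wc = np.pi / 2
w = np.linspace(-np.pi, np.pi, 4001)
H_ideal = (np.abs(w) <= wc).astype(float)

def HN(N):
    # frequency response of the truncated ideal-lowpass impulse response
    n = np.arange(-N, N + 1)
    h = np.where(n == 0, wc / np.pi, np.sin(wc * n) / (np.pi * n + (n == 0)))
    return np.sum(h[:, None] * np.exp(-1j * np.outer(n, w)), axis=0).real

err = {N: HN(N) - H_ideal for N in (10, 20, 80)}
peak = {N: np.max(np.abs(e)) for N, e in err.items()}
energy = {N: np.mean(np.abs(e) ** 2) for N, e in err.items()}

assert energy[80] < energy[20] < energy[10]   # error energy shrinks
assert all(p > 0.05 for p in peak.values())   # peak error near cutoff persists
```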
4.2.5 Energy density spectrum of aperiodic signals, or Parseval's relation

E = \sum_{n=-\infty}^{\infty} |x[n]|^2 = \sum_{n=-\infty}^{\infty} x[n] x^*[n]
= \sum_{n=-\infty}^{\infty} x[n] \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n} d\omega \right]^*
= \frac{1}{2\pi} \int_{-\pi}^{\pi} X^*(\omega) \left[ \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} \right] d\omega
= \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) X^*(\omega) d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(\omega)|^2 d\omega.

So

\sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(\omega)|^2 d\omega.
When x[n] is aperiodic, the energy in any particular frequency is zero. But

\frac{1}{2\pi} \int_{\omega_1}^{\omega_2} |X(\omega)|^2 d\omega

quantifies how much energy the signal has in the frequency band [\omega_1, \omega_2], for -\pi \leq \omega_1 < \omega_2 \leq \pi. The quantity S_{xx}(\omega) = |X(\omega)|^2 is called the energy density spectrum.

Example. For x[n] = a^n u[n] with |a| < 1, X(z) = \frac{1}{1 - a z^{-1}}, so X(\omega) = \frac{1}{1 - a e^{-j\omega}}, and

S_{xx}(\omega) = |X(\omega)|^2 = \left| \frac{1}{1 - a e^{-j\omega}} \right|^2 = \frac{1}{1 - 2a \cos\omega + a^2}.
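As a check, Parseval's relation applied to this example gives \sum_n |a^n|^2 = 1/(1-a^2), which should equal \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{xx}(\omega) d\omega. A NumPy sketch with the arbitrary choice a = 0.5:

```python
import numpy as np

a = 0.5
w = np.linspace(-np.pi, np.pi, 100001)
Sxx = 1 / (1 - 2 * a * np.cos(w) + a * a)   # energy density spectrum

# trapezoid rule for (1/2pi) * integral of Sxx over [-pi, pi]
energy_freq = np.sum((Sxx[1:] + Sxx[:-1]) / 2) * (w[1] - w[0]) / (2 * np.pi)
energy_time = 1 / (1 - a ** 2)              # sum of a^{2n}, n >= 0

assert np.isclose(energy_freq, energy_time)
```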
[Figure: S_{xx}(\omega) for a = -0.5, -0.2, 0, 0.2, 0.5, and the signals x(n) = a^n u(n) for a = 0.5 and a = -0.5.]

Why is there greater energy density at high frequencies when a < 0? Recall a^n = (-|a|)^n = (-1)^n |a|^n, which alternates.
What is wrong with the above picture and the formula in quotes? It is not explicitly periodic.
Consider (using Dirac deltas now):

X(\omega) = 2\pi \tilde{\delta}(\omega - \omega_0) = 2\pi \sum_{k=-\infty}^{\infty} \delta(\omega - \omega_0 - 2\pi k), \quad \text{assuming } \omega_0 \in [-\pi, \pi].

Find the inverse DTFT:

x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n} d\omega = \int_{-\pi}^{\pi} \sum_{k=-\infty}^{\infty} \delta(\omega - \omega_0 - 2\pi k) e^{j\omega n} d\omega = e^{j\omega_0 n},

since the integral over [-\pi, \pi] hits exactly one of the Dirac delta functions.
Thus

e^{j\omega_0 n} \leftrightarrow 2\pi \sum_{k=-\infty}^{\infty} \delta(\omega - \omega_0 - 2\pi k) = 2\pi \tilde{\delta}(\omega - \omega_0).
The step function has a pole on the unit circle, yet some textbooks also state:

u[n] \leftrightarrow U(\omega) = \frac{1}{1 - e^{-j\omega}} + \sum_{k=-\infty}^{\infty} \pi \delta(\omega - 2\pi k).

The step function is not square summable, so this transform pair must be used with care. Rarely is it needed.
4.2.9 Sampling
Will be done later!
4.2.10 Frequency-domain classification of signals: the concept of bandwidth
Read
4.2.11 The frequency ranges of some natural sounds
Skim
4.2.12 Physical and mathematical dualities
Read
discrete time \leftrightarrow periodic spectrum (DTFT, DTFS)
periodic in time \leftrightarrow discrete spectrum (CTFS, DTFS)
4.3 Properties of the DTFT

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}

Most properties are analogous to those of the CTFT. Most can be derived directly from corresponding z-transform properties, using the fact that X(\omega) = X(z)|_{z=e^{j\omega}}. Caution: reuse of the X(\cdot) notation.
Periodicity: X(\omega) = X(\omega + 2\pi)
Time reversal
Recall x[-n] \leftrightarrow^{Z} X(z^{-1}). Thus x[-n] \leftrightarrow X(z^{-1})|_{z=e^{j\omega}} = X(e^{-j\omega}) = X(z)|_{z=e^{-j\omega}} = X(-\omega).
So x[-n] \leftrightarrow X(-\omega).

Conjugation
Recall that x^*[n] \leftrightarrow^{Z} X^*(z^*). Thus x^*[n] \leftrightarrow X^*(z^*)|_{z=e^{j\omega}} = X^*(e^{-j\omega}) = X^*(z)|_{z=e^{-j\omega}} = X^*(-\omega).
So x^*[n] \leftrightarrow X^*(-\omega).

Real signals
Combining: if x[n] is real and even, then X(\omega) is also real and even, i.e., X(\omega) = X^*(-\omega) = X(-\omega) = X^*(\omega).
There are many such relationships, as summarized in the following diagram.

[Diagram: decomposition of x[n] into real-even, imaginary-even, real-odd, and imaginary-odd parts x_R^e[n], x_I^e[n], x_R^o[n], x_I^o[n], and the corresponding parts X_R^e(\omega), X_I^e(\omega), X_R^o(\omega), X_I^o(\omega) of the DTFT.]
Frequency shift

e^{j\omega_0 n} x[n] \leftrightarrow X(\omega - \omega_0)

Example. Let y[n] = e^{j \frac{3\pi}{4} n} x[n]. Then Y(\omega) = X(\omega - 3\pi/4).

[Figure: X(\omega), supported on [-\pi/2, \pi/2], and the shifted replica Y(\omega) = X(\omega - 3\pi/4).]
Modulation

x[n] \cos(\omega_0 n) \leftrightarrow \frac{1}{2} \left[ X(\omega - \omega_0) + X(\omega + \omega_0) \right],

since \cos(\omega_0 n) = (e^{j\omega_0 n} + e^{-j\omega_0 n})/2. Now what? Apply the frequency shift property.
Parseval's theorem (earlier): \sum_{n=-\infty}^{\infty} |x[n]|^2 = \frac{1}{2\pi} \int_{-\pi}^{\pi} |X(\omega)|^2 d\omega

Frequency differentiation

n x[n] \leftrightarrow j \frac{d}{d\omega} X(\omega)

Proof:

\frac{d}{d\omega} X(\omega) = \frac{d}{d\omega} \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x[n] (-jn) e^{-j\omega n} = -j \sum_{n=-\infty}^{\infty} (n \, x[n]) e^{-j\omega n}.
Example. Find x[n] when X(\omega) = 1 for \omega_a \leq \omega \leq \omega_b (periodic), with -\pi \leq \omega_a < \omega_b \leq \pi, i.e.,

X(\omega) = \text{rect}\left( \frac{\omega - (\omega_b + \omega_a)/2}{\omega_b - \omega_a} \right) \text{ (periodic).}

Let Y(\omega) = \text{rect}(\omega / (\omega_b - \omega_a)) (periodic), an ideal lowpass response with inverse DTFT y[n] = \frac{\omega_b - \omega_a}{2\pi} \text{sinc}\left( \frac{\omega_b - \omega_a}{2\pi} n \right).
How does X(\omega) relate to Y(\omega)? By a frequency shift: X(\omega) = Y(\omega - (\omega_b + \omega_a)/2), so x[n] = e^{j \frac{\omega_b + \omega_a}{2} n} y[n]:

x[n] = e^{j \frac{\omega_b + \omega_a}{2} n} \, \frac{\omega_b - \omega_a}{2\pi} \, \text{sinc}\left( \frac{\omega_b - \omega_a}{2\pi} n \right).
Signal multiplication (time domain)
Should it correspond to convolution in the frequency domain? Yes, but the DTFT is periodic:

x[n] = x_1[n] x_2[n] \leftrightarrow X(\omega) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X_1(\lambda) X_2(\omega - \lambda) d\lambda.

Proof:

X(\omega) = \sum_{n=-\infty}^{\infty} x_1[n] x_2[n] e^{-j\omega n}
= \sum_{n=-\infty}^{\infty} \left[ \frac{1}{2\pi} \int_{-\pi}^{\pi} X_1(\lambda) e^{j\lambda n} d\lambda \right] x_2[n] e^{-j\omega n}
= \frac{1}{2\pi} \int_{-\pi}^{\pi} X_1(\lambda) \left[ \sum_{n=-\infty}^{\infty} x_2[n] e^{-j(\omega - \lambda) n} \right] d\lambda
= \frac{1}{2\pi} \int_{-\pi}^{\pi} X_1(\lambda) X_2(\omega - \lambda) d\lambda.

This is similar to ordinary convolution (flip and slide), but we only integrate from -\pi to \pi; it is called periodic convolution.
Compare:
Convolution in the time domain corresponds to multiplication in the frequency domain.
Multiplication in the time domain corresponds to periodic convolution in the frequency domain (because the DTFT is periodic).
Intuition: let x[n] = x_1[n] x_2[n] where x_1[n] = e^{j\omega_1 n} and x_2[n] = e^{j\omega_2 n}. Obviously x[n] = e^{j(\omega_1 + \omega_2) n}, but if |\omega_1 + \omega_2| > \pi then \omega_1 + \omega_2 will alias to some frequency component within the interval [-\pi, \pi].
The periodic convolution takes care of this aliasing.
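The discrete (DFT) analog of this property is easy to test numerically: the DFT of a pointwise product equals 1/N times the circular convolution of the DFTs. A NumPy sketch with arbitrary random signals:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
X1, X2 = np.fft.fft(x1), np.fft.fft(x2)

def circconv(a, b):
    # periodic (circular) convolution: c[k] = sum_m a[m] b[(k - m) mod N]
    return np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(len(a))])

# time-domain product <-> (1/N) * periodic convolution of the transforms
assert np.allclose(np.fft.fft(x1 * x2), circconv(X1, X2) / N)
```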
Parseval generalized:

\sum_{n=-\infty}^{\infty} x[n] y^*[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) Y^*(\omega) d\omega

\omega = 0 (DC) value: X(0) = \sum_{n=-\infty}^{\infty} x[n]

n = 0 value: x[0] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) d\omega

Other properties missing cf. CT? No duality, no time differentiation, no time scaling.
Upsampling, downsampling: see homework; derive from z-transform properties.
4.2.9 The sampling theorem revisited

Basic relationships:

x_a(t) \leftrightarrow^{CTFT} X_a(F) \quad \rightarrow \text{sampling} \rightarrow \quad x[n] \leftrightarrow^{DTFT} X(\omega)

DTFT: X(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}, \quad x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n} d\omega
(CT)FT: X_a(F) = \int x_a(t) e^{-j 2\pi F t} dt, \quad x_a(t) = \int X_a(F) e^{j 2\pi F t} dF
Sampling: x[n] = x_a(n T_s), where T_s = 1/F_s.

Questions
I. How does X(\omega) relate to X_a(F)? Sum of shifted replicates. What analog frequencies F relate to digital frequencies \omega? \omega = 2\pi F / F_s.
II. When can we recover x_a(t) from x[n] exactly? It is sufficient to have x_a(t) bandlimited and Nyquist sampling.
III. How do we recover x_a(t) from x[n]? Sinc interpolation.

There had better be a simple relationship between X(\omega) and X_a(F), otherwise digital processing of sampled CT signals could be impractical!
Spectral Replication (Result I)
Intuition / ingredients:
If x_a(t) = \cos(2\pi F t), then x[n] = \cos(2\pi F n T_s) = \cos\left( 2\pi \frac{F}{F_s} n \right), so \omega = 2\pi F / F_s.
The DTFT X(\omega) is periodic; the CTFT X_a(F) is not periodic (for energy signals).
The precise relationship is:

X(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\left( \frac{\omega/(2\pi) - k}{T_s} \right).

So there is a direct relationship between the DTFT of the samples of an analog signal and the FT of that analog signal: X(\omega) is the sum of shifted and scaled replicates of the (CT)FT of the analog signal.
Example. Consider the CT signal x_a(t) = 100 \, \text{sinc}^2(100 t), which has a triangular spectrum X_a(F) supported on -100 \leq F \leq 100 Hz.
If x[n] = x_a(n T_s) where 1/T_s = F_s = 400 Hz, then the spectrum of x[n] consists of nonoverlapping triangular replicates (scaled by 400) centered at multiples of 2\pi, each of half-width \omega_c = 2\pi \cdot 100/400 = \pi/2.
On the other hand, if F_s = 150 Hz, then 2\pi \cdot 100/150 = \frac{4\pi}{3} > \pi, and the replicates (scaled by 150) overlap: there is aliasing.
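The aliasing in the F_s = 150 Hz case can be seen directly in the samples: a 100 Hz cosine sampled at 150 Hz is indistinguishable from a 50 Hz cosine, since 2\pi \cdot 100/150 = 4\pi/3 folds to 2\pi/3. A NumPy sketch:

```python
import numpy as np

Fs = 150.0                  # sampling rate in Hz
n = np.arange(32)

# digital frequency 2*pi*100/150 = 4*pi/3 aliases to 2*pi/3 = 2*pi*50/150,
# so the two sample sequences are identical.
x100 = np.cos(2 * np.pi * 100 * n / Fs)
x50 = np.cos(2 * np.pi * 50 * n / Fs)

assert np.allclose(x100, x50)
```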
Proof 1.

x[n] = x_a(n T_s) = \int X_a(F) e^{j 2\pi F (n T_s)} dF    (inverse (CT)FT)
= \sum_{k=-\infty}^{\infty} \int_{(k-1/2)/T_s}^{(k+1/2)/T_s} X_a(F) e^{j 2\pi F T_s n} dF    (rewrite integral)
= \sum_{k=-\infty}^{\infty} \int_{-\pi}^{\pi} X_a\left( \frac{\omega/(2\pi) + k}{T_s} \right) e^{j 2\pi \frac{\omega/(2\pi)+k}{T_s} T_s n} \frac{d\omega}{2\pi T_s}    (let \omega = 2\pi F T_s - 2\pi k, so F = \frac{\omega/(2\pi)+k}{T_s})
= \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\left( \frac{\omega/(2\pi) + k}{T_s} \right) \right] e^{j\omega n} d\omega    (simplifying the exponent, since e^{j 2\pi k n} = 1).

The bracketed expression must be X(\omega), since the integral is the inverse DTFT formula x[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega) e^{j\omega n} d\omega.
Thus, when x[n] = x_a(n T_s) (regardless of whether x_a(t) is bandlimited) we have:

X(\omega) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\left( \frac{\omega/(2\pi) + k}{T_s} \right),

equivalent to the earlier form since the sum runs over all integers k.
Proof 2.
First define the train of Dirac impulses, aka the comb function: \text{comb}(t) = \sum_{n=-\infty}^{\infty} \delta(t - n).
If x[n] = x_a(n T_s), then (using the sifting property and the time-domain multiplication property):

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n}
= \sum_{n=-\infty}^{\infty} \int x_a(t) \, \delta(t - n T_s) \, e^{-j\omega t / T_s} dt
= \int x_a(t) \left[ \sum_{n=-\infty}^{\infty} \delta(t - n T_s) \right] e^{-j\omega t / T_s} dt
= \int x_a(t) \, \frac{1}{T_s} \text{comb}\left( \frac{t}{T_s} \right) e^{-j 2\pi [\omega/(2\pi T_s)] t} dt
= \left[ X_a(F) * \text{comb}(T_s F) \right] \Big|_{F = \omega/(2\pi T_s)}
= \left[ X_a(F) * \frac{1}{T_s} \sum_{k=-\infty}^{\infty} \delta(F - k/T_s) \right] \Big|_{F = \omega/(2\pi T_s)}
= \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\left( \frac{\omega/(2\pi) - k}{T_s} \right),

using \frac{1}{T_s} \text{comb}(t/T_s) \leftrightarrow \text{comb}(T_s F).
Proof 3.
Engineer's fact: the impulse train function is periodic, so it has a (generalized) Fourier series representation as follows:

\sum_{k=-\infty}^{\infty} \delta(F - k) = \sum_{n=-\infty}^{\infty} e^{j 2\pi F n}.

Thus

X(\omega) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} = \sum_{n=-\infty}^{\infty} x_a(n T_s) e^{-j\omega n}
= \sum_{n=-\infty}^{\infty} \left[ \int X_a(F) e^{j 2\pi F (n T_s)} dF \right] e^{-j\omega n}
= \int X_a(F) \left[ \sum_{n=-\infty}^{\infty} e^{j (2\pi F T_s - \omega) n} \right] dF
= \int X_a(F) \left[ \sum_{k=-\infty}^{\infty} \delta\left( F T_s - \frac{\omega}{2\pi} - k \right) \right] dF
= \int X_a(F) \left[ \frac{1}{T_s} \sum_{k=-\infty}^{\infty} \delta\left( F - \frac{\omega/(2\pi) + k}{T_s} \right) \right] dF
= \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\left( \frac{\omega/(2\pi) + k}{T_s} \right).

Note \delta(a t) = \frac{1}{|a|} \delta(t) for a \neq 0.

A mathematician would be uneasy with all of these proofs. Exchanging infinite sums and integrals really should be done with care, by making appropriate assumptions about the functions (signals) considered. Practically though, the conclusions are fine, since real-world signals generally have finite energy and are continuous, which are the types of regularity conditions usually needed.
Example. Consider the (non-bandlimited) Cauchy signal and its CTFT:

x_a(t) = \frac{1}{1 + (t/t_0)^2} \leftrightarrow X_a(F) = \pi t_0 e^{-2\pi t_0 |F|}.

[Figure: x_a(t) and X_a(F) for t_0 = 1.]

Bandlimited? No. Suppose we sample it anyway. Does the DTFT of the sampled signal still look similar to X_a(F) (but replicated)?
Note that since X_a(F) is real and symmetric, so is X(\omega). Since the math is messy, we apply the spectral replication formula focusing on \omega \in [0, \pi]:
Let \alpha = t_0 / T_s. For \omega \in [0, \pi]:

\frac{T_s}{\pi t_0} X(\omega) = \frac{T_s}{\pi t_0} \cdot \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X_a\left( \frac{\omega/(2\pi) - k}{T_s} \right)
= \sum_{k=-\infty}^{\infty} e^{-\alpha |\omega - 2\pi k|}
= e^{-\alpha|\omega|} + \sum_{k=1}^{\infty} e^{-\alpha(2\pi k - \omega)} + \sum_{k=1}^{\infty} e^{-\alpha(2\pi k + \omega)}
= e^{-\alpha|\omega|} + \left( e^{\alpha\omega} + e^{-\alpha\omega} \right) \sum_{k=1}^{\infty} e^{-2\pi\alpha k}
= e^{-\alpha|\omega|} + \frac{e^{-\alpha(2\pi - \omega)} + e^{-\alpha(2\pi + \omega)}}{1 - e^{-2\pi\alpha}}.

Thus for \omega \in [0, \pi] we have the following relationship between the spectrum of the sampled signal and the spectrum of the original signal:

\frac{T_s}{\pi t_0} X(\omega) = e^{-\alpha|\omega|} + \frac{e^{-\alpha(2\pi - \omega)} + e^{-\alpha(2\pi + \omega)}}{1 - e^{-2\pi\alpha}}.

Ideally we would have X(\omega) = \frac{1}{T_s} X_a\left( \frac{\omega}{2\pi T_s} \right), i.e., only the e^{-\alpha|\omega|} term. The second term above is due to aliasing. Note that as T_s \rightarrow 0, \alpha \rightarrow \infty and the second term goes to 0.
The following figure shows x_a(t), its samples x[n], and the DTFT X(\omega) for two sampling rates. For T_s = 1 there is significant aliasing, whereas for T_s = 0.5 the aliasing is smaller.
[Figure: Cauchy signal x_a(t) (t_0 = 1) with its samples, and the DTFT X(\omega) of the samples, for T_s = 1 and T_s = 0.5.]

Note that for smaller sample spacing T_s, the replicates of X_a(F) become more compressed relative to the 2\pi spacing in \omega, so there is less aliasing overlap.
Can we recover x_a(t) from x[n], i.e., X_a(F) from X(\omega), in this case? Well, yes, if we knew that the signal is Cauchy but just do not know t_0; just figure out t_0 from the samples. But in general there are multiple analog spectra that yield the same DTFT.
Suppose there is a maximum frequency F_max > 0 for which X_a(F) = 0 for |F| \geq F_max, as illustrated below.

[Figure: X_a(F), supported on [-F_max, F_max]; and X(\omega), with replicates of height 1/T_s centered at multiples of 2\pi, each of half-width \omega_max = 2\pi F_max T_s.]
Result II.
If the sampling frequency F_s is at least twice the highest analog signal frequency F_max, then there is no overlap of the shifted, scaled replicates of X_a(F) in X(\omega).
2 F_max is called the Nyquist sampling rate.
Recovering the spectrum
When there is no spectral overlap (no aliasing), we can recover X_a(F) from the central (or any) replicate in the DTFT X(\omega).
Let \omega_0 be any frequency between \omega_max and \pi. Equivalently, let F_0 = \frac{\omega_0}{2\pi} F_s be any frequency between F_max and F_s/2.
Then the center replicate of the spectrum is

X_a(F) = T_s \, \text{rect}\left( \frac{F}{2 F_0} \right) X(2\pi F T_s), \quad \text{if } 0 < F_{max} \leq F_0 \leq F_s/2,

i.e.,

X_a(F) = \begin{cases} T_s \, X(2\pi F T_s), & |F| \leq F_0 \leq F_s/2 \\ 0, & \text{otherwise.} \end{cases}

So the T_s-scaled, rectangular-windowed DTFT gives us back the original signal spectrum.
Result III.
How do we recover the original analog signal x_a(t) from its samples x[n] = x_a(n T_s)?
Convert the central replicate back to the time domain:

x_a(t) = \int X_a(F) e^{j 2\pi F t} dF
= \int \underbrace{T_s \, \text{rect}\left( \frac{F}{2 F_0} \right) X(2\pi F T_s)}_{\text{central replicate}} e^{j 2\pi F t} dF
= \int T_s \, \text{rect}\left( \frac{F}{2 F_0} \right) \underbrace{\left[ \sum_{n=-\infty}^{\infty} x[n] e^{-j(2\pi F T_s) n} \right]}_{\text{DTFT}} e^{j 2\pi F t} dF
= \sum_{n=-\infty}^{\infty} x[n] \, T_s \underbrace{\int \text{rect}\left( \frac{F}{2 F_0} \right) e^{j 2\pi F (t - n T_s)} dF}_{\text{FT of rect}}
= \sum_{n=-\infty}^{\infty} x[n] \, (2 F_0 T_s) \, \text{sinc}(2 F_0 (t - n T_s)),

where sinc(x) = \frac{\sin(\pi x)}{\pi x}. This is the general sinc interpolation formula for bandlimited signal recovery.
The usual case is to use \omega_0 = \pi (the biggest rect possible), in which case F_0 = \frac{1}{2 T_s}, so 2 F_0 T_s = 1. In this case, the reconstruction formula simplifies to the following:

x_a(t) = \sum_{n=-\infty}^{\infty} x[n] \, \text{sinc}\left( \frac{t - n T_s}{T_s} \right).

This is called sinc interpolation.
The sinc interpolation formula works perfectly if x_a(t) is bandlimited to F_s/2, where F_s = 1/T_s, but it is impractical because of the infinite summation. Also, real signals are never exactly bandlimited, since bandlimited signals are eternal in time. However, after passing through an anti-aliasing filter, most real signals can be made almost bandlimited. There are practical interpolation methods, e.g., based on splines, that work quite well with a finite summation.
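A NumPy sketch of (truncated) sinc interpolation. The test signal x_a(t) = sinc(0.5 t) is an arbitrary bandlimited choice (bandwidth 0.25 Hz < F_s/2 = 0.5 Hz for T_s = 1), so reconstruction at off-grid points should be accurate up to truncation error:

```python
import numpy as np

Ts = 1.0
n = np.arange(-500, 501)            # truncated (finite) sum

def xa(t):
    # bandlimited test signal; np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(0.5 * t)

x = xa(n * Ts)                      # samples x[n] = xa(n Ts)
t = np.linspace(-5, 5, 101)         # evaluation points (mostly off-grid)
x_hat = np.array([np.sum(x * np.sinc((ti - n * Ts) / Ts)) for ti in t])

assert np.max(np.abs(x_hat - xa(t))) < 1e-2
```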
[Figure: sinc interpolation of the samples of the Cauchy signal (t_0 = 1): the samples x(n), the sinc interpolant \hat{x}_{sinc}(t), and the error \hat{x}_{sinc}(t) - x_a(t) for T_s = 1, 0.5, 0.25. The error decreases as T_s decreases.]

Why use a non-bandlimited signal here? Real signals are never perfectly bandlimited, even after passing through an anti-alias filter. But they can be practically bandlimited, in the sense that the energy above the folding frequency can be very small. The above results show that with suitably high sampling rates, sinc interpolation is robust to the small aliasing that results from the signal not being perfectly bandlimited.
Summary: answered questions I, II, III.
4.30
4.4
Frequency-domain characteristics of LTI systems
We have seen that signals are characterized by their frequency content. Thus it is natural to design LTI systems (filters) from frequency-domain specifications, rather than in terms of the impulse response.
4.4.1 Response to complex-exponential and sinusoidal signals: The frequency response function
We have seen the following input/output relationships for LTI systems thus far:
x[n] → h[n] → y[n] = h[n] * x[n]
X(z) → H(z) → Y(z) = H(z) X(z)
4.31
[Pole-zero diagram: H(z) = (z - 1)/z, zero at z = 1, pole at z = 0; gain = 1.]

H(z) = (z - 1)/z

Frequency response:

H(ω) = 1 - e^{-jω} = e^{-jω/2} [e^{jω/2} - e^{-jω/2}] = e^{j(π-ω)/2} 2 sin(ω/2).

This is an example of a linear phase filter.

[Figure: phase response ∠H(ω).]
4.32
Analyze: H(z) = (z - p)(z - p*)/z² = (z² - 2z cos ωc + 1)/z²

[Figure: pole-zero diagram (zeros at e^{±jωc} on the unit circle, double pole at z = 0), magnitude response |H(ω)|, phase response ∠H(ω), and impulse response h[n].]
These pictures show the response to eternal sampled 60 Hz sinusoidal signals. But what about a causal sinusoid, e.g., e^{jωc n} u[n] (applied after the system first starts)?
4.4.5
Relationship between system function and frequency response
If h[n] is real, then H*(ω) = H(-ω), so

H(ω) = H(z)|_{z=e^{jω}},   |H(ω)|² = H(ω) H*(ω) = H(z) H(z^{-1})|_{z=e^{jω}}.
4.33
4.4.6
Computing the frequency response function
Skill: pole-zero (or H(z) or h[n], etc.) to H(ω).
Focus on diffeq systems with rational system functions and real coefficients.

H(z) = (Σ_{k=0}^{M} bk z^{-k}) / (Σ_{k=0}^{N} ak z^{-k}) = G (∏_{k=1}^{M} (z - zk)) / (∏_{k=1}^{N} (z - pk))

⇒ H(ω) = H(z)|_{z=e^{jω}} = G (∏_{k=1}^{M} (e^{jω} - zk)) / (∏_{k=1}^{N} (e^{jω} - pk)).

See MATLAB function freqz, usage: H = freqz(b, a, w) where w is a vector of ω values of interest, usually created using linspace.
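The freqz call can be mimicked with a few lines of numpy. The function name freq_response is my own, and the Fs = 400 Hz rate used to place the 60 Hz notch is an assumed example rate, not from the notes.

```python
import numpy as np

def freq_response(b, a, w):
    """Evaluate H(w) = B(e^{jw}) / A(e^{jw}) for coefficient vectors
    b = [b0, b1, ...], a = [a0, a1, ...] (powers of z^{-1})."""
    w = np.asarray(w, dtype=float)
    zinv = np.exp(-1j * w)                       # z^{-1} on the unit circle
    num = sum(bk * zinv**k for k, bk in enumerate(b))
    den = sum(ak * zinv**k for k, ak in enumerate(a))
    return num / den

# FIR notch from above: H(z) = 1 - 2*cos(wc)*z^{-1} + z^{-2}
wc = 2 * np.pi * 60 / 400            # 60 Hz notch, assuming Fs = 400 Hz
b = [1.0, -2 * np.cos(wc), 1.0]
w = np.linspace(0, np.pi, 512)
H = freq_response(b, [1.0], w)
```

Evaluating at ω = ωc lands exactly on the zeros (|H(ωc)| = 0), and |H(π)| > |H(0)| as the geometric argument above predicts.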
For sketching the magnitude response:

|H(ω)| = |G| (∏_{k=1}^{M} |e^{jω} - zk|) / (∏_{k=1}^{N} |e^{jω} - pk|),

a product of contributions from each pole and each zero. (On a log scale these contributions would add.)
Geometric interpretation: closer to zeros, |H(ω)| decreases; closer to poles, |H(ω)| increases.
Example: the 60 Hz notch filter earlier. Is |H(0)| or |H(π)| bigger? |H(π)|, since z = -1 is further from the zeros.
Phase response:

∠H(ω) = ∠G + Σ_{k=1}^{M} ∠(e^{jω} - zk) - Σ_{k=1}^{N} ∠(e^{jω} - pk):

phases of the zeros add and phases of the poles subtract, since the phase of a product is the sum of the phases.
Example: How can we improve our notch filter for 60 Hz rejection? Move the poles from the origin to near the zeros:
pole-zero diagram with poles at z = r exp(±jωc). For r = 0.9:

H(z) = (z - e^{jωc})(z - e^{-jωc}) / ((z - r e^{jωc})(z - r e^{-jωc})) = (z² - 2z cos ωc + 1) / (z² - 2rz cos ωc + r²)
[Figure: pole-zero diagram (zplane), magnitude response |H(ω)|, phase response ∠H(ω), and impulse response h[n] of the improved notch filter.]
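The improved notch can be checked numerically: the response is exactly zero at ωc, while the passband stays near unity. The Fs = 400 Hz rate and r = 0.9 follow the example above; the evaluation helper is my own.

```python
import numpy as np

# Improved 60 Hz notch: zeros on the unit circle at exp(+/- j*wc),
# poles at r*exp(+/- j*wc), assuming a sampling rate of Fs = 400 Hz.
Fs, r = 400.0, 0.9
wc = 2 * np.pi * 60 / Fs
b = np.array([1.0, -2 * np.cos(wc), 1.0])          # numerator of H(z)
a = np.array([1.0, -2 * r * np.cos(wc), r * r])    # denominator of H(z)

def H(w):
    """Evaluate H(w); polyval wants highest power first, hence the reversals."""
    zinv = np.exp(-1j * np.asarray(w, dtype=float))
    return np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)
```

With the poles at r = 0.9 tucked behind the zeros, the numerator and denominator nearly cancel away from ωc, so the response is close to 1 everywhere except in a narrow notch.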
4.34
[Figure: pole-zero diagrams (zplane) and phase responses ∠H(ω) for several pole/zero configurations of the notch filter.]
4.35
4.4.2
Steady-state and transient response for sinusoidal inputs
The relation x[n] = e^{jω0 n} → h[n] → y[n] = H(ω0) e^{jω0 n} only holds for eternal complex exponential signals. So the output H(ω0) e^{jω0 n} is just the steady-state response of the system.
In practice (cf. 60 Hz rejection) there is also an initial transient response when a causal sinusoidal signal is applied to the system.
Example. Consider the simple first-order (stable, causal) diffeq system:
y[n] = p y[n - 1] + x[n] where |p| < 1.

Find the response to a causal complex-exponential signal: x[n] = e^{jω0 n} u[n] = q^n u[n] where q = e^{jω0}.

Y(z) = H(z) X(z) = 1/((1 - pz^{-1})(1 - qz^{-1})) = (p/(p - q))/(1 - pz^{-1}) + (q/(q - p))/(1 - qz^{-1})

⇒ y[n] = (p/(p - q)) p^n u[n] + (q/(q - p)) q^n u[n] = (p/(p - q)) p^n u[n] + H(ω0) e^{jω0 n} u[n],

where the first term is the transient and the second is the steady-state response, since q/(q - p) = 1/(1 - p e^{-jω0}) = H(ω0).
The first term is transient because the system is causal and the pole is within the unit circle, so the natural response goes to 0 as n → ∞.
This property holds more generally: for any stable LTI system, the response to a causal sinusoidal signal has a transient response (that decays) and a steady-state response whose amplitude and phase are determined by the usual eigenfunction properties. What determines the duration of the transient response? The proximity of the poles to the unit circle.
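The partial-fraction split above can be verified numerically: run the recursion and compare against the predicted transient-plus-steady-state decomposition. The values p = 0.8 and ω0 = 2π/16 are arbitrary test choices.

```python
import numpy as np

# y[n] = p*y[n-1] + x[n] driven by the causal exponential x[n] = q^n u[n].
# Predicted: y[n] = (p/(p-q)) p^n  +  H(w0) e^{j w0 n},  H(w0) = 1/(1 - p e^{-j w0}).
p, w0 = 0.8, 2 * np.pi / 16
q = np.exp(1j * w0)
N = 60
x = q ** np.arange(N)

y = np.zeros(N, dtype=complex)
for n in range(N):
    y[n] = (p * y[n - 1] if n > 0 else 0) + x[n]   # the difference equation

H0 = 1 / (1 - p * np.exp(-1j * w0))                 # eigenvalue H(w0)
y_pred = (p / (p - q)) * p ** np.arange(N) + H0 * q ** np.arange(N)
```

By n = 59 the transient term p^n has decayed to about 2e-6, so the output is essentially the pure steady-state eigenresponse.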
Example. What about our 60Hz notch filter? First design is FIR (only poles at z = 0). So transient response duration is finite.
Y(z) = H(z) X(z) = H(z)/(1 - qz^{-1}) = (H(z) - H(q))/(1 - qz^{-1}) + H(q)/(1 - qz^{-1}).

The first term has a root at z = q in both numerator and denominator that cancels as follows:

(H(z) - H(q))/(1 - qz^{-1})
  = [(1 - 2 cos ωc z^{-1} + z^{-2}) - (1 - 2 cos ωc q^{-1} + q^{-2})]/(1 - qz^{-1})
  = [-(z^{-1} - q^{-1}) 2 cos ωc + (z^{-2} - q^{-2})] / (-q (z^{-1} - q^{-1}))
  = [-2 cos ωc + (z^{-1} + q^{-1})] / (-q)
  = (2 cos ωc - q^{-1})/q - (1/q) z^{-1},

so

Y(z) = (2 cos ωc - q^{-1})/q - (1/q) z^{-1} + H(q)/(1 - qz^{-1})

⇒ y[n] = ((2 cos ωc - q^{-1})/q) δ[n] - (1/q) δ[n - 1] + H(ω0) e^{jω0 n} u[n],

where the first two (finite-duration) terms are the transient and the last term is the steady-state response.
Our second design is IIR. The closer the poles are to the unit circle, the closer we get to the ideal notch filter magnitude response, but the longer the transient response lasts.
4.36
Frequency components present in the input signal can be amplified (|H(ω)| > 1) or attenuated (|H(ω)| < 1).
Energy density spectrum, output vs. input: Syy(ω) = |H(ω)|² Sxx(ω).
skim
4.4.8 Correlation functions and power spectra for random input signals
skim
Summary of LTI in frequency domain
There are several cases to consider depending on the type of input signal.
Eternal periodic input. Use the DTFS:

x[n] = Σ_{k=0}^{N-1} ck e^{j(2π/N)kn} → H(ω) → y[n] = Σ_{k=0}^{N-1} ck H((2π/N)k) e^{j(2π/N)kn}.

Eternal aperiodic input. Use the DTFT: Y(ω) = H(ω) X(ω).
4.37
4.5
LTI systems as frequency-selective filters
Filter design: choosing the structure of an LTI system (e.g., recursive) and the system parameters ({ak} and {bk}) to yield a desired frequency response H(ω).
Since Y(ω) = H(ω) X(ω), the LTI system can boost (or leave unchanged) some frequency components while attenuating others.
4.5.1 Ideal filter characteristics
Types: lowpass, highpass, bandpass, bandstop, notch, resonator, all-pass. Pictures of |H(ω)| in the book.
The magnitude response is only half of the specification of the frequency response. The other half is the phase response.
An ideal filter has a linear phase response in the passband.
The phase response in the stopband is irrelevant since those frequency components will be eliminated.
Consider the following linear-phase bandpass response:

H(ω) = C e^{-jωn0} for ω1 < |ω| < ω2, and 0 otherwise.

Suppose the spectrum X(ω) of the input signal x[n] lies entirely within the passband. Then the output signal spectrum is

Y(ω) = H(ω) X(ω) = C e^{-jωn0} X(ω).

What is y[n]? By the time-shift property of the DTFT, y[n] = C x[n - n0]. For a linear-phase filter, the output is simply a scaled and shifted version of the input (which was in the passband). A (small) shift is usually considered a tolerable distortion of the signal.
If the phase were nonlinear, then the output signal would be a distorted version of the input, even if the input is entirely contained
in the passband.
The group delay¹

τg(ω) = -(d/dω) Θ(ω)

is the same for all frequency components when the filter has linear phase Θ(ω) = -ωn0.
These ideal frequency responses are physically unrealizable, since they are noncausal, non-rational, and unstable (impulse response
is not absolutely summable).
We want to approximate these ideal responses with rational and stable systems (and usually causal too).
Basic guiding principles of filter design:
All poles inside the unit circle so the causal form is stable. (Zeros can go anywhere.)
All complex poles and zeros in complex-conjugate pairs so the filter coefficients are real.
# poles ≥ # zeros so the system is causal. (Needed in real-time applications, but not for post-processing signals in MATLAB.)
1 The reason for this term is that it specifies the delay experienced by a narrow-band group of sinusoidal components that have frequencies within a narrow
frequency interval about ω. The width of this interval is limited to that over which the group delay is approximately constant.
4.38
Relationship between a phase shift and a time shift for sinusoidal signals.
Suppose x[n] = cos(0 n) and y[n] = x[n m0 ] is a time-shifted version of x[n]. Then
y[n] = cos(ω0 (n - m0)) = cos(ω0 n + θ) where θ = -m0 ω0.

Note that the phase shift θ depends on both the time shift m0 and the frequency ω0.
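A quick numerical check of this identity (the values ω0 = 2π/16 and m0 = 3 are arbitrary):

```python
import numpy as np

# A time shift of m0 samples appears as a frequency-dependent phase shift:
# cos(w0*(n - m0)) == cos(w0*n + theta) with theta = -m0*w0.
w0, m0 = 2 * np.pi / 16, 3
n = np.arange(32)
theta = -m0 * w0
shifted = np.cos(w0 * (n - m0))
phased = np.cos(w0 * n + theta)
```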
[Figure: x1(n) = cos(n 2π/16) and several time-shifted/phase-shifted versions of it over n = 0, ..., 20.]
To time-shift a sinusoidal signal by m0 samples, the required phase shift is proportional to the frequency.
A more complicated signal will be composed of a multitude of frequency components. To time-shift each component by a certain amount m0, the phase shift for each component should be proportional to the frequency of that component, hence linear phase:
Θ(ω) = -m0 ω.
Since x[n] = (1/2π) ∫ X(ω) e^{jωn} dω, we have

y[n] = x[n - m0] = (1/2π) ∫ X(ω) e^{jω(n - m0)} dω = (1/2π) ∫ X(ω) e^{j(ωn + Θ(ω))} dω,

where Θ(ω) = -m0 ω.
4.39
1
2
0
Real part
Imaginary part
Imaginary part
H1()
1
2
15
|H()|
|H()|
10
1
/2
/2
1
()
()
1
0.5
0
0.5
1
20
0
Real part
/2
/2
/2
/2
0
1
/2
/2
a)/2
for
unity
gain
at
DC
(z
=
1):
H
()
=
.
2
za
1 az 1
2 1 a e
To make a highpass filter, simply reflect the poles and zeros around the imaginary axis.
[Figure: pole-zero diagrams, magnitude responses, and phase responses of the lowpass prototype H1 and the highpass filter H2 obtained by reflecting the poles and zeros about the imaginary axis.]

Reflecting the poles and zeros about the imaginary axis shifts the response by π, so Hhp(ω) = Hlp(ω - π).
So by the frequency-shift property of the DTFT: hhp[n] = e^{jπn} hlp[n] = (-1)^n hlp[n].
So to make a highpass filter from a lowpass filter design, just reverse the sign of every other sample of the impulse response.
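The (-1)^n trick is easy to verify numerically. The 5-tap moving average below is just an illustrative lowpass prototype; the DTFT helper H is my own.

```python
import numpy as np

# Modulating the impulse response by (-1)^n shifts the response by pi:
# h_hp[n] = (-1)^n h_lp[n]  =>  H_hp(w) = H_lp(w - pi).
h_lp = np.ones(5) / 5                              # simple lowpass prototype
h_hp = ((-1.0) ** np.arange(5)) * h_lp             # sign-flip every other tap

def H(h, w):
    """DTFT of a finite-length h evaluated at the frequencies in w."""
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * np.outer(np.asarray(w), n)), axis=1)

w = np.linspace(-np.pi, np.pi, 201)
```

The shift identity holds exactly, and the modulated filter has unity gain at ω = π (where the lowpass prototype had unity gain at DC).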
We will do better lowpass filter designs later, from which we can easily get highpass filters.
What about bandpass filters?
4.40
Pair of poles near unit circle at the desired pass frequency or resonant frequency.
Example application: detecting specific frequency components in touch-tone signals using a filter bank.
How many zeros can we choose? Two. If we use more, then noncausal.
Putting two zeros at the origin and poles at z = p = r exp(±jω0), we have

H(z) = G z²/((z - p)(z - p*)) = G 1/((1 - pz^{-1})(1 - p* z^{-1})) = G 1/(1 - (2r cos ω0) z^{-1} + r² z^{-2}),

so for unity gain at ω = ω0:

1 = |H(ω0)| = G / ((1 - r) |1 - r e^{-j2ω0}|), so G = (1 - r) √(1 - 2r cos 2ω0 + r²).

More generally,

|H(ω)| = G / (U1(ω) U2(ω)), where
U1(ω) = |1 - r e^{jω0} e^{-jω}| = √(1 - 2r cos(ω0 - ω) + r²),
U2(ω) = |1 - r e^{-jω0} e^{-jω}| = √(1 - 2r cos(ω0 + ω) + r²).

[Figure: pole-zero diagrams, magnitude responses |H(ω)|, and phase responses Θ(ω) of the resonator for r = 0.8 and r = 0.95.]
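The unity-gain normalization can be checked numerically; r = 0.95 and ω0 = π/3 below are arbitrary test values, and the evaluation helper is my own.

```python
import numpy as np

# Two-pole resonator: H(z) = G / (1 - 2r cos(w0) z^{-1} + r^2 z^{-2}),
# with G chosen for unity gain at the resonant frequency w0.
r, w0 = 0.95, np.pi / 3
G = (1 - r) * np.sqrt(1 - 2 * r * np.cos(2 * w0) + r * r)
a = np.array([1.0, -2 * r * np.cos(w0), r * r])

def H(w):
    zinv = np.exp(-1j * np.asarray(w, dtype=float))
    return G / np.polyval(a[::-1], zinv)    # polyval wants highest power first
```

|H(ω0)| comes out exactly 1, and the response falls off away from the resonance since the poles sit close to the unit circle at ±ω0.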
4.41
H(ω) = H0(ω) Hallpass(ω)

The general form for an allpass filter (with real coefficients) uses bk = aN-k (reverse the filter coefficient order):

H(z) = (Σ_{k=0}^{N} aN-k z^{-k}) / (Σ_{k=0}^{N} ak z^{-k}) = (aN + aN-1 z^{-1} + ... + a0 z^{-N}) / (a0 + a1 z^{-1} + ... + aN z^{-N}) = z^{-N} A(z^{-1}) / A(z).

Fact. If z = p is a root of the denominator, then z = 1/p is a zero. So the poles and zeros come in reciprocal pairs.

Frequency response:

H(ω) = H(z)|_{z=e^{jω}} = e^{-jωN} A(e^{-jω}) / A(e^{jω}), so H*(ω) = e^{jωN} A(e^{jω}) / A(e^{-jω}).

|H(ω)|² = H(ω) H*(ω) = e^{-jωN} (A(e^{-jω})/A(e^{jω})) e^{jωN} (A(e^{jω})/A(e^{-jω})) = 1.

Alternative proof. Since h[n] is real: |H(ω)|² = H(z) H(z^{-1})|_{z=e^{jω}} = [z^{-N} A(z^{-1})/A(z)] [z^{N} A(z)/A(z^{-1})]|_{z=e^{jω}} = 1.
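The |H(ω)| = 1 property is easy to demonstrate: reverse any (stable) denominator's coefficients to get the numerator and evaluate on the unit circle. The coefficients below are an arbitrary stable example.

```python
import numpy as np

# Allpass filter: numerator = reversed denominator coefficients, b_k = a_{N-k}.
# Then |H(w)| = 1 for all w.
a = np.array([1.0, -0.6, 0.2])       # arbitrary stable A(z) (roots inside circle)
b = a[::-1]                           # reversed: the allpass numerator

def H(w):
    zinv = np.exp(-1j * np.asarray(w, dtype=float))
    return np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)

w = np.linspace(-np.pi, np.pi, 257)
```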
Example. A pair of poles at r e^{±jω0} and zeros at (1/r) e^{±jω0}, with ω0 = π/4 and r = 4/5 and gain g = r², yields the phase response below.

[Figure: pole-zero diagram, flat magnitude response |H(ω)|, and phase response Θ(ω) of the allpass example.]

H(z) = r² (z - (1/r) e^{jω0})(z - (1/r) e^{-jω0}) / ((z - r e^{jω0})(z - r e^{-jω0})) = (r² - 2r cos ω0 z^{-1} + z^{-2}) / (1 - 2r cos ω0 z^{-1} + r² z^{-2}),

from which the feed-back filter coefficients are seen to be the reversal of the feed-forward coefficients.
4.42
4.5.7
Digital sinusoidal oscillators
What if we want to generate a sinusoidal signal cos(ω0 n), e.g., for digital speech or music synthesis? A brute-force implementation would require calculating the cos function for each n, which is expensive. Can we do it with a few delays, adds, and multiplies?
Solution: create an LTI system with poles on the unit circle. Such a system is called marginally unstable, because its output blows up for certain inputs, but not for all inputs. We do not use it as a filter; we only consider the unit impulse input.
Single pole complex oscillator
H(z) = z/(z - p) = 1/(1 - pz^{-1})

If the input is a unit impulse signal, then the output is a complex exponential signal!
Difference equation: y[n] = p y[n - 1] + x[n].
This system is marginally unstable because the output would blow up for certain input signals, but for the input x[n] = δ[n] the output is a nice bounded causal complex exponential signal.
Very simple digital waveform generator.

[Pole-zero diagram: single pole at p = e^{jω0} on the unit circle; x[n] = δ[n] → y[n] = e^{jω0 n} u[n].]
Two poles
What if only a real sinusoid is desired? Use a conjugate pair of poles p = e^{jω0} and p* = e^{-jω0}:

[Pole-zero diagram: poles at p = e^{jω0} and p* = e^{-jω0} on the unit circle; x[n] = δ[n].]

H(z) = (1/2)/(1 - pz^{-1}) + (1/2)/(1 - p* z^{-1}) = (1 - cos ω0 z^{-1}) / (1 - 2 cos ω0 z^{-1} + z^{-2}),
4.43
More generally, for an output with phase offset φ:

y[n] = [(1/2) e^{jφ} e^{jω0 n} + (1/2) e^{-jφ} e^{-jω0 n}] u[n] = cos(ω0 n + φ) u[n],

so

Y(z) = (e^{jφ}/2)/(1 - e^{jω0} z^{-1}) + (e^{-jφ}/2)/(1 - e^{-jω0} z^{-1}) = (cos φ - cos(ω0 - φ) z^{-1}) / (1 - 2 cos ω0 z^{-1} + z^{-2}) = X(z) H(z).
[Block diagram: two-pole oscillator realization with feedback coefficients 2 cos ω0 and -1.]
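A minimal sketch of the oscillator: after the impulse start-up, the feedback recursion y[n] = 2 cos(ω0) y[n-1] - y[n-2] generates cos(ω0 n) with one multiply and one subtract per sample. Seeding y[0] = 1 and y[1] = cos(ω0) corresponds to the impulse input combined with the feed-forward tap -cos(ω0) from the numerator above; ω0 = 2π/12 is an arbitrary test frequency.

```python
import numpy as np

# Two-pole digital oscillator: poles on the unit circle at e^{+/- j w0}.
w0 = 2 * np.pi / 12
c = 2 * np.cos(w0)
N = 200
y = np.empty(N)
y[0] = 1.0              # cos(0): impulse applied at n = 0
y[1] = np.cos(w0)       # set by the feed-forward tap -cos(w0)
for n in range(2, N):
    y[n] = c * y[n - 1] - y[n - 2]   # no cos() calls inside the loop
```

This works because cosine satisfies the same recurrence: cos(ω0 n) = 2 cos(ω0) cos(ω0(n-1)) - cos(ω0(n-2)).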
4.44
4.6
Inverse systems and deconvolution
Example application: DSP compensation for effects of poor audio microphone.
4.6.1 Invertibility of LTI systems
Definition: A system T is called invertible iff each (possible) output signal is the response to only one input signal.
Otherwise T is not invertible.
Example. y[n] = x²[n] is not invertible, since x[n] and -x[n] produce the same response.
If T is invertible, then there is an inverse system T^{-1} such that T^{-1}{y[n]} = x[n]. Example: y[n] = (1/2) x[n + 7] + 2 is invertible.
An LTI system is completely characterized by its impulse response, so T 1 has some impulse response hI [n] under the above
conditions. In this case we call T 1 the inverse filter. We call the use of T 1 deconvolution since T does convolution.
x[n] → h[n] → y[n] = x[n] * h[n] → hI[n] → hI[n] * y[n] = hI[n] * h[n] * x[n] = x[n].

In particular,

hI[n] * h[n] = δ[n], so HI(z) H(z) = 1 and HI(z) = 1/H(z).
Example. H(z) = (z + a)/z ⇒ HI(z) = z/(z + a).
Is the causal form of the inverse filter stable? Only if |a| < 1, i.e., only if the zero of the original system function is within the unit circle.
Generally: if H(z) = g B(z)/A(z) then HI(z) = (1/g) A(z)/B(z), so to form the inverse system one just swaps all poles and zeros (and reciprocates the gain).
Then when the two systems are cascaded, one gets pole-zero cancellation: H(z) HI(z) = [g B(z)/A(z)] [(1/g) A(z)/B(z)] = 1. In practice the pole-zero cancellation is imperfect, and the results can be very noise sensitive!
When is the inverse system causal and stable? Only if all zeros of H(z) are within the unit circle and N ≤ M, i.e., # poles of H(z) ≤ # zeros of H(z).
Recall that for H(z) to be causal we need N ≥ M, so for a causal system to have a causal stable inverse we need N = M.
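A minimal sketch of deconvolution by pole/zero swapping, using a first-order FIR example (a = 0.5 chosen arbitrarily): the FIR forward filter 1 + a z^{-1} has an IIR inverse, the recursion y[n] = x[n] - a y[n-1].

```python
import numpy as np

# Forward system H(z) = 1 + a*z^{-1} (zero at z = -a, inside the unit circle).
# Inverse H_I(z) = 1/(1 + a*z^{-1}): the recursion y[n] = v[n] - a*y[n-1],
# which is causal and stable only because |a| < 1.
a = 0.5
rng = np.random.default_rng(0)
x = rng.standard_normal(64)

# Forward filter: v[n] = x[n] + a*x[n-1]
v = x + a * np.concatenate(([0.0], x[:-1]))

# Inverse filter applied to v recovers x exactly
y = np.empty_like(v)
prev = 0.0
for n in range(len(v)):
    prev = v[n] - a * prev
    y[n] = prev
```

With floating-point coefficients known exactly, the cascade recovers x to machine precision; with mismatched or noisy coefficients the cancellation degrades, as noted above.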
4.45
Example. H(z) = (z² - (1/2)z + 1/4)/z² and HI(z) = z²/(z² - (1/2)z + 1/4).

[Figure: pole-zero diagrams and magnitude responses of H(z), HI(z), and an approximate inverse G(z), with |H(ω) HI(ω)| = 1 and |H(ω) G(ω)| ≈ 1.]
Locations of zeros
For a real system, moving zeros to reciprocal locations leaves the magnitude response unchanged except for a gain constant. But the phase is affected.
Brief explanation: |H(ω)|² = H(z) H(z^{-1})|_{z=e^{jω}}. Suppose

H1(z) = g1 ∏_i (z - zi)/A(z) and H2(z) = g2 ∏_i (z - 1/zi)/A(z),

where both systems are real, so the zeros are real or occur in complex-conjugate pairs. Then since

|e^{jω} - 1/q| = (1/|q|) |q e^{jω} - 1| = (1/|q|) |e^{jω}| |q - e^{-jω}| = (1/|q|) |e^{jω} - q*|,

we have

|H2(ω)| / |H1(ω)| = (g2 ∏_i |e^{jω} - 1/zi|) / (g1 ∏_i |e^{jω} - zi|) = (g2 ∏_i (1/|zi|) |e^{jω} - zi*|) / (g1 ∏_i |e^{jω} - zi|) = (g2/g1) ∏_i (1/|zi|),

because the zeros occur in complex-conjugate pairs. So |H2(ω)| and |H1(ω)| differ only by a constant.
Example. Here are two lowpass filters; the magnitude response has the same shape and differs only by a gain factor of two.
But note the different phase response.
[Figure: pole-zero diagrams, magnitude responses |H(ω)|, and phase responses Θ(ω) of H1(z) and H2(z); the magnitudes have the same shape and differ by a gain factor of two, but the phases differ.]
4.47
4.6.2
Minimum-phase, maximum-phase, and mixed-phase systems
A system is called minimum phase if it has:
all poles inside the unit circle, and
all zeros inside the unit circle.
What can we say about the inverse system for a minimum-phase system? It will also be minimum phase (and stable).
A system is called maximum phase if all its zeros are outside the unit circle, and mixed phase otherwise.
Example application: invertible filter design, e.g., Dolby, with |H()| given.
Linear phase (preview of 8.2)
We have seen that a zero on the unit circle contributes linear phase, but this is not the only way to produce linear phase filters.
Example.

[Pole-zero diagram: reciprocal real zeros at z = r and z = 1/r, with a double pole at z = 0.]

H(z) = (z - r)(z - 1/r)/z² = 1 - (r + 1/r) z^{-1} + z^{-2} ⇒ h[n] = {1, -(r + 1/r), 1}.

H(ω) = 1 - (r + 1/r) e^{-jω} + e^{-j2ω} = e^{-jω} [e^{jω} + e^{-jω} - (r + 1/r)] = e^{-jω} [2 cos ω - (r + 1/r)].

The bracketed expression is real, so this filter has linear phase. Specifically, the phase response is:

∠H(ω) = -ω for 2 cos ω > r + 1/r, and -ω + π for 2 cos ω < r + 1/r.

Any pair of reciprocal real zeros contributes linear phase.
[Pole-zero diagram: complex zeros at q, 1/q, q*, and 1/q*, with a 4th-order pole at z = 0.]

H(z) = (z - q)(z - 1/q)(z - q*)(z - 1/q*)/z⁴ = 1 + b1 z^{-1} + b2 z^{-2} + b1 z^{-3} + z^{-4} = z^{-2} [z² + b1 z + b2 + b1 z^{-1} + z^{-2}].

Each set of 4 complex zeros at {q, 1/q, q*, 1/q*} contributes linear phase.
4.48
skip