LECTURE NOTES
ON
ELECTRONICS AND COMMUNICATION ENGINEERING
Downloaded from Jntumaterials.com
SIGNAL ANALYSIS
A sinusoidal alternating current is written as
i1 = I1m sin ωt    ...(1.1)
Fig. 1 (a) A sine wave a.c. (b) Its vector notation.
The same is usually represented by a vector, the length of the vector indicating the maximum value.
Another current, say i2, which lags the first current by a phase angle φ, is similarly denoted as:
i2 = I2m sin (ωt − φ)    ...(1.2)
Its vector notation is drawn below the first by the lagging angle φ.
This notation of sinusoidal signals by vectors is useful for easy addition and subtraction. The addition is done by the parallelogram law for vector addition.
Fig. 2
The total current (i1 + i2) lags behind the first current by the angle θ.
This method of representing waveforms of a.c. voltage or current by vectors
is also known as phasor representation.
The above vector represents a time-varying signal. There are also signals which come from image data; these are two-dimensional signals when a plane image is considered. A three-dimensional object can be taken as a three-dimensional vector signal.
In short, a vector of the kind used in geometrical applications can be used to indicate a time-varying sinusoidal signal, a fixed two-dimensional image, or a three-dimensional point.
But when a signal, such as arising from a vibration or a seismic
signal or even a voice signal, is to be represented, how can we use a
compact vector notation? Definitely not easy.
Such signals change from instant to instant in a manner which is not defined as in the pure sine wave. If we take the values from instant to instant and write them down, we will get a large number of values even for a short time interval. If there are a thousand points in one second of the signal, then there will be a thousand numbers which would represent the signal for that one second interval.
For example, a variation with time of a certain signal could be as in:
5 0 1 5, 7 4 3 2, 0 2.4 7, 5 3 1 0    ...(1.3)
These 16 points can be considered as a vector, i.e., a row vector of 16 points. Even for a pure sine wave, we can take points and we would have got, for example,
56, 98, 126, 135, 126, 98, 56, 7, 41, 83, 111, 120, 83, 418    ...(1.4)
This vector represents the samples or points on a sine wave of amplitude 128. But when we plot these points and look at them, it is easy to see that it is a sine signal.
In this case, the representation of the entire set of points is done by simply stating that the signal is a sine wave, denoted as A sin ωt, where A is the peak value and ω is the frequency.
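As a sketch of this sampling idea (assuming, as in the text, an amplitude of 128 and 16 samples over one period; the function name is ours), a sine-wave signal vector can be generated directly:

```python
import math

def sample_sine(amplitude, n_samples):
    """Sample one period of A*sin(wt) at n_samples equally spaced instants."""
    return [amplitude * math.sin(2 * math.pi * k / n_samples)
            for k in range(n_samples)]

# a row vector of 16 samples of a sine wave of amplitude 128
signal_vector = sample_sine(128, 16)
```

Plotting these values, or simply inspecting them, reveals the sine shape that the raw list of numbers hides.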
The analogy between the set of points of the signal and a vector of points is now clear. The purpose of visualising such an analogy is to mathematically perform operations on the signal which convert the signal information into a more useful form. A filter which operates on the signal is one such operation. A Fourier
transform which operates on the vector of data tells the frequencies present in the
signal.
As an example, we present below the signal obtained by nuclear magnetic resonance of a chemical, dichloroethane, from such a spectrometer instrument.
The sample points of the signal are shown in Fig. 4(b).
Fig. 4 (a) Shows the free induction decay signal of 1,2-dichloroethane and its Fourier transform spectrum, shown in terms of parts per million of the carrier frequency of the NMR instrument. Sweep width = 6024, spectrometer freq. 300, ref. shift 1352, ref. pt. 0, Acquisitions = 16.
Fig. 4 (b) Shows the actual signal points. These points form the vector. There are two signals: one is the in-phase detected signal and the other is the 90°-phase detected (quadrature) signal. These two are considered as a complex signal a + jb.
Like this, there are hundreds of examples in scientific and engineering fields which deal with the signal by taking the vector of samples and performing various mathematical operations on it; this is now known by the term Digital Signal Processing.
Fig. 5
Fig. 5 shows the fact that the pure sine wave and the points of a measured signal may deviate at some points. In a general case, where a signal may not be from a sine wave only, we would like to find a mathematical form of the signal in terms of known functions, like the sine wave, exponential wave and even a triangular wave (Fig. 6).
Fig. 6 Some standard waveforms: the sine wave sin ωt, the exponential, and the triangular wave.
These signals and many others are amenable for mathematical formulation to
represent and operate on the signal vector.
Thus, we have signals of the form
sin ωt, sin (ωt − φ)    ...(1.5)
cos ωt, cos (ωt − φ)    ...(1.6)
a sin (mωt) + b cos (nωt) (for various values of m and n),
e^(−t) sin (ωt)    ...(1.7)
kt for t = 0 to 1    ...(1.9)
kt for t = 1 to 2, and so on.    ...(1.10)
But the question arises about the error or differences between the vector points and
the function curve (Fig. 7).
In Fig. 7, the differences at the points are A1A2, B1B2, ...
To find the function that best fits the points is a mathematical problem. This is done by the principle which states that the best-fitting function is the one for which the sum of the squares of the errors, taken at all points of the vector, is a minimum.
Fig. 7 Points y1, y2, y3 of the signal at t1, t2, t3 and the curve of the fitted function; the errors at the points are A1A2, B1B2, ...
DERIVATION OF LEAST SQUARE ERROR PRINCIPLE
Given a vector of r points of time samples of a signal as (t1, y1), (t2, y2), (t3, y3), ..., (tr, yr), we find such a function f(t) which best fits this vector, where f(t) is expressed as a series:
f(t) = a0 + a1t + a2t² + ... + an·tⁿ
If the number of points r equals the number of coefficients, then the given r points can be substituted in the above equation and the resulting r equations solved. That will give the values of a0, a1, ..., an.
But if r is larger, then we have more points than a-values to determine. In that case f(t) will not pass through every point, and at each point there is an error εi = yi − f(ti). The sum of the squared errors is
P = Σ εi²    ...(1.13)
P is to be a minimum; that is, Σ εi² must be a minimum, which is the principle of the least square error.
The average of the sum of squared errors will also be a minimum; this is called the least mean square error.
ORTHOGONAL FUNCTIONS
A set of so-called orthogonal functions is useful to represent any signal vector. These functions possess the property that if you represent a set of points (or signal vector) by a combination of an orthogonal function set, then the error will be the least. In other words, using a set of orthogonal functions, we can represent a signal vector as closely as possible.
A familiar orthogonal set consists of a number of sine and cosine waves sin θ, sin 2θ, sin 3θ, ..., cos θ, cos 2θ, cos 3θ, ... etc. Here θ denotes the time variable, by the usual θ = ωt relationship.
Let us show how the sinusoidal function set is an orthogonal one and how it makes the MSE least.
Sine and Cosine Functions as an Orthogonal Set
Suppose we want to approximate a function f(x) or a set of discrete values yi by trigonometric functions. This can be done if the function is periodic of period T, so that
f(x + T) = f(x).
By a change of variable x1 = 2πx/T, the above becomes
f(x1 + 2π) = f(x1)
and the period is 2π. (Rewrite x1 as x again.)
Multiply f(x) by cos kx and integrate over one period: ∫ f(x) cos kx dx. But we know that the definite integrals over one period (−π to π) satisfy
∫ cos mx cos kx dx = 0 for m ≠ k    ...(1.14a)
∫ sin mx sin kx dx = 0 for m ≠ k    ...(1.14b)
∫ cos mx sin kx dx = 0 for all m, k
∫ cos² kx dx = π if k ≠ 0
             = 2π if k = 0
The above relations are what is meant by an orthogonal function set.
ak·π + 0 = ∫ f(x) cos kx dx, so that
ak = (1/π) ∫ f(x) cos kx dx, and a0 = (1/2π) ∫ f(x) dx (between the limits −π and π)    ...(1.15)
Similarly, multiplying by sin kx and integrating,
bk = (1/π) ∫ f(x) sin kx dx (between the same limits)    ...(1.16)
(1.15) and (1.16) are the coefficients of the well-known Fourier Series (see later in Chapter 2).
If the function is an odd function, f(−x) = −f(x), and so, from (1.15),
ak = ∫(−π to 0) f(x) cos kx dx + ∫(0 to π) f(x) cos kx dx
Put x = −x1 in the first integral; since cos k(−x1) = cos kx1 and f(−x1) = −f(x1),
ak = −∫(0 to π) f(x1) cos kx1 dx1 + ∫(0 to π) f(x) cos kx dx = 0
So if f(x) is odd, the ak terms are absent. Similarly, even functions, such that f(−x) = f(x), have no sine terms. Also, for odd functions,
bk = (1/π) ∫(−π to π) f(x) sin kx dx    ...(1.17)
   = (2/π) ∫(0 to π) f(x) sin kx dx if f(x) is odd
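These orthogonality relations can be checked numerically. The sketch below uses a simple midpoint-rule integrator (our helper, not from the text) to confirm that the cross products integrate to zero over (−π, π) while sin²(kx) integrates to π:

```python
import math

def integrate(f, a, b, n=20000):
    """Midpoint-rule numerical integration of f over (a, b)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# cross products of different harmonics vanish ...
cross = integrate(lambda x: math.sin(2 * x) * math.sin(3 * x), -math.pi, math.pi)
mixed = integrate(lambda x: math.cos(2 * x) * math.sin(2 * x), -math.pi, math.pi)
# ... while a harmonic against itself gives pi
same = integrate(lambda x: math.sin(3 * x) ** 2, -math.pi, math.pi)
```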
Example: A magnetising current waveform is sampled as below.
t   0   2   4    6   8   10   12   14   16   18   20    22   24
i   1   2   2.5  4   5    3    0   −3   −5   −4  −2.5   −2   −1
Find the least squares approximation by a trigonometric series of 3 terms (or, find the Fourier series up to the third harmonic).
We first choose the origin at t = 12, as we notice odd symmetry to the left and right of t = 12. We have to change the variable to x so that one period of 24 corresponds to 2π, and so
x = (t − 12)·(2π/24), i.e., x = 15(t − 12) degrees.
x   −180  −150  −120  −90  −60  −30   0   30   60   90   120   150   180
i     1     2    2.5    4    5    3   0   −3   −5   −4  −2.5    −2    −1
Notice that the function is odd, so that cosine terms are absent. The bk terms are got by:
x      i     sin x    sin 2x    sin 3x
0      0      0         0         0
30     3     0.5      0.866       1
60     5    0.866     0.866       0
90     4      1         0        −1
120   2.5   0.866    −0.866       0
150    2     0.5     −0.866       1
180    1      0         0         0
Σ i·sin kx   12.995    3.031     1.0
The integrals (1.17), for a discrete number of points, are to be interpreted as sums over the sample points.
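The tabulated product sums can be reproduced in a few lines. This sketch uses the sample values as reconstructed above (i = 0, 3, 5, 4, 2.5, 2, 1 at x = 0°, 30°, ..., 180°; the helper name is ours):

```python
import math

x_deg = [0, 30, 60, 90, 120, 150, 180]
i_val = [0, 3, 5, 4, 2.5, 2, 1]

def product_sum(k):
    """Sum of i * sin(k*x) over the tabulated half-period points."""
    return sum(i * math.sin(k * math.radians(x))
               for x, i in zip(x_deg, i_val))

S1, S2, S3 = product_sum(1), product_sum(2), product_sum(3)
```

S1 and S2 come out as 12.995 and 3.031, matching the table's sums.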
COMPLETE ORTHOGONAL SYSTEM OF FUNCTIONS
A set of orthogonal functions {φn(x)} is termed complete in the closed interval x ∈ [a, b] if, for every continuous function f(x), say, we can find the
error term between the actual function f and its equivalent representation in terms of these orthogonal functions as a square integral:
En = ∫ [f − (c1φ1 + c2φ2 + ... + cnφn)]² dx    ...(1.17)
As more terms cnφn are included, En will tend to zero. This means that when we use a large number of sine terms in a sine series, the equivalence to the actual function tends to be exact.
Completeness of a set of functions is given by the integral below, which should also tend to zero for large n.
Lt(n→∞) ∫(a to b) [f(x) − Σ(m=0 to n) cm·φm(x)]²·w(x) dx = 0    ...(1.18)
(meaning the difference between the original signal or function and its approximation in terms of a sum of sine, cosine or any orthogonal set of functions). This area is found only between x = a and x = b. The w(x) is a weighting function of x, which is used to limit the area value and enable the integral to tend to zero.
The above integral is called a Lebesgue integral.
Some well-known complete orthogonal sets are:
1. The trigonometric functions {sin(nx), cos(nx)}; the limits are −π to +π.
2. The Legendre polynomials {Pn(x)}.
3. The Bessel functions.
Thus all the above functions can be used to approximate a signal. We are usually
familiar with sine-cosine series of a signal.
Fig. 8 shows how the signal (a square wave) is approximated better and better by
adding more and more terms of the sine series; when n tends to be large, the
approximation is perfect and matches the square wave itself.
Fig. 8 (a) A square wave and its sinusoidal function approximation.
Fig. 8 (b) The average (DC component, 0.636) and the first, third and fifth harmonics of the square wave, plotted against t in seconds.
Fig. 8 (c) Four-component approximation f̃(t) of f(t) for the square wave signal, using the Fourier series.
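The behaviour shown in Fig. 8 can be sketched with the standard sine-series form of a unit square wave, (4/π)·Σ sin((2k+1)t)/(2k+1). This closed form is the classical result, used here as an illustration rather than quoted from the text:

```python
import math

def square_wave_approx(t, n_terms):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum over k of sin((2k+1)t) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )

few = square_wave_approx(math.pi / 2, 4)     # four components, as in Fig. 8(c)
many = square_wave_approx(math.pi / 2, 200)  # many more terms
```

At t = π/2 the four-term sum is about 0.92, while 200 terms come within a fraction of a percent of the true value 1, illustrating how the approximation improves with more terms.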
FITTING ANY POLYNOMIAL TO A SIGNAL VECTOR
E = Σ [f − (k1φ1 + k2φ2 + ...)]²    ...(1.19)
For a polynomial function, we can represent f(t) as any polynomial,
f(t) = a0 + a1t + a2t² + ... + an·tⁿ, up to n terms.
If the f(t) signal points, r in number, are just equal to the number of coefficients, then we get a unique value for each of the coefficients a0 to an, obtainable by a solution of the r equations, one for each point.
But if r > n, then we fit the polynomial function to the r points in the least squares sense.
Since E is to be a minimum,
∂E/∂a0 = ∂E/∂a1 = ∂E/∂a2 = ... = 0    ...(1.20)
Here, E = Σ (f(t) − Σk ak·t^k)²    ...(1.21)
Taking the partial derivatives, we get
−2 Σj [f(tj) − a0 − a1tj − a2tj² − ...]·tj^k = 0    ...(1.22)
This gives
a0 Σtj^k + a1 Σtj^(k+1) + a2 Σtj^(k+2) + ... = Σ yj·tj^k    ...(1.23)
Rewriting Σtj^k as Xk and Σ yj·tj^k as Yk, for every value of k we get, for k = 0 to r:
a0X0 + a1X1 + a2X2 + ... + arXr = Y0
a0X1 + a1X2 + a2X3 + ... + arXr+1 = Y1
..............................................    ...(1.24)
a0Xr + a1Xr+1 + a2Xr+2 + ... + arX2r = Yr
Here is a set of linear equations, which when solved gives a0, a1, ..., ar as the coefficients.
The polynomial which represents the time function f(t) is then
f(t) = a0 + a1t + a2t² + ... + ar·t^r.    ...(1.25)
If the functions used are orthogonal, we can more easily determine the coefficients.
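The normal equations (1.24) can be set up and solved directly. The sketch below builds the X and Y sums from illustrative data and solves the resulting linear system with NumPy (`polyfit_normal_equations` is our name, not the text's):

```python
import numpy as np

def polyfit_normal_equations(t, y, degree):
    """Solve eqs. (1.24): X entries are sums of t^k, Y entries sums of y*t^k."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    X = np.array([[np.sum(t ** (i + j)) for j in range(degree + 1)]
                  for i in range(degree + 1)])
    Y = np.array([np.sum(y * t ** i) for i in range(degree + 1)])
    return np.linalg.solve(X, Y)  # a0, a1, ..., an

t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.2, 2.9, 4.1, 5.0]
coeffs = polyfit_normal_equations(t, y, 1)
```

For this data the straight-line solution is a0 = 1.06, a1 = 0.99, which can be checked by hand from the sums Σt = 10, Σt² = 30, Σy = 15.2, Σty = 40.3.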
EXERCISE
Given a signal vector for different instants of time t:
t      3   4   5   6   7
f(t)   6   9  10  11  12
Find a straight-line approximation using the least squares principle.
Let y = f(t). Put x = t − 5.
Let us fit y = a0 + a1x.
x      y     xy    x²
−2     6    −12    4
−1     9     −9    1
 0    10      0    0
 1    11     11    1
 2    12     24    4
Σ      0     48    14    10
Setting ∂E/∂a0 = 0 and ∂E/∂a1 = 0 gives
−2 Σ (yj − a0 − a1xj) = 0
−2 Σ (yj − a0 − a1xj)·xj = 0
The above equations can be rewritten as
5a0 + a1 Σxj = Σyj
a0 Σxj + a1 Σxj² = Σxj·yj
Substituting values from the table above,
5a0 + 0 = 48
0 + 10a1 = 14
Thus y = 48/5 + (7/5)x = 48/5 + (7/5)(t − 5) = 13/5 + (7/5)t
This is the straight line approximation for the signal vector given.
Note that the approximation holds good with minimum total squared error between the points given (for the 5 time values) and the points on the straight line (at these 5 time values).
From the straight line approximation, we cannot find the actual points at any other time exactly.
er
INNER PRODUCT AND NORM OF A
t
ia
ls.
SIGNAL VECTOR
co
a
m
Let us consider the inner product of two signals x1(t), x2(t). We form the product at any t:
x1(t)·x2(t)
Then we find the area under this curve with t:
∫ x1(t)·x2(t) dt
Then, let this area span the entire range of real values of t, from −∞ to ∞, and divide by this range. This can only be done using limits:
Lt(T→∞) (1/2T) ∫(−T to T) x1(t)·x2(t) dt
Example: Let x1(t) = 1 and x2(t) = t, both defined over −1 < t < 1.
Let us find the inner product of the above two functions. This gives
(x2, x1) = ∫(−1 to 1) x2·x1* dt
In general, the conjugate of the first function is used for the inner product. In case the function is real, the conjugate is the same as the function itself.
(x2, x1) = ∫(−1 to 0) (t)(1) dt + ∫(0 to 1) (t)(1) dt = [t²/2] from −1 to 0 + [t²/2] from 0 to 1 = −1/2 + 1/2 = 0
Norm
If we do the operation of multiplication of a function x(t) by itself in forming the product integral, then we get the square of the norm, denoted as ||x||.
Example: Find the norms of the above two functions. The norm is got as the square root of the product integral.
||x1||² = ∫(−1 to 1) [1][1] dt = 2 (since the function is 1 between −1 < t < 1), so ||x1|| = √2.
Likewise,
||x2||² = ∫(−1 to 1) [t][t] dt = [t³/3] from −1 to 1 = 1/3 + 1/3 = 2/3
||x2|| = √(2/3).
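These inner products and norms can be checked numerically with a simple midpoint-rule integrator (our helper, not from the text):

```python
import math

def integrate(f, a, b, n=20000):
    """Midpoint-rule numerical integration of f over (a, b)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

x1 = lambda t: 1.0   # constant function on (-1, 1)
x2 = lambda t: t     # ramp function on (-1, 1)

inner = integrate(lambda t: x2(t) * x1(t), -1, 1)            # (x2, x1) = 0
norm1 = math.sqrt(integrate(lambda t: x1(t) ** 2, -1, 1))    # sqrt(2)
norm2 = math.sqrt(integrate(lambda t: x2(t) ** 2, -1, 1))    # sqrt(2/3)
```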
Example: There are two waveforms given below. Show that these time functions are orthogonal in the range 0 to 2.
x1(t) = 1 for 0 ≤ t ≤ 1, and 0 elsewhere
x2(t) = 1 for 1 ≤ t ≤ 2, and 0 elsewhere
The inner product vanishes for orthogonality:
∫(a to a+T) xm(t)·xn*(t) dt = 0 for all m and n except m = n.
So,
∫ x1(t)·x2(t) dt = ∫(0 to 1) (1)(0) dt + ∫(1 to 2) (0)(1) dt = 0
So, the two functions are orthogonal (in the range (0, 2)).
Example: There are four time functions, shown as waveforms in Fig. (a). Write down the function shown in Fig. (b) below as a sum of these orthogonal functions.
That the functions in Fig. (a) are orthogonal to each other in the range is clear because they do not overlap at all.
So, we can represent any other time function (in that range 0 to 4) using combinations of these four functions.
Fig. (a) Four non-overlapping unit pulses φ1(t), φ2(t), φ3(t), φ4(t), each of width 1 in the range T = 4. (b) The function x(t) to be approximated.
With x(t) ≈ c1φ1(t) + c2φ2(t) + c3φ3(t) + c4φ4(t), each coefficient is
Ck = ∫ x(t)·φk(t) dt / ∫ φk²(t) dt, taken over the interval where φk is non-zero.
C1 = 1/4, C2 = 3/8, C3 = 5/8, C4 = 7/8
Hence x(t) is approximated as
x(t) = (1/4)φ1(t) + (3/8)φ2(t) + (5/8)φ3(t) + (7/8)φ4(t)
Only if m and n are identical will the integral yield a value.
This means that each sine or cosine component (as in sin t, sin 2t, ... or cos t, cos 2t, ...) will be uniquely determinable for a given signal vector.
That was how, in the previous example, we obtained the three sine terms of the 3-term series representation (approximation) of the signal vector of the magnetising current waveform.
Such sets of orthogonal functions, other than the trigonometric ones, are also well known.
Orthogonal Polynomials
The given signal can be approximated by a function
f(t) = c0 + c1P1(t) + c2P2(t) + ...
where P1(t), P2(t), ... are certain polynomials in t which are called orthogonal polynomials. They are such that
Σ(t=0 to n) P1(t)·P2(t) = 0, and similarly Σ P2(t)·P3(t) = 0, Σ P1(t)·P3(t) = 0, etc.    ...(1.27)
In general, the coefficients are chosen so that Σj [yj − (c0 + c1P1 + c2P2 + ...)]² is a minimum.
The necessary condition, that the partial derivatives with respect to c0, c1, c2, ... be zero, yields
−2 Σ Pj(t)·[yj − c0 − c1P1(t) − ... − cjPj(t) − ...] = 0    ...(1.29)
Using the orthogonality relation, only the j-th term product sum does not vanish, so this becomes
Σ Pj(t)·yj − cj Σ Pj²(t) = 0, so that
cj = Σ yj·Pj(t) / Σ Pj²(t)    ...(1.30)
The cj values are independently determined, provided we know Pj(t) for all values of t. (We do not have to solve a set of simultaneous equations as in (1.24).)
It can be shown using algebraic methods that the Pj(t) satisfying the orthogonality relation are given by
P1(t) = 1 − 2t/n
P2(t) = 1 − 6t/n + 6t(t − 1)/[n(n − 1)]
P3(t) = 1 − 12t/n + 30t(t − 1)/[n(n − 1)] − 20t(t − 1)(t − 2)/[n(n − 1)(n − 2)]
P4(t) = 1 − 20t/n + 90t(t − 1)/[n(n − 1)] − 140t(t − 1)(t − 2)/[n(n − 1)(n − 2)]
        + 70t(t − 1)(t − 2)(t − 3)/[n(n − 1)(n − 2)(n − 3)]    ...(1.31)
These values are generally available, for different t and n, as tables. For 6 points, i.e., t = 0, 1, 2, 3, 4, 5, the values of P1(t), ..., P4(t) appear as in the table.
t     P1    P2    P3    P4
0      5     5     5     1
1      3    −1    −7    −3
2      1    −4    −4     2
3     −1    −4     4     2
4     −3    −1     7    −3
5     −5     5    −5     1
S     70    84   180    28
These numbers have to be divided by the respective top entries: 5 for P1, P2, P3 and 1 for P4, etc. For example, P2(3) = −4/5, P4(4) = −3/1.
The values ΣP1²(t) are given by 70/5², ΣP2²(t) = 84/5², ΣP3²(t) = 180/5², ΣP4²(t) = 28/1². In general, ΣPk²(t) = S/(term on top of column)².
Example: Using orthogonal polynomials, approximate the following simple signal vector to a third degree.
t   5   8   11   14   17   20
y   1   2    3    4    5    7
First we have to convert the variable t so that, instead of samples at 5, 8, 11, 14, ..., we have 0, 1, 2, 3, .... Put t′ = (t − 5)/3. Then t′ varies as 0, 1, 2, 3, 4, 5.
t′    y    P0    P1    P2    P3    y·P1    y·P2    y·P3
0     1     1     5     5     5      5       5       5
1     2     1     3    −1    −7      6      −2     −14
2     3     1     1    −4    −4      3     −12     −12
3     4     1    −1    −4     4     −4     −16      16
4     5     1    −3    −1     7    −15      −5      35
5     7     1    −5     5    −5    −35      35     −35
S    22                            −40       5      −5
C1 = ΣyP1 / ΣP1² = (−40/5) / (70/5²) = −20/7
C0 = ΣyP0 / ΣP0² = 22/6 = 3.67
C2 = ΣyP2 / ΣP2² = (5/5) / (84/5²) = 25/84
C3 = ΣyP3 / ΣP3² = (−5/5) / (180/5²) = −25/180
So the polynomial approximation to the signal vector is
f(t′) = 3.67 − (20/7)·P1(t′) + (25/84)·P2(t′) − (25/180)·P3(t′)    ...(1.32)
We can stop at just these terms as above. For further computations, the further Ps can be got from tables or from computer storage.
One main advantage of orthogonal functions is that the coefficients (c's) are independent, and so, if a fit has been made with an mth-degree polynomial in P and it is decided later to use a higher degree, giving more terms, only the additional coefficients are required to be calculated; those already calculated remain unchanged.
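The whole procedure of the worked example can be sketched in a few lines: build P1 to P3 from the formulas in (1.31) with n = 5, and compute each coefficient independently by cj = Σ y·Pj / Σ Pj². (Note that c0 then comes out as the plain mean Σy/6.)

```python
def P1(t, n):
    return 1 - 2 * t / n

def P2(t, n):
    return 1 - 6 * t / n + 6 * t * (t - 1) / (n * (n - 1))

def P3(t, n):
    return (1 - 12 * t / n + 30 * t * (t - 1) / (n * (n - 1))
            - 20 * t * (t - 1) * (t - 2) / (n * (n - 1) * (n - 2)))

tp = [0, 1, 2, 3, 4, 5]      # transformed time t' = (t - 5)/3
y = [1, 2, 3, 4, 5, 7]
n = 5

def coeff(P):
    """cj = sum(y * Pj) / sum(Pj^2), each coefficient found independently."""
    num = sum(yi * P(t, n) for t, yi in zip(tp, y))
    den = sum(P(t, n) ** 2 for t in tp)
    return num / den

c0 = sum(y) / len(y)
c1, c2, c3 = coeff(P1), coeff(P2), coeff(P3)
```

This reproduces c1 = −20/7, c2 = 25/84 and c3 = −25/180 without solving any simultaneous equations.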
LEGENDRE POLYNOMIALS
The general set of Legendre polynomials Pn(t) (for n = 0, 1, 2, ...) forms an orthogonal set.
Pn(t) = [1/(2^n·n!)]·(d^n/dt^n)(t² − 1)^n    ...(1.33)
Thus P0(t) = 1
P1(t) = t
P2(t) = (3/2)t² − 1/2    ...(1.34)
P3(t) = (5/2)t³ − (3/2)t, etc.
These are actually orthogonal, which can be verified by taking the products of any two of the above and integrating between −1 and +1:
∫(−1 to 1) Pm(t)·Pn(t) dt = 0 for m ≠ n    ...(1.35)
But if m = n, then the value of the integral becomes non-zero.
∫(−1 to 1) Pm(t)·Pm(t) dt = 2/(2m + 1)    ...(1.36)
Here
Ck = ∫(−1 to 1) f(t)·Pk(t) dt / ∫(−1 to 1) Pk²(t) dt = [(2k + 1)/2]·∫(−1 to 1) f(t)·Pk(t) dt
By a change of variable, the limits −1 to 1 can be adapted to any other range over which the signal exists.
Exercise: Find the Legendre-Fourier series of the square wave given below.
f(t) = −1 for −1 < t < 0, and f(t) = +1 for 0 < t < 1.
C0 = (1/2) ∫(−1 to 1) f(t) dt = 0
This first coefficient is the average of the function; it is zero.
C1 = (3/2) ∫(−1 to 1) t·f(t) dt = (3/2)·[∫(−1 to 0) (−t) dt + ∫(0 to 1) t dt]
   = (3/2)·(1/2 + 1/2) = 3/2
C2 = (5/2) ∫(−1 to 1) f(t)·[(3/2)t² − 1/2] dt
   = (5/2)·[ −∫(−1 to 0) ((3/2)t² − 1/2) dt + ∫(0 to 1) ((3/2)t² − 1/2) dt ] = 0
Here C2 multiplies an even polynomial, while the function has odd symmetry. So all the even-order coefficients are zero.
C3 = (7/2) ∫(−1 to 1) f(t)·[(5/2)t³ − (3/2)t] dt = 7 ∫(0 to 1) ((5/2)t³ − (3/2)t) dt
   = 7·(5/8 − 3/4) = −7/8
Hence the function, up to four terms, is
f(t) = (3/2)t − (7/8)·[(5t³ − 3t)/2] + ...
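The Legendre-Fourier coefficients of the square wave can be reproduced numerically from the polynomials in (1.34) and the rule Ck = [(2k + 1)/2]·∫ f·Pk dt (midpoint-rule integration; the helper names are ours):

```python
def legendre_P(k, t):
    """P0..P3 from eq. (1.34)."""
    return [1.0, t, 1.5 * t ** 2 - 0.5, 2.5 * t ** 3 - 1.5 * t][k]

def integrate(f, a, b, n=20000):
    """Midpoint-rule numerical integration of f over (a, b)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: 1.0 if t > 0 else -1.0  # the square wave of the exercise

C = [(2 * k + 1) / 2 * integrate(lambda t: f(t) * legendre_P(k, t), -1, 1)
     for k in range(4)]
```

C0 and C2 vanish by odd symmetry, while C1 = 3/2 and C3 = −7/8 match the worked values above.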
OTHER ORTHOGONAL FUNCTION SETS
A class of signals, called wavelets, are also suitable for signal vector approximation by functions. The commonly used wavelets are
1. Daubechies wavelet
2. Morlet wavelet
3. Gaussian wavelet
4. Symlet
5. Mexican Hat wavelet
Gaussian wavelet: It is not completely orthogonal. It is given by
f(x) = Cn·diff(exp(−x²), n)    ...(1.40)
where diff denotes the nth symbolic derivative; Cn is such that the 2-norm of the Gaussian wavelet ψ(x, n) is 1.
Mexican Hat wavelet: This is the function
ψ(x) = c·(1 − x²)·exp(−x²/2), where c = 2/(π^(1/4)·√3).
This is also not fully orthogonal. Coiflet and Symlet wavelets are orthogonal.
Application
Nowadays, a large number of applications make use of signal processing of data vectors using the several wavelets. Wavelet-based approximation helps in de-noising the signal or in compressing the signal with minimal data for storage. Feature extraction from unknown signals is possible using wavelet approximations.
Complex Functions
Fourier transforms involve complex numbers, so we need to do a quick review. A complex number z = a + jb has two parts, a real part a and an imaginary part jb, where j is the square root of −1. A complex number can also be expressed using complex exponential notation and Euler's equation:
z = a + jb = A·e^(jφ) = A[cos(φ) + j sin(φ)]    ...(1.42)
where A is called the amplitude and φ is called the phase. We can express the complex number either in terms of its real and imaginary parts or in terms of its amplitude and phase, and we can go back and forth between the two:
a = A cos(φ), b = A sin(φ)    ...(1.43)
A = √(a² + b²), φ = tan⁻¹(b/a)
Euler's equation, e^(jφ) = cos(φ) + j sin(φ), is one of the magical wonders of mathematics. It relates the exponential to the trigonometric functions. With it, multiplying two complex numbers is easy:
z1·z2 = A1 e^(jφ1) · A2 e^(jφ2) = A1·A2·e^(j(φ1 + φ2))
so that the amplitudes multiply and the phases add. If you were instead to do the multiplication using real-and-imaginary a + jb notation, you would get four terms that you could write using sin and cos notation, but in order to simplify it you would have to use all those trig identities that you forgot after graduating from high school. That is why complex exponential notation is so widespread.
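A two-line check of the "amplitudes multiply, phases add" rule (with arbitrary illustrative values):

```python
import cmath

z1 = cmath.rect(2.0, 0.3)  # A1 = 2, phi1 = 0.3 rad
z2 = cmath.rect(3.0, 0.5)  # A2 = 3, phi2 = 0.5 rad
product = z1 * z2

amplitude = abs(product)        # should be A1 * A2 = 6
phase = cmath.phase(product)    # should be phi1 + phi2 = 0.8 rad
```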
Complex Signals
A complex signal has a real part and an imaginary part:
f(t) = fr(t) + j·fi(t)    ...(1.44)
The example in Fig. 4(a) showed the two components of the NMR signal, which are fr(t) and fi(t). You may wonder how the NMR instrument can give an imaginary signal! When the signal is detected synchronously with a sine wave carrier, we get the imaginary part, just as we get the real part with a cosine wave (90° phase-shifted) carrier detection.
Complex Conjugate Signal
f*(t) = fr(t) − j·fi(t)    ...(1.45)
Real and imaginary parts can be found from the signal and its conjugate:
fr(t) = (1/2)·[f(t) + f*(t)]
fi(t) = (1/2j)·[f(t) − f*(t)]    ...(1.46)
(Figure: the signal f and its conjugate f* in the complex plane, with real part fr along Re and imaginary part fi along Im.)
Phase angle: ∠f(t) = tan⁻¹[fi(t)/fr(t)]    ...(1.48)
Example: Consider the complex exponential signal
f(t) = exp(jω0t)
Real and imaginary parts: exp(jω0t) = cos(ω0t) + j sin(ω0t)    ...(1.49)
Phase angle: ∠f(t) = ω0t    ...(1.52)
Complex conjugate signal: f*(t) = exp(−jω0t)    ...(1.53)
(Figure: the phasor exp(jω0t) on the unit circle in the complex plane, shown at t = 0 and at t = t1, where it has turned through the angle ω0t1.)
ORTHOGONALITY IN COMPLEX FUNCTIONS
So far we considered real functions. But, just as a phasor of an A.C. current can be considered as a complex variable, e^(jωt) = cos ωt + j sin ωt is also a complex function, and we can have functions of e^(jωt) as f(e^(jωt)).
Two complex functions f1(t) and f2(t) are said to be orthogonal if the integral of their product (with one of them conjugated) is zero over the interval t1 to t2, say, as per:
∫(t1 to t2) f1(t)·f2*(t) dt = 0    ...(1.54)
∫(t1 to t2) f2(t)·f1*(t) dt = 0    ...(1.55)
In general, when we have a set of complex orthogonal functions gm(t),
∫ gm(t)·gn*(t) dt = 0 for m ≠ n    ...(1.56)
But if m = n,
∫ gm(t)·gm*(t) dt = Km    ...(1.57)
In this case, any given time function f(t), which itself may be complex, can be expressed as a series in terms of the g-functions:
f(t) = C0 + C1·g1(t) + C2·g2(t) + C3·g3(t) + ... etc.    ...(1.58)
Here we evaluate the CK coefficients in the above by
CK = (1/KK) ∫(t1 to t2) f(t)·gK*(t) dt    ...(1.59)
Note that the product in the above integral makes use of the conjugate of g(t).
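For the complex exponential set gm(θ) = e^(jmθ) on (0, 2π) (the set used again in Exercise 8 below), the orthogonality integral can be evaluated numerically; `inner` is our helper name:

```python
import cmath
import math

def inner(m, n, steps=20000):
    """Numerical integral of g_m * conj(g_n) over one period,
    with g_m(theta) = exp(j*m*theta)."""
    h = 2 * math.pi / steps
    return sum(cmath.exp(1j * m * (k + 0.5) * h)
               * cmath.exp(-1j * n * (k + 0.5) * h)
               for k in range(steps)) * h

off_diag = inner(2, 5)  # m != n: should vanish
diag = inner(3, 3)      # m == n: should give the period, 2*pi
```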
PERIODIC, APERIODIC, AND ALMOST PERIODIC SIGNALS
A signal is periodic if it repeats itself exactly after a fixed length of time.
f(t + T) = f(t) for all t, where T is the period    ...(1.60)
Example: Complex Exponential Function
f(t) = exp(jω0t) = exp(jω0(t + T)) with T = 2π/ω0    ...(1.61)
The complex exponential function repeats itself after one complete rotation of the phasor.
Example: Sinusoids
u M
n t
sin
0 )
(0t =
(0 t
=
sin
cos
(0 (t T) 0 )
(0 (t T) 0 )
J
cos
0 )
T = 2 / 0 ...(1.62)
1
Downloaded from Jntumaterials.com
SIGNAL ANALYSIS 33
Note: sin(φ) = (1/2j)·[exp(jφ) − exp(−jφ)]
cos(φ) = (1/2)·[exp(jφ) + exp(−jφ)]    ...(1.63)
sin(ω0t + φ0) = cos(ω0t + φ0 − π/2)
(Figure: a periodic wave of period T, shown from −2T to 3T.)
A signal is nonperiodic or aperiodic if there is no value of T such that f(t + T) = f(t).
SINGULARITY FUNCTIONS
Singularity functions have simple mathematical forms, but they are either not finite everywhere or they do not have finite derivatives of all orders everywhere.
∫(a to b) f(t)·δ(t − t0) dt = f(t0) if a < t0 < b
                            = 0 elsewhere    ...(1.64)
Area: δ(t) has unit area, since for f(t) = 1:
∫(a to b) δ(t − t0) dt = 1, a < t0 < b    ...(1.65)
Amplitude:
δ(t − t0) = 0 for all t ≠ t0
          = undefined for t = t0    ...(1.66)
UNIT STEP FUNCTION AND RELATED FUNCTIONS
u(t) = 1 for t ≥ 0
     = 0 for t < 0    ...(1.67)
Gate Function
rect(t/τ) = 1 for |t| < τ/2
          = 0 for |t| > τ/2    ...(1.68)
          = u(t + 0.5τ) − u(t − 0.5τ)
          = u(t + 0.5τ)·u(−t + 0.5τ)
Signum Function
sgn(t) =  1 for t > 0
       =  0 for t = 0
       = −1 for t < 0    ...(1.69)
       = 2u(t) − 1
Triangular Function
Δ(t/τ) = 1 − |t|/τ for |t| ≤ τ
       = 0 for |t| > τ    ...(1.70)
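The step-related functions above translate directly into code. A sketch (the convention u(0) = 1 follows (1.67); sgn(0) = 0 follows (1.69)):

```python
def u(t):
    """Unit step, eq. (1.67)."""
    return 1.0 if t >= 0 else 0.0

def rect(t, tau=1.0):
    """Gate function, eq. (1.68): 1 inside |t| < tau/2, 0 outside."""
    return 1.0 if abs(t) < tau / 2 else 0.0

def sgn(t):
    """Signum function, eq. (1.69)."""
    return float((t > 0) - (t < 0))

def tri(t, tau=1.0):
    """Triangular function, eq. (1.70)."""
    return 1 - abs(t) / tau if abs(t) <= tau else 0.0
```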
Graphical Representation
The area of the impulse is designated by a quantity in parentheses beside the arrow and/or by the height of the arrow. An arrow pointing down denotes a negative area.
(Figure: an impulse A·δ(t − t0) drawn as an arrow of area (A) at t = t0.)
The impulse can be obtained as the limit of various unit-area pulses:
δ(t) = lim(τ→0) (1/τ)·rect(t/τ)    (Gate function)    ...(1.71)
(Figure: a rectangular pulse of width τ, height 1/τ, area = 1.)
δ(t) = lim(τ→0) (1/τ)·Δ(t/τ)    (Triangular function)    ...(1.72)
δ(t) = lim(τ→0) (1/τ)·exp(−2|t|/τ)    (Two-sided exponential)    ...(1.73)
δ(t) = lim(τ→0) (1/τ)·exp(−π(t/τ)²)    (Gaussian pulse)    ...(1.74)
δ(t) = lim(τ→0) (1/τ)·[sin(πt/τ)/(πt/τ)]    (Sine-over-argument)    ...(1.75)
Time scaling: δ(at) = (1/|a|)·δ(t)
(Figure: a pulse of width τ and height 1/τ has area 1; compressed to width τ/|a|, its area is 1/|a|.)
Multiplication by a Time Function
f(t)·δ(t − t0) = f(t0)·δ(t − t0), f(t) continuous at t0    ...(1.78)
Relation to the Unit Step Function
(d/dt) A·u(t − t0) = A·δ(t − t0)    ...(1.79)
(Figure: A·u(t − t0) and its derivative, an impulse of area (A) at t = t0.)
Every function f(t) can be split into an even and an odd part:
fe(t) = (1/2)·[f(t) + f(−t)], fo(t) = (1/2)·[f(t) − f(−t)]    ...(1.82)
For the unit step u(t) (see figure), the even part is fe(t) = 1/2 for all t and the odd part is fo(t) = ±1/2:
u(t) = [u(t)/2 + u(−t)/2] + [u(t)/2 − u(−t)/2] = fe(t) + fo(t)    ...(1.83)
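The even/odd split can be written generically; applying it to the unit step reproduces fe(t) = 1/2 and fo(t) = ±1/2 away from t = 0:

```python
def even_part(f):
    """fe(t) = (f(t) + f(-t)) / 2, eq. (1.82)."""
    return lambda t: 0.5 * (f(t) + f(-t))

def odd_part(f):
    """fo(t) = (f(t) - f(-t)) / 2, eq. (1.82)."""
    return lambda t: 0.5 * (f(t) - f(-t))

u = lambda t: 1.0 if t >= 0 else 0.0
fe, fo = even_part(u), odd_part(u)
```

By construction, fe(t) + fo(t) gives back u(t) at every t.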
EXERCISES
1. A sine wave signal of 1 kHz is measured for 2 ms at intervals of 0.125 ms. Find the signal vector, if the amplitude is 1 volt.
   [0, 0.707, 1, 0.707, 0, −0.707, −1, −0.707, 0, 0.707, 1, 0.707, 0, −0.707, −1, −0.707, 0]
2. In a 5×7 matrix, the number 2 is represented by dots as shown. Write down the signal vector.
   [01100, 00010, 00001, 00010, 00100, 01000, 11111]
3. If a square wave is approximated by a sine wave (of the same period and amplitude), find the mean square error.
   [Hint: evaluate (1/2π) ∫ (1 − sin θ)² dθ]
   [f(t) = 2.3335 + 11.6713t]
5. Find the first two sinusoidal components of the signal vector's approximation to a Fourier series, using the least squares principle.
   θ      30    60    90    120   150   180   210   240   270   300   330   360
   Value  3.5   6.09  7.82  8.43  7.73  6.98  6.19  6.04  5.55  5.01  3.35  8.58
   [12.545 − 1.363 cos θ − 0.936 cos 2θ + 1.97 sin θ + 0.235 sin 2θ]
6. Find the Legendre-Fourier series for the periodic function shown, for the first 3 terms.
   [0.5 + (3/4)P1(t) − (1/8)P2(t) + ...]
7. Show that P1(t) = t − 1 and the quadratic polynomial P2(t) shown in the figure, over the range t = 1 to 3, form an orthogonal set.
8. Show (by integrating the product of the function and its conjugate) that the complex function set e^(jmθ) is orthogonal. The limits are 0 to 2π.
   Hint: evaluate ∫(0 to 2π) e^(jmθ)·e^(−jnθ) dθ = ∫ gm(θ)·gn*(θ) dθ as per the formula.
10. Expand the function y(t) = sin 2πt in terms of the above functions and find the error in the approximation.
    [(2/π²)·x2(t) only; 0.0947]
Fourier Series
We have already seen that the Fourier transform is important. For an LTI system with input x(t) = e^(j2πft), the complex number determining the output y(t) = H(f)·e^(j2πft) is given by the Fourier transform of the impulse response:
H(f) = ∫ h(t)·e^(−j2πft) dt
Well, what if we could write arbitrary inputs as superpositions of complex exponentials, i.e. via sums or integrals of the following kind:
x(t) = Σk ak·e^(j2πfk·t)
Then notice, outputs of LTI systems y(t) will always take the form
y(t) = Σk ak·H(fk)·e^(j2πfk·t)
Proposition 1.1. Let x(t) be period with period T, so that the frequencies = = 0 , and
o m
c
2
() = - SYNTHESIS EQUATION
= 20
ls
Then, () = ( ), and
ir a
1
= 0 () 2 - ANALYSIS EQUATION
1 2
= () 20
e
2
Jn
Proof: Use the property that
t
tu
m
()
at
2
= [ ]
a
er
ia
0
ls.
Then we have
co
m
() 2 = 2 2
0
u M 0
2
()
t
=
0
J
Fourier series coefficients .
Example 1.1.
n = [ ]
OK, so how do we use this. Well, for periodic signals with period T, then we just have to evaluate the
1
SIGNAL ANALYSIS () = 20 , where 0 = 42
Example 1.2. Let () = 1, [ 2 , 2], and 0 otherwise. Then
() = () 2
2
= 2
2
2 | 2
2
=
2
=
2
m
sin()
=
() = 20 where 0 =
1
Let ( ) = (), [ 2 , 2]. Then,
. c o
ls
1 sin(0 ) sin( )
= (0 ) = =
0
OK, so we see that the Fourier transform can be used to define the Fourier series. Now what we would like
ir a
to do is understand how to represent the periodic signals when the period goes to infinity , so that
we can have a synthesis pair. Lets remind ourselves that () is the periodic version of x(t), where x(t)
has finite support () = 0, || 2 .
te
Jn
Proposition 1.3. Let () be periodic with period T, and () = (). Then
tu
m
2
at
() = ()
a
er
ia
To see this,
ls.
co
() = lim () = lim 20
m
M
1
t u
= lim (0 ) 20
= lim (0 ) 20
J n
= lim (0 ) 20 0
0
= () 2
Downloaded from Jntumaterials.com
1.1 Fourier transform
We have already seen that the Fourier transform is important. For an LTI system with input x(t) = e^{j2πft}, the complex number H(f) determining the output y(t) = H(f) e^{j2πft} is given by the Fourier transform of the impulse response:

    H(f) = ∫ h(t) e^{-j2πft} dt

Well, what if we could write arbitrary inputs as superpositions of complex exponentials, i.e., via sums or integrals of the following kind:

    x(t) = Σ_n a_n e^{j2πf_n t}

Then notice that the outputs y(t) of LTI systems will always take the form

    y(t) = Σ_n a_n H(f_n) e^{j2πf_n t}
Proposition 1.1. Let x(t) be periodic with period T, so that the frequencies are f_n = n/T = n f₀, and

    x(t) = Σ_n a_n e^{j2πnf₀t}    (SYNTHESIS EQUATION)

Then y(t) = Σ_n a_n H(nf₀) e^{j2πnf₀t}, and

    a_n = (1/T) ∫₀^T x(t) e^{-j2πnf₀t} dt    (ANALYSIS EQUATION)
Proof: Use the property that

    (1/T) ∫₀^T e^{j2πkf₀t} dt = δ[k]    (equal to 1 for k = 0, and 0 otherwise)

Then we have

    (1/T) ∫₀^T x(t) e^{-j2πnf₀t} dt = Σ_m a_m (1/T) ∫₀^T e^{j2πmf₀t} e^{-j2πnf₀t} dt
                                    = Σ_m a_m δ[m - n]
                                    = a_n

OK, so how do we use this? Well, for periodic signals with period T, we just have to evaluate the Fourier series coefficients a_n.
Example 1.1. Take φ_n(t) = e^{j2πnf₀t}, where f₀ = 1/T.
1. x(t) = constant: then a₀ = constant and a_n = 0 for n ≠ 0, for any period T.
2. x(t) = e^{j2πf₀t}: then T = 1/f₀, a₁ = 1, and a_n = 0 for n ≠ 1.
3. x(t) = cos(2πf₀t): then T = 1/f₀, a₁ = a₋₁ = 1/2, and a_n = 0 for n ≠ ±1.
4. x(t) = sin(2πf₀t): then T = 1/f₀, a₁ = 1/(2j), a₋₁ = -1/(2j), and a_n = 0 for n ≠ ±1.
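The analysis equation can be evaluated numerically to confirm entry 3 of Example 1.1. This is a sketch; the period T and the integration grid are arbitrary choices:

```python
import numpy as np

T = 2.0                                  # period (an arbitrary choice)
f0 = 1.0 / T
t = np.linspace(0.0, T, 4001)
x = np.cos(2 * np.pi * f0 * t)

def coeff(n):
    # Analysis equation: a_n = (1/T) * integral over one period of
    # x(t) e^{-j 2 pi n f0 t}, here by the trapezoid rule.
    g = x * np.exp(-1j * 2 * np.pi * n * f0 * t)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)) / T

# Entry 3 of Example 1.1: a_1 = a_{-1} = 1/2, all other a_n = 0.
assert abs(coeff(1) - 0.5) < 1e-6
assert abs(coeff(-1) - 0.5) < 1e-6
assert abs(coeff(0)) < 1e-6
assert abs(coeff(2)) < 1e-6
```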
Example 1.2. Let x(t) = 1 for t ∈ [-τ/2, τ/2], and 0 otherwise. Then

    X(f) = ∫ x(t) e^{-j2πft} dt
         = ∫_{-τ/2}^{τ/2} e^{-j2πft} dt
         = e^{-j2πft} / (-j2πf) |_{-τ/2}^{τ/2}
         = (e^{jπfτ} - e^{-jπfτ}) / (j2πf)
         = sin(πfτ) / (πf)

Now let x̃(t) be periodic with period T, with x̃(t) = x(t) for t ∈ [-T/2, T/2], so that x̃(t) = Σ_n a_n e^{j2πnf₀t}, where f₀ = 1/T. Then

    a_n = (1/T) X(nf₀) = sin(πnf₀τ) / (πnf₀T) = sin(πnτ/T) / (πn)
OK, so we see that the Fourier transform can be used to define the Fourier series. Now what we would like to do is understand how to represent periodic signals when the period goes to infinity, T → ∞, so that we can have a synthesis pair. Let us remind ourselves that x̃(t) is the periodic version of x(t), where x(t) has finite support: x(t) = 0 for |t| > T/2.
Proposition 1.3. Let x̃(t) be periodic with period T, with x̃(t) = x(t) on one period. Then

    x(t) = ∫_{-∞}^{∞} X(f) e^{j2πft} df

To see this,

    x(t) = lim_{T→∞} x̃(t) = lim_{T→∞} Σ_n a_n e^{j2πnf₀t}
         = lim_{T→∞} Σ_n (1/T) X(nf₀) e^{j2πnf₀t}
         = lim_{T→∞} Σ_n X(nf₀) e^{j2πnf₀t} f₀
         = lim_{f₀→0} Σ_n X(nf₀) e^{j2πnf₀t} f₀
         = ∫_{-∞}^{∞} X(f) e^{j2πft} df

where the last step recognizes the sum as a Riemann sum with spacing f₀ → 0.
Fourier transform properties (for x(t) ⟷ X(ω)):

Integration:                    ∫_{-∞}^t x(τ) dτ  ⟷  (1/(jω)) X(ω) + π X(0) δ(ω)
Differentiation in frequency:   t x(t)  ⟷  j dX(ω)/dω
Conjugate symmetry for          x(t) real  ⟹  X(-ω) = X*(ω),
real signals:                     Re{X(ω)} = Re{X(-ω)},  Im{X(ω)} = -Im{X(-ω)},
                                  |X(ω)| = |X(-ω)|,  ∠X(-ω) = -∠X(ω)
Symmetry for real and           x(t) real and even  ⟹  X(ω) real and even
even signals:
Symmetry for real and           x(t) real and odd  ⟹  X(ω) purely imaginary and odd
odd signals:
Even-odd decomposition          Ev{x(t)}  ⟷  Re{X(ω)}
for real signals:               Od{x(t)}  ⟷  j Im{X(ω)}
ECE 3640
Lecture 6: Sampling
Objective: To learn and prove the sampling theorem and understand its implications.
We have already encountered the sampling theorem and, arguing purely from a trigonometric-identity point of view, have established the Nyquist sampling criterion for sinusoidal signals. However, we have not fully addressed the sampling of more general signals, nor provided a general proof, nor indicated how to reconstruct a signal from its samples. With the tools of Fourier transforms and Fourier series available to us, we are now ready to finish the job that was started months ago.
To begin with, suppose we have a signal x(t) which we wish to sample. Let us suppose further that the signal is bandlimited to B Hz. This means that its Fourier transform is nonzero only for -2πB < ω < 2πB. (Plot the spectrum.) We will model the sampling process as multiplication of x(t) by the picket-fence function

    δ_T(t) = Σ_n δ(t - nT)
We encountered this periodic function when we studied Fourier series. Recall that by its Fourier series representation we can write

    δ_T(t) = (1/T) Σ_n e^{jnω_s t}
Plot the spectrum of the sampled signal on both the ω-frequency and f-frequency axes. Observe that the spectrum consists of copies (images) of the original spectrum, repeated every ω_s. To recover the original spectrum, the answer is to filter the signal with a lowpass filter with cutoff ω_c = 2πB. This cuts out the images and leaves us with the original spectrum. This is a sort of idealized point of view, because it assumes that we are filtering a continuous-time function x̄(t), which is a sequence of weighted delta functions. In practice, we have numbers x[n] representing the values of the function, x[n] = x(nT) = x(n/f_s). How can we recover the time function from this?
Theorem 1 (The sampling theorem)
If x(t) is bandlimited to B Hz, then it can be recovered from samples taken at a sampling rate f_s > 2B. The recovery formula is

    x(t) = Σ_n x(nT) g(t - nT)

where

    g(t) = sin(πf_s t) / (πf_s t) = sinc(f_s t)
Show what the formula means: we are interpolating in time between samples using the sinc function. We will prove this theorem. Because we are actually lacking a few theoretical tools, it will take a bit of work. What makes this interesting is that we will end up using, in a very essential way, most of the transform ideas we have talked about.
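The recovery formula can be exercised directly on a bandlimited test signal. A sketch: the test tones, the sampling rate, and the finite truncation of the (in principle infinite) sample train are all illustrative assumptions:

```python
import numpy as np

B = 10.0                       # bandlimit in Hz (illustrative)
fs = 4 * B                     # sample above the Nyquist rate 2B
T = 1.0 / fs

def x(t):
    # any signal bandlimited to B: here tones at 7 Hz and 3 Hz
    return np.sin(2 * np.pi * 7.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

n = np.arange(-2000, 2001)     # finite slice of the infinite sample train
xn = x(n * T)

def reconstruct(t):
    # Recovery formula: x(t) = sum_n x(nT) sinc(fs (t - nT)),
    # where np.sinc(u) = sin(pi u) / (pi u).
    return float(np.sum(xn * np.sinc(fs * (t - n * T))))

for t0 in (0.013, 0.27, -0.41):
    assert abs(reconstruct(t0) - x(t0)) < 1e-2
```

The residual error comes only from truncating the sample train; with infinitely many samples the interpolation is exact for a bandlimited signal.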
1. The first step is to notice that the spectrum of the sampled signal is

       X̄(ω) = (1/T) Σ_n X(ω - nω_s)

2. Since X̄(ω) is periodic in ω with period ω_s, expand it in a Fourier series in ω. The coefficient integral is just the inverse F.T., evaluated at t = nT:

       c_n = (1/ω_s) ∫_{⟨ω_s⟩} X̄(ω) e^{jnωT} dω = x(nT)

   so

       X̄(ω) = Σ_n x(nT) e^{-jnωT}
   Next, note that the interpolating pulse g(t) = sinc(f_s t) has the transform

       g(t) ⟷ T rect(ω / (2πf_s))

3. Let

       y(t) = Σ_n x(nT) g(t - nT)

   By the shifting property,

       Y(ω) = Σ_n x(nT) T rect(ω/(2πf_s)) e^{-jnωT} = T rect(ω/(2πf_s)) Σ_n x(nT) e^{-jnωT}

   Observe that the summation on the right is the same as the F.S. derived above:

       Y(ω) = T rect(ω/(2πf_s)) X̄(ω)

   Now substituting in the spectrum of the sampled signal (derived above),

       Y(ω) = T rect(ω/(2πf_s)) (1/T) Σ_n X(ω - nω_s) = X(ω)
10/3/13 From Continuous Fourier Transform to Laplace Transform

Some applications

Why digital?
1. Recoverable signals
2. Flexibility
3. Channel coding theorem. Source coding theorem.
4. Encryption

Time division multiplexing
Pulse code modulation
Spectral sampling

Just as we can sample a band-limited signal in the time domain and reconstruct it, provided that we sample often enough, so can we also sample a time-limited signal in the frequency domain and exactly reconstruct the spectrum, provided that we take the samples close enough together in the spectrum. There is thus a dual to the sampling theorem which applies to sampling the spectrum. We can also use this kind of thinking as follows. Let f(t) be time-limited, with

    F(ω) = ∫₀^{T₀} f(t) e^{-jωt} dt

and let D_n be the Fourier series coefficients of the T₀-periodic extension of f(t):

    D_n = (1/T₀) ∫₀^{T₀} f(t) e^{-jnω₀t} dt

Comparing the F.S. coefficient with the F.T. above, it follows that
fourier.eng.hmc.edu/e102/lectures/Laplace_Transform/node1.html 1/5
    D_n = (1/T₀) F(nω₀)

An implication of this is that we can find the F.S. coefficients by first taking the F.T. of one period of our signal, then sampling and scaling the F.T.
In terms of reconstructing the signal spectrum from its samples, we can see that as long as T₀ > τ, where τ is the duration of f(t), the cycles of f(t) do not overlap. We can then (at least in concept) reconstruct the entire spectrum F(ω) from its samples. Conceptually, we time-limit the function, then take its F.T. The sampling condition can be expressed as follows:

    T₀ > τ    ⟺    F₀ < 1/τ

So we can reconstruct from samples of the spectrum, provided that the samples are close enough together by comparison with the time-limit of the function.
Example: Find the Fourier transform of the following signal z(t) [assume the sinusoid has the shape of cos(6πt)].

[Figure: z(t), a composite waveform with levels 5, 4, 2, 1, -2 marked on the vertical axis and the points -5, -4, -3, 1, 4, 7 marked on the t-axis]
Solution: We need to decompose the signal z(t) into simpler signals that allow us to apply the FT to each component independently and then add the individual FTs to get the overall FT of z(t). We can easily see that z(t) decomposes into the signals z1(t), z2(t), and z3(t) shown below.
[Figures: z1(t), z2(t), and z3(t), each drawn on the same axes as z(t)]
The signal z1(t) is itself composed of two components: a triangle function with amplitude 2, centered at t = -4, with a width of 2 seconds; and a rect function that also has amplitude 2, is centered at t = -4, and has a width of 2 seconds. (Note that the triangle is NOT added to a constant of 2, since that would result in a constant of 2 everywhere with a triangular bump around t = -4, which is different from what we have here.) Therefore,

    z1(t) = 2Δ((t + 4)/2) + 2 rect((t + 4)/2)
The signal z2(t) is basically a cosine function [cos(6πt), as given in the problem] that is limited to between t = -3 and t = +1. Limiting a signal can be achieved by multiplying the unlimited signal (the cosine function in this case, which extends from -∞ to +∞) by a rect function that covers the range over which we would like to limit the signal. The center of that range is t = -1 and its duration is 4 seconds, so we multiply the cosine by a rect centered at t = -1 with a width of 4 s. Therefore,

    z2(t) = 2 cos(6πt) rect((t + 1)/4)
The signal z3(t) is similar to z2(t), except that a triangle function centered at t = 4, with a width of 6 s and amplitude 3, has been added to it. Note that the triangle is added, not multiplied: this can be seen from the upper and lower envelopes of the cosine (the blue lines), which move in parallel (when one increases, the other also increases, and vice versa). Therefore,

    z3(t) = 2 cos(6πt) rect((t - 4)/6) + 3Δ((t - 4)/6)

So,

    z(t) = z1(t) + z2(t) + z3(t)
         = 2Δ((t + 4)/2) + 2 rect((t + 4)/2) + 2 cos(6πt) rect((t + 1)/4)
           + 2 cos(6πt) rect((t - 4)/6) + 3Δ((t - 4)/6)
Using the linearity property of the FT,

    Z(ω) = Z1(ω) + Z2(ω) + Z3(ω)
    Z1(ω) = 2·(2/2) sinc²(2ω/4) e^{j4ω} + 2·2 sinc(2ω/2) e^{j4ω}
          = 2 sinc²(ω/2) e^{j4ω} + 4 sinc(ω) e^{j4ω}

Using FT 13 and FT properties 7 and 13 in the table of the previous lecture.
    Z2(ω) = (1/2)[2·4 sinc(2(ω - 6π)) e^{j(ω-6π)} + 2·4 sinc(2(ω + 6π)) e^{j(ω+6π)}]
          = 4 sinc(2(ω - 6π)) e^{j(ω-6π)} + 4 sinc(2(ω + 6π)) e^{j(ω+6π)}

Using FT 13 and 15 and FT properties 7 and 13 in the table of the previous lecture.
    Z3(ω) = (1/2)[2·6 sinc(3(ω - 6π)) e^{-j4(ω-6π)} + 2·6 sinc(3(ω + 6π)) e^{-j4(ω+6π)}]
            + 3·3 sinc²(3ω/2) e^{-j4ω}
          = 6 sinc(3(ω - 6π)) e^{-j4(ω-6π)} + 6 sinc(3(ω + 6π)) e^{-j4(ω+6π)}
            + 9 sinc²(3ω/2) e^{-j4ω}

Now, just add the FTs given above to get the FT of z(t).
Signal Transmission Through a Linear System

A communication system is usually described by its impulse response h(t): the output of the system when the input is a unit impulse δ(t). The impulse response is the time-domain representation of the system. The FT of the impulse response, denoted H(ω), is known as the frequency response of the system. A signal g(t) transmitted through the system with impulse response h(t) produces an output y(t) given by the convolution equation

    y(t) = g(t) * h(t)

In the frequency domain this becomes Y(ω) = G(ω) H(ω), i.e.,

    |Y(ω)| e^{j∠Y(ω)} = |G(ω)| |H(ω)| e^{j[∠G(ω) + ∠H(ω)]}
Distortionless Transmission

When transmitting a signal g(t) through a communication system, the system may or may not distort the signal. A system that does not distort the transmitted signal is allowed only to change its magnitude and to delay it: if the output is an amplified/attenuated and delayed copy of the input, the system is called a distortionless communication system,

    y(t) = k g(t - t_d)

where k is a constant and t_d is a time delay greater than zero. In the frequency domain, this gives

    Y(ω) = k G(ω) e^{-jωt_d}    ⟹    H(ω) = k e^{-jωt_d}

i.e.,

    |H(ω)| = k    and    ∠H(ω) = -ω t_d

A system described by this frequency response is known as a distortionless system or a linear-phase system (the phase of the frequency response changes linearly with frequency). Notice that the impulse response of the system described by the frequency response H(ω) above is

    h(t) = k δ(t - t_d)
Therefore, inputting an impulse function into this system produces a scaled and delayed impulse function at the output. The important thing here is that the input signal is not distorted, only delayed and scaled.
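The flat magnitude and linear phase of a pure-delay system are easy to see with a DFT. A sketch, using a discrete-time stand-in for kδ(t - t_d); the gain and delay values are assumed for illustration:

```python
import numpy as np

N = 256
k, d = 2.0, 10                         # gain and delay in samples (assumed)
h = np.zeros(N)
h[d] = k                               # discrete stand-in for h(t) = k delta(t - td)

H = np.fft.fft(h)
w = 2 * np.pi * np.arange(N) / N       # DFT frequencies (rad/sample)

# Flat magnitude |H| = k, and phase falling linearly with frequency: -w*d.
assert np.allclose(np.abs(H), k)
assert np.allclose(np.unwrap(np.angle(H)), -w * d)
```

Any input passed through this system keeps its shape; only its amplitude and arrival time change.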
Electric Filters

Filters are electric devices that allow part of their input signal to pass and block the rest. The distinction between the parts that are blocked and the parts that are passed is based on frequency. The range of frequencies that is allowed to pass is called the PASSBAND, and the range of frequencies that is blocked is called the STOPBAND. A LowPass Filter (LPF) allows frequencies up to a specified frequency to pass and blocks the rest. A HighPass Filter, on the other hand, allows all frequency components above a specified frequency to pass and blocks the rest. A BandPass Filter allows frequencies in a specific range, greater than zero and less than infinity, to pass and blocks frequencies above or below that range.
a) LowPass Filters (LPF): a major characteristic of an LPF is its bandwidth, which is half the width of the pulse of its frequency response (i.e., the width of the part of the pulse in the positive frequency range, which is W1). The frequency W1 is also known as the CUTOFF frequency of the filter.

[Figure: H_LPF(ω), a pulse extending from -W1 to W1]
b) HighPass Filters (HPF): no bandwidth is defined for an HPF, since its frequency response extends up to infinity. However, the filter is characterized by its CUTOFF frequency, W2, as shown below.

[Figure: H_HPF(ω), passing |ω| > W2]
c) BandPass Filters (BPF): the BPF is characterized by two frequencies: W1, known as the LOWER CUTOFF frequency, and W2, known as the UPPER CUTOFF frequency. The bandwidth of the filter is again the width of the pulse in the positive frequency region, BW = W2 - W1.

[Figure: H_BPF(ω), passing W1 < |ω| < W2]
Ideal vs. Real Filters:

The frequency responses shown above are those of ideal filters: there is an extremely sharp transition between the passband and the stopband. The sharpness of this transition is determined by the ORDER of the filter, which is generally the number of reactive components (capacitors and inductors) used in it. A zero-order filter (no capacitors or inductors) is basically a flat filter that allows all signals to pass. A first-order filter (one capacitor or inductor) has a very smooth transition between passband and stopband. A second-order filter (number of reactive elements = number of capacitors + number of inductors = 2) has a sharper transition. The ideal filters shown above in fact have infinite order: they would require an infinite number of inductors or capacitors, which makes them unrealizable (they cannot be built in practice). An ideal filter would also introduce an infinite delay between the input and output signals, which would make it useless even if you were able to build it.
[Figure: an ideal (infinite-order) LPF response H_LPF(ω) compared with a 3rd-order LPF response]
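The dependence of transition sharpness on order can be seen from the Butterworth magnitude formula, used here as a sketch of a generic n-th order lowpass response (the cutoff and frequency points are illustrative assumptions):

```python
import numpy as np

# Butterworth lowpass magnitude: |H(jw)| = 1 / sqrt(1 + (w/wc)^(2n)),
# where n is the filter order and wc the cutoff frequency.
def butter_mag(w, wc, n):
    return 1.0 / np.sqrt(1.0 + (w / wc) ** (2 * n))

wc = 1.0
w = np.array([0.1, 1.0, 10.0])       # below, at, and above cutoff

m1 = butter_mag(w, wc, 1)            # first order: gentle roll-off
m3 = butter_mag(w, wc, 3)            # third order: sharper roll-off

# Both orders are -3 dB (1/sqrt(2)) at the cutoff...
assert abs(m1[1] - 1 / np.sqrt(2)) < 1e-12
assert abs(m3[1] - 1 / np.sqrt(2)) < 1e-12
# ...but at 10*wc the 3rd-order response is far further down than 1st order.
assert m3[2] < m1[2] / 10
```

As n grows without bound, the response approaches the ideal brick-wall filter described above.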
2.1 The Convolution Integral

So far we have examined several simple properties that the differential equation satisfies: linearity and time-invariance. We have also seen that the complex exponential has the special property that it passes through the differential equation changed only by a complex number. We have also discussed the role of transforms in representing arbitrary inputs via superpositions of complex exponentials. This discussion is often called frequency domain analysis: the study of the outputs of linear, time-invariant systems via their response to complex exponentials. Now we turn our focus to a pure time domain analysis, understanding the response of the differential equation directly in terms of its time domain inputs. For this we explore the convolution integral, by solving the first-order differential equation ẏ(t) + a y(t) = x(t) directly using integrating factors. Examine the differential equation and introduce the integrating factor f(t), which has the property that it makes one side of the equation into a total differential:

    f(t) ẏ(t) + ḟ(t) y(t) = d/dt [f(t) y(t)]

which implies

    f(t) ẏ(t) + a f(t) y(t) = f(t) ẏ(t) + ḟ(t) y(t)

This implies the integrating factor is f(t) = e^{at}, and using the boundary condition y(-∞) = 0, the total differential is solved, giving

    y(t) = ∫_{-∞}^{t} e^{-a(t-τ)} x(τ) dτ
We have almost arrived at our convolution formula. For this, introduce the unit step function and the definition of the convolution. The unit step function is zero to the left of the origin and 1 elsewhere:

    u(t) = { 1,  t ≥ 0
           { 0,  t < 0

Definition 2.2. Given time signals f(t) and g(t), their convolution is defined as

    (f * g)(t) = ∫_{-∞}^{∞} f(τ) g(t - τ) dτ
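Definition 2.2 can be checked numerically against the closed-form output of the first-order system. A sketch; the value a = 1 and the unit-step input are illustrative choices:

```python
import numpy as np

a = 1.0
dt = 1e-3
t = np.arange(0.0, 10.0, dt)

x = np.ones_like(t)                    # unit-step input u(t)
h = np.exp(-a * t)                     # impulse response e^{-at} u(t)

# (x * h)(t) = integral of x(tau) h(t - tau) d tau, approximated here
# by a Riemann sum: discrete convolution scaled by dt.
y = np.convolve(x, h)[: len(t)] * dt

y_exact = (1.0 - np.exp(-a * t)) / a   # closed-form step response
assert np.max(np.abs(y - y_exact)) < 5e-3
```

Shrinking dt tightens the agreement, since the Riemann sum converges to the convolution integral.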
Proposition 2.1. The output of this first-order differential equation with input x(t) is given by

    y(t) = x(t) * (e^{-at} u(t))

To see this, simply use the property of the unit step to rewrite the solution above as

    y(t) = ∫_{-∞}^{∞} x(τ) e^{-a(t-τ)} u(t - τ) dτ
We make the following comment: notice that the output is the input convolved with a property of the system, e^{-at} u(t). This property, which we will call the impulse response of the system, we will study extensively. For LTI systems this will always be true, although the property of the system will change depending on the system. So we have arrived at the second major component of our study of linear, time-invariant systems: to understand the outputs of LTI systems to arbitrary inputs, one needs to understand the convolution integral. The remaining 12 lectures work to generalize and strengthen these very notions.
The Fourier transform of f(t) exists provided f(t) is absolutely integrable, i.e.,

    ∫_{-∞}^{∞} |f(t)| dt < ∞

Obviously many functions do not satisfy this condition, and their Fourier transforms do not exist; such signals are not strictly integrable, and their Fourier transforms all contain some non-conventional function such as the impulse δ(ω). To overcome this difficulty, we can multiply the given f(t) by an exponentially decaying factor e^{-σt}, so that f(t) e^{-σt} may be forced to be integrable for certain values of the real parameter σ. Now the Fourier transform becomes:

    F(σ + jω) = ∫ f(t) e^{-(σ+jω)t} dt

The result of this integral is a function of a complex variable s = σ + jω, and is defined as the Laplace transform,

    F(s) = ∫ f(t) e^{-st} dt
provided the value of σ is such that the integral converges, i.e., the function F(s) exists. Note that F(s) is a function defined in a 2-D complex plane, called the s-plane, spanned by σ for the real axis and jω for the imaginary axis.
The inverse Laplace transform can be derived from the corresponding Fourier transform. We first express the Laplace transform as a Fourier transform:

    F(σ + jω) = ∫ [f(t) e^{-σt}] e^{-jωt} dt

so that, by the inverse Fourier transform,

    f(t) e^{-σt} = (1/2π) ∫ F(σ + jω) e^{jωt} dω

Multiplying both sides by e^{σt}, we get:

    f(t) = (1/2π) ∫ F(σ + jω) e^{(σ+jω)t} dω

To represent the inverse transform in terms of s (instead of ω), we note ds = j dω, and the inverse Laplace transform can be obtained as:

    f(t) = (1/2πj) ∫_{σ-j∞}^{σ+j∞} F(s) e^{st} ds

Note that the integral with respect to ω from -∞ to ∞ becomes an integral in the complex s-plane along a vertical line Re(s) = σ.
The forward and inverse Laplace transform pair can also be represented as

    f(t) ⟷ F(s)

In particular, if we let σ = 0, i.e., s = jω, then the Laplace transform becomes the Fourier transform:

    F(jω) = ∫ f(t) e^{-jωt} dt

This is the reason why the Fourier spectrum is sometimes expressed as a function of jω. Different from the Fourier transform, which converts a 1-D signal in the time domain to a 1-D complex spectrum in the frequency domain, the Laplace transform converts the 1-D signal to a complex function defined over a 2-D complex plane, called the s-plane, spanned by the two variables σ (for the horizontal real axis) and jω (for the vertical imaginary axis).
The output of an LTI system is y(t) = x(t) * h(t), where h(t) is the impulse response function of the system. In particular, if the input is a complex exponential x(t) = e^{st}, then the output of the system can be found to be:

    y(t) = ∫ h(τ) e^{s(t-τ)} dτ = e^{st} ∫ h(τ) e^{-sτ} dτ = H(s) e^{st}

This is an eigenequation, with the complex exponential e^{st} being the eigenfunction of any LTI system, corresponding to its eigenvalue defined as:

    H(s) = ∫ h(τ) e^{-sτ} dτ

which is the Laplace transform of its impulse response h(t), called the transfer function of the LTI system. In particular, when σ = 0, i.e., s = jω, the transfer function becomes the frequency response function, the Fourier transform of the impulse response:

    H(jω) = ∫ h(τ) e^{-jωτ} dτ
10/3/13 Initial and Final Value Theorems
The initial and final values of a signal f(t) (if finite) can be found from its Laplace transform F(s) by the following theorems:

Initial value theorem:

    f(0) = lim_{s→∞} s F(s)

Final value theorem:

    f(∞) = lim_{s→0} s F(s)

Proof: For the derivative df(t)/dt, we have

    L[df/dt] = ∫₀^{∞} (df/dt) e^{-st} dt = s F(s) - f(0)

When s → ∞, the above equation becomes

    0 = lim_{s→∞} [s F(s)] - f(0)

i.e.,

    f(0) = lim_{s→∞} s F(s)
When s → 0, we have

    lim_{s→0} ∫₀^{∞} (df/dt) e^{-st} dt = f(∞) - f(0) = lim_{s→0} [s F(s)] - f(0)

i.e.,

    f(∞) = lim_{s→0} s F(s)
However, whether a given function f(t) has a final value or not depends on the locations of the poles of its transform F(s). Consider the following cases:

If there are poles on the right side of the s-plane, f(t) contains exponentially growing components, is not bounded, and the final value does not exist.

If there are pairs of complex conjugate poles on the imaginary axis, f(t) will contain sinusoidal components, and the final value is not defined.

If there are poles on the left side of the s-plane, f(t) will contain exponentially decaying terms without contribution to the final value.

Only when there are poles at the origin of the s-plane will f(t) contain a constant (DC) component, which is the final value, the steady state of the signal.

Based on the above observation, the final value theorem can also be obtained by taking the partial fraction expansion of the given transform F(s):

    F(s) = a₀/s + Σ_i a_i/(s - p_i)

where p_i are the poles, and p_i ≠ 0 by assumption. The corresponding signal in the time domain:

    f(t) = a₀ + Σ_i a_i e^{p_i t},    t ≥ 0
All terms except the first one represent exponentially decaying/growing or sinusoidal components of the signal. Multiplying both sides of the equation for F(s) by s and letting s → 0, we get:

    lim_{s→0} s F(s) = a₀ + lim_{s→0} Σ_i a_i s/(s - p_i) = a₀

We see that all terms become zero, except the first term a₀. If all poles p_i are on the left side of the s-plane, their corresponding signal components in the time domain will decay to zero, leaving only the first term a₀, the final value f(∞).
Example 1:

First find f(t); letting t → ∞ in f(t) gives the final value directly. Next, we apply the final value theorem, f(∞) = lim_{s→0} s F(s), and check that it gives the same result.
Example 2:

Here f(t) is unbounded (the first term grows exponentially), so the final value does not exist.

The final value theorem can also be used to find the DC gain of the system, the ratio between the output and the input in steady state, when all transient components have decayed. We assume the input is a unit step function x(t) = u(t), i.e., X(s) = 1/s, and find the final value, the steady state of the output, as the DC gain of the system:

    DC gain = y(∞) = lim_{s→0} s H(s) X(s) = lim_{s→0} H(s) = H(0)
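As a concrete sketch with SymPy (the transfer function below is an assumed example, not one from the text):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)

H = 10 / (s**2 + 3*s + 2)    # assumed stable system: poles at s = -1, -2
X = 1 / s                    # unit-step input
Y = H * X

# Final value theorem: y(inf) = lim_{s->0} s Y(s), which is the DC gain H(0).
dc_gain = sp.limit(s * Y, s, 0)
assert dc_gain == 5

# Cross-check in the time domain via the inverse Laplace transform.
y = sp.inverse_laplace_transform(Y, s, t)
assert sp.limit(y, t, sp.oo) == 5
```

Both poles lie in the left half plane, so the theorem applies and the step response settles at H(0) = 5.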
Example 3:
Due to the property of time-domain integration, we have

    L[u(t)] = 1/s

Applying the s-domain differentiation property to the above, we have

    L[t u(t)] = 1/s²

and in general,

    L[tⁿ u(t)] = n!/s^{n+1}
Replacing s in the known transform

    L[u(t)] = 1/s

by s + a, we get

    L[e^{-at} u(t)] = 1/(s + a)

and therefore, expressing the sinusoids via Euler's formula, we get, respectively,

    L[cos(ω₀t) u(t)] = s/(s² + ω₀²)    and    L[sin(ω₀t) u(t)] = ω₀/(s² + ω₀²)
Similarly, replacing s by s + a in the transforms of cos(ω₀t) u(t) and sin(ω₀t) u(t), we get, respectively,

    L[e^{-at} cos(ω₀t) u(t)] = (s + a)/((s + a)² + ω₀²)

and

    L[e^{-at} sin(ω₀t) u(t)] = ω₀/((s + a)² + ω₀²)
Linearity

    a f(t) + b g(t) ⟷ a F(s) + b G(s),    ROC ⊇ R_F ∩ R_G

(Here A ⊇ B means that set A contains or equals set B, i.e., B is a subset of A, or A is a superset of B.) The ROC of the linear combination should contain the intersection R_F ∩ R_G of the individual ROCs, in which both F(s) and G(s) exist. Note, however, that in some cases where zero-pole cancellation occurs, the ROC of the linear combination can be larger than R_F ∩ R_G, as shown in the example below.
Example: Let f(t) and g(t) be two signals whose transforms share a pole that is cancelled by a zero in their linear combination. We then see that the ROC of the combination is larger than the intersection of the ROCs of the two individual terms.
Time Shifting

    f(t - t₀) ⟷ e^{-st₀} F(s),    with the same ROC

Shifting in s-Domain

    e^{s₀t} f(t) ⟷ F(s - s₀)

Note that the ROC is shifted by s₀, i.e., it is shifted vertically by Im(s₀) (with no effect on the ROC) and horizontally by Re(s₀).
Time Scaling

    f(at) ⟷ (1/|a|) F(s/a)

Note that the ROC is horizontally scaled by 1/a, which could be either positive (a > 0) or negative (a < 0), in which case both the signal and the ROC of its Laplace transform are horizontally flipped.

Conjugation

    f*(t) ⟷ F*(s*)

Proof:

    L[f*(t)] = ∫ f*(t) e^{-st} dt = [∫ f(t) e^{-s*t} dt]* = F*(s*)
Convolution

    f(t) * g(t) ⟷ F(s) G(s)

Note that the ROC of the convolution could be larger than the intersection of R_F and R_G, due to the possible pole-zero cancellation caused by the convolution, similar to the linearity property.

Example: Assume F(s) and G(s) have a pole and a zero that cancel in the product F(s)G(s); then the ROC of the convolution is larger than R_F ∩ R_G.
Differentiation in Time Domain
In general, we have

    dⁿf(t)/dtⁿ ⟷ sⁿ F(s)

Again, multiplying by s may cause pole-zero cancellation, and therefore the resulting ROC may be larger than R_F. Example: Given u(t) ⟷ 1/s with ROC Re(s) > 0, we have du(t)/dt = δ(t) ⟷ s · (1/s) = 1, whose ROC is the entire s-plane.
Differentiation in s-Domain

    -t f(t) ⟷ dF(s)/ds

This can be proven by differentiating the Laplace transform:

    dF(s)/ds = d/ds ∫ f(t) e^{-st} dt = ∫ (-t) f(t) e^{-st} dt

Repeating this process, we get

    tⁿ f(t) ⟷ (-1)ⁿ dⁿF(s)/dsⁿ
Integration in Time Domain
    ∫_{-∞}^{t} f(τ) dτ ⟷ F(s)/s

Also note that as the ROC of L[u(t)] = 1/s is the right half plane Re(s) > 0, the ROC of F(s)/s is the intersection of the two individual ROCs, R_F ∩ {Re(s) > 0}, except if pole-zero cancellation occurs.

All complex values of s for which the integral in the definition converges form a region of convergence (ROC) in the s-plane. F(s) = F(σ + jω) exists if and only if the argument f(t) e^{-σt} is absolutely integrable; the imaginary part ω of s has no effect on convergence.

Example: For f(t) = e^{-at} u(t),

    F(s) = ∫₀^{∞} e^{-at} e^{-st} dt

For this integral to converge, we need to have Re(s + a) > 0, i.e., Re(s) > -a, and the Laplace transform is

    F(s) = 1/(s + a),    Re(s) > -a
As a special case where a = 0, f(t) = u(t), and we have

    L[u(t)] = 1/s,    Re(s) > 0

For the left-sided signal f(t) = -e^{-at} u(-t), only when

    Re(s) < -a

will the integral converge, and the Laplace transform is

    F(s) = 1/(s + a),    Re(s) < -a

Again, as a special case when a = 0, we have

    L[-u(-t)] = 1/s,    Re(s) < 0

Comparing the two examples above, we see that two different signals may have identical Laplace transforms F(s), but different ROCs. In the first case above, the ROC is Re(s) > -a, and in the second case, the ROC is Re(s) < -a. To determine the time signal by the inverse Laplace transform, we need the ROC as well as F(s).
The Laplace transform is linear, and for a two-sided signal it is the sum of the transforms of the two terms. If the right-sided term decays as t → ∞ and the left-sided term decays as t → -∞, the intersection of the two ROCs is a non-empty vertical strip, and the transform exists there. However, if the signal grows without bound as t → ±∞, the intersection of the two ROCs is an empty set, and the Laplace transform does not exist.
Example 4:

The Laplace transform of this signal is the sum of the transforms of its three individual terms.
10/3/13 Representation of LTI Systems by Laplace Transform
This exists only if the Laplace transforms of all three individual terms exist, i.e., the conditions for the three individual ROCs are satisfied simultaneously.

Example 5:

For this finite-duration signal, the Laplace integration converges independent of σ, so the ROC is the entire s-plane. In particular, when σ = 0, i.e., s = jω, we obtain the Fourier transform of the signal.
Representation of LTI Systems by Laplace Transform

Due to its convolution property, the Laplace transform is a powerful tool for analyzing LTI systems:

    Y(s) = H(s) X(s)

Also, if an LTI system can be described by a linear constant-coefficient differential equation (LCCDE), the Laplace transform can convert the differential equation to an algebraic equation, due to the time-derivative property.
We first consider how an LTI system can be represented in the Laplace domain. An LTI system is causal if its output depends only on the current and past input (but not the future). Assuming the system is initially at rest with zero output, then its response h(t) to an impulse at t = 0 is zero for t < 0, i.e., h(t) is right-sided. Due to the properties of the ROC, we have: if an LTI system is causal, then the ROC of H(s) is a right-sided half plane. In particular, if H(s) is rational, the system is causal if and only if its ROC is the half plane to the right of the rightmost pole.
greater than that of the denominator , so that the ROC is a right-sided plane without any
poles (even at ).
The exponential H(s) = e^{sT} (with T > 0) can be considered as a special polynomial (Taylor
series expansion):

    e^{sT} = sum_{n=0}^{infinity} (sT)^n / n! = 1 + sT + (sT)^2 / 2! + ...

As this numerator polynomial has infinite order, greater than that of the denominator (zero), there is a
pole at s = infinity, the ROC is not a right-sided plane, and the system is not causal.
When T < 0, we have:

    e^{sT} = 1 / e^{-sT} = 1 / (1 + (-sT) + (-sT)^2 / 2! + ...)

As the order of the denominator polynomial is infinite, greater than that of the numerator (zero),
there is no pole at s = infinity, the ROC is a right-sided plane, and the system is causal.
An LTI system is stable if its response to any bounded input is also bounded for all t:

    if |x(t)| <= B_x < infinity, then |y(t)| <= B_y < infinity

Since

    |y(t)| = | integral h(tau) x(t - tau) d tau | <= B_x integral |h(tau)| d tau

the output is bounded whenever integral |h(t)| dt < infinity.
In other words, if the impulse response function h(t)
of an LTI system is absolutely integrable, then the system
is stable. We can show that this condition is also necessary, i.e., all stable LTI systems' impulse response
functions are absolutely integrable. Now we have:

    An LTI system is stable if and only if its impulse response h(t) is absolutely
    integrable, i.e., the frequency response function H(jw) = H(s)|_{s = jw} exists, i.e., the ROC of its
    transfer function H(s) contains the jw-axis.
Causal and stable LTI systems
Combining the two properties above, we have:
A causal LTI system with a rational transfer function is stable if and only if all
poles of are in the left half of the s-plane, i.e., the real parts of all poles are
negative:
fourier.eng.hmc.edu/e102/lectures/Laplace_Transform/node7.html 3/5
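This pole criterion is easy to check numerically (an added sketch; the example polynomials are illustrative):

```python
# Stability test for a causal LTI system with rational H(s):
# every pole (root of the denominator) must satisfy Re[s] < 0.
import numpy as np

def is_stable(den_coeffs):
    """True if all roots of the denominator polynomial lie in Re[s] < 0."""
    return bool(np.all(np.roots(den_coeffs).real < 0))

print(is_stable([1, 3, 2]))   # s^2 + 3s + 2: poles -1, -2 -> True
print(is_stable([1, 0, -1]))  # s^2 - 1:      poles +1, -1 -> False
```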
As shown before, without specifying the ROC, this could be the Laplace transform of one of two possible
time signals: a causal (right-sided) one and an anti-causal (left-sided) one. Which of the two, if either,
is stable depends on which of the two ROCs contains the jw-axis.
This is a time-shifted version of a causal system H1(s), and the corresponding impulse response is

    h(t) = h1(t + T)

If T > 0, then h(t) can be nonzero
during the interval -T <= t < 0, so the system is not causal, although its ROC is a right
half plane. This example serves as a counter-example showing that it is not true that any right half plane ROC corresponds
to a causal system, while all causal systems' ROCs are right half planes. However, if H(s) is rational, then the
system is causal if and only if its ROC is the right half plane to the right of the rightmost pole. For the
time-shift factor e^{sT} with T > 0, the numerator polynomial e^{sT} has infinite order,
i.e., its ROC cannot be a right-sided half plane, therefore the system is not causal. On the other
hand, if T < 0, then this polynomial appears in the denominator, there is no pole at s = infinity,
the ROC is a right-sided half plane, and the system is causal.
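The delay-versus-advance distinction can be illustrated with a discrete-time sketch (added here; the indices and lengths are arbitrary): shifting an impulse right (a delay) produces output after the input, while shifting left (an advance) produces output before the input arrives, which no causal system can do.

```python
import numpy as np

x = np.zeros(12)
x[5] = 1.0                    # input impulse applied at n = 5

delay = np.roll(x, 3)         # y[n] = x[n - 3]: response appears at n = 8
advance = np.roll(x, -3)      # y[n] = x[n + 3]: response appears at n = 2,
                              # three steps BEFORE the input (non-causal)

print(int(np.argmax(delay)), int(np.argmax(advance)))  # 8 2
```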
Zeros and Poles of the Laplace Transform

All Laplace transforms in the above examples are rational, i.e., they can be written as a ratio of
polynomials of variable s in the general form

    X(s) = N(s) / D(s)
         = (b_M s^M + ... + b_1 s + b_0) / (a_N s^N + ... + a_1 s + a_0)
         = (b_M / a_N) prod_{k=1}^{M} (s - z_k) / prod_{k=1}^{N} (s - p_k)

Here N(s) is the numerator polynomial of order M with roots z_k (k = 1, ..., M), and
D(s) is the denominator polynomial of order N with roots p_k (k = 1, ..., N).
In general, we assume the order M of the numerator polynomial is always lower than that of the
denominator, i.e., M < N. If this is not the case, we can always expand
Example 1:

Two zeros: z_1 and z_2; three poles: p_1, p_2, and p_3.

Example 2:
and get the expanded form, from which the zeros and poles can be read off directly.
Alternatively, the same result can be obtained more easily by a long division. The zeros and poles
of a rational Laplace transform are defined as follows:
Zero: Each of the roots z_k of the numerator polynomial N(s), for which N(z_k) = 0 and therefore
X(z_k) = 0, is a zero of X(s). If the order of D(s) exceeds that of N(s) (i.e., N > M), then
X(s) -> 0 as s -> infinity, i.e., there is a zero at infinity.

Pole: Each of the roots p_k of the denominator polynomial D(s), for which D(p_k) = 0 and therefore
|X(p_k)| -> infinity, is a pole of X(s).
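Zeros and poles of a rational transform can be extracted numerically with scipy.signal.tf2zpk (an added sketch; the example H(s) = (s + 3)/(s^2 + 3s + 2) is illustrative, not from the notes):

```python
# Extract zeros, poles, and gain of H(s) = (s + 3) / (s^2 + 3s + 2).
from scipy.signal import tf2zpk

num = [1, 3]      # N(s) = s + 3        -> one finite zero at s = -3
den = [1, 3, 2]   # D(s) = s^2 + 3s + 2 -> poles at s = -1 and s = -2
zeros, poles, gain = tf2zpk(num, den)

print(zeros, sorted(poles.real), gain)
```

Since the numerator order (M = 1) is lower than the denominator order (N = 2), there is additionally a zero at infinity, matching the definition above.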
In the s-plane, zeros and poles are indicated by o and x, respectively. Obviously all poles are outside
the ROC. Essential properties of an LTI system can be obtained graphically from the ROC and the
zeros and poles of its transfer function on the s-plane.