
Introduction To Equalization

Presented By :
Guy Wolf
Roy Ron
Guy Shwartz


(Advanced DSP, Dr. Uri Mahlab)
HAIT
14.5.04
TOC
Communication system model
Need for equalization
ZFE
MSE criterion
LS
LMS
Blind Equalization Concepts
Turbo Equalization Concepts
MLSE - Concepts
Basic Digital Communication System
[Block diagram: Information source → Pulse generator → Trans filter HT(f) → Channel Hc(f) → (+ channel noise n(t)) → Receiver filter HR(f) → A/D → Digital Processing; X(t) is the transmitted signal, Y(t) the received signal.]
Basic Communication System
[Block diagram: Trans filter HT(f) → Channel Hc(f) → Receiver filter HR(f)]

The received signal is the transmitted signal convolved with the channel, with AWGN added (neglecting HTx, HRx):

$Y(t) = \sum_k A_k\, h_c(t - kT_b) + n(t)$

Sampling at $t = t_m$:

$Y(t_m) = A_m + \sum_{k \neq m} A_k\, h_c(t_m - kT_b) + n(t_m)$

The second term is ISI - Inter Symbol Interference: contributions of the neighboring symbols $A_k$ to the decision on $A_m$.
Explanation of ISI
[Figure: Fourier-transform pairs showing that a band-limited channel stretches each pulse in time; a pulse sent at Tb still has energy at 2Tb, 3Tb, ..., 6Tb, overlapping later symbols.]
Reasons for ISI
Channel is band-limited in nature
  Physics, e.g. parasitic capacitance in twisted pairs
  Limited frequency response => unlimited time response
Tx filter might add ISI when channel spacing is crucial
Channel has multi-path reflections
Channel Model
Channel is unknown
Channel is usually modeled as a Tap-Delay-Line (FIR):

$y(n) = \sum_{k=0}^{N} h(k)\, x(n-k)$

[Figure: delay line D-D-...-D with taps h(0), h(1), h(2), ..., h(N-1), h(N) weighting x(n) and its delays, summed into y(n).]
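A minimal sketch of this tap-delay-line model (the taps `h` below are made-up illustration values, not from any measured channel):

```python
import numpy as np

def fir_channel(x, h, noise_std=0.0, rng=None):
    """Pass symbols x through a tap-delay-line (FIR) channel h and add AWGN."""
    rng = rng or np.random.default_rng(0)
    y = np.convolve(x, h)[:len(x)]          # channel memory causes ISI
    return y + noise_std * rng.standard_normal(len(y))

h = np.array([1.0, 0.5, 0.2])               # hypothetical channel taps
x = np.zeros(6); x[0] = 1.0                 # one isolated symbol
print(fir_channel(x, h))                    # one symbol smeared over three periods
```

A single transmitted symbol leaks into the following symbol periods, which is exactly the ISI the equalizer must undo.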
Example for Measured Channels
The variation of the amplitude of the channel taps is random (changing multipath) and is usually modeled as a Rayleigh distribution in typical urban areas.
Example of channel variation:
Equalizer: equalizes the channel so that the received signal appears as if it passed through a delta response:

$G_E(f)\,G_C(f) = 1 \;\Rightarrow\; h_{total}(t) = \delta(t)$

$|G_E(f)| = \dfrac{1}{|G_C(f)|}, \qquad \arg(G_E(f)) = -\arg(G_C(f))$
Need For Equalization
Need For Equalization:
Overcome ISI degradation
Need For Adaptive Equalization:
Changing Channel in Time

=> Objective:
Find the inverse of the channel response,
so that the Rx sees a delta channel.
* Applications (or standards) recommend the channel
types the receiver has to cope with.
Zero forcing equalizers
(according to Peak Distortion Criterion)
In the chain Tx → Channel → Equalizer, force the combined response $q$ to a delta:

$q(mT) = \sum_n c_n\, x(mT - nT) = \begin{cases} 1, & m = 0 \\ 0, & m = \pm 1, \pm 2, \ldots \end{cases}$  (no ISI)

The equalizer taps are found from a matrix of channel samples $x(mT - nT/2)$.

Example: 5-tap equalizer, 2/T sample rate, rows $m = 2, 1, 0, -1, -2$:

$X = \begin{bmatrix} x(3T) & x(2.5T) & x(2T) & x(1.5T) & x(T)\\ x(2T) & x(1.5T) & x(T) & x(0.5T) & x(0)\\ x(T) & x(0.5T) & x(0) & x(-0.5T) & x(-T)\\ x(0) & x(-0.5T) & x(-T) & x(-1.5T) & x(-2T)\\ x(-T) & x(-1.5T) & x(-2T) & x(-2.5T) & x(-3T) \end{bmatrix}$

Force the output to a delta, with the equalizer taps as a vector and the desired signal as a vector:

$C = (c_{-2}, c_{-1}, c_0, c_1, c_2)^T, \qquad q = (0, 0, 1, 0, 0)^T$

$XC = q \;\Rightarrow\; C_{opt} = X^{-1} q$

Disadvantages: ignores the presence of additive noise
(noise enhancement)
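A small numeric sketch of the zero-forcing idea. The sampled channel values are made up for illustration, and for brevity the matrix is built symbol-spaced rather than at the 2/T rate of the slide example:

```python
import numpy as np

def zero_forcing_taps(x_samp, n_taps):
    """Solve X c = q (delta at the center) for symbol-spaced channel samples
    x_samp[k] ~ x(kT), k = -(len//2) .. +(len//2)."""
    K = n_taps // 2
    # X[m, n] = x((m - n)T); row m forces q(mT), column n weights tap c_n
    X = np.array([[x_samp[(m - n) + (len(x_samp) // 2)]
                   for n in range(-K, K + 1)] for m in range(-K, K + 1)])
    q = np.zeros(n_taps); q[K] = 1.0        # force q(0) = 1, q(mT) = 0 otherwise
    return np.linalg.solve(X, q)

# Hypothetical channel with ISI, sampled at x(-4T) .. x(4T):
x_samp = np.array([0.0, 0.0, 0.1, 0.3, 1.0, 0.2, 0.05, 0.0, 0.0])
c = zero_forcing_taps(x_samp, 5)
print(np.round(np.convolve(x_samp, c), 3))  # center samples forced to [0 0 1 0 0]
```

Note how the residual ISI is pushed outside the forced window; with a noisy channel, the same inverse also amplifies the noise (the noise-enhancement problem above).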
MSE Criterion

$J(\theta) = \sum_{n=0}^{N-1} \big(x[n] - \theta h[n]\big)^2$

$x[n]$: received signal; $\theta$: unknown parameter (equalizer filter response); $\theta h[n]$: desired signal.
Mean Square Error between the received signal and the desired signal, filtered by the equalizer filter.
Two solution families: the LS algorithm and the LMS algorithm.

LS
Least Squares Method:
Unbiased estimator
Exhibits minimum variance (optimal)
No probabilistic assumptions (only a signal model)

Presented by Gauss (1795) in studies of planetary motions.
LS - Theory
1. Signal model: $s[n] = \theta h[n]$ (more generally $s = H\theta$; see the Back-Up slide)
2. LS cost: $J(\theta) = \sum_{n=0}^{N-1} \big(x[n] - \theta h[n]\big)^2$
3. Derivative with respect to $\theta$: $\dfrac{dJ}{d\theta} = -2\sum_{n=0}^{N-1} h[n]\big(x[n] - \theta h[n]\big) = 0$
4. Estimate: $\hat\theta = \dfrac{\sum_{n=0}^{N-1} x[n]\,h[n]}{\sum_{n=0}^{N-1} h[n]^2}$
The minimum LS error is obtained by substituting the estimate (4) into the cost (2):

$J_{\min} = \sum_{n=0}^{N-1}\big(x[n] - \hat\theta h[n]\big)^2 = \sum_{n=0}^{N-1} x[n]^2 - \dfrac{\Big(\sum_{n=0}^{N-1} x[n]h[n]\Big)^2}{\sum_{n=0}^{N-1} h[n]^2}$

The first term is the energy of the original signal; the second is the energy of the fitted signal.
If $x[n] = \text{Signal} + w[n]$ and the noise is small enough (SNR large enough): $J_{\min} \approx 0$.
Back-Up
Finding the LS solution
General (vector) model: $s = H\theta$, where H is the observation matrix (N×p) and $s = (s[0], s[1], \ldots, s[N-1])^T$.

$J(\theta) = \sum_{n=0}^{N-1}\big(x[n]-s[n]\big)^2 = (x - H\theta)^T(x - H\theta) = x^T x - 2x^T H\theta + \theta^T H^T H\theta$

Setting the derivative to zero:

$\dfrac{\partial J}{\partial \theta} = -2H^T x + 2H^T H\theta = 0 \;\Rightarrow\; \hat\theta = (H^T H)^{-1} H^T x$
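The closed-form solution can be sketched in a few lines of Python; the observation matrix `H`, the true parameter vector `theta_true`, and the noise level are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((100, 3))                    # observation matrix (N x p)
theta_true = np.array([0.5, -1.0, 2.0])              # hypothetical parameters
x = H @ theta_true + 0.01 * rng.standard_normal(100) # x = H theta + noise

# Normal equations: theta_hat = (H^T H)^{-1} H^T x
theta_hat = np.linalg.solve(H.T @ H, H.T @ x)
print(np.round(theta_hat, 2))                        # close to theta_true
```

Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse, which is both cheaper and numerically safer than `inv(H.T @ H)`.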
LS : Pros & Cons
Advantages:
Optimal approximation of the channel - once calculated,
it can feed the equalizer taps.
Disadvantages:
Heavy processing (due to the matrix inversion, which by
itself is a challenge)
Not adaptive (calculated only once in a while), so it
is not good for fast-varying channels

An adaptive equalizer is required when the channel is time-variant
(changes in time), in order to adjust the equalizer filter tap
weights according to the instantaneous channel properties.

Contents:
Introduction - approximating steepest-descent algorithm
Steepest descent method
Least-mean-square algorithm
LMS algorithm convergence stability
Numerical example for channel equalization using LMS
Summary
LEAST-MEAN-SQUARE ALGORITHM
Introduced by Widrow & Hoff in 1959
Simple, no matrices calculation involved in the adaptation
In the family of stochastic gradient algorithms
Approximation of the steepest descent method
Based on the MMSE (Minimum Mean Square Error) criterion
Adaptive process containing two input signals:
1.) Filtering process, producing output signal.
2.) Desired signal (Training sequence)
Adaptive process: recursive adjustment of filter tap weights
INTRODUCTION
NOTATIONS
Input signal (vector): u(n)
Autocorrelation matrix of input signal: R = E[u(n)u^H(n)]
Desired response: d(n)
Cross-correlation vector between u(n) and d(n): P = E[u(n)d*(n)]
Filter tap weights: w(n)
Filter output: y(n) = w^H(n)u(n)
Estimation error: e(n) = d(n) - y(n)
Mean Square Error: J = E[|e(n)|^2] = E[e(n)e*(n)]
SYSTEM BLOCK USING THE LMS
u[n] = input signal from the channel ; d[n] = desired response
H[n] = some training sequence generator
e[n] = error feedback between:
A.) the desired response
B.) the equalizer FIR filter output
W = FIR filter using the tap weights vector
STEEPEST DESCENT METHOD
The steepest descent algorithm is a gradient-based method which
employs a recursive solution to the problem (cost function).
The current equalizer taps vector is W(n) and the next
equalizer taps vector is W(n+1); we estimate W(n+1) by this
approximation:

$W(n+1) = W(n) + \tfrac{1}{2}\,\mu\,[-\nabla J(W(n))]$

The gradient is a vector pointing in the direction of the
change in filter coefficients that will cause the greatest
increase in the error signal. Because the goal is to minimize
the error, the filter coefficients are updated in the
direction opposite the gradient; that is why the gradient term
is negated.
The constant $\mu$ is a step size. After repeatedly adjusting
each coefficient in the direction opposite to the gradient of
the error, the adaptive filter should converge.

STEEPEST DESCENT EXAMPLE
Given the following function, we need to obtain the vector
that gives the absolute minimum:

$y(C_1, C_2) = C_1^2 + C_2^2$

It is obvious that $C_1 = C_2 = 0$ gives the minimum.
Now let us find this solution by the steepest descent method.

We start by assuming $(C_1 = 5,\; C_2 = 7)$.
We select the constant $\mu$. If it is too big, we miss the
minimum; if it is too small, reaching the minimum takes a long
time. We select $\mu = 0.1$.
The gradient vector is:

$\nabla y = \begin{bmatrix} \partial y/\partial C_1 \\ \partial y/\partial C_2 \end{bmatrix} = \begin{bmatrix} 2C_1 \\ 2C_2 \end{bmatrix}$
So our iterative equation is:

$\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n+1]} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n]} - 0.5\,\mu\,\nabla y = 0.9 \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n]}$

Iteration 1: $(C_1, C_2) = (5,\; 7)$
Iteration 2: $(4.5,\; 6.3)$
Iteration 3: $(4.05,\; 5.67)$
......
Iteration 60: $\approx (0.01,\; 0.013)$

$\lim_{n\to\infty}\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n]} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$

As we can see, the vector $[C_1, C_2]$ converges to the value
which yields the function minimum, and the speed of
this convergence depends on $\mu$.
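The iteration above can be reproduced in a few lines, using the starting point and step size from the example:

```python
# Steepest descent on y(C1, C2) = C1^2 + C2^2:
# C[n+1] = C[n] - 0.5 * mu * grad(y) = 0.9 * C[n] for mu = 0.1.
C = [5.0, 7.0]                               # initial guess from the example
mu = 0.1
for n in range(59):                          # iterations 2..60
    grad = [2 * C[0], 2 * C[1]]              # gradient of C1^2 + C2^2
    C = [C[0] - 0.5 * mu * grad[0], C[1] - 0.5 * mu * grad[1]]
print([round(c, 3) for c in C])              # close to the iteration-60 values
```

Each update shrinks both coordinates by the same factor 0.9, so the path heads straight toward the minimum at the origin.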
[Figure: contour plot of $y(C_1, C_2)$ showing the descent path from the initial guess to the minimum.]
MMSE CRITERIA FOR THE LMS
MMSE - Minimum Mean Square Error

$\mathrm{MSE} = E\{[d(k) - y(k)]^2\} = E\Big\{\Big[d(k) - \sum_{n=-N}^{N} w(n)\,u(k-n)\Big]^2\Big\}$

To obtain the MMSE we differentiate the MSE with respect to the
tap weights and compare it to 0. Expanding the square:

$\mathrm{MSE} = E\{d^2(k)\} - 2\sum_{n=-N}^{N} w(n)\,P_{du}(n) + \sum_{n=-N}^{N}\sum_{m=-N}^{N} w(n)\,w(m)\,R_{uu}(n-m)$

where

$P_{du}(n) = E\{d(k)\,u(k-n)\}$  (cross-correlation)
$R_{uu}(n-m) = E\{u(k-n)\,u(k-m)\}$  (autocorrelation)
MMSE CRITERION FOR THE LMS
And finally we get:

$\nabla J(k) = \dfrac{d\,\mathrm{MSE}}{dW(k)} = -2P_{du}(k) + 2\sum_{n=-N}^{N} w(n)\,R_{uu}(n-k), \quad k = 0, \pm 1, \pm 2, \ldots$

By comparing the derivative to zero we get the MMSE solution:

$w_{opt} = R^{-1}P$
This calculation is complicated for a DSP (it requires computing an inverse
matrix) and can make the system unstable: if there are nulls in the spectrum,
very large values can appear in the inverse matrix. Also, we do not always
know the autocorrelation matrix of the input and the cross-correlation
vector, so we would like to approximate them.
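Before moving to the approximation, the exact $w_{opt} = R^{-1}P$ computation can be sketched with sample estimates of R and P; the 3-tap system being identified is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(5000)                # white input signal
w_true = np.array([0.8, -0.4, 0.2])          # hypothetical system to identify
d = np.convolve(u, w_true)[:len(u)]          # desired response

N = 3
n_idx = np.arange(N - 1, len(u))
U = np.stack([u[n - np.arange(N)] for n in n_idx])  # rows [u(n), u(n-1), u(n-2)]
R = U.T @ U / len(n_idx)                     # sample autocorrelation matrix
P = U.T @ d[n_idx] / len(n_idx)              # sample cross-correlation vector
w_opt = np.linalg.solve(R, P)                # w_opt = R^{-1} P
print(np.round(w_opt, 2))                    # recovers w_true
```

The matrix solve is exactly the step the LMS approximation below avoids.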
LMS: APPROXIMATION OF THE STEEPEST DESCENT METHOD
W(n+1) = W(n) + 2$\mu$[P - R w(n)]  <= according to the MMSE criterion
We make the following assumptions:
Input vectors u(n), u(n-1), ..., u(1) are statistically independent vectors.
The input vector u(n) and desired response d(n) are statistically independent of
d(n-1), ..., d(1).
The input vector u(n) and desired response d(n) are Gaussian-distributed R.V.
The environment is wide-sense stationary.
In LMS, the following instantaneous estimates are used:
$\hat{R} = u(n)u^H(n)$ - autocorrelation matrix of the input signal
$\hat{P} = u(n)d^*(n)$ - cross-correlation vector between u(n) and d(n)

*** Equivalently, we take the gradient of $|e[n]|^2$ instead of $E\{|e[n]|^2\}$.

LMS ALGORITHM

$W[n+1] = W[n] + \mu\,\{\hat{P} - \hat{R}\,w[n]\}$
$\qquad\;\;\, = w[n] + \mu\,u[n]\{d^*[n] - u^H[n]w[n]\}$
$\qquad\;\;\, = w[n] + \mu\,u[n]\{d^*[n] - y^*[n]\}$

We get the final result:

$W[n+1] = W[n] + \mu\,u[n]\,e^*[n]$
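The final update rule can be sketched for real-valued signals (so the conjugates drop out); the 3-tap channel and the step size are illustration values:

```python
import numpy as np

def lms(u, d, n_taps, mu):
    """LMS: w[n+1] = w[n] + mu * u_vec[n] * e[n], with e = d - w^T u (real case)."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(u)):
        u_vec = u[n - np.arange(n_taps)]     # [u(n), u(n-1), ...]
        e = d[n] - w @ u_vec                 # estimation error
        w = w + mu * u_vec * e               # stochastic-gradient update
    return w

# Identify a hypothetical 3-tap system from its input/output:
rng = np.random.default_rng(3)
u = rng.standard_normal(20000)
w_true = np.array([0.8, -0.4, 0.2])
d = np.convolve(u, w_true)[:len(u)]
print(np.round(lms(u, d, 3, mu=0.01), 2))    # converges near w_true
```

No matrix is ever inverted: each sample costs only one inner product and one scaled vector addition.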
LMS STABILITY
The step size $\mu$ determines the algorithm convergence rate. Too small
a step size makes the algorithm take many iterations; too big a step size
prevents the weight taps from converging.

Rule of thumb:

$\mu = \dfrac{1}{5(2N+1)P_R}$

where N is the equalizer length and $P_R$ is the received power
(signal+noise), which can be estimated in the receiver.
LMS CONVERGENCE GRAPH
This graph illustrates the LMS algorithm. First we guess the tap weights;
then we move opposite the gradient vector to calculate the next taps, and
so on, until we reach the MMSE, meaning the MSE is 0 or very close to it.
(In practice we cannot get an error of exactly 0, because the noise is a
random process; we can only decrease the error below a desired minimum.)
Example for an unknown channel of 2nd order:
[Figures: error surface showing the desired combination of taps; LMS convergence vs. $\mu$.]
LMS EQUALIZER EXAMPLE
Channel equalization example: average square error as a function of the
number of iterations, using different channel transfer functions
(change of W).
LMS : Pros & Cons
LMS advantages:
Simplicity of implementation
Does not neglect the noise like the zero-forcing equalizer
Bypasses the need to calculate an inverse matrix

LMS disadvantages:
Slow convergence
Demands the use of a training sequence as reference,
thus decreasing the communication BW.

Non linear equalization
Linear equalization (reminder):
Tap-delayed equalization; the output is a linear combination of the
equalizer input:

$G_E = \dfrac{1}{G_C}$

$G_E(z) = \sum_i a_i z^{-i} = \dfrac{Y(z)}{X(z)} = a_0 + a_1 z^{-1} + a_2 z^{-2} + \ldots$  (realized as an FIR filter)

$y(n) = a_0 x(n) + a_1 x(n-1) + a_2 x(n-2) + \ldots$
Non linear equalization - DFE
(Decision Feedback Equalization)

$y(n) = \sum_i a_i\,x(n-i) - \sum_i b_i\,y(n-i)$

$\dfrac{Y(z)}{X(z)} = G_E(z) = \dfrac{\sum_i a_i z^{-i}}{1 + \sum_i b_i z^{-i}}$  (realized as an IIR filter)

Advantages: copes with larger ISI.
[Block diagram: input → feed-forward filter A(z) → summing node → receiver detector → output; the detector output is fed back through B(z) and subtracted at the summing node.]
The nonlinearity is due to the detector characteristics that are
fed back (mapper).
The decision feedback introduces poles in the z-domain.
Disadvantages: danger of instability.
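A toy DFE sketch for BPSK, assuming a hypothetical channel $1 + 0.5z^{-1}$; `a` is the feed-forward filter, `b` the feedback filter, and the slicer decisions (not the raw outputs) are fed back:

```python
import numpy as np

def dfe(x, a, b):
    """DFE sketch for BPSK (+/-1 symbols):
    y(n) = sum_i a[i] x(n-i) - sum_{i>=1} b[i] s_hat(n-i), s_hat = sign(y)."""
    y = np.zeros(len(x))
    s_hat = np.zeros(len(x))                 # detector (slicer) decisions
    for n in range(len(x)):
        ff = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        fb = sum(b[i] * s_hat[n - i] for i in range(1, len(b)) if n - i >= 0)
        y[n] = ff - fb                       # feedback cancels trailing ISI
        s_hat[n] = 1.0 if y[n] >= 0 else -1.0
    return s_hat

rng = np.random.default_rng(4)
s = np.where(rng.standard_normal(50) >= 0, 1.0, -1.0)   # BPSK symbols
x = np.convolve(s, [1.0, 0.5])[:len(s)]                 # channel 1 + 0.5 z^-1
print(np.array_equal(dfe(x, a=[1.0], b=[0.0, 0.5]), s)) # -> True
```

Since past decisions cancel the trailing ISI exactly here, there is no noise enhancement; but a wrong decision would propagate through the feedback, which is the instability danger noted above.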

Blind Equalization
ZFE and MSE equalizers assume the option of a training sequence
for learning the channel.
What happens when there is none?
[Block diagram: the input $V_n$ enters an adaptive equalizer whose output $\tilde I_n$ feeds a decision device producing $\hat I_n$; the error signal $e_n = d_n - \tilde I_n$ is formed from the decision output (taken as $d_n$) instead of a training sequence, and drives the adaptation.]
But it usually also employs:
Interleaving\DeInterleaving
Advanced coding
ML criterion

Why? Blind equalization is hard and complicated enough!
So if you are going to implement it, use the best blocks
for decision (detection) and for equalization with LMS.
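One well-known blind algorithm (not detailed in these slides) is the Constant Modulus Algorithm: it needs no training sequence, only the prior that the symbols have constant modulus $|s|^2 = 1$. A minimal real-valued sketch, with a hypothetical $1 + 0.3z^{-1}$ channel:

```python
import numpy as np

def cma_equalizer(x, n_taps=3, mu=0.005):
    """Blind equalization via CMA: minimize E[(|y|^2 - 1)^2] with no reference."""
    w = np.zeros(n_taps); w[n_taps // 2] = 1.0   # center-spike initialization
    for n in range(n_taps - 1, len(x)):
        x_vec = x[n - np.arange(n_taps)]
        y = w @ x_vec
        w = w - mu * y * (y**2 - 1.0) * x_vec    # gradient step on (y^2 - 1)^2
    return w

rng = np.random.default_rng(5)
s = np.where(rng.standard_normal(30000) >= 0, 1.0, -1.0)  # BPSK symbols
x = np.convolve(s, [1.0, 0.3])[:len(s)]                   # mild ISI channel
w = cma_equalizer(x)
y = np.convolve(x, w)[:len(x)]
print(np.mean((y[100:]**2 - 1.0)**2))          # dispersion driven toward 0
```

The cost never references the transmitted data, only the modulus of the output, which is why it works without a training sequence (up to a sign/delay ambiguity).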
Turbo Equalization
[Block diagram: the received signal r enters a MAP equalizer aided by a channel estimator; the equalizer's extrinsic output $L_E^e(c)$ is deinterleaved ($\Pi^{-1}$) into $L_D(c)$ for a MAP decoder; the decoder's extrinsic output $L_D^e(c)$ is interleaved ($\Pi$) and fed back to the equalizer as a priori information; the decoder also outputs the data LLRs $L_D(d)$.]
Iterative:
Estimate
Equalize
Decode
Re-Encode
Usually also employs:
Interleaving\DeInterleaving
Turbo coding (advanced iterative code)
MAP (based on the ML criterion)

Why? It is complicated enough!
So if you are going to implement it, use the best blocks.
Each next iteration relies on a better estimation and
therefore leads to more precise equalization.
Performance of Turbo Eq Vs
Iterations
ML criterion
MSE optimizes detection only up to 1st/2nd-order
statistics.
In Uri's class we saw optimum detection:
Strongest survivor
Correlation (MF)
(allowing optimal performance for a delta channel and additive noise).
Optimized detection maximizes the probability of detection (minimizes
the error, or the Euclidean distance in signal space).
Let us find the optimal detection criterion in the
presence of a channel with memory (ISI).
ML criterion - Cont.
Maximum Likelihood:
Maximizes the decision probability for the received trellis.

Example - BPSK (NRZI): two possible transmitted signals,
$s_1 = -s_0 = \sqrt{E_b}$, where $E_b$ is the energy per bit.
The received signal is corrupted by AWGN: $r_k = \pm\sqrt{E_b} + n_k$

Conditional PDF (probability of a correct decision on $r_k$ given that
$s_1$ was transmitted), with $\sigma_n^2 = N_0/2$:

$p(r_k \mid s_1) = \dfrac{1}{\sqrt{2\pi\sigma_n^2}}\exp\!\left(-\dfrac{(r_k - \sqrt{E_b})^2}{2\sigma_n^2}\right),\qquad p(r_k \mid s_0) = \dfrac{1}{\sqrt{2\pi\sigma_n^2}}\exp\!\left(-\dfrac{(r_k + \sqrt{E_b})^2}{2\sigma_n^2}\right)$

Probability of a correct decision on a sequence of symbols: the optimal
receiver maximizes, over the transmitted sequence $m$,

$p(r_1, r_2, \ldots, r_K \mid s^{(m)}) = \prod_{k=1}^{K} p(r_k \mid s_k^{(m)})$
ML - Cont.
With a logarithm operation, it can be shown that this is equivalent to
minimizing the Euclidean distance metric of the sequence (called the metric):

$D(r, s^{(m)}) = \sum_{k=1}^{K} \big(r_k - s_k^{(m)}\big)^2$

How could this be used? Looks similar?
While MSE minimizes the error (maximizes the probability) for a decision on a certain symbol,
MLSE minimizes the error (maximizes the probability) for a decision on a certain trellis of symbols.
Viterbi Equalizer
(On the tip of the tongue)
Example for NRZI.
Transmit symbols:
(0 = no change in the transmitted symbol,
1 = alter the symbol)
[Trellis diagram: states S0 and S1 over $t = T, 2T, 3T, 4T$; branches labeled with input bit / transmitted level $\pm\sqrt{E_b}$.]

Metric (sum of Euclidean distances), with the trellis starting in S0
(level $-\sqrt{E_b}$):

$D(0,0) = (r_0 + \sqrt{E_b})^2 + (r_1 + \sqrt{E_b})^2$
$D(0,1) = (r_0 + \sqrt{E_b})^2 + (r_1 - \sqrt{E_b})^2$
$D(1,0) = (r_0 - \sqrt{E_b})^2 + (r_1 - \sqrt{E_b})^2$
$D(1,1) = (r_0 - \sqrt{E_b})^2 + (r_1 + \sqrt{E_b})^2$

Extending by the third symbol, for example:

$D(0,0,0) = D(0,0) + (r_2 + \sqrt{E_b})^2$
$D(0,0,1) = D(0,0) + (r_2 - \sqrt{E_b})^2$
$D(0,1,0) = D(0,1) + (r_2 - \sqrt{E_b})^2$
$D(0,1,1) = D(0,1) + (r_2 + \sqrt{E_b})^2$

At each step we always disqualify one metric for each possible state S0 and S1.
Finally we are left with 2 candidate trellis paths, and we decide on the
correct trellis using the Euclidean metric of each (or with a posteriori data).
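The sequence-metric idea can be sketched by brute force over all bit sequences (a real Viterbi equalizer keeps only survivor paths instead; the received samples below are made up):

```python
import numpy as np
from itertools import product

def mlse_nrzi(r, Eb=1.0):
    """Brute-force MLSE for NRZI: pick the bit sequence whose transmitted
    levels minimize the Euclidean metric D = sum_k (r_k - s_k)^2."""
    A = np.sqrt(Eb)
    best_bits, best_D = None, np.inf
    for bits in product([0, 1], repeat=len(r)):
        level, s = -A, []                    # start in state S0 (level -sqrt(Eb))
        for b in bits:
            if b == 1:                       # 1 = alter symbol, 0 = no change
                level = -level
            s.append(level)
        D = sum((rk - sk) ** 2 for rk, sk in zip(r, s))
        if D < best_D:
            best_bits, best_D = bits, D
    return best_bits

# Bits (1, 0, 1) transmit levels (+A, +A, -A); mild noise does not flip the decision:
r = np.array([0.9, 1.1, -0.8])
print(mlse_nrzi(r))                          # -> (1, 0, 1)
```

The exhaustive search costs $2^K$ metric evaluations; the Viterbi algorithm reaches the same minimum-metric path with cost linear in K by discarding the worse path into each state at every step.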
References
John G. Proakis - Digital Communications
John G. Proakis - Communication Systems Engineering
Simon Haykin - Adaptive Filter Theory
K. Hooli - Adaptive Filters and LMS
S. Kay - Statistical Signal Processing: Estimation Theory
