

Equalization in a wideband TDMA system

• Three basic equalization methods
  • Linear equalization (LE)
  • Decision feedback equalization (DFE)
  • Sequence estimation (MLSE-VA)
• Example of channel estimation circuit
Three basic equalization methods (1)

Linear equalization (LE):

Performance is not very good when the frequency response of the frequency-selective channel contains deep fades.

The zero-forcing algorithm aims to eliminate the intersymbol interference (ISI) at decision time instants (i.e. at the center of the bit/symbol interval).

The least-mean-square (LMS) algorithm will be investigated in greater detail in this presentation.

The recursive least-squares (RLS) algorithm offers faster convergence, but is computationally more complex than LMS (since matrix inversion is required).
Three basic equalization methods (2)

Decision feedback equalization (DFE):


Performance is better than LE, due to cancellation of the ISI tails of previously received symbols.

Decision feedback equalizer structure:

[Block diagram: the input passes through a feed-forward filter (FFF) to a summing node, whose output goes to the symbol decision circuit; the decided symbols are fed back through a feed-back filter (FBF) and combined at the summing node. The coefficients of both filters are adjusted adaptively.]
Three basic equalization methods (3)

Maximum Likelihood Sequence Estimation using the Viterbi Algorithm (MLSE-VA):

Best performance. Operation of the Viterbi algorithm can be visualized by means of a trellis diagram with m^(K-1) states, where m is the symbol alphabet size and K is the length of the overall channel impulse response (in samples).

[Figure: state trellis diagram; the nodes are the states, the horizontal axis shows the sample time instants, and the branches are the allowed transitions between states.]
Linear equalization, zero-forcing algorithm
Basic idea:

Z(f) = B(f) · H(f) · E(f)

where Z(f) is the raised cosine spectrum, B(f) the transmitted symbol spectrum, H(f) the channel frequency response (including transmit and receive filters), and E(f) the equalizer frequency response.

[Figure: the spectra B(f), H(f), E(f) and Z(f) sketched over a frequency axis from 0 to fs = 1/T.]
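The basic idea can be sketched numerically: choosing E(f) = 1/H(f) makes the cascade H(f)E(f) flat, so Z(f) reduces to the raised cosine spectrum B(f). A minimal NumPy sketch, assuming a hypothetical two-path channel (the channel values are illustrative, not from the slides):

```python
import numpy as np

# Hypothetical channel: direct path plus an echo at 2T with gain 0.5 (assumption)
h = np.array([1.0, 0.0, 0.5])
Nf = 64                                  # frequency grid size

H = np.fft.fft(h, Nf)                    # channel frequency response H(f)
E = 1.0 / H                              # zero-forcing equalizer: E(f) = 1/H(f)

# The cascade H(f)E(f) is flat, so Z(f) = B(f)H(f)E(f) = B(f)
print(np.allclose(H * E, 1.0))           # True

# Caveat from the slides: in a deep fade |H(f)| is small, so |E(f)| = 1/|H(f)|
# grows large there and noise is strongly amplified at those frequencies
print(np.abs(E).max() / np.abs(E).min())
```

The second print shows the noise-gain spread of the equalizer across frequency, which is exactly why LE performs poorly when the channel has deep fades.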
Zero-forcing equalizer

Transmitted r k  z k 
impulse Communication Equalizer
sequence channel Input to
decision
FIR filter contains FIR filter contains
Overall circuit
2N+1 coefficients 2M+1 coefficients
channel

Channel impulse response Equalizer impulse response


N M
h k    h  k  n
n  N
n c k   
m  M
cm   k  m 

M
Coefficients of
equivalent FIR filter
fk  
m  M
cm hk m ( M  k  M )

(in fact the equivalent FIR filter consists of 2M+1+2N coefficients,


but the equalizer can only “handle” 2M+1 equations)
Zero-forcing equalizer

We want the overall filter response to be non-zero at decision time k = 0 and zero at all other sampling times k ≠ 0:

f_k = Σ_{m=-M}^{M} c_m h_{k-m} = 1 for k = 0, and 0 for k ≠ 0

This leads to a set of 2M+1 equations:

h_0 c_{-M} + h_{-1} c_{-M+1} + ... + h_{-2M} c_M = 0    (k = -M)
h_1 c_{-M} + h_0 c_{-M+1} + ... + h_{-2M+1} c_M = 0
  ⋮
h_M c_{-M} + h_{M-1} c_{-M+1} + ... + h_{-M} c_M = 1    (k = 0)
  ⋮
h_{2M-1} c_{-M} + h_{2M-2} c_{-M+1} + ... + h_{-1} c_M = 0
h_{2M} c_{-M} + h_{2M-1} c_{-M+1} + ... + h_0 c_M = 0    (k = M)
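This set of 2M+1 equations can be set up and solved directly as a linear system. A small NumPy sketch, assuming a hypothetical three-tap channel (N = 1) and a five-tap equalizer (M = 2); the tap values are illustrative only:

```python
import numpy as np

# Hypothetical channel taps h_{-N}..h_N with N = 1 (illustrative assumption)
h = {-1: 0.1, 0: 1.0, 1: 0.3}
N, M = 1, 2                               # equalizer has 2M+1 = 5 taps c_{-M}..c_M

# Build the (2M+1) x (2M+1) system  f_k = sum_m c_m h_{k-m} = delta_{k0},  |k| <= M
ks = list(range(-M, M + 1))
A = np.array([[h.get(k - m, 0.0) for m in ks] for k in ks])
rhs = np.array([1.0 if k == 0 else 0.0 for k in ks])

c = np.linalg.solve(A, rhs)               # zero-forcing coefficients c_{-M}..c_M

# Verify: the overall response c*h has 2M+1+2N taps; ISI is forced to zero
# only at the 2M+1 decision instants |k| <= M (residual ISI remains outside)
f = np.convolve(c, [h[n] for n in range(-N, N + 1)])
print(np.allclose(f[N:-N], rhs))          # centre 2M+1 taps are [0, 0, 1, 0, 0]
```

The check at the end illustrates the parenthetical remark above: the equalizer nulls ISI at the 2M+1 instants it can "handle", while the N outermost taps of the equivalent filter are left uncontrolled.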
Minimum Mean Square Error (MMSE)

The aim is to minimize:

J = E[ |e_k|² ]

where e_k = z_k − b̂_k (or b̂_k − z_k, depending on the source).

[Block diagram: the transmitted sequence s(k) passes through the channel and the equalizer; the equalizer output z_k (the input to the decision circuit) is compared with the estimate b̂_k of the k:th symbol to form the error e_k.]
MSE vs. equalizer coefficients

J = E[ |e_k|² ] is a quadratic multi-dimensional function of the equalizer coefficient values.

[Figure: the error surface J plotted over two real-valued equalizer coefficients c_1 and c_2 (or one complex-valued coefficient) — a paraboloid with a single minimum.]

MMSE aim: find the minimum value directly (Wiener solution), or use an algorithm that recursively changes the equalizer coefficients in the correct direction (towards the minimum value of J)!
Wiener solution

We start with the Wiener-Hopf equations in matrix form:

R c_opt = p

R = correlation matrix (M × M) of the received (sampled) signal values r_k
p = vector (of length M) indicating the cross-correlation between the received signal values r_k and the estimate of the received symbol b̂_k
c_opt = vector (of length M) consisting of the optimal equalizer coefficient values

(We assume here that the equalizer contains M taps, not 2M+1 taps as in other parts of this presentation.)
Correlation matrix R & vector p

R = E[ r(k) r^{*T}(k) ]
p = E[ r(k) b̂_k* ]

where r(k) = [ r_k, r_{k-1}, ..., r_{k-M+1} ]^T (M samples).

Before we can perform the stochastic expectation operation, we must know the stochastic properties of the transmitted signal (and of the channel, if it is changing). Usually we do not have this information => some non-stochastic algorithm like least-mean-square (LMS) must be used.
Algorithms

Stochastic information (R and p) is available:

1. Direct solution of the Wiener-Hopf equations:

   R c_opt = p  =>  c_opt = R⁻¹ p

   (Inverting a large matrix is difficult!)

2. Newton's algorithm (a fast iterative algorithm)

3. Method of steepest descent (this iterative algorithm is slow but easier to implement)

R and p are not available:

Use an algorithm that is based on the received signal sequence directly. One such algorithm is least-mean-square (LMS).
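As a concrete sketch of option 1, R and p can be replaced by sample averages estimated from a block of received data, after which the Wiener-Hopf system is solved directly. Everything below (channel, noise level, tap count) is an illustrative assumption, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scenario: BPSK symbols through a two-tap channel plus AWGN
b = rng.choice([-1.0, 1.0], size=20000)
r = np.convolve(b, [1.0, 0.4])[:len(b)] + 0.05 * rng.standard_normal(len(b))

M = 5                                                 # equalizer with M taps (slide's convention)
# Sample estimates: R ~ E[r(k) r(k)^T], p ~ E[r(k) b_k], with r(k) = [r_k, ..., r_{k-M+1}]
vecs = np.array([r[k:k - M:-1] for k in range(M, len(r))])
R = vecs.T @ vecs / len(vecs)
p = vecs.T @ b[M:] / len(vecs)

c_opt = np.linalg.solve(R, p)                         # direct Wiener solution R c_opt = p

# The equalized output should recover the symbols almost perfectly at this SNR
z = np.array([c_opt @ v for v in vecs])
ber = np.mean(np.sign(z) != b[M:])
print(ber < 0.01)
```

The signal is real-valued here, so the Hermitian transpose reduces to a plain transpose; with complex samples the outer products would use `np.conj`.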
Conventional linear equalizer of LMS type
Widrow

[Block diagram: the received complex signal samples r_{k+M} ... r_{k-M} propagate through a transversal FIR filter with 2M+1 complex-valued tap coefficients c_{-M} ... c_M (delay T between taps); the tap outputs are summed to give z_k, the symbol decision produces the estimate b̂_k of the k:th symbol, and the error e_k = z_k − b̂_k drives the LMS algorithm for adjustment of the tap coefficients.]
Joint optimization of coefficients and phase

[Block diagram: the received signal r(k) passes through the equalizer filter and is multiplied by e^{-jφ}; the product z_k goes to the decision circuit, which outputs b̂_k; the error e_k drives both the coefficient updating and the phase synchronization.]

Godard — Proakis, Ed.3, Section 11-5-2

Minimize:

J = E[ |e_k|² ],  where  e_k = z_k − b̂_k = ( Σ_{m=-M}^{M} c_m r_{k-m} ) e^{-jφ} − b̂_k
Least-mean-square (LMS) algorithm
(derived from “method of steepest descent”)
for convergence towards minimum mean square error (MMSE)

Real part of the n:th coefficient:

Re{c_n(i+1)} = Re{c_n(i)} − Δ ∂|e_k|² / ∂Re{c_n}

Imaginary part of the n:th coefficient:

Im{c_n(i+1)} = Im{c_n(i)} − Δ ∂|e_k|² / ∂Im{c_n}

Phase:

φ(i+1) = φ(i) − Δ ∂|e_k|² / ∂φ,   where |e_k|² = e_k e_k*

This gives 2(2M+1)+1 equations; i is the iteration index and Δ is the step size of the iteration.
LMS algorithm (cont.)

After some calculation, the recursion equations are obtained in the form:

Re{c_n(i+1)} = Re{c_n(i)} − 2Δ Re{ ( e^{-jφ} Σ_{m=-M}^{M} c_m r_{k-m} − b̂_k ) r*_{k-n} e^{jφ} }

Im{c_n(i+1)} = Im{c_n(i)} − 2Δ Im{ ( e^{-jφ} Σ_{m=-M}^{M} c_m r_{k-m} − b̂_k ) r*_{k-n} e^{jφ} }

φ(i+1) = φ(i) + 2Δ Im{ b̂_k* e^{-jφ} Σ_{m=-M}^{M} c_m r_{k-m} }
Effect of iteration step size


Smaller Δ: slow acquisition, poor tracking performance.
Larger Δ: poor stability, large variation around the optimum value.
Decision feedback equalizer

bˆk 1 bˆk Q bˆk


T T
q1 qQ 1 qQ
?
FBF zk +

+ ek
rk  M rk  M
T T T LMS
c M c1 M cM 1 cM algorithm
for tap
coefficient
FFF
adjustment

Decision feedback equalizer (cont.)

The purpose is again to minimize J = E[ |e_k|² ] = E[ e_k e_k* ],

where

e_k = z_k − b̂_k = Σ_{m=-M}^{M} c_m r_{k-m} + Σ_{n=1}^{Q} q_n b̂_{k-n} − b̂_k

The feedforward filter (FFF) is similar to the filter in a linear equalizer:
• tap spacing smaller than the symbol interval is allowed => fractionally spaced equalizer
• oversampling by a factor of 2 or 4 is common

The feedback filter (FBF) is used for either reducing or canceling (difference: see next slide) samples of previous symbols at decision time instants:
• tap spacing must be equal to the symbol interval
Decision feedback equalizer (cont.)

The coefficients of the feedback filter (FBF) can be obtained in either of two ways:

Recursively (using the LMS algorithm) in a similar fashion as the FFF coefficients. — Proakis, Ed.3, Section 11-2

By calculation from the FFF coefficients and the channel coefficients (we achieve exact ISI cancellation in this way, but channel estimation is necessary):

q_n = − Σ_{m=-M}^{M} c_m h_{n-m},   n = 1, 2, ..., Q

Proakis, Ed.3, Section 10-3-1
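A tiny sketch of the second option, computing the FBF taps from given FFF taps and channel coefficients; all numbers are illustrative assumptions, not values from the slides:

```python
# Hypothetical FFF taps c_{-M}..c_M (M = 1) and channel taps h_{-1}..h_1 (assumptions)
c = {-1: -0.1, 0: 1.0, 1: -0.3}
h = {-1: 0.1, 0: 1.0, 1: 0.4}
Q = 2                                     # number of feedback taps

# q_n = - sum_m c_m h_{n-m},  n = 1..Q: the FBF then exactly cancels the
# postcursor ISI of the overall (channel + FFF) response at the decision instants
q = [-sum(cm * h.get(n - m, 0.0) for m, cm in c.items()) for n in range(1, Q + 1)]
print([round(qn, 6) for qn in q])         # [-0.1, 0.12]
```

Each q_n is simply the negated n:th postcursor tap of the convolution c*h, which is why channel estimation (the h values) is needed for this route.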
Channel estimation circuit
Proakis, Ed.3, Section 11-3

[Block diagram: the estimated symbols b̂_k ... b̂_{k-M} are fed through a transversal filter (filter length = CIR length) with taps c_0 ... c_M; the filter output r̂_k is subtracted from the k:th received signal sample r_k, and the resulting error drives the LMS algorithm, so that the taps converge to the estimated channel coefficients, ĥ_m = c_m.]
Channel estimation circuit (cont.)

1. Acquisition phase
   Uses a “training sequence”: the symbols are known at the receiver, b̂_k = b_k.

2. Tracking phase
   Uses estimated symbols (decision-directed mode). Symbol estimates are obtained from the decision circuit (note the delay in the feedback loop!). Since the estimation circuit is adaptive, time-varying channel coefficients can be tracked to some extent.

Alternatively: blind estimation (no training sequence)
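The acquisition phase can be sketched as an LMS identification run over a training sequence; the channel, step size and noise level below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical channel to be identified (illustrative assumption)
h_true = np.array([0.8, 0.5, -0.2])
b = rng.choice([-1.0, 1.0], size=3000)            # training symbols, so bhat_k = b_k
r = np.convolve(b, h_true)[:len(b)] + 0.02 * rng.standard_normal(len(b))

M = len(h_true)                                    # filter length = CIR length
c = np.zeros(M)                                    # estimator taps
delta = 0.02

for k in range(M - 1, len(b)):
    window = b[k - M + 1:k + 1][::-1]              # [b_k, b_{k-1}, ..., b_{k-M+1}]
    e = r[k] - c @ window                          # error between r_k and replica rhat_k
    c += 2 * delta * e * window                    # LMS update

print(np.allclose(c, h_true, atol=0.05))           # taps converge: hhat_m ~ h_m
```

In the tracking phase the same loop would run with decision-directed symbol estimates in place of `b`, which is what allows a slowly time-varying channel to be followed.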


Channel estimation circuit in receiver
Mandatory for MLSE-VA, optional for DFE

b k  Symbol estimates (with errors)

Training
symbols Estimated channel coefficients
(no
errors)
ĥ  m  b̂  k 
Channel Equalizer
estimation & decision
circuit circuit
r k 
Received signal samples “Clean” output symbols
Theoretical ISI cancellation receiver
(extension of DFE, for simulation of matched filter bound)

bˆk  P bˆk 1 bˆk 1 bˆk Q


Precursor cancellation Postcursor cancellation
of future symbols of previous symbols

rk  P Filter matched to
bˆk
sampled channel +
impulse response

If previous and future symbols can be estimated without error


(impossible in a practical system), matched filter performance
can be achieved.
MLSE-VA receiver structure

r t  Matched NW
y k  MLSE
b̂  k 
filter filter (VA)

fˆ  k 
Channel
estimation circuit
f k 

MLSE-VA circuit causes delay of estimated symbol sequence


before it is available for channel estimation
=> channel estimates may be out-of-date
(in a fast time-varying channel)
MLSE-VA receiver structure (cont.)
The probability of receiving the sample sequence y (note: vector form) of length N, conditioned on a certain symbol sequence estimate b̂ and overall channel estimate f̂, is (since we have AWGN):

p(y | b̂, f̂) = Π_{k=1}^{N} p(y_k | b̂, f̂) = (2πσ²)^{-N/2} exp( − (1/(2σ²)) Σ_{k=1}^{N} | y_k − Σ_{n=0}^{K-1} f̂_n b̂_{k-n} |² )

where K is the length of f(k). The factorization into a product over k is allowed because the noise samples are uncorrelated, due to the NW (= noise whitening) filter.

Objective: find the symbol sequence that maximizes this probability (select the best b̂ using the VA). Equivalently, minimize the metric

Σ_{k=1}^{N} | y_k − Σ_{n=0}^{K-1} f̂_n b̂_{k-n} |²
MLSE-VA receiver structure (cont.)

We want to choose the symbol sequence estimate and overall channel estimate which maximize the conditional probability. Since a product of exponentials <=> a sum of exponents, the metric to be minimized is a sum expression.

If the length of the overall channel impulse response in samples (or channel coefficients) is K, in other words the time span of the channel is (K−1)T, the next step is to construct a state trellis where a state is defined as a certain combination of the K−1 previous symbols causing ISI on the k:th symbol.

[Figure: the overall CIR f(k), spanning sample indices 0 to K−1. Note: this is the overall CIR, including the response of the matched filter and the NW filter.]
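The trellis search built from this metric can be sketched directly: enumerate the m^(K-1) states, accumulate the squared-error branch metric, and keep one survivor path per state. The overall CIR f, the alphabet and the noise level below are illustrative assumptions (m = 2, K = 3, so 4 states):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical overall CIR f_0..f_{K-1} and BPSK alphabet (illustrative assumptions)
f = np.array([1.0, 0.5, 0.2])
K = len(f)
b = rng.choice([-1.0, 1.0], size=200)
y = np.convolve(b, f)[:len(b)] + 0.1 * rng.standard_normal(len(b))

symbols = (-1.0, 1.0)
states = list(itertools.product(symbols, repeat=K - 1))   # m^(K-1) = 4 states

# One survivor path and accumulated metric per state; state s = (b_{k-1}, b_{k-2})
cost = {s: 0.0 for s in states}
path = {s: [] for s in states}
for k in range(len(y)):
    new_cost, new_path = {}, {}
    for s in states:
        for bk in symbols:                         # hypothesised symbol b_k
            recent = (bk,) + s                     # (b_k, b_{k-1}, ..., b_{k-K+1})
            y_hat = sum(f[n] * recent[n] for n in range(K) if k - n >= 0)
            metric = cost[s] + (y[k] - y_hat) ** 2  # accumulated squared-error metric
            ns = (bk,) + s[:-1]                    # next state
            if ns not in new_cost or metric < new_cost[ns]:
                new_cost[ns], new_path[ns] = metric, path[s] + [bk]
    cost, path = new_cost, new_path

b_hat = np.array(path[min(cost, key=cost.get)])    # survivor with the smallest metric
print(np.mean(b_hat != b) < 0.01)                  # essentially error-free at this SNR
```

Keeping only the best transition into each state at each step is exactly the VA selection rule described on the following slide; the complexity grows with the number of states m^(K-1), not with the number of candidate sequences m^N.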
MLSE-VA receiver structure (cont.)

At adjacent time instants, the symbol sequences causing ISI are correlated. As an example (m = 2, K = 5), consider a sliding window over the bit sequence 1 0 0 1 0 0 1 1:

At time k−3: ISI bits 1 0 0 1, detected bit 0
At time k−2: ISI bits 0 0 1 0, detected bit 0
At time k−1: ISI bits 0 1 0 0, detected bit 1
At time k:   ISI bits 1 0 0 1, detected bit 1

At each time instant, the K−1 = 4 most recent previous bits cause ISI on the bit detected at that instant, and the earlier bits no longer cause ISI. With m = 2, this gives 2⁴ = 16 states.

MLSE-VA receiver structure (cont.)
State trellis diagram

[Figure: a trellis with m^(K-1) states (m = alphabet size, K−1 = channel memory), drawn over the time instants k−3, k−2, k−1, k, k+1. The “best” state sequence is estimated by means of the Viterbi algorithm (VA).]

Of the transitions terminating in a certain state at a certain time instant, the VA selects the transition associated with the highest accumulated probability (up to that time instant) for further processing.
Proakis, Ed.3, Section 10-1-3
