Digital Communication
If $G(f) = 0$ for $|f| \ge W$ and $T_s = \frac{1}{2W}$, then

$$G_\delta(f) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right) \qquad (3.4)$$

With
1. $G(f) = 0$ for $|f| \ge W$
2. $f_s = 2W$
we find from Equation (3.5) that

$$G(f) = \frac{1}{2W}\, G_\delta(f), \qquad -W < f < W \qquad (3.6)$$

Substituting (3.4) into (3.6), we may rewrite $G(f)$ as

$$G(f) = \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right), \qquad -W < f < W \qquad (3.7)$$

To reconstruct $g(t)$ from the samples $g\!\left(\frac{n}{2W}\right)$, we may write

$$g(t) = \int_{-\infty}^{\infty} G(f) \exp(j 2\pi f t)\, df$$
$$= \int_{-W}^{W} \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right) \exp(j 2\pi f t)\, df$$
$$= \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{1}{2W} \int_{-W}^{W} \exp\!\left[j 2\pi f \left(t - \frac{n}{2W}\right)\right] df \qquad (3.8)$$
$$= \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{\sin(2\pi W t - n\pi)}{2\pi W t - n\pi}$$
$$= \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \mathrm{sinc}(2W t - n), \qquad -\infty < t < \infty \qquad (3.9)$$
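The interpolation formula (3.9) can be checked numerically. The sketch below reconstructs a band-limited cosine from its Nyquist-rate samples; the test signal, bandwidth, and evaluation instants are illustrative choices, not from the text.

```python
import numpy as np

def reconstruct(samples, W, t):
    """Eq. (3.9): g(t) = sum_n g(n/2W) * sinc(2*W*t - n).
    np.sinc(x) = sin(pi*x)/(pi*x), matching the sinc in (3.9)."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(2 * W * t - n)))

# Band-limited test signal: a 3 Hz cosine, W = 5 Hz, f_s = 2W = 10 Hz
W = 5.0
fs = 2 * W
n = np.arange(200)
samples = np.cos(2 * np.pi * 3.0 * n / fs)

t = 7.33  # an off-grid instant well inside the sampled interval
exact = np.cos(2 * np.pi * 3.0 * t)
approx = reconstruct(samples, W, t)
print(abs(exact - approx))  # small (the sum is truncated to 200 terms)
```

At a sample instant $t = n_0/2W$ the sum collapses to the stored sample, since $\mathrm{sinc}(2Wt - n)$ is 1 at $n = n_0$ and 0 at every other integer.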
Figure 3.4 (a) Anti-alias filtered spectrum of an information-bearing signal. (b) Spectrum
of instantaneously sampled version of the signal, assuming the use of a sampling rate
greater than the Nyquist rate. (c) Magnitude response of reconstruction filter.
Pulse-Amplitude Modulation:

$$h(t) = \begin{cases} 1, & 0 < t < T \\ \frac{1}{2}, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad (3.11)$$
The instantaneously sampled version of $m(t)$ is

$$m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s) \qquad (3.12)$$

$$m_\delta(t) \star h(t) = \int_{-\infty}^{\infty} m_\delta(\tau)\, h(t - \tau)\, d\tau
= \sum_{n=-\infty}^{\infty} m(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\, h(t - \tau)\, d\tau
= \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) \qquad (3.13)$$
The PAM signal s (t ) is
$$s(t) = m_\delta(t) \star h(t) \qquad (3.15)$$
$$S(f) = M_\delta(f)\, H(f) \qquad (3.16)$$
Recall (3.2):
$$g_\delta(t) \rightleftharpoons f_s \sum_{m=-\infty}^{\infty} G(f - m f_s) \qquad (3.2)$$
$$M_\delta(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s) \qquad (3.17)$$
$$S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - k f_s)\, H(f) \qquad (3.18)$$
where $f_s > 2 f_{\max}$ (sampling above the Nyquist rate).
Quantization Process:
Quantization Noise:
$\mu$-law:
$$|v| = \frac{\log(1 + \mu |m|)}{\log(1 + \mu)} \qquad (3.48)$$
$$\frac{d|m|}{d|v|} = \frac{\log(1 + \mu)}{\mu}\,(1 + \mu |m|) \qquad (3.49)$$
A-law:
$$|v| = \begin{cases} \dfrac{A|m|}{1 + \log A}, & 0 \le |m| \le \dfrac{1}{A} \\[6pt] \dfrac{1 + \log(A|m|)}{1 + \log A}, & \dfrac{1}{A} \le |m| \le 1 \end{cases} \qquad (3.50)$$
$$\frac{d|m|}{d|v|} = \begin{cases} \dfrac{1 + \log A}{A}, & 0 \le |m| \le \dfrac{1}{A} \\[6pt] (1 + \log A)\,|m|, & \dfrac{1}{A} \le |m| \le 1 \end{cases} \qquad (3.51)$$
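The $\mu$-law curve (3.48) is easy to experiment with. A minimal sketch, using $\mu = 255$ (the North American standard value) and a matching expander for the inverse; function names are this sketch's own:

```python
import numpy as np

MU = 255.0  # standard North American mu value

def mu_compress(m, mu=MU):
    """Eq. (3.48): |v| = log(1 + mu*|m|) / log(1 + mu), sign preserved."""
    return np.sign(m) * np.log1p(mu * np.abs(m)) / np.log1p(mu)

def mu_expand(v, mu=MU):
    """Inverse of (3.48): |m| = ((1 + mu)**|v| - 1) / mu."""
    return np.sign(v) * np.expm1(np.abs(v) * np.log1p(mu)) / mu

m = np.linspace(-1, 1, 11)
v = mu_compress(m)
print(np.max(np.abs(mu_expand(v) - m)))  # near-zero round-trip error
```

Note how small amplitudes are boosted before uniform quantization: an input of 0.01 maps to roughly 0.23, which is what equalizes the SNR across loud and quiet talkers.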
Figure 3.15 Line codes for the electrical representations of binary
data.
(a) Unipolar NRZ signaling. (b) Polar NRZ signaling.
(c) Unipolar RZ signaling. (d) Bipolar RZ signaling.
(e) Split-phase or Manchester code.
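The line codes of Figure 3.15 can be generated directly from their definitions. A sketch, with normalized $\pm 1$ levels and the Manchester polarity convention (symbol 1 = high-to-low) assumed:

```python
import numpy as np

def line_code(bits, scheme, spc=2):
    """Map bits to samples (spc samples per bit) for the line codes of
    Figure 3.15, with levels normalized to 0/+1/-1."""
    out = []
    last_one = -1  # polarity memory for bipolar RZ (alternate mark inversion)
    half = spc // 2
    for b in bits:
        if scheme == "unipolar_nrz":
            out += [b] * spc
        elif scheme == "polar_nrz":
            out += [1 if b else -1] * spc
        elif scheme == "unipolar_rz":
            out += [b] * half + [0] * (spc - half)
        elif scheme == "bipolar_rz":
            if b:
                last_one = -last_one          # successive 1s alternate in sign
                out += [last_one] * half + [0] * (spc - half)
            else:
                out += [0] * spc
        elif scheme == "manchester":          # 1: high->low, 0: low->high
            out += ([1] * half + [-1] * (spc - half)) if b else \
                   ([-1] * half + [1] * (spc - half))
    return np.array(out)

bits = [1, 0, 1, 1, 0]
print(line_code(bits, "bipolar_rz"))
```

Manchester output always averages to zero over each bit, which is why it has no DC component.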
Noise consideration in PCM systems:
(Channel noise, quantization noise)
Time-Division Multiplexing(TDM):
Digital Multiplexers :
$$J = E[x_n^2] - 2 \sum_{k=1}^{p} w_k E[x_n x_{n-k}] + \sum_{j=1}^{p} \sum_{k=1}^{p} w_j w_k E[x_{n-j} x_{n-k}] \qquad (3.62)$$

Assume $X(t)$ is a stationary process with zero mean ($E[x_n] = 0$), so
$$\sigma_X^2 = E[x_n^2] - (E[x_n])^2 = E[x_n^2]$$
The autocorrelation is
$$R_X(kT_s) = R_X(k) = E[x_n x_{n-k}]$$
We may simplify $J$ as
$$J = \sigma_X^2 - 2 \sum_{k=1}^{p} w_k R_X(k) + \sum_{j=1}^{p} \sum_{k=1}^{p} w_j w_k R_X(k - j) \qquad (3.63)$$
Setting
$$\frac{\partial J}{\partial w_k} = -2 R_X(k) + 2 \sum_{j=1}^{p} w_j R_X(k - j) = 0$$
gives
$$\sum_{j=1}^{p} w_j R_X(k - j) = R_X(k), \qquad k = 1, 2, \ldots, p \qquad (3.64)$$
which involves only the values $R_X(0), R_X(1), \ldots, R_X(p)$.
Substituting (3.64) into (3.63) yields
$$J_{\min} = \sigma_X^2 - 2 \sum_{k=1}^{p} w_k R_X(k) + \sum_{k=1}^{p} w_k R_X(k) = \sigma_X^2 - \sum_{k=1}^{p} w_k R_X(k)$$
The steepest-descent update is
$$w_k(n+1) = w_k(n) - \frac{1}{2}\mu\, g_k, \qquad k = 1, 2, \ldots, p \qquad (3.69)$$
where $\mu$ is a step-size parameter and the $\frac{1}{2}$ is for convenience of presentation.
$$g_k = \frac{\partial J}{\partial w_k} = -2 R_X(k) + 2 \sum_{j=1}^{p} w_j R_X(k - j) = -2 E[x_n x_{n-k}] + 2 \sum_{j=1}^{p} w_j E[x_{n-j} x_{n-k}], \qquad k = 1, 2, \ldots, p \qquad (3.70)$$
Replacing the expectations by their instantaneous estimates gives
$$\hat w_k(n+1) = \hat w_k(n) + \mu\, x_{n-k} \left( x_n - \sum_{j=1}^{p} \hat w_j(n)\, x_{n-j} \right) = \hat w_k(n) + \mu\, x_{n-k}\, e_n, \qquad k = 1, 2, \ldots, p \qquad (3.72)$$
where, by (3.59) and (3.60),
$$e_n = x_n - \sum_{j=1}^{p} \hat w_j(n)\, x_{n-j} \qquad (3.73)$$
Usually PCM uses a sampling rate higher than the Nyquist rate, so the encoded signal contains redundant information. DPCM can efficiently remove this redundancy.
$$e_n = m_n - \hat m_n \qquad (3.74)$$
where $\hat m_n$ is a prediction of $m_n$.
The quantizer output is
$$e_{q,n} = e_n + q_n \qquad (3.75)$$
where $q_n$ is the quantization error.
The prediction-filter input is
$$m_{q,n} = \hat m_n + e_{q,n} = \hat m_n + e_n + q_n \qquad (3.77)$$
From (3.74), $\hat m_n + e_n = m_n$, so
$$m_{q,n} = m_n + q_n \qquad (3.78)$$
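The loop (3.74)–(3.78) can be sketched with a first-order predictor; the predictor coefficient and the quantizer step below are illustrative assumptions:

```python
import numpy as np

def dpcm(m_seq, w, step):
    """First-order DPCM sketch: prediction m_hat_n = w * m_q,(n-1),
    uniform quantizer of the given step (Eqs. 3.74-3.78)."""
    mq_prev = 0.0
    recon = np.zeros_like(m_seq)
    for n, m in enumerate(m_seq):
        m_hat = w * mq_prev                 # prediction of m_n
        e = m - m_hat                       # prediction error, (3.74)
        eq = step * np.round(e / step)      # quantized error, (3.75)
        mq_prev = m_hat + eq                # prediction-filter input, (3.77)
        recon[n] = mq_prev                  # equals m_n + q_n, per (3.78)
    return recon

t = np.arange(200)
m = np.sin(2 * np.pi * t / 50)
out = dpcm(m, w=0.9, step=0.05)
print(np.max(np.abs(out - m)))  # never exceeds step/2, as (3.78) predicts
```

Feeding the quantized reconstruction (not the clean input) back into the predictor is what keeps the error from accumulating: by (3.78) the reconstruction differs from $m_n$ by a single quantization error.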
Processing Gain:
Duo-binary Signaling:
$$a_k = \begin{cases} +1, & \text{if symbol } b_k \text{ is } 1 \\ -1, & \text{if symbol } b_k \text{ is } 0 \end{cases} \qquad c_k = a_k + a_{k-1}$$
$$H_I(f) = H_{\text{Nyquist}}(f)\left[1 + \exp(-j 2\pi f T_b)\right] = H_{\text{Nyquist}}(f)\left[\exp(j\pi f T_b) + \exp(-j\pi f T_b)\right]\exp(-j\pi f T_b) = 2 H_{\text{Nyquist}}(f) \cos(\pi f T_b) \exp(-j\pi f T_b)$$
$$H_{\text{Nyquist}}(f) = \begin{cases} 1, & |f| \le 1/2T_b \\ 0, & \text{otherwise} \end{cases}$$
$$h_I(t) = \frac{\sin(\pi t / T_b)}{\pi t / T_b} + \frac{\sin[\pi (t - T_b)/T_b]}{\pi (t - T_b)/T_b} = \frac{T_b^2 \sin(\pi t / T_b)}{\pi t\, (T_b - t)}$$
With precoding $d_k = b_k \oplus d_{k-1}$ and $c_k = a_k + a_{k-1}$,
$$c_k = \begin{cases} 0, & \text{if data symbol } b_k \text{ is } 1 \\ \pm 2, & \text{if data symbol } b_k \text{ is } 0 \end{cases}$$
Modified duo-binary (class IV): $c_k = a_k - a_{k-2}$,
$$H_{IV}(f) = H_{\text{Nyquist}}(f)\left[1 - \exp(-j 4\pi f T_b)\right] = 2 j\, H_{\text{Nyquist}}(f) \sin(2\pi f T_b) \exp(-j 2\pi f T_b)$$
Precoding: $d_k = b_k \oplus d_{k-2}$, i.e. $d_k$ is symbol 1 if either $b_k$ or $d_{k-2}$ (but not both) is 1, and symbol 0 otherwise.
Decision rule at the receiver:
If $|c_k| > 1$, say symbol $b_k$ is 1.
If $|c_k| < 1$, say symbol $b_k$ is 0.
If $|c_k| = 1$, make a random guess in favor of symbol 1 or 0.
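The precoded duo-binary scheme ($d_k = b_k \oplus d_{k-1}$, $c_k = a_k + a_{k-1}$) can be sketched end to end. Since $b_k = 1$ yields $c_k = 0$ and $b_k = 0$ yields $c_k = \pm 2$, a noise-free receiver decides $b_k = 1$ iff $|c_k| < 1$; the initial reference bit is an assumption of this sketch:

```python
def duobinary_tx_rx(bits):
    """Precoded duobinary: d_k = b_k XOR d_{k-1}, a_k = +/-1 from d_k,
    c_k = a_k + a_{k-1}; decide b_k = 1 iff |c_k| < 1."""
    d_prev = 0                 # reference bit d_0 (assumed 0)
    a_prev = -1                # its corresponding level
    decided = []
    for b in bits:
        d = b ^ d_prev         # precoder
        a = 1 if d else -1     # level encoder
        c = a + a_prev         # ideal (noise-free) receive sample
        decided.append(1 if abs(c) < 1 else 0)
        d_prev, a_prev = d, a
    return decided

bits = [0, 0, 1, 0, 1, 1, 0]
print(duobinary_tx_rx(bits) == bits)  # decisions recover the data
```

The point of the precoder is visible here: each decision uses only the current sample $c_k$, so a single symbol error cannot propagate.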
Generalized form of correlative-level coding:
$$h(t) = \sum_{n=0}^{N-1} w_n\, \mathrm{sinc}\!\left(\frac{t}{T_b} - n\right)$$
Baseband M-ary PAM Transmission:
A tapped-delay-line filter with taps $w_k$ gives the combined response
$$p(t) = \sum_{k=-N}^{N} w_k\, c(t - kT)$$
and distortionless (zero-ISI) transmission requires
$$p(nT) = \begin{cases} 1, & n = 0 \\ 0, & n = \pm 1, \pm 2, \ldots, \pm N \end{cases}$$
Zero-forcing equalizer
Optimum in the sense that it minimizes the peak distortion (worst-case ISI)
Simple implementation
The longer the equalizer, the more closely it approximates the ideal condition for distortionless transmission
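The zero-forcing design reduces to a small linear solve: force $p(nT) = \delta_{n0}$ for $n = -N, \ldots, N$. A sketch, with an illustrative channel (not from the text):

```python
import numpy as np

def zero_forcing(c, c_center, N):
    """Solve for 2N+1 taps w so that p(n) = sum_k w_k c(n - k)
    equals delta(n) for n = -N..N.
    c: channel impulse response samples; c_center: index of c(0)."""
    size = 2 * N + 1
    A = np.zeros((size, size))
    for i, n in enumerate(range(-N, N + 1)):      # output sample index n
        for j, k in enumerate(range(-N, N + 1)):  # tap index k
            idx = c_center + n - k
            if 0 <= idx < len(c):
                A[i, j] = c[idx]
    b = np.zeros(size)
    b[N] = 1.0                                    # p(0) = 1
    return np.linalg.solve(A, b)

c = np.array([0.1, 1.0, 0.2])   # mild-ISI channel, c(0) = 1.0 at index 1
w = zero_forcing(c, c_center=1, N=2)
p = np.convolve(c, w)           # combined channel + equalizer response
print(np.round(p, 3))           # delta at the center, zeros for |n| <= N
```

Note the residual ISI just outside $|n| \le N$: the finite equalizer only forces the constrained span to zero, which is the "longer is better" remark above.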
Adaptive Equalizer :
Least-Mean-Square Algorithm:
The mean-square error is
$$\varepsilon = E[e_n^2]$$
Its gradient involves the ensemble-averaged cross-correlation between the error and the input:
$$\frac{\partial \varepsilon}{\partial w_k} = 2 E\!\left[e_n \frac{\partial e_n}{\partial w_k}\right] = -2 E[e_n x_{n-k}] = -2 R_{ex}(k), \qquad R_{ex}(k) = E[e_n x_{n-k}]$$
Optimality condition for minimum mean-square error:
$$\frac{\partial \varepsilon}{\partial w_k} = 0 \quad \text{for } k = 0, \pm 1, \ldots, \pm N$$
The mean-square error is a second-order, parabolic function of the tap weights: a multidimensional bowl-shaped surface.
The adaptive process makes successive adjustments of the tap weights, seeking the bottom of the bowl (minimum value).
Steepest-descent algorithm:
The successive adjustments to the tap weights are in the direction opposite to the gradient vector.
Recursive formula ($\mu$: step-size parameter):
$$w_k(n+1) = w_k(n) - \frac{1}{2}\mu \frac{\partial \varepsilon}{\partial w_k} = w_k(n) + \mu R_{ex}(k), \qquad k = 0, \pm 1, \ldots, \pm N$$
Least-Mean-Square Algorithm
The steepest-descent algorithm is not usable in an unknown environment, so the LMS algorithm approximates it using instantaneous estimates:
$$\hat R_{ex}(k) = e_n x_{n-k}$$
$$\hat w_k(n+1) = \hat w_k(n) + \mu\, e_n x_{n-k}$$
Analog
CCD; tap weights stored in digital memory; analog sampling and multiplication
Used when the symbol rate is too high for digital processing
Digital
Samples are quantized and stored in a shift register
Tap weights stored in a shift register; digital multiplication
Programmable digital
Microprocessor-based
Flexibility
Same hardware may be time-shared
$$y_n = \sum_k h_k x_{n-k} = h_0 x_n + \sum_{k<0} h_k x_{n-k} + \sum_{k>0} h_k x_{n-k}$$
Using data decisions made on the basis of the precursors to take care of the postcursors.
The decisions would obviously have to be correct.
Feedforward section: tapped-delay-line equalizer
Feedback section: the decision is made on previously detected symbols of the input sequence
Nonlinear feedback loop by the decision device
$$\mathbf{c}_n = \begin{bmatrix} \mathbf{w}_n^{(1)} \\ \mathbf{w}_n^{(2)} \end{bmatrix}, \qquad \mathbf{v}_n = \begin{bmatrix} \mathbf{x}_n \\ \mathbf{a}_n \end{bmatrix}, \qquad e_n = a_n - \mathbf{c}_n^T \mathbf{v}_n$$
$$\mathbf{w}_{n+1}^{(1)} = \mathbf{w}_n^{(1)} + \mu_1 e_n \mathbf{x}_n, \qquad \mathbf{w}_{n+1}^{(2)} = \mathbf{w}_n^{(2)} + \mu_2 e_n \mathbf{a}_n$$
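The two-section decision-feedback update above can be sketched with a known training sequence; the tap counts, step size, and channel below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Feedforward taps w1 act on received samples, feedback taps w2 on past
# decisions; both adapt from the common error e_n = a_n - c_n^T v_n.
Nf, Nb, mu = 5, 2, 0.01
w1, w2 = np.zeros(Nf), np.zeros(Nb)
h = np.array([1.0, 0.4, 0.2])            # channel with postcursor ISI

a = rng.choice([-1.0, 1.0], size=4000)   # known training symbols
x = np.convolve(a, h)[:len(a)]           # received samples (noise-free)

errs = []
for n in range(Nf, len(a)):
    xn = x[n-Nf+1:n+1][::-1]             # [x_n, ..., x_{n-Nf+1}]
    an = a[n-Nb:n][::-1]                 # past (correct) decisions
    e = a[n] - (w1 @ xn + w2 @ an)       # e_n = a_n - c_n^T v_n
    w1 += mu * e * xn                    # feedforward update
    w2 += mu * e * an                    # feedback update
    errs.append(e * e)

print(np.mean(errs[-500:]))              # residual MSE is tiny after training
```

For this channel the feedback taps converge toward the negated postcursors $[-0.4, -0.2]$, cancelling them exactly from past decisions rather than trying to invert the channel.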
Eye Pattern:
FSK Bandwidth:
Applications
On voice-grade lines, used up to 1200bps
Used for high-frequency (3 to 30 MHz) radio
transmission
Used at higher frequencies on LANs that use coaxial cable
DBPSK:
Differential BPSK
0 = same phase as last signal element
1 = 180° phase shift from last signal element
$$s(t) = \begin{cases} A \cos\!\left(2\pi f_c t + \dfrac{\pi}{4}\right), & 11 \\[4pt] A \cos\!\left(2\pi f_c t + \dfrac{3\pi}{4}\right), & 01 \\[4pt] A \cos\!\left(2\pi f_c t - \dfrac{3\pi}{4}\right), & 00 \\[4pt] A \cos\!\left(2\pi f_c t - \dfrac{\pi}{4}\right), & 10 \end{cases}$$
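The four-phase signal set above is a Gray-coded constellation. A sketch of the dibit-to-point mapping in complex baseband (the dictionary mirrors the case analysis above):

```python
import numpy as np

# Dibit-to-phase mapping, matching the four-phase signal set above
PHASES = {(1, 1): np.pi / 4, (0, 1): 3 * np.pi / 4,
          (0, 0): -3 * np.pi / 4, (1, 0): -np.pi / 4}

def qpsk_symbols(bits, A=1.0):
    """Map pairs of bits to complex baseband points A*exp(j*phase)."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([A * np.exp(1j * PHASES[p]) for p in pairs])

s = qpsk_symbols([1, 1, 0, 0])
print(np.round(s, 3))  # points at +45 and -135 degrees
```

Adjacent constellation points differ in only one bit, so a nearest-neighbor decision error costs a single bit.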
Concept of a constellation :
M-ary PSK:
Using multiple phase angles, each angle possibly combined with more than one amplitude, multiple signal elements can be achieved.
$$D = \frac{R}{L} = \frac{R}{\log_2 M}$$
where $D$ is the modulation (baud) rate, $R$ the data rate in bits per second, $M$ the number of different signal elements, and $L$ the number of bits per signal element.
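A quick check of $D = R/\log_2 M$ (the figures below are illustrative):

```python
from math import log2

def baud_rate(R, M):
    """D = R / log2(M): signaling rate for data rate R with M elements."""
    return R / log2(M)

print(baud_rate(9600, 16))  # 16-ary signaling: 9600 bps in 2400 baud
```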
Figure 6.26 Block diagrams for (a) binary FSK transmitter and
(b) coherent binary FSK receiver.
Fig. 6.28
Figure 6.31 Block diagrams for (a) MSK transmitter and (b)
coherent MSK receiver.
UNIT IV BASEBAND RECEPTION TECHNIQUES
Block Codes:
$$\mathbf{d}_3 = \mathbf{d}_1 \oplus \mathbf{d}_2$$
So,
$$\mathbf{c}_3 = \sum_{i=1}^{k} d_{3i}\,\mathbf{a}_i = \sum_{i=1}^{k} (d_{1i} \oplus d_{2i})\,\mathbf{a}_i = \sum_{i=1}^{k} d_{1i}\,\mathbf{a}_i \oplus \sum_{i=1}^{k} d_{2i}\,\mathbf{a}_i$$
$$\mathbf{c}_3 = \mathbf{c}_1 \oplus \mathbf{c}_2$$
Error Correcting Power of LBC:
$$G = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{bmatrix}$$
$\mathbf{a}_1 = [1\ 0\ 1\ 1]$
$\mathbf{a}_2 = [0\ 1\ 0\ 1]$
For $\mathbf{d} = [1\ 1]$, then:
$$\mathbf{c} = \mathbf{d}\,G = [1\ 0\ 1\ 1] \oplus [0\ 1\ 0\ 1] = [1\ 1\ 1\ 0]$$
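The encoding example can be verified with modulo-2 matrix arithmetic:

```python
import numpy as np

G = np.array([[1, 0, 1, 1],
              [0, 1, 0, 1]])   # generator matrix from the example

def encode(d, G):
    """Codeword c = d G with modulo-2 (GF(2)) arithmetic."""
    return np.mod(np.array(d) @ G, 2)

print(encode([1, 1], G))  # [1 1 1 0], the XOR of the two rows of G
```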
Systematic Codes:
This is so since,
$$\mathbf{c} = \sum_{i=1}^{k} d_i\,\mathbf{a}_i$$
and so,
$$\mathbf{b}_j \cdot \mathbf{c} = \mathbf{b}_j \cdot \sum_{i=1}^{k} d_i\,\mathbf{a}_i = \sum_{i=1}^{k} d_i\,(\mathbf{a}_i \cdot \mathbf{b}_j) = 0$$
Error Syndrome:
$$H^T = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
\mathbf{s} = \mathbf{c}\,H^T = [1\ 1\ 0\ 1\ 0\ 0\ 1]\, H^T = [0\ 0\ 0]$$
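A sketch of the syndrome check $\mathbf{s} = \mathbf{r}H^T$ for a Hamming (7,4) code in the systematic form of this example (the matrix values are taken to be those shown above):

```python
import numpy as np

# Parity-check transpose for the Hamming (7,4) example
HT = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1],
               [1, 0, 0], [0, 1, 0], [0, 0, 1]])

def syndrome(r, HT):
    """s = r H^T over GF(2); s = 0 for a valid codeword."""
    return np.mod(np.array(r) @ HT, 2)

c = [1, 1, 0, 1, 0, 0, 1]   # a codeword: zero syndrome
r = [1, 1, 0, 1, 1, 0, 1]   # same word with bit 4 flipped
print(syndrome(c, HT), syndrome(r, HT))
```

The nonzero syndrome equals the row of $H^T$ at the error position, which is what makes single-error correction a table lookup.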
Standard Array:
c1 (all zero)   c2        ...   cM        s0
e1              c2 + e1   ...   cM + e1   s1
e2              c2 + e2   ...   cM + e2   s2
e3              c2 + e3   ...   cM + e3   s3
...
eN              c2 + eN   ...   cM + eN   sN
Hamming Codes:
$$x_j''' = m_{j-2} \oplus m_j$$
Here each message bit influences a span of $C = n(L+1) = 3(1+1) = 6$ successive output bits.
Convolution point of view in encoding and generator matrix:
$$g^{(2)} = [1\ 1\ 1\ 1]$$
$$x_j' = m_{j-2} \oplus m_{j-1} \oplus m_j$$
$$x_j'' = m_{j-2} \oplus m_j$$
$$x_{\text{out}} = x_1'\, x_1''\, x_2'\, x_2''\, x_3'\, x_3'' \ldots$$
Representing convolutional codes compactly: code trellis and
state diagram:
State diagram
$$\ln p(\mathbf{y} \mid \mathbf{x}^m) = \sum_{j=0} \ln p(y_j \mid x_j^m)$$
which is maximized by the correct path. The exhaustive maximum-likelihood method must search all the paths in the trellis ($2^k$ paths emerging from or entering each of the $2^{L+1}$ states for an $(n, k, L)$ code). The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis.
H(7,4)
Generator matrix G: first 4-by-4 identical matrix
Transmission vector x
Received vector r
and error vector e
Parity check matrix H
Error Correction:
$$g^{(1)} = [1\ 0\ 1\ 1], \qquad g^{(2)} = [1\ 1\ 1\ 1]$$
Branch output sequence along the trellis: 11, 00, 01, 11, 01, 11, 10, 01
correct: $1+1+2+2+2 = 8$; $8 \times 0.11 = 0.88$
false: $1+1+0+0+0 = 2$; $2 \times 2.30 = 4.6$
total path metric: $0.88 + 4.6 = 5.48$
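An encoder for the generators $g^{(1)} = [1\,0\,1\,1]$, $g^{(2)} = [1\,1\,1\,1]$ can be sketched as a shift register; appending zero bits to flush the register is an assumption of this sketch:

```python
def conv_encode(msg, g1=(1, 0, 1, 1), g2=(1, 1, 1, 1)):
    """Rate-1/2 convolutional encoder for generators g1, g2;
    output bits are interleaved as x'_1 x''_1 x'_2 x''_2 ..."""
    state = [0] * (len(g1) - 1)                   # shift-register contents
    out = []
    for bit in list(msg) + [0] * (len(g1) - 1):   # zero bits flush the register
        window = [bit] + state                    # [m_j, m_{j-1}, m_{j-2}, m_{j-3}]
        out.append(sum(b & g for b, g in zip(window, g1)) % 2)
        out.append(sum(b & g for b, g in zip(window, g2)) % 2)
        state = window[:-1]
    return out

print(conv_encode([1]))  # interleaved impulse response: 11 01 11 11
```

The impulse response interleaves the two generator sequences, which is a quick sanity check on any encoder implementation.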
Turbo Codes:
Background
Turbo codes were proposed by Berrou and Glavieux in
the 1993 International Conference in Communications.
Performance within 0.5 dB of the channel capacity limit
for BPSK was demonstrated.
Features of turbo codes
Parallel concatenated coding
Recursive convolutional encoders
Pseudo-random interleaving
Iterative decoding
Comparison:
Rate 1/2 Codes.
K=5 turbo code.
K=14 convolutional code.
Plot is from:
L. Perez, "Turbo Codes," chapter 8 of Trellis Coding by C. Schlegel, IEEE Press, 1997.
Pseudo-random Interleaving:
In coded systems:
Performance is dominated by low weight code words.
A good code:
will produce low weight outputs with very low
probability.
An RSC code:
Produces low weight outputs with fairly low
probability.
However, some inputs still cause low weight outputs.
Because of the interleaver:
The probability that both encoders have inputs that
cause low weight outputs is very low.
Therefore the parallel concatenation of both encoders
will produce a good code.
Iterative Decoding:
There is one decoder for each elementary encoder.
Each decoder estimates the a posteriori probability (APP) of
each data bit.
The APPs are used as a priori information by the other
decoder.
Decoding continues for a set number of iterations.
Performance generally improves from iteration to iteration, but follows a law of diminishing returns.
The Turbo-Principle:
Turbo codes get their name because the decoder uses feedback,
like a turbo engine
Performance as a Function of Number of Iterations:
[BER vs. Eb/No (dB) plot, 0.5 to 2 dB: BER falls from near 10^0 toward 10^-7, with curves for 1, 2, 3, 6, 10, and 18 decoding iterations; each additional iteration lowers the BER.]
Gains:
Basic Operation:
Typically 2^k carrier frequencies forming 2^k channels
Channel spacing corresponds with the bandwidth of the input
Each channel is used for a fixed interval
300 ms in IEEE 802.11
Some number of bits transmitted using some encoding scheme
May be fractions of a bit (see later)
Sequence dictated by spreading code
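The hop-sequence idea can be sketched as follows; the base frequency, channel spacing, and seeded generator (standing in for the spreading code) are illustrative assumptions, not values from the text:

```python
import random

def hop_sequence(k, n_hops, f0=2.402e9, spacing=1e6, seed=42):
    """Pseudo-random frequency-hopping sketch: 2**k channels of the
    given spacing; the (stand-in) spreading code picks the channel
    used during each dwell interval."""
    rng = random.Random(seed)                     # stands in for the spreading code
    channels = [f0 + i * spacing for i in range(2 ** k)]
    return [rng.choice(channels) for _ in range(n_hops)]

seq = hop_sequence(k=3, n_hops=5)
print(seq)  # one of the 2**3 = 8 carriers per dwell interval
```

A receiver sharing the same code and seed regenerates the identical sequence, which is how it stays tuned to the transmitter.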
Frequency Hopping Example:
Frequency Hopping Spread Spectrum System (Transmitter):