
Principles of Communications

draft
Contents

1 Introduction

2 Signals and Information
  2.1 Signal as Random Process
  2.2 Discretization
  2.3 Information Capacity

3 Codes and Coding
  3.1 Error Proof Codes
  3.2 Block Codes
  3.3 Cyclic Codes
  3.4 Convolution Codes
  3.5 Effectiveness of Codes

4 Communication Lines, Channels and their Models
  4.1 Communication Line as Four-Pole System
  4.2 Noise
  4.3 Radio Channels
  4.4 Discrete Channel Models

5 Signal Reception in Noise
  5.1 Decision Rules
  5.2 Optimal Receiver in White Noise Environment
  5.3 Noise Resistance
  5.4 Matched Filtering

6 Multiplexing of Communication Lines
  6.1 Methods of Sharing of Communication Lines
  6.2 Frequency Division Multiple Access
  6.3 Time Division Multiple Access
  6.4 Code Division Multiple Access

Chapter 1

Introduction

In our information society, all people, enterprises and organizations communicate with each other using various communication systems and networks.

Communication theory deals with various types of information transmission processes. Information sources produce messages or signals that are transmitted through telecommunication networks.

Signals of natural sources are usually continuous time functions. Sound and visual signals are continuous, and for a long time they were broadcast directly as continuous functions. In modern communication systems all types of signals (audio, video) are discretized and become digital signals that are represented as a set or sequence of numbers. For example, a digital signal in binary form is: 10111001.

To improve noise resistance, digital signals are coded. Coding does not change the representation form of the signal: it remains digital, only some control numbers are inserted. These numbers are used to verify whether all numbers were received correctly.

Before transmission, signals are discretized, quantized and compressed in order to reduce the amount of transmitted data. That is why source and channel coding and decomposition of data into separate packets are of primary importance in telecommunication theory.

Messages and signals are transmitted by electromagnetic waves that propagate through different media: wires, fiber optic and radio lines. So, telecommunication theory also examines various models of transmission lines and accounts for the influence of noise.

Signals that are transmitted through specific communication lines and channels must be adapted to the line properties. For example, to transmit a digital signal through a communication line, it must be converted to a sequence of specific continuous time functions that can physically propagate over that line.

Signals that propagate across a communication line are distorted. During reception of distorted signals, complex signal processing procedures are applied to restore the original information as correctly as possible.

Another important problem in telecommunication theory is the division of communication lines, as shared resources, into different channels. Division into channels is the typical technology of the traditional telephone network. When two people speak by phone, a communication channel is created between their phones, over which signals are transmitted.

In a telephone network the number of communication channels is always less than the number of subscribers. Common communication channels are switched: channels of separate lines are connected together to make one continuous channel and for some time are attached to a specific subscriber. Temporary channel allocation problems are investigated by teletraffic theory.

Another way to share common communication resources is transmission of messages and signals from many subscribers over the same transmission media and the same channels. In that case, messages are divided into packets, while the transmission media and channels remain common.

Chapter 2

Signals and Information

2.1 Signal as Random Process


Signals produced by typical information sources are described as random processes. Observed realizations of such processes are always different continuous time functions s(t) (Fig. 2.1).

The main characteristic of such functions is the complex spectral density S(jω). Because each time a different function is observed, such a random process is characterized by the power spectral density:

Gs(ω) = M{S(jω)S(−jω)}/T.   (2.1)

Here M{} is the expectation operator and T is the observation interval of the functions s(t).

Figure 2.1: Realizations of random signal.

The Fourier transform relates the power spectral density to the correlation function Ks(τ) of the random signal:

Gs(ω) = F{Ks(τ)},   Ks(τ) = F⁻¹{Gs(ω)}.   (2.2)

That enables us to find the power spectral density if we have the correlation function (obtained experimentally), or vice versa.

The correlation function of a random process is defined as the expectation of the product of the signal s(t) and the same signal delayed, s(t − τ):

Ks(τ) = M{s(t)s(t − τ)}.   (2.3)

Such a function is also called the autocorrelation function.
The biggest value of the correlation function is at the point τ = 0 (Fig. 2.2). That value is called the dispersion, Ks(0) = σ². When τ increases, the correlation function decreases. If samples of the process s(t) and s(t − τ) are statistically independent, the correlation function reaches zero when τ is large. The more adjacent samples are related, the more slowly the correlation function changes. Therefore, the correlation function shows how fast the statistical dependency of the process changes.

Figure 2.2: Correlation function example.

Often, instead of the correlation function, the normalized correlation function R(τ) = K(τ)/σ² is used.

Experimentally, the correlation function can be obtained using a device called a correlometer (Fig. 2.3). Here the signal is applied to the multiplier directly and through a delay line DL with delay time τ. An integrator can serve as the expectation estimation device.

Figure 2.3: Correlometer block diagram.

Such a correlometer can measure one point of the correlation function at a time. In order to measure more points, measurements must be repeated, each time with a different τ.
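As an illustration, a software analogue of the correlometer can estimate one point of the autocorrelation function of a sampled realization by multiplying the signal with its delayed copy and averaging. The sketch below is a minimal Python example; the test signal, the lag values and the sample count are arbitrary assumptions chosen only for demonstration.

import numpy as np

def correlometer(s, lag):
    """Estimate one point K(lag) of the autocorrelation function:
    average of s(t) * s(t - lag) over the observation interval."""
    if lag == 0:
        return np.mean(s * s)
    return np.mean(s[lag:] * s[:-lag])

# Example realization: low-pass filtered white noise (assumed test signal).
rng = np.random.default_rng(0)
s = np.convolve(rng.standard_normal(10000), np.ones(20) / 20, mode="valid")

K = [correlometer(s, lag) for lag in range(0, 60)]
R = [k / K[0] for k in K]          # normalized correlation function R(tau)
print("dispersion K(0) =", K[0])
print("R(tau) for tau = 0, 10, 30:", R[0], R[10], R[30])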

2.2 Discretization

In modern communication systems signals are transmitted in digital form. Therefore, before transmission continuous signals are discretized. According to the discretization (sampling) theorem, a signal with maximal frequency F can be represented using discrete time samples spaced by ∆t = 1/(2F) (Fig. 2.4).

Figure 2.4: Discretized signal.

In other words, a continuous signal can be represented as a sequence of numbers. A signal of length T corresponds to 2FT samples.

Signals are discretized not only along the time axis, but also in the amplitude of each sample. That process is called quantization. The discretized amplitudes of the samples are represented by binary numbers, i.e. a binary code. The number of bits k depends on the required precision. For example, telephony signals are sampled at 8 kHz and each sample is coded by an 8-bit binary code. So, the continuous speech signal in telephony is converted to a 64 kbit/s data stream.
The described signal discretization is called pulse code modulation (PCM). A PCM signal is restored to a continuous signal using a low-pass filter with cut-off frequency F.

However, the spectrum of telephony signals is limited to 3.4 kHz, and that is why PCM is not an optimal solution for telephony (a 64 kbit/s data stream cannot be sent over a telephone line).
Therefore, before transmission the digital signal must be compressed using, for example, differential PCM (DPCM). DPCM is based on the idea that speech signals change comparatively slowly and their future values can be predicted. From the signal at the converter input s(n) the predicted signal ŝ(n) is subtracted, giving the differential signal (error signal) ∆s(n), which has a smaller dynamic range:

Figure 2.5: DPCM converter.

∆s(n) = s(n) − ŝ(n).   (2.4)


The predicted value is usually expressed as a weighted sum of previous values:

ŝ(n) = a1 s(n − 1) + a2 s(n − 2) + ... + aN s(n − N).   (2.5)

The coefficients ai are obtained by minimizing the mean square error (MSE), i.e. the mean of the square of ∆s(n). In the simplest case, when N = 1, the solution is a1 = R1. Here R1 is the correlation coefficient of the signal samples s(n) and s(n − 1).

If it is necessary to find N coefficients, we have to solve a system of linear equations that in matrix form is:

r = R · a.   (2.6)

Here R is the normalized correlation matrix of the signal samples s(n), s(n − 1), ..., s(n − N), r is a column vector (the first column of matrix R) and a is the column vector of coefficients a1, a2, ..., aN.

If a large number of samples is used in prediction (N >> 1), then the speech signal can be transmitted with high quality using DPCM at data rates of 32 kbit/s or even 16 kbit/s. This is also achieved by adapting the prediction coefficients to the particular speech signal. To reduce the data rate even more, considerably more complex methods must be applied.
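A minimal numerical sketch of Eq. (2.6) is given below: it estimates the normalized correlation matrix R and vector r from a sampled signal and solves for the prediction coefficients a. The Python code, the test signal and the prediction order N = 3 are illustrative assumptions, not part of the original text.

import numpy as np

def lpc_coefficients(s, N):
    """Solve R a = r for the DPCM prediction coefficients (Eq. 2.6)."""
    # Normalized autocorrelation values R_0 ... R_N of the signal.
    Rvals = np.array([np.mean(s[k:] * s[:len(s) - k]) for k in range(N + 1)])
    Rvals = Rvals / Rvals[0]
    R = np.array([[Rvals[abs(i - j)] for j in range(N)] for i in range(N)])  # correlation matrix
    r = Rvals[1:N + 1]                                                       # right-hand side vector
    return np.linalg.solve(R, r)

rng = np.random.default_rng(1)
s = np.convolve(rng.standard_normal(8000), np.ones(8) / 8, mode="valid")  # slowly varying test signal
a = lpc_coefficients(s, N=3)
s_hat = a[0] * s[2:-1] + a[1] * s[1:-2] + a[2] * s[:-3]   # predicted samples
delta = s[3:] - s_hat                                     # differential (error) signal
print("coefficients a =", a)
print("variance ratio var(delta)/var(s) =", np.var(delta) / np.var(s))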

2.3 Information Capacity

Signals are used to transmit information. But how much information is transmitted? For that purpose we introduce the concept of information capacity.

The founder of modern information theory, Claude Shannon, proposed to define information capacity as the resolution of uncertainty. If before some experiment the initial uncertainty is H1 and after the experiment the uncertainty H2 remains, then the information obtained during the experiment is:

I = H1 − H2.   (2.7)

Uncertainty, also known as entropy, is a characteristic of an uncertain situation. For example, if some event may occur or may not, both outcomes have their own probabilities. If the probability that the event occurs is denoted p, then the probability that the event does not occur is 1 − p. The mean entropy is then:

H = −p log p − (1 − p) log(1 − p).   (2.8)

In general, when a random value is characterized by the probability distribution p1, p2, ..., pn, the mean entropy is:

H = − Σ_{i=1}^{n} pi log pi.   (2.9)

It is clear that the entropy is biggest when all probabilities are equal, pi = 1/n, and then Hmax = log n.
If information is transmitted without errors, the uncertainty is totally removed (H2 = 0), and according to Eq. (2.7) the information is expressed by the entropy:

I = − Σ_{i=1}^{n} pi log pi.   (2.10)

When errors occur during transmission, H2 ≠ 0 and the initial uncertainty is reduced but not totally removed. That is why less information is transmitted through a noisy channel than through a noiseless channel.

The unit of information capacity (Eqs. (2.7) and (2.10)) depends on the base of the logarithm. If the base is 2, the information unit is the bit; if the base is 256, the information unit is the byte. One byte corresponds to 8 bits.
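As a small worked illustration of Eqs. (2.9) and (2.10), the Python sketch below computes the entropy of a source alphabet in bits; the example probabilities are arbitrary assumptions.

import math

def entropy(probs, base=2):
    """Mean entropy H = -sum p_i log p_i (Eq. 2.9), in units set by the log base."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]     # equal probabilities: H = log2(4) = 2 bits
skewed = [0.7, 0.1, 0.1, 0.1]          # unequal probabilities: H < Hmax

H_max = math.log2(len(uniform))
print("H(uniform) =", entropy(uniform))            # 2.0 bits per symbol
print("H(skewed)  =", entropy(skewed))             # about 1.357 bits per symbol
print("excess r = 1 - H/Hmax =", 1 - entropy(skewed) / H_max)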

Suppose that messages from an information source are received as symbols c1, c2, ..., cn (e.g. letters). If the probabilities of the symbols are p1, p2, ..., pn, then each symbol carries the amount of information given by Eq. (2.10). Maximum information is carried when the probabilities of all symbols are equal, pi = 1/n. Then one symbol carries Imax = log n of information.

The portion of information carried by each symbol strongly depends on source coding. If the probabilities p1, p2, ..., pn differ a lot, it means that such an information source is badly coded. The effectiveness of source coding is indicated by the excess (redundancy):

r = 1 − H/Hmax = 1 + (Σ_{i=1}^{n} pi log pi) / log n.   (2.11)

If the excess is significant, the source must be re-coded so that all symbols appear with equal frequency.

Suppose that the source generates ν symbols during some time period. If the portion of information per symbol is calculated according to Eq. (2.10), then the productivity of the information source is:

I′ = −ν Σ_{i=1}^{n} pi log pi.   (2.12)

All these characteristics can also describe the informative properties of time continuous processes s(t). The entropy of a continuous source is expressed as:

h = − ∫ w(x) log[w(x)] dx.   (2.13)

Here w(x) is the probability density function of the signal s(t). The information carried by one sample of s(t) is defined in the same way as in Eq. (2.7):

I = h1 − h2 . (2.14)

A continuous signal, contrary to digital signals, cannot be transmitted without distortions, because noise exists everywhere. Suppose the signal s(t) and the noise n(t) are normal processes. Their probability density functions are Gaussian with dispersions σs² = S and σn² = N respectively. Then, according to Eq. (2.13):

h1 = log √(2πeS),   h2 = log √(2πeN),   (2.15)

and the information from Eq. (2.14) is:

I = log √(S/N).   (2.16)

So, the information carried by a continuous signal depends on the ratio of the signal and noise dispersions. That is why the signal to noise ratio is always important in communication theory.

Chapter 3

Codes and Coding

3.1 Error Proof Codes

Information in modern communication systems is encoded. Data created by information sources are encoded using primary codes. The simplest primary code is the k-bit binary code, which lets us encode 2^k different numbers or, in other words, symbols. Each encoded symbol corresponds to a particular set of bits, a code word. The relation between symbols and code words is determined using code tables, for example, Table 3.1 of representation of decimal numbers by binary numbers.

To compare code words a special measure is used, the Hamming distance d, which is the number of positions in which two code words differ. For example, the Hamming distance between the binary codes of the decimal numbers "1" and "4" is 2, and between "1" and "5" it is d = 1.


Table 3.1: Representation of decimal numbers by binary numbers.


Decimal 1 2 3 4 5 6 7
Binary 001 010 011 100 101 110 111

When it is necessary to describe the whole set of code words, each code word must be compared with all the others. The minimal Hamming distance d0 is an important measure of the selected code.
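The sketch below illustrates the Hamming distance and the minimal distance d0 of a code; it is a small Python example with an assumed code word set, not taken from the text.

from itertools import combinations

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length code words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimal_distance(code):
    """Minimal Hamming distance d0 over all pairs of code words."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

print(hamming_distance("001", "100"))   # decimal "1" vs "4": distance 2
print(hamming_distance("001", "101"))   # decimal "1" vs "5": distance 1
# A primary code using all 3-bit words has d0 = 1 and cannot detect errors.
print(minimal_distance(["001", "010", "011", "100", "101", "110", "111"]))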
In primary codes all possible words are in use. The minimal Hamming distance for such codes is d0 = 1. When noise corrupts one symbol of a code word during transmission (1 is received as 0, or 0 is received as 1), the whole code word is received erroneously.

The code word error probability in that case is approximately perr ≈ k · p0, where p0 is the error probability of one symbol (bit). Clearly, if the number of symbols k in the code word is increased, the code word error probability increases proportionally.

Error probabilities can be reduced by increasing the signal to noise power ratio (decreasing the probability p0) and by using error proof codes that detect and correct errors.

There are two groups of error proof codes: block and continuous. In block codes the message is split into blocks of a specific number of symbols, which are appended by control bits. In continuous codes control bits are inserted among the message bits continuously.

Detecting and correcting errors is possible when the code has unused words, i.e. not all possible words are allowed. The sender does not use forbidden code words.

The error correction capability of a code is described by the minimal Hamming distance d0. If d0 = 1, error correction is impossible. If d0 = 2, the code can only detect errors; when d0 ≥ 3, error correction is possible.
Error detection is carried out in the following way: the sender transmits only allowed words, and if the receiver receives a forbidden one, an error is detected. The more forbidden words there are, the more errors are detected. However, if during transmission one allowed word is changed into another allowed word, error detection is impossible. So, error proof codes do not guarantee error free transmission: they decrease error probabilities but do not fully protect from errors.

Error proof codes are made by adding r additional control symbols (parity bits) to the k bits of the primary code. The more parity bits are added, the better the correction capabilities of the code. For a message that is coded with k bits there are Nallowed = 2^k allowed words to send. If r parity bits are added to the k bits, in total there will be n = k + r bits, or Npossible = 2^n possible words. The number of forbidden words in that case will be Nforbidden = 2^n − 2^k = 2^k · (2^r − 1), i.e. proportional to 2^k, and it will rapidly increase when k increases.

However, when the number of parity bits increases, the other measure of the code, the code density (code rate), defined as the ratio of the message and total number of symbols Rc = k/n = 1 − r/(k + r), decreases. It is important that if k increases, the dependence of the code density on the number of control bits r weakens.

The conclusion is that the correction capability of a code improves when the number of symbols in the code increases. However, when n increases, the encoding and decoding procedures become more complex; therefore, a compromise must be found in code selection.


3.2 Block Codes

Error proof codes are usually made by adding redundant control symbols, parity bits. By determining and placing the parity bits differently, different codes are obtained. Let us denote the message part of the code as the array of symbols a0, a1, ..., ak−1. Adding parity bits ak, ak+1, ..., an−1 gives us a systematic code a0, a1, ..., an−1. A systematic code is denoted (n, k): of the n code symbols, k symbols are message symbols. Different systematic codes differ by the algorithm used to form the control symbols.

A wide class of systematic codes are linear codes. In such codes the parity bits are expressed as first order algebraic polynomials of the message symbols. For example, the parity bits of a binary linear code are determined as follows:

ak+j = gj0 a0 ⊕ gj1 a1 ⊕ ... ⊕ gj,k−1 ak−1,   j = 0, 1, ..., r − 1.

Here ⊕ denotes addition modulo 2. The coefficients gji are binary numbers {0, 1} chosen in such a way that the rows of the generator matrix are linearly independent and the parity equations are unique.

When decoding a systematic code, the same rule of parity bit determination is used as during encoding. Therefore, at the receiver side two sets of parity bits are compared: the received ones ǎk, ǎk+1, ..., ǎn−1 and the ones generated from the received message bits, âk, âk+1, ..., ân−1. These two sets are added modulo 2 and the error-syndrome code, or simply the syndrome, c0, c1, ..., cr−1 is obtained:

    âk  âk+1  ...  ân−1
 ⊕  ǎk  ǎk+1  ...  ǎn−1
    c0  c1    ...  cr−1
If the received code word bits were without errors, then the generated parity bits will be the same as the received parity bits and all syndrome bits will be 0. If the code word was corrupted during transmission, then the generated parity bits will differ from the received ones and the syndrome code will contain ones. That means the code word was received with errors. If the selected code has minimal Hamming distance d0 ≥ 3, then the syndrome code lets us determine the position of the wrong bits in the code word.
Example. The simplest systematic code is obtained by adding just one parity bit (code (n, n − 1)). The parity bit is obtained by adding all message bits modulo 2: an−1 = a0 ⊕ a1 ⊕ ... ⊕ an−2. Suppose the word 0111000101 is received; the modulo-2 sum of its bits is 1. Note that the number of ones in a valid word of such a code is always even, so the received word contains an error; only an odd number of errors can be detected, and the position of the error remains unknown.

A more complex example is the code (7, 4). The parity bits can be obtained as follows: a4 = a0 ⊕ a1 ⊕ a2, a5 = a0 ⊕ a1 ⊕ a3, a6 = a1 ⊕ a2 ⊕ a3. Such a code has minimal Hamming distance 3 and lets us find the position of a single error.
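A small Python sketch of the (7, 4) code described above is given below: it forms the three parity bits from the stated equations and computes the syndrome at the receiver by recomputing and comparing them. The helper names and the example error position are illustrative assumptions.

def encode_7_4(msg):
    """Systematic (7,4) code: a4 = a0^a1^a2, a5 = a0^a1^a3, a6 = a1^a2^a3."""
    a0, a1, a2, a3 = msg
    return [a0, a1, a2, a3, a0 ^ a1 ^ a2, a0 ^ a1 ^ a3, a1 ^ a2 ^ a3]

def syndrome_7_4(word):
    """Recompute parity bits from the received message part and add them
    modulo 2 to the received parity bits."""
    a0, a1, a2, a3, p4, p5, p6 = word
    return [p4 ^ (a0 ^ a1 ^ a2), p5 ^ (a0 ^ a1 ^ a3), p6 ^ (a1 ^ a2 ^ a3)]

code_word = encode_7_4([1, 0, 1, 1])
print("transmitted:", code_word, "syndrome:", syndrome_7_4(code_word))   # all-zero syndrome

corrupted = code_word.copy()
corrupted[1] ^= 1                     # single error in message bit a1
print("received:   ", corrupted, "syndrome:", syndrome_7_4(corrupted))   # nonzero syndrome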

3.3 Cyclic Codes

Cyclic codes form a subclass of linear block codes (n, k). Here n is the number of code bits and k the number of message bits; the number of parity bits is r = n − k. A cyclic code has the property that a cyclic rearrangement (rotation) of the bits of an allowed word produces another allowed word.

For example, if the sequence an−1, an−2, ..., a1, a0 is an allowed word of the cyclic code, then the sequence a0, an−1, an−2, ..., a1 is also an allowed word of the cyclic code.
For the theoretical analysis of cyclic codes, elements of Galois algebra are used. A word of a binary code is represented as an algebraic polynomial:

F(x) = an−1 x^(n−1) + an−2 x^(n−2) + ... + a1 x + a0.   (3.1)

Here x is a fictitious variable and ai are the code elements, binary numbers. For example, the code word 01001 is represented by the polynomial F(x) = 0·x⁴ + 1·x³ + 0·x² + 0·x + 1 = x³ + 1.
When code words are expressed as polynomials, they can be analysed using mathematical methods. For example, addition of binary code words is done by adding the polynomial coefficients modulo 2:

   x³ + x² + 1
 ⊕        x + 1
   x³ + x² + x

Multiplication of two binary code words is done as ordinary polynomial multiplication, only coefficients of the same order of x are added modulo 2:

(x³ + x² + 1)(x + 1) = x⁴ + x³ + x³ + x² + x + 1 = x⁴ + x² + x + 1.

Division is also carried out as with ordinary polynomials, only subtraction is replaced by addition modulo 2, because subtraction and addition modulo 2 give the same result:

   x⁴ + 0·x³ + x² + x + 1  | x + 1
 ⊕ x⁴ + x³                 | x³ + x² + 1
   ------------
        x³ + x²
      ⊕ x³ + x²
        ------------
             0 + x + 1
           ⊕     x + 1
             ------------
                     0
In the theory of cyclic codes the Galois proposition x^n = 1 is very important; it gives the following property: multiplication of the algebraic polynomial by the variable x corresponds to a cyclic shift (rotation) of the code bits:

F̂(x) = x · F(x) = an−2 x^(n−1) + ... + a1 x² + a0 x + an−1.   (3.2)

For example, the code 0101110 is described by the polynomial x⁵ + x³ + x² + x. After multiplication by x we obtain the polynomial x⁶ + x⁴ + x³ + x², which corresponds to the code 1011100. So, the resulting code is a rotation of the initial code by one bit.
Very important in creating cyclic codes is the generator polynomial, which completely describes a specific cyclic code:

G(x) = x^r + gr−1 x^(r−1) + ... + g1 x + 1.   (3.3)

The degree of the polynomial is r = n − k. The generator polynomial G(x) can be described as the code polynomial F(x) of least degree. Generator polynomials are also irreducible: they cannot be expressed as a product of two polynomials of lower degree. Examples of generator polynomials of cyclic codes and the corresponding binary codes are presented in Table 3.2.

Table 3.2: Generator polynomials and corresponding binary codes.

r   G(x)                                Code
3   x³ + x + 1                          1011
3   x³ + x² + 1                         1101
4   x⁴ + x + 1                          10011
5   x⁵ + x⁴ + x³ + x² + 1               111101
5   x⁵ + x⁴ + x² + x + 1                110111
6   x⁶ + x + 1                          1000011
6   x⁶ + x⁵ + x² + x + 1                1100111
7   x⁷ + x³ + 1                         10001001
7   x⁷ + x³ + x² + x + 1                10001111
7   x⁷ + x⁴ + x³ + x² + 1               10011101
8   x⁸ + x⁷ + x⁶ + x⁵ + x² + x + 1      111100111
8   x⁸ + x⁴ + x³ + x² + 1               100011101
8   x⁸ + x⁶ + x⁵ + x + 1                101100011

Creation of the Cyclic Code. First of all, the generator polynomial is selected. Afterwards, the polynomial of the cyclic code word is formed from two parts:

F(x) = x^(n−k) M(x) + R(x).   (3.4)

The first part, M(x), is the polynomial that corresponds to the message part of the code; the second part, R(x), is the remainder of x^(n−k) M(x) / G(x).

Example. Let us create the cyclic code (7, 4). There are two possible different cyclic codes (7, 4); their generator polynomials are G1(x) = x³ + x² + 1 and G2(x) = x³ + x + 1. Let us select G1(x). For example, the message sequence 0111 is expressed as the polynomial M(x) = x² + x + 1. After multiplying that polynomial by x^(n−k) = x³, we obtain x³M(x) = x⁵ + x⁴ + x³. Now let us divide the obtained polynomial by the generator polynomial:

   x⁵ + x⁴ + x³        | x³ + x² + 1
 ⊕ x⁵ + x⁴ + x²        | x² + 1
   ------------
        x³ + x²
      ⊕ x³ + x² + 1
        ------------
                 1

So the remainder is R(x) = 1. According to Eq. (3.4), F(x) = x⁵ + x⁴ + x³ + 1 and the corresponding code word is 0111001.
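The division above can be checked with a short Python sketch that works with polynomials over GF(2) represented as integers (bit i is the coefficient of x^i); the function names are illustrative assumptions.

def gf2_remainder(dividend, divisor):
    """Remainder of polynomial division over GF(2); polynomials are ints,
    bit i of the integer is the coefficient of x**i."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift   # subtract (XOR) the shifted divisor
    return dividend

def cyclic_encode(msg, g, r):
    """Form the cyclic code word F(x) = x**r * M(x) + R(x) (Eq. 3.4)."""
    shifted = msg << r
    return shifted | gf2_remainder(shifted, g)

g1 = 0b1101                 # G1(x) = x^3 + x^2 + 1
word = cyclic_encode(0b0111, g1, r=3)
print(format(word, "07b"))                      # 0111001, as in the example
print(gf2_remainder(word, g1))                  # 0: a valid word is divisible by G(x)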
In hardware, cyclic encoding is done by an encoder (Fig. 3.1). Such a device consists of a shift register R and modulo-2 adders. Summation is performed only where gi = 1. In the beginning, for k time intervals, switch J1 is on (the feedback loop is closed) and switch J2 is in the position shown in Fig. 3.1. Through switch J2 the message bits get to the output of the encoder.
Figure 3.1: Cyclic code encoder.

During k time intervals the message bits form the parity bits. After that, for r time intervals, both switches are switched over, the feedback loop is broken and the parity bits from the registers are sent to the output of the encoder.

Cyclic encoders and decoders are the simplest devices among all error proof code generators.

Error Detection. When a code word is received, the error detection procedure is similar to encoding. Mathematically it can be shown that the polynomial describing any valid cyclic code word is divisible by the generator polynomial without remainder. If a remainder appears during the division, an error is detected.

The code that corresponds to the division remainder is called the syndrome.


If the syndrome code contains at least one 1, it means that the word was received with an error.

In hardware, error detection is carried out by encoding the received code word once more. Encoding is done by the same encoder as in Fig. 3.1, the only difference being that switches J1 and J2 remain in the position shown. If there is no error, after n time intervals all registers contain the verifying code 00...0. If the code word was received with errors, after n time intervals some bits of the verifying code will be 1, i.e. a syndrome code is formed.
The syndrome code has n − k = r bits. There are in total 2^r − 1 possible different nonzero syndrome codes. It means that the cyclic code can correct 2^r − 1 different error patterns. The syndrome code also shows the position of incorrectly received bits. Adding 1 modulo 2 to a 0 or 1 changes the corresponding bit. Therefore, error formation can mathematically be expressed as the summation of two binary codes or of the corresponding polynomials:

F̂(x) = F(x) + E(x).   (3.5)

Here F̂(x) is the polynomial that describes the code word with errors, F(x) is the polynomial that describes the allowed (transmitted) code word and E(x) is the error polynomial. It has n symbols, and the 1s are at the positions that were received with errors. For example, an error in the first bit is described by E(x) = 100...0, an error in the second bit by E(x) = 010...0, and errors in the first and second bits by E(x) = 110...0. It is obvious that the syndrome code is fully determined by the polynomial E(x). According to that proposition, it is easy to form all error codes and, by dividing them by the generator polynomial, obtain the syndrome code table.
3.3 presents syndrome codes of code (7, 4) that was formed according

29
CHAPTER 3. CODES AND CODING

generating polynomial x3 + x + 1. If error is in message part of


the code (first 4 rows) then syndrome codes at 5-7 columns do not
match error codes. If error is in verifying part of the code (5-7 rows),
syndrome codes are identical to error codes.

Table 3.3: Syndrome codes of the cyclic code (7, 4).

Error code (message part)   Syndrome
1 0 0 0                     1 0 1
0 1 0 0                     1 1 1
0 0 1 0                     1 1 0
0 0 0 1                     0 1 1
0 0 0 0                     1 0 0
0 0 0 0                     0 1 0
0 0 0 0                     0 0 1

BCH Codes. This is a special class of cyclic codes that were proposed by Bose and Chaudhuri and, independently, by Hocquenghem. A BCH code can correct t = r/m errors, where r = n − k and m = log2(n + 1). The minimal Hamming distance d0 satisfies:

2t + 1 ≤ d0 ≤ 2t + 2.   (3.6)

A specific BCH code is formed by selecting the number of code bits n and the number of errors t that the code must correct. The generator polynomials are obtained as products of minimal polynomials (polynomials that are divisible only by themselves and by 1).


3.4 Convolution Codes

Convolution codes are related to the convolution integral:

x(t) = ∫ m(t − τ) g(τ) dτ.   (3.7)

In digital form that integral is represented as a sum:

xj = Σ_{i=0}^{L} m_{j−i} gi = m_{j−L} gL + ... + m_{j−1} g1 + mj g0.   (3.8)

Here the sum is modulo 2. The value of xj depends on the given symbol mj and on the previous L symbols. All symbols are stored in a shift register; in general an L-bit register is required.
For example, consider the encoder shown in Fig. 3.2, which contains an L = 3 bit register. The presented encoder encodes each message bit with a three-bit code (n = 3, k = 1). Only message bits are written into the register. During each time interval one message bit is written into the encoder, and during three time intervals the three-bit code appears at the encoder output.

Suppose that at the beginning all register bits are set to 0. Then all three outputs are 0, too.

Let the first message bit be 1. That bit is written to the first register bit R1 while bits R2 and R3 are still 0. After summation modulo 2, as shown in Fig. 3.2, all three outputs will be 1 (the first 1 is encoded as 111).

Let the second message bit be 0; the corresponding output code will be 001. More examples of encoding are given in Table 3.4. The output code of each message bit depends on the given input (message) bit and the two bits before it.
Figure 3.2: Convolutional encoder (L = 3, k = 1, n = 3).

That rule can be written as follows: m1 xx m2 xx m3 xx ..., where m1, m2, m3, ... are message bits and xx are control bits. Convolutional codes were developed for continuous transmission such as speech communication.
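The encoding rule of Fig. 3.2 can be expressed compactly in software. The Python sketch below assumes, based on Table 3.4, that the three outputs are formed as o1 = R1, o2 = R1 ⊕ R3 and o3 = R1 ⊕ R2 ⊕ R3; this tap assignment is an inference from the table, not stated explicitly in the text.

def convolutional_encode(message_bits):
    """Rate 1/3 convolutional encoder with a 3-bit shift register (L = 3)."""
    r1 = r2 = r3 = 0
    encoded = []
    for m in message_bits:
        r1, r2, r3 = m, r1, r2             # shift the new message bit into the register
        o1 = r1                            # assumed output taps, inferred from Table 3.4
        o2 = r1 ^ r3
        o3 = r1 ^ r2 ^ r3
        encoded.append((o1, o2, o3))
    return encoded

for bits in convolutional_encode([1, 0, 0, 1, 1, 1, 0, 0, 1, 1]):
    print("".join(map(str, bits)))         # 111, 001, 011, 111, 110, 101, 010, 011, 111, 110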

3.5 Effectiveness of Codes

In information transmission systems, error correcting codes are used to detect errors, to correct them, or both. That is why the effectiveness of codes is estimated taking into account the influence of the code on information transmission. The simplest estimates of effectiveness, from both the theoretical and the experimental point of view, are related to the reduction of error probability. For example, encoding reduces the error probability of transmission of individual code words.

The other estimates of code effectiveness are related to energy reduction.

Table 3.4: Coding table of the encoder in Fig. 3.2.

Input   R1 R2 R3   Output
0       0  0  0    000
1       1  0  0    111
0       0  1  0    001
0       0  0  1    011
1       1  0  0    111
1       1  1  0    110
1       1  1  1    101
0       0  1  1    010
0       0  0  1    011
1       1  0  0    111
1       1  1  0    110

These estimates are used in complex analyses of communication systems, because resistance to distortions can also be increased by increasing the energy of the transmitted signals.

Both types of estimates are related: a reduction of error probability can be recalculated into an energy reduction.

The effectiveness of different codes must be analyzed under specific transmission conditions by specifying the model of the communication channel.

Examples. Let us analyze the effectiveness of codes in the simplest channel case, when the bit error probability p0 does not depend on the correct or erroneous reception of neighboring bits.

If information is transmitted using a k-symbol code without control symbols, then a word is received correctly only if all k symbols are received correctly. The probability of correct transmission is pc(k) = (1 − p0)^k, and the probability of erroneous transmission is pe(k) = 1 − pc(k) = 1 − (1 − p0)^k.
Effectiveness of the code with an even number of ones. The code with an even number of ones is the code (n, n − 1), n = k + 1. Using that code, a message word is received correctly if all n symbols are transmitted correctly or if only such errors occur that the code is able to detect. So, the probability of correct (or detected) transmission is

pc(k + 1, k) = (1 − p0)^(k+1) + C_{k+1}^1 p0 (1 − p0)^k + C_{k+1}^3 p0³ (1 − p0)^(k−2) + ...,

and the relative reduction of the probability of erroneous transmission is

ek = [1 − pc(k)] / [1 − pc(k + 1, k)].
Effectiveness of the cyclic code (7, 4). Using that code, a message word is received correctly if all n symbols are transmitted correctly or if only such errors occur that the code is able to correct. That code can correct only one error in the transmitted word. Therefore the probability of correct transmission is

pc = (1 − p0)⁷ + C_7^1 p0 (1 − p0)⁶.

The relative reduction of the probability of erroneous transmission (the effectiveness) is

e_{7,4} = [1 − (1 − p0)⁴] / [1 − (1 − p0)⁷ − C_7^1 p0 (1 − p0)⁶].

The effectiveness calculated according to that formula is shown in Fig. 3.3. It can be seen that when the probability is p0 > 0.1, the effectiveness of the code is very poor. The effectiveness increases noticeably when p0 decreases.

Figure 3.3: Effectiveness of the cyclic code (7, 4) (log e versus log p0).
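The effectiveness e_{7,4} given above can be evaluated numerically with a few lines of Python; this is a small sketch reproducing the trend described in the text, with arbitrarily chosen p0 values.

from math import comb

def effectiveness_7_4(p0):
    """Relative reduction of word error probability for the cyclic (7,4) code."""
    p_err_uncoded = 1 - (1 - p0) ** 4                              # 4 message bits, no coding
    p_correct_coded = (1 - p0) ** 7 + comb(7, 1) * p0 * (1 - p0) ** 6
    return p_err_uncoded / (1 - p_correct_coded)

for p0 in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"p0 = {p0:.0e}   e(7,4) = {effectiveness_7_4(p0):.1f}")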

Chapter 4

Communication Lines,
Channels and their
Models

4.1 Communication Line as Four-Pole System

The simplest communication line is a pair of wires, a four-pole system. The main characteristics of such a line are the input impedance Zin(jω), the output impedance Zout(jω) and the transfer function Ku(jω). The signal transmission capability of such a line is limited by the frequency range F (width of the pass-band), noise and distortions.


Frequency Range. The discretization theorem states that a signal with highest frequency F can be discretized by taking its samples with period ∆t = 1/(2F). Quantizing each sample with a precision of k bits, a sequence of binary numbers with rate (number of bits per second) 2Fk is obtained.

When analyzing the transmission process, the propositions of the discretization theorem can be turned around. The signal that must be transmitted through a communication line with pass-band F is formed as a sequence of short impulses with frequency 2F. The amplitude of each impulse can be set using a k-bit encoder (see Fig. 2.4). The communication line acts as a low-pass filter. The information capacity of a communication line without noise is then:

C = 2Fk.   (4.1)

Theoretically, there are no limits on k, and the capacity of such an information channel can be very large.

When signals are transmitted through a real communication line, they are affected by noise. It is obvious that in such a case it is impossible to determine the amplitude of the transmitted samples precisely. So, noise limits the information capacity of the communication channel. The theoretical information capacity limit of a communication channel with noise is expressed by the Hartley-Shannon formula:

C = F log2(1 + S/N) bits/s.   (4.2)

Here S and N are the mean powers of the signal and the noise, respectively. As can be seen, the information capacity of the channel is directly proportional to the width of the pass-band F and also depends on the signal to noise ratio (SNR).


To compare the information capacity of real systems with the theoretical limit, the capacity of a channel with a 1 Hz pass-band is used: C/F [bit/s/Hz] = log2(1 + S/N) ≈ 3.32 log10(1 + S/N) ≈ 0.332 γ. Here γ is the SNR in decibels (the last approximation holds for large SNR).
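As a quick numerical check of Eq. (4.2), the Python sketch below computes the capacity and spectral efficiency for a few assumed bandwidth and SNR values (the figures used are illustrative, not from the text).

from math import log2

def capacity(bandwidth_hz, snr_linear):
    """Hartley-Shannon capacity C = F * log2(1 + S/N) in bits per second (Eq. 4.2)."""
    return bandwidth_hz * log2(1 + snr_linear)

for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)            # convert decibels to a linear power ratio
    F = 3400.0                           # assumed telephone-channel bandwidth, Hz
    C = capacity(F, snr)
    print(f"SNR = {snr_db} dB  C = {C/1000:.1f} kbit/s  "
          f"C/F = {C/F:.2f} bit/s/Hz  (approx. 0.332*gamma = {0.332*snr_db:.2f})")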

4.2 Noise

Noise is considered to be all other oscillations and signals that interfere with the given signal. In general, noise can be additive or multiplicative.

Additive noise simply adds to the signal, and a signal affected by such noise can be expressed as follows:

x(t) = s(t) + n(t).   (4.3)

Here s(t) is the signal to be transmitted, n(t) is the additive noise and x(t) is the signal at the output of the communication line.

The other noise model takes into account changes in the signal amplitude and is expressed as the product of the signal and the multiplicative noise k(t):

z(t) = k(t) s(t).   (4.4)

Multiplicative noise is generally caused by changes in the transfer coefficient of the communication line. Such noise is common in radio communication lines, but is also observable in wire communication lines, because the transfer coefficient of a line depends on temperature and other factors.

A more complex distortion model is obtained when the same signal is affected by both additive and multiplicative noise:

x(t) = k(t) s(t) + n(t).   (4.5)


When analysing models of distortions, the noise is considered as a random process described by multidimensional functions. Often noise is modelled as a normal random process whose probability density function is Gaussian:

W(z) = (1/(√(2π)σ)) exp(−z²/(2σ²)).   (4.6)

Here σ² is the noise dispersion.


In a simpler manner, noise as a random process can be described by the correlation function K(τ) and the power spectral density function G(ω) (see Eq. (2.1)).

According to its features, noise can be harmonic, impulsive or fluctuating.

Harmonic noise is inter-system interference among different communication systems. It appears when signals from one communication system fall into the reception range of another communication system. Mathematically such noise is expressed as follows:

n(t) = A(t) cos{ωt + φ(t)}.   (4.7)

Here A(t) and φ(t) are, respectively, the random amplitude and phase. An example of the correlation function of harmonic noise is a decaying cosinusoidal function:

K(τ) = K(0) exp(−ατ) cos ωτ.   (4.8)

A characteristic feature of harmonic noise is its narrow spectrum. When many harmonic noise components with different spectral ranges are present at the same time, the spectrum of such composite noise can be wide enough. This type of noise creates the specific problem of planning radio resources (signals, frequencies and base stations) in mobile communication networks.

Impulsive noise is short-time radiation from various electrical discharges and switches. Typical impulsive noise consists of short pulses that appear at random time instants. Its specific feature is concentration in time; short pulses create wide-spectrum noise. When many different sources of impulsive noise act at the same time, a continuous stochastic process can appear.

Fluctuating noise consists of wide-spectrum continuous oscillations. It generally appears in amplifiers and other electronic devices due to random movement of electrons. Mathematically, fluctuating noise is modeled as a normal process.
The power spectral density of fluctuating noise is a wide-spectrum continuous function. An example of fluctuating noise is white noise, which has a constant power spectral density over the whole positive frequency axis, G(ω) = N0. The correlation function of white noise is

K(τ) = 0.5 N0 δ(τ).   (4.9)

Because the correlation function of white noise is the delta function δ(τ), two different samples of white noise are independent and uncorrelated. So, the multidimensional probability density function of white noise can be expressed as the product of one-dimensional probability density functions:

Wn(z1, z2, ..., zn) = (1/(√(2π)σ))^n exp{−(1/(2σ²)) Σ_{l=1}^{n} zl²}.   (4.10)

White noise is the limit approached by sums of harmonic and impulsive distortions when the number of components grows without bound. That is why many theoretical propositions about signal transmission and noise resistance are based on the assumption that the noise is normal.

4.3 Radio Channels

An ideal radio channel is one in which the signal propagates from transmitter to receiver over one direct path. Real channels, especially mobile communication network channels, are more complex (Fig. 4.1), because various buildings and other objects strongly affect the propagation conditions of the electromagnetic waves. The direct wave may be considerably attenuated by buildings on its propagation path. Signals that have propagated over other, different paths add to the direct wave. These other paths are created by such effects of radio wave propagation as diffraction, refraction and multiple reflections. Also, in typical mobile communication conditions there are several base stations near each other. That is why, in general, the signal at the receiver input can be modeled as a sum of many signals:

x(t) = Σ_{i,j,k} κijk Pij^(1/2) sij(t − τijk).   (4.11)

Here the sum is over three indices i, j, k: i is the index of the cell or base station that sends the signal, j is the index of the user (the number of signals summed over j equals the number of active users), and k is the index of the propagation path from transmitter to receiver (kmin = 1, kmax = 3, ..., 6). Pij is the power of the transmitted signal, which differs between transmitters and depends on communication conditions. The coefficients κijk and τijk characterize, respectively, the transfer coefficient of a particular propagation path and the delay of the signal sij. The spread of κijk widens when the conditions of radio wave propagation become more complex and/or the distance from transmitter to receiver increases. For example, a single building on the radio wave propagation path can attenuate the signal by up to 20 dB. A forest or trees can attenuate the signal by 3-12 dB. The signal delay inside a building can reach 40-200 ns, while in open space it is 1-20 µs.

Figure 4.1: Conditions of mobile communication in urban area.

When the mobile station moves, the signal delay and other reception conditions also change. Fig. 4.2 illustrates the change of the signal level when the mobile station moves only a few meters.


Figure 4.2: Change of radio signal level.

4.4 Discrete Channel Models

Telecommunication theory analyses not only continuous channel models, which describe processes in communication lines, but also continuous-discrete and purely discrete models.

The continuous-discrete channel model is applied to characterize transmission processes when the source generates discrete symbols but segments of continuous time functions are transmitted over the physical line. Fig. 4.3 shows transmission of discrete symbols (bits) as segments of a harmonic wave with duration T.

The discrete channel models (Fig. 4.4) are used when only the part of the communication system from the transmitter to the receiver of digital information is analyzed.
Figure 4.3: Transmission of digital signal.

At each time moment the transmitter can transmit any symbol ai, i = 1, 2, ..., m. The transmitted symbol ai can be distorted and received as bj, j = 1, 2, ..., n. When the two events ai and bj are related, the probability of event bj depends on event ai. That situation is described by the conditional probability p(bj|ai). Here ai is the condition and bj the expected event.

Figure 4.4: Discrete channel model.

For the two events ai and bj the reverse conditional probability p(ai|bj) can also be introduced; here the condition is the event bj.

The probability that both events occur at the same time is denoted P(ai, bj), and the probabilities of the events ai and bj are respectively p(ai) and p(bj). All these probabilities are related through the expression:

P(ai, bj) = p(ai) p(bj|ai) = p(bj) p(ai|bj).   (4.12)

It follows that the two conditional probabilities are related through the formula:

p(ai|bj) = p(bj|ai) p(ai) / p(bj).   (4.13)

Applying the well known total probability formula

p(bj) = Σ_{i=1}^{n} p(ai) p(bj|ai),

Eq. (4.13) becomes

p(ai|bj) = p(ai) p(bj|ai) / Σ_{i=1}^{n} p(ai) p(bj|ai).   (4.14)

The obtained expression, which relates the conditional probabilities describing the communication channel, is the mathematical model of the discrete communication channel. In probability theory that expression is known as the Bayes formula.

The Bayes formula is very important in telecommunication theory, because it describes very well the essence of transmission of information through a communication line or channel. The two events can be interpreted as the signals in Fig. 4.3: one of them is the signal to be transmitted (digital), the other represents the signal in the communication line. The Bayes formula describes the relation between the conditional probabilities of these two signals and is thus a mathematical model of the discrete communication channel.
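A numerical illustration of Eq. (4.14) for a binary symmetric channel is sketched below in Python; the prior probabilities and the bit error probability are assumed values chosen only for the example.

def posterior(priors, likelihoods_given_a):
    """Bayes formula (4.14): posterior p(a_i | b_j) for one received symbol b_j,
    given priors p(a_i) and likelihoods p(b_j | a_i)."""
    joint = [p * l for p, l in zip(priors, likelihoods_given_a)]
    total = sum(joint)                     # total probability p(b_j)
    return [j / total for j in joint]

# Binary symmetric channel: symbols a in {0, 1}, bit error probability p0 = 0.1.
p0 = 0.1
priors = [0.7, 0.3]                        # assumed source probabilities p(a=0), p(a=1)
lik_b0 = [1 - p0, p0]                      # p(b=0 | a=0), p(b=0 | a=1)
lik_b1 = [p0, 1 - p0]                      # p(b=1 | a=0), p(b=1 | a=1)

print("received b=0:", posterior(priors, lik_b0))   # about [0.955, 0.045]
print("received b=1:", posterior(priors, lik_b1))   # about [0.206, 0.794]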

Chapter 5

Signal Reception in Noise

5.1 Decision Rules

Suppose the event set ai represents the transmitted signals (symbols, code words, etc.) and the event set bj represents the received signals. Traditionally, transmitted signals are denoted si and received signals xj. Using these notations, the Bayes conditional probability formula can be rewritten as follows:

p(si|xj) = p(si) p(xj|si) / Σ_{i=1}^{n} p(si) p(xj|si).   (5.1)


When a specific realization of the signal is received (condition xj), the conditional probabilities that the signals s1, s2, ..., sn were transmitted can be calculated using the Bayes formula. Afterwards the biggest probability, e.g. p(sk|xj), is selected according to the following condition:

p(sk|xj) > p(si|xj),   i = 1, 2, ..., n;  i ≠ k.   (5.2)

That inequality is the decision rule of the optimal receiver and is applied when processing the received signal xj. According to this rule, if the condition of Eq. (5.2) is satisfied for the signal sk, it is presumed that the signal sk was transmitted. Such a receiver is optimal according to the Maximum Likelihood criterion.
Closely examining Eq. (5.2), it is easy to note that all expressions p(sk|xj) and p(si|xj) have the same denominator (Eq. (5.1)). Substituting Eq. (5.1) into Eq. (5.2) and omitting the denominator, the following expression is obtained:

p(sk) p(xj|sk) > p(si) p(xj|si),   i = 1, 2, ..., n;  i ≠ k.   (5.3)

That expression uses the direct conditional probabilities instead of the reverse ones.

5.2 Optimal Receiver in White Noise Environment

The formulated decision rule, Eq. (5.3), must be understood as a very general receiver algorithm. The algorithms of optimal signal reception are examined in detail for the model of a communication channel with normal white noise.


Let us denote the transmitted signal si(t). At the receiver side, during the time interval from 0 to T (T is the duration of the signal si(t)), the sum of the transmitted signal and white noise n(t) is observed:

x(t) = si(t) + n(t).   (5.4)

For further analysis let us use discrete time by dividing the time interval T into L parts. Then Eq. (5.4) can be rewritten:

x(l) = si(l) + n(l),   l = 0, 1, ..., L − 1.   (5.5)

In the white noise case any two samples n(l1) and n(l2) on the discrete time axis are independent; that is why the L-dimensional probability density function of the noise n(l) is expressed as the product of one-dimensional functions:

WL(n0, n1, ..., nL−1) = (1/(√(2π)σ))^L exp{−(1/(2σ²)) Σ_{l=0}^{L−1} nl²}.   (5.6)

Expressing n(l) = x(l) − si(l) (Eq. (5.5)) and substituting into Eq. (5.6), the following expression is obtained:

WL(x0, x1, ..., xL−1|si) = (1/(√(2π)σ))^L exp{−(1/(2σ²)) Σ_{l=0}^{L−1} [x(l) − si(l)]²}.   (5.7)

That formula is a conditional probability density function: it is valid under the condition that the signal si was transmitted, i.e. that n(l) = x(l) − si(l).


Now, replacing in Eq. (5.3) the conditional probabilities p(xj|si) by the conditional probability density functions (Eq. (5.7)) and omitting identical factors on both sides of the inequality, the receiver algorithm that is optimal according to the Maximum Likelihood criterion is obtained:

p(sk) exp{−(1/(2σ²)) Σ_{l=0}^{L−1} [x(l) − sk(l)]²} > p(si) exp{−(1/(2σ²)) Σ_{l=0}^{L−1} [x(l) − si(l)]²},
i = 1, 2, ..., n,  i ≠ k.   (5.8)

Further modifications of Eq. (5.8) (expanding the squares, grouping, taking logarithms) lead to the traditional algorithm of the optimal receiver:

Σ_{l=0}^{L−1} x(l)sk(l) > Σ_{l=0}^{L−1} x(l)si(l) + σ² log[p(si)/p(sk)] + (Ek − Ei)/2,
i = 1, 2, ..., n,  i ≠ k.   (5.9)
As can be seen, only the following components depend on the received signal x(l):

Σ_{l=0}^{L−1} x(l)sk(l) = Kk   and   Σ_{l=0}^{L−1} x(l)si(l) = Ki.   (5.10)

Here Kk and Ki are, respectively, the correlations of the received signal with the signals sk and si, calculated using only one observation (without averaging). The sum of the other members of Eq. (5.9),

σ² log[p(si)/p(sk)] + (Ek − Ei)/2 = ζki,   (5.11)


is a threshold that depends on the probabilities p(si) and p(sk) and on the energies of the signals sk(l) and si(l):

Ek = Σ_{l=0}^{L−1} sk²(l)   and   Ei = Σ_{l=0}^{L−1} si²(l).   (5.12)

When the probabilities are equal, p(si) = p(sk), and the energies of all signals are equal, Ek = Ei, the threshold ζki = 0 disappears and the decision rule of the optimal receiver takes a very compact form:

Kk > Ki,   i = 1, 2, ..., n,  i ≠ k.   (5.13)

That rule means: the k-th signal is considered received if the correlation Kk of the observed realization x(l) with the copy of the k-th signal is greater than its correlations with the copies of the other signals.
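The compact rule (5.13) is easy to simulate. The Python sketch below builds two equal-energy, equiprobable orthogonal signals, adds white Gaussian noise, computes the correlations Kk and picks the largest; the waveforms and the noise level are assumed for illustration.

import numpy as np

rng = np.random.default_rng(2)
L = 64
t = np.arange(L)
signals = [np.cos(2 * np.pi * 4 * t / L),      # s1: 4 cycles per observation interval
           np.cos(2 * np.pi * 8 * t / L)]      # s2: 8 cycles (orthogonal to s1, same energy)

def optimal_receive(x, signals):
    """Decision rule (5.13): choose the signal with the largest correlation K_i."""
    correlations = [np.dot(x, s) for s in signals]
    return int(np.argmax(correlations))

errors = 0
trials = 2000
sigma = 3.0                                    # assumed noise level, chosen so errors are observable
for _ in range(trials):
    k = rng.integers(len(signals))
    x = signals[k] + sigma * rng.standard_normal(L)
    errors += (optimal_receive(x, signals) != k)
print("estimated error probability:", errors / trials)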
The presented version of the algorithm of the optimal receiver can be directly applied when the received signal is discretized and processed as a digital signal. The continuous time version of the algorithm can also be written using Eq. (5.9); only the correlations and energies are calculated differently:

Kk = ∫_0^T x(t)sk(t)dt   and   Ki = ∫_0^T x(t)si(t)dt,   (5.14)

Ek = ∫_0^T sk²(t)dt   and   Ei = ∫_0^T si²(t)dt.   (5.15)

The block diagram of the continuous time optimal receiver is shown in Fig. 5.1. Multipliers and integrators calculate the correlations Ki, i = 1, 2, ..., n, and the decision block picks the biggest one. If, for example, the biggest correlation is Kj, then the decision is that the signal sj was received. The switches indicate that the correlations are calculated only during the time interval from 0 to T and that the results are passed to the decision block only after that interval.

Figure 5.1: Block diagram of the optimal receiver.

5.3 Noise Resistance

The analysis of erroneous reception is best begun with the simple situation when only two orthogonal signals are transmitted under white noise conditions. Suppose that during the observation interval the signal sk(t) is transmitted. Then, according to Eq. (5.13), the condition of a correct decision is Kk > Ki and the condition of an erroneous decision is Kk < Ki.

Substituting the initial notations into Eq. (5.13) and regrouping after integration, the condition of an erroneous decision becomes

S = Ekk − Eki < Θi − Θk.   (5.16)


Here

Ekk = ∫_0^T sk(t)sk(t)dt,   Eki = ∫_0^T sk(t)si(t)dt,   (5.17)

Θk = ∫_0^T n(t)sk(t)dt,   Θi = ∫_0^T n(t)si(t)dt.

Ekk is the energy of the signal sk(t), Eki is the correlation between the signals sk(t) and si(t), and Θk and Θi are random variables.
This situation is illustrated in Fig. 5.2. W(x) is the probability density function of the random variable Θi − Θk, and the shaded region represents the condition S < x, i.e. the noise term exceeding the signal term. The area of the shaded region is equal to the probability of the erroneous decision:

pki = ∫_S^∞ W(x)dx = Q(h).   (5.18)

Figure 5.2: Illustration of probability of the erroneous decision.


The random variable Θi − Θk has a normal distribution, because the model of normal distortion n(t) is used. The dispersion of that variable is:

σki² = (Ekk − 2Eki + Eii) N0/2.   (5.19)

Such an expression is obtained by squaring the variable Θi − Θk and forming the dispersions of the separate components, which are calculated as follows:

M{Θk²} = ∫_0^T ∫_0^T M{n(t1)n(t2)} sk(t1)sk(t2) dt1 dt2
       = (N0/2) ∫_0^T ∫_0^T δ(t1 − t2) sk(t1)sk(t2) dt1 dt2
       = (N0/2) ∫_0^T sk²(t2) dt2 = (N0/2) Ekk.   (5.20)

Here the correlation function of the white noise, δ(t1 − t2) N0/2, was used; M{} denotes the expectation operator.
After these calculations Eq. (5.18) can be rewritten more specifically:

pki = (1/√(2π)) ∫_{hki}^∞ exp(−z²/2) dz = Q(hki).   (5.21)

Here Q(h) = 0.5 erfc(h/√2) is the probability integral function and hki² = S²/σki² is the ratio of signal and noise energies.

When the signals sk(t) and si(t) are orthogonal, then Eki = 0 and Ekk = Eii = E, and so S = E, σki² = σ² = E N0 and hki² = h² = E/N0. Now the probability of the erroneous decision is:

p_orthogonal = Q(h).   (5.22)


If the signals are not orthogonal but opposite, sk(t) = −si(t), then a similar formula is obtained:

p_opposite = Q(√2 h).   (5.23)

Diagrams of the probabilities of the erroneous decision are shown in Fig. 5.3.

Figure 5.3: Probabilities of the erroneous decision (log p versus h) for orthogonal and opposite signals.

In the considered situation the probability of the erroneous decision depends only on the ratio h² of the signal energy E to the power spectral density N0 of the white noise. If that ratio increases, the probability of the erroneous decision decreases. Other properties of the signal (its form, spectrum width) have no influence.
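Eqs. (5.22) and (5.23) are easy to evaluate numerically; the Python sketch below tabulates the error probabilities for orthogonal and opposite signals over a few values of h (the chosen h values are arbitrary).

from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

for h in (1.0, 2.0, 3.0, 4.0):
    p_orth = Q(h)                 # orthogonal signals, Eq. (5.22)
    p_opp = Q(sqrt(2) * h)        # opposite signals, Eq. (5.23)
    print(f"h = {h:.0f}   orthogonal: {p_orth:.2e}   opposite: {p_opp:.2e}")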
When communication quality must be guaranteed by specifying a permissible probability of the erroneous decision, that requirement is recalculated into a minimal acceptable signal to noise ratio. If the signals are opposite, the required energy is 2 times less than in the orthogonal signals case.
Eq. (5.21) represents the probability of the erroneous decision when the decision is made by testing only one inequality. When information is transmitted using a non-binary code, many different signals are transmitted. In that case the decision is made by testing many conditions (inequalities). The final probability of the erroneous decision is expressed through the partial probabilities of the erroneous decision:

p_err = 1 − (1 − pk1)(1 − pk2) ... (1 − pkn).   (5.24)

For example, when harmonic signals A cos(ωk t) are transmitted and the frequencies ωk = ω0 + kΩ, k = 1, 2, ..., n, are selected so that all signals are orthogonal, then all partial probabilities of the erroneous decision are equal to p and Eq. (5.24) can be rewritten as follows:

p_err = 1 − (1 − p)^(n−1).   (5.25)

Now it is obvious that when the number of different signals increases, the probability of the erroneous decision also increases.
When besides white noise there are other distortions (harmonic or impulsive), the influence of these distortions on the probability of the erroneous decision is examined separately. If the additional noise is denoted µ(t), then Eq. (5.16) takes the form:

S = Ekk − Eki < (Θi − Θk) + (ζi − ζk).   (5.26)

Here the variables

ζk = ∫_0^T µ(t)sk(t)dt   and   ζi = ∫_0^T µ(t)si(t)dt   (5.27)

are short time correlations between the additional noise µ(t) and the signals sk(t) and si(t). A more detailed analysis of the influence of the additional noise depends on its properties and its relation to the transmitted signal.

For example, if harmonic signals are transmitted and the receiver also receives harmonic noise An cos(ωn t + φ), then

ζk = (A An/2) · sin[(ωk − ωn)T − φ] / (ωk − ωn).

The influence of ζk and ζi is small when the difference ωk − ωn is big. If the frequencies ωk and ωn are the same, the influence is very strong: in that case ζk = (A An/2) T cos φ. The variable phase φ can increase the signal energy up to E(1 + An/A) or decrease it down to E(1 − An/A). The relative change of energy depends only on the ratio of the noise and signal amplitudes An/A. The decrease of energy is the most noticeable effect, because series of erroneous transmissions appear. Such a situation can occur in GSM networks when two base stations with the same operating frequency are near each other.

Another situation is when the disturbance is a wide band random process close to normal noise; then its influence on noise resistance shows up directly as an increase of the energy (dispersion) of the whole noise term Θi − Θk + ζi − ζk. That increase is the more significant, the more the noise is correlated with the signal.

5.4 Matched Filtering

The filtering of signals, i.e. the separation of signals according to properties of their frequency content, is analysed in signals and circuits theory. Widely known are low-pass, high-pass and band-pass filters. The theory of statistical telecommunications analyses a different filtering problem, which aims to separate the signal from noise optimally according to some criterion.

The problem of matched filtering can be formulated as follows: the observed process x(t) (Eq. (5.4)) consists of a signal s(t) of known form and duration T and white noise n(t). The aim is to find the filter impulse response g(t), or the complex transfer function K(jω), such that the signal to noise ratio at the output of the filter is maximal.
One possible solution of that problem is based on the well known equations that describe the signal and the noise at the output of the filter:

S(T) = ∫_0^T si(t) g(T − t) dt,   N(T) = ∫_0^T n(t) g(T − t) dt.   (5.28)

The integration limits are from 0 to T because the signal duration is T. It is logical to try to achieve the maximal output of the filter at the moment when the whole signal has entered the filter. According to Eq. (5.28), the problem of matched filtering can be formulated more precisely: the aim is to find the filter impulse response g(t) that maximises the ratio:
S²(T) / N²(T) = {∫_0^T s_i(t) g(T − t) dt}² / M{∫_0^T n(t) g(T − t) dt}².    (5.29)

Here M{N²(T)} is the dispersion of the noise at the output of the filter. According to Eq. (5.20), the noise dispersion can be written as follows:
M{N²(T)} = 0.5 N_0 ∫_0^T g²(T − t) dt.    (5.30)
Eq. (5.30) shows that the noise dispersion is influenced not by the form of the filter impulse response but by its norm:

[g(t)]² = ∫_0^T g²(T − t) dt.

It is obvious that, without any loss of generality, we can set [g(t)]² = 1. Thus the ratio in Eq. (5.29) will be maximal when the quantity that depends only on the signal and the filter is maximal:

{∫_0^T s_i(t) g(T − t) dt}².    (5.31)

Now, the well known Schwarz–Bunyakovsky inequality

(∫ f(x) g(x) dx)² ≤ ∫ f²(x) dx · ∫ g²(x) dx    (5.32)

becomes an equality only when f(x) = g(x). Thus the quantity in Eq. (5.31) is maximal when the equality s_i(t) = g(T − t) is satisfied. That equality can be rewritten in a more convenient way:

g(t) = s_i(T − t).    (5.33)

The filter whose impulse response is related to the signal according to Eq. (5.33), i.e. matched with the signal on the reversed time axis, is called a matched filter.
Matched filters can be used in optimal receivers. When equality (5.33) is satisfied, the signal at the output of the matched filter is described by the integral

∫_0^T x(t) g(T − t) dt = ∫_0^T x(t) s_i(t) dt = K_i

which coincides with the one from Eq. (5.14) that represents the correlation K_i between the noisy signal and the copy of the signal. That is why the correlators in Fig. 5.1, which were made of voltage multipliers and integrators, can be replaced by matched filters (Fig. 5.4).

Figure 5.4: Block diagram of optimal receiver with matched filters (input x(t) feeds matched filters MF1 … MFn, whose outputs go to the decision device).

When n different signals are used for the transmission of information, the optimal receiver requires n matched filters, each tuned to a different signal.
When a matched filter designed for the signal s(t) receives some other signal s_x(t), the output of the filter is a signal whose form is similar to the cross-correlation function of the signals s(t) and s_x(t). If s_x(t) = s(t), then the output of the filter is similar to the correlation function of the signal s(t).

Example. The signal s(t) is a rectangular impulse of duration T. The matched filter for that signal is shown in Fig. 5.5 (a). It consists of a delay line (T), an inverter (−1) and an integrator. The input signal is shown in Fig. 5.5 (b), while the output signal is shown in Fig. 5.5 (c).

Figure 5.5: Illustration of matched filtering: (a) block diagram of the matched filter (delay T, inverter −1, integrator); (b) input signal s_x(t); (c) output signal s_out(t) (time axis marks t_0, t_0 + T, t_0 + 2T).

When correlators realized by forming the product of signals and subsequent integration (Fig. 5.1) are used, the output is a single quantity (in the digital version, a single number) that corresponds to one point of the correlation function. Meanwhile, when matched filters are used, the output is the whole correlation function. As can be seen from Fig. 5.5 (c), the output function reaches its maximum when the whole signal has entered the matched filter. That feature — the position of the signal maximum on the time axis — is widely used in radiolocation to determine the arrival moment of the reflected signal. That feature is also used for synchronization of communication systems.
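The behaviour illustrated in Fig. 5.5 is easy to reproduce numerically. The sketch below is a minimal illustration (the signal length, noise level and arrival moment are arbitrary assumptions): it builds a matched filter for a rectangular impulse as g(t) = s(T − t) and shows that the filter output peaks at the moment when the whole impulse has entered the filter:

```python
import numpy as np

# Discrete-time sketch of matched filtering for a rectangular impulse.
# All numbers (lengths, noise level, delay) are arbitrary assumptions.
T = 64                                # impulse duration in samples
t0 = 100                              # arrival moment of the impulse
s = np.ones(T)                        # rectangular impulse s(t)
g = s[::-1]                           # matched filter response g(t) = s(T - t)

rng = np.random.default_rng(0)
x = rng.normal(scale=0.5, size=300)   # white noise n(t)
x[t0:t0 + T] += s                     # observed process x(t) = s(t - t0) + n(t)

# Filter output: the whole (cross-)correlation function, not a single point.
y = np.convolve(x, g)
print("output peaks at sample", int(np.argmax(y)), "; expected near", t0 + T - 1)
```

The peak position directly gives the arrival moment of the impulse, which is exactly the feature used in radiolocation and in synchronization.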
However, matched filters are used rather rarely because of their complex manufacturing process.

Chapter 6

Multiplexing of
Communication Lines

6.1 Methods of Sharing of Communication Lines
Communication lines are expensive; that is why each line must be used as efficiently as possible. With that purpose the same line is used to transmit signals of different sources, in other words, one line is shared by many users. Each user possesses some part of the information capacity of the line.
The division of the capacity of a communication line can be described easily. The signal of each source is transformed into a special form, so that to source i corresponds the signal S_i(t). The signals from the separate sources are summed and the total signal is obtained:

S_M(t) = Σ_{i=1}^{I} S_i(t).    (6.1)

It is important that the signals S_i(t), i = 1, 2, …, I, cannot have arbitrary form. The signals must be chosen so that from the sum of received signals S_M(t) the signal S_i(t) of each source can be separated.
In telecommunication theory it is proved that the signals S_i(t), i = 1, 2, …, I, can be separated if they are linearly independent, i.e. they do not satisfy the identity

S_k ≡ Σ_{i=1}^{I} C_i S_i(t),   i ≠ k.    (6.2)

That condition can be proved very easily. Suppose, on the contrary, that identity (6.2) holds. That means the signal S_k(t) can be formed by summing, with some coefficients, other signals from the set {S_i(t)}. If such a possibility existed, it could happen that the signal of some source is synthesized from the signals of other sources.
The condition of linear independence is satisfied by all orthogonal signals. Real communication systems that transmit signals from different sources use orthogonal signals, whose properties determine the method of sharing of the common resources.
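As a minimal numeric illustration of this condition (the sampling grid, duration and number of signals below are arbitrary assumptions), one can check that a set of harmonic signals with distinct frequencies is orthogonal, and hence linearly independent, by computing the Gram matrix of their pairwise inner products:

```python
import numpy as np

# Arbitrary assumptions: 4 harmonic signals on [0, T), sampled at N points.
T, N, I = 1.0, 1000, 4
t = np.linspace(0.0, T, N, endpoint=False)
S = np.array([np.cos(2 * np.pi * (i + 1) * t / T) for i in range(I)])

# Gram matrix of inner products ∫ S_i(t) S_j(t) dt, approximated by a sum.
G = S @ S.T * (T / N)
print(np.round(G, 3))   # ~diagonal matrix: the signals are orthogonal,
                        # hence linearly independent and separable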
Historically, the oldest method divides channels according to frequency: signals with different frequencies are transmitted through different channels. Such a method is called Frequency Division Multiple Access (FDMA). The other method uses division of the time axis into periodically repetitive intervals; each signal gets its own sequence of periodical time intervals. That method is called Time Division Multiple Access (TDMA). There are systems that use both frequency and time division of the communication channel. Nowadays, Code Division Multiple Access (CDMA), where different signals have different codes, is spreading in radio communication systems.

6.2 Frequency Division Multiple Access


Using that method the whole frequency band F of the communication line is divided into sub-bands ∆f_i — channels (Fig. 6.1). Different signals are transmitted through different channels. Technically FDMA is implemented using modulators M (Fig. 6.2). Each modulator receives one input signal e_i(t) and a harmonic oscillation with a distinct frequency ω_i. The output signals of all modulators are combined in the summer S, and the total signal S_M(t) is transmitted.

Figure 6.1: FDMA: the line band divided into sub-bands ∆f_1, ∆f_2, ∆f_3, …, ∆f_i.

Figure 6.2: Block diagram of forming of the total signal (inputs e_1 … e_n, modulators M_1 … M_n, summer S, output S_M(t)).

The multi-user receiver (Fig. 6.3) receives the total signal Ŝ_M(t); the different modulated signals are separated using band-pass filters (BPF) and then detected.

Figure 6.3: Block diagram of the multi-user receiver (band-pass filters followed by detectors D_1 … D_n, outputs ê_1 … ê_n).

For transmission of digital information using FDMA, composite multi-frequency signals are used:

s(t) = Σ_{i=1}^{N} C_i cos(ω_i t − φ_i).    (6.3)

A very important group of such signals is the orthogonal multi-frequency signals, whose frequencies ω_i are chosen so that all components form a set of orthogonal functions:

∫_0^T cos(ω_i t − φ_i) cos(ω_j t − φ_j) dt = 0,   i ≠ j.    (6.4)

The model of an orthogonal multi-frequency signal can be written in a different way:

s(t) = Σ_{i=1}^{N} cos[(ω_0 + 2πi/T)t + φ_i].    (6.5)
Such signals are used in local computer networks and in radio systems for wide-band audio and video. In the future such signals are planned to be used in mobile communications.
Orthogonal multi-frequency signals are generated digitally. At first, modulation is performed in the domain of the complex spectrum and the complex spectrum {C_i} is formed. Continuous time signals are then obtained using a digital inverse Fourier transformer and digital-to-analog converters (Fig. 6.4).
Figure 6.4: Multi-frequency signal transmitter (complex spectrum C_1 … C_n, inverse Fourier transformer, DAC, output s(t)).

The multi-frequency signal receiver is similar to the transmitter. The received signal is converted into a digital signal by an analog-to-digital converter, and the components of different frequencies — the complex spectrum of signal amplitudes — are separated by the Fourier transformer (Fig. 6.5).

Figure 6.5: Multi-frequency signal receiver (input s(t), ADC, Fourier transformer, outputs C_1 … C_n).
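The digital generation and recovery described by Figs. 6.4 and 6.5 can be sketched in a few lines. The example below is a minimal illustration, not part of the original text; the subcarrier count and the BPSK mapping of bits to the complex spectrum {C_i} are arbitrary assumptions, and the DAC, channel and ADC stages are omitted:

```python
import numpy as np

# Minimal sketch of an orthogonal multi-frequency (OFDM-like) link.
rng = np.random.default_rng(1)
N = 16
bits = rng.integers(0, 2, N)
C = 2.0 * bits - 1.0              # complex spectrum {C_i}: BPSK, +1 / -1

s = np.fft.ifft(C)                # transmitter: inverse Fourier transformer
# ... DAC, channel and ADC are omitted in this sketch ...
C_hat = np.fft.fft(s)             # receiver: Fourier transformer

bits_hat = (C_hat.real > 0).astype(int)
print("recovered bits match:", np.array_equal(bits, bits_hat))
```

The inverse and forward Fourier transformers play exactly the roles of the blocks in Figs. 6.4 and 6.5: they map the complex spectrum into the time-domain signal and back.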

6.3 Time Division Multiple Access


That method uses signal discretization along the time axis with discretization frequency 1/T_c (Fig. 6.6). In that way a sequence of periodical time intervals (cycles) with duration T_c is formed. Each cycle is divided according to the number of desired time channels (TC); one channel is used to transmit one signal. These time intervals are periodically repetitive. In each cycle one sample of each transmitted signal is formed (Fig. 6.7 (a, b)). Afterwards, all samples are combined into one stream that is transmitted (Fig. 6.7 (c)).
Figure 6.6: TDMA: time channels TC1, TC2, … repeating in cycles of duration T_c.

Figure 6.7: Signal discretization and combination: (a) samples of s_1(t) at moments t_11, t_12, …; (b) samples of s_2(t) at moments t_21, t_22, …; (c) combined signal s(t).

In modern communication systems TDMA is used extensively for transmission of digital signals, i.e. the signals are not only discretized along the time axis but each sample is also coded. For example, if the information signals are coded using pulse code modulation, then during each cycle each time channel gets the code that corresponds to one sample of the primary signal. As usual, speech signals are coded by a binary eight-bit code.
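Returning to Fig. 6.7, the combining and separation of samples can be sketched very simply. The example below is a minimal illustration (the sample values and the use of only two channels are arbitrary assumptions): in every cycle one sample of each source is placed into its own time slot, and the receiver picks the slots apart again:

```python
import numpy as np

# Two sources, one sample per source per cycle (as in Fig. 6.7).
s1 = np.array([0.1, 0.4, 0.9, 0.3])      # samples of source 1 (assumed values)
s2 = np.array([0.7, 0.2, 0.5, 0.8])      # samples of source 2 (assumed values)

# Transmitter: interleave the samples into one TDMA stream, cycle by cycle.
stream = np.empty(2 * len(s1))
stream[0::2] = s1                        # time channel TC1 in every cycle
stream[1::2] = s2                        # time channel TC2 in every cycle

# Receiver: separate the channels by picking every second slot.
r1, r2 = stream[0::2], stream[1::2]
print(np.array_equal(r1, s1) and np.array_equal(r2, s2))   # True
```

The receiver recovers the sources correctly only because it knows exactly where each cycle begins, which is why synchronization, discussed next, is essential.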
When the digital signal formed using TDMA is received, the opposite operations must be performed — divide the stream into parts, separate the different channels, decode. These operations will be successful only if the time channels of the receiver coincide with the respective time intervals of the received signal.
Errors during channel separation can be avoided if an exact mark of each time interval is present. In modern communication systems such a mark is transmitted together with the signals. It is done by assigning the first time interval of each cycle (TC1 in Fig. 6.6) to the transmission of auxiliary cyclic synchronization signals. As usual, the first time interval of each cycle carries the same binary code; using that repetitive code the receiver is synchronized to the beginning of each cycle.
FDMA and TDMA are also used together. For example, the GSM mobile communication system has radio resources that consist of 120 harmonic oscillations spaced in frequency by 200 kHz. Each frequency channel is divided into eight time intervals — channels, so 8 users can use the same frequency channel. Another example is optical communication systems, where several light beams with different wavelengths are transmitted through one fiber; each beam is also divided in time.

6.4 Code Division Multiple Access


FDMA and TDMA methods are extensively used to transmit signals through wires and fiber optic lines; they are also applied in radio and mobile communication systems.


The main peculiarity of mobile communications is that the communication services are used at the same time by many users dispersed over a wide area. To guarantee full duplex radio connection using low power transmitters, the whole area is divided into parts, each with its own base station (BS). Each BS communicates only with the users in the prescribed area. When the network is planned, some band of the radio frequency spectrum is allocated to each BS as a common resource. For example, in GSM technology a set of possible frequency channels is allocated to each BS.
When a frequency is allocated to a specific BS, then, in order to prevent interference, the same frequency cannot be allocated to the nearest BSs. That requirement reduces the number of usable channels and the total capacity of the communication system. Unfortunately, even with careful planning of the communication network, the interference cannot be avoided due to the very complex conditions of radio wave propagation and user mobility. Briefly formulated, due to the narrow frequency spectrum, mobile communication networks have two main disadvantages: the necessity of territorial planning of the radio frequency spectrum and unavoidable interference.
In order to avoid these disadvantages, the code division multiple access (CDMA) technology, which is used with spread spectrum signals, was invented.

Spread Spectrum. The main principles of spread spectrum signals were formulated in 1949–50. Their application in radiolocation increased resolution and operating distance. For a long time spread spectrum signals were obtained by modulating the radio oscillation frequency with a saw-tooth signal; with such modulation the frequency of the modulated signal is changed periodically. Spread spectrum signals can also be obtained by discretely modulating a single-frequency signal — the frequency hopping method.
Using CDMA the spread spectrum signal is obtained by modulating the initial signal s(t) with a specially formed sequence of positive and negative pulses — a pseudo-noise (PN) sequence (Fig. 6.8). Such a method is called direct spreading.

Figure 6.8: Block diagram of a CDMA system (transmitter: data multiplied by the PN sequence m(t) and by the radio carrier; receiver: demodulator and filter followed by multiplication by the same PN sequence m(t)).

The general process of CDMA is illustrated by the time diagrams of Fig. 6.9. The initial signal s(t) as a time function is shown in Fig. 6.9 (a). That function is multiplied by the PN sequence m(t) (Fig. 6.9 (b)); their product is the spread spectrum signal ν(t) = s(t)m(t) (Fig. 6.9 (c)). Before transmission the spread spectrum signal is additionally modulated using a radio frequency oscillation (not shown in Fig. 6.9).
At the receiver side the opposite transformations are performed. First of all, synchronous detection is performed: in Fig. 6.8 a multiplier and a following filter are used as the detector. After filtering, the spread spectrum signal ν(t) is restored. The initial information signal s(t) is obtained by multiplying the signal ν(t) by the PN sequence m(t) (Fig. 6.9 (d)).
When the signal is transmitted through a real channel it is distorted, and the form of the restored signal may differ a lot from the original information signal. That is why the received signal is restored using a correlation detector that also performs signal integration. The integrated signal is shown in Fig. 6.9 (e); it can be seen that the integrator acts as an accumulator. At the end of each cycle the polarity (positive or negative) of the transmitted signal is determined according to the accumulated voltage.
When a signal formed using a different PN sequence enters the receiver (Fig. 6.9 (f), w(t) instead of ν(t)), the restoration of such a signal is illustrated in Figs. 6.9 (g) and (h). The product of such a signal with m(t) (Fig. 6.9 (g)) and the following integration (Fig. 6.9 (h)) do not accumulate a significant voltage at the end of each cycle. That voltage, added to the voltage from Fig. 6.9 (e), does not change the polarity of the restored signal.
It is worth mentioning that the algorithm of CDMA reception is the same as the algorithm of the correlation receiver described in Section 5.2.
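The spreading, summation and correlation detection just described can be sketched numerically. The example below is a toy illustration (the two short ±1 chip sequences and the data bits are arbitrary assumptions, not real maximal length sequences): two users share the channel, each bit is spread by the user's own chip sequence, and the receiver of user 1 recovers its bits by multiplying by that sequence and accumulating over one bit interval:

```python
import numpy as np

# Toy direct-spreading sketch with two users sharing the same channel.
m1 = np.array([+1, -1, +1, +1, -1, +1, -1, -1])   # PN sequence of user 1 (assumed)
m2 = np.array([+1, +1, -1, +1, +1, -1, -1, -1])   # PN sequence of user 2 (assumed)

bits1 = np.array([+1, -1, +1])                    # data of user 1 (±1)
bits2 = np.array([-1, -1, +1])                    # data of user 2 (±1)

v1 = np.concatenate([b * m1 for b in bits1])      # spread signal of user 1
v2 = np.concatenate([b * m2 for b in bits2])      # spread signal of user 2
x = v1 + v2                                       # total signal in the channel

# Receiver of user 1: multiply by m1 and integrate (accumulate) over each bit.
acc = (x.reshape(-1, len(m1)) * m1).sum(axis=1)
print(np.sign(acc))                               # recovered bits of user 1
```

Because the two chip sequences are mutually orthogonal in this toy case, the accumulator of user 1 builds up a large voltage of the correct polarity while the contribution of user 2 averages out, exactly as in Figs. 6.9 (e) and (h).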
Modern CDMA communication systems use PN sequences whose correlation functions are similar to the correlation function of white noise — a unit impulse. That requirement is fulfilled by maximal length sequences (m-sequences) that are generated using an n-bit register with feedback (Fig. 6.10), very similar to the coders of cyclic codes. Here the binary summation is done modulo 2. The feedback coefficients g_1, g_2, …, g_n are 1 or 0, and the distribution of 0s and 1s in the sequence of coefficients uniquely describes the generated PN sequence; by changing only one coefficient a different PN sequence is obtained (a small sketch of such a generator is given after the list of properties below).
Mathematically PN sequences (like cyclic codes) are characterized


Figure 6.9: Illustration of CDMA technology.


Figure 6.10: Block diagram of the m-sequence generator (shift register stages R_{r−1} … R_1 R_0 with modulo-2 feedback through coefficients g_{r−1} … g_2 g_1).

by the generating polynomial:

G(x) = g_n x^n + g_{n−1} x^{n−1} + … + g_2 x² + g_1 x + 1.    (6.6)

Properties of m-sequences:

• A generator with an n-bit register generates a PN sequence with a period of N = 2^n − 1 bits.

• The modified m-sequence (where the ones remain while the zeros are replaced by "−1") has the correlation function K(τ) = N when τ = 0, and K(τ) = −1 when τ = 1, 2, …, N − 1.

• Summing an m-sequence modulo 2 with the same but shifted sequence changes only the phase of the sequence. That feature is used to obtain copies of sequences that are not correlated with each other.

• The initial phase of the m-sequence generator is set by writing into the register those n bits of the sequence from which generation should begin. That feature is used to synchronize m-sequence generators.
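As a minimal illustration of such a generator (the register length, the feedback taps and the initial state are arbitrary assumptions chosen so that the feedback corresponds to a primitive polynomial of degree 3), the following sketch produces an m-sequence of period N = 2³ − 1 = 7 and checks the correlation property of the modified ±1 sequence stated above:

```python
import numpy as np

def lfsr_m_sequence(taps, state, length):
    """Fibonacci shift register with modulo-2 feedback (as in Fig. 6.10).
    'taps' are 1-based register positions added modulo 2; 'state' is the
    initial register content (the initial phase of the sequence)."""
    out = []
    for _ in range(length):
        out.append(state[-1])                 # output of the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]                # modulo-2 summation
        state = [fb] + state[:-1]             # shift the register
    return np.array(out)

# Assumed example: 3-stage register with feedback from stages 1 and 3
# (a primitive polynomial of degree 3), period N = 2^3 - 1 = 7.
N = 2 ** 3 - 1
m = lfsr_m_sequence(taps=[1, 3], state=[1, 0, 0], length=N)
s = 2 * m - 1                                 # modified sequence: 0 -> -1
K = [int(np.dot(s, np.roll(s, tau))) for tau in range(N)]
print(m)      # one period of the m-sequence
print(K)      # [7, -1, -1, -1, -1, -1, -1], as stated in the properties
```

Writing a different initial state into the register only changes the phase of the same sequence, which is exactly the feature used to synchronize the generators at the transmitter and the receiver.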

Interference. The quality of transmission of digital information is estimated using error probabilities. The error probability of one bit (Eqs. (5.21)–(5.23)) is determined by the ratio of the energy E_b used to transmit one bit to the noise power spectral density N_0, i.e. by E_b/N_0.
In a single communication line, only the noise of the dedicated frequency band and the noise of the receiver influence the error probability. When mobile communication uses CDMA, the same frequency band is shared by many users. Although PN sequences with very weak inter-correlations are used for spectral spreading, they are not totally independent. That creates additional interference distortions ∆E_inter which, in calculations of error probabilities, are added to the fluctuating distortions and very often are even bigger. The signal-to-noise ratio in that case is expressed as follows:

E_b/N = E_b/(N_0 + ∆E_inter).    (6.7)

Interference distortions are the main problem of CDMA. The influence of that noise depends on three factors:
• the inter-correlation of the signals,
• the power of the signals,
• the volume of usage, which depends on the number of users, the intensity of usage and the data transfer rate.
To decrease the influence of interference distortions and guarantee high quality of service, all these factors must be carefully analyzed.
