Digital Communications
Lecture Notes by Y. N. Trivedi

I. PROBABILITY, RANDOM VARIABLES AND STOCHASTIC PROCESSES (REVISION)

• Definition of Probability
• Examples: rolling a die, tossing a coin
• Sample space
• Event
• Joint events: P(A ∩ B) = P(A, B)
• Mutually exclusive events: P(A, B) = 0, example
• Conditional probability: P(A|B) = P(A, B)/P(B), example
• Independent events: P(A, B) = P(A)P(B), or equivalently P(A|B) = P(A)
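These definitions can be checked by brute-force enumeration of a finite sample space. A small sketch for two fair dice (the events A and B below are illustrative choices, not from the notes):

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered outcomes of rolling two fair dice.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """P(E) = |E| / |Omega| for equally likely outcomes."""
    return Fraction(len([w for w in omega if event(w)]), len(omega))

A = lambda w: w[0] + w[1] == 7   # event A: the sum is 7
B = lambda w: w[0] == 3          # event B: the first die shows 3

p_ab = prob(lambda w: A(w) and B(w))   # joint probability P(A, B)
p_a_given_b = p_ab / prob(B)           # P(A|B) = P(A, B) / P(B)

print(p_a_given_b)              # 1/6
print(p_a_given_b == prob(A))   # True: here A and B are independent
```

Since P(A|B) = P(A), the two events are independent, which also means P(A, B) = P(A)P(B).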
• Random Variable (RV): Discrete and Continuous
• Probability Distribution or Density Function (pdf): p_X(x)
• Cumulative Distribution Function (CDF): F_X(x), Properties:
– F_X(−∞) = 0 and F_X(∞) = 1
– F_X(x) is a non-decreasing function.
• Statistical averages of RVs:
– Mean or expected value: E[X] = m_x = ∫_{−∞}^{∞} x p_X(x) dx
– nth moment of RV X: E[X^n] = ∫_{−∞}^{∞} x^n p_X(x) dx
– nth central moment of RV X: E[(X − m_x)^n] = ∫_{−∞}^{∞} (x − m_x)^n p_X(x) dx;
for n = 2 it is known as the variance.
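These averages can be approximated by numerical integration of the pdf. A minimal sketch for the uniform distribution on [0, 1] (an illustrative choice) using a midpoint Riemann sum:

```python
def pdf(x):
    # Uniform pdf on [a, b] with a = 0, b = 1 (illustrative choice).
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def moment(n, center=0.0, steps=100000):
    """Approximate E[(X - center)^n] by a midpoint Riemann sum over [0, 1]."""
    dx = 1.0 / steps
    return sum(((i + 0.5) * dx - center) ** n * pdf((i + 0.5) * dx) * dx
               for i in range(steps))

mean = moment(1)               # E[X] = 1/2 for U(0, 1)
var = moment(2, center=mean)   # second central moment = variance = 1/12

print(round(mean, 6))  # 0.5
print(round(var, 6))   # 0.083333
```

The same pattern gives any nth (central) moment by changing `n` and `center`.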
– Joint moment for two RVs (Correlation):
E[X1 X2] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 p_{X1 X2}(x1, x2) dx1 dx2
– Joint central moment for two RVs (Covariance):
μ12 = E[(X1 − m1)(X2 − m2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x1 − m1)(x2 − m2) p_{X1 X2}(x1, x2) dx1 dx2
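Correlation and covariance can be estimated from samples by replacing the integrals with averages. A sketch using a simple correlated pair X2 = X1 + noise (an illustrative model, not from the notes):

```python
import random

random.seed(0)  # fixed seed for reproducibility

# Generate correlated pairs: X2 = X1 + independent unit-variance noise.
n = 200000
x1 = [random.gauss(0.0, 1.0) for _ in range(n)]
x2 = [a + random.gauss(0.0, 1.0) for a in x1]

m1 = sum(x1) / n
m2 = sum(x2) / n

corr = sum(a * b for a, b in zip(x1, x2)) / n               # estimate of E[X1 X2]
cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2)) / n  # estimate of mu_12

# With (near-)zero means, correlation and covariance coincide; both are ~ 1
# here, since E[X1 X2] = Var(X1) = 1 for this model.
print(round(corr, 2), round(cov, 2))
```

When the means are non-zero the two quantities differ by m1·m2, i.e. μ12 = E[X1X2] − m1 m2.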
• Some useful pdfs:
– Uniform distribution: X ∼ U(a, b), p_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise.
– Gaussian distribution: X ∼ N(m, σ²),
p_X(x) = (1/√(2πσ²)) e^{−(x−m)²/(2σ²)}
∗ Tail probability: Q(x) = (1/2) erfc(x/√2) = ∫_{x}^{∞} (1/√(2π)) e^{−t²/2} dt
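The Q-function is easy to evaluate through the complementary error function, which Python's standard library provides as `math.erfc`. A quick sketch:

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = (1/2) * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# A few well-known values (to ~4 decimal places):
print(round(Q(0.0), 4))  # 0.5
print(round(Q(1.0), 4))  # 0.1587
print(round(Q(3.0), 4))  # 0.0013
```

Q(0) = 1/2 because half of the standard Gaussian's mass lies above its mean.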
– Multivariate Gaussian distribution: Let X_i, i = 1, 2, ..., n, be Gaussian RVs with means m_i, variances σ_i², and covariance matrix M of order n × n. Let x and m be the n × 1 column vectors of the RVs X_i and the means m_i respectively. Then
p(x1, x2, ..., xn) = (1/((2π)^{n/2} (det M)^{1/2})) e^{−(1/2)(x − m)′ M^{−1} (x − m)}
For i.i.d. Gaussian variables with mean 0 and variance σ² and n = 2, the pdf p(x1, x2) is
p(x1, x2) = (1/(2πσ²)) e^{−(x1² + x2²)/(2σ²)}
In fact this is the distribution of X ∼ CN(0, σ²)
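For independent components the joint pdf factors into the product of the marginals. A minimal sketch verifying this for the n = 2, zero-mean, i.i.d. case above:

```python
import math

def gauss_pdf(x, m=0.0, var=1.0):
    """Scalar Gaussian pdf of N(m, var)."""
    return math.exp(-(x - m) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def joint_pdf(x1, x2, var=1.0):
    """Joint pdf of two i.i.d. zero-mean Gaussians (M = var * I, det M = var^2)."""
    return math.exp(-(x1 ** 2 + x2 ** 2) / (2.0 * var)) / (2.0 * math.pi * var)

# Independence: the joint pdf equals the product of the marginals.
x1, x2 = 0.3, -1.2
print(math.isclose(joint_pdf(x1, x2), gauss_pdf(x1) * gauss_pdf(x2)))  # True
```

In the general formula, a diagonal M makes the quadratic form separate into a sum, which is exactly why the exponential factors.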
• Stochastic processes
– Definition
– Ensemble averages
– Correlation
– Types of processes: stationary, wide-sense stationary, non-stationary and ergodic.

II. LINEAR ALGEBRA

• Field: A field is an algebraic system <F; +; ·> such that
1) <F; +> is an abelian group (closure, associativity, identity element, inverse element, commutativity), whose neutral element is denoted by 0.
2) <F \ {0}; ·> is an abelian group, whose neutral element is denoted by 1.
3) For every a, b and c in F, the distributive law a·(b + c) = (a·b) + (a·c) holds, and multiplication by 0 obeys the rule a·0 = 0·a = 0.
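The smallest field is GF(2) = {0, 1} with modulo-2 arithmetic, which is the field most relevant to channel coding. Since it is finite, the axioms can be verified exhaustively; a sketch (not part of the notes):

```python
# GF(2): addition and multiplication modulo 2.
F = (0, 1)
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2

# Closure and commutativity of both operations.
assert all(add(a, b) in F and add(a, b) == add(b, a) for a in F for b in F)
assert all(mul(a, b) in F and mul(a, b) == mul(b, a) for a in F for b in F)

# Additive identity 0, multiplicative identity 1, and the distributive law.
assert all(add(a, 0) == a and mul(a, 1) == a for a in F)
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
           for a in F for b in F for c in F)

# In GF(2) every element is its own additive inverse: a + a = 0.
assert all(add(a, a) == 0 for a in F)
print("GF(2) field axioms hold")
```

The self-inverse property (subtraction equals addition) is what makes modulo-2 arithmetic so convenient for linear block codes.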
• Vector Spaces: A vector space is an algebraic system <V; +; ·; F; +; ·> such that
– <F; +; ·> is a field;
– <V; +> is an abelian group, whose neutral element is denoted by 0;
– · is an operation on pairs consisting of an element of F and an element of V such that, for all c1, c2 in F and all v1, v2 in V,
∗ (closure) c1·v1 is in V;
∗ (scaling by 1) 1·v1 = v1;
∗ (associativity) c1·(c2·v1) = (c1·c2)·v1;
∗ (distributivity) (c1 + c2)·v1 = (c1·v1) + (c2·v1) and c1·(v1 + v2) = (c1·v1) + (c1·v2).
– In a vector space <V; +; ·; F; +; ·>, the elements of V are called vectors and the elements of F are called scalars.
– If n is a positive integer and c1, c2, ..., cn are scalars, then the vector c1v1 + c2v2 + ... + cnvn is called a linear combination of the vectors v1, v2, ..., vn. The vectors v1, v2, ..., vn are said to be linearly independent if the only linear combination of these vectors that gives 0 is the one with c1 = c2 = ... = cn = 0; otherwise they are said to be linearly dependent.
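Over a finite field this definition can be tested directly, by trying every non-zero choice of coefficients. A brute-force sketch for N-tuples over GF(2) (the helper names are illustrative):

```python
from itertools import product

def linear_combo(coeffs, vectors):
    """Componentwise GF(2) linear combination c1*v1 + ... + cn*vn."""
    n = len(vectors[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) % 2
                 for i in range(n))

def independent(vectors):
    """Independent iff no non-zero coefficient choice gives the zero vector."""
    zero = (0,) * len(vectors[0])
    return all(linear_combo(c, vectors) != zero
               for c in product((0, 1), repeat=len(vectors))
               if any(c))

print(independent([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # False: v3 = v1 + v2
print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True: standard basis
```

The exhaustive search costs 2^n combinations, so it is only a didactic check; Gaussian elimination is the practical test.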
– Subspaces: If <V; +; ·; F; +; ·> is a vector space and U is a subset of V such that <U; +; ·; F; +; ·> is also a vector space, then <U; +; ·; F; +; ·> is called a subspace of the vector space <V; +; ·; F; +; ·>.
– Vector Space of N-Tuples and basis: For any positive integer N, the set F^N is an N-dimensional vector space with 0 = [0, 0, ..., 0]. The vectors [1, 0, ..., 0], [0, 1, ..., 0], ..., [0, 0, ..., 1], each containing a single non-zero component equal to 1, form a basis for F^N.
– If V is any vector space of dimension n with 0 < n < ∞ (i.e., any non-trivial finite-dimensional vector space) over the scalar field F, and if e1, e2, ..., en is any basis for V, then every vector v in V can be written uniquely as a linear combination of the basis vectors, v = c1e1 + c2e2 + ... + cnen.
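With the standard basis of F^N the unique expansion coefficients are simply the components of v. A minimal illustration over GF(2):

```python
# Standard basis of GF(2)^3.
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
v = (1, 0, 1)

# In the standard basis, the coefficients of v = c1*e1 + c2*e2 + c3*e3
# are just v's components.
coeffs = v
recon = tuple(sum(c * b[i] for c, b in zip(coeffs, e)) % 2 for i in range(3))
print(recon == v)  # True
```

For a non-standard basis the coefficients would instead be found by solving a linear system over F.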
– Hamming Distance: The Hamming distance d(x, y) between N-tuples x and y with components in an arbitrary non-empty set is defined as the number of coordinates in which x and y differ. Properties of the Hamming distance:
∗ d(x, y) ≥ 0 with equality if and only if x = y (positive definiteness);
∗ d(x, y) = d(y, x) (symmetry);
∗ d(x, z) + d(z, y) ≥ d(x, y) (triangle inequality).
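The definition translates directly into code; a short sketch that also spot-checks the triangle inequality on example tuples (the tuples are illustrative):

```python
def hamming_distance(x, y):
    """Number of coordinates in which the N-tuples x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

x = (1, 0, 1, 1)
y = (0, 0, 1, 0)
z = (1, 1, 1, 0)

print(hamming_distance(x, y))  # 2: they differ in the 1st and 4th coordinates
# Triangle inequality: d(x, z) + d(z, y) >= d(x, y).
print(hamming_distance(x, z) + hamming_distance(z, y)
      >= hamming_distance(x, y))  # True
```

Note that the components need not be bits; the same function works for tuples over any alphabet.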
– Hamming weight: The Hamming weight w(x) of a vector x in F^N is defined as the number of coordinates in which x is non-zero. It follows that d(x, y) = w(y − x). Properties of the Hamming weight:
∗ w(x) ≥ 0 with equality if and only if x = 0;
∗ w(x) = w(−x);
∗ w(x) + w(y) ≥ w(x + y).
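The identity d(x, y) = w(y − x) is easy to verify over GF(2), where subtraction is componentwise XOR. A sketch with illustrative tuples:

```python
def weight(x):
    """Hamming weight: number of non-zero coordinates of x."""
    return sum(c != 0 for c in x)

def distance(x, y):
    """Hamming distance: number of coordinates where x and y differ."""
    return sum(a != b for a, b in zip(x, y))

x = (1, 0, 1, 1)
y = (0, 0, 1, 0)

# Over GF(2), y - x is componentwise subtraction mod 2 (i.e. XOR),
# so d(x, y) = w(y - x).
diff = tuple((b - a) % 2 for a, b in zip(x, y))
print(distance(x, y) == weight(diff))  # True

# Subadditivity of the weight: w(x) + w(y) >= w(x + y).
s = tuple((a + b) % 2 for a, b in zip(x, y))
print(weight(x) + weight(y) >= weight(s))  # True
```

This is the link used throughout coding theory: the minimum distance of a linear code equals the minimum weight of its non-zero codewords.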