
Communication Systems EE 132A

Prof. Suhas Diggavi

UCLA Winter Quarter 2015


Handout # 16, Wednesday, January 28, 2015

Solutions: Homework Set # 3

Problem 1
(Sufficient Statistics for Ternary Hypothesis Testing)
First, find the conditional distributions:

$$ f_{Y|H}(y \mid k) = \frac{1}{(2\pi\sigma^2)^{3/2}} \exp\!\left( -\frac{1}{2\sigma^2}\left( y_0^2 + y_1^2 + y_2^2 - 2\sqrt{E}\, y_k + E \right) \right) $$
From here, we can think of the problem in two ways: as a three-hypothesis M-ary problem, or as a cascade of binary hypothesis tests. We consider the binary approach first. Define $T_0(Y) = y_1 - y_0$ and $T_1(Y) = y_2 - y_0$. The likelihood ratio test between $H_0$ and $H_1$ reduces to

$$ T_0(Y) \underset{H_0}{\overset{H_1}{\gtrless}} 0. $$

If $y_0 > y_1$, we next compare $y_2$ against $y_0$ using the likelihood test

$$ T_1(Y) \underset{H_0}{\overset{H_2}{\gtrless}} 0. $$

If, however, $y_1 > y_0$, we do a likelihood test between hypotheses $H_2$ and $H_1$:

$$ T_1(Y) \underset{H_1}{\overset{H_2}{\gtrless}} T_0(Y). $$

Since the final decision depends on $Y$ only through $T_0(Y)$ and $T_1(Y)$, these two statistics form a sufficient statistic for this problem.


If we consider the problem as a ternary hypothesis testing problem, we have

$$ \hat{H} = \arg\max_k f_{Y|H}(Y \mid k) = \arg\max_k \{ y_k \} = \arg\max \{ 0,\; y_1 - y_0,\; y_2 - y_0 \} = \arg\max \{ 0,\; T_0(Y),\; T_1(Y) \}, $$

and we can again see that $T_0(Y)$ and $T_1(Y)$ form a sufficient statistic.
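The argmax rule above can be sketched numerically. The following Python sketch (illustrative, not part of the original solution) assumes the model implied by the derivation: under $H_k$ only the $k$-th observation has mean $\sqrt{E}$, the rest have mean zero.

```python
import numpy as np

# Illustrative sketch (assumed model): under H_k the k-th observation has
# mean sqrt(E) and the others mean 0, with i.i.d. Gaussian noise.
# The ML rule then reduces to argmax{0, T0(Y), T1(Y)}.
def decide(y):
    """ML decision from the sufficient statistics T0 = y1 - y0, T1 = y2 - y0."""
    t0 = y[1] - y[0]
    t1 = y[2] - y[0]
    return int(np.argmax([0.0, t0, t1]))  # index k of the decided H_k
```

Note that `decide` depends on $Y$ only through $T_0$ and $T_1$, which is exactly the sufficiency claim.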

Problem 2
(Union Bound for Ternary Signals)
(a) We can denote the received signal by $y = (y_1, y_2)^T$; then the optimal decision region $R_1$ can be written as

$$ y^T(m_1 - m_2) > \frac{\|m_1\|^2 - \|m_2\|^2}{2} \quad\text{and}\quad y^T(m_1 - m_3) > \frac{\|m_1\|^2 - \|m_3\|^2}{2}. $$

Plugging in $m_1$, $m_2$, and $m_3$, we can simplify the above decision rule to

$$ y_1 - y_2 > 0 \quad\text{and}\quad y_1 + y_2 > 0. $$


Similarly, the optimal decision region $R_2$ is

$$ y^T(m_2 - m_1) > \frac{\|m_2\|^2 - \|m_1\|^2}{2} \quad\text{and}\quad y^T(m_2 - m_3) > \frac{\|m_2\|^2 - \|m_3\|^2}{2}, $$

or

$$ y_2 - y_1 > 0 \quad\text{and}\quad y_2 > 0. $$
The optimal decision region $R_3$ is

$$ y^T(m_3 - m_1) > \frac{\|m_3\|^2 - \|m_1\|^2}{2} \quad\text{and}\quad y^T(m_3 - m_2) > \frac{\|m_3\|^2 - \|m_2\|^2}{2}, $$

or

$$ y_1 + y_2 < 0 \quad\text{and}\quad y_2 < 0. $$
The regions can be sketched as below.

Figure 1: Optimal decision regions.
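A constellation consistent with the simplified inequalities is $m_1 = (1,0)$, $m_2 = (0,1)$, $m_3 = (0,-1)$ (an assumption for illustration; the problem statement fixes the actual points). This Python sketch checks that the inequality form of $R_1$ agrees with the minimum-distance rule:

```python
import numpy as np

# Assumed constellation consistent with the inequalities above:
# m1 = (1,0), m2 = (0,1), m3 = (0,-1)
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])

def nearest(y):
    """Minimum-distance decision: 1-based index of the closest m_i."""
    d2 = ((M - np.asarray(y, dtype=float)) ** 2).sum(axis=1)
    return int(np.argmin(d2)) + 1

def in_R1(y):
    """Inequality form of R1: y1 - y2 > 0 and y1 + y2 > 0."""
    return (y[0] - y[1] > 0) and (y[0] + y[1] > 0)
```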


(b) Using the union bound, we have

$$ P(e \mid m_1) \le P(z_2 - z_1 > 1) + P(z_1 + z_2 < -1) = 2Q\!\left(\frac{1}{\sqrt{2}\,\sigma}\right), $$

$$ P(e \mid m_2) \le P(z_1 - z_2 > 1) + P(z_2 < -1) = Q\!\left(\frac{1}{\sqrt{2}\,\sigma}\right) + Q\!\left(\frac{1}{\sigma}\right), $$

and

$$ P(e \mid m_3) \le P(z_1 + z_2 > 1) + P(z_2 > 1) = Q\!\left(\frac{1}{\sqrt{2}\,\sigma}\right) + Q\!\left(\frac{1}{\sigma}\right), $$

where $z_1 + z_2 \sim \mathcal{N}(0, 2\sigma^2)$ and $z_1 - z_2 \sim \mathcal{N}(0, 2\sigma^2)$. Since $Q(1/\sigma) < Q(1/(\sqrt{2}\sigma))$, the message $m_1$ is the most vulnerable to errors.
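These bounds are easy to evaluate numerically using the identity $Q(x) = \tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$. A small Python sketch, with an assumed $\sigma = 1$ for illustration:

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2.0))

sigma = 1.0  # noise standard deviation (assumed value, for illustration)
pe_m1 = 2 * Q(1 / (sqrt(2.0) * sigma))
pe_m2 = Q(1 / (sqrt(2.0) * sigma)) + Q(1 / sigma)
pe_m3 = pe_m2
# pe_m1 is the largest bound, since Q(1/sigma) < Q(1/(sqrt(2)*sigma))
```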

(c) Using the union bound, we have

$$ P(e) \le 2Q\!\left(\frac{d_{\min}}{2\sigma}\right) = 2Q\!\left(\frac{1}{\sqrt{2}\,\sigma}\right). $$

Problem 3
(Comparing PSK Systems)
(a) The minimum distance in 8PSK is

$$ d = 2\sqrt{E}\,\sin\frac{\pi}{8}, $$

so that the error probability is bounded as

$$ P_e \le 2Q\!\left(\sqrt{\frac{E}{\sigma^2}\sin^2\frac{\pi}{8}}\right) \approx 2Q\!\left(\sqrt{0.14\,\frac{E}{\sigma^2}}\right). $$

(b) To get an error probability of $10^{-5}$, we need to have

$$ 2Q\!\left(\sqrt{\frac{E}{\sigma^2}\sin^2\frac{\pi}{8}}\right) \le 10^{-5}, $$

which requires

$$ \sqrt{\frac{E}{\sigma^2}\sin^2\frac{\pi}{8}} \gtrsim 4.4, $$

so that

$$ \frac{E}{\sigma^2} \ge \frac{4.43^2}{\sin^2\frac{\pi}{8}} \approx 134.1 = 21.3\,\text{dB}. $$
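The required SNR can be checked numerically. This sketch solves $2Q(x) = 10^{-5}$ by bisection and recovers a requirement of roughly 133 (about 21.2 dB); this agrees with the 134.1 (21.3 dB) above up to the rounding of the Q-function threshold.

```python
from math import erfc, sqrt, sin, pi, log10

def Q(x):
    # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2.0))

# Solve 2*Q(x) = 1e-5 for x by bisection on [0, 10] (2*Q is decreasing)
lo, hi = 0.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if 2 * Q(mid) > 1e-5:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)                # threshold, rounds to 4.4
snr = x ** 2 / sin(pi / 8) ** 2    # required E / sigma^2
snr_db = 10 * log10(snr)           # roughly 21.2-21.3 dB
```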

(c) For 16PSK with $E' = 4E/3$, the error probability is bounded by

$$ P_e \le 2Q\!\left(\sqrt{\frac{E'}{\sigma^2}\sin^2\frac{\pi}{16}}\right) = 2Q\!\left(\sqrt{\frac{E}{\sigma^2}\cdot\frac{4}{3}\sin^2\frac{\pi}{16}}\right) \approx 2Q\!\left(\sqrt{0.05\,\frac{E}{\sigma^2}}\right). $$

Since the Q-function is monotonically decreasing and $0.14 > 0.05$, 8PSK will have the smaller error probability.
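The comparison comes down to the two coefficients of $E/\sigma^2$ inside the square roots, which a short sketch can verify:

```python
from math import sin, pi

# Coefficients multiplying E/sigma^2 inside the Q-function arguments
c8 = sin(pi / 8) ** 2                   # 8PSK: ~0.146
c16 = (4.0 / 3.0) * sin(pi / 16) ** 2   # 16PSK at E' = 4E/3: ~0.051
# c8 > c16, so the 8PSK bound has the larger argument and is therefore smaller
```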

Problem 4
(9-QAM)

(a) According to the MAP decision rule, the decision region for $s_0$ is $y_1 < -1/2$ and $y_2 < -1/2$. Using the union bound, the probability of error when $s_0$ is sent can be bounded by considering the two events $y_1 > -1/2$ and $y_2 > -1/2$. Thus,

$$ P(\text{error} \mid s_0) \le P(y_1 > -1/2 \mid s_0) + P(y_2 > -1/2 \mid s_0) = 2Q\!\left(\frac{1}{2\sigma}\right). $$

(b) Using a similar method as in Part (a), we can bound the probability of error when $s_1$ is sent:

$$ P(\text{error} \mid s_1) \le P(y_1 > -1/2 \mid s_1) + P(y_2 > 1/2 \mid s_1) + P(y_2 < -1/2 \mid s_1) = 3Q\!\left(\frac{1}{2\sigma}\right). $$

Also, we get the probability of error when $s_4$ is sent:

$$ P(\text{error} \mid s_4) \le P(y_1 < -1/2 \mid s_4) + P(y_1 > 1/2 \mid s_4) + P(y_2 < -1/2 \mid s_4) + P(y_2 > 1/2 \mid s_4) = 4Q\!\left(\frac{1}{2\sigma}\right). $$
(c) The overall probability of error can then be calculated using the results in (a) and (b), averaging over the four corner points, four edge points, and one center point:

$$ P(\text{error}) \le \frac{1}{9}\Big( 4\,P(\text{error} \mid s_0) + 4\,P(\text{error} \mid s_1) + P(\text{error} \mid s_4) \Big) = \frac{8}{3}\,Q\!\left(\frac{1}{2\sigma}\right). $$
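The weighted average can be checked with a short sketch ($\sigma$ is an assumed illustrative value; the 4/4/1 point counts follow from the $3\times 3$ grid):

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2.0))

sigma = 1.0  # assumed noise standard deviation, for illustration
q = Q(1 / (2 * sigma))
pe_corner = 2 * q   # four corner points, like s0
pe_edge = 3 * q     # four edge midpoints, like s1
pe_center = 4 * q   # the single center point s4
p_error = (4 * pe_corner + 4 * pe_edge + pe_center) / 9  # = (8/3) * q
```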

Problem 5

%% Setup
N = 1000;
Eavg = 1;
E_noise = .01;
%% Constellation Creation
PSK_const(:,1) = sqrt(Eavg)*cos(0:pi/8:15*pi/8);
PSK_const(:,2) = sqrt(Eavg)*sin(0:pi/8:15*pi/8);
PAM_const = ((0:15) - 7.5);
PAM_const = PAM_const/sqrt(sum(PAM_const.^2)/16)*sqrt(Eavg);
QAM_const = zeros(16,2);
for i = 0:3
    QAM_const((4*i+1):4*(i+1),1) = (i-1.5);
    QAM_const((i+1):4:end,2) = (i-1.5);
end
QAM_const = QAM_const/sqrt(sum(sum((QAM_const).^2))/16)*sqrt(Eavg);
%% Transmit Sequence Creation
PSK_symbols = ceil(16*rand(1,N));
PSK_transmit = PSK_const(PSK_symbols,:);
PAM_symbols = ceil(16*rand(1,N));
PAM_transmit = PAM_const(PAM_symbols).';  % column vector, to match the noise below
QAM_symbols = ceil(16*rand(1,N));
QAM_transmit = QAM_const(QAM_symbols,:);
%% Channel
Z = sqrt(E_noise)*randn(N,5);
PSK_receive = PSK_transmit + Z(:,1:2);
PAM_receive = PAM_transmit + Z(:,3);
QAM_receive = QAM_transmit + Z(:,4:5);
%% Decision Rule
PAM_decision = zeros(size(PAM_symbols));
PSK_decision = zeros(size(PSK_symbols));
QAM_decision = zeros(size(QAM_symbols));
for iSample = 1:N
    % Find the nearest constellation point to the received sample and
    % return the corresponding index in the constellation vector
    [~, PAM_decision(iSample)] = min(abs(PAM_receive(iSample) - PAM_const));
    [~, PSK_decision(iSample)] = min(abs(PSK_receive(iSample,1) - ...
        PSK_const(:,1)).^2 + abs(PSK_receive(iSample,2) - PSK_const(:,2)).^2);
    [~, QAM_decision(iSample)] = min(abs(QAM_receive(iSample,1) - ...
        QAM_const(:,1)).^2 + abs(QAM_receive(iSample,2) - QAM_const(:,2)).^2);
end
%% Statistics
PAM_error_rate = sum(PAM_decision ~= PAM_symbols)/N;
PSK_error_rate = sum(PSK_decision ~= PSK_symbols)/N;
QAM_error_rate = sum(QAM_decision ~= QAM_symbols)/N;
[PAM_error_rate PSK_error_rate QAM_error_rate]
%% Plots
figure;
subplot(2,1,1)
plot(PAM_receive, zeros(size(PAM_receive)), 'b.')
title('Received PAM scatterplot')
xlabel('V_1')
ylabel('V_2')
subplot(2,2,3)
plot(PSK_receive(:,1), PSK_receive(:,2), 'b.')
title('Received PSK scatterplot')
xlabel('V_1')
ylabel('V_2')
subplot(2,2,4)
plot(QAM_receive(:,1), QAM_receive(:,2), 'b.')
title('Received QAM scatterplot')
xlabel('V_1')
ylabel('V_2')
Sample error rates and constellation plots:
PAM: 0.249
PSK: 0.042
QAM: 0.002

Looking at the three constellations in the original problem statement, it should come as no surprise that QAM has the lowest probability of error and PAM the highest: for the same average energy, the minimum distance between points in the QAM constellation is clearly larger than in the PSK or PAM constellations.
