
Understanding Rate-Diversity Tradeoff in MIMO

Wireless Systems and


Low Complexity Sphere Decoding

A PROJECT REPORT

Submitted in partial fulfillment of the requirements


for the award of the Degrees of

BACHELOR OF TECHNOLOGY
in
ELECTRICAL ENGINEERING
and
MASTER OF TECHNOLOGY
in
COMMUNICATION SYSTEMS
by

Ganti Radha Krishna


EE99386

Under the guidance of

Dr. K. Giridhar

DEPARTMENT OF ELECTRICAL ENGINEERING


INDIAN INSTITUTE OF TECHNOLOGY
MADRAS - 600 036
MAY 2004
CERTIFICATE

This is to certify that the project report titled Understanding Rate-Diversity Tradeoff in MIMO Wireless Systems and Low Complexity Sphere Decoding, submitted by G. Radha Krishna in partial fulfillment of the requirements for the award of the Degrees of Bachelor of Technology in Electrical Engineering and Master of Technology in Communication Systems, is a bona fide record of the work carried out by him under my supervision and guidance at the Department of Electrical Engineering, Indian Institute of Technology, Madras.

Place Dr. K.Giridhar


Date Associate Professor
Department of Electrical Engineering
Indian Institute of Technology, Madras

Acknowledgements

I express my deepest thanks to my guide Dr. K. Giridhar for his constant guidance and advice throughout the work. I would like to thank him for giving me the freedom to pursue my ideas and for encouraging me all the way.
I wish to thank Dr. Srikrishna Bhashyam for bearing with me through the endless technical discussions. I also wish to thank Dr. V.V. Rao, Dr. R. Aravind, Dr. Devendra Jalihal and other faculty members for the courses they offered, which kindled my interest in communications and broadened my knowledge base. I thank Dr. S.A. Choudum and Dr. S.H. Kulkarni for their courses, which deepened my interest in mathematics. I would like to thank the TeNet group for the new Intel lab, which provided me with excellent computing facilities.
I would like to thank Ranga, Raghavendra, Lakshmi and other members of the Intel lab for a creative, stimulating and fun environment. I am thankful to Klutto for his help during the initial stages of my project and for the great discussions with him. I would specially like to thank Srinivas for his constant help and motivation, and for listening to me patiently. I am also grateful to Srinivas for helping me in the literature survey and on trips to the MathScience library to search for books. A thousand thanks to Vamsi, who helped me rejuvenate my interest in math. The hours and hours of discussions about mathematical problems, and especially geometry, are times I will never forget. I also wish to thank Shashi for his help and for lots of error control coding discussions. I thank all my Dual Degree friends for the great and enjoyable times we had. I wish to thank my room neighbor and good friend Sambit for bearing with me for three years, never refusing me financial, technical and computational aid, and for all the great times I had paining him. I wish to thank all the sixth wing juniors for the great time I had with them. I thank Ravish, Subbu, Pavan Ram, Bamba, Jaykumar, Ankit, Sweety, Biju and all my other sixth wing friends who made my stay at IIT memorable and enjoyable.
I wish to thank my family, who did not force me to go to medical school and encouraged me all the way. I am very grateful to my grandmother and my peda-attha for their love and affection towards me. Finally, I would like to thank my parents and my younger brother Shyam for their never-ending love and support.

Abstract

The use of multiple antennas at the transmitter and receiver increases the rate and reliability of communication over wireless channels. To achieve both rate and diversity, the input code matrix must satisfy certain criteria. In this thesis, the design of high rate space time codes, their decoding complexity, and the criteria for achieving the optimal rate-diversity tradeoff are studied.
We first investigate Linear Dispersion Codes (LDC), which achieve the ergodic capacity of the channel. LDC for different numbers of transmit and receive antennas are designed and compared, using simulations, with LDC based on frame theory.
We then analyze decoding in the multiple antenna scenario using the Sphere Decoding algorithm. We propose a new initial radius, and a method for updating it, based on channel knowledge together with the noise variance rather than the noise variance alone. The reduction in complexity due to the new radius is verified by simulations.
We develop criteria to be satisfied by space time codes in order to achieve the optimal rate-diversity tradeoff. The error probability of non-Gaussian codes is analyzed as a function of the singular values of the code and channel matrices. Using this analysis, conditions for a code to achieve the optimal tradeoff are derived. Finally, we present full rate and full diversity space time codes using extension fields.
Contents

Acknowledgements iii

Abstract v

List of Figures x

List of Tables xii

Notation xiii

1 Introduction 1
1.1 Channel and System Model . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Thesis Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 Theoretical Background 6
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Design Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.1 Channel Capacity . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.2 Diversity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.3 Rate Diversity Tradeoff . . . . . . . . . . . . . . . . . . . . . . 11

3 Linear Dispersion Codes 12


3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2 Linear Dispersion Codes . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.2.1 Equivalent Channel . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2.2 Design Criterion . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.3 Code Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2.4 LDC for Constraint Three . . . . . . . . . . . . . . . . . . . . 15
3.2.5 LDC for different (M,N) . . . . . . . . . . . . . . . . . . . . . 15
3.3 LDC Based on Frame Theory . . . . . . . . . . . . . . . . . . . . . . 17
3.3.1 Code Definition . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

4 Sphere Decoding 24
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2 Sphere Decoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2.1 The Sphere Decoding Algorithm . . . . . . . . . . . . . . . . . 27
4.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.3.1 Covering Radius . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4 Modified Decoding Rule . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.4.1 Calculation of Major Diagonal . . . . . . . . . . . . . . . . . . 32
4.4.2 Method A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.4.3 Method B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.4.4 Comparing D/2 with R0 . . . . . . . . . . . . . . . . . . . . . 34
4.5 Simulation Results and Conclusion . . . . . . . . . . . . . . . . . . . 35
4.5.1 Infinite Lattice . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.5.2 Finite Lattice . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

5 Rate-Diversity Tradeoff 40
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2 Diversity-Multiplexing Tradeoff . . . . . . . . . . . . . . . . . . . . . 40
5.2.1 Channel Model . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.2.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.2.3 Tradeoff Curve . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.2.4 Error events . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.2.5 Tradeoff Achieved by Alamouti Code . . . . . . . . . . . . . . 44
5.3 Tilted QAM Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.3.1 Criterion for a Non Gaussian code to achieve optimal tradeoff
for (M, M ) MIMO system . . . . . . . . . . . . . . . . . . . . 46
5.3.2 Examples of Codes that achieve optimal Tradeoff . . . . . . . 51
5.3.3 Error Analysis for Non Gaussian Codes . . . . . . . . . . . . . 52
5.4 Full rate and Full Diversity codes from Extension fields . . . . . . . . 58
5.4.1 Code Construction . . . . . . . . . . . . . . . . . . . . . . . . 59
5.4.2 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . 60
5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Appendix A 64
A.1 Extension Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Appendix B 66
B.1 Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Appendix C 68
C.1 LDC for (3, 3),(3, 1) and (4, 1) systems . . . . . . . . . . . . . . . . . 68
C.2 Frame theory based LDC for (3, 3) and (4, 1) systems . . . . . . . . . 73

Bibliography 75

List of Figures

1.1 Multiple antenna channel with M transmit and N receive antenna . . 2

2.1 Ergodic capacity of MIMO channel for various (M,N) . . . . . . . . . 8


2.2 Outage probability for various (M,N) and R=M bps/Hz . . . . . . . . 9
2.3 Outage Capacity for various (M,N) and 1% Outage Probability . . . 10

3.1 Error Performance of LDC, Frame based LDC for (2,2) system at R =
2bps/Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.2 Error Performance of LDC, Frame based LDC for (2,2) system at R =
4bps/Hz using QAM . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Comparison of different Frame based Designs. . . . . . . . . . . . . . 22
3.4 Comparison of different LDC for different constraints. . . . . . . . . . 23

4.1 Sphere Decoder - basic idea . . . . . . . . . . . . . . . . . . . . . . . 26


4.2 Sphere Decoder - Increasing the dimension of search . . . . . . . . . . 26
4.3 Complexity versus SNR for finite Lattices . . . . . . . . . . . . . . . 29
4.4 Updating Procedure for Method B . . . . . . . . . . . . . . . . . . . 33
4.5 Complexity versus δ for 2x2 and 16-QAM . . . . . . . . . . . . . . . 34
4.6 Average computational complexity versus SNR for infinite lattices . . 35
4.7 Average computational complexity versus SNR for Method A . . . . . 36
4.8 Average number of lattice points versus SNR for Method A . . . . . . 37
4.9 Average number of lattice points versus SNR for Method B . . . . . . 38
4.10 Average computational complexity versus SNR for Method B . . . . . 38
5.1 Diversity-Multiplexing tradeoff, d∗(r), for general M, N and T ≥ M + N − 1 . . . . . . . . 43
5.2 Comparison of Alamouti and optimal rate diversity tradeoff for M = N = T = 2 . . . . . . . . 45
5.3 Outage region as a function of λ1 and λ2 . . . . . . . . . . . . . . . . 49
5.4 Comparison of exact and approximate error bounds . . . . . . . . . . 55
5.5 Comparison of New Method with Alamouti and Dast for 2 × 1 system
for R = 1bps/Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.6 Comparison of New code on different extension fields for 2 × 2 system
for R = 2bps/Hz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.7 Capacity of New Code for different rotations 2 × 2 system . . . . . . 63
5.8 SER performance of New Code for different rotations 2 × 2 system . 63

List of Tables

3.1 Comparison of ergodic capacity of channel and capacity obtained by LDC . . . . . . . . 18

4.2 2^{αN} as a function of SNR for ε = 0.9 . . . . . . . . 36

5.3 Values of βi for optimal rate diversity tradeoff for 2 × 2 system, T = 2 57


5.4 Values of βi for optimal rate diversity tradeoff for 3 × 3 system, T = 3 57
5.5 Values of βi for optimal rate diversity tradeoff for 4 × 4 system, T = 4 57

Notation
A† : A^T if A is real; A^H if A is complex
* : complex conjugate
R(A) : real part of A
I(A) : imaginary part of A
⊗ : Kronecker product
=̇ : exponential equality (defined later)
<̇ : exponentially less than (defined later)
>̇ : exponentially greater than (defined later)
min(a, b) : minimum of a and b
max(a, b) : maximum of a and b
(x)+ : max(0, x)
tr(A) : trace of matrix A
vec(A) : column stacking of matrix A
E(x) : expectation of random variable x
(M, N) : MIMO system with M transmit antennas and N receive antennas
Chapter 1

Introduction

Multiple antennas at the receiver or transmitter provide a significant increase in capacity and robustness of communication in a fading environment. It has been shown that the capacity grows linearly with the number of antennas used. The maximum diversity gain of a multiple antenna channel has been found to equal the total number of paths between the antennas. The extra gain in rate and diversity for the same transmit power makes multiple antenna systems useful and attractive for practical purposes. But to achieve capacity or diversity, the input symbols must be arranged in an appropriate fashion in both space and time. This arrangement of input symbols can be thought of as a matrix, also called a space-time code. To achieve a gain in terms of diversity or capacity, this matrix should possess specific properties.
Much effort has gone into developing space-time codes that possess specific properties. The first space time code was proposed by Alamouti [12] over two transmit antennas and two time periods. This code is orthogonal and has linear decoding complexity. Tarokh [10] gave the constellation criterion and size needed to achieve complete diversity. He also proved that orthogonal codes do not exist for more than two transmit antennas over complex constellations. Orthogonal codes have low decoding complexity but are rate deficient. A set of codes called linear dispersion codes, which achieve the capacity of the channel, was proposed by Hassibi [1]. These codes achieve both rate and diversity and hence give us a handle on the design required for an application.
Decoding of space time codes using the maximum likelihood rule leads to complexity that is exponential in the number of antennas, making their usage prohibitive. To reduce the complexity of decoding, the sphere decoding algorithm, which has cubic complexity at high SNR, can be used. However, sphere decoding is computationally complex at lower SNR.
Zheng and Tse [8] established that there is a tradeoff between robustness and rate of transmission. They provided an upper bound on diversity for a given rate of transmission. Practical codes and the criteria to achieve this bound have been found only for two-antenna systems.
In this thesis, our goal is to understand and design LDC and a variant, namely frame theory based LDC. We bring down the complexity of sphere decoding by an appropriate choice of initial radius depending on channel conditions. Criteria for a non-Gaussian code to achieve the optimal tradeoff are presented, and the error analysis of non-Gaussian codes is studied.

1.1 Channel and System Model

Fig.1.1: Multiple antenna channel with M transmit and N receive antennas

Figure 1.1 shows a multiple input multiple output channel with M transmit antennas and N receive antennas. At any instant, M signals satisfying an average power condition are transmitted. In this thesis, the channel is modeled as flat, Rayleigh, and block fading, with channel knowledge available at the receiver. The noise is modeled as Additive White Gaussian Noise (AWGN). This model of the channel is used throughout the thesis, unless specified otherwise. The mathematical model for the above channel, when the channel is assumed to be quasistatic for T channel uses, is

  Y = √(ρ/M) HX + V    (1.1)

  Y, V ∈ C^{N×T}, H ∈ C^{N×M}, X ∈ C^{M×T}

where H is the multiple antenna channel, X denotes the input transmission signal matrix, V denotes the additive white Gaussian noise, Y is the received signal matrix, and ρ denotes the required SNR. Each h_ij is modeled as a Rayleigh fading (complex Gaussian) variable with variance 0.5 in each dimension, and each v_ij is modeled as a complex Gaussian with variance 0.5 in each dimension.
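As a concrete illustration, one quasistatic block of the model above can be simulated as follows. This is a minimal numpy sketch: the QPSK input matrix, the function name, and the seed are assumptions of this sketch, not constructions from the thesis.

```python
import numpy as np

def simulate_block(M, N, T, snr_db, rng):
    """One quasistatic block of Eq. (1.1): Y = sqrt(rho/M) H X + V."""
    rho = 10.0 ** (snr_db / 10.0)
    # h_ij, v_ij: complex Gaussian, variance 0.5 per real dimension
    H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    V = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
    # Placeholder input: unit-energy QPSK symbols (any X meeting the average
    # power condition could be substituted here)
    X = (rng.choice([-1.0, 1.0], (M, T)) + 1j * rng.choice([-1.0, 1.0], (M, T))) / np.sqrt(2)
    Y = np.sqrt(rho / M) * H @ X + V
    return H, X, V, Y

H, X, V, Y = simulate_block(M=2, N=2, T=2, snr_db=20, rng=np.random.default_rng(0))
```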

1.2 Thesis Contribution

In this thesis, we studied the problem of designing codes for near channel capacity communication. We studied the problem of decoding these codes using sphere decoding, and its sensitivity to the initial choice of radius. The criteria required for non-Gaussian codes to achieve the optimal rate diversity tradeoff have also been studied.
In this thesis, Linear Dispersion Codes (LDC), which achieve the ergodic capacity of the channel, were obtained by optimization of the capacity function. The performance of these codes was compared under various power constraints and for various configurations of transmit and receive antennas. LDC based on frame theory were constructed and their performance was compared with that of conventional LDC.
LDC are high rate codes, and ML decoding at these rates is computationally very expensive. The sphere decoding algorithm can be used to decode LDC when the input symbols are drawn from lattice constellations. Sphere decoding has a very high complexity at low SNR, since the initial radius is conventionally chosen using the noise variance. In this thesis, an initial radius based on channel knowledge is proposed. The new radius is computationally easy to calculate, unlike previous choices of initial radius available in the literature. A new method of updating the radius on decoding failure is also proposed. These ideas were validated by comparing them with the conventional sphere decoding algorithm using simulations.
Tse [8] quantified the optimal rate diversity tradeoff and the criteria for Gaussian codes to achieve this optimal tradeoff. Yao [9] proposed conditions for non-Gaussian codes to achieve this optimality over two transmit and two receive antennas. In this thesis, we derived the criteria for non-Gaussian codes to achieve the optimal tradeoff for a general configuration of M transmit and M receive antennas. We proved that codes which were previously claimed, through simulation results, to achieve this optimality do theoretically satisfy the proposed criteria. We also analyzed the probability of error for non-Gaussian codes as a function of the singular values of the channel and the code. We used this analysis to re-derive the criteria required by a non-Gaussian code to achieve the optimal rate-diversity tradeoff. We also propose space time codes from extension fields which achieve full diversity and full rate. These codes can be thought of as a generalization of diagonal algebraic space time codes.

1.3 Thesis Outline

Chapter 2: In this chapter we review different metrics, such as equivalent channel capacity, outage capacity, diversity, and the rate-diversity tradeoff, that can be used to evaluate the performance of space time codes.
Chapter 3: In this chapter Linear Dispersion Codes are dealt with in detail. LDC for various antenna configurations, obtained by simulations, are presented. Frame theory based LDC are reviewed and their performance is compared to that of conventional LDC.
Chapter 4: In this chapter we review sphere decoding and motivate the need for a good initial decoding radius. We also propose a simple expression for the radius based on the channel. We then propose an intuition-based scheme for updating the radius. We finally present simulation results comparing all the schemes.
Chapter 5: In this chapter the rate diversity tradeoff is presented in detail. Tilted QAM codes are presented. Theoretical criteria required to achieve the optimal tradeoff for any M transmit and M receive antennas are then obtained. Error analysis of non-Gaussian codes with regard to the rate-diversity tradeoff is also presented. Finally, a set of codes that provide full rate and full diversity are constructed from extension fields.

Chapter 2

Theoretical Background

2.1 Introduction

In this chapter basic theoretical aspects of MIMO systems will be reviewed. Various
metrics for evaluation of space time codes will be considered.

2.2 Design Metrics

In this section we review some important metrics that can be used to evaluate the performance of a code.

2.2.1 Channel Capacity

Channel capacity for additive white Gaussian noise (AWGN) channels was first derived by Shannon in his paper "A Mathematical Theory of Communication". Space time channels also exhibit fading and have an extra degree of freedom: the spatial dimension. In this section we review the capacity of MIMO channels under various channel configurations and various levels of Channel State Information (CSI) availability.

2.2.1.1 Perfect CSI at Tx

When perfect channel knowledge is available at the transmitter, the water filling technique can be used to achieve capacity. The water filling method assumes independent Gaussian inputs. The channel is broken into equivalent subchannels along the different eigenmodes of the channel. The variance of each input is taken to be the positive part of the difference between a constant (the water level) and the inverse of the eigenmode gain. So if λ_i denote the eigenvalues of HH† (the squared singular values of the channel), the input variances are chosen as follows.

  E(|x_i|²) = (µ − λ_i^{−1})^+

where a^+ denotes max(0, a), and µ is chosen to meet the power constraint. The capacity of this channel is given by

  C(µ) = Σ_i (ln(µλ_i))^+

This capacity is an upper bound on all feedback capacities, so it can be used as a measure of how well a feedback scheme is performing with regard to rate.
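The allocation above can be sketched numerically with a bisection search on the water level µ; a minimal sketch under the definitions above (the function name, gain values, and tolerance are assumptions of this sketch):

```python
import numpy as np

def water_fill(gains, total_power, tol=1e-9):
    """Water filling over eigenmode gains lambda_i:
    E|x_i|^2 = (mu - 1/lambda_i)^+, with the powers summing to total_power."""
    lam = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + np.sum(1.0 / lam)  # bracket for the water level mu
    while hi - lo > tol:
        mu = (lo + hi) / 2.0
        if np.sum(np.maximum(mu - 1.0 / lam, 0.0)) > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2.0
    powers = np.maximum(mu - 1.0 / lam, 0.0)
    capacity = np.sum(np.maximum(np.log(mu * lam), 0.0))  # C(mu), in nats
    return powers, capacity

# Strong eigenmodes receive power; the weak mode (gain 0.1) is switched off
p, cap = water_fill([2.0, 1.0, 0.1], total_power=1.0)
```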

2.2.1.2 Perfect CSI at Rx

We assume that an input vector x is transmitted and y is received. We assume that the output at the receiver consists of (y, H) = (Hx + v, H). The mutual information between input and output is then
between input and output is then

I(x; (y, H)) = I(x; H) + I(x; y|H)

= I(x; y|H)

= EH [I(x; y|H = H)]

If x is constrained to have a covariance matrix Q, the choice of x that maximizes


mutual information is the circularly symmetric complex Gaussian of covariance Q,
and the corresponding mutual information is given by

Ψ(Q) = E[log det(IN + HQH† )] (2.1)

It can be easily shown that the choice of Q that maximizes Ψ(Q) is Q = (ρ/M) I, which gives the capacity as

  C(ρ) = E[log det(I_N + (ρ/M) HH†)]    (2.2)
This capacity is also known as the ergodic capacity. The expectation can be evaluated analytically using the Wishart distribution, and the exact expression is given by Telatar [13]. Foschini [14] showed that at high SNR,

  C ≈ min(M, N) log(SNR)

The ergodic capacity of various MIMO channels is plotted in Figure 2.1. We observe that the capacity indeed scales as min(M, N) log(SNR). Every code
Fig.2.1: Ergodic capacity of MIMO channel for various (M,N)

X induces an equivalent channel, and the ergodic capacity of this equivalent channel can be calculated. This gives an idea of the maximum rate that the code can support on average.
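The expectation in Eq. (2.2) can also be estimated by Monte Carlo simulation; a minimal sketch (trial count, seed, and function name are arbitrary assumptions of this sketch) that exhibits the min(M, N) scaling:

```python
import numpy as np

def ergodic_capacity(M, N, snr_db, trials=2000, rng=np.random.default_rng(1)):
    """Monte Carlo estimate of C(rho) = E[log2 det(I_N + (rho/M) H H^dagger)], in bps/Hz."""
    rho = 10.0 ** (snr_db / 10.0)
    caps = []
    for _ in range(trials):
        H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
        # det(I + (rho/M) H H^dagger) is real and positive
        caps.append(np.log2(np.linalg.det(np.eye(N) + (rho / M) * H @ H.conj().T).real))
    return float(np.mean(caps))

# At 20 dB a (2,2) channel supports roughly twice the rate of a (1,1) channel
c22 = ergodic_capacity(2, 2, 20)
c11 = ergodic_capacity(1, 1, 20)
```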

2.2.1.3 Outage Capacity and Outage Probability

Consider the case when the channel H is chosen randomly at the beginning of time and held constant for all uses of the channel. The Shannon capacity of this channel is zero, since there is a nonzero probability that the realized H is incapable of supporting the desired rate, however long we make the code length. In such a case we can talk of a different metric at each rate R, called the outage probability. The outage probability of a channel can be defined as the percentage of channel uses which are unable to support the required rate at a given SNR. Mathematically this translates to

  P_out(R, ρ) = P( log det(I_N + (ρ/M) HH†) < R )
We observe that P_out is a function of both ρ and R. In practice, coding is performed over a finite number of channel realizations, and the transmitter has no knowledge of the channel. So when the realized channel capacity is below R, the receiver cannot decode even with powerful codes. We define this as the outage event. This is a very good metric of evaluation when coding is done over finite channel realizations. The outage capacity can be defined as the maximum rate R_max that can be transmitted for a given ρ and P_out. We also note that P_out is a lower bound on the actual error probability [8]. Figure 2.2 illustrates the outage probability of (M, N) channels when R = M bps/Hz. We observe that the outage curve corresponding to an (M, N) channel has a slope of MN. Figure 2.3 illustrates the channel outage capacity for different (M, N) at an outage probability of 0.01.
Fig.2.2: Outage probability for various (M,N) and R=M bps/Hz

Fig.2.3: Outage Capacity for various (M,N) and 1% Outage Probability

2.2.2 Diversity

For an (M, N) system, a code can achieve a maximum diversity of MN. A code achieving diversity d has an error probability that falls as ρ^{−d}. This is one of the important metrics in evaluating an STBC.
For a code X to achieve the complete diversity MN, it has to obey certain criteria. Let X1 and X2 denote two distinct codewords and ∆ = X1 − X2. Then the code X achieves full diversity [10] if it satisfies the first two of the following conditions:

1. T ≥ M

2. all ∆ should be full rank

3. maximize the worst case determinant of ∆.

The third condition improves the coding gain, and hence the performance, of the code.
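The rank criterion above can be checked mechanically for a small codebook. The sketch below uses the standard Alamouti code over BPSK as a hypothetical example (not a construction from this thesis) and verifies that every codeword difference has full rank M = 2:

```python
import numpy as np
from itertools import product

def alamouti(s1, s2):
    """Standard Alamouti codeword for symbols s1, s2."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def min_rank_of_differences(codebook):
    """Minimum rank of Delta = X1 - X2 over all distinct codeword pairs.
    Full diversity requires this minimum to equal M (here M = 2)."""
    ranks = []
    for X1 in codebook:
        for X2 in codebook:
            if not np.allclose(X1, X2):
                ranks.append(np.linalg.matrix_rank(X1 - X2))
    return min(ranks)

bpsk = [-1.0, 1.0]
codebook = [alamouti(a, b) for a, b in product(bpsk, bpsk)]
r = min_rank_of_differences(codebook)  # Alamouti achieves full diversity
```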

2.2.3 Rate Diversity Tradeoff

An (M, N) system has min(M, N) degrees of freedom, and hence one can transmit independent data on these streams to increase the rate, or repeat the data across the streams to achieve diversity. Hence we observe that there is a fundamental tradeoff between rate and diversity. If we define r = R/log(SNR), then the tradeoff between r and diversity is given by [8]

  d(r) = (M − r)(N − r)

The details of this tradeoff are dealt with in Chapter 5. The performance of a code can be evaluated on the basis of the maximum diversity it can achieve for a given rate. This is a useful metric for comparing codes which have the same diversity gain at a fixed rate.
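The integer corner points of d(r) = (M − r)(N − r) are easy to tabulate; a minimal sketch (the function name is an assumption of this sketch):

```python
def tradeoff_points(M, N):
    """Corner points (r, d(r)) of the optimal diversity-multiplexing
    tradeoff d(r) = (M - r)(N - r), for integer r = 0, 1, ..., min(M, N).
    The full curve is the piecewise-linear interpolation of these points."""
    return [(r, (M - r) * (N - r)) for r in range(min(M, N) + 1)]

pts = tradeoff_points(2, 2)  # [(0, 4), (1, 1), (2, 0)]
```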

Chapter 3

Linear Dispersion Codes

3.1 Introduction

In this chapter we analyze Linear Dispersion Codes (LDC) and LDC based on frame theory. We give LDC constructions for various (M, N) and then compare them with codes based on frame theory.

3.2 Linear Dispersion Codes

Linear Dispersion Codes are ergodic capacity achieving codes: the capacity of the equivalent channel induced by an LDC can be made to scale with SNR as min(M, N) log(SNR). Also, unlike V-BLAST, these codes can be designed for any (M, N).
Definition: A linear dispersion code is one for which

  X = Σ_{q=1}^{Q} (s_q C_q + s_q* D_q),   C_q, D_q ∈ C^{M×T}, s_q ∈ C    (3.1)

where s_1, . . . , s_Q are information symbols drawn from the signal constellation and C_q, D_q are called dispersion matrices. Many of the existing space time codes can be put in the framework of linear dispersion codes. Q is a design parameter and will be discussed later. For the Alamouti code, Q = 2 and the dispersion matrices are given by

  C1 = [ 1  0 ]    C2 = [ 0  1 ]
       [ 0  0 ]         [ 0  0 ]

  D1 = [ 0   0 ]   D2 = [ 0  0 ]
       [ 0  −1 ]        [ 1  0 ]
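As a quick check, assembling X = Σ_q (s_q C_q + s_q* D_q) with these dispersion matrices reproduces an Alamouti-type orthogonal codeword; a minimal sketch (the symbol values are arbitrary):

```python
import numpy as np

# Dispersion matrices of the Alamouti-type code given above
C = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]])]
D = [np.array([[0, 0], [0, -1]]), np.array([[0, 0], [1, 0]])]

def ldc_codeword(s, C, D):
    """X = sum_q (s_q C_q + conj(s_q) D_q), Eq. (3.1)."""
    return sum(sq * Cq + np.conj(sq) * Dq for sq, Cq, Dq in zip(s, C, D))

s = np.array([1 + 1j, 1 - 1j]) / np.sqrt(2)  # arbitrary unit-energy symbols
X = ldc_codeword(s, C, D)
# X = [[s1, s2], [conj(s2), -conj(s1)]]: rows are orthogonal, so
# X @ X^H = (|s1|^2 + |s2|^2) I
```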

3.2.1 Equivalent Channel

In this subsection we formulate the equivalent channel for a given LDC matrix. This formulation converts the input and output matrices into equivalent vectors, which makes the calculation of capacity and the decoding of the codes easier. We also derive an alternate way of representing the channel which makes simulation of the channel easy for any (M, N).
Decomposing s_q into real and imaginary parts, we get

sq = αq + jβq , q = 1, . . . , Q

Substituting s_q in Equation (3.1), the equivalent X in this case is

  X = Σ_{q=1}^{Q} (α_q A_q + jβ_q B_q)

where A_q = C_q + D_q and B_q = C_q − D_q.
Writing the received matrix Y,

  Y = √(ρ/M) H Σ_{q=1}^{Q} (α_q A_q + jβ_q B_q) + V    (3.2)

Let X1 = [vec(A1), vec(A2), . . . , vec(A_Q)] ∈ C^{MT×Q}, X2 = [vec(B1), vec(B2), . . . , vec(B_Q)] ∈ C^{MT×Q}, ᾱ = [α1 . . . α_Q]†, and β̄ = [β1 . . . β_Q]†. Taking the vec operation on both sides of Eq. (3.2), we get

  Y′ = √(ρ/M) H′ [X1 ᾱ + jX2 β̄] + V′

where H′ = I_T ⊗ H, Y′ = vec(Y), and V′ = vec(V). Rearranging the real and imaginary parts, we get the following form:

  y = √(ρ/M) H s + v    (3.3)

where y = [Y′_R† Y′_I†]†, v = [V′_R† V′_I†]†, s = [ᾱ† β̄†]†, and the equivalent channel H is given by

  H = [ H′_R  −H′_I ] [ X1_R  −X2_I ]
      [ H′_I   H′_R ] [ X1_I   X2_R ]    (3.4)

where A_R, A_I denote the real and imaginary parts of a matrix A, respectively. H is of dimension 2NT × 2Q.
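The construction of Eq. (3.4) can be verified numerically against direct evaluation of Eq. (3.2). The sketch below assumes the Alamouti dispersion matrices from above as an example, and drops the √(ρ/M) scaling and the noise for clarity:

```python
import numpy as np

rng = np.random.default_rng(3)
M = N = T = 2
# Alamouti-type dispersion matrices (an assumed example)
C = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]])]
D = [np.array([[0, 0], [0, -1]]), np.array([[0, 0], [1, 0]])]
A = [c + d for c, d in zip(C, D)]   # A_q = C_q + D_q
B = [c - d for c, d in zip(C, D)]   # B_q = C_q - D_q
Q = len(A)

H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
Hp = np.kron(np.eye(T), H)                          # H' = I_T (x) H
X1 = np.column_stack([a.T.reshape(-1) for a in A])  # columns are vec(A_q)
X2 = np.column_stack([b.T.reshape(-1) for b in B])  # columns are vec(B_q)

P, Qc = Hp @ X1, Hp @ X2
# Eq. (3.4): real-valued 2NT x 2Q channel acting on s = [alpha; beta]
Heq = np.block([[P.real, -Qc.imag], [P.imag, Qc.real]])

# Cross-check against direct evaluation of Eq. (3.2)
alpha, beta = rng.standard_normal(Q), rng.standard_normal(Q)
Xmat = sum(alpha[q] * A[q] + 1j * beta[q] * B[q] for q in range(Q))
y_vec = (H @ Xmat).T.reshape(-1)                    # vec(HX), column stacking
y_eq = Heq @ np.concatenate([alpha, beta])
ok = np.allclose(y_eq, np.concatenate([y_vec.real, y_vec.imag]))
```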

3.2.2 Design Criterion

We observe that the set of Equations (3.3) is not underdetermined as long as

  Q ≤ NT

A larger Q implies more degrees of freedom, thereby increasing the capacity of the equivalent channel; a smaller Q implies more repetition and higher diversity or coding gain. Hence Q is optimally chosen as Q = min(M, N) T, since the maximum rate can scale only as min(M, N) log(SNR). The dispersion matrices A_q, B_q are chosen such that the capacity of the equivalent channel approaches the actual channel capacity.
The equivalent channel capacity is given by

  C_LD(ρ, T, M, N) = max_{A_q, B_q} (1/2T) E[ log det( I_{2NT} + (ρ/M) HH† ) ]    (3.5)

The power constraint is given by any one of the following:

1. Σ_{q=1}^{Q} ( tr(A_q† A_q) + tr(B_q† B_q) ) = 2TM

2. tr(A_q† A_q) = tr(B_q† B_q) = TM/Q,  q = 1, . . . , Q

3. A_q† A_q = B_q† B_q = (T/Q) I_M,  q = 1, . . . , Q

3.2.3 Code Properties

1. The capacity achieved by an LDC is always less than or equal to the channel capacity. This is because we impose a structure on the code, which results in a loss of degrees of freedom.
2. Any of the three power constraints given above can be used, but the conditions become more rigid from one to three. Hence we effectively reduce the degrees of freedom from constraint one to constraint three. This gives a reduction in capacity, but the SER performance of the code becomes better.

3. LDC are not unique. For any A_q, B_q, q = 1, . . . , Q, we can premultiply them by a unitary matrix Ψ to obtain a new LDC. We can also multiply H by a 2Q × 2Q unitary matrix to obtain a new LDC. This method of rotation is used to obtain better performing codes while keeping the capacity constant.

4. The optimization is insensitive to ρ, so in the simulations we have chosen ρ = 20 dB.

3.2.4 LDC for Constraint Three

Using constraint three implies finding A_q, B_q such that they are (scaled) unitary and also maximize capacity. This optimization is difficult to carry out for arbitrary sizes of A_q, B_q because of the large number of constraints involved (4QTM nonlinear constraints). However, this optimization problem can be solved easily for (2, 2) and (2, 1) systems because of the following general representation of 2 × 2 unitary matrices [17]: X is unitary if and only if

  X = [ 1    0     ] [ a   −b ]
      [ 0  e^{jθ} ] [ b̄    ā ] ,   |a|² + |b|² = 1, θ ∈ R, a, b ∈ C

We choose a = cos θ1 e^{jθ2}, b = sin θ1 e^{jθ3}, θ1, θ2, θ3 ∈ R. Although choosing a, b in this fashion does not guarantee the most general form, it covers the unitary matrices with |a| ≤ 1, |b| ≤ 1. Using this representation for A_q, B_q, the capacity optimization problem was solved for (2, 1) and (2, 2) systems with T = 2.
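A quick numerical check of this parameterization (the angle values and function name are arbitrary assumptions of this sketch):

```python
import numpy as np

def unitary_2x2(theta, t1, t2, t3):
    """diag(1, e^{j theta}) @ [[a, -b], [conj(b), conj(a)]]
    with a = cos(t1) e^{j t2}, b = sin(t1) e^{j t3}, as in the text."""
    a = np.cos(t1) * np.exp(1j * t2)
    b = np.sin(t1) * np.exp(1j * t3)
    return np.diag([1.0, np.exp(1j * theta)]) @ np.array(
        [[a, -b], [np.conj(b), np.conj(a)]])

U = unitary_2x2(0.3, 0.7, 1.1, -0.4)  # arbitrary angles: U is unitary by construction
```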

3.2.5 LDC for different (M,N)

In this subsection, LDC for different numbers of transmit and receive antennas will be presented. These codes were obtained by solving the optimization problem using MATLAB. Each code is listed in terms of C_q, D_q, q = 1, . . . , Q, and is given by

  X = Σ_{q=1}^{Q} (s_q C_q + s_q* D_q)

(M, N, T, Q) = (2, 1, 2, 2), Power constraint 1:

C1 = [ −0.1269−0.2492i   0.1917+0.1965i
       −0.2514−0.6096i   0.4140+0.4977i ]

C2 = [  0.1730+0.2388i   0.1998+0.1562i
        0.3621+0.5938i   0.4400+0.4045i ]

D1 = [  0.6624+0.2120i   0.5976+0.0180i
       −0.2729−0.1120i  −0.2520−0.0277i ]

D2 = [  0.5890+0.2966i  −0.6387−0.1055i
       −0.2389−0.1452i   0.2664+0.0662i ]

(M, N, T, Q) = (2, 1, 2, 2), Power constraint 2:

C1 = [  0.7371+0.0049i  −0.0285+0.3605i
        0.3568+0.4771i  −0.2463+0.1575i ]

C2 = [ −0.0020+0.3685i   0.6325−0.0105i
       −0.2383+0.1786i   0.3152+0.4021i ]

D1 = [  0.0478−0.2938i  −0.5062−0.0710i
       −0.3258+0.1725i   0.3081+0.5524i ]

D2 = [  0.5876+0.0964i  −0.0680+0.2843i
       −0.3440−0.6520i   0.3313−0.1451i ]

(M, N, T, Q) = (2, 1, 2, 2), Power constraint 3:

C1 = [  0.1842−0.2591i   0.4137−0.1261i
       −0.1730−0.2633i  −0.1132+0.0943i ]

C2 = [  0.8394+0.4621i  −0.1975+0.0233i
        0.1830+0.1236i   0.9388+0.0946i ]

D1 = [  0.3618+0.6238i   0.3205+0.2985i
       −0.5286−0.0133i   0.7698−0.0830i ]

D2 = [  0.0711−0.1569i  −0.0534+0.0993i
        0.0432+0.0401i   0.0104−0.2394i ]

(M, N, T, Q) = (2, 2, 2, 4), Power constraint 1:

C1 = [ −0.0703−0.1311i   0.4184+0.1040i
       −0.0502+0.5442i   0.2462+0.1335i ]

C2 = [  0.2926+0.2192i  −0.0250+0.1446i
        0.0860−0.0999i   0.5496+0.2714i ]

C3 = [  0.3547+0.3727i   0.1155+0.2484i
        0.1991+0.0571i   0.1692−0.1552i ]

C4 = [  0.3248+0.2950i   0.2678+0.2049i
        0.2078+0.1833i  −0.3312+0.2388i ]

D1 = [  0.2080−0.1700i   0.1433+0.0162i
        0.1775+0.1230i  −0.5247+0.0062i ]

D2 = [  0.0453−0.0466i  −0.0936−0.3405i
       −0.1898+0.5023i   0.1795+0.0469i ]

D3 = [ −0.1236−0.3242i  −0.0638+0.6207i
        0.0686−0.1890i   0.11714+0.0337i ]

D4 = [  0.1689+0.4010i  −0.0090−0.2639i
       −0.2364−0.3637i  −0.02843−0.0400i ]

(M, N, T, Q) = (2, 2, 2, 4), Power constraint 2:

C1 = [  0.3696+0.5978i   0.4685+0.0294i
        0.00410−0.0203i  0.05340−0.1780i ]

C2 = [  0.1676+0.1018i  −0.2596−0.1537i
        0.2899+0.4270i   0.2250+0.1371i ]

C3 = [  0.0620+0.2929i   0.3057+0.0623i
        0.3797−0.0618i  −0.1626+0.3772i ]

C4 = [  0.3792−0.1539i  −0.0892+0.4645i
        0.0580+0.3938i   0.1679+0.4005i ]

D1 = [ −0.1881−0.0488i   0.1675−0.2659i
        0.1591+0.1889i   0.1996−0.1151i ]

D2 = [ −0.1699−0.0514i   0.1072−0.1203i
        0.4134−0.0107i  −0.4841+0.2680i ]

D3 = [  0.1790−0.1635i   0.0839+0.4772i
       −0.1777−0.3665i  −0.1915−0.0131i ]

D4 = [ −0.0939+0.2604i  −0.0831+0.0922i
       −0.1205+0.0806i   0.3562−0.1476i ]

(M, N, T, Q) = (2, 2, 2, 4), Power constraint 3:

C1 = [  0.0001−0.0000i  −0.3087−0.6226i
        0.6504−0.2447i   0.0000+0.0001i ]

C2 = [  0.1742+0.3076i  −0.3404+0.0955i
        0.1832+0.3023i   0.0907−0.3417i ]

C3 = [  0.1743+0.3077i  −0.2826+0.2125i
        0.0610+0.3482i   0.0908−0.3417i ]

C4 = [  0.7069−0.0154i   0.0001+0.0001i
       −0.0001+0.0000i   0.5159+0.4836i ]

D1 = [  0.0001−0.0000i  −0.1170+0.0580i
        0.0460+0.1223i   0.0000+0.0001i ]

D2 = [  0.1742+0.3077i   0.3404−0.0955i
       −0.1832−0.3024i   0.0908−0.3417i ]

D3 = [ −0.1742−0.3075i  −0.2825+0.2126i
        0.0609+0.3483i  −0.0906+0.3417i ]

D4 = [  0.0000   0.0000
        0.0000   0.0000 ]

Codes for other values of (M, N ) are given in Appendix C. In Table 3.1 capacities
obtained by the equivalent channel for these codes are compared to the actual ergodic
capacity of the channel.

3.3 LDC Based on Frame Theory

LDC achieve capacity, but are not unique. So an exhaustive search and performance
simulation of these codes for SER is required to pick a code having a good SER

(M, N)    Channel Capacity (bits)    LDC Capacity (bits)
                                     Constraint 1   Constraint 2   Constraint 3
(2, 1)    6.2678                     6.221          6.204          5.9508
(2, 2)    11.34                      11.36          11.28          11.25
(3, 1)    6.415                      6.29           6.24
(3, 3)    16.706                     16.6633
(4, 1)    6.473                      6.454

Table 3.1: Comparison of the ergodic capacity of the channel and the capacity obtained by LDC

performance. Codes based on frame theory reduce the search space and provide codes that have better error performance. These codes were first proposed by Heath and Paulraj [15]. In this section we will review codes based on frame theory and obtain codes for some antenna configurations using numerical methods.

3.3.1 Code Definition

The code X is defined as in Equation (3.1) with Dq = 0. The effect of removing s∗q on the channel capacity is negligible [1]. Writing the code in this form,

X = Σ_{q=1}^{Q} sq Cq

Substituting in the channel model and taking the vec operation, we get

y = √(ρ/M) H′ X s + v

where X = [vec(C1 ), . . . , vec(CQ )] and the other quantities are as previously defined. So the capacity of the equivalent channel is given by

max_{tr(X X†) ≤ T} (1/T) E log det( I_{NT} + (ρ/M) H′ X X† H′† )

If Q = M T , we observe that any X such that X X† = I_{MT} achieves capacity. When Q < M T , this orthogonality condition fails because the number of basis vectors of a vector space cannot be greater than its dimension. In such a case, making X satisfy

X† X = I_Q          (3.6)

would make the capacity grow asymptotically in proportion to Q/T [15]. When Q < M T the matrix X has more rows than columns and is called a tall matrix. Any tall matrix satisfying Equation (3.6) is called a tight frame. The above condition guarantees that the rate of the code can scale to the ergodic capacity. To optimize the error performance, the rank of the codeword error matrices has to be optimized. When Q = M T , the following criteria ensure that a code achieves channel capacity and has good error performance.

1. X X † = IM T

2. For T > M ensure that Cq C†q = I, while for T ≤ M ensure that C†q Cq = I, for q = 1, . . . , Q

When Q < M T , make X a tight frame to achieve high rate. Codes obtained under this condition should be further optimized by considering other properties, i.e. making the codeword error matrix full rank or maximizing the worst-case determinant of the codeword error matrix.
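A tight frame in the sense of Equation (3.6) is easy to construct numerically: any Q columns of an MT × MT unitary matrix give a tall X with X†X = I_Q. The sketch below is a generic construction of this kind (using a DFT matrix of our choosing), not one of the optimized codes:

```python
import numpy as np

def tight_frame(MT, Q):
    """Return an MT x Q matrix X with X^H X = I_Q (a tight frame),
    built from the first Q columns of the unitary DFT matrix."""
    n = np.arange(MT)
    F = np.exp(-2j * np.pi * np.outer(n, n) / MT) / np.sqrt(MT)  # unitary DFT
    return F[:, :Q]

X = tight_frame(4, 3)        # e.g. M = T = 2 with Q = 3 < MT
gram = X.conj().T @ X        # should equal I_Q
```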

3.3.2 Examples

The algorithm for constructing tight frames which satisfy both the above criteria is given in [15] [16]. Using the algorithm, we have come up with designs for various (M, N ). In this section we present codes for the 2 × 2 case. Other constructions are provided in Appendix C. The codes are presented in terms of X .
(M, N, Q, T) = (2, 2, 4, 2) : code1

 
0.104103 − 0.233339i −0.162140 + 0.241640i 0.439005 − 0.042893i −0.344111 − 0.192506i
 0.354478 + 0.243024i 0.351128 − 0.205011i 0.009591 − 0.235250i −0.190313 − 0.241475i 
X = 
 −0.231071 + 0.362383i −0.402476 − 0.057737i −0.161674 − 0.171161i −0.278022 − 0.131274i 
0.187131 + 0.173973i −0.241182 − 0.162820i 0.359317 − 0.255846i 0.264814 + 0.292138i

Another code, given by Heath [16], for (M, N, Q, T) = (2, 2, 4, 2): code2
 
0.3582 − 0.1817i −0.0743 + 0.1313i −0.2697 − 0.2725i 0.2870 + 0.2942i
 0.1183 − 0.2733i −0.2173 − 0.4243i −0.0485 + 0.3172i −0.1650 + 0.2320i 
X = 
 0.2783 + 0.1061i −0.2845 + 0.3825i −0.1509 + 0.2832i −0.2308 − 0.1668i 
−0.3801 + 0.1297i 0.0516 + 0.1418i −0.3792 + 0.0569i −0.1774 + 0.3707i

3.4 Simulation Results

In this section the performance of LDC and frame-theory-based LDC will be compared. In Figure 3.1 the error performance of LDC is compared with frame-based LDC and an uncoded system for two transmit and two receive antennas. The code used for LDC is the (2, 2, 2, 4) code given in Subsection 3.2.5. Code 1 given in Subsection 3.3.2 was used for the frame-theory-based code. Both these codes transmit two symbols per channel use. BPSK was used for the simulation. At an SER of 10^{−4}, LDC performs better than the uncoded system by 3 dB, and the frame-theory code performs better than LDC by 1 dB. So LDC perform better than spatial multiplexing systems at the same rate and have equal decoding complexity. Frame-based codes perform better than LDC because of their more constrained design.
In Figure 3.2 the above two codes were simulated for R = 4 bps/Hz using QAM. We observe that the difference between LDC and frame-based LDC decreases. This is because the effect of diversity is more pronounced at lower rates, i.e. at higher rates the probability of encountering bad codewords is less, and hence designing codes based on more constraints at high rates is not of much use.
In Figure 3.3 code1 and code2 given in Subsection 3.3.2 were compared for the (2,2) system at 4 bps/Hz. We observe that the performance difference is negligible.


Fig.3.1: Error Performance of LDC, Frame based LDC for (2,2) system at
R = 2bps/Hz

In Figure 3.4 the LDC for (2, 1, 2, 2) designed under constraint 1 and constraint 2 are compared for R = 2 bps/Hz. We observe that they have similar BER performance. Hence we can infer that codes designed from constraints 1 and 2 perform almost identically. Therefore one can design codes based on constraint 1, which makes optimization of the capacity function easy.


Fig.3.2: Error Performance of LDC, Frame based LDC for (2,2) system at
R = 4bps/Hz using QAM


Fig.3.3: Comparison of different frame-based designs for (2,2) at R = 4 bps/Hz.


Fig.3.4: Comparison of LDC designed under different constraints for (2,1) at R = 2 bps/Hz.

Chapter 4

Sphere Decoding

4.1 Introduction

In this chapter we study Sphere Decoding and its complexity when applied to MIMO
systems. We will show that the initial radius of the sphere decoder is a critical
parameter and propose a new initial radius based on the channel knowledge.
MIMO wireless systems are characterized by high capacity, and many space time
codes try to achieve this limit. For many of these codes, the maximum-likelihood
(ML) approach results in a decoder which has a computational complexity that is
exponentially proportional to the number of transmit antennas and the symbol constellation size. In general, this problem can be posed as finding the lattice point closest to a given point y ∈ C^N in 2N real dimensions.
The Sphere Decoding (SD) algorithm was first proposed in [20] to reduce the
ML search space, by confining the search only to the lattice points in a hypersphere
centered at the received point. Sphere Decoding algorithm primarily addresses the
problem of finding the lattice points inside the hypersphere given the lattice, the
received point, and the radius of the sphere.
The channel model that will be considered in this chapter is
Y = √(ρ/M) H X + V          (4.1)

where X ∈ C^{M×1} is the input signal and Y, V ∈ C^{N×1} denote the output measurement and noise signal vectors, respectively. H ∈ C^{N×M} represents the MIMO channel under the familiar quasi-static, flat-fading assumption. The factor √(ρ/M) signifies that the total signal power is divided equally among the M transmit antennas. H and V are comprised of independent, identically distributed complex Gaussian entries CN (0, 1). By stacking the real part R(·) and imaginary part I(·) of these entries, we
define
x = [ R(X)†  I(X)† ]† ∈ R^{2M×1},
y = [ R(Y)†  I(Y)† ]† ∈ R^{2N×1},
v = [ R(V)†  I(V)† ]† ∈ R^{2N×1},

and finally, H ∈ R^{2N×2M} is given by

H = √(ρ/M) [ R(H)   −I(H)
             I(H)    R(H) ]          (4.2)

Then, the real valued equivalent of (4.1) is given by [1]

y = Hx + v (4.3)

We only consider signal constellations which are subsets of the QAM constellation.
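The complex-to-real conversion above can be checked numerically. The sketch below (a toy example with our own random data) verifies that the stacked real model reproduces the complex one:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, rho = 2, 2, 10.0

# Complex model: Y = sqrt(rho/M) H X + V, as in (4.1)
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
X = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))
V = (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))) / np.sqrt(2)
Y = np.sqrt(rho / M) * H @ X + V

# Real-valued stacking: x = [Re(X); Im(X)], y = [Re(Y); Im(Y)], v likewise
x = np.vstack([X.real, X.imag])
y = np.vstack([Y.real, Y.imag])
v = np.vstack([V.real, V.imag])
H_real = np.sqrt(rho / M) * np.block([[H.real, -H.imag],
                                      [H.imag,  H.real]])
# Now y == H_real @ x + v, the real equivalent (4.3)
```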

4.2 Sphere Decoding

The basic idea behind Sphere Decoding is very simple. We attempt to search only
those lattice points which lie inside a hypersphere centered at the received point rather
than searching over the entire lattice. The closest lattice point inside the hypersphere
is also the closest lattice point in the entire lattice.
The problem of finding lattice points inside a hypersphere is analogous to ML
decoding which is of exponential complexity. Mathematically this translates to solving
1
the inequality k y − Hs k≤ R0 for s ∈ Z n . Sphere Decoding algorithm mainly
addresses the problem of finding the points inside the hypersphere given the radius
1
For this consider H, y, s to be real for illustration.

25
r

Fig.4.1: Sphere Decoder - basic idea

R0 . Although the problem of finding lattice points inside a general n−dimensional


hypersphere is difficult, it becomes trivial when n = 1. When n = 1 this problem
translates to finding the integers in an interval. We can use this observation to
increase the dimension from k to k + 1. Suppose we have found all the points inside the hypersphere for dimension k; then the admissible values for the (k + 1)-th coordinate form an interval. For example, to find the points in a 3-dimensional hypersphere (Figure

4.2), we first find the admissible values of the x-coordinate. Suppose we have found that x = {1, 2, 3, 4, 5, 6} satisfy the criterion; we then find, for each of these points, the admissible values of the y-coordinate. For instance, for x = 2, the admissible values of the y-coordinate are {7, 8, 9, . . .}. Then for each of these y-coordinates we find the admissible values of the z-coordinate. For y = 8, we find that the admissible values of z are {10, 11}. Hence the points (2, 8, 10) and (2, 8, 11) are inside the hypersphere. This method constructs a tree where the nodes at level k correspond to lattice points inside the hypersphere of radius r and dimension k. Therefore the sphere decoding algorithm is primarily a combination of a depth-first search (DFS) and a method to find the interval lengths in each dimension.

Fig.4.2: Sphere Decoder - Increasing the dimension of search
To find the interval lengths such that the interval length at dimension k is a function only of the points and interval lengths in lower dimensions, H is QR-factorized and the upper-triangular property of R is used:

H = Q [ R
        0_{(2N−2M)×2M} ]

where R is a 2M × 2M upper triangular matrix and Q = [Q1 Q2 ] is a 2N × 2N orthogonal matrix. The matrices Q1 and Q2 represent the first 2M and the last 2N − 2M columns of Q, respectively. Using this factorization we can arrive at the required algorithm. For the sake of completeness, we include the algorithm [2] here.

4.2.1 The Sphere Decoding Algorithm

Algorithm:

Input: Q = [Q1 Q2 ], R, received point x, y = Q1† x, R0 .

1. Set k = 2M , r′²_{2M} = R0² − ‖Q2† x‖², y_{2M|2M+1} = y_{2M}.

2. (Bounds for sk ) Set UB(sk ) = floor( (r′_k + y_{k|k+1}) / r_{k,k} ), sk = ceil( (−r′_k + y_{k|k+1}) / r_{k,k} ) − 1.

3. (Increase sk ) sk = sk + 1. If sk ≤ UB(sk ) go to 5; else go to 4.

4. (Increase k) k = k + 1. If k = 2M + 1, terminate the algorithm; else go to 3.

5. (Decrease k) If k = 1 go to 6. Else set k = k − 1, y_{k|k+1} = y_k − Σ_{j=k+1}^{2M} r_{k,j} sj , r′²_k = r′²_{k+1} − (y_{k+1|k+2} − r_{k+1,k+1} s_{k+1})², and go to 2.

6. Solution found. Save s and go to 3.
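The algorithm above can be condensed into a small recursive sketch. The code below is our own unoptimized rendering of the same idea (enumerate symbol values level by level over the rows of R, pruning branches that leave the sphere), checked against brute-force ML on a small real lattice; variable names are ours, not from [2]:

```python
import numpy as np
from itertools import product

def sphere_decode(H, y, radius, symbols):
    """Depth-first sphere search for argmin_s ||y - H s|| over s in symbols^n."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)            # H = Q R, R upper triangular
    yp = Q.T @ y
    best = {"s": None, "d2": radius ** 2}
    s = np.zeros(n)

    def search(k, d2):
        if k < 0:                     # all coordinates fixed: record candidate
            best["s"], best["d2"] = s.copy(), d2
            return
        for c in symbols:             # try each symbol value at level k
            e = yp[k] - R[k, k] * c - R[k, k + 1:] @ s[k + 1:]
            if d2 + e * e <= best["d2"]:   # prune branches outside the sphere
                s[k] = c
                search(k - 1, d2 + e * e)

    search(n - 1, 0.0)
    return best["s"]

# Check against brute-force ML on a 4-dimensional BPSK lattice
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))
s_true = rng.choice([-1.0, 1.0], size=4)
y = H @ s_true + 0.05 * rng.standard_normal(4)
s_hat = sphere_decode(H, y, radius=10.0, symbols=(-1.0, 1.0))
s_ml = min(product((-1.0, 1.0), repeat=4),
           key=lambda s: np.linalg.norm(y - H @ np.array(s)))
```

Because the search keeps shrinking the sphere to the best candidate found so far, it returns the same point as the exhaustive ML search whenever the initial radius admits at least one lattice point.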

4.3 Complexity

The complexity of Sphere Decoding algorithm depends on the size of the tree gener-
ated, which is equal to the total number of lattice points in hyperspheres of radius R0
and dimension k = 1, . . . , 2M . The size of the tree depends on the entries of matrix
H. If H is a deterministic matrix, then the worst-case algorithmic complexity comes into the picture and is shown in [40] to be

(1/6)( 2(2M)^3 + 3(2M)^2 − 5(2M) ) + (1/2)( 2·floor(R0^2 d) + 1 ) · C( floor(4R0^2 d) + 2M − 1, floor(4R0^2 d) ) + 1          (4.4)

where d = max(r_{1,1}^{−2}, . . . , r_{2M,2M}^{−2}) and C(n, k) denotes the binomial coefficient. We observe that the upper bound on the number of computations in (4.4) can be exponential. But in practice the received vector is random due to the channel and noise.

y = Hx + v (4.5)

The entries of v are independent N (0, σ²/2) and the entries of H are N (0, 1/2). We see that ‖v‖² = ‖y − Hs‖² is a χ² random variable with 2N degrees of freedom. Thus we may choose the radius to be a scaled version of the noise variance,

R0² = 2αN σ²          (4.6)

in such a way that we find a point inside the hypersphere with high probability, where α satisfies

∫₀^{αN} (λ^{N−1} / Γ(N)) e^{−λ} dλ = 1 − ε          (4.7)

28
and ε is chosen close to zero. If we do not find any points inside the hypersphere, we decrease ε, recalculate the radius, and rerun the algorithm.
When the channel and noise are random, average complexity rather than the peak
complexity becomes more meaningful. It is shown in [2][3] that the average complexity
becomes cubic at high SNR.
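The scale factor α in (4.6) can be computed without special-function libraries: for integer N the integral in (4.7) has the closed form 1 − e^{−αN} Σ_{k=0}^{N−1} (αN)^k / k!, so α can be found by bisection. A small helper of our own along these lines:

```python
import math

def radius_scale_alpha(N, eps, hi=100.0):
    """Solve (4.7) for alpha: find alpha such that the regularized incomplete
    gamma integral up to alpha*N equals 1 - eps (integer N assumed)."""
    def cdf(t):   # closed form: 1 - e^{-t} * sum_{k<N} t^k / k!
        return 1.0 - math.exp(-t) * sum(t ** k / math.factorial(k) for k in range(N))
    target = 1.0 - eps
    lo, up = 0.0, hi
    for _ in range(200):              # bisection on alpha
        mid = 0.5 * (lo + up)
        if cdf(mid * N) < target:
            lo = mid
        else:
            up = mid
    return 0.5 * (lo + up)

alpha = radius_scale_alpha(N=2, eps=0.1)   # then R0^2 = 2*alpha*N*sigma^2
```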


Fig.4.3: Complexity versus SNR for finite Lattices

Figure 4.3 shows how the complexity exponent varies with SNR. We observe that the average complexity is very high at low SNR and approaches cubic complexity at higher SNR. The choice of an initial radius based entirely on the noise variance makes the initial radius large at low SNR. Therefore we encounter a large number of lattice points inside the hypersphere, and the complexity increases. When the channel matrix is not well conditioned, i.e. when some of its eigenvalues are small, the transformed lattice tends to collapse in the direction of these eigenmodes, resulting in a very large number of points inside the sphere. When the channel matrix is bad, there is a high probability that our decoding will be in error (due to the collapsed lattice), along with high complexity. Therefore it is intuitive to choose the initial radius based on both the channel and the noise.

4.3.1 Covering Radius

ML decoding is analogous to quantization, i.e. we try to find the quantized value of the received vector (the quantized values are the information vectors in ML). For the received lattice Hx, x ∈ Z^n , each lattice point can be associated with the region that is nearer to it than to any other lattice point, called its Voronoi region. The fundamental Voronoi cell around the origin is defined as [6, 5]

ν = { y ∈ R^{2N} : ‖y‖ ≤ ‖y − Hx‖, ∀ Hx ≠ 0_{2N} }          (4.8)

The Voronoi cell is a convex hull formed by the intersection of hyperplanes. The covering radius C is the radius of the smallest sphere containing the Voronoi cell. We observe that C is the best possible initial radius: the received point falls in one of the Voronoi cells and is at a distance of at most C from a lattice point, so choosing the initial radius to be C guarantees at least one lattice point. The calculation of C is known to be an NP-hard problem. Algorithms like the diamond-cutting algorithm compute the covering radius, but in exponential memory and time. So choosing the initial radius based on C is not practical, since C would have to be computed for every channel realization.

4.4 Modified Decoding Rule

In the conventional SD algorithm [2], the radius R0 is chosen to be a scaled version of the noise variance, i.e.

R0² = 2αN σ²          (4.9)

Here, α is chosen such that one can find a lattice point with probability 1 − ε, where 0 ≤ ε ≪ 1 [2], and

1 − ε = ∫₀^{αN} (λ^{N−1} / Γ(N)) e^{−λ} dλ          (4.10)

where Γ(·) is the gamma function. This radius is based entirely on the noise variance and hence can lead to a large number of points inside the sphere at low SNR.
Instead, we propose to choose the new initial radius R1 as

R1 = min( D/2, R0 )          (4.11)

where D is the major diagonal of an individual lattice cell. This radius ensures that the point is decoded without failure with probability one for an infinite lattice. Unlike the covering radius of [2], the major diagonal is easy to calculate. Hence the complexity of the sphere decoder with the new radius is less than or equal to that of the conventional sphere decoder. R0 is a function of ε: R0 can be made small, which implies a large ε and a higher chance of decoding failure. Hence we want ε to be about 0.9 initially.
For infinite lattices the received point falls in one of the lattice cells, and hence the hypersphere centered at this point with radius R1 will include at least one lattice point. But in practical situations, where the lattice is finite, the received point may fall outside the lattice and the hypersphere may not include any lattice point, leading to a decoding failure. In such situations the radius should be updated and the point decoded again. One method of updating would be to shift back to the radius that depends on the noise variance if R1 fails. This is intuitive since the point is outside the lattice and at a significant distance from any lattice point, implying a high-noise situation. The complexity would then be the sum of the failed attempt and the present run. When the point lies outside the lattice, the hypersphere based on R1 would not include many points because of the finite nature of the lattice; hence the number of nodes in the tree, and thus the decoding complexity, would be low. In this thesis, this updating procedure is termed Method A.
If one could get an ideal radius, i.e. the radius that would yield only one point inside the hypersphere, it would lead to the lowest complexity. Intuitively one can start with zero radius and slowly increase it in finite steps until a lattice point is obtained. But such a procedure would lead to a large number of decoding failures, and the cumulative complexity of these failures would be very high. So instead of starting with zero radius, one could start with a radius that would yield a lattice point with high probability and, if a failure occurs, update the radius in small increments. The size of the increment is critical for the aggregate complexity. This updating procedure is termed Method B.

4.4.1 Calculation of Major Diagonal

Given that the edges of the input lattice cell are the vectors ej = (0, 0, . . . , 1, . . . , 0, 0), where the 1 is in the j-th position and ej ∈ {0, 1}^{2M}, the output lattice cell edges under the linear transformation H are the columns of H. The major diagonal D is then given by

D = max_{0 ≤ j ≤ 2M} ‖ Σ_{1 ≤ i ≤ 2M, i ≠ j} h_i − h_j ‖          (4.12)

where {h_i}_{i=1}^{2M} are the columns of H, and h_0 is the all-zero vector. The above calculation requires about 8M²N additions, 2M norm computations of 2N-dimensional vectors, and 2M comparisons, which are of complexity O(N³). Hence, calculation of the major diagonal is computationally simple, especially for N = 1. However, for the finite lattice case, R1 may not give a point inside the sphere when the received point lies outside the lattice. In such a case, one of the following two methods can be used for updating the radius.
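Equation (4.12) translates directly into code. Noting that for a fixed j the sum equals S − 2h_j, where S is the sum of all columns (and the j = 0 term gives ‖S‖), a short sketch is:

```python
import numpy as np

def major_diagonal(H):
    """Major diagonal D of the lattice cell spanned by the columns of H,
    computed per (4.12)."""
    S = H.sum(axis=1)                     # sum of all columns
    candidates = [np.linalg.norm(S)]      # j = 0 term (h_0 is the zero vector)
    for j in range(H.shape[1]):           # remaining terms: ||S - 2 h_j||
        candidates.append(np.linalg.norm(S - 2.0 * H[:, j]))
    return max(candidates)

# Sanity check: for the unit square (H = I_2) the major diagonal is sqrt(2)
D = major_diagonal(np.eye(2))
```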

4.4.2 Method A

When the radius based on D/2 fails for a finite lattice, it implies that the point lies
outside the lattice at a distance greater than D/2. It is intuitive then to fall back
to R0 and re-decode the received point. The updating procedure for method A is
therefore to shift back to R0 if R1 fails.

Fig.4.4: Updating Procedure for Method B

4.4.3 Method B

When the radius R1 based on D/2 fails, increment it by a factor δ ≥ 0 to define R2(k) = R2(k − 1) + δD/2, with the initial value R2(0) = D/2. Shift back to R0 if the present radius exceeds R0. The exact update procedure is shown in Fig.4.4. The intuition is to increase the radius slowly until a lattice point falls inside the sphere. When R2(k) ≥ R0, we switch back to R0 as the decoding radius. The value of δ is critical: making δ too small leads to repeated decoding failures and increases the overall complexity, while making δ too large increases the hypersphere radius, leading to a larger number of points inside the hypersphere and again increasing the complexity.
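The Method B rule can be summarized as a small radius schedule: start at D/2, grow by δ·D/2 on each failure, and make one final attempt at R0. A sketch with a stand-in decode-attempt callback (the real attempt would be a sphere-decoder run):

```python
def method_b_decode(attempt, D, R0, delta):
    """Run attempt(radius) -> point or None with the Method B schedule:
    radii D/2, D/2 + delta*D/2, ..., with a final fallback to R0."""
    radius = D / 2.0
    while True:
        point = attempt(radius)
        if point is not None:
            return point, radius
        if radius >= R0:          # already at the fallback radius: give up
            return None, radius
        radius = min(radius + delta * D / 2.0, R0)

# Stand-in decoder that succeeds once the radius reaches 1.4
point, used = method_b_decode(lambda r: "s" if r >= 1.4 else None,
                              D=2.0, R0=1.5, delta=0.5)
```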
In Figure 4.5, the complexity exponent e is plotted against δ for an SNR of 3.5 dB. We observe that e is high for small δ, decreases to a minimum at δ ≈ 0.3, and then increases. This curve resembles the energy curve between two atoms as a function of the distance between them.


Fig.4.5: Complexity versus δ for 2x2 and 16-QAM

4.4.4 Comparing D/2 with R0

From equation (4.12) we see that the major diagonal is the norm of a 2N-dimensional vector obtained by adding 2M vectors of dimension 2N. Therefore the final vector is a 2N-dimensional vector with the variance of each term growing with 2M, or equivalently a 2N-dimensional vector scaled by √(2M). Hence D² follows a scaled chi-square distribution with 2N degrees of freedom. The conventional choice of radius is according to (4.10). It is easy to see that the probability of D² ≤ R0² is given by

P( D² ≤ R0² ) = P( X ≤ 2αN σ² / √(2M) )          (4.13)

where X is a chi-square random variable with 2N degrees of freedom. Here P(D² ≤ R0²) decreases with decreasing σ², with increasing M, or both. Hence, taking D/2 as the new radius gives an advantage only in the lower SNR range. The probability is also an increasing function of α, and from (4.7) we observe that ε decreases with increasing α. To get a good probability of success at the first decoding attempt, α is generally large. However, P(R1 ≤ R0) = 1. Hence, the sphere decoder based on the new radius will always have a complexity that is less than or equal to that of the sphere decoding rule in [2].

4.5 Simulation Results and Conclusion

4.5.1 Infinite Lattice

The input is taken to be a lattice point from Z^{2M}. The complexity of the SD algorithm with the original and modified radius is plotted as a function of SNR. The average complexity is calculated as an average over 10000 simulations.

Fig.4.6: Average computational complexity versus SNR for infinite lattices

4.5.2 Finite Lattice

64-ary QAM constellations have been used for the simulations. The signal energy at each receiver is normalized to unity, and the initial radius R0 for the conventional SD is chosen to make ε = 0.9 [3]. The scaling factors of the noise variance, i.e. 2αN, used to determine R0 for the 2 × 2 system at various SNRs giving ε = 0.9 are listed in Table 4.2.
The complexity exponent of the average number of lattice points falling inside the sphere is plotted versus SNR in Figure 4.8 for the 2 × 2 MIMO system, using the SD
σn²    1     0.8   0.7   0.6   0.5   0.25   0.1   0.05   0.01   0.005   0.001
2αN    2     2     1.8   1.8   1.8   1.6    1.6   1.8    3      3.6     4.2

Table 4.2: 2αN as a function of SNR for  = 0.9



Fig.4.7: Average computational complexity versus SNR for Method A

of [2] with the radius updated by Method A. Observe that the new radius requires only about a quarter of the computations of the conventional SD at lower SNR. The average number of points approaches the conventional SD curve at high SNR. This is because, for finite lattices, there is a large probability that the received point lies outside the lattice at a distance greater than the major diagonal. At high SNR the original radius becomes smaller than the major diagonal, and therefore both algorithms tend to use only R0. The computational complexity of the SD algorithm is also important, since the new radius may fail to decode in the finite lattice case. When this happens, we revert to the old radius, causing an overhead in the algorithmic complexity. In Fig.4.10, the exponent of the average complexity has been plotted as a function of SNR for the 2 × 2 system, using Method B for updating the radius. Even though the average number of times the SD has to be repeated for a single received point by Method B

Fig.4.8: Average number of lattice points versus SNR for Method A

is more than that of Method A, the average complexity for the whole procedure is significantly lower. The choice of δ is very important. For these simulations we have chosen δ = 0.2; no attempt has been made in this work to optimize the choice of δ.

4.6 Summary

In this chapter we studied Sphere Decoding and the importance of the initial radius. We have proposed a new method to choose the initial radius. By incorporating the new radius in the sphere decoding algorithm, we achieve a significant reduction in complexity in the lower SNR regime. We also proposed two updating strategies for handling a decoding failure.
We investigated the case of a two-transmit, two-receive antenna system in detail using simulations. While the simulations in this study are mostly limited to the 2 × 2 case, the idea works in a similar fashion for higher-dimensional cases. We have shown that Method B of updating gives a significant reduction in complexity compared to


Fig.4.9: Average number of lattice points versus SNR for Method B


Fig.4.10: Average computational complexity versus SNR for Method B

Method A. We also showed the importance of choosing the parameter δ. In conclusion, this work demonstrates that the radius of the sphere decoder and its updating strategy can significantly impact the decoding complexity.

Chapter 5

Rate-Diversity Tradeoff

5.1 Introduction

Multiple antennas provide us with high data rates as well as diversity, i.e. robustness against channel fading. However, a fundamental tradeoff exists between the two. This tradeoff was defined and studied by Zheng and Tse [8], who considered Gaussian codes and derived the optimal tradeoff for T ≥ M + N − 1. Yao [9] proposed tilted-QAM codes which achieve the optimal tradeoff for a 2 × 2 system with T = 2, and proved that non-Gaussian codes satisfying a minimum-determinant criterion achieve the optimal tradeoff for the 2 × 2 system. In this chapter, the rate-diversity tradeoff for non-Gaussian codes is analyzed, and the criteria that a code should satisfy in order to achieve the optimal tradeoff are derived. We will also analyze the error probability of non-Gaussian codes as a function of the singular values of the code.

5.2 Diversity-Multiplexing Tradeoff

In this section, we introduce the channel model, the definitions of diversity and multiplexing gain, and the main result.

5.2.1 Channel Model

We consider a MIMO system with M transmit and N receive antennas. The channel is given by H, whose entries hij ∈ C are independent, zero-mean, unit-variance complex Gaussian (Rayleigh fading). Channel state information is available only at the receiver and not at

the transmitter. We also assume that the channel is quasi-static over T time intervals (i.e. channel uses). The input-output relation is given by

Y = √(ρ/M) H X + V          (5.1)

where X ∈ C^{M×T} and V is the additive noise with i.i.d. entries vij ∼ CN (0, 1). ρ is the average signal-to-noise ratio at each receive antenna.
A rate-R bps/Hz codebook C has |C| = 2^{RT} codewords {X(1), . . . , X(|C|)}, each of which is an M × T matrix. The transmitted signal X is normalized so that the average transmit power at each antenna in each symbol period is 1.

5.2.2 Definitions

The channel capacity of an (M, N ) MIMO system grows as C ≈ min {M, N } log(SN R).
Therefore, at any SNR, to achieve a significant fraction of capacity, the rate of the
code should also increase as log(SN R). If the rate of the transmitted code is fixed
for all SNR, the gap between capacity and transmission rate increases with SNR and
becomes infinite as SNR → ∞. Hence, to achieve a significant fraction of the capacity at all SNR, the rate of the transmit code should also increase, i.e. a different-rate codebook C is needed for each SNR.
In this context, it is better to visualize the input transmit matrix X not as a symbolic function of input symbols, but rather as a one-to-one mapping between the input information set and the codeword matrix set. The size of the input information set varies with SNR.
We define a scheme as a family of codes {C(SN R)} of block length T , one at each
SN R level [8].
A scheme {C(SNR)} is said to achieve spatial multiplexing gain r if

    lim_{SNR→∞} R(SNR) / log(SNR) = r                    (5.2)

The scheme achieves diversity gain d if

    lim_{SNR→∞} log(Pe(SNR)) / log(SNR) = −d             (5.3)

r denotes the fraction of capacity that is achieved at each SNR.


In this chapter, ≐ denotes exponential equality, i.e., we write f(SNR) ≐ SNR^b to
denote

    lim_{SNR→∞} log f(SNR) / log(SNR) = b

≤̇ and ≥̇ are defined similarly. Hence, we can write αSNR ≐ SNR at very high SNR,
where α is a scalar. From the above definition, we observe that the spatial multiplexing
gain of any fixed-rate scheme (i.e., one keeping the rate constant for all SNR) is 0. Also,
0 ≤ r ≤ min{M, N} and dmax = MN.

5.2.3 Tradeoff Curve

The tradeoff curve between the spatial multiplexing gain r and the optimal diversity
at r, denoted by d∗(r), is given by the piecewise-linear function connecting the points
(k, d∗(k)), k = 0, 1, . . . , min{M, N}, where

    d∗(r) = (M − r)(N − r)                               (5.4)

when T ≥ M + N − 1.
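As an illustration (not part of the original analysis), the piecewise-linear curve d∗(r) is straightforward to evaluate numerically by interpolating between the integer corner points (k, (M − k)(N − k)):

```python
def optimal_diversity(r, M, N):
    """Optimal diversity d*(r): piecewise-linear interpolation between
    the corner points (k, (M-k)(N-k)), k = 0, ..., min(M, N)."""
    if not 0 <= r <= min(M, N):
        raise ValueError("multiplexing gain r must lie in [0, min(M, N)]")
    k = int(r)                           # lower corner point
    d_k = (M - k) * (N - k)
    if k == r:
        return float(d_k)
    d_k1 = (M - k - 1) * (N - k - 1)     # next corner point
    return d_k + (r - k) * (d_k1 - d_k)  # linear interpolation

# Corner points for a 2x2 system: d*(0) = 4, d*(1) = 1, d*(2) = 0
curve_2x2 = [optimal_diversity(k, 2, 2) for k in (0, 1, 2)]
```

For non-integer r the function linearly interpolates, e.g. d∗(0.5) = 2.5 for M = N = 2.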

5.2.4 Error events

The optimal tradeoff curve is obtained by considering the error probability and showing
that it is of the same order as the outage probability for Gaussian codes
when T ≥ M + N − 1. In this section, the error mechanism of MIMO channels is
considered. Assuming N ≤ M, the outage probability Pout as a function of R is

    Pout(R) = P[log det(I + ρHH†) ≤ R]
Fig.5.1: Diversity-Multiplexing tradeoff, d∗(r) for general M, N and T ≥ M + N − 1

At high SNR, with R = r log(ρ), this becomes

    Pout(r) = P[∏_{i=1}^{N} (1 + ρλi) < ρ^r]

So the outage probability at rate r is approximately the probability that more than
N − r singular values are each less than ρ^{−1} at high SNR; equivalently, the outage
probability at r equals the probability that the rank of the channel matrix is less
than r. This probability can be shown to be ρ^{−(M−r)(N−r)}. So the outage probability is

    Pout(r) ≈ ρ^{−(M−r)(N−r)}

The outage probability at a given rate and SNR can be shown to be a lower bound on the
actual error probability (not the pairwise error probability [8]), i.e.,

    Pe(ρ) ≥ Pout(r) ≈ ρ^{−(M−r)(N−r)}                    (5.5)

Also,

    Pe(ρ) = Pout(r) P(error | outage) + P(error, no outage)
          ≤ Pout(r) + P(error, no outage)                (5.6)

Hence, if P(error, no outage) is of the same order as P(outage), then by equations (5.5)
and (5.6) we get

    P(error) ≈ Pout(r)

For Gaussian codes, when T ≥ M + N − 1, both probabilities are of the same order
and we obtain the optimal tradeoff.
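For M = N = 2 the outage event can be tested in closed form, since det(I + ρHH†) = 1 + ρ||H||²_F + ρ²|det H|². The Monte-Carlo sketch below (an illustration; the rate target R = r log₂ρ and the trial counts are choices of this example) estimates Pout at a fixed multiplexing gain and shows it falling with SNR:

```python
import random, math

def outage_prob_2x2(rho, r, trials=20000, seed=1):
    """Monte-Carlo estimate of P[log2 det(I + rho*H*H^dag) <= r*log2(rho)]
    for a 2x2 i.i.d. CN(0,1) channel.  Uses the 2x2 identity
    det(I + rho*H*H^dag) = 1 + rho*||H||_F^2 + rho^2*|det H|^2."""
    rng = random.Random(seed)
    sigma = math.sqrt(0.5)            # per-dimension std of a CN(0,1) entry
    def cg():
        return complex(rng.gauss(0, sigma), rng.gauss(0, sigma))
    rate = r * math.log2(rho)
    hits = 0
    for _ in range(trials):
        h11, h12, h21, h22 = cg(), cg(), cg(), cg()
        frob2 = abs(h11)**2 + abs(h12)**2 + abs(h21)**2 + abs(h22)**2
        det_h = h11 * h22 - h12 * h21
        mutual_info = math.log2(1 + rho * frob2 + rho**2 * abs(det_h)**2)
        if mutual_info <= rate:
            hits += 1
    return hits / trials

p_low = outage_prob_2x2(rho=10.0, r=1)     # low SNR: outage is common
p_high = outage_prob_2x2(rho=1000.0, r=1)  # high SNR: outage is rare
```

With Pout ≐ ρ^{−(M−r)(N−r)}, the r = 1 estimate should drop by roughly one decade per decade of SNR.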

5.2.5 Tradeoff Achieved by Alamouti Code

Orthogonal designs provide full diversity [10]. In this section, the tradeoff curve for
orthogonal designs is computed for two transmit antennas [8], using the Alamouti
scheme. The equivalent received signal at the receiver is given by

    yi = √(ρ ||H||²_F / 2) xi + vi ,   i = 1, 2          (5.7)

where ||H||_F is the Frobenius norm of H. ||H||²_F is chi-square distributed with
2NM degrees of freedom. Therefore, for arbitrarily small ε,

    P(||H||²_F ≤ ε) ≈ ε^{MN}

We can show that the probability of error is also bounded by the outage probability.
Hence, to compute the tradeoff curve, it is sufficient to evaluate the outage probability.

    Pout(r) = P[1 + ρ||H||²_F / 2 ≤ ρ^r]
            = P[||H||²_F ≤ ρ^{−(1−r)^+}]
            ≐ ρ^{−MN(1−r)^+}

Therefore, the tradeoff curve achieved by the Alamouti scheme is MN(1 − r)^+. It is
compared with the optimal tradeoff in Figure 5.2.

Fig.5.2: Comparison of Alamouti and Optimal rate-diversity tradeoff for M = N = T = 2
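The exponent MN(1 − r)^+ above comes from the chi-square tail P(||H||²_F ≤ ε) ≈ ε^{MN}. Since ||H||²_F is a sum of MN unit-mean exponential variables (a chi-square with 2NM real degrees of freedom), its CDF near zero behaves like ε^{MN}/(MN)!. The Monte-Carlo sketch below checks this leading term for a 2 × 1 channel (the trial count and ε are arbitrary choices of this illustration):

```python
import random, math

def frob_tail_prob(M, N, eps, trials=200000, seed=7):
    """Estimate P(||H||_F^2 <= eps) for an M x N i.i.d. CN(0,1) channel.
    Each |h_ij|^2 is exponential with mean 1, so ||H||_F^2 is a sum of
    M*N independent exponentials."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        frob2 = sum(rng.expovariate(1.0) for _ in range(M * N))
        if frob2 <= eps:
            hits += 1
    return hits / trials

# For M = 2, N = 1 (Alamouti, one receive antenna): leading term eps^2 / 2!
eps = 0.3
p_mc = frob_tail_prob(2, 1, eps)
p_leading = eps ** (2 * 1) / math.factorial(2 * 1)
ratio = p_mc / p_leading
```

The ratio approaches 1 as ε → 0; at ε = 0.3 it is already within a few tens of percent.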

5.3 Tilted QAM Codes

Any Gaussian code with T ≥ M + N − 1 attains the optimal rate-diversity tradeoff.
However, in practice, one uses a codebook made of elements from finite constellations.
When the input codebook is made from a finite constellation, the optimal tradeoff curve
may be achieved even when T < M + N − 1. The main criterion for achieving this
optimal tradeoff is that the probability of error when not in outage should be of the
same order as the probability of outage. In this section we review one
such design for the 2 × 2 system, called tilted-QAM codes [9]. In the previous section
we saw that orthogonal codes do not achieve the optimal tradeoff for N ≥ 2. This
is because we effectively send only one symbol per channel use instead of N symbols.
In OSTBC designs, both information symbols s1 and s2 appear in both rows and
columns of X, leading to full diversity but reducing the rate. The new code is defined

as follows.

    X = [ x11  x12 ]
        [ x21  x22 ]

    [ x11 ]   [ cos θ1   − sin θ1 ] [ s11 ]
    [ x22 ] = [ sin θ1     cos θ1 ] [ s22 ]

    [ x21 ]   [ cos θ2   − sin θ2 ] [ s21 ]
    [ x12 ] = [ sin θ2     cos θ2 ] [ s12 ]

θ1 and θ2 are chosen so as to maximize the worst-case determinant of X. (2θ1, 2θ2) =
(arctan(1/2), arctan(2)) maximize the determinant over the complete integer lattice,
and

    min_{∀X} |det(X)| ≥ 1/(2√5)                          (5.8)
for QAM-like constellations of all sizes. When the channel is not in outage, this code
can be shown to achieve the optimal tradeoff [9] over all QAM sizes due to its minimum-determinant
property. When not in outage, the minimum distance between the received
codewords can be shown to be greater than half the noise variance. This implies that the
probability of error when not in outage is of the same order as that of outage, leading
to the optimal tradeoff even when T < M + N − 1. Therefore, any code which satisfies the
minimum-determinant criterion achieves optimality for a 2 × 2 system.
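The bound (5.8) can be spot-checked numerically. The sketch below (illustrative only, not the thesis code) builds the codeword from the two rotations with (2θ1, 2θ2) = (arctan(1/2), arctan(2)) and searches all difference symbols with real and imaginary parts in {−1, 0, 1}; over this slice of the Gaussian-integer lattice the minimum |det(X)| equals 1/(2√5):

```python
import math, itertools

# Rotation angles: (2*theta1, 2*theta2) = (arctan(1/2), arctan(2))
THETA1 = 0.5 * math.atan(0.5)
THETA2 = 0.5 * math.atan(2.0)

def tilted_det(s11, s22, s21, s12):
    """|det X| for the tilted-QAM codeword: (s11, s22) rotated by theta1
    fill the diagonal, (s21, s12) rotated by theta2 fill the anti-diagonal."""
    c1, n1 = math.cos(THETA1), math.sin(THETA1)
    c2, n2 = math.cos(THETA2), math.sin(THETA2)
    x11 = c1 * s11 - n1 * s22
    x22 = n1 * s11 + c1 * s22
    x21 = c2 * s21 - n2 * s12
    x12 = n2 * s21 + c2 * s12
    return abs(x11 * x22 - x12 * x21)

# All non-zero symbol-difference patterns with Re, Im in {-1, 0, 1}
vals = [complex(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
min_det = min(tilted_det(*c) for c in itertools.product(vals, repeat=4)
              if any(x != 0 for x in c))
bound = 1.0 / (2.0 * math.sqrt(5.0))   # worst-case bound of eq. (5.8)
```

The minimum is attained e.g. at (s11, s22, s21, s12) = (1, 0, 0, 0), where det X = 1/(2√5).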

5.3.1 Criterion for a Non Gaussian code to achieve optimal tradeoff for
(M, M ) MIMO system

In this section we derive the criterion required to achieve the optimal tradeoff
curve for a general (M, M) antenna configuration. We also assume that T ≥ M.
This is a valid assumption, as the number of signaling intervals should be at least the
number of transmit antennas to achieve full diversity [10]. Consider an m²-QAM
constellation from Z(i), used to transmit at a rate R = r log2(ρ), so that m² = 2^{R/M} ≐
ρ^{r/M}, where ≐ denotes equality in the asymptotic sense with respect to ρ, i.e., any value which is not
a function of ρ can be taken to be 1, since it does not grow with ρ. Therefore
any constant multiplying ρ can be absorbed into ρ.

    Es ≐ m² = 2^{R/M} = ρ^{r/M}                          (5.9)

The noise variance σv² can be expressed as

    σv² = Es / ρ ≐ ρ^{r/M − 1}                           (5.10)

Let H denote the M × M channel matrix. Let λi, i = 1, 2, . . . , M, denote the
singular values of H, and πi, i = 1, 2, . . . , M, denote the singular values of the code
matrix X. We also observe that the properties of the singular values of the code matrix X
and of the code difference matrix ∆ are similar, due to the linearity of the code and the
constellation considered.
We will find the condition on the πi, ∀i, such that when the channel is not in outage,

    min_{∀X} ||HX||² ≥̇ σv²                              (5.11)

When the system is not in outage,

    log det(I + ρHH†) > r log(ρ)

    ⇒ ∏_{i=1}^{M} (1 + ρλi²) ≥ ρ^r                       (5.12)

||HX||² can be written in terms of the λi and πi as follows:

    ||HX||² = Σ_{ij} u²_{ij} λi² πj² = Σ_i ci λi²        (5.13)

where [uij] is a unitary matrix depending on H and X, and ci = Σ_j u²_{ij} πj². From
(5.11) and (5.13), we observe that (5.11) is equivalent to finding a condition on the
ci such that

    min_X Σ ci λi² ≥̇ ρ^{r/M − 1}                         (5.14)

Equation (5.12) is symmetric with respect to the λi ∀i. Hence there exists a
minimum distance from the origin, i.e., min(Σ λi²) exists, and it occurs when λ1 =
λ2 = . . . = λM. Under this condition,

    (1 + ρλi²) > ρ^{r/M}                                 (5.15)

    λi² > ρ^{r/M − 1} − ρ^{−1}                           (5.16)

From equation (5.16) we get

    Σ ci λi² > (ρ^{r/M − 1} − ρ^{−1}) Σ ci

So for equation (5.14) to hold true in the case of min Σ_i λi², i.e., minimum power in
the channel, we get

    (ρ^{r/M − 1} − ρ^{−1}) min_X Σ ci ≥̇ ρ^{r/M − 1}

    (ρ^{r/M − 1} − ρ^{−1}) min_X Σ_i Σ_j u²_{ij} πj² ≥̇ ρ^{r/M − 1}
i j

Exchanging the summations and using the unitary property of u, we get

    min_X Σ_j πj² ≥̇ ρ^{r/M − 1} / (ρ^{r/M − 1} − ρ^{−1})        (5.17)

When ρ → ∞, we get

    min_X Σ_j πj² > 1

The above inequality says that the instantaneous energy (not the average power) of
the input matrix X should be greater than 1 at high SNR. In the above derivation, it was
assumed that the worst case of the channel occurs when Σ λi² is minimum,
but the worst case may occur at other combinations of λi. We will show that
condition (5.17) is a sufficient condition to achieve optimality. When
equation (5.12) holds true, then

    Σ_i λi² > M(ρ^{r/M − 1} − ρ^{−1})                    (5.18)

Fig.5.3: Outage region as a function of λ1 and λ2

since the distance from the origin of any point satisfying (5.12) is at least that of the
equal-λi point (Figure 5.3). Also, for equation (5.14) to hold true,

    min_X Σ ci λi² ≥̇ min_X min{ci} Σ λi² ≥̇ ρ^{r/M − 1}

    ⇒ min_X min{ci} ≥̇ ρ^{r/M − 1} / (M(ρ^{r/M − 1} − ρ^{−1}))    (5.19)

We know that min{ci} > min{πi²}. A sufficient condition for both inequalities to hold
true is

    min_X min{πi²} ≥̇ ρ^{r/M − 1} / (M(ρ^{r/M − 1} − ρ^{−1}))

    min_X Σ πi² ≥̇ M min{πi²} ≥̇ ρ^{r/M − 1} / (ρ^{r/M − 1} − ρ^{−1})

    min_X Σ_i πi² > 1,   ρ → ∞

So the condition min_X Σ πi² ≥ 1 is sufficient to achieve optimality, and it encompasses
the other cases when the λi are not equal. Also, in the above analysis we found the
condition that the distance between the codewords be greater than or equal to the noise
variance. This alone may not guarantee correct decoding, because the instantaneous noise may
be greater than the noise variance. But in the above derivation we assumed that the

channel is exactly out of outage, i.e., on the boundary point. Depending on how deep
we are into the non-outage region, the codeword difference will be much greater
than the noise variance. In Appendix B we show that this condition makes the code
behave as if T → ∞.
In the above derivation we obtained the condition Σ πi² > 1 for all QAM, but a
condition like Σ πi² > C for all QAM, where C is some constant, is sufficient, because
all the above results hold in the asymptotic sense, and any C which is not a function of ρ
equals 1 in that sense. In the above results, the point r = 0 causes the bound to go
to infinity, for the following reason. In the analysis of the rate-diversity tradeoff, the
absolute rate is R = r log(ρ), which becomes zero at r = 0. This implies that there is
only one matrix in the codebook. To avoid this problem one can define the rate as
R = r log(ρ) plus a constant term. The contribution of the constant term becomes
negligible at higher SNR; the term ρ^{r/M − 1} in the above results then acquires an extra
constant factor, and the singularity at r = 0 is removed.
The codes designed using the criterion Σ πi² ≥ 1 come close to achieving the
optimal tradeoff curve, and they will tend to perform better than conventional
codes, since we are reducing the error probability when not in outage. Summarizing
the results of this section:
A linear space-time code X satisfying the following conditions

1. min_X Σ_i πi² > 1 for all QAM

2. det(X) > 0 for all QAM

3. The code should be able to achieve R = M log(ρ), or equivalently the normalized
   rate should be at least M symbols per channel use.

achieves the optimal rate-diversity tradeoff for an M × M system over QAM constellations.
5.3.2 Examples of Codes that achieve optimal Tradeoff

In this subsection we analyze some existing codes for their tradeoff characteristics.

Tilted QAM Codes: Yao [9] showed that tilted-QAM codes achieve the optimal
tradeoff curve. Here we show that this code satisfies Σ πi² ≥ 1 for all
QAM. Tilted-QAM codes were shown to have a determinant greater than a constant
for all QAM. The tilted-QAM code is given by

    X = [ cos(θ1)s1 − sin(θ1)s2    sin(θ2)s3 + cos(θ2)s4 ]      (5.20)
        [ cos(θ2)s3 − sin(θ2)s4    sin(θ1)s1 + cos(θ1)s2 ]

where (θ1, θ2) = (0.5 arctan(2), 0.5 arctan(0.5)) are the optimal angles that maximize
the determinant. Since Σ πi² = ||X||², the Frobenius norm of the code matrix can be
considered. The Frobenius norm of the above matrix is

    ||X||² = |s1|² + |s2|² + |s3|² + |s4|²

We observe that for any QAM constellation ||X||² is greater than or equal to 1, since
||X|| ≠ 0 and ||X||² is an integer for si ∈ Z(i). Hence the above code satisfies the
conditions Σ πi² ≥̇ 1 and det(X) > 0 for all QAM.
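The identity Σ πi² = ||X||² used above is just trace(XX†) expressed two ways, and the rotations in (5.20) preserve the symbol energy. A quick numerical sanity check (illustrative sketch; the random 16-QAM symbols are arbitrary):

```python
import math, random

# Angles of eq. (5.20)
T1 = 0.5 * math.atan(2.0)
T2 = 0.5 * math.atan(0.5)

def tilted_codeword(s1, s2, s3, s4):
    """Entries (x11, x12, x21, x22) of the tilted-QAM codeword, eq. (5.20)."""
    x11 = math.cos(T1) * s1 - math.sin(T1) * s2
    x22 = math.sin(T1) * s1 + math.cos(T1) * s2
    x21 = math.cos(T2) * s3 - math.sin(T2) * s4
    x12 = math.sin(T2) * s3 + math.cos(T2) * s4
    return x11, x12, x21, x22

rng = random.Random(0)
s = [complex(rng.choice((-3, -1, 1, 3)), rng.choice((-3, -1, 1, 3)))
     for _ in range(4)]                       # random 16-QAM symbols
X = tilted_codeword(*s)
frob2 = sum(abs(x) ** 2 for x in X)           # ||X||^2 = pi_1^2 + pi_2^2
energy = sum(abs(si) ** 2 for si in s)        # total symbol energy
```

Because the rotations are orthogonal, ||X||² equals the symbol energy exactly, which is a nonzero Gaussian-integer quantity, hence at least 1.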

Linear Threaded Algebraic Space-Time Codes: TAST codes [19] were proposed
by Damen and were later extended to LTAST codes. Damen claims that LTAST
codes achieve the optimal tradeoff curve, which was verified by simulations, but without
a theoretical proof. In this section we show that these codes satisfy the
determinant and optimality criteria and hence achieve the optimal tradeoff. A generic
TAST code is defined as follows:

    D_{M,M,M}(u) = [ u11               . . .   φ^{1/M} u21 ]
                   [ φ^{1/M} u22       . . .   φ^{2/M} u32 ]     (5.21)
                   [ ⋮                  ⋱      ⋮            ]
                   [ φ^{(M−1)/M} uMM   . . .   u1M          ]

where φ is chosen to guarantee full diversity and optimize the coding gain. φ is generally
chosen as

    φ = e^{iλ},   λ ≠ 0 ∈ R

The vector ui = [ui1 ui2 . . . uiM] is chosen as

    ui = Mi si

where si is the i-th information vector and Mi is a unitary matrix chosen to maximize
the product distance ∏_j uij. Since Mi is unitary, ||ui||² = ||si||². Consider the square
of the Frobenius norm of D_{M,M,M}:

    ||D_{M,M,M}||² = Σ_{ij} |uij|² = Σ_{k=1}^{M²} |sk|² ≥̇ 1

since |φ| = 1. Also, it is shown by Damen [18] that det(D_{M,M,1}) > 0 for all QAM.
Hence, as this code satisfies all the required conditions, it should achieve the optimal
tradeoff.

5.3.3 Error Analysis for Non Gaussian Codes

In this section, the error performance of non-Gaussian codes with regard to the rate-diversity
tradeoff is analyzed. In [10], the error analysis was done for a fixed code, and in [8]
the tradeoff curve was established assuming Gaussian codes. In this
section we assume that the code is non-Gaussian and varies with ρ, and derive
the tradeoff criterion.
When Gaussian codes are used and T ≥ M + N − 1, the optimal tradeoff curve is obtained.
In Gaussian codes, since the individual entries can be anything, the properties of the code
depend only on T. For non-Gaussian codes, or codes from a finite alphabet, the
properties of the code can be characterized not only by T but also by parameters
like the worst-case minimum distance between codewords and the minimum determinant.
All of these are functions of the discrete singular values of the code. All the results in
this section assume N = M.
We derive the error probability as a function of the eigenvalues of HH† and
XX†. Suppose X1 is transmitted, and the channel realization is H. Let HH† =
UΣU†, Σ = diag(λ1, . . . , λM), denote the eigenvalue decomposition of HH†, and
XX† = VΠV†, Π = diag(π1, . . . , πM), denote the eigenvalue decomposition of XX†.
Then the probability that X1 is wrongly decoded as X2 satisfies

    P[X1 → X2] ≤ exp{−ρ ||H(X1 − X2)||²}

As before, if we assume that the code is linear and the input alphabet belongs to QAM,
then X1 − X2 = X.

    ||HX||² = Tr(ΣU†VΠV†U) = Tr(ΣΘΠΘ†)

where U, V, and Θ = U†V are unitary matrices.

    P(X1 → X2 | HH† = UΣU†) ≤̇ exp{−ρ Tr(ΣΘΠΘ†)}         (5.22)

If HH† = UΣU† and H is random, then Σ and U are independent. In order to obtain
the error probability as a function of the λi and πi, we average equation (5.22) over all
unitary matrices, i.e., over the Stiefel manifold. We use the following integral
result of Itzykson and Zuber. Let A and B be two diagonal matrices of order m. Then

    ∫ dΘ e^{Tr(ΘAΘ†B)} δ(ΘΘ† − Im) = Γ(m + 1) . . . Γ(1) · det E(A, B) / (det V(A) det V(B))      (5.23)

where

    E(A, B) = [ e^{a1 b1}   . . .   e^{a1 bm} ]
              [ ⋮            ⋱      ⋮          ]
              [ e^{am b1}   . . .   e^{am bm} ]

V(A), V(B) are the Vandermonde matrices of A and B respectively, and Γ(·) denotes the
gamma function. We use the fact that the probability density function of unitary
matrices of order m is

    f(Θ) = (Γ(m) . . . Γ(1) / π^{m(m+1)/2}) δ(ΘΘ† − Im)

Averaging the error probability gives

 
    P[X(1) → X(2) | Σ] ≤ (C / (∏_{i<j}(ρπi − ρπj) ∏_{i<j}(λi − λj))) det [ e^{−ρλ1π1}   . . .   e^{−ρλ1πM} ]
                                                                         [ ⋮             ⋱      ⋮           ]
                                                                         [ e^{−ρλMπ1}   . . .   e^{−ρλMπM} ]

                       ≤ (C / (∏_{i<j}(ρπi − ρπj) ∏_{i<j}(λi − λj))) det [ 1/(1+ρλ1π1)   . . .   1/(1+ρλ1πM) ]
                                                                         [ ⋮              ⋱      ⋮            ]
                                                                         [ 1/(1+ρλMπ1)   . . .   1/(1+ρλMπM) ]      (5.24)

where C is a constant and we use the bound e^{−x} ≤ 1/(1 + x), ∀x ≥ 0. The second
determinant is of Cauchy type and evaluates to

    det [1/(1 + ρλiπj)] = ∏_{i<j}(ρπi − ρπj) ∏_{i<j}(λi − λj) / ∏_{i,j=1}^{M} (1 + ρλiπj)

Substituting the determinant, we get

    P[X(1) → X(2) | Σ] ≤̇ 1 / ∏_{i,j=1}^{M} (1 + ρλiπj)          (5.25)
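The determinant substituted above is of Cauchy type, so it can be verified numerically, up to an overall sign that depends on the index ordering. A small pure-Python check for M = 2 (the values of ρ, λi, πi are arbitrary):

```python
import random

def bound_det_2x2(rho, lam, pi):
    """det [1/(1 + rho*lam_i*pi_j)] for M = 2, computed directly."""
    a = [[1.0 / (1.0 + rho * lam[i] * pi[j]) for j in range(2)] for i in range(2)]
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def bound_det_closed_form(rho, lam, pi):
    """Closed form: (rho*pi_1 - rho*pi_2)(lam_1 - lam_2) / prod (1 + rho*lam_i*pi_j)."""
    num = (rho * pi[0] - rho * pi[1]) * (lam[0] - lam[1])
    den = 1.0
    for i in range(2):
        for j in range(2):
            den *= 1.0 + rho * lam[i] * pi[j]
    return num / den

rng = random.Random(3)
rho = 100.0
lam = sorted(rng.uniform(0.1, 2.0) for _ in range(2))
pi = sorted(rng.uniform(0.1, 2.0) for _ in range(2))
d_direct = bound_det_2x2(rho, lam, pi)
d_formula = bound_det_closed_form(rho, lam, pi)
```

The magnitudes agree to machine precision; the overall sign depends on whether the λi, πi are taken in increasing or decreasing order.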

The above approximation can be made only at high SNR: although e^{−αx} ≤ 1/(1 + αx)
for all x, α ≥ 0, the difference e^{−αx} − e^{−αy} is not in general close to
1/(1 + αx) − 1/(1 + αy) for finite α, but becomes so as α → ∞. The
two bounds given by eq. (5.24) and eq. (5.25) have been averaged over H and plotted. We
Fig.5.4: Comparison of exact and approximate error bounds

observe that the bound becomes accurate for SNR above 10 dB. We also observe that
the slope is preserved, which is what matters for the tradeoff analysis. Instead of
averaging over the code ensemble, the values of πi which give rise to the worst determinant
will be considered. This can be justified by dropping the 1 in the denominator
and observing that the denominator is then a function of the determinant. After applying the
union bound we get

    P[error | Σ] ≤ ρ^{Tr} / ∏_{i,j=1}^{M} (1 + ρλiπj)
For a code to achieve the optimal tradeoff, the probability of error when not in outage
should be of the same order as the outage probability. So, to obtain a criterion for achieving
the optimal error probability, we average it over the event of no outage, (A)^c. We define
λi = ρ^{−αi} as in [8]. We also define πi = ρ^{−βi}, since the πi are no longer constant
but vary with the size of the codebook, which varies with ρ. We then get

    P[error | Σ] ≤̇ ρ^{−(Σ_i Σ_j (1 − αi − βj)^+ − Tr)}

It is shown in [8] that the pdf of the λi, or equivalently of the αi, at high ρ is

    p(α) ≐ ρ^{−Σ_i (2i − 1 + |M − N|) αi}

Using the method followed in [8], the error probability when not in outage is

    p(error | no outage) ≤̇ ∫_{(A)^c} ρ^{−d(r,α,β)} dα

where

    d(r, α, β) = Σ_{i=1}^{M} (2i − 1)αi + Σ_{i=1}^{M} Σ_{j=1}^{M} (1 − αi − βj)^+ − Tr      (5.26)

and

    A = {αi ∈ R^+ : α1 ≥ α2 ≥ . . . ≥ αM ≥ 0, and Σ (1 − αi)^+ ≤ r}      (5.27)

The probability is dominated by the term corresponding to the α that minimizes d(r, α, β)
subject to the constraint (5.27), so min{dmin(r, α, β), (M − r)(M − r)} becomes the
diversity gain. β is chosen so as to make the error exponent of the order of the optimal
tradeoff.
The above minimization was solved numerically under the given constraint and
rate to obtain the optimal diversity for the 2 × 2 system. The values of βi obtained are
given in Table 5.3. The values of βi for a 3 × 3 system are given in Table 5.4, and for a
4 × 4 system in Table 5.5.
From Table 5.3 we observe that β1 + β2 = 0 for all rates r. To check this,
dmin(r, α, β) was calculated for a valid range of β1 and β2 for which β1 + β2 = 0 holds.
For all such βi satisfying the constraint, dmin(r, α, β) was found to be greater
than or equal to (M − r)(M − r) for all rates except r = 0. At r = 0 we get the
constraint β1 + β2 ≤ −0.5. Converting the βi to πi, we get the following conditions.

    r    dmin(r, α, β)    β1    β2
    0         4           -1     1
    1         1            0     0
    2         0            1    -1

Table 5.3: Values of βi for optimal rate-diversity tradeoff for the 2 × 2 system, T = 2

    r    dmin(r, α, β)    β1    β2    β3
    0        10            0     0    -1
    1         4            0     0     0
    2         1            0     0     0
    3         0            0     0     0

Table 5.4: Values of βi for optimal rate-diversity tradeoff for the 3 × 3 system, T = 3

    r    dmin(r, α, β)    β1    β2    β3    β4
    0        16           -1     0     0     0
    1        10            0     1    -1     0
    2         4            0     0     0     0
    3         1            0     0     0     0
    4         0            0     0     1    -1

Table 5.5: Values of βi for optimal rate-diversity tradeoff for the 4 × 4 system, T = 4

    r = 0:      β1 + β2 ≤ −0.5  ⇒  π1π2 ≥ √ρ
    r = 1, 2:   β1 + β2 ≤ 0     ⇒  π1π2 ≥ 1

Except for the point r = 0, we have obtained the condition given by Yao [9] for
optimality of the 2 × 2 system. At r = 0 we obtain det(X) ≥ √ρ for the following
reason. In the rate-diversity tradeoff, r = 0 ⇒ R = 0, which
theoretically means there is only one matrix (or none) to transmit, i.e., the size of the codebook
is 1. Hence the above result is consistent, since we can put the total power into the
single matrix. Intuitively, r = 0 corresponds to any constant-rate transmission for
all ρ. Since we already know that such a constant-rate code achieves full diversity
when det(X) > 0, we can impose the same condition here for r = 0. Combining this
with the above conditions for the other r, we get that any 2 × 2 code achieves the
optimal tradeoff if

    det(X) ≥ 1  ∀X                                       (5.28)

The constant 1 is of no particular significance; it is enough that the determinant be greater
than some constant C. Similar results were obtained for a 3 × 3 system, i.e., det(X) ≥ 1
is sufficient to achieve optimality.

5.4 Full rate and Full Diversity codes from Extension fields

For a code to achieve the optimal tradeoff, we have to maximize certain functions of
the minimum singular values of the code matrix. Even for a code to achieve full
diversity [10], the determinant of the difference code matrix should be nonzero. In
this section we design new codes based on extension fields which give full diversity
over all QAM (scaled Gaussian integers).
5.4.1 Code Construction

Let s denote the information vector of size q × 1 and fi(s) denote a linear function of
the information symbols. Then the input code matrix is defined as

    X = UΣV†                                             (5.29)

where U†U = I_{M×M} and V†V = I_{T×T} are unitary matrices. Σ is defined as

    Σ = [ f1(s)    0      . . .   0 ]
        [   0    f2(s)    . . .   0 ]                    (5.30)
        [   ⋮      ⋮       ⋱      ⋮ ]

The fi(s) are chosen over an appropriate extension field so that they are nonzero
and linear over the Gaussian-integer ring. See Appendix A for background on
extension fields.

For example, in a 2 × 2 system we can choose f1(s) = s1 + √2 s2 and f2(s) =
s1 − √2 s2. Choosing U and V as 2 × 2 Hadamard matrices, we get the code X as

    X = [ s1      √2 s2 ]
        [ √2 s2   s1    ]                                (5.31)

We observe that X is linear and has determinant s1² − 2s2², which is nonzero for
si ∈ Q(i). Here we have chosen Q(√2) as the extension field. We see that if
s1, s2 ∈ Z(i), then f1(s), f2(s) ≠ 0. We also observe that if f1(s) and f2(s) are linear
combinations of the complete information vector, as in the previous example, then
det(X) ≠ 0 ⇒ det(X − X′) ≠ 0, where X′ is another code matrix.
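The 2 × 2 construction can be checked numerically: with Hadamard U = V, the product UΣV† collapses to the closed form (5.31), and det(X) = s1² − 2s2² stays bounded away from zero over Gaussian-integer symbols. A small illustrative sketch (not the thesis simulation code):

```python
import math

SQRT2 = math.sqrt(2.0)

def code_matrix(s1, s2):
    """X = U diag(f1, f2) V^dag with 2x2 Hadamard U = V = (1/sqrt2)[[1,1],[1,-1]].
    Since the Hadamard matrix is real and symmetric, V^dag = U."""
    f1 = s1 + SQRT2 * s2
    f2 = s1 - SQRT2 * s2
    h = 1.0 / math.sqrt(2.0)
    U = [[h, h], [h, -h]]
    return [[U[i][0] * f1 * U[0][j] + U[i][1] * f2 * U[1][j] for j in range(2)]
            for i in range(2)]

# Compare against the closed form (5.31) and track the minimum |det|
vals = [complex(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
max_err, min_det = 0.0, float("inf")
for s1 in vals:
    for s2 in vals:
        X = code_matrix(s1, s2)
        closed = [[s1, SQRT2 * s2], [SQRT2 * s2, s1]]
        max_err = max(max_err,
                      max(abs(X[i][j] - closed[i][j])
                          for i in range(2) for j in range(2)))
        if s1 != 0 or s2 != 0:
            det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
            min_det = min(min_det, abs(det))
```

Over this slice of the lattice the minimum |det(X)| = |s1² − 2s2²| equals 1, so the determinant never vanishes for nonzero symbols.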
fi(s) is chosen as the dot product of the information vector and the basis elements
of an algebraic extension field over Q(i) whose degree over Q is q, i.e.,

    fi(s) = s0 + s1α + s2α² + . . . + s_{q−1}α^{q−1}     (5.32)

We observe that fi(s) ≠ 0 unless s = 0, since we are forming a linear combination
with the basis elements over the information field. The choice of the extension field
is important; it has to be chosen so that the approximation error of integers by the
basis elements is large, yielding a better product distance. The rate of the above
code is R = (q/T) log2(|S|), where S is the constellation set. These codes can be
viewed as a more general form of Diagonal Algebraic Space-Time (DAST) codes [11].
In DAST codes, rotations of the information symbols are used instead of general linear
functionals, with the rotation chosen to maximize the product distance. Also, the matrix U
is always chosen as a Hadamard matrix, while V is chosen to be an identity
matrix. The rate of DAST codes is R = (M/M) log2(|S|) = log2(|S|).

5.4.2 Simulation Results

In this subsection we study the effect of U, V on the ergodic capacity, outage capacity
and BER performance. We also study, by simulation, the effect of the extension field on the
performance of the code, and compare the performance with the Alamouti scheme and
DAST codes. For the simulations, BPSK was used with the code given by eq. (5.31).
In Figure 5.5, the proposed method is compared with Alamouti and DAST codes for a
2 × 1 system with R = 1 bps/Hz. We observe that the new code has diversity 2. The
new code performs worse than both Alamouti and DAST because its minimum
determinant is smaller than theirs.

In Figure 5.6, the new code was simulated with fi(s) = a0s0 + a1s1 + a2s2 + a3s3,
with the ai chosen as the basis of different extension fields, on a 2 × 2 system using BPSK. In
one case Q(2^{1/4}) was chosen as the extension field, yielding the basis (a0, a1, a2, a3) =
(1, 2^{1/4}, 2^{2/4}, 2^{3/4}). In the second case Q(e^{iπ/4}) was chosen, yielding
the basis (a0, a1, a2, a3) = (1, e^{iπ/4}, e^{i2π/4}, e^{i3π/4}). In the first case the minimum
determinant is δmin = 0.0098159 and in the second case δmin = 0.29287. The
minimum determinant over BPSK for the extension field Q(2^{1/4}) is thus almost
zero, so the diversity of the code in this extension field should be low. We

Fig.5.5: Comparison of New Method with Alamouti and DAST for a 2 × 1 system, R = 1 bps/Hz

can observe from Figure 5.6 that the code from Q(2^{1/4}) has a diversity of 1, while the
other has diversity 4.
Although Σ has been specified, the choice of U, V has not. So the code
was simulated with different unitary matrices for capacity and SER performance,
with a view to choosing the best possible rotation. The code was simulated for capacity
with f1(s) = s1 + √2 s2 and f2(s) = s1 − √2 s2. In one case, Hadamard matrices were
used for the rotation. In the second case, an arbitrary rotation matrix was chosen:

    U = V = [ −0.72388   −0.6899  ]
            [ −0.6899     0.72388 ]

Figure 5.7 shows the ergodic capacity as a function of SNR for both cases. We
observe that the choice of rotation matrix is immaterial: both rotations give
exactly the same ergodic capacity. Figure 5.8 shows the SER performance of the code for
the above two rotations using BPSK for a 2 × 2 system. We observe that both
rotations perform identically. From these two simulations, one can infer that
the choice of rotation matrix is not vital.

Fig.5.6: Comparison of New code on different extension fields for a 2 × 2 system, R = 2 bps/Hz

5.5 Summary

In this chapter, we studied the rate-diversity tradeoff and tilted-QAM codes in detail.
We proposed a new criterion on the singular values of the code matrix for achieving the
optimal tradeoff with non-Gaussian codes for any (M, M) system. We also showed that
tilted-QAM codes and TAST codes, which achieve the optimal tradeoff, indeed satisfy the
criterion. We analyzed the error probability of non-Gaussian codes, obtaining
an upper bound on the error probability as a function of the singular values of the channel
and the code matrix. We then used this result in the rate-diversity framework to obtain
the exact condition on the singular values of the code matrix for the optimal tradeoff.
The results obtained from this framework match the criterion given by Yao [9]
for a 2 × 2 system. We also obtained a similar condition for a 3 × 3 system. Finally, we
proposed codes which obtain full diversity using extension fields; these codes can be
thought of as a generalization of DAST codes.

Fig.5.7: Capacity of New Code for different rotations, 2 × 2 system

Fig.5.8: SER performance of New Code for different rotations, 2 × 2 system
Appendix A

A.1 Extension Fields

In this appendix, we summarize the required results on extension fields.

Definition A field E is an extension field of a field F if F ⊆ E.

Definition An element α of an extension field E of a field F is algebraic over F
if f(α) = 0 for some nonzero f(x) ∈ F[x]. If α is not algebraic over F, then α is
transcendental over F. For example, Q ⊂ C and √2 is a zero of x² − 2; hence √2 is
algebraic over Q. π is transcendental over Q but algebraic over R.

Definition Let E be an extension field of F, and let α ∈ E be algebraic over F.
Then there is a monic irreducible polynomial p(x) ∈ F[x] such that p(α) = 0. This
polynomial is the minimal polynomial of α, denoted irr(α, F). The degree
of α over F is the degree of the minimal polynomial, denoted deg(α, F).

Definition An extension field E of a field F is a simple extension of F if E = F(α)
for some α ∈ E. In a simple extension field E = F(α), with α algebraic of degree n,
every element β ∈ E can be expressed as

    β = b0 + b1α + . . . + b_{n−1}α^{n−1}                (A.33)

where bi ∈ F.

Theorem A.1.1 Let E be an extension field of F, and let α be algebraic over F.
If deg(α, F) = n, then F(α) is an n-dimensional vector space over F with basis
{1, α, . . . , α^{n−1}}. Furthermore, every element β of F(α) is algebraic over F.

Definition An extension field E of a field F is an algebraic extension of F if every
element of E is algebraic over F.

Definition If an extension field E of a field F is of finite dimension n as a vector
space over F, then E is a finite extension of degree n over F. [E : F] denotes the
degree n of E over F.

Theorem A.1.2 A finite extension field of a field F is an algebraic extension of F.

If {αi | i = 1, . . . , n} is a basis for E over F and {βj | j = 1, . . . , m} is a basis for
K over E, for fields F ⊆ E ⊆ K, then the set {αiβj} of mn products is a basis for
K over F. From now on, by extension field we mean algebraic extension.


Example Consider the set of rationals Q. Suppose we want to include √2 and
still keep the field structure. Then we should include all elements of the form
a + b√2, a, b ∈ Q. √2 is an algebraic number, since it is a root of x² − 2. This new
field is denoted Q(√2). Every element of this extension field is of the form
a + b√2, a, b ∈ Q. Hence it can also be thought of as a vector space with basis
{1, √2}, and the dimension of this field is [Q(√2) : Q] = 2. Also, we observe that the
element 0 is generated only when a = b = 0.

Example Consider Q(√3). The basis is {1, √3}, and the dimension of the
field is 2.

Example Consider Q(√2, √3). By the above theorem, the basis of this vector space
over Q is {1, √2, √3, √6}, and the dimension is 4.

Example Let φ = e^{iπ/4} ∈ C. Then φ is a root of the irreducible polynomial µφ(X) =
X⁴ + 1 ∈ Q[X]. The degree of φ over Q is 4, and one has φ⁴ = −1 ∈ Q. The minimal
polynomial of φ over Q(i)[X] is µφ(X) = X² − i, and φ has degree 2 over
Q(i).

Appendix B

B.1 Proof

An ML decoder is guaranteed to decode correctly if the magnitude of the noise is smaller than half
the minimum distance between codewords. In this appendix we show that the
condition Σ πi² ≥ 1 is indeed sufficient to achieve optimality. We generalize the
method given by Yao [9] for the proof.
Proof:
Suppose the channel is not in outage, so that the mutual information exceeds the desired
rate, and write

    ∏_{i=1}^{M} (1 + ρλi²) / ρ^r = α ≥ 1

If Σ πi² ≥ 1, then, by following the steps of the earlier derivation in reverse, it can be
shown that

    τ = δ(H)/σv² = min_{∀X} ||HX||² / ρ^{r/M − 1} ≥̇ α^{1/M} ≥ 1

From Tse [8], α can also be written as

    α ≐ ρ^{Σ(1−αi)^+ − r}

    ⇒ τ ≐ ρ^{(1/M)(Σ(1−αi)^+ − r)}

Using the fact that ||V||^2 is a chi-square random variable with 2MT degrees of freedom, we get

    P(error|H) ≤ P(||V||^2 / σ_v^2 > τ/2)
              =̇ ∫_{τ/2}^{∞} u^{MT−1} e^{−u} du
              = Γ(MT, τ/2)

where Γ(s, x) is the upper incomplete gamma function. Approximating the incomplete gamma
function,

    P(error|H) =̇ τ^{MT−1} e^{−τ/2}                                    (B.1)

and τ > ρ^{(1/M)(Σ_i (1 − α_i)^+ − r)} when the channel is not in outage. For a Gaussian code of length
T, Tse [8] showed that when the channel is not in outage,

    P(error|H) ≤̇ ρ^{−T(Σ_i (1 − α_i)^+ − r)}

For T ≥ 2M − 1 this bound becomes exponentially tight.


Denoting η = ρ^{Σ_i (1 − α_i)^+ − r}, P(error|H) decays with η like η^{−T} for Gaussian
codes. For non-Gaussian codes, P(error|H) decays like η^{(MT−1)/M} e^{−η^{1/M}/2}. Since an expo-
nential decays faster than any polynomial, eq. (B.1) behaves as if T → ∞.
Therefore any non-Gaussian code satisfying the condition achieves the optimal tradeoff.
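The chi-square tail step in the proof can be checked numerically. With hypothetical values M = 2, T = 3 and threshold τ = 10 (not taken from the thesis), the closed-form tail of a chi-square variable with 2MT degrees of freedom, i.e. the upper incomplete gamma ratio Γ(MT, τ/2)/Γ(MT), agrees with a Monte Carlo estimate:

```python
import math
import random

def chi2_tail(dof, t):
    # P(chi^2_dof > t) for even dof: the Erlang survival function
    # e^{-x} * sum_{k<n} x^k / k!  with n = dof/2 and x = t/2,
    # which equals Gamma(n, t/2) / Gamma(n).
    n, x = dof // 2, t / 2.0
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(n))

def mc_tail(dof, t, trials=200_000, seed=1):
    # Monte Carlo: a chi-square with dof degrees of freedom is a sum of
    # dof squared standard normals.
    rng = random.Random(seed)
    return sum(
        sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(dof)) > t
        for _ in range(trials)
    ) / trials

M, T, tau = 2, 3, 10.0        # hypothetical system size and threshold
exact = chi2_tail(2 * M * T, tau)
approx = mc_tail(2 * M * T, tau)
print(exact, approx)
```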

Appendix C

C.1 LDC for (3, 3), (3, 1) and (4, 1) systems
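The tables that follow list the dispersion matrix pairs (C_q, D_q). In the linear dispersion framework of Hassibi and Hochwald [1], a T × M codeword is assembled from the data symbols s_q = α_q + jβ_q as S = Σ_q (α_q C_q + jβ_q D_q). A minimal sketch of that assembly, using tiny hypothetical 2 × 2 matrices rather than the tables in this appendix:

```python
# Assemble an LDC codeword S = sum_q (alpha_q*C_q + 1j*beta_q*D_q) as in
# Hassibi-Hochwald [1]; matrices here are toy 2x2 values for illustration.
def ldc_codeword(symbols, Cs, Ds):
    T, M = len(Cs[0]), len(Cs[0][0])
    S = [[0j] * M for _ in range(T)]
    for s, C, D in zip(symbols, Cs, Ds):
        a, b = s.real, s.imag          # alpha_q, beta_q
        for t in range(T):
            for m in range(M):
                S[t][m] += a * C[t][m] + 1j * b * D[t][m]
    return S

Cs = [[[1, 0], [0, 1]]]                # C_1 = identity (toy example)
Ds = [[[0, 1], [1, 0]]]                # D_1 = swap     (toy example)
S = ldc_codeword([2 + 3j], Cs, Ds)     # alpha_1 = 2, beta_1 = 3
```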

(M, N, T, Q) = (3, 1, 3, 3), Power constraint 1 :

0.3411 + 0.0517i 0.0168 + 0.5532i 0.7378 + 0.0788i


C1 = −0.3787 + 0.3827i −0.0633 + 0.0775i 0.1443 − 0.1239i
0.2759 + 0.6538i −0.3646 + 0.2477i 0.0651 + 0.4849i

0.1952 + 0.5108i −0.4272 + 0.3943i 0.1716 + 0.4064i


C2 = 0.5675 + 0.0424i 0.1926 + 0.5347i 0.3461 − 0.0003i
0.1647 − 0.1285i 0.1439 − 0.0848i −0.0527 − 0.0713i

0.4784 − 0.2382i 0.1517 + 0.2285i −0.0444 + 0.0644i


C3 = 0.0294 − 0.3447i −0.4067 + 0.1007i 0.1032 + 0.5672i
0.0414 − 0.0768i −0.2611 + 0.5689i 0.4689 + 0.3593i

−0.0235 + 0.1865i −0.2932 + 0.2783i 0.1027 + 0.1765i


D1 = −0.0264 + 0.0875i 0.1501 − 0.3638i −0.2069 − 0.1650i
0.0941 − 0.0725i 0.4272 + 0.1437i 0.0901 − 0.2024i

0.2117 + 0.0796i −0.0865 − 0.2898i −0.4450 + 0.0048i


D2 = −0.2868 + 0.2438i 0.2392 + 0.0939i 0.2732 − 0.4060i
0.0226 − 0.5440i −0.1822 − 0.1636i −0.0216 + 0.4305i

0.1863 − 0.3228i 0.1715 + 0.1145i 0.1899 − 0.1178i


D3 = 0.2820 + 0.5248i −0.0610 + 0.4203i 0.2299 + 0.1205i
−0.4733 + 0.0081i −0.2271 − 0.2842i −0.2373 + 0.1150i
(M, N, T, Q) = (3, 1, 3, 3), Power constraint 2 :

0.3737 + 0.1417i 0.1408 + 0.3558i −0.0938 − 0.3567i


C1 = 0.1268 + 0.5930i −0.4963 − 0.2910i 0.3211 + 0.5288i
0.0352 + 0.5203i −0.2457 + 0.2519i 0.2449 − 0.2463i

0.0512 + 0.1207i 0.0219 + 0.4816i 0.2645 − 0.6501i


C2 = 0.1675 + 0.4251i 0.0146 + 0.4211i 0.2472 − 0.4469i
0.1669 + 0.0816i −0.0485 − 0.1596i 0.1272 + 0.2901i

−0.1587 − 0.2448i −0.1439 − 0.0437i 0.5432 + 0.1064i


C3 = 0.3934 − 0.0843i 0.2853 + 0.4586i 0.1094 − 0.1099i
0.0686 + 0.0129i −0.0949 + 0.5979i 0.5391 − 0.4302i

0.1115 + 0.4917i −0.3479 + 0.1164i −0.0145 + 0.2072i


D1 = 0.1741 − 0.0898i −0.0719 + 0.0225i −0.2108 + 0.1266i
0.2344 − 0.1841i 0.3348 − 0.0010i 0.1173 − 0.4577i

−0.1382 − 0.5394i 0.2635 − 0.0881i −0.0516 − 0.3075i


D2 = 0.0499 + 0.3777i −0.2222 + 0.1324i 0.0199 + 0.3226i
−0.6823 − 0.2656i 0.2872 − 0.1238i 0.0252 − 0.1135i

0.5085 + 0.2133i −0.1988 + 0.2161i 0.2484 + 0.0646i


D3 = 0.3956 − 0.4336i 0.1049 − 0.0097i −0.1417 − 0.1436i
−0.5629 + 0.1233i −0.0800 − 0.1645i −0.0599 + 0.1998i

(M, N, T, Q) = (3, 3, 3, 9), Power constraint 1 :

−0.1447 + 0.1487i 0.1742 − 0.1852i 0.0013 + 0.0315i


C1 = −0.0500 − 0.1501i −0.0018 + 0.4558i −0.0134 − 0.0747i
0.1715 + 0.2649i 0.2890 + 0.2510i 0.3626 − 0.1211i

0.1447 + 0.0754i 0.2816 + 0.2830i 0.0000 − 0.0705i


C2 = −0.0283 + 0.3805i −0.0123 + 0.0090i −0.0554 + 0.1229i
0.0682 + 0.0157i −0.0125 − 0.0232i 0.3500 − 0.1070i

0.2938 + 0.0368i 0.2489 + 0.2723i 0.2902 − 0.0346i


C3 = 0.0807 − 0.2036i 0.3120 + 0.3576i −0.0428 − 0.1071i
−0.0699 + 0.0153i −0.0706 + 0.0290i −0.1411 + 0.1892i

0.3566 + 0.0067i 0.1402 + 0.0191i 0.1204 + 0.2297i


C4 = −0.0443 + 0.0523i −0.0631 − 0.1922i 0.1533 + 0.0347i
−0.2669 + 0.2456i 0.0202 − 0.0134i 0.1438 + 0.0194i

0.3288 − 0.0482i 0.0141 + 0.1560i −0.0363 + 0.1758i


C5 = −0.1329 + 0.1452i 0.0615 + 0.2486i 0.2527 + 0.0247i
0.1723 − 0.0258i 0.2685 + 0.1196i −0.1458 + 0.0610i

0.3243 + 0.0172i −0.1241 − 0.0990i −0.0560 − 0.0966i


C6 = 0.2423 + 0.1407i 0.0384 − 0.0908i 0.1940 + 0.2504i
0.0084 + 0.1137i −0.1167 + 0.3904i 0.1426 − 0.0672i

−0.2103 − 0.0523i −0.0704 + 0.1378i 0.0755 + 0.0130i


C7 = 0.0724 + 0.1923i 0.1015 − 0.1416i 0.0345 + 0.3740i
0.1977 − 0.0739i 0.4276 + 0.0228i 0.0867 + 0.3223i

0.0246 + 0.2605i 0.0896 − 0.0961i −0.0792 + 0.0974i


C8 = 0.2787 + 0.0996i 0.2038 − 0.0582i 0.0637 + 0.1174i
0.0304 − 0.0886i −0.0221 + 0.1114i 0.0249 + 0.0462i

−0.1469 + 0.2140i 0.0986 + 0.0206i 0.2455 + 0.2052i


C9 = 0.1182 + 0.1278i 0.0337 − 0.1056i 0.0436 − 0.1024i
0.2554 + 0.3533i 0.0510 − 0.1068i 0.0415 + 0.1888i

−0.0159 − 0.2227i 0.0387 + 0.0416i −0.2616 − 0.0507i
D1 = −0.0888 + 0.2190i 0.1439 + 0.0226i −0.0538 − 0.1059i
−0.0286 + 0.0060i −0.0977 + 0.1141i 0.0913 − 0.1135i

−0.1263 − 0.1232i −0.1412 + 0.2485i 0.3822 − 0.2640i


D2 = 0.1358 − 0.0104i 0.0312 − 0.0062i 0.1204 + 0.2036i
0.1624 + 0.0837i −0.1601 + 0.0979i −0.1553 − 0.0911i

−0.2003 − 0.0988i 0.0852 − 0.2375i 0.1106 + 0.0408i


D3 = 0.0390 + 0.0264i −0.0491 + 0.1583i 0.2462 − 0.0326i
−0.1885 + 0.1369i −0.0072 − 0.1904i −0.0011 + 0.1689i

0.1984 − 0.0299i −0.2960 + 0.1736i −0.1966 − 0.2267i


D4 = 0.0684 + 0.0875i −0.1909 + 0.1303i −0.1563 − 0.1570i
−0.1267 + 0.0103i 0.2730 − 0.1650i 0.2699 + 0.0902i

0.0467 + 0.0690i 0.2135 + 0.0961i 0.1349 + 0.2787i


D5 = 0.0544 − 0.0881i −0.2651 − 0.0944i −0.3376 − 0.1115i
0.1890 − 0.2941i 0.0313 + 0.1322i −0.0847 − 0.0338i

0.1076 + 0.0861i 0.0766 − 0.0342i −0.2729 + 0.1370i


D6 = −0.0989 − 0.1749i 0.1403 + 0.0278i 0.3215 − 0.0555i
0.2044 + 0.0032i −0.2212 − 0.0754i −0.1109 + 0.2613i

−0.0169 + 0.0769i −0.2606 − 0.2553i 0.0071 + 0.1258i


D7 = 0.1764 + 0.1294i 0.1844 + 0.2529i 0.0470 − 0.0351i
−0.1911 − 0.1496i 0.0351 + 0.0169i 0.0655 + 0.0370i

0.0087 + 0.3374i 0.2277 + 0.0170i 0.0850 − 0.2147i


D8 = −0.1174 + 0.1664i 0.0227 − 0.1899i −0.0267 − 0.2597i
−0.2793 + 0.2110i 0.2893 + 0.0540i −0.2884 − 0.2560i

0.0237 + 0.0499i 0.0256 − 0.0649i −0.0123 + 0.1427i


D9 = −0.3117 − 0.4200i −0.0274 − 0.1563i −0.0084 + 0.3370i
0.0688 + 0.1442i 0.1116 − 0.1173i 0.1502 − 0.0988i

(M, N, T, Q) = (4, 1, 4, 4), Power constraint 1 :

−0.2182 − 0.0173i −0.1062 + 0.1573i 0.4904 + 0.3679i 0.3701 + 0.1327i


−0.1480 + 0.0628i 0.0537 + 0.0400i 0.3300 − 0.0460i −0.0921 − 0.2913i
C1 =
0.1287 + 0.1137i −0.0401 + 0.0177i −0.0002 − 0.1661i 0.2595 + 0.6639i
0.0925 + 0.2544i 0.0694 + 0.1120i 0.2964 − 0.4113i 0.3406 + 0.2751i

0.0865 + 0.0712i 0.0890 − 0.0954i −0.1966 − 0.4000i −0.3880 − 0.1515i


0.0229 + 0.0184i −0.0934 − 0.0451i −0.1803 + 0.0497i −0.1931 + 0.4100i
C2 =
0.0233 + 0.1523i −0.0220 + 0.1489i 0.3828 − 0.1408i 0.4685 + 0.3540i
−0.0271 + 0.0649i −0.1614 + 0.1001i 0.1871 + 0.2634i 0.3518 + 0.7135i

0.2417 + 0.0860i −0.0067 − 0.0489i −0.3120 − 0.2558i 0.1488 + 0.4727i


0.2146 + 0.0851i −0.0031 + 0.0766i −0.0926 − 0.2123i 0.5824 + 0.4090i
C3 =
0.1034 + 0.1320i −0.0764 − 0.0062i −0.1530 − 0.0794i 0.0464 + 0.6984i
0.1267 + 0.0445i 0.0794 − 0.1363i −0.3525 − 0.3491i −0.3915 − 0.0455i

−0.0249 + 0.1810i −0.0609 + 0.1400i 0.4660 + 0.0410i 0.4728 + 0.5688i


0.0572 + 0.1167i −0.0644 + 0.0346i 0.1989 − 0.0670i 0.2324 + 0.6251i
C4 =
−0.1605 − 0.1075i −0.0301 + 0.0373i 0.1984 + 0.2820i 0.0265 − 0.2834i
−0.0759 − 0.2397i −0.1503 − 0.0071i −0.0877 + 0.5149i 0.1807 + 0.0653i

−0.1649 + 0.1411i 0.0514 − 0.0547i 0.1890 − 0.1321i −0.5984 − 0.0786i


−0.1907 − 0.0846i −0.0129 + 0.1308i 0.5443 + 0.2717i 0.4256 − 0.3613i
D1 =
−0.0498 + 0.1462i −0.0553 + 0.0937i 0.3415 + 0.0005i 0.1710 + 0.3706i
−0.0324 − 0.2834i −0.0123 − 0.0638i −0.2914 + 0.2813i 0.0669 − 0.5012i

0.2877 + 0.1408i 0.1124 − 0.0469i −0.2203 − 0.5175i 0.1711 + 0.2377i


−0.3249 − 0.0278i −0.0384 + 0.1232i 0.5845 + 0.2977i 0.0168 − 0.3306i
D2 =
−0.1631 − 0.1803i −0.0385 − 0.0099i −0.0613 + 0.3140i −0.1780 − 0.3353i
0.2412 + 0.1836i 0.0366 − 0.0154i −0.0913 − 0.3158i 0.1259 + 0.5554i

−0.1290 − 0.1127i 0.0365 + 0.0078i 0.0257 + 0.0838i 0.0342 − 0.5571i


−0.1296 + 0.2076i −0.0392 + 0.1300i 0.5588 − 0.0086i 0.1640 + 0.4038i
D3 =
0.3183 + 0.0531i 0.1118 − 0.1054i −0.5356 − 0.5547i −0.1067 − 0.0818i
−0.0039 − 0.0306i −0.0761 − 0.0455i −0.1423 + 0.2684i −0.1864 + 0.3485i

0.1906 + 0.1455i 0.0504 + 0.0501i −0.0154 − 0.3362i 0.2732 + 0.2751i


−0.0929 − 0.2184i −0.0514 − 0.1098i −0.2200 + 0.3165i −0.3602 − 0.2493i
D4 =
0.2271 + 0.0544i −0.0462 − 0.1139i −0.5523 − 0.1919i −0.2856 + 0.4725i
−0.0436 − 0.2467i −0.0515 + 0.0608i −0.0552 + 0.4723i 0.4697 − 0.2388i

C.2 Frame theory based LDC for (3, 3) and (4, 1) systems

The code is presented in terms of X.


(M, N, T, Q) = (3, 3, 3, 9)

0.1710306 + 0.0950518i −0.0588658 − 0.2135632i −0.1536267 + 0.1643973i


−0.0509650 + 0.1636646i −0.0120218 + 0.0037621i 0.1255403 + 0.1670247i
0.0736869 − 0.1949654i −0.2470229 + 0.0292864i 0.1222324 − 0.0434173i
−0.0463803 + 0.2015773i 0.1383124 − 0.0416702i 0.0715931 + 0.1592428i
X (:, 1 : 3) = 0.2596792 − 0.0153978i 0.2537902 + 0.1071212i −0.0107065 + 0.0063840i
0.0203847 − 0.0155140i 0.0065322 + 0.1196550i −0.0214466 + 0.2828638i
0.1615898 − 0.0626817i 0.1542318 + 0.1318427i −0.1149708 + 0.1295428i
0.0839066 + 0.0837669i −0.0420368 − 0.1824854i −0.1583085 − 0.2055152i
0.1450655 + 0.2144053i −0.1363425 + 0.1276083i 0.1116477 − 0.0367177i

0.0386606 − 0.0911680i 0.0691544 + 0.1206424i −0.1563622 − 0.0443708i


0.0068064 − 0.2978277i 0.1809087 + 0.0211976i 0.1602032 − 0.0212991i
0.1115119 − 0.0110589i −0.1101001 + 0.2155803i 0.2236145 − 0.0925799i
0.1305872 + 0.2223637i −0.0328014 + 0.0433495i 0.1217878 − 0.0482173i
X (:, 4 : 6) = −0.1271412 − 0.0572257i 0.1097139 + 0.2215624i −0.0388283 + 0.2094158i
0.0071620 + 0.1584978i 0.2161796 − 0.0171853i 0.0331300 − 0.2179306i
−0.0243750 − 0.1849652i −0.2416715 − 0.1743960i 0.2161201 − 0.1443195i
−0.0539756 − 0.0032265i 0.1209245 − 0.0467338i 0.0517750 − 0.1922216i
−0.1242191 + 0.2407295i −0.0093311 + 0.0734725i 0.0533730 − 0.0331153i

0.263179 − 0.112013i 0.096318 − 0.206459i 0.019047 − 0.074954i


0.123980 + 0.066730i −0.096519 + 0.162685i 0.072948 + 0.226427i
0.013040 + 0.096474i 0.074429 − 0.133741i 0.219951 + 0.012700i
−0.054793 − 0.159335i −0.199627 − 0.003491i 0.237121 − 0.167696i
X (:, 7 : 9) = −0.131393 + 0.162998i −0.030400 − 0.054062i 0.119928 − 0.038893i
−0.191644 + 0.046491i 0.256611 − 0.039393i −0.081192 + 0.065387i
0.029075 − 0.008117i −0.088591 − 0.107225i −0.120227 − 0.079546i
−0.217748 + 0.006361i −0.203360 − 0.173570i 0.032986 + 0.193745i
0.192127 + 0.160725i −0.137007 − 0.038896i −0.227341 + 0.004433i

(M, N, T, Q) = (4, 1, 4, 4)

−0.0018803 − 0.0211371i −0.1153889 − 0.0123965i −0.0051421 + 0.1101025i −0.1679822 + 0.0295773i


−0.1456307 + 0.0346721i −0.1348900 + 0.1040347i −0.1686524 − 0.0010904i 0.0346961 − 0.0063777i
−0.0379106 + 0.1020475i 0.0946964 + 0.0652913i 0.0145382 + 0.0325755i −0.0043559 − 0.1433103i
0.1008687 + 0.1327171i 0.0759632 − 0.0318185i −0.0099654 − 0.1432981i −0.0991539 − 0.0421231i
0.1389515 − 0.0820616i 0.0202299 + 0.0753535i −0.1540776 + 0.0558095i −0.1077012 − 0.0806600i
−0.0298793 + 0.0641237i 0.0897593 + 0.0264857i −0.0628018 + 0.0863142i −0.1301470 − 0.1337760i
0.1119947 + 0.0553405i 0.0113414 + 0.2148643i 0.0892371 − 0.0213421i −0.0623316 + 0.0696557i
−0.1237926 + 0.0228856i −0.0195958 + 0.0312264i 0.0168051 + 0.1246998i 0.0125731 + 0.0257868i
X =
0.0436600 + 0.0167234i −0.0958582 − 0.0203252i −0.1297531 + 0.0256053i −0.0310594 + 0.0531056i
0.1291932 − 0.1288297i −0.0103046 + 0.0141946i 0.0852503 + 0.0804021i 0.0518512 − 0.0990102i
0.0778917 + 0.1093759i 0.0507412 − 0.0199494i −0.1336901 + 0.0798455i 0.1696101 + 0.0250265i
0.0709957 + 0.0628932i −0.1750997 + 0.1376876i −0.0480657 − 0.0686857i −0.0057084 + 0.1296033i
0.1435662 + 0.1149389i −0.1719114 − 0.0615540i 0.0545890 + 0.0549996i −0.0014086 − 0.1073066i
0.0119180 + 0.0406807i 0.1365042 − 0.0760405i −0.0918302 − 0.0222727i −0.0383918 + 0.1116054i
0.0056651 − 0.1303127i 0.0010224 + 0.0008066i −0.1676136 + 0.0215880i −0.0183700 − 0.0589464i
0.0640554 + 0.0759227i 0.0575679 − 0.0377659i −0.0123423 + 0.1373082i −0.0398171 + 0.1779216i

Bibliography

[1] B. Hassibi and B. Hochwald, "High-rate codes that are linear in space and time,"
IEEE Trans. Inform. Theory, vol. 48, pp. 1804-1824, July 2002.

[2] B. Hassibi and H. Vikalo, "On the sphere decoding algorithm I. Expected
complexity."

[3] B. Hassibi and H. Vikalo, "On the sphere decoding algorithm II. Generaliza-
tions, second-order statistics, and applications to communications."

[4] H. Vikalo, "Sphere Decoding Algorithms for Digital Communications," PhD
thesis, Stanford University, 2003.

[5] E. Viterbo, "Tecniche matematiche computazionali per l'analisi ed il
progetto di costellazioni a reticolo," PhD thesis, EPFL, 1995.

[6] J. Conway and N. Sloane, "Sphere Packings, Lattices and Groups," Springer-
Verlag, 1993.

[7] U. Fincke and M. Pohst, "Improved methods for calculating vectors of short
length in a lattice, including a complexity analysis," Mathematics of Compu-
tation, vol. 44, pp. 463-471, April 1985.

[8] L. Zheng and D. N. C. Tse, "Diversity and multiplexing: A fundamental
tradeoff in multiple-antenna channels," IEEE Trans. Inform. Theory, vol. 49,
pp. 1073-1096, May 2003.

[9] H. Yao, "Efficient Signal, Code, and Receiver Designs for MIMO Communi-
cation Systems," PhD thesis, Massachusetts Institute of Technology, 2003.

75
[10] V. Tarokh, N. Seshadri, and A. R. Calderbank, "Space-time codes for high data
rate wireless communication: Performance criterion and code construction,"
IEEE Trans. Inform. Theory, vol. 44, pp. 744-765, March 1998.

[11] M. O. Damen, A. Tewfik, and J.-C. Belfiore, "A construction of a space-
time code based on number theory," IEEE Trans. Inform. Theory, vol. 48, March
2002.

[12] S. Alamouti, "A simple transmit diversity technique for wireless communica-
tions," IEEE JSAC, vol. 16, pp. 1451-1458, October 1998.

[13] E. Telatar, "Capacity of multi-antenna Gaussian channels," AT&T Bell Labs
Internal Tech. Memo., June 1995.

[14] G. J. Foschini, "Layered space-time architecture for wireless communication in
a fading environment when using multi-element antennas," Bell Laboratories
Technical Journal, vol. 1, no. 2, pp. 41-59, Autumn 1996.

[15] R. W. Heath Jr. and A. J. Paulraj, "Linear dispersion codes for MIMO systems
based on frame theory," IEEE Trans. Signal Proc., vol. 50, October 2002.

[16] R. W. Heath Jr. and A. J. Paulraj, "Capacity maximizing linear space-time
codes," IEICE Transactions on Electronics, vol. E85-C, no. 3, pp. 428-435,
March 2002.

[17] K. Hoffman and R. Kunze, "Linear Algebra," second edition, Prentice Hall
of India.

[18] M. O. Damen, H. El Gamal, and N. C. Beaulieu, "Linear threaded algebraic
space-time constellations," IEEE Trans. Inform. Theory, vol. 49, pp. 2372-2388,
Oct. 2003.

[19] H. El Gamal and M. O. Damen, "Universal space-time coding," IEEE Trans.
Inform. Theory, vol. 49, pp. 1097-1119, May 2003.

76
[20] U. Fincke and M. Pohst, "Improved methods for calculating vectors of short
length in a lattice, including a complexity analysis," Mathematics of Compu-
tation, vol. 44, pp. 463-471, April 1985.

