LTE Concept Facile 3
There are several similar but slightly different terms that indicate the ratio between the wanted signal and the unwanted noise. These terms confuse almost everybody. I will try to explain the concept behind these terms in as practical a sense as I can (hopefully). In many cases, it is much easier to understand them once you know for what purpose (in what context) each one is used.
SNR stands for 'Signal to Noise Ratio'. It is pretty much self-explanatory and does not need much explanation. It is simply the ratio of signal power to noise power, as described below in mathematical form.
SNR can be either positive or negative if you represent it in dB scale. Negative SNR means that the signal power is lower than the noise power. You may think communication would be impossible in a negative-SNR condition, but in reality there are communication systems (technologies) designed to work mostly in such conditions (e.g., CDMA, WCDMA).
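As a quick numeric illustration of the dB scale described above (the helper name `snr_db` is just for this sketch, not any standard API):

```python
import numpy as np

def snr_db(signal_power, noise_power):
    """SNR in dB: 10 * log10(P_signal / P_noise)."""
    return 10 * np.log10(signal_power / noise_power)

# Signal twice as strong as the noise -> positive dB value
print(round(snr_db(2.0, 1.0), 2))   # 3.01
# Signal at half the noise power -> negative dB value
print(round(snr_db(0.5, 1.0), 2))   # -3.01
```

A ratio above 1 always gives a positive dB value and a ratio below 1 a negative one, which is all "negative SNR" means.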
How does SNR impact the performance of a system (especially the receiver of a transmission system)? I think the following plots will give you an intuitive understanding of this. As you see, as SNR decreases the quality of the signal gets poorer (higher noise level). As a result, Bit Error Rate (BER) will increase and sensitivity will decrease. (Note: the noise added in this example is AWGN. See the AWGN page for details of the relationship between SNR and AWGN.)
In the following plots, the red dots indicate the ideal constellation with almost no error, and the black dots represent the statistical location of each data point with noise. You can say that the farther a black dot is from the red dots, the more probable a bit error becomes. In this example, you see three cases of a QAM constellation, each exposed to errors at a different SNR. You will notice that as SNR goes lower, the spread of the constellation grows wider. It means that, with the same modulation scheme, as SNR goes lower the probability of error goes higher. If you are not familiar with this kind of concept, give yourself some more time until you understand it.
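This "spread widens as SNR drops" behavior is easy to reproduce numerically. Below is a small sketch (not tied to any particular figure on this page) that builds an ideal 16QAM constellation, adds AWGN at a few SNRs, and measures how far the noisy symbols land from the ideal points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal 16QAM constellation (the "red dots"), normalized to unit average power
levels = np.array([-3.0, -1.0, 1.0, 3.0])
ideal = np.array([complex(i, q) for i in levels for q in levels])
ideal /= np.sqrt((np.abs(ideal) ** 2).mean())

def noisy_cloud(snr_db, n=2000):
    """Received symbols (the 'black dots') after AWGN at the given SNR (dB)."""
    tx = ideal[rng.integers(0, ideal.size, n)]
    noise_power = 10 ** (-snr_db / 10)      # signal power is 1 by construction
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return tx + noise

# Mean distance from the nearest ideal point grows as SNR drops
for snr in (25, 15, 5):
    rx = noisy_cloud(snr)
    spread = np.abs(rx[:, None] - ideal[None, :]).min(axis=1).mean()
    print(snr, round(spread, 3))
```

Running this prints a larger mean distance for each lower SNR, which is exactly the widening cloud you see in the constellation plots.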
Now let's describe the relationship between SNR and Bit Error Rate in a more quantitative way. If you have had the chance to read articles, papers, or theses about communication technology (especially anything related to transmitter or receiver technology), you will have seen plots like the one shown at the bottom right. However, if you are new to this area, the interpretation of the plots may not be clear.
The following constellation is based on the LTE physical layer specification. The exact quantitative relation between SNR and BER varies depending on each communication system design, but the overall logic explained here holds true for any system.
First, take a look at the series of constellations on the top track. You see the cases of different modulations (BPSK, QPSK, 16QAM, 64QAM, 256QAM) but the same SNR. You will notice that even with the same SNR, the probability of error gets higher as the modulation depth increases. I hope this sounds clear to you. This top track represents a single point on each curve in the plot at the bottom, as indicated by the green arrows. Give yourself some more time until you clearly understand this.
Now let's decrease SNR by 5 dB. On the top track, you will notice that the range of errors on the constellation gets wider, and on the plot you see the Bit Error Rate increase.
Decrease SNR by another 5 dB, and the errors on the constellation spread even wider, while the Bit Error Rate on the plot increases further.
Decrease SNR by yet another 5 dB, and the constellation spread widens further still, with the Bit Error Rate increasing even more.
Do you see a trend in this example? Even with exactly the same constellation, the Bit Error Rate increases or decreases based on SNR. Many people tend to think that the error rate is determined by transmitter power and receiver power, but in reality the absolute power is not what matters. What really matters is SNR. However, in practice many people, including me, take transmitter or receiver power as an indirect indicator of SNR, based on the BIG ASSUMPTION that the level of noise is known (even roughly) and does not change when you increase or decrease the power. If this BIG ASSUMPTION holds true, then if you increase transmitter power you may say SNR will be better than with low transmitter power, and if you have higher received power you may say SNR will be better than with lower received power. But don't blindly apply this rule in any accurate analysis or troubleshooting. If you are in a situation that requires a very accurate bit-error analysis, you need to check the SNR of every component on the signal path. I know this is a huge job; it is one of the reasons why calibrating high-accuracy test equipment (e.g., a conformance test system) takes such a long time and requires a lot of high-end test equipment.
As you saw above, SNR is tightly related to BER (Bit Error Rate). You may have noticed a general trend:
i) At the same modulation depth, you get high BER (poor performance) at low SNR and low BER (good performance) at high SNR.
ii) At the same SNR, you get high BER (poor performance) at high modulation depth and low BER (good performance) at low modulation depth.
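Both trends can be checked against the standard textbook approximation for the bit error rate of Gray-coded square M-QAM in AWGN. This is a generic approximation (treating SNR as the per-symbol Es/N0), not an exact LTE result:

```python
from math import erfc, log2, sqrt

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / sqrt(2))

def ber_mqam(M, snr_db):
    """Approximate BER for Gray-coded square M-QAM in AWGN (SNR = Es/N0 in dB)."""
    k = log2(M)
    gamma_s = 10 ** (snr_db / 10)
    return (4 / k) * (1 - 1 / sqrt(M)) * qfunc(sqrt(3 * gamma_s / (M - 1)))

for M in (4, 16, 64):
    print(M, [f"{ber_mqam(M, s):.1e}" for s in (5, 10, 15, 20)])
# Trend i):  for a fixed M, BER falls as SNR rises
# Trend ii): for a fixed SNR, BER rises with modulation depth M
```

Reading down a column (same SNR) the BER grows with M; reading across a row (same M) it shrinks as SNR improves, matching the two trends above.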
However, in modern communication, various kinds of channel coding and error correction technologies are used to correct a certain degree of bit errors. So if you measure the error rate after error correction, you may see a much lower error rate than without error correction. Usually the error rate after error correction is measured as a parameter called BLER (BLock Error Rate). However, even with this kind of error correction, you cannot fix all the errors. Therefore, the general trend still holds true for BLER measurement:
i) At the same modulation depth, you get high BLER (poor performance) at low SNR and low BLER (good performance) at high SNR.
ii) At the same SNR, you get high BLER (poor performance) at high modulation depth and low BLER (good performance) at low modulation depth.
The exact correlation between SNR and BLER may vary depending on what kind of channel coding and error correction is used. The following graph shows a good example of SNR vs BLER for the LTE PDSCH (see Ref [2] for details; this is data for a system supporting only up to 64QAM. You would see different plots if you measured a system supporting 256QAM).
Similar to SNR, there is another indicator called SINAD, defined as shown below. It indicates the ratio of the total power (wanted + unwanted) to the unwanted power. Since the numerator in the definition is the total power, the value in dB is always positive.
In most RF areas we use SNR more frequently, while in some areas like audio signal analysis we tend to use SINAD more frequently.
We often get confused between SNR and SINAD and have difficulty understanding the difference between them. It is well explained in Reference [1], as stated below.
As stated above, the main difference is whether 'distortion' is included in the calculation or not. Distortion can be understood more intuitively in the time domain. If you convert a distorted signal into the frequency domain, the distortion appears in the form of harmonics. So in frequency-domain terms, the main difference between SNR and SINAD is whether harmonics are included in the calculation or not.
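The harmonic distinction can be seen numerically. In the toy signal below (all tone amplitudes are arbitrary), the third harmonic counts as distortion: SINAD puts it in the unwanted power, SNR does not, so SINAD comes out lower:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)

fundamental = np.sin(2 * np.pi * 50 * t / n)           # wanted tone
harmonic    = 0.05 * np.sin(2 * np.pi * 150 * t / n)   # 3rd-harmonic distortion
noise       = 0.02 * rng.standard_normal(n)

p_sig   = np.mean(fundamental ** 2)
p_dist  = np.mean(harmonic ** 2)
p_noise = np.mean(noise ** 2)

snr   = 10 * np.log10(p_sig / p_noise)                 # harmonics excluded
# SINAD as defined above: total power over unwanted power (harmonics included)
sinad = 10 * np.log10((p_sig + p_dist + p_noise) / (p_dist + p_noise))
print(round(snr, 1), round(sinad, 1))   # SINAD < SNR: distortion counts as unwanted
```

The same measurement gives two different numbers purely because of where the harmonic power is placed in the ratio.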
SINR stands for Signal to Interference plus Noise Ratio, and the definition can be illustrated as below (I hope this single picture explains everything). Simply put, SINR is the ratio of the desired signal to the unwanted noise, where the unwanted part comprises all the external interference plus the internally generated noise.
Example 1 : SNR (SINR) vs Throughput in an LTE Live Network
The following plot is from data captured by the Azenqos Drive Test tool (AZQ Android). The plot was generated automatically by the AZQ Reporting tool; I only did some cosmetic touch-up on the chart.
This is a real measurement showing the correlation between SINR and throughput. As you see, as SNR (SINR) goes higher, throughput increases exponentially; in other words, as SNR decreases, throughput decreases exponentially. If the network did not change the code rate (i.e., MCS), the throughput decrease would be due to decoding failure at the receiver (i.e., at the UE). In a real network, however, the UE reports CQI to the eNB periodically, and the eNB changes the code rate accordingly (i.e., decreasing the MCS as the CQI gets lower, which results in a smaller transport block size), so this throughput change is mainly due to the smaller transport block size.
Antenna Port
I think one of the most confusing concepts in the LTE physical layer is the concept of 'antenna port'. The official definition of antenna port goes as follows. (To be honest, this official definition does not make clear sense to me.)
An antenna port is defined such that the channel over which a symbol on the antenna port
is conveyed can be inferred from the channel over which another symbol on the same
antenna port is conveyed. There is one resource grid per antenna port. The antenna ports
used for transmission of a physical channel or signal depends on the number of antenna
ports configured for the physical channel or signal as shown in Table 5.2.1-1.
< 36.211 - Table 5.2.1-1: Antenna ports used for different physical channels and signals >
An antenna port is defined such that the channel over which a symbol on the antenna port
is conveyed can be inferred from the channel over which another symbol on the same
antenna port is conveyed. For MBSFN reference signals, positioning reference signals, UE-
specific reference signals associated with PDSCH and demodulation reference signals
associated with EPDCCH, there are limits given below within which the channel can be
inferred from one symbol to another symbol on the same antenna port. There is one
resource grid per antenna port. The set of antenna ports supported depends on the
reference signal configuration in the cell:
NOTE : As you see here, there are several different port combinations for a specific reference signal type. Which combination is used is determined by a specific antenna configuration (i.e., Transmission Mode). For further details on the antenna port combinations for each transmission mode and reference signal, refer to the Transmission Mode page and the Reference Signal (Downlink) page.
Simply put,
An antenna port is a logical concept, not a physical one (meaning 'antenna port' is not the same as 'physical antenna')
Each antenna port represents a specific channel model
The channel experienced by a specific antenna port can be estimated by using the reference signal assigned to that port (this is why each antenna port has its own reference signal)
To be honest, none of the verbal descriptions of antenna port were clear to me for a long time. I am the kind of person who has huge difficulty understanding things that I cannot visualize (form some visual image of). Just to give you another angle on the concept of antenna port, I will try to show you at exactly which point in physical layer processing the antenna ports are introduced. As illustrated below, antenna ports are first introduced in the precoding process, and each antenna port generates its own resource grid.
Now let's take a look at some practical examples of how each antenna port is associated with a resource grid. These examples show all the resource grids that can be observed at point (C) of the physical layer processing shown above. In these examples, I will draw each resource grid with only one RB, just for simplicity.
Channel Estimation
As I explained on other pages, in all communication the signal goes through a medium (called the channel), and while it passes through the channel the signal gets distorted and various kinds of noise are added to it. To decode the received signal properly, without many errors, we have to remove the distortion and noise that the channel applied to it. To do this, the first step is to figure out the characteristics of the channel that the signal has gone through. The technique/process of characterizing the channel is called 'channel estimation'. The process can be illustrated as below.
There are many different ways to do channel estimation, but the fundamental concepts are similar. The process goes as follows:
i) Set up a mathematical model that correlates the 'transmitted signal' and the 'received signal' using a 'channel' matrix.
ii) Transmit a known signal (normally called a 'reference signal' or 'pilot signal') and detect the received signal.
iii) By comparing the transmitted signal and the received signal, figure out each element of the channel matrix.
As an example of this process, I will briefly describe how it works in LTE. Of course, I cannot write down the full details of the process in LTE, and many of the details are up to the implementation (meaning the detailed algorithm may vary with each specific chipset implementation). However, the overall concept is similar.
General Algorithm
Channel Estimation for SISO
o Estimation of Channel Coefficient
o Estimation of Noise
Channel Estimation for 2 x 2 MIMO
o Estimation of Channel Coefficient
o Estimation of Noise
General Algorithm
How can we figure out the properties of the channel? That is, how can we estimate the channel? At a very high level, it can be illustrated as below. This illustration says the following:
i) We embed a set of predefined signals (called reference signals) into the transmission.
ii) As these reference signals go through the channel, they get distorted (attenuated, phase-shifted, noise-corrupted) along with the other signals.
iii) We detect/decode the received reference signals at the receiver.
iv) We compare the transmitted reference signals with the received reference signals and find the correlation between them.
Now let's consider the LTE SISO case and see how we can estimate the channel properties (channel coefficients and a noise estimate). Since this is SISO, the reference signal is embedded onto only one antenna port (port 0). The vertical axis of the resource map represents the frequency domain, so I indexed each reference signal as f1, f2, f3, ..., fn. Each reference symbol is a complex number (I/Q data) that can be plotted as shown below. Each complex number (reference symbol) on the left (transmission side) is modified (distorted) into the corresponding symbol on the right (received symbol). Channel estimation is the process of finding the correlation between the array of complex numbers on the left and the array of complex numbers on the right.
The detailed estimation method can vary depending on the implementation. The method described here is based on the open source project srsLTE (refer to [1]).
Since there is only one antenna, the system model for each transmitted and received reference signal can be represented as follows: y() represents the array of received reference signals, x() the array of transmitted reference signals, and h() the array of channel coefficients; f1, f2, ... are just integer indices.
We know x() because it is given, and y() is also known because it is measured/detected at the receiver. With these, we can easily calculate the coefficient array as shown below.
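This per-subcarrier least-squares step can be sketched in a few lines. This is a toy setup, not srsLTE's actual code; the channel values, QPSK symbols, and noise level are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8                                   # reference-signal subcarriers f1..fn

# Known unit-power QPSK reference symbols x(f), and a "true" channel to recover
x = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, n) + np.pi / 4))
h_true = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

y = h_true * x + noise                  # what the receiver actually detects

# Least-squares estimate at each reference location: h(f) = y(f) / x(f)
h_hat = y / x
print(np.max(np.abs(h_hat - h_true)))   # small residual: close to the true channel
```

Because x(f) is known and has unit power, dividing the received symbol by it directly exposes the channel coefficient (plus a small noise term).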
Now we have the channel coefficients at the locations where the reference signals sit. But we need channel coefficients at every location, including those points where there is no reference signal. That means we have to figure out the channel coefficients for the locations with no reference signal. The most common way to do this is to interpolate the measured coefficient array. srsLTE first averages the measured coefficients and then interpolates over the averaged channel coefficients.
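A minimal sketch of this averaging-then-interpolation idea follows. The reference-signal spacing and the filter here are illustrative only, not the exact LTE RS pattern or srsLTE's actual filters:

```python
import numpy as np

# Suppose reference signals sit on every 3rd subcarrier of a 2-RB span
# (positions are illustrative), with LS estimates h_rs already computed there.
rs_idx = np.arange(0, 24, 3)
all_idx = np.arange(24)
h_rs = np.exp(1j * 0.1 * rs_idx)        # toy measured channel coefficients

# 1) average neighbouring estimates to suppress noise (simple moving average)
h_avg = np.convolve(h_rs, np.ones(3) / 3, mode="same")

# 2) interpolate to the subcarriers that carry no reference signal
#    (np.interp is real-valued, so interpolate real and imaginary parts separately)
h_full = np.interp(all_idx, rs_idx, h_avg.real) + 1j * np.interp(all_idx, rs_idx, h_avg.imag)
print(h_full.shape)   # one coefficient for every subcarrier, RS or not
```

The averaging trades a little frequency resolution for noise suppression; the interpolation then fills in the non-RS locations from the smoothed values.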
The next step is to estimate the noise properties. Theoretically, the noise can be calculated as below. However, what we need are the statistical properties of the noise, not the exact noise values. We can estimate the noise using only the measured channel coefficients and the averaged channel, as shown below. (The exact noise values do not mean much by themselves, because the noise keeps changing and those specific values are of no further use.) In srsLTE, the author used this method.
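One way to see why the raw-minus-averaged difference yields a noise estimate is the toy sketch below (the window length and scaling correction are my own illustration; srsLTE's exact averaging differs):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
w = 5                                     # moving-average window length

# A slowly varying "true" channel plus per-estimate noise of variance 0.02
h_clean = np.exp(1j * 0.05 * np.arange(n))
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
h_raw = h_clean + noise                   # the measured (LS) coefficients

# Averaging keeps the slowly varying channel and suppresses the noise...
h_avg = np.convolve(h_raw, np.ones(w) / w, mode="same")

# ...so away from the edges the difference is mostly noise; its power,
# corrected for the noise fraction left inside the average, estimates the variance
core = slice(w, n - w)
noise_var_est = np.mean(np.abs((h_raw - h_avg)[core]) ** 2) / (1 - 1 / w)
print(round(noise_var_est, 3))            # roughly the true variance 0.02
```

The individual difference samples are useless on their own (the noise is random), but their average power is a stable statistic, which is exactly the point made above.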
Let's assume we have a communication system as shown below. x(t) indicates the transmitted signal and y(t) the received signal. When x(t) is transmitted into the air (the channel), it gets distorted, various noise is added, and the streams may interfere with each other, so the received signal y(t) cannot be the same as the transmitted signal x(t).
This relation among the transmitted signal, the received signal, and the channel matrix can be modeled mathematically as shown below.
In this equation, we know the values x1, x2 (the known transmitted signals) and y1, y2 (the detected/received signals). The parts we don't know are the H matrix and the noise (n1, n2).
For simplicity, let's assume there is no noise in this channel, meaning we can set n1 and n2 to 0. (Of course, in a real channel there is always noise, and estimating it is a very important part of channel estimation, but we assume no noise in this example just to keep it simple. I will add the case with noise later, when I can describe it in plain language.)
Since we have a mathematical model, the next step is to transmit a known signal (a reference signal) and figure out the channel parameters from it.
Suppose we send a known signal with amplitude 1 through only one antenna while the other antenna is off. The signal propagates through the air and is detected by both antennas at the receiver side. Now assume that the first antenna receives the reference signal with amplitude 0.8 and the second antenna receives it with amplitude 0.2. With this result, we can figure out one row of the channel matrix (H), as shown below.
Now suppose we send a known signal with amplitude 1 through only the other (second) antenna while the first antenna is off. Again, the signal propagates through the air and is detected by both antennas at the receiver side. Assume that the first antenna receives the reference signal with amplitude 0.3 and the second antenna receives it with amplitude 0.7. With this result, we can figure out the other row of the channel matrix (H), as shown below.
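The two soundings above can be written out directly. Here the channel entries are indexed h[tx][rx], matching this page's h11, h12 labeling, and noise is still assumed to be zero:

```python
import numpy as np

# True (unknown) channel, indexed h[tx, rx] as in the text
H = np.array([[0.8, 0.2],    # Tx antenna 0 -> (Rx antenna 0, Rx antenna 1)
              [0.3, 0.7]])   # Tx antenna 1 -> (Rx antenna 0, Rx antenna 1)

# Sound one Tx antenna at a time with a known amplitude-1 reference (no noise)
x0 = np.array([1.0, 0.0])    # antenna 0 on, antenna 1 off
x1 = np.array([0.0, 1.0])    # antenna 1 on, antenna 0 off
row0 = x0 @ H                # received amplitudes (0.8, 0.2): first row of H
row1 = x1 @ H                # received amplitudes (0.3, 0.7): second row of H

H_est = np.vstack([row0, row1])
print(np.allclose(H_est, H))  # each sounding recovers one full row of H
```

With no noise, each reference transmission reads off one row of H exactly, which is why two soundings fully determine a 2 x 2 channel.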
Simple enough? I think (and hope) you had no problem understanding this basic concept. But if you used this method exactly as described above, there would be some inefficiency: there would have to be moments when you transmit only the reference signal, without any real data, just to estimate the channel, meaning the data rate would drop because of the channel estimation process. To remove this inefficiency, real communication systems transmit the reference signal and the data simultaneously.
Now the question is: how can we implement the concept described above while transmitting the reference signal and data simultaneously? There are several ways to do this, and different communication systems use slightly different methods.
Taking LTE as an example, the method shown below is used. In 2 x 2 MIMO in LTE, each subframe has different locations for the reference signals of each antenna. The subframe for antenna 0 transmits the reference signal allocated to antenna 0 and transmits nothing on the resource elements allocated to antenna 1's reference signal. Likewise, the subframe for antenna 1 transmits the reference signal allocated to antenna 1 and transmits nothing on the resource elements allocated to antenna 0's reference signal. So if, at the two receiver antennas, you decode the resource elements allocated to antenna 0's reference signal, you can estimate h11 and h12; if you decode the resource elements allocated to antenna 1's reference signal, you can estimate h21 and h22 (here we again assume no noise, for simplicity).
The process illustrated above measures the H matrix at one specific point in the frequency domain of an LTE OFDMA symbol. If you applied the measured H values as-is when decoding other parts of the symbol, the accuracy of the decoded symbols might not be as good as it could be, because the measurements from the previous step contain some level of noise. So in a real application, some kind of post-processing is applied to the H values measured by the method above, and during this post-processing we can also figure out the overall statistical properties of the noise (e.g., mean, variance, and statistical distribution). One thing to keep in mind is that a specific noise value obtained in this process does not mean much by itself: the value obtained from the reference signal would not be the same as the noise affecting the other (non-reference) data, because the noise changes randomly. However, the overall properties of this random noise are important information (e.g., for SNR estimation).
Before moving on, let's briefly think about the mathematical model again. Even though we describe the system equation as follows, including the noise term, it does not mean you can directly measure the noise; that is impossible. The equation just shows that the detected signal (y) contains a certain noise component.
So, when we measure the channel coefficients, we use the equation without the noise term, as shown below.
Now assume you have measured the H matrix across a whole OFDM symbol; you will have multiple H matrices as below, each indicating the H matrix at one specific frequency.
So you have an array of H matrices. This array is made up of four different groups, each highlighted with a different color as shown below.
When you apply the post-processing algorithm, it has to be applied to each of these groups separately. So for simplicity, I rearranged the array of H matrices into multiple independent arrays (four arrays in this case), as shown below.
For each of these arrays, I do the same processing, as illustrated below. (Each chipset maker may apply a slightly different method, but the overall idea is similar.) In the method illustrated below, the data (the array of channel coefficients at each frequency point) is put through an IFFT, meaning the data is converted into the time domain, resulting in the array of time-domain data labeled (2). This is in fact an impulse response of the specific channel path. Then we apply a specific filter (or window) to this time-domain data; in this example, we replace the data from a certain point onward with zeros, creating the result labeled (3). You may apply a more sophisticated filter or window instead of this simple zeroing. Then, by converting the filtered channel impulse response back to the frequency domain, I get the filtered channel coefficients, and I use those values as the 'estimated channel coefficients' when decoding the other received signals (i.e., the non-reference data).
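Here is a sketch of this IFFT-window-FFT smoothing on synthetic data. The tap positions, noise level, and window length are all made up for illustration; the point is only that zeroing the late taps removes most of the noise while keeping a short channel intact:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64                                   # subcarriers in the symbol

# (1) Noisy per-subcarrier channel estimates for a short (2-tap) channel
taps = np.zeros(n, dtype=complex)
taps[0], taps[3] = 1.0, 0.4j             # true impulse response: early taps only
h_true = np.fft.fft(taps)
h_meas = h_true + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# (2) IFFT to the time domain: channel energy sits in the first few taps,
#     while the noise is spread evenly over all taps
imp = np.fft.ifft(h_meas)

# (3) Simple windowing: zero everything past the assumed maximum delay spread
imp[8:] = 0

# (4) FFT back: a denoised "estimated channel coefficient" array
h_filt = np.fft.fft(imp)

mse_meas = np.mean(np.abs(h_meas - h_true) ** 2)
mse_filt = np.mean(np.abs(h_filt - h_true) ** 2)
print(mse_filt < mse_meas)               # the filtered estimate is closer to truth
```

Keeping only 8 of 64 taps discards roughly 7/8 of the noise power while the channel (which lives entirely in taps 0..3) is untouched; a raised-cosine or other gentler window would avoid hard truncation artifacts.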
By applying the same process to all four arrays, you get four arrays of 'estimated channel coefficients'. From these four arrays, you can reconstruct the array of estimated channel matrices as follows.
With this estimated channel matrix, you can estimate the noise value at each point using the following equation. This is the same as the original system equation at the beginning of this page, except that the H matrix is replaced by the 'estimated H' matrix, and now we know all the values except the noise. So, by plugging in all the known values, we can calculate (estimate) the noise value at each measurement point.
If you apply this equation at every measurement point, you get the noise values for all the measurement points, and from those calculated values you can derive the statistical properties of the noise. As mentioned above, each individual noise value calculated here does not mean much by itself, because it cannot be applied directly to decoding other (non-reference) signals, but the statistical characteristics of the noise are very useful for determining the nature of the channel.
Inter-Cell Interference Coordination (ICIC)
As mobile communication technology has evolved dramatically, from LTE (10 MHz) to LTE-A (10+10 MHz) and then to wideband LTE (20 MHz), South Korea's mobile market is hotter than ever, with its big three operators competing fiercely in speed and quality (see Netmanias Report, LTE in Korea UPDATE - May 1, 2014). Operators can offer different maximum speeds depending on how wide a frequency bandwidth they can actually use. All three, having obtained pretty much the same amount of LTE frequency bandwidth, support practically the same maximum speeds.
However, these theoretical maximum speeds are not available to users in real life. What users experience, i.e., Quality of Experience (QoE), is affected by various factors, and so the actual QoE is far from the maximum speeds. One of the biggest factors causing such quality degradation is inter-cell interference.
In 2G/3G networks, it was the base station controllers, i.e., the upper nodes of the base stations, that controlled inter-cell interference. In 4G networks like LTE/LTE-A, however, inter-cell interference can be controlled through coordination among the base stations themselves. This became possible because LTE networks have X2 interfaces defined between base stations. By exchanging interference information over these X2 interfaces, base stations can now schedule radio resources in a way that avoids inter-cell interference.
In this and the next few posts, we will learn more about these interference coordination technologies. First, let's look at ICIC, the most basic interference coordination technology.
ICIC Concept
ICIC is defined in 3GPP release 8 as an interference coordination technology used in LTE
systems. It reduces inter-cell interference by having UEs, at the same cell edge but
belonging to different cells, use different frequency resources. Base stations that support
this feature can generate interference information for each frequency resource (RB), and
exchange the information with neighbor base stations through X2 messages. Then, from the
messages, the neighbor stations can learn the interference status of their neighbors, and
allocate radio resources (frequency, Tx power, etc.) to their UEs in a way that would avoid
inter-cell interference.
For instance, let's say a UE belonging to Cell A is using high Tx power on frequency resource (f3) at the cell edge. With ICIC, Cell B then allocates a different frequency resource (f2) to its own cell-edge UE, and allocates f3 to another UE at the cell center, having that center UE communicate at low Tx power.
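The Cell A / Cell B example above can be written as a tiny allocation sketch. The cell and resource names follow the example; the policy function itself is hypothetical (a real eNB scheduler is far more elaborate):

```python
# Protected cell-edge resources, as coordinated between neighbours over X2
EDGE_RBS = {"Cell A": {"f3"}, "Cell B": {"f2"}}

def allocate(cell, ue_position):
    """Return (frequency resource, Tx power) for a UE - illustrative policy only."""
    if ue_position == "edge":
        rb = next(iter(EDGE_RBS[cell]))           # own protected edge resource
        return rb, "high"
    # centre UE: reuse the neighbour's edge resource, but at low power
    other = ({"Cell A", "Cell B"} - {cell}).pop()
    rb = next(iter(EDGE_RBS[other]))
    return rb, "low"

print(allocate("Cell A", "edge"))    # ('f3', 'high')
print(allocate("Cell B", "edge"))    # ('f2', 'high')
print(allocate("Cell B", "center"))  # ('f3', 'low') - reused, but low power
```

The key property is that the two cell-edge UEs never share a resource block at high power, while the resource is still reused at the cell center.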
Interference Information used in ICIC
Basic ICIC Behavior
Networks consisting of the same type of cells (e.g. existing macro networks), as presented
in the previous post, are called homogeneous networks while ones with different types of
cells are called heterogeneous networks (HetNet). So, HetNet is a network where small cells
are deployed within a macro cell coverage. From Release 10 on, HetNet environments are
also considered when discussing LTE-A standards.
Figure 1. Homogeneous network and heterogeneous network (HetNet)
■ What is eICIC?
eICIC is an interference control technology defined in 3GPP release 10. It is an advanced
version of ICIC, previously defined in 3GPP release 8, evolved to support HetNet
environments. To prevent inter-cell interference, ICIC allows cell-edge UEs in neighbor cells
to use different frequency ranges (RBs or sub-carriers). On the other hand, eICIC allows
them to use different time ranges (subframes) for the same purpose. That is, with eICIC, a
macro cell and small cells that share a co-channel can use radio resources in different time
ranges (i.e. subframes).
Two main features of eICIC are: Almost Blank Subframe (ABS) technology defined in
Release 10 and Cell Range Expansion (CRE) technology defined in Release 11. ABS can
prevent cell-edge UEs in small cells from being interfered with by the neighboring macro cell
by having both cells still use the same radio resources, but in different time ranges
(subframes). CRE expands the coverage of a small cell so that more UEs near cell edge can
access the small cell. In this post, we will discuss ABS only.
In a homogeneous network, this is not a big problem: there isn't much difference in Tx power between neighbor cells' antennas, and hence control channels cause no significant inter-cell interference at the cell edge. In HetNet, on the other hand, where a macro cell has much higher Tx power than a small cell, the small cell's control channel is inevitably interfered with by the macro cell's, making ICIC applied to the data channel ineffective.
Figure 4. Issues with ICIC in HetNet: Interference by macro cell's control channel
■ eICIC Concept: Problems with ICIC solved by having cells use radio resources in
different time
■ eICIC Operation: Delivering ABS pattern information over X2 interface
Increased radio network capacity can be achieved by improving spectral efficiency. Spectral efficiency (bit/sec/Hz) is the transmission rate measured in bps per Hz. The higher the spectral efficiency, the more data can be transmitted with the same amount of bandwidth. By default, LTE networks provide broadband radio links by obtaining higher spectral efficiency through the use of at least 2x2 MIMO antennas. At cell centers, installing more antennas at a base station improves spectral efficiency, leading to higher UE throughputs. At cell edges, however, only insignificant throughput improvement can be expected this way, so we need another way to gain the same effect there.
■ Definition of CoMP
Coordinated Multi-Point (CoMP) is a new inter-cell cooperation technology specifically aiming
to enhance throughputs of UEs at cell edge. CoMP mitigates inter-cell interference and
increases throughputs of a UE at cell edge by allowing not only the UE's serving cell, but
also other cell(s) to communicate with the UE, through cooperation with one another.
Traditionally, a UE accesses only one cell (serving cell) for communication. But, a CoMP-
enabled UE can communicate with more than one cell located in different points, and this
group of cells works as a virtual MIMO system. Cells that are in charge of directly or
indirectly transmitting data to UE are called "CoMP cooperating cells" ("CoMP cooperating
set" in 3GPP terms*), and specifically those actually responsible for transmitting data to UE
are called "CoMP transmission cell(s)" ("CoMP transmission points" in 3GPP terms *).
In summary, CoMP is an inter-cell cooperation technology that enables more than one
transmission cell to communicate with a UE to achieve better throughputs at cell edge areas
by reducing inter-cell interference. CoMP cooperating cells share channel information of a
UE, and based on the information, transmission cell(s) are decided.
ICIC and eICIC, both aiming to reduce inter-cell interference, can help UEs at cell edge to
communicate, but neither can substantially improve their throughputs. That is because they
mitigate interference by restricting radio resource usage in the frequency domain (ICIC) or
time domain (eICIC), and because interference information is shared between neighbor cells
on a relatively long-term basis. As a result, fast-changing channel conditions of a UE (e.g.
when the UE is traveling fast, or entering a shadowed area) are not reflected in inter-cell
cooperation promptly, inevitably impeding dynamic allocation of resources.
CoMP, recognized as the most advanced inter-cell cooperation technology so far, was first
standardized in Release 11, and further standardization is still taking place in Release 12. It
uses radio resources not just in frequency/time domain, but also in spatial domain, to
enhance spectral efficiency. That is, it performs beamforming using a smart antenna, or
works as a virtual MIMO system. With CoMP, cooperating cells can share UE's channel
information every time scheduling is performed, and hence UE's instantaneous channel
conditions can be reflected in time. This sharing makes joint scheduling possible. CoMP can
be used either in a homogeneous or heterogeneous network (HetNet), and features various
types of inter-cell cooperation: CS, CB, JT, and DPS (see CoMP Types below).
Base stations give their UEs an instruction on how and which cell's CSI are to be measured
by sending a CSI-RS (CSI Reference Signal) configuration message. Upon this instruction,
UEs measure CSI and report to their serving cells. In general, CSI information includes
Channel Quality Indicator (CQI), Precoding Matrix Indicator (PMI), and Rank Indicator (RI).
CQI: An indicator of channel quality, reported as the highest modulation and coding rate
(MCR) value that satisfies the condition 'channel block error rate (BLER) < 0.1'. It is a
4-bit value ranging from 0 to 15; the better the channel quality, the higher the MCR used.
Subband CQIs indicate the quality of specific frequency ranges (subbands), while wideband
CQIs indicate that of the entire channel bandwidth.
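The reporting rule above can be sketched as "pick the highest CQI index whose estimated BLER stays below 10%". The SNR thresholds below are illustrative assumptions standing in for real link-level BLER curves, not values from the 3GPP specifications:

```python
# Sketch of CQI selection: report the highest CQI index (0..15) whose
# estimated BLER stays below 0.1. The per-CQI SNR thresholds here are
# hypothetical placeholders for real link-level BLER curves.
CQI_SNR_THRESHOLDS_DB = [-100, -6.7, -4.7, -2.3, 0.2, 2.4, 4.3,
                         5.9, 8.1, 10.3, 11.7, 14.1, 16.3, 18.7, 21.0, 22.7]

def select_cqi(snr_db: float) -> int:
    """Return the highest CQI index whose SNR threshold is met."""
    cqi = 0
    for index, threshold in enumerate(CQI_SNR_THRESHOLDS_DB):
        if snr_db >= threshold:
            cqi = index
    return cqi

print(select_cqi(-10.0))  # 0  -> channel too poor for any MCR
print(select_cqi(6.0))    # 7  -> mid-range modulation and coding rate
print(select_cqi(25.0))   # 15 -> best channel quality, highest MCR
```

Because the thresholds are sorted in increasing order, the loop simply keeps the last index whose threshold the measured SNR clears.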
PMI: Base stations deliver more than one data stream (layer) through multiple Tx antennas.
The precoding matrix defines how individual data streams (layers) are mapped to antennas.
To derive the precoding matrix, UEs obtain channel information by measuring the channel
quality of each DL antenna. Because feeding back all of this channel information would
significantly increase overhead, a codebook is generally pre-configured at both base
stations and UEs, and a UE sends only the index of the best-matching precoding matrix in
this codebook. The base station, referring to the reported index, calculates its own
precoding matrix and uses the optimal value.
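The codebook feedback idea can be sketched as follows. The four precoders match the commonly cited 2-antenna rank-1 LTE codebook entries ([1,1], [1,−1], [1,j], [1,−j], each scaled by 1/√2); the channel matrix is a made-up example, not a measurement:

```python
import numpy as np

# Codebook-based PMI selection sketch for 2 Tx antennas, rank 1.
CODEBOOK = [np.array([1, 1]) / np.sqrt(2),
            np.array([1, -1]) / np.sqrt(2),
            np.array([1, 1j]) / np.sqrt(2),
            np.array([1, -1j]) / np.sqrt(2)]

def select_pmi(H: np.ndarray) -> int:
    """Report the index of the precoder maximizing received power ||H w||^2."""
    gains = [np.linalg.norm(H @ w) ** 2 for w in CODEBOOK]
    return int(np.argmax(gains))

# Example 2x2 channel (rows: Rx antennas, columns: Tx antennas).
H = np.array([[1.0 + 0.2j, 0.9 - 0.1j],
              [0.3 + 0.5j, 0.4 + 0.4j]])
pmi = select_pmi(H)
print(pmi)  # a 2-bit index is fed back instead of the full channel matrix
```

The point of the codebook is visible in the last line: the UE feeds back a 2-bit index rather than four complex channel coefficients.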
RI: Indicates the number of data streams being delivered in DL. For instance, with 2x2
MIMO, this value is 1 in the transmit diversity case, where the two antennas at a base
station send the same data stream, and 2 in the spatial multiplexing case, where the
antennas send different data streams.
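One way a UE can decide which rank to report is from the singular values of the measured channel matrix: a well-conditioned channel supports two streams, a strongly correlated one only one. A minimal sketch, where the 10 dB threshold is an arbitrary illustrative choice:

```python
import numpy as np

# Rank indicator sketch: count singular values of the channel matrix H
# that are within threshold_db of the strongest one. The 10 dB cutoff
# is an illustrative assumption, not a standardized rule.

def select_ri(H: np.ndarray, threshold_db: float = 10.0) -> int:
    """Count usable spatial streams from the singular value spread of H."""
    s = np.linalg.svd(H, compute_uv=False)
    ratio_db = 20 * np.log10(s / s[0])
    return int(np.sum(ratio_db > -threshold_db))

# Strongly correlated 2x2 channel: only one useful stream -> RI = 1.
H_corr = np.array([[1.0, 1.0], [1.0, 1.001]])
# Well-conditioned channel: two independent streams -> RI = 2.
H_good = np.array([[1.0, 0.1], [0.1, 1.0]])
print(select_ri(H_corr), select_ri(H_good))  # 1 2
```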
1. Coordinated Scheduling (CS)
In Figure 1, A1 and B1 at cell edge, each allocated a different frequency resource (f3
and f2), avoid interference and hence achieve improved throughputs. Both UEs still receive
the signals sent to the other UE; these do not cause co-channel interference, but may
still degrade reception of their own signals.
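The scheduling decision above can be sketched as cooperating cells sharing which frequency resources their edge UEs occupy, so each cell picks one its neighbor has not taken. Cell and UE names follow Figure 1; the data structure is purely illustrative:

```python
# Coordinated scheduling (CS) sketch: each cell assigns its cell-edge UE
# a frequency resource not already claimed by a cooperating neighbor.

def coordinated_schedule(cells):
    """Map each edge UE to (cell, frequency), avoiding neighbors' choices."""
    taken = set()
    assignment = {}
    for cell, (edge_ue, preferred_freqs) in cells.items():
        freq = next(f for f in preferred_freqs if f not in taken)
        assignment[edge_ue] = (cell, freq)
        taken.add(freq)
    return assignment

cells = {
    "Cell A": ("A1", ["f3", "f2", "f1"]),
    "Cell B": ("B1", ["f3", "f2", "f1"]),  # would also pick f3 without CS
}
print(coordinated_schedule(cells))
# A1 gets f3, B1 falls back to f2 -> no co-channel interference at cell edge
```

Without the shared `taken` set, both cells would greedily allocate f3 and the two edge UEs would interfere with each other.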
2. Coordinated Beamforming (CB)
CB CoMP allocates different spatial resources (beam patterns) to UEs at cell edge by using
smart antenna technology. Without CS, A1 and B1 may end up being allocated the same
frequency resource (f3 in Figure 2). CB CoMP allows Cell A and Cell B to cooperate with each
other, and allocate different spatial resources (beam pattern 1, beam pattern 2) to A1 and
B1 at cell edge. These two cells can prevent interference by allocating main beam to their
own UE, and null beam to the other neighbor UE.
Generally, CB is used together with CS more often than alone. Figure 3 shows a case where
CS and CB are used together: Cell A and Cell B cooperate to allocate different frequency
resources (f3, f2) and different spatial resources (beam pattern 1, beam pattern 2) to A1
and B1, respectively. This combination is quite effective: CS alone already handles most
interference issues, and CB further improves reception quality. Used with CB, CS achieves
better cell-edge throughputs because CB helps A1 and B1 avoid signals sent to the other UE
and better receive those destined for themselves.
Figure 3. CS/CB
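The "main beam to own UE, null beam to neighbor UE" behavior of CB can be sketched with a simple zero-forcing projection: the transmit weights are chosen orthogonal to the neighbor UE's channel. The channel vectors are made-up examples:

```python
import numpy as np

# Null-steering sketch for CB: Cell A picks beam weights w orthogonal to
# neighbor UE B1's channel (null beam) while keeping gain toward its own
# UE A1 (main beam). This is plain zero-forcing, not a specific product's
# smart-antenna algorithm.

def cb_weights(h_own: np.ndarray, h_neighbor: np.ndarray) -> np.ndarray:
    """Project h_own onto the null space of h_neighbor and normalize."""
    h_n = h_neighbor / np.linalg.norm(h_neighbor)
    w = h_own - (h_n.conj() @ h_own) * h_n
    return w / np.linalg.norm(w)

h_a1 = np.array([1.0 + 0.5j, 0.2 - 0.3j])   # Cell A -> its own UE A1
h_b1 = np.array([0.4 - 0.2j, 0.8 + 0.1j])   # Cell A -> neighbor UE B1
w = cb_weights(h_a1, h_b1)
print(abs(h_a1.conj() @ w))  # useful gain toward A1 (clearly nonzero)
print(abs(h_b1.conj() @ w))  # leakage toward B1 (numerically zero: nulled)
```

The price of the null is a small loss of gain toward the served UE, which is why CB pays off mainly when a strong edge interferer actually exists.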
Additional information and later updates can be found on our sister website Electronics
Notes: LTE channels, bands & spectrum.
TDD LTE BANDS & FREQUENCIES

| Band | Allocation (MHz) | Width of band (MHz) |
|------|------------------|---------------------|
| 33   | 1900 – 1920      | 20                  |
| 34   | 2010 – 2025      | 15                  |
| 35   | 1850 – 1910      | 60                  |
| 36   | 1930 – 1990      | 60                  |
| 37   | 1910 – 1930      | 20                  |
| 38   | 2570 – 2620      | 50                  |
| 39   | 1880 – 1920      | 40                  |
There are regular additions to the LTE frequency bands / LTE spectrum allocations as a result of
negotiations at ITU regulatory meetings. These LTE allocations result in part from the digital
dividend, and in part from the pressure caused by the ever-growing need for mobile communications.
Many of the new LTE spectrum allocations are relatively small, often 10 – 20 MHz in bandwidth, and
this is a cause for concern: with LTE-Advanced needing bandwidths of up to 100 MHz, carrier
aggregation over a wide set of frequencies may be needed, and this has been recognised as a
significant technological problem.
| Band | Duplex mode[A 1] | ƒ (MHz) | Common name | Subset of band | Uplink[A 2] (MHz) | Downlink[A 3] (MHz) | Duplex spacing (MHz) | Channel bandwidths (MHz) |
|------|------------------|---------|-------------|----------------|--------------------|----------------------|----------------------|---------------------------|
| 2    | FDD | 1900 | PCS[A 4] | 25 | 1850 – 1910 | 1930 – 1990 | 80 | 1.4, 3, 5, 10, 15, 20 |
| 3    | FDD | 1800 | DCS | | 1710 – 1785 | 1805 – 1880 | 95 | 1.4, 3, 5, 10, 15, 20 |
| 4    | FDD | 1700 | AWS-1[A 4] | 66 | 1710 – 1755 | 2110 – 2155 | 400 | 1.4, 3, 5, 10, 15, 20 |
| 5    | FDD | 850 | Cellular | 26 | 824 – 849 | 869 – 894 | 45 | 1.4, 3, 5, 10 |
| 11   | FDD | 1500 | Lower PDC | 74 | 1427.9 – 1447.9 | 1475.9 – 1495.9 | 48 | 5, 10 |
| 13   | FDD | 700 | Upper SMH[A 7] | | 777 – 787 | 746 – 756 | −31 | 5, 10 |
| 14   | FDD | 700 | Upper SMH[A 8] | | 788 – 798 | 758 – 768 | −30 | 5, 10 |
| 17   | FDD | 700 | Lower SMH[A 9] | 12, 85 | 704 – 716 | 734 – 746 | 30 | 5, 10 |
| 20   | FDD | 800 | Digital Dividend (EU) | | 832 – 862 | 791 – 821 | −41 | 5, 10, 15, 20 |
| 21   | FDD | 1500 | Upper PDC | 74 | 1447.9 – 1462.9 | 1495.9 – 1510.9 | 48 | 5, 10, 15 |
| 24   | FDD | 1600 | Upper L-Band (US) | | 1626.5 – 1660.5 | 1525 – 1559 | −101.5 | 5, 10 |
| 25   | FDD | 1900 | Extended PCS[A 10] | | 1850 – 1915 | 1930 – 1995 | 80 | 1.4, 3, 5, 10, 15, 20 |
| 27   | FDD | 800 | SMR | | 807 – 824 | 852 – 869 | 45 | 1.4, 3, 5, 10 |
| 28   | FDD | 700 | APT | | 703 – 748 | 758 – 803 | 55 | 3, 5, 10, 15, 20 |
| 29   | SDL[A 11] | 700 | Lower SMH[A 12] | | N/A | 717 – 728 | N/A | 3, 5, 10 |
| 30   | FDD | 2300 | WCS[A 13] | | 2305 – 2315 | 2350 – 2360 | 45 | 5, 10 |
| 31   | FDD | 450 | NMT | | 452.5 – 457.5 | 462.5 – 467.5 | 10 | 1.4, 3, 5 |
| 33   | TDD | 2100 | IMT | 39 | 1900 – 1920 | 1900 – 1920 | N/A | 5, 10, 15, 20 |
| 35   | TDD | 1900 | PCS (UL) | | 1850 – 1910 | 1850 – 1910 | N/A | 1.4, 3, 5, 10, 15, 20 |
| 36   | TDD | 1900 | PCS (DL) | | 1930 – 1990 | 1930 – 1990 | N/A | 1.4, 3, 5, 10, 15, 20 |
| 37   | TDD | 1900 | PCS[A 14] | | 1910 – 1930 | 1910 – 1930 | N/A | 5, 10, 15, 20 |
| 38   | TDD | 2600 | IMT-E[A 14] | 41 | 2570 – 2620 | 2570 – 2620 | N/A | 5, 10, 15, 20 |
| 40   | TDD | 2300 | S-Band | | 2300 – 2400 | 2300 – 2400 | N/A | 5, 10, 15, 20 |
| 41   | TDD | 2500 | BRS | | 2496 – 2690 | 2496 – 2690 | N/A | 5, 10, 15, 20 |
| 43   | TDD | 3700 | S-Band | | 3600 – 3800 | 3600 – 3800 | N/A | 5, 10, 15, 20 |
| 44   | TDD | 700 | APT | | 703 – 803 | 703 – 803 | N/A | 3, 5, 10, 15, 20 |
| 48   | TDD | 3500 | CBRS (US) | | 3550 – 3700 | 3550 – 3700 | N/A | 5, 10, 15, 20 |
| 50   | TDD | 1500 | L-Band (EU) | | 1432 – 1517 | 1432 – 1517 | N/A | 3, 5, 10, 15, 20 |
| 51   | TDD | 1500 | Extended L-Band (EU) | | 1427 – 1432 | 1427 – 1432 | N/A | 3, 5 |
| 52   | TDD | 3300 | S-Band | | 3300 – 3400 | 3300 – 3400 | N/A | 5, 10, 15, 20 |
| 53   | TDD | 2400 | GlobalStar | | 2483.5 – 2495 | 2483.5 – 2495 | N/A | 1.4, 3, 5, 10 |
| 66   | FDD | 1700 | Extended AWS (AWS-1–3)[A 17] | | 1710 – 1780 | 2110 – 2200[2] | 400 | 1.4, 3, 5, 10, 15, 20 |
| 67   | SDL[A 11] | 700 | EU 700 | | N/A | 738 – 758 | N/A | 5, 10, 15, 20 |
| 68   | FDD | 700 | ME 700 | | 698 – 728 | 753 – 783 | 55 | 5, 10, 15 |
| 69   | SDL[A 11] | 2600 | IMT-E[A 14] | | N/A | 2570 – 2620 | N/A | 5 |
| 71   | FDD | 600 | Digital Dividend (US) | | 663 – 698 | 617 – 652 | −46 | 5, 10, 15, 20 |
| 72   | FDD | 450 | PMR (EU) | | 451 – 456 | 461 – 466 | 10 | 1.4, 3, 5 |
| 73   | FDD | 450 | PMR (APT) | | 450 – 455 | 460 – 465 | 10 | 1.4, 3, 5 |
| 74   | FDD | 1500 | Lower L-Band (US) | | 1427 – 1470 | 1475 – 1518 | 48 | 1.4, 3, 5, 10, 15, 20 |
| 75   | SDL[A 11] | 1500 | L-Band (EU) | | N/A | 1432 – 1517 | N/A | 5, 10, 15, 20 |
| 76   | SDL[A 11] | 1500 | Extended L-Band (EU) | | N/A | 1427 – 1432 | N/A | 5 |
| 85   | FDD | 700 | Extended Lower SMH[A 6] | | 698 – 716 | 728 – 746 | 30 | 5, 10 |
| 252  | SDL[A 11] | 5200 | U-NII-1[A 18] | | N/A | 5150 – 5250 | N/A | 20 |
| 255  | SDL[A 11] | 5800 | U-NII-3[A 18] | | N/A | 5725 – 5850 | N/A | 20 |
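A quick way to read the band table is that, for FDD bands, the duplex spacing column is simply the offset between the downlink and uplink ranges (DL low minus UL low), negative when the downlink sits below the uplink. A small sanity check over a few rows reproduced from the table:

```python
# Consistency check for the FDD rows above: duplex spacing equals
# downlink low edge minus uplink low edge. Only a few bands are listed.
FDD_BANDS = {
    # band: (uplink low MHz, downlink low MHz, duplex spacing MHz)
    2:  (1850.0, 1930.0, 80.0),
    4:  (1710.0, 2110.0, 400.0),
    13: (777.0, 746.0, -31.0),   # negative: downlink below uplink
    20: (832.0, 791.0, -41.0),
    71: (663.0, 617.0, -46.0),
}

for band, (ul_low, dl_low, spacing) in FDD_BANDS.items():
    assert dl_low - ul_low == spacing, f"band {band} spacing mismatch"
print("all duplex spacings consistent")
```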