
RF - SNR vs SINAD

SNR / SINAD / SINR

There are several similar but slightly different terms that indicate the ratio between the wanted signal and the unwanted noise. These terms confuse almost everybody. I will try to explain the concept behind each of them in as practical a sense as possible. In many cases, it becomes much easier to understand them once you know for what purpose (in what context) each one is used.

SNR (Signal to Noise Ratio)

SNR stands for 'Signal to Noise Ratio'. It is pretty much self-explanatory and does not need much explanation. It is simply the ratio of signal power to noise power:

SNR = P_signal / P_noise, or in dB, SNR(dB) = 10 log10(P_signal / P_noise)

SNR can be represented in a graphical form as shown below.

SNR can be either a positive or a negative value when you represent it in dB. Negative SNR means that the signal power is lower than the noise power. You may think communication would be impossible under negative SNR, but in reality there are communication systems (technologies) designed to work mostly in such conditions (e.g., CDMA, WCDMA).

Why SNR is important ?


It is because SNR is one of the most important indicators of signal quality. You may think signal power is the most important factor for signal quality, but in theory signal power alone does not mean anything in terms of signal quality, i.e., it does not help you predict how many errors will occur in your communication system. Even if your signal power is very strong, you will not get good communication results (low or no errors) if the noise power is high as well. Conversely, even if the signal power is very low, you will get good communication results if the noise power is much lower than the signal power. This is why most communication textbooks and most measurement procedures use SNR, rather than absolute signal power, as the evaluation/test criterion.

Now let's think of how to measure SNR.


You can get a rough estimate of the SNR of a signal using a spectrum analyzer, but it may not be as easy as it sounds to measure SNR accurately, since ideally this measurement should be done at an RBW of 1 Hz.
However, if you have to measure SNR inside a communication device (not with test equipment), you cannot use the spectrum analyzer method. In that case, the device uses a fairly complicated signal processing algorithm to estimate SNR, and the method itself tends to differ from one communication technology to another.
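To see why the RBW matters: the noise power captured by the analyzer scales with the measurement bandwidth, so a noise-floor reading taken at some RBW has to be normalized to a 1 Hz bandwidth before it can be compared against the signal. A rough sketch of that correction (my own helper function, assuming a flat, white noise floor across the RBW):

```python
import math

def noise_density_dbm_per_hz(noise_floor_dbm, rbw_hz):
    """Convert a noise-floor reading taken at a given RBW into dBm/Hz,
    assuming the noise is flat (white) across that RBW."""
    return noise_floor_dbm - 10.0 * math.log10(rbw_hz)

# A -70 dBm noise floor measured with a 100 kHz RBW
# corresponds to a -120 dBm/Hz noise density.
print(noise_density_dbm_per_hz(-70.0, 100e3))  # -120.0
```

This is only valid for noise-like (flat) spectra; for discrete spurs or non-white noise the simple 10·log10(RBW) correction does not apply.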

How does SNR impact the performance of a system (especially the receiver of a transmission system)? I think the following plots give an intuitive understanding of this. As you see, as SNR decreases the quality of the signal gets poorer (higher noise level). As a result, the Bit Error Rate (BER) increases and sensitivity decreases. (Note: the noise added in this example is AWGN. See the AWGN page for details of the relationship between SNR and AWGN.)

In the following plots, the red dots indicate the ideal constellation with almost no error, and the black dots represent the statistical location of each data point with noise. You can say that the farther a black dot is from the red dots, the higher the probability of bit errors. In this example, you see three cases of a QAM constellation, each exposed to errors at a different SNR. You will notice that as SNR goes lower, the constellation spread goes wider. It means that, with the same modulation scheme, as SNR goes lower, the probability of error goes higher. If you are not familiar with this kind of concept, please give yourself some more time until you understand it.
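The widening of the constellation spread with decreasing SNR can be reproduced in a few lines of simulation. This is a sketch under my own simplifying assumptions (an unnormalized 16QAM grid and per-symbol complex Gaussian noise), not the exact setup behind the plots:

```python
import math, random

def add_awgn(symbols, snr_db):
    """Add complex white Gaussian noise scaled so the average
    signal-to-noise power ratio equals snr_db."""
    sig_power = sum(abs(s) ** 2 for s in symbols) / len(symbols)
    noise_power = sig_power / (10 ** (snr_db / 10.0))
    sigma = math.sqrt(noise_power / 2.0)  # std per real/imag component
    return [s + complex(random.gauss(0, sigma), random.gauss(0, sigma))
            for s in symbols]

random.seed(0)
# A simple 16QAM grid: I and Q each taken from {-3, -1, 1, 3}
grid = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]
tx = [random.choice(grid) for _ in range(2000)]

spreads = {}
for snr in (25, 15, 5):
    rx = add_awgn(tx, snr)
    spreads[snr] = sum(abs(r - t) for r, t in zip(rx, tx)) / len(tx)
    print(f"SNR {snr:2d} dB -> average distance from ideal point: {spreads[snr]:.2f}")
```

Plotting `rx` against the ideal grid at each SNR reproduces the "cloud around each red dot" picture: the average distance from the ideal point grows as SNR drops.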

Now let's describe the relationship between SNR and Bit Error Rate in a more quantitative way. If you have read articles, papers, or theses about communication technology (especially anything related to transmitter/receiver design), you will have seen plots like the one shown at the bottom right. However, if you are new to this area, the interpretation of these plots may not be clear.
The following constellations are based on the LTE physical layer specification. The exact quantitative relation between SNR and BER varies with each communication system design, but the overall logic explained here holds true for any system.

First, take a look at the series of constellations in the top track. You see cases with different modulations (BPSK, QPSK, 16QAM, 64QAM, 256QAM) but the same SNR. You will notice that even with the same SNR, the probability of error gets higher as the modulation depth increases. I hope this sounds clear to you. This top track corresponds to a single point on each curve in the plot at the bottom, as indicated by the green arrows. Give yourself some more time until you clearly understand this.

Now let's decrease SNR by 5 dB. On the top track, you will notice that the spread of errors on the constellation gets wider, and the Bit Error Rate increases on the plot.
Decrease SNR by another 5 dB, and the spread on the constellation gets even wider while the Bit Error Rate increases further on the plot.
Decrease SNR by yet another 5 dB, and the trend continues: a still wider constellation spread and a still higher Bit Error Rate on the plot.
Do you see the trend in this example? Even with exactly the same constellation, the Bit Error Rate increases or decreases based on SNR. Many people tend to think that the error rate is determined by transmitter power and receiver power, but in reality the absolute power is not what matters. What really matters is SNR. However, in practice many people, including me, take transmitter or receiver power as an indirect indicator of SNR, based on the 'BIG ASSUMPTION' that the noise level is known (at least roughly) and does not change when you increase or decrease the power. If this BIG ASSUMPTION holds true, then increasing the transmitter power should give a better SNR than a low-power case, and a higher received power should give a better SNR than a lower one. But don't blindly apply this rule to any accurate analysis or troubleshooting. If you are in a situation where you need a very accurate bit-error analysis, you need to check the SNR of every component on the signal path. I know this is a huge job; it is one of the reasons why calibrating high-accuracy test equipment (e.g., a conformance test system) takes such a long time and requires a lot of high-end test equipment.

As you saw above, SNR is tightly related to BER (Bit Error Rate). You may have noticed a general trend as follows:
i) At the same modulation depth, you get high BER (poor performance) at low SNR and low BER (good performance) at high SNR.
ii) At the same SNR, you get high BER (poor performance) at high modulation depth and low BER (good performance) at low modulation depth.
However, in modern communication systems various kinds of channel coding and error correction technology are used to correct a certain degree of bit errors. So if you measure the error rate after error correction, you may see a much lower error rate than without it. Usually the error rate after error correction is measured as a parameter called BLER (BLock Error Rate). However, even with this kind of error correction, you cannot fix all the errors, so the general trend still holds for BLER measurements:
i) At the same modulation depth, you get high BLER (poor performance) at low SNR and low BLER (good performance) at high SNR.
ii) At the same SNR, you get high BLER (poor performance) at high modulation depth and low BLER (good performance) at low modulation depth.
The exact correlation between SNR and BLER varies with the channel coding and error correction used. The following graph shows a good example of SNR vs BLER for the LTE PDSCH (see Ref [2] for details; this data is for a system supporting only up to 64QAM, so you would see different plots for a system supporting 256QAM).
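The qualitative trend above (BER rising as SNR falls) can be checked with a small Monte-Carlo simulation. The sketch below simulates BPSK over AWGN (the simplest case; deeper modulations shift the curve to the right, as described above) and compares it against the closed-form BPSK result. The function names and parameters are my own:

```python
import math, random

def bpsk_ber(snr_db, n_bits=100_000, seed=0):
    """Monte-Carlo bit error rate of BPSK over AWGN at a given Eb/N0 (dB)."""
    rng = random.Random(seed)
    # Unit-energy bits (+/-1): Eb/N0 = 1 / (2 * sigma^2)
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10.0)))
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0.0, sigma)
        if (rx >= 0) != (bit > 0):  # hard decision at zero threshold
            errors += 1
    return errors / n_bits

def bpsk_ber_theory(snr_db):
    """Closed-form BPSK BER: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (snr_db / 10.0)))

for snr in (0, 4, 8):
    print(f"SNR {snr} dB: simulated {bpsk_ber(snr):.4f}, "
          f"theory {bpsk_ber_theory(snr):.4f}")
```

Running this reproduces the waterfall shape of the BER-vs-SNR curves: each 4 dB step cuts the error rate by far more than a constant factor.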

SINAD (Signal to Noise And Distortion Ratio)

Similar to SNR, there is another indicator called SINAD. It is defined as shown below: the ratio of the total power (wanted + unwanted) to the unwanted power. Since the numerator is the total power, the value in dB is always positive.
In most RF areas we use SNR more frequently, while in some areas like audio signal analysis SINAD tends to be used more frequently.

We often get confused by SNR vs SINAD and have difficulty understanding the difference between them. It is well explained in Reference [1], as quoted below.

Signal-to-noise ratio (SNR, or sometimes called SNR-without-harmonics) is calculated from the FFT data the same as SINAD, except that the signal harmonics are excluded from the calculation, leaving only the noise terms. In practice, it is only necessary to exclude the first 5 harmonics, since they dominate. The SNR plot will degrade at high input frequencies, but generally not as rapidly as SINAD because of the exclusion of the harmonic terms.

As stated above, the main difference is whether 'distortion' is included in the calculation. Distortion is more intuitively understood in the time domain; if you convert a distorted signal into the frequency domain, the distortion appears in the form of harmonics. So, in frequency-domain terms, the main difference between SNR and SINAD is whether the harmonics are included in the calculation or not.
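This difference is easy to express on a per-bin power spectrum. The sketch below is my own simplification: it takes a list of |FFT|^2 bin powers, uses the signal-to-(noise+distortion) form of SINAD (at a reasonable SNR this is nearly identical to the total-power form given above), excludes the first 5 harmonics for the SNR figure as in the quote, and assumes coherent sampling so harmonics land on integer multiples of the fundamental's bin:

```python
import math

def snr_and_sinad_db(bin_powers, fund_bin, n_harmonics=5):
    """Return (snr_db, sinad_db) from per-bin power data.
    DC (bin 0) is excluded; harmonics are assumed to sit at integer
    multiples of the fundamental's bin (coherent sampling)."""
    signal = bin_powers[fund_bin]
    harmonic_bins = {fund_bin * k for k in range(2, 2 + n_harmonics)
                     if fund_bin * k < len(bin_powers)}
    noise_plus_dist = sum(p for i, p in enumerate(bin_powers)
                          if i not in (0, fund_bin))
    noise_only = sum(p for i, p in enumerate(bin_powers)
                     if i not in (0, fund_bin) and i not in harmonic_bins)
    return (10 * math.log10(signal / noise_only),
            10 * math.log10(signal / noise_plus_dist))

# Toy spectrum: fundamental at bin 10, distortion at bins 20/30, flat noise
spectrum = [0.0] * 100
spectrum[10] = 100.0   # fundamental
spectrum[20] = 1.0     # 2nd harmonic (distortion)
spectrum[30] = 0.5     # 3rd harmonic (distortion)
for i in range(1, 100):
    spectrum[i] += 0.001  # noise floor
snr, sinad = snr_and_sinad_db(spectrum, fund_bin=10)
print(f"SNR   = {snr:.1f} dB")   # harmonics excluded -> higher figure
print(f"SINAD = {sinad:.1f} dB") # harmonics included -> lower figure
```

With the same data, SNR always comes out at least as high as SINAD, since it removes the harmonic energy from the denominator.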

SINR (Signal to Interference plus Noise Ratio)

SINR stands for Signal to Interference plus Noise Ratio, and the definition can be illustrated as below (I hope this single picture explains everything). Simply put, SINR is the ratio of the desired signal power to the unwanted power, where the unwanted power comprises all the external interference plus the internally generated noise.
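As a tiny numeric sketch (function name and values are mine), SINR just extends the SNR ratio by adding all interferer powers into the denominator:

```python
import math

def sinr_db(signal_power, interferer_powers, noise_power):
    """SINR = S / (sum of interference + noise), expressed in dB."""
    return 10.0 * math.log10(
        signal_power / (sum(interferer_powers) + noise_power))

# Desired signal 1 mW, two interferers of 0.05 mW each, 0.01 mW noise
print(round(sinr_db(1.0, [0.05, 0.05], 0.01), 2))  # 9.59
```

With no interferers at all, the expression collapses back to plain SNR.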
Example 1 : SNR (SINR) vs Throughput in an LTE Live Network

The following plot is from data captured by the Azenqos Drive Test tool (AZQ Android). The plot was generated automatically by the AZQ reporting tool; I just did some cosmetic touch-ups on the chart.
This is a real measurement showing the correlation between SINR and throughput. As you see, as SNR (SINR) goes higher, throughput increases exponentially; in other words, as SNR decreases, throughput decreases exponentially. If the network did not change the code rate (i.e., MCS), the throughput decrease would be due to decoding failures at the receiver (i.e., at the UE). In a real network, however, the UE periodically reports CQI to the eNB, and the eNB changes the code rate accordingly (i.e., decreasing the MCS as the CQI value gets lower, which results in a smaller transport block size), so this throughput change is mainly due to the smaller transport block size.
Antenna Port

I think one of the most confusing concepts in the LTE physical layer is the concept of 'antenna port'. The official definition of antenna port goes as follows. (To be honest, this official definition does not make much sense to me by itself.)

36.211 5.2.1 Resource grid (Uplink) says :

An antenna port is defined such that the channel over which a symbol on the antenna port
is conveyed can be inferred from the channel over which another symbol on the same
antenna port is conveyed. There is one resource grid per antenna port. The antenna ports
used for transmission of a physical channel or signal depends on the number of antenna
ports configured for the physical channel or signal as shown in Table 5.2.1-1.

< 36.211 - Table 5.2.1-1: Antenna ports used for different physical channels and signals >

36.211 6.2.1 Resource grid (Downlink) says :


Reference Signal Type             Associated Antenna Ports
Cell-specific Reference Signal    p = 0; p ∈ {0,1}; p ∈ {0,1,2,3}
MBSFN Reference Signal            p = 4
UE-specific Reference Signal      p = 5; p = 7; p = 8; or one/several of p ∈ {7,8,9,10,11,12,13,14}
DMRS for EPDCCH                   p ∈ {107,108,109,110}
Positioning Reference Signal      p = 6
CSI Reference Signal              p = 15; p = 15,16; p = 15,...,18; p = 15,...,22; p = 15,...,26; p = 15,...,30

An antenna port is defined such that the channel over which a symbol on the antenna port
is conveyed can be inferred from the channel over which another symbol on the same
antenna port is conveyed. For MBSFN reference signals, positioning reference signals, UE-
specific reference signals associated with PDSCH and demodulation reference signals
associated with EPDCCH, there are limits given below within which the channel can be
inferred from one symbol to another symbol on the same antenna port. There is one
resource grid per antenna port. The set of antenna ports supported depends on the
reference signal configuration in the cell:

NOTE : As you see here, there are several different port combinations for a specific reference signal type. Which combination is used is determined by the specific antenna configuration (i.e., transmission mode). For further details on the antenna port combinations for each transmission mode and reference signal, refer to the Transmission Mode page and the Reference Signal (Downlink) page.

Simply put,
- An antenna port is a logical concept, not a physical one (meaning 'antenna port' is not the same as 'physical antenna').
- Each antenna port represents a specific channel model.
- The channel seen by a specific antenna port can be estimated using the reference signal assigned to that port (this is why each antenna port has its own reference signal).

To be honest, none of the verbal descriptions of antenna port was clear to me for a long time. I am the kind of person who has huge difficulties understanding things I cannot visualize (i.e., form some kind of visual image of). Just to give you another angle on the concept of antenna port, I will show exactly at which point in the physical layer processing chain antenna ports are introduced. As illustrated below, antenna ports are first introduced in the precoding stage, and each antenna port generates its own resource grid.
Now let's take a look at some practical examples of how each antenna port is associated with a resource grid. These examples show all the resource grids that can be observed at point (C) in the physical layer processing chain shown above. In these examples, I draw the resource grid with only one RB, just for simplicity.

Example 1 > 4x4 MIMO, Transmission Mode 3 or 4.


Example 2 > Transmission Mode 9, 4 Layer.

Channel Estimation
As I explained on other pages, in all communication the signal goes through a medium (called a channel), and it gets distorted and has various noise added to it while passing through. To properly decode the received signal without too many errors, we have to remove the distortion and noise that the channel applied. To do this, the first step is to figure out the characteristics of the channel the signal has gone through. The technique/process of characterizing the channel is called 'channel estimation', and it can be illustrated as below.
There are many different methods of channel estimation, but the fundamental concepts are similar. The process goes as follows.

i) Set up a mathematical model that correlates the 'transmitted signal' and the 'received signal' via a 'channel' matrix.
ii) Transmit a known signal (normally called a 'reference signal' or 'pilot signal') and detect the received signal.
iii) By comparing the transmitted signal and the received signal, figure out each element of the channel matrix.

As an example of this process, I will briefly describe how it is done in LTE. Of course, I cannot write down the full details, and much of it is up to the implementation (meaning the detailed algorithm may vary with each specific chipset implementation). However, the overall concept is similar.
- General Algorithm
- Channel Estimation for SISO
  - Estimation of Channel Coefficient
  - Estimation of Noise
- Channel Estimation for 2 x 2 MIMO
  - Estimation of Channel Coefficient
  - Estimation of Noise
General Algorithm

How can we figure out the properties of the channel? I.e., how can we estimate the channel? At a very high level, it can be illustrated as below. The illustration says the following:
i) We embed a set of predefined signals (called reference signals) in the transmission.
ii) As these reference signals go through the channel, they get distorted (attenuated, phase-shifted, noise-corrupted) along with the other signals.
iii) We detect/decode the received reference signals at the receiver.
iv) We compare the transmitted reference signals with the received ones and find the correlation between them.

Channel Estimation for SISO

Now let's take the LTE SISO case and see how we can estimate the channel properties (channel coefficients and a noise estimate). Since this is SISO, the reference signal is embedded on only one antenna port (port 0). The vertical axis of the resource map represents the frequency domain, so I indexed each reference signal as f1, f2, f3, ..., fn. Each reference symbol is a complex number (I/Q data) that can be plotted as shown below. Each complex number (reference symbol) on the left (transmit side) is modified (distorted) into the corresponding symbol on the right (received symbol). Channel estimation is the process of finding the correlation between the array of complex numbers on the left and the array of complex numbers on the right.
The detailed estimation method can vary with the implementation. The method described here is based on the open source project srsLTE (refer to [1]).

< Estimation of Channel Coefficient >

Since there is only one antenna, the system model relating each transmitted reference signal to the corresponding received reference signal can be written as

y(fi) = h(fi) · x(fi) + n(fi)

where y() is the array of received reference signals, x() the array of transmitted reference signals, h() the array of channel coefficients, and f1, f2, ... are just integer indices.

We know x() because it is given, and y() is also known because it is measured/detected at the receiver. With these, we can easily calculate the coefficient array as

h(fi) ≈ y(fi) / x(fi)

Now we have the channel coefficients at the locations where reference signals are placed. But we need channel coefficients at every location, including those points where there is no reference signal, which means we have to figure out the coefficients for the locations with no reference signal. The most common way to do this is to interpolate the measured coefficient array. In srsLTE's case, the coefficients are averaged first and then interpolated.
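A toy version of this per-pilot division followed by interpolation might look like the following. This is a sketch of the general idea only, not srsLTE's actual code (the averaging step is omitted, the pilot spacing and channel value are invented, and the edges are simply held flat):

```python
def estimate_channel_siso(tx_pilots, rx_pilots, pilot_positions, n_subcarriers):
    """Per-pilot estimate h = y/x, then linear interpolation between
    pilot positions across all subcarriers."""
    h_at_pilots = [y / x for x, y in zip(tx_pilots, rx_pilots)]
    h = [0j] * n_subcarriers
    for (p0, h0), (p1, h1) in zip(zip(pilot_positions, h_at_pilots),
                                  zip(pilot_positions[1:], h_at_pilots[1:])):
        for k in range(p0, p1 + 1):
            t = (k - p0) / (p1 - p0)
            h[k] = h0 + t * (h1 - h0)   # linear interpolation
    # hold the estimate flat beyond the outermost pilots
    for k in range(pilot_positions[0]):
        h[k] = h_at_pilots[0]
    for k in range(pilot_positions[-1] + 1, n_subcarriers):
        h[k] = h_at_pilots[-1]
    return h

# Pilots on subcarriers 0 and 6; flat (invented) channel h = 0.8 - 0.2j
pilots_pos = [0, 6]
tx = [1 + 0j, 1 + 0j]
rx = [(0.8 - 0.2j) * x for x in tx]
h_est = estimate_channel_siso(tx, rx, pilots_pos, 7)
print(h_est[3])  # interpolated value between the two pilots: (0.8-0.2j)
```

A real implementation would also average neighboring pilot estimates before interpolating, to suppress the noise that each individual y/x division carries.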

< Estimation of Noise >

The next step is to estimate the noise properties. Theoretically, the noise could be calculated as

n(fi) = y(fi) − h(fi) · x(fi)

However, what we need is the statistical properties of the noise, not the exact noise values (the exact values have little meaning because the noise keeps changing, so any specific sample is of no use). We can instead estimate the noise using only the measured channel coefficients and the averaged channel, as shown below. In srsLTE, the author used this method.
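The idea of estimating the noise power from the difference between the raw per-pilot estimates and their smoothed version can be sketched as follows (my own simplified rendering of that approach; the numbers are invented):

```python
def estimate_noise_power(h_raw, h_smoothed):
    """Estimate noise power as the mean squared deviation between raw
    per-pilot channel estimates and their smoothed (averaged) version.
    The smoothing is assumed to have removed most of the noise, so the
    residual is attributed to noise."""
    n = len(h_raw)
    return sum(abs(a - b) ** 2 for a, b in zip(h_raw, h_smoothed)) / n

# Raw estimates scattering around a flat channel of 1+0j
h_raw = [1.1 + 0.1j, 0.9 - 0.1j, 1.0 + 0.2j, 1.0 - 0.2j]
h_smoothed = [1.0 + 0j] * 4
print(estimate_noise_power(h_raw, h_smoothed))
```

What this yields is a noise variance figure, which is exactly the kind of statistical summary (rather than individual noise samples) that the decoder and the SNR estimator can actually use.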

Channel Estimation for 2 x 2 MIMO

Let's assume we have a communication system as shown below. x(t) is the transmitted signal and y(t) is the received signal. When x(t) is transmitted into the air (the channel), it gets distorted, picks up various noise, and the streams may interfere with each other, so the received signal y(t) is not the same as the transmitted signal x(t).
The relation among the transmitted signal, received signal, and channel matrix can be modeled in mathematical form as shown below.

In this equation we know the values x1, x2 (the known transmitted signals) and y1, y2 (the detected/received signals). The parts we don't know are the H matrix and the noise (n1, n2).
For simplicity, let's assume there is no noise in this channel, i.e., set n1 and n2 to 0. (Of course, a real channel always has noise, and estimating it is a very important part of channel estimation, but we assume no noise here just to keep things simple. I will add the noisy case later, when I have the knowledge to describe it in plain language.)

Since we have a mathematical model, the next step is to transmit a known signal (a reference signal) and figure out the channel parameters from it.

Suppose we send a known signal with amplitude 1 through only one antenna while the other antenna is OFF. Since the signal propagates through the air, it is detected by both antennas on the receiver side. Now assume the first receive antenna picks up the reference signal with amplitude 0.8 and the second with amplitude 0.2. From this result, we can fill in one row of the channel matrix (H) as shown below.
Next, suppose we send a known signal with amplitude 1 through only the other (second) antenna while the first antenna is OFF. Assume the first receive antenna picks it up with amplitude 0.3 and the second with amplitude 0.7. From this result, we can fill in the other row of the channel matrix (H) as shown below.
Simple enough? I hope you had no problem understanding this basic concept. But if you used this method exactly as described, there would be some inefficiency: there would have to be moments when you transmit only the reference signal, without any real data, just to estimate the channel, meaning the data rate would drop because of the channel estimation process. To remove this inefficiency, real communication systems transmit the reference signal and data simultaneously.
Now the question is: "How can we implement the concept described above while transmitting the reference signal and data simultaneously?" There are several ways to do this, and different communication systems use slightly different methods.

In LTE, for example, the method shown below is used. In 2 x 2 MIMO in LTE, each subframe has different locations for the reference signals of each antenna. The resource grid for antenna 0 carries the reference signal allocated to antenna 0 and transmits nothing on the resource elements allocated to antenna 1's reference signal; the grid for antenna 1 carries antenna 1's reference signal and transmits nothing on the resource elements allocated to antenna 0's. So if you decode, at the two receiver antennas, the resource elements allocated to antenna 0's reference signal, you can estimate h11 and h12; if you decode those allocated to antenna 1's reference signal, you can estimate h21 and h22 (in both cases we again assume there is no noise, for simplicity).

< Estimation of Channel Coefficient >

The process illustrated above measures the H matrix at one specific point in the frequency domain within an LTE OFDMA symbol. If you applied the measured H values as-is when decoding the other parts of the symbol, the accuracy of the decoded symbols might not be as good as it could be, because the measurements used in the previous step contain some level of noise. So in a real application, some kind of post-processing is applied to the H values measured by the method described above, and in this post-processing we can also figure out the overall statistical properties of the noise (e.g., mean, variance, and statistical distribution). One thing to keep in mind is that the specific noise value obtained in this process has little meaning by itself: the value obtained from the reference signal will not be the same as the noise affecting the other (non-reference) data, because noise changes randomly. However, the overall properties of that random noise are important information (e.g., used in SNR estimation, etc.).

Before moving on, let's briefly revisit the mathematical model. Even though we describe the system equation with a noise term, as follows, that does not mean you can directly measure the noise; that is impossible. The equation just shows that the detected signal (y) contains a certain amount of noise.
So, when we measure the channel coefficients, we use the equation without the noise term, as shown below.

In the specific LTE application, we have multiple measurement points (multiple reference signals) within an OFDM symbol. These measurement points are spread across the frequency domain, so let's rewrite the channel matrix as follows to indicate the measurement point of each matrix.

Now, assuming you have measured the H matrix across a whole OFDM symbol, you would have multiple H matrices as below, each representing the H matrix at one specific frequency.

Now you have an array of H matrices. This array is made up of four different groups, each highlighted with a different color as shown below.
When you apply the post-processing algorithm, it needs to be applied to each of these groups separately. So for simplicity, I rearranged the array of H matrices into multiple independent arrays (four in this case), as shown below.

For each of these arrays, I do the same processing, illustrated below. (Each chipset maker may apply a slightly different method, but the overall idea is similar.) In the illustrated method, the data (the array of channel coefficients at each frequency point) is put through an IFFT, i.e., converted into the time domain, producing the array labeled (2). This is actually the impulse response of the specific channel path. Then a specific filtering (or windowing) is applied to this time-domain data; in this example, the samples beyond a certain point are replaced with zeros, producing the result labeled (3). You may apply a more sophisticated filter or window instead of this kind of simple zeroing. Then, by converting the filtered channel impulse response back to the frequency domain, I get the filtered channel coefficients, and I use those values as the 'estimated channel coefficients' when decoding the other received signals (i.e., decoding the non-reference data).
By doing the same for all four arrays, you get the four arrays of the 'estimated channel coefficient array', from which you can reconstruct the array of estimated channel matrices as follows.
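The transform, window, transform-back step can be sketched as follows. This is only an illustration under my own assumptions: a naive O(n^2) DFT stands in for the FFT, the cut-off point (`keep_taps`) is arbitrary, and the noisy channel values are invented:

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT, O(n^2); enough for illustration."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def denoise_channel(h_freq, keep_taps):
    """Convert per-subcarrier channel estimates to the time domain
    (channel impulse response), zero everything beyond keep_taps
    (assumed to be noise, since the true impulse response is short),
    then convert back to the frequency domain."""
    h_time = dft(h_freq, inverse=True)
    h_time = [v if i < keep_taps else 0j for i, v in enumerate(h_time)]
    return dft(h_time)

# A flat (single-tap) channel plus small per-subcarrier noise
h_noisy = [1.0 + 0.05j, 1.0 - 0.03j, 0.96 + 0j, 1.04 + 0.02j,
           1.0 + 0j, 0.97 - 0.04j, 1.02 + 0j, 1.01 + 0.01j]
h_clean = denoise_channel(h_noisy, keep_taps=2)
```

Because the true impulse response occupies only the first tap, zeroing the later taps removes noise energy while leaving the channel itself almost untouched, so `h_clean` sits closer to the true flat channel than `h_noisy` does.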

< Estimation of Noise >

With this estimated channel matrix, you can estimate the noise value at each point using the following equation. It is the same as the original system equation at the beginning of this page, except that the H matrix is replaced by the 'estimated H' matrix; now we know all the values except the noise. So, by plugging in all the known values, we can calculate (estimate) the noise at each measurement point.

If you apply this equation at every measurement point, you get the noise values for all of them, and from those calculated values you can derive the statistical properties of the noise. As mentioned above, each individual noise value calculated here has little meaning by itself, because it cannot be directly applied to decoding other (non-reference) signals, but the statistical characteristics of the noise are very useful information for determining the nature of the channel.
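Plugging the estimated H back into the system equation to recover per-point noise samples, and then summarizing them statistically, might look like the following sketch (all values invented; 'variance' here is the mean squared magnitude, assuming zero-mean noise):

```python
def noise_samples(h_est, tx, rx):
    """n = y - H_est * x for a 2x2 system at one measurement point."""
    (h11, h12), (h21, h22) = h_est
    x1, x2 = tx
    y1, y2 = rx
    return (y1 - (h11 * x1 + h12 * x2),
            y2 - (h21 * x1 + h22 * x2))

def noise_variance(samples):
    """Mean squared magnitude of all collected noise samples
    (assuming the noise is zero-mean)."""
    flat = [n for pair in samples for n in pair]
    return sum(abs(n) ** 2 for n in flat) / len(flat)

H_est = [[0.8, 0.3], [0.2, 0.7]]
# (x1, x2) and (y1, y2) at two measurement points (invented values)
points = [((1.0, 0.0), (0.82, 0.19)),
          ((0.0, 1.0), (0.31, 0.72))]
samples = [noise_samples(H_est, x, y) for x, y in points]
print(noise_variance(samples))
```

The individual entries of `samples` are the throwaway per-point values discussed above; the single `noise_variance` figure is the part that is actually reused (e.g., for SNR estimation).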
Inter-Cell Interference Coordination (ICIC)
As mobile communication technology has evolved dramatically, from LTE (10 MHz) to LTE-A (10+10 MHz), and then to wideband LTE (20 MHz), South Korea's mobile market is hotter than ever, with its big 3 operators competing fiercely in speed and quality (see Netmanias Report, LTE in Korea UPDATE - May 1, 2014). Operators can offer different maximum speeds depending on how wide a frequency bandwidth they can actually use. All three, having obtained pretty much the same amount of LTE spectrum, support practically the same maximum speeds.

However, these theoretical maximum speeds are not available to users in real life. What users experience, i.e., Quality of Experience (QoE), is affected by various factors, and so the actual QoE is far from the maximum speeds. One of the biggest factors causing such quality degradation is inter-cell interference.

In 2G/3G networks, it was the base station controllers, i.e., the upper nodes of the base stations, that controlled inter-cell interference. In 4G networks like LTE/LTE-A, however, inter-cell interference can be controlled through coordination among base stations. This was made possible because LTE networks now have X2 interfaces defined between base stations. By exchanging interference information over these X2 interfaces, base stations can schedule radio resources in a way that avoids inter-cell interference.

There are several Interference Coordination technologies in LTE and LTE-A:

 LTE: Inter-Cell Interference Coordination (ICIC)


 LTE-A: Enhanced ICIC (eICIC) which is an adjusted version of ICIC for HetNet, and
Coordinated Multi-Point (CoMP) which uses Channel Status Information (CSI) reported
by UE

In this and the next few posts, we will learn more about these interference coordination technologies. First, let's look at ICIC, the most basic interference coordination technology.

Inter-Cell Interference Coordination (ICIC)


What causes inter-cell interference?
The biggest cause of lower mobile network capacity is interference. Interference is caused when users in different neighbor cells attempt to use the same resource at the same time. Suppose there are two cells that use the same frequency channel (F, e.g., 10 MHz at the 1.8 GHz band), and each cell has a UE that uses the same frequency resource (fi, fi ∈ F). As seen in the figure below, if the two UEs are located at the cell centers, like A2 and B2, no interference is caused because they use low power to communicate. However, if they are at the cell edges, like A1 and B1, their signals interfere with each other because both use high power to communicate.
Interference arises because each cell only knows what radio resources its own UEs are using, not what the UEs in neighbor cells are using. For example, in the figure above, Cell A knows what resources A1 is using, but not what B1 is using, and vice versa. Since the cells schedule radio resources for their own UEs independently, the same frequency resource can end up being allocated to the UEs at the cell edges (A1 in Cell A and B1 in Cell B).

ICIC Concept
ICIC is defined in 3GPP release 8 as an interference coordination technology used in LTE
systems. It reduces inter-cell interference by having UEs, at the same cell edge but
belonging to different cells, use different frequency resources. Base stations that support
this feature can generate interference information for each frequency resource (RB), and
exchange the information with neighbor base stations through X2 messages. Then, from the
messages, the neighbor stations can learn the interference status of their neighbors, and
allocate radio resources (frequency, Tx power, etc.) to their UEs in a way that would avoid
inter-cell interference.

For instance, say a UE belonging to Cell A is using high Tx power on frequency resource
f3 at the cell edge. With ICIC, Cell B then allocates a different frequency resource (f2) to
its UE at the cell edge, and f3 to its other UE at the cell center, having the one at the center
use low Tx power in communicating.
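The X2 coordination described above can be sketched in a few lines. This is an illustrative toy, not a real eNB scheduler: the RB indices and the simple "first non-flagged RB" policy are assumptions for the example.

```python
# Toy ICIC sketch: Cell A reports (via X2) which resource blocks (RBs) its
# cell-edge UE uses at high power; Cell B avoids those RBs for its own edge UE.

def schedule_edge_ue(own_rbs, neighbor_high_interference_rbs):
    """Pick the first RB that the neighbor did not flag as high-interference."""
    for rb in own_rbs:
        if rb not in neighbor_high_interference_rbs:
            return rb
    return None  # no interference-free RB available

# Cell A's X2 report: its edge UE transmits at high power on RB 3.
cell_a_report = {3}
rb_for_b1 = schedule_edge_ue([0, 1, 2, 3, 4], cell_a_report)
print(rb_for_b1)  # Cell B's edge UE is steered away from RB 3
```

If every RB is flagged, the function returns `None`, mirroring the case where a cell must fall back to power control rather than frequency avoidance.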
Interference Information used in ICIC
Basic ICIC Behavior



Enhanced Inter-Cell Interference Coordination (eICIC)
As noted in the previous post about ICIC, in this post we will look at enhanced Inter-Cell
Interference Coordination (eICIC), an interference control technology in LTE-A. In
LTE/LTE-A, one key challenge for operators is increasing network capacity to keep up with
fast-growing traffic. In particular, crowded areas in metropolitan cities have hotspots with
extremely high traffic. For these hotspots, merely shrinking macro cells is not enough, so
network operators look for a more economical way to increase capacity: installing small
cells.

Networks consisting of a single type of cell (e.g. existing macro networks), as presented
in the previous post, are called homogeneous networks, while those with different types of
cells are called heterogeneous networks (HetNets). A HetNet is thus a network where small
cells are deployed within a macro cell's coverage. From Release 10 on, HetNet environments
are also considered in LTE-A standardization.
Figure 1. Homogeneous network and heterogeneous network (HetNet)

■ What is eICIC?
eICIC is an interference control technology defined in 3GPP Release 10. It is an advanced
version of ICIC (defined in Release 8), evolved to support HetNet environments. To prevent
inter-cell interference, ICIC has cell-edge UEs in neighbor cells use different frequency
ranges (RBs or sub-carriers), whereas eICIC has them use different time ranges (subframes)
for the same purpose. That is, with eICIC, a macro cell and small cells that share the same
channel (co-channel deployment) use radio resources in different time ranges (i.e.
subframes).

Two main features of eICIC are Almost Blank Subframe (ABS), defined in Release 10, and
Cell Range Expansion (CRE), defined in Release 11. ABS prevents cell-edge UEs in small cells
from being interfered with by the neighboring macro cell by having both cells use the same
radio resources, but in different time ranges (subframes). CRE expands the coverage of a
small cell so that more UEs near the cell edge can access it. In this post, we will discuss
ABS only.

Figure 2. eICIC technology: ABS

■ Problems with ICIC


First, you may wonder what issues with ICIC made eICIC necessary for HetNets. ICIC
enables cell-edge UEs to use different frequency resources (RBs) by having neighboring
base stations exchange interference information with each other over the X2 interface. This
is effective at reducing inter-cell interference in an existing macro-cell-based homogeneous
network, but in a HetNet it leaves interference between control channels unresolved.
When a base station communicates with a UE, each 1 msec DL subframe consists of two
periods - one for delivering the control channel and the other for delivering the data
channel. ICIC can allocate different frequency resources to cell-edge UEs only for data
channels (Physical Downlink Shared Channel; PDSCH). Resource allocation information is
delivered to UEs through control channels (Physical Downlink Control Channel; PDCCH).
The catch is that, unlike data channels, control channels are not confined to particular
frequency ranges; they are distributed across the entire channel bandwidth before being
delivered. This means UEs in neighbor cells may end up sharing the same frequency
resources for their control channels.

Figure 3. Control channel (PDCCH) and data channel (PDSCH)

In a homogeneous network, this is not a big problem: there isn't much difference in Tx
power between neighbor cells' antennas, so control channels cause no significant
interference between neighbor cells at the cell edge. In a HetNet, however, where a macro
cell has much higher Tx power than a small cell, the small cell's control channel is inevitably
interfered with by the macro cell's, making ICIC applied to the data channel ineffective.
Figure 4. Issues with ICIC in HetNet: Interference by macro cell's control channel

■ eICIC Concept: Problems with ICIC solved by having cells use radio resources in
different time
■ eICIC Operation: Delivering ABS pattern information over X2 interface
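The ABS idea under the headings above can be sketched as a subframe bitmap: the macro cell blanks certain subframes and the small cell schedules its cell-edge UEs only in those protected subframes. The 8-subframe toy pattern and the simple lookup below are assumptions for illustration (real ABS patterns exchanged over X2 are longer bitmaps).

```python
# Toy ABS sketch: 1 = the macro cell is (almost) blank in this subframe,
# so the small cell can serve its cell-edge UEs without macro interference.
MACRO_ABS_PATTERN = [1, 0, 0, 0, 1, 0, 0, 0]

def small_cell_can_serve_edge_ue(subframe):
    """Edge UEs of the small cell are protected only while the macro is blank."""
    return MACRO_ABS_PATTERN[subframe % len(MACRO_ABS_PATTERN)] == 1

protected = [sf for sf in range(8) if small_cell_can_serve_edge_ue(sf)]
print(protected)  # subframes where the small cell schedules its edge UEs
```

The macro cell trades a fraction of its own capacity (the blanked subframes) for protected air time at the small cell's edge.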

CoMP : CS, CB, JT and DPS


Having covered ICIC and eICIC in the previous posts, today we will learn about CoMP, an
inter-cell cooperation technology in LTE-A. At an early stage of LTE/LTE-A, offering high
speed was the most important marketing point for operators. However, as LTE subscribers
and traffic grow, satisfying users with a high Quality of Experience (QoE) - for example, by
improving user throughput at cell edges, where data rates drop drastically - becomes far
more important than simply supporting the highest peak speed.

Increased radio network capacity can be achieved by improving spectral efficiency. Spectral
efficiency (bit/s/Hz) is the transmission rate measured in bps per Hz: the higher the
spectral efficiency, the more data can be transmitted within the same bandwidth. By
default, LTE networks provide broadband radio links with good spectral efficiency by using
at least 2x2 MIMO antennas. At cell centers, installing more antennas at a base station
improves spectral efficiency, leading to higher UE throughput. At cell edges, however, only
insignificant throughput improvement can be expected this way, so another approach is
needed to achieve the same effect.
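The definitions above can be made concrete with a small calculation. The sample numbers are assumptions for illustration; the Shannon bound log2(1 + SNR) is included to show why low-SNR cell-edge UEs gain little from raw signal power alone.

```python
import math

def spectral_efficiency(throughput_bps, bandwidth_hz):
    """Spectral efficiency in bit/s/Hz: transmission rate per Hz of bandwidth."""
    return throughput_bps / bandwidth_hz

def shannon_efficiency(snr_linear):
    """Upper bound on spectral efficiency for a given (linear) SNR."""
    return math.log2(1 + snr_linear)

# 75 Mbit/s over a 20 MHz channel:
print(spectral_efficiency(75e6, 20e6))   # 3.75 bit/s/Hz
# Cell center (SNR 20 dB = 100x) vs cell edge (SNR 0 dB = 1x):
print(shannon_efficiency(100.0))          # ~6.66 bit/s/Hz
print(shannon_efficiency(1.0))            # 1.0 bit/s/Hz
```

The gap between the two Shannon values is the cell-center/cell-edge throughput gap that CoMP targets.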

■ Definition of CoMP
Coordinated Multi-Point (CoMP) is a new inter-cell cooperation technology specifically aiming
to enhance throughputs of UEs at cell edge. CoMP mitigates inter-cell interference and
increases throughputs of a UE at cell edge by allowing not only the UE's serving cell, but
also other cell(s) to communicate with the UE, through cooperation with one another.

Traditionally, a UE accesses only one cell (serving cell) for communication. But, a CoMP-
enabled UE can communicate with more than one cell located in different points, and this
group of cells works as a virtual MIMO system. Cells that are in charge of directly or
indirectly transmitting data to UE are called "CoMP cooperating cells" ("CoMP cooperating
set" in 3GPP terms*), and specifically those actually responsible for transmitting data to UE
are called "CoMP transmission cell(s)" ("CoMP transmission points" in 3GPP terms *).

In summary, CoMP is an inter-cell cooperation technology that enables more than one
transmission cell to communicate with a UE to achieve better throughputs at cell edge areas
by reducing inter-cell interference. CoMP cooperating cells share channel information of a
UE, and based on the information, transmission cell(s) are decided.

■ Why CoMP? – Problems with ICIC and eICIC


As discussed in the previous posts, ICIC (defined in Release 8) reduces inter-cell
interference by allocating different frequency resources (RBs or sub-carriers) to UEs at cell
edge. On the other hand, eICIC (defined in Release 10) does the same task in time domain,
by allocating different time resources (subframes) through cooperation between a macro
cell and small cells in a HetNet.

ICIC and eICIC, both aiming to reduce inter-cell interference, help UEs at the cell edge to
communicate, but neither actually improves their throughput. That is because they restrict
radio resource usage in the frequency domain (ICIC) or time domain (eICIC) to mitigate
interference, and because interference information is shared between neighbor cells on a
relatively long-term basis. As a result, fast-changing channel conditions of a UE (e.g. one
traveling fast, or entering a shadowed area) are not reflected in inter-cell cooperation
promptly, inevitably impeding dynamic allocation of resources.

CoMP, recognized as the most advanced inter-cell cooperation technology so far, was first
standardized in Release 11, with further standardization taking place in Release 12. It uses
radio resources not just in the frequency/time domains, but also in the spatial domain, to
enhance spectral efficiency: it performs beamforming using a smart antenna, or operates as
a virtual MIMO system. With CoMP, cooperating cells can share a UE's channel information
every time scheduling is performed, so the UE's instantaneous channel conditions are
reflected in time. This sharing makes joint scheduling possible. CoMP can be used in either
a homogeneous or a heterogeneous network (HetNet), and features various types of
inter-cell cooperation: CS, CB, JT, and DPS (see CoMP Types below).

■ Channel Information used in CoMP


Channels are the transmission routes for data between Tx and Rx antennas across the air.
If base stations know a UE's channel information beforehand, they can transmit precoded
data so that the UE gets better reception. For this purpose, UEs measure their channels and
report the resulting Channel State Information (CSI) to their base stations.

Base stations instruct their UEs on how, and for which cells, CSI is to be measured by
sending a CSI-RS (CSI Reference Signal) configuration message. Upon this instruction, UEs
measure CSI and report it to their serving cells. In general, CSI includes the Channel Quality
Indicator (CQI), Precoding Matrix Indicator (PMI), and Rank Indicator (RI).

CQI: An indicator of channel quality, reported as the highest modulation and coding scheme
(MCS) value that satisfies the condition 'channel block error rate (BLER) < 0.1'. It is a value
ranging from 0 to 15 (4 bits); the better the channel quality, the higher the MCS used.
Subband CQIs indicate the quality of specific frequency ranges (subbands), while wideband
CQIs indicate that of the entire channel bandwidth.
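The CQI selection rule just quoted can be sketched directly. The per-index BLER estimates below are made-up measurements for illustration, not values from the 3GPP specifications.

```python
# Toy CQI selection: report the highest CQI index whose estimated BLER
# stays below 0.1, per the rule described above.

def select_cqi(bler_per_index):
    """bler_per_index[i] = estimated BLER if CQI index i were used (i = 0..15)."""
    best = 0
    for index, bler in enumerate(bler_per_index):
        if bler < 0.1:
            best = index
    return best

# Assumed BLER estimates, increasing with the index (better MCS = riskier):
bler = [0.0, 0.01, 0.02, 0.04, 0.08, 0.12, 0.3, 0.5] + [0.9] * 8
print(select_cqi(bler))  # highest index still under 10% BLER
```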

PMI: Base stations deliver one or more data streams (layers) through their Tx antennas.
The precoding matrix defines how individual data streams (layers) are mapped onto
antennas. To derive it, UEs obtain channel information by measuring the channel quality of
each DL antenna. Because feeding back all channel information would significantly increase
overhead, a codebook is generally pre-configured at both base stations and UEs. Using this
codebook, UEs send only the index of the best-matching precoding matrix. Base stations,
referring to the reported index, calculate their own precoding matrix and use the optimal
value.

RI: Indicates the number of data streams delivered in DL. For instance, with 2x2 MIMO,
this value is 1 for transmit-diversity MIMO, where the two antennas at a base station send
the same data stream, and 2 for spatial-multiplexing MIMO, where the antennas send
different data streams.

■ CoMP Types (CoMP Categories in 3GPP Terms *)


Specific CoMP types can be categorized in many ways depending on the criteria used for
categorization - whether backhaul is ideal or non-ideal, whether CoMP between eNBs is
supported or not, whether MIMO antennas support one user or multiple users, whether it is
to be applied to DL or UL, etc.
This post will discuss DL CoMP. CoMP is designed to reduce inter-cell interference and
enhance throughputs of cell-edge UEs. When cell(s) send data to UEs, they can use one of
the following CoMP types depending on the extent of coordination among cells and traffic
load. Although different types of CoMP can be used together, we will explain the specific
types one by one below for easier understanding.

Coordinated Scheduling/Coordinated Beamforming (CS/CB)


In an effort to minimize interference among cell-edge UEs, CS and CB CoMP select one of
the cooperating cells as the transmission cell, and use it to communicate with the UE.

1. Coordinated Scheduling (CS)


The basic idea of CS CoMP is similar to ICIC in that it reduces inter-cell interference by
allocating different frequency resources (RBs or sub-carriers) to cell-edge UEs. From a
technical perspective, however, CS CoMP is a more advanced technology, requiring a much
shorter operation period, more complicated signal processing, and more elaborate
algorithms than ICIC. In ICIC, cooperating cells share the interference information of each
cell; in CS CoMP, they can share the channel information of each user.
- First, cooperation periods in CS CoMP are much shorter than in ICIC. In ICIC, each
  cooperation period is tens to hundreds of msec long, so once ICIC coordination results
  are updated, scheduling is based on them for a long time. In CS CoMP, with a
  cooperation period as short as 1 msec, new CS coordination results are applied every
  time scheduling is performed, so resources can be allocated dynamically even as the
  UE's channel condition changes instantaneously.
- Second, in CS CoMP, cooperating cells share a greater amount of more detailed
  information than in ICIC. ICIC shares fairly simple information, such as the
  interference level per resource block (see ICIC), while CS CoMP shares per-user
  channel information (CQI, PMI, RI, SINR, etc.) between UEs and their cooperating
  cells.
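The per-TTI joint scheduling contrasted with ICIC above can be sketched as follows. Every 1 msec, cooperating cells share per-UE, per-RB channel quality and jointly pick non-conflicting RBs for their edge UEs; the CQI numbers and the tie-break rule are assumptions for the example.

```python
# Toy CS CoMP scheduler, run once per 1 ms TTI with freshly shared CQI.

def joint_schedule(cqi_a, cqi_b):
    """Give each edge UE its best RB; on a conflict, the UE with the better
    channel keeps the RB and the other takes its second-best one."""
    best_a = max(cqi_a, key=cqi_a.get)
    best_b = max(cqi_b, key=cqi_b.get)
    if best_a != best_b:
        return best_a, best_b
    if cqi_a[best_a] >= cqi_b[best_b]:
        alt_b = max((rb for rb in cqi_b if rb != best_b), key=cqi_b.get)
        return best_a, alt_b
    alt_a = max((rb for rb in cqi_a if rb != best_a), key=cqi_a.get)
    return alt_a, best_b

# This TTI, both edge UEs prefer RB 'f3'; A1's channel there is better.
cqi_a1 = {"f2": 6, "f3": 12}
cqi_b1 = {"f2": 9, "f3": 10}
print(joint_schedule(cqi_a1, cqi_b1))  # A1 keeps f3, B1 moves to f2
```

Because the inputs are refreshed each TTI, the decision tracks instantaneous channel changes, unlike ICIC's long-term coordination.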

Figure 1. Coordinated Scheduling (CS)

In Figure 1, A1 and B1 at the cell edge, each allocated a different frequency resource (f3
and f2), avoid interference and hence enjoy improved throughput. Each UE still receives the
signal destined for the other; since the resources differ, these signals do not collide with its
own, but they can still degrade its reception quality.
2. Coordinated Beamforming (CB)
CB CoMP allocates different spatial resources (beam patterns) to UEs at the cell edge using
smart antenna technology. Without CS, A1 and B1 may end up being allocated the same
frequency resource (f3 in Figure 2). CB CoMP lets Cell A and Cell B cooperate to allocate
different spatial resources (beam pattern 1, beam pattern 2) to A1 and B1 at the cell edge.
The two cells prevent interference by pointing the main beam at their own UE, and a null
beam at the neighbor's UE.

Figure 2. Coordinated Beamforming (CB)

Generally, CB is used together with CS more often than alone. Figure 3 shows a case where
CS and CB are combined: Cell A and Cell B cooperate to allocate different frequency
resources (f3, f2) and different spatial resources (beam pattern 1, beam pattern 2) to A1
and B1, respectively. This combination is effective because CS alone already handles the
interference issue, and CB further improves reception quality. Used with CB, CS achieves
better cell-edge throughput because CB helps A1 and B1 avoid signals sent to the other UE,
and better receive those destined for themselves.
Figure 3. CS/CB
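The "main beam to own UE, null beam to neighbor UE" idea above can be sketched with a textbook two-element, half-wavelength array. The angles and the simple null-steering weight rule are classroom illustrations, not an eNB algorithm.

```python
import cmath
import math

def array_gain(weights, angle_rad):
    """|w0 + w1 * e^{j*pi*sin(angle)}| for a 2-element, lambda/2-spaced array."""
    phase = cmath.exp(1j * math.pi * math.sin(angle_rad))
    return abs(weights[0] + weights[1] * phase)

def null_steering_weights(null_angle_rad):
    """Choose weights so the array response is exactly zero toward one angle."""
    return [1.0, -cmath.exp(-1j * math.pi * math.sin(null_angle_rad))]

own_ue = math.radians(-30)       # direction of the cell's own edge UE (assumed)
neighbor_ue = math.radians(40)   # direction of the neighbor's edge UE (assumed)

w = null_steering_weights(neighbor_ue)
print(round(array_gain(w, neighbor_ue), 6))  # ~0: null toward the neighbor's UE
print(array_gain(w, own_ue) > 0.5)           # still useful gain toward own UE
```

With more antenna elements, the same principle leaves enough degrees of freedom to place a null on the neighbor's UE while shaping a strong main lobe toward the served UE.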

Joint Processing (JP): Joint Transmission/Dynamic Point Selection (JT/DPS)


In JT/DPS CoMP, one or more cells are selected from among the cooperating cells as
transmission cells, for better reception by UEs at the cell edge.

3. Joint Transmission (JT)


4. Dynamic Point Selection (DPS)
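The contrast between the two joint-processing variants named above can be sketched briefly: in JT, all cooperating cells transmit the same data to the UE, while in DPS, transmission is handed each subframe to whichever single cell is instantaneously best. The per-cell quality numbers are assumptions for illustration.

```python
# Toy JT vs DPS selection, run per subframe on shared channel reports.

def dps_select(per_cell_quality):
    """Dynamic Point Selection: one transmission point per subframe."""
    return max(per_cell_quality, key=per_cell_quality.get)

def jt_points(per_cell_quality):
    """Joint Transmission: every cooperating cell transmits to the UE."""
    return sorted(per_cell_quality)

quality = {"macro": 4.0, "small_1": 6.5, "small_2": 5.2}  # e.g. per-cell SINR
print(dps_select(quality))  # the single best cell transmits this subframe
print(jt_points(quality))   # all cooperating cells transmit jointly under JT
```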
A growing number of frequency bands are being designated for use with LTE. Many of the LTE
frequency bands are already in use by other cellular systems, whereas other LTE bands are new,
being introduced as other users are re-allocated spectrum elsewhere.

Additional information and later updates can be found on Electronics Notes our sister website LTE
channels bands & spectrum.

FDD and TDD LTE frequency bands


FDD spectrum requires paired bands, one for the uplink and one for the downlink, whereas TDD
requires a single band, as uplink and downlink share the same frequency, separated in time. As a
result, there are different LTE band allocations for TDD and FDD. In some cases these bands may
overlap, so it is feasible, although unlikely, that both TDD and FDD transmissions could be present in
a particular LTE frequency band.
More likely, a single UE or mobile will need to detect whether a TDD or FDD transmission should be
made in a given band. UEs that roam may encounter both types in the same band, and will therefore
need to detect what type of transmission is being made in that particular LTE band at their current
location.
The different LTE frequency allocations or LTE frequency bands are allocated numbers. Currently the
LTE bands between 1 & 22 are for paired spectrum, i.e. FDD, and LTE bands between 33 & 41 are
for unpaired spectrum, i.e. TDD.
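The numbering convention just stated can be captured in a trivial lookup. This sketch encodes only the ranges given above (as of the time the article describes); bands added later fall outside them.

```python
# Band-number convention from the text: 1-22 paired (FDD), 33-41 unpaired (TDD).

def duplex_mode(band):
    if 1 <= band <= 22:
        return "FDD"
    if 33 <= band <= 41:
        return "TDD"
    return "unknown (allocated later)"

print(duplex_mode(3))   # FDD
print(duplex_mode(38))  # TDD
```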

LTE frequency band definitions

FDD LTE frequency band allocations


A large amount of radio spectrum has been reserved for FDD (frequency division duplex) LTE use.
The FDD LTE frequency bands are paired to allow simultaneous transmission on two frequencies.
The bands also have sufficient separation so that the transmitted signals do not unduly impair
receiver performance. If the signals are too close, the receiver may be "blocked" and its sensitivity
impaired. The separation must be sufficient for the roll-off of the antenna filtering to attenuate the
transmitted signal adequately within the receive band.
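The spacing and gap columns of the table below can be reproduced from the band edges. This is a sketch inferred from the table's values, not a 3GPP formula: spacing is measured between the lower edges of the uplink and downlink blocks, and the gap is the separation between the two blocks.

```python
# Derive the table columns from a band's uplink/downlink edges (MHz).

def duplex_params(ul_mhz, dl_mhz):
    ul_low, ul_high = ul_mhz
    dl_low, dl_high = dl_mhz
    return {
        "width": ul_high - ul_low,      # width of each paired block
        "spacing": dl_low - ul_low,     # negative for reverse-duplex bands
        "gap": abs(dl_low - ul_high),   # separation between UL and DL blocks
    }

print(duplex_params((1920, 1980), (2110, 2170)))  # band 1: width 60, spacing 190, gap 130
print(duplex_params((777, 787), (746, 756)))      # band 13: spacing -31, gap 41
```

Bands 13, 14, 20 and 24 show negative spacing because their downlink sits below the uplink (reverse duplex).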
FDD LTE BANDS & FREQUENCIES

LTE Band Number | Uplink (MHz) | Downlink (MHz) | Width of Band (MHz) | Duplex Spacing (MHz) | Band Gap (MHz)

1 1920 - 1980 2110 - 2170 60 190 130

2 1850 - 1910 1930 - 1990 60 80 20

3 1710 - 1785 1805 -1880 75 95 20

4 1710 - 1755 2110 - 2155 45 400 355

5 824 - 849 869 - 894 25 45 20

6 830 - 840 875 - 885 10 35 25

7 2500 - 2570 2620 - 2690 70 120 50

8 880 - 915 925 - 960 35 45 10

9 1749.9 - 1784.9 1844.9 - 1879.9 35 95 60

10 1710 - 1770 2110 - 2170 60 400 340

11 1427.9 - 1452.9 1475.9 - 1500.9 20 48 28

12 698 - 716 728 - 746 18 30 12

13 777 - 787 746 - 756 10 -31 41

14 788 - 798 758 - 768 10 -30 40

15 1900 - 1920 2600 - 2620 20 700 680

16 2010 - 2025 2585 - 2600 15 575 560

17 704 - 716 734 - 746 12 30 18

18 815 - 830 860 - 875 15 45 30

19 830 - 845 875 - 890 15 45 30

20 832 - 862 791 - 821 30 -41 71

21 1447.9 - 1462.9 1495.5 - 1510.9 15 48 33

22 3410 - 3500 3510 - 3600 90 100 10

23 2000 - 2020 2180 - 2200 20 180 160



24 1626.5 - 1660.5 1525 - 1559 34 -101.5 135.5

25 1850 - 1915 1930 - 1995 65 80 15

26 814 - 849 859 - 894 35 45 10

27 807 - 824 852 - 869 17 45 28

28 703 - 748 758 - 803 45 55 10

29 n/a 717 - 728 11 n/a n/a

30 2305 - 2315 2350 - 2360 10 45 35

31 452.5 - 457.5 462.5 - 467.5 5 10 5

TDD LTE frequency band allocations


With the interest in TDD LTE, several unpaired frequency allocations are being prepared for LTE
TDD use. The TDD LTE bands are unpaired because the uplink and downlink share the same
frequency, time multiplexed.

TDD LTE BANDS & FREQUENCIES

LTE Band Number | Allocation (MHz) | Width of Band (MHz)

33 1900 - 1920 20

34 2010 - 2025 15

35 1850 - 1910 60

36 1930 - 1990 60

37 1910 - 1930 20

38 2570 - 2620 50

39 1880 - 1920 40

40 2300 - 2400 100

41 2496 - 2690 194

42 3400 - 3600 200

43 3600 - 3800 200

44 703 - 803 100

There are regular additions to the LTE frequency bands / LTE spectrum allocations as a result of
negotiations at ITU regulatory meetings. These LTE allocations result partly from the digital
dividend, and partly from the pressure of the ever-growing need for mobile communications. Many
of the new LTE spectrum allocations are relatively small, often 10 - 20 MHz in bandwidth, and this
is a cause for concern: with LTE-Advanced needing bandwidths of up to 100 MHz, carrier
aggregation over a wide set of frequencies may be needed, and this has been recognised as a
significant technological problem.
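The aggregation point above can be illustrated with a small sketch: LTE-Advanced reaches its wide aggregate bandwidth by combining several narrower component carriers, up to five in Release 10. The particular carrier combinations below are assumptions for the example.

```python
# Toy carrier-aggregation sketch: sum the component-carrier bandwidths.

MAX_COMPONENT_CARRIERS = 5  # Release 10 allows up to five component carriers

def aggregate_bandwidth(carriers_mhz):
    if len(carriers_mhz) > MAX_COMPONENT_CARRIERS:
        raise ValueError("too many component carriers")
    return sum(carriers_mhz)

print(aggregate_bandwidth([20, 20, 20, 20, 20]))  # 100 MHz from five 20 MHz carriers
print(aggregate_bandwidth([10, 20]))              # a more typical early deployment
```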

Frequency bands and channel bandwidths


From Tables 5.5-1 "E-UTRA Operating Bands" and 5.6.1-1 "E-UTRA Channel Bandwidth" of 3GPP
TS 36.101,[1] the following table lists the specified frequency bands of LTE and the channel
bandwidths each band supports.

Band | Duplex mode | ƒ (MHz) | Common name | Subset of band | Uplink (MHz) | Downlink (MHz) | Duplex spacing (MHz) | Channel bandwidths (MHz)

(For TDD bands, uplink and downlink share the single range listed. SDL = supplementary downlink, downlink-only.)

1 | FDD | 2100 | IMT | 65 | 1920 – 1980 | 2110 – 2170 | 190 | 5, 10, 15, 20
2 | FDD | 1900 | PCS | 25 | 1850 – 1910 | 1930 – 1990 | 80 | 1.4, 3, 5, 10, 15, 20
3 | FDD | 1800 | DCS | — | 1710 – 1785 | 1805 – 1880 | 95 | 1.4, 3, 5, 10, 15, 20
4 | FDD | 1700 | AWS-1 | 66 | 1710 – 1755 | 2110 – 2155 | 400 | 1.4, 3, 5, 10, 15, 20
5 | FDD | 850 | Cellular | 26 | 824 – 849 | 869 – 894 | 45 | 1.4, 3, 5, 10
7 | FDD | 2600 | IMT-E | — | 2500 – 2570 | 2620 – 2690 | 120 | 5, 10, 15, 20
8 | FDD | 900 | Extended GSM | — | 880 – 915 | 925 – 960 | 45 | 1.4, 3, 5, 10
10 | FDD | 1700 | Extended AWS-1 | 66 | 1710 – 1770 | 2110 – 2170 | 400 | 5, 10, 15, 20
11 | FDD | 1500 | Lower PDC | 74 | 1427.9 – 1447.9 | 1475.9 – 1495.9 | 48 | 5, 10
12 | FDD | 700 | Lower SMH | 85 | 699 – 716 | 729 – 746 | 30 | 1.4, 3, 5, 10
13 | FDD | 700 | Upper SMH | — | 777 – 787 | 746 – 756 | −31 | 5, 10
14 | FDD | 700 | Upper SMH | — | 788 – 798 | 758 – 768 | −30 | 5, 10
17 | FDD | 700 | Lower SMH | 12, 85 | 704 – 716 | 734 – 746 | 30 | 5, 10
18 | FDD | 850 | Lower 800 (Japan) | 26 | 815 – 830 | 860 – 875 | 45 | 5, 10, 15
19 | FDD | 850 | Upper 800 (Japan) | 26 | 830 – 845 | 875 – 890 | 45 | 5, 10, 15
20 | FDD | 800 | Digital Dividend (EU) | — | 832 – 862 | 791 – 821 | −41 | 5, 10, 15, 20
21 | FDD | 1500 | Upper PDC | 74 | 1447.9 – 1462.9 | 1495.9 – 1510.9 | 48 | 5, 10, 15
22 | FDD | 3500 | — | — | 3410 – 3490 | 3510 – 3590 | 100 | 5, 10, 15, 20
24 | FDD | 1600 | Upper L-Band (US) | — | 1626.5 – 1660.5 | 1525 – 1559 | −101.5 | 5, 10
25 | FDD | 1900 | Extended PCS | — | 1850 – 1915 | 1930 – 1995 | 80 | 1.4, 3, 5, 10, 15, 20
26 | FDD | 850 | Extended Cellular | — | 814 – 849 | 859 – 894 | 45 | 1.4, 3, 5, 10, 15
27 | FDD | 800 | SMR | — | 807 – 824 | 852 – 869 | 45 | 1.4, 3, 5, 10
28 | FDD | 700 | APT | — | 703 – 748 | 758 – 803 | 55 | 3, 5, 10, 15, 20
29 | SDL | 700 | Lower SMH | — | N/A | 717 – 728 | N/A | 3, 5, 10
30 | FDD | 2300 | WCS | — | 2305 – 2315 | 2350 – 2360 | 45 | 5, 10
31 | FDD | 450 | NMT | — | 452.5 – 457.5 | 462.5 – 467.5 | 10 | 1.4, 3, 5
32 | SDL | 1500 | L-Band (EU) | 75 | N/A | 1452 – 1496 | N/A | 5, 10, 15, 20
33 | TDD | 2100 | IMT | 39 | 1900 – 1920 | (same) | N/A | 5, 10, 15, 20
34 | TDD | 2100 | IMT | — | 2010 – 2025 | (same) | N/A | 5, 10, 15
35 | TDD | 1900 | PCS (UL) | — | 1850 – 1910 | (same) | N/A | 1.4, 3, 5, 10, 15, 20
36 | TDD | 1900 | PCS (DL) | — | 1930 – 1990 | (same) | N/A | 1.4, 3, 5, 10, 15, 20
37 | TDD | 1900 | PCS | — | 1910 – 1930 | (same) | N/A | 5, 10, 15, 20
38 | TDD | 2600 | IMT-E | 41 | 2570 – 2620 | (same) | N/A | 5, 10, 15, 20
39 | TDD | 1900 | DCS–IMT Gap | — | 1880 – 1920 | (same) | N/A | 5, 10, 15, 20
40 | TDD | 2300 | S-Band | — | 2300 – 2400 | (same) | N/A | 5, 10, 15, 20
41 | TDD | 2500 | BRS | — | 2496 – 2690 | (same) | N/A | 5, 10, 15, 20
42 | TDD | 3500 | CBRS (EU, Japan) | — | 3400 – 3600 | (same) | N/A | 5, 10, 15, 20
43 | TDD | 3700 | S-Band | — | 3600 – 3800 | (same) | N/A | 5, 10, 15, 20
44 | TDD | 700 | APT | — | 703 – 803 | (same) | N/A | 3, 5, 10, 15, 20
45 | TDD | 1500 | L-Band (China) | 50 | 1447 – 1467 | (same) | N/A | 5, 10, 15, 20
46 | TDD | 5200 | U-NII | — | 5150 – 5925 | (same) | N/A | 10, 20
47 | TDD | 5900 | U-NII-4 | — | 5855 – 5925 | (same) | N/A | 10, 20
48 | TDD | 3500 | CBRS (US) | — | 3550 – 3700 | (same) | N/A | 5, 10, 15, 20
49 | TDD | 3600 | S-Band | 48 | 3550 – 3700 | (same) | N/A | 10, 20
50 | TDD | 1500 | L-Band (EU) | — | 1432 – 1517 | (same) | N/A | 3, 5, 10, 15, 20
51 | TDD | 1500 | Extended L-Band (EU) | — | 1427 – 1432 | (same) | N/A | 3, 5
52 | TDD | 3300 | S-Band | — | 3300 – 3400 | (same) | N/A | 5, 10, 15, 20
53 | TDD | 2400 | GlobalStar | — | 2483.5 – 2495 | (same) | N/A | 1.4, 3, 5, 10
65 | FDD | 2100 | Extended IMT | — | 1920 – 2010 | 2110 – 2200 | 190 | 5, 10, 15, 20
66 | FDD | 1700 | Extended AWS (AWS-1–3) | — | 1710 – 1780 | 2110 – 2200[2] | 400 | 1.4, 3, 5, 10, 15, 20
67 | SDL | 700 | EU 700 | — | N/A | 738 – 758 | N/A | 5, 10, 15, 20
68 | FDD | 700 | ME 700 | — | 698 – 728 | 753 – 783 | 55 | 5, 10, 15
69 | SDL | 2600 | IMT-E | — | N/A | 2570 – 2620 | N/A | 5
70 | FDD | 2000 | AWS-4 | — | 1695 – 1710 | 1995 – 2020 | 295 – 300[3] | 5, 10, 15
71 | FDD | 600 | Digital Dividend (US) | — | 663 – 698 | 617 – 652 | −46 | 5, 10, 15, 20
72 | FDD | 450 | PMR (EU) | — | 451 – 456 | 461 – 466 | 10 | 1.4, 3, 5
73 | FDD | 450 | PMR (APT) | — | 450 – 455 | 460 – 465 | 10 | 1.4, 3, 5
74 | FDD | 1500 | Lower L-Band (US) | — | 1427 – 1470 | 1475 – 1518 | 48 | 1.4, 3, 5, 10, 15, 20
75 | SDL | 1500 | L-Band (EU) | — | N/A | 1432 – 1517 | N/A | 5, 10, 15, 20
76 | SDL | 1500 | Extended L-Band (EU) | — | N/A | 1427 – 1432 | N/A | 5
85 | FDD | 700 | Extended Lower SMH | — | 698 – 716 | 728 – 746 | 30 | 5, 10
252 | SDL | 5200 | U-NII-1 | — | N/A | 5150 – 5250 | N/A | 20
255 | SDL | 5800 | U-NII-3 | — | N/A | 5725 – 5850 | N/A | 20
