5G New Radio - Explained in A Nutshell - Understand The Latest 5G Radio Specs in A Story Telling Way (5G System Architecture Book 1) - Ali A. Esswie



Table of Contents

Preface
Author Biography
Introduction
    Key Differences of the 3G/4G/5G Mobile Generations
    Why Not Simply Evolve the Fourth Generation?
    5G Requirements and Gap to LTE/4G
3GPP Standardization of the 5G New Radio
Main Building Blocks of the 5G New Radio Architecture
    5G Radio Access Network: Radio Protocols
    5G Spectrum
    5G Flexible Frame Structure
    5G Massive MIMO and Beamforming
    5G Bandwidth Parts Operation
    5G Flexible TDD Transmission
    5G Multi-Connectivity
    5G New Radio Beamformed Access
    5G User-Centric Reference Signals
Beyond 3GPP 5G Release-15
Concluding Remarks
References to 5G 3GPP Documents

Preface

This book introduces the major conceptual principles of the state-of-the-art fifth generation (5G) radio system design, including its main driving technology pillars, its radio interface design, and its disruptive quality-of-service architecture. In a simple and easy-to-digest way, the book breaks down the latest 5G radio standard specifications in a storytelling style that is accessible to undergraduates, graduate students, and early-career researchers. Accordingly, the 3GPP specifications of the 5G new radio are greatly simplified, with in-text illustrative examples and on-the-fly reviews of the prerequisite knowledge a reader may need.

Author Biography

Ali A. Esswie is a wireless research engineer with extensive industrial and academic project experience. He is currently with Nokia Bell Labs as a wireless research and standardization engineer. During 2013-2016, he was with Intel Labs and the Huawei performance group, respectively, engaging in a variety of multi-national research projects, including realistic cellular deployments. He obtained his BSc degree from Cairo University with academic honors and later received his MSc degree from Memorial University of Newfoundland, Canada, where he was awarded the graduate school fellowship for research excellence. Currently, Ali is pursuing his PhD degree in the Department of Electronic Systems at Aalborg University. He has authored many academic publications in flagship conferences and journals, in addition to invention disclosures spanning fundamental 4G/5G system design principles.

Reading Keys

'e.g.,' denotes: for example, or for instance. It introduces an example of the preceding text.

'i.e.,' denotes: that is, or in other words. It introduces another way of expressing the preceding text.

'–' introduces a side note: further explanation, or a review of prerequisite knowledge for the preceding text.

'BS', 'base-station', 'eNodeB', and 'gNB' all denote the base-station end of the 4G and 5G radios.

Introduction

Since the 1970s, a new cellular mobile generation has tended to appear roughly every ten years. Over that span, our societies have been dramatically influenced by advances in wireless communications, pushing towards a more modernized way of life. The first mobile generation (1G) was based on analogue communications. The second generation (2G) then introduced digital voice communications as well as text messaging around 1989. The third generation (3G) was developed mainly to meet the wave of data communication demand at the time, and its standards were finally frozen by the end of 2007. More specifically, the 3G technology witnessed five major updates, i.e., standard releases, each mainly concerned with system design improvements to support higher data rates. By the end of the 3G era, the technology was encompassed by what is globally known as the universal mobile telecommunications system (UMTS) – simply put, the truly mature version of the 3G cellular technology. With that background, let me first give you a quick glimpse of how the state-of-the-art 5G technology adopts a set of conceptual design principles that are quite different from those of the former generations.

Key Differences of the 3G/4G/5G Mobile Generations

Mainly, the 5G technology comes with a so-called new radio. The 5G new radio implies that the radio interface becomes much more agile and programmable in time, equipped with more sophisticated over-the-air communication and numerous technology variations that were not available in the former 4G and 3G systems. In Table 1, the main differences between the major cellular standards are summarized – let's lightly go through them one by one.

6
Table 1: Major differences between 3G, 4G and 5G cellular systems.

Item                       | 3G           | 4G           | 5G
---------------------------|--------------|--------------|----------------------------------
Downlink waveform          | CDMA         | OFDM         | OFDM
Uplink waveform            | CDMA         | SC-FDMA      | CP-OFDM / DFT-s-OFDM
Channel coding             | Turbo        | Turbo        | LDPC (data) / Polar (control)
Transmission time interval | 10/2 ms      | 1 ms         | Flexible {0.143, 0.5, 1, ...} ms
Beamforming                | N/A          | Data         | Data + Control
Bandwidth                  | 5 MHz        | 1.4-20 MHz   | Up to 100/400 MHz
Quality of service         | Bearer-based | Bearer-based | Flow/packet-based
Network slicing            | N/A          | N/A          | Supported
Fast transmissions         | N/A          | N/A          | Connectionless
Cloud services             | N/A          | N/A          | In-built

Twenty years ago, the 3G networks adopted code division multiple access (CDMA) to multiplex different users over the same time and frequency resources, where each user is assigned a specific spreading code with which its data is encoded and decoded. However, CDMA has been proven to provide limited capacity and scheduling flexibility due to the non-zero correlation between the spreading codes.

– basically, we do not have as many perfectly orthogonal codes as the number of connected users.

The 4G technology later upgraded the multiplexing scheme to orthogonal frequency division multiple access (OFDMA), where users are multiplexed over several time and frequency resources by using an orthogonal subcarrier design.

– imagine the whole bandwidth divided into many smaller pipelines; each user may be assigned a non-contiguous (OFDM) or contiguous (single-carrier FDMA (SC-FDMA)) set of these bandwidth pipelines.
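The orthogonality that gives OFDM its advantage can be checked numerically. The sketch below (with an illustrative FFT size, not a 3GPP parameter) verifies that two distinct OFDM subcarriers have zero correlation over one symbol duration, which is exactly what lets users occupy separate "pipelines" without mutual interference:

```python
import numpy as np

# One OFDM symbol: N samples, where subcarrier k is the complex
# exponential exp(j*2*pi*k*n/N). Distinct subcarriers are orthogonal
# over the symbol duration.
N = 64  # FFT size (illustrative)

def subcarrier(k, N):
    n = np.arange(N)
    return np.exp(2j * np.pi * k * n / N)

s3, s7 = subcarrier(3, N), subcarrier(7, N)

# Normalized inner products: 1.0 with itself, ~0 with a different subcarrier.
same = abs(np.vdot(s3, s3)) / N
cross = abs(np.vdot(s3, s7)) / N
print(round(same, 6), round(cross, 6))  # → 1.0 0.0
```

CDMA spreading codes, in contrast, generally cannot stay perfectly orthogonal for arbitrarily many users, which is the capacity limitation described above.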

However, with the 5G new radio, it was decided – at least up to the current standardization activity – that OFDM is still the best fit for multiplexing different concurrent user transmissions. That was in fact quite disappointing to many researchers, since the research line of non-orthogonal multiplexing techniques is rapidly evolving; apparently, however, it is not yet as mature as OFDM.

One of the major changes introduced with the 5G new radio is the adoption of a new channel coding scheme – well, not really new within the academic community. Turbo coding was extensively utilized in the 3G and 4G systems, and it is generally one of the best forward error correction techniques available to date.

– a forward error correction technique is a way to correct unintended errors in the received signal, mainly by adding redundant data to the actual transmitted information; e.g., to transmit a '1', a rate-1/3 Turbo encoder sends three bits, the information bit plus two extra bits, purely for error correction at the receiver side.
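As a toy illustration of forward error correction (far simpler than an actual Turbo or LDPC code), a rate-1/3 repetition code with majority-vote decoding shows how redundancy lets the receiver correct channel errors:

```python
# A toy rate-1/3 repetition code: the simplest possible forward error
# correction. Real Turbo/LDPC codes add *structured* parity bits rather
# than plain copies, but the principle is the same: redundancy lets the
# receiver recover from channel errors.

def encode(bits):
    return [b for b in bits for _ in range(3)]  # repeat each bit 3x

def decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

msg = [1, 0, 1, 1]
tx = encode(msg)   # 12 coded bits on air for 4 information bits
rx = tx[:]
rx[0] ^= 1         # the channel flips one bit
print(decode(rx))  # → [1, 0, 1, 1], the single error is corrected
```

The cost of the redundancy is rate: three channel bits carry one information bit, which is why practical codes use cleverer structures than repetition.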

Furthermore, with the 4G radio technology, the Turbo decoder can be dynamically reconfigured at the receiver side to trade reception performance against decoding complexity, based on the predicted size of the communication error caused by the varying wireless channel in between, i.e., decoding complexity and time are increased only when needed to maintain the desired reception performance. Well, by now, I may have convinced you that 5G should use Turbo coding as well? In fact, it will NOT. The 5G new radio has adopted the low-density parity check (LDPC) code for data transmissions and Polar codes for the control plane, i.e., control signaling transmissions. Let's divide the thoughts here.

– LDPC is foreseen to address some shortcomings of Turbo coding that matter greatly to the 5G new radio, especially for the 5G broadband use cases and reliability targets (no worries, keep reading and all shall be addressed later). LDPC provides a higher decoding throughput with much lower decoding complexity compared to Turbo decoding. LDPC also shows better flexibility in rate matching operations.

– the rate matching of a code refers to its ability to match the number of bits of one code block, i.e., one coded segment of data after adding the redundant bits, to the maximum allowed number of bits in the current allocation.
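A minimal sketch of the rate-matching idea follows. This is a toy circular-buffer scheme, not the actual 3GPP algorithm, but it shows how a coded block of any size can be fit onto exactly the number of bits the current allocation carries:

```python
# Rate matching in miniature: fit a coded block onto the exact number
# of bits the allocation can carry, by repeating bits from a circular
# buffer when there is room, or truncating (puncturing) when there isn't.
def rate_match(coded_bits, target_len):
    return [coded_bits[i % len(coded_bits)] for i in range(target_len)]

block = [1, 0, 1, 1, 0]
print(rate_match(block, 3))  # puncture → [1, 0, 1]
print(rate_match(block, 8))  # repeat   → [1, 0, 1, 1, 0, 1, 0, 1]
```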

Accordingly, LDPC is highly beneficial for broadband services over the 5G new radio. On the other side, for control channels, Polar codes are quite mature in the literature and are known as the first coding technique proven to approach the theoretical capacity limit. The intuition behind using Polar codes in the control plane of the 5G new radio underlines the significance of having near-optimal control communications.

– imagine a broadband system with significantly degraded control communications: both users and the network would be capable of very high data rates, but not well enough coordinated/controlled to achieve them, which is a great loss accordingly.

Stressing this fact, the antenna beamforming capabilities coming with the 5G new radio are extended to include the control channels, instead of only the data channels as in the 4G systems.

Furthermore, the 5G new radio is envisioned to have ultimate timing flexibility. You may have already heard that 5G supports flexible numerologies in time and frequency. This means that the 5G technology can be fit to different quality-of-service requirements, such as latency-critical and broadband communications, over a single radio spectrum. Let's consider the time domain for now. Both 3G and 4G systems support a quite large and rigid transmission time interval.

– first, a transmission time interval is the time granularity at which the system performs an uplink or downlink transmission, i.e., the quantized scheduling unit in time.

3G started with a transmission time interval of 10 milliseconds, which was reduced to 2 milliseconds in later versions. With 4G, the transmission time interval is set to 1 millisecond. In that sense, when latency-critical traffic that needs to be delivered as soon as possible arrives at the transmitter at a random point in time, it will be buffered until the next transmission opportunity, which may be due 1 or 2 milliseconds later. That is far too much wasted time against the 5G latency requirements, and here comes the power of the 5G new radio. 5G utilizes variable transmission time durations that can be as short as a single OFDM symbol, i.e., about 0.0714 milliseconds, or 2 OFDM symbols, i.e., about 0.143 milliseconds, in addition to the conventional 14 OFDM symbols, i.e., 1 millisecond. This way, latency-critical users can be served with the smaller transmission time intervals, while data-rate-hungry applications are supported with the longer durations.
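These durations follow directly from the slot structure: at 15 kHz subcarrier spacing, one 14-symbol slot lasts 1 ms, so a mini-slot of 1 or 2 symbols lasts roughly 0.07 or 0.14 ms. A quick check:

```python
# TTI durations at the 15 kHz subcarrier spacing, where one slot of
# 14 OFDM symbols lasts 1 ms. Mini-slots allow scheduling over as few
# as 1 or 2 symbols, which is where the ~0.07/0.14 ms figures come from.
SLOT_MS = 1.0
SYMBOLS_PER_SLOT = 14

def tti_ms(num_symbols):
    return SLOT_MS * num_symbols / SYMBOLS_PER_SLOT

for n in (1, 2, 7, 14):
    print(f"{n:2d} symbols -> {tti_ms(n):.4f} ms")
# →  1 symbols -> 0.0714 ms
#    2 symbols -> 0.1429 ms
#    7 symbols -> 0.5000 ms
#   14 symbols -> 1.0000 ms
```

Higher subcarrier spacings (30, 60, 120 kHz) shrink the slot itself, shortening all of these durations proportionally.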

Moreover, for such latency-sensitive applications, the 5G new radio supports connectionless communications for short packet transmissions, i.e., users can immediately transmit their respective data payload without waiting for control signaling such as a scheduling grant. These schemes fall under the umbrella of grant-free transmission, especially in the uplink direction.

Additionally, network slicing and flow-based quality of service are two major breakthroughs of the 5G radio technology. Generally, network slicing is very similar in approach to the software-defined networking and network functions virtualization techniques of fixed networks. With network slicing, the physical network can be virtually partitioned, i.e., into different virtual sub-networks or slices, based on specific quality-of-service needs. It is mainly envisioned to be integrated within the core implementation of 5G.

– one simple example of 5G network slicing: suppose two applications run on a single cell phone, a latency-critical vehicle-to-vehicle tracking app and a broadband live stream of a football game. The former requires a strict radio latency but not necessarily a high throughput, which is completely the opposite of the latter. Thus, the network can be virtually partitioned such that the latency-critical traffic gets through the fastest network paths, i.e., not necessarily high-bandwidth ones, while the other traffic can be a little more relaxed in time but requires higher-bandwidth paths.

In conclusion, the 5G new radio comes with revolutionary technology spanning all of its system components. The key improvement of the 5G technology, however, lies in its generic flexibility: multiplexing of different quality-of-service demands on the same spectrum, adaptation of the time and frequency transmission numerology to the instantaneous need, and virtual adaptation of the network structure to the end-to-end user targets.

However, I bet you may have a fundamental question: why could we not just evolve the 4G radio technology to magically get the 5G radio standards? We have done it before from 3G to 4G, so can we do it once again? The answer is absolutely NOT.

Why Not Simply Evolve the 4G?

Well, each mobile generation is usually driven by its killer applications and use cases, which explain, simply put, why the world needs a new mobile technology. 1G was motivated by voice communications. Later, 2G was driven by improved voice and text communications. 3G then came with even better voice and affordable data connectivity, arriving when people and technology were both ready for data connectivity. Still, 3G did not satisfy the hunger for internet connectivity. Thus, 4G followed the trend to offer much higher-capacity mobile networks. That is, the main aim was to dramatically increase the network and user data rates; in other words, data connectivity was the main and single 4G target to pursue. This led the 4G standardization to combine a newly improved core network structure, the so-called evolved packet core (EPC), with an evolved radio technology. As the name suggests, it was an evolution of the 3G technology, i.e., improved transmission techniques, core hardware and interfaces, and more bandwidth allocation, but without changing the baseline design of the network. That is the fundamental difference between the 4G and 5G technologies.

However, before answering why 5G is different, let's first discuss what the driving applications of the 5G technology are.

5G introduces the term over-the-top (OTT) vertical industries. OTTs denote the industrial partners, i.e., companies, market segments, etc., that have recently become involved in the 5G technology standardization, although they currently run businesses independent of cellular communications.

– think of food factories, health-care facilities, automotive manufacturers, gaming and virtual reality suppliers, and many more.

These partners are highly involved in the 5G standardization process with a single target: to transform their physical wired connectivity into purely wireless connectivity with ultimate flexibility, reliability, and broadband rates. This imposes a lot of challenges on the 5G radio technology, since we no longer have a single objective to optimize the network towards.

– imagine a surgeon performing a remote operation on a patient through a haptic robotic arm, connected through the radio interface to the control unit at the doctor's office. For such a critical use case, the radio interface is required to have minimal radio latency and ultimately high link reliability, i.e., the data communication link must be guaranteed without intermediate interruptions, along with extreme data rates.

Such a triangle of requirements (latency – reliability – capacity) is fundamentally impossible to achieve simultaneously on a single radio interface. If you recall, the optimal system capacity is the Shannon limit; however, it comes with the assumption of an infinite communication latency.
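For reference, the Shannon limit mentioned above is C = B * log2(1 + SNR). The sketch below evaluates it for an illustrative bandwidth and SNR (assumed numbers, not 5G-specific):

```python
import math

# Shannon capacity C = B * log2(1 + SNR): the error-free rate limit,
# achievable only with arbitrarily long codewords, i.e., unbounded
# latency. This is why latency, reliability, and capacity cannot all
# be maximized at once on a single link.
def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers (assumed): 100 MHz of bandwidth at 20 dB SNR.
snr = 10 ** (20 / 10)  # 20 dB -> linear
c = shannon_capacity_bps(100e6, snr)
print(f"{c / 1e9:.2f} Gbps")  # → 0.67 Gbps
```

Note that the formula says nothing about *when* the bits arrive; finite-latency, finite-blocklength transmission always operates below this limit.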

Surprisingly, this set of requirements is exactly what the 5G standards promise. They can be summarized by the following three main 5G service classes:

Ultra-reliable and low latency communications (URLLC)

URLLC applications require extremely fast, sporadic transmissions of small data payloads with extremely high reliability at any time.

– think of a robotic machine that transmits, at random times, small update packets towards a controller to report its position and seek instructions for its next movements. If these packets are dropped by the wireless channel in between, or arrive too late, an outage occurs at this machine. Herein, outage simply means that the machine is out of service, i.e., unaware of its next operation, due to a lack of communication reliability to the controller. This is what the URLLC service class is concerned with: how does the 5G radio interface offer highly reliable, extremely low-latency, sporadic transmissions?

Enhanced mobile broadband (eMBB)

The eMBB service class is quite similar to what we formerly had with the 4G systems, except that the 5G targets demand far higher data rates. eMBB denotes applications that require extremely high, i.e., broadband, communication speeds, such as video streaming services, online gaming, internet teleconferencing, and many more. Basically, the dominant factor towards the 5G eMBB targets is the channel bandwidth: larger bandwidth allocations afford higher capacity.

Massive machine-type communication (mMTC)

This 5G use case is a crucial and clear differentiation from the 4G technology. mMTC characterizes a massive number of cheap, low-power, low-energy devices connected to the radio interface, all transmitting data payloads with random arrivals and for very short periods of time.

– imagine a multi-sensor network where thousands of sensors, equipped with radio frequency modules, exchange, through 4G links, updates about air temperature, pressure, humidity, etc., with a control center on the other side of the globe. These sensors have extremely low power capability and need to transmit their payloads as quickly as possible in order to return to idle (sleep) again. Herein, the radio interface needs to introduce special access techniques, resource allocations, and scheduling algorithms to cope with such newly introduced targets. Although very recent versions of the 4G networks have introduced some system enhancements for mMTC, such as narrowband resource allocations for mMTC and internet of things (IoT) devices, other improvements are simply not feasible due to conflicts with the baseline 4G system structure.

– as an example, when a device attempts to access the radio interface of the 4G network, it must perform the well-known random-access procedure. First, it waits for the first available random-access opportunity announced by the network; note that during this period the device is running at full power. Then, it goes through the four-step random-access operation of the 4G radio technology, which may or may not succeed. Upon failure, the device repeats the access attempt after some random back-off time. This implies that devices must first connect to the radio interface before being able to transfer their data. With mMTC, the data payload to be transmitted may be extremely small, i.e., several bytes, while the 4G signaling overhead for random access may take hundreds of bytes. That is a significant loss of network resources and device energy, respectively. However, that is how the 4G technology works to date. The 5G new radio, in contrast, offers connectionless packet transmissions where devices just wake up, transmit their payload, and immediately fall back asleep again – that simple, although not simple at all from the engineering perspective.
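A back-of-the-envelope calculation makes the overhead problem concrete. The byte counts below are illustrative assumptions, not 3GPP figures:

```python
# Rough comparison of connection-setup overhead versus a tiny mMTC
# payload. Both byte counts are illustrative assumptions.
PAYLOAD_BYTES = 20          # a few sensor readings
SETUP_OVERHEAD_BYTES = 300  # random access + connection signaling (assumed)

grant_based_total = SETUP_OVERHEAD_BYTES + PAYLOAD_BYTES
grant_free_total = PAYLOAD_BYTES  # connectionless: just send the payload

efficiency_grant_based = PAYLOAD_BYTES / grant_based_total
print(f"useful fraction with setup: {efficiency_grant_based:.1%}")  # → 6.2%
```

With these assumed numbers, over 90% of the transmitted bytes (and the corresponding device energy) would be spent on signaling rather than data, which is the waste that grant-free transmission targets.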

Thus, if you still recall the exciting question of this part, i.e., why can we not simply evolve the 4G networks to reach the 5G targets?, the long-awaited answer is: we absolutely cannot. The main driving applications of 5G are completely different from those of the former 4G. The set of 5G requirements simply challenges the baseline system design of the 4G systems, such that reaching them with the 4G technology is neither economically nor technically feasible.

Well, let's organize things a bit. Next, the formal standard 5G requirements are presented in more detail.

5G Requirements and Gap to LTE/4G

There are several global and continental alliances considered pioneers in defining the 5G targets. Some of these are operator-led; others are hybrids of operators, network vendors, and even academic partners. Examples are as follows:

Next generation mobile networks (NGMN): an operator-led specification alliance that targets the needs of the participating telecom operators by setting the appropriate requirements for the next generation of mobile networks.

International telecommunication union radiocommunication sector (ITU-R): a global union that seeks to ensure we communicate over interference-free spectrum. In other words, its major task is to propose the best practices for efficiently using the available spectrum of each mobile generation. As part of its vision towards 5G communications, it developed a visionary specification set called international mobile telecommunications (IMT)-2020. IMT-2020 sets a future specification set for the standalone 5G technology envisioned by 2020.

– as of now, 5G systems cannot really operate independently, i.e., standalone, without some help from the former 4G architecture, so we may call this the pre-5G launch.

The 3rd Generation Partnership Project (3GPP): the widely adopted standardization partner of each mobile generation; more details about 3GPP follow in the next section. 3GPP unites the seven major telecommunication standards development organizations to develop stable 5G operation specifications and to reach the target 5G requirements.

As you can expect by now, these organizations may have set different requirements for 5G. That is right, but without extreme variation. Herein, we only go through the official 3GPP main requirements for 5G, as follows.

- 5G radio latency target: 1 ms for user plane and 10 ms for control plane

Today's 4G mobile networks typically offer user-plane radio latencies of a few tens of milliseconds, which can grow to hundreds of milliseconds under load. The 5G new radio apparently sets a much tougher radio latency requirement. But first, let me explain what the control-plane and user-plane latencies indicate.

– the user-plane latency is the time from the point a data packet arrives at the buffer of either the base-station or the user (waiting to be transmitted) to the point when that packet is successfully decoded by its intended receiver. This latency includes the scheduling delay (some packets are not transmitted instantly), the processing and hardware delays at the transmitter and receiver, the propagation delay, and, most importantly, the re-transmission delays: due to the time-varying wireless fading channel, some packets do not get through the first time, so transmitters send them again (possibly several times), with the same or a different transmission configuration, until the packet is either successfully decoded or dropped after all attempts have failed. The control-plane latency, on the other hand, is the time from the point a device is in idle mode (sleeping) until it becomes active. If you recall, in 4G systems this delay contains multiple message exchanges between the base-station and the user for random access, security checks, quality-of-service setup, and many more wake-up operations.

Thus, these latency requirements cannot be fundamentally satisfied by the 4G technology due to its rigid frame structure, as we will discuss later in more detail.
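To see how tight a 1 ms user-plane target is, the components above can be summed in a toy latency budget. All component values below are illustrative assumptions, not measured or standardized figures:

```python
# Toy user-plane latency budget (all values are illustrative
# assumptions). The point: the 1 ms target must absorb every component,
# and a single HARQ retransmission is the largest item by far.
budget_ms = {
    "scheduling/queueing": 0.10,
    "tx processing":       0.10,
    "air transmission":    0.14,  # one 2-symbol mini-slot
    "propagation":         0.01,
    "rx processing":       0.15,
}
first_try = sum(budget_ms.values())
with_one_harq_retx = first_try + 0.5  # assumed HARQ round-trip time
print(f"{first_try:.2f} ms first attempt, "
      f"{with_one_harq_retx:.2f} ms with one retransmission")
# → 0.50 ms first attempt, 1.00 ms with one retransmission
```

Under these assumptions a single retransmission already exhausts the budget, which is why 5G shortens both the TTI and the retransmission round-trip rather than just one of them.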

- 5G peak data rate: 20 Gbps for downlink and 10 Gbps for uplink

Several specification groups called for a 5G peak data rate 1000x that of 4G. Recalling that the latest enhanced versions of the 4G systems can maximally reach a peak data rate of 1-3 Gbps, a 5G system would then have to reach 1000-3000 Gbps, which is a very hypothetical case. Thus, 3GPP has adopted a more realistic view to define the target downlink and uplink maximum data rates. But first, how is the standard peak data rate estimated?

– the peak data rate is the theoretical, error-free, maximum-possible user data rate that a device should achieve when it has been granted all of the system's radio resources, e.g., the single-active-user case.

Such extreme data rates can only be achieved by a combination of capacity-boosting technologies such as massive multiple-input multiple-output (MIMO) antennas, millimeter-wave (mmWave) communications, extremely large bandwidth allocations, and many more.

– no worries, we will go through these later.

– recall that when we go higher in the frequency spectrum, the achievable capacity can increase greatly; however, this comes with highly reduced radio coverage and challenging radio propagation characteristics, such as extreme path sparsity, phase variance, and more.
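A simplified peak-rate estimate can be sketched as layers x modulation bits x coding rate x usable symbol rate. This is a rough approximation with assumed parameters (90% usable bandwidth, illustrative carrier configuration), not the exact 3GPP TS 38.306 formula, which adds numerology and overhead factors:

```python
# Simplified peak data rate: spatial layers * bits per modulation
# symbol * coding rate * subcarriers * OFDM symbols per second.
def peak_rate_gbps(layers, mod_bits, code_rate, bandwidth_hz, scs_hz=120e3):
    slots_per_sec = 1000 * scs_hz / 15e3       # 15 kHz SCS -> 1000 slots/s
    symbols_per_sec = 14 * slots_per_sec       # 14 OFDM symbols per slot
    subcarriers = bandwidth_hz * 0.9 / scs_hz  # ~90% usable (assumed)
    return layers * mod_bits * code_rate * subcarriers * symbols_per_sec / 1e9

# Illustrative mmWave carrier: 400 MHz, 4 layers, 256-QAM (8 bits/symbol).
print(f"{peak_rate_gbps(4, 8, 0.9, 400e6):.1f} Gbps")  # → 9.7 Gbps
```

Aggregating a few such carriers is how the 20 Gbps downlink target becomes plausible, rather than any single trick.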

- 5G mobility interruption time target: 0 ms

The mobility interruption time is the shortest time supported by the system during which a device cannot exchange any data packets, for instance during a handover from a serving BS to a new one. A 0 ms target means that the 5G new radio specifies that devices should always be connected to the network, even during handovers.

In 4G, the mobility interruption time is usually between 10 and 15 ms, since a device first fully disconnects from the serving BS and then attempts a 'reliable and pre-configured' connection to the target BS.

– the intuition behind this 4G handover procedure was to minimize the control overhead at the user end, since it only needs to connect to a single BS at a time.

Aligned with the former 3G specifications, the 5G new radio re-introduces the make-before-break concept, where the device establishes its secondary connection before terminating its primary one, given that user devices now have greater signal processing and power capabilities.

– as you observe, extensive care has been taken in every possible way to achieve a very low radio latency across the different user states, i.e., idle mode, active mode, and handover transitions. This shows that the 5G new radio needs to be a reliable means of wireless transmission regardless of the system and/or device state, which brings us to the ultimate target of the 5G new radio: reliability.

- 5G reliability target: 99.999% within 1 ms

The most challenging requirement of the 5G new radio is to reach a reliability target of 99.999% within 1 ms. For completeness, let's first define what reliability really is.

– constrained reliability in 5G denotes the probability that an arbitrary packet is successfully decoded at its intended receiver, conditioned on it being decoded within a given latency bound.

Thus, the 5G new radio requires that 99.999% of the time, the latency-critical, i.e., URLLC, packets are received and successfully decoded within just 1 ms. This single 5G system requirement challenges the whole engineering baseline of the prior 4G radio technology. For more completeness, some researchers also define the radio reliability through the outage probability, 1 - 0.99999 = 0.00001, which in a simple sense denotes how many packet failures the system can tolerate. Thus, 5G allows at most one packet in every 100,000 transmitted packets to either get dropped or be successfully decoded but NOT within the 1 ms deadline – that is really challenging.
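The tension between reliability and latency can be sketched with independent retransmissions: k attempts at per-attempt block error rate p leave a residual outage of p to the power k, but each attempt consumes part of the 1 ms budget. The error rate below is illustrative, not a specification value:

```python
# Residual outage after k independent transmission attempts, each
# failing with probability p (the per-attempt block error rate, BLER).
def residual_outage(bler, attempts):
    return bler ** attempts

bler = 0.1  # 10% per-attempt error rate (illustrative assumption)
for k in (1, 2, 5):
    print(f"{k} attempt(s): outage ~ {residual_outage(bler, k):.0e}")
# → 1 attempt(s): outage ~ 1e-01
#   2 attempt(s): outage ~ 1e-02
#   5 attempt(s): outage ~ 1e-05
```

At this assumed BLER, five attempts reach the 1e-5 outage target, but only if all five fit inside the 1 ms latency budget; this is why URLLC pushes both very short TTIs and more conservative first transmissions.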

But do we really need such tight wireless reliability? The answer is YES, since we are increasingly building our daily-life activities on cellular communications, as explained in detail over the previous sub-sections. The tougher question now is: how can we actually achieve it over wireless connectivity? – well, we will find out over the next Sections. Below, several other self-explanatory 5G new radio requirements are presented; they all amount to having more of everything which 4G radios support.

- 5G spectrum target: carrier frequencies up to 100 GHz, with aggregated bandwidths up to 6.4 GHz.

- 5G mobility target: supported user speeds up to 500 km/h.

- 5G connection density target: up to 1 million connected devices per km².

So, to wrap up, I would like to consolidate why the 4G technology is not able to reach the 5G targets. As we have gone through the 5G requirements, some targets are just more of what was already there with 4G; however, there are other applications and requirements that are either not specified for, or cannot be supported by, the 4G radio technology, such that a newly renovated system architecture is definitely needed.

Over the upcoming sections, we will go through the 5G radio protocols, transmission strategies, system architectures and much more, with a moderate level of detail, combined with concrete conclusions. But first, let's explain which entity actually standardizes our cellular networks, and simply, how they do it.

3GPP Standardization of the 5G New Radio

3GPP was created in 1998 as a joint standardization activity between the USA, Europe, Japan, and Korea. As of now, 3GPP has become the leading standardization body for cellular communications, and it is taking full leadership of the 5G standardization. 3GPP is topped by a project coordination group (PCG), whose members manage three major technical specification groups (TSGs) – think of a TSG as a specialized committee that is concerned only with standardizing a certain system partition. Accordingly, each TSG is divided into several working groups (WGs), which look more closely, in a more specialized and narrow way, into standardizing specific system aspects of their parent TSG. Furthermore, you may have heard the term work items (WIs) before: they define the technical work and studies to be performed within each WG. In a general sense, the 3GPP TSGs usually hold 4 plenary meetings per year, typically in March, June, September and December, while the corresponding WGs hold one or two meetings between every two successive TSG meetings. So, the WGs propose and study the technical aspects of the system design principles within their scope and provide recommendations to their parent TSGs for final decisions.

Hence, the general structure of the 3GPP organization is as follows:

[Figure: the 3GPP organizational structure. Source: 3GPP]

With this, let's go back to the main question here: does 3GPP alone standardize our cellular networks? The answer is NO; 3GPP only produces the technical reports (TRs) and technical specifications (TSs) of each mobile generation.

– a TR is essentially a feasibility study that a given 3GPP WG has performed; it can be seen as an academic paper that evaluates the performance of a given system design aspect. A TS, on the other hand, denotes a certain set of system specifications which should be adopted to reach the associated performance targets.

Well then, 3GPP has several regional standardization partners, such as ARIB (Japan), ATIS (North America), CCSA (China), ETSI (Europe), TSDSI (India), TTA (Korea), and TTC (Japan). These partners transpose, and slightly modify, the 3GPP TSs to align them with regional demands.

– for that very specific reason, you may have experienced that a mobile phone manufactured in Europe suffers some feature degradation while connected to a Korean network, for example, simply because the two standards are slightly different. However, to preserve global and transparent connectivity, this should not affect major mobile-device operation.

Finally, every one to three years of hard work by all the 3GPP WGs, 3GPP issues a stable release of the mobile generation under standardization.

– a 3GPP release indicates a full, 3GPP-approved system specification set covering all aspects of the current mobile generation, such as the radio design, core design, interfaces, IP connectivity, etc.

For completeness, the 3GPP specification documents of 4G are numbered in the 36.xxx series, while the 5G TS documents use the 38.xxx series. Examples are given below.

38.1xx: 5G user and BS requirements

38.2xx: 5G physical layer specifications

38.3xx: 5G protocol stack specifications

38.4xx: 5G logical and physical interfaces
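To illustrate the numbering scheme, here is a hypothetical helper (the mapping covers only the four example sub-series listed above; 3GPP defines more):

```python
# Hypothetical classifier for the 38.xxx (5G NR) specification series.
# Only the four example sub-series from the text are mapped.
SERIES_38 = {
    "1": "user and BS requirements",
    "2": "physical layer specifications",
    "3": "protocol stack specifications",
    "4": "logical and physical interfaces",
}

def classify_spec(number: str) -> str:
    """Map a spec number like '38.211' to its 38.2xx topic."""
    series, sub = number.split(".")
    if series != "38":
        return "not a 5G NR specification"  # e.g., 36.xxx is 4G/LTE
    return SERIES_38.get(sub[0], "other 38-series topic")

print(classify_spec("38.211"))  # physical layer specifications
print(classify_spec("36.211"))  # not a 5G NR specification
```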

By now, the first complete 3GPP release for 5G, Release-15, has been out since September 2018. Release-15 identifies the main building blocks of the 5G system architecture (which we will basically go through over the next Sections) and supports operation in a standalone mode. However, the early 5G deployments, envisioned during 2019 and early 2020, shall rely on the existing 4G structures.

Main Building Blocks of the 5G New Radio Architecture

As of now, we have answered why we need new radio and core structures for 5G cellular communications. The 5G system architecture is envisioned to be as flexible, scalable, efficient, and reliable as possible. Flexible denotes that the 5G radio and core design should be both adaptable and programmable in time, to meet the diverse, and sometimes conflicting, requirements of the supported use cases. Examples, as discussed in previous Sections, include supporting broadband transmissions with large payload sizes and extreme data rates, while simultaneously supporting a massive number of connectionless devices with sporadic and very small payloads. Scalability is characterized by the system's ability to host a very large number of connected devices at an arbitrary time. This may seem easily achieved by just expanding the operating spectrum, i.e., the communication bandwidth; however, the mix of different traffic patterns, transmitted packet sizes, connection-based and connectionless communications, and many more use cases sets a fundamental limit on the system capacity.

– as an example of this limitation, very sporadic traffic patterns over the radio interface do in fact disturb the link adaptation (LA) process. LA means that the radio interface dynamically adjusts, in time, the transmission configuration, such as the modulation and coding schemes, based on each link's quality and radio conditions. With such random, small-payload traffic, where transmissions are very short and sporadic in time, link qualities can change quite aggressively from one time unit to the next. Thus, LA has to be correspondingly faster and more accurate; otherwise, the LA process would be dictated by outdated link information that is no longer valid by the time the LA decision is applied.

On another side, the efficiency of the 5G new radio carries exactly the same meaning as defined for the 4G technology: having efficient radio and core interfaces is still essential to achieve maximum spectral efficiency.

– spectral efficiency is a measure of how well the system's radio resources, i.e., spectrum, are utilized. High spectral efficiency denotes that the radio interface is capable of achieving high capacity with the lowest possible amount of spectrum.

Finally, reliability is a key design principle for 5G communications, and it changes the way we assess the performance of the 5G system. For instance, with 4G technology we mostly cared about the system's average coverage and block error rates, while with 5G systems we mostly care about the specific percentile of users which could, or could not, achieve a certain reliability bound, regardless of each user's coverage and channel conditions. This adds the requirement of ultimate radio flexibility, in order to achieve the same reliability level, at the same time, for all reliability-critical users across different radio conditions.

Over the upcoming sub-sections, we will go through the main blocks of the 5G radio architecture in more technical detail.

5G Radio Access Network: Radio Protocols

In a generic sense, the 5G new radio follows a radio access network (RAN) structure relatively similar to the 4G one. The gNB, defined as the next generation NodeB, is the 5G counterpart of the 4G enhanced NodeB (eNB), i.e., the base-station that offers 5G new radio connectivity on both the user and control planes. As referred to in previous Sections, dual connectivity with 4G is essential during the early 5G deployments. Why? Because it would be too cost-inefficient to roll out a full 5G system at once, especially given the lack of its main use cases in the meantime. Thus, the ng-eNB and eLTE-eNB terms are defined for the next generation (NG) base-stations which offer connectivity between the 5G new radio interface and either the 4G or 5G core structures.

The gNBs are then inter-connected through the logical Xn interfaces, which correspond to the former 4G X2 interfaces; both have exactly the same functionality.

– a logical Xn interface means that the connectivity may not be a direct physical connection, but may rather run over IP connectivity.

As will be explained in greater detail in the second note series of this book, the gNBs are connected to the 5G core through the NG core interfaces. The first core entities to connect to are the access and mobility management function (AMF), through the NG-control (NG-C) interface, and the user plane function (UPF), through the NG-user (NG-U) interface.

– for now, do not worry about why these 5G core hardware entities are entitled functions instead of devices or entities.

Thus, on the radio side, a 5G gNB is responsible for providing the end radio connections, admission control, handovers, lower-level network slicing, connectivity across multiple radio access technologies (RATs), user and resource multiplexing, and many more functions.

On the core side, the AMF, as the name suggests, is the first core entity a 5G user connects to, in order to have its connection authorized, registered within the network, and ciphered. The UPF, in turn, is the core entity responsible for user packet routing and forwarding between the core ingress and egress points, and for handling each user's quality-of-service requirements.

– think of it this way: the AMF manages the users' control plane, while the UPF administrates their respective user-data-plane transmissions. The 5G core structure will be explained in depth in the second series of this book.

Additionally, the 5G technology further consolidates the importance of the centralized approach, where the RAN architecture is split into centralized base-band units (CUs) and many distributed radio units (DUs).

– a CU represents the administrative unit that processes the data received from, or transmitted by, its associated DUs, while a DU can either be responsible only for the antenna transmissions or also carry a minor set of signal processing. Each CU can accordingly be associated with several DUs.

Such a basic cloud-RAN approach has been well known since the 3G and 4G systems; however, it did not achieve significant success then, due to the shortage of application demand at the time, which made it economically inefficient.

So, why is such a centralized approach quite important for the 5G new radio? Basically, because it enables vital performance-improving techniques for the main use cases associated with the 5G system. Having a centralized unit with access to the information from multiple DUs, and hence potentially hundreds of users at the same time, makes interference coordination, the exchange of channel state information between DUs, and much more efficient coordinated user scheduling considerably easier. These techniques could not be supported with the 4G radio interface because it would be too costly to share such control information between eNBs – here, too costly means it would require a significant amount of control-signaling overhead.

Thus, with the 5G new radio, a diversity of functional split options has been introduced between the CUs and DUs.

– a functional split means that a specific set of the radio processing jobs can be moved and processed at different spots of the network, either at the DUs or at the CUs. For example, in the original LTE standards, the baseline communication structure is the fully distributed one, i.e., no CUs: everything, from receiving the analogue signal on the antenna array, through channel estimation, packet retransmissions, resource and user scheduling and more, occurs at the LTE eNB processing units. With the 5G functional split approach, however, a single gNB can be divided into a single CU and multiple DUs, all interconnected through high-data-rate interfaces, e.g., fiber connections. Then, latency-critical traffic can be fully processed at the DUs for minimum end-to-end latency, while latency-relaxed traffic can be processed at the CU for better coordination among all involved DUs. This defines two main options for the functional split: the lower-layer and the higher-layer split. The former indicates that the physical layer of the 5G new radio is located at the DUs, while the other layers of the protocol stack are processed at the CU. The latter option implies that both Layer-1 (physical) and a sub-set of the Layer-2 (medium access control (MAC)) functionalities are processed at the DUs. The advantage of the latter option is that it requires much lower transport overhead between the CU and its associated DUs, since the data exchanged is based on Layer-2 information.

– the physical layer (Layer-1) of the protocol stack usually carries the most significant portion of the control overhead.

So far, we have been discussing the functional layer split of the 5G new radio. Accordingly, let's define what a layer actually is, and how the data and control information payloads are carried through the radio and core interfaces, respectively.

You have probably heard the terms access and non-access stratum (AS and NAS) from the LTE standards. The NAS covers the control information flows between a 5G user and the 5G core interfaces, and how such information is understood across the different parts of the network. The AS, in turn, covers the corresponding functionality for the flows over the radio interface. In other words, the AS and NAS define how the data and control planes shall work, including all the network processing units that are involved in each functionality. This is where the term protocol stack shows up: a protocol stack describes the necessary, ordered steps a user or network device performs to carry and understand either a control or a data flow. 5G users hold both the radio protocol stack (for data transmissions over the radio) and the NAS protocol stack (for control transmissions towards the core). Herein, we will briefly go through the radio protocol stack only.

Layer-1: the physical layer of the protocol stack is basically responsible for initial data reception and transmission, including bit modulation, encoding, power control, error detection and correction, and many more operations.

Layer-2: the medium access control (MAC) layer of the protocol stack performs data retransmissions upon failed receptions, user and resource scheduling, and quality-of-service handling over the radio interface.

Layer-3: the radio link control (RLC) layer of the protocol stack offers data segmentation and reassembly, and duplicate detection.

Layer-4: the packet data convergence protocol (PDCP) layer of the protocol stack applies radio ciphering, encryption, header compression, and sequence numbering.

Layer-5: the service data adaptation protocol (SDAP) is a newly introduced 5G layer. It plays a key role in letting the 5G new radio support a diversity of quality-of-service levels at the same time: SDAP maps and translates a certain quality-of-service requirement, for a given data flow of an arbitrary user, into parametrized radio transmissions that, in the end, satisfy that requirement.

It is worth noting that some researchers consider the MAC, RLC and PDCP layers to be part of Layer-2. Then, on the radio control plane, a user has two main layers, as follows:

Control Layer-1: the radio resource control (RRC) layer handles the control information exchange over the radio interface, including mobility control and measurement-reporting configurations.

Control Layer-2: the NAS layer is responsible for establishing the user's control connection up to the 5G core entities, i.e., establishing a secure connection with the core.

Well, I know this may be a lot of dry protocol-stack detail so far. So, let's walk through an example of a 5G user in idle mode (not connected to the network).

– the user is informed through network control signaling (RRC messaging) about how to measure its operating bandwidth in order to identify the BS/gNB with the best received coverage, and with which periodicity it should do so. When it wakes up, it connects to that BS accordingly. Then, assume downlink data arrives for this user, such as an already-started video stream; the BS starts at the SDAP layer to check the quality-of-service level requested for this user. Generally, a user with a higher subscription level gets a better quality of service.

Thus, the SDAP translates that requirement into certain transmission parameters, such as the number of resources, the service time, etc., and feeds this information to Layer-2, the MAC layer, for later processing. Then, the PDCP layer compresses the data headers and applies radio encryption for secure communication. Further, the MAC layer schedules this traffic over the network resources, considering the input from the SDAP layer, to meet the user's quality-of-service level. Finally, the physical layer encodes, modulates, and transmits the data payloads over the air towards the intended user.
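The downlink walk-through above can be sketched as an ordered pipeline. This is purely illustrative: each layer is a one-line stand-in that tags the packet with the operation the text attributes to it, not real NR processing:

```python
# Illustrative 5G downlink user-plane pipeline: SDAP -> PDCP -> RLC -> MAC -> PHY.
def sdap(pkt): return pkt + ["SDAP: map QoS requirement to transmission params"]
def pdcp(pkt): return pkt + ["PDCP: header compression + ciphering"]
def rlc(pkt):  return pkt + ["RLC: segmentation / reassembly"]
def mac(pkt):  return pkt + ["MAC: scheduling + retransmissions"]
def phy(pkt):  return pkt + ["PHY: encode, modulate, transmit over the air"]

DOWNLINK_STACK = [sdap, pdcp, rlc, mac, phy]  # order matters

packet = ["video-stream payload"]
for layer in DOWNLINK_STACK:
    packet = layer(packet)

print(*packet, sep="\n")
```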

5G Spectrum

Currently, there is a progressive increase in the data traffic volumes carried by our mobile networks. This raises a vital technological question: how do we resolve the expected cellular capacity problems of the future? For sure, there is a diversity of capacity-enhancing techniques of great importance for maximizing the network capacity, such as massive MIMO communications. However, the available spectrum, or bandwidth, will always be the fundamental limit on network capacity.

– well, more specifically, the system capacity is simply restricted by the available bandwidth and the signal-to-interference-plus-noise ratio (SINR), and our cellular networks are generally believed to be interference-limited communication environments.
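That bandwidth/SINR limit is the Shannon capacity, C = B·log2(1 + SINR). A quick sanity check with illustrative numbers (10 dB SINR corresponds to a linear ratio of 10; these are not 5G performance claims):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, sinr_linear: float) -> float:
    """Upper bound on the error-free data rate for a given bandwidth and SINR."""
    return bandwidth_hz * math.log2(1 + sinr_linear)

# At a fixed SINR, capacity grows linearly with bandwidth: compare the 4G
# 20 MHz carrier with the 5G FR1 (100 MHz) and FR2 (400 MHz) carrier widths.
for bw_mhz in (20, 100, 400):
    c_mbps = shannon_capacity_bps(bw_mhz * 1e6, 10) / 1e6
    print(f"{bw_mhz:>3} MHz at 10 dB SINR -> {c_mbps:.0f} Mbit/s")
```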

Thus, in order to meet the promised 5G new radio capacity specifications, a larger amount of spectrum is necessary. That being said, the 5G new radio is the first cellular communication system to support spectrum spanning the range from sub-6 GHz up to 100 GHz, with scalable carrier bandwidths from 5 MHz up to 400 MHz.

– let's recall that the maximum communication bandwidth supported by the former 4G systems is 20 MHz; the 5G new radio thus simply supports up to 20x more bandwidth. However, this imposes many device hardware challenges. For example, mobile phones are required to scan the entire bandwidth, which is incredibly large with the 5G new radio, during special measurements or cell selection, and to do so as fast as possible. The RF chain of the mobile phone would therefore have to be of extremely high quality, which is not really an efficient requirement in terms of cost. Thus, several improvements to the user frequency-scanning procedures are also introduced with the 5G new radio.

So, let's first introduce how the spectrum is shared between the several telecom operators in each country. The most common strategy, which you have probably heard of already, is exclusive access to spectrum: a spectrum auction is publicly held by the government, during which operators bid for a specific chunk, or portion, of the available spectrum. As a result, each operator gains exclusive access to its spectrum. Honestly, this is still the most appreciated spectrum-sharing strategy in 3GPP. However, with the 5G new radio, some other spectrum-sharing approaches have started to rise as well.

For instance, the authorized/licensed shared access approach aims to allow operators partial and constrained access to portions of the spectrum carriers that have been allocated to other parties. As an example, some of each country's available spectrum is usually allocated to military applications, yet some of these applications do not actually use the spectrum at all geo-locations or at all times, e.g., perhaps only during military exercises. Thus, operators can gain access to such spectrum, conditioned on certain areas or times. This way, the available spectrum is much better utilized.

Furthermore, there is a key observation about our cellular traffic statistics which is progressively gaining huge interest in the 5G standardization as of now: the majority of our cellular traffic comes from indoor environments such as malls, homes, etc. So, efficient traffic steering can significantly relax the cellular network capacity targets. Although traffic steering/offloading to WiFi networks is not new with the 5G new radio, unlike in the 4G standards it is becoming a solid part of the standardization process.

The idea is that a 5G new radio mobile phone can steer its broadband traffic from the 5G connectivity to an indoor wireless connectivity below 6 GHz, typically the WiFi networks. Given that users are highly likely to be a short distance from the WiFi access points, they can perceive a huge capacity improvement, in addition to offloading a significant amount of traffic from the congested 5G new radio networks.

– honestly, there is another reason to offload traffic towards the WiFi networks. The standardized 5G bands are quite large, e.g., 100 MHz for spectrum below 6 GHz. As you know, such spectrum is highly congested as of now, since it already hosts our former 2G, 3G and 4G communication systems. Thus, it is fairly difficult to find 100 MHz of contiguous carrier components in that spectrum. Simply put, the WiFi bands (2.4 to 5 GHz) can offer up to 500 MHz of available spectrum.

That being said, the WiFi spectrum is basically unlicensed and can be an insecure means of communication for cellular connectivity – the WiFi access point could gain access to some private 5G network configurations, as well as inflict co-channel interference. Thus, on its own it is not a reliable way of carrying 5G new radio communications. Accordingly, 3GPP takes great interest in the interoperability between the 5G new radio and unlicensed communications, standardizing novel core interfaces that maintain a secure inter-connection between 3GPP (5G/4G) and non-3GPP (WiFi/WiGig) radio access networks, in addition to several network selection policies – simply put, when and how shall users decide to offload traffic towards non-3GPP RANs?

So far so good. The early 5G commercial deployments, expected during 2020, will be allocated over 3.3 to 4.9 GHz, with the time division duplexing (TDD) transmission mode, and over 24 to 28 GHz for larger carrier bandwidths. The lower spectrum range provides reliable coverage and better radio penetration; the higher bands offer significantly improved capacity.

So, more specifically, two frequency ranges are defined for the 5G new radio in the 3GPP standards, as follows:

Frequency range 1 (FR1): all spectrum carriers below 6 GHz.

Frequency range 2 (FR2): all spectrum carriers between 24.25 and 52.6 GHz.

For each FR, the maximum bandwidth and the sub-carrier spacings are set in the standards.

– the sub-carrier spacing defines the frequency separation between two consecutive sub-carriers of the orthogonal frequency division multiplexing (OFDM) transmission. In LTE, the sub-carrier spacing is fixed, i.e., it does not depend on the operating spectrum, and always equals 15 kHz. The sub-carrier spacing of the 5G new radio, however, is scalable and can vary with the operating bandwidth and numerology (explained in an upcoming Section). Furthermore, it can differ across different parts of the same 5G new radio bandwidth.

Thus, the maximum 5G new radio carrier bandwidth supported within FR1 is 100 MHz, while it is 400 MHz within FR2. The 15 and 30 kHz sub-carrier spacings can only be adopted in FR1, the 120 kHz sub-carrier spacing is only used within FR2, and the 60 kHz sub-carrier spacing can be used over both FR1 and FR2.
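These limits can be collected into a small lookup (values exactly as stated above; for simplicity FR1 is assumed to start at 0 here, whereas 3GPP defines a lower edge):

```python
# FR1/FR2 limits as stated in the text: carrier span (GHz), allowed data
# sub-carrier spacings (kHz), and maximum single-carrier bandwidth (MHz).
FREQ_RANGES = {
    "FR1": {"span_ghz": (0.0, 6.0),    "scs_khz": (15, 30, 60), "max_bw_mhz": 100},
    "FR2": {"span_ghz": (24.25, 52.6), "scs_khz": (60, 120),    "max_bw_mhz": 400},
}

def frequency_range(carrier_ghz: float) -> str:
    """Return the FR a carrier frequency falls into, or 'undefined'."""
    for name, fr in FREQ_RANGES.items():
        lo, hi = fr["span_ghz"]
        if lo <= carrier_ghz <= hi:
            return name
    return "undefined"

print(frequency_range(3.5))   # FR1: typical early-5G TDD band
print(frequency_range(28.0))  # FR2: mmWave
print(frequency_range(10.0))  # undefined: between the two ranges
```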

Generically, and as tabulated in the 3GPP band definitions, the 5G new radio bands are classified into three main categories, as follows:

1. Unpaired bands for time division duplexing (TDD)

2. Paired bands for frequency division duplexing (FDD)

3. Supplementary downlink (SDL) and supplementary uplink (SUL) bands.

The TDD bands imply that the downlink and uplink spectrums are not paired with each other, basically because both transmission directions are never active at the same time, i.e., the system is either in uplink or in downlink at any instant. Thus, a single TDD band can serve an entire TDD system. On another side, the FDD bands are so-called paired bands, since an FDD downlink band requires a corresponding FDD uplink band to form an operating FDD radio network. Finally, the supplementary downlink (SDL) and supplementary uplink (SUL) bands are newly introduced with the 5G new radio and are of significant importance for boosting the end-user experience. Think of the SUL and SDL bands as extra available bands that can be freely utilized and appended to the operating spectrum in order to enhance either the downlink or the uplink capacity. For instance, assume we have many simultaneous, rich-content transmissions over an FDD downlink band, such that the entire downlink band is fully occupied by just 50% of the offered traffic. The remaining traffic is then buffered until upcoming transmission opportunities, leading to sub-optimal capacity. With the introduction of the SDL bands, the radio network can configure some of the buffered users, with their associated downlink traffic, to be instantly transmitted over an additional SDL, possibly spanning a different band.

Another important application of the 5G new radio supplementary bands, especially the SUL bands, concerns the way a TDD system works. TDD denotes a radio system where the transmission direction alternates in time between downlink and uplink, both over the same spectrum. The issue here is that the TDD spectrum standardized for 5G sits considerably higher in frequency than that used with the 4G systems, e.g., the 3.5 GHz band compared to the 1.2-2.5 GHz bands of 4G. This means that the TDD users need more transmission power to compensate for the additional propagation losses caused by the higher carrier frequency. As a simple solution to this issue, a 5G new radio user in TDD operation can be configured with the standard TDD band at 3.5 GHz for the downlink direction, while during the uplink transmission opportunities it can use a lower SUL band, such as the 1.7 GHz band, to overcome this issue.

– a side note: the integration of the SUL and SDL bands is quite different from the band aggregation of the 4G systems. With the latter, two or more bands are combined together, and users can transmit over all involved carriers at the same time. With the 5G new radio SUL and SDL bands, however, users can only transmit over a single band at a time.

So, to conclude this section, let's quickly skim over the available bandwidth levels of each carrier range. Within FR1, there are 5, 10, 15, 20, 25, 40, 50, 60, 70, 80, and 100 MHz bandwidth allocations, while for FR2 there are 50, 100, 200, and 400 MHz allocations. By analogy to the former 4G technology, several carrier components can be aggregated, up to a total bandwidth allocation of 6.4 GHz.

– as a final note, besides the carrier aggregation of the 5G new radio spectrum, the 5G new radio carrier components can also be dynamically sub-divided. This is the so-called bandwidth parts feature, which will be discussed in greater detail over upcoming Sections. In a very simple sense, the bandwidth-parts technique means that some users may perceive the system bandwidth to be, for instance, 10 MHz, while the actual system bandwidth is 100 MHz. A basic advantage of this technique is a great reduction of the mobile-phone processing overhead, since users now only need to scan and measure smaller bandwidth allocations, even though the 5G new radio bands are significantly larger than those used with the 4G radios.
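As a minimal sketch of the idea (the field names below are made up for illustration; they are not 3GPP parameters):

```python
# Bandwidth-part sketch: the UE only scans/measures its active BWP, not the
# whole carrier. Field names are illustrative, not 3GPP ASN.1.
carrier = {"center_ghz": 3.5, "bandwidth_mhz": 100}
active_bwp = {"offset_mhz": 20, "bandwidth_mhz": 10}

# Fraction of the carrier the UE actually has to process.
fraction = active_bwp["bandwidth_mhz"] / carrier["bandwidth_mhz"]
print(f"UE processes {fraction:.0%} of the {carrier['bandwidth_mhz']} MHz carrier")
```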

5G Flexible Frame Structure

This is one of the most differentiating aspects of the 5G new radio. But let's first quickly recall that the frame structure of the LTE standards is fixed in both the time and frequency domains: the frame length is 10 ms, while the sub-carrier spacing is always set to 15 kHz.

– such an inflexible frame design was quite sufficient for the 4G use cases, so why does it not satisfy the 5G new radio requirements? The answer takes us back to the diverse use cases of the 5G new radio, for which latency-critical and latency-tolerant applications, as well as large and very small packet transmissions, must be integrated over the same radio interface. Accordingly, the radio frame design should be configurable, i.e., programmable, in time.

– let me give you a clearer example of this with the latency-critical URLLC traffic. As discussed before, such traffic arrives randomly in time at the transmitter side and needs to be transmitted in the quickest possible way, within just 1 ms of radio latency from data arrival at the transmitter until successful decoding at the receiver. As you may know already, the 4G/5G radio systems transmit data signals over the air in quantized steps in time, the so-called transmission time interval (TTI). In other words, every TTI period, the 5G new radio transmits the scheduled data payloads over the radio frequency resources. With LTE, a single TTI is 1 ms long. Thus, imagine that a sporadic URLLC packet arrives just after the start of the current system TTI: it will be buffered, or queued, at least until the next available TTI instance, which comes 1 ms later – this is the so-called frame-alignment delay, where packet arrivals are buffered until they align in time with the next TTI opportunity. This is definitely NOT compatible with the URLLC latency limits. With the 5G new radio, the TTI can be significantly shortened, down to 0.1 ms for latency-critical traffic, or stretched to 1 or 2 ms for latency-tolerant traffic. Again, these variable TTI durations are supported over a single radio interface, for different applications, users and traffic types.
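The frame-alignment delay is easy to quantify: a packet arriving at a uniformly random instant waits, on average, half a TTI and, at worst, a full TTI before transmission can even begin (a sketch that ignores processing and retransmission delays):

```python
# Average and worst-case frame-alignment delay for different TTI lengths.
def alignment_delay_ms(tti_ms: float) -> tuple:
    """Return (average, worst-case) wait for the next TTI boundary."""
    return tti_ms / 2, tti_ms

for label, tti in [("LTE, 1 ms TTI", 1.0), ("5G short TTI, 0.1 ms", 0.1)]:
    avg, worst = alignment_delay_ms(tti)
    print(f"{label}: average {avg} ms, worst case {worst} ms")
```

With a 1 ms TTI, the worst-case alignment delay alone already consumes the entire 1 ms URLLC budget, which is exactly why the 5G TTI had to shrink.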

So, the 5G new radio is said to support multiple numerologies. A numerology denotes a reference sub-carrier spacing of the radio interface, which corresponds to a certain OFDM symbol time and hence to a certain TTI size – the TTI duration with the 5G new radio is always expressed as a number of successive OFDM symbols. Thus, as the 5G new radio supports sub-carrier spacings of 15, 30, 60, 120 and 240 kHz, the TTI length can vary significantly.

With 5G, a radio frame is worth 10 ms in time. It is divided into 10 subframes of 1 ms each. A radio slot, in turn, always spans 14 OFDM symbols; since the symbol duration shrinks as the sub-carrier spacing grows, each subframe can be divided into a scalable number of such radio slots.
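The slot scaling can be sketched numerically. A small illustration, assuming the normal cyclic prefix, where a slot is 14 OFDM symbols and the sub-carrier spacing is 15·2^μ kHz (per 3GPP TS 38.211):

```python
# Slot scaling across the 5G NR numerologies (normal cyclic prefix):
# sub-carrier spacing = 15 kHz * 2^mu, a slot is 14 OFDM symbols,
# and 2^mu slots fit into each 1 ms subframe.
for mu in range(5):  # mu = 0..4 -> 15, 30, 60, 120, 240 kHz
    scs_khz = 15 * 2**mu
    slots_per_subframe = 2**mu
    slot_ms = 1.0 / slots_per_subframe
    print(f"SCS {scs_khz:>3} kHz: {slots_per_subframe:>2} slots/subframe, "
          f"slot = {slot_ms:.4f} ms")
```

So at 15 kHz a slot is the full 1 ms subframe, while at 120 kHz eight slots of 0.125 ms each fit into the same subframe.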

–important note: as discussed, the TTI duration is the actual transmission periodicity of the radio interface. You may then wonder why we have the frame and subframe periodicities as well, if radio transmissions always occur per TTI. The answer is simple: it is designed that way to allow for several system periodicities. As an example, user scheduling for latency-tolerant traffic can occur on a per-subframe basis, whereas system broadcast information can be transmitted per radio frame. Traffic for URLLC users can be processed, scheduled and transmitted per TTI, or slot, for lower radio latency.

–you may also be wondering why not unify all system periodicities at the lowest one, the TTI duration. Once more, the answer is to save radio control overhead. In the previous example, if the system broadcast information were transmitted every TTI, that would result in significant and unnecessary radio overhead, greatly degrading the achievable system capacity.

–a slot or a mini-slot can also serve as the TTI duration. Thus, with the 5G new radio, if a single subframe is divided into 4 mini-slots, the radio interface is able to transmit data signals 4 times during every subframe duration (1 ms). Recall that with LTE, in such a case, the system can only transmit a data payload a single time per subframe.

–to unify acronyms here, and for the rest of this book, consider the slot as the radio subframe. An arbitrary subframe, or slot, can then contain a scalable number of mini-slots. A mini-slot is simply a transmission opportunity of the radio interface; thus, the mini-slot is the actual TTI.

–well, now let me give you an inclusive example of the frame flexibility of the 5G new radio. Consider two users: one with latency-critical traffic such as URLLC, the other a latency-tolerant, capacity-hungry user such as eMBB. The URLLC user is scheduled with a TTI duration of just 2 OFDM symbols, whereas the eMBB user is scheduled with a TTI of 14 OFDM symbols. In other words, the system shall transmit or receive data towards/from the URLLC user every 0.14 ms, but only every 1 ms for the eMBB user. You see the difference.
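The two TTIs in this example can be checked with a few lines (an approximation that treats a slot as exactly 14 equal symbols and ignores cyclic-prefix details):

```python
def tti_ms(n_symbols: int, scs_khz: int) -> float:
    """Approximate TTI duration: n_symbols out of the 14 symbols
    that make up one slot of 1/(2^mu) ms at 15*2^mu kHz spacing."""
    slots_per_ms = scs_khz // 15          # = 2^mu for valid spacings
    return n_symbols / (14 * slots_per_ms)

print(f"URLLC, 2 symbols @ 15 kHz: {tti_ms(2, 15):.2f} ms")   # ~0.14 ms
print(f"eMBB, 14 symbols @ 15 kHz: {tti_ms(14, 15):.2f} ms")  # 1.00 ms
```

The same 2-symbol TTI shrinks further at larger spacings, e.g. to about 0.036 ms at 60 kHz.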

–thus, unlike 4G systems, with the 5G new radio the radio interface can transmit data almost continuously, i.e., at a diversity of transmission periodicities for different connected users.

5G Massive MIMO and Beamforming

Well, massive MIMO is not quite a 5G-specific technology. It has in fact been maturing in academia for many years. However, it had not yet had a chance to fly in the 3GPP standards. There are several reasons for this, to be presented shortly; but let us first define what massive MIMO really is.

–the wireless channel is composed of several paths from transmitter to receiver in the spatial domain, i.e., in free space. Some of these paths are much stronger than others; some may be weak (i.e., highly faded) due to, for example, being blocked by an object between the transmitter and receiver pair, or due to several reflections, such that the energy along those paths is largely scattered and lost. The optimal transmission strategy is therefore to direct the transmission energy into the strongest paths towards the intended receiver. The fundamental problems here are how to identify these paths at the transmitter side, and how accurately the transmitter can direct the transmission energy along them. The answers are channel state information and massive MIMO communications, respectively. Channel state information means that the mobile phones simply inform their serving transmitters, i.e., the base-stations, about their preferred channel paths using quantized feedback. There are tons of proposals on how to most efficiently exchange such quantized channel information. Massive MIMO, on the other hand, denotes that the transmission radio system is equipped with hundreds of antennas at the transmitter and receiver sides. This simply enables a more precise steering of the transmission energy towards the strongest channel paths, known from the user's channel state information.

By now, I should have motivated you to ask: why on earth has massive MIMO not made it into the 3GPP standards yet? The answer is implementation complexity and the required use cases. With the 4G radio technology, massive MIMO faced fundamental implementation limitations. First, the required channel feedback overhead from cell phones to base-stations is proportional to the number of antennas at the base-station. With hundreds of antennas, such overhead may consume the radio capacity of the uplink control channels. Secondly, 4G radio systems are almost always deployed on sub-6 GHz bands, commonly over the 1.2 to 2.5 GHz range, to achieve a good tradeoff between capacity and coverage. As we know from antenna design theory, the size of an antenna array, and specifically the physical spacing between antenna elements, is a function of the operating carrier frequency: the lower the operating band, the larger the required antenna spacing.

Thus, to build fairly large antenna arrays for the 4G radio, the physical size of the massive MIMO arrays would grow so large as to be practically infeasible. Moreover, the need for massive MIMO was not really pressing at that time, as the 4G radio already offered sufficient capacity for the early 4G use cases.
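A back-of-the-envelope check of the array-size argument, assuming the classic half-wavelength element spacing (the 8x8 panel size is my illustrative choice, not a figure from the specs):

```python
# Physical aperture of a square antenna array with half-wavelength
# element spacing, at a 4G-like and an FR2-like carrier frequency.
C = 3e8  # speed of light, m/s

def array_side_cm(n_side: int, f_hz: float) -> float:
    """Side length of an n_side x n_side array with lambda/2 spacing."""
    spacing_m = (C / f_hz) / 2
    return (n_side - 1) * spacing_m * 100

for f in (2e9, 28e9):  # 2 GHz (sub-6) vs 28 GHz (mmWave)
    print(f"8x8 array @ {f/1e9:.0f} GHz: side ~ {array_side_cm(8, f):.1f} cm")
```

The same 64-element panel shrinks from roughly half a meter per side at 2 GHz to a few centimeters at 28 GHz, which is exactly why massive arrays become practical at the higher 5G bands.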

So, you may be wondering: what then is the drive for massive MIMO with the 5G new radio? First, it is the higher supported spectrum, i.e., FR2, spanning 24.25 to 52.6 GHz. These bands are very dispersive in nature, meaning the channel offers a great many spatial paths for a signal to take between the two ends. Here, massive MIMO with advanced beamforming is an ideal solution. Even better, because the carrier frequencies of these bands are so high, the size of the massive MIMO antenna arrays shrinks enough to be feasible in practice. Secondly, over the past couple of Sections, we have discussed that the 5G new radio should simultaneously support a massive number of connected devices, i.e., mMTC. The 5G new radio therefore requires a large amount of available system capacity at any one time, while the standard orthogonal user multiplexing techniques, e.g., OFDMA, are capacity-limited by the number of orthogonal frequency resources in the system. With massive MIMO communications, system capacity can be increased by factors of 10, simply because base-stations become able to schedule users in different spatial directions, yet on the same time and frequency radio resources, instead of scheduling them separately in time or frequency. That is one more degree of freedom with which to maximize system capacity. Accordingly, the capacity gains of massive MIMO can be further enhanced by integrating more antennas at both the transmitter and receiver ends.

Furthermore, in theory, massive MIMO works in both the frequency and time division duplexing modes (FDD and TDD). In FDD, channel state information feedback from mobile phones to base-stations is mandatory, since the downlink and uplink are on different spectrum allocations. With TDD, however, the spectrum is unpaired: uplink and downlink share the same band. Hence, base-stations can directly estimate the downlink channel state information from the corresponding uplink channels, –so-called TDD channel reciprocity, thus entirely removing the channel state information feedback overhead from users to transmitters and further enabling massive MIMO communications. This in fact aligns very well with the fact that the early deployments of the 5G new radio are envisioned over the TDD 3.5 GHz bands, for regulatory reasons. Thus, massive MIMO can be well integrated with 5G new radio deployments from Day-1.

Well, in that sequence, we will now go through further intricacies of massive MIMO communications and what it can actually do, while keeping the same low level of detail we have always maintained. First of all, there may be some confusion between the terms precoding and beamforming, so let us settle that first.

–signal precoding at the transmitter is a way to direct a user's signal energy towards its strongest channel paths, known from its latest channel state information feedback. So, precoding is only concerned with a single user at a time. Beamforming has essentially the same function as precoding, while also considering the concurrent transmissions of other users that may interfere with the intended user. In a simple sense, it maximizes the desired energy along the best channel paths of each user while minimizing the energy leaked towards other users' ongoing transmissions. Basically, beamforming is an advanced stage of precoding, –with precoding only, you may have maximum energy towards a desired transmission, yet that transmission may still suffer severe interference, and capacity may be degraded accordingly. Massive MIMO significantly enhances both the precoding and beamforming operations, thanks to the antenna array gain and the larger number of channel paths resolvable across the array.

So, beamforming is defined as a signal processing capability in which the input signal is phase- and gain-adjusted on each antenna element across the entire array, in order to achieve maximum energy transmission towards the intended users and, theoretically, zero energy towards other users, –in practice, such an absolute zero is not feasible; instead, the leaked energy can be 20 to 60 dB lower. Such accurate beamforming, however, comes at the cost of processing complexity.
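A toy illustration of such phase steering, for a uniform linear array with half-wavelength spacing (the element counts and angles below are arbitrary choices of mine, not values from the book):

```python
import cmath
import math

def array_gain_db(n: int, steer_deg: float, look_deg: float) -> float:
    """Gain of an n-element, half-wavelength-spaced linear array whose
    per-element phases are set to steer towards steer_deg, evaluated
    in the look_deg direction (angles measured from broadside)."""
    def phase(k: int, deg: float) -> float:
        # per-element propagation phase for lambda/2 element spacing
        return math.pi * k * math.sin(math.radians(deg))
    # Sum the phase-compensated element contributions (unit gain each).
    s = sum(cmath.exp(1j * (phase(k, look_deg) - phase(k, steer_deg)))
            for k in range(n))
    return 20 * math.log10(abs(s) / n)  # 0 dB = perfectly aligned beam

print(f"on-beam:  {array_gain_db(64, 30, 30):.1f} dB")  # phases align -> 0.0 dB
print(f"off-beam: {array_gain_db(64, 30, 45):.1f} dB")  # deep in the sidelobes
```

With all 64 phases aligned, the looked-at direction sees the full array gain; a user sitting well off the steered direction sees tens of dB less energy, which is the "theoretically zero, practically 20 to 60 dB down" behavior described above.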

Additionally, with the former 4G radios, and in fact all earlier cellular systems, antennas are so-called passive elements. This means they do not process signals; they just air-transmit whatever is input to them. Accordingly, the RF units behind the antennas have to do the full job of precoding and beamforming, and then, through cables, –high-quality ethernet cables, fiber or others, they carry these well-constructed RF signals to the antennas for air transmission. That can be reasonable when the number of antennas is fairly small. With massive MIMO, we would simply need far too many of these high-quality cables.

As an alternative, an active antenna design has been introduced with the 5G new radio. This implies antennas can now process input signals, combine them, and finally transmit them over the air, –simply put, antennas have become active elements. Each individual antenna then has an RF unit right behind it on the physical tower, and this way, no cables are needed at all.

–in the standard setup, an RF unit can serve more than one physical antenna to reduce cost, such that there is a difference between the number of mounted antennas and the number of RF units. In antenna design theory, the RF units in this case are called the antenna ports, while the antennas themselves are called the physical antenna elements.

–remember: the number of physical antennas determines how directive the antenna array is. Simply put, more antennas lead to much sharper beamforming directivity in space. The number of RF units behind the antennas, on the other hand, defines the beamforming capability, in the sense of how many beams or signals the antenna array can process simultaneously. Finally, the configuration of the antenna array, whether planar, linear, circular, etc., defines the dimensionality of the beamforming. This means, for example, that a linear array, arranged only in the horizontal direction, can only beamform the horizontal space, whereas a planar array, with antenna elements arranged both horizontally and vertically, can beamform in the horizontal and vertical directions. In the latter case, in theory, base-stations are able to point their transmissions almost exactly at the locations of the intended users. This is called full-dimensional, i.e., 3D, massive MIMO transmission.
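How much sharper the beam gets with more antennas can be quantified with a rough rule of thumb from classical array theory (an approximation of mine, not a 5G spec figure): the half-power beamwidth of a half-wavelength-spaced uniform linear array at broadside is about 102°/N.

```python
# Rough half-power beamwidth of a lambda/2-spaced uniform linear
# array at broadside: HPBW ~ 102 degrees / N (classic approximation).
def hpbw_deg(n_elements: int) -> float:
    return 102.0 / n_elements

for n in (4, 8, 32, 64):
    print(f"{n:>2} elements: beamwidth ~ {hpbw_deg(n):.1f} degrees")
```

Going from a handful of elements to 64 narrows the main lobe from tens of degrees to under two, which is the directivity jump behind the "exact user location" pointing described above.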

Well then, what about the 5G numbers for advanced beamforming? The 5G new radio, starting from 3GPP Release-15, can indeed support up to 64 RF units, whereas LTE/4G Release-14 was limited to 32. Furthermore, as discussed before, one significant differentiator of the 5G new radio is its support for beamforming of the control channels. With the 4G radios, advanced beamforming was only applicable to the data channels; to reach the ultimate system reliability targeted by the 5G new radio, however, this has been changed.

Also, without going too deep into the details, and as you may know, MIMO capacity is always limited by the smaller of the antenna counts at the transmitter and receiver. Clearly, this was an issue with the 4G radios, where users could have a maximum of 4 antennas. With the 5G new radio, there is no such standard limitation on the mobile end's capability.

One final remark to conclude this Section: over higher carrier frequencies, i.e., 5G FR2, antenna radiation patterns become much more directive, especially as more and more antennas are mounted. Hence, 5G transmissions over these spectra are very narrow and very directive. In other words, with the former 4G radios, base-stations basically transmit in all directions, and then, using precoding, they attempt on a best-effort basis to focus the energy along the best channel paths of the intended user(s). That is also one reason why we tend to physically down-tilt the antenna panels: so they do not overshoot into other surrounding base-stations. With 5G, this is no longer a problem, because transmissions have become beam-based and are directed with fairly good precision to the exact locations of the users.

This very same transmission behavior, though, introduces a number of other issues for the 5G new radio. Now, 5G transmissions, including both control and data channels, will be sensed by users only in particular directions, because the beams are so narrow.

– imagine a base-station transmitting beamformed control information to several well-separated users. The question now is: as the transmission beam is very narrow in space, in which direction should the base-station transmit and beamform such information? The answer is basically to make these transmissions user-centric, instead of single network-centric transmissions. Thus, with massive MIMO communications over FR2, every 5G transmission and reception shall be directed towards a single mobile phone, and that includes both the control and data channels.

– you are probably thinking this may be too much overhead, sending both data and control information separately in each user's direction. Yes, you are absolutely right. That is why the 5G new radio borrows a concept similar to the channel state information reference signals (CSI-RS) of the former 4G systems, –no worries if you do not follow; check the example below.

–in both the LTE and 5G radios, there are two control signals, the so-called primary and secondary synchronization signals (PSS and SSS). Basically, users utilize them to learn the base-station identity and to synchronize themselves, i.e., their RF clocks, with the transmission timing of the intended base-station. With the 4G technology, these signals are sent with a predefined time periodicity over specific frequency resources. In the spatial domain, they are transmitted over the entire sector coverage, as these signals are not beamformed. Users thus simply monitor and scan the operating bandwidth for some time until they can lock onto the PSS and SSS signals. With the 5G new radio, instead, the sector coverage space is divided into several divisions or parts: at first, the base-station transmits the PSS and SSS signals towards the first division, i.e., using a beamformed transmission in that direction. A short time later, it repeats the same process towards the second coverage direction, and so on, until it has spanned the entire sector space; it then starts over from the beginning. Thus, 5G users, regardless of their locations, will have a chance to lock onto the PSS/SSS beamformed transmission closest to their geo-location. Accordingly, such beam sweeping in time needs to be done quite fast, so that users do not wait long before they can catch a sufficient PSS/SSS beamformed transmission.
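The sweep can be sketched as a toy round-robin schedule, purely illustrative (the beam count, slot duration and sector width are arbitrary assumptions of mine, not standardized values):

```python
# Toy beam-sweeping schedule: split the sector into n_beams directions,
# fire one synchronization burst per direction in round-robin, and
# work out when a given user's closest beam comes around.
def sweep_schedule(n_beams: int, slot_ms: float):
    """(time_ms, beam_index) pairs for one full sweep of the sector."""
    return [(i * slot_ms, i) for i in range(n_beams)]

def closest_beam(user_angle_deg: float, n_beams: int, sector_deg: float = 120.0) -> int:
    """Map a user's azimuth within the sector to its nearest beam index."""
    width = sector_deg / n_beams
    return min(int(user_angle_deg // width), n_beams - 1)

schedule = sweep_schedule(n_beams=8, slot_ms=0.25)
beam = closest_beam(user_angle_deg=50.0, n_beams=8)
t, _ = schedule[beam]
print(f"user at 50 deg listens for beam {beam}, served at t = {t} ms")
```

A user simply waits for its beam's turn within the sweep; the finer the beams, the longer one full sweep takes for a fixed per-beam duration, which is why the sweep must be kept fast.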

In connected mode, the 5G new radio shall instruct users explicitly on which directions they should receive or transmit in. One would thus expect beam failure mechanisms to be under standardization, for example, in case of beam blockage.

–you see the difference: users in idle mode try to catch the beam closest to their locations using their own sensing capabilities, because the base-stations do not yet know these users. Later, in connected mode, and for a more efficient and controlled beamforming operation, the base-stations formally instruct the active users which beams they should use to receive downlink or transmit uplink data.

–this entire procedure falls under the umbrella of beam management in the 5G new radio standardization, which includes techniques for beam measurement, beam scheduling, beam reporting, beam tracking, and beam failure and recovery.

5G Bandwidth Parts Operation

Bandwidth parts are a technology newly introduced with the 5G new radio. But first, let us define the problem this technology addresses. By now, we have clarified the standardized 5G new radio spectrum and the potential bandwidth allocations over FR1 and FR2. Recall that maximum bandwidth allocations of 100 MHz and 400 MHz are supported for 5G FR1 and FR2, respectively. This is simply enormous, compared to the former 4G where the maximum supported carrier bandwidth is 20 MHz. So where does the problem come from? Simply, from the devices, i.e., the mobile phones. At certain times, cell phones need to monitor the entire communication bandwidth, for several different reasons, such as frequency measurements, frequency scanning, and the initial wake-up operation. Scanning a very wide bandwidth as fast as possible sets a harsh requirement on the complexity of the cell phone's radio chain; as a result, 5G cell phones would become very complex and costly.

A simple solution, though not at all simple in practice, is to divide the entire large bandwidth chunk into several smaller bandwidth pipelines, or parts, where from the mobile phone's perspective, a smaller bandwidth part is understood as the entire system bandwidth. Users then only need to monitor and scan these smaller bandwidth allocations.

Accordingly, per the 5G new radio definitions in the 3GPP Rel-15 specs, the system carrier components now have common resource blocks (CRBs), unlike the 4G PRBs, which can be assigned to any configured bandwidth part; a bandwidth part in turn contains the system PRBs. A bandwidth part is then defined by a starting PRB index and a number of successive PRBs. It can be used for downlink or uplink, and users may be configured with multiple bandwidth parts at the same time.
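That definition can be captured in a minimal model (the class and field names below are mine, for illustration only, not 3GPP terminology):

```python
from dataclasses import dataclass

# Illustrative model: a bandwidth part as a starting resource-block
# index plus a run of contiguous blocks, each part carrying its own
# sub-carrier spacing (numerology).
@dataclass(frozen=True)
class BandwidthPart:
    start_crb: int      # first common resource block of the part
    n_prbs: int         # number of successive resource blocks
    scs_khz: int        # numerology of this part

    def contains(self, crb: int) -> bool:
        return self.start_crb <= crb < self.start_crb + self.n_prbs

# A device configured with two parts of different numerologies:
narrow = BandwidthPart(start_crb=0, n_prbs=52, scs_khz=15)
wide = BandwidthPart(start_crb=52, n_prbs=216, scs_khz=30)
print(narrow.contains(10), wide.contains(10))  # True False
```

The point of the model is the modularity discussed next: each part carries its own numerology and extent, so the network can park a device on a narrow, low-cost part and switch it to a wide one only when needed.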

Most importantly, however, bandwidth parts follow a modular design. This means each bandwidth part can have its own numerology, –do you not remember what a numerology is? Please revisit the '5G flexible frame structure' Section, its own bandwidth, and its own transmission configurations. This enables a large set of performance enhancement techniques, by scheduling and multiplexing users with different requirements over different bandwidth parts that adapt their configurations in time to the associated user targets and radio conditions.

–bandwidth parts remain an area of ongoing work for the 5G new radio, though. Their baseline network and user signaling has been settled as of now, i.e., how the network and users coordinate on a bandwidth part agreement; but to keep the level of detail to a minimum, this is out of this book's scope.

5G Flexible TDD Transmission

Since the days of the 4G systems, both the frequency and time division duplexing (FDD and TDD) technologies have been supported in the 3GPP standards.

–an FDD transmission denotes that the downlink and uplink are deployed on separate spectra at the same time. Accordingly, users can transmit in the uplink while base-stations are transmitting in the downlink. With TDD, however, the uplink and downlink are unpaired: there is a single band which can be used for either downlink or uplink transmission at any one time. Usually, networks alternate between the downlink and uplink transmission directions according to predefined periodicities. As a best practice, TDD base-stations allocate more downlink transmission opportunities than uplink opportunities within a radio frame, i.e., every 10 ms, when they have much more offered downlink traffic than uplink traffic.

–thus, in principle, the system adjusts its transmission direction based on traffic availability. Assume the hypothetical scenario of a single base-station with a single user that is receiving in the downlink at all times. With FDD, the downlink band can be fully allocated to this user; however, the paired uplink band stays idle, without any transmissions, which is a significant loss of resource utilization. With TDD, on the other hand, the base-station can allocate the single TDD band to downlink transmissions for a longer period, until some uplink connections or traffic become available.

Accordingly, FDD clearly requires double the bandwidth of TDD; but it also achieves double the capacity. What, then, are the key factors that decide whether a network operator goes for a TDD or an FDD system?

Simply: need and availability. For some telecom operators, the available bands may not fall in the FDD standardized spectrum, so they may have no choice but to use the TDD technology. Furthermore, when spectrum and operational expenses are the primary concern, TDD is always the better option, –operators basically pay for a single band instead of two. Moreover, as discussed in the 'massive MIMO' Section, it is TDD that makes the massive MIMO technology feasible in practice, at least up to this very moment. With FDD, the channel state information, as pointed out before, is a critical obstruction to implementing massive MIMO antenna arrays, whereas this problem is far less severe, or basically absent, with the TDD technology.

–in fact, some operators may also choose TDD for its transmission flexibility, especially towards sporadic traffic patterns. Referring to the previous example, when sporadic traffic is present, e.g., 5G URLLC-like traffic, TDD allows moving the system capacity between the downlink and uplink directions dynamically in time, based on the available traffic pattern. This is very convenient for the 5G industrial cases where devices such as robots are expected to send updates at specific times: the network configures itself with sufficient uplink transmission opportunities during the times these end devices are expected to transmit their updates, and activates the downlink transmission direction otherwise.

TDD was not that flexible from the beginning, though. With the 4G radio technology, TDD started out as so-called static TDD. Static TDD denotes that within every radio frame periodicity, i.e., 10 ms, specific radio subframes are allocated to the downlink and others to the uplink. This is predefined in the network planning and configuration phase and is unified across all base-stations. It is a critical limitation for two main reasons: first, such a predefined frame structure of fixed downlink and uplink subframes may not match the offered traffic pattern and load at a given time. Second, adjacent base-stations do not all have similar traffic patterns at all times, so a unified frame structure for all base-stations is clearly suboptimal.

One stage later, with the 4G-Pro standards, 3GPP introduced some dynamicity into the TDD operation. Several radio frame configurations, with different numbers of downlink and uplink subframes within each radio frame, were introduced, and a base-station can dynamically select, over time, the frame configuration that best meets its traffic demand. For example, a base-station with much more downlink traffic than uplink traffic selects a radio frame with a larger number of downlink subframes than uplink subframes.
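That selection logic can be sketched as follows; the frame patterns here are illustrative stand-ins of mine, not the actual 3GPP configuration tables:

```python
# Toy selection among TDD frame patterns: pick the pattern whose
# downlink share best matches the measured downlink fraction of
# the offered traffic.
PATTERNS = {            # name -> (downlink subframes, uplink subframes)
    "dl_heavy": (8, 2),
    "balanced": (5, 5),
    "ul_heavy": (2, 8),
}

def pick_pattern(dl_fraction: float) -> str:
    """Choose the pattern minimizing |pattern DL share - dl_fraction|."""
    return min(PATTERNS,
               key=lambda p: abs(PATTERNS[p][0] / sum(PATTERNS[p]) - dl_fraction))

print(pick_pattern(0.9))   # mostly-downlink traffic
print(pick_pattern(0.25))  # uplink-dominated traffic
```

The limitation discussed next is simply that, in 4G-Pro, this re-selection could only happen once per 10 ms frame, however fast the traffic mix changed in between.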

The minimum frame update periodicity, though, was the frame duration itself, i.e., 10 ms. This is far too slow an update, since with the 5G new radio, URLLC-like traffic is expected to fluctuate on a 1 ms basis; the TDD frame adaptation periodicity should align with that, i.e., be as fast as the traffic variations.

Thus, in the 5G standards, such frame adaptation can in fact occur on a mini-slot basis, which can be well below 1 ms. Put simply, with the 5G new radio in fully flexible TDD systems, base-stations can change their transmission direction ultra-quickly in response to sporadic traffic demands.

That said, there is one critical issue with such TDD flexibility. It occurs when neighboring base-stations adopt different transmission directions: for instance, at an arbitrary time instant, one base-station is in a downlink subframe while an adjacent base-station adopts an uplink subframe. We now inflict an additional type of interference, the cross-link interference, where the downlink transmission, with its larger power, interferes with the lower-power uplink transmission being received at the adjacent uplink base-station.

Such cross-link interference is one of the most critical and still-open research problems, and it sits on top of the 3GPP agenda for the ongoing Release-16 standardization, and potentially the upcoming Release-17 as well.

5G Multi Connectivity

Multi-connectivity refers to the situation where mobile phones are connected to, i.e., receive and transmit on, resources from different base-stations at the same time. These can be radio or core resources, and can come from one or several RAT technologies.

–well, multi-connectivity also existed in the former LTE technology. With LTE, the data planes of user transmissions could be shared or aggregated between different LTE/4G base-stations: users could either have their data transmissions duplicated from two adjacent base-stations for extra link robustness, or aggregated to boost the achievable throughput. One important note: the control links of such users remain in single-connectivity mode towards the master serving base-station.

–one clear difference, then, is that 5G multi-connectivity includes the control plane as well: 5G users can have their data and control links established from different base-stations. You are hopefully motivated enough by now to conclude that, towards the 5G new radio targets, multi-connectivity has become significantly important, especially during the early 5G deployments.

First, what benefits does multi-connectivity bring to the 5G new radio? In a word, plenty. For instance, it boosts system reliability and user throughput. Most importantly, however, multi-connectivity shall make the early 5G new radio rollouts very smooth. At first, it would be far too expensive to deploy a standalone 5G radio and core architecture; operators are therefore expected to have tight interconnections between the existing 4G infrastructure and the newly introduced 5G systems. For example, users may establish radio connections with the 5G new radio while keeping core connectivity through the former 4G evolved packet core. Here, there are tons of multi-connectivity options. Examples include:

Intra-5G multi-connectivity: 5G users maintain 5G radio connections below 6 GHz, to maximize coverage thanks to the lower spectrum, as well as 5G millimeter-wave connections, to boost the data rates.
Inter-RAT multi-connectivity: users establish and maintain connections with both the 5G and 4G radios.
Non-3GPP inter-RAT multi-connectivity: users maintain connections to non-3GPP radios such as WiFi/WiGig, etc., as well as to the 5G new radio.

– multi-connectivity is not all green, though; it has a fundamental and pricey limitation. Multi-connectivity can indeed boost user throughput and link reliability, but it may severely degrade the system spectral efficiency due to poorly utilized radio resources. Thus, 3GPP is considering several creative solutions to quickly switch users back and forth between the single- and multi-connectivity modes; that is, the network shall decide which users, and when, i.e., upon which conditions, should be multi-connected.

Furthermore, 3GPP has standardized several configurations for the 5G multi-connectivity scenarios, with different levels of duality, –by which I mean that users can actually divide their protocol stack into several slices, and have each slice administered by a different base-station.

–in the upcoming book notes (Part-2), on the detailed 5G service-based core architecture, we shall go into much more depth on 5G dual connectivity and the associated functional split.

5G New Radio Beamformed Access

As pointed out in the '5G Massive MIMO and Beamforming' Section, transmissions over the 5G new radio are always beamformed. This denotes that base-stations and users communicate with each other only in certain directions at certain times. The benefits of this behavior are quite significant: for example, a great reduction of the inter-user interference becomes feasible, since users may not even see other users' concurrent transmissions; and the antenna array gain boosts the user's achievable capacity and the network coverage as well. However, this raises a simple question: how are the common control channels transmitted in the downlink direction? These channels need to be acquired by all geographically-separated users at any time.

–well, the common channels of LTE are always transmitted in all directions at once. More specifically, the control signaling from base-stations is transmitted on a pre-defined periodicity within specific radio frequency resources; in the spatial domain, though, it can be seen in all coverage directions of the antenna array.

Thus, to truly put some hands-on knowledge here, let’s recall the PSS and SSS signaling we briefly went through in former Sections. These signals are quite essential for users to camp on a base-station. Next, users perform the radio random access procedure to get their control and data connections established, from the radio and core sides, respectively. Thus, such signals simply need to be sensed within every short time period by all users, who can be sparsely geo-located within the coverage of the current base-station.

With the 5G new radio beamformed access, put simply, the PSS and SSS are beamformed into specific spatial directions at a time. A single time unit later, they are sent again on the same frequency resources but towards an adjacent spatial direction instead (with a simple angle rotation), and so on until the sector area is covered by beams. This is simply called beam sweeping, since we sweep or slide the beamforming coverage in time. In this regard, 3GPP has set up several definitions. An SS block (SSB) basically implies a single beam direction at a time. It is called a block because such a beam indeed holds a block of information elements such as the PSS, SSS, and master information block (MIB). This means that a user in a certain location just needs to lock on a single beam (the closest one), because every beam carries the entire information needed by this user end. Then, an SS burst is defined to include all SSBs within a certain SS transmission time, –no worries, check the example below.

–say the SSS and PSS information need to be transmitted every 20 ms, for a continuous transmission duration of 4 ms across specific radio frequency resources. Thus, let’s assume we have four SSBs, with four beams into four different beamformed directions. In the first eligible ms, SSB 1 is transmitted on beam 1. In the second eligible ms, SSB 2 is transmitted on beam 2, and so on. Accordingly, SSBs 1, 2, 3 and 4 together are called the SS burst transmission at this time instant.
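The sweeping schedule of this example can be sketched in a few lines of Python; the timing constants and function names are illustrative only (with 0-based beam indices), mirroring the 20 ms / four-beam example above:

```python
# Toy model of SSB beam sweeping: four beams cover a sector, one SSB per
# eligible millisecond, and the whole burst repeats every period.
# All numbers are illustrative, matching the example in the text.

SS_BURST_PERIOD_MS = 20   # how often the SS burst repeats
NUM_BEAMS = 4             # beams needed to sweep the whole sector

def ssb_schedule(time_ms):
    """Return the SSB/beam index transmitted at a given millisecond,
    or None if no SSB is sent at that time (0-based indices)."""
    offset = time_ms % SS_BURST_PERIOD_MS
    if offset < NUM_BEAMS:     # first 4 ms of each period carry SSBs
        return offset          # SSB i is beamformed on beam i
    return None                # rest of the period: no SS transmission

# The burst at t = 0..3 ms sweeps beams 0..3; the pattern repeats at 20 ms.
burst = [ssb_schedule(t) for t in range(NUM_BEAMS)]
```

A user locking onto, say, beam 2 simply waits for its SSB to reappear once per 20 ms period, which is exactly why beam sweeping trades a little access latency for the array gain of directional transmission.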

5G User-centric Reference Signals

As the name suggests, a reference signal denotes a signal that is pre-defined and known to both transmitter and receiver. Such a signal is transmitted so that the receiver can estimate how much signal damage (basically amplitude distortion) is inflicted by the wireless channel in between. Thus, it can compensate for such damage later when actual data packets are being received. Herein, a major change has been enforced on the reference signal design within the 5G new radio systems. However, to make it clear and simple, let’s first recall how reference signals work with the 4G radios.

Well, generically, the 4G radio standardized cell-common reference signals. Simply put, these are cell-specific rather than user-specific. Accordingly, they are periodically sent in time over specific frequency resources, regardless of the connected user capacity, i.e., how many users are connected in the meantime. However, starting from LTE release-10, 3GPP introduced user-centric reference signals, i.e., user-specific ones. This means that the radio interface transmits reference signals towards a specific active user, on a set of time-frequency resources that are only accessible by this user. Basically, this type of reference signal is mainly utilized with some advanced beamformed transmissions of the 4G radios.

Now, here is a question: why are these cell-common reference signals not efficient enough to be utilized with the 5G new radio? Clearly, these signals are transmitted at all times. Just imagine the case where a very low number of users is connected to an arbitrary 4G radio interface. These users may require a much lower reference signal transmission overhead than what is actually being transmitted in order to estimate their wireless channels. This is accordingly a significant resource loss. One more reason: as we know by now, 5G transmissions are beamformed. Thus, users may not be separated in the resource domain but rather in the spatial domain.

–this denotes that with the 5G new radio, base-stations may schedule different users on the same time and frequency resources but on different beams, i.e., directions.

Thus, it becomes fairly important to have the user channel estimates also based on the transmitted beams, instead of only on the time and frequency resources.

That being said, the 5G new radio only considers user-specific reference signals for both control and data channels. One clear advantage here is that, unlike the 4G radio, when the number of connected users is quite low, the 5G new radio only transmits reference signals on the frequency resources occupied by these users, with much lower control overhead compared to the LTE case, in which reference signals spanning all system time and frequency resources are transmitted even if some of them will not be used, basically because there are not enough users.

–for that reason, it has always been said that the 5G new radio should be flexible enough to send reference signals only when needed.
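To make the resource saving concrete, here is a toy Python comparison of the two designs; the 5% reference-signal fraction and the function names are invented for illustration and do not come from the specifications:

```python
# Toy overhead comparison: cell-specific reference signals (4G-style)
# occupy a fixed fraction of the WHOLE time-frequency grid, while
# user-specific reference signals (5G-style) only occupy the resource
# blocks actually scheduled to users. The fraction is illustrative.

RS_FRACTION = 0.05   # assumed share of RS symbols inside occupied resources

def cell_specific_rs_overhead(total_rbs, scheduled_rbs):
    # Cell-common RS is transmitted everywhere, regardless of load
    return total_rbs * RS_FRACTION

def user_specific_rs_overhead(total_rbs, scheduled_rbs):
    # User-centric RS follows the users: no users, no reference signals
    return scheduled_rbs * RS_FRACTION

# Lightly loaded cell: 100 resource blocks, only 10 actually scheduled
crs = cell_specific_rs_overhead(100, 10)    # overhead paid everywhere
dmrs = user_specific_rs_overhead(100, 10)   # overhead only where needed
```

At 10% load, the user-specific design spends a tenth of the reference-signal resources, and at zero load it spends none at all, which is exactly the "only when needed" flexibility described above.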

So, let’s get more specific here. The 5G new radio uses the same reference signals as 4G, and also introduces a new reference signal design for phase tracking. Accordingly, the 5G new radio reference signals are as follows:

Demodulation Reference Signal (DMRS): as its name suggests, DMRS helps the receiver to demodulate the received signal by suppressing the negative effects of the wireless channel between the transmitter and receiver, i.e., channel distortion and power decay. Hence, a generic transmitter shall always append DMRS symbols to its to-be-transmitted data payload so the other end is able to listen to the DMRS symbols, estimate the channel fading, and finally decode the useful data payload. If DMRS symbols are not present in the transmitted payload, such payload may not get successfully decoded at its intended receiver; thus, DMRS are quite essential. To be mentioned also, DMRS is a dual reference signal, meaning that it can be used in either the downlink or the uplink direction.
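A minimal sketch of how a receiver might use DMRS-like pilots, assuming a simple one-tap complex channel per subcarrier and a noise-free link; all names and values here are illustrative:

```python
# Minimal least-squares channel estimation from known pilots (DMRS-like).
# Each subcarrier sees a single complex gain h; the receiver divides the
# received pilot by the known transmitted pilot to estimate h, then
# equalizes the data symbols. Noise-free toy example for clarity.

def estimate_channel(tx_pilot, rx_pilot):
    """LS estimate per subcarrier: h = y_pilot / x_pilot."""
    return [y / x for x, y in zip(tx_pilot, rx_pilot)]

def equalize(rx_data, h_est):
    """Undo the channel: x_hat = y / h."""
    return [y / h for y, h in zip(rx_data, h_est)]

# Known pilots and a made-up channel (amplitude + phase distortion)
pilots = [1 + 0j, 1 + 0j]
h_true = [0.5 + 0.5j, 0.8 - 0.2j]
data = [1 + 1j, -1 + 1j]

rx_pilots = [h * p for h, p in zip(h_true, pilots)]   # what the pilots look like after the channel
rx_data = [h * d for h, d in zip(h_true, data)]       # distorted data symbols

h_est = estimate_channel(pilots, rx_pilots)
x_hat = equalize(rx_data, h_est)   # recovers the transmitted data symbols
```

Real DMRS processing adds noise averaging and interpolation across resource elements, but the core idea is this divide-by-the-known-pilot step.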

Sounding Reference Signal (SRS): once more, as the name implies, such a reference signal is used to sound the wireless channel. Well, you are probably confused, and rightfully so. Indeed, SRS does almost exactly the same job as DMRS, helping the receiver to estimate the inflicted channel distortion. The only difference is that DMRS is only transmitted when actual, useful data payload is about to be transmitted as well; however, SRS can be transmitted either periodically or based on an event occurrence, i.e., event-triggered, from a user end to its serving base-station. Also, as you probably noticed, the SRS is only transmitted in the uplink direction. Now, it is the turn of the downlink direction.

Channel State Information Reference Signal (CSI-RS): it is the reference signal analogous to the SRS, but in the downlink direction. The CSI-RS is of vital importance for MIMO communications, as users simply need to tell their serving base-stations about their downlink channel estimates, so that the base-stations can apply a better ‘everything’ in the downlink direction.

–without accurate CSI-RS estimation and CSI feedback (as briefly discussed before), advanced beamforming at the base-stations simply becomes infeasible in practice.

So far, the former reference signals have also been part of the 4G standards. However, the 5G new radio introduces a new reference signal design. But before going through this last reference signal type of the 5G new radio, let’s recall how a wireless channel can distort our wireless signals. Theoretically, a channel introduces an amplitude and phase error, or distortion, into our useful transmitted data. So, the receiver perceives a completely different version of the transmitted data symbols and then may not be able to decode them correctly. This channel distortion is time- and frequency-variant, –that is one reason why the radio interface transmits the reference signals on different time and frequency resources.

The DMRS, SRS, and CSI-RS help the receiver end to estimate the amplitude distortion of the received symbols; hence, the receiver is able to compensate for it and correctly decode the data payload. A logical question pops up here: where is the phase compensation? Simply, with the 4G radio, phase compensation was not an issue. The majority of the 4G networks are deployed over the sub-6 GHz spectrum, where it was practically proven that the phase noise or error of the wireless channels over this spectrum is minor. However, with the 5G new radio, new spectrum ranges up to 52.6 GHz (FR2) are integrated into the 5G radio standards. Herein, we got the issue back again, and for that reason, the 5G new radio defines a new reference signal.

Phase Tracking Reference Signal (PTRS): by now, you probably already know what this is about. PTRS is a reference signal that simply tracks the phase error or distortion of the wireless channel, for the receiver to compensate for it prior to decoding. PTRS signals are transmitted from one OFDM symbol to another. And yes, that is quite a lot of overhead indeed. The reason is that over the higher ranges of FR2, the channel phase distortion is quite aggressive, such that a phase error of a few full degrees can be inflicted from one OFDM symbol just to the next.
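A toy illustration of such phase tracking in Python, assuming a single common phase error per OFDM symbol and a noise-free pilot; all values and names are illustrative:

```python
import cmath

# Toy PTRS-style phase tracking: a known pilot rides in every OFDM
# symbol; the receiver measures the phase rotation of the received pilot
# relative to the known one, then de-rotates that symbol's data.

def track_and_compensate(tx_pilot, rx_pilot, rx_data):
    """Estimate the common phase error from the pilot, remove it from data."""
    phase_err = cmath.phase(rx_pilot / tx_pilot)
    correction = cmath.exp(-1j * phase_err)
    return [d * correction for d in rx_data]

# A 5-degree common phase error on one OFDM symbol (illustrative)
pilot = 1 + 0j
data = [1 + 0j, 0 + 1j]
phase = 5 * cmath.pi / 180
rx_pilot = pilot * cmath.exp(1j * phase)
rx_data = [d * cmath.exp(1j * phase) for d in data]

corrected = track_and_compensate(pilot, rx_pilot, rx_data)  # data recovered
```

Because the error drifts from one OFDM symbol to the next at FR2 frequencies, this estimate-and-derotate step has to be repeated per symbol, which is precisely why PTRS is so dense in time.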

–before I finally close this Section, I would like to stress that there are further pieces of the cake here. The 5G new radio in fact integrates several other vital system blocks that critically impact the overall performance, such as, for instance, dynamic user scheduling, in-resource control channels, and many more. These will be covered in depth in upcoming book notes.

Beyond 3GPP 5G Release-15

Release-15, completed in mid-2018, has been a big milestone for 3GPP towards achieving the promised 5G requirements. Release-16 standardization is currently ongoing and is expected to be completed by the end of 2019. As usual, release-16 has several work items, tackling several issues. Herein, we shall go briefly through the main ongoing work items.

Further enhancements to E-UTRAN (4G): this work item specifically investigates integrating the 5G new radio functional split, explained earlier, into the LTE-Advanced Pro standards. This basically makes it easier when the 5G new radio protocol stack is split between the 5G and 4G radios.

Further enhancements to narrowband internet of things (NB-IoT): NB-IoT applications have been commercialized starting from LTE release-13, where transmissions over narrower subcarrier bands were the main feature. Further enhancements include early-terminated user transmissions, i.e., users can decide early to terminate transmissions upon completion, for an ultimate saving in user power consumption.

Non-terrestrial communications for the 5G new radio: this work item studies the initial requirements towards the full integration of satellite communications into the 5G new radio. This includes, but is not limited to, roaming and data offloading from the 5G new radio to satellite, and temporary satellite connectivity.

Towards the promised 5G system targets, the following work items are ongoing as well:

Carrier band aggregation for ultra-broadband 5G new radio transmissions.

MIMO enhancements for the 5G new radio, including MU-MIMO, downlink and uplink reference signals, and channel state information feedback strategies.

Mobility enhancements for the 5G new radio.

Energy consumption enhancements for the 5G new radio.

Concluding Remarks

By this, we conclude all discussions presented in these book notes. As is evident, the development of the 5G new radio is progressively evolving in both standardization and practice. This year, we are also expecting early beta commercial launches of the 5G new radio across the globe. Out of these standardization activities, creative, groundbreaking use cases and novel applications shall continue to be developed and further integrated into our cellular technology. Hopefully, these book notes were simple enough to digest to give you a glimpse of the ongoing 5G new radio research activities, as well as of the main concepts behind the 5G new radio system components.

Over the upcoming book note series, the 5G core system architecture, together with its service-level agreement (SLA) structure, network slicing, and end-to-end quality of experience, shall be explained in depth, though in the simplest possible way.

For further feedback on the content of these notes, please feel free to get in contact

with me through my communication channels.

References to 5G 3GPP Documents

TS 38.201, NR; Physical layer; General description
TS 38.202, NR; Services provided by the physical layer
TS 38.211, NR; Physical channels and modulation
TS 38.212, NR; Multiplexing and channel coding
TS 38.213, NR; Physical layer procedures for control
TS 38.214, NR; Physical layer procedures for data
TS 38.215, NR; Physical layer measurements
TS 38.300, NR; Overall description; Stage-2
TS 38.304, NR; User Equipment (UE) procedures in idle mode and in RRC Inactive state
TS 38.305, NG Radio Access Network (NG-RAN); Stage 2 functional specification of User Equipment (UE) positioning in NG-RAN
TS 38.306, NR; User Equipment (UE) radio access capabilities
TS 38.307, NR; Requirements on User Equipments (UEs) supporting a release-independent frequency band
TS 38.321, NR; Medium Access Control (MAC) protocol specification
TS 38.322, NR; Radio Link Control (RLC) protocol specification
TS 38.323, NR; Packet Data Convergence Protocol (PDCP) specification
TS 38.331, NR; Radio Resource Control (RRC); Protocol specification
TS 38.401, NG-RAN; Architecture description
TS 38.410, NG-RAN; NG general aspects and principles
TS 38.411, NG-RAN; NG layer 1
TS 38.412, NG-RAN; NG signaling transport
TS 38.413, NG-RAN; NG Application Protocol (NGAP)
TS 38.414, NG-RAN; NG data transport
TS 38.415, NG-RAN; PDU Session User Plane protocol
TS 38.420, NG-RAN; Xn general aspects and principles
TS 38.421, NG-RAN; Xn layer 1
TS 38.422, NG-RAN; Xn signaling transport
TS 38.423, NG-RAN; Xn Application Protocol (XnAP)
TS 38.424, NG-RAN; Xn data transport
TS 38.425, NG-RAN; NR user plane protocol
TS 38.455, NG-RAN; NR Positioning Protocol A (NRPPa)
TS 38.460, NG-RAN; E1 general aspects and principles
TS 38.461, NG-RAN; E1 layer 1
TS 38.462, NG-RAN; E1 signaling transport
TS 38.463, NG-RAN; E1 Application Protocol (E1AP)
TS 38.470, NG-RAN; F1 general aspects and principles
TS 38.471, NG-RAN; F1 layer 1
TS 38.472, NG-RAN; F1 signaling transport
TS 38.473, NG-RAN; F1 Application Protocol (F1AP)
TS 38.474, NG-RAN; F1 data transport
TS 38.508-1, 5GS; User Equipment (UE) conformance specification; Part 1: Common test environment
TS 38.508-2, 5GS; User Equipment (UE) conformance specification; Part 2: Common Implementation Conformance Statement (ICS) proforma
TS 38.509, 5GS; Special conformance testing functions for User Equipment (UE)
TS 38.521-1, NR; User Equipment (UE) conformance specification; Radio transmission and reception; Part 1: Range 1 standalone
TS 38.521-2, NR; User Equipment (UE) conformance specification; Radio transmission and reception; Part 2: Range 2 standalone
TS 38.521-3, NR; User Equipment (UE) conformance specification; Radio transmission and reception; Part 3: Range 1 and Range 2 Interworking operation with other radios
TS 38.521-4, NR; User Equipment (UE) conformance specification; Radio transmission and reception; Part 4: Performance
TS 38.522, NR; User Equipment (UE) conformance specification; Applicability of radio transmission, radio reception and radio resource management test cases
TS 38.523-1, 5GS; User Equipment (UE) conformance specification; Part 1: Protocol
TS 38.523-2, 5GS; User Equipment (UE) conformance specification; Part 2: Applicability of protocol test cases
TS 38.523-3, 5GS; User Equipment (UE) conformance specification; Part 3: Protocol Test Suites
TS 38.533, NR; User Equipment (UE) conformance specification; Radio Resource Management (RRM)
TR 38.806, Study of separation of NR Control Plane (CP) and User Plane (UP) for split option 2
TR 38.810, NR; Study on test methods
TR 38.811, Study on New Radio (NR) to support non-terrestrial networks
TR 38.812, Study on Non-Orthogonal Multiple Access (NOMA) for NR
TR 38.813, New frequency range for NR (3.3-4.2 GHz)
TR 38.814, New frequency range for NR (4.4-5.0 GHz)
TR 38.815, New frequency range for NR (24.25-29.5 GHz)
TR 38.816, Study on Central Unit (CU) - Distributed Unit (DU) lower layer split for NR
TR 38.817-01, General aspects for User Equipment (UE) Radio Frequency (RF) for NR
TR 38.817-02, General aspects for Base Station (BS) Radio Frequency (RF) for NR
TR 38.818, General aspects for Radio Resource Management (RRM) and demodulation for NR
TR 38.874, NR; Study on integrated access and backhaul
TR 38.889, Study on NR-based access to unlicensed spectrum
TR 38.900, Study on channel model for frequency spectrum above 6 GHz
TR 38.901, Study on channel model for frequencies from 0.5 to 100 GHz
TR 38.903, NR; Derivation of test tolerances and measurement uncertainty for User Equipment (UE) conformance test cases
TR 38.905, NR; Derivation of test points for radio transmission and reception User Equipment (UE) conformance test cases
TR 38.912, Study on New Radio (NR) access technology
TR 38.913, Study on scenarios and requirements for next generation access technologies
