LTE Technology for Engineers
Introduction to LTE
1.1 Where are we?
1.2 Release 99
UMTS/W-CDMA was initially conceived as a circuit switched based system and was
not well-suited to IP packet based data traffic. Once the basic UMTS system was
released and deployed, the need for better packet data capability became clear,
especially with the rapidly increasing trend towards Internet style packet data
services, which are particularly bursty in nature. Release 99 supports the Cell-DCH
state with typical data rates of 384 kbit/s.
Release 5: This release included the core of HSDPA itself. It provided for downlink
packet support, reduced delays, a raw data rate (i.e. including payload, protocols,
error correction, etc.) of 14 Mbps and gave an overall increase of around three times
over the 3GPP UMTS Release 99 standard.
Release 6: This included the core of HSUPA with an enhanced uplink with improved
packet data support. This provided reduced delays, an uplink raw data rate of 5.74
Mbps and it gave an increased capacity of around twice that offered by the original
Release 99 UMTS standard. Also included within this release was the Multimedia
Broadcast Multicast Services (MBMS), providing improved broadcast services such as
Mobile TV.
Release 7: This release of the 3GPP standard included downlink MIMO operation as
well as support for higher order modulation up to 64 QAM in the downlink and 16
QAM in the uplink. However, it only allows for either MIMO or higher order
modulation. It also introduced protocol enhancements to allow support for
Continuous Packet Connectivity (CPC).
K025 LTE Technology for Engineers
Release 8: This release of the standard defines dual carrier operation as well as
allowing simultaneous operation of the high order modulation schemes and MIMO.
Further to this, latency is improved to keep it in line with the requirements for many
new applications being used.
Rb_phy includes DPDCH (User data + L3 control) + Error protection + DPCCH (L1
control)
DPCCH = Dedicated Physical Control Channel
In the UL, the symbol rate equals the channel bit rate.
The DPDCH channel bit rate is less than the total channel bit rate because the latter contains
both DPDCH and DPCCH ch bit rates. The exact DPDCH bit rate depends on the slot
format. DPDCH is shared by logical/transport channels (DCCH/DCH +
DTCH/DCH).
The exact DTCH bit rate depends on the selected channel configuration or transport
format, for example with AMR 12.2, the DTCH is 12.2 kbit/s and DCCH is 3.7 kbit/s
by default (SF = 128).
For the channel coding, three options are supported: convolutional coding, turbo
coding, or no channel coding. Channel coding selection is indicated by upper layers.
For example, with single DPDCH in UL:
960 kbit/s can be obtained with SF=4 and no coding
400-500 kbit/s with coding
With 3 codes, up to 5740 kbit/s uncoded, or 2 Mbit/s (or even more) with coding.
Error Correction Coding Parameters
Transport channel type / Coding scheme / Coding rate:
BCH, PCH, RACH: convolutional code, rate 1/2
CPCH, DCH, DSCH, FACH: convolutional code (rate 1/3 or 1/2), turbo code (rate 1/3), or no coding
1.3 UTRAN
In UMTS, the UTRAN is formed from RNCs, Node Bs and their defined interfaces. An
RNC is the UMTS equivalent of a GSM BSC, but has greater functionality. One of the
main differences between UMTS (Release 99) and GSM (Release 99) is that there is an
Iur interface interconnecting multiple RNCs. This additional interface permits the
RNCs to communicate with each other, allowing soft handover (not present in GSM).
Another difference between GSM and UMTS is that some of the Mobility
Management (MM) functionality has been moved to the RNC from the Core Network, allowing
UTRAN initiated paging. The RNC has greater control over a group of cells and the
functionality allows soft handover.
Core Network
The Core Network is divided into circuit-switched and packet-switched domains. Some
of the circuit switched elements are Mobile services Switching Centre (MSC), Visitor
location register (VLR) and Gateway MSC. Packet switched elements are Serving
GPRS Support Node (SGSN) and Gateway GPRS Support Node (GGSN). Some
network elements, like EIR, HLR, VLR and AUC are shared by both domains.
The functions of the RNC are:
Admission Control
Channel Allocation
Load Control
Handover Control
Macro Diversity
Ciphering
Segmentation / Reassembly
Broadcast Signalling
Co-existence with legacy standards and systems: LTE users should be able to make
voice calls from their terminal and have access to basic data services even when they
are in areas without LTE coverage. LTE therefore allows smooth, seamless service
handover in areas of HSPA, WCDMA or GSM/GPRS/EDGE coverage. Furthermore,
LTE/SAE supports not only intra-system and inter-system handovers, but also
inter-domain handovers between packet-switched and circuit-switched sessions.
1.5 HS-PDSCH
1.6 HSDPA
1.7 HSUPA
From one phase to another, the main limitation is product development: which resources can be offered to the user.
TTI: 2 ms (HS-DSCH), 10 ms, 20 ms, 40 ms and 80 ms.
The TTI is the length of a transmission on the radio link.
1.9 R8 LTE
Urban areas:
Rural areas:
Option 1: deploy UMTS in the 900 MHz band.
Advantage: rollout can start now.
Disadvantage: a block of 5 MHz needs to be taken out of the GSM band. Not a lot of
operators can afford to take out this much spectrum due to heavy usage in this band.
Option 2: introduce LTE in the 900 MHz band.
Advantages: reuse of GSM 900 sites; step-by-step introduction of LTE with smaller
granularity (1.4 / 3 / 5 MHz).
CHAPTER 2
IP Core Network
Overview
2.1 The TCP/IP Layers
TCP/IP can be represented by the US DoD Model. This model describes the
relationship between the main protocols used by TCP/IP.
Prior to the development of this model most network protocols were vendor-dependent.
The architecture behind TCP/IP is different in the sense that the same
protocol model can be run on a multitude of different computer systems without
modification of the operating system or hardware architecture. TCP/IP is designed to
run as an application.
The protocol was primarily used to support application-orientated functions and
process-to-process communications between hosts. Specific applications to provide
basic network services for users were written to run with TCP/IP. The objective of the
lower protocols was to provide support for the network layer application services.
Transport layer protocols provide two basic functions to the application layer services
- quality of service and application multiplexing through port numbers. TCP/IP has
two main transport layer protocols TCP and UDP.
UDP provides a simple datagram delivery service adding application multiplexing
and a checksum to the underlying IP layer. It therefore provides the same unreliable,
connectionless delivery service as IP. It does not use acknowledgements to confirm
that messages have arrived, it does not provide any flow control mechanisms, and it
does no sequencing - UDP messages can be duplicated, arrive out of order or not at
all. UDP works well on LANs where error rates are low and delays small, but on
WANs it behaves poorly, especially for large data transfers.
TCP provides a reliable, connection-oriented, stream based delivery system by adding
acknowledgements, sequencing and flow control to IP. This makes TCP much more
efficient on WANs and for large data transfers, but it has a large protocol overhead
which makes it slower and less efficient than UDP in certain applications.
Most applications tend to use TCP because it provides reliable delivery, but time-sensitive, transactional and broadcast-based applications need to use UDP.
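The contrast between the two transport services can be seen directly in the sockets API. A minimal sketch in Python (the loopback addresses, ports and payloads are illustrative, not from the original text):

```python
import socket

# UDP: connectionless -- no handshake, each datagram is addressed explicitly.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", rx.getsockname())
data, _ = rx.recvfrom(1024)            # no ordering or delivery guarantees in general
print(data)                            # b'hello'

# TCP: a connection must be established before any data flows.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())         # three-way handshake happens here
conn, _ = srv.accept()
cli.sendall(b"hello")
echoed = conn.recv(1024)               # delivered reliably and in order
print(echoed)                          # b'hello'
for s in (rx, tx, srv, cli, conn):
    s.close()
```

Note how the UDP side never calls connect or accept: the acknowledgement, sequencing and flow control described above exist only on the TCP side.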
The power behind TCP/IP is not the sophisticated and powerful nature of the
protocol architecture, but rather it is the absolute simplicity of the protocol. This is
equally true of the application-level services which are designed to provide network
services for users.
TCP/IP provides a consistent application front end to users regardless of the
operating system, platform, or network architecture which is used. Many of the
application level services retain the look and feel of simple character-oriented
applications. Even today, where TCP/IP applications provide a GUI, once the superficial GUI is
removed, the same basic element of code is used to provide the network service.
The Transport layer protocols (TCP/UDP) use Port Numbers to uniquely identify
each application level service. The client usually generates a port number above 1,023
to identify the process and the server always uses a well known port number.
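The client-side port allocation described above is easy to observe: binding a socket to port 0 asks the operating system for an ephemeral port. A small sketch for illustration:

```python
import socket

# The server side uses a fixed well-known port by convention (0-1023),
# e.g. 80 for HTTP; the client side gets an ephemeral port from the OS.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))       # port 0 asks the OS for an ephemeral port
client_port = s.getsockname()[1]
s.close()
print(client_port)             # typically well above 1023
```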
IPv4 header fields:
VERS: IP protocol version number.
HLEN: Header length in 32-bit words.
Service Type: Requested handling of the datagram (precedence / type of service).
Total Length: Length of IP datagram in octets including header and data; maximum 65535.
Identification: Identifies the datagram (common to all of its fragments).
Flags: Control bits (Don't Fragment, More Fragments).
Fragment Offset: Position of data in this fragment compared to original datagram, in units of 8 octets.
Time To Live: Specifies how long (in router hops) the datagram is to remain in the internet.
Source IP Address
Padding
DATA
Each physical network has a defined limit on the size of protocol data unit which it can
support. This is known as the Maximum Transfer Unit (MTU) and is generally
considered a hard limit of the network which cannot be increased. The MTU is the
maximum size of the software protocol data unit which can be sent, and not the maximum size of
frame which can be supported. If hardware control information (such as physical
addresses) is added to the MTU, the maximum frame size can be derived.
Ethernet limits transfers to 1500 bytes of data, while FDDI permits approximately
4470 bytes of data per frame. MTUs vary considerably in size. Local area networks,
which generally use high bandwidth, low bit error rate media have relatively large
MTUs, while wide area networks have much smaller MTUs. Limiting datagram size
to fit the smallest possible MTU in the internet makes transfers inefficient when those
datagrams pass across a network which can carry larger size frames. However,
allowing datagrams to be larger than the minimum network MTU in an internet
means that a datagram may not always fit into a single network frame.
Instead of making IP datagrams adhere to the constraints of physical networks,
TCP/IP software chooses a convenient initial datagram size and arranges a way to
divide large datagrams into smaller pieces when the datagram needs to traverse a
network that has a small MTU. The small pieces into which a datagram is divided are
called fragments, and the process of dividing a datagram is known as fragmentation.
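The fragmentation arithmetic described above can be sketched as follows; the function name and parameters are illustrative, not from any real IP stack:

```python
def fragment(total_len, mtu, header_len=20):
    """Split an IP datagram payload into fragments that fit the MTU.

    Fragment offsets are expressed in 8-octet units, so each fragment's
    payload (except the last) must be a multiple of 8 bytes.
    """
    payload = total_len - header_len
    max_data = (mtu - header_len) // 8 * 8   # round down to an 8-octet multiple
    frags, offset = [], 0
    while offset < payload:
        size = min(max_data, payload - offset)
        more = offset + size < payload        # MF flag: more fragments follow
        frags.append((offset // 8, size, more))
        offset += size
    return frags

# e.g. a 4000-byte datagram crossing an Ethernet link (MTU 1500):
print(fragment(4000, 1500))
# [(0, 1480, True), (185, 1480, True), (370, 1020, False)]
```

Each tuple is (fragment offset in 8-octet units, data bytes, more-fragments flag); the final fragment carries the remainder and clears the flag.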
2.7 Fragmentation
Hosts can choose to send datagrams up to the supported MTU of their own network.
Routers interconnect different physical networks with varying MTUs. Routers can
fragment datagrams if desired to permit transport across networks which have
smaller MTUs.
The IP protocol does not limit datagram size nor does it guarantee that the datagram
will be delivered without fragmentation. The source can choose any datagram size it
thinks appropriate; fragmentation and reassembly occur automatically, without the
source having to take any special action. The IP specification states that routers must accept datagrams up to the
maximum of the MTUs of the networks to which they are attached.
Once a datagram has been fragmented, the fragments travel all the way to the final
destination, where they are reassembled. This has a number of disadvantages - small
fragments must be carried by networks which could support larger MTUs, and
reassembling the datagrams at the destination can lead to inefficiency, particularly if
fragments are lost. If fragments are lost, the original datagram cannot be reassembled.
The receiving machine starts a reassembly timer when it receives an initial fragment.
If the timer expires before all fragments arrive, the receiving machine discards the
surviving pieces without processing the datagram. Performing reassembly at the
ultimate destination works well, and permits each fragment to be routed
independently as well as sparing resources on routers.
The TIME To LIVE specifies how long, in seconds, the datagram is permitted to
remain in the internet. Whenever a host injects a datagram into the internet, it sets a
maximum time that the datagram should survive. Router and hosts that process
datagrams must decrement the TIME To LIVE field as time passes and remove the
datagram from the internet when the value in this field reaches zero.
Estimating exact time is difficult because routers do not usually know the transit time
of physical networks. A few rules simplify processing and make it easy to handle
datagrams without synchronised clocks. First, each router along the path from source
to destination is required to decrement the TIME To LIVE field when the datagram
header is processed. Furthermore, to handle cases of overloaded routers that
introduce long delays, each router records the local time when the datagram arrives
and decrements the TIME To LIVE by the number of seconds that the datagram
remained inside the router waiting for service.
2.9 IP Addresses
Binary address strings are very difficult to work with. To overcome this problem and
make logical addressing easier to comprehend, the 32-bit address string is divided
into 8-bit bytes and then converted into the corresponding decimal notation. It is this
dotted decimal notation which is used to configure hosts on a TCP/IP network.
However, it should be noted that decimal addresses are a human and humane
interface to TCP/IP. As far as the host is concerned, the address appears and is used
as a binary string. This is the cause of much confusion.
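The mapping between the 32-bit binary string and dotted decimal notation can be sketched as follows (the helper names are illustrative):

```python
def to_dotted(addr32):
    """Render a 32-bit address (as the host sees it) in dotted decimal."""
    return ".".join(str((addr32 >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def to_binary(dotted):
    """The reverse mapping: dotted decimal back to the 32-bit integer."""
    b = [int(x) for x in dotted.split(".")]
    return (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]

print(to_dotted(0xC0A80001))                    # 192.168.0.1
print(to_binary("192.168.0.1") == 0xC0A80001)   # True
```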
There are five main classes of IP addresses but only three are directly usable: A, B and
C.
For a Class A address, 8-bits are used to logically identify the network. For Class B,
16-bits are used, and for Class C, 24-bits are used. In each case, once the network bits
have been allocated, the remaining bits are used to logically identify the node.
Class E addresses are reserved for testing and development by the IETF and cannot be
assigned to any device. Class D addresses are software multicast addresses and
reserved for the use of routing protocols such as OSPF, RIPv2 and so on.
The address categorisation is derived from the high-order bit rule of the first byte. The
high-order bit rule is interpreted by every TCP/IP stack as soon as an address is
entered. This rule is also used to define the decimal ranges in the first byte of each
address.
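The high-order bit rule can be sketched as a simple classification of the first byte (the function name is illustrative):

```python
def address_class(first_byte):
    """Classify by the high-order bits of the first byte, as a stack would."""
    if first_byte & 0x80 == 0x00:   # 0xxxxxxx -> first byte 0-127
        return "A"
    if first_byte & 0xC0 == 0x80:   # 10xxxxxx -> 128-191
        return "B"
    if first_byte & 0xE0 == 0xC0:   # 110xxxxx -> 192-223
        return "C"
    if first_byte & 0xF0 == 0xE0:   # 1110xxxx -> 224-239 (multicast)
        return "D"
    return "E"                      # 1111xxxx -> 240-255 (reserved)

print([address_class(b) for b in (10, 172, 192, 224, 250)])
# ['A', 'B', 'C', 'D', 'E']
```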
Some binary bit patterns are reserved for management reasons and cannot be
allocated to devices on a TCP/IP network.
Although the Internet Protocol has been stable for a considerable number of years, the
way IP addresses are used and interpreted has changed over the years.
In general, 1s indicate "All" and 0s indicate "Any" - the local broadcast address is an
obvious exception; a broadcast to all hosts on all networks (on the Internet) would
cause chaos!
Although the IPv6 header must accommodate larger addresses, an IPv6 base header
contains less information than an IPv4 header. Options and some of the fixed fields
that appear in an IPv4 header have been moved to extension headers. Changes in the
datagram header reflect changes in the protocol:
The header length field has been eliminated, and the datagram length field has been replaced by a Payload Length field.
The size of the source and destination address fields has been increased to 16 bytes each.
Fragmentation information has been moved out of fixed fields in the base header into an extension header.
The Service Type field has been replaced by a Flow Label field.
The Protocol field has been replaced by a field that specifies the type of the next header.
IPv6 handles packet length specification in a new way. Firstly, because the size of the
base header is fixed at 40 bytes, the header does not include a field for the header
length. Secondly, IPv6 replaces the IPv4 packet length field with a 16-bit Payload Length
field that specifies the number of octets carried in the packet excluding the header. An
IPv6 packet can therefore carry up to 64 KB of data.
The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains
the source and destination addresses, traffic classification options, a hop counter, and
a pointer to extension headers, if any. The Next Header field, present in each extension
header as well, points to the next element in the chain of extensions.
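The fixed 40-octet layout can be illustrated by packing the base header fields with Python's struct module; the field values below are placeholders, not real traffic:

```python
import struct

def ipv6_base_header(payload_len, next_header, hop_limit, src, dst,
                     traffic_class=0, flow_label=0):
    # First 32-bit word: version (6) | traffic class (8 bits) | flow label (20 bits)
    word0 = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!IHBB16s16s", word0, payload_len,
                       next_header, hop_limit, src, dst)

# 59 = "No Next Header"; all-zero placeholder addresses
hdr = ipv6_base_header(0, 59, 64, bytes(16), bytes(16))
print(len(hdr))   # 40 -- the base header is always exactly 40 octets
```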
2.16 IP Precedence
The Type of Service field in the IP header was originally defined in RFC 791.
It defined a mechanism for assigning a priority to each IP packet as well as a
mechanism to request specific treatment such as high throughput, high reliability or
low latency.
Differentiated Services Code Point
In RFC 2474 the definition of this entire field was changed. It is now called the "DS"
(Differentiated Services) field and the upper 6 bits contain a value called the "DSCP"
(Differentiated Services Code Point). Since RFC 3168, the remaining two bits (the two
least significant bits) are used for Explicit Congestion Notification.
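Splitting the DS byte into its DSCP and ECN parts is simple bit arithmetic; a sketch (0xB8 is the standard Expedited Forwarding marking with the ECN bits clear):

```python
def split_ds_byte(tos):
    """Split the 8-bit DS field: upper 6 bits are the DSCP, lower 2 bits ECN."""
    return tos >> 2, tos & 0x03

dscp, ecn = split_ds_byte(0xB8)
print(dscp, ecn)   # 46 0 -- DSCP 46 is EF (Expedited Forwarding)
```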
In theory, a network could have up to 64 (i.e. 2^6) different traffic classes using
different markings in the DSCP. The DiffServ RFCs recommend, but do not require,
certain encodings. This gives a network operator great flexibility in defining traffic
classes. In practice, however, most networks use the following commonly-defined
Per-Hop Behaviors:
The DS field consists of a 6-bit differentiated services code point (DSCP), defined in RFC 2474.
Explicit Congestion Notification occupies the two least-significant bits. ECN allows
end-to-end notification of network congestion without dropping packets.
Because the CE indication can only be handled effectively by an upper layer protocol
that supports it, ECN is only used in conjunction with upper layer protocols (for
example, TCP).
If you need to mark packets in your network and all of the devices support IP DSCP
marking and matching, use the IP DSCP marking to mark your packets. This is
because the IP DSCP markings provide more packet marking options. 64 individual
values can be marked using IP DSCP marking, while only 8 individual values can be
marked using IP precedence marking.
There are four assured forwarding (AF) classes, AF1x through AF4x. The first number
corresponds to the AF class and the second number (x) refers to the level of drop
preference within each AF class. There are three drop probabilities, ranging from 1
(low drop) through 3 (high drop). Depending on a network policy, packets can be
selected for a PHB based on required throughput, delay, jitter, loss, or according to
the priority of access to network services.
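The AFxy codepoints follow a simple pattern from RFC 2597: the 6-bit DSCP carries the class in its top three bits and the drop precedence in the next two, which works out to 8 times the class plus 2 times the drop precedence. A sketch (the helper name is illustrative):

```python
def af_dscp(af_class, drop_precedence):
    """AFxy codepoint per RFC 2597: DSCP = 8 * class + 2 * drop precedence."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return 8 * af_class + 2 * drop_precedence

# The corners of the AF table:
print({f"AF{c}{d}": af_dscp(c, d) for c in (1, 4) for d in (1, 3)})
# {'AF11': 10, 'AF13': 14, 'AF41': 34, 'AF43': 38}
```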
2.20.1 RNC Scheduling
2.21 Questions
CHAPTER 3
Layer 2 Switching
3.1 Introduction
With the introduction of 4G systems, wireless networks are evolving to next-generation packet architectures capable of supporting enhanced broadband
connections. Simple text messaging and slow email downloads are being replaced by
high-speed connections that support true mobile office applications, real time video,
streaming music, and other rich multimedia applications. 4G wireless networks will
approach the broadband speeds and user experience now provided by traditional
DSL and cable modem wireline service.
From the wireless operator's perspective, 4G systems are vastly more efficient at
using valuable wireless spectrum. These spectral efficiency improvements support
new high-speed services, as well as larger numbers of users. The additional speeds
and capacity provided by 4G wireless networks put additional strains on mobile
backhaul networks and the carriers providing these backhaul services. Not only are
the transport requirements much higher, but there is also a fundamental shift from
TDM transport in 2G and 3G networks to packet transport in 4G networks.
Understanding the impact of 4G on mobile backhaul transport is critical to deploying
efficient, cost-effective transport solutions that meet wireless carrier expectations for
performance, reliability and cost.
Message switching moves the entire message from connecting point to connecting
point, one step at a time. This method is sometimes referred to as store & forward.
Message switching creates a virtual or dedicated connection to the next switching
station. The entire message is transmitted and then the connection is terminated. The
receiving station must buffer the entire message and then create a connection to the
next switching station and forward the entire message. The message is forwarded one
step at a time until it is received at the final destination.
The best example of message switching is E-mail servers.
3.5 Frames
A frame is the fundamental unit of data transfer used on LANs. It has specific
network characteristics which relate to the type of network it originated on. All IEEE
compliant frames have a similar structure and start with a preamble which is
followed by hardware addresses for source and destination stations. Some network
frames (Token Ring) have some special fields for specific MAC control. After the
source MAC address, the LLC protocol data unit follows. Generally, LLC is three
bytes in size followed by the network layer protocol. The maximum and minimum
size of the protocol which follows LLC is dependent on the type of network. The
frame is finished by a four-byte trailer used for error checking.
A store and forward switching hub stores the full incoming frame in a buffer. This
enables the switch to perform a CRC check to see if the frame contains any errors. If
the frame is error-free, the switch uses an address lookup table to obtain a destination
port. Once the address is obtained, the switch performs a cross-connect operation and
forwards the frame to the destination.
Since the frame must be buffered in shared RAM, this results in greater latency than
that provided by cut-through switching. A key advantage of store and forward
switches results from the buffering of the frames in the switch. Since they are placed
in memory, this enables frame processing functions to be added to the switch,
permitting vendors to support a variety of filtering operations and the gathering of
statistics for management reports.
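The error check performed on the buffered frame is a CRC-32 over the frame contents. A sketch of the idea using Python's zlib (the payload is illustrative, and real switches do this in hardware; the little-endian packing below is assumed to match Ethernet's FCS byte order):

```python
import zlib

def frame_ok(frame):
    """Verify a trailing 4-byte FCS against the CRC-32 of the rest of the frame."""
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "little") == fcs

payload = b"example frame contents"
good = payload + zlib.crc32(payload).to_bytes(4, "little")
bad = bytes([good[0] ^ 0xFF]) + good[1:]   # flip one payload byte

print(frame_ok(good))   # True  -- frame is forwarded
print(frame_ok(bad))    # False -- errored frame is discarded
```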
Within corporate networks system administrators like to segregate users into separate
networks. The old method would have required that all users of one network be
connected to the same physical devices.
With modern switches, it is possible to isolate users into separate networks, whilst
connected to the same layer 2 devices.
Users belonging to the same network are simply attached to ports on a layer 2 switch.
These ports are then programmed to be part of the same virtual network. A virtual
network is known as a Virtual LAN (VLAN) and user ports on the same virtual
network would share the same VLAN ID.
You may have users who wish to be part of the same VLAN whilst connected to
different layer 2 switches. A physical connection between the switches needs to be
established, and the ports at either end of the link would have to be part of the same
VLAN.
VLAN Tagging (IEEE 802.1q)
The diagram above also shows the additional fields within the layer 2 ethernet frame
that can be used to identify traffic from separate VLANs.
The fields are:
TPID: Tag Protocol Identifier.
PCP: Priority bits.
VID: VLAN ID.
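Building the 4-byte 802.1Q tag is straightforward bit packing: the TPID (0x8100) is followed by a 16-bit field holding the 3-bit priority, a 1-bit drop-eligible indicator and the 12-bit VLAN ID. A sketch with illustrative values:

```python
import struct

TPID = 0x8100  # Tag Protocol Identifier for 802.1Q

def vlan_tag(priority, vlan_id, dei=0):
    """Build the 4-byte 802.1Q tag: TPID, then PCP(3) | DEI(1) | VID(12)."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4094
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

tag = vlan_tag(priority=5, vlan_id=100)
print(tag.hex())   # 8100a064
```

The 12-bit VID field is also where the 4094-VLAN scaling limit mentioned later in this chapter comes from.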
The Metro Ethernet Forum (MEF) has led the industry in propagating Carrier
Ethernet and has identified five key attributes that distinguish Carrier Ethernet from
traditional LAN based Ethernet. These are:
Standardised services
Scalability
Service manageability
Quality of service
Reliability
The ability to predetermine the EVC path through the Ethernet network is
fundamental to making Ethernet connection-oriented. In classic connectionless
Ethernet bridging, Ethernet frames are forwarded in the network according to the
MAC bridging tables in the learning bridge. If a destination MAC address is
unknown, the bridge floods the frame to all ports in the broadcast domain. Spanning
tree protocols like IEEE 802.1s are run to ensure that there are no loops in the
topology and to provide network restoration in the event of failure. Depending upon
the location and sequence of network failures, the path EVCs take through the
network may be difficult to predetermine.
Predetermining the EVC path, either through a management plane application or via
an embedded control plane, ensures that all frames in the EVC pass over the same
set of nodes. Therefore, intelligence regarding the connection as a whole can now be
imparted to all nodes along the path.
Resource reservation and CAC is the next critical function. Now that the EVC path
through the network has been explicitly identified, the actual bandwidth and queuing
resources required for each EVC are reserved in all nodes along the path. This is vital
to ensure the highest possible levels of performance in terms of packet loss, latency,
and jitter. CAC ensures that the requested resource is actually available in each node
along the path prior to establishing the EVC.
Once the path has been determined and the resources allocated, the traffic
engineering and traffic management functions ensure that the requested connection
performance is actually delivered. After packets have been classified on network
ingress, a variety of traffic management functions must be provided in any packet-based network. These include:
Policing
Shaping
Queuing
Scheduling
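Of the functions above, policing is commonly implemented with a token bucket. A simplified single-rate sketch (the class name and parameters are illustrative, not from any particular product):

```python
class TokenBucket:
    """Single-rate policer sketch: conforming packets pass, excess is dropped."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0     # refill rate in bytes per second
        self.burst = burst_bytes       # bucket depth (committed burst size)
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, now, packet_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                # in profile: forward
        return False                   # out of profile: drop (or re-mark)

tb = TokenBucket(rate_bps=8000, burst_bytes=1500)   # 1000 B/s, 1500 B burst
results = [tb.conforms(t, 1000) for t in (0.0, 0.1, 1.0)]
print(results)   # [True, False, True]
```

The second packet arrives before enough tokens have accumulated and is dropped; by t = 1.0 s the bucket has refilled and traffic conforms again. Shaping differs only in that non-conforming packets are queued and delayed rather than dropped.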
802.1Q provides for tagging Ethernet frames with VLAN IDs. It provides the
mechanism that enables multiple-bridged networks to transparently share the same
physical network while maintaining the isolation between networks. Ethernet
switches deliver packets within the same VLAN and send the traffic between different
VLANs to internal or external routers to perform the routing function. 802.1Q only
supports up to 4094 VLANs, which is a scaling constraint for service providers.
Based on the Metro Ethernet Forum's (MEF) definitions, there are two broad
categories of Carrier Ethernet services: point-to-point, referred to as E-Line services;
and multipoint, referred to as E-LAN services. Both E-line and E-LAN services are
often provided with multiple classes of service (CoS); where a single Ethernet virtual
connection (EVC) can carry traffic with one or more CoS. Service providers desire to
build networks that offer all services simultaneously on a single converged
infrastructure.
3.19 Questions
CHAPTER 4
4.2 OFDMA
The downlink transmission scheme for E-UTRA FDD and TDD modes is based on
conventional OFDM. In an OFDM system, the available spectrum is divided into
multiple carriers, called subcarriers. Each of these subcarriers is independently
modulated by a low rate data stream. OFDM is used as well in WLAN, WiMAX and
broadcast technologies like DVB. OFDM has several benefits including its robustness
against multipath fading and its efficient receiver architecture.
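Using the standard E-UTRA numerology (15 kHz subcarrier spacing, 12 subcarriers per resource block, and the usual resource-block counts per channel bandwidth), the symbol duration and occupied bandwidth follow directly; a sketch:

```python
SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12
RBS_PER_BANDWIDTH_MHZ = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def occupied_bandwidth_hz(channel_mhz):
    """Spectrum actually occupied by the modulated subcarriers."""
    n_sc = RBS_PER_BANDWIDTH_MHZ[channel_mhz] * SUBCARRIERS_PER_RB
    return n_sc * SUBCARRIER_SPACING_HZ

# OFDM symbol duration (excluding cyclic prefix) is 1 / subcarrier spacing:
symbol_time_us = 1e6 / SUBCARRIER_SPACING_HZ
print(symbol_time_us)                   # ~66.7 microseconds
print(occupied_bandwidth_hz(20) / 1e6)  # 18.0 MHz of a 20 MHz channel
```

The remainder of each channel is left as guard band at the edges.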
LTE uses advanced antenna techniques and wider spectrum allocations to provide
higher data rates throughout the cell area. LTE supports MIMO, SDMA and
beamforming. These techniques are complementary and can be used to trade off
between higher sector capacity, higher user data rates, or higher cell-edge rates, and
thus enable operators to have finer control over the end-user experience.
DL MIMO: LTE supports up to 4x4 MIMO in the DL, which uses four transmit
antennas at the Node B to transmit orthogonal (parallel) data streams to the four
receive antennas at the user equipment (UE). Using additional antennas and signal
processing at the receiver and transmitter, MIMO increases the system capacity and
user data rates without using additional transmit power or bandwidth. To be most
effective, MIMO needs a high signal-to-noise ratio (SNR) at the UE and a rich
scattering environment. High SNR ensures that the UE is able to decode the incoming
signal, and a rich scattering environment ensures the orthogonality of the multiple
data streams. The MIMO benefit is therefore maximised in a dense urban
environment, where there is enough scattering and the small cell sizes provide an
environment of high SNRs at the UE.
SU-MIMO
Similarly, on the UL, SDMA enables two users in the cell to simultaneously send data
to the eNode B, using the same time-frequency resource. Even though the
transmissions are simultaneous, the spatial separation ensures that the two data
streams do not interfere with each other. Allowing these concurrent transmissions
increases the cell capacity in both the DL and the UL. LTE does not support
simultaneous MIMO and SDMA operation to a user; hence, there is a tradeoff
between higher user data rates and higher system capacity in the DL.
Beamforming
Beamforming increases the user data rates by focusing the transmit power in the
direction of the user, effectively increasing the received signal strength at the UE.
Beamforming provides the most benefits to users in weaker-signal-strength areas, like
the edge of the cell coverage. Beamforming ensures that cell-edge rates are high, and
enables the operator to deploy high-bandwidth services without concern for service
degradation at the cell edge.
4.5 FDD/TDD
LTE can be used in both paired (FDD) and unpaired (TDD) spectrum. Leading
suppliers' first product releases will support both duplex schemes. In general, FDD is
more efficient and represents higher device and infrastructure volumes, while TDD is
a good complement, for example in spectrum centre gaps.
All cellular systems today use FDD, and more than 90 per cent of the world's mobile
frequencies available are in paired bands. With FDD, downlink and uplink traffic is
transmitted simultaneously in separate frequency bands.
With TDD the transmission in uplink and downlink is discontinuous within the same
frequency band. As an example, if the time split between down- and uplink is 1/1, the
uplink is used half of the time. The average power for each link is then also half of the
peak power. As peak power is limited by regulatory requirements, the result is that
for the same peak power, TDD will offer less coverage than FDD.
4.5.1 FDD
Two frame structure types are defined for E-UTRA: frame structure type 1 for FDD
mode, and frame structure type 2 for TDD mode.
For frame structure type 1, the 10 ms radio frame is divided into 20 equally sized
slots of 0.5 ms. A sub-frame consists of two consecutive slots, so one radio frame
contains ten sub-frames.
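As a quick sanity check, the type 1 frame arithmetic above can be expressed in a few lines (an illustrative sketch, not spec text):

```python
# Frame structure type 1 (FDD) timing, as described above.
FRAME_MS = 10.0              # one radio frame
SLOT_MS = 0.5                # one slot
SLOTS_PER_SUBFRAME = 2       # a sub-frame is two consecutive slots

slots_per_frame = FRAME_MS / SLOT_MS                         # 20 slots
subframes_per_frame = slots_per_frame / SLOTS_PER_SUBFRAME   # 10 sub-frames
subframe_ms = SLOT_MS * SLOTS_PER_SUBFRAME                   # 1 ms TTI
```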
Page 145
4.5.2 TDD
The frame structure for the type 2 frames used on LTE TDD is somewhat different.
The 10 ms frame comprises two half frames, each 5 ms long. The LTE half-frames are
further split into five sub-frames, each 1ms long.
Page 146
GP (Guard Period)
Page 147
The DL-to-UL switching method ensures that high-power downlink transmissions
from eNodeBs in other neighbouring cells do not interfere with uplink reception at
the eNodeB in the current cell.
Page 148
One of the advantages of using LTE TDD is that it is possible to dynamically change
the up and downlink balance and characteristics to meet the load conditions. In order
that this can be achieved in an ordered fashion, a number of standard configurations
have been set within the LTE standards.
A total of seven up/downlink configurations have been set, and these use either 5 ms
or 10 ms switch periodicities. In the case of the 5ms switch point periodicity, a special
sub-frame exists in both half frames. In the case of the 10 ms periodicity, the special
subframe exists in the first half frame only. It can be seen from the table above that the
sub-frames 0 and 5 as well as DwPTS are always reserved for the downlink. It can
also be seen that UpPTS and the sub-frame immediately following the special
subframe are always reserved for the uplink transmission.
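The seven configurations can be captured in a small table and checked against the invariants stated above. The D/S/U strings below follow the standard tabulation in TS 36.211; the check itself is only an illustrative sketch:

```python
# The seven LTE TDD uplink/downlink configurations (D = downlink,
# S = special sub-frame, U = uplink), one character per 1 ms sub-frame.
TDD_CONFIGS = {
    0: "DSUUUDSUUU",  # 5 ms switch-point periodicity
    1: "DSUUDDSUUD",  # 5 ms
    2: "DSUDDDSUDD",  # 5 ms
    3: "DSUUUDDDDD",  # 10 ms
    4: "DSUUDDDDDD",  # 10 ms
    5: "DSUDDDDDDD",  # 10 ms
    6: "DSUUUDSUUD",  # 5 ms
}

def check_invariants(cfg: str) -> bool:
    """Sub-frames 0 and 5 are always downlink; the sub-frame immediately
    following each special sub-frame is always uplink (both properties
    are noted in the text above)."""
    if cfg[0] != "D" or cfg[5] != "D":
        return False
    return all(cfg[i + 1] == "U" for i, c in enumerate(cfg) if c == "S")

assert all(check_invariants(c) for c in TDD_CONFIGS.values())
```

Configurations with a 5 ms switch-point periodicity contain two special sub-frames (one per half-frame); the 10 ms ones contain a special sub-frame in the first half-frame only.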
Page 149
Page 150
Page 151
Page 152
Data symbols are independently modulated and transmitted over a large number of
closely spaced orthogonal subcarriers. In E-UTRA, the downlink modulation schemes
QPSK, 16QAM, and 64QAM are available.
Page 153
Page 154
LTE must support the international wireless market and regional spectrum
regulations and spectrum availability. To this end the specifications include variable
channel bandwidths selectable from 1.4 to 20 MHz, with subcarrier spacing of 15 kHz.
If the new LTE eMBMS is used, a subcarrier spacing of 7.5 kHz is also possible.
Subcarrier spacing is constant regardless of the channel bandwidth.
3GPP has defined the LTE air interface to be "bandwidth agnostic," which allows the
air interface to adapt to different channel bandwidths with minimal impact on system
operation. The smallest amount of resource that can be allocated in the uplink or
downlink is called a resource block (RB). An RB is 180 kHz wide and lasts for one 0.5
ms timeslot. For standard LTE, an RB comprises 12 subcarriers at a 15 kHz spacing,
and for eMBMS with the optional 7.5 kHz subcarrier spacing an RB comprises 24
subcarriers for 0.5 ms. The maximum number of RBs supported by each transmission
bandwidth is given above.
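The fixed 180 kHz RB width makes the bandwidth arithmetic easy to verify. The sketch below uses the standard maximum RB counts per channel bandwidth (TS 36.101) and is illustrative only:

```python
# Channel bandwidth (MHz) -> maximum number of resource blocks.
RB_PER_BANDWIDTH = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

SUBCARRIER_KHZ = 15       # standard LTE subcarrier spacing
SUBCARRIERS_PER_RB = 12   # 12 x 15 kHz = 180 kHz per RB

def occupied_bandwidth_khz(n_rb: int) -> int:
    # Total width of the occupied subcarriers; the remainder of the
    # channel bandwidth is guard band.
    return n_rb * SUBCARRIERS_PER_RB * SUBCARRIER_KHZ

# e.g. a 20 MHz carrier: 100 RBs occupy 18 MHz of the channel.
```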
Page 155
4.7 Orthogonality
Depending on the required data rate, each UE can be assigned one or more resource
blocks in each transmission time interval of 1 ms. The scheduling decision is made in
the base station (eNodeB).
Page 156
Page 157
LTE specifies a high-capacity multicast and broadcast service, using a single-frequency
network (also called a multicast-broadcast single-frequency network, or MBSFN). As
depicted above, all cells in the network (or a geographical area) transmit
time-synchronised, identical DL signals. At the user terminal, these multiple
time-synchronised transmissions appear as a single transmission with high signal
strength, and thus can be easily decoded. In addition to the benefits of
time-synchronised transmissions, the robustness of OFDM to multipath propagation
ensures that the inter-cell interference is reduced.
The capacity benefits of the single-frequency network are highest when the same
content is transmitted in all cells of the macro network.
Page 158
Page 159
LTE has ambitious requirements for data rate, capacity, spectrum efficiency, and
latency. In order to fulfill these requirements, LTE is based on new technical
principles. LTE uses new multiple access schemes on the air interface: OFDMA
(Orthogonal Frequency Division Multiple Access) in downlink and SC-FDMA (Single
Carrier Frequency Division Multiple Access) in uplink.
While OFDMA is seen as optimal for fulfilling the LTE requirements in the downlink,
its properties are less favourable for the uplink. This is mainly due to the weaker
peak-to-average power ratio (PAPR) properties of an OFDMA signal, resulting in worse
uplink coverage.
Thus, the LTE uplink transmission scheme for both FDD and TDD modes is based on
SC-FDMA (Single Carrier Frequency Division Multiple Access) with cyclic prefix.
SC-FDMA signals have better PAPR properties than an OFDMA signal. This was one of
the main reasons for selecting SC-FDMA as the LTE uplink access scheme. The PAPR
characteristics are important for cost-effective design of UE power amplifiers.
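The PAPR difference that motivated SC-FDMA can be illustrated with a toy simulation: summing many modulated subcarriers (as OFDM does) produces occasional large peaks, whereas a plain single-carrier QPSK stream has a constant envelope. The sketch below is illustrative only (64 subcarriers, naive inverse DFT) and does not model the actual LTE waveforms:

```python
import cmath
import math
import random

random.seed(1)

def idft(symbols):
    """Naive inverse DFT: one OFDM symbol from N subcarrier values."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * math.pi * k * t / n)
                for k, s in enumerate(symbols)) / n
            for t in range(n)]

def papr_db(samples):
    """Peak-to-average power ratio of a sample sequence, in dB."""
    powers = [x.real ** 2 + x.imag ** 2 for x in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

qpsk = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(64)]

ofdm_time = idft(qpsk)   # 64 subcarriers summed -> occasional large peaks
sc_time = qpsk           # single-carrier QPSK: constant envelope

print(round(papr_db(ofdm_time), 1))  # several dB
print(round(papr_db(sc_time), 1))    # 0.0
```

The constant-envelope stream has 0 dB PAPR by construction; the multicarrier sum typically lands several dB higher, which is exactly what makes UE power-amplifier design harder for OFDMA.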
Page 160
Page 161
In the time domain, a guard interval may be added to each symbol to combat
inter-OFDM-symbol interference due to channel delay spread. In E-UTRA, the guard
interval is a cyclic prefix which is inserted prior to each OFDM symbol.
Delay spread is a type of distortion caused when copies of the same signal arrive at
the destination at different times, usually via multiple paths and with different
angles of arrival. The time difference between the arrival of the first multipath
component (typically the line-of-sight component) and the last one is called the
delay spread.
Page 162
The data to be transmitted on an OFDM signal is spread across the carriers of the
signal, each carrier taking part of the payload. This reduces the data rate carried by
each carrier, which has the advantage that interference from reflections is much less
critical. Resilience is further improved by adding a guard time, or guard interval, to
the system; this ensures that the data is only sampled when the signal is stable and
no newly arriving delayed signals can alter the timing and phase of the signal.
The distribution of the data across a large number of carriers in the OFDM signal has
some further advantages. Nulls caused by multipath effects or interference on a
given frequency affect only a small number of the carriers, the remaining ones being
received correctly. Error-coding techniques, which do add further data to the
transmitted signal, allow many or all of the corrupted data to be reconstructed
within the receiver. This can be done because the error-correction code is transmitted
in a different part of the signal; it is this error coding that the "C" in the often-seen
term COFDM refers to.
Page 163
Page 164
To each OFDM symbol, a cyclic prefix (CP) is appended as guard time. One downlink
slot consists of 6 or 7 OFDM symbols, depending on whether extended or normal
cyclic prefix is configured, respectively. The extended cyclic prefix is able to cover
larger cell sizes with higher delay spread of the radio channel.
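The two CP lengths can be checked against the LTE sample clock: with a 2048-point FFT at 30.72 Msps, both the normal-CP (7-symbol) and extended-CP (6-symbol) variants fill exactly one 0.5 ms slot. A small arithmetic sketch (sample counts as in TS 36.211):

```python
# LTE basic time unit and slot length at the 30.72 Msps reference rate.
TS_US = 1 / 30.72       # one sample, in microseconds
SLOT_SAMPLES = 15360    # one 0.5 ms slot
FFT = 2048              # useful symbol part

# Normal CP: 7 symbols per slot; the first CP is 160 samples, the rest 144.
normal = (160 + FFT) + 6 * (144 + FFT)
# Extended CP: 6 symbols per slot, each CP 512 samples.
extended = 6 * (512 + FFT)

assert normal == extended == SLOT_SAMPLES

# The extended CP absorbs ~16.7 us of delay spread versus ~4.7 us for
# the normal CP, which is why it suits larger cells.
print(round(512 * TS_US, 2))  # 16.67
print(round(144 * TS_US, 2))  # 4.69
```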
Page 165
Data is allocated to the UEs in terms of resource blocks, i.e. one UE can be allocated
integer multiples of one resource block in the frequency domain. These resource
blocks do not have to be adjacent to each other. In the time domain, the scheduling
decision can be modified every transmission time interval of 1 ms. The scheduling
decision is made in the base station (eNodeB). The scheduling algorithm has to take
into account the radio-link quality of the different users, the overall interference
situation, Quality of Service requirements, service priorities, and so on.
Page 166
4.11 Scheduler
The scheduler in the eNB (base station) allocates resource blocks (the smallest
elements of resource allocation) to users for a predetermined amount of time. Slots
consist of either 6 (for the extended cyclic prefix) or 7 (for the normal cyclic prefix)
OFDM symbols. Longer cyclic prefixes are desired to address longer delay spreads.
The number of available subcarriers changes depending on the transmission
bandwidth (but the subcarrier spacing is fixed).
Page 167
Round Robin
The aim of this scheduler is to share the available/unused resources equally among
the RT terminals (i.e. the terminals requesting real-time services) in order to satisfy
their RT-MBR demand.
This is a recursive algorithm and continues to share resources equally among RT
terminals until all RT-MBR demands have been met or there are no more resources
left to allocate.
Proportional Fair
The aim of this Scheduler is to allocate the available/unused resources as fairly as
possible in such a way that, on average, each terminal gets the highest possible
throughput achievable under the channel conditions.
This is a recursive algorithm. The remaining resources are shared between the RT
terminals in proportion to their bearer data rates. Terminals with higher data rates get
a larger share of the available resources. Each terminal gets either the resources it
needs to satisfy its RT-MBR demand, or its weighted portion of the available/unused
resources, whichever is smaller. This recursive allocation process continues until all
RT-MBR demands have been met or there are no more resources left to allocate.
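The recursive allocation described above can be sketched as follows. The function and variable names are illustrative, and fractional resource blocks are allowed for simplicity:

```python
def proportional_fair(demands, rates, resources):
    """Toy sketch of the recursive scheduler described above: each active
    terminal gets the smaller of its remaining RT-MBR demand and its
    rate-weighted share of the remaining resources, repeated until all
    demands are met or resources run out."""
    alloc = {t: 0 for t in demands}
    active = {t for t, d in demands.items() if d > 0}
    while resources > 0 and active:
        total_rate = sum(rates[t] for t in active)
        granted = 0
        for t in list(active):
            share = resources * rates[t] / total_rate
            give = min(demands[t] - alloc[t], share)
            alloc[t] += give
            granted += give
            if alloc[t] >= demands[t]:
                active.discard(t)   # demand satisfied
        if granted == 0:
            break
        resources -= granted
    return alloc
```

For example, with two terminals demanding 10 RBs each at bearer-rate weights 3:1 and only 12 RBs available, the higher-rate terminal receives 9 and the other 3.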
Page 168
Proportional Demand
The aim of this scheduler is to allocate the remaining unused resources to RT
terminals in proportion to their additional resource demands. This is a non-recursive
allocation process and results in either satisfying the RT-MBR demands of all
terminals or the consumption of all of the resources.
Page 169
Max SINR
The aim of this Scheduler is to maximise the terminal throughput and, in turn, the
average cell throughput. This is a non-recursive resource allocation process, where
terminals with higher bearer rates (and consequently higher SINR) are preferred over
terminals with lower bearer rates (and consequently lower SINR). This means that
resources are allocated first to those terminals with better SINR/channel conditions,
thereby maximising the throughput.
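By contrast, the Max SINR allocation is a single greedy pass; again a toy sketch with illustrative names:

```python
def max_sinr(demands, sinr, resources):
    """Non-recursive sketch of the Max SINR scheduler described above:
    serve terminals in decreasing SINR order until resources run out."""
    alloc = {}
    for t in sorted(demands, key=sinr.get, reverse=True):
        give = min(demands[t], resources)
        alloc[t] = give
        resources -= give
    return alloc
```

With three terminals each demanding 5 RBs, SINRs of 10, 20 and 5, and only 8 RBs available, the best-SINR terminal gets 5, the next gets 3, and the worst gets nothing, which is exactly the cell-throughput-over-fairness trade described in the text.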
Page 170
Page 171
Page 172
Data is allocated to the UEs in terms of resource blocks. A physical resource block
consists of 12 (24) consecutive sub-carriers in the frequency domain for the 15 kHz
(7.5 kHz) subcarrier spacing case. In the time domain, a physical resource block
consists of Nsymb(DL) consecutive OFDM symbols, where Nsymb(DL) is the number
of OFDM symbols in a slot.
Depending on the required data rate, each UE can be assigned one or more resource
blocks in each transmission time interval of 1 ms. The scheduling decision is made in
the base station (eNodeB).
Page 173
The user data is carried on the Physical Downlink Shared Channel (PDSCH).
Page 174
Page 175
Page 176
Page 177
4.15.1
Page 178
Configuration of Carrier
Page 179
Page 180
Reference Signal Received Power (RSRP), is determined for a considered cell as the
linear average over the power contributions (in [W]) of the resource elements that
carry cell-specific reference signals within the considered measurement frequency
bandwidth. For RSRP determination, the cell-specific reference signals R0 and, if
available, R1 can be used. If receiver diversity is in use by the UE, the reported value
shall not be lower than the corresponding RSRP of any of the individual diversity
branches.
E-UTRA Carrier RSSI
E-UTRA Carrier Received Signal Strength Indicator, comprises the total received
wideband power observed by the UE from all sources, including co-channel serving
and non-serving cells, adjacent channel interference, thermal noise and so on.
Reference Signal Received Quality (RSRQ)
RSRQ is defined as the ratio N × RSRP / (E-UTRA carrier RSSI), where N is the
number of RBs of the E-UTRA carrier RSSI measurement bandwidth. The
measurements in the numerator and denominator shall be made over the same set of
resource blocks.
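The RSRQ formula can be applied directly once RSRP and RSSI are expressed as linear powers; a minimal sketch with illustrative values:

```python
import math

def rsrq_db(rsrp_w, rssi_w, n_rb):
    """RSRQ = N x RSRP / RSSI in dB, with RSRP and RSSI as linear
    powers (watts) and N the number of RBs in the RSSI measurement
    bandwidth."""
    return 10 * math.log10(n_rb * rsrp_w / rssi_w)

# Illustrative numbers: RSRP of 1 pW per resource element, wideband
# RSSI of 1 nW, measured over 50 RBs -> about -13 dB.
print(round(rsrq_db(1e-12, 1e-9, 50), 1))  # -13.0
```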
Page 181
Page 182
Page 183
Page 184
Page 185
Page 186
Page 187
4.18.1
Page 188
4.18.2
Page 189
4.19 Questions
Page 190
CHAPTER 5
LTE Network
Architecture and
Protocols
Page 191
Scalable bandwidth up to 20 MHz, covering 1.25, 2.5, 5, 10, 15, and 20 MHz in the
study phase.
Reduced latency, to 10 msec round-trip time between user equipment and the base
station, and to less than 100 msec transition time from inactive to active.
Page 192
Page 193
Page 194
A network run by one operator in one country is known as a Public Land Mobile
Network (PLMN). Roaming is where users are allowed to connect to PLMNs other
than those to which they are directly subscribed.
Page 195
Page 196
Page 197
Page 198
Page 199
Page 200
Page 201
Page 202
Page 203
Page 204
Page 205
QoS definitions for Radio Bearers which can be modified are listed below:
[1] Guaranteed bit rate (GBR) specifies the guaranteed number of bits delivered by
E-UTRAN within a period of time (provided there is data to deliver).
[2] Maximum bit rate (MBR) specifies the maximum number of bits delivered by
E-UTRAN within a period of time.
Page 206
Page 207
Page 208
Page 209
The HSS contains users' SAE subscription data such as the EPS-subscribed QoS
profile and any access restrictions for roaming. It also holds information about the
PDNs to which the user can connect.
This could be in the form of an Access Point Name (APN) (which is a label according
to DNS naming conventions describing the access point to the PDN), or a PDN
Address (indicating subscribed IP address(es)). In addition, the HSS holds dynamic
information such as the identity of the MME to which the user is currently attached or
registered. The HSS may also integrate the Authentication Centre (AuC), which
generates the vectors for authentication and security keys.
Security functions are the responsibility of the MME for both signalling and user data.
When a UE attaches with the network, a mutual authentication of the UE and the
network is performed between the UE and the MME/HSS. This authentication
function also establishes the security keys which are used for encryption of the
bearers.
All data sent over the radio interface is encrypted
Page 210
Page 211
Page 212
Page 213
Page 214
Page 215
Page 216
The LTE architecture enables service providers to reduce the cost of owning and
operating the network by allowing them to have separate CNs (MME, SGW,
PDN GW) while the E-UTRAN (eNBs) is jointly shared by them. This is enabled by
the S1-flex mechanism, which allows each eNB to be connected to multiple CN
entities. When a UE attaches to the network, it is connected to the appropriate CN
entities based on the identity of the service provider sent by the UE.
Page 217
Page 218
Page 219
Page 220
Page 221
Page 222
Page 223
All data will be encrypted for over-the-air transmission, ensuring user confidentiality.
Scheduling and transmission of paging messages (from the MME);
A UE will have to be paged when there is data to send to it or when an incoming
phone call is being made.
Mobility and scheduling:
UEs inform the network which cells they are receiving, along with the power level
and quality of those signals. The eNB can assist the UE by providing a list of
frequencies, scrambling codes (UTRAN), etc., and perhaps even a list of preferred
networks and specific frequencies to measure. We will talk more about this later.
Page 225
Page 226
Page 227
5.11.1
Page 228
Logical Channels
Page 229
Page 230
5.11.2
Transport Channels
The LTE transport channels vary between the uplink and the downlink, as each has
different requirements and operates in a different manner. Physical layer transport
channels offer information transfer to medium access control (MAC) and higher
layers.
Broadcast Channel (BCH): This transport channel carries the Broadcast Control
Channel (BCCH).
Downlink Shared Channel (DL-SCH): This transport channel is the main channel for
downlink data transfer. It is used by many logical channels.
Paging Channel (PCH): Conveys the PCCH.
Multicast Channel (MCH): This transport channel is used to transmit MCCH information
to set up multicast transmissions.
Page 231
Page 233
Page 234
5.11.3
Physical Channels
Page 235
Physical Broadcast Channel (PBCH): This physical channel carries system information for
UEs requiring to access the network.
Physical Downlink Control Channel (PDCCH): The main purpose of this physical channel is
to carry scheduling information.
Physical Hybrid ARQ Indicator Channel (PHICH): As the name implies, this channel is used
to report the Hybrid ARQ status.
Physical Downlink Shared Channel (PDSCH): This channel is used for unicast and paging
functions.
Physical Multicast Channel (PMCH): This physical channel carries system information for
multicast purposes.
Physical Control Format Indicator Channel (PCFICH): This indicates the number of OFDM
symbols used for the PDCCHs, enabling the UEs to decode the control region.
Page 236
Page 237
Page 238
5.12.1
PDSCH
Page 239
Page 240
5.12.2
PUSCH
PUSCH (Physical Uplink Shared Channel): Carries the UL-SCH data, CQI, PMI and RI.
RI (Rank Indicator): RI indicates the number of spatial layers that can be supported by
the UE, based on the channel conditions. The transmission rank selected to be used is
dependent on RI as well as other factors (depending on the vendor) such as traffic
pattern, available transmission bandwidth and so on. RI is compulsory for both open
and closed loop spatial multiplexing.
PMI (Precoding Matrix Indicator): PMI ensures that the correct spatial-domain precoding
matrix is applied by the eNodeB so that the transmitted signal matches the spatial
channel experienced by the UE. It is denoted by the Transmit Precoding Matrix
Indicator (TPMI), which consists of a 3-bit or 6-bit information field for 2 or 4 transmit
antennas, respectively. It is compulsory for closed-loop spatial multiplexing.
CQI (Channel Quality Indicator): This is a 4-bit index pointing into a table of 16 different
modulation and coding schemes. It indicates or suggests a combination of modulation
and coding scheme that the eNodeB should use to ensure that the BLER (Block Error
Ratio) experienced by the UE remains less than 10%.
Page 241
Page 242
5.13.1
Functional Nodes - UE
Page 243
5.13.2
Page 244
Page 245
5.14.1
Page 246
PDCCH (Physical Downlink Control Channel): Informs the UE about the resource allocation
of the PCH and DL-SCH, and Hybrid ARQ information related to the DL-SCH. It also
carries the uplink scheduling grant. The downlink control signalling (PDCCH) is
located in the first n OFDM symbols, where n ≤ 3, and consists of:
Transport format, resource allocation, and hybrid-ARQ information related to the
DL-SCH and PCH
Page 247
Page 248
Page 249
Page 250
Page 251
5.16.1
Page 252
Page 253
Page 254
Page 255
PCFICH (Physical Control Format Indicator Channel): Informs the UE about the number of
OFDM symbols used for the PDCCHs. It is transmitted in every subframe.
Page 256
Page 257
Page 258
Page 259
Page 260
Page 261
Page 262
Page 263
Page 264
Page 265
Page 266
Page 267
Page 268
Page 269
Page 270
CHAPTER 6
Mobility Management
6.1 UE States
Page 271
Co-existence with legacy standards and systems: LTE users should be able to make voice
calls from their terminal and have access to basic data services even when they are in
areas without LTE coverage. LTE therefore allows smooth, seamless service handover
in areas of HSPA, WCDMA or GSM/GPRS/EDGE coverage. Furthermore, LTE/SAE
supports not only intra-system and inter-system handovers, but also inter-domain
handovers between packet-switched and circuit-switched sessions.
Page 272
6.2 UE Power-up
Page 273
Page 274
Page 275
Page 276
Page 277
Page 278
Page 279
Page 280
Page 281
In order to provide seamless service continuity, ensuring mobility between LTE and
legacy technologies is therefore very important. These technologies include
GSM/GPRS and WCDMA/HSPA.
Page 282
Page 283
Page 284
Page 285
Page 286
Page 287
Page 288
Page 289
Page 290
Page 291
Page 292
Page 293
Page 294
Page 295
Page 296
Page 297
Page 298
Page 299
If a neighbouring cell was ranked with the highest value R, will the UE start the cell
re-selection?
If it is a GSM or TDD cell, then the UE does indeed perform the cell re-selection
process to this cell. If it is an FDD cell, it depends on the quality measure used.
There are two options: CPICH RSCP or CPICH Ec/No.
The UE learns from the system information which quality measure to use.
If the quality measure CPICH RSCP is used, the UE performs the cell re-selection. If
the quality measure Ec/No is used, the UE has to make a second ranking based on
this measurement quantity. The UE then performs cell re-selection to the FDD cell
which was ranked best in the second ranking process.
Is the cell re-selection initiated immediately after the UE ranks a neighbouring cell to
be the best?
If so, we could face a ping-pong effect: a UE frequently performing cell re-selection
between two neighbouring cells.
To avoid this, the operator uses the time interval value Treselection, whose value
ranges between 0 and 31 seconds. Only when a cell has been ranked better than the
serving cell for Treselection seconds does a cell re-selection to this cell take place. In
addition, a UE must camp on a serving cell for at least 1 second before the next cell
re-selection may take place.
How often are the cell re-selection criteria evaluated?
This is done at least once every DRX cycle for cells, for which new measurement
results are available.
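The Treselection and minimum-camping rules can be modelled as a toy function; the 1 s evaluation interval and all names below are simplifying assumptions, not spec behaviour:

```python
def should_reselect(ranked_best, serving, treselection_s, camped_s):
    """`ranked_best` holds the best-ranked cell at each 1 s evaluation.
    Reselect only if a single neighbour has out-ranked the serving cell
    for Treselection consecutive seconds and the UE has camped on the
    serving cell for at least 1 s (toy model of the rules above)."""
    window_len = max(treselection_s, 1)
    if camped_s < 1 or len(ranked_best) < window_len:
        return False
    window = ranked_best[-window_len:]
    candidate = window[0]
    # The same neighbour must have been best for the whole window.
    return candidate != serving and all(c == candidate for c in window)
```

For example, with Treselection = 3 s, a neighbour "B" that has been best for the last three evaluations triggers re-selection, while a history that flips back to the serving cell in between does not.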
Page 300
Page 301
Page 302
Page 303
Page 304
Page 305
Page 306
Page 307
Page 308
6.7.4 Handover
Page 309
Page 310
Page 311
Page 312
Page 313
Page 314
The mobiles continuously measure the RSRP from the serving cell and candidate cells
(cells in the vicinity of the mobile that might be considered as handover candidates).
A measurement report is typically triggered when the RSRP from a candidate cell is
within a threshold D dB from the serving cell RSRP.
The measurement report contains information about the PCI and the corresponding
RSRP of the candidate cell. The serving cell may order the mobile to read the GID
(transmitted on the broadcast channel from each cell) of a cell with a certain PCI and
report that back to the serving cell.
This could be done for example if the PCI is associated with a cell with handover
failures in the past or if a central node such as the OSS has requested it. In any case,
the GID of a neighbouring cell can be obtained with help from a mobile station upon
request from the serving cell. In case the serving cell decides to set up a relation to the
neighbouring cell, it contacts the central configuration server in the network and
obtains the neighbouring cell's IP address.
Page 315
Is the PCI of the candidate cell already known in the serving cell (i.e. is the neighbour
relation already established)?
Yes: Initiate the handover decision procedure.
No: Consider the candidate cell as an NCR list candidate. Order the UE to report the
GID, obtain connectivity information for the candidate cell, and signal to the
candidate cell, directly or through the core network, about a mutual addition to the
NCR lists of the two cells.
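The decision flow above can be sketched as a small function; `resolve_gid`, the NCR dictionary, and the return values are all illustrative, not a real eNB API:

```python
def handle_report(pci, ncr, resolve_gid):
    """ANR sketch: if the reported PCI is already a known neighbour,
    go straight to the handover decision; otherwise ask the UE for the
    GID (via `resolve_gid`), record the cell in the NCR list, and
    return it."""
    if pci in ncr:
        return "handover_decision", ncr[pci]
    gid = resolve_gid(pci)   # UE reads the GID from the broadcast channel
    ncr[pci] = gid           # a mutual NCR addition would follow
    return "ncr_added", gid
```

A known PCI goes straight to the handover decision; an unknown one is resolved to its GID and added as a new neighbour relation, mirroring the Yes/No branches above.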
Page 316
Page 317
Page 318
Page 319
Page 320
Page 321
Page 322
Page 323
6.8 Questions
Page 324