Code On Network Coding For 5G Evolution
Overview
The purpose of this white paper is to explain how coding algorithms, and specifically network codes, can enable a seamless, software-based upgrade to 5G networks.
The NGMN Alliance, a group of service providers working cooperatively on 5G requirements, defines 5G network performance as needing “much greater throughput, much lower latency, ultra-high reliability, much higher connectivity density, and higher mobility range. This enhanced performance is expected to be provided along with the capability to control a highly heterogeneous environment, and capability to, among others, ensure security and trust, identity, and privacy.”
The use cases in this white paper show how NGMN’s objectives can be achieved through a proprietary network code called Random Linear Network Coding (RLNC). RLNC improves network efficiency by simplifying its operations.
As stated by Dijkstra, the brilliant computer scientist whose work underpins today’s routing algorithms: “Simplicity is prerequisite for reliability.” RLNC provides order-of-magnitude improvements wherever data is transported or stored. Among state-of-the-art codes, it is the most capable of removing a network’s inefficient redundancy. Moreover, its versatility enables new modes of communication, including novel protocols and powerful performance and reliability tools, in the most complex environments and topologies (e.g., satellite, wireless offload/backhaul, wireless mesh)[2]. These innovations will be critical tools for 5G networks.
By simplifying network operations, RLNC enables innovative combinations with source codes and powerful design tradeoffs involving reliability, latency, complexity, and energy. RLNC ensures optimal speed and reliability across a variety of applications and a range of devices. It is complementary to source codes in the way it optimizes digital storage and distribution to ensure the highest possible quality, given the underlying network or media losses and latencies. This paper illustrates how RLNC can be used to improve network efficiency for optimal data and media delivery across heterogeneous devices and networks.
Use Case 1: Improving the Mobile User Experience from the Application Layer
Managing Network Latency in Streaming, End-to-End Multimedia Applications

RLNC optimally delivers media content in end-to-end systems, thus enabling a higher Quality of Experience (QoE) for media customers. RLNC has shown significant latency gains in multimedia streaming applications[3]. The discussed implementations can be tuned for both long distances (e.g., data center to home theatre) and short distances (e.g., between devices in a home network). Several application-layer implementations of RLNC have shown the unique ability to provide reliability while guaranteeing exceptionally low latencies. Compared to conventional channel codes, RLNC has shown latency reductions of at least 4x[3]. These latency gains are made possible by RLNC’s unique capability to code on the fly or in a sliding window (see Appendices 1 and 2).
Use Case 2: Reducing Network Congestion at the Transport Layer
Boosting Multimedia Streaming through RLNC’s Protocol-Friendly Enhancements

RLNC provides powerful protocol-enhancement capabilities that significantly increase the performance of transport networks, thus enabling media streams to carry higher video and audio quality using the same resources. For example, Coded TCP combines the reliability of RLNC with TCP’s congestion control algorithms to minimize latencies. RLNC-enhanced protocols can be applied across the content distribution network. Within LTE, and eventually 5G, they extend cell coverage or increase coverage density by 2.5x. They also improve throughput in crowded WiFi settings such as airports, coffee shops, libraries, or buildings with interfering WiFi networks[7,9].
TCP, the protocol representing the majority of Internet traffic and almost all streaming video, may back off unnecessarily when faced with random packet losses. Coded TCP is capable of handling random losses by inhibiting TCP’s inefficient back-off instances, thus allowing efficient use of available bandwidth[2]. This is particularly relevant for multimedia streaming applications: even with 20% packet losses over a 25 Mbps link, a user can watch streaming video without experiencing any buffer overruns[7]. Coded TCP has shown similar streaming gains in proxy configurations.

In a recent multi-continental Coded TCP trial, large-scale download speed tests were undertaken in which thousands of consumer devices connected to the Internet through commercial WiFi and cellular networks. The result was an average 5x speed gain compared to conventional TCP. These gains are made possible by RLNC’s unique capability to code in a sliding window, as illustrated in Appendix 2.

RLNC randomly generates its coding coefficients and embeds them within the data for transport and storage. These unique features enable RLNC to re-encode coded packets at different network nodes and layers without the need for prior decoding. A recoding node can combine received coded packets using locally generated random coefficients. Consequently, it can react to local network degradation instantly by inserting additional parity (coded) packets into the media stream, as shown in the figure above. Recoding does not increase decoding complexity. (See Appendix 5 for a detailed example of recoding.)
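The recoding operation described above can be sketched with a toy GF(2) model (a hypothetical illustration using XOR arithmetic; production RLNC libraries use larger finite fields and different interfaces): coded packets carry their coefficient vectors, and a relay produces fresh combinations without ever decoding.

```python
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Source node: emit one (coefficient_vector, payload) pair over GF(2)."""
    n = len(packets)
    coeffs = [random.randint(0, 1) for _ in range(n)]
    while not any(coeffs):                         # skip the useless all-zero combination
        coeffs = [random.randint(0, 1) for _ in range(n)]
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor(payload, p)
    return coeffs, payload

def recode(received):
    """Relay node: combine already-coded packets with fresh local coefficients.
    No decoding takes place; the embedded coefficient vectors are combined
    exactly the same way as the payloads, so the result is still a valid
    linear combination of the original packets."""
    n = len(received[0][0])
    out_coeffs, out_payload = [0] * n, bytes(len(received[0][1]))
    for coeffs, payload in received:
        if random.randint(0, 1):
            out_coeffs = [a ^ b for a, b in zip(out_coeffs, coeffs)]
            out_payload = xor(out_payload, payload)
    return out_coeffs, out_payload
```

Because the recoded packet carries its own (combined) coefficient vector, downstream nodes treat it exactly like a packet coded at the source.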
RLNC’s recoding process has been demonstrated to significantly improve network efficiency and reliability in Software Defined Networking (SDN). Recent SDN implementations demonstrate considerable improvements for RLNC in multihop networks[6]. This study of fundamental multihop topologies shows that simple IP-layer recoding strategies enable networks to realize performance boosts in TCP without modifying the overlying end-to-end transport protocol. The results demonstrate TCP goodput gains above 3x[6], achieved through RLNC’s unique recoding capabilities. RLNC’s multi-hop gains apply at both the network core and the edge, with significant throughput and latency gains in local wireless meshes (e.g., WiFi metro/home networks, home theatre setups, etc.).
Use Case 8: Mesh Networking to Improve the User’s Media Experience
ABR Optimization in Wireless Mesh Settings

Future home network setups increasingly feature multiple playback devices. For reliable operation, such devices may need to function as a coordinated wireless mesh, as shown in the home theatre illustration (right). Recent work has demonstrated that RLNC-based protocols offer significant Quality of Experience (QoE) gains when carrying ABR video over such a wireless mesh[16].
Use Case 9: Edge Caching with RLNC, Towards Cache Meshing and Cooperation
RLNC Recoding, a CDN Game-Changer

RLNC’s recoding feature can be used to dramatically improve CDN efficiency. Caching is used broadly within CDNs to distribute content more efficiently. Owing to cheaper and more powerful storage and computing at the network edge (e.g., set-top boxes, gaming platforms), edge devices are increasingly playing a caching role. The figure (right) shows a typical CDN multicast architecture with multiple tiers of caches.

To illustrate the value of RLNC’s recoding capabilities, the red links are each assumed to exhibit 5% packet losses. Restricted by their end-to-end structures, conventional block and rateless codes need to process cumulative packet losses at the receiver (see Appendix 5). This results in the transmission of the worst-case overhead (above 15%) to all customers, regardless of whether such overhead is needed. This is an expensive requirement, given bandwidth scarcity.
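The “above 15%” worst-case figure is consistent with a short chain of such links: an end-to-end code must be dimensioned for the product of the per-link delivery rates. A quick sketch, assuming three cascaded 5%-loss links (the number of tiers is an assumption for illustration):

```python
# End-to-end delivery across k independent links, each with loss rate p:
# P(deliver) = (1 - p)^k, so an end-to-end code must carry a worst-case
# overhead of 1/(1 - p)^k - 1, paid on every link of the path.
p, k = 0.05, 3                       # assumption: three cascaded 5%-loss links
delivery = (1 - p) ** k              # ~0.857
overhead = (1 / delivery - 1) * 100  # ~16.6%, i.e., "above 15%"
print(round(delivery, 3), round(overhead, 1))
```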
Owing to recoding, RLNC can inject redundancy as needed at the link level, resulting in optimal overhead and throughput over all links. Combining its recoding and meshing capabilities, RLNC enables true edge-cache clustering and cooperation. RLNC also provides large coding speedups for storage and CDN applications compared to competing coding libraries, as illustrated in Appendix 6.

Use Case 10: RLNC’s Seamless User Experience in a Highly Heterogeneous Network
Coded Multiresolution Transport
Compared to single-layer transcoding techniques, layered coding helps reduce storage costs and bandwidth consumption, enabling the distribution of higher-quality multimedia. RLNC allows multimedia distribution networks to use the native scalable video coding of the H.264 or emerging H.265 standards to reduce or obviate the current need for sophisticated rate measurements and resolution switching.

RLNC can make use of a pre-existing packetization structure or generate its own packets from an input bitstream.
This is illustrated in Figure 2, a simplified block diagram showing the operations of the RLNC encoder and decoder modules of Figure 1. Since packets entering the encoding process are assumed to be of equal size, an optional packetization stage may be required for functions such as padding. The input frames are then buffered in preparation for transmission.
The two major units within the encoder and decoder are the protocol unit and the encoding/decoding unit. The encoding and decoding units strictly perform encoding or decoding over a set of buffered packets.
In the example of Figure 3, receiving any three linearly independent packets is sufficient to decode the three original packets. The number of additional coded packets (i.e., the redundancy) can be optimized so that it closely matches the channel loss.
Decoding reverses the linear operations of Figure 3 through Gaussian elimination, a standard algorithm for solving systems of linear equations. In Figure 4, the right-side coefficient entries are generated through this process. Gaussian elimination typically requires a number of arithmetic operations on the order of n³, where n is the number of input packets.
Importantly, RLNC can use coded, uncoded, and partial packets to decode (see Figure 4), leading to lower complexity and easier integration with existing distributed systems.
Figure 4: Decoding
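The encode/decode cycle of Figures 3 and 4 can be sketched over GF(2) (a simplified model: the function names are illustrative, and real RLNC implementations typically use GF(256) with optimized finite-field arithmetic):

```python
import random

def encode_block(packets, n_coded):
    """Emit n_coded random GF(2) linear combinations of the input packets."""
    out = []
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in range(len(packets))]
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(x ^ y for x, y in zip(payload, p))
        out.append((coeffs, payload))
    return out

def decode(received, n):
    """Gaussian elimination over GF(2): on the order of n^3 row operations.
    Returns the n original packets, or None if the received combinations
    do not yet span the block (rank deficient)."""
    rows = [(list(c), bytearray(p)) for c, p in received]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None                   # need more linearly independent packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):        # reduce every other row against the pivot
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]
```

As in the Figure 3 example, any set of combinations with full rank suffices; the decoder does not care which specific packets arrived.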
RLNC Overhead

Like any form of coding, RLNC comes with two main costs. First, there is a computational cost associated with the complexity of encoding and decoding RLNC packets. Second, there is a header overhead associated with the size of the header in each coded packet. Unlike other coding schemes, these costs in RLNC are strongly dependent on the application. This is because the simplicity of the RLNC algorithm allows for a number of tradeoffs that limit such costs. The following are three examples of parameters that strongly influence both computational and packet overheads:
• Field/Symbol Size: The symbol size (i.e., the number of bits allocated to the encoding symbols of Figure 3) governs the field size (i.e., the number of coefficients that can be represented by such a symbol). For example, a byte-sized symbol means that the field has 2⁸ = 256 elements. The field size determines the complexity of the finite-field operations involved in computing and decoding the linear combinations. In addition, it directly impacts the size of each coefficient in the coding header.
• Block Size: The block size is the number of packets to be coded together. Small blocks incur lower computational complexity. In addition, they require a shorter list of coefficients, hence producing smaller coding headers. On the other hand, larger blocks enable finer granularity in determining the proportion of non-redundant packets (i.e., the code rate), thus enabling more efficient adjustment to channel losses.
• Coding Density: The coding density represents the proportion of packets represented within each coded packet. In RLNC, a linear combination can involve any subset of the block packets. Full RLNC uses the maximum density, where each coded packet can combine all packets in the block.
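The combined effect of field size and block size on header overhead can be made concrete with a quick calculation (illustrative numbers, not measurements from this paper): each coded packet carries one coefficient per packet in the block, and each coefficient occupies one field element.

```python
def header_overhead(block_size, field_bits, payload_bytes):
    """Coding-header bytes per packet, as a fraction of the payload:
    one field element (field_bits wide) per packet in the block."""
    header_bytes = block_size * field_bits / 8
    return header_bytes / payload_bytes

# GF(256) coefficients (8 bits each) on 1500-byte packets:
print(round(header_overhead(16, 8, 1500) * 100, 2))   # 16-packet block: ~1.07%
print(round(header_overhead(128, 8, 1500) * 100, 2))  # 128-packet block: ~8.53%
# GF(2) coefficients shrink the header 8x, at the cost of a higher chance
# of generating linearly dependent (redundant) combinations:
print(round(header_overhead(128, 1, 1500) * 100, 2))  # ~1.07%
```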
Appendix 2: RLNC’s Multiple Encoding Schemes

Unlike traditional channel codes, RLNC allows for multiple transport schemes while using simple linear algebra to encode and decode. This capability is a function of RLNC’s use of random coefficients within a suitable field size to generate linear combinations. The simplicity of this random coefficient generation means that the RLNC algorithm can be applied at any node and any layer. More importantly, it means that the code can be carried with the data.
These two unique attributes are the source of RLNC’s versatility[2] and provide a number of encoding schemes where traditional codes are limited to one (block coding):
• Block Coding: In block coding, RLNC operates as a conventional block code, where packets are assembled in blocks and then coded together. In RLNC, feedback provides powerful performance-optimization tools. For instance, both the block size and the redundancy level (the proportion of additional coded packets) can vary dynamically if information on received packets is available.

• On-the-Fly Coding: Unlike traditional codes, RLNC does not require the encoder to receive the entire block before starting coded transmissions. Such on-the-fly coding allows for more flexible transmission schemes, particularly in streaming applications[3].

Figure 5: RLNC Coding Capabilities
• Sliding-Window Coding: RLNC has the singular capability to depart from the block-coding paradigm and adopt a sliding-window approach: each coded packet becomes a representation of the transmitter’s current sliding window, as shown in Figure 5. In sliding-window coding, the transmitter also codes newly arriving packets on the fly. However, this approach enables both transmitter and receiver to coordinate window-decoding events (through coordinated encoding and redundancy). This enables control of application-layer latency while preserving the code’s reliability features.
• Systematic Coding: To minimize decoding complexity in end-to-end schemes, it is desirable to transmit the native packets ahead of any coded packets, a process called systematic coding.
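The systematic scheme above can be sketched as a transmission schedule (a minimal illustration, assuming GF(2) repair combinations; the function name is hypothetical): native packets go out first and cost the receiver nothing to "decode", and random combinations follow to repair whatever was lost.

```python
import random

def systematic_schedule(packets, n_coded):
    """Systematic coding: yield the native packets first (zero decoding
    cost for lossless receivers), then n_coded random GF(2) combinations
    that can repair losses among the native transmissions."""
    for i, p in enumerate(packets):
        yield ("native", i, p)
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in range(len(packets))]
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(x ^ y for x, y in zip(payload, p))
        yield ("coded", coeffs, payload)
```

A receiver that loses nothing never invokes Gaussian elimination at all; only lossy receivers pay the decoding cost.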
Reliable AC-3 Audio Bitstream – Revisited

The example of p. 4 uses RLNC as a conventional block code to protect an enhanced AC-3 audio bitstream against a uniform 1% frame loss. Using such a basic block code is inadequate when losses become dynamic (e.g., when loss bursts occur). For instance, if the average losses remained at 1% but bursts of
Appendix 3: RLNC Multicasting Capabilities

In broadcast environments, RLNC can be combined with protocols such as NORM (NACK-Oriented Reliable Multicast) to yield powerful broadcast capabilities. A notable RLNC-based NORM enhancement is the Speeding Multicast by Acknowledgment Reduction Technique (SMART) protocol[18], described below.

SMART is an RLNC-based feedback protocol that uses a predictive model to determine the optimal feedback time for a broadcast channel with a potentially large number of receivers. Scheduling the feedback according to this predictive model is shown to reduce both the feedback traffic and

Appendix 4: RLNC’s Multipath Encoding

Since the code is embedded in each packet, RLNC renders packets interchangeable and arrival order irrelevant. The destination user needs only to assemble a sufficient number of packets, coded or uncoded, in order to decode the stream. This means that RLNC allows networks to seamlessly combine connections with wide-ranging loss, latency, and bandwidth characteristics, without the need for complex scheduling. Not only is no path coordination necessary at the source; RLNC also enables the combination of multiple heterogeneous sources.
In the example of Figure 6, the destination pulls content from two distinct sources. The coded source (Source 1) is able to transmit packets via two separate paths. The uncoded source (Source 2) uses a single path, as is customary in today’s networks. The receiver is able to combine the first five arriving packets without any need for source or path coordination. The loss of one of Source 1’s two paths, depicted in Figure 6, would have dramatic consequences for the stream without coding.
Several multipath implementations show that applying RLNC across multiple channels yields the sum of the optimized throughputs, without switching or coordination[6,9,10]. Coding over multiple orthogonal wireless channels is expected to yield further performance improvements, including security gains and robustness against single-channel jamming or congestion.

Multi-sourced Streaming

Multimedia streaming is a major application of RLNC’s multipath principle. To illustrate this, consider that the sources of Figure 6 are two server or cache sites (e.g., Netflix) hosting a coded version of the requested media content. The receiver (e.g., a home set-top

Appendix 5: RLNC’s Recoding Gains

Recoding enables RLNC to uniquely adjust its coding overhead to local network conditions. In the toy example of Figure 8, RLNC recoding is contrasted with conventional end-to-end coding. The common scenario is the transmission of a 20-packet file from source (S) to destination (D) across a tandem network where each of the three hops has a 10% packet loss. The example simulates the quality of each link by showing packet losses in red above each link. End-to-end coding naturally requires the provisioning of all the required redundancy at the source node. In this example, this redundancy amounts to 37% of the native file size.
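The 37% figure follows directly from the loss model of the toy example, since an end-to-end code must survive the product of the per-hop delivery rates while a recoding node only ever provisions for its local loss:

```python
# Three hops in tandem, 10% loss each (the parameters of the Figure 8 example).
loss, hops = 0.10, 3
end_to_end = (1 - loss) ** hops          # 0.9^3 = 0.729
e2e_overhead = 1 / end_to_end - 1        # ~37% of the native file, at the source
per_hop_overhead = 1 / (1 - loss) - 1    # ~11%, refreshed locally at each node
print(round(e2e_overhead * 100), round(per_hop_overhead * 100, 1))
```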
Recoding, on the other hand, enables RLNC to renew the redundancy at each intermediate node without the need for decoding. RLNC’s random coefficient generation and code-embedding features allow any intermediate node to participate in the coding process, in particular by recombining received

Appendix 6: Boosting Coding Speeds for Distributed Storage and CDNs

When it comes to sheer coding speeds, the RLNC library is significantly faster than industry-standard libraries. In a May 2014 measurement campaign, Kodo outperformed state-of-the-art storage libraries, including ISA-L, Jerasure, and OpenFEC. Network World picked up the story[19]:
“RLNC performed 13% to 465% faster than the industry-standard Reed-Solomon encoding in Storage Area Network (SAN) erasure application testing. The Kodo library, using RLNC to encode and decode data on a SAN for error correction and fault tolerance, was compared to Intel’s Reed-Solomon library implementation, called ISA-L, and an open-source library implementation called Jerasure. The Kodo/RLNC implementation ran consistently faster on identical SAN hardware.”
RLNC’s main selling point is its unique capability to enable next-generation information infrastructure by inherently providing unique features such as multipath transport, seamless distributed storage, and robust streaming. Nevertheless, the reported benchmarking results illustrate RLNC’s readiness for deployment as an alternative to existing storage and transport codes.

Appendix 7: RLNC Multi-Resolution Transport

RLNC offers a number of tools to optimize scalable transport in both point-to-point and multicast topologies, as shown in Figure 9. Through the transcoding of base- and enhancement-layer packets, RLNC is capable of dynamically switching to different resolutions while maintaining reliability. RLNC’s flexibility is illustrated by the multicasting setup of Figure 10, where all users receive both layers irrespective of which packets were lost by each node.

Figure 9: RLNC Scalable Multimedia
Figure 10: RLNC Scalable Multicasting

Multi-Channel Content Distribution
Figures 9 and 10 underline RLNC’s ability to transport a set of synchronous parallel streams, or channels, while ensuring that losses affect the lowest-priority channels first.
In the example of Figure 9, the base and enhancement layers may represent the high- and low-priority channels, respectively. The same principle can therefore be applied to a group of audio and video channels belonging to the same media stream, sent independently during playback. RLNC allows the dynamic allocation of redundancy across all channels.
In the two-channel example of Figure 9, as long as the allocated redundancy is capable of addressing the aggregate losses, combining the redundancy for the two channels is beneficial, as it ensures that losses will not deteriorate any single channel. This is also clear in the multicasting example of Figure 10, where RLNC ensures the reception of both channels for any two-packet loss configuration.
Furthermore, RLNC enables the redundancy level to tightly match channel losses if good feedback is provided. If losses exceed the allocated redundancy, however, low-priority channels (e.g., alternative dialog) can switch to transporting redundancy for the main channels, as illustrated in Figure 9.
In a multiresolution stream scenario, the combination of dynamically adjusting resolution to bandwidth (Fig. 9) and the ability to serve heterogeneous devices (low- and high-resolution) provides a very versatile media stream. Such a stream will be efficient in terms of storage on the server[12] (as mentioned above) and will be able to provide the same dynamic adjustment as multiple single-resolution streams (e.g., with HTTP live streaming protocols) while being multicast-friendly.