
 

Network  Coding:  Evolving  to  5G  


Using  Random  Linear  Network  Coding  to  Enable  a  Seamless  5G  Upgrade  
 

Overview  
The   purpose   of   this   white   paper   is   to   explain   how   coding   algorithms,   and   specifically   network   codes,  
can  enable  a  seamless,  software-­‐based  upgrade  to  5G  Networks.    
The NGMN Alliance, a group of service providers working cooperatively on 5G requirements, defines 5G network performance as needing “much greater throughput, much lower latency, ultra-high reliability, much higher connectivity density, and higher mobility range. This enhanced performance is expected to be provided along with the capability to control a highly heterogeneous environment, and capability to, among others, ensure security and trust, identity, and privacy.”
The use cases in this white paper show how NGMN's objectives can be achieved through a proprietary network code called Random Linear Network Coding (RLNC). RLNC improves network efficiency by simplifying network operations. As Edsger W. Dijkstra, the computer scientist whose shortest-path algorithm underpins today's routing protocols, put it:

“Simplicity  is  prerequisite  for  reliability.”  


The use cases also show how RLNC from Code On[1] can provide greater throughput, increased security, improved battery life, and a better user experience on mobile devices. RLNC is a breakthrough coding algorithm that enables a new category of codes, called network codes, which extend beyond traditional channel codes by improving efficiencies in both network and storage systems.
 

RLNC provides order-of-magnitude improvements wherever data is transported or stored. Among state-of-the-art codes, it is the most capable of removing a network's inefficient redundancy. Moreover, its
versatility   enables   new   modes   of   communications,   including   novel   protocols   and   powerful  
performance   and   reliability   tools,   in   the   most   complex   environments   and   topologies   (e.g.,   satellite,  
wireless  offload/backhaul,  wireless  mesh)[2].    These  innovations  will  be  critical  tools  for  5G  networks.    
By   simplifying   network   operations,   RLNC  
enables   innovative   combinations   with   source  
codes  and  powerful  design  tradeoffs  involving  
reliability,  latency,  complexity,  and  energy.    
RLNC ensures optimal speed and reliability in a variety of applications and a range of devices. It complements source codes in the way it optimizes digital storage and distribution to ensure the highest possible quality, given the underlying network or media
losses   and   latencies.   This   paper   illustrates   how  
RLNC   can   be   used   to   improve   network  
efficiency  for  optimal  data  and  media  delivery  
across  heterogeneous  devices  and  networks.      

Random  Linear  Network  Coding  Use-­‐Cases  

Use  Case  1:    Improving  the  Mobile  User  Experience  from  the  Application  Layer  
Managing  Network  Latency  in  Streaming,  End-­‐to-­‐End  Multimedia  Applications  
RLNC   optimally   delivers   media   content   in   end-­‐to-­‐end  
systems,   thus   enabling   higher   Quality   of   Experience  
(QoE)   to   media   customers.   RLNC   has   shown  
significant   latency   gains   in   multimedia   streaming  
applications[3].  The  discussed  implementations  can  be  
tuned   for   both   long   distances   (e.g.,   data   center   to  
home   theatre)   and   short   distances   (e.g.,   between  
devices  in  a  home  network).  
Several   application-­‐layer   implementations   of   RLNC  
have   shown   the   unique   ability   to   provide   reliability  
while   guaranteeing   exceptionally   low   latencies.  
Compared   to   conventional   channel   codes,   RLNC   has  
shown   latency   reductions   of   at   least   4x[3].   These   latency   gains   are   made   possible   by   RLNC’s   unique  
capability to code on the fly or in a sliding window (see Appendices 1 and 2).

Use  Case  2:  Reducing  Network  Congestion  at  the  Transport  Layer  
Boosting  Multimedia  Streaming  through  RLNC’s  Protocol-­‐Friendly  Enhancements  
RLNC   provides   powerful   protocol-­‐enhancement  
capabilities   that   significantly   increase   the  
performance   of   transport   networks,   thus   enabling  
media   streams   to   carry   higher   video   and   audio  
qualities   using   the   same   resources.   For   example,  
Coded  TCP  combines  the  reliability  of  RLNC  with  TCP’s  
congestion   control   algorithms   to   minimize   latencies.  
RLNC-­‐enhanced   protocols   can   be   applied   across   the  
content   distribution   network.   Within   LTE,   and  
eventually   5G,   they   extend   cell   coverage   or   increase  
coverage   density   by   2.5x.   They   also   improve  
throughput  in  crowded  WiFi  settings  such  as  airports,  
coffee  shops,  libraries,  or  buildings  with  interfering  WiFi  networks[7,9].    
TCP,  the  protocol  representing  the  majority  of  Internet  traffic  and  almost  all  streaming  video,  may  back  
off   unnecessarily   when   faced   with   random   packet   losses.   Coded   TCP   is   capable   of   handling   random  
losses   by   inhibiting   TCP’s   inefficient   back-­‐off   instances,   thus   allowing   efficient   use   of   available  
bandwidth[2]. This is particularly relevant for multimedia streaming applications: even with 20% packet losses over a 25Mbps link, a user can watch streaming video without experiencing any buffer underruns (playback stalls)[7]. Coded TCP has shown similar streaming gains in proxy configurations. In a recent multi-
continental   Coded   TCP   trial,   large-­‐scale   download   speed   tests   were   undertaken   where   thousands   of  
consumer  devices  connected  to  the  Internet  through  commercial  WiFi  and  cellular  networks.  The  result  
was  an  average  5x  speed  gain  compared  to  conventional  TCP.  These  gains  are  made  possible  by  RLNC’s  
unique  capability  to  code  in  a  sliding-­‐window,  as  illustrated  in  Appendix  2.  
   

Use  Case  3:  Boosting  User  Experience  in  Remote  Locations  
Maximizing  Bandwidth  Utilization  and  Stabilizing  Throughput  in  Satellite  Links  
RLNC   transforms   the   Quality   of   Experience   (QoE)   of  
media  consumers  sharing  satellite  links,  without  need  
for  bandwidth  upgrades.    
In   a   recent   implementation,   RLNC   was   used   to   carry  
Internet   services   via   satellite   to   a   group   of   Pacific  
islands.   The   coded   IP-­‐layer   tunnel   was   designed   to  
operate   without   feedback.   The   latest   results  
demonstrate  that  RLNC  improves  link  utilization  by  up  
to  50%,  enabling  goodput  performance  gains  reaching  
4x[20].   An   important   byproduct   of   RLNC   is   to   stabilize  
the   link’s   overall   goodput   by   reducing   bandwidth  
utilization   swings.   In   a   remarkable   demonstration   of  
improved user experience through a mere software patch, RLNC successfully ushered high-definition (HD) video onto some of the islands for the first time.
In  an  alternative  implementation,  coding  TCP  connections  individually  has  demonstrated  goodput  gain  
factors  reaching  20x  over  conventional  TCP  in  emulated  satellite  links[8].  
RLNC has also demonstrated gains in satellite relaying and multi-beam communications. In addition to remote-location Internet access, potential RLNC applications include maritime communications and in-flight connectivity.

Use  Case  4:  Improving  Throughput  with  Superior  Multicast  Protocols  


Realizing  Superior  Broadcast  Quality  through  RLNC  
RLNC   enables   multimedia   content   to   be   broadcast   at  
higher QoE to larger audiences. RLNC has also been shown to enable powerful Quality of Experience (QoE)
tradeoffs   in   access   and   multicast   networks,  
particularly   in   minimizing   interruption   in   media  
playback   as   well   as   the   number   of   initially   buffered  
packets   (initial   waiting   time)[4,5].   Compared   to   state-­‐
of-­‐the-­‐art   reliable   multicasting   protocols,   RLNC-­‐
enabled   broadcast   was   also   shown   to   accommodate  
larger   networks   while   delivering   30-­‐50%   lower  
latencies  (see  Appendix  3).  
RLNC   broadcasting   finds   important   applications   in  
satellite   broadcasting   networks.   Using   limited   NACK   feedback,   RLNC-­‐based   protocols   achieve   2x   the  
throughput   of   conventional   systems   with   100   receivers   and   5%   losses.   (The   gain   factor   climbs   to   6x  
with  50%  losses).  
Other   multicasting   applications   include   content   distribution   networks,   IPTV   broadcasting,   stadium  
wireless  networks,  cable  systems  (DOCSIS),  and  DSL.  RLNC  was  also  implemented  successfully  in  WiFi  
multicasting  applications,  where  important  energy  savings  were  demonstrated  in  mobile  devices.  
   

Use  Case  5:  Coded  Multi-­‐Path,  Enabling  Virtualized  Infrastructure  and  Seamless  Offload    
Boosting  User  Experience  through  RLNC  Multi-­‐Path  
A   common   misperception   about   today’s   Internet   is  
that  data  packets  flow  between  routers  and  switches  
opportunistically,   down   optimal   paths   in   the   network.  
The   reality   is   that   99.9%   of   a   given   communication  
flows   over   exactly   the   same   path,   yielding   significant  
network  inefficiencies.  In  today’s  protocols,  the  state  
of  each  packet  in  transit  is  tracked  to  guarantee  data  
packet   arrival   (e.g.,   TCP),   an   expensive   and   complex  
procedure.    
In   addition   to   operating   as   a   channel   code,   RLNC  
addresses  the  root  of  the  problem  by  eliminating  the  
need  for  tracking  the  state  of  each  individual  packet.      
RLNC's stateless communications eliminate administrative complexity and enable multipath
communications.   RLNC   implementations   demonstrate   significant   gains   in   throughput   in   multipath  
scenarios.  For  example,  RLNC  Multi-­‐Path  TCP  (MPTCP)  goodput  gains  reach  11x[6]  compared  to  TCP.  
Through   its   offloading   capabilities,   RLNC   combines   LTE/5G   reliability   with   WiFi   bandwidths,   thus  
improving   user   QoE.   An   example   is   illustrated   in   Appendix   4,   where   a   receiver   is   able   to   receive   a  
stream  using  both  its  WiFi  and  LTE/5G  networks.  In  this  case,  RLNC  offload  capabilities  not  only  provide  
higher   data   throughput,   but   also   allow   networks   to   minimize   transport   costs.   Hence,   RLNC   allows  
devices   to   get   the   benefits   of   a   full   cellular   connection   for   a   fraction   (a   few   percent)   of   the   cellular  
cost[11].    
Combining multiple paths with RLNC multiplies connection capacity while maintaining reliability and
management   simplicity.   Data   might   be   transmitted   simultaneously   over   WiFi   and   LTE/5G,   cable   and  
LTE/5G,   LTE/5G   and   DSL,   two   DSL   lines,  or   multiple   WiFi   channels   (see   Appendix   4).   In   a   home   theatre,  
for  example,  RLNC  enables  devices  fitted  with  multiple  WiFi  interfaces  to  multiply  their  WiFi  bandwidth  
without  need  for  major  inter-­‐channel  scheduling.  

Use  Case  6:  RLNC’s  Inherent  Security  


RLNC  as  a  Security  and  Content  Protection  Mechanism  
RLNC   is   a   powerful   natural   complement   to   the  
traditional   encryption   methods   used   for   content  
protection.    
RLNC ensures that decoding cannot be performed
without   a   pre-­‐determined   number   of   coded   packets.  
As   a   consequence,   sending   packets   through   multiple  
paths   creates   an   additional   level   of   protection   against  
eavesdropping.   The   figure   (right)   illustrates   this   in   a  
scenario   where   content   is   broadcast   through   a  
satellite  to  a  set  top  box.  However,  a  small  portion  of  
the   content   is   streamed   through   the   consumer’s  
broadband  Internet  connection.  In  this  case,  RLNC  can  
be used to ensure that the broadcast content is unusable without the low-bandwidth component.

Distributing   coded   content   across   multiple   locations   plays   a   similar   role   in   storage   and   content  
distribution   networks,   as   long   as   no   single   location   or   drive   holds   enough   information   to   decode.   A  
carefully  designed  RLNC  system  thus  inherently  offers  powerful  tools  to  control  access  to  content.  
Furthermore,   the   encryption   of   RLNC   coefficients   effectively   locks   the   whole   payload.   In   the   context   of  
multi-­‐resolution   video   delivery,   RLNC   coefficient   encryption   significantly   reduces   server   loads   while  
providing  viable  content  protection  even  through  high  loss  links[15].  

Use  Case  7:  RLNC’s  New  Ultra-­‐High  Reliability  Tools  


Coded  Multi-­‐hop  Networks:  Shifting  the  Network  Paradigm  with  RLNC  Recoding  

 
RLNC   randomly   generates   its   coding   coefficients   and   embeds   them   within   the   data   for   transport   and  
storage.   These   unique   features   enable   RLNC   to   re-­‐encode   coded   packets   at   different   network   nodes  
and  layers  without  need  for  prior  decoding.  A  recoding  node  can  combine  received  coded  packets  using  
locally  generated  random  coefficients.  Consequently,  it  can  react  to  local  network  degradation  instantly  
by   inserting   additional   parity   (coded)   packets   into   the   media   stream,   as   shown   in   the   figure   above.  
Recoding  does  not  increase  decoding  complexity.  (See  Appendix  5  for  a  detailed  example  of  recoding.)  
RLNC's recoding process has been demonstrated to significantly improve network efficiency and
reliability   in   Software   Defined   Networking   (SDN).   Recent   SDN   implementations   demonstrate  
considerable   improvements   for   RLNC   in   multihop   networks[6].   This   study   of   fundamental   multihop  
topologies   shows   that   simple   IP-­‐layer   recoding   strategies   enable   networks   to   realize   performance  
boosts  in  TCP  without  modifying  the  overlaying  end-­‐to-­‐end  transport  protocol.  The  results  demonstrate  
TCP  goodput  gains  above  3x[6],  achieved  through  RLNC’s  unique  recoding  capabilities.    
RLNC  multi-­‐hop  gains  apply  at  both  the  network  core  and  edge,  with  significant  throughput  and  latency  
gains  in  local  wireless  meshes  (e.g.,  WiFi  metro/home  network,  home  theatre  setup,  etc.).  

Use  Case  8:    Mesh  Networking  to  Improve  the  User’s  Media  Experience  
ABR  Optimization  in  Wireless  Mesh  Settings  
Future   home   network   setups   are  
increasingly   featuring   multiple  
playback   devices.   For   reliable  
operation,   such   devices   may  need  
to   function   as   a   coordinated  
wireless   mesh,   as   shown   in   the  
home  theatre  illustration  (right).    
Recent   work   has   demonstrated  
that   RLNC-­‐based   protocols   offer  
significant  Quality  of  Experience  (QoE)  gains  when  carrying  ABR  video  over  such  a  wireless  mesh[16].  The  

RLNC  testbed  uses  a  reference  implementation  of  the  Dynamic  Adaptive  Streaming  over  HTTP  (DASH)  
client,  along  with  a  standard  server  setup,  where  all  HTTP  traffic  is  carried  by  TCP.    
At  typical  WiFi  loss  levels  (2%),  RLNC  enables  DASH  to  achieve  a  30%  higher  bitrate  while  removing  all  
playback   interruptions.   At   higher   losses   (10%),   RLNC   allows   the   DASH   client   to   run   at   4x   its   uncoded  
bitrate  while  reducing  the  duration  of  video  interruption  by  one  order  of  magnitude.  Note  that  similar  
RLNC  gains  apply  for  a  number  of  tested  TCP  variants[16].  
A   number   of   unique   RLNC   features   are   used   in   this   setup.   The   RLNC-­‐based   protocol   enables   the   full  
exploitation  of  the  mesh  by  simplifying  the  participation  of  "helper"  nodes[16].  The  latter  automatically  
form  local  relay  topologies  and  use  recoding  to  maximize  link  efficiency  and  robustness.  In  addition,  the  
protocol  makes  full  use  of  RLNC's  flexible  encoding  schemes[16,17]  (see  Appendix  2).  
The   above   gains   extend   to   wireless   ad-­‐hoc   networks,   where   significant   throughput   and   latency  
improvements   were   demonstrated   in   the   context   of   vehicular   and   sensor   networks,   suggesting   that  
RLNC  is  an  unavoidable  evolutionary  step  towards  the  Internet-­‐of-­‐Things  (IoT).  

Use  Case  9:  Edge  Caching  with  RLNC,  Towards  Cache  Meshing  and  Cooperation    
RLNC  Recoding,  a  CDN  Game-­‐Changer  
RLNC’s   recoding   feature   can   be   used   to  
dramatically   improve   CDN   efficiency.  
Caching   is   used   broadly   within   CDNs   to  
distribute   content   more   efficiently.   Owing  
to  cheaper  and  more  powerful  storage  and  
computing   at   the   network   edge   (e.g.,   set  
top  boxes,  gaming  platforms),  edge  devices  
are  increasingly  playing  a  caching  role.    
The   figure   (right)   shows   a   typical   CDN  
multicast  architecture  with  multiple  tiers  of  
caches.   To   illustrate   the   value   of   RLNC’s  
recoding   capabilities,   red   links   are   each  
assumed  to  exhibit  5%  packet  losses.    
Restricted   by   their   end-­‐to-­‐end   structures,   conventional   block   and   rateless   codes   need   to   process  
cumulative  packet  losses  at  the  receiver  (see  Appendix  5).  This  results  in  the  transmission  of  the  worst-­‐
case  overhead  (above  15%)  to  all  customers,  regardless  of  whether  such  overhead  is  needed.  This  is  an  
expensive   requirement,   given   bandwidth   scarcity.   Owing   to   recoding,   RLNC   can   inject   redundancy   as  
needed  at  the  link  level,  resulting  in  optimal  overhead  and  throughput  over  all  links.  
Combining   its   recoding   and   meshing   capabilities,   RLNC   enables   true   edge   cache   clustering   and  
cooperation.  RLNC  also  provides  large  coding  speedups  for  storage  and  CDN  applications  compared  to  
competing  coding  libraries,  as  illustrated  in  Appendix  6.    

Use  Case  10:  RLNC’s  Seamless  User  Experience  in  a  Highly  Heterogeneous  Network  
Coded  Multiresolution  Transport  
Compared   to   single-­‐layer   transcoding   techniques,   layered   coding   helps   reduce   storage   costs   and  
bandwidth   consumption,   enabling   the   distribution   of   higher-­‐quality   multimedia.   RLNC   allows  
multimedia distribution networks to use the native scalable video coding of the H.264 or emerging H.265
standards  to  reduce  or  obviate  the  current  need  for  sophisticated  rate  measurements  and  resolution  

adjustments.   This   simplifies   considerably   the  
management   of   multimedia   distribution.   RLNC-­‐based  
multi-­‐resolution   schemes   yield   significant   data  
availability  and  transport  efficiency  gains[12,13,14].  
For   multi-­‐resolution   content   distribution,   RLNC  
enables  the  following  features  (see  Appendix  7):  
• Proactive   adjustment   of   both   resolution   and  
reliability  overhead  to  network  conditions.  
• Dynamic   adjustment   of   resolution   while  
protecting   high-­‐priority   layers/channels   (e.g.,  
base  layer),  even  in  broadcast  scenarios.  
• Allocating  reliability  overhead  dynamically  across  multiple  layers/channels.  
• More  efficient  bandwidth  use,  hence  serving  higher-­‐quality  content  to  heterogeneous  devices.  
• More  efficient  storage  of  multi-­‐resolution  content.  

Use Case 11: Towards Mobile Storage and Millisecond Latency


RLNC:  Harnessing  the  Global  Storage  Infrastructure  
Today’s   global   network   boasts   massive   storage  
infrastructure,   including   not   only   data   centers   and  
edge  caches,  but  also  service-­‐provider  network  caches  
and  customer  storage.  Despite  the  widespread  use  of  
content  replication  and  edge  caching,  failures  such  as  
CloudFlare's one-hour outage on March 3rd, 2013[21],
have   become   a   regular   feature   of   the   cloud-­‐service  
landscape.  
RLNC  brings  significant  performance  gains  and  energy  
savings  to  storage  systems  at  the  drive,  the  SAN,  and  
the   cloud   level.   Using   minimal   overhead,   RLNC  
increases   the   availability   and   security   of   data   stored  
across  multiple  drives/clouds.  Recent  RLNC  implementations  show  download  speed  gains  reaching  50%  
over  conventional  replication  methods,  across  multiple  cloud  storage  providers[22].  
Coding   across   data   blocks   improves   robustness   against   drive   and   transmission   failures.   By   removing  
state   tracking   (see   Appendix   4   &   5),   RLNC   is   provoking   a   qualitative   leap   in   drive   repair   and   dynamic  
data   reconstruction   technology.   In   dynamic   caching   scenarios,   RLNC   is   shown   to   consume   2.5x   less  
transport  and  1.25x  less  storage  resources  than  conventional  codes[23].  
Furthermore,   RLNC   is   a   green   technology,   as   it   reduces   data   center   energy   consumption   by   20-­‐50%  
through   curtailing   transaction   times   and   required   storage[24].   RLNC’s   coding   speedups   (see   Appendix   6)  
have   been   demonstrated   for   both   Intel   and   ARM   chipsets,   where   the   multi-­‐platform   RLNC   library   is  
compatible  with  hardware  acceleration[25].    
By   allowing   the   decoding   of   coded   data   blocks   in   quasi-­‐arbitrary   order   and   in   combination   with  
uncoded   data,   RLNC   has   not   only   added   flexibility   to   distributed   storage   systems;   it   has   become   a  
powerful tool for storage virtualization[2]. Furthermore, RLNC's simplified management ushers in a new
era   of   storage   mobility,   where   content   is   finally   as   mobile   as   the   customer,   shadowing   their   very  
movement.    

Conclusion  
RLNC is a next-generation coding algorithm that is crucial to realizing the network operating efficiencies
required  by  5G  Networks:  a  breakthrough  in  throughput,  latency,  reliability,  mobility,  and  security.    It  is  
also   the   first   code   capable   of   optimizing   both   storage   and   transport   networks,   a   vital   feature   as   5G  
networks  increasingly  combine  their  storage  and  network  applications.    
RLNC  is  uniquely  versatile  compared  to  conventional  codes,  as  it  can  be  implemented  through  software  
patches   at   any   layer   of   the   network.   Consequently,   it   can   be   applied   opportunistically,   enabling   a  
gradual   pay-­‐as-­‐you-­‐grow   network   evolution   towards   5G.   It   is   therefore   a   key   ingredient   enabling   the  
graceful  5G  migration  of  today’s  infrastructure,  without  requiring  forklift  upgrades.  
 
For  more  information,  please  contact:  
Laila  Partridge    –  Managing  Director,  Code  On     –  lp@code-­‐on-­‐technologies.com       –  781-­‐856-­‐5338      
Kerim  Fouli     –  VP  of  Technology,  Code  On     –  fouli@code-­‐on-­‐technologies.com     –  781-­‐414-­‐0541      

Appendices  
Appendix  1  
End-­‐to-­‐End  Coding  with  RLNC  
End-­‐to-­‐end   is   the   simplest   RLNC   configuration,   with   an   encoder   at   the   source   and   a   decoder   at   the  
destination.   In   this   simple   topology,   RLNC   operates   as   a   classical   channel   code,   albeit   featuring   a  
number   of   unique   advantages.   For   instance,   RLNC   can   be   implemented   at   any   layer   of   the   network  
stack   and   has   most   commonly   been   utilized   at   the   application   layer.   In   a   typical   application-­‐layer  
implementation,   the   encoder   is   located   either   at   the   source   of   the   content   or   at   an   intermediate  
(proxy)   location   within   the   network,   while   the   decoder   resides   at   a   destination   device.   Application-­‐
layer   RLNC   implementations   have   demonstrated   5x   throughput   improvements   at   3%   packet   losses.  
Even  with  20%  packet  losses  over  a  25Mbps  link,  RLNC  devices  can  stream  video  without  experiencing  
any buffer underruns (playback stalls)[2].
Figure  1  shows  the  expected  positioning  of  the  RLNC  encoding  and  decoding  modules  with  respect  to  
media  source  encoding  and  decoding  processes.  The  role  of  RLNC  here  is  to  deliver  the  source-­‐coded  
stream  reliably  across  the  service  provider’s  network.  

Figure  1  End-­‐to-­‐End  Coding  

RLNC   can   make   use   of   a   pre-­‐existing   packetization   structure   or   generate   its   own   packets   from   an   input  
bitstream.   This   is   illustrated   in   Figure   2,   a   simplified   block   diagram   showing   the   operations   of   the   RLNC  
encoder  and  decoder  modules  of  Figure  1.  Since  packets  entering  the  encoding  process  are  assumed  to  
be   of   equal   size,   an   optional   packetization   stage   may   be   required   for   functions   such   as   padding.   The  
input  frames  are  then  buffered  in  preparation  for  transmission.    

Figure  2  Encoding  and  Decoding  Module  Operation  

The  two  major  units  within  the  encoder  and  decoder  are  the  protocol  unit  and  encoding/decoding  unit.  
While   the   encoding   and   decoding   units   strictly   perform   encoding   or   decoding   over   a   set   of   buffered  

frames,  the  protocol  unit  manages  the  communication  process,  including  decisions  on  whether  to  send  
coded  or  uncoded  frames,  and  how  to  perform  coding.    
RLNC  allows  for  multiple  coding  schemes  (see  inset  below).   For  example,  one  simple  block  code  would  
require   input   frames   to   be   transmitted   in   fixed-­‐size   blocks,   followed   by   a   pre-­‐determined   number   of  
coded  frames.  In  this  case,  the  protocol  unit  would  require  the  encoder  to  generate  the  coded  frames  
in  time  to  be  transmitted  at  the  end  of  each  block.  The  protocol  unit  at  the  destination  would  assemble  
the   blocks,   decode   any   missing   packets,   and   output   the   completed   blocks.   Some   coding   schemes  
require  feedback  between  the  protocol  units  at  the  source  and  destination,  as  discussed  below.  
Reliable  AC-­‐3  Audio  Bitstream  
The stream of Figure 1 may be a 192 kbps enhanced AC-3 bitstream composed of 768-byte frames, each representing a 32 ms interval of time once decoded at 48 kHz. These syncframes, or access units, carry
their   own   error-­‐detection   checksums.   They   would   typically   be   mapped   into   HTTP   messages   or   RTP  
packets   and   encapsulated   into   UDP   or   TCP,   depending   on   the   application.   IP   packets   would   in   turn   carry  
those  transport  segments  across  the  service  provider’s  network,  where  typical  packet  losses  may  reach  
3%. (Note that packet losses may be much higher in mobile access, wireless meshes, and satellite links.)
RLNC  reference  implementations  are  available  wherever  a  software  patch  is  easiest  to  install,  including  
the   application-­‐layer   (typically   over   UDP)   and   the   kernel   (TCP,   IP).   However,   RLNC   is   not   layer-­‐
dependent.   For   instance,   RLNC   can   be   seamlessly   cascaded   under   any   proprietary   source   coding,   as  
shown  in  Figure  1.    
In that configuration, the RLNC encoder could use syncframes as input frames (see Figure 2). Assuming the simple block code described above is applied over a connection experiencing uniform 1% syncframe
losses,  and  assuming  30-­‐frame  blocks  were  used,  the  encoder  would  need  to  append  one  coded  frame  to  
each  block.  The  3.33%  redundancy  would  be  sufficient  to  cover  any  lost  or  corrupted  frames.  The  RLNC  
protocol unit at the receiver may utilize the syncframes' built-in error-detection scheme. Note that the
choice  of  the  block  size  and  redundancy  are  subject  to  multiple  channel  and  application  factors,  including  
loss  statistics  (e.g.,  burst  sizes)  and  receiver  delay  sensitivity  (e.g.,  playback  buffer  size).    
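As a quick check of the arithmetic in this example, the short Python snippet below recomputes the quoted figures from the stated frame parameters (the constants come from the text above; the snippet itself is purely illustrative and not part of any Code On implementation):

```python
# Block-code arithmetic for the enhanced AC-3 example (constants from the text).
frame_bytes  = 768          # bytes per AC-3 syncframe
frame_ms     = 32           # playback time per frame at 48 kHz
block_frames = 30           # frames per coding block
coded_frames = 1            # coded frames appended per block

bitrate_kbps  = frame_bytes * 8 / frame_ms     # -> 192.0 kbps, matching the stream rate
redundancy    = coded_frames / block_frames    # -> 0.0333..., i.e. the 3.33% overhead
block_span_ms = block_frames * frame_ms        # -> 960 ms, the worst-case buffering span
print(f"{bitrate_kbps:.0f} kbps, {redundancy:.2%} redundancy, {block_span_ms} ms block span")
```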
It  is  important  to  emphasize  that  the  above  coding  example  uses  RLNC  as  a  classical  block  channel  code  
without   feedback.   Although   RLNC   has   superior   tunability   in   such   settings,   its   unique   latency   gains   and  
versatility  advantages  are  highlighted  in  applications  and  topologies  where  today’s  channel  codes  cannot  
operate,  as  illustrated  in  the  remainder  of  this  document.  
What  differentiates  RLNC  from  existing  channel  codes  is  its  tunability.    RLNC  is  in  principle  agnostic  to  
the   source   coding   or   type   of   traffic   being   carried.   Hence,   parameters   such   as   packet   recovery  
probability,   latency,   jitter,   energy   consumption,   channel   utilization,   and   encoding/decoding   complexity  
can  be  adjusted  for  different  applications.    

The  RLNC  Encoding  and  Decoding  Process  


In  the  example  of  Figure  1,  four  encoded  packets  are  generated  from  the  three  native  stream  packets[2].  
The   encoded   packets   are   labeled   by   the   summation   symbol   Σ,   indicating   that   they   are   linear  
combinations  of  the  native  packets.  Figure  3  illustrates  how  one  encoded  packet  can  be  generated  from  
three   native   packets.   First,   a   random   coefficient   is   generated   for   each   uncoded,   or   native,   packet.  
Symbol-­‐wise   coding   is   then   performed   to   generate   the   coded   packet.   A   symbol   may   represent   any   unit  
of digital data, such as a bit or a byte. Each symbol of the resulting coded packet is a linear combination of
three   corresponding   symbols   in   the   native   packets.   Each   new   coded   packet   –while   having   the   same  
number  of  bits–  thus  carries  a  unique  mathematical  representation  of  the  native  packets.    
 


Figure  3  RLNC  Encoding  Process  
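The sketch below makes the process of Figure 3 concrete in Python. It is a minimal illustration rather than Code On's library: it assumes byte-sized symbols, coefficients drawn from GF(2⁸) with the common AES reduction polynomial, and hypothetical function names (gf_mul, rlnc_encode).

```python
import os

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) (reduction polynomial x^8 + x^4 + x^3 + x + 1)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        high = a & 0x80
        a = (a << 1) & 0xFF
        if high:
            a ^= 0x1B
        b >>= 1
    return p

def rlnc_encode(native_packets):
    """Return (coefficients, coded_packet): one random linear combination of
    equal-length native packets, computed symbol-wise over GF(2^8)."""
    coeffs = list(os.urandom(len(native_packets)))   # one random coefficient per packet
    coded = bytearray(len(native_packets[0]))
    for c, packet in zip(coeffs, native_packets):
        for i, symbol in enumerate(packet):
            coded[i] ^= gf_mul(c, symbol)            # addition in GF(2^8) is XOR
    return coeffs, bytes(coded)

# Three equal-size native packets, four coded packets -- as in Figure 1.
natives = [b"packet-one....", b"packet-two....", b"packet-three.."]
coded_stream = [rlnc_encode(natives) for _ in range(4)]
```

Each entry of coded_stream carries both its payload and the coefficients used to generate it, which is what allows the receiver to decode from any sufficiently large set of linearly independent packets.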

In   the   example   of   Figure   3,   receiving   any   three   linearly   independent   packets   is   sufficient   to   decode   the  
three  original  packets.  The  number  of  additional  coded  packets  (i.e.,  the  redundancy)  can  be  optimized  
so  that  it  closely  matches  the  channel  loss.    
Decoding   reverses   the   linear   operations   of   Figure   3   through   Gaussian   elimination,   a   customary  
algorithm   for   solving   systems   of   linear   equations.   In   Figure   4,   the   right-­‐side   coefficient   entries   are  
generated   through   this   process.   Gaussian   elimination   typically   requires   a   number   of   arithmetic  
operations on the order of n³, where n is the number of input packets. Importantly, RLNC can
use   coded,   uncoded,   and   partial   packets   to   decode   (see   Figure   4),   leading   to   lower   complexity   and  
easier  integration  with  existing  distributed  systems.  

Figure  4  Decoding  
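The corresponding decoder can be sketched as a Gauss-Jordan elimination over GF(2⁸). Again, this is illustrative only and not a particular library's API; gf_mul is the same helper as in the encoding sketch, gf_inv is a brute-force inverse (adequate for a 256-element field), and uncoded packets enter the solver as rows whose coefficient vector is a unit vector.

```python
def gf_mul(a, b):                       # same GF(2^8) multiply as in the encoding sketch
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        high = a & 0x80
        a = (a << 1) & 0xFF
        if high:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse in GF(2^8) by exhaustive search (fine for a sketch)."""
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def rlnc_decode(received, n):
    """Recover n native packets from >= n linearly independent (coeffs, payload) rows."""
    rows = [(list(c), bytearray(p)) for c, p in received]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col] != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]          # bring a pivot row up
        inv = gf_inv(rows[col][0][col])                          # normalise the pivot to 1
        rows[col] = ([gf_mul(inv, x) for x in rows[col][0]],
                     bytearray(gf_mul(inv, s) for s in rows[col][1]))
        for r in range(len(rows)):                               # eliminate the column elsewhere
            f = rows[r][0][col]
            if r != col and f != 0:
                rows[r] = ([x ^ gf_mul(f, y) for x, y in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ gf_mul(f, b) for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]

# Two uncoded packets plus one coded packet are enough to recover all three natives.
p1, p2, p3 = b"\x01\x02", b"\x10\x20", b"\xaa\xbb"
coded = bytes(gf_mul(7, a) ^ gf_mul(3, b) ^ gf_mul(9, c) for a, b, c in zip(p1, p2, p3))
received = [([1, 0, 0], p1), ([0, 0, 1], p3), ([7, 3, 9], coded)]
assert rlnc_decode(received, 3) == [p1, p2, p3]
```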

RLNC  Overhead  
Like   any   form   of   coding,   RLNC   comes   with   two   main   costs.   First,   there   is   a   computational   cost  
associated   with   the   complexity   of   encoding   and   decoding   RLNC   packets.   Second,   there   is   a   header  
overhead   associated   with   the   size   of   the   header   in   each   coded   packet.   Unlike   other   coding   schemes,  
these   costs   in   RLNC   are   strongly   dependent   on   the   application.   This   is   because   the   simplicity   of   the  
RLNC algorithm allows for a number of tradeoffs that limit such costs. Following are four examples of
parameters  that  strongly  influence  both  computational  and  packet  overheads:  
• Field/Symbol   Size:   The   symbol   size   (i.e.,   the   number   of   bits   allocated   to   the   encoding   symbols   of  
Figure  3)  governs  the  field  size  (i.e.,  the  number  of  coefficients  that  can  be  represented  by  such  a  
symbol). For example, a byte-sized symbol means that the field has 2⁸ = 256 elements. The field
size   determines   the   complexity   of   the   finite   field   operations   involved   in   computing   and   decoding  
the   linear   combinations.   In   addition,   it   clearly   impacts   the   size   of   each   coefficient   in   the   coding  
header.      
• Block  Size:  The  block  size  is  the  number  of  packets  to  be  coded  together.  Small  blocks  incur  lower  
computational  complexity.  In  addition,  they  require  a  shorter  list  of  coefficients,  hence  producing  
smaller   coding   headers.   On   the   other   hand,   larger   blocks   enable   finer   granularity   in   determining  
the   proportion   of   non-­‐redundant   packets   (i.e.,   the   code   rate),   thus   enabling   more   efficient  
adjustment  to  channel  losses.  
• Coding  Density:  The  coding  density  represents  the  proportion  of  packets  represented  within  each  
coded   packet.   In   RLNC,   a   linear   combination   can   involve   any   subset   of   the   block   packets.   Full   RLNC  

(i.e.,  representing  every  packet  in  each  linear  combination)  is  the  most  computationally  intensive  
and  requires  a  longer  coding  header.  
• Topology:  An  end-­‐to-­‐end  topology  such  as  the  one  shown  in  Figure  1  can  greatly  reduce  the  packet  
overhead  by  using  seeds  instead  of  explicit  coefficients.  This  applies  to  any  single-­‐hop  network  such  
as  most  wireless  access  networks.  
Unlike  any  other  channel  code,  RLNC  can  modify  all  of  the  parameters  listed  above  dynamically  to  trade  
performance   for   reduced   complexity.   It   should   be   noted   that   in   almost   all   existing   implementations,  
the performance gains in latency, energy consumption, throughput, and channel utilization have
justified  the  described  computational  and  overhead  costs  (see  listed  references).  
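To illustrate how these parameters interact, the snippet below computes the coefficient-header overhead for one hypothetical parameter choice (the values are invented for illustration, not a recommended configuration):

```python
# Hypothetical parameters, chosen only to illustrate the tradeoff.
symbol_bits   = 8        # field/symbol size: GF(2^8), one byte per coefficient
block_size    = 32       # packets coded together (the generation size)
payload_bytes = 1400     # payload carried by each packet

field_elements  = 2 ** symbol_bits                  # 256 distinct coefficient values
header_bytes    = block_size * symbol_bits // 8     # full-density coefficient list: 32 bytes
header_overhead = header_bytes / payload_bytes      # ~2.3% of each coded packet
print(field_elements, header_bytes, f"{header_overhead:.1%}")
```

Halving the block size halves the coding header but coarsens the redundancy granularity, while replacing explicit coefficients with a seed (the topology point above) removes most of the header altogether.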

Appendix  2  
RLNC’s  Multiple  Encoding  Schemes  
Unlike traditional channel codes, RLNC allows for multiple transport schemes while using simple linear
algebra   to   encode   and   decode.     This   capability   is   a   function   of   RLNC   using   random   coefficients   within   a  
suitable  field  size  to  generate  linear  combinations.    The  simplicity  of  this  random  coefficient  generation  
means  that  the  RLNC  algorithm  can  be  applied  at  any  node  and  any  layer.  More  importantly,  it  means  
that   the   code   can   be   carried   with   data.   Those   two   unique   attributes   are   the   source   of   RLNC’s  
versatility[2] and provide a number of encoding schemes where traditional codes are limited to one
(Block  Coding):  
• Block   Coding:   In   block   coding,   RLNC   operates   as   a  
conventional   block   code,   where   packets   are  
assembled   in   blocks   and   then   coded   together.   In  
RLNC,   feedback   provides   powerful   performance  
optimization  tools.  For  instance,  both  the  block  size  
and  the  redundancy  level  (proportion  of  additional  
coded  packets)  can  vary  dynamically  if  information  
on  received  packets  is  available.  
• On-­‐the-­‐Fly   Coding:   Unlike   traditional   codes,   RLNC  
does  not  require  the  encoder  to  receive  the  entire  
block before starting coded transmissions. Such on-the-fly coding allows for more flexible transmission schemes, particularly in streaming applications[3].
• Sliding-­‐Window   Coding:   RLNC   has   the   singular   capability   to   depart   from   the   block-­‐coding   paradigm  
and   adopt   a   sliding-­‐window   approach:   Each   coded   packet   becomes   a   representation   of   the  
transmitter’s   current   sliding   window,   as   shown   in   Figure   5.   In   sliding-­‐window   coding,   the  
transmitter   is   also   coding   new   arriving   packets   on   the   fly.   However,   this   approach   enables   both  
transmitter   and   receiver   to   coordinate   window-­‐decoding   events   (through   coordinating   encoding  
and  redundancy).  This  enables  the  control  of  application-­‐layer  latency  while  preserving  the  code’s  
reliability  features.  
• Systematic   Coding:   To   minimize   decoding   complexity   in   end-­‐to-­‐end   schemes,   it   is   desirable   to  
transmit the native packets ahead of any coded packet, a process called systematic coding.

Figure 5 RLNC Coding Capabilities
Reliable  AC-­‐3  Audio  Bitstream  –  Revisited  
The example of Appendix 1 uses RLNC as a conventional block code to protect an enhanced AC-3 audio bitstream against a uniform 1% frame loss. Using such a basic block code is inadequate when losses become dynamic (e.g., when loss bursts occur). For instance, if the average losses remained at 1% but bursts of

20% were likely to occur over a single-block period, the block code would need to carry 20% redundancy at
all   times   (i.e.,   6   coded   frames   for   each   30-­‐frame   block),   leading   to   bandwidth   inefficiency.   The  
alternative  would  be  to  lose  the  entire  block  whenever  burst  losses  occurred.    
Faced   with   burst   losses,   the   most   efficient   and   robust   reliability   schemes   use   feedback   to   let   the  
transmitter   adapt   the   number   of   coded   frames   to   current   channel   losses.   RLNC   is   the   only   code   that   has  
the  flexibility  of  combining  any  set  of  packets  at  any  time;  it  is  therefore  best  suited  for  using  feedback.  
Consider the following RLNC sliding-window approach (a code sketch follows the list):
(1) The  transmitter  adds  each  new  frame  to  the  coding  window.    
(2) For   each   frame   added   to   the   coding   window,   the   transmitter   sends   one   coded   frame.   Each   coded  
frame  is  a  representation  of  the  entire  coding  window.    
(3) The  decoder  sends  an  ACK  for  each  received  frame.    
(4) Each  ACK  identifies  the  earliest  frame  that  can  be  decoded  using  the  received  frames  (i.e.,  earliest  
seen  frame)  and  the  number  of  frames  required  to  decode  the  current  window  (i.e.,  missing  degrees  
of  freedom).  
(5) Upon  receiving  an  ACK,  the  transmitter  slides  the  coding  window  by  removing  any  seen  frames.  
(6) After   sending   10   frames,   the   transmitter   closes   the   window:   it   transmits   the   required   number   of  
coded  frames  for  decoding  the  current  window.  The  transmitter  then  starts  a  new  coding  window  
(step  (1)).  
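The transmitter side of steps (1)-(6) can be sketched as follows. This is a simplified model rather than a protocol specification: the actual GF(2⁸) combining, the loss process, and the feedback transport are abstracted away, and the ack_channel interface (poll, wait_latest) is hypothetical.

```python
WINDOW_LIMIT = 10   # frames sent before the window is closed, as in step (6)

def coded(window):
    """Placeholder for one RLNC combination spanning every frame in the window."""
    return ("coded", tuple(window))

def sliding_window_sender(frames, ack_channel):
    window, sent_in_window = [], 0
    for f in frames:
        window.append(f)                                   # step (1): add frame to window
        yield coded(window)                                # step (2): one coded frame per input
        sent_in_window += 1
        for earliest_seen, _ in ack_channel.poll():        # steps (3)-(4): receiver feedback
            while window and window[0] <= earliest_seen:   # step (5): slide past seen frames
                window.pop(0)
        if sent_in_window == WINDOW_LIMIT:                 # step (6): close the window
            _, missing_dof = ack_channel.wait_latest()
            for _ in range(missing_dof):
                yield coded(window)                        # repair frames for this window
            window, sent_in_window = [], 0

class NoFeedback:
    """Stand-in ACK channel that reports nothing as seen and nothing missing."""
    def poll(self):
        return []
    def wait_latest(self):
        return (-1, 0)

transmitted = list(sliding_window_sender(range(25), NoFeedback()))
```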
Some  of  the  notable  advantages  of  the  sliding-­‐window  approach  are  described  below:  
• Significant   Latency   Gains:   Owing   to   RLNC's   sliding   window   scheme,   the   transmitter   can   control   the  
coding  window  size  using  the  thresholding  mechanism  in  step  (6)  (e.g.,  number  of  frames  sent,  time  
elapsed,   number   of   missing   degrees   of   freedom   reached,   etc.).   This   limits   considerably   the   latency,  
buffering,   and   processing   requirements   at   the   receiver.   Although   most   frames   will   be   passed   to   the  
receiver  application  with  no  decoding  delay  at  all,  the  worst-­‐case  decoding  event  (e.g.,  burst  loss  hits  
early   frame   transmissions)   involves   decoding   the   first   ten   frames   together,   leading   to   a   buffering  
latency  of  320ms  followed  by  a  10-­‐frame  decoding  event.  In  contrast,  the  block  code  (p.4)  will  always  
need   a   30-­‐frame   decoding   event   for   each   block.   The   average   and   peak   buffering   latencies   in   the   block  
scheme  are  480ms  (loss  hitting  the  middle  frame)  and  960ms  (loss  hitting  the  first  frame).    
• Higher   Bandwidth   Efficiency:   The   RLNC   sliding-­‐window   scheme   will   adapt   to   burst   losses   by  
generating   the   exact   amount   of   required   coded   frames   (in   step   (6)).   For   example,   a   four-­‐minute  
stream  (7500  synframes)  experiencing  1%  losses  and  fifty  individual  six-­‐frame  burst  loss  events  would  
lose   325   frames   in   total.   The   sliding-­‐window   approach   would   require   precisely   325   additional  
redundancy frames, or 4.33%. In contrast, the block code of Appendix 1 would invariably add a 20% overhead to
all   blocks   (1500   frames).   This   leads   to   massive   bandwidth   gains   in   dynamic   situations   (e.g.,   mobile  
networks  or  congested  broadband  networks).  
• Lower  Decoding  Complexity:  The  delay  trigger  of  step  6  enables  the  transmitter  to  control  the  size  and  
frequency   of   the   decoding   events.   This   creates   a   smoother   throughput   curve   and   lower   frame  
latencies  at  the  receiver,  but  also  controls  the  complexity  of  the  decoding  events.  
 

Appendix  3  
RLNC  Multicasting  Capabilities  
In   broadcast   environments,   RLNC   can   be   combined   with   protocols   such   as   NORM   (NACK-­‐Oriented  
Reliable  Multicast)  to  yield  powerful  broadcast  capabilities.  A  notable  RLNC-­‐based  NORM-­‐enhancement  
is   the   Speeding   Multicast   by   Acknowledgment   Reduction   Technique   (SMART)   protocol   [18],   described  
below.  
SMART   is   an   RLNC-­‐based   feedback   protocol   that   uses   a   predictive   model   to   determine   the   optimal  
feedback   time   for   a   broadcast   channel   with   a   potentially   large   number   of   receivers.   Scheduling   the  
feedback   according   to   this   predictive   model   is   shown   to   reduce   both   the   feedback   traffic   and  

redundancy   packet   transmissions.   SMART   reduces   per-­‐packet   completion   time   by   30-­‐50%   compared   to  
existing  reliable  multicast  protocols  over  a  wide  range  of  file  sizes,  loss  levels,  and  network  sizes,  hence  
closely  matching  the  performance  of  an  omniscient  transmitter  requiring  no  feedback.    
SMART   exhibits   strong   QoE   performance   as   well   as   scalability   to   increasing   file   sizes,   varying   channel  
loss   probabilities,   and   most   notably,   network   size.   Its   predictive   model   is   shown   to   be   robust   to  
incorrect   channel   estimation,   uncertainty   on   the   number   of   receivers,   NACK   losses,   and   correlated  
packet  losses.    

Appendix  4  
RLNC’s  Multipath  Encoding  
Since  the  code  is  embedded  in  each  packet,  RLNC  renders  packets  interchangeable,  and  arrival  order  
irrelevant.   The   destination   user   needs   only   to   assemble   a   sufficient   number   of   packets,   coded   or  
uncoded,  in  order  to  decode  the  stream.  This  means  that  RLNC  allows  networks  to  seamlessly  combine  
connections  with  wide-­‐ranging  loss,  latency  and  bandwidth  characteristics,  without  need  for  complex  
scheduling.  

Figure  6  Multipath  Transport  

Not only is path coordination unnecessary at the source; RLNC also enables the combination of
multiple   heterogeneous   sources.   In   the   example   of   Figure   6,   the   destination   pulls   content   from   two  
distinct   sources.   The   coded   source   (source   1)   is   able   to   transmit   packets   via   two   separate   paths.   The  
uncoded  source  (source  2)  uses  a  single  path,  as  is  customary  in  today’s  networks.  The  receiver  is  able  
to combine the first five arriving packets without any need for source or path coordination. The loss of
one   of   the   two   paths   by   Source   1,   depicted   in   Figure   6,   would   have   dramatic   consequences   on   the  
stream  without  coding.  
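The receiver behavior implied by Figure 6 can be sketched as follows. Packets arrive from either source over any path, and the receiver merely checks whether each arrival adds a new degree of freedom. To keep the sketch short, coefficient vectors are written as bitmasks over GF(2); a real RLNC receiver would use a larger field, where five random arrivals are independent with very high probability. The origins and coefficient values are invented for illustration.

```python
def add_if_innovative(basis, vec):
    """XOR-reduce an incoming coefficient bitmask against the basis; keep it only
    if it contributes a new degree of freedom (i.e., it is linearly independent)."""
    for pivot in basis:
        vec = min(vec, vec ^ pivot)
    if vec:
        basis.append(vec)
    return bool(vec)

GENERATION = 5                      # five native packets, as in Figure 6
basis = []
arrivals = [                        # (origin, coefficients over native packets 1..5)
    ("source 2, uncoded, path C", 0b00001),
    ("source 2, uncoded, path C", 0b00010),
    ("source 1, coded,   path A", 0b10110),
    ("source 1, coded,   path B", 0b01011),
    ("source 1, coded,   path A", 0b01100),
]
for origin, vec in arrivals:
    add_if_innovative(basis, vec)
    if len(basis) == GENERATION:
        print(f"decodable after the packet from {origin}")   # fifth arrival completes decoding
```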
Several   multipath   implementations   show   that   applying   RLNC  
across   multiple   channels   yields   the   sum   of   the   optimized  
throughputs,  without  switching  or  coordination[6,9,10].  Coding  over  
multiple   orthogonal   wireless   channels   is   expected   to   yield   other  
performance   improvements,   including   security   gains   and  
robustness  against  single-­‐channel  jamming  or  congestion.  
Multi-­‐sourced  Streaming    
Multimedia   streaming   is   a   major   application   of   RLNC’s   multipath  
principle.   To   illustrate   this,   consider   that   the   sources   of   Figure   6  
are  two  server  or  cache  sites  (e.g.,  Netflix)  hosting  a  coded  version  
of  the  requested  media  content.  The  receiver  (e.g.,  home  set  top  

box  or  enterprise  streaming  device)  can  now  switch  dynamically  from  one  site  to  the  other  depending  
on   availability   and   network   quality,   without   service   interruptions.   Alternatively,   it   can   download  
packets  simultaneously  from  both  sites,  without  need  for  packet  scheduling.  This  configuration  allows  
the  provider  to  slash  downtimes  by  distributing  the  load  dynamically  over  different  sites.  Hot  content  
can  also  be  shifted  closer  to  the  consumer  in  a  more  graceful  manner.    
WiFi  Offload  
Through   its   offloading   capabilities,   RLNC   combines   LTE   reliability   with   WiFi   bandwidths,   thus   improving  
user  QoE.  An  example  is  illustrated  in  Figure  7,  where  a  receiver  is  able  to  receive  a  stream  using  both  
its   WiFi   and   LTE   networks.   In   this   case,   RLNC   offload   capabilities   not   only   provide   higher   data  
throughput,   but   also   allow   networks   to   minimize   transport   costs   by   optimally   combining   expensive  
reliable   networks   (e.g.,   LTE)   with   cheap   unreliable   ones   (e.g.,   WiFi).   This   has   been   demonstrated  
through  OPNET  simulations,  where  RLNC  devices  get  the  benefits  of  a  full  4G  connection  for  a  fraction  
(a  few  percent)  of  the  4G  cost[11].  This  channel  bonding  capability  extends  beyond  WiFi  offloading.  In  a  
home  theatre,  for  example,  RLNC  enables  devices  fitted  with  multiple  WiFi  interfaces  to  multiply  their  
WiFi bandwidth without need for major inter-channel scheduling.

Figure 7 Coded WiFi Offloading

Appendix  5  
RLNC’s  Recoding  Gains  
Recoding  enables  RLNC  to  uniquely  adjust  its  coding  overhead  to  local  network  conditions.  In  the  toy  
example  of  Figure  8,  RLNC  recoding  is  contrasted  with  conventional  end-­‐to-­‐end  coding.  The  common  
scenario   is   the   transmission   of   a   20-­‐packet   file   from   source   (S)   to   destination   (D)   across   a   tandem  
network   where   each   of   the   three   hops   has   a   10%   packet   loss.   The   example   simulates   the   quality   of  
each   link   by   showing   packet   losses   in   red   above   each   link.   End-­‐to-­‐end   coding   naturally   requires   the  
provisioning   of   all   the   required   redundancy   at   the   source   node.   In   this   example,   this   redundancy  
amounts  to  37%  of  the  native  file  size.  

Figure  8  Recoding  Illustrative  Example  

Recoding,   on   the   other   hand,   enables   RLNC   to   renew   the   redundancy   at   each   intermediate   node  
without need for decoding. RLNC's random coefficient generation and code embedding features allow
any   intermediate   node   to   participate   in   the   coding   process,   in   particular   by   recombining   received  

packets   and   by   generating   new   redundancy   packets.   As   a   result,   the   required   redundancy   for   RLNC  
does   not   exceed   the   worst-­‐case   redundancy   required   at   any   hop,   11%   in   this   example.   Recoding   hence  
makes  RLNC  a  uniquely  composable  code  that  is  capable  of  adding  redundancy  only  when  and  where  
needed.  
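The recoding operation itself can be sketched as follows (illustrative only; gf_mul is the same GF(2⁸) helper used in the Appendix 1 sketches, and the received packets are toy values). The key point is that the relay applies a single set of locally drawn random weights to both the embedded coefficient vectors and the payloads, so its output is a valid coded packet of the same generation, produced without any decoding.

```python
import os

def gf_mul(a, b):                       # GF(2^8) multiply, AES reduction polynomial
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        high = a & 0x80
        a = (a << 1) & 0xFF
        if high:
            a ^= 0x1B
        b >>= 1
    return p

def recode(received):
    """Combine already-coded packets (coeffs, payload) into one fresh coded packet."""
    gen_size, size = len(received[0][0]), len(received[0][1])
    weights = list(os.urandom(len(received)))        # locally generated random weights
    out_coeffs, out_payload = [0] * gen_size, bytearray(size)
    for w, (coeffs, payload) in zip(weights, received):
        for j in range(gen_size):
            out_coeffs[j] ^= gf_mul(w, coeffs[j])    # update the embedded coefficients...
        for i in range(size):
            out_payload[i] ^= gf_mul(w, payload[i])  # ...and the payload with the same weights
    return out_coeffs, bytes(out_payload)

# A relay holding two coded packets of a three-packet generation (toy values)
# can inject an extra repair packet to cover a loss on its outgoing link.
received = [([3, 7, 1], b"\x10\x22\x35\x4a"), ([5, 2, 9], b"\x61\x0c\x7f\x02")]
repair_coeffs, repair_payload = recode(received)
```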

Appendix  6  
Boosting  Coding  Speeds  for  Distributed  Storage  and  CDNs  
When   it   comes   to   sheer   coding   speeds,   the   RLNC   library   is   significantly   faster   than   industry   standard  
libraries.   In   a   May-­‐2014   measurement   campaign,   Kodo   outperformed   state-­‐of-­‐the-­‐art   storage   libraries,  
including  ISA-­‐L,  Jerasure,  and  OpenFEC.  Network  World  picked  up  the  story[19]:  

 
“RLNC   performed   13%   to   465%   faster   than   the   industry   standard   Reed-­‐Solomon   encoding   in   Storage  
Area  Network  (SAN)  erasure  application  testing.  The  Kodo  library,  using  RLNC  to  encode  and  decode  data  
on   a   SAN   for   error   correction   and   fault   tolerance,   was   compared   to   Intel’s   Reed-­‐Solomon   library  
implementation, called ISA-L, and an open source library implementation called Jerasure. The Kodo/RLNC implementation ran consistently faster on identical SAN hardware.”[19]
RLNC's main selling point is its capability to enable next-generation information infrastructure by inherently providing unique features such as multipath transport, seamless distributed storage, and
robust   streaming.   Nevertheless,   the   reported   benchmarking   results   illustrate   RLNC’s   readiness   for  
deployment  as  an  alternative  to  existing  storage  and  transport  codes.  

Appendix  7  
RLNC  Multi-­‐Resolution  Transport  
RLNC   offers   a   number   of   tools   to   optimize   scalable   transport   in   both   point-­‐to-­‐point   and   multicast  
topologies,   as   shown   in   Figure   9.   Through   the   transcoding   of   base-­‐   and   enhancement-­‐layer   packets,  
RLNC  is  capable  of  dynamically  switching  to  different  resolutions  while  maintaining  reliability.    
 
RLNC’s  flexibility  is  illustrated  by  the  multicasting  setup  of  Figure  10,  where  all  users  receive  both  layers  
irrespective  of  which  packets  were  lost  by  each  node.  
   

Figure 9 RLNC Scalable Multimedia
Figure 10 RLNC Scalable Multicasting
 
Multi-­‐Channel  Content  Distribution  
Figures   9   and   10   underline   RLNC’s   ability   to   transport   a   set   of   synchronous   parallel   streams,   or  
channels, while ensuring that losses affect the lowest-priority channels first. In the example of Figure 9,
the   base   and   enhancement   layers   may   represent   the   high-­‐   and   low-­‐priority   channels,   respectively.   The  
same   principle   can   therefore   be   applied   to   a   group   of   audio   and   video   channels   belonging   to   the   same  
media  stream,  sent  independently  during  playback.  
RLNC  allows  the  dynamic  allocation  of  redundancy  across  all  channels.  In  the  two-­‐channel  example  of  
Figure  9,  as  long  as  the  allocated  redundancy  is  capable  of  addressing  aggregate  losses,  combining  the  
redundancy for the two channels is beneficial, as it ensures that losses will not degrade any single channel. This is also clear in the multicasting example of Figure 10, where RLNC ensures the reception of
both  channels  for  any  two-­‐packet  loss  configuration.  Furthermore,  RLNC  enables  the  redundancy  level  
to   tightly   match   channel   losses   if   good   feedback   is   provided.   If   losses   exceed   allocated   redundancy,  
however,  low-­‐priority  channels  (e.g.,  alternative  dialog)  can  switch  to  transporting  redundancy  for  the  
main  channels,  as  illustrated  in  Figure  9.  
 
In  a  multiresolution  stream  scenario,  the  combination  of  dynamically  adjusting  resolution  to  bandwidth  
(Fig.  9)  and  the  property  of  serving  heterogeneous  devices  (low-­‐  and  high-­‐resolution)  provides  a  very  
versatile   media   stream.   Such   a   stream   will   be   efficient   in   terms   of   storage   on   the   server[12]   (as  
mentioned   above)   and   will   be   able   to   provide   the   same   dynamic   adjustment   as   multiple   single-­‐
resolution  streams  (e.g.,  with  HTTP  live  streaming  protocols)  while  being  multicast-­‐friendly.  
 
 
 
   

 
References  
[1]   Code  On  Technologies  (http://www.codeontechnologies.com/)  
[2]   Random  Linear  Network  Coding:  A  Tutorial  –  Code  On  White  Paper  
[3]   On  the  Delay  Characteristics  for  Point-­‐to-­‐Point  Links  using  Random  Linear  Network  Coding  with  On-­‐
the-­‐Fly  Coding  Capabilities.  European  Wireless  2014.  
[4]   Access-­‐Network  Association  Policies  for  Media  Streaming  in  Heterogeneous  Environments,  CDC  2010  
[5]   Avoiding  Interruptions  -­‐  a  QoE  Reliability  Function  for  Streaming  Media  Applications,  JSAC  2011  
[6]   Network  Coded  Software  Defined  Networking.  European  Wireless,  2015.  
[7]   Congestion  Control  for  Coded  Transport  Layers,  ICC  2014  
[8]   Network  Coded  TCP  Performance  over  Satellite  Networks.  (http://arxiv.org/abs/1310.6635)  
[9]   Applying  Network  Coding  to  TCP.  L.  Urbina,  MIT  M.Eng.  Thesis  (2012).  
[10]   Multi-­‐Path  TCP  with  Network  Coding  for  Heterogeneous  Networks.  Information  Theory  and  
Applications  Workshop  (ITA)  2012.  
[11]   Network  Coding  with  Association  Policies  in  Heterogeneous  Networks.  IFIP  TC  6th  int.  conf.  on  
Networking  (2011)  
[12]   Resolution-­‐aware  network  coded  storage  (http://arxiv.org/abs/1305.6864)  
[13]   Network  Coding  for  Multi-­‐Resolution  Multicast,  INFOCOM  2010  
[14]   On  the  Combination  of  Multi–Layer  Source  Coding  and  Network  Coding  for  Wireless  Networks,  IEEE  
CAMAD  2013  
[15]   Secure  Network  Coding  for  Multi-­‐Resolution  Wireless  Video  Streaming,  IEEE  JSAC,  April  2010  
[16]   Supporting  Dynamic  Adaptive  Streaming  over  HTTP  in  Wireless  Meshed  Networks  using  Random  
Linear  Network  Coding.  NetCod  Symposium,  June  2014.  
[17]   Throughput  vs.  Delay  in  Lossy  Wireless  Mesh  Networks  with  Random  Linear  Network  Coding.  
European  Wireless  Conference,  May  2014.  
[18]   Speeding  Multicast  by  Acknowledgment  Reduction  Technique  (SMART)  Enabling  Robustness  of  QoE  to  
the  Number  of  Users.  IEEE  JSAC,  August  2012.  
[19]   Network  World  (http://www.networkworld.com/article/2342846/data-­‐breach/how-­‐mit-­‐and-­‐caltech-­‐s-­‐
coding-­‐breakthrough-­‐could-­‐accelerate-­‐mobile-­‐network-­‐speeds.html)  
[20]   Can  network  coding  bridge  the  digital  divide  in  the  Pacific?  The  University  of  Auckland  —  in  progress.  
[21]   Cloudflare.com  (https://support.cloudflare.com/hc/en-­‐us/articles/200172446-­‐CloudFlare-­‐Post-­‐
Mortem-­‐from-­‐Outage-­‐on-­‐March-­‐3-­‐2013)  
[22]   Distributed  Cloud  Storage  Using  Network  Coding.  IEEE  Consumer  Communications  and  Networking  
Conference  (CCNC),  2014.  
[23]   Implementation  and  Performance  Evaluation  of  Distributed  Cloud  Storage  Solutions  using  Random  
Linear  Network  Coding.  IEEE  ICC  2014.  
[24]   Toward  Sustainable  Networking:  Storage  area  networks  with  network  coding.  Allerton,  2012  
[25]   Kodo  Library,  Steinwurf  (http://steinwurf.com/kodo/)  
 
