
Feedback-Based Network Coding for Next-Generation Wireless Broadcast
Sovanjyoti Giri

CHAPTER 4

Throughput Efficiency Analysis of rDWS Technique

4.1 Introduction
We have seen in Chapter 3 that rDWS encoding is not throughput optimal, as the coefficients of the unseen packets are chosen randomly from the finite field. Now, random linear network coding (RLNC) is proven to achieve multicast capacity with probability tending to 1 with increasing field size [7]. However, the randomized encoding in rDWS is not exactly the same as in the conventional RLNC approach, due to the in-order delivery constraint. Because of the constraint, a receiver sees its next unseen packet (and not just any unseen packet) upon receiving an innovative coded combination in rDWS broadcast. With reference to the throughput-optimal DWS technique, the measure of throughput efficiency of rDWS is the probability with which a coded combination, formed by the CM of rDWS, becomes innovative for all receivers (we term this the probability of innovativeness of a coded combination). In this chapter, our goal is to find this probability with respect to the generation-based broadcast setup of Section 3.3.


4.1.1 Contributions of the Chapter


While finding the probability of innovativeness of a coded combination (we denote it as P_L), we consider the worst-case scenario where all packets of a generation are involved in a coded packet. One can see, in Algorithm 2, that there is no interdependency between the picks of the coefficients of the unseen packets. So, each pick is independent of the previous picks, and the CM forms the coded combination after collecting the coefficients of the unseen packets. Now, a coded combination p_c = \sum_{i=1}^{m} p_i α_i being innovative to all receivers implies that, for each i (i = 1 to m), the coefficient α_i of the unseen packet p_i is innovative for every receiver of R(i). So, to find the probability of innovativeness of a coded combination in rDWS, our first task is to determine the probability of a picked-up coefficient being innovative with respect to the concerned receivers, which we name the probability of innovativeness of a picked-up coefficient (we write it as P_C(i)).
Now, the probability of innovativeness of a picked-up coefficient for an unseen packet p_i with respect to the receivers who have not heard of that packet is found in a straightforward manner. However, finding an exact expression of the same with respect to the receivers who have heard of that packet but have not seen it happens to be difficult, as it involves a finite-dimensional random walk, and the dimension of the walk increases rapidly with the number of receivers. The probabilities of innovativeness in the first and the second cases are denoted as P'_C(i) and P''_C(i) respectively. By using the theory of integer partitions [146], we have obtained a mathematical expression (not in closed form) of P''_C(i). Next, we perform two different analyses of the probability P''_C(i), namely a maximum entropy analysis and an average understanding analysis. Subsequently, a lower bound on the same is obtained that eventually helps in getting a lower bound on the probability of innovativeness of a coded combination P_L. Next, a non-linear integer programming (NLIP) framework is constructed to find the minimum value of P_L. After solving the NLIP, we plot the obtained minimum value with respect to field size for different values of m and n.
The throughput efficiency analysis and the relevant plots of this chapter reveal some interesting facts about innovativeness and show that rDWS encoding achieves near-optimal performance for a finite field of sufficient size.


The rest of the chapter is organized as follows: Section 4.2 provides an overview of the different types of receivers (and their notations) depending on their status on seeing the packets of a generation in the rDWS broadcasting scenario. The probability of innovativeness of a picked-up coefficient of an unseen packet, P_C(i), is investigated in Section 4.3. With respect to the receivers (with the same next unseen packet) who have heard of their next unseen packet, the probability of innovativeness is analyzed using the theory of integer partitions in Section 4.3.1. Consequently, a lower bound on P_C(i) is obtained in Section 4.3.2. Next, we examine the probability of innovativeness of a coded combination formed by the randomized CM algorithm in Section 4.4, and the minimum value of the same is evaluated in Section 4.4.1, which corresponds to the worst-case throughput performance of rDWS. Finally, Section 4.5 presents the conclusions of Chapter 4.

4.2 Various Types of Receivers and their Sets


Before going into the details of the probability of innovativeness analyses, we look at the following discussion and the respective notations of the various types of receivers that are present in the broadcasting system. We will use these notations very frequently in the rest of this chapter.
In Chapter 3, we have seen the set R(i), which is the collection of the receivers whose next unseen packet is p_i. There can be two types of receivers in R(i). The receivers who have not heard of packet p_i are of the first type, and the set of these receivers is denoted as T(i). The second type of receivers, who have heard of p_i but have not seen it, belong to the set S(i). Therefore, R(i) = T(i) ∪ S(i). One can observe that R(1) = T(1), as S(1) is always an empty set. Apart from the receivers which belong to some R(i) for i = 1, 2, ..., m, there may exist some receivers which have seen all packets of the generation. The set of those receivers is denoted as E(m).


4.3 Probability of Innovativeness of a Picked-up Coefficient
We have seen that Algorithm 1 selects an innovative coefficient for each unseen packet¹ during DWS encoding with the help of the witnesses of the seen packets of the concerned receivers. In this section, with reference to Algorithm 1, we find out the probability of a randomly chosen coefficient being innovative.
If one looks at Algorithm 1, from line 5 to line 7, the CM finds the expression of y_j for receiver j ∈ R(i). Now, the expression of y_j for a receiver j ∈ T(i) does not contain a p_i-related term. Therefore, any choice of coefficient from the finite field under consideration (except the zero element) for p_i in the coded combination is going to be innovative with respect to every receiver in T(i). Consequently, the probability of innovativeness of a picked-up coefficient in rDWS is always (q−1)/q (as the coefficients are picked uniformly from the finite field) for the receivers of T(i).
Next, for a receiver j ∈ S(i), the corresponding y_j has a p_i-related term, and let the coefficient of p_i in y_j be c_j(i). Let us also consider a vector c(i) that contains all the c_j(i)'s (for a fixed S(i) and j = 1, 2, ..., |S(i)|, where |S(i)| is the cardinality of the set S(i)). Therefore, c(i) = (c_1(i), c_2(i), ..., c_{|S(i)|}(i)). All the c_j(i)'s in the vector c(i) may have the same value, or may all have different values, or any situation in between these two can occur. The total number of situations that can occur here is the same as the number of integer partitions [146] of |S(i)|. When all coefficients are of the same value, a total of q − 1 choices (excluding that value) are good choices² for α_i (line 3, Algorithm 2). In the other extreme case, when all the c_j(i)'s are different, a total of q − |S(i)| good choices are available. Therefore, the CM in rDWS chooses an innovative coefficient with probability (q−1)/q and (q−|S(i)|)/q respectively with respect to the receivers of S(i) in the two terminal situations described above. In any other situation, the probability lies between (q−1)/q and (q−|S(i)|)/q. We illustrate our discussion on the probability of innovativeness of a picked-up coefficient for the receivers of S(i), with the help of the integer partitions of |S(i)| and the corresponding good choices available to the CM, in the following example.

¹ A coefficient being innovative for an unseen packet p_i implies that the same is applicable with respect to the receivers of R(i), unless mentioned otherwise.
² By a good choice, we mean that the corresponding chosen coefficient is innovative.

Example 2. Let us consider a broadcasting scenario where the packets are encoded with the rDWS technique. At the beginning of a particular time slot, it is found that |S(3)| = 4 and the vector c(i) takes the form c(3) = (c_1(3), c_2(3), c_3(3), c_4(3)). Depending on the channel erasures and the time slot under consideration, a total of five forms of c(3) can occur. These are (β_1, β_2, β_3, β_4), (β_1, β_2, β_3, β_3), (β_1, β_1, β_2, β_2), (β_1, β_2, β_2, β_2) and (β_1, β_1, β_1, β_1), where β_1, β_2, β_3, β_4 are arbitrary and distinct elements of F_q. In the case of (β_1, β_2, β_3, β_4), all coefficients are different. Therefore, the total number of available good choices for the coefficient of p_3 in the coded combination (i.e. α_3) is q − 4, and the probability of innovativeness is (q−4)/q. Now, the form (β_1, β_2, β_3, β_4) corresponds to the integer partition |S(3)| = 4 = 1 + 1 + 1 + 1, where the partition has a total of four parts. Similarly, the other forms of c(3) correspond to the partitions 4 = 1 + 1 + 2, 4 = 2 + 2, 4 = 1 + 3 and 4 = 4 respectively. For the partition 4 = 2 + 2, two receivers of S(3) have the same coefficient of p_3 (say β_1) in their corresponding y_j's. The other two receivers also have the same coefficient in their y_j's (say β_2), but it is different from the previous one (i.e., β_2 ≠ β_1). Therefore, each part of a partition denotes how many of the receivers in S(3) have the same coefficient of p_3 in their corresponding y_j expressions. The total number of parts of a partition indicates the number of field elements that the CM must not pick as α_3 if it wants to find an innovative coefficient, and subtracting the total number of parts of a partition from q gives the total number of good choices for α_3. Hence, the number of good choices available for both the partitions 4 = 2 + 2 and 4 = 1 + 3 is q − 2. One can notice that which receivers have the same coefficient of p_3 in their y_j's does not affect the number of good choices, whereas how many of the receivers have the same coefficient matters. In Table 4.1, a detailed description of all the partitions of |S(3)| along with the probability of innovativeness is provided.
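The counting in Example 2 can be reproduced with a minimal Python sketch; this is only an illustrative check of the reasoning, not part of the rDWS algorithms. For a given form of c(3), a choice of α_3 is good exactly when it differs from every distinct c_j(3), so the number of good choices is q minus the number of distinct entries. The field size q = 16 and the concrete β values below are illustrative assumptions.

q = 16                         # illustrative field size (assumption, not from the thesis)
b1, b2, b3, b4 = 1, 2, 3, 4    # stand-ins for arbitrary distinct elements of F_q

# One representative c(3) vector per integer partition of |S(3)| = 4.
forms = {
    "1+1+1+1": (b1, b2, b3, b4),
    "1+1+2":   (b1, b2, b3, b3),
    "2+2":     (b1, b1, b2, b2),
    "1+3":     (b1, b2, b2, b2),
    "4":       (b1, b1, b1, b1),
}

for partition, c in forms.items():
    parts = len(set(c))        # number of parts = number of distinct c_j(3)'s
    good = q - parts           # choices of alpha_3 that differ from every c_j(3)
    print(f"{partition:>9}: {good} good choices, probability {good}/{q} = {good / q}")

Running the sketch reproduces the last two columns of Table 4.1.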

It is evident that, with rDWS encoding, the probability of innovativeness of a picked-up coefficient for packet p_i (which we denote by P_C(i)) takes different forms with respect to T(i) and S(i). For the first and the second cases, the probability is denoted as P'_C(i) and P''_C(i) respectively. We have seen that P'_C(i) = (q−1)/q, and the probability of innovativeness for the receivers of S(i), conditioned on the integer partitions of |S(i)|, lies between (q−1)/q and (q−|S(i)|)/q. Our next goal is to deduce an exact mathematical expression of P''_C(i).

Table 4.1: The probability of innovativeness for all possible partitions of |S(3)| when |S(3)| = 4.

Form of c(3)           | Partition of |S(3)| = 4 | No. of Parts | No. of Good Choices for α_3 | Probability of Innovativeness
(β_1, β_2, β_3, β_4)   | 1+1+1+1                 | 4            | q − 4                       | (q−4)/q
(β_1, β_2, β_3, β_3)   | 1+1+2                   | 3            | q − 3                       | (q−3)/q
(β_1, β_1, β_2, β_2)   | 2+2                     | 2            | q − 2                       | (q−2)/q
(β_1, β_2, β_2, β_2)   | 1+3                     | 2            | q − 2                       | (q−2)/q
(β_1, β_1, β_1, β_1)   | 4                       | 1            | q − 1                       | (q−1)/q

4.3.1 Integer Partitions of |S(i)| and P''_C(i)


Let us construct a set M(i) that contains all the partitions of |S(i)|, where the partitions are arranged in decreasing order of their number of parts. Also, each partition is structured in decreasing order of its parts. If two partitions have the same number of parts, the partition with the larger first part is placed after the other one. If the first part is the same for two partitions, the partition with the larger second part is placed later. This procedure is continued until the tie regarding the position of the partitions is broken. Following the procedure, for |S(i)| = 7, we get M(i) = {(1+1+1+1+1+1+1), (2+1+1+1+1+1), (2+2+1+1+1), (3+1+1+1+1), (2+2+2+1), (3+2+1+1), (4+1+1+1), (3+2+2), (3+3+1), (4+2+1), (5+1+1), (4+3), (5+2), (6+1), (7)}. The total number of elements of M(i) is a function (φ) of |S(i)|, which we write as |M(i)| = φ(|S(i)|). We consider another function ψ(i, k) which gives the number of parts of the k-th element of M(i). One can see that ψ(i, 1) = |S(i)| and ψ(i, |M(i)|) = 1, for any i. Also, for a fixed S(i), the maximum value of ψ(i, k) is ψ(i, k)|_max = ψ(i, 1) = |S(i)|, and the minimum value of ψ(i, k) is ψ(i, k)|_min = ψ(i, |M(i)|) = 1.
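The construction of M(i), φ and ψ(i, k) can be checked with a short sketch under the ordering rule stated above; the function names below are ours and the code is only a verification aid.

def partitions(n, largest=None):
    """All integer partitions of n as tuples of parts in non-increasing order."""
    largest = n if largest is None else largest
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(min(n, largest), 0, -1)
            for rest in partitions(n - first, first)]

def M_set(s):
    """M(i) for |S(i)| = s: decreasing number of parts; among equal part counts,
    the lexicographically larger partition is placed later."""
    return sorted(partitions(s), key=lambda p: (-len(p), p))

def phi(s):
    return len(M_set(s))              # |M(i)| = phi(|S(i)|)

def psi(s, k):
    return len(M_set(s)[k - 1])       # number of parts of the k-th element (1-based)

print(phi(7))                         # 15
print(M_set(7))                       # matches the listing for |S(i)| = 7 above
print(psi(7, 1), psi(7, phi(7)))      # |S(i)| = 7 and 1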


We consider a random variable X_i which takes value k if the k-th element of M(i) occurs. Let us define an event I which corresponds to getting an innovative coefficient for p_i with respect to the receivers of S(i). With reference to Example 2, the probability of innovativeness subject to the partition X_i = k, i.e. Pr(I | X_i = k), is the same as (q − ψ(i, k))/q. In the last column of Table 4.1, we have tabulated all the Pr(I | X_i = k)'s for |S(i)| = 4. Now, we can express P''_C(i) as follows:

P''_C(i) = \Pr(I) = \sum_{k=1}^{|M(i)|} \Pr(I \mid X_i = k) \Pr(X_i = k)
         = \sum_{k=1}^{|M(i)|} \left( 1 - \frac{\psi(i,k)}{q} \right) \Pr(X_i = k)
         = \frac{1}{q} \sum_{k=1}^{|M(i)|} \{ q - \psi(i,k) \} \Pr(X_i = k)        (4.1)

Next, we look for the probability of occurrence of a partition of |S(i)| (which is Pr(X_i = k) for k = 1 to |M(i)|) to get a complete expression of P''_C(i) according to (4.1).

Probability of Occurrence of a Partition of |S(i)|

The probability Pr(X_i = k) depends on certain factors like the broadcast settings, the channel erasure probabilities, and the time gap between the starting time slot of broadcasting and the slot under consideration. By broadcast settings, we mean the number of receivers, the generation size, the finite field size, etc. These factors have an influence on the c_j(i) heterogeneity (the dissimilarity between the c_j(i)'s, where j = 1, 2, ..., |S(i)|), which has a direct impact on Pr(X_i = k).
If the erasure probabilities are very low and close to zero, the absolute c_j(i) heterogeneity is very low. All the receivers are close to each other in terms of progress in getting coded packets, and the corresponding witness functions (w_j(i)'s for j = 1, 2, ..., |S(i)|) are also similar. Therefore, we can say that a partition of |S(i)| with a larger number of parts is less likely to occur than a partition with fewer parts. In contrast, when erasure probabilities are high, the c_j(i) heterogeneity is also high, and a partition with a larger number of parts is more likely to occur than a partition with fewer parts. These are the two extreme situations that can happen. Any other situation in between these two cases may happen depending on the packet erasure phenomenon at the channels, and there is no explicit rule to connect a particular event X_i = k to such a situation. Hence, no perfect ordering between the Pr(X_i = k)'s for k = 1, 2, ..., |M(i)| exists.
Next, if the considered time slot is beyond a particular value from the beginning of the broadcast of a generation, the c_j(i) heterogeneity becomes prominent even for low channel erasure probabilities.
Now, for a suitable broadcast setting and moderate erasure probabilities, at a sufficiently large time slot, we focus our attention on the set S(i). If we consider a partition of |S(i)| as a state of the receivers in S(i), the path through which the receivers have reached that state is an |S(i)|-dimensional random walk, which is mathematically difficult to track [35, 80]. Therefore, finding an exact expression of Pr(X_i = k) is a daunting task. However, we perform three different analyses of Pr(X_i = k) to get some insight into the statistical understanding of P''_C(i) according to (4.1).

4.3.1.1 Maximum Entropy of X_i and P''_C(i)

Each value the random variable X_i takes corresponds to a partition of |S(i)|. From the previous discussion it is clear that mathematically tracking the certainty of a partition is tough. Nevertheless, we can find the maximum uncertainty of the partitions. From the information theoretic view, the maximum uncertainty of the partitions is the same as the maximum entropy of the discrete random variable X_i (i.e., H(X_i)). It is well-known that H(X_i) is maximum when X_i follows the uniform distribution [147], which implies Pr(X_i = k) = 1/|M(i)|. Applying this in (4.1), we get the probability of innovativeness for maximum uncertainty of the partitions of |S(i)| as,

P''_{CM}(i) = \frac{1}{q\,|M(i)|} \sum_{k=1}^{|M(i)|} \{ q - \psi(i,k) \}        (4.2)
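Equation (4.2) is straightforward to evaluate numerically. The minimal sketch below (helper names are ours; the ordering of the partitions is immaterial here since all of them receive equal weight) computes P''_CM(i) for a given q and |S(i)|.

def partitions(n, largest=None):
    """All integer partitions of n as tuples of parts in non-increasing order."""
    largest = n if largest is None else largest
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(min(n, largest), 0, -1)
            for rest in partitions(n - first, first)]

def p_cm(q, s):
    """P''_CM(i) of (4.2) for field size q and |S(i)| = s."""
    part_counts = [len(p) for p in partitions(s)]   # psi(i, k) over all partitions
    return sum(q - psi for psi in part_counts) / (q * len(part_counts))

for q in (4, 8, 16, 32, 64):
    print(q, p_cm(q, 4))        # |S(i)| = 4; at q = 16 this gives 0.85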

In Figure 4.1, we have plotted the probability of innovativeness, P''_C(i), with respect to field size (only the extension fields of GF(2) are considered) for four different considerations of the probability Pr(X_i = k) (for a fixed S(i) and k = 1, 2, ..., |M(i)|). The first consideration corresponds to the case where no perfect ordering of the Pr(X_i = k)'s exists, and we generate the Pr(X_i = k)'s for each k according to some arbitrary distribution in a Monte Carlo simulation environment. We plot the maximum and minimum values of P''_C(i) among 10^7 runs according to (4.1) and denote these two values as Random max. and Random min. in the figures. In the second case, we consider very high c_j(i) heterogeneity with

respect to one extreme situation.

[Figure 4.1: Probability of innovativeness of a picked-up coefficient, P''_C(i), versus field size, where |S(i)| = 2 (upper left subfigure), |S(i)| = 3 (upper right subfigure) and |S(i)| = 4 (lower subfigure).]

This time, the Pr(X_i = k)'s are also generated
according to some arbitrary distribution, with the consideration that a partition with a larger number of parts is more likely to occur than a partition with fewer parts. To avoid ambiguity in choosing the probabilities of two partitions with the same number of parts, we consider Pr(X_i = k) > Pr(X_i = k + 1), as the partitions are contained in M(i) in such a way that, for any two consecutive partitions, the partition corresponding to more c_j(i) heterogeneity is placed before the partition with less c_j(i) heterogeneity. Again, we plot the maximum and minimum values of P''_C(i) among 10^7 runs according to (4.1) and denote these two values as Descending max. and Descending min. in the figures. Similarly, the Ascending max. and Ascending min. points in the plots denote the maximum and minimum values of P''_C(i) for the very low c_j(i) heterogeneity consideration (with respect to the other extreme situation). Lastly, P''_{CM}(i) is plotted according to (4.2).
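A compact version of this Monte Carlo experiment is sketched below. It follows the same recipe with far fewer runs than the 10^7 used for Figure 4.1, and the way the arbitrary distributions are drawn (normalized uniform weights, sorted for the Ascending and Descending cases) is our own assumption, so the extreme values it reports are only indicative.

import random

def partitions(n, largest=None):
    largest = n if largest is None else largest
    if n == 0:
        return [()]
    return [(first,) + rest
            for first in range(min(n, largest), 0, -1)
            for rest in partitions(n - first, first)]

def ordered_psi(s):
    """psi(i, k) for k = 1..|M(i)|, with M(i) ordered as in Section 4.3.1."""
    M = sorted(partitions(s), key=lambda p: (-len(p), p))
    return [len(p) for p in M]

def prob_innovative(q, psi_list, probs):
    """P''_C(i) of (4.1) for a given distribution over the partitions."""
    return sum((q - psi) * pr for psi, pr in zip(psi_list, probs)) / q

def random_dist(size, order=None):
    """An arbitrary distribution over the partitions; order='desc' enforces
    Pr(X_i = k) > Pr(X_i = k+1), order='asc' the opposite (our assumption)."""
    w = [random.random() for _ in range(size)]
    if order == "desc":
        w.sort(reverse=True)
    elif order == "asc":
        w.sort()
    total = sum(w)
    return [x / total for x in w]

def extremes(q, s, order=None, runs=10_000):
    psi_list = ordered_psi(s)
    values = [prob_innovative(q, psi_list, random_dist(len(psi_list), order))
              for _ in range(runs)]
    return min(values), max(values)

print(extremes(16, 4, order=None))    # Random min./max.
print(extremes(16, 4, order="asc"))   # Ascending min./max.
print(extremes(16, 4, order="desc"))  # Descending min./max.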
There are three subfigures in Figure 4.1 where we have depicted the probabilities with respect to field size. From left to right, the subfigures correspond to |S(i)| = 2, |S(i)| = 3 and |S(i)| = 4 respectively. In each case, the probabilities increase as we increase the field size. Therefore, the chance of getting an innovative coefficient for p_i can be made arbitrarily close to one for a field of sufficient size. When we move from the left to the right subfigure, the c_j(i) heterogeneity increases. As a result, the P''_C(i)'s decrease from left to right for a particular field size. For instance, the P''_C(i) values corresponding to the Descending max. curve at field size 16 are 0.91, 0.88 and 0.85 (approximate values) respectively from the left to the right subfigure.
From Figure 4.1, we observe one more interesting fact. The overall range of the P''_C(i) values for the Random case is split by the Maximum entropy curve. The upper range corresponds to the P''_C(i) values of the Ascending case, whereas the lower range corresponds to the P''_C(i) values of the Descending case. This occurrence is because of the fact that maximum entropy corresponds to the uniform distribution, which implies equal probability for each partition of |S(i)|. Thus, the Maximum entropy case acts as the boundary between the Ascending and the Descending cases.


4.3.1.2 Average Understanding of Innovativeness and P''_C(i)

We have seen that the likelihood of a partition depends on the channel erasure probabilities and a few other factors. If we consider a particular partition, for instance the partition denoted by X_i = |M(i)|, only a single part is available. For a finite field of size q, the number of c(i) vectors that correspond to the partition X_i = |M(i)| is only q. But, as a whole, a total of q^{|S(i)|} vectors can result for a particular |S(i)|. To have an overall understanding of the receivers of S(i) regarding the innovativeness of the picked-up coefficient of p_i, we should look at all the q^{|S(i)|} vectors.
As discussed previously, no perfect ordering between the Pr(X_i = k)'s exists. The maximum entropy case implies maximum uncertainty of the partitions, which corresponds to a scenario where all partitions are equally probable. In this section, we assume a broadcast scenario where all of the c(i) vectors are equally probable. The analysis of the probability of innovativeness for such an assumption gives an average understanding of the innovativeness of the picked-up coefficient, irrespective of the physical relevance of the scenario.
Here, we consider a new function ξ(i, k) which gives the total number of c(i) vectors corresponding to the partition X_i = k. Using ξ(i, k), we rewrite (4.1) to get the probability of innovativeness for the uniform distribution of the c(i) vectors as,

P''_{CU}(i) = \frac{1}{q} \sum_{k=1}^{|M(i)|} \{ q - \psi(i,k) \}\, \xi(i,k)\, \frac{1}{q^{|S(i)|}}
            = \frac{1}{q^{|S(i)|+1}} \sum_{k=1}^{|M(i)|} \{ q - \psi(i,k) \}\, \xi(i,k)        (4.3)

Lemma 1. Let us consider a partition of |S(i)| as Z := |S(i)| = \sum_{v=1}^{V} a_v b_v, where part b_v is present a_v times and b_v > b_{v+1} (∀v ∈ {1, 2, ..., V − 1}). If the finite field under consideration is F_q, the total number of possible c(i) vectors corresponding to Z is given in (4.4), where A = \sum_{v=1}^{V} a_v, and ^qP_A represents the number of permutations of A elements out of q.

N_Z = \frac{{}^{q}P_{A}}{a_1!\, a_2! \cdots a_V!} \binom{|S(i)|}{b_1} \binom{|S(i)|-b_1}{b_1} \cdots \binom{|S(i)|-(a_1-1)b_1}{b_1} \cdot \binom{|S(i)|-a_1 b_1}{b_2} \binom{|S(i)|-a_1 b_1-b_2}{b_2} \cdots \binom{|S(i)|-a_1 b_1-(a_2-1)b_2}{b_2} \cdots \binom{|S(i)|-a_1 b_1-\cdots-a_{V-1} b_{V-1}}{b_V} \cdots \binom{|S(i)|-a_1 b_1-\cdots-(a_V-1)b_V}{b_V}        (4.4)

Proof. For partition Z, part b_v is present a_v times. Therefore, for each v, there are a_v groups where each group consists of b_v receivers. The first group of b_1 receivers can be chosen from |S(i)| receivers in \binom{|S(i)|}{b_1} ways. The second group of b_1 receivers can then be chosen in \binom{|S(i)|-b_1}{b_1} ways. Proceeding in this way, the last group of b_1 receivers can be chosen in \binom{|S(i)|-(a_1-1)b_1}{b_1} ways. All of these a_1 choices are independent because the broadcasting channels and the receivers are independent. Now, these a_1 choices can be permuted among themselves in a_1! ways. Therefore, the total number of available choices for the first a_1 b_1 receivers is \{\binom{|S(i)|}{b_1} \binom{|S(i)|-b_1}{b_1} \cdots \binom{|S(i)|-(a_1-1)b_1}{b_1}\} \frac{1}{a_1!}. Similar arguments can be made for the next a_2 b_2, a_3 b_3, ..., a_V b_V receivers.
Now, all receivers within a group have the same coefficient, c_j(i), in their corresponding y_j equation. The total number of groups corresponding to partition Z is A = \sum_{v=1}^{V} a_v. Therefore, the total number of different coefficients corresponding to Z is also A. These coefficients are elements of the finite field F_q. In a pool of q field elements, A elements can be permuted in ^qP_A ways. Thus, we get the total number of possible c(i) vectors corresponding to Z as given in (4.4).
Example 3. Let us consider a partition of |S(i)| = 10 as |S(i)| = 10 = \sum_{v=1}^{V=3} a_v b_v, where b_1 = 3, b_2 = 2, b_3 = 1, and a_1 = 1, a_2 = 2, a_3 = 3. If the size of the finite field under consideration is q = 8, from (4.4), we get the total number of c(i) vectors corresponding to the partition given above as 254,016,000.
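The count in Example 3 can be verified directly from (4.4); the sketch below (our own helper, using Python's math.comb and math.perm) evaluates N_Z for the stated partition and field size.

from math import comb, perm, factorial

def n_z(q, s, groups):
    """N_Z of (4.4): number of c(i) vectors for the partition of |S(i)| = s
    described by groups = [(a_1, b_1), (a_2, b_2), ...] with b_1 > b_2 > ..."""
    A = sum(a for a, _ in groups)
    count = perm(q, A)                 # qP_A: ordered choice of A distinct coefficients
    remaining = s
    for a, b in groups:
        for _ in range(a):             # choose the a groups of size b one after another
            count *= comb(remaining, b)
            remaining -= b
        count //= factorial(a)         # groups of equal size are unordered
    return count

# Example 3: |S(i)| = 10 = 1*3 + 2*2 + 3*1, q = 8
print(n_z(8, 10, [(1, 3), (2, 2), (3, 1)]))   # 254016000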

Lemma 1 provides the total number of possible c(i) vectors with respect to a partition Z that corresponds to X_i = k for some k ∈ {1, 2, ..., |M(i)|}. Therefore, we can find out ξ(i, k) for each k using Lemma 1. Next, from (4.3), we get the probability P''_{CU}(i).


4.3.1.3 A Lower Bound on P''_C(i)

In this section, we provide a lower bound on the probability P''_C(i). From (4.1), we get,

P''_C(i) \overset{(a)}{\geq} \frac{1}{q} \sum_{k=1}^{|M(i)|} \big[ q - \max_{\forall k} \{\psi(i,k)\} \big] \Pr(X_i = k)
         \overset{(b)}{=} \frac{1}{q} \sum_{k=1}^{|M(i)|} \{ q - \psi(i,1) \} \Pr(X_i = k)
         \overset{(c)}{=} \frac{1}{q} \sum_{k=1}^{|M(i)|} \{ q - |S(i)| \} \Pr(X_i = k)
         \overset{(d)}{=} \frac{q - |S(i)|}{q} \sum_{k=1}^{|M(i)|} \Pr(X_i = k)
         \overset{(e)}{=} \frac{q - |S(i)|}{q}        (4.5)

where (a) is straightforward, (b) and (c) follow from the initial discussion of Section 4.3.1, (d) is true because the term taken out of the summation is constant over k, and (e) is correct because one of the partitions must occur. Equation (4.5) provides a lower bound on P''_C(i), which we write as,

P''_{CL}(i) = \frac{q - |S(i)|}{q}        (4.6)

Intuitively, one can understand the essence of (4.5). The probability of innovativeness P''_C(i) is minimum when the partition X_i = 1 results. In this situation, the number of good choices left is minimum. The lower bound of (4.6) is important because it corresponds to finding the maximum time required for the sender to drop a packet from the SQ, which in turn characterizes the maximum time to deliver the packets of a generation to all the receivers.

4.3.1.4 Relative Comparison of P''_{CM}(i), P''_{CU}(i) and P''_{CL}(i)

In Figure 4.2, we have plotted the probability of innovativeness of a picked-up coefficient for packet p_i with respect to field size (only the extension fields of GF(2) are considered) for the maximum entropy case (P''_{CM}(i)), the average understanding case (P''_{CU}(i)), and the lower bound (P''_{CL}(i)).

[Figure 4.2: Probability of innovativeness of a picked-up coefficient (P''_{CM}(i), P''_{CU}(i) and P''_{CL}(i)) versus field size, where |S(i)| = 2 (upper left subfigure), |S(i)| = 3 (upper right subfigure) and |S(i)| = 4 (lower subfigure).]

There are three subfigures where

we have depicted the probabilities. From left to right, the subfigures correspond to |S(i)| = 2, |S(i)| = 3 and |S(i)| = 4 respectively. In each case, the probabilities increase as we increase the field size. Therefore, the chance of getting an innovative coefficient for p_i can be made arbitrarily close to one for a field of sufficiently large size. When we move from the left to the right subfigure, the c_j(i) heterogeneity increases. As a result, the P''_C(i) value decreases from left to right for a fixed field size. For instance, the P''_{CU}(i) values corresponding to the average understanding curve at field size 16 are 0.88, 0.82, 0.77 (approximate values) respectively from the left to the right subfigure.
One can observe that the P''_{CM}(i) value is greater than the P''_{CU}(i) value in the subfigures for a fixed field size. Therefore, the maximum uncertainty of the partitions of |S(i)| leads to better probabilistic performance than the average understanding scenario in terms of the innovativeness of the picked-up coefficient.


4.3.2 A Lower Bound on P_C(i)


In the last subsection, we have individually analyzed the probability of innovativeness for the receivers of T(i) and S(i). We have seen that finding an exact expression of P'_C(i) is easy. Although the same is not true for P''_C(i), we have obtained a lower bound P''_{CL}(i). In this section, our goal is to find a lower bound on P_C(i), combining the effect of T(i) and S(i) on the probability of innovativeness of the picked-up coefficient.
For i = 1, we have seen R(1) = T(1), and hence P_C(1) = (q−1)/q. For i ≠ 1, let us look at P_C(i) for the following cases:
Case 1: When |R(i)| = |S(i)| (i.e., |T(i)| = 0), we get P_C(i) ≥ (q−|R(i)|)/q (from (4.5)).
Case 2: Next, let |T(i)| = 1 and |S(i)| = |R(i)| − 1. With reference to (4.5), for the minimum value of the probability of innovativeness, all receivers in S(i) have different c_j(i)'s in their corresponding y_j equations, and none of the c_j(i)'s are zero in the worst-case scenario. The CM must not choose any of these c_j(i)'s (because of the receivers in S(i)) or the zero element of F_q (because of the receiver in T(i)) as the coefficient of p_i in the coded combination. Hence, the minimum number of available good choices with respect to the receivers in R(i) is (q − (|R(i)| − 1) − 1) = q − |R(i)|. Consequently, we get P_C(i) ≥ (q−|R(i)|)/q.
Case 3: Next, we consider |T(i)| = 2 and |S(i)| = |R(i)| − 2. With similar arguments as in Case 2, we get P_C(i) ≥ (q−|R(i)|+1)/q.
Proceeding this way, as the terminal case we obtain |R(i)| = |T(i)| (i.e., |S(i)| = 0). Here it is evident that P_C(i) = (q−1)/q.
Summarizing the above discussion, it can be concluded that, for i ≠ 1, we obtain a lower bound on the probability of innovativeness of a picked-up coefficient as P_{CL}(i) = (q−|R(i)|)/q, whereas for i = 1, the absolute probability of innovativeness is P_C(1) = (q−1)/q.
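The case analysis above can be summarized in a few lines of code; the sketch below (our own, for illustration only) computes the worst-case number of good choices for each split of |R(i)| into |T(i)| and |S(i)| and confirms that the minimum over all splits is q − |R(i)|.

def worst_case_good_choices(q, r, t):
    """Minimum number of good choices for alpha_i when |R(i)| = r and |T(i)| = t.
    Receivers of S(i) rule out at most |S(i)| distinct coefficients in the worst
    case; receivers of T(i) additionally rule out the zero element."""
    s = r - t
    forbidden = s + (1 if t > 0 else 0)
    return q - forbidden

q, r = 16, 5                              # illustrative values
for t in range(r + 1):
    print(t, worst_case_good_choices(q, r, t))
# The minimum over all splits is q - r, i.e. P_CL(i) = (q - |R(i)|)/q.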


4.4 Probability of Innovativeness of a Coded Combination
So far we have analyzed the probability of innovativeness of the picked-up coefficient corresponding to a packet which is included in the coded linear combination. Based on that analysis, our goal here is to find out the probability of innovativeness of the coded combination, to get an insight into the throughput performance of rDWS encoding.
Let us consider that the CM is making a coded linear combination p_c with only two unseen packets, p_{i1} and p_{i2}, at a particular time slot. The probability of innovativeness of the linear combination (which we denote as P_L) is the same as the joint probability of innovativeness of the coefficients of p_{i1} and p_{i2} (let the coefficients be α_{i1} and α_{i2} respectively), which we represent as P_C(i1, i2). Now, the probability of innovativeness of α_{i1} depends on the receivers of R(i1), whereas the same for α_{i2} depends on the receivers of R(i2). For any two receivers j1 and j2 (where j1 ∈ R(i1) and j2 ∈ R(i2)), the channels through which they are connected to the transmitter are independent, with reference to Section 3.3. Also, no co-operation is allowed between the broadcasting receivers. Therefore, we infer that P_C(i1, i2) = P_C(i1) · P_C(i2).
In Section 3.5.1.2, we have seen that the final linear combination p_c contains all packets of the generation in the worst-case scenario. Although the worst-case situation may not be very likely to occur, to compare rDWS with DWS from a generation-based broadcasting perspective, it is considered that p_c consists of all packets. Now, extending the previous argument for two packets (p_{i1} and p_{i2}) to the m packets of the generation, we obtain the probability of innovativeness of a linear combination as follows:

P_L = P_C(1, 2, \cdots, m) = \prod_{i=1}^{m} P_C(i)        (4.7a)
    \overset{(a)}{=} \frac{q-1}{q^m} \prod_{i=2}^{m} \sum_{k=1}^{|M(i)|} \{ q - \psi(i,k) \} \Pr(X_i = k)        (4.7b)


where (a) follows from (4.1) and the fact that P_C(1) = (q−1)/q.
We have seen in Section 3.5.1.1 that DWS is a throughput-optimal technique where each coded combination made by the CM is innovative. Here, the probability P_L expressed in (4.7b) helps in analyzing how close the rDWS encoding is to the optimal DWS encoding from the throughput efficiency perspective.

Remark 6. The worst-case scenario of DWS and rDWS encoding, where the coded combination contains all packets of a generation, is equivalent to the fact that each packet of the generation is the next unseen packet of at least one receiver. To accommodate this fact in the broadcast setting, we assume that the number of receivers is no less than the generation size (i.e. n ≥ m). We follow this assumption for the rest of our analysis on P_L and for the simulation plots (such as in Fig. 5.6).

4.4.1 A Lower Bound on P_L


In Section 4.3.2, we have found a lower bound on P_C(i). If we put the expressions of P_C(1) and P_{CL}(i) (for i ≠ 1) in place of P_C(i) in (4.7a), we obtain a lower bound on the probability P_L, which we denote as P_{LL}. This lower bound provides a deeper understanding of the least efficient throughput performance of the rDWS encoding. The expression of P_{LL} is given as,

P_{LL} = \frac{q-1}{q^m} \prod_{i=2}^{m} \{ q - |R(i)| \}        (4.8)
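For concreteness, (4.8) can be evaluated as sketched below; the function name and the numerical values of q, m, n and the |R(i)|'s are illustrative assumptions, not taken from the thesis.

def p_ll(q, r_sizes):
    """Lower bound (4.8) on the probability of innovativeness of a coded
    combination; r_sizes = [|R(2)|, ..., |R(m)|]."""
    m = len(r_sizes) + 1
    value = (q - 1) / q ** m
    for r in r_sizes:
        value *= (q - r)
    return value

# Illustrative numbers: q = 64, m = 4, n = 10, with the remaining n - 1 = 9
# receivers split as |R(2)|, |R(3)|, |R(4)| = 3, 3, 3.
print(p_ll(64, [3, 3, 3]))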

4.4.1.1 Finding the Minimum Value of P_{LL}

It can be noted from (4.8) that P_{LL} depends on the field size and on the |R(i)|'s for i = 2, 3, ..., m. Depending on these factors, if we can find the minimum value of P_{LL}, it will lead us to the worst-case behavior of the rDWS technique in terms of throughput efficiency. Therefore, in this section, we aim to find the minimum value of P_{LL}.
While searching for the minimum value of P_{LL}, we consider that each |R(i)| (i ∈ {2, 3, ..., m}) is a nonzero integer. This is to make sure that each P_C(i) has some contribution to P_{LL}. Here, one can notice that none of the |R(i)|'s (i ∈ {2, 3, ..., m}) can be more than (n − |E(m)| − |T(1)| − m + 2), as the number

of receivers is n (recall that E(m) is the set of the receivers which have seen all packets of the generation).
For fixed values of the number of receivers n, the generation size m, the field size q, and fixed |E(m)| and |T(1)|, finding the minimum value of P_{LL} is a standard optimization problem of the following form:

Minimize:    \frac{q-1}{q^m} \prod_{i=2}^{m} \{ q - |R(i)| \}
Subject to:  \sum_{i=2}^{m} |R(i)| = n - |E(m)| - |T(1)|        (4.9)

Now, inspecting all possible values of |E(m)| and |T(1)|, the maximum value of the right-hand side of the constraint (i.e. n − |E(m)| − |T(1)|) can be n − 1, attained when |E(m)| = 0 and |T(1)| = 1 (here |T(1)| cannot be zero, as we have considered that all packets of the generation are involved in the coded combination). With this consideration, we modify the optimization problem of (4.9) as follows:

Minimize:    \frac{q-1}{q^m} \prod_{i=2}^{m} \{ q - |R(i)| \}
Subject to:  \sum_{i=2}^{m} |R(i)| \leq n - 1        (4.10)

This is a nonlinear integer program, which we reframe as follows:

\min f(x) = \prod_{i=1}^{M} (q - x_i)   \quad \text{s.t.} \quad \sum_{i=1}^{M} x_i \leq N,        (4.11)

where M = m − 1, N = n − 1, x = (x_1, x_2, ..., x_M), x_i = |R(i + 1)|, and x_i ∈ {1, 2, ..., N − M + 1} for i ∈ {1, 2, ..., M}.
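For small instances, the nonlinear integer program in (4.11) can be solved by brute force, which is a convenient way to sanity-check the analytical solution derived next; the sketch below (our own, feasible only for small M and N) enumerates all integer vectors satisfying the constraint.

from itertools import product
from math import prod

def brute_force_min(q, M, N):
    """Exhaustively minimize f(x) = prod(q - x_i) over integer vectors with
    x_i >= 1 and sum(x) <= N (practical for small M and N only)."""
    best_val, best_x = None, None
    for x in product(range(1, N - M + 2), repeat=M):
        if sum(x) <= N:
            val = prod(q - xi for xi in x)
            if best_val is None or val < best_val:
                best_val, best_x = val, x
    return best_val, best_x

q, M, N = 64, 4, 9                        # illustrative values, i.e. m = 5, n = 10
val, x = brute_force_min(q, M, N)
print(val, sorted(x, reverse=True))       # representative of the minimizer

For instance, with q = 64, M = 4 and N = 9 the returned minimizer has representative (6, 1, 1, 1), i.e. (N − M + 1, 1, ..., 1), matching the analytical result obtained below.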

4.4.1.1.1 Solution of the Nonlinear Integer Program   To solve the integer program, we first look into the following consideration. The objective function f(x) has the same value for two vectors x_1 and x_2 if they are just element-wise permutations of one another. We call x_1 and x_2 equivalent vectors. As an example, the vectors (3, 1, 2), (3, 2, 1), (1, 3, 2), (2, 3, 1), (2, 1, 3) and (1, 2, 3) form a group of equivalent vectors in the three-dimensional space. In a group of equivalent vectors, the vector whose elements are ordered in a non-increasing order from the first to the last element is denoted as the representative of that group. Thus, (3, 2, 1) is the representative of the previously mentioned vector group.
Now, we solve the nonlinear programming problem using a method which is quite similar to the famous branch and bound method [148]. In order to do that, a tree is constructed with only the representatives of each possible equivalent vector group of an M-dimensional space, which is depicted in Figure 4.3. The starting node of the tree is x = (1, 1, ..., 1), because x_i ∈ {1, 2, ..., N − M + 1} for each i, and the lower bound on the constraint is \sum_{i=1}^{M} x_i = M. The second level of the tree corresponds to the constraint \sum_{i=1}^{M} x_i = M + 1, and only one node, corresponding to the vector (2, 1, 1, ..., 1), is possible in this level. Therefore, the node (2, 1, ..., 1) is the only child of the node (1, 1, ..., 1). In general, a node x = (x_1, x_2, ..., x_M) at level h (which corresponds to the constraint \sum_{i=1}^{M} x_i = M + h − 1) can have at most M children at level h + 1, which are (x_1 + 1, x_2, ..., x_M), (x_1, x_2 + 1, ..., x_M), ..., (x_1, x_2, ..., x_M + 1). But the number of children is much smaller than M because we only keep the representative vectors. One can notice that the leaf nodes of the tree correspond to the constraint \sum_{i=1}^{M} x_i = N.
Next, we look at the positioning of the children of a particular parent node. The child node whose first component is the largest is going to be the leftmost child. If the first component is the same for two or more children, the child whose second component is the largest will be the leftmost child. This procedure continues until the leftmost child is decided. Once the leftmost position is fixed, we proceed to the second leftmost position, and the same rule is followed. We iteratively apply the rule to get the positions of all children of a parent node at each level and eventually obtain the complete tree structure as in Figure 4.3.
Now, our goal is to find the node or the nodes for which the cost of the objective function f(x) in (4.11) is minimum.
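The child-generation and ordering rules just described can be expressed compactly; the sketch below (our own, and ignoring the upper limit x_i ≤ N − M + 1 on the components for brevity) produces the representative children of a representative node, leftmost first.

def children(node):
    """Representative children of a representative node in the tree of
    Section 4.4.1.1.1: increment one component, re-sort in non-increasing
    order, and keep each distinct representative once."""
    kids = set()
    for i in range(len(node)):
        child = list(node)
        child[i] += 1
        kids.add(tuple(sorted(child, reverse=True)))
    # leftmost child first: larger first component (then second, ...) comes earlier
    return sorted(kids, reverse=True)

print(children((1, 1, 1, 1)))   # [(2, 1, 1, 1)]
print(children((2, 1, 1, 1)))   # [(3, 1, 1, 1), (2, 2, 1, 1)]
print(children((2, 2, 1, 1)))   # [(3, 2, 1, 1), (2, 2, 2, 1)]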

[Figure 4.3: Tree corresponding to Section 4.4.1.1.1, where representative vectors represent nodes. The constraint corresponding to a level of the tree (\sum_{i=1}^{M} x_i = M + h − 1 at level h) is written at the right; the leaf nodes correspond to \sum_{i=1}^{M} x_i = N.]

Lemma 2. The cost of f(x) associated with a node at level h is greater than the cost of f(x) corresponding to any of its children.

Proof. Let the cost of f(x) corresponding to a node at level h be W_h. We arbitrarily choose one of its children (which are at level h + 1) and consider the associated cost as W_{h+1}. The relation between W_h and W_{h+1} is,

W_{h+1} = \frac{q - x - 1}{q - x} W_h        (4.12)

where x is the value of the component of the parent node that is incremented by one to obtain the chosen child. Depending on the position of the parent node at level h and the position of the considered child at level h + 1, x can be an element of the set {1, 2, ..., N − M}. As the field size is greater than or equal to the number of receivers (the minimum field size requirement, Section 3.5.1.2.2), we get q > N. With reference to Remark 6, we see that N ≥ M. As the generation size m must be at least 2, we get M ≥ 1. Considering these facts, it is evident that the value of (q − x − 1) is always positive. Now, from (4.12) we get W_{h+1} < W_h.


Lemma 3. The node at level h corresponding to the vector (h, 1, ..., 1) is the leftmost node at that level, and it has a child which is associated with the vector (h + 1, 1, ..., 1) at level h + 1.

Proof. The proof is trivial, and we omit it.

Proposition 1. Starting from level 1 of the tree in Figure 4.3, we obtain the following order:

f(x)|_{x=(1,1,\cdots,1)} > f(x)|_{x=(2,1,\cdots,1)} > f(x)|_{x=(3,1,\cdots,1)} > \cdots > f(x)|_{x=(N-M+1,1,\cdots,1)}        (4.13)

Proof. This directly follows from Lemmas 2 and 3.


Proposition 2. The leftmost node at level h corresponds to the minimum cost of the objective function f(x) at that level.

Proof. We prove this by induction on the level h.
Basis step: For levels 1 and 2, a single node is present at each level. So, the proposition is true by default for these levels. At level 3, there are two nodes, which are associated with the vectors (3, 1, 1, ..., 1) and (2, 2, 1, ..., 1) respectively. The cost function associated with the first vector is f(x)|_{x=(3,1,\cdots,1)} = (q − 3)(q − 1)(q − 1)^{M−2} = (q^2 − 4q + 3)(q − 1)^{M−2}, whereas the same for the second vector is f(x)|_{x=(2,2,\cdots,1)} = (q − 2)(q − 2)(q − 1)^{M−2} = (q^2 − 4q + 4)(q − 1)^{M−2}. Clearly, f(x)|_{x=(3,1,1,\cdots,1)} < f(x)|_{x=(2,2,1,\cdots,1)}. From Lemma 3, the vector (3, 1, 1, ..., 1) is associated with the leftmost node at level 3. So, the base case is true.
Induction hypothesis: Let, at level h = k (k > 3), the cost of f(x) for the vector (k, 1, ..., 1) (which is the leftmost node at level k according to Lemma 3) be W'_k. Now, we choose an arbitrary node at level k which is different from the leftmost node and consider its associated cost as W''_k. Let us assume W'_k < W''_k.
Inductive step: If we choose the cost associated with the leftmost node at level k + 1 as W'_{k+1}, then from Lemma 3 and the tree construction rule (here x' = k, since the leftmost node at level k is (k, 1, ..., 1) and its leftmost child is (k + 1, 1, ..., 1)) we get,

W'_{k+1} = \frac{q - x' - 1}{q - x'} W'_k
or,  W'_k = \frac{q - k}{q - k - 1} W'_{k+1}        (4.14)


Now, we have to choose an arbitrary node at level k + 1 which is different from the leftmost node. Without any loss of generality, here we choose a node which is a child of the arbitrarily chosen node (with cost W''_k) of the induction hypothesis step. Let the cost of the selected child node at level k + 1 be W''_{k+1}. The relation between W''_k and W''_{k+1} is,

W''_k = \frac{q - x''}{q - x'' - 1} W''_{k+1}        (4.15)

where x'' < k according to the tree construction rule. Now, from the induction hypothesis,

W'_k < W''_k
or,  \frac{q - k}{q - k - 1} W'_{k+1} < \frac{q - x''}{q - x'' - 1} W''_{k+1}
or,  \frac{W'_{k+1}}{W''_{k+1}} < \frac{q^2 - (k + x'' + 1)q + kx'' + x''}{q^2 - (k + x'' + 1)q + kx'' + k} < 1
or,  W'_{k+1} < W''_{k+1}        (4.16)

where the last inequality in the third line holds since x'' < k.

Theorem 3. The node with the minimum cost of the objective function f(x) = \prod_{i=1}^{M} (q - x_i), with respect to the constraint \sum_{i=1}^{M} x_i \leq N, corresponds to the vector x̃ = (N − M + 1, 1, 1, ..., 1), where q is the field size, m = M + 1 is the generation size, n = N + 1 is the number of receivers, x = (x_1, x_2, ..., x_M), and x_i ∈ {1, 2, ..., N − M + 1} for i ∈ {1, 2, ..., M}.

Proof. We can prove the theorem using Proposition 1 and Proposition 2.

Here we use the idea of equivalent vectors along with Theorem 3 and get the
solution of the integer program in (4.11) as given in the following corollary:

Corollary 3. The set of the optimal points of the nonlinear integer program in
(4.11) is given as, Q = {x∗ | x∗ is a vector obtained by permuting the elements
of the vector x̃ = (N − M + 1, 1, · · · , 1)}.


4.4.1.1.2 Expression of P_{LL}|_{min} and its Analysis   With the help of Corollary 3 and (4.10), we get the minimum value of P_{LL} as,

P_{LL}|_{min} = (q - n + m - 1) \frac{(q-1)^{m-1}}{q^m}        (4.17)

We plot the minimum value of the lower bound, P_{LL}|_{min}, according to (4.17) in Figure 4.4 with respect to field size. Just like in the previous plots of this chapter, only the extension fields of GF(2) are considered here. In the left subfigure, the total number of receivers (n) is fixed at 15 while we vary the generation size (m) from 3 to 15 with an interval of 3. Similarly, in the right subfigure, m is fixed at 3, and n is changed from 3 to 15 with an interval of 3. The left graph shows that, for a fixed group of receivers, the lower bound improves (although only slightly) with the increase of generation size for a fixed field. So, a larger generation size can be beneficial from the P_{LL} perspective. But a large generation size will increase the encoding complexity. Hence, generation size optimization is required according to the application in which the rDWS scheme is incorporated. In contrast, the right graph shows that the lower bound deteriorates significantly with the increase of the number of receivers for a particular field size. This happens because the c_j(i) heterogeneity increases with the number of receivers.

[Figure 4.4: Minimum value of the lower bound of the probability of innovativeness of a linear combination (P_{LL}|_{min}) versus field size. For the left subfigure plots, we have considered n = 15 and m = 3, 6, 9, 12, 15, and for the right subfigure plots, we have considered m = 3 and n = 3, 6, 9, 12, 15.]


The second and most important observation from Figure 4.4 is that P_{LL}|_{min} gradually tends to one in each plot of the graphs as the field size is increased from 16 to 256. As P_{LL}|_{min} denotes the minimum value of the lower bound on the probability of innovativeness of a coded packet, getting an innovative combination with rDWS encoding eventually becomes certain with the increase of field size.

4.5 Conclusion
If we look back at the expression of P_{LL}|_{min} in (4.17), we see that the probability tends to one with the increase of field size. So, the statement of the last paragraph is true in general (i.e., for any generation size and any number of receivers). Here we provide an alternate statement of the same observation: as P_{LL}|_{min} denotes the worst-case throughput performance of the rDWS scheme, maximum utilization of the bandwidth is possible with rDWS for a finite field of sufficient size, where each coded packet brings new information to a receiver if the corresponding channel is not in erasure.
In summary, the analyses of this chapter lead us to the fact that, apart from the computational complexity benefit of our proposed rDWS technique, its encoding achieves near-optimal performance for a finite field of sufficient size. With reference to [7], one can infer that the performance of rDWS encoding is identical to that of conventional random linear network coding (RLNC) from the throughput efficiency perspective, although the two are not exactly the same.

References

[1] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network information flow,” IEEE
Transactions on Information Theory, vol. 46, no. 4, pp. 1204–1216, July 2000.
[2] R. Koetter and M. Medard, “An algebraic approach to network coding,” IEEE/ACM
Transactions on Networking, vol. 11, no. 5, pp. 782–795, Oct 2003.
[3] A. R. Lehman and E. Lehman, “Complexity classification of network information flow
problems,” in Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete al-
gorithms, 2004, pp. 142–150.
[4] R. Dougherty, C. Freiling, and K. Zeger, “Insufficiency of linear coding in network infor-
mation flow,” IEEE Transactions on Information Theory, vol. 51, no. 8, pp. 2745–2759,
Aug 2005.
[5] S.-Y. R. Li, R. W. Yeung, and N. Cai, “Linear network coding,” IEEE Transactions on
Information Theory, vol. 49, no. 2, pp. 371–381, Feb 2003.
[6] T. Ho, R. Koetter, M. Medard, D. Karger, and M. Effros, “The benefits of coding over
routing in a randomized setting,” in IEEE International Symposium on Information The-
ory, 2003. Proceedings., 2003, pp. 442–.
[7] T. Ho, M. Medard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, “A random
linear network coding approach to multicast,” IEEE Transactions on Information Theory,
vol. 52, no. 10, pp. 4413–4430, Oct 2006.
[8] S. Jaggi, P. Chou, and K. Jain, “Low complexity algebraic multicast network codes,”
in IEEE International Symposium on Information Theory, 2003. Proceedings., 2003, pp.
368–368.
[9] P. Sanders, S. Egner, and L. Tolhuizen, “Polynomial time algorithms for network informa-
tion flow,” in Proceedings of the fifteenth annual ACM symposium on Parallel algorithms
and architectures, 2003, pp. 286–294.
[10] S. Jaggi, P. Sanders, P. Chou, M. Effros, S. Egner, K. Jain, and L. Tolhuizen, “Poly-
nomial time algorithms for multicast network code construction,” IEEE Transactions on
Information Theory, vol. 51, no. 6, pp. 1973–1982, 2005.
[11] N. J. A. Harvey, “Deterministic network coding by matrix completion,” Ph.D. dissertation,
Massachusetts Institute of Technology, 2005.
[12] C. Chekuri, C. Fragouli, and E. Soljanin, “On average throughput and alphabet size in
network coding,” IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2410–2424,
2006.

[13] M. Médard, M. Effros, D. Karger, and T. Ho, “On coding for non-multicast networks,” in
Proceedings of the Annual Allerton Conference on Communication Control and Computing,
vol. 41, no. 1. The University; 1998, 2003, pp. 21–29.
[14] S. Riis, “Linear versus non-linear boolean functions in network flow,” Proc. CISS’04, 2004.
[15] J. Ebrahimi and C. Fragouli, “Vector network coding algorithms,” in 2010 IEEE Interna-
tional Symposium on Information Theory, June 2010, pp. 2408–2412.
[16] ——, “Algebraic algorithms for vector network coding,” IEEE Transactions on Informa-
tion Theory, vol. 57, no. 2, pp. 996–1007, Feb 2011.
[17] C. Fragouli, J.-Y. Le Boudec, and J. Widmer, “Network coding: an instant primer,” ACM
SIGCOMM Computer Communication Review, vol. 36, no. 1, pp. 63–68, 2006.
[18] R. W. Yeung, Information theory and network coding. Springer Science & Business Media,
2008.
[19] R. Bassoli, H. Marques, J. Rodriguez, K. W. Shum, and R. Tafazolli, “Network coding
theory: A survey,” IEEE Communications Surveys Tutorials, vol. 15, no. 4, pp. 1950–1978,
2013.
[20] D. Lun, M. Medard, and R. Koetter, “Network coding for efficient wireless unicast,” in
2006 International Zurich Seminar on Communications, 2006, pp. 74–77.
[21] D. Nguyen, T. Tran, T. Nguyen, and B. Bose, “Wireless broadcast using network coding,”
IEEE Transactions on Vehicular Technology, vol. 58, no. 2, pp. 914–925, 2009.
[22] Z. Li and B. Li, “Network coding in undirected networks.” Citeseer, 2004.
[23] C. Chekuri, C. Fragouli, and E. Soljanin, “On average throughput and alphabet size in
network coding,” IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2410–2424,
2006.
[24] D. S. Lun, M. Médard, R. Koetter, and M. Effros, “On coding for reliable communication
over packet networks,” Physical Communication, vol. 1, no. 1, pp. 3–20, 2008.
[25] D. S. Lun, “Efficient operation of coded packet networks,” Ph.D. dissertation, Mas-
sachusetts Institute of Technology, 2006.
[26] N. Cai and R. Yeung, “Network coding and error correction,” in Proceedings of the IEEE
Information Theory Workshop, 2002, pp. 119–122.
[27] C. Fragouli and E. Soljanin, “Network coding applications,” Foundations and Trends® in
Networking, vol. 2, 01 2007.
[28] Y. Lin, B. Li, and B. Liang, “Efficient network coded data transmissions in disruption
tolerant networks,” in IEEE INFOCOM 2008 - The 27th Conference on Computer Com-
munications, 2008, pp. 1508–1516.
[29] Y. Wu, P. Chou, and S.-Y. Kung, “Minimum-energy multicast in mobile ad hoc networks
using network coding,” IEEE Transactions on Communications, vol. 53, no. 11, pp. 1906–
1918, 2005.
[30] C. Fragouli, J. Widmer, and J.-Y. Le Boudec, “A network coding approach to energy
efficient broadcasting: From theory to practice,” in Proceedings IEEE INFOCOM 2006.
25TH IEEE International Conference on Computer Communications, 2006, pp. 1–11.

[31] D. E. Lucani, M. Stojanovic, and M. Medard, “Random linear network coding for time
division duplexing: Energy analysis,” in 2009 IEEE International Conference on Commu-
nications, 2009, pp. 1–5.
[32] N. Cai and R. Yeung, “Secure network coding,” in Proceedings IEEE International Sym-
posium on Information Theory,, 2002, pp. 323–.
[33] T. Ho, B. Leong, R. Koetter, M. Medard, M. Effros, and D. Karger, “Byzantine modifica-
tion detection in multicast networks using randomized network coding,” in International
Symposium onInformation Theory, 2004. ISIT 2004. Proceedings., 2004, pp. 144–.
[34] J. K. Sundararajan, P. Sadeghi, and M. Medard, “A feedback-based adaptive broadcast
coding scheme for reducing in-order delivery delay,” in 2009 Workshop on Network Coding,
Theory, and Applications, June 2009, pp. 1–6.
[35] J. Barros, R. A. Costa, D. Munaretto, and J. Widmer, “Effective delay control in online
network coding,” in IEEE INFOCOM 2009, April 2009, pp. 208–216.
[36] T. Ho, B. Leong, M. Medard, R. Koetter, Yu-Han Chang, and M. Effros, “On the utility of
network coding in dynamic environments,” in International Workshop on Wireless Ad-Hoc
Networks, 2004., May 2004, pp. 196–200.
[37] J. Widmer and J.-Y. Le Boudec, “Network coding for efficient communication in extreme
networks,” in Proceedings of the 2005 ACM SIGCOMM workshop on Delay-tolerant net-
working. ACM, 2005, pp. 284–291.
[38] D. E. Lucani, M. Stojanovic, and M. Medard, “Random linear network coding for time
division duplexing: When to stop talking and start listening,” in IEEE INFOCOM 2009,
April 2009, pp. 1800–1808.
[39] G. Joshi, Y. Liu, and E. Soljanin, “On the delay-storage trade-off in content download from
coded distributed storage systems,” IEEE Journal on Selected Areas in Communications,
vol. 32, no. 5, pp. 989–997, May 2014.
[40] M. Sipos, J. Gahm, N. Venkat, and D. Oran, “Erasure coded storage on a changing
network: The untold story,” in 2016 IEEE Global Communications Conference (GLOBE-
COM), Dec 2016, pp. 1–6.
[41] C. Gkantsidis and P. R. Rodriguez, “Network coding for large scale content distribution,”
in Proceedings IEEE 24th Annual Joint Conference of the IEEE Computer and Commu-
nications Societies., vol. 4, March 2005, pp. 2235–2245 vol. 4.
[42] N. Cai and R. W. Yeung, “Secure network coding,” in Proceedings IEEE International
Symposium on Information Theory,, June 2002, pp. 323–.
[43] P. Ostovari, A. Khreishah, and J. Wu, “Cache content placement using triangular network
coding,” in 2013 IEEE Wireless Communications and Networking Conference (WCNC),
2013, pp. 1375–1380.
[44] D. Koutsonikolas, Y. C. Hu, C.-C. Wang, M. Comer, and A. M. S. Mohamed, “Efficient
online wifi delivery of layered-coding media using inter-layer network coding,” in 2011 31st
International Conference on Distributed Computing Systems, 2011, pp. 237–247.
[45] Q. Zhang, Z. Jin, Z. Zhang, and Y. Shu, “Network coding for applications in the delay
tolerant network (dtn),” in 2009 Fifth International Conference on Mobile Ad-hoc and
Sensor Networks, 2009, pp. 376–380.

[46] S. Farrell and V. Cahill, Delay-and disruption-tolerant networking. Artech House, Inc.,
2006.
[47] Y. Wu, W. Liu, S. Wang, W. Guo, and X. Chu, “Network coding in device-to-device (d2d)
communications underlaying cellular networks,” in 2015 IEEE International Conference
on Communications (ICC), 2015, pp. 2072–2077.
[48] A. Douik, S. Sorour, T. Y. Al-Naffouri, and M. Alouini, “Instantly decodable network
coding: From centralized to device-to-device communications,” IEEE Communications
Surveys Tutorials, vol. 19, no. 2, pp. 1201–1224, 2017.
[49] T. Matsuda, T. Noguchi, and T. Takine, “Survey of network coding and its applications,”
IEICE transactions on communications, vol. 94, no. 3, pp. 698–717, 2011.
[50] J. Gui, V. Shah-Mansouri, and V. W. S. Wong, “A linear algebraic approach for loss tomog-
raphy in mesh topologies using network coding,” in 2010 IEEE International Conference
on Communications, 2010, pp. 1–6.
[51] H. Yao, S. Jaggi, and M. Chen, “Network coding tomography for network failures,” in
2010 Proceedings IEEE INFOCOM, 2010, pp. 1–5.
[52] C. Fragiadakis, G. S. Paschos, and L. Tassiulas, “On the wireless network coding with
overhearing and index coding,” in 2013 IEEE 18th International Workshop on Computer
Aided Modeling and Design of Communication Links and Networks (CAMAD), 2013, pp.
23–27.
[53] A. Papadopoulos and L. Georgiadis, “Broadcast erasure channel with feedback and mes-
sage side information, and related index coding result,” IEEE Transactions on Information
Theory, vol. 63, no. 5, pp. 3161–3180, 2017.
[54] A. G. Dimakis, K. Ramchandran, Y. Wu, and C. Suh, “A survey on network codes for
distributed storage,” Proceedings of the IEEE, vol. 99, no. 3, pp. 476–489, 2011.
[55] M. Luby, “LT codes,” in The 43rd Annual IEEE Symposium on Foundations of Computer
Science, 2002. Proceedings., Nov 2002, pp. 271–280.
[56] A. Shokrollahi, “Raptor codes,” IEEE Transactions on Information Theory, vol. 52, no. 6,
pp. 2551–2567, June 2006.
[57] J. Byers, M. Luby, and M. Mitzenmacher, “A digital fountain approach to asynchronous
reliable multicast,” IEEE Journal on Selected Areas in Communications, vol. 20, no. 8,
pp. 1528–1540, 2002.
[58] P. Pakzad, C. Fragouli, and A. Shokrollahi, “Coding schemes for line networks,” in Pro-
ceedings. International Symposium on Information Theory, 2005. ISIT 2005., 2005, pp.
1853–1857.
[59] A. Dana, R. Gowaikar, R. Palanki, B. Hassibi, and M. Effros, “Capacity of wireless erasure
networks,” IEEE Transactions on Information Theory, vol. 52, no. 3, pp. 789–804, 2006.
[60] J. Qureshi, C. H. Foh, and J. Cai, “Optimal solution for the index coding problem using
network coding over GF(2),” in 2012 9th Annual IEEE Communications Society Conference
on Sensor, Mesh and Ad Hoc Communications and Networks (SECON), June 2012, pp.
209–217.
[61] K. Chi, X. Jiang, and S. Horiguchi, “Network coding-based reliable multicast in wireless
networks,” Computer Networks, vol. 54, no. 11, pp. 1823–1836, 2010.
[62] X. Xu, Y. L. Guan, and Y. Zeng, “Batched network coding with adaptive recoding
for multi-hop erasure channels with memory,” IEEE Transactions on Communications,
vol. 66, no. 3, pp. 1042–1052, 2018.
[63] J. Cloud, D. Leith, and M. Médard, “A coded generalization of selective repeat ARQ,” in
2015 IEEE Conference on Computer Communications (INFOCOM), 2015, pp. 2155–2163.
[64] C. Chang and C. Wang, “A new capacity-approaching scheme for general 1-to-K broad-
cast packet erasure channels with ACK/NACK,” IEEE Transactions on Information Theory,
vol. 66, no. 5, pp. 3000–3025, 2020.
[65] D. P. Bertsekas, R. G. Gallager, and P. Humblet, Data networks. Prentice-Hall Interna-
tional New Jersey, 1992, vol. 2.
[66] T. Ho and H. Viswanathan, “Dynamic algorithms for multicast with intra-session net-
work coding,” in Proc. 43rd Annual Allerton Conference on Communication, Control, and
Computing, Sept 2005.
[67] C. Fragouli, D. Lun, M. Medard, and P. Pakzad, “On feedback for network coding,” in 2007
41st Annual Conference on Information Sciences and Systems, March 2007, pp. 248–252.
[68] A. Eryilmaz and D. S. Lun, “Control for inter-session network coding,” in Proc. Workshop
on Network Coding, Theory & Applications. Citeseer, 2007, pp. 1–9.
[69] J. K. Sundararajan, D. Shah, and M. Medard, “ARQ for network coding,” in 2008 IEEE
International Symposium on Information Theory, July 2008, pp. 1651–1655.
[70] M. Yu, P. Sadeghi, and A. Sprintson, “Feedback-assisted random linear network coding in
wireless broadcast,” in 2016 IEEE Globecom Workshops (GC Wkshps), 2016, pp. 1–6.
[71] E. Skevakis and I. Lambadaris, “Decoding and file transfer delay balancing in network
coding broadcast,” in 2016 IEEE International Conference on Communications (ICC),
2016, pp. 1–7.
[72] C. W. Sung, K. W. Shum, L. Huang, and H. Y. Kwan, “Linear network coding for erasure
broadcast channel with feedback: Complexity and algorithms,” IEEE Transactions on
Information Theory, vol. 62, no. 5, pp. 2493–2503, 2016.
[73] J. K. Sundararajan, D. Shah, M. Médard, and P. Sadeghi, “Feedback-based online network
coding,” IEEE Transactions on Information Theory, vol. 63, no. 10, pp. 6628–6649, Oct
2017.
[74] S. Athanasiadou, M. Gatzianas, L. Georgiadis, and L. Tassiulas, “Stable XOR-based policies
for the broadcast erasure channel with feedback,” IEEE/ACM Transactions on Network-
ing, vol. 24, no. 1, pp. 476–491, 2016.
[75] M. Yu, A. Sprintson, and P. Sadeghi, “On minimizing the average packet decoding delay in
wireless network coded broadcast,” in 2015 International Symposium on Network Coding
(NetCod), 2015, pp. 1–5.
[76] P. Sadeghi, D. Traskov, and R. Koetter, “Adaptive network coding for broadcast channels,”
in 2009 Workshop on Network Coding, Theory, and Applications, June 2009, pp. 80–85.
[77] Y. E. Sagduyu, L. Georgiadis, L. Tassiulas, and A. Ephremides, “Capacity and stable
throughput regions for the broadcast erasure channel with feedback: An unusual union,”
IEEE Transactions on Information Theory, vol. 59, no. 5, pp. 2841–2862, May 2013.
[78] L. Yang, Y. E. Sagduyu, J. Zhang, and J. H. Li, “Deadline-aware scheduling with adaptive
network coding for real-time traffic,” IEEE/ACM Transactions on Networking, vol. 23,
no. 5, pp. 1430–1443, 2015.
[79] N. Aboutorab and P. Sadeghi, “Instantly decodable network coding for completion time
or decoding delay reduction in cooperative data exchange systems,” IEEE Transactions
on Vehicular Technology, vol. 65, no. 3, pp. 1212–1228, 2016.
[80] A. Fu, P. Sadeghi, and M. Médard, “Delivery delay analysis of network coded wireless
broadcast schemes,” in 2012 IEEE Wireless Communications and Networking Conference
(WCNC), April 2012, pp. 2236–2241.
[81] ——, “Dynamic rate adaptation for improved throughput and delay in wireless network
coded broadcast,” IEEE/ACM Transactions on Networking, vol. 22, no. 6, pp. 1715–1728,
Dec 2014.
[82] F. Wu, Y. Sun, Y. Yang, K. Srinivasan, and N. B. Shroff, “Constant-delay and constant-
feedback moving window network coding for wireless multicast: Design and asymptotic
analysis,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 2, pp. 127–140,
Feb 2015.
[83] F. Wu, Y. Sun, L. Chen, J. Xu, K. Srinivasan, and N. B. Shroff, “High throughput low
delay wireless multicast via multi-channel moving window codes,” in IEEE INFOCOM
2018 - IEEE Conference on Computer Communications, April 2018, pp. 2267–2275.
[84] M. A. Zarrabian, F. S. Tabataba, S. Molavipour, and P. Sadeghi, “Multirate packet deliv-
ery in heterogeneous broadcast networks,” IEEE Transactions on Vehicular Technology,
vol. 68, no. 10, pp. 10134–10144, 2019.
[85] P. Garrido, D. J. Leith, and R. Agüero, “Joint scheduling and coding for low in-order
delivery delay over lossy paths with delayed feedback,” IEEE/ACM Transactions on Net-
working, vol. 27, no. 5, pp. 1987–2000, 2019.
[86] A. Cohen, D. Malak, V. B. Bracha, and M. Médard, “Adaptive causal network coding
with feedback,” IEEE Transactions on Communications, vol. 68, no. 7, pp. 4325–4341,
2020.
[87] R. Torre, C. C. Sala, S. Pandi, H. Salah, G. T. Nguyen, and F. H. P. Fitzek, “A ran-
dom linear network coded HARQ solution for lossy and high-jitter wireless networks,” in
GLOBECOM 2020 - 2020 IEEE Global Communications Conference, 2020, pp. 1–6.
[88] P. Sadeghi, M. Yu, and N. Aboutorab, “On throughput-delay tradeoff of network coding
for wireless communications,” in 2014 International Symposium on Information Theory
and its Applications, 2014, pp. 689–693.
[89] M. Yu, P. Sadeghi, and A. Sprintson, “Feedback-assisted random linear network coding in
wireless broadcast,” in 2016 IEEE Globecom Workshops (GC Wkshps), 2016, pp. 1–6.
[90] A. Le, A. S. Tehrani, A. Dimakis, and A. Markopoulou, “Recovery of packet losses in
wireless broadcast for real-time applications,” IEEE/ACM Transactions on Networking,
vol. 25, no. 2, pp. 676–689, 2017.
[91] A. Douik, S. Sorour, T. Y. Al-Naffouri, and M. Alouini, “Instantly decodable network
coding: From centralized to device-to-device communications,” IEEE Communications
Surveys & Tutorials, vol. 19, no. 2, pp. 1201–1224, 2017.
[92] D. Nguyen, T. Tran, T. Nguyen, and B. Bose, “Wireless broadcast using network coding,”
IEEE Transactions on Vehicular Technology, vol. 58, no. 2, pp. 914–925, 2009.
[93] P. Larsson and N. Johansson, “Multi-user ARQ,” in 2006 IEEE 63rd Vehicular Technology
Conference, vol. 4, 2006, pp. 2052–2057.
[94] J. Lacan and E. Lochin, “On-the-fly coding to enable full reliability without retransmis-
sion,” arXiv preprint arXiv:0809.4576, 2008.
[95] Y. E. Sagduyu and A. Ephremides, “On network coding for stable multicast communica-
tion,” in MILCOM 2007 - IEEE Military Communications Conference, 2007, pp. 1–7.
[96] P. Larsson, “Multicast multiuser ARQ,” in 2008 IEEE Wireless Communications and Net-
working Conference, March 2008, pp. 1985–1990.
[97] T. Ho and H. Viswanathan, “Dynamic algorithms for multicast with intra-session network
coding,” IEEE Transactions on Information Theory, vol. 55, no. 2, pp. 797–815, Feb 2009.
[98] S. Wunderlich, F. Gabriel, S. Pandi, F. H. P. Fitzek, and M. Reisslein, “Caterpillar RLNC
(CRLNC): A practical finite sliding window RLNC approach,” IEEE Access, vol. 5, pp.
20183–20197, 2017.
[99] S. Wunderlich, F. Gabriel, S. Pandi, and F. H. Fitzek, “We don’t need no generation - a
practical approach to sliding window RLNC,” in 2017 Wireless Days, 2017, pp. 218–223.
[100] F. Gabriel, A. K. Chorppath, I. Tsokalo, and F. H. P. Fitzek, “Multipath communication
with finite sliding window network coding for ultra-reliability and low latency,” in 2018
IEEE International Conference on Communications Workshops (ICC Workshops), 2018,
pp. 1–6.
[101] F. Gabriel, G. T. Nguyen, R.-S. Schmoll, J. A. Cabrera, M. Muehleisen, and F. H. Fitzek,
“Practical deployment of network coding for real-time applications in 5G networks,” in 2018
15th IEEE Annual Consumer Communications & Networking Conference (CCNC), 2018, pp.
1–2.
[102] E. Tasdemir, J. A. Cabrera, F. Gabriel, D. You, and F. H. P. Fitzek, “Sliding window RLNC
on multi-hop communication for low latency,” in 2021 IEEE 93rd Vehicular Technology
Conference (VTC2021-Spring), 2021, pp. 1–6.
[103] S. Giri and R. Roy, “On NACK-based rDWS algorithm for network coded broadcast,” Entropy,
vol. 21, no. 9, p. 905, 2019.
[104] S. Giri and R. Roy, “Packet dropping and decoding statistics for linear network coded
broadcast with feedback,” in 2018 IEEE 43rd Conference on Local Computer Networks
Workshops (LCN Workshops), Oct 2018, pp. 70–77.
[105] ——, “Performance analysis of feedback-based network-coded systems for broadcast,”
IEEE Transactions on Vehicular Technology, vol. 70, no. 3, pp. 2544–2560, 2021.
[106] S. Giri and R. Roy, “Feedback-based network coding for broadcast: Queueing analysis,”
IEEE Communications Letters (Early Access Article), 2021.
[107] Y. Li, E. Soljanin, and P. Spasojevic, “Effects of the generation size and overlap on
throughput and complexity in randomized linear network coding,” IEEE Transactions
on Information Theory, vol. 57, no. 2, pp. 1111–1123, Feb 2011.
[108] O. Trullols-Cruces, J. M. Barcelo-Ordinas, and M. Fiore, “Exact decoding probability
under random linear network coding,” IEEE Communications Letters, vol. 15, no. 1, pp.
67–69, January 2011.
[109] D. E. Lucani, M. Medard, and M. Stojanovic, “On coding for delay—network coding for
time-division duplexing,” IEEE Transactions on Information Theory, vol. 58, no. 4, pp.
2330–2348, April 2012.
[110] G. Cocco, T. de Cola, and M. Berioli, “Performance analysis of queueing systems with sys-
tematic packet-level coding,” in 2015 IEEE International Conference on Communications
(ICC), June 2015, pp. 4524–4529.
[111] I. Chatzigeorgiou and A. Tassi, “Decoding delay performance of random linear network
coding for broadcast,” IEEE Transactions on Vehicular Technology, vol. 66, no. 8, pp.
7050–7060, Aug 2017.
[112] A. J. Aljohani, S. X. Ng, and L. Hanzo, “Distributed source coding and its applications
in relaying-based transmission,” IEEE Access, vol. 4, pp. 1940–1970, 2016.
[113] D. E. Lucani, M. V. Pedersen, D. Ruano, C. W. Sørensen, F. H. P. Fitzek, J. Heide,
O. Geil, V. Nguyen, and M. Reisslein, “Fulcrum: Flexible network coding for heterogeneous
devices,” IEEE Access, vol. 6, pp. 77890–77910, 2018.
[114] S. Pandi, F. Gabriel, J. A. Cabrera, S. Wunderlich, M. Reisslein, and F. H. P. Fitzek,
“PACE: Redundancy engineering in RLNC for low-latency communication,” IEEE Access,
vol. 5, pp. 20477–20493, 2017.
[115] F. Karetsi and E. Papapetrou, “A low complexity network-coded arq protocol for ultra-
reliable low latency communication,” in 2021 IEEE 22nd International Symposium on a
World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2021, pp. 11–20.
[116] C. Chen, C. Dong, W. Hai, and Y. Weibo, “ANC: Adaptive unsegmented network coding for
applicability,” in 2013 IEEE International Conference on Communications (ICC), 2013,
pp. 3552–3556.
[117] C. W. Sorensen, D. E. Lucani, F. H. P. Fitzek, and M. Medard, “On-the-fly overlapping
of sparse generations: A tunable sparse network coding perspective,” in 2014 IEEE 80th
Vehicular Technology Conference (VTC2014-Fall), 2014, pp. 1–5.
[118] F. Wu, C. Hua, H. Shan, and A. Huang, “MWNCast: Cooperative multicast based on mov-
ing window network coding,” in 2012 IEEE Global Communications Conference (GLOBE-
COM), 2012, pp. 138–144.
[119] Y. Qin and L.-L. Yang, “Steady-state throughput analysis of network coding nodes em-
ploying stop-and-wait automatic repeat request,” IEEE/ACM Transactions on Network-
ing, vol. 20, no. 5, pp. 1402–1411, 2012.
[120] J. Qureshi, A. Malik, and C. H. Foh, “On expected transmissions for wireless random
linear coding,” in 2017 IEEE Symposium on Computers and Communications (ISCC),
2017, pp. 359–364.
[121] C. Chen, C. Dong, F. Wu, H. Wang, L. Peng, and J. Nie, “Improving unsegmented net-
work coding for opportunistic routing in wireless mesh network,” in 2012 IEEE Wireless
Communications and Networking Conference (WCNC), 2012, pp. 1847–1852.
[122] P. Garrido, D. E. Lucani, and R. Agüero, “Markov chain model for the decoding probability
of sparse network coding,” IEEE Transactions on Communications, vol. 65, no. 4, pp.
1675–1685, April 2017.
[123] J. Cloud, D. Leith, and M. Médard, “A coded generalization of selective repeat ARQ,” in
2015 IEEE Conference on Computer Communications (INFOCOM), 2015, pp. 2155–2163.
[124] R. Tutgun and E. Aktas, “Cooperative network coded ARQ strategy for broadcast net-
works,” in 2015 IEEE International Conference on Communications (ICC), 2015, pp.
1819–1824.
[125] G. Smith, E. Lochin, J. Lacan, and R. Boreli, “Memory and complexity analysis of on-the-
fly coding schemes for multimedia multicast communications,” in 2012 IEEE International
Conference on Communications (ICC), 2012, pp. 2070–2074.
[126] D. E. Lucani, M. Medard, and M. Stojanovic, “Online network coding for time-division
duplexing,” in 2010 IEEE Global Telecommunications Conference GLOBECOM 2010, Dec
2010, pp. 1–6.
[127] D. E. Lucani, M. Medard, and M. Stojanovic, “Random linear network coding for time-
division duplexing: Queueing analysis,” in 2009 IEEE International Symposium on Infor-
mation Theory, 2009, pp. 1423–1427.
[128] C. Chang and C. Wang, “A new capacity-approaching scheme for general 1-to-K broad-
cast packet erasure channels with ACK/NACK,” IEEE Transactions on Information Theory,
vol. 66, no. 5, pp. 3000–3025, 2020.
[129] N. Arianpoo, I. Aydin, and V. C. Leung, “Network coding as a performance booster for
concurrent multi-path transfer of data in multi-hop wireless networks,” IEEE Transactions
on Mobile Computing, vol. 16, no. 4, pp. 1047–1058, 2017.
[130] L. Qi, W. Sun, and Y. Wang, “Numerical multilinear algebra and its applications,” Fron-
tiers of Mathematics in China, vol. 2, no. 4, pp. 501–526, 2007.
[131] A. Cichocki, R. Zdunek, A. H. Phan, and S.-i. Amari, Nonnegative matrix and tensor
factorizations: applications to exploratory multi-way data analysis and blind source sepa-
ration. John Wiley & Sons, 2009.
[132] W. Li and M. K. Ng, “On the limiting probability distribution of a transition probability
tensor,” Linear and Multilinear Algebra, vol. 62, no. 3, pp. 362–385, 2014.
[133] S.-J. Wu and M. T. Chu, “Markov chains with memory, tensor formulation, and the
dynamics of power iteration,” Applied Mathematics and Computation, vol. 303, pp. 226–
239, 2017.
[134] A. Fiandrotti, V. Bioglio, and E. Magli, “Feedback-driven network coding for cooperative
video streaming,” in 2013 IEEE International Conference on Acoustics, Speech and Signal
Processing, 2013, pp. 3607–3611.
[135] Y. Lin, B. Liang, and B. Li, “SlideOR: Online opportunistic network coding in wireless
mesh networks,” in 2010 Proceedings IEEE INFOCOM, 2010, pp. 1–5.
[136] P. Ostovari and J. Wu, “Throughput and fairness-aware dynamic network coding in wire-
less communication networks,” in 2013 6th International Symposium on Resilient Control
Systems (ISRCS), 2013, pp. 134–139.
[137] G. Wang, X. Zhao, and X. Dai, “On efficient network coding scheme with lossy and delayed
feedback,” in 2011 International Symposium on Network Coding, 2011, pp. 1–6.
[138] P. U. Tournoux, E. Lochin, J. Lacan, A. Bouabdallah, and V. Roca, “On-the-fly erasure
coding for real-time video applications,” IEEE Transactions on Multimedia, vol. 13, no. 4,
pp. 797–812, 2011.
[139] F. Wu, C. Hua, H. Shan, and A. Huang, “Reliable network coding for minimizing decoding
delay and feedback overhead in wireless broadcasting,” in 2012 IEEE 23rd International
Symposium on Personal, Indoor and Mobile Radio Communications - (PIMRC), 2012, pp.
796–801.
[140] A. Fu and P. Sadeghi, “Queue-based rate control for low feedback RLNC,” in 2014 IEEE
International Conference on Communications (ICC), 2014, pp. 2841–2847.
[141] J. K. Sundararajan, D. Shah, M. Medard, M. Mitzenmacher, and J. Barros, “Network
coding meets TCP,” in IEEE INFOCOM 2009, 2009, pp. 280–288.
[142] W. Bao, V. Shah-Mansouri, V. W. Wong, and V. C. Leung, “TCP VON: Joint congestion
control and online network coding for wireless networks,” in 2012 IEEE Global Commu-
nications Conference (GLOBECOM), 2012, pp. 125–130.
[143] N. V. Ha, K. Kumazoe, and M. Tsuru, “TCP network coding with forward retransmission,”
in 2015 IEEE Asia Pacific Conference on Wireless and Mobile (APWiMob), 2015, pp.
136–141.
[144] ——, “Making TCP/NC adjustable to time varying loss rates,” in 2016 International Con-
ference on Intelligent Networking and Collaborative Systems (INCoS), 2016, pp. 457–462.
[145] G. Strang, Linear Algebra and Its Applications. Thomson, Brooks/Cole, 2006.
[146] G. E. Andrews and K. Eriksson, Integer partitions. Cambridge University Press, 2004.
[147] T. M. Cover and J. A. Thomas, Elements of information theory. John Wiley & Sons,
2012.
[148] D. Li and X. Sun, Nonlinear integer programming. Springer Science & Business Media,
2006, vol. 84.
[149] J. K. Sundararajan, D. Shah, and M. Medard, “Online network coding for optimal through-
put and delay - the three-receiver case,” in 2008 International Symposium on Information
Theory and Its Applications, Dec 2008, pp. 1–6.
[150] C. M. Grinstead and J. L. Snell, Introduction to probability. American Mathematical
Soc., 2012.
[151] J. G. Kemeny and J. L. Snell, Finite Markov Chains. D. van Nostrand Company, Inc.,
Princeton, NJ, USA, 1960.
[152] A. L. Jones, I. Chatzigeorgiou, and A. Tassi, “Binary systematic network coding for pro-
gressive packet decoding,” in 2015 IEEE International Conference on Communications
(ICC), 2015, pp. 4499–4504.
[153] J. Claridge and I. Chatzigeorgiou, “Probability of partially decoding network-coded mes-
sages,” IEEE Communications Letters, vol. 21, no. 9, pp. 1945–1948, 2017.
[154] E. Tsimbalo, A. Tassi, and R. J. Piechocki, “Reliability of multicast under random linear
network coding,” IEEE Transactions on Communications, vol. 66, no. 6, pp. 2547–2559,
2018.
[155] M. Artin, Algebra. Pearson Prentice Hall, 2011.
[156] J. F. Shortle, J. M. Thompson, D. Gross, and C. M. Harris, Fundamentals of queueing
theory. John Wiley & Sons, 2018.
[157] D. E. Lucani, “Composite extension fields for (network) coding: Designs and opportuni-
ties,” in Network Coding and Designs (Final Conference of COST Action IC1104), April
2016.