Robust IA for MDSQ Encoder over Noisy Channels
Rui Ma and Fabrice Labeau
Centre for Advanced Systems & Technologies in Communications (SYTACom)
Department of Electrical and Computer Engineering
McGill University
Montreal, Quebec, Canada
Email: rui.ma@mail.mcgill.ca, fabrice.labeau@mcgill.ca
multiple description quantizers. They only considered the conventional on/off channel models and did not try to improve the performance of the MDC system over noisy channels. Here, we propose a novel method to find a feasible solution for MDSQ, instead of MDVQ, to reduce the distortion introduced by noisy channels.

From the results in [3], we inferred that if the capability of error detection were higher, the reconstruction distortion of the enhanced central decoder could be reduced further. Therefore, we propose here a new method to assign index pairs for the MDC system, in particular MDSQ or MDVQ, so as to achieve a higher ability to detect errors. This IA approach attempts to re-assign 2b-bit binary code words to N (N < 2^{2b}) index pairs, each component of which is a b-bit code word, in order to increase the minimum Hamming distance, d_min, between any two binary index pairs. To the best of our knowledge, this paper is the first attempt to achieve robustness against bit errors by using IA in the MDC system. Without introducing extra redundancy such as channel coding, this kind of scheme exploits the inherent redundancy to improve the capability of error detection and, furthermore, reduces the distortion of the reconstructed signals through the enhanced central decoder at the same time.

This work is presented in the following sections. First, as preliminaries, the enhanced central decoder and its optimal and suboptimal solutions [3] are reviewed in Section II. The proposed method of assigning index pairs is discussed in Section III, which is followed by the experimental results and the conclusion.

II. ENHANCED CENTRAL DECODER

A. Model of Enhanced Central Decoder

In order to utilize the residual information of the corrupted description, we proposed an enhanced central decoder, illustrated in Fig. 2, for an MDC system [3]. Here we assume that one channel (Description 2) suffers noise that results in bit errors, while the other description (Description 1) is received correctly. By referring to Description 1, errors in Description 2 are detected and the outputs are then estimated.

MDSQ is used as an example to explain how this central decoder works. Redundancy is introduced when a scalar quantization index is decomposed into two indices in MDSQ. As illustrated in Fig. 3, we use the modified linear (ML) IA [1] as the example of IA in MDSQ. In MLIA, in order to encode the source at b bits/source sample (bpss) on each channel, N index pairs are selected from the main diagonal and the 2k diagonals closest to the main diagonal of a 2^b × 2^b IA matrix. Each ML implementation is thus defined by (b, k, N). In Fig. 3, index 0 is decomposed into (0, 0), and 2 into (0, 1). At the same time, many combinations are still not occupied, such as (5, 1). We exploit this property to detect errors. For example, if (5, 1) is received under the assumption that Description 1 is correct and Description 2 incurs some errors, 1 is determined to be an error. We then choose the most probable output value, using the methods described in the next subsection, to minimize the extra distortion caused by errors.

[Fig. 3. MLIA [1] at (3, 2, 32): an 8 × 8 IA matrix indexed by row (i) and column (j).]

B. Error Estimation

We consider real-valued data samples x ∈ R that are encoded to an index l through the quantizer function q(x) = l. There are N possible quantizer output interval indices l, and we will denote by 𝒩 the set of these indices. The corresponding code words or reproduction levels are denoted by c_l, and they are chosen here as the centroids of the corresponding quantization cells v_l: c_l ≜ ∫_{v_l} x f_X(x) dx, where f_X(x) is the input probability density function (pdf). These indices l ∈ 𝒩 are then mapped to a pair of indices (i, j) through an index mapping operator a(·). Referring to Fig. 3, the indices l appear in the matrix, whereas i and j are the row and column numbers, respectively. We refer to the two components of the mapping as i = a^(1)(l) and j = a^(2)(l). Finally, the inverse mapping, which can be deduced, e.g., from Fig. 3, is denoted by l = a^{-1}(i, j). We also denote the centroids of the quantizers in each description as c_i^(1) and c_j^(2). The received indices are denoted by î and ĵ; the corresponding output x̂ is the reproduction level e_{a^{-1}(î,ĵ)}, not necessarily the same as the quantizer reproduction level c_{a^{-1}(î,ĵ)}; when only one description is received, the corresponding output x̂ is e_î^(1) or e_ĵ^(2). In the case of a correct reception of the two descriptions, the reproduced value is thus x̂ = e_{a^{-1}(a^(1)(q(x)), a^(2)(q(x)))}.

The MSE between two real values is denoted by d(·, ·). We also denote the set of possible indices i as I ≜ {i ∈ 𝒩 : ∃l ∈ 𝒩, j ∈ 𝒩 : a(l) = (i, j)}, and the set of all possible indices j as J ≜ {j ∈ 𝒩 : ∃l ∈ 𝒩, i ∈ 𝒩 : a(l) = (i, j)}. For a given i ∈ I, we also define the set J̄_i ≜ {j ∈ J : ∄l ∈ 𝒩 : a(l) = (i, j)}, which is the set of values of the second index j that do not lead to a possible (i, j) pair in the index mapping matrix. For example, in Fig. 3, at the second row or column, J̄_1 = {4, 5, 6, 7}.

We consider two types of transmission errors: those that our error detector can detect, and those that remain undetectable. The overall distortion caused by transmission errors, D_e, is given by

D_e = D_u + D_d,   (1)

where D_e = Σ_{l∈𝒩} P_q(l) D_e(l), D_u = Σ_{l∈𝒩} P_q(l) D_u(l), and D_d = Σ_{l∈𝒩} P_q(l) D_d(l). P_q(l) = P{x ∈ v_l} is the probability of the input falling in cell l; D_e(l), D_u(l) and D_d(l) are the expected distortions from inputs in cell l due respectively to all, undetectable and detectable transmission errors. A transmission error can be detected as long as ĵ ∈ J̄_î.

Our goal is now to find the best possible reconstruction levels e_l̂, e_î^(1) or e_ĵ^(2) in order to minimize D_e. When an index pair (i, j) is transmitted, the corresponding received pair is (î, ĵ). For a transmission error, we assume without loss of generality that only Description 1 is received correctly, i.e., î = i and ĵ ≠ j. Two methods were proposed to estimate the outputs of erroneous index pairs so as to decrease D_d:

1) Optimal: Assuming that the a-priori source probabilities P_q(l), or P_q(a^{-1}(i, j)), are known at the receiver end, the output value of the erroneous index pair is calculated by:

e^(1)_{i,opt} = [ Σ_{j∈J} Σ_{ĵ∈J̄_i} P_q(a^{-1}(i, j)) P_ed(ĵ|j) c_{a^{-1}(i,j)} ] / [ Σ_{j∈J} Σ_{ĵ∈J̄_i} P_q(a^{-1}(i, j)) P_ed(ĵ|j) ].   (2)

2) Suboptimal: the output value of the erroneous index pair is replaced by:

e^(1)_{i,sub} ≈ c_i^(1).   (3)

The suboptimal method achieved performance very close to the optimal one, with lower computational complexity and without source statistics or channel knowledge (see [3] for more details).

III. DESIGN METHOD FOR A ROBUST MDSQ

According to the algorithm described above, if a received code word is judged to be non-existing, an erroneous code word has been found by the decoder. We then utilize the estimated value to reduce the distortion introduced by the errors. Hence, the higher the error detection capability is, the less distortion is introduced when using our algorithm. In order to improve the capability of error detection, we design a new IA scheme that enlarges the minimum Hamming distance d_min between any two code words. The overall procedure of designing a robust MDSQ (RMDSQ) consists of three stages:

1) Searching for qualified index pairs (i, j) within a 2^b × 2^b IA matrix, based on d_min between the binary representations of code words.
2) Assigning indices l to the selected index pairs (i, j) in order to minimize the cost function(s).
3) Following the algorithm provided in [1], determining the corresponding MDSQ that adapts to the source statistics with balanced side distortions.

In the following subsections, we describe the algorithms for Steps 1) and 2), respectively. The algorithm for Step 3) is the same as the one described in [1].

A. Searching for qualified index pairs

In MDSQ, in particular with two descriptions, the original indices of the scalar quantizer are decomposed into index pairs (i, j). Let i and j denote the binary representations of i and j, respectively, i.e., i = [i_0, i_1, ..., i_{b-1}] and j = [j_0, j_1, ..., j_{b-1}] are two binary b-tuples, where i_k, j_k ∈ {0, 1}, k = 0, 1, ..., b-1. We concatenate i and j to form a code word w, that is to say, w = [i, j] = [i_0, i_1, ..., i_{b-1}, j_0, j_1, ..., j_{b-1}]. Let C be the set of all possible code words w, i.e., C = {0, 1}^{2b}. In order to achieve robustness to bit errors, we look in C for a set of code words W such that the Hamming distance between any two code words is not less than a certain value, i.e., W = {w ∈ C : ∀v ∈ W, v ≠ w, ‖v − w‖ ≥ d_min}, where ‖v − w‖ represents the Hamming distance between v and w. The number of code words w ∈ W is N, and ‖w‖ represents the Hamming weight of w.

For given b and d_min, we search C to find a satisfactory W. The searching algorithm is described in Table I.

B. IA by using genetic algorithm

IA is a kind of permutation problem, in particular an NP-complete problem. For W with N code words, the total number of possible combinations is N!/2. For N = 32, the total number of possible combinations is 1.32 × 10^35. We apply a genetic algorithm (GA) to find a "close-to-optimal" solution.

First, we describe the cost function for evaluating the found
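The optimal estimator of eq. (2) is a conditional expectation over the cells that could have produced the detected error. As a hedged numerical sketch, the toy values below (the sets J and J̄_i, the probabilities P_q and P_ed, and the centroids) are invented for illustration and are not taken from the paper:

```python
# Sketch of the optimal estimator (eq. (2)) for one fixed, correctly
# received row index i. All numbers here are hypothetical toy values.
J      = [0, 1, 2, 3]              # admissible column indices j
Pq     = [0.1, 0.4, 0.3, 0.2]      # Pq(a^{-1}(i, j)) for each j in J
c      = [-1.5, -0.5, 0.5, 1.5]    # centroids c_{a^{-1}(i, j)}
Jbar_i = [4, 5, 6, 7]              # unoccupied columns for this row i
# Ped(jhat | j): probability of receiving the detectable index jhat
# when j was sent; taken constant here purely for illustration.
Ped = {(jh, j): 0.01 for jh in Jbar_i for j in J}

num = sum(Pq[j] * Ped[(jh, j)] * c[j] for j in J for jh in Jbar_i)
den = sum(Pq[j] * Ped[(jh, j)] for j in J for jh in Jbar_i)
e_i_opt = num / den
print(e_i_opt)
```

When P_ed does not depend on j, the channel terms cancel and eq. (2) collapses to the P_q-weighted average of the centroids, which is what this toy run returns (0.1 here).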
TABLE I
ALGORITHM TO SEARCH QUALIFIED CODE WORDS

w_0 ← 0
v ← w_0
k ← 0
FOR each v ∈ C
    v ← w_k + 1
    ...

[Figure: two 8 × 8 IA matrices indexed by row (i) and column (j); entries not fully recoverable from the source.]
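Only fragments of Table I survive above. As a hedged sketch, one plausible greedy realization of the search described in Sec. III-A scans C in lexicographic order, starting from w_0 = 0, and keeps a code word only if it is at Hamming distance at least d_min from every code word kept so far; the scan order and tie-breaking here are assumptions, not the paper's exact listing:

```python
def search_W(b, dmin):
    """Greedy scan of C = {0,1}^(2b): keep w if its Hamming distance
    to every already-kept code word is at least dmin."""
    W = []
    for w in range(2 ** (2 * b)):  # lexicographic scan of C
        # bin(w ^ v).count("1") is the Hamming distance between w and v
        if all(bin(w ^ v).count("1") >= dmin for v in W):
            W.append(w)
    return W

W = search_W(b=3, dmin=2)
print(len(W))  # 32: the greedy scan keeps exactly the even-weight 6-bit words
```

For 2b = 6 and d_min = 2 this greedy scan keeps exactly the 32 code words of even Hamming weight (any two distinct even-weight words differ in an even, hence ≥ 2, number of positions), so N = 32 code words are available, consistent with the N used in the (3, 2, 32) example.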