Turbo Product Codes Based On Convolutional Codes
ETRI Journal, Volume 28, Number 4, August 2006 Orhan Gazi et al. 453
convolutional code can be set as desired. The time-invariant trellis structure of convolutional codes makes them more convenient for implementation. In addition, numerous practical techniques such as trellis-coded modulation and puncturing can be simply utilized with convolutional codes, as opposed to linear block codes. A code from the same family was previously studied for orthogonal frequency-division multiplexing in [14] but was not analyzed or further elaborated.

Multi-input multi-output (MIMO) techniques are quite important for enhancing the capacity of wireless communication systems. Space-time trellis codes provide both diversity and coding gain in MIMO channels and are widely used [15]. Space-time trellis codes usually have time-invariant trellis structures, just like convolutional codes. Thus, a product code based on convolutional codes is more suitable for integration with MIMO channels and poses an alternative to block product codes.
Due to these advantages of convolutional codes, we propose a class of product codes constructed by using convolutional codes, which we call convolutional product codes (CPCs). In this paper, we investigate the factors that affect the performance of CPCs and leave the issues regarding space-time trellis codes to other publications.

The outline of the paper is as follows. The proposed code structure and the decoding algorithm for CPCs are given in section II. The minimum distance of these codes is studied in section III. In section IV, implementation advantages of CPCs are given. Simulation results are presented in section V. Concluding remarks are given in section VI.

Fig. 1. Regular product code encoding procedure, where a block code is used to encode rows and columns.

1. CPC Encoder

A regular product code is constructed by placing the information bits/symbols into a matrix. The rows and columns are encoded separately using linear block codes [5]-[8]. This type of product encoder is shown in Fig. 1. It is seen from the figure that the data and parity bits are grouped separately.

Fig. 2. CPC encoding procedure without an interleaver.
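To make the row/column structure of Fig. 1 concrete, the following minimal Python sketch builds a product codeword using single-parity-check codes as the row and column block codes. The choice of single-parity-check component codes is purely illustrative (block product codes in the literature typically use stronger codes such as Hamming or BCH):

```python
def spc_product_encode(data):
    """Encode a k x k binary matrix with single-parity-check row and
    column codes, giving the (k+1) x (k+1) layout of Fig. 1: a data
    block, row parity bits, column parity bits, and a parity bit on
    the parities."""
    k = len(data)
    rows = [row + [sum(row) % 2] for row in data]            # append row parity
    col_par = [sum(rows[i][j] for i in range(k)) % 2 for j in range(k + 1)]
    return rows + [col_par]                                  # append parity row

codeword = spc_product_encode([[1, 0], [1, 1]])
for row in codeword:
    print(row)
```

Every row and every column of the result has even parity, and the data, row-parity, and column-parity bits occupy separate blocks, as in Fig. 1.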
In our case, we use convolutional codes instead of linear block codes to encode rows and columns. This is illustrated in Fig. 2. When compared to Fig. 1, it is obvious that data and parity bits are mixed uniformly.

Encoding is performed by using a matrix that determines how each encoder works. The data to be sent is put into the matrix. Each row of the matrix is encoded using a convolutional code. We use the same recursive systematic convolutional code to encode each row, although different convolutional codes can be used for this purpose. Once each row is encoded, the matrix is sent, if desired, to an interleaver. Our data matrix dimension is k×k, and the encoded data matrix dimension is n×n; that is, our code is an (n×n, k×k) code. The interleaved matrix is coded column-wise. In our simulations we used the rate-1/2 recursive systematic convolutional code with the generator matrix (1, 5/7)octal to encode each row and column. Hence, the overall code rate is 1/4. The general encoding procedure, which includes any type of interleaver, is illustrated in Fig. 3.
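The row-then-column encoding just described can be sketched as follows. The state-update equations are one standard realization of the (1, 5/7)octal recursive systematic convolutional (RSC) encoder with trellis termination; the encoder details are our assumption for illustration, not code from the paper:

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder with
    generator (1, 5/7)_octal: feedback 7 = 1+D+D^2, feedforward
    5 = 1+D^2.  Returns interleaved (systematic, parity) bits with
    two tail steps that drive the encoder back to the zero state."""
    s1 = s2 = 0
    out = []
    for u in list(bits) + [None, None]:   # two termination steps
        if u is None:
            u = s1 ^ s2                   # tail bit chosen to zero the state
        a = u ^ s1 ^ s2                   # feedback path (1+D+D^2)
        p = a ^ s2                        # feedforward path (1+D^2)
        out += [u, p]
        s1, s2 = a, s1
    return out

def cpc_encode(data):
    """Convolutional product code: encode each row with the RSC code,
    then encode each column of the row-encoded matrix (no interleaver)."""
    row_coded = [rsc_encode(row) for row in data]
    n = len(row_coded[0])
    cols = [[r[j] for r in row_coded] for j in range(n)]
    col_coded = [rsc_encode(c) for c in cols]
    # transpose back so rows of the result are matrix rows
    return [[col_coded[j][i] for j in range(n)]
            for i in range(len(col_coded[0]))]
```

With a k×k data matrix the output has (2(k+2))² bits because of the interleaved parity bits and the two tail steps per row and column, which is why trellis termination pushes the overall rate slightly below 1/4.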
Fig. 3. Convolutional product code encoder with any type of interleaver (d denotes data bits and p denotes parity bits):

[d1 d2; d3 d4] --encode rows--> [d1 p1 d2 p2; d3 p3 d4 p4] --interleave--> [a b c d; e f g h] --encode columns--> [a b c d; pa pb pc pd; e f g h; pe pf pg ph]

Fig. 5. SCCC encoding operation: input data → outer convolutional code (row encoder) → interleaver → inner convolutional code (column encoder) → output data.
2. CPC Decoder

Convolutional product coded data is multiplexed to a single stream and binary phase shift keying (BPSK) modulated. The BPSK-modulated signal is passed through an additive white Gaussian noise channel with double-sided noise power spectral density N0/2; that is, the noise variance is σ² = N0/2. We used the log-MAP soft decoding algorithm [16], [17] to iteratively decode the convolutional product code. Since columns were encoded last, each column is independently decoded one by one. The extrinsic information obtained from the columns is passed to the row decoder after being de-interleaved. Then, row decoding proceeds; rows are decoded one by one, and interleaved extrinsic information is passed to the column decoder. The CPC decoding procedure is depicted in Fig. 4. This procedure is repeated a sufficient number of times. The decoding structure employed in this study is the same as that of serially concatenated codes in Fig. 5 [17]. For frames of equal length, an SCCC decoder uses two log-MAP decoders and performs quite well at low rates. CPC decoders can utilize many log-MAP decoders in parallel, thus showing smaller decoding delays. Therefore, we will compare the proposed CPC structure to that of SCCC.

3. Puncturing

Puncturing is a widely used tool to increase the code rate of convolutional codes [18]. The puncturing operation increases the rate of a code but decreases the free distance. This results in a worse error-rate performance compared to the non-punctured case. We used the puncturing matrix

[1 1; 1 0]

to puncture the convolutional component codes.

We studied two cases. In the first case, puncturing is applied only to the column encoders, resulting in a code rate of 2/3 each. The overall code rate becomes (1/2)×(2/3) = 1/3. When trellis termination is used for rows and columns, a convolutional code with a slightly smaller overall code rate (≤ 1/3) is produced. In the other case, we apply puncturing to each row and column encoder, resulting in an increased code rate of approximately (2/3)×(2/3) = 4/9. Simulation results for punctured convolutional product codes (PCPCs) will be presented in section V.
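The puncturing matrix above deletes every second parity bit of a rate-1/2 stream. A small sketch, assuming the encoder output interleaves systematic and parity bits:

```python
def puncture(coded, pattern=((1, 1), (1, 0))):
    """Puncture an interleaved (systematic, parity) stream with the
    period-2 puncturing matrix [[1 1], [1 0]]: row 0 of the pattern
    applies to systematic bits, row 1 to parity bits.  Deleting every
    second parity bit raises the code rate from 1/2 to 2/3."""
    out = []
    for t in range(len(coded) // 2):
        u, p = coded[2 * t], coded[2 * t + 1]
        if pattern[0][t % 2]:
            out.append(u)
        if pattern[1][t % 2]:
            out.append(p)
    return out
```

For example, puncturing the weight-5 codeword '00111011' of the (1, 5/7) code leaves '001101' of Hamming weight 3, consistent with the rate increase from 1/2 to 2/3.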
Fig. 4. CPC decoding procedure: each column is decoded with a separate log-MAP decoder; the extrinsic information for the bits is de-interleaved and passed to the row decoders; each row is then decoded with a separate log-MAP decoder, and the interleaved extrinsic information is passed back to the column decoders before bit decisions are made.

III. CPC Minimum Distance and its Asymptotic Performance

The Hamming weight of a binary codeword is defined as the number of '1's available in the codeword [13]. The minimum distance of a linear code is the minimum Hamming weight over all nonzero codewords. The minimum distance plays an important role in the code performance. As it gets larger, the code performance improves, especially at high signal-to-noise ratio (SNR) values [13]. We assume that dfree is the free distance of the component convolutional codes used in CPCs with trellis termination. We will investigate the minimum distance of the CPCs according to the usage of the interleavers.
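The free distance dfree of a terminated component code can be found by brute force over short input frames. The sketch below uses an illustrative realization of the (1, 5/7)octal RSC encoder with two tail bits; the encoder details are our assumption, not code from the paper:

```python
from itertools import product

def rsc_encode(bits):
    """Terminated (1, 5/7)_octal RSC encoder (illustrative realization):
    feedback polynomial 7 = 1+D+D^2, feedforward polynomial 5 = 1+D^2."""
    s1 = s2 = 0
    out = []
    for u in list(bits) + [None, None]:   # two tail steps
        if u is None:
            u = s1 ^ s2                   # tail bit that zeroes the state
        a = u ^ s1 ^ s2
        p = a ^ s2
        out += [u, p]
        s1, s2 = a, s1
    return out

def free_distance(max_len=10):
    """Minimum Hamming weight over all nonzero terminated input frames
    of up to max_len bits (brute-force search)."""
    return min(sum(rsc_encode(bits))
               for n in range(1, max_len + 1)
               for bits in product((0, 1), repeat=n)
               if any(bits))

print(free_distance())   # 5 for the (1, 5/7) code
```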
1. No Interleaver

When no interleaver is used, any nonzero row of the row-encoded matrix contains at least dfree '1's. This means that there are at least dfree columns containing at least a single '1' in the row-encoded matrix. When the columns are encoded, there exist at least dfree columns, each containing at least dfree '1's. Hence, in total there are at least d²free '1's in the coded matrix [6]. This is the dmin distance of the CPC whose component convolutional codes have a trellis termination constraint. In Figs. 6 and 7, this concept is explained for the (1, 5/7)octal component convolutional code, whose free distance is 5. In summary, if no interleaver is used, the CPC minimum distance is d²free.

Fig. 6. If the column elements are not mixed, d²free is preserved.

Fig. 7. d²free is not preserved if an S-random interleaver is used ('x' stands for a single or a group of 0's).

2. Column S-Random Interleaver

Both to preserve the d²free minimum distance of the CPC and to benefit from the interleaving gain, after row encoding we used S-random interleavers for each column; that is, each column is interleaved, but elements of different columns are not mixed. In this way, we guarantee that dfree columns contain a single '1' before the column encoding operation. We call this type of interleaving column S-random interleaving to distinguish it from regular S-random interleaving. A helical interleaver [19] also does not mix the elements of different columns. A helical interleaver and a combination of helical and column S-random interleavers will also be considered.

3. Full S-Random Interleaver

If an S-random interleaver is used for all the elements of the matrix after row encoding, the number of columns that contain a single '1' is not necessarily equal to dfree = 5. As a result, the CPC minimum distance is no longer necessarily equal to d²free. In fact, after an interleaving operation, all the '1's may appear in a single column. This means that the CPC minimum distance is lower bounded by dfree. We call this type of interleaving full S-random interleaving. In Fig. 7, the effect of the full S-random interleaver is illustrated. It is seen from Fig. 7 that when the row-encoded matrix is S-random interleaved, all the '1's appearing in a row may go to a single column. This verifies that the CPC minimum distance is lower bounded by dfree.

4. Punctured CPCs

The puncturing operation decreases the free distance of convolutional codes. In our case, we puncture the (1, 5/7)octal component convolutional code that has dfree = 5; that is, the input sequence '0111' produces the minimum Hamming weight codeword '00111011'. When the puncturing matrix is applied, its free distance decreases to dfree = 3; that is, by deleting every second parity bit in a periodic manner, '001x101x' is obtained from the minimum Hamming weight codeword. Hence, CPCs constructed using punctured component convolutional codes have a smaller minimum distance. In fact, the minimum distance is equal to d²free = 9 if no interleaving operation is performed or a column S-random interleaver is used.
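A column S-random interleaver as described above can be sketched as follows. The greedy construction of the S-random permutation is a common heuristic, not the authors' specific design:

```python
import random

def s_random_permutation(n, S, seed=0, max_restarts=2000):
    """Greedy construction of an S-random permutation of length n:
    indices within S positions of each other are mapped to values at
    least S apart.  Restarts on dead ends; values of S up to roughly
    sqrt(n/2) usually succeed."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        remaining, perm = list(range(n)), []
        while remaining:
            recent = perm[-S:] if S > 0 else []
            cand = [v for v in remaining
                    if all(abs(v - w) >= S for w in recent)]
            if not cand:
                break                      # dead end, restart
            v = rng.choice(cand)
            perm.append(v)
            remaining.remove(v)
        if not remaining:
            return perm
    raise RuntimeError("no S-random permutation found; reduce S")

def column_s_random_interleave(matrix, S, seed=0):
    """Column S-random interleaving: each column is permuted with its
    own S-random permutation, so elements of different columns are
    never mixed."""
    n_rows = len(matrix)
    out = [row[:] for row in matrix]
    for j in range(len(matrix[0])):
        perm = s_random_permutation(n_rows, S, seed=seed + j)
        for i in range(n_rows):
            out[i][j] = matrix[perm[i]][j]
    return out
```

Because each permutation acts within a single column, the set of columns containing a '1' is unchanged, which is exactly the property used to preserve dmin = d²free.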
5. Asymptotic Performance

If the row and column convolutional codes are trellis terminated, they can be considered as block codes. Asymptotic performance studies made for block product codes are also valid for a convolutional product code. The BER probability of the CPCs can be approximated using the formula

P_b \approx \frac{N_{c,d_{free}}^2 \, w_{c,d_{free}}^2}{k^2} \, Q\!\left(\sqrt{d_{free}^2 \, \frac{2E_s}{N_0}}\right), \quad E_s = r^2 E_b,   (1)

where dfree is the free distance of the convolutional code used to construct the convolutional product code, N_{c,dfree} is the number of convolutional codewords with free distance dfree, k is the length of the data information frame, and w_{c,dfree} is the average Hamming weight of the information words that produce convolutional codewords with Hamming weight dfree. Also, r is the rate of the component convolutional codes and is equal to 1/2 or 2/3 in our case, and Q is the error function given as

Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2} \, dt.

The BER approximation in (1) is valid if no interleaver is used during the CPC encoding operation. If an interleaver is used, it does not hold anymore.

IV. Implementation Advantages of CPCs

For a frame of length L, the columns of a CPC can be decoded by separate log-MAP decoders, each with a complexity of O(√L) and a time delay of O(√L). Since these decoders are run in parallel, the total column decoding complexity is of O(2L), but the time delay is of O(√L). Similarly, row decoding has a total complexity of O(L) and a time delay of O(√L). Hence, although both complexities are the same, the time delays differ greatly and bring about an O(√L)-fold increase in decoding speed. Thus, the main advantage of CPCs lies in their suitability for a parallel decoding procedure. Although there are some proposed methods for the parallel decoding of SCCCs and PCCCs, these methods usually propose extra algorithms to solve problems met in parallel processing. Such algorithms not only bring extra complexity to the decoding operation [20]-[21], but also may suffer from performance loss. This situation is totally remedied with the proposed CPCs.

V. Simulation Results

1. Interleaving Effects

A. No Interleaver

In this case, no interleaving operation is performed after row encoding. Trellis termination bits are added both to rows and columns. The minimum distance of the CPC is dmin = d²free = 25. Trellis termination bits are necessary to guarantee dmin = d²free; otherwise, dmin is not equal to d²free anymore. The performance graph of this code is shown in Fig. 8. It can be seen from the graph that the performance of this CPC is not good for low SNR values, although its minimum distance is large. As is well known, the minimum distance dominates the performance of the code at high SNR values.
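Equation (1) is straightforward to evaluate numerically. In the sketch below, the multiplicity N_{c,dfree} and average information weight w_{c,dfree} are placeholder values chosen for illustration, since the paper does not list them:

```python
import math

def Q(x):
    """Gaussian tail function, Q(x) = (1/2) erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def cpc_ber_approx(EbN0_dB, d_free=5, N_cd=1, w_cd=3, k=100, r=0.5):
    """Asymptotic BER approximation of (1) for a CPC without an
    interleaver.  N_cd and w_cd are illustrative assumptions, not
    values from the paper."""
    Eb_N0 = 10.0 ** (EbN0_dB / 10.0)
    Es_N0 = r ** 2 * Eb_N0            # E_s = r^2 E_b
    return (N_cd ** 2 * w_cd ** 2) / k ** 2 * Q(math.sqrt(d_free ** 2 * 2.0 * Es_N0))
```

As expected from (1), the approximation decays rapidly with Eb/N0, reflecting the dominance of the minimum distance at high SNR.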
Figs. 8 and 9. BER performance curves (legends: C1-C7; CPC No TT, CPC TT, CPC RT, SCCC).

… interleaving. Different column elements are not mixed. From Fig. 8, it can be seen that the performance is better compared to …
3. Puncturing Effects

Puncturing is first applied only to rows, resulting in a rate-2/3 CPC. From Fig. 10, it is seen that the performance of the CPC with a full S-random interleaver is good after being punctured. When the puncturing process is applied to both rows and columns, it results in a CPC rate of approximately 4/9. From Fig. 10, it is seen that the performance becomes very poor for the CPC with a full S-random interleaver. Recall that dmin is not necessarily lower bounded by d²free when an S-random interleaver is used. Thus, the particular interleaver we used resulted in a low dmin. When dmin ≥ d²free is ensured by column S-random interleaving, performance is enhanced significantly.

VI. Conclusion

In this article, we studied a new class of product codes based on convolutional codes. This type of product code has component codes with a time-invariant trellis structure, as opposed to product codes constructed with linear block codes (Hamming, BCH, Reed-Solomon, and so on). Hence, CPCs may be more favorable for implementation than linear block product codes. When compared to serially concatenated convolutional codes, they exhibit comparable BER levels, which are of practical interest.

We investigated the effects of different interleavers on the performance of CPCs. It was seen that CPCs are outperformed by other codes unless good interleavers are used. We proposed interleaving methods to preserve the greatest minimum distance of CPCs. It is seen that the performance of a CPC is best at low rates when a full S-random interleaver is used. Column S-random interleavers are much better for punctured CPCs.

Currently, we are investigating the effects of various interleavers and the incorporation of trellis-coded modulation in row and column encoding. Since CPCs employ matrices in encoding, they can be easily extended to multi-carrier modulation, where the vertical dimension can correspond to the sub-carriers. The approach presented here can be successfully extended to space-time trellis coding. Thus, our future studies will also include a joint structure for CPCs and MIMO space-time frequency codes.

Acknowledgement

The authors would like to thank the anonymous reviewers for their valuable comments and helpful corrections.

References

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes," Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 1064-1070.
[2] S. Benedetto, L. Gaggero, R. Garello, and G. Montorsi, "On the Design of Binary Serially Concatenated Convolutional Codes," Proc. VIII Communication Theory Mini-Conf. (CTMC), Vancouver, BC, Canada, June 1999, pp. 32-36.
[3] R. G. Gallager, "Low Density Parity Check Codes," IRE Trans. Inform. Theory, vol. IT-8, Jan. 1962, pp. 21-28.
[4] P. Elias, "Error-Free Coding," IRE Trans. Inform. Theory, vol. IT-4, Sept. 1954, pp. 29-37.
[5] E. Hewitt, "Turbo Product Codes for LMDS," Proc. IEEE Radio and Wireless Conf., Aug. 1998.
[6] N. Y. Yu, Y. Kim, and P. J. Lee, "Iterative Decoding of Product Codes Composed of Extended Hamming Codes," Proc. 5th IEEE Symp. on Computers and Communications (ISCC 2000), Antibes, France, July 2000, pp. 732-737.
[7] R. M. Pyndiah, "Near-Optimum Decoding of Product Codes: Block Turbo Codes," IEEE Trans. Commun., vol. 46, no. 8, Aug. 1998, pp. 1003-1010.
[8] T. Shohon, Y. Soutome, and H. Ogiwara, "Simple Computation Method of Soft Value for Iterative Decoding of Product Code Composed of Linear Block Code," IEICE Trans. Fundamentals, vol. E82-A, no. 10, Oct. 1999, pp. 2199-2203.
[9] O. Aitsab and R. Pyndiah, "Performance of Reed-Solomon Block Turbo Codes," Proc. IEEE GLOBECOM'96, London, U.K., vol. 1, Nov. 1996, pp. 121-125.
[10] D. M. Rankin and T. A. Gulliver, "Single Parity Check Product Codes," IEEE Trans. Commun., vol. 49, no. 8, Aug. 2001, pp. 1354-1362.
[11] D. M. Rankin and T. A. Gulliver, "Randomly Interleaved Single Parity Check Product Codes," Proc. IEEE Int. Symp. on Inform. Theory, June 2000, p. 88.
[12] A. Goalic and R. Pyndiah, "Real Time Turbo Decoding of Product Codes on a Digital Signal Processor," Proc. Int. Symp. on Turbo Codes and Related Topics, Brest, France, Sept. 1997, pp. 624-628.
[13] S. Lin and D. Costello, Error Control Coding, Prentice Hall, 2004.
[14] F. Sanzi and S. ten Brink, "Iterative Channel Estimation and Decoding with Product Codes in Multicarrier Systems," Proc. IEEE VTS Fall VTC2000, 52nd Vehicular Technology Conf., Boston, MA, USA, Sept. 2000, pp. 1338-1344.
[15] V. Tarokh, N. Seshadri, and A. R. Calderbank, "Space-Time Codes for High Data Rate Wireless Communication: Performance Criterion and Code Construction," IEEE Trans. Inform. Theory, vol. 44, no. 2, Mar. 1998, pp. 744-765.
[16] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. Inform. Theory, vol. 42, no. 2, Mar. 1996, pp. 429-445.
[17] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial Concatenation of Interleaved Codes: Design and Performance Analysis," IEEE Trans. Inform. Theory, vol. 44, May 1998, pp. 909-926.
[18] J. Hagenauer, "Rate-Compatible Punctured Convolutional Codes (RCPC Codes) and their Applications," IEEE Trans. Commun., vol. 36, no. 4, Apr. 1988, pp. 389-400.
[19] B. Vucetic and J. Yuan, Turbo Codes: Principles and Applications, Kluwer Academic Publishers, 2000.
[20] A. Tarable, S. Benedetto, and G. Montorsi, "Mapping Interleaving Laws to Parallel Turbo and LDPC Decoder Architectures," IEEE Trans. Inform. Theory, vol. 50, no. 9, Sept. 2004, pp. 2002-2009.
[21] S. Yoon and Y. Bar-Ness, "A Parallel MAP Algorithm for Low Latency Turbo Decoding," IEEE Commun. Lett., vol. 6, no. 7, July 2002, pp. 288-290.
[22] P. Robertson, "Illuminating the Structure of Parallel Concatenated Recursive (TURBO) Codes," Proc. GLOBECOM'94, San Francisco, CA, Nov. 1994, pp. 1298-1303.

Orhan Gazi received the BS and MS degrees in electrical and electronics engineering from Middle East Technical University, Ankara, Turkey, in 1996 and 2001. Since 2001, he has been working towards the PhD degree. His research includes error control coding and signal processing. He is currently employed as an Instructor at Cankaya University, where he delivers lectures at the undergraduate level in electrical engineering.

Ali Özgür Yılmaz received the BS degree in electrical engineering from the University of Michigan, Ann Arbor, in 1999. He received the MS and PhD degrees from the same school in 2001 and 2003. He has been an Assistant Professor with Middle East Technical University, Turkey, since September 2003. His research interests include high-rate mobile communication systems, cooperative communications, and radar signal processing.