An Overview of Turbo Codes and Their Applications: November 2005
Claude Berrou, Ramesh Pyndiah, Patrick Adde, Catherine Douillard and Raphaël Le Bidan
GET/ENST Bretagne, Laboratoire TAMCIC (UMR CNRS 2872), PRACom
Technopôle Brest Iroise, CS 83818, 29238 Brest Cedex 3, FRANCE
E-mail: {firstname.lastname}@enst-bretagne.fr
Abstract — More than ten years after their introduction, Turbo Codes are now a mature technology that has been rapidly adopted for application in many commercial transmission systems. This paper provides an overview of the basic concepts employed in Convolutional and Block Turbo Codes, and reviews the major evolutions in the field with an emphasis on practical issues such as implementation complexity and high-rate circuit architectures. We address the use of these technologies in existing standards and also discuss future potential applications for this error-control coding technology.

I. INTRODUCTION

Error-control codes, also called error-correcting codes or channel codes, are a fundamental component of virtually every digital transmission system in use today. Channel coding is accomplished by inserting controlled redundancy into the transmitted digital sequence, thus allowing the receiver to perform a more accurate decision on the received symbols and even correct some of the errors made during the transmission. In his landmark 1948 paper that pioneered the field of Information Theory, Claude E. Shannon proved the theoretical existence of good error-correcting codes that allow data to be transmitted virtually error-free at rates up to the absolute maximum capacity (usually measured in bits per second) of a communication channel, and with surprisingly low transmitted power (in contrast to common belief at that time). However, Shannon's work left unanswered the problem of constructing such capacity-approaching channel codes. This problem has motivated intensive research efforts during the following four decades, and has led to the discovery of fairly good codes, usually (but not always – see convolutional codes for example) obtained from sophisticated algebraic constructions. However, 3 dB or more stood between what the theory promised and the practical performance offered by error-correcting codes in the early 90's.

The introduction of Convolutional Turbo Codes (CTC) in 1993 [1,2], quickly followed by the invention of Block Turbo Codes (BTC) in 1994 [3,4], closed much of the remaining gap to capacity. Today, advanced Forward Error Correction (FEC) systems employing Turbo Codes commonly approach Shannon's theoretical limit within a few tenths of a decibel. The practical implications are numerous. Using Turbo Codes, a system designer can for example achieve a higher throughput (by a factor of 2 or more) for a given transmitted power, or, alternatively, achieve a given data rate with reduced transmitted energy. Historically, Turbo Codes were first deployed for satellite links and deep-space missions, where they offered impressive Bit-Error Rate (BER) performance beyond existing levels with no additional power requirement (a premium resource for satellites). Since then, they have made their way into 3G wireless phones, Digital Video Broadcast (DVB) systems, and Wireless Metropolitan Area Networks (WMAN). They are also considered for adoption in several emerging standards, including enhanced versions of Wi-Fi networks.

A decade after the discovery of Turbo Codes, this paper provides an overview of this advanced FEC technology. The next two sections review the basic concepts and the major evolutions in the field for both Convolutional and Block Turbo Codes. Practical issues relevant to the system designer, such as implementation complexity and high-rate circuit architectures, are also addressed, and the use of Turbo Codes in existing standards is discussed. Some personal views about the next evolutions expected in the field of channel coding are finally proposed in conclusion.

II. CONVOLUTIONAL TURBO CODES

Classical Convolutional Turbo Codes, also called Parallel Concatenated Convolutional Codes (PCCC), result from a pragmatic construction conducted by C. Berrou and A. Glavieux, based on the intuitions of G. Battail [5], J. Hagenauer and P. Hoeher [6], who, in the late 80's, highlighted the interest of introducing probabilistic processing in digital communications receivers. Previously, other researchers including P. Elias [7], R. G. Gallager [8] and M. Tanner [9] had already imagined coding and decoding systems closely related to the principles of Turbo Codes.

A. Principles of Turbo Codes

The classical Turbo Code is shown in Fig. 1 and consists of the parallel concatenation of two binary Recursive Systematic Convolutional (RSC) codes C1 and C2 separated by a permutation (interleaver) Π. Serial concatenation is also possible [10] (with its own pros and cons) but will not be discussed here. RSC codes are a key component of Turbo Codes. They are based on Linear Feedback Shift Registers (LFSR) and act as pseudo-random scramblers. RSC codes offer several advantages in comparison with classical non-recursive non-systematic convolutional codes. First, they resemble random codes, and it is known from Shannon's pioneering work that random-like codes are the key to approaching capacity. In addition, they perform better than classical convolutional codes at low signal-to-noise ratios [2]. Finally, RSC codes have the interesting property that only a small fraction of finite-weight information sequences yields finite-weight ("low redundancy") coded sequences at the encoder's output. These particular sequences are called Return To Zero (RTZ) sequences in the literature and play a fundamental role in the asymptotic performance of the Turbo Code [11,12].

… converge towards a stable final decision for d. In practice, and depending on the nature of the SISO decoder, fine-tuning operations (scaling, clipping) may be applied to the extrinsic information in order to ensure convergence within a small number of iterations.

B. Example of performance results

Table I shows some examples of performance results for the DVB-RCS Turbo Code over an AWGN channel using 8 iterations and 4-bit input quantization. We have reported the Eb/N0 level (dB) required to achieve a target Frame Error Rate (FER) of 10⁻⁴ for different code rates and block lengths. The corresponding gap ∆ with respect to the Sphere-Packing Bound (SPB) is also given. We recall that the SPB provides a theoretical lower bound on the minimum Eb/N0 required to achieve a given FER with the best codes of a given finite block size [13]. Although these performance results do not fully reflect the current state of the art in CTC, we observe that the DVB-RCS code performs very close (within 1.0 to 1.5 dB) to the theoretical limits under real implementation constraints.
… importance in order to design good permutations yielding high minimum distance (low error floors). In parallel, the introduction of EXtrinsic Information Transfer (EXIT) charts [22] and other related convergence analysis methods (density evolution, etc.) has led to a better understanding of the behavior of Turbo Codes in the turbo cliff region (convergence threshold, convergence speed). The combination of these various tools allows the system designer to carefully optimize the performance of his Turbo Codes with respect to the now classical "convergence versus minimum distance" dilemma encountered with capacity-approaching codes.

6) The art of permutation design

The pseudo-random permutation is another key component of Turbo Codes. Originally introduced to break correlation effects during the iterative decoding process, the permutation function was quickly recognized as a fundamental parameter of the code itself. When considering very large block sizes (say 30,000 bits or more), a permutation drawn at random will yield a good Turbo Code with high probability. To quote David Forney (MIT): "It sometimes seems that almost any simple codes interconnected by a large pseudo-random interleaver and decoded with sum-product decoding will yield near-Shannon-limit performance". This is no longer true when one aims at designing CTC operating on small blocks with good performance (low error floor) at low Bit-Error Rates (BER). The way the permutation is devised (together with the choice of the component codes) indeed fixes the minimum Hamming distance dmin of the Turbo Code, and therefore the corresponding achievable asymptotic coding gain Ga ≈ 10 log10(R·dmin). Regularity of the permutation is another important factor that should not be overlooked in practice. Indeed, the more regular the permutation, the easier it is to conceive high-throughput parallel decoding architectures. Designing permutations having both good structural and spreading properties actually remains an ongoing area of research that regularly inspires new contributions. Recently, however, two permutation models have been proposed that satisfy most of the requirements for a good permutation. Called Dithered Relatively Prime (DRP) permutation [23] and Almost Regular Permutation (ARP) [24] respectively, these two simple models are based on similar ideas, and combine a high-level regular permutation with local controlled disorder. These two solutions have good asymptotic performance at low BER and lead to very efficient hardware implementations. Note that the ARP permutation model has been used in the CTC adopted in the DVB-RCS and DVB-RCT standards.

C. Applications of CTC

A decade after their introduction, CTC are already in use in several industry standards. Some of them are described in Table II (see also the EchoStar system for satellite TV developed by Broadcom Corp. for another example). The corresponding four CTC commonly used in practice are shown in Fig. 3. Let us examine the relative merits and limitations of each of these codes. Since the choice of a FEC system is usually dictated by practical system constraints such as latency, residual error rate or silicon area, we will consider here three different FER regions corresponding to different Quality of Service (QoS) requirements:

Medium error rates (FER > 10⁻⁴)

This is typically the domain of Automatic Repeat reQuest (ARQ) systems and is also the most favorable range of error rates for CTC. 8-state component codes are sufficient to reach near-optimum performance. The binary CTC in Fig. 3a is suitable for rates < 1/2. The duo-binary code of Fig. 3b is preferable for higher rates (less sensitivity to puncturing patterns). In both cases, performance close to the theoretical limit is achieved with existing silicon decoders for most coding rates and block sizes, even the shortest.

Low error rates (10⁻⁹ < FER < 10⁻⁴)

16-state CTC are usually preferable to 8-state CTC in this context since they offer better performance (by about 1 dB at a FER of 10⁻⁷) in this region. The choice between the two solutions mainly depends on the desired trade-off between performance and decoding complexity. The corresponding Turbo Codes are shown in Fig. 3c and 3d. Again, binary CTC are suitable for coding rates < 1/2 and non-binary CTC should be used for higher rates. Note also that the permutation must be very carefully designed in order to maintain good performance at low error rates.
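The "regular permutation plus local disorder" idea can be illustrated with a toy model in the spirit of ARP [24]: a congruence P·i mod N (with P relatively prime to N) dithered by a short periodic cycle of offsets. The function name and the parameter values below are illustrative only; they are not the actual DVB-RCS interleaver parameters, and a runtime check guards against offset choices that break the one-to-one property.

```python
from math import gcd

def arp_permutation(N, P, shifts):
    """Illustrative almost-regular permutation: pi(i) = (P*i + s(i mod Q)) mod N.

    The regular part P*i mod N provides good spreading; the short cycle of
    offsets `shifts` adds local controlled disorder. Not every offset choice
    keeps the map bijective, so this is verified explicitly."""
    assert gcd(P, N) == 1, "P must be relatively prime to N"
    Q = len(shifts)
    assert N % Q == 0, "dither cycle length must divide the block size"
    perm = [(P * i + shifts[i % Q]) % N for i in range(N)]
    assert sorted(perm) == list(range(N)), "offsets break the bijection"
    return perm
```

For example, with N = 16, P = 5 and shifts [0, 4, 8, 12], the offsets are all multiples of the cycle length, so each residue class of indices is mapped onto a distinct residue class of positions and the map remains a permutation, while differing from the purely regular P·i mod N interleaver.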
Fig. 3. The four CTC used in practice: a) 8-state binary; b) 8-state duo-binary; c) 16-state binary; d) 16-state duo-binary.
Very low error rates (FER < 10⁻⁹)

For the time being, the minimum Hamming distances that are currently obtained with CTC cannot prevent a change of slope in the performance curves at very low error rates. An increase of about 25% in the minimum distance of the code would be necessary to make CTC attractive for those applications that operate in this error-rate region (such as optical transmission or mass storage systems, for example).

To summarize the previous discussion, 8-state CTC are particularly appropriate for ARQ systems and short to medium block sizes. On the other hand, 16-state CTC are necessary for broadcast systems, long blocks, or high coding rates. Several remaining challenges are currently under investigation. In particular, it would be desirable to reduce by half the number of iterations required to achieve convergence (from 8 to 4), and to decrease the complexity of the Max-Log-MAP decoder for 16-state Turbo Codes.

III. BLOCK TURBO CODES

Block Turbo Codes (BTC), also called Turbo Product Codes (TPC), offer an interesting alternative to CTC for applications requiring either high code rates (R > 0.8), very low error floors, or low-complexity decoders able to operate at several hundreds of megabits per second (and even higher).

A. Construction and iterative decoding

The general concept of Block Turbo Codes is based on iterative SISO decoding of product codes, which were introduced by P. Elias in 1954 [7]. Product codes are constructed by serial concatenation of two (or more) systematic linear block codes C1 and C2 with parameters (n1,k1,δ1) and (n2,k2,δ2), where ni, ki, and δi stand for the code length, code dimension and minimum Hamming distance of each component code Ci. As shown in Fig. 4, data bits are placed in a k1×k2 information matrix [M] and the rows and columns are encoded by the codes C2 and C1 respectively, yielding an n1×n2 coded matrix [C]. The product code has length n = n1·n2, dimension k = k1·k2, and code rate R = R1·R2, where Ri is the code rate of code Ci. All the rows of the coded matrix are code words of C2 and all columns are code words of C1. It follows from this important property that the minimum Hamming distance of the product code is the product δ = δ1·δ2 of the minimum Hamming distances δi of the component codes [4]. Hence it is easy to construct product codes with large minimum distance that do not suffer from error-floor problems, as CTC may in the absence of a careful permutation design.

In the iterative decoding process, all the rows and columns of the received matrix are decoded sequentially at each iteration. Thus not only the data bits but also the parity bits can exploit the extrinsic information, which is another advantage of serial over parallel concatenation.
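The product construction can be made concrete with a small sketch. For brevity it uses single parity-check (SPC) component codes with parameters (k+1, k, 2), a hypothetical choice for illustration only (practical BTC typically use stronger components such as extended Hamming or BCH codes); with SPC rows and columns the product code has δ = δ1·δ2 = 4, as the distance-product property predicts.

```python
def spc_encode(bits):
    """Single parity-check (SPC) code (k+1, k, 2): append even parity."""
    return bits + [sum(bits) % 2]

def product_encode(M):
    """Encode a k1 x k2 information matrix with SPC row and column codes,
    yielding an n1 x n2 = (k1+1) x (k2+1) coded matrix (Elias product code)."""
    rows = [spc_encode(r) for r in M]                          # encode the k1 rows
    cols = [spc_encode([r[j] for r in rows])                   # then every column,
            for j in range(len(rows[0]))]                      # checks-on-checks included
    return [[cols[j][i] for j in range(len(cols))]
            for i in range(len(rows) + 1)]
```

Encoding a 3×3 matrix with a single 1 produces a coded 4×4 matrix of Hamming weight 4 (the bit, its row check, its column check, and the corner check-on-checks), matching δ = 4; the dimensions also confirm n = n1·n2 = 16, k = k1·k2 = 9 and R = R1·R2 = 9/16.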