
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 43, NO. 11, NOVEMBER 1995

Transactions Letters
Viterbi Decoding of the (63, 57) Hamming Code-Implementation and Performance Results
A. M. Michelson and D. F. Freeman

Abstract- Use of the Viterbi decoder to decode the (63, 57) Hamming code is considered. Implementation and performance of systematic and nonsystematic codes are addressed. It is shown that a Viterbi decoder for the constraint length seven, rate-1/2 convolutional code can be used to decode both systematic and nonsystematic (63, 57) Hamming codes, but an additional step is needed to complete the decoding of the systematic code. Bounds and simulation results for postdecoding bit-error probability are given, and it is shown that the systematic code performs 0.4 dB better than the nonsystematic code. A heuristic explanation is provided.

(Paper approved by S. G. Wilson, the Editor for Coding Theory and Applications of the IEEE Communications Society. Manuscript received October 14, 1993; revised April 6, 1994. This work was supported by the Department of the Navy, Space and Naval Warfare Systems Command under Contract N00039-91-C-0113. This paper was presented in part at the Military Communications Conference, MILCOM '94, Fort Monmouth, NJ, October 1994. The authors are with the GTE Government Systems Corporation, Needham Heights, MA 02192 USA. IEEE Log Number 9414713.)

I. INTRODUCTION

In 1978, Wolf showed that the Viterbi algorithm can be used to decode linear block codes [1]. For (n, k) binary block codes, the set of codewords in the code space can be represented as a trellis with 2^(n-k) states. Decoding with the Viterbi algorithm follows directly. Wolf focused on decoding systematic block codes; a recent letter to the TRANSACTIONS [2] considered the use of the Viterbi algorithm to decode nonsystematic block codes. Specifically, it was shown that a truncated convolutional code which is severely punctured can be decoded with the Viterbi algorithm.

Puncturing is a technique that has been widely used with convolutional codes to obtain good high rate codes from a lower rate code by judiciously deleting code symbols [3]. For example, to form a rate-2/3 code from a rate-1/2 code, one could delete every fourth code symbol. Since puncturing can weaken a code by reducing its minimum distance, care is generally taken in selecting the code symbols to be deleted [4], [5]. However, severely punctured codes have more sweeping deletions: a severely punctured version of a rate-1/2 code omits the entire output of one of the two encoding parity circuits. If we further assume transmission of finite length messages, terminated with tail bits, the only redundancy conveyed is contained in the code tail.

Severely punctured versions of the constraint length seven, rate-1/2 convolutional code were considered in [2]. It was shown that for transmission of short to moderate length messages (approximately 50-5000 information bits), significant coding gains can be achieved even though the code rates are large. The coding gains reported ranged from 2.4 to more than 3 dB, depending on message length. These gains are based on message level performance, specifically, the probability of correct message decoding. It was shown that in many cases the severely punctured codes are good distance-2 codes, and some are cyclic. If the message contains 57 information bits and the generator polynomial is primitive, the (63, 57) Hamming code results.

The interest in using punctured versions of the constraint length seven, rate-1/2 code is that punctured codes can be decoded with the same Viterbi decoder used for the unpunctured code when erasures are substituted for the deleted symbols. It is necessary, though, to increase the amount of path history memory in order to avoid performance degradation. In this paper, we consider using the standard Viterbi decoder to decode the (63, 57) Hamming code.

Both systematic and nonsystematic Hamming codes are addressed. Of course, the probability of incorrect decoding is the same for systematic and nonsystematic codes, since they contain the same set of codewords. However, it is shown that when postdecoding bit-error rate is the figure of merit, a systematic code performs better. It is further shown that the Viterbi decoder can be used to decode both, although an additional step is required for systematic codes.

II. SEVERELY PUNCTURED CONVOLUTIONAL CODES

The constraint length seven, rate-1/2 convolutional code is encoded with the linear feedforward shift register shown in Fig. 1. The code has two generator polynomials, g1(x) and g2(x), where

    g1(x) = x^6 + x^3 + x^2 + x + 1                                   (1)
    g2(x) = x^6 + x^5 + x^3 + x^2 + 1.                                (2)

We note that g2(x) is primitive and g1(x) is not. These generators specify the tap connections indicated in the figure. To encode an m-bit message, the m information bits followed by six tail bits (all zeros by convention) are inserted in the circuit one bit at a time. The encoder computes two sequences of output parity bits, P1(i) and P2(i), 1 <= i <= m + 6, for transmission on the channel.

[Fig. 1: six-stage feedforward shift register with tap connections realizing g1(x) and g2(x).]
Fig. 1. Encoder for the constraint length seven, rate-1/2 convolutional code.
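To make the encoding rule concrete, the following Python sketch implements the feedforward encoder of Fig. 1 with the generators in (1) and (2). The function name, the list-based register, and the convention that the x^0 tap acts on the current input bit are illustrative assumptions rather than anything specified in the letter.

# Generator polynomials (1) and (2), listed lowest-degree coefficient first.
G1 = [1, 1, 1, 1, 0, 0, 1]   # g1(x) = x^6 + x^3 + x^2 + x + 1
G2 = [1, 0, 1, 1, 0, 1, 1]   # g2(x) = x^6 + x^5 + x^3 + x^2 + 1

def conv_encode(info_bits):
    """Constraint length seven, rate-1/2 feedforward encoder: the m message
    bits followed by six zero tail bits are shifted in one at a time, and
    each shift produces one parity bit per generator."""
    reg = [0] * 6                       # six memory stages, initially zero
    p1, p2 = [], []
    for bit in list(info_bits) + [0] * 6:
        window = [bit] + reg            # current input plus register contents
        p1.append(sum(g * w for g, w in zip(G1, window)) % 2)
        p2.append(sum(g * w for g, w in zip(G2, window)) % 2)
        reg = [bit] + reg[:-1]          # shift the new bit into the register
    return p1, p2                       # each sequence has length m + 6

# A 57-bit message gives 63 bits of P2; deleting P1 entirely yields the
# severely punctured (63, 57) code discussed in Section II.
msg = [1, 0, 1] + [0] * 54
p1, p2 = conv_encode(msg)
print(len(p1), len(p2))                 # 63 63

Because the encoder is feedforward, each parity stream is simply the convolution of the padded message with the corresponding generator, which is the polynomial-multiplication view used below.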



At the receiver, maximum likelihood decoding is normally used to compute an estimate of the intended message, based on the reception of the channel symbols corresponding to P1(i) and P2(i). When the received data is used in this way, the resulting performance is described in [6]. Here, we consider the code that is obtained when the output of the first parity circuit (P1(i), 1 <= i <= m + 6) is deleted, forming a severely punctured code. The resulting code can be viewed as a truncated tree code with rate m/(m + 6) or as an (m + 6, m) block code. We further note that the encoder is a polynomial multiplication circuit [7]. That is, if c(x) represents an encoded word and i(x) the corresponding information sequence, a codeword polynomial in the severely punctured code is given by

    c(x) = i(x) g2(x).                                                (3)

This code is clearly linear, and all of the codewords are evenly divisible by the generator g2(x). The code defined by (3) is nonsystematic. The closely related expression

    c(x) = x^6 i(x) + [x^6 i(x) mod g2(x)]                            (4)

defines a systematic code that contains the same set of codewords. For simplicity, we refer to these as the systematic and nonsystematic codes, neglecting those with other generating polynomials. Since g2(x) is primitive, we have the factorization

    x^63 + 1 = g2(x) h(x)                                             (5)

where h(x) is a degree-57 polynomial. Furthermore, when j is a power of two,

    (x^63 + 1)^j = x^(63j) + 1 = g2(x) h'(x).                         (6)

Consequently, the codes with block length n = 63j are cyclic; for other block lengths, we have shortened cyclic codes. It is easy to see that the codes with block length n > 63 have minimum distance two: x^63 + 1 is divisible by g2(x) and is therefore a codeword. The shifts of x^63 + 1 are also codewords. An interesting case occurs when n = 63. Since g2(x) is primitive, the (63, 57) Hamming code is formed. In this case, the minimum distance is three.
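The cyclic structure and the minimum-distance claim are easy to verify numerically. The short Python sketch below represents GF(2) polynomials as integers (bit k holds the coefficient of x^k); the helper names and the brute-force search over low-weight words are my own illustration, not the authors' procedure.

from itertools import combinations

G2 = 0b1101101          # g2(x) = x^6 + x^5 + x^3 + x^2 + 1

def gf2_divmod(num, den):
    """Polynomial division over GF(2); polynomials are bit masks (bit k = x^k)."""
    q, deg_den = 0, den.bit_length() - 1
    while num and num.bit_length() - 1 >= deg_den:
        shift = (num.bit_length() - 1) - deg_den
        q ^= 1 << shift
        num ^= den << shift
    return q, num       # (quotient, remainder)

# (5): x^63 + 1 is divisible by g2(x), so the length-63 code is cyclic.
_, r = gf2_divmod((1 << 63) | 1, G2)
print("g2 divides x^63 + 1:", r == 0)

# Minimum distance of the (63, 57) code: a polynomial of degree < 63 is a
# codeword exactly when g2(x) divides it, so search all words of weight 1-3.
def min_weight_codeword(max_weight=3):
    for w in range(1, max_weight + 1):
        for positions in combinations(range(63), w):
            word = 0
            for p in positions:
                word |= 1 << p
            if gf2_divmod(word, G2)[1] == 0:
                return w, positions
    return None

print("lowest-weight nonzero codeword:", min_weight_codeword())   # weight 3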

III. MAXIMUM LIKELIHOOD DECODING

A Viterbi decoder for the constraint length seven, rate-1/2 convolutional code can be used to decode any severely punctured, truncated code defined by (3). The input to the decoder is the set of received channel symbols corresponding to the P2(i), together with erasures for the P1(i). When the path history memory is not truncated, true maximum likelihood decoding is realized [6]. We are assuming that a priori knowledge of the tail bit values is used in the decoding process.

It is useful here to think of Viterbi decoding as a two step process. First, the decoder searches the entire code space and finds the most probable encoded channel sequence, conditioned on the received symbols. This step is identical for both systematic and nonsystematic codes. Then, the decoder determines the information sequence that corresponds to the selected channel code sequence. For any nonsystematic code defined by (3), that second step is equivalent to polynomial division by the generator. However, for the equivalent systematic code defined by (4), polynomial division does not produce the desired information sequence. Rather, it produces the sequence

    i'(x) = c'(x)/g2(x).                                              (7)

The polynomial i'(x) corresponds to the information sequence that would have produced the codeword polynomial c'(x), had a nonsystematic code been used. We, therefore, see that the systematic code can be decoded using a standard Viterbi decoder if an additional step is added. That step is multiplication by the generator polynomial, which forms

    c'(x) = g2(x) i'(x).                                              (8)

The information bits in c'(x) are of course simply the high order bits. We consider the case n = 63, which gives the (63, 57) Hamming code. As we have just seen, the standard Viterbi decoder will decode the nonsystematic code; if the output is multiplied by g2(x), it will decode the systematic Hamming code.
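The extra step for the systematic code amounts to one GF(2) polynomial multiplication after the division in (7). The Python sketch below shows that second decoding step, assuming the standard decoder's output is available as the quotient polynomial i'(x); the bit-mask representation and helper names are illustrative choices, and the self-test builds a systematic codeword in the form assumed in (4).

G2 = 0b1101101                           # g2(x) = x^6 + x^5 + x^3 + x^2 + 1

def gf2_mul(a, b):
    """Multiply two GF(2) polynomials stored as bit masks (bit k = x^k)."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def gf2_divmod(num, den):
    """GF(2) polynomial division; returns (quotient, remainder)."""
    q, deg_den = 0, den.bit_length() - 1
    while num and num.bit_length() - 1 >= deg_den:
        shift = (num.bit_length() - 1) - deg_den
        q ^= 1 << shift
        num ^= den << shift
    return q, num

def info_bits(i_prime, systematic):
    """Second decoding step.  The standard Viterbi decoder delivers the
    quotient i'(x) of (7).  For the nonsystematic code this is already the
    message; for the systematic code, multiplying by g2(x) as in (8)
    rebuilds c'(x), whose 57 high-order coefficients are the message."""
    if not systematic:
        return [(i_prime >> k) & 1 for k in range(57)]
    c_prime = gf2_mul(G2, i_prime)       # (8)
    return [(c_prime >> (k + 6)) & 1 for k in range(57)]

# Round-trip check with a systematic codeword built as in (4):
msg = 0b101 << 54                        # an arbitrary 57-bit message
_, rem = gf2_divmod(msg << 6, G2)
c = (msg << 6) ^ rem                     # systematic codeword, divisible by g2
i_prime, _ = gf2_divmod(c, G2)           # what the standard decoder would output
print(info_bits(i_prime, systematic=True) == [(msg >> k) & 1 for k in range(57)])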

A simulation of the constraint length seven, rate-1/2 convolutional code can therefore be used to obtain results for maximum likelihood decoding of the systematic and nonsystematic (63, 57) Hamming codes. We assume binary antipodal signaling, the unquantized white Gaussian noise channel, and coherent detection, and therefore model the matched filter outputs as Gaussian random variables with mean +/- sqrt(Es) and variance N0/2, where Es is the received signal energy per channel symbol and N0 is the single-sided noise power spectral density.

Performance estimates are also easily obtained analytically with the union bound. The probability of incorrect decoding is bounded by

    P(E) <= sum_{j=d}^{n} Aj (1/2) erfc( sqrt(j Es/N0) )              (9)

where d is the minimum distance of the code (three in this case), Aj is the number of codewords of weight j, and erfc(.) is the complementary error function [2].
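As a rough check on (9), the dominant j = 3 term is easy to evaluate: a length-63 Hamming code has n(n-1)/6 = 651 codewords of weight three, and Es = (57/63) Eb for this code rate. The Python snippet below is a truncated, leading-term evaluation under those assumptions, not the full bound plotted in Fig. 2.

import math

def union_bound_leading_term(eb_no_db, rate=57.0 / 63.0, a3=651):
    """Leading (j = 3) term of the union bound (9):
    P(E) <= sum_j Aj * (1/2) * erfc(sqrt(j * Es/N0)), with Es/N0 = rate * Eb/N0."""
    es_no = rate * 10.0 ** (eb_no_db / 10.0)
    return a3 * 0.5 * math.erfc(math.sqrt(3.0 * es_no))

for ebno in (5.0, 6.0, 7.0, 8.0):
    print(f"Eb/N0 = {ebno:.1f} dB   leading-term bound = {union_bound_leading_term(ebno):.2e}")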


TABLE I
PROBABILITY OF ERROR IN AN INFORMATION BIT POSITION GIVEN INCORRECT
DECODING TO A WEIGHT-j CODE WORD, (63, 57) HAMMING CODE

Code Word Weight (j)   Systematic (63, 57) Code   Nonsystematic (63, 57) Code
         3                     0.04762                     0.26413
         4                     0.06847                     0.31747
         5                     0.07937                     0.35220
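Entries of this kind can be estimated by enumerating the low-weight codewords, under the natural reading that Pj is the average fraction of the 57 information positions in error when the decoder errs to a codeword at distance j. For the systematic code the information positions are taken to be the 57 high-order positions; for the nonsystematic code the erroneous codeword is first divided by g2(x). Both that reading and the helper names in the Python sketch below are my assumptions; the sketch is meant to track, not reproduce verbatim, the tabulated values.

from itertools import combinations

G2 = 0b1101101                     # g2(x) = x^6 + x^5 + x^3 + x^2 + 1

def gf2_divmod(num, den):
    """GF(2) polynomial division on bit masks; returns (quotient, remainder)."""
    q, deg_den = 0, den.bit_length() - 1
    while num and num.bit_length() - 1 >= deg_den:
        shift = (num.bit_length() - 1) - deg_den
        q ^= 1 << shift
        num ^= den << shift
    return q, num

def p_j(j):
    """Average fraction of the 57 information bits in error, given incorrect
    decoding to a weight-j codeword, for the systematic and nonsystematic
    (63, 57) codes."""
    sys_sum = nonsys_sum = count = 0
    for positions in combinations(range(63), j):
        word = 0
        for p in positions:
            word |= 1 << p
        quotient, rem = gf2_divmod(word, G2)
        if rem:                    # weight-j word that is not a codeword
            continue
        count += 1
        sys_sum += bin(word >> 6).count("1")      # errors in the high-order info positions
        nonsys_sum += bin(quotient).count("1")    # error burst after division by g2
    return sys_sum / (57 * count), nonsys_sum / (57 * count)

for j in (3, 4):                   # weight 5 also works, just slower
    print(j, p_j(j))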

[Fig. 2: plot of postdecoding bit-error rate (1E-6 to 1E-1) versus Eb/N0 in dB for the systematic and nonsystematic codes, union bounds, and uncoded binary antipodal signaling.]
Fig. 2. Postdecoding bit-error rate of the (63, 57) Hamming code with maximum likelihood decoding.

[Fig. 3: plot of postdecoding bit-error rate (1E-5 to 1E-1) versus Eb/N0 in dB for several path history memory lengths.]
Fig. 3. Postdecoding bit-error rate of the systematic (63, 57) Hamming code with truncated path history memory length.

Since the weight distributions of the systematic and nonsystematic codes are identical, this expression applies to both. A similar bound can be given for the postdecoding bit-error probability:

    Pb <= sum_{j=d}^{n} Pj Aj (1/2) erfc( sqrt(j Es/N0) )

where Pj is the probability that an information bit is in error, given incorrect decoding to a weight-j channel code sequence. Note that since Pj may be different for the systematic and nonsystematic codes, the two codes can have different postdecoding bit-error probabilities.

For j small, the Pj were found by exhaustive enumeration of the codewords, for both the systematic and nonsystematic codes. These results are given in Table I. Note that for the systematic code we have Pj approximately equal to j/n, as can be expected. For the nonsystematic code, the Pj are larger, at least for j small. This is because for a nonsystematic code, we divide the decoded word by the generator g2(x) to obtain the information bits. Until the first channel error bit is processed, that division operation will produce the correct result. However, after the first channel error bit is reached, a burst of errors is produced, and those errors can propagate to the end of the decoded word. Thus, with the nonsystematic code, a weight-j channel error pattern can be mapped into a long string of errors, even when j is small. It is not surprising, then, that Pj is larger for the nonsystematic code, and we can anticipate that the systematic code will perform better.

Simulation results for the systematic and nonsystematic (63, 57) Hamming codes are shown in Fig. 2, along with the union bounds. Postdecoding bit-error probability is plotted as a function of Eb/N0, where Eb is the received signal energy per information bit. These results assume that the path history memory is not truncated. The performance of binary antipodal signaling with coherent detection is also shown in Fig. 2. We first note that the simulation results and bounds are in good agreement; for small postdecoding bit-error probabilities, the simulation and bound results agree to within 0.1 dB. Second, the systematic code is about 0.4 dB more powerful than the nonsystematic code for small postdecoding bit-error rates. For low signal-to-noise ratios, the difference is larger. Finally, we note that a significant coding gain is achieved. For example, to achieve Pb = 10^-6, Eb/N0 = 7.4 dB is required with the systematic code; this represents a coding gain of 3.1 dB over uncoded operation.

It has been observed previously that the performance of punctured convolutional codes with Viterbi decoding is sensitive to the amount of path history memory [4]. As the degree of puncturing increases, it is necessary to increase the amount of path history provided. This effect is more pronounced here, as can be seen from Fig. 3, which shows performance for the systematic code. Note that when the path histories are truncated, even by a small amount, the performance degradation is severe. (Similar results hold for

2656

EEE TRANSACTIONS ON COMMUNICATIONS, VOL. 43, NO. 11, NOVEMBER 1995

the nonsystematic code.) We therefore see that the path history memory should not be truncated in practice.

IV. SUMMARY AND CONCLUDING REMARKS

We have shown that the Viterbi decoder for the constraint length seven, rate-1/2 convolutional code can be used to decode the nonsystematic (63, 57) Hamming code. When the output is multiplied by the generator polynomial, a decoder for the systematic code is realized. It was also shown that when postdecoding bit-error probability is compared, the systematic code outperforms the nonsystematic code by 0.4 dB. Of course, in terms of codeword or message level performance, the systematic and nonsystematic codes exhibit identical performance. Performance is, however, sensitive to the amount of path history memory provided, and, to achieve good performance, the entire path history must be utilized. When this is done, the Viterbi decoder implements maximum likelihood decoding, and the systematic Hamming code yields a coding gain of 3.1 dB at Pb = 10^-6.

ACKNOWLEDGMENT

The authors would like to thank Mr. C. R. Brown for his encouragement and support.

REFERENCES

[1] J. K. Wolf, "Efficient maximum likelihood decoding of linear block codes using a trellis," IEEE Trans. Inform. Theory, vol. IT-24, pp. 76-80, Jan. 1978.
[2] A. M. Michelson and G. Rosen, "The performance of a severely punctured convolutional code, some high rate distance-two block codes, and a Hamming code with maximum likelihood decoding," IEEE Trans. Commun., vol. 42, pp. 196-199, Feb. 1994.
[3] J. B. Cain, G. C. Clark, Jr., and J. M. Geist, "Punctured convolutional codes of rate (n-1)/n and simplified maximum likelihood decoding," IEEE Trans. Inform. Theory, vol. IT-25, pp. 97-100, Jan. 1979.
[4] Y. Yasuda, K. Kashiki, and Y. Hirata, "High-rate punctured convolutional codes for soft decision Viterbi decoding," IEEE Trans. Commun., vol. COM-32, pp. 315-319, Mar. 1984.
[5] J. Hagenauer, "Rate compatible punctured convolutional codes," in Proc. IEEE ICC '87, paper 29.1, pp. 1032-1036.
[6] A. M. Michelson and A. H. Levesque, Error-Control Techniques for Digital Communication. New York: Wiley, 1985.
[7] R. E. Blahut, Theory and Practice of Error Control Codes. Reading, MA: Addison-Wesley, 1983.
