Enhancing the Performance of APP Values for LDPC Channel Decoding Using the BCJR Algorithm
Assistant Professor, ECE Department, Sathyabama University, Chennai, India. Email: divi.satishkumar@gmail.com, rajarrec@gmail.com
Abstract: Low-density parity-check (LDPC) decoders assume the channel state information (CSI) is known and that they have the true a posteriori probability (APP) for each transmitted bit. In most cases of interest, however, the CSI must be estimated with the help of short training sequences, and the LDPC decoder has to decode the received word using faulty estimates. The BCJR algorithm computes the a posteriori probabilities (APP) for each transmitted symbol, providing bitwise optimal decisions. In this paper, we study the uncertainty in the CSI estimate and how it affects the bit error rate (BER) at the output of the LDPC decoder. To improve the APP estimates, we propose a Bayesian equalizer that takes into consideration not only the uncertainty due to noise in the channel, but also the uncertainty in the CSI estimate, reducing the BER after the LDPC decoder. The proposed architecture designs a Bayesian equalizer for the LDPC decoder using the BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm.

Index Terms: Bayesian equalization, BCJR (Bahl-Cocke-Jelinek-Raviv) algorithm, LDPC decoding, APP (a posteriori probability).
I. INTRODUCTION

Channel coding typically assumes the symbols pass through an additive white Gaussian noise (AWGN) channel [1]. To avoid intersymbol interference (ISI) in dispersive channels, e.g., in wireless multipath channels, an optimal channel equalizer recovers a memoryless AWGN-channel sequence prior to the channel decoder [2]. Hence, the channel state information (CSI) is assumed known and the maximum likelihood sequence detector (MLSD) [2] (i.e., the Viterbi algorithm) provides at the receiver end the most probable transmitted sequence. Alternatively, the BCJR (Bahl-Cocke-Jelinek-Raviv) [3] algorithm computes the a posteriori probabilities (APP) for each transmitted symbol, providing bitwise optimal decisions. These APP are suitable inputs to reliable error-correcting codes, such as low-density parity-check (LDPC) codes, allowing maximal achievable rates for the communication system [1], [4].

The goal is to transmit a message across a noisy (AWGN) channel so that the receiver can determine this message with high probability despite the imperfections of the channel. Shannon's channel coding theorem asserts the existence of a maximum rate (bits per channel use) at which information can be transmitted reliably, i.e., with vanishing probability of error, over a given channel; this maximum rate is called the capacity of the channel. In data detection, the transmitted sequence is one of several possible sequences (representing the data symbols). In channel estimation, the transmitted sequence is assumed to be known at the receiver. In a pilot-based scheme, a known sequence (called a pilot, sounding tone, or training sequence) is transmitted and used to estimate the channel; in a decision-feedback scheme, previously detected symbols are used instead to update the channel estimate. Uncertainty is a lack of certainty: a state of limited knowledge in which it is impossible to exactly describe the existing state, a future outcome, or more than one possible outcome. Channel uncertainty limits the achievable data rates of power-constrained wideband systems. An LDPC code is a linear error-correcting code whose parity-check matrix H has a small number of nonzero elements in each row and column [4].

Maximum likelihood (ML) is the standard approach to estimate the CSI. The ML criterion uses a known training sequence in a preamble or in pilots of the transmitted data. These preambles are typically short to improve the efficiency of the transmission in time-varying channels. However, short sequences yield inaccurate CSI estimates. Therefore, the BCJR with the CSI estimated by ML, denoted hereafter as ML-BCJR, only delivers an approximation to the APP for the transmitted symbols. These inaccurate APP estimates degrade the performance of the LDPC decoder, which may fail to deliver the correct transmitted codeword or may even fail to converge at all. Several works focus on the effects of this imperfect estimation of the CSI upon the performance of the communication system [5]-[7]. Since the advent of turbo processing, some Bayesian approaches have been proposed to embed the uncertainties in the whole iterative process of equalization and decoding. Most of the proposed solutions look for iterative message-passing schemes that alternate between channel estimation and sequence estimation [8]-[12].

In this correspondence, to improve the performance of the standard ML-BCJR equalizer, we propose a Bayesian equalizer (BE), which integrates the uncertainty in the CSI to produce more accurate APP estimates. Our approach gives a direct estimation without iterating between the channel estimation and the decoder, as we provide not only a point estimate but a probability estimate. By further employing an LDPC code [4], we experimentally show that these new estimates of the APP significantly improve the performance of the channel decoder, which needs accurate APP to provide a correct estimation of the transmitted word.
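As a toy illustration of the sparse parity-check structure just described, the sketch below builds a small, hypothetical sparse matrix H (not from the paper) and applies the syndrome test H·b = 0 (mod 2) by which a decoder recognizes a valid codeword:

```python
# Toy illustration (hypothetical code, not from the paper): a sparse
# parity-check matrix H and the syndrome test H.b = 0 (mod 2).
import numpy as np

# 4x8 sparse parity-check matrix: each row has few nonzero entries.
H = np.array([
    [1, 1, 0, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 0, 1],
])

def is_codeword(b):
    """A word b is a codeword iff every parity check is satisfied."""
    return not np.any(H @ b % 2)

b = np.array([1, 1, 0, 0, 0, 1, 0, 1])   # satisfies all four checks
```

Flipping any single bit of `b` violates at least one check, which is what gives the decoder its error-detecting leverage.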
II. BAYESIAN EQUALIZATION
A. ML-BCJR Equalization

We consider the discrete-time dispersive communication system depicted in Fig. 1. A block of K message bits, m = [m_1, m_2, ..., m_K]^T, is encoded with a rate R = K/N code into b = [b_1, b_2, ..., b_N]^T. An M-ary modulation is considered, to obtain N_s = ⌈N / log2 M⌉ symbols. The block frame transmitted over the channel is u = [u_1, u_2, ..., u_{N_s}]^T. The channel H(z) is completely specified by the CSI, i.e., h = [h_1, h_2, ..., h_L]^T, where L is the length of the channel impulse response. In this correspondence, we model h as an independent Gaussian vector (corresponding to Rayleigh fading). The received signal yields

x_i = u_i^T h + w_i = v_i + w_i    (1)

where w_i represents AWGN with variance σ_w^2. Thus, we denote the received sequence by x = [x_1, x_2, ..., x_{N_s}]^T.

Fig. 1. System model.

The maximum likelihood criterion is the standard tool for channel estimation. Thus, prior to every coded block of data, we transmit a preamble with n known symbols. The receiver uses d = {x^o, u^o}, i.e., the training sequence, to estimate the channel by maximizing

h_ML = arg max_h p(x^o | u^o, h)    (2)

In the ML-BCJR equalizer, this estimate is provided to the BCJR algorithm to obtain the APP for each transmitted symbol,

p(u_i | x, h_ML),  i = 1, ..., N_s    (3)

which are the inputs to the LDPC decoder [4], which provides the maximum a posteriori estimate for m.
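Under the Gaussian noise model of (1), the ML estimate in (2) reduces to a least-squares problem. The sketch below illustrates this for a real-valued channel; the sizes (L = 3 taps, n = 15 BPSK training symbols) and all variable names are illustrative assumptions, not taken from the paper:

```python
# Sketch: ML channel estimation as least squares, x = Uo @ h + w.
# Sizes and the BPSK training alphabet are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
L, n = 3, 15                                  # channel taps, training length
h_true = rng.normal(size=L)                   # unknown CSI (real-valued sketch)
u = rng.choice([-1.0, 1.0], size=n + L - 1)   # known BPSK training symbols

# Build the training matrix: row i holds the L symbols that interfere in x_i.
Uo = np.array([u[i:i + L][::-1] for i in range(n)])
sigma_w = 0.1
x = Uo @ h_true + sigma_w * rng.normal(size=n)

# ML (= least-squares) estimate of the CSI from the training data.
h_ml, *_ = np.linalg.lstsq(Uo, x, rcond=None)
```

With a short training sequence (small n), `h_ml` deviates noticeably from `h_true`, which is exactly the CSI uncertainty the paper sets out to model.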
B. Bayesian Equalization

The ML-BCJR receiver is limited by the fact that it only takes into consideration the uncertainty in the channel noise and not the uncertainty in the estimation process, i.e., it assumes the ML estimate is the true CSI. If the training sequence is long enough this might be the case, but it need not be in most cases of interest, where we want to keep the training sequence as short as possible. To overcome this problem, we propose to use Bayesian statistics. The resulting Bayesian equalizer (BE) estimates the APP as follows:

p(u_i | x, d) = ∫ p(u_i | x, h) p(h | d) dh    (4)

where p(u_i | x, h) is given by the BCJR algorithm for each particular h, and p(h | d) is the posterior of the CSI:

p(h | d) = p(x^o | u^o, h) p(h) / p(x^o | u^o)    (5)

In our model p(h) is Gaussian and, according to (1), the likelihood is Gaussian as well:

p(x^o | u^o, h) = N(x^o; (U^o)^T h, σ_w^2 I)    (6)

p(h) = N(h; 0, C_h)    (7)

where U^o = [u^o_1, ..., u^o_n] and C_h is the covariance matrix of the prior. Hence, the numerator in (5) is a product of complex-valued Gaussians that leads to a Gaussian posterior:

p(h | d) = N(h; μ_{h|d}, C_{h|d})    (8)

μ_{h|d} = (σ_w^2 C_h^{-1} + U^o (U^o)^H)^{-1} U^o x^o    (9)

C_{h|d} = (C_h^{-1} + (1/σ_w^2) U^o (U^o)^H)^{-1}    (10)

Equation (4) can only be computed numerically, as we later show, because p(u_i | x, h) is a discrete random variable. But we can interchange the integral with the marginalization with respect to u \ u_i in the BCJR algorithm as follows:

p(u_i | x, d) = ∫ Σ_{u\u_i} p(x | u, h) p(u) p(h | d) dh    (11)

In light of (4)-(10), both the CSI posterior and the likelihood of the data are Gaussian, and the marginalization over h in (11) can be analytically computed as

p(u_i | x, d) = γ Σ_{u\u_i} p(x | u, d) p(u)    (12)

p(x | u, d) = N(x; U μ_{h|d}, U C_{h|d} U^H + σ_w^2 I)    (13)

where γ here, as in the following, is any normalization constant. We can solve the Bayesian equalizer by using either (4) or (12), since they are equivalent. But both approaches have their own limitations: the BE in (4) is not analytically tractable, and the BE in (12) needs to sum over a Gaussian with a full covariance matrix, so we cannot run the forward-backward recursions, because the Markov property is lost. We propose to use Monte Carlo sampling to solve the BE through (4), with the following steps: 1) obtain G random samples h^(g) from the posterior of the CSI in (5); 2) solve the BCJR algorithm for each sample from p(h | d); 3) average the resulting APP estimates,

p(u_i | x, d) ≈ (1/G) Σ_{g=1}^{G} p(u_i | x, h^(g))    (14)
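The Gaussian CSI posterior of (8)-(10), together with step 1 of the Monte Carlo procedure (drawing samples from it), can be sketched as follows. All dimensions, the prior, and the training matrix are illustrative assumptions; the training matrix is stored row-wise here, so its transpose plays the role of U^o in (9)-(10):

```python
# Sketch: Gaussian posterior of the CSI given training data, and G samples
# from it (step 1 of the Monte Carlo BE). Each sample would feed one BCJR
# run (step 2). Sizes, prior, and training symbols are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L, n, s2 = 3, 10, 0.01                     # taps, training length, noise var
Ch = np.eye(L)                             # prior covariance of the CSI
h_true = rng.normal(size=L)
Uo = rng.choice([-1.0, 1.0], size=(n, L))  # training matrix, one row per x_i
x = Uo @ h_true + np.sqrt(s2) * rng.normal(size=n)

# Posterior covariance and mean (Bayesian linear regression).
C_post = np.linalg.inv(np.linalg.inv(Ch) + Uo.T @ Uo / s2)
mu_post = C_post @ Uo.T @ x / s2

# Draw G samples h^(g) ~ p(h|d).
G = 500
h_samples = rng.multivariate_normal(mu_post, C_post, size=G)
```

Averaging the per-sample BCJR outputs over these G draws is what produces the Monte Carlo approximation of the BE.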
Equation (14) converges to (12) as G → ∞, by the central limit theorem. This solution is time demanding, because we have to run the BCJR algorithm G times. In the next section, we approximate the solution in (12). The resulting algorithm, the ABE, presents the same complexity as the ML-BCJR and is able to incorporate the uncertainties in the CSI estimation. We first solve a particular case of this problem.

C. BE for Memoryless Channels

The memoryless channel is an important special case, since in OFDM the ISI channel is divided into non-interfering memoryless channels and, consequently, the one-tap channel model is of particular interest. Besides, it can be analytically solved without needing to rely on the ABE. We do not have to run the BCJR algorithm to compute the APP, since x_i only depends on u_i. The distribution of the APP in (4) for an AWGN channel can be obtained by first computing

p(u_i | x_i, h) = γ exp( -(x_i - u_i h)^2 / (2 σ_w^2) ) p(u_i)    (15)

Since both the CSI posterior and the likelihood are Gaussian, the marginalization over h can be computed in closed form,

p(x_i | u_i, d) = N(x_i; u_i μ_{h|d}, σ_w^2 + |u_i|^2 C_{h|d})    (16)

which yields

p(u_i | x_i, d) = γ exp( -(x_i - u_i μ_{h|d})^2 / (2 (σ_w^2 + |u_i|^2 C_{h|d})) ) p(u_i)    (18)

the APP estimate for the Bayesian equalizer in memoryless channels. In this case, it can be seen that the ABE provides the same result as the BE, and the estimate has two independent error components (see the denominator of the exponent in (18)): one due to the noise and the other due to the estimation error. The ML-BCJR would only count the noise term and would provide overconfident estimates to the LDPC decoder.

Fig. 3. BER performance for the BE (solid lines), the ABE (dotted lines), and the ML-BCJR equalizer (dashed lines), for a channel with 6 taps: before the decoder with n = 15 and n = 40, and after the decoder with n = 15 (o) and n = 40.
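For the memoryless one-tap case, the difference between the Bayesian and ML APP estimates reduces to the inflated variance in the exponent of (18). A minimal BPSK sketch, in which every number (noise variance, CSI-estimate variance, channel estimate, received sample) is an illustrative assumption:

```python
# Sketch: BPSK APP for a one-tap channel. The BE adds the CSI-estimate
# variance to the noise variance; the ML-BCJR uses the noise variance alone,
# so its posterior is more confident. All numbers are illustrative.
import numpy as np

def app_bpsk(x, h_mean, noise_var):
    """Posterior P(u = +1 | x) for u in {+1, -1} with equal priors."""
    like = np.exp(-(x - np.array([1.0, -1.0]) * h_mean) ** 2 / (2 * noise_var))
    return like[0] / like.sum()

s2, var_h = 0.1, 0.2   # channel-noise and CSI-estimate variances (toy values)
h_hat = 0.9            # estimated one-tap CSI
x = 0.7                # received sample

p_ml = app_bpsk(x, h_hat, s2)            # ignores var_h: overconfident
p_be = app_bpsk(x, h_hat, s2 + var_h)    # inflated variance, as in (18)
# The BE posterior is pulled toward 0.5 relative to the ML posterior.
```

Feeding the softer `p_be` values to the LDPC decoder is what avoids the overconfident log-likelihood ratios that degrade its convergence.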
As the order of the modulation grows, the number of inaccurate operations grows, and we can expect a higher degradation of the equalizer performance, which finally yields more inaccurate APP estimates. Thus, if we increase the order of the modulation, we can expect a greater gain for the proposed Bayesian equalizer. To illustrate this point, in Fig. 5(b) we include the WER curves for the BE and the ML-BCJR equalizer after the LDPC decoder, assuming 16-QAM modulation, L = 3, and different lengths of the training sequence. We can observe in Fig. 5(b) a gain of over 0.5 dB for Eb/N0 above 9 dB and n = 10. In all the curves, the gain of the Bayesian equalizer increases compared to the previous results for the lower-order modulation.
Fig. 4. WER performance for the ABE (solid lines) and the ML-BCJR equalizer (dashed lines) after the LDPC decoder, for a channel with 3 taps, QPSK (a) and 16-QAM (b) modulation, and different lengths of the training sequence: n = 5 (o), n = 10, n = 15, and n = 25. In dashed-dotted line, the BCJR with perfect CSI.
V. CONCLUSION
The Bayesian model introduced in this correspondence, in which the posterior probability of the estimated CSI is taken into consideration, is a principled solution, because it takes into account not only the uncertainty due to the noise, but also the uncertainty about the CSI estimation. The maximum likelihood solution and the Bayesian equalizer perform similarly when we predict the transmitted symbol. However, the BE presents more accurate APP estimates than the standard maximum likelihood solution. We measure the quality of the APP estimates using an LDPC decoder, the standard channel code in today's communication systems, since the LDPC decoder needs the exact APP to perform optimally. We also propose an approximate Bayesian equalizer that keeps most of the gain of the Bayesian equalizer at the same computational cost as the ML-BCJR equalizer. This gain is remarkable in scenarios with short training sequences, long channels, and multilevel constellations.

REFERENCES
[1] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 2006.
[2] J. G. Proakis, Digital Communications, 5th ed. New York: McGraw-Hill, 2008.
[3] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, vol. 20, no. 2, pp. 284-287, Mar. 1974.
[4] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge, U.K.: Cambridge Univ. Press, Mar. 2008.
[5] N. Merhav, G. Kaplan, A. Lapidoth, and S. S. Shitz, "On information rates for mismatched decoders," IEEE Trans. Inf. Theory, vol. 40, no. 6, pp. 1953-1967, Nov. 1994.
[6] A. Lapidoth and S. Shamai, "Fading channels: How perfect need 'perfect side information' be?," IEEE Trans. Inf. Theory, vol. 48, no. 5, pp. 1118-1134, May 2002.
[7] A. Wiesel, Y. Eldar, and A. Yeredor, "Linear regression with Gaussian model uncertainty: Algorithms and bounds," IEEE Trans. Signal Process., vol. 56, no. 6, pp. 2194-2205, Jun. 2008.
[8] R. Raheli, A. Polydoros, and C.-K. Tzou, "Per-survivor processing: A general approach to MLSE in uncertain environments," IEEE Trans. Commun., vol. 43, no. 2/3/4, pp. 354-364, Feb. 1995.
[9] A. Anastasopoulos and K. Chugg, "Adaptive soft-input soft-output algorithms for iterative detection with parametric uncertainty," IEEE Trans. Commun., vol. 48, no. 10, pp. 1638-1649, Oct. 2000.
[10] M. Skoglund, J. Giese, and S. Parkvall, "Code design for combined channel estimation and error protection," IEEE Trans. Inf. Theory, vol. 48, no. 5, pp. 1162-1171, May 2002.
[11] O. Coskun and K. Chugg, "Combined coding and training for unknown ISI channels," IEEE Trans. Commun., vol. 53, no. 8, pp. 1310-1322, Aug. 2005.
[12] C.-L. Wu, P.-N. Chen, Y. Han, and M.-H. Kuo, "Maximum-likelihood priority-first search decodable codes for combined channel estimation and error correction," IEEE Trans. Inf. Theory, vol. 55, no. 9, pp. 4191-4203, Sept. 2009.
[13] L. Salamanca, J. J. Murillo-Fuentes, and F. Pérez-Cruz, "Channel decoding with a Bayesian equalizer," in Proc. IEEE Int. Symp. on Inf. Theory (ISIT), Jun. 2010, pp. 1998-2002.
[14] L. Salamanca, J. J. Murillo-Fuentes, and F. Pérez-Cruz, "Bayesian BCJR for channel equalization and decoding," in Proc. IEEE Int. Workshop Mach. Learning for Signal Process., Aug.-Sep. 2010, pp. 53-58.
[15] T. P. Minka, "A family of algorithms for approximate Bayesian inference," Ph.D. dissertation, Massachusetts Inst. of Technol., Cambridge, MA, 2001.
[16] M. J. Wainwright and M. I. Jordan, "Graphical models, exponential families, and variational inference," Found. Trends Mach. Learning, 2008.
[17] L. Salamanca, J. J. Murillo-Fuentes, and F. Pérez-Cruz, "Bayesian equalization for LDPC channel decoding," IEEE Trans. Signal Process., vol. 60, no. 5, May 2012.