
RAO-BLACKWELLISED PARTICLE FILTERS: EXAMPLES OF APPLICATIONS

Frédéric Mustière, Miodrag Bolić, Martin Bouchard
School of Information Technology and Engineering, University of Ottawa
e-mail: mustiere@site.uottawa.ca, mbolic@site.uottawa.ca, bouchard@site.uottawa.ca

IEEE CCECE/CCGEI, Ottawa, May 2006

Abstract: In this work, we present some examples of applications of the so-called Rao-Blackwellised Particle Filter (RBPF). RBPFs are an extension of Particle Filters (PFs) which is applicable to conditionally linear-Gaussian state-space models. Although RBPF introductions and reviews may be found in many existing sources, going through the specific vocabulary and concepts of particle filtering can sometimes prove time-consuming for the non-initiated reader wishing to experiment with alternative algorithms. The goal of this paper is to introduce RBPF-based methods in an accessible manner via a main algorithm, which is detailed enough to be readily applied to a wide range of problems. To illustrate the practicality and convenience of the approach, the algorithm is then tailored to two examples from different fields. The first example is related to system identification, and the second is an application to speech enhancement.

Keywords: Particle filters, RBPF, Rao-Blackwellised particle filters, tutorial, system identification, speech enhancement.

1 Introduction

Particle filters are a family of algorithms operating on systems which can be modelled by discrete-time state-space equations, where the evolution of the state x and its relationship to the measurement z is conveniently represented as follows:

x_k = f(x_{k-1}, w_k)   (1)

z_k = h(x_k, v_k)   (2)

PFs can compute online an approximation of the state through equations (1) and (2) under weak assumptions. The functions f and h are assumed to be known and may depend on time; v_k and w_k are noises which may, in theory, be of any nature (although non-Gaussian noises may substantially complicate the derivation of the PF algorithm). Provided that the state x contains enough information, a large class of problems can be written in the form of equations (1) and (2).
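As a small, self-contained illustration (not taken from the paper), the following Python sketch simulates a generic scalar model of the form (1)-(2); the particular choices of f, h, noise levels, and initial state are placeholders of ours.

```python
import numpy as np

def simulate(f, h, x0, steps, sigma_w=0.1, sigma_v=0.1, seed=0):
    """Simulate x_k = f(x_{k-1}, w_k) and z_k = h(x_k, v_k) for a scalar state."""
    rng = np.random.default_rng(seed)
    x, xs, zs = x0, [], []
    for _ in range(steps):
        x = f(x, sigma_w * rng.standard_normal())          # state evolution, equation (1)
        zs.append(h(x, sigma_v * rng.standard_normal()))    # noisy measurement, equation (2)
        xs.append(x)
    return np.array(xs), np.array(zs)

# Placeholder example: nonlinear transition, linear noisy observation.
xs, zs = simulate(lambda x, w: np.cos(x) + np.sin(x) + w,
                  lambda x, v: x + v, x0=0.0, steps=100)
```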
Rao-Blackwellised Particle Filters (RBPFs) can be seen as a form of constrained PFs applicable to a subclass of state-space models, where equations (1) and (2) can be written in a conditionally linear-Gaussian form (see Section 2 below). In such cases, RBPFs allow the use of fewer particles to obtain a performance similar to that of PFs, even though more computations must be carried out per particle. Reviews of RBPF algorithms can be found in many sources, including [2-4, 9]; however, the experience of the authors is that some sources, as detailed and rigorous as they are, can still appear quite intimidating to the reader not initiated to the nomenclature and the many concepts specific to particle filtering. The attempt here is to present the algorithm in the most accessible way possible.

In the following pages, a practical RBPF algorithm is presented, along with a description of the range of problems it targets. We then apply the algorithm to two detailed examples from different fields, for which a tailored RBPF is derived from the generic algorithm. In the first example, we show how RBPFs can be applied to a non-linear system identification problem, where an unknown time-varying FIR filter is applied to an unknown signal evolving in a known non-linear fashion. As a second example, we present a basic RBPF-based solution to speech denoising problems, which is the basis for more complex existing algorithms, such as the ones presented in [3, 7, 9]. Concrete simulation results are presented for both cases. We conclude on the advantages of using RBPFs in the light of the examples shown.

2 An RBPF Algorithm

We choose not to repeat here the generic PF algorithm, although we briefly present the idea (the PF algorithm can be found in detail in several sources, e.g. [1, 2]). One may see particle filtering as a genetic algorithm, a "brute force" simulation. At each step, the algorithm draws a large number of possible candidates for the state. As a measurement is received, scores (or weights) are assigned to the candidates, depending on how well they fit the measurement (and, in fact, the sequence of past measurements). Only the fittest candidates (in a probabilistic sense) survive onto the next step, and the other ones are discarded.
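To make this "brute force" picture concrete, here is a minimal bootstrap particle filter update for a scalar model; the specific transition, Gaussian likelihood, and systematic resampler are illustrative choices of ours, not the paper's algorithm.

```python
import numpy as np

def bootstrap_pf_step(particles, z, f, sigma_w, sigma_v, rng):
    """One bootstrap PF update for x_k = f(x_{k-1}) + w_k, z_k = x_k + v_k."""
    # Propagate every candidate through the (possibly nonlinear) transition.
    particles = f(particles) + sigma_w * rng.standard_normal(particles.shape)
    # Score each candidate by how well it explains the new measurement z.
    weights = np.exp(-0.5 * ((z - particles) / sigma_v) ** 2)
    weights /= weights.sum()
    # Systematic resampling: the fittest candidates survive, the others are discarded.
    positions = (rng.random() + np.arange(len(particles))) / len(particles)
    keep = np.searchsorted(np.cumsum(weights), positions)
    return particles[keep], particles[keep].mean()
```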
A Rao-Blackwellised particle filter can be seen as an enhanced PF applicable to a wide range of problems in which dependencies within the state vector can be analytically exploited.

In practical terms, suppose that we are able to model the evolution of a quantity of interest, x2_k, using the following time-varying equations:

x2_k = A_k x2_{k-1} + B_k u_k + G_k w_k   (3)

z_k = C_k x2_k + D_k u_k + H_k v_k   (4)

but with one or more of the parameters A_k, B_k, C_k, D_k, G_k, H_k or u_k unknown, and evolving possibly under non-linear, non-Gaussian conditions.

In (3) and (4), w and v are zero-mean, unit-covariance Gaussian random vectors; A, B, C, D, G, and H have the dimensions of matrices, and u that of a vector (see footnote 1). This type of system is sometimes termed conditionally linear-Gaussian. In this situation, a possibility to solve the problem is to form another set of variables describing those parameters, say x1_k, and then to apply an RBPF algorithm to the whole state x_k = {x1_k; x2_k}. To be able to do so, there are two prerequisites. First, we must be able to define the probability density p(x1_k | x1_{k-1}); that is, we must either know or conjecture the evolution of the unknown variables x1_k (see footnote 2). Secondly, it must be possible to draw samples from p(x1_k | x1_{k-1}). Note that the elements of x1_k do not have to be identical to the unknown parameters among A_k, B_k, C_k, D_k, G_k, H_k, and u_k; the only requirement is that of a one-to-one relationship. Also note that a Kalman filter alone cannot be used to solve this problem, although it may be possible and convenient to use a combination of algorithms, with an algorithm estimating x1_k feeding a Kalman filter running on x2_k. In contrast, RBPFs can be seen as a holistic solution.

A generic RBPF algorithm, applicable to problems of the form (3) and (4), is presented in Algorithm 1 (see footnote 3). A justificative explanation for the algorithm is given in Appendix B. At every iteration of the algorithm, a set of N particles is maintained and updated. At instant k, the ith particle is defined as {x1_{k,i}; x2_{k,i}; K_{k,i}}, where K_{k,i} corresponds to the covariance of x2_{k,i} given the set (see footnote 4) X1_{k,i} ≜ {x1_{l,i}, l = 0, ..., k}. It is from this set of particles that the state estimates can be extracted, as seen at the bottom of Algorithm 1. The set of particles forms the only variables that must be stored in memory (all the other ones can be generated from them at a given instant k). Within each iteration, each x1_{k,i} corresponds to a unique A_{k,i}, B_{k,i}, C_{k,i}, D_{k,i}, G_{k,i}, H_{k,i}, and u_{k,i} (it is understood here that all of these parameters are unknown). Observe that in the algorithm, as in every PF, one must resample the particles, using the weights w_{k,i} that are assigned to them at each step. Resampling algorithms can be found in many sources; for example, a detailed resampling scheme is presented in [1].

Note that it would also be perfectly possible to use a regular particle filter to solve the given problem. The corresponding algorithm is given in Appendix A. For a given number of particles, using a PF instead of an RBPF leads to a faster execution, but the PF will in general require more particles to achieve a similar accuracy.

Footnotes:
1. The situation described does not cover the entire theoretical range of applications of the most general form of RBPF (given in Appendix B); however, it does cover most of the practical range of applications, since this is the situation where the algorithm applies most conveniently.
2. In this paper, we assume that x1_k is independent of x2_{k-1}, conditioned upon x1_{k-1}.
3. The reader familiar with particle filtering will notice that we only use here the suboptimal importance density q(x_k | X_{k-1,i}, Z_k) = p(x_k | x_{k-1}), and that we apply resampling at each step.
4. In this paper, we use the notation a ≜ b to mean "a is defined to be equal to b".

Algorithm 1: RBPF algorithm
1. Define p(x1_k | x1_{k-1}) and choose the number of particles N.
2. Define and initialize the set {x1_{0,i}; x2_{0,i}; K_{0,i}}, i = 1, ..., N, where K is the covariance matrix of x2, according to any a priori belief.
3. For every k, update the set {x1_{k-1,i}; x2_{k-1,i}; K_{k-1,i}}, i = 1, ..., N, as follows:
  ◦ For every i ∈ {1, 2, ..., N}:
    • Draw x1_{k,i} ∼ p(x1_k | x1_{k-1,i})
    • From x1_{k,i}, obtain the corresponding A_{k,i}, B_{k,i}, C_{k,i}, D_{k,i}, G_{k,i}, H_{k,i}, and u_{k,i} as applicable.
    • Compute the following:
      K_{k|k-1,i} = G_{k,i} G_{k,i}^T + A_{k,i} K_{k-1,i} A_{k,i}^T
      T_{k,i} = H_{k,i} H_{k,i}^T + C_{k,i} K_{k|k-1,i} C_{k,i}^T
      x2_{k|k-1,i} = A_{k,i} x2_{k-1,i} + B_{k,i} u_{k,i}
      y_{k,i} = C_{k,i} x2_{k|k-1,i} + D_{k,i} u_{k,i}
      w̃_{k,i} = N(z_k | y_{k,i}; T_{k,i})
      J_{k,i} = K_{k|k-1,i} C_{k,i}^T T_{k,i}^{-1}
      x2_{k,i} = x2_{k|k-1,i} + J_{k,i} (z_k - y_{k,i})
      K_{k,i} = (I - J_{k,i} C_{k,i}) K_{k|k-1,i}
  ◦ Compute the normalizing factor Σ_{i=1}^{N} w̃_{k,i}
  ◦ Normalize the weights (obtain {w_{k,i}}, i = 1, ..., N)
  ◦ Using the weights, resample the set {x1_{k,i}; x2_{k,i}; K_{k,i}}, i = 1, ..., N
  ◦ Obtain the state estimates:
    • x̂1_k = (1/N) Σ_{i=1}^{N} x1_{k,i}
    • x̂2_k = (1/N) Σ_{i=1}^{N} x2_{k,i}
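For readers who prefer code, the following Python sketch is one possible transcription of a single iteration (step 3) of Algorithm 1; it assumes user-supplied functions draw_x1 (sampling p(x1_k | x1_{k-1,i})) and params (mapping x1_{k,i} to the matrices A, B, C, D, G, H and the input u). Both function names, and the use of plain multinomial resampling, are our choices rather than the paper's.

```python
import numpy as np
from numpy.linalg import inv
from scipy.stats import multivariate_normal

def rbpf_step(x1, x2, K, z, draw_x1, params, rng):
    """One iteration of step 3 of Algorithm 1 for N particles.

    x1: list of x1 particles; x2: list of mean vectors; K: list of covariance matrices.
    """
    N = len(x1)
    w = np.zeros(N)
    for i in range(N):
        x1[i] = draw_x1(x1[i], rng)                        # draw x1_{k,i} ~ p(x1_k | x1_{k-1,i})
        A, B, C, D, G, H, u = params(x1[i])
        Kp = G @ G.T + A @ K[i] @ A.T                      # K_{k|k-1,i}
        T = H @ H.T + C @ Kp @ C.T                         # innovation covariance T_{k,i}
        x2p = A @ x2[i] + B @ u                            # predicted mean x2_{k|k-1,i}
        y = C @ x2p + D @ u                                # predicted measurement y_{k,i}
        w[i] = multivariate_normal.pdf(z, mean=y, cov=T)   # unnormalized weight
        J = Kp @ C.T @ inv(T)                              # gain J_{k,i}
        x2[i] = x2p + J @ (z - y)                          # exact (Kalman) measurement update
        K[i] = (np.eye(len(x2p)) - J @ C) @ Kp             # updated covariance K_{k,i}
    w /= w.sum()                                           # normalize the weights
    idx = rng.choice(N, size=N, p=w)                       # multinomial resampling
    x1 = [x1[j] for j in idx]
    x2 = [x2[j].copy() for j in idx]
    K = [K[j].copy() for j in idx]
    # The estimate of x2_k is the average of the resampled x2 particles; the estimate
    # of x1_k is obtained the same way whenever the x1 particles are numeric vectors.
    return x1, x2, K, np.mean(x2, axis=0)
```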
3 A First Example Of Application

3.1 Problem Formulation

In the first example, a signal x_k is passed through an unknown, time-varying FIR filter. We are given a set of noisy measurements at the output of the filter. The evolution of the signal in time is described by a known function (which is not restricted to be linear).

Let x2_k ∈ R^p represent the p coefficients of the FIR filter. We define x1_k ∈ R^p as the last p values of the signal to be estimated, from time k - p + 1 to time k. This way, x1_k(1) ≡ x_k and x1_k(l) ≡ x_{k-l+1} for l ≤ p (by a ≡ b, we mean "a is identical to b"). In addition, let g(.) be a function from R^p to R^p defined as follows:

g(x1_k) ≜ [f(x1_k), x1_k(1), x1_k(2), ..., x1_k(p - 1)]^T   (5)

In equation (5), f(.) represents a function from R^p to R which defines the evolution of the signal x_k ≡ x1_k(1). Finally, we let F_k be a transition matrix for the FIR coefficients, and define the following noises: w1_k, w2_k and v_k are all ∼ N(0, I), and we choose to constrain S = diag{σ_1, 0_{(p-1)×(p-1)}}, such that only the first component of x1_k is stochastic. The problem setting can be summarized by the following equations:

x1_k = g(x1_{k-1}) + S w1_k   (6)

x2_k = F_k x2_{k-1} + G w2_k   (7)

z_k = x1_k^T x2_k + H v_k   (8)

It is readily seen that, conditioned upon the values of the substate x1_k, the system formed by equations (7) and (8) is linear-Gaussian. We can therefore solve this problem using RBPFs.

3.2 Application Of The Algorithm

We can directly use Algorithm 1 with, for every {k, i}, A_{k,i} = F_k, B_{k,i} = 0, C_{k,i} = x1_{k,i}^T, D_{k,i} = 0, u_{k,i} = 0, and H_{k,i} = H. In addition, we have p(x1_k | x1_{k-1}) = N(x1_k | g(x1_{k-1}); S S^T). The RBPF algorithm tailored to the current example is shown in Algorithm 2.

Algorithm 2: First example of tailored RBPF algorithm
◦ For every i ∈ {1, 2, ..., N}:
  • Draw x1_{k,i} ∼ N(x1_k | g(x1_{k-1,i}); S S^T)
  • Compute the following:
    K_{k|k-1,i} = G G^T + F_k K_{k-1,i} F_k^T
    T_{k,i} = H H^T + x1_{k,i}^T K_{k|k-1,i} x1_{k,i}
    x2_{k|k-1,i} = F_k x2_{k-1,i}
    y_{k,i} = x1_{k,i}^T x2_{k|k-1,i}
    w̃_{k,i} = N(z_k | y_{k,i}; T_{k,i})
    J_{k,i} = K_{k|k-1,i} x1_{k,i} T_{k,i}^{-1}
    x2_{k,i} = x2_{k|k-1,i} + J_{k,i} (z_k - y_{k,i})
    K_{k,i} = (I - J_{k,i} x1_{k,i}^T) K_{k|k-1,i}
◦ Compute the normalizing factor Σ_{i=1}^{N} w̃_{k,i}, and normalize the weights
◦ Resample the set {x1_{k,i}; x2_{k,i}; K_{k,i}}, i = 1, ..., N, and compute the state estimates

3.3 Simulation Results

We present results here for a very simple case, in which a signal is passed through a time-varying gain. We thus have p = 1 and g(.) ≡ f(.). We choose F_k = I and f(x1_{k-1}) ≜ cos(x1_{k-1}) + sin(x1_{k-1}). In this simple case, w1_k and w2_k are ∼ N(0, 1), and we let S S^T = 0.09 and G G^T = 0.04. We also set H H^T = 10^{-3}. Using N = 500, the results of a random simulation run are shown in Figure 1.

[Figure 1: Simulation results for the first example. Two panels, "Estimates of x1_k" and "Estimates of x2_k", plotted against the instant k (0 to 100). The blue lines are the true values and the red, dotted lines are the estimates.]

The estimates appear to follow the true state convincingly.
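To make this example fully concrete, the sketch below reproduces the scalar setup (the numerical values are those quoted above; the initial conditions are arbitrary) and feeds it to the hypothetical rbpf_step helper sketched at the end of Section 2, using the parameter mapping of Section 3.2.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.cos(x) + np.sin(x)                  # known signal evolution f(.)
S2, G2, H2 = 0.09, 0.04, 1e-3                        # S S^T, G G^T and H H^T from the text

# Generate synthetic data: the scalar signal x1 is passed through a time-varying gain x2.
x1_true, x2_true, zs = 0.0, 1.0, []
for _ in range(100):
    x1_true = f(x1_true) + np.sqrt(S2) * rng.standard_normal()
    x2_true = x2_true + np.sqrt(G2) * rng.standard_normal()            # F_k = I
    zs.append(x1_true * x2_true + np.sqrt(H2) * rng.standard_normal())

def params(x1):                                      # Section 3.2 mapping: A = F_k = I, C = x1^T, B = D = 0
    return (np.eye(1), np.zeros((1, 1)), x1.reshape(1, 1), np.zeros((1, 1)),
            np.sqrt(G2) * np.eye(1), np.sqrt(H2) * np.eye(1), np.zeros(1))

def draw_x1(x1_prev, rng):                           # p(x1_k | x1_{k-1}) = N(g(x1_{k-1}); S S^T)
    return f(x1_prev) + np.sqrt(S2) * rng.standard_normal(x1_prev.shape)

N = 500
x1 = [np.zeros(1) for _ in range(N)]
x2 = [np.zeros(1) for _ in range(N)]
K = [np.eye(1) for _ in range(N)]
for z in zs:                                         # run the filter and collect the estimates
    x1, x2, K, x2_hat = rbpf_step(x1, x2, K, np.atleast_1d(z), draw_x1, params, rng)
    x1_hat = float(np.mean(x1))
```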
4 A Second Example Of Application

4.1 Problem Formulation

We are interested here in the application of an RBPF algorithm to the problem of speech denoising. Such a procedure was introduced and applied with success by several researchers (e.g. [3, 7, 9]), and we refer the reader to their work for a thorough problem formulation, detailed results, and extensions to smoothing (more details below). We are here only focusing on the derivation of the base algorithm itself from Algorithm 1.

The model chosen for the speech signal, denoted here x2_k, is an auto-regression of order M:

x2_k = A_k x2_{k-1} + G_k w_k   (9)

z_k = C x2_k + σ_{v,k} v_k   (10)

where (with a slight abuse of notation, the state vector x2_k on the left-hand side of (9) stacks the last M samples of the speech signal):

x2_k ≜ [x2_k, x2_{k-1}, ..., x2_{k-M+1}]^T
a_k ≜ [a_{1,k}, a_{2,k}, ..., a_{M,k}]^T
A_k is the companion matrix whose first row is a_k^T and whose remaining rows are [I_{M-1}  0_{(M-1)×1}]
C = [1, 0_{1×(M-1)}]
w_k ∼ N(0, I), and G_k = diag{σ_{w,k}, 0_{(M-1)×(M-1)}}
v_k ∼ N(0, 1)

As indicated in Section 2, the set of parameters that form x1_k are the variables which make the system (9), (10) time-varying: x1_k = [a_k; σ_{w,k}; σ_{v,k}]. It is also clear that, conditioned on X1_k, the problem is linear-Gaussian. The density p(x1_k | x1_{k-1}) must now be defined, to describe the evolution of the elements of x1_k. There are different ways to do so. First, we can reasonably state that p(x1_k | x1_{k-1}) = p(a_k | a_{k-1}) p(σ_{w,k} | σ_{w,k-1}) p(σ_{v,k} | σ_{v,k-1}). But, as mentioned before, the elements of x1_k can also be defined differently, as long as it is via a one-to-one correspondence with a_k, σ_{w,k}, and σ_{v,k}. For example, it was found in [3] that a constrained Gaussian random walk on the partial correlation coefficients, rather than on the AR coefficients a_k, yields better results; the one-to-one relation is then defined by the Levinson-Durbin algorithm. Similarly, the evolution of the noise variances may be defined on their logarithm, in order to ensure their positivity [3, 7, 9]. Once p(x1_k | x1_{k-1}) is defined, we can start deriving the RBPF algorithm.
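As one possible concretization (not the authors' code), the helpers below build the matrices of equations (9)-(10) from the AR coefficients and draw x1_k with simple Gaussian random walks; the walk directly on a_k and the step sizes are illustrative placeholders, whereas the cited works prefer constrained walks on the partial correlation coefficients.

```python
import numpy as np

def speech_matrices(a, sigma_w):
    """Companion-form A_k, C and G_k of equations (9)-(10) for AR coefficients a."""
    M = len(a)
    A = np.zeros((M, M))
    A[0, :] = a                           # the first row carries the AR coefficients a_k
    A[1:, :M - 1] = np.eye(M - 1)         # the remaining rows shift the past samples
    C = np.zeros((1, M)); C[0, 0] = 1.0   # the measurement picks out the current sample
    G = np.zeros((M, M)); G[0, 0] = sigma_w
    return A, C, G

def draw_x1_speech(x1_prev, rng, std_a=0.01, std_log=0.05):
    """One possible p(x1_k | x1_{k-1}): random walks on a_k and on the log noise deviations."""
    a, log_sw, log_sv = x1_prev
    a = a + std_a * rng.standard_normal(a.shape)
    log_sw = log_sw + std_log * rng.standard_normal()   # walking the logarithm keeps sigma_{w,k} positive
    log_sv = log_sv + std_log * rng.standard_normal()   # same for sigma_{v,k}
    return (a, log_sw, log_sv)
```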

The sources cited employ a basic RBPF algorithm which corresponds to the one presented here. In [9], a constrained random walk on the AR coefficients is employed, with additional smoothing strategies including a detailed MCMC step implementation. In [3], a complete Rao-Blackwellised particle smoother is derived, based on partial correlation coefficients, and the resulting algorithm is shown to outperform several other algorithms, including a regular particle smoother. In [7], an extension of the model to α-stable observation noises is presented, with an additional parameter included in x1_k. For a presentation of efficient smoothing methods, which are independent of the type of filtering used, we cite [5] (in which simulation results of speech enhancement are shown for a regular PF implementation).

4.2 Application Of The Algorithm

In Algorithm 3, we present the tailored RBPF algorithm. It is directly obtained from Algorithm 1 with B, D, and u all equated to 0, and with, for every {k, i}, C_{k,i} = C.

Algorithm 3: Second example of tailored RBPF algorithm
◦ For every i ∈ {1, 2, ..., N}:
  • Draw x1_{k,i} as:
    a_{k,i} ∼ p(a_k | a_{k-1,i})
    σ_{w,k,i} ∼ p(σ_{w,k} | σ_{w,k-1,i})
    σ_{v,k,i} ∼ p(σ_{v,k} | σ_{v,k-1,i})
  • Form the matrices A_{k,i} and G_{k,i}
  • Compute the following:
    K_{k|k-1,i} = G_{k,i} G_{k,i}^T + A_{k,i} K_{k-1,i} A_{k,i}^T
    T_{k,i} = σ_{v,k,i}^2 + C K_{k|k-1,i} C^T
    x2_{k|k-1,i} = A_{k,i} x2_{k-1,i}
    y_{k,i} = C x2_{k|k-1,i}
    w̃_{k,i} = N(z_k | y_{k,i}; T_{k,i})
    J_{k,i} = K_{k|k-1,i} C^T T_{k,i}^{-1}
    x2_{k,i} = x2_{k|k-1,i} + J_{k,i} (z_k - y_{k,i})
    K_{k,i} = (I - J_{k,i} C) K_{k|k-1,i}
◦ Compute the normalizing factor Σ_{i=1}^{N} w̃_{k,i}, and normalize the weights
◦ Resample the set {x1_{k,i}; x2_{k,i}; K_{k,i}}, i = 1, ..., N, and compute the state estimates
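Using the helpers sketched in Section 4.1, the parameter mapping behind Algorithm 3 can be written, for instance, as the following hypothetical params function for the generic rbpf_step of Section 2 (B, D, and u are zero, C is fixed, and the scalar observation noise enters through H).

```python
import numpy as np

def params_speech(x1):
    """Map x1_k = (a_k, log sigma_{w,k}, log sigma_{v,k}) to the matrices used by the generic RBPF step."""
    a, log_sw, log_sv = x1
    A, C, G = speech_matrices(a, np.exp(log_sw))
    M = len(a)
    B, D, u = np.zeros((M, 1)), np.zeros((1, 1)), np.zeros(1)   # B, D and u are all zero here
    H = np.exp(log_sv) * np.eye(1)                              # scalar observation noise sigma_{v,k}
    return A, B, C, D, G, H, u
```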
4.3 Simulation Results

In the second example, we show in Figure 2 a portion of a section of speech corrupted by white noise (input SNR of 2.20 dB), together with the estimated clean speech (output SNR of 9.62 dB). Even though the method is computationally much heavier than spectral subtraction or other classical Kalman-filter algorithms (see for example [8]), a comparison of the average segmental SNR is found to be favourable to RBPF-based speech enhancement. Note also that the resulting enhanced speech is not corrupted by the "musical noise" typically introduced by spectral subtraction. Again, more information and results can be found in the references of this paper (see [3, 5, 7, 9]).

[Figure 2: Simulation results for the second example. Three panels, from top to bottom: the noisy speech signal (SNR = 2.20 dB), the enhanced speech signal (SNR = 9.62 dB), and the original, clean speech signal, plotted against the sample index k (0 to 400).]

5 Conclusion

In this paper, we presented simple guidelines for deriving basic RBPF algorithms. The advantage of such algorithms is that they provide a simple, holistic solution to a wide range of problems. In addition, problem modelling is often simplified: once it is seen that one part of the state is (conditionally) linear-Gaussian, the remaining task is reduced to describing the evolution in time of the rest of the state. The main downside of RBPFs lies in the fact that they are computationally demanding.

References

[1] M.S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174-188, February 2002.
[2] A. Doucet, J.F.G. Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. New York: Springer-Verlag, 2001.
[3] W. Fong, S.J. Godsill, A. Doucet, and M. West, "Monte Carlo smoothing with application to audio signal enhancement," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 438-449, February 2002.
[4] J.F.G. Freitas, "Rao-Blackwellised particle filtering for fault diagnosis," Proceedings of the IEEE Aerospace Conference, no. 4, pp. 1767-1772, March 2002.
[5] S.J. Godsill, A. Doucet, and M. West, "Monte Carlo smoothing for nonlinear time series," Journal of the American Statistical Association, vol. 99, no. 465, pp. 156-168, March 2004.
[6] R.E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, Series D, pp. 35-45, 1960.
[7] M. Lombardi, Simulation-based Estimation Methods for α-Stable Distributions and Processes. Ph.D. thesis, Università degli Studi di Firenze, 2004.
[8] K.K. Paliwal and A. Basu, "A speech enhancement method based on Kalman filtering," Proceedings of ICASSP'87, pp. 177-180, 1987.
[9] J. Vermaak, C. Andrieu, A. Doucet, and S.J. Godsill, "Particle methods for Bayesian modeling and enhancement of speech signals," IEEE Transactions on Speech and Audio Processing, vol. 10, no. 3, pp. 173-185, March 2002.

Appendix A: A PF Solution

We present in Algorithm 4 a PF-based algorithm to solve the problem of equations (3) and (4).

Algorithm 4: PF algorithm
1. Define p(x1_k | x1_{k-1}) and choose the number of particles N.
2. Define and initialize the set {x1_{0,i}; x2_{0,i}}, i = 1, ..., N.
3. For every k, update the set {x1_{k-1,i}; x2_{k-1,i}}, i = 1, ..., N, as follows:
  ◦ For every i ∈ {1, 2, ..., N}:
    • Draw x1_{k,i} ∼ p(x1_k | x1_{k-1,i})
    • From x1_{k,i}, obtain the corresponding A_{k,i}, B_{k,i}, C_{k,i}, D_{k,i}, G_{k,i}, H_{k,i}, and u_{k,i} as applicable.
    • Draw x2_{k,i} ∼ N(x2_k | A_{k,i} x2_{k-1,i} + B_{k,i} u_{k,i}; G_{k,i} G_{k,i}^T)
    • Compute w̃_{k,i} = N(z_k | C_{k,i} x2_{k,i} + D_{k,i} u_{k,i}; H_{k,i} H_{k,i}^T)
  ◦ Compute the normalizing factor Σ_{i=1}^{N} w̃_{k,i} and normalize the weights
  ◦ Using the weights, resample the set {x1_{k,i}; x2_{k,i}}, i = 1, ..., N
  ◦ Obtain x̂1_k = (1/N) Σ_{i=1}^{N} x1_{k,i} and x̂2_k = (1/N) Σ_{i=1}^{N} x2_{k,i}
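For comparison with the rbpf_step sketch of Section 2, a minimal transcription of Algorithm 4 could look as follows; here x2_{k,i} is sampled from the transition density instead of being updated exactly by a Kalman step (the function names and resampling scheme are again ours).

```python
import numpy as np
from scipy.stats import multivariate_normal

def pf_step(x1, x2, z, draw_x1, params, rng):
    """One iteration of step 3 of Algorithm 4 (plain particle filter)."""
    N = len(x1)
    w = np.zeros(N)
    for i in range(N):
        x1[i] = draw_x1(x1[i], rng)
        A, B, C, D, G, H, u = params(x1[i])
        x2[i] = rng.multivariate_normal(A @ x2[i] + B @ u, G @ G.T)          # sample x2_{k,i}
        w[i] = multivariate_normal.pdf(z, mean=C @ x2[i] + D @ u, cov=H @ H.T)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                                         # resample both substates
    x1 = [x1[j] for j in idx]
    x2 = [x2[j].copy() for j in idx]
    return x1, x2, np.mean(x2, axis=0)
```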

Appendix B: Derivation Of Algorithm 1

In this appendix, we give a possible method of obtaining Algorithm 1 from an RBPF presented in a more "standard", or general, form. We begin by presenting this standard form in Algorithm 5.

Algorithm 5: Standard RBPF algorithm
For every k, do the following:
◦ For every i ∈ {1, 2, ..., N}:
  • Draw x1_{k,i} ∼ q(x1_k | X1_{k-1,i}, Z_k)
  • Set X1_{k,i} = {x1_{k,i}; X1_{k-1,i}}
  • Compute the unnormalized weights:
    w̃_{k,i} = w_{k-1,i} p(z_k | X1_{k,i}, Z_{k-1}) p(x1_{k,i} | X1_{k-1,i}, Z_{k-1}) / q(x1_{k,i} | X1_{k-1,i}, Z_k)
◦ Compute the normalizing factor Σ_{i=1}^{N} w̃_{k,i}
◦ Normalize the weights and resample the particles, if necessary
◦ For every i ∈ {1, 2, ..., N}:
  • Update p(x2_k | X1_{k,i}, Z_k) using p(x2_{k-1} | X1_{k-1,i}, Z_{k-1}), x1_{k,i}, x1_{k-1,i}, and z_k (exact step).

First, a so-called importance distribution, denoted q(x_k | X_{k-1,i}, Z_k), must be chosen, and it is required to be easy to evaluate and to draw samples from. The notation q(x_k | X_{k-1,i}, Z_k) is generic but nevertheless indicates that the distribution is conditional upon the set of previous states and the set of all measurements. The performance and execution of the algorithm depend on the choice of q(.). An optimal importance density exists in theory, but it is often intractable [1, 2]. In the examples of this paper, we only use one of the most common and simple choices, q(x_k | X_{k-1,i}, Z_k) = p(x_k | x_{k-1}). Assuming that x1_k is independent of x2_{k-1}, conditioned upon x1_{k-1}, such a choice reduces the weight update equation to:

w̃_{k,i} = w_{k-1,i} p(z_k | X1_{k,i}, Z_{k-1})   (11)

Thus, there remains to determine p(z_k | X1_{k,i}, Z_{k-1}) in the context of this paper. We can show that:

p(z_k | X1_{k,i}, Z_{k-1}) = ∫ p(z_k | x2_k, x1_{k,i}) p(x2_k | X1_{k,i}, Z_{k-1}) dx2_k   (12)

Observe from equation (4) that:

p(z_k | x2_k, x1_{k,i}) = N(z_k | C_{k,i} x2_k + D_{k,i} u_{k,i}; H_{k,i} H_{k,i}^T)   (13)

The distribution p(x2_k | X1_{k,i}, Z_{k-1}) can be computed as:

p(x2_k | X1_{k,i}, Z_{k-1}) = ∫ p(x2_k | x2_{k-1}, x1_{k,i}) p(x2_{k-1} | X1_{k-1,i}, Z_{k-1}) dx2_{k-1}   (14)

At this point, note from equation (3) that:

p(x2_k | x2_{k-1}, x1_{k,i}) = N(x2_k | A_{k,i} x2_{k-1} + B_{k,i} u_{k,i}; G_{k,i} G_{k,i}^T)   (15)

According to the algorithm, at instant k we are given the distribution p(x2_{k-1} | X1_{k-1,i}, Z_{k-1}); it is precisely the one that we are updating online. From the Gaussianity of the system, this distribution is Gaussian (it is in fact the a priori distribution of the state in the KF equations). Let us define it as:

p(x2_{k-1} | X1_{k-1,i}, Z_{k-1}) ≜ N(x2_{k-1} | x2_{k-1,i}; K_{k-1,i})   (16)

To complete the derivation, we now use the following result. If N(x | y; Q) represents the density of a Gaussian random vector with mean y and covariance matrix Q, then:

∫ N(x | F y; Q) N(y | z; K) dy = N(x | n; N)   (17)

where N = Q + F K F^T and n = F z.
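As a quick sanity check (ours, not the paper's), the identity (17) can be verified numerically by Monte Carlo integration for arbitrary small matrices:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(0)
F = np.array([[1.0, 0.5], [0.0, 1.0]])
Q = 0.3 * np.eye(2)
K = np.array([[0.4, 0.1], [0.1, 0.2]])
zv = np.array([0.2, -0.1])                  # the conditioning mean z
xv = np.array([0.5, 0.3])                   # the point x at which both sides are evaluated

# Right-hand side of (17): N(x | F z; Q + F K F^T).
rhs = mvn.pdf(xv, mean=F @ zv, cov=Q + F @ K @ F.T)

# Left-hand side of (17), approximated by Monte Carlo over y ~ N(z; K).
ys = rng.multivariate_normal(zv, K, size=100_000)
diff = xv - ys @ F.T                        # x - F y for every sample y
Qinv = np.linalg.inv(Q)
dens = np.exp(-0.5 * np.einsum('ij,jk,ik->i', diff, Qinv, diff))
dens /= np.sqrt((2 * np.pi) ** 2 * np.linalg.det(Q))
lhs = dens.mean()

print(rhs, lhs)                             # the two values should agree closely
```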
We can identify equation (17) with equation (14), with the integrand terms given by (15) and (16), and we obtain:

p(x2_k | X1_{k,i}, Z_{k-1}) = N(x2_k | x2_{k|k-1,i}; K_{k|k-1,i})   (18)

where:

x2_{k|k-1,i} ≜ A_{k,i} x2_{k-1,i} + B_{k,i} u_{k,i}
K_{k|k-1,i} ≜ G_{k,i} G_{k,i}^T + A_{k,i} K_{k-1,i} A_{k,i}^T

Again, we use (17), but this time applied to (12), using (13) and (18), and we obtain:

p(z_k | X1_{k,i}, Z_{k-1}) = N(z_k | y_{k,i}; T_{k,i})   (19)

where:

y_{k,i} ≜ C_{k,i} x2_{k|k-1,i} + D_{k,i} u_{k,i}
T_{k,i} ≜ H_{k,i} H_{k,i}^T + C_{k,i} K_{k|k-1,i} C_{k,i}^T

In the process of determining p(z_k | X1_{k,i}, Z_{k-1}), we have taken the same road as the classical Kalman filter equations (see [6]). Half of the exact step has thus been completed, and we only need to find p(x2_k | X1_{k,i}, Z_k). To do so, referring to [6], we can write:

p(x2_k | X1_{k,i}, Z_k) = N(x2_k | x2_{k,i}; K_{k,i})   (20)

where, using J_{k,i} = K_{k|k-1,i} C_{k,i}^T T_{k,i}^{-1}, we have:

x2_{k,i} = x2_{k|k-1,i} + J_{k,i} (z_k - y_{k,i})
K_{k,i} = (I - J_{k,i} C_{k,i}) K_{k|k-1,i}

The KF equations and the weight computations are thus intertwined, and we can combine the two loops over i of Algorithm 5 into a single loop. Finally, if resampling is applied at each step, then there is no need, in equation (11), to multiply by w_{k-1,i}, since it will be equal to 1/N for all {k, i}.

