Two Linear Complexity Particle Filters C
Abstract—In this work, we introduce two particle filters of linear complexity in the number of particles that take distinct approaches to solving the problem of tracking two targets in close proximity. We operate in the regime in which measurements do not discriminate between targets and hence uncertainties in the labeling of the tracks arise. For simplicity, we limit our study to the two target case, for which there are only two possible associations between targets and tracks. The proposed Approximate Set Particle Filter (ASPF) introduces some approximations but has similar complexity and still provides much more accurate descriptions of the posterior uncertainties compared to standard particle filters. The fast Forward Filter Unlabeled Backward Simulator (fast FFUBSi) employs a smoothing technique based on rejection sampling for the calculation of target label probabilities. Simulations show that both particle filters do not suffer from track coalescence (when outputting MMOSPA estimates) and calculate correct target label probabilities.

Keywords: Particle filter, target labels, linear complexity.

I. INTRODUCTION

In theory, particle filters provide optimal solutions to any target tracking problem. However, in practice it has been observed that they have significant problems in situations where targets have been closely spaced for some time. The weaknesses are related to the particle filters' inability to accurately express uncertainties regarding past states; due to degeneracy, they always overestimate how certain they are about the past, and going sufficiently far back (usually only a few time steps) they claim that they know the exact value of the state vector [1].

In a tracking scenario where two targets are closely spaced for some time and later dissolve, there are uncertainties as to which target is which, i.e., what we refer to as target labeling uncertainties. Based on the information obtained from, e.g., a radar sensor, it is typically not possible to resolve uncertainties in labeling once they have appeared. Still, due to degeneracy, which is an inherent part of any particle filter (with a finite number of particles), the filter will soon claim that it knows the correct labeling with probability 1 [2]. Clearly, this does not reflect the true posterior distribution and, moreover, it may have significant repercussions in many applications. Consequently, recent years have brought significant interest in the development of particle filters able to maintain accurate target label probabilities.

In [3], Blom and Bloem propose a new particle filter able to provide an estimated track swap probability for the case of two closely spaced linear Gaussian targets. At its core lies a unique decomposition of the joint conditional density as a mixture of a permutation invariant density and a permutation strictly variant density. While the ASPF takes a similar approach, our fast FFUBSi employs a very different (smoothing) technique.

In [4], Garcia-Fernandez et al. introduce a novel particle filter able to maintain the multimodality of the posterior pdf after the targets have moved in close proximity and thus to extract information about target labels. The drawback is that complexity grows as O(N^2) due to the association of a probability vector to each particle. Instead, our two approaches associate the same vector to all the particles, which can be done with linear complexity.

In this work, the first type of particle filter we introduce associates a probability vector to each particle. The probability vector represents the probabilities for different labeling events. For instance, if the targets have been well separated at all times, one of the elements in the vector will be one and the other(s) will be zero. If, on the other hand, the targets have been very closely spaced for a long time, all elements in the vector are the same. Note that there are only two possible labelings for the two target case and therefore the length of the probability vector is two. But since the elements of the vector have to sum to one, we really only have one free parameter. We call this new filter the Approximate Set Particle Filter (ASPF).

The second type of particle filter we introduce takes a forward filtering followed by a backward smoothing approach. Target labels produced by the forward filter are ignored, and target identity probabilities are calculated based solely on the backward trajectories generated by performing rejection sampling backwards in time. We call this new filter the fast Forward Filter Unlabeled Backward Simulator (fast FFUBSi).

The paper is organized as follows. Section II formulates the target tracking problem. Sections III and IV describe the two novel particle filters with the ability of extracting target
label probabilities. In Section V, we show results with the new particle filters operating on simulated scenarios, and we conclude in Section VI.

II. PROBLEM STATEMENT

Multitarget tracking algorithms represent the multitarget state either as sets or as vectors. The former approach implies that one is not interested in distinguishing between the targets and the sole purpose is estimation of their states. The latter approach, on the other hand, is concerned with knowing which target is which in addition to estimating their states. The information regarding target labeling is available in, and can be extracted from, the posterior pdf of the multitarget state [4].

Suppose we wish to track two targets (we treat only two targets, with no loss of generality as regards ideas and with considerably more compact notation than a greater set) with state vectors x_k^1 and x_k^2, collected in a joint state vector

  x_k = [ x_k^1 ; x_k^2 ].  (1)

The two targets are assumed to move independently of each other and follow the same motion model

  x_k^j = f_k(x_{k-1}^j) + v_{k-1}^j,  (2)

with process noise v_{k-1}^j ~ N(0, Q_k) and j ∈ {1, 2}.

The measurement data observed at time k is denoted z_k, and the collection of data up to and including time k is

  Z^k = [ z_1, z_2, ..., z_k ].  (3)

The data is assumed to follow the common assumptions for radar sensor data [5]. A solution to this tracking problem has to handle the usual data association uncertainties. Among other things, these uncertainties imply that the measurement model will be such that

  h(z_k | x_k) = h(z_k | χ x_k),  (4)

where χ is the permutation matrix

  χ = [ 0 I ; I 0 ]  (5)

and I is the identity matrix.

Assuming the posterior pdf at time k is available, the predicted pdf at time k+1 is calculated using Eq. (2). The posterior pdf at time k+1 is obtained by a Bayesian update [6] using the predicted pdf at time k+1 and the received measurements.

Particle filters (PFs) are commonly used to approximate the full Bayesian posterior pdf when the system is nonlinear and/or non-Gaussian. Unfortunately, due to particle degeneracy, PFs are unable to approximate well posterior pdfs with multiple modes over longer periods of time. A traditional PF will not represent such a posterior pdf well and therefore would be unable to provide reliable information about target labels, information that lies in the multimodality of the posterior pdf. Targets coming into close proximity is a commonly encountered scenario in which the posterior pdf is multimodal. In this paper, we propose two different algorithms that attempt to solve the (same) problem of building tracks for such a scenario while maintaining correspondence of target states over time. Currently, PFs provide an O(N) approximation to the track posterior but do not approximate the target posterior very well, and PF approximations for the target posterior are O(N^2). Our Monte Carlo methods work on the target posterior, and obtain labeling information, with O(N) complexity.

III. APPROXIMATE SET PARTICLE FILTER (ASPF)

In our first linear complexity particle filter capable of maintaining target label probabilities, i.e., the Approximate Set Particle Filter (ASPF), we represent the posterior density of the tracks at each time scan using a set of M particles, x_k^(1), x_k^(2), ..., x_k^(M), to which we associate weights w_k^(1), w_k^(2), ..., w_k^(M). Additionally, we store the uncertainties in the target-to-track associations in a separate variable, such that they are not affected by the degeneracy of the particle filter, and we recursively update this variable. In the general n target case, the track-to-target probabilities are vectors of length n!, since that is the number of possible permutations of a state vector x_k. When we only have two targets, the vector contains two elements, where the second element is completely determined by the first (the elements have to add up to one). In this work, we therefore find it sufficient to deal with the first element of that vector, here denoted p_k, at each time scan.

The posterior density of the tracks is approximated as

  π(x_k | Z^k) ≈ Σ_{i=1}^M w_k^(i) δ(x_k − x_k^(i)),  (6)

which is identical to the one in a conventional particle filter. However, to distinguish tracks from targets, we introduce a separate density for the targets that we describe as

  π̃(x_k | Z^k) ≈ p_k π(x_k | Z^k) + (1 − p_k) π(χ x_k | Z^k).  (7)

We implicitly assume that the distribution of the targets can be expressed (at least approximately) as in Eq. (7), where the pdf of the tracks (in Eq. (6)) contains no labeling uncertainties. Specifically, we assume that there is a point x̂ such that π(x_k | Z^k) = 0 for all x_k satisfying ||x̂ − x_k||^2 > ||x̂ − χ x_k||^2. We now proceed to describe how to recursively update the particles, their weights and the probability p_k.

A. ASPF Algorithm

The ASPF sequentially computes target label probabilities by approximating the MMOSPA posterior pdf π̃_k(x) instead of the traditional labeled posterior pdf π_k(x). (The MMOSPA estimator minimizes the mean Optimal Subpattern Assignment (OSPA) metric; the reader is encouraged to consult [7] for the derivation and the intuition behind MMOSPA estimation.)

Applying the Chapman-Kolmogorov-Bayes theorem gives

  ω̄_{k+1}(x) ∝ L_k(x) ∫ f(x | x_k) π_k(x_k) dx_k,  (8)

where L_k is the likelihood at time k and f(·|·) is the multitarget state transition density. Note that ω̄_{k+1}(x) should not
be interpreted as the labeled posterior, but rather as a one-step labeled posterior.

The MMOSPA posterior pdf at time k+1 is defined as

  π̃_{k+1}(x) = { ω̄_{k+1}(x) + ω̄_{k+1}(χx),  if x ∈ A_{k+1}(x̂_{k+1});
                 0,                           otherwise,            (9)

where A_{k+1}(x̂_{k+1}) = {x : ||x̂_{k+1} − x_{k+1}|| ≤ ||x̂_{k+1} − χ x_{k+1}||} and x̂_{k+1} is the MMOSPA estimator

  x̂_{k+1} = ∫_{A_{k+1}(x̂_{k+1})} x π̃_{k+1}(x) dx.  (10)

The probability that x̂_{k+1} has a labeling error, with respect to ω̄_{k+1}(x), is

  p̃_{k+1} = ∫_{Ā_{k+1}(x̂_{k+1})} ω̄_{k+1}(x) dx.  (11)

The posterior target labeling probabilities can then be calculated as

  p_{k+1} = (1 − p̃_{k+1}) p_k + p̃_{k+1} (1 − p_k).  (12)

In practice, these quantities are approximated using Monte Carlo methods. For simplicity, our evaluations below are based on the bootstrap particle filter, which performs the following:

1) For i = 1, ..., M, draw samples from the motion model
     x_{k+1}^(i) ~ f(x_{k+1} | x_k^(i)).  (13)
2) For i = 1, ..., M, compute the unnormalized weights
     ŵ_{k+1}^(i) = w_k^(i) h(z_{k+1} | x_{k+1}^(i))  (14)
   and then normalize them as in
     w_{k+1}^(i) = ŵ_{k+1}^(i) / Σ_{j=1}^M ŵ_{k+1}^(j).  (15)
3) Under certain conditions, we perform resampling of the new particles and weights.

At this point, we have obtained samples {x_{k+1}^1, ..., x_{k+1}^M} from ω̄_{k+1}(x) of Eq. (8).

Next, our ASPF implementation reorders the particles according to a clustering algorithm that also helps us to calculate MMOSPA estimates of Eq. (10). A basic algorithm that does the job is given below. The variables c_{k+1}^(i) are introduced in order to enable us to later compute p_k.

0) Set test = 0 and compute
     x̂_{k+1} = Σ_{i=1}^M w_{k+1}^(i) x_{k+1}^(i).  (16)
   Also set c_{k+1}^(i) = 0 for i = 1, ..., M.
1) While test = 0, do
     Set test = 1
     for i = 1 to M
       if ||x̂_{k+1} − x_{k+1}^(i)||^2 > ||x̂_{k+1} − χ x_{k+1}^(i)||^2
         Set test = 0, x_{k+1}^(i) = χ x_{k+1}^(i), c_{k+1}^(i) = 1 − c_{k+1}^(i).
       end
     end
     Compute the MMOSPA estimate
       x̂_{k+1} = Σ_{i=1}^M w_{k+1}^(i) x_{k+1}^(i).  (17)
   end

At this point, we have obtained Π_{k+1}^i ∈ {I, χ} such that the samples x̃_{k+1}^i = Π_{k+1}^i x_{k+1}^i approximate the MMOSPA posterior pdf π̃_{k+1}(x) of Eq. (9).

Last, the probability mass that we have switched (see Eq. (11)) is given by

  p̃_{k+1} = Σ_{i : Π_{k+1}^i ≠ I} w_{k+1}^i  (18)

or, implementation-wise,

  p̃_{k+1} = Σ_{i=1}^M c_{k+1}^(i) w_{k+1}^(i),  (19)

and the track-to-target probability (the probability that track one is target one) is updated as in

  p_{k+1} = (1 − p̃_{k+1}) p_k + p̃_{k+1} (1 − p_k).  (20)

One advantage of running this algorithm is that the MMOSPA estimate x̂_{k,MMOSPA} = x̂_k is calculated as a by-product in Eq. (17). If we are instead interested in the MMSE estimate [6] for our filter, the posterior mean is now given by

  x̂_{k,MMSE} = p_k x̂_{k,MMOSPA} + (1 − p_k) χ x̂_{k,MMOSPA}.  (21)

IV. FAST FORWARD FILTER UNLABELED BACKWARD SIMULATOR (FAST FFUBSI)

The underlying weakness that we try to compensate for is that particle filters (PFs) suffer from degeneracy, which leads to a self-resolving property with respect to the uncertainties in the target labeling. Thanks to recent developments [8], we can now introduce the fast FFUBSi, an efficient PF smoothing algorithm (of asymptotically linear complexity) which does not degenerate at all. In our approach, we use the forward filter to figure out where the targets are (but we do not rely on its labeling capability!) and then we use a smoothing algorithm (backwards sampling) [8] to compute/recover the labeling probabilities. It is worth underlining that this approach is dramatically different from the approach the ASPF takes and, moreover, the fast FFUBSi approach is asymptotically exact, which was not the case for the approximate PF proposed in the previous section.

Smoothing for particle filters can be done through forward-backward recursions as described below [9]. The joint distribution π(x_{1:T} | y_{1:T}) at the end time T of a scenario can be
decomposed as

  π(x_{1:T} | y_{1:T}) = π(x_T | y_{1:T}) ∏_{k=1}^{T−1} π(x_k | x_{k+1:T}, y_{1:T})  (22)
                       = π(x_T | y_{1:T}) ∏_{k=1}^{T−1} π(x_k | x_{k+1}, y_{1:k})  (23)

due to {x_k} being an inhomogeneous Markov process (conditional on the measurements y_{1:T}).

To sample from π(x_{1:T} | y_{1:T}), one first computes the marginal distributions {π(x_k | y_{1:k})} for k = 1, ..., T in the forward recursion and initializes the backward recursion by sampling x_T ~ π(x_T | y_{1:T}). Then, for k = T − 1, ..., 1, one samples x_k backwards from the kernel π(x_k | x_{k+1}, y_{1:k}) of Eq. (23). In the fast FFBSi algorithm [8], this backward step is carried out by rejection sampling, as summarized below:

Algorithm:
1. Initialize the backward trajectories: sample {I(j)}_{j=1}^M ~ {w_T^i}_{i=1}^N and set x̃_T^j = x_T^{I(j)}, j = 1, ..., M
2. for k = T − 1 : 1
3.   L = {1, ..., M}
4.   while L ≠ ∅
5.     n = |L|
6.     δ = ∅
7.     Sample independently {C(q)}_{q=1}^n ~ {w_k^i}_{i=1}^N
8.     Sample independently {U(q)}_{q=1}^n ~ U([0, 1])
9.     for q = 1 : n
10.      if U(q) ≤ f(x̃_{k+1}^{L(q)} | x_k^{C(q)}) / ρ

Here ρ is an upper bound on the transition density; for the Gaussian motion model,

  ρ = 1 / sqrt(|2π Q_k|).  (28)

Figure 1. True target trajectories.

B. Fast FFUBSi Algorithm

The fast FFUBSi algorithm consists of running the forward filter, followed by removing the target labels generated with the forward filter, backward sampling achieved through the fast FFBSi recursion described in Section IV-A, and computation of target label probabilities.

1) Forward filtering: Most forward filters will have labels. However, since we are ignoring them in the backward recursion, it does not matter if they do not; that is, even if the forward filter lacks labeling, the fast FFUBSi algorithm is able to extract the labels due to its modified fast FFBSi step. In fact, we could use the same ideas to extract labels even if the forward filter is not a particle filter! For example, any MHT that performs pruning would suffer from the same self-resolving property as the particle filter. Hence, we have a good amount of freedom and flexibility when deciding which forward filter to use for the implementation of our fast FFUBSi algorithm.

In this work, our choice for the forward filter is a standard SIR particle filter [10] with the following steps: we draw samples from the proposal density (which is the transition density function), update the particle weights based on measurement likelihoods, normalize the particle weights, and resample based on these normalized particle weights.

2) Removing target labels: Let

  π(x1, x2) = g(x1, x2)  (29)

be the forward density at time k for the case of two targets, which contains (not necessarily very reliable) information about the target labels at time k = 1. The density might be permutation variant, permutation invariant, or a mixture of both these types (for definitions of permutation variant and permutation invariant pdfs, please see [11]).

Which target we decided to call x1 and which target we decided to call x2 in the forward filter should not influence the backward trajectories of our targets A and B. To ignore the labels produced by forward filtering, we make the forward density symmetric and replace it by

  π(x_A, x_B) = ( g(x_A, x_B) + g(x_B, x_A) ) / 2.  (30)

Notice that once we have made the forward density permutation invariant, each of the particles in the forward filter will have n! copies, one for each permutation (so, two in the two target case).

3) Backward sampling: We modify the fast FFBSi algorithm that implements the backward recursion to sample instead from this symmetric forward pdf, i.e., to perform rejection sampling from a set of n! × N particles (since we have n! permutations of each of the N particles).

4) Computing target label probabilities: Most tracking algorithms in the literature define the target labels at time k = 1, with the objective to track these labeled targets over time. We instead define the target labels at time k = T (the scenario end time).

We offer the example in Figure 1 to clarify how the fast FFUBSi computes the probabilities of which target corresponds to which track. At time k = 1 we have one target at x1 = 3.5 that we call target 1 and a second target at x2 = −3.5 that we call target 2. We run the forward filter until the final scan T = 100 and can now apply the labels A and B to the two targets. Suppose that we call the target which is close to x = 3.5 at time T target x_A and that the other target is called target x_B.

Through backward sampling, we have tracked these two targets backwards in time in order to get trajectories of how the targets that we call A and B at time T = 100 got to their current positions. Based on the backward trajectories, we count how many particles started assigned/close to which target and ended up assigned/close to which target, to form the probabilities of which target is which track. Then, we can calculate target label probabilities such as

  Pr(target x1 at time k < T is target x_A at time k = T)
    = (number of particles with this trajectory) / (total number of particles).  (31)

Note that our particle filters rely on the assumption that at the initial time k = 1 the targets are well separated. To remove the need for this assumption, we could look at the probability that [x_k^1 ; x_k^2] comes from the prior pdf at time k = 1 versus the probability that [x_k^2 ; x_k^1] comes from the prior.
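As an illustrative sketch of the backward-sampling and label-counting steps above (cf. Eqs. (28), (30) and (31)), the toy below runs rejection sampling over a symmetrized set of synthetic forward particle clouds. The scenario (two well-separated scalar targets), the uniform forward weights, and all variable names are our own assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_v = 0.3                      # assumed process-noise std of each target
T, N, M = 5, 200, 200              # scans, forward particles, backward trajectories

# Synthetic "unlabeled" forward clouds: two well-separated targets near +1 and -1,
# with the block order of each particle randomized (labels carry no information).
fwd = []
for _ in range(T):
    x = np.stack([rng.normal(1.0, 0.1, N), rng.normal(-1.0, 0.1, N)], axis=1)
    flip = rng.random(N) < 0.5
    x[flip] = x[flip, ::-1]
    fwd.append(x)
w = np.full(N, 1.0 / N)            # uniform forward weights, for simplicity

rho = 1.0 / (2.0 * np.pi * sigma_v**2)   # bound on the 2-D transition density, cf. Eq. (28)

def f(x_next, x):
    """Joint transition density of two independent Gaussian random walks."""
    d2 = np.sum((x_next - x) ** 2)
    return np.exp(-0.5 * d2 / sigma_v**2) / (2.0 * np.pi * sigma_v**2)

# Backward rejection sampling over the symmetrized particle set (cf. Eq. (30)):
# each forward particle contributes both of its n! = 2 permutations.
traj = [None] * T
traj[T - 1] = fwd[T - 1][rng.choice(N, size=M, p=w)]
for k in range(T - 2, -1, -1):
    new = np.empty((M, 2))
    for j in range(M):
        while True:                               # rejection loop, cf. step 10
            cand = fwd[k][rng.choice(N, p=w)].copy()
            if rng.random() < 0.5:                # pick one of the two permutations
                cand = cand[::-1]
            if rng.random() <= f(traj[k + 1][j], cand) / rho:
                new[j] = cand
                break
    traj[k] = new

# Cf. Eq. (31): label A = the block with the larger value at k = T; count how
# often that same block is also the positive one at k = 1.
a_block = np.argmax(traj[T - 1], axis=1)
same = traj[0][np.arange(M), a_block] > 0
print(same.mean())   # close to 1: with well-separated targets, labels are unambiguous
```

In the paper's setting, the forward clouds would instead come from the SIR filter of Section IV-B, with its normalized weights in place of the uniform w.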
Figure 2. ASPF results for the first simulation with σ_v = 1: (a) true target trajectories, together with MMSE and MMOSPA estimates; (b) target label probability p_k versus time k.

Figure 3. ASPF results for the second simulation with σ_v = 0.2: (a) true target trajectories, together with MMSE and MMOSPA estimates; (b) target label probability p_k versus time k.
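Returning to the ASPF of Section III-A, its reordering loop (Eqs. (16)-(18)) can be sketched as follows. The toy particle cloud, the scalar two-target setting, and all names are our own assumptions.

```python
import numpy as np

def aspf_reorder(particles, weights):
    """Reorder two-scalar-target particles toward the MMOSPA ordering.

    Implements steps 0)-1): swap any particle that is closer to the
    chi-permuted side of the current estimate, then refresh the estimate
    (Eqs. (16)-(17)); c tracks each particle's switch parity (Eq. (19))."""
    x = particles.copy()                    # (M, 2): rows are [x1, x2]
    c = np.zeros(len(x), dtype=bool)        # c^(i) = 1 if particle i is switched
    x_hat = weights @ x                     # Eq. (16)
    test = False
    while not test:
        test = True
        for i in range(len(x)):
            if np.sum((x_hat - x[i]) ** 2) > np.sum((x_hat - x[i, ::-1]) ** 2):
                test = False
                x[i] = x[i, ::-1].copy()    # apply chi
                c[i] = ~c[i]
        x_hat = weights @ x                 # Eq. (17)
    p_tilde = weights[c].sum()              # switched probability mass, Eq. (18)
    return x, x_hat, p_tilde

rng = np.random.default_rng(1)
M = 1000
# Two well-separated targets near +1 and -1; half the particles arrive swapped.
x = np.stack([rng.normal(1.0, 0.1, M), rng.normal(-1.0, 0.1, M)], axis=1)
x[: M // 2] = x[: M // 2, ::-1]
w = np.full(M, 1.0 / M)
x_sorted, x_hat, p_tilde = aspf_reorder(x, w)
print(x_hat)     # approximately (1, -1), up to a global swap
print(p_tilde)   # approximately 0.5: half of the mass had to be switched
```

Per Eq. (21), an MMSE point estimate would then be `p_k * x_hat + (1 - p_k) * x_hat[::-1]`.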
V. RESULTS

In our simulations, we consider a basic example involving two targets that come in close proximity of each other. The state vectors are scalars evolving according to a Gaussian random walk model,

  x_k^j = x_{k−1}^j + v_{k−1}^j,  (32)

where the process noise is v_{k−1}^j ~ N(0, σ_v^2) for both targets, i.e., for j = 1 or 2. Both targets are detected at all times and there are no false detections. Target detections are modeled as

  z_k^j = x_k^j + u_k^j,  (33)

with measurement noise u_k^j ~ N(0, σ_u^2). However, we do not know which detection is associated to which target.

At each time instant, we have to consider two data association hypotheses:

  H1: target 1 is associated to detection 1 and target 2 is associated to detection 2,  (34)
  H2: target 1 is associated to detection 2 and target 2 is associated to detection 1.  (35)

Conditioned on a particle x_k^(i), the measurement likelihoods for these two hypotheses are

  L1 = Pr{z_k | H1, x_k^(i)}  (36)
  L2 = Pr{z_k | H2, x_k^(i)}  (37)

or,

  L1 = N(z_k(1); x_k(1), σ_u^2) N(z_k(2); x_k(2), σ_u^2)  (38)
  L2 = N(z_k(2); x_k(1), σ_u^2) N(z_k(1); x_k(2), σ_u^2).  (39)

Given the above likelihoods, we compute the measurement likelihood given the particle as

  h(z_k | x_k^(i)) = Σ_{j=1}^2 L_j Pr{H_j | x_k^(i)} = (1/2) Σ_{j=1}^2 L_j.  (40)

We use Eq. (40) to perform the weight update steps in both the ASPF and the fast FFUBSi.

As metrics of performance, we chose to look at MMSE [6] and MMOSPA [7] estimates together with target label probabilities. The following figures show the averaging of 100 Monte Carlo runs.
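The per-particle likelihood of Eqs. (36)-(40) can be written compactly; the helper below is our sketch, using the measurement-noise standard deviation from Eq. (33).

```python
import math

def gauss(z, mu, sigma):
    """Scalar Gaussian pdf N(z; mu, sigma^2)."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def particle_likelihood(z, x, sigma_u):
    """Eq. (40): average the likelihoods of the two association hypotheses,
    which are a priori equally likely given the particle."""
    l1 = gauss(z[0], x[0], sigma_u) * gauss(z[1], x[1], sigma_u)  # H1, cf. Eq. (38)
    l2 = gauss(z[1], x[0], sigma_u) * gauss(z[0], x[1], sigma_u)  # H2, cf. Eq. (39)
    return 0.5 * (l1 + l2)

# The result is permutation invariant, h(z|x) = h(z|chi x), as required by Eq. (4).
h = particle_likelihood((1.0, -1.0), (1.0, -1.0), 0.5)
h_swapped = particle_likelihood((1.0, -1.0), (-1.0, 1.0), 0.5)
print(h == h_swapped)   # True
```

This quantity is exactly the weight-update factor used in step 2) of the bootstrap filter (Eq. (14)) for both the ASPF and the fast FFUBSi.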
Figure 4. Fast FFUBSi results for the first simulation with σ_v = 1; final label probabilities Pr(x1, x2) = 0.50023 and Pr(x2, x1) = 0.49977: (a) true target trajectories, together with MMSE and MMOSPA estimates; (b) Pr(x1, x2) and Pr(x2, x1) versus time k.

Figure 5. Fast FFUBSi results for the second simulation with σ_v = 0.2; final label probabilities Pr(x1, x2) = 0.82458 and Pr(x2, x1) = 0.17542: (a) true target trajectories, together with MMSE and MMOSPA estimates; (b) Pr(x1, x2) and Pr(x2, x1) versus time k.