Extended Kalman Filter Tutorial
Gabriel A. Terejanu
Department of Computer Science and Engineering
University at Buffalo, Buffalo, NY 14260
terejanu@buffalo.edu
Dynamic process
Consider the following nonlinear system, described by the difference equation and the observation model with additive noise:

x_k = f(x_{k-1}) + w_{k-1}    (1)
z_k = h(x_k) + v_k            (2)
The initial state x_0 is a random vector with known mean μ_0 = E[x_0] and covariance P_0 = E[(x_0 − μ_0)(x_0 − μ_0)^T].
In the following we assume that the random vector w_k captures uncertainties in the model and v_k denotes the measurement noise. Both are temporally uncorrelated (white-noise), zero-mean random sequences with known covariances, and both are uncorrelated with the initial state x_0:
E[w_k] = 0,   E[w_k w_k^T] = Q_k    (3)
E[v_k] = 0,   E[v_k v_k^T] = R_k    (4)
E[w_k v_j^T] = 0  for all k, j      (5)
The vector-valued functions f(⋅) and h(⋅) are assumed to be C^1 (both the function and its first derivative are continuous on the given domain).
Dimension and description of variables:

x_k    n×1    State vector
w_k    n×1    Process noise vector
z_k    m×1    Observation vector
v_k    m×1    Measurement noise vector
f(⋅)   n×1    Process nonlinear vector function
h(⋅)   m×1    Observation nonlinear vector function
Q_k    n×n    Process noise covariance matrix
R_k    m×m    Measurement noise covariance matrix
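To make the setup concrete, here is a small sketch of a system of the form (1)-(2). The particular f, h, Q, R, and all numbers are illustrative assumptions, not taken from the tutorial:

```python
import numpy as np

# Hypothetical instance of eqs. (1)-(2): state x = [position, velocity] (n = 2),
# scalar observation (m = 1). Both f and h below are C^1, as the tutorial requires.

dt = 0.1  # assumed time step

def f(x):
    """Process model f: Euler step of a pendulum-like nonlinear drift."""
    pos, vel = x
    return np.array([pos + dt * vel,
                     vel - dt * np.sin(pos)])

def h(x):
    """Observation model h: range from a sensor at unit height above the origin."""
    return np.array([np.sqrt(x[0] ** 2 + 1.0)])

Q = 1e-4 * np.eye(2)   # process noise covariance Q_k (n x n)
R = 1e-2 * np.eye(1)   # measurement noise covariance R_k (m x m)

# One step of the dynamic process: x_k = f(x_{k-1}) + w_{k-1}, z_k = h(x_k) + v_k
rng = np.random.default_rng(0)
x_prev = np.array([1.0, 0.0])
x_k = f(x_prev) + rng.multivariate_normal(np.zeros(2), Q)
z_k = h(x_k) + rng.multivariate_normal(np.zeros(1), R)
```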
EKF derivation
Assuming the nonlinearities in the dynamic and the observation model are smooth, we can expand f(x_k) and h(x_k) in Taylor series and in this way approximate the forecast and the next estimate of x_k.
Model Forecast Step
Initially, the only available information is the mean μ_0 and the covariance P_0 of the initial state, so the initial optimal estimate x^a_0 and its error covariance are:

x^a_0 = μ_0 = E[x_0]                        (6)
P_0 = E[(x_0 − x^a_0)(x_0 − x^a_0)^T]       (7)
Assume now that we have the optimal estimate x^a_{k-1} ≜ E[x_{k-1} | Z_{k-1}] with covariance P_{k-1} at time k−1. The predictable part of x_k is given by:

x^f_k ≜ E[x_k | Z_{k-1}] = E[f(x_{k-1}) | Z_{k-1}]    (8)

since w_{k-1} is zero-mean and uncorrelated with Z_{k-1}. Expanding f(⋅) in Taylor series about x^a_{k-1}:

f(x_{k-1}) = f(x^a_{k-1}) + J_f(x^a_{k-1})(x_{k-1} − x^a_{k-1}) + H.O.T.    (9)
where Jf is the Jacobian of f() and the higher order terms (H.O.T.) are considered negligible. Hence,
the Extended Kalman Filter is also called the First-Order Filter. The Jacobian is defined as:
      [ ∂f_1/∂x_1  ∂f_1/∂x_2  ...  ∂f_1/∂x_n ]
J_f = [    ...        ...     ...     ...    ]    (10)
      [ ∂f_n/∂x_1  ∂f_n/∂x_2  ...  ∂f_n/∂x_n ]
where f(x) = (f_1(x), f_2(x), ..., f_n(x))^T and x = (x_1, x_2, ..., x_n)^T. Eq. (9) becomes:

f(x_{k-1}) ≈ f(x^a_{k-1}) + J_f(x^a_{k-1}) e_{k-1}    (11)
where e_{k-1} ≜ x_{k-1} − x^a_{k-1}. The expected value of f(x_{k-1}) conditioned on Z_{k-1} is:

E[f(x_{k-1}) | Z_{k-1}] ≈ f(x^a_{k-1}) + J_f(x^a_{k-1}) E[e_{k-1} | Z_{k-1}]    (12)
                        = f(x^a_{k-1})                                           (13)

since the estimation error e_{k-1} has zero conditional mean.
The forecast value is therefore taken as x^f_k ≈ f(x^a_{k-1}), and the forecast error is:

e^f_k ≜ x_k − x^f_k                             (14)
      = f(x_{k-1}) + w_{k-1} − f(x^a_{k-1})
      ≈ J_f(x^a_{k-1}) e_{k-1} + w_{k-1}        (15)

The forecast error covariance follows, the cross terms vanishing because w_{k-1} is uncorrelated with e_{k-1}:

P^f_k ≜ E[e^f_k (e^f_k)^T]
      = J_f(x^a_{k-1}) E[e_{k-1} e_{k-1}^T] J_f^T(x^a_{k-1}) + E[w_{k-1} w_{k-1}^T]    (16)
      = J_f(x^a_{k-1}) P_{k-1} J_f^T(x^a_{k-1}) + Q_{k-1}                              (17)
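The forecast step can be sketched numerically. Here the Jacobian J_f is approximated by central finite differences, a common stand-in when the analytic Jacobian is tedious to derive; the helper names and tolerances are assumptions for illustration:

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Central finite-difference approximation of the Jacobian of func at x."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        d = np.zeros(x.size)
        d[j] = eps
        J[:, j] = (func(x + d) - func(x - d)) / (2 * eps)
    return J

def forecast(f, xa, P, Q):
    """Model forecast step: x_f = f(x_a), P_f = Jf P Jf^T + Q  (eqs. 14-17)."""
    Jf = jacobian(f, xa)
    xf = f(xa)
    Pf = Jf @ P @ Jf.T + Q
    return xf, Pf
```

For a linear f(x) = A x the finite-difference Jacobian is exact and the step reduces to the standard Kalman Filter predictor.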
Data Assimilation Step
At time k the available information is the forecast x^f_k and the measurement z_k. As in the standard Kalman Filter, the updated estimate is sought as a linear combination of the forecast and the innovation:

x^a_k = x^f_k + K_k (z_k − h(x^f_k))    (18)

where K_k is the Kalman gain matrix. Substituting z_k = h(x_k) + v_k, the a posteriori error e_k ≜ x_k − x^a_k satisfies:

E[e_k | Z_k] = E[(e^f_k − K_k(h(x_k) − h(x^f_k)) − K_k v_k) | Z_k]    (19)
Following the same steps as in model forecast step, expanding h() in Taylor Series about xfk we have:
h(x_k) ≈ h(x^f_k) + J_h(x^f_k)(x_k − x^f_k) + H.O.T.    (20)
where Jh is the Jacobian of h() and the higher order terms (H.O.T.) are considered negligible. The
Jacobian of h() is defined as:
      [ ∂h_1/∂x_1  ∂h_1/∂x_2  ...  ∂h_1/∂x_n ]
J_h = [    ...        ...     ...     ...    ]    (21)
      [ ∂h_m/∂x_1  ∂h_m/∂x_2  ...  ∂h_m/∂x_n ]
where h(x) = (h_1(x), h_2(x), ..., h_m(x))^T and x = (x_1, x_2, ..., x_n)^T. Taking the expectation on both sides of (20) conditioned on Z_k:
E[h(x_k) | Z_k] ≈ h(x^f_k) + J_h(x^f_k) E[e^f_k | Z_k]    (22)

Using the linearization (20), the a posteriori error becomes:

e_k = e^f_k − K_k(h(x_k) − h(x^f_k) + v_k)           (23)
    ≈ (I − K_k J_h(x^f_k)) e^f_k − K_k v_k           (24)

so the updated estimate is:

x^a_k ≈ x^f_k + K_k(z_k − h(x^f_k))    (25)

and the a posteriori error covariance is:

P_k ≜ E[e_k e_k^T] = (I − K_k J_h(x^f_k)) P^f_k (I − K_k J_h(x^f_k))^T + K_k R_k K_k^T
The posterior covariance formula holds for any K_k. As in the standard Kalman Filter, we determine K_k by minimizing tr(P_k) with respect to K_k:
0 = ∂tr(P_k)/∂K_k = −(J_h(x^f_k) P^f_k)^T − P^f_k J_h^T(x^f_k) + 2 K_k J_h(x^f_k) P^f_k J_h^T(x^f_k) + 2 K_k R_k    (26)
Hence the Kalman gain is:

K_k = P^f_k J_h^T(x^f_k) [ J_h(x^f_k) P^f_k J_h^T(x^f_k) + R_k ]^{-1}    (27)

and substituting K_k back, the posterior covariance simplifies to:

P_k = (I − K_k J_h(x^f_k)) P^f_k    (28)
Initialization:
x^a_0 = μ_0, with error covariance P_0

Model Forecast Step/Predictor:
x^f_k ≈ f(x^a_{k-1})
P^f_k = J_f(x^a_{k-1}) P_{k-1} J_f^T(x^a_{k-1}) + Q_{k-1}

Data Assimilation Step/Corrector:
x^a_k ≈ x^f_k + K_k(z_k − h(x^f_k))
K_k = P^f_k J_h^T(x^f_k) [ J_h(x^f_k) P^f_k J_h^T(x^f_k) + R_k ]^{-1}
P_k = (I − K_k J_h(x^f_k)) P^f_k
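The predictor and corrector can be combined into one routine. This is a minimal sketch, assuming a finite-difference Jacobian in place of analytic derivatives; the helper names are illustrative, not part of the tutorial:

```python
import numpy as np

def num_jacobian(func, x, eps=1e-6):
    """Central finite-difference Jacobian of func at x."""
    fx = np.atleast_1d(func(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        d = np.zeros(x.size)
        d[j] = eps
        J[:, j] = (np.atleast_1d(func(x + d)) - np.atleast_1d(func(x - d))) / (2 * eps)
    return J

def ekf_step(f, h, xa, P, z, Q, R):
    """One EKF cycle: model forecast (predictor) then data assimilation (corrector)."""
    # Predictor: x_f = f(x_a), P_f = Jf P Jf^T + Q
    Jf = num_jacobian(f, xa)
    xf = f(xa)
    Pf = Jf @ P @ Jf.T + Q
    # Corrector: K = P_f Jh^T (Jh P_f Jh^T + R)^(-1)
    Jh = num_jacobian(h, xf)
    S = Jh @ Pf @ Jh.T + R
    K = Pf @ Jh.T @ np.linalg.inv(S)
    xa_new = xf + K @ (np.atleast_1d(z) - np.atleast_1d(h(xf)))
    P_new = (np.eye(xa.size) - K @ Jh) @ Pf
    return xa_new, P_new
```

For linear f and h this reduces exactly to the standard Kalman Filter cycle.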
Iterated Extended Kalman Filter
In the EKF, h(⋅) is linearized about the predicted state estimate x^f_k. The IEKF instead linearizes it about the most recent estimate, thereby improving accuracy [3, 1]. This is achieved by recomputing x^a_k, K_k, and P_k at each iteration.
Denote by x^a_{k,i} the estimate at time k and iteration i. The iteration is initialized with x^a_{k,0} = x^f_k. The measurement update step then becomes, for each i:

x^a_{k,i+1} = x^f_k + K_{k,i}(z_k − h(x^a_{k,i}))
K_{k,i} = P^f_k J_h^T(x^a_{k,i}) [ J_h(x^a_{k,i}) P^f_k J_h^T(x^a_{k,i}) + R_k ]^{-1}
P_{k,i} = (I − K_{k,i} J_h(x^a_{k,i})) P^f_k
The iteration is stopped when there is little improvement between two consecutive iterates. The added accuracy comes at the cost of higher computation time.
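The iteration above can be transcribed directly. This sketch assumes x^f_k, P^f_k, and z_k are already available; the tolerance, iteration cap, and the analytic-Jacobian callback are illustrative assumptions:

```python
import numpy as np

def iekf_update(h, Jh_fn, xf, Pf, z, R, tol=1e-8, max_iter=20):
    """IEKF measurement update: relinearize h about the latest estimate.

    h     : observation function
    Jh_fn : callback returning the Jacobian of h at a given point
    """
    xa = xf.copy()
    for _ in range(max_iter):
        Jh = Jh_fn(xa)
        S = Jh @ Pf @ Jh.T + R
        K = Pf @ Jh.T @ np.linalg.inv(S)
        xa_next = xf + K @ (z - h(xa))
        if np.linalg.norm(xa_next - xa) < tol:  # little improvement -> stop
            xa = xa_next
            break
        xa = xa_next
    P = (np.eye(xf.size) - K @ Jh) @ Pf
    return xa, P
```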
Stability
Since Q_k and R_k are symmetric positive definite matrices, we can write:

Q_k = G_k G_k^T    (29)
R_k = D_k D_k^T    (30)
Denote by φ and χ the higher-order terms resulting from the following subtractions:

f(x_k) − f(x^a_k) = J_f(x^a_k) e_k + φ(x_k, x^a_k)    (31)
h(x_k) − h(x^a_k) = J_h(x^a_k) e_k + χ(x_k, x^a_k)    (32)
Konrad Reif showed in [2] that the estimation error remains bounded if the following hold: there exist positive real numbers ā, c̄, p_1, p_2, ε_φ, ε_χ, ε' such that

‖J_f(x^a_k)‖ ≤ ā    (33)
‖J_h(x^a_k)‖ ≤ c̄    (34)
p_1 I ≤ P_k ≤ p_2 I    (35)
‖φ(x_k, x^a_k)‖ ≤ ε_φ ‖x_k − x^a_k‖^2,  with ‖x_k − x^a_k‖ ≤ ε'    (36)
‖χ(x_k, x^a_k)‖ ≤ ε_χ ‖x_k − x^a_k‖^2,  with ‖x_k − x^a_k‖ ≤ ε'    (37)

Then the estimation error e_k is exponentially bounded in mean square and bounded with probability one, provided that the initial estimation error satisfies:

‖e_0‖ ≤ ε    (38)

and the covariance matrices of the noise terms are bounded via:

G_k G_k^T ≤ δ I    (39)
D_k D_k^T ≤ δ I    (40)

for some ε, δ > 0.
Conclusion
In the EKF the state distribution is propagated analytically through a first-order linearization of the nonlinear system. This does not account for the fact that x_k is a random variable with inherent uncertainty, and it requires that the first two terms of the Taylor series dominate the remaining terms.
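This limitation can be seen numerically: for a nonlinear f, the mean propagated first-order, f(E[x]), differs from the true mean E[f(x)]. A small Monte Carlo check (function and numbers chosen for illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
f = np.sin  # any nonlinear map; first-order propagation sends the mean to f(E[x])

mu, sigma = 1.0, 0.5
samples = rng.normal(mu, sigma, size=200_000)

first_order_mean = f(mu)          # first-order propagation of the mean
true_mean = f(samples).mean()     # Monte Carlo estimate of E[f(x)]

print(f"f(E[x]) = {first_order_mean:.4f},  E[f(x)] ~= {true_mean:.4f}")
```

For a Gaussian input, E[sin x] = sin(μ) e^{−σ²/2}, so the gap between the two values grows with the input variance σ².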
A second-order version exists [4, 5], but its computational complexity makes it impractical for real-time applications or high-dimensional systems.
References
[1] Arthur Gelb. Applied Optimal Estimation. M.I.T. Press, 1974.
[2] K. Reif, S. Gunther, E. Yaz, and R. Unbehauen. Stochastic Stability of the Discrete-Time Extended Kalman Filter. IEEE Transactions on Automatic Control, 1999.
[3] Tine Lefebvre and Herman Bruyninckx. Kalman Filters for Nonlinear Systems: A Comparison of Performance.
[4] John M. Lewis and S. Lakshmivarahan. Dynamic Data Assimilation: A Least Squares Approach. 2006.
[5] R. van der Merwe. Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models. Technical report, 2003.