Kalman Filters Theory and Implementation
Abstract
The Kalman Filter (also known as the Linear Quadratic Estimator) is an algorithm
that provides an efficient, recursive computational means to estimate the
state of a process in a way that minimizes the mean square error. The Kalman
filter is a widely applied concept in time series analysis and has numerous
applications in signal processing, econometrics, and navigation and control.
We focus primarily on the theory of Discrete Kalman Filters, and have
implemented the algorithm in MATLAB using simulation techniques. We have
also applied the algorithm to a simplified model of the “navigation and
control” problem.
Introduction
The Kalman filter, also known as linear quadratic estimation (LQE), is an
algorithm that uses a series of measurements observed over time, containing
noise (random variations) and other inaccuracies, and produces estimates of
unknown variables that tend to be more precise than those based on a single
measurement alone. More formally, the Kalman filter operates recursively on
streams of noisy input data to produce a statistically optimal estimate of the
underlying system state. The filter is named for Rudolf (Rudy) E. Kalman,
one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common
application is for guidance, navigation and control of vehicles, particularly
aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied
concept in time series analysis used in fields such as signal processing and
econometrics. The algorithm works in a two-step process. In the prediction
step, the Kalman filter produces estimates of the current state variables,
along with their uncertainties. Once the outcome of the next measurement
(necessarily corrupted with some amount of error, including random noise) is
observed, these estimates are updated using a weighted average, with more
weight being given to estimates with higher certainty. Because of the algo-
rithm’s recursive nature, it can run in real time using only the present input
measurements and the previously calculated state and its uncertainty matrix;
no additional past information is required.
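As a minimal illustration of this weighted-average update, a one-dimensional fusion step can be sketched as follows (the numbers are invented for illustration and do not come from any application in this paper):

```python
# Minimal 1-D illustration of the Kalman weighted-average update.
# All numeric values are illustrative assumptions.
def fuse(pred, var_pred, meas, var_meas):
    """Combine a prediction and a measurement, each weighted by its certainty."""
    k = var_pred / (var_pred + var_meas)   # gain: how much to trust the measurement
    est = pred + k * (meas - pred)         # weighted average of the two sources
    var = (1.0 - k) * var_pred             # fused estimate is more certain than either
    return est, var

# A more certain measurement (smaller variance) pulls the estimate toward itself.
est, var = fuse(pred=10.0, var_pred=4.0, meas=12.0, var_meas=1.0)
```

Note how the fused variance is smaller than both input variances, which is exactly why conditioning on each new measurement improves the estimate.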
Consider a general discrete-time process with state dynamics

x_{k+1} = f(x_k, u_k, w_k, k)      (1)

and measurements

z_k = h(x_k, u_k, v_k, k)      (2)

Also let x_0 be the random initial condition of the system and

Z_k = {z_i : 1 ≤ i ≤ k}      (3)
The problem is to estimate x_k from the measurements Z_k. When the dynamic
model for the process, f(·), and for the measurements, h(·), are linear, and
the random components x_0, w_k, v_k are uncorrelated Gaussian random vectors,
the solution is given by the classical Kalman Filter equations.
The Kalman filter addresses the general problem of trying to estimate
the state x ∈ Rn of a discrete-time controlled process that is governed by the
linear stochastic difference equation
x_{k+1} = A_k x_k + B u_k + w_k

with a measurement z ∈ R^m that is

z_k = H_k x_k + v_k

The random variables w_k and v_k represent the process and measurement noise, with

p(w) = N(0, Q)
p(v) = N(0, R)
u_k is the control input and is a known nonrandom vector of dimension l.
The initial state x_0 is a random vector with known mean µ_0 = E(x_0) and
covariance P_0 = E((x_0 − µ_0)(x_0 − µ_0)^T).
We define the a priori and a posteriori estimate errors as

e_k^- = x_k − x̂_k^-
e_k = x_k − x̂_k
The corresponding a priori and a posteriori error covariance matrices are

P_k^- = E[e_k^- (e_k^-)^T]
P_k = E[e_k e_k^T]
We want to find an equation that computes an a posteriori state estimate
as a linear combination of an a priori estimate and a weighted difference
between an actual measurement and a measurement prediction as shown
below
x̂_k = x̂_k^- + K_k (z_k − H_k x̂_k^-)

where the gain K_k is given by

K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1} = P_k^- H_k^T / (H_k P_k^- H_k^T + R_k)      (4)
From equation (4) we see that as the measurement error covariance R_k
approaches zero, the gain K_k weights the residual more heavily, i.e.

lim_{R_k → 0} K_k = H_k^{-1}

while as the a priori estimate error covariance P_k^- approaches zero, the
gain weights the residual less heavily:

lim_{P_k^- → 0} K_k = 0
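These limiting behaviours can be checked numerically in the scalar case (the values of P_k^-, H_k, and R_k below are arbitrary illustrations, not from the paper):

```python
# Scalar Kalman gain K = P*H / (H*P*H + R), used to check the two limits.
def gain(p_minus, h, r):
    return p_minus * h / (h * p_minus * h + r)

k_small_r = gain(p_minus=1.0, h=2.0, r=1e-12)   # R -> 0: gain approaches 1/H
k_small_p = gain(p_minus=1e-12, h=2.0, r=1.0)   # P^- -> 0: gain approaches 0
```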
The Algorithm
We first give a “high-level” description of the Kalman Filter Algorithm and
discuss the key ideas behind the algorithm.
The Kalman filter estimates a process by using a form of feedback control:
the filter estimates the process state at some time and then obtains feedback
in the form of (noisy) measurements. As such, the equations for the Kalman
filter fall into two groups: time update equations and measurement update
equations. The time update equations are responsible for projecting forward
(in time) the current state and error covariance estimates to obtain the a
priori estimates for the next time step. The measurement update equations
are responsible for the feedback, i.e., for incorporating a new measurement
into the a priori estimate to obtain an improved a posteriori estimate.
Discrete Kalman Filter Time Update Equations
x̂_{k+1}^- = A_k x̂_k + B u_k      (5)

P_{k+1}^- = A_k P_k A_k^T + Q_k
The time update equations project the state and covariance estimates
from time step k to step k + 1.
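The projection step can be sketched in a few lines of NumPy (the paper's own implementation is in MATLAB; the constant-velocity A, zero B, and noise values below are illustrative assumptions):

```python
import numpy as np

# Time update (predict) step of eq. (5):
#   x^-_{k+1} = A x_k + B u_k,   P^-_{k+1} = A P_k A^T + Q
def predict(x, P, A, B, u, Q):
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model with dt = 1
B = np.zeros((2, 1))                     # no control input in this sketch
Q = 0.01 * np.eye(2)                     # small process noise
x, P = np.array([0.0, 1.0]), np.eye(2)
x_pred, P_pred = predict(x, P, A, B, np.zeros(1), Q)
```

Note that the predicted covariance P_pred is larger than P: projecting forward without a measurement always increases uncertainty.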
Discrete Kalman Filter Measurement Update Equations
K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}      (6)

x̂_k = x̂_k^- + K_k (z_k − H_k x̂_k^-)

P_k = (I − K_k H_k) P_k^-
The first task during the measurement update is to compute the Kalman
gain, K_k. The next step is to actually measure the process to obtain z_k,
and then to generate an a posteriori state estimate by incorporating the
measurement z_k as in the second equation. The final step is to obtain an
a posteriori error covariance estimate via the last equation.
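These three tasks map directly onto a NumPy sketch (the matrices and numbers below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Measurement update (correct) step of eq. (6).
def update(x_pred, P_pred, z, H, R):
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # 1. compute the Kalman gain
    x = x_pred + K @ (z - H @ x_pred)            # 2. incorporate the measurement z
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred   # 3. a posteriori covariance
    return x, P

H = np.array([[1.0, 0.0]])        # observe position only
R = np.array([[1.0]])             # measurement noise variance
x, P = update(np.array([1.0, 1.0]), np.eye(2), np.array([2.0]), H, R)
```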
After each time and measurement update pair, the process is repeated
with the previous a posteriori estimates used to project or predict the new a
priori estimates. This recursive nature is one of the very appealing features
of the Kalman filter: it makes practical implementations much more feasible
because the filter recursively conditions the current estimate on all of the
past measurements.
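The full recursion can be made concrete with a deliberately tiny scalar example, estimating a constant value from noisy readings (the process model, parameters, and measurements here are invented for illustration):

```python
# Complete predict/correct loop for a scalar random-walk state.
# q, r, and the measurement sequence are illustrative assumptions.
def kalman_1d(zs, q=1e-4, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q               # time update: the model is x_{k+1} = x_k
        k = p / (p + r)         # measurement update: gain ...
        x = x + k * (z - x)     # ... a posteriori state ...
        p = (1.0 - k) * p       # ... and a posteriori variance
        estimates.append(x)
    return estimates

est = kalman_1d([0.9, 1.1, 1.0, 0.95, 1.05])   # noisy readings of a value near 1
```

Only the current (x, p) pair is carried between iterations, yet each estimate is conditioned on every past measurement.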
The following flow chart summarizes the Kalman Filter algorithm.
As an example, consider a truck tracked with a GPS unit. In the prediction
phase, the truck's old position estimate is modified according to the laws of
motion, plus any changes produced by the accelerator pedal and steering
wheel. Not only will a new
position estimate be calculated, but a new covariance will be calculated as
well. Perhaps the covariance is proportional to the speed of the truck because
we are more uncertain about the accuracy of the dead reckoning estimate at
high speeds but very certain about the position when moving slowly. Next,
in the update phase, a measurement of the truck’s position is taken from the
GPS unit. Along with this measurement comes some amount of uncertainty,
and its covariance relative to that of the prediction from the previous phase
determines how much the new measurement will affect the updated predic-
tion. Ideally, if the dead reckoning estimates tend to drift away from the
real position, the GPS measurement should pull the position estimate back
towards the real position but not disturb it to the point of becoming rapidly
changing and noisy.
The position and velocity of the object are specified by the state vector

X_k = [x_k, ẋ_k]^T

and the initial state of the system is

X_0 = [0, 0]^T
We assume that between the (k − 1)-th and k-th time steps the object
undergoes a constant acceleration a that is normally distributed with mean 0
and standard deviation σ_a. From Newton's laws of motion we have
X_k = [x_k, ẋ_k]^T = [[1, dt], [0, 1]] [x_{k−1}, ẋ_{k−1}]^T + a [dt²/2, dt]^T      (7)

which is of the form

X_k = A X_{k−1} + a B      (8)
So finally, we have
Xk = A × Xk−1 + wk (9)
where wk ∼ N (0, Q) and
Q = B B^T σ_a² = σ_a² [[dt⁴/4, dt³/2], [dt³/2, dt²]]      (10)
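The matrices of this constant-acceleration model can be built directly from eqs. (7)–(10); the values of dt and σ_a below are illustrative choices, not the paper's:

```python
import numpy as np

# State-transition, input, and process-noise matrices of eqs. (7)-(10).
dt, sigma_a = 0.1, 2.0              # illustrative time step and acceleration std.
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[dt**2 / 2.0],
              [dt]])
Q = sigma_a**2 * (B @ B.T)          # Q = sigma_a^2 B B^T, eq. (10)
```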
At each time step, a noisy measurement of the true position of the object
is made, z_k. Let us suppose the measurement noise v_k is also normally
distributed, with mean 0 and standard deviation σ_z, i.e. v_k ∼ N(0, R)
where R = [σ_z²]. Therefore,
zk = HXk + vk (11)
where H = [1  0]. The following graphs depict the results of the Kalman
Filter applied to a simulated data set.
Figure 2: The 1-D simulation of the true path of the object and the observed
path
Figure 3: The Moving-Average Filter for the observed 1-D state-space
Figure 4: The Discrete Kalman Filter for the observed 1-D state-space
Figure 5: The 2-D simulation of the true path of the projectile and the
observed path
Figure 6: The Moving-Average Filter for the observed 2-D state-space
Figure 7: The Discrete Kalman Filter for the observed 2-D state-space
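A 1-D simulation of the kind plotted in these figures can be sketched as follows (the paper's runs are in MATLAB; this NumPy version and all of its parameters are assumptions for illustration):

```python
import numpy as np

# Simulate a constant-velocity track, add Gaussian measurement noise,
# then run the discrete Kalman filter over the noisy measurements.
rng = np.random.default_rng(0)
dt, n, sigma_z = 1.0, 50, 2.0
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)                 # small process noise
R = np.array([[sigma_z**2]])         # measurement noise covariance

true = np.zeros((n, 2))
true[0] = [0.0, 1.0]                 # start at x = 0 with velocity 1
for k in range(1, n):
    true[k] = A @ true[k - 1]
zs = true[:, 0] + rng.normal(0.0, sigma_z, n)   # noisy position readings

x, P = np.zeros(2), np.eye(2)
est = []
for z in zs:
    x, P = A @ x, A @ P @ A.T + Q                # time update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)          # measurement update
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])
```

Plotting `true[:, 0]`, `zs`, and `est` against the step index reproduces the qualitative picture in the figures: the filtered track hugs the true path far more closely than the raw measurements do.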
And the equation relating the noisy measurements and the true state is

Z_k = [[1, 0, 0, 0], [0, 0, 1, 0]] [x_k, ẋ_k, y_k, ẏ_k]^T + v_k      (13)
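The measurement matrix of eq. (13) simply picks out the two position components of the 4-dimensional state; a quick check (with an invented state vector, assuming the [x, ẋ, y, ẏ] layout carried over from the 1-D case):

```python
import numpy as np

# Measurement matrix of eq. (13): observe the x and y positions from the
# state [x, xdot, y, ydot]. The state values below are illustrative.
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
state = np.array([3.0, 1.0, 4.0, -1.0])
z_noiseless = H @ state               # the two observed position components
```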
Figure 9: The X vs Y plot for the Kalman Filter