
Kalman Filters - Theory and Implementation

Bhargav N Bhatt, 10195


Siddharth Vishwanath, 10712
Tanuj Kumar Sharma, 10760

Indian Institute of Technology-Kanpur


April 7, 2014

MTH511 Course Project.

Supervised by Dr. Subhra Sankar Dhar.

Abstract
The Kalman filter (also known as the linear quadratic estimator) is an algorithm
that provides an efficient, recursive computational means to estimate the
state of a process in a way that minimizes the mean square error. Kalman
filters are widely applied in time series analysis and have numerous
applications in signal processing, econometrics, and navigation and control.
We focus primarily on the theory of discrete Kalman filters, and have
implemented the algorithm in MATLAB using simulation techniques. We have also
applied the algorithm to a simplified model of the “navigation and
control” problem.
Introduction
The Kalman filter, also known as linear quadratic estimation (LQE), is an
algorithm that uses a series of measurements observed over time, containing
noise (random variations) and other inaccuracies, and produces estimates of
unknown variables that tend to be more precise than those based on a single
measurement alone. More formally, the Kalman filter operates recursively on
streams of noisy input data to produce a statistically optimal estimate of the
underlying system state. The filter is named for Rudolf (Rudy) E. Kalman,
one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common
application is for guidance, navigation and control of vehicles, particularly
aircraft and spacecraft. Furthermore, the Kalman filter is a widely applied
concept in time series analysis used in fields such as signal processing and
econometrics. The algorithm works in a two-step process. In the prediction
step, the Kalman filter produces estimates of the current state variables,
along with their uncertainties. Once the outcome of the next measurement
(necessarily corrupted with some amount of error, including random noise) is
observed, these estimates are updated using a weighted average, with more
weight being given to estimates with higher certainty. Because of the algo-
rithm’s recursive nature, it can run in real time using only the present input
measurements and the previously calculated state and its uncertainty matrix;
no additional past information is required.

Discrete Kalman Filters


The Problem
Consider the following stochastic dynamic model and the sequence of noisy
observations zk :
xk = f (xk−1 , uk−1 , wk−1 , k) (1)

zk = h(xk , uk , vk , k) (2)
Also let x0 be the random initial condition of the system and

Zk = {zi : 1 ≤ i ≤ k} (3)

be the set of k observations. Finding the estimate, or analysis, xᵃk of the
state xk, given Zk and the initial conditions, is called the filtering
problem. When the dynamic model for the process, f(), and for the measurements,
h(), are linear, and the random components x0, wk, vk are uncorrelated
Gaussian random vectors, then the solution is given by the classical
Kalman Filter equations.
The Kalman filter addresses the general problem of trying to estimate
the state x ∈ Rn of a discrete-time controlled process that is governed by the
linear stochastic difference equation

xk+1 = Ak xk + Buk + wk
with a measurement z ∈ Rm that is

z k = Hk x k + v k

where the n × n matrix A in the difference equation relates the state at
time step k to the state at step k + 1, in the absence of either a driving
function or process noise. The n × l matrix B relates the control input u ∈ Rl
to the state x. The m × n matrix H in the measurement equation relates the
state to the measurement zk.

Here wk represents the process noise and vk the measurement noise. They are
assumed to be independent of each other and to have normal probability
distributions

p(w) = N (0, Q)

p(v) = N (0, R)
uk is the control input and is a known nonrandom vector of dimension l.
The initial state x0 is a random vector with known mean µ0 = E(x0) and
covariance P0 = E((x0 − µ0)(x0 − µ0)ᵀ).
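The paper's implementation is in MATLAB; as an illustrative sketch, one step of the model above can be simulated in Python/NumPy. All dimensions and matrix values below are made-up examples, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed for the example): n = 2 states, l = 1 control.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # n x n state-transition matrix
B = np.array([[0.005], [0.1]])  # n x l control-input matrix
H = np.array([[1.0, 0.0]])      # m x n measurement matrix (m = 1)
Q = 0.01 * np.eye(2)            # process-noise covariance
R = np.array([[0.25]])          # measurement-noise covariance

x = np.array([0.0, 0.0])        # initial state x0 (here taken at its mean mu0)
u = np.array([1.0])             # known, nonrandom control input

# One step of the linear stochastic difference equation and its measurement.
w = rng.multivariate_normal(np.zeros(2), Q)   # process noise w_k ~ N(0, Q)
v = rng.multivariate_normal(np.zeros(1), R)   # measurement noise v_k ~ N(0, R)
x_next = A @ x + B @ u + w                    # x_{k+1} = A x_k + B u_k + w_k
z = H @ x_next + v                            # z_k = H x_k + v_k
```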

Computational Aspects of Kalman Filter


We define x̂k⁻ ∈ Rn (note the super minus) to be our a priori state estimate
at step k, given knowledge of the process prior to step k, and x̂k ∈ Rn to be
our a posteriori state estimate at step k, given the measurement zk. We can
then define the a priori and a posteriori estimate errors as follows:

ek⁻ = xk − x̂k⁻

ek = xk − x̂k
The corresponding a priori and a posteriori error covariance matrices are:

Pk⁻ = E[ek⁻ ek⁻ᵀ]

Pk = E[ek ekᵀ]
We want to find an equation that computes an a posteriori state estimate
as a linear combination of an a priori estimate and a weighted difference
between an actual measurement and a measurement prediction, as shown below:

x̂k = x̂k⁻ + Kk(zk − Hk x̂k⁻)

The difference (zk − Hk x̂k⁻) is called the residual, which represents the
discrepancy between the predicted measurement Hk x̂k⁻ and the actual
measurement zk. If the residual is zero, the predicted and actual
measurements agree.
The matrix K is chosen to be the gain or blending factor that minimizes
the a posteriori error covariance. This minimization can be accomplished
by performing the indicated expectations, taking the derivative of the trace
with respect to K, setting that result equal to zero, and then solving for K.

Kk = Pk⁻ Hkᵀ (Hk Pk⁻ Hkᵀ + Rk)⁻¹ = Pk⁻ Hkᵀ / (Hk Pk⁻ Hkᵀ + Rk)    (4)
From (4) we see that as the measurement error covariance approaches zero,
the gain Kk weights the residual more heavily, i.e.

lim Kk = Hk⁻¹ as Rk → 0

In contrast, as the a priori estimate error covariance approaches zero, the
gain Kk weights the residual less heavily, i.e.

lim Kk = 0 as Pk⁻ → 0

As the measurement error covariance Rk approaches zero, the actual
measurement zk is “trusted” more and more, while the predicted measurement
Hk x̂k⁻ is trusted less and less. On the other hand, as the a priori estimate
error covariance Pk⁻ approaches zero, the actual measurement is trusted less
and less, while the predicted measurement Hk x̂k⁻ is trusted more and more.
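These two limits can be checked numerically in the scalar (n = 1) case. The following is a sketch in Python/NumPy rather than the paper's MATLAB, and the values for Pk⁻ and Hk are made up for the example:

```python
import numpy as np

P_minus = np.array([[2.0]])   # an assumed a priori error covariance P_k^-
H = np.array([[0.5]])         # an assumed measurement matrix H_k

def kalman_gain(P_minus, H, R):
    """K_k = P^- H^T (H P^- H^T + R)^-1, eq. (4)."""
    S = H @ P_minus @ H.T + R
    return P_minus @ H.T @ np.linalg.inv(S)

# As R_k -> 0 the gain tends to H^-1 = 2: the measurement is trusted fully.
K_small_R = kalman_gain(P_minus, H, R=np.array([[1e-12]]))

# As P_k^- -> 0 the gain tends to 0: the prediction is trusted fully.
K_small_P = kalman_gain(np.array([[1e-12]]), H, R=np.array([[2.0]]))
```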

The Algorithm
We first give a “high-level” description of the Kalman Filter Algorithm and
discuss the key ideas behind the algorithm.
The Kalman filter estimates a process by using a form of feedback control:
the filter estimates the process state at some time and then obtains feedback
in the form of (noisy) measurements. As such, the equations for the Kalman
filter fall into two groups: time update equations and measurement update
equations. The time update equations are responsible for projecting forward
(in time) the current state and error covariance estimates to obtain the a
priori estimates for the next time step. The measurement update equations
are responsible for the feedback, i.e., for incorporating a new measurement
into the a priori estimate to obtain an improved a posteriori estimate.

The time update equations can also be thought of as predictor equations,


while the measurement update equations can be thought of as corrector equa-
tions. Indeed the final estimation algorithm resembles that of a predictor-
corrector algorithm.
We now present the specific equations for time and measurement updates:

Discrete Kalman Filter Time Update Equations

x̂k+1⁻ = Ak x̂k + B uk    (5)

Pk+1⁻ = Ak Pk Akᵀ + Qk

The time update equations project the state and covariance estimates
from time step k to step k + 1.
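The time update can be sketched as a small function. This is an illustrative Python/NumPy rendering (the paper's implementation is in MATLAB), and the example matrices are assumptions:

```python
import numpy as np

def time_update(x_hat, P, A, B, u, Q):
    """Predictor step: project the state and covariance from step k to k+1 (eq. 5)."""
    x_minus = A @ x_hat + B @ u       # x^-_{k+1} = A_k x_hat_k + B u_k
    P_minus = A @ P @ A.T + Q         # P^-_{k+1} = A_k P_k A_k^T + Q_k
    return x_minus, P_minus

# Example with made-up matrices: a constant-velocity model, no control input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
x_minus, P_minus = time_update(np.array([0.0, 1.0]), np.eye(2),
                               A, np.zeros((2, 1)), np.zeros(1), 0.01 * np.eye(2))
```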

Discrete Kalman Filter Measurement Update Equations

Kk = Pk⁻ Hkᵀ (Hk Pk⁻ Hkᵀ + Rk)⁻¹    (6)

x̂k = x̂k⁻ + Kk(zk − Hk x̂k⁻)

Pk = (I − Kk Hk) Pk⁻

The first task during the measurement update is to compute the Kalman
gain Kk. The next step is to actually measure the process to obtain zk,
and then to generate an a posteriori state estimate by incorporating the
measurement zk as in the second equation. The final step is to obtain an a
posteriori error covariance estimate via the last equation.
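The three steps of the measurement update can likewise be sketched as one function; this is an illustrative Python/NumPy version, with made-up example values rather than the paper's MATLAB code:

```python
import numpy as np

def measurement_update(x_minus, P_minus, z, H, R):
    """Corrector step: gain, state correction, covariance correction (eq. 6)."""
    S = H @ P_minus @ H.T + R                      # innovation covariance
    K = P_minus @ H.T @ np.linalg.inv(S)           # K_k = P^- H^T S^-1
    x_hat = x_minus + K @ (z - H @ x_minus)        # correct with the residual
    P = (np.eye(len(x_minus)) - K @ H) @ P_minus   # P_k = (I - K_k H_k) P^-
    return x_hat, P, K

# Example with assumed values: a scalar position measurement of a 2-state system.
x_hat, P, K = measurement_update(np.zeros(2), np.eye(2),
                                 np.array([1.0]), np.array([[1.0, 0.0]]),
                                 np.array([[1.0]]))
```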

After each time and measurement update pair, the process is repeated,
with the previous a posteriori estimates used to project or predict the new a
priori estimates. This recursive nature is one of the very appealing features
of the Kalman filter: it makes practical implementations much more feasible,
because the Kalman filter recursively conditions the current estimate on all
of the past measurements.
The following flow chart gives the summary of Kalman Filter Algorithm.

Figure 1: Discrete Kalman Filter

Applications in Object Tracking


We now give a practical example where Kalman Filters are typically used.
Consider the problem of determining the precise location of a truck. The
truck can be equipped with a GPS unit that provides an estimate of the
position within a few meters. The GPS estimate is likely to be noisy; readings
’jump around’ rapidly, though always remaining within a few meters of the
real position. In addition, since the truck is expected to follow the laws of
physics, its position can also be estimated by integrating its velocity over
time, determined by keeping track of wheel revolutions and the angle of the
steering wheel. This is a technique known as dead reckoning. Typically, dead
reckoning will provide a very smooth estimate of the truck’s position, but it
will drift over time as small errors accumulate.
Here, the Kalman filter can be thought of as operating in two distinct
phases: predict and update, as mentioned in the algorithm. In the predic-
tion phase, the truck’s old position will be modified according to the physical
laws of motion (the ‘stochastic linear difference’ model) plus any changes

produced by the accelerator pedal and steering wheel. Not only will a new
position estimate be calculated, but a new covariance will be calculated as
well. Perhaps the covariance is proportional to the speed of the truck because
we are more uncertain about the accuracy of the dead reckoning estimate at
high speeds but very certain about the position when moving slowly. Next,
in the update phase, a measurement of the truck’s position is taken from the
GPS unit. Along with this measurement comes some amount of uncertainty,
and its covariance relative to that of the prediction from the previous phase
determines how much the new measurement will affect the updated predic-
tion. Ideally, if the dead reckoning estimates tend to drift away from the
real position, the GPS measurement should pull the position estimate back
towards the real position but not disturb it to the point of becoming rapidly
changing and noisy.

Implementation - 1-Dimensional Object Tracking


Consider an object on a perfectly frictionless, infinitely long straight line.
Initially the object is stationary at position 0, but it is subjected to
random accelerations. We measure the position of the object every dt seconds,
but these measurements are imprecise; we want to maintain a model of where
the object is and what its velocity is. We now model the given problem and
apply the Kalman Filter algorithm to predict the position of the object.

The position and velocity of the object are specified by the state vector

Xk = [xk, ẋk]ᵀ

and the initial state of the system is

X0 = [0, 0]ᵀ

We assume that between the (k − 1)-th and k-th timesteps the object undergoes
a constant acceleration a that is normally distributed, with mean 0 and
standard deviation σa. From Newton’s laws of motion we have

Xk = [xk, ẋk]ᵀ = [[1, dt], [0, 1]] [xk−1, ẋk−1]ᵀ + a [dt²/2, dt]ᵀ    (7)

which is of the form

Xk = A Xk−1 + aB    (8)

So finally, we have

Xk = A Xk−1 + wk    (9)

where wk ∼ N(0, Q) and

Q = B Bᵀ σa² = σa² [[dt⁴/4, dt³/2], [dt³/2, dt²]]    (10)

At each time step, a noisy measurement zk of the true position of the object
is made. Let us suppose the measurement noise vk is also normally
distributed, with mean 0 and standard deviation σz, i.e. vk ∼ N(0, R) where
R = [σz²]. Therefore,

zk = H Xk + vk    (11)

where H = [1 0]. The following graphs depict the results of the Kalman
Filter being applied on a simulated data set.
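The paper's simulation was run in MATLAB and its parameter values are not listed; the following Python/NumPy sketch reproduces the same setup under assumed values for dt, σa, σz and the number of steps, and also computes a simple moving-average baseline of the kind shown in Figure 3:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters; the paper does not list the values used in its runs.
dt, sigma_a, sigma_z, n_steps = 0.1, 0.5, 3.0, 200

A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([dt**2 / 2, dt])
Q = sigma_a**2 * np.outer(B, B)          # Q = B B^T sigma_a^2, eq. (10)

# Simulate the true path (a fresh random acceleration each step) and noisy positions.
X_true = np.zeros((n_steps, 2))
for k in range(1, n_steps):
    X_true[k] = A @ X_true[k - 1] + B * rng.normal(0.0, sigma_a)
z = X_true[:, 0] + rng.normal(0.0, sigma_z, size=n_steps)

# A moving-average baseline (window length chosen arbitrarily), as in Figure 3.
ma = np.convolve(z, np.ones(10) / 10.0, mode="same")

# Discrete Kalman filter with H = [1 0]: only the position is measured.
x_hat, P = np.zeros(2), np.eye(2)
est = np.zeros(n_steps)
for k in range(n_steps):
    # Time update (predict).
    x_minus = A @ x_hat
    P_minus = A @ P @ A.T + Q
    # Measurement update (correct); the innovation is scalar here.
    S = P_minus[0, 0] + sigma_z**2       # H P^- H^T + R with H = [1 0]
    K = P_minus[:, 0] / S                # K_k = P^- H^T S^-1, a length-2 vector
    x_hat = x_minus + K * (z[k] - x_minus[0])
    P = (np.eye(2) - np.outer(K, [1.0, 0.0])) @ P_minus
    est[k] = x_hat[0]

rmse_raw = np.sqrt(np.mean((z - X_true[:, 0]) ** 2))
rmse_kf = np.sqrt(np.mean((est - X_true[:, 0]) ** 2))
```

Because the filter here is run with the exact model that generated the data, the filtered track should have a lower RMSE than the raw measurements, which is the qualitative behaviour shown in Figures 2-4.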

Figure 2: The 1-D simulation of the true path of the object and the observed
path

Figure 3: The Moving-Average Filter for the observed 1-D state-space

Figure 4: The Discrete Kalman Filter for the observed 1-D state-space

Implementation - Projectile Motion Object Tracking

We now consider the problem of tracking an object in 2-dimensional projectile
motion. The equations of projectile motion govern the state space of the
system and form a stochastic linear difference equation. The modeling is


similar to the 1-dimensional case, except that now A is a 4 × 4 matrix and
B is a 4 × 1 matrix, as given below.

Figure 5: The 2-D simulation of the true path of the projectile and the
observed path

Figure 6: The Moving-Average Filter for the observed 2-D state-space


       
xk 1 dt 0 0 xk−1 0
 ẋk  0 1 0 0  ẋk−1   0 
Xk =  yk  =  0 0
  × +g 2  (12)
1 dt   yk−1   dt /2 
ẏ k 0 0 0 1 ẏ k−1 dt

Figure 7: The Discrete Kalman Filter for the observed 2-D state-space

Figure 8: The X vs Y plot for the Moving-Average Filter

And the equation relating the noisy measurements and the true state is

Zk = [[1, 0, 0, 0], [0, 0, 1, 0]] [xk, ẋk, yk, ẏk]ᵀ + vk    (13)
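As above, the paper's 2-D experiment was run in MATLAB with unstated parameters; this Python/NumPy sketch sets up equations (12) and (13) under assumed values for the timestep, launch velocity, gravity and noise level:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed values; the paper does not list its launch speed, timestep, or noise.
dt, g, sigma_z, n_steps = 0.1, -9.8, 2.0, 100

A = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]], dtype=float)
B = np.array([0.0, 0.0, dt**2 / 2, dt])      # gravity enters the y-components, eq. (12)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])         # both positions are measured, eq. (13)
R = sigma_z**2 * np.eye(2)
Q = 1e-3 * np.eye(4)                         # small assumed process-noise covariance

# Simulate the projectile (launched at 45 degrees) and its noisy position fixes.
X = np.array([0.0, 30.0, 0.0, 30.0])
meas = []
for _ in range(n_steps):
    X = A @ X + g * B
    meas.append(H @ X + rng.normal(0.0, sigma_z, size=2))

# Discrete Kalman filter; gravity is treated as a known input in the prediction.
x_hat, P = np.zeros(4), 100.0 * np.eye(4)
for z in meas:
    x_minus = A @ x_hat + g * B                              # time update
    P_minus = A @ P @ A.T + Q
    K = P_minus @ H.T @ np.linalg.inv(H @ P_minus @ H.T + R) # gain, eq. (6)
    x_hat = x_minus + K @ (z - H @ x_minus)                  # measurement update
    P = (np.eye(4) - K @ H) @ P_minus
```

After the transient from the deliberately poor initial guess dies out, the estimated (x, y) track should stay close to the true trajectory, which is the behaviour shown in Figures 7 and 9.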

Figure 9: The X vs Y plot for the Kalman Filter

Performance and Conclusion


It is known from the theory that the Kalman filter is optimal provided that
(a) the model perfectly matches the real system, (b) the entering noise is
white (that is, the components of the error random vector are statistically
independent and normally distributed), and (c) the covariances of the noise
are exactly known. We can clearly observe the difference in performance of
the Moving Average Filter and the Kalman Filter in Figure 3 and Figure 4
respectively, for the 1-D case. Similarly, the state-space system is
estimated much more efficiently by the Kalman Filter than by the Moving
Average Filter in tracking the projectile as well, as evident in Figure 6
and Figure 7. The estimated path almost overlaps the true path in the
1-dimensional case. Hence Kalman Filters are simple and robust algorithms
that can be applied to estimate state-space systems governed by linear
stochastic difference equations, provided the noise in the measurements is
white.
