
Data Fusion using Kalman Filter

Ioannis Rekleitis
Example of a Parameterized Bayesian Filter:
Kalman Filter
Kalman filters (KF) represent the posterior belief by a
Gaussian (normal) distribution.

A 1-d Gaussian distribution is given by:

P(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}

An n-d Gaussian distribution is given by:

P(x) = \frac{1}{\sqrt{(2\pi)^n |\Sigma|}} \, e^{-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)}

CS-417 Introduction to Robotics and Intelligent Systems 2
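The 1-d density above can be evaluated directly; a minimal Python sketch (the function name is ours, not from the slides) with a numerical check that the density integrates to 1:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """1-d Gaussian density with mean mu and standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Riemann-sum check over [-6, 6] that the density integrates to ~1
dx = 0.001
total = sum(gaussian_pdf(k * dx, 0.0, 1.0) * dx for k in range(-6000, 6001))
```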


Generating Random Numbers
From a uniform RNG, produce samples that follow the normal distribution.
The most basic form of the transformation looks like:

y1 = sqrt( -2 ln(x1) ) cos( 2 pi x2 )
y2 = sqrt( -2 ln(x1) ) sin( 2 pi x2 )

The polar form of the Box-Muller transformation is both faster and more
robust numerically. A runnable C version (substituting rand()/RAND_MAX for
the original's ranf() as the uniform source, and guarding against w == 0):

float x1, x2, w, y1, y2;
do {
    x1 = 2.0f * rand() / (float)RAND_MAX - 1.0f;
    x2 = 2.0f * rand() / (float)RAND_MAX - 1.0f;
    w  = x1 * x1 + x2 * x2;
} while (w >= 1.0f || w == 0.0f);
w  = sqrtf(-2.0f * logf(w) / w);   /* ln() is logf() in C */
y1 = x1 * w;   /* y1 and y2 are independent N(0,1) samples */
y2 = x2 * w;
See: http://www.taygeta.com/random/gaussian.html
Kalman Filter: a Bayesian Filter
• Initial belief Bel(x0) is a Gaussian distribution
– What do we do for an unknown starting position?
• State at time t+1 is a linear function of the state at time t:

x_{t+1} = F x_t + B u_t + \varepsilon_t   (action)

• Observations are linear in the state:

o_t = H x_t + \delta_t   (observation)

• Error terms are zero-mean, normally distributed random variables
• These assumptions guarantee that the posterior belief is Gaussian
– The Kalman Filter is an efficient algorithm for computing the posterior
– Normally, an update of this nature would require a matrix inversion (similar to a
least squares estimator)
– The Kalman Filter avoids this computationally complex operation


The Kalman Filter
• Motion model is Gaussian…
• Sensor model is Gaussian…
• Each belief function is uniquely characterized by its
mean μ and covariance matrix Σ
• Computing the posterior means computing a new
mean μ and covariance Σ from old data using
actions and sensor readings
• What are the key limitations?
1) Unimodal distribution
2) Linear assumptions



The Kalman Filter (Kalman, 1960)

• Linear process and measurement models
• Gaussian (white) noise
• Gaussian state estimate

(Figure: prior, measurement, and Kalman filter posterior distributions)

• Process model: x_t = A x_{t-1} + B u_{t-1} + q_{t-1}
• Measurement model: z_t = H x_t + r_t

Images courtesy of Maybeck, 1979
What we know…
What we don’t know…
• We know what the control inputs of our process are
– We know what we’ve told the system to do and have a model for what the
expected output should be if everything works right
• We don’t know what the noise in the system truly is
– We can only estimate what the noise might be and try to put some sort of upper
bound on it
• When estimating the state of a system, we try to find a set of
values that comes as close to the truth as possible
– There will always be some mismatch between our estimate of the system and the
true state of the system itself. We just try to figure out how much mismatch there
is and try to get the best estimate possible



Minimum Mean Square Error
Reminder: the expected value, or mean value, of a
continuous random variable x is defined as:

E[x] = \int_{-\infty}^{\infty} x \, p(x) \, dx

Minimum Mean Square Error (MMSE)

What is the mean of this distribution P(x | Z)?
This is difficult to obtain exactly. With our approximations,
we can get the estimate x̂
…such that E[(x - x̂)^2 | Z^t] is minimized.
According to the Fundamental Theorem of Estimation Theory,
this estimate is:

\hat{x}^{MMSE} = E[x | Z] = \int_{-\infty}^{\infty} x \, p(x | Z) \, dx

Fundamental Theorem of Estimation Theory

• The minimum mean square error estimator equals the expected (mean) value of x
conditioned on the observations Z
• The minimum mean square error term is quadratic:

E[(x - \hat{x})^2 | Z^t]

– Its minimum can be found by taking the derivative of the function with respect to
the estimate x̂ and setting it to 0:

\nabla_{\hat{x}} \left( E[(x - \hat{x})^2 | Z] \right) = 0

• It is interesting to note that, under the Gaussian assumption, Maximum A
Posteriori estimators and MMSE estimators find the same value for the parameters.
– This is because the mean and the mode of a Gaussian distribution coincide.
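As a quick numerical illustration of the theorem (a sketch of ours, not from the slides): over a set of samples, the sample mean attains a lower mean squared error than any shifted estimate.

```python
import random

random.seed(0)
samples = [random.gauss(5.0, 2.0) for _ in range(10000)]
mean = sum(samples) / len(samples)

def mse(estimate):
    """Mean squared error of a fixed estimate over the samples."""
    return sum((x - estimate) ** 2 for x in samples) / len(samples)

# mse(c) = variance + (c - mean)^2, so the sample mean is the exact minimizer.
```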



Kalman Filter Components
(also known as: Way Too Many Variables…)

Linear discrete time dynamic system (motion model):

x_{t+1} = F_t x_t + B_t u_t + G_t w_t

where x is the state, F_t the state transition function, u the control input,
B_t the control input function, w the process noise with covariance Q, and
G_t the noise input function.

Measurement equation (sensor model):

z_{t+1} = H_{t+1} x_{t+1} + n_{t+1}

where z is the sensor reading, H the sensor function, and n the sensor noise
with covariance R.

Note: Write these down!!!


Computing the MMSE Estimate of the State
and Covariance

Given a set of measurements: Z^{t+1} = \{ z_i, \, i \le t+1 \}

According to the Fundamental Theorem of Estimation, the state
and covariance will be:

\hat{x}^{MMSE} = E[x | Z^{t+1}]
P^{MMSE} = E[(x - \hat{x})(x - \hat{x})^T | Z^{t+1}]

We will now use the following notation:

\hat{x}_{t+1|t+1} = E[x_{t+1} | Z^{t+1}]
\hat{x}_{t|t} = E[x_t | Z^t]
\hat{x}_{t+1|t} = E[x_{t+1} | Z^t]



Computing the MMSE Estimate of the
State and Covariance

What is the minimum mean square error estimate
of the system state and covariance?

\hat{x}_{t+1|t} = F_t \hat{x}_{t|t} + B_t u_t          (estimate of the state variables)

\hat{z}_{t+1|t} = H_{t+1} \hat{x}_{t+1|t}              (estimate of the sensor reading)

P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T          (covariance matrix for the state)

S_{t+1|t} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}      (covariance matrix for the sensors)



At last! The Kalman Filter…

Propagation (motion model):

\hat{x}_{t+1|t} = F_t \hat{x}_{t|t} + B_t u_t
P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T

Update (sensor model):

\hat{z}_{t+1} = H_{t+1} \hat{x}_{t+1|t}
r_{t+1} = z_{t+1} - \hat{z}_{t+1}
S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{-1}
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} H_{t+1}^T S_{t+1}^{-1} H_{t+1} P_{t+1|t}



…but what does that mean in English?!?

Propagation (motion model):

\hat{x}_{t+1|t} = F_t \hat{x}_{t|t} + B_t u_t        - State estimate is updated from system dynamics
P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T        - Uncertainty estimate GROWS

Update (sensor model):

\hat{z}_{t+1} = H_{t+1} \hat{x}_{t+1|t}              - Compute expected value of sensor reading
r_{t+1} = z_{t+1} - \hat{z}_{t+1}                    - Compute the difference between expected and “true”
S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}      - Compute covariance of sensor reading
K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{-1}           - Compute the Kalman gain (how much to correct est.)
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}   - Multiply residual by gain to correct state estimate
P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} H_{t+1}^T S_{t+1}^{-1} H_{t+1} P_{t+1|t}   - Uncertainty estimate SHRINKS
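The propagate/update cycle above can be sketched for the scalar case (F, B, G, H and the noise variances are plain numbers, so no matrix inversion is needed); the function names are ours:

```python
def kf_propagate(x, P, F, B, u, G, Q):
    """Propagation: push the state and covariance through the motion model."""
    x_pred = F * x + B * u
    P_pred = F * P * F + G * Q * G     # uncertainty grows by G*Q*G
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update: correct the prediction with a sensor reading z."""
    r = z - H * x_pred                 # residual
    S = H * P_pred * H + R             # residual covariance
    K = P_pred * H / S                 # Kalman gain
    x_new = x_pred + K * r
    P_new = P_pred - P_pred * H / S * H * P_pred   # uncertainty shrinks
    return x_new, P_new
```

With F = G = H = 1 and Q = R = 1, a prior P = 1 propagates to P = 2 and updates to P = 2/3, illustrating the grow/shrink behavior annotated above.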



Kalman Filter Block Diagram



Example 1: Simple 1D Linear System

Given: F = G = H = 1, u = 0
Initial state estimate = 0
Linear system (noise parameters unknown):

x_{t+1} = x_t + w_t
z_{t+1} = x_{t+1} + n_{t+1}

Propagation:
\hat{x}_{t+1|t} = \hat{x}_{t|t}
P_{t+1|t} = P_{t|t} + Q_t

Update:
\hat{z}_{t+1} = \hat{x}_{t+1|t}
r_{t+1} = z_{t+1} - \hat{x}_{t+1|t}
S_{t+1} = P_{t+1|t} + R_{t+1}
K_{t+1} = P_{t+1|t} S_{t+1}^{-1}
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} S_{t+1}^{-1} P_{t+1|t}
State Estimate



State Estimation Error vs 3σ Region of Confidence



Sensor Residual vs 3σ Region of Confidence



Kalman Gain and State Covariance



Example 2: Simple 1D Linear System with
Erroneous Start

Given: F = G = H = 1, u = cos(t/5)
Initial state estimate = 20
Linear system (noise parameters unknown):

x_{t+1} = x_t + cos(t/5) + w_t
z_{t+1} = x_{t+1} + n_{t+1}

Propagation:
\hat{x}_{t+1|t} = \hat{x}_{t|t} + cos(t/5)
P_{t+1|t} = P_{t|t} + Q_t

Update (no change):
\hat{z}_{t+1} = \hat{x}_{t+1|t}
r_{t+1} = z_{t+1} - \hat{x}_{t+1|t}
S_{t+1} = P_{t+1|t} + R_{t+1}
K_{t+1} = P_{t+1|t} S_{t+1}^{-1}
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} S_{t+1}^{-1} P_{t+1|t}
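This example can be simulated end to end; a sketch of ours (the true noise standard deviations, 0.1 here, are illustrative assumptions, not values from the slides) showing the filter recovering from the erroneous start of 20 while the covariance settles to a steady state:

```python
import math
import random

random.seed(1)
q_std, r_std = 0.1, 0.1          # true noise std devs (illustrative)
Q, R = q_std ** 2, r_std ** 2    # filter's noise covariances (assumed known here)

x_true, x_est, P = 0.0, 20.0, 1.0   # erroneous initial estimate of 20
for t in range(200):
    u = math.cos(t / 5.0)
    # true system and measurement
    x_true = x_true + u + random.gauss(0.0, q_std)
    z = x_true + random.gauss(0.0, r_std)
    # propagation
    x_est, P = x_est + u, P + Q
    # update
    S = P + R
    K = P / S
    x_est = x_est + K * (z - x_est)
    P = P - P / S * P
```

With Q = R, the posterior covariance converges to Q(√5 − 1)/2, independent of the measurements, while the large initial error is corrected within the first few updates.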
State Estimate



State Estimation Error vs 3σ Region of Confidence



Sensor Residual vs 3σ Region of Confidence



Kalman Gain and State Covariance



Some observations
• The larger a term's uncertainty, the smaller its effect on the final state
estimate
– If process uncertainty is larger, sensor updates will dominate the state
estimate
– If sensor uncertainty is larger, process propagation will dominate the state
estimate
• Improper estimates of the state and/or sensor covariance may
result in a rapidly diverging estimator
– As a rule of thumb, the residuals must always be bounded within a ±3σ
region of uncertainty
– This measures the “health” of the filter
• Many propagation cycles can happen between updates



Using the Kalman Filter for Mobile Robots

• Sensor modeling
– The odometry estimate is not treated as a reflection of the robot’s
control system; rather, it is treated as a sensor
– Instead of directly measuring the error in the state vector (such as
when doing tracking), the error in the state must be estimated
– This is referred to as the Indirect Kalman Filter
• State vector for a robot moving in 2D
– The state vector is 3x1: [x, y, θ]
– The covariance matrix is 3x3
• Problem: Mobile robot dynamics are NOT linear



Problems with the
Linear Model Assumption
• Many systems of interest are highly non-linear, such
as mobile robots
• In order to model such systems, a linear process
model must be generated out of the non-linear
system dynamics
• The Extended Kalman Filter is a method by which
the state propagation equations and the sensor
models can be linearized about the current state
estimate
• Linearization will increase the state error residual,
because the linearized model is only an approximation
of the true dynamics



Approximating Robot Motion Uncertainty
with a Gaussian



Linearized Motion Model for a Robot

From a robot-centric perspective, the velocities look like this:

\dot{x}_t = V_t
\dot{y}_t = 0
\dot{\theta}_t = \omega_t

From the global perspective, the velocities look like this:

\dot{x}_t = V_t \cos\theta_t
\dot{y}_t = V_t \sin\theta_t
\dot{\theta}_t = \omega_t

The discrete time state estimate (including noise) looks like this:

\hat{x}_{t+1} = \hat{x}_t + (V_t + w_{V_t}) \, \delta t \cos\hat{\theta}_t
\hat{y}_{t+1} = \hat{y}_t + (V_t + w_{V_t}) \, \delta t \sin\hat{\theta}_t
\hat{\theta}_{t+1} = \hat{\theta}_t + (\omega_t + w_{\omega_t}) \, \delta t

Problem! We don’t know the linear and rotational velocity errors. The state
estimate will rapidly diverge if this is the only source of information!
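The discrete-time propagation above is easy to mechanize; a small Python sketch of one dead-reckoning step (the names v, omega, and dt are ours):

```python
import math

def propagate_pose(x, y, theta, v, omega, dt):
    """One dead-reckoning step of the discrete-time unicycle motion model."""
    x_new = x + v * dt * math.cos(theta)
    y_new = y + v * dt * math.sin(theta)
    theta_new = theta + omega * dt
    return x_new, y_new, theta_new

# Driving straight (omega = 0) at v = 1 m/s for 10 steps of dt = 0.1 s
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = propagate_pose(*pose, v=1.0, omega=0.0, dt=0.1)
```

As the slide warns, feeding noisy velocities through this step alone causes unbounded drift; the covariance propagation that follows quantifies it.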
Linearized Motion Model for a Robot
Now we have to compute the covariance matrix propagation
equations.

The indirect Kalman filter derives the pose equations from the
estimated error of the state:

x_{t+1} = \hat{x}_{t+1} + \tilde{x}_{t+1}
y_{t+1} = \hat{y}_{t+1} + \tilde{y}_{t+1}
\theta_{t+1} = \hat{\theta}_{t+1} + \tilde{\theta}_{t+1}

In order to linearize the system, the following small-angle
assumptions are made:

\cos\tilde{\theta} \approx 1
\sin\tilde{\theta} \approx \tilde{\theta}



Calculation of \tilde{\theta}_{t+1}:

\tilde{\theta}_{t+1} = \theta_{t+1} - \hat{\theta}_{t+1}
                     = \theta_t + \omega_t \delta t - \hat{\theta}_t - (\omega_t + w_{\omega}) \delta t
                     = \tilde{\theta}_t - w_{\omega} \delta t



Calculation of \tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1} and \tilde{y}_{t+1} = y_{t+1} - \hat{y}_{t+1}:

\tilde{x}_{t+1} = x_{t+1} - \hat{x}_{t+1}
  = x_t + v_t \delta t \cos(\theta_t) - \hat{x}_t - (v_t + w_v) \delta t \cos(\hat{\theta}_t)
  = x_t - \hat{x}_t + v_t \delta t \cos(\theta_t) - v_t \delta t \cos(\hat{\theta}_t) - w_v \delta t \cos(\hat{\theta}_t)
  = \tilde{x}_t + v_t \delta t \cos(\tilde{\theta}_t + \hat{\theta}_t) - v_t \delta t \cos(\hat{\theta}_t) - w_v \delta t \cos(\hat{\theta}_t)
  = \tilde{x}_t + v_t \delta t [\cos(\tilde{\theta}_t)\cos(\hat{\theta}_t) - \sin(\tilde{\theta}_t)\sin(\hat{\theta}_t)] - v_t \delta t \cos(\hat{\theta}_t) - w_v \delta t \cos(\hat{\theta}_t)
  \approx \tilde{x}_t + v_t \delta t \cos(\hat{\theta}_t) - v_t \delta t \, \tilde{\theta}_t \sin(\hat{\theta}_t) - v_t \delta t \cos(\hat{\theta}_t) - w_v \delta t \cos(\hat{\theta}_t)
  = \tilde{x}_t - v_t \delta t \, \tilde{\theta}_t \sin(\hat{\theta}_t) - w_v \delta t \cos(\hat{\theta}_t)

\tilde{y}_{t+1} = y_{t+1} - \hat{y}_{t+1} = \ldots   (analogous derivation)
Linearized Motion Model for a Robot

From the error-state propagation equations, we can obtain the
state propagation and noise input functions F and G:

\begin{bmatrix} \tilde{x}_{t+1} \\ \tilde{y}_{t+1} \\ \tilde{\theta}_{t+1} \end{bmatrix} =
\begin{bmatrix} 1 & 0 & -V_t \delta t \sin\hat{\theta}_t \\ 0 & 1 & V_t \delta t \cos\hat{\theta}_t \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \tilde{x}_t \\ \tilde{y}_t \\ \tilde{\theta}_t \end{bmatrix} +
\begin{bmatrix} -\delta t \cos\hat{\theta}_t & 0 \\ -\delta t \sin\hat{\theta}_t & 0 \\ 0 & -\delta t \end{bmatrix}
\begin{bmatrix} w_v \\ w_{\omega} \end{bmatrix}

\tilde{X}_{t+1} = F_t \tilde{X}_t + G_t W_t

From these values, we can easily compute the standard covariance
propagation equation:

P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T



Covariance Estimation

P_{t+1|t} = E[\tilde{X}_{t+1} \tilde{X}_{t+1}^T]
          = E[(F_t \tilde{X}_t + G_t w_t)(F_t \tilde{X}_t + G_t w_t)^T]
          = F_t E[\tilde{X}_t \tilde{X}_t^T] F_t^T + G_t E[w_t w_t^T] G_t^T
          = F_t P_{t|t} F_t^T + G_t Q_t G_t^T

where

Q_t = E[w_t w_t^T] = \begin{bmatrix} \sigma_v^2 & 0 \\ 0 & \sigma_{\omega}^2 \end{bmatrix}
Alternative Calculation

X_{t+1} = f(X_t, w_t)
X_{t+1} \approx \hat{X}_{t+1} + F_x (X_t - \hat{X}_t) + F_w w_t
\tilde{X}_{t+1} = X_{t+1} - \hat{X}_{t+1} \approx F_x \tilde{X}_t + F_w w_t

F_x = \frac{\partial f}{\partial X} \bigg|_{\hat{X}} =
\begin{bmatrix} 1 & 0 & -v_t \delta t \sin\hat{\theta}_t \\ 0 & 1 & v_t \delta t \cos\hat{\theta}_t \\ 0 & 0 & 1 \end{bmatrix}

F_w = \frac{\partial f}{\partial w} \bigg|_{\hat{X}} =
\begin{bmatrix} \delta t \cos\hat{\theta}_t & 0 \\ \delta t \sin\hat{\theta}_t & 0 \\ 0 & \delta t \end{bmatrix}
Propagation
• Finally, for a mobile robot EKF propagation step we have:

\hat{x}_{t+1} = \hat{x}_t + V_{m_t} \delta t \cos\hat{\theta}_t
\hat{y}_{t+1} = \hat{y}_t + V_{m_t} \delta t \sin\hat{\theta}_t
\hat{\theta}_{t+1} = \hat{\theta}_t + \omega_{m_t} \delta t

P_{t+1|t} = F_t P_{t|t} F_t^T + G_t Q_t G_t^T

where:

F_t = \begin{bmatrix} 1 & 0 & -V_t \delta t \sin\hat{\theta}_t \\ 0 & 1 & V_t \delta t \cos\hat{\theta}_t \\ 0 & 0 & 1 \end{bmatrix}, \quad
G_t = \begin{bmatrix} -\delta t \cos\hat{\theta}_t & 0 \\ -\delta t \sin\hat{\theta}_t & 0 \\ 0 & -\delta t \end{bmatrix}, \quad
Q_t = \begin{bmatrix} \sigma_v^2 & 0 \\ 0 & \sigma_{\omega}^2 \end{bmatrix}

(V_{m_t} and \omega_{m_t} are the measured linear and rotational velocities.)
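The covariance propagation step can be checked numerically; a self-contained Python sketch (pure-Python 3x3 matrix helpers; the names are ours) that computes P' = F P Fᵀ + G Q Gᵀ for the Jacobians above:

```python
import math

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def ekf_propagate_cov(P, theta, v, dt, sv2, sw2):
    """Covariance propagation P' = F P F^T + G Q G^T for the unicycle model."""
    F = [[1.0, 0.0, -v * dt * math.sin(theta)],
         [0.0, 1.0,  v * dt * math.cos(theta)],
         [0.0, 0.0,  1.0]]
    G = [[-dt * math.cos(theta), 0.0],
         [-dt * math.sin(theta), 0.0],
         [0.0, -dt]]
    Q = [[sv2, 0.0], [0.0, sw2]]
    FPFt = matmul(matmul(F, P), transpose(F))
    GQGt = matmul(matmul(G, Q), transpose(G))
    return [[FPFt[i][j] + GQGt[i][j] for j in range(3)] for i in range(3)]
```

For theta = 0, v = 1, dt = 0.1, P = I, and unit noise variances, the heading uncertainty couples into y (the off-diagonal 0.1 terms) while x and theta each pick up the injected noise.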
Sensor Model for a Robot with a Perfect Map
Given a landmark L_i = [x_{L_i}, y_{L_i}]^T on a known map, the robot
measures range and bearing. From the robot, the measurement looks like this:

z_{t+1} = \begin{bmatrix} \rho + n_{\rho} \\ q + n_q \end{bmatrix}
        = \begin{bmatrix} \sqrt{(x_{L_i} - x_r)^2 + (y_{L_i} - y_r)^2} + n_{\rho} \\
                          \mathrm{atan2}(y_{L_i} - y_r, \, x_{L_i} - x_r) - \theta_r + n_q \end{bmatrix}

Linearizing about the state estimate, with \rho = \sqrt{(x_{L_i} - x_r)^2 + (y_{L_i} - y_r)^2}:

H = \begin{bmatrix} -\dfrac{x_{L_i} - x_r}{\rho} & -\dfrac{y_{L_i} - y_r}{\rho} & 0 \\
                     \dfrac{y_{L_i} - y_r}{\rho^2} & -\dfrac{x_{L_i} - x_r}{\rho^2} & -1 \end{bmatrix}

The measurement equation is nonlinear and must also be linearized!


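A Python sketch of this range-bearing measurement model and its Jacobian with respect to the robot pose (names are ours; the Jacobian can be sanity-checked against finite differences):

```python
import math

def measure(xr, yr, thr, xl, yl):
    """Expected range and bearing to landmark (xl, yl) from pose (xr, yr, thr)."""
    dx, dy = xl - xr, yl - yr
    rho = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - thr
    return rho, bearing

def measurement_jacobian(xr, yr, thr, xl, yl):
    """H = d(range, bearing)/d(xr, yr, thr), a 2x3 matrix."""
    dx, dy = xl - xr, yl - yr
    rho2 = dx * dx + dy * dy
    rho = math.sqrt(rho2)
    return [[-dx / rho,  -dy / rho,   0.0],
            [ dy / rho2, -dx / rho2, -1.0]]
```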
Update

r = z - \hat{z}
S = H P H^T + R
K = P H^T S^{-1}
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K r
P = (I - K H) P (I - K H)^T + K R K^T     (Joseph form, for numerical stability)
P \leftarrow \frac{P + P^T}{2}            (enforce symmetry)
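In the scalar case the Joseph-form update P = (I − KH)P(I − KH)ᵀ + KRKᵀ reduces to P⁺ = (1 − KH)² P + K² R, which matches the simple form P − KHP exactly; a quick sketch (names are ours):

```python
def joseph_update(P, H, R):
    """Scalar covariance update in Joseph form; returns (K, P_plus)."""
    S = H * P * H + R
    K = P * H / S
    P_joseph = (1.0 - K * H) ** 2 * P + K ** 2 * R
    return K, P_joseph

K, P_joseph = joseph_update(2.0, 1.0, 1.0)
P_simple = 2.0 - K * 1.0 * 2.0   # simple form P - K*H*P
```

The Joseph form costs a few extra multiplies but keeps P symmetric positive semi-definite even when K is slightly suboptimal, which is why it is preferred in practice.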
See Matlab Examples



Sensor Model for a Robot with a Perfect Map

From the robot, the measurement looks like this:

z_{t+1} = \begin{bmatrix} x_{L_{t+1}} \\ y_{L_{t+1}} \\ \theta_{L_{t+1}} \end{bmatrix}
        + \begin{bmatrix} n_x \\ n_y \\ n_{\theta} \end{bmatrix}

From a global perspective, the measurement looks like:

z_{t+1} = \begin{bmatrix} \cos\theta_{t+1} & \sin\theta_{t+1} & 0 \\ -\sin\theta_{t+1} & \cos\theta_{t+1} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x_{L_{t+1}} - x_{t+1} \\ y_{L_{t+1}} - y_{t+1} \\ \theta_{L_{t+1}} - \theta_{t+1} \end{bmatrix}
+ \begin{bmatrix} n_x \\ n_y \\ n_{\theta} \end{bmatrix}

The measurement equation is nonlinear and must also be linearized!
Sensor Model for a Robot with a Perfect Map
Now we have to compute the linearized sensor function. Once
again, we make use of the indirect Kalman filter, where the error in
the reading must be estimated.

In order to linearize the system, the following small-angle
assumptions are made:

\cos\tilde{\theta} \approx 1
\sin\tilde{\theta} \approx \tilde{\theta}

The final expression for the error in the sensor reading is:

\begin{bmatrix} \tilde{x}_{L_{t+1}} \\ \tilde{y}_{L_{t+1}} \\ \tilde{\theta}_{L_{t+1}} \end{bmatrix} =
\begin{bmatrix} -\cos\hat{\theta}_{t+1} & -\sin\hat{\theta}_{t+1} & \cos\hat{\theta}_{t+1}(y_L - \hat{y}_{t+1}) - \sin\hat{\theta}_{t+1}(x_L - \hat{x}_{t+1}) \\
                 \sin\hat{\theta}_{t+1} & -\cos\hat{\theta}_{t+1} & -\cos\hat{\theta}_{t+1}(x_L - \hat{x}_{t+1}) - \sin\hat{\theta}_{t+1}(y_L - \hat{y}_{t+1}) \\
                 0 & 0 & -1 \end{bmatrix}
\begin{bmatrix} \tilde{x}_{t+1} \\ \tilde{y}_{t+1} \\ \tilde{\theta}_{t+1} \end{bmatrix}
+ \begin{bmatrix} n_x \\ n_y \\ n_{\theta} \end{bmatrix}



Updating the State Vector

(Figures: state estimate with propagation only vs. propagation and update)
Extended Kalman Filter for SLAM

• State vector
– Expanded to contain entries for all landmark
positions:

X = [X_R^T \; X_{L_1}^T \cdots X_{L_n}^T]^T

– The state vector can be grown as new landmarks are
discovered
– The covariance matrix is also expanded:

P = \begin{bmatrix} P_{RR} & P_{RL_1} & \cdots & P_{RL_N} \\
                    P_{L_1 R} & P_{L_1 L_1} & \cdots & P_{L_1 L_N} \\
                    \vdots & \vdots & \ddots & \vdots \\
                    P_{L_N R} & P_{L_N L_1} & \cdots & P_{L_N L_N} \end{bmatrix}
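Growing the state and covariance when a new landmark is first seen can be sketched as follows (a simplified illustration of ours: the new landmark's block is initialized with a large variance and zero cross-correlation, rather than the full measurement-based initialization):

```python
def augment(x, P, landmark, init_var=1e6):
    """Append a new landmark to the SLAM state and expand P accordingly."""
    n = len(x)
    m = len(landmark)
    x_new = x + list(landmark)
    # Expand P with zero cross-covariance and a large initial variance
    P_new = [row + [0.0] * m for row in P]
    for i in range(m):
        new_row = [0.0] * (n + m)
        new_row[n + i] = init_var
        P_new.append(new_row)
    return x_new, P_new

# Robot pose [x, y, theta] plus one 2-D landmark
x0 = [0.0, 0.0, 0.0]
P0 = [[0.1 if i == j else 0.0 for j in range(3)] for i in range(3)]
x1, P1 = augment(x0, P0, [2.0, 3.0])
```

Subsequent updates then fill in the cross-covariance blocks, which is exactly the shared information between landmarks and robot pose noted below.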
Extended Kalman Filter for SLAM

• Kinematic equations for landmark propagation:

\hat{x}_{t+1} = \hat{x}_t + (V_t + w_{V_t}) \, \delta t \cos\hat{\theta}_t
\hat{y}_{t+1} = \hat{y}_t + (V_t + w_{V_t}) \, \delta t \sin\hat{\theta}_t
\hat{\theta}_{t+1} = \hat{\theta}_t + (\omega_t + w_{\omega_t}) \, \delta t

\hat{x}_{L_i, t+1} = \hat{x}_{L_i, t}
\hat{y}_{L_i, t+1} = \hat{y}_{L_i, t}
\hat{\theta}_{L_i, t+1} = \hat{\theta}_{L_i, t}

(The robot pose is propagated; the static landmarks remain unchanged.)



Extended Kalman Filter for SLAM

• Sensor equations for the update:

\tilde{X} = [\tilde{X}_R^T \; \tilde{X}_{L_1}^T \cdots \tilde{X}_{L_i}^T \cdots \tilde{X}_{L_n}^T]^T
H = [H_R \; 0 \cdots 0 \; H_{L_i} \; 0 \cdots 0]

• Very powerful, because the covariance update
records shared information between landmarks
and robot positions



EKF for SLAM



Enhancements to EKF

• Iterated Extended Kalman Filter

State and covariance update:

r_{t+1} = z_{t+1} - \hat{z}_{t+1}
S_{t+1} = H_{t+1} P_{t+1|t} H_{t+1}^T + R_{t+1}
K_{t+1} = P_{t+1|t} H_{t+1}^T S_{t+1}^{-1}
\hat{X}_{t+1|t+1} = \hat{X}_{t+1|t} + K_{t+1} r_{t+1}
P_{t+1|t+1} = P_{t+1|t} - K_{t+1} S_{t+1} K_{t+1}^T

Iterate the state update equation until convergence.
Enhancements to the EKF

• Multiple hypothesis tracking

– Multiple Kalman filters are used to track the data
– The multi-Gaussian approach allows for representation of
arbitrary probability densities
– Consistent hypotheses are tracked, while highly inconsistent
hypotheses are dropped
– Similar in spirit to the particle filter, but orders of magnitude
fewer filters are tracked compared to the particle filter
