
Introduction to Kalman Filter

Luis Ismael Minchala Avila¹,²

¹ Department of Mechatronics and Automation, Tec de Monterrey, Eugenio Garza Sada 2501, Monterrey NL, México
² Engineering Research Center for Innovation and Development, Universidad Politécnica Salesiana, Calle Vieja 12-30, Cuenca, Ecuador
ismael.minchala@gmail.com

Ismael Minchala A. (ITESM)

Kalman Filter

Dec 9th 2013

1 / 19

Agenda

Introduction

KF Algorithm
Problem statement
Algorithm

Applications
Noise filtering in a measured variable
State estimation


Introduction

The Kalman Filter (KF) has been widely studied since the publication of
its seminal paper in 1960.
The KF is a very powerful statistical approach used to infer the value of
a signal y(t) = x(t) + ε(t), where ε(t) represents Gaussian noise, at a
particular time t = t1.
t1 may be less than, equal to, or greater than t.
When t1 < t, the KF performs data smoothing; when t1 = t, it performs a
filtering operation; and when t1 > t, it carries out a prediction.


Introduction

Gaussian distributions and linear dynamics are natural, mutually
plausible assumptions, particularly when the statistical data are scant.
The concept of state helps with the description of both linear and
nonlinear systems.
Physical random phenomena may be thought of as primary random sources
exciting dynamic systems. The primary sources are assumed to be
independent Gaussian random processes with zero mean; the dynamic system
is assumed to be linear.


Introduction

What exactly is the KF?

The Kalman filter is a set of mathematical equations that provides an
efficient computational (recursive) means to estimate the state of a
process in a way that minimizes the mean of the squared error. The filter
is very powerful in several respects: it supports estimation of past,
present, and even future states, and it can do so even when the precise
nature of the modeled system is unknown.


KF Algorithm

Problem statement


Model of the system

Consider the linear time-invariant (LTI) state-space model:

x(k + 1) = Ax(k) + Bu(k) + w(k)
y(k) = Cx(k) + v(k)    (1)

where x ∈ ℝⁿ, u ∈ ℝᵐ, and y ∈ ℝᵖ are, respectively, the state vector, the
vector of control signals, and the vector of measured output signals; w is
the state noise vector and v denotes the measurement noise. The noise
vectors are independent Gaussian processes with the following properties:

E[ w(k); v(k) ] = 0

E[ [w(k); v(k)] [wᵀ(l)  vᵀ(l)] ] = [ Qw  Qwv ; Qwvᵀ  Qv ] δ_kl

where δ_kl is the Kronecker delta.
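As a concrete illustration of model (1), here is a minimal NumPy simulation of such a system; the matrices A, B, C and the covariances Qw, Qv below are arbitrary example values, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative example system (placeholder values)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

Qw = 0.01 * np.eye(2)  # state noise covariance
Qv = 0.10 * np.eye(1)  # measurement noise covariance

def simulate(x0, u_seq):
    """Propagate x(k+1) = Ax(k) + Bu(k) + w(k) and record y(k) = Cx(k) + v(k)."""
    x = x0
    ys = []
    for u in u_seq:
        w = rng.multivariate_normal(np.zeros(2), Qw)
        v = rng.multivariate_normal(np.zeros(1), Qv)
        ys.append(C @ x + v)
        x = A @ x + B @ u + w
    return np.array(ys)

y = simulate(np.zeros(2), [np.array([1.0])] * 50)
```

The output y is the sequence of noisy measurements a KF would later filter.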

The problem to be solved consists in estimating the system's state from
the observed values

y(k − k0), y(k − k0 + 1), ..., y(k − 1), y(k)
u(k − k0), u(k − k0 + 1), ..., u(k − 1), u(k)

It is convenient to define an a priori state estimate x̂⁻(k) ∈ ℝⁿ at step
k, given knowledge of the observed values of y and u prior to step k, and
an a posteriori state estimate x̂(k) ∈ ℝⁿ at step k, given the measurement
y(k). The a priori and a posteriori errors are then defined as follows:

e⁻(k) ≜ x(k) − x̂⁻(k)    (2)

e(k) ≜ x(k) − x̂(k)    (3)


The a priori and a posteriori estimate error covariances are defined as:

P⁻(k) = E[ e⁻(k) e⁻ᵀ(k) ]    (4)

P(k) = E[ e(k) eᵀ(k) ]    (5)

The a posteriori state estimate is then formulated as:

x̂(k) = x̂⁻(k) + K(k) ( y(k) − C x̂⁻(k) )    (6)

This formulation resembles an observer; indeed, the KF is an optimal
observer. The gain K(k), also called the blending factor, is chosen to
minimize the a posteriori error covariance, Eq. 5.


This optimization procedure, whose details are not presented here, can be
carried out by substituting Eq. 6 into the definition of the error, Eq. 3,
performing the corresponding expectation operations, taking the derivative
of the trace of the result with respect to K, setting the result equal to
zero, and finally solving for K. One particular form of K that minimizes
Eq. 5 is given by:

K(k) = P⁻(k)Cᵀ ( CP⁻(k)Cᵀ + Qv )⁻¹    (7)

Equations 4 to 7 have to be solved recursively.


KF Algorithm

Algorithm


KF Algorithm

Initialize x̂(0) = x0 and P(0) = P0.

1. Calculate the a priori error covariance:
   P⁻(k) = AP(k − 1)Aᵀ + Qw
2. Compute the Kalman gain:
   K(k) = P⁻(k)Cᵀ ( CP⁻(k)Cᵀ + Qv )⁻¹
3. Perform the state estimation:
   x̂(k) = A x̂(k − 1) + Bu(k − 1) + K(k) ( y(k) − C x̂⁻(k) ),
   where x̂⁻(k) = A x̂(k − 1) + Bu(k − 1)
4. Update the error covariance:
   P(k) = ( I − K(k)C ) P⁻(k)
5. Return to step 1.
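The recursion above can be sketched in Python with NumPy. This is a minimal illustration under the stated model assumptions, not the author's implementation; the toy matrices in the usage example are placeholders:

```python
import numpy as np

def kalman_step(x_hat, P, y, u, A, B, C, Qw, Qv):
    """One iteration of the KF recursion: predict, gain, correct, update."""
    # Prediction: a priori state estimate and error covariance (step 1)
    x_prior = A @ x_hat + B @ u
    P_prior = A @ P @ A.T + Qw
    # Step 2: Kalman gain K(k) = P-(k) C' (C P-(k) C' + Qv)^(-1)
    K = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + Qv)
    # Step 3: blend the prediction with the measurement innovation
    x_post = x_prior + K @ (y - C @ x_prior)
    # Step 4: a posteriori error covariance
    P_post = (np.eye(len(x_hat)) - K @ C) @ P_prior
    return x_post, P_post

# Usage: estimate a constant 2-state system observed directly (toy setup)
A, B, C = np.eye(2), np.zeros((2, 1)), np.eye(2)
Qw, Qv = 1e-4 * np.eye(2), 0.1 * np.eye(2)
x_hat, P = np.zeros(2), np.eye(2)
y_true, u = np.array([1.0, 2.0]), np.zeros(1)
for _ in range(50):
    x_hat, P = kalman_step(x_hat, P, y_true, u, A, B, C, Qw, Qv)
```

After a few dozen iterations with consistent measurements, x_hat converges toward the true state while P shrinks toward its steady-state value.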


Applications

Noise filtering in a measured variable


Noise filtering

[Figure: filter response over time (s), comparing the ideal response, the
noisy signal, and the filtered signal.]

Tuning parameters

Another way of thinking about the weighting by K(k) is that as the
measurement error covariance Qv approaches zero, the actual measurement
y(k) is trusted more and more, while the predicted measurement C x̂⁻(k) is
trusted less and less. On the other hand, as the a priori estimate error
covariance P⁻(k) approaches zero, the actual measurement y(k) is trusted
less.
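This limiting behavior is easy to verify numerically in the scalar case; the function and values below are purely illustrative:

```python
def kalman_gain(P_prior, C, Qv):
    """Scalar form of K = P- C (C P- C + Qv)^(-1)."""
    return P_prior * C / (C * P_prior * C + Qv)

# As the measurement noise covariance Qv -> 0, K -> 1/C:
# the measurement is trusted almost completely.
k_small_qv = kalman_gain(1.0, 1.0, 1e-9)   # close to 1
# As the a priori covariance P- -> 0, K -> 0:
# the prediction is trusted and the measurement is nearly ignored.
k_small_p = kalman_gain(1e-9, 1.0, 1.0)    # close to 0
```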


Applications

State estimation


State estimation of a dynamic system

[Figure: x1(t) and x2(t) estimation with the KF over time (s); each panel
compares the ideal, noisy, and filtered state trajectory.]

Appendix

Questions and Bibliography

Questions I
