Tettamanti Tamás, Luspay Tamás, Varga István: ROAD TRAFFIC MODELING AND SIMULATION › 1. Measurement and estimation › 1.3. Turning rate estimation using Kalman Filter – a Matlab tutorial

1.3. Turning rate estimation using Kalman Filter – a Matlab tutorial


For dynamical systems it can be necessary to apply a filtering technique which is able to model the system itself and the uncertainties of the system. One of the possible filters is the Kalman Filter algorithm. The filter is named after Rudolf Emil Kalman, one of the first developers of this theory. R. E. Kalman published his famous basic paper in 1960, describing a recursive solution to the discrete linear filtering problem [6].
The Kalman Filter maintains the first two moments of the state's probability distribution, i.e. the expected value (mean) of the state and the covariance of the state estimate:


E {x (k)} = x̂ (k), (1.8)


E {(x(k) − x̂(k))(x(k) − x̂(k))ᵀ} = P(k). (1.9)

In the sequel, the method is introduced for a Linear Time Varying (LTV) system for the sake of generality. Consider therefore the LTV system below:


x (k + 1) = A (k)x (k) + B (k)u (k) + v (k),     x (0) (1.10)

y (k) = C (k)x (k) + z (k). (1.11)

x(k) ∈ R^n denotes the system's state vector with x(0) initial value. u(k) ∈ R^m is the deterministic input to the system. The measurement of the system is represented by y(k) ∈ R^p. A(k) ∈ R^{n×n}, B(k) ∈ R^{n×m}, and C(k) ∈ R^{p×n} are the system matrices; their structure and time dependence are assumed to be known. Eq. (1.10) is called the equation of system dynamics and Eq. (1.11) the measurement (or observation) equation. These equations present the so-called state space representation form, in which v(k) and z(k) are stochastic noises. For the state and measurement noises v(k) and z(k), as well as for the initial state x(0), the following stochastic hypotheses hold:
- initial state x(0) is independent of v(k) and z(k);
- x(0) is a random variable with expected value E {x(0)} = x₀ and estimate error covariance matrix E {(x(0) − x₀)(x(0) − x₀)ᵀ} = P(0);
- v(k) is zero-mean with normal distribution: E {v(k)} = 0;
- the covariance of v(k) is known: E {v(k)v(k)ᵀ} = Q(k);
- v(k) is uncorrelated with previous or subsequent state noise, i.e. v(k) is a white noise (see the Note below): E {v(k)v(k − l)ᵀ} = 0 for all l ≠ 0;
- z(k) is zero-mean with normal distribution: E {z(k)} = 0;
- the covariance of z(k) is known: E {z(k)z(k)ᵀ} = R(k);
- z(k) is uncorrelated with previous or subsequent measurement noise, i.e. z(k) is a white noise: E {z(k)z(k − l)ᵀ} = 0 for all l ≠ 0;
- v(k) and z(l) are also uncorrelated: E {v(k)z(l)ᵀ} = 0.
 

Note

 
In the linear Kalman Filter the errors are assumed to be white noises. By definition, white noise is a signal (analyzed in the frequency domain) having equal intensity in a given frequency band, i.e. it has a constant power spectral density (independent of the frequency). The expression "white" refers to light, as the spectral density of white light is also equal on all frequencies. White noise often has normal distribution, but not necessarily. A spectacular example of white noise is the cosmic background radiation, which can be detected by old (analogue) television or radio devices when they are not tuned. Fig. 1.3 depicts a zero-mean, normally distributed white noise.
 

Figure 1.3: Example for white noise: the left side shows the magnitude (x) of white noise in the frequency domain and the right part presents the probability density function p(x) of the white noise

 
If the white noise is interpreted in the time domain, the autocorrelation function is always zero except at time τ = 0, where it is an impulse of infinite amplitude by mathematical definition. This is the Dirac delta function δ(τ). Considering the errors of road traffic measurements, this means that the noises at different times are not correlated, i.e. there is no relationship between subsequent measurement errors.
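The whiteness property described above can be checked numerically. The short sketch below is my own illustration (not from the book; the helper name `autocorr` is ad hoc): it draws zero-mean Gaussian samples and verifies that the sample autocorrelation is 1 at lag τ = 0 and practically zero at nonzero lags.

```python
import numpy as np

# Zero-mean Gaussian white noise: samples at different times are uncorrelated,
# so the autocorrelation should be 1 at lag 0 and near zero elsewhere.
rng = np.random.default_rng(seed=1)
z = rng.normal(loc=0.0, scale=1.0, size=100_000)

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    if lag == 0:
        return 1.0
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print(autocorr(z, 0))          # exactly 1 at lag 0
print(abs(autocorr(z, 1)))     # near zero at lag 1
print(abs(autocorr(z, 5)))     # near zero at lag 5
```

With 100 000 samples the sample autocorrelation at nonzero lags stays within a few thousandths of zero, mirroring the Dirac-delta shape of the ideal autocorrelation function.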

 
For a better understanding of the theory given above, consider the detailed form of Eq. (1.11):

⎡ y₁(k) ⎤   ⎡ C₁₁(k) ⋯ C₁ₙ(k) ⎤ ⎡ x₁(k) ⎤   ⎡ z₁(k) ⎤
⎢   ⋮   ⎥ = ⎢   ⋮    ⋱    ⋮   ⎥ ⎢   ⋮   ⎥ + ⎢   ⋮   ⎥ ,    (1.12)
⎣ yₚ(k) ⎦   ⎣ Cₚ₁(k) ⋯ Cₚₙ(k) ⎦ ⎣ xₙ(k) ⎦   ⎣ zₚ(k) ⎦

Eq. (1.12) reflects that state vector x(k) is not measurable directly (unless C(k) = I). Moreover, errors are added to the state variables, for example due to the applied measurement technology. The goal of the Kalman Filter is to predict the value of x(k), i.e. it determines x̂(k) at each discrete time step (sampling interval). As already given, measurement error z(k) is a white noise having normal distribution and zero mean. Let σᵢ² be the variance of zᵢ(k). Then, the symmetric covariance matrix R(k) is defined as follows:

                                            ⎡ σ₁σ₁  σ₁σ₂  ⋯  σ₁σₚ ⎤
                                            ⎢ σ₂σ₁  σ₂σ₂  ⋯  σ₂σₚ ⎥
R(k) = E {z(k)z(k)ᵀ} = E {(Cx − y)(Cx − y)ᵀ} = ⎢   ⋮                 ⎥ .    (1.13)
                                            ⎣ σₚσ₁  σₚσ₂  ⋯  σₚσₚ ⎦

In practice, however, it is enough to define R(k) as a diagonal matrix, so that it contains the measurement noise covariance solely in its diagonal:

       ⎡ σ₁²             ⎤
       ⎢     σ₂²         ⎥
R(k) = ⎢         ⋱       ⎥ .    (1.14)
       ⎣             σₚ² ⎦

The same is valid for the state (modeling) noise covariance matrix, i.e. Q(k) is typically created as a diagonal matrix.

Covariance matrices Q(k) and R(k) of the state and measurement noises express the uncertainty of Equations (1.10) and (1.11), respectively. The typical errors and their covariances (Q(k) and R(k)) must be defined empirically before the Kalman Filter estimator is launched. Although Q(k) and R(k) can be time varying, generally constant covariance matrices are applied. To put it simply, the closer R(k) is to zero, the more the actual measurement y(k) is "trusted" by the estimator. On the other hand, the smaller the value of Q(k), the more the Kalman Filter's estimate is "accepted" compared to the measurement.
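This trust tradeoff is easiest to see in the scalar case, where the Kalman gain (introduced formally in Eq. (1.18)) reduces to g = p⁻/(p⁻ + r) with p⁻ = p + q. The tiny sketch below is my own illustration of that limit behavior (the function name `kalman_gain` is ad hoc):

```python
# Scalar illustration of how R and Q weight the correction step:
#   g = p_prior / (p_prior + r),  with  p_prior = p + q.
# g -> 1 means the measurement is fully trusted; g -> 0 means the model is.
def kalman_gain(p, q, r):
    p_prior = p + q
    return p_prior / (p_prior + r)

p = 1.0
print(kalman_gain(p, q=1.0, r=1e-6))   # r -> 0: gain -> 1, measurement trusted
print(kalman_gain(p, q=1e-6, r=1e6))   # small q, large r: gain -> 0, model trusted
```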


The essence of Kalman Filtering is that the optimal estimation of state x(k), denoted as x̂(k), is calculated at each time step k such that
- E {x(k) − x̂(k)} = 0, and
- E {(x(k) − x̂(k))(x(k) − x̂(k))ᵀ} = P(k) is minimal.
The linear Kalman Filter algorithm can be summarized as the cyclic steps given below.

 


The steps of the Kalman Filter algorithm at time step k:
 
PREDICTION:
 
1. Calculation of the a priori state estimation:

x̂⁻(k) = A(k − 1)x̂(k − 1) + B(k − 1)u(k − 1). (1.15)

2. Calculation of the a priori error covariance:

P⁻(k) = A(k − 1)P(k − 1)Aᵀ(k − 1) + Q(k − 1). (1.16)

CORRECTION:
 
1. Measurement which provides y(k).
 
2. Calculation of the a posteriori state estimation:

x̂(k) = x̂⁻(k) + G(k)(y(k) − C(k)x̂⁻(k)), (1.17)

where G(k) = P⁻(k)Cᵀ(k)(C(k)P⁻(k)Cᵀ(k) + R(k))⁻¹. (1.18)

3. Calculation of the a posteriori error covariance:

P(k) = (I − G(k)C(k))P⁻(k). (1.19)

4. Increment the step index and go to step 1. of PREDICTION:

k := k + 1. (1.20)
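The cycle above maps directly to code. The sketch below is a minimal illustration of Eqs. (1.15)–(1.19) (my own, with constant A, B, C for brevity; the function name `kf_step` and the numeric values in the usage example are assumptions, not from the book):

```python
import numpy as np

def kf_step(x_hat, P, u, y, A, B, C, Q, R):
    """One Kalman Filter cycle: returns the a posteriori x_hat(k) and P(k)."""
    # PREDICTION
    x_prior = A @ x_hat + B @ u                        # (1.15)
    P_prior = A @ P @ A.T + Q                          # (1.16)
    # CORRECTION
    S = C @ P_prior @ C.T + R                          # innovation covariance
    G = P_prior @ C.T @ np.linalg.inv(S)               # (1.18) Kalman gain
    x_post = x_prior + G @ (y - C @ x_prior)           # (1.17)
    P_post = (np.eye(len(x_hat)) - G @ C) @ P_prior    # (1.19)
    return x_post, P_post

# Tiny usage example with a hypothetical 2-state random-walk model:
A = np.eye(2); B = np.zeros((2, 1)); C = np.eye(2)
Q = 0.01 * np.eye(2); R = 0.5 * np.eye(2)
x_hat, P = np.zeros(2), np.eye(2)
x_hat, P = kf_step(x_hat, P, u=np.zeros(1), y=np.array([1.0, -1.0]),
                   A=A, B=B, C=C, Q=Q, R=R)
# The posterior estimate moves partway from 0 toward the measurement,
# and the error covariance shrinks below its prior value.
```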

 
The Kalman Filter algorithm has two phases: prediction and correction. Accordingly, there are two groups of equations in the algorithm:

- prediction equations produce state estimate x̂⁻(k) and the extrapolation of estimate error covariance P⁻(k) for the next step, i.e. the a priori estimation;
- correction equations recalculate state estimate x̂(k) and estimate error covariance P(k) based on the updated measurement values, i.e. the a posteriori estimation.
As the Kalman Filter applies system modeling (see Eq. (1.10)), the algorithm contains estimation equations (1.15) as well as (1.16).

Although the Kalman Filter algorithm is a bit complex, its estimate is practically a Gaussian distribution, which can be a multivariate distribution if the state vector is multidimensional. In the one-dimensional case the estimate x̂(k) (being scalar) is the expected value. In the multidimensional case x̂(k) is the vector of expected values. The variances of x̂(k) are in the diagonal of matrix P(k).
Kalman Filtering is efficiently applied in most engineering fields. Its importance in road traffic was recognized by Szeto and Gazis in 1972 [14]. To demonstrate the Kalman Filter algorithm, different practical examples are introduced in the sequel.
 
Tutorial 1: Kalman Filtering for average speed measurement detected by single inductive loop
 

Consider the problem of mean speed determination based on a single loop detector, as given by Eq. (1.1). Apply the Kalman Filtering method for the filtering of real-time speed measurements.
 
Solution:

First, determine the observation equation (1.11), which will be reduced to a scalar equation as only a single state (x(k)) is investigated by the estimator:

y(k) = c(k)x(k) + z(k). (1.21)

That is, all equation terms become scalar compared to the generalized case, in which C(k) is a matrix and y(k), x(k), as well as z(k) are vectors. This will only simplify the calculation.


To get the standard measurement equation form (1.21), reformulate Eq. (1.1) by using the average effective vehicle length L_eff = L + L_det (L being the average vehicle length and L_det the loop detector length):

v̄ = (1/N) ∑_{j=1}^{N} (L + L_det)/t_j = (L_eff/N) ∑_{j=1}^{N} 1/t_j. (1.22)
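Eq. (1.22) can be illustrated with a few hypothetical numbers (my own example values, not from the book): three vehicles occupy the loop for t_j seconds each, and the mean speed follows directly.

```python
# Numerical illustration of Eq. (1.22) with assumed values.
L = 4.5                 # assumed average vehicle length (m)
L_det = 1.5             # assumed loop detector length (m)
L_eff = L + L_det       # average effective vehicle length (m)
t = [0.4, 0.3, 0.5]     # occupancy times t_j (s) of N = 3 vehicles
N = len(t)

v_mps = (L_eff / N) * sum(1.0 / tj for tj in t)  # mean speed (m/s)
v_kmh = 3.6 * v_mps                              # converted to km/h
print(round(v_kmh, 1))  # -> 56.4
```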

Based on Eq. (1.21) the measurement equation is provided at time step k (i.e. concerning the sampling interval [kT, (k + 1)T]):

(L_eff/N(k)) ∑_{j=1}^{N(k)} 1/t_j = 1 · v̄(k) + z(k), (1.23)

where the scalar parameters are as follows:

- y(k) = (L_eff/N(k)) ∑_{j=1}^{N(k)} 1/t_j is the noisy measurement;
- c(k) = 1 is the weighting parameter (denoted by C(k) previously, but here in lower case as it is not a matrix now);
- x(k) = v̄(k) is the time-varying, unknown parameter, i.e. the punctual mean speed;
- z(k) is the unknown, zero-mean measurement error whose variance is contained by covariance matrix R(k), now denoted by scalar r.
State equation (1.10) is also defined in scalar form:

x(k + 1) = x(k) + n(k), (1.24)

where, according to Eq. (1.10), A(k) = 1, u(k) = 0, and n(k) is the state noise. Note that the state noise was denoted by v(k) previously; in this example, however, v(k) means the speed. n(k) is a zero-mean state error whose variance is contained by covariance matrix Q(k), now denoted by scalar q. Eq. (1.24) represents a random walk process where the change of the state variable depends only on the previous state and an error term. The state equation of this example is therefore given below:

v̄(k + 1) = v̄(k) + n(k), (1.25)

where v̄(k + 1) and v̄(k) play the role of states x(k + 1) and x(k).

The Kalman Filter algorithm with the parameters above can be summarized as follows:

 


PREDICTION:
 
1. Calculation of the a priori state estimation:

v̂⁻(k) = v̂(k − 1).

2. Calculation of the a priori error covariance:

p⁻(k) = p(k − 1) + q.

CORRECTION:
 
1. Measurement:

y(k) = (L_eff/N(k)) ∑_{j=1}^{N(k)} 1/t_j.

2. Calculation of the a posteriori state estimation:

v̂(k) = v̂⁻(k) + g(k)(y(k) − v̂⁻(k)),

where g(k) = p⁻(k)/(p⁻(k) + r).

3. Calculation of the a posteriori error covariance:

p(k) = p⁻(k) − g(k)p⁻(k).

4. Increment the step index and go to step 1. of PREDICTION:

k := k + 1.
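The scalar filter above can be sketched in a few lines. The example below is my own runnable illustration: since the data set of Fig. 1.1 is not reproduced here, it feeds the filter with synthetic noisy speed "measurements" around an assumed true speed, using the q and r values from the text.

```python
import numpy as np

# Synthetic noisy speed measurements (illustrative, not the Fig. 1.1 data).
rng = np.random.default_rng(seed=0)
true_speed = 50.0                                   # assumed mean speed (km/h)
y = true_speed + 20.0 * rng.standard_normal(200)    # noisy measurements (km/h)

q, r = 100.0, 400.0      # constant noise covariances, as in the text
v_hat, p = 50.0, 400.0   # initial estimate and error covariance
estimates = []
for yk in y:
    # PREDICTION (random walk model: the a priori estimate equals the last one)
    v_prior = v_hat
    p_prior = p + q
    # CORRECTION
    g = p_prior / (p_prior + r)          # scalar Kalman gain
    v_hat = v_prior + g * (yk - v_prior)
    p = p_prior - g * p_prior
    estimates.append(v_hat)
```

The filtered series has a visibly smaller variance than the raw measurements, which is the smoothing effect seen in Fig. 1.4.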

 
The solution of this estimation process is tested on the data set shown by Fig. 1.1. The results are presented in Fig. 1.4. It is observable that the Kalman Filter is capable of efficiently filtering extreme peaks while following the tendency of the detected raw data.


As initial settings, the following parameters were applied: v̂(0) = 50 (km/h), p(0) = 400 (km/h)². The error covariances were assumed to be constant, therefore at each time step k, q = 100 (km/h)² and r = 400 (km/h)² were used.
 

Figure 1.4: Application of Kalman Filtering for average speed estimation measured by a single loop detector

 
As already described, the estimate of the Kalman Filter practically provides a Gaussian distribution at each k sampling time, i.e. Kalman Filtering calculates the "time series" (variation over time) of distributions. To demonstrate this, consider Fig. 1.5, where the last 4 estimates shown in Fig. 1.4 are presented. As the sampling time was 5 minutes, the presented distributions show the results from 19:40 to 20:00.
The expected value of the distribution at the kth step is the Kalman Filter estimate x̂(k). The related standard deviation is the square root of the state estimation error covariance matrix P(k) (in this example it is simply p(k) as the state variable to estimate is a scalar), i.e. σ(k) = √p(k). As observable, the standard deviation is constant (12.5 km/h) and only the expected value is changing. The reason for this is that the applied model equation and the noise covariances were constant during the filter's operation, i.e. A(k) = 1, B(k) = 0, C(k) = 1, Q(k) = q = 100, R(k) = r = 400. Thus, gain g(k) and covariance p(k) became constant.
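The constant 12.5 km/h standard deviation can be reproduced by iterating the covariance recursion alone (it does not depend on the measurements). A minimal check, using the q, r, and p(0) values from the text:

```python
import math

# Iterate p_prior = p + q; g = p_prior/(p_prior + r); p = p_prior - g*p_prior
# with q = 100, r = 400: p converges to a fixed point with sqrt(p) ~ 12.5 km/h.
q, r = 100.0, 400.0
p = 400.0  # initial p(0)
for _ in range(100):
    p_prior = p + q
    g = p_prior / (p_prior + r)
    p = p_prior - g * p_prior

print(round(math.sqrt(p), 1))  # -> 12.5
```

The fixed point solves p = (p + q)r/(p + q + r), i.e. p ≈ 156.2 (km/h)², whose square root matches the 12.5 km/h reported above.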
 

Figure 1.5: Kalman Filter estimates from 19:40 until 20:00 (with 5 min sample time)


 
Tutorial 2: Estimation of turning rates by Kalman Filtering
 
In the sequel, the Kalman Filter is applied for the estimation of the origin-destination matrix (turning rates in a simple intersection area). Consider the signalized intersection in Fig. 1.6. The goal of the example is to determine the turning rates.
 

Figure 1.6: Turning rates in a simple intersection

 
Solution:

First, the system dynamic model must be set up for the junction, which is based on the vehicle conservation law. Assume that the entering and leaving traffic flows are measured parameters. The following parameters are used in the model:
- qᵢ(k) is the incoming traffic flow from direction i,
- yⱼ(k) is the leaving traffic flow from direction j,
- xᵢⱼ(k) denotes the turning rate (the given element of the origin-destination matrix), which gives the ratio (%) of entering traffic (qᵢ(k)) from direction i going to direction j (in this example xᵢᵢ(k) = 0),
- i = 1, 2, …, m and j = 1, 2, …, n denote the incoming and leaving directions (m = n = 4 in the example),
- k = 1, 2, …, N is the discrete time step.
Due to the vehicle conservation law, the number of entering cars is equal to the number of vehicles leaving the junction during the whole observation period:

∑_{i=1}^{m} qᵢ(k) = ∑_{j=1}^{n} yⱼ(k). (1.26)

Then, a simple macroscopic traffic flow model can be written for the intersection:

yⱼ(k) = ∑_{i=1}^{m} qᵢ(k)xᵢⱼ(k) + zⱼ(k), (1.27)

where the leaving traffic flow is always biased by zero-mean white noise zⱼ(k).

Based on Eq. (1.27) the outflows towards the specific directions are given below:

y₁(k) = q₁(k)x₁₁(k) + q₂(k)x₂₁(k) + q₃(k)x₃₁(k) + q₄(k)x₄₁(k) + z₁(k), (1.28)

y₂(k) = q₁(k)x₁₂(k) + q₂(k)x₂₂(k) + q₃(k)x₃₂(k) + q₄(k)x₄₂(k) + z₂(k), (1.29)

y₃(k) = q₁(k)x₁₃(k) + q₂(k)x₂₃(k) + q₃(k)x₃₃(k) + q₄(k)x₄₃(k) + z₃(k), (1.30)

y₄(k) = q₁(k)x₁₄(k) + q₂(k)x₂₄(k) + q₃(k)x₃₄(k) + q₄(k)x₄₄(k) + z₄(k). (1.31)

The dynamics of the turning rates are described by a simple random walk process. The reason for using these simple dynamics is that the nature of the variation of turning rates is unknown, i.e. it is modeled as a stochastic process:

xᵢⱼ(k + 1) = xᵢⱼ(k) + vᵢⱼ(k), (1.32)

where vᵢⱼ(k) is a zero-mean white noise, an error component added to state xᵢⱼ(k).

By using vector forms, the following parameters are applied in the estimation:

- The system variables are the turning rates:

x(k) = [x₁₁(k), x₁₂(k), x₁₃(k), x₁₄(k), x₂₁(k), x₂₂(k), …, x₄₄(k)]ᵀ. (1.33)

- v(k) encompasses the noise terms vᵢⱼ(k) of the state dynamics.
- z(k) contains the measurement errors zⱼ(k).
Now, the generalized state space model of the junction can be defined according to Eqs. (1.10)–(1.11):

x(k + 1) = Ax(k) + v(k), (1.34)

y(k) = C(k)x(k) + z(k). (1.35)

Based on Eq. (1.32), A = I and B(k)u(k) = 0 as there is no control input in this model. C(k) can be written based on Eq. (1.27), in which the values of q(k) are present:


       ⎡ q₁ ⋯      q₂ ⋯      q₃ ⋯      q₄ ⋯      ⎤
C(k) = ⎢    ⋱         ⋱         ⋱         ⋱      ⎥ ,    (1.36)
       ⎣      ⋯ q₁      ⋯ q₂      ⋯ q₃      ⋯ q₄ ⎦

i.e. row j of C(k) contains qᵢ(k) at the position of xᵢⱼ(k) in the state vector. Practically, C(k) represents the incoming traffic flow.



The Kalman Filter algorithm with the parameters above can be summarized as follows:

 


PREDICTION:
 
1. Calculation of the a priori state estimation:

x̂⁻(k) = x̂(k − 1).

2. Calculation of the a priori error covariance:

P⁻(k) = P(k − 1) + Q.

CORRECTION:
 
1. Measurement which provides y(k).
 
2. Calculation of the a posteriori state estimation:

x̂(k) = x̂⁻(k) + G(k)(y(k) − C(k)x̂⁻(k)),

where G(k) = P⁻(k)Cᵀ(k)(C(k)P⁻(k)Cᵀ(k) + R)⁻¹.

3. Calculation of the a posteriori error covariance:

P(k) = (I − G(k)C(k))P⁻(k).

4. Increment the step index and go to step 1. of PREDICTION:

k := k + 1.
 
Figs. 1.8 and 1.9 show the results of an estimation example given in the Short Course on Dynamic Traffic Flow Modeling and Control (Chania, Greece, 2010). The goal of the example problem is to estimate the turning rates of a road junction (Fig. 1.7). The input traffic flows q₁ and q₂ as well as the outflow y₃ are assumed to be known (measured by detectors). Based on this information, two state variables are estimated, i.e. the turning rates x₁₃ and x₂₃. In the figures below, only the results for turning rate x₂₃ are presented. Two cases were tested: in the first, the real turning rates were assumed to be constant; in the second, the turning rates were sinusoidal.
 

Figure 1.7: A simple intersection for the turning rate estimation problem

Figure 1.8: Estimation of constant turning rates by Kalman Filtering

 

Figure 1.9: Estimation of sinusoidal turning rates by Kalman Filtering

 
Figs. 1.8 and 1.9 represent the estimation results in the cases of constant and sinusoidal turning rates. In both cases, different covariance matrices Q were applied for the filter design. It is observable in Fig. 1.8 that the best performance can be obtained with Q = 0 in this example. At the same time, in the case of sinusoidal turning rates, Q = 0 is not the best choice (see Fig. 1.9).

Diagonal matrix R was set to 20000 (veh²/h²), i.e. R = diag(20000).



The Matlab code of the solution is given below, which can be copied and saved as a Matlab script file.

 


%% Turning rate estimation with Kalman Filter
clear; close all; clc;
format compact
N = 1000;                   %number of simulation steps
Q = 0.0025*eye(2);          %state noise covariance: diag(0.0025)
R = 20000*eye(2);           %measurement noise covariance: diag(20000)
r = sqrt(diag(R));          %measurement noise standard deviations
x_real(:,1) = [0.8 0.2]';   %real (constant) turning rates
x_estim(:,1) = [0.5 0.5]';  %initial values of estimated turning rates
P = zeros(2);               %initial value of error covariance matrix
%% The Kalman Filter algorithm
for k = 2:N
    %Prediction
    x_apriori(:,k) = x_estim(:,k-1);    %'a priori' state estimation
    P_apriori(:,:,k) = P(:,:,k-1) + Q;  %'a priori' error covariance
    %Correction
    x_real(:,k) = x_real(:,k-1);        %real turning rates remain constant
    q1 = 1000+500*rand;   %traffic 1 (veh/h) + noise (with rand function)
    q2 = 700+500*rand;    %traffic 2 (veh/h) + noise (with rand function)
    C = [q1 0;            %actual C(k) matrix
         0  q2];
    y = C*x_real(:,k) + r.*randn(2,1);  %noisy measurement
    G = P_apriori(:,:,k)*C'/(C*P_apriori(:,:,k)*C' + R);  %Kalman gain
    x_estim(:,k) = x_apriori(:,k) + G*(y - C*x_apriori(:,k));
    P(:,:,k) = (eye(2) - G*C)*P_apriori(:,:,k);
end
%% Diagram
i = 1:N;
subplot(2,1,1); plot(i,x_estim(1,i),'b-',i,x_real(1,i),'r-');
title('x_{13}'), xlabel('Time step'), ylabel('Turning rate x_{13}');
legend('Kalman Filter estimation','Real');
subplot(2,1,2); plot(i,x_estim(2,i),'b-',i,x_real(2,i),'r-');
title('x_{23}'), xlabel('Time step'), ylabel('Turning rate x_{23}');
legend('Kalman Filter estimation','Real');

 
Tutorial 3: Kalman Filtering for tra c estimation (vehicle number) on urban road link
 
The vehicle number on urban road links between adjacent intersections is one of the most important road traffic parameters for traffic engineers. Although it seems straightforward to measure this parameter by detecting the inflow and outflow of the link (and subtracting them), in practice accurate measurement is not possible. There are several uncertainties which can spoil the measurement of this detection setup, e.g.
- motorcyclists use the side part of the lane, so they might avoid the areas covered by the loop detector;
- in case of a multilane road link, vehicles changing lane at the detector area may avoid the loops or be detected by both lanes' detectors;
- parking also disturbs the punctual measurement, and therefore cars can "disappear" from or be added to the link flow.
A potential solution is the application of a proper estimator for more accurate detection. M. Papageorgiou introduced an efficient method based on Kalman Filtering [20, 12], which requires at least 3 loops on a road link. Accordingly, a traffic estimator is presented below for a generalized urban road link. (An elaborated version of this filtering problem is given in [15].)
 
Solution:
By using 3 inductive loops (or magnetic sensors) per road link and Kalman Filtering, accurate estimation can be realized. The three detectors, as illustrated by Fig. 1.10, are at the upstream and downstream ends of the link, as well as in the middle of the link.
 

Figure 1.10: Measurement configuration with three loop detectors in a signalized urban link

 
The boundary loops (presence detectors) detect the inflow and outflow traffic volume, and the middle loop (pulse detector) measures time occupancy.

The link queue dynamics can be defined according to the vehicle-conservation law [10]:

x(k + 1) = x(k) + T(q_in(k) − q_out(k)), (1.37)

where x(k) is the vehicle number in the link, T the measurement sampling time, and q_in(k) and q_out(k) the inflow and outflow traffic. Equation (1.37) describes an ideal case if q_in(k) and q_out(k) are perfectly known, i.e. equal to the detector measurements. In practice, however, they can be biased for different reasons. Thus, the measured traffic flows are written as follows:

q_in^m(k) = q_in(k) + v_in(k), (1.38)

q_out^m(k) = q_out(k) + v_out(k), (1.39)

with v_in(k) and v_out(k) denoting the process noises generated by the upstream and downstream detectors. By using Eqs. (1.38) and (1.39), Eq. (1.37) is reformulated:


x(k + 1) = x(k) + T(q_in^m(k) − q_out^m(k)) + Tv(k), (1.40)

where v(k) = v_out(k) − v_in(k) is the state noise. Therefore, Eq. (1.40) can be identified as the state equation in the Kalman Filter algorithm.

Space occupancy oS can be approximated by time occupancy oT provided by a measurement point within the
link [22], i.e.


oS (δ) ≈ oT (δ). (1.41)

Therefore,

o_T^m(k) = o_S(k) + z(k), (1.42)

where output o_T^m(k) denotes the measured average time occupancy, and z(k) is the corresponding measurement noise.

By definition, the space occupancy is given as the ratio of the sum of all vehicle lengths (l_j) to the whole link length L:

o_S = (∑_{j=1}^{m} l_j)/L. (1.43)

Then, based on Eq. (1.43), the space occupancy is reformulated:

o_S(k) = x(k)L_eff/(Ln), (1.44)

where x(k) is the number of vehicles in the link, L_eff the average effective vehicle length (average vehicle length plus detector length, cf. Eq. (1.22)), L the link length, and n the lane number. With Eqs. (1.42) and (1.44) the measured vehicle number can be determined:

y(k) = (Ln/L_eff) o_T^m(k). (1.45)

Finally, by substituting Eqs. (1.42) and (1.44) into Eq. (1.45), one arrives at the Kalman Filter's observation equation:

y(k) = x(k) + (Ln/L_eff) z(k). (1.46)

It is assumed that state noise v(k) and measurement bias z(k) fulfill the stochastic conditions of the Kalman Filter, i.e. they are independent, zero-mean white Gaussian random variables. Since the state and measurement difference equations as well as the system noises are properly identified, the linear quadratic estimation can be implemented.
 


Note

The Kalman Filter was introduced based on Eqs. (1.10)–(1.11). This model can be recognized in Eqs. (1.40) and (1.46). At the same time, a slight difference can be observed in these equations: the noises – the last terms in Eqs. (1.40) and (1.46) – are multiplied by a constant value. In this example the system model is therefore as follows:

x(k + 1) = A(k)x(k) + B(k)u(k) + Mv(k) = x(k) + T(q_in^m(k) − q_out^m(k)) + Tv(k),

y(k) = C(k)x(k) + Nz(k) = x(k) + (Ln/L_eff) z(k),

where

- M = T and N = Ln/L_eff are the coefficients of the noise terms,
- A(k) = C(k) = 1 (constant scalar),
- B(k) = T and u(k) = q_in^m(k) − q_out^m(k).

The difference due to the coefficients M and N is present in the Kalman Filter equations. However, it is only a weighting by a constant value, i.e. instead of using the covariance matrices Q(k) = E {v(k)vᵀ(k)} and R(k) = E {z(k)zᵀ(k)} (as defined previously), simply Q̃(k) = M E {v(k)vᵀ(k)}Mᵀ and R̃(k) = N E {z(k)zᵀ(k)}Nᵀ are applied.

 
Finally, by using the above parameters, the Kalman Filtering can be summarized as follows:

 


PREDICTION:
 
1. Calculation of the a priori state estimation by using the measured values of q_in^m and q_out^m (see Fig. 1.10):

x̂⁻(k) = x̂(k − 1) + T(q_in^m(k − 1) − q_out^m(k − 1)).

2. Calculation of the a priori error covariance:

P⁻(k) = P(k − 1) + Q̃(k − 1).

CORRECTION:
 
1. Calculation of y(k) based on the measurement of o_T^m(k) (see Fig. 1.10):

y(k) = (Ln/L_eff) o_T^m(k).

2. Calculation of the a posteriori state estimation:

x̂(k) = x̂⁻(k) + G(k)(y(k) − x̂⁻(k)),

where G(k) = P⁻(k)(P⁻(k) + R̃(k))⁻¹.

3. Calculation of the a posteriori error covariance:

P(k) = (I − G(k))P⁻(k).

4. Increment the step index and go to step 1. of PREDICTION:

k := k + 1.
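The link vehicle-count filter above can be sketched with synthetic detector data. The example below is my own illustration: all numeric values (link length, flows, occupancy noise, Q̃ and R̃ levels) are assumptions for demonstration, not taken from the book's VISSIM simulation.

```python
import numpy as np

# Synthetic detector data for the link count filter (illustrative values).
rng = np.random.default_rng(seed=3)
T = 5.0 / 3600.0            # sampling time: 5 s, expressed in hours
L_link, n_lanes = 0.5, 2    # link length (km) and lane number (assumed)
L_eff = 6.0 / 1000.0        # average effective vehicle length (km, assumed)
N_coef = L_link * n_lanes / L_eff   # N = L*n/L_eff, occupancy-to-count factor

steps, x_true = 200, 40.0   # assumed constant true vehicle number on the link
q_in_m = 1800.0 + 50.0 * rng.standard_normal(steps)    # measured inflow (veh/h)
q_out_m = 1800.0 + 50.0 * rng.standard_normal(steps)   # measured outflow (veh/h)
occ = x_true / N_coef + 0.01 * rng.standard_normal(steps)  # measured occupancy

Q_t = (T ** 2) * (2 * 50.0 ** 2)    # Q~ = M Q M^T with M = T (assumed levels)
R_t = (N_coef ** 2) * (0.01 ** 2)   # R~ = N R N^T
x_hat, p = 30.0, 100.0              # deliberately wrong initial guess
for k in range(steps):
    # PREDICTION: propagate the count with the measured in/out flows
    x_prior = x_hat + T * (q_in_m[k] - q_out_m[k])
    p_prior = p + Q_t
    # CORRECTION: occupancy-based measured vehicle number, Eq. (1.45)
    y = N_coef * occ[k]
    g = p_prior / (p_prior + R_t)       # scalar Kalman gain
    x_hat = x_prior + g * (y - x_prior)
    p = (1.0 - g) * p_prior
```

Despite the biased flows and the wrong initial count, the estimate settles close to the true 40 vehicles within a few steps, which is the behavior Fig. 1.11 illustrates for the simulated link.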

 
Figure 1.11 shows the simulation results of the presented traffic estimation method. The simulation was carried out with the VISSIM microscopic traffic simulation software.

 

Figure 1.11: Simulation example for Kalman Filtering based amelioration of traffic flow estimation from inductive loop detection


 

© 2015 - 2020 Akadémiai Kiadó Zrt.