Journal of Physics: Conference Series

PAPER • OPEN ACCESS

To cite this article: Qian Wang et al 2021 J. Phys.: Conf. Ser. 1966 012038
doi:10.1088/1742-6596/1966/1/012038

Millimeter-Wave Radar and Video Fusion Vehicle Detection based on Adaptive Kalman Filtering

Qian Wang1, Yansong Song1*, Dongpo Xu1


1 College of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, Jilin, China
* Corresponding author's e-mail: 2019100470@mails.cust.edu.cn

Abstract. Road traffic safety has always been a focus of social concern, and in actual road traffic scenes a single sensor cannot cope with the interference caused by complex external factors, which makes vehicle detection extremely challenging. This paper focuses on a vehicle detection algorithm that fuses a millimeter-wave radar sensor and a monocular camera sensor, covering the calibration of the millimeter-wave radar and camera and the establishment of a temporal fusion model of the two sensors. Finally, the target information obtained from the two sensors is fused using an adaptive Kalman filter fusion algorithm, which reduces data ambiguity and increases the reliability and validity of the data. Experiments show that the method can overcome the shortcomings of a single sensor in target detection, and the obtained target information is more comprehensive than the results of monocular camera detection.

1. Introduction
Intelligent transportation systems are the current development direction of transportation; they can solve the traffic problems caused by increasingly complex traffic conditions and improve the efficiency of traffic monitoring. The technologies currently applied to traffic detection are mainly infrared detection, ultrasonic detection, laser detection, and video detection. A camera used as a video sensor provides rich target information and a wide detection range, but it is susceptible to weather and lighting changes [1]. Millimeter-wave radar obtains target information by emitting modulated electromagnetic waves and observing their echoes; it can accurately determine the target's position and estimate its speed, has all-weather working capability, and works well in bad weather, but the information it provides on target texture and shape characteristics is limited [2]. In response to the limitations of single sensors, the multi-sensor fusion approach [3] has become an effective way to solve this problem at this stage. At present, the more commonly used approach is to fuse lidar with a camera [4] for vehicle detection, but lidar has a narrow detection range and low penetration ability, is affected by occlusion, and cannot operate reliably in bad weather (such as rain, snow, haze, and sandstorms). In recent years, the fusion of millimeter-wave radar and video in vehicle-mounted radar has been applied to vehicle anti-collision and forward obstacle detection with good development prospects [5], but the problem of fusing the data of the two sensors is not addressed there.
In this paper, an adaptive Kalman filter fusion algorithm is proposed to fuse the data obtained from the two sensors and address the shortcomings of the literature [5]. Firstly, the two sensors are spatially calibrated by coordinate conversion, converting the millimeter-wave radar coordinates to the image coordinate system to obtain the region of interest of the target; secondly, the two sensors are temporally
calibrated using Lagrangian interpolation to ensure the temporal consistency of the two sensors; finally, the acquired target information is fused by weighting.

2. The proposed fusion detection method

2.1. Space calibration


The information acquired by the millimeter-wave radar and the camera differs not only in form and data rate but also in coordinate system type, so establishing accurate coordinate system conversion relationships is the key to achieving millimeter-wave radar and video fusion. The spatial fusion of millimeter-wave radar and vision sensors is the conversion of measurements from different sensor coordinate systems into the same coordinate system. The space calibration process between the two sensors is shown in Figure 1.
Radar coordinate system → World coordinate system → Camera coordinate system → Pixel coordinate system

Figure 1. Calibration process between radar and camera


The radar detects the radial distance, radial velocity, and azimuth of the target relative to the sensor. Let the radial distance between the target and the radar be r and the azimuth angle be α. The radial distance is decomposed as shown in Equation (1):

$$x = r\sin(\alpha), \qquad y = r\cos(\alpha) \tag{1}$$
Firstly, the radar data are converted from the spherical coordinate system of the radar to the world coordinate system. Assume that the installation height and inclination angle of the radar are h and θ, respectively. The conversion process is shown in Figure 2, where s is the distance from the radar to the intersection of the radar beam center axis with the ground plane. The positive direction of the $O_wZ_w$ axis is taken along the inclined line s from the radar towards the ground; in the plane in which the radar lies, the leftward direction is the positive direction of the $O_wX_w$ axis and the upward direction is the positive direction of the $O_wY_w$ axis, which establishes the $O_wX_wY_wZ_w$ 3D world coordinate system.

Figure 2. Diagram of the conversion of the radar coordinate system to the world coordinate system

Mapping the position information r and α of the detected target into the $O_wX_wY_wZ_w$ 3D world coordinate system yields the conversion relationship shown in Equation (2):

$$X_w = x, \qquad Y_w = -y\sin(\theta), \qquad Z_w = -y\cos(\theta) \tag{2}$$

where x and y are related to r and α as shown in Equation (1). From the geometric relationship, θ satisfies Equation (3):

$$\theta = \arccos\!\left(\frac{h}{s}\right) = \arccos\!\left(\frac{h}{y}\right) \tag{3}$$
In the world coordinate system, position coordinates are used to describe the position of the camera in space. Ignoring the offset between the radar and camera coordinate origins, the two origins are considered approximately coincident, and the two world coordinate systems are made to coincide by rotating the three-dimensional Cartesian coordinate system in which the radar is located. $O_wX_wY_wZ_w$ denotes the world coordinate system in which the radar is located, and $O_cX_cY_cZ_c$ denotes the world coordinate system in which the camera is located. Assuming that the $OYZ$ planes of the two coordinate systems can be made coincident, the radar world coordinate system is rotated counterclockwise around the $O_wX_w$ axis until the two coordinate systems coincide. The rotation transformation can be written as Equation (4), where γ is the rotation angle:

$$X_c = X_w, \qquad Y_c = Y_w\cos(\gamma) + Z_w\sin(\gamma), \qquad Z_c = Z_w\cos(\gamma) - Y_w\sin(\gamma) \tag{4}$$
The radar data are further converted into the image coordinate system $Oxy$ of the corresponding frame image using the pinhole imaging model of the camera. The image coordinate system is a two-dimensional plane coordinate system. Assuming that the focal length of the camera is f, a point $p(X_c, Y_c, Z_c)$ in the camera coordinate system is projected onto the point $p'(x, y)$ on the frame image after imaging by the camera, which is obtained from the geometric relationship as shown in Equation (5):

$$x = f\frac{X_c}{Z_c}, \qquad y = f\frac{Y_c}{Z_c} \tag{5}$$
From the above coordinate transformation relations, the matrix form of the transformation between the radar coordinate system and the image coordinate system is obtained as shown in Equation (6):

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_t \\ Y_t \\ 1 \end{bmatrix} \tag{6}$$

where $X_t = X_c / Z_c$ and $Y_t = Y_c / Z_c$.
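To make the chain of Equations (1)–(6) concrete, the following is a minimal Python sketch of the radar-to-image conversion described above; the mounting height, rotation angle, focal length, and the example measurement are placeholder values chosen for illustration, not parameters reported in this paper.

```python
import numpy as np

def radar_to_image(r, alpha, h, gamma, f):
    """Map a radar measurement (range r, azimuth alpha) to image coordinates.

    Follows Equations (1)-(6): polar -> world -> camera -> image plane.
    h is the radar mounting height, gamma the rotation aligning the radar and
    camera world frames, f the camera focal length (all placeholder values).
    """
    # Eq. (1): decompose the radial distance
    x = r * np.sin(alpha)
    y = r * np.cos(alpha)

    # Eq. (3): inclination of the line to the target, from the mounting height
    theta = np.arccos(h / y)

    # Eq. (2): radar measurement expressed in the world coordinate system
    Xw = x
    Yw = -y * np.sin(theta)
    Zw = -y * np.cos(theta)

    # Eq. (4): rotation about the X axis by gamma into the camera world frame
    Xc = Xw
    Yc = Yw * np.cos(gamma) + Zw * np.sin(gamma)
    Zc = Zw * np.cos(gamma) - Yw * np.sin(gamma)

    # Eqs. (5)-(6): projection onto the image plane
    u = f * Xc / Zc
    v = f * Yc / Zc
    return u, v

# Example with placeholder values (not from the paper):
# radar 6 m above the road, 30 m range, 5 degrees azimuth
print(radar_to_image(r=30.0, alpha=np.radians(5.0), h=6.0,
                     gamma=np.radians(20.0), f=0.008))
```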

2.2. Camera intrinsic parameter acquisition

The mapping relationship of the target from the camera coordinate system to the pixel coordinate system is determined by the parameters of the camera imaging model. Because of the distortion of the camera lens and its deviation from the ideal geometry, it is necessary to compensate for the lens distortion and to calibrate the coordinate transformation relationship. The camera intrinsic parameters are obtained by
Zhang Zhengyou's camera calibration method, and the radar coordinates obtained after camera calibration are related to the pixel coordinate system as shown in Equation (7):

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{d_x} & -\dfrac{\cot\theta}{d_x} & u_0 \\ 0 & \dfrac{1}{d_y\sin\theta} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{f}{d_x} & -\dfrac{f\cot\theta}{d_x} & u_0 \\ 0 & \dfrac{f}{d_y\sin\theta} & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_t \\ Y_t \\ 1 \end{bmatrix} \tag{7}$$

where $k_x = \dfrac{f}{d_x}$, $k_s = \dfrac{f\cot\theta}{d_x}$, and $k_y = \dfrac{f}{d_y\sin\theta}$, with $d_x$ and $d_y$ the physical pixel sizes, $\theta$ the skew angle between the pixel axes, and $(u_0, v_0)$ the principal point.

2.3. Data fusion model

Temporal alignment is necessary before data fusion. In this paper, the Lagrangian interpolation method is used to synchronize the two sensors in time: the observations at moments $t_{k-1}$, $t_k$, and $t_{k+1}$ are used to estimate the measurement at the required moment $t$. The interpolation function is shown in Equation (8):

$$x(t) = \frac{(t-t_k)(t-t_{k+1})}{(t_{k-1}-t_k)(t_{k-1}-t_{k+1})}\,x_{k-1} + \frac{(t-t_{k-1})(t-t_{k+1})}{(t_k-t_{k-1})(t_k-t_{k+1})}\,x_k + \frac{(t-t_{k-1})(t-t_k)}{(t_{k+1}-t_{k-1})(t_{k+1}-t_k)}\,x_{k+1} \tag{8}$$
After the spatio-temporal alignment, the data collected by both sensors need to be fused so that the obtained target data are more complete. The fusion strategy used in this paper is an adaptive Kalman filter fusion algorithm with a distributed fusion structure, which weights the target vehicle parameters by the estimation errors of the two sensors. Assume that, for the same target, the local estimates and corresponding error covariance matrices of sensor i and sensor j are $\hat{X}_m(k/k)$ and $P_m(k/k)$, m = i, j, respectively, and let the corresponding state estimation errors be as shown in Equation (9):

$$\tilde{X}_i(k/k) = X(k) - \hat{X}_i(k/k), \qquad \tilde{X}_j(k/k) = X(k) - \hat{X}_j(k/k) \tag{9}$$

The two errors are independent of each other, so the estimates can be weighted directly using the respective estimation errors of each sensor. The fusion result is shown in Equation (10):

$$\hat{X} = P_j(P_i + P_j)^{-1}\hat{X}_i + P_i(P_i + P_j)^{-1}\hat{X}_j \tag{10}$$

The covariance matrix of the estimation error is shown in Equation (11):

$$P = P_i(P_i + P_j)^{-1}P_j \tag{11}$$

3. Experimental results and analysis

The millimeter-wave radar used in this experiment detects targets by transmitting millimeter waves in the 24 GHz band, and the visible-light sensor is a Hikvision camera; the two sensors are fixed on a gantry above the road to collect the driving information of road vehicles. The millimeter-wave radar acquires three types of target information: speed, angle, and distance. After the spatio-temporal calibration described above, the region of interest is generated on the corresponding image frame. The video captured by the camera is processed with mixture-of-Gaussians background modeling to separate the moving foreground and reduce the influence of stationary objects on the experimental results.
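The paper does not name a specific implementation of the mixture-of-Gaussians model; the sketch below uses OpenCV's MOG2 background subtractor as one possible realization, with the number of Gaussians, foreground (background-ratio) threshold, and learning rate set to the values reported in the experiment below, and a placeholder video path.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # placeholder path, not from the paper

# Mixture-of-Gaussians background model: 3 Gaussians per pixel,
# background ratio (foreground threshold) 0.65, learning rate 0.025
mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
mog.setNMixtures(3)
mog.setBackgroundRatio(0.65)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame, learningRate=0.025)
    # Clean up the mask and draw boxes around the moving foreground regions
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:          # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
    cv2.imshow("foreground", frame)
    if (cv2.waitKey(30) & 0xFF) == 27:        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```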
According to the above fusion algorithm, frames are randomly selected from a video sequence acquired by the camera for testing, where the number of Gaussians used for video detection is 3, the learning rate is 0.025, and the foreground threshold is 0.65. The experimental results are as follows. Figure 3 shows the detection results of the 11th, 61st, and 141st frames of the video after mixed-Gaussian processing, marked with yellow rectangles. Figure 4 shows the single millimeter-wave radar detection results aligned with the image, where the red dots indicate the target positions
detected by the radar and mark the distance and speed information of the target vehicle. Figure 5 shows the result of fusing the radar detections with the video, marked with green rectangles and containing the weighted-fused target speed and distance information.

Figure 3. Single video detection results of frame 11, frame 61, and frame 141

Figure 4. Single millimeter-wave radar detection results

Figure 5. Fusion detection results

From the comparison of the above results, Figure 3 shows that single video detection misses a target in the 11th frame, and there are deviations in the detection results, which can easily cause false detections. Figure 4 shows the single millimeter-wave radar detection results; the detected target positions also deviate somewhat, and the alignment of the radar data with the image does not correspond to the target in every case. The fusion detection results in Figure 5 show that all moving vehicles in the region of interest can be detected without false detection, and the detection results visually mark the distance and speed information of the target vehicle.

4. Summary
The tests show that when a road monitoring system uses a single sensor to detect vehicles, the millimeter-wave radar can detect the position and speed of the target vehicle; after the radar is calibrated with the camera, a region of interest can be formed on the video frame and annotated with the detected position and speed, but false detections sometimes occur. For single-video detection, mixed-Gaussian background modeling is first used to extract the motion foreground region of the video image, which facilitates the subsequent matching and fusion with the radar detections, but the single-video result suffers from missed detections. The fusion of the two detections significantly reduces false detections and omissions, and the vehicle detection results contain more complete target information, increasing the reliability and stability of the data and reducing data ambiguity.

References
[1] R. O. Chavez-Garcia and O. Aycard. Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking [J]. IEEE Transactions on Intelligent Transportation Systems, 2016, 17(2): 525-534.
[2] H. Öztürk and K. Yeğin, "Predistorter based K-band FMCW radar for vehicle speed detection," in Proc. 17th IRS, Krakow, Poland, May 2016, pp. 1-4.
[3] Chen S, Huang L, Bai J, Jiang H, et al. "Multi-Sensor Information Fusion Algorithm with Central Level Architecture for Intelligent Vehicle Environmental Perception System," SAE Technical Paper 2016-01-1894.
[4] J. Zhang, J. Han, S. Wang, Y. Liao and P. Li. Real time obstacle detection method based on lidar and wireless sensor [C]// 2017 Chinese Automation Congress, Jinan, China: CAC, 2017: 5951-5955.
[5] Meinl F, Stolz M, Kunert M, et al. An experimental high performance radar system for highly automated driving [C]// 2017 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility, Nagoya, Japan: IEEE, 2017: 71-74.
