
Advanced Driving Assistance Systems – Overtaking Maneuver on a Highway

PREPARED IN PARTIAL FULFILLMENT OF
AUTOMOTIVE VEHICLES – ME F441

Jayesh Mahajan | 2015A4PS0331G | Supervisor – Dr. Kiran D. Mali


Table of Contents
Abstract
I. Introduction
II. Multiple Stereo Vision for Intelligent Vehicles
   A. Structure of Overtaking ADAS Algorithm
   B. Calculating the Disparity Space
   C. Adaptive Cruise Control and Blind Areas Radar Sensor
   D. Lane Detection
   E. Custom BM Algorithm Detecting Vehicles and the Distance to Them
   F. Mapping the Environment
III. Results
   A. Calculating the Disparity
   B. Lane Detection
IV. Conclusion
References

Abstract
This report presents a research review of an Advanced Driver Assistance System (ADAS). A design approach for a driver assistance system supporting an overtaking maneuver on a highway is presented. Two stereo vision systems and two radar sensors are used to acquire disparity maps of the vehicle's surrounding environment and to detect obstacles. A representation algorithm and a lane detection algorithm are used to find the vehicle's position on the road.

Keywords – Stereo vision; lane detection; environment mapping; driver assistance; highway

I. Introduction
Safety has become a key factor in today's automotive industry. Active Safety Systems, also known as Advanced Driver Assistance Systems (ADAS), are therefore designed to actively assist the driver and avoid accidents before they occur. ADAS reduces driver error and improves efficiency in traffic and transport. ADAS implementation offers potentially considerable benefits through significant decreases in human stress, economic costs, and pollution, and through improvements in road, driver, and traffic safety.

Vehicles equipped with an ADAS can autonomously intervene if a potentially dangerous situation is detected. The reviewed paper proposes a system that can analyze and understand information from the surroundings on a highway.

Based on a literature survey of several papers, the authors propose a software architecture that can be used for an overtaking maneuver on a highway. The system is explained in what follows: Section II is organized in stages covering preprocessing, the actual processing, representation, and planning. Section III presents the results obtained from the developed algorithms.

II. Multiple Stereo Vision for Intelligent Vehicles
The system's algorithm is developed in a C# environment for the Windows operating system. The image library described in [1] is used for image processing. The library adds a variety of facilities to the application, such as an image class for stereo pair cameras and generic color and depth types.

The images for analysis are captured while driving by two A4Tech cameras, spaced 35 cm apart and calibrated as a stereo pair at HD resolution, for the front vehicle vision, and by two similarly calibrated Genius cameras, with lower image quality, for the rear view. Fig. 1 presents the assembly mounted on the vehicle.

Fig. 2 illustrates the architecture of the driving assistance system proposed in the paper, mentioning the main routines used for visual information processing. The system is based on stereo image mapping with feature detection and tracking. The approach for a driver assistance system performing an overtaking maneuver is presented.

A. Structure of Overtaking ADAS Algorithm

The algorithm uses information received from the front and rear stereo vision systems and from the lateral radar sensors.

All the data gathered from the sensors are processed, checked, and then unified on a global map. After this, the ADAS algorithm in Fig. 3 starts.

Once detection starts, a lane detection algorithm runs to search the left image for the current road lane and the left lane, and the right image for the current road lane and the right lane. After the stereo vision based algorithm runs, all the distances to the surrounding obstacles are represented on the map.

If a detected vehicle is considerably slower than the system's vehicle, the driver is informed that the system can overtake. When the driver signals and starts the maneuver, the algorithm switches to overtake mode. If the left lane (the overtaking lane) is not free, the system informs the driver visually and acoustically; if this action is not enough, the algorithm steers the vehicle back and brakes to avoid an accident in case the driver has accelerated.
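The decision logic described above can be sketched as a small state machine. The paper's implementation is in C#; this is an illustrative Python sketch, and the state names, speed margin, and function signature are assumptions, not taken from the reviewed paper.

```python
# Illustrative sketch of the overtake-mode decision logic.
# SPEED_MARGIN and all state/action names are hypothetical.
SPEED_MARGIN = 15.0  # km/h; lead vehicle must be this much slower to suggest overtaking

def overtake_action(state, ego_speed, lead_speed, left_lane_free, driver_signaled):
    """Return (next_state, action) for one decision step."""
    if state == "FOLLOW":
        # Lead vehicle considerably slower and overtaking lane free: inform driver
        if lead_speed + SPEED_MARGIN <= ego_speed and left_lane_free:
            return ("SUGGEST", "inform driver: overtaking possible")
        return ("FOLLOW", "none")
    if state == "SUGGEST":
        # Driver signals and starts the maneuver: enter overtake mode
        if driver_signaled:
            return ("OVERTAKE", "enter overtake mode")
        return ("SUGGEST", "none")
    if state == "OVERTAKE":
        # Overtaking lane occupied: warn first; steering back and braking
        # is the escalation step if the warning is not enough
        if not left_lane_free:
            return ("OVERTAKE", "warn visually and acoustically")
        return ("OVERTAKE", "none")
    return (state, "none")
```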

The last part of the overtaking maneuver is returning to the original lane, in front of the overtaken vehicle. If the road lane is free, the driver can change lanes. If it is not, the system gives a visual and acoustic warning; if that is not enough, the algorithm steers the vehicle back.

B. Calculating the Disparity Space

The presented images are captured while driving, by the stereo assemblies at the front and rear of the vehicle. Every frame is resized to a 320×240 pixel resolution before any analysis starts.

Stereo pair images are converted to grayscale before comparison, in order to obtain the matrix of disparities using the semi-global matching (SGM) algorithm.

1) Semi-Global Matching (SGM)

The Semi-Global Matching algorithm combines global and local pixel-wise matching for accurate detection. Knowing the intrinsic and extrinsic orientation of the two pairs of cameras, the algorithm can find the distance to an object. For unrectified images, the epipolar lines are efficiently computed and followed explicitly during matching, as described in [2].
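For calibrated, rectified stereo pairs, the distance to an object follows from the standard relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal Python sketch (the paper states a 35 cm baseline; the focal length used below is an illustrative assumption):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard rectified-stereo relation: Z = f * B / d.

    disparity_px: pixel disparity between matched left/right features
    focal_px:     focal length expressed in pixels (illustrative value in the test)
    baseline_m:   distance between the two cameras (0.35 m per the paper)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note how small disparities map to large, coarsely quantized distances, which is why integer-pixel disparities limit long-range depth resolution, as discussed below.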

Matching costs often reflect radiometric differences, which occur between cameras with different characteristics due to differing sensor properties, light position, or vignetting, as presented in [3]. The authors also specify how to calibrate the camera characteristics and correct the images to the right sizes. The algorithm handles the distortions of corresponding areas of the image in order to cope with these radiometric differences even when the lighting and image content are unknown; the properties of the scene and lighting are typically not known in advance.

Mutual information is used as the matching cost. With mutual information, the global radiometric difference is modeled in a joint histogram of corresponding intensities.

The pixel-wise algorithm combined with mutual information yields a matching cost suitable for matching unrectified images. Because individual pixels are considered, the corresponding image parts may be rotated or scaled relative to each other.

2) Block Matching Algorithm

The Block Matching algorithm computes a squared-error cost for each possible offset. Finding the position where the sub-images are most similar, i.e., where the minimum error occurs, is equivalent to computing the disparity.

Disparities typically have a small dynamic range, usually less than 10 pixels, compared to the actual distances to objects. Therefore, measuring disparities at integer pixel values leads to a very low depth resolution. Fig. 4 presents the result of the SGM and BM algorithms adapted to the video system.

A disparity cube with lateral and top views, seen in Fig. 5, is generated from Fig. 4 (c) and (d). The four green rectangles represent the detection of the vehicle from Fig. 4 (a) and (b), situated 40 m from the camera. One can see from the image that the sky detection, represented by the orange line, is very close to the vehicle detection. This means that the SGM and BM algorithms are not well suited to the long range detection required here.

C. Adaptive Cruise Control and Blind Areas Radar Sensor
Adaptive cruise control maintains the vehicle speed. The system can automatically maintain a preset distance between the radar sensor and the vehicle ahead. The reading from this sensor gives the exact distance between our vehicle and the one in front on the same lane. This information is used to cross-check the data received from the stereo system.

Blind spot monitor radar sensors are used to acquire information about distances
to obstacles for the lateral left and right position of the vehicle.

D. Lane Detection
The adapted algorithm detects the current road lane in the left and right images. The lateral road lanes are outlined in yellow, as seen in Fig. 6.

The average of each pixel's neighbors is calculated; if the pixel value is greater than this threshold, the pixel is marked white, else black. It can be seen in Fig. 7 that the left image detects only the current road lane, while the right image detects both the current road lane and the right lane.

E. Custom BM Algorithm Detecting Vehicles and the Distance to Them
Through the acquired front and back images, the disparities are calculated for the closest vehicle, and the road lanes are drawn as in Fig. 7 or Fig. 11 for the front and back sides. After that, the close points calculated by the SGM algorithm are transferred to the global map.

The proposed model can detect an object over a range of up to 100 m, that is, 50 m in front of and 50 m behind the vehicle.

First, the algorithm detects the road lanes as in Fig. 7, bordered by the green and yellow lines. Intersecting those lane borderlines with a horizontal line in the second half of the image, drawn in blue in Fig. 8 (a), gives the two Cartesian points shown in green in Fig. 8 (b).

Data for road lane analysis is extracted at the Cartesian points. The procedure is repeated for the left and right images, which are then overlapped, represented in black, for the left and right filtered images. Fig. 8 (c) shows the match between the left and right characteristics; the pixel difference of this matching block represents an obstacle.

Fig. 8 (d) displays the detection of the vehicle. The direction of the detected vehicle is approximated and, based on the current vehicle speed, it is determined whether it is slower or not. From this the time to collision is also computed. These steps are repeated for the front and back stereo images.
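Under a constant-speed assumption, the time-to-collision computation mentioned above reduces to distance divided by closing speed. A minimal Python sketch; the function name and the constant-speed model are assumptions, since the paper does not detail its TTC formula:

```python
def time_to_collision(distance_m, ego_speed_mps, lead_speed_mps):
    """Seconds until collision assuming both speeds stay constant.
    Returns None when the gap is not closing (lead is faster or equal)."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return None  # not closing, no collision predicted
    return distance_m / closing
```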

F. Mapping the Environment

The surrounding vehicle environment is mapped using the front and back images. The calculated disparities and object distance information are transferred to the global map. Lateral radar sensor distance data are also integrated into the map, presented in Fig. 9. Front radar sensor data is used to verify the stereo data.
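One simple way to unify stereo and radar distance readings on a single map is to merge observations that fall within a tolerance of each other, preferring the radar value since it cross-checks the stereo estimate. This is an illustrative sketch, not the paper's fusion method; the tolerance and the preference for radar are assumptions.

```python
def fuse_observations(stereo_obs, radar_obs, tol_m=2.0):
    """Merge obstacle distances (meters) from stereo and radar.
    Stereo readings within tol_m of a radar reading are treated as the
    same obstacle and the radar value is kept; everything else is added."""
    fused = list(radar_obs)
    for s in stereo_obs:
        if not any(abs(s - r) <= tol_m for r in radar_obs):
            fused.append(s)  # obstacle seen only by the stereo system
    return sorted(fused)
```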

III. Results
A. Calculating the Disparity
Positive results, though not extensively tested, were obtained for the algorithm calculating the distance to a vehicle up to 100 m away.

The approach processes 17 to 25 FPS, depending on the complexity of the image and the CPU of the computer processing the data. The SGM algorithm used is more reliable, even at long range, but not very accurate. Processing was fast thanks to the optimized algorithm and to multiple threads running simultaneously. The test platform was a PC with an AMD FX-8350 processor and 8 GB of RAM.

Table I presents the results of three different stereo matching algorithms (SGM, BM, and the custom block matching), computed for three different distances at which the obstacle was detected.

The chart in Fig. 10 represents the results from Table I. The advantage of the proposed algorithm can be seen at the third step, where SGM and BM do not return any values.

B. Lane Detection
Table II presents the results for two different types of roads during daytime, while detecting different types of road lanes.

Fig. 10 shows the left, current, and right road lanes detected by the algorithm on different types of highways.

IV. Conclusion
In this paper, a possible ADAS for the overtaking maneuver is proposed and described. It aims to detect all obstacles as accurately as possible, with a fast refresh rate. The proposed system can inform the driver when it is possible to start the overtaking maneuver and when the driver can come back to the initial lane. If the driver tries to start an overtaking maneuver while a fast vehicle approaches in the overtaking lane, the system signals the driver visually and acoustically. If this is not sufficient, the system can steer back to the current lane and brake to avoid collision with the front vehicle in case the driver has accelerated.

References
This report is based on:

Alexandru Daniel Jarnea, Radu Dobrescu, Dan Popescu, and Loretta Ichim, "Advanced driver assistance system for overtaking maneuver on a highway," 2015 ICSTCC.
http://ieeexplore.ieee.org/document/7321385/

Others:

[1] S. Shi, Emgu CV Essentials, Packt Publishing, UK, 2013.

[2] H. Hirschmüller, "Semi-Global Matching: motivation, developments and applications," Photogrammetric Week, pp. 173-184, 2011.

[3] A.D. Jarnea, G. Florea, and R. Dobrescu, "Visual information processing routines for intelligent vehicles," International Conference on Circuits, Systems, Communications and Computers (CSCC), Greece, July 17-21, 2014.

Cover Image –

http://newsbytes.ph/wp-content/uploads/2015/10/adas.png
