
Vehicle Monitoring on Highways

Siddharth Das, Joshin Rexy, Dr. Vishal Satpute
Electronics & Communication
VNIT Nagpur
Nagpur, India
hiiamsid518@students.vnit.ac.in

Abstract— It is a well-known fact that perception of the road (lane) plays a vital role in assisting drivers, by using software that can convey to them the possible risks in their surroundings. The strategy of this work is to prevent car accidents on highways, mainly those due to rash driving such as sudden lane departure or failing to keep to the lane on curved roads. Hence, we propose a method to reduce these crashes by mounting a camera on the vehicle which continuously keeps track of the view in front of the vehicle and delivers relevant warnings to the driver in case of rash driving or fatal situations. It can also be helpful for self-driving cars, by providing a wide range of possibilities for the software to learn from and move the vehicle on its own within the lane.

We employ image processing techniques to detect and solve some of the most prevalent roadway concerns. The problem statement is broken down into basic building blocks to better understand the comprehensive list of potential techniques within this framework. It comes with a decision model that provides the driver with the required cautions.

Keywords—Image processing, warning system, road, lane detection, driver aid.

I. INTRODUCTION

In today's era, motor vehicles in emerging nations have progressively increased during the last decade, and so have traffic accidents. It is vital to build technology that can avoid such unanticipated incidents. Vehicles are rapidly incorporating advanced driver aid systems, which alert drivers in risky circumstances or actively participate in driving. However, maintaining a constant watch on drivers to eliminate all potential threats involves a number of difficult challenges. Human actions are, in reality, difficult to understand, anticipate, and manage using present technology. During the next decade, such systems are predicted to become increasingly complicated, eventually achieving total autonomy [1].

The main problem we face while developing such systems is the perception problem [3], which includes mainly two elements:
a) Road-lane perception
b) Obstacle detection

Road-lane line detection is an important concept used to define the path taken by "self-driving cars" in order to avoid drifting into the wrong lane. On urban, rural, and highway roads, identifying the size of the road, the number and position of lanes, and merging, splitting, and ending lanes are all part of road and lane knowledge [2][4].
II. LITERATURE SURVEY

Du, Xinxin, and Kok Tan in their paper discussed edge-based detection techniques which can be applied to detect the road [24]. In [25] the authors used region-based methods, looking for texture and color attributes in the image to segment the road region from the background. One region-based method performed segmentation by seeded region growing [26]. In 2004, Rasmussen presented an algorithm which estimates the dominant orientations with a bank of Gabor wavelets [27]. Kong et al. in their paper proposed a road segmentation approach using an Orientation Consistency Ratio for finding the dominant edges [28]. Peyman et al. suggested a method which uses only four Gabor filters together with a novel fast voting scheme to meet real-time requirements [29, 30]. In [31] a new background is computed by applying a temporal median function to previously sampled frames. A. Elgammal, D. Harwood, and L.S. Davis proposed a non-parametric model for background subtraction [32]. With five cameras and thirteen lidars, Huang, A.S., Moore, D., Antone, M. et al. [33] suggested a methodology which was later incorporated into a closed-loop controller to successfully steer an autonomous vehicle.

In the DARPA Grand Challenge robot race, Hendrik Dahlkamp demonstrated a system for recognising drivable surfaces in tough unpaved and offroad terrain conditions [34]. Sebastian Thrun, Mike Montemerlo, and Andrei Aron in their paper "Probabilistic Terrain Analysis For High-Speed Desert Driving" suggested a PTA (probabilistic terrain analysis) algorithm for terrain classification on a fast-moving robot platform [35].

III. APPROACH

A. Distortion Correction
When a camera captures the 3D world as a 2D image, it is obvious that the result will have imperfections: an image taken from a camera has many distortions in it. Our first step is to clear the image of any such distortions. There are generally two kinds of distortion in an image, namely barrel and pincushion distortion.

Real cameras create images through curved lenses, and light rays bend a bit too much or too little at the lens' edges. This produces an effect that distorts picture edges, making lines or objects look more or less curved than they really are. This is radial distortion.

Also, if the camera is not aligned properly parallel to the image plane, some image points may seem closer or farther than they actually are. This is termed tangential distortion.

We can rectify these errors by calibrating the camera and estimating the radial coefficients k1, k2, k3 and the tangential coefficients p1, p2 using OpenCV.

Formula for radial distortion correction:
x_distorted = x (1 + k1 r^2 + k2 r^4 + k3 r^6)
y_distorted = y (1 + k1 r^2 + k2 r^4 + k3 r^6)

Formula for tangential distortion correction:
x_distorted = x + [2 p1 x y + p2 (r^2 + 2 x^2)]
y_distorted = y + [p1 (r^2 + 2 y^2) + 2 p2 x y]

where (x, y) are normalized image coordinates and r^2 = x^2 + y^2.
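The distortion model above can be sketched directly in NumPy. This is a minimal illustration of the same radial-plus-tangential model that OpenCV uses; the coefficient values in the usage example are made up, and in the actual pipeline cv2.calibrateCamera and cv2.undistort would estimate and invert this model.

```python
import numpy as np

def distort_points(points, k1, k2, k3, p1, p2):
    """Apply the radial + tangential distortion model to an (N, 2)
    array of normalized image coordinates, per the formulas above."""
    x, y = points[:, 0], points[:, 1]
    r2 = x ** 2 + y ** 2
    # Radial term: 1 + k1*r^2 + k2*r^4 + k3*r^6
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Tangential terms added per the correction formulas
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    y_d = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return np.stack([x_d, y_d], axis=1)

# Illustrative (assumed) coefficients: mild barrel distortion pushes
# an off-center point outward.
pts = np.array([[0.5, 0.5]])
distorted = distort_points(pts, 0.1, 0.0, 0.0, 0.0, 0.0)
```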


B. Perspective Transform (Bird's View)
Human eyes have the tendency to make nearer things look bigger and far-away objects comparatively smaller. This is a simple definition of perspective.

To align the image appropriately, the perspective transform is a really handy function. Applying a perspective transformation warps the picture in a linear way. Here we transform the image of the road, as seen by the camera on the vehicle, into an image that appears as if the picture were taken from above, parallel to the ground. In this way we convert the 3D scene into a 2D top view, which is a lot easier to process in the later stages.

Fig- Original image of highway

We take a simple fact into consideration here: the lanes are always parallel, and in the bird's-view transformation they must remain so. Therefore, we can draw two imaginary straight lines over the lanes until they meet, and the coordinates of these lines give us the points for the required transformation.

Fig- Image after perspective transform

Hence, we get a 2D top view of the road, which is easier for further processing.

C. Edge detection & Thresholding
The next step is to perform edge detection on the image. It helps find the lane lines on the road, since they form sharp lines and can be easily identified by an edge detection process. For this we use the Canny edge detection algorithm, which uses the Sobel filter to find both the vertical and the horizontal edges in an image.

Fig- Sobel operators for vertical & horizontal edges resp.

Using this filter, we find the x and y components of the edge detection output. Finally, we add both images with an ideal threshold value to get the final result for our saturation binary.

D. Slope Threshold
By applying the saturation binary we obtain the lane lines, but with them we also get unwanted noise, which may be due to imperfections in the road texture. As Canny edge detection picks up all the edges, these unwanted lines also pop up in the output. To tackle this problem we apply a slope threshold, which takes into account only those edges that fall within a bracket of specified angles. The edge direction is given by the inverse tangent of the y gradient divided by the x gradient, i.e. arctan(Sobel-y / Sobel-x).

Using this we can remove the unwanted noise that comes with the edge detection. Again, applying an ideal threshold value results in the image shown below.

Fig-Combined Threshold.
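The slope-threshold step can be sketched in a few lines. This is a minimal NumPy illustration: the angle bracket of 0.7–1.3 radians is an assumed value, and np.gradient stands in for the Sobel filters of the actual pipeline to keep the example self-contained.

```python
import numpy as np

def slope_threshold(gray, low=0.7, high=1.3):
    """Binary mask of pixels whose gradient direction
    arctan(|Gy| / |Gx|) lies within [low, high] radians."""
    gy, gx = np.gradient(gray.astype(float))
    direction = np.arctan2(np.abs(gy), np.abs(gx))
    return ((direction >= low) & (direction <= high)).astype(np.uint8)

# A diagonal intensity ramp (direction ~ 45 deg = 0.785 rad) survives
# the threshold, while a purely horizontal ramp (0 rad) is rejected.
diag = np.add.outer(np.arange(10, dtype=float), np.arange(10, dtype=float))
mask = slope_threshold(diag)
```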

E. Gamma correction
The quality of digital photos may be compromised by the limits of image-capturing technology or the presence of a non-ideal environment. Despite significant advances in imaging technology, users' expectations of clean and relaxing sights are not always met [1]. To suitably improve the contrast of the picture, an adaptive gamma correction (AGC) is applied, with its parameters modified dynamically depending on the image information.

Fig-Original image.
Fig-After Gamma correction

F. Trying out HSL Colour Model
Till now our thresholding works fine, but it also has some shortcomings. When we encounter harsh weather conditions, or a cemented road that almost matches the white color of the lane line, the edge detection fails to differentiate the lane lines from the rest of the image. (Since we have to convert the image to its gray component for edge detection, much of the information about white lanes is lost if the road is a cemented one.)

Fig-Original Image.
Fig-Problems in gray image.

Since the H and S channels of the HSL model stay fairly consistent in shadow or excessive brightness, we explore this issue with the HSL (hue, saturation, lightness) color model. Here we discover that the saturation component is quite useful, as it preserves the lane-line information well even in harsh conditions.

The conversion from RGB to the S channel is given below, where Vmax is max(R, G, B), Vmin is min(R, G, B), and L = (Vmax + Vmin) / 2:

S = (Vmax - Vmin) / (Vmax + Vmin),      if L < 0.5
S = (Vmax - Vmin) / (2 - Vmax - Vmin),  if L >= 0.5

Fig-S channel threshold.

The S channel reliably detects lane lines of various colors, and henceforth we also calculate the threshold corresponding to this processed image so as to improve our pipeline.

G. Finding lane lines in thresholded image
A histogram is plotted over the thresholded output to find the x-coordinates of the lane lines. The histogram counts the white pixels in each column of the image (white pixels here represent the lane lines). The two x-coordinates with the most white pixels are chosen as the start of the left and right lanes at the bottom of the image, where y = 0.

Fig-Peaks showing position of lanes.

The second technique relies on knowledge of the previous frames and the position of the lane lines inside them. By looking around the last frame's fitted polynomial, additional lane-line pixels are discovered, and these new pixels are fitted with a polynomial. This approach is less computationally intensive, which speeds up lane detection on video data.

The function is designed to keep the lane lines fairly straight and to prevent their distance from each other from increasing too much. Occasionally the road edge was recognized as the left lane line in the video; when the improper line was recognized, the lane-line distances were watched, and an acceptable distance was determined to avoid this. The difference in the first polynomial coefficient was also monitored, and a suitably modest limit was set as a sanity check for similar curvature, to ensure parallel lane lines.

H. Calculate polynomial & adding line to image
We need to overlay the lane lines identified earlier. For this purpose we fit a second-degree equation to each lane line (second order, as roads can also be curved). The polynomial, calculated with numpy's polyfit function, is then superimposed on the image of the road.

I. Checking for best fit
A best fit of the lane mask is calculated by propagating the previous values of the lane matrix and combining them with the new ones using weights, such that the previous value is predominant, to maintain stability and continuity.

J. Curved Road detection
Using the fitted polynomials we can derive important information for a self-driving automobile, such as the curvature of the road. Here the lane line is described by a second-degree polynomial; substituting its derivatives into the radius-of-curvature formula

R = (1 + (y')^2)^(3/2) / |y''|

gives the radius of curvature. We generate warning signals if the radius of curvature of the lane crosses a set limit.
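The polynomial fit and curvature check described above reduce to a short sketch. Here x is fitted as a function of y (a common convention for near-vertical lane lines); the 500-unit minimum radius is an assumed illustrative threshold, not a value from the paper.

```python
import numpy as np

def fit_lane(ys, xs):
    """Fit x = a*y^2 + b*y + c through detected lane pixels
    (second order, since roads can be curved)."""
    return np.polyfit(ys, xs, 2)

def radius_of_curvature(coeffs, y):
    """R = (1 + (dx/dy)^2)^(3/2) / |d2x/dy2| for x = a*y^2 + b*y + c."""
    a, b, _ = coeffs
    return (1 + (2 * a * y + b) ** 2) ** 1.5 / abs(2 * a)

def curve_warning(coeffs, y, min_radius=500.0):
    """Flag a warning when the lane bends tighter than an assumed
    minimum radius (same units as the fitted coordinates)."""
    return radius_of_curvature(coeffs, y) < min_radius

# Synthetic lane pixels on a known gentle curve.
ys = np.linspace(0, 100, 50)
xs = 0.001 * ys ** 2 + 0.5 * ys + 100
coeffs = fit_lane(ys, xs)
```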
K. Vehicle Distance from Center
We have calculated the locations of the left and right lanes assuming the camera is mounted roughly at the middle of the windscreen. As a result, we can simply compare the bottom positions of the left and right lanes to the center of the picture frame. We can also use prior knowledge of the distance between the left and right lanes (for example, 3.7 m in the United States) to convert the offset into real-world units.

IV. RESULTS

Following the previous sections, we analyze the pipeline on different roads and in different conditions, and extend the implementation to video processing. Low lighting, extreme curves, and uneven roads presented difficulties, but we still obtain fair detection thanks to the methods used to handle such harsh conditions. We have good results on both straight and curved road images, because we specialize the mask for each of the left and right lanes, convert the color space to HSL, and use an additional erosion mask. The output of our code masks the lane area and issues the necessary warnings when the vehicle is too far from the center or when it encounters a curved road.

Fig-Output.
Fig- Position Caution and Curve warning.

V. DISCUSSION & FUTURE WORK

Driver assistance systems must have extremely low mistake rates in order to be helpful. The false alarm rate for a warning system like LDW should be very low, as excessive rates upset drivers and lead to system rejection. The exact number of false alerts that drivers are willing to accept is still under investigation [1]. In addition, road and lane detection must deal with a variety of conditions, and the look of the lane and road can alter considerably.

The current solution is based solely on a monocular camera and computer vision techniques, and it includes a hard-coded perspective transform as well as several manually set lane-line filtering parameters. This may be sufficient for good-condition driving, such as highway driving during the day with obvious lane markings, but it may fail in a variety of difficult conditions. We could choose one of two ways in the future:
- When possible, use more sensing modalities than just the monocular camera, such as stereo cameras, lidar, and radar.
- Apply machine learning techniques.
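The distance-from-center computation of Section III-K reduces to a few lines. This sketch assumes the camera sits at the image's horizontal center, as stated above; the pixel positions in the usage example are made-up values, and 3.7 m is the example lane width mentioned earlier.

```python
def vehicle_offset_m(left_x, right_x, image_width, lane_width_m=3.7):
    """Signed offset of the vehicle from the lane center in metres.

    left_x / right_x are the bottom x-positions (in pixels) of the
    fitted lane lines. Positive means the vehicle is right of center.
    """
    lane_center_px = (left_x + right_x) / 2.0
    # Pixel-to-metre scale from the known real-world lane width.
    m_per_px = lane_width_m / (right_x - left_x)
    return (image_width / 2.0 - lane_center_px) * m_per_px

# Illustrative values for a 1280-px-wide frame.
offset = vehicle_offset_m(380, 860, 1280)
```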

REFERENCES

[1] López, A., Serrat, J., Canero, C., Lumbreras, F., Graf, T.: Robust lane markings detection and road geometry computation. Int. J. Autom. Technol. 11(3), 395–407 (2010)
[2] Sivaraman, S., Trivedi, M.M.: Integrated lane and vehicle detection, localization, and tracking: a synergistic approach. IEEE Trans. Intell. Transp. Syst. 14(2), 906–917 (2013)
[3] Tapia-Espinoza, R., Torres-Torriti, M.: A comparison of gradient versus color and texture analysis for lane detection and tracking. In: 2009 6th Latin American Robotics Symposium (LARS), IEEE, pp. 1–6 (2009)
[4] Borkar, A., Hayes, M., Smith, M.T.: A novel lane detection system with efficient ground truth generation. IEEE Trans. Intell. Transp. Syst. 13(1), 365–374 (2012)
[5] McCall, J., Trivedi, M.: Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 7, 20–37 (2006)
[6] Labayrade, R., Douret, J., Laneurit, J., Chapuis, R.: A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance. IEICE Trans. Inf. Syst. E89-D, 2092–2100 (2006)
[7] Cheng, H., Jeng, B., Tseng, P., Fan, K.: Lane detection with moving vehicles in the traffic scenes. IEEE Trans. Intell. Transp. Syst. 7, 571–582 (2006)
[8] Batavia, P.H.: Driver-adaptive lane departure warning systems. CMU-RI-TR-99-25 (1999)
[9] Hofmann, U., Rieder, A., Dickmanns, E.: Radar and vision data fusion for hybrid adaptive cruise control on highways. Mach. Vis. Appl. 14(1), 42–49 (2003)
[10] Wu, S., Chiang, H., Perng, J., Chen, C., Wu, B., Lee, T.: The heterogeneous systems integration design and implementation for lane keeping on a vehicle. IEEE Trans. Intell. Transp. Syst. 9, 246–263 (2008)
[11] Gao, T., Aghajan, H.: Self lane assignment using egocentric smart mobile camera for intelligent GPS navigation. In: Workshop on Egocentric Vision, pp. 57–62 (2009)
[12] Jiang, Y., Gao, F., Xu, G.: Computer vision-based multiple-lane detection on straight road and in a curve. In: Image Analysis and Signal Processing, pp. 114–117 (2010)
[13] Huang, A.S., Moore, D., Antone, M., Olson, E., Teller, S.: Finding multiple lanes in urban road networks with vision and LIDAR. Auton. Robots 26, 103–122 (2009)
[14] Lipski, C., Scholz, B., Berger, K., Linz, C., Stich, T., Magnor, M.: A fast and robust approach to lane marking detection and lane tracking. In: Southwest Symposium on Image Analysis and Interpretation, pp. 57–60 (2008)
[15] Kornhauser, A.L., et al.: DARPA Urban Challenge Princeton University Technical Paper (2007). http://www.stanford.edu/~jmayer/papers/darpa07.pdf
[16] Urmson, C., et al.: Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robot. 25(8), 425–466 (2008)
[17] Bacha, A., et al.: Odin: Team VictorTango's entry in the DARPA Urban Challenge. J. Field Robot. 25(8), 467–492 (2008)
[18] Rasmussen, C., Korah, T.: On-vehicle and aerial texture analysis for vision-based desert road following. In: CVPR Workshop on Machine Vision for Intelligent Vehicles, vol. III, p. 66 (2005)
[19] Kong, H., Audibert, J., Ponce, J.: Vanishing point detection for road detection. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 96–103 (2009)
[20] Broggi, A., Cattani, S.: An agent based evolutionary approach to path detection for off-road vehicle guidance. Pattern Recognit. Lett. 27, 1164–1173 (2006)
[21] Alon, Y., Ferencz, A., Shashua, A.: Off-road path following using region classification and geometric projection constraints. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. I, pp. 689–696 (2006)
[22] Nefian, A., Bradski, G.: Detection of drivable corridors for off-road autonomous navigation. In: International Conference on Image Processing, pp. 3025–3028 (2006)
[23] Katramados, I., Crumpler, S., Breckon, T.: Real-time traversable surface detection by colour space fusion and temporal analysis. In: Computer Vision Systems, pp. 265–274 (2009)
[24] Du, Xinxin, and Kok Tan: Vision-based approach towards lane line detection and vehicle localization. Machine Vision and Applications 27(2), 175–191 (2016)
[25] Qin, H., Zain, J.M., Ma, X., Hai, T.: Scene segmentation based on seeded region growing for foreground detection. In: 2010 Sixth International Conference on Natural Computation, vol. 7, pp. 3619–3623, IEEE (2010)
[26] Adams, R., Bischof, L.: Seeded region growing. IEEE Transactions on Pattern Analysis and Machine Intelligence 16(6), 641–647 (1994)
[27] Rasmussen, C.: Grouping dominant orientations for ill-structured road following. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I-470–I-477 (2004)
[28] Kong, H., Audibert, J., Ponce, J.: General road detection from a single image. IEEE Transactions on Image Processing 19(8), 2211–2220 (2010)
[29] Moghadam, P., Starzyk, J., Wijesoma, W.: Fast vanishing-point detection in unstructured environments. IEEE Transactions on Image Processing 21(1), 425–430 (2012)
[30] Moghadam, P., Dong, J.F.: Road direction detection based on vanishing-point tracking. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1553–1560, IEEE (2012)
[31] Cucchiara, R., Grana, C., Piccardi, M., Prati, A.: Statistical and knowledge-based moving object detection in traffic scenes. In: Proceedings of IEEE Int'l Conference on Intelligent Transportation Systems, pp. 27–32 (2000)
[32] Elgammal, A., Harwood, D., Davis, L.S.: Non-parametric model for background subtraction. In: Proceedings of IEEE ICCV FRAME-RATE Workshop (1999)
[33] Huang, A.S., Moore, D., Antone, M., et al.: Finding multiple lanes in urban road networks with vision and lidar. Auton. Robots 26, 103–122 (2009)
[34] http://www.roboticsproceedings.org/rss02/p05.pdf
[35] http://www.roboticsproceedings.org/rss02/p21.pdf
