Vehicle Monitoring On Highways
III. APPROACH
A. Distortion Correction
When a camera captures the 3D world in a 2D
image, imperfections are inevitable: any image
taken with a camera contains distortions. Our first
step is therefore to remove such distortions from
the image. There are generally two kinds of
distortion in an image, namely barrel distortion
and pincushion distortion.
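Both kinds of distortion are commonly described by a polynomial radial model. The following is a minimal numpy sketch of the single-coefficient version; the coefficient `k1` and the sample points are illustrative assumptions, and in practice the coefficients are estimated by a calibration routine (e.g. OpenCV's `cv2.calibrateCamera`) and the correction applied by `cv2.undistort`:

```python
import numpy as np

def distort(points, k1):
    # Single-coefficient radial model: x_d = x * (1 + k1 * r^2),
    # with r^2 = x^2 + y^2 in normalized image coordinates.
    # k1 < 0 pulls points toward the centre (barrel distortion);
    # k1 > 0 pushes them outward (pincushion distortion).
    r2 = np.sum(points ** 2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2)

def undistort(points, k1, iters=10):
    # Invert the model by fixed-point iteration: repeatedly divide
    # the distorted coordinates by the radial factor evaluated at
    # the current undistorted estimate.
    undist = points.copy()
    for _ in range(iters):
        r2 = np.sum(undist ** 2, axis=1, keepdims=True)
        undist = points / (1.0 + k1 * r2)
    return undist

pts = np.array([[0.5, 0.5], [0.2, -0.3]])   # illustrative normalized points
barrel = distort(pts, k1=-0.2)              # points pulled toward the centre
recovered = undistort(barrel, k1=-0.2)
print(np.allclose(recovered, pts, atol=1e-6))  # → True
```

The same per-pixel inversion is what an undistortion routine performs before any lane-detection processing runs on the frame.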
Fig-Combined Threshold.
E. Gamma correction
Fig-Original image.
The quality of digital photographs may be
compromised by the limits of image-capturing
technology or by a non-ideal environment. Despite
significant advances in imaging technology, users'
expectations of clean, visually pleasing images are
not always met [1].
To suitably improve the contrast of the
picture, an adaptive gamma correction (AGC) is
proposed, with the parameters of AGC modified
dynamically depending on the image information.
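A common way to make the correction adaptive is to derive the exponent from the image's own statistics. The sketch below uses one simple heuristic, choosing gamma from the mean brightness so that mid-tones land near 0.5; this is an illustrative assumption, not necessarily the exact AGC variant used here:

```python
import numpy as np

def adaptive_gamma_correct(img):
    # img: float array scaled to [0, 1]. Choose gamma so that the
    # mean brightness maps to 0.5: gamma = log(0.5) / log(mean).
    # A dark image (mean < 0.5) yields gamma < 1 and is brightened;
    # a bright image yields gamma > 1 and is darkened.
    mean = float(np.clip(img.mean(), 1e-6, 1 - 1e-6))
    gamma = np.log(0.5) / np.log(mean)
    return np.clip(img ** gamma, 0.0, 1.0)

dark = np.full((4, 4), 0.2)          # illustrative underexposed frame
out = adaptive_gamma_correct(dark)
print(round(float(out.mean()), 3))   # → 0.5
```

Because the exponent is recomputed per frame, the same routine brightens underexposed road scenes and tones down overexposed ones without manual tuning.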
V. DISCUSSION & FUTURE WORK
Driver assistance systems must have extremely
low error rates in order to be helpful. The
false-alarm rate of a warning system such as LDW
should be very low, as excessive rates upset drivers
and lead to rejection of the system. The exact
number of false alerts that drivers are willing to
accept is still under investigation [1]. In addition,
there is a variety of conditions that road and lane
detection must deal with, and the appearance of the
lane and the road can vary considerably, as shown
in the accompanying diagram.
The current solution is solely based on a
monocular camera and computer vision
techniques, and it includes a hard-coded
Fig-Output.
Fig-Position Caution and Curve warning.
REFERENCES
[1] López, A., Serrat, J., Canero, C., Lumbreras, F., Graf, T.: Robust lane markings detection and road geometry computation. Int. J. Autom. Technol. 11(3), 395–407 (2010)
[2] Sivaraman, S., Trivedi, M.M.: Integrated lane and vehicle detection, localization, and tracking: a synergistic approach. IEEE Trans. Intell. Transp. Syst. 14(2), 906–917 (2013)
[3] Tapia-Espinoza, R., Torres-Torriti, M.: A comparison of gradient versus color and texture analysis for lane detection and tracking. In: Robotics Symposium (LARS), 2009 6th Latin American, IEEE, pp. 1–6 (2009)
[4] Borkar, A., Hayes, M., Smith, M.T.: A novel lane detection system with efficient ground truth generation. IEEE Trans. Intell. Transp. Syst. 13(1), 365–374 (2012)
[5] McCall, J., Trivedi, M.: Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation. IEEE Trans. Intell. Transp. Syst. 7, 20–37 (2006)
[6] Labayrade, R., Douret, J., Laneurit, J., Chapuis, R.: A reliable and robust lane detection system based on the parallel use of three algorithms for driving safety assistance. IEICE Trans. Inf. Syst. E89-D, 2092–2100 (2006)
[7] Cheng, H., Jeng, B., Tseng, P., Fan, K.: Lane detection with moving vehicles in the traffic scenes. IEEE Trans. Intell. Transp. Syst. 7, 571–582 (2006)
[8] Batavia, P.H.: Driver-adaptive lane departure warning systems. CMU-RI-TR-99-25 (1999)
[9] Hofmann, U., Rieder, A., Dickmanns, E.: Radar and vision data fusion for hybrid adaptive cruise control on highways. Mach. Vis. Appl. 14(1), 42–49 (2003)
[10] Wu, S., Chiang, H., Perng, J., Chen, C., Wu, B., Lee, T.: The heterogeneous systems integration design and implementation for lane keeping on a vehicle. IEEE Trans. Intell. Transp. Syst. 9, 246–263 (2008)
[11] Gao, T., Aghajan, H.: Self lane assignment using egocentric smart mobile camera for intelligent GPS navigation. In: Workshop on Egocentric Vision, pp. 57–62 (2009)
[12] Jiang, Y., Gao, F., Xu, G.: Computer vision-based multiple-lane detection on straight road and in a curve. In: Image Analysis and Signal Processing, pp. 114–117 (2010)
[13] Huang, A.S., Moore, D., Antone, M., Olson, E., Teller, S.: Finding multiple lanes in urban road networks with vision and LIDAR. Auton. Robots 26, 103–122 (2009)
[14] Lipski, C., Scholz, B., Berger, K., Linz, C., Stich, T., Magnor, M.: A fast and robust approach to lane marking detection and lane tracking. In: Southwest Symposium on Image Analysis and Interpretation, pp. 57–60 (2008)
[15] Kornhauser, A.L., et al.: DARPA Urban Challenge Princeton University Technical Paper (2007). http://www.stanford.edu/~jmayer/papers/darpa07.pdf
[16] Urmson, C., et al.: Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robot. 25(8), 425–466 (2008)
[17] Bacha, A., et al.: Odin: Team VictorTango entry in the DARPA Urban Challenge. J. Field Robot. 25(8), 467–492 (2008)
[18] Rasmussen, C., Korah, T.: On-vehicle and aerial texture analysis for vision-based desert road following. In: CVPR Workshop on Machine Vision for Intelligent Vehicles, vol. III, p. 66 (2005)
[19] Kong, H., Audibert, J., Ponce, J.: Vanishing point detection for road detection. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 96–103 (2009)
[20] Broggi, A., Cattani, S.: An agent based evolutionary approach to path detection for off-road vehicle guidance. Pattern Recognit. Lett. 27, 1164–1173 (2006)
[21] Alon, Y., Ferencz, A., Shashua, A.: Off-road path following using region classification and geometric projection constraints. In: IEEE Conference on Computer Vision and Pattern Recognition, vol. I, pp. 689–696 (2006)
[22] Nefian, A., Bradski, G.: Detection of drivable corridors for off-road autonomous navigation. In: International Conference on Image Processing, pp. 3025–3028 (2006)
[23] Katramados, I., Crumpler, S., Breckon, T.: Real-time traversable surface detection by colour space fusion and temporal analysis. In: Computer Vision Systems, pp. 265–274 (2009)
[24] Du, X., Tan, K.: Vision-based approach towards lane line detection and vehicle localization. Machine Vision and Applications 27(2), 175–191 (2016)
[25] Qin, H., Zain, J.M., Ma, X., Hai, T.: Scene segmentation based on seeded region growing for foreground detection. In: 2010 Sixth International Conference on Natural Computation, vol. 7, pp. 3619–3623. IEEE (2010)
[26] Adams, R., Bischof, L.: Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 16(6), 641–647 (1994)
[27] Rasmussen, C.: Grouping dominant orientations for ill-structured road following. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I-470–I-477 (2004)
[28] Kong, H., Audibert, J., Ponce, J.: General road detection from a single image. IEEE Trans. Image Process. 19(8), 2211–2220 (2010)
[29] Moghadam, P., Starzyk, J., Wijesoma, W.: Fast vanishing-point detection in unstructured environments. IEEE Trans. Image Process. 21(1), 425–430 (2012)
[30] Moghadam, P., Dong, J.F.: Road direction detection based on vanishing-point tracking. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1553–1560. IEEE (2012)
[31] Cucchiara, R., Grana, C., Piccardi, M., Prati, A.: Statistical and knowledge-based moving object detection in traffic scene. In: Proceedings of IEEE Int'l Conference on Intelligent Transportation Systems, pp. 27–32 (2000)
[32] Elgammal, A., Harwood, D., Davis, L.S.: Non-parametric model for background subtraction. In: Proceedings of IEEE ICCV'99 FRAME-RATE Workshop (1999)
[33] Huang, A.S., Moore, D., Antone, M., et al.: Finding multiple lanes in urban road networks with vision and lidar. Auton. Robots 26, 103–122 (2009)
[34] http://www.roboticsproceedings.org/rss02/p05.pdf
[35] http://www.roboticsproceedings.org/rss02/p21.pdf
[36]