Traffic Light Detection and Recognition for Autonomous Vehicles
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Abstract
Traffic light detection and recognition is essential for autonomous driving in urban environments. A camera-based algorithm for real-time, robust traffic light detection and recognition, designed especially for autonomous vehicles, is proposed. Although current traffic light recognition algorithms operate reliably, most of them are designed for detection from a fixed position, and their effectiveness on autonomous vehicles under real-world conditions is still limited. Some methods achieve high accuracy on autonomous vehicles, but they cannot work without the aid of a high-precision prior map. The authors present a camera-based algorithm for this problem. The image processing flow can be divided into three steps: pre-processing, detection and recognition. First, the red-green-blue (RGB) color space is converted to hue-saturation-value (HSV) as the main content of pre-processing. In the detection step, an a priori color threshold method is used for initial filtering; meanwhile, prior knowledge is used to scan the scene in order to quickly establish candidate regions. For recognition, this article uses histogram of oriented gradients (HOG) features together with a support vector machine (SVM) to recognize the state of a traffic light. The proposed system was evaluated on our autonomous vehicle. With voting schemes, the proposed method provides sufficient accuracy for autonomous vehicles in urban environments.
Keywords autonomous vehicle, traffic light detection and recognition, histogram of oriented gradients
Received date: 14-10-2014
Corresponding author: Guo Mu, E-mail: guom08@gmail.com
DOI: 10.1016/S1005-8885(15)60624-0
The Journal of China Universities of Posts and Telecommunications, 2015

1 Introduction

Over the past few decades, many attempts have been made at autonomous vehicles. Nowadays, driving on highways with autonomous vehicles has become more and more reliable [1], while fully autonomous driving in real urban environments is still a tough and challenging task [2]. Robust detection and recognition of traffic lights is essential for an autonomous vehicle to take appropriate actions at intersections in urban environments. However, robust detection of traffic lights is not easy: an image may contain a dreadful mess of objects whose colors are similar to those of traffic lights, and the shape of a traffic light is so simple that it is hard to extract sufficient features [3]. Worse still, traffic lights come in a variety of types: some are arranged horizontally while some are vertical, and some are composed of only circles while some include arrows. In cities like Tianjin, China, the traffic lights look like progress bars, with their color indicating stop and go and their dynamic length indicating the remaining time. Different types of traffic lights are shown in Fig. 1.

Fig. 1 Various types of traffic lights

Vision is virtually the only way to detect the state of traffic lights, and almost all experimental autonomous vehicles for urban environments are equipped with cameras used to detect and recognize them. Some traffic light recognition algorithms have been proposed in recent years. Ref. [4] used spotlight detection and template matching methods to identify traffic lights,
and their result is fairly accurate, but the method is designed for non-real-time applications. Refs. [5–6] also described methods for detecting traffic lights, but these methods are based on fixed cameras; when applied to autonomous vehicles, where the on-vehicle camera moves as the vehicle goes, they lose efficiency. Ref. [7] used statistical information in HSV color space to obtain color thresholds, used those thresholds for image segmentation, and then applied a machine learning algorithm for classification. Ref. [8] suggested a method that can detect traffic lights at long distances: they search for the centers of traffic lights with a Gaussian mask and verify traffic light candidates using a suggested existence-weight map. However, these two methods can only detect and recognize circular traffic lights. Besides, the attempts mentioned above detect and classify traffic lights from onboard images alone, without annotated prior maps. Ref. [9] designed methods for automatically mapping the three-dimensional positions of traffic lights, and mapped more than four thousand traffic lights to create a prior map. Ref. [10] presented a passive camera-based pipeline for traffic light detection and recognition, which is used by Google driverless cars and has passed several tests in real urban environments. Both methods achieved high accuracy based on prior knowledge of traffic light locations; their main drawback is that they need vehicle localization and prior acquisition of traffic light positions. In this article, the authors propose a new approach to detect and recognize traffic lights in vertical arrangement; both circular traffic lights and those with arrows are handled. This approach is designed for autonomous vehicles, so all processing is done in real time with the on-vehicle camera.

The rest of this article is organized as follows. In Sect. 2, the system architecture of the adopted autonomous vehicle is described. The main steps of the detection and recognition system are detailed in Sect. 3. Experimental results are shown in Sect. 4. Conclusions and future work are given in Sect. 5.

2 System and vehicle

In the competition 'Future Challenge 2012' in China, two types of contests were held: 7 km of urban environment and 16 km of rural road. The autonomous vehicle we employed won both contests. On November 24th, 2012, we successfully completed the Beijing-Tianjin highway test without any manual intervention. The total length of the highway test section is about 112 km and the test took 85 min, at an average speed of 79.06 km/h. During the test, the autonomous vehicle accomplished 12 overtakings and 36 lane changes, and reached a maximum speed of 105 km/h.

2.1 Hardware implementation

The vehicle used in this research is a modified Hyundai Tucson, shown in Fig. 2. The throttle, shifter, brake and steering system were rebuilt so that we can control them by computer. We can also operate the turn and brake signals by command to make the vehicle act like a real human driver. An emergency button is provided for manual takeover, so the driver can take control of the vehicle at any time.

Fig. 2 Autonomous vehicle of our team

Currently, three lidars, one radar and three cameras are mounted on the vehicle for environment perception. Two SICK LD-LRS lidar scanners are set up at the front and on top of the vehicle to detect other vehicles, pedestrians and other obstacles ahead. One IBEO Lux-4 lidar is mounted on top of the vehicle; we use it to detect the bounds of the road. One millimeter-wave radar is mounted at the rear of the vehicle to detect obstacles in the rear area. Three cameras are mounted at the front of the car, from left to right, all fixed inside the car to keep off dust, rain and other interference. We use these cameras to detect traffic lights, traffic signs, lane-marks and vehicles' movement trends. The sensors' mounting positions, functions and coverage areas are shown in Table 1.

Three on-board 4-core i7 computers provide sufficient processing power: one server runs our vision algorithms while the others handle the lidar and radar algorithms, decision-making, control and low-level communication.
Electric power is provided by two banks of lead-acid cells.

Table 1 Sensors' mounting positions, functions and coverage areas
  Sensor type       Mounting position   Function                Coverage
  Camera1           Front               Traffic lights / signs  30 m~90 m, 24°
  Camera2           Front               Lane-marks              4 m~30 m, 72°
  Camera3           Front               Obstacles               4 m~120 m, 72°
  SICK lidar1       Front               Front obstacles         0 m~50 m, 180°
  SICK lidar2       Top                 Front obstacles         0 m~50 m, 180°
  IBEO Lux4         Top                 Road boundary           0 m~200 m, 72°
  Millimeter radar  Rear                Rear obstacles          0 m~200 m, 72°

2.2 Software architecture

The development of autonomous vehicles involves mechanical and electrical reconstruction, environment perception, decision making and automatic control. Whereas mechanical and electrical reconstruction is the foundation of the project and belongs to the hardware portion, the other three sections are linked serially and constitute the software architecture. The environment perception section includes information gathering, image processing, and radar and lidar signal processing. The decision making section comprehensively analyzes the results provided by environment perception and makes appropriate decisions. The last section, automatic control, fulfills the decision made by the previous section by adjusting the posture of the vehicle with an appropriate control method. Fig. 3 shows the information processing flow of the autonomous vehicle. Traffic light detection and recognition is designed as part of the image processing module in the environment perception section, alongside detection of lane-marks and traffic signs.

3 Proposed algorithm

The authors used an off-the-shelf camera (AVT Pike F-100C) as the vision sensor for detecting traffic lights. The camera is mounted behind the windshield of the vehicle. We labeled about 5 000 sample images of 13 categories for offline training. Each sample image is converted to a high-dimensional feature vector by a HOG descriptor, and these samples are used to train a hierarchical SVM classifier. For online processing, the pipeline is divided into three phases: pre-processing, detection and recognition. The pipeline of the proposed algorithm is shown in Fig. 4.
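The pre-processing (RGB-to-HSV conversion) and initial color-threshold filtering described above can be sketched as follows. This is a minimal stdlib-only illustration, not the authors' implementation: the HSV threshold values are assumptions, since the paper does not state its calibrated thresholds, and a real pipeline would process whole images (e.g. with OpenCV) rather than single pixels.

```python
import colorsys

# Illustrative HSV thresholds for the initial color filtering.
# These ranges are assumed for demonstration only.
RED_HUE = [(0.00, 0.05), (0.95, 1.00)]   # red hue wraps around 0
GREEN_HUE = [(0.25, 0.50)]
MIN_SAT, MIN_VAL = 0.6, 0.5              # reject washed-out or dark pixels

def classify_pixel(r, g, b):
    """Map an 8-bit RGB pixel to 'red', 'green', or None after HSV filtering."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < MIN_SAT or v < MIN_VAL:       # too gray or too dark to be a lamp
        return None
    if any(lo <= h <= hi for lo, hi in RED_HUE):
        return 'red'
    if any(lo <= h <= hi for lo, hi in GREEN_HUE):
        return 'green'
    return None
```

In the paper, the pixels surviving this kind of threshold pass are grouped into candidate regions, which prior knowledge about where lights appear in the scene then narrows further before recognition.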
and label the traffic light pixel region, aligned to the location and heading angle of the host vehicle. When the vehicle comes to the same intersection again, accurate regions for the traffic lights can be obtained by linear interpolation based on real-time data from the high-precision INS.
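The interpolation step above can be sketched as follows. Parameterizing the vehicle pose by a single scalar distance along the approach is an assumption made here for illustration; the paper interpolates from the full INS location and heading, and the function and variable names are hypothetical.

```python
def interp_bbox(pose_a, bbox_a, pose_b, bbox_b, pose):
    """Linearly interpolate a labeled traffic light region between two
    annotated keyframes.

    pose_a, pose_b, pose: scalar positions along the approach (e.g. metres
    to the stop line, from the INS) -- an assumed parameterization.
    bbox_a, bbox_b: (x, y, w, h) pixel regions labeled at those poses.
    """
    t = (pose - pose_a) / (pose_b - pose_a)      # interpolation fraction
    return tuple(a + t * (b - a) for a, b in zip(bbox_a, bbox_b))
```

For example, halfway between a region labeled at 0 m and one labeled at 10 m, the sketch returns the component-wise midpoint of the two boxes.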
3.3 Recognition
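The details of the recognition step fall outside the extracted text, so the following is only a minimal sketch of the HOG-plus-SVM recognition and frame voting named in the abstract: a toy single-cell orientation histogram standing in for the full HOG descriptor [11], the decision function of an already-trained linear SVM (weights supplied by the caller), and a rolling majority vote over frames. All names and values are illustrative, not the authors' implementation.

```python
import math
from collections import Counter, deque

def hog_1cell(gray, bins=9):
    """Toy single-cell HOG: histogram of unsigned gradient orientations,
    weighted by gradient magnitude and L2-normalized. The real descriptor
    uses many cells with block normalization [11]."""
    h = [0.0] * bins
    rows, cols = len(gray), len(gray[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]      # central differences
            gy = gray[y + 1][x] - gray[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi        # unsigned orientation
            h[min(int(ang / math.pi * bins), bins - 1)] += mag
    norm = math.sqrt(sum(v * v for v in h)) or 1.0
    return [v / norm for v in h]

def svm_predict(feature, weights, bias):
    """Decision function of a trained linear SVM; weights and bias would
    come from offline training (values here are placeholders)."""
    return sum(w * f for w, f in zip(weights, feature)) + bias > 0

votes = deque(maxlen=9)   # per-frame labels from the last few frames

def voted_state(frame_label):
    """Majority vote over recent frames to stabilize the reported state."""
    votes.append(frame_label)
    return Counter(votes).most_common(1)[0][0]
```

The voting wrapper is why a per-frame detection rate above 80% can still yield a dependable stop/go decision: a single misclassified frame is outvoted by its neighbors.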
4 Experiments
It can be concluded that at distances of less than 40 m, we can distinguish arrow lights from circular ones at a detection rate higher than 80%. With a voting scheme, this accuracy is sufficient for an autonomous vehicle to make decisions. At distances beyond 50 m, there is no distinct difference between circular and arrow traffic lights in the image. As suggested above, traffic lights can be detected at up to 120 m, but the average guaranteed detection distance is about 40 m, within which we can detect the status of the intersection and distinguish arrows from circles with relatively high accuracy.

The detection results of the proposed method at distances of less than 40 m, over all times of day, are shown in Table 2. Precision and recall in the 'stop' and 'go' states are shown in Fig. 8 and Fig. 9, respectively.

Table 2 Detection and recognition results of the proposed method within 40 m
  Type                 Correct  Missing  False alarm
  Circular red         684      35       43
  Circular green       921      64       14
  Left arrow red       102      17       1
  Left arrow green     184      22       0
  Forward arrow red    84       12       2
  Forward arrow green  204      19       1
  Right arrow red      34       6        3
  Right arrow green    264      31       2

Fig. 8 Precision and recall in 'stop' status

Fig. 9 Precision and recall in 'go' status

5 Conclusions and future work

A camera-based algorithm for real-time, robust traffic light detection and recognition was proposed. This algorithm is designed mainly for autonomous vehicles. Experiments show that our algorithm performs well in accurately detecting targets and in determining the distance and time to those targets. However, the current method does have some drawbacks. First, it performs well in the daytime but not as well at night: the false alarm rate increases at night due to greater light interference. Second, while the method can detect both circular traffic lights and those with arrows, only the classical suspended, vertical traffic lights were detected. Detection and recognition of more types of traffic lights will be an important area for future work.

Acknowledgements

This work was supported by the Natural Basic Research Program of China (91120306, 61203366).

References

1. Luettel T, Himmelsbach M, Wuensche H J. Autonomous ground vehicles—Concepts and a path to the future. Proceedings of the IEEE, 2012, 100 (Special Centennial Issue): 1831−1839
2. Levinson J, Askeland J, Becker J, et al. Towards fully autonomous driving: Systems and algorithms. Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IVS'11), Jun 5−9, 2011, Baden-Baden, Germany. Piscataway, NJ, USA: IEEE, 2011: 163−168
3. Buch N, Velastin S A, Orwell J. A review of computer vision techniques for the analysis of urban traffic. IEEE Transactions on Intelligent Transportation Systems, 2011, 12(3): 920−939
4. de Charette R, Nashashibi F. Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates. Proceedings of the 2009 IEEE Intelligent Vehicles Symposium (IVS'09), Jun 3−5, 2009, Xi'an, China. Piscataway, NJ, USA: IEEE, 2009: 358−363
5. Chung Y C, Wang J M, Chen S W. A vision-based traffic light detection system at intersections. Journal of Taiwan Normal University: Mathematics, Science and Technology, 2002, 47(1): 67−86
6. Yung N H C, Lai A H S. An effective video analysis method for detecting red light runners. IEEE Transactions on Vehicular Technology, 2001, 50(4): 1074−1084
7. Gong J W, Jiang Y H, Xiong G M, et al. The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles. Proceedings of the 2010 IEEE Intelligent Vehicles Symposium (IVS'10), Jun 21−24, 2010, San Diego, CA, USA. Piscataway, NJ, USA: IEEE, 2010: 431−435
8. Hwang T H, Joo I H, Cho S I. Detection of traffic lights for vision-based car navigation system. Advances in Image and Video Technology: Proceedings of the 1st Pacific Rim Symposium (PSIVT'06), Dec 10−13, 2006, Hsinchu, China. LNCS 4319. Berlin, Germany: Springer, 2006: 682−691
9. Fairfield N, Urmson C. Traffic light mapping and detection. Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA'11), May 9−13, 2011, Shanghai, China. Piscataway, NJ, USA: IEEE, 2011: 5421−5426
10. Levinson J, Askeland J, Dolson J, et al. Traffic light mapping, localization, and state detection for autonomous vehicles. Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA'11), May 9−13, 2011, Shanghai, China. Piscataway, NJ, USA: IEEE, 2011: 5784−5791
11. Dalal N, Triggs B. Histograms of oriented gradients for human detection. Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05): Vol 1, Jun 20−26, 2005, San Diego, CA, USA. Los Alamitos, CA, USA: IEEE Computer Society, 2005: 886−893