Robust Traffic Sign Recognition and Tracking for Advanced Driver Assistance Systems
environments. Points of interest are identified in both the image and scale dimensions using the AGAST score. The exact location and true scale of each keypoint are then obtained in the continuous domain via quadratic function fitting.

To extract the descriptor, a sampling pattern is constructed, consisting of points lying on appropriately scaled concentric circles in the neighborhood of the keypoint. The feature direction is determined in order to retrieve a rotation-normalized descriptor. Finally, the binary BRISK descriptor is assembled from pair-wise brightness comparison results. In our experiments, instead of the pixel-wise comparison, we employ a 9×9 patch-wise comparison to reject noise. Fig. 7 shows the process of AGAST keypoint detection in scale space. Fig. 8 shows the modified BRISK sampling pattern with N = 81 patches.

Figure 9. BRISK feature matching result. (a) Original image pair for matching. (b) Matching result: red circles are the detected AGAST keypoints, and green lines are the correctly matched pairs.
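The 9×9 patch-wise comparison described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names are our own, and the sampling pattern and comparison pairs are passed in rather than being the actual BRISK concentric-circle pattern.

```python
import numpy as np

def patch_mean(img, cx, cy, size=9):
    """Mean intensity of a size x size patch centred at (cx, cy).

    Patch-wise averaging (9x9 here, as in the modified descriptor)
    smooths out pixel noise before the brightness comparison.
    """
    h = size // 2
    patch = img[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return patch.mean()

def binary_descriptor(img, sample_points, pairs, patch_size=9):
    """Assemble a binary descriptor from pair-wise brightness tests.

    sample_points: (x, y) pattern points around the keypoint.
    pairs: (i, j) index pairs of pattern points to compare.
    Each bit is 1 if patch i is brighter than patch j.
    """
    means = [patch_mean(img, x, y, patch_size) for x, y in sample_points]
    return np.array([1 if means[i] > means[j] else 0 for i, j in pairs],
                    dtype=np.uint8)
```

Replacing a single-pixel test with a patch-mean test makes each bit depend on 81 pixels instead of one, so isolated sensor noise is far less likely to flip it.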
VI. EXPERIMENTAL RESULTS

Our experimental data consist of more than two hours of video sequences acquired from a video camera mounted on a moving vehicle. The sequences contain 649 of the Chinese traffic signs listed in Table I, captured on both highways and in urban environments, and involve different illumination conditions as well as other effects such as rotation, occlusion and varying vehicle velocities. The processing computer has a 2.8 GHz CPU and 3 GB of memory, and the software is built with Visual C++ 2008 and OpenCV 2.3.

A. The Experiment of Traffic Signs Detection

We evaluate the performance of our system with and without AWB; the compared detection rates are shown in Table III. The test set comprises 1396 images captured in different driving conditions, containing 1643 traffic signs in total.

TABLE III. THE DETECTION RATES OF THE COMPARED EXPERIMENT

System          | with AWB           | without AWB
Detection Rate  | 93.77% (1540/1643) | 73.17% (1202/1643)

B. The Experiment of Traffic Signs Matching

We compare our traffic sign matching algorithm with the most common features, such as SIFT, SURF, ORB and BRIEF (as implemented in OpenCV 2.3), in three respects: scale-invariant performance, rotation-invariant performance and computation time. The results, given in Fig. 11(a)-(b), demonstrate that our matching algorithm exhibits scale- and rotation-invariant performance competitive with SIFT and SURF, while its detection and descriptor computation is typically an order of magnitude faster than SURF, as shown in Table IV.

C. The Experiment of Traffic Signs Tracking

In the urban-environment experiments, once the detection and recognition of a traffic sign have been confirmed in three successive frames, the initial location and the recognition result are handed to the tracking module. Fig. 12 shows the tracking results of two individual video sequences with our method; the top-left corner of each frame shows the target area, with the trajectory drawn in red. From Fig. 12(a)-(c) we can see that the speed limit sign undergoes large scale changes, varying illumination conditions and motion blur. The forbidden sign in Fig. 12(d)-(f) undergoes large appearance changes, perspective distortion, vehicle vibration and, again, large scale changes. Our system handles all of these problems well.

VII. CONCLUSION

The proposed method achieves excellent traffic sign detection, recognition and tracking results in challenging driving environments with large scale changes, varying illumination conditions, motion blur, perspective distortion, etc. The main contribution of our approach is the first use of BRISK-feature-based template matching and the TLD framework in the area of traffic sign recognition and tracking. Future work is to implement the algorithm on embedded hardware platforms such as the TI DaVinci DSP or an FPGA.
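The three-successive-frame confirmation that gates the hand-off from detection to tracking (Section VI-C) can be sketched as follows. The class and method names are our own, not from the paper; this only illustrates the gating logic.

```python
class DetectionConfirmer:
    """Hand a detection off to the tracker only after the same sign
    class has been confirmed in three successive frames.

    Illustrative sketch of the detection-to-tracking hand-off; names
    and structure are assumptions, not the paper's implementation.
    """
    REQUIRED = 3  # successive confirmations needed before hand-off

    def __init__(self):
        self.last_label = None
        self.streak = 0

    def update(self, label, bbox):
        """Feed one frame's recognition result.

        label: recognised sign class, or None if nothing was detected.
        bbox:  bounding box of the detection in this frame.
        Returns (label, bbox) once confirmed, else None.
        """
        if label is not None and label == self.last_label:
            self.streak += 1
        else:
            self.streak = 1 if label is not None else 0
        self.last_label = label
        if self.streak >= self.REQUIRED:
            # initial location + recognition result go to the tracker
            return (label, bbox)
        return None
```

Requiring agreement over several frames suppresses one-frame false positives before the tracker is initialised on them.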
Figure 11. Comparison of SIFT, SURF, ORB, BRIEF and our algorithm: (a) scale invariance, correct matches (%) versus scale (0.4 to 2); (b) rotation invariance, matching rate (%) versus rotation angle (0 to 350).
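The scale-invariance curves in Fig. 11(a) rest on counting matches that are consistent with a known synthetic transform. A minimal sketch of that metric for a pure scaling, where the function name and the pixel tolerance are our own assumptions rather than the paper's protocol:

```python
import math

def correct_match_rate(matches, scale, tol=3.0):
    """Percentage of correct matches under a known synthetic scaling.

    matches: list of ((x1, y1), (x2, y2)) matched keypoint pairs,
    where image 2 is image 1 rescaled by `scale` about the origin.
    A match counts as correct when the first point, mapped by the
    ground-truth scale, lands within `tol` pixels of its partner.
    """
    if not matches:
        return 0.0
    correct = 0
    for (x1, y1), (x2, y2) in matches:
        if math.hypot(scale * x1 - x2, scale * y1 - y2) <= tol:
            correct += 1
    return 100.0 * correct / len(matches)
```

Sweeping `scale` over a range such as 0.4 to 2 and plotting this rate for each matcher yields curves of the kind shown in Fig. 11(a).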
Figure 12. Tracking results: (a) frame 28, (b) frame 75, (c) frame 134.