
A Novel Fuzzy Approach for Shape Determination of Traffic Signs

Hasan Fleyeh

Department of Computer Engineering, Dalarna University, Sweden


Transport Research Institute, Napier University, Edinburgh, Scotland
hfl@du.se

Abstract. In this paper, a novel fuzzy approach is developed to determine the shape of traffic signs. More than 1600 images of traffic signs were collected in different light conditions by a digital camera mounted in a car and used for testing this approach. Every RGB image was converted into the HSV colour space and segmented by using a set of fuzzy rules depending on the hue and saturation values of each pixel in the HSV colour space. The fuzzy rules are used to extract the colours of the road signs. Objects in each segmented image are labelled and tested for the presence of a probable sign. All small objects under a certain threshold are discarded, and the remaining objects are tested by a fuzzy shape recognizer which invokes another set of fuzzy rules. Four shape measures are used to decide the shape of the sign: rectangularity, triangularity, ellipticity, and the new shape measure, octagonality.

1 Introduction

Road and traffic sign recognition is one of the important fields in Intelligent Transport Systems (ITS). This is due to the importance of road signs and traffic signals in daily life. They define a visual language that can be interpreted by drivers. They represent the current traffic situation on the road, show the dangers and difficulties around the drivers, give warnings, and help with navigation by providing useful information that makes driving safe and convenient [1, 2].
Human visual perception abilities depend on the individual's physical and mental condition. In certain circumstances, these abilities can be affected by many factors such as tiredness and driving tension. Hence, it is very important to have an automatic road sign recognition system that can act as a subsidiary means for the driver [3, 4]. Giving this information to drivers in good time can prevent accidents, save lives, improve driving performance, and reduce the pollution caused by vehicles [5, 6].
Due to the complex environment of roads and the scenes around them, the detection and recognition of road and traffic signs may face several difficulties. The colour of the sign fades with time as a result of long exposure to sunlight and the reaction of the paint with the air. Visibility is affected by weather conditions such as fog, rain, clouds and snow. The colour information is very sensitive to variations in the light conditions such as shadows, clouds, and the sun [7, 1]. It can be affected by the illuminant colour (daylight), illumination geometry, and viewing geometry [8]. Objects similar in colour to the road signs, such as buildings or vehicles, may be present in the scene under consideration. Signs may be found disoriented, damaged or occluded. If the image is acquired from a moving car, it often suffers from motion blur and car vibration.
Many systems have been proposed in the last few years. The majority of them use colours together with shapes to detect and recognize traffic signs. Among these systems are those developed by Vitabile and Sorbello [9], Vitabile et al. [7], and Vitabile et al. [10, 11], in which the HSV colour space is divided into a number of subspaces (regions). The S and V components are used to find in which region the hue is located. Classification is carried out by normalizing the sign image to 36x36 pixels and using two different multi-layer neural networks designed to extract the circular red signs and the red triangular warning signs. Paclik et al. [12] segmented the colour images using the HSV colour space. Colours like red, blue, green, and yellow were segmented by the H component and a certain threshold. A Laplace kernel classifier is used to classify the road signs. de la Escalera et al. [13] built a colour classifier based on two look-up tables derived from the hue and saturation of an HSI colour space. This system is modified in [14]. For the localization of road signs in the image, two different techniques, Genetic Algorithms and Simulated Annealing, are used, and a comparison between the two methods is presented. Fang et al. [2] developed a road sign detection and tracking system in which the colour images from a video camera are converted into the HSI system. Colour features are extracted from the hue by using a two-layer neural network.
Jiang and Choi [15] applied fuzzy rules and thresholds to images in the Nrgb colour space to achieve colour segmentation. Warning signs, which are considered there, are identified by extracting the three corners of the triangles. A fuzzy method is developed to detect these corners by defining two membership functions that specify the possibility that the pixels inside two masks form a corner. The masks are rectified to eliminate the problem caused by damaged signs.
The layout of the paper is as follows. In Section 2, the system overview is presented. Section 3 describes the fuzzy colour segmentation algorithm. In Section 4, a full description of the multistage median filter is presented. Shape measures are shown in Section 5, and the fuzzy shape recognizer is described in Section 6. Finally, the experimental results are discussed and the future work is presented in Section 7.

2 System Overview

The proposed system, figure 1, starts with an image taken by a camera mounted in a car. A fuzzy colour segmentation algorithm is invoked to segment this image according to the desired colour. The result of colour segmentation is a binary image containing all possible signs in addition to other undesired blobs generated by objects with similar colours.


Fig. 1. System’s Block diagram.

The algorithm checks every segmented image for any objects. If no objects are found, another image is taken; otherwise, the number of these objects is calculated by applying the connected components labelling algorithm developed by Suzuki et al. [16]. When the number of objects in the segmented image exceeds a certain value, the system assumes that the segmented image is noisy. A multistage median filter is used to clean it from noise and undesired blobs. This operation is repeated several times until the number of objects in the image is less than the desired number or the number of objects is no longer affected by the multistage median filter. A list of objects is then generated, and the area of each object in the list is computed and compared with a threshold. The object is discarded when its area is less than the threshold. All small objects are discarded because it will not be possible to recognize their pictograms, since they are either far away from the camera or objects with colours similar to those of the traffic signs.
Objects that pass the test are forwarded to the fuzzy shape recognizer. All objects in the list are treated in the same manner until the list is empty; then a new image is taken and a new cycle starts.
During the development of this algorithm, still images are used for testing and verification of results. A database of more than 1600 images taken in different light conditions, which were collected for this purpose, is used. This library is available for researchers from http://users.du.se/~hfl/traffic.
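The detection loop described above can be summarized in the following sketch. It is a minimal outline only: the helper functions fuzzy_colour_segment, multistage_median and fuzzy_shape_recognize are hypothetical stand-ins for the stages described in Sections 3-6, scipy's ndimage.label is used here merely as a substitute for the Suzuki et al. labelling algorithm, and the thresholds MAX_OBJECTS and MIN_AREA are illustrative values rather than the ones used in the paper.

import numpy as np
from scipy import ndimage   # ndimage.label is a stand-in for the Suzuki et al. labelling

MAX_OBJECTS = 20    # illustrative: above this count the image is treated as noisy
MIN_AREA = 400      # illustrative: minimum object area in pixels

def detect_signs(rgb_image):
    """One cycle of the detection loop (sketch)."""
    binary = fuzzy_colour_segment(rgb_image)             # Section 3 (hypothetical helper)
    labels, count = ndimage.label(binary)
    # Filter repeatedly while the image is noisy and filtering still helps.
    while count > MAX_OBJECTS:
        filtered = multistage_median(binary)              # Section 4 (hypothetical helper)
        new_labels, new_count = ndimage.label(filtered)
        if new_count >= count:
            break                                         # filtering no longer reduces the count
        binary, labels, count = filtered, new_labels, new_count
    shapes = []
    for obj_id in range(1, count + 1):
        mask = labels == obj_id
        if mask.sum() < MIN_AREA:
            continue                                      # too small: pictogram cannot be recognized
        shapes.append(fuzzy_shape_recognize(mask))        # Sections 5 and 6 (hypothetical helper)
    return shapes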

3 Colour Segmentation Algorithm

The colour segmentation algorithm operates on RGB images taken by a digital camera mounted in a moving car. The images are converted to the HSV colour space. The HSV colour space is chosen because hue is invariant to variations in light conditions: it is multiplicative/scale invariant, additive/shift invariant, and invariant under saturation changes. In practice this means that it is still possible to recover the tint of an object when it is lit with illumination of varying intensity. Perez and Koch [17] showed that the hue is unaffected by shadows and highlights on the object when the illumination is white.
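For reference, an RGB image can be converted to HSV with hue and saturation expressed on the 8-bit [0, 255] scale used in Table 1, for example with OpenCV; this is only an illustration of the colour space, not the author's code, and the file name is a placeholder.

import cv2

bgr = cv2.imread('sign.png')                        # hypothetical input image
# COLOR_BGR2HSV_FULL maps hue onto the full 8-bit range [0, 255]
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV_FULL)
hue, sat, val = cv2.split(hsv)                      # each channel is in [0, 255]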
The Swedish National Road Administration defined the colours used for the signs [18] in the CMYK colour space. These values are converted into normalized hue and normalized saturation as shown in Table 1.

Table 1. Normalized Hue and Saturation.


Colour        Normalized Hue [0,255]    Normalized Saturation [0,255]
Red           250                       207
Yellow        37                        230
Green         123                       255
Light Blue    157                       255
Dark Blue     160                       230

Normalized Hue and Saturation together with hue and saturation values calculated
from images of traffic signs are used as a priori knowledge to the fuzzy inference
system to specify the range of each colour. To detect and segment a certain colour,
seven fuzzy rules are applied. These rules are as follows:
1. If (Hue is Red1) and (Saturation is Red) then (Result is Red)
2. If (Hue is Red2) and (Saturation is Red) then (Result is Red)
3. If (Hue is Yellow) and (Saturation is Yellow) then (Result is Yellow)
4. If (Hue is Green) and (Saturation is Green) then (Result is Green)


5. If (Hue is Blue) and (Saturation is Blue) then (Result is Blue)
6. If (Hue is Noise1) then (Result is Black)
7. If (Hue is Noise2) then (Result is Black)

The membership functions of the Hue and Saturation are shown in figures 2 and 3 respectively. Since the range of hue of the red colour is around zero, two fuzzy variables are defined for this hue: Red1 represents hue values just above zero, and Red2 represents hue values at the upper end of the scale. Two regions of hue values that are not used for road signs are defined as Noise1 and Noise2. If any of these colours is detected by the fuzzy inference system, it responds by producing a black pixel.
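A minimal sketch of how rules 1-7 can be evaluated is given below. The trapezoidal breakpoints and output grey levels are illustrative assumptions only, since the membership functions of figures 2-4 are defined graphically in the paper; the sketch also replaces the full Mamdani defuzzification with a simple thresholded firing strength, which is enough to show the rule structure.

import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function evaluated element-wise."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

# Illustrative breakpoints only: the membership functions of figures 2 and 3
# are not reproduced numerically in the paper.
HUE_MF = {
    'Red1':   (-1, 0, 8, 15),        # hue values just above zero
    'Red2':   (240, 248, 255, 256),  # hue values at the upper end of the scale
    'Yellow': (25, 32, 42, 50),
    'Green':  (105, 115, 130, 140),
    'Blue':   (145, 150, 165, 172),
}
SAT_MF = {
    'Red':    (150, 190, 255, 256),
    'Yellow': (180, 215, 255, 256),
    'Green':  (200, 240, 255, 256),
    'Blue':   (200, 225, 255, 256),
}
GREY = {'Red': 50, 'Yellow': 100, 'Green': 150, 'Blue': 200}   # assumed output grey levels

def segment_colour(hue, sat, colour):
    """Firing strength of the rules for one colour (min as fuzzy AND, max as fuzzy OR)."""
    if colour == 'Red':       # rules 1 and 2 share the same consequent
        h = np.maximum(trapmf(hue, *HUE_MF['Red1']), trapmf(hue, *HUE_MF['Red2']))
    else:                     # rules 3, 4 and 5
        h = trapmf(hue, *HUE_MF[colour])
    strength = np.minimum(h, trapmf(sat, *SAT_MF[colour]))
    out = np.zeros_like(hue, dtype=np.uint8)   # noise hues (rules 6 and 7) stay black
    out[strength > 0.5] = GREY[colour]
    return out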

Fig. 2. Hue Membership Functions.

The membership functions of the Saturation show that the saturation values of the colours used in road signs are similar to each other, with small differences in their ranges.
Figure 4 shows the "Result" output variable, which consists of five membership functions, one for each colour. They represent certain ranges of grey levels in the output image which correspond to the colours used in road signs. The fuzzy surface is shown in figure 5; it shows the relation among the Hue, Saturation, and Result variables.
Grey level slicing is used to separate the different grey levels, which represent the segmented colours generated by the fuzzy inference system. The output of this grey level slicing is a binary image containing the segmented colour. A multistage median filter is then used to remove noise and small undesired objects.
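Grey level slicing reduces to a range test on the defuzzified output image; a minimal sketch follows, where the grey-level bounds for each colour are assumed to follow the output membership functions of figure 4 and the example range is illustrative.

import numpy as np

def grey_level_slice(result_img, lo, hi):
    """Binary image of the pixels whose output grey level lies in [lo, hi]."""
    return ((result_img >= lo) & (result_img <= hi)).astype(np.uint8) * 255

# e.g. red_binary = grey_level_slice(result_img, 40, 60)   # illustrative range for red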
Figure 6 shows a sample image segmented by this method. In figure 7, images of
different signs are segmented, separated and normalized to 36x36 pixels. These
normalized images are used in the development of the fuzzy shape recognizer.


Fig. 3. Saturation Membership Functions.

Fig. 4. The Output Functions.

Fig. 5. The Fuzzy System Surface.


Fig. 6. Detection Results.

4 Multistage Median Filter

Although the median filter does a good job of removing noise, fine details of the object are also removed. The multistage median filter preserves the fine details of objects while effectively removing the noise [19].
Let {x(·,·)} be a discrete 2-D sequence, and consider the set of elements inside a (2N+1) × (2N+1) square window W centred at (i, j). Define the following four subsets of the window W:

W_{0,1}(i,j) = \{ x(i, j+k) : -N \le k \le N \}    (1)

W_{1,1}(i,j) = \{ x(i+k, j+k) : -N \le k \le N \}    (2)

W_{1,0}(i,j) = \{ x(i+k, j) : -N \le k \le N \}    (3)

W_{1,-1}(i,j) = \{ x(i+k, j-k) : -N \le k \le N \}    (4)

Let z_s(i,j), s = 1, 2, 3, 4, be the medians of the above sets, and

y_p(i,j) = \min[ z_1(i,j), z_2(i,j), z_3(i,j), z_4(i,j) ]    (5)

y_q(i,j) = \max[ z_1(i,j), z_2(i,j), z_3(i,j), z_4(i,j) ]    (6)

The output of the multistage median filter is given by:

y_m(i,j) = \mathrm{med}[ y_p(i,j), y_q(i,j), x(i,j) ]    (7)
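A direct, unoptimized sketch of equations (1)-(7) for a 2-D array is shown below; edge pixels are handled by replicating the border, which is an implementation choice not specified in the paper.

import numpy as np

def multistage_median(x, N=1):
    """Multistage median filter, equations (1)-(7), for a 2-D array x."""
    pad = np.pad(x, N, mode='edge')
    h, w = x.shape
    out = np.empty_like(x)
    ks = np.arange(-N, N + 1)
    for i in range(h):
        for j in range(w):
            ci, cj = i + N, j + N
            z1 = np.median(pad[ci, cj + ks])          # horizontal window W_{0,1}
            z2 = np.median(pad[ci + ks, cj + ks])     # diagonal window W_{1,1}
            z3 = np.median(pad[ci + ks, cj])          # vertical window W_{1,0}
            z4 = np.median(pad[ci + ks, cj - ks])     # anti-diagonal window W_{1,-1}
            yp = min(z1, z2, z3, z4)                  # eq. (5)
            yq = max(z1, z2, z3, z4)                  # eq. (6)
            out[i, j] = np.median([yp, yq, pad[ci, cj]])   # eq. (7)
    return out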

5 Shape Measures

Four shape measures are used to decide which shape the sign has. They are ellipticity,
triangularity, rectangularity, and octagonality.
An ellipse can be obtained by applying an affine transform to a circle. The simplest affine moment invariant I_1 is given by:

I_1 = (\mu_{20}\mu_{02} - \mu_{11}^2) / \mu_{00}^4    (8)

where \mu_{20}, \mu_{02} and \mu_{11} are the second order central moments, and \mu_{00} is the zero order central moment. To discriminate shapes more precisely, higher order invariants should be involved. However, they are less reliable and very sensitive to noise. In contrast, I_1 is stable and more practical to use. To measure the ellipticity, the following equation is used:

E = \begin{cases} 16\pi^2 I_1 & \text{if } I_1 \le 1/(16\pi^2) \\ 1/(16\pi^2 I_1) & \text{otherwise} \end{cases}    (9)

Ellipticity E ranges over [0,1], and for a perfect ellipse its value is 1.
The same approach is used to characterize triangles. The triangularity measure T is given by:

T = \begin{cases} 108 I_1 & \text{if } I_1 \le 1/108 \\ 1/(108 I_1) & \text{otherwise} \end{cases}    (10)

Triangularity has the same range as ellipticity; a perfect triangle has a triangularity T of 1.
Rectangularity is measured as the ratio of the area of the region under consideration to the area of its minimum bounding rectangle (MBR) [20].
A new measure is introduced here to indicate how close the shape is to an octagon. Following the same approach described in [20], octagonality is defined as:

O = \begin{cases} 15.932\pi^2 I_1 & \text{if } I_1 \le 1/(15.932\pi^2) \\ 1/(15.932\pi^2 I_1) & \text{otherwise} \end{cases}    (11)

Octagonality O has the same range as ellipticity and triangularity, which is [0,1]. A perfect octagon has an octagonality of 1.
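The following sketch computes I_1 from a filled binary mask using pixel-count central moments and applies equations (9)-(11). R1 is approximated here by the axis-aligned bounding box; R2, the minimum bounding rectangle over all orientations, would need something like OpenCV's cv2.minAreaRect and is omitted.

import numpy as np

def affine_invariant_I1(mask):
    """I1 = (mu20 * mu02 - mu11^2) / mu00^4, eq. (8), from a filled binary mask."""
    ys, xs = np.nonzero(mask)
    m00 = float(xs.size)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).sum()
    mu02 = ((ys - yc) ** 2).sum()
    mu11 = ((xs - xc) * (ys - yc)).sum()
    return (mu20 * mu02 - mu11 ** 2) / m00 ** 4

def clipped_measure(I1, k):
    """Common form of eqs. (9)-(11): k*I1, folded back into [0, 1] for large I1."""
    return k * I1 if I1 <= 1.0 / k else 1.0 / (k * I1)

def rectangularity_r1(mask):
    """R1: object area over the area of its axis-aligned bounding box."""
    ys, xs = np.nonzero(mask)
    box = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    return xs.size / box

def shape_measures(mask):
    I1 = affine_invariant_I1(mask)
    E = clipped_measure(I1, 16 * np.pi ** 2)       # ellipticity, eq. (9)
    T = clipped_measure(I1, 108.0)                 # triangularity, eq. (10)
    O = clipped_measure(I1, 15.932 * np.pi ** 2)   # octagonality, eq. (11)
    return rectangularity_r1(mask), E, T, O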


Fig. 7. Samples of Segmented Signs.

6 Fuzzy Shape Recognizer

The four shape measures described in the previous section assume that the object is solid; that is, the object under consideration does not contain any holes. Since traffic signs have two different colours, one for the border and the other for the interior, the border colour is used for the segmentation, and the resulting holes should then be filled with the same grey level as the object to make it solid.


One problem that may arise here is that when the object under consideration is occluded by another object, it will be difficult to fill these holes, because the object does not form a closed shape. Figure 8A shows an object occluded by other objects, and figure 8B shows the object after segmentation. This case can be treated by calculating the convex hull of the object, which represents the actual solid shape of the object under consideration. The shape's convex hull is computed using Graham's scan algorithm, and the results are shown in figure 8C. After obtaining the convex hull, the four shape measures are calculated and their values are forwarded to the fuzzy classifier.
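As an illustration, the convex hull of the occluded object's boundary pixels can be computed with the monotone-chain method, a variant of Graham's scan; a library routine such as cv2.convexHull or scipy.spatial.ConvexHull would serve equally well. The filled hull is then passed to the four shape measures.

import numpy as np

def convex_hull(points):
    """Convex hull of 2-D points by the monotone-chain method (a Graham-scan variant).
    points: iterable of (x, y) pairs; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return np.array(pts)
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1])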

(A) Occluded objects.

(B) Results of Segmentation and Extraction.

(C) The Convex hull.


Fig. 8. Occlusions and Convex hull.

The fuzzy classifier consists of five fuzzy input variables, and one output variable.
The input variables are R1, R2, T, E, and O. The membership functions of these
variables are shown in figures 9-14. To perform the classification of traffic signs, five
rules are used as follows:
1. If (R1 is Low) and (R2 is Low ) and (T is One) and (E is Low) and (O is High)
then (Shape is Triangle)
2. If (R1 is One) or (R2 is One ) then (Shape is Rectangle)
3. If (R1 is Low) and (R2 is Low ) and (T is High) and (E is Low) and (O is One)
then (Shape is Octagon)
4. If (R1 is Low) and (R2 is Low ) and (T is High) and (E is One) and (O is Low)
then (Shape is Circle)
5. If (R1 is not One) and (R2 is not One) and (T is not One) and (E is not One) and
(O is not One) then (Shape is Undefined)


where R1 is the rectangularity calculated for horizontally aligned objects, R2 is the rectangularity of objects oriented at any other angle, T is the triangularity, E is the ellipticity, and O is the octagonality.
Shape measures of five samples each of the Stop sign (octagon), the Yield sign (triangle), the No-Entry sign (circle), and different rectangular signs shown in figure 7 are calculated and listed in Tables 2-5. These values are used to implement the fuzzy shape recognizer and to verify the results.
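A structural sketch of how rules 1-5 combine the membership degrees is given below. The membership functions of figures 9-13 are not reproduced numerically in the paper, so the degrees are assumed to be supplied by the caller, and the final "Undefined" test is a simplified stand-in for rule 5.

def fuzzy_and(*degrees):
    """Minimum operator as the fuzzy AND."""
    return min(degrees)

def classify_shape(m):
    """m: dict of membership degrees, e.g. m['R1']['Low'], taken from figures 9-13."""
    scores = {
        'Triangle':  fuzzy_and(m['R1']['Low'], m['R2']['Low'], m['T']['One'],
                               m['E']['Low'], m['O']['High']),          # rule 1
        'Rectangle': max(m['R1']['One'], m['R2']['One']),               # rule 2 (OR as max)
        'Octagon':   fuzzy_and(m['R1']['Low'], m['R2']['Low'], m['T']['High'],
                               m['E']['Low'], m['O']['One']),           # rule 3
        'Circle':    fuzzy_and(m['R1']['Low'], m['R2']['Low'], m['T']['High'],
                               m['E']['One'], m['O']['Low']),           # rule 4
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0.5 else 'Undefined'                  # simplified rule 5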

Fig. 9. The R1 Membership Functions.

Fig. 10. The R2 Membership Functions.

Fig. 11. The T Membership Functions.


Fig. 12. The E Membership Functions.

Fig. 13. The O Membership Functions.

Fig. 14. Output Membership Functions.

Table 2. Shape Measures of Stop Sign


R1 R2 T E O
0.802083 0.609371 1.4553 0.995308 1.0004
0.819853 0.724007 1.4579 0.997082 0.9986
0.827206 0.827206 1.4557 0.995595 1.0002
0.820683 0.758180 1.4559 0.995694 1.0001
0.822917 0.559804 1.4568 0.996312 0.9994


Table 3. Shape Measures of Yield Sign.


R1 R2 T E O
0.627880 0.593256 1.1195 0.765624 1.3006
0.633641 0.588887 1.1381 0.778361 1.2793
0.629464 0.584195 1.1077 0.757563 1.3144
0.597701 0.595498 1.1424 0.781334 1.2744
0.671053 0.664599 1.2104 0.827840 1.2028

Table 4. Shape Measures of No-Entry Sign


R1 R2 T E O
0.802521 0.802521 1.4605 0.998834 0.9969
0.789719 0.772936 1.4620 0.999885 0.9959
0.776591 0.776618 1.4618 0.999765 0.9960
0.790977 0.760449 1.4617 0.999701 0.9960
0.782466 0.775507 1.4621 0.999936 0.9958

Table 5. Shape Measures of Rectangular Signs.


R1 R2 T E O
0.916325 0.938060 1.3318 0.910852 1.0932
0.923166 0.959186 1.3350 0.913026 1.0906
0.944465 0.535576 1.3368 0.914245 1.0891
0.956602 0.839174 1.3391 0.915847 1.0872
0.926373 0.802776 1.3378 0.914982 1.0883

7 Results and Conclusion

A complete fuzzy system for the detection and recognition of traffic signs is suggested in this paper. It consists of a fuzzy colour detector and a fuzzy shape recognizer.
The colour segmentation and detection is very robust, since it is developed to work in different light conditions. Figure 7 shows about 100 signs segmented by this algorithm. The shape recognition algorithm uses four shape measures; among them is a new shape measure, octagonality, which is proposed in this paper.
Varying lighting conditions and occlusions represent the most serious problems when working with outdoor images. This system has dealt with these problems successfully and is immune to light changes and occlusions.
Since the shape measures are computed using the Affine Moment Invariants, which are invariant to the general affine transformation, the algorithm is invariant to rotation, scaling, and translation. It is also invariant to the distortion of objects by perspective projection, which takes place when the viewing angle between the camera and the sign is not zero, and there is no need to normalize the detected signs.
Figure 15 shows some results of the shape recognition. The images shown here are noisy. The system succeeded in distinguishing traffic signs from other large objects. In the last row of figure 15, an occluded sign is detected and recognized by this algorithm.

Fig. 15. Results of Shape Recognition

The proposed algorithm was tested on hundreds of images and shows a high recognition rate. For triangles, rectangles, and circles, the recognition rate is about 93.3%, while in the case of octagons it is 95%.
A number of false alarms are generated by the algorithm. The reason behind these false alarms is the presence of objects with properties similar to those of traffic signs. This can be avoided by recognizing the contents of the signs, which is part of the future work.
The proposed system is very useful for sign inventory and maintenance
applications on the motorways. It can also be used for intelligent vehicle applications
when the pictogram recognizer is added. The design and implementation of this
recognizer is also part of the future work.
Shadows also present a problem in computer vision applications on outdoor images. This problem will be studied in depth as part of the future work, which aims to improve the detection efficiency.

References

1. Fang C., Fuh C., Chen S. and Yen P., A road sign recognition system based on dynamic
visual model, The 2003 IEEE Computer Society Conf. Computer Vision and Pattern
Recognition (2003), I-750 - I-755.
2. Fang C., Chen S. and Fuh C., Road-sign detection and tracking, IEEE Trans. on
Vehicular Technology 52 (2003), no. 5, 1329-1341.
3. Yabuki N., Matsuda Y., Fukui Y. and Miki S., Region detection using color similarity,
1999 IEEE Inter. Symposium on Circuits and Systems (1999), 98-101.
4. Jiang G., Choi T. and Zheng Y., Morphological traffic sign recognition, 3rd Inter. Conf.
on Signal Processing (1996), 531-534.
5. Estevez L. and Kehtarnavaz N., A real-time histographic approach to road sign
recognition, IEEE Southwest Symposium on Image Analysis and Interpretation (1996),
95-100.
6. de la Escalera A., Moreno L., Puente E. and Salichs M., Neural traffic sign recognition
for autonomous vehicles, 20th Inter. Conf. on Industrial Electronics Control and
Instrumentation (1994), 841-846.
7. Vitabile S., Pollaccia G., Pilato G. and Sorbello F., Road sign recognition using a
dynamic pixel aggregation technique in the hsv color space, 11th Inter. Conf. Image
Analysis and Processing (2001), 572-577.
8. Buluswar S. and Draper B., Color recognition in outdoor images, Inter. Conf. Computer
vision (1998), 171-177.
9. Vitabile S. and Sorbello F., Pictogram road signs detection and understanding in outdoor
scenes, Conf. Enhanced and Synthetic Vision (1998), 359-370.
10. Vitabile S., Gentile A., Dammone G. and Sorbello F., Multi-layer perceptron mapping on
a simd architecture, The 2002 IEEE Signal Processing Society Workshop (2002), 667-
675.
11. Vitabile S., Gentile A. and Sorbello F., A neural network based automatic road sign
recognizer, The 2002 Inter. Joint Conf. on Neural Networks (2002), 2315-2320.
12. Paclik P., Novovicova J., Pudil P. and Somol P., Road sign classification using laplace
kernel classifier, Pattern Recognition Letters 21 (2000), no. 13-14, 1165-1173.
13. de la Escalera A., Armingol J. and Mata M., Traffic sign recognition and analysis for
intelligent vehicles, Image and Vision Comput. 21 (2003), 247-258.
14. de la Escalera A., Armingol J. and Pastor J., Visual sign information extraction and
identification by deformable models for intelligent vehicles, IEEE Trans. on Intelligent
Transportation Systems 5 (2004), no. 2, 57-68.
15. Jiang G. and Choi T., Robust detection of landmarks in color image based on fuzzy set
theory, Fourth Inter. Conf. on Signal Processing (1998), 968-971.


16. Suzuki K., Horiba I. and Sugie N., Linear-time connected component labelling based on
sequential local operations, Computer Vision and Image Understanding 89 (2003), 1-23.
17. Perez F. and Koch C., Toward color image segmentation in analog VLSI: Algorithm and
hardware, Int. J. of Computer Vision 12 (1994), no. 1, 17-42.
18. Swedish Road Administration, http://www.Vv.Se/vag_traf/vagmarken/farglikare.htm, 2004.
19. Wang X., Adaptive multistage median filter, IEEE Trans. Signal Processing 40 (1992),
no. 4, 1015-1017.
20. Rosin P., Measuring shape: Ellipticity, rectangularity, and triangularity, Machine Vision
and Applications 14 (2003), no. 3, 172-184.
