Daytime Preceding Vehicle Brake Light Detection Using Monocular Vision
IEEE SENSORS JOURNAL, VOL. 16, NO. 1, JANUARY 1, 2016
Abstract— Advanced vehicle safety has become a pressing issue with the explosive growth in the number of car owners. Increasing numbers of driver assistance systems have been developed to warn drivers of potential hazards by analyzing the surroundings with sensors and/or cameras. Signaling vehicle deceleration and potential collision, brake lights are particularly important warning signals that must not be neglected. In this paper, we propose a vision-based daytime brake light detection system using a driving video recorder, a device that is becoming widely used. At daytime, the visual features, motions, and appearances of vehicles are highly visible. Brake lights, on the contrary, are hard to notice due to the low contrast between the brake lights and the environment. Without the significant light-scattering characteristic available at night, the proposed system extracts preceding vehicles with taillight symmetry verification, and then integrates both luminance and radial symmetry features to detect brake lights. A detection refinement process using temporal information is also employed for miss recovery. Experiments are conducted on a test data set collected by front-mounted driving video recorders, and the results verify that the proposed system can effectively detect brake lights at daytime, showing its good feasibility in real-world environments.

Index Terms— Brake signal detection, vehicle detection, signal processing, collision avoidance, driver assistance.

Manuscript received June 27, 2015; revised September 4, 2015; accepted September 5, 2015. Date of publication September 9, 2015; date of current version December 10, 2015. This work was supported in part by the Ministry of Education, Taiwan, under Grant ATU-103-W958 and Grant ICTL-103-Q528, and in part by the Ministry of Science and Technology, Taiwan, under Grant 101-2221-E-009-087-MY3, Grant 102-2221-E-009-031, and Grant 103-2221-E-009-154. The associate editor coordinating the review of this paper and approving it for publication was Prof. Aime Lay-Ekuakille.
H.-T. Chen is with the Information and Communications Technology Laboratory, National Chiao Tung University, Hsinchu 30010, Taiwan (e-mail: huatsung@cs.nctu.edu.tw).
Y.-C. Wu and C.-C. Hsu are with the Department of Computer Science, National Chiao Tung University, Hsinchu 30010, Taiwan (e-mail: b303479@gmail.com; cchsu.cs00g@nctu.edu.tw).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JSEN.2015.2477412
1530-437X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

I. INTRODUCTION

ADVANCED vehicle safety is a critical issue that ordinary people are deeply concerned about and to which numerous researchers are devoted. With the explosive growth in car ownership worldwide, drivers desire, more keenly than ever, all sorts of automatic/semi-automatic vehicle-mounted systems for driver assistance. For accident prevention and safety promotion, most driver assistance systems are designed to call the driver's attention to potential dangers, since distracted driving is one of the main causes of traffic accidents.

In related works, many research efforts are directed towards the detection of lane markings [1]–[3] (to cite a few), traffic lights [4]–[8], vehicles [9]–[14], and vehicle lights (taillights, brake lights, and turn signals) [15]–[24]. For the ultimate goal of accident prevention, pre-collision sensing has become a hot research topic among automotive manufacturers. Over the last decades, many state-of-the-art systems have relied on active sensors, e.g., radars [25], or beamforming techniques [26]–[28]. However, the rapid development and reduced cost of digital cameras have made it feasible to deploy driving video recorders (DVRs), which are coming into use worldwide. Moreover, due to the exponential growth in computing power and the availability of advanced computer vision techniques, various vision-based sensing tasks can be conducted with low-cost algorithms. Compared to device-based sensing, vision-based approaches have the main advantage that different tasks can be achieved merely by means of software operating on existing hardware, requiring no additional devices.

Visual features, motion, and appearance are commonly employed in vehicle detection. For a more thorough discussion of vehicle detection techniques, please refer to the extensive review by Sun et al. [12]. There are also a significant number of approaches which utilize taillights as a cue for preceding vehicle detection, especially at night, wherein the other features are hard to detect. O'Malley et al. [15] propose an image processing system to detect and track vehicle rear-lamp pairs in forward-facing color videos. A camera-configuration process is suggested to optimize the appearance of rear lamps for segmentation. To segment rear-facing lamps from low-exposure forward-facing color videos, the HSV color space is used, and a red-color threshold is directly derived from automotive regulations and adapted for real-world conditions. Finally, lamps are paired by color cross-correlation symmetry analysis and tracked by Kalman filtering. Considering the "blooming effect," caused by the saturation of bright pixels in CCD cameras with low dynamic range, Skodras et al. [16] exploit color and radial symmetry information for taillight segmentation and detect vehicles by finding taillight pairs. A video frame is first converted into the L*a*b* color space. Then, the fast radial symmetry transform (FRST) [29] is applied to the subspace image of the a* component to find candidate taillight regions. Through a morphological light pairing scheme and a symmetry check process, vehicle presence is verified on the basis of heuristic thresholding.

Signaling vehicle deceleration and potential collision, brake lights are of particular importance. Thammakaroon and Tangamchit [17] present a nighttime brake light detection technique for predictive brake warning. To locate the brake lights, the red color of the rings is first detected using the RGB color model. The color distribution
of brake lights is calculated from 50 collected samples to determine the threshold for each color channel. Then, three candidate ROIs from the RGB channels are verified by computing their intersection, and the intersected ROIs are regarded as brake lights. Finally, a collision risk index is computed by considering the vehicle velocity, the area of the largest detected brake light region, and the area sum of all detected brake light regions. Schamm et al. [18] present a system capable of detecting vehicles at night or during dusk based on their front and rear lights. The system employs a perspective blob filter, which is a two-dimensional Gaussian filter, to detect circular light blobs, and subsequently searches for corresponding light pairs using a rule-based clustering approach. To verify each light pair association, a symmetry analysis is applied. Using a color-based comparison, the system is able to discriminate between the front lights of oncoming vehicles and the rear lights of preceding vehicles. For a preceding vehicle, its braking maneuver can be recognized by detecting the third brake light.

Chen and Peng [19] and Chen et al. [20], [21] put forward several vision-based approaches for the detection of brake lights and turn signals at night using a front-mounted camera. In [19], Chen and Peng propose a nighttime brake light detection approach by analyzing the signal in both the spatial and frequency domains. They first detect candidate regions in the spatial domain using color features obtained in the YCbCr color space. Then the Cr component of the scattering areas of candidate regions is transformed to the frequency domain for further verification. Finally, decision making is conducted to determine the braking status. Utilizing the characteristic of the scattering in the brake light region, Chen et al. [20] model brake light scattering by Nakagami imaging and conduct the detection process in a part-based manner. The detection algorithm consists of the following steps: (i) intensity computation and contrast enhancement with a step function; (ii) taillight region modeling by the Nakagami-m distribution; and (iii) adaptive decision making for brake light detection. Later on, Chen et al. further adapt this scattering modeling scheme for nighttime turn signal detection in [21], and also demonstrate good feasibility in real-world environments. However, the light scattering effect becomes inconspicuous under high ambient light conditions and thus is not applicable to daytime brake light detection.

Based on the tracking of vehicle taillights, Almagambetov et al. [22] propose a scheme, an enhanced version of [23], for detecting brake lights and turn signals at both daytime and nighttime. To extract candidate taillight pairs, the regions with a blend of red and white are first detected, and then a symmetry test is conducted in which two regions are considered symmetrical if their vertical distance is less than the height of one region. A correlation coefficient using RGB color information is computed for each candidate pair, and the pairs with low correlation coefficients are eliminated. Applying Kalman filter-based tracking with a codebook scheme to the candidate taillight pairs, the luminance changes of tracked taillights are detected as brake or turn signals. Although a near-100% detection rate is claimed in [22], their experiments on four short test video clips (totaling 1609 frames in length) containing only six signal events (two turn signals and four brake lights) seem insufficient to demonstrate the effectiveness of the approach. On the other hand, this approach also has some disadvantages. First, it relies heavily on the tracking of taillights. Once the tracking fails, the error is propagated to subsequent computations, so that the turn/brake signal detection process no longer works. Besides, the luminance change will not be as evident as claimed in [22] under high-luminance conditions. Moreover, the method is also unable to handle sudden changes in environment light.

There has been considerable research on vehicle collision avoidance. However, to the best of our knowledge, few works have focused on detecting vehicle brake lights at daytime. In this paper, we propose a vision-based system for daytime brake light detection. At daytime, most visual features, motions, and appearances of vehicles are highly visible, but the brake lights, on the contrary, are hard to notice due to the low contrast between the brake lights and the environment. Besides, the light scattering effect is not as obvious as it is at night, making brake lights more difficult to notice. To call the driver's attention to a braking vehicle in front, daytime brake light detection is of vital importance. Without the significant characteristic of light scattering, we propose a hierarchical scheme which first extracts preceding vehicles with taillight symmetry verification, and then integrates both luminance and radial symmetry features to detect brake lights. For miss recovery, a detection refinement process using temporal information is also employed.

The remainder of this paper is organized as follows. Section II elaborates the detailed processing steps of the proposed brake light detection, including vehicle candidate detection, taillight extraction and pairing, potential brake light detection, and brake light status determination. Section III explains how to perform detection refinement using temporal information. Section IV presents and discusses the experimental results, and finally Section V concludes this paper.

II. DETECTION OF BRAKE LIGHTS

Unlike at night, when the scattering effect makes taillights and brake lights highly visible, in the daytime brake signals become inconspicuous and easily neglected due to high ambient light. Direct detection of brake lights, which is viable at night, may be infeasible in the daytime condition due to the many false candidates with visual features similar to those of true brake lights. Hence, we propose a hierarchical scheme for daytime brake light detection, as illustrated in Fig. 1. In Section II-A, we first extract the preceding vehicles in video frames. In Section II-B, we perform taillight extraction and pairing. Within the obtained taillight regions, we detect radially symmetric blobs (RSBs) as potential brake lights in Section II-C. Finally, brake light status determination is conducted in Section II-D. For better understanding, Fig. 2 gives an illustration in advance, wherein the cyan rectangle is a preset region of interest (ROI), the purple rectangle indicates a detected preceding vehicle, the green rectangles are the extracted and paired taillight regions, and the small regions
Fig. 1. Schematic flowchart of the proposed daytime brake light detection scheme.

Fig. 2. Illustration of the ROI, detected vehicle region, taillight regions, and brake lights.

Fig. 3. Taillight extraction. (a) Image of a detected vehicle candidate. (b) The a* component image of (a). (c) Binarization of (b). (d) Taillight candidates obtained after morphological operations and connected component analysis.

of high luminance inside are the brake lights we aim to detect.

A. Vehicle Candidate Detection

For collision avoidance, the proposed brake light detection system mainly puts emphasis on the vehicle(s) directly in front of the camera. Therefore, we narrow down the processing area by presetting an ROI covering the centered half of the lower part of a frame, as shown in Fig. 2. The subsequent vehicle extraction and brake light detection are conducted within the ROI, and the vehicles aside are ignored. This not only saves much processing time but also spares the driver unnecessary disturbance from brake lights detected on vehicles aside.

The Histogram of Oriented Gradients (HOG), a feature descriptor first proposed by Dalal and Triggs [31] for human detection, has been widely adopted in many object detection works, showing very good effectiveness. The detection is based on evaluating well-normalized local histograms of image gradient orientation in a dense grid of overlapping blocks, and a linear Support Vector Machine (SVM)-based approach is adopted for object/non-object classification. It has also been verified that HOG descriptors can reduce the influence of illumination changes or shadowing. Later on, Zhu et al. [32] improved the method of [31] by selecting features from a large set of blocks at multiple sizes, locations, and aspect ratios using AdaBoost. A rejection cascade scheme is exploited to speed up the detection process. Due to its effectiveness in object detection and robustness to illumination changes, in this paper we adopt a cascade of boosted classifiers with HOG features to locate vehicle candidates in video frames.

B. Taillight Extraction and Pairing

In video frames, taillights are displayed in high red chromaticity. Therefore, L*a*b* is a suitable color space for taillight extraction. Being a perceptually uniform color space, L*a*b* equally maps the perceived color difference into quantitative distance in the color space. Separating lightness information (the L* component) from color information (the a* and b* components), the L*a*b* color space also has the advantage that illumination changes have little effect on color information. In our system, the a* component (red - green) of L*a*b* is utilized to search for red areas in the region of each detected vehicle candidate. We segment each detected vehicle candidate and compute its a* component image, as shown in Figs. 3(a) and (b). The high red chromaticity regions can be retained after image binarization with a threshold τa, as shown in Fig. 3(c). Then, morphological operations and connected component analysis are applied to eliminate noise and obtain complete regions of taillight candidates, as shown in Fig. 3(d).

There may be more than two taillight candidates detected in a vehicle region. For taillight verification and pairing, we design the following test and selection procedures based on the characteristic of taillight symmetry to verify whether two given candidates, termed Ci and Cj, are the taillight pair of a vehicle.

1) Y-Distance Test: Let hi and hj denote the heights of Ci and Cj, respectively, as illustrated in Fig. 4. Assuming the road surface is even, the taillight pair should be at nearly the same horizontal level. Thus, if the vertical distance d between the centers of Ci and Cj is greater than both hi and hj, the two candidates Ci and Cj should not be paired. Inspired by [22], this Y-distance test Tyd(Ci, Cj), which returns whether the pair (Ci, Cj) should be retained or not, can be formulated as

    Tyd(Ci, Cj) = { 0, if d > hi and d > hj,
                    1, otherwise.                (1)

2) Size-Shape Test: Considering that a pair of taillights should be of similar size and shape, we compute the area ratio G of the intersection of Ci and Cj to their union for
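Concretely, the two pairing checks above can be sketched in a few lines. This is a minimal sketch assuming axis-aligned (x, y, w, h) candidate boxes; the function names are illustrative, and since the page defining the area ratio G exactly is not part of this excerpt, the shape-aligned overlap ratio below is an assumed variant rather than the authors' exact formula.

```python
def y_distance_test(ci, cj):
    """Y-distance test of Eq. (1): reject a candidate pair whose vertical
    center distance d exceeds the heights of both candidates."""
    _, yi, _, hi = ci
    _, yj, _, hj = cj
    d = abs((yi + hi / 2.0) - (yj + hj / 2.0))
    return 0 if (d > hi and d > hj) else 1


def size_shape_ratio(ci, cj):
    """Size-shape test: overlap the two boxes after aligning their top-left
    corners and return intersection area / union area, so that identically
    sized and shaped candidates score 1.0 (assumed variant of G)."""
    _, _, wi, hi = ci
    _, _, wj, hj = cj
    inter = min(wi, wj) * min(hi, hj)
    union = wi * hi + wj * hj - inter
    return inter / float(union)
```

Candidate pairs passing the Y-distance test and scoring a sufficiently high ratio would then be retained as a taillight pair.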
Fig. 5. Size-shape test.

Fig. 6. Example of taillight pairing. (a) Detected vehicle region. (b) Candidate pairs.

Fig. 10. Brake light detection for a vehicle with taillights in the shape of a vertical rectangle (height > width). (a) Detected vehicle image. (b) Obtained taillight region. (c) Is. (d) Pseudo-color presentation of If. (e) Io. (f) Ib.

Fig. 11. Brake light detection for a vehicle with double brake lights in each taillight region. (a) Detected vehicle image. (b) Obtained taillight region. (c) Is. (d) Pseudo-color presentation of If. (e) Io. (f) Ib.

Then, we apply the fast radial symmetry transform (FRST) [29] to search for symmetrical shapes in Is. Since an activated brake light is usually in a color close to white, leading to lower a* values of the corresponding pixels than a non-brake light, we consider only the negatively-affected pixels¹ in the FRST. For noise tolerance and miss avoidance, a low radial-strictness parameter α = 2 is used in our system. The symmetry detection scans for shapes over a range of sizes (from rmin to rmax). Normally, in order to detect symmetric shapes of any size, a large range of sizes should be used, which, however, requires high computational cost. Thus, a reasonable range of rmin = 1 and rmax = 5, which can cover possible brake light sizes, is used in our system. Finally, the result of the FRST, termed If, is converted into a binary image Io by adopting the efficient Otsu thresholding algorithm [30] due to its optimal threshold setting for bimodal images.

Two examples of the above RSB detection are demonstrated in Fig. 9. Figs. 9(a) and (b) are the detected vehicle images with activated and inactivated brake lights, respectively. Figs. 9(c) and (d) show the image Is of each taillight region. One can observe that, besides the non-red areas of taillights, activated brake lights yield additional holes at their bright regions in Is in comparison with inactivated brake lights. Such holes show high symmetry in Fig. 9(e), which gives the FRST result considering only negatively-affected pixels, where the red-to-blue color indicates high-to-low shape symmetry. As for inactivated brake lights, only non-red regions are of high symmetry, as shown in Fig. 9(f). This significant characteristic is our primary concern and is subsequently utilized to determine whether the brake light is activated or not (to be explained later). After applying Otsu's thresholding algorithm, more than one blob, as well as some noise, may be obtained, as shown in Figs. 9(g) and (h). Therefore, the next step is to select an appropriate blob as a potential brake light in each taillight region.

For noise removal, we first apply morphological operations and eliminate small blobs. Although vehicles differ in appearance, with different styles and designs of taillights, they must adhere to automotive regulations [15]. In general, if a taillight is in the shape of a horizontal rectangle, i.e., its width w is larger than its height h, the brake light will be located near the outer side of the vehicle for better attention attraction, as shown in Fig. 9(a). Otherwise, for a taillight in the shape of a vertical rectangle, i.e., h > w, the brake light will be located at the top side, as shown in Fig. 10(a), wherein the lower bright blob in the right taillight region is an activated turn signal. Based on this observation, we retain the topmost blob in a w × h taillight region if h > w, as shown in Fig. 10; otherwise, the outermost one is retained. There is also the case that one taillight region contains more than one brake light. For a taillight region containing double or multiple brake lights, as shown in Fig. 11, retaining the outermost/topmost one is sufficient for the subsequent brake light status determination. Some results of this potential brake light selection, termed Ib, are presented in Figs. 9(i), 9(j), 10(f), and 11(f).

D. Brake Light Status Determination

The detected RSBs can successfully extract the bright regions of illuminated brake lights. However, when brake lights are not activated, the RSBs are located at non-red symmetric areas, as shown in Fig. 9(j). In comparison, such non-red areas are less bright than the RSBs detected in activated brake lights. Therefore, we utilize the luminance feature and count the number of high-intensity pixels in an RSB as the feature for brake light status determination.

¹ The negatively-affected pixel is the pixel a fixed distance away from an edge point, in the direction opposite to its gradient. For more details, please refer to [29].
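As an illustration of the blob selection rule of Section II-C (topmost blob when the taillight region is a vertical rectangle, outermost blob otherwise), a minimal sketch follows; the (cx, cy) blob centroids, the region size, and the is_left_side flag indicating which side of the vehicle the region lies on are all hypothetical inputs, not the authors' implementation.

```python
def select_potential_brake_light(blobs, region_w, region_h, is_left_side):
    """Pick one blob per taillight region as the potential brake light.

    blobs: list of (cx, cy) centroids surviving morphological cleanup.
    For a vertical region (h > w), keep the topmost blob (smallest cy);
    otherwise keep the outermost blob, i.e., the smallest cx on the
    vehicle's left side and the largest cx on its right side.
    """
    if not blobs:
        return None
    if region_h > region_w:
        # vertical rectangle: brake light sits near the top side
        return min(blobs, key=lambda b: b[1])
    # horizontal rectangle: brake light sits near the outer side
    if is_left_side:
        return min(blobs, key=lambda b: b[0])
    return max(blobs, key=lambda b: b[0])
```

With this rule, an activated turn signal below the brake light (Fig. 10) or a second brake light in the same region (Fig. 11) is simply skipped.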
Fig. 12. Number of high-intensity pixels over time, with some corresponding detection results of vehicles, taillight regions, and RSBs given below.

Fig. 13. Flowchart of the detection refinement algorithm.

For feature normalization, the vehicle width is also taken into consideration. More explicit processing steps are explained as follows.

The taillight region R of the original video frame is first converted to a grayscale image IR. We take Ib (the result of potential brake light selection) as a mask and apply it to IR, so that the pixels in IR corresponding to zero entries in Ib are set to zero, as formulated by

    IR = M_Ib(IR),                               (8)

where M_Ib(·) denotes the masking operation with Ib. Then, we use a simple step function B(u) to discard the pixels with intensity less than a threshold τb, and count the number Nb of the remaining high-intensity pixels, as formulated by

    B(u) = { 1, if u ≥ τb,
             0, otherwise,                       (9)

    Nb = ||B(IR)||,                              (10)

where ||·|| returns the number of nonzero pixels.

An example is given in Fig. 12, wherein the horizontal and vertical axes of the chart indicate the frame number and the number of high-intensity pixels, respectively. Some corresponding detection results of vehicles, taillight regions, and RSBs are presented below the chart. One can see that when the brake light is not activated, e.g., f = 56 and 192, the RSBs are located at non-red regions, and consequently contain very few high-intensity pixels. Once the brake lights are activated, e.g., f = 65, 132, and 215, the detected RSBs appropriately match the brake lights, resulting in an abrupt increase in the number of high-intensity pixels.

However, the varying distance between the preceding vehicle and the camera affects the measure of Nb. Thus, we compute the horizontal distance between the outer sides of the RSB pair as the vehicle width wv, and produce a normalized feature N as²

    N = Nb / wv².                                (11)

Then, the status of a brake light is determined as activated if its N is greater than a threshold τN. If both the left and right brake lights are activated, it can be concluded that the preceding vehicle is braking.

III. DETECTION REFINEMENT USING TEMPORAL INFORMATION

In the previous section, the detection is conducted in a single frame. Since a DVR captures successive frames, temporal information can be exploited for further refinement of the detection process.

Let V^j_{t-1} and V^i_t be the vehicles detected in two successive frames F_{t-1} and F_t, respectively. Consider that the position of a vehicle may not change very much in successive frames. Therefore, if more than 50% of V^j_{t-1} is overlapped by V^i_t,³ we can regard V^j_{t-1} and V^i_t as the same vehicle, and we call this vehicle tracked. A vehicle V^i_t without a corresponding detection in the previous frame is regarded as a new detection.

For a tracked vehicle V^j_{t-1} in the previous frame, if no corresponding detection can be found in the current frame, there is probably a vehicle detection miss, because a vehicle rarely disappears suddenly. In this case, we perform taillight extraction and pairing within the corresponding region of V^j_{t-1} in the current frame. If one verified taillight pair can be obtained, we claim that the vehicle V^j_{t-1} is detected at the same position in the current frame, as V^i_t. With such temporal information-based refinement, some misses in HOG-based vehicle detection can be recovered. A flowchart of this process is given in Fig. 13, and the detailed algorithm is explained in Algorithm 1.

Moreover, a similar mechanism can be applied to taillight extraction. For a tracked vehicle V_{t-1} with verified taillights L_{t-1} and R_{t-1} in the previous frame F_{t-1}, as shown in Fig. 14(a), we can search for the taillights in the current frame F_t around the positions of L_{t-1} and R_{t-1}, instead of the whole

² The square of wv is used because the area of a brake light region is proportional to the square of the vehicle width.
³ On average, the width of a vehicle is no less than 1.65 m. To move out of the 50% overlap threshold within the duration of two successive frames at 30 fps, a vehicle must reach a lateral speed (relative to the camera) higher than 89.1 km/hr (= 50% × 1.65 m × 30/s = 24.75 m/s), which rarely happens even on a super highway.
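The feature computation of Eqs. (8)-(11) and the τN decision can be sketched as follows, assuming the grayscale taillight region and the mask Ib arrive as nested lists of pixel values; the data layout and names are illustrative, not the authors' implementation.

```python
def brake_light_feature(gray_region, blob_mask, vehicle_width, tau_b):
    """Compute the normalized feature N of Eq. (11).

    gray_region: grayscale taillight region I_R (rows of pixel values).
    blob_mask:   binary mask I_b from potential brake light selection.
    Masking (Eq. 8) and the step function B(u) (Eq. 9) are fused into a
    single pass; N_b (Eq. 10) counts the surviving high-intensity pixels.
    """
    n_b = 0
    for row_g, row_m in zip(gray_region, blob_mask):
        for u, m in zip(row_g, row_m):
            if m and u >= tau_b:  # masked-in pixel passing B(u)
                n_b += 1
    return n_b / float(vehicle_width ** 2)  # N = N_b / w_v^2


def is_braking(n_left, n_right, tau_n):
    """Declare braking only when both brake lights exceed tau_N."""
    return n_left > tau_n and n_right > tau_n
```

Normalizing by the squared vehicle width keeps the feature comparable as the preceding vehicle moves nearer or farther from the camera.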
Fig. 14. Enhancing taillight extraction with the aid of temporal information. (a) A tracked vehicle V_{t-1} with verified taillights L_{t-1} and R_{t-1} in the previous frame F_{t-1}. (b) Searching taillights in the current frame F_t around the positions of L_{t-1} and R_{t-1}. (c) Illustration of search region setting for taillight candidates. (d) The a* component image of each search region. (e) Binarization of (d). (f) Taillight candidates obtained after morphological operations and connected component analysis.

TABLE I. PARAMETER SETTING USED IN THE PROPOSED SYSTEM

Algorithm 1 (excerpt):
…
6:    Link V^j_{t-1} to V^i_t;
7:    V^i_t.tracked = true;
8:  else if V^j_{t-1} has been linked to a vehicle V^q_t in F_t then
      /* Compare the overlap areas and assign V^j_{t-1} to the one with the larger overlap area. */
9:    if |V^i_t ∩ V^j_{t-1}| > |V^q_t ∩ V^j_{t-1}| then
10:     Link V^j_{t-1} to V^i_t;
11:     V^i_t.tracked = true;
12:   end if
13:  end if
14: end if
15: if (!V^i_t.tracked) /* no vehicle is linked to V^i_t */
16:   Consider V^i_t as a new detection;
17: end for /* from line# 1 */
18: for each V^j_{t-1} in F_{t-1}
19:   if (V^j_{t-1}.tracked and V^j_{t-1} links to no vehicle in F_t) then
        /* This implies a potential miss. */
20:     Search taillight pairs {P1, P2, …} in F_t within the bounding region of V^j_{t-1};
21:     for each taillight pair Pk
          /* Find if there exists a verified taillight pair. */
…

The corresponding results are shown in Figs. 14(d)-(f). With the aid of temporal information, not only can the search regions for taillight candidates be reduced, but more reliable taillight pairs are obtained, effectively improving both computational efficiency and detection accuracy.
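The 50%-overlap association rule that drives this tracking can be sketched as below, assuming axis-aligned (x, y, w, h) bounding boxes; the tracked/linked bookkeeping and the larger-overlap reassignment of Algorithm 1 are omitted.

```python
def overlap_area(a, b):
    """Area of intersection of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ox = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    oy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return ox * oy


def is_same_vehicle(v_prev, v_curr, ratio=0.5):
    """Link V_{t-1} to V_t when more than `ratio` of V_{t-1}'s box
    is covered by V_t's box (Sec. III association rule)."""
    return overlap_area(v_prev, v_curr) > ratio * (v_prev[2] * v_prev[3])
```

At 30 fps this 50% threshold is safe in practice, since leaving the overlap window between two frames would require an implausibly high lateral speed (see footnote 3).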
TABLE II. RESULTS OF VEHICLE DETECTION

TABLE III. RESULTS OF VEHICLE DETECTION WITH INACTIVATED/ACTIVATED BRAKE LIGHTS

Fig. 18. Intensity histograms of RSBs from 500 activated brake lights and 500 inactivated ones.

Fig. 21. Examples of incorrect brake light detection due to (a)-(b) inappropriate taillight extraction and pairing, (c) deflective view of the vehicle's back, and (d) overexposure.

TABLE V. SUMMARY OF THE FEATURES USED IN THE COMPARED METHODS AND OUR PROPOSED METHOD
independent variable is the feature threshold. The ROC curves are shown in Fig. 22, wherein the horizontal and vertical axes denote the false positive rate (FPR) and the true positive rate (TPR), respectively. Please note that the highest TPR and FPR cannot reach 100%; they are 95.88% and 83.93%, respectively, because some vehicles with activated or inactivated brake lights have been missed in the preceding vehicle detection. An ROC curve closer to the top left corner indicates better performance. The comparison shows that our proposed method outperforms the others. Method-4, which performs the second best, can also be a good choice for daytime brake light detection, since it can achieve higher TPRs if more false positives are tolerated.

Fig. 22. Performance comparison using the Receiver Operating Characteristic.

The proposed system is implemented in C++ with OpenCV 2.4.5 libraries and tested on a desktop computer under Windows 7 64-bit OS with an Intel i7-3770 CPU @ 3.4 GHz and 8 GB RAM. Considering the computation cost, the proposed system can perform brake light detection for a frame in 0.11 seconds, i.e., a frame rate of about 9 fps.

V. CONCLUSION

Since distracted driving has been one of the main causes of traffic accidents, more and more car owners thirst for assistance systems that warn drivers of potential hazards. Signaling vehicle deceleration and potential collision, brake lights are of vital importance. In this paper, we propose a vision-based approach for daytime brake light detection using a driving video recorder. Although visual features, motions, and appearances of vehicles are highly visible at daytime, brake lights are inconspicuous due to the low contrast against ambient light. Besides, the light scattering effect is not as obvious as it is at night, making it more difficult to detect brake lights at daytime. Hence, we propose a hierarchical scheme in which preceding vehicles are first extracted with taillight symmetry verification, and then brake light detection is conducted by

REFERENCES

[1] J. C. McCall and M. M. Trivedi, "Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20-37, Mar. 2006.
[2] A. Borkar, M. Hayes, and M. T. Smith, "A novel lane detection system with efficient ground truth generation," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 1, pp. 365-374, Mar. 2012.
[3] M. M. Trivedi, T. Gandhi, and J. McCall, "Looking-in and looking-out of a vehicle: Computer-vision-based enhanced vehicle safety," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 1, pp. 108-120, Mar. 2007.
[4] X. Baró, S. Escalera, J. Vitriá, O. Pujol, and P. Radeva, "Traffic sign recognition using evolutionary Adaboost detection and forest-ECOC classification," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 1, pp. 113-126, Mar. 2009.
[5] G. Siogkas, E. Skodras, and E. Dermatas, "Traffic lights detection in adverse conditions using color, symmetry and spatiotemporal information," in Proc. Int. Conf. Comput. Vis. Theory Appl., 2012, pp. 620-627.
[6] J.-H. Park and C.-S. Jeong, "Real-time signal light detection," in Proc. 2nd Int. Conf. Future Generat. Commun. Netw. Symp., 2008, pp. 139-142.
[7] J. Gong, Y. Jiang, G. Xiong, C. Guan, G. Tao, and H. Chen, "The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles," in Proc. IEEE Intell. Vehicles Symp., Jun. 2010, pp. 431-435.
[8] M. R. Yelal, S. Sasi, G. R. Shaffer, and A. K. Kumar, "Color-based signal light tracking in real-time video," in Proc. IEEE Int. Conf. Video Signal Surveill., Nov. 2006, p. 67.
[9] W. Liu, X. Wen, B. Duan, H. Yuan, and N. Wang, "Rear vehicle detection and tracking for lane change assist," in Proc. IEEE Intell. Vehicles Symp., Jun. 2007, pp. 252-257.
[10] C. Caraffi, T. Vojir, J. Trefny, J. Sochman, and J. Matas, "A system for real-time detection and tracking of vehicles from a single car-mounted camera," in Proc. 15th Int. IEEE Conf. Intell. Transp. Syst., Sep. 2012, pp. 975-982.
[11] Z. Sun, G. Bebis, and R. Miller, "Monocular precrash vehicle detection: Features and classifiers," IEEE Trans. Image Process., vol. 15, no. 7, pp. 2019-2034, Jul. 2006.
[12] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694-711, May 2006.
[13] Y.-L. Chen, Y.-H. Chen, C.-J. Chen, and B.-F. Wu, "Nighttime vehicle detection for driver assistance and autonomous vehicles," in Proc. IEEE 18th Int. Conf. Pattern Recognit., Aug. 2006, pp. 687-690.
[14] H. Takahashi, D. Ukishima, K. Kawamoto, and K. Hirota, "A study on predicting hazard factors for safe driving," IEEE Trans. Ind. Electron., vol. 54, no. 2, pp. 781-789, Apr. 2007.
[15] R. O'Malley, E. Jones, and M. Glavin, "Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 2, pp. 453-462, Jun. 2010.
[16] E. Skodras, G. Siogkas, E. Dermatas, and N. Fakotakis, "Rear lights
vehicle detection for collision avoidance,” in Proc. IEEE 19th Int. Conf.
integrating both luminance and radial symmetry features. Syst., Signals Image Process., Apr. 2012, pp. 134–137.
A detection refinement process using temporal information [17] P. Thammakaroon and P. Tangamchit, “Predictive brake warning at night
is also employed for miss recovery. Experiments on a video using taillight characteristic,” in Proc. IEEE Int. Symp. Ind. Electron.,
Jul. 2009, pp. 217–221.
data set collected in real traffic environments by front-mounted [18] T. Schamm, C. von Carlowitz, and J. M. Zollner, “On-road vehicle
driving video recorders show that the proposed system can detection during dusk and at night,” in Proc. IEEE Intell. Vehicles Symp.,
effectively detect brake lights at daytime with a detection rate Jun. 2010, pp. 418–423.
[19] D.-Y. Chen and Y.-J. Peng, “Frequency-tuned taillight-based nighttime
of up to 87.6%, showing its good feasibility in real-world vehicle braking warning system,” IEEE Sensors J., vol. 12, no. 11,
environments. pp. 3285–3292, Nov. 2012.
[20] D.-Y. Chen, Y.-H. Lin, and Y.-J. Peng, "Nighttime brake-light detection by Nakagami imaging," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 4, pp. 1627–1637, Dec. 2012.
[21] D.-Y. Chen, Y.-J. Peng, L.-C. Chen, and J.-W. Hsieh, "Nighttime turn signal detection by scatter modeling and reflectance-based direction recognition," IEEE Sensors J., vol. 14, no. 7, pp. 2317–2326, Jul. 2014.
[22] A. Almagambetov, M. Casares, and S. Velipasalar, "Autonomous tracking of vehicle rear lights and detection of brakes and turn signals," in Proc. IEEE Symp. Comput. Intell. Secur. Defence Appl., Jul. 2012, pp. 1–7.
[23] M. Casares, A. Almagambetov, and S. Velipasalar, "A robust algorithm for the detection of vehicle turn signals and brake lights," in Proc. IEEE 9th Int. Conf. Adv. Video Signal-Based Surveill., Sep. 2012, pp. 386–391.
[24] Q. Ming and K.-H. Jo, "Vehicle detection using tail light segmentation," in Proc. IEEE 6th Int. Forum Strategic Technol., vol. 2, Aug. 2011, pp. 729–732.
[25] R. Stevenson, "A driver's sixth sense," IEEE Spectr., vol. 48, no. 10, pp. 50–55, Oct. 2011.
[26] A. Lay-Ekuakille, P. Vergallo, D. Saracino, and A. Trotta, "Optimizing and post processing of a smart beamformer for obstacle retrieval," IEEE Sensors J., vol. 12, no. 5, pp. 1294–1299, May 2012.
[27] V. Agarwal, N. V. Murali, and C. Chandramouli, "A cost-effective ultrasonic sensor-based driver-assistance system for congested traffic conditions," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 3, pp. 486–498, Sep. 2009.
[28] M. R. Strakowski, B. B. Kosmowski, R. Kowalik, and P. Wierzba, "An ultrasonic obstacle detector based on phase beamforming principles," IEEE Sensors J., vol. 6, no. 1, pp. 179–186, Feb. 2006.
[29] G. Loy and A. Zelinsky, "Fast radial symmetry for detecting points of interest," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 8, pp. 959–973, Aug. 2003.
[30] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. 9, no. 1, pp. 62–66, Jan. 1979.
[31] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 1, Jun. 2005, pp. 886–893.
[32] Q. Zhu, M.-C. Yeh, K.-T. Cheng, and S. Avidan, "Fast human detection using a cascade of histograms of oriented gradients," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2, Jun. 2006, pp. 1491–1498.

Hua-Tsung Chen (A'07–S'09–M'09) received the B.S., M.S., and Ph.D. degrees in computer science and information engineering from National Chiao Tung University, Hsinchu, Taiwan, in 2001, 2003, and 2009, respectively.
He is currently an Assistant Research Fellow with the Information and Communications Technology Laboratory, National Chiao Tung University. His research interests include computer vision, video signal processing, content-based video indexing and retrieval, multimedia information systems, and music signal processing.

Yi-Chien Wu received the B.S. and M.S. degrees in computer science from National Chiao Tung University, Hsinchu, Taiwan, in 2013 and 2015, respectively.
Her research interests include computer vision, video signal processing, and video surveillance systems.

Chun-Chieh Hsu (S'12) received the B.S. and M.S. degrees in computer science from National Chiao Tung University, Hsinchu, Taiwan, in 2009 and 2011, respectively, where he is currently pursuing the Ph.D. degree in computer science.
His research interests include computer vision, video signal processing, sports video analysis, and video surveillance systems.