Daytime Preceding Vehicle Brake Light Detection Using Monocular Vision



Hua-Tsung Chen, Member, IEEE, Yi-Chien Wu, and Chun-Chieh Hsu, Student Member, IEEE

Abstract— Advanced vehicle safety is a recently emerging issue, driven by the explosive growth in the number of car owners. More and more driver assistance systems have been developed for warning drivers of potential hazards by analyzing the surroundings with sensors and/or cameras. Signaling vehicle deceleration and potential collision, brake lights are particularly important warning signals that allow of no neglect. In this paper, we propose a vision-based daytime brake light detection system using a driving video recorder, a device that is becoming widespread. At daytime, the visual features, motions, and appearances of vehicles are highly visible. Brake lights, on the contrary, are hard to notice due to the low contrast between the brake lights and the environment. Without the significant characteristic of light scattering available at night, the proposed system extracts preceding vehicles with taillight symmetry verification, and then integrates both luminance and radial symmetry features to detect brake lights. A detection refinement process using temporal information is also employed for miss recovery. Experiments are conducted on a test data set collected by front-mounted driving video recorders, and the results verify that the proposed system can effectively detect brake lights at daytime, showing its good feasibility in real-world environments.

Index Terms— Brake signal detection, vehicle detection, signal processing, collision avoidance, driver assistance.

Manuscript received June 27, 2015; revised September 4, 2015; accepted September 5, 2015. Date of publication September 9, 2015; date of current version December 10, 2015. This work was supported in part by the Ministry of Education, Taiwan, under Grant ATU-103-W958 and Grant ICTL-103-Q528, and in part by the Ministry of Science and Technology, Taiwan, under Grant 101-2221-E-009-087-MY3, Grant 102-2221-E-009-031, and Grant 103-2221-E-009-154. The associate editor coordinating the review of this paper and approving it for publication was Prof. Aime Lay-Ekuakille.

H.-T. Chen is with the Information and Communications Technology Laboratory, National Chiao Tung University, Hsinchu 30010, Taiwan (e-mail: huatsung@cs.nctu.edu.tw).

Y.-C. Wu and C.-C. Hsu are with the Department of Computer Science, National Chiao Tung University, Hsinchu 30010, Taiwan (e-mail: b303479@gmail.com; cchsu.cs00g@nctu.edu.tw).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/JSEN.2015.2477412

I. INTRODUCTION

ADVANCED vehicle safety is a critical issue that ordinary people are deeply concerned about and to which numerous researchers are devoted. With the explosive growth in car ownership worldwide, drivers desire, more keenly than ever, all sorts of automatic and semi-automatic vehicle-mounted systems for driver assistance. For accident prevention and safety promotion, most driver assistance systems are designed to call the driver's attention to potential dangers, since distracted driving is one of the main causes of traffic accidents.

In related works, many research efforts are directed towards the detection of lane markings [1]–[3] (to cite a few), traffic lights [4]–[8], vehicles [9]–[14], and vehicle lights (taillights, brake lights, and turn signals) [15]–[24]. For the ultimate goal of accident prevention, pre-collision sensing has become a hot research topic among automotive manufacturers. Over the last decades, many state-of-the-art systems have relied on active sensors, e.g., radars [25], or beamforming techniques [26]–[28]. However, the rapid development and reduced cost of digital cameras have made it feasible to deploy driving video recorders (DVRs), which are now used worldwide. Moreover, due to the exponential growth in computing power and the availability of advanced computer vision techniques, various kinds of vision-based sensing can be conducted with low-cost algorithms. Compared to device-based sensing, vision-based approaches have the main advantage that different tasks can be achieved merely by means of software operating on existing hardware, requiring no additional devices.

Visual features, motion, and appearance are commonly employed in vehicle detection. For a more thorough discussion of vehicle detection techniques, please refer to the extensive review by Sun et al. [12]. There are also a significant number of approaches which utilize taillights as a cue for preceding vehicle detection, especially at night, when other features are hard to detect. O'Malley et al. [15] propose an image processing system to detect and track vehicle rear-lamp pairs in forward-facing color videos. A camera-configuration process is suggested to optimize the appearance of rear lamps for segmentation. To segment rear-facing lamps from low-exposure forward-facing color videos, the HSV color space is used, and a red-color threshold is directly derived from automotive regulations and adapted for real-world conditions. Finally, lamps are paired by color cross-correlation symmetry analysis and tracked by Kalman filtering. Considering the "blooming effect," caused by the saturation of bright pixels in CCD cameras with low dynamic range, Skodras et al. [16] exploit color and radial symmetry information for taillight segmentation and detect vehicles by finding taillight pairs. A video frame is first converted into the L∗a∗b∗ color space. Then, the fast radial symmetry transform (FRST) [29] is applied to the subspace image of the a∗ component to find candidate taillight regions. Through a morphological light pairing scheme and a symmetry check process, vehicle presence is verified on the basis of heuristic thresholding.

Signaling vehicle deceleration and potential collision, brake lights are of particular importance. Thammakaroon and Tangamchit [17] present a nighttime brake light detection technique for predictive brake warning. To locate the brake lights, the red color of the rings is first detected using the RGB color model. The color distribution

of brake lights is calculated from 50 collected samples to determine the threshold for each color channel. Then, three candidate ROIs from the RGB channels are verified by computing their intersection, and the intersected ROIs are regarded as brake lights. Finally, a collision risk index is computed by considering the vehicle velocity, the area of the largest detected brake light region, and the area sum of all detected brake light regions. Schamm et al. [18] present a system capable of detecting vehicles at night or during dusk based on their front and rear lights. The system employs a perspective blob filter, which is a two-dimensional Gaussian filter, to detect circular light blobs, and subsequently searches for corresponding light pairs using a rule-based clustering approach. To verify each light pair association, a symmetry analysis is applied. Using a color-based comparison, the system is able to discriminate between the front lights of oncoming vehicles and the rear lights of preceding vehicles. For a preceding vehicle, its braking maneuver can be recognized by the detection of the third brake light.

Chen and Peng [19] and Chen et al. [20], [21] put forward several vision-based approaches for the detection of brake lights and turn signals at night using a front-mounted camera. In [19], Chen and Peng propose a nighttime brake light detection approach by analyzing the signal in both the spatial and frequency domains. They first detect candidate regions in the spatial domain using color features obtained in the YCbCr color space. Then the Cr component of the scattering areas of candidate regions is transformed to the frequency domain for further verification. Finally, decision making is conducted to determine the braking status. Utilizing the characteristic of scattering in the brake light region, Chen et al. [20] model brake light scattering by Nakagami imaging and conduct the detection process in a part-based manner. The detection algorithm consists of the following steps: (i) intensity computation and contrast enhancement with a step function; (ii) taillight region modeling by the Nakagami-m distribution; and (iii) adaptive decision making for brake light detection. Later on, Chen et al. further adapt this scattering modeling scheme for nighttime turn signal detection in [21], and also demonstrate good feasibility in real-world environments. However, the light scattering effect becomes inconspicuous under high ambient light conditions and thus is not applicable to daytime brake light detection.

Based on the tracking of vehicle taillights, Almagambetov et al. [22], in an enhanced version of [23], propose a scheme for detecting brake lights and turn signals at both daytime and nighttime. To extract candidate taillight pairs, regions with a blend of red and white are first detected, and then a symmetry test is conducted in which two regions are considered symmetrical if their vertical distance is less than the height of one region. A correlation coefficient using RGB color information is computed for each candidate pair, and pairs with low correlation coefficients are eliminated. Applying Kalman filter-based tracking with a codebook scheme to the candidate taillight pairs, luminance changes of the tracked taillights are then detected as brake or turn signals. Although a near-100% detection rate is claimed in [22], their experiments on four short test video clips (1609 frames in total) containing only six signal events (two turn signals and four brake lights) seem insufficient to demonstrate the effectiveness of the approach. The approach also has some disadvantages. First, it relies heavily on the tracking of taillights: once the tracking fails, the error propagates to subsequent computations, and the turn/brake signal detection process no longer works. Besides, the luminance change will not be as evident as claimed in [22] under high luminance conditions. Moreover, the method cannot handle sudden changes of environmental light.

There has been considerable research on vehicle collision avoidance. However, to the best of our knowledge, few works have focused on detecting vehicle brake lights at daytime. In this paper, we propose a vision-based system for daytime brake light detection. At daytime, most visual features, motions, and appearances of vehicles are highly visible, but brake lights, on the contrary, are hard to notice due to the low contrast between the brake lights and the environment. Besides, the light scattering effect is not as obvious as it is at night, making brake lights more difficult to notice. To call the driver's attention to a braking vehicle in front, daytime brake light detection is of vital importance. Without the significant characteristic of light scattering, we propose a hierarchical scheme which first extracts preceding vehicles with taillight symmetry verification, and then integrates both luminance and radial symmetry features to detect brake lights. For miss recovery, a detection refinement process using temporal information is also employed.

The remainder of this paper is organized as follows. Section II elaborates the detailed processing steps of the proposed brake light detection, including vehicle candidate detection, taillight extraction and pairing, potential brake light detection, and brake light status determination. Section III explains how to perform detection refinement using temporal information. Section IV presents and discusses the experimental results, and finally Section V concludes this paper.

II. DETECTION OF BRAKE LIGHTS

Unlike at night, when the scattering effect makes taillights and brake lights highly visible, in the daytime brake signals become inconspicuous and easy to neglect due to high ambient light. Direct detection of brake lights, which is viable at night, may be infeasible in the daytime due to many false candidates with visual features similar to true brake lights. Hence, we propose a hierarchical scheme for daytime brake light detection, as illustrated in Fig. 1. In Section II-A, we first extract the preceding vehicles in video frames. In Section II-B, we perform taillight extraction and pairing. Within the obtained taillight regions, we detect radially symmetric blobs (RSBs) as potential brake lights in Section II-C. Finally, brake light status determination is conducted in Section II-D. For better understanding, Fig. 2 gives an illustration in advance, wherein the cyan rectangle is a preset region of interest (ROI), the purple rectangle indicates a detected preceding vehicle, the green rectangles are the extracted and paired taillight regions, and the small regions

of high luminance inside are the brake lights we aim to detect.

Fig. 1. Schematic flowchart of the proposed daytime brake light detection scheme.

Fig. 2. Illustration of the ROI, the detected vehicle region, the taillight regions, and the brake lights.

A. Vehicle Candidate Detection

For collision avoidance, the proposed brake light detection system mainly puts emphasis on the vehicle(s) directly in front of the camera. Therefore, we narrow down the processing area by presetting an ROI covering the centered half of the lower part of a frame, as shown in Fig. 2. The subsequent vehicle extraction and brake light detection are conducted within the ROI, and vehicles to the side are ignored. This not only saves much processing time but also spares the driver unnecessary disturbance from brake lights detected on vehicles to the side.

The Histogram of Oriented Gradients (HOG), a feature descriptor first proposed by Dalal and Triggs [31] for human detection, has been widely adopted in many object detection works, showing very good effectiveness. The detection is based on evaluating well-normalized local histograms of image gradient orientation in a dense grid of overlapping blocks, and a linear Support Vector Machine (SVM)-based approach is adopted for object/non-object classification. It has also been verified that HOG descriptors can reduce the influence of illumination changes and shadowing. Later on, Zhu et al. [32] improve the method of [31] by selecting features from a large set of blocks at multiple sizes, locations, and aspect ratios using AdaBoost. A rejection cascade scheme is exploited to speed up the detection process. Due to its effectiveness in object detection and robustness to illumination changes, in this paper we adopt a cascade of boosted classifiers with HOG features to locate vehicle candidates in video frames.
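To make this step concrete, the following is a minimal C++/OpenCV sketch of cascade-based candidate detection restricted to the preset ROI. It is not the authors' implementation: the cascade file name vehicle_hog_cascade.xml is hypothetical (standing in for a cascade trained with HOG features, e.g., with the legacy opencv_traincascade tool), and a C++11-capable OpenCV build is assumed.

```cpp
// Sketch: HOG-cascade vehicle candidate detection inside the preset ROI.
// Assumptions: BGR input frames; a pre-trained cascade loaded by the caller,
// e.g., cv::CascadeClassifier cascade("vehicle_hog_cascade.xml");
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<cv::Rect> detectVehicleCandidates(const cv::Mat& frame,
                                              cv::CascadeClassifier& cascade)
{
    // Preset ROI: the centered half of the lower part of the frame (Fig. 2).
    const int w = frame.cols, h = frame.rows;
    const cv::Rect roi(w / 4, h / 2, w / 2, h / 2);

    cv::Mat gray;
    cv::cvtColor(frame(roi), gray, cv::COLOR_BGR2GRAY);

    std::vector<cv::Rect> vehicles;
    cascade.detectMultiScale(gray, vehicles,
                             1.1,  // pyramid scale step
                             3);   // min neighboring detections to accept

    for (size_t k = 0; k < vehicles.size(); ++k)
        vehicles[k] += roi.tl();   // map back to full-frame coordinates
    return vehicles;
}
```

Vehicles detected outside the ROI are never considered, which mirrors the paper's design choice of trading coverage for speed and fewer distracting alarms.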
B. Taillight Extraction and Pairing

In video frames, taillights are displayed in high red chromaticity. Therefore, L∗a∗b∗ is a suitable color space for taillight extraction. Being a perceptually uniform color space, L∗a∗b∗ maps perceived color differences onto quantitative distances in the color space. By separating lightness information (the L∗ component) from color information (the a∗ and b∗ components), the L∗a∗b∗ color space also has the advantage that illumination changes have little effect on the color information. In our system, the a∗ component (red - green) of L∗a∗b∗ is utilized to search for red areas in the region of each detected vehicle candidate. We segment each detected vehicle candidate and compute its a∗ component image, as shown in Figs. 3(a) and (b). The high red chromaticity regions can be retained after image binarization with a threshold τ_a, as shown in Fig. 3(c). Then, morphological operations and connected component analysis are applied to eliminate noise and obtain complete regions of taillight candidates, as shown in Fig. 3(d).

Fig. 3. Taillight extraction. (a) Image of a detected vehicle candidate. (b) The a∗ component image of (a). (c) Binarization of (b). (d) Taillight candidates obtained after morphological operations and connected component analysis.

There may be more than two taillight candidates detected in a vehicle region. For taillight verification and pairing, we design the following test and selection procedures based on the characteristic of taillight symmetry to verify whether two given candidates, termed C_i and C_j, are the taillight pair of a vehicle.

1) Y-Distance Test: Let h_i and h_j denote the heights of C_i and C_j, respectively, as illustrated in Fig. 4. Assuming the road surface is even, the taillight pair should lie at nearly the same horizontal level. Thus, if the vertical distance d between the centers of C_i and C_j is greater than both h_i and h_j, the two candidates C_i and C_j should not be paired. Inspired by [22], this Y-distance test T_yd(C_i, C_j), which returns whether the pair (C_i, C_j) should be retained or not, can be formulated as

$$T_{yd}(C_i, C_j) = \begin{cases} 0, & \text{if } d > h_i \text{ and } d > h_j, \\ 1, & \text{otherwise.} \end{cases} \tag{1}$$

Fig. 4. Y-distance test.

2) Size-Shape Test: Considering that a pair of taillights should be of similar size and shape, we compute the area ratio G of the intersection of C_i and C_j to their union for size-shape similarity evaluation, formulated as

$$G = \frac{|C_i \cap C_j|}{|C_i \cup C_j|}. \tag{2}$$

As shown in Fig. 5, if C_i and C_j are of similar size and shape, a high G value will be obtained; otherwise, G will be low. Thus, a pair (C_i, C_j) with a low G value should be filtered out. This size-shape test can be formulated as

$$T_{ss}(C_i, C_j) = \begin{cases} 0, & \text{if } G < \tau_s, \\ 1, & \text{otherwise.} \end{cases} \tag{3}$$

Fig. 5. Size-shape test.
3) Pair Selection: Through the above tests, if no taillight pair is retained, the detected vehicle candidate can be regarded as a false detection, and no subsequent brake signal detection needs to be conducted on it. Otherwise, we have to select and verify the most appropriate pair from the retained taillight pair(s). Since the taillights are usually the largest regions of high red chromaticity at the rear of a vehicle, we define a criterion H, which computes the area proportion of C_i and C_j relative to all candidates (C_1, C_2, ..., C_N) in the detected vehicle region, as

$$H = \frac{|C_i| + |C_j|}{|C_1| + |C_2| + \cdots + |C_N|}, \tag{4}$$

where N indicates the number of all taillight candidates in the vehicle region. A higher H value indicates that C_i and C_j are more likely to be the taillight pair. Taking both the size-shape and area-proportion characteristics into consideration, we define a pairing score P as

$$P = \rho G + (1 - \rho) H, \tag{5}$$

where ρ is a weighting parameter ranging from 0 to 1. To regard both characteristics as equally important, we set ρ = 0.5 in our experiments. Finally, the pair with the highest pairing score P_max is selected, and is verified as the taillight pair of the detected vehicle if P_max is greater than a threshold; otherwise, it is concluded that the vehicle candidate contains no verified taillight pair and is a false detection to be discarded.

For example, the detected vehicle region in Fig. 6(a) has two candidate pairs passing the aforementioned tests, as shown in Fig. 6(b). With a higher pairing score, the upper pair is selected and verified as the taillight pair of the vehicle.

Fig. 6. Example of taillight pairing. (a) Detected vehicle region. (b) Candidate pairs.
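The three criteria can be sketched as follows. One detail the paper leaves open is how |C_i ∩ C_j| is computed for two non-overlapping regions; here the bounding boxes are overlaid at a common origin before intersecting, which is one plausible reading of Fig. 5.

```cpp
// Sketch of the pairing tests (1)-(3) and the pairing score (4)-(5),
// operating on candidate bounding boxes.
#include <opencv2/core.hpp>
#include <vector>
#include <cmath>

// Y-distance test (1): reject a pair whose vertical center distance
// exceeds both candidate heights.
bool yDistanceTest(const cv::Rect& ci, const cv::Rect& cj)
{
    double d = std::abs((ci.y + ci.height / 2.0) - (cj.y + cj.height / 2.0));
    return !(d > ci.height && d > cj.height);
}

// Size-shape ratio G of (2): overlay both boxes at a common origin and
// take intersection over union of the aligned shapes.
double sizeShapeRatio(const cv::Rect& ci, const cv::Rect& cj)
{
    cv::Rect a(0, 0, ci.width, ci.height), b(0, 0, cj.width, cj.height);
    double inter = (a & b).area();
    double uni = a.area() + b.area() - inter;
    return uni > 0.0 ? inter / uni : 0.0;
}

// Pairing score P of (5) for candidates i and j, with H computed per (4).
double pairingScore(const std::vector<cv::Rect>& cand, int i, int j,
                    double rho = 0.5)
{
    double total = 0.0;
    for (size_t k = 0; k < cand.size(); ++k)
        total += cand[k].area();
    double H = (cand[i].area() + cand[j].area()) / total;
    double G = sizeShapeRatio(cand[i], cand[j]);
    return rho * G + (1.0 - rho) * H;    // P = rho*G + (1 - rho)*H
}
```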

C. Potential Brake Light Detection

In the daytime, it is very challenging to recognize whether a brake light is activated merely by luminance features, due to the high ambient light and frequent light changes. To overcome this problem, some other characteristic(s), in addition to luminance, should be taken into account. Although there is no regulation on the shape of brake lights, they generally follow a symmetric pattern. Thus, inspired by [16], we detect radially symmetric blobs (RSBs) in the taillight regions as potential brake lights.

We first compare vehicle images with activated and inactivated brake lights, as shown in Fig. 7. Two phenomena can be observed when a brake light is activated: (i) the color of the brake light region moves far from the original red in a video frame due to its very high luminance, and (ii) brake lights appear as bright spots with a red halo around them due to the "blooming effect."

Fig. 7. Vehicle images with (a) activated brake lights and (b) inactivated brake lights.

Based on these characteristics, a novel scheme for potential brake light detection is proposed, as illustrated in Fig. 8.

Fig. 8. Flowchart of potential brake light detection.

First, the a∗ component image I_a of each taillight region R is extracted. Prior to the scan for symmetric shapes, a simple step function S(u) is applied to the a∗ component image I_a in order to retain only the pixels of high red chromaticity, as defined by

$$S(u) = \begin{cases} 1, & \text{if } u \ge \tau_a, \\ 0, & \text{otherwise.} \end{cases} \tag{6}$$

Using (6), the a∗ component image I_a is filtered by

$$I_s = I_a \times S(I_a). \tag{7}$$
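In OpenCV terms, (6)-(7) correspond directly to a to-zero threshold, as the small sketch below shows (8-bit a∗ image assumed; THRESH_TOZERO keeps values strictly above the threshold, which differs from (6) only at the boundary value).

```cpp
// Sketch of (6)-(7): keep only high-red-chromaticity pixel values of I_a.
#include <opencv2/imgproc.hpp>

cv::Mat stepFilter(const cv::Mat& aStar, double tauA)
{
    cv::Mat is;
    cv::threshold(aStar, is, tauA, 0 /*ignored for TOZERO*/, cv::THRESH_TOZERO);
    return is;   // I_s = I_a x S(I_a)
}
```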

Then, we apply the fast radial symmetry transform (FRST) [29] to search for symmetric shapes in I_s. Since an activated brake light is usually of a color close to white, leading to lower a∗ values of the corresponding pixels than those of a non-activated brake light, we consider only the negative-affected pixels¹ in the FRST. For noise tolerance and miss avoidance, a low radial-strictness parameter α = 2 is used in our system. The symmetry detection scans for shapes over a range of sizes (from r_min to r_max). Normally, in order to detect symmetric shapes of any size, a large range of sizes should be used, which, however, requires high computational cost. Thus, a reasonable range of r_min = 1 and r_max = 5, which can cover possible brake light sizes, is used in our system. Finally, the result of the FRST, termed I_f, is converted into a binary image I_o by adopting the efficient Otsu thresholding algorithm [30], owing to its optimal threshold setting in bimodal images.

¹The negative-affected pixel is the pixel a certain distance away that the gradient is pointing directly away from. For more details, please refer to [29].
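For reference, the following is a compact, unoptimized sketch of the FRST of [29] restricted to negative-affected pixels, as the text describes. The k_n values follow [29]; the gradient-magnitude gate gradThresh and the Gaussian width used to approximate the smoothing kernel A_n are simplifications of our own, so treat this as an illustration of the transform rather than the authors' implementation. Otsu's step then amounts to normalizing the result to 8 bits and calling cv::threshold with THRESH_BINARY | THRESH_OTSU.

```cpp
// Sketch: fast radial symmetry transform for dark radially symmetric blobs,
// using only negative-affected pixels (Loy and Zelinsky [29]).
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <algorithm>

cv::Mat frstDark(const cv::Mat& is8u, int rMin = 1, int rMax = 5,
                 double alpha = 2.0, double gradThresh = 10.0)
{
    cv::Mat gx, gy;
    cv::Sobel(is8u, gx, CV_64F, 1, 0);
    cv::Sobel(is8u, gy, CV_64F, 0, 1);

    cv::Mat S = cv::Mat::zeros(is8u.size(), CV_64F);
    for (int n = rMin; n <= rMax; ++n) {
        cv::Mat O = cv::Mat::zeros(is8u.size(), CV_64F);   // orientation votes
        cv::Mat M = cv::Mat::zeros(is8u.size(), CV_64F);   // magnitude votes
        for (int y = 0; y < is8u.rows; ++y)
            for (int x = 0; x < is8u.cols; ++x) {
                double dx = gx.at<double>(y, x), dy = gy.at<double>(y, x);
                double mag = std::sqrt(dx * dx + dy * dy);
                if (mag < gradThresh) continue;            // ignore weak edges
                // Negative-affected pixel: n steps *against* the gradient,
                // i.e., toward the dark side of the edge.
                int px = x - (int)std::lround(dx / mag * n);
                int py = y - (int)std::lround(dy / mag * n);
                if (px < 0 || py < 0 || px >= is8u.cols || py >= is8u.rows)
                    continue;
                O.at<double>(py, px) += 1.0;
                M.at<double>(py, px) += mag;
            }
        const double kn = (n == 1) ? 8.0 : 9.9;            // per [29]
        cv::Mat Fn = cv::Mat::zeros(is8u.size(), CV_64F);
        for (int y = 0; y < is8u.rows; ++y)
            for (int x = 0; x < is8u.cols; ++x) {
                double o = std::min(O.at<double>(y, x), kn);
                // F_n = (|O~_n|/k_n)^alpha * (M_n/k_n), radial strictness alpha
                Fn.at<double>(y, x) =
                    std::pow(o / kn, alpha) * M.at<double>(y, x) / kn;
            }
        cv::GaussianBlur(Fn, Fn, cv::Size(), 0.25 * n + 0.5);  // approximates A_n
        S += Fn;
    }
    return S / (rMax - rMin + 1);   // average over the scanned radii
}
```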
Two examples of the above RSB detection are demonstrated in Fig. 9. Figs. 9(a) and (b) are the detected vehicle images with activated and inactivated brake lights, respectively. Figs. 9(c) and (d) show the image I_s of each taillight region. One can observe that, besides the non-red areas of taillights, activated brake lights yield additional holes at their bright regions in I_s in comparison with inactivated brake lights. Such holes show high symmetry in Fig. 9(e), which gives the FRST result obtained by considering only negative-affected pixels; the red-to-blue coloring indicates high-to-low shape symmetry. As for inactivated brake lights, only non-red regions are of high symmetry, as shown in Fig. 9(f). This significant characteristic is our primary concern and is subsequently utilized to determine whether the brake light is activated or not (to be explained later). After applying the Otsu thresholding algorithm, more than one blob, as well as some noise, may be obtained, as shown in Figs. 9(g) and (h). Therefore, the next step is to select an appropriate blob as a potential brake light in each taillight region.

Fig. 9. Examples of radially symmetric blob detection. (a), (b) Detected vehicle image. (c), (d) I_s: the a∗ component image I_a filtered by a step function. (e), (f) Pseudo-color presentation of I_f: the results of the FRST. (g), (h) I_o: the binary image obtained by applying Otsu's thresholding to I_f. (i), (j) I_b: result of potential brake light selection.

For noise removal, we first apply morphological operations and eliminate small blobs. Although vehicles differ in appearance, with different styles and designs of taillights, they must adhere to automotive regulations [15]. In general, if a taillight has the shape of a horizontal rectangle, i.e., its width w is larger than its height h, the brake light will be located near the outer side of the vehicle to better attract attention, as shown in Fig. 9(a). Otherwise, for a taillight shaped as a vertical rectangle, i.e., h > w, the brake light will be located at the top side, as shown in Fig. 10(a), wherein the lower bright blob in the right taillight region is an activated turn signal. Based on this observation, we retain the topmost blob in a w×h taillight region if h > w, as shown in Fig. 10; otherwise, the outermost one is retained. There is also the case that one taillight region contains more than one brake light. For a taillight region containing double or multiple brake lights, as shown in Fig. 11, retaining the outermost/topmost one is sufficient for the subsequent brake light status determination. Some results of this potential brake light selection, termed I_b, are presented in Figs. 9(i), 9(j), 10(f), and 11(f).

Fig. 10. Brake light detection for a vehicle with taillights in the shape of a vertical rectangle (height > width). (a) Detected vehicle image. (b) Obtained taillight region. (c) I_s. (d) Pseudo-color presentation of I_f. (e) I_o. (f) I_b.

Fig. 11. Brake light detection for a vehicle with double brake lights in each taillight region. (a) Detected vehicle image. (b) Obtained taillight region. (c) I_s. (d) Pseudo-color presentation of I_f. (e) I_o. (f) I_b.
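The topmost/outermost rule can be sketched as below; which side counts as "outer" depends on whether the region is the left or the right taillight, which we pass as a flag (an implementation detail the paper does not spell out).

```cpp
// Sketch of potential-brake-light blob selection within one taillight region.
#include <opencv2/core.hpp>
#include <vector>

cv::Rect selectPotentialBrakeLight(const std::vector<cv::Rect>& blobs,
                                   const cv::Size& taillightSize,
                                   bool isLeftTaillight)
{
    cv::Rect best = blobs.front();                 // assumes at least one blob
    for (size_t k = 1; k < blobs.size(); ++k) {
        const cv::Rect& b = blobs[k];
        if (taillightSize.height > taillightSize.width) {
            if (b.y < best.y) best = b;            // vertical lamp: topmost
        } else if (isLeftTaillight) {
            if (b.x < best.x) best = b;            // leftmost = outermost
        } else {
            if (b.br().x > best.br().x) best = b;  // rightmost = outermost
        }
    }
    return best;
}
```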

D. Brake Light Status Determination

The detected RSBs can successfully extract the bright regions of illuminated brake lights. However, when brake lights are not activated, the RSBs are located at non-red symmetric areas, as shown in Fig. 9(j). In comparison, such non-red areas are less bright than the RSBs detected in activated brake lights. Therefore, we utilize the luminance feature and count the number of high intensity pixels in an RSB as the feature for brake light status determination. For feature normalization, the vehicle width is also taken into consideration. More explicit processing steps are explained as follows.

The taillight region R of the original video frame is first converted to a grayscale image I_R. We take I_b (the result of potential brake light selection) as a mask and apply it to I_R, so that the pixels in I_R corresponding to zero entries in I_b are set to zero, as formulated by

$$I_R' = M_{I_b}(I_R), \tag{8}$$

where M_{I_b}(·) denotes the masking operation with I_b. Then, we use a simple step function B(u) to discard the pixels with intensity less than a threshold τ_b, and count the number N_b of the remaining high intensity pixels, as formulated by

$$B(u) = \begin{cases} 1, & \text{if } u \ge \tau_b, \\ 0, & \text{otherwise,} \end{cases} \tag{9}$$

$$N_b = \left\| B(I_R') \right\|, \tag{10}$$

where ‖·‖ returns the number of nonzero pixels.

An example is given in Fig. 12, wherein the horizontal and vertical axes of the chart indicate the frame number and the number of high intensity pixels, respectively. Some corresponding detection results of vehicles, taillight regions, and RSBs are presented below the chart. One can see that when the brake light is not activated, e.g., f = 56 and 192, the RSBs are located at non-red regions and consequently contain very few high intensity pixels. Once the brake lights are activated, e.g., f = 65, 132, and 215, the detected RSBs appropriately match the brake lights, resulting in an abrupt increase in the number of high intensity pixels.

Fig. 12. Number of high intensity pixels over time, with some corresponding detection results of vehicles, taillight regions, and RSBs given below.

However, the varying distance between the preceding vehicle and the camera affects the measure of N_b. Thus, we compute the horizontal distance between the outer sides of the RSB pair as the vehicle width w_v, and produce a normalized feature N as²

$$N = N_b / w_v^2. \tag{11}$$

²The square of w_v is used because the area of a brake light region is proportional to the square of the vehicle width.

Then, the status of a brake light is determined as activated if its N is greater than a threshold τ_N. If both the left and right brake lights are activated, it can be concluded that the preceding vehicle is braking.
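Equations (8)-(11) and the final decision reduce to a few OpenCV calls; the sketch below hard-codes τ_b = 180 and τ_N = 2 × 10⁻⁴ as defaults, the values reported in Sec. IV. The vehicle is declared braking only when this test passes for both taillights.

```cpp
// Sketch of brake light status determination per (8)-(11).
#include <opencv2/imgproc.hpp>

bool brakeLightOn(const cv::Mat& taillightGray,  // I_R, 8-bit grayscale
                  const cv::Mat& blobMask,       // I_b, nonzero inside the RSB
                  double vehicleWidthPx,         // w_v, from the RSB pair
                  double tauB = 180.0, double tauN = 2e-4)
{
    cv::Mat masked;
    taillightGray.copyTo(masked, blobMask);           // (8): I'_R = M_Ib(I_R)

    cv::Mat bright;                                   // (9): keep pixels >= tau_b
    cv::threshold(masked, bright, tauB - 1.0, 255, cv::THRESH_BINARY);
    double nb = cv::countNonZero(bright);             // (10): N_b

    double n = nb / (vehicleWidthPx * vehicleWidthPx);  // (11): N = N_b / w_v^2
    return n > tauN;
}
```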
III. DETECTION REFINEMENT USING TEMPORAL INFORMATION

In the previous section, the detection is conducted in a single frame. Since a DVR captures successive frames, temporal information can be exploited for further refinement of the detection process.

Let V_{t−1}^j and V_t^i be the vehicles detected in two successive frames F_{t−1} and F_t, respectively. Consider that the position of a vehicle does not change very much between successive frames. Therefore, if more than 50% of V_{t−1}^j is overlapped by V_t^i,³ we can regard V_{t−1}^j and V_t^i as the same vehicle, and we call this vehicle tracked. A vehicle V_t^i without a corresponding detection in the previous frame is regarded as a new detection.

³On average, the width of a vehicle is no less than 1.65 m. To move out of the 50% overlap threshold within the duration of two successive frames at 30 fps, a vehicle must reach a lateral speed (relative to the camera) higher than 89.1 km/h (= 50% × 1.65 m × 30/s = 24.75 m/s), which rarely happens even on a superhighway.

For a tracked vehicle V_{t−1}^j in the previous frame, if no corresponding detection can be found in the current frame, there is probably a vehicle detection miss, because a vehicle rarely disappears suddenly. In this case, we perform taillight extraction and pairing within the corresponding region of V_{t−1}^j in the current frame. If one verified taillight pair can be obtained, we claim that the vehicle V_{t−1}^j is detected at the same position in the current frame, as V_t^i. With such temporal information-based refinement, some misses in HOG-based vehicle detection can be recovered. A flowchart of this process is given in Fig. 13, and the detailed algorithm is explained in Algorithm 1.

Fig. 13. Flowchart of the detection refinement algorithm.

Moreover, a similar mechanism can be applied to taillight extraction. For a tracked vehicle V_{t−1} with verified taillights L_{t−1} and R_{t−1} in the previous frame F_{t−1}, as shown in Fig. 14(a), we can search for the taillights in the current frame F_t around the positions of L_{t−1} and R_{t−1}, instead of the whole

vehicle region, as shown in Fig. 14(b). More specifically, the search region for L_t (or R_t) in F_t is set at the center of L_{t−1} with triple the height and width of L_{t−1}, as illustrated in Fig. 14(c). After processing steps similar to those in Sec. II-B, we only have to check whether the largest RSBs in the left and right search regions can be paired. The corresponding results are shown in Figs. 14(d)-(f). With the aid of temporal information, not only can the search regions for taillight candidates be reduced, but more reliable taillight pairs are obtained, effectively improving both computational efficiency and detection accuracy.

Fig. 14. Enhancing taillight extraction with the aid of temporal information. (a) A tracked vehicle V_{t−1} with verified taillights L_{t−1} and R_{t−1} in the previous frame F_{t−1}. (b) Searching taillights in the current frame F_t around the positions of L_{t−1} and R_{t−1}. (c) Illustration of search region setting for taillight candidates. (d) The a∗ component image of each search region. (e) Binarization of (d). (f) Taillight candidates obtained after morphological operations and connected component analysis.

Algorithm 1 Detection Refinement Algorithm
Input: V = {V_t^0, V_t^1, ..., V_t^m}: the vehicles detected in the current frame F_t by the HOG-based approach.
Output: V: refined vehicle detection results.
 1: for each detected vehicle V_t^i in the current frame F_t
 2:   V_t^i.tracked = false;
 3:   Find in the previous frame F_{t−1} the detected vehicle V_{t−1}^j which is closest to V_t^i;
 4:   if (|V_t^i ∩ V_{t−1}^j| / |V_{t−1}^j| > 50%) then
 5:     if V_{t−1}^j has not been linked to any vehicle in F_t then
          /* V_{t−1}^j and V_t^i are regarded as the same vehicle. */
 6:       Link V_{t−1}^j to V_t^i;
 7:       V_t^i.tracked = true;
 8:     else if V_{t−1}^j has been linked to a vehicle V_t^q in F_t then
          /* Compare the overlap areas and assign V_{t−1}^j to the one with the larger overlap area. */
 9:       if (|V_t^i ∩ V_{t−1}^j| > |V_t^q ∩ V_{t−1}^j|) then
10:         Link V_{t−1}^j to V_t^i;
11:         V_t^i.tracked = true;
12:       end if
13:     end if
14:   end if
15:   if (!V_t^i.tracked) then /* no vehicle is linked to V_t^i */
16:     Consider V_t^i as a new detection;
17: end for /* from line 1 */
18: for each V_{t−1}^j in F_{t−1}
19:   if (V_{t−1}^j.tracked and V_{t−1}^j links to no vehicle in F_t) then
          /* This implies a potential miss. */
20:     Search taillight pairs {P_1, P_2, ...} in F_t within the bounding region of V_{t−1}^j;
21:     for each taillight pair P_k
          /* Find if there exists a verified taillight pair. */
22:       if (P_k is verified) then
23:         Add to V the vehicle detection V_t^i with the same bounding region as V_{t−1}^j; /* recover a miss */
24:         Link V_{t−1}^j to V_t^i;
25:         V_t^i.tracked = true;
26:         break;
27:       end if
28:     end for /* from line 21 */
29:   end if
30: end for /* from line 18 */
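The overlap predicate of Algorithm 1 (line 4) and the taillight search window of Fig. 14(c) are both one-liners on cv::Rect, sketched below (frame-boundary clipping omitted for brevity).

```cpp
// Sketch: overlap test of Algorithm 1 (line 4) and the search region of
// Fig. 14(c), which scales the previous taillight box by 3 about its center.
#include <opencv2/core.hpp>

bool sameVehicle(const cv::Rect& prev, const cv::Rect& curr)
{
    double overlap = (prev & curr).area();   // intersection area
    return overlap / prev.area() > 0.5;      // >50% of V_{t-1} is covered
}

cv::Rect taillightSearchRegion(const cv::Rect& prevTaillight)
{
    const cv::Point c(prevTaillight.x + prevTaillight.width / 2,
                      prevTaillight.y + prevTaillight.height / 2);
    const int w = 3 * prevTaillight.width, h = 3 * prevTaillight.height;
    return cv::Rect(c.x - w / 2, c.y - h / 2, w, h);
}
```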
IV. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed daytime brake light detection system, experiments are conducted on a video dataset which we collected using front-mounted DVRs. In total, there are 36 video sequences (about 1 to 5 minutes per clip), among which 22 sequences are used for training and the other 14 for testing. The video resolution is 1920 × 1080 and the frame rate is 30 fps. The settings of the parameters used in our proposed system are shown in Table I, wherein the parameters τ_a, τ_s, ρ, α, r_min, and r_max are determined empirically based on our collected DVR video dataset, and the other parameters τ_b and τ_N are learned from training data (to be discussed in Sec. IV-B).

TABLE I
PARAMETER SETTING USED IN THE PROPOSED SYSTEM

A. Results of Preceding Vehicle Detection

For vehicle detection, a HOG-based approach is conducted for vehicle candidate extraction, and then taillight symmetry verification is performed to reduce false alarms. The number

of cascade stages of boosted classifiers with HOG features is 15, and the maximum false alarm rate is set to 0.5 per stage. For classifier training, 550 vehicle images are manually segmented from the training video sequences as positive samples, and 1100 non-vehicle images are automatically generated as negative ones. In the 14 test video sequences, there are 4699 frames containing a total of 7394 vehicles, of which 6545 are within the preset ROI. We manually draw the bounding box of each vehicle over the frames as ground truth. A detection is regarded as correct if the detected vehicle overlaps a ground truth vehicle and the ratio of their intersection to their union is greater than 70%.
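This correctness criterion is the standard intersection-over-union test, e.g.:

```cpp
// Sketch: a detection is correct if IoU with the ground-truth box exceeds 70%.
#include <opencv2/core.hpp>

bool isCorrectDetection(const cv::Rect& det, const cv::Rect& gt)
{
    double inter = (det & gt).area();
    double uni = det.area() + gt.area() - inter;
    return uni > 0.0 && inter / uni > 0.7;
}
```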
TABLE II
RESULTS OF VEHICLE DETECTION

The results of vehicle detection are presented in Table II, wherein four methods, (i) VD, (ii) TSV, (iii) TIR, and (iv) TIRW, are compared. (i) VD, which performs HOG-based vehicle detection only, achieves a high detection rate of 90.88%: 5948 vehicles out of the total 6545 can be correctly detected. However, if no additional vehicle verification process is conducted, VD produces numerous false detections, yielding a high false alarm rate of up to 42.61%. (ii) Conducting HOG-based detection followed by a taillight symmetry verification process, TSV effectively reduces the false alarm rate to 3.18%. However, some correct detections yielded by VD may be eliminated by TSV because no appropriate taillight pair is verified, resulting in a lower detection rate of 79.42%. (iii) Applying temporal information-based refinement, TIR is capable of miss recovery and raises the detection rate to 89.38% with a low false alarm rate of 1.5%. (iv) Besides, we also conduct TIR to detect vehicles within the whole video frame (termed TIRW), instead of the preset ROI only. In this way, the detection and false alarm rates are 82.35% and 2.87%, respectively. One major reason for the performance degradation of TIRW is that, compared to the vehicles within the centered ROI, the vehicles to the side are easier to miss due to the oblique viewing angle from the camera to those vehicles.

The vehicle detection results demonstrated in Fig. 15 verify the effectiveness of the proposed approach under both high and low luminance conditions. There are still some false alarms yielded by improperly pairing red objects in the surrounding environment, as indicated by the red arrow in Fig. 16(a) and the left arrow in Fig. 16(b). The right arrow in Fig. 16(b) indicates another example of incorrect taillight pairing.

Fig. 15. Results of vehicle detection (a) under high luminance conditions and (b) under low luminance conditions or in shadow.

Fig. 16. Examples of false detection due to (a) improper pairing of red objects around the bus and (b) incorrect taillight pairing.

Figs. 17(a) and (b) show examples of detection misses, wherein the preceding vehicles indicated by arrows cannot be detected because their taillights in the frames are not within our preset color range due to very low luminance. However, once their brake lights are activated, the taillight color falls within the preset color range, and these vehicles can thus be detected correctly, as shown in Figs. 17(c) and (d). This phenomenon is also verified by the results given in Table III: the detection rate of vehicles with inactivated brake lights is 83.93%, but with activated brake lights, up to 95.88% of vehicles can be successfully detected.

Fig. 17. Examples of detection misses, wherein the vehicles missed in (a) and (b) can be detected in (c) and (d), respectively, when the brake lights are activated.

TABLE III
RESULTS OF VEHICLE DETECTION WITH INACTIVATED/ACTIVATED BRAKE LIGHTS

B. Results of Brake Light Detection

To show the effectiveness of our proposed brake light detection approach as well as the parameter setting, we compute the intensity histograms of detected RSBs from 500 activated

brake lights and 500 inactivated ones. The histograms are given in Fig. 18, wherein the horizontal and vertical axes denote the pixel intensity and frequency, respectively. The green and blue histograms represent activated and inactivated brake lights, respectively. It is readily observable that the threshold τ_b = 180, indicated by the red dotted line, is effective for differentiating the two cases.

Fig. 18. Intensity histograms of RSBs from 500 activated brake lights and 500 inactivated ones.

For qualitative evaluation, some brake light detection results are presented in Fig. 19. The first column shows a sequence of successive video frames, wherein fuchsia rectangles mark detected vehicles, and green/yellow rectangles indicate inactivated/activated brake lights. The second column shows the close-up view of each detected taillight, wherein the blue-lined region indicates the detected RSB. The third column gives the intensity histograms of the detected RSBs of the left and right taillights (marked with green and fuchsia, respectively). In this test sequence, the brake lights of the preceding yellow vehicle are not activated in the first two frames.⁴ From Figs. 19(1.b)-(2.b), one can see that the detected RSBs do not match the brake lights, and the intensity values of the pixels in the RSBs are not high, as shown in Figs. 19(1.c)-(2.c). When the brake lights are activated, the RSBs can be appropriately located at the bright regions of the brake lights, as shown in Figs. 19(3.b)-(6.b), resulting in relatively larger intensity values of the pixels in the RSBs. Accordingly, activated brake lights can be effectively discriminated from inactivated ones.

⁴The discussion here focuses on the preceding yellow vehicle. In fact, the activated brake lights of the black vehicle in the left lane can also be successfully detected.

Fig. 19. (a) Frames showing the status transition from non-braking to braking. (b) Close-up views of segmented taillight regions, wherein the blue-lined regions indicate detected RSBs. (c) Intensity histograms of the corresponding RSBs in (b).

Fig. 20 demonstrates the results of another test sequence. In the first three frames, the preceding vehicle is under the shadow of a building, as shown in Figs. 20(1.a)-(3.a), and its brake light is activated in the second and third frames. For the last three frames, the preceding vehicle leaves the shadow, encountering a sudden change of environmental light, as shown in Figs. 20(4.a)-(6.a). The lighting change leads to no false alarm, since it causes an intensity increase over whole taillight regions, as shown in Figs. 20(4.b)-(6.b), so that the proposed system does not produce RSBs with sufficient bright pixels, as shown in Figs. 20(4.c)-(6.c).

Fig. 20. (a) Frames of a test sequence with a sudden change of environmental light. (b) Close-up views of segmented taillight regions, wherein the blue-lined regions indicate detected RSBs. (c) Intensity histograms of the corresponding RSBs in (b).

The quantitative evaluation of brake light detection is presented in Table IV, wherein different values of the threshold τ_N are tested. The term "ground truth" indicates that 2985 vehicles, out of the total 6546 in all test frames, have activated brake lights. Continuing from the preceding vehicle detection, the term "correct detection" means that both the vehicle and its activated brake lights are successfully detected. Note that since some vehicles with activated brake lights have already been missed in the preceding process, the detection rate here cannot reach 100% no matter what value τ_N takes. The results show that both the detection rate and the false alarm rate increase with τ_N. In our system, a recommended τ_N value is 2 × 10⁻⁴, which can achieve a satisfactory detection rate of 87.60% with an acceptably low false alarm rate of no more than 2%. By our statistics, 72 misses and 21 false alarms, out of the totals of 370 and 53 respectively, can be recovered in subsequent frames. The threshold τ_N can also be user-adjusted, since some drivers may prefer a higher detection rate and tolerate more false alarms, while others may lose concentration on driving if there are too many false alarms.

TABLE IV
RESULTS OF BRAKE LIGHT DETECTION

Fig. 21 shows some examples of incorrect brake light detection. In Figs. 21(a) and (b), the brake signals of the vehicles indicated by red arrows are not detected due to inappropriate taillight extraction and pairing. For the white vehicle in Fig. 21(c), the bright region of the right brake light is hardly visible due to the oblique view of the vehicle's back, causing a detection miss. In Fig. 21(d), the white vehicle is so overexposed that it is difficult to recognize whether the brake light is activated or not.

Fig. 21. Examples of incorrect brake light detection due to (a)-(b) inappropriate taillight extraction and pairing, (c) oblique view of the vehicle's back, and (d) overexposure.

Intensity, radial symmetry, and entropy are important characteristics for daytime brake light detection. Based on these characteristics, several features can be designed. Hence, for comparison we implement eight other brake light detection methods using different features, as summarized in Table V, and report how these methods perform.

TABLE V
SUMMARY OF THE FEATURES USED IN THE COMPARED METHODS AND OUR PROPOSED METHOD

Method-1, as proposed in [22], computes the average pixel intensity in each taillight region. The changes of this intensity average are used to infer a light state transition: either on-to-off or off-to-on. If both the left and right taillight regions demonstrate an off-to-on state transition, the signal is classified as braking. Method-2 computes the average pixel intensity in the RSB of each taillight region. If the intensity averages of both the left and right RSBs are greater than a threshold, a braking event is detected. Method-3 considers the number of pixels in the RSB of each taillight region as a feature. If the pixel numbers of both the left and right RSBs are greater than a threshold, the vehicle is considered to be braking. Method-4 computes the number of high-intensity pixels in each taillight region. Similarly, if both the left and right taillight regions contain a sufficient number of high-intensity pixels, a braking event is detected. Method-5 (Method-6) computes the intensity entropy (gradient entropy) of the taillight region. A braking event is detected if the entropies of both taillight regions are greater than a threshold. Method-7 (Method-8) computes the intensity entropy (gradient entropy) of the RSB, and then applies thresholding for braking/non-braking classification.
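The paper does not give a formula for the entropy features of Methods 5-8; the sketch below shows one standard reading, the Shannon entropy of a 256-bin grayscale histogram over the region (for the gradient entropy, the same histogram would be built over gradient magnitudes instead).

```cpp
// Sketch: intensity entropy of an 8-bit region, as used by Method-5/7.
#include <opencv2/core.hpp>
#include <cmath>

double intensityEntropy(const cv::Mat& regionGray)
{
    int hist[256] = {0};
    for (int y = 0; y < regionGray.rows; ++y)
        for (int x = 0; x < regionGray.cols; ++x)
            ++hist[regionGray.at<uchar>(y, x)];

    const double total = (double)regionGray.total();
    double e = 0.0;
    for (int i = 0; i < 256; ++i)
        if (hist[i] > 0) {
            double p = hist[i] / total;
            e -= p * std::log2(p);   // Shannon entropy in bits
        }
    return e;
}
```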
The comparison of these methods is evaluated using the Receiver Operating Characteristic (ROC), where the

independent variable is the feature threshold. The ROC curves are shown in Fig. 22, wherein the horizontal and vertical axes denote the false positive rate (FPR) and the true positive rate (TPR), respectively. Please note that the highest TPR and FPR cannot reach 100%; they are 95.88% and 83.93%, respectively, because some vehicles with activated or inactivated brake lights have already been missed in the preceding vehicle detection. An ROC curve closer to the top left corner indicates better performance. The comparison shows that our proposed method outperforms the others. Method-4, which performs second best, can also be a good choice for daytime brake light detection, since it can achieve higher TPRs if more false positives are tolerated.

Fig. 22. Performance comparison using the Receiver Operating Characteristic.

The proposed system is implemented in C++ with the OpenCV 2.4.5 libraries and tested on a desktop computer under a 64-bit Windows 7 OS with an Intel i7-3770 CPU @ 3.4 GHz and 8 GB RAM. Considering the computation cost, the proposed system can perform brake light detection for a frame in 0.11 seconds, i.e., a frame rate of about 9 fps.

V. CONCLUSION

Since distracted driving has been one of the main causes of traffic accidents, more and more car owners thirst for assistance systems that warn drivers of potential hazards. Signaling vehicle deceleration and potential collision, brake lights are of vital importance. In this paper, we propose a vision-based approach for daytime brake light detection using a driving video recorder. Although the visual features, motions, and appearances of vehicles are highly visible at daytime, brake lights are inconspicuous due to their low contrast against the ambient light. Besides, the light scattering effect is not as obvious as it is at night, making it more difficult to detect brake lights at daytime. Hence, we propose a hierarchical scheme in which preceding vehicles are first extracted with taillight symmetry verification, and then brake light detection is conducted by integrating both luminance and radial symmetry features. A detection refinement process using temporal information is also employed for miss recovery. Experiments on a video data set collected in real traffic environments by front-mounted driving video recorders show that the proposed system can effectively detect brake lights at daytime with a detection rate of up to 87.6%, showing its good feasibility in real-world environments.

Currently, we are working on improving the computational efficiency to detect a braking event more promptly. On the other hand, some modern vehicles have three brake lights: two on the sides and a third one in the middle. It will be our future work to take the third light into consideration to enhance the detection of braking events. Another future research direction is to develop a deep learning approach for daytime brake light detection, which may yield better performance if big training data are used.

REFERENCES

[1] J. C. McCall and M. M. Trivedi, "Video-based lane estimation and tracking for driver assistance: Survey, system, and evaluation," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1, pp. 20–37, Mar. 2006.
[2] A. Borkar, M. Hayes, and M. T. Smith, "A novel lane detection system with efficient ground truth generation," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 1, pp. 365–374, Mar. 2012.
[3] M. M. Trivedi, T. Gandhi, and J. McCall, "Looking-in and looking-out of a vehicle: Computer-vision-based enhanced vehicle safety," IEEE Trans. Intell. Transp. Syst., vol. 8, no. 1, pp. 108–120, Mar. 2007.
[4] X. Baró, S. Escalera, J. Vitriá, O. Pujol, and P. Radeva, "Traffic sign recognition using evolutionary Adaboost detection and forest-ECOC classification," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 1, pp. 113–126, Mar. 2009.
[5] G. Siogkas, E. Skodras, and E. Dermatas, "Traffic lights detection in adverse conditions using color, symmetry and spatiotemporal information," in Proc. Int. Conf. Comput. Vis. Theory Appl., 2012, pp. 620–627.
[6] J.-H. Park and C.-S. Jeong, "Real-time signal light detection," in Proc. 2nd Int. Conf. Future Generat. Commun. Netw. Symp., 2008, pp. 139–142.
[7] J. Gong, Y. Jiang, G. Xiong, C. Guan, G. Tao, and H. Chen, "The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles," in Proc. IEEE Intell. Vehicles Symp., Jun. 2010, pp. 431–435.
[8] M. R. Yelal, S. Sasi, G. R. Shaffer, and A. K. Kumar, "Color-based signal light tracking in real-time video," in Proc. IEEE Int. Conf. Video Signal Surveill., Nov. 2006, p. 67.
[9] W. Liu, X. Wen, B. Duan, H. Yuan, and N. Wang, "Rear vehicle detection and tracking for lane change assist," in Proc. IEEE Intell. Vehicles Symp., Jun. 2007, pp. 252–257.
[10] C. Caraffi, T. Vojir, J. Trefny, J. Sochman, and J. Matas, "A system for real-time detection and tracking of vehicles from a single car-mounted camera," in Proc. 15th Int. IEEE Conf. Intell. Transp. Syst., Sep. 2012, pp. 975–982.
[11] Z. Sun, G. Bebis, and R. Miller, "Monocular precrash vehicle detection: Features and classifiers," IEEE Trans. Image Process., vol. 15, no. 7, pp. 2019–2034, Jul. 2006.
[12] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: A review," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694–711, May 2006.
[13] Y.-L. Chen, Y.-H. Chen, C.-J. Chen, and B.-F. Wu, "Nighttime vehicle detection for driver assistance and autonomous vehicles," in Proc. IEEE 18th Int. Conf. Pattern Recognit., Aug. 2006, pp. 687–690.
[14] H. Takahashi, D. Ukishima, K. Kawamoto, and K. Hirota, "A study on predicting hazard factors for safe driving," IEEE Trans. Ind. Electron., vol. 54, no. 2, pp. 781–789, Apr. 2007.
[15] R. O'Malley, E. Jones, and M. Glavin, "Rear-lamp vehicle detection and tracking in low-exposure color video for night conditions," IEEE Trans. Intell. Transp. Syst., vol. 11, no. 2, pp. 453–462, Jun. 2010.
[16] E. Skodras, G. Siogkas, E. Dermatas, and N. Fakotakis, "Rear lights vehicle detection for collision avoidance," in Proc. IEEE 19th Int. Conf. Syst., Signals Image Process., Apr. 2012, pp. 134–137.
[17] P. Thammakaroon and P. Tangamchit, "Predictive brake warning at night using taillight characteristic," in Proc. IEEE Int. Symp. Ind. Electron., Jul. 2009, pp. 217–221.
[18] T. Schamm, C. von Carlowitz, and J. M. Zollner, "On-road vehicle detection during dusk and at night," in Proc. IEEE Intell. Vehicles Symp., Jun. 2010, pp. 418–423.
[19] D.-Y. Chen and Y.-J. Peng, "Frequency-tuned taillight-based nighttime vehicle braking warning system," IEEE Sensors J., vol. 12, no. 11, pp. 3285–3292, Nov. 2012.

[20] D.-Y. Chen, Y.-H. Lin, and Y.-J. Peng, "Nighttime brake-light detection by Nakagami imaging," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 4, pp. 1627–1637, Dec. 2012.
[21] D.-Y. Chen, Y.-J. Peng, L.-C. Chen, and J.-W. Hsieh, "Nighttime turn signal detection by scatter modeling and reflectance-based direction recognition," IEEE Sensors J., vol. 14, no. 7, pp. 2317–2326, Jul. 2014.
[22] A. Almagambetov, M. Casares, and S. Velipasalar, "Autonomous tracking of vehicle rear lights and detection of brakes and turn signals," in Proc. IEEE Symp. Comput. Intell. Secur. Defence Appl., Jul. 2012, pp. 1–7.
[23] M. Casares, A. Almagambetov, and S. Velipasalar, "A robust algorithm for the detection of vehicle turn signals and brake lights," in Proc. IEEE 9th Int. Conf. Adv. Video Signal-Based Surveill., Sep. 2012, pp. 386–391.
[24] Q. Ming and K.-H. Jo, "Vehicle detection using tail light segmentation," in Proc. IEEE 6th Int. Forum Strategic Technol., vol. 2, Aug. 2011, pp. 729–732.
[25] R. Stevenson, "A driver's sixth sense," IEEE Spectr., vol. 48, no. 10, pp. 50–55, Oct. 2011.
[26] A. Lay-Ekuakille, P. Vergallo, D. Saracino, and A. Trotta, "Optimizing and post processing of a smart beamformer for obstacle retrieval," IEEE Sensors J., vol. 12, no. 5, pp. 1294–1299, May 2012.
[27] V. Agarwal, N. V. Murali, and C. Chandramouli, "A cost-effective ultrasonic sensor-based driver-assistance system for congested traffic conditions," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 3, pp. 486–498, Sep. 2009.
[28] M. R. Strakowski, B. B. Kosmowski, R. Kowalik, and P. Wierzba, "An ultrasonic obstacle detector based on phase beamforming principles," IEEE Sensors J., vol. 6, no. 1, pp. 179–186, Feb. 2006.
[29] G. Loy and A. Zelinsky, "Fast radial symmetry for detecting points of interest," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 8, pp. 959–973, Aug. 2003.
[30] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. 9, no. 1, pp. 62–66, Jan. 1979.
[31] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 1, Jun. 2005, pp. 886–893.
[32] Q. Zhu, M.-C. Yeh, K.-T. Cheng, and S. Avidan, "Fast human detection using a cascade of histograms of oriented gradients," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2, Jun. 2006, pp. 1491–1498.

Hua-Tsung Chen (A'07–S'09–M'09) received the B.S., M.S., and Ph.D. degrees in computer science and information engineering from National Chiao Tung University, Hsinchu, Taiwan, in 2001, 2003, and 2009, respectively. He is currently an Assistant Research Fellow with the Information and Communications Technology Laboratory, National Chiao Tung University. His research interests include computer vision, video signal processing, content-based video indexing and retrieval, multimedia information systems, and music signal processing.

Yi-Chien Wu received the B.S. and M.S. degrees in computer science from National Chiao Tung University, Hsinchu, Taiwan, in 2013 and 2015, respectively. Her research interests include computer vision, video signal processing, and video surveillance systems.

Chun-Chieh Hsu (S'12) received the B.S. and M.S. degrees in computer science from National Chiao Tung University, Hsinchu, Taiwan, in 2009 and 2011, respectively, where he is currently pursuing the Ph.D. degree in computer science. His research interests include computer vision, video signal processing, sports video analysis, and video surveillance systems.
