

Tracking Cars Using Background Estimation Results


The model uses the background estimation technique that you specify in the Edit Parameters block to estimate the
background. Here are descriptions of the available techniques:
Estimating median over time - This algorithm updates the median value of the time series data based upon the new data sample. The example increments or decrements the median by an amount that is related to the running standard deviation and the size of the time series data. The example also applies a correction to the median value if it detects a local ramp in the time series data. Overall, the estimated median is constrained within Chebyshev's bounds, a fixed multiple of the standard deviation on either side of the mean of the data.
Computing median over time - This method computes the median of the values at each pixel location over a time window of the most recent frames. (Both median-based approaches are sketched after this list.)
Eliminating moving objects - This algorithm identifies the moving objects in the first few image frames and labels the corresponding pixels as foreground pixels. Next, the algorithm identifies the incomplete background as the pixels that do not belong to the foreground pixels. As the foreground objects move, the algorithm estimates more and more of the background pixels.
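The two median-based techniques can be summarized in a few lines of MATLAB. The sketch below is not the model's implementation: it assumes a stack of grayscale frames stored as doubles, uses a fixed update step instead of one tied to the running standard deviation, and omits the ramp correction and Chebyshev constraint described above.

% frames is assumed to be an H-by-W-by-N stack of grayscale frames (double).

% Computing median over time: the median of a sliding window of frames.
windowLen  = min(size(frames, 3), 30);          % illustrative window length
background = median(frames(:, :, 1:windowLen), 3);

% Estimating median over time: an incremental (approximate) median that
% moves the estimate toward each new sample without storing the series.
est  = frames(:, :, 1);                         % initial estimate
step = 1;                                       % simplified fixed step
for k = 2:size(frames, 3)
    f   = frames(:, :, k);
    est = est + step .* (f > est) - step .* (f < est);
end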
Once the example estimates the background, it subtracts the background from each video frame to produce foreground images. By thresholding and performing morphological closing on each foreground image, the model produces binary feature images. The model locates the cars in each binary feature image using the Blob Analysis block. Then it uses the Draw Shapes block to draw a green rectangle around the cars that pass beneath the white line. The counter in the upper left corner of the Results window tracks the number of cars in the region of interest.
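As a rough illustration of this pipeline, the following MATLAB sketch applies the same steps to a single frame, with Image Processing and Computer Vision Toolbox functions standing in for the Simulink blocks. The threshold, structuring element, and line position are assumed values, and the simple per-frame count is only a stand-in for the model's counter.

% frame: current grayscale video frame; background: estimate from above.
fg    = abs(double(frame) - background);        % subtract the estimated background
bw    = fg > 50;                                % thresholding (assumed level)
bw    = imclose(bw, strel('square', 5));        % morphological closing
stats = regionprops(bw, 'BoundingBox', 'Centroid');
lineY = 120;                                    % assumed position of the white line
count = 0;
out   = frame;
for s = stats'
    if s.Centroid(2) > lineY                    % blob lies beneath the line
        count = count + 1;
        out   = insertShape(out, 'Rectangle', s.BoundingBox, 'Color', 'green');
    end
end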
Traffic Warning Sign Templates
The example uses two sets of templates - one for detection and the other for recognition.
To save computation, the detection templates are low resolution, and the example uses one detection template per sign. Also, because the red pixels are the distinguishing feature of the traffic warning signs, the example uses these pixels in the detection step.
For the recognition step, accuracy is the highest priority. So, the example uses three high resolution templates for each sign. Each of these templates shows the sign in a slightly different orientation. Also, because the white pixels are the key to recognizing each traffic warning sign, the example uses these pixels in the recognition step.
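A minimal sketch of the two pixel sets, assuming tmpl is an RGB image of one sign; the thresholds and the down-scaling factor are illustrative, not the values used to build the shipped templates.

ycc       = rgb2ycbcr(tmpl);
redMask   = ycc(:, :, 3) > 140;                 % red pixels stand out in the Cr channel (assumed threshold)
whiteMask = rgb2gray(tmpl) > 200;               % bright (white) pixels (assumed threshold)
detTmpl   = imresize(double(redMask), 0.25);    % low-resolution detection template (assumed scale)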
The Detection Templates window shows the traffic warning sign detection templates.

The Recognition Templates window shows the traffic warning sign recognition templates.

The templates were generated using vipwarningsigns_templates.m and were stored in vipwarningsigns_templates.mat.
Detection
The example analyzes each video frame in the YCbCr color space. By thresholding and performing morphological operations on the Cr channel, the example extracts the portions of the video frame that contain blobs of red pixels. Using the Blob Analysis block, the example finds the pixels and bounding box for each blob. The example then compares the blob with each warning sign detection template. If a blob is similar to any of the traffic warning sign detection templates, it is a potential traffic warning sign.
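The sketch below mirrors this detection step in MATLAB under a few assumptions: frame is an RGB frame, detTemplates is a cell array of binary red-pixel templates, the Cr threshold and similarity score are illustrative, and the Blob Analysis block is replaced by regionprops.

ycc   = rgb2ycbcr(frame);
bwRed = imclose(ycc(:, :, 3) > 140, strel('square', 3));     % red-pixel blobs
stats = regionprops(bwRed, 'BoundingBox');
for s = stats'
    blob = imcrop(bwRed, round(s.BoundingBox));
    for t = 1:numel(detTemplates)
        tpl   = imresize(double(detTemplates{t}), size(blob));
        score = sum(blob(:) .* tpl(:)) / max(sum(tpl(:)), 1); % crude overlap score
        if score > 0.6                                        % assumed similarity threshold
            % the blob is a potential traffic warning sign
        end
    end
end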
Tracking and Recognition
The example compares the bounding boxes of the potential traffic warning signs in the current video frame with those in the previous frame. Then the example counts the number of appearances of each potential traffic warning sign.
If a potential sign is detected in 4 contiguous video frames, the example compares it to the traffic warning sign recognition templates. If the potential traffic warning sign is similar enough to a traffic warning sign recognition template in 3 contiguous frames, the example considers the potential traffic warning sign to be an actual traffic warning sign.
When the example has recognized a sign, it continues to track it. However, to save computation, it no longer continues to recognize it.
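A hedged sketch of this frame-to-frame bookkeeping, assuming prevBoxes and currBoxes are M-by-4 and N-by-4 bounding boxes, prevCount holds each previous candidate's appearance count, and bboxOverlapRatio (Computer Vision Toolbox) decides whether two boxes refer to the same candidate. The 0.5 overlap threshold is an assumption; the 4- and 3-frame counts follow the text.

currCount = zeros(size(currBoxes, 1), 1);
for i = 1:size(currBoxes, 1)
    for j = 1:size(prevBoxes, 1)
        if bboxOverlapRatio(currBoxes(i, :), prevBoxes(j, :)) > 0.5
            currCount(i) = prevCount(j) + 1;        % same candidate seen again
            break
        end
    end
    if currCount(i) == 0
        currCount(i) = 1;                           % first appearance
    end
    if currCount(i) >= 4
        % candidate has appeared in 4 contiguous frames: compare it to the
        % recognition templates; 3 contiguous template matches confirm the sign
    end
end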
Display
After a potential sign has been detected in 4 or more video frames, the example uses the Draw Shapes block to draw a yellow rectangle around it. When a sign has been recognized, the example uses the Insert Text block to write the name of the sign on the video stream. The example uses the term 'Tag' to indicate the order in which the sign is detected.
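In MATLAB, the drawing and labeling could look like the sketch below, with insertShape and insertText (Computer Vision Toolbox) standing in for the Draw Shapes and Insert Text blocks; bbox, tag, signName, and recognized are assumed to come from the tracking step, and the label format is illustrative.

out = insertShape(frame, 'Rectangle', bbox, 'Color', 'yellow');   % potential sign
if recognized
    label = sprintf('Tag %d: %s', tag, signName);
    out   = insertText(out, bbox(1:2), label, 'BoxColor', 'yellow');
end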
