
MOVING VEHICLE DETECTION

GUIDE: DR. KHELCHANDRA THONGAM

PROJECT MEMBERS: PRIYANK BHARDWAJ, ROHIT KUMAR SAHU, AJEYA SIDDHARTHA
INTRODUCTION
INTELLIGENT TRANSPORTATION SYSTEM
• Traffic data may come from different sensors such as loop detectors, ultrasonic
sensors, or cameras.
• Video-based camera systems coupled with computer vision techniques offer an
alternative approach to spot sensors.
• Video-based camera systems are more sophisticated and powerful.
MOTIVATION
• Many works on moving vehicle detection have been published, but each left
certain areas for improvement, such as:
• how to detect moving vehicles at night, and how to eliminate cast shadows.
• Unlike previous works, this algorithm learns from known examples and does
not rely on prior models of vehicles, lighting, shadows, or headlights, since
example-based classification is prevalently used in image-based classification.
CHALLENGES
• Spot sensors have limited capabilities and are often both costly and disruptive
to install.
• The main challenges come from cast shadows, vehicle headlights, and noise, which
produce incorrect segments.
• Sunlight casts shadows that are difficult to distinguish from the vehicles
and cause segmentation errors.
• Vehicle headlights and poor illumination cause strong noise.
ALGORITHM DESCRIPTION

The whole algorithm includes the following steps:

1. Background estimation
2. Block division
3. Candidate selection
4. Feature extraction
5. SVM-based classification
6. Shape representation

Fig: flowchart (image sequence → background estimation → block division → candidate selection → feature extraction → SVM-based classification, fed by SVM training → shape representation)
Fig. referred from-IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 56, NO. 1, JANUARY 2007
BACKGROUND ESTIMATION
BACKGROUND ESTIMATION
• In order to detect moving vehicles, we first need to estimate the background of the scene.
• We propose an improved adaptive background-extraction algorithm based
on Kalman filtering.
• The background at pixel p of the (n + 1)th frame is defined as
B(n + 1, p) = B(n, p) + β(n, p) + μ(n, p). (1)
• The input image intensity is described as I(n, p) = B(n, p) + η(n, p). (2)
• Combining (1) with (2) gives I(n + 1, p) = B(n, p) + ω(n + 1, p).

Fig: sliding-window background update (the updated area moves across the image)
Fig. referred from-IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 56, NO. 1, JANUARY 2007
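As an illustration, the adaptive update can be sketched as a per-pixel constant-gain blend (a simplified stand-in for the full Kalman-filter scheme; the gains and threshold below are illustrative choices, not values from the paper):

```python
import numpy as np

def update_background(B, I, gain_bg=0.1, gain_fg=0.01, thresh=30.0):
    """One adaptive background update step (illustrative sketch).

    Pixels whose intensity is close to the current background estimate B
    are treated as background and blended quickly (gain_bg); pixels far
    from it (likely moving vehicles) are blended slowly (gain_fg).
    """
    B = B.astype(float)
    diff = np.abs(I - B)
    gain = np.where(diff < thresh, gain_bg, gain_fg)
    return B + gain * (I - B)
```

Applied frame by frame, this keeps the background estimate current while preventing slow-moving vehicles from being absorbed into it too quickly.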
BLOCK DIVISION AND CANDIDATE SELECTION
• The image is divided into non-overlapping blocks, and every block in a given
image has the same size.
• We then find the blocks that show a gray-level change.
• The current image is subtracted from the background to obtain the
difference image, and we compute its mean value for each block; blocks whose
mean exceeds a threshold are selected as candidates.
• These candidates include real vehicles, cast shadows, headlights, or noise.
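A minimal sketch of the block division and candidate selection described above (the block size, threshold, and the helper name `candidate_blocks` are illustrative assumptions):

```python
import numpy as np

def candidate_blocks(diff_img, block=8, thresh=15.0):
    """Divide the difference image into non-overlapping block x block tiles
    and keep tiles whose mean absolute difference exceeds the threshold."""
    h, w = diff_img.shape
    cands = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = np.abs(diff_img[i:i + block, j:j + block])
            if tile.mean() > thresh:
                cands.append((i, j))   # top-left corner of a candidate block
    return cands
```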
3. FEATURE EXTRACTION
• A histogram is used for feature extraction.
• The range of the image's gray level is [0, T], where T = 255.
• For the difference image d(i, j), the range of its gray level is
[−T, T], which can be shifted to [0, 2T].
• The histogram ha(r) is computed from the image d(i, j).
• The histogram hb(r) is computed from the edge map E(i, j) of the image
d(i, j).
• Here, r satisfies −T ≤ r ≤ T.

Fig. referred from-wavemetrics.com


FORMULAS

• The edge map is computed as
E(i, j) = 1/2 {|D(i + 1, j + 1) − D(i, j)| + |D(i + 1, j) − D(i, j + 1)|}.
• A new histogram hc(r) is formed by combining the histogram of the
difference image with that of its edge map.
• The new histogram hc(r), of dimension 3T + 2, is computed as
hc(r) = ha(r) for 0 ≤ r ≤ 2T, and hc(r) = hb(r − 2T − 1) for 2T + 1 ≤ r ≤ 3T + 1.
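A minimal sketch of the two histograms and their combination (the bin layout is my reading of the formulas above; edge-map values above T fall outside the chosen range and are simply dropped in this sketch):

```python
import numpy as np

T = 255

def edge_map(D):
    """E(i, j) = 0.5 * (|D(i+1, j+1) - D(i, j)| + |D(i+1, j) - D(i, j+1)|)."""
    D = D.astype(float)
    return 0.5 * (np.abs(D[1:, 1:] - D[:-1, :-1])
                  + np.abs(D[1:, :-1] - D[:-1, 1:]))

def combined_histogram(D):
    """Concatenate ha (difference image) and hb (edge map) into hc."""
    # ha: difference-image values in [-T, T], shifted to [0, 2T] -> 2T+1 bins
    ha, _ = np.histogram(D + T, bins=2 * T + 1, range=(0, 2 * T + 1))
    # hb: edge-map values histogrammed over [0, T] -> T+1 bins
    hb, _ = np.histogram(edge_map(D), bins=T + 1, range=(0, T + 1))
    return np.concatenate([ha, hb])  # dimension (2T + 1) + (T + 1) = 3T + 2
```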


NORMALIZATION
• Normalization of the feature is done to accommodate different block divisions: h(r) = hc(r)/S, where S is the size
of the block.
• PCA, an optimal orthonormal decomposition, is applied to reduce the noise in the feature.
• From the collected samples, a scatter matrix is obtained as
S = (1/N) Σ_{i=1}^{N} (h_i − m)(h_i − m)^t, where N is the number of training samples and m is the mean vector.
• h_i is the ith feature vector and (h_i − m)^t is the transpose of (h_i − m).

• For S ∈ R^((3T+2)×(3T+2)), it is easy to obtain its eigenvalues A = diag[λ1, ..., λ_{3T+2}], λ1 ≥ ··· ≥ λ_{3T+2}, and
its eigenvectors V = [ν1, ..., ν_{3T+2}]. Then, a new compressed vector of dimension p can be computed
from the original vector h(r):
• g_p = [ν1, ..., ν_p]^t h, where p ≤ 3T + 2.
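The scatter-matrix PCA above can be sketched as follows (`pca_compress` is a hypothetical helper name; `np.linalg.eigh` is used because the scatter matrix is symmetric):

```python
import numpy as np

def pca_compress(H, p):
    """Compress an N x d feature matrix H to N x p.

    Builds S = (1/N) * sum_i (h_i - m)(h_i - m)^t, eigendecomposes it,
    and projects each centred feature onto the p leading eigenvectors.
    """
    N = H.shape[0]
    m = H.mean(axis=0)
    S = (H - m).T @ (H - m) / N
    vals, vecs = np.linalg.eigh(S)   # eigh returns ascending eigenvalues
    V = vecs[:, ::-1][:, :p]         # p eigenvectors with largest eigenvalues
    return (H - m) @ V
```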
EXTRACTED FEATURE

• Using compressed vectors as features, our experiments will show that
the classification result is better than directly using the original
histograms. Moreover, the computation cost can also be reduced.

Fig. referred from-images.google.com


4. SUPPORT VECTOR MACHINE-BASED CLASSIFICATION
SVM

• Support vector machines (SVMs) are a set of supervised learning
methods used for classification, regression, and outlier detection.
• Good generalization capability is achieved by minimizing a bound
on the generalization error.
• The aim is to define a hyperplane that divides the set of examples
such that all points with the same label are on the same side of the
hyperplane.
• The extracted features are put into the SVM for training, and the
parameters of the SVM-based classifier are obtained.

Fig. referred from-wikipedia.org
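A minimal training sketch, assuming scikit-learn's `SVC` (the source does not prescribe a particular SVM implementation or kernel; the RBF kernel and the synthetic stand-in features below are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in features: in practice these would be the PCA-compressed
# histograms. Label 1 = vehicle block, 0 = shadow / headlight / noise.
X = np.vstack([rng.normal(loc=1.0, size=(100, 8)),
               rng.normal(loc=-1.0, size=(100, 8))])
y = np.array([1] * 100 + [0] * 100)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)   # kernel choice is illustrative
```

Once fitted, `clf.predict` classifies each new block's feature vector as vehicle or non-vehicle.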
DETECTION STAGE

• The input image is divided into many blocks.
• The feature vector is generated by PCA from the two kinds of histograms of each block.
• It is put into the well-trained classifier to judge whether the region is covered by a vehicle or by
something else (such as shadow, headlight, or noise).
5. SHAPE REPRESENTATION
 THE SHAPE OF VEHICLES NEEDS TO BE EXTRACTED FOR VEHICLE
COUNTING, SPEED ESTIMATION, TRACKING, ETC.
 THE SVM-BASED CLASSIFIER GROUPS BLOCKS INTO VEHICLES
AND NON-VEHICLES.
 BLOCKS CLASSIFIED AS VEHICLES ARE GROUPED BY THEIR
8-CONNECTIVITY.
 A CONVEX POLYGON IS DERIVED FROM A SET OF RELATED
BLOCKS.
 THE POLYGON IS USED TO APPROXIMATE A VEHICLE'S
SHAPE IN THE FORM OF A PARALLELOGRAM, WHICH IS
MORE ROBUST DUE TO ITS SIMPLE REPRESENTATION.

Fig: 8-direction connectivity
Fig: Parallelogram P1P2P3P4
Fig: White boxes indicate detected parts of vehicles
Fig: Shape representation
Fig. referred from-IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 56, NO. 1, JANUARY 2007
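The 8-connectivity grouping of vehicle blocks can be sketched with `scipy.ndimage.label` (the mask below is an illustrative toy layout, not data from the paper):

```python
import numpy as np
from scipy.ndimage import label

# Binary mask of blocks the classifier marked as "vehicle" (1 = vehicle).
mask = np.array([[1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 1]])

# 8-connectivity: diagonal neighbours belong to the same group.
eight = np.ones((3, 3), dtype=int)
labels8, n8 = label(mask, structure=eight)   # diagonal chain -> one group
labels4, n4 = label(mask)                    # default 4-connectivity splits it
```

Each resulting group of blocks would then be fitted with a convex polygon and approximated by a parallelogram, as described above.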
TECH REQUIREMENTS

 MATLAB
 LATEX
 MICROSOFT POWERPOINT
PROJECT’S TIME DIVISION
S.No. Description Date

1. Research 10/10/17 – 11/11/17

2. Project Implementation 15/11/17 – 31/01/18

3. Documentation 01/02/18 – 25/02/18


FUTURE WORK

 THE ALGORITHM CAN BE MADE MORE PRECISE BY USING MORE
INFORMATION SUCH AS COLOR.
 USING OTHER CLASSIFIERS SUCH AS A CNN, OR A
COMBINATION OF CLASSIFIERS, COULD IMPROVE ACCURACY.
REFERENCES

 JIE ZHOU, DASHAN GAO, AND DAVID ZHANG, "MOVING VEHICLE
DETECTION FOR AUTOMATIC TRAFFIC MONITORING," IEEE
TRANSACTIONS ON VEHICULAR TECHNOLOGY, VOL. 56, NO. 1,
PP. 51-59, JAN. 2007.

 ALI SHARIF RAZAVIAN, HOSSEIN AZIZPOUR, JOSEPHINE SULLIVAN,
AND STEFAN CARLSSON, "CNN FEATURES OFF-THE-SHELF: AN
ASTOUNDING BASELINE FOR RECOGNITION," IN PROC. IEEE
CONFERENCE ON COMPUTER VISION AND PATTERN
RECOGNITION WORKSHOPS, 2014.
THANK YOU
