
Circuits, Systems, and Signal Processing

https://doi.org/10.1007/s00034-019-01224-9

An Efficient Optimal Neural Network-Based Moving Vehicle Detection in Traffic Video Surveillance System

Ahilan Appathurai1 · Revathi Sundarasekar2 · C. Raja3 · E. John Alex4 · C. Anna Palagan5 · A. Nithya6

Received: 15 November 2018 / Revised: 24 July 2019 / Accepted: 26 July 2019


© Springer Science+Business Media, LLC, part of Springer Nature 2019

Abstract
This paper presents an effective traffic video surveillance system for detecting moving vehicles in traffic scenes. Moving vehicle identification on roads supports vehicle tracking, counting, estimation of the average speed of each individual vehicle, motion analysis, and vehicle classification, and may be deployed under various conditions. In this paper, we develop a novel hybridization of an artificial neural network (ANN) and the oppositional gravitational search optimization algorithm (ANN–OGSA) for moving vehicle detection (MVD). The proposed system consists of two main phases: background generation and vehicle detection. First, we develop an efficient method to generate the background. After background generation, we detect the moving vehicles using the ANN–OGSA model. To improve the performance of the ANN classifier, its weight values are optimally selected using the OGSA algorithm. To demonstrate the effectiveness of the system, we compare the proposed algorithm with other algorithms, using three types of videos for experimental analysis. The precision of the proposed ANN–OGSA method improves by over 3% and 6% compared with the existing GSA-ANN and ANN methods, respectively. Similarly, the GSA-ANN-based MVD system attained a maximum recall of 89%, 91%, and 91% for video 1, video 2, and video 3, respectively.

Keywords Moving vehicle detection · Artificial neural network · Oppositional-based learning · Gravitational search optimization algorithm · Traffic video surveillance system

B Ahilan Appathurai
listentoahil@gmail.com
Extended author information available on the last page of the article

1 Introduction

One of the noteworthy applications of video-based surveillance frameworks is traffic surveillance. For many years, researchers have therefore explored vision-based intelligent transportation systems (ITS), transportation planning, and traffic engineering applications to extract useful and accurate traffic data for traffic image analysis and traffic flow control, such as vehicle counting, vehicle trajectory, vehicle tracking, vehicle flow, vehicle classification, traffic density, vehicle speed, traffic lane changes, license plate recognition, and so forth [1–10, 17, 33]. The real-time implementation of all video processing algorithms is mapped onto reliable field-programmable gate array devices with proper error correction codes [1, 3–7, 33]. Increasing interest in safety and security has paved the way for more research on smart intelligent surveillance. Intelligent surveillance has an extensive variety of uses, for example, moving object detection [11], object tracking [28], motion segmentation [13], object classification and identification [15], event detection [16], and behavior understanding and description [14]. Detection and tracking of moving vehicles are a key problem in traffic surveillance: they are utilized to locate moving objects in video automatically and thus avoid collisions amid traffic congestion. This procedure is vital for services such as intelligent parking systems, auto-driving systems, estimation of traffic parameters, or estimation of travel times [35].
Reliable and robust vehicle detection is a major part of traffic surveillance, yet several issues remain for vehicle detection in ITSs. Varied vehicle appearances and poses make it hard to train a unified detection model. Complex urban environments, bad weather, illumination changes, and poor/strong lighting conditions degrade detection performance drastically. In particular, during traffic congestion, vehicles occlude one another, so that multiple vehicles can easily merge into a single detection. The parameters used to learn vehicle detection are likewise a basic issue: a detection technique with complex parameters is generally not practical [18]. For detecting objects, some researchers first calculate salient features that are useful for the detection process; hence, much research uses the visual features of a vehicle to recognize it in a still picture [19, 20]. Features such as Gabor responses, color, edges, and corners are typically used to detect vehicles. These features are then fed into a deterministic classifier or a generative model to recognize vehicles. Furthermore, researchers usually adopt a two-stage technique, comprising hypothesis generation and hypothesis verification [26], to locate vehicles. This method works well during the daytime but may fail under poor lighting conditions such as nighttime.
Recently, researchers have started utilizing video processing systems for vehicle detection [21–25]. Neural networks have been broadly utilized in traffic control frameworks [27, 29–31]. Traffic detection demonstrates the efficacy of neural networks trained on data from motion-sensitive sensors [27]. Intelligent agent frameworks have been utilized with the primary objective of controlling traffic [29]. In order to control traffic signals, hybrid computational intelligence techniques and fuzzy neural networks have been applied; this approach significantly reduced waiting times in traffic. Moreover, traffic flow prediction is developed in Ref. [30]. Likewise, the Kalman filter (KF) is commonly used in the object tracking procedure as a minimum-variance estimator of linear motion [31]. One important class of vehicle tracking algorithms is based on probabilistic modeling of associated data and Bayesian estimation techniques. In these methods, statistical models are used to represent characteristics of the vehicle features. A hybrid of the KF and the extended KF (EKF) performs better than the particle filter in tracking a road in satellite images [32].
In this paper, we develop an efficient moving vehicle detection (MVD) system using
an optimal artificial neural network (OANN). The proposed system consists of two modules: background generation, and MVD using the ANN together with oppositional gravitational search optimization (OGSA), i.e., the ANN–OGSA classifier. Here, at first,
we separate the background from the input traffic video. Then, the number of frames
and background of the frames are fed to the OANN classifier. The OANN classifier
is the combination of OGSA and ANN. The ANN classifier weights are optimally
selected using the OGSA algorithm. The rest of the paper is organized as follows:
A brief review of some of the literature in object detection techniques is presented
in Sect. 2. The background of the research is explained in Sect. 3, and the proposed
MVD is described in Sect. 4. The experimental results and performance evaluation
discussion are provided in Sect. 5. Finally, the conclusions are summed up in Sect. 6.

2 Related Works

Many researchers have analyzed the MVD system. We now discuss some of the
research works in this regard. Tian et al. [34] have explained the rear-view vehicle
detection and tracking by combining multiple parts for complex urban surveillance.
Rear-view vehicle detection and tracking method is based on multiple vehicle salient
parts using a stationary camera. They show that spatial modeling of these vehicle parts
was crucial for overall performance. First, the vehicle was treated as an object com-
posed of multiple salient parts, including the license plate and rear lamps. These parts
were localized using their distinctive color, texture, and regional feature. Furthermore,
the detected parts were treated as graph nodes to construct a probabilistic graph using
a Markov random field model. Then, the marginal posterior of each part was inferred
using loopy belief propagation to get final vehicle detection. Finally, the vehicles’
trajectories were estimated using the KF, and a tracking-based detection technique
was realized. Experiments in practical urban scenarios were carried out under various
weather conditions.
Wei et al. [36] have explained vehicle detection and tracking system based on image
data collected by an unmanned aerial vehicle. This system uses consecutive frames to
generate vehicle’s dynamic information, such as positions and velocities. Four major
modules were developed in this regard, such as image registration, image feature
extraction, vehicle shape detecting, and vehicle tracking. Some unique features were
introduced into this system to customize the vehicle and traffic flow and to jointly use
them in multiple consecutive images to increase the system accuracy of detecting and
tracking vehicles.
Similarly, Hu et al. [12] have explained an adaptive approach for validation in visual object tracking. Validating unpredictable features in object tracking is a challenging task under occlusion and large appearance
variation. To address this uncertainty, they introduced an adaptive approach which
uses an updating model based on the occlusion and distortion parameters. In case of
occlusion or large appearance variation, the method uses backward model validation
where it updates the invalid appearance and then validates the target feature model.
If the target feature did not undergo any kind of clutter or distortions, it simply val-
idates and then updates the appearance model using forward feature validation. The
experimental results obtained from this adaptive approach demonstrate effectiveness
in terms of overlap rate and center location error compared with the other relevant
existing algorithms.
Zhang et al. [37] have introduced detection of on-road vehicles based on color intensity separation. This strategy comprises two phases. First, details such as pavements or lanes in the image frame are utilized to extract the region of interest. Second, a filter is applied that uses intensity data to remove illumination variations, shadows, and cluttered backgrounds from the extracted region of interest and thereby identify the vehicles.
In Ref. [38], Hu has explained the moving object detection and tracking from a
video captured by a moving camera. Moving object detection was relatively difficult
for the video captured by the moving camera since camera motion and object motion
were mixed. In the method, the feature points in the frames were found and then classi-
fied as belonging to foreground or background features. Next, moving object regions
were obtained using an integration scheme based on foreground feature points and
foreground regions, which were obtained using an image difference scheme. Then,
a compensation scheme based on the motion history of the continuous motion con-
tours obtained from three consecutive frames was applied to increase the regions of
moving objects. Moving objects were detected using a refinement scheme and a mini-
mum bounding box. Finally, moving object tracking was achieved using the KF based
on the center of gravity of a moving object region in the minimum bounding box.
Experimental results show that the method is reliable.
Zhou et al. [39] have introduced the identification of moving vehicles, as well as the estimation of their speeds, using a single camera in daylight or properly lit conditions. The approach detects and tracks vehicles passing through the surveillance area and keeps a record of each vehicle's position. In this paper, it was shown that the tracking of vehicles depends on the relative positions of the vehicles in consecutive frames. These data were utilized in the automatic number plate recognition system to select the key frames where speed-limit violations occur.

3 Background of the Research

In this section, first we explain the algorithm used in this paper. Then, we explain the
proposed MVD system.

Fig. 1 Working principle of an artificial neural network

3.1 ANN

The ANN consists of three layers: an input layer, a hidden layer, and an output layer. The input layer contains input neurons which transfer data via synapses to the hidden layer; similarly, the hidden layer transfers these data to the output layer via further synapses. The quantity of neurons in the input layer is the same as the number of features selected in the previous stage. The quantity of neurons in the output layer equals the number of classes represented in the network. The quantity of hidden neurons is determined experimentally. The input of each neuron is the weighted sum of the outputs of all the neurons to which it is connected, and the output value of a neuron is a nonlinear function of this input. The weight W_ij connects node j to node i. The neural network (NN) operates in two phases: a training phase and a testing phase. During the training phase, the NN parameters are adjusted; after training, the parameters are fixed and the testing phase is carried out.
Figure 1 shows the working concept of the ANN. Here, the input neurons are denoted (F_1, F_2, …, F_a), the hidden neurons (H_1, H_2, …, H_b), and the output neurons (Y_1, Y_2, …, Y_c). W^h_{jk} denotes the weight connecting input-layer node k and hidden-layer node j, and W^o_{ij} denotes the weight connecting hidden-layer node j and output-layer node i, where 1 ≤ k ≤ a, 1 ≤ j ≤ b, and 1 ≤ i ≤ c. Here, h indicates the hidden layer and o the output layer. The back-propagation (BP) algorithm is utilized for the training process. We now briefly discuss the training process involved.

Here, each node k in the input layer receives a signal U_k as system input, multiplied by a weight value between the input layer and the hidden layer. Each node j in the hidden layer receives the signal H_j given in Eq. (1):

$$H_j = \alpha_j + \sum_{k=1}^{n} U_k W^h_{jk} \qquad (1)$$

Here, α_j represents the bias value of the hidden layer. The output H_j is passed through a sigmoid activation function, the nonlinear function described in Eq. (2):

$$f(H_j) = \frac{1}{1 + e^{-H_j}} \qquad (2)$$

After the activation function calculation, we calculate the output value: the output of the activation function is fed to the output layer. The output function is described in Eq. (3):

$$O_i = \alpha_i + \sum_{j=1}^{b} W^o_{ij}\, f(H_j) \qquad (3)$$

where α_i is the bias in the output layer. Then, we calculate the learning error using Eq. (4):

$$E = \frac{1}{2n} \sum_{i=0}^{h-1} (T_i - Y_i)^2 \qquad (4)$$

where n is the number of training samples, and Y_i and T_i are the output value and the target value, respectively. In order to derive optimal weights for the ANN, a BP algorithm is generally employed, updating the weights iteratively to minimize the error function. The steps of the BP algorithm are as follows:
• The weights of the hidden-layer and output-layer neurons are initialized by randomly selecting weight values, whereas the input layer has constant weights.
• The bias and activation functions of the NN are evaluated using Eqs. (2) and (3).
• The back-propagation error is determined for each node, and the weights are updated as follows:

$$w(n') = w(n') + \Delta w(n') \qquad (5)$$

• The weight change Δw(n') is computed as follows:

$$\Delta w(n') = \delta \cdot X(n') \cdot E^{(BP)} \qquad (6)$$

where δ is the learning rate, which normally ranges from 0.2 to 0.5, and E^{(BP)} is the BP error.
• After adjusting the weights, steps (2) and (3) are repeated until the BP error is minimized, that is, E^{(BP)} < 0.1.
• Once this minimum is reached, the feed-forward back-propagation neural network (FFBNN) is properly trained and ready for the testing phase.
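As a concrete illustration, the forward pass of Eqs. (1)–(3) and a gradient-descent form of the weight update behind Eqs. (5)–(6) can be sketched in Python with NumPy. This is a minimal sketch, not the authors' implementation: the layer sizes, learning rate, and single training sample are arbitrary choices for the example.

```python
import numpy as np

def sigmoid(x):
    # Eq. (2): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def forward(U, Wh, ah, Wo, ao):
    H = ah + Wh @ U    # Eq. (1): hidden input H_j = alpha_j + sum_k U_k * W_jk
    F = sigmoid(H)     # Eq. (2): hidden activation
    O = ao + Wo @ F    # Eq. (3): output O_i = alpha_i + sum_j W_ij * f(H_j)
    return H, F, O

def train_step(U, T, Wh, ah, Wo, ao, lr=0.3):
    # One back-propagation update in the spirit of Eqs. (5)-(6):
    # each weight moves by a step proportional to the back-propagated error.
    _, F, O = forward(U, Wh, ah, Wo, ao)
    err = O - T                       # output-layer error
    dF = (Wo.T @ err) * F * (1 - F)   # error back-propagated to the hidden layer
    Wo = Wo - lr * np.outer(err, F)
    ao = ao - lr * err
    Wh = Wh - lr * np.outer(dF, U)
    ah = ah - lr * dF
    return Wh, ah, Wo, ao

rng = np.random.default_rng(0)
Wh, ah = rng.normal(0, 0.5, (4, 2)), np.zeros(4)   # 2 inputs, 4 hidden neurons
Wo, ao = rng.normal(0, 0.5, (1, 4)), np.zeros(1)   # 1 output neuron
U, T = np.array([0.2, 0.8]), np.array([0.5])       # toy sample and target
for _ in range(200):
    Wh, ah, Wo, ao = train_step(U, T, Wh, ah, Wo, ao)
err = abs(forward(U, Wh, ah, Wo, ao)[2][0] - T[0])
```

After a few hundred updates the output approaches the target, mirroring the stopping rule E^(BP) below a small threshold.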

3.2 Gravitational Search Optimization Algorithm

In GSA, the gravitational search value is computed utilizing Eq. (7). The mass esti-
mation of every protest is resolved by its fitness value which is given in Eq. (8):

G(k)  G 0 e(−αk / K ) (7)

fiti (k) − worst (k)


m i (k)  (8)
best (k) − worst (k)
m i (t)
Mi (k)   N . (9)
j1 m j (t)

To compute the acceleration of an agent, the total force exerted on it by a set of heavier masses is considered, following the law of gravity; the agent acceleration is then calculated using the law of motion as per Eq. (10). Afterwards, the next velocity of an agent is computed as a fraction of its current velocity added to its acceleration using Eq. (11). Its position is then updated using Eq. (12):

$$a_i^d(k) = \sum_{j \in gbest,\, j \neq i} rand_j\, G(k)\, \frac{M_j(k)}{R_{ij}(k) + \varepsilon} \left( x_j^d(k) - x_i^d(k) \right) \qquad (10)$$

Thus, we have

$$V_i^d(k+1) = rand_i \times V_i^d(k) + a_i^d(k) \qquad (11)$$

$$Y_i^d(k+1) = Y_i^d(k) + V_i^d(k+1) \qquad (12)$$

In Eq. (10), G(k) is the gravitational constant, ε is a very small value, R_ij(k) is the Euclidean distance between the two agents i and j, and rand_i and rand_j are random numbers in the interval [0, 1] that preserve the stochastic character of the algorithm. The different steps of the GSA are given in Table 1.
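The GSA loop of Eqs. (7)–(12) and Table 1 can be sketched as follows, here minimizing a simple sphere function. The population size, search bounds, G0, α, and the shrinking set of heaviest attracting agents are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gsa_minimize(f, dim=2, agents=20, iters=100, G0=100.0, alpha=20.0, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (agents, dim))   # agent positions
    V = np.zeros((agents, dim))                 # agent velocities
    gbest_x, gbest_f = None, np.inf
    for k in range(iters):
        fit = np.array([f(x) for x in X])
        if fit.min() < gbest_f:                 # track the best solution found
            gbest_f, gbest_x = fit.min(), X[fit.argmin()].copy()
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)   # Eq. (8), minimization
        M = m / (m.sum() + 1e-12)                    # Eq. (9)
        G = G0 * np.exp(-alpha * k / iters)          # Eq. (7)
        # Only the heaviest agents exert force; their number shrinks over time
        heavy = np.argsort(M)[::-1][: max(1, int(agents * (1 - k / iters)))]
        A = np.zeros_like(X)
        for i in range(agents):
            for j in heavy:
                if j != i:
                    R = np.linalg.norm(X[i] - X[j])  # Euclidean distance R_ij
                    # Eq. (10): stochastic gravitational acceleration
                    A[i] += rng.random() * G * M[j] * (X[j] - X[i]) / (R + 1e-12)
        V = rng.random((agents, 1)) * V + A          # Eq. (11)
        X = X + V                                    # Eq. (12)
    return gbest_x, gbest_f

best_x, best_f = gsa_minimize(lambda x: float(np.sum(x ** 2)))
```

The swarm contracts toward the heavier (fitter) agents as G decays, which is the exploration-to-exploitation transition the algorithm relies on.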

3.3 Oppositional-Based Learning Method

Opposition-based learning (OBL), originally introduced by Hamid R. Tizhoosh, has proven to be an effective method for differential evolution in some optimization problems. The opposite solution is defined as follows:
Table 1 Steps involved in the gravitational search optimization algorithm

1. Randomly initialize the search agents
2. Calculate the fitness of each agent
3. Order the agents based on the fitness function
4. Calculate the gravitational constant G(k)
5. Calculate the mass value m_i(k) of each agent
6. Calculate the acceleration of each agent
7. Update the agents' velocities and positions
8. Repeat steps 2–7 until the stopping criterion is reached
9. End

Let X ∈ [p, q]. Then, the opposite solution X̃ is calculated using Eq. (13):

$$\tilde{X} = p + q - X \qquad (13)$$

If the solution X is a multidimensional vector, the OBL method generalizes analogously. Assume that P(X_1, X_2, …, X_n) is a solution in n-dimensional space with X_i ∈ [p_i, q_i] for all i ∈ {1, 2, …, n}. The opposite solution OP(X̃_1, X̃_2, …, X̃_n) is defined componentwise in Eq. (14):

$$\tilde{X}_i = p_i + q_i - X_i \qquad (14)$$
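Eqs. (13)–(14) amount to reflecting a candidate through the midpoint of its bounds; a minimal sketch:

```python
def opposite(X, p, q):
    # Eq. (14): elementwise opposite solution, x_i -> p_i + q_i - x_i
    return [pi + qi - xi for xi, pi, qi in zip(X, p, q)]

# A component near the upper bound maps to one near the lower bound
print(opposite([0.2, 0.9, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))
```

Evaluating both a candidate and its opposite doubles the chance of starting near a good region at negligible extra cost, which is why OBL is commonly paired with population-based search.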

4 Proposed Vehicle Detection Using the OGSA–ANN Model

The main objective of this research is MVD using a combination of ANN and OGSA. The MVD framework is an essential research component which provides vehicle information to the driver assistance subsystem of an intelligent transportation framework. Further, MVD is a subtask of object recognition and a fundamental building block for traffic monitoring and numerous other applications. The proposed system consists of two modules: (1) background generation and (2) MVD. The proposed MVD system using the OGSA–ANN model is given in Fig. 2, and the steps involved in the proposed work are discussed hereunder.

4.1 Background Generation

Background generation is an important process for the MVD system. The main objective of this phase is to create the background of the input video V_in. In this paper, we develop an efficient method for the background generation process. Consider the input video V_in composed of n frames, V_in = [V_1, V_2, …, V_n]. Each frame contains some static content and some moving content; hence, these frames have two types of pixels: background pixels and moving pixels. Comparing these two types across all the frames, we observe that the background pixels retain the same value in every frame, whereas the moving object pixels take different values. Accordingly, we select the pixel values that remain similar across frames for the background generation process. A simple example of the background generation process and its output is given in Fig. 3 (Fig. 4).

Fig. 2 Overall diagram of the proposed MVD using the OGSA–ANN model

Fig. 3 Example of background generation process
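One common way to realize the observation that background pixels repeat the same value across frames is a per-pixel temporal median. This is a hedged stand-in for the authors' selection rule, which the paper does not spell out in full:

```python
import numpy as np

def generate_background(frames):
    # Background pixels keep (nearly) the same value across frames, so the
    # per-pixel temporal median suppresses transient moving-vehicle pixels.
    return np.median(np.stack(frames), axis=0)

# Toy video: a static background of value 10 with a bright "vehicle" (255)
# occupying a different pixel in each frame
frames = [np.full((4, 4), 10.0) for _ in range(5)]
for t, f in enumerate(frames):
    f[0, t % 4] = 255.0
bg = generate_background(frames)   # recovers the all-10 background
```

The median works as long as each pixel shows the background in more than half of the frames; longer frame buffers make the estimate more robust to slow traffic.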

Fig. 4 Experimental results: a input frame and b generated background

4.2 MVD Based on the Optimal OGSA–ANN Model

After the background generation process, we detect the moving vehicle from a given frame using the OGSA–ANN combination. This section consists of two stages: (1) weight optimization based on the OGSA and (2) MVD.
STAGE 1: Weight optimization using the OGSA
In this paper, we utilize the ANN classifier for the MVD. To improve the performance of the ANN, we optimally select its weight values using the OGSA. The OGSA algorithm combines the OBL strategy with the GSA: the OBL strategy is hybridized with the GSA to increase its search ability. In the ANN, the weight values are optimally selected by the OGSA algorithm.
In the proposed OGSA–ANN model, the input layer is the first layer of the network and comprises n neurons. Each input neuron represents a distinct attribute of the training/test dataset (X_1, X_2, …, X_n). The value from each input neuron is multiplied by the corresponding weight W_ij to obtain the hidden-neuron input, as displayed in Fig. 5. The output layer is the last layer and typically comprises only one node, as only one output is demanded. In this proposed model, during the training phase, the objective is to compute the most accurate weights to assign to the input-layer connections. In this phase, the output is computed repeatedly, and the result is compared with the desired output from the training/test datasets. We now discuss the step-by-step process of the proposed OGSA–ANN model.

Fig. 5 Mechanism of the proposed OGSA–ANN model

Step 1: Solution encoding. Solution encoding is an important process for all optimization algorithms, helping to identify the optimal solution quickly. Here, we optimize the weight values of the ANN model. At first, we randomly assign the weight values of the ANN model, so that this random weight vector forms an initial solution of the OGSA. The solution encoding representation is given in Fig. 6.
Step 2: Oppositional solution. In this step, we calculate the opposite of every solution. Every solution W_ij has a unique opposite solution W_op. The opposite solution OP(W̃_1, W̃_2, …, W̃_n) is calculated componentwise, following Eq. (14), as given in Eq. (15):

$$\tilde{W}_i = p_i + q_i - W_i \qquad (15)$$

Step 3: Fitness calculation. Now, we calculate the fitness of the solutions from steps 1 and 2. To assess the fitness of a solution, an objective function is needed to quantify the quality of each individual. The fitness function is given in Eq. (16):

$$Fitness = \min(MSE) \qquad (16)$$

$$MSE = \frac{1}{N} \sum_{i=1}^{N} \left( Y_i - \tilde{Y}_i \right)^2 \qquad (17)$$

where Y_i is the target value and Ỹ_i is the obtained value.

Fig. 6 Representation of initial weights
Step 4: Update using the GSA. The agent's velocity and position for the next, (k + 1)th, iteration are calculated as follows:

$$V_i^d(k+1) = rand_i \times V_i^d(k) + a_i^d(k) \qquad (18)$$

$$Y_i^d(k+1) = Y_i^d(k) + V_i^d(k+1) \qquad (19)$$

where rand_i is a random number in the interval [0, 1]. Further, V_i^d(k+1) is the velocity of the ith agent in the dth dimension at the (k+1)th iteration, and Y_i^d(k+1) is the position of the ith agent in the dth dimension at the (k+1)th iteration.
Step 5: Termination criteria. The algorithm stops its execution once the maximum number of iterations is reached; the agent holding the best fitness value is then selected, and its solution is taken as the best weight values of the ANN model. Using this proposed OGSA–ANN model, we detect the moving vehicles from the input video dataset V_in.
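Steps 1–3 above can be sketched as follows: a population of candidate weight vectors is paired with its opposites (Eq. (15)) and filtered by the MSE fitness of Eqs. (16)–(17). The linear stand-in model, bounds, and population size are assumptions for illustration only; the filtered population would then be evolved with the GSA updates of Eqs. (18)–(19) until the iteration limit of Step 5 is reached.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data standing in for the network's training set
X = rng.normal(size=(50, 3))
w_true = np.array([0.5, -1.0, 2.0])
T = X @ w_true

def mse_fitness(w):
    # Eqs. (16)-(17): fitness = mean squared error between obtained and
    # target outputs. A linear single-output model stands in for the ANN.
    return float(np.mean((T - X @ w) ** 2))

def obl_filter(pop, lo, hi):
    # Oppositional step (Step 2, Eq. (15)): form the opposite of every
    # candidate weight vector and keep the fitter half of the union.
    opp = lo + hi - pop                       # elementwise opposite
    union = np.vstack([pop, opp])
    fit = np.array([mse_fitness(w) for w in union])
    return union[np.argsort(fit)[: len(pop)]]

lo, hi = -3.0, 3.0
pop0 = rng.uniform(lo, hi, (30, 3))           # Step 1: random initial weights
pop1 = obl_filter(pop0, lo, hi)               # Steps 2-3: opposites + fitness
best_before = min(mse_fitness(w) for w in pop0)
best_after = min(mse_fitness(w) for w in pop1)
```

By construction the filtered population is never worse than the random one, which is the benefit OBL contributes to the OGSA.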

STAGE 2: MVD using the OGSA–ANN model
After the background generation process, we detect the moving vehicles from a given video using the OGSA–ANN model. To detect the vehicles, three main steps are used: (1) block estimation, (2) the vehicle detection process, and (3) the background updating process. The step-by-step process of the MVD using the OGSA–ANN model is discussed hereunder.
Block estimation. Consider the input video V_in. At first, we split the video into M frames, which are given as input to the ANN. We then convert each frame into q × q blocks, and each pixel of each frame V_in(i, j) is given as input to the hidden neurons. The block estimation process is achieved by calculating the similarity between the present incoming pixel V_in and the bth hidden neuron H_b. The block estimation process can be expressed using Eq. (20):

$$K(V_{in}, H_b)_m = \exp\left( \frac{-\| V_{in} - H_b \|^2}{2\sigma^2} \right) \qquad (20)$$

where σ is the empirical tolerance, m = 1 to F, and F signifies the number of neurons. Following the Gaussian activation, the responses are evaluated; the Gaussian activation values are summed over the hidden neurons as follows:


$$T(m) = \sum_{b=1}^{H} K(V_{in}, H_b)_m \qquad (21)$$

Then, the maximum value of the sum is chosen to determine whether the block has a high probability of containing background information. This is expressed in Eq. (22):

$$T_{\max} = \max_{m = 1 \sim E} T(m) \qquad (22)$$

The summation of these maximal Gaussian activation responses within each N × N block is given in Eq. (23):

$$\varphi = \sum_{P_t \in B} T_{\max} \qquad (23)$$

where P_t is each pixel of the corresponding block B, and the block size N can be set to 3 empirically. To determine whether the block B(i, j) has a high probability of comprising background data, the computed sum for the block must surpass a threshold value τ, in which case the block is labeled "0." Otherwise, the block B(i, j) is labeled "1" to designate a high probability that the block contains moving vehicles. This decision rule can be articulated as follows:

$$B(i, j) = \begin{cases} 0, & \text{if } \varphi \geq \tau \\ 1, & \text{otherwise} \end{cases} \qquad (24)$$
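The block estimation of Eqs. (20)–(24) can be sketched as follows: every pixel of a block is compared with a set of background-model neurons via the Gaussian kernel of Eq. (20), the strongest responses are accumulated over the block, and the sum is thresholded. The neuron values, σ, and the default τ are illustrative assumptions.

```python
import numpy as np

def classify_block(block, neurons, sigma=10.0, tau=None):
    # Eq. (20): Gaussian similarity of every block pixel to every neuron
    px = block.ravel()[:, None]
    K = np.exp(-((px - neurons[None, :]) ** 2) / (2.0 * sigma ** 2))
    # Eqs. (21)-(23): keep each pixel's strongest response, sum over the block
    phi = K.max(axis=1).sum()
    if tau is None:
        tau = 0.5 * px.size            # assumed threshold for the example
    # Eq. (24): "0" = likely background block, "1" = likely moving vehicle
    return 0 if phi >= tau else 1

neurons = np.array([10.0, 12.0, 11.0])     # learned background intensities
bg_block = np.full((3, 3), 11.0)           # matches the background model
fg_block = np.full((3, 3), 200.0)          # a bright vehicle region
```

Pixels near a background intensity produce responses close to 1, so a background block accumulates a large φ, while a vehicle block stays near 0 and fails the τ test.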

Vehicle detection procedure. After the block estimation procedure eliminates blocks that are determined to have a high probability of containing background information, the vehicle detection procedure accurately detects moving vehicles within only those blocks regarded as having a high probability of containing moving vehicles. Finally, the detection result depends on the output layer of the ANN, which generates the binary motion detection mask. This is accomplished via the winner-takes-all rule as follows:

$$Y = \max_{m = 1 \sim E} \sum_{i=1}^{H} Z_i^m \qquad (25)$$

where Z_i^m is the output value of the ith hidden-layer neuron and H is the number of hidden-layer neurons.

$$P(x, y) = \begin{cases} 1, & \text{if } Y(x, y) < \omega \\ 0, & \text{otherwise} \end{cases} \qquad (26)$$

where ω signifies the empirical threshold value, and P(x, y) is set either to "1" to signify a motion pixel that is part of a moving vehicle or to "0" to signify a background pixel.
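The winner-takes-all mask of Eqs. (25)–(26) can be sketched as follows; the shape convention for the hidden-layer responses Z and the threshold ω are assumptions for the example.

```python
import numpy as np

def motion_mask(Z, omega):
    # Eq. (25): Y = max over m of the summed hidden-layer outputs Z_i^m,
    # with Z assumed shaped (E, H, rows, cols): E candidate responses,
    # each from H hidden neurons, at every pixel
    Y = Z.sum(axis=1).max(axis=0)
    # Eq. (26): a weak background response marks a motion pixel ("1")
    return (Y < omega).astype(np.uint8)

# Strong background response everywhere except one pixel
Z = np.ones((2, 3, 4, 4))
Z[:, :, 1, 2] = 0.05
mask = motion_mask(Z, omega=1.0)   # a single "1" at pixel (1, 2)
```

Pixels the background model explains well keep a high summed response and drop out of the mask; only poorly explained pixels survive as vehicle candidates.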
Background updating procedure. After all the operations are completed for the current incoming frame, we use Eq. (27) to update the neurons of the hidden layer for the next incoming frame in the proposed background updating procedure:

$$W'(x, y)_i = (1 - \alpha)\, W(x, y)_i + \alpha\, P_t(x, y) \qquad (27)$$

where W'(x, y)_i and W(x, y)_i represent the updated and the original ith neuron at position (x, y), respectively, and α is an empirical parameter.
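Eq. (27) is an exponential running average of each hidden neuron toward the current frame; a minimal sketch, with α chosen arbitrarily:

```python
import numpy as np

def update_neuron(W, frame, alpha=0.05):
    # Eq. (27): W' = (1 - alpha) * W + alpha * P_t
    return (1.0 - alpha) * W + alpha * frame

W = np.full((2, 2), 10.0)        # current background-model neuron values
frame = np.full((2, 2), 20.0)    # incoming frame values P_t
W = update_neuron(W, frame)      # moves 5% of the way toward the frame
```

A small α makes the model adapt slowly to gradual illumination changes while remaining robust to vehicles passing briefly through a pixel.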

5 Results and Discussion

In this section, we explain the experimental results obtained from the proposed MVD system. The proposed MVD system has been tested on three different videos and evaluated with four metrics: precision, recall, F-measure, and similarity. The proposed technique was implemented in MATLAB 2017b and run on a Windows computer with an Intel Core i5 processor at 1.6 GHz and 4 GB of RAM. The performance of the proposed MVD system is compared with different algorithms. We now discuss the experimental results.

5.1 Evaluation Metrics

The MVD system performance is analyzed using the most common performance measures: precision, recall, F-measure, and similarity. The metric values are based on true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN):

$$Precision = \frac{TP}{TP + FP} \qquad (28)$$

$$Recall = \frac{TP}{TP + FN} \qquad (29)$$

$$Similarity = \frac{TP}{TP + FP + FN} \qquad (30)$$

$$F1 = \frac{2 \cdot Precision \cdot Recall}{Recall + Precision} \qquad (31)$$

where TP is the total number of true-positive pixels, FP is the total number of false-positive pixels, and FN is the total number of false-negative pixels.
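Eqs. (28)–(31) computed from pixel counts can be sketched as follows; the example counts are arbitrary:

```python
def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)                # Eq. (28)
    recall = tp / (tp + fn)                   # Eq. (29)
    similarity = tp / (tp + fp + fn)          # Eq. (30), the Jaccard index
    f1 = 2 * precision * recall / (recall + precision)   # Eq. (31)
    return precision, recall, similarity, f1

p, r, s, f = detection_metrics(tp=90, fp=10, fn=10)
```

Note that similarity penalizes both false positives and false negatives at once, which is why it is the strictest of the four measures.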

5.2 Experimental Results

In this section, we analyze the experimental results of the proposed MVD system. For the experimental analysis, we have utilized three types of videos: "expressway (EW)," "highway (HW)," and "freeway (FW)." A sample frame from each selected video is given in Fig. 7 (Tables 2, 3, 4).

5.3 Performance Analysis

In this section, we analyze the performance of our proposed methodology based on the evaluation metrics precision, recall, similarity, and F-measure. Further, to prove the effectiveness of our proposed methodology, we compare our proposed work (OGSA–ANN) with the ANN-based MVD and the GSA-ANN-based MVD.

5.3.1 Performance Based on Precision

In this section, the performance is compared based on the precision measure; a higher precision value signifies a better system, although precision is independent of accuracy.
Figure 8 shows the performance of the proposed method analyzed based on the precision measure for the three videos used in this paper. In Fig. 8, the x-axis represents the input videos and the y-axis represents the precision. Based on our analysis of Fig. 8, our proposed approach ensures a maximum precision of 91% for video 1, 91.5% for video 2, and 91% for video 3. On the other hand, the ANN-based MVD system achieves a precision of 88% for video 1, 89% for video 2, and 89% for video 3. Similarly, the GSA-ANN-based MVD system achieves a maximum precision of 85% for video 1, 86% for video 2, and 85.5% for video 3. From the results, we clearly see that our proposed approach attains the maximum precision compared to the other methods.

Fig. 7 Extracted frames: a video 1 (EW), b video 2 (HW), and c video 3 (FW)

5.3.2 Performance Based on Recall Measure

In this section, we compare the performance of the proposed work with the other methods using the recall measure. Recall is the fraction of items that were correctly identified among all the items that should have been detected. The performance graph of the recall measure is given in Fig. 9.
Based on an analysis of Fig. 9, our proposed approach ensures the maximum recall
of 94% for video 1, 93% for video 2, and 95% for video 3. Similarly, the GSA-ANN-
based MVD system ensures the maximum recall of 89%, 91%, and 91% for video 1,
video 2, and video 3, respectively. Moreover, we further observe from Fig. 9 that the
ANN-based MVD system ensures the recall of 83%, 86%, and 84% for video 1, video
2, and video 3, respectively. From the results, we clearly understand that our proposed
approach ensures better results compared to other methods.

Table 2 Visual representation of the MVD results for EW

5.3.3 Performance Based on the F-Measure

The performance analysis based on the F-measure is explained in this section. The F-measure combines both the precision and the recall, and it gives a single estimate of the accuracy of the system under test. The performance based on the F-measure is given in Fig. 10.

Table 3 Visual representation of the MVD results for HW (columns: original video frame, ANN model, OGSA-ANN model)

Figure 10 shows the performance based on the F-measure. Here, the x-axis represents the videos, and the y-axis represents the F-measure. Based on an analysis of Fig. 10, our proposed OGSA–ANN-based MVD system ensures the maximum F-measure of 92%, 90%, and 91% for video 1, video 2, and video 3, respectively. Similarly, the GSA-ANN-based MVD system ensures an F-measure of 90%, 89%, and 90% for video 1, video 2, and video 3, respectively. On the other hand, the ANN-based MVD system ensures an F-measure of 85%, 87%, and 87% for video 1, video 2, and video 3, respectively.
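The balanced F-measure is the harmonic mean of precision and recall, F = 2PR/(P + R); a minimal sketch, feeding in the video 1 precision (91%) and recall (94%) figures reported above:

```python
def f_measure(p, r, beta=1.0):
    """Weighted F-measure; beta = 1 gives the harmonic mean of p and r."""
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# Video 1 values reported above: P = 0.91, R = 0.94
print(round(f_measure(0.91, 0.94), 3))  # 0.925
```

which agrees with the roughly 92% F-measure reported for video 1.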

5.3.4 Performance Based on Similarity

The main objective of the proposed methodology is to detect the moving vehicles in
traffic video surveillance using the OGSA–ANN model. To prove the effectiveness of

Table 4 Visual representation of the MVD results for FW (columns: original video frame, PNN model [32], OGSA-ANN model)

the model, in this paper we compare our proposed model with the existing ANN-based MVD and GSA-ANN-based MVD systems. In this section, the performance is analyzed based on the similarity measure.
Figure 11 shows the performance of the proposed MVD system using the similarity measure. Here, our proposed approach ensures a maximum similarity of 90%, compared with 88% for the GSA-ANN model and 85% for the ANN model. Based on the results obtained, we clearly understand that our proposed approach ensures better results compared to the other methods.
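The exact similarity measure is not defined in this excerpt; one common choice for comparing a detected foreground mask with the ground truth is the Jaccard index (intersection over union), sketched here purely as an assumption:

```python
import numpy as np

def jaccard_similarity(detected, ground_truth):
    """Jaccard index |A intersect B| / |A union B| between binary masks."""
    a = np.asarray(detected, dtype=bool)
    b = np.asarray(ground_truth, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty masks are identical
    return np.logical_and(a, b).sum() / union

# 2 pixels agree out of 4 in the union -> similarity 0.5
print(jaccard_similarity([1, 1, 1, 0], [1, 1, 0, 1]))  # 0.5
```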

Fig. 8 Performance of the proposed MVD using precision measure

Fig. 9 Performance of the proposed MVD using recall measure

Fig. 10 Performance of the proposed MVD using the F-measure

6 Conclusion

In this paper, we have discussed the beneficial features of an MVD system using the OGSA–ANN model. The proposed technique was implemented in MATLAB 2017b. The proposed MVD system was developed in two main stages: the design of the novel OGSA–ANN model and the detection of moving vehicles using that model. In the ANN model, the weight values were optimally selected using the OGSA algorithm

Fig. 11 Performance of the proposed MVD using similarity measure

which serves to increase the detection accuracy owing to its high convergence speed on detection problems. For experimentation, we have utilized three types of videos; further, performance metrics such as precision, recall, F-measure, and similarity were analyzed for each video. The simulation results show that our proposed approach ensures a maximum precision of 91.5%, which is higher than that of the other methods.
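The opposition-based component can be illustrated with a small sketch: a candidate population of ANN weight vectors is paired with its opposite points, and the fitter half of the combined pool is kept as the initial agents. The objective below is a placeholder (sum of squares), since the paper's detection-error fitness is not given in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition_init(n_agents, dim, lo, hi):
    """Opposition-based seeding: draw random agents, add their opposites
    x_opp = lo + hi - x, and keep the n_agents with the best fitness."""
    pop = rng.uniform(lo, hi, size=(n_agents, dim))
    opp = lo + hi - pop                   # mirror each agent in the search box
    pool = np.vstack([pop, opp])
    fitness = np.sum(pool**2, axis=1)     # placeholder objective (minimised)
    keep = np.argsort(fitness)[:n_agents]
    return pool[keep]

agents = opposition_init(n_agents=10, dim=5, lo=-1.0, hi=1.0)
print(agents.shape)  # (10, 5)
```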

References
1. A. Ahilan, E.A.K. James, Design and implementation of real time car theft detection in FPGA, in 2011
Third International Conference on Advanced Computing, Chennai (2011), pp. 353–358
2. A. Ahilan, P. Deepa, Improving lifetime of memory devices using evolutionary computing-based error
correction coding, in Computational Intelligence, Cyber Security and Computational Models (2016),
pp. 237–245
3. A. Ahilan, P. Deepa, Modified Decimal Matrix Codes in FPGA configuration memory for multiple
bit upsets, in 2015 International Conference on Computer Communication and Informatics (ICCCI)
(2015), pp. 1–5
4. A. Ahilan, P. Deepa, Design for built-in FPGA reliability via fine-grained 2-D error correction codes.
Microelectron. Reliab. 55(9–10), 2108–2112 (2015)
5. A. Appathurai, P. Deepa, Design for reliability: a novel counter matrix code for FPGA based quality
applications, in 6th Asia Symposium on Quality Electronic Design (ASQED) (2015), pp. 56–61
6. A. Baher, H. Porwal, W. Recker, Short term freeway traffic flow prediction using genetically opti-
mized time-delay-based neural networks, in Transportation Research Board 78th Annual Meeting,
Washington, DC (1999)
7. P.V.K. Borges, N. Conci, A. Cavallaro, Video-based human behavior understanding: a survey. IEEE
Trans. Circuits Syst. Video Technol. 23(11), 1993–2008 (2013)
8. H.-Y. Cheng, C.-C. Weng, Y.-Y. Chen, Vehicle detection in aerial surveillance using dynamic bayesian
networks. IEEE Trans. Image Process. 21(4), 2152–2159 (2012)
9. M. Cheon, W. Lee, C. Yoon, M. Park, Vision-based vehicle detection system with consideration of the
detecting location. IEEE Trans. Intell. Transp. Syst. 13(3), 1243–1252 (2012)
10. H. Chung-Lin, L. Wen-Chieh, A vision-based vehicle identification system, in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), vol. 4 (2004), pp. 364–367
11. W.-C. Hu, C.-Y. Yang, D.-Y. Huang, Robust real-time ship detection and tracking for visual surveillance of cage aquaculture. J. Vis. Commun. Image Represent. 22(6), 543–556 (2011)
12. W.-C. Hu, C.-H. Chen, T.-Y. Chen, D.-Y. Huang, Z.-C. Wu, Moving object detection and tracking from
video captured by moving camera. J. Vis. Commun. Image Represent. 30, 164–180 (2015)
13. X. Ji, Z. Wei, Y. Feng, Effective vehicle detection techniques for traffic surveillance systems. J. Vis.
Commun. Image Represent. 17(3), 647–658 (2006)

14. R.E. Kalman, A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Eng.
82, 35–45 (1960)
15. N.K. Kanhere, S.T. Birchfield, Real-time incremental segmentation and tracking of vehicles at low
camera angles using stable features. IEEE Trans. Intell. Transp. Syst. 9, 148–160 (2008)
16. N.K. Kanhere, Vision-Based Detection, Tracking and Classification of Vehicles Using Stable Features with Automatic Camera Calibration (2008), p. 105
17. D.S. Kushwaha, T. Kumar, An efficient approach for detection and speed estimation of moving vehicles.
J. Proc. Comput. Sci. 89, 726–731 (2016)
18. X. Li, Z.Q. Liu, K.M. Leung, Detection of vehicles from traffic scenes using fuzzy integrals. Pattern
Recogn. 35(4), 967–980 (2002)
19. F.-L. Lian, Y.-C. Lin, C.-T. Kuo, J.-H. Jean, Voting-based motion estimation for real-time video trans-
mission in networked mobile camera systems. IEEE Trans. Industr. Inf. 9(1), 172–180 (2013)
20. A. Lozano, G. Manfredi, L. Nieddu, An algorithm for the recognition of levels of congestion in road
traffic problems. Math. Comput. Simul. 79(6), 1926–1934 (2009)
21. Y. Mary Reeja, T. Latha, W. Rinisha, Detecting and tracking moving vehicles for traffic surveillance.
ARPN J. Eng. Appl. Sci. 10(4) (2015)
22. N. Messai, P.T. Thomas, D. Lefebvre, A.El. Moudni, Neural networks for local monitoring of traffic
magnetic sensors. Control Eng. Pract. 13(1), 67–80 (2005)
23. S. Movaghati, A. Moghaddamjoo, A. Tavakoli, Road extraction from satellite images using particle
filtering and extended Kalman filtering. IEEE Trans. Geosci. Remote Sens. 48(7), 2807–2817 (2010)
24. X. Niu, A semi-automatic framework for highway extraction and vehicle detection based on a geometric
deformable model. ISPRS J. Photogr. Remote Sens. 61(3–4), 170–186 (2006)
25. G. Prathiba, M. Santhi, A. Ahilan, Design and implementation of reliable flash ADC for microwave
applications. Microelectron. Reliab. 88–90, 91–97 (2018)
26. M. SaiSravana, S. Natarajan, E.S. Krishna, B.J. Kailath, Fast and accurate on-road vehicle detection
based on color intensity segregation. J. Proc. Comput. Sci. 133, 594–603 (2018)
27. J. Satheesh Kumar, G. Saravana Kumar, A. Ahilan, High performance decoding aware FPGA bit-stream
compression using RG codes. Cluster Comput. 1–5 (2018)
28. J.P. Shinora, K. Muralibabu, L. Agilandeeswari, An adaptive approach for validation in visual object
tracking. Proc. Comput. Sci. 58, 478–485 (2015)
29. B. Sivasankari, A. Ahilan, R. Jothin, A. Jasmine Gnana Malar, Reliable N sleep shuffled phase damping
design for ground bouncing noise mitigation. Microelectron. Reliab. 88–90, 1316–1321 (2018)
30. G. Somasundaram, R. Sivalingam, V. Morellas, N. Papanikolopoulos, Classification and counting of
composite objects in traffic scenes using global and local image analysis. IEEE Trans. Intell. Transp.
Syst. 14(1), 69–81 (2013)
31. D. Srinivasan, M.C. Choy, R.L. Cheu, Neural networks for real time traffic signal control. IEEE Trans.
Intell. Transp. Syst. 7(3), 261–272 (2006)
32. Z. Sun, G. Bebis, R. Miller, On-road vehicle detection using Gabor filters and support vector machines,
in Proceedings of the IEEE Conference Digital Signal Processing, vol. 2 (2002), pp. 1019–1022
33. B. Tian, Y. Li, B. Li, D. Wen, Rear-view vehicle detection and tracking by combining multiple parts
for complex urban surveillance. IEEE Trans. Intell. Transp. Syst. 15(2) (2014)
34. D. Tran, J. Yuan, D. Forsyth, Video event detection: from subvolume localization to spatiotemporal
path search. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 404–416 (2014)
35. L. Wang, F. Chen, H. Yin, Detecting and tracking vehicles in traffic by unmanned aerial vehicles. J.
Autom. Constr. 72, 294–308 (2016)
36. Z. Wei et al., Multilevel framework to detect and handle vehicle occlusion. IEEE Trans. Intell. Transp.
Syst. 9, 161–174 (2008)
37. W. Zhang, X.Z. Fang, X. Yang, Moving vehicles segmentation based on Bayesian framework for
Gaussian motion model. Pattern Recogn. Lett. 27(1), 956–967 (2006)
38. J. Zhou, D. Gao, D. Zhang, Moving vehicle detection for automatic traffic monitoring. IEEE Trans.
Veh. Technol. 56(1), 51–59 (2007)
39. X. Zhou, C. Yang, W. Yu, Moving object detection by detecting contiguous outliers in the low-rank
representation. IEEE Trans. Pattern Anal. Mach. Intell. 35(3), 597–610 (2013)

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.

Affiliations

Ahilan Appathurai1 · Revathi Sundarasekar2 · C. Raja3 · E. John Alex4 ·


C. Anna Palagan5 · A. Nithya6

Revathi Sundarasekar
revathisunder161@gmail.com
C. Raja
rajaresearch2019@gmail.com
E. John Alex
johnalexvlsi@gmail.com
C. Anna Palagan
annapalaganc7467@gmail.com
A. Nithya
info.nithi83@gmail.com
1 Infant Jesus College of Engineering, Tuticorin, Tamil Nadu, India
2 Anna University, Chennai, Tamil Nadu, India
3 Koneru Lakshmaiah Education Foundation, Vaddeswaram, A.P., India
4 CMR Institute of Technology, Hyderabad, Telangana, India
5 Malla Reddy Engineering College, Hyderabad, Telangana, India
6 Vaagdevi College of Engineering, Warangal, Telangana, India
