
technologies

Article
Intelligent System for Vehicles Number Plate Detection and
Recognition Using Convolutional Neural Networks
Nur‑A‑Alam 1, Mominul Ahsan 2,*, Md. Abdul Based 3 and Julfikar Haider 2

1 Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University,
Tangail 1902, Bangladesh; munnacse44@gmail.com
2 Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M15 6BH, UK;
j.haider@mmu.ac.uk
3 Department of Electrical, Electronics and Telecommunication Engineering, Dhaka International University,
Dhaka 1205, Bangladesh; based@kth.se
* Correspondence: M.Ahsan@mmu.ac.uk

Abstract: The number of vehicles on the road is rising rapidly, particularly in proportion to the
industrial revolution and growing economy. The widespread use of vehicles has increased the proba‑
bility of traffic rules violation, causing unexpected accidents, and triggering traffic crimes. In order
to overcome these problems, an intelligent traffic monitoring system is required. The intelligent sys‑
tem can play a vital role in traffic control through the number plate detection of the vehicles. In this
research work, a system is developed for detecting and recognizing vehicle number plates using
a convolutional neural network (CNN), a deep learning technique. This system comprises two
parts: number plate detection and number plate recognition. In the detection part, a vehicle’s image
is captured through a digital camera. Then the system segments the number plate region from the im‑
age frame. After extracting the number plate region, a super resolution method is applied to convert
 the low‑resolution image into a high‑resolution image. The super resolution technique is used with
 the convolutional layer of CNN to reconstruct the pixel quality of the input image. Each character of
the number plate is segmented using a bounding box method. In the recognition part, features are
extracted and classified using the CNN technique. The novelty of this research is the development
of an intelligent system employing CNN to recognize number plates which have low resolution
and are written in the Bengali language.

Keywords: number plate detection; super resolution technique; convolutional neural networks;
deep learning; bounding box method

Citation: Alam, N.‑A.; Ahsan, M.; Based, M.A.; Haider, J. Intelligent System for Vehicles Number Plate Detection and Recognition Using Convolutional Neural Networks. Technologies 2021, 9, 9. https://doi.org/10.3390/technologies9010009

Received: 24 December 2020; Accepted: 18 January 2021; Published: 20 January 2021
1. Introduction

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Vehicle Number Plate Recognition (VNPR) is a popular and effective research area in the field of computer vision [1]. As there is an increasing number of vehicles on the road, it is highly challenging to monitor and control them using existing systems (such as manual monitoring and monitoring by traffic police). An intelligent system can be used to overcome this problem in a convenient and efficient way. Real‑time detection of number plates from moving vehicles is needed, not only for monitoring traffic systems, but also for traffic law enforcement. However, development in this area is slow and very challenging to implement from a practical point of view [2].

Recognizing vehicle number plates can help with authorization (for example, when a vehicle enters a restricted premises). VNPR can also support security policies where access control is critical. This research work aims to detect and recognize number plates in an intelligent way. The tests were carried out on vehicles in Dhaka city, Bangladesh, although the work can be extended to any country. Dhaka is a densely populated city with huge amounts of traffic, and people frequently break the traffic rules. In Bangladesh,




the Bangladesh Road Transport Authority (BRTA) has the authority to register vehicles. According to the annual report of BRTA [3], the number of vehicles is increasing rapidly every year in Bangladesh (Figure 1). This could account for the increasing road traffic accidents and related deaths. The Bengali alphabet and Bengali numerals are used in Bangladeshi vehicle number plates. The international vehicle registration code for Bangladesh is BD. The two types of vehicles used in Bangladesh are civil vehicles and army vehicles. In Figure 2, a sample of Bangladeshi vehicle number plates is shown.

Figure 1. Registration of vehicles’ number in Bangladesh in the past eight years [3].

(a) (b)

Figure 2. Bangladeshi vehicle number plates: (a) civil vehicle number plate, (b) army vehicle number plate.

Authorized letters and numeric numbers for the vehicle number plates in Bangladesh with the corresponding English equivalent figures are presented in Table 1.

Table 1. Bangla letters and their corresponding English equivalents used in the vehicle number plates.

Bangla letter:  অ ই উ এ ক খ গ ঘ ঙ চ ছ জ ঝ ত থ ড
English letter: a i u e ka kha ga gha na ca cha ja jha ta tha da
Bangla letter:  ঢ ট ঠ দ ধ ন প ফ ব ভ ম য র ল শ স
English letter: dha ta tha da dha na pa pha ba bha ma ya ra la sha sa
Bangla number:  ০ ১ ২ ৩ ৪ ৫ ৬ ৭ ৮ ৯
English number: 0 1 2 3 4 5 6 7 8 9

The format of vehicle number plates in Bangladesh is “city name—class letter of a vehicle and its number—vehicle number”. For example, “DHAKA METRO—GA 0568”. Here, “Dhaka” represents the city name and “GA” represents the vehicle class in Bangla alphabets. The second line (the number line) contains six digits, where the first two digits (15) denote the vehicle class number and the last four digits (0568) represent the vehicle registration number in Bangla numerals. Figure 3 shows the representation of the above in the number plate.

Figure 3. Representation of a vehicle number plate in Bangladesh.
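The plate format described above lends itself to a simple post‑recognition parser. The sketch below is illustrative only (the regular expression, field names, and separator handling are assumptions, not from the paper); it splits an already‑recognized plate string, transliterated to English, into its city, class letter, and number fields.

```python
import re

# Hypothetical pattern for "city name - class letter - six-digit number";
# the hyphen/space handling is an assumption for demonstration.
PLATE_PATTERN = re.compile(
    r"^(?P<city>[A-Z ]+?)\s*-\s*(?P<cls>[A-Z]+)\s+"
    r"(?P<class_no>\d{2})-?(?P<reg_no>\d{4})$"
)

def parse_plate(text: str) -> dict:
    """Split a recognized plate string into city, class letter,
    class number (first two digits) and registration number (last four)."""
    m = PLATE_PATTERN.match(text.strip().upper())
    if m is None:
        raise ValueError(f"unrecognized plate format: {text!r}")
    return m.groupdict()

print(parse_plate("DHAKA METRO - GA 15-0568"))
```

Such a parser would run after character recognition, turning the raw string into the structured fields used for registration lookup.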
This research work developed an intelligent system that can efficiently recognize vehicle number plates using Convolutional Neural Networks (CNN). The recognition system consists of five major steps: image pre‑processing, detection of the number plate from the captured image, a learning‑based super resolution technique to produce a high‑resolution image, segmentation, and recognition of each character. Segmentation is the most important task as it provides the result of the entire number plate analysis. The main objective of segmentation is to determine each region according to vehicle city, type, and number. However, segmentation of blurred number plates was more challenging, and this was overcome by using the super resolution method, which transformed the blurred number plate image into a clear image.

To get a perfect segmentation result, the bounding box method was used. Then the system used CNN for extracting features of the number plate and recognizing the vehicle number.

The main steps of this system are organized as follows:
• Localization of the number plate region: a template matching algorithm is used for extracting the number plate region from the input image frame of the vehicle.
• Super resolution and segmentation techniques: the super resolution technique is used to get a clear number plate with good resolution, and the bounding box method is used for segmenting each character of the number plate. The method segments the vehicle city, type, and number from the plate region.
• Feature extraction: the system used 700 number plate images for training by using CNN, and it provided 4096 features for each character to recognize correctly. The number plate images used in this investigation were collected from Bangladesh Road Transport Authority (https://service.brta.gov.bd/).

The paper is organized as follows: Section 2 describes the related work; Section 3 analyzes the methodology; Section 4 presents the simulation of this work; finally, the conclusion is drawn in Section 5.

2. Related Works
In the literature, a large number of systems were proposed and applied for vehicle number plate recognition: digital image enhancement, detection of the number plate area from the captured image, segmentation of each character, and recognition of the character form the core steps in the recognition systems.

Cheokman et al. [4] used morphological operators to pre‑process the image. After preprocessing, the template matching approach was used for recognition of each character. It was applied to Macao‑style vehicle registration plates. In [5], scaling and cross‑validation were applied for removing outliers and finding suitable parameters, using the Support Vector Machine (SVM) method. When recognizing characters via the SVM method, the accuracy rate was higher than that of the Neural Network (NN) system.

Prabhakar et al. [6] proposed a webcam for capturing images. This system can localize several sizes of number plates from the captured images. After localizing the plate, characters are segmented and recognized using several NNs.

A Sobel color detector for detecting vertical edges was used in [7], where ineffec‑
tive edges were removed. The plate region was discovered by using the template matching
approach. Mathematical morphology and connected component analysis were used for
segmentation. Chirag Patel [8] proposed mathematical morphology and connected com‑
ponent analysis for segmentation, and recognition of characters using a radial basis func‑
tion neural network.
The number plate detection system in [9] used plate background and character color
to find the plate location. For segmentation, the column sum vector was adopted. Artificial
Neural Network (ANN) was used for character recognition.
The system in [10] is used for Chinese number plate recognition. The number plate
image is converted into a binary image, and noise in the image is removed. Then, features
are extracted from the image and the image is normalized to 8 × 16 pixels. After normal‑
ization, a back‑propagation neural network is used for recognition.
Ziya et al. proposed Fuzzy geometry to locate the number plate, and segmented the
plate by using Fuzzy C‑Means [11]. The segmentation technique, by using blob labeling
and clustering, provides a segmentation accuracy of 94.24% [12]. In [13], Gabor filter,
threshold, and connected component labeling were used for finding the number plate.
A self‑organizing map (SOM) neural network was used for character recognition after
segmentation [13]. In [14], a two‑layer Markov network was used for segmentation and
character recognition. Similar works on number plate detection are published in [15,16].
Maulidia et al. [17] presented a method where the accuracy of Otsu and K‑nearest
neighbor (KNN) were obtained for converting an RGB image into a binary image, extract‑
ing characteristics of the image. Feature extraction in pattern recognition was used for
converting pixels into binary form. Feature extraction was performed by the Otsu method
where KNN classified the image by comparing the neighborhood test data to the training
data. Test data were determined by using the learning algorithm through a classification
process, which groups the test data into classes. The Otsu method was developed based
on a pattern recognition process with a binary vector without influencing the threshold
value. Adjustment of distribution of the pixel values of the image was performed to ob‑
tain binary segmentation. KNN classification proved to be a great boon in recognizing the
vehicle number plate. However, the authors did not provide the recognition capability of
the system under extreme weather conditions.
Liu et al. [18] presented a supervised K‑means machine learning algorithm to segre‑
gate the characters of the number plate into subgroups, which were classified further by
the Support Vector Machine (SVM). Their system recognized blurred number plate im‑
ages and improved the classification accuracy. This system differentiated the obstacles in
character recognition due to the angle of the camera, speed of the vehicle, and surround‑
ing light and shadow. The camera captured faint and unrecognizable character images.
A huge number of samples increased the workload of SVM classifiers; thus, affecting the
accuracy.
Quiros et al. [19] used the KNN algorithm for classifying characters from number
plates. An image processing camera was installed on a highway in their proposed sys‑
tem and analyzed the feed received, capturing the images of vehicles. Contours within
the number plates were computed as if they were valid characters, along with their sizes,
and afterwards, the plates were segmented from the detected contours. Each contour was
classified using the KNN algorithm, which was trained using different sets of data, con‑
taining 36 characters, comprising 26 letters and 10 numerical digits. The algorithm
was tested on previously segmented characters and compared with the character recogni‑
tion techniques, such as artificial neural networks. However, the authors did not compare
the character recognition performance of their proposed system with the literature.
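Several of the systems surveyed here rest on KNN voting over character features. As a minimal sketch of that idea (an illustration of the general technique, not the implementation of any cited work), a plain k‑nearest‑neighbour classifier over character feature vectors can be written as:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples,
    using Euclidean distance over the feature vectors."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy feature vectors for two character classes, "A" and "1".
X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]])
y = np.array(["A", "A", "1", "1"])
print(knn_predict(X, y, np.array([0.05, 0.05])))   # → A
```

In the cited systems the feature vectors would come from segmented character images rather than the made‑up two‑dimensional points used here.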
Thangallapally et al. [20] implemented a technique to recognize the characters on
number plates and to upload details into a server. This, in turn, was segregated to ex‑
tract the image of the vehicle number plate. The process led to compartmentalizing the
characters from the number plate, where KNN was applied to extract the characters up‑
Technologies 2021, 9, 9 5 of 18

loaded in the server. The hindrance of this process was recognizing the number plates
from blurred or ambiguous images.
Singh and Roy [21] proposed a vehicle number plate recognition system in India,
where various issues were observed, including a plethora of font sizes, different colors,
double line number plates, etc. Artificial neural network (ANN) and SVM were employed
to recognize characters and to detect plate contours, respectively. Although a number of
algorithms were employed in the literature to remove noise and to enhance plate recognition,
ANN showed good results while easing camera constraints.
Sanchez [22] implemented a recognition system of vehicle number plates in the UK
using machine learning algorithms, including SVM, ANN, and KNN. The system received
the car image, which was processed and analyzed with Machine Learning (ML) algorithms and com‑
puter vision techniques. The results from the investigation showed that the system could
identify the number plate of the car from the images.
Panahi and Gholampour [23] proposed a system to detect unclear number plates dur‑
ing rough weather and high‑speed vehicles in different traffic situations. The image data of
the vehicle number plates were collected from different roads, streets, and highways dur‑
ing the day and night. The proposed system was robust to variations in light,
size, and clarity of the number plates. The aforementioned techniques helped in compil‑
ing a dedicated set of solutions to problems and challenges involved in the formation of a
number plate recognition system in various intelligent transportation system applications.
Subhadhira et al. [24] used the deep learning method, which was used for training
processes, to classify vehicle number plates accurately. This system consisted of two parts:
(1) pre‑processed and extracted features using the histogram of oriented gradients (HOG)
and (2) the second part classified each number and alphabetical character that appeared on
the number plate to be analyzed and segregated. The extreme learning machine (ELM) was
used as a classifier, whereas HOG extracted important features from the plate to recognize
Thai characters on the number plate. The ELM system performed better due to its high
speed and acceptable testing and training times.
In previous works, different techniques, such as template matching, and several clas‑
sifiers, such as SVM and ANN, were used to recognize characters. Template matching
techniques could not deal with ambiguous characters, and SVM could not cope with varying
illumination conditions or the orientation of characters on damaged number plates. Consid‑
ering these issues, in this research, the number plate region is extracted from the captured
vehicle image by using a template matching method and the super resolution technique
is applied to improve the resolution quality. Then, segmentation of characters from the
number plate region, and feature extraction to recognize the characters, are performed us‑
ing CNN. CNN uses the gradient‑based learning algorithm with modernized activation
functions, including the Rectified Linear Unit (ReLU), which can deal with the vanishing gradi‑
ent problem [25]. The gradient descent‑based method performs training and creates models to
minimize the errors and update the weights accordingly. Thus, better prediction accuracy
is obtained through producing highly optimized weights during training. CNN is capa‑
ble of representing two‑dimensional (2D) or 3D images with meaningful features, which
can help achieve superior recognition performance. In particular, the max‑pooling layer
of CNN can deal with shape, as well as scale invariant image problems. In addition, the al‑
gorithm uses a considerably low number of network parameters compared to traditional
neural networks with similar sizes.
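The convolution, ReLU, and max‑pooling operations described in this paragraph can be illustrated with a minimal NumPy forward pass. This is a toy sketch of the operations only, not the network used in this work; the kernel and image below are made up for demonstration.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified Linear Unit: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling; gives a degree of shift invariance."""
    h, w = x.shape
    h, w = h // size * size, w // size * size   # crop to a multiple of size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# Toy example: a vertical-edge kernel on a small synthetic "character" image.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # step edge at column 4
kernel = np.array([[-1.0, 1.0]])      # horizontal gradient
feat = max_pool(relu(conv2d(img, kernel)))
print(feat.shape)   # → (4, 3)
```

In a trained CNN, many such kernels are learned by gradient descent, and the pooled feature maps from the last layers form the feature vector used for classification.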

3. Proposed Methodology
This proposed system has the capability of detecting and recognizing vehicle number
plates in any language. To detect each character of a number plate, this system trains
all alphabetic letters from the plate using machine learning. Number plate characters for
the majority of countries across the world are from A to Z and 0 to 9, so they can easily
be detected, whereas Bangla number plate detection and recognition is very challenging,
due to the complex alphanumeric characters. Therefore, Bangla number plates have been considered as a case study in this research.

The basic steps of the proposed methodology are (a) pre‑processing; (b) localization of the number plate region; (c) super resolution techniques to get clear images; (d) segmentation of characters; (e) feature extraction; and (f) recognition of characters, as shown in Figure 4.

Figure 4. Overview of the proposed system for vehicle number plate recognition.

3.1. Pre‑Processing
In this system, there are two parts: one part is for number plate detection of moving vehicles and another part is for number plate recognition. The first part extracts the number plate region from the captured image of the vehicle by using the template matching technique. The number plate recognition part consists of three activities. (1) The super resolution method is used in the number plate region for converting a low‑resolution image to a high‑resolution image; then the image is converted from RGB to gray. (2) The bounding box method is used for segmenting characters. (3) Finally, features from the authorized alphabets and numbers are extracted using CNN. The CNN model provides 4096 features for recognizing each character.
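The bounding box segmentation step can be sketched with a simple column‑projection variant; this is an illustrative assumption about the method, since the paper does not list its code. Columns containing foreground pixels are grouped into runs, and each run is tightened vertically to give one box per character.

```python
import numpy as np

def char_bounding_boxes(binary):
    """Sketch of bounding-box segmentation (assumed projection-profile
    variant): find runs of columns that contain foreground pixels,
    then tighten each run vertically."""
    cols = binary.any(axis=0)                 # which columns hold ink
    boxes, start = [], None
    for j, on in enumerate(list(cols) + [False]):   # sentinel ends last run
        if on and start is None:
            start = j
        elif not on and start is not None:
            rows = np.flatnonzero(binary[:, start:j].any(axis=1))
            boxes.append((int(rows[0]), start, int(rows[-1]) + 1, j))  # (top, left, bottom, right)
            start = None
    return boxes

# Toy binary image with two separated "characters".
img = np.zeros((6, 10), dtype=bool)
img[1:5, 1:3] = True
img[2:6, 6:9] = True
print(char_bounding_boxes(img))   # → [(1, 1, 5, 3), (2, 6, 6, 9)]
```

Each returned box could then be cropped and fed to the CNN recognizer; touching or skewed characters would need a more robust method than this sketch.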
3.2. Localization of the Number Plate Region
The template matching technique was applied to recognize the plate region from the vehicle image, which is identical to the template from the target image. In this method, the template is scanned over the target image and measures of similarity are calculated.

The localization process traverses the template image to each position in the vehicle image and computes a numerical index indicating how well the template matches the image pixel‑by‑pixel in that portion. Finally, the strongest similarities are identified as likely pattern positions. Figure 5 illustrates the template matching procedure. The various portions of the input image are matched to the predefined template image and the naive template matching approach is performed in order to extract the template. Then, the extracted plate region is resized to 127 × 127 pixels. An example of the extracted plate region from the vehicle image frame using the template matching technique is shown in Figure 6.
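As an illustration of the matching step, normalized cross‑correlation is one common similarity measure for template matching (the paper does not name its exact metric, so this choice is an assumption). A brute‑force NumPy sketch:

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and score each position with
    normalized cross-correlation; return the (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for i in range(ih - th + 1):
        for j in range(iw - tw + 1):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

# Toy example: hide the template inside a larger array and recover it.
rng = np.random.default_rng(0)
image = rng.random((20, 30))
template = image[5:12, 10:18].copy()   # ground-truth location (5, 10)
print(match_template(image, template))   # → (5, 10)
```

A production system would use an optimized routine rather than this double loop, but the scoring logic is the same: the position with the strongest similarity is taken as the plate location.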
Figure 5. Procedure of template matching technique for detecting the vehicle number plate.

(a) Input Image (b) Template Image (c) Output Image

Figure 6. Localization of the number plate region by using the template matching technique: (a) input image; (b) template image; and (c) output image.

3.3.Super
3.3. SuperResolution
ResolutionTechnique Technique
3.3. Super Resolution Technique
Theimaging
The imaging chips and optical components are highly expensive and, practically, are not used in surveillance cameras. The quality of the surveillance camera and the configuration of the hardware components limit the captured image resolution, so the characters of the number plate cannot be detected from a blurred plate region. In this research, a spatial super resolution (SR) technique is used to overcome this limitation successfully.

SR techniques construct high-resolution images from several low-resolution images. The concept of SR is to organize the non-redundant data contained in abundant low-resolution frames to produce a high-resolution image. The main objective of super resolution is to acquire the high-resolution image from the multiscale low-resolution image through the spatial resolution approach. Here, spatial super resolution is applied to get a high-resolution number plate image. First, the system used the downsampling approach and converted the image samples, such as the local gradient, alignment vector, and local statistics. Then, the kernel formed the downsampled image and constructed the high-resolution RGB image. The high-resolution image was converted from RGB to a gray-scale image (I), according to Equation (1).

I = Wr * R + Wg * G + Wb * B (1)
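The conversion in Equation (1) can be sketched in a few lines of Python; the helper names and pixel values are illustrative and not taken from the paper's MATLAB implementation:

```python
# Fixed luminance weights for R, G, and B from Equation (1); they sum to 1.
WR, WG, WB = 0.299, 0.587, 0.114

def rgb_to_gray(pixel):
    """Map one (R, G, B) tuple to a gray intensity I = Wr*R + Wg*G + Wb*B."""
    r, g, b = pixel
    return WR * r + WG * g + WB * b

def image_to_gray(image):
    """Apply the conversion to a 2-D list of (R, G, B) pixels."""
    return [[rgb_to_gray(px) for px in row] for row in image]

print(rgb_to_gray((255, 0, 0)))  # pure red maps to 0.299 * 255, roughly 76.2
```

Because the three weights sum to 1, a pure white pixel (255, 255, 255) maps to 255, so the gray image keeps the original dynamic range.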
The R, G, and B are the values of the monochrome colors (red, green, and blue, respectively) of the RGB color image, which are linear in luminance. Wr, Wg, and Wb are the coefficients (fixed weights) of the red, green, and blue colors, with values of 0.299, 0.587, and 0.114, respectively. The summation of all three weights is equal to 1.

Figure 7 describes the process to reconstruct images using the super resolution (SR) technique. SR techniques are a good choice to get clear images to segment from blurred images. There are various super resolution techniques; among these, better results are achieved with the spatial super resolution technique. This work calculates the peak signal-to-noise ratio (PSNR) value for different super resolution techniques, and the spatial super resolution technique achieves the best PSNR value. Multiple low-resolution frames are downsampled, shifted by subpixels from the high-resolution scene between one another. The construction of SR aligns the low-resolution observations and combines subpixels into high-resolution image grids to overcome the limitation of a camera's image processing. The proposed super resolution is summarized in Algorithm 1. This algorithm combines many low-resolution images from the original image [26]. Then, a kernel is added to the combined image to get a clearer image.

Algorithm 1: Super Resolution of Plate Region
Super_resolution(input: Original Low Resolution Frames [O1, ..., Ok]; output: SR_Frame)
H = BicubicInterpolation(Avr(O1, ..., Ok))
For i = 1 to N
    For j = 1 to K
        SumFrame = SumFrame + Fk−1(Fk(H, Ok) − Ok);
    End loop;
    H = H − SumFrame;
    SR_Frame = H;
Return SR_Frame;
Figure 7. Overview of the super resolution technique to improve the visibility of detected number plates.

In Algorithm 1, an enlarged hypothesis frame, H, was first created from several sequential low-resolution frames. The initial hypothesis frame can be either a simple bicubic interpolation of one of the frames, or a bicubic interpolation of the average of all of the low-resolution frames. Afterwards, this hypothesis frame was iteratively altered and adjusted with the information from all of the low-resolution frames. Algorithm 1 runs for N iterations specified by the user, and the hypothesis frame, H, was adjusted in each of the iterations. Fk was a function used to align the hypothesis frame with the original low-resolution frames; the aligned frames were then subtracted from each of the original, individual low-resolution frames, Ok. Fk−1 was a function that reversed the alignment and enlarged the error frame. These error frames were summed over the number of low-resolution frames, so that the sum could be used to adjust the hypothesis frame, H. The images obtained after applying the super resolution algorithm to the plate region, shown in Figure 8, clearly demonstrate that image quality was significantly improved, with a correct balance of brightness and contrast.
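Under simplifying assumptions (identity alignment, block-average downsampling standing in for Fk, and nearest-neighbour enlargement standing in for Fk−1), the iterative refinement of Algorithm 1 can be sketched as:

```python
import numpy as np

def downsample(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Stand-in for Fk: map the hypothesis to a low-resolution view by block averaging."""
    h, w = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Stand-in for Fk−1: enlarge an error frame back to the hypothesis grid."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def super_resolve(observations, factor=2, n_iters=10, step=0.5):
    """Iteratively adjust a hypothesis frame H so that downsampling H
    reproduces each low-resolution observation Ok (Algorithm 1, simplified)."""
    avg = np.mean(observations, axis=0)
    h = upsample(avg, factor)  # initial hypothesis from the frame average
    for _ in range(n_iters):
        err = np.zeros_like(h)
        for obs in observations:
            # accumulate the enlarged residual for each observation
            err += upsample(downsample(h, factor) - obs, factor)
        h -= step * err / len(observations)  # H = H - SumFrame
    return h
```

This is a sketch of the general back-projection idea, not the paper's exact implementation: real frames are subpixel-shifted, so Fk and Fk−1 would include motion alignment rather than plain averaging.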

3.4. Segmentation of Character

The character segmentation process will partition the number plate image into multiple sub-images; each sub-image offers one character. In this work, segmentation is the most significant part, as the successful recognition of each character relies on accurate segmentation. If segmentation is not performed correctly, then recognition will not be accurate.

Figure 8. Implementing the super resolution technique on the original images taken under different conditions (original image vs. after super resolution for a stationary vehicle, a moving vehicle, and a stationary vehicle in low light).

The bounding box method is used to segment the exact region of each character. This method is efficient at identifying the boundaries of each character. It surrounds the labeled region with a rectangular box, as shown in Figure 9. Then, it determines the upper left and lower left corners of the rectangle by the x and y coordinates, and it provides a label for each character. The bounding box method is described in Equation (2) [27] and Algorithm 2. This algorithm is a bounding box algorithm to describe the target location. The bounding box is a rectangular box that can be determined by the x- and y-axis coordinates in the upper-left corner, and the x- and y-axis coordinates in the lower-right corner of the rectangle.

E(x) = Σp∈β Up × xp + Σ{p,q}∈ε Vpq × |xp − xq|, xp ∈ {0, 1} (2)

where β is an image with a set of pixels p ∈ β. xp, as an individual pixel label, assumes values of 1 and 0 for foreground and background, respectively. Pairs of adjacent pixels, unary potentials, and pairwise potentials are defined by ε, Up, and Vpq.

Algorithm 2: Bounding Box Method of Plate Region
For i = 1 : N (each video segment) do
    For t = 1 : L (each frame) do
        Detect character as foreground object with a target bounding box;
        Form the target image region Ri defined by the target bounding box;
        Normalize the Region of Interest (ROI) with preservation of the aspect ratio;
        Compute shape description pi ∈ M from the normalized ROI, where M is the underlying Riemannian manifold;
    End
    Collect the set of manifold points {pi};
    Compute final feature vector xi;
End
Construct training set X = {(xi, yi)};
Train CNN classifier using X with cross-validation;
The algorithm selects a frame and detects characters as a foreground object with a target bounding box. Then, the target image region defined by the target bounding box and the normalized ROI (preserving the aspect ratio) are formed. Afterwards, a feature vector was extracted for the target image. The training data were matched with the features of the target image to segment the image. This process was continued until all of the sections were segmented.

Figure 9. Segmentation performed on each part of three different number plates.
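As a simplified stand-in for the bounding box step, connected foreground regions of a binarized plate image can be flood-filled and their extents recorded; the mask and helper below are illustrative, not the paper's implementation:

```python
def character_bounding_boxes(mask):
    """Return (min_x, min_y, max_x, max_y) boxes, one per connected
    foreground region (value 1) in a 2-D binary mask, in left-to-right order."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and not seen[y][x]:
                # flood fill one region, tracking its extents
                stack = [(y, x)]
                seen[y][x] = True
                min_x = max_x = x
                min_y = max_y = y
                while stack:
                    cy, cx = stack.pop()
                    min_x, max_x = min(min_x, cx), max(max_x, cx)
                    min_y, max_y = min(min_y, cy), max(max_y, cy)
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((min_x, min_y, max_x, max_y))
    return sorted(boxes)  # sort by min_x for reading order

# Two small "characters" in a 3 x 5 binary mask:
mask = [[1, 0, 0, 1, 1],
        [1, 0, 0, 1, 1],
        [1, 0, 0, 0, 0]]
print(character_bounding_boxes(mask))  # [(0, 0, 0, 2), (3, 0, 4, 1)]
```

Each returned box gives the upper-left and lower-right corner coordinates, which is exactly the information the recognition stage needs to crop one character per sub-image.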

The features were extracted according to three classes for the segmented number plates. The first class consisted of 64 categories (the 64 districts of Bangladesh). In the second class, there were 19 categories according to the vehicle type. The last class consisted of 10 categories (0–9), which identified the vehicle number. A deep learning technique, CNN, was employed to train the classes of the vehicle city, type, and number, using 700 number plate images of different angles and resolutions to introduce a high-resolution trainable image. The CNN extracted features from the different categories and stored the features in three separate vectors [28]. Figure 10 shows the proposed CNN AlexNet architecture. The AlexNet model consisted of five convolutional layers, three max-pooling layers, two fully connected layers, and one Softmax layer. Each convolutional layer applied convolutional filters and a nonlinear activation function, ReLU. The pooling layers were used to perform max pooling. The first convolutional layer contained 96 filters with a filter size of 11 × 11 and a stride of 4. Layer 2 had a size of 55 × 55, and the number of filters was 256. Layers 3 and 4 contained filter sizes of 13 × 13, with 384 filters. The last convolutional layer contained a filter size of 13 × 13, with 256 filters. The two fully connected layers provided 1 × 4096 features to classify the output by the Softmax layer.

Figure 10. Proposed convolutional neural network (CNN) architecture for vehicle number plate recognition (input image 224 × 224 × 1; five convolutional layers with max pooling; two fully connected layers; 108 output classes).
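The stated layer sizes can be checked with the standard output-size rule for convolution and pooling, out = (in − kernel + 2 × padding) / stride + 1. A short sketch follows; the padding value is an assumption, since the paper does not state it:

```python
def conv_out(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a convolution or pooling layer."""
    return (size - kernel + 2 * padding) // stride + 1

# A 224 x 224 input through an 11 x 11 convolution with stride 4 (padding 2 assumed),
# then a 3 x 3 max pooling with stride 2 -- an AlexNet-style front end:
size = conv_out(224, kernel=11, stride=4, padding=2)  # 55, matching the 55 x 55 Layer 2 input
size = conv_out(size, kernel=3, stride=2)             # 27 after max pooling
print(size)  # 27
```

The first step reproduces the 55 × 55 size quoted for Layer 2, which is consistent with the 11 × 11, stride-4 first layer described above.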

After segmentation, the number plate images for vehicle city, type, and number
were found. The CNN then extracted features of these images to recognize the vehicle city,
type, and number, separately, by transforming image text to characters. The 224 × 224 × 1
image size was used for feature extraction. The system used AlexNet as the CNN model.
Figure 11 shows the recognition process of each character.

Figure 11. Recognition of each character to text after extracting features from the detected num‑
ber plates.

Table 2 presents a comparison of detection, segmentation, and recognition techniques
used in this investigation and in other relevant literature. The key novelty in the proposed
system compared to the literature was the capability of producing high‑resolution images
from blur images, by using the super resolution method to recognize the number plate
characters with high accuracy. Furthermore, the employment of machine learning based
on the AlexNet model can identify text from images correctly, to obtain a recognition rate
higher than the other techniques.

Table 2. Detection, segmentation, and recognition techniques of the applied method and existing methods.

Reference | Detection | Segmentation | Recognition
Proposed system | Bounding box method for extracting the plate region; super resolution techniques to get clear images | Template matching | CNN
[29] | Sobel edge detection with additional morphological operations | Line segmentation, word segmentation based on area filtering | Feed forward neural network
[30] | Connected component technique | - | Template matching
[31] | Sobel edge detector | Line segment orientation (LSO) algorithm | Template matching
[25] | CNN | CNN | CNN
4. Simulation Results and Discussions

An experiment was carried out on the MATLAB R2018a simulator (Image Processing Toolbox™) for recognition of vehicle number plates. This detection and recognition process consisted of the following steps:
• Step 1: the system captured a video of the vehicle. Then, the system extracted the vehicle frame from the video and localized the number plate from the vehicle image. The number plate images were converted to high-resolution images to perform accurate segmentation. For extracting the number plate, the template matching method was used.
• Step 2: for segmentation, the system used the bounding box method to segment each character. Each letter or word was mapped with a box value, and groups of characters were extracted. Figure 12 illustrates the segmented characters from the vehicle number plate images.
• Step 3: the system used the CNN for extracting features and tested number plates on the VLPR vehicle dataset. In order to evaluate the experiment results, 700 vehicle images were appointed. The AlexNet model was employed for training the CNN. The system accomplished a maximum of 70 iterations for each input set. The iterations were confined when the minimum error rate specified by the user was reached. The error rate for this system was 1.8%. After training, the CNN acquired 98.2% accuracy based on the validation set, and attained 98.1% accuracy based on the testing set.

The error rate, Ei, of an individual program i depends on the number of samples incorrectly classified (false positives plus false negatives), and is evaluated by Equation (3):

Ei = (f/n) × 100 (3)

where f is the number of sample cases incorrectly classified, and n is the total number of sample cases.
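Equation (3) amounts to a two-line computation; the sample counts below are invented for illustration:

```python
def error_rate(false_pos: int, false_neg: int, total: int) -> float:
    """Ei = (f / n) * 100, where f counts the misclassified samples
    (false positives plus false negatives) and n is the total sample count."""
    return (false_pos + false_neg) / total * 100.0

# A hypothetical batch of 500 samples with 9 misclassifications gives 1.8%:
print(error_rate(false_pos=4, false_neg=5, total=500))  # ~1.8
```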

Figure 12. Example of character segmentation from a vehicle number plate.



Table 3 shows comparative accuracies and computational processing times between the proposed system and the systems employed in relevant literature. In [30], the authors estimated that 1.3 s was taken to complete the image processing, whereas the proposed system in this experiment took, on average, only 111 milliseconds to complete the whole process. A Graphics Processing Unit (GeForce RTX 2070 Super) with 16 GB RAM was used in this system to perform quick computation. With a comparable sample size for training and testing, the accuracy of the proposed system was found to be much higher than the others available in the literature.
Table 3. Comparative accuracy of the applied system and the existing systems for image processing.

Reference | Sample Size | Localization | Accuracy | Processing Time
Proposed system | Training: 500, Testing: 200 | 100% | 98.2% | 111 milliseconds for the whole process
[29] | Testing: 300 | 84% | 80% | -
[30] | Testing: 120 | - | - | 1.3 s
[31] | Testing: 119 | 95.8% | 84.87% | -
[25] | Training: 450, Testing: 50 | 88.67% | - | -

A comparison of prediction accuracies for detecting number plates between different CNN models, such as the scratch model [32], ResNet50 [33], VGG 16 [34], and the model employed in this work (AlexNet) [35], is presented in Figure 13. The total number of true positives (TP) and true negatives (TN) was divided by the total number of TP, TN, false positives (FP), and false negatives (FN) to measure the accuracy (Equation (4)):

Accuracy (ACC) = (TP + TN)/(TP + TN + FP + FN) (4)

Figure 13. Comparative study on prediction accuracy of different CNN models.

It can be concluded that AlexNet outperforms (98.2%) the other models for image pro‑
cessing. Figure 14 shows the performance comparison in terms of peak signal to noise ra‑
tio (PSNR) using different super resolution techniques, such as interpolation‑based SR [36]
and reconstruction‑based SR [37]. The spatial super resolution technique used in this study
showed high PSNR (33.7845 dB) compared to other related works, such as interpolation‑
based (32.3676 dB) and reconstruction‑based (32.4787 dB) super resolution techniques, in‑
dicating a better quality of the compressed or reconstructed images.
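The PSNR figures above follow the standard definition, PSNR = 10 × log10(peak² / MSE). A minimal sketch, assuming 8-bit images with peak value 255 (the image values here are illustrative):

```python
import math

def psnr(reference, reconstructed, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-sized 2-D images."""
    flat_ref = [v for row in reference for v in row]
    flat_rec = [v for row in reconstructed for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_rec)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 1 gray level gives MSE = 1, i.e. the 8-bit ceiling case:
a = [[100, 100], [100, 100]]
b = [[101, 101], [101, 101]]
print(round(psnr(a, b), 2))  # 48.13
```

Higher PSNR means a smaller reconstruction error, which is why the spatial technique's 33.78 dB indicates better reconstruction quality than the 32.37 dB and 32.48 dB of the other two methods.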
In addition, 700 number plates collected in different backgrounds and different illumination conditions were used to test the number plate detection. Among them, 605 number plates were correctly identified, equating to a recognition rate of 86.50%. A total of 133 valid characters were checked and 121 of them were correctly identified, which showed that the character recognition rate was significantly high (90.9%). The average recognition time for a single number plate was calculated as 707 ms from a sample of 700 number plates. The recognition rates and times for characters are summarized in Table 4.

Figure 14. Comparative study on peak signal to noise ratio (PSNR, in dB) of different super resolution techniques: interpolation-based SR, reconstruction-based SR, and the proposed spatial super resolution.

Table 4. Identification performance of each character.

Object | Parameter | Value
Letters | Recognition Rate (%) | 86.5
Letters | Recognition Times (ms) | 48.6
Numbers | Recognition Rate (%) | 97.8
Numbers | Recognition Times (ms) | 48.9
Characters (Letters & Numbers) | Recognition Rate (%) | 90.9
Characters (Letters & Numbers) | Recognition Times (ms) | 52.3
Object Parameter Value
A system presented in [38] might have failed if the texture
Recognition Rate (%)of the plate was not 86.5clear
Letters
and the number plate region was short of the threshold Recognitionof Times
the projection
(ms) operation. To 48.6solve
this problem, this proposed system brought novelty Recognition Rate (%)
in employing 97.8
the super resolution
Numbers
technique to organize the non‑redundant data Recognition
containedTimesin(ms) 48.9
abundant low‑resolution
frames, and to produce a high‑resolution Recognition
image. The Rate (%) converted blur 90.9
technique images
Characters (Letters & Numbers)
Recognition Times (ms)
to clear images to recognize the texts correctly. A vehicle number plate detection system 52.3
presented in [39] was unable to properly recognize some characters and numbers, such as
2, 0, A
7, system
and more, presented
due to in [38] might
having have
a lower failed if the
recognition texture
rate of thetoplate
compared the was
othernot clear
charac‑
and the
ters. number
In this system, plate
theregion
machine waslearning
short ofbased
the threshold
CNN model of the
wasprojection
employedoperation.
to recognize To
solve
all this problem,
characters this proposed
and numbers system
correctly. Thus,brought novelty in
the recognition employing
rate was higherthethan
supertheresolu-
other
tion technique
techniques to organize
reported the non-redundant
in the literature. Capturingdata contained
images in abundant
by directly pointinglow-resolution
the camera to‑
frames,the
wards and to produce
number platesawas
high-resolution image.
not feasible for The technique
real‑world usage dueconverted
to complex blur images to
background
clear images to recognize the texts correctly. A vehicle number
and environment [40,41]. This system employed a novel template matching technique plate detection system pre-
to
detect
sented vehicle number
in [39] was unableplate images from
to properly the video
recognize someincharacters
order to extract the target
and numbers, suchregion
as 2,
more accurately.
0, 7, and more, due to having a lower recognition rate compared to the other characters.
Furthermore,
In this system, theamachine
similar system wasbased
learning proposedCNNformodel
Bangla number
was plate detection
employed to recognizein [42],
all
which showed
characters and anumbers
limitation of correctly
correctly. recognizing
Thus, the characters
the recognition rate wasfrom large
higher blur
than images.
the other
However,
techniquesthe proposed
reported system
in the could Capturing
literature. solve this issue
images by by
employing
directly the superthe
pointing resolution
camera
technique
towards the number plates was not feasible for real-world usage due to complex recog‑
to get high‑resolution images for easy detection. The previous system back-
nized
ground only
andBangla number[40,41].
environment plates in contrast
This systemtoemployed
this system havingtemplate
a novel the capability
matchingof recog‑
tech-
nizing
nique tothedetect
vehicle number
vehicle platesplate
number fromimages
other countries. Furthermore,
from the video in orderthe previous
to extract thesystem
target
used
regiononly
more200 number plate images in comparison to 700 number plate images contain‑
accurately.
Furthermore, a similar system was proposed for Bangla number plate detection in
[42], which showed a limitation of correctly recognizing the characters from large blur
images. However, the proposed system could solve this issue by employing the super
nologies 2021, 9, x FOR PEER REVIEW 15 of 18

resolution technique to get high-resolution images for easy detection. The previous sys-
tem recognized only Bangla number plates in contrast to this system having the capability
Technologies 2021, 9, 9 15 of 18
of recognizing the vehicle number plates from other countries. Furthermore, the previous
system used only 200 number plate images in comparison to 700 number plate images
containing very high and very low resolutions for training the proposed system. There-
fore, low-resolution
ing very images
high and during testing
very low were detected
resolutions and recognized
for training the proposed accurately by
system. Therefore, low‑
the system. resolution images during testing were detected and recognized accurately by the system.
A vehicle number plate recognition system (VNPRS) can play a vital role in implementing
technologies for smart cities, such as traffic control, smart parking, toll automation,
driverless cars, air quality monitoring, security, etc. [43,44]. A conceptual framework for
integrating VNPRS with the smart city systems is presented in Figure 15. The key theme in
the smart city is about collecting data from the individual systems by sensors and cameras,
communicating within different systems, and taking action from the hidden information
within the data. Automatic vehicle number identification could provide base data to the
network of the smart city. For example, in case of monitoring and managing security for
a particular location within a city, VNPRS can serve as the tracking aid for the security
authority. The VNPRS can be connected to a cloud‑based system where all registered vehicle
numbers will be stored. A vehicle number plate recognized by the system will be
directed to the cloud system for matching with the database, and identifying the vehicle
user information, for taking further action by the relevant authority.

Figure 15. Connecting the vehicle number plate recognition system with a smart city.
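The cloud‑side matching step described above can be sketched as a simple lookup service. This is a hypothetical illustration in Python; the registry contents, plate strings, and the `handle_detection` function are invented for this sketch and are not part of the authors' system:

```python
# Hypothetical cloud-side registry: recognized plate text -> registered owner record
VEHICLE_REGISTRY = {
    "DHAKA-METRO-GA-11-2233": {"owner": "A. Rahman", "type": "private car"},
    "DHAKA-METRO-KHA-44-5566": {"owner": "S. Akter", "type": "microbus"},
}

def handle_detection(plate_text: str) -> str:
    """Match a recognized plate against the registry and report the outcome."""
    record = VEHICLE_REGISTRY.get(plate_text)
    if record is None:
        # Unregistered plate: flag for follow-up by the relevant authority
        return f"ALERT: {plate_text} not found in registry"
    return f"{plate_text}: {record['type']} registered to {record['owner']}"

print(handle_detection("DHAKA-METRO-GA-11-2233"))
print(handle_detection("DHAKA-METRO-GA-99-0000"))
```

In a deployed smart city system this lookup would sit behind an authenticated service and a proper database rather than an in‑memory dictionary, but the control flow, match or alert, is the same.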

5. Conclusions
In this research, a system is proposed for detecting and recognizing vehicle number
plates in Bangladesh, which are written in the Bengali language. In this system, the images
of the vehicles are captured and then the number plate regions are extracted using the
template matching method. Then, the segmentation of each character is performed. Finally,
a convolutional neural network (CNN) is used for extracting features of each character,
which classifies the vehicle city, type, and number, to recognize the characters of the
number plate. The CNN provides a large number of features to help with accurate recognition
of characters from the number plate. This research used super resolution techniques to
recognize characters with high resolution. In order to evaluate the experimental results,
700 vehicle images (with 70 iterations of each input set) were employed. After training,
the CNN acquired 98.2% accuracy on the validation set, and attained 98.1% accuracy
on the testing set. This system can also be used for number plates written
in other languages in the same way.
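The template matching extraction step summarized above can be illustrated with normalized cross‑correlation, a common way to implement template matching. This pure‑NumPy sketch on synthetic data is illustrative only, not the authors' exact procedure:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) of the best normalized cross-correlation match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            wz = window - window.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic test: hide a known patch in a larger "frame" and locate it
rng = np.random.default_rng(2)
frame = rng.random((40, 80))
template = frame[12:20, 30:50].copy()   # the "plate" region to find
print(match_template(frame, template))  # (12, 30)
```

The brute‑force scan is quadratic in the image size; practical systems compute the same score with FFT‑based correlation or a library routine, but the matching criterion is unchanged.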

Author Contributions: Conceptualization: N.‑A.‑A., M.A.B.; methodology: N.‑A.‑A., M.A.B.; for‑


mal analysis and investigation: N.‑A.‑A., M.A., J.H., M.A.B.; writing—original draft preparation:
N.‑A.‑A., M.A.B.; writing—review and editing: N.‑A.‑A., M.A., J.H., M.A.B.; funding acquisition:
N.‑A.‑A., M.A.B., resources: N.‑A.‑A., M.A.B., supervision: M.A., J.H., M.A.B. All authors have read
and agreed to the published version of the manuscript.

Funding: The research received no funding.


Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the
corresponding author. The data are not publicly available due to privacy.
Acknowledgments: The authors would like to thank Bangladesh Road Transport Authority (BRTA)
for providing the number plate data sets.
Conflicts of Interest: On behalf of all authors, the corresponding authors state that there is no conflict
of interest.

References
1. Kocer, H.E.; Cevik, K.K. Artificial neural networks‑based vehicle license plate recognition. Procedia Comput. Sci. 2011, 3, 1033–
1037. [CrossRef]
2. Balaji, G.N.; Rajesh, D. Smart Vehicle Number Plate Detection System for Different Countries Using an Improved Segmentation
Method. Imp. J. Interdiscip. Res. 2017, 3, 263–268.
3. Annual Report, Bangladesh Road Transport Authority (BRTA). Available online: http://www.brta.gov.bd/ (accessed on 20 April
2020).
4. Wu, C.; On, L.C.; Weng, C.H.; Kuan, T.S.; Ng, K. A Macao license plate recognition system.
In Proceedings of the 2005 International Conference on Machine Learning and Cybernetics, Guangzhou, China, 18–21 August
2005; IEEE: New York, NY, USA, 2005; 7, pp. 18–21. [CrossRef]
5. Lopez, J.M.; Gonzalez, J.; Galindo, C.; Cabello, J. A Simple Method for Chinese License Plate Recognition Based on Support Vector
Machine. In Proceedings of the 2006 International Conference on Communications, Circuits and Systems,
Guilin, China, 25–28 June 2006; IEEE: New York, NY, USA, 2006; 3, pp. 2141–2145. [CrossRef]
6. Prabhakar, P.; Anupama, P. A novel design for vehicle license plate detection and recognition. In Proceedings of the Second
International Conference on Current Trends in Engineering and Technology‑ICCTET, Coimbatore, India, 8 July 2014; IEEE: New
York, NY, USA, 2014. [CrossRef]
7. Anagnostopoulos, C.N.E.; Anagnostopoulos, I.E.; Psoroulas, I.D.; Loumos, V.; Kayafas, E. License plate recognition from still
images and video sequences: A survey. IEEE Trans. Intell. Transp. Syst. 2008, 9, 377–391. [CrossRef]
8. Patel, C.; Shah, D. Automatic Number Plate Recognition System. Int. J. Comput. Appl. 2013, 69, 1–5.
9. Du, S.; Ibrahim, M.; Shehata, M.; Badawy, W. Automatic license plate recognition (ALPR): A state‑of the‑art review. IEEE Trans.
Circuits Syst. Video Technol. 2013, 23, 311–325. [CrossRef]
10. Zhao, Z.; Yang, S.; Ma, X. Chinese License Plate Recognition Using a Convolutional Neural Network. In Proceedings of the 2008
IEEE Pacific‑Asia Workshop on Computational Intelligence and Industrial Application, Wuhan, China, 19–20 December 2008;
IEEE: New York, NY, USA, 2008; pp. 27–30. [CrossRef]
11. Telatar, Z.; Camasircioglu, E. Plate Detection and Recognition by using Color Information and ANN. In Proceedings of the 2007
IEEE 15th Signal Processing and Communications Applications, Eskisehir, Turkey, 11–13 June 2007; IEEE: New York, NY, USA,
2007. [CrossRef]
12. Khan, N.Y.; Imran, A.S.; Ali, N. Distance and Color Invariant Automatic License Plate Recognition System. In Proceedings of
the 2007 International Conference on Emerging Technologies, Islamabad, Pakistan, 12–13 November 2007; IEEE: New York, NY,
USA, 2007; pp. 232–237. [CrossRef]
13. Juntanasub, R.; Sureerattanan, N. Car license plate recognition through Hausdorff distance technique. In Proceedings of the
17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'05), Hong Kong,
China, 14–16 November 2005; IEEE: New York, NY, USA, 2005. [CrossRef]
14. Feng, Y.; Fan, Y. Character recognition using parallel BP neural network. In Proceedings of the 2008 International Conference
on Audio, Language and Image Processing, Shanghai, China,
7–9 July 2008; IEEE: New York, NY, USA, 2008; pp. 1595–1599. [CrossRef]
15. Patel, S.G. Vehicle License Plate Recognition Using Morphology and Neural Network. Int. J. Mach. Learn. Cybern. (IJCI) 2013, 2,
1–7. [CrossRef]
16. Syed, Y.A.; Sarfraz, M. Color edge enhancement based fuzzy segmentation of license plates. In Proceedings of the Ninth In‑
ternational Conference on Information Visualisation (IV’05), London, UK, 6–8 July 2005; IEEE: New York, NY, USA, 2005; pp.
227–232. [CrossRef]
17. Hidayah, M.R.; Akhlis, I.; Sugiharti, E. Recognition Number of the Vehicle Plate Using Otsu Method and K‑Nearest Neighbour
Classification. Sci. J. Inform. 2017, 4, 66–75. [CrossRef]
18. Liu, W.‑C.; Lin, C.H. A hierarchical license plate recognition system using supervised K‑means and Support Vector Machine. In
Proceedings of the 2017 International Conference on Applied System Innovation (ICASI), Sapporo, Japan, 13–17 May 2017; IEEE: New
York, NY, USA, 2017; pp. 1622–1625.

19. Quiros, A.R.F.; Bedruz, R.A.; Uy, A.C.; Abad, A.; Bandala, A.; Dadios, E.P.; La Salle, D. A kNN‑based approach for the machine
vision of character recognition of license plate numbers. In Proceedings of the TENCON 2017—2017 IEEE Region 10 Conference,
Penang, Malaysia, 5–8 November 2017; IEEE: New York, NY, USA, 2017; pp. 1081–1086.
20. Thangallapally, S.K.; Maripeddi, R.; Banoth, V.K.; Naveen, C.; Satpute, V.R. E‑Security System for Vehicle Number Tracking
at Parking Lot (Application for VNIT Gate Security). In Proceedings of the 2018 IEEE International Students’ Conference on
Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 24–25 February 2018; IEEE: New York, NY, USA, 2018;
pp. 1–4.
21. Subhadhira, S.; Juithonglang, U.; Sakulkoo, P.; Horata, P. License plate recognition application using extreme learning machines.
In Proceedings of the 2014 Third ICT International Student Project Conference (ICT‑ISPC), Nakhon Pathom, Thailand, 26–27
March 2014; IEEE: New York, NY, USA, 2014; pp. 103–106.
22. Singh, A.K.; Roy, S. ANPR Indian system using surveillance cameras. In Proceedings of the 2015 Eighth International Conference
on Contemporary Computing (IC3), Noida, India, 20–22 August 2015; IEEE: New York, NY, USA, 2015; pp. 291–294.
23. Sanchez, L.F. Automatic Number Plate Recognition System Using Machine Learning Techniques. Ph.D. Thesis, Cranfield Uni‑
versity, Cranfield, UK, 2017.
24. Panahi, R.; Gholampour, I. Accurate detection and recognition of dirty vehicle plate numbers for high‑speed applications. IEEE
Trans. Intell. Transp. Syst. 2016, 18, 767–779. [CrossRef]
25. Rahman, M.S.; Mostakim, M.; Nasrin, M.S.; Alom, M.Z. Bangla License Plate Recognition Using Convolutional Neural Networks
(CNN). In Proceedings of the 2019 22nd International Conference on Computer and Information Technology (ICCIT), Dhaka,
Bangladesh, 18–20 December 2019; IEEE: New York, NY, USA, 2019; pp. 1–6.
26. Leung, B.; Memik, S.O. Exploring super‑resolution implementations across multiple platforms. EURASIP J. Adv. Signal Process.
2013, 1, 116. [CrossRef]
27. Lempitsky, V.; Kohli, P.; Rother, C.; Sharp, T. Image segmentation with a bounding box prior. In Proceedings of the 2009 IEEE
12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; IEEE: New York, NY, USA, 2009;
pp. 277–284. [CrossRef]
28. Youcef, M.‑A.H. Convolutional Neural Network for Image Classification with Implementation on Python Using PyTorch. 2019.
Available online: https://mc.ai/convolutional‑neural‑network‑for‑image‑classification‑with‑implementation‑on‑python‑using‑
pytorch/ (accessed on 4 March 2020).
29. Ghosh, A.K.; Sharma, S.K.D.; Islam, M.N.; Biswas, S.; Akter, S. Automatic license plate recognition (ALPR) for Bangladeshi vehicles.
Glob. J. Comput. Sci. Technol. 2011, 11, 1–6.
30. Baten, R.A.; Omair, Z.; Sikder, U. Bangla license plate reader for metropolitan cities of Bangladesh using template matching. In
Proceedings of the 8th International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, 20–22 December
2014; IEEE: New York, NY, USA, 2014; pp. 776–779. [CrossRef]
31. Haque, M.R.; Hossain, S.; Roy, S.; Alam, N.; Islam, M.J. Line segmentation and orientation algorithm for automatic Bengali
license plate localization and recognition. Int. J. Comput. Appl. 2016, 154, 21–28. [CrossRef]
32. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif.
Intell. Rev. 2019, 53, 5455–5516. [CrossRef]
33. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM
2017, 60, 84–90. [CrossRef]
34. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large‑scale image recognition. arXiv 2014, arXiv:1409.1556.
35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [CrossRef]
36. Ramos‑Llordén, G.; Vegas‑Sánchez‑Ferrero, G.; Martin‑Fernandez, M.; Alberola‑López, C.; Aja‑Fernández, S. Anisotropic Dif‑
fusion Filter with Memory Based on Speckle Statistics for Ultrasound Images. IEEE Trans. Image Process. 2014, 24, 345–358.
[CrossRef] [PubMed]
37. Tsai, R. Multiframe Image Restoration and Registration. Adv. Comput. Vis. Image Process. 1984, 1, 317–339.
38. Li, B.; Zeng, Z.Y.; Zhou, J.Z.; Dong, H.L. An algorithm for license plate recognition using radial basis function neural network.
In Proceedings of the 2008 International Symposium on Computer Science and Computational Technology, Shanghai, China,
20–22 December 2008; IEEE: New York, NY, USA, 2008; pp. 569–572.
39. Shan, B. Vehicle License Plate Recognition Based on Text‑line Construction and Multilevel RBF Neural Network. J. Comput. Sci.
2011, 6, 246–253. [CrossRef]
40. Mutholib, A.; Gunawan, T.S.; Kartiwi, M. Design and implementation of automatic number plate recognition on android plat‑
form. In Proceedings of the 2012 International Conference on Computer and Communication Engineering (ICCCE), Kuala
Lumpur, Malaysia, 3–5 July 2012; IEEE: New York, NY, USA, 2012; pp. 540–543.
41. Romadhon, R.K.; Ilham, M.; Munawar, N.I.; Tan, S.; Hedwig, R. Android‑based license plate recognition using pre‑trained neural
network. Internet Work. Indones. J. 2012, 4, 15–18.
42. Saif, N.; Ahmmed, N.; Pasha, S.; Shahrin, M.S.K.; Hasan, M.M.; Islam, S.; Jameel, A.S.M.M. Automatic License Plate Recognition
System for Bangla License Plates using Convolutional Neural Network. In Proceedings of the TENCON 2019—2019 IEEE Region
10 Conference (TENCON), Kochi, India, 17–20 October 2019; IEEE: New York, NY, USA, 2019; pp. 925–930.

43. Polishetty, R.; Roopaei, M.; Rad, P. A next‑generation secure cloud‑based deep learning license plate recognition for smart cities.
In Proceedings of the 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Anaheim, CA,
USA, 18–20 December 2016; IEEE: New York, NY, USA, 2016; pp. 286–293.
44. Chen, Y.S.; Lin, C.K.; Kan, Y.W. An advanced ICTVSS model for real‑time vehicle traffic applications. Sensors 2019, 19, 4134.
[CrossRef] [PubMed]
