A PROJECT REPORT
ON
“Missing Criminals and Children Tracking Using Deep Learning”
Submitted in partial fulfilment of the requirements for the degree of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE & ENGINEERING
Under the Guidance of
Mrs. KAVYA T.M B.E., M.Tech.,
Assistant Professor,
Department of Computer Science & Engineering
Adichunchanagiri Institute of Technology
Chikkamagaluru
Submitted by
CERTIFICATE
This is to certify that the Project Work Phase II (18CSP83) entitled “Missing Criminals
and Children Tracking Using Deep Learning” is a bonafide work carried out by ABHIMAN
GOWDA B R (4AI19CS002), ADARSHA J NANDAHALLI (4AI19CS003), SANKETH R
(4AI19CS099) and SURYA S S (4AI19CS114), students of 8th semester B.E., in partial
fulfilment for the award of the Degree of Bachelor of Engineering in Computer Science and
Engineering of Visvesvaraya Technological University, Belagavi, during the academic year
2022-2023. It is certified that all corrections and suggestions indicated for Internal Assessment
have been incorporated in the report deposited in the department library. The project report has
been approved, as it satisfies the academic requirements in respect of Project Work Phase II
prescribed for the said Degree.
APPROVAL
The Project Work Phase II (18CSP83) entitled “Missing Criminals and Children Tracking
Using Deep Learning” is hereby approved.
Submitted by
ABHIMAN GOWDA B R (4AI19CS002)
ADARSHA J NANDAHALLI (4AI19CS003)
SANKETH R (4AI19CS099)
SURYA S S (4AI19CS114)
ACKNOWLEDGEMENTS
We express our humble Pranamas to his holiness Parama Poojya Jagadguru
Padmabushana Sri Sri Sri Dr.Balagangadharanatha Mahaswamiji and Parama
Poojya Jagadguru Sri Sri Sri Dr. Nirmalanandanatha Mahaswamiji and also to Sri Sri
Gunanatha Swamiji Sringeri Branch Chikkamagaluru who have showered their blessings
on us for framing our career successfully.
We are deeply indebted to our honourable Director Dr. C K Subbaraya for
creating the right kind of care and ambience.
We express our deepest gratitude to Dr. Pushpa Ravikumar, Professor and Head of
the Department of Computer Science & Engineering, for her valuable guidance,
suggestions and constant encouragement, without which the success of our project work
would have been difficult.
We are grateful to our project coordinator Mrs. Arpitha C N for her excellent
guidance, constant encouragement, support and constructive suggestions.
We are thankful to our guide Mrs. Kavya T M, Asst. Professor, Dept. of Computer
Science & Engineering, AIT, Chikkamagaluru, for her inspiration and lively correspondence
in carrying out our project work.
We would like to thank our beloved parents for their support, encouragement and
blessings. Last but not least, we express our heartfelt thanks to all the teaching and
non-teaching staff of the CS&E Department and our friends who have rendered their
help, motivation and support.
ABHIMAN GOWDA B R (4AI19CS002)
ADARSHA J NANDAHALLI (4AI19CS003)
SANKETH R (4AI19CS099)
SURYA S S (4AI19CS114)
TABLE OF CONTENTS

Abstract
Acknowledgements
Contents
List of Figures
List of Tables
List of Snapshots

1. INTRODUCTION
   1.1 Introduction
   1.2 Elements of Image Processing
   1.3 Motivation
   1.4 Problem Statement
   1.5 Scope of the Project
   1.6 Objectives
   1.7 Review of Literature
   1.8 Organization of the Report
   1.9 Summary
3.1 Design Considerations
   5.3.3 Pseudocode for Face Detection using Haar Cascade
   5.3.4 Pseudocode for Feature Extraction using Local Binary Pattern
   5.3.5 Pseudocode for Classification using CNN
   5.4 Summary
6. System Testing
   6.1 Test Procedures
      6.1.1 Formal Test Case
      6.1.2 Informal Test Case
   6.2 Unit Testing
   6.3 Summary
7. Results and Discussion
   7.1 Experimental Results
      7.1.1 Snapshot of Home Page
      7.1.2 Snapshot of File Explorer on Clicking ‘Old Photo’ and ‘Recent Photo’ Button
      7.1.3 Snapshot after Uploading Image
      7.1.4 Snapshot of RGB to Greyscale Conversion
      7.1.5 Snapshot of Criminal Details
      7.1.6 Snapshot of Missing Children Details
      7.1.7 Snapshot of Matching Face with No Records
      7.1.8 Snapshot of Face with No Match
      7.1.9 Snapshot of Image with No Face
   7.2 Graphical Inferences
   7.3 Summary
8. Conclusion and Future Enhancement
   8.1 Conclusion
   8.2 Future Enhancement
References
LIST OF FIGURES
LIST OF TABLES

Table 6.1 Test case for loading the image of a criminal or child
Table 6.2 Test case for grayscale image
Table 6.3 Test case for face detection
Table 6.4 Test case for main GUI
Table 6.5 Test case for Upload Old and Recent Image button
Table 6.6 Test case for Submit button
LIST OF SNAPSHOTS
CHAPTER 1
INTRODUCTION
1.1 Introduction
Face recognition is one of the most significant research topics in today's world of science
and technology. Traditionally, repeat criminals have been identified by biometrics such as
thumbprints, but criminals are smart enough not to leave their biometrics at the crime scene.
In developed countries, governments create datasets that help recognize human faces and
compare them against suspicious activity. In our system, the input is either an image of the
criminal captured by CCTV at the time of the crime or an image of the missing child.
A match for this picture among the images already in the database is detected automatically
by our application. This helps the police department spot criminals quickly and also helps
them recognize missing children.
Image processing, in its basic definition, refers to the processing of digital images by
removing noise and any irregularities that may have crept into the image during its
formation, transformation or storage. For mathematical analysis, an image may be defined
as a two-dimensional function f(x, y), where x and y are spatial coordinates and the
amplitude of f at any point (x, y) is called the intensity of f at that point; in greyscale
images it is also called the grey level. When x, y and the intensity values are all finite,
discrete quantities, the image is said to be a digital image. Note that a digital image is
composed of a finite number of elements, each of which has a particular location and a value.
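The definition above can be made concrete with a small sketch (using NumPy purely for illustration): a digital image is just a finite grid of intensity values, and evaluating f(x, y) is an array look-up.

```python
import numpy as np

# A digital image as a discrete two-dimensional function f(x, y):
# a small 4x4 grey-scale image with 8-bit intensity values.
f = np.array([
    [ 12,  40,  80, 120],
    [ 40,  90, 160, 200],
    [ 80, 160, 220, 250],
    [120, 200, 250, 255],
], dtype=np.uint8)

x, y = 2, 1                      # spatial coordinates
intensity = int(f[x, y])         # the grey level of f at (x, y)
print(f"f({x}, {y}) = {intensity}")
print(f"image size: {f.shape[0]} x {f.shape[1]} pixels")
```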
These elements are called picture elements, or pixels, and are the smallest parts of an
image. Various techniques have been developed over the past decades, most of them aimed
at enhancing images obtained from various photographic equipment. Image processing
systems are becoming popular due to the easy availability of powerful personal computers,
algorithms, memory devices, software, etc. Moreover, over the years their usage has become
very simple, so that not just researchers and scientists but laymen as well can use them
effectively. Typical application areas include:
• Remote Sensing
• Medical imaging
• Non-destructive Evaluation
• Forensic Studies
• Textiles
• Material Science
• Military
• Film Industry
• Document Processing
• Graphic Arts
• Printing Industry.
In a general-purpose digital image processing system, the first step is image acquisition,
that is, acquiring a digital image. Doing so requires an imaging sensor and the capability
to digitize the signal produced by the sensor. The sensor could be a monochrome or colour
TV camera, or a line-scan camera. If the output of the camera or other imaging sensor is
not already in digital form, an analog-to-digital converter digitizes it. The nature of the
sensor and the image it produces are determined by the application. After a digital image
has been obtained, the next step is preprocessing that image. The key function of
preprocessing is to improve the image in ways that increase the chances of success for the
other processes. Preprocessing typically deals with techniques for enhancing contrast,
removing noise, and isolating regions whose textures indicate a likelihood of alphanumeric
information.
The next stage is segmentation, which partitions an input image into its constituent parts
or objects. A rugged segmentation procedure brings the process a long way toward the
successful solution of an imaging problem, while weak or erratic segmentation algorithms
almost always guarantee eventual failure. In terms of character recognition, the key role of
segmentation is to extract individual characters and words from the background.
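As a minimal, hypothetical illustration of segmentation (not the procedure used in this project), a global threshold can partition a small image into object and background pixels:

```python
import numpy as np

# Minimal segmentation sketch: a global threshold partitions the image
# into foreground (object) and background pixels.
image = np.array([
    [ 10,  12, 200, 210],
    [ 11, 220, 230,  13],
    [ 12, 210,  14,  15],
], dtype=np.uint8)

threshold = 128
mask = image > threshold          # True where a pixel belongs to the object
n_object_pixels = int(mask.sum())
print(mask.astype(np.uint8))
print("object pixels:", n_object_pixels)
```

Real segmentation schemes (adaptive thresholds, connected components) build on this same idea of labelling each pixel.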
Representation alone is only part of the solution for transforming raw data into a form suitable for
subsequent computer processing. A method must also be specified for describing the data so
that features of interest are highlighted. Description, also called feature selection, deals with
extracting features that result in some quantitative information of interest or features that are
basic for differentiating one class of objects from another. In terms of character recognition,
descriptors such as lakes (holes) and bays are powerful features that help differentiate one part
of the alphabet from another.
Recognition is the process that assigns a label to an object based on the information provided
by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized
objects. Knowledge about a problem domain is coded into an image processing system in the
form of a knowledge database. This knowledge may be as simple as detailing regions of an
image where the information of interest is known to be located, thus limiting the search that
has to be conducted in seeking that information.
The knowledge base also can be quite complex, such as an interrelated list of all major possible
defects in a materials inspection problem or an image database containing high-resolution
satellite images of a region in connection with change-detection applications. In addition to
guiding the operation of each processing module, the knowledge base also controls the
interaction between modules. Double-headed arrows link the processing modules and the
knowledge base, as opposed to the single-headed arrows linking the processing modules to
one another; this depiction indicates that communication between processing modules is
generally based on prior knowledge of what a result should be.
Two elements are required to acquire digital images. The first is a physical device that is
sensitive to a band of the electromagnetic energy spectrum (such as the x-ray, ultraviolet,
visible, or infrared bands) and that produces an electrical signal output proportional to the
level of energy sensed. The second, called a digitizer, is a device for converting the
electrical output of the physical sensing device into digital form.
• Image enhancement:
Image enhancement is among the simplest and most appealing areas of digital image
processing. Basically, the idea behind enhancement techniques is to bring out detail that is
obscured, or simply to highlight certain features of interest in an image; examples include
changing brightness and contrast.
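A sketch of such an enhancement, assuming a simple linear point operation on 8-bit intensities (the constants here are illustrative, not taken from the project):

```python
import numpy as np

def adjust(img, contrast=1.0, brightness=0):
    """Linear point operation g(x, y) = contrast * f(x, y) + brightness,
    clipped back to the valid 8-bit range [0, 255]."""
    out = img.astype(np.float32) * contrast + brightness
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[0, 50, 100], [150, 200, 250]], dtype=np.uint8)
enhanced = adjust(img, contrast=1.5, brightness=10)
print(enhanced)
```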
• Image Restoration:
Image restoration is an area that also deals with improving the appearance of an image.
However, unlike enhancement, which is subjective, image restoration is objective, in the sense
that restoration techniques tend to be based on mathematical or probabilistic models of image
degradation.
• Face Recognition:
A face recognition system based on image processing involves processing images from a
surveillance camera. Whenever a person is detected in the camera footage, the face
recognition system kicks in. First it finds out whether the person is entering or leaving the
premises; then it tries to recognize the person. If the person can be recognized, the system
provides that person's details.
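Face detectors of the Haar-cascade family (listed later in this report for face detection) owe their speed to the integral image, which lets the sum over any rectangle be read off in four look-ups. A small NumPy sketch, for illustration only:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img[0:r, 0:c].
    Haar cascades use this to evaluate any rectangle sum in O(1)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four table look-ups."""
    return int(ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0])

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
# a two-rectangle (edge) Haar feature: left half minus right half
feature = rect_sum(ii, 0, 0, 4, 2) - rect_sum(ii, 0, 2, 4, 4)
print("feature value:", feature)
```

A trained cascade evaluates thousands of such features per candidate window; this sketch shows only the underlying rectangle-sum trick.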
History
Facial recognition has been written about widely, but rarely with a look back to its early
beginnings or forward to what the future may bring.
Once considered a thing of science fiction, biometric facial recognition is quickly becoming an
integrated part of people’s everyday lives.
Several major industries have benefitted from the rapid advancements that have been made in
Facial Recognition technology over the past 60 years and these include: law enforcement,
border control, retail, mobile technology and banking and finance.
As we look forward to the future uses of Facial Recognition software, it’s good to take a step
back and see how far we have come since the early beginnings.
The earliest pioneers of facial recognition were Woody Bledsoe, Helen Chan Wolf and Charles
Bisson. In 1964 and 1965, Bledsoe, along with Wolf and Bisson began work using computers
to recognize the human face. Due to the funding of the project originating from an unnamed
intelligence agency, much of their work was never published.
However, it was later revealed that their initial work involved the manual marking of various
“landmarks” on the face such as eye centres, mouth etc. These were then mathematically
rotated by a computer to compensate for pose variation. The distances between landmarks were
also automatically computed and compared between images to determine identity.
Carrying on from the initial work of Bledsoe, the baton was picked up in the 1970s by
Goldstein, Harmon and Lesk who extended the work to include 21 specific subjective markers
including hair colour and lip thickness to automate the recognition. While the accuracy
advanced, the measurements and locations still needed to be manually computed which proved
to be extremely labour intensive yet still represents an advancement on Bledsoe’s RAND
Tablet technology.
It wasn’t until the late 1980s that we saw further progress with the development of Facial
Recognition software as a viable biometric for businesses. In 1988, Sirovich and Kirby began
applying linear algebra to the problem of facial recognition.
A system that came to be known as Eigenface showed that feature analysis on a collection of
facial images could form a set of basic features. They were also able to show that less than one
hundred values were required to accurately code a normalized facial image.
In 1991, Turk and Pentland carried on the work of Sirovich and Kirby by discovering how to
detect faces within an image which led to the earliest instances of automatic facial recognition.
This significant breakthrough was hindered by technological and environmental factors;
however, it paved the way for future developments in Facial Recognition technology.
The Defence Advanced Research Projects Agency (DARPA) and the National Institute of
Standards and Technology (NIST) rolled out the Face Recognition Technology (FERET)
programme in the early 1990s to encourage the commercial facial recognition market. The
project involved creating a database of facial images. Included in the test set were 2,413 still
facial images representing 856 people. The hope was that a large database of test images for
facial recognition would inspire innovation and may result in more powerful facial recognition
technology.
The National Institute of Standards and Technology (NIST) began Face Recognition Vendor
Tests (FRVT) in the early 2000s. Building on FERET, FRVTs were designed to provide
independent government evaluations of facial recognition systems that were commercially
available, as well as prototype technologies. These evaluations were designed to provide law
enforcement agencies and the U.S. government with information necessary to determine the
best ways to deploy facial recognition technology.
Launched in 2006, the primary goal of the Face Recognition Grand Challenge (FRGC) was to
promote and advance face recognition technology designed to support existing face recognition
efforts in the U.S. Government. The FRGC evaluated the latest face recognition algorithms
available. High-resolution face images, 3D face scans, and iris images were used in the tests.
The results indicated that the new algorithms were 10 times more accurate than the face
recognition algorithms of 2002 and 100 times more accurate than those of 1995, showing the
advancements of facial recognition technology over the past decade.
Back in 2010, Facebook began implementing facial recognition functionality that helped
identify people whose faces may feature in the photos that Facebook users update daily. The
feature was instantly controversial with the news media, sparking a slew of privacy-related
articles. However, Facebook users by and large did not seem to mind. Having no apparent
negative impact on the website’s usage or popularity, more than 350 million photos are
uploaded and tagged using face recognition each day.
1.3 Motivation
In this era of Internet explosion, computer technology has entered many areas of people's
lives and work. The occasions on which people come into contact with computers are
gradually expanding, and the frequency with which people use computing is also increasing.
As an important identity label for distinguishing different individuals, face recognition
technology has gradually entered people's lives. Face recognition is a combination of
artificial intelligence and computer vision.
Because of its challenging innovation and broad application prospects, it has become one of
the most challenging topics in this field. In recent years, face recognition application
systems have developed rapidly as a computer security technology worldwide; especially
today, when terrorist activities are rampant, this technology has received more and more
attention. Face recognition technology has many typical applications in the fields of public
safety, the civil economy, and home entertainment.
There is no dedicated criminal and missing-children face detection system; police
technicians have to go through different pictures of criminals and manually slice each
picture to generate images.
The motivation behind this project is to ease the work of police personnel in finding
criminals and missing children, so that police can track them without wasting time and
cost. This application also provides all the details of the criminal or missing child.
The aim is to develop an application that serves as a way to register and track criminals
and missing children remotely with the help of criminal data.
Processing:
Output: Detects and displays the information about the criminal or the missing child.
Face recognition across aging has become a very popular and challenging task in the area
of face recognition. Many researchers have contributed to this area, but there is still a
significant gap to fill. The selection of feature extraction and classification algorithms
plays an important role here. Deep learning with Convolutional Neural Networks provides
a combination of feature extraction and classification in a single structure.
• The presented work can be implemented as an effective way to detect and recognize
criminal faces and missing children.
• A survey has revealed that various methods, and combinations of these methods, can be
applied in the development of a new face recognition system.
• It can be used in many settings, such as banks, hotels, schools and colleges for tracking
attendance, airport security systems, and police departments.
1.7 Review of Literature
[1] Dr S. Matilda and S. Ayyapan “Criminals And Missing Children Identification Using
Face Recognition And Web Scrapping” The 2nd International Conference on Applied
Science and Technology 2020
This system can successfully recognize more than one face, which is useful for quickly
searching for suspected persons as the computation time is very low. It creates a unique
template for each face and compares it with the other images available in the dataset. If a
match is found for the input face, then the details associated with the matched image are
displayed. This system will decrease crime and ensure security in our society. In this work,
various types of images are compared and the accuracy of the results is very satisfying. It
performs well with both images and videos, with results that are 90% accurate. It requires
less memory space to implement and takes less time compared with other approaches.
Using this, criminals and missing children/persons can be easily identified, and the system
keeps updating dynamically. The analysis was carried out with real criminal images from
the web and provides good results.
[2] Mrudula Nimbarte, Kishor Bhoyar “Age Invariant Face Recognition using
Convolutional Neural Network” Third International Conference on Computing
Methodologies and Communication August 2018.
In this paper, the authors propose a novel methodology for age-invariant face recognition
using a Convolutional Neural Network, named AIFR-CNN. Experiments were performed
on two image datasets, FGNET and MORPH-II. The goal of this approach is to provide a
simple network by using fewer layers and a small image size (32×32) for processing. The
system preserves simplicity, as no separate algorithm is required for feature extraction.
The results demonstrate that it is better than the current state of the art in Rank-1
recognition on both datasets. Moreover, no complicated preprocessing steps are used for
head pose correction. Resized images of 32×32 pixels show better results than images of
64×64 pixels on both datasets. AIFR-CNN with SVM as the final classification stage shows
a significant improvement in performance over AIFR-CNN with NN as the final
classification stage.
[3] Ankit Gupta, Deepika Punj & Anuradha Pillai “Face Recognition System Based on
Convolutional Neural Network (CNN) for Criminal Identification” 2022 IEEE
International Conference on Information and Automation for Sustainability (ICIAS)
This paper surveys approaches adopted for face recognition systems (FRS) making use of
the CNN model. Recent studies conclude that existing face recognition systems were easily
attacked and tricked using faked images of the targeted person obtained from social
networks like Facebook, Instagram, Twitter, etc. The main task involved in image
classification is extracting important features. By using image processing methods like
scaling, thresholding and contrast enhancement together with deep neural networks, the
face recognition system can classify results more efficiently and achieve high accuracy. The
paper emphasizes extracting the various important features required for face recognition
using a CNN model and gives a brief overview of the techniques employed by researchers
in this area. The research uses the Flickr-Faces-HQ Dataset (FFHQ) and test images mixed
with some real-time images captured from camera devices like CCTV and mobile cameras;
when an appropriate match is found, it gives information according to the matched faces.
The accuracy obtained by this CNN model using Google Colab is 96%.
[4] Gargi Tela, Sakshi Hiwarale, Shruti Dhave, Dhanashree Rathi ” CNN Based Criminal
Identification” International Journal of Advanced Research in Science, Communication
and Technology (IJARSCT) Volume 2, Issue 2, May 2022
Considering the abnormal increase in the crime rate and the number of criminals, there is
a need for a more effective criminal identification technique. Biometric techniques like
thumbprint identification are fading out today, as criminals have become cleverer about
not leaving their fingerprints at the scene. The human face is the most important attribute
for recognizing any individual. It is a dynamic object with a high degree of variability in
its appearance, which makes it a better identification technique than other biometric
techniques.
But there are many challenges in face identification systems too. This project aims to
overcome such challenges and evaluates various faces using a Convolutional Neural
Network (CNN) to provide a complete solution for image-based face detection, with an
accuracy of 76.19% for criminal identification.
Criminal face recognition was performed using a CNN (N = 10) compared against a Haar
cascade (N = 10); each was iterated 10 times for an efficient and accurate analysis based on
labelled data, with two groups, a G-power of 80%, a significance threshold of 0.05 and a
95% confidence interval for the mean and standard deviation. The training and testing
split sizes were 70% and 30% respectively. Results: after analysing the results, the accuracy
for the Haar cascade is 84.50% and the accuracy for the CNN is 90.30%, with a significance
value of p = 0.0309 (p < 0.05), showing that there is a significant difference between the
groups.
Chapter 4 – Detailed Design: This chapter contains the structural chart and a detailed
description of each module, which consists of the face recognition module and the
monitoring of the stored criminal and children data.
Chapter 6 – System Testing: This chapter describes the different test procedures
and the unit testing for each module.
1.9 Summary
This chapter described the importance of image processing in face recognition. Section 1.2
explains the elements of image processing and the history of face recognition techniques.
The motivation of the project is discussed in Section 1.3, and the problem statement is
explained in Section 1.4. The scope of the project is described in Section 1.5, and the
objectives are presented in Section 1.6. Section 1.7 gives details of the literature survey
and the important papers referred to.
CHAPTER 2
• This project is required to execute the given model and produce the expected
outputs. The ultimate goal of this project is to make the model recognize a
criminal irrespective of their age.
• During training, the model should report the accuracy of the output.
• The developed model is scalable: it is able to detect a person's face even when
only a few features are extracted (again, this depends on the features that are
extracted).
• The user should satisfy all the hardware and software requirements; only then
is the model going to support these actions.
2.5 Summary
Chapter 2 considered all the system requirements required to develop the proposed
system. Section 2.1 consists of the functional requirements of this project. Section 2.2
describes the non-functional requirements. Section 2.3 explains the hardware requirements,
and Section 2.4 explains the software requirements.
CHAPTER 3
High-level design (HLD) explains the architecture that would be used for developing a software
product. The architecture diagram provides an overview of an entire system, identifying the
main components that would be developed for the product and their interfaces.
The HLD uses possibly non-technical to mildly technical terms that should be understandable
to the administrators of the system.
In contrast, low-level design further exposes the logical, detailed design of each of these
elements for programmers.
High-level design addresses the software requirements at the architectural level. In this
chapter the complete system design is presented, showing how the modules and sub-modules
are integrated and how data flows between them. It is a simple phase that outlines the
implementation process.
The following design considerations are taken into account for special cases.
This case describes when the camera/dataset captures image, but it does not have the face
content which the system requires.
In this case a warning message is displayed to the user stating the problem; the user will
be asked to re-enter the image until the system recognizes an image containing a face.
This case describes when the camera/dataset captures an image and a face is recognized
in it, but the image is not clear.
In this case the user will be asked to re-enter the face image until the system gets a good
image; otherwise the user can proceed with the same image, though the results might vary.
This case describes when the camera/dataset is unable to get an image into the system
buffer. In this case the system will report an error because it has no data to work with, as
the image buffer was empty, and the system will display the error.
The system can handle this as an exception and proceed by taking random buffer values,
but this will lead to mismatched results.
This case describes when there are bugs in the program. In this case the system handles
the bug or error through exception handling; assertions are written in the code so that it
can report these bugs and errors, allowing the user to rectify them once found.
The design considerations describe how the system behaves for face images and classifies
the corresponding notation for the given dataset. The main stages are pre-processing,
segmentation, feature extraction and classification.
Pre-processing: the dataset's sample images are converted to greyscale, faces are detected,
and the images are resized.
The pre-processing required in a ConvNet is much lower than in other classification
algorithms. While in primitive methods filters are hand-engineered, with enough training
ConvNets can learn these filters/characteristics.
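As an illustrative sketch of the first pre-processing step (the exact conversion used by a library such as OpenCV may differ in rounding details), RGB frames can be reduced to greyscale with the standard luminosity weights:

```python
import numpy as np

def rgb_to_grey(rgb):
    """Luminosity method: grey = 0.299 R + 0.587 G + 0.114 B,
    the usual weighting for converting an RGB frame to grey-scale."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb.astype(np.float64) @ weights).astype(np.uint8)

# a 1x2 RGB image: one pure-red pixel, one pure-white pixel
rgb = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
grey = rgb_to_grey(rgb)
print(grey)
```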
This section describes the proposed methodology for missing criminal and children face
recognition using a Convolutional Neural Network (AIFR-CNN). The CNN method is used
to produce 3D face images from 2D face images, and it maps each image to a feature vector
in a high-dimensional space. In this space, features belonging to the same person lie close
to each other and far away from those of different persons.
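The "close for the same person, far for different persons" property can be sketched with hypothetical embedding vectors and an assumed decision threshold (real embeddings would come from the trained CNN, and the threshold would be tuned on validation data):

```python
import numpy as np

# Hypothetical embedding vectors, for illustration only.
anchor = np.array([0.9, 0.1, 0.3])   # person A, photo 1
same   = np.array([0.8, 0.2, 0.3])   # person A, photo 2
other  = np.array([0.1, 0.9, 0.7])   # person B

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(a - b))

threshold = 0.6   # assumed decision threshold
print("same person  :", dist(anchor, same) < threshold)
print("other person :", dist(anchor, other) < threshold)
```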
Face detection is performed on camera images using a convolutional neural network (CNN).
A camera captures face images of the objects of interest; the raw image from the camera
lens contains both a background and a face. The face detection process detects and searches
for facial features in the camera image, at which stage the system recognizes whether a
pattern is a face or not. Normalization, or pre-processing, is then applied to the face image
obtained from face detection. In this normalization phase a combination of several face
image processing steps is used: cropping, resizing, RGB-to-grey conversion, and histogram
equalization as a contrast/brightness adjustment to optimize facial recognition.
Pre-processing improves the sharpness of the image to compensate for the variations in
illumination that commonly appear when capturing facial images.
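Histogram equalization, the contrast/brightness adjustment mentioned above, can be sketched in a few lines (a simplified version of the classic algorithm; library implementations such as OpenCV's equalizeHist differ in details):

```python
import numpy as np

def equalize(img):
    """Histogram equalisation: map each grey level through the image's
    normalised cumulative histogram to spread out the contrast."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # classic normalisation: (cdf - cdf_min) / (N - cdf_min) * 255
    lut = np.clip(np.rint((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

img = np.array([[50, 50], [100, 200]], dtype=np.uint8)
print(equalize(img))
```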
Feature extraction is the process of capturing the desired feature descriptors using LBP
(Local Binary Pattern) and the CNN rather than extracting them manually. In this model,
a 7-layer CNN architecture is used. Classification is required to recognize the identity of
the person; this work involves a multi-class classification problem.
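The basic 3×3 LBP operator can be sketched as follows (the neighbour ordering here is one common convention; the project's actual implementation may pack the bits differently):

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against
    the centre pixel and pack the bits clockwise from the top-left."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= c else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[90, 30, 60],
                  [70, 50, 20],
                  [80, 10, 40]])
print("LBP code:", lbp_code(patch))
```

Applying this operator at every pixel and histogramming the codes yields the texture descriptor that is fed alongside the CNN features.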
Figure 3.2 shows the stages in the face recognition system. This system mainly
contains the following modules:
• Face capture end: the camera placed at the crime scene. It needs to be active all the
time, capturing images and transmitting them to the wired transmission module.
• Wired transmission module: transports the image captured by the face capture end
to the face recognition module.
• Database: the place where all criminal and missing-children details are stored.
• Face recognition module: responsible for recognizing the face in the image captured
by the face capture end.
• Details fetching module: responsible for getting the complete details after the face
recognition module recognizes the person in the captured image.
The use case diagram at its simplest is a representation of a user's interaction with the system
that shows the relationship between the user and the different use cases in which the user is
involved.
The use case diagram for the missing criminal and children recognition model is depicted in
figure 3.3. Initially the set of captured images is stored in a temporary file in OpenCV. The
obtained RGB image is converted into a greyscale image to reduce complexity. The
pre-processing techniques are then applied to the greyscale image. Feature extraction is
carried out, and the extracted features are fed into classifiers which are trained on the
available dataset to detect the face. The detected face is displayed along with the
intermediate results, and various information regarding the detected face is shown.
The use cases shown in figure 3.3 are: input, pre-processing, feature extraction, training
model, and classification.
The use cases shown in figure 3.4, with User and System as actors, are: input image,
RGB-to-grey conversion, and face detection.
Figure 3.4: Use Case Diagram of pre-processing module.
• Description: Figure 3.5 shows the use case diagram of the classification module. In this
use case diagram, there are four use cases and two actors. In the first use case, the
system takes the pre-processed image. In the second use case the classifier is trained. In
the third use case the classifier is applied. In the fourth use case the face is recognized.
The use cases shown in figure 3.5, with User and System as actors, are: pre-processed
image, training, classification (CNN), and face recognition.
As the name suggests, a data flow diagram describes in detail how data flows between the
different processes. Figure 3.6 depicts the flow diagram, which is composed of the input, the
processes, and the output. The data passed between the systems after each process must be
specified, hence the name data flow diagram. It is often the initial step in designing any
system to be implemented. It also shows where the data originates, where it flows, and where
it is stored. The input image is pre-processed, then feature extraction is carried out, then
classification is done after training the model, and in the final step the face is recognized.
The input image is captured through the camera and can be used either as part of the
dataset for training or as the query image in which a face is to be detected. The captured
image is stored in any format supported by the device.
As shown in figure 3.7, initially the set of captured images is stored in a temporary
file in OpenCV. The storage is linked to the file set account from which the data is
accessed. The obtained RGB image is converted into a greyscale image to reduce
complexity.
Figures 3.7 and 3.8 show the pre-processed image being passed on to the face detection
and face recognition stages.
When a CNN is used for classification, separate feature extraction is not required; feature
extraction is also carried out by the CNN. The pre-processed image is fed directly to the
CNN classifier to recognize the face. Figure 3.9 shows the data flow diagram of
classification using CNN.
In figure 3.9, the pre-processed image is passed to the CNN classifier, which performs
classification and face detection.
A class diagram in the Unified Modeling Language (UML) is a type of static structure
diagram that describes the structure of a subsystem by showing the system’s classes, their
attributes, operations (or methods), and the relationships among the classes.
A class diagram is a type of static structure diagram that describes the structure of a
system by showing the system’s classes, their attributes, operations (or methods), and
the relationships among objects
It is used to visualize, describe, and document various aspects of the system, and also
to construct executable software code.
Each class is represented as a rectangle with three compartments for the class name,
attributes, and operations.
The class diagram is the main building block of object-oriented modeling. It is used
both for general conceptual modeling of the application's structure and for detailed
modeling, translating the models into programming code.
Class diagrams can also be used for data modeling. The classes in a class diagram
represent both the main objects and interactions in the application and the classes to be
programmed.
3.5.1 Class Diagram 1
In figure 3.9, the class diagram represents the pre-processing of the image. First the
image is uploaded; that image is converted to grayscale, the face is detected using the
Haar cascade function, the image is cropped, and the cropped image is resized for further
processing.
In figure 3.10, after pre-processing, the image is sent through various feature extraction
techniques such as LBP and facial landmark algorithms, and through VGG16 for
classification; the result is then predicted and the information of the detected person is
displayed.
3.6 Summary
In the third chapter, the high level design of the proposed method is discussed. Section 3.1
presents the design considerations for the project. Section 3.2 discusses the system
architecture of the proposed system. Section 3.3 describes the specification of the use case
diagram. Section 3.4 describes the dataflow diagrams for all modules in our system.
Finally in section 3.5 using class diagram the process of face recognition is described.
CHAPTER 4
DETAILED DESIGN
Detailed design is the process of designing each individual module, completed before
implementation begins. It is the second phase of the design process: the first is the
high-level design phase, and the second is the individual design of each module. It saves
time and makes implementation easier.
Detailed design is the process of refining and expanding the preliminary design of a
system or component to the extent that the design is sufficiently complete to begin
implementation. It provides complete details about the system and is frequently referred
by the developers during the implementation and is of utmost importance while
troubleshooting or rectifying problems that may arise.
The prediction model is composed of 4 modules, namely- the data acquisition module,
the pre-processing module, the feature extraction module and then, the classification
module. This constitutes the complete structure of the system, which specifies the
modules that are to be considered during the implementation phase of the project.
In the data acquisition module the image of a person is taken as input; in the pre-processing
module the image is pre-processed using various pre-processing techniques, and the
pre-processed image is then sent for feature extraction and classification.
The job of the face recognition module is to recognize the face in the image. The face
recognition module uses a convolutional neural network to do this: the image passes through
pre-processing and feature extraction, and the convolutional neural network produces the
face ID.
Figure 4.2 shows the face recognition module. This module consists of three stages.
1. Pre-processing
2. Feature Extraction
3. Convolutional Neural Network
4.2.1 Pre-processing
• To store a single colour pixel of an RGB colour image we need 8 × 3 = 24 bits
(8 bits for each colour component).
• Only 8 bits are required to store a single pixel of a grayscale image, so a grayscale
image needs only one third of the memory of an RGB image (about 67% less).
• Grayscale images are much easier to work with in a variety of tasks: in many
morphological operations and image segmentation problems, it is easier to work
with a single-layered image (grayscale) than a three-layered image (RGB colour).
• It is also easier to distinguish features of an image when dealing with a single-
layered image.
These properties make it easy to find the edges or lines in the image, or to pick out areas
where there is a sudden change in pixel intensity.
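The memory claim above can be checked directly with NumPy's `nbytes` attribute; this is an illustrative sketch, with an arbitrary 480 x 640 image size:

```python
import numpy as np

h, w = 480, 640
rgb = np.zeros((h, w, 3), dtype=np.uint8)   # 3 bytes (24 bits) per pixel
gray = np.zeros((h, w), dtype=np.uint8)     # 1 byte (8 bits) per pixel

print(rgb.nbytes)    # 921600 bytes
print(gray.nbytes)   # 307200 bytes, one third of the RGB size
```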
AdaBoost
A majority of the features found in the previous steps will not work well or will be irrelevant
to the facial features, as they are too random to find anything. A feature selection technique
is therefore needed to select a subset of features from the huge set.
Attentional Cascade
The subset of all 6000 selected features must again be run on the training images to detect
whether a facial feature is present or not. If the image size is large, this is a tiresome task.
To simplify this, the attentional cascade is used. The idea behind it is that not all the
features need to run on each and every window. If a feature fails on a particular window,
then we can say that facial features are not present there, and we can move on to the next
window where facial features may be present.
In Viola-Jones, there are 38 stages for the 6000 features. The numbers of features in the
first five stages are 1, 10, 25, 25, and 50, increasing in the subsequent stages.
The initial stages, with fewer and simpler features, remove most of the windows that do
not contain any facial features, while keeping the false negative rate low.
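The early-rejection idea of the cascade can be sketched with a toy example (the stage functions and window fields here are invented for illustration; real Viola-Jones stages evaluate Haar-like features):

```python
def cascade_classify(window, stages):
    """Return True only if the window passes every stage.

    Evaluation stops at the first failing stage, so most non-face
    windows are rejected cheaply by the early stages.
    """
    for stage in stages:
        if not stage(window):
            return False
    return True

# Toy stages of increasing cost: each checks a property of the "window".
stages = [
    lambda w: w["mean_intensity"] > 50,   # stage 1: very cheap test
    lambda w: w["edge_count"] > 10,       # stage 2: stronger test
    lambda w: w["symmetry"] > 0.8,        # stage 3: most expensive test
]

face_like = {"mean_intensity": 120, "edge_count": 30, "symmetry": 0.9}
background = {"mean_intensity": 20, "edge_count": 2, "symmetry": 0.1}

print(cascade_classify(face_like, stages))   # True
print(cascade_classify(background, stages))  # False, rejected at stage 1
```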
(Figure: a 3×3 window sliding over the image matrix, with centre pixel intensity ic = 43 and
its eight neighbours indexed N = 0, 1, ..., 7.)
Convolutional Layer
The convolutional layer is the first step in a CNN. Here a 3×3 region of the matrix obtained
from the high-pass filter is taken as input. That 3×3 region is multiplied element-wise with
the filter matrix, and the sum of the products is written to the corresponding position of the
output, as shown in the figure below. This output is given to the pooling layer, where the
matrix is further reduced.
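The slide-multiply-sum operation described above can be sketched in NumPy as follows; the averaging kernel is purely illustrative (in a trained CNN the kernel weights are learned):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image,
    multiply element-wise, and sum the products at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[1, 2, 3, 0],
                  [4, 5, 6, 1],
                  [7, 8, 9, 2],
                  [1, 0, 1, 3]], dtype=float)
kernel = np.ones((3, 3)) / 9.0        # simple 3x3 averaging filter
feature_map = conv2d(image, kernel)   # shape (2, 2)
```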
Figure 4.9 shows the Convolutional Layer.
Convolution is followed by the rectification of negative values to 0s, before pooling. In fact,
multiple iterations of both are needed before pooling.
Pooling Layer
In the pooling layer the 3×3 matrix is reduced to a 2×2 matrix; this is done by selecting the
maximum value within each 2×2 region. Figure 4.10 shows the pooling layer.
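As an illustrative NumPy sketch of max pooling (here with a 2×2 window and stride 2, a common variant, rather than the overlapping windows of the figure):

```python
import numpy as np

def max_pool(x, size=2):
    """2x2 max pooling with stride 2: keep the maximum of each block."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

fm = np.array([[1, 3, 2, 4],
               [5, 6, 1, 2],
               [7, 2, 9, 1],
               [3, 4, 1, 8]])
pooled = max_pool(fm)   # [[6, 4], [7, 9]]
```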
Fully Connected Layer and Output Layer
The output of the pooling layer is flattened, and this flattened matrix is fed into the fully
connected layer, which comprises an input layer, hidden layers, and an output layer. The
output is then fed into the classifier; in this case the softmax activation function is used to
classify the input and give the face ID. Figure 4.11 shows the fully connected layer and
output layer.
Softmax activation calculates the probability of each class to which the input may belong.
The number of units in the output layer is equal to the number of classes, and the outputs
form a probability distribution: the probabilities of all classes sum to 1. Using these values
the face is recognized and the face ID is predicted.
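A minimal NumPy sketch of the softmax computation (the score values are made up for illustration):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: exponentiate then normalize,
    so the outputs form a probability distribution summing to 1."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])    # raw outputs for three identities
probs = softmax(scores)
predicted_id = int(np.argmax(probs))  # index of the most probable class
```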
Threshold = 0.5
If score <= Threshold then,
"Face Match"
Else,
"Face Not Matched"
In the rectified linear unit (ReLU), applied after convolution, any negative values in the
resulting matrix are converted to 0 before pooling, using the max(0, z) function.
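The max(0, z) rectification is a one-liner in NumPy; this sketch applies it to an invented feature map:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: replace negative values with 0."""
    return np.maximum(0, z)

feature_map = np.array([[-3.0, 2.0],
                        [ 1.5, -0.5]])
rectified = relu(feature_map)   # [[0.0, 2.0], [1.5, 0.0]]
```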
We have made use of various CNN models to detect missing children and criminals, and we
have compared these models. The different models used in the project are discussed; the
architectures of these models are fixed and can be utilized for various applications.
4.3 Summary
The fourth chapter gives the detailed design for image processing and convolutional neural
network of face recognition system of criminal and children. Section 4.1 contains the
structural chart of the system. Section 4.2 explains the face recognition module which
contains preprocessing stage and convolutional neural network stage.
CHAPTER 5
IMPLEMENTATION
Implementation is the phase in which the working of the project is demonstrated and the
system is put into real operation. The outcomes of the project will be fruitful only if the
processes are executed accurately according to the plan.
Anaconda
Anaconda was first released in 2012 by Continuum Analytics, a software company that was
acquired by Anaconda Inc. in 2017. The distribution includes more than 1,500 open-source
packages that are curated and tested by Anaconda Inc. and the broader community of data
scientists and developers. The packages cover a wide range of topics including data
visualization, scientific computing, machine learning, deep learning, natural language
processing, and more.
One of the key features of Anaconda is its package management system, conda. With conda,
users can easily install, update, and remove packages, as well as create and manage isolated
environments for different projects with different dependencies. This makes it easy to work
on multiple projects with different package requirements without worrying about conflicts
between packages.
In addition to the built-in Spyder IDE, Anaconda also supports other popular IDEs like Jupyter
Notebook and JupyterLab, as well as text editors like Sublime Text and VS Code. Anaconda
can also be integrated with cloud computing platforms like Amazon Web Services and
Microsoft Azure, making it easy to scale up data analysis and machine learning tasks.
The programming language used to design the proposed method is Python. Python is a high-
level programming language with dynamic semantics. It is an interpreted language: the
interpreter executes the code line by line, which makes debugging easy. The Python Imaging
Library (PIL) is one of the popular libraries used for image processing. PIL can be used to
display images, create thumbnails, resize, rotate, convert between file formats, enhance
contrast, filter, and apply other digital image processing techniques.
Python is often used as a support language for software developers, for build control and
management, testing, and in many other ways. Python was designed by Guido van Rossum. It
is easy to learn because of its simple syntax, and it provides an easy environment for
computation, programming, and visualization. Python supports modules and packages, which
encourages program modularity and code reuse. It has various built-in commands and
functions which allow the user to perform functional programming. Apart from being an
open-source programming language, developers use it extensively for application and system
development. It is a highly extensible language, and its many inbuilt functions help beginners
learn easily. Some of the most commonly used image processing functions are: imread, which
reads an image from a specified location; imshow, which displays the image on the screen;
cvtColor, which converts an image from one colour space to another (for example RGB to
grayscale); compare_ssim, which measures the structural similarity between two input
images; and findContours, which gives the regions of difference between two input images.
• Python is an interpreted language: the interpreter executes the code line by line, which
makes debugging easy.
• Python has a large and broad library and provides a rich set of modules and functions
for rapid application development.
Tkinter is a built-in Python library for creating graphical user interfaces (GUIs). It is a
cross-platform library that provides a set of tools for creating windows, widgets, and other
GUI elements. Tkinter is based on the Tcl/Tk GUI toolkit, which was developed in the 1980s
by John Ousterhout.
Tkinter provides a wide range of widgets, including buttons, labels, text boxes, check boxes,
radio buttons, and more. These widgets can be used to create complex and interactive GUIs
for desktop applications. Tkinter also provides a geometry manager that allows developers to
position and layout widgets in a window.
One of the advantages of using Tkinter is its simplicity and ease of use. It is built into Python
and does not require any additional installations or setup. It is also well-documented and has
a large community of developers who contribute to its development and support.
Tkinter also provides support for event-driven programming, which allows developers to
create applications that respond to user interactions such as mouse clicks and keyboard inputs.
This makes it easy to create interactive and responsive GUIs.
• Image and video processing: OpenCV provides a wide range of tools for manipulating
and processing images and videos, including filtering, transformation, and
segmentation.
• Feature detection and extraction: OpenCV supports a range of feature detection and
extraction algorithms including SIFT, SURF, ORB, and FAST.
• Object detection and tracking: OpenCV provides tools for detecting and tracking
objects in images and videos using algorithms such as Haar cascades, HOG (Histogram
of Oriented Gradients), and Deep Learning-based methods.
• Camera calibration: OpenCV provides tools for calibrating cameras, including methods
for estimating intrinsic and extrinsic camera parameters.
• Machine learning: OpenCV provides tools for machine learning, including algorithms
for classification, regression, clustering, and deep learning.
5.2.4 Packages
Packages are namespaces that can contain multiple packages and modules
themselves. Each package in Python is a directory which must contain a special file called
__init__.py. This file can be empty; it indicates that the directory containing it is a Python
package, so it can be imported the same way a module can. As an application program grows
larger, with a lot of modules, similar modules can be placed in one package and different
modules in different packages. This makes a program easy to manage and conceptually clear.
Modules can be imported from packages using the dot (.) operator.
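The package mechanism can be demonstrated with standard-library tools alone; the package name `facetrack` and module `preprocess` below are hypothetical examples, built in a temporary directory:

```python
import importlib
import os
import sys
import tempfile

# A directory containing __init__.py is a Python package.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "facetrack")
os.makedirs(pkg_dir)

# An empty __init__.py marks the directory as an importable package.
open(os.path.join(pkg_dir, "__init__.py"), "w").close()

# A module inside the package.
with open(os.path.join(pkg_dir, "preprocess.py"), "w") as f:
    f.write("def to_grey(pixel):\n    return sum(pixel) // 3\n")

sys.path.insert(0, root)
preprocess = importlib.import_module("facetrack.preprocess")  # dot operator
print(preprocess.to_grey((30, 60, 90)))  # 60
```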
1. TensorFlow
TensorFlow provides a rich set of tools for building and training machine learning models,
including APIs for high-level neural network architectures such as convolutional neural
networks (CNNs) and recurrent neural networks (RNNs), as well as low-level APIs for building
custom models. It also includes tools for data preprocessing, visualization, and evaluation of
models.
One of the key features of TensorFlow is its ability to perform distributed computing, which
allows users to scale up their models and training data to run on multiple machines or clusters.
TensorFlow also provides support for mobile and embedded devices, which allows users to
deploy their models on a wide range of platforms. In addition to its core library, TensorFlow
has several high-level APIs and frameworks built on top of it, such as Keras, a user-friendly
API for building neural networks, and TensorFlow.js, a library for building and training models
in the browser.
2. Keras
Keras is an open-source high-level neural networks API, written in Python and designed
to enable fast experimentation with deep neural networks. It was developed by François Chollet
in 2015 and is now part of the TensorFlow project.
Keras provides a user-friendly interface for building and training neural networks, with a focus
on ease of use and modularity. It allows developers to quickly build and experiment with
different network architectures and hyperparameters, without having to write low-level code
for training and optimization. Keras supports a wide range of neural network architectures,
including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and
combinations of the two. It also provides a variety of loss functions, activation functions, and
optimization algorithms.
One of the key features of Keras is its ability to seamlessly integrate with other deep learning
libraries, such as TensorFlow and Theano. This allows developers to take advantage of the
advanced features of these libraries while using Keras as a high-level interface.
Keras also includes tools for data preprocessing and model evaluation, making it a
comprehensive deep learning framework.
3. NumPy
NumPy is a Python library used for working with arrays. It also has functions for working
in the domains of linear algebra, Fourier transforms, and matrices. NumPy was created in
2005 by Travis Oliphant. It is an open-source project and can be used freely. NumPy stands
for Numerical Python. In Python, lists serve the purpose of arrays, but they are slow to
process. NumPy aims to provide an array object that is up to 50x faster than traditional
Python lists. The array object in NumPy is called ndarray; it provides a lot of supporting
functions that make working with ndarrays very easy. Arrays are very frequently used in data
science, where speed and resources are very important. Unlike lists, NumPy arrays are stored
in one continuous place in memory, so processes can access and manipulate them very
efficiently. This behaviour is called locality of reference in computer science, and it is the
main reason why NumPy is faster than lists. NumPy is also optimized to work with the latest
CPU architectures.
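The list-versus-ndarray contrast above can be illustrated with a small sketch: the list version loops element by element in Python, while the ndarray version applies one vectorized operation over a contiguous block of memory:

```python
import numpy as np

pixels = list(range(1_000_000))

# Python list: element-by-element loop in the interpreter.
doubled_list = [p * 2 for p in pixels]

# NumPy ndarray: a single vectorized operation.
arr = np.arange(1_000_000)
doubled_arr = arr * 2

print(doubled_arr[:3])   # [0 2 4]
```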
4. Matplotlib
Matplotlib can be embedded in GUI toolkits such as wxPython or PyGTK to build rich
applications. Others use Matplotlib in batch scripts to generate PostScript images from
numerical simulations, and still others in web application servers to dynamically serve up
graphs. Matplotlib is the brainchild of John Hunter (1968-2012), who, along with its many
contributors, put an immeasurable amount of time and effort into producing a piece of
software utilized by thousands of scientists worldwide.
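A minimal batch-mode sketch of the kind of accuracy plot used in chapter 7; the accuracy values here are invented for illustration, and the Agg backend renders without a display:

```python
import matplotlib
matplotlib.use("Agg")          # render without a display (batch mode)
import matplotlib.pyplot as plt

epochs = range(1, 11)
accuracy = [0.55, 0.63, 0.70, 0.76, 0.80, 0.84, 0.86, 0.88, 0.89, 0.90]

plt.plot(epochs, accuracy, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Training accuracy per epoch")
plt.savefig("accuracy.png")    # write the plot to an image file
```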
5.3 Pseudocodes
In this section the RGB image is converted to a grey image because it is easier to
perform operations such as Haar cascade face detection and Local Binary Patterns on it.
Step 1: Convert RGB to grey.
Step 2: Create a window to display the image.
Step 3: Display the image in the window.
Step 4: Multiply each colour plane by its weighting coefficient and sum the planes to obtain
the grey value.
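The plane-weighting step can be sketched in NumPy as follows; the coefficients used here are the standard luminance weights, an assumption on our part since the report does not state the exact values:

```python
import numpy as np

# Standard luminance coefficients for the R, G, and B planes.
R_W, G_W, B_W = 0.299, 0.587, 0.114

def rgb_to_grey(rgb):
    """Multiply each colour plane by its coefficient and sum the planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (R_W * r + G_W * g + B_W * b).astype(np.uint8)

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0] = 100          # a pure red image
grey = rgb_to_grey(rgb)
print(grey[0, 0])          # 29  (0.299 * 100, truncated)
```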
In this section the face in the greyscale image is detected and cropped. This technique
enables accurate and efficient identification of faces in an image, allowing for more effective
and reliable face recognition.
Step 1: Load the pre-trained face detection classifier.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
Step 2: Load the image and convert it to grayscale.
img = cv2.imread('grey_image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
Step 3: Detect faces in the image.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
Step 4: Crop each detected face from the original image.
for (x, y, w, h) in faces:
    face_img = img[y:y+h, x:x+w]
Step 5: Save the cropped face to a new file.
cv2.imwrite('output_image.jpg', face_img)
LBP (Local Binary Patterns) is used for feature extraction in face recognition because
it is a simple, efficient, and effective way to describe the local texture information in an image.
LBP is particularly suitable for describing facial texture patterns, which are important for face
recognition.
Step 1: Load the grayscale face image.
Step 2: Divide the grayscale image into small regions.
Step 3: For each centre pixel in a region, compare its intensity with each of its eight
neighbours. If the intensity of the neighbour is greater than or equal to that of the centre
pixel, assign it a value of 1; otherwise, assign it a value of 0. Reading the eight bits in order
gives the LBP code of that pixel.
Step 4: Repeat step 3 for all pixels in the region and build a histogram of the LBP codes.
Step 5: Concatenate the histograms of all regions to obtain the final feature vector for the
input image.
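The steps above can be sketched in NumPy as follows; the neighbour ordering and the 2×2 region grid are assumptions for illustration (implementations vary), and the function names are our own:

```python
import numpy as np

def lbp_image(gray):
    """Compute the 8-neighbour LBP code for every interior pixel."""
    # Offsets of the 8 neighbours, read clockwise from the top-left.
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = gray[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1+dy:h-1+dy, 1+dx:w-1+dx]
        # Neighbour >= centre contributes a 1 at this bit position.
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray, grid=(2, 2)):
    """Divide the LBP image into regions and concatenate the histograms."""
    codes = lbp_image(gray)
    feats = []
    for row in np.array_split(codes, grid[0], axis=0):
        for region in np.array_split(row, grid[1], axis=1):
            feats.append(np.bincount(region.ravel(), minlength=256))
    return np.concatenate(feats)   # length 256 * grid rows * grid cols

face = np.random.randint(0, 256, size=(18, 18), dtype=np.uint8)
feature = lbp_histogram(face)      # 1024-dimensional feature vector
```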
Face recognition uses the VGGFace2 model. The model is first loaded and then trained on the
training set. A new image is then loaded, and the trained model is used to predict the class
label of the face in the image. Finally, the predicted class label is displayed.
5.4 Summary
This chapter describes the implementation of the Face detection and Recognition using various
architectures such as LBP, CNN. Implementation requirement is deliberated in Section 5.1,
Section 5.2 briefs about the programming language selected. Section 5.3 describes the
pseudocode for Face Detection and Recognition.
CHAPTER 6
SYSTEM TESTING
Testing is a significant phase in the application development life cycle. It is the final phase,
where the application is tested for the expected outcomes. Testing of the system is done to
identify faults or missing prerequisites. Testing therefore plays a vital role in quality
assurance and in confirming the reliability of the software. Software testing is essential for
correcting errors and improving the quality of the software system. The software testing
process starts once the program is written and the documentation and related data structures
are designed. Without proper testing, or with incomplete testing, the program or the project
is said to be incomplete.
Throughout the testing phase, the program is run against a group of test cases, and the
outcome of each test case is evaluated to determine whether the program behaves as
expected. Faults found during testing are corrected, and the corrections are recorded for
future reference. Some of the significant aims of system testing are:
• To confirm the quality of the project.
• To discover and eradicate any residual errors from prior stages.
• To validate the software as a solution to the original problem.
• To ensure the reliability of the system.
There must be at least two test cases, one positive and one negative, for each requirement
of an application in order to completely test and satisfy the requirement. If the requirement
has sub-requirements, then each sub-requirement needs at least two test cases as well.
For applications or systems without formal requirements, test cases can be written based on
the accepted normal behaviour of programs of a similar class. In some testing situations, test
cases are not written at all; instead, the activities and their outcomes are reported after the
tests have been run.
Unit testing is the mechanism by which each individual module of the project is tested; it
can also be called module testing, as the project is tested module by module. Using the
module-level design description as a guide, significant control paths are tested to discover
faults within the boundary of each module.
Table 6.1: Test case for loading the image of Criminal or Children.
Test Feature Checks whether the image selected are loaded or not.
The table 6.1 shows the successful test case for loading the image of Criminals or Children that
is selected by the user to do the processing.
The RGB image is converted to grayscale image and the successful test case for the grayscale
image as shown in the Table 6.2, the gray level represents the brightness of a pixel. Here the
RGB image is converted to grayscale image because it simply reduces complexity from a 3D
pixel value (R, G, B) to a 1D value. Grayscale images can be the result of measuring the
intensity of light at each pixel according to a particular weighted combination of frequencies.
Result Successful.
The Table 6.3 shows the successful test case for face detection. Face detection is the process
of detecting the faces in an image.
This technique enables accurate and efficient identification of faces in an image, allowing for
more effective and reliable face recognition.
Test Feature It shows three buttons to upload Old Photo, New Photo
and Analyze button.
Output Expected Main GUI screen with three options.
The Table 6.4 shows the successful test case for the main GUI when code is executed. In this
the process are selected for the execution of the proposed method. Here it shows three options
1. Upload Old Image, 2. Upload Recent Image and 3. Analyze. When the upload image button
is clicked, a file explorer popup appears, asking us to select the jpg file. When the submit button
is clicked, the system checks for the presence of a valid jpg file and classifies it.
Table 6.5: Test case for Upload Old and Recent image button.
Result Successful.
The Table 6.5 shows the successful test case for upload. This button, when clicked, pops up a
file explorer window in which the user can browse the file system to select the required
image to be classified. On clicking this button and selecting the required image, the image
appears on the GUI.
Result Successful.
The Table 6.6 shows the successful test case for the Analyze button. When the user clicks this
button, it first checks whether an image has been uploaded to the window. Once checked, it
runs the image file through the classifier and proceeds to display the result window.
6.3 Summary
This chapter presents system testing in section 6.1, which consists of unit test cases for the
various modules of the Face Recognition system. Section 6.2 gives a complete view of the
testing which includes Test Name, Test Feature, Output Expected, Output Obtained and Result.
CHAPTER 7
RESULTS AND DISCUSSION
Snapshot 7.1 below shows the home page, which consists of a header that reads
'Missing Criminal and Children Tracking System' and two buttons labelled 'Old
Photo' and 'Recent Photo' respectively.
7.1.2 Snapshot of File Explorer on Clicking ‘Old Photo’ and ‘Recent photo’ button
The below Snapshot 7.2 shows the snapshot of the file explorer that pops up.
The Snapshot 7.2 shows the snapshot of the file explorer window that pops up when the old or
recent photo button is clicked. The user may select the image for further process.
The below Snapshot 7.3 shows the snapshot of the same window after uploading the image.
The Snapshot 7.3 shows the snapshot of the window after uploading the image file to it for
classification of criminal or child. The page displays the image right next to the Old and Recent
photo buttons.
The below Snapshot 7.4 and 7.5 shows the snapshot of RGB to Greyscale Conversion of the
image.
The Snapshot 7.4 and 7.5 shows the snapshot of the window after greyscale conversion of both
Old and Recent Photo.
The below Snapshot 7.6 shows the snapshot of the details of a recognized criminal.
The Snapshot 7.6 shows the snapshot of the window after getting the status of a matched
criminal face on clicking the Analyze button, with the complete details of that criminal such
as Name, Gender, Crime, Punishment, Wanted By and Nationality.
The below Snapshot 7.7 shows the snapshot of the details of a recognized child.
The Snapshot 7.7 shows the snapshot of the window after getting the status of a matched
missing child face on clicking the Analyze button, with the complete details of that child
such as Name, Gender, Missing Year, Languages Spoken, Missing Place and Guardian
Contact.
The below Snapshot 7.8 shows the snapshot of a recognized face with no details.
The Snapshot 7.8 shows the snapshot of the window after getting the status of a matched
face on clicking the Analyze button, but no records for this face are found, so we can
conclude that the person is neither a criminal nor a missing child.
The Snapshot 7.9 shows the snapshot of the window after getting a status of faces that are not
matching and displaying the status i.e., Face not matched.
The Snapshot 7.10 shows the snapshot of the window after getting a result of face not detected
that means photos that are uploaded have no face and getting a status Face Not Detected on the
window.
Accuracy Graph
It is a plot of accuracy on the y-axis versus epoch on the x-axis, with curves for both
training and test data. Accuracy should increase with the epoch value for a better model.
Loss Graph
It is a plot of loss, a measure of the model's prediction error, on the y-axis versus epoch on
the x-axis, with curves for both training and test data. Loss should decrease with the epoch
value for a better model.
Figure 7.1 shows the accuracy during training (solid line) and testing (dashed line).
Figure 7.2 shows the loss during training (solid line) and testing (dashed line).
Inference
From the graphs in figures 7.1 and 7.2, we see that the test accuracy settles around the
90% mark, close to the training accuracy, with a correspondingly low loss, as this model is
one of the most powerful for face recognition.
7.3 Summary
This chapter presents the experimental results in section 7.1, which consists of snapshots of the
home page, the functionality of the upload and submit buttons, the output page, Missing
Children and Criminals information page, section 7.2 consists graphical inferences made.
CHAPTER 8
8.1 Conclusion
Facial recognition technology has been used to help identify and locate missing persons,
including missing children and criminals. One popular approach is to use deep learning models such as VGGFace2, a deep convolutional neural network designed specifically for face recognition.
A very simple but powerful method based on the VGGFace2 CNN architecture is used to detect Missing Criminals and Children. In this project, we focus on different methods for the prediction and classification of Criminals and Children, and the proposed methodology also discusses different image-processing techniques. Available algorithms can be modified to obtain good accuracy while classifying images. Accurate and early detection of these Criminals and Children will help the police department and various NGOs take early precautions and save a child's life. The method returns the complete details of the Missing Criminal or Child, given as the class to which the image belongs. Once a missing child has been identified through this model, NGOs can assist in reuniting them with their families by providing support and counseling services.
By utilizing this system, police departments can potentially identify and track individuals who
may pose a threat to public safety, such as wanted criminals.
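The identification step described above can be sketched as a nearest-embedding search over the stored records. The snippet below is a minimal illustration, not the project's actual code: the 4-dimensional vectors and record names are made-up stand-ins for the 512-dimensional descriptors a VGGFace2 model would produce, and the 0.5 threshold is an arbitrary example value.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(query, database, threshold=0.5):
    """Return the name of the best-matching record, or None if no
    similarity score clears the threshold (Face Not Matched)."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy embeddings standing in for descriptors of registered records
database = {
    "record_101": np.array([0.9, 0.1, 0.0, 0.4]),
    "record_202": np.array([0.1, 0.8, 0.5, 0.0]),
}
query = np.array([0.88, 0.12, 0.05, 0.38])  # embedding of the uploaded photo
print(match_face(query, database))  # → record_101
```

In the real system, the query embedding would come from the uploaded photo and the database from the registered Criminal and Missing Child records; a `None` result corresponds to the Face Not Matched status shown in Snapshot 7.9.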
References
[1] S. Matilda and S. Ayyappan, "Criminals and Missing Children Identification Using Face Recognition and Web Scrapping," The 2nd International Conference on Applied Science and Technology 2020 (ICAST'20).
[2] Mrudula Nimbarte, Kishor Bhoyar “Age Invariant Face Recognition using Convolutional
Neural Network” Third International Conference on Computing Methodologies and
Communication (ICCMC 2018)
[3] Ankit Gupta, Deepika Punj and Anuradha Pillai, "Face Recognition System Based on Convolutional Neural Network (CNN) for Criminal Identification," 2022 IEEE International Conference on Information and Automation for Sustainability (ICIAS).
[4] Gargi Tela, Sakshi Hiwarale, Shruti Dhave and Dhanashree Rathi, "CNN Based Criminal Identification," International Journal of Advanced Research in Science, Communication and Technology (IJARSCT), vol. 2, no. 2, May 2022.
[5] T. Sanjay and W. Deva Priya, "Criminal Identification System to Improve Accuracy of Face Recognition using Innovative CNN in Comparison with HAAR Cascade," 2019.
[6] K. Rasanayagam, S. D. D. Kumarasiri, W. A. D. Tharuka, N. Samaranayake, Samarasinghe and P. Siriwardana, "CIS: An Automated Criminal Identification System," 2018 IEEE International Conference on Information and Automation for Sustainability (ICIAfS).
[7] R. He, B. C. Lovell, R. Chellappa, A. K. Jain, and Z. Sun, “Editorial: Special issue on
ubiquitous biometrics,” Pattern Recognition, vol. 66, pp. 1–3, 2017.
[8] T. H. Le, “Applying Artificial Neural Networks for Face Recognition,” Hindawi Publishing
Corporation Advances in Artificial Neural Systems, vol. 2011, pp. 1-16, 2011.
[9] J. M. Guo, et al., “Human Face Age Estimation with Adaptive Hybrid Features,”
International Conference on System Science and Engineering, 2011.
[10] Y. Fu, et al., "Age Synthesis and Estimation via Faces: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11, pp. 1955-1976, 2010.
[11] D. Hunter and B. Tiddeman, “Facial Ageing,” Cambridge University Press, 2012.
[12] J. Suo, et al., “A Concatenational Graph Evolution Aging Model,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2083-2096, 2012.
[13] A. Lanitis, et al., “Toward Automatic Simulation of Aging Effects on Face Images,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4, pp. 442-455, 2002.