
VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“Jnana Sangama”, Belagavi- 590 018

A PROJECT REPORT
ON
“Missing Criminals and Children Tracking Using Deep Learning”
Submitted in partial fulfilment of the requirements for the degree of

BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE & ENGINEERING
Under the Guidance of
Mrs. KAVYA T.M B.E., M.Tech.,
Assistant Professor,
Department of Computer Science & Engineering
Adichunchanagiri Institute of Technology
Chikkamagaluru

Submitted by

ABHIMAN GOWDA B R (4AI19CS002) ADARSHA J NANDAHALLI (4AI19CS003)

SANKETH R (4AI19CS099) SURYA S S (4AI19CS114)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


ADICHUNCHANAGIRI INSTITUTE OF TECHNOLOGY
(Affiliated to V.T.U., Accredited by NBA)
CHIKKAMAGALURU-577102, KARNATAKA
2022- 2023
ADICHUNCHANAGIRI INSTITUTE OF TECHNOLOGY
(Affiliated to Visvesvaraya Technological University, Belagavi)
Chikkamagaluru, Karnataka, India-577102.

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE
This is to certify that the Project Work Phase II (18CSP83) entitled “Missing Criminals
and Children Tracking Using Deep Learning” is a bonafide work carried out by ABHIMAN
GOWDA B R (4AI19CS002), ADARSHA J NANDAHALLI (4AI19CS003), SANKETH R
(4AI19CS099), and SURYA S S (4AI19CS114), students of 8th semester B.E., in partial
fulfilment for the award of the Degree of Bachelor of Engineering in Computer Science and
Engineering of Visvesvaraya Technological University, Belagavi, during the academic year
2022-2023. It is certified that all corrections and suggestions indicated for Internal Assessment
have been incorporated in the report deposited in the department library. The project report has
been approved, as it satisfies the academic requirements in respect of Project Work Phase II
prescribed for the said Degree.

Signature of the Guide                          Signature of the Project Coordinator

Mrs. Kavya T.M., B.E., M.Tech.                  Mrs. Arpitha C.N., B.E., M.Tech.
Assistant Professor                             Assistant Professor
Dept. of CS&E                                   Dept. of CS&E

Signature of the HOD                            Signature of the Principal

Dr. Pushpa Ravikumar, B.E., M.Tech., Ph.D., LMISTE        Dr. C.T. Jayadeva, B.E., M.Tech., Ph.D.
Professor & Head                                Principal
Dept. of CS&E                                   A.I.T., Chikkamagaluru

External examiner Signature with date


____________________ _________________
____________________ _________________
ADICHUNCHANAGIRI INSTITUTE OF TECHNOLOGY
(Affiliated to Visvesvaraya Technological University, Belagavi)
CHIKKAMAGALURU, INDIA -577 102

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

APPROVAL
The Project Phase II (18CSP83) entitled “Missing Criminals and Children Tracking
Using Deep Learning” is hereby approved as a credible study of an engineering subject,
carried out and presented in a satisfactory manner, for acceptance as a prerequisite to the
degree of BACHELOR OF ENGINEERING in COMPUTER SCIENCE AND
ENGINEERING during the academic year 2022-23.

Submitted by
ABHIMAN GOWDA B R (4AI19CS002)
ADARSHA J NANDAHALLI (4AI19CS003)
SANKETH R (4AI19CS099)
SURYA S S (4AI19CS114)

Signature of the Guide
Mrs. Kavya T.M., B.E., M.Tech.
Assistant Professor
Dept. of CS&E

Signature of the Coordinator
Mrs. Arpitha C.N., B.E., M.Tech.
Assistant Professor
Dept. of CS&E

Signature of the HOD


Dr. Pushpa Ravikumar, B.E., M.Tech., Ph.D.,LMISTE
Professor & Head
Dept. of CS&E
ABSTRACT
Image processing has become one of today's trending technologies, and within it, face
recognition across aging has emerged as a popular and challenging problem. Many researchers
have contributed to this area, but a significant gap remains to be filled. The choice of
feature-extraction and classification algorithms plays an important role here. Deep learning
with Convolutional Neural Networks (CNNs) provides a combination of feature extraction and
classification in a single structure; in this work, a CNN architecture is used to recognize facial
images of a person across aging.
Traditionally, criminals are identified through thumbprint identification. A National
Crime Records Bureau (NCRB) report shows that 70% of crimes are committed repeatedly by
the same criminals. Such criminals can be identified by face recognition from an image or
video frame captured by cameras installed at various locations. The proposed system can
successfully recognize faces, which is useful for quickly searching for suspected persons.

i
ACKNOWLEDGEMENTS
We express our humble Pranamas to His Holiness Parama Poojya Jagadguru
Padmabushana Sri Sri Sri Dr. Balagangadharanatha Mahaswamiji, Parama Poojya
Jagadguru Sri Sri Sri Dr. Nirmalanandanatha Mahaswamiji, and Sri Sri Gunanatha
Swamiji, Sringeri Branch, Chikkamagaluru, who have showered their blessings on us for
shaping our career successfully.
We are deeply indebted to our honourable Director Dr. C K Subbaraya for
creating the right kind of care and ambience.

We are thankful to our beloved Principal Dr. C T Jayadeva for inspiring us to
achieve greater endeavours in all aspects of learning.

We express our deepest gratitude to Dr. Pushpa Ravikumar, Professor & Head of
the Department of Computer Science & Engineering, for her valuable guidance,
suggestions and constant encouragement, without which the success of our project work
would have been difficult.

We are grateful to our project coordinator Mrs. Arpitha C N for her excellent
guidance, constant encouragement, support and constructive suggestions.

We are thankful to our guide Mrs. Kavya T.M, Asst. Professor, Dept. of Computer
Science & Engineering, AIT, Chikkamagaluru, for her inspiration and lively correspondence
in carrying out our project work.

We would like to thank our beloved parents for their support, encouragement and
blessings. Last but not least, we express our heartfelt thanks to all the teaching and
non-teaching staff of the CS&E Department and to our friends who have rendered their
help, motivation and support.

ABHIMAN GOWDA B R (4AI19CS002)


ADARSHA J NANDAHALLI (4AI19CS003)

SANKETH R (4AI19CS099)

SURYA S S (4AI19CS114)

ii
TABLE OF CONTENTS
Chapter Title Page No.

Abstract i

Acknowledgements ii

Contents iii
List of Figures vi
List of Tables vii
List of Snapshots viii

CHAPTERS PAGE NO
1. INTRODUCTION 01

1.1 Introduction 01
1.2 Element of Image Processing 03
1.3 Motivation 06
1.4 Problem statement 07
1.5 Scope of the project 08
1.6 Objectives 08
1.7 Review of literature 08
1.8 Organization of the report 11
1.9 Summary 11

2. SYSTEM REQUIREMENTS SPECIFICATION 12

2.1 Functional Requirement 12


2.2 Non-Functional Requirement 12
2.3 Hardware Requirement 13
2.4 Software Requirements 13
2.5 Summary 13

3. High Level Design 14

iii
3.1 Design Considerations 14

3.2 System Architecture 16

3.2.1 Stages of the Face Recognition System 17


3.3 Use Case diagrams for Facial Recognition System 18
3.3.1 Pre-processing Module 19
3.3.2 Classification Module 19
3.4 Data Flow Diagram 20
3.4.1 Data Flow diagram for pre-processing module 20
3.4.2 Data Flow diagram for classification module 22
3.5 Class Diagram 22
3.5.1 Class Diagram 1 23
3.5.2 Class Diagram 2 23
3.6 Summary 24
4. Detailed Design 25
4.1 Structural Chart 25
4.2 Face Recognition Module 26
4.2.1 Pre-Processing 26
4.2.2 Feature Extraction 29
4.2.3 Convolutional Neural Network (CNN) 32
4.3 Summary 38
5. Implementation 39
5.1 Implementation Requirements 39
5.2 Programming Languages Used 40
5.2.1 Key Features of Python 40
5.2.2 Tkinter GUI 41
5.2.3 OpenCV-Python Tool 41
5.2.4 Packages 42
5.3 Pseudocodes 44
5.3.1 Pseudocode for Picture Uploading 44
5.3.2 Pseudocode for Converting RGB image to Gray 44

iv
5.3.3 Pseudocode for Face Detection using Haar Cascade 45
5.3.4 Pseudocode for Feature Extraction using 45
Local Binary Pattern.
5.3.5 Pseudocode for Classification using CNN. 45
5.4 Summary 46
6. System Testing 47
6.1 Test Procedures 47
6.1.1 Formal Test Case 48
6.1.2 Informal Test Case 48
6.2 Unit Testing 48
6.3 Summary 51
7. Results and Discussion 52
7.1 Experimental Results 52
7.1.1 Snapshot of Home Page 52
7.1.2 Snapshot of File Explorer on Clicking ‘Old Photo’ 52
and ‘Recent photo’ button
7.1.3 Snapshot after uploading image 53
7.1.4 Snapshot of RGB to Greyscale conversion. 54
7.1.5 Snapshot of Criminal details 55
7.1.6 Snapshot of Missing Children details 55
7.1.7 Snapshot of Matching Face with no records. 56
7.1.8 Snapshot of Face with no match. 57
7.1.9 Snapshot of image with no face. 57
7.2 Graphical Inferences 58
7.3 Summary 59
8. Conclusion and Future Enhancement 60
8.1 Conclusion 60
8.2 Future Enhancement 60
References 61

v
LIST OF FIGURES

Figure No. Figure Name Page No.


Figure 3.1 System Architecture 16
Figure 3.2 Stages of process of facial recognition system 17
Figure 3.3 Use case diagram for face-recognition model 18
Figure 3.4 Use case diagram of pre-processing module 19
Figure 3.5 Use case diagram of classification module 20
Figure 3.6 Data Flow diagram of face recognition 21
Figure 3.7 Data Flow diagram of pre-processing module 21
Figure 3.8 Data Flow diagram of classification module 22
Figure 3.9 Class Diagram 1 23
Figure 3.10 Class Diagram 2 24
Figure 4.1 Structural chart of the system 25
Figure 4.2 Face Recognition Module 26
Figure 4.3 Conversion from RGB to greyscale 26
Figure 4.4 Edge detection using edge filter 27
Figure 4.5 Integral Image Calculation 28
Figure 4.6 Detected face using haar cascade 29
Figure 4.7 Feature extraction using LBP 31
Figure 4.8 Typical CNN Architecture 32
Figure 4.9 Layers in CNN 33
Figure 4.10 Convolutional layer 35
Figure 4.11 Pooling layer 35
Figure 4.12 Fully Connected layer 35
Figure 4.13 ReLU layer 37
Figure 4.14 Architecture of VGG 37
Figure 7.1 VGGFace2 accuracy graph 58
Figure 7.2 VGGFace2 loss graph 59

vi
LIST OF TABLES
Table No. Table Name Page No.
Table 6.1 Test case for loading the image of Criminal or Children. 48
Table 6.2 Test case for grayscale image. 49
Table 6.3 Test case for Face Detection 49
Table 6.4 Test case for Main GUI 50
Table 6.5 Test case for Upload Old and Recent image button. 50
Table 6.6 Test case for Submit button 51

vii
LIST OF SNAPSHOTS
Snapshot. Snapshot Name Page No.
No

7.1 Snapshot of Home page 52


7.2 Snapshot of the file explorer on clicking submit. 53
7.3 Snapshot of the image that appears on the window. 53
7.4 Snapshot of Greyscale image of Old Photo. 54
7.5 Snapshot of Greyscale image of Recent Photo 54
7.6 Snapshot of Criminal details. 55
7.7 Snapshot of Missing Child details. 56
7.8 Snapshot of a matching face with no records. 56
7.9 Snapshot of a face with no match. 57
7.10 Snapshot of an image with no face. 57

viii
“Missing Criminals and Children Tracking Using Deep Learning”

CHAPTER 1
INTRODUCTION
1.1 Introduction

Face recognition is one of the most significant research topics in today's world of
science and technology. Traditionally, repeat offenders are identified by biometrics such as
thumbprints, but criminals are smart enough not to leave their biometrics at the crime scene.
In developed countries, governments create datasets that help recognize human faces and
match them against suspicious activity. In our system, the input is an image of the criminal
captured by CCTV at the time of the crime, or an image of a missing child.

Our application automatically searches for a match for this picture among the images
already in the database. This helps the police department spot criminals quickly and also
helps them recognize missing children.

Overview of Image Processing

Image processing refers to the processing of digital images: removing noise and any
irregularities that may have crept into the image during its formation, transformation or
storage. For mathematical analysis, an image may be defined as a two-dimensional function
f(x, y), where x and y are spatial coordinates and the amplitude of f at any point (x, y) is called
the intensity of f at that point. In grey-scale images, it is also called the grey level. When x, y
and these intensity values are all finite, discrete quantities, the image is said to be a digital
image. Note that a digital image is composed of a finite number of elements, each of which
has coordinates and a value.

These elements are called picture elements, or pixels, and are the smallest parts of an
image. Various techniques have been developed over the past decades, most of them aimed
at enhancing images obtained from various photographic equipment. Image processing
systems are becoming popular due to the easy availability of powerful personal computers,
algorithms, memory devices, software, etc.

Moreover, over the years, their usage has also become very simple: not just researchers
and scientists but laymen as well can use them in the right way to realize their potential
better.
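The definition of a digital image as a sampled, quantized function f(x, y) can be illustrated with a short sketch (a hypothetical NumPy example; the array values are made up for illustration):

```python
import numpy as np

# A tiny 4x4 grayscale "digital image": f(x, y) sampled at discrete
# coordinates, with intensities quantized to the 8-bit range 0-255.
f = np.array([
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 50, 200, 200,  50],
    [ 12,  50,  50,  12],
], dtype=np.uint8)

# Each element is a pixel: the intensity (grey level) of f at (x, y).
print(f.shape)   # (4, 4) -> a finite number of elements
print(f[1, 2])   # intensity at row 1, column 2 -> 200
```

A real photograph is the same structure at a larger scale, e.g. a 1080×1920 array for a full-HD frame.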

B.E., Dept. of CS&E, A.I.T., Chikkamagaluru. Page 1



Image processing is used in various applications such as:

• Remote Sensing

• Medical imaging

• Non-destructive Evaluation

• Forensic Studies

• Textiles

• Material Science

• Military

• Film Industry

• Document Processing

• Graphic Arts

• Printing Industry.

Digital Image Processing

In a general-purpose digital image processing system, the first step is image
acquisition: acquiring a digital image. This requires an imaging sensor and the capability to
digitize the signal it produces. The sensor could be a monochrome or colour TV camera, or a
line-scan camera. If the output of the camera or other imaging sensor is not already in digital
form, an analog-to-digital converter digitizes it. The nature of the sensor and the image it
produces are determined by the application. After a digital image has been obtained, the next
step is preprocessing. The key function of preprocessing is to improve the image in ways that
increase the chances of success of the other processes. Preprocessing typically involves
enhancing contrast, removing noise, and isolating regions whose textures indicate a
likelihood of alphanumeric information.

The next stage is segmentation, which partitions an input image into its constituent parts
or objects. A rugged segmentation procedure brings the process a long way toward the
successful solution of an imaging problem, while weak or erratic segmentation algorithms
almost always guarantee eventual failure. In character recognition, for example, the key role
of segmentation is to extract individual characters and words from the background.
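The simplest form of segmentation is global thresholding, which partitions a grey-scale image into object and background pixels. The sketch below is an illustrative NumPy example; the threshold value of 128 and the array values are arbitrary choices for demonstration:

```python
import numpy as np

def threshold_segment(image: np.ndarray, t: int = 128) -> np.ndarray:
    """Partition a grayscale image into foreground (1) and background (0)
    by comparing each pixel against a global threshold t."""
    return (image >= t).astype(np.uint8)

# Dark background with a bright "object" in the centre.
img = np.array([
    [ 10,  20,  15,  10],
    [ 20, 220, 210,  15],
    [ 15, 230, 225,  20],
    [ 10,  15,  20,  10],
], dtype=np.uint8)

mask = threshold_segment(img)
print(mask)
# The bright centre pixels form the segmented object:
# [[0 0 0 0]
#  [0 1 1 0]
#  [0 1 1 0]
#  [0 0 0 0]]
```

Practical systems use more robust variants (adaptive or Otsu thresholding), but the principle of separating object from background is the same.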


Representation is only part of the solution for transforming raw data into a form suitable for
subsequent computer processing. A method must also be specified for describing the data so
that features of interest are highlighted. Description, also called feature selection, deals with
extracting features that result in some quantitative information of interest, or features that are
basic for differentiating one class of objects from another. In character recognition,
descriptors such as lakes (holes) and bays are powerful features that help differentiate one
part of the alphabet from another.

Recognition is the process that assigns a label to an object based on the information provided
by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized
objects. Knowledge about a problem domain is coded into an image processing system in the
form of a knowledge database. This knowledge may be as simple as detailing regions of an
image where the information of interest is known to be located, thus limiting the search that
has to be conducted in seeking that information.

The knowledge base can also be quite complex, such as an interrelated list of all major
possible defects in a materials-inspection problem, or an image database containing
high-resolution satellite images of a region for change-detection applications. In addition to
guiding the operation of each processing module, the knowledge base also controls the
interaction between modules. Double-headed arrows link the processing modules and the
knowledge base, as opposed to the single-headed arrows linking the processing modules to
one another; this depiction indicates that communication between processing modules is
generally based on prior knowledge of what a result should be.

1.2 Elements of Image Processing


• Image Acquisition:

Two elements are required to acquire digital images. The first is a physical device that is
sensitive to a band of the electromagnetic energy spectrum (such as the x-ray, ultraviolet,
visible or infrared bands) and that produces an electrical signal output proportional to the level
of energy sensed. The second, called a digitizer, is a device for converting the electrical output
of the physical sensing device into digital form.

• Image enhancement:
Image enhancement is among the simplest and most appealing areas of digital image
processing. The idea behind enhancement techniques is to bring out detail that is obscured,
or simply to highlight certain features of interest in an image, for example by adjusting
brightness and contrast.
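A brightness-and-contrast adjustment can be sketched as a linear point transform (a NumPy illustration; the alpha and beta values are arbitrary choices, and OpenCV offers an equivalent built-in, `cv2.convertScaleAbs`):

```python
import numpy as np

def adjust_brightness_contrast(image, alpha=1.5, beta=20):
    """Linear point transform g(x, y) = alpha * f(x, y) + beta:
    alpha scales contrast, beta shifts brightness. Results are
    clipped back to the valid 8-bit range [0, 255]."""
    out = image.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[0, 100], [150, 250]], dtype=np.uint8)
print(adjust_brightness_contrast(img))
# [[ 20 170]
#  [245 255]]   <- 150*1.5+20 = 245; 250*1.5+20 = 395, clipped to 255
```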
• Image Restoration:

Image restoration is an area that also deals with improving the appearance of an image.
However, unlike enhancement, which is subjective, image restoration is objective, in the sense
that restoration techniques tend to be based on mathematical or probabilistic models of image
degradation.

• Face Recognition:

A face recognition system based on image processing involves processing images from a
surveillance camera. Whenever a person is detected in the camera footage, the face recognition
system kicks in. First it determines whether the person is entering or leaving the premises;
then it tries to recognize the person. If the person is recognized, the system provides that
person's details.

History

Once considered a thing of science fiction, biometric facial recognition is quickly becoming
an integrated part of people's everyday lives.

Several major industries have benefitted from the rapid advancements made in facial
recognition technology over the past 60 years, including law enforcement, border control,
retail, mobile technology, and banking and finance.

As we look forward to the future uses of Facial Recognition software, it’s good to take a step
back and see how far we have come since the early beginnings.

• The dawn of Facial Recognition – 1960s

The earliest pioneers of facial recognition were Woody Bledsoe, Helen Chan Wolf and Charles
Bisson. In 1964 and 1965, Bledsoe, along with Wolf and Bisson, began work using computers
to recognize the human face. Because the project's funding came from an unnamed
intelligence agency, much of their work was never published.

However, it was later revealed that their initial work involved the manual marking of various
“landmarks” on the face such as eye centres, mouth etc. These were then mathematically
rotated by a computer to compensate for pose variation. The distances between landmarks were
also automatically computed and compared between images to determine identity.

• Advancing the accuracy of Facial Recognition – 1970s

Carrying on from the initial work of Bledsoe, the baton was picked up in the 1970s by
Goldstein, Harmon and Lesk, who extended the work to include 21 specific subjective
markers, including hair colour and lip thickness, to automate the recognition. While the
accuracy advanced, the measurements and locations still had to be computed manually, which
proved extremely labour-intensive; nevertheless, it represented an advancement on Bledsoe's
RAND Tablet technology.

• Using linear algebra for Facial Recognition – 1980s/90s

It wasn’t until the late 1980s that we saw further progress with the development of Facial
Recognition software as a viable biometric for businesses. In 1988, Sirovich and Kirby began
applying linear algebra to the problem of facial recognition.

A system that came to be known as Eigenface showed that feature analysis on a collection of
facial images could form a set of basic features. They were also able to show that less than one
hundred values were required to accurately code a normalized facial image.

In 1991, Turk and Pentland carried on the work of Sirovich and Kirby by discovering how to
detect faces within an image which led to the earliest instances of automatic facial recognition.
This significant breakthrough was hindered by technological and environmental factors;
however, it paved the way for future developments in Facial Recognition technology.

• FERET Programme – 1990s/2000s

The Defence Advanced Research Projects Agency (DARPA) and the National Institute of
Standards and Technology (NIST) rolled out the Face Recognition Technology (FERET)
programme in the early 1990s to encourage the commercial facial recognition market. The
project involved creating a database of facial images. Included in the test set were 2,413 still
facial images representing 856 people. The hope was that a large database of test images for
facial recognition would inspire innovation and may result in more powerful facial recognition
technology.

• Face Recognition Vendor Tests – 2000s

The National Institute of Standards and Technology (NIST) began Face Recognition Vendor
Tests (FRVT) in the early 2000s. Building on FERET, FRVTs were designed to provide
independent government evaluations of facial recognition systems that were commercially
available, as well as prototype technologies. These evaluations were designed to provide law
enforcement agencies and the U.S. government with information necessary to determine the
best ways to deploy facial recognition technology.

• Face Recognition Grand Challenge – 2006

Launched in 2006, the primary goal of the Face Recognition Grand Challenge (FRGC) was to
promote and advance face recognition technology designed to support existing face recognition
efforts in the U.S. Government. The FRGC evaluated the latest face recognition algorithms
available. High-resolution face images, 3D face scans, and iris images were used in the tests.
The results indicated that the new algorithms were 10 times more accurate than the face
recognition algorithms of 2002 and 100 times more accurate than those of 1995, showing the
advancements of facial recognition technology over the past decade.

• Social Media – 2010-Current

Back in 2010, Facebook began implementing facial recognition functionality that helped
identify people whose faces may feature in the photos that Facebook users upload daily. The
feature was instantly controversial with the news media, sparking a slew of privacy-related
articles. However, Facebook users by and large did not seem to mind; it had no apparent
negative impact on the website's usage or popularity, and more than 350 million photos are
uploaded and tagged using face recognition each day.

1.3 Motivation

In this era of Internet explosion, computer technology has entered many areas of people's
lives and work. The occasions on which people come into contact with computers are
gradually expanding, and the frequency with which people use computing is also increasing.
As an important identity label for distinguishing different individuals, face recognition
technology has gradually entered people's lives. Face recognition is a combination of
artificial intelligence and computer vision.

Because of its challenging innovation and broad application prospects, face recognition has
become one of the most challenging topics in this field. In recent years, face recognition
application systems have developed rapidly worldwide as a computer security technology;
especially today, when terrorist activities are rampant, this technology has received more and
more attention. Face recognition technology has many typical applications in the fields of
public safety, the civil economy, and home entertainment.

There is no dedicated criminal and missing-children face detection system: police
technicians have to go through different pictures of criminals and manually slice each picture
to generate images.

The motivation behind this project is to make it easier for police personnel to find criminals
and missing children. With it, the police can track them without wasting time and money.
The application also provides all the details of the criminal or missing child.

1.4 Problem Statement

To develop an application that provides a way to register and track criminals and missing
children remotely with the help of criminal data.

Input: Takes an image of a criminal or missing child as input.

Processing:

Pre-processing: Greyscale conversion and face detection using OpenCV.

Feature Extraction: Feature extraction using Local Binary Pattern (LBP).

Classification: Classification using a Convolutional Neural Network (CNN).

Output: Detects and displays information about the criminal or missing child.
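The pre-processing and feature-extraction stages can be sketched as follows. This is a simplified NumPy illustration, not the project's actual implementation: face detection via OpenCV's Haar cascade is not reproduced here, the luminance weights are the standard ITU-R BT.601 values, and only the basic 3×3 LBP operator is shown; the toy input array is made up for demonstration.

```python
import numpy as np

def rgb_to_gray(rgb):
    """Grey-scale conversion using the standard luminance weights."""
    return (rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587
            + rgb[..., 2] * 0.114).astype(np.uint8)

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours
    against the centre pixel and pack the bits into one byte."""
    center = patch[1, 1]
    # Neighbours in clockwise order starting at the top-left.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for i, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << i
    return code

def lbp_image(gray):
    """Apply the LBP operator to every interior pixel; the histogram
    of these codes is the feature vector fed to the classifier."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = lbp_code(gray[y - 1:y + 2, x - 1:x + 2])
    return out

# Toy 4x4x3 RGB input standing in for a detected face region.
rgb = np.full((4, 4, 3), 120, dtype=np.uint8)
gray = rgb_to_gray(rgb)
features, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256))
print(gray.shape, features.sum())   # one LBP code per interior pixel
```

In the full pipeline, the resulting 256-bin LBP histogram (computed over the detected face region) is what the CNN classifier consumes.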


1.5 Scope of the project

Face recognition across aging has become a very popular and challenging task in the area of
face recognition. Many researchers have contributed to this area, but a significant gap remains
to be filled. The selection of feature-extraction and classification algorithms plays an important
role here. Deep learning with Convolutional Neural Networks provides a combination of
feature extraction and classification in a single structure.

• The presented work can be implemented as an effective way to detect and recognize
criminals' faces and missing children.
• A survey has revealed that various methods, and combinations of these methods, can be
applied in the development of a new face recognition system.
• It can be used in many fields, such as banks, hotels, schools and colleges for tracking
attendance, airport security systems, and police departments.

1.6 Objectives

The objectives of the proposed system are:

• To save the time taken by the traditional methods the police use to recognize criminals.
• To develop an effective model that can detect a criminal's face even when he may have
changed his appearance.
• To develop a model that can detect the face of a missing child.
• To find a series of data for the same face by using a set of training images in a database,
and to display the information about the criminal or child.
• To save the police department's time and effort.

1.7 Review of Literature

[1] Dr S. Matilda and S. Ayyapan “Criminals And Missing Children Identification Using
Face Recognition And Web Scrapping” The 2nd International Conference on Applied
Science and Technology 2020

This system can successfully recognize more than one face, which is useful for quickly
searching for suspected persons, as the computation time is very low. It creates a unique
template for each face and compares it with the other images available in the dataset. If a
match is found for the input face, the details associated with the related image are displayed.
This system will decrease crime and ensure security in our society. The work compares various
types of images, and the accuracy of the results is very satisfying. It performs well with both
images and videos. The results displayed are 90% accurate. It requires less memory to
implement and takes less time compared with other approaches. Using it, criminals and
missing children or persons can be easily identified, and the system keeps updating
dynamically. The analysis was carried out with real criminal images from the web and
provides good results. This application will decrease crime in our environment.

[2] Mrudula Nimbarte, Kishor Bhoyar “Age Invariant Face Recognition using
Convolutional Neural Network” Third International Conference on Computing
Methodologies and Communication August 2018.

In this paper, the authors proposed a novel methodology for age-invariant face recognition
using a Convolutional Neural Network, named AIFR-CNN. Experimentation was performed
on two image datasets, FGNET and MORPH-II. The goal of this approach is to provide a
simple network by using fewer layers and a small image size (32×32) for processing. The
system preserves simplicity, as no separate algorithm is required for feature extraction. The
results demonstrated that it is better than the current state of the art in Rank-1 recognition on
both datasets. Moreover, no complicated preprocessing steps are used for head-pose
correction. Resized images of 32×32 pixels show better results compared to images of 64×64
pixels on both datasets. AIFR-CNN with SVM as the final classification stage shows a
significant performance improvement over AIFR-CNN with NN as the final classification
stage.

[3] Ankit Gupta, Deepika Punj & Anuradha Pillai “Face Recognition System Based on
Convolutional Neural Network (CNN) for Criminal Identification” 2022 IEEE
International Conference on Information and Automation for Sustainability (ICIAS)

This paper surveys approaches adopted for face recognition systems (FRS) that make use of
the CNN model. Recent studies show that existing face recognition systems were easily
attacked and tricked using faked images of the targeted person obtained from social networks
such as Facebook, Instagram and Twitter. The main task involved in image classification is
extracting important features. By using image processing methods such as scaling,
thresholding and contrast enhancement, based on deep neural networks, a face recognition
system can classify results more efficiently and achieve high accuracy. The paper gives
emphasis to extracting the various important features required for face recognition using a
CNN model, and gives a brief overview of the techniques employed by researchers in this
area. In this research, the authors used the Flickr-Faces-HQ Dataset (FFHQ) and test images
mixed with some real-time images captured from camera devices such as CCTV and mobile
cameras; when an appropriate match is found, the system gives information according to the
matched faces. The accuracy obtained using this CNN model on Google Colab is 96%.

[4] Gargi Tela, Sakshi Hiwarale, Shruti Dhave, Dhanashree Rathi ” CNN Based Criminal
Identification” International Journal of Advanced Research in Science, Communication
and Technology (IJARSCT) Volume 2, Issue 2, May 2022

Considering the abnormal increase in the crime rate and the number of criminals, there is a
need for a more effective criminal identification technique. Biometric techniques like
thumbprint identification are fading out today, as criminals have become clever enough not to
leave their fingerprints at the scene. The human face is the most important attribute for
recognizing any individual. It is a dynamic object with a high degree of variability in its
appearance, which makes it a better identification technique than the other biometric
techniques.

But there are many challenges in face identification systems too. The project aims to overcome
such challenges and evaluates various faces using a Convolutional Neural Network (CNN) to
provide a complete solution for image-based face detection, with an accuracy of 76.19 for
criminal identification.

[5] T. Sanjay & W. Deva Priya, “Criminal Identification System to Improve Accuracy of Face Recognition using Innovative CNN in Comparison with HAAR Cascade”, 2019

Criminal face recognition was performed using CNN (N=10) and HAAR Cascade (N=10), each iterated 10 times for efficient and accurate analysis on labelled data with two groups, G power of 80%, threshold 0.05, 95% CI, mean and standard deviation. The split size for training and testing was 70% and 30% respectively. Results: after analysing the results, the accuracy for HAAR Cascade is 84.50% and the accuracy for CNN is 90.30%, with a significance value of p = 0.0309 (p < 0.05), showing that there is a significant difference between the groups.


1.8 Organization of the Report

The report has been organized into the following chapters:

Chapter 1 - Introduction: Presents a brief description of Missing Criminals and Children Tracking Using Deep Learning.

Chapter 2 - System Requirements Specification: Consists of the functional and non-functional requirements and the software and hardware requirements used in this project.

Chapter 3 - High Level Design: Contains the design considerations, the architecture of the proposed system and the use case diagrams.

Chapter 4 - Detailed Design: Contains the structural chart and a detailed description of each module, including the face recognition module.

Chapter 5 - Implementation: Describes the implementation requirements, the programming languages used and pseudocode for the various techniques.

Chapter 6 - System Testing: Describes the different test procedures and the unit testing for each module.

Chapter 7 - Results and Discussions: Describes the experimental results, snapshots of the system interface and the various modules in the project.

Chapter 8 - Conclusion and Future Enhancement: Describes the conclusion of the project and the future enhancements that can be carried out.

1.9 Summary

This chapter described the importance of image processing in face recognition, and Section 1.3 explained the history of face recognition techniques. The motivation of the project was discussed in Section 1.4, the problem statement in Section 1.5, the scope of the project in Section 1.6 and the objectives in Section 1.7. The literature survey reviewed the important papers referred to.


CHAPTER 2

SYSTEM REQUIREMENT SPECIFICATION


The system requirement specification is gathered by extracting the appropriate information needed to implement the system. It elaborates the conditions the system needs to attain. Moreover, the SRS delivers complete knowledge of the system, describing what this project is going to achieve without placing any constraints on how to achieve that goal; implementation details are deliberately hidden from outside parties.

2.1 Functional Requirements

• The project must execute the given model and produce the expected outputs. The ultimate goal is to make the model recognize a criminal irrespective of their age.
• During training, the model should report the accuracy of its output.

The tool should:

• Enable the user to upload images in various formats
• Enable the user to upload an image of any size
• Provide a workflow in which the user can run the software and obtain the best result every time.

2.2 Non-Functional Requirements

• The developed model is scalable: it is able to detect a person's face even when only a few features are extracted (again, this depends on which features are extracted).

• The user should satisfy all the hardware and software requirements; only then will the model support the user's actions.

The tool should:

• Produce results quickly (within a few seconds).
• Provide an interactive user interface.
• Work consistently across platforms.


• Never fail in the middle of the operation.


• The response time of the tool should be acceptable.

2.3 Hardware requirements

• System: Intel i3/i5 2.4 GHz.


• Hard Disk: 500 GB
• RAM: 4/8 GB

2.4 Software requirements

• Operating system: Windows 7/8/10/11 or Linux
• Coding language: Python 3
• Software: Visual Studio 2010, Jupyter Notebook, IDLE

2.5 Summary

Chapter 2 covers all the system requirements needed to develop the proposed system. Section 2.1 lists the functional requirements of the project, Section 2.2 the non-functional requirements, Section 2.3 the hardware requirements and Section 2.4 the software requirements.


CHAPTER 3

HIGH LEVEL DESIGN

High-level design (HLD) explains the architecture that will be used for developing a software product. The architecture diagram provides an overview of the entire system, identifying the main components to be developed for the product and their interfaces.

The HLD uses non-technical to mildly technical terms that should be understandable to the administrators of the system. In contrast, low-level design further exposes the detailed logical design of each of these elements for programmers.

High-level design addresses the software-related requirements. This chapter presents the complete system design and shows how the modules and sub-modules are integrated and how data flows between them. It is a simple phase that outlines the implementation process; errors made here are corrected in later phases.

3.1 Design Consideration

The following design considerations cover special cases.

Case 1: Image captured but no face detected

This case arises when the camera/dataset captures an image that does not contain the face content the system requires. A warning message describing the problem is displayed, and the user is asked to re-enter the image until the system recognizes an image containing a face.

Case 2: Image captured but not clear

This case arises when the camera/dataset captures an image in which a face is recognized but is not clear. The user is asked to re-enter the face image until the system obtains a good image; otherwise the user can proceed with the same image, although the results might vary.


Case 3: Unable to get an image into the buffer

This case arises when the camera/dataset is unable to load an image into the system buffer. The system reports an error because it has no data to work with: the image buffer is empty. The system can handle this as an exception and proceed by taking random buffer values, but this would lead to mismatched results.

Case 4: System bugs and errors

This case arises when there are bugs in the program. The system handles bugs and errors through exception handling, and assertions are written in the code so that bugs and errors are reported and can be rectified once found.

The design considerations describe how the system behaves for face images and classifies the corresponding notation for the given dataset. The pipeline consists of pre-processing, segmentation, feature extraction and classification.

Pre-Processing: the dataset's sample images are converted to greyscale, faces are detected, and the images are resized.

Face Recognition: A Convolution Neural Network (ConvNet/CNN) is a Deep Learning


algorithm which can take in an input image, assign importance (learnable weights and biases)
to various aspects/objects in the image and be able to differentiate one from the other.

The pre- processing required in a ConvNet is much lower as compared to other classification
algorithms.

While in primitive methods filters are hand-engineered, with enough training, ConvNets have
the ability to learn these filters/characteristics.

The face recognition system in this project processes images from a surveillance camera. Whenever a person is detected in the camera footage, the face recognition system kicks in. First it determines whether the person is entering or leaving the premises; then it tries to recognize the person. If the person is recognized, the system provides that person's details.


3.2 System Architecture

Figure 3.1 briefs the architecture of the proposed system

Figure 3.1: System Architecture for Face Recognition System

This section describes the proposed methodology for missing-criminal and children face recognition using Convolutional Neural Networks (AIFR-CNN). The CNN method is used to produce 3D face images from 2D face images. Each image is mapped to a feature vector in a high-dimensional space; in this space, features belonging to the same person lie close to each other and far from those of different persons.
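The closeness test on feature vectors can be sketched with a plain Euclidean distance and a match threshold; the threshold value 0.6 and the helper names below are illustrative assumptions, not taken from the report:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(emb1, emb2, threshold=0.6):
    """Treat two face embeddings as the same person when their
    feature vectors lie close together in the embedding space."""
    return euclidean(emb1, emb2) <= threshold
```

In practice the threshold would be tuned on a validation set of known same/different pairs.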

Face detection on the camera images is performed using a convolutional neural network (CNN). The camera captures face images of the objects of interest; the raw image from the camera lens contains both a background and a face. The face detection process detects and searches for facial features in the camera image, and at this stage the system recognizes whether a pattern is a face or not. Normalization, or pre-processing, is the process that produces a face image


that has been detected during the face detection stage. In this normalization phase, a combination of several face image processing steps is used: cropping, resizing, RGB-to-grey conversion and histogram equalization as a contrast-brightness adjustment to optimize facial recognition. The pre-processing improves the sharpness of the image and anticipates the variations in illumination that commonly appear when capturing facial images.
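The histogram-equalisation step mentioned above can be sketched for an 8-bit greyscale image held as a list of lists; this is a generic textbook implementation, not the exact routine used in the project:

```python
def equalize_histogram(gray):
    """Histogram equalisation for an 8-bit greyscale image:
    spread the grey levels via the cumulative distribution function."""
    # 1. histogram of the 256 grey levels
    hist = [0] * 256
    for row in gray:
        for v in row:
            hist[v] += 1
    # 2. cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = total
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:           # flat image: nothing to equalise
        return [row[:] for row in gray]
    # 3. map each level through the normalised CDF
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) for c in cdf]
    return [[lut[v] for v in row] for row in gray]
```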

Feature extraction is the process of capturing the desired feature descriptors using LBP (Local Binary Pattern) and the CNN, rather than extracting them manually. In this model, a 7-layer CNN architecture is used. Classification is then required to recognize the identity of the person; this is a multi-class classification problem.

3.2.1 Stages of the Face Recognition System

Face capture end → wired transmission module → data storage → face recognition module → identifying children or criminal record

Figure 3.2: Stages of the facial recognition process.

Figure 3.2 shows the stages in the face recognition system. The system mainly contains the following modules:

• Face Capture End:

This is the camera placed at the crime scene. It needs to be active all the time, capturing images and transmitting them to the wired transmission module.

• Wired Transmission Module:

This is the module which transports the image captured by Face Capture End into the
Face Recognition Module


• Database:
This is where the details of all criminals and missing children are stored.

• Face Recognition Module:

This is responsible to recognize the face in the image captured by Face Capture End.

• Identifying children or criminal record:

This module is responsible for getting the complete details after the face
recognition module recognizes the person in the captured image.

3.3 Use Case diagrams for Facial Recognition System

The use case diagram at its simplest is a representation of a user's interaction with the system
that shows the relationship between the user and the different use cases in which the user is
involved.

The use case diagram for the missing-criminal and children recognition model is depicted in Figure 3.3. Initially, the set of captured images is stored in a temporary file in OpenCV. The obtained RGB image is converted into a greyscale image to reduce complexity, and the pre-processing techniques are applied to it. Feature extraction is then carried out, and the extracted features are fed into classifiers trained on the available dataset to detect the face. The detected face is displayed along with the intermediate results and various information regarding the detected face.

[Use cases: Input → Pre-processing → Feature extraction → Training model → Classification → Face recognition → Result. Actors: User, System.]

Figure 3.3: Use Case Diagram for face recognition model

3.3.1 Pre-Processing Module

Name of the Module: Pre-Processing

• Actors: User, System


• Use Cases: Input Image, Generate RGB Image, RGB to Grey Image, Noise Removal,
Thresholding, Face Detection, Image Sharpening.
• Functionality: The main functionality of this module is to pre-process the data to obtain
the features conveniently.
• Description: Figure 3.4 shows the use case diagram of the pre-processing module. In this use case diagram there are several use cases and two actors. In the first use case, an image is captured and used as input for this module. In the second use case, an RGB image is generated from the captured image. In the third use case, the RGB image is converted into a greyscale image to reduce complexity. Finally, face detection is done using the Haar Cascade.

[Use cases: Input image → RGB to grey conversion → Face detection. Actors: User, System.]

Figure 3.4: Use Case Diagram of pre-processing module.

3.3.2 Classification Module

Classification using CNN:

• Actors: System, User

• Use Cases: Pre-processed Image, Training, Classification, Face Recognition

• Functionality: The main functionality of this module is to recognize face if present.


• Description: Figure 3.5 shows the use case diagram of the classification module. In this use case diagram there are four use cases and two actors. In the first use case the system takes the pre-processed image; in the second the classifier is trained; in the third the classifier is applied; and in the fourth the face is recognized.

[Use cases: Pre-processed image → Training → Classification (CNN) → Face recognition. Actors: User, System.]

Figure 3.5: Use Case Diagram of Classification using CNN module.

3.4 Data Flow Diagram

As the name suggests, a data flow diagram explains in detail how data flows between the different processes. Figure 3.6 depicts the flow diagram, which is composed of input, process and output. The data that flows between the systems after each process must be specified, hence the name data flow diagram. It is often the initial step in designing any system to be implemented. It also shows where the data originated, where it flowed and where it is stored. The input image is pre-processed, then feature extraction is carried out, then classification is done after training the model, and in the final step the face is recognized.

3.4.1 Data Flow Diagram of Pre-processing Module

The input image is captured through a camera and can either be stored in the dataset for training or used as the input image in which to detect a face. The captured image is stored in any supported format specified by the device.


As shown in the figure 3.7 initially the set of captured images are stored in a temporary
file in OpenCV. The storage is linked to the file set account from which the data is
accessed. The obtained RGB image is converted into Greyscale image to reduce
complexity.

[Input image → Pre-processing → Feature extraction → Classification (trained on the dataset) → Face recognition]

Figure 3.6: Data Flow Diagram of Face Recognition

[Input image → RGB to greyscale conversion → Face detection → Pre-processed image]

Figure 3.7: Data Flow Diagram for pre-processing module

Pre-processing is required on every image to enhance the effectiveness of image processing. Captured images are in RGB format, and their pixel values and dimensionality are very high. Since images are matrices, operations performed on images are mathematical operations on matrices. So the RGB image is converted into a grey image, then noise removal is carried out, followed by the Haar Cascade; the last step is resizing, after which we

obtain the pre-processed Image.

3.4.2 Data Flow Diagram of Classification using CNN

In deep learning, a convolutional neural network (CNN) is a class of deep neural network most commonly applied to analysing visual imagery. CNNs are also known as shift-invariant or space-invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation-invariance characteristics.

When a CNN is used for classification, separate feature extraction is not needed: feature extraction is also carried out by the CNN. The pre-processed image is fed directly to the CNN classifier to recognize the face, if present. Figure 3.8 shows the Data Flow Diagram of Classification using CNN.

[Pre-processed image → CNN classifier → Face detection]

Figure 3.8: Data Flow Diagram of Classification using CNN

3.5 Class Diagram

A class diagram in the Unified Modeling Language (UML) is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships among the classes. It is used to visualize, describe and document various aspects of the system, and also to construct executable software code.

Each class is represented as a rectangle with three compartments for the class name, attributes and operations.

The class diagram is the main building block of object-oriented modelling. It is used both for general conceptual modelling of the application and for detailed modelling that translates the models into programming code. Class diagrams can also be used for data modelling: the classes in a class diagram represent the main objects and interactions in the application, and the classes to be programmed.
3.5.1 Class Diagram 1

Figure 3.9: Class diagram 1

In Figure 3.9, the class diagram represents the pre-processing of the image. First the image is uploaded; that image is converted to greyscale, the face is detected using the Haar cascade function, the image is cropped, and the cropped image is resized for further processing.

3.5.2 Class Diagram 2

In Figure 3.10, after pre-processing, the image is sent through various feature extraction techniques such as LBP and facial landmark algorithms, and VGG16 is used for classification; the result is then predicted and the information of the detected person is displayed.


Figure 3.10: Class Diagram 2

3.6 Summary

In the third chapter, the high-level design of the proposed method is discussed. Section 3.1 presents the design considerations for the project. Section 3.2 discusses the system architecture of the proposed system. Section 3.3 describes the use case diagrams. Section 3.4 presents the data flow diagrams for all modules in the system. Finally, Section 3.5 describes the process of face recognition using class diagrams.


CHAPTER 4
DETAILED DESIGN
Detailed design is the process of designing each individual module, completed before implementation begins. It is the second phase of design: the first phase produces the overall design, and the second produces the individual design of each module. It saves time and makes implementation easier.

Detailed design refines and expands the preliminary design of a system or component to the extent that the design is sufficiently complete to begin implementation. It provides complete details about the system, is frequently referred to by the developers during implementation, and is of utmost importance while troubleshooting or rectifying problems that may arise.

4.1 Structural chart

Figure 4.1: Structural chart of the system

The prediction model is composed of 4 modules, namely- the data acquisition module,
the pre-processing module, the feature extraction module and then, the classification
module. This constitutes the complete structure of the system, which specifies the
modules that are to be considered during the implementation phase of the project.
In the data acquisition module, the image of a person is taken as input; in the pre-processing module, the image is pre-processed using various pre-processing techniques; the pre-processed image is then sent for feature extraction and classification.


4.2 Face Recognition Module

The job of the face recognition module is to recognize the face in the image. Face
Recognition Module uses Convolution Neural Network to do the job.

[Image → Pre-processing → Feature extraction → Convolutional Neural Network → Face ID]

Figure 4.2: Face Recognition Module

Figure 4.2 shows the face recognition module. This module consists of three stages.

1. Pre-processing
2. Feature Extraction
3. Convolution Neural Network

4.2.1 Pre-processing

There are two sub steps:

• First, input color (RGB) image will be converted to Grayscale image.

• The greyscale image face is detected using haar cascade classifier.


The first step in pre-processing is converting the image from RGB to greyscale, obtained by applying the formula below to each RGB pixel. Figure 4.3 depicts the conversion from RGB to grayscale.
The formula:
Gray = 0.299*R + 0.587*G + 0.114*B

Figure 4.3: Conversion from RGB to grayscale.
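Per pixel, the weighted-sum conversion above is a one-liner (these are the BT.601 luma weights; green contributes most, blue least, matching human perception):

```python
def rgb_to_gray(r, g, b):
    """Greyscale intensity of one pixel using the weights above."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```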


Advantages of converting the RGB colour space to grey

• Storing a single pixel of an RGB colour image needs 8*3 = 24 bits (8 bits per colour component), whereas only 8 bits are required for a single pixel of a greyscale image, so a greyscale image needs only one third of the memory of an RGB image.
• Greyscale images are much easier to work with in a variety of tasks: in many morphological operations and image segmentation problems it is easier to work with a single-layered image (greyscale) than with a three-layered image (RGB).
• It is also easier to distinguish features of an image when dealing with a single-layered image.

Face detection using OpenCV (Haar Cascade)

It is an object detection algorithm used to identify faces in an image. The algorithm uses the edge and line detection features proposed by Viola and Jones. These features make it easy to find the edges or lines in the image, or to pick out areas where there is a sudden change in pixel intensity.

Figure 4.4: Edge detection using edge filter


These feature values are then used in an AdaBoost variant.

The above is just one representation of a particular Haar feature separating a vertical edge. Traversing an image with the Haar features would involve a lot of mathematical calculations. To tackle this, Viola and Jones introduced another concept, the integral image, to perform the same operation cheaply. An integral image is calculated from the original image in such a way that each pixel in it is the sum of all the pixels lying to its left and above in the original image; the pixel at the bottom-right corner of the integral image is therefore the sum of all the pixels in the original image.

To get the delta value:
• Instead of summing pixels directly, take the bottom-right corner value of each region in the integral image and subtract.
• 1461 - 599 = 862; this is the pixel sum of the shaded area.
• Delta = 862 - 599 = 263.

In total, a large number of addition operations is reduced to just two subtractions.

Figure 4.5: Integral image calculation
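A minimal sketch of the integral-image trick described above: build the summed-area table once, then any rectangle sum costs four table lookups. The zero-padded first row and column is an implementation convenience, not part of the report's figure:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] holds the sum of all pixels
    above and to the left of (x, y) in the original image."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y),
    using only four lookups in the integral image."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```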

AdaBoost

A majority of the features found in the previous step will not work well or will be irrelevant to facial features, as they are too random to find anything. A feature selection technique is therefore needed to select a subset of features from the huge set.

For this, a boosting technique, AdaBoost, is used. AdaBoost uses an iterative approach to learn from the mistakes of weak classifiers and turn them into strong ones. 180,000 features were applied to the images separately to create weak learners; with this technique, the final set of features was reduced to a total of 6,000.


Attentional Cascade

The subset of 6,000 features would again have to be run over the training images to detect whether a facial feature is present, which is a tiresome task for large images. To simplify this, the attentional cascade is used. The idea is that not all features need to run on each and every window: if a feature fails on a particular window, we can say that facial features are not present there and move on to the next window. In the Viola–Jones detector there are 38 stages for the 6,000 features; the numbers of features in the first five stages are 1, 10, 25, 25 and 50, increasing in the subsequent stages. The initial stages, with fewer and simpler features, remove most of the windows that do not contain any facial features, thereby reducing the false positive ratio.

Figure 4.6: Detected face using Haar Cascade
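The early-rejection idea can be sketched generically; the stage score functions and thresholds below are hypothetical stand-ins, not Viola–Jones's actual trained stages:

```python
def cascade_detect(window, stages):
    """Attentional cascade sketch: each stage is a (score_fn, threshold)
    pair.  A window is rejected as soon as one stage scores below its
    threshold, so most windows exit after the cheap early stages."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # no facial features here: stop early
    return True           # survived every stage: candidate face
```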

4.2.2 Feature Extraction

Local Binary Pattern (LBP) is a feature extraction technique that works on local features:

• It summarizes the local spatial structure of an image.
• LBP is defined as an ordered set of binary comparisons of pixel intensities between the centre pixel and its eight surrounding pixels.
• LBP works on a 3x3 pixel window, i.e. it looks at 9 pixels at a time.

The LBP value is obtained as

LBP = sum over n = 0..7 of s(In - Ic) * 2^n, where s(x) = 1 if x >= 0 and 0 otherwise,

where In is a neighbour pixel value and Ic is the centre pixel value.


Example: a 3x6 image patch, with the 3x3 window on the left (centre Ic = 43):

33  34  37  41  45  45
29  43  65  73  88  87
116 97 162 180 187 184

(3x3) window, neighbours indexed n = 0, 1, ..., 7, Ic = 43


Figure 4.7: Feature Extraction Using Local Binary Pattern
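The 3x3 comparison can be sketched directly. Neighbour ordering and bit weighting vary between LBP implementations, so the clockwise-from-top-left order with the first neighbour as the most significant bit is an assumption here:

```python
def lbp_value(window):
    """LBP code for the centre of a 3x3 window: each of the 8 neighbours
    contributes a 1 if it is >= the centre pixel, 0 otherwise."""
    c = window[1][1]
    # neighbours clockwise, starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = ''.join('1' if window[y][x] >= c else '0' for y, x in coords)
    return int(bits, 2)
```

Applied to the 3x3 window shown above (centre 43), the neighbours 33, 34, 37, 65, 162, 97, 116, 29 give the pattern 00011110.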


4.2.3 Convolution Neural Network (CNN)

A neural network is a system of interconnected artificial “neurons” that exchange messages


between each other. The connections have numeric weights that are tuned during the training
process, so that a properly trained network will respond correctly when presented with an image
or pattern to recognize. The network consists of multiple layers of feature-detecting “neurons”.
Each layer has many neurons that respond to different combinations of inputs from the previous
layers. As shown in Figure 4.8, the layers are built up so that the first layer detects a set of
primitive patterns in the input, the second layer detects patterns of patterns, the third layer
detects patterns of those patterns, and so on. Typical CNNs use 5 to 25 distinct layers of pattern
recognition.

A CNN is composed of several kinds of layers:


• Convolutional layer-creates a feature map to predict the class probabilities for each
feature by applying a filter that scans the whole image, few pixels at a time.
• Pooling layer (down-sampling)-scales down the amount of information the
convolutional layer generated for each feature and maintains the most essential
information (the process of the convolutional and pooling layers usually repeats several
times).
• Fully connected layer- “flattens” the outputs generated by previous layers to turn them
into a single vector that can be used as an input for the next layer. Applies weights over
the input generated by the feature analysis to predict an accurate label.
• Output layer- generates the final probabilities to determine a class for the image.
Figure 4.9 represents the layers in a CNN.

Figure 4.8: Typical CNN Architecture


Figure 4.9: Layers in CNN

Convolutional Layer

The convolutional layer is the first step in a CNN. A 3x3 patch of the input matrix is taken as input; that patch is multiplied element-wise with the filter matrix at the corresponding positions, and the sum is written at the corresponding output position, as shown in Figure 4.10. This output is given to the pooling layer, where the matrix is further reduced.


Figure 4.10: Convolution Layer

Convolution is followed by the rectification of negative values to 0s, before pooling. In fact,
multiple iterations of both are needed before pooling.
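The element-wise multiply-and-sum over a sliding 3x3 patch can be sketched as a plain "valid" convolution; a real layer would add padding, stride and learned filter weights, which are omitted here:

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image,
    multiply element-wise and sum at each position."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            s = sum(image[y + i][x + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out
```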
Pooling Layer

Figure 4.11: Pooling Layer

In the pooling layer a 3*3 matrix is reduced to a 2*2 matrix by selecting the maximum
of each 2*2 window at the particular position. Figure 4.11 shows the Pooling Layer.
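Max pooling can be sketched with a small NumPy example (values are illustrative; here a 2*2 window with stride 1 reduces a 3*3 map to 2*2, as described above):

```python
import numpy as np

# A 3*3 feature map (example values).
feature_map = np.array([[1, 3, 2],
                        [4, 6, 5],
                        [7, 9, 8]])

# Take the maximum of each 2*2 window to form the 2*2 pooled map.
pooled = np.array([[feature_map[i:i+2, j:j+2].max() for j in range(2)]
                   for i in range(2)])
print(pooled.tolist())  # [[6, 6], [9, 9]]
```

Each pooled value keeps only the strongest activation in its window, discarding the rest.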
Fully connected layer and Output Layer

Figure 4.12: Fully Connected Layer and Output Layer


The output of the pooling layer is flattened, and this flattened matrix is fed into the Fully
Connected Layer. The fully connected part itself consists of several layers: an input layer,
hidden layers and an output layer. Its output is then fed into the classifier; in this case the
Softmax activation function is used to classify the face and give the face ID. Figure 4.12
shows the Fully Connected Layer and Output Layer.

SOFTMAX ACTIVATION FUNCTION

The softmax activation computes the probability of each class to which the input may
belong. The number of units in the output layer is equal to the number of classes, and the
outputs form a probability distribution: the sum over all classes is equal to 1.

Suppose the sum of the exponentials of the four output scores is 246.5, i.e. Σj e^(zj) = 246.5. Then,

➢ P1 = e^(z1) / 246.5 = 0.475
➢ P2 = e^(z2) / 246.5 = 0.174
➢ P3 = e^(z3) / 246.5 = 0.174
➢ P4 = e^(z4) / 246.5 = 0.174

The four probabilities sum to approximately 1. Using these values the face is recognized and the face ID is predicted.
Threshold = 0.5

If score <= Threshold then
“Face Match”
Else
“Face not Match.”
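The softmax-and-threshold decision above can be sketched as follows (the class scores, the dissimilarity score and the threshold are illustrative values; in the project the scores come from the trained network):

```python
import math

def softmax(scores):
    """Convert raw output scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5, 0.1]          # illustrative class scores
probs = softmax(scores)
assert abs(sum(probs) - 1.0) < 1e-9    # probabilities sum to 1

# Threshold rule from the text: `score` here acts as a dissimilarity
# measure, so a value at or below the threshold counts as a match.
THRESHOLD = 0.5
score = 0.32                            # illustrative dissimilarity score
print("Face Match" if score <= THRESHOLD else "Face not Match")
```

The predicted face ID is the class with the largest softmax probability.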


Rectified linear unit (ReLU)

R(z) = max (0, z)

Figure 4.13 ReLU Layer

In the rectified linear unit, if the matrix resulting from convolution contains negative values,
those values are converted to 0 before pooling, using the max(0, z) function.
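The max(0, z) rule is a one-liner; a small sketch with made-up values:

```python
def relu(z):
    """Rectified linear unit: negative values become 0, others pass through."""
    return max(0, z)

print([relu(z) for z in [-2, -0.5, 0, 1.5, 3]])  # [0, 0, 0, 1.5, 3]
```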

Various Convolutional Neural Network Models

We have made use of various CNN models to detect missing children and criminals, and
we have compared these models. The different models used in our project are discussed
below. The architectures of these models are fixed, and they can be utilized for various
applications.

Figure 4.14 Architecture of VGG-16.


VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman
from the University of Oxford.
• The first two layers are convolutional layers with 3*3 filters; both use 64 filters, which
results in a 224*224*64 volume since ‘same’ convolutions are used. The filters are
always 3*3 with a stride of 1.
• After this, a pooling layer with a 2*2 max-pool and stride 2 reduces the height and
width of the volume from 224*224*64 to 112*112*64.
• This is followed by 2 more convolutional layers with 128 filters, which results in a new
dimension of 112*112*128.
• After a pooling layer is applied, the volume is reduced to 56*56*128.
• Three more convolutional layers with 256 filters each are added, followed by a down-sampling
layer that reduces the size to 28*28*256.
• Two more stacks, each with 3 convolutional layers of 512 filters, are separated by max-pool layers.
• After the final pooling layer, the 7*7*512 volume is flattened into Fully Connected (FC)
layers with 4096 channels and a soft-max output of 1000 classes.
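The shape arithmetic in the bullets above can be checked with a short sketch that tracks only output sizes (a simplification assuming ‘same’ 3*3 convolutions and 2*2 max-pooling with stride 2, as in VGG16):

```python
def conv(h, w, c, filters):
    # 'same' 3*3 convolution keeps height/width; channels become `filters`
    return h, w, filters

def pool(h, w, c):
    # 2*2 max-pool with stride 2 halves height and width
    return h // 2, w // 2, c

# VGG16 blocks: (number of filters, number of conv layers), each ending in a pool
shape = (224, 224, 3)
for filters, n_convs in [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)]:
    for _ in range(n_convs):
        shape = conv(*shape, filters)
    shape = pool(*shape)

print(shape)  # (7, 7, 512) -- the volume flattened into the FC layers
```

The five poolings halve 224 down to 7 (224 → 112 → 56 → 28 → 14 → 7), matching the text.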

4.3 Summary

The fourth chapter gives the detailed design of the image processing and convolutional
neural network stages of the face recognition system for criminals and children. Section 4.1
contains the structural chart of the system. Section 4.2 explains the face recognition module,
which contains the preprocessing stage and the convolutional neural network stage.


CHAPTER 5
IMPLEMENTATION
Implementation is the phase where the working of the project is demonstrated. During
implementation the system is driven into real operation. The outcomes achieved by the
project will be sound if the processes are executed accurately according to the plan carried out.

5.1 Implementation Requirements

To implement Missing Criminals and Children Tracking using a Convolutional Neural
Network, the software used is:

a. Language used to code the project: Python.

b. Operating System: Windows 10.

Anaconda

Anaconda is an open-source distribution of the Python programming language and several
other data-science-related packages, libraries, and tools that are commonly used in scientific
computing, data analysis, and machine learning. It includes many pre-installed packages such
as NumPy, Pandas, Matplotlib, and Scikit-Learn, as well as an integrated development
environment (IDE) called Spyder.

Anaconda was first released in 2012 by Continuum Analytics, a software company that was
renamed Anaconda Inc. in 2017. The distribution includes more than 1,500 open-source
packages that are curated and tested by Anaconda Inc. and the broader community of data
scientists and developers. The packages cover a wide range of topics including data
visualization, scientific computing, machine learning, deep learning, natural language
processing, and more.

One of the key features of Anaconda is its package management system, conda. With conda,
users can easily install, update, and remove packages, as well as create and manage isolated
environments for different projects with different dependencies. This makes it easy to work
on multiple projects with different package requirements without worrying about conflicts
between packages.
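A typical conda workflow for a project like this one might look as follows (the environment name and package list are illustrative, not the project's exact setup):

```shell
# Create an isolated environment for the project (names are illustrative)
conda create -n missing-person python=3.8

# Activate it, then install the packages the project depends on
conda activate missing-person
conda install numpy matplotlib tensorflow keras opencv

# Update or remove a package later without affecting other environments
conda update numpy
conda remove opencv
```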

In addition to the built-in Spyder IDE, Anaconda also supports other popular IDEs like Jupyter
Notebook and JupyterLab, as well as text editors like Sublime Text and VS Code. Anaconda
can also be integrated with cloud computing platforms like Amazon Web Services and


Microsoft Azure, making it easy to scale up data analysis and machine learning tasks.

5.2 Programming Language Used

The programming language used to design the proposed method is Python. Python is a
high-level programming language with dynamic semantics. It is an interpreted language,
i.e. the interpreter executes the code line by line, which makes debugging easy. The Python
Imaging Library (PIL) is one of the popular libraries used for image processing. PIL can be
used to display images, create thumbnails, resize and rotate images, convert between file
formats, enhance contrast, apply filters and other digital image processing techniques.

Python is often used as a support language for software developers, for build control and
management, testing, and in many other ways. Python is designed by Guido van Rossum. It is
very easy for user to learn this language because of its simpler coding. It provides an easy
environment to furnish computation, programming visualization. Python supports modules
and packages, which encourages program modularity and code reuse. It has various built-in
commands and functions which will allow the user to perform functional programming. Apart
from being an open-source programming language, developers use it extensively for
application development and system development programming. It is a highly extensible
language. Python contains many inbuilt functions which helps beginners to learn easily.

Some of the most commonly used functions are: imread reads an image from a specified
location, imshow displays the output or images on the screen, cvtColor converts a colour
image into a grayscale image, compare_ssim compares the structural similarity of two input
images, and findContours gives the regions of difference between two input images.

5.2.1 Key Features of Python

• Python is an interpreted language, i.e. the interpreter executes the code line by line,
which makes debugging easy.

• Python is a more expressive language, since it is more understandable and readable.

• To design and solve problems it offers an interactive atmosphere.

• Python has a large and broad library and provides rich set of module and functions for
rapid application development.

• Built-in graphics for visualization of data are also supported.

• It can be easily integrated with languages like C, C++, JAVA etc.


5.2.2 TKinter GUI

Tkinter is a built-in Python library for creating graphical user interfaces (GUIs). It is a
cross-platform library that provides a set of tools for creating windows, widgets, and other
GUI elements. Tkinter is based on the Tcl/Tk GUI toolkit, which was developed in the 1980s
by John Ousterhout.

Tkinter provides a wide range of widgets, including buttons, labels, text boxes, check boxes,
radio buttons, and more. These widgets can be used to create complex and interactive GUIs
for desktop applications. Tkinter also provides a geometry manager that allows developers to
position and layout widgets in a window.

One of the advantages of using Tkinter is its simplicity and ease of use. It is built into Python
and does not require any additional installations or setup. It is also well-documented and has
a large community of developers who contribute to its development and support.

Tkinter also provides support for event-driven programming, which allows developers to
create applications that respond to user interactions such as mouse clicks and keyboard inputs.
This makes it easy to create interactive and responsive GUIs.

5.2.3 OpenCV-Python Tool

OpenCV-Python is a library of Python bindings designed to solve computer vision
problems. Visual information is the most important type of information perceived, processed
and interpreted by the human brain. Image processing is a method of performing operations
on an image in order to extract useful information from it. An image is nothing more than a
two-dimensional matrix (3-D in the case of coloured images) defined by the mathematical
function f(x, y), where x and y are the horizontal and vertical coordinates. The value of
f(x, y) at any point gives the pixel value at that point of the image; the pixel value describes
how bright that pixel is, and/or what colour it should be. In image processing
we can also perform image acquisition, storage and image enhancement.
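The idea of an image as a function f(x, y) can be illustrated with a small NumPy array standing in for a grayscale image (the pixel values are made up):

```python
import numpy as np

# A tiny 3*4 grayscale "image": f(x, y) gives the brightness at (x, y).
image = np.array([[ 10,  50,  90, 130],
                  [ 20,  60, 100, 140],
                  [ 30,  70, 110, 150]], dtype=np.uint8)

height, width = image.shape
print(height, width)       # 3 4
print(int(image[1, 2]))    # brightness at row 1, column 2 -> 100
```

A colour image simply adds a third axis of size 3, one plane each for R, G and B.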

Some of the key features of OpenCV include:

• Image and video processing: OpenCV provides a wide range of tools for manipulating
and processing images and videos, including filtering, transformation, and
segmentation.

• Feature detection and extraction: OpenCV supports a range of feature detection and
extraction algorithms including SIFT, SURF, ORB, and FAST.


• Object detection and tracking: OpenCV provides tools for detecting and tracking
objects in images and videos using algorithms such as Haar cascades, HOG (Histogram
of Oriented Gradients), and Deep Learning-based methods.

• Camera calibration: OpenCV provides tools for calibrating cameras, including methods
for estimating intrinsic and extrinsic camera parameters.

• Machine learning: OpenCV provides tools for machine learning, including algorithms
for classification, regression, clustering, and deep learning.

5.2.4 Packages

Packages are namespaces which can themselves contain multiple packages and modules.
Each package in Python is a directory which must contain a special file called
__init__.py. This file can be empty; it indicates that the directory containing it is a Python
package, so it can be imported the same way a module can be imported. As an
application program grows larger with many modules, one can place similar
modules in one package and different modules in different packages. This makes a program
easier to manage and conceptually clear. We can import modules from packages using the
dot (.) operator.
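A minimal sketch of this layout, built in a temporary directory (all names are illustrative):

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a package on disk:  mypkg/__init__.py  and  mypkg/greetings.py
root = Path(tempfile.mkdtemp())
pkg = root / "mypkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("")  # marks the directory as a package
(pkg / "greetings.py").write_text("def hello():\n    return 'hello'\n")

# Import the module from the package using the dot (.) operator
sys.path.insert(0, str(root))
greetings = importlib.import_module("mypkg.greetings")
print(greetings.hello())  # hello
```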

1. Tensorflow

TensorFlow is an open-source machine learning library developed by the Google Brain
team in 2015. It is designed for building and training machine learning models, especially for
deep learning applications such as neural networks. TensorFlow is built on a computation
graph, which allows users to define complex mathematical operations as a series of nodes and
edges, and then execute them efficiently on CPUs or GPUs.

TensorFlow provides a rich set of tools for building and training machine learning models,
including APIs for high-level neural network architectures such as convolutional neural
networks (CNNs) and recurrent neural networks (RNNs), as well as low-level APIs for building
custom models. It also includes tools for data preprocessing, visualization, and evaluation of
models.
One of the key features of TensorFlow is its ability to perform distributed computing, which
allows users to scale up their models and training data to run on multiple machines or clusters.
TensorFlow also provides support for mobile and embedded devices, which allows users to
deploy their models on a wide range of platforms. In addition to its core library, TensorFlow


has several high-level APIs and frameworks built on top of it, such as Keras, a user-friendly
API for building neural networks, and TensorFlow.js, a library for building and training models
in the browser.

2. Keras

Keras is an open-source high-level neural networks API, written in Python and designed
to enable fast experimentation with deep neural networks. It was developed by François Chollet
in 2015 and is now part of the TensorFlow project.

Keras provides a user-friendly interface for building and training neural networks, with a focus
on ease of use and modularity. It allows developers to quickly build and experiment with
different network architectures and hyperparameters, without having to write low-level code
for training and optimization. Keras supports a wide range of neural network architectures,
including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and
combinations of the two. It also provides a variety of loss functions, activation functions, and
optimization algorithms.

One of the key features of Keras is its ability to seamlessly integrate with other deep learning
libraries, such as TensorFlow and Theano. This allows developers to take advantage of the
advanced features of these libraries while using Keras as a high-level interface.

Keras also includes tools for data preprocessing and model evaluation, making it a
comprehensive deep learning framework.

3.NumPy

NumPy is a Python library used for working with arrays. It also has functions for
working in the domains of linear algebra, Fourier transforms, and matrices. NumPy was created in
2005 by Travis Oliphant. It is an open-source project and you can use it freely. NumPy stands
for Numerical Python. In Python we have lists that serve the purpose of arrays, but they are
slow to process. NumPy aims to provide an array object that is up to 50x faster than traditional
Python lists. The array object in NumPy is called ndarray, it provides a lot of supporting
functions that make working with ndarray very easy. Arrays are very frequently used in data
science, where speed and resources are very important. NumPy arrays are stored at one
continuous place in memory unlike lists, so processes can access and manipulate them very
efficiently. This behaviour is called locality of reference in computer science. This is the main
reason why NumPy is faster than lists. Also, it is optimized to work with latest CPU


architectures.
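The ndarray described above supports vectorized, element-wise operations that plain Python lists do not; a small sketch:

```python
import numpy as np

# Python list vs. NumPy ndarray
values = [1, 2, 3, 4]
arr = np.array(values)

print(type(arr).__name__)  # ndarray
print((arr * 2).tolist())  # [2, 4, 6, 8] -- element-wise, no explicit loop
print(int(arr.sum()))      # 10
```

With a list, `values * 2` would instead repeat the list; with an ndarray, the multiplication is applied to every element at once in contiguous memory.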

4.Matplotlib

Matplotlib is a comprehensive library for creating static, animated, and interactive
visualizations in Python. Matplotlib produces publication-quality figures in a variety of
hardcopy formats and interactive environments across platforms. Matplotlib can be used in
Python scripts, the Python and IPython shells, web application servers, and various graphical
user interface toolkits. matplotlib.pyplot is a collection of command-style functions that make
matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g.,
creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates
the plot with labels, etc. Some people embed matplotlib into graphical user interfaces like
wxPython or PyGTK to build rich applications. Others use matplotlib in batch scripts to
generate postscript images from some numerical simulations, and still others in web
application servers to dynamically serve up graphs. Matplotlib is the brainchild of John Hunter
(1968-2012), who, along with its many contributors, put an immeasurable amount of
time and effort into producing a piece of software utilized by thousands of scientists
worldwide.
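A sketch of the batch (non-interactive) usage described above, rendering an accuracy-versus-epoch plot to a file; the data points and file name are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: render to a file, no window
import matplotlib.pyplot as plt

epochs = [1, 2, 3, 4, 5]
accuracy = [0.62, 0.74, 0.83, 0.88, 0.90]  # illustrative values

plt.figure()
plt.plot(epochs, accuracy, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.title("Training accuracy")
plt.savefig("accuracy.png")  # writes the figure to disk
```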

5.3 Pseudocodes

5.3.1 Pseudocode for Picture Uploading


Input will be the Old photo and Recent photo of a Criminals or Children.
Step 1: Click on upload
Step 2: Read the image.
load1 = Image.open(fileName1)
Step 3: Display the image on the window

5.3.2 Pseudocode for Converting RGB image to Gray

In this section the RGB image is converted to a gray image because it is easier to
perform operations such as Haar cascade face detection and Local Binary Patterns.
Step 1: Converting RGB to Gray
Step 2: Show the window to display the image
Step 3: Display the image in the window
Step 4: Multiply each colour plane by a weighting coefficient


Step 5: Gray_pixel = (R_plane * R_weight) + (G_plane * G_weight) + (B_plane * B_weight);
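Step 5 can be sketched with NumPy using the standard luminance weights (0.299, 0.587, 0.114) as the per-plane coefficients; the pixel values are illustrative:

```python
import numpy as np

# A 2*2 colour image: one (R, G, B) triple per pixel (values illustrative)
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.float64)

# Weighted sum of the R, G and B planes gives the grayscale image
weights = np.array([0.299, 0.587, 0.114])
gray = rgb @ weights  # same as R*0.299 + G*0.587 + B*0.114

print(np.round(gray).astype(int))
# [[ 76 150]
#  [ 29 255]]
```

Pure red, green and blue map to different gray levels (76, 150, 29) because the eye is most sensitive to green, and white maps to 255 since the weights sum to 1.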

5.3.3 Pseudocode for Face Detection using Haar Cascade

In this section the face in greyscale image is detected and cropped. This technique
enables accurate and efficient identification of faces in an image, allowing for more effective
and reliable face recognition.
Step 1: Load the pre-trained face detection classifier.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
Step 2: Load the image in grayscale.
img = cv2.imread('grey_image.jpg', cv2.IMREAD_GRAYSCALE)
Step 3: Detect faces in the image.
faces = face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
Step 4: Crop the face from the original image.
Step 5: Save the cropped face to a new file.
cv2.imwrite('output_image.jpg', face_img)

5.3.4 Pseudocode for Feature Extraction using Local Binary Pattern.

LBP (Local Binary Patterns) is used for feature extraction in face recognition because
it is a simple, efficient, and effective way to describe the local texture information in an image.
LBP is particularly suitable for describing facial texture patterns, which are important for face
recognition.
Step 1: Load the greyscale face image.
Step 2: Divide the grayscale image into small regions.
Step 3: For each centre pixel in the region, compare its intensity with each of its neighbours.
If the intensity of the neighbour is greater than or equal to that of the centre pixel, assign
it a value of 1; otherwise, assign it a value of 0.
Step 4: Repeat step 3 for all pixels in the region and compute a histogram of the resulting
LBP codes.
Step 5: Concatenate the histograms of all regions to obtain the final feature vector for the input
image.
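The per-pixel comparison in the steps above can be sketched for a single 3*3 neighbourhood (a simplified LBP that reads the 8 neighbours clockwise from the top-left; the intensities are example values):

```python
import numpy as np

# A 3*3 grayscale neighbourhood; the centre pixel has intensity 60
patch = np.array([[70, 40, 90],
                  [30, 60, 80],
                  [20, 50, 65]])

center = patch[1, 1]
# Neighbours read clockwise, starting from the top-left corner
neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
              patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]

# 1 where the neighbour is >= the centre, 0 otherwise
bits = [1 if n >= center else 0 for n in neighbours]
lbp_code = int("".join(map(str, bits)), 2)
print(bits, lbp_code)  # [1, 0, 1, 1, 1, 0, 0, 0] 184
```

Repeating this for every pixel and histogramming the resulting codes per region yields the feature vector described above.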

5.3.5 Pseudocode for Classification using CNN.

Face recognition uses the VGGFace2 model. The model is first loaded and then trained on a
training set. A new image is then loaded, and the trained model is used to predict the class label

of the face in the image. Finally, the predicted class label is displayed.

Step 1: Load the VGGFace2 model.


Step 2: Train the model on the training set.
Step 3: Load a new image for face recognition.
Step 4: Predict the class label using the trained model.
Step 5: Display the predicted class label of the new image.

5.4 Summary

This chapter describes the implementation of the Face detection and Recognition using various
architectures such as LBP, CNN. Implementation requirement is deliberated in Section 5.1,
Section 5.2 briefs about the programming language selected. Section 5.3 describes the
pseudocode for Face Detection and Recognition.


CHAPTER 6
SYSTEM TESTING

Testing is a significant phase in the process or application development life cycle.
Testing is the final phase, where the application is tested against the expected outcomes.
Testing of the system is done to identify faults or missing prerequisites. Therefore, testing
plays a vital role in quality assurance and in confirming the reliability of the software.
Software testing is essential for correcting errors and improving the quality of the software
system. The software testing process starts once the program is written and the
documentation and related data structures are designed. Without proper testing, or with
incomplete testing, the program or the project is said to be incomplete.

Throughout the testing phase, the procedure is run against a group of test cases, and the
outcome of the procedure for each test case is evaluated to identify whether the program is
executing as expected. Faults found during testing are corrected through the testing steps,
and the modifications are recorded for future reference. Some of the significant aims of
system testing are:
• To confirm the quality of the project.
• To discover and eradicate any residual errors from prior stages.
• To validate the software as a solution to the original problem.
• To provide effective reliability of the system.

6.1 Test Procedures


Test cases are the key aspect for the success of any system. A test case is a document
which has a set of test data, preconditions, expected results and postconditions, designed for
a specific test scenario in order to validate compliance against a specific requirement.
Performance of a system is based on the cases written for each and every module of the system.
Test cases are critical for the successful performance of the system. If the predicted
outcome does not match the real outcome, then an error log is displayed. There are two
elementary forms of test cases, namely

• Formal test case


• Informal test case


6.1.1 Formal Test Case

There must be at least two test cases such as non-positive and non-negative test for each
requirement of an application in order to completely test and satisfy the requirement. If the
requirement has sub-requirement then each of the sub-requirement needs to have at least two
test cases.

6.1.2 Informal Test Case

For applications or systems without formal requirements, the user can write test cases
based on the accepted normal behaviour of programs of a similar class. In some situations,
test cases are not written at all; however, the activities and the outcomes are reported after
the tests have been run.

6.2 Unit Testing

Unit testing is the mechanism where each individual module of the project is tested. It can
also be called module-level testing, as the project is tested one module at a time. Using the
module-level design description as a guide, significant control paths are tested to discover
faults within the boundary of each module.

Table 6.1: Test case for loading the image of Criminal or Children.

Test Case Sl. No 1

Test Name User selects image of Children or Criminals.

Test Feature Checks whether the image selected are loaded or not.

Output Expected The image loaded.

Output Obtained Loaded facial image which is used for the processing, shown in Snapshot 7.3.
Result Successful.

The table 6.1 shows the successful test case for loading the image of Criminals or Children that
is selected by the user to do the processing.


Table 6.2: Test case for grayscale image.

Test Case Sl. No 2

Test Name Greyscale image.

Test Feature RGB image is converted to grayscale image.

Output Expected Grayscale image.

Output Obtained RGB to grayscale image conversion is shown in Snapshot 7.4 and Snapshot 7.5.
Result Successful.

The RGB image is converted to grayscale image and the successful test case for the grayscale
image as shown in the Table 6.2, the gray level represents the brightness of a pixel. Here the
RGB image is converted to grayscale image because it simply reduces complexity from a 3D
pixel value (R, G, B) to a 1D value. Grayscale images can be the result of measuring the
intensity of light at each pixel according to a particular weighted combination of frequencies.

Table 6.3: Test case for Face Detection.

Test Case Sl. No 3

Test Name Face Detection.

Test Feature Detecting face in the image.

Output Expected Detected Face image.

Output Obtained Face detection is done.

Result Successful.

The Table 6.3 shows the successful test case for Face Detection. Face Detection is the process
detecting the Faces in an image.
This technique enables accurate and efficient identification of faces in an image, allowing for
more effective and reliable face recognition.


Table 6.4: Test case for Main GUI.

Test Case Sl. No 4

Test Name Main GUI screen.

Test Feature It shows three buttons to upload Old Photo, New Photo
and Analyze button.
Output Expected Main GUI screen with three options.

Output Obtained Main GUI screen with three options is shown in Snapshot 7.1.
Result Successful.

The Table 6.4 shows the successful test case for the main GUI when code is executed. In this
the process are selected for the execution of the proposed method. Here it shows three options
1. Upload Old Image, 2. Upload Recent Image and 3. Analyze. When the upload image button
is clicked, a file explorer popup appears, asking us to select the jpg file. When the submit button
is clicked, the system checks for the presence of a valid jpg file and classifies it.

Table 6.5: Test case for Upload Old and Recent image button.

Test Case Sl. No 5

Test Name Upload button functionality check.

Test Feature Uploads the image to Tkinter window.

Output Expected Uploaded image on the window.

Output Obtained Image appearing on window is shown in Snapshot 7.2

Result Successful.

The Table 6.5 shows the successful test case for upload. This button, when clicked pops a file
explorer window, where the user can browse through the file system to select the required
image which has to be classified. On clicking this button and selecting the required image, the
image appears on the GUI.


Table 6.6: Test case for Analyze button.

Test Case Sl. No 6

Test Name Analyzing button functionality check.

Test Feature Analyzing the image for classification.

Output Expected In the window, the details of the Missing Children or Criminals are displayed.
Output Obtained Result appearing on window is shown in Snapshot 7.3.

Result Successful.

The Table 6.6 shows the successful test case for the Analyze button. When the user clicks this
button, it first checks whether an image has been uploaded to the window. Once checked, it
runs the image file through the classifier and proceeds to display the result window.

6.3 Summary

This chapter presents system testing in section 6.1, which consists of unit test cases for the
various modules of the Face Recognition system. Section 6.2 gives a complete view of the
testing which includes Test Name, Test Feature, Output Expected, Output Obtained and Result.


CHAPTER 7
RESULTS AND DISCUSSION

The programming language used to design the proposed method is Python. By using
relevant images of criminals and children, classified using CNN architectures, we obtain
their details, with a description as the output. The results obtained in each step are
demonstrated in the following snapshots.

7.1 Experimental Results

7.1.1 Snapshot of Home Page

The below Snapshot 7.1 shows the home page.

Snapshot 7.1: Snapshot of Home page.

The Snapshot 7.1 shows the snapshot of home page which consists of a header which reads
‘Missing Criminal and Children Tracking System’ and two buttons which are labeled ‘Old
Photo’ and ‘Recent Photo’ respectively.

7.1.2 Snapshot of File Explorer on Clicking ‘Old Photo’ and ‘Recent photo’ button

The below Snapshot 7.2 shows the snapshot of the file explorer that pops up.


Snapshot 7.2: Snapshot of the file explorer on clicking the Old or Recent Photo button.

The Snapshot 7.2 shows the snapshot of the file explorer window that pops up when the old or
recent photo button is clicked. The user may select the image for further process.

7.1.3 Snapshot after uploading image

The below Snapshot 7.3 shows the snapshot of the same window after uploading the image.

Snapshot 7.3: Snapshot of the image that appears on the window.


The Snapshot 7.3 shows the snapshot of the window after uploading the image file to it for
classification of criminal or child. The page displays the image right next to the Old and Recent
photo buttons.

7.1.4 Snapshot of RGB to Greyscale conversion.

The below Snapshot 7.4 and 7.5 shows the snapshot of RGB to Greyscale Conversion of the
image.

Snapshot 7.4: Snapshot of Greyscale image of Old Photo.

Snapshot 7.5: Snapshot of Greyscale image of Recent Photo.


The Snapshot 7.4 and 7.5 shows the snapshot of the window after greyscale conversion of both
Old and Recent Photo.

7.1.5 Snapshot of Criminal details.

The below Snapshot 7.6 shows the snapshot of the details of a recognized criminal.

Snapshot 7.6: Snapshot of a Criminal details.

The Snapshot 7.6 shows the snapshot of the window after getting a status of matched
criminal face by clicking the Analyze button, with complete details of that criminal such as
Name, Gender, Crime, Punishment, Wanted by and Nationality.

7.1.6 Snapshot of Missing Children details.

The below Snapshot 7.7 shows the snapshot of the details of a recognized child.


Snapshot 7.7: Snapshot of a Missing Child details.

The Snapshot 7.7 shows the snapshot of the window after getting a status of matched
missing child face by clicking the Analyze button, with complete details of that child such as
Name, Gender, Missing Year, Languages Spoken, Missing Place and Guardian Contact.

7.1.7 Snapshot of Matching Face with no records.

The below Snapshot 7.8 shows the snapshot of a recognized face with no details.

Snapshot 7.8: Snapshot of a Matching face with no records.


The Snapshot 7.8 shows the snapshot of the window after getting a status of matched face by
clicking the Analyze button, but no records for this face are found, so we can conclude that
the person is neither a Criminal nor a Missing Child.

7.1.8 Snapshot of Face with no match.

Snapshot 7.9 below shows a pair of faces that do not match.

Snapshot 7.9: Snapshot of a face with no match.

Snapshot 7.9 shows the window after the faces fail to match, displaying the status
Face Not Matched.

7.1.9 Snapshot of image with no face.

Snapshot 7.10: Snapshot of an image with no face.


Snapshot 7.10 shows the window after no face is detected, meaning the uploaded
photos contain no face, and the status Face Not Detected is displayed on the
window.
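The outcomes described in Sections 7.1.5 through 7.1.9 can be sketched as a simple decision routine. The record fields, identifiers and databases below are hypothetical stand-ins for the project's actual data, shown only to make the four statuses concrete:

```python
# Minimal sketch of the Analyze-button status logic: no face, no match,
# matched criminal, matched missing child, or matched face with no records.

def analyze(face_detected, match_id, criminal_db, missing_db):
    """Return the status string shown on the window."""
    if not face_detected:
        return "Face Not Detected"
    if match_id is None:
        return "Face Not Matched"
    if match_id in criminal_db:
        return f"Criminal: {criminal_db[match_id]['Name']}"
    if match_id in missing_db:
        return f"Missing Child: {missing_db[match_id]['Name']}"
    return "Matched, but no records found"

# Illustrative record stores.
criminals = {"C01": {"Name": "John Doe", "Crime": "Theft"}}
missing = {"M01": {"Name": "Jane Doe", "Missing Year": 2021}}

print(analyze(True, "C01", criminals, missing))   # Criminal: John Doe
print(analyze(True, "X99", criminals, missing))   # Matched, but no records found
print(analyze(False, None, criminals, missing))   # Face Not Detected
```

In the actual system the match identifier would come from the face recognition step and the details would be fetched from the stored Criminal and Missing Children records.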

7.2 Graphical Inferences

Before drawing a written conclusion, and to understand the results seen in the
implementation of the CNN architecture VGGFace2, let us look at the accuracy and
loss graphs generated by the model, from which we derive our inferences.
Accuracy Graph

It is a plot of accuracy on the y-axis versus epoch on the x-axis, with curves for
both training and test data. For a better model, accuracy should increase with the
number of epochs.

Loss Graph

It is a plot of loss, the error between the model's predictions and the true labels,
on the y-axis versus epoch on the x-axis, with curves for both training and test
data. For a better model, loss should decrease with the number of epochs.
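Graphs of this kind can be produced from a training history with matplotlib. The history values below are made up for illustration and are not the project's measurements:

```python
# Plot per-epoch accuracy and loss curves (training solid, test dashed),
# in the style of the figures that follow.
import matplotlib
matplotlib.use("Agg")  # render to files without needing a display
import matplotlib.pyplot as plt

epochs = list(range(1, 11))
history = {  # illustrative numbers only
    "accuracy":     [0.55, 0.68, 0.75, 0.80, 0.84, 0.87, 0.89, 0.90, 0.91, 0.92],
    "val_accuracy": [0.50, 0.63, 0.71, 0.77, 0.81, 0.84, 0.86, 0.88, 0.89, 0.90],
    "loss":         [1.20, 0.90, 0.70, 0.55, 0.45, 0.38, 0.33, 0.29, 0.26, 0.24],
    "val_loss":     [1.30, 1.00, 0.80, 0.63, 0.52, 0.45, 0.40, 0.36, 0.33, 0.31],
}

for metric, fname in (("accuracy", "accuracy_graph.png"),
                      ("loss", "loss_graph.png")):
    plt.figure()
    plt.plot(epochs, history[metric], "-", label=f"training {metric}")
    plt.plot(epochs, history["val_" + metric], "--", label=f"test {metric}")
    plt.xlabel("epoch")
    plt.ylabel(metric)
    plt.legend()
    plt.savefig(fname)
    plt.close()
```

With a Keras model, the same dictionary shape is available as `model.fit(...).history`, so the plotting loop carries over directly.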

Graphs for the VGGFace2 Model

Figure 7.1: VGGFace2 accuracy graph.

Figure 7.1 shows the accuracy during training (solid line) and testing (dashed line).


Figure 7.2: VGGFace2 loss graph.

Figure 7.2 shows the loss during training (solid line) and testing (dashed line).

Inference

From the graphs in Figures 7.1 and 7.2, we see that the test accuracy tracks the
training accuracy closely at around the 90% mark, with correspondingly low loss,
as this model is one of the most powerful for face recognition.

7.3 Summary

This chapter presented the experimental results: Section 7.1 consists of snapshots of
the home page, the functionality of the Upload and Submit buttons, the output page,
and the Missing Children and Criminals information pages, while Section 7.2 presents
the graphical inferences made.


CHAPTER 8

CONCLUSION AND FUTURE ENHANCEMENT

8.1 Conclusion

Facial recognition technology has been used to help identify and locate missing persons,
including missing children and criminals. One popular approach for facial recognition is to use
deep learning models like VGGFace2, which is a deep convolutional neural network designed
for face recognition.
A simple but powerful method based on the CNN architecture VGGFace2 is used to
detect Missing Criminals and Children. In this project we focus on different methods
for the prediction and classification of Criminals and Children, and in the proposed
methodology we discuss different image processing techniques. Available algorithms
can be modified to obtain good accuracy while classifying images. Accurate and early
identification of these Criminals and Children will help the police department and
various NGOs take timely precautions and save children's lives. The method gives the
complete details of the Missing Criminal or Child, with the class to which the image
belongs as the result. Once a missing child has been identified through this model,
NGOs can assist in reuniting them with their families by providing support and
counselling services.
By utilizing this system, police departments can potentially identify and track
individuals who may pose a threat to public safety, such as wanted criminals.
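As a rough sketch of how such a recognizer decides a match, a VGGFace2-style model maps each face image to an embedding vector, and two faces are treated as the same person when the similarity of their embeddings crosses a threshold. The three-dimensional embeddings and the 0.5 threshold below are illustrative stand-ins, not the project's actual values (real face embeddings are typically hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb_a, emb_b, threshold=0.5):
    """Declare a match when the embeddings are similar enough."""
    return cosine_similarity(emb_a, emb_b) > threshold

old_photo = np.array([0.9, 0.1, 0.4])    # embedding of the old photo
recent    = np.array([0.8, 0.2, 0.5])    # embedding of the recent photo
stranger  = np.array([-0.7, 0.6, -0.2])  # embedding of an unrelated face

print(is_same_person(old_photo, recent))    # True
print(is_same_person(old_photo, stranger))  # False
```

Because the embedding of a person's face stays close across ageing and lighting changes, an old and a recent photo of the same individual land near each other in embedding space, which is what lets the system match a child missing for years against a current photograph.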

8.2 Future Enhancement

While VGGFace2 is already a powerful facial recognition tool, it can be further
enhanced by incorporating additional facial features beyond the basic facial landmarks
(e.g., eyes, nose, mouth). This could include the use of ear shape, forehead shape, and
other unique facial features that help to differentiate individuals. Combining the face
recognition system with other databases, such as criminal records, social media
profiles, and other open-source data, can provide more comprehensive and accurate
information about missing individuals, increasing the likelihood of finding them.
Difficult scenarios, such as low lighting or obscured faces, make face recognition
more challenging; developing techniques to improve recognition in these situations
would enhance the effectiveness of the system.


References

[1] Dr. S. Matilda and S. Ayyappan, "Criminals and Missing Children Identification
Using Face Recognition and Web Scraping," The 2nd International Conference on
Applied Science and Technology 2020 (ICAST'20).
[2] Mrudula Nimbarte and Kishor Bhoyar, "Age Invariant Face Recognition using
Convolutional Neural Network," Third International Conference on Computing
Methodologies and Communication (ICCMC 2018).
[3] Ankit Gupta, Deepika Punj and Anuradha Pillai, "Face Recognition System Based
on Convolutional Neural Network (CNN) for Criminal Identification," 2022 IEEE
International Conference on Information and Automation for Sustainability (ICIAS).
[4] Gargi Tela, Sakshi Hiwarale, Shruti Dhave and Dhanashree Rathi, "CNN Based
Criminal Identification," International Journal of Advanced Research in Science,
Communication and Technology (IJARSCT), vol. 2, no. 2, May 2022.
[5] T. Sanjay and W. Deva Priya, "Criminal Identification System to Improve Accuracy
of Face Recognition using Innovative CNN in Comparison with HAAR Cascade," 2019.
[6] K. Rasanayagam, S. D. D. Kumarasiri, W. A. D. Tharuka, N. Samaranayake,
Samarasinghe and P. Siriwardana, "CIS: An Automated Criminal Identification
System," 2018 IEEE International Conference on Information and Automation for
Sustainability (ICIAfS).
[7] R. He, B. C. Lovell, R. Chellappa, A. K. Jain, and Z. Sun, "Editorial: Special issue
on ubiquitous biometrics," Pattern Recognition, vol. 66, pp. 1-3, 2017.
[8] T. H. Le, "Applying Artificial Neural Networks for Face Recognition," Hindawi
Publishing Corporation Advances in Artificial Neural Systems, vol. 2011, pp. 1-16, 2011.
[9] J. M. Guo, et al., "Human Face Age Estimation with Adaptive Hybrid Features,"
International Conference on System Science and Engineering, 2011.
[10] Y. Fu, et al., "Age Synthesis and Estimation via Faces: A Survey," IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 11,
pp. 1955-1976, 2010.
[11] D. Hunter and B. Tiddeman, "Facial Ageing," Cambridge University Press, 2012.
[12] J. Suo, et al., "A Concatenational Graph Evolution Aging Model," IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11,
pp. 2083-2096, 2012.
[13] A. Lanitis, et al., "Toward Automatic Simulation of Aging Effects on Face Images,"
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 4,
pp. 442-455, 2002.
