
A Machine Learning Powered Mobile App to Find Missing People


Submitted By:

Hassaan Ahmad (17PWCSE-1589)

Hamza Azmat (17PWCSE-1610)

Hafiz Gohar Zaman (17PWCSE-1616)

Afaq Ahmad (17PWCSE-1598)

Under the Supervision of:

Engr. Naina Said

Department of Computer Systems Engineering

University of Engineering & Technology, Peshawar

Introduction:
An estimated 8 million children go missing around the world each year, according to a 2012 report in The Wall Street Journal. That is approximately 21,918 children each day. While some of them are abducted by criminals, a large share of these cases result from the negligence or carelessness of parents/guardians, and some happen by accident.

As soon as you notice your child is missing, panic sets in and you do not know what to do. Usually you call the police to file an FIR (First Information Report). In Pakistan, another common way is to head to the mosque and announce that the child abc of xyz is missing. These traditional methods sometimes help, but it takes a long time to actually find the missing person, and the outreach is very limited.

A better solution used nowadays is social media (Facebook, WhatsApp, etc.). Although the outreach of these platforms is huge, there is very little certainty that a post will reach the specific person who has found your child.

The solution we propose here can have outreach similar to these social media platforms, with much higher certainty of connecting the guardian to the person who found the missing person. We plan to build a mobile application with Machine Learning at its core: the guardian registers a case with a photo of the missing person, and the person who found the child does the same by taking a picture and entering some credentials. What is new here is that you just upload the picture and wait; as soon as the other person uploads a photo, both of you receive all the information you need to contact each other. It seems easy, doesn't it? But what makes it easy for the user makes it hard for the engineers.

Figure 1: Intro Example

Problem Statement:
For this application to connect two people, it needs to recognize the person in the photograph and verify whether it matches any of the persons reported missing in the app. The Machine Learning algorithm should be capable of learning the geometry and features of a face from just one provided image.

Motivation:
The real motivation behind this project is to solve this huge problem and to minimize the panic of going through such a stressful situation. The application will be able to recognize a face from a given image irrespective of illumination, pose, emotion and age. The goal is to achieve industry-level face recognition accuracy with just a single image of a person.

Objectives:

❖ To detect a face in a given image.
❖ To implement a face recognition system.
❖ The implemented system should be able to recognize a person from just a single face image.
❖ The implementation should be able to match two faces irrespective of lighting conditions, pose and emotion.
❖ The app must classify the missing person correctly.
❖ To deploy the Machine Learning model in a mobile application.
❖ To integrate a database with the mobile app to store structured (credentials, etc.) and unstructured (images) data.
❖ A user-friendly interface, so that the application is easy for anyone to use.

Methodology:
A digital photograph in '.jpg' or '.png' format will be fed to the model. The photo will then be processed through the following phases.

Face detection: An ML model will be used to detect the face in the image and return a bounding box around it. We will use a pretrained model for this task.
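
As an illustration of this phase, here is a minimal sketch using OpenCV's bundled Haar-cascade detector as a stand-in for whichever pretrained detector we finally adopt (the function and crop logic below are illustrative, not a fixed design):

import cv2

def detect_face(image_path):
    # Haar cascade shipped with OpenCV; a placeholder for the final pretrained detector.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)  # reads .jpg / .png
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None  # no face found in the image
    x, y, w, h = boxes[0]  # take the first detected face
    return img[y:y + h, x:x + w], (x, y, w, h)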

Feature Extraction: Feature extraction is the most important phase in this application, as each face must have its own attributes. We will train our model on the Neural Network architecture Inception-ResNet [1] (we may change the network architecture in the future depending on performance and accuracy). The model will take the input image cropped by the face detection phase and output a feature embedding/vector to pass on to the next stage.
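
As a rough sketch of this stage, the following uses the stock InceptionResNetV2 from keras.applications as a placeholder backbone; the 160x160 input size and the 128-dimensional embedding are working assumptions, not final design choices:

import tensorflow as tf

EMBED_DIM = 128  # assumed embedding size; subject to tuning

def build_embedding_model(input_shape=(160, 160, 3)):
    # Pretrained backbone with global average pooling on top.
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    x = tf.keras.layers.Dense(EMBED_DIM)(backbone.output)
    # L2 normalisation puts all embeddings on the unit hypersphere,
    # which makes distance thresholds comparable across faces.
    outputs = tf.keras.layers.Lambda(
        lambda v: tf.math.l2_normalize(v, axis=1))(x)
    return tf.keras.Model(backbone.input, outputs)

# Usage: given `face`, a (160, 160, 3) crop from the detection phase,
# model.predict(face[None, ...])[0] yields a 128-dimensional feature vector.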

Classification: A Nearest Neighbour (NN) approach will be used to classify the face using the feature vectors extracted in the previous phase. Multiple distance metrics are available (e.g. Euclidean distance, cosine distance); we will use whichever works best for face matching.
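
For illustration, both candidate metrics on a pair of feature vectors (a small sketch; a and b are embeddings from the previous phase):

import numpy as np

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# For L2-normalised embeddings the two metrics are monotonically related,
# ||a - b||^2 = 2 - 2*cos(a, b), so they rank candidate faces identically;
# the choice mainly matters if embeddings are left unnormalised.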

Design Phase:
A traditional supervised Machine Learning approach cannot be used to solve this problem, as it requires many samples to learn a class, and in this case we have only one image of a person. Instead, we will train a network to output a D-dimensional feature vector and then find its nearest neighbour: if the training and algorithm are done right, feature vectors of the same face in two different images will have a small distance between them, while a large distance means they are not the same person.

To train the Deep Learning model we will use the GoogLeNet [1] network architecture. The training set we will use is VGGFace2 [2]. This dataset contains 3.3M+ images of 9,000+ unique identities collected from Google Images. The dataset has a large variety in pose, age, illumination and emotion, and a much more complex distribution than any other publicly available face dataset.
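
The proposal leaves the exact training objective open; one common choice for learning such distance-preserving embeddings from identity labels is the triplet loss (the approach popularised by FaceNet). A minimal sketch under that assumption:

import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor and positive are embeddings of the same identity,
    # negative is an embedding of a different identity.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)
    # Push each negative at least `margin` further away than the positive.
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))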

Figure 2: Example Subset of VGGFace2

The nearest neighbour (NN) algorithm is chosen because classification happens at runtime, as new cases keep being registered. In this method, the face with the least distance from the target face is said to belong to the same class as the target face.
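
A sketch of this runtime matching step (the gallery layout and the threshold value are assumptions to be tuned on validation data):

import numpy as np

MATCH_THRESHOLD = 0.8  # assumed cut-off; to be tuned on validation pairs

def find_match(query_embedding, gallery):
    # gallery: list of (case_id, embedding) for every reported missing person.
    best_id, best_dist = None, float("inf")
    for case_id, emb in gallery:
        dist = np.linalg.norm(query_embedding - emb)
        if dist < best_dist:
            best_id, best_dist = case_id, dist
    # Reject the match if even the nearest stored face is too far away,
    # so a newly found child with no registered case does not get a false match.
    return best_id if best_dist < MATCH_THRESHOLD else None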

Implementation Phase:

The tools we will use to implement the design are the following.
● Python
● Pycharm
● Anaconda
● Tensorflow
● Keras
● OpenCV
● Flutter SDK
● Dart
● MySQL
● Adobe Xd (for UI/UX)

Validation and Testing Phase:


For validation and testing of the model we will use the Labeled Faces in the Wild (LFW) [3] dataset, which is commonly used in industry for benchmarking face recognition. The dataset consists of 13,233 images of 5,749 identities. In the testing phase we will make random pairs of same-person and different-person images and measure how our model performs on these pairs.
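
A sketch of the pair-verification test we have in mind (pair construction and the distance threshold are our assumptions, not part of the LFW protocol as such):

import numpy as np

def pair_accuracy(pairs, embed, threshold=0.8):
    # pairs: list of (image_a, image_b, is_same_person) tuples.
    # embed: function mapping a face image to its feature vector.
    correct = 0
    for img_a, img_b, is_same in pairs:
        dist = np.linalg.norm(embed(img_a) - embed(img_b))
        correct += int((dist < threshold) == is_same)
    return correct / len(pairs)

# Sweeping the threshold over the validation pairs and keeping the best value
# gives the verification accuracy usually reported on LFW.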

Figure 3: Example Pairs of LFW Dataset

Block Diagram:

Conclusion:
This project will hopefully implement all these complex functions with industry level
accuracy and we hope that this mobile application will help a lot of people and address this
problem successfully. This idea will have a huge impact on society as it is for wellness and a step
towards a beautiful, connected and friendly world where everyone helps each other in their
difficult times.

Gantt Chart:

Work/Schedule Plan (November 2020 to June 2021):
● Literature survey
● Frontend Development, Model Training on VGGFace2
● Evaluation & Testing on LFW
● Model Deployment
● Backend Integration
● Thesis Writing

References:
[1] ​ Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir
Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich (2014). Going Deeper with
Convolutions. Google Inc.
[2] ​Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi and Andrew Zisserman (2018).
VGGFace2: A dataset for recognising faces across pose and age. Visual Geometry Group,
University of Oxford.
[3] ​G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller (2007). Labeled faces in the wild:
A database for studying face recognition in unconstrained environments. Technical report.
