
FACE MASK DETECTION WITH FINE ALLOCATION

Sachin Ramachandra Achari, Sachin Subrahmanya Shet, Vinay Kumar H S and Vinod
Guided By: Spoorthi P A, Assistant Professor
Electronics and Communication Engineering
Dr. Ambedkar Institute of Technology
Bengaluru - 560056
facemaskdetection883@gmail.com

Abstract
This project is based on computer vision and deep learning implemented in Python. The system continuously captures images and checks whether a face mask is present. If a person without a face mask is detected, the system searches the database for the face, and if the person is recognized, it sends an email to the specified email address. Using Python programming and machine learning concepts, we achieved the required result.
The face is one of the easiest ways to distinguish the identity of an individual. Face recognition is a personal identification system that uses the personal characteristics of a person to establish their identity. The human face recognition procedure basically consists of two phases: face detection, a process that takes place very rapidly in humans except when the object is located at a short distance, followed by recognition, which identifies a face as a specific individual.

1. Introduction

The main aim of the project is to design a face mask detection and fine allocation system. It is a Python-based software project which allocates a fine when a person without a face mask is detected.
The COVID-19 pandemic has changed so much in the world. Most infectious diseases are spread through the air. As a preventive measure, it is necessary to wear a face mask outside or in crowded areas, and the WHO suggests the same. To create awareness among people, the Government has taken some much-needed steps. One of those is fining a person who does not wear a face mask in public areas. Fining is not only a way to generate revenue for the government; it is also a medium for creating awareness. [5]
So, we created a software model to detect whether a person has a face mask on his face or not. If a person without a face mask is detected, the system recognizes the face of the person and generates a mail containing the fine amount and the time at which the person was detected.
The face mask detection process starts with image acquisition using a camera. The imaging device and the modules are developed in Python using OpenCV programming. The captured images are pre-processed. The pre-processing step includes image resizing, normalization, and image enhancement through several filters. The pre-processed images are used to identify the face and segment it. From the segmented face, the mask region is segmented using the trained set of mask images in the database. The detection of the mask is done using a convolutional neural network (CNN). The captured images are stored on the server. The prediction model identifies the mask and provides an alert to the system. The captured image of a person without a face mask is sent to the server and a mail alert is sent to the registered person. [6]
2. Related Work
In the year 2018, Suma S L implemented a real-time face recognition algorithm using the Local Binary Pattern Histogram (LBPH) and the Viola-Jones algorithm. The method consists of face detection and recognition: the Viola-Jones algorithm is applied for face detection, feature extraction is done by the LBPH technique, and a Euclidean distance classifier is used for face recognition. This work has a recognition rate of about 85%-95%. It can be further amended to perform well in all conditions, such as varying brightness, the case of twins, beards, and wearing goggles. [1]
In the year 2017, Li Cuimei implemented a human face detection algorithm using a Haar cascade classifier combined with three weak classifiers: a skin hue histogram, an eye detector, and a mouth detector. This yields a sufficiently high detection rate. The proposed method achieves a positive prediction value (PPV) of about 78.18%-98.01%. It can be amended to detect human faces of multiple races and to reduce the delay in detecting and recognizing faces among images of different people with variation in lighting and background conditions. [2]
In the year 2017, Souhail Guennouni implemented a face detection system comparing Haar cascade classifiers with edge orientation matching. The edge orientation matching algorithm and cascade classifiers combined with Haar-like feature selection are the two techniques used in this system. The algorithm produces better matching, but the detection speed is comparatively low. [3]
In the year 2015, Jiwen Lu proposed a face recognition system using learned Compact Binary Face Descriptors (CBFD). Face representation and recognition are implemented via the CBFD feature learning method, while coupled CBFD is used for heterogeneous face matching by minimizing the modality gap at the feature level. Compared with other binary code learning techniques, CBFD extracts compact and discriminative features and hence achieves a better recognition rate of about 93.80%. In this work, features are learned from only a single layer; the system could achieve better performance by learning hierarchical features in deep networks. [4]

3. Methodology

BLOCK DIAGRAM
[Figure: Block diagram of the proposed system]
The face mask detection process starts with image acquisition using a camera. The imaging device and the modules are developed in Python using OpenCV programming. The captured images are pre-processed. The pre-processing step includes image resizing, normalization, and image enhancement through several filters. The pre-processed images are used to identify the face and segment it. From the segmented face, the mask region is segmented using the trained set of mask images in the database. The detection of the mask is done using a convolutional neural network (CNN). The captured images are stored on the server. The prediction model identifies the mask and provides an alert to the system. The captured image of a person without a face mask is sent to the server and a mail alert is sent to the registered person. [7]
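As a rough illustration of the acquisition and pre-processing stage described above, the following sketch captures a frame with OpenCV, converts it to grayscale, resizes it, normalizes it, and applies a simple enhancement filter. The 224x224 target size and the choice of histogram equalization are our own illustrative assumptions, not details given in the paper.

import cv2
import numpy as np

# Acquire a single frame from the default webcam (device 0).
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    # Pre-processing: grayscale conversion, resizing and normalization.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (224, 224))            # fixed input size (assumed)
    normalized = resized.astype(np.float32) / 255.0   # scale pixel values to [0, 1]
    # Simple image enhancement: histogram equalization on the resized grayscale image.
    enhanced = cv2.equalizeHist(resized)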
FACE DETECTION
Detecting a face is a computer technology which lets us know the locations and sizes of human faces. This helps in obtaining the facial features and avoiding other objects and things. At present, human face perception is a major research area. It is basically about detecting a human face through some trained features. Here, face detection is the preliminary step for many other applications such as face recognition, video surveillance, etc.

FACE DETECTION USING HAAR CASCADE METHOD
Object detection using Haar feature-based cascade classifiers is an effective method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object Detection using a Boosted Cascade of Simple Features". It is a machine learning based approach in which a cascade function is trained from a large number of positive and negative images. It is then used to detect objects in other images.
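Before going into how such a cascade is trained, here is a minimal sketch of applying a pretrained frontal-face Haar cascade with OpenCV. The cascade file name is the standard one shipped with OpenCV; the image file name and parameter values are illustrative assumptions.

import cv2

# Load OpenCV's pretrained frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("person.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales and return bounding boxes of detected faces.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Draw a rectangle around every detected face.
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)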
Here we will work with face detection. Initially, the algorithm needs a lot of positive images (images of faces) and negative images (images without faces) to train the classifier. Then we need to extract features from them. For this, the Haar features shown in the figure below are used; they are just like our convolutional kernels. Each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.
[Figure: Haar features]
Now all possible sizes and locations of each kernel are used to calculate plenty of features. For each feature calculation, we need to find the sum of the pixels under the white and black rectangles. To solve this, they introduced the integral image. It simplifies the calculation of the sum of the pixels, however large the number of pixels may be, to an operation involving just four pixels.
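To make the "just four pixels" remark concrete, here is a small sketch, written by us purely for illustration, that builds an integral image with NumPy and recovers a rectangle sum from its four corner values:

import numpy as np

img = np.random.randint(0, 256, size=(24, 24)).astype(np.int64)

# Integral image: ii[y, x] = sum of all pixels above and to the left of (y, x).
ii = img.cumsum(axis=0).cumsum(axis=1)
# Pad with a leading row and column of zeros so the corner lookups stay simple.
ii = np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    # Sum over a rectangle using only four values of the integral image.
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

# The four-corner lookup matches a direct summation over the raw pixels.
assert rect_sum(ii, 5, 3, 10, 8) == img[5:15, 3:11].sum()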
But among all the features we calculate, most are irrelevant. For example, consider the figure below. The top row shows two good features. The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks. The second feature selected relies on the property that the eyes are darker than the bridge of the nose. But the same windows applied to the cheeks or any other place are irrelevant. So how do we select the best features out of the 160000+ features? This is achieved by Adaboost.
[Figure: The two best Haar features, selected by Adaboost, applied to a face image]
For this, we apply each and every feature on all the training images. For each feature, the best threshold which classifies the faces as positive or negative is found. Obviously, there will be errors or misclassifications. We select the features with the minimum error rate, which means they are the features that best separate the face and non-face images. (The process is not as simple as this. Each image is given an equal weight in the beginning. After each classification, the weights of misclassified images are increased. Then the same process is repeated, new error rates and new weights are calculated, and this continues until the required accuracy or error rate is achieved or the required number of features is found.)
The final classifier is a weighted sum of these weak classifiers. They are called weak because each one alone cannot classify the image, but together with the others it forms a strong classifier. The paper says even 200 features provide detection with 95% accuracy. Their final setup had around 6000 features. (Imagine a reduction from 160000+ features to 6000 features. That is a big gain.)
So now you take an image, take each 24x24 window, apply the 6000 features to it, and check whether it is a face or not.
In an image, most of the image region is non-face region. So it is a better idea to have a simple method to check whether a window is not a face region; if it is not, discard it in a single shot and do not process it again. Instead, focus on regions where there can be a face. This way, more time is spent checking possible face regions.
For this they introduced the concept of a Cascade of Classifiers. Instead of applying all 6000 features to a window, the features are grouped into different stages of classifiers and applied one by one. (Normally the first few stages contain a very small number of features.) If a window fails the first stage, it is discarded and the remaining features are not considered. If it passes, the second stage of features is applied and the process continues. A window which passes all stages is a face region.
The authors' detector had 6000+ features arranged in 38 stages, with 1, 10, 25, 25 and 50 features in the first five stages. (The two features in the figure above were actually obtained as the best two features from Adaboost.) According to the authors, on average only 10 of the 6000+ features are evaluated per sub-window.
So this is a simple, intuitive explanation of how Viola-Jones face detection works. Read the paper for more details. [8]
FLOW CHART
[Figure: Flow chart of the system]
Steps to be followed in the system:
1. Run the system.
2. Check for a face mask.
3. If a person without a face mask is detected, capture the image.
4. Check the database for the image.
5. If the image is identified in the database, send mail to the person using the API.

4. Implementation
We discuss the software part in this chapter. The system code is developed using Python programming and the web pages are designed using HTML.

CREATE DATASET
There are several methods to create datasets, such as online platforms (Teachable Machine) and different coding environments (MATLAB). We created ID-specific datasets using Python code.
We created a Python program named "create_data.py" to create the dataset. It captures images and stores them in a local path. This code uses the Haar cascade frontal face algorithm to locate the face.
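A minimal sketch of what such a dataset-creation script could look like. The folder layout (one folder per person ID), the sample count, and the prompt are our own assumptions for illustration, not details taken from the project's create_data.py.

import os
import cv2

person_id = input("Enter person ID: ")
out_dir = os.path.join("dataset", person_id)      # one folder per person (assumed layout)
os.makedirs(out_dir, exist_ok=True)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
count = 0
while count < 50:                                  # capture 50 face samples (assumed)
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = gray[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(out_dir, f"{count}.jpg"), face)
        count += 1
cap.release()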

SERVER RUN
We created a local server using the Flask API in Python, which runs on the specified local IP address. The server is divided into two parts: the server part and the function part.

RUN CODE
"run.py" is the Python code responsible for running the local server. In this code, the web page frameworks are rendered from the HTML code. The run code includes the function calls and several application codes. Rewriting of the database is done through this program. It also calls the function code to run the relevant function.
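A minimal sketch of such a Flask server. The route names, template names, and port are assumptions chosen to match the web pages described below, not code copied from the project's run.py.

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def home():
    # Render the "Home" page from its HTML template.
    return render_template("home.html")

@app.route("/dashboard")
def dashboard():
    # Render the "Dashboard" page showing detected violations.
    return render_template("dashboard.html")

if __name__ == "__main__":
    # Run the development server on the local IP address.
    app.run(host="0.0.0.0", port=5000)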

FUNCTION CODE
"functions.py" is the backbone program of "run.py": depending on the condition, the run code calls the function code. The function program has the capability of calculating time differences and sending mail.
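A sketch of the two helpers such a functions.py could contain, one calculating a time difference and one sending a mail alert. The SMTP server, credentials, and message fields are placeholders, not values from the project.

import smtplib
from datetime import datetime
from email.message import EmailMessage

def time_difference(start: datetime, end: datetime) -> float:
    # Return the elapsed time between two events, in seconds.
    return (end - start).total_seconds()

def send_fine_mail(to_address: str, fine_amount: int, detected_at: datetime) -> None:
    # Compose and send the fine notification mail over SMTP.
    msg = EmailMessage()
    msg["Subject"] = "Face mask violation fine"
    msg["From"] = "facemaskdetection883@gmail.com"
    msg["To"] = to_address
    msg.set_content(
        f"You were detected without a face mask at {detected_at}. "
        f"Fine amount: {fine_amount}.")
    # Placeholder SMTP settings and credentials.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login("facemaskdetection883@gmail.com", "app-password")
        server.send_message(msg)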

HTML CODE
The web pages are programmed using HTML. In our project we use two web pages, named "Home" and "Dashboard", which are written in two different files named "home.html" and "dashboard.html".

MAIN CODE
The main aim of our project is to detect whether a face mask is present on a person's face or not; if a person with no face mask is detected, a mail request is sent to the server. This part does exactly that. Both programs in this part are written in Python.
FACE MASK CODE
The main code of the system is named "facemask.py". It takes image input from the webcam/system camera, which is processed using the Haar cascade algorithm together with a TensorFlow model. If a person wearing a face mask is detected, the system displays "Mask" on the image screen itself. If a person with no mask is detected, the system sends a pass signal to the face recognition program.
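A hedged sketch of this step, combining a Haar cascade for face localization with a Keras/TensorFlow classifier for the mask/no-mask decision. The model file name, the 224x224 input size, and the single-output label convention are illustrative assumptions, not details from the project's facemask.py.

import cv2
import numpy as np
from tensorflow.keras.models import load_model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask_model = load_model("mask_detector.model")     # trained mask classifier (assumed file)

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224)) / 255.0
        prob_mask = mask_model.predict(face[np.newaxis, ...])[0][0]
        label = "Mask" if prob_mask > 0.5 else "No Mask"   # assumed single-output model
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Face Mask Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()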
FACE RECOGNITION CODE
"face_recognition.py" is the supporting code to "facemask.py", used to recognize the face using the Haar cascade algorithm in OpenCV. When the pass signal is received from the face mask program, it identifies the image and checks it against the database or pretrained database. If the person is identified, it sends a request signal to "run.py" for further operation.
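The paper does not spell out how the detected face is matched against the stored database. Purely as an illustrative sketch, one common OpenCV approach, used in the related work [1], is an LBPH recognizer trained on the dataset images; the folder layout mirrors the assumption made in the dataset sketch above.

import os
import cv2
import numpy as np

# Train an LBPH recognizer on the stored dataset (one folder per person ID, assumed layout).
# Note: cv2.face requires the opencv-contrib-python package.
recognizer = cv2.face.LBPHFaceRecognizer_create()
faces, labels = [], []
for person_id in os.listdir("dataset"):
    for name in os.listdir(os.path.join("dataset", person_id)):
        img = cv2.imread(os.path.join("dataset", person_id, name), cv2.IMREAD_GRAYSCALE)
        faces.append(img)
        labels.append(int(person_id))
recognizer.train(faces, np.array(labels))

def identify(face_gray):
    # Return the predicted person ID and a distance score (lower means a closer match).
    label, confidence = recognizer.predict(face_gray)
    return label, confidence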
5. Conclusions and Future Work
The approach presented here for face detection and tracking decreases the computation time while producing results with high accuracy. Tracking of a face in a video sequence is done using the Haar cascade algorithm for detecting facial features. It has been tested not only on recorded video sequences but also on live video from a webcam. Using this system, many security and surveillance systems can be developed and the required object can be traced easily. In the coming days these algorithms can be used to detect a particular object rather than faces.

FUTURE SCOPE
• By adding a hardware part, we can design a stand-alone system.
• A contactless fining system is more effective than traditional systems.
• Detecting face masks with OpenCV and Keras/TensorFlow is one of several COVID-related applications of computer vision that the system could be extended towards.
• By using Flask, we can implement an e-payment web app to collect the fine.
• As experts forecast a future with more pandemics, rising levels of air pollution, persisting authoritarian regimes and a projected increase in bushfires producing dangerous smoke, it is likely that mask-wearing will become the norm for at least a proportion of us.
• The proposed technique can be integrated into any high-resolution video surveillance device and is not limited to mask detection only.
• The model can be extended to detect facial landmarks with a face mask for biometric purposes.

References
[1] S. L. Suma, Sarika Raga, "Real Time Face Recognition of Human Faces by using LBPH and Viola-Jones Algorithm", International Journal of Scientific Research in Science and Engineering, Vol. 6, Issue 5, pp. 01-03, 2018.
[2] Li Cuimei, Qi Zhiliang, "Human face detection algorithm via Haar cascade classifier combined with three additional classifiers", IEEE International Conference on Electronic Measurement & Instruments, pp. 01-03, 2017.
[3] Kushsairy Kadir, Mohd Khairi Kamaruddin, Haidawati Nasir, Sairul I Safie, Zulkifli Abdul Kadir Bakti, "A comparative study between LBP and Haar-like features for Face Detection using OpenCV", 4th International Conference on Engineering Technology and Technopreneurship (ICE2T), 2014.
[4] Souhail Guennouni, Anass Mansouri, "Face Detection: Comparing Haar-like combined with Cascade Classifiers and Edge Orientation Matching", International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS), pp. 02-04, 2017.
[5] Social distancing, surveillance, and stronger health systems as keys to controlling COVID-19 pandemic, PAHO Director says - PAHO/WHO | Pan American Health Organization. (n.d.).
[6] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection", in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[7] M. Jiang, X. Fan, H. Yan, "RetinaMask: A Face Mask Detector", 2020.
[8] N. Dvornik, K. Shmelkov, J. Mairal, C. Schmid, "BlitzNet: A Real-Time Deep Network for Scene Understanding", in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, doi: 10.1109/ICCV.2017.447.
