
Project Report

On

CROWD SOCIAL DISTANCE MEASUREMENT AND MASK DETECTION

By

S.VIKNESH (RA1931241020119)

Submitted to the

DEPARTMENT OF COMPUTER APPLICATIONS (BCA)

Under the guidance of

Dr. S. JAYACHANDRAN, MCA, Ph.D.

Assistant Professor

Department of Computer Applications

Submitted in partial fulfillment of the requirement

for the award of the degree of

BACHELOR OF COMPUTER APPLICATIONS

SRM INSTITUTE OF SCIENCE AND TECHNOLOGY

Ramapuram, Chennai.

April 2022
COLLEGE OF SCIENCE & HUMANITIES
Ramapuram, Chennai.
__________________________________________________________________
Department of Computer Applications (BCA)

BONAFIDE CERTIFICATE

Certified that this project report titled “CROWD SOCIAL DISTANCE MEASUREMENT
AND MASK DETECTION” is the bonafide work of S.VIKNESH, who carried out the main project
work under my supervision.

Certified further that, to the best of my knowledge, the work reported herein does not form part
of any other project report on the basis of which a degree or award was conferred on an earlier occasion
on this or any other candidate.

Signature of Internal Guide Signature of Head of the Department

Signature of External Examiner


CROWD SOCIAL DISTANCE MEASUREMENT AND MASK DETECTION

ABSTRACT:

The worst situation faced by humanity in recent memory, COVID-19, has proliferated across more than
180 countries, with about 37,000,000 confirmed cases and 1,000,000 deaths worldwide as of October
2020. The absence of medical and strategic expertise is a colossal problem, and the lack of immunity
against the virus increases the risk of being affected by it. Since no vaccine is available, social
spacing and face covering are the primary precautionary methods apt for this situation. This study
proposes automating, with a deep learning framework, the monitoring of social distancing using
surveillance video footage and face mask detection in public and crowded places, as a mandatory rule
set for pandemic terms, using computer vision. The paper proposes a framework based on the YOLO
object detection model to separate the background from human beings, marking individuals with
bounding boxes and assigned identifications. Within the same framework, a trained module checks for
any unmasked individual. The automation will give useful data and understanding for evaluating the
current state of the pandemic; this data will help analyse the individuals who do not follow health
protocol norms.
ACKNOWLEDGEMENT

I extend my sincere gratitude to the Chancellor Dr. T.R. PACHAMUTHU and to the
Chairman Dr. R. SHIVAKUMAR of SRM Institute of Science and Technology, Ramapuram
and Trichy campuses, for providing me the opportunity to pursue the BCA degree at this
University.

I express my sincere gratitude to Dr. C. SUNDAR, Principal, College of Science &
Humanities, SRM IST, Ramapuram, for his support and encouragement towards the successful
completion of the project.

I am thankful to Dr. J. DHILLIPAN, Vice Principal (Admin), College of Science &
Humanities, SRM IST, Ramapuram, for his support and encouragement towards the successful
completion of the project.

I am grateful to Dr. N. ASOKAN, Vice Principal (Academic), College of Science &
Humanities, SRM IST, Ramapuram, for his provision and support towards the successful
completion of the project.

I record my sincere thanks to Dr. AGUSTHIYAR R., Professor and Head, Department
of Computer Applications (BCA), SRM IST, Ramapuram, for his continuous support and keen
interest in making this project a successful one.

I find no words to express my profound gratitude to my guide Dr. S. JAYACHANDRAN,
MCA, Ph.D., Assistant Professor, Department of Computer Applications (BCA), SRM IST,
Ramapuram.

I thank the Almighty who has made this possible. Finally, I thank my beloved family
members and friends for their motivation, encouragement and cooperation in all aspects, which
led me to the completion of this project.

S.VIKNESH (RA1931241020119)
TABLE OF CONTENTS

CHAPTER   TITLE

          PROJECT REPORT
          BONAFIDE CERTIFICATE
          ABSTRACT
          ACKNOWLEDGEMENT
          LIST OF FIGURES

1         INTRODUCTION
          1.1 PROJECT INTRODUCTION

2         SYSTEM ANALYSIS
          2.1 EXISTING SYSTEM
          2.2 DISADVANTAGES
          2.3 PROPOSED SYSTEM
          2.4 ADVANTAGES

3         SYSTEM SPECIFICATION
          3.1 HARDWARE CONFIGURATION
          3.2 SOFTWARE CONFIGURATION

4         DIAGRAMS
          4.1 ARCHITECTURE DIAGRAM
          4.2 USE CASE DIAGRAM
          4.3 DATAFLOW DIAGRAM
          4.4 CLASS DIAGRAM

5         SYSTEM DESIGN
          5.1 RELATED WORKS
          5.2 APPLICATION OF YOLO ON MASK DETECTION
          5.3 OPTIMIZATION OF HUMAN BODY ATTITUDE DETECTION BASED ON MASK RCNN
          5.4 COLOR QUOTIENT MASK DETECTION
          5.5 VISION-BASED CROWD COUNTING AND SOCIAL DISTANCE MONITORING USING
              TINY-YOLOv4 AND DEEPSORT
          5.6 SOCIAL DISTANCE CAPTURING AND ALERTING TOOL

6         CROWD ESTIMATION
          6.1 MODULES
          6.2 MODULE DESCRIPTION
          6.3 IMAGE PROCESSING FUNCTION
          6.4 VIDEO ANALYSIS

7         7.1 METHODOLOGY
          7.2 CONCLUSION

8         APPENDIX
          8.1 SCREENSHOTS
          8.2 SOURCE CODE

LIST OF FIGURES

S. No   Fig. No   Figure Name
1       1         Architecture diagram
2       2         Use case diagram
3       3         Dataflow diagram
4       4         Class diagram
CHAPTER 1
1.1 INTRODUCTION:
Since the COVID-19 pandemic took the world by storm, tough but necessary measures
were taken by governments throughout the world to control its spread. This brought
normal day-to-day activities to a complete standstill. Months into lockdown, as the
curve flattens in several countries, the community grows restless. Relevant
authorities such as the WHO have laid down guidelines to minimise people's exposure
to the virus. Safety measures people are encouraged to follow include wearing masks
and maintaining a distance of 3 ft, which is approximately 1 m, from other
individuals. We have recognised this need and have developed a model particularly
suited to detecting such violations in real time. The first use of our model is to
detect people's faces and determine whether or not they are wearing an acceptable
mask. The second use is to determine whether or not social distancing is being
maintained between two individuals, in the most efficient, accurate and simple
manner, so that overseeing authorities need to take minimum effort.
CHAPTER-2

2.1 EXISTING SYSTEM:

Infections have been growing rapidly, and tremendous efforts are being made to fight the
disease. However, existing observation systems have mainly utilised cameras and image
analysis techniques for specifying people flow, and the use of cameras is not always
acceptable in actual fields because of privacy issues. Therefore, in this study, we
propose a new crowd density observation system for people flow observation. In order to
detect social distancing violations, the proposed system detects every individual person
under surveillance and calculates the distance between them. Mask protection has also
been analysed in the proposed system.

2.2 DISADVANTAGES:

 The system incurs additional processing delay.


 GPS and other satellite technologies may not be available, or may provide low
accuracy, inside multi-story buildings, airports, alleys, parking garages, and
underground locations, so a separate positioning technology would be needed to
practise social distancing in such places.

2.3 PROPOSED SYSTEM:

A report from the World Health Organization (WHO) stated that coronavirus disease
(COVID-19) has globally infected millions of people and caused deaths on a large scale.
The virus was first identified in December 2019 in Wuhan; it was later found to have
spread across 216 countries and was declared a pandemic. Many healthcare facilities,
medical experts, and scientists are in the process of creating medicines or a vaccine for
this deadly virus. However, to date, the development processes are in their latter
stages, and they are still not sure of the effects caused by this unknown virus. Stopping
countries from practising the daily activities on which their livelihoods depend leads to
a severe economic crisis during the pandemic period. Higher officials came up with a
temporary solution to re-establish the economy: opening up the countries while practising
the health protocols prescribed by the WHO. In this pandemic, people look for alternative
precautionary practices to overcome this deadly virus. Social distancing and face
covering, the rules set by the World Health Organisation, are the best precautionary
methods available today.

2.4 ADVANTAGES:

 The work is based upon both real-time and stored feeds, as per the use
conditions.
 The information obtained by training on data with variable and unfavourable
conditions has shown impeccable results, which can help oversee the measures
being implemented around socially active places.
CHAPTER-3

SYSTEM SPECIFICATION

3.1 HARDWARE CONFIGURATION:

 Processor - I5

 Speed - 3 GHz

 RAM - 8 GB (min)

 Hard Disk - 500 GB

 Key Board - Standard Windows Keyboard

 Mouse - Two or Three Button Mouse

 Monitor - LCD

3.2 SOFTWARE CONFIGURATION

 Operating System - Linux, Windows 7/10

 Server - Anaconda, Jupyter, PyCharm

 Front End - Tkinter GUI toolkit

 Server-side Script - Python, AIML


CHAPTER-4

4.1 ARCHITECTURE DIAGRAM:

[Architecture diagram: camera → recognition algorithm → tracking → output]
4.2 USE CASE DIAGRAM:
4.3 DATAFLOW DIAGRAM:
4.4 CLASS DIAGRAM:
CHAPTER-5
5.1 RELATED WORKS:
Regression-based object detectors such as You Only Look Once (YOLO) and Single Shot
Detector (SSD) MultiBox have been proven to be significantly faster than region-based
object detectors. Of the two, YOLO has long been the most popular choice among similar
object detectors. It takes the entire image as input at once, unlike region-based
detectors, which deduce region proposals that are fed to the classifier. This makes it
faster than other detectors by a wide margin. The model pipeline expects an RGB image,
which is divided into an S × S grid of cells. Each grid cell is responsible for
predicting B bounding boxes. For each bounding box, five values are predicted: x, y, w, h
and c. The coordinates of the centre point of a bounding box relative to its grid cell
are x and y, the width and height of the bounding box are w and h, and the confidence
score of an object being present in the bounding box is c. With C class probabilities,
the output of the object detector is a tensor of shape S × S × (B × 5 + C). Comparing
these distances against the threshold requires the absolute distance between two points
to be given. This is feasible for a factory or a closed room, but doing it for every
public area and every road would be costly and time consuming. It must be noted that
these two models are separate entities and an integrated solution has not been presented.
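
For illustration, the following short calculation (in Python, like the source code in
section 8.2) computes the output tensor shape using the classic YOLOv1 settings S = 7,
B = 2 and C = 20; these defaults come from the original YOLO paper, not from this report.

    # Hypothetical grid size, boxes per cell, and class count (YOLOv1 defaults);
    # each box contributes 5 values: x, y, w, h and the confidence c.
    S, B, C = 7, 2, 20
    output_shape = (S, S, B * 5 + C)
    print(output_shape)  # (7, 7, 30)
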
LITERATURE SURVEY REFERENCES:

5.2 TITLE: Application of Yolo on Mask Detection Task

ABSTRACT:
2020 has been a year marked by the COVID-19 pandemic. This event has caused
disruptions to many aspects of normal life. An important aspect of reducing the impact
of the pandemic is to control its spread. Studies have shown that one effective method of
reducing the transmission of COVID-19 is to wear masks. Strict mask-wearing policies
have been met with not only public sensation but also practical difficulty. We cannot
hope to manually check whether everyone on a street is wearing a mask properly. Existing
technology to help automate mask checking uses deep learning models on real-time
surveillance camera footage. The current dominant method for real-time mask detection
uses Mask R-CNN with ResNet as the backbone. While giving good detection results, this
method is computationally intensive, and its efficiency in real-time face mask detection
is not ideal.
5.3 TITLE: Optimization of Human Body Attitude Detection Based on Mask RCNN

ABSTRACT:
Mask RCNN has the advantage of higher accuracy than other human body attitude
detection models and has been widely used for multi-person human body attitude
detection. However, Mask RCNN has a low detection speed. Therefore, we propose a
MobileNet-based Mask RCNN optimization model to accelerate the inference
speed of Mask RCNN. At the same time, in order to further improve the detection
accuracy of Mask RCNN, we propose a method that uses pixel segmentation results
to assist the detection of key points on the human body, which improves inference
speed and reduces the rate of false detection to some extent.
5.4 TITLE: Color quotient based mask detection

ABSTRACT:
The paper deals with mask detection in the age of COVID-19 by proposing a simple
and efficient method to detect people not wearing masks. The approach includes a feature
extraction step followed by a supervised learning model built with support vector
machines. The features are formed from color information by considering the red, green
and blue channels of an RGB color image. The ratio of color channels is taken into
account to discriminate between mask and non-mask images. The approach has been tested
on a set of 1211 facial images extracted from groups of people wearing or not wearing a
mask, treated as a two-class problem where the mask class represents the positive
examples and the non-masked faces are negative examples. Part of the image data set is
used to train the support vector machines to learn discriminant features for each class,
followed by a prediction for each test sample.
5.5 TITLE: Vision-based Crowd Counting and Social Distancing Monitoring using
Tiny-YOLOv4 and DeepSORT

ABSTRACT:
With the novel coronavirus, social distancing and crowd monitoring became vital in
managing the spread of the virus. This paper presents a desktop application that utilizes
Tiny-YOLOv4 and DeepSORT tracking algorithm to monitor crowd count and social
distancing in a top-view camera perspective. The application is able to process video
files or live camera feed such as CCTV or surveillance cameras and generate reports
indicating people detected per unit time, percentage of social distancing per unit time,
detection and social distancing logs as well as color-coded bounding boxes to indicate if
the detected people are following social distancing protocols.
5.6 TITLE: Social Distance Capturing and Alerting Tool

ABSTRACT:

The Novel Coronavirus outbreak worldwide has made people think about the possibility
of protecting themselves from infection. Various precautionary measures are advised,
such as social distancing, maintaining hand hygiene, staying away from crowded places,
and refraining from touching the eyes, mouth, and nose. The pandemic has also challenged
various researchers, mathematicians, pharmacists, etc., to dig out solutions in this
pandemic situation. Machine learning concepts and algorithms have also found a great
place among various scientists. Among all the preventive measures, social distancing
plays a vital role in flattening the COVID-19 curve. This paper proposes an effective
social distancing capturing and alerting tool to detect humans who are not maintaining
social distance.
CHAPTER-6
CROWD ESTIMATION

6.1 MODULES:

 Function Checking
 Image Processing Function
 Video Analysis

6.2 MODULE DESCRIPTION:

FUNCTION CHECKING:

This function is used to calculate the distance between two different points (a and b) in
an image (a single frame of a video). The two points denote the centres of the individuals
detected in the video. The distance calculated is the Euclidean distance between the two
points; a minimal sketch of such a helper is shown below.
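
The following is a minimal sketch of such a distance helper, assuming each point is an
(x, y) pixel coordinate; the function name is illustrative rather than the one used in
the source code of section 8.2.

    import math

    def euclidean_distance(a, b):
        # a and b are the (x, y) centre points of two detected individuals, in pixels
        return math.sqrt((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2)

    print(euclidean_distance((0, 0), (3, 4)))  # 5.0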

6.3 IMAGE PROCESSING FUNCTION:

This is the most complex function in our program. Its argument is a single frame of the
video. On every iteration, a frame of the input video is processed, social distancing
violations between individuals in the crowd are detected, and the result is returned to
the main function; a simplified sketch of this per-frame check follows.
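
A minimal, self-contained sketch of the per-frame distance check, assuming the person
centres have already been extracted from the frame (the threshold value and function
names are illustrative, not taken from the source code):

    from itertools import combinations
    from math import dist  # Euclidean distance (Python 3.8+)

    THRESHOLD = 250  # illustrative pixel threshold; the real value needs calibration

    def find_violations(centres):
        # given the (x, y) person centres for one frame, return the indices of
        # individuals standing too close to at least one other person
        violations = set()
        for (i, a), (k, b) in combinations(enumerate(centres), 2):
            if dist(a, b) <= THRESHOLD:
                violations.update((i, k))
        return violations

    print(find_violations([(0, 0), (100, 0), (900, 900)]))  # {0, 1}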

6.4 VIDEO ANALYSIS:

We can see that the social distancing detector is able to detect whether the crowd is
following social distancing: people who are not at a safe distance are marked with a red
box, and safe people with a green box.
This program can be altered with various threshold values to check for social distancing.
For those who are interested, just by tweaking the code a little, you can set the source
of the video to the webcam of your PC; a sketch of that change follows, and I'll leave
the rest for you to try out yourself.
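
For instance, assuming the OpenCV capture API used in section 8.2, switching from a
stored video file to the PC's webcam is a one-line change (the file name here is
illustrative; device index 0 is the usual default webcam):

    import cv2

    # cap = cv2.VideoCapture("crowd.mp4")  # stored video file (illustrative name)
    cap = cv2.VideoCapture(0)              # default webcam instead
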
CHAPTER-7
7.1 METHODOLOGY:
This section discusses the different models used for the system, which employ a wide
range of object detection and image classification techniques, and the flow of these
models through the system. Images are sampled from a webcam video. The video was
processed by sampling one frame out of every five. Since the videos averaged about
80 frames per second, some frames could be skipped, as the movement of people would not
be too drastic within a fraction of a second. Hence, this provides a method to increase
computation speed while not compromising the performance of the model; a minimal sketch
of the sampling loop follows.
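
A minimal sketch of this sampling scheme, assuming an OpenCV capture loop; the file name
is illustrative, and process() is a stand-in for the detection pipeline described above.

    import cv2

    def process(frame):
        # stand-in for the detection pipeline described above
        pass

    cap = cv2.VideoCapture("input.mp4")  # illustrative file name
    frame_idx = 0
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        if frame_idx % 5 == 0:  # sample one frame for every five
            process(frame)
        frame_idx += 1
    cap.release()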

7.2 CONCLUSION:
The system works well, but some challenges had to be overcome. For the social distancing
framework, calibrating the model by simulating a 3D depth factor based on the camera
position and orientation gives a better analysis. For face mask detection, small faces
were hard to detect and even harder to track; hence we use pose estimation, estimating
the head bounding box from the body and using facial key points instead when they are
visible. Installing this system on surveillance cameras for monitoring health protocols
will help the government control the outbreak. The proposed framework can detect social
spacing and face covering correctly. Experimental results show that the YOLO weights and
the face data set achieved promising performance.
CHAPTER-8

8.1 SCREENSHOTS:
8.2 SOURCE CODE:

# import the necessary packages


from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from imutils.video import VideoStream
import numpy as np
import imutils
import time
import cv2
import os
import math
import face_recognition

def mainc():

    scale_percent = 20  # percent of original size

    width = 0
    height = 0

    labelsPath = "coco.names"
    LABELS = open(labelsPath).read().strip().split("\n")

    np.random.seed(42)
    COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8")

    weightsPath = "yolov3.weights"
    configPath = "yolov3.cfg"

    net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)

    cap = cv2.VideoCapture(1)
    if not cap.isOpened():
        print("Could not open webcam")
        exit()
    else:  # get dimension info
        width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        dim = (width, height)
        print('Original Dimensions : ', dim)
        width = int(width * scale_percent / 100)
        height = int(height * scale_percent / 100)
        dim = (width, height)
        print('Resized Dimensions : ', dim)

    def detect_and_predict_mask(frame, faceNet, maskNet):
        # grab the dimensions of the frame and then construct a blob from it
        (h, w) = frame.shape[:2]
        blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), (104.0, 177.0, 123.0))
        # pass the blob through the network and obtain the face detections
        faceNet.setInput(blob)
        detections = faceNet.forward()
        # initialize our list of faces, their corresponding locations,
        # and the list of predictions from our face mask network
        faces = []
        locs = []
        preds = []

        # loop over the detections
        for i in range(0, detections.shape[2]):
            # extract the confidence (i.e., probability) associated with
            # the detection
            confidence = detections[0, 0, i, 2]
            # filter out weak detections by ensuring the confidence is
            # greater than the minimum confidence
            if confidence > 0.5:
                # compute the (x, y)-coordinates of the bounding box for
                # the object
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                (startX, startY, endX, endY) = box.astype("int")
                # ensure the bounding boxes fall within the dimensions of
                # the frame
                (startX, startY) = (max(0, startX), max(0, startY))
                (endX, endY) = (min(w - 1, endX), min(h - 1, endY))
                # extract the face ROI, convert it from BGR to RGB channel
                # ordering, resize it to 224x224, and preprocess it
                face = frame[startY:endY, startX:endX]
                face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
                face = cv2.resize(face, (224, 224))
                face = img_to_array(face)
                face = preprocess_input(face)
                # add the face and bounding boxes to their respective lists
                faces.append(face)
                locs.append((startX, startY, endX, endY))

        # only make predictions if at least one face was detected
        if len(faces) > 0:
            # for faster inference we'll make batch predictions on *all*
            # faces at the same time rather than one-by-one predictions
            faces = np.array(faces, dtype="float32")
            preds = maskNet.predict(faces, batch_size=32)
        # return a 2-tuple of the face locations and their corresponding
        # predictions
        return (locs, preds)

    base_dir = os.getcwd()
    base_dir = base_dir.replace('\\', '/')

    model_store_dir = 'mask_detector.model'
    confidence = 0.4
    face_detector_caffe = 'res10_300x300_ssd_iter_140000.caffemodel'

    # load our serialized face detector model from disk
    print("[INFO] loading face detector model...")
    prototxtPath = 'deploy.prototxt'
    weightsPath = face_detector_caffe
    faceNet = cv2.dnn.readNet(prototxtPath, weightsPath)
    # load the face mask detector model from disk
    print("[INFO] loading face mask detector model...")
    maskNet = load_model(model_store_dir)
    # initialize the video stream and allow the camera sensor to warm up
    print("[INFO] starting video stream...")
    vs = VideoStream(src=0).start()
    time.sleep(2.0)

    # loop over the frames from the video stream
    iter = 0
    while True:
        # grab the frame from the threaded video stream and resize it
        frame = vs.read()
        frame = imutils.resize(frame, width=1200)

        resized = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)

        # face_landmarks_list = face_recognition.face_landmarks(frame)
        (H, W) = frame.shape[:2]
        ln = net.getLayerNames()
        # note: this indexing matches OpenCV versions in which
        # getUnconnectedOutLayers() returns nested arrays
        ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (224, 224), swapRB=True, crop=False)
        net.setInput(blob)
        # start = time.time()
        layerOutputs = net.forward(ln)
        # end = time.time()
        # print("Frame Prediction Time : {:.6f} seconds".format(end - start))
        boxes = []
        confidences = []
        classIDs = []

        for output in layerOutputs:
            for detection in output:
                scores = detection[5:]
                classID = np.argmax(scores)
                confidence = scores[classID]
                if confidence > 0.1 and classID == 0:  # class 0 = person
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    classIDs.append(classID)

        # run the distance check only on every third frame to save computation
        if iter % 3 == 0:

            idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.3)

            ind = []
            for i in range(0, len(classIDs)):
                if (classIDs[i] == 0):
                    ind.append(i)
            a = []
            b = []
            person_ids = []  # box indices kept after NMS, aligned with a and b

            if len(idxs) > 0:
                for i in idxs.flatten():
                    (x, y) = (boxes[i][0], boxes[i][1])
                    (w, h) = (boxes[i][2], boxes[i][3])
                    a.append(x)
                    b.append(y)
                    person_ids.append(i)

            distance = []
            nsd = []  # box indices of people Not at a Safe Distance
            # compare every unordered pair exactly once (the original loop
            # bounds skipped some pairs)
            for i in range(0, len(a)):
                for k in range(i + 1, len(a)):
                    x_dist = (a[k] - a[i])
                    y_dist = (b[k] - b[i])
                    d = math.sqrt(x_dist * x_dist + y_dist * y_dist)
                    distance.append(d)
                    if (d <= 6912):  # 6 feet = 6912 pixels
                        nsd.append(person_ids[i])
                        nsd.append(person_ids[k])
            nsd = list(dict.fromkeys(nsd))
            # print(nsd)

            color = (0, 0, 255)
            for i in nsd:
                (x, y) = (boxes[i][0], boxes[i][1])
                (w, h) = (boxes[i][2], boxes[i][3])
                cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
                text = "DANGER"
                cv2.putText(frame, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
                            color, 2)
            color = (0, 255, 0)
            if len(idxs) > 0:
                for i in idxs.flatten():
                    if (i in nsd):
                        continue  # fixed: the original `break` stopped at the first unsafe person
                    (x, y) = (boxes[i][0], boxes[i][1])
                    (w, h) = (boxes[i][2], boxes[i][3])
                    cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
                    text = 'FINE'
                    cv2.putText(frame, text, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX,
                                0.5, color, 2)

            text = "NOT 6 FEET AWAY: {}".format(len(nsd))
            cv2.putText(frame, text, (10, frame.shape[0] - 25),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.85, (0, 0, 255), 3)
            text1 = "PHOENIX-SOCIAL DISTANCING AND MASK DETECTOR"
            cv2.putText(frame, text1, (200, 45),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)

        # detect faces in the frame and determine if they are wearing a
        # face mask or not
        (locs, preds) = detect_and_predict_mask(frame, faceNet, maskNet)

        # loop over the detected face locations and their corresponding
        # predictions
        for (box, pred) in zip(locs, preds):
            # unpack the bounding box and predictions
            (startX, startY, endX, endY) = box
            (mask, withoutMask) = pred
            # determine the class label and color we'll use to draw
            # the bounding box and text
            label = "MASK WORN" if mask > withoutMask else "MASK NOT WORN"
            color = (0, 255, 0) if label == "MASK WORN" else (0, 0, 255)
            # include the probability in the label
            label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)
            # display the label and bounding box rectangle on the output frame
            cv2.putText(frame, label, (startX, startY - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
            cv2.rectangle(frame, (startX, startY), (endX, endY), color, 2)

        face_locations_list = face_recognition.face_locations(frame)
        face_landmarks_list = face_recognition.face_landmarks(frame)

        if face_landmarks_list != []:
            # display the results; detection runs on the full-size frame here,
            # so no coordinate rescaling is needed (the original multiplied the
            # coordinates by 4, which assumed detection on a quarter-size frame)
            for (top, right, bottom, left) in face_locations_list:

                # draw a box around the face
                cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)

                # draw a label below the face
                cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255),
                              cv2.FILLED)
                font = cv2.FONT_HERSHEY_DUPLEX
                text = "Not Proper Mask"
                cv2.putText(frame, text, (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

        # print(face_locations_list)
        # show the output frame
        cv2.namedWindow('frame', cv2.WINDOW_NORMAL)
        cv2.setWindowProperty('frame', cv2.WND_PROP_FULLSCREEN,
                              cv2.WINDOW_FULLSCREEN)
        cv2.imshow('frame', frame)
        key = cv2.waitKey(1) & 0xFF
        # if the `q` key was pressed, break from the loop
        if key == ord("q"):
            break
        iter += 1  # fixed: the original never incremented iter, so iter % 3 was always 0

    # do a bit of cleanup
    cv2.destroyAllWindows()
    vs.stop()

# if __name__ == '__main__':
#     mainc()
