
PROJECT REPORT

Face Recognition Based Attendance System

Submitted in partial fulfillment of the


Requirements for the award of

Degree of Bachelor of Technology


in
Computer Science and Engineering

SUBMITTED BY:

Name: Amandeep Kaur, Mansi, Shivani Thakur, Shruti Thakur


University Roll No: 2124112, 2124140, 2124157, 2124159

SUBMITTED TO: Dr. Brijesh Bakariya

Department of Computer Science & Engineering

IK GUJRAL PUNJAB TECHNICAL UNIVERSITY CAMPUS,


HOSHIARPUR

DECLARATION

We hereby declare that the Project Report entitled “Face Recognition Based Attendance
System” is an authentic record of our own work, carried out as a project requirement
during the period from January to June, for the award of the degree of B.Tech. (CSE),
IK Gujral Punjab Technical University Campus, Hoshiarpur, under the guidance of
Dr. Brijesh Bakariya.

Amandeep Kaur (2124112)


Mansi (2124140)
Shivani Thakur (2124157)
Shruti Thakur (2124159)

Date: ____________________

Certified that the above statement made by the students is correct to the best of our
knowledge and belief.

Signatures

Examined by:
1. 2. 3. 4.

Head of Department
(Signature and Seal)
Acknowledgment

We express our sincere gratitude to I.K. Gujral Punjab Technical University,
Kapurthala for giving us the opportunity to work on the Minor Project during the third
year of B.Tech (CSE), an important milestone in the field of engineering.

I would like to thank Director Dr. Yadwinder Singh Brar, I.K.G.PTU Campus,

Hoshiarpur and Dr. Brijesh Bakariya, Head of Department, CSE at I.K.G.PTU


Campus, Hoshiarpur for their kind support.

We also owe our sincerest gratitude to Dr. Brijesh Bakariya for his valuable advice
and healthy criticism throughout the project, which helped us immensely to complete the
work successfully. We would also like to thank everyone who has knowingly or
unknowingly helped us throughout this work. Last but not least, a word of thanks to the
authors of all the books and papers we consulted during the project work and while
preparing this report.

Amandeep Kaur (2124112)

Mansi (2124140)

Shivani Thakur (2124157)

Shruti Thakur (2124159)

B.Tech (CSE), 6th Semester

TABLE OF CONTENTS

Chapter 1: Introduction
1.1 Introduction about Project
1.2 Objective
1.3 Existing System
1.4 Proposed System

Chapter 2: Tools & Technology
2.1 Programming Language
2.2 Technologies

Chapter 3: Dataset Description
3.1 Gathering Dataset
3.2 Data Preprocessing
3.3 Dataset Processing and Final Creation of Data

Chapter 4: Concepts Used in Model Creation
4.1 Explanation of ResNet
4.2 Residual Learning
4.3 Architecture
4.4 Benefits
4.5 Why Use ResNet in a Face Recognition-Based Attendance System
4.6 Where ResNet Is Utilized in the Face Recognition Based Attendance System

Chapter 5: Face Recognition Based Attendance System
5.1 Definition
5.2 Uses
5.3 Working

Chapter 6: Libraries and Modules Used
6.1 Libraries Used
6.2 Modules Used

Chapter 7: Code Implementation
7.1 Add Data to Database
7.2 Blink New Attendance
7.3 Encoding Generator
7.4 Accuracy Visualize

Chapter 8: Performance Evaluation
8.1 Metrics
8.2 Feedback Option to Improve the Performance

Chapter 9: Screenshots and Images
9.1 Flowcharts
9.2 Final Output of the Project (Screenshots)

Chapter 10: Conclusion
10.1 Conclusion
10.2 Applications
10.3 Limitations
10.4 Future Scope

LIST OF FIGURES

Figure 1. Dataset
Figure 2. Addition of data in database
Figure 3. Firebase (database)
Figure 4. Dataset preprocessing
Figure 5. Coding of preprocessing
Figure 6. Encoding generator
Figure 7. Workflow
Figure 8. Importing libraries
Figure 9. Initializing database
Figure 10. Firebase
Figure 11. Frame
Figure 12. Resources folder
Figure 13. Modes
Figure 14. Background image
Figure 15. Loading encoding files
Figure 16. Encodings
Figure 17. Face distance
Figure 18. Records of students' attendance
Figure 19. Records in database
Figure 20. Feedback message
Figure 21. Respond to feedback
Figure 22. Adding data to database
Figure 23. Blink_NewAttendance.py
Figure 24. Code of encodings
Figure 25. Code of accuracy visualization
Figure 26. accuracyvisualize.py
Figure 27. Output of accuracy
Figure 28. Confusion matrix of the output
Figure 29. Attendance percentage
Figure 30. Feedback mechanism
Figure 31. Feedback analysis
Figure 32. Flowchart
Figure 33. Active process
Figure 34. Face recognised
Figure 35. Attendance marked
Figure 36. Already marked

CHAPTER 1: INTRODUCTION

1.1 Introduction about Project

The basic idea behind our project is to build a system, based on face recognition,
that recognizes a student's face and compares it with the images already stored in
our database.

In our project we use Firebase, a cloud platform, as a backend-as-a-service. If the
captured image matches an existing image, attendance is marked and stored in the
database automatically, without human intervention.

Beyond marking attendance, the system also maintains each student's record: total
attendance, a grade derived from that attendance, performance (good or bad), the
stored image, and student details such as name, roll number, and branch. All of this
information is maintained automatically, which reduces the faculty's workload and
frees up productive time for teaching.

The system also records the time at which each student marks attendance and stores
it automatically in the database. Because attendance is marked only after the
student's face is detected, only students who are actually present can mark
attendance, which protects against proxy attendance. The main objective is to reduce
manual work and keep the record honestly, which is handled by our machine learning
model. The model marks attendance in a few seconds, so it is far less time-consuming
than register-based attendance.

1.2 Objective

 Automating the whole process so that we have a digital environment.

 Reducing the time wasted during conventional class attendance.
 Preventing fake roll calls, since only one-to-one attendance marking is possible.
 Recording the attendance of each identified student.

1.3 Existing System
Many face recognition systems have been implemented over the past decade, each
using different methods and algorithms. Some extract facial features from the
input image to identify the face. Others normalize a set of face images, compress
the face data, and save it in a single image that is then compared with the input
face data.

1.4 Proposed System

In our system, multiple students can mark their attendance at the same time. All the
students of the class must register themselves by entering the required details; their
images are then captured and stored in the dataset. During each session, faces are
detected from live streaming video of the classroom and compared with the images
present in the dataset. If a match is found, attendance is marked for the respective
student. The proposed system captures the face of each student and stores it in the
database for attendance. The face needs to be captured in such a manner that all the
features of the student's face are detected, and even the seating position and posture
of the student are recognized. There is no need for the teacher to take attendance
manually, because the system records a video and, through further processing steps,
the face is recognized and the attendance database is updated.

CHAPTER 2: TOOLS & TECHNOLOGY

2.1 Programming Language


The programming language which we used to implement our project is Python.

Python is a high-level, interpreted, interactive and object-oriented scripting language.


Python is designed to be highly readable. It uses English keywords frequently whereas
other languages use punctuation, and it has fewer syntactical constructions than other
languages.
Characteristics of Python

Following are important characteristics of Python Programming −

 It supports functional and structured programming methods as well as OOP.

 It can be used as a scripting language or can be compiled to byte-code for building


large applications.
 It provides very high-level dynamic data types and supports dynamic type checking.
 It supports automatic garbage collection.

 It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.

2.2 Technologies
The system we used in order to create our project has the following specification:

 Intel Core i5 12th Gen


 Windows 11
 512 GB SSD
 16 GB RAM

The following IDEs were used:

 Visual Studio code.

CHAPTER 3: DATASET DESCRIPTION

3.1 Gathering Dataset


The dataset used in our project “Face Recognition Based Attendance System” consists of
images of the individuals (students). Each image is labelled with a unique ID (student ID)
and all the information regarding the student ID is saved in the Realtime Database. The
dataset will be used to train and test the model for the purpose of recognizing and
identifying faces accurately.
The dataset consists of a variety of facial images of different students, captured under
different facial expressions and backgrounds. In this project we use a pre-trained
model: a ResNet-based CNN provided by the dlib library and accessed through the
face_recognition library.

Fig 1 Dataset

This figure shows the images of the students that we take as the dataset; each image is
saved with the student's unique ID (roll number). The dataset contains images in
different formats, such as .jpg and .webp, which are converted into a single format
during data preprocessing.

Fig 2 Addition of data in database

A separate AddDatatoDatabase.py file keeps all the information regarding the
students. That information is saved in the database; in our project we use Firebase.

Fig 3 Firebase (database)

Fig 3 shows the information of the students saved in the Firebase.


3.2 Data Preprocessing
In Data Preprocessing we will remove the images or change the format of the images
which are of different file format which means all the images should be of similar format
i.e same file extension. If .jpg is used then all the files should be of .jpg extension.
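This extension check can be sketched as follows. This is a minimal sketch, not the project's actual preprocessing code: it only flags files whose extension differs from the chosen one; converting them would additionally need an image library such as Pillow.

```python
import os

def find_mismatched_images(folder, keep_ext=".jpg"):
    """Return files whose extension differs from keep_ext (e.g. .webp, .png).

    These are the files that data preprocessing would convert or remove
    so that the whole dataset shares a single file format.
    """
    mismatched = []
    for name in os.listdir(folder):
        _base, ext = os.path.splitext(name)
        if ext.lower() != keep_ext:
            mismatched.append(name)
    return sorted(mismatched)
```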
Fig 4 Dataset Preprocessing

3.3 Dataset Processing and Final Creation of Data

For data preprocessing, the EncodeGenerator.py file is created.

Fig 5 Coding of Preprocessing


In this file the ‘findEncodings()’ function takes a list of images (‘imagesList’) as input
and returns a list of face encodings (‘encodeList’) as output. The step-by-step process:

1. Iterating over images: The function iterates over each image in the imagesList using
a for loop.
2. Color conversion: Inside the loop, each image (img) is converted from the BGR color
space to the RGB color space using cv2.cvtColor(img, cv2.COLOR_BGR2RGB). This
conversion ensures compatibility with the face_recognition library, which expects
images in RGB format.
3. Face detection and encoding: After converting the image to RGB format, the
face_recognition.face_encodings() function is called to detect faces in the image and
generate face encodings. face_encodings() returns a list of face encodings for all faces
detected in the image. Since the assumption is that each image contains only one face,
[0] is used to select the encoding of the first (and presumably only) face in the list.
4. Appending to encode list: The generated face encoding (encode) is appended to the
encodeList.
5. Returning encode list: Once all images in imagesList have been processed, the
function returns the list of face encodings (encodeList).
These face encodings are then used for face recognition.

Fig 6 Encoding Generator


CHAPTER 4: CONCEPTS USED IN MODEL CREATION

4.1 Explanation of ResNet


In the Face Recognition Based Attendance System we use the face_recognition library,
which uses a pre-trained deep learning model, ResNet, to locate faces accurately.

ResNet, short for Residual Network, is based on the concept of residual learning or skip
connections. The key idea behind ResNet is to address the degradation problem that
arises when deep neural networks are trained.
Here's a brief explanation of the main concept:

4.2 Residual Learning:

 In traditional deep neural networks, as the network depth increases, accuracy


tends to saturate and then degrade rapidly. This phenomenon is known as the
degradation problem.

 ResNet addresses this issue by introducing residual or shortcut connections that


allow the network to learn residual functions instead of directly learning
underlying mappings.

 The core idea is to learn the residual F(x) = H(x) − x instead of directly learning
the underlying mapping H(x), where H(x) represents the desired underlying
mapping, and x represents the input to a given layer.

 By using these residual connections, the network can effectively bypass certain
layers during training if those layers do not contribute significantly to the desired
mapping. This enables the training of much deeper networks without suffering
from degradation.
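The idea can be illustrated with a toy residual block in NumPy. This is a minimal sketch of the concept, not the actual dlib model: the block computes F(x) and adds the input back, so when the learned weights contribute nothing, the block simply passes x through unchanged.

```python
import numpy as np

def residual_block(x, W1, W2):
    """Toy residual block: y = F(x) + x, with F = W2 @ relu(W1 @ x).

    The "+ x" skip connection lets the block fall back to the identity
    mapping when F contributes nothing, which is what makes very deep
    networks trainable without degradation.
    """
    f = W2 @ np.maximum(W1 @ x, 0.0)  # F(x): two layers with a ReLU in between
    return f + x                       # skip connection adds the input back
```

With zero weights the block is exactly the identity, which is the fallback behaviour the bullet points above describe.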

4.3 Architecture:
 ResNet architectures typically consist of multiple residual blocks, each containing
several convolutional layers along with shortcut connections.

 The residual blocks can have different configurations, such as bottleneck structures
for deeper networks or plain structures for smaller networks.

 The most common ResNet architectures include ResNet-18, ResNet-34,
ResNet-50, ResNet-101, and ResNet-152, which differ in terms of the number of
layers and computational complexity.

4.4 Benefits:
 ResNet's use of residual connections enables the training of extremely deep neural
networks, reaching hundreds or even thousands of layers while maintaining or
improving performance.

 The skip connections help alleviate the vanishing gradient problem by providing
a direct path for gradient flow during backpropagation, facilitating easier training
of deep networks.

In summary, ResNet is based on the concept of residual learning, where residual


connections are introduced to enable the training of very deep neural networks. These
connections allow the network to learn residual functions, which helps mitigate the
degradation problem and enables the construction of highly accurate and deep
convolutional neural networks.

4.5 Why Use ResNet in Face Recognition-Based Attendance System:


1. Effective Feature Learning:

 Face recognition tasks often require learning complex and hierarchical


features from facial images. ResNet's ability to effectively capture features
through residual learning makes it well-suited for this task.

 With the deep layers of ResNet, the model can learn discriminative features
at different levels of abstraction, enabling accurate face recognition even in
challenging conditions.

2. Robustness to Variations:

 Faces can vary significantly in terms of pose, illumination, expression, and


occlusion. ResNet's deep architecture allows it to learn robust
representations that can generalize well to various facial variations.

 The skip connections in ResNet facilitate gradient flow during training,


helping the model to better handle variations in facial appearance and pose.

3. High Accuracy and Performance:

 ResNet architectures have achieved state-of-the-art performance on various


computer vision tasks, including face recognition.

 By leveraging the depth and skip connections of ResNet, the face


recognition model can achieve high accuracy in identifying individuals,
which is crucial for an attendance system where precision is paramount.

4. Easy Integration:

 ResNet models are readily available and can be easily integrated into
existing deep learning frameworks like TensorFlow or PyTorch.

 With pre-trained ResNet models and transfer learning techniques,


developers can fine-tune the model on a specific face recognition dataset,
reducing the need for extensive training from scratch.

In summary, ResNet's residual learning concept, coupled with its deep architecture
and skip connections, makes it an ideal choice for face recognition-based
attendance systems. Its ability to learn complex features, robustness to variations,
high accuracy, and ease of integration contribute to its effectiveness in accurately
identifying individuals from facial images captured in real-time video streams.
4.6 Where Res-Net is utilized in Face Recognition Based Attendance
System
In our project, a face recognition-based attendance system, ResNet is utilized in the
following manner:
1. Feature Extraction:

 Pre-trained ResNet models, such as ResNet-50 or ResNet-101, can be


employed as feature extractors for facial images.

 The convolutional layers of the ResNet model are used to extract


hierarchical features from facial images, capturing facial characteristics and
patterns.

2. Transfer Learning:

 You can leverage a pre-trained ResNet model that has been trained on a
large-scale dataset like ImageNet.

 By fine-tuning the pre-trained ResNet model on your specific face


recognition dataset, you can adapt it to learn features relevant to your task,
such as identifying individuals from facial images.

3. Face Embeddings:

 Once a facial image is passed through the ResNet model, the output from
one of the intermediate layers (before the fully connected layers) can be
considered as a face embedding or feature vector.

 This face embedding captures the unique characteristics of the face in a


high-dimensional feature space.

4. Similarity Comparison:

 During the recognition phase, the face embeddings extracted from new
facial images (captured in real-time) are compared with the embeddings of
known faces stored in the system.

 Similarity metrics such as cosine similarity or Euclidean distance can be


used to measure the similarity between face embeddings.

5. Classification or Identification:

 Based on the similarity scores obtained from comparing face embeddings,


you can determine whether the input face matches any of the known faces
in the database.

 If a match is found above a certain threshold, the system recognizes the


individual and records their attendance.

6. Training and Model Updates:

 The ResNet model used for face recognition can be trained and updated
periodically to adapt to changes in facial appearance, lighting conditions, or
new individuals entering the system.

 Training data can be collected over time to enhance the model's performance
and accuracy.
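The similarity metrics mentioned in step 4 reduce to two short functions; here is a self-contained NumPy sketch (using toy 2-D vectors rather than real 128-dimensional embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    """Higher = more similar; 1.0 for embeddings pointing the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    """Lower = more similar; 0.0 for identical embeddings."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
```

In practice, one of these scores is compared against a threshold (step 5) to decide whether two embeddings belong to the same person.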

By integrating ResNet into our face recognition-based attendance system, we benefit
from its ability to learn discriminative features from facial images and accurately
identify individuals, thus automating the attendance tracking process with high
precision and reliability.

Fig 7 workflow

This workflow illustrates the sequential steps involved in using ResNet for face
recognition in our attendance system, from data collection and preprocessing to feature
extraction, recognition, and model updates.

CHAPTER 5: Face Recognition Based Attendance System

5.1 Definition

A face recognition-based attendance system is a biometric technology used to
automatically identify a student's face and verify the student based on their facial
features. The system tracks each student's attendance and stores the record
automatically, without user intervention. Only authorized students can access the
data and mark attendance.

5.2 Uses

1. Only authorized students can mark attendance, and only if the detected face is
recognized, which provides security and keeps out unwanted users.
2. Automated time tracking: the student's entry time is stored automatically.
3. Time and cost savings: all the work is done by the system, and it eliminates costly
paperwork.
4. Easy management of student records, even when the class strength is very large.
5. Reduces manual effort because the whole process is automated.
6. Unlimited scale, because there is no limit to the number of students.
7. Touchless attendance: during pandemics like COVID-19 it is a good solution to
minimize physical contact in educational institutes.

5.3 Working
The working of our project is defined briefly in the points below.

1. Importing libraries: First, we import the libraries required by our project.
 cv2: OpenCV library for image and video processing.
 os: Operating system module for file operations.
 pickle: Python module for object serialization.
 numpy: Numerical computing library for handling arrays and matrices.
 face_recognition: Library for face detection and recognition.
 cvzone: Additional OpenCV-based library for computer vision tasks.
 firebase_admin: Firebase SDK for Python, used for Firebase integration.
 datetime: Module for handling dates and times.
 csv: Module for reading and writing CSV files.

Fig 8 Importing Libraries

2. Firebase Initialization:
 Firebase is a platform for building web and mobile applications. In this script,
Firebase is initialized using a service account key ("serviceAccountKey.json") to
access the Firebase Realtime Database and Storage.

Fig 9 Initializing database

Fig 10 Firebase

3. Video Capture Setup:


 The script attempts to initialize video capture from a camera by iterating over
camera indices from 0 to 9 using cv2.VideoCapture.
 Once an open camera is found, its resolution is set to 640x480 using cap.set(3,
640) and cap.set(4, 480).

Fig 11 Frame

4. Background Image and Modes Loading:


 A background image ('Resources/background.png') and mode images from a
folder ('Resources/Modes') are loaded.
 These images are used for displaying the video feed and additional UI elements.

Fig 12 Resources Folder

Fig 13 Modes

Fig 14 Background Image

5. Face Encoding File Loading:


 A file containing precomputed face encodings and associated student IDs is loaded
using pickle.load.
 These precomputed encodings are used for recognizing known faces.

Fig 15 Loading Encoding files
6. Main Loop:
 The script enters a main loop where it continuously captures frames from the video
feed using cap.read().
 Each frame is resized and converted to RGB format.
 Face locations and encodings are computed using face_recognition.face_locations
and face_recognition.face_encodings, respectively.
 Face encodings are compared with the precomputed encodings to identify known
faces.

Fig 16 Encodings

In Figure 16 the generated encodings are compared with the face detected by the
webcam and the face_distance is calculated. The greater the face_distance, the less
similar the faces; the smaller the face_distance, the more similar the faces.

Fig 17 Face Distance

Figure 17 shows the face_distance and indicates how similar the faces are.
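face_recognition's face_distance is essentially a Euclidean distance over the 128-dimensional encodings, and compare_faces accepts a match below a tolerance (0.6 by default). A minimal re-implementation of that comparison logic, for illustration only:

```python
import numpy as np

def face_distance(known_encodings, probe):
    """Euclidean distance between a probe encoding and each known encoding."""
    return np.linalg.norm(np.asarray(known_encodings) - np.asarray(probe), axis=1)

def best_match(known_ids, known_encodings, probe, tolerance=0.6):
    """Pick the closest known face; reject the match if it exceeds the tolerance."""
    distances = face_distance(known_encodings, probe)
    idx = int(np.argmin(distances))
    return known_ids[idx] if distances[idx] <= tolerance else None
```

Toy 2-D vectors stand in for real encodings here; the decision rule (smallest distance wins, unknown face if above tolerance) is the same either way.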

7. Attendance Tracking:
 When a known face is detected, the script retrieves the corresponding student
information from Firebase using db.reference.
 It updates the attendance information for the detected student if certain conditions
are met, such as checking the elapsed time since the last attendance record.
 Attendance percentage is calculated based on the total attendance count for each
student.

Fig 18 Records of students attendance

Fig 18 represents all the information of the student when the face is detected along with
Attendance Percentage.
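The elapsed-time condition in step 7 can be sketched like this; the 30-second window is an illustrative assumption, not the script's actual value:

```python
from datetime import datetime

def should_mark_attendance(last_attendance_time, now, min_gap_seconds=30):
    """Only mark attendance again after min_gap_seconds have elapsed.

    This is the check that stops one student from being counted twice
    while they stand in front of the camera.
    """
    elapsed = (now - last_attendance_time).total_seconds()
    return elapsed > min_gap_seconds
```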

Fig 19 Records in database

8. Feedback Mechanism:
 The script prompts the user to provide feedback.
 Feedback, along with the roll number (presumably the student's ID), is stored in a
CSV file ('feedback.csv') using csv.writer.

Fig 20 Feedback message

After providing all the information regarding the student feedback option will be
provided.

Fig 21 Respond to Feedback

Feedback is collected in a csv file.
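Appending a feedback row with csv.writer looks roughly like this; the two-column layout (roll number, feedback text) is an assumption based on the description above, not the exact feedback.csv schema:

```python
import csv

def save_feedback(path, roll_no, feedback):
    """Append one feedback row (roll number, feedback text) to a CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([roll_no, feedback])
```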

9. Video Capture Release:


 Finally, the video capture object is released using cap.release() and any OpenCV
windows are closed using cv2.destroyAllWindows().
 On pressing ‘q’ the webcam will be closed.

This script combines various technologies including face recognition, real-time video
processing, Firebase integration, and user feedback collection to create a face
recognition-based attendance system with additional features.

CHAPTER 6: LIBRARIES AND MODULE USED

6.1 Libraries Used

Several libraries and frameworks are commonly used in the development of face
recognition-based attendance systems. Here are some of the key ones:

 OpenCV (Open Source Computer Vision Library):

OpenCV is a popular open-source computer vision library that provides a wide range
of functionalities for image and video processing, including face detection, facial
landmark detection, and image manipulation.
 dlib:
dlib is a C++ toolkit containing machine learning algorithms and tools for computer
vision tasks. It includes pre-trained models for face detection, facial landmark
detection, and face recognition. dlib is often used in conjunction with OpenCV for
face-related tasks.
 face_recognition:
face_recognition is a Python library built on top of dlib and provides a simple
interface for face recognition tasks. It allows developers to easily detect faces in
images, identify facial landmarks, and perform face recognition using pre-trained
models.
 Cvzone:
cvzone is a computer vision package, built on top of OpenCV, that makes it easy to
run image processing and AI functions.
 Numpy:
NumPy is a library that helps manage multi-dimensional arrays efficiently. When
extracting features from facial images, NumPy can be used to perform mathematical
operations on the pixel values to compute embeddings representing facial
characteristics.

 Sklearn:
scikit-learn provides preprocessing techniques that can help improve the accuracy
of the face recognition model by ensuring that the input data is in a suitable
format for training and inference.

6.2 Modules Used:

OS: (Operating System Interface): The ‘os’ module in python provides a way to
interact with the operating system. It can be used for tasks like file and directory
manipulation, which can be useful in handling datasets and managing files in a project.

Pickle: The ‘pickle’ module in Python is used for serializing and deserializing objects.
It is commonly employed to save and load machine learning models, making it useful
for storing trained face recognition models.

Datetime: This module is used when an individual's attendance is recognized: the
current date and time are captured using the datetime library to create a timestamp.
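Creating such a timestamp is a one-liner; the format string below is an illustrative choice, not necessarily the script's exact one:

```python
from datetime import datetime

def attendance_timestamp(now=None):
    """Timestamp string recorded at the moment a face is recognized."""
    now = now or datetime.now()
    return now.strftime("%Y-%m-%d %H:%M:%S")
```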

CHAPTER 7: Code Implementation

Our code is written in the Python language. Following are the files which we have
used in our project.

->ImagesAttendance
-> Resources
->AddDatatoDatabase.py
->EncodeGenerator.py
->Blink_NewAttendance.py
->Accuracyvisualize.py
->Feedback.csv

Information related to the Folders which are used

ImagesAttendance -> Contains images of the students
Resources -> Contains the background and mode images used for the frontend
Feedback.csv -> Contains all the information related to feedback

7.1 AddDatatoDatabase.py
Here all the images and data related to students are saved in Firebase. We generate a
key-value pair to store the information in the database: the key is the student's roll
number and the value contains all their information.

Fig 22 adding data to database
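The key-value structure described above might look like the following; the field names are assumptions based on the records shown in the screenshots, not the exact schema:

```python
# Hypothetical record layout: keys are roll numbers; values hold the
# details the system displays when a face is recognized.
students = {
    "2124112": {
        "name": "Amandeep Kaur",
        "branch": "CSE",
        "total_attendance": 0,
    },
}

def add_student(db, roll_no, details):
    """Insert or overwrite one student record keyed by roll number."""
    db[roll_no] = details
    return db
```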

7.2 Blink_NewAttendance.py
All the working of the code is done here: displaying the webcam,
detecting faces, and marking attendance.

Fig 23 Blink_NewAttendance.py

7.3 EncodeGenerator.py
Here all the images from storage are taken, their feature extraction is done, and
encodings are generated so that it is easy to compare images.

Fig 24 code of encodings

7.4 accuracyvisualize.py
Here the performance of the project will be evaluated by comparing
predicted labels and true labels.

Figure 25 code of accuracy visualization

CHAPTER 8: PERFORMANCE EVALUATION

8.1 Metrics
Performance of our project:
Accuracy Metrics:
To gauge the system's performance, accuracy metrics such as precision, recall, and
F1 score were computed. These metrics quantified the system's ability to correctly
identify individuals under varying conditions, offering valuable insights into its
robustness and reliability.
In our project we have used accuracyvisualize.py file to evaluate the performance of
our project.

Fig 26 (accuracyvisualize.py)

In the provided code, performance is evaluated through several key steps and metrics:

1. Face Recognition and Attendance Tracking: The main functionality of the code
is to recognize faces in a video stream captured by a camera and track attendance
based on the recognized faces. This is achieved using the face_recognition library

to detect and encode faces, and then comparing these encoded faces with a
pre-existing list of known faces.

2. Attendance Count: The code maintains a dictionary called attendance_count,


where the keys are student IDs and the values represent the number of times each
student has been recognized in the video stream. This count serves as a basis for
calculating attendance percentage.

3. Accuracy Score: The code calculates the accuracy score using the
accuracy_score function from sklearn.metrics. It compares the true labels
(ground truth student IDs) with the predicted labels (student IDs recognized by the
system) to determine the overall accuracy of face recognition.

4. Confusion Matrix: The code computes a confusion matrix using the


confusion_matrix function from sklearn.metrics. The confusion matrix provides
a detailed breakdown of true positive, false positive, true negative, and false
negative predictions, enabling a deeper understanding of the system's performance
across different classes (student IDs).

5. Visualization: The confusion matrix is visualized using matplotlib, making it


easier to interpret the distribution of correct and incorrect predictions. The matrix
includes numerical values in each cell, representing the count of predictions for
each class.

6. Output: Finally, the code prints out key metrics such as the number of true labels,
number of predicted labels, attendance percentage, accuracy, and the confusion
matrix.
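Steps 3 and 4 boil down to two sklearn.metrics calls. A self-contained example with made-up labels (toy data, not output from the actual system):

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# True roll numbers vs. the roll numbers the recognizer predicted.
true_labels = ["2124112", "2124140", "2124157", "2124159", "2124112"]
pred_labels = ["2124112", "2124140", "2124112", "2124159", "2124112"]

acc = accuracy_score(true_labels, pred_labels)   # fraction of correct predictions
cm = confusion_matrix(true_labels, pred_labels)  # per-class breakdown (rows = true)
```

Here one of the five faces (true 2124157) was misrecognized as 2124112, so the accuracy is 0.8 and that error shows up as an off-diagonal entry in the matrix.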

By evaluating these metrics, the code provides a comprehensive assessment of the face
recognition system's performance in terms of accuracy, attendance tracking, and the
ability to correctly identify individuals.

Fig 27 output of accuracy

This Figure gives an output of Accuracy on the basis of Comparison of True labels and
Predicted Labels and also provides a confusion matrix .

Fig 28 confusion matrix of above output for visualization

Attendance Percentage:

Attendance tracking extends beyond mere identification, encompassing the
calculation of attendance percentages based on recorded attendance counts and the
total number of scheduled classes. This metric provides educators with a holistic view
of students' attendance patterns, facilitating informed decision making and
intervention strategies.
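The percentage calculation itself is a simple ratio of recorded attendance counts to scheduled classes:

```python
def attendance_percentage(attended, total_classes):
    """Percentage of scheduled classes a student attended, rounded to 2 places."""
    if total_classes == 0:
        return 0.0  # no classes scheduled yet: avoid division by zero
    return round(attended / total_classes * 100, 2)
```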

Fig 29 Attendance percentage

8.2 Feedback Option to Improve the Performance:

Feedback Mechanism:
Acknowledging the importance of user feedback in iterative development, the system
incorporated a user feedback mechanism. Users were empowered to provide input on
system performance, usability, and overall experience. Feedback data, including
timestamps and associated user IDs, were systematically logged and analyzed,
guiding future enhancements and refinements.

Fig 30 Feedback Mechanism

User Feedback Analysis:


User feedback served as a cornerstone for continuous improvement and refinement
of the system. Through thorough analysis of feedback data, trends, patterns, and areas
for improvement were identified. Iterative enhancements were subsequently
implemented, ensuring that the system evolved to meet the evolving needs and
expectations of its users.

Fig 31 Feedback Analysis

CHAPTER 9: SCREENSHOTS AND IMAGES

9.1 FLOWCHARTS

Fig 32 Flowchart

9.2 Final Output of the Project (Screenshots)

1. First, the webcam feed is displayed to detect the person, along with a Graphical
User Interface.

Fig 33 Active process

2. If a person from the Database Storage is detected, then all the information regarding
that person will be displayed; otherwise it will show active mode.

Fig 34 Face recognised


3. After displaying the entire information, that person's attendance will be marked.

Fig 35 Attendance marked

4. If that person tries to mark their attendance again, they will be shown "already
marked". With the help of this functionality, a person cannot mark their attendance
repeatedly.

Fig 36 Already marked


5. After the entire process you can exit by pressing ‘q’, after which the feedback form
and Attendance Percentage will be displayed.

CHAPTER 10: CONCLUSION

10.1 Conclusion
Our “Face Recognition Based Attendance System” project can be used to build an
active class attendance system using face recognition techniques. In the traditional
pen-and-paper system, attendance can only be marked by the teacher. With this system,
attendance is marked for each recognized student and the attendance record is updated
automatically.

10.2 Applications
A face recognition-based attendance system utilizes computer vision and machine
learning to detect and recognize faces from a camera. Applications of this system
include:
 Schools and Universities: manage student attendance for regular classes,
exams, and events.
 Events and Conferences: monitor attendee participation and manage access to
different sessions or areas.
 Gyms and fitness centres: monitor each member's attendance and track usage of
facilities.
 Hospitals and Healthcare: ensure accurate tracking of staff attendance.
 Airports and Transportation hubs: enhance security by verifying the identity
of passengers and staff.

10.3 Limitations
 Requires high-quality cameras to accurately recognize human faces.
 This system has limitations in its ability to detect spoofed facial images.
 Requires a robust database storage system to efficiently manage the large amount
of facial data and attendee information.

 Needs a strong cloud computing infrastructure, which affects the scalability and
accessibility of the face recognition attendance system.

10.4 Future Scope

 We can design a mobile application that allows users to access attendance
records, receive notifications, and even mark attendance remotely using
smartphones.
 We can add an anti-spoofing feature that aims to prevent unauthorized access
using fake facial images.
 We can increase the accuracy of the system by using a robust cloud system as
well as a strong database to make the system more efficient.

