SUBMITTED BY:
DECLARATION
I hereby declare that the Project Report entitled “Face Recognition based Attendance
System” is an authentic record of my own work, carried out as a project requirement
during the period from January to June, for the award of the degree of B.Tech. (CSE)
at I.K. Gujral Punjab Technical University Campus, Hoshiarpur, under the guidance of
Dr. Brijesh Bakariya.
Date: ____________________
Certified that the above statement made by the student is correct to the best of our
knowledge and belief.
Signatures
Examined by:
1. 2. 3. 4.
Head of Department
(Signature and Seal)
Acknowledgment
I would like to thank Director Dr. Yadwinder Singh Brar, I.K.G. PTU Campus.
I also owe my sincerest gratitude to Dr. Brijesh Bakariya for his valuable advice
and healthy criticism throughout my project, which helped me immensely in completing
my work successfully. I would also like to thank everyone who has knowingly or
unknowingly helped me throughout my work. Last but not least, a word of thanks to the
authors of all the books and papers I consulted during my project work and while
preparing this report.
Mansi (2124140)
TABLE OF CONTENTS
1.2 Objective 9
1.3 Existing System 9
1.4 Proposed System 9
5.3 Working 22
Chapter 6: Libraries and Module Used
6.1 Libraries used 30
6.2 Modules used 31
LIST OF FIGURES
Figure 1. 11
Figure 2. 12
Figure 3. 12
Figure 4. 13
Figure 5. 13
Figure 6. 14
Figure 7. 20
Figure 8. 23
Figure 9. 23
Figure 10. 24
Figure 11. 24
Figure 12. 25
Figure 13. 25
Figure 14. 26
Figure 15. 26
Figure 16. 27
Figure 17. 27
Figure 18. 28
Figure 19. 28
Figure 20. 29
Figure 21. 29
Figure 22. 33
Figure 23. 34
Figure 24. 35
Figure 25. 36
Figure 26. 37
Figure 27. 39
Figure 28. 39
Figure 29. 40
Figure 30. 41
Figure 31. 41
Figure 32. 42
Figure 33. 43
Figure 34. 43
Figure 35. 44
Figure 36. 44
CHAPTER 1: INTRODUCTION
The basic idea behind our project is to create a system based on face recognition
that recognizes a student's face and compares it with the images already stored in
our database. In our project we use Firebase, a cloud computing platform, as a
backend-as-a-service. If the captured image matches an existing image, the attendance
is marked and stored automatically in our database without human intervention.

Apart from that, our face recognition attendance system also maintains records such
as the total attendance of each student, their grade according to their attendance,
their performance (good or bad), and student details such as name, roll number,
branch, and the stored image. All this information is maintained automatically by our
system. This reduces the faculty's burden and frees up productive time for teaching.

Our system also stores the time at which each student marks attendance and maintains
it automatically in our database. The system first detects the student's face and
only then marks attendance; this is an important safeguard, since only students who
are actually present can mark their attendance, which protects against proxy
attendance. The main objective is to reduce human work and maintain records honestly
through our machine learning model. The model marks attendance in a few seconds, so
it is far less time-consuming than register-based attendance.
1.2 Objective
1.3 Existing System
Many face recognition software systems have been implemented over the past decade,
each using different methods and algorithms. Some extract facial features from the
input image to identify the face. Others normalize a set of face images, compress the
face data, save it as a single image, and then compare it with the face data of the
input.
1.4 Proposed System
In our system, multiple students can mark their attendance at the same time. All the
students of the class must register themselves by entering the required details,
after which their images are captured and stored in the dataset. During each session,
faces are detected from a live video stream of the classroom and compared with the
images present in the dataset. If a match is found, attendance is marked for the
respective student. The task of the proposed system is to capture the face of each
student and store it in the database for attendance. The face needs to be captured in
such a manner that all the features of the student's face are detected; even the
seating position and posture of the student need to be handled. There is no need for
the teacher to take attendance manually, because the system records a video and,
through further processing steps, recognizes each face and updates the attendance
database.
CHAPTER 2: TOOLS & TECHNOLOGY
Python can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.
2.2 Technologies
The system we used in order to create our project has the following specification:
CHAPTER 3: DATASET DESCRIPTION
Fig 1 Dataset
This figure shows the images of the students that we are using as the dataset; each
image is saved under the student's unique ID (roll number). The dataset contains
images in different formats, such as .jpg and .webp, which are converted into a
single format during data preprocessing.
Fig 2 Addition of data in database
1. Iterating Over Images: The function iterates over each image in imagesList using
a for loop.
2. Color Conversion: Inside the loop, each image (img) is converted from the BGR
color space to the RGB color space using cv2.cvtColor(img, cv2.COLOR_BGR2RGB). This
conversion ensures compatibility with the face_recognition library, which expects
images in RGB format.
3. Face Detection and Encoding: After converting the image to RGB format, the
face_recognition.face_encodings() function is called to detect faces in the image and
generate face encodings. face_encodings() returns a list of face encodings for all
faces detected in the image. Since each image is assumed to contain only one face,
[0] is used to select the encoding of the first (and presumably only) face in the
list.
4. Appending to Encode List: The generated face encoding (encode) is appended to
encodeList.
5. Returning Encode List: Once all images in imagesList have been processed, the
function returns the list of face encodings (encodeList).
These face encodings are then used for face recognition.
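The steps above can be sketched as follows. This is a minimal sketch, not the project's exact script: the `encode_fn` parameter is a hypothetical stand-in for the cv2.cvtColor and face_recognition.face_encodings calls, so that the loop structure is runnable even without those libraries installed.

```python
def find_encodings(images_list, encode_fn):
    """Return one face encoding per image in images_list.

    encode_fn abstracts the real pipeline used in the project:
        rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        encodings = face_recognition.face_encodings(rgb)
    It must return a list of encodings found in the image.
    """
    encode_list = []
    for img in images_list:                # 1. iterate over the images
        encodings = encode_fn(img)         # 2-3. convert colors, detect and encode
        encode_list.append(encodings[0])   # 4. assume exactly one face per image
    return encode_list                     # 5. return all encodings
```

With the real libraries installed, encode_fn would be something like `lambda img: face_recognition.face_encodings(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))`.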
ResNet, short for Residual Network, is based on the concept of residual learning or skip
connections. The key idea behind ResNet is to address the degradation problem that
arises when deep neural networks are trained.
Here's a brief explanation of the main concept:
The core idea is to learn the residual F(x) = H(x) − x instead of directly learning
the underlying mapping H(x), where H(x) represents the desired underlying mapping
and x represents the input to a given layer.
By using these residual connections, the network can effectively bypass certain
layers during training if those layers do not contribute significantly to the desired
mapping. This enables the training of much deeper networks without suffering
from degradation.
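As an illustration, a single residual block can be written in a few lines of NumPy. This is a toy sketch (two small linear layers with ReLU activations) meant only to show the skip connection, not the project's actual network:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    """Compute relu(F(x) + x): the block learns the residual F(x),
    and the skip connection adds the input x back to the output."""
    fx = w2 @ relu(w1 @ x)   # F(x): a small two-layer transformation
    return relu(fx + x)      # skip connection: output = relu(F(x) + x)

# If both weight matrices are zero, F(x) = 0 and the block simply
# passes a non-negative input straight through, illustrating how
# layers that contribute nothing can be bypassed.
x = np.array([1.0, 2.0, 3.0])
zeros = np.zeros((3, 3))
print(residual_block(x, zeros, zeros))  # → [1. 2. 3.]
```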
4.3 Architecture:
ResNet architectures typically consist of multiple residual blocks, each containing
several convolutional layers along with shortcut connections.
The residual blocks can have different configurations, such as bottleneck structures
for deeper networks or plain structures for smaller networks.
4.4 Benefits:
ResNet's use of residual connections enables the training of extremely deep neural
networks, reaching hundreds or even thousands of layers while maintaining or
improving performance.
The skip connections help alleviate the vanishing gradient problem by providing
a direct path for gradient flow during backpropagation, facilitating easier training
of deep networks.
With the deep layers of ResNet, the model can learn discriminative features
at different levels of abstraction, enabling accurate face recognition even in
challenging conditions.
2. Robustness to Variations:
4. Easy Integration:
ResNet models are readily available and can be easily integrated into
existing deep learning frameworks like TensorFlow or PyTorch.
In summary, ResNet's residual learning concept, coupled with its deep architecture
and skip connections, makes it an ideal choice for face recognition-based
attendance systems. Its ability to learn complex features, robustness to variations,
high accuracy, and ease of integration contribute to its effectiveness in accurately
identifying individuals from facial images captured in real-time video streams.
4.6 Where Res-Net is utilized in Face Recognition Based Attendance
System
In our project, a face recognition based attendance system, ResNet can be utilized
in the following manner:
1. Feature Extraction:
2. Transfer Learning:
You can leverage a pre-trained ResNet model that has been trained on a
large-scale dataset like ImageNet.
3. Face Embeddings:
Once a facial image is passed through the ResNet model, the output from
one of the intermediate layers (before the fully connected layers) can be
considered as a face embedding or feature vector.
4. Similarity Comparison:
During the recognition phase, the face embeddings extracted from new
facial images (captured in real-time) are compared with the embeddings of
known faces stored in the system.
5. Classification or Identification:
The ResNet model used for face recognition can be trained and updated
periodically to adapt to changes in facial appearance, lighting conditions, or
new individuals entering the system.
Training data can be collected over time to enhance the model's performance
and accuracy.
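The similarity comparison described above can be sketched with NumPy. This assumes embeddings are fixed-length vectors (face_recognition produces 128-dimensional ones); the 2-dimensional vectors, IDs, and threshold below are illustrative values, not the project's data:

```python
import numpy as np

def identify(new_embedding, known_embeddings, known_ids, threshold=0.6):
    """Compare a new face embedding against the stored embeddings.

    Returns the ID of the closest known face if its Euclidean
    distance is below the threshold, otherwise None (unknown face).
    """
    dists = np.linalg.norm(known_embeddings - new_embedding, axis=1)
    best = int(np.argmin(dists))                       # closest stored face
    return known_ids[best] if dists[best] < threshold else None

known = np.array([[0.0, 0.0], [1.0, 1.0]])
ids = ["2124140", "2124141"]
print(identify(np.array([0.1, 0.0]), known, ids))  # → 2124140
print(identify(np.array([5.0, 5.0]), known, ids))  # → None
```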
Fig 7 workflow
This workflow illustrates the sequential steps involved in using ResNet for face
recognition in our attendance system, from data collection and preprocessing to
feature extraction, recognition, and model updates.
CHAPTER 5: Face Recognition based Attendance System
5.1 Definition
5.2 Uses
5.3 Working
The points given below briefly describe the working of our project:
1. Importing Libraries: First, we import the libraries required by our project:
cv2: OpenCV library for image and video processing.
os: Operating system module for file operations.
pickle: Python module for object serialization.
numpy: Numerical computing library for handling arrays and matrices.
face_recognition: Library for face detection and recognition.
cvzone: Additional OpenCV-based library for computer vision tasks.
firebase_admin: Firebase SDK for Python, used for Firebase integration.
datetime: Module for handling dates and times.
csv: Module for reading and writing CSV files.
2. Firebase Initialization:
Firebase is a platform for building web and mobile applications. In this script,
Firebase is initialized using a service account key ("serviceAccountKey.json") to
access the Firebase Realtime Database and Storage.
Fig 9 Initializing database
Fig 10 Firebase
Fig 11 Frame
Fig 13 Modes
Fig 15 Loading Encoding files
6. Main Loop:
The script enters a main loop where it continuously captures frames from the video
feed using cap.read().
Each frame is resized and converted to RGB format.
Face locations and encodings are computed using face_recognition.face_locations
and face_recognition.face_encodings, respectively.
Face encodings are compared with the precomputed encodings to identify known
faces.
Fig 16 Encodings
In this figure the generated encodings are compared with the face detected by the
webcam, and the face_distance is calculated. If the face_distance is greater than
the matching threshold, the faces are not similar; if it is less, the faces are
similar (a smaller distance means a closer match).
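The decision rule described here (smaller face_distance means more similar faces) mirrors what face_recognition.compare_faces does: compute Euclidean distances and compare against a tolerance (0.6 is the library's default). A minimal NumPy equivalent, with illustrative encodings:

```python
import numpy as np

def compare_faces(known_encodings, candidate, tolerance=0.6):
    """Return one True/False per known encoding: True when the
    Euclidean face distance to the candidate is within the tolerance."""
    face_distances = np.linalg.norm(known_encodings - candidate, axis=1)
    return list(face_distances <= tolerance)

known = np.array([[0.0, 0.0], [1.0, 0.0]])
print(compare_faces(known, np.array([0.1, 0.0])))  # → [True, False]
```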
Fig 17 Face Distance
Figure 17 shows the face_distance values and indicates how similar the faces are.
7. Attendance Tracking:
When a known face is detected, the script retrieves the corresponding student
information from Firebase using db.reference.
It updates the attendance information for the detected student if certain conditions
are met, such as checking the elapsed time since the last attendance record.
Attendance percentage is calculated based on the total attendance count for each
student.
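The update logic above can be sketched in plain Python. The field names (total_attendance, last_attendance_time) and the 30-second minimum gap are assumptions for illustration; in the project this state lives in the Firebase Realtime Database:

```python
from datetime import datetime

ELAPSE_SECONDS = 30  # assumed minimum gap between two attendance marks

def mark_attendance(student, now=None):
    """Increment a student's attendance count only if enough time
    has passed since their last recorded attendance."""
    now = now or datetime.now()
    last = datetime.strptime(student["last_attendance_time"], "%Y-%m-%d %H:%M:%S")
    if (now - last).total_seconds() > ELAPSE_SECONDS:
        student["total_attendance"] += 1
        student["last_attendance_time"] = now.strftime("%Y-%m-%d %H:%M:%S")
        return True
    return False  # too soon: treat as "already marked"

def attendance_percentage(total_attendance, total_classes):
    """Attendance percentage from the attendance count and class count."""
    return round(100.0 * total_attendance / total_classes, 2)
```

For example, a second attempt ten seconds after a successful mark returns False, which is how repeated marking within one session is rejected.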
Fig 18 represents all the information of the student when the face is detected along with
Attendance Percentage.
Fig 19 Records in database
8. Feedback Mechanism:
The script prompts the user to provide feedback.
Feedback, along with the roll number (presumably the student's ID), is stored in a
CSV file ('feedback.csv') using csv.writer.
After all the information regarding the student has been displayed, a feedback
option is provided.
Feedback is collected in a csv file.
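The feedback logging described above amounts to a few lines with the standard csv module. The file name and the roll-number and feedback columns follow the description in the text; the timestamp column is an assumption added for illustration:

```python
import csv
from datetime import datetime

def save_feedback(roll_no, feedback, path="feedback.csv"):
    """Append one feedback row (roll number, text, timestamp) to a CSV file."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([roll_no, feedback,
                         datetime.now().strftime("%Y-%m-%d %H:%M:%S")])

save_feedback("2124140", "System worked smoothly")
```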
This script combines various technologies including face recognition, real-time video
processing, Firebase integration, and user feedback collection to create a face
recognition-based attendance system with additional features.
CHAPTER 6: LIBRARIES AND MODULE USED
Several libraries and frameworks are commonly used in the development of face
recognition-based attendance systems. Here are some of the key ones:
Sklearn:
Sklearn is used because its preprocessing techniques can help improve the
accuracy of the face recognition model by ensuring that the input data is in a
suitable format for training and inference.
OS (Operating System Interface): The ‘os’ module in Python provides a way to
interact with the operating system. It can be used for tasks like file and directory
manipulation, which is useful for handling datasets and managing files in a project.
Pickle: The ‘pickle’ module in Python is used for serializing and deserializing
objects. It is commonly employed to save and load machine learning models, making it
useful for storing trained face recognition models.
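For example, the generated face encodings can be pickled once and reloaded at startup instead of being recomputed. A minimal sketch; the file name EncodeFile.p is an assumption about the project's layout:

```python
import pickle

def save_encodings(encodings_with_ids, path="EncodeFile.p"):
    """Serialize the [encodings, student_ids] pair to disk."""
    with open(path, "wb") as f:
        pickle.dump(encodings_with_ids, f)

def load_encodings(path="EncodeFile.p"):
    """Deserialize the pair saved by save_encodings()."""
    with open(path, "rb") as f:
        return pickle.load(f)
```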
Datetime: This module is used to capture the current date and time when an
individual's attendance is recognized, creating a timestamp.
CHAPTER 7: Code Implementation
->ImagesAttendance
-> Resources
->AddDatatoDatabase.py
->EncodeGenerator.py
->Blink_NewAttendance.py
->Accuracyvisualize.py
->Feedback.csv
7.1 AddDatatoDatabase.py
Here All the Images and Data related to students will be saved in Firebase.
Here we will generate key, value pair to store the information in database.
Key will be their roll no and value will contain all the information.
7.2 Blink_NewAttendance.py
All the working of the code is done here: displaying the webcam, detecting
faces, and marking attendance.
Figure 23
7.3 EncodeGenerator.py
Here all the images from storage are taken, feature extraction is performed,
and encodings are generated so that images can be compared easily.
7.4 accuracyvisualize.py
Here the performance of the project is evaluated by comparing predicted labels
and true labels.
CHAPTER 8: PERFORMANCE EVALUATION
8.1 Metrics
Performance of our project:
Accuracy Metrics:
To gauge the system's performance, accuracy metrics such as precision, recall, and
F1 score were computed. These metrics quantified the system's ability to correctly
identify individuals under varying conditions, offering valuable insights into its
robustness and reliability.
In our project we have used the accuracyvisualize.py file to evaluate its
performance.
Fig 26 (accuracyvisualize.py)
In the provided code, performance is evaluated through several key steps and metrics:
1. Face Recognition and Attendance Tracking: The main functionality of the code
is to recognize faces in a video stream captured by a camera and track attendance
based on the recognized faces. This is achieved using the face_recognition library
to detect and encode faces, and then comparing these encoded faces with a pre-
existing list of known faces.
3. Accuracy Score: The code calculates the accuracy score using the
accuracy_score function from sklearn.metrics. It compares the true labels
(ground truth student IDs) with the predicted labels (student IDs recognized by the
system) to determine the overall accuracy of face recognition.
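The accuracy computation reduces to the fraction of positions where the two label lists agree. A pure-Python sketch equivalent to sklearn.metrics.accuracy_score for flat label lists; the roll numbers below are illustrative:

```python
def accuracy_score(true_labels, predicted_labels):
    """Fraction of predictions matching the ground truth, as computed
    by sklearn.metrics.accuracy_score for 1-D label sequences."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

true = ["2124140", "2124141", "2124142", "2124143"]
pred = ["2124140", "2124141", "2124145", "2124143"]  # one misrecognition
print(accuracy_score(true, pred))  # → 0.75
```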
6. Output: Finally, the code prints out key metrics such as the number of true labels,
number of predicted labels, attendance percentage, accuracy, and the confusion
matrix.
By evaluating these metrics, the code provides a comprehensive assessment of the face
recognition system's performance in terms of accuracy, attendance tracking, and the
ability to correctly identify individuals.
This figure shows the accuracy obtained by comparing the true labels with the
predicted labels, and also provides a confusion matrix.
Attendance Percentage:
Attendance tracking extends beyond mere identification, encompassing the
calculation of attendance percentages based on recorded attendance counts and the
total number of scheduled classes. This metric provides educators with a holistic view
of students' attendance patterns, facilitating informed decision making and
intervention strategies.
Feedback Mechanism:
Acknowledging the importance of user feedback in iterative development, the system
incorporated a user feedback mechanism. Users were empowered to provide input on
system performance, usability, and overall experience. Feedback data, including
timestamps and associated user IDs, were systematically logged and analyzed,
guiding future enhancements and refinements.
Fig 30 Feedback Mechanism
CHAPTER 9: SCREENSHOTS AND IMAGES
9.1 FLOWCHARTS
Fig 32 Flowchart
9.2 Final Output of the Project (Screenshots)
1. Firstly Webcam will be displayed to detect the person and along with it a Graphical
User Interface will be displayed.
2. If a person from the Database Storage is detected then all the information regarding
that person will be displayed otherwise it will show active mode.
4. If the person tries to mark their attendance again, an “already marked” message
is shown; with this functionality a person cannot mark their attendance repeatedly.
CHAPTER 10: CONCLUSION
10.1 Conclusion
Our “Face Recognition based Attendance System” project can be used to build an
active class attendance system using face recognition techniques. In the traditional
pen-and-paper system, attendance is marked only by the teacher. With this system,
attendance of each recognized student is marked and the attendance record is updated
automatically.
10.2 Applications
A face recognition based attendance system utilizes computer vision and machine
learning to detect and recognize faces from a camera. Applications of this system
include:
Schools and Universities: Manage student attendance for regular classes, exams,
and events.
Events and Conferences: Monitor attendee participation and manage access to
different sessions or areas.
Gyms and fitness centres: Monitor each member's attendance and track usage of
facilities.
Hospitals and Healthcare: Ensure accurate tracking of staff attendance.
Airports and Transportation hubs: Enhance security by verifying the identity
of passengers and staff.
10.3 Limitations
Requires high-quality cameras to accurately recognize human faces.
The system is limited in its ability to detect spoofed facial images.
Requires a robust database storage system to efficiently manage the large amount
of facial data and attendee information.
Needs a strong cloud computing infrastructure, which affects the scalability and
accessibility of the face recognition attendance system.
10.4 Future Scope
We can design the system so that a mobile application allows users to access
attendance records, receive notifications, and even mark attendance remotely
using smartphones.
We can add an anti-spoofing feature, which aims to prevent unauthorized access
using fake facial images.
We can increase the accuracy of the system by using a robust cloud system as well
as a strong database to make the system more efficient.