Final Project Work
INTRODUCTION
An attendance management system is a necessary tool in any environment where attendance is critical. However, most existing approaches are time-consuming, intrusive, and require manual work from the users. This project is aimed at developing a less intrusive, cost-effective and more efficient automated student attendance management system using face recognition.
Maintaining attendance is very important in all educational institutions. Every institution has its own method of taking student attendance. Some institutions use a paper-based approach, while others have adopted automated methods such as fingerprint biometric techniques. However, these methods require students to wait in a queue, which consumes time and is intrusive. Humans often use faces to recognise individuals, and advances in computing capability over the past few decades now enable similar recognition to be performed automatically. Face recognition technology is one of the least intrusive and fastest growing biometric technologies (Sevy, 2007). It works by identifying people using the distinctive characteristics of their faces. Face recognition has characteristics that other biometrics do not have: facial images can be captured from a distance, and no special action is required for authentication. Due to these characteristics, the face recognition technique is applied widely, not only to security applications but also to image indexing, image retrieval and natural user interfaces. Faces are highly challenging and dynamic objects that are employed as biometric evidence in identity verification. Biometric systems have proven to be an essential security tool, in which bulk matching of enrolled people against watch lists is performed every day. The importance of developing new solutions to improve the performance of face identification methods has been highlighted, and Local Binary Patterns Histograms (LBPH) (1996) is one of the most promising technology paradigms that can be used to achieve it. Face recognition technology has also been used for both verification and identification of students in a classroom. This project is aimed at developing a less intrusive, cost-effective and more efficient automated student attendance management system for Bingham University, leveraging the Local Binary Patterns Histograms (LBPH) algorithm.
Face recognition is an important biometric method; it automatically identifies or verifies a person from a digital image or video source by comparing selected facial features. It is a form of identity access management and access control. Moreover, face recognition is considered a passive and non-intrusive approach to verifying and identifying people.
Facial recognition has been used in several areas, such as security and the detection of criminals or suspects. It has been a means of authentication, access control and identity management in some private corporations, but it has rarely been applied in tertiary institutions in developing countries to take attendance automatically. That is, the technology has seen low adoption in educational institutions.
In view of this, there is a need for an automated system for taking attendance at both lectures and examinations. This project considers the facial recognition system among other biometric systems. The automated facial recognition system proposed in this project will take attendance automatically by capturing the student's facial image and performing the necessary verification. The facial recognition system is envisaged to provide a suitable and reliable way of detecting the faces of students, taking attendance and verifying the studentship of each person. Face recognition is one of the few biometric methods that possess the merits of both high accuracy and low intrusiveness.
Bingham University has a large number of students, which makes attendance recording on paper stressful and ineffective.
The aim of this project is to design an automated system for recording student attendance in both lectures and examinations that would prove to be a better alternative to the manual paper-based attendance recording system in Bingham University.
The goal of this project is to develop an automated attendance recording system for both lectures and examinations; the design and implementation are limited to Bingham University. Two scopes are considered in this project work. The first is the use of a suitable camera that can perform according to requirements, which enables the facial verification process; for this project, a laptop's inbuilt camera is used. The second is the design of the platform into which the automated attendance recording system using facial recognition is incorporated. The programming language of choice employed to bring this project to light is the Python programming language, with Visual Studio Code as the editor.
The results of this project will be of great benefit to the students and to the institution of Bingham University as a whole, in having and maintaining a suitable automated attendance recording system that increases the institution's productivity and efficiency. It will eliminate the stress endured just to take attendance, provide a better and more accurate record of attendance, and create room for a faster and more efficient operational process that reduces the time spent in the queues bred by the manual or traditional attendance-taking method, while giving the institution quick, easy and affordable access to the recorded attendance.
Attendance recording in Bingham University using the manual paper-based approach can be stressful and inefficient. An automated way of recording attendance can help improve attendance recording in Bingham University, making it faster and more efficient.
1.7 SUMMARY
Most of the existing attendance recording systems are time-consuming and require semi-manual work from the teacher or students, such as calling out student IDs or passing an attendance sheet around the class during lecture time. The proposed system aims to solve the above-mentioned problems by integrating face recognition into the process of attendance management, so that it can be used during exams or lectures to save effort and time. Facial recognition systems have been implemented by other researchers as well, but they still have limitations regarding functionality, accuracy, lighting conditions and so on, which the proposed system is intended to address. The system takes attendance by using a camera to acquire images of individuals, detecting the faces in the images, and comparing them with the enrolled faces in the database. When a registered student's face is identified in the acquired images, the student is marked present in the attendance register; otherwise, the student is marked absent. The system is designed to be cost-effective, with no vendor-specific hardware or software required for deployment.
CHAPTER TWO
LITERATURE REVIEW
2.1 INTRODUCTION
Face detection is defined as finding the position of the face of an individual. In other words, it can be defined as locating the face region in an image. After a human face is detected, its facial features are extracted; this has a wide range of applications such as facial expression recognition, face recognition, observation systems, human-computer interfaces and so forth. Detecting a face in an image of a single person is easy, but when we consider an image containing multiple faces, the task becomes difficult.
For the application of face recognition, the detection of the face is very important and is the first step. Only after a face has been detected can the face recognition algorithm operate. Face detection itself involves some complexities, for example surroundings, pose and illumination.
Many systems have been developed in engineering colleges and industries to keep track of attendance. The developed systems are good, but they have performance and stability problems. The developed systems are:
1) Biometric-based system
2) Bluetooth-based system
3) RFID-based system
2) Bluetooth-based system
This system has high usability, and proxy-removal methods can be included to make the system perfect. However, the system is not scalable: Bluetooth does not allow more than eight active connections at a time because of its master-and-slave concept. This limitation makes it feasible only for a limited population.
Most face recognition systems rely on face recognition algorithms to complete their functional tasks (Shang-Hung Lin, 2000).
Face detection, or a face detector, will detect any given face in the given image or input video. Face localization will detect where the faces are located in the given image/video by the use of bounding boxes. Face alignment is when the system finds a face and aligns landmarks such as the nose, eyes, chin and mouth for feature extraction. Feature extraction extracts key features such as the eyes, nose and mouth for tracking. Feature matching and classification matches a face based on a trained data set of pictures from a database of about 200 pictures. Face recognition gives a positive or negative output for a recognized face based on feature matching and classification against a referenced facial image. Face detection is the process of locating a face in a digital image by special computer software built for this purpose. Feraud et al. (2000) describe face detection as follows: "To detect a face in an image means to find its position in the image plane and its size or scale." The detection of a face in a digital image is a prerequisite to any further process in face recognition or any face processing software. In early years, face detection algorithms focused mainly on the frontal part of the human face (Srinivasan, Golomb and Martinez, 2016). However, in recent years, Cynganek (2013) suggests that newer algorithms take different perspectives into consideration for face detection. Researchers have used such systems, but the biggest challenge has been to make a system detect faces irrespective of different illumination conditions. This is based on a study by Castrillón et al. (2011) on the Yale database, which contains higher-resolution images of 165 frontal faces. Face detection is often classified into different methods. In order to tackle the first major problem of the project (detecting students' faces), a wide range of techniques has been researched. These face detection techniques/methodologies have been proposed by many different researchers and are often classified into major categories of different approaches.
Face recognition should not be confused with face detection. Face detection concerns finding faces in images amongst many other objects, whereas face recognition concerns what the face looks like after it has been found. Therefore, face detection focuses on the where-question and face recognition on the who/what-question. There are two approaches to studying face recognition systems. One is to build a system ourselves in the computer, based on computer vision knowledge; this is called automatic face recognition. By looking at the performance of the system we can figure out which techniques work well and which do not, and by figuring out which techniques work, we gain an understanding of face recognition. The other approach is to study the existing and best-working face recognition system: the human brain. Studying the brain can give us fruitful insights into approaches that can then be used in automatic face recognition. The human brain is, however, not as accessible as a computer; studying this system can only be done using carefully designed experiments or brain scans with very coarse resolutions relative to the size of neurons (M.F. Stollenga, 2011).
There are good surveys on face recognition research (Tolba et al. 2005, Zhao et al. 2003). A distinction is made in (Zhao et al. 2003) between holistic and feature-based methods:
a) Holistic methods, such as principal component analysis, take the complete face image as input and map it into a lower dimension. They create this lower dimension by finding typical faces (e.g. eigenfaces) and describe face images as a (linear) combination of these typical faces. Using the full face as input allows these methods to use all the information in the face, from its general structure to the type of eyes, and to model relationships between that information. However, they have no natural way to deal with changing positions or orientations of the face, other than creating new typical faces for every condition. The number of typical faces needed increases exponentially with the number of variables that can change in a face, giving rise to scaling problems.
b) Feature-based methods first look at a face image locally to extract features that describe the image. These low-level features are aggregated into one representation that is then used for classification. This makes it easier to be invariant to variability, because the aggregation step provides a natural generalization. However, these methods have difficulty combining information from different parts of the face, making it harder to model the global structure of the face. The approach also requires more steps and is therefore more complex and less restricted: the type of features used, the size of the features and the aggregation step can all be done in many different ways, allowing for more creativity in application but also making the search space of models bigger.
We humans are all experts in face recognition. No system in the world, digital or otherwise, can recognize faces better than we can. It seems that no matter how transformed and obscured a face is, as long as there is enough information left, we can recognize it. Human face recognition has been studied by psychologists and neuroscientists for many years. Most of this research, however, focuses on immediate recognition (Serre et al. 2007). Immediate recognition comprises recognition tasks that can be evaluated in about 150 ms. This ensures that the subject does not have time to use complex feedback processes to come to a result; they do not even have time to move their eyes in response to what they see. The experimenter gets very clean data on the low-level recognition process, at the cost of leaving out the more complex and perhaps interesting interactions in the brain. In a paper by Sinha et al. (2006), which has the subtitle "Nineteen Results all Computer Vision Researchers Should Know About", important results from face recognition are shown. For example, humans can recognize faces in extremely low-resolution images. Because the images we are using are 64 by 64 pixels (low resolution), it is good to know that at least humans can recognize the faces. In Barbeau et al. (2007), experiments show that humans are very fast in a face recognition task where the subject has to decide whether a presented face is a famous person or not. It is so fast that the researchers suggest that face recognition is a one-way process that does not interact with the data. This is against the ideas in this thesis to increase the amount of interaction with data. However, it should be noted that the images were selected in such a way that they were not very confusing. Also, the task of classifying someone as famous or not famous is known to be very easy for humans. We expect that interaction is needed when images start to get confusing and tasks are not so clearly defined. It is at those times that we have to look at an image again, in response to what we already see, in order to understand the image correctly.
2.3 FACE RECOGNITION ALGORITHMS
There are different types of face recognition algorithms; some of them include:
1) Eigenfaces (1991)
Eigenfaces refers to an appearance-based approach to face recognition that seeks to capture the
variation in a collection of face images and use this information to encode and compare images of
individual faces in a holistic (as opposed to a parts-based or feature-based) manner. Specifically, the
eigenfaces are the principal components of a distribution of faces, or equivalently, the eigenvectors
of the covariance matrix of the set of face images, where an image with N pixels is considered a
point (or vector) in N-dimensional space. The idea of using principal components to represent
human faces was developed by Sirovich and Kirby (Sirovich and Kirby 1987) and used by Turk and
Pentland (Turk and Pentland 1991) for face detection and recognition. The Eigenface approach is
considered by many to be the first working facial recognition technology, and it served as the basis
for one of the top commercial face recognition technology products. Since its initial development
and publication, there have been many extensions to the original method and many new
developments in automatic face recognition systems. Eigenfaces is still often considered a baseline comparison method to demonstrate the minimum expected performance of such a system.
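The idea above can be illustrated concretely: the eigenfaces of a training set are the principal components of the mean-centred image vectors, obtainable from a singular value decomposition. The sketch below is a minimal numpy illustration, not any particular system's code; the function names are chosen for this example:

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k eigenfaces from a stack of flattened face images.

    images: array of shape (n_samples, n_pixels); k: number of components.
    Returns (mean_face, basis) where basis has shape (k, n_pixels).
    """
    X = images.astype(float)
    mean = X.mean(axis=0)
    A = X - mean                      # centre the data
    # SVD of the centred data yields the eigenvectors of the covariance matrix
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, basis):
    """Encode a face as its coordinates in the eigenface subspace."""
    return basis @ (face.astype(float) - mean)
```

Recognition then reduces to comparing the projected coordinate vectors of a probe face and the enrolled faces, for example by Euclidean distance.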
It has been determined that when LBP is combined with the Histogram of Oriented Gradients (HOG) descriptor, detection performance improves considerably on some datasets. A comparison of several improvements of the original LBP in the field of background subtraction was made in 2015 by Silva et al. A full survey of the different versions of LBP can be found in Bouwmans et al.
3) Fisherfaces (1997)
According to Aleix Martinez (2011), the key problem in computer vision, pattern recognition and machine learning is to define an appropriate data representation for the task at hand.
One way to represent the input data is by finding a subspace which represents most of the data
variance. This can be obtained with the use of Principal Components Analysis (PCA). When applied
to face images, PCA yields a set of eigenfaces. These eigenfaces are the eigenvectors associated with the largest eigenvalues of the covariance matrix of the training data. The eigenvectors thus found
correspond to the least-squares (LS) solution. This is indeed a powerful way to represent the data
because it ensures the data variance is maintained while eliminating unnecessary existing
correlations among the original features (dimensions) in the sample vectors.
When the goal is classification rather than representation, the LS solution may not yield the most
desirable results. In such cases, one wishes to find a subspace that maps the sample vectors of the
same class in a single spot of the feature representation and those of different classes as far apart
from each other as possible. The techniques derived to achieve this goal are known as discriminant
analysis (DA).
The most known DA is Linear Discriminant Analysis (LDA), which can be derived from an idea
suggested by R.A. Fisher in 1936. When LDA is used to find the subspace representation of a set of
face images, the resulting basis vectors defining that space are known as Fisherfaces.
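For the simplest case of two classes, Fisher's criterion gives the projection direction w ∝ Sw⁻¹(m_a − m_b), where Sw is the within-class scatter and m_a, m_b are the class means. The sketch below is a hedged numpy illustration of that idea (real Fisherfaces first reduce dimensionality with PCA so that Sw is invertible; here a small ridge term is added instead):

```python
import numpy as np

def fisher_direction(class_a, class_b, ridge=1e-6):
    """Two-class Fisher discriminant direction: w ~ Sw^-1 (m_a - m_b).

    class_a, class_b: arrays of shape (n_i, n_features) of flattened faces.
    The small ridge keeps the within-class scatter Sw invertible; in real
    Fisherfaces, PCA is applied first for the same reason.
    """
    m_a, m_b = class_a.mean(axis=0), class_b.mean(axis=0)
    Ca, Cb = class_a - m_a, class_b - m_b
    Sw = Ca.T @ Ca + Cb.T @ Cb                 # within-class scatter
    w = np.linalg.solve(Sw + ridge * np.eye(Sw.shape[0]), m_a - m_b)
    return w / np.linalg.norm(w)
```

Projecting samples onto w maps members of the same class close together while keeping the two class means as far apart as possible, which is exactly the goal of discriminant analysis described above.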
5) Speeded-Up Robust Features (SURF) (2006)
In computer vision, speeded up robust features (SURF) is a patented local feature detector and
descriptor. It can be used for tasks such as object recognition, image registration, classification or
3D reconstruction. It is partly inspired by the scale-invariant feature transform (SIFT) descriptor.
The standard version of SURF is several times faster than SIFT and claimed by its authors to be
more robust against different image transformations than SIFT.
To detect interest points, SURF uses an integer approximation of the determinant of Hessian blob
detector, which can be computed with 3 integer operations using a precomputed integral image. Its
feature descriptor is based on the sum of the Haar wavelet response around the point of interest.
These can also be computed with the aid of the integral image.
SURF descriptors have been used to locate and recognize objects, people or faces, to reconstruct 3D
scenes, to track objects and to extract points of interest.
SURF was first published by Herbert Bay, Tinne Tuytelaars, and Luc Van Gool, and presented at the
2006 European Conference on Computer Vision. An application of the algorithm is patented in the
United States.
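The integral image mentioned above is what makes SURF's box filters cheap: after one pass over the image, the sum of any axis-aligned rectangle costs four array lookups regardless of its size. A generic numpy sketch of the trick (an illustration only, not the patented SURF implementation):

```python
import numpy as np

def integral_image(img):
    """Zero-padded cumulative sums so that ii[r, c] = sum of img[:r, :c]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in four lookups, independent of box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

The approximated Hessian determinant and the Haar wavelet responses in SURF are both built from such box sums, which is why the precomputed integral image makes the detector fast.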
The original LBP operator labels the pixels of an image with decimal numbers, called LBPs or LBP codes, that encode the local structure around each pixel. Each pixel is compared with its eight neighbours in a 3 × 3 neighbourhood by subtracting the centre pixel value. The resulting strictly negative values are encoded with 0, and the others with 1. For each pixel, a binary number is obtained by concatenating these binary values in a clockwise direction, starting from its top-left neighbour. The corresponding decimal value of the generated binary number is then used for labelling the pixel. The derived binary numbers are referred to as the LBPs or LBP codes. One limitation of the basic LBP operator is that its small 3 × 3 neighbourhood cannot capture dominant features with large-scale structures. To deal with texture at different scales, the operator was later generalized to use neighbourhoods of different sizes. A local neighbourhood is defined as a set of sampling points evenly spaced on a circle centred at the pixel to be labelled; sampling points that do not fall within the pixels are interpolated using bilinear interpolation, thus allowing any radius and any number of sampling points in the neighbourhood.
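The basic 3 × 3 operator described above can be sketched directly. This is a minimal illustration for a single pixel; the bit ordering (clockwise from the top-left neighbour) follows the description in the text:

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP code of the centre pixel of a 3x3 patch.

    Neighbours are read clockwise starting from the top-left; each
    contributes 1 if it is >= the centre value, else 0.
    """
    centre = patch[1, 1]
    # clockwise from top-left: (0,0) (0,1) (0,2) (1,2) (2,2) (2,1) (2,0) (1,0)
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= centre else 0 for r, c in order]
    return sum(b << (7 - i) for i, b in enumerate(bits))
```

Applying this to every interior pixel yields the LBP-labelled image, whose histogram over a region serves as the texture descriptor used later in this section.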
Figure 2.4.2: Examples of the ELBP operator. The circular (8, 1), (16, 2), and (24, 3)
neighbourhoods
Figure 2.4.2 shows some examples of the extended LBP (ELBP) operator, where the notation (P, R) denotes a neighbourhood of P sampling points on a circle of radius R. Formally, given a pixel at (x_c, y_c), the resulting LBP can be expressed in decimal form as:

LBP_{P,R}(x_c, y_c) = sum_{p=0}^{P-1} s(i_p - i_c) * 2^p

where i_c and i_p are, respectively, the grey-level values of the central pixel and of the P surrounding pixels in the circular neighbourhood of radius R, and the function s(x) is defined as:

s(x) = 1 if x >= 0, and s(x) = 0 otherwise.
From the aforementioned definition, the basic LBP operator is invariant to monotonic grey-scale transformations, which preserve the pixel intensity order in local neighbourhoods. The histogram of LBP labels calculated over a region can be exploited as a texture descriptor. The operator LBP_{P,R} produces 2^P different output values, corresponding to the 2^P different binary patterns formed by the P pixels in the neighbourhood. If the image is rotated, the surrounding pixels in each neighbourhood will move correspondingly along the perimeter of the circle, resulting in a different LBP value, except for patterns consisting of only 0s or only 1s. In order to remove the rotation effect, a rotation-invariant LBP is proposed:

LBP^{ri}_{P,R} = min{ ROR(LBP_{P,R}, i) | i = 0, 1, ..., P-1 }

where ROR(x, i) performs a circular bitwise right shift on the P-bit number x, i times. The LBP^{ri}_{P,R} operator quantifies the occurrence statistics of individual rotation-invariant patterns, which correspond to certain micro-features in the image; hence, the patterns can be considered feature detectors. However, it was shown that such a rotation-invariant LBP operator does not necessarily provide discriminative information, since the occurrence frequencies of the individual patterns incorporated in LBP^{ri}_{P,R} vary greatly, and the quantization of the angular space at 45° intervals is crude.
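The rotation-invariant mapping can be sketched with plain bit operations: each code is replaced by the minimum value obtained over all circular shifts, so every rotated version of a pattern collapses to the same label. A minimal sketch for 8-bit codes (P = 8):

```python
def ror(x, i, bits=8):
    """Circular bitwise right shift of the `bits`-bit number x by i positions."""
    i %= bits
    mask = (1 << bits) - 1
    return ((x >> i) | (x << (bits - i))) & mask

def rotation_invariant_lbp(code, bits=8):
    """Map an LBP code to the minimum value over all circular rotations."""
    return min(ror(code, i, bits) for i in range(bits))
```

For example, every single-bit pattern (a lone bright neighbour at any position) maps to the same label, which is what makes the operator rotation-invariant.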
The most challenging component of a robust face detection system is the efficiency of the classifier. The system accepts as input a 25 × 15 pixel region of the image and generates an output of 1 for a face and −1 for a non-face. Making the classifier fast is addressed by decreasing the search space while preserving the details and perceptual quality of the image. The architecture is a multilayer neural network with one hidden layer, trained with a back-propagation algorithm. The input layer, which receives the Haar features, has as many nodes as the dimension of the feature vectors. The output layer has one node. The number of epochs for this experiment was 1000 and the error goal was 0.001. We assume that the training samples are represented by {p_1, t_1}, {p_2, t_2}, ..., {p_Q, t_Q}, where p_q = [p_q1, p_q2, ..., p_qr, ..., p_qR]^T ∈ ℝ^R is an input to the network. All the pixel values are read line by line to form an input pattern, which is a column vector in the R-dimensional space; R represents the total number of values in a face image, and p_qr represents the intensity value of the r-th feature in the q-th face image. The network is trained to produce an output of 1 for the face samples and −1 for the non-face samples. The training algorithm is standard error back-propagation. The process of training involves weight initialization, calculation of the activation units, weight adjustment, weight adaptation, and convergence testing.
Figure 2.4.1: Original image (top), normalized image (bottom)
All weights were initially set to small random values. Let v_ji represent the weight between the j-th hidden unit and the i-th input unit, and w_kj the weight between the k-th output unit and the j-th hidden unit. The activation units are calculated sequentially, starting from the input layer. The activations of the hidden and output units are calculated as follows:

y_j^(p) = f( sum_{i=1}^{I} v_ji z_i^(p) + v_j0 z_0 )
o_k^(p) = f( sum_{j=1}^{J} w_kj y_j^(p) + w_k0 y_0 )

where y_j^(p) is the activation of the j-th hidden unit and o_k^(p) is the activation of the k-th output unit for pattern p, and f is a sigmoid function. In this work, we scaled the inputs to [−1, 1]. K is the total number of output units, I is the total number of input units, and J is the total number of hidden units. v_j0 is the weight connected to the bias unit in the hidden layer. We adjusted the weights starting at the output units and recursively propagated error signals back to the input layer. The detected output o_k^(p) is compared with the corresponding target value t_k^(p), and the approximation error for a pattern is expressed as:

E(p) = (1/2) * sum_k ( t_k^(p) - o_k^(p) )^2
The minimization of the error E(p) requires the partial derivative of E(p) with respect to each weight in the network to be computed. The change in each weight is proportional to the corresponding derivative, with an added momentum term that is a function of the previous weight change:

Δw_kj(t) = -η ∂E(p)/∂w_kj + α Δw_kj(t-1)
Δv_ji(t) = -η ∂E(p)/∂v_ji + α Δv_ji(t-1)

where η is the learning rate, between 0 and 1 (we set it to 0.9); the momentum coefficient α is also set to 0.9; and t is the current time step. Δv_ji and Δw_kj are the weight adjustments. The network has a single output, so there is just one neuron in the output layer. In this work, we added a bias neuron to the network to offset the origin of the activation functions and allow rapid convergence of the training process. The bias has a constant activation value of −1. We trained the bias weights in the same way as the other weights. The functions that train the hidden and output units are given in (figure 2.4.5) and (figure 2.4.6), respectively,
z_0 = −1 and y_0 = −1; v_j0 is the weight to the bias unit in the hidden layer. In this work, we scaled the inputs to [−1, 1]. A green bounding box of size 25 × 15 is drawn around a detected face.
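The forward and backward passes described above can be sketched as a single training step. This is a generic illustration, not the original implementation: tanh is used as the squashing function f so that outputs span [−1, 1], the biases are folded into separate bias vectors rather than a constant −1 unit, and the momentum term is omitted for brevity:

```python
import numpy as np

def train_step(W1, b1, W2, b2, x, t, lr=0.9):
    """One back-propagation step for a one-hidden-layer tanh network.

    x: input vector; t: target in {-1, +1} (face / non-face).
    Returns updated weights and the squared error before the update.
    """
    # forward pass
    h = np.tanh(W1 @ x + b1)              # hidden activations
    o = np.tanh(W2 @ h + b2)              # output activation
    err = t - o
    # backward pass: propagate error signals toward the input layer
    delta_o = err * (1 - o ** 2)          # tanh'(z) = 1 - tanh(z)^2
    delta_h = (W2.T @ delta_o) * (1 - h ** 2)
    W2 = W2 + lr * np.outer(delta_o, h)
    b2 = b2 + lr * delta_o
    W1 = W1 + lr * np.outer(delta_h, x)
    b1 = b1 + lr * delta_h
    return W1, b1, W2, b2, float(err @ err)
```

Repeating this step over all training patterns until the error goal is reached corresponds to the weight adjustment, adaptation and convergence testing described in the text.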
2.5 SUMMARY
LBP (Local Binary Patterns) is one of the most powerful descriptors to represent local structures.
Due to its advantages, i.e., its tolerance of monotonic illumination changes and its computational
simplicity, LBP has been successfully used for many different image analysis tasks, such as facial
image analysis, biomedical image analysis, aerial image analysis, motion analysis, and image and
video retrieval. LBP is the algorithm used for this project. During the development of the LBP methodology, a large number of variations have been designed to expand its scope of application; these offer better performance as well as improved robustness in one or more aspects over the original LBP.
CHAPTER THREE
RESEARCH METHODOLOGY
A development methodology systematically organizes the best ways to develop systems efficiently. It includes, for example, descriptions of the work to be performed at each stage of the development process and the documents to be drafted. Multiple methodologies exist, which differ according to the viewpoint adopted. In terms of the development process, some example methodologies are waterfall development, spiral development, and agile software development. In terms of the design approach, some example methodologies are the process-oriented approach (POA), the data-oriented approach (DOA), the object-oriented approach (OOA), and the service-oriented approach (SOA).
The prototype model is a software development model. Its basic idea is that, instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed based on the currently known requirements. By using this prototype, the user can get an "actual feel" of the system, since interacting with the prototype enables the user to better understand the requirements of the desired system. Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. Prototypes are usually not complete systems, and many details are not built into them. The goal is to provide a system with the overall functionality.
3.2 EXISTING SYSTEM
In the existing system, the lecturer takes the attendance of the students during lecture time by calling each and every student or by passing the attendance sheet around the class. In the existing student attendance system, there is only one approach: the manual attendance system. Under the manual system, the lecturer has much work to do, especially when there is a large number of students in the classroom, such as collecting, verifying and managing student records. In practice, the manual system also takes more time for recording and calculating the average attendance of every student in the class.
The proposed automated attendance system can be divided into five main modules. The modules and their functions are defined in this section.
The image of the student is captured using the inbuilt camera of the computer on which the application is installed. The captured image is then passed on for face detection.
A proper and efficient face detection algorithm always enhances the performance of face recognition systems. Various algorithms have been proposed for face detection, such as face geometry-based methods, feature-invariant methods, and machine learning-based methods. Of all these methods, I use the Local Binary Pattern (LBP) algorithm, which gives a high detection rate and is fast. I observed that this algorithm gives better results under different lighting conditions, and I combined multiple Haar classifiers to achieve better detection rates up to an angle of 30 degrees.
3.3.3 PRE-PROCESSING
The detected face is extracted and subjected to pre-processing. This pre-processing step involves histogram equalization of the extracted face image, which is then resized to 100x100. Histogram equalization is the most common histogram normalization technique. It improves the contrast of the image by stretching the range of intensities, making the image clearer.
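The equalization step can be sketched in a few lines of numpy. This is a generic illustration of the standard technique (in practice one could simply call OpenCV's equalizeHist); it assumes an 8-bit greyscale image with at least two distinct intensity levels:

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit greyscale image (numpy sketch).

    Maps intensities through the normalized cumulative histogram so the
    output intensities spread across the full 0-255 range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # standard equalization formula, scaled to 0..255
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

The subsequent resizing to 100x100 would be a separate step, for example with an image library's resize function.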
3.3.4 POST-PROCESSING
In the proposed system, after recognizing the faces of the students, the names are updated in an Excel sheet. The Excel sheet is generated by an exporting mechanism present in the system. These generated records can be sent to the parents or guardians of students.
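The exporting mechanism can be sketched with the standard csv module (shown here as CSV rather than a true Excel workbook; the column names are illustrative, not the project's actual headings):

```python
import csv
import io

def export_attendance(records, stream):
    """Write attendance records to a CSV stream, one row per student.

    records: iterable of (student_id, name, status) tuples, where status
    is "Present" or "Absent". Field names here are illustrative.
    """
    writer = csv.writer(stream)
    writer.writerow(["Id", "Name", "Status"])
    for student_id, name, status in records:
        writer.writerow([student_id, name, status])
```

A file produced this way, e.g. via `export_attendance(rows, open("attendance.csv", "w", newline=""))`, opens directly in Excel and can be forwarded to parents or guardians.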
This is an orderly application of proven principles, methods, tools and notations to describe a proposed system's intended behaviour and its associated constraints.
The functional requirements capture the necessary behaviours of the system. This behaviour may be expressed as services, tasks or functions that the system is required to perform completely (Ruth, 2001). Below are the functional requirements that this automated attendance record system using facial recognition offers when finally implemented.
1. Lecturer Module: This controls all the activities handled by the lecturer; some of these activities include:
(a) Take Images: The camera of the running computer is opened and starts taking image samples of the person. The ‘id’ and ‘name’ are stored in the folder ‘StudentDetails’ in the file ‘StudentDetails.csv’. It takes 60 images as samples and stores them in the folder ‘TrainingImage’. After completion it notifies the user that the images have been saved.
(b) Train Images: It then takes a few seconds to train the machine on the images taken via the Take Images button; this creates a ‘Trainner.yml’ file, which is stored in the ‘TrainingImageLabel’ folder.
(c) Track Images: The camera of the running machine is opened again; if a face is recognised by the system, the ‘id’ and ‘name’ of the person are shown on the image.
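The Train Images step can be sketched as below. It assumes the `opencv-contrib-python` package (which provides `cv2.face.LBPHFaceRecognizer_create`) and the folder layout used in the appendix; `label_from_path` mirrors how the appendix recovers the enrollment number from a file name:

```python
import os

def label_from_path(image_path):
    """Recover the numeric enrollment label from a training-image file name
    (pattern Name.Enrollment.N.jpg, as produced by the Take Images step)."""
    return int(os.path.split(image_path)[-1].split(".")[1])

def train_recognizer(image_dir="TrainingImage",
                     model_path="TrainingImageLabel/Trainner.yml"):
    """Train an LBPH recognizer on the saved samples (requires opencv-contrib-python)."""
    import cv2
    import numpy as np
    faces, labels = [], []
    for f in os.listdir(image_dir):
        img = cv2.imread(os.path.join(image_dir, f), cv2.IMREAD_GRAYSCALE)
        if img is not None:
            faces.append(img)
            labels.append(label_from_path(f))
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(faces, np.array(labels))
    recognizer.save(model_path)
```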
Non-functional requirements are characteristics or attributes of the system by which its operation can be judged. The following points clarify them:
1. Accuracy and Precision: the system should carry out its processing with accuracy and precision to avoid problems.
2. Modifiability: the system should be easy to modify, so that any fault can be corrected.
3. Security: the system should be secure and should preserve students' privacy.
4. Usability: the system should be easy to use and simple to understand.
5. Maintainability: the maintenance team should be able to fix any problem that occurs suddenly.
6. Speed and Responsiveness: execution of operations should be fast.
Tools that the user must have in order to use the system and obtain good results:
1. Software Requirements: Windows 7 or higher, or any Linux distribution, and Visual Studio.
2. Hardware Requirements: a high-resolution camera and screen.
3.5 SYSTEM DESIGN
This system design is created to show a technical solution that satisfies the system's functional requirements identified in the analysis stage (Seyyed, 2005). At this level of the project life-cycle there should be a functional specification, written primarily in business terminology, containing a complete description of the operational needs of the automated attendance recording system using facial recognition. The purpose of this design is to convert the information gathered in the analysis stage into a technical specification that describes the exact design of the system to be used in constructing it.
The hardware used in this project so far consists of only one component:
1. An inbuilt HP webcam
There are two major system flows in the software development section as shown below:
1. The creation of the face database
2. The process of attendance taking
Both processes mentioned above are essential because they make up the backbone of the attendance management system. In this section, both flows are briefly described.
3.5.2.1 THE CREATION OF THE FACE DATABASE
The creation of the face database is an important step to complete before any further processing can begin, because the face database acts as the comparison reference during the recognition process, which is discussed in a later section. In this process, a CSV file is created to aid the labelling of images: since more than one portrait is stored for each student, labels are used to group the portraits belonging to the same person. Those images are then fed into a recognizer for its training. Because the training process becomes very time consuming as the face database grows larger, training is only performed right after a batch of new student portraits has been added, to ensure that it is done as infrequently as possible.
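Reading the labelling CSV back into a lookup table might look like the following sketch; the column order is assumed from the ‘StudentDetails.csv’ layout used elsewhere in this report:

```python
import csv

def load_student_details(path):
    """Map enrollment label -> student name from the labelling CSV
    (columns assumed: Enrollment, Name, Date, Time)."""
    details = {}
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip the header row
        for row in reader:
            if row:
                details[row[0]] = row[1]
    return details
```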
3.5.2.2 THE PROCESS OF ATTENDANCE TAKING
Apart from the creation of the face database, all of the remaining processes can be carried out through an application interface, so the attendance-taking procedure is also done through it. This provides a friendly user interface to the user (the lecturer), who can take attendance without having to control the modules or repositories from a terminal, which would be confusing for most users. With a click of a button on the application's GUI, a Python script is executed that performs a series of initialization steps, such as loading the trained data into the recognizer. The attendance-taking process then proceeds in a loop to acquire, identify and mark the attendance of each student captured by the web camera.
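The attendance-taking loop can be sketched in outline as below. Here `read_frame` and `identify` stand in for the camera capture and the trained recognizer, and the time-boxed loop and the first-sighting rule are assumptions based on the behaviour described above:

```python
import time

def take_attendance(read_frame, identify, duration=20.0):
    """Loop for up to `duration` seconds, identifying faces in each frame
    and marking each student only once (first sighting wins)."""
    marked = {}
    deadline = time.time() + duration
    while time.time() < deadline:
        frame = read_frame()
        if frame is None:          # camera closed or stream ended
            break
        for enrollment, name in identify(frame):
            marked.setdefault(enrollment, name)  # keep the first sighting only
    return marked
```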
3.5.3 USE CASE DIAGRAM
Figure 3.5.3.1: Use case diagram of the proposed automated attendance recording system using
facial recognition
3.5.4 DATA FLOW DIAGRAM
Figure 3.5.4.1: Data Flow Diagram for the proposed system indicating the flow of data across the included parties.
3.5.6 SEQUENCE DIAGRAM
Figure 3.5.6.1: Sequence Diagram for the proposed attendance record system using facial
recognition.
3.5.7 FLOW CHART DIAGRAM
Figure 3.5.7.1: Flow Chart Diagram for the attendance record system using facial recognition.
3.6 IMPLEMENTATION METHODOLOGY
The proposed system introduces an automated attendance system that integrates an application with face recognition algorithms. Any device with a camera can capture an image and upload it to the database server through the application. The received file undergoes face detection and face recognition, and the detected faces are extracted from the image. The extracted faces are then compared with the faces saved in the Trainner.yml file; on successful recognition, the attendance is recorded and a sheet is generated and exported in .csv format.
3.7 SUMMARY
An automatic attendance management system is a needed tool for large organizations. Many organizations, such as train stations, airports and companies, already use face recognition systems. Overall, this chapter provided an overview of the research conducted on building an automated attendance recording system using facial recognition. A matter that has to be taken into consideration in the future is a method of guaranteeing users' privacy: whenever an image is stored on a server, it must be impossible for an unauthorized person to obtain or view that image.
CHAPTER FOUR
IMPLEMENTATION
4.1 INTRODUCTION
This chapter provides a detailed specification of the user interface of the automated attendance recording system. The user interface provides the means for the user to interact with the system. This user interface specification is intended to convey the general idea of the user interface design and the operational concept for the software.
In order to get this project done, the following hardware and software are needed.
Testing is done to show what a program does and to discover program defects before it is put into use. The program is executed using artificial data, and the results of the test runs are checked for errors. Testing reveals the presence of errors, not their absence. It is part of a more general verification and validation process, which also includes static validation techniques.
The following are the main testing levels:
1. Unit testing: Unit testing focuses verification efforts on the smallest unit of software design. Using the detailed design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of the tests, and the errors they detect, is limited by the constrained scope established for unit testing. The unit test is usually white-box oriented, and the step may be conducted in parallel for multiple modules.
2. Integration testing: Even when the modules perform properly under unit testing, they may occasionally have unintended effects on one another; sub-functions, once combined, may not produce the desired major functions, individually acceptable imprecision may be magnified to unacceptable levels, and global data structures may present problems. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure that has been determined by the design.
3. Validation testing: At the culmination of integration testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and the final series of software validation checks can begin. Validation can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can reasonably be expected and accepted by the client.
4. System testing: In system testing the software engineer should anticipate potential interfacing problems and:
1) Record the results of the tests to use as evidence for future reference.
2) Design error-handling paths that test all information returning from other parts of the system.
3) Conduct a series of tests that simulate bad data or potential errors at the software interfaces.
4) Participate in the planning and design of the system test to ensure that the software is adequately tested.
4.5 TEST RESULTS
ADMIN PANEL
Figure 4.5.1: Shows the program's start module. Here the lecturer controls all activities.
The admin panel is mostly handled by the lecturer of a particular course. In the admin panel, the lecturer can take images of students, train the images and mark attendance on demand. The lecturer can also view the list of registered students, fill in the attendance of students manually and export it to a .csv file.
FACE RECOGNITION PROCESS AND IMAGE TRAINING
During this process, the program captures several images of the face and stores them in the TrainingImage folder. Several conditions are applied in this process, including the algorithm used and the Haar features used to reduce noise in an image.
Figure 4.5.3: Indicates that the images have been successfully trained for detection.
Figure 4.5.4: Shows the list of demo registered students.
From the admin panel, the lecturer or administrator can manage and view the list of registered students.
Figure 4.5.5: Shows the option to record attendance manually by first entering the name of the course/subject.
Another added feature is marking attendance manually: from the admin panel the lecturer can decide not to use the facial-recognition-based attendance system and instead fill in attendance for a particular course/subject manually.
The screenshot above represents the main functionality of the program. This section of the source code shows the imported modules required to build the system; it also shows the code for the user interface.
Figure 4.6.2: Snippet of the source code for the system
The screenshot above also represents the main functionality of the program. This section of the source code shows how image training is implemented; it also shows how the trained images are retrieved for processing.
4.7 SUMMARY
After the analysis of the system, and after checking the hardware and software requirements needed for the system to function properly, the system was successfully deployed, and the errors encountered during the testing process were successfully corrected, giving birth to a fully functional system.
CHAPTER FIVE
5.1 INTRODUCTION
This system has gone through the system development process and a proper testing phase. This was done to ensure that the system meets its user requirements and specification. The system was designed to automate attendance recording using facial recognition.
5.2 SUMMARY
The system development is evaluated against the project work from chapter one to chapter four. The system provides the following functionalities:
1. Lecturers can record attendance for both lectures and examinations
2. Only the administrator/lecturer can manage the activities of the program
3. Students can record their attendance using facial recognition
5.4 CONCLUSION
The system developed during the course of this project implements facial recognition in attendance recording in an educational institution. The study aims at improving attendance recording in Bingham University using biometrics (facial recognition).
5.5 RECOMMENDATION
The presence of technology has made things, and ways of learning, easier in our society today. It would be a very good idea if Bingham University integrated the technology discussed in this project. More resources should also be put into research to determine the various ways technology can solve problems at all levels of education.
5.6 SUGGESTION
Further research should be done on attendance recording systems using biometrics in general, not just facial recognition.
REFERENCES
A. Hadid, “The local binary pattern and its applications to face analysis,” in Proc. Int. Workshops
Image Process. Theor., Tools Appl., 2008, pp. 28–36.
Akshara Jadhav, Tushar Ladhe and Krishna Yeolekar, “Automated Attendance System using Facial
Recognition” University of Pune, NDMVP’s KBT COLLEGE OF ENGINEERING,
NASHIK International Research Journal of Engineering and Technology (IRJET) Available
at: http://www.irjet.net/
C. Chan, J. Kittler, and K. Messer, “Multi-scale local binary pattern histograms for face
recognition,” in Proc. Int. Conf. Biometrics, 2007, pp. 809–818.
Elbeheri, A. (2016) The Dynamic Systems Development Method (DSDM) - Agile Methodology.
Available at: https://www.linkedin.com/pulse/dynamic-systems-development-method-dsdm-
agilealaa
Kisku R. and Rana S. (2016) Multithread Face Recognition in Cloud. Available at:
https://www.hindawi.com/journals/js/2016/2575904/
L. Zhang, R. Chu, S. Xiang, and S. Z. Li, “Face detection based on Multi-Block LBP
representation,” in Proc. Int. Conf. Biometrics, 2007, pp. 11–18.
Marciniak, T., Chmielewska, A., Weychan, R., Parzych, M. and Dabrowski, A. (2015) 'Influence of
low resolution of images on reliability of face detection and recognition', Multimedia Tools
and Applications, 74 (12), pp. 4329-4349.
Mathworks (2017) Detect objects using the Viola-Jones algorithm. Available at:
https://uk.mathworks.com/help/vision/ref/vision.cascadeobjectdetector-system-object.html?
s_tid=srchtitle
Mayank Chauhan , Mukesh Sakle. (2014) 'Study & Analysis of Different Face Detection
Techniques', (IJCSIT) International Journal of Computer Science and Information
Technologies, 5 (2), pp. 1615-1618.
Ming-Hsuan Yang, Kriegman, D. J. and Ahuja, N. (2002) 'Detecting faces in images: a survey',
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 24 (1), pp. 34-58.
Modi, M. and Macwan , F. (2014) 'Face Detection Approaches: A Survey.', International Journal of
Innovative Research in Science, Engineering and Technology, 3 (4), pp. 11107-11116.
Mohamed, A. S. S., Ying Weng, S. S., Ipson, S. S. and Jianmin Jiang, S. S. (2007) 'Face detection
based on skin color in image by neural networks', Intelligent and Advanced Systems, 2007.
ICIAS 2007.International Conference on, pp. 779-783.
Ojala, T., Pietikainen, M. and Maenpaa, T. (2002) 'Multiresolution gray-scale and rotation invariant
texture classification with local binary patterns', Pattern Analysis and Machine Intelligence,
IEEE Transactions on, 24 (7), pp. 971-987. doi: 10.1109/TPAMI.2002.1017623.
Osuna, E., Freund, R. and Girosit, F. (1997). "Training support vector machines: an application to
face detection." 130-136.
Parmar, D. N. and Mehta, B. B. (Jan-Feb 2013) 'Face Recognition Methods & Applications',
International Journal of Computer Technology & Applications, 4 (1), pp. 84-86.
Rowley, H. A., Baluja, S. and Kanade, T. (1998) 'Neural network-based face detection', Pattern
Analysis and Machine Intelligence, IEEE Transactions on, 20 (1), pp. 23-38.
Samuel Lukas, Aditya Rama Mitra, Ririn Ikana Desanti, Dion Krisnadi, "Student Attendance
System in Classroom Using Face Recognition Technique", 2016 International Conference on
Information and Communication Technology Convergence (ICTC), IEEE Conference
Publications Pages: 1032 – 1035.
Sato A., Imaoka H., Suzuki T. and Hosoi T. (2005) “Advances in Face Detection and Recognition
Technologies” NEC Journal of Advance Technology, 2 (1): 28-34.
T. Ahonen, A. Hadid, and M. Pietikäinen, “Face description with local binary patterns: Application
to face recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 12, pp. 2037–
2041, Dec. 2006.
T. Ahonen, A. Hadid, and M. Pietikäinen, “Face recognition with local binary patterns,” in Proc.
Euro. Conf. Comput. Vis., 2004, pp. 469–481.
Viola, P. and Jones, M. (2001) 'Rapid object detection using a boosted cascade of simple features',
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern
Recognition, 1 pp. I511-I518.
Wenyi Zhao, and Rama Chellappa (2005) Face Processing: Advanced Modeling and Methods.
Elsevier Science.
Zhao W, Chellappa R, Phillips P. J and Rosenfeld A. (2003) “Face recognition: a literature survey”
ACM. Compu. Surveys (CSUR) 35 399-458.
APPENDIX
AMS_Run.py
import tkinter as tk
from tkinter import *
import cv2
import csv
import os
import numpy as np
from PIL import Image,ImageTk
import pandas as pd
import datetime
import time
window = tk.Tk()  # main-window creation line was cut off in the printed listing
window.geometry('1280x720')
window.configure(background='snow')
def manually_fill():
    global sb
    sb = tk.Tk()
    sb.title("Enter subject name...")
    sb.geometry('580x320')
    sb.configure(background='snow')
    def err_screen_for_subject():
        def ec_delete():
            ec.destroy()
        global ec
        ec = tk.Tk()
        ec.geometry('300x100')
        ec.title('Warning!!')
        ec.configure(background='snow')
        Label(ec, text='Please enter your subject name!!!', fg='red', bg='white',
              font=('times', 16, ' bold ')).pack()
        Button(ec, text='OK', command=ec_delete, fg="black", bg="lawn green", width=9, height=1,
               activebackground="Red", font=('times', 15, ' bold ')).place(x=90, y=50)
    def fill_attendance():
        ts = time.time()
        Date = datetime.datetime.fromtimestamp(ts).strftime('%Y_%m_%d')
        timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        Hour, Minute, Second = timeStamp.split(":")
        #### Creating csv of attendance
        import pymysql.connections
        try:
            cursor = connection.cursor()
        except Exception as e:
            print(e)
        if subb == '':
            err_screen_for_subject()
        else:
            sb.destroy()
            MFW = tk.Tk()
            MFW.title("Manually attendance of " + str(subb))
            MFW.geometry('880x470')
            MFW.configure(background='snow')
            def del_errsc2():
                errsc2.destroy()
            def err_screen1():
                global errsc2
                errsc2 = tk.Tk()
                errsc2.geometry('330x100')
                errsc2.title('Warning!!')
                errsc2.configure(background='snow')
                Label(errsc2, text='Please enter Student & Enrollment!!!', fg='red', bg='white',
                      font=('times', 16, ' bold ')).pack()
                Button(errsc2, text='OK', command=del_errsc2, fg="black", bg="lawn green", width=9,
                       height=1, activebackground="Red", font=('times', 15, ' bold ')).place(x=90, y=50)
            global ENR_ENTRY
            ENR_ENTRY = tk.Entry(MFW, width=20, validate='key', bg="yellow", fg="red",
                                 font=('times', 23, ' bold '))
            ENR_ENTRY['validatecommand'] = (ENR_ENTRY.register(testVal), '%P', '%d')
            ENR_ENTRY.place(x=290, y=105)
            def remove_enr():
                ENR_ENTRY.delete(first=0, last=22)
            STUDENT_ENTRY = tk.Entry(MFW, width=20, bg="yellow", fg="red",
                                     font=('times', 23, ' bold '))
            STUDENT_ENTRY.place(x=290, y=205)
            def remove_student():
                STUDENT_ENTRY.delete(first=0, last=22)
            def create_csv():
                import csv
                cursor.execute("select * from " + DB_table_name + ";")
                csv_name = ('C:/Users/kusha/PycharmProjects/Attendace managemnt system/'
                            'Attendance/Manually Attendance/' + DB_table_name + '.csv')
                with open(csv_name, "w") as csv_file:
                    csv_writer = csv.writer(csv_file)
                    csv_writer.writerow([i[0] for i in cursor.description])  # write headers
                    csv_writer.writerows(cursor)
                O = "CSV created Successfully"
                Notifi.configure(text=O, bg="Green", fg="white", width=33, font=('times', 19, 'bold'))
                Notifi.place(x=180, y=380)
                import csv
                import tkinter
                root = tkinter.Tk()
                root.title("Attendance of " + subb)
                root.configure(background='snow')
                with open(csv_name, newline="") as file:
                    reader = csv.reader(file)
                    r = 0
            clear_enroll = tk.Button(MFW, text="Clear", command=remove_enr, fg="black",
                                     bg="deep pink", width=10, height=1,
                                     activebackground="Red", font=('times', 15, ' bold '))
            clear_enroll.place(x=690, y=100)
            def attf():
                import subprocess
                subprocess.Popen(r'explorer /select,"C:\Users\kusha\PycharmProjects\Attendace managemnt system\Attendance\Manually Attendance\-------Check atttendance-------"')
            MFW.mainloop()
    global SUB_ENTRY
    SUB_ENTRY = tk.Entry(sb, width=20, bg="yellow", fg="red", font=('times', 23, ' bold '))
    SUB_ENTRY.place(x=250, y=105)
def clear1():
    txt2.delete(first=0, last=22)
def del_sc1():
    sc1.destroy()
def err_screen():
    global sc1
    sc1 = tk.Tk()
    sc1.geometry('300x100')
    sc1.title('Warning!!')
    sc1.configure(background='snow')
    Label(sc1, text='Enrollment & Name required!!!', fg='red', bg='white',
          font=('times', 16, ' bold ')).pack()
    Button(sc1, text='OK', command=del_sc1, fg="black", bg="lawn green", width=9, height=1,
           activebackground="Red", font=('times', 15, ' bold ')).place(x=90, y=50)
## Error screen 2
def del_sc2():
    sc2.destroy()
def err_screen1():
    global sc2
    sc2 = tk.Tk()
    sc2.geometry('300x100')
    sc2.title('Warning!!')
    sc2.configure(background='snow')
    Label(sc2, text='Please enter your subject name!!!', fg='red', bg='white',
          font=('times', 16, ' bold ')).pack()
    Button(sc2, text='OK', command=del_sc2, fg="black", bg="lawn green", width=9, height=1,
           activebackground="Red", font=('times', 15, ' bold ')).place(x=90, y=50)
        while True:  # capture loop (loop header cut off in the printed listing)
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                # incrementing sample number
                sampleNum = sampleNum + 1
                # saving the captured face in the dataset folder
                cv2.imwrite("TrainingImage/ " + Name + "." + Enrollment + '.' + str(sampleNum) + ".jpg",
                            gray[y:y + h, x:x + w])
            cv2.imshow('Frame', img)
            # wait for 100 milliseconds
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
            # break if the sample number is more than 70
            elif sampleNum > 70:
                break
        cam.release()
        cv2.destroyAllWindows()
        ts = time.time()
        Date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
        Time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
        row = [Enrollment, Name, Date, Time]
        with open('StudentDetails/StudentDetails.csv', 'a+') as csvFile:
            writer = csv.writer(csvFile, delimiter=',')
            writer.writerow(row)  # the with-block closes the file automatically
        res = "Images Saved for Enrollment : " + Enrollment + " Name : " + Name
        Notification.configure(text=res, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
        Notification.place(x=250, y=400)
    except FileExistsError as F:
        f = 'Student Data already exists'
        Notification.configure(text=f, bg="Red", width=21)
        Notification.place(x=450, y=400)
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    df = pd.read_csv("StudentDetails/StudentDetails.csv")
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Enrollment', 'Name', 'Date', 'Time']
    attendance = pd.DataFrame(columns=col_names)
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            global Id
            # prediction step (partly elided in the printed listing); the
            # confidence threshold of 70 is an assumption
            Id, conf = recognizer.predict(gray[y:y + h, x:x + w])
            if conf < 70:
                aa = df.loc[df['Enrollment'] == Id]['Name'].values  # look up the student's name
            else:
                Id = 'Unknown'
            tt = str(Id)
            cv2.rectangle(im, (x, y), (x + w, y + h), (0, 25, 255), 7)
            cv2.putText(im, str(tt), (x + h, y), font, 1, (0, 25, 255), 4)
        if time.time() > future:
            break
        attendance = attendance.drop_duplicates(['Enrollment'], keep='first')
        cv2.imshow('Filling attedance..', im)
        key = cv2.waitKey(30) & 0xff
        if key == 27:
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
    timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
    Hour, Minute, Second = timeStamp.split(":")
    fileName = "Attendance/" + Subject + "_" + date + "_" + Hour + "-" + Minute + "-" + Second + ".csv"
    attendance = attendance.drop_duplicates(['Enrollment'], keep='first')
    print(attendance)
    attendance.to_csv(fileName, index=False)
    );
    """
    #### Now enter attendance in Database
    insert_data = ("INSERT INTO " + DB_Table_name +
                   " (ID,ENROLLMENT,NAME,DATE,TIME) VALUES (0, %s, %s, %s, %s)")
    VALUES = (str(Id), str(aa), str(date), str(timeStamp))
    try:
        cursor.execute(sql)  # create the table
        cursor.execute(insert_data, VALUES)  # insert data into the table
    except Exception as ex:
        print(ex)
    cam.release()
    cv2.destroyAllWindows()
    import csv
    import tkinter
    root = tkinter.Tk()
    root.title("Attendance of " + Subject)
    root.configure(background='snow')
    cs = '' + fileName
    with open(cs, newline="") as file:
        reader = csv.reader(file)
        r = 0
        for col in reader:  # (loop headers reconstructed from context)
            c = 0
            for row in col:
                label = tkinter.Label(root, width=8, height=1, fg="black", font=('times', 15, ' bold '),
                                      bg="lawn green", text=row, relief=tkinter.RIDGE)
                label.grid(row=r, column=c)
                c += 1
            r += 1
    root.mainloop()
    print(attendance)
def Attf():
    import subprocess
    subprocess.Popen(r'explorer /select,"Attendance\-------Check atttendance-------"')
    fill_a = tk.Button(windo, text="Fill Attendance", fg="white", command=Fillattendances,
                       bg="deep pink", width=20, height=2,
                       activebackground="Red", font=('times', 15, ' bold '))
    fill_a.place(x=250, y=160)
    windo.mainloop()
def admin_panel():
    win = tk.Tk()
    win.title("LogIn")
    win.geometry('880x420')
    win.configure(background='snow')
    def log_in():
        username = un_entr.get()
        password = pw_entr.get()
        if username == 'kushal':
            if password == 'kushal14320':
                win.destroy()
                import csv
                import tkinter
                root = tkinter.Tk()
                root.title("Student Details")
                root.configure(background='snow')
                cs = 'StudentDetails\StudentDetails.csv'
                with open(cs, newline="") as file:
                    reader = csv.reader(file)
                    r = 0
                    # i've added some styling
                    for col in reader:  # (loop headers reconstructed from context)
                        c = 0
                        for row in col:
                            label = tkinter.Label(root, width=8, height=1, fg="black",
                                                  font=('times', 15, ' bold '),
                                                  bg="lawn green", text=row, relief=tkinter.RIDGE)
                            label.grid(row=r, column=c)
                            c += 1
                        r += 1
                root.mainloop()
            else:
                valid = 'Incorrect ID or Password'
                Nt.configure(text=valid, bg="red", fg="black", width=38, font=('times', 19, 'bold'))
                Nt.place(x=120, y=350)
        else:
            valid = 'Incorrect ID or Password'
            Nt.configure(text=valid, bg="red", fg="black", width=38, font=('times', 19, 'bold'))
            Nt.place(x=120, y=350)
    def c00():
        un_entr.delete(first=0, last=22)
    un_entr = tk.Entry(win, width=20, bg="yellow", fg="red", font=('times', 23, ' bold '))
    un_entr.place(x=290, y=55)
    def c11():
        pw_entr.delete(first=0, last=22)
    pw_entr = tk.Entry(win, width=20, show="*", bg="yellow", fg="red", font=('times', 23, ' bold '))
    pw_entr.place(x=290, y=155)
    try:
        faces, Id = getImagesAndLabels("TrainingImage")
    except Exception as e:
        l = 'please make "TrainingImage" folder & put Images'
        Notification.configure(text=l, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)
    recognizer.train(faces, np.array(Id))
    try:
        recognizer.save("TrainingImageLabel\Trainner.yml")
    except Exception as e:
        q = 'Please make "TrainingImageLabel" folder'
        Notification.configure(text=q, bg="SpringGreen3", width=50, font=('times', 18, 'bold'))
        Notification.place(x=350, y=400)
def getImagesAndLabels(path):
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create empty face list
    faceSamples = []
    # create empty ID list
    Ids = []
    # now looping through all the image paths and loading the Ids and the images
    for imagePath in imagePaths:
        # loading the image and converting it to gray scale
        pilImage = Image.open(imagePath).convert('L')
        # Now we are converting the PIL image into numpy array
        imageNp = np.array(pilImage, 'uint8')
        # getting the Id from the image
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        # extract the face from the training image sample
        faces = detector.detectMultiScale(imageNp)
        # If a face is there then append that in the list as well as Id of it
        for (x, y, w, h) in faces:
            faceSamples.append(imageNp[y:y + h, x:x + w])
            Ids.append(Id)
    return faceSamples, Ids
window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)
def on_closing():
    from tkinter import messagebox
    if messagebox.askokcancel("Quit", "Do you want to quit?"):
        window.destroy()
window.protocol("WM_DELETE_WINDOW", on_closing)
load = Image.open("logo.png")
render = ImageTk.PhotoImage(load)
img = Label(image=render)
img.image = render
img.place(x=0, y=0)
message.place(x=80, y=20)
lbl = tk.Label(window, text="Enter Matric No:", width=20, height=2, fg="black", bg="deep pink",
               font=('times', 15, ' bold '))
lbl.place(x=200, y=200)
def testVal(inStr, acttyp):
    if acttyp == '1':  # insert
        if not inStr.isdigit():
            return False
    return True
txt = tk.Entry(window, validate="key", width=20, bg="yellow", fg="red", font=('times', 25, ' bold '))
txt['validatecommand'] = (txt.register(testVal),'%P','%d')
txt.place(x=550, y=210)
lbl2 = tk.Label(window, text="Enter Full Name", width=20, fg="black", bg="deep pink", height=2,
font=('times', 15, ' bold '))
lbl2.place(x=200, y=300)
txt2 = tk.Entry(window, width=20, bg="yellow", fg="red", font=('times', 25, ' bold '))
txt2.place(x=550, y=310)
trainImg = tk.Button(window, text="Train Images", fg="black", command=trainimg, bg="lawn green",
                     width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
trainImg.place(x=390, y=500)
quitWindow.place(x=990, y=500)
window.mainloop()