
Automated Attendance with Facial Recognition

ABSTRACT

This document will show how we can implement algorithms for face
detection and recognition in image processing to build a system that detects
and recognizes the frontal faces of students in a classroom to record attendance. “A face
is the front part of a person’s head from the forehead to the chin, or the
corresponding part of an animal” (Oxford Dictionary). In human interactions,
the face is the most important factor as it contains important information about a
person or individual. All humans have the ability to recognize individuals from
their faces. The proposed solution is to develop a working prototype of a system
that will facilitate class control for Kingston University lecturers in a classroom
by detecting the frontal faces of students from a picture taken in a classroom.
The second part of the system will also be able to perform a facial recognition
against a small database.

In recent years, research has been carried out and face recognition and detection
systems have been developed. Some of these are used on social media platforms
such as Facebook, in banking apps, and by government bodies such as the
Metropolitan Police.
S.no	Particulars

1	Introduction
1.1	Related works and background
2	System analysis
2.1	Study of the system
2.2	Process models used with justification
2.3	Hardware and software requirements
2.4	Functional and non-functional requirements
3	Feasibility study
3.1	Technical feasibility
3.2	Operational feasibility
3.3	Economic feasibility
4	Requirement specification
4.1	Functional requirements
4.2	Performance requirements
4.3	Software requirements
4.3.1	Python 3
4.3.2	Euclidean algorithm
4.3.3	Artificial intelligence
4.3.4	Machine learning libraries
5	System design
5.1	Introduction
5.2	Data flow diagrams
5.3	UML diagrams
6	Implementation
6.1	Source code
7	System testing
7.1	Introduction
7.2	Strategic approach
7.3	Unit testing
8	Output screens
9	Conclusion
10	Bibliography

Introduction

In face detection and recognition systems, the process flow starts with detecting and
recognizing frontal faces from an input device, e.g. a mobile phone camera. It has
been shown that students engage better during lectures when there is effective
classroom control, so the need for a high level of student engagement is very important.
An analogy can be made with pilots, as described by Mundschenk et al. (2011, p.101): ”Pilots need to
keep in touch with an air traffic controller, but it would be annoying and unhelpful if they
called in every 5 minutes”. In the same way, students need to be continuously engaged during
lectures, and one way to achieve this is to recognize and address them by their names. A
system like this will therefore improve classroom control. In my own experience as a teacher,
I found that calling a student by his or her name gave me more control of the classroom
and drew the attention of the other students, encouraging them to engage during lectures.

Face detection and recognition are not new in the society we live in. The capacity of the
human mind to recognize particular individuals is remarkable. It is amazing how the human
mind can still identify certain individuals through the passage of time,
despite slight changes in appearance.

Face recognition processes images and identifies one or more faces in an image by analyzing
patterns and comparing them. This process uses algorithms that extract features and
compare them against a database to find a match. Furthermore, in one of the most recent studies,
Nebel (2017, p.1) suggests that DNA techniques could transform facial recognition
technology: video analysis software can be improved thanks to advances in DNA analysis
research, allowing camera-based surveillance systems to treat a video as a scene that evolves
the same way a DNA sequence does, in order to detect and recognize human faces.

Related Works and Background:

In this chapter, a brief overview of studies made on face detection and recognition will be
introduced alongside some popular face detection and recognition algorithms. This will give
a general idea of the history of systems and approaches that have been used so far.

Study of System:
Most face recognition systems rely on face recognition algorithms to complete the following
functional tasks, as suggested by Shang-Hung Lin (2000, p.2).

Fig 2.1 Face recognition system framework, as suggested by Shang-Hung Lin (2000, p.2).
The figure below shows a simplified diagram of the face recognition framework from the study
by Shang-Hung Lin (2000).

Figure 2.2 Face Detection and Recognition Flow Diagram

In the figure above, face detection (the face detector) detects any face present in the given image
or input video. Face localization detects where the faces are located in the given image/video,
using bounding boxes. Face alignment finds a face and aligns landmarks such
as the nose, eyes, chin and mouth for feature extraction. Feature extraction extracts key features such as the
eyes, nose and mouth for tracking. Feature matching and classification matches a face based on a
trained dataset of pictures from a database of about 200 pictures. Face recognition gives a positive or
negative output of a recognized face based on feature matching and classification against a referenced
facial image.
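
As a minimal sketch of this detect-encode-match flow (not the project's final implementation), assuming the open-source face_recognition library is installed and using placeholder file names:

import face_recognition

# Face detection and localization: find bounding boxes in the classroom photo.
image = face_recognition.load_image_file("classroom.jpg")   # placeholder file name
face_locations = face_recognition.face_locations(image)     # list of (top, right, bottom, left)

# Feature extraction: compute a 128-dimensional encoding per detected face.
unknown_encodings = face_recognition.face_encodings(image, face_locations)

# Feature matching and classification: compare against one known reference encoding.
known_image = face_recognition.load_image_file("student_reference.jpg")  # placeholder
known_encoding = face_recognition.face_encodings(known_image)[0]

for encoding in unknown_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Recognized" if match else "Not recognized")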

Face detection is the process of locating a face in a digital image by special computer software
built for this purpose. Feraud et al. (2000, p.77) describe face detection as “To detect a face in an image
means to find its position in the image plane and its size or scale“.

As figure 2.2 shows, the detection of a face in a digital image is a prerequisite to any further process
in face recognition or any face processing software.

In early years, face detection algorithms focused mainly on the frontal part of the human face
(Srinivasan, Golomb and Martinez, 2016, p.4434).
However, in recent years, Cynganek (2013, p.346) suggests that newer algorithms take into
consideration different perspectives for face detection. Researchers have used such systems, but the
greatest challenge has been to make a system detect faces irrespective of different
illumination conditions. This is based on a study by Castrillón et al. (2011, p.483) on the Yale
database, which contains higher-resolution images of 165 frontal faces. Face detection methods are often
classified into different categories. In order to tackle the first major problem of the project (detecting
students' faces), a wide range of techniques has been researched. These face detection
techniques/methodologies have been proposed by many different researchers and are often classified into
major categories of different approaches. In this paper, we will look at some reviews and major
categories of classification by different groups of researchers and relate them to the system.

Figure 2.4 Taken from Detecting Faces in Images: A Survey. Yang et al. (2002, p.36).

They subdivided these resolution hierarchies into three levels, with level 1 being the lowest resolution,
which only searches for face candidates that are further processed at finer resolutions. At level 2, the
face candidates from level 1 undergo local histogram equalization followed by edge detection.
At level 3, each surviving face candidate region is checked against a set of rules responding to facial features such as the
mouth and eyes. They conducted their experiment on 60 images. Their system located faces in 50 of
these images, and 28 images gave false alarms, giving a success rate of 83.33% and a false alarm
rate of 46.66%. Feature-based methods use algorithms that look for structural features,
regardless of pose, viewpoint or lighting conditions, to find faces. Template matching methods
use stored standard facial patterns and correlate an input image with the stored patterns to
compute a detection. Appearance-based methods use a set of training images to learn
templates and capture representative facial appearance. Furthermore, Yang et al. also carried
out their experiments on standard database test sets, shown in Table 2.1 and Table 2.2 (Yang
et al., 2002, pp.53-54) below, with the detection rate results and false detection rates.
Table 2.1 showing standard database test set for Face Detection. Yang et al. (2002, p.53).

Table 2.2 Results of Two Image Test Sets experimented. Yang et al. (2002, p.54).

As Table 2.2 summarizes, the experimental results show different training sets with different
tuning parameters, which have a direct impact on training performance. For example,
dimensionality reduction is carried out to improve computational efficiency and detection efficacy, with
image patterns projected into a lower-dimensional space to form a discriminant function for
classification. The training and execution times and the number of scanning windows in these
experiments also influenced the performance. Hjelmås and Low (2001) classify face
detection methodologies into two major categories. Image-based approaches are further sub-
categorized into linear subspace methods, neural networks and statistical approaches.

Image-Based Approaches: most of the recent feature-based attempts, in the same study by Hjelmås
and Low (2001, p.252), have improved the ability to cope with variations, but are still limited to heads,
shoulders and parts of frontal faces. There is therefore a need for techniques that cope with hostile scenarios
such as detecting multiple faces in a cluttered scene, e.g. against a clutter-intensive background. Furthermore,
this method ignores general prior knowledge of the face and instead learns face patterns from a given set
of images. This is mostly known as the training stage of the detection method.

From this training stage, the system may be able to detect similar face patterns in an input image. A
decision about the existence of a face is then established based on a comparison of the distance
between the pattern from the input image and the training images, using a 2D intensity array extracted from
the input image. Most image-based approaches use window-scanning techniques for face detection.

A window-scanning algorithm searches for possible face locations at all scales. In other research
carried out on this method, Ryu et al. (2006) experimented with the scanning-window techniques
discussed by Hjelmås and Low (2001, p.252) in their system. They go further to base their
system on a combination of several classifiers for a more reliable result compared to a single
classifier. They designed multiple face classifiers that can take different representations of face
patterns. They used three classifiers: a gradient feature classifier, which contains the integral
information of the pixel distribution and returns a certain invariability among facial features; a
texture feature classifier, which extracts texture features by correlation (the joint probability of
occurrence of specified pixels), variance (the amount of local variation in an image) and
entropy (a measure of image disorder); and a pixel intensity feature classifier, which
extracts pixel intensity features of the eye, nose and mouth regions to determine the face pattern.
They further used a coarse-to-fine classification approach with these classifiers for computational
efficiency. Based on 1056 images obtained from the AT&T, BioID, Stirling and Yale
datasets, they achieved the results presented in Table 2.4 and Table 2.5 (Ryu et al., 2006, p.489). The
first face classifier in their experiment, with respect to shifts in both the x and y directions, achieved a
detection rate of 80% when images were shifted within 10 pixels in the x direction and 4 pixels in the y
direction. The second and third face classifiers showed detection rates of over 80% when images were
shifted by 2 pixels in the x and y directions respectively.

Table 2.4: Results showing Exhaustive full scanning method and Proposed scanning method. Ryu et
al (2006, p.489)

Table 2.5: Performance Comparison by different Researchers and Proposed System by Ryu et al
(2006, p.490)

As seen in Table 2.5, their system achieved a detection rate between 93.0% and 95.7%. Rowley et al.
(1998), in their study on neural network-based face detection, experimented with a system that
applies a set of neural network-based filters to an image and then uses an arbitrator to combine the
outputs. They tested their system against two databases of images, the CMU database, which was
made up of 130 images, and the FERET database, achieving a detection rate of 86.2% with 23 false
detections. Feraud et al. (2001) also experimented with a neural network-based face detection technique.
They used a combination of different components in their system (a motion filter, a colour filter, a
prenetwork filter and a large neural network). The prenetwork filter is a single multilayer perceptron
with 300 inputs corresponding to the extracted sizes of the subwindows, a hidden layer of 20 neurons, and
a face/non-face output [reference]. These components, combined with the
neural network, achieved an 86.0% detection rate with 8 false detections, based on a
face database of 8000 images from the Sussex Face Database and the CMU Database, which is further
subdivided into subsets of equal size corresponding to different views (p.48). Table 2.6
and Table 2.7 (Feraud et al., 2001, p.49) below show the experimental results obtained by these
researchers.

Table 2.6: Showing Results of Sussex Face Database. Feraud et al. (2001, p.49)

Table 2.7: Showing Results of CMU Test Set A. Feraud et al. (2001, p.49)

Wang et al. (2016), in their study in support of neural network face detectors, used a multi-task
convolutional neural network-based face detector, which learns features directly from
images instead of relying on hand-crafted features, hence its ability to differentiate faces from uncontrolled
backgrounds or environments. The system they experimented on used a Region Proposal Network,
which generates the candidate proposals, and a CNN-based detector for the final detection output.
They trained this on 183,200 images from their database and used the AFLW dataset for
validation. Their face detector was evaluated on the AFW, FDDB and Pascal Faces datasets
respectively and achieved a 98.1% face detection rate. The authors did not reveal all the details
of the system's development, and I have limited time to implement this in OpenCV. Table 2.8 (Wang et
al., 2016, p.479) shows comparisons of their system against other state-of-the-art systems. Wang
et al. (2016, p.480) state that their system (FaceHunter) performs better than all other structured models.
However, this cannot be independently verified, as the system was commercialised; one cannot
conclude whether this was for marketing purposes or a complete solution to the problem, as I have limited
time to implement it.

Table 2.8: Showing PR curve on AFW. (Wang et al. 2016 p.479).

The other major category is feature-based approaches, which depend on extracted features that are not
affected by variations in lighting conditions and pose. Hjelmås
and Low (2001, p.241) further clarify that “visual features are organised into a more global concept
of face and facial features using information of face geometry“. In my own opinion, this technique
will be slightly difficult to use for images containing facial features against uncontrolled backgrounds.
This technique relies on feature analysis and feature derivation to gain the required knowledge about
the face to be detected. The features extracted are skin colour, face shape, eyes, nose and mouth.
On the other hand, in another study, Mohamed et al. (2007, p.2) suggest that “human skin colour
is an effective feature used to detect faces; although different people have different skin colour,
several studies have shown that the basic difference is based on the intensity rather than the
chrominance”. The texture of human skin can therefore be separated from that of different objects.
Some feature-based methods depend on detecting edges
and then grouping the edges for face detection. Furthermore, Sufyanu et al. (2016) suggest that a good
extraction process will involve feature points chosen in terms of the reliability of their automatic
extraction and their importance for face representation. Most geometric feature-based approaches use the
active appearance model (AAM), as suggested by (S., N.S. and M., M. 2010). This allows localization
of facial landmarks in different ways, to extract the shape of facial features and the movement of these
features as an expression evolves.

Hjelmås and Low (2001, p.241) further placed the feature-based approach into the sub-categories of:

Low-level analysis (edges, gray levels, colour, motion and generalized measures).

Feature analysis (feature searching and constellation analysis).

Active shape models (snakes, deformable templates and point distribution models (PDMs)).

Figure 2.5 shows the different approaches for Face detection as reported in a study by Hjelmås and
Low, (2001), which can be compared with Figure 2.6 showing the exact same classification by Modi
and Macwan (2014, p.11108).

Figure 2.5 Face Detection, classified into different methodologies. Hjelmås and Low (2001, p.238).
Figure 2.6 Various Face Detection Methodologies. Modi and Macwan (2014, p.11108).

Hjelmås and Low (2001, p.240), in their study, report an experiment based on an edge-detection
approach for face detection, on a set of 60 images of 9 faces with complex backgrounds, which correctly
detected 76% of faces with an average of two false alarms per image. Nehru and Padmavathi (2017,
p.1), in their study, experimented with face detection based on the Viola-Jones algorithm on a dataset of
dark-skinned and coloured-skinned men to support their statement that “It is possible to detect various parts of
the human body based on the facial features present”, such as the eyes, nose and mouth. In this case,
such systems have to be trained properly to be able to distinguish features like the eyes, nose and
mouth when a live dataset is used. The Viola-Jones algorithm detects faces as seen in the
images in Figure 2.7, which shows dark and coloured skin faces detected accurately.

Figure 2.7 Face Detection in Dark and Colored Men. Nehru and Padmavathi (2017, p.1).

Also, in support of the claim made by Nehru and Padmavathi (2017), the research carried out by
Viola and Jones to develop the Viola-Jones face detection algorithm has had the greatest impact in
the past decade. As suggested by Mayank Chauhan et al. (2014, p.1615), Viola-Jones face
detection is widely used in real applications such as digital cameras and digital photo-management
software. This claim is made based on a study by Viola and Jones (2001). Table 2.9 gives a summary
of the results obtained by these experts, showing the numbers of positive and false detections
based on the MIT and CMU database set of 130 images containing 507 faces.

Table 2.9: Various Detection rates by different algorithms showing positive and false detection rates.
Viola and Jones (2001, pI-517).

Wang et al. (2015, p.318) state that ”the process of searching a face is called face detection. Face
detection is to search for faces with different expressions, sizes and angles in images in possession of
complicated light and background and feeds back parameters of face”. In their study, they tested face
detection based on two modules: one module uses a combination of two algorithms
(PCA with SVM), and the other module is based on a real-time field-programmable gate array (FPGA).
With these they reported a face detection accuracy of 89%. Table 2.10 is a screenshot
taken from this paper showing the experimental results of the two units combined, used to investigate
the accuracy of the system.

Table 2.10. Detection accuracy system by Wang et al, (2015, p.331).

Another category is learning-based methods, which include machine learning techniques that
extract discriminative features from a training dataset before detection. Some well-known classifiers
used for face detection, based on a study by Thai et al. (2011), are Canny, Principal Component
Analysis (PCA), Support Vector Machine (SVM) and Artificial Neural Network (ANN). Although
used for facial expression classification, these algorithms are also used in the initial stage of
their experiment, which is the detection phase. Their experiment achieved the results shown
in Table 2.11, a screenshot from Thai et al. (2011, p.392).
Table 2.11. Comparing different algorithms on classification rates. Thai et al. (2011, p.392).

The overall objective of the face detection part of this project will be to find out whether any faces exist in
the input image and, if present, return the location and extent of each face in bounding boxes,
counting the number of faces detected. A challenge for this project is that, due to variations in
location, scale, pose orientation, facial expression, illumination or lighting conditions, and various
appearance features such as facial hair and makeup, it will be difficult to achieve an excellent result.
However, the performance of the system will be evaluated, taking into consideration the learning
time, the execution time, the number of samples required for training, and the ratio between the detection
rate and false detections. Table 2.12 below shows experiments from different researchers. They have
used image datasets of different sizes; some have used combinations of different algorithms,
applied other methods such as colour filtering, and used different training sets to obtain their results.
However, we can conclude that the Viola-Jones algorithm, which on its own classifies images based on
local features only, can still detect faces with very high accuracy and more rapidly than pixel-based systems
(Viola-Jones, 2001, p.139).
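
As an illustration of how the Viola-Jones detector discussed above can be applied in practice, the following is a minimal sketch using OpenCV's bundled Haar cascade; the input file name is a placeholder and this is not the project's final code:

import cv2

# Load OpenCV's pre-trained frontal-face Haar cascade (a Viola-Jones style detector).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("classroom.jpg")             # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection works on grayscale

# detectMultiScale returns one (x, y, w, h) bounding box per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("Detected %d face(s)" % len(faces))

# Draw the bounding boxes and save the annotated image.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)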
SYSTEM ANALYSIS

There are various software development approaches defined and designed for use
during the development process of software; these approaches are also
referred to as "Software Development Process Models". Each process model follows a
particular life cycle in order to ensure success in the process of software development.
Requirements:

Business requirements are gathered in this phase. This phase is the main focus of the
project managers and stakeholders. Meetings with managers, stakeholders and users
are held in order to determine the requirements: Who is going to use the system?
How will they use the system? What data should be input into the system? What data
should be output by the system? These are the general questions that get answered during
the requirements gathering phase. This produces a large list of functionality that the
system should provide, which describes the functions the system should perform, the business
logic that processes data, what data is stored and used by the system, and how the user
interface should work. The overall result describes the system as a whole and what it should
do, not how it will actually do it.

Design

The software system design is produced from the results of the requirements phase.
Architects have the ball in their court during this phase, and this is where
their focus lies. This is where the details of how the system will work are produced.
Architecture, including hardware and software, communication, and software design
(UML is produced here) are all part of the deliverables of the design phase.

Implementation

Code is produced from the deliverables of the design phase during implementation,
and this is the longest phase of the software development life cycle. For a developer,
this is the main focus of the life cycle because this is where the code is produced.
Implementation may overlap with both the design and testing phases. Many tools
exist (CASE tools) to automate the production of code using information
gathered and produced during the design phase.

Testing

During testing, the implementation is tested against the requirements to make sure that
the product actually solves the needs addressed and gathered during the
requirements phase. Unit tests and system/acceptance tests are done during this
phase. Unit tests act on a specific component of the system, while system tests act on
the system as a whole.

So, in a nutshell, that is a very basic overview of the general software development life
cycle model. Now let's delve into some of the traditional and widely used variations.

2.1 STUDY OF THE SYSTEM


The interface has been developed with flexibility of use in mind, with graphics
concepts applied through a browser interface. The GUIs at the top level have
been categorized as follows:

1. Administrative User Interface Design

2. The Operational and Generic User Interface Design

The administrative user interface concentrates on the consistent
information that is practically part of the organizational activities and which
needs proper authentication for data collection. The interface helps the
administration with all the transactional states, such as data insertion, data deletion,
and data updating, along with executive data search capabilities.

The operational and generic user interface helps the users of the system in
transactions through the existing data and required services. The operational
user interface also helps ordinary users manage their own information
in a customized manner, as per the assisted flexibilities.

2.2 PROCESS MODELS USED WITH JUSTIFICATION

SDLC METHODOLOGIES

This document plays a vital role in the software development life cycle (SDLC), as it
describes the complete requirements of the system. It is meant for use by developers and
will be the basis during the testing phase. Any changes made to the requirements in the
future will have to go through a formal change approval process.

SPIRAL MODEL
The Spiral Model was defined by Barry Boehm in his 1988 article, “A Spiral Model of
Software Development and Enhancement”. This model was not the first to
discuss iterative development, but it was the first to explain why the iteration
matters.

As originally envisioned, the iterations were typically 6 months to 2 years long. Each
phase starts with a design goal and ends with the client reviewing the progress thus far.
Analysis and engineering efforts are applied at each phase of the project, with an eye
toward the end goal of the project.

The following diagram shows how the spiral model works:

The steps of the Spiral Model can be generalized as follows:

 The new system requirements are defined in as much detail as possible.
This usually involves interviewing a number of users representing all the
external and internal users and other aspects of the existing system.

 A preliminary design is created for the new system.

 A first prototype of the new system is constructed from the preliminary
design. This is usually a scaled-down system, and represents an
approximation of the characteristics of the final product.

 A second prototype is evolved by a fourfold procedure:

1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.

2. Defining the requirements of the second prototype.

3. Planning and designing the second prototype.

4. Constructing and testing the second prototype.

 At the customer's option, the entire project can be aborted if the risk is
deemed too great. Risk factors might involve development cost
overruns, operating-cost miscalculation, or any other factor that could,
in the customer's judgment, result in a less-than-satisfactory final
product.

 The existing prototype is evaluated in the same manner as was the
previous prototype, and if necessary, another prototype is developed
from it according to the fourfold procedure outlined above.

 The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.

 The final system is constructed, based on the refined prototype.

 The final system is thoroughly evaluated and tested. Routine
maintenance is carried out on a continuing basis to prevent large-scale
failures and to minimize downtime.
2.3 HARDWARE AND SOFTWARE REQUIREMENTS

Hardware Requirements:

• RAM: 4GB or higher

• Processor: Intel i3 or above

• Hard Disk: 500GB minimum

Software Requirements:

• OS: Windows

• Python: 2.7.x and above

• PyCharm IDE

• Setuptools and pip installed for Python 3.6.x and above

2.4 Functional requirements

User:
 Turn on the camera

 Run the main file

Non-Functional Requirements
1. Secure access to the camera and long battery life.

2. 24 x 7 availability.

3. Better component design to get better performance at peak times.

4. A flexible service-based architecture is highly desirable for future
extension.
3. FEASIBILITY STUDY

The preliminary investigation examines project feasibility: the likelihood that the system
will be useful to the organization. The main objective of the feasibility study is
to test the technical, operational and economic feasibility of adding new
modules and debugging the old running system. Any system is feasible if there are
unlimited resources and infinite time. There are three aspects in the feasibility study
portion of the preliminary investigation:

 Technical Feasibility
 Operational Feasibility
 Economic Feasibility

3.1. TECHNICAL FEASIBILITY

The technical issues usually raised during the feasibility stage of the
investigation include the following:

 Does the necessary technology exist to do what is suggested?

 Does the proposed equipment have the technical capacity to hold the data
required to use the new system?
 Will the proposed system provide adequate responses to inquiries, regardless
of the number or location of users?
 Can the system be upgraded once developed?
 Are there technical guarantees of accuracy, reliability, ease of access and
data security?
The current system developed is technically feasible. It is a web-based
user interface for audit workflow at NIC-CSD. Thus it provides easy access
to the users. The database's purpose is to create, establish and maintain a
workflow among various entities in order to facilitate all concerned users in
their various capacities or roles. Permissions would be granted to users based
on the roles specified. Therefore, it provides the technical guarantee of
accuracy, reliability and security. The software and hardware requirements for the
development of this project are few and are already available in-house at
NIC or are available free as open source. The work for the project is done
with the current equipment and existing software technology. The necessary
bandwidth exists for providing fast feedback to the users irrespective of the
number of users using the system.

3.2. OPERATIONAL FEASIBILITY

Proposed projects are beneficial only if they can be turned into
information systems that meet the organization's operating requirements.
Operational feasibility aspects of the project are to be taken as an important part
of the project implementation. Some of the important issues raised to test the
operational feasibility of a project include the following:

 Is there sufficient support for the project from the management and the users?

 Will the system be used and work properly once it is developed and
implemented?
 Will there be any resistance from the users that will undermine the possible
application benefits?
This system is targeted to be in accordance with the above-mentioned
issues. Beforehand, the management issues and user requirements have been
taken into consideration. So there is no question of resistance from the users that
could undermine the possible application benefits.

A well-planned design would ensure the optimal utilization of the computer
resources and would help improve performance.

3.3 ECONOMIC FEASIBILITY

The computerized system takes care of the existing system's data
flow and procedures completely, and should generate all the reports of the
manual system besides a host of other management reports.

It should be built as a web-based application with separate web and
database servers. This is required because the activities are spread throughout the
organization and the customer wants a centralized database. Furthermore, some of the linked
transactions take place in different locations.
4. REQUIREMENT SPECIFICATION

4.1 FUNCTIONAL REQUIREMENT SPECIFICATION

This application consists of the following modules:

 User module

User module:

The User module consists only of the camera in front of the user, which
continuously records video and analyses it to check whether the user's eyes are
closed or not. If the user closes his eyes for 10 seconds, the alert system
activates to alert the user.

4.2 PERFORMANCE REQUIREMENTS:

Performance is measured in terms of the output provided by the
application. The requirement specification plays an important part in the analysis of a
system. Only when the requirement specification is properly given is it
possible to design a system which will fit into the required environment. It rests
largely with the users of the existing system to give the requirement specifications,
because they are the people who will finally use the system. This is because the
requirements have to be known during the initial stages so that the system can
be designed according to those requirements. It is very difficult to change a
system once it has been designed, and on the other hand, designing a system
which does not cater to the requirements of the user is of no use.
The requirement specifications for any system can be broadly stated as given
below:

 The system should be able to interface with the existing system
 The system should be accurate
 The system should be better than the existing system

4.3 SOFTWARE REQUIREMENTS

 Language: Python
 Artificial Intelligence
 Facial Recognition
 Machine Learning libraries
IMPLEMENTATION (PYTHON):

What Is A Script?

Up to this point, I have concentrated on the interactive programming capability of
Python. This is a very useful capability that allows you to type in a program and
have it executed immediately in an interactive mode.

Scripts are reusable

Basically, a script is a text file containing the statements that comprise a Python
program. Once you have created the script, you can execute it over and over
without having to retype it each time.

Scripts are editable

Perhaps more importantly, you can make different versions of the script by
modifying the statements from one file to the next using a text editor. Then you
can execute each of the individual versions. In this way, it is easy to create
different programs with a minimum amount of typing.

You will need a text editor

Just about any text editor will suffice for creating Python script files.

You can use Microsoft Notepad, Microsoft WordPad, Microsoft Word, or just
about any word processor if you want to.

Difference between a script and a program

Script:

Scripts are distinct from the core code of the application, which is usually written
in a different language, and are often created or at least modified by the end
user. Scripts are often interpreted from source code or bytecode, whereas the
applications they control are traditionally compiled to native machine code.

Program:
A program has an executable form that the computer can use directly to
execute the instructions.

The same program in its human-readable source code form is the form from which
executable programs are derived (e.g., compiled).

Python

What is Python? Chances are you are asking yourself this. You may have found this
book because you want to learn to program but don't know anything about
programming languages. Or you may have heard of programming languages like C
and want to know what Python is and how it compares to “big name” languages.
Hopefully I can explain it for you.

Python concepts

If you're not interested in the hows and whys of Python, feel free to skip to the
next chapter. In this chapter I will try to explain to the reader why I think Python is
one of the best languages available and why it's a great one to start programming
with.

• Open source general-purpose language.

• Object Oriented, Procedural, Functional

• Easy to interface with C

• Easy-ish to interface with C++ (via SWIG)

• Great interactive environment


Python is a high-level, interpreted, interactive and object-oriented scripting
language. Python is designed to be highly readable. It frequently uses English
keywords where other languages use punctuation, and it has fewer
syntactical constructions than other languages.

 Python is Interpreted − Python is processed at runtime by the interpreter.
You do not need to compile your program before executing it. This is
similar to PERL and PHP.

 Python is Interactive − You can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.

 Python is Object-Oriented − Python supports the object-oriented style or
technique of programming that encapsulates code within objects.

 Python is a Beginner's Language − Python is a great language for
beginner-level programmers and supports the development of a wide
range of applications, from simple text processing to WWW browsers to
games.

History of Python

Python was developed by Guido van Rossum in the late eighties and early
nineties at the National Research Institute for Mathematics and Computer
Science in the Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++,
Algol-68, SmallTalk, and Unix shell and other scripting languages.

Python is copyrighted. Like Perl, Python source code is now available under the
GNU General Public License (GPL).
Python is now maintained by a core development team at the institute, although
Guido van Rossum still holds a vital role in directing its progress.

Python Features

Python's features include −

 Easy-to-learn − Python has few keywords, simple structure, and a clearly
defined syntax. This allows the student to pick up the language quickly.

 Easy-to-read − Python code is more clearly defined and visible to the eyes.

 Easy-to-maintain − Python's source code is fairly easy to maintain.

 A broad standard library − The bulk of Python's library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.

 Interactive Mode − Python has support for an interactive mode which
allows interactive testing and debugging of snippets of code.

 Portable − Python can run on a wide variety of hardware platforms and has
the same interface on all platforms.

 Extendable − You can add low-level modules to the Python interpreter.
These modules enable programmers to add to or customize their tools to
be more efficient.

 Databases − Python provides interfaces to all major commercial databases.

 GUI Programming − Python supports GUI applications that can be created
and ported to many system calls, libraries and windowing systems, such as
Windows MFC, Macintosh, and the X Window system of Unix.
 Scalable − Python provides better structure and support for large
programs than shell scripting.

Apart from the above-mentioned features, Python has a big list of good features,
a few of which are listed below −

 It supports functional and structured programming methods as well as
OOP.

 It can be used as a scripting language or can be compiled to byte-code for
building large applications.

 It provides very high-level dynamic data types and supports dynamic type
checking.

 It supports automatic garbage collection.

 It can be easily integrated with C.

Dynamic vs Static Types

Python is a dynamically typed language. Many other languages are statically
typed, such as C. A statically typed language requires the programmer to explicitly tell
the computer what type of “thing” each data value is.

For example, in C if you had a variable that was to contain the price of something,
you would have to declare the variable as a “float” type.

This tells the compiler that the only data that can be used for that variable must
be a floating point number, i.e. a number with a decimal point.
If any other data value was assigned to that variable, the compiler would give an
error when trying to compile the program.

Python, however, doesn’t require this. You simply give your variables names and
assign values to them. The interpreter takes care of keeping track of what kinds of
objects your program is using. This also means that you can change the size of the
values as you develop the program. Say you have another decimal number (a.k.a.
a floating point number) you need in your program.

With a static typed language, you have to decide the memory size the variable can
take when you first initialize that variable. A double is a floating point value that
can handle a much larger number than a normal float (the actual memory sizes
depend on the operating environment).

If you declare a variable to be a float but later on assign a value that is too big to
it, your program will fail; you will have to go back and change that variable to be a
double.

With Python, it doesn’t matter. You simply give it whatever number you want
and Python will take care of manipulating it as needed. It even works for derived
values.

For example, say you are dividing two numbers. One is a floating point number
and one is an integer. Python realizes that it's more accurate to keep track of
decimals, so it automatically calculates the result as a floating point number.

4.3.2 ANALYSIS OF THE EUCLIDEAN ALGORITHM

Suppose a and b are integers, not both zero. The greatest common


divisor (gcd, for short) of a and b, written (a,b) or gcd(a,b), is the largest
positive integer that divides both a and b. We will be concerned almost
exclusively with the case where a and b are non-negative, but the theory goes
through with essentially no change in case a or b is negative. The
notation (a,b) might be somewhat confusing, since it is also used to denote
ordered pairs and open intervals. The meaning is usually clear from the context.

It may be worth reminding the reader of the details of the Euclidean
algorithm. Let u0 and u1 be positive integers. The algorithm computes

u0 = q0 u1 + u2
u1 = q1 u2 + u3
...
u(n-1) = q(n-1) un + u(n+1),

where 0 = u(n+1) < un < ... < u2 < u1. Then un = gcd(u0, u1).

We define E(u0, u1) to be the number of division steps performed by the
algorithm on input (u0, u1), and we see that E(u0, u1) = n. It can be proved by
induction that if u > v > 0, E(u, v) = n, and u is as small as possible, then (u, v)
= (F(n+2), F(n+1)), where Fk denotes the k-th Fibonacci number, defined by F0 = 0,
F1 = 1, and F(k+2) = F(k+1) + Fk for k ≥ 0.
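
The following short Python function, added here only as an illustration of the definitions above, computes gcd(u0, u1) and counts the division steps E(u0, u1):

def euclid_steps(u0, u1):
    """Return (gcd(u0, u1), E(u0, u1)), the number of division steps."""
    steps = 0
    while u1 != 0:
        u0, u1 = u1, u0 % u1   # one division step: u(k-1) = qk*uk + u(k+1)
        steps += 1
    return u0, steps

# The worst case occurs at consecutive Fibonacci numbers,
# e.g. (F7, F6) = (13, 8) takes n = 5 steps:
print(euclid_steps(13, 8))   # (1, 5)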

4.3.3. INTRODUCTION TO ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is the ability of a digital computer or computer-
controlled robot to perform tasks commonly associated with intelligent beings.
The term is frequently applied to the project of developing systems endowed
with the intellectual processes characteristic of humans, such as the ability to
reason, discover meaning, generalize, or learn from past experience. Since the
development of the digital computer in the 1940s, it has been demonstrated that
computers can be programmed to carry out very complex tasks, such as
discovering proofs for mathematical theorems or playing chess with great
proficiency.

As machines become increasingly capable, tasks considered to require
"intelligence" are often removed from the definition of AI, a phenomenon
known as the AI effect. A quip known as Tesler's Theorem says “AI is whatever hasn't
been done yet”.

The traditional problems (or goals) of AI research include reasoning,


knowledge representation, planning, learning, natural language processing,
perception and the ability to move and manipulate objects. General intelligence
is among the field's long-term goals. Approaches include statistical methods,
computational intelligence, and traditional symbolic AI. Many tools are used in
AI, including versions of search and mathematical optimization, artificial neural
networks, and methods based on statistics, probability and economics. The AI
field draws upon computer science, information engineering, mathematics,
psychology, linguistics, philosophy, and many other fields.

Types of Artificial Intelligence (AI):

AI can be classified in a number of ways; there are two main types of
classification.

Type 1:

Weak AI or Narrow AI: It is focused on one narrow task: the phenomenon
that machines which are not intelligent enough to do their own work can be built in
such a way that they seem smart. An example would be a poker game where a
machine beats a human, in which all rules and moves are fed into the
machine. Here, each and every possible scenario needs to be entered beforehand
manually. Each and every weak AI contributes to the building of strong AI.

Strong AI: Machines that can actually think and perform tasks on their own,
just like a human being. There are no proper existing examples of this, but some
industry leaders are very keen on getting close to building a strong AI, which has
resulted in rapid progress.

Type 2 (based on functionalities):

Reactive Machines: This is one of the basic forms of AI. It doesn't have a
memory of the past and cannot use past information to inform future actions.
Example: the IBM chess program that beat Garry Kasparov in the 1990s.

Limited Memory: These AI systems can use past experiences to inform future
decisions. Some of the decision-making functions in self-driving cars have been
designed this way: observations are used to inform actions happening in the near
future, such as a car that has changed lanes. These observations are not
stored permanently. Apple's chatbot Siri is another example.

Theory of Mind: This type of AI should be able to understand people's
emotions, beliefs, thoughts and expectations, and be able to interact socially. Even
though a lot of improvements have been made in this field, this kind of AI is not
complete yet.

Self-awareness: An AI that has its own consciousness and is super-intelligent, self-
aware and sentient (in simple words, a complete human being). Of course,
this kind of bot does not exist yet, and if achieved it will be one of the milestones
in the field of AI.

There are many ways AI can be achieved; the most important among them are as follows:

Machine Learning (ML): A method where the target (goal) is defined
and the steps to reach that target are learned by the machine itself through
training (gaining experience). For example, consider identifying a simple object such as an
apple or an orange. The target is achieved not by explicitly specifying the details
about the object and coding them, but just as we teach a child, by showing multiple
different pictures of it, thereby allowing the machine to define the steps to
identify an apple or an orange, as sketched below.
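
As a toy illustration of this idea (learning from labelled examples rather than explicit rules), assuming scikit-learn is installed; the features (weight in grams and a made-up redness score) and the labels are invented for the example:

from sklearn.neighbors import KNeighborsClassifier

# Invented training examples: [weight in grams, redness score 0-1] per fruit.
X_train = [[150, 0.9], [170, 0.8], [130, 0.2], [140, 0.1]]
y_train = ["apple", "apple", "orange", "orange"]

# "Training" (gaining experience) from the labelled examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# The machine now classifies a new fruit it has not seen before.
print(model.predict([[160, 0.85]]))   # -> ['apple']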
Natural Language Processing (NLP): Natural Language Processing is
broadly defined as the automatic manipulation of natural language, like speech
and text, by software. One of the well-known examples of this is email spam
detection, as we can see from how it has improved in our mail systems.

Vision: This can be described as a field which enables machines to see. Machine
vision captures and analyses visual information using a camera, analog-to-
digital conversion, and digital signal processing. It can be compared to human
eyesight, but it is not bound by human limitations, which can enable it to see
through walls (now that would be interesting if we could have implants that
make us see through walls). It is usually achieved through machine learning
to get the best possible results, so we could say that these two fields are
interlinked.

Robotics: A field of engineering focused on the design and
manufacture of robots. Robots are often used to perform tasks that are
difficult for humans to perform or perform consistently. Examples include car
assembly lines, hospitals, office cleaning, serving and preparing food
in hotels, patrolling farm areas, and even police work. Recently, machine
learning has been used to achieve good results in building robots that
interact socially (Sophia).
ADVANTAGES AND DISADVANTAGES OF A.I.

Artificial Intelligence is the ability of a computer program to learn and think.
Anything can be considered artificial intelligence if it involves a program
doing something that we would normally think relies on the intelligence of
a human. In this article, we will discuss the different advantages and
disadvantages of Artificial Intelligence.

1. Advantages of Artificial Intelligence

2. Disadvantages of Artificial Intelligence

The advantages of Artificial Intelligence applications are enormous and can
revolutionize any professional sector. Let's see some of them.

Advantages of Artificial Intelligence

1) Reduction in Human Error

The phrase “human error” was born because humans make mistakes from time
to time. Computers, however, do not make these mistakes if they are
programmed properly. With Artificial Intelligence, decisions are taken from
previously gathered information by applying a certain set of algorithms. So errors
are reduced, and the chance of reaching accuracy with a greater degree of
precision is a possibility.

Example: Using AI in weather forecasting has reduced the majority of human
error.
2) Takes Risks instead of Humans

This is one of the biggest advantages of Artificial Intelligence. We can overcome
many of the risky limitations of humans by developing an AI robot which can
do the risky things for us. Be it going to Mars, defusing a bomb, exploring the
deepest parts of the oceans, or mining for coal and oil, it can be used effectively in
any kind of natural or man-made disaster.

Example: Have you heard about the Chernobyl nuclear power plant explosion in
Ukraine? At that time there were no AI-powered robots which could help us to
minimise the effect of radiation by controlling the fire in its early stages, as any
human who went close to the core was dead in a matter of minutes. They eventually
poured sand and boron from helicopters from a mere distance.

AI robots can be used in such situations, where human intervention can be
hazardous.

3) Available 24×7

An average human will work for 4-6 hours a day, excluding breaks. Humans
are built in such a way that they need time off to refresh themselves and get
ready for a new day of work, and they even have weekly days off to balance
their work life and personal life. But using AI we can make machines work
24×7 without any breaks, and they don't even get bored, unlike humans.

Example: Educational institutes and helpline centres get many queries
and issues which can be handled effectively using AI.

4) Helping in Repetitive Jobs

In our day-to-day work, we perform many repetitive tasks, like
sending a thank-you mail, verifying documents for errors, and many more
things. Using Artificial Intelligence we can productively automate these
mundane tasks and can even remove “boring” tasks from humans, freeing them
up to be increasingly creative.
Example: In banks, we often see many verifications of documents in order to
get a loan, which is a repetitive task for the owner of the bank. Using AI
Cognitive Automation, the owner can speed up the process of verifying the
documents, by which both the customers and the owner will benefit.

5) Digital Assistance

Some highly advanced organizations use digital assistants to interact with
users, which saves on human resources. Digital assistants are also used on
many websites to provide things that users want. We can chat with them about
what we are looking for. Some chatbots are designed in such a way that it
becomes hard to determine whether we're chatting with a chatbot or a human being.

Example: We all know that organizations have a customer support team which
needs to clarify the doubts and queries of the customers. Using AI,
organizations can set up a voicebot or chatbot which can help customers with
all their queries. We can see that many organizations have already started using them
on their websites and mobile applications.

6) Faster Decisions

Using AI alongside other technologies, we can make machines take decisions
faster than a human and carry out actions quicker. While taking a decision,
a human will analyse many factors both emotionally and practically, but an AI-
powered machine works on what it is programmed to do and delivers the results
faster.

Example: We have all played a chess game in Windows. It is nearly impossible
to beat the CPU in hard mode because of the AI behind that game. It will take
the best possible step in a very short time according to the algorithms used
behind it.
7) Daily Applications

Daily applications such as Apple's Siri, Windows' Cortana and Google's OK
Google are frequently used in our daily routine, whether for searching for a
location, taking a selfie, making a phone call, replying to a mail, and many more.

Example: Around 20 years ago, when we were planning to go somewhere, we
would ask a person who had already been there for directions. But now all we
have to do is say “OK Google, where is Visakhapatnam?”. It will show you
Visakhapatnam's location on Google Maps and the best path between you and
Visakhapatnam.

8) New Inventions
AI is powering many inventions in almost every domain, which will help
humans solve the majority of complex problems.

As every bright side has a darker side, Artificial Intelligence also has
some disadvantages. Let's see some of them.

Disadvantages of Artificial Intelligence

1) High Costs of Creation

As AI updates every day, the hardware and software need to be updated over
time to meet the latest requirements. Machines need repair and maintenance,
which come at considerable cost. Creating AI systems requires huge costs, as they are very
complex machines.

2) Making Humans Lazy

AI is making humans lazy by automating the majority of the
work with its applications. Humans tend to get addicted to these inventions, which can cause a
problem for future generations.
3) Unemployment

As AI is replacing the majority of repetitive tasks and other work with
robots, human involvement is becoming less, which will cause a major problem
in employment standards. Every organization is looking to replace the
minimum-qualified individuals with AI robots which can do similar work
more efficiently.

4) No Emotions

There is no doubt that machines are much better when it comes to working
efficiently, but they cannot replace the human connection that makes a team.
Machines cannot develop a bond with humans, which is an essential attribute
when it comes to team management.

Why Python for AI?

The obvious question that we need to encounter at this point is why we should
choose Python for AI over others.

Python requires less code than other languages, in fact around one-fifth the
amount compared to other OOP languages. No wonder it is one of the most popular
languages in the market today.

Python has prebuilt libraries like NumPy for scientific computation, SciPy
for advanced computing and PyBrain for machine learning (Python Machine
Learning), making it one of the best languages for AI.

Python developers around the world provide comprehensive support and
assistance via forums and tutorials, making the job of the coder easier than in any
other popular language.
Python is platform-independent and is hence one of the most flexible and
popular choices for use across different platforms and technologies, with the
least tweaks in basic coding.

Python is the most flexible of all, with options to choose between an
OOP approach and scripting. You can also use the IDE itself to check most
code, which is a boon for developers struggling with different algorithms.

Python, along with packages like NumPy, scikit-learn, iPython Notebook,
and matplotlib, forms the basis to start your AI project.

NumPy is used as a container for generic data, comprising an N-
dimensional array object, tools for integrating C/C++ code, Fourier transforms,
random number capabilities, and other functions.
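
A brief sketch of these NumPy capabilities (array creation, random numbers, and a Fourier transform):

import numpy as np

a = np.random.rand(4, 4)        # a 2-D array of random values in [0, 1)
spectrum = np.fft.fft(a[0])     # Fourier transform of the first row
print(a.shape, spectrum.shape)  # (4, 4) (4,)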

Another useful library is pandas, an open source library that provides


users with easy-to-use data structures and analytic tools for Python.

Matplotlib is another service which is a 2D plotting library creating


publication quality figures. You can use matplotlib to up to 6 graphical users
interface toolkits, web application servers, and Python scripts.
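As a brief sketch of these packages working together (the values below are hypothetical, not project data):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# hypothetical attendance counts for ten days, generated with NumPy
days = np.arange(1, 11)
present = np.random.randint(20, 31, size=10)

# pandas wraps the arrays in a DataFrame for easy inspection
df = pd.DataFrame({"day": days, "present": present})
print(df.describe())

# matplotlib plots the counts as a simple line figure
plt.plot(df["day"], df["present"], marker="o")
plt.xlabel("Day")
plt.ylabel("Students present")
plt.show()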

Some of the most commonly used Python AI libraries are AIMA,
pyDatalog, SimpleAI, EasyAI, etc. There are also Python libraries for machine
learning like PyBrain, MDP, scikit-learn and PyML.

Python Libraries for General AI

--AIMA – Python implementation of algorithms from Russell and Norvig’s
‘Artificial Intelligence: A Modern Approach’.

--pyDatalog – A logic programming engine in Python.

--SimpleAI – Python implementation of many of the artificial intelligence
algorithms described in the book ‘Artificial Intelligence: A Modern Approach’.
It focuses on providing an easy-to-use, well-documented and tested library.

--EasyAI – A simple Python engine for two-player games with AI (Negamax,
transposition tables, game solving).

Python for Machine Language (ML)

Let us look as to why Python is used for MachinPython is one of the most
popular programming languages used by developers today. Guido Van Rossum
created it in 1991 and ever since its inception has been one of the most widely
used languages along with C++, Java, etc.

In our endeavour to identify what is the best programming language for AI and
neural network, Python has taken a big lead. Let us look at why Artificial
Intelligence with Python is one of the best ideas under the sun.

Python Libraries for Natural Language & Text Processing

 NLTK – Open-source Python modules, linguistic data and documentation
for research and development in natural language processing and text
analytics, with distributions for Windows, Mac OS X, and Linux.

PyBrain – A flexible, simple yet effective library for ML tasks. It is also a
modular machine learning library for Python, providing a variety of predefined
environments to test and compare algorithms.

PyML – A framework written in Python that focuses on SVMs and
other kernel methods. It is supported on Linux and Mac OS X.

Scikit-learn – An efficient tool for data analysis using Python. It is open
source and the most popular general-purpose machine learning library; a short
sketch follows this list.

MDP-Toolkit – Another Python data processing framework that can be easily
expanded. It also has a collection of supervised and unsupervised learning
algorithms and other data processing units that can be combined into data
processing sequences and more complex feed-forward network architectures.

The implementation of new algorithms is easy and intuitive. The base of
available algorithms is steadily increasing and includes signal processing
methods (Principal Component Analysis, Independent Component Analysis,
and Slow Feature Analysis), manifold learning methods ([Hessian] Locally
Linear Embedding), several classifiers, probabilistic methods (Factor Analysis,
RBM), data pre-processing methods, and many others.
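As promised above, here is a brief sketch of scikit-learn in use. It trains a support vector classifier on the library’s built-in iris dataset; the dataset and parameters are illustrative only, not part of this project:

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# load the built-in iris dataset and split it into training and test sets
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# train a support vector classifier and report its accuracy on unseen data
clf = SVC().fit(X_train, y_train)
print("Accuracy:", clf.score(X_test, y_test))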

4.3.4 MACHINE LEARNING LIBRARIES

TKINTER

Tkinter is a graphical user interface (GUI) module for Python; with it you can
make desktop apps. You can create windows and buttons and show text and
images, amongst other things.

Tk and Tkinter apps can run on most Unix platforms, and also on Windows
and Mac OS X. The module tkinter is an interface to the Tk GUI toolkit.

Example:

Tkinter module

This example opens a blank desktop window. The tkinter module is part of the
standard library. To use tkinter, import the tkinter module:

from tkinter import *

Note that the module is spelled tkinter, with a lowercase t; it was renamed from
Tkinter in Python 3.

Setup the window

Start Tk and create a window:

root = Tk()           # create the main Tk window
app = Window(root)    # Window is assumed here to be a custom Frame subclass
root.mainloop()       # start the event loop

PYGAME

Pygame is a cross-platform set of Python modules designed for writing
video games. It includes computer graphics and sound libraries designed to be
used with the Python programming language.

Pygame was originally written by Pete Shinners to replace PySDL after
its development stalled. It has been a community project since 2000 and is
released under the open-source GNU Lesser General Public License.

Game creation in any programming language is very rewarding, and it also
makes for a great teaching tool. Game development often brings together quite
a bit of logic, mathematics, physics, artificial intelligence and other topics, all
of which come together in game creation. Not only this, but the topic is games,
so it can be very fun.

Many times people like to visualize the programs they are creating, as it
can help them to learn programming logic quickly. Games are fantastic for
this, as you are specifically programming everything you see. A minimal
example follows.
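As a minimal sketch, separate from the attendance system itself, the following opens a blank Pygame window and exits cleanly when the window is closed:

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # create a 640x480 window
pygame.display.set_caption("Pygame example")

running = True
while running:
    # poll the event queue; stop when the window is closed
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))     # clear the frame to black
    pygame.display.flip()      # display the finished frame

pygame.quit()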

SCIPY

SciPy is a free and open-source Python library used for scientific and technical
computing.

SciPy contains modules for optimization, linear algebra, integration,
interpolation, special functions, FFT, signal and image processing, ODE solvers
and other tasks common in science and engineering.
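As a small sketch of the optimization module (the function being minimised is an arbitrary example, not project code):

from scipy import optimize

# minimise f(x) = (x - 3)^2; the analytic minimum is at x = 3
result = optimize.minimize_scalar(lambda x: (x - 3) ** 2)
print(result.x)  # approximately 3.0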

SciPy builds on the NumPy array object and is part of the NumPy stack, which
includes tools like Matplotlib, pandas and SymPy, and an expanding set of
scientific computing libraries. This NumPy stack has similar users to other
applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is
also sometimes referred to as the SciPy stack.

SciPy is also a family of conferences for users and developers of these tools:
SciPy (in the United States), EuroSciPy (in Europe) and SciPy.in (in India).
Enthought originated the SciPy conference in the United States and continues to
sponsor many of the international conferences as well as host the SciPy website.

The SciPy library is currently distributed under the BSD license, and its
development is sponsored and supported by an open community of developers.
It is also supported by NumFOCUS, a community foundation for supporting
reproducible and accessible science.

DLIB

Dlib is a general-purpose cross-platform software library written in the C++
programming language. Its design is heavily influenced by ideas from
design by contract and component-based software engineering. Thus it is, first
and foremost, a set of independent software components. It is open-source
software released under the Boost Software License.

Since development began in 2002, Dlib has grown to include a wide
variety of tools. As of 2016, it contains software components for dealing with
networking, threads, graphical user interfaces, data structures, linear algebra,
machine learning, image processing, data mining, XML and text parsing,
numerical optimization, Bayesian networks, and many other tasks. In recent
years, much of the development has been focused on creating a broad set of
statistical machine learning tools, and in 2009 Dlib was published in the Journal
of Machine Learning Research. Since then it has been used in a wide range of
domains.

DLib-ml implements numerous machine learning algorithms: SVMs, K-Means
clustering, Bayesian networks, and many others.

DLib also features utility functionality, including threading, networking,
numerical algorithms, image processing, and data compression and integrity
algorithms.

DLib includes extensive unit testing coverage and examples using the
library. Every class and function in the library is documented. This
documentation can be found on the library's home page. DLib provides a good
framework for developing machine learning applications in C++.
DLib is much like DMTL in that it provides a generic high-performance
machine learning toolkit with many different algorithms, but DLib is more
recently updated and has more examples. DLib also contains much more
supporting functionality.

What makes DLib unique is that it is designed for both research use and
creating machine learning applications in C++ and Python.
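As a brief sketch of the Python side of the library (classroom.jpg is a hypothetical input image), DLib's built-in HOG-based frontal face detector can be used like this:

import dlib
import numpy as np
from PIL import Image

# load Dlib's pre-trained HOG-based frontal face detector
detector = dlib.get_frontal_face_detector()

# read the image (hypothetical path) as an RGB NumPy array
image = np.array(Image.open("classroom.jpg").convert("RGB"))

# detect faces; the second argument upsamples the image once
for rect in detector(image, 1):
    print("Face at", rect.left(), rect.top(), rect.right(), rect.bottom())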

Imutils

Translation
Translation is the shifting of an image in either the x or y direction. To translate
an image in OpenCV you need to supply the (x, y)-shift, denoted as (tx, ty), to
construct the translation matrix M:

M = [[1, 0, tx],
     [0, 1, ty]]

And from there, you would need to apply the cv2.warpAffine function. Instead
of manually constructing the translation matrix M and calling cv2.warpAffine,
you can simply make a call to the translate function of imutils.
Rotation:

Rotating an image in OpenCV is accomplished by making a call
to cv2.getRotationMatrix2D and cv2.warpAffine. Further care has to be taken
to supply the (x, y)-coordinate of the point the image is to be rotated about.
These calculation calls can quickly add up and make your code bulky and less
readable. The rotate function in imutils helps resolve this problem.
Resizing:
Resizing an image in OpenCV is accomplished by calling
the cv2.resize function. However, special care needs to be taken to ensure that
the aspect ratio is maintained. This resize function of imutils maintains the
aspect ratio and provides the keyword arguments width and height so the image
can be resized to the intended width/height while (1) maintaining aspect ratio
and (2) ensuring the dimensions of the image do not have to be explicitly
computed by the developer.
Another optional keyword argument, inter, can be used to specify interpolation
method as well.
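As a short sketch combining the translate, rotate and resize helpers described above (face.jpg is a hypothetical input image):

import cv2
import imutils

image = cv2.imread("face.jpg")

shifted = imutils.translate(image, 25, 50)   # shift 25 px right and 50 px down
rotated = imutils.rotate(image, 45)          # rotate 45 degrees about the centre
resized = imutils.resize(image, width=300)   # resize to width 300, keeping aspect ratio

cv2.imshow("Resized", resized)
cv2.waitKey(0)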
Skeletonization
Skeletonization is the process of constructing the “topological skeleton” of an
object in an image, where the object is presumed to be white on a black
background. OpenCV does not provide a function to explicitly construct the
skeleton, but does provide the morphological and binary functions to do so.

For convenience, the skeletonize function of imutils can be used to construct the
topological skeleton of the image. The first argument, size, is the size of the
structuring element kernel. An optional argument, structuring, can be used to
control the structuring element; it defaults to cv2.MORPH_RECT, but can be
any valid structuring element.

OPENCV

OpenCV (Open Source Computer Vision Library) is an open-source
computer vision and machine learning software library. OpenCV was built to
provide a common infrastructure for computer vision applications and to
accelerate the use of machine perception in commercial products. Being a
BSD-licensed product, OpenCV makes it easy for businesses to utilize and
modify the code.

The library has more than 2500 optimized algorithms, which includes a
comprehensive set of both classic and state-of-the-art computer vision and
machine learning algorithms. These algorithms can be used to detect and
recognize faces, identify objects, classify human actions in videos, track camera
movements, track moving objects, extract 3D models of objects, produce 3D
point clouds from stereo cameras, stitch images together to produce a high
resolution image of an entire scene, find similar images from an image database,
remove red eyes from images taken using flash, follow eye movements,
recognize scenery and establish markers to overlay it with augmented reality,
etc.
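As a minimal sketch of the face detection capability used later in this project (it assumes the standard haarcascade_frontalface_default.xml file is present and classroom.jpg is a hypothetical input):

import cv2

# load the pre-trained Haar cascade for frontal faces
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# read the image and convert it to grayscale, as the cascade expects
image = cv2.imread("classroom.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detect faces and draw a rectangle around each one
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imshow("Detected faces", image)
cv2.waitKey(0)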

Along with well-established companies like Google, Yahoo, Microsoft,
Intel, IBM, Sony, Honda and Toyota that employ the library, there are many
startups such as Applied Minds, VideoSurf, and Zeitera that make extensive
use of OpenCV. OpenCV’s deployed uses span the range from stitching
streetview images together, detecting intrusions in surveillance video in Israel,
monitoring mine equipment in China, helping robots navigate and pick up
objects at Willow Garage, detecting swimming pool drowning accidents in
Europe, running interactive art in Spain and New York, and checking runways
for debris in Turkey, to inspecting labels on products in factories around the
world and rapid face detection in Japan.

SYSTEM DESIGN

5.1 INTRODUCTION

Software design sits at the technical kernel of the software engineering
process and is applied regardless of the development paradigm and area of
application. Design is the first step in the development phase for any engineered
product or system. The designer’s goal is to produce a model or representation
of an entity that will later be built. Once the system requirements have been
specified and analyzed, system design is the first of the three technical
activities (design, code and test) that are required to build and verify software.

The importance of design can be stated with a single word: quality. Design is
the place where quality is fostered in software development. Design provides us
with representations of software that can be assessed for quality. Design is the
only way that we can accurately translate a customer’s view into a finished
software product or system. Software design serves as a foundation for all the
software engineering steps that follow. Without a strong design we risk building
an unstable system, one that will be difficult to test and whose quality cannot be
assessed until the last stage. The purpose of the design phase is to plan a solution
to the problem specified by the requirements document. This phase is the first
step in moving from the problem domain to the solution domain. In other words,
starting with what is needed, design takes us toward how to satisfy the needs.
The design of a system is perhaps the most critical factor affecting the quality
of the software; it has a major impact on the later phases, particularly testing
and maintenance. The output of this phase is the design document. This
document is similar to a blueprint for the solution and is used later during
implementation, testing and maintenance. The design activity is often divided
into two separate phases: System Design and Detailed Design.
System Design, also called top-level design, aims to identify the modules that
should be in the system, the specifications of these modules, and how they
interact with each other to produce the desired results. At the end of system
design, all the major data structures, file formats, output formats, and the major
modules in the system and their specifications are decided.
During Detailed Design, the internal logic of each of the modules specified in
system design is decided. In this phase, the details of the data of a module are
usually specified in a high-level design description language, which is
independent of the target language in which the software will eventually be
implemented. In system design the focus is on identifying the modules, whereas
during detailed design the focus is on designing the logic for each of the
modules. In other words, in system design the attention is on what components
are needed, while in detailed design the issue is how the components can be
implemented in software. Design is concerned with identifying the software
components, specifying the relationships among them, specifying the software
structure, and providing a blueprint for the implementation phase. Modularity is
one of the desirable properties of large systems. It implies that the system is
divided into several parts in such a manner that the interaction between parts is
minimal and clearly specified.
During the system design activities, developers bridge the gap between the
requirements specification, produced during requirements elicitation and
analysis, and the system that is delivered to the user. Design is the place where
quality is fostered in development; software design is a process through which
requirements are translated into a representation of software.

5.2 Data Flow Diagrams:

A Data Flow Diagram is a graphical tool used to describe and analyze the
movement of data through a system, manual or automated, including the
processes, stores of data, and delays in the system. Data Flow Diagrams are the
central tool and the basis from which other components are developed. The
transformation of data from input to output, through processes, may be
described logically and independently of the physical components associated
with the system. The DFD is also known as a data flow graph or a bubble chart.

DFDs are the model of the proposed system. They should clearly show the
requirements on which the new system is to be built. Later, during design
activity, this is taken as the basis for drawing the system’s structure charts. The
basic notation used to create a DFD is as follows:

1. Dataflow: Data move in a specific direction from an origin to a destination.

2. Process: People, procedures, or devices that use or produce (transform) data.
The physical component is not identified.

3. Source: External sources or destinations of data, which may be people,
programs, organizations or other entities.

4. Data Store: Here data are stored or referenced by a process in the system.

What is a UML Class Diagram?

Class diagrams are the backbone of almost every object-oriented method,
including UML. They describe the static structure of a system.

Basic Class Diagram Symbols and Notations

Classes represent an abstraction of entities with common characteristics.
Associations represent the relationships between classes.

Illustrate classes with rectangles divided into compartments. Place the name of
the class in the first partition (centered, bolded, and capitalized), list the
attributes in the second partition, and write operations into the third.
Active Class

Active classes initiate and control the flow of activity, while passive classes
store data and serve other classes. Illustrate active classes with a thicker border.

Visibility

Use visibility markers to signify who can access the information contained
within a class. Private visibility hides information from anything outside the
class partition. Public visibility allows all other classes to view the marked
information. Protected visibility allows child classes to access information they
inherited from a parent class.

Associations

Associations represent static relationships between classes. Place association
names above, on, or below the association line. Use a filled arrow to indicate
the direction of the relationship. Place roles near the end of an association.
Roles represent the way the two classes see each other.
Note: It's uncommon to name both the association and the class roles.

Multiplicity (Cardinality)

Place multiplicity notations near the ends of an association. These symbols
indicate the number of instances of one class linked to one instance of the other
class. For example, one company will have one or more employees, but each
employee works for one company only.

Constraint

Place constraints inside curly braces {}.



Composition and Aggregation

Composition is a special type of aggregation that denotes a strong ownership
between Class A, the whole, and Class B, its part. Illustrate composition with a
filled diamond. Use a hollow diamond to represent a simple aggregation
relationship, in which the "whole" class plays a more important role than the
"part" class, but the two classes are not dependent on each other. The diamond
end in both a composition and an aggregation relationship points toward the
"whole" class, or the aggregate.

Generalization
Generalization is another name for inheritance or an "is a" relationship. It refers
to a relationship between two classes where one class is a specialized version of
another. For example, Honda is a type of car. So the class Honda would have a
generalization relationship with the class car.

In real-life coding examples, the difference between inheritance and aggregation
can be confusing. If you have an aggregation relationship, the aggregate (the
whole) can access only the PUBLIC functions of the part class. On the other
hand, inheritance allows the inheriting class to access both the PUBLIC and
PROTECTED functions of the superclass.

What is a UML Use Case Diagram?

Use case diagrams model the functionality of a system using actors and use
cases. Use cases are services or functions provided by the system to its users.

Basic Use Case Diagram Symbols and Notations

System

Draw your system's boundaries using a rectangle that contains use cases. Place
actors outside the system's boundaries.
Use Case

Draw use cases using ovals. Label the ovals with verbs that represent the
system's functions.

Actors

Actors are the users of a system. When one system is the actor of another
system, label the actor system with the actor stereotype.

Relationships
Illustrate relationships between an actor and a use case with a simple line. For
relationships among use cases, use arrows labeled either "uses" or "extends." A
"uses" relationship indicates that one use case is needed by another in order to
perform a task. An "extends" relationship indicates alternative options under a
certain use case.

Sequence Diagram

Sequence diagrams describe interactions among classes in terms of an exchange
of messages over time.

Basic Sequence Diagram Symbols and Notations

Class roles

Class roles describe the way an object will behave in context. Use the UML
object symbol to illustrate class roles, but don't list object attributes.
Activation

Activation boxes represent the time an object needs to complete a task.

Messages

Messages are arrows that represent communication between objects. Use half-
arrowed lines to represent asynchronous messages. Asynchronous messages are
sent from an object that will not wait for a response from the receiver before
continuing its tasks.
Various message types for Sequence and Collaboration diagrams

Lifelines

Lifelines are vertical dashed lines that indicate the object's presence over time.

Destroying Objects

Objects can be terminated early using an arrow labeled "<< destroy >>" that
points to an X.
Loops

A repetition or loop within a sequence diagram is depicted as a rectangle. Place


the condition for exiting the loop at the bottom left corner in square brackets [ ].

Collaboration Diagram

A collaboration diagram describes interactions among objects in terms of
sequenced messages. Collaboration diagrams represent a combination of
information taken from class, sequence, and use case diagrams, describing both
the static structure and dynamic behavior of a system.

Basic Collaboration Diagram Symbols and Notations

Class roles

Class roles describe how objects behave. Use the UML object symbol to
illustrate class roles, but don't list object attributes.

Association roles

Association roles describe how an association will behave given a particular
situation. You can draw association roles using simple lines labeled with
stereotypes.

Messages

Unlike sequence diagrams, collaboration diagrams do not have an explicit way
to denote time; instead, they number messages in order of execution. Sequence
numbering can become nested using the Dewey decimal system. For example,
nested messages under the first message are labeled 1.1, 1.2, 1.3, and so on. A
condition for a message is usually placed in square brackets immediately
following the sequence number. Use a * after the sequence number to indicate a
loop.

Activity Diagram

An activity diagram illustrates the dynamic nature of a system by modeling the
flow of control from activity to activity. An activity represents an operation on
some class in the system that results in a change in the state of the system.
Typically, activity diagrams are used to model workflow or business processes
and internal operations. Because an activity diagram is a special kind of state
chart diagram, it uses some of the same modeling conventions.

Basic Activity Diagram Symbols and Notations

Action states

Action states represent the non-interruptible actions of objects. Draw an action
state as a rectangle with rounded corners.

Action Flow
Action flow arrows illustrate the relationships among action states.

Object Flow

Object flow refers to the creation and modification of objects by activities. An
object flow arrow from an action to an object means that the action creates or
influences the object. An object flow arrow from an object to an action indicates
that the action state uses the object.

Initial State

A filled circle followed by an arrow represents the initial action state.

Final State

An arrow pointing to a filled circle nested inside another circle represents the
final action state.
Branching

A diamond represents a decision with alternate paths. The outgoing alternates
should be labeled with a condition or guard expression. You can also label one
of the paths "else."

Synchronization

A synchronization bar helps illustrate parallel transitions. Synchronization is
also called forking and joining.

Swimlanes
Swimlanes group related activities into one column.

State chart Diagram

A state chart diagram shows the behavior of classes in response to external
stimuli. This diagram models the dynamic flow of control from state to state
within a system.

Basic State chart Diagram Symbols and Notations

States

States represent situations during the life of an object. Illustrate a state using a
rectangle with rounded corners.

Transition

A solid arrow represents the path between different states of an object. Label the
transition with the event that triggered it and the action that results from it.
Initial State

A filled circle followed by an arrow represents the object's initial state.

Final State

An arrow pointing to a filled circle nested inside another circle represents the
object's final state.

Synchronization and Splitting of Control

A short heavy bar with two transitions entering it represents a synchronization
of control. A short heavy bar with two transitions leaving it represents a
splitting of control that creates multiple states.
STATE CHART DIAGRAM:

What is a UML Component Diagram?

A component diagram describes the organization of the physical components in
a system.

Basic Component Diagram Symbols and Notations

Component

A component is a physical building block of the system. It is represented as a
rectangle with tabs.

Interface

An interface describes a group of operations used or created by components.

Dependencies

Draw dependencies among components using dashed arrows.


COMPONENT DIAGRAM:

What is a UML Deployment Diagram?

Deployment diagrams depict the physical resources in a system including


nodes, components, and connections.

Basic Deployment Diagram Symbols and Notations

Node

A node is a physical resource that executes code components.



Association

Association refers to a physical connection between nodes, such as Ethernet.


Components and Nodes

Place components inside the node that deploys them.

UML Diagrams Overview

UML combines the best techniques from data modeling (entity relationship
diagrams), business modeling (workflows), object modeling, and component
modeling. It can be used with all processes, throughout the software
development life cycle, and across different implementation technologies. UML
has synthesized the notations of the Booch method, the Object Modeling
Technique (OMT) and Object-Oriented Software Engineering (OOSE) by fusing
them into a single, common and widely usable modeling language. UML aims
to be a standard modeling language which can model concurrent and distributed
systems.
USE CASE DIAGRAM:

Use case diagrams are used for high-level requirement analysis of a system.
When the requirements of a system are analyzed, the functionalities are
captured in use cases. So we can say that use cases are nothing but the system
functionalities, written in an organized manner. The second element relevant to
use cases is the actors. Actors can be defined as something that interacts with
the system.

The actors can be human users, internal applications, or external applications.
So, in brief, when we are planning to draw a use case diagram, we should have
the following items identified:

 Functionalities to be represented as use cases

 Actors

 Relationships among the use cases and actors.


[Use case diagram: the user takes and prepares the dataset, performs classification (supervised or unsupervised; classifying or regression), extracts features and labels, carries out testing & training with the SVM and SVR algorithms, predicts the result, and plots the output.]

SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. A sequence diagram
shows, as parallel vertical lines ("lifelines"), different processes or objects that
live simultaneously, and, as horizontal arrows, the messages exchanged between
them, in the order in which they occur. This allows the specification of simple
runtime scenarios in a graphical manner.
[Sequence diagram: lifelines for user, dataset, classification, supervised, classifying, testing & training, algorithms, predict(result) and plotting, exchanging the messages 1: supervised(), 2: unsupervised(), 3: classifying(), 4: regression(), 5: features(), 6: labels(), 7: SVM algorithm(), 8: SVR algorithm(), 9: using SVM algorithm(), 10: results().]

COLLABORATION DIAGRAM:

A collaboration diagram, also called a communication diagram or interaction
diagram, models the objects in a system and represents the associations between
the objects as links. The interaction between the objects is denoted by arrows.
To identify the sequence of invocation of these objects, a number is placed next
to each of these arrows. A sophisticated modeling tool can easily convert a
collaboration diagram into a sequence diagram and vice versa. Hence, the
elements of a collaboration diagram are essentially the same as those of a
sequence diagram.

[Collaboration diagram: the same objects as in the sequence diagram above (user, dataset, classification, supervised, classifying, testing & training, algorithms, predict(result) and plotting) joined by links and labeled with the numbered messages 1: supervised(), 2: unsupervised(), 3: classifying(), 4: regression(), 5: features(), 6: labels(), 7: SVR algorithm(), 8: SVM algorithm(), 9: using SVM algorithm(), 10: results().]
ACTIVITY DIAGRAM:

Activity diagrams are graphical representations of workflows of stepwise
activities and actions, with support for choice, iteration and concurrency. In the
Unified Modeling Language, activity diagrams can be used to describe the
business and operational step-by-step workflows of components in a system. An
activity diagram shows the overall flow of control.

[Activity diagram: user → dataset → classification → supervised or unsupervised → regression or classifying → features (days) and labels (price) → testing & training → predict the data using the SVM algorithm → plotting.]
[Component diagram: a server node hosting the datasets together with the matplotlib, numpy and sklearn components.]
Implementation

Source Code:

import tkinter as tk

from tkinter import Message ,Text

import cv2,os

import shutil

import csv

import numpy as np

from PIL import Image, ImageTk

import pandas as pd

import datetime

import time

import tkinter.ttk as ttk

import tkinter.font as font

window = tk.Tk()

#helv36 = tk.Font(family='Helvetica', size=36, weight='bold')

window.title("Face_Recogniser")

dialog_title = 'QUIT'

dialog_text = 'Are you sure?'

#answer = messagebox.askquestion(dialog_title, dialog_text)

#window.geometry('1280x720')

window.configure(background='blue')
#window.attributes('-fullscreen', True)

window.grid_rowconfigure(0, weight=1)

window.grid_columnconfigure(0, weight=1)

#path = "profile.jpg"

#Creates a Tkinter-compatible photo image, which can be used everywhere Tkinter expects an image object.

#img = ImageTk.PhotoImage(Image.open(path))

#The Label widget is a standard Tkinter widget used to display a text or image on the screen.

#panel = tk.Label(window, image = img)

#panel.pack(side = "left", fill = "y", expand = "no")

#cv_img = cv2.imread("img541.jpg")

#x, y, no_channels = cv_img.shape

#canvas = tk.Canvas(window, width = x, height =y)

#canvas.pack(side="left")

#photo = PIL.ImageTk.PhotoImage(image = PIL.Image.fromarray(cv_img))

# Add a PhotoImage to the Canvas

#canvas.create_image(0, 0, image=photo, anchor=tk.NW)

#msg = Message(window, text='Hello, world!')


# Font is a tuple of (font_family, size_in_points, style_modifier_string)

message = tk.Label(window, text="Face-Recognition-Based-Attendance-Management-


System" ,bg="Green" ,fg="white" ,width=50 ,height=3,font=('times', 30, 'italic bold
underline'))

message.place(x=200, y=20)

lbl = tk.Label(window, text="Enter


ID",width=20 ,height=2 ,fg="red" ,bg="yellow" ,font=('times', 15, ' bold ') )

lbl.place(x=400, y=200)

txt = tk.Entry(window,width=20 ,bg="yellow" ,fg="red",font=('times', 15, ' bold '))

txt.place(x=700, y=215)

lbl2 = tk.Label(window, text="Enter


Name",width=20 ,fg="red" ,bg="yellow" ,height=2 ,font=('times', 15, ' bold '))

lbl2.place(x=400, y=300)

txt2 = tk.Entry(window,width=20 ,bg="yellow" ,fg="red",font=('times', 15, ' bold ') )

txt2.place(x=700, y=315)

lbl3 = tk.Label(window, text="Notification :


",width=20 ,fg="red" ,bg="yellow" ,height=2 ,font=('times', 15, ' bold underline '))

lbl3.place(x=400, y=400)
message = tk.Label(window, text="" ,bg="yellow" ,fg="red" ,width=30 ,height=2,
activebackground = "yellow" ,font=('times', 15, ' bold '))

message.place(x=700, y=400)

lbl3 = tk.Label(window, text="Attendance :


",width=20 ,fg="red" ,bg="yellow" ,height=2 ,font=('times', 15, ' bold underline'))

lbl3.place(x=400, y=650)

message2 = tk.Label(window, text="" ,fg="red" ,bg="yellow",activeforeground =


"green",width=30 ,height=2 ,font=('times', 15, ' bold '))

message2.place(x=700, y=650)

def clear():
    # clear the ID entry field and reset the notification label
    txt.delete(0, 'end')
    res = ""
    message.configure(text=res)

def clear2():
    # clear the Name entry field and reset the notification label
    txt2.delete(0, 'end')
    res = ""
    message.configure(text=res)

def is_number(s):
    # return True if the string s can be interpreted as a number
    try:
        float(s)
        return True
    except ValueError:
        pass
    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):
        pass
    return False

def TakeImages():
    # capture face samples from the webcam for the given ID and name
    Id = (txt.get())
    name = (txt2.get())
    if(is_number(Id) and name.isalpha()):
        cam = cv2.VideoCapture(0)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while(True):
            ret, img = cam.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
                # incrementing sample number
                sampleNum = sampleNum + 1
                # saving the captured face in the dataset folder TrainingImage
                cv2.imwrite(os.path.join("TrainingImage", name + "." + Id + "." + str(sampleNum) + ".jpg"),
                            gray[y:y+h, x:x+w])
                # display the frame
                cv2.imshow('frame', img)
            # wait for 100 milliseconds
            if cv2.waitKey(100) & 0xFF == ord('q'):
                break
            # break if the sample number is more than 60
            elif sampleNum > 60:
                break
        cam.release()
        cv2.destroyAllWindows()
        res = "Images Saved for ID : " + Id + " Name : " + name
        row = [Id, name]
        with open(os.path.join('StudentDetails', 'StudentDetails.csv'), 'a+') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        message.configure(text=res)
    else:
        if(is_number(Id)):
            res = "Enter Alphabetical Name"
            message.configure(text=res)
        if(name.isalpha()):
            res = "Enter Numeric Id"
            message.configure(text=res)
def TrainImages():
    # train the LBPH face recognizer on the saved training images
    # (older OpenCV builds used cv2.createLBPHFaceRecognizer())
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, Id = getImagesAndLabels("TrainingImage")
    recognizer.train(faces, np.array(Id))
    recognizer.save(os.path.join("TrainingImageLabel", "Trainner.yml"))
    res = "Image Trained"
    message.configure(text=res)

def getImagesAndLabels(path):
    # get the path of all the files in the folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    # create an empty face list and an empty ID list
    faces = []
    Ids = []
    # loop through all the image paths, loading the Ids and the images
    for imagePath in imagePaths:
        # load the image and convert it to grayscale
        pilImage = Image.open(imagePath).convert('L')
        # convert the PIL image into a numpy array
        imageNp = np.array(pilImage, 'uint8')
        # get the Id from the image file name (name.Id.sampleNum.jpg)
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        # extract the face from the training image sample
        faces.append(imageNp)
        Ids.append(Id)
    return faces, Ids

def TrackImages():
    # recognize faces from the webcam and record attendance in a CSV file
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read(os.path.join("TrainingImageLabel", "Trainner.yml"))
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    df = pd.read_csv(os.path.join("StudentDetails", "StudentDetails.csv"))
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', 'Name', 'Date', 'Time']
    attendance = pd.DataFrame(columns=col_names)
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x+w, y+h), (225, 0, 0), 2)
            Id, conf = recognizer.predict(gray[y:y+h, x:x+w])
            if(conf < 50):
                # a low confidence value means a good match
                ts = time.time()
                date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
                aa = df.loc[df['Id'] == Id]['Name'].values
                tt = str(Id) + "-" + aa
                attendance.loc[len(attendance)] = [Id, aa, date, timeStamp]
            else:
                Id = 'Unknown'
                tt = str(Id)
            if(conf > 75):
                # very poor match: save the unknown face for later review
                noOfFile = len(os.listdir("ImagesUnknown")) + 1
                cv2.imwrite(os.path.join("ImagesUnknown", "Image" + str(noOfFile) + ".jpg"),
                            im[y:y+h, x:x+w])
            cv2.putText(im, str(tt), (x, y+h), font, 1, (255, 255, 255), 2)
        attendance = attendance.drop_duplicates(subset=['Id'], keep='first')
        cv2.imshow('im', im)
        if (cv2.waitKey(1) == ord('q')):
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
    timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
    Hour, Minute, Second = timeStamp.split(":")
    fileName = os.path.join("Attendance", "Attendance_" + date + "_" + Hour + "-" + Minute + "-" + Second + ".csv")
    attendance.to_csv(fileName, index=False)
    cam.release()
    cv2.destroyAllWindows()
    res = attendance
    message2.configure(text=res)
clearButton = tk.Button(window, text="Clear",
command=clear ,fg="red" ,bg="yellow" ,width=20 ,height=2 ,activebackground =
"Red" ,font=('times', 15, ' bold '))

clearButton.place(x=950, y=200)

clearButton2 = tk.Button(window, text="Clear",


command=clear2 ,fg="red" ,bg="yellow" ,width=20 ,height=2, activebackground =
"Red" ,font=('times', 15, ' bold '))

clearButton2.place(x=950, y=300)

takeImg = tk.Button(window, text="Take Images",


command=TakeImages ,fg="red" ,bg="yellow" ,width=20 ,height=3, activebackground =
"Red" ,font=('times', 15, ' bold '))

takeImg.place(x=200, y=500)

trainImg = tk.Button(window, text="Train Images",


command=TrainImages ,fg="red" ,bg="yellow" ,width=20 ,height=3, activebackground =
"Red" ,font=('times', 15, ' bold '))

trainImg.place(x=500, y=500)

trackImg = tk.Button(window, text="Track Images",


command=TrackImages ,fg="red" ,bg="yellow" ,width=20 ,height=3, activebackground =
"Red" ,font=('times', 15, ' bold '))

trackImg.place(x=800, y=500)

quitWindow = tk.Button(window, text="Quit",


command=window.destroy ,fg="red" ,bg="yellow" ,width=20 ,height=3, activebackground
= "Red" ,font=('times', 15, ' bold '))

quitWindow.place(x=1100, y=500)

copyWrite = tk.Text(window, background=window.cget("background"),


borderwidth=0,font=('times', 30, 'italic bold underline'))

copyWrite.tag_configure("superscript", offset=10)

copyWrite.insert("insert", "Developed by Ashish","", "TEAM", "superscript")

copyWrite.configure(state="disabled",fg="red" )

copyWrite.pack(side="left")
copyWrite.place(x=800, y=750)

window.mainloop()

setup.py

from cx_Freeze import setup, Executable
import sys, os

PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__))
os.environ['TCL_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tcl8.6')
os.environ['TK_LIBRARY'] = os.path.join(PYTHON_INSTALL_DIR, 'tcl', 'tk8.6')

base = None
if sys.platform == 'win32':
    base = None

executables = [Executable("train.py", base=base)]

packages = ["idna", "os", "sys", "cx_Freeze", "tkinter", "cv2", "setup",
            "numpy", "PIL", "pandas", "datetime", "time"]

options = {
    'build_exe': {
        'packages': packages,
    },
}

setup(
    name="ToolBox",
    options=options,
    version="0.0.1",
    description='Vision ToolBox',
    executables=executables
)

# build with: python setup.py build


SYSTEM TESTING

7.1 INTRODUCTION

Software testing is a critical element of software quality assurance
and represents the ultimate review of specification, design and coding. In
fact, testing is the one step in the software engineering process that could be
viewed as destructive rather than constructive.

A strategy for software testing integrates software test case design
methods into a well-planned series of steps that result in the successful
construction of software. Testing is the set of activities that can be planned in
advance and conducted systematically. The underlying motivation of program
testing is to affirm software quality with methods that can be applied
economically and effectively to both large and small-scale systems.

7.2 STRATEGIC APPROACH TO SOFTWARE TESTING

The software engineering process can be viewed as a spiral. Initially,
system engineering defines the role of software and leads to software
requirements analysis, where the information domain, functions, behavior,
performance, constraints and validation criteria for software are established.
Moving inward along the spiral, we come to design and finally to coding. To
develop computer software we spiral inward along streamlines that decrease the
level of abstraction on each turn.

A strategy for software testing may also be viewed in the context of the
spiral. Unit testing begins at the vertex of the spiral and concentrates on each
unit of the software as implemented in source code. Testing progresses by
moving outward along the spiral to integration testing, where the focus is on the
design and the construction of the software architecture. Taking another turn
outward on the spiral, we encounter validation testing, where requirements
established as part of software requirements analysis are validated against the
software that has been constructed. Finally we arrive at system testing, where
the software and other system elements are tested as a whole.

[Testing levels: unit testing and module testing (component testing), sub-system testing and system testing (integration testing), and acceptance testing (user testing).]

7.3 UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software
design, the module. The unit testing we performed is white-box oriented, and
for some modules the steps were conducted in parallel.

1. WHITE BOX TESTING

This type of testing ensures that:

 All independent paths have been exercised at least once
 All logical decisions have been exercised on their true and false sides
 All loops are executed at their boundaries and within their operational
bounds
 All internal data structures have been exercised to assure their validity

To follow the concept of white box testing, we tested each form we created
independently, to verify that data flow is correct, all conditions are exercised to
check their validity, and all loops are executed on their boundaries.

2. BASIS PATH TESTING

The established technique of the flow graph with cyclomatic complexity was
used to derive test cases for all the functions. The main steps in deriving test
cases were:

Use the design of the code and draw the corresponding flow graph.

Determine the cyclomatic complexity of the resultant flow graph, using one of
the formulas:

V(G) = E - N + 2, or

V(G) = P + 1, or

V(G) = number of regions,

where V(G) is the cyclomatic complexity, E is the number of edges, N is the
number of flow graph nodes, and P is the number of predicate nodes.

Determine the basis set of linearly independent paths.
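For example, for a hypothetical flow graph with E = 8 edges, N = 7 nodes and P = 2 predicate nodes, V(G) = 8 - 7 + 2 = 3, which agrees with V(G) = P + 1 = 2 + 1 = 3; the basis set therefore contains three linearly independent paths, so at least three test cases are needed to cover them.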
3. CONDITIONAL TESTING

In this part of the testing, each of the conditions was tested on both its true and
false sides, and all the resulting paths were tested, so that each path that may be
generated under a particular condition is traced to uncover any possible errors.

4. DATA FLOW TESTING

This type of testing selects the paths of the program according to the location of
the definitions and uses of variables. This kind of testing was used only where
some local variables were declared. The definition-use chain method was used
in this type of testing. This was particularly useful in nested statements.

5. LOOP TESTING

In this type of testing all the loops are tested to all the limits possible. The
following exercise was adopted for all loops:
 All the loops were tested at their limits, just above them and just below them.
 All the loops were skipped at least once.
 For nested loops, the innermost loop was tested first, working outwards.
 For concatenated loops, the values of dependent loops were set with the help
of the connected loop.
 Unstructured loops were resolved into nested or concatenated loops and
tested as above.
Output Screens
Conclusion

An accurate and efficient automated attendance system with facial
recognition has been developed which achieves metrics comparable with the
existing state-of-the-art systems. This project uses recent techniques in the
fields of computer vision and deep learning. A custom dataset was created
using labelImg, and the evaluation was consistent. The system can be used in
real-time applications which require facial recognition for pre-processing in
their pipeline.

An important future scope would be to train the system on video sequences
for usage in tracking applications. The addition of a temporally consistent
network would enable smoother detection that is more optimal than per-frame
detection.
Bibliography:

Alpaydin, E. (2014) Introduction to Machine Learning. 3rd edn. Cambridge, MA: The MIT Press.

Anthony, S. (2014) Facebook's facial recognition software is now as accurate as the human
brain, but what now?. Available at: http://www.extremetech.com/extreme/178777-facebooks-
facial-recognition-software-is-now-as-accurate-as-the-human-brain-but-what-now (Accessed:
09/01/2018).

Baseer, K. (2015) 'A Systematic Survey on Waterfall Vs. Agile Vs. Lean Process Paradigms', I-
Manager's Journal on Software Engineering, 9 (3), pp. 34-59.

Belaroussi, R. and Milgram, M. (2012) 'A comparative study on face detection and tracking
algorithms', Expert Systems with Applications, 39 (8), pp. 7158-7164.

Kavitha, C. R. and Thomas, S. M. (2011) 'Requirement Gathering for Small Projects using Agile
Methods', IJCA Special Issue on "Computational Science - New Dimensions & Perspectives", pp.
122-128.

Carro, R. C., Larios, J. -. A., Huerta, E. B., Caporal, R. M. and Cruz, F. R. (2015) 'Face
recognition using SURF', Lecture Notes in Computer Science (Including Subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics), 9225 pp. 316-326.

Castrillón, M., Déniz, O., Hernández, D. and Lorenzo, J. (2011) 'A comparison of face and facial
feature detectors based on the Viola–Jones general object detection framework', Machine Vision
and Applications, 22 (3), pp. 481-494.
Cheng, Y. F., Tang, H. and Chen, X. Q. (2014) 'Research and improvement on HMM-based face
recognition', Applied Mechanics and Materials, 490-491 pp. 1338-1341.

Da Costa, Daniel M. M., Peres, S. M., Lima, C. A. M. and Mustaro, P. (2015) Face recognition
using Support Vector Machine and multiscale directional image representation methods: A
comparative study. Killarney,Ireland. Neural Networks (IJCNN), 2015 International Joint
Conference on: IEEE.

Dagher, I., Hassanieh, J. and Younes, A. (2013) Face recognition using voting technique for the
Gabor and LDP features. Dallas, TX, USA. Neural Networks (IJCNN), The 2013 International
Joint Conference on: IEEE.

Dhanaseely, A. J., Himavathi, S. and Srinivasan, E. (2012) 'Performance comparison of cascade
and feed forward neural network for face recognition system', pp. 21.

Elbeheri, A. (2016) The Dynamic Systems Development Method (DSDM) - Agile Methodology.
Available at: https://www.linkedin.com/pulse/dynamic-systems-development-method-dsdm-agile-
alaa (Accessed: 28/03/2018).

Feraud, R., Bernier, O., Viallet, J. E. and Collobert, M. (2000) 'A fast and accurate face detector
for indexation of face images', Automatic Face and Gesture Recognition, 2000. Proceedings.
Fourth IEEE International Conference on, pp. 77-82.

GettyImages (2017) Lecture Hall. Available at: https://www.gettyimages.co.uk/photos/lecture-
hall?mediatype=photography&page=5&phrase=lecture%20hall&sort=mostpopular (Accessed:
25/01/2018).

Hadizadeh, H. (2015) 'Multi-resolution local Gabor wavelets binary patterns for gray-scale texture
description', Pattern Recognition Letters, 65 pp. 163-169.

Hiremath, P. S. and Hiremath, M. (2014) '3D Face Recognition based on Radon Transform, PCA,
LDA using KNN and SVM', International Journal of Image, 6 (7), pp. 36-43.

Hjelmås, E. and Low, B. K. (2001) 'Face Detection: A Survey', Computer Vision and Image
Understanding, 83 (3), pp. 236-274.
Jadhav, D. V. and Holambe, R. S. (2010) 'Rotation, illumination invariant polynomial kernel
Fisher discriminant analysis using Radon and discrete cosine transforms based features for face
recognition', Pattern Recognition Letters, 31 (9), pp. 1002-1009.

Jafri, R. and Arabnia, H. (2009) 'A Survey of Face Recognition Techniques', Journal of
Information Processing Systems, 5 (2), pp. 41-68.

Jeong, G. and Choi, S. (2013) 'Performance evaluation of face recognition using feature feedback
over a number of Fisherfaces', IEEJ Transactions on Electrical and Electronic Engineering, 8 (6),
pp. 541-545.

Kashif, M., Deserno, T. M., Haak, D. and Jonas, S. (2016) 'Feature description with SIFT, SURF,
BRIEF, BRISK, or FREAK? A general question answered for bone age assessment', Computers
in Biology and Medicine, 68 pp. 67-75.

Leigh-Pollitt, P. (2001) The Data Protection Act Explained. 3rd edn. London: The Stationery
Office.

Lemley, J., Bazrafkan, S. and Corcoran, P. (2017) 'Deep Learning for Consumer Devices and
Services: Pushing the limits for machine learning, artificial intelligence, and computer vision',
Consumer Electronics Magazine, IEEE, 6 (2), pp. 48-56. doi: 10.1109/MCE.2016.2640698.

Lenc, L. and Král, P. (2014) 'Automatic face recognition approaches', Journal of Theoretical and
Applied Information Technology, 59 (3), pp. 759-769.

Li, C., Tan, Y., Wang, D. and Ma, P. (2017) 'Research on 3D face recognition method in cloud
environment based on semi supervised clustering algorithm', Multimedia Tools and Applications,
76 (16), pp. 17055-17073.

Li, S. Z., Xiao, R., Li, Z. Y. and Zhang, H. J. (2001) 'Nonlinear mapping from multi-view
face patterns to a Gaussian distribution in a low dimensional space', Recognition, Analysis, and
Tracking of Faces and Gestures in Real-Time Systems, 2001. Proceedings. IEEE ICCV Workshop
on, pp. 47-54. doi: 10.1109/RATFG.2001.938909.
Linna, M., Kannala, J. and Rahtu, E. (2015) 'Online face recognition system based on local binary
patterns and facial landmark tracking', Lecture Notes in Computer Science (Including Subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9386 pp. 403-414.

Lowe, D. G. (1999) 'Object recognition from local scale-invariant features', Computer Vision,
1999. The Proceedings of the Seventh IEEE International Conference on, 2, pp. 1150-1157.

Marciniak, T., Chmielewska, A., Weychan, R., Parzych, M. and Dabrowski, A. (2015) 'Influence
of low resolution of images on reliability of face detection and recognition', Multimedia Tools
and Applications, 74 (12), pp. 4329-4349.
