
EYE GAZE DETECTION FOR MOUSE POINTER CONTROLLING

A project report submitted in partial fulfillment of the requirements for the award of the degree of
Bachelor of Technology
in
Computer Science & Engineering
Submitted by
K. GAYATRI DEVI (18KD1A0579)
M.VENKATA SATYA GIRISH (18KD1A0590)
M.YASWANTH (18KD1A0593)
M.SRIDEVI (18KD1A0594)
P. SUMANTH (18KD1A05B4)
Under the guidance of
Mr.P.Jagannadha Varma, M.Tech, (Ph.D)
Associate Professor
Department of CSE

Department of Computer Science & Engineering


LENDI INSTITUTE OF ENGINEERING & TECHNOLOGY
An Autonomous Institution, Accredited by NBA & NAAC with “A” Grade
Permanently Affiliated to JNTUK, Approved by A.I.C.T.E,
Jonnada, Vizianagaram Dist.535005.
2018-2022
LENDI INSTITUTE OF ENGINEERING &TECHNOLOGY
AN AUTONOMOUS INSTITUTION
Accredited by NAAC with “A” Grade, Accredited by NBA
Permanently Affiliated to JNTUK, Approved by A.I.C.T.E,
Jonnada, Vizianagaram Dist. - 535005.

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

BONAFIDE CERTIFICATE

This is to certify that the project entitled “Eye Gaze Detection for Mouse Pointer Controlling” is a
bonafide record of the work done by K. Gayatri Devi (18KD1A0579), M. Venkata Satya Girish
(18KD1A0590), M. Yaswanth (18KD1A0593), M. Sridevi (18KD1A0594), and P. Sumanth (18KD1A05B4)
under the supervision and guidance of Mr. P. JAGANNADHA VARMA, Associate Professor, in
partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer
Science and Engineering from Lendi Institute of Engineering and Technology (Affiliated to JNTUK),
Jonnada, Vizianagaram, for the year 2022.

Internal guide Head of the Department


Mr. P.Jagannadha Varma, M.Tech, (Ph.D) Dr.A.Rama Rao, M.Tech,Ph.D
Associate Professor Professor
Department of CSE Department of CSE

External Examiner
ACKNOWLEDGEMENT

With great solemnity and sincerity, we offer our profuse thanks to our management, for providing all the
resources to complete our project successfully. We express our deepest sense of gratitude and pay our
sincere thanks to our guide Mr. P.JAGANNADHA VARMA, Associate Professor, Department of C.S.E,
who evinced keen interest in our efforts and provided his valuable guidance throughout our project work.

We thank our project coordinators Mr. U. Kartheek Chandra Patnaik, Mr. S. K. Nagul, and
Mrs. R. Suchitra, who made their support available in a number of ways and helped us complete our
project work in the correct manner.

We thank Dr. A. RAMA RAO, Head of the Department of Computer Science & Engineering,
who helped us complete our project work in a truthful manner.

We express our gratitude to our Principal, Dr. V. V. RAMA REDDY, for his kind attention and valuable
guidance throughout this course in carrying out the project.

We wish to express our gratitude to our management members, who supported us by providing good lab
facilities.

We are also thankful to all the staff members of the Department of Computer Science & Engineering for
helping us complete this project work by giving valuable suggestions.

Above all, we gratefully acknowledge and express our thanks to our parents, who have been
instrumental in the success of this project.

K. GAYATRI DEVI (18KD1A0579)


M.VENKATA SATYA GIRISH (18KD1A0590)
M.YASWANTH (18KD1A0593)
M.SRIDEVI (18KD1A0594)
P. SUMANTH (18KD1A05B4)
DECLARATION

We hereby declare that the project work entitled “Eye Gaze Detection for Mouse Pointer
Controlling”, submitted to JNTU Kakinada, is a record of original work done by K. Gayatri Devi
(18KD1A0579), M. Venkata Satya Girish (18KD1A0590), M. Yaswanth (18KD1A0593), M. Sridevi
(18KD1A0594), and P. Sumanth (18KD1A05B4) under the esteemed guidance of Mr. P. JAGANNADHA
VARMA, Associate Professor, Computer Science & Engineering, Lendi Institute of Engineering &
Technology. This project work is submitted in partial fulfillment of the requirements for the award of
the degree of Bachelor of Technology in Computer Science & Engineering. This project has been carried
out to the best of our knowledge and has not been submitted to any other university for the award of a
degree or diploma.

K. GAYATRI DEVI (18KD1A0579)


M.VENKATA SATYA GIRISH (18KD1A0590)
M.YASWANTH (18KD1A0593)
M.SRIDEVI (18KD1A0594)
P. SUMANTH (18KD1A05B4)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


VISION
To be a frontier in computing technologies to produce globally competent computer science engineering
graduates with moral values to build a vibrant society and nation.

MISSION
➢ Providing a strong theoretical and practical background in computer science engineering with an
emphasis on software development
➢ Inculcating professional behavior, strong ethical values, innovative research capabilities, and leadership
abilities.
➢ Imparting the technical skills necessary for continued learning towards their professional growth and
contribution to society and rural communities.

PROGRAM EDUCATIONAL OBJECTIVES (PEOs)


PEO-1: Graduates will have strong knowledge and skills to comprehend latest tools and techniques of
Computer Engineering so that they can analyze, design and create computing products and solutions for real
life problems.

PEO-2: Graduates shall have multidisciplinary approach, professional attitude and ethics, communication
and teamwork skills, and an ability to relate and solve social issues through computer engineering.

PEO-3: Graduates will engage in life-long learning and professional development to adapt to rapidly
changing technology.
PROGRAM SPECIFIC OUTCOMES (PSOs)
PSO-1: Ability to grasp advanced programming techniques to solve contemporary issues.

PSO-2: Have knowledge and expertise to analyze data and networks using latest tools and technologies.

PSO-3: Qualify in national and international competitive examinations for successful higher studies and
employment.

PROGRAM OUTCOMES (POs)


PO-1: Engineering Knowledge: Apply the knowledge of mathematics, science, engineering fundamentals,
and an engineering specialization to the solution of complex engineering problems.

PO-2: Problem Analysis: Identify, formulate, review research literature, and analyze complex engineering
problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.

PO-3: Design/Development of Solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for public
health and safety, and cultural, societal, and environmental considerations.

PO-4: Conduct Investigations of Complex Problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.

PO-5: Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools, including prediction and modeling, to complex engineering activities with an
understanding of the limitations.

PO-6: The Engineer and Society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.

PO-7: Environment and Sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable
development.

PO-8: Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of
the engineering practice.

PO-9: Individual and Team Work: Function effectively as an individual, and as a member or leader in
diverse teams, and in multidisciplinary settings.

PO-10: Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.

PO-11: Project Management and Finance: Demonstrate knowledge and understanding of the engineering
and management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.

PO-12: Life-Long Learning: Recognize the need for, and have the preparation and ability to engage in,
independent and life-long learning in the broadest context of technological change.
ABSTRACT

This project describes a real-time, contact-free eye-gaze tracking system whose accuracy rests on a very
precise estimation of the pupil center. The eye camera follows the head movements, keeping the pupil
centered in the image. When a tracking error occurs, the image from a camera with a wider field of
view is used to locate the eye and quickly recover the tracking process. The pupil's characteristic shape
has been exploited to optimize the image processing algorithms developed for this system. After a
calibration procedure, the line of gaze is determined using the pupil-glint vector. The glints are validated
against the iris outline in order to avoid the glint distortion caused by changes in the curvature of the
ocular globe. The proposed algorithms determine the pupil center with sub-pixel resolution, minimizing
the measurement error in the pupil-glint vector.

Keywords:
Pattern recognition, image processing, eye tracking, gaze tracking

Outcomes:
Our project titled “EYE GAZE DETECTION FOR MOUSE POINTER CONTROLLING” is mapped
with the following outcomes.

Program Outcomes : PO1, PO2, PO3, PO4, PO7, PO8, PO9, PO10, PO11, PO12.

Program Specific Outcomes : PSO1, PSO2, PSO3


LIST OF CONTENTS

S No   Title                                        Page No

1      INTRODUCTION                                 1
       1.1 Project Overview                         2
       1.2 Project Objective                        2
       1.3 Project Scope                            2

2      LITERATURE REVIEW                            3

3      PROBLEM ANALYSIS                             5
       3.1 Existing System                          5
           3.1.1 Limitations                        5
       3.2 Proposed System                          5
           3.2.1 Advantages                         6

4      SYSTEM ANALYSIS                              7
       4.1 Software Requirement Specification       7
           4.1.1 Functional Requirements            8
           4.1.2 Non-Functional Requirements        8
       4.2 Feasibility Study                        9
           4.2.1 Technical Feasibility              9
           4.2.2 Economic Feasibility               10
           4.2.3 Operational Feasibility            11
       4.3 Use Case Scenario                        12
       4.4 System Requirements                      16
           4.4.1 Hardware Requirements              16
           4.4.2 Software Requirements              16

5      SYSTEM DESIGN                                17
       5.1 System Architecture                      17
       5.2 Introduction                             18
           5.2.1 Class Diagram                      26
           5.2.2 Sequence Diagram                   28
           5.2.3 Activity Diagram                   30
           5.2.4 Component Diagram                  33
           5.2.5 Deployment Diagram                 33

6      IMPLEMENTATION                               35
       6.1 Technology Description                   35
       6.2 Python                                   35
       6.3 OpenCV                                   46
       6.4 Dlib                                     48
       6.5 Source Code                              49

7      TESTING                                      54
       7.1 Introduction                             54
       7.2 Test Cases                               57

8      RESULTS                                      59

9      CONCLUSION                                   63

10     BIBLIOGRAPHY                                 64

11     REFERENCES                                   65
LIST OF FIGURES

Fig. No   Figure Caption                    Page No

4.3.1     Actor Diagram for User            15
4.3.2     Actor Diagram for System          15
5.1.1     System Architecture               18
5.2       Software Development Life Cycle   19
          Spiral Model for SDLC             22
5.2.1.1   Class Diagram                     28
5.2.2.1   Sequence Diagram                  29
5.2.3.1   Activity Diagram                  32
5.2.4.1   Component Diagram                 33
5.2.5.1   Deployment Diagram                34
8.1       Execution in command prompt       59
8.2       Interface                         59
8.3       In Reading mode                   60
8.4       Moving cursor to the right        60
8.5       Moving cursor to the left         61
8.6       Moving cursor up                  61
8.7       Moving cursor down                62
8.8       Termination                       62
LIST OF TABLES

Table No   Table Caption                                   Page No

4.2.2      Determining Economic Feasibility                11
4.2.3      Operations Issues                               11
4.3        Graphical Representation of Use Case Diagram    13
5.2.2      Graphical Representation of Sequence Diagram    28
5.2.3      Graphical Representation of Activity Diagram    30
7.2        Representing Test Cases and Status              58
CHAPTER-1
INTRODUCTION
1. INTRODUCTION

ADVANCES in eye gaze tracking technology over the past few decades have led to
the development of promising gaze estimation techniques and applications for human-computer interaction.
Historically, research on gaze tracking dates back to the early 1900s, starting with invasive eye tracking
techniques. These included electro-oculography, using pairs of electrodes placed around the eyes, and
scleral search methods, which use coils embedded in a contact lens adhering to the eye. The first video-
based eye tracking study was made on pilots operating airplane controls in the 1940s. Research on head-
mounted eye trackers advanced in the 1960s, and gaze tracking developed further in the 1970s with a focus
on improving accuracy and reducing the constraints on users. With increasing computing power in devices,
real-time operation of eye trackers became possible during the 1980s. However, until this time, owing to
the limited availability of computers, eye tracking was mainly limited to psychological and cognitive studies
and medical research; the application focus towards general-purpose human-computer interaction was sparse.
This changed in the 1990s as eye gaze found applications in computer input and control. Post 2000, rapid
advancements in computing speed, digital video processing and low-cost hardware brought gaze tracking
equipment closer to users, with applications in gaming, virtual reality and web advertisements.

Eye gaze information is used in a variety of user platforms, and the main use cases may be
broadly classified. Applications based on desktop platforms involve using eye gaze for computer
communication and text entry, computer control and entering gaze-based passwords. Remote eye tracking
has recently been used on TV panels to achieve gaze-controlled functions, for example selecting and
navigating menus and switching channels. Head-mounted gaze tracking setups usually comprise two or
more cameras mounted on a support framework worn by the user. Such systems have been extensively
employed in user attention and cognitive studies, psychoanalysis, oculo-motor measurements, and virtual
and augmented reality applications. Real-time gaze tracking is used in driver support systems to evaluate
driver vigilance and drowsiness levels; these use eye tracking setups mounted on a car’s dashboard along
with computing hardware running machine vision algorithms. In handheld devices such as smartphones or
tablets, the front camera is used to track user gaze to activate functions such as locking/unlocking phones,
interactive displays, dimming backlights or suspending sensors. Within each of these use cases there exists
a wide range of system configurations, operating conditions and varying quality of imaging and optical
components. Furthermore, the variations in eye movement and biological aspects of individuals lead to
challenges in achieving consistent and repeatable performance from gaze tracking methods. Thus, despite
several decades of development in eye gaze research, performance evaluation and comparison of different
gaze estimation techniques across different platforms is still a difficult task.
In order to provide insight into the current status of eye gaze research and outcomes, this
project presents a detailed literature review and analysis that considers algorithms, system configuration,
user conditions and performance issues for existing gaze tracking systems. Specifically, use-cases based on
five different eye gaze platforms are considered.

1.1 Project Overview


Systems based on different technologies have been implemented, in many cases with some devices mounted
on the person head or using pupil. Also, non-intrusive systems have been developed and are being used in
many applications. We focused on improving accuracy and reducing the constraints on users. The main
objective of this work has been building a non-intrusive eye-gaze tracking system based on the images taken
by conventional video cameras and performing real-time tracking and analysis. The system analyses an
image of the eye and determines the gaze direction by computing the vector defined by the pupil Centre,
nose and a set of glints generated in the eye. Using movement of eyes, we can control movement of cursor
and perform right and left clicks along with mouse scrolling. So, this will even to build computer interfaces
for handicapped people.
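The pupil-center estimation at the heart of this pipeline can be sketched in Python (the language used in this project, per Chapter 6). This is an illustrative sketch rather than the report's source code: the darkness threshold and the intensity-weighted centroid are assumed implementation details.

```python
import numpy as np

def subpixel_pupil_center(eye_gray, threshold=50):
    """Estimate the pupil center of a grayscale eye image with
    sub-pixel resolution.

    Assumption: the pupil is the darkest region of the image, so an
    intensity-weighted centroid over pixels darker than `threshold`
    yields a center estimate finer than one pixel.
    """
    mask = eye_gray < threshold
    if not mask.any():
        return None  # no pupil-like dark region found
    # Weight each dark pixel by how far below the threshold it is.
    weights = threshold - eye_gray[mask].astype(float)
    ys, xs = np.nonzero(mask)
    cx = float(np.sum(xs * weights) / np.sum(weights))
    cy = float(np.sum(ys * weights) / np.sum(weights))
    return cx, cy
```

In the full system, this estimate would feed the pupil-glint vector computation once the eye region has been cropped from the camera frame (for example, via dlib facial landmarks).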

1.2 Project Objective


The main objective of this work is to study the movements of a participant’s eyes during a range of activities.
This gives insight into the cognitive processes underlying a wide variety of human behavior and can reveal
things such as learning patterns and social interaction methods, which helps in interacting with the system
without any hand involvement.

1.3 Project scope


Eye tracking is, simply, the observation and recording of eye behaviour such as pupil dilation and
movement. It has applications in many areas, including psychological research and packaging design, but
with regard to screen-based media it is primarily used by researchers to identify where users are looking.
CHAPTER-2
LITERATURE SURVEY
2. LITERATURE SURVEY

Eye gaze tracking algorithms comprise corneal reflection-based methods, which use NIR illumination to
estimate the gaze direction or the point of gaze using polynomial functions or a geometrical model of the
human eye; 2D regression, 3D model, and cross-ratio-based methods fall into this category. Another class
of methods uses visible light and content information (e.g., local features, shape, texture of eye regions) to
estimate gaze direction, e.g., appearance- and shape-based methods. The five different gaze tracking methods
have their own advantages and disadvantages, which are briefly discussed at the end of this section, and a
summary of some key works on the development of these algorithms is presented.

2D regression-based methods:
In regression-based methods, the vector between the pupil center and the corneal glint is
mapped to corresponding gaze coordinates on the frontal screen using a polynomial transformation function.
This mapping function can be stated as f: (Xe, Ye) → (Xs, Ys), where (Xe, Ye) and (Xs, Ys) are the
equipment and screen coordinates respectively; with polynomial terms t_i(Xe, Ye),

    Xs = a0 + a1·t1(Xe, Ye) + ... + an·tn(Xe, Ye)
    Ys = b0 + b1·t1(Xe, Ye) + ... + bn·tn(Xe, Ye)

where n represents the polynomial order and the a_i and b_i are the coefficients. The polynomial is
optimized through calibration, in which a user is asked to gaze at certain fixed points on the frontal
screen. The order and coefficients are then chosen to minimize the mean squared difference (ε) between
the estimated and actual screen coordinates.

Cherif et al. used a 5x5-point calibration routine and two higher-order polynomial transformations, while
Villanueva studied the effects of head movement on system accuracy with 4x4 and 8x8 grids. Blignaut also
compared several mapping functions and calibration configurations with a 15x9-point grid and determined
that the number and arrangement of calibration targets and the components of the mapping function play
very important roles in determining the overall accuracy of the tracker. Robust and accurate gaze estimation
under head movement was also achieved using neural networks by Ji and Chuang.
Some other key works in this class of gaze estimation methods include Ma et al., which introduces a 2D
mapping algorithm that can handle unconstrained head movements and distorted corneal reflections due to
various noise effects. It uses multiple geometrical-transformation-based mapping of CRs and demonstrates
high reliability for different user distances and for loss of CRs due to head/eye motion. A calibration-free
algorithm is detailed by Bennett using SVMs, which is robust to natural head movement and achieves
high accuracy (1.5 degrees) in the presence of head movement. Villanueva provided an exhaustive and
detailed review of mapping equations and their impact on gaze tracking system response; the paper reports
400,000 calibration functions, mapping orders and features to compare their impact on accuracy measures.

Another important work is by Zhu et al., which presents a very high-resolution gaze estimation method
resistant to head pose without requiring geometrical models, in which an improved three-layer artificial
neural network is used to estimate the mapping function between gaze coordinates and the pupil-glint
vector. This method is shown to achieve better accuracy than simple regression-based methods. A new
approach involving only two light sources instead of four has also been implemented, where two IR-LED-
induced glints are real and the other two are virtual ones computed mathematically; this is done to make
the algorithm more suitable for consumer applications and to simplify hardware. The POG is then estimated
using a mapping function relating the four glint locations to the screen through a calibration process. A
modified PCCR method has been implemented to improve tracking accuracy and make it suitable for indoor
and outdoor use. In this, adaptive exposure control is proposed, since IR LED brightness variations within
a PCCR-based setup affect pupil detection and gaze tracking to a large extent.
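The adaptive exposure control mentioned above can be illustrated with a minimal proportional controller. The target brightness, gain and exposure bounds below are assumed, tunable values for the sake of the sketch, not figures from the cited work.

```python
def adapt_exposure(exposure, mean_brightness, target=120.0,
                   gain=0.01, lo=0.05, hi=1.0):
    """Nudge the camera exposure so the mean image brightness
    (8-bit, 0-255 scale) approaches `target`, clamped to [lo, hi].

    A frame darker than the target raises the exposure and a brighter
    frame lowers it, compensating for IR LED brightness variation.
    """
    new_exposure = exposure + gain * (target - mean_brightness) * exposure
    return max(lo, min(hi, new_exposure))
```

In a live PCCR loop this would run once per frame, with `mean_brightness` computed over the detected eye region before pupil thresholding.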

CHAPTER-3
PROBLEM ANALYSIS
3. PROBLEM ANALYSIS

3.1 EXISTING SYSTEM:


ADVANCES in eye gaze tracking technology over the past few decades have led to the development of
promising gaze estimation techniques and applications for human-computer interaction. Historically,
research on gaze tracking dates back to the early 1900s, starting with invasive eye tracking techniques.
These included electro-oculography, using pairs of electrodes placed around the eyes, and scleral search
methods, which use coils embedded in a contact lens adhering to the eye. The first video-based eye
tracking study was made on pilots operating airplane controls in the 1940s.
Research on head-mounted eye trackers advanced in the 1960s, and gaze tracking developed further in the
1970s with a focus on improving accuracy and reducing the constraints on users. With increasing computing
power in devices, real-time operation of eye trackers became possible during the 1980s. However, until this
time, owing to the limited availability of computers, eye tracking was mainly limited to psychological and
cognitive studies and medical research.

3.1.1 Limitations
➢ The variations in eye-movement and biological aspects of individuals lead to challenges in achieving
consistent and repeatable performance from gaze tracking methods.
➢ Thus, despite several decades of development in eye gaze research, performance evaluation and
comparison of different gaze estimation techniques across different platforms is still a difficult task.
➢ Systems based on different technologies have been implemented, in many cases with some devices
mounted on the person's head.
➢ Also, non-intrusive systems have been developed and are being used in many applications.

3.2 PROPOSED SYSTEM:


The main objective of this work has been to build a non-intrusive eye-gaze tracking system based on
images taken by conventional video cameras, performing real-time tracking and analysis.
The system analyses an image of the eye and determines the gaze direction by computing the vector defined
by the pupil center and a set of glints generated in the eye. Using the movement of the eyes, we can control
the movement of the cursor and perform right and left clicks along with mouse scrolling.
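As a sketch of how the proposed system could turn eye movement into cursor movement, the fragment below classifies a coarse horizontal gaze direction from the pupil's normalized position inside the eye region and maps it to a relative cursor step. The margin and step size are illustrative assumptions, and the pyautogui calls in the closing comment are one common way to synthesize the actual mouse events.

```python
STEP = 20  # assumed cursor travel, in pixels, per detected gaze event

def gaze_direction(pupil_x, eye_left_x, eye_right_x, margin=0.15):
    """Classify 'left' / 'right' / 'center' from the pupil's
    normalized horizontal position inside the eye region."""
    ratio = (pupil_x - eye_left_x) / float(eye_right_x - eye_left_x)
    if ratio < 0.5 - margin:
        return "left"
    if ratio > 0.5 + margin:
        return "right"
    return "center"

def cursor_delta(direction):
    """Map a gaze direction to a relative (dx, dy) cursor move; the
    same idea extends to 'up'/'down' using a vertical ratio."""
    return {"left": (-STEP, 0), "right": (STEP, 0)}.get(direction, (0, 0))

# In the live loop these would drive the pointer, e.g. with pyautogui:
#   pyautogui.moveRel(*cursor_delta(direction))   # move the cursor
#   pyautogui.click(button="left")                # on a deliberate blink
```

Keeping the classification separate from the event synthesis makes the direction logic testable without a display or a physical mouse.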

3.2.1 Advantages:
The determination of the line of gaze has been the subject of a great number of studies, with applications
in very different fields, including medical research (oculography), characterization of car drivers’ behavior,
and building computer interfaces for handicapped people.

CHAPTER-4
SYSTEM ANALYSIS
4. SYSTEM ANALYSIS

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems
engineering, information systems and software engineering, is the process of creating or altering systems,
and the models and methodologies that people use to develop these systems.
In software engineering, the SDLC concept underpins many kinds of software development
methodologies. These methodologies form the framework for planning and controlling the creation of an
information system: the software development process.

SOFTWARE MODEL OR ARCHITECTURE ANALYSIS:


Structured project management techniques (such as an SDLC) enhance management’s control over projects
by dividing complex tasks into manageable sections. A software life cycle model is either a descriptive or
prescriptive characterization of how software is or should be developed. However, none of the SDLC models
discuss key issues such as the change management, incident management and release management processes
within the SDLC process; these are addressed in the overall project management. In the proposed
hypothetical model, the concept of user-developer interaction in the conventional SDLC model has been
converted into a three-dimensional model which comprises the user, the owner and the developer. The
“one size fits all” approach to applying SDLC methodologies is no longer appropriate. We have made an
attempt to address the above-mentioned defects by using a new hypothetical model for SDLC described
elsewhere. The drawback of addressing these management processes under the overall project management
is the omission of key technical issues pertaining to the software development process; that is, these issues
are addressed in project management at the surface level but not at the ground level.

4.1 Software Requirement Specification


The Software Requirement Specification deals with the software and hardware resources that need to be
installed on a server to provide optimal functioning for the application. These software and hardware
requirements need to be in place before the packages are installed. They are the most common set of
requirements defined by any operating system, and they provide compatible support to the operating
system in developing an application.
4.1.1 Functional requirements
Outputs from computer systems are required primarily to communicate the results of processing to users.
They are also used to provide a permanent copy of the results for later consultation. The various types of
outputs in general are:
➢ External outputs, whose destination is outside the organization.
➢ Internal outputs, whose destination is within the organization; they are the user's main interface with
the computer.
➢ Operational outputs, whose use is purely within the computer department.
➢ Interface outputs, which involve the user in communicating directly.
➢ Understanding the user's preferences, expertise level and business requirements through a friendly
questionnaire.
➢ Input data can be in four different forms: relational DB, text files, .xls and .xml files. For testing and
demo, data from any domain can be chosen. User-B can provide business data as input.

4.1.2 Non-Functional Requirements:


➢ Secure access of confidential data (user’s details). SSL can be used.
➢ 24 X 7 availability.
➢ Better component design to get better performance at peak time
➢ Flexible service-based architecture will be highly desirable for future extension.
Non-functional requirements define how a system is supposed to be. Non-functional requirement is
a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific
behaviors. Non-functional requirements are in the form of an overall property of the system as a whole or
of a particular aspect and not a specific function.
Non-functional requirements focus on the quality and exposure of the project. Our project has the following
features that meet the needs of software services in a distributed data environment:
a) Reliability
Reliability is the ability of a system to perform its required functions under stated conditions for a
specific period of time, constraints on the run-time behavior of the system. This proposed system yields
accurate results.

b) Performance
The cost of the learning process is zero. If the variables are huge, a regression-based approach is
most of the time computationally faster than other mining techniques. The model can be modified with
new training data without having to rebuild the model.
c) Scalability and Availability
Availability is the ability of the model to perform under any specific condition based on the
requirements. This approach can be used for prediction based on the available data. The model can be
applied to any type of dataset, which represents its scalability.

d) Accuracy
The main purpose of using Multi Linear Regression model is to attain accurate result. This technique
is highly used and performed for its efficient performance and accurate result.

4.2 Feasibility Study


Preliminary investigation examines project feasibility: the likelihood that the system will be useful to the
organization. The main objective of the feasibility study is to test the technical, operational and economic
feasibility of adding new modules and debugging the old running system. Any system is feasible if there
are unlimited resources and infinite time. There are three aspects in the feasibility study portion of the
preliminary investigation:
• Technical Feasibility
• Economical Feasibility
• Operation Feasibility
4.2.1. Technical Feasibility:
In the feasibility study, the first step is that the organization or company has to decide what technologies
are suitable for development, considering the existing system.
The technical issue usually raised during the feasibility stage of the investigation includes the following:
• Does the necessary technology exist to do what is suggested?
• Do the proposed equipment have the technical capacity to hold the data required to use the new system?
• Will the proposed system provide adequate response to inquiries, regardless of the number or location
of users?
• Can the system be upgraded if developed?
• Are there technical guarantees of accuracy, reliability, ease of access and data security?
Earlier, no system existed to cater to the needs of a ‘Secure Infrastructure Implementation System’.
The current system developed is technically feasible. It is a web-based user interface for audit workflow at
NIC-CSD, and thus provides easy access to the users. The database’s purpose is to create, establish and
maintain a workflow among various entities in order to facilitate all concerned users in their various
capacities or roles. Permission to the users would be granted based on the roles specified. Therefore, it
provides the technical guarantee of accuracy, reliability and security. The software and hardware
requirements for the development of this project are few, and are already available in-house at NIC or are
available free as open source. The work for the project is done with the current equipment and existing
software technology. The necessary bandwidth exists for providing fast feedback to the users irrespective
of the number of users using the system. This application uses technologies such as Visual Studio 2012
and SQL Server 2014, which are free software that can be downloaded from the web.

4.2.2. Economical Feasibility


A system that can be developed technically, and that will be used if installed, must still be a good
investment for the organization. In the economic feasibility study, the development cost of creating the
system is evaluated against the ultimate benefit derived from the new system. Financial benefits must
equal or exceed the costs.
The system is economically feasible. It does not require any additional hardware or software. Since
the interface for this system is developed using the existing resources and technologies available at NIC,
there is nominal expenditure, and economic feasibility is certain.

Quantitative
  Potential costs: hardware/software upgrades; fully-burdened cost of labor (salary + benefits); support
  costs for the application; expected operational costs; training costs for users to learn the application;
  training costs to train developers in new/updated technologies.
  Potential benefits: reduced operating costs; reduced personnel costs from a reduction in staff; increased
  revenue from additional sales of your organization’s products/services.

Qualitative
  Potential costs: increased employee dissatisfaction from fear of change.
  Potential benefits: improved decisions as the result of access to accurate and timely information; raising
  of existing, or introduction of a new, barrier to entry within your industry to keep competition out of
  your market; positive public perception that your organization is an innovator.

Table 4.2.2 Determining Economic Feasibility
The table includes both qualitative factors (costs or benefits that are subjective in nature) and quantitative
factors (costs or benefits for which monetary values can easily be identified). Both kinds of factors need to
be taken into account when performing a cost/benefit analysis.

4.2.3. Operational Feasibility


Proposed projects are beneficial only if they can be turned into information systems that meet the
organization’s operating requirements. Operational feasibility aspects of the project are to be taken as an
important part of the project implementation. Some of the important issues raised to test the operational
feasibility of a project include the following: -
• Is there sufficient support for the management from the users?
• Will the system be used and work properly if it is being developed and implemented?
• Will there be any resistance from the user that will undermine the possible application benefits?
This system is targeted to be in accordance with the above-mentioned issues. Beforehand, the management
issues and user requirements have been taken into consideration. So there is no question of resistance from
the users that can undermine the possible application benefits.
The well-planned design would ensure the optimal utilization of the computer resources and would help in
the improvement of performance status.

Operations Issues:
• What tools are needed to support operations?
• What skills will operators need to be trained in?
• What processes need to be created and/or updated?
• What documentation does operations need?

Support Issues:
• What documentation will users be given?
• What training will users be given?
• How will change requests be managed?

Table 4.2.3 Operations and Support Issues


Operations Issues
Very often you will need to improve the existing operations, maintenance, and support infrastructure
to support the operation of the new application that you intend to develop. To determine what the impact
will be, you will need to understand both the current operations and support infrastructure of your
organization and the operations and support characteristics of your new application. To operate this
application end to end, the user does not need any technical knowledge of the development technology
(ASP.NET/C#.NET); the application provides a rich user interface through which the user can perform
operations in a flexible manner.
User-friendly
Customer will use the forms for their various transactions i.e., for adding new routes, viewing the routes
details. Also, the Customer wants the reports to view the various transactions based on the constraints. These
forms and reports are generated as user-friendly to the Client.
Reliability
The package will pick up current transactions online. Regarding the old transactions, the user will enter
them into the system.
Security
The web server and database server should be protected from hacking, viruses, etc.
Portability
The application will be developed using standard open-source software (except Oracle) such as Java, the
Tomcat web server, and the Internet Explorer browser. This software will work on both Windows and Linux
operating systems, so portability problems will not arise.
Availability
This software will always be available.
Maintainability
The system uses a 2-tier architecture. The 1st tier is the GUI, which is the front-end, and the 2nd
tier is the database, which uses SQLite as the back-end.
The front-end can be run on different systems (clients). The database runs on the server. Users
access these forms using their user-ids and passwords.

4.3 Use Case Scenario:


A use case diagram at its simplest is a representation of a user's interaction with the system and
depicting the specifications of a use case. A use case diagram can portray the different types of users of a
system and the various ways that they interact with the system. This type of diagram is typically used in
conjunction with the textual use case and will often be accompanied by other types of diagrams as well.
UML includes a set of graphic notation techniques to create visual models of object-oriented software-intensive
systems. UML is used to specify, visualize, modify, construct and document the artifacts of an
object-oriented software-intensive system under development. An important part of the Unified Modeling Language
(UML) is its facilities for drawing use case diagrams. Use cases are used during the analysis phase of a project to
identify and partition system functionality. They separate the system into actors and use cases. Actors represent roles
that can be played by users of the system. Those users can be humans, other computers, pieces of hardware, or even other
software systems. Use cases describe the behavior of the system.

Actor: An Actor, as mentioned, is a user of the system and is depicted using a stick figure. The role of the
user is written beneath the icon. Actors are not limited to humans; if a system communicates with another
application and expects input or delivers output, then that application can also be considered an actor.

Use Case: A Use Case is the functionality provided by the system, typically described as verb + object
(e.g., Register Car, Delete User). Use Cases are depicted with an ellipse. The name of the Use Case is
written within the ellipse.

Directed Association: Associations are used to link Actors with Use Cases and indicate that an actor
participates in the Use Case in some form. A directed association is the same as an association, except that
it is represented by a line having an arrowhead.

System boundary boxes: You can draw a rectangle around the use cases, called the system boundary box, to
indicate the scope of your system. Anything within the box represents functionality that is in scope, and
anything outside the box is not.

4.3 Graphical Representation of Use Case Diagram

User (Actor):

Fig.4.3.1 Actor Diagram for user


System (Actor):

Fig.4.3.2 Actor Diagram for System

4.4 System Requirements
4.4.1 HARDWARE REQUIREMENTS:
The hardware requirement specifies each interface of the software elements and the hardware elements of
the system.
These hardware requirements include configuration characteristics.
➢ System : Pentium IV 2.4 GHz.
➢ Hard Disk : 100 GB.
➢ Monitor : 15 VGA Color.
➢ Mouse : Logitech.
➢ RAM : 1 GB.

4.4.2 SOFTWARE REQUIREMENTS:


The software requirements specify the use of all required software products like data management system.
The required software product specifies the numbers and version. Each interface specifies the purpose of
the interfacing software as related to this software product.
➢ Operating system : Windows XP/7/10
➢ Coding Language : Html, JavaScript,
➢ IDE : Anaconda prompt
➢ Environment : Anaconda
➢ Library : OpenCV, dlib, NumPy, spacy, imutils
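Before running the project, the listed libraries can be checked from the Anaconda prompt. The sketch below is only an illustrative availability check (the module names come from the list above; `cv2` is the import name for OpenCV):

```python
from importlib.util import find_spec

def check_requirements(names=("cv2", "dlib", "numpy", "spacy", "imutils")):
    """Return {library: True/False} depending on whether each
    module can be located in the current environment."""
    return {name: find_spec(name) is not None for name in names}

# prints which of the required libraries are installed
print(check_requirements())
```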

CHAPTER-5
SYSTEM DESIGN
5. SYSTEM DESIGN

5.1 System Architecture
The purpose of the design phase is to plan a solution to the problem specified by the requirements document.
This phase is the first step in moving from the problem domain to the solution domain. The design phase
satisfies the requirements of the system. The design of a system is perhaps the most critical factor affecting
the quality of the software; it has a major impact on the later phases, particularly testing and maintenance.
The output of this phase is the design document. This document is analogous to a blueprint of the solution
and is used later during implementation, testing and maintenance. The design activity is commonly divided
into two separate phases: System Design and Detailed Design.
System Design, also referred to as top-level design, aims to identify the modules that should be in the
system, the specifications of those modules, and the way they interact with one another to produce the
desired results.
At the end of system design all the major data structures, file formats, output formats, and the major
modules in the system and their specifications are decided. System design is the process or art of defining
the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements.
It can be viewed as the application of systems theory to product development.
In Detailed Design, the internal logic of each of the modules specified in system design is decided. During
this phase, the details of the data of a module are usually specified in a high-level design description
language that is independent of the target language in which the software will eventually be implemented.
In system design the focus is on identifying the modules, whereas during detailed design the focus is on
designing the logic for each of the modules.

17
Fig 5.1.1 System Architecture

5.2 Introduction
Software Development Life Cycle:
There are various software development approaches defined and designed to be employed during the
development process of software; these approaches are also referred to as "Software Development Process
Models". Each process model follows a particular life cycle in order to ensure success in the process of
software development.

Fig 5.2 Software Development Life Cycle
Requirements:
Business requirements are gathered in this phase. This phase is the main focus of the project
managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine
the requirements. Who is going to use the system? How will they use the system? What data should be
input into the system? What data should be output by the system? These are the general questions that get
answered during the requirements gathering phase. This produces a large list of functionalities that the
system should provide, which describes the functions the system should perform, the business logic that
processes data, what data is stored and used by the system, and how the user interface should work. The
overall result describes the system as a whole and how it performs, not how it is actually going to do it.
Design:
The software system design is produced from the results of the requirements phase. Architects have
the ball in their court during this phase, as this is the phase on which their focus lies. This is where the
details of how the system will work are produced. Architecture, including hardware and software,
communication, and software design (UML is produced here) are all part of the deliverables of the design phase.
Implementation:
Code is produced from the deliverables of the design phase during implementation, and this is the
longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle
because this is where the code is produced. Implementation may overlap with both the design and testing
phases. Many tools exist (CASE tools) to actually automate the production of code using information
gathered and produced during the design phase.
Testing:
During testing, the implementation is tested against the requirements to make sure that the product
is actually solving the needs addressed and gathered during the requirements phase. Unit tests and
system/acceptance tests are done during this phase. Unit tests act on a specific component of the system,
while system tests act on the system as a whole.
So, in a nutshell, that is a very basic overview of the general software development life cycle model. Now
let’s delve into some of the traditional and widely used variations.
SDLC METHODOLOGIES:
This document plays a vital role in the software development life cycle (SDLC) as it describes the complete
requirements of the system. It is meant for use by the developers and will be the basis during the testing
phase. Any changes made to the requirements in the future will have to go through a formal change approval
process.
The SPIRAL MODEL was defined by Barry Boehm in his 1988 article, “A Spiral Model of Software
Development and Enhancement”. This model was not the first to discuss iterative development, but
it was the first to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years long. Each phase starts
with a design goal and ends with a client reviewing the progress thus far. Analysis and engineering efforts
are applied at each phase of the project, with an eye toward the end goal of the project.

The following diagram shows how a spiral model acts:

5.2 Spiral model of SDLC

The steps for Spiral Model can be generalized as follows:


• The new system requirements are defined in as much detail as possible. This usually involves
interviewing a number of users representing all the external or internal users and other aspects of the
existing system.
• A preliminary design is created for the new system.
• A first prototype of the new system is constructed from the preliminary design. This is usually a
scaled-down system, and represents an approximation of the characteristics of the final product.
• A second prototype is evolved by a fourfold procedure:
1. Evaluating the first prototype in terms of its strengths, weaknesses, and risks.
2. Defining the requirements of the second prototype.
3. Planning and designing the second prototype.
4. Constructing and testing the second prototype.
• At the customer option, the entire project can be aborted if the risk is deemed too great. Risk factors
might involve development cost overruns, operating-cost miscalculation, or any other factor that could,
in the customer’s judgment, result in a less-than-satisfactory final product.
• The existing prototype is evaluated in the same manner as was the previous prototype, and if necessary,
another prototype is developed from it according to the fourfold procedure outlined above.
• The preceding steps are iterated until the customer is satisfied that the refined prototype represents the
final product desired.
• The final system is constructed, based on the refined prototype.
• The final system is thoroughly evaluated and tested. Routine maintenance is carried out on a continuing
basis to prevent large-scale failures and to minimize downtime.
STUDY OF THE SYSTEM
For flexibility of use, the interface has been developed with a graphics concept in mind, accessed
through a browser interface. The GUIs at the top level have been categorized as follows:
1. Administrative User Interface Design
2. The Operational and Generic User Interface Design
The administrative user interface concentrates on the consistent information that is practically part
of the organizational activities and which needs proper authentication for data collection. The interface
helps the administration with all the transactional states like data insertion, data deletion, and data updating,
along with executive data search capabilities.
The operational and generic user interface helps the users of the system in transactions through
the existing data and required services. The operational user interface also helps the ordinary users in
managing their own information in a customized manner as per the assisted flexibilities.


INPUT AND OUTPUT:


INPUT DESIGN
Input design is a part of overall system design. The main objective during the input design is as given
below:
• To produce a cost-effective method of input.
• To achieve the highest possible level of accuracy.
• To ensure that the input is acceptable and understood by the user.
INPUT STAGES:
The main input stages can be listed as below:
• Data recording
• Data transcription
• Data conversion
• Data verification
• Data control
• Data transmission
• Data validation

• Data correction
INPUT TYPES:
It is necessary to determine the various types of inputs. Inputs can be categorized as follows:
• External inputs, which are prime inputs for the system.
• Internal inputs, which are user communications with the system.
• Operational, which are the computer department’s communications to the system.
• Interactive, which are inputs entered during a dialogue.
INPUT MEDIA:
At this stage a choice has to be made about the input media. To decide about the input media,
consideration has to be given to:
• Type of input
• Flexibility of format
• Speed
• Accuracy
• Verification methods
• Rejection rates
• Ease of correction
• Storage and handling requirements
• Security
• Easy to use
• Portability
Keeping in view the above description of the input types and input media, it can be said that most of
the inputs are of the internal and interactive form. As input data is to be directly keyed in by the user,
the keyboard can be considered to be the most suitable input device.
OUTPUT DESIGN
Outputs from computer systems are required primarily to communicate the results of processing to users.
They are also used to provide a permanent copy of the results for later consultation. The various types of
outputs in general are:
• External Outputs, whose destination is outside the organization.
• Internal Outputs, whose destination is within the organization and which are the user’s main interface
with the computer.
• Operational outputs whose use is purely within the computer department.
• Interface outputs, which involve the user in communicating directly.

OUTPUT DEFINITION:
The outputs should be defined in terms of the following points:
▪ Type of the output
▪ Content of the output
▪ Format of the output
▪ Location of the output
▪ Frequency of the output
▪ Volume of the output
▪ Sequence of the output
It is not always desirable to print or display data as it is held on a computer. It should be decided
which form of the output is the most suitable.
Fundamental Concepts on (Domain)
Several types of eye movements are studied in eye gaze research and applications to collect information
about user intent, cognitive processes, behavior and attention analysis. These are broadly classified as
follows:
1. Fixations:
These are phases when the eyes are stationary between movements and visual input occurs. Fixation
related measurement variables include total fixation duration, mean fixation duration, fixation spatial
density, number of areas fixated, fixation sequences and fixation rate.
2. Saccades:
These are rapid and involuntary eye movements that occur between fixations. Measurable saccade
related parameters include saccade number, amplitude and fixation-saccade ratio.
3. Scan path:
This includes a series of short fixations and saccades alternating before the eyes reach a target
location on the screen. Movement measures derived from scan path include scan path direction, duration,
length and area covered.
4. Gaze duration:
It refers to the sum of all fixations made in an area of interest before the eyes leave that area and also the
proportion of time spent in each area.

5. Pupil size and blink:
Pupil size and blink rate are measures used to study cognitive workload. Table I presents the characteristics
of different eye movements and their applications.
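The fixation/saccade distinction above is commonly operationalized with a velocity threshold (the I-VT algorithm): samples whose point-to-point angular velocity exceeds a threshold are labeled saccades, the rest fixations. A minimal sketch follows; the 100 deg/s threshold, the frame rate, and the pixel-to-degree scale are assumed values for illustration:

```python
import numpy as np

def classify_ivt(gaze_xy, fps=60, threshold_deg_s=100.0, deg_per_px=0.03):
    """Label each gaze sample 'fixation' or 'saccade' using a
    velocity threshold (I-VT). Threshold, fps and pixel-to-degree
    scale are assumed values to be calibrated per setup."""
    pts = np.asarray(gaze_xy, dtype=float)
    # point-to-point displacement in pixels, converted to deg/s
    step_px = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    speed = step_px * deg_per_px * fps
    labels = ["fixation"]  # first sample has no preceding velocity
    labels += ["saccade" if s > threshold_deg_s else "fixation" for s in speed]
    return labels

# illustrative gaze samples: three nearby points, then a large jump
samples = [(100, 100), (101, 100), (100, 101), (400, 300), (401, 300)]
print(classify_ivt(samples))
```

The jump from (100, 101) to (400, 300) is labeled a saccade; the small movements around it are fixations.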
B. Basic setup and method used for eye gaze estimation
Video based eye gaze tracking systems comprise fundamentally of one or more digital cameras, near infra-
red (NIR) LEDs and a computer with screen displaying a user interface where the user gaze is tracked. A
typical eye gaze tracking setup is shown in Fig. 1. The steps commonly involved in passive video-based eye
tracking include user calibration, capturing video frames of the face and eye regions of user, eye detection
and mapping with gaze coordinates on screen. The common methodology (called Pupil Center Corneal
Reflection or PCCR method) involves using NIR LEDs to produce glints on the eye cornea surface and then
capturing images/videos of the eye region [26][32]. Gaze is estimated from the relative movement between
the pupil center and glint positions. External NIR illumination with single/multiple LEDs (wavelengths
typically in the range 850+/- 30 nm with some works such as [33] using 940 nm) is often used to achieve
better contrast and avoid effects due to variations induced by natural light. Webcams are mostly used; these
operate at 30/60 fps frame rates and have infrared (IR) transmission filters to block out visible light.
Different gaze tracking methods are discussed in detail in Section III. The user-interface for gaze tracking
can be active or passive, single or multimodal [34]– [36]. In an active user interface, the user’s gaze can be
tracked to activate a function and gaze information can be used as an input modality. A passive interface is
a non-command interface where eye gaze data is collected to understand user interest or attention. Single
modal gaze tracking interfaces use gaze as the only input variable whereas a multimodal interface combines
gaze input along with mouse, keyboard, touch, or blink inputs for command.
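The PCCR mapping described above is often implemented as a polynomial regression fitted during user calibration: the pupil-center-minus-glint vector (dx, dy) is mapped to screen coordinates. The sketch below is a minimal illustration, not the method used by any particular tracker; the calibration vectors, screen points and second-order feature set are all assumed for demonstration:

```python
import numpy as np

def fit_gaze_mapping(vectors, screen_pts):
    """Fit a second-order polynomial mapping from pupil-glint
    vectors (dx, dy) to screen (x, y) via least squares."""
    v = np.asarray(vectors, dtype=float)
    dx, dy = v[:, 0], v[:, 1]
    # design matrix: [1, dx, dy, dx*dy, dx^2, dy^2]
    A = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(screen_pts, float), rcond=None)
    return coeffs

def map_gaze(coeffs, dx, dy):
    """Map one pupil-glint vector to a screen coordinate."""
    feats = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
    return feats @ coeffs

# illustrative 9-point calibration grid on a 1920x1080 screen
vectors = [(-1, -1), (0, -1), (1, -1),
           (-1, 0), (0, 0), (1, 0),
           (-1, 1), (0, 1), (1, 1)]
screen = [(0, 0), (960, 0), (1920, 0),
          (0, 540), (960, 540), (1920, 540),
          (0, 1080), (960, 1080), (1920, 1080)]
coeffs = fit_gaze_mapping(vectors, screen)
print(map_gaze(coeffs, 0.0, 0.0))  # near the screen center
```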
UML DIAGRAMS
The Unified Modeling Language allows the software engineer to express an analysis model using the
modeling notation that is governed by a set of syntactic, semantic and pragmatic rules.
A UML system is represented using five different views that describe the system from distinctly different
perspectives. Each view is defined by a set of diagrams, which are as follows.
User Model View
This view represents the system from the user’s perspective. The analysis representation describes a usage
scenario from the end-user’s perspective.
Structural Model view
In this model the data and functionality are arrived at from inside the system. This model view models the
static structures.

Behavioral Model View
It represents the dynamic (behavioral) parts of the system, depicting the interactions between the various
structural elements described in the user model and structural model views.
Implementation Model View
In this view the structural and behavioral parts of the system are represented as they are to be built.

5.2.1 Class diagram


The Class diagram is a static diagram. It represents the static view of an application. Class diagram is not
only used for visualizing, describing and documenting different aspects of a system but also for constructing
executable code of the software application. It describes the attributes and operations of a class and also the
constraints imposed on the system. The class diagrams are widely used in the modeling of object-oriented
systems because they are the only UML diagrams which can be mapped directly with object-oriented
languages. The class diagram shows a collection of classes, interfaces, associations, collaborations and
constraints. It is also known as a structural diagram.
Purpose
The purpose of the class diagram is to model the static view of an application. Class diagrams are the only
diagrams which can be directly mapped with object-oriented languages and are thus widely used at the time of
construction.
UML diagrams like the activity diagram and sequence diagram can only give the sequence flow of the
application, but the class diagram is a bit different; it is the most popular UML diagram in the coder
community. The purpose of the class diagram can be summarized as:
• Analysis and design of the static view of an application.
• Describe the responsibilities of a system.
• Base for component and deployment diagrams.
• Forward and reverse engineering.
Active Class
Active classes initiate and control the flow of activity, while passive classes store data and serve other
classes. Illustrate active classes with a thicker border.
Visibility
Use visibility markers to signify who can access the information which is in a class. Private visibility hides
information from anything outside the class partition. Public visibility allows all other classes to view the
marked information. Protected visibility allows child classes to access information which is inherited
from a parent class.
Associations
Associations represent static relationships between the classes. Place association names above, on, or
below the association line. Use a filled arrow to indicate the direction of the relationship. Place roles at the
ends of an association. Roles represent how the two classes see each other.
Multiplicity
Place multiplicity notations at the ends of an association. These symbols indicate the number of instances
of one class linked to an instance of the other class.
Constraint
Constraints are placed inside the curly braces {}.
Composition and Aggregation
Composition and aggregation denote semantic associations between two classes in a UML diagram. Both are
used in class diagrams, but they differ in their symbols.
Generalization
It is a specification relationship in which objects of the specialized element (the child) are
substitutable for objects of the generalized element (the parent). It is used in the class diagram.
In our class diagram the classes are Symptoms Reader, Symptoms Analyzer and Calculate Values. The
responsibility of Symptoms Reader is to read the symptoms from the patients. Symptoms Reader has
methods such as getSymptoms and addSymptoms, which are used to get the symptoms from the user and
add them to the training data set. Symptoms Analyzer analyzes the symptoms entered by the user, and the
Calculate Values class calculates the accuracy values to predict the disease and suggest a specialist.
The class diagram is the main building block of object-oriented modeling. It is used both for general
conceptual modeling of the systematics of the application, and for detailed modeling, translating the models
into programming code. Class diagrams can also be used for data modeling. The classes in a class diagram
represent both the main objects and interactions in the application and the classes to be programmed. In the
diagram, classes are represented as boxes which contain three parts:
The upper part holds the name of the class
The middle part contains the attributes of the class
The bottom part gives the methods or operations the class can take or undertake.
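As an illustration, the three compartments map directly onto a class in code. The sketch below uses the Symptoms Reader class and its getSymptoms/addSymptoms operations mentioned in the class diagram discussion; the exact attribute layout is an assumption:

```python
class SymptomsReader:
    """Upper compartment: the class name."""

    def __init__(self):
        # middle compartment: attributes of the class
        self.symptoms = []

    # bottom compartment: the operations the class can undertake
    def get_symptoms(self):
        """Return a copy of the symptoms collected so far."""
        return list(self.symptoms)

    def add_symptoms(self, symptom):
        """Add a symptom entered by the user to the data set."""
        self.symptoms.append(symptom)

reader = SymptomsReader()
reader.add_symptoms("fever")
print(reader.get_symptoms())
```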

Figure 5.2.1.1 Class Diagram
5.2.2 Sequence Diagram
A sequence diagram is a kind of interaction diagram that shows how processes operate with one another
and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows object
interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the
sequence of messages exchanged between the objects needed to carry out the functionality of the scenario.
Sequence diagrams are typically associated with use case realizations in the Logical View of the system
under development. Sequence diagrams are sometimes called event diagrams, event scenarios, and timing
diagrams.

Object: Objects are instances of classes and are arranged horizontally. The pictorial representation for an
Object is a class (a rectangle) with the name prefixed by the object name (optional).

Actor: An Actor can also communicate with objects, so actors too can be listed as a column. An Actor is
modeled using the stick figure.

Lifeline: The Lifeline identifies the existence of the object over time. The notation for a lifeline is a vertical
dotted line extending from an object.

Activation: Activations, modeled as rectangular boxes on the lifeline, indicate when the object is performing
an action.

Message: Messages, modeled as horizontal arrows between Activations, indicate the communication
between the objects.

5.2.2 Graphical Representation for Sequence Diagram

Figure 5.2.2.1 sequence Diagram


5.2.3 Activity Diagram


Activity diagrams are graphical representations of workflows of stepwise activities and actions with
support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams
can be used to describe the business and operational step-by-step workflows of components in a
system. An activity diagram shows the overall flow of control.
Action: An action state represents a single step within an activity, that is, one not further decomposed into
individual elements that are actions. Action states have sets of incoming and outgoing edges that specify
control flow and data flow from and to other nodes.

Initial state: An initial node is a control node at which flow starts when the activity is invoked. An activity
may have more than one initial node. An initial state in an activity is represented as a filled circle.

Fork: For the branching of flows into two or more parallel flows we use a synchronization bar, which is
depicted as a thick horizontal or vertical line.

Control Flow: A control flow is an edge that starts an activity node after the previous one is finished.

Final state: An activity may have more than one final node. The first one reached stops all flow in the
activity.

5.2.3 Graphical Representation of Activity Diagram

Figure 5.2.3.1 Activity Diagram

5.2.4 Component Diagram
Component diagrams are different in terms of nature and behaviour. They are used to model the physical
aspects of a system. Now the question is, what are these physical aspects? Physical aspects are elements
such as executables, libraries, files, documents, etc., which reside in a node.
Component diagrams are used to visualize the organization and relationships among components in a
system. These diagrams are also used to make executable systems.

Figure 5.2.4.1 component Diagram

5.2.5Deployment Diagram
In UML, deployment diagrams model the physical architecture of a system. Deployment diagrams show
the relationships between the software and hardware components in the system and the physical
distribution of the processing.
Deployment diagrams, which you typically prepare during the implementation phase of development, show
the physical arrangement of the nodes in a distributed system, the artifacts that are stored on each node, and
the components and other elements that the artifacts implement. Nodes represent hardware devices such as
computers, sensors, and printers, as well as other devices that support the runtime environment of a system.
Communication paths and deploy relationships model the connections in the system.

Figure 5.2.5.1 Deployment Diagram

CHAPTER-6

IMPLEMENTATION
6. Implementation

6.1 Technology Description:


Eye Aspect Ratio and Mouth Aspect Ratio:
Eye Aspect Ratio:
In this section we use facial landmark detection to detect 68 specific points on the face. This approach is
inspired by [1] and [2], further modified and improved. By knowing the indexes of these points, we can use
the same method to select a specific area of the face (e.g., eyes, mouth, eyebrows, nose, ears). To create an
eye blink detector, the eyes are the area of the face that we are interested in. We can divide the process of
developing an eye blink detector into the following steps:
1. Detecting the face in the image
2. Detecting facial landmarks of interest (the eyes)
3. Calculating eye width and height
4. Calculating eye aspect ratio (EAR) – relation between the width and the height of the eye
5. Displaying the eye blink counter in the output video
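Steps 3 and 4 above can be sketched as follows. This is a minimal illustration using synthetic landmark coordinates: the six points follow dlib's per-eye ordering (two corner points and two pairs of upper/lower lid points), and the 0.2 blink threshold is an assumed value to be tuned per camera and user:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Compute the EAR for six eye landmarks ordered as in dlib's
    68-point model: p1..p6 around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    # vertical distances between the two pairs of upper/lower lid points
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    # horizontal distance between the eye corners
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.2  # assumed blink threshold; tune empirically

# synthetic landmarks: an open eye is much taller than a closed one
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.2), (4, 2.2), (6, 2), (4, 1.8), (2, 1.8)]
print(eye_aspect_ratio(open_eye))    # well above the threshold
print(eye_aspect_ratio(closed_eye))  # drops below the threshold
```

When the EAR stays below the threshold for several consecutive frames, a blink is counted and displayed on the output video.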

Mouth Aspect Ratio:


➢ We define the mouth aspect ratio function to find out whether a person is yawning. This function gives
the ratio of the height to the width of the mouth. If the height is greater than the width, it means that the
mouth is wide open.
➢ For this as well, we use a series of points from the dlib detector to find the ratio. We create a counter
variable to count the number of frames the eyes have been closed or the person has been yawning.
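A minimal sketch of such a mouth ratio function, using four illustrative points (the mouth corners plus the top and bottom lip) rather than the full dlib landmark set:

```python
from math import dist

def mouth_aspect_ratio(mouth):
    # mouth: four illustrative (x, y) points, in order:
    # left corner, top lip, right corner, bottom lip
    height = dist(mouth[1], mouth[3])
    width = dist(mouth[0], mouth[2])
    return height / width

# A wide-open (yawning) mouth: height exceeds width, so the ratio > 1
yawning = [(0, 0), (2, 3), (4, 0), (2, -3)]
print(mouth_aspect_ratio(yawning))  # 1.5
```

In the actual project the ratio is computed from the dlib mouth landmarks, but the height-over-width idea is the same.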

6.2 Python:
You will need a text editor:
Just about any text editor will suffice for creating Python script files. You can use Microsoft Notepad,
Microsoft WordPad, Microsoft Word, or just about any word processor if you want to.

Program:
A program has an executable form that the computer can use directly to execute the instructions.
The same program also exists in its human-readable source code form, from which executable programs are
derived (e.g., compiled).
Python
What is Python? Chances are you are asking yourself this. You may have found this book because you
want to learn to program but don’t know anything about programming languages. Or you may have heard
of programming languages like C, C++, C#, or Java and want to know what Python is and how it compares
to “big name” languages. Hopefully I can explain it for you.
Python concepts
If you are not interested in the hows and whys of Python, feel free to skip to the next chapter. In this
chapter I will try to explain to the reader why I think Python is one of the best languages available and why
it’s a great one to start programming with.
• Open-source general-purpose language.
• Object Oriented, Procedural, Functional
• Easy to interface with C/ObjC/Java/Fortran
• Easy-ish to interface with C++ (via SWIG)
• Great interactive environment
Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is designed to
be highly readable. It uses English keywords frequently, whereas other languages use punctuation, and it
has fewer syntactical constructions than other languages.
Variables
Variables are nothing but reserved memory locations to store values. This means that when you
create a variable you reserve some space in memory.
Based on the data type of a variable, the interpreter allocates memory and decides what can be stored in the
reserved memory. Therefore, by assigning different data types to variables, you can store integers, decimals
or characters in these variables.
Standard Data Types
The data stored in memory can be of many types. For example, a person's age is stored as a numeric
value and his or her address is stored as alphanumeric characters. Python has various standard data types
that are used to define the operations possible on them and the storage method for each of them.
Python has five standard data types −

• Numbers
• String
• List
• Tuple
• Dictionary
Python Numbers
Number data types store numeric values. Number objects are created when you assign a value to
them.
Python Strings
Strings in Python are identified as a contiguous set of characters represented in the quotation marks.
Python allows for either pairs of single or double quotes. Subsets of strings can be taken using the slice
operator ([] and [:]) with indexes starting at 0 in the beginning of the string and working their way from -1
at the end.
Python Lists
Lists are the most versatile of Python's compound data types. A list contains items separated by
commas and enclosed within square brackets ([]). To some extent, lists are similar to arrays in C. One
difference between them is that all the items belonging to a list can be of different data types.
The values stored in a list can be accessed using the slice operator ([] and [:]) with indexes starting at 0 in
the beginning of the list and working their way to end -1. The plus (+) sign is the list concatenation operator,
and the asterisk (*) is the repetition operator.
Python Tuples
A tuple is another sequence data type that is similar to the list. A tuple consists of a number of values
separated by commas. Unlike lists, however, tuples are enclosed within parentheses.
The main differences between lists and tuples are: Lists are enclosed in brackets ([]) and their elements and
size can be changed, while tuples are enclosed in parentheses (()) and cannot be updated. Tuples can be
thought of as read-only lists.
Python Dictionary
Python's dictionaries are a kind of hash table. They work like the associative arrays or hashes found
in Perl and consist of key-value pairs. A dictionary key can be almost any Python type, but keys are usually
numbers or strings. Values, on the other hand, can be any arbitrary Python object.
Dictionaries are enclosed by curly braces ({}) and values can be assigned and accessed using square braces
([]).
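The five standard types above can be exercised in a few lines (a generic sketch, not project code):

```python
count = 42                   # Number
name = "python"              # String: sliced with [] and [:]
items = [1, "two", 3.0]      # List: mutable, mixed element types allowed
point = (4, 5)               # Tuple: like a list, but read-only
ages = {"alice": 30}         # Dictionary: key-value pairs

print(name[0:2])             # py  -- slice of a string
print(items + [4])           # [1, 'two', 3.0, 4]  -- + concatenates lists
print(items * 2)             # * is the repetition operator
print(point[0])              # 4   -- tuples are indexed like lists
print(ages["alice"])         # 30  -- values are accessed by key
```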

Different modes in python
Python has two basic modes: normal and interactive.
The normal mode is the mode where the scripted and finished .py files are run in the Python interpreter.
Interactive mode is a command line shell which gives immediate feedback for each statement, while running
previously fed statements in active memory. As new lines are fed into the interpreter, the fed program is
evaluated both in part and in whole.
20 Python libraries
1. Requests: The most famous http library written by Kenneth Reitz. It’s a must have for every python
developer.
2. Scrapy: If you are involved in web scraping then this is a must have library for you. After using this
library, you won’t use any other.
3. wxPython: A GUI toolkit for python. I have primarily used it in place of tkinter. You will really love
it.
4. Pillow: A friendly fork of PIL (Python Imaging Library). It is more user friendly than PIL and is a must
have for anyone who works with images.
5. SQL Alchemy: A database library. Many love it and many hate it. The choice is yours.
6. Beautiful Soup: I know it’s slow but this xml and html parsing library is very useful for beginners.
7. Twisted: The most important tool for any network application developer. It has a very beautiful api and
is used by a lot of famous python developers.
8. NumPy: How can we leave out this very important library? It provides advanced math functionality to
python.
9. SciPy: When we talk about NumPy then we have to talk about SciPy. It is a library of algorithms and
mathematical tools for python and has caused many scientists to switch from ruby to python.
10. matplotlib: A numerical plotting library. It is very useful for any data scientist or any data analyzer.
11. Pygame: Which developer does not like to play games and develop them? This library will help you
achieve your goal of 2d game development.
12. Pyglet: A 3D animation and game creation engine. This is the engine in which the famous python port of
Minecraft was made.
13. PyQt: A GUI toolkit for python. It is my second choice after wxPython for developing GUIs for my
python scripts.
14. PyGTK: Another python GUI library. It is the same library in which the famous BitTorrent client is
created.
15. Scapy: A packet sniffer and analyzer for python, made in python.
16. pywin32: A python library which provides some useful methods and classes for interacting with
windows.
17. nltk: Natural Language Toolkit – I realize most people won't be using this one, but it's generic enough.
It is a very useful library if you want to manipulate strings. But its capacity is beyond that. Do check it out.
18. nose: A testing framework for python. It is used by millions of python developers. It is a must have if
you do test driven development.
19. SymPy: SymPy can do algebraic evaluation, differentiation, expansion, complex numbers, etc. It is
contained in a pure Python distribution.
20. IPython: I just can't stress enough how useful this tool is. It is a python prompt on steroids. It has
completion, history, shell capabilities, and a lot more. Make sure that you take a look at it.
NumPy
NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually
numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called axes.
The number of axes is rank.
• Offers MATLAB-ish capabilities within Python
• Fast array operations
• 2D arrays, multi-D arrays, linear algebra etc.
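A short sketch of these array capabilities (assuming NumPy is installed):

```python
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])  # a 2-D array: two axes
print(a.ndim)            # 2 -- number of axes (the "rank")
print(a.shape)           # (2, 2)
print((a * 2).tolist())  # [[2.0, 4.0], [6.0, 8.0]] -- fast elementwise math
print(a.T.tolist())      # [[1.0, 3.0], [2.0, 4.0]] -- transpose (linear algebra)
```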
Matplotlib
• High quality plotting library.
Python class and objects
These are the building blocks of OOP. A class creates a new type of object. This object can be anything,
whether an abstract data concept or a model of a physical object, e.g., a chair. Each class has individual
characteristics unique to that class, including variables and methods. Classes are very powerful and currently
"the big thing" in most programming languages. Hence, there are several chapters dedicated to OOP later
in the book.
When I first learned C and C++, I did great; functions just made sense for me.
Having messed around with BASIC in the early ’90s, I realized functions were just like subroutines so there
wasn’t much new to learn.
However, when my C++ course started talking about objects, classes, and all the new features of OOP, my
grades definitely suffered.
Once you learn OOP, you’ll realize that it’s actually a pretty powerful tool. Plus, many Python libraries and
APIs use classes, so you should at least be able to understand what the code is doing.
One thing to note about Python and OOP: it's not mandatory to use objects in your code; use them in
whatever way works best. Maybe you don't need a full-blown class with initialization code and methods just
to return a calculation. With Python, you can get as technical as you want.
Objects are an encapsulation of variables and functions into a single entity. Objects get their variables and
functions from classes. Classes are essentially a template to create your objects.
Inheritance
First off, classes allow you to modify a program without really making changes to it.
To elaborate, by subclassing a class, you can change the behavior of the program by simply adding new
components to it rather than rewriting the existing components.
As we’ve seen, an instance of a class inherits the attributes of that class.
However, classes can also inherit attributes from other classes. Hence, a subclass inherits from a superclass
allowing you to make a generic superclass that is specialized via subclasses.
The subclasses can override the logic in a superclass, allowing you to change the behavior of your classes
without changing the superclass at all.
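A minimal sketch of subclassing (the class names are invented for illustration): the subclass overrides one method, and the superclass itself is never changed.

```python
class Detector:                       # generic superclass
    def describe(self):
        return "detects " + self.target()

    def target(self):
        return "something"

class EyeDetector(Detector):          # subclass: inherits describe(),
    def target(self):                 # overrides only target()
        return "eyes"

print(Detector().describe())     # detects something
print(EyeDetector().describe())  # detects eyes
```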
Exceptions
I’ve talked about exceptions before but now I will talk about them in depth. Essentially, exceptions
are events that modify program’s flow, either intentionally or due to errors.
They are special events that can occur due to an error, e.g., trying to open a file that doesn’t exist, or when
the program reaches a marker, such as the completion of a loop.
Exceptions, by definition, don’t occur very often; hence, they are the "exception to the rule" and a special
class has been created for them. Exceptions are everywhere in Python.
Virtually every module in the standard Python library uses them, and Python itself will raise them in a lot
of different circumstances.
Here are just a few examples:
• Accessing a non-existent dictionary key will raise a KeyError exception.
• Searching a list for a non-existent value will raise a ValueError exception.
• Calling a non-existent method will raise an AttributeError exception.
• Referencing a non-existent variable will raise a NameError exception.
• Mixing datatypes without coercion will raise a TypeError exception.
One use of exceptions is to catch a fault and allow the program to continue working; we have seen this
before when we talked about files.
This is the most common way to use exceptions. When programming with the Python command line
interpreter, you don’t need to worry about catching exceptions.
Your program is usually short enough to not be hurt too much if an exception occurs.
Plus, having the exception occur at the command line is a quick and easy way to tell if your code logic has
a problem.
However, if the same error occurred in your real program, it will fail and stop working. Exceptions can be
created manually in the code by raising an exception.
It operates exactly like a system-caused exception, except that the programmer is doing it on purpose. This
can be for a number of reasons. One of the benefits of using exceptions is that, by their nature, they don’t
put any overhead on the code processing.
Because exceptions aren’t supposed to happen very often, they aren’t processed until they occur.
Exceptions can be thought of as a special form of the if/elif statements. You can realistically do the same
thing with if blocks as you can with exceptions.
However, as already mentioned, exceptions aren’t processed until they occur; if blocks are processed all the
time.
Proper use of exceptions can help the performance of your program.
The more infrequently the error might occur, the better off you are to use exceptions; using if blocks requires
Python to always test extra conditions before continuing.
Exceptions also make code management easier: if your programming logic is mixed in with error-handling
if statements, it can be difficult to read, modify, and debug your program.
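A small sketch of catching a fault so the program keeps working, and of raising an exception on purpose (the dictionary and messages are invented):

```python
config = {"threshold": 0.2}

try:
    value = config["missing"]   # a non-existent key raises KeyError
except KeyError:
    value = 0.0                 # recover with a default and carry on

print(value)  # 0.0

try:
    raise ValueError("threshold must be positive")  # raised manually
except ValueError as err:
    msg = str(err)              # handled like any system-caused exception

print(msg)  # threshold must be positive
```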
Python modules
Python allows us to store our code in files (also called modules). This is very useful for more serious
programming, where we do not want to retype a long function definition from the very beginning just to
change one mistake. In doing this, we are essentially defining our own modules, just like the modules
defined already in the Python library.
To support this, Python has a way to put definitions in a file and use them in a script or in an interactive
instance of the interpreter. Such a file is called a module; definitions from a module can be imported into
other modules or into the main module.
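A sketch of defining and importing a module. Here the module file is written from within the script only to keep the example self-contained; normally you would create geometry.py in your editor:

```python
import importlib.util
import os
import tempfile

# Write a tiny module to disk; this stands in for editing geometry.py
source = "def midpoint(a, b):\n    return ((a[0]+b[0])/2, (a[1]+b[1])/2)\n"
path = os.path.join(tempfile.mkdtemp(), "geometry.py")
with open(path, "w") as f:
    f.write(source)

# Import the definitions from that file and use them
spec = importlib.util.spec_from_file_location("geometry", path)
geometry = importlib.util.module_from_spec(spec)
spec.loader.exec_module(geometry)
print(geometry.midpoint((0, 0), (4, 6)))  # (2.0, 3.0)
```

In everyday use a plain `import geometry` does the same job once the file is on the module search path.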
Testing code
As indicated above, code is usually developed in a file using an editor.
To test the code, import it into a Python session and try to run it.
Usually there is an error, so you go back to the file, make a correction, and test again.
This process is repeated until you are satisfied that the code works.
The entire process is known as the development cycle.
There are two types of errors that you will encounter. Syntax errors occur when the form of some command
is invalid.
This happens when you make typing errors such as misspellings, or call something by the wrong name, and
for many other reasons. Python will always give an error message for a syntax error.
Functions in Python
It is possible, and very useful, to define our own functions in Python. Generally speaking, if you
need to do a calculation only once, then use the interpreter. But when you or others have need to perform a
certain type of calculation many times, then define a function.
To carry out that specific task, the function might or might not need multiple inputs. When the
task is carried out, the function may or may not return one or more values. Python also provides many
built-in functions, for example help(), min(), and print().
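A sketch of defining functions of both kinds, with invented names: one that takes inputs and returns a value, and one that performs a task without returning anything useful:

```python
def scaled_distance(dx, dy, scale=1.0):
    # Takes inputs (one with a default) and returns a value
    return scale * (dx * dx + dy * dy) ** 0.5

def report(message):
    # Performs a task; implicitly returns None
    print(message)

print(scaled_distance(3, 4))       # 5.0
print(scaled_distance(3, 4, 2.0))  # 10.0
report("done")                     # done
```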
Python Namespace
Generally speaking, a namespace (sometimes also called a context) is a naming system for making
names unique to avoid ambiguity. Everybody knows a namespacing system from daily life, i.e., the naming
of people with a first name and a family name (surname).
An example is a network: each network device (workstation, server, printer, ...) needs a unique
name and address. Yet another example is the directory structure of file systems.
The same file name can be used in different directories, the files can be uniquely accessed via the
pathnames.
Many programming languages use namespaces or contexts for identifiers. An identifier defined in a
namespace is associated with that namespace.
This way, the same identifier can be independently defined in multiple namespaces. (Like the same file
names in different directories) Programming languages, which support namespaces, may have different
rules that determine to which namespace an identifier belongs.
Namespaces in Python are implemented as Python dictionaries; this means each is a mapping from names
(keys) to objects (values). The user doesn't have to know this to write a Python program or to use
namespaces.
Some namespaces in Python:
• global names of a module
• local names in a function or method invocation
• built-in names: this namespace contains built-in functions (e.g., abs(), len(), ...) and built-in exception
names
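The three namespaces listed above can be seen side by side in a short sketch:

```python
x = "global"              # lives in the module's global namespace

def show():
    x = "local"           # a different x, in the function's local namespace
    return x

print(show())   # local
print(x)        # global -- the global name was never touched
print(abs(-3))  # 3 -- abs comes from the built-in namespace
```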

Garbage Collection
Garbage Collector exposes the underlying memory management mechanism of Python, the
automatic garbage collector. The module includes functions for controlling how the collector operates and
to examine the objects known to the system, either pending collection or stuck in reference cycles and unable
to be freed.
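A sketch of the collector at work: two objects in a reference cycle are not freed by reference counting alone, but a forced collection pass reclaims them.

```python
import gc

class Node:
    pass

a, b = Node(), Node()
a.other, b.other = b, a   # reference cycle: a -> b -> a
del a, b                  # the names are gone, but the cycle keeps both alive

found = gc.collect()      # force a full collection pass
print(found >= 2)         # True: the two cycle members were unreachable
```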
Python XML Parser
XML is a portable, open-source language that allows programmers to develop applications that can be read
by other applications, regardless of operating system and/or developmental language.
What is XML? The Extensible Markup Language (XML) is a markup language much like HTML or SGML.
This is recommended by the World Wide Web Consortium and available as an open standard.
XML is extremely useful for keeping track of small to medium amounts of data without requiring a SQL-
based backbone.
XML Parser Architectures and APIs: the Python standard library provides a minimal but useful set of
interfaces to work with XML.
The two most basic and broadly used APIs to XML data are the SAX and DOM interfaces.
Simple API for XML (SAX): Here, you register callbacks for events of interest and then let the parser proceed
through the document.
This is useful when your documents are large or you have memory limitations; it parses the file as it reads
it from disk, and the entire file is never stored in memory.
Document Object Model (DOM) API: This is a World Wide Web Consortium recommendation wherein the
entire file is read into memory and stored in a hierarchical, tree-based form to represent all the features of
an XML document.
SAX obviously cannot process information as fast as DOM can when working with large files. On the other
hand, using DOM exclusively can really kill your resources, especially if used on a lot of small files.
SAX is read-only, while DOM allows changes to the XML file. Since these two different APIs literally
complement each other, there is no reason why you cannot use them both for large projects.
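A sketch of the DOM style using the standard library's xml.dom.minidom (the XML snippet is invented): the whole document is parsed into an in-memory tree that can then be queried.

```python
from xml.dom.minidom import parseString

doc = parseString("<settings><camera fps='30' device='0'/></settings>")
cam = doc.getElementsByTagName("camera")[0]  # the whole tree is in memory
print(cam.getAttribute("fps"))  # 30
```

The SAX style would instead feed the same document through event callbacks without ever holding the full tree in memory.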
Python Web Frameworks
A web framework is a code library that makes a developer's life easier when building reliable, scalable and
maintainable web applications.
Why are web frameworks useful?
Web frameworks encapsulate what developers have learned over the past twenty years while programming
sites and applications for the web. Frameworks make it easier to reuse code for common HTTP operations

and to structure projects so other developers with knowledge of the framework can quickly build and
maintain the application.
Common web framework functionality
Frameworks provide functionality in their code or through extensions to perform common operations
required to run web applications. These common operations include:
1. URL routing
2. HTML, XML, JSON, and other output format templating
3. Database manipulation
4. Security against Cross-site request forgery (CSRF) and other attacks
5. Session storage and retrieval
Not all web frameworks include code for all of the above functionality. Frameworks fall on the spectrum
from executing a single use case to providing every known web framework feature to every developer.
Some frameworks take the "batteries-included" approach where everything possibly comes bundled with
the framework while others have a minimal core package that is amenable to extensions provided by other
packages.
Comparing web frameworks
There is also a repository called compare-python-web-frameworks where the same web application is being
coded with varying Python web frameworks, templating engines and object-relational mappers (ORMs).
Web framework resources
• When you are learning how to use one or more web frameworks it's helpful to have an idea of what the
code under the covers is doing.
• "Frameworks" is a really well-done short video that explains how to choose between web frameworks.
The author has some particular opinions about what should be in a framework. For the most part I agree,
although I've found sessions and database ORMs to be a helpful part of a framework when done well.
• "What is a web framework?" is an in-depth explanation of what web frameworks are and their relation to
web servers.
• Django vs Flask vs Pyramid: Choosing a Python web framework contains background information and code
comparisons for similar web applications built in these three big Python frameworks.
• This fascinating blog post takes a look at the code complexity of several Python web frameworks by
providing visualizations based on their code bases.
• Python’s web frameworks benchmarks are a test of the responsiveness of a framework with encoding
an object to JSON and returning it as a response as well as retrieving data from the database and

rendering it in a template. There were no conclusive results but the output is fun to read about
nonetheless.
• What web frameworks do you use and why are they awesome? is a language agnostic Reddit discussion
on web frameworks. It's interesting to see what programmers in other languages like and dislike about
their suite of web frameworks compared to the main Python frameworks.
• This user-voted question & answer site asked "What are the best general purpose Python web
frameworks usable in production?". The votes aren't as important as the list of the many frameworks
that are available to Python developers.
Web frameworks learning checklist
1. Choose a major Python web framework (Django or Flask are recommended) and stick with it. When
you're just starting it's best to learn one framework first instead of bouncing around trying to understand
every framework.
2. Work through a detailed tutorial found within the resource’s links on the framework's page.
3. Study open-source examples built with your framework of choice so you can take parts of those projects
and reuse the code in your application.
4. Build the first simple iteration of your web application then go to the deployment section to make it
accessible on the web.
Python-Data Base Communication
Connector/Python provides a connect() call used to establish connections to the MySQL server. The
following sections describe the permitted arguments for connect() and describe how to use option files that
supply additional arguments.
A database is an organized collection of data. The data are typically organized to model aspects of reality
in a way that supports processes requiring this information.
The term "database" can refer either to the data themselves or to the database management system. The
database management system is a software application for the interaction between users and the database itself.
Databases are popular for many applications, especially for use with web applications or customer-oriented
programs.
Users don't have to be human users. They can be other programs and applications as well. We will learn
how Python, or better a Python program, can interact as a user of an SQL database.
This is an introduction into using SQLite and MySQL from Python.
The Python standard for database interfaces is the Python DB-API, which is used by Python's database
interfaces.
The DB-API has been defined as a common interface, which can be used to access relational databases.
In other words, the code in Python for communicating with a database should be the same, regardless of
the database and the database module used. Even though we use lots of SQL examples, this is not an
introduction into SQL but a tutorial on the Python interface.
SQLite is a simple relational database system, which saves its data in regular data files or even in the internal
memory of the computer, i.e., the RAM.
It was developed for embedded applications, like Mozilla-Firefox (Bookmarks), Symbian OS or Android.
SQLite is "quite" fast, even though it uses a simple file. It can be used for large databases as well.
If you want to use SQLite, you have to import the module sqlite3. To use a database, you first have to create
a Connection object.
The connection object will represent the database. The argument of connect() - in the following example
"companys.db" - functions both as the name of the file where the data will be stored and as the name of
the database. If a file with this name exists, it will be opened.
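A sketch of that flow with the standard sqlite3 module. An in-memory database is used here instead of a file like "companys.db" so the example leaves nothing on disk; the table and row data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the Connection object represents the DB
cur = conn.cursor()
cur.execute("CREATE TABLE company (name TEXT, employees INTEGER)")
cur.execute("INSERT INTO company VALUES (?, ?)", ("Example Ltd", 42))
conn.commit()

cur.execute("SELECT name, employees FROM company")
row = cur.fetchone()
print(row)  # ('Example Ltd', 42)
conn.close()
```

Replacing ":memory:" with a filename such as "companys.db" would persist the data between runs, exactly as described above.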
Spacy:
spaCy is a free, open-source library for advanced Natural Language Processing (NLP) in Python.
If you’re working with a lot of text, you’ll eventually want to know more about it. For example, what’s it
about? What do the words mean in context? Who is doing what to whom? What companies and products
are mentioned? Which texts are similar to each other?
spaCy is designed specifically for production use and helps you build applications that process and
“understand” large volumes of text. It can be used to build information extraction or natural language
understanding systems, or to pre-process text for deep learning.
Imutils:
Imutils is a package based on OpenCV, which can call the OpenCV interface more simply. It can easily
realize a series of operations such as image translation, rotation, scaling, skeletonization and so on. Confirm
that NumPy, SciPy, Matplotlib and OpenCV are installed before installation.

6.3 OpenCV
OpenCV was started at Intel in 1999 by Gary Bradsky, and the first release came out in 2000. Vadim
Pisarevsky joined Gary Bradsky to manage Intel's Russian software OpenCV team. In 2005, OpenCV was
used on Stanley, the vehicle that won the 2005 DARPA Grand Challenge. Later, its active development
continued under the support of Willow Garage with Gary Bradsky and Vadim Pisarevsky leading the
project. OpenCV now supports a multitude of algorithms related to Computer Vision and Machine Learning
and is expanding day by day. OpenCV supports a wide variety of programming languages such as C++,
Python, Java, etc., and is available on different platforms including Windows, Linux, OS X, Android, and

iOS. Interfaces for high-speed GPU operations based on CUDA and OpenCL are also under active
development. OpenCV-Python is the Python API for OpenCV, combining the best qualities of the
OpenCV C++ API and the Python language.

OpenCV-Python
OpenCV-Python is a library of Python bindings designed to solve computer vision problems. Python is a
general-purpose programming language started by Guido van Rossum that became very popular very
quickly, mainly because of its simplicity and code readability. It enables the programmer to express ideas
in fewer lines of code without reducing readability. Compared to languages like C/C++, Python is slower.
That said, Python can be easily extended with C/C++, which allows us to write computationally intensive
code in C/C++ and create Python wrappers that can be used as Python modules. This gives us two
advantages: first, the code is as fast as the original C/C++ code (since it is the actual C++ code working in
the background) and second, it is easier to code in Python than C/C++. OpenCV-Python is a Python wrapper for
the original OpenCV C++ implementation. OpenCV-Python makes use of NumPy, which is a highly
optimized library for numerical operations with a MATLAB-style syntax. All the OpenCV array structures
are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries that use
NumPy such as SciPy and Matplotlib.
NumPy:
NumPy is a very popular python library for large multi-dimensional array and matrix processing, with the
help of a large collection of high-level mathematical functions. It is very useful for fundamental scientific
computations in Machine Learning. It is particularly useful for linear algebra, Fourier transform, and random
number capabilities. High-end libraries like TensorFlow use NumPy internally for manipulation of
tensors.
import numpy as np
➢ Face detection uses classifiers, which are algorithms that detect what is a face (1) and what is not a face
(0) in an image. Here we work with face detection using a Haar-Cascade classifier with OpenCV.
OpenCV comes with many pre-trained haarcascade classifier models, such as face, eye, and smile, as
XML files. Those XML files (pre-trained models) are stored in the opencv/data/haarcascades/ folder. You
can also add other classifiers if you want, but here I am going to use only two classifiers, i.e., one for the
face and another for the eyes.
➢ In this project, we are going to use the default camera of our system/laptop to take input as live video, or
you can also pass a video as an argument to perform face detection on it. As we can't directly work
on video, we will be extracting each frame from the video and then performing face detection on the frames.
6.4 Dlib
Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex
software in C++ to solve real world problems. It is used in both industry and academia in a wide range of
domains including robotics, embedded devices, mobile phones, and large high performance computing
environments. Dlib's open-source licensing allows you to use it in any application, free of charge.
The dlib library is arguably one of the most utilized packages for face recognition. A Python package
appropriately named face_recognition wraps dlib's face recognition functions into a simple, easy-to-use
API. However, dlib includes two face detection methods built into the library:
A HOG + Linear SVM face detector that is accurate and computationally efficient.
A Max-Margin (MMOD) CNN face detector that is both highly accurate and very robust, capable of
detecting faces from varying viewing angles, lighting conditions, and occlusion.
To detect the faces and eyes in our video we need to call the frontal face
detector dlib.get_frontal_face_detector() and the facial landmark predictor dlib.shape_predictor from the dlib
library. We already explained what a facial landmark predictor is in the previous post entitled: How to detect
facial landmarks using dlib and OpenCV.
1. Dlib's facial landmark predictor is an implementation of the One Millisecond Face Alignment with an
Ensemble of Regression Trees paper by Kazemi and Sullivan (2014).

2. 68 coordinates are detected for the given face by the landmark predictor.
3. Dlib's framework can be trained to predict any shape. Hence it can be used for custom shape
detections as well.
4. We used dlib's pre-trained face detector, based on a modification of the standard Histogram of Oriented
Gradients + Linear SVM method for object detection.
5. In face recognition there are two commonly used open-source libraries, namely dlib and OpenCV.
Analysis of facial recognition algorithms is needed as a reference for software developers who want to
implement facial recognition features into an application program. From dlib, the algorithms to be analyzed
are CNN and HOG; from OpenCV, they are DNN and Haar Cascades. These four algorithms are
analyzed in terms of speed and accuracy.

6.5 SAMPLE CODE:

if len(rects) > 0:
    rect = rects[0]
else:
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    continue

# Determine the facial landmarks for the face region, then
# convert the facial landmark (x, y)-coordinates to a NumPy array
shape = predictor(gray, rect)
shape = face_utils.shape_to_np(shape)

# Extract the mouth, eye and nose coordinates, then use the
# eye coordinates to compute the eye aspect ratio for both eyes
mouth = shape[mStart:mEnd]
leftEye = shape[lStart:lEnd]
rightEye = shape[rStart:rEnd]
nose = shape[nStart:nEnd]

# Because the frame is flipped, left is right and right is left
leftEye, rightEye = rightEye, leftEye

# Compute the mouth aspect ratio and average the eye aspect
# ratio together for both eyes
mar = mouth_aspect_ratio(mouth)
leftEAR = eye_aspect_ratio(leftEye)
rightEAR = eye_aspect_ratio(rightEye)
ear = (leftEAR + rightEAR) / 2.0
diff_ear = np.abs(leftEAR - rightEAR)
nose_point = (nose[3, 0], nose[3, 1])

# Compute the convex hull for the mouth and each eye, then
# visualize them
mouthHull = cv2.convexHull(mouth)
leftEyeHull = cv2.convexHull(leftEye)
rightEyeHull = cv2.convexHull(rightEye)
cv2.drawContours(frame, [mouthHull], -1, YELLOW_COLOR, 1)
cv2.drawContours(frame, [leftEyeHull], -1, YELLOW_COLOR, 1)
cv2.drawContours(frame, [rightEyeHull], -1, YELLOW_COLOR, 1)
for (x, y) in np.concatenate((mouth, leftEye, rightEye), axis=0):
    cv2.circle(frame, (x, y), 2, GREEN_COLOR, -1)

# Check to see if the eye aspect ratio is below the blink
# threshold, and if so, increment the blink frame counter
if diff_ear > WINK_AR_DIFF_THRESH:
    if leftEAR < rightEAR:
        if leftEAR < EYE_AR_THRESH:
            WINK_COUNTER += 1
            if WINK_COUNTER > WINK_CONSECUTIVE_FRAMES:
                pag.click(button='left')
                WINK_COUNTER = 0
    elif leftEAR > rightEAR:
        if rightEAR < EYE_AR_THRESH:
            WINK_COUNTER += 1
            if WINK_COUNTER > WINK_CONSECUTIVE_FRAMES:
                pag.click(button='right')
                WINK_COUNTER = 0
    else:
        WINK_COUNTER = 0
else:
    if ear <= EYE_AR_THRESH:
        EYE_COUNTER += 1
        if EYE_COUNTER > EYE_AR_CONSECUTIVE_FRAMES:
            SCROLL_MODE = not SCROLL_MODE
            # INPUT_MODE = not INPUT_MODE
            EYE_COUNTER = 0
    else:
        EYE_COUNTER = 0
        WINK_COUNTER = 0

# A sustained open mouth toggles input mode and anchors the
# nose point so a bounding box can be drawn around it
if mar > MOUTH_AR_THRESH:
    MOUTH_COUNTER += 1
    if MOUTH_COUNTER >= MOUTH_AR_CONSECUTIVE_FRAMES:
        INPUT_MODE = not INPUT_MODE
        # SCROLL_MODE = not SCROLL_MODE
        MOUTH_COUNTER = 0
        ANCHOR_POINT = nose_point
else:
    MOUTH_COUNTER = 0

if INPUT_MODE:
    cv2.putText(frame, "READING INPUT!", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, RED_COLOR, 2)
    x, y = ANCHOR_POINT
    nx, ny = nose_point
    w, h = 60, 35
    multiple = 1
    cv2.rectangle(frame, (x - w, y - h), (x + w, y + h), GREEN_COLOR, 2)
    cv2.line(frame, ANCHOR_POINT, nose_point, BLUE_COLOR, 2)

    dir = direction(nose_point, ANCHOR_POINT, w, h)
    cv2.putText(frame, dir.upper(), (10, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, RED_COLOR, 2)
    drag = 18
    if dir == 'right':
        pag.moveRel(drag, 0)
    elif dir == 'left':
        pag.moveRel(-drag, 0)
    elif dir == 'up':
        if SCROLL_MODE:
            pag.scroll(40)
        else:
            pag.moveRel(0, -drag)
    elif dir == 'down':
        if SCROLL_MODE:
            pag.scroll(-40)
        else:
            pag.moveRel(0, drag)

if SCROLL_MODE:
    cv2.putText(frame, 'SCROLL MODE IS ON!', (10, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, RED_COLOR, 2)

# cv2.putText(frame, "MAR: {:.2f}".format(mar), (500, 30),
#             cv2.FONT_HERSHEY_SIMPLEX, 0.7, YELLOW_COLOR, 2)
# cv2.putText(frame, "Right EAR: {:.2f}".format(rightEAR), (460, 80),
#             cv2.FONT_HERSHEY_SIMPLEX, 0.7, YELLOW_COLOR, 2)
# cv2.putText(frame, "Left EAR: {:.2f}".format(leftEAR), (460, 130),
#             cv2.FONT_HERSHEY_SIMPLEX, 0.7, YELLOW_COLOR, 2)
# cv2.putText(frame, "Diff EAR: {:.2f}".format(np.abs(leftEAR - rightEAR)), (460, 80),
#             cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

# Show the frame
cv2.imshow("Frame", frame)
key = cv2.waitKey(1) & 0xFF

# If the `Esc` key was pressed, break from the loop
if key == 27:
    break
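The snippet above relies on helper functions defined elsewhere in the project. A hedged sketch of how eye_aspect_ratio, mouth_aspect_ratio and direction might be implemented follows; the EAR formula is the common one from Soukupová and Čech's blink-detection work, but the exact mouth indices and the empty-string "no movement" return value are assumptions, not the project's verbatim code:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over the six eye landmarks."""
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def mouth_aspect_ratio(mouth):
    """Analogous ratio for the 20 mouth landmarks (these indices are assumptions)."""
    a = np.linalg.norm(mouth[13] - mouth[19])
    b = np.linalg.norm(mouth[14] - mouth[18])
    c = np.linalg.norm(mouth[15] - mouth[17])
    d = np.linalg.norm(mouth[12] - mouth[16])
    return (a + b + c) / (2.0 * d)

def direction(nose_point, anchor_point, w, h):
    """Map the nose offset from the anchor box to a cursor direction.
    Image y grows downward, so a positive dy means 'down'."""
    nx, ny = nose_point
    x, y = anchor_point
    if nx > x + w:
        return 'right'
    if nx < x - w:
        return 'left'
    if ny > y + h:
        return 'down'
    if ny < y - h:
        return 'up'
    return ''

# A symmetric open eye: horizontal span 4, two vertical spans of 2 -> EAR = 0.5
open_eye = np.array([[0, 0], [1, -1], [3, -1], [4, 0], [3, 1], [1, 1]], float)
print(eye_aspect_ratio(open_eye))           # 0.5
print(direction((100, 0), (0, 0), 60, 35))  # right
```

The EAR drops toward zero as the eyelids close, which is why a single fixed threshold (EYE_AR_THRESH) is enough to separate blinks from open eyes in the main loop.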

CHAPTER-7
TESTING
7.TESTING

7.1 Introduction
Software testing is an investigation conducted to provide stakeholders with information about the quality
of the product or service under test. Software testing can also provide an objective, independent view of the
software to allow the business to appreciate and understand the risks of software implementation. Test
techniques include, but are not limited to, the process of executing a program or application with the intent
of finding software bugs (errors or other defects). Testing involves the execution of a software component
or system component to evaluate one or more properties of interest. In general, these properties indicate the
extent to which the component or system under test:
• Meets the requirements that guided its design and development
• Responds correctly to all kinds of inputs
• Performs its functions within an acceptable time
• Is sufficiently usable
• Can be installed and run in its intended environments, and
• Achieves the general result its stakeholders desire.
As the number of possible tests for even simple software components is practically infinite, all software
testing uses some strategy to select tests that are feasible for the available time and resources. As a result,
software testing typically (but not exclusively) attempts to execute a program or application with the intent
of finding software bugs (errors or other defects). Software testing can provide objective, independent
information about the quality of software and the risk of its failure to users and/or sponsors. Software
testing can be conducted as soon as executable software (even if partially complete) exists. The overall
approach to software development often determines when and how testing is conducted. For example, in a
phased process, most testing occurs after system requirements have been defined and then implemented in
testable programs. In contrast, under an Agile approach, requirements, programming, and testing are often
done concurrently.
Testing Types
A software engineering product can be tested in one of two ways:
• Black Box Testing
• White Box Testing
Black box testing
Knowing the specified functions that a product has been designed to perform, tests determine whether each
function is fully operational.
White box testing
Knowing the internal workings of a software product, tests determine whether the internal operations
implementing the functions perform according to the specification and whether all the internal components
have been adequately exercised.
Testing Strategies
Four testing strategies that are often adopted by the software development team include:
• Unit Testing
• Integration Testing
• Validation Testing
• System Testing
Unit Testing
We adopt white box testing when using this technique. This testing was carried out on the individual
components of the software as they were designed. Each individual module was tested using this technique
during the coding phase [9]. Every component was checked to make sure that it adheres strictly to the
specifications spelt out in the data flow diagram and performs the purpose intended for it. All the names of
the variables were scrutinized to make sure that they are truly reflective of the elements they represent. All
the looping mechanisms were verified to ensure that they work as designed. Besides this, we traced through
the code manually to catch syntax errors and logical errors.
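As an illustration of this technique, a unit test for the project's blink-threshold logic might look like the following. This is a hedged sketch: is_blink and the 0.2 threshold are hypothetical names chosen for the example, not identifiers taken from the project code.

```python
# Hypothetical predicate under test: an eye counts as closed when its
# aspect ratio falls strictly below a fixed threshold.
EYE_AR_THRESH = 0.2

def is_blink(ear, thresh=EYE_AR_THRESH):
    return ear < thresh

# Unit tests exercising normal use and the boundary condition.
assert is_blink(0.05)       # clearly closed eye
assert not is_blink(0.35)   # clearly open eye
assert not is_blink(0.2)    # boundary value: exactly at the threshold counts as open
print("all unit tests passed")
```

Testing the exact boundary value is what catches off-by-one mistakes such as writing `<=` where `<` was intended.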
Integration Testing
After finishing the unit testing process, the next step is integration testing. In this process we focus on the
interfaces between components and their functionality as dictated by the DFD diagram. The bottom-up
incremental approach was adopted during this testing: low-level modules are integrated and combined as a
cluster before testing.
The black box testing technique was employed here. The interfaces between the components were tested
first. This allowed any wrong linkages or parameter passing to be identified early in the development
process, since a set of data can simply be passed in and the returned result checked against the accepted one.
Validation Testing
The system has been tested and implemented successfully, ensuring that all the requirements listed in the
software requirements specification are completely fulfilled. In case of erroneous input, corresponding
error messages are displayed.
System Testing
System testing is a series of different tests whose primary purpose is to fully exercise the computer-based
system. Although each test has a different purpose, together they verify that all system elements have been
properly integrated and perform their allocated functions. System testing also ensures that the project works
well in its environment. It traps errors and allows convenient processing of errors without exiting the
program abruptly.
Software testing is a critical element of software quality assurance and represents the ultimate review of
specification, design and coding. Test case design focuses on a set of techniques for the creation of test
cases that meet overall testing objectives. Planning and testing of a programming system involve
formulating a set of test cases, which are similar to the real data that the system is intended to manipulate.
Test cases consist of input specifications, a description of the system functions exercised by the input, and a
statement of the expected output.
In principle, testing of a program must be extensive. Every statement in the program should be exercised
and every possible path combination through the program should be executed at least once. Since this is
rarely practical, it is necessary to select a subset of the possible test cases and conjecture that this subset will
adequately test the program.
MODULE TESTING
To locate errors, each module is tested individually. This enables us to detect an error and correct it without
affecting any other module. Whenever a module does not satisfy its required function, it is corrected until it
produces the required result. All the modules are tested individually, bottom-up, starting with the smallest
and lowest-level modules and proceeding to the next level. For example, the job classification module is
tested separately with different jobs and their approximate execution times, and the result of the test is
compared with results prepared manually. In this system the resource classification and job scheduling
modules are tested separately and their corresponding results are obtained, which reduces the process
waiting time.
ACCEPTANCE TESTING
When the user finds no major problems with its accuracy, the system passes through a final acceptance test.
This test confirms that the system meets the original goals, objectives and requirements established during
analysis without actual execution, which eliminates wastage of time and money. Since acceptance testing
rests on the shoulders of users and management, once it is passed the system is finally acceptable and ready
for operation.
Guidelines for developing test cases
• Describe which feature or service your test attempts to cover.
• If the test case is based on a use case, it is a good idea to refer to the use case name.
• Remember that the use cases are the source of test cases. In theory the software is supposed to match the
use cases, not the reverse. As soon as you have enough use cases, go ahead and write the test plan for that
piece.
• Specify what you are testing and which particular feature. Then specify what you are going to do to test
the feature and what you expect to happen.
• Test the normal use of the object's methods. Test the abnormal but reasonable use of the object's methods.
• Test the abnormal and unreasonable use of the object's methods.
• Test the boundary conditions. Also specify when you expect error dialog boxes, when you expect some
default event, and when functionality is still being defined.
• Test objects' interactions and the messages sent among them. If you have developed sequence diagrams,
they can assist you in this process.
• When revisions have been made, document the cases so they become the starting bases for the follow-up
test.
Attempting to reach agreement on answers generally will raise other what-if questions. Add these to the list
and answer them; repeat the process until the list is stabilized, after which you need not add any more
questions.

7.2 Test Cases


Test Case 01: Load facial landmark detection dataset (Priority: High)
Description: Verify that face recognition is correct.
Steps: Run the project to check whether a face is recognized from the video.
Expected: The locations of the face and eyes must be detected.
Actual: The face and eyes are detected and landmarks are set.
Status: Passed

Test Case 02: Check whether the eyes' location is detected (Priority: High)
Description: From the face location, the eyes on the face must be detected.
Steps: Run the project on the live camera feed.
Expected: A pattern is drawn on the eyes.
Actual: Points are detected and the pattern is showing on the eyes.
Status: Passed

Test Case 03: Check mouse movement to the left (Priority: High)
Description: Check whether the mouse cursor moves left when the head moves left.
Steps: Run the project and move the head to the left.
Expected: The mouse cursor should move left and "left" should be displayed.
Actual: The "left" text is showing on the live video and the mouse moves left.
Status: Passed

Test Case 04: Check mouse movement to the right, up and down (Priority: High)
Description: Check whether the mouse cursor moves right, up and down with the corresponding head
movement.
Steps: Run the project and move the head right, up and down.
Expected: The mouse cursor should move right, up and down and the direction should be displayed.
Actual: The "right", "up" and "down" texts are showing on the live video and the mouse moves
accordingly.
Status: Passed

Test Case 05: Mouse left/right click (Priority: High)
Description: Check whether mouse clicks work with eye blinks.
Steps: Run the project; blink the right eye for a right click and the left eye for a left click.
Expected: A right-eye blink should trigger a right click and a left-eye blink a left click.
Actual: Both eye-blink click options are working.
Status: Passed

Test Case 06: Noisy records chart (Priority: Medium)
Description: Check whether the graph is displayed.
Steps: Observe whether the graph is displayed.
Expected: If the graph is not displayed, it does not show the variations between clean and noisy records.
Actual: Passed.
Status: Passed

Table 7.2: Test cases and status

CHAPTER-8
RESULTS
8.Results

Fig 8.1 Execution in command prompt

Fig 8.2 Interface

Fig 8.3 In Reading mode

Fig 8.4 Moving the cursor to the right

Fig 8.5 Moving the cursor to the left

Fig 8.6 Moving the cursor up

Fig 8.7 Moving the cursor down

Fig 8.8 Termination

CHAPTER 9
CONCLUSION
9.CONCLUSION
Eye gaze estimation is an interdisciplinary area of research and development which has received quite a lot
of interest from academic, industrial and general user communities in the last decades owing to the ease of
availability of computing and hardware resources and increasing demands for human computer interaction
methods. In this report, a detailed literature review is made of the recent advances in eye gaze research, and
information in statistical format is presented to highlight the diversity in aspects such as platforms, setups,
users, algorithms and performance measures across different branches of this field.
Future Enhancements:
It is not possible to develop a system that meets all the requirements of the user, since user requirements
keep changing as the system is being used. Some of the future enhancements that can be made to this system are:
• As the technology emerges, it is possible to upgrade the system and can be adaptable to desired
environment.
• To address future security issues, security can be improved using emerging technologies like single
sign-on.
Program Outcomes:
PO1 PO2 PO3 PO4 PO7 PO8 PO9 PO10 PO11 PO12

3 3 3 2 2 2 3 1 1 2

Program Specific Outcomes:


PSO1 PSO2 PSO3

3 3 2

63
CHAPTER-10
BIBLIOGRAPHY
BIBLIOGRAPHY

1. Yogesh Singh, Software Testing, Second Edition, Oxford University Press, 2010.
2. Vamsi Kurama, Python Programming, Oxford University Press, 2017.
3. Grady Booch, James Rumbaugh, Ivar Jacobson, The Unified Modeling Language User Guide, Second
Edition.
4. Tom M. Mitchell, Machine Learning, (India) Edition, McGraw Hill Education Private Limited, 2013.

CHAPTER-11
REFERENCES
REFERENCES

1. Abdallahi Ould Mohamed, Matthieu Perreira da Silva, Vincent Courboulay. "A history of eye gaze
tracking". Rapport Interne, 2007. HAL Id: hal-00215967. Available from:
https://hal.archives-ouvertes.fr/hal-00215967

2. Duchowski, A.T. Behavior Research Methods, Instruments, & Computers (2002) 34: 455.

3. C. H. Morimoto and M. R. M. Mimica, “Eye gaze tracking techniques for interactive applications,”
Comput. Vis. Image Underst., vol. 98, no. 1, pp. 4–24, 2005.

4. C. H. Morimoto, A. Amir, and M. Flickner, “Detecting eye position and gaze from a single camera
and 2 light sources,” Int. Conf. Pattern Recognit., vol. 4, pp. 314–317, 2002.

5. P. M. Corcoran, F. Nanu, S. Petrescu and P. Bigioi, "Real-time eye gaze tracking for gaming design
and consumer electronics systems," in IEEE Transactions on Consumer Electronics, vol. 58, no. 2, pp.
347-355, May 2012.

6. D. W. Hansen, J. S. Agustin, and A. Villanueva, "Homography Normalization for Robust Gaze
Estimation in Uncalibrated Setups," Proc. 2010 Symp. Eye-Tracking Res. & Appl., pp. 13–20, 2010.
