Face Attendance
ABSTRACT
This document shows how we can implement image-processing algorithms for face detection and recognition to build a system that detects and recognizes the frontal faces of students in a classroom and records their attendance. “A face is the front part of a person’s head from the forehead to the chin, or the corresponding part of an animal” (Oxford Dictionary). In human interactions, the face is the most important factor, as it contains important information about a person or individual. All humans have the ability to recognize individuals from their faces. The proposed solution is to develop a working prototype of a system that will facilitate class control for Kingston University lecturers in a classroom by detecting the frontal faces of students in a picture taken in the classroom. The second part of the system will also be able to perform facial recognition against a small database.
In recent years, research has been carried out and face recognition and detection systems have been developed. Some of these are used on social media platforms, in banking apps and by government offices, e.g. the Metropolitan Police, Facebook, etc.
S.no Particulars
1 Introduction
2 System analysis
4 Requirement specification
5.1 Introduction
5.2 Data flow diagrams
5.3 UML diagrams
6 Implementation
6.1 Source Code
7 System testing
7.1 Introduction
7.2 Strategic approach
7.3 Unit testing
8 Output screens
9 Conclusion
10 Bibliography
Introduction
In face detection and recognition systems, the flow process starts with being able to detect and recognize frontal faces from an input device, e.g. a mobile phone camera. In today’s world, it has been shown that students engage better during lectures when there is effective classroom control, so the need for a high level of student engagement is very important. An analogy can be made with that of pilots, as described by Mundschenk et al. (2011, p.101): “Pilots need to keep in touch with an air traffic controller, but it would be annoying and unhelpful if they called in every 5 minutes”. In the same way, students need to be continuously engaged during lectures, and one of the ways is to recognize and address them by their names. A system like this will therefore improve classroom control. In my own view, based on experience during my time as a teacher, I realized that calling a student by his or her name gave me more control of the classroom and drew the attention of the other students to engage during lectures.
Face detection and recognition are not new in the society we live in. The capacity of the human mind to recognize particular individuals is remarkable. It is amazing how the human mind can persist in identifying certain individuals even through the passage of time, despite slight changes in appearance.
Face recognition processes images and identifies one or more faces in an image by analyzing patterns and comparing them. This process uses algorithms which extract features and compare them against a database to find a match. Furthermore, in one of the most recent studies, Nebel (2017, p.1) suggests that DNA techniques could transform facial recognition technology: video analysis software could be improved thanks to advances in DNA analysis, with camera-based surveillance software analyzing a video as a scene that evolves the same way a DNA sequence does, in order to detect and recognize human faces.
In this chapter, a brief overview of studies of face detection and recognition is introduced, alongside some popular face detection and recognition algorithms. This gives a general idea of the history of the systems and approaches that have been used so far.
Study of System:
Most face recognition systems rely on face recognition algorithms to complete the following functional tasks, as suggested by Shang-Hung Lin (2000, p.2).
Fig 2.1 Face recognition system framework as suggested by Shang-Hung Lin (2000, p.2).
The figure shows a simplified diagram of the face recognition framework from the study by Shang-Hung Lin (2000).
In the figure, face detection (the face detector) detects any face present in the given image or input video. Face localization finds where the faces are located in the image/video, using bounding boxes. Face alignment finds a face and aligns landmarks such as the nose, eyes, chin and mouth for feature extraction. Feature extraction extracts key features, such as the eyes, nose and mouth, for tracking. Feature matching and classification matches a face against a trained data set of pictures from a database of about 200 pictures. Face recognition gives a positive or negative output for a recognized face, based on feature matching and classification against a referenced facial image.
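As a rough sketch only (using the open-source face_recognition library purely for illustration, not the system from Lin's paper; the image file names are assumptions), the stages above map onto a few calls:

import face_recognition

# Detection and localization: one bounding box (top, right, bottom, left) per face
image = face_recognition.load_image_file("classroom.jpg")  # assumed classroom photo
boxes = face_recognition.face_locations(image)

# Alignment and feature extraction: a 128-dimensional encoding per detected face
encodings = face_recognition.face_encodings(image, known_face_locations=boxes)

# Matching and recognition against one known student's reference encoding
known = face_recognition.face_encodings(
    face_recognition.load_image_file("student_01.jpg"))[0]  # assumed reference image
for enc in encodings:
    match = face_recognition.compare_faces([known], enc)[0]
    print("recognized" if match else "unknown")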
Face detection is the process of locating a face in a digital image by special computer software built for this purpose. Feraud et al. (2000, p.77) describe face detection thus: “To detect a face in an image means to find its position in the image plane and its size or scale”.
As Figure 2.2 shows, the detection of a face in a digital image is a prerequisite to any further processing in face recognition or any face-processing software.
In early years, face detection algorithms focused mainly on the frontal part of the human face (Srinivasan, Golomb and Martinez, 2016, p.4434). However, in recent years, Cynganek (2013, p.346) suggests that newer algorithms take different perspectives into consideration for face detection. Researchers have used such systems, but the biggest challenge that has been faced is to make a system detect faces irrespective of different illumination conditions. This is based on a study by Castrillón et al. (2011, p.483) on the Yale database, which contains higher-resolution images of 165 frontal faces. Face detection is often classified into different methods. In order to tackle the first major problem of the project (detecting students' faces), a wide range of techniques has been researched. These face detection techniques/methodologies have been proposed by many different researchers and are often classified into major categories of different approaches. In this paper, we will look at some reviews and major categories of classification by different groups of researchers and relate them to the system.
Figure 2.4 Taken from Detecting Faces in Images: A Survey (Yang et al. (2002, p.36)).
Yang et al. subdivided these resolution hierarchies into three levels, with level 1 being the lowest resolution, which only searches for face candidates that are then processed further at finer resolutions. At level 2, the face candidates from level 1 are used alongside local histogram equalization followed by edge detection. At level 3, the surviving face candidate regions are checked against a set of rules responding to facial features such as the mouth and eyes. They conducted their experiment on 60 images. Their system located faces in 50 of these images, and 28 images gave false alarms, giving a success rate of 83.33% and a false alarm rate of 46.66%. Feature-based methods use algorithms that look for structural features, regardless of pose, viewpoint or lighting conditions, to find faces. Template matching methods use stored standard facial patterns and correlate an input image with the stored patterns to compute a detection. Appearance-based methods use a set of training images to learn templates and capture the representative variability of facial appearance. Furthermore, Yang et al. also carried out their experiments on standard database sets, which are shown in Table 2.1 and Table 2.2 (Yang et al., 2002, pp.53-54) below, with the detection rate results and false detection rates.
Table 2.1 showing standard database test set for Face Detection. Yang et al. (2002, p.53).
Table 2.2 Results of Two Image Test Sets experimented. Yang et al. (2002, p.54).
As Table 2.2 summarizes, the experimental results cover different training sets with different tuning parameters, which have a direct impact on training performance. For example, dimensionality reduction is carried out to improve computational efficiency and detection efficacy, with image patterns projected onto a lower-dimensional space to form a discriminant function for classification. The training and execution times and the number of scanning windows in these experiments also influenced the performance in some way. Hjelmås and Low (2001) classify face detection methodologies into two major categories. The first is image-based approaches, which are further sub-categorized into linear subspace methods, neural networks and statistical approaches.
Image-based approaches: most of the recent feature-based attempts in the same study by Hjelmås and Low (2001, p.252) have improved the ability to cope with variations, but are still limited to heads, shoulders and parts of frontal faces. There is therefore a need for techniques that cope with hostile scenarios, such as detecting multiple faces in a cluttered scene, e.g. against a clutter-intensive background. Furthermore, this method ignores basic knowledge of the face in general and instead uses face patterns from a given set of images; this is mostly known as the training stage of the detection method. From this training stage, the system is able to detect similar face patterns in an input image. A decision on the existence of a face is then established by comparing the distance between the pattern from the input image and the training images, using a 2D intensity array extracted from the input image. Most image-based approaches use window-scanning techniques for face detection.
A window-scanning algorithm searches for possible face locations at all scales. In other research carried out on this method, which depends on window-scanning algorithms, Ryu et al. (2006) experimented with the scanning-window techniques discussed by Hjelmås and Low (2001, p.252) in their own system. They went further and experimented with a system based on a combination of various classifiers, for a more reliable result compared with a single classifier. They designed multiple face classifiers which can take different representations of face patterns. They used three classifiers. The first is a gradient feature classifier, which contains the integral information of the pixel distribution and returns a certain invariability among facial features. The second is a texture feature classifier, which extracts texture features by correlation (using the joint probability of occurrence of specified pixels), variance (measuring the amount of local variation in an image) and entropy (measuring image disorder). The third is a pixel intensity feature classifier, which extracts pixel intensity features of the eye, nose and mouth regions for determining the face pattern. They further used a coarse-to-fine classification approach with their classifiers for computational efficiency. Based on 1056 images obtained from the AT&T, BioID, Stirling and Yale datasets, they achieved the results presented in Table 2.4 and Table 2.5 (Ryu et al., 2006, p.489). The first face classifier in their experiment, with respect to shifts in both the x and y directions, achieved a detection rate of 80% when images were shifted within 10 pixels in the x direction and 4 pixels in the y direction. The second and third classifiers showed detection rates of over 80% when images were shifted by 2 pixels in the x and y directions respectively.
Table 2.4: Results showing Exhaustive full scanning method and Proposed scanning method. Ryu et
al (2006, p.489)
Table 2.5: Performance Comparison by different Researchers and Proposed System by Ryu et al
(2006, p.490)
As seen in Table 2.5, their system achieved a detection rate between 93.0% and 95.7%. Rowley et al. (1998), in their study of neural network-based face detection, experimented with a system which applies a set of neural network-based filters to an image and then uses an arbitrator to combine the outputs. They tested their system against two databases of images, the CMU database (made up of 130 images) and the FERET database, achieving a detection rate of 86.2% with 23 false detections. Feraud et al. (2001) also experimented with a neural network-based face detection technique. They used a combination of different components in their system (a motion filter, a colour filter, a prenetwork filter and a large neural network). The prenetwork filter is a single multilayer perceptron with 300 inputs, corresponding to the extracted sizes of the subwindows, a hidden layer of 20 neurons, and a face/non-face output [reference]. These components, combined with the neural network, achieved an 86.0% detection rate with 8 false detections, based on a face database of 8000 images from the Sussex Face Database and the CMU Database, which is further subdivided into subsets of equal size corresponding to different views (page 48). Table 2.6 and Table 2.7 (Feraud et al., 2001, p.49) below show the experimental results obtained by these researchers.
Table 2.6: Showing Results of Sussex Face Database. Feraud et al. (2001, p.49)
Table 2.7: Showing Results of CMU Test Set A. Feraud et al. (2001, p.49)
Wang et al. (2016), in their study in support of neural network face detectors, used a multi-task convolutional neural network-based face detector, which relies directly on learning features from images instead of on hand-crafted features, hence its ability to differentiate faces from uncontrolled backgrounds or environments. The system they experimented with uses a Region Proposal Network, which generates the candidate proposals, and a CNN-based detector for the final detection output. They experimented on 183,200 images from their database and used the AFLW dataset for validation. Their face detector was evaluated on the AFW, FDDB and Pascal Faces datasets respectively and achieved a 98.1% face detection rate. The authors did not reveal all the facts leading to the development of the system, and I have limited time to implement this in OpenCV. Table 2.8 (Wang et al., 2016, p.479) shows the different comparisons of their system against other state-of-the-art detectors. Wang et al. (2016, p.480) claim that their system (FaceHunter) performs better than all other structured models. However, this cannot be independently verified, as the system was commercialised; one cannot conclude whether this was for marketing purposes or a complete solution to the problem, and I have limited time to implement it.
The other major category is feature-based approaches, which depend on extracted features that are not affected by variations in lighting conditions and pose. Hjelmås and Low (2001, p.241) further clarify that “visual features are organised into a more global concept of face and facial features using information of face geometry”. In my own opinion, this technique will be slightly difficult to use for images containing facial features against uncontrolled backgrounds. The technique relies on feature analysis and feature derivation to gain the required knowledge about the face to be detected; the features extracted are skin colour, face shape, eyes, nose and mouth. On the other hand, in another study, Mohamed et al. (2007, p.2) suggest that the “human skin colour is an effective feature used to detect faces, although different people have different skin colour, several studies have shown that the basic difference based on the intensity rather than their chrominance”. The texture of the human skin can therefore be separated from that of other objects.
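As an illustration of chrominance-based skin segmentation (a sketch only, not code from Mohamed et al.; the YCrCb thresholds below are commonly quoted values and the image path is an assumption):

import cv2
import numpy as np

img = cv2.imread("classroom.jpg")               # assumed input image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # separate luma (Y) from chroma (Cr, Cb)
# Threshold only the chrominance channels; Y (intensity) is left unconstrained,
# reflecting the claim that skin differs mainly in intensity, not chrominance
mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
skin = cv2.bitwise_and(img, img, mask=mask)     # keep only the skin-coloured pixels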
Feature-based methods for face detection rely on such features; some depend on finding edges and then grouping the edges to detect faces. Furthermore, Sufyanu et al. (2016) suggest that a good extraction process involves feature points chosen for the reliability of their automatic extraction and their importance for face representation. Most geometric feature-based approaches use the active appearance model (AAM), as suggested by (S., N.S. and M., M. 2010). This allows localization of facial landmarks in different ways, extracting the shape of facial features and the movement of these features as an expression evolves.
Hjelmås and Low (2001, p.241) further place the feature-based approach into the sub-categories of low-level analysis (edges, gray levels, colour, motion and generalized measures) and active shape models (snakes, deformable templates and point distribution models (PDMs)).
Figure 2.5 shows the different approaches for face detection reported in the study by Hjelmås and Low (2001), which can be compared with Figure 2.6, showing the exact same classification by Modi and Macwan (2014, p.11108).
Figure 2.5 Face detection, classified into different methodologies. Hjelmås and Low (2001, p.238)
Figure 2.6 Various face detection methodologies. Modi and Macwan (2014, p.11108).
Hjelmås and Low (2001, p.240), in their study, show an experiment based on an edge-detection approach to face detection on a set of 60 images of 9 faces with complex backgrounds, which correctly detected 76% of faces with an average of two false alarms per image. Nehru and Padmavathi (2017, p.1), in their study, experimented with face detection based on the Viola-Jones algorithm on a dataset of dark- and coloured-skinned men, to support their statement that “It is possible to detect various parts of the human body based on the facial features present”, such as the eyes, nose and mouth. In such cases, systems have to be trained properly to be able to distinguish features like the eyes, nose and mouth when a live dataset is used. The Viola-Jones algorithm detects faces as seen in the images in Figure 2.7, which shows dark- and coloured-skinned faces detected accurately.
Figure 2.7 Face detection in dark- and coloured-skinned men. Nehru and Padmavathi (2017, p.1).
Also, in support of the claim made by Nehru and Padmavathi (2017), the research carried out by Viola and Jones to develop the Viola-Jones face detection algorithm has had the greatest impact over the past decade. As suggested by Mayank Chauhan et al. (2014, p.1615), Viola-Jones face detection is widely used in genuine applications such as digital cameras and digital photo-managing software. This claim is based on a study by Viola and Jones (2001). Table 2.9 gives a summary of the results obtained by these experts, showing the numbers of positive and false detections on the MIT and CMU database set of 130 images and 507 faces.
Table 2.9: Various Detection rates by different algorithms showing positive and false detection rates.
Viola and Jones (2001, pI-517).
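The Viola-Jones detector is what OpenCV's Haar cascade classifier implements, so the technique can be tried in a few lines (a sketch only; the image path is an assumption, and the cascade file ships with OpenCV):

import cv2

# Pre-trained Haar cascade (Viola-Jones style) bundled with OpenCV
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("classroom.jpg")             # assumed input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the detector works on grayscale
# scaleFactor shrinks the search window 10% per pyramid step;
# minNeighbors trades false alarms against missed faces
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("faces found:", len(faces))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)  # draw a bounding box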
Wang et al. (2015, p.318) state that “the process of searching a face is called face detection. Face detection is to search for faces with different expressions, sizes and angles in images in possession of complicated light and background and feeds back parameters of face”. In their study, they tested face detection based on two modules: one module uses a combination of two algorithms (PCA with SVM), and the other is based on a real-time field-programmable gate array (FPGA). With these they concluded with a face detection accuracy of 89%. Table 2.10 is a screenshot taken from their paper showing the experimental results of the two units combined, used to investigate the accuracy of the system.
Another category is learning-based methods, which include machine learning techniques that extract discriminative features from a training dataset before detection. Some well-known classifiers used for face detection, based on a study by Thai et al. (2011), are Canny, Principal Component Analysis (PCA), Support Vector Machines (SVM) and Artificial Neural Networks (ANN). Although used for facial expression classification, these algorithms are also used in the initial stage of their experiment, which is the detection phase. Their experiment achieved the results shown in Table 2.11, a screenshot from Thai et al. (2011, p.392).
The overall objective of the face detection part of this project is to find out whether any faces exist in the input image and, if any are present, to return the location (as bounding boxes) and extent of each face, counting the number of faces detected. It is a challenge for this project that, due to variations in location, scale, pose orientation, facial expression, illumination or lighting conditions, and appearance features such as facial hair and makeup, it will be difficult to achieve an excellent result. However, the performance of the system will be evaluated, taking into consideration the learning time, execution time, number of samples required for training, and the ratio between the detection rate and false detections. Table 2.12 below shows experiments from different researchers. They have used image datasets of different sizes; some have used combinations of different algorithms, applied other methods such as colour filtering, and used different training sets to obtain their results. However, we can conclude that the Viola-Jones algorithm, which on its own classifies images based on local features only, can still detect faces with very high accuracy, and more rapidly than pixel-based systems (Viola and Jones, 2001, p.139).
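The two headline figures used throughout this chapter are simple ratios; the helper below (my own illustration, not code from any of the surveyed papers) makes the evaluation explicit, using the 60-image experiment reported earlier as a worked example:

def detection_metrics(num_images, num_faces_found, num_false_alarms):
    # Detection rate and false alarm rate, each as a percentage of the test set
    detection_rate = 100.0 * num_faces_found / num_images
    false_alarm_rate = 100.0 * num_false_alarms / num_images
    return detection_rate, false_alarm_rate

# 60 test images: faces located in 50 of them, false alarms in 28
print(detection_metrics(60, 50, 28))  # approximately (83.33, 46.67)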
SYSTEM ANALYSIS
There are various software development approaches defined and designed which are used during the development process of software; these approaches are also referred to as "Software Development Process Models". Each process model follows a particular life cycle in order to ensure success in the process of software development.
Requirements:
Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements. Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are the general questions that get answered during the requirements gathering phase. This produces a large list of the functionality that the system should provide, describing the functions the system should perform, the business logic that processes data, what data is stored and used by the system, and how the user interface should work. The overall result describes the system as a whole and what it does, not how it will actually do it.
Design
The software system design is produced from the results of the requirements phase. Architects have the ball in their court during this phase, and this is the phase on which their focus lies. This is where the details of how the system will work are produced. Architecture (including hardware and software), communication, and software design (UML is produced here) are all part of the deliverables of the design phase.
Implementation
Code is produced from the deliverables of the design phase during implementation, and this is the longest phase of the software development life cycle. For a developer, this is the main focus of the life cycle, because this is where the code is produced. Implementation may overlap with both the design and testing phases. Many tools exist (CASE tools) to automate the production of code using information gathered and produced during the design phase.
Testing
During testing, the implementation is tested against the requirements to make sure that the product actually solves the needs addressed and gathered during the requirements phase. Unit tests and system/acceptance tests are done during this phase: unit tests act on a specific component of the system, while system tests act on the system as a whole.
So, in a nutshell, that is a very basic overview of the general software development life cycle model. Now let us delve into some of the traditional and widely used variations.
The operational and generic user interface helps users carry out transactions on the system through the existing data and required services. The operational user interface also helps ordinary users manage their own information in a customized manner, as per the assisted flexibilities.
SDLC METHODOLOGIES
This document plays a vital role in the software development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by the developers and will be the baseline during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.
SPIRAL MODEL
The Spiral Model was defined by Barry Boehm in his 1988 article, “A Spiral Model of Software Development and Enhancement”. This model was not the first to discuss iterative development, but it was the first to explain why the iteration matters.
As originally envisioned, the iterations were typically 6 months to 2 years long. Each
phase starts with a design goal and ends with a client reviewing the progress thus far.
Analysis and engineering efforts are applied at each phase of the project, with an eye
toward the end goal of the project.
At the customer's option, the entire project can be aborted if the risk is deemed too great. Risk factors might involve development cost overruns, operating-cost miscalculation, or any other factor that could, in the customer's judgment, result in a less-than-satisfactory final product.
The preceding steps are iterated until the customer is satisfied that the
refined prototype represents the final product desired.
Software Requirements:
• OS: Windows
• PyCharm IDE
Functional Requirements:
• The user turns on the camera.
Non-Functional Requirements
1. Secure access to the camera and long battery life.
2. 24 X 7 availability.
Technical Feasibility
Operational Feasibility
Economical Feasibility
The technical issues usually raised during the feasibility stage of the investigation include the following: proposed projects are beneficial only if they can be turned into an information system that will meet the organization's operating requirements.
The operational feasibility aspects of the project are to be taken as an important part of the project implementation. Some of the important issues raised to test the operational feasibility of a project include the following: a well-planned design would ensure the optimal utilization of the computer resources and would help in the improvement of performance status. The computerized system takes care of the existing system's data flow and procedures completely, and should generate all the reports of the manual system besides a host of other management reports.
User module:
The user module has only the camera in front of the user, which continuously records video and analyses it to check whether the user's eyes are closed or not. If the user's eyes stay closed for 10 seconds, the alert system activates to alert the user.
Language: Python
Artificial Intelligence
Facial Recognition
Machine Learning libraries
IMPLEMENTATION (PYTHON):
What Is A Script?
Basically, a script is a text file containing the statements that comprise a Python
program. Once you have created the script, you can execute it over and over
without having to retype it each time.
Just about any text editor will suffice for creating Python script files.
Script:
Scripts are distinct from the core code of the application, which is usually written in a different language, and are often created or at least modified by the end user. Scripts are often interpreted from source code or byte code, whereas the applications they control are traditionally compiled to native machine code.
Program:
The program has an executable form that the computer can use directly to
execute the instructions.
The same program in its human-readable source code form is the form from which executable programs are derived (e.g., compiled).
Python
What is Python? Chances are you are asking yourself this. You may have found this book because you want to learn to program but don't know anything about programming languages. Or you may have heard of programming languages like C and want to know what Python is and how it compares to the “big name” languages. Hopefully I can explain it for you.
Python concepts
If you're not interested in the hows and whys of Python, feel free to skip to the
next chapter. In this chapter I will try to explain to the reader why I think Python is
one of the best languages available and why it’s a great one to start programming
with.
Python is Interactive − You can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
History of Python
Python was developed by Guido van Rossum in the late eighties and early
nineties at the National Research Institute for Mathematics and Computer
Science in the Netherlands.
Python is derived from many other languages, including ABC, Modula-3, C, C++,
Algol-68, SmallTalk, and Unix shell and other scripting languages.
Python is copyrighted. Like Perl, Python source code is available under an open-source licence (the Python Software Foundation License, which is GPL-compatible).
Python is now maintained by a core development team, although Guido van Rossum still holds a vital role in directing its progress.
Python Features
Easy-to-read − Python code is more clearly defined and visible to the eyes.
A broad standard library − the bulk of Python's library is very portable and cross-platform compatible on UNIX, Windows, and Macintosh.
Portable − Python can run on a wide variety of hardware platforms and has
the same interface on all platforms.
Apart from the above-mentioned features, Python has a big list of good features,
few are listed below −
It provides very high-level dynamic data types and supports dynamic type
checking.
Dynamic vs Static
For example, in C if you had a variable that was to contain the price of something,
you would have to declare the variable as a “float” type.
This tells the compiler that the only data that can be used for that variable must
be a floating point number, i.e. a number with a decimal point.
If any other data value was assigned to that variable, the compiler would give an
error when trying to compile the program.
Python, however, doesn’t require this. You simply give your variables names and
assign values to them. The interpreter takes care of keeping track of what kinds of
objects your program is using. This also means that you can change the size of the
values as you develop the program. Say you have another decimal number (a.k.a.
a floating point number) you need in your program.
With a static typed language, you have to decide the memory size the variable can
take when you first initialize that variable. A double is a floating point value that
can handle a much larger number than a normal float (the actual memory sizes
depend on the operating environment).
If you declare a variable to be a float but later on assign a value that is too big to
it, your program will fail; you will have to go back and change that variable to be a
double.
With Python, it doesn’t matter. You simply give it whatever number you want
and Python will take care of manipulating it as needed. It even works for derived
values.
For example, say you are dividing two numbers, one a floating point number and one an integer. Python realizes that it is more accurate to keep track of decimals, so it automatically calculates the result as a floating point number.
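A minimal sketch of this behaviour (the variable names are illustrative):

price = 10       # an int; no type declaration needed
price = 10.99    # rebinding the same name to a float is fine; the interpreter keeps track
big = 10 ** 100  # Python integers grow as needed; there is no separate "double" type to declare
print(7 / 2.0)   # 3.5 -- dividing an integer by a float keeps the decimals automatically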
Python's arbitrary-precision integers also make classic number-theoretic computations straightforward. For example, the Euclidean algorithm for the greatest common divisor is the sequence of divisions
u_0 = q_0 u_1 + u_2
u_1 = q_1 u_2 + u_3
...
u_{n-1} = q_{n-1} u_n + u_{n+1},
where 0 = u_{n+1} < u_n < ... < u_2 < u_1. Then u_n = gcd(u_0, u_1).
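The same recurrence written as a short Python function (a standard implementation, not code from this project):

def gcd(u0, u1):
    # Each step replaces (u0, u1) with (u1, remainder), mirroring u_{k-1} = q_{k-1} u_k + u_{k+1}
    while u1 != 0:
        u0, u1 = u1, u0 % u1  # the remainder strictly decreases, so the loop terminates
    return u0

print(gcd(252, 198))  # 18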
AI can be classified in any number of ways; there are two main types of classification.
Type 1:
Strong AI: machines that can actually think and perform tasks on their own, just like a human being. There are no proper existing examples of this, but some industry leaders are very keen on getting close to building a strong AI, which has resulted in rapid progress.
Type 2 (based on functionalities):
Reactive Machines: this is one of the basic forms of AI. It has no memory of the past and cannot use past information to inform future actions. Example: the IBM chess program that beat Garry Kasparov in the 1990s.
There are many ways AI can be achieved; the most important among them are as follows:
Vision: this can be described as the field which enables machines to see. Machine vision captures and analyses visual information using a camera, analog-to-digital conversion, and digital signal processing. It can be compared to human eyesight, but it is not bound by human limitations, which can enable it to see through walls (now that would be interesting if we could have implants that let us see through walls). It is usually achieved through machine learning to get the best possible results, so we could say that these two fields are interlinked.
The phrase “human error” was born because humans make mistakes from time to time. Computers, however, do not make these mistakes if they are programmed properly. With artificial intelligence, decisions are taken from previously gathered information by applying a certain set of algorithms. Errors are thus reduced, and the chance of reaching accuracy with a greater degree of precision becomes a possibility.
Example: have you heard about the Chernobyl nuclear power plant explosion in Ukraine? At that time there were no AI-powered robots which could help to minimise the effect of the radiation by controlling the fire in its early stages, as any human who went close to the core was dead in a matter of minutes. They eventually poured sand and boron from helicopters from a distance.
3) Available 24×7
An average human will work for 4-6 hours a day, excluding breaks. Humans are built in such a way that they need some time out to refresh themselves and get ready for a new day of work, and they even have weekly offs to balance their work life and personal life. But using AI we can make machines work 24×7 without any breaks, and they do not even get bored, unlike humans.
Example: educational institutes and helpline centres receive many queries and issues which can be handled effectively using AI.
5) Digital Assistance
Some highly advanced organizations use digital assistants to interact with users, which saves the need for human resources. Digital assistants are also used on many websites to provide the things users want. We can chat with them about what we are looking for. Some chatbots are designed in such a way that it becomes hard to determine whether we are chatting with a chatbot or a human being.
Example: we all know that organizations have a customer support team which needs to clarify the doubts and queries of customers. Using AI, organizations can set up a voicebot or chatbot which can help customers with all their queries. We can see that many organizations have already started using them on their websites and mobile applications.
6) Faster Decisions:
8) New Inventions:
AI is powering many inventions in almost every domain which will help
humans solve the majority of complex problems.
As every bright side has a darker side, artificial intelligence also has some disadvantages. Let us look at some of them.
1) High Costs of Creation:
As AI is updated every day, the hardware and software need to be updated over time to meet the latest requirements. Machines need repair and maintenance, which come at considerable cost. Creating them requires huge costs, as they are very complex machines.
2) Making Humans Lazy:
AI is making humans lazy by automating the majority of the work with its applications. Humans tend to get addicted to these inventions, which can cause a problem for future generations.
3) Unemployment:
As AI replaces the majority of repetitive tasks and other work with robots, human involvement is decreasing, which will cause a major problem for employment standards. Every organization is looking to replace its minimally qualified workers with AI robots which can do similar work more efficiently.
4) No Emotions
There is no doubt that machines are much better when it comes to working efficiently, but they cannot replace the human connection that makes a team. Machines cannot develop a bond with humans, which is an essential attribute when it comes to team management.
The obvious question that we need to address at this point is why we should choose Python for AI over the others.
Python requires the least code among its peers, in fact as little as one fifth of the amount compared to other OOP languages. No wonder it is one of the most popular languages in the market today.
Python has prebuilt libraries like NumPy for scientific computation, SciPy for advanced computing and PyBrain for machine learning, making it one of the best languages for AI.
Python is the most flexible of them all, with options to choose between an OOP approach and scripting. You can also use the IDE itself to check most code, which is a boon for developers struggling with different algorithms.
Let us look at why Python is used for machine learning. Python is one of the most popular programming languages used by developers today. Guido van Rossum created it in 1991, and ever since its inception it has been one of the most widely used languages, along with C++, Java, etc.
In our endeavour to identify the best programming language for AI and neural networks, Python has taken a big lead. Let us look at why artificial intelligence with Python is one of the best ideas under the sun.
TKINTER
Tkinter is a graphical user interface (GUI) module for Python; with it you can make desktop apps. You can make windows, buttons, and show text and images, amongst other things.
Tk and Tkinter apps can run on most Unix platforms. They also work on Windows and Mac OS X.
The module Tkinter is an interface to the Tk GUI toolkit.
Example:
Tkinter module
This example opens a blank desktop window. The tkinter module is part of the
standard library.
To use tkinter, import the tkinter module.
from tkinter import Tk

root = Tk()      # create a blank top-level window
root.mainloop()  # start the event loop so the window stays open
PYGAME
Many times people like to visualize the programs they are creating, as it can help them to learn programming logic quickly. Games are fantastic for this, as you are specifically programming everything you see.
SCIPY
SciPy is a free and open-source Python library used for scientific computing and
technical computing.
SciPy builds on the NumPy array object and is part of the NumPy stack, which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.
SciPy is also a family of conferences for users and developers of these tools: SciPy (in the United States), EuroSciPy (in Europe) and SciPy.in (in India).
Enthought originated the SciPy conference in the United States and continues to
sponsor many of the international conferences as well as host the SciPy website.
The SciPy library is currently distributed under the BSD license, and its
development is sponsored and supported by an open community of developers.
It is also supported by NumFOCUS, a community foundation for supporting
reproducible and accessible science.
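As a small illustration of the kind of numerical routine SciPy adds on top of NumPy (the 128-dimensional vectors here are hypothetical stand-ins for face feature encodings):

import numpy as np
from scipy.spatial import distance

a = np.random.rand(128)  # hypothetical feature vector for face A
b = np.random.rand(128)  # hypothetical feature vector for face B
print(distance.euclidean(a, b))  # smaller distance = more similar features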
DLIB
DLib's machine learning toolkit includes, among other things:
SVMs,
K-Means clustering,
Bayesian Networks,
Threading,
Networking,
Numerical Algorithms,
Image Processing.
DLib includes extensive unit-testing coverage and examples of using the library. Every class and function in the library is documented; this documentation can be found on the library's home page. DLib provides a good framework for developing machine learning applications in C++.
DLib is much like DMTL in that it provides a generic high-performance machine learning toolkit with many different algorithms, but DLib is more recently updated and has more examples. DLib also contains much more supporting functionality.
What makes DLib unique is that it is designed for both research use and
creating machine learning applications in C++ and Python.
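A minimal sketch of DLib's Python face detection entry point (the image path is an assumption):

import cv2
import dlib

detector = dlib.get_frontal_face_detector()  # DLib's built-in HOG-based frontal face detector
img = cv2.imread("classroom.jpg")            # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
rects = detector(gray, 1)                    # 1 = upsample once to find smaller faces
for r in rects:
    print(r.left(), r.top(), r.right(), r.bottom())  # bounding box coordinates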
Imutils
Translation
Translation is the shifting of an image along the x or y direction (or both). To translate an image in OpenCV you need to supply the (x, y)-shift, denoted (t_x, t_y), to construct the translation matrix M:
M = [ 1  0  t_x
      0  1  t_y ]
From there, you would apply the cv2.warpAffine function. Instead of manually constructing the translation matrix M and calling cv2.warpAffine, you can simply make a call to the translate function of imutils.
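A short sketch of the difference between the two routes (the image path is an assumption):

import cv2
import numpy as np
import imutils

image = cv2.imread("face.jpg")  # assumed input image

# Manual route: build the translation matrix M and call cv2.warpAffine yourself
M = np.float32([[1, 0, 25], [0, 1, 50]])  # shift 25 px right, 50 px down
shifted = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))

# imutils route: the same shift in a single call
shifted = imutils.translate(image, 25, 50)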
Rotation:
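imutils wraps rotation in the same way; a brief sketch (continuing with the assumed image above):

import cv2
import imutils

image = cv2.imread("face.jpg")                   # assumed input image
rotated = imutils.rotate(image, 45)              # rotate 45 degrees about the centre; corners may clip
rotated_bound = imutils.rotate_bound(image, 45)  # grows the canvas so nothing is clipped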
OPENCV
The library has more than 2500 optimized algorithms, which includes a
comprehensive set of both classic and state-of-the-art computer vision and
machine learning algorithms. These algorithms can be used to detect and
recognize faces, identify objects, classify human actions in videos, track camera
movements, track moving objects, extract 3D models of objects, produce 3D
point clouds from stereo cameras, stitch images together to produce a high
resolution image of an entire scene, find similar images from an image database,
remove red eyes from images taken using flash, follow eye movements,
recognize scenery and establish markers to overlay it with augmented reality,
etc.
SYSTEM DESIGN
5.1 INTRODUCTION
The importance of design can be stated with a single word: “quality”. Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system, one that will be difficult to test and whose quality cannot be assessed until the last stage. The purpose of the design phase is to plan a solution to the problem specified by the requirements document. This phase is the first step in moving from the problem domain to the solution domain. In other words, starting with what is needed, design takes us toward how to satisfy the needs. The design of a system is perhaps the most critical factor affecting the quality of the software; it has a major impact on the later phases, particularly testing and maintenance. The output of this phase is the design document. This document is similar to a blueprint for the solution and is used later during implementation, testing and maintenance. The design activity is often divided into two separate phases: System Design and Detailed Design.
System Design, also called top-level design, aims to identify the modules that should be in the system, the specifications of these modules, and how they interact with each other to produce the desired results. At the end of system design, all the major data structures, file formats, output formats, and the major modules in the system and their specifications are decided.
During Detailed Design, the internal logic of each of the modules specified in system design is decided. During this phase, the details of the data of a module are usually specified in a high-level design description language, which is independent of the target language in which the software will eventually be implemented.
In system design the focus is on identifying the modules, whereas during detailed design the focus is on designing the logic for each of the modules. In other words, in system design the attention is on what components are needed, while in detailed design the issue is how the components can be implemented in software.
Design is concerned with identifying software components, specifying relationships among components, specifying software structure, and providing a blueprint for the documentation phase. Modularity is one of the desirable properties of large systems; it implies that the system is divided into several parts in such a manner that the interaction between parts is minimal and clearly specified.
During the system design activities, Developers bridge the gap between the
requirements specification, produced during requirements elicitation and analysis,
and the system that is delivered to the user.
Design is the place where the quality is fostered in development. Software
design is a process through which requirements are translated into a representation
of software.
A data flow diagram (DFD) is a graphical tool used to describe and analyze the movement of data through a system, manual or automated, including the processes, stores of data, and delays in the system. Data flow diagrams are the central tool, and the basis from which other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system. The DFD is also known as a data flow graph or a bubble chart.
DFDs model the proposed system. They should clearly show the requirements on which the new system is to be built. Later, during the design activity, they are taken as the basis for drawing the system's structure charts. The basic notation used to create a DFD is as follows:
1. Process: People, procedures or devices that transform data from input to output.
2. Data Flow: An arrow showing the movement of data between processes, data stores and external entities.
3. Source/Sink: An external entity that supplies data to, or receives data from, the system.
4. Data Store: Here data are stored or referenced by a process in the system.
Illustrate classes with rectangles divided into compartments. Place the name of the class in the first partition (centered, bold and capitalized), list the attributes in the second partition, and write the operations in the third.
Active Class
Active classes initiate and control the flow of activity, while passive classes
store data and serve other classes. Illustrate active classes with a thicker border.
Visibility
Use visibility markers to signify who can access the information contained within a class. Private visibility hides information from anything outside the class partition. Public visibility allows all other classes to view the marked information. Protected visibility allows child classes to access information they inherited from a parent class.
Associations
Multiplicity (Cardinality)
Constraint
Generalization
Generalization is another name for inheritance or an "is a" relationship. It refers
to a relationship between two classes where one class is a specialized version of
another. For example, Honda is a type of car. So the class Honda would have a
generalization relationship with the class car.
In real life coding examples, the difference between inheritance and aggregation
can be confusing. If you have an aggregation relationship, the aggregate (the
whole) can access only the PUBLIC functions of the part class. On the other
hand, inheritance allows the inheriting class to access both the PUBLIC and
PROTECTED functions of the super class.
Use case diagrams model the functionality of a system using actors and use
cases. Use cases are services or functions provided by the system to its users.
System
Draw your system's boundaries using a rectangle that contains use cases. Place
actors outside the system's boundaries.
Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's functions.
Actors
Actors are the users of a system. When one system is the actor of another
system, label the actor system with the actor stereotype.
Relationships
Illustrate relationships between an actor and a use case with a simple line. For
relationships among use cases, use arrows labeled either "uses" or "extends." A
"uses" relationship indicates that one use case is needed by another in order to
perform a task. An "extends" relationship indicates alternative options under a
certain use case.
Sequence Diagram
Class roles
Class roles describe the way an object will behave in context. Use the UML
object symbol to illustrate class roles, but don't list object attributes.
Activation
Messages
Messages are arrows that represent communication between objects. Use half-
arrowed lines to represent asynchronous messages. Asynchronous messages are
sent from an object that will not wait for a response from the receiver before
continuing its tasks.
Various message types for Sequence and Collaboration diagrams
Lifelines
Lifelines are vertical dashed lines that indicate the object's presence over time.
Destroying Objects
Objects can be terminated early using an arrow labeled "<< destroy >>" that
points to an X.
Loops
Collaboration Diagram
Class roles
Class roles describe how objects behave. Use the UML object symbol to
illustrate class roles, but don't list object attributes.
Association roles
Messages
Activity Diagram
Action states
Action states represent the non-interruptible actions of objects. You can draw an action state in SmartDraw using a rectangle with rounded corners.
Action Flow
Action flow arrows illustrate the relationships among action states.
Object Flow
Initial State
Final State
An arrow pointing to a filled circle nested inside another circle represents the
final action state.
Branching
Synchronization
Swimlanes
Swimlanes group related activities into one column.
States
States represent situations during the life of an object. You can easily illustrate a state in SmartDraw by using a rectangle with rounded corners.
Transition
A solid arrow represents the path between different states of an object. Label the
transition with the event that triggered it and the action that results from it.
Initial State
Final State
An arrow pointing to a filled circle nested inside another circle represents the
object's final state.
Component
Interface
Dependencies
Component
Association
Use case diagrams are considered for high-level requirement analysis of a system. When the requirements of a system are analyzed, the functionalities are captured in use cases. So we can say that use cases are nothing but the system's functionalities written in an organized manner. The second thing relevant to use cases is the actors. Actors can be defined as something that interacts with the system.
An actor can be a human user, an internal application, or an external application. So, in brief, when we are planning to draw a use case diagram, we should have the following items identified:
Actors
Use case diagram (elements: supervised, unsupervised, classifying, regression, features, labels, predict(result), testing & training, SVM, plotting).
SEQUENCE DIAGRAM
Sequence diagram (messages: 1: supervised(), 2: unsupervised(), 3: classifying(), 4: regression(), 5: features(), 6: labels(), 7: SVM algorithm(), 8: SVR algorithm(), 10: results()).
COLLABORATION DIAGRAM:
Collaboration diagram (elements: user, dataset, classification, supervised, unsupervised, regression, classifying, features (days), labels (price), testing & training, plotting).
Component
Component diagram (components: matplotlib, datasets, server, numpy, sklearn).
Implementation
Source Code:
import tkinter as tk
import cv2, os
import shutil
import csv
import numpy as np
from PIL import Image  # used by getImagesAndLabels() to load training images
import pandas as pd
import datetime
import time
window = tk.Tk()
window.title("Face_Recogniser")
dialog_title = 'QUIT'
#window.geometry('1280x720')
window.configure(background='blue')
#window.attributes('-fullscreen', True)
window.grid_rowconfigure(0, weight=1)
window.grid_columnconfigure(0, weight=1)
#path = "profile.jpg"
#Creates a Tkinter-compatible photo image, which can be used everywhere Tkinter expects an image object.
#img = ImageTk.PhotoImage(Image.open(path))
#The Label widget is a standard Tkinter widget used to display a text or image on the screen.
#cv_img = cv2.imread("img541.jpg")
#canvas.pack(side="left")
# The widgets below were placed before being created in the original listing;
# their construction lines are restored here (label texts and colours assumed).
message = tk.Label(window, text="Face-Recognition-Based Attendance System", bg="Green", fg="white", width=50, height=3, font=('times', 30, 'italic bold underline'))
message.place(x=200, y=20)
lbl = tk.Label(window, text="Enter ID", width=20, height=2, fg="red", bg="yellow", font=('times', 15, ' bold '))
lbl.place(x=400, y=200)
txt = tk.Entry(window, width=20, bg="yellow", fg="red", font=('times', 15, ' bold '))
txt.place(x=700, y=215)
lbl2 = tk.Label(window, text="Enter Name", width=20, fg="red", bg="yellow", height=2, font=('times', 15, ' bold '))
lbl2.place(x=400, y=300)
txt2 = tk.Entry(window, width=20, bg="yellow", fg="red", font=('times', 15, ' bold '))
txt2.place(x=700, y=315)
lbl3 = tk.Label(window, text="Notification : ", width=20, fg="red", bg="yellow", height=2, font=('times', 15, ' bold underline '))
lbl3.place(x=400, y=400)
message = tk.Label(window, text="", bg="yellow", fg="red", width=30, height=2, activebackground="yellow", font=('times', 15, ' bold '))
message.place(x=700, y=400)
lbl3 = tk.Label(window, text="Attendance : ", width=20, fg="red", bg="yellow", height=2, font=('times', 15, ' bold underline'))
lbl3.place(x=400, y=650)
message2 = tk.Label(window, text="", fg="red", bg="yellow", width=30, height=2, font=('times', 15, ' bold '))
message2.place(x=700, y=650)
def clear():
txt.delete(0, 'end')
res = ""
message.configure(text= res)
def clear2():
txt2.delete(0, 'end')
res = ""
message.configure(text= res)
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        pass
    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):  # restored: unicodedata.numeric raises these for non-numerics
        pass
    return False
def TakeImages():
    Id = (txt.get())
    name = (txt2.get())
    if (is_number(Id) and name.isalpha()):  # restored input-validation guard
        cam = cv2.VideoCapture(0)
        harcascadePath = "haarcascade_frontalface_default.xml"
        detector = cv2.CascadeClassifier(harcascadePath)
        sampleNum = 0
        while (True):
            ret, img = cam.read()  # restored: grab a frame from the webcam
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                sampleNum = sampleNum + 1
                #saving the captured face in the dataset folder TrainingImage
                cv2.imwrite("TrainingImage\\" + name + "." + Id + "." + str(sampleNum) + ".jpg",
                            gray[y:y + h, x:x + w])
            cv2.imshow('frame', img)
            if cv2.waitKey(100) & 0xFF == ord('q'):  # restored exit condition
                break
            elif sampleNum > 60:  # stop after 60 face samples
                break
        cam.release()
        cv2.destroyAllWindows()
        res = "Images Saved for ID : " + Id + " Name : " + name  # assumed notification text
        row = [Id, name]
        with open('StudentDetails\\StudentDetails.csv', 'a+') as csvFile:
            writer = csv.writer(csvFile)
            writer.writerow(row)
        message.configure(text=res)
    else:
        if (is_number(Id)):
            res = "Enter Alphabetical Name"  # assumed notification text
            message.configure(text=res)
        if (name.isalpha()):
            res = "Enter Numeric Id"  # assumed notification text
            message.configure(text=res)
def TrainImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()  # LBPH recognizer (opencv-contrib)
    harcascadePath = "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(harcascadePath)
    faces, Id = getImagesAndLabels("TrainingImage")
    recognizer.train(faces, np.array(Id))
    recognizer.save("TrainingImageLabel\Trainner.yml")
    res = "Image Trained"  # assumed notification text
    message.configure(text=res)
def getImagesAndLabels(path):
    # restored: collect the path of every file in the training folder
    imagePaths = [os.path.join(path, f) for f in os.listdir(path)]
    #print(imagePaths)
    faces = []
    Ids = []
    #now looping through all the image paths and loading the Ids and the images
    for imagePath in imagePaths:
        pilImage = Image.open(imagePath).convert('L')  # grayscale PIL image
        imageNp = np.array(pilImage, 'uint8')
        #getting the Id from the image file name (name.Id.sampleNum.jpg)
        Id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces.append(imageNp)
        Ids.append(Id)
    return faces, Ids
def TrackImages():
    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read("TrainingImageLabel\Trainner.yml")
    harcascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(harcascadePath)
    df = pd.read_csv("StudentDetails\StudentDetails.csv")
    cam = cv2.VideoCapture(0)
    font = cv2.FONT_HERSHEY_SIMPLEX
    col_names = ['Id', 'Name', 'Date', 'Time']
    attendance = pd.DataFrame(columns=col_names)  # restored: in-memory attendance table
    while True:
        ret, im = cam.read()
        gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        faces = faceCascade.detectMultiScale(gray, 1.2, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(im, (x, y), (x + w, y + h), (225, 0, 0), 2)
            Id, conf = recognizer.predict(gray[y:y + h, x:x + w])  # restored prediction call
            if (conf < 50):  # assumed confidence threshold: lower = better match
                ts = time.time()
                date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
                timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
                aa = df.loc[df['Id'] == Id]['Name'].values
                tt = str(Id) + "-" + str(aa)
                attendance.loc[len(attendance)] = [Id, aa, date, timeStamp]
            else:
                Id = 'Unknown'
                tt = str(Id)
                noOfFile = len(os.listdir("ImagesUnknown")) + 1
                cv2.imwrite("ImagesUnknown\Image" + str(noOfFile) + ".jpg", im[y:y + h, x:x + w])
            cv2.putText(im, str(tt), (x, y + h), font, 1, (255, 255, 255), 2)  # restored label overlay
        attendance = attendance.drop_duplicates(subset=['Id'], keep='first')
        cv2.imshow('im', im)
        if (cv2.waitKey(1) == ord('q')):
            break
    ts = time.time()
    date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
    timeStamp = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
    Hour, Minute, Second = timeStamp.split(":")
    fileName = "Attendance\Attendance_" + date + "_" + Hour + "-" + Minute + "-" + Second + ".csv"
    attendance.to_csv(fileName, index=False)
    cam.release()
    cv2.destroyAllWindows()
    #print(attendance)
    res = attendance
    message2.configure(text=res)
# Button construction lines restored; labels and styling are assumed.
clearButton = tk.Button(window, text="Clear", command=clear, fg="red", bg="yellow", width=20, height=2, activebackground="Red", font=('times', 15, ' bold '))
clearButton.place(x=950, y=200)
clearButton2 = tk.Button(window, text="Clear", command=clear2, fg="red", bg="yellow", width=20, height=2, activebackground="Red", font=('times', 15, ' bold '))
clearButton2.place(x=950, y=300)
takeImg = tk.Button(window, text="Take Images", command=TakeImages, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
takeImg.place(x=200, y=500)
trainImg = tk.Button(window, text="Train Images", command=TrainImages, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
trainImg.place(x=500, y=500)
trackImg = tk.Button(window, text="Track Images", command=TrackImages, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
trackImg.place(x=800, y=500)
quitWindow = tk.Button(window, text="Quit", command=window.destroy, fg="red", bg="yellow", width=20, height=3, activebackground="Red", font=('times', 15, ' bold '))
quitWindow.place(x=1100, y=500)
copyWrite = tk.Text(window, background=window.cget("background"), borderwidth=0, font=('times', 30, 'italic bold underline'))  # restored footer text widget
copyWrite.tag_configure("superscript", offset=10)
copyWrite.configure(state="disabled", fg="red")
copyWrite.pack(side="left")
copyWrite.place(x=800, y=750)
window.mainloop()
setup.py
import sys, os
from cx_Freeze import setup, Executable  # restored: cx_Freeze supplies setup() and Executable

PYTHON_INSTALL_DIR = os.path.dirname(os.path.dirname(os.__file__))
base = None
if sys.platform == 'win32':
    base = 'Win32GUI'  # hides the console window for a GUI application
packages = ["idna", "os", "sys", "cx_Freeze", "tkinter", "cv2", "setup",
            "numpy", "PIL", "pandas", "datetime", "time"]
options = {
    'build_exe': {
        'packages': packages,
    },
}
executables = [Executable("main.py", base=base)]  # "main.py" is an assumed entry-script name

setup(
    name="ToolBox",
    options=options,
    version="0.0.1",
    executables=executables
)
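Assuming cx_Freeze is installed, the frozen executable would then typically be produced by running python setup.py build from the project directory.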
8.1.INTRODUCTION
UNIT TESTING
MODULE TESTING
Component Testing
SUB-SYSTEM TESTING
SYSTEM TESTING
Integration Testing
ACCEPTANCE TESTING
User Testing
Use the design of the code to draw the corresponding flow graph. The cyclomatic complexity V(G) of the flow graph can then be computed in any of three equivalent ways:
V(G) = E - N + 2, where E is the number of edges and N the number of nodes;
V(G) = P + 1, where P is the number of predicate (decision) nodes;
V(G) = the number of regions of the flow graph.
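For example, a flow graph with E = 9 edges, N = 7 nodes and P = 3 predicate nodes gives V(G) = 9 - 7 + 2 = 4 = 3 + 1, so four independent paths must be exercised by the test cases.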
In this part of the testing, each of the conditions was tested for both its true and false outcomes, and all the resulting paths were tested, so that each path that may be generated under a particular condition is traced to uncover any possible errors.
This type of testing selects the paths of the program according to the locations of the definitions and uses of variables. It was used only where local variables were declared. The definition-use chain method was applied in this type of testing; it was particularly useful in nested statements.
5. LOOP TESTING
In this type of testing all the loops are tested to all the limits possible. The following exercise was adopted for all loops:
All the loops were tested at their limits, just above them and just below them.
All the loops were skipped at least once.
For nested loops, the innermost loop was tested first, then working outwards.
For concatenated loops, the values of dependent loops were set with the help of the connected loop.
Unstructured loops were resolved into nested loops or concatenated loops and tested as above.
Output Screens
Conclusion
Alpaydin, E. (2014) Introduction to Machine Learning. 3rd edn. Cambridge, MA: The MIT Press.
Anthony, S. (2014) Facebook's facial recognition software is now as accurate as the human
brain, but what now?. Available at: http://www.extremetech.com/extreme/178777-facebooks-
facial-recognition-software-is-now-as-accurate-as-the-human-brain-but-what-now (Accessed:
09/01/2018).
Baseer, K. (2015) 'A Systematic Survey on Waterfall Vs. Agile Vs. Lean Process Paradigms', I-
Manager's Journal on Software Engineering, 9 (3), pp. 34-59.
Belaroussi, R. and Milgram, M. (2012) 'A comparative study on face detection and tracking
algorithms', Expert Systems with Applications, 39 (8), pp. 7158-7164.
C, R. Kavitha and Thomas, S. Mary (2011) 'Requirement Gathering for small Projects using Agile Methods', IJCA Special Issue on “Computational Science - New Dimensions & Perspectives”, pp. 122-128.
Carro, R. C., Larios, J. -. A., Huerta, E. B., Caporal, R. M. and Cruz, F. R. (2015) 'Face
recognition using SURF', Lecture Notes in Computer Science (Including Subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics), 9225 pp. 316-326.
Castrillón, M., Déniz, O., Hernández, D. and Lorenzo, J. (2011) 'A comparison of face and facial
feature detectors based on the Viola–Jones general object detection framework', Machine Vision
and Applications, 22 (3), pp. 481-494.
Cheng, Y. F., Tang, H. and Chen, X. Q. (2014) 'Research and improvement on HMM-based face
recognition', Applied Mechanics and Materials, 490-491 pp. 1338-1341.
Da Costa, Daniel M. M., Peres, S. M., Lima, C. A. M. and Mustaro, P. (2015) Face recognition
using Support Vector Machine and multiscale directional image representation methods: A
comparative study. Killarney,Ireland. Neural Networks (IJCNN), 2015 International Joint
Conference on: IEEE.
Dagher, I., Hassanieh, J. and Younes, A. (2013) Face recognition using voting technique for the
Gabor and LDP features. Dallas, TX, USA. Neural Networks (IJCNN), The 2013 International
Joint Conference on: IEEE.
Feraud, R., Bernier, O., Viallet, J. E. and Collobert, M. (2000) 'A fast and accurate face detector
for indexation of face images', Automatic Face and Gesture Recognition,
2000.Proceedings.Fourth IEEE International Conference on, pp. 77-82.
Hadizadeh, H. (2015) 'Multi-resolution local Gabor wavelets binary patterns for gray-scale texture
description', Pattern Recognition Letters, 65 pp. 163-169.
Hiremath, P. S. and Hiremath, M. (2014) '3D Face Recognition based on Radon Transform, PCA,
LDA using KNN and SVM', International Journal of Image, 6 (7), pp. 36-43.
Hjelmås, E. and Low, B. K. (2001) 'Face Detection: A Survey', Computer Vision and Image
Understanding, 83 (3), pp. 236-274.
Jadhav, D. V. and Holambe, R. S. (2010) 'Rotation, illumination invariant polynomial kernel
Fisher discriminant analysis using Radon and discrete cosine transforms based features for face
recognition', Pattern Recognition Letters, 31 (9), pp. 1002-1009.
Jafri, R. and Arabnia, H. (2009) 'A Survey of Face Recognition Techniques', Journal of Information Processing Systems, 5 (2), pp. 41-68.
Jeong, G. and Choi, S. (2013) 'Performance evaluation of face recognition using feature feedback
over a number of Fisherfaces', IEEJ Transactions on Electrical and Electronic Engineering, 8 (6),
pp. 541-545.
Kashif, M., Deserno, T. M., Haak, D. and Jonas, S. (2016) 'Feature description with SIFT, SURF,
BRIEF, BRISK, or FREAK? A general question answered for bone age assessment', Computers
in Biology and Medicine, 68 pp. 67-75.
Leigh-Pollitt, P. (2001) The Data Protection Act explained. 3rd edn. London: The Stationery Office.
Lemley, J., Bazrafkan, S. and Corcoran, P. (2017) 'Deep Learning for Consumer Devices and Services: Pushing the limits for machine learning, artificial intelligence, and computer vision', Consumer Electronics Magazine, IEEE, 6 (2), pp. 48-56. doi: 10.1109/MCE.2016.2640698.
Lenc, L. and Král, P. (2014) 'Automatic face recognition approaches', Journal of Theoretical and
Applied Information Technology, 59 (3), pp. 759-769.
Li, C., Tan, Y., Wang, D. and Ma, P. (2017) 'Research on 3D face recognition method in cloud
environment based on semi supervised clustering algorithm', Multimedia Tools and Applications,
76 (16), pp. 17055-17073.
Li, S. Z., Xiao, R., Li, Z. Y. and Hong, J. Z. (2001) 'Nonlinear mapping from multi-view face patterns to a Gaussian distribution in a low dimensional space', Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, 2001. Proceedings. IEEE ICCV Workshop on, pp. 47-54. doi: 10.1109/RATFG.2001.938909.
Linna, M., Kannala, J. and Rahtu, E. (2015) 'Online face recognition system based on local binary
patterns and facial landmark tracking', Lecture Notes in Computer Science (Including Subseries
Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9386 pp. 403-414.
Lowe, D. G. (1999) 'Object recognition from local scale-invariant features', Computer Vision,
1999.the Proceedings of the Seventh IEEE International Conference on, 2 pp. 1150-1157.
Marciniak, T., Chmielewska, A., Weychan, R., Parzych, M. and Dabrowski, A. (2015) 'Influence
of low resolution of images on reliability of face detection and recognition', Multimedia Tools
and Applications, 74 (12), pp. 4329-4349.