
CHAPTER 1

INTRODUCTION
1.1 OVERVIEW

Today, security systems are a vital concern in smart cities, offices, and homes:
the safety of the house and family matters to everyone. The Internet of Things
(IoT) can provide such smart systems. Applied in smart cities, the IoT offers
benefits that improve citizens' lives; likewise, it can be used to build smart
homes, where it can control and automate specific parts of the house such as
lights, doors, fridges, distributed multimedia, windows, and irrigation systems.
The IoT is becoming popular in many areas of life, such as smart security, smart
cities, healthcare, smart transportation, smart grids, and online business. The
objective of the IoT is to share information and knowledge with everyone,
everywhere in the world. Computer vision can add a stronger security layer to
the IoT platform for smart houses: it can recognize a person who is in the wrong
area at the wrong time and who may therefore pose a threat to the environment.
Face recognition has grown into one of the most active research areas,
especially in recent years. It has a wide range of applications, including
public security, access control, credit-card verification, criminal
identification, law enforcement, commerce, information security, intelligent
human-computer interaction, and digital libraries. In general, it recognizes
people in public areas such as houses, offices, airports, shopping centres, and
banks. The mechanism described here permits secure access to the house by
detecting motion, controlled by an embedded system. The face is the most
expressive part of the human body and can reflect many of a person's emotions.
For many years, humans used non-living tokens such as smart cards, plastic
cards, PINs, tokens, and keys for authentication and for access to restricted
areas such as ISRO, NASA, and DRDO. The most important features of a face image,
used in facial feature extraction, are the nose, eyes, and mouth. Compared with
other biometrics, a face detection and recognition system is simpler, cheaper,
more accurate, and non-intrusive. The system falls into two stages: face
detection and face recognition. Face detection can be implemented with methods
such as Haar-like features, Eigenfaces, and Fisherfaces. Several face
recognition techniques then analyse geometric features of the facial image,
such as the distances and positions of the eyes, nose, and mouth. Among the
techniques for extracting the most discriminative features from face images,
one is the Local Binary Pattern (LBP). LBP describes the shape and texture of a
digital image; it gives good results and is efficient for real-time
applications. Haar-like features and LBP are robust compared with other methods,
and many studies report that LBP offers fast, discriminative performance, so it
was chosen here for face recognition. LBP generates a binary code that describes
the local texture pattern: the nose and eye regions are extracted from the LBP
face image, and an LBP histogram is computed over each region's pixels.
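
As a concrete illustration of the LBP operator described above, the following sketch (a minimal plain-Python version, not the system's actual implementation) computes the 8-neighbour LBP code of each interior pixel, reading the thresholded neighbours clockwise as an 8-bit number, and accumulates the 256-bin histogram used as the texture descriptor:

```python
def lbp_code(img, r, c):
    """LBP code of pixel (r, c): threshold the 8 neighbours
    against the centre and read them as an 8-bit number."""
    center = img[r][c]
    # Neighbours visited clockwise, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << (7 - bit)   # most significant bit first
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Tiny example: a bright centre pixel surrounded by darker ones.
patch = [[10, 10, 10],
         [10, 200, 10],
         [10, 10, 10]]
print(lbp_code(patch, 1, 1))   # all neighbours < centre, so the code is 0
```

In a real pipeline the histogram would be computed per region (e.g. the nose and eye areas) and the regional histograms concatenated into the final feature vector.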

1.2 PIR BASED HUMAN SEARCH AND RESCUE ROBOT

Swarm robotics is a branch of multi-robotics in which a number of robots
coordinate and communicate with one another in a distributed, decentralized way
to accomplish a common goal. It operates on simple local rules: a large number
of simple robots are used, matched to the complexity of the task, and they are
designed by imitating the behaviour of social animals such as birds, bees, and
ants. Simple swarm robots can perform complex tasks far more effectively than a
single complex robot, which gives the swarm robustness. Finding and reporting
survivors in the least possible time is the main task in search and rescue, and
it is one that swarm robotics can accomplish readily: increasing the number of
bots widens the area covered in the affected region, the robots can help each
other in case of a breakdown, and the system is more reliable overall. These
multiple robots communicate via the cloud. As a first step, the current paper
discusses and selects the most suitable hardware and software for a swarm
robot; the second step is to build several robots with similar configurations,
apply swarm algorithms, and test their effectiveness.

One paper discusses swarm robots that use an ad-hoc network to follow
firefighters and help find humans: in a fire, the robots move along the lower
part of the building, where smoke and temperature are lower, and can thereby
guide the firefighters. Another robot moves over small rocks and wreckage and
is equipped with a robotic arm to help lift objects; it is controlled over
Bluetooth from a mobile device. The main focus of Blue Swarm 2.5 is an
inexpensive swarm, so that the rescue team need not worry about the cost of a
robot damaged in the field. It carried various sensors: a Devantech SRF04
ultrasonic rangefinder for obstacle detection, infrared rangefinders and
emitters for collision avoidance, a Melexis MLX90601 infrared temperature
sensor to detect heat signatures from survivors, and a CMUcam for colour-blob
detection. A further work proposes an all-terrain robot with a robotic arm that
can carry out any number of tasks on its own. Its installed cameras, a CMUcam
and a FLIR thermal camera, serve surveillance in different ways: the CMUcam
performs vision, image processing, and signature detection, while the thermal
camera provides night vision and life detection. In addition, an array of
ultrasonic sensors on the bottom of the chassis performs terrain mapping,
uploads the map directly to a web server, and is also used for collision
detection. The limited area coverage of a single robot can be overcome with
swarm robotics; this system also lacks an edge-detection capability, which
could lead to disaster at a steep drop during a search mission.

Another paper proposes a rescue operation carried out after receiving the
GPS-tracked location of live humans, sent over IoT to a receiver (PC, mobile,
etc.). Ultrasonic sensors are used for obstacle detection and for autonomous
movement of the robot, and a GPS module that connects to numerous satellites
determines the exact location. When the PIR sensor detects any motion, the
robot sends the location to a particular IP address. The PIR sensor used here
can send irrelevant data to the rescue team, since PIR sensors respond to any
kind of movement in front of them; this issue can be addressed with infrared
thermal cameras and a face detection system. The robot also has no set path or
destination, and random traversal can lead to repeated coverage of the same
area and wasted time.

A further work implements a rescue robot with a thermostat for temperature
sensing and an end effector that can lift objects weighing 200-300 g; it
includes an RF-Pro module for communication and uses the Viola-Jones face
detection algorithm to detect survivors. The robot is built on pallets, which
prevent tipping and falling while moving; it is equipped with RF-Pro for
communication, Bluetooth for GPS communication, and an arm with wrist and elbow
joints, and is controlled entirely from an Android app. Finally, one paper
offers a mobile robot based on a wireless sensor network, designed for
detecting human existence in an unmanned location. It also includes a module to
record and analyse the condition of the human body and communicate the data,
along with a PIR detector, gas sensor, bomb sensor, and temperature sensor; the
wireless sensor network structure is designed for localization.
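
The PIR false-trigger problem noted above (a PIR sensor fires on any movement) is often softened in practice by simple debouncing: require several consecutive HIGH readings before reporting a detection. The sketch below is a hypothetical illustration in plain Python; the function names, coordinates, and payload format are assumptions, not taken from the surveyed papers.

```python
def confirmed_detection(readings, required=3):
    """Return True once `required` consecutive HIGH readings occur,
    so a single spurious PIR pulse is ignored."""
    streak = 0
    for level in readings:
        streak = streak + 1 if level else 0
        if streak >= required:
            return True
    return False

def report_location(readings, lat, lon, required=3):
    """Payload a bot would send to the base station, built only
    for a confirmed detection (hypothetical record format)."""
    if confirmed_detection(readings, required):
        return {"event": "possible_survivor", "lat": lat, "lon": lon}
    return None

# A lone noise spike is ignored; a sustained detection is reported.
print(report_location([0, 1, 0, 0], 12.97, 77.59))     # None
print(report_location([0, 1, 1, 1, 0], 12.97, 77.59))  # payload dict
```

On a real Raspberry Pi the `readings` list would come from polling a GPIO pin connected to the PIR sensor.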

1.3 SECURITY ALERT USING FACE RECOGNITION

Security is one of the utmost requirements of homes and businesses. In today's
high-technology environment, organizations are becoming more and more dependent
on their information systems, and many identify information as an area of their
operation that must be highly secured as part of their system of internal
control. This paper aims to identify a person through face recognition and to
raise an alert when security is at risk. Face recognition is one of the
applications of image processing. Image processing converts an image into
digital form and performs operations on it in order to obtain an enhanced image
or to extract useful information from it. It is a type of signal processing in
which the input is an image, such as a video frame or photograph, and the
output may be an image or attributes associated with that image. An image
processing system usually treats images as two-dimensional signals and applies
established signal-processing methods to them. We can take advantage of image
processing and face recognition in our CCTV cameras. Video surveillance and the
analysis of the resulting footage is a process that needs a huge amount of
storage. CCTV surveillance is now used everywhere, but effective video
surveillance is implemented almost nowhere: the current practice is to install
a camera and analyse the stored footage after an incident has occurred. For the
same cost we can implement something better: rather than analysing the footage
afterwards, notify the organization's authority at the time of the incident, so
that action can be taken without delay.

The system introduced in this paper includes many features and is
cost-effective and more secure. A camera is installed in the room to be
secured, together with a PIR sensor, so there is no need to keep the camera
turned on. When the PIR sensor detects human presence, the camera turns on and
starts capturing video. A human face is detected in the captured frames and
facial features are extracted using the Viola-Jones algorithm. The image is
then compared with the reference image stored in the dataset using the PCA
algorithm. When the face is recognized, a message with the time and date of
access is sent to the higher authority; when it is not recognized, an alarm is
sounded to notify the security guard of the unauthorized entry attempt, and a
message with the entry time and date is likewise sent to the higher authority.
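
The recognition-and-alert decision at the end of this pipeline can be sketched as a nearest-neighbour match in a feature space, which is effectively what PCA-based recognition performs, followed by the alerting policy described above. This is only an illustration: the feature vectors, names, and threshold below are invented for the example, and a real system would first project the face image onto the learned PCA basis (eigenfaces).

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(features, references, threshold=0.5):
    """Nearest-reference match in a (PCA-style) feature space.
    Returns the matched name, or None if nobody is close enough."""
    best_name, best_dist = None, float("inf")
    for name, ref in references.items():
        d = euclidean(features, ref)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

def handle_frame(features, references):
    """Alerting policy described above: message on recognition,
    alarm plus message on an unknown face."""
    who = identify(features, references)
    if who is not None:
        return {"action": "notify_authority", "person": who}
    return {"action": "sound_alarm_and_notify", "person": "unknown"}

# Hypothetical stored projections for two authorized users.
refs = {"alice": [0.1, 0.2, 0.3], "bob": [0.9, 0.8, 0.7]}
print(handle_frame([0.12, 0.18, 0.31], refs))  # recognized as alice
print(handle_frame([5.0, 5.0, 5.0], refs))     # unknown, alarm raised
```
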
1.4 INTERNET OF THINGS

The Internet of Things, also called the Internet of Objects, refers to a
network between objects; usually the network is wireless and self-organizing,
connecting, for instance, household appliances. The expression "Internet of
Things" has come to describe a number of technologies and research disciplines
that extend the Internet into the real world of physical objects. Extending the
present Internet to provide connection, communication, and inter-networking
between devices and physical objects, or "Things", is a growing trend that is
often referred to as the Internet of Things. The IoT will change everything,
including ourselves. It represents the next evolution of the Internet, taking a
giant leap in its ability to gather, analyse, and distribute data that we can
turn into information, knowledge, and, ultimately, wisdom. The IoT can be
described as connecting everyday objects, such as smartphones, Internet TVs,
sensors, and actuators, to the Internet, where the devices are intelligently
linked together, enabling new forms of communication between things and people
and between things themselves. Now anybody, at any time and from anywhere, can
have connectivity to anything, and it is expected that these connections will
extend and create an entirely advanced dynamic network. IoT technology can also
be applied to create a new concept and a broad development space for smart
homes, providing intelligence and comfort and improving the quality of life.
Modern advances in electronics and communications technologies have led to the
miniaturization and improved performance of computers, sensors, and networking,
and these advances have given rise to the development of several
home-automation technologies and systems. Home automation can be valuable to
those who need to access home appliances while away from home, and it can
dramatically improve the lives of the disabled.

The expression "smart city" has been used for quite some time by a number of
technology companies, and serves as a description for the use of composite
systems to integrate the operation of urban infrastructure and services, such
as buildings, transportation, electrical and water distribution, and public
security. A smart city can be described as a city that:

• Allows real-world urban data to be collected and analysed by means of
software systems, server infrastructure, network infrastructure, and client
devices.

• Implements solutions, with the aid of instrumentation and the
interconnection of sensors, actuators, and mobile phones.

• Can combine service creation with an intelligent environment, exploits the
available data in its activities and decision-making, and adopts data flows
between the municipality and the urban or business community. The city may be
regarded as a service organization with its citizens as the clients: it
provides services to its residents. There is a demand for smarter, more
effective, more efficient, and more sustainable cities, pushing forward the
collective intelligence of cities, which can improve the ability to measure and
manage urban flows and integrate the physical, digital, and institutional
dimensions of a regional agglomeration.
CHAPTER 2
LITERATURE SURVEY
2.1 Facial Expression Recognition in the Encrypted Domain Based on
Local Fisher Discriminant Analysis
Facial expression recognition is a critical capability for human-interacting
systems that aim to respond to variations in a human's emotional state. Recent
trends toward cloud computing and outsourcing have led to the requirement that
facial expression recognition be performed remotely by potentially untrusted
servers. This paper presents a system that addresses the challenge of
performing facial expression recognition when the test image is in the
encrypted domain; to the best of the authors' knowledge, it is the first known
result to do so. Such a system removes the need to trust the servers, since the
test image can remain in encrypted form at all times without any decryption,
even during the recognition process. Experimental results on the popular JAFFE
and MUG facial expression databases demonstrate that a recognition rate of up
to 95.24 percent can be achieved even in the encrypted domain. The
effectiveness of the subspace of the fused dataset was measured with different
dimensional parameters of the Gabor filter, and the results reveal that the
subspace approaches on the proposed high-dimensional, feature-level fused
dataset yield higher accuracy than state-of-the-art approaches.
2.2 EmoNets: Multimodal deep learning approaches for emotion
recognition in video

The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign
one of seven emotions to short video clips extracted from Hollywood-style
movies. The videos depict acted-out emotions under realistic conditions with a
large degree of variation in attributes such as pose and illumination, making
it worthwhile to explore approaches that combine features from multiple
modalities for label assignment. This paper presents an approach in which
several specialist models are learned with deep learning techniques, each
focusing on one modality: a convolutional neural network that captures visual
information in detected faces, a deep belief net that represents the audio
stream, a K-Means-based "bag-of-mouths" model that extracts visual features
around the mouth region, and a relational autoencoder that addresses
spatio-temporal aspects of the videos. Multiple methods for combining the cues
from these modalities into one common classifier are explored; the combination
achieves considerably greater accuracy than the strongest single-modality
classifier. The method was the winning submission in the 2013 Emotion challenge
and achieved a test-set accuracy of 47.67% on the 2014 dataset.
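
One simple way to combine cues from several modality-specific models, in the spirit of the fusion step described above, is a weighted average of their class-probability vectors followed by an argmax. The sketch below is illustrative only: the modality names, probabilities, and weights are made up, and the paper itself explores more sophisticated combination methods.

```python
def fuse(predictions, weights):
    """Weighted average of per-modality class-probability vectors."""
    n_classes = len(next(iter(predictions.values())))
    fused = [0.0] * n_classes
    total = sum(weights[m] for m in predictions)
    for modality, probs in predictions.items():
        w = weights[modality] / total   # normalize the weights
        for i, p in enumerate(probs):
            fused[i] += w * p
    return fused

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

preds = {  # hypothetical outputs of three per-modality models
    "face_cnn":      [0.05, 0.05, 0.10, 0.60, 0.05, 0.05, 0.10],
    "audio_dbn":     [0.10, 0.05, 0.05, 0.40, 0.20, 0.10, 0.10],
    "bag_of_mouths": [0.05, 0.05, 0.05, 0.55, 0.10, 0.10, 0.10],
}
weights = {"face_cnn": 0.5, "audio_dbn": 0.2, "bag_of_mouths": 0.3}

fused = fuse(preds, weights)
print(EMOTIONS[max(range(len(fused)), key=fused.__getitem__)])  # happy
```
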
2.3 A novel feature extraction method based on late positive potential for
emotion recognition in human brain signal patterns

The use of a stimulation strategy may help to enhance emotion recognition from
human brain signals. Several methods for collecting psychophysiological data
from humans have been developed, including galvanic skin response (GSR),
electromyography (EMG), electroencephalography (EEG), and the electrocardiogram
(ECG). This paper proposes a feature extraction method for emotion recognition
in EEG-based human brain signals. In this research, emotions were elicited from
subjects using emotion-related stimuli from the International Affective Picture
System (IAPS) database; four kinds of emotional stimuli were selected in the
arousal-valence domain. Raw brain signals were preprocessed using independent
component analysis (ICA) to remove artifacts. The late positive potential (LPP)
was analysed in order to select features for emotion classification, and
LPP-based EEG features were selected over multiple frequency bands; a benchmark
based on statistical and frequency-domain features was implemented for
comparison. Emotion classification was performed using a support vector machine
(SVM) and k-nearest neighbours (KNN). The LPP-based results show the highest
accuracy when using the SVM on the full selected feature set. These findings
offer experimental evidence that LPP components may be useful features for
emotion recognition, and they suggest a way to develop a more specialized
emotion recognition system using brain signals.
2.4 EEG-Based Emotion Recognition Approach for E-Healthcare
Applications

Emotions play an extremely important role in decision-making, planning,
reasoning, and other human mental states, and recognizing these emotions is
becoming a vital task for e-healthcare systems. Using bio-sensors such as the
electroencephalogram (EEG) to recognize the mental state of patients who may
need special care offers important feedback for Ambient Assisted Living (AAL).
This paper presents an EEG-based emotion recognition approach to detect the
emotional state of patients. The proposed approach combines wavelet energy,
modified energy, wavelet entropy, and statistical features to classify four
emotion states. Three different classifiers (quadratic discriminant analysis,
k-nearest neighbour, and support vector machines) are used to recognize the
patients' emotions robustly. The new approach is tested on the "DEAP" EEG
database using four electrodes and shows high performance compared with
existing algorithms, with an overall classification accuracy of 83.87%.
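
The wavelet-energy features mentioned above can be illustrated with a one-level Haar wavelet transform: the signal is split into approximation and detail coefficients, and the energy of each sub-band is the sum of its squared coefficients. This is a minimal sketch with a toy signal, not the paper's multi-level, multi-electrode pipeline.

```python
import math

def haar_dwt(signal):
    """One-level Haar wavelet transform: approximation and
    detail coefficients of an even-length signal."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def wavelet_energy(signal):
    """Energy of each sub-band: the sum of squared coefficients.
    Per-band energies are the kind of features the paper combines
    with entropy and statistical measures."""
    approx, detail = haar_dwt(signal)
    return sum(c * c for c in approx), sum(c * c for c in detail)

eeg_epoch = [1.0, 3.0, 2.0, 2.0, 0.0, 4.0, 1.0, 1.0]  # toy EEG segment
print(wavelet_energy(eeg_epoch))
```

Note that the two band energies sum to the total energy of the signal (Parseval's relation), which is a quick sanity check for any wavelet-energy implementation.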

2.5 A New Class of Convolutional Neural Networks (SICoNNets) and their
Application to Face Detection

Emotion states are associated with a wide variety of human feelings, thoughts,
and behaviours; hence they affect our ability to act rationally in such areas
as decision-making, perception, and human intelligence. Studies on emotion
recognition using emotional signals therefore enhance brain-computer interface
(BCI) systems as an effective subject for clinical applications and human
social interaction.

Artificial neural networks (ANNs), evolved from biological insights, have
equipped computers with the capacity to actually learn from examples using
real-world data. With this remarkable ability, ANNs can extract patterns and
detect trends that are too complex to be noticed or perceived by either humans
or classical computing techniques. Nevertheless, as the amount of data to be
processed increases significantly, there is a demand for other types of
artificial neural networks to perform complex pattern-recognition tasks. This
article introduces a new class of convolutional neural networks, shunting
inhibitory convolutional neural networks (SICoNNets), and develops a training
algorithm using supervised learning based on resilient backpropagation with
momentum. Three network topologies, ranging from fully connected to partially
connected, are implemented and trained to discriminate between face and
non-face patterns. All three architectures achieve more than 96% correct face
classification; the best achieves 97.6% correct face classification at a false
alarm rate of 3.4%.

Video recognition, by contrast, has not achieved good results in some
application fields, such as the recognition of surveillance videos. To achieve
better recognition results, a further paper proposes a new algorithm that
recognizes a video from five coherent pictures: the features of the video
frames are extracted by a ResNet, sent to a 2-layer LSTM for processing, and
finally classified through a fully connected layer. Collected shipping data are
used as the dataset to evaluate the model; the experiments show that the
proposed algorithm recognizes better than other methods, with total accuracy
increasing from 0.967 to 0.981.

2.6 AIWAC: Affective Interaction through Wearable Computing and Cloud
Technology
Deep learning applies multiple processing layers to learn representations of
data with multiple levels of feature extraction. This emerging technique has
reshaped the research landscape of face recognition (FR) since 2014, launched
by the breakthrough of the DeepFace method. Since then, deep FR, which
leverages a hierarchical architecture to stitch pixels together into an
invariant face representation, has dramatically improved state-of-the-art
performance and fostered successful real-world applications. This survey
provides a comprehensive review of recent developments in deep FR, covering
broad topics in algorithm design, databases and protocols, and application
scenes. First, it summarizes the different network architectures and loss
functions proposed in the rapid evolution of deep FR methods. Second, the
related face processing methods are categorized into two classes, "one-to-many
augmentation" and "many-to-one normalization", and the commonly used databases
for both model training and evaluation are summarized and compared. Third, it
reviews miscellaneous scenes in deep FR, such as cross-factor, heterogeneous,
multiple-media, and industry scenes. Finally, the technical challenges of
current methods and several promising directions in biometric security, fusion,
and privacy are highlighted. Face recognition has been the prominent biometric
technique for identity authentication and is widely used in many areas, such as
the military, finance, public security, and daily life; it has been a
long-standing research topic in the CVPR community.

2.7 Home Security System and Door Access Control Based on Face
Recognition

Smart security systems have become indispensable in modern daily life. The
proposed security system was developed to prevent robbery in highly secure
areas such as the home environment, as a lower-power, more reliable standalone
security device for both intruder detection and door security. Door access
control is implemented using face recognition technology, which grants access
only to authorized people. Face detection and recognition are implemented with
the principal component analysis (PCA) approach, and instead of using sensor
devices, intruder detection is achieved by performing image processing on the
captured video frames: the difference between the previously captured frame and
the running frames is computed in terms of pixels. The standalone security
device is built on a Raspberry Pi development board, operates on battery power,
and obtains wireless Internet connectivity through a USB modem. Automatic
police e-complaint registration is achieved by sending security-breach alert
mails to the e-mail address of the nearby police station. The proposed system
is more effective and reliable, and consumes far less data and power, than
other existing systems. An efficient and accurate face-recognition-based home
security and door access control system is important for a wide range of
security applications; security is a key feature of smart home applications,
and most countries are gradually adopting smart door security systems. The most
important part of any door security system is accurately identifying the
persons who enter through the door. Face recognition is probably the most
natural way to perform authentication between human beings and, after
fingerprint technology, it is the most popular biometric authentication trait.
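
The sensorless intruder detection described above, computing the pixel difference between the previous and current frames, can be sketched in a few lines of plain Python. The thresholds here are illustrative assumptions; a real system would tune them and typically smooth the frames first.

```python
def motion_detected(prev_frame, curr_frame, pixel_thresh=25, count_thresh=4):
    """Flag motion when enough pixels change between two frames.
    Frames are grey-scale images as lists of lists (values 0-255)."""
    changed = 0
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        for p, c in zip(row_prev, row_curr):
            if abs(c - p) > pixel_thresh:
                changed += 1
    return changed >= count_thresh

still = [[50] * 4 for _ in range(4)]       # empty room, two frames alike
moved = [row[:] for row in still]
for r in range(2):                         # an "object" enters one corner
    for c in range(2):
        moved[r][c] = 200

print(motion_detected(still, still))   # False: nothing changed
print(motion_detected(still, moved))   # True: 4 pixels changed by 150
```
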

2.8 A face recognition method based on LBP feature for CNN

Face recognition is a kind of biometrics based on the facial feature
information of a human, and it has wide application value in computer
information security, medical treatment, security monitoring, human-computer
interaction, and finance. Face feature extraction is the key to face
recognition technology, and it determines the selection and performance of the
recognition algorithm. The Local Binary Pattern is a texture description method
that describes the local texture features of an image over a grey-scale range;
in recent years, many researchers have successfully applied it to facial
feature description and recognition, with remarkable results. Convolutional
neural networks are among the most representative network structures in deep
learning, and they have achieved great success in the field of image processing
and recognition.

2.9 Face Detection with the Faster R-CNN

While deep-learning-based methods for generic object detection have improved
rapidly in the last two years, most approaches to face detection are still
based on the R-CNN framework, leading to limited accuracy and processing speed.
This paper investigates applying the Faster R-CNN, which has recently
demonstrated impressive results on various object detection benchmarks, to
face detection. By training a Faster R-CNN model on the large-scale WIDER face
dataset, it reports state-of-the-art results on the WIDER test set as well as
on two other widely used face detection benchmarks, FDDB and the recently
released IJB-A.

2.10 Home Security System and Door Access Control Based on Face
Recognition

Artificial neural networks (ANNs), evolved from biological insights, have
equipped computers with the capacity to actually learn from examples using
real-world data. With this remarkable ability, ANNs can extract patterns and
detect trends that are too complex to be noticed or perceived by either humans
or classical computing techniques. Nevertheless, as the amount of data to be
processed increases significantly, there is a demand for other types of
artificial neural networks to perform complex pattern-recognition tasks. This
article introduces a new class of convolutional neural networks, shunting
inhibitory convolutional neural networks (SICoNNets), and develops a training
algorithm using supervised learning based on resilient backpropagation with
momentum. Three network topologies, ranging from fully connected to partially
connected, are implemented and trained to discriminate between face and
non-face patterns. All three architectures achieve more than 96% correct face
classification; the best achieves 97.6% correct face classification at a false
alarm rate of 3.4%.

2.11 Access Control System Based on Face Recognition

Face recognition is a technology that uses a face image to verify a person's
identity by finding that person in a given photo database. It is very practical
in access control systems, since, unlike traditional keys, it does not require
any physical interaction to gain access; moreover, such systems require only a
camera and are easy to install and use. For these reasons they are already used
by companies for access control to their offices, in home automation systems,
and elsewhere. In this paper, different approaches to face recognition are
studied. The first step of any face recognition system is face detection and
cropping, so classical Viola-Jones face detection and Multi-Task Convolutional
Neural Networks (MTCNN) are analysed in terms of detection quality and
processing time. The final classifier obtained is capable of matching a face
from the online camera to an image from a given database. The vulnerability of
standalone face recognition is also reduced by adding a spoof detection method,
so that the system does not react to attempts to bypass it, such as showing a
photo of an authorized person on a phone's screen.

2.12 Research and Implementation of Face Detection, Tracking and
Recognition Based on Video

With the continuous development of computing, many advanced technologies are
based on computer vision, in which face recognition technology plays an
important role. A camera captures an image or video stream containing a human
face, and the face is detected and tracked in the image; this series of related
technologies is called face detection and recognition. Face recognition is a
natural and intuitive method with the characteristics of being indirect and
concurrent, and it has been widely used in many fields; computer vision is now
an active research field. This paper studies methods of face detection and
recognition in the field of computer vision. The user interface is designed
with PyQt, and the functions of training the face model, reading the image to
be recognized, extracting the face from that image, and recognizing it are
realized. The experimental results show that the proposed method has high
recognition efficiency, and a complete algorithmic system based on face
tracking, detection, and recognition is realized, which lays a good foundation
for future research in vision.

2.13 Smart Inventory Access Monitoring System (SIAMS) using Embedded
System with Face Recognition

The Smart Inventory Access Monitoring System (SIAMS) integrates an embedded
system with face recognition into an inventory system. It is developed to
prevent theft in warehouses by authorized staff. The embedded system is
attached to an RGB camera and deploys three software modules: image capturing,
face detection, and face recognition. The face detection module sends detected
face images to the face recognition module, which identifies a person by name
or assigns an unknown class using a deep learning approach. The system
achieved competitive accuracy under standard evaluation metrics for face
detection and recognition. The inventory system receives data via TCP/IP
socket communication to log access history, and the retrieved information can
be used to investigate unusual situations. The system can be improved with
object detection and person tracking to detect theft in real time.
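The TCP/IP logging step can be sketched in Python with stdlib sockets. The record format (a JSON object with "person" and "time" fields) and the loopback endpoint are illustrative assumptions, not SIAMS's actual protocol:

```python
import json
import socket
import threading

records = []  # access-history log kept by the inventory system

# Inventory-system side: listen on an OS-assigned local TCP port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def receive_one_event():
    """Accept one connection and log the JSON access record it carries."""
    conn, _ = srv.accept()
    with conn:
        records.append(json.loads(conn.recv(4096).decode("utf-8")))
    srv.close()

def send_access_event(port, name, timestamp):
    """Embedded-system side: report a recognized identity over TCP."""
    event = {"person": name, "time": timestamp}  # assumed record format
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(json.dumps(event).encode("utf-8"))

t = threading.Thread(target=receive_one_event)
t.start()
send_access_event(port, "unknown", "2023-01-01T12:00:00")
t.join()
print(records)  # the single logged access event
```

In the real system the recognizer would call `send_access_event` once per identified person, and the inventory server would append each record to persistent storage for later investigation.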
2.14 Development of Application and Face Recognition for Smart Home

This paper presents the application design and facial recognition for a smart
home system that is easy to use, has a security system, a low cost, and voice
control of home appliances. It includes a face recognition system to
authenticate each user and raise an alert when an intruder enters the house.
Moreover, home appliances can be monitored and controlled with a smartphone
application. The main components of the system are an ESP8266 microcontroller,
an Arduino UNO, a PIR sensor, an LDR sensor, a temperature sensor (DHT11), a
voice recognition module, a face recognition module, an Android/iOS
application, and a service platform on NETPIE. The study found that face
recognition works well when the user's face is less than a meter from the
camera, and that accuracy improves as the number of user images increases.

2.15 Raspberry Pi and computer-based face detection and recognition system

This paper aims to deploy a network that consists of a group of computers
connected to a microcomputer with a camera. The system takes images of people,
then analyzes, detects, and recognizes human faces using image processing
algorithms. It can serve as a security system in public places like malls,
universities, and airports, and can detect and recognize a human face in
different situations and scenarios. The system implements the "Boosted Cascade
of Simple Features" algorithm to detect human faces and the "Local Binary
Pattern" algorithm to recognize them. A Raspberry Pi is the main component,
connected to a camera for image capturing. All needed programs were written in
Python. Tests and performance analysis were done to verify the efficiency of
the system.
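The "Boosted Cascade of Simple Features" detector evaluates Haar-like rectangle features in constant time using an integral image (summed-area table). A minimal pure-Python sketch of that core step, written for illustration rather than taken from the paper's implementation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, via only 4 lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

# A two-rectangle Haar feature is the difference of two such sums.
img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))  # 45: sum of the whole image
```

Because every rectangle sum costs four lookups regardless of its size, the cascade can evaluate thousands of candidate windows per frame, which is what makes the detector fast enough for embedded hardware.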
CHAPTER III

SYSTEM OVERVIEW

3.1 EXISTING SYSTEM

Automatic face analysis, which includes face detection, face recognition, and
facial expression recognition, has become a very active topic in computer
vision research. A key issue in face analysis is finding efficient descriptors
for face appearance. Different holistic methods such as Principal Component
Analysis (PCA), Linear Discriminant Analysis (LDA), and the more recent 2D PCA
have been studied widely, but lately local descriptors have also gained
attention due to their robustness to challenges such as pose and illumination
changes. For this purpose, a descriptor based on local binary pattern (LBP)
texture features extracted from local facial regions has been proposed. Using
local photometric features for object recognition in the more general context
has become a widely accepted approach. In that setting, the typical approach
is to detect interest points or interest regions in images, perform
normalization with respect to affine transformations, and describe the
normalized interest regions using local descriptors. This bag-of-keypoints
approach is not suited for face description as such, since it does not retain
information on the spatial layout of the detected local regions, but it does
bear certain similarities to local feature-based face description.
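The basic LBP operator behind this descriptor thresholds each pixel's eight neighbours against the centre value and packs the results into a byte. A minimal sketch (the neighbour ordering is one common convention, chosen here for illustration):

```python
def lbp_code(patch):
    """8-neighbour LBP code of the centre pixel of a 3x3 patch.

    Neighbours are read clockwise from the top-left; each neighbour
    >= centre contributes one bit to the 8-bit pattern.
    """
    centre = patch[1][1]
    # clockwise neighbour coordinates starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(coords):
        if patch[y][x] >= centre:
            code |= 1 << bit
    return code

# Uniform patch: every neighbour equals the centre, so all 8 bits set.
print(lbp_code([[5, 5, 5], [5, 5, 5], [5, 5, 5]]))  # 255
```

The face descriptor then histograms these codes over a grid of local facial regions and concatenates the histograms, which is what preserves the spatial layout that the plain bag-of-keypoints approach loses.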

3.1.1 DISADVANTAGES

• Slow process.
• Very low accuracy.
• Expensive method.
3.2 PROPOSED SYSTEM

Detecting the presence of a human can be done with different techniques and
methodologies. One technique uses a PIR sensor to find the direction of
movement through the concept of polarization. Another technique uses PIR
sensors along with Symbolic Dynamic Filtering (SDF) on seismic waves: features
are extracted from the seismic waves using SDF and checked to determine
whether the presence is a vehicle or an animal/human; after this, it is
further classified as human or animal along with the movement type (running,
walking). There is also a technique which focuses on the use of PIR sensors to
detect human beings. The PIR sensor detects movement in a specific area, after
which the camera captures images. A Raspberry Pi 3 is utilized with a
Raspberry Pi camera connected to it. The motion detection module detects any
motion using the PIR sensor; afterwards, the algorithm searches for human
faces and face detection is processed. For recognition, a face recognition
method named Sparse Individual Low-rank component-based Representation (SILR)
is used for the IoT-based system, in which a testing image's representation is
based on individual subjects' low-rank components instead of all samples from
all subjects. Low-rank components obtained by low-rank matrix recovery from
the training samples provide effective features that contribute to accurate
recognition. The major contribution of the proposed method is that an l2-norm
constraint is put on intra-subject coefficients to represent a testing image.
This makes the block of coefficients for one subject denser than before, so
that testing images can be better represented through the collaboration of
intra-subject images. Thus, the sparse representation of a testing image
coincides with the individual subject, which reduces the impact of
inter-subject variation; in this way, the impact of undersampled data is also
weakened. A procedure is further proposed to obtain the SILR result by solving
a convex minimization problem, a regularized optimization solvable in
polynomial time. This system is very useful for securing a place: it sends the
images to IoT devices via the Internet, the face is detected and recognized in
the captured image, and finally the images and notifications are sent to an
email application.
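The motion-triggered pipeline described above can be sketched as a small event step. The sensor, camera, recognizer, and notifier below are placeholder callables standing in for the PIR sensor, Pi camera, SILR recognizer, and email sender; all names and the stubbed values are illustrative, not the actual implementation:

```python
def surveillance_step(motion_detected, capture_image, recognize_face, notify):
    """One pass of the PIR-triggered pipeline: detect motion, capture an
    image, recognize the face, then notify with the image and identity."""
    if not motion_detected():
        return None                      # idle: PIR saw no movement
    image = capture_image()              # Pi camera frame
    identity = recognize_face(image)     # e.g. a name or "unknown"
    notify(image, identity)              # e.g. email with attachment
    return identity

# Stubbed run simulating a single intrusion event.
sent = []
result = surveillance_step(
    motion_detected=lambda: True,
    capture_image=lambda: b"<jpeg bytes>",
    recognize_face=lambda img: "unknown",
    notify=lambda img, who: sent.append(who),
)
print(result, sent)  # unknown ['unknown']
```

On the real device this step would run in a loop, with the callables bound to the PIR GPIO input, the camera driver, the recognizer, and the SMTP sender respectively.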

3.2.1 ADVANTAGES

• The highest accuracy in various probe sets.

• The face detection algorithms are suitable for real-time use because they
need fewer CPU resources and cost less.
3.3 SYSTEM DESIGN

Figure 3.1 Block Diagram (components: power supply, Raspberry Pi, PIR sensor,
Raspberry Pi camera, display, web server, local network, SMTP, IoT devices)


3.2 Circuit Diagram

3.4 METHODOLOGY

The system provides smart movement detection and face recognition for a
personal area. The Raspberry Pi camera captures an image when the PIR sensor
detects any movement, and the system identifies the faces in the captured
images; the algorithm is applied to all images. The face detection and
recognition algorithms are suitable for real-time use because they need fewer
CPU resources and cost less. The sparse coefficient α of the aforementioned
SRC-based methods is computed over the whole dictionary. These methods all
assume that images from the same subject are more correlated than those from
different subjects, so the significant representation coefficients in a sparse
representation are expected to concentrate on the correct subject. In
real-world terrorist recognition, testing images collected from a crime scene
are often subject to severe occlusions or poor lighting conditions, while the
training images of a terrorist are standard frontal faces with neutral
lighting. However, face images captured from different subjects under the same
condition may be more correlated than those captured from the same subject
under different conditions. Besides, due to similar facial appearances, the
subspaces spanned by different subjects may also be correlated. When face data
are undersampled and captured under the same intra-subject variation, both
data quantity and quality pose big challenges to constructing a useful model
or representation. The application performs operations such as motion
detection, capturing images, and face recognition, sending output images and
notifications to the user's smartphone. The system is activated when a motion
is detected; at the same time, the camera captures the event, and the
notifications and images are sent to the concerned mail IDs.
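The email notification can be built with Python's standard `email.message` module. The addresses below are placeholders, and actual delivery (shown commented out) would use the project's own SMTP account:

```python
from email.message import EmailMessage

def build_alert(image_bytes, identity, recipient):
    """Build the notification email with the captured image attached."""
    msg = EmailMessage()
    msg["Subject"] = f"Motion alert: {identity} detected"
    msg["From"] = "pi-camera@example.com"    # placeholder sender address
    msg["To"] = recipient
    msg.set_content(f"The system recognized: {identity}.")
    msg.add_attachment(image_bytes, maintype="image",
                       subtype="jpeg", filename="capture.jpg")
    return msg

msg = build_alert(b"<jpeg bytes>", "unknown", "owner@example.com")
print(msg["Subject"])  # Motion alert: unknown detected

# Actual sending (not run here) would be along the lines of:
#   with smtplib.SMTP_SSL("smtp.example.com", 465) as s:
#       s.login(user, password)
#       s.send_message(msg)
```

Separating message construction from delivery keeps the recognition code testable without a network connection.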

CHAPTER IV

HARDWARE DESCRIPTION

4.1 COMPONENTS DETAILS

 POWER SUPPLY
 RASPBERRY PI

 PIR SENSOR

 RASPBERRY PI CAMERA

 RASPBERRY PI DISPLAY
 IOT

4.1.1 POWER SUPPLY

An Introduction:

The power supply is a very important section of all electronic devices, as
electronic devices work only on DC. One important aspect of the project is
that the power supply should be compact. Most electronic devices need a source
of DC power.

Power supply unit consists of following units:

1. Step down transformer.

2. Rectifier unit.
3. Input filter.

4. Regulator unit.

5. Output filter.

The circuit is powered by a 12V DC adapter, which is fed to an LM7805 voltage
regulator through a forward-voltage protection diode and is decoupled by a
0.1 µF capacitor. The voltage regulator gives an output of exactly 5V DC,
which is supplied to all the components, including the microcontroller, the
serial port, and the IR transmitters and sensors.

Alternatively, the AC mains supply fed to the step-down transformer is stepped
down to 12V AC. This is fed to a full-wave rectifier, which converts it into
12V DC; this is then passed through a filter to remove the ripples, and
finally fed to a voltage regulator that converts the 12V into a stable 5V
supply.
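These numbers can be checked with a quick calculation. The load current and filter capacitance below are assumed values for illustration, not measured from the project's circuit:

```python
import math

V_RMS = 12.0        # transformer secondary voltage, volts RMS
V_DIODE = 0.7       # silicon diode forward drop
F_MAINS = 50.0      # mains frequency, Hz
C_FILTER = 2200e-6  # assumed filter capacitor, farads
I_LOAD = 0.5        # assumed load current, amperes

# Bridge rectifier: peak secondary voltage minus two diode drops
# (two diodes conduct in each half cycle of a bridge).
v_peak = V_RMS * math.sqrt(2) - 2 * V_DIODE

# Full-wave ripple approximation: dV = I / (2 * f * C),
# since the capacitor is recharged twice per mains cycle.
v_ripple = I_LOAD / (2 * F_MAINS * C_FILTER)

print(round(v_peak, 1))    # ~15.6 V fed into the filter
print(round(v_ripple, 2))  # ~2.27 V peak-to-peak ripple
```

Even at the ripple trough (about 15.6 − 2.3 ≈ 13.3 V) the regulator input stays well above the 7805's roughly 7 V minimum input, so the output remains a stable 5 V.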

Step-down Transformer

The step-down transformer is used to step down the mains supply voltage from
230V AC to a lower value. The 230V AC supply cannot be used directly, so it is
stepped down. The transformer consists of primary and secondary coils; to
reduce or step down the voltage, the transformer is designed with fewer turns
in its secondary winding. The conversion from AC to DC is then essential, and
is achieved using the rectifier circuit.
Rectifier Unit

The rectifier circuit is used to convert an AC voltage into its corresponding
DC voltage. Half-wave and full-wave rectifiers are available for this
function. The most important and simplest device used in a rectifier circuit
is the diode, whose function is to conduct when forward biased and not to
conduct when reverse biased. Forward bias is achieved by connecting the
diode's anode to the positive terminal and its cathode to the negative
terminal. The efficient circuit used here is the full-wave bridge rectifier.
The output voltage of the rectifier is in rippled form; the ripples are
removed from the obtained DC voltage using a filter circuit.

Input Filter

Capacitors are used as filters: the ripples are removed from the DC voltage
and a smoother DC voltage is obtained. The primary action performed by a
capacitor is charging and discharging: it charges on the rising half of each
ripple cycle and discharges on the falling half, smoothing out the AC ripple
while leaving the DC level intact. This filter is fixed before the regulator,
so the regulator input is largely free from ripples.
Regulator Unit

The regulator keeps the output voltage constant: the output is maintained
irrespective of fluctuations in the input AC voltage. As the AC voltage
changes, the unregulated DC voltage changes with it; regulators are used to
avoid this. Also, when the internal resistance of the power supply is greater
than 30 ohms, pull-ups get affected, and this effect is successfully reduced
here. Regulators are mainly classified as low-voltage and high-voltage types.

IC Voltage Regulators

Voltage regulators comprise a class of widely used ICs. Regulator IC units
contain the circuitry for a reference source, comparator amplifier, control
device and overload protection, all in a single IC. Although the internal
construction of the IC is somewhat different from that described for discrete
voltage regulator circuits, the external operation is much the same. IC units
provide regulation of a fixed positive voltage, a fixed negative voltage, or
an adjustably set voltage.

A power supply can be built using a transformer connected to the AC supply
line to step the AC voltage down to the desired amplitude, then rectifying and
filtering that voltage, and finally regulating it with an IC regulator.
Regulators can be selected for operation with load currents from hundreds of
milliamperes to tens of amperes, corresponding to power ratings from
milliwatts to tens of watts.
The purpose of the regulator is to maintain the output voltage constant
irrespective of fluctuations in the input voltage. The microcontroller and PC
work at constant supply voltages of +5V/−5V and +12V/−12V respectively.
Regulators are mainly classified as positive-voltage and negative-voltage
types.

LM 7805 Voltage Regulator

Features

1. Output Current up to 1A.

2. Output Voltages of 5, 6, 8, 9, 10, 11, 12, 15, 18, 24V.

3. Thermal Overload Protection.

4. Short Circuit Protection.

5. Output Transistor Safe Operating area Protection.

Description

The MC78XX/LM78XX series of three-terminal positive regulators is available in
the TO-220/D-PAK package with several fixed output voltages, making them
useful in a wide range of applications. Each type employs internal current
limiting, thermal shutdown and safe-operating-area protection, making it
essentially indestructible. If adequate heat sinking is provided, it can
deliver over 1A of output current.

Output Filter

A filter circuit is often also fixed after the regulator circuit, with a
capacitor most often used as the filter. The principle of the capacitor is to
charge and discharge, smoothing out any ripples possibly remaining in the
final output. This filter is fixed after the regulator circuit so that the
output received finally is ripple-free.

• All electronic components, from diodes to Intel ICs, work only with a DC
supply ranging from ±5V to ±12V.

• This voltage is achieved by the step-down, rectification and filtering
process.

• The input to this power supply circuit is 230V at 50Hz.

• The primary purpose of a regulator is to aid the rectifier and filter
circuit in providing a constant DC voltage to the device.

• IC7812 and IC7912 are used in this project for providing the +12V and −12V
DC supplies.

• The rectifier converts AC into DC.

• Rectification is normally achieved using a solid-state diode.

• The rectifier used here is a bridge rectifier.

4.1.2 RASPBERRY PI

Raspberry Pi is the name of a series of single-board computers made by the
Raspberry Pi Foundation, a UK charity that aims to educate people in
computing and create easier access to computing education. The Raspberry Pi
launched in 2012, and there have been several iterations and variations released
since then. The original Pi had a single-core 700MHz CPU and just 256MB
RAM, and the latest model has a quad-core 1.4GHz CPU with 1GB RAM. All
over the world, people use Raspberry Pis to learn programming skills, build
hardware projects, do home automation, and even use them in industrial
applications. The Raspberry Pi is a very cheap computer that runs Linux, but it
also provides a set of GPIO (general purpose input/output) pins that allow you
to control electronic components for physical computing and explore the
Internet of Things (IoT).

Voltages

Two 5V pins and two 3V3 pins are present on the board, as well as a number of
ground pins (0V), which are unconfigurable. The remaining pins are all
general-purpose 3V3 pins, meaning outputs are set to 3V3 and inputs are
3V3-tolerant.

Outputs
A GPIO pin designated as an output pin can be set to high (3V3) or low (0V).

Inputs

A GPIO pin designated as an input pin can be read as high (3V3) or low (0V).
This is made easier with the use of internal pull-up or pull-down resistors. Pins
GPIO2 and GPIO3 have fixed pull-up resistors, but for other pins this can be
configured in software.

As well as simple input and output devices, the GPIO pins can be used with a
variety of alternative functions; some are available on all pins, others on
specific pins.

PWM (pulse-width modulation)

Software PWM available on all pins

Hardware PWM available on GPIO12, GPIO13, GPIO18, GPIO19

SPI

SPI0: MOSI (GPIO10); MISO (GPIO9); SCLK (GPIO11); CE0 (GPIO8), CE1
(GPIO7)

SPI1: MOSI (GPIO20); MISO (GPIO19); SCLK (GPIO21); CE0 (GPIO18); CE1
(GPIO17); CE2 (GPIO16)

I2C

Data: (GPIO2); Clock (GPIO3)

EEPROM Data: (GPIO0); EEPROM Clock (GPIO1)

Serial

TX (GPIO14); RX (GPIO15)

GPIO pinout

It's important to be aware of which pin is which. Some people use pin labels
(like the RasPiO Portsplus PCB, or the printable Raspberry Leaf).
4.1.3 RASPBERRY PI CAMERA MODULE

The Pi camera module is a portable, lightweight camera that supports the
Raspberry Pi. It communicates with the Pi using the MIPI camera serial
interface protocol. It is normally used in image processing, machine learning,
or surveillance projects, and is common in surveillance drones since the
camera's payload is very low. Apart from these modules, the Pi can also use
normal USB webcams of the kind used with computers.

PiCam Features

 5MP colour camera module without microphone for Raspberry Pi

 Supports both Raspberry Pi Model A and Model B

 MIPI Camera serial interface

 Omnivision 5647 Camera Module

 Resolution: 2592 * 1944

 Supports: 1080p, 720p and 480p

 Light weight and portable (3g only)

After interfacing the hardware, we have to configure the Pi to enable the
camera. Use the command "sudo raspi-config" to open the configuration window,
then enable the camera under interfacing options. Finally, reboot the Pi and
your camera module is ready to use. You can then make the Pi take photos or
record videos using simple Python scripts.

Applications

 Surveillance projects

 Time-lapse video recording


 Image processing

 Machine learning

 Robotics

4.1.4 RASPBERRY PI DISPLAY

The Raspberry Pi Touch Display is an LCD display which connects to the
Raspberry Pi through the DSI connector. In some situations, it allows for the
use of both the HDMI and LCD displays at the same time (this requires software
support).

Board Support

The DSI display is designed to work with all models of Raspberry Pi, however
early models that do not have mounting holes (the Raspberry Pi 1 Model A and
B) will require additional mounting hardware to fit the HAT-dimensioned
stand-offs on the display PCB.

Physical Installation

The following image shows how to attach the Raspberry Pi to the back of the
Touch Display (if required), and how to connect both the data (ribbon cable)
and power (red/black wires) from the Raspberry Pi to the display. If you are not
attaching the Raspberry Pi to the back of the display, take extra care when
attaching the ribbon cable to ensure it is the correct way round. The black and
red power wires should be attached to the GND and 5v pins respectively.
The other three pins should be left disconnected, unless connecting the display
to an original Raspberry Pi 1 Model A or B. See the section on legacy
support for more information on connecting the display to an original Raspberry
Pi.

Screen Orientation

LCD displays have an optimum viewing angle, and depending on how the
screen is mounted it may be necessary to change the orientation of the display to
give the best results. By default, the Raspberry Pi Touch Display and Raspberry
Pi are set up to work best when viewed from slightly above, for example on a
desktop. If viewing from below, you can physically rotate the display, and then
tell the system software to compensate by running the screen upside down.

KMS and FKMS Mode

KMS and FKMS modes are used by default on the Raspberry Pi 4B. KMS and
FKMS use the DRM/MESA libraries to provide graphics and 3D acceleration.
To set screen orientation when running the graphical desktop, select the Screen
Configuration option from the Preferences menu. Right click on the DSI display
rectangle in the layout editor, select Orientation then the required option.

To set screen orientation when in console mode, you will need to edit the kernel
command line to pass the required orientation to the system.

Specifications

 800×480 RGB LCD display

 24-bit colour

 Industrial quality: 140-degree viewing angle horizontal, 130-degree viewing
angle vertical

 10-point multi-touch touchscreen

 PWM backlight control and power control over I2C interface

 Metal-framed back with mounting points for Raspberry Pi display conversion
board and Raspberry Pi

 Backlight lifetime: 20000 hours

 Operating temperature: -20 to +70 degrees centigrade

 Storage temperature: -30 to +80 degrees centigrade

 Contrast ratio: 500

 Average brightness: 250 cd/m2

 Viewing angle (degrees):

o Top - 50

o Bottom - 70

o Left - 70
o Right - 70

 Power requirements: 200mA at 5V typical, at maximum brightness.

Mechanical Specification

 Outer dimensions: 192.96 × 110.76mm

 Viewable area: 154.08 × 85.92mm

4.1.5 PIR SENSOR

PIR sensors are more complicated than many other sensors (like photocells,
FSRs and tilt switches) because there are multiple variables that affect the
sensor's input and output. The PIR sensor itself has two slots in it, each
made of a special material that is sensitive to IR. With a plain window, the
two slots simply 'see' out to some distance (basically the sensitivity of the
sensor). When the sensor is idle, both slots detect the same amount of IR: the
ambient amount radiated from the room, walls or outdoors. When a warm body
like a human or animal passes by, it first intercepts one half of the PIR
sensor, which causes a positive differential change between the two halves.
When the warm body leaves the sensing area, the reverse happens and the sensor
generates a negative differential change. These change pulses are what is
detected.

The IR sensor itself is housed in a hermetically sealed metal can to improve
noise/temperature/humidity immunity. A window made of IR-transmissive material
(typically coated silicon, since that is very easy to come by) protects the
sensing element; behind the window are the two balanced sensors.

PIR sensors are rather generic and for the most part vary only in price and
sensitivity. Most of the real magic happens with the optics, which is a pretty
good idea for manufacturing: the PIR sensor and circuitry are fixed and cost a
few dollars, while the lens costs only a few cents and can change the breadth,
range and sensing pattern very easily.

With a plain flat window, the detection area is just two rectangles. Usually
we would like a detection area that is much larger. To do that, we use a
simple lens such as those found in a camera: it condenses a large area (such
as a landscape) into a small one (on film or a CCD sensor). We would also like
the PIR lenses to be small, thin and mouldable from cheap plastic, even though
this may add distortion. For this reason the sensors actually use Fresnel
lenses.
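The two-slot differential behaviour described above can be simulated in pure Python. The slot readings below are made-up numbers purely to illustrate the positive-then-negative pulse pattern the detector circuit thresholds:

```python
def differential_pulses(slot_a, slot_b):
    """Signed difference between the two PIR slots over time.

    A warm body entering the field warms slot A first (positive pulse),
    then slot B as it leaves (negative pulse); idle frames read ~0.
    """
    return [a - b for a, b in zip(slot_a, slot_b)]

# Made-up IR readings: ambient level 10, a warm body adds +5 to
# whichever slot currently sees it.
slot_a = [10, 15, 10, 10, 10]   # body crosses slot A at t=1
slot_b = [10, 10, 15, 10, 10]   # then slot B at t=2
print(differential_pulses(slot_a, slot_b))  # [0, 5, -5, 0, 0]
```

The sensor's onboard comparator does essentially this: it fires the digital output when the differential signal swings past a threshold in either direction, which is why uniform ambient warmth (both slots equal) produces no trigger.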

4.1.6 IOT

The Internet of Things refers to a type of network that connects anything with
the Internet, based on stipulated protocols, through information-sensing
equipment, in order to conduct information exchange and communication and
achieve smart recognition, positioning, tracing, monitoring and
administration.
An IoT ecosystem consists of web-enabled smart devices that use embedded
processors, sensors and communication hardware to collect, send and act on
data they acquire from their environments. IoT devices share the sensor data
they collect by connecting to an IoT gateway or other edge device where data is
either sent to the cloud to be analyzed or analyzed locally. Sometimes, these
devices communicate with other related devices and act on the information they
get from one another. The devices do most of the work without human
intervention, although people can interact with the devices -- for instance, to set
them up, give them instructions or access the data.

The connectivity, networking and communication protocols used with these
web-enabled devices largely depend on the specific IoT applications deployed.

Benefits of IoT

The internet of things offers a number of benefits to organizations, enabling
them to:

• monitor their overall business processes;
• improve the customer experience;
• save time and money;
• enhance employee productivity;
• integrate and adapt business models;
• make better business decisions; and
• generate more revenue.

IoT encourages companies to rethink the ways they approach their businesses,
industries and markets and gives them the tools to improve their business
strategies.
Consumer and enterprise IoT applications

There are numerous real-world applications of the internet of things, ranging
from consumer IoT and enterprise IoT to manufacturing and industrial IoT
(IIoT). IoT applications span numerous verticals, including automotive, telco,
energy and more.

In the consumer segment, for example, smart homes that are equipped with
smart thermostats, smart appliances and connected heating, lighting and
electronic devices can be controlled remotely via computers, smartphones or
other mobile devices.

Wearable devices with sensors and software can collect and analyze user data,
sending messages to other technologies about the users with the aim of making
users' lives easier and more comfortable. Wearable devices are also used for
public safety -- for example, improving first responders' response times during
emergencies by providing optimized routes to a location or by tracking
construction workers' or firefighters' vital signs at life-threatening sites.

In healthcare, IoT offers many benefits, including the ability to monitor patients
more closely to use the data that's generated and analyze it. Hospitals often use
IoT systems to complete tasks such as inventory management, for both
pharmaceuticals and medical instruments.

IoT applications

Smart buildings can, for instance, reduce energy costs using sensors that detect
how many occupants are in a room. The temperature can adjust automatically --
for example, turning the air conditioner on if sensors detect a conference room
is full or turning the heat down if everyone in the office has gone home.
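The occupancy-based temperature logic described above can be sketched as a simple rule. The thresholds and mode names below are illustrative assumptions, not from any real deployment:

```python
def hvac_action(occupants, temp_c, office_hours):
    """Pick an HVAC action from occupancy sensors and temperature.

    The 24 °C cooling trigger and 18 °C heating floor are assumed
    values chosen purely for illustration.
    """
    if occupants == 0 and not office_hours:
        return "heat_down"          # everyone has gone home
    if occupants > 0 and temp_c > 24.0:
        return "cooling_on"         # full conference room, too warm
    if occupants > 0 and temp_c < 18.0:
        return "heating_on"
    return "hold"

print(hvac_action(occupants=12, temp_c=26.0, office_hours=True))   # cooling_on
print(hvac_action(occupants=0, temp_c=21.0, office_hours=False))   # heat_down
```

In a real smart building this function would be fed by the room's occupancy and temperature sensors and its return value mapped to actuator commands.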
In agriculture, IoT-based smart farming systems can help monitor, for instance,
light, temperature, humidity and soil moisture of crop fields using connected
sensors. IoT is also instrumental in automating irrigation systems. In a smart
city, IoT sensors and deployments, such as smart streetlights and smart meters,
can help alleviate traffic, conserve energy, monitor and address environmental
concerns, and improve sanitation.

IoT security and privacy issues

The internet of things connects billions of devices to the internet and involves
the use of billions of data points, all of which need to be secured. Due to its
expanded attack surface, IoT security and IoT privacy are cited as major
concerns.

One of the most notorious recent IoT attacks was Mirai, a botnet that infiltrated
domain name system (DNS) provider Dyn and took down many websites for an
extended period of time in one of the biggest distributed denial-of-service
(DDoS) attacks ever seen. Attackers gained access to the network by exploiting
poorly secured IoT devices.

Because IoT devices are closely connected, all a hacker has to do is exploit one
vulnerability to manipulate all the data, rendering it unusable. And
manufacturers that don't update their devices regularly -- or at all -- leave them
vulnerable to cybercriminals.

Additionally, connected devices often ask users to input their personal
information, including names, ages, addresses, phone numbers and even social
media accounts -- information that's invaluable to hackers.

However, hackers aren't the only threat to the internet of things; privacy is
another major concern for IoT users. For instance, companies that make and
distribute consumer IoT devices could use those devices to obtain and sell users'
personal data.
Beyond leaking personal data, IoT poses a risk to critical infrastructure,
including electricity, transportation and financial services.

The future of IoT

There is no shortage of IoT market estimations. For example:

Bain & Company expects annual IoT revenue of hardware and software to
exceed $450 billion by 2020. McKinsey & Company estimates IoT will have an
$11.1 trillion impact by 2025.

IHS Markit believes the number of connected IoT devices will increase 12%
annually to reach 125 billion in 2030. Gartner assesses that 20.8 billion
connected things will be in use by 2020, with total spend on IoT devices and
services to reach $3.7 trillion in 2018.

CHAPTER V

5.1 RESULT
5.2 CONCLUSION

In this paper, an embedded face detection and recognition system with smart
security is designed to capture an image and send it by email. When a face is
detected and recognized, the system notifies the user through a smartphone and
displays who is in that area. By adding the face recognition system, people
will be easily recognized and a safer city will be built. This system helps to
enhance and automate the security of industries, cities, homes and towns.

5.3 FUTURE WORK

In future, a better global prediction can be formed from local predictions,
with temporal adjustments to improve consistency and accuracy over sequences
of predictions, and AVFD to improve the speed of detection. With all of the
previously mentioned improvements, the approach has been shown to have the
best cross-dataset performance. In addition, FERNet is a compact neural
network that achieves accuracy similar to the state of the art with
significantly faster performance on the same datasets, and better accuracy on
new data than ResNet.

APPENDIX

REFERENCES

[1] N. A. Othman, I. Aydin, “A New IoT Combined Face Detection of People by
Using Computer Vision for Security Application,” in Proc. 2017 IEEE
International Conference on Artificial Intelligence and Data Processing
(IDAP17), pp. 1–5.
[2] N. A. Othman, I. Aydin, “A New IoT Combined Body Detection of People
by Using Computer Vision for Security Application,” in Proc. 2017 IEEE
International Conference on Computational Intelligence and Communication
Networks (CICN 2017), pp.1-5.

[3] H. Gu, D. Wang, “A content-aware fridge based on RFID in smart home for
home healthcare,” in Proc. 2009 11th Advanced Communication Technology
Conference, pp. 987–990.

[4] S. Luo, H. Xia, Y. Gao, J. S. Jin, R. Athauda, “Smart fridges with
multimedia capability for better nutrition and health,” in Proc. 2008 IEEE
International Symposium on Ubiquitous Multimedia Computing, pp. 39–44.

[5] M. Rothensee, “A high-fidelity simulation of the smart fridge enabling
product based services,” in Proc. 2007 3rd IET International Conference on
Intelligent Environment, pp. 529–532.

[6] E. Borgia, “The Internet of things vision: Key features, applications and
open issues,” Computer Communications. Vol. 54, 1–31, December 2014.

[7] Vienna University of Technology, European Smart Cities, 2015.
http://www.smart-cities.eu (accessed 28.09.16).

[8] P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, “Eigenfaces vs.
fisherfaces: recognition using class specific linear projection,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp.
711–720, July 1997.

[9] T. Ojala and M. Pietikainen, “Multiresolution Gray-Scale and Rotation
Invariant Texture Classification with Local Binary Patterns,” IEEE Trans. on
Pattern Analysis and Machine Intelligence, vol. 24, pp. 971–987, July 2002.
[10] T. Ahonen, A. Hadid, M. Pietikainen, “Face description with LBP:
Application to face recognition,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 28, pp. 2037–2041, December 2006.

[11] M. F. Valstar, T. Almaev, J. M. Girard, G. McKeown, M. Mehu, L. Yin, M.
Pantic, and J. F. Cohn, “FERA 2015 - second facial expression recognition and
analysis challenge,” in Proc. 11th IEEE Int. Conf. Workshops Autom. Face
Gesture Recognit. (FG), May 2015, pp. 1–8.

[12] G. Zhao and M. Pietikäinen, “Dynamic texture recognition using local
binary patterns with an application to facial expressions,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 29, no. 6, pp. 915–928, Jun. 2007.

[13] T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and
rotation invariant texture classification with local binary patterns,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 24, no. 7, pp. 971–987, Jul. 2002.

[14] T. S. Lee, “Image representation using 2D Gabor wavelets,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 18, no. 10, pp. 959–971, Oct. 1996.

[15] S. Agrawal and P. Khatri, “Facial expression detection techniques: Based
on Viola and Jones algorithm and principal component analysis,” in Proc. 5th
Int. Conf. Adv. Comput. Commun. Technol., Feb. 2015, pp. 108–112.
