FINGERPRINT BIOMETRICS
Fingerprint using the vein pattern of the palm
UNIT-2
The system captures and analyzes the unique patterns and ridges
present on an individual's fingertips.
Minutiae points
Minutiae points are the major features of a fingerprint image and are
used in the matching of fingerprints. These minutiae points are used
to determine the uniqueness of a fingerprint image. A good-quality
fingerprint image can have 25 to 80 minutiae, depending on the
fingerprint scanner resolution and the placement of the finger on the
sensor.
Chaincode processing
This technique is based on the chaincode representation of object
contours; the pixel image can be fully recovered from the chaincode
of its contour. In this method, transitions from the white background
to the black foreground are identified by scanning the image from
top to bottom and right to left.
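The scanning step could be sketched as follows; the tiny binary image (0 for the white background, 1 for the black foreground) is made up purely for illustration, and a full chaincode extractor would go on to trace each contour from these transition points.

```python
import numpy as np

def find_transitions(binary_img):
    """Scan a binary image (0 = white background, 1 = black foreground)
    top to bottom and right to left, recording each white-to-black
    transition as a candidate contour starting point."""
    transitions = []
    rows, cols = binary_img.shape
    for r in range(rows):                    # top to bottom
        for c in range(cols - 1, 0, -1):     # right to left
            if binary_img[r, c] == 0 and binary_img[r, c - 1] == 1:
                transitions.append((r, c - 1))
    return transitions

img = np.array([[0, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0]])
print(find_transitions(img))    # one transition per foreground row
```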
FACE RECOGNITION
Background of face recognition
The dawn of Facial Recognition – 1960s
The earliest pioneers of facial recognition were Woody Bledsoe,
Helen Chan Wolf and Charles Bisson. In 1964 and 1965, Bledsoe,
along with Wolf and Bisson began work using computers to
recognise the human face.
However, it was later revealed that their initial work involved the
manual marking of various “landmarks” on the face such as eye
centres, mouth etc.
Advancing the accuracy of Facial Recognition – 1970s
Carrying on from the initial work of Bledsoe, the baton was picked
up in the 1970s by Goldstein, Harmon and Lesk who extended the
work to include 21 specific subjective markers including hair colour
and lip thickness in order to automate the recognition.
Using linear algebra for Facial Recognition – 1980s/90s
It wasn’t until the late 1980s that we saw further progress with the
development of Facial Recognition software as a viable biometric for
businesses. In 1988, Sirovich and Kirby began applying linear
algebra to the problem of facial recognition.
In 1991, Turk and Pentland carried on the work of Sirovich and Kirby
by discovering how to detect faces within an image which led to the
earliest instances of automatic facial recognition. This significant
breakthrough was hindered by technological and environmental
factors, however, it paved the way for future developments in Facial
Recognition technology.
FERET Programme – 1990s/2000s
The Defense Advanced Research Projects Agency (DARPA) and
the National Institute of Standards and Technology (NIST) rolled out
the Face Recognition Technology (FERET) programme in the early
1990s in order to encourage the commercial facial recognition
market.
The project involved creating a database of facial images. Included
in the test set were 2,413 still facial images representing 856
people.
The hope was that a large database of test images for facial
recognition would inspire innovation and may result in more
powerful facial recognition technology.
22OBM103- BIOMETRICS AND ITS APPLICATION
UNIT-3
Face Recognition Vendor Tests – 2000s
The National Institute of Standards and Technology (NIST) began
Face Recognition Vendor Tests (FRVT) in the early 2000s.
Face Recognition Grand Challenge – 2006
Launched in 2006, the primary goal of the Face Recognition Grand
Challenge (FRGC) was to promote and advance face recognition
technology designed to support existing face recognition efforts in
the U.S. Government.
The results indicated that the new algorithms were 10 times more
accurate than the face recognition algorithms of 2002 and 100 times
more accurate than those of 1995, showing the advancements in
facial recognition technology over the past decade.
Social Media – 2010-Current
Back in 2010, Facebook began implementing facial recognition
functionality that helped identify people whose faces may feature in
the photos that Facebook users upload daily.
iPhone X – 2017
Facial Recognition technology advanced rapidly from 2010 onwards
and September 12, 2017, was another significant breakthrough for
the integration of facial recognition into our day to day lives. This
was the date that Apple launched the iPhone X – the first iPhone
users could unlock with FaceID – Apple’s marketing term for facial
recognition.
NEC and Facial Recognition
NEC's facial recognition technology is used in settings such as
border controls, airlines, airports, transport hubs, stadiums,
mega-events, concerts, and conferences.
Face recognition can often prove one of the best biometrics
because images can be taken without touching or interacting with
the individual being identified, and those images are recorded and
instantly checked against existing databases.
With the ability to process and analyse multiple camera feeds and
thousands of faces per minute, NEC’s powerful face recognition is
able to address the largest and most difficult security challenges
with unparalleled efficiency, sensitivity, and perception.
DESIGN OF A FACE RECOGNITION SYSTEM
A thorough survey has revealed that various methods, and
combinations of these methods, can be applied in the development
of a new face recognition system.
This stage consists of image capturing, database of faces, feature
extraction process and another database for storing templates of
features extracted using a suitable training method.
The features are extracted and converted into equivalent
representations as templates.
As discussed in the previous section, there are several ways in
which face recognition systems can be designed, based either on
the templates or on the training methods used. Common approaches
include template-based and geometrical feature-based methods.
Other template-based methods use neural networks (NNs) in
different ways.
Neural network for face recognition
NNs have several layers of neurons called hidden layers. The
number of neurons may differ between hidden layers, and these
layers are chosen depending on the specific data to be learned.
Training of Neural Network
Just as a brain is trained and the weights of its neurons are adjusted
to reach a decision, NNs are trained so that a learning mechanism
takes place in the network. There are many different techniques for
training NNs, but they are mainly categorized into two
types: supervised and unsupervised learning. NNs can be operated
in an adaptive manner simply by adjusting the weights of the
neurons or nodes of the network.
Weights of neurons are modified by a learning process.
Usually, in order to get the desired response or output of the
network, the weights need to be modified.
The process of changing weights to get the desired output is
called learning of the network.
Again, the learning can be compared to a human brain. Many things
are not taught to us: children learn from the environment without
being taught by anyone.
This is called unsupervised learning. When the learning process is
guided by someone else, it is called supervised learning.
An overview of NNs, and how they help in biometrics and computer
vision, is discussed in this chapter using a practical example.
1. Supervised learning: A single neuron can be trained by an
ordinary least mean squares (LMS) algorithm. This algorithm
minimizes the mean square error and calculates the inverse of
an auto-correlation matrix. If the network is multilayered,
consisting of many hidden layers, then the widely used
back-propagation algorithm can be applied.
The back-propagation algorithm has a slow convergence
speed and has therefore seen many variations and
improvements, such as a variable learning rate.
Weights of neurons are adjusted by propagating the error back
through previous layers so as to achieve the desired output.
Supervised learning or training is used to obtain a desired output.
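A minimal sketch of LMS training for a single neuron, assuming made-up noiseless data and an arbitrary learning rate; it only illustrates the per-sample weight-update rule, not a production training loop.

```python
import numpy as np

# Made-up regression data: 100 samples, 3 features, noiseless targets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w

w = np.zeros(3)              # initial weights of the neuron
lr = 0.01                    # learning rate (illustrative value)
for _ in range(200):         # repeated passes over the data
    for x_i, y_i in zip(X, y):
        err = y_i - w @ x_i  # error for this sample
        w += lr * err * x_i  # LMS weight update

print(np.round(w, 2))        # weights approach true_w
```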
2. Unsupervised learning: Unsupervised learning is often used
when there is no target signal.
Detecting faces in video sequences involves applying face detection
algorithms to each frame of the video. One popular approach is to
use a method like the Viola-Jones algorithm or deep learning-based
methods like Single Shot Multibox Detector (SSD), Faster R-CNN,
or You Only Look Once (YOLO).
Here's a general overview of how you could approach face
detection in video sequences:
Frame Extraction: Extract frames from the video sequence at a
certain frame rate or keyframes.
Face Detection: Apply a face detection algorithm to each frame.
This could be a classical method like Haar cascades (used in
Viola-Jones) or a deep learning-based approach like SSD, Faster
R-CNN, or YOLO.
Tracking (Optional): Optionally, you might want to incorporate
tracking algorithms to track faces across frames to improve
robustness and efficiency. Popular tracking algorithms include
Kalman filters, Particle filters, or Deep SORT.
Visualization/Output: Visualize the detected faces in the video
frames or output the results in a desired format (e.g., bounding box
coordinates, timestamps, etc.).
1. Frame Extraction:
Before detecting faces in a video sequence, you need to break it
down into individual frames.
2. Face Detection:
Once you have extracted frames from the video, you apply a face
detection algorithm to each frame. Here are some common
approaches:
FACE RECOGNITION
Deep Learning-Based Methods:
Single Shot Multibox Detector (SSD): SSD is a deep
learning-based object detection algorithm. It predicts bounding
boxes and class labels for multiple objects in a single pass
through the network.
Faster R-CNN: Faster R-CNN is another deep learning-based
object detection algorithm. It consists of two main modules: a
region proposal network (RPN) for generating region proposals
and a network for detecting objects within those proposals.
You Only Look Once (YOLO): YOLO is a real-time object
detection system. It divides the input image into a grid and
predicts bounding boxes and probabilities for each grid cell.
3. Tracking (Optional):
While face detection in individual frames can be effective, it may not
maintain consistency across frames. Therefore, incorporating
tracking algorithms can be beneficial. Tracking algorithms help
associate detections in successive frames, enabling you to track
faces over time.
Common tracking algorithms include:
Kalman Filters: Kalman filters are recursive estimators that use a
series of measurements observed over time to estimate the state of
a process.
Particle Filters: Particle filters, also known as sequential Monte
Carlo methods, represent the posterior density by a set of random
samples, known as particles.
Deep SORT (Simple Online and Realtime Tracking with a Deep
Association Metric): Deep SORT combines deep learning-based
object detection with a Kalman filter-based tracking algorithm.
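A minimal constant-velocity Kalman filter for tracking a face's bounding-box center across frames might look like this; the state model, noise values, and the detection sequence are all illustrative assumptions.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],            # state transition: pos += vel*dt
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],             # we only measure position
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-2                    # process noise (illustrative)
R = np.eye(2) * 1.0                     # measurement noise (illustrative)

x = np.zeros(4)                         # state: [px, py, vx, vy]
P = np.eye(4) * 10.0                    # state covariance

for z in [(10, 10), (12, 11), (14, 12), (16, 13)]:  # per-frame detections
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new detection z
    y = np.asarray(z, float) - H @ x    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

print(np.round(x[:2], 1))               # filtered position estimate
```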
4. Post-processing (Optional):
After face detection and possibly tracking, you may want to perform
post-processing steps to refine the results. This could include:
Non-maximum Suppression: This technique is used to eliminate
redundant detections by keeping only the most confident ones
among overlapping bounding boxes.
Size or Position Filtering: Filtering out detections based on their
size or position within the frame can help remove false positives or
irrelevant detections.
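Non-maximum suppression can be sketched as below; the boxes are (x1, y1, x2, y2) and the IoU threshold is a typical but arbitrary choice.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    boxes = np.asarray(boxes, float)
    order = np.argsort(scores)[::-1]    # most confident first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]   # drop overlapping boxes
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))   # → [0, 2]: box 1 overlaps box 0 and is dropped
```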
5. Visualization/Output:
Finally, you can visualize the detected faces in the video frames by
drawing bounding boxes around them or output the results in a
desired format, such as bounding box coordinates, timestamps, or
labels.
Face recognition methods
These methods aim to identify or verify individuals by analyzing and
comparing their facial features. Here are some common approaches
to face recognition:
1. Eigenfaces:
Eigenfaces is a classic method for face recognition that treats face
images as vectors in a high-dimensional space and uses Principal
Component Analysis (PCA) to reduce their dimensionality, extracting
a set of "eigenfaces" that represent the principal components of
facial variation. Face recognition is then performed by comparing the
projections of a face onto the eigenfaces.
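A toy eigenfaces sketch on synthetic 8x8 "face" images; real systems use large, aligned face datasets, and the random data here is purely for illustration of the projection-and-match idea.

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(size=(20, 64))       # 20 images flattened to 64-dim

mean_face = faces.mean(axis=0)
centered = faces - mean_face            # center the data

# eigenfaces = top right-singular vectors of the centered data (via SVD)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:5]                     # keep 5 principal components

# project a probe image onto the eigenfaces and match by distance
probe = faces[3] - mean_face
weights = eigenfaces @ probe
gallery = centered @ eigenfaces.T       # projections of all images
best = int(np.argmin(np.linalg.norm(gallery - weights, axis=1)))
print(best)                             # → 3 (the probe matches itself)
```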
2. Fisherfaces (Linear Discriminant Analysis):
Similar to Eigenfaces, Fisherfaces uses dimensionality reduction
techniques like Linear Discriminant Analysis (LDA) to find a set of
discriminant features that maximize the ratio of between-class
variance to within-class variance.
3. Local Binary Patterns (LBP):
Local Binary Patterns is a texture descriptor that characterizes the
local structure of images. In face recognition, LBP extracts texture
features from facial images by comparing the intensity of pixels with
neighboring pixels. These features are then used for classification or
matching faces.
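The basic 3x3 LBP operator can be sketched as follows: each pixel's 8 neighbours are thresholded against the centre and packed into a byte. The neighbour ordering and bit weights follow one common convention; others exist.

```python
import numpy as np

def lbp_pixel(patch):
    """patch: 3x3 array; returns the LBP code of the centre pixel."""
    center = patch[1, 1]
    # clockwise from top-left
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= center:                 # 1 if neighbour >= centre
            code |= 1 << bit
    return code

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 8]])
print(lbp_pixel(patch))                 # → 58
```

A face image is then described by the histogram of these codes over local regions.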
4. Histogram of Oriented Gradients (HOG):
HOG is another feature descriptor commonly used in object
detection and recognition tasks, including face recognition. It
computes the distribution of gradient orientations in localized
portions of an image. In face recognition, HOG descriptors capture
the spatial layout of facial features such as edges and corners.
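A tiny HOG-flavoured sketch: the gradient orientation histogram for one cell, on a made-up intensity ramp. Full HOG adds overlapping cells and block normalization, which are omitted here.

```python
import numpy as np

def cell_histogram(cell, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation
    hist = np.zeros(bins)
    bin_width = 180 / bins
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // bin_width) % bins] += m    # vote by magnitude
    return hist

cell = np.tile(np.arange(8), (8, 1))    # horizontal intensity ramp
hist = cell_histogram(cell)
print(np.argmax(hist))                  # → 0: all gradients are horizontal
```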
5. Convolutional Neural Networks (CNNs):
Deep learning-based approaches, particularly Convolutional Neural
Networks (CNNs), have gained significant popularity in recent years
for face recognition tasks.
6. Siamese Networks:
Siamese networks are neural networks with shared weights that
take pairs of images as input and learn to output a similarity score
indicating whether the images belong to the same person or not.
7. Triplet Loss:
Triplet loss is a loss function used in training siamese networks and
other deep learning-based face recognition models. It minimizes the
distance between the embeddings of matching pairs of images while
maximizing the distance between embeddings of non-matching
pairs, encouraging the network to learn discriminative features.
8. Ensemble Methods:
Ensemble methods combine multiple base classifiers to improve
overall recognition performance. Techniques such as bagging,
boosting, and stacking can be applied to face recognition models to
enhance robustness and generalization.
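The triplet loss described above can be sketched on toy embedding vectors; the margin and the embeddings below are illustrative values, not trained outputs.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.sum((anchor - positive) ** 2)   # same-person distance
    d_neg = np.sum((anchor - negative) ** 2)   # different-person distance
    return max(d_pos - d_neg + margin, 0.0)    # hinge on the margin

anchor = np.array([0.1, 0.9])
positive = np.array([0.12, 0.88])       # same identity: close embedding
negative = np.array([0.9, 0.1])         # different identity: far embedding
print(triplet_loss(anchor, positive, negative))   # → 0.0 (margin satisfied)
```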
Principal Component Analysis (PCA) is a dimensionality reduction
technique commonly used in various fields, including image
processing, pattern recognition, and data analysis. PCA aims to
transform high-dimensional data into a lower-dimensional space
while preserving most of the variance in the data. It achieves this by
identifying the directions, or principal components, along which the
data varies the most.
1. Data Standardization:
PCA requires that the data is standardized (i.e., centered around
zero with unit variance) since it's sensitive to the scale of the
features.
2. Covariance Matrix Computation:
Given a dataset with m samples and n features, PCA computes
the covariance matrix Σ of the data. The covariance between two
features measures how they vary together.
Σ = (1/m) X^T X
where X is the m×n (centered) data matrix, with each row representing
a sample and each column representing a feature.
3. Eigendecomposition:
PCA then performs an eigendecomposition of the covariance matrix
Σ to find its eigenvectors and eigenvalues.
The eigenvectors represent the directions (principal components)
along which the data varies the most, and the eigenvalues represent
the amount of variance explained by each principal component.
4. Principal Component Selection:
The principal components are sorted in descending order of their
corresponding eigenvalues. The principal component with the
highest eigenvalue explains the most variance in the data, followed
by the second principal component, and so on.
Typically, only a subset of the principal components that capture
most of the variance (e.g., 95%) is retained.
5. Projection:
Finally, PCA projects the original data onto the selected principal
components to obtain the lower-dimensional representation.
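The five PCA steps above can be sketched with NumPy on made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))            # 50 samples, 4 features

# 1. standardize: zero mean, unit variance per feature
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. covariance matrix (m = number of samples)
m = Xs.shape[0]
cov = (Xs.T @ Xs) / m

# 3. eigendecomposition of the (symmetric) covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # returned in ascending order

# 4. sort descending; keep components covering ~95% of the variance
idx = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1

# 5. project the data onto the top-k principal components
Z = Xs @ eigvecs[:, :k]
print(Z.shape)                          # (50, k) lower-dimensional data
```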
PCA is widely used for various purposes, including: