Age and Gender Classification Report
Age and gender, two key facial attributes, play a foundational role in social interactions, making age and gender estimation from a single face image an important task in intelligent applications such as access control, human-computer interaction, law enforcement, marketing intelligence and visual surveillance. Automatic age and gender classification has become relevant to an increasing number of applications, particularly since the rise of social platforms and social media. Nevertheless, the performance of existing methods on real-world images is still significantly lacking, especially when compared to the tremendous leaps in performance recently reported for the related task of face recognition. Age and gender classification using Convolutional Neural Networks gives higher accuracy than the previous methods.
CHAPTER 1
INTRODUCTION
The goal of this work is to detect a face from a given image and to predict the gender (male or female) based on appearance cues such as baldness, long hair, beard and mustache, and likewise to estimate the age group from the face. Deep learning, a subset of machine learning, uses layered networks to extract features from the given input images. The concept of deep learning was inspired by how neurons function in our brain, hence such models are also called deep neural networks. A deep neural network is capable of detecting objects in an image and also capable of differentiating one object from another.
CHAPTER 2
LITERATURE SURVEY
The surveyed works examine which facial features are actually used for prediction and how these features depend on age and gender.

One line of work performs age and gender classification with a hybrid model in which a CNN is used to extract the features from the input images while an Extreme Learning Machine (ELM) carries out the classification.

The classification of a human's age and gender from speech and face data has also been studied and evaluated on a standard database.

Analyses of facial regions for age and gender classification show that the mouth and eye regions are useful for age classification, whereas the eye region alone is already informative for gender classification.

With the Internet of Things, devices are able to share their information for various applications. One study classifies age (between 6 and 55 years) and gender (i.e., male and female) from recordings captured in the eyes-closed resting state using an EEG sensor.
CHAPTER 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
The human face holds a significant amount of qualities and data about a person, such as gender and age. Humans can identify and examine these data naturally for tasks like gender classification. Most existing systems use only OpenCV for age and gender classification. The existing system is not suitable for large enterprises.
3.1.1 DRAWBACKS
• Low accuracy.
• Time-consuming process.
3.2 PROPOSED SYSTEM
The proposed system uses a Convolutional Neural Network (CNN) to detect faces in an image and to classify each detected face by gender (male or female) and by age group. Compared with the OpenCV-only existing system, the CNN-based approach gives higher accuracy and scales better to large deployments.
3.2.1 ADVANTAGES
• Low complexity.
• Compared to the existing system, it doesn't incur any kind of drastic increase in expenses.
• Python is open source and readily available for all users, so the system is cost-efficient.
CHAPTER 4
Hardware Requirements:
• RAM - 4 GB
4.3 MACHINE LEARNING
Traditional programming requires humans to collect data and fit that data into models that can be understood and utilized by people. Machine learning algorithms instead allow computers to train on data inputs and use statistical analysis to produce outputs. Any technology user today has benefitted from machine learning; facial recognition is one familiar example. Here in this thesis, we provide basic information on the common machine learning methods. Supervised learning trains algorithms based on example input and output data that is labeled by humans, whereas unsupervised learning provides the algorithm with no labeled data in order to allow it to find structure within its input.
Supervised learning is where you have input variables (x) and an output variable (Y), and you use an algorithm to learn the mapping function from the input to the output, Y = f(X). The goal is to approximate the mapping function so well that when you have new input data (x), you can predict the output variables (Y) for that data. Techniques of supervised learning include classification and regression, both carried out by machines. Supervised learning requires that the data used to train the algorithm is already labeled; for example, a classifier may be trained on a dataset of images that are properly labeled with the species of the animal shown. Both tasks build a model that can predict the value of the dependent attribute from the other attribute variables. The difference between the two tasks is the fact that the dependent attribute is numerical for regression and categorical for classification.
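As a toy illustration of learning the mapping Y = f(X), the sketch below memorises a handful of labelled examples and predicts the label of a new input from its nearest training point. The data values and labels are invented for the example.

```python
# Minimal sketch of supervised learning as approximating Y = f(X):
# a 1-nearest-neighbour "model" memorises labelled training pairs and
# predicts the label of the closest training input.

train_X = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]                 # input variable (x)
train_Y = ['young', 'young', 'young', 'old', 'old', 'old']  # output variable (Y)

def predict(x):
    # approximate f(X) by the label of the nearest training input
    nearest = min(range(len(train_X)), key=lambda i: abs(train_X[i] - x))
    return train_Y[nearest]

print(predict(2.5))   # falls near the 'young' examples
print(predict(11.5))  # falls near the 'old' examples
```

New unseen inputs are classified purely from their proximity to the labelled examples, which is the essence of learning the mapping from data rather than hand-coding it.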
4.3.3 CLASSIFICATION
Classification is the task of categorizing a given set of data into subcategories, but done by a machine. If that doesn't sound like much, consider that the machine has to learn the boundaries between categories purely from examples.
TYPES OF CLASSIFICATION
• Binary Classification: The number of classes is 2. The input is classified, using a set of variables called features, into one of two possible outcomes. For example, predicting whether a patient has a certain disease or not:
1. The patient has the said disease, a result labeled "Yes" or "True".
2. The patient does not have the disease, labeled "No" or "False".
• Multiclass Classification: The number of classes is more than 2. For example, classifying a face into one of several age groups.
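A binary classifier ultimately reduces to a rule that maps features to one of two labels. The fragment below is a deliberately simple, hypothetical example: it labels a patient "Yes" or "No" from a single made-up risk score and threshold.

```python
# Hypothetical binary classifier: one feature (a risk score in [0, 1])
# and a made-up decision threshold produce one of two class labels.

def has_disease(risk_score, threshold=0.5):
    return "Yes" if risk_score >= threshold else "No"

print(has_disease(0.8))  # "Yes"
print(has_disease(0.2))  # "No"
```

A trained classifier differs from this toy only in that the decision rule (here, a fixed threshold) is learned from the training data instead of chosen by hand.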
The classifier is learned from a set of labeled examples called the training data set, which comprises sample data with the actual class of the observations. One can, for example, train a model on patient records and use that model to predict whether a certain patient will have the disease.

Fig: Generalized Classification Block Diagram.

Among the components of the block diagram:
7. ML Algorithm: The algorithm that is used to update the weights w' of the model.
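The "ML Algorithm" component above is the piece that updates the weights w'. As a minimal generic sketch (not the CNN used in this project), a perceptron adjusts its weights only when it misclassifies a training example; the toy data below is made up and linearly separable.

```python
# Perceptron weight-update sketch: weights w and bias b are nudged
# toward each misclassified example until the data is separated.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y is +1 or -1
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else -1
            if pred != y:                          # misclassified: update weights
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b    += lr * y
    return w, b

# Toy data: class +1 on the right of the plane, class -1 on the left.
X = [(2.0, 1.0), (3.0, 2.0), (-2.0, -1.0), (-3.0, -2.0)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
print(w, b)
```

The update rule w = w + lr * y * x is the simplest concrete instance of "updating the weights w'" mentioned in the diagram.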
• Bayesian Regression
Several of these classification algorithms have regression techniques at their core.
4.3.4 REGRESSION
Among the many regression techniques, the simplest is linear regression. It tries to fit the data with the best possible straight line.
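The best-fit line can be computed in closed form with ordinary least squares; the numbers below are toy data that roughly follow y = 2x.

```python
# Ordinary least squares for y = m*x + c on toy data (values invented).

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]   # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope: covariance of x and y divided by variance of x
m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
# intercept: the line must pass through the mean point
c = mean_y - m * mean_x
print(round(m, 2), round(c, 2))
```

The fitted slope comes out close to 2, matching the rule used to generate the toy data.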
4.3.5 UNSUPERVISED LEARNING
In unsupervised learning there is input data but no corresponding output variable; the goal is to model the underlying structure of the data in order to learn more about the data. These are called unsupervised learning problems.
4.3.5.1 CLUSTERING
Clustering is the task of dividing the data points into a number of groups such that data points in the same group are more similar to each other and dissimilar to the data points in other groups. The points in the graph below that lie close together can be classified into one single group, and we can distinguish the clusters visually. These data points are clustered using the basic concept that a data point lies within a given constraint (distance) from the cluster center. Various distance methods and techniques are used for calculation of the outliers. Clustering determines the intrinsic grouping among the unlabeled data present. There are no universal criteria for a good clustering; it depends on the user and what criteria they may use.
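The distance idea described above can be sketched in a few lines: a point whose Euclidean distance from the cluster centre exceeds a chosen constraint (radius) is flagged as an outlier. The centre, radius, and points below are made up for illustration.

```python
import math

# Distance-based outlier check: a point belongs to a cluster only if it
# lies within the given constraint (radius) of the cluster centre.

def is_outlier(point, center, radius):
    dist = math.dist(point, center)   # Euclidean distance (Python 3.8+)
    return dist > radius

center = (0.0, 0.0)
print(is_outlier((1.0, 1.0), center, radius=2.0))   # within the radius
print(is_outlier((5.0, 5.0), center, radius=2.0))   # outside: an outlier
```

Other distance measures (Manhattan, cosine, etc.) can be substituted for `math.dist` depending on the data.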
Clustering methods fall into several families:
• Density-based methods: clusters are taken to be the dense regions of the space having some similarity, distinct from the lower-density regions. These methods have good accuracy and can find clusters of arbitrary shape.
• Hierarchical methods: the clusters form a tree-type structure based on the hierarchy, and new clusters are formed using the previously formed ones. It is divided into two categories:
  – Agglomerative (bottom-up approach)
  – Divisive (top-down approach)
• Partitioning methods: the objects are divided into k clusters and each partition forms one cluster. This method is used to optimize a chosen objective criterion.
• Grid-based methods: the data space is divided into a finite number of cells that form a grid-like structure. All the clustering operations done on these grids are fast and independent of the number of data objects.
Clustering Algorithms:
• K-Means Clustering.
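A compact, pure-Python sketch of the K-Means idea on one-dimensional toy data follows, alternating the assignment and centre-update steps; real implementations add smarter initialisation and convergence checks.

```python
# K-Means sketch on 1-D toy data: repeatedly assign each point to its
# nearest centre, then move each centre to the mean of its cluster.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # update step: each centre moves to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
print(kmeans(points, centers=[1.0, 10.0]))  # centres settle on the two groups
```

With the two obvious groups in the toy data, the centres converge to the group means after a single iteration.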
4.4.1 PYTHON
Python is an interpreted, high-level, general-purpose programming language. It supports multiple programming paradigms, including structured (particularly, procedural), object-oriented, and functional programming. It is dynamically typed and garbage-collected, using reference counting. Python 2 was sunset on January 1, 2020 (originally planned for 2015), and "Python 2.7.18 is the last Python 2.7 release and therefore the last Python 2 release." No more security patches or other improvements will be released for it.
4.4.1.1 FEATURES
Python's design philosophy is summarized in The Zen of Python, which includes aphorisms such as "Readability counts." Rather than having all of its functionality built into its core, Python was designed to be highly extensible via modules. In contrast to Perl's "there is more than one way to do it" motto, Python embraces a "there should be one, and preferably only one, obvious way to do it" philosophy. Alex Martelli, a Fellow at the Python Software Foundation and Python book author, writes that "To describe something as 'clever' is not considered a compliment in the Python culture." Cython, an optimizing compiler, translates a Python script into C and makes direct C-level API calls into the Python interpreter. The language's name comes from Monty Python, so tutorials and reference materials often contain examples that refer to spam and eggs instead of the standard foo and bar.
CHAPTER 5
ARCHITECTURE
CHAPTER 6
SYSTEM MODULES
6.1 MODULES
This is almost similar to the gender detection part, except that the corresponding prototxt file and caffe model file are "deploy_agenet.prototxt" and "age_net.caffemodel". Furthermore, the output layer (probability layer) in this CNN consists of 8 values for 8 age classes ("0–2", "4–6", "8–13", "15–20", "25–32", "38–43", "48–53" and "60–").
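Reading out the 8-value probability layer amounts to taking the argmax over the class scores; the probability numbers below are invented for illustration.

```python
# Mapping the CNN's 8-way softmax output to an age bracket: the
# predicted class is the index of the largest probability (argmax).

ageList = ['(0-2)', '(4-6)', '(8-13)', '(15-20)',
           '(25-32)', '(38-43)', '(48-53)', '(60-)']

probs = [0.01, 0.02, 0.05, 0.10, 0.60, 0.15, 0.05, 0.02]  # hypothetical output
age = ageList[probs.index(max(probs))]
print(age)   # '(25-32)'
```

The same argmax readout is used for the gender network, which has only two output values.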
CHAPTER 7
SAMPLE CODE
import cv2
import math
import argparse

def highlightFace(net, frame, conf_threshold=0.7):
    frameOpencvDnn=frame.copy()
    frameHeight=frameOpencvDnn.shape[0]
    frameWidth=frameOpencvDnn.shape[1]
    # preprocess the frame into a blob for the face detector
    blob=cv2.dnn.blobFromImage(frameOpencvDnn, 1.0, (300,300), [104,117,123], True, False)
    net.setInput(blob)
    detections=net.forward()
    faceBoxes=[]
    for i in range(detections.shape[2]):
        confidence=detections[0,0,i,2]
        if confidence>conf_threshold:
            x1=int(detections[0,0,i,3]*frameWidth)
            y1=int(detections[0,0,i,4]*frameHeight)
            x2=int(detections[0,0,i,5]*frameWidth)
            y2=int(detections[0,0,i,6]*frameHeight)
            faceBoxes.append([x1,y1,x2,y2])
            cv2.rectangle(frameOpencvDnn,(x1,y1),(x2,y2),(0,255,0),
                          int(round(frameHeight/150)),8)
    return frameOpencvDnn,faceBoxes

parser=argparse.ArgumentParser()
parser.add_argument('--image')
args=parser.parse_args()

faceProto="opencv_face_detector.pbtxt"
faceModel="opencv_face_detector_uint8.pb"
ageProto="age_deploy.prototxt"
ageModel="age_net.caffemodel"
genderProto="gender_deploy.prototxt"
genderModel="gender_net.caffemodel"

MODEL_MEAN_VALUES=(78.4263377603, 87.7689143744, 114.895847746)
ageList=['(0-2)','(4-6)','(8-13)','(15-20)','(25-32)','(38-43)','(48-53)','(60-)']
genderList=['Male','Female']

faceNet=cv2.dnn.readNet(faceModel,faceProto)
ageNet=cv2.dnn.readNet(ageModel,ageProto)
genderNet=cv2.dnn.readNet(genderModel,genderProto)

# open the image passed on the command line, or the webcam if none is given
video=cv2.VideoCapture(args.image if args.image else 0)
padding=20
while cv2.waitKey(1)<0:
    hasFrame,frame=video.read()
    if not hasFrame:
        cv2.waitKey()
        break
    resultImg,faceBoxes=highlightFace(faceNet,frame)
    if not faceBoxes:
        print("No face detected")
    for faceBox in faceBoxes:
        # crop the detected face with some padding, clamped to the frame
        face=frame[max(0,faceBox[1]-padding):
                   min(faceBox[3]+padding,frame.shape[0]-1),
                   max(0,faceBox[0]-padding):
                   min(faceBox[2]+padding,frame.shape[1]-1)]
        blob=cv2.dnn.blobFromImage(face,1.0,(227,227),MODEL_MEAN_VALUES,swapRB=False)
        genderNet.setInput(blob)
        genderPreds=genderNet.forward()
        gender=genderList[genderPreds[0].argmax()]
        print(f'Gender: {gender}')
        ageNet.setInput(blob)
        agePreds=ageNet.forward()
        age=ageList[agePreds[0].argmax()]
        print(f'Age: {age[1:-1]} years')
        cv2.putText(resultImg,f'{gender}, {age}',(faceBox[0],faceBox[1]-10),
                    cv2.FONT_HERSHEY_SIMPLEX,0.8,(0,255,255),2,cv2.LINE_AA)
    cv2.imshow("Detecting age and gender",resultImg)
CHAPTER 8
TESTING
Testing begins at the module level and works "outward" toward the integration of the entire computer-based system.
CHAPTER 9
CONCLUSION
Artificial Intelligence systems have grown rapidly over the last few years. This enabled us to create, using multiple models and frameworks, a system capable of detecting faces and classifying them by age and gender. The main objective of this research work was to create an efficient system able to detect faces in images, to classify those faces by age and gender, and to evaluate wrong outputs in order to find an underlying reason for such failures. To fulfill this objective, various frameworks capable of detecting faces in images and of classifying them into age and gender classes were tested and validated in order to understand which would fit our problem best.
APPENDIX
A.1 SCREENSHOTS
REFERENCES
[1] Yann LeCun, Yoshua Bengio & Geoffrey Hinton, “Deep Learning”,
2015, Nature Volume 521 p. 436–444. doi: 10.1038/nature14539
[3] Haoxiang Li, Zhe Lin, Xiaohui Shen, Jonathan Brandt & Gang Hua,
“A Convolutional Neural Network Cascade for Face Detection”, 2015,
In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition p. 5325-5334. doi: 10.1109/CVPR.2015.7299170
[4] Shuo Yang, Ping Luo, Chen Change Loy & Xiaoou Tang, “From
Facial Parts Responses to Face Detection: A Deep Learning Approach”,
2015, In Proceedings of the IEEE International Conference on Computer
Vision p. 3676-3684. doi: 10.1109/ICCV.2015.419
[7] Xudong Suna, Pengcheng Wua & Steven C.H. Hoi, “Face detection
using deep learning: An improved faster RCNN approach”, 2018. doi:
10.1016/j.neucom.2018.03.030
[8] Shuo Yang, Yuanjun Xiong, Chen Change Loy & Xiaoou Tang,
“Face Detection through Scale-Friendly Deep Convolutional Networks”,
2017, Neurocomputing Volume 299 p.42-50
[9] Gil Levi & Tal Hassner, “Age and Gender Classification using
Convolutional Neural Networks”, 2015, In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition p.34-42. doi:
10.1109/CVPRW.2015.7301352.
[10] Eran Eidinger, Roee Enbar & Tal Hassner, “Age and Gender
Estimation of Unfiltered Faces”, 2014, IEEE Transactions on
Information Forensics and Security Volume 9 p. 2170-2179. doi:
10.1109/TIFS.2014.2359646
[11] Afshin Dehghan, Enrique G. Ortiz, Guang Shu & Syed Zain
Masood, “DAGER: Deep Age, Gender and Emotion Recognition Using
Convolutional Neural Networks”, 2017, Retrieved December 3, 2018
[12] Rajeev Ranjan, Swami Sankaranarayanan, Carlos D. Castillo &
Rama Chellappa, “An All-In-One Convolutional Neural Network for
Face Analysis”, 2017, In Proceedings of the IEEE International
Conference on Automatic Face & Gesture Recognition. doi:
10.1109/FG.2017.137
[15] Tsun-Yi Yang, Yi-Hsuan Huang, Yen-Yu Lin, Pi-Cheng Hsiu &
Yung-Yu Chuang, “SSR- Net: A Compact Soft Stagewise Regression
Network for Age Estimation”, 2018, In Proceedings of the Twenty-
Seventh International Joint Conference on Artificial
Intelligence. doi: 10.24963/ijcai.2018/150