Abstract: Facial landmarks are used in many research areas, such as biometric facial recognition, 2D and 3D facial reconstruction, and several other areas of forensic science (or forensics). In forensic science, and in forensic medicine in particular, the use of facial landmarks has attracted the attention of the scientific community and of forensic experts, with special focus on the analysis of cephalometric landmarks. Previous works demonstrated that the descriptive adequacy of these anatomical references for an indirect application (photo-anthropometric description) increased the marking precision of these points, contributing to greater reliability of these analyses. However, most of these analyses are performed manually, and all of them carry the subjectivity inherent to the examiners. In this sense, the purpose of this work is the development and validation of automatic techniques to detect cephalometric landmarks in digital images of frontal faces. The proposed approach combines computer vision and image processing techniques to achieve accuracy scores, evaluated with the F1-score, of about 100% when compared both with a group of human manual markers of cephalometric landmarks and with other state-of-the-art facial landmark frameworks.

I. INTRODUCTION

Facial landmarks are used in many research areas, such as biometric facial recognition [1], 2D and 3D facial reconstruction [2], and several other areas of forensic science (or forensics). In forensic science, and in forensic medicine in particular, the use of facial landmarks has attracted the attention of the scientific community and of forensic experts, with special focus on the analysis of cephalometric landmarks.

Forensic analysis to identify persons, for example, is a complex process that can employ various methods executed by the forensic agent, such as DNA testing, papilloscopy and dental analysis, as described in [3]. Regarding the use of facial images for human identification, most of these techniques rely on a specialist and a manual approach to achieve the expected result, and depend on the experience, expertise and commitment of the experts during the tests [4], [5]. In facial images, different issues can interfere with quality, such as illumination, camera position relative to the person, image resolution, face pose and the loss of three-dimensional information in two-dimensional images [6].

Nowadays, with the advancement of computing power, the improvement of programming languages and the effective arrival of high-resolution images, digital image processing applied to improve and/or automate forensic processes has increased the accuracy of the work carried out by the forensic expert. Several examples of such applications can be found in the literature, including the use of X-ray images [7], craniofacial analysis of three-dimensional models [8], [9], reconstruction [10] and frontal-pose recovery from profile images [11], among others.

Among the various imaging techniques, pattern recognition and computer vision can be adapted to aid the forensic expert in their activities. This paper discusses the use of photo anthropometry with automated techniques for identifying facial cephalometric points in frontal-view images. In the current contribution, we aim to address the implementation of an efficient prototype for the automatic detection of cephalometric landmarks in frontal face images.

This paper is organized as follows: Section II presents the main works on facial cephalometric landmarks found in the scientific literature, with special attention to photo anthropometry. Section III describes the proposed method and all processing details for identifying and extracting these landmarks from the image. Section IV presents the experimental results, and Sections V and VI present the discussion and conclusions, respectively.

II. RELATED WORKS

Facial cephalometric landmarks are traditionally produced from direct analysis or indirect surveys of X-ray images, commonly used in the lateral cephalometric standard for orthodontic planning [12]. Generally, the process of identifying individuals requires that these points be located on the face region with a high degree of precision. From the correct location of these points it is possible to identify the individual and, in addition, to extract relevant information that may characterize that individual within a specific population and correlate them to a particular geographical region of the country [13], [14].

The first computational techniques for digital image processing, pattern recognition and computer vision were developed around 1964, with the purpose of correcting the distortions present in the images transmitted by the American space probe Ranger 7, the first spacecraft to successfully transmit images of the lunar surface to Earth [15]. In the late 1960s and early 1970s, digital computers began to be used in medical image processing, remote sensing and astronomy. Since then, the area has attracted great interest. However, there is still a big gap in the use of these techniques in the forensic field, mainly due to the multidisciplinarity involved.

The main goal here is the development and enhancement of digital image processing, pattern recognition and computer vision techniques to aid the characterization and identification of individuals for forensic purposes. In this context we include: i) automating the process of obtaining stable cephalometric landmarks and other information that can be used in the identification and characterization of individuals; ii) facilitating the creation of protocols for the characterization of populations from cephalometric landmarks; iii) improving the forensic expert's techniques in the field of facial photo anthropometry.

Currently, forensic experts from the Federal Police make daily use of the SAFF 2D (Sistema de Análise Facial Forense por Imagens 2D) software. It was developed by the Expertise Service in Audiovisual and Electronics of the National Institute of Criminalistics in the Federal Police department, and it is used in the manual extraction of cephalometric points from frontal face images [16], [17].

Fig. 1. Graphic interface of the SAFF 2D software during the manual marking of the cephalometric landmarks. The image used in the process was taken from [25].
[Pipeline diagram: image database and ground-truth CSV points feed the Haar cascade detectors and the landmarks training set; detected landmarks are compared against the landmarks ground truth to produce statistical information.]
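The pipeline stages named above (ground-truth CSV, detectors, per-landmark statistics) can be outlined schematically as below. This is only a sketch: the detector is a stub standing in for the Haar-cascade based detection described in the steps that follow, and all function and column names are hypothetical.

```python
import csv
import statistics

def load_ground_truth(csv_path):
    """Read ground-truth landmarks: one CSV row per point -> name, x, y.
    Column names ('landmark', 'x', 'y') are assumed for illustration."""
    with open(csv_path, newline="") as f:
        return {row["landmark"]: (float(row["x"]), float(row["y"]))
                for row in csv.DictReader(f)}

def detect_landmarks(image_path):
    """Placeholder for the automatic detector (Haar cascades plus
    per-region training sets in the real system); returns name -> (x, y)."""
    raise NotImplementedError

def evaluate(errors):
    """Per-landmark error statistics, as in the evaluation step:
    minimum, maximum, average and standard deviation."""
    return {"min": min(errors), "max": max(errors),
            "mean": statistics.mean(errors),
            "std": statistics.pstdev(errors)}
```

For example, `evaluate([1.0, 2.0, 3.0])` yields a mean error of 2.0 pixels with a population standard deviation of about 0.82.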
1) The user defines the directory path where all images to be processed are stored, and then defines the file (a comma-separated values file - CSV - or database connection information) containing the ground truth information used to evaluate all detected landmarks.
2) For each face image, pre-processing techniques are applied to boost features and improve the automated identification of the cephalometric landmarks. Next, the Haar-cascade face detection technique [36] is used to identify specific bounded regions of interest in the image, such as the face, eyes, mouth and nose. Eye alignment with respect to the face is achieved from the analysis of the vector field of image gradients, as proposed in [29].
3) From the regions segmented in the previous step (face, eyes, mouth and nose), we apply a specialized training set per region to find all landmarks in each region.
4) After all points on the face are detected, the accuracy is evaluated by comparing the points against the ground truth file. In this step, each point is analyzed and a set of statistics is computed: minimum, maximum, average, spatial centroid position, analysis of variance (ANOVA) and standard deviation (STD). If the ground truth for an image is available from several manual markers (inter- and intra-observer markings of the same image), all statistics are evaluated automatically.
5) The last step stores all information: all images with the 28 points printed on them for further checking, the CSV file and the statistical analysis.

Fig. 4. The red circle is the landmark from the ground truth file; it is also the center of the black boundary edge, which is the limit (defined in pixels) adopted to specify the hit range for the automatic process. (a) The software counts a hit when the green circle is inside the black boundary edge. (b) The software counts a miss when the green circle is outside the black boundary edge.

IV. EXPERIMENTAL RESULTS

As described in Subsection IV-A, we used 300 input images to evaluate the accuracy of the automatic landmark detection process (Subsection III-D). Each image has a ground truth file, containing one cephalometric landmark for each point, used to validate the automatic landmark detection process. All cephalometric landmarks were collected by a specialist in legal medicine using the metrics adopted in real forensic scenarios.

A. Accuracy Metrics

To validate our results, we adopted a metric based on the horizontal and vertical pixel position, limited by the ground truth point from the file collected by an expert, as described previously. This approach is a modified version of the one proposed in [37]. Using the ground truth point as a reference, we define a bounded region on the image with dimensions D x D, as shown in Figure 4. The dimension D defines a square, which allows evaluating the influence of the error according to the required accuracy. If the value of D decreases, the region size decreases, increasing the precision of the detection technique with respect to the ground truth point. When the value of D increases, the area increases and the precision of the automatic detection process with respect to the ground truth point decreases. The influence of the size D on the accuracy of the proposed technique is discussed later in this work.

This bounded region defines two outcomes: Hit and Miss. If the point obtained by the automatic landmark detection lies inside the bounded region, it is counted as a Hit, as shown in Figure 4a; otherwise, it is counted as a Miss, as shown in Figure 4b.

In a classification process, the results usually require an accuracy analysis, and a good measure for this is the F-score [38]. This metric is composed, basically, of four parameters: true positives (n_tp), true negatives (n_tn), false positives (n_fp) and false negatives (n_fn). At the end of the execution process, after all landmark results are obtained, we count an n_tp (true positive) when the automated method marks the landmark inside the bounded region and an n_fp (false positive) when the automated method marks the landmark outside the bounded region. In this analysis, we used the F-score shown in Equation 1, where precision and recall are defined as precision = n_tp / (n_tp + n_fp) and recall = n_tp / (n_tp + n_fn), respectively. The F-score for each cephalometric point after the automatic landmark detection ranges between 1 (best) and 0 (worst).

F-score = 2 * (precision * recall) / (precision + recall)    (1)

B. Tests

To test the automatic identification process, we analyzed the pixel variation used to create the boundary edge limit.
Fig. 5. Graphic presenting the general F-score results for each group test. (Axes: F-score, 0 to 1, versus range in pixels, 0 to 50.)

Fig. 6. Graphic presenting the comparison between the general F-score results and the F-score results of both Zygion landmarks in all tests. (Axes: F-score, 0 to 1, versus range in pixels, 0 to 50.)
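The hit/miss criterion of Fig. 4 (the detected point must fall inside a square boundary centered on the ground-truth point) can be sketched as follows, assuming the pixel range r is the distance from the ground-truth point to the boundary edge, i.e. D = 2r:

```python
def is_hit(ground_truth, detected, pixel_range):
    """Hit if the detected point lies within `pixel_range` pixels of the
    ground-truth point in both axes (Chebyshev distance, i.e. inside the
    square boundary of Fig. 4)."""
    gx, gy = ground_truth
    dx, dy = detected
    return max(abs(dx - gx), abs(dy - gy)) <= pixel_range
```

With `pixel_range = 0`, only an exact match counts as a hit, which corresponds to the strictest test group below.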
These tests were used to determine the limit at which the automatic identification of cephalometric points yields an acceptable result when compared to the points collected manually by an expert observer.

Using all 300 images, which amount to 8400 points (28 landmarks per photo), we ran 18 test cases varying the pixel range from the point to the limit border. The test groups were separated by pixel variation as follows: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 20, 30, 40 and 50 pixels.

After finalizing all 18 test groups separately, we obtained the F-score results for each landmark and also in general scope, that is, the F-score for each test group measuring the performance over all landmarks. Figure 5 shows the general F-score values for each test group, while Table I shows the F-score values for all landmarks in each test group.

V. DISCUSSION

Analyzing the results, the first test group (0 pixels) showed that the automatic process identified 379 landmarks at exactly the same point marked manually by the expert observer. On the other hand, in the tests from 10 to 50 pixels, the results did not vary significantly compared to the manual marking process: the automatic process hit 8337 of all 8400 cephalometric landmarks, with the general F-score varying by only 0.002 even at a 40-pixel difference.

We conclude from our tests that, at an image resolution of 640x480, 5 pixels is the limit at which the process is comparable to an expert observer. The general F-score is 0.985; analyzing each landmark separately, the best and worst F-scores are 1 and 0.79, respectively, the worst results being both Zygion landmarks.

In all tests, the Zygion landmarks had the worst F-score compared to the general F-score. The other cephalometric landmarks reached F-score values above 0.9 already in the 4-pixel test, whereas both Zygion landmarks stabilized only from the test with a range limit of 8 pixels. The tests show that both Zygion landmarks have a lower F-score than the general F-score, as presented in Figure 6.

The range of 5 pixels from a ground truth point to the boundary edge equals less than 0.8% of the image height (5/640 = 0.78%), which we considered a rigid classification criterion for all landmarks; in addition, using the Cohen's kappa coefficient [39], adopted in statistical classification for interpreting indices of agreement, an F-score value of 0.79 is also supported as a significant result.

In this case, the 5-pixel test group, analyzing each point separately, is the first test group with no F-score value below 0.75. The results of the tests also show the difficulty of correctly identifying the position of the cephalometric Zygion landmark. This landmark tends to be highly variable when determining its exact anatomical location. Even in manual analysis, some works describe the difficulty of marking this point manually, since it lacks a reference to help, unlike other facial points. Even so, the statistical analysis shows a dispersion that is, in this case, smaller than that of other facial landmark detection approaches [16], [17], [40], [41].

A. Other facial landmark detection frameworks

A comparative analysis is presented using four main frameworks from the literature for facial landmark detection in images: DLIB [42], [43], CLandmark [44]-[46], CSIRO FaceAnalysis [47] and SupervisedDescent [33], [48]. All tools were used to detect the points made available by the 300-W challenge [49], which were not collected by specialists in anthropology, odontology, facial cephalometry or criminal expertise. Thus, all tests performed with these tools aim to demonstrate that these techniques do not achieve an accurate identification of facial cephalometric points, rendering them impossible to use for forensic purposes.

The tests aim to assess the training ability of these techniques when they are trained to detect the cephalometric points, so for all frameworks we used the same image dataset presented in Section III. In order to compare these techniques with the proposed methodology, the 300 images presented in Subsection III-C were also used.

As presented in Subsection IV-A, the tolerance criterion of 5 pixels was used for all tools to define the Hit region.
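The agreement bands commonly attributed to Landis and Koch [39] can be sketched as below; the band edges follow their 1977 paper, while the mapping function itself is only illustrative (the paper applies these bands to support interpreting a 0.79 score as significant):

```python
def landis_koch_band(kappa):
    """Interpret an agreement coefficient using the Landis-Koch bands [39]."""
    if kappa < 0.0:
        return "poor"
    bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    for upper, label in bands:
        if kappa <= upper:
            return label
    return "almost perfect"  # values above 1.0 clamp to the top band
```

Under this scale, a value of 0.79 falls in the "substantial" agreement band.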
TABLE I
ALL F-SCORE RESULTS FOR EACH CEPHALOMETRIC LANDMARK IN EACH GROUP TEST.
Landmark 0 px 1 px 2 px 3 px 4 px 5 px 6 px 7 px 8 px 9 px 10 px 11 px 12 px 15 px 20 px 30 px 40 px 50 px
al e 0.113 0.646 0.929 0.988 0.997 1 1 1 1 1 1 1 1 1 1 1 1 1
al d 0.083 0.636 0.909 0.990 0.997 1 1 1 1 1 1 1 1 1 1 1 1 1
ch e 0.071 0.500 0.819 0.953 0.988 0.997 1 1 1 1 1 1 1 1 1 1 1 1
ch d 0.119 0.540 0.828 0.940 0.973 0.992 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998
cph e 0.052 0.446 0.732 0.903 0.976 0.993 0.998 1 1 1 1 1 1 1 1 1 1 1
cph d 0.101 0.492 0.737 0.915 0.980 0.995 0.998 1 1 1 1 1 1 1 1 1 1 1
ex e 0.089 0.707 0.947 0.988 0.997 1 1 1 1 1 1 1 1 1 1 1 1 1
ex d 0.089 0.624 0.921 0.992 1 1 1 1 1 1 1 1 1 1 1 1 1 1
en e 0.119 0.608 0.913 0.986 1 1 1 1 1 1 1 1 1 1 1 1 1 1
en d 0.148 0.611 0.942 0.990 0.998 1 1 1 1 1 1 1 1 1 1 1 1 1
g 0.077 0.595 0.905 0.981 0.997 1 1 1 1 1 1 1 1 1 1 1 1 1
gn 0.065 0.537 0.851 0.957 0.986 0.997 0.998 0.998 1 1 1 1 1 1 1 1 1 1
go e 0.095 0.547 0.844 0.951 0.980 0.993 0.997 0.998 0.998 0.998 1 1 1 1 1 1 1 1
go d 0.160 0.592 0.844 0.944 0.995 0.990 0.998 1 1 1 1 1 1 1 1 1 1 1
il e 0.107 0.763 0.986 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
il d 0.089 0.734 0.980 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
im e 0.089 0.747 0.981 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
im d 0.065 0.729 0.988 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
li 0.046 0.565 0.868 0.951 0.981 0.990 0.995 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997
ls 0.089 0.519 0.853 0.958 0.988 0.998 0.998 1 1 1 1 1 1 1 1 1 1 1
mid 0.033 0.251 0.670 0.889 0.980 0.995 1 1 1 1 1 1 1 1 1 1 1 1
n 0.089 0.588 0.891 0.986 0.995 0.998 0.998 1 1 1 1 1 1 1 1 1 1 1
pu e 0.065 0.783 0.990 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
pu d 0.071 0.770 0.990 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
sn 0.137 0.598 0.895 0.974 0.995 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998
sto 0.113 0.661 0.955 0.997 0.998 1 1 1 1 1 1 1 1 1 1 1 1 1
zy e 0.013 0.204 0.446 0.611 0.690 0.790 0.830 0.874 0.915 0.951 0.964 0.978 0.988 0.995 0.998 0.998 0.998 0.998
zy d 0.013 0.165 0.454 0.592 0.687 0.790 0.835 0.879 0.913 0.947 0.964 0.974 0.985 0.992 0.995 0.995 0.995 0.995
Fig. 8. Graphic of the general F-score comparing the proposed method to the other landmark frameworks.

VI. CONCLUSIONS

In this paper we presented the use of photo anthropometry with automated techniques for identifying facial cephalometric points in frontal-view images. The main contribution was the implementation of an efficient prototype for the automatic detection of cephalometric landmarks in frontal face images.

TABLE II
RESULTS: GENERAL F-SCORE OF THE PROPOSED METHOD AND THE OTHER LANDMARK FRAMEWORKS.

The proposed methodology achieved results equivalent to those of specialists in the manual facial landmark marking process. Moreover, when compared to the main techniques available in the current literature for cephalometric landmark detection, the proposed methodology achieved better results than all the analyzed techniques.

As a suggestion for further work: currently, in the Federal Police Department, the forensic expert agents use a manual approach (the SAFF 2D software) to collect the facial cephalometric landmarks on photos. The development of the proposed methodology, addressed to automatic cephalometric landmark identification, allows them to achieve improvements in all manual tasks, reducing failures in forensic analyses, for example, where human error could compromise the entire forensic analysis.

The results show that computer vision and image processing techniques, applied to photo anthropometry and other anthropology research areas, promise to achieve goals beyond cephalometric landmark identification. Several other anthropological analyses can benefit from the proposed methodology, achieving improvements and reducing analysis time.

REFERENCES

[1] J. C. Chen, V. M. Patel, H. T. Ho, and R. Chellappa, "Dictionary-based video face recognition using dense multi-scale facial landmark features," in 2014 IEEE International Conference on Image Processing (ICIP), Oct 2014, pp. 733-737.
[2] G. S. Hsu, H. C. Peng, and K. H. Chang, "Landmark based facial component reconstruction for recognition across pose," in 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, June 2014, pp. 34-39.
[3] R. Vaughn and D. Dampier, "Digital forensics - state of the science and foundational research activity," in System Sciences, 2007. HICSS 2007. 40th Annual Hawaii International Conference on, Jan 2007, pp. 263-263.
[4] J. P. Davis, T. Valentine, and R. E. Davis, "Computer assisted photo-anthropometric analyses of full-face and profile facial images," Forensic Science International, vol. 200, no. 1, pp. 165-176, 2010.
[5] D. Gibelli, Z. Obertova, S. Ritz-Timme, P. Gabriel, T. Arent, M. Ratnayake, D. De Angelis, and C. Cattaneo, "The identification of living persons on images: A literature review," Legal Medicine, vol. 19, pp. 52-60, 2016.
[6] R. Moreton and J. Morley, "Investigation into the use of photoanthropometry in facial image comparison," Forensic Science International, vol. 212, no. 1, pp. 231-237, 2011.
[7] T. Mondal, A. Jain, and H. Sardana, "Automatic craniofacial structure detection on cephalometric images," Image Processing, IEEE Transactions on, vol. 20, no. 9, pp. 2606-2614, 2011.
[8] C. Xu, Y. Xu, and S. Ma, "Cephalometric image measurement and prediction system," in Instrumentation and Measurement Technology Conference, 1998. IMTC/98. Conference Proceedings. IEEE, vol. 1. IEEE, 1998, pp. 22-25.
[9] M. A. Mosleh, M. S. Baba, N. Himazian, and B. AL-Makramani, "An image processing system for cephalometric analysis and measurements," in Information Technology, 2008. ITSim 2008. International Symposium on, vol. 4. IEEE, 2008, pp. 1-8.
[10] A. Ghahari and M. Mosleh, "Hybrid clustering-based 3d face modeling upon non-perfect orthogonality of frontal and profile views," in Computer Information Systems and Industrial Management Applications (CISIM), 2010 International Conference on. IEEE, 2010, pp. 578-584.
[11] J. Ma, N. Ahuja, C. Neti, and A. W. Senior, "Recovering frontal-pose image from a single profile image," in Image Processing, 2000. Proceedings. 2000 International Conference on, vol. 2. IEEE, 2000, pp. 243-246.
[12] Y. Chen, B. Potetz, B. Luo, X. W. Chen, and Y. Lin, "Cephalometric landmark tracing using deformable templates," in 2011 IEEE First International Conference on Healthcare Informatics, Imaging and Systems Biology, July 2011, pp. 112-119.
[13] N. Narang and T. Bourlai, "Gender and ethnicity classification using deep learning in heterogeneous face recognition," in 2016 International Conference on Biometrics (ICB), June 2016, pp. 1-8.
[14] D. Riccio, G. Tortora, M. D. Marsico, and H. Wechsler, "EGA - ethnicity, gender and age, a pre-annotated face database," in 2012 IEEE Workshop on Biometric Measurements and Systems for Security and Medical Applications (BIOMS) Proceedings, Sept 2012, pp. 1-8.
[15] R. C. Gonzalez and R. E. Woods, Digital Image Processing. New Jersey, 2008.
[16] M. R. P. Flores, "Proposta de metodologia de analise fotoantropometrica para identificacao humana em imagens faciais em norma frontal," Master's thesis, Faculdade de Odontologia de Ribeirao Preto, Universidade de Sao Paulo, 2014.
[17] C. E. P. Machado, "Fotoantropometria para estimativa de idade de criancas e adolescentes com emprego de imagens faciais em norma frontal: relacoes iridianas," Ph.D. dissertation, Departamento de Patologia e Medicina Legal da Faculdade de Medicina de Ribeirao Preto, Universidade de Sao Paulo, 2015.
[18] N. H. El-Mangoury, S. I. Shaheen, and Y. A. Mostafa, "Landmark identification in computerized posteroanterior cephalometrics," American Journal of Orthodontics and Dentofacial Orthopedics, vol. 91, no. 1, pp. 57-61, 1987.
[19] L. Wiskott, J.-M. Fellous, N. Kuiger, and C. Von Der Malsburg, "Face recognition by elastic bunch graph matching," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 19, no. 7, pp. 775-779, 1997.
[20] J. Shi, A. Samal, and D. Marx, "How effective are landmarks and their geometry for face recognition?" Computer Vision and Image Understanding, vol. 102, no. 2, pp. 117-133, 2006.
[21] E. Vezzetti, F. Marcolin, and G. Fracastoro, "3d face recognition: An automatic strategy based on geometrical descriptors and landmarks," Robotics and Autonomous Systems, vol. 62, no. 12, pp. 1768-1776, 2014.
[22] J. Cech, V. Franc, and J. Matas, "A 3d approach to facial landmarks: Detection, refinement, and tracking," in Pattern Recognition (ICPR), 2014 22nd International Conference on. IEEE, 2014, pp. 2173-2178.
[23] T. Le-Tien and H. Pham-Chi, "An approach for efficient detection of cephalometric landmarks," Procedia Computer Science, vol. 37, pp. 293-300, 2014.
[24] H. Dibeklioglu, F. Alnajar, A. Ali Salah, and T. Gevers, "Combining facial dynamics with appearance for age estimation," Image Processing, IEEE Transactions on, vol. 24, no. 6, pp. 1928-1943, 2015.
[25] P. J. Phillips, H. Moon, S. Rizvi, P. J. Rauss et al., "The feret evaluation methodology for face-recognition algorithms," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 10, pp. 1090-1104, 2000.
[26] L. Morecroft, N. Fieller, and M. Evison, "Investigation of anthropometric landmarking in 2d," Computer-Aided Forensic Facial Comparison, CRC Press, Florida, pp. 71-87, 2010.
[27] R. R. Ramires, L. P. Ferreira, I. Q. Marchesan, D. M. Cattoni et al., "Medidas faciais antropometricas de adultos segundo tipo facial e sexo," Revista CEFAC, vol. 13, no. 2, pp. 245-252, 2011.
[28] C. Cattaneo, Z. Obertova, M. Ratnayake, L. Marasciuolo, J. Tutkuviene, P. Poppa, D. Gibelli, P. Gabriel, and S. Ritz-Timme, "Can facial proportions taken from images be of use for ageing in cases of suspected child pornography? A pilot study," International Journal of Legal Medicine, vol. 126, no. 1, pp. 139-144, 2012.
[29] F. Timm and E. Barth, "Accurate eye centre localisation by means of gradients," in Proceedings of the Int. Conference on Computer Theory and Applications (VISAPP), vol. 1. Algarve, Portugal: INSTICC, 2011, pp. 125-130.
[30] M. Uricar, V. Franc, D. Thomas, S. Akihiro, and V. Hlavac, "Real-time multi-view facial landmark detector learned by the structured output svm," in BWILD 2015: Proceedings of the 11th IEEE International Conference on Automatic Face and Gesture Recognition Conference and Workshops, 2015.
[31] A. L. Yuille, P. W. Hallinan, and D. S. Cohen, "Feature extraction from faces using deformable templates," Int. J. Comput. Vision, vol. 8, no. 2, pp. 99-111, Aug. 1992.
[32] "Information technology - Biometric data interchange formats - Part 5: Face image data," International Organization for Standardization, Standard, Mar. 2005.
[33] P. Huber, Z.-H. Feng, W. Christmas, J. Kittler, and M. Ratsch, "Fitting 3d morphable face models using local features," in Image Processing (ICIP), 2015 IEEE International Conference on. IEEE, 2015, pp. 1195-1199.
[34] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed. Springer, 2009. [Online]. Available: http://www-stat.stanford.edu/tibs/ElemStatLearn/
[35] G. Bradski, "Open source computer vision library," http://www.opencv.org/, 2015.
[36] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[37] M. Everingham, L. Gool, C. K. Williams, J. Winn, and A. Zisserman, "The pascal visual object classes (voc) challenge," Int. J. Comput. Vision, vol. 88, no. 2, pp. 303-338, Jun. 2010. [Online]. Available: http://dx.doi.org/10.1007/s11263-009-0275-4
[38] D. M. W. Powers, "Evaluation: From precision, recall and f-measure to roc, informedness, markedness & correlation," Journal of Machine Learning Technologies, vol. 2, no. 1, pp. 37-63, 2011.
[39] J. R. Landis and G. G. Koch, "The measurement of observer agreement for categorical data," Biometrics, pp. 159-174, 1977.
[40] B. Campomanes-Alvarez, O. Ibanez, F. Navarro, I. Aleman, O. Cordon, and S. Damas, "Dispersion assessment in the location of facial landmarks on photographs," International Journal of Legal Medicine, vol. 129, no. 1, pp. 227-236, 2015.
[41] M. Cummaudo, M. Guerzoni, L. Marasciuolo, D. Gibelli, A. Cigada, Z. Obertova, M. Ratnayake, P. Poppa, P. Gabriel, S. Ritz-Timme et al., "Pitfalls at the root of facial assessment on photographs: a quantitative study of accuracy in positioning facial landmarks," International Journal of Legal Medicine, vol. 127, no. 3, pp. 699-706, 2013.
[42] D. E. King, "Dlib-ml: A machine learning toolkit," Journal of Machine Learning Research, vol. 10, pp. 1755-1758, 2009.
[43] V. Kazemi and J. Sullivan, "One millisecond face alignment with an ensemble of regression trees," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1867-1874.
[44] M. Uricar, V. Franc, D. Thomas, S. Akihiro, and V. Hlavac, "Real-time multi-view facial landmark detector learned by the structured output SVM," in 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2015, vol. 02, May 2015, pp. 1-8.
[45] M. Uricar, V. Franc, D. Thomas, A. Sugimoto, and V. Hlavac, "Multi-view facial landmark detector learned by the structured output SVM," Image and Vision Computing, vol. 47, pp. 45-59, March 2016, 300-W, the First Automatic Facial Landmark Detection in-the-Wild Challenge. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0262885616300105
[46] M. Uricar, V. Franc, and V. Hlavac, "Facial landmark tracking by tree-based deformable part model based detector," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2015, pp. 10-17.
[47] M. Cox, J. Nuevo-Chiquero, J. Saragih, and S. Lucey, "CSIRO face analysis sdk," Brisbane, Australia, 2013.
[48] X. Xiong and F. De la Torre, "Supervised descent method and its applications to face alignment," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 532-539.
[49] C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "300 faces in-the-wild challenge: The first facial landmark localization challenge," in Proceedings of the IEEE International Conference on Computer Vision Workshops, 2013, pp. 397-403.