(Advances in Computer Vision and Pattern Recognition) Ajay Kumar-Contactless 3D Fingerprint Identification-Springer International Publishing (2018)
Ajay Kumar

Contactless 3D Fingerprint Identification

Advances in Computer Vision and Pattern Recognition
Founding editor
Sameer Singh, Rail Vision, Castle Donington, UK
Series editor
Sing Bing Kang, Microsoft Research, Redmond, WA, USA
Advisory Board
Horst Bischof, Graz University of Technology, Austria
Richard Bowden, University of Surrey, Guildford, UK
Sven Dickinson, University of Toronto, ON, Canada
Jiaya Jia, The Chinese University of Hong Kong, Hong Kong
Kyoung Mu Lee, Seoul National University, South Korea
Yoichi Sato, The University of Tokyo, Japan
Bernt Schiele, Max Planck Institute for Computer Science, Saarbrücken, Germany
Stan Sclaroff, Boston University, MA, USA
More information about this series at http://www.springer.com/series/4205
Ajay Kumar
The Hong Kong Polytechnic University
Kowloon, Hong Kong
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Person identification using epidermal ridge impressions from fingers has been widely studied for over a hundred years. It is widely employed in a range of forensic, e-business and e-governance applications around the world. Traditional acquisition of fingerprint images, by rolling or pressing fingers against a hard surface such as glass or polymer, often results in degraded images due to skin deformations, slippages, smearing or latent residue from previous impressions. As a result, the full potential of the fingerprint biometric cannot be realized. Contactless 2D fingerprint systems have emerged to provide improved hygiene and ideal solutions to these intrinsic problems. Contactless 3D fingerprints can potentially provide significantly more accurate personal identification, as richer information is available from contactless 3D fingerprint images.
Contactless 3D fingerprints offer exciting opportunities to improve user convenience, hygiene and matching accuracy over the fingerprint biometric technologies available today. The introduction of video, i.e. the addition of a temporal dimension, was a leap forward that revolutionized the use of 2D images in entertainment, e-governance and e-business. Similarly, the additional dimension offered by 3D fingerprints has the potential to significantly alter the way this biometric is perceived and employed in civilian and e-governance applications. Such advancements will not be limited to e-security or e-business; they can also enable dramatic advances in forensics, where latent or lifted fingerprint impressions are matched against suspects' fingerprint images. For example, 3D fingerprints from possible suspects can be used to simulate latent fingerprint impressions on a variety of hard or soft real-life materials (door, paper, glass, gun, etc.) and under a variety of pressures, occlusions and deformations, which is expected to enable more accurate matches with the corresponding latent fingerprints lifted from a crime scene. The potential of contactless 3D fingerprints offers exciting opportunities but requires significant research and development effort for its realization.
The availability of a book that is exclusively devoted to the techniques, comparisons and promises of contactless 3D fingerprint identification is expected to help advance much-needed further research in this area. Some of the contents of this book have appeared in our research publications and US patents. However, many important details, explanations and results that were missed in those publications are included in this book. The contents attempt to provide a systematic introduction to 3D fingerprint identification, including the most recent advancements in contactless 2D and 3D sensing technologies, and an explanation of every important aspect of the development of an effective 3D fingerprint identification system.
This book is organized into eight chapters. Chapter 1 introduces current trends in the acquisition and identification of fingerprint images. This introductory chapter discusses the nature of fingerprint impressions and the sensing techniques, including completely contactless 2D fingerprint sensors. It bridges the journey from rolled and inked fingerprint impressions to the more advanced smartphone-based fingerprint sensors, in terms of their resolution and sensing area. It also provides details on publicly accessible implementations of fingerprint matchers and an up-to-date list of publicly available fingerprint databases, along with the respective weblinks to enable easy access.
Chapter 2 presents a range of 3D fingerprint imaging techniques along with their comparative technical details. The image acquisition methods presented in this chapter are grouped into four categories: optical, non-optical, geometric and photometric. Details on five different methods to acquire 3D fingerprint images, using stereo vision, patterned lighting, optical coherence tomography, ultrasound imaging and photometric stereo, along with the potential of other methods, appear in this chapter.
Chapter 3 is devoted to in-depth details of a low-cost and effective method for online 3D fingerprint image acquisition. The photometric stereo-based setup is detailed systematically, from the hardware, calibration, preprocessing and specular reflection removal to the choice of reconstruction methods. This chapter also shares our insights and results from attempts to account for the non-Lambertian nature of the finger surface. The resulting computational complexity of such an online 3D fingerprint imaging system also appears in this chapter.
Chapter 4 provides details of a more efficient 3D fingerprint imaging approach using coloured photometric stereo. This approach is introduced to address two key problems associated with practical 3D fingerprint imaging: involuntary finger motion and the computational complexity for online applications. It revisits the method detailed in Chap. 3, and the contactless 3D fingerprint images acquired using the setup introduced in this chapter have also been made publicly available.
Contactless 3D fingerprint data often requires preprocessing operations to suppress the accompanying noise and to enhance or accentuate the ridge–valley features. Chapter 5 details such preprocessing operations on the point cloud 3D fingerprint data. This chapter also provides a detailed explanation of the specialized enhancement operations required for the contactless 2D fingerprint images that are employed for the reconstruction of 3D fingerprints.
Finger skin patterns are widely considered unique to each individual, serve as a basis of forensic science and are increasingly employed in large-scale national identification (ID) programmes for security and e-governance. Inked impressions of fingers on paper, or latent finger impressions on objects, have historically been used to establish the identity of individuals, a process commonly referred to as fingerprint identification. However, modern imaging does not require such inked impressions. Therefore, finger image identification is essentially the same as fingerprint identification, and the two terms are used interchangeably in this book.

Fingerprint ridges are essentially combinations of ridge units, combined under random forces to form continuous ridge flow patterns. The discontinuities in these ridge patterns are used to uniquely discriminate fingerprints and are commonly referred to as minutiae. Such ridge discontinuities can appear as a tiny incomplete ridge spur, an abrupt bifurcation or an abrupt termination of a ridge. The spatial locations of, and relationships between, these known minutia types are unique for each person, and even across the different fingers of the same person. The minutiae extracted from fingerprint images can also be represented as a connected graph whose nodes represent minutiae of known types. The recovery and matching of such minutia patterns form the scientific basis of fingerprint identification. The process of ridge unit formation has been scientifically linked [1] to skin cells that are generated and periodically migrate towards the epidermal surface. The formation of ridge patterns, i.e. the fingerprint, starts before birth, and these patterns are already present in the foetus during the fifth month of pregnancy.
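The connected-graph representation of a minutiae template described above can be sketched as follows. This is only an illustration: the fields of the Minutia record and the distance threshold used to define edges are assumptions for this sketch, not a standard template format.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Minutia:
    x: int          # column position (pixels)
    y: int          # row position (pixels)
    theta: float    # local ridge orientation (radians)
    kind: str       # 'ending', 'bifurcation' or 'spur'

def build_graph(minutiae, radius=50.0):
    """Connect pairs of minutiae whose spatial distance is below `radius`.

    Nodes are the typed minutiae; edges encode the spatial relationship
    between nearby minutiae, which is what a matcher compares.
    """
    edges = []
    for i, a in enumerate(minutiae):
        for j in range(i + 1, len(minutiae)):
            b = minutiae[j]
            if math.hypot(a.x - b.x, a.y - b.y) <= radius:
                edges.append((i, j))
    return edges

template = [
    Minutia(10, 10, 0.2, "ending"),
    Minutia(40, 20, 1.1, "bifurcation"),
    Minutia(200, 220, 2.0, "ending"),
]
print(build_graph(template))  # only the two nearby minutiae are linked: [(0, 1)]
```

Real templates additionally store minutia quality and use richer edge attributes (inter-minutia ridge counts, relative angles), but the node-and-edge structure is the same.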
1 Introduction to Trends in Fingerprint Identification

© Springer Nature Switzerland AG 2018
A. Kumar, Contactless 3D Fingerprint Identification, Advances in Computer Vision and Pattern Recognition, https://doi.org/10.1007/978-3-319-67681-4_1

The traditional fingerprinting process requires subjects to press or roll their fingers on paper or on a hard surface, generally a polymer or glass platen, which forms the front-end interface of a solid-state sensor. The complexity of the fingerprinting process varies with the requirements of the impression types and has been fairly standardized by law enforcement agencies around the world.
(i) Rolled Fingerprint Impressions: Rolled fingerprint impressions cover the largest finger area and acquire rich information, which is known to achieve extremely accurate fingerprint matching. They are individually acquired from each finger by rolling it from nail to nail. The acquisition process is quite time-consuming and can also result in poor-quality images due to uneven finger pressure. Rolled fingerprint impressions are widely used in property documents and by law enforcement departments in prisons.
(ii) Plain Impressions: This imaging process does not require rolling of the fingers; impressions are acquired by pressing the finger surface against a flat surface or paper. Plain impressions from single fingers or thumbs are the least complex and are widely used in e-governance (HK ID cards [2] and e-channel border crossings), e-commerce and smartphones.
(iii) Slap Impressions: These impressions can be acquired simultaneously for multiple fingers, and the process is significantly faster, particularly when ten prints from both hands are to be acquired. Tenprint plain impressions using slap fingerprinting are used in large-scale identification programmes, e.g. Aadhaar [3] in India or the US-VISIT [4] programme in the USA, and follow the 4-4-2 method: slap impressions are acquired from the right hand, followed by the left hand, and then by the two thumbs. The slap fingerprint images are often automatically segmented into plain fingerprint images of the individual fingers.
(iv) Latent Impressions: Latent fingerprints are leftover finger impressions on an object surface from an individual whose identity is yet to be established. The residue of such impressions is carefully lifted by forensic experts using specialized techniques. These techniques typically treat the surface, e.g. by spraying it with chemicals whose choice varies with the nature of the background surface, to enhance the impressions, and then acquire photographs under ultraviolet illumination. Latent fingerprint impressions are generally of extremely low quality, with the least clarity, due to the uncontrolled background. Therefore, the matching of latent fingerprint impressions is the most challenging and is also widely debated in courtroom arguments (Table 1.1).
Automated improvement of fingerprint image quality is employed in almost all commercially available fingerprint sensors. In addition to image enhancement algorithms that improve the clarity of ridges and valleys, several hardware-based solutions are also incorporated into many of these systems. Enhancement of fingerprint images from dry skin is achieved with a deformable membrane on the glass platen (e.g. as in [5]), and/or a heated platen (e.g. as in [6]) is used to suppress or eliminate the undesirable influence of finger moisture. Much online fingerprint image acquisition software also includes detection of latent impressions left from previous scans, of incomplete or partial images, and of finger slippage.
1.1 Contact-Based Fingerprint Identification
Table 1.1 Average number of minutiae recovered from typical fingerprint impressions

Impression type          | Average number of minutiae | Sensor area
Rolled fingerprints      | ~80                        | ~422 mm2 (a)
Flat fingerprints        | ~20–30                     | 211 mm2 (b)
Latent fingerprints      | ~13–22 (c)                 | –
Smartphone fingerprints  | ~5–8                       | 50–100 mm2

(a) Requires about twice the area of flat impressions; generally scanned from inked impressions
(b) At least 12.8 mm wide and 16.5 mm high (for 500 dpi, as per ANSI/NIST/UIDAI specifications [37], [8])
(c) Estimated from the NIST SD27 [38] dataset, which has a varying number of usable minutiae
The quality of a fingerprint image is often judged by the clarity of its ridge patterns and is quantified by the imaging resolution in pixels per inch (ppi). Several law enforcement departments (e.g. the FBI [7]) and national ID programmes (e.g. Aadhaar [8]) have standardized the imaging requirements and require 500 dpi imaging sensors for their applications. Accordingly, approved lists of commercially available sensors that meet the respective criteria are made publicly available [5, 7] for users. In order to acquire a full plain impression of a fingerprint, the sensing area should be at least one inch by one inch, and this has also been standardized in these specifications. However, a variety of fingerprint sensors with lower resolutions (200–300 dpi) are also employed in a range of e-business and stand-alone applications, such as office attendance or laptop login. The choice of fingerprint sensor is generally a trade-off among the required/offered level of security, the available sensing area, and the cost, speed and storage affordable for the respective application.
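As a quick sanity check on these specifications, the minimum pixel dimensions a sensor must deliver follow directly from the physical capture area and the 500 dpi requirement. The helper below is a minimal sketch (the function name is illustrative):

```python
import math

MM_PER_INCH = 25.4

def pixels_for(width_mm, height_mm, dpi=500):
    """Minimum pixel dimensions for a given physical capture area
    at the required imaging resolution (rounded up)."""
    return (math.ceil(width_mm / MM_PER_INCH * dpi),
            math.ceil(height_mm / MM_PER_INCH * dpi))

# Minimum flat-impression area from Table 1.1 footnote (12.8 mm x 16.5 mm)
print(pixels_for(12.8, 16.5))   # -> (252, 325)
# Full plain impression: a one-inch by one-inch platen at 500 dpi
print(pixels_for(25.4, 25.4))   # -> (500, 500)
```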
Interoperability among fingerprint sensors is critical to the success of large-scale ubiquitous identification programmes. Such interoperability is facilitated by standards and certification programmes. Aadhaar and the FBI publicly provide lists of certified fingerprint scanners, in [9] and [7], respectively, that generate fingerprint images in compliance with the expectations of the large-scale identification process. Commercially available contact-based fingerprint sensors employ a variety of sensing technologies, which can be grouped into the following categories:
(a) Optical: The frontal side of the imaging platen makes contact with the finger ridges during imaging, while the other side senses the light reflected from the ridges using a CMOS or CCD sensor. Image formation in such sensors is based on the principle of total internal reflection: light reflected by the valleys appears brighter, while light randomly scattered by the ridges creates darker impressions in the sensor images. Optical sensors are not susceptible to electrostatic discharge from the surroundings but require a relatively larger size. Optical scanning technology is the basis of the majority of slap and/or stand-alone commercial fingerprint scanners available today.
(b) Silicon: Unlike optical sensors, image formation in silicon fingerprint sensors relies on the capability of capacitive, thermal, electric field or piezoelectric sensors to generate discriminative signals at the spatial locations corresponding to the ridges and valleys. Accordingly, silicon fingerprint sensors are classified as capacitive, thermal, electric field or piezoelectric fingerprint sensors. Capacitive fingerprint sensors are insensitive to ambient illumination and more resistant to contamination. Capacitive fingerprint sensors, with silicon or polysilicon as the base material, also offer a cost advantage and are widely used in the smartphones available today.
More recent research and development efforts have resulted in transparent fingerprint sensing over the entire display area of widely used smartphones, which can offer higher user-friendliness by operating in a touch-anywhere mode or by providing simultaneous acquisition of multiple fingerprints with ease. Such a fingerprint sensor can reside on top of the existing smartphone display while being embedded under the cover glass. These fingerprint sensors also operate on the capacitance difference between ridges and valleys introduced when a human finger touches the smartphone display. Figure 1.1 illustrates an example of such a transparent fingerprint sensor, along with the sensed fingerprint images, from [10]. Such fingerprint sensing can achieve more than 500 dpi of resolution, enabling the acquisition of pores, and offers an attractive alternative for a range of e-business and law enforcement applications.
(c) Ultrasound: Ultrasound fingerprint sensors use the response of acoustic signals reflected from finger ridges to reconstruct the fingerprint image. High bulk and cost are the key reasons that such ultrasound fingerprint sensors have rarely been used in real applications. New ultrasonic fingerprint sensors, e.g. a 3D ultrasonic sensor-on-a-chip [11], the Qualcomm Fingerprint Sensor [12] and a high-frequency CMUT probe [13], have recently been introduced to address these limitations for mobile applications.
(d) Multispectral: Multispectral fingerprint sensors simultaneously image the fingers at different wavelengths to generate a composite fingerprint image. Multispectral imaging attempts to simultaneously recover subsurface and surface characteristics, the combination of which can improve image quality for dry and moist fingers. Multispectral imaging can generate superior-quality images but is more complex and costly.
(e) Swipe Sensors: These sensors use a small rectangular sensing area over which users sweep their finger. The sensor reconstructs the fingerprint image from the image slices acquired during the finger movement. These sensors require the smallest area and do not suffer from the problem of leftover latent impressions. However, swipe sensors require a more complex user interaction and an additional image reconstruction process, which can influence the accuracy and applicability of these sensors.
This is also the main reason that the accuracy of latent fingerprint matching algorithms is generally reported as an identification rate, rather than using receiver operating characteristics from verification experiments, given the nature of the applications and deployments.
Flat or rolled fingerprint matching algorithms are largely based on the uniqueness of the spatial localization of the various minutiae types in the fingerprints being matched. However, the recovery and reproducibility of every minutia of an individual's fingers from fingerprints acquired using real imaging devices cannot be guaranteed. Therefore, the most successful approaches to fingerprint matching accommodate the adverse (but frequent) impact of missing minutiae, and the quality of the recovered minutiae, in their algorithms. There is a range of publicly available implementations for quantifying fingerprint image quality. Among these, the open-source NFIQ 2.0 (NIST Finger Image Quality) [16] is the most recent and popular, and quantifies fingerprint image quality on a 0–100 scale according to [17]. NFIQ 2.0 is, however, specifically developed to quantify image quality for plain fingerprint impressions acquired from optical sensors or scanned inked impressions, and should not be used for other sensing technologies such as contactless fingerprints. There are also publicly available implementations of fingerprint image matching algorithms based on the most reliable minutiae features. Among these, those provided by NIST, i.e. NBIS [18], which includes NFSEG for fingerprint segmentation, MINDTCT to locate and detect minutiae features, and BOZORTH3 to generate match scores from MINDTCT feature templates, include source code and are the most popular. A large-scale evaluation of fingerprint matching algorithms from different vendors was conducted by NIST; it involved comparisons between 10,000 matching subjects' and 20,000 non-matching subjects' fingerprint images in a database of about 10 million subjects. The detailed report on this assessment was publicly released in 2015 [19] and provided a comparative summary of performance in terms of identification accuracy as well as matching speed. A large-scale evaluation of fingerprint matching performance also appears in [20], which involved 84 million different subjects in the gallery. The large-scale evaluation results in these two reports (one-to-one fingerprint matching in [19] and one-to-many fingerprint matching in [20]), using databases of millions of subjects, suggest that much further work is required to improve the capabilities of fingerprint matchers before these can be considered for stand-alone high-security applications.
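The tolerance-based minutiae pairing that underlies such matchers can be sketched as a toy example. This is a deliberate simplification: real matchers such as BOZORTH3 first establish rotation- and translation-invariant correspondences between the two templates, which this sketch omits, and the tolerance values below are illustrative.

```python
import math

def match_minutiae(template_a, template_b, dist_tol=10.0, angle_tol=0.3):
    """Greedy one-to-one pairing of minutiae given as (x, y, theta) tuples.

    Two minutiae are considered compatible when their spatial distance
    and ridge-orientation difference fall within tolerances, which
    accommodates missing or displaced minutiae across impressions.
    Returns the fraction of minutiae that were paired.
    """
    matched = 0
    used = set()
    for (xa, ya, ta) in template_a:
        for j, (xb, yb, tb) in enumerate(template_b):
            if j in used:
                continue
            dist = math.hypot(xa - xb, ya - yb)
            # Wrap the orientation difference into [0, pi]
            dtheta = abs((ta - tb + math.pi) % (2 * math.pi) - math.pi)
            if dist <= dist_tol and dtheta <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(template_a), len(template_b), 1)

probe = [(100, 120, 0.5), (150, 180, 1.2), (200, 90, 2.8)]
gallery = [(103, 118, 0.45), (149, 183, 1.25), (60, 60, 0.1)]
print(round(match_minutiae(probe, gallery), 2))  # two of three minutiae pair up -> 0.67
```

The normalized score directly reflects the point made above: a missing or poorly localized minutia lowers the score gracefully rather than causing a hard failure.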
The development of accurate, fast and fully automated fingerprint matching algorithms has attracted significant attention in academia, government and industry. Several fingerprint databases have been made publicly available to promote such development and advance research in this area. Table 1.2 summarizes these publicly available databases, along with the references that can be used to access or download them for research and development purposes. Reference [51] provides details on the multi-sensor optical and latent fingerprint databases in [39]. Another, more recent, NIST fingerprint database from the nail-to-nail fingerprint challenge, with a series of rolled and plain fingerprint impressions, is available from Ref. [52].
Table 1.2 Summary of contact-based fingerprint image databases in the public domain

Impression type | Name of database | No. of subjects | No. of images | Image resolution | Reference to access
Latent | NIST Special Database 27-latent | 258 | 2580 | 800 × 768 | [38]
Latent | IIIT-D Multi-sensor Optical and Latent Fingerprint (MOLF)-DB4 | 100 | 4000 | Variable | [39]
Latent | IIIT-D Multi-sensor Optical and Latent Fingerprint (MOLF)-DB5 | 100 | 1600 | 1924 × 1232 | [39]
Plain | NIST Special Database 10-plain | N/A | 5520 | 832 × 768 | [40]
Plain | NIST Special Database 14-plain | 13,500 | 27,000 | N/A | [41]
Rolled | NIST Special Database 27-rolled | 258 | 2580 | 800 × 768 | [42]
Rolled | NIST Special Database 29-rolled | 216 | 2160 | N/A | [43]

(continued)
Fig. 1.2 Contactless fingerprint image acquisition setups: the setup in (a) uses a support to limit the range of finger movement, which can reduce the cost of the sensor or camera optics. The setup in (b) requires a larger depth of focus and view, which adds to the cost of contactless fingerprint sensing
Fig. 1.3 Sample images of commercial contactless 2D fingerprint sensors: (a) the touchless fingerprint reader IDOne [21], (b) the on-the-fly reader from [23] and (c) the finger-on-the-go reader from [22]
There has been some recent success in reducing the cost of such sensing and in acquiring large-area subsurface 2D fingerprints using full-field OCT. Reference [29] details the acquisition of a 1.7 cm × 1.7 cm fingerprint area, with a resolution of 2116 dpi, in 0.21 s. Such efforts can be considered a remarkable advancement, and full-field OCT can be a cost-effective alternative as it uses an inexpensive camera and a light source to acquire the OCT images.
The choice of illumination, its nature and positioning, the expected distance between the sensor and the fingers, and the positioning and optics of the camera should be judiciously chosen to optimize the grey-level difference, i.e. the image contrast, between the ridges and valleys in the acquired contactless fingerprint images. Despite such efforts, contactless fingerprint imaging generally results in lower contrast between the ridge–valley structures. Therefore, additional contrast enhancement is required before conventional fingerprint enhancement algorithms are applied. This additional contrast enhancement typically employs homomorphic filtering and is detailed in Chap. 3. Another limitation of contactless fingerprint imaging is the reduction in the size of the effective fingerprint area available for identification. The perspective distortion in the camera, for the portions of curved finger skin that are far away from the centre, decreases the ridge–valley separation. Therefore, advanced algorithms are required to enhance and/or correct these frequency-varying ridge–valley regions. Several research efforts in academia [30] and industry have resulted in the simultaneous acquisition of five contactless 2D fingerprints, e.g. the images in Fig. 1.3. Contactless fingerprint images require specialized preprocessing algorithms, consisting of image enhancement and image normalization, before incorporating the conventional approach [18] to generate minutiae-based fingerprint templates for matching.
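The homomorphic filtering mentioned above can be sketched as follows: the image is taken into the log domain (turning the multiplicative illumination/reflectance model into an additive one), a high-emphasis filter attenuates the slowly varying illumination while boosting the ridge–valley detail, and the result is exponentiated back. The cutoff and gain values below are illustrative assumptions, not the parameters used in Chap. 3.

```python
import numpy as np

def homomorphic_enhance(img, cutoff=0.1, gamma_low=0.5, gamma_high=2.0):
    """Suppress slowly varying illumination and boost high-frequency
    ridge-valley detail; returns an image normalized to [0, 1]."""
    img = img.astype(np.float64)
    log_img = np.log1p(img)                        # multiplicative -> additive model
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = (u[:, None] / rows) ** 2 + (v[None, :] / cols) ** 2
    # Gaussian high-emphasis filter: gain gamma_low at DC, gamma_high for detail
    h = (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * cutoff ** 2))) + gamma_low
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * h)))
    out = np.expm1(filtered)                       # back from the log domain
    return (out - out.min()) / (out.max() - out.min() + 1e-12)

# Synthetic low-contrast "ridge" pattern under uneven illumination
x = np.linspace(0, 8 * np.pi, 128)
ridges = 0.1 * np.sin(x)[None, :] + np.linspace(0.2, 0.9, 128)[:, None]
enhanced = homomorphic_enhance(ridges)
print(enhanced.shape)  # (128, 128), values in [0, 1]
```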
Table 1.3 Comparison between touch-based, contactless 2D and contactless 3D fingerprint identification

Property | Touch-based 2D fingerprint | Contactless 2D fingerprint | Contactless 3D fingerprint
Recognition accuracy | High | High | Very high
Security hazards with sensor usage | High | Very low | Very low
Skin deformation | High | Nil | Nil
Sensor surface smear/noise | High | Very low | Very low
Identification of spoofs and alterations | Low | Medium | High
Sensor cost | Low | High | Very high
Bulk/size | Compact | Medium/large | Bulky
noise, dirt, deformations, slippages or leftover latent impressions from previous users. Contactless 3D fingerprint identification, in addition to providing the key benefits associated with contactless 2D fingerprint images, can also incorporate additional information from the 3D fingerprint surface. The use of such additional 3D geometrical information, like height, depth or curvature, is expected to further improve the identification accuracy achievable with 3D fingerprints. Traditional fingerprint identification using contact-based fingerprint sensors uses the ridge information to generate minutiae templates, while the valley details are considered background and therefore discarded. However, contactless fingerprint imaging can also preserve these valley details, which can be used to further enhance identification accuracy for both contactless 2D and contactless 3D fingerprint matching. Table 1.3 presents a comparative summary of the expected benefits and indicative performance of contact-based, contactless 2D and contactless 3D fingerprint technologies. Replacement of contact-based fingerprint identification by contactless fingerprint identification can offer higher throughput, user convenience and hygiene in a range of e-governance applications. However, such a replacement can also introduce challenges relating to accessibility, ergonomics, anthropometrics and user acceptability. A recent NIST report [31] presents a comparative study of a range of human factors in the usage of the contact-based and contactless 2D fingerprint sensors shown in Fig. 1.2b–c. These findings are quite encouraging and could lead to the gradual replacement or deployment of contactless fingerprint sensors for e-governance applications in the coming years.
1.3 Contactless 3D Fingerprint Identification

Personal identification using contactless 3D fingerprint images has attracted the attention of researchers over the last decade, and several efforts have been published [32–35] in the literature. However, there can be some confusion between the fingerprint identification capabilities introduced by 2D images acquired from different 3D views of a given finger, and 3D fingerprint systems that acquire or use the depth information in contactless 3D fingerprint images, as both of these identification approaches have been referred to as 3D fingerprint identification in the literature. The use of multiple 2D fingerprint images from the same finger, acquired from different 3D views, appears in [32, 35]. Such an approach generates a rolled equivalent of the fingerprint along its 3D shape, using multiple fixed cameras, in a contactless manner. Rolled fingerprint impressions [36] can cover a large 3D surface area and therefore generate fingerprint templates with a relatively large number of minutiae (Table 1.1). Therefore, similar 2D fingerprint templates, generated using multiple 3D views/cameras, can also lead to more accurate identification. However, true 3D fingerprint images are expected to include depth information corresponding to the fingerprint ridges, or at least the shape of the 3D finger surface. Such contactless 3D fingerprint images are expected to offer significantly higher matching accuracy (Table 1.3), as these technologies are expected to discriminate the most reliable minutiae features in 3D space. There have been some interesting efforts to develop 3D fingerprint imaging capabilities, and these will be discussed in the next chapter.
References
1. Chen Y (2009) Extended feature set and touchless imaging for fingerprint matching, Ph.D.
Thesis, Michigan State University
2. The Smart Identity Card (2018) Immigration Department, Hong Kong, https://www.immd.gov.
hk/eng/services/hkid/smartid.html
3. Aadhaar Authentication (2018) https://www.uidai.gov.in/authentication/authentication-overview/authentication-en.html
4. US VISIT—Entry/Exit System (2016) http://www.immihelp.com/visas/usvisit.html
5. Secure Outcomes, LS 110 Fingerprint Sensor, http://www.secureoutcomes.net
6. Cross Match Technologies Inc., Guardian R2 Fingerprint Sensor, http://www.crossmatch.com
7. FBI's Certified Product List (CPL), https://www.fbibiospecs.org/IAFIS/Default.aspx
8. UIDAI Biometric Device Specifications, BDCS(A)-03–08, May 2012, http://www.stqc.gov.
in/sites/upload_files/stqc/files/UIDAI-Biometric-Device-Specifications-Authentication-14-
05-2012_0.pdf
9. List of UIDAI Certified Biometric Authentication Devices, Feb. 2018, https://uidai.gov.in/
images/resource/List_of_UIDAI_Certified_Biometric_Devices_13072017.pdf
10. Seo W, Pi J-E, Cho SH, Kang S-Y, Ahn S-D, Hwang C-S, Jeon H-S, Kim J-U, Lee M (2018) Transparent fingerprint system for large flat panel display. Sensors 18(1):293. https://doi.org/10.3390/s18010293
11. Tang H, Lu Y (2016) 3-D ultrasonic fingerprint sensor-on-a-chip. IEEE J Solid-State Circuits
51:2522–2533
12. Qualcomm Fingerprint Sensor, https://www.qualcomm.com/solutions/mobile-computing/
features/security/fingerprint-sensors. 2017
13. Lamberti N, Caliano G, Iula A, Savoia AS (2011) A high frequency cmut probe for ultrasound
imaging of fingerprints. Sens Actuators, A 172(2):561–569
14. Ulery BT, Hicklin RA, Roberts MA, Buscaglia J (2016) Interexaminer variation of minutiae
markup on latent fingerprints. Forensic Sci Int 246:89–99
15. FBI IAFIS CJIS Division, http://www.fbi.gov/about-us/cjis/fingerprints_biometrics/iafis/iafis_
latent_hit_of_the_year. 2017
16. NFIQ 2.0, NIST Fingerprint Image Quality, https://www.nist.gov/services-resources/software/
development-nfiq-20. April 2018
17. ISO/IEC. IS 29794-1:2016, information technology biometric image sample quality Part 1:
Framework. ISO Standard Jan. 2016
18. NIST biometric image software (NBIS), Release 5.0 (2015) https://www.nist.gov/services-
resources/software/nist-biometric-image-software-nbis
19. Watson C, Fiumara G, Tabassi E, Cheng S-L, Flanagan P, Salamon W (2014) Fingerprint vendor technology evaluation, evaluation of fingerprint matching algorithms. NIST Interagency Report 8034. http://dx.doi.org/10.6028/NIST.IR.8034
20. Role of Biometric Technology in Aadhaar Enrollment, UIDAI, India, Jan 2012
21. ANDI ONE—Touchless Fingerprint Reader, http://www.andiotg.com/andi-oneprint
22. ANDI GO Zero Contact Fingerprint Identification System, Advanced Optical Systems Inc.,
http://www.andiotg.com/
23. https://www.idemia.com/sites/corporate/files/morphowave-tower-brochure-012018.pdf. May
2018
24. Parziale G, Chen Y (2009) Advanced technologies for touchless fingerprint identification. In:
Tistarelli M, et al (eds), Handbook of remote biometrics. Springer–Verlag, London
25. Song Y, Lee C, Kim J (2004) A new scheme for touchless fingerprint recognition system. In:
Proceedings of international symposium on intelligent signal processing and communication
systems, pp 524–527
26. Sano E, Maeda T, Nikamura T, Shikai M, Sakata K, Matsushita M (2006) Fingerprint authen-
tication device based on optical characteristics inside finger. In: Proceedings of CVPR 2006
Biometrics Workshop, pp 27–32
27. Darlow LN, Conan J, Singh A (2016) Performance analysis of a hybrid fingerprint extracted
from optical coherence tomography fingertip scans. In: Proceedings of ICB 2016. Sweden
28. Liu G, Chen Z (2013) Capturing the vital vascular fingerprint with optical coherence tomog-
raphy. Appl Opt 52(22):5473–5477
29. Anksorius E, Boccara AC (2017) Fast subsurface fingerprint imaging with full-field optical
coherence tomography system equipped with a silicon camera. arXiv:1705.06272v2, https://
arxiv.org/abs/1705.06272
30. Noh D, Choi H, Kim J, (2011) Touchless sensor capturing five fingerprint images by one
rotating camera. Optical Eng 50:113202
31. Furman S, Stanton B, Theofanos M, Libert JM, Grantham J (2017) Contactless fingerprint
device usability test. NIST IR Mar. https://doi.org/10.6028/NIST.IR.8171
32. Parziale G, Diaz-Santana E, Hauke R (2003) The surround imager: a multi-camera touchless
device to acquire 3d rolled-equivalent fingerprints. In: Proceedings of ICB 2006, vol. 3832.
LNCS
33. Wang Y, Hao Q, Fatehpuria A, Lau DL, Hassebrook LG (2009) Data acquisition and quality
analysis of 3-Dimensional finger-prints. Proc IEEE Conf Biometrics, Identity and Security,
Tampa, Florida, Sep, pp 22–24
34. Wang Y, Lau DL, Hassebrook LG (2010) Fit-sphere unwrapping and performance analysis of
3D fingerprints. Appl Opt 49(4):592–600
35. Touchless Biometrics Systems, Switzerland, https://www.tbs-biometrics.com/en/3d-
enrollment/. 2018
36. Recording Legible Fingerprints (2016) https://www.fbi.gov/about-us/cjis/fingerprints_
biometrics/recording-legible-fingerprints
37. American National Standards for Information Systems—Data Format for the Interchange of
Fingerprint, Facial, & Other Biometric Information—Part 2 (XML Version); ANSI/NIST-ITL
2-2008, NIST Special Publication #500-275; National Institute of Standards and Technology,
U.S. Government Printing Office, Washington, DC, 2008. Available online at http://www.nist.
gov/customcf/get_pdf.cfm?pub_id=890062
38. Fingerprint Minutiae from Latent and Matching Tenprint Images, NIST Special Database 27,
Gaithersburg, MD, USA, http://www.nist.gov/srd/nistsd27.cfm
39. IIIT-D Multi-sensor Optical and Latent Fingerprint (MOLF)-DB4, http://iab-rubric.org/
resources/molf.html
40. NIST Special Database 10, NIST Supplemental Fingerprint Card Data (SFCD), https://www.
nist.gov/publications/nist-special-database-10-nist-supplemental-fingerprint-card-data-sfcd-
special-database. Feb. 2017
41. NIST Special Database 14-plain, http://www.nist.gov/srd/nistsd14.cfm
42. NIST Special Database 27-rolled, http://www.nist.gov/srd/nistsd27.cfm
References 15
Fig. 2.1 Overview of existing techniques to scan 3D fingerprint images from a live user
Stereo vision based methods use at least two cameras to simulate human binocular vision and compute depth information during live 3D fingerprint scans. The distance between the two cameras in such an approach is generally the distance between the human eyes, or inter-ocular distance, of about 6.35 cm; greater distances can be employed for higher 3D detail. Stereo vision based systems compute the depth information using the principle of triangulation. An active triangulation approach to scan 3D fingerprints can use a digital camera that records the response from a known laser signal after its projection onto the 3D finger surface. The locations of the laser beam source, the camera and the contact point of the laser beam on the 3D finger surface form a triangle, and this geometrical relationship can be used to compute the depth information. An example of such a triangulation-based range sensor is the Vivid 910 [3], which can offer accuracy up to one-tenth of a millimetre and has been used in biometrics research, e.g. to acquire palmprint data in [4]. Active triangulation-based range scanning methods available today generally exhibit high-frequency noise, and their accuracy is limited to fixed multiples of the sampling frequency. Motion of the fingers during the image acquisition process also limits the use of such range imaging methods for practical 3D fingerprint identification.
In passive triangulation-based stereo vision methods, the laser (active) source is replaced by a second camera, which forms the passive component of the triangulation. Such triangulation stereo, or shape from silhouette, based methods generally employ more than two cameras to enable nail-to-nail acquisition of 3D fingerprint shape details, as shown in Fig. 2.2. Stereo vision theory [5] establishes that the spatial location of an unknown object point in 3D space can be computed from 2D images acquired simultaneously at different image planes. Such an approach also requires that the spatial locations of the cameras and their corresponding matching spatial points, referred to as correspondence points, are available. In the case of multiple camera-based 3D fingerprint acquisition, the stereo image pairs are acquired from adjacent cameras. The surround imager in [6] employed five cameras, while the imaging systems in [7] and [35] use two cameras. In order to acquire a nail-to-nail representation [8] of fingerprints, the arrangement in [6] employs five simultaneously acquired images
from different 3D views. Such an arrangement uses silhouettes from each of the five segmented images to generate a cylindrical model of the presented finger.
Passive triangulation-based methods to scan live 3D fingerprints using multiple cameras are simple to implement, fast, and can acquire a nail-to-nail representation of 3D fingerprints. Such shape from silhouette approaches can, however, only provide the shape of the finger and lack details of the 3D fingerprint ridges. Since the ridge information in such images is essentially derived from the 2D images, it is adversely influenced by variations in skin pigmentation, reflectance or shape. Several studies have indicated that automatically locating the correspondence points in two fingerprint images acquired from different 3D views is quite challenging, and this is another limiting factor for the precise acquisition of 3D fingerprints.
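Under a rectified two-camera geometry, the triangulation described above reduces to a simple disparity-to-depth relation. A minimal sketch (the focal length, baseline and disparity values are illustrative, not taken from any of the cited systems):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_cm):
    """Depth Z = f * b / d for a rectified stereo pair: disparity_px is
    the horizontal shift of a correspondence point between the two
    views, focal_px the focal length in pixels, baseline_cm the camera
    separation. Returns depth in the baseline's units (cm here)."""
    return focal_px * baseline_cm / np.asarray(disparity_px, dtype=float)

# A correspondence point that shifts 800 px between the two views of a
# rig with a 6.35 cm baseline and a 1600 px focal length lies about
# 12.7 cm away; closer points produce larger disparities.
print(depth_from_disparity([800.0, 400.0], focal_px=1600.0, baseline_cm=6.35))
```

A larger baseline improves depth resolution but makes the correspondence matching harder, which mirrors the limiting factor noted above.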
Another class of active imaging methods for live 3D fingerprint scans illuminates the fingers with structured lighting of known patterns, e.g. stripes, circles, etc., and measures the deformation of the projected patterns to determine the 3D shape of the presented finger. Such structured lighting based methods, unlike those in Fig. 2.2, acquire 3D fingerprint ridge details from their 3D geometry instead of their surface albedo. A faster and more versatile approach is to project many stripes, fringes or patterns at once and simultaneously acquire multiple samples [9, 10]. Therefore, structured lighting patterns are often coded to automatically recover the correspondences between the projected pattern points and those observed at the respective points in the acquired images. These correspondence points can be triangulated to recover the 3D shape information.
A variety of structured lighting stripes or patterns have been investigated in the literature [9] and can be broadly categorized into one of three categories, i.e. time-multiplexing (e.g. binary codes or phase shifting), spatial neighbourhood (e.g. De Bruijn sequences or M-arrays) and colour coding (codification based on colour or grey levels). A comparative evaluation of representative techniques from each of the three categories indicates [11] that the time-multiplexing techniques are easy to implement, offer high spatial resolution and can provide more accurate depth details. Maximum resolution from such temporal coding techniques can be achieved by the combination of phase shifting and grey code, but this requires a large number of patterns or image frames, which can pose problems for 3D fingerprint imaging due to involuntary motion of the fingers. Therefore, the 3D fingerprint imaging approaches implemented in [12, 13] use phase measuring profilometry (PMP) [14] for encoding the structured illumination to enable better precision with a smaller number of image frames. These patterns are imaged by a calibrated camera which is synchronized with the projected patterns. The observed distortions of the projected patterns in the camera images are computed by measuring their phase differences with a reference, as detailed in [12]. Figure 2.3 illustrates the computation of 3D fingerprint height, at any specific point on the 3D fingerprint surface, using calibrated phase difference measurements. The PMP approach projects sine-wave patterns whose phase is shifted several times. Traditional spatial phase unwrapping methods can accumulate errors from fingerprint ridge and valley discontinuities. Therefore, the implementation detailed in [13] computes the absolute phase pixel-by-pixel by incorporating optimum three-fringe number selection [15, 16] with a series of colour sinusoidal fringes.
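The core phase-shifting computation of PMP can be sketched generically as follows; this N-step relation is a textbook form, not the specific implementation of [12, 13], and the synthetic fringe values are illustrative:

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: images[k] is captured under a fringe
    pattern shifted by 2*pi*k/N; returns the wrapped phase per pixel."""
    imgs = np.asarray(images, dtype=float)
    n = imgs.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    shape = (-1,) + (1,) * (imgs.ndim - 1)      # broadcast over pixels
    num = np.sum(imgs * np.sin(shifts).reshape(shape), axis=0)
    den = np.sum(imgs * np.cos(shifts).reshape(shape), axis=0)
    return np.arctan2(-num, den)

# Synthetic check: four samples of I_k = A + B*cos(phi + 2*pi*k/4)
# recover the encoded phase phi = 0.7.
samples = [1.0 + 0.5 * np.cos(0.7 + 2 * np.pi * k / 4) for k in range(4)]
print(wrapped_phase(samples))
```

The recovered phase is wrapped to (-pi, pi]; the three-fringe selection cited above is one way to resolve the remaining 2*pi ambiguities without spatial unwrapping.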
Structured lighting based 3D fingerprint scanners have been commercially introduced [17] and offer one of the most promising solutions available today. Active 3D fingerprint imaging using a coherent light source, such as a laser, or structured lighting can also enable liveness detection by analysing dynamic interference patterns, or biospeckle, as investigated in [18]. The projection and imaging of structured lighting patterns require time and, therefore, such methods can suffer from instability problems due to involuntary finger motions. In addition to the limitations in reconstruction accuracy for the shape of high-frequency ridges, such methods generally require a setup with higher cost and bulk, largely due to the requirement of a projector and a high-speed camera.
2.3 Optical Coherence Tomography
Unlike popular methods of fingerprint imaging, which are known to acquire finger skin surface ridge or epidermal details, tomographic imaging of fingers can reveal the inner skin layer between the dermis and epidermis. Such 3D imaging of fingers can not only offer higher antispoofing capabilities [19] but can also enable the recovery of fingerprint ridges damaged due to scars or external factors, when the surface-based 3D fingerprint acquisition is not viable. OCT based 3D fingerprint imaging is essentially based on the principle of interferometry [20], where the light reflected from the finger skin, at various depths to recover the subsurface 3D profile, is combined with the light reflected from a fixed mirror in an attached interferometer. Such an approach generally uses broadband near-infrared illumination, which can penetrate the finger skin surface, or epidermis, to the subsurface layers. The interference patterns between the light reflected by the fixed mirror and the finger surface layers are measured by a spectrometer. The depth information is retrieved by analysing the spectral modulations in these patterns.
There have been many promising studies to acquire 3D fingerprints using OCT. Reference [21] reports the recovery of small, 4 mm × 4 mm × 2 mm sized 3D fingerprints. The work reported in the literature [22, 23] on the recovery of 3D fingerprints is quite preliminary, as it acquires a very small volume with a high acquisition time, which can pose challenges due to involuntary finger motion during the imaging. A large fingerprint imaging area is highly desirable, as it can ensure adequate overlap between the 3D fingerprints acquired during the enrolment and subsequent verification stages. More recent efforts in [24] using full-field OCT enable the acquisition of a 1.7 cm × 1.7 cm area but require several seconds to build volumetric 3D fingerprint data from a multitude of such images. In this context, Fourier-domain OCT [25, 34] can acquire the entire 3D fingerprint data in a single scan and offers a much faster alternative, but at a significantly higher cost. The cost of OCT-based sensors is prohibitively high (even a full-field OCT sensor can currently cost over ten thousand US$), which limits their possible usage in popular practical fingerprint identification applications.
Fig. 2.4 Ultrasonic 3D fingerprint sensor interface using piezoelectric micromachined ultrasonic
transducers
Fig. 2.5 Acquisition of 3D fingerprints using a minimum of three 2D images under the Lambertian surface assumption. A fixed camera acquires images I_1, I_2, I_3, respectively, under illuminations L_1, L_2, L_3. The unknown surface normal N = (n_x, n_y, n_z) at every pixel location is computed by solving three equations with three unknowns under the assumption of a globally constant albedo ρ. The depth at every pixel is computed from the integration of the surface normals
offline steps. The changes in the observed grey-level intensity at every pixel location in the images (Fig. 2.5) depend on the 3D surface orientation and surface reflectance, in addition to the properties and locations of the illumination. Therefore, by acquiring multiple images of a 3D fingerprint, each under a different known illumination, a system of equations relating the unknown 3D surface orientation (along with the surface reflectance) to the known locations of the fixed illuminations is generated. These equations are then solved to extract the 3D surface orientation and surface reflectance information, which is used to reconstruct the 3D fingerprint surface and ridges. Recovery of 3D fingerprints using photometric stereo is sensitive to ambient illumination changes. Therefore, the imaging setup must ensure that external illumination does not influence the 3D fingerprint acquisition process. Acquisition of 3D fingerprints using photometric stereo can be achieved with a low-cost general purpose camera and fixed light-emitting diodes (LEDs). This approach is attractive due to its low cost and high precision, which enables recovery of high-frequency ridge details. Therefore, acquisition of 3D fingerprints using photometric stereo is discussed in detail in the next chapter.
2.6 Other Methods
Low-cost, precise and fast acquisition of live 3D fingerprints is critical for the success of emerging 3D fingerprint identification technologies. Therefore, a range of other 3D imaging methods have been attempted in the literature to acquire live 3D fingerprint
images. Shape from shading (SFS) is another attractive method, among the shape from X techniques [29], that can be used to recover a 3D fingerprint from a single 2D fingerprint image. This approach describes the 3D shape of a finger surface in terms of surface normals and recovers them from the gradual intensity variations that are observed due to shading in a single 2D fingerprint image. Reference [30] details an improved SFS method for recovering 3D fingerprints by incorporating additional constraints on the brightness gradients and decomposing the acquired 2D fingerprint image into two frequency bands, which helps to minimize the errors in the recovery of smaller ridge details in 3D fingerprints. The SFS approach is highly sensitive to noise in the fingerprint image, and its solution relies on a surface continuity assumption, which can be very hard to meet due to the nature of fingerprint ridges.
Another approach for the acquisition of 3D fingerprints, detailed in [31], uses a combination of fringe pattern projection and photometric stereo to recover 3D fingerprints. The pattern projection approach is used to reconstruct the structural part of the shading image, while the texture or ridge patterns are recovered using a cylindrical ridge model that considers the ridges as semi-cylindrical structures. This investigation also uses a single fixed camera, but the usage of a projector makes the setup more complex than that for photometric stereo based 3D fingerprint acquisition. Time-of-flight (TOF) cameras [32] offer another potential method of low-cost 3D fingerprint acquisition, which is yet to be investigated in the literature. The TOF imaging approach employs a similar principle as 3D laser scanners. This approach computes the depth information from the phase shift between the (modulated) incident illumination and the reflected signal. TOF imaging can acquire the depth information of the entire object simultaneously, unlike the point-by-point scans of laser scanners, and can offer a more attractive alternative to address involuntary finger motions during the 3D fingerprint image acquisition.
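The TOF depth computation from this phase shift can be sketched as follows (the modulation frequency and phase values are illustrative):

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Depth from the phase shift between the emitted modulated
    illumination and the reflected signal: z = c * dphi / (4 * pi * f).
    The factor 4*pi accounts for the round trip of the light."""
    return C * phase_shift_rad / (4.0 * np.pi * mod_freq_hz)

# At 20 MHz modulation a phase shift of pi/2 maps to a depth of about
# 1.87 m; the unambiguous range is c / (2 * f), about 7.5 m here.
print(tof_depth(np.pi / 2, 20e6))
```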
This chapter presented the promises and limitations of live 3D fingerprint scan methods that have been introduced in the literature. Table 2.1 summarizes the emerging 3D fingerprint acquisition methods in terms of the nature of the imaging technique, acquisition mode, relative cost and expected reconstruction accuracy, and also provides representative references. Acquisition of 3D fingerprints using laser-based range scanning [3] can address the sensitivity of the photometric method to ambient illumination variations while acquiring absolute depth data. However, the slow speed of such range scanning is the key limitation for its usage in 3D fingerprint acquisition. Photometric stereo and triangulation-based techniques are quite attractive for live 3D fingerprint imaging. The triangulation-based techniques can suffer from high-frequency noise but are known to offer robust reconstruction of the low-frequency fingerprint surface shape. On the other hand, photometric stereo based methods are known to perform very well in recovering high-frequency object details, such as the ridge structures required from 3D fingerprints. Therefore, the next chapter includes a detailed explanation of this approach for low-cost 3D fingerprint acquisition.
References
25. Wiser W, Bidermann BR, Klein T, Eigenwillig CM, Huber R (2010) Multi megahertz OCT:
high quality 3D imaging at 20 million A-scans and 4.5 GVoxels per second. Opt Express
18:14685–14704
26. Lu Y, Tang H, Fung S, Wang Q, Tsai JM, Daneman M, Boser BE, Horsley DA (2015) Ultrasonic
fingerprint sensor using a piezoelectric micromachined ultrasonic transducer array integrated
with complementary metal oxide semiconductor electronics. Appl Phys Lett 106:263503
27. Tang H-Y, Lu Y, Assaderagh F, Daneman M, Jiang X, Lim M, Li X, Ng E, Singhal U, Tsai JM,
Horsley DA, Boser BE (2016) 3D ultrasonic fingerprint sensor-on-a-chip. In: Proceedings of
international solid state circuits conference ISSCC 2016, pp 202–204
28. Jiang X, Lu Y, Tang H-Y, Tsai JM, Ng EJ, Daneman MJ, Boser BE, Horsley DA (2017)
Monolithic ultrasound fingerprint sensor. Microsyst Nanoeng 3:17059. https://doi.org/10.1038/
micronano.2017.59
29. Buelthoff HH (1991) Shape-from-X: psychophysics and computation. In: Fibers’ 91, SPIE,
Boston, MA, pp 305–330
30. Balogiannis G, Yova D, Politopoulos K (2014) 3D Reconstruction of skin surface using an
improved shape-from-shading technique. In: Proceedings of IFMBE, vol 41. Springer, pp
439–442. https://doi.org/10.1007/978-3-319-00846-2_109
31. Balogiannisa G, Yovaa D, Politopoulos K (2016) A novel non-contact single camera 3D finger-
print imaging system based on image decomposition and the cylindrical ridge model approxi-
mation. Int J Comput 20(1):174–198
32. Basler Time-of-Flight Camera, https://www.baslerweb.com/en/products/cameras/3d-cameras/
time-of-flight-camera. May 2018
33. Kumar A, Kwong C (2013) Towards contactless, low-cost and accurate 3D fingerprint identi-
fication. In: Proceedings of CVPR, Portland, USA, June 2013, pp 3438–3443
34. Aum J, Kim J-H, Jeong J (2016) Live acquisition of internal fingerprint with automated detec-
tion of subsurface layers using OCT. IEEE Photonics Technol Lett 28:163–166
35. Liu F, Zhang D (2014) 3D fingerprint reconstruction system using feature correspondences and
prior estimated finger model. Pattern Recogn 47:178–193
Chapter 3
Contactless and Live 3D Fingerprint
Imaging
Online and precise acquisition of 3D finger images is one of the major technical challenges for the success of 3D fingerprint technologies. Photometric stereo based imaging offers a low-cost approach to scan live 3D fingers and has also shown its effectiveness in reproducing high-frequency fingerprint ridge details. Therefore, a detailed description of this approach is provided in this chapter.
where ρ is the albedo and I_0 is the incident radiance. The surface gradients p(x, y) and q(x, y) can be defined as follows:

p(x, y) = ∂z_f(x, y)/∂x,  q(x, y) = ∂z_f(x, y)/∂y (3.2)
Let us choose a 3D coordinate system where the image plane coincides with the x-y plane and the z-axis coincides with the viewing direction of the fixed camera. The 3D finger surface can be reconstructed by recovering the surface height information z_f(x, y). We approximate the finger surface as a Lambertian surface [1, 2] which is illuminated by multiple, say m, fixed light sources, i.e. L = [l_1, l_2, ..., l_m]^T. We also assume that the light sources and the camera are far away from the 3D finger surface to ensure consistency of the illumination/viewing directions across the 3D finger surface. Each of these light sources (LEDs) is fixed in the imaging device. The direction of each of these LED sources, l = [l_x, l_y, l_z]^T, along with its radiance l, is known from the calibration stage. This calibration step is detailed in the next section.
Let n = [n_x, n_y, n_z]^T be the unknown unit surface normal vector at some point of interest on the 3D finger surface. The observed image pixel intensities y, from the m 2D finger images, corresponding to the respective illumination sources can be written as follows:
y = Lx (3.3)
where y = [y_1, y_2, ..., y_m]^T and x = ρ[n_x, n_y, n_z]^T. We assume that the light source directions are not coplanar, so that the matrix L^T L is non-singular. Equation (3.3) illustrates the linear relationship between the pixel intensities observed in the 2D finger images and the scaled surface normal vector x of the 3D finger surface. The unknown vector x can be estimated from the standard [2] least squared error solution using the following equation:

x = (L^T L)^{-1} L^T y ≡ ρn (3.4)
Fig. 3.1 Acquisition of 3D fingerprints using a minimum of three 2D images under the Lambertian surface assumption. A fixed camera acquires images I_1, I_2, I_3
tre. The calibration process [3] can be fully automated; only the distance of the LEDs from the imaging surface (h_m) is manually measured, while they are fixed around a circle.
There are several approaches to automatically locate the positions of the LEDs; a simplified one is detailed in the following. We can use a small nail or pin to automatically identify the spatial location of the different LEDs in the imaging setup. This nail is first placed somewhere in the middle of the field of view of the camera. We then illuminate each of the LED sources, seven in the example case in Fig. 3.1, one by one and acquire one image for the correspondingly lit LED. These images depict the expected shadow corresponding to the tail of the nail and are shown in Fig. 3.2. Here, each of these images is of 768 × 1024 (M × N) pixel size. The radius of the circle of LEDs (r in Fig. 3.1) is 6.5 cm and the height (h_m) is 13.5 cm, while the pixel-to-centimetre ratio is 403:1. Let (x, y) represent the centre of the circle on which the LEDs are symmetrically placed and (x′, y′) be the spatial position of the nail or pin placed on the imaging surface during the calibration.
The radius r and the height h_m are first converted from the centimetre scale to the pixel scale. Using each of the images in Fig. 3.2, we can measure the position of the shadow tip from the direction of the accompanying shadow, and these positions can be represented as (x_1, y_1), (x_2, y_2), … (x_7, y_7). The next step is to transform the LED circle centre (x, y), the pin centre (x′, y′) and each of the shadow/nail tip positions into the world coordinate system. Figure 3.3 details the conversion of image coordinates to world coordinates during the calibration process.
Fig. 3.2 Image samples acquired during the calibration process that represent the shadow from the respective LED illumination in the imaging setup

It can be noted that the LED circle centre (x, y) and radius r define a circle, while the pin centre position (x′, y′) and a shadow tip position (e.g. x_1, y_1) define a line. This can also be observed in Fig. 3.4, which shows the respective line for one image in Fig. 3.2. Therefore, the position of an LED is essentially the point of intersection of this extended line with the LED circle. In order to enhance the reliability of the measurements, the nail is placed at several different positions on the imaging surface and the corresponding seven images (similar to those in Fig. 3.2) are acquired to compute the spatial position of each of the seven LEDs, in the same manner as discussed earlier. Figure 3.4a illustrates this intersection of the extended line with the LED circle, while Fig. 3.4b illustrates the final locations of the intersection points that determine the positions of the LEDs. We can now use the height h_m, in pixel scale, to identify the spatial position of each of the seven LEDs in the world coordinate system.
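The line–circle intersection at the heart of this calibration step can be sketched as follows (coordinates are assumed to be already converted to the common pixel scale; the numbers are illustrative):

```python
import numpy as np

def locate_led(centre, radius, pin, shadow_tip):
    """Extend the line from the shadow tip through the pin base and
    intersect it with the LED circle; the LED lies on the far side of
    the pin, opposite its shadow."""
    c, p = np.asarray(centre, float), np.asarray(pin, float)
    d = p - np.asarray(shadow_tip, float)       # direction shadow -> pin
    d /= np.linalg.norm(d)
    f = p - c
    b = np.dot(f, d)
    # Solve |p + t*d - c|^2 = r^2 for the forward (t > 0) intersection.
    t = -b + np.sqrt(b * b - (np.dot(f, f) - radius ** 2))
    return p + t * d

# Pin at the circle centre with its shadow cast towards -x places the
# LED on the circle at +x (radius in pixels, e.g. 6.5 cm * 403 px/cm).
print(locate_led((0, 0), 2620.0, (0, 0), (-100.0, 0)))
```

Averaging the intersections obtained from several pin placements, as described above, reduces the effect of shadow localization errors.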
Let us represent the position of such an LED as (posLx, posLy, posLz = h_m) in the world coordinate system. The next step is to transform the pixel positions, corresponding to each pixel in the acquired image, into the image coordinate system. This transformation is achieved as follows:

where M and N represent the size of the acquired image, as discussed earlier. Each of the transformed pixel locations (posx, posy, posz) is further normalized as follows:

[x′, y′, z′] = [x, y, z]/√(x² + y² + z²) (3.6)
Fig. 3.4 a Localization of the surrounding LEDs using the intersection of the respective shadow lines during the calibration and b localized spatial positions of the LEDs using the average of multiple measurements
Fig. 3.5 Plain paper images acquired under different LEDs for the illumination normalization
The normalized pixel positions, for each of the LEDs, are stored and represent the key data from the calibration process. These positions form L in Eq. (3.4) and are fixed for a given imaging setup. Therefore, (L^T L)^{-1} can also be computed offline and retrieved during the online 3D finger imaging to compute the surface normal vectors.
The illumination received at different areas of the imaging surface, or from the LEDs themselves, can be different. Such uneven illumination from different LEDs can adversely influence the accuracy of the surface normal estimation. Therefore, illumination normalization is also performed during the calibration stage. The simplest approach for such illumination normalization is to acquire an image of a plain white paper under illumination from each individual LED, as shown in Fig. 3.5. The image normalization can then be achieved as I_c(i, j) = I(i, j)/I_w(i, j), where I is the original image and I_w is the corresponding white paper reference image.
Fig. 3.6 Sequence of seven sample images acquired in quick succession, each under a respective LED illumination
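The per-LED illumination normalization just described can be sketched as follows (the epsilon guard is an added safeguard, not part of the original formulation):

```python
import numpy as np

def normalize_illumination(image, white_ref, eps=1e-6):
    """Divide a finger image by the white paper image captured under
    the same LED, i.e. I_c = I / I_w, cancelling the uneven
    illumination field of that LED."""
    return image / np.maximum(white_ref, eps)

# A pixel that appears darker only because its LED lights that region
# weakly maps to the same corrected value as a well-lit pixel.
img = np.array([[0.2, 0.4]])
white = np.array([[0.5, 1.0]])
print(normalize_illumination(img, white))  # -> [[0.4 0.4]]
```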
The setup in Fig. 3.1 uses seven symmetrically distributed LEDs and a low-cost digital camera which can acquire 1280 × 1024 pixel images at 15 frames per second. Figure 3.6 shows seven such sample images acquired from a live 3D finger. Each of the live 3D fingers provides seven images, each under a different or unique LED illuminating the finger. Before locating the region of interest (ROI) for the reconstruction of the 3D model, it is useful to combine all seven images and generate a unified or stacked greyscale image. The intensity of the resulting stacked image is normalized, i.e. I_all = (Σ_{i=1}^{7} I_i)/max(Σ_{i=1}^{7} I_i), where max(·) denotes the maximum intensity of the summed image. This stacking or combination of images under different LEDs is a linear operation, and the result is expected to be the same as a single image acquired under all (seven) simultaneously lit LEDs.
The stacked image is first subjected to an edge detection operation, e.g. Canny, to localize the image boundaries. The resulting image is used to localize the boundaries of the finger. We scan the edge-detected image to locate the first lines that overlap the edge lines from the top and bottom in the y-direction [4]. The average of these two lines, plus and minus half of the predefined ROI height, gives the upper and lower bounds of the ROI. Then, tracing the upper and lower ROI bound lines from the right in the x-direction, the position of the first edge pixel can be marked. The average of the two positions, plus an offset, serves as the right boundary of the ROI. The left boundary of the ROI is the right boundary minus the predefined ROI width. Once the ROI is defined using the unified or stacked image, it can be easily mapped to extract the seven respective greyscale ROI images. Figure 3.7 illustrates such automatically segmented ROI images.
Fig. 3.7 The ROI images automatically segmented from the images shown in Fig. 3.6. The size of
these sample images is reduced from 2000 × 1400 to 500 × 350 pixels
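The ROI localization steps above can be sketched as follows; for a self-contained example, a simple gradient-magnitude threshold stands in for the Canny detector, and the ROI dimensions are illustrative:

```python
import numpy as np

def locate_roi(stacked, roi_h, roi_w, thresh=0.1):
    """Bound the finger in the stacked image: the outermost edge rows fix
    the vertical ROI centre, and the first edge met when scanning from
    the right marks the fingertip, i.e. the right ROI boundary."""
    gy, gx = np.gradient(stacked.astype(float))
    edges = np.hypot(gx, gy) > thresh           # stand-in for Canny
    rows = np.flatnonzero(edges.any(axis=1))
    centre = (rows[0] + rows[-1]) // 2
    top, bottom = centre - roi_h // 2, centre + roi_h // 2
    cols = np.flatnonzero(edges[top:bottom].any(axis=0))
    right = cols[-1]                            # first edge from the right
    return int(top), int(bottom), int(right - roi_w), int(right)

# Synthetic bright "finger" on a dark background.
img = np.zeros((60, 80))
img[20:40, 10:60] = 1.0
print(locate_roi(img, roi_h=16, roi_w=30))  # -> (21, 37, 30, 60)
```

The same bounds are then applied to all seven LED images, as described above, so that the per-pixel intensities stay aligned across the stack.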
Multiple ROI images acquired from the calibrated imaging setup are used to recover the surface normal vectors using the least squared solution of (3.4), i.e. x = (L^T L)^{-1} L^T y. In the example case of images acquired using the setup shown in Fig. 3.1, L ∈ R^{7×3} is the pre-computed illumination direction matrix, y ∈ R^{7×1} is the vector of grey levels observed at a given pixel position in each of the seven images, and x ∈ R^{3×1} is the unknown scaled surface normal vector at the respective pixel position. The surface reflectance, or the unknown albedo, at every 3D fingerprint pixel position is recovered from the norm, i.e. ρ = ‖x‖. The unit surface normal (n_x, n_y, n_z) at every pixel position is then recovered as n = x/ρ.
Figure 3.8 illustrates the albedo and surface normal vectors recovered from a sample image. The recovered unit surface normal vectors are shown as red arrows on the albedo map.
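The per-pixel least squared recovery described above can be sketched as follows (the randomly tilted illumination matrix is an illustrative stand-in for the calibrated L):

```python
import numpy as np

def recover_normals(L, intensities):
    """Per-pixel least squared solution of y = L x: splits x into the
    albedo rho = ||x|| and the unit surface normal n = x / rho.

    L           : (m, 3) calibrated illumination direction matrix
    intensities : (m, h, w) stack of m grey-level ROI images
    """
    m, h, w = intensities.shape
    y = intensities.reshape(m, -1)              # one column per pixel
    x, *_ = np.linalg.lstsq(L, y, rcond=None)   # (3, h*w) scaled normals
    rho = np.linalg.norm(x, axis=0)
    n = x / np.maximum(rho, 1e-12)
    return rho.reshape(h, w), n.reshape(3, h, w)

# Synthetic check: a flat patch with normal (0, 0, 1) and albedo 0.8
# viewed under seven tilted unit-norm LED directions.
rng = np.random.default_rng(0)
L = np.c_[0.3 * rng.standard_normal((7, 2)), np.ones(7)]
L /= np.linalg.norm(L, axis=1, keepdims=True)
y = (0.8 * L[:, 2])[:, None, None] * np.ones((7, 4, 4))
rho, n = recover_normals(L, y)
print(round(float(rho[0, 0]), 6), n[:, 0, 0].round(6))
```

With seven lights and three unknowns the system is over-determined, which is what makes the least squared solution robust to noise in individual LED images.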
Fig. 3.8 Surface reflectance or albedo map (left) recovered from the least squared solution for a sample image and the corresponding unit normal vectors (right) shown as red arrows
Fig. 3.9 Surface gradients p = −n_x/n_z (left) and q = −n_y/n_z (right) for the sample in Fig. 3.8
Fig. 3.10 Reconstructed sample 3D fingerprint surface using the Frankot–Chellappa algorithm. Samples shown in a are without, while those in b are with, the use of the normal amplification filter

Surface gradients (p, q) and the reflectance ρ computed from the solution of (3.4) are used to recover point cloud data, which represents the fingerprint height z in 3D space and forms the raw 3D fingerprint data. There are several algorithms in the literature to reconstruct such 3D fingerprint point cloud data from the surface normals. This reconstruction operation is essentially the integration of the gradient field recovered from the surface normals, under the assumption that the recovered surface normals are continuous along the closed 3D surface. This assumption requires that the integral of the gradient field along any closed path be equal to zero and that such integration be independent of the choice of integration path. The gradient fields recovered from real 3D finger surfaces are often non-integrable. This can be largely attributed to noise in the estimation process, imaging errors or the assumptions made about the imaging setup in the manipulation of surface normals.
Lack of consistency among the recovered surface normals is popularly known as
the integrability problem [6] and several solutions are proposed in the literature. Most
of these methods manipulate the recovered surface normals to achieve desired goal
and the 3D image is recovered from the 2D integration of these manipulated surface
normals. Such manipulation is achieved by incorporating integrability constraints
to remove inherent ambiguities in the recovered surface normal or to regularize the
desired solution. Online recovery of 3D fingerprints requires computationally simpler methods for the reconstruction, and therefore two methods, the Frankot–Chellappa algorithm [7, 37] and the Poisson solver [6, 8], are briefly discussed for their usage. The implementations of these algorithms are simple and also publicly accessible [9, 10].
Let the original 3D fingerprint surface required to be reconstructed, using the measured but non-integrable gradient field (p, q) from (3.4), be represented by S_3d. Let us use (g_x, g_y) to represent the true gradient field of the unknown 3D fingerprint surface S_3d. A common approach for the reconstruction of the 3D surface is to minimize the least squared error E(S_3d):

E(S_3d) = ∬ [ (g_x − p)² + (g_y − q)² ] dx dy    (3.7)
The Fourier transform of the unknown 3D surface and the Fourier transforms of the gradients can be directly related using Parseval's theorem [xx] as follows:

S_3d(u, v) = −j [ u G_x(u, v) + v G_y(u, v) ] / (u² + v²)    (3.8)
where G_x(u, v) and G_y(u, v), respectively, represent the Fourier transforms of the gradients g_x and g_y. The Fourier transform of the unknown 3D fingerprint surface S_3d is represented as S_3d(u, v). It is now straightforward to recover the unknown (or integrable) 3D fingerprint surface S_3d by computing the inverse Fourier transform of (3.8), i.e. S_3d = F⁻¹[S_3d(u, v)]. This approach is referred to as the Frankot–Chellappa algorithm [7, 37] and has been shown to be robust to noise [11], with several publicly accessible implementations [10]. The availability of Fast Fourier Transform implementations is another reason for the popularity of this approach for online applications, and it is used here for the reconstruction of 3D fingerprints.
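A minimal sketch of this FFT-based integration follows; the synthetic sinusoidal surface and its analytic gradients are illustrative only, used to check that the projection of (3.8) recovers the surface up to its (unrecoverable) mean height:

```python
import numpy as np

def frankot_chellappa(p, q):
    """Integrate a (possibly non-integrable) gradient field (p, q) into a
    surface by projection onto integrable Fourier basis functions, per (3.8)."""
    rows, cols = p.shape
    u = 2 * np.pi * np.fft.fftfreq(cols)      # angular frequency along x
    v = 2 * np.pi * np.fft.fftfreq(rows)      # angular frequency along y
    U, V = np.meshgrid(u, v)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = U**2 + V**2
    denom[0, 0] = 1.0                         # avoid division by zero at DC
    Z = -1j * (U * P + V * Q) / denom
    Z[0, 0] = 0.0                             # mean height is unrecoverable
    return np.real(np.fft.ifft2(Z))

# Synthetic check: a smooth periodic surface and its exact gradient field.
yy, xx = np.mgrid[0:64, 0:64]
k = 2 * np.pi / 64
z = np.sin(k * xx) + np.cos(k * yy)
p = k * np.cos(k * xx)                        # dz/dx
q = -k * np.sin(k * yy)                       # dz/dy
z_rec = frankot_chellappa(p, q)
err = np.max(np.abs((z - z.mean()) - (z_rec - z_rec.mean())))
```

For a band-limited periodic surface such as this one the recovery is exact up to the mean; noisy, non-integrable gradient fields from real fingers are instead projected onto their nearest integrable counterpart.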
Fig. 3.11 Sample 3D fingerprint surface reconstructed using the Poisson solver
In order to ensure that the surface gradients are integrable, the curl of the gradient field along any closed path should be zero, i.e. ∇ × (p, q) = 0. Therefore, the Euler–Lagrange relation, which yields the following Poisson equation, offers another approach [8] to recover the corrected or manipulated gradient fields:

∇²S_3d = div(p, q)    (3.9)
This approach is referred to as the Poisson solver and is detailed in [8], with public implementations in [10]. It offers another computationally simple approach to reconstruct the 3D fingerprint S_3d. Figures 3.10 and 3.11 show sample 3D fingerprints reconstructed using the Frankot–Chellappa algorithm and the Poisson solver, respectively. The recovered surface normals can be further enhanced or amplified for better visibility, as in [12]. Such an amplification step first computes the average of the surface normals over a local patch of size b × b. The difference between the actual surface normal and the computed average is then amplified for the enhancement; this operation can be written as follows:
ñ = n + α [ n − (1/b²) Σ_{i=1}^{b²} n_i ]    (3.10)
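Eq. (3.10) can be sketched as below; the patch size b and gain α are illustrative choices, not values fixed by the text:

```python
import numpy as np

def local_mean(n, b):
    """Mean over a b x b neighbourhood (edge-replicated) of an H x W x 3 field."""
    pad = b // 2
    padded = np.pad(n, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(n, dtype=float)
    for dy in range(b):
        for dx in range(b):
            out += padded[dy:dy + n.shape[0], dx:dx + n.shape[1]]
    return out / (b * b)

def amplify_normals(n, b=5, alpha=2.0):
    """Amplify each normal's deviation from its local-patch average, per
    (3.10), and re-normalize to unit length. b and alpha are illustrative."""
    n_tilde = n + alpha * (n - local_mean(n, b))
    return n_tilde / np.linalg.norm(n_tilde, axis=2, keepdims=True)
```

A perfectly flat normal field is left unchanged, while local ridge-scale deviations are exaggerated, which is what improves the visibility of the reconstructed ridges in Fig. 3.10b.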
Fig. 3.12 Sample ground truth surface cross section a and the surface generated using
Frankot–Chellappa algorithm b
The lack of absolute ground truth makes it very difficult to evaluate the accuracy of a given imaging or reconstruction algorithm. Therefore, a known 3D surface model is generated whose cross section is
shown in Fig. 3.12a. It can be observed from Fig. 3.12b that using Frankot–Chellappa
algorithm, the slope of reconstructed surface has been changed to ensure continuity
with the boundary points. The padding operation for the surface normal vector field can reduce the distortion at the surface boundary. However, it was observed that the reconstruction using the Frankot–Chellappa algorithm is weak, or susceptible to errors, at discontinuities, i.e. near the edges of the 3D fingerprint ROI. The ground truth model for the evaluation of 3D reconstruction is currently obtained from a laser scanner [13]. A 3DMD scanner [14], which is popular for medical imaging applications, with 0.5 mm RMS accuracy was employed. There is a range of industrial laser scanners which can offer much higher accuracy (up to 0.001 inch, or 0.00254 cm). These can be employed to generate ground truth 3D fingerprint data and to ascertain the accuracy of reconstructed 3D fingerprints.
and can be approximated for the blue light (λ = 550 nm) used during the imaging, which also has a shorter wavelength. The approximated MRS is therefore 2.630 μm.
It should be noted that for a fixed camera-based imaging system, the image resolution is not constant over the imaged area. The PIV standards [15] introduced by the FBI are widely used to evaluate the capabilities of commercial 2D fingerprint sensors. These standards mandate the use of calibration targets, sine gratings on a flat surface, to facilitate the measurement of quantitative imaging thresholds for fingerprint sensors. In medical imaging applications, such as computerized tomography or radiography, 3D targets or phantoms that represent the characteristics of the expected imaging targets during deployment are widely employed. A similar method is detailed in reference [16] to evaluate contactless 2D fingerprint sensors using 3D cylindrical targets that are mapped with 2D fingerprint ridge–valley patterns. Fabrication of such 3D fingerprint targets [17], using materials that closely resemble human fingers in their ridge–valley characteristics and surface reflectance, can be used to evaluate the fidelity of 3D fingerprint images. The availability of commercial 3D fingerprint sensors in the coming years is expected to initiate the standardization and development of 3D fingerprint targets.
Finger skin, like many other non-Lambertian surfaces, is often accompanied by sweat, oil or greasy contamination, which generates specular reflections in the contactless 2D images acquired for the 3D reconstruction. Such specular reflection from unwanted contamination generates noise and degrades the accuracy of the reconstructed 3D fingerprints. Therefore, it is desirable to suppress or eliminate such specular pixels in the 2D images before the reconstruction. Hierarchical selection of Lambertian reflectance [18] is one of the more effective methods to automatically identify these specular pixels and is briefly discussed here. Let us assume that the vector I_S = {I_1, I_2, …, I_m}, with m ≥ 3, represents the 2D fingerprint ROI pixel intensities. In 3D space, any four (illumination) vectors are linearly dependent, and therefore we can write
a_1 I_1 + a_2 I_2 + a_3 I_3 + a_4 I_4 = 0    (3.11)
for some real coefficients a_k, with k = 1, …, 4. Both sides of the above equation can be multiplied by the surface normal vector and albedo to generalize the Lambertian error [19]. In summary, the key idea is that any four vectors or pixels are linearly dependent, such that the coefficients in the above equation can be computed from this zero-sum relationship. For the specific example of the imaging setup in Fig. 3.1 with seven illumination sources (m = 7), this error can be written as follows:
E_L = Σ_{i=1}^{7!/(4!3!)} | Σ_{k=1}^{4} a_{ik} I_{S_k} |    (3.12)
Fig. 3.13 Sample images a–b with specular pixels acquired from setup in Fig. 3.1 and the image
c representing the mapping of specular light affected pixels
which is the sum of the error for every combination of four vectors I_{S_k} selected from the seven. When E_L is larger than a threshold, the highest value in I_S can be considered a specular pixel and is ignored, i.e. the remaining six darker pixel values are used in the computations. Figure 3.13 shows sample contactless fingerprint images with specular reflections and sample results generated by this approach for identifying specular pixels.
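The idea behind (3.11)–(3.12) can be sketched as follows. For a Lambertian pixel I_k = ρ(L_k · n), so any coefficients a with Σ_k a_k L_k = 0 also satisfy Σ_k a_k I_k = 0; the sketch below takes the coefficients from the null space of each 3 × 4 light-direction matrix, which is one of several valid ways to compute them:

```python
import numpy as np
from itertools import combinations

def lambertian_error(lights, ivec):
    """Sum of |a1*I1 + ... + a4*I4| over every 4-subset of the m sources,
    per (3.12). For a Lambertian pixel I_k = rho * (L_k . n), so coefficients
    a with sum_k a_k L_k = 0 also give sum_k a_k I_k = 0; here a is taken
    from the null space of each 3 x 4 light-direction matrix via SVD."""
    err = 0.0
    for idx in combinations(range(len(ivec)), 4):
        A = lights[list(idx)].T               # 3 x 4 matrix of directions
        _, _, vt = np.linalg.svd(A)
        a = vt[-1]                            # null-space vector: A @ a ~ 0
        err += abs(a @ ivec[list(idx)])
    return err
```

A clean Lambertian pixel yields an error near zero, while a specular (or shadowed) observation breaks the linear dependence and inflates E_L.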
This approach [18] is computationally complex, as it requires us to compute solutions of (3.11) for each of the presented 3D fingerprints. Our observations indicated that this approach generates slightly better results for the reconstructed 3D fingerprints. However, such marginal improvement is not sufficient to justify the computational requirements for an online system, and therefore we preferred a simpler approach to identify and suppress specular reflections. We treat the top K per cent of high-intensity pixels, from all the 2D fingerprint ROI images with different illumination profiles acquired for a 3D fingerprint reconstruction, as the specular reflections. The magnitude of K is empirically determined and was fixed at 0.228 during all our experiments (Figs. 3.10 and 3.11).
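This simpler suppression step amounts to a percentile threshold over the stacked ROI images; a sketch with K = 0.228, as quoted above:

```python
import numpy as np

def specular_mask(images, k=0.228):
    """Flag the brightest k per cent of pixels across all the ROI images,
    acquired under different illuminations, as specular; k = 0.228 as fixed
    in the experiments described above."""
    stack = np.stack(images).astype(float)
    thresh = np.percentile(stack, 100.0 - k)
    return stack >= thresh                    # boolean mask, one per image
```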
The real-world 3D fingerprint source image data is not expected to comply with the Lambertian surface reflectance requirements in (3.1) for the formation of images. Such non-Lambertian effects can be attributed to specular reflection, shadows, sensor/source noise or, importantly, to the nature of finger skin, which can largely be considered a translucent surface. In order to address such non-Lambertian influences on the classical photometric stereo method, several approaches have been introduced in the literature. These approaches can be broadly categorized into two classes and are briefly introduced in the following two sections.
The first category comprises approaches that use statistical methods to eliminate the adverse influence of the non-Lambertian effects. Given the illumination matrix L in (3.3), acquired during the calibration of the setup in Fig. 3.1, the photometric stereo (PS) problem can be formulated as the solution to the following optimization problem:

min_{b_j} || y_j − L^T b_j ||²    (3.13)
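A crude robust variant of this per-pixel least squared problem can be sketched by solving it once and then iteratively discarding the worst-fitting observations before re-solving. This is an illustration only, not the sparse-regression (SBL) method discussed in the text; here the light matrix is stored as the m × 3 transpose of the L^T in (3.13):

```python
import numpy as np

def robust_pixel_solve(L, y, n_drop=2):
    """Solve the least squared problem of (3.13), then iteratively discard
    the worst-fitting observations (e.g. specular or shadowed ones) and
    re-solve. Illustrative stand-in; the SBL method formulates this as
    sparse regression instead. L is the m x 3 stacked light matrix."""
    keep = np.arange(len(y))
    b, *_ = np.linalg.lstsq(L[keep], y[keep], rcond=None)
    for _ in range(n_drop):
        resid = np.abs(L[keep] @ b - y[keep])
        keep = np.delete(keep, np.argmax(resid))
        b, *_ = np.linalg.lstsq(L[keep], y[keep], rcond=None)
    return b
```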
Fig. 3.14 Surface normal orientations (blue arrows) recovered from a sample 3D fingerprint using
a classical PS method, and b SBL method. Reconstructed 3D fingerprint from different views using
c classical PS method and d SBL method. Heat map of recovered surface normal from classical PS
method in e, SBL method in f and their difference in g. The mean angular difference between the
surface normal orientations in e and f is 9.6°
complexity. Therefore, reference [23] revisits the photometric stereo problem under a non-Lambertian reflectance assumption and derives a new optimization formulation
for real objects. This approach attempts to minimize the error between the pixel
intensity from the real-world illumination reflectance and the corresponding Lam-
bertian reflectance. Figure 3.15 illustrates comparative experimental results from a
3D fingerprint sample acquired using setup shown in Fig. 3.1.
The images in Figs. 3.15 and 3.16, respectively, illustrate the reconstructed results from two different samples using the imaging setup in Fig. 3.1. In each of these two figures, the first column of images shows the results from the LS method using all seven images, the second column shows the results from the method in [23] using three images, and the third column shows the results from the LS method using three images. The two rows in each of these figures provide two different views of the reconstructed result. It can also be ascertained
Fig. 3.15 Sample results from fingerprint reconstruction using LS method: a all seven images, b
method in [23] using only three images and c LS method using only three images
Fig. 3.16 Sample results from fingerprint reconstruction using LS method: a all seven images, b
method in [23] using only three images and c LS method using only three images
from the image samples in Figs. 3.15 and 3.16 that the results using the method in [23] offer better detail in recovering the fingerprint ridges than those from the LS method.
The HK model assumes that the light reflected by finger skin, or any material that
exhibits subsurface scattering, can be computed from the arithmetic combination
of incident light reflected by each layer multiplied by the percentage of incident
light that actually reaches the respective layer. The finger skin is modelled as a
two-layer surface, consisting of the epidermis and the dermis, each with different
reflectance properties. Figure 3.17 illustrates how the incident light L i is reflected
and the components of reflected light L r from different subsurface layers. In the
original HK model [29], each layer is parameterized by the absorption cross section σ_a, the scattering cross section σ_s and the layer thickness τ_d; g is the mean cosine of the phase function, and ϕ represents the angle between the incident light direction and the view direction. The intensity of scattered light from a subsurface layer (Fig. 3.17), using this model, can be defined [29] as follows:
L_{r,v} = (σ_q/σ_t) · (1 − g²)/(1 + g² − 2g cos ϕ)^{3/2} · cos θ_i/(cos θ_i + cos θ_r) · [ 1 − e^{−σ_t τ_d (cos θ_i + cos θ_r)/(cos θ_i cos θ_r)} ] + ρ cos θ_i    (3.14)
where L¹_{r,v} represents the first-order scattering term and ρ cos θ_i is the approximation for the higher order terms (L_{r,h}). The above equation uses the simplified HK model, where only the response from the epidermis layer is accounted for, while the scattering in the dermis layer is ignored. This approximation reduces the contributions in the final image, but further attempts to compute too many unknown parameters can degrade the estimation accuracy of the main parameters.
σ_q = (σ_s T^{21} T^{12})/(4π),    σ_t = σ_s + σ_a,    ζ = σ_s/σ_t    (3.15)

L_r = L_{r,s} + L_{r,v}    (3.16)
Fig. 3.17 Modelling scattering, reflection and refraction from finger skin under non-Lambertian
assumption using HK model
Optimization problem:  arg min_{x_j} E(x_j),  where E(x_j) = Σ_{k=1}^{M} ( L_r^k − I_k )²    (3.17)
where M is the number of valid input pixel intensities and I_k represents the grey level of pixels from each of the observed images. Using [29], we can choose {τ_d, σ_s, σ_a, g} as {0.12 mm, 30 mm⁻¹, 4.5 mm⁻¹, 0.81}, respectively. Similarly, from the details in [12, 30], σ_s and σ_a can be fixed as 6 and 8.5 mm⁻¹ for the blue colour light, which peaks at 470 nm. The depth of the epidermis can be assumed to be 0.325 mm, as in [31]. Therefore, {τ_d, σ_s, σ_a, g} = {0.325 mm, 6 mm⁻¹, 8.5 mm⁻¹, 0.81} can be another set of fixed parameters for our investigation. Assuming T^{12} and T^{21} as fixed parameters, σ_q is then proportional to T^{12} and T^{21}. The set of unknown parameters in (3.17) can be consolidated in a vector x_j = (n_x^j, n_y^j, n_z^j, ρ_j, σ_q^j), and its transformation to the spherical coordinate system can be represented as x_j = (θ_j, φ_j, ρ_j, σ_q^j), where θ = arcsin(n_z) and φ = arctan(n_y/n_x). In order to solve the optimization problem in (3.17), it is desirable to select a robust optimization method which can converge efficiently. The method employed in our experimentation was based on unconstrained optimization.
R_0 = (n_1 − n_2)² / (n_1 + n_2)²    (3.18)
where the refractive index n is 1.0 for the air and 1.38 for the epidermis [32]. Using
Schlick’s approximation in [33, 34], the Fresnel reflection factor R can be computed
as follows:
R_p = |R|²    (3.21)

T^{12} = (n_2²/n_1²)(1 − R_p^{12})    (3.22)

T^{21} = (n_1²/n_2²)(1 − R_p^{21})    (3.23)

T^{12} T^{21} = (1 − R_p^{12})(1 − R_p^{21}) = [1 − (R_0 + (1 − R_0)(1 − cos θ_i)⁵)] [1 − (R_0 + (1 − R_0)(1 − cos θ_r)⁵)]    (3.24)
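The Fresnel quantities above are simple enough to compute directly; a sketch with n1 = 1.0 (air) and n2 = 1.38 (epidermis), as quoted above:

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.38):
    """Schlick's approximation of the Fresnel reflection factor, with R0
    from (3.18); n1 = 1.0 (air), n2 = 1.38 (epidermis)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def transmission_product(cos_i, cos_r):
    """T12 * T21 from (3.24): the fraction of light transmitted into and
    back out of the epidermis for incidence/exit cosines cos_i, cos_r."""
    return (1.0 - schlick_reflectance(cos_i)) * (1.0 - schlick_reflectance(cos_r))
```

At normal incidence the reflectance reduces to R_0 (about 2.5% for skin), so roughly 95% of the light is transmitted both ways, while at grazing angles the transmission vanishes.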
Our experiments with multiple 3D fingerprint samples indicated that the results from the Lambertian model and the HK model are very similar.
One possible indicator of comparative performance is the energy, as employed in (3.17); comparative numbers for two sample images are shown in Table 3.1. The second sample results in this table are from the images shown in Fig. 3.18. The results from the HK model show smaller energy, but smaller energy alone is not an indicator of better 3D reconstruction. For spatial points with poor conditions, e.g. specular and shadowed points, the Lambertian model generally showed higher energy than the HK model, but its result is closer to the real value (Fig. 3.19). Despite the lack of comparatively superior results using the HK model, this approach offers a lot of promise and underlines the need for further work. The key shortcoming in the use of the HK model in our work is plausibly in the consideration of the 3D geometry of the fingerprint surface. Further work is required to account for the ridge–valley 3D geometry of finger skin and the higher order subsurface scattering terms in (3.14), which were ignored in the experiments.
Fig. 3.18 Comparative results from same finger using HK model and Lambertian model
Table 3.1 Total energy from the two methods for two sample fingerprint images

                   | Fingerprint sample 1 | Fingerprint sample 2
Lambertian model   | 8.3246 × 10⁹         | 2.7087 × 10⁷
HK model           | 8.3237 × 10⁹         | 2.7068 × 10⁷
Fig. 3.19 Reconstruction results from shadowed region: a results using HK model and b results
using Lambertian model
References
1. Horn B (1990) Height and gradient from shading. Int J Comput Vision 5:37–75
2. Kumar A, Kwong C (2015) Towards contactless, low-cost and accurate 3D fingerprint identi-
fication. IEEE Trans Patt Analy Mach Intell 37:681–696
3. Zhang Z (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal
Mach Intell 22:1330–1334
4. Kumar A, Kwong C (2012) Contactless 3D biometric feature identification system and method
thereof. US Provisional application No. 61/680,716
5. Ikehata S, Wipf D, Matsushita Y, Aizawa K (2012) Robust photometric stereo using sparse regression. In: Proceedings of CVPR 2012, Providence, USA, pp 318–325
6. Agrawal A, Raskar R, Chellappa R (2006) What is the range of surface reconstructions from a
gradient field? In: Proceedings of ECCV 2006, Graz, Austria
7. Frankot RT, Chellappa R (1987) A method for enforcing integrability in shape from shading algorithms. In: Proceedings of ICCV 1987
8. Simchony T, Chellappa R, Shao M (1990) Direct analytical methods for solving Poisson equations in computer vision problems. IEEE Trans Pattern Anal Mach Intell 435–446
9. http://www.peterkovesi.com/matlabfns/Shapelet/frankotchellappa.m. Accessed May 2018
10. http://www.amitkagrawal.com/eccv06/RangeofSurfaceReconstructions.html
11. Schlüns K, Klette R (1997) Local and global integration of discrete vector fields. In: Solina F,
Kropatsch WG, Klette R, Bajcsy R (eds) Advances in computer vision. Advances in computing
science. Springer, Vienna
12. Xie W, Song Z, Zhang X (2010) A novel photometric method for real-time 3D reconstruction
of fingerprint. In: Proceedings international symposium on visual computing LNCS 6454,
Springer, pp 31–40
13. Seitz S, Curless B, Diebel J, Scharstein D, Szeliski R (2006) A comparison and evaluation of
multi-view stereo reconstruction algorithms. Proc CVPR 2006:519–526
14. http://www.dirdim.com/prod_laserscanners.htm. Accessed May 2018
15. Personal Identity Verification (PIV) of Federal Employees and Contractors, US Department of
Commerce, FIPS PUB 201–2, Aug. 2013, https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.
201-2.pdf
16. Orandi S, Byers F, Harvey S, Garris M, Wood S, Libert JM, Wu JC (2016) Standard calibration
target for contactless fingerprint scanners, US Patent No. 9,349,033 B2
17. Engelsma J, Arora SS, Jain AK, Paulter NG (2018) Universal 3D wearable fingerprint targets:
advancing fingerprint reader evaluations. IEEE Trans Info Forensics Security 13(6):1564–1578
18. Bringer B, Bony A, Khoudeir M (2012) Specularity and shadow detection for the multisource
photometric reconstruction of a texture surface. J Opt Soc Am A 29:11–21
19. Barsky S, Petrou M (2003) The 4-source photometric stereo technique for three-dimensional
surfaces in the presence of highlights and shadows. IEEE Trans Patt Analy Mach Intell
25:1230–1252
20. Woodham RJ (1994) Gradient and curvature from photometric stereo including local confidence
estimation. J Opt Soc America 11:3050–3068
21. Wu L, Ganesh A, Shi B, Matsushita Y, Wang Y, Ma Y (2011) Robust photometric stereo
via low-rank matrix completion and recovery. In: Proceedings of ACCV 2010, Springer, pp
703–717
22. Ikehata S, Wipf D, Matsushita Y, Aizawa K (2012) Robust photometric stereo using sparse
regression. Proc CVPR 2012:318–325
23. Zheng Q, Kumar A, Pan G (2015) On accurate recovery of 3D surface normal using minimum
2D images. Technical Report No. COMP-K-20. http://www.comp.polyu.edu.hk/~csajaykr/
COMP-K-20.pdf
24. Georghiades AS (2003) Incorporating the torrance and sparrow model of reflectance in uncal-
ibrated photometric stereo. Proc ICCV 2003:816–823
25. Torrance KE, Sparrow EM (1967) Theory for off-specular reflection from roughened surfaces.
J Opt Soc Am 57(9):1105–1112
26. Drbohlav O, Chantler M (2005) Can two specular pixels calibrate photometric stereo? Proc
ICCV 2005:1850–1857
27. Drbohlav OR, Šára R (2002) Specularities reduce ambiguity of uncalibrated photometric stereo.
In: Proceedings of ECCV 2002, Springer, pp 46–60
28. Chandraker M, Bai J, Ramamoorthi R (2013) On differential photometric reconstruction for
unknown, isotropic BRDFs. IEEE Trans Patt Anal Mach Intell 35:2941–2955
29. Hanrahan P, Krueger W (1993) Reflection from layered surfaces due to subsurface scattering.
In: Proceedings of SIGGRAPH, pp 165–174
30. Li L, Ng CS-L (2009) Rendering human skin using a multi-layer reflection model. Int J Math-
ematics Comput Simul 3:44–53
31. Jacques SL (1989) Skin optics. Oregon Medical Laser Center News, https://omlc.org/news/
jan98/skinoptics.html. Accessed July 2018
32. Weyrich T, Matusik W, Pfister H, Bickel B, Donner C, Tu C, McAndless J, Lee J, Ngan A, Jensen HW, Gross M (2006) Analysis of human faces using a measurement-based skin
reflectance model. In: Proceedings of SIGGRAPH 2006, Boston, pp 1013–1024
33. Schlick C (1994) An inexpensive BRDF model for physically-based rendering. Comput Graph-
ics Forum 13:233–246
34. Greve BD (2006) Reflections and refractions in ray tracing, https://graphics.stanford.edu/
courses/cs148-10-summer/docs/2006–degreve–reflection_refraction.pdf
35. FFTW (2018) C subroutine library for discrete Fourier transform, http://www.fftw.org
36. Pang X, Song Z, Xie W (2013) Extracting valley-ridge lines from point-cloud-based 3D fin-
gerprint models. IEEE Comput Graphics Appl 73–81
37. Frankot RT, Chellappa R (1988) A method for enforcing integrability in shape from shading algorithms. IEEE Trans Pattern Anal Mach Intell 439–451
Chapter 4
3D Fingerprint Acquisition Using
Coloured Photometric Stereo
This image acquisition setup uses a single fixed 2D camera and six coloured LEDs,
i.e., two blue, two green and two red LEDs, respectively. These LEDs are symmetri-
cally distributed, around the camera lens as shown in Fig. 4.1. It is ensured that the
camera lens does not receive direct illumination from any of the LEDs. The average
distance between the camera and finger in this setup was about 75 mm. The acqui-
sition of fingerprint image is synchronized with the switching of LED illumination.
Fig. 4.1 Coloured-LED imaging setup: red (R1, R2), green (G1, G2) and blue (B1, B2) LEDs arranged around the camera lens
When one red, one green and one blue LED are simultaneously illuminated, one imaging shot is automatically acquired using the same program/software which controls the switching of the LEDs. The imaging setup is calibrated offline, and an iron ball
was used to automatically locate and calibrate the LEDs positions using the method
in [3]. The calibrated pixel positions (discussed in Chap. 3) are also made publicly
available [4] for further research.
Three imaging shots are automatically acquired in sequence. The first two images are used for the 3D fingerprint reconstruction, and the last shot is a repetition of the first illumination position, which is used to detect finger motion. The 1400 × 900 pixel region of interest (ROI) from the acquired fingerprint is automatically segmented (Fig. 4.2), using a similar approach as discussed in the last chapter. Each of the segmented contactless 2D fingerprint images is first subjected to preprocessing operations, including the removal of specular reflection, and these are discussed in the next chapter. Figure 4.1 illustrates a simplified block diagram of the coloured-LED-based fingerprint image acquisition setup.
4.2 Finger Motion Detection and Image Acquisition 55
Fig. 4.2 Positional differences during 3D fingerprint imaging. The images in the first two rows are
acquired image sample with motion. The images in the last two rows are sample without motion
Two imaging shots are acquired from the presented fingers to reconstruct the 3D fingerprint. Despite high-speed imaging, it is still possible to observe involuntary motion of fingers during successive imaging. In order to address this limitation, it is important to incorporate finger motion detection and to accelerate the image acquisition process. Two image shots are acquired in a short interval of about 250 ms. In order to detect finger motion, a third image shot is also acquired from the presented finger, i.e., three consecutive imaging shots are acquired in an interval of about 800 ms. The third image shot is essentially a repetition of the image with the same illumination positions as the first one and is used to ascertain the finger motion. By computing the mean squared error (MSE) and the key point positional differences between the first and the third images, fingerprint image samples with motion are rejected using thresholds (MSE > 5 and ΔX, ΔY > 10 pixels). Figure 4.2 illustrates such finger motion detection. The sharpness of each fingerprint image ROI is measured by computing the
magnitude of the image gradient. If the average magnitude of the fingerprint gradient image is smaller than a predetermined threshold, the acquired image is considered blurred and is automatically discarded.
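The motion and blur checks above can be sketched together; the sharpness threshold below is an illustrative value, and the key-point displacement check described in the text is omitted for brevity:

```python
import numpy as np

def accept_capture(img_first, img_third, mse_thresh=5.0, sharp_thresh=2.0):
    """Quality gate sketched from the text: reject on motion when the MSE
    between the first shot and the repeated-illumination (third) shot
    exceeds mse_thresh, and reject on blur when the mean gradient magnitude
    of the ROI falls below sharp_thresh (an illustrative threshold)."""
    mse = np.mean((img_first.astype(float) - img_third.astype(float)) ** 2)
    gy, gx = np.gradient(img_first.astype(float))
    sharpness = np.mean(np.hypot(gx, gy))
    return bool(mse <= mse_thresh and sharpness >= sharp_thresh)
```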
In addition to finger motion, specular reflection from the finger surface also degrades the quality of the reconstructed 3D fingerprint image. Reference [5] describes the use of the SUV colour space to remove such specular reflection. This approach is quite effective in separating the specular and diffuse components into the S channel and UV channels. However, it is computationally complex, i.e., it required 0.322 s in our implementation, and is therefore less attractive for online usage. We instead preferred a simplified approach which uses a predefined threshold to identify specular reflection pixels and fills these identified pixels with the average value of the neighbouring pixels.
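This simplified threshold-and-fill step can be sketched as below, averaging only the non-specular neighbours within a 3 × 3 window:

```python
import numpy as np

def fill_specular(img, thresh):
    """Replace pixels above an intensity threshold with the average of their
    non-specular 3 x 3 neighbours -- the simplified in-painting preferred
    here over the SUV colour-space separation of [5]."""
    img = img.astype(float)
    spec = img > thresh
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    good = np.pad(~spec, 1, mode="edge").astype(float)
    acc = np.zeros_like(img)
    cnt = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            acc += padded[dy:dy + h, dx:dx + w] * good[dy:dy + h, dx:dx + w]
            cnt += good[dy:dy + h, dx:dx + w]
    out = img.copy()
    fix = spec & (cnt > 0)
    out[fix] = acc[fix] / cnt[fix]
    return out
```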
This imaging approach is significantly faster than that using the setup in Fig. 3.1 of Chap. 3, and required 0.064 s in our implementation. The region of interest (ROI) images are automatically segmented from the acquired images using background detection. The ROI images from the two colour images are split into RGB channels; such an image sample is shown in Fig. 4.3. The six automatically generated grey-level images are used to reconstruct the 3D fingerprint, as discussed in the next section.
Fig. 4.3 a Acquired image sample from the setup in Fig. 4.1, b segmented colour ROI image, and
(c–e), respectively, are the red, green and blue colour components of image sample in (b)
4.3 Reconstructing 3D Fingerprint Using RGB Channels 57
The colour (RGB) photometric stereo approach [10] is incorporated to reconstruct the 3D fingerprint. This approach works well under the assumption that finger surfaces are nearly Lambertian. Therefore, we consider the Lambertian model for the surface reflectance, where ρ represents the surface albedo. Let I(x, y) represent the acquired 2D fingerprint images and n(x, y, z) the unit surface normal vectors at the respective finger surface points. The LED illumination sources L(x, y, z) are calibrated as discussed in Chap. 3.

I = ρ n · L    (4.1)

The reflectance representing the surface albedo can be computed from the norm, i.e., ρ = |ñ|. For the different RGB components of the acquired image I, i.e., I_R, I_G and I_B, we can rewrite Eq. (4.1) as follows:

I_R = ρ_R L n    (4.4)
I_G = ρ_G L n    (4.5)
I_B = ρ_B L n    (4.6)
The least squared solution from (4.2) can be used to recover the surface normals for the above equations. The depth map of the 3D fingerprint can be computed from the weighted combination of the per-channel results:
Fig. 4.4 Sample 3D fingerprint image acquired using colour photometric stereo with Frankot–Chel-
lappa algorithm
F = ω_1 F_R + ω_2 F_G + ω_3 F_B    (4.7)
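The fusion in (4.7) is a straightforward weighted combination of the per-channel results; the weights below are illustrative placeholders, since their values are not fixed in the text:

```python
import numpy as np

def fuse_channels(f_r, f_g, f_b, w=(0.2, 0.3, 0.5)):
    """Weighted fusion of per-channel reconstructions as in (4.7); the
    weights w are illustrative and are normalized to sum to one."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    return w[0] * f_r + w[1] * f_g + w[2] * f_b
```

When the three channel reconstructions agree, the fusion simply returns their common value; otherwise the weights can be tuned to favour the channel with the most reliable reflectance.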
Fig. 4.5 The synthetic 3D model (a) employed to evaluate the reconstruction accuracy. The recon-
structed 3D model using the Frankot–Chellappa method (b), using Poisson solver (c) and using
shapelets (d)
Contactless 3D fingerprint imaging systems may have to cope with contaminations on the finger skin surface. Therefore, this section presents results from a brief study on the influence of coloured markings on finger skin on the 3D images acquired using coloured photometric stereo. Six different fingerprint samples, with and without red colour markings, were acquired from two different subjects/fingers, and these images are shown in Fig. 4.6. The corresponding reconstructed 3D fingerprint surfaces are illustrated in Fig. 4.7. As can be observed from the figure, such red markings have very little or negligible effect on the reconstructed 3D fingerprint surface. Our further results in matching these contactless 3D fingerprints in Fig. 4.7,
Fig. 4.6 a, b, d, e Two sample images from two different 2D fingerprints without red marking; c, f illustrate the respective fingerprint samples, with the red marking on the fingers, for the experiments
using the algorithm in [2], suggest that such colour markings are not expected to degrade the performance of the 3D fingerprint reconstruction, minutiae extraction or the 3D fingerprint matching.
It is useful to ascertain the influence of the same markings/contaminations on conventional contact-based fingerprint identification. Therefore, we also acquired the corresponding contact-based 2D fingerprints, with and without markings, from the same two subjects using a URU 4000 fingerprint reader. As observed from the sample images in Fig. 4.8, the red colour marking can have some effect on the ridge–valley patterns observed in the contact-based 2D fingerprints. The matching score between samples (a1) and (a2) is 0.7050, and between (b1) and (b2) is 0.6688. These scores can be compared with the respective scores of 0.7603 and 0.7327 for matching the respective fingerprints without markings. Our observations indicated degradation of the genuine match scores while matching the same fingers, with and without red colour markings on the finger skin, using a contact-based fingerprint sensor.
4.6 Summary
Fig. 4.8 Contact-based 2D fingerprint samples from Subject A (a1, a2) and Subject B (b1, b2); (a1) and (b1) are samples without marking, while (a2) and (b2) are the respective samples with red markings
Such an approach cannot benefit from the advantages associated with contactless 3D fingerprint imaging, as it requires physical contact with an elastomeric sensor. Acquired 3D fingerprint images, whether using photometric stereo or any of the other methods discussed in Chap. 2, require a range of postprocessing operations to recover the features. These preprocessing and postprocessing operations are discussed in the next chapter.
References
1. Spence A, Chantier M (2006) Optimal illumination for three image photometric stereo using
sensitivity analysis. IET Vis Image Sig Proc 153(2):149–159
2. Lin C, Kumar A (2017) Tetrahedron based fast 3D fingerprint identification using colored
LEDs illumination. IEEE Trans Pattern Anal Mach Intell, pp 1–10
3. Zhou W, Kambhamettu C (2002) Estimation of illuminant direction and intensity of multiple
light sources. In: Proceedings of ECCV 2002. Springer, pp 206–220
4. The Hong Kong Polytechnic University 3D fingerprint Database V2, http://www.comp.polyu.
edu.hk/~csajaykr/3Dfingerv2.htm, February 2018
5. Mallick SP, Zickler T, Belhumeur PN, Kriegman DJ (2006) Specularity removal in images and
videos: a pde approach. In: Proceedings of ECCV 2006. Springer, pp 550–563
6. Xiong Y, Chakrabarti A, Basri R, Gortler SJ, Jacobs DW, Zickler T (2015) From shading to
local shape. IEEE Trans Pattern Anal Mach Intell 37:67–79
7. Johnson MK, Cole F, Raj A, Adelson EH (2011) Microgeometry capture using an elastomeric
sensor. ACM Trans Graphics (TOG) 30(4):46
8. Agrawal A, Raskar R, Chellappa R (2006) What is the range of surface reconstructions from
a gradient field?. In: Proceedings of ECCV 2006. Springer, pp 578–591
9. Kovesi P (2005) Shapelets correlated with surface normals produce surfaces. In: Proceedings
of ICCV 2005, pp 994–1001
10. Christensen PH, Shapiro LG (1994) Three dimensional shape from color photometric stereo.
Int J Comput Vision 13(2):213–227
Chapter 5
3D Fingerprint Image Preprocessing
and Enhancement
3D grid or voxelized cloud, and its 3D coordinates are implicitly defined from
its position on the grid. Voxelization is a useful representation for efficient 3D
data processing. A voxelized cloud is often the preferred representation of the 3D
fingerprint data acquired using OCT.
• Polygon Mesh: The polygon mesh or triangle mesh is a topological representation
of the 3D finger surface data using a collection of 3D surface vertices and their con-
nections. This representation provides the orientation of vertices from the directions
of their connections and is useful for rendering. The raw point cloud data is often
converted to a polygon mesh, which is the most widely used 3D data representation
format in computer graphics.
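The voxel representation above can be sketched in a few lines of Python. This is an illustrative fragment only (the function `voxelize`, the voxel size and the toy cloud are our own assumptions, not from the text); it shows how a raw point cloud maps onto a regular grid whose integer indices implicitly encode the 3D coordinates:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map a 3D point cloud to occupied voxel indices on a regular grid.

    Each point's 3D coordinates are implicitly recoverable from its grid
    position: origin + index * voxel_size.
    """
    origin = points.min(axis=0)
    indices = np.floor((points - origin) / voxel_size).astype(int)
    occupied = np.unique(indices, axis=0)   # one entry per occupied voxel
    return origin, occupied

# toy cloud of 4 points; the first two fall inside the same voxel
cloud = np.array([[0.0, 0.0, 0.0],
                  [0.05, 0.02, 0.01],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.5]])
origin, vox = voxelize(cloud, voxel_size=0.1)
print(len(vox))  # 3 occupied voxels
```

Converting such a voxel set (or the raw cloud) to a polygon mesh is then a standard surface reconstruction step in any graphics toolkit.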
Contactless 3D fingerprint raw data can enable (a) a full 3D view fingerprint, which
provides 360° view 3D fingerprint data covering the dorsal/nail view, or (b) a 2.5D view
fingerprint, which provides a 3D view of the regions visible from the sensor position.
The premise of the fingerprint biometric is based on the singularity of continuous friction
ridges, which are not visible in the dorsal region. Therefore, unless explicitly stated,
the contactless 3D fingerprint data essentially represents a 2.5D view fingerprint.
where $w_{ij}$ is a finite support weighting function, chosen as the inverse of the
distance between the vertex $z_i$ and its neighbours $z_j$, i.e. $w_{ij} = \|z_j - z_i\|^{-1}$. The
reconstructed 3D fingerprint surface data, for the images shown in Chap. 3, were
smoothed after 40 iterations with a factor of 0.5, and the neighbours $j$ were chosen within
±2 pixels in the x and y directions from vertex $z_i$.
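The smoothing step above can be rendered as a short sketch. This is our minimal interpretation, assuming the surface is stored as a depth map on a regular grid and treating $w_{ij}$ as the inverse of the (clamped) difference between depth values; the function name and the test data are illustrative:

```python
import numpy as np

def smooth_depth(z, iterations=40, lam=0.5, radius=2):
    """Iteratively move each depth value a fraction `lam` toward the
    inverse-distance-weighted mean of its neighbours within +/- radius pixels."""
    z = z.astype(float).copy()
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1) if (dy, dx) != (0, 0)]
    for _ in range(iterations):
        num = np.zeros_like(z)
        den = np.zeros_like(z)
        for dy, dx in offsets:
            zj = np.roll(np.roll(z, dy, axis=0), dx, axis=1)
            w = 1.0 / np.maximum(np.abs(zj - z), 1e-6)  # w_ij = |z_j - z_i|^-1 (clamped)
            num += w * zj
            den += w
        z += lam * (num / den - z)
    return z

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (16, 16))
smoothed = smooth_depth(noisy)
print(np.var(smoothed) < np.var(noisy))  # True: smoothing reduces the noise variance
```

On real data the iteration count and factor would follow the values quoted in the text (40 iterations, 0.5).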
The normal vectors of the point cloud 3D fingerprint data for the smoothed surface
(above operations) are then computed from the gradient of $z = f(x, y)$. The surface
normal vector is an upward normal $(-g_x, -g_y, 1)$, where $g_x$ and $g_y$ are the
gradients along the x and y directions. These normalized surface normals are then used for
the feature extraction process.
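As a quick illustration of the normal computation (a sketch under the assumption that the smoothed surface is available as a depth map on a regular grid; the helper name is ours):

```python
import numpy as np

def surface_normals(z):
    """Unit upward surface normals (-gx, -gy, 1)/norm from a depth map z = f(x, y)."""
    gy, gx = np.gradient(z)             # gradients along y (rows) and x (columns)
    n = np.dstack((-gx, -gy, np.ones_like(z)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

z = np.fromfunction(lambda y, x: 0.5 * x, (4, 4))  # plane tilted along x: gx = 0.5
n = surface_normals(z)
print(n[2, 2])  # constant normal proportional to (-0.5, 0, 1)
```

For the tilted plane, every pixel gets the same normal, as expected for a planar surface.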
The differential geometric properties of a 3D surface are invariant under rigid trans-
formation, can potentially be used to compute intrinsic 3D fingerprint surface
descriptors, and can describe the nature of ridge–valley information in 3D space. These
local differential 3D surface geometric properties, i.e. principal curvature, unit nor-
mal vector and surface directions, can themselves be used to match 3D fingerprints,
and such results are also discussed in Chap. 7. The principal surface curvature typi-
cally measures the local bending of the 3D fingerprint surface at each of the surface points,
while the principal surface directions indicate the directions of minimum and max-
imum 3D surface bending. Several algorithms are available in the literature to esti-
mate the surface curvature using local surface fitting. Goldfeather and Interrante [2]
have described three approaches, i.e. quadratic surface approximation, normal sur-
face approximation and adjacent-normal cubic order surface approximation. Among
these three approaches, the adjacent-normal cubic order approximation algorithm has
been shown to be quite effective [3] for the contactless 3D fingerprint data and merits
detailed discussion.
Let a given 3D fingerprint surface point be $s$ with its normal $N$, and its $u$ neighbour-
ing points be $t_i$ with their normal vectors $K_i$, where $i = 1, 2, \ldots, u$. In the coordinate
system with $s$ as the origin (0, 0, 0) and $N$ as the z-axis, the position of neighbour
$t_i$ is $(x_i, y_i, z_i)$ and the normal $K_i$ is $(a_i, b_i, c_i)$. Using the adjacent-normal cubic
order algorithm, we attempt to automatically locate a surface that can fit the vertex
and its neighbouring points such that

$$z = f(x, y) = \frac{a}{2}x^2 + bxy + \frac{c}{2}y^2 + dx^3 + ex^2y + fxy^2 + gy^3 \qquad (5.2)$$
The normal vector of the surface point $s$ in the approximated surface can be written
as follows:

$$N(x, y) = \left(f_x(x, y),\ f_y(x, y),\ -1\right) \qquad (5.3)$$

$$N(x, y) = \left(ax + by + 3dx^2 + 2exy + fy^2,\ \ bx + cy + ex^2 + 2fxy + 3gy^2,\ -1\right) \qquad (5.4)$$
The cubic order surface fitting, for both the neighbouring surface points and their
normals, generates the following three equations for each of the surface points:

$$\begin{pmatrix} \frac{1}{2}x_i^2 & x_i y_i & \frac{1}{2}y_i^2 & x_i^3 & x_i^2 y_i & x_i y_i^2 & y_i^3 \\ x_i & y_i & 0 & 3x_i^2 & 2x_i y_i & y_i^2 & 0 \\ 0 & x_i & y_i & 0 & x_i^2 & 2x_i y_i & 3y_i^2 \end{pmatrix} K = \begin{pmatrix} z_i \\ -\frac{a_i}{c_i} \\ -\frac{b_i}{c_i} \end{pmatrix} \qquad (5.5)$$

Stacking these equations for all $u$ neighbours gives the linear system

$$M K = R \qquad (5.6)$$

which is solved for the coefficient vector $K = (a, b, c, d, e, f, g)^T$ in the least squares sense.
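The three equations per neighbour, stacked and solved in the least squares sense, can be sketched in Python as below. The function name and the synthetic test surface are our own; only the row structure follows (5.5):

```python
import numpy as np

def fit_cubic_surface(neighbors, normals):
    """Least-squares estimate of K = (a, b, c, d, e, f, g) for
    z = (a/2)x^2 + bxy + (c/2)y^2 + dx^3 + ex^2y + fxy^2 + gy^3,
    using three equations per neighbour: one from its height z_i and
    two from its normal (a_i, b_i, c_i), as in (5.5)."""
    rows, rhs = [], []
    for (x, y, z), (a, b, c) in zip(neighbors, normals):
        rows.append([0.5*x*x, x*y, 0.5*y*y, x**3, x*x*y, x*y*y, y**3]); rhs.append(z)
        rows.append([x, y, 0.0, 3*x*x, 2*x*y, y*y, 0.0]);               rhs.append(-a / c)
        rows.append([0.0, x, y, 0.0, x*x, 2*x*y, 3*y*y]);               rhs.append(-b / c)
    k, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return k

# synthetic check: sample a surface with known coefficients and exact normals
true = np.array([0.2, 0.1, 0.3, 0.05, 0.02, 0.01, 0.03])
pts, nrm = [], []
for x in range(-2, 3):
    for y in range(-2, 3):
        z = true @ [0.5*x*x, x*y, 0.5*y*y, x**3, x*x*y, x*y*y, y**3]
        fx = true @ [x, y, 0.0, 3*x*x, 2*x*y, y*y, 0.0]
        fy = true @ [0.0, x, y, 0.0, x*x, 2*x*y, 3*y*y]
        pts.append((x, y, z)); nrm.append((fx, fy, -1.0))
print(np.allclose(fit_cubic_surface(pts, nrm), true))  # True
```

With exact synthetic data the least squares solution recovers the coefficients exactly; on noisy fingerprint data the overdetermined system averages out the noise.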
The eigenvalues of the Weingarten matrix are the maximum and minimum principal
curvatures of the surface ($k_{max}$ and $k_{min}$), and its eigenvectors are the principal
direction vectors ($h_{max}$ and $h_{min}$), which can be directly computed. The shape index
of a surface at vertex $s$ can quantify the local 3D shape of the 3D fingerprint surface.
One effective approach to quantify the 3D curvature information is to compute the
shape index $C_i(s)$ values [4], which are independent of scale and can be estimated as
follows:
$$C_i(s) = \frac{1}{2} - \frac{1}{\pi}\arctan\left(\frac{k_{max} + k_{min}}{k_{max} - k_{min}}\right) \qquad (5.8)$$
The surface curvature map generated from the quantification of the local shape index
is in the interval [0, 1] and can also be directly employed to match two 3D fingerprint
surfaces, which is discussed in Chap. 7. Figure 5.1 illustrates a sample grayscale 2D
image representing the surface curvature map for a 3D fingerprint.
Fig. 5.1 Surface curvature map represented as a 2D image from a sample 3D fingerprint point
cloud data
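The shape index of (5.8) reduces to a one-liner once the principal curvatures are known; a small sketch (the function name and the sample curvature values are ours):

```python
import numpy as np

def shape_index(k_max, k_min):
    """Scale-independent shape index C_i(s) of Eq. (5.8), mapped into [0, 1]."""
    return 0.5 - np.arctan2(k_max + k_min, k_max - k_min) / np.pi

print(round(float(shape_index(2.0, 1.0)), 3))    # 0.102: convex bending, below 0.5
print(round(float(shape_index(-1.0, -2.0)), 3))  # 0.898: concave bending, above 0.5
```

Scale independence follows from the ratio: multiplying both curvatures by a constant leaves the value unchanged.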
where $F$ is the Fourier transform operator, $G(u, v)$ is the high-pass filter with cut-off
frequency $D_0$ and $n$ represents the order of this filter. The term $D(u, v)$ represents
the Euclidean distance from the origin in the frequency domain, i.e. $\sqrt{u^2 + v^2}$. The filter $G(u,
v)$ can also be designed to suppress the contributions from the low-frequency part
representing the illumination component $I_0(x, y)$. This approach is more effective in
the presence of strong external illumination, like the images acquired under the illumination
setup for the photometric stereo. Therefore, a bandpass filter $H(u, v)$, resulting from
the combination of the low-pass and high-pass filters, $G_1(u, v)$ and $G_s(u, v)$ as shown in
Fig. 5.3, is a more appropriate choice for the contrast enhancement.
$$H(u, v) = \min\left(\frac{1}{1 + \left(\frac{D_0}{D(u,v)}\right)^{2n}},\ \ \frac{1}{1 + \left(\frac{D(u,v)}{D_1}\right)^{2}}\right) \qquad (5.11)$$
where $D_0$, $D_1$ and $n$ are the high-pass cut-off frequency, the low-pass cut-off frequency
and the order of the filter. The enhanced fingerprint image is generated from the exponent
of the inverse Fourier transform of the frequency domain image as shown in (5.9).
In order to achieve normalized greyscale values in the 0–1 or 0–255 range, the
enhanced image $I_e(x, y)$ is also subjected to adaptive histogram equalization. Contact-
less 2D fingerprint images, which are simultaneously generated or acquired with the 3D
fingerprint images, are preprocessed using the homomorphic operation before being subjected
to the conventional fingerprint enhancement steps [5, 7]. The surface curvature
images from the 3D fingerprint preprocessing operations do not have low-contrast
concerns in the low-frequency part, and the more frequent high-frequency noise is elim-
inated using the preprocessing steps discussed in Sect. 5.3 of this chapter.
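A possible numpy rendering of this homomorphic bandpass enhancement is sketched below. The log/exp pair, the default cut-off values and the final normalization are our assumptions; only the filter shape follows (5.11):

```python
import numpy as np

def homomorphic_enhance(img, d0=30.0, d1=60.0, n=2):
    """Homomorphic enhancement: filter the log-image in the frequency domain
    with the bandpass H(u, v) of (5.11), then exponentiate back."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows
    v = np.fft.fftfreq(cols)[None, :] * cols
    d = np.sqrt(u ** 2 + v ** 2)                      # D(u, v): distance from origin
    hp = 1.0 / (1.0 + (d0 / np.maximum(d, 1e-6)) ** (2 * n))  # high-pass term
    lp = 1.0 / (1.0 + (d / d1) ** 2)                          # low-pass term
    h = np.minimum(hp, lp)                                    # Eq. (5.11)
    out = np.expm1(np.real(np.fft.ifft2(h * np.fft.fft2(np.log1p(img.astype(float))))))
    return (out - out.min()) / (out.max() - out.min() + 1e-12)  # normalize to [0, 1]

enhanced = homomorphic_enhance(np.random.default_rng(1).random((32, 32)))
print(enhanced.shape)  # (32, 32)
```

In practice the adaptive histogram equalization mentioned above would follow this step.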
References
1. Belyaev A (2006) Mesh smoothing and enhancing (curvature estimation). Saarbrücken. www.
mpi-inf.mpg.de/~ag4-gm/handouts/06gm_surf3.pdf
2. Goldfeather J, Interrante V (2004) A novel cubic-order algorithm for approximating principal
direction vectors. ACM Trans Graphics 23(1):45–63
3. Kumar A, Kwong C (2013) Towards contactless, low-cost and accurate 3D fingerprint identifi-
cation. In: Proceedings of CVPR 2013, Portland, USA, pp 3438–3443
4. Dorai C, Jain AK (1997) COSMOS—a representation scheme for 3D free-form objects. IEEE
Trans Pattern Anal Mach Intell, pp 1115–1130
5. Chen Y (2009) Extended feature set and touchless imaging for fingerprint matching, Ph.D.
thesis, Michigan State University
6. Gonzalez RC, Woods RE (2018) Digital image processing, 4th edn. Pearson
7. O’Gorman L, Nickerson JV (1989) An approach to fingerprint filter design. Pattern Recogn
22:29–38
Chapter 6
Representation, Recovery and Matching
of 3D Minutiae Template
The minutiae features from 2D fingerprints are widely used, and it is useful
to review their representation and matching in 2D space. Another reason for this
discussion is that the simultaneously acquired contactless 2D fingerprint
images can also be used to recover such features and further improve the performance
of 3D fingerprint identification.
Conventional methods of 2D fingerprint image preprocessing first employ enhance-
ment [6], ridge detection and detection of minutiae features from the discontinuities
in the ridge patterns, as detailed in the literature [3]. These minutiae features form the
feature representation or the fingerprint template. Two such arbitrary 2D fingerprint
templates, say P and Q, can be matched to generate a match score as follows. We first
select a minutiae pair, consisting of a minutia from the reference template (Q) and
a minutia from the query template (P), to generate match distances between them
using the alignment-based approach illustrated in Figs. 6.1 and 6.2. All the other
minutiae in template P are then converted to polar coordinates as $[r, A_s, A_\theta, T]$,
with the chosen reference minutia in Q as the centre and the angle aligned
with $\theta_r$:
$$r = \sqrt{(x - x_r)^2 + (y - y_r)^2} \qquad (6.1)$$

$$A_s = \mathrm{atan2}(y - y_r,\ x - x_r) - \theta_r \qquad (6.2)$$

$$A_\theta = (\theta - \theta_r) \qquad (6.3)$$
where $r$ is the distance of the respective minutia from the reference minutia, $A_s$ is the
angular separation of the minutia, $A_\theta$ is the new orientation of the minutia with
respect to the chosen reference minutia and $T$ is the type of minutia. If the difference
between these relative features for a pair of minutiae is within predefined thresholds,
the pair is regarded as matched, and the match score is computed as

$$S_{2D} = \frac{m^2}{M_P M_Q} \qquad (6.4)$$
where $m$ is the total number of matched minutiae pairs for a chosen reference
minutia from template Q, and $M_P$ and $M_Q$ are the numbers of minutiae in the query and
template images, respectively. The maximum of all the possible match scores (6.4),
generated by selecting each of the available minutiae as the reference minutia, is
selected as the final matching score between fingerprint images P and Q.
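Equations (6.1)–(6.4) can be sketched directly; the function names, degree convention and wrap-around handling below are our illustrative choices:

```python
import math

def relative_2d(minutia, ref):
    """Relative representation [r, A_s, A_theta] (6.1)-(6.3) of a 2D minutia
    (x, y, theta in degrees) with respect to a reference minutia."""
    (x, y, th), (xr, yr, thr) = minutia, ref
    r = math.hypot(x - xr, y - yr)                                   # (6.1)
    a_s = (math.degrees(math.atan2(y - yr, x - xr)) - thr) % 360.0   # (6.2)
    a_th = (th - thr) % 360.0                                        # (6.3)
    return r, a_s, a_th

def score_2d(m, m_p, m_q):
    """Match score S_2D = m^2 / (M_P * M_Q) from (6.4)."""
    return m * m / (m_p * m_q)

print(round(score_2d(20, 34, 40), 3))  # 0.294
```

A full matcher would repeat this for every candidate reference minutia and keep the maximum score, as described above.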
such local ridge surface is utilized (or masked) for tracing the 3D orientation. The angle
$\phi$ is then computed by estimating the principal axes [8] of the masked ridge surface.
However, for the end type minutiae, the local valley surface is masked, since the
direction $\theta$ in this case points in the outward direction. The extended minutiae
representation in 3D space can be denoted/localized as $(x, y, z, \theta, \phi)$, where $(x, y, \theta)$
is the respective minutiae representation in 2D space. In our further discussion, we
refer to this minutiae representation $(x, y, z, \theta, \phi)$ as the 3D minutia.
Source 3D fingerprint image data can be used to generate local surface curvature
images, which provide a 2D representation of the 3D source information. Figure 6.4
illustrates such sample 3D curvature images using the shape index as detailed in Eq.
(5.8) of the previous chapter. These images encode the local 3D fingerprint ridge–valley
information in the [0, 1] range and are shown as grey-level images. This information can
be used to identify the 3D fingerprint ridges and the valleys. When a 3D surface is
flat, the shape index values in (5.8) are expected to be zero. For the 3D fingerprint
ridges, the shape index values are expected to be smaller than 0.5 (convex surface).
Similarly, when the shape index values are higher than 0.5, the 3D surface is expected
to be part of the 3D fingerprint valleys. Therefore, a simple binarization of the surface
curvature images, i.e. if $C_i(s) \le 0.5$ assign 1 (ridge), else 0 (valley), can be used to
label the 3D point cloud data as a ridge–valley image.

Fig. 6.4 Sample images representing local 3D surface curvature using shape index images

Fig. 6.5 Sample 3D fingerprint images illustrating the ridge–valley structure identified from the local
surface curvature values
• Shape Index for flat surface → 0
• Valley → If shape index > 0.5
• Ridge → If shape index ≤ 0.5.
Such sample 3D fingerprint images describing the ridge–valley details are shown
in Fig. 6.5.
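The ridge–valley labelling rule above is a single thresholding step; a minimal sketch (names are ours):

```python
import numpy as np

def label_ridge_valley(curvature_map):
    """Binarize a surface curvature (shape index) map: 1 = ridge (C <= 0.5), 0 = valley."""
    return (curvature_map <= 0.5).astype(np.uint8)

# ridges at 0.1 and 0.5 (convex), valleys at 0.6 and 0.9
c = np.array([[0.1, 0.6],
              [0.5, 0.9]])
print(label_ridge_valley(c))
```

The resulting binary map can then be traced for ridge discontinuities to localize minutiae, as in the 2D pipeline.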
Fig. 6.6 Localization of minutiae in 3D space by incorporating minutiae height in (a) and illustra-
tion of 3D fingerprint with recovered minutiae locations in 3D space (b)
Any robust 3D minutiae template matching algorithm should
also consider the influence of missing or spurious minutiae, which are frequently
observed in contactless 3D fingerprint images. An effective approach to generate
match scores between 3D minutiae templates, which also accounts for the adverse influence
of missing and/or spurious minutiae, is introduced [5] in the following.
Let us consider two arbitrary 3D minutiae fingerprint templates, say P and Q,
with $M_P$ and $M_Q$ minutiae respectively, for generating a quantitative match
score. We first select a (any) reference minutia from template P and from template Q.
(a) All other minutiae in template P are then transformed to spherical coordinates
using the chosen minutia from this template as the reference, i.e. we align
all other minutiae in template P with the x-axis and z-axis (using the $\theta$ and $\phi$ values
of the chosen reference minutia). (b) Similar to the previous step (a), we transform all
the other minutiae in Q to spherical coordinates and align them with the x-axis and z-axis
of the chosen reference minutia in this template. This alignment step ensures that the
reference minutia location (in both templates P and Q) can serve
as a universal origin/reference (Fig. 6.7) to measure the other/relative minutiae distances
in the respective templates when the same 3D minutia from the same finger appears
in both templates, i.e. the case when the chosen reference minutiae in P and Q are
the same and part of a genuine comparison. If an aligned minutia is represented as $m_r =
[x_r, y_r, z_r, \theta_r, \phi_r]$ in template P, the relative representation of another 3D minutia
in template P,¹ say $m$ (see Figs. 6.7 and 6.8 to visualize the relative representation of
two 3D minutiae), can be denoted as $m = [r, A_s, A_\theta, A_g, A_\phi]$, where $r$ is the radial
distance from the reference minutia, $A_\theta$ is the azimuth angle and $A_\phi$ is the elevation
angle that localize the minutia $m$ in 3D space, while $A_s$ and $A_g$ are the azimuth
and the elevation angles, respectively, that localize the radial vector $r$ (with respect
to the reference minutia $m_r$) in 3D space. Let $R_z(\theta)$ and $R_y(\phi)$ be the rotation matrices
along the z and y directions in Cartesian coordinates, and $\mathrm{sph}(x, y, z)$ be the Cartesian to
spherical coordinate transformation with unit length 1:
$$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad R_y(\phi) = \begin{pmatrix} \cos\phi & 0 & -\sin\phi \\ 0 & 1 & 0 \\ \sin\phi & 0 & \cos\phi \end{pmatrix} \qquad (6.5)$$

$$\mathrm{sph}([x\ y\ z]) = \left[\mathrm{atan2}(y, x)\quad \sin^{-1}(z)\right] \qquad (6.6)$$
where atan2 is the four-quadrant inverse tangent function [9]. The parameters for the
relative representation (feature vector) of minutia $m$ are computed as follows:

$$r = \sqrt{(x - x_r)^2 + (y - y_r)^2 + (z - z_r)^2} \qquad (6.7)$$

$$[\bar{x}\ \ \bar{y}\ \ \bar{z}]^T = \frac{1}{r}\, R_y(-\phi_r)\, R_z(-\theta_r)\, [x - x_r\ \ y - y_r\ \ z - z_r]^T \qquad (6.8)$$

$$[A_s\ \ A_g] = \mathrm{sph}([\bar{x}\ \ \bar{y}\ \ \bar{z}]) \qquad (6.9)$$

¹ Also in template Q, since the reference minutiae have been aligned to serve as the universal
reference/origin.
$$[A_\theta\ \ A_\phi] = \mathrm{sph}\left(R_y(-\phi_r)\, R_z(-\theta_r)\, \mathrm{sph}^{-1}([\theta\ \ \phi])\right) \qquad (6.10)$$
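Equations (6.5)–(6.10) translate almost line for line into code. The sketch below is our reading of them (function names and the radian convention are assumptions):

```python
import numpy as np

def rz(t):  # rotation about z, Eq. (6.5)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def ry(p):  # rotation about y, Eq. (6.5)
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def sph(v):  # unit Cartesian vector -> (azimuth, elevation), Eq. (6.6)
    x, y, z = v
    return np.arctan2(y, x), np.arcsin(np.clip(z, -1.0, 1.0))

def sph_inv(az, el):  # unit vector from azimuth/elevation
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def relative_3d(m, m_ref):
    """Relative 5-tuple (r, A_s, A_g, A_theta, A_phi) of a 3D minutia
    m = (x, y, z, theta, phi) w.r.t. m_ref, per Eqs. (6.7)-(6.10)."""
    x, y, z, th, ph = m
    xr, yr, zr, thr, phr = m_ref
    d = np.array([x - xr, y - yr, z - zr])
    r = np.linalg.norm(d)                                     # (6.7)
    bar = ry(-phr) @ rz(-thr) @ d / r                         # (6.8)
    a_s, a_g = sph(bar)                                       # (6.9)
    a_th, a_ph = sph(ry(-phr) @ rz(-thr) @ sph_inv(th, ph))   # (6.10)
    return r, a_s, a_g, a_th, a_ph

# reference at the origin with zero orientation: the relative tuple
# reduces to the minutia's own distance and angles
print(np.round(relative_3d((1.0, 0.0, 0.0, 0.3, 0.1),
                           (0.0, 0.0, 0.0, 0.0, 0.0)), 4))  # [1. 0. 0. 0.3 0.1]
```

Applying the inverse rotations of the reference minutia first is what makes the representation invariant to the finger's pose.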
We perform the same process as discussed in the above paragraph for the minutiae in
template Q, i.e. using an arbitrarily selected minutia in Q, we align all other minutiae in
template Q and compute the relative representation $(r_{Qi}, A_{sQi}, A_{\theta Qi}, A_{gQi}, A_{\phi Qi})$ of
all the other minutiae using (6.5)–(6.10). The next step is to identify the number of matched
minutiae, if any, between the aligned minutiae in P and Q. Two 3D minutiae in the two
fingerprint templates P and Q are considered a matched pair if the differences between
their relative representations are within predefined thresholds, as quantified in (6.11)–(6.15).
Fig. 6.7 Relative localization of a 3D minutia with a reference minutia ($m_r$) in a given 3D fin-
gerprint template, using the graphical illustration of relative distances/angles between the reference
3D minutia ($m_r$) and another 3D minutia ($m$). The x-axis of the Cartesian coordinate is aligned with the
direction of $m_r$ (the bold arrow). The azimuthal angles $A_s$ and $A_\theta$ are in the range of [0, 360] degrees.
The polar angles $A_g$ and $A_\phi$ are in the range of [−90, 90] degrees. It may be noted that the magni-
tudes/values of $(r, A_s, A_\theta, A_g, A_\phi)$ are exaggerated simply to illustrate these values clearly (rather than
using exact values from the left-hand side image), and most of the finger surface is convex. In the left figure, the
green line for $m$ illustrates the orientation of the 3D minutia while the red line illustrates its projection
in the x–y plane
Fig. 6.8 Another sample 3D fingerprint minutiae template to illustrate the relative localization of a 3D
minutia $m$, using the feature vector $(r, A_s, A_g, A_\theta, A_\phi)$, with the reference minutia $m_r$
$$S_{3D} = \frac{m^2}{M_P M_Q} \qquad (6.16)$$

where $m$ is the total (maximum) number of matched minutiae pairs
using (6.11)–(6.15).
Fig. 6.9 Matching two genuine 3D minutiae templates in (a) and the impostor templates in (b), from
template P (in red colour) and template Q (in blue colour). The coloured lines are links from the
reference minutia, while the black lines illustrate the minutiae pairs that are regarded as matched
Quantification of minutia quality can represent the confidence in the accurate recov-
ery of a minutia from the presented finger images. In the case of 2D fingerprint images,
there are several measures to quantify the minutiae quality [7, 10], and these are largely
related to the quality of the greyscale images or ridge–valley regions, which determines the
accuracy in the localization or recovery of minutiae features. In the case of 3D fingerprints,
the 3D minutiae also include the $(x, y, \theta)$ features as for 2D fingerprints, and therefore
the minutiae quality extracted from the surface curvature images (Fig. 6.5) can also
reflect the quality of 3D minutiae and was investigated during this research. Quan-
tification of 3D minutiae quality, independent of the method of 3D imaging, should
also include some measure of confidence in the accurate recovery of the height ($z$) and
elevation angle ($\phi$). Development of such a unified 3D minutia quality representation
is yet to be addressed and is part of ongoing research.
The quality of minutiae recovered from the 3D surface curvature images, extracted
from the 3D fingerprint data, can be incorporated to estimate the minutia quality and
number. Reference [10] from NIST provides details on the reliability measure that
represents minutiae and image quality in five different quality levels, i.e. from level 0
to 4 (worst to best). The template file from MINDTCT [7] provides the minutiae qual-
ity number $Q_m$ (0–99 range), and when the image quality $L$ is 4 (highest), the minutiae
quality $Q_m \ge 50$, i.e. for $L = 4$, $Q_m = 0.50 + (0.49 \times R)$, where $R$ represents the greyscale
reliability and is computed from the variance and mean value of the greyscale pixels sur-
rounding the minutiae point (more details in [10]). A study on minutiae quality
using this measure, on a database of 135 different clients' 2D and 3D fingerprints
acquired using the setup detailed in Chap. 3, was performed to ascertain the number
of minutiae of varying quality levels from the different preprocessed images. Table 6.1
summarizes the statistics from the minutiae files, where Q3DS represents the minutiae quality
in 3D fingerprints generated from the 3D surface curvature images, Q2D7u represents the
minutiae quality from the unified image generated from the set of 7 2D images acquired
for 3D reconstruction, and Q2D7 represents the minutiae quality from the 7 individual images acquired
for the 3D reconstruction. The numbers in this table are specific to the database but
indicate that the quality of 2D fingerprint minutiae can be used for improving the 3D
fingerprint matching performance. One such possible approach is discussed in the
following section.

Fig. 6.10 Matching two 3D minutiae templates to generate the match score. The difficulty in syn-
chronizing or aligning different minutiae among different captures can be addressed by considering
every 3D minutia in the given template/capture as a reference and using the reference 3D minutia
which generates the maximum number of matched minutiae (or match scores) as the final reference
for computing the match score
Fig. 6.11 The locations of minutiae (endpoint or bifurcation) from a sample client's 2D fingerprint
images under different illuminations (cross) and the clustered locations (circle)
If the squared distance between a new minutia and a cluster centre is no more than
$k_1$ and $\min(|\theta_{cm,k} - \theta_i|,\ 360 - |\theta_{cm,k} - \theta_i|) \le k_2$, where $cm_k \in CL$, then we update
the $cm_t$ such that $c_t \ge c_i$ for all $cm_i \in TL$. The $x, y, \theta, q$ values of the updated $cm_t$ will
be the average of the existing cluster members and the new member, and $c$ will be
increased by one. We choose $k_1$ as 4, since the minutiae location in different (LED)
fingerprint images would not shift too far away, and the square root of 4 (=2) is
slightly smaller than the half-width of the observed ridge (~5 pixels in the employed
fingerprint images). The constant $k_2$ is set to 25 for the acceptable angle difference,
which helps to ensure that the clusters have similar direction, while $k_3$ is set to 32
to decrease the overlapping/double clusters with similar direction and location.
After updating CL, we pick the subset of CL with $c \ge 2$ as DL. If two cluster
groups are too close, we merge them together to reduce the possibility that a single
minutia is recovered as two minutiae. The final list of minutiae is the merged list DL,
which is suggested from the experiments to have higher reliability for matching.
The 3D minutiae corresponding to these 2D minutiae locations are
employed during the matching stage. This strategy to select reliable 3D minutiae is
summarized by the algorithm S3DM.
Algorithm: S3DM
Input: List of minutiae from 7 LED images {L1, ..., L7}, where Li = {m1, m2, ..., mn} and mk = [xk, yk, θk, qk]
CL := {}  // each tuple of CL is [x, y, θ, q, c], where c is the member count
for each minutia mk in L1 do
    CL := CL U [xk, yk, θk, qk, 1]
end for
for each minutia mk in L2, L3, ..., L7 do
    TL := {}
    for mCL ϵ CL do
        if mCL is close to mk (squared distance ≤ k1 and angle difference ≤ k2) then
            TL := TL U mCL
        end if
    end for
    if TL ≠ {} then
        maxC := 0
        for mTL ϵ TL do
            if cTL > maxC then
                mt := mTL
                maxC := cTL
            end if
        end for
        [xs, ys, θs, qs] := (ct*[xt, yt, θt, qt] + [xk, yk, θk, qk]) / (ct + 1)
        mt := [xs, ys, θs, qs, ct + 1]
    else
        CL := CL U [xk, yk, θk, qk, 1]
    end if
end for
DL := {}
for mCL ϵ CL do
    if cCL >= 2 then
        DL := DL U mCL
    end if
end for
for mDL ϵ DL do
    for mk ϵ DL, mk ≠ mDL do
        if mk is close to mDL (squared distance ≤ k3 and angle difference ≤ k2) then
            [xs, ys, θs, qs] := (cDL*[xDL, yDL, θDL, qDL] + ck*[xk, yk, θk, qk]) / (cDL + ck)
            mDL := [xs, ys, θs, qs, cDL + ck]
            mk := {}  // remove the merged cluster
        end if
    end for
end for
Output: DL
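A compact Python rendering of S3DM is sketched below. The closeness test (squared distance against k1 or k3, wrap-around angle difference against k2) is reconstructed from the prose above, and the plain averaging of θ ignores wrap-around, exactly as in the pseudocode; all names are illustrative:

```python
K1, K2, K3 = 4, 25, 32  # squared-distance, angle and merge thresholds from the text

def ang_diff(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def close(m1, m2, k_dist):
    return ((m1[0] - m2[0]) ** 2 + (m1[1] - m2[1]) ** 2 <= k_dist
            and ang_diff(m1[2], m2[2]) <= K2)

def s3dm(led_lists):
    """Cluster minutiae [x, y, theta, q] detected across the LED images;
    keep clusters supported by >= 2 images (sketch of algorithm S3DM)."""
    cl = [list(m) + [1] for m in led_lists[0]]   # tuples [x, y, theta, q, c]
    for li in led_lists[1:]:
        for mk in li:
            cands = [m for m in cl if close(m, mk, K1)]
            if cands:
                mt = max(cands, key=lambda m: m[4])  # most-supported cluster
                c = mt[4]
                mt[:4] = [(c * a + b) / (c + 1) for a, b in zip(mt[:4], mk)]
                mt[4] = c + 1
            else:
                cl.append(list(mk) + [1])
    dl = [m for m in cl if m[4] >= 2]
    merged = []   # merge clusters that are too close (a split single minutia)
    for m in dl:
        for t in merged:
            if close(t, m, K3):
                c1, c2 = t[4], m[4]
                t[:4] = [(c1 * a + c2 * b) / (c1 + c2) for a, b in zip(t[:4], m[:4])]
                t[4] = c1 + c2
                break
        else:
            merged.append(m)
    return merged

led = [[[10.0, 10.0, 90.0, 50.0]],
       [[11.0, 10.0, 92.0, 60.0], [100.0, 100.0, 10.0, 40.0]]]
print(s3dm(led))  # [[10.5, 10.0, 91.0, 55.0, 2]]: one cluster seen twice survives
```

The singleton at (100, 100) is discarded because it appears in only one image, mirroring the c ≥ 2 filter in the algorithm.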
6.5 Development of Unified Distance for 3D Minutiae Matching
The minutiae representation in 3D space requires a 5-tuple of values, and each of these
can be analysed to compute some unified or nonlinear distance for the matching.
There are some interesting references [11] that have detailed the use of a function
to compute the similarity score of 2D minutiae and the usage of a threshold for the
rejection of falsely matched minutiae. Assuming that there are some falsely matched
3D minutiae pairs after the comparison with some hard threshold, it is possible to
reject some of them by transforming all the features into a single scalar. Therefore, it
is judicious to develop a unified matching distance function for matching two 3D
minutiae, which can reject falsely matched minutiae using a comparison with a
fixed decision threshold. This decision threshold can be computed offline during the
system calibration or during the training stage.
The motivation explained earlier requires us to study the variation of the five 3D
minutiae components with the distance from the origin, or the reference 3D minutia,
which generates the best match score for the genuine matches. Figure 6.12 presents
the graphical illustration of the values of $A_g$, $A_s$, $A_\theta$, $\cos N$, $r$ and $A_\phi$ of the matched
minutiae pairs at different percentiles against the distance from the
reference minutia. The values shown in these figures are sampled using a sliding
window (distance − 25, distance + 25), which illustrates the relationship between $(A_g,
A_s, A_\theta, \cos N, r, A_\phi)$ and the distance from the 3D reference minutia. The trends shown in
Fig. 6.12 suggest that the percentile values of $\cos N$, $A_g$, $A_s$ and $A_\theta$ are relatively
stable for smaller distances from the reference 3D minutia. The percentile values of
$r$ (labelled as dist in the respective figure caption) and $A_\phi$ (labelled as Aphi
in the respective figure caption) suggest some dependence or relationship with
the distance, especially near the region of interest (50–110), from the reference 3D
minutia. Therefore, the thresholds for $r$ and $A_\phi$ can be set as a function of the
distance.
This study to combine the difference vectors for 3D minutiae, for generating a
unified matching distance, suggests that Eqs. (6.11)–(6.15) can be generalized to
more accurately account for the relative variations in 3D feature distances as follows:

$$\mathrm{funRSG}(v) = \left(\frac{r}{f(r)}\right)^{a} + \left(\frac{A_s}{A}\right)^{b} + \left(\frac{A_g}{B}\right)^{c} + \left(\frac{A_\theta}{C}\right)^{d} + \left(\frac{A_\phi}{f(r)}\right)^{e} + \left(\frac{1-\cos N}{D}\right)^{f} \qquad (6.17)$$
where $v$ is the vector of difference values as computed in (6.11)–(6.15) and $\cos N$ is the
cosine similarity between the normal vectors (Fig. 6.3) of the two matched minutiae.
The above equation has an independent set of power terms $\{a, b, \ldots, f\}$, while $f(r)$ can be
some function of the distance. The matching score between two 3D minutiae templates
P and Q can be computed from Eq. (6.16) using the number of matched minutiae
pairs.
Fig. 6.12 Percentile position against distance between minutia and reference minutia, respectively,
for a $A_g$, b $A_s$, c $A_\theta$, d $\cos N$, e $r$, and f $A_\phi$
6.6 Performance Evaluation

The lack of a publicly available 3D fingerprint image database is one of the key
limitations to advancing further research in this area. Therefore, new databases of 3D
fingerprint images, using the photometric stereo methods discussed in Chaps. 3–4, were
developed [12, 16]. The contactless 3D fingerprint database in [15, 16] has been
acquired from 240 distinct clients; by 'client', here, we refer to a distinct finger, even if
it belongs to the same person. We first acquired six images (impressions) from each of
the fingers/clients, which resulted in a total of 1440 impressions, or 1440 3D fingerprint
images reconstructed from 10,080 2D fingerprint images. The imaging hardware was
developed to generate illuminations from seven different LEDs while acquiring the
2D image from the corresponding illumination for each of the fingerprint impressions.
This entire 3D fingerprint database, along with the image calibration (pixel positions)
data and the source images, is now publicly accessible [16] to further research
efforts in 3D fingerprint reconstruction and matching.
Each of the acquired 2D fingerprint images is downsampled by four and then used
to automatically extract a 500 × 350 pixel region of interest. This is achieved by
subjecting the acquired images to an edge detector and then scanning the result-
ing image from the boundaries to locate the image centre, which is used to crop the
fixed-size region of interest images as discussed earlier. There are several 2D minutiae match-
ing algorithms available in the literature [3, 7, 13], and our performance evaluation
used the BOZORTH3 [7] public implementation from NIST. The six 3D fingerprint images
reconstructed from each of the 240 clients generated 3600 (240 × 15) genuine and
2,064,960 (240 × 6 × 239 × 6) impostor matching scores. The average number of
3D minutiae recovered from the 3D fingerprint images was 34.64, while the average
number of 2D minutiae per 2D fingerprint image, acquired for the 3D fingerprint
reconstruction, was 40.28.
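The genuine and impostor score counts quoted above follow from simple counting and can be verified in a couple of lines:

```python
from math import comb

clients, impressions = 240, 6
genuine = clients * comb(impressions, 2)                        # pairs within each client
impostor = clients * impressions * (clients - 1) * impressions  # every cross-client pair
print(genuine, impostor)  # 3600 2064960
```

Here comb(6, 2) = 15 is the number of impression pairs per client, matching the 240 × 15 figure in the text.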
The method of 3D reconstruction can also influence the matching performance or
the accuracy of the recovered features. In the case of photometric stereo based acquisition,
the Poisson solver generates direct analytical results for the least squares problem by
solving a Poisson equation and has been shown to generate a 3D fingerprint surface
which has a close resemblance to its natural shape. Our experiments for matching
recovered 3D minutiae from the 3D fingerprints reconstructed using the Poisson solver
achieved superior performance compared to those from the 3D fingerprints reconstructed
using the Frankot–Chellappa algorithm discussed in Chap. 3. Figure 6.13 illustrates such
comparative results for matching the recovered 3D minutiae from the first 10 clients'
3D fingerprints. Therefore, this solution was preferred for matching 3D fingerprints
using 3D minutiae in further experiments.
The experiments performed for the performance evaluation generated matching
results from 240 clients' 3D fingerprint templates. As argued earlier, the multiple 2D
fingerprint images acquired for the 3D fingerprint reconstruction can themselves be
utilized for generating matching scores from the 2D fingerprint minutiae. However,
the nature of the imaging employed requires that each of these be acquired under a differ-
ent illumination; therefore, we refer to them as noisy 2D fingerprint images, as the 3D
shape from shading is the key to reconstructing the 3D fingerprint information. Therefore,
the minutiae features extracted from the respective noisy 2D fingerprint images can
be different, and we attempted to generate the best matching scores by matching all the
available 2D fingerprint images from two client fingers. Figure 6.14a illustrates the
experimental results from the usage of 2D fingerprint images when all seven
images corresponding to the query 3D fingerprint are matched with all the
images from the corresponding probe 3D fingerprint, using the best matching
score as the decision score (for genuine or impostor classification) from the 2D fin-
gerprint matching. As shown by the experimental results using the receiver operating
characteristics (ROC) in Fig. 6.14a, such an approach generates superior performance
compared to the case when only the best performing 2D fingerprint matching score
is employed as the decision score. Therefore, in our further experiments, we employed
this superior approach for generating the 2D fingerprint matching scores, corresponding
to the reconstructed 3D fingerprint, by using all the respective 2D fingerprint images uti-
lized for the 3D fingerprint reconstruction. Figure 6.14b illustrates the distribution
of (normalized) genuine and impostor matching scores obtained from the 3D fingerprint
and the corresponding 2D fingerprint matching.

Fig. 6.13 Comparative matching accuracy from the 3D fingerprints reconstructed using different
least squares solutions
In Sect. 6.5, a new function to combine the difference vector components in (6.11)–(6.15)
into a unified matching distance was introduced. We also performed experiments
to ascertain its performance; the employed nonlinear function (6.18) can be
written as follows:
6.6 Performance Evaluation 89
Fig. 6.14 a The ROC using 2D (noisy) fingerprint images with different illumination (240 clients),
b distribution of matching scores, c comparative performance from 3D minutiae matching strategies
considered in experiments
$$\text{fun}_{RSG}(v) = \left(\frac{r}{65}\right)^{0.8} + \left(\frac{A_s}{30}\right)^{0.8} + \left(\frac{A_g}{15}\right)^{0.8} + \left(\frac{A_\theta}{18}\right)^{0.8} + \left(\frac{A_\phi}{42}\right)^{0.8} + \left(\frac{1 - \cos N}{0.075}\right)^{0.8} \qquad (6.18)$$
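Numerically, (6.18) is a sum of powered ratios; the sketch below is a direct transcription, with illustrative argument names for the difference-vector components of (6.11)–(6.15).

```python
def fun_rsg(d_r, d_as, d_ag, d_atheta, d_aphi, cos_n):
    """Nonlinear combination (6.18) of the 3D minutiae difference components
    into a single matching distance; the denominators are the empirical
    normalisation constants of (6.18)."""
    terms = [(d_r, 65), (d_as, 30), (d_ag, 15), (d_atheta, 18), (d_aphi, 42),
             (1 - cos_n, 0.075)]
    return sum((abs(value) / scale) ** 0.8 for value, scale in terms)
```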
Fig. 6.15 The receiver operating characteristics for the a relative/comparative performance using
reconstructed 3D fingerprint images, and b performance using combination of 3D fingerprint and
2D fingerprint images acquired during photometric stereo based reconstruction
Table 6.2 Individual and combined match performance from 2D and 3D fingerprint images

| Experiments | 2D minutiae (%) | 3D curvature (%) | 3D minutiae (%) | 2D minutiae + 3D curvature (%) | 2D minutiae + 3D minutiae (%) |
|---|---|---|---|---|---|
| Equal error rate from 240 clients (ROC in Fig. 6.15) | 2.12 | 12.19 | 2.73 | 1.73 | 1.02 |
| Equal error rate from 240 clients and 20 unknowns (DET in Fig. 6.16) | 5.44 | 32.26 | 9.28 | 4.36 | 3.33 |
| Rank-one accuracy from 240 clients and 20 unknowns (CMC in Fig. 6.17) | 94.56 | 68.21 | 90.72 | 95.64 | 96.67 |
Table 6.2 summarizes the performance in terms of equal error rate and rank-one
recognition accuracy from the experimental results shown in Figs. 6.15–6.17. The
score-level combination of matching scores using adaptive fusion [14] is employed
in these experiments, as it is judicious to exploit 3D matching scores only when the
matching scores from 2D fingerprints are below some predetermined threshold. This
threshold limit (Fig. 6.15b) was empirically fixed to 0.1 while combining 3D minutiae
and to 0.09 while combining 3D surface curvature. It can be ascertained from this
figure that the combination of 3D
minutiae matching scores with the available 2D minutiae matching scores achieves
superior performance. This performance improvement is significant and suggests
that the 2D fingerprint images utilized for reconstructing 3D fingerprints, using pho-
tometric stereo, can be simultaneously used to improve the performance for the 3D
fingerprint matching.
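The adaptive fusion rule described above, which consults the 3D score only when the 2D score falls below a predetermined threshold, can be sketched as follows; the weighted average and all names here are illustrative assumptions rather than the exact rule of [14].

```python
def adaptive_fuse(score_2d, score_3d, threshold=0.1, weight=0.5):
    """Adaptive score-level fusion sketch: similarity scores in [0, 1], with
    larger meaning a stronger match. The 3D evidence is consulted only when
    the 2D score falls below the threshold."""
    if score_2d >= threshold:
        return score_2d  # 2D evidence alone is deemed reliable
    # Weak 2D evidence: combine with the 3D matching score.
    return weight * score_2d + (1 - weight) * score_3d
```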
Any automated biometric system is also expected to effectively identify
unknown clients, i.e. to reject those clients who are not enrolled in the
database. In order to explicitly ascertain such capability, we additionally acquired 20
new clients’ 3D fingerprint images and employed them as unknown clients. These
unknown clients were then identified from the proposed approach to ascertain the
performance. Figure 6.16b shows the plot of number of unknown clients identified
as unknown versus known clients rejected as unknown. These experimental results
also suggest superior performance from the 3D minutiae representation, with further
improvement from the combination with conventional 2D minutiae features. The
performance from the proposed identification schemes for the FPIR (false positive
identification rate) and FNIR (false negative identification rate) was also observed
and is illustrated in Fig. 6.16a. The performance improvement using the combination
of 3D minutiae representation, extracted from reconstructed 3D fingerprint images,
and 2D minutiae representation is observed to be quite consistent in FPIR versus
FNIR plots.
Fig. 6.16 a FPIR versus FNIR characteristics from the experiments and b corresponding perfor-
mance for the unknown subject rejection using 240 clients and 20 unknowns
Although the key objective of the performance evaluation was to ascertain the perfor-
mance of the 3D fingerprint verification approach, we also performed experiments
for the recognition tasks using the same protocol/parameters as used for Fig. 6.13 for
the verification task. Figure 6.17 illustrates the cumulative match characteristics (CMC)
from the recognition experiments for the comparison and combination
of 2D/3D fingerprint features. It is widely believed [17] that verification and
recognition are two different problems. However, the illustrated results for recogni-
tion also suggest superior performance using 3D minutiae representation, over 3D
curvature representation, and also illustrate the improvement in average (rank-one)
recognition accuracy using combination of available minutiae features from the 2D
fingerprint images acquired during the 3D fingerprint reconstruction. The score-level
combination shown in Fig. 6.15b was also attempted with other popular fusion methods,
and these results are shown in Fig. 6.18. The performance from the other popular fusion
approaches is also quite close, and these approaches can likewise be employed to improve
the performance of 3D fingerprint identification using the simultaneously acquired 2D
fingerprint images.
Fig. 6.17 The cumulative match characteristics for the a average recognition performance on using
reconstructed 3D fingerprint images and b respective performance using combination of 3D finger-
print and 2D fingerprint images
Fig. 6.18 The ROC (a) and CMC (b) from the combination of 3D fingerprint matching scores and
2D fingerprint matching scores using different fusion rules
The experimental results and the analysis in Sect. 6.4.2 underline the need to develop a quality
measure for 3D minutiae. The development of 3D fingerprint image quality mea-
sures would first require defining or standardizing the image resolution for the intended
applications, just like the existing 500 dpi (level 2 features) or 1000 dpi (level 3 features)
standards for conventional 2D fingerprint images. One simplified approach, which would
also support interoperability between 3D and 2D fingerprints, is to automatically compute
these resolutions from the corresponding 2D images, e.g. a 3D resolution of the source
point cloud data whose projection on the x–y plane meets the standardized 500 or
1000 dpi resolution.
When the shape index Ci (Eq. 5.8 in Chap. 5) is close to 0.75, the shape of the 3D
surface is more likely to be the ridge shape. On the 3D fingerprint surface, the Ci values
were observed to be concentrated around the numeric values representing fingerprint
valley (0.25) and ridge (0.75) regions, and the shape index is therefore likely to be
largely distributed in this zone. Our encoding scheme therefore splits the 3D fingerprint
surface into five zones: cup, rut, saddle, ridge and cap. The direction of the dominant
principal curvature, max(|kmax|, |kmin|), is partitioned into six directions. Only the rut
and ridge zones are further divided, since for the cup, saddle and cap zones |kmax| and
|kmin| are close, and the estimates of kmax and kmin (which define the shape index Ci)
are not as accurate as those in the rut and ridge zones. The resulting feature
representation has 15 different values. Each of these 15 values can therefore be
represented using a 4-bit binary representation and thus forms a binary code for each
pixel. Table 7.1 provides a summary of the 3D surface curvature encoding using
different shape index values and the corresponding 15 values that are encoded as 4-bit
binary numbers for every 3D fingerprint surface point. This binarized representation
of the 3D fingerprint surface is referred to as the Finger Surface Code [4] in this
chapter and is similar to the IrisCode representation in [5] or the DoN in [6].
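The zoning described above can be sketched as a vectorized encoder; the interval boundaries follow Table 7.1, while the function name and the convention that the dominant curvature direction lies in [0, π) are assumptions.

```python
import numpy as np

def finger_surface_code(shape_index, direction):
    """Map each surface point to one of 15 code values (4 bits): cup, saddle
    and cap take a single code each, while rut and ridge are subdivided into
    six dominant principal-curvature directions (bins of pi/6)."""
    ci = np.asarray(shape_index, dtype=float)
    d = (np.asarray(direction) // (np.pi / 6)).astype(int).clip(0, 5)
    code = np.empty(ci.shape, dtype=np.uint8)
    code[ci < 0.0625] = 0                                  # cup
    rut = (ci >= 0.0625) & (ci < 0.4375)
    code[rut] = 1 + d[rut]                                 # rut: codes 1..6
    code[(ci >= 0.4375) & (ci < 0.5625)] = 7               # saddle
    ridge = (ci >= 0.5625) & (ci < 0.9375)
    code[ridge] = 8 + d[ridge]                             # ridge: codes 8..13
    code[ci >= 0.9375] = 14                                # cap
    return code
```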
The matching score between two M × N Finger Surface Codes, say J and K, is
computed using their normalized Hamming distance HD(a, b) as follows:
$$\text{Score}_{3DFinger} = \frac{1}{4 \times M \times N} \sum_{p=1}^{N} \sum_{q=1}^{M} HD\big(J(p,q), K(p,q)\big) \qquad (7.1)$$

$$HD(a,b) = \begin{cases} 1, & \text{if } a \neq b \\ 0, & \text{if } a = b \end{cases} \qquad (7.2)$$
where a, b ∈ {0, 1}. The two Finger Surface Code templates are shifted left and right,
and a match score using (7.1) is generated for each of these shifts. The minimum
score among all such shifts is designated as the final match score between the two 3D
fingerprints.
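A minimal sketch of the matching rule in (7.1)–(7.2) for codes stored as 4-bit integers follows; circular shifts stand in for the left/right template shifts, and the shift range is an illustrative parameter.

```python
import numpy as np

def code_match_score(code_j, code_k, max_shift=5):
    """Normalized Hamming distance (7.1)-(7.2) between two M x N Finger
    Surface Codes, minimised over horizontal shifts; the popcount of the
    XOR gives the per-pixel count of differing bits."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        xor = np.bitwise_xor(np.roll(code_j, s, axis=1), code_k)
        bits = ((xor[..., None] >> np.arange(4)) & 1).sum()
        best = min(best, bits / (4 * code_j.size))
    return best  # 0 for identical codes, up to 1 for fully distinct codes
```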
Contactless 3D fingerprints from 136 different clients were used for the experimen-
tal evaluation. The Finger Surface Codes were shifted by 51 bits to generate the best
match score between two 3D fingerprints. Figure 7.1 shows the distribution of match
scores using the 3D Finger Surface Codes and also using the surface codes introduced
in [3]. The distributions shown in this figure suggest that the Finger Surface Code can
better separate the genuine match scores from the impostor match scores. The comparative
performance using the ROC is illustrated in Fig. 7.2. It can be observed from this
figure that the Finger Surface Code-based approach for 3D fingerprint identification can
offer significantly improved performance over that from the surface codes.
The matching of 3D fingerprints using the binary coding scheme, the Finger Surface Code,
can be further enhanced by incorporating an improved matching strategy. Such an improved
matching scheme [7] revisits the Hamming distance in (7.2), which is quite effec-
tive when all the values in the binarized code encode equally important information
for discriminating identities. However, all the values in the coding space may not
carry equal discriminative information.
Table 7.1 The zones of the Finger Surface Code

| | Cup | Rut | Saddle | Ridge | Cap |
|---|---|---|---|---|---|
| Ci | 0–0.0625 | 0.0625–0.4375 | 0.4375–0.5625 | 0.5625–0.9375 | 0.9375–1 |
| Angle (π/6) | – | 0 1 2 3 4 5 | – | 0 1 2 3 4 5 | – |
| Code | 0 | 1 2 3 4 5 6 | 7 | 8 9 10 11 12 13 | 14 |
7.1 Fast 3D Fingerprint Matching Using Finger Surface Code
98 7 Other Methods for 3D Fingerprint Matching
Fig. 7.1 Distribution of genuine and impostor match scores for 136 clients' 3D fingerprints using
(a) surface code and (b) finger surface code
Fig. 7.2 Comparative performance for matching 3D fingerprints using finger surface code and
surface code for 136 clients' 3D fingerprints
This motivates a more effective similarity measure SM(a, b), replacing the Hamming
distance HD(a, b) in (7.2), that can judiciously consider the individual importance of
features in the coding space for computing the matching scores.
$$SM(a,b) = \begin{cases} 2 - s, & \text{if } a = b = 1 \\ s, & \text{if } a = b = 0 \\ 0, & \text{if } a \neq b \end{cases} \qquad (7.3)$$
The parameter s in the above equation controls the significance of one of the
coding pairs; the Hamming distance is a special case when s is set to 1. If the four
possible scenarios (ab ∈ {00, 01, 10, 11}) are equally likely, the expected similarity
score will be 0.5, independent of the parameter s. The experimental results
using this approach (s = 0.75), referred to as the efficient Finger Surface Code, are shown
in Fig. 7.3. These experiments use 3D fingerprints from 240 different clients [8] for
the comparative performance evaluation. The 3D fingerprints from 240 clients,
each with six images, resulted in 3600 (240 × 15) genuine and 1,032,480 (239 × 6 ×
6 × 240/2) imposter match scores. In order to account for the translation variations in
this database, the templates are shifted with vertical and horizontal translations. The
minimum score obtained from such shifting is considered as the final match score.
The comparative performance shown in Fig. 7.3 indicates that such a matching strategy can
further improve the matching accuracy and offers an efficient alternative for matching
3D fingerprint images.
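The similarity measure (7.3) can be sketched as below; reporting the mean per-bit similarity is an assumption, since the exact score normalisation is not spelled out in the text.

```python
import numpy as np

def sm_score(a, b, s=0.75):
    """Similarity measure (7.3): agreeing 1-bits contribute 2 - s, agreeing
    0-bits contribute s, and disagreements contribute 0, so one of the two
    agreement types can be weighted as more informative. With s = 1 the
    measure reduces to the (complemented) Hamming distance behaviour."""
    a, b = np.asarray(a), np.asarray(b)
    sim = np.zeros(a.shape)
    sim[(a == 1) & (b == 1)] = 2 - s
    sim[(a == 0) & (b == 0)] = s
    return sim.sum() / a.size
```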
Fig. 7.3 Comparative experimental results for 3D fingerprint matching from 240 different clients
with different 3D fingerprints
where li represents the ith side of the minutiae triangle and the lengths of the three sides
are sorted in ascending order, i.e. l1 ≤ l2 ≤ l3; ϕ represents the largest angle of
such a minutiae triangle and mi represents the ith minutia. Features are first extracted
from these triangulations, and then minutiae alignment and fingerprint matching are
performed. In 3D space, the Delaunay triangulation generates tetrahedra that satisfy
the empty circumsphere criterion, similar to the empty circumcircle criterion in the 2D case.
The key idea is to generate Delaunay tetrahedrons based on the minutiae mi in 3D
space. For simplicity, the term tetrahedron is used here to represent a 3D Delaunay
tetrahedron. A general tetrahedron is defined as a convex polyhedron consisting
Fig. 7.4 a 2D minutiae Delaunay triangulation and features: mi represents a 2D minutia, li
represents the side of a minutiae triangle, and ϕ represents the largest angle of the minutiae
triangle. b The triangulation of 3D minutiae is composed of tetrahedra
7.2 Tetrahedron-Based 3D Fingerprint Matching 101
of four triangular faces. It fills the convex hull of the points with tetrahedra so
that the vertices of the tetrahedra are the data points, i.e. minutiae. A tetrahedron
can be specified by its polyhedron vertices (xi, yi, zi), where i = 1, . . ., 4. The
circumscribing sphere of any tetrahedron does not contain any other point inside
the sphere. The algorithm for generating such tetrahedra is detailed in [12]. Every four
3D minutiae are connected to generate one tetrahedron. Each of the four minutiae
corresponds to one of the tetrahedron's four vertices (m1, m2, m3, m4). Each vertex can be
represented as m = [x, y, z, θ, φ]. The tetrahedron can be uniquely represented
by the following 8-tuple representation in 3D space:
$$\left( l_{max},\; l_{min},\; \frac{l_{max}}{l_{s\_max}},\; \varphi,\; \tilde{\theta}_{max},\; \tilde{\theta}_{min},\; \tilde{\varphi}_{max},\; \tilde{\varphi}_{min} \right) \qquad (7.5)$$
where lmax and lmin represent the largest and smallest sides of the tetrahedron,
respectively, ls_max is the second largest side, and ϕ is the largest angle among the
tetrahedron's four faces. The length of any tetrahedron side can be computed as the
Euclidean distance between its two vertices.
Besides these geometric features, the differences of minutiae direction and orien-
tation are computed as features:
$$\tilde{\theta} = \theta_1 - \theta_2 \qquad (7.7)$$

$$\tilde{\varphi} = \varphi_1 - \varphi_2 \qquad (7.8)$$
θ̃max is the 2D orientation difference between the two vertices of the largest
side of the tetrahedron, i.e. the azimuth angle difference between the two 3D minutiae
connecting the largest side lmax; φ̃max is the 3D orientation difference between the two
vertices of the largest side, i.e. the elevation angle difference between the two 3D
minutiae connecting the largest side lmax. The eight features describing a tetrahe-
dron in (7.5) are employed to match two arbitrary Delaunay tetrahedrons. For each
matched tetrahedron, we perform 3D alignment based on its vertices, i.e. 3D minu-
tiae. Figure 7.5a illustrates a 3D minutiae tetrahedron, and Fig. 7.5b illustrates the
representation of one of its minutiae in 3D space. The sample figure in (a) uses
m1, m2, m3, m4 to represent the four 3D minutiae, with lmax and ϕ representing two of
the tetrahedron features in (7.5), while the sample figure in (b) defines the measurements
for one 3D minutia m3 of this tetrahedron, with θ3 and φ3 being the 2D and 3D
orientations of this 3D minutia m3. The red line in this figure is the projection of the
3D minutia orientation φ3 (blue line) on the x–y plane.
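Extracting the 8-tuple (7.5) from four 3D minutiae can be sketched as follows; each minutia is taken as a row [x, y, z, θ, φ], and pairing θ̃min/φ̃min with the endpoints of the smallest side is an assumption mirroring the description of θ̃max and φ̃max.

```python
import numpy as np
from itertools import combinations

def tetra_features(m):
    """8-tuple (7.5) of a minutiae tetrahedron, given a 4 x 5 array of
    minutiae rows [x, y, z, theta, phi]."""
    pts = m[:, :3]
    # Six edge lengths together with their endpoint indices, sorted.
    edges = sorted((np.linalg.norm(pts[i] - pts[j]), i, j)
                   for i, j in combinations(range(4), 2))
    l_min, i_min, j_min = edges[0]
    l_smax = edges[-2][0]
    l_max, i_max, j_max = edges[-1]
    # Largest interior angle over the four triangular faces (law of cosines).
    phi_face = 0.0
    for face in combinations(range(4), 3):
        for a in face:
            b, c = [v for v in face if v != a]
            u, v = pts[b] - pts[a], pts[c] - pts[a]
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            phi_face = max(phi_face, np.arccos(np.clip(cosang, -1.0, 1.0)))
    return (l_max, l_min, l_max / l_smax, phi_face,
            m[i_max, 3] - m[j_max, 3], m[i_min, 3] - m[j_min, 3],  # theta diffs
            m[i_max, 4] - m[j_max, 4], m[i_min, 4] - m[j_min, 4])  # phi diffs
```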
The 3D minutiae tetrahedron matching algorithm generates a numerical match
score between two 3D fingerprint minutiae templates. A reference minutiae tetrahe-
dron sample TPi and a probe minutiae tetrahedron sample TQj are, respectively, selected
from the 3D fingerprint templates P and Q. The features in (7.5) are computed from
Fig. 7.5 a A tetrahedron sample and its features: mi represents a 3D minutia, lmax in red
represents the largest side of this tetrahedron, and ϕ represents the largest angle of the
tetrahedron's faces. b A minutiae sample of the tetrahedron with its minutiae direction θ3 and
orientation φ3 (blue line). The red line is the projection of the blue line on the x–y plane
and is used to illustrate the measurement of the angle θ3 representing the minutiae direction
these two minutiae tetrahedron samples. If the differences between the TPi and TQj
features are smaller than given thresholds, these two minutiae tetrahedron samples
can be considered as being matched. The feature differences are computed as follows:
$$\Delta l_{max} = \left|l_{P_i max} - l_{Q_j max}\right|, \quad \Delta l_{min} = \left|l_{P_i min} - l_{Q_j min}\right|, \quad \Delta l = \left|\frac{l_{P_i max}}{l_{P_i s\_max}} - \frac{l_{Q_j max}}{l_{Q_j s\_max}}\right|, \quad \Delta\varphi = \left|\varphi_{P_i} - \varphi_{Q_j}\right| \qquad (7.9)$$

$$\Delta\tilde{\theta}_{max} = \left|\tilde{\theta}_{P_i max} - \tilde{\theta}_{Q_j max}\right|, \quad \Delta\tilde{\theta}_{min} = \left|\tilde{\theta}_{P_i min} - \tilde{\theta}_{Q_j min}\right| \qquad (7.10)$$

$$\Delta\tilde{\varphi}_{max} = \left|\tilde{\varphi}_{P_i max} - \tilde{\varphi}_{Q_j max}\right|, \quad \Delta\tilde{\varphi}_{min} = \left|\tilde{\varphi}_{P_i min} - \tilde{\varphi}_{Q_j min}\right| \qquad (7.11)$$

When $\Delta l_{max} < th_{l_{max}}$, $\Delta l_{min} < th_{l_{min}}$, $\Delta l < th_{l}$, $\Delta\varphi < th_{\varphi}$, $\Delta\tilde{\theta}_{max} < th_{\tilde{\theta}}$, $\Delta\tilde{\theta}_{min} < th_{\tilde{\theta}}$, $\Delta\tilde{\varphi}_{max} < th_{\tilde{\varphi}}$ and $\Delta\tilde{\varphi}_{min} < th_{\tilde{\varphi}}$, the two tetrahedra are considered as being matched. Then, 3D minutiae tetrahedron alignment is performed using the transformation matrices from (6.9)–(6.10) of Chap. 6. The match score between two 3D fingerprints can be computed as follows:
$$S_{3DT} = \frac{m^2}{N_P \times N_Q} \qquad (7.12)$$
where m is the total number of matched 3D minutiae pairs, and NP and NQ are the
numbers of 3D minutiae in the 3D fingerprint templates P and Q, respectively. This
algorithm for matching two 3D fingerprints is also summarized in the algorithm TM.
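The thresholded comparison of (7.9)–(7.11) and the final score (7.12) reduce to a few lines; collapsing the eight thresholds into a single vector is an illustrative simplification.

```python
import numpy as np

def tetra_pair_matched(f_p, f_q, thresholds):
    """True when every absolute feature difference (7.9)-(7.11) between the
    two 8-tuples of (7.5) stays below its threshold."""
    diff = np.abs(np.asarray(f_p, float) - np.asarray(f_q, float))
    return bool(np.all(diff < np.asarray(thresholds, float)))

def s3dt(matched_minutiae, n_p, n_q):
    """Final match score (7.12) between templates with n_p and n_q minutiae."""
    return matched_minutiae ** 2 / (n_p * n_q)
```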
Fig. 7.6 a Tetrahedra with minutiae quality > 0.7, b tetrahedra with minutiae quality > 0.5,
c tetrahedra with all minutiae
Another approach to match 3D fingerprints considers [14] the unit normal vectors,
computed either at all the 3D fingerprint surface points or only at the 3D minutiae
Fig. 7.7 Contactless 3D fingerprint matching results using the ROC; a using first session 3D
fingerprint matching and b using two session fingerprint matching
locations. The unit normal vector at a 3D fingerprint surface point can be estimated
using the lowest eigenvalue (Eq. 5.7 in Chap. 5), which is available while determining
the principal axes of the masked ridge surface for the minutiae direction angle φ. Since
the direction of this principal axis is the normal vector over the masked ridge surface, it
carries less noise than a normal vector estimated at the exact minutiae location alone.
The match score between two unit normal vectors, say n1 and n2, from the 3D minutiae
of the query image (input file) and the stored template file is generated using their
dot product, i.e. cos(N) = n1 · n2; if cos(N) is larger than a predefined threshold, the
normal vectors of the minutiae pair are considered to be matched. The match scores
between the surface normal vectors from 3D fingerprints can provide an additional cue
and be used to further improve the matching accuracy. Figure 7.8 illustrates such sample
experimental results from the 3D fingerprints of 10 different clients. The ROCs in this
figure also present the results using the score-level combination of surface normals and
Finger Surface Codes. These results indicate that the 3D fingerprint matching accuracy
can be further improved by incorporating surface normal match scores.
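The dot-product test on unit normals amounts to a couple of lines; the threshold here is an illustrative value rather than the one used in these experiments.

```python
import numpy as np

def normals_matched(n1, n2, threshold=0.9):
    """Match two unit normal vectors via cos(N) = n1 . n2: the pair is
    declared matched when the cosine exceeds the predefined threshold."""
    return float(np.dot(n1, n2)) > threshold
```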
Fig. 7.8 Experimental results from unit surface normal vector field-based 3D fingerprint matching
and performance improvement using its adaptive combination with Finger Surface Code-based matching
Matching 3D fingerprints with large intra-class variations, i.e. in their elevation angles,
azimuth angles and distances from the sensor, can be considered as a partial 3D
fingerprint matching problem [16], which is expected to degrade the matching accuracy.
The acquisition of nail-to-nail 3D fingerprint images can significantly improve the
matching accuracy for 3D fingerprints with large intra-class variations. Unlike the
partial 3D fingerprints generated from a fixed sensor, nail-to-nail 3D fingerprints can
offer significantly higher image detail to accurately match 3D fingerprints even under
higher intra-class variations.
References
1. Koenderink JJ, van Doorn AJ (1992) Surface shape and curvature scales. Image Vis Comput
10(8):557–564
2. Dorai C, Jain AK (1997) COSMOS-A representation scheme for 3D free-form objects. IEEE
Trans Pattern Anal Mach Intell 19(10):1115–1130
3. Kumar A, Kwong C (2013) Towards contactless, low-cost and accurate 3D fingerprint identi-
fication. In: Proceedings of the CVPR 2013, Portland, USA, June 2013, pp 3438–3443
4. Kanhangad V, Kumar A, Zhang D (2011) A unified framework for contactless hand identifi-
cation. IEEE Trans Inf Forensics Secur 20(5):1415–1424
5. Daugman J (2003) The importance of being random: statistical principles of iris recognition.
Pattern Recogn 36:279–291
6. Zheng Q, Kumar A, Pan G (2016) A 3D feature descriptor recovered from a single 2D palmprint
image. IEEE Trans Pattern Anal Mach Intell 38:1272–1279
7. Cheng KHM, Kumar A (2018) Advancing surface feature description and matching for more
accurate biometric recognition. In: Proceedings of the ICPR 2018, Beijing, Aug 2018
8. The Hong Kong Polytechnic University 3D Fingerprint Images Database (2015) http://www.
comp.polyu.edu.hk/~csajaykr/myhome/database.htm
9. Uz T, Bebis G, Erol A et al (2007) Minutiae-based template synthesis and matching using
hierarchical Delaunay triangulations. In: Proceedings of the BTAS, 2007, pp 1–8
10. Parziale G, Niel A (2004) A fingerprint matching using minutiae triangulation. In: Proceedings
of the ICBA 2004, Hong Kong, Springer Berlin Heidelberg, pp 241–248
11. Ahuja N (1982) Dot pattern processing using Voronoi neighborhoods. IEEE Trans Pattern Anal
Mach Intell 4(3):336–343
12. Fang TP, Piegl LA (1995) Delaunay triangulation in three dimensions. IEEE Trans Comput
Graphics Appl 15(5):62–69
13. The Hong Kong Polytechnic University 3D Fingerprint Images Database Version 2.0 (2017)
http://www.comp.polyu.edu.hk/~csajaykr/3Dfingerv2.htm
14. Kumar A, Kwong C (2015) Contactless 3D biometric feature identification system and method
thereof. U.S. Patent No. 8953854, Feb 2015
15. Galbally J, Bostrom G, Beslay L (2017) Full 3D touchless fingerprint recognition: sensor,
database and baseline performance. In: Proceedings of the IJCB 2017, pp 225–232
16. Lin C, Kumar A (2018) Contactless and partial 3D fingerprint recognition using multi-view
deep representation. Pattern Recognit, Nov 2018, pp 314–327
Chapter 8
Individuality of 3D Fingerprints
2D fingerprint impressions have been widely used to uniquely establish the identity
of an individual for over hundred years. Several attempts have been made in the
literature to quantitatively establish the limits on the number of different identities,
i.e. uniqueness, that can be established from the 2D fingerprint images. Reference
[1] provides critical analysis of many such fingerprint image individuality models
introduced in the literature. These models can be classified into five different cate-
gories [2]: grid-based models, ridge-based models, fixed probability models, relative
measurement models and generative models; [3–7] are representative examples of
each of these five categories, respectively. Studies on the
individuality of the fingerprint biometric have attracted renewed attention following sev-
eral lawsuits in US courts that have challenged the admissibility of personal identifi-
cation using fingerprint biometric-based evidence. These challenges have primarily
questioned the uncertainty associated with the experts' judgment and the quantitative
estimation of the likelihood of erroneous decisions, which are generally made on
the basis of false random matches between two arbitrary fingerprint samples from
different individuals.
Uniqueness of fingerprint biometric has also been widely accepted by the experts
on the basis of manual inspection from the 2D fingerprint images. However, any
scientific study on the uniqueness of 3D fingerprints, or the merit of employing 3D
fingerprint identification over the conventional 2D fingerprint identification, has not
yet been performed. This chapter attempts to answer one of the most fundamen-
tal questions on the availability of inherent discriminable information from the 3D
fingerprints. There are many choices for representing 3D fingerprint data. The finger-
print representation based on 3D minutiae introduced in the previous chapter is the
most effective to quantitatively study the discriminability of 3D fingerprint biometric.
The permanence of friction ridges on finger surfaces, or the resulting minutiae features,
has been supported by several morphogenesis and anatomical studies [8, 9]. There-
fore, the key topic of interest in establishing the individuality of 3D fingerprints is to
measure the amount of discriminable details in two different 3D fingerprint images
using the minutiae features. The images in Fig. 8.1 indicate merit in employing 3D
fingerprints to enhance uniqueness of fingerprint biometric.
There are many possibilities to ascertain the individuality of 3D fingerprints. In
this work, we formulate the individuality problem [7] as the probability of finding
sufficiently similar 3D fingerprint in a given population, i.e. given two 3D finger-
prints with same resolution but originating from two different sources, what is the
probability that these 3D fingerprints can be declared as sufficiently similar with
respect to a given or popular matching criterion? In order to compute such probabil-
ity of false random correspondence between two arbitrarily chosen 3D fingerprints,
we will need to first select a matching criterion and the representation for 3D finger-
prints. Contactless 3D fingerprints can be represented by many features and given the
popular usage of minutiae features in the law enforcement, forensics and in most 2D
fingerprint identification systems, it is judicious to choose 3D minutiae representa-
tion introduced in Chap. 6 to ascertain the individuality of 3D fingerprints. Any such
scientific basis to ascertain the individuality of 3D fingerprints can help to determine
the merit of employing 3D fingerprints, over conventional 2D fingerprints, for the
Fig. 8.1 a Processed 2D fingerprint image sample with two bifurcations (blue square) and two
endings (red circle), along with the orientation (green line) marked on the image; b thinned images
from 2D image in (a) with respective minutiae. c same fingerprint as in image (a) using 3D fingerprint
imaging; d reconstructed 3D fingerprint from the same finger as in image (a) and with four of five
minutiae marked in 3D image using blue colour spots. Acquisition of 3D fingerprints enables
recovery of same 2D minutiae features with additional discriminative information, i.e. height and
elevation angle, in 3D space that significantly enhances the limits on the individuality of fingerprint
biometric
Let P and Q denote two arbitrarily selected 3D fingerprint minutiae templates using
the 3D minutiae representation, with m and n, respectively, representing the numbers of
truly recovered 3D minutiae that are available for matching.
where r0, θ0 and φ0 represent the tolerance limits in the measurements of the distance
and of the angles, respectively. Let V be the overlapping volume between the two
arbitrary 3D fingerprint surfaces being matched, and H(p) the height of the aligned
surface at the point p of its projection on the x–y plane. Figure 8.2 illustrates the
spherical region of tolerance between two matching 3D minutiae within the
overlapping volume of two matched 3D fingerprints. The volume V can be estimated
as follows:
$$U = \sum_{s=1}^{N}\sum_{t=1, t\neq s}^{N} \frac{\sum_{p\in A}\left|H(p)_s - H(p)_t\right|}{A(s,t)} \qquad (8.6)$$

$$\sigma = \sum_{s=1}^{N}\sum_{t=1, t\neq s}^{N} \frac{\sum_{p\in A}\left(\left|H(p)_s - H(p)_t\right| - U\right)^2}{A(s,t)} \qquad (8.7)$$

$$V = \left(U + 2\sqrt{\sigma}\right) A(s,t) \qquad (8.8)$$
where A(s, t) is the overlapped area on the x–y plane of the two aligned surfaces s and t.
The variance σ can be estimated from the statistics of the reconstructed 3D fingerprint
surfaces in the database, and 2√σ represents the twice-standard-deviation limit that covers 95%
of the variations.
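Equations (8.6)–(8.8) can be sketched as below, under the simplifying assumption that every pair of aligned surfaces shares one common overlap area A; the height maps are assumed already aligned and masked to the overlap region.

```python
import numpy as np

def tolerance_volume(height_maps, area):
    """Matching volume V of (8.6)-(8.8): U accumulates the per-pair height
    differences, sigma their squared deviations about U, and
    V = (U + 2 * sqrt(sigma)) * area."""
    n = len(height_maps)
    pairs = [(s, t) for s in range(n) for t in range(n) if t != s]
    u = sum(np.abs(height_maps[s] - height_maps[t]).sum() / area
            for s, t in pairs)
    sigma = sum(((np.abs(height_maps[s] - height_maps[t]) - u) ** 2).sum() / area
                for s, t in pairs)
    return (u + 2 * np.sqrt(sigma)) * area
```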
We can similarly extend (8.14) to estimate the probability that any one of the n minu-
tiae from the input template can be matched with any of the m minutiae in the target
template as follows:
$$P_{3D}(m, n, 1) = \binom{n}{1} \frac{mC}{V} \left(\frac{V - mC}{V - C}\right)^{n-1} \qquad (8.15)$$
The probability that p minutiae among the n input template minutiae can be falsely
matched with m minutiae in the target templates can be written as follows:
$$P_{3D}(m,n,p) = \binom{n}{p}\left[\frac{mC}{V}\cdot\frac{(m-1)C}{V-C}\cdots\frac{(m-p+1)C}{V-(p-1)C}\right] \times \left[\frac{V-mC}{V-pC}\cdot\frac{V-(m+1)C}{V-(p+1)C}\cdots\frac{V-(m+n-p-1)C}{V-(n-1)C}\right] \qquad (8.16)$$
The first product term in the above equation represents the probability that p minutiae from
the input template are matched, while the second product term represents the probability
that the remaining (n − p) minutiae are not matched. We can rewrite (8.16) to compute
the probability that exactly p minutiae are matched, among the n
minutiae in template Q and the m minutiae in template P, as follows:
$$P_{3D}(J, m, n, p) = \frac{\binom{m}{p}\binom{J-m}{n-p}}{\binom{J}{n}} \qquad (8.17)$$
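The hypergeometric form (8.17) is directly computable with integer binomials; in the sketch below, J is read as the number of minutiae-sized tolerance cells in the overlap volume (V/C in the notation above), which is an interpretation rather than a statement from the text.

```python
from math import comb

def p3d_exact_matches(j, m, n, p):
    """Hypergeometric probability (8.17) of exactly p false minutiae
    correspondences between templates holding m and n minutiae, with j
    available tolerance cells."""
    return comb(m, p) * comb(j - m, n - p) / comb(j, n)
```

The probabilities over all feasible p sum to one, which gives a quick sanity check on the reconstruction.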
where l is the probability of matching a 3D minutia along the orientation θ once it is
falsely matched at the 3D location (x, y, z). This probability l is the same as that computed
in Eq. (8.10). The probability of matching q minutiae in both position (x, y, z) and
direction θ can be written as follows:
$$P_{3D}(J,m,n,q) = \sum_{p=q}^{\min(m,n)} \left( \frac{\binom{m}{p}\binom{J-m}{n-p}}{\binom{J}{n}} \times \binom{p}{q}\, l^q (1-l)^{p-q} \right) \qquad (8.19)$$
With q minutiae’s position and direction θ already being matched, the probability
of r minutiae’s direction φ being matched can be estimated as follows:
$$\binom{q}{r} k^r (1-k)^{q-r} \qquad (8.20)$$
Assuming that images from the same fingers are acquired with conventional 2D
fingerprint sensors, we can estimate the expected improvement in the individuality
of fingerprints using 3D fingerprint systems. The probability of false random corre-
spondence using the conventional minutiae representation in 2D space [7], i.e. matching
q minutiae both in spatial location (x, y) and along the direction θ, is given by
$$P_{2D}(M,m,n,q) = \sum_{p=q}^{\min(m,n)} \left( \frac{\binom{m}{p}\binom{M-m}{n-p}}{\binom{M}{n}} \times \binom{p}{q}\, l^q (1-l)^{p-q} \right) \qquad (8.22)$$
where M represents the nearest integer to A/(πr0²). Here, we assume that the
same two source fingerprints are matched in 2D space and in 3D space, so that J = M;
using (8.22), we can rewrite (8.21) as

$$P_{3D}(M, m, n, r) = \sum_{q=r}^{\min(m,n)} \binom{q}{r}\, k^r (1-k)^{q-r} \times P_{2D}(M, m, n, q) \qquad (8.23)$$
The tabulated results illustrate the possible improvement in the uniqueness of fingerprints
when the same minutiae features are matched in 3D space, and indicate signif-
icantly enhanced limits on the number of persons that can be accurately identified
using 3D fingerprints over the currently believed limits based on conventional 2D
fingerprint imaging.
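The comparison underlying these limits can be reproduced numerically; the sketch below evaluates (8.22) and its 3D extension (8.23), taking the upper summation limits as min(m, n), an assumption consistent with (8.19).

```python
from math import comb

def p_2d(big_m, m, n, q, l):
    """(8.22): probability of q false matches in both (x, y) position and
    direction theta, with l the direction-match probability given a
    position match and big_m the number of tolerance cells."""
    total = 0.0
    for p in range(q, min(m, n) + 1):
        hyper = comb(m, p) * comb(big_m - m, n - p) / comb(big_m, n)
        total += hyper * comb(p, q) * l ** q * (1 - l) ** (p - q)
    return total

def p_3d(big_m, m, n, r, l, k):
    """(8.23): extends (8.22) with the elevation angle phi, matched with
    probability k once position and theta already agree."""
    return sum(comb(q, r) * k ** r * (1 - k) ** (q - r) * p_2d(big_m, m, n, q, l)
               for q in range(r, min(m, n) + 1))
```

With k = 1 the φ test never rejects and (8.23) collapses to (8.22), which makes the extra discriminative power contributed by the elevation angle (k < 1) explicit.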
8.2 Probability of False Random Correspondence Using Noisy Minutiae Matching 117

where p* is the probability of observing exactly u matching minutiae among the
m minutiae recovered in template I and the n minutiae recovered in template T.
References [10, 11] provide a closed-form expression for p* using the Poisson
probability mass function with mean λ(I, T):

$$\lambda(I, T) = m\, n\, p(I, T)$$

where
$$p(I,T) = P\Big(\sqrt{(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2} \le r_0 \ \text{and}\ \min\big(\left|\theta_i-\theta_j\right|,\ 360-\left|\theta_i-\theta_j\right|\big) \le \theta_0 \ \text{and}\ \left|\phi_i-\phi_j\right| \le \phi_0\Big) \qquad (8.26)$$
Equations (8.9)–(8.11) assume that the features of a given minutia have a uniform
distribution in the given space. Reference [10] suggests finite mixture models for the
minutiae variability in the fingerprint, which fit the clusters of features representing
the minutiae; therefore, p(I, T) can be defined as follows:
$$p(I,T) = \iiint\limits_{(s_2,\theta_2,\phi_2)\in B(s_1,\theta_1,\phi_1)} f_I(s_1,\theta_1,\phi_1)\, f_T(s_2,\theta_2,\phi_2)\, ds_2\, d\theta_2\, d\phi_2\, ds_1\, d\theta_1\, d\phi_1 \qquad (8.27)$$

$$f(s,\theta,\phi \mid \Theta_G) = \sum_{g=1}^{G} \tau_g\, f_g^{X}\!\left(s \mid \mu_g, \Sigma_g\right) \cdot f_g^{D}\!\left(\theta \mid \nu_g, \kappa_g, p_g\right) \cdot f_g^{\phi}\!\left(\phi \mid \eta_g, \zeta_g\right) \qquad (8.28)$$
where τ_g is the component weight and f_g^X(s | μ_g, Σ_g) is the density of a bivariate Gaussian random variable with mean μ_g and covariance matrix Σ_g. The second term can be estimated by a two-component mixture of von Mises distributions:
f_g^{D}(\theta \mid \nu_g, \kappa_g, p_g) = \begin{cases} p_g\, \upsilon(\theta), & \text{if } \theta \in [0, \pi) \\ (1 - p_g)\, \upsilon(\theta - \pi), & \text{if } \theta \in [\pi, 2\pi) \end{cases}    (8.29)

where \upsilon(\cdot) is the von Mises density

\upsilon(\theta) = \frac{1}{2\pi I_0(\kappa_g)}\, e^{\kappa_g \cos(\theta - \nu_g)}    (8.30)

with I_0 the modified Bessel function of order zero,

I_0(\kappa_g) = \frac{1}{2\pi} \int_0^{2\pi} e^{\kappa_g \cos(\theta - \nu_g)}\, \mathrm{d}\theta    (8.31)
where ν_g and κ_g are the mean angle and the precision of the von Mises distribution, and p_g and (1 − p_g) are the probabilities of the 3D minutiae direction being θ and θ + π, respectively. The last term in (8.28) is the distribution function for φ, which is defined by another von Mises distribution with mean angle η_g and precision ζ_g. When low-quality 3D fingerprint images are used for matching, the PRC is expected to be higher. Therefore, similar to [10], we can model the relationship of PRC(w|m, n) with the image quality as follows:
where μ is the overall mean of the logarithm of p(I, T), γ(q(I), q(T)) is the effect of the image quality on templates I and T, q(S) is the quality measure for the 3D fingerprint, ch(S) represents the characteristic of the fingerprint class, and ε(I, T) is a random error with zero mean and variance σ². The term Finger(S) represents the variation in the 3D reconstruction of impression S from the same finger (I and I′), which is expected to introduce some correlation as follows:
\mathrm{Corr}\left( \log p(I, T),\ \log p(I', T) \right) = \frac{\sigma_\alpha^2}{\sigma_\alpha^2 + \sigma^2}    (8.33)
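Under the uniform-distribution assumption preceding (8.27), the per-pair match probability p(I, T) of (8.26) can be approximated by Monte Carlo simulation. The sketch below is illustrative only: the sampling volume and the tolerances r0, θ0, φ0 are assumed values, not taken from the text, and φ is treated without wraparound.

```python
import random

def estimate_p_match(trials=100_000, side=20.0, r0=2.0,
                     theta0=22.5, phi0=22.5, seed=7):
    # Monte Carlo estimate of p(I, T) in Eq. (8.26): draw two minutiae
    # with uniform position, direction theta and elevation phi, and count
    # how often they fall within all three tolerances simultaneously.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.uniform(0.0, side) for _ in range(3)]
        b = [rng.uniform(0.0, side) for _ in range(3)]
        d_theta = abs(rng.uniform(0.0, 360.0) - rng.uniform(0.0, 360.0))
        d_phi = abs(rng.uniform(0.0, 180.0) - rng.uniform(0.0, 180.0))
        spatial = sum((x - y) ** 2 for x, y in zip(a, b)) <= r0 ** 2
        direction = min(d_theta, 360.0 - d_theta) <= theta0
        elevation = d_phi <= phi0
        hits += spatial and direction and elevation
    return hits / trials
```

Because the random stream is seeded, widening any single tolerance can only add hits, which is a quick consistency check that the estimator behaves as the geometry of (8.26) predicts.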
References
1. Stoney D (2001) Measurement of fingerprint individuality. In: Lee HC, Gaensslen RE (eds) Advances in fingerprint technology. CRC Press
2. Srihari S, Srinivasan H. Individuality of fingerprints: comparison models and measurements. CEDAR Technical Report TR-02-07. http://www.cedar.buffalo.edu/~srihari/papers/TR-02-07.pdf
3. Osterburg J (1977) Development of a mathematical formula for the calculation of fingerprint probabilities based on individual characteristics. J Am Stat Assoc 72:772
4. Roxburgh T (1933) On the evidential value of fingerprints. Sankhyā: The Indian Journal of Statistics 1:89
5. Henry E (1900) Classification and uses of finger prints. Routledge & Sons, London, p 54
6. Champod C, Margot P (1996) Computer assisted analysis of minutiae occurrences on fingerprints. In: Almog J, Springer E (eds) Proceedings of the international symposium on fingerprint detection and identification, p 305
7. Pankanti S, Prabhakar S, Jain AK (2002) On the individuality of fingerprints. IEEE Trans Pattern Anal Mach Intell 24(8):1010–1025
A
Absorption cross section, 46
Acquisition time, 21
Active 3D fingerprint sensors, 22, 41
Adaptive fusion, 91
Adjacent-normal cubic order surface approximation, 65
AFIS, 5
Albedo, 19, 23, 29, 30, 35, 36, 41, 50, 57

C
Calibration of LEDs, 30, 33, 34
Classical Lambertian method, 43
Classical photometric stereo, 43
Coloured photometric stereo, 53
Combined Match Performance (CMC), 91–93
Complexity online 3D fingerprint imaging, 50
Computational time, 50, 58
Contact-based fingerprints, 67
Contactless 2D fingerprints, 11, 66, 92, 95
Contactless 3D fingerprint, 12, 13, 17, 39, 50, 53, 54, 59, 60, 62–66, 71, 77, 95, 96, 106, 110

D
2.5D view fingerprint, 64
3D Delaunay triangulation, 100
3D fingerprint data format, 63
3D fingerprint representation, 109
3D fingerprint ridge–valley recovery, 17, 19
3D minutiae, 65, 71, 76–83, 85, 87, 89–93, 95, 100, 101, 103–106, 109–112, 114–116, 118
3D minutiae selection, 82
3D minutiae overlap volume, 71, 76
3D minutia height, 73
3D minutia orientation, 74, 101
3D minutia quality, 80
DET, 91
Diffuse components, 56

E
EER, 90
Efficient surface code, 99
Euler–Lagrange relation, 39

F
False Negative Identification Rate (FNIR), 91, 92
Faster library, 50
FFTW, 50
Field of View (FOV), 9, 31
Finger motion detection, 55, 56
Finger skin contamination, 59
Finger surface code, 96–98, 107
Finger-on-fly, 9
Finger-on-go, 9
Fingerprint enhancement, 11, 67, 68
Fingerprint matching, 2, 5, 6, 9, 60, 67, 71, 82, 88–91, 93, 99, 100, 104–107, 115, 116
Fourier-domain OCT, 21
FPIR, 91, 92
Frankot and Chellappa algorithm, 38
Fresnel transmittance coefficients, 47
Fringe number selection, 20
Full-field OCT, 11, 21
Similarity measure, 99
Simplified Hanrahan–Krueger (HK) model, 46, 47
Skin deformation, 9, 12
Slap fingerprints, 2, 3
Smartphone fingerprints, 3
Spectrometry, 21
Specular reflectance, 48
Specular reflection removal, 41–43, 54, 56
Spoof fingerprint, 12
Stacked greyscale image, 34
Stereo vision, 18
Structured lighting, 19, 20, 63, 71, 90, 95
Subsurface layer, 46
Surface code, 95, 96, 98
Surface curvature, 65–68, 73–75, 80, 90, 91, 95, 96, 104
Surface gradients, 29, 36, 38, 39
Surface normal vectors, 30, 33, 35, 36, 43, 57, 106
Surface reflectance, 23, 29, 35, 36, 41, 43, 46, 57, 67
SUV colour space, 56
Swipe fingerprint sensors, 4
Synthetic 3D model, 58, 59

T
Tetrahedra, 100, 103
Tetrahedron, 100–105
Thickness of layer, 46
Time-multiplexing, 19
Tolerance limit, 79, 111
Tomography, 41
Transmission type contactless 2D fingerprint sensors, 10
Triangulation-based techniques, 24

U
Ultrasonic 3D fingerprint sensor, 22
Ultrasound fingerprint sensors, 4
Unified 3D minutiae distance, 80
Uniqueness of 3D fingerprints, 109, 115

V
Volume of tolerance, 112
von Mises distribution, 117, 118
Voxelized cloud, 63, 64

W
Weingarten curvature matrix, 66
World coordinates, 32