Fingerprint Liveness Detection Using Gradient-Based Texture Features
DOI 10.1007/s11760-016-0936-z
ORIGINAL PAPER
Abstract  Fingerprint-based recognition systems have been increasingly deployed in various applications nowadays. However, these systems can be spoofed by an accurate imitation of a live fingerprint, such as an artificially made fingerprint. In this paper, we propose a novel software-based fingerprint liveness detection method which achieves good detection accuracy. We regard fingerprint liveness detection as a two-class classification problem and construct co-occurrence arrays from image gradients to extract features. In doing so, a quantization operation is first conducted on the images. Then, the horizontal and vertical gradients at each pixel are calculated, and gradients of large absolute value are truncated into a reduced range. Finally, the second-order and third-order co-occurrence arrays are constructed from the truncated gradients, and the elements of the co-occurrence arrays are directly used as features. The second-order and third-order co-occurrence array features are separately utilized to train support vector machine classifiers on two publicly available databases used in the Fingerprint Liveness Detection Competitions 2009 and 2011. The experimental results demonstrate that the features extracted with the third-order co-occurrence array achieve better detection accuracy than those with the second-order co-occurrence array and outperform the state-of-the-art methods.

Keywords  Biometrics · Fingerprint liveness detection · Image texture · Image gradient · Co-occurrence array

Corresponding author: Zhihua Xia, xia_zhihua@163.com

1 Jiangsu Engineering Center of Network Monitoring, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, College of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Computer Science, New York University, New York, NY 10012, USA
3 Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA

1 Introduction

Fingerprint has been used as a biometric characteristic in forensic science and practice for hundreds of years. Nowadays, in the information age, fingerprints find even more applications because of their uniqueness and ease of collection. However, fingerprint-based systems are vulnerable to spoof attacks, as a spoof fingerprint can easily be made using cheap materials such as gelatin and silicone [1,2]. An attacker could clandestinely retrieve a user's fingerprint and make a spoof one to achieve illegal access. Alternatively, a user may make a spoof fingerprint of his own finger to deceive an attendance system.

In order to address this problem, many fingerprint liveness detection techniques have been proposed to judge whether a fingerprint image was captured from an actual user or not. The existing methods can be divided into two classes: hardware-based methods applied at the acquisition stage and software-based methods applied at the processing stage [1,2].

The hardware-based methods add new hardware components to obtain life signs of the finger such as temperature, pulse, pulse oximetry, conductivity, and blood pressure. Spoof fingers can then be discriminated from real ones by analyzing these signs. The hardware-based solutions can prevent spoof attacks to some degree. However, this type of technique requires the support of additional devices, which increases the overall expense of the fingerprint
SIViP
identification system. On the other hand, the software-based methods add liveness detection capability to existing systems by updating the software architecture. They distinguish live fingers from spoof ones by analyzing the images obtained from the existing fingerprint sensors, without the need for new hardware. This type of method is less expensive and is more flexible for future adaptation.

In this paper, a novel software-based fingerprint liveness detection method is proposed. We regard liveness detection as a two-class classification problem, i.e., classifying a test fingerprint image as either live or spoof.

The remainder of the paper is organized as follows. Section 2 reviews related work on fingerprint liveness detection. Section 3 describes the feature extraction process. In Sect. 4, we briefly introduce the two databases and the support vector machine, conduct comparative experiments, and discuss the results. Finally, we conclude and discuss future work in Sect. 5.

2 Related works

The software-based methods show that live and spoof fingerprints can be discriminated by analyzing features extracted from fingerprint images. Abhyankar and Schuckers [3] developed a method based on multi-resolution texture features and local ridge frequency features. Their texture features include (1) first-order features, i.e., energy, entropy, median, and variance of the histogram, and (2) second-order features, i.e., cluster shade and cluster prominence of the co-occurrence matrix. Coli et al. [4] observed that the high-frequency details of spoof fingerprint images are greatly reduced, and extracted features from the power spectrum for classification. Nikam and Agarwal proposed several liveness detection methods based on texture analysis of the fingerprint images, extracting discriminative features with various texture measures such as Gabor filters [5], the curvelet transform [6], the ridgelet transform [7], the wavelet transform [8], and gray-level co-occurrence matrices combined with the wavelet transform [9]. Jiang and Liu [10] proposed a spoof fingerprint detection method based on a difference co-occurrence matrix. All of these features can address the fingerprint liveness detection problem to some degree. Jin et al. [11] proposed a fingerprint liveness detection method based on the band-selective Fourier spectrum: they revealed that live fingerprint images show a stronger Fourier spectrum in the ring patterns than spoof ones, and classified live and spoof fingerprint images by analyzing band-selective Fourier spectral energies. Tan and Schuckers [12] revealed that artificial fingers have a distinct noise distribution due to the material's properties, and extracted features by analyzing the noise distribution along the valley structure. Ghiani et al. [13] utilized binarized statistical image features, which are quite similar to the local binary pattern and local phase quantization descriptors. Gragnaniello et al. [13] proposed local-descriptor-based methods for the fingerprint liveness detection problem, with features extracted from both the spatial and frequency domains. Jia et al. [14] argued that multi-scale local binary patterns (LBP) reflect the texture of fingerprint images more adequately than the original LBP and proposed a fingerprint liveness detection method using two kinds of multi-scale LBP; their method achieves good detection accuracy.

3 Feature extraction

In this paper, fingerprint liveness detection is considered as a two-class classification problem. Feature extraction is a crucial step for classification problems [15-19]. Here, X = (X_{i,j}) ∈ {0, ..., 255}^{n1×n2} represents an 8-bit grayscale image of size n1 × n2. The symbol X_{i,j} denotes the grayscale value of the pixel located at (i, j), and n1 and n2 denote the height and width of the image, respectively. The feature extraction process includes the following four steps. First, the image is quantized with a quantization factor. Second, the horizontal and vertical gradients are calculated from adjacent quantized pixels. Third, gradients of large absolute value are truncated into a reduced range. Finally, the co-occurrence arrays are constructed from the gradients, and their elements are directly used as the discriminative features.

3.1 Quantization

In the process of feature extraction, the image pixel values are first quantized as:

X_{i,j} ← X_{i,j} / Q   (1)

where Q ≥ 1 is a quantization factor. The quantization operation causes some loss of image information, but does not affect the overall texture of a fingerprint image. It largely reduces the dynamic range of X_{i,j}, thus helping to reduce the dimensionality of the feature vector. Moreover, the experimental results presented in Sect. 4 reveal that, when a suitable quantization factor is used, the feature vectors extracted from the quantized images achieve better detection accuracy than those from the unquantized ones.
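As a minimal sketch of this quantization step (Eq. 1): integer floor division is assumed here, since the paper does not state the rounding rule, and the function name is illustrative.

```python
import numpy as np

def quantize(image: np.ndarray, Q: int) -> np.ndarray:
    """Quantize 8-bit pixel values by a factor Q >= 1 (Eq. 1).

    Floor division is an assumption; the paper writes X <- X/Q
    without specifying how fractional values are rounded.
    """
    if Q < 1:
        raise ValueError("Q must be >= 1")
    return image.astype(np.int32) // Q

# Example: with Q = 8, the 256 gray levels collapse to 32 levels.
X = np.array([[0, 7, 8, 255]], dtype=np.uint8)
print(quantize(X, 8))  # [[ 0  0  1 31]]
```

With Q = 8 the dynamic range shrinks from {0, ..., 255} to {0, ..., 31}, which is what later bounds the gradient range and hence the feature dimensionality.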
Fig. 1 The histograms of horizontal gradients calculated from quantized fingerprint images, Q = 1, 2, 4, 8, 16, 32 (six panels; x-axes: value of gradient, y-axes: percentage)
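The narrowing effect visible in Fig. 1 can be reproduced with a short sketch. A synthetic ridge-like image stands in for a real fingerprint sample here, so only the qualitative trend carries over:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a fingerprint image: smooth stripes plus noise.
x = np.arange(256)
image = 127 + 100 * np.sin(x / 6.0)[None, :] + rng.normal(0, 10, (256, 256))
image = np.clip(image, 0, 255).astype(np.uint8)

for Q in (1, 2, 4, 8, 16, 32):
    Xq = image.astype(np.int32) // Q       # quantization (Eq. 1)
    GH = Xq[:, :-1] - Xq[:, 1:]            # horizontal gradients
    print(f"Q={Q:2d}: gradient range [{GH.min()}, {GH.max()}]")
```

As Q grows, the printed gradient range shrinks roughly in proportion, mirroring the progressively narrower histograms in Fig. 1.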
3.2 Image gradient and truncation operation

Image gradient measures a directional change in the intensity of an image and can be used for effective texture feature extraction [20,21]. Here, the horizontal and vertical gradient arrays G_H and G_V are defined by the differences between adjacent pixels:

G_H(i, j) = X_{i,j} − X_{i,j+1},  for i ∈ {0, ..., n1 − 1}, j ∈ {0, ..., n2 − 2};
G_V(i, j) = X_{i,j} − X_{i+1,j},  for i ∈ {0, ..., n1 − 2}, j ∈ {0, ..., n2 − 1}.   (2)

In our method, the elements of the co-occurrence array are directly used as features, arranged to compose the feature vector. The dimensionality of the feature vector depends on the dynamic range of the gradients. The quantization operation decreases the dynamic range of the gray values and accordingly decreases the dynamic range of the gradients. As shown in Fig. 1, the gradients calculated from the quantized images have a narrower dynamic range than those from the original images. A large quantization factor reduces the dynamic range of the gradients and thus helps to reduce the dimensionality of the feature vector. Moreover, the histogram of the gradients can be approximated by a Laplacian distribution, so we can truncate the gradients to a small range [−T, T] without losing much useful information. After quantizing the pixels with a larger quantization factor Q, a major part of the gradients can be covered with a smaller T. The truncation operation is defined as:

G(i, j) ← trunc_T(G(i, j))   (3)

where G(i, j) before truncation denotes the gradient calculated according to Formula (2) along the horizontal or vertical direction, and trunc_T(G(i, j)) = T if G(i, j) > T; trunc_T(G(i, j)) = −T if G(i, j) < −T; and trunc_T(G(i, j)) = G(i, j) if G(i, j) ∈ [−T, T].

3.3 Co-occurrence array

The co-occurrence matrix (array) is a widely used tool to characterize image texture. The gray-level co-occurrence matrix (GLCM) is usually defined on the gray values of an image and is typically large in dimensionality [22]. For an 8-bit grayscale image, the general GLCM has 256 × 256 = 65536 elements, which cannot be used directly as features because of the high dimensionality. To extract a compact feature vector, various metrics of the matrix, such as energy, entropy, contrast, and correlation, are often applied instead, as in [3]. In the proposed method, the co-occurrence arrays are constructed from the gradients between adjacent pixels, and their dimensionality is efficiently reduced by the quantization and truncation operations. Thus, the elements of the co-occurrence array can be directly used as features.
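A minimal sketch of the gradient and truncation operations of Sect. 3.2 (Eqs. 2 and 3); the function names are illustrative:

```python
import numpy as np

def gradients(X: np.ndarray):
    """Horizontal and vertical gradients between adjacent pixels (Eq. 2)."""
    X = X.astype(np.int32)        # signed arithmetic for the differences
    GH = X[:, :-1] - X[:, 1:]     # G_H(i, j) = X[i, j] - X[i, j + 1]
    GV = X[:-1, :] - X[1:, :]     # G_V(i, j) = X[i, j] - X[i + 1, j]
    return GH, GV

def truncate(G: np.ndarray, T: int) -> np.ndarray:
    """Clamp gradient values into [-T, T] (Eq. 3)."""
    return np.clip(G, -T, T)

X = np.array([[10, 3, 9],
              [ 2, 8, 1]], dtype=np.uint8)
GH, GV = gradients(X)
print(GH)               # [[ 7 -6] [-6  7]]
print(truncate(GH, 5))  # [[ 5 -5] [-5  5]]
```

Note that `np.clip` implements exactly the three-case definition of trunc_T: values above T map to T, values below −T map to −T, and everything else passes through.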
123
SIViP
In this paper, we calculate the second-order and third-order co-occurrence arrays from the gradient arrays G_H and G_V. The second-order co-occurrence array is parameterized by pairs of adjacent gradients, and the third-order co-occurrence array by triplets of adjacent gradients. Specifically, the second-order co-occurrence arrays on gradients (SCAG) along the horizontal and vertical directions are defined as:

SCAG_H(s, t) = Σ_{i=0}^{n1−1} Σ_{j=0}^{n2−3} ϕ(G_H(i, j), s) × ϕ(G_H(i, j+1), t),
SCAG_V(s, t) = Σ_{i=0}^{n1−3} Σ_{j=0}^{n2−1} ϕ(G_V(i, j), s) × ϕ(G_V(i+1, j), t),   (4)

where s, t ∈ {−T, ..., T}, and ϕ(x, y) = 1 if x = y, and 0 otherwise.

Likewise, the third-order co-occurrence arrays on gradients (TCAG) along the horizontal and vertical directions are defined as:

TCAG_H(s, t, r) = Σ_{i=0}^{n1−1} Σ_{j=0}^{n2−4} ϕ(G_H(i, j), s) × ϕ(G_H(i, j+1), t) × ϕ(G_H(i, j+2), r),
TCAG_V(s, t, r) = Σ_{i=0}^{n1−4} Σ_{j=0}^{n2−1} ϕ(G_V(i, j), s) × ϕ(G_V(i+1, j), t) × ϕ(G_V(i+2, j), r).   (5)

Table 1 Dimensionality of the SCAG feature vector and average percentage of covered gradients for different parameter pairs (Q, T)

(Q, T)    Dimensionality of the SCAG feature vector    Average percentage of covered gradients (%)
(1, 10)   882    70.73
(2, 10)   882    81.23
(4, 10)   882    91.43
(8, 5)    242    90.67
(16, 4)   162    96.05
(32, 4)   162    99.74

The number of elements in TCAG_H and TCAG_V together is 2 × (2T + 1)^3, which increases much faster with a larger truncation factor T. Thus, we further reduce the number of elements in the third-order co-occurrence arrays by averaging the elements of the arrays in the two directions. Specifically, for the third-order co-occurrence array, the final features are calculated as:

TCAG_{s,t,r} = (TCAG_H(s, t, r) + TCAG_V(s, t, r)) / 2   (8)

We denote the feature vector constructed with the third-order co-occurrence array as TCAG, whose dimensionality is thus reduced to (2T + 1)^3.

4 Experiments

In our experiment, the average classification error (ACE) is defined as the evaluation criterion:
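Returning to the feature construction of Sect. 3.3, Eqs. (4), (5), and (8) amount to joint histograms of adjacent truncated gradients. A sketch under that reading, with illustrative function names and truncated gradient arrays assumed as input:

```python
import numpy as np

def scag(G: np.ndarray, T: int, axis: int) -> np.ndarray:
    """Second-order co-occurrence array (Eq. 4): joint histogram of
    adjacent gradient pairs along `axis` (1 = horizontal, 0 = vertical)."""
    if axis == 1:
        a, b = G[:, :-1], G[:, 1:]
    else:
        a, b = G[:-1, :], G[1:, :]
    n = 2 * T + 1
    idx = (a.ravel() + T) * n + (b.ravel() + T)   # shift values into [0, 2T]
    return np.bincount(idx, minlength=n * n).reshape(n, n)

def tcag(GH: np.ndarray, GV: np.ndarray, T: int) -> np.ndarray:
    """Third-order arrays (Eq. 5), averaged over the two directions (Eq. 8)."""
    n = 2 * T + 1
    def triple(G, axis):
        if axis == 1:
            a, b, c = G[:, :-2], G[:, 1:-1], G[:, 2:]
        else:
            a, b, c = G[:-2, :], G[1:-1, :], G[2:, :]
        idx = ((a.ravel() + T) * n + (b.ravel() + T)) * n + (c.ravel() + T)
        return np.bincount(idx, minlength=n ** 3).reshape(n, n, n)
    return (triple(GH, 1) + triple(GV, 0)) / 2.0

# SCAG feature vector: both directions concatenated, 2 * (2T + 1)^2 elements,
# matching the dimensionalities in Table 1 (e.g. T = 10 -> 882, T = 5 -> 242).
T = 5
GH = np.clip(np.array([[7, -2, 0, 3]]), -T, T)   # toy truncated gradients
GV = np.clip(np.array([[1], [-4], [2]]), -T, T)
features = np.concatenate([scag(GH, T, 1).ravel(), scag(GV, T, 0).ravel()])
print(features.shape)  # (242,)
```

The averaged TCAG array has (2T + 1)^3 elements, which for the pair (16, 3) used later in the experiments gives 7^3 = 343 features.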
average detection result. When the quantization factor Q is set to 4, the truncation factor T needs to be set to 10 so as to cover 91.43 % of the gradients, which generates 882 features. When Q is set to 8, T needs to be set only to 5 to cover 90.67 % of the gradients, which generates only 242 features. However, the feature vector extracted with the parameter pair (8, 5) achieves better detection accuracy than that with (4, 10). This tells us that quantization with a suitable factor helps both to extract better features and to reduce the dimensionality of the feature vector.

As shown in Table 3, the TCAG feature vectors extracted with the parameter pair (16, 3) obtain the best average detection result. In addition, the TCAG feature vectors achieve overall better detection results than the SCAG ones. This can be attributed to the third-order co-occurrence array describing the gradient texture better than the second-order one for the purpose of fingerprint liveness detection.

4.4 Comparison with previous methods

The fingerprint liveness detection problem has been researched for more than ten years, and many excellent methods have been proposed to identify the vitality signs of fingerprints with high accuracy. According to the results shown in Tables 2 and 3, we choose the TCAG feature vector with the parameter pair (16, 3) for comparison with the previous methods.

Ghiani et al. [30] investigated the performance of several classical fingerprint liveness detection methods on LivDet11DB in 2012. In 2014, Jia et al. [14] proposed an excellent fingerprint liveness detection method based on multi-scale local binary patterns and tested it on LivDet11DB. Thus, we cite the results reported in [30] and [14] and the best results achieved in LivDet 2011 [24] for performance comparison on LivDet11DB.

Galbally et al. [31] proposed an excellent fingerprint liveness detection method based on image quality and conducted comparison experiments on LivDet09DB. Here, we cite the results reported in [31] and the best results achieved in LivDet 2009 [23] for performance comparison on LivDet09DB. As shown in Tables 4 and 5, the proposed
method achieves better results than the previous methods in general.

Acknowledgements  …ing (KJR1402), Fund of MOE Internet Innovation Platform (KJRP1403), CICAEET, and PAPD fund.
5 Conclusion

References
14. Jia, X., Yang, X., Cao, K., Zang, Y., Zhang, N., Dai, R., Zhu, X., Tian, J.: Multi-scale local binary pattern with filters for spoof fingerprint detection. Inf. Sci. 268, 91–102 (2014)
15. Chen, B., Shu, H., Coatrieux, G., Chen, G., Sun, X., Coatrieux, J.L.: Color image analysis by quaternion-type moments. J. Math. Imaging Vis. 51(1), 124–144 (2014)
16. Li, J., Li, X., Yang, B., Sun, X.: Segmentation-based image copy-move forgery detection scheme. IEEE Trans. Inf. Forensics Secur. 10(3), 507–518 (2015)
17. Pan, Z., Zhang, Y., Kwong, S.: Efficient motion and disparity estimation optimization for low complexity multiview video coding. IEEE Trans. Broadcast. 61(2), 1–1 (2015)
18. Zheng, Y., Byeungwoo, J., Xu, D., Wu, Q.M.J., Zhang, H.: Image segmentation by generalized hierarchical fuzzy c-means algorithm. J. Intell. Fuzzy Syst. 28(2), 4024–4028 (2015)
19. Wen, X., Shao, L., Xue, Y., Fang, W.: A rapid learning algorithm for vehicle classification. Inf. Sci. 295, 395–406 (2015)
20. Wei, S.-D., Lai, S.-H.: Robust and efficient image alignment based on relative gradient matching. IEEE Trans. Image Process. 15(10), 2936–2943 (2006)
21. Xia, Z., Wang, X., Sun, X., Liu, Q., Xiong, N.: Steganalysis of LSB matching using differences between nonadjacent pixels. Multimed. Tools Appl. 75(4), 1947–1962 (2016)
22. Xia, Z., Wang, X., Sun, X., Wang, B.: Steganalysis of least significant bit matching using multi-order differences. Secur. Commun. Netw. 7(8), 1283–1291 (2014)
23. Marcialis, G.L., Lewicke, A., Tan, B., Coli, P., Grimberg, D., Congiu, A., Tidu, A., Roli, F., Schuckers, S.: First international fingerprint liveness detection competition—LivDet 2009. In: Foggia, P., Sansone, C., Vento, M. (eds.) Image Analysis and Processing—ICIAP 2009, pp. 12–23. Springer, Berlin (2009)
24. Yambay, D., Ghiani, L., Denti, P., Marcialis, G.L., Roli, F., Schuckers, S.: LivDet 2011—fingerprint liveness detection competition 2011. In: IAPR International Conference on Biometrics, pp. 208–215. IEEE (2012)
25. Gu, B., Sheng, V.S., Tay, K.Y., Romano, W., Li, S.: Incremental support vector learning for ordinal regression. IEEE Trans. Neural Netw. Learn. Syst. 26(7), 1403–1416 (2014)
26. Gu, B., Sheng, V.S., Wang, Z., Ho, D., Osman, S., Li, S.: Incremental learning for ν-support vector regression. Neural Netw. 67(C), 140–150 (2015)
27. Gu, B., Sheng, V.S.: A robust regularization path algorithm for ν-support vector classification. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–8 (2016)
28. Gu, B., Sun, X., Sheng, V.S.: Structural minimax probability machine. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–11 (2016)
29. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27 (2011)
30. Ghiani, L., Denti, P., Marcialis, G.L.: Experimental results on fingerprint liveness detection. In: Perales, F.J., Fisher, R.B., Moeslund, T.B. (eds.) Articulated Motion and Deformable Objects, pp. 210–218. Springer, Berlin (2012)
31. Galbally, J., Marcel, S., Fierrez, J.: Image quality assessment for fake biometric detection: application to iris, fingerprint and face recognition. IEEE Trans. Image Process. 23, 710–724 (2014)
32. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002)
33. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 29(1), 51–59 (1996)
34. Tan, B., Schuckers, S.: Spoofing protection for fingerprint scanner by fusing ridge signal and valley noise. Pattern Recognit. 43(8), 2845–2857 (2010)
35. Nikam, S., Agarwal, S.: Fingerprint liveness detection using curvelet energy and co-occurrence signatures. In: Fifth International Conference on Computer Graphics, Imaging and Visualisation, pp. 217–222. IEEE (2008)
36. Tan, B., Schuckers, S.: Liveness detection for fingerprint scanners based on the statistics of wavelet signal processing. In: Conference on Computer Vision and Pattern Recognition Workshop, pp. 26–26. IEEE (2006)
37. Marasco, E., Sansone, C.: Combining perspiration- and morphology-based static features for fingerprint liveness detection. Pattern Recognit. Lett. 33(9), 1148–1156 (2012)
38. Moon, Y.S., Chen, J., Chan, K., So, K., Woo, K.: Wavelet based fingerprint liveness detection. Electron. Lett. 41(20), 1112–1113 (2005)