
SIViP

DOI 10.1007/s11760-016-0936-z

ORIGINAL PAPER

Fingerprint liveness detection using gradient-based texture features

Zhihua Xia¹ · Rui Lv¹ · Yafeng Zhu¹ · Peng Ji¹ · Huiyu Sun² · Yun-Qing Shi³

Received: 1 December 2015 / Revised: 16 June 2016 / Accepted: 8 July 2016


© Springer-Verlag London 2016

Abstract  Fingerprint-based recognition systems have been increasingly deployed in various applications nowadays. However, such recognition systems can be spoofed by an accurate imitation of a live fingerprint, such as an artificially made fingerprint. In this paper, we propose a novel software-based fingerprint liveness detection method which achieves good detection accuracy. We regard fingerprint liveness detection as a two-class classification problem and construct co-occurrence arrays from image gradients to extract features. In doing so, a quantization operation is first conducted on the images. Then, the horizontal and vertical gradients at each pixel are calculated, and the gradients of large absolute values are truncated into a reduced range. Finally, the second-order and the third-order co-occurrence arrays are constructed from the truncated gradients, and the elements of the co-occurrence arrays are directly used as features. The second-order and the third-order co-occurrence array features are separately utilized to train support vector machine classifiers on the two publicly available databases used in the Fingerprint Liveness Detection Competitions 2009 and 2011. The experimental results demonstrate that the features extracted with the third-order co-occurrence array achieve better detection accuracy than those with the second-order co-occurrence array and outperform the state-of-the-art methods.

Keywords  Biometrics · Fingerprint liveness detection · Image texture · Image gradient · Co-occurrence array

Corresponding author: Zhihua Xia (xia_zhihua@163.com)

1 Jiangsu Engineering Center of Network Monitoring, Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, College of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Computer Science, New York University, New York, NY 10012, USA
3 Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA

1 Introduction

Fingerprint has been used as a biometric characteristic in forensic science and practice for hundreds of years. Nowadays, in the information age, fingerprint finds even more applications because of its uniqueness and ease of collection. However, fingerprint-based systems are vulnerable to spoof attacks, as a spoof fingerprint can easily be made from cheap materials such as gelatin and silicone [1,2]. Attackers could clandestinely retrieve a user's fingerprint and make a spoof one to gain illegal access. Conversely, a user may make a spoof fingerprint of his own finger to deceive an attendance system.

To address this problem, many fingerprint liveness detection techniques have been proposed to judge whether a fingerprint image was captured from an actual user or not. The existing methods can be divided into two classes: hardware-based methods applied at the acquisition stage and software-based methods applied at the processing stage [1,2].

The hardware-based methods try to add new hardware components to obtain life signs of the finger, such as temperature, pulse, pulse oximetry, conductivity, and blood pressure. Spoof fingers can then be discriminated from real ones by analyzing these signs. The hardware-based solutions can prevent spoof attacks to some degree. However, this type of technique requires the support of additional devices, which increases the overall expenses of the fingerprint


identification system. On the other hand, the software-based methods add liveness detection capability to the existing systems by updating the software architecture. The software-based methods distinguish live fingers from spoof ones by analyzing the images obtained from the existing fingerprint sensors, without the need for new hardware. This type of method is less expensive and is more flexible for future adaptations.

In this paper, a novel software-based fingerprint liveness detection method is proposed. We regard liveness detection as a two-class classification problem, i.e., classifying a test fingerprint image as either a live or a spoof one.

The remainder of the paper is organized as follows. In Sect. 2, we review related works on fingerprint liveness detection. Section 3 describes the feature extraction process. In Sect. 4, we present a brief introduction to the two databases and the support vector machine, conduct comparative experiments, and discuss the results. Finally, we conclude our work and discuss future work in Sect. 5.

2 Related works

The software-based methods show that live and spoof fingerprints can be discriminated by analyzing the features extracted from fingerprint images. Abhyankar and Schuckers [3] developed a method based on multi-resolution texture features and local ridge frequency features. Their texture features include: (1) first-order features, i.e., energy, entropy, median, and variance of the histogram, and (2) second-order features, i.e., cluster shade and cluster prominence of the co-occurrence matrix. Coli et al. [4] claimed that the high-frequency details of spoof fingerprint images are greatly reduced and extracted features from the power spectrum for the classification. Nikam and Agarwal proposed several liveness detection methods based on texture analysis of the fingerprint images. The authors extracted many distinguishable features through various texture measures such as the Gabor filters [5], the curvelet transform [6], the ridgelet transform [7], the wavelet transform [8], and the gray-level co-occurrence matrices combined with the wavelet transform [9]. Jiang and Liu [10] proposed a spoof fingerprint detection method based on the difference co-occurrence matrix. All of these features can successfully address the fingerprint liveness detection problem to some degree. Jin et al. [11] proposed a fingerprint liveness detection method based on the band-selective Fourier spectrum. The authors revealed that live fingerprint images show a stronger Fourier spectrum in the ring patterns than spoof ones and classified live and spoof fingerprint images by analyzing the band-selective Fourier spectral energies. Tan and Schuckers [12] revealed that artificial fingers have a distinct noise distribution due to the material's properties and extracted features by analyzing the noise distribution along the valley structure. Ghiani et al. [13] utilized binarized statistical image features, which are quite similar to the local binary pattern and local phase quantization descriptors. Gragnaniello et al. proposed a local-descriptor-based method to handle the fingerprint liveness detection problem; the features are extracted from both the spatial and frequency domains. Jia et al. [14] considered that the multi-scale local binary pattern (LBP) could reflect the texture of fingerprint images more adequately than the original LBP and proposed a fingerprint liveness detection method using two kinds of multi-scale LBP. Their method achieves good detection accuracy.

3 Feature extraction

In this paper, fingerprint liveness detection is considered as a two-class classification problem. Feature extraction is a crucial step for classification problems [15–19]. Here, X = (X_{i,j}) ∈ {0, ..., 255}^{n_1 × n_2} represents an 8-bit grayscale image of size n_1 × n_2. The symbol X_{i,j} denotes the grayscale value of the pixel located at (i, j), and n_1 and n_2 denote the height and width of the image, respectively. The feature extraction process includes the following four steps. Firstly, the image is quantized with a quantization factor. Secondly, the horizontal and vertical gradients are calculated from the adjacent quantized pixels. Thirdly, the gradients of large absolute values are truncated into a reduced range. Finally, the co-occurrence arrays are constructed from the gradients. The elements of the co-occurrence arrays are directly used as the discriminative features.

3.1 Quantization

In the process of feature extraction, the image pixel values are firstly quantized as:

X_{i,j} ← X_{i,j} / Q    (1)

where Q ≥ 1 is a quantization factor. The quantization operation causes a loss of image information, but it does not affect the overall texture of a fingerprint image. The quantization operation can largely reduce the dynamic range of X_{i,j} and thus helps to reduce the dimensionality of the feature vector. Moreover, the experimental results presented in Sect. 4 reveal that the feature vectors extracted from the quantized images achieve better detection accuracy than those from the unquantized ones when a suitable quantization factor is used.
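To make the quantization step concrete, here is a minimal sketch (our own illustration, assuming NumPy and integer floor division as the rounding rule, which Eq. (1) leaves implicit; all names are illustrative, not the authors' implementation):

    import numpy as np

    def quantize(image: np.ndarray, Q: int) -> np.ndarray:
        """Quantize an 8-bit grayscale image as in Eq. (1): X <- X / Q.

        Assumes integer (floor) division; any Q >= 1 shrinks the dynamic
        range while leaving the overall ridge/valley texture intact.
        """
        if Q < 1:
            raise ValueError("Quantization factor Q must be >= 1")
        return image.astype(np.int32) // Q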


Fig. 1 The histograms of horizontal gradients calculated from quantized fingerprint images, Q = 1, 2, 4, 8, 16, 32

3.2 Image gradient and truncation operation

Image gradient measures a directional change in the intensity of an image, which can be used for effective texture feature extraction [20,21]. Here, the horizontal and vertical gradient arrays G_H and G_V are defined by calculating the differences between adjacent pixels as:

G_H(i, j) = X_{i,j} − X_{i,j+1},   for i ∈ {0, ..., n_1 − 1}, j ∈ {0, ..., n_2 − 2};
G_V(i, j) = X_{i,j} − X_{i+1,j},   for i ∈ {0, ..., n_1 − 2}, j ∈ {0, ..., n_2 − 1}.    (2)

In our method, the elements of the co-occurrence array are directly used as features, which are arranged to compose the feature vector. The dimensionality of the feature vector depends on the dynamic range of the gradients. The quantization operation decreases the dynamic range of the gray values and accordingly decreases the dynamic range of the gradients. As shown in Fig. 1, the gradients calculated from the quantized images have a narrower dynamic range than those from the original images. A large quantization factor will reduce the dynamic range of the gradients and thus help to reduce the dimensionality of the feature vector. Specifically, the histogram of the gradients can be approximated by a Laplacian distribution. Thus, we can truncate the gradients to a small range [−T, T] without losing much useful information. We can cover a major part of the gradients with a smaller T after quantizing the pixels with a larger quantization factor Q. The truncation operation is defined as:

G(i, j) ← trunc_T(G(i, j))    (3)

where G(i, j) before truncating denotes the gradient calculated according to Formula (2) along the horizontal or vertical direction. Here, if G(i, j) > T, then trunc_T(G(i, j)) = T; if G(i, j) < −T, then trunc_T(G(i, j)) = −T; and if G(i, j) ∈ [−T, T], then trunc_T(G(i, j)) = G(i, j).
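As a concrete illustration of Eqs. (2) and (3), the following sketch (a non-authoritative NumPy version with our own function names) computes the horizontal and vertical gradient arrays from a quantized image and clips them to [−T, T]:

    import numpy as np

    def gradients(image: np.ndarray):
        """Horizontal and vertical gradient arrays of Eq. (2)."""
        X = image.astype(np.int32)
        G_H = X[:, :-1] - X[:, 1:]   # X[i, j] - X[i, j+1]
        G_V = X[:-1, :] - X[1:, :]   # X[i, j] - X[i+1, j]
        return G_H, G_V

    def truncate(G: np.ndarray, T: int) -> np.ndarray:
        """Truncation of Eq. (3): clip every gradient into [-T, T]."""
        return np.clip(G, -T, T)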


3.3 Co-occurrence array

The co-occurrence matrix (array) is a widely used tool for characterizing image texture. Often, the gray-level co-occurrence matrix (GLCM) is defined on the gray values of an image and is typically large in dimensionality [22]. For an 8-bit grayscale image, the general GLCM has 256 × 256 = 65536 elements, which cannot be directly used as features because the dimensionality is too high. In order to extract a compact feature vector, various metrics of the matrix, such as energy, entropy, contrast, and correlation, are often further applied, as shown in [3]. In the proposed method, the co-occurrence arrays are constructed from the gradients between adjacent pixels, and the dimensionality of the co-occurrence array can be efficiently reduced by the quantization and truncation operations. Thus, the elements of the co-occurrence array can be directly used as features.

In this paper, we calculate second-order and third-order co-occurrence arrays from the gradient arrays G_H and G_V. Here, the second-order co-occurrence array is parameterized by pairs of adjacent gradients, and the third-order co-occurrence array is parameterized by triplets of adjacent gradients. Specifically, the second-order co-occurrence arrays on gradients (SCAG) along the horizontal and vertical directions are defined as:

SCAG_H(s, t) = Σ_{i=0}^{n_1−1} Σ_{j=0}^{n_2−3} φ(G_H(i, j), s) · φ(G_H(i, j+1), t),
SCAG_V(s, t) = Σ_{i=0}^{n_1−3} Σ_{j=0}^{n_2−1} φ(G_V(i, j), s) · φ(G_V(i+1, j), t),    (4)

where s, t ∈ {−T, ..., T}, and φ(x, y) = 1 if x = y and φ(x, y) = 0 if x ≠ y. Likewise, the third-order co-occurrence arrays on gradients (TCAG) along the horizontal and vertical directions are defined as:

TCAG_H(s, t, r) = Σ_{i=0}^{n_1−1} Σ_{j=0}^{n_2−4} φ(G_H(i, j), s) · φ(G_H(i, j+1), t) · φ(G_H(i, j+2), r),
TCAG_V(s, t, r) = Σ_{i=0}^{n_1−4} Σ_{j=0}^{n_2−1} φ(G_V(i, j), s) · φ(G_V(i+1, j), t) · φ(G_V(i+2, j), r),    (5)

where s, t, r ∈ {−T, ..., T}. In order to eliminate the effect caused by the size of the image, the elements of the co-occurrence arrays are normalized as:

SCAG_H(s, t) ← SCAG_H(s, t) / Σ_{s=−T}^{T} Σ_{t=−T}^{T} SCAG_H(s, t),
SCAG_V(s, t) ← SCAG_V(s, t) / Σ_{s=−T}^{T} Σ_{t=−T}^{T} SCAG_V(s, t),    (6)

TCAG_H(s, t, r) ← TCAG_H(s, t, r) / Σ_{s=−T}^{T} Σ_{t=−T}^{T} Σ_{r=−T}^{T} TCAG_H(s, t, r),
TCAG_V(s, t, r) ← TCAG_V(s, t, r) / Σ_{s=−T}^{T} Σ_{t=−T}^{T} Σ_{r=−T}^{T} TCAG_V(s, t, r).    (7)

For the second-order co-occurrence array, all the elements of SCAG_H and SCAG_V are used as features. We denote the feature vector constructed with the second-order co-occurrence array as SCAG, whose dimensionality is 2 × (2T + 1)². For the third-order co-occurrence array, the total number of elements in TCAG_H and TCAG_V is 2 × (2T + 1)³, which increases much faster with a larger truncation factor T. Thus, we further reduce the number of elements in the third-order co-occurrence arrays by averaging the elements of the arrays in the two directions. Specifically, for the third-order co-occurrence array, the final features are calculated as:

TCAG_{s,t,r} = ( TCAG_H(s, t, r) + TCAG_V(s, t, r) ) / 2    (8)

We denote the feature vector constructed with the third-order co-occurrence array as TCAG, whose dimensionality is reduced to (2T + 1)³.

Table 1 Dimensionality of the SCAG feature vector and average percentages of the covered gradients when different parameter pairs (Q, T) are chosen

(Q, T)     Dimensionality of the SCAG feature vector    Average percentage of covered gradients (%)
(1, 10)    882    70.73
(2, 10)    882    81.23
(4, 10)    882    91.43
(8, 5)     242    90.67
(16, 4)    162    96.05
(32, 4)    162    99.74
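The construction of Eqs. (4)–(8) can be sketched as follows (again our own illustrative NumPy code, not the authors' implementation; the counting strategy via index accumulation is our choice):

    import numpy as np

    def co_occurrence(G: np.ndarray, order: int, T: int, axis: int) -> np.ndarray:
        """Normalized co-occurrence array of `order` adjacent truncated gradients.

        G    : gradient array already clipped to [-T, T]
        axis : 1 for G_H (neighbours along a row), 0 for G_V (along a column)
        Returns an array of shape (2T+1,)*order whose entries sum to 1 (Eqs. 4-7).
        """
        shifted = []
        for k in range(order):
            if axis == 1:
                shifted.append(G[:, k:G.shape[1] - order + 1 + k])
            else:
                shifted.append(G[k:G.shape[0] - order + 1 + k, :])
        # Each row of `samples` is one tuple of adjacent gradients, shifted to 0..2T.
        samples = np.stack([s.ravel() for s in shifted], axis=1) + T
        counts = np.zeros((2 * T + 1,) * order)
        np.add.at(counts, tuple(samples.T), 1)
        return counts / counts.sum()

    def scag_features(G_H, G_V, T):
        """SCAG feature vector (Eqs. 4, 6): 2 x (2T+1)^2 elements."""
        return np.concatenate([co_occurrence(G_H, 2, T, axis=1).ravel(),
                               co_occurrence(G_V, 2, T, axis=0).ravel()])

    def tcag_features(G_H, G_V, T):
        """TCAG feature vector (Eqs. 5, 7, 8): directional arrays averaged, (2T+1)^3 elements."""
        tc_h = co_occurrence(G_H, 3, T, axis=1)
        tc_v = co_occurrence(G_V, 3, T, axis=0)
        return ((tc_h + tc_v) / 2).ravel()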

4 Experiments

In our experiments, the average classification error (ACE) is used as the evaluation criterion:

ACE = (FAR + FRR) / 2    (9)

where the false accept rate (FAR) is the proportion of spoof fingerprints that are incorrectly accepted, and the false reject rate (FRR) is the proportion of real fingerprints that are incorrectly rejected.

The performances achieved by using the SCAG and TCAG features are tested separately on two databases, named LivDet09DB [23] and LivDet11DB [24]. In this section, we firstly present a brief introduction to the two databases and the support vector machine (SVM). Secondly, we conduct comparative experiments on LivDet11DB to choose a proper quantization factor Q and a proper truncation factor T. Finally, the performance of the proposed method with the chosen parameters is compared with that achieved by the state-of-the-art works.
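For reference, the ACE criterion of Eq. (9) can be computed from a classifier's decisions as in the short sketch below, assuming the label convention 1 = live and 0 = spoof (our choice, not specified in the paper):

    import numpy as np

    def average_classification_error(y_true, y_pred):
        """ACE of Eq. (9), with 1 = live and 0 = spoof (our label convention)."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        far = np.mean(y_pred[y_true == 0] == 1)  # spoofs incorrectly accepted
        frr = np.mean(y_pred[y_true == 1] == 0)  # live fingers incorrectly rejected
        return (far + frr) / 2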


4.1 Databases

The proposed detection methods are tested on two databases, named LivDet09DB and LivDet11DB. LivDet09DB is the database used in the Fingerprint Liveness Detection Competition 2009 [23]. It consists of fingerprint images taken with three different sensors: Biometrika, Identix, and Crossmatch. The spoof fingers are made of three different materials: silicone, gelatin, and playdoh. LivDet11DB is the database used in the Fingerprint Liveness Detection Competition 2011 [24]. It consists of images from four different sensors: Biometrika, Digital Persona, Italdata, and Sagem. The spoof fingers are made of six different materials: gelatin, latex, playdoh, silicone, ecoflex, and wood glue. All of the fingerprint images are transformed into grayscale images before use. Each of the datasets has been divided into two non-overlapping parts, a training set and a testing set, which are used respectively in the training and testing stages of the classification. For both databases, the images captured by each sensor are tested separately.

4.2 Support vector machine (SVM)

An SVM is utilized to train the classifier with the feature vectors in this paper. Treating the data as two sets of points in an n-dimensional space, the SVM builds a separating hyperplane via Lagrangian multipliers to differentiate the negative data points from the positive ones [25–28]. Intuitively, a good separation is achieved when the separating hyperplane has the largest distance to the boundary points of both classes.

LIBSVM [29] is a free software package for support vector classification. LIBSVM implements four basic kernels, among which the radial basis function (RBF) kernel, e^{−γ‖x_i − x_j‖²}, is widely suggested as the best choice. LIBSVM also provides a tool named "Cross-validation and Grid-search" to find an appropriate penalty parameter C and kernel parameter γ for the RBF kernel. With this tool, one can firstly set a relatively large range in which to search for the parameters. Then, after running for several rounds, one can estimate where better parameters could be found and shrink the search range to refine the search. In this paper, LIBSVM with the RBF kernel is used to train the classifiers in the experiments, and the "Cross-validation and Grid-search" tool is utilized to search for the penalty parameter C and the kernel parameter γ. In our experiments, most of the parameters are found in [2^5, 2^30] for C and in [2^−5, 2^10] for γ.

4.3 Quantization factor Q and truncation factor T

In order to obtain a compact feature vector and good detection accuracy, the selection of the quantization factor Q and the truncation factor T requires proper attention. In this subsection, we report the comparison experiments conducted on LivDet11DB to figure out the relatively better choices. First, we set Q = 1, 2, 4, 8, 16, 32 to quantize the images. For each quantization step, we compute the average percentage of gradients that fall into the range [−T, T], which then helps us to choose the truncation factor T.

In our method, the dimensionality of the SCAG feature vector is equal to 2 × (2T + 1)². Generally, we want to take as many gradients into consideration as possible when constructing the feature vector. However, a larger T leads to a feature vector of larger dimensionality. By compromising between the percentage of covered gradients and the dimensionality of the SCAG feature vector, we choose T = 10 when Q is set to 1, 2, and 4, T = 5 when Q is set to 8, and T = 4 when Q is set to 16 and 32. For each parameter pair (Q, T), the dimensionality of the SCAG feature vector and the average percentage of covered gradients are listed in Table 1. Note that it is not guaranteed that the parameters used here are the best choices.

The dimensionality of the TCAG feature vector is equal to (2T + 1)³ and increases much faster with a larger T than that of the SCAG feature vector. Therefore, we choose T = 3 for all Q = 1, 2, 4, 8, 16, 32. Thus, the dimensionalities of all the TCAG feature vectors are equal to 343.

Each parameter pair (Q, T) produces a different feature vector. Each of these feature vectors is used to train an SVM classifier separately. The detection accuracies of the classifiers are listed in Tables 2 and 3.
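A rough sketch of this training-and-evaluation loop is given below. It substitutes scikit-learn's SVC and GridSearchCV (SVC wraps LIBSVM) for the LIBSVM command-line tools used in the paper, and the parameter grid shown is only an illustrative assumption, not the exact search range reported above:

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    def train_and_evaluate(X_train, y_train, X_test, y_test):
        """Fit an RBF-kernel SVM with a C/gamma grid search and report ACE (Eq. 9)."""
        # Illustrative grid only; the paper reports C roughly in [2^5, 2^30]
        # and gamma in [2^-5, 2^10], found with LIBSVM's grid-search tool.
        grid = {"C": 2.0 ** np.arange(5, 16), "gamma": 2.0 ** np.arange(-5, 6)}
        clf = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
        clf.fit(X_train, y_train)
        y_test = np.asarray(y_test)
        y_pred = clf.predict(X_test)
        far = np.mean(y_pred[y_test == 0] == 1)  # spoof fingerprints accepted as live
        frr = np.mean(y_pred[y_test == 1] == 0)  # live fingerprints rejected
        return (far + frr) / 2

In the paper's protocol, one such classifier is trained per sensor and per parameter pair (Q, T) on the corresponding training split.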

Table 2 The average classification error (ACE) of the SCAG feature vector calculated with different parameter pairs (Q, T)

Parameter pair (Q, T)    Average classification error (ACE) (%)
                         LivDet11DB#1    LivDet11DB#2    LivDet11DB#3    LivDet11DB#4    Average
(1, 10)                  15.70           15.40           16.50            5.50           15.78
(2, 10)                  12.25           17.50           22.35            5.45           14.39
(4, 10)                   8.95           13.65           20.95            4.91           12.12
(8, 5)                    8.00           14.15           16.90            4.86           10.98
(16, 4)                   8.45           15.35           14.85            5.26           10.98
(32, 4)                  10.45           15.90           18.15            7.56           13.02


Table 3 The average classification error (ACE) of the TCAG feature vector calculated with different parameter pairs (Q, T)

Parameter pair (Q, T)    Average classification error (ACE) (%)
                         LivDet11DB#1    LivDet11DB#2    LivDet11DB#3    LivDet11DB#4    Average
(1, 3)                   12.10            5.80           29.25            3.39           12.63
(2, 3)                    8.40            5.00           26.40            4.13           10.98
(4, 3)                    7.60            3.10           22.50            4.03            9.31
(8, 3)                    6.45            3.80           13.45            4.13            6.96
(16, 3)                   6.70            4.75           11.75            3.34            6.63
(32, 3)                   9.10            6.50           12.75            4.91            8.32

As shown in Table 2, the SCAG feature vectors extracted with the parameter pairs (8, 5) and (16, 4) achieve the best average detection results. When the quantization factor Q is set to 4, the truncation factor T needs to be set to 10 so as to cover 91.43 % of the gradients, which generates 882 features. When the quantization factor Q is set to 8, the truncation factor T only needs to be set to 5 so as to cover 90.67 % of the gradients, which generates only 242 features. Nevertheless, the feature vector extracted with the parameter pair (8, 5) achieves better detection accuracy than that with (4, 10). This tells us that the quantization operation with a suitable factor helps to extract better features and to reduce the dimensionality of the feature vector.

As shown in Table 3, the TCAG feature vectors extracted with the parameter pair (16, 3) obtain the best average detection result. In addition, the TCAG feature vectors achieve overall better detection results than the SCAG ones. This can be attributed to the fact that the third-order co-occurrence array describes the gradient texture better than the second-order one for the fingerprint liveness detection purpose.

4.4 Comparison with previous methods

The fingerprint liveness detection problem has been researched for more than ten years, and many excellent methods have been proposed to identify the vitality signs of fingerprints with high accuracy. According to the results shown in Tables 2 and 3, we choose the TCAG feature vector with the parameter pair (16, 3) to compare with the previous methods.

Ghiani et al. [30] investigated the performance of several classical fingerprint liveness detection methods on LivDet11DB in 2012. In 2014, Jia et al. [14] proposed an excellent fingerprint liveness detection method based on the multi-scale local binary pattern and tested their method on LivDet11DB. Thus, we cite the results reported in [30] and [14] and the best results achieved in LivDet 2011 [24] for performance comparison on LivDet11DB.

Galbally et al. [31] proposed an excellent fingerprint liveness detection method based on image quality and conducted comparison experiments on LivDet09DB. Here, we cite the results reported in [31] and the best results achieved in LivDet 2009 [23] for performance comparison on LivDet09DB.

Table 4 Performance comparison in terms of average classification error (ACE) on LivDet11DB

Methods                                  Average classification error ACE (%)
                                         LivDet11DB#1    LivDet11DB#2    LivDet11DB#3    LivDet11DB#4    Average
TCAG with (16, 3)                         6.7             4.75           11.75            3.34            6.635
MSLBP1 [14]                               7.3             2.5            14.8             5.3             7.475
MSLBP2 [14]                              10.6             6.7            12.6             5.6             8.875
MLBP [32] reported in [14]               10.8             7.1            16.6             6.4            10.225
Original LBP [33] reported in [14]       13.0            10.8            24.1            11.5            14.85
Tan's method [34] reported in [14]       43.8            18.2            29.6            24.7            29.075
Valleys wavelet [12] reported in [30]    29.0            13.0            23.6            28.0            23.4
Curvelet GLCM [35] reported in [30]      22.9            18.3            30.7            28.0            24.975
Power spectrum [4] reported in [30]      30.6            27.1            42.8            31.5            33
Wavelet energy [8] reported in [30]      50.2            14.0            46.8            22.0            33.25
Curvelet energy [35] reported in [30]    45.2            21.9            47.9            28.5            35.875
Ridges wavelet [36] reported in [30]     38.8            27.5            56.9            20.5            35.925
Best result in LivDet 2011 [24]          20.0            36.1            21.8            13.8            22.925

Table 5 Performance comparison in terms of average classification error (ACE) on LivDet09DB

Methods                                  Average classification error ACE (%)
                                         LivDet09DB#1    LivDet09DB#2    LivDet09DB#3    Average
TCAG with (16, 3)                        11.3             5.4             2.0             6.2
IQA-based [31]                           12.8            10.7             1.2             8.2
Best result in LivDet 2009 [23]          18.2             9.4             2.8            10.1
Marasco et al. [37]                      12.6            15.2             9.7            12.5
Moon et al. [38] reported in [37]        23.0            23.5            38.2            28.2
Nikam et al. [6] reported in [37]        28.3            18.7            30.3            25.8
Abhyankar et al. [3] reported in [37]    31.7            31.5            47.2            36.8

As shown in Tables 4 and 5, the proposed method achieves better results than the previous methods in general.

5 Conclusion

In this paper, we regard fingerprint liveness detection as a two-class classification problem and present a novel software-based fingerprint liveness detection method which achieves good detection accuracy. For the first time, the elements of a co-occurrence array constructed from image gradients have been used for the fingerprint liveness detection purpose. In doing so, firstly, a quantization operation is applied to reduce the dynamic range of the pixel values. Secondly, image gradients are calculated from adjacent quantized pixels along the horizontal and vertical directions. Thirdly, the second-order and third-order co-occurrence arrays are respectively constructed from the truncated gradients. The elements of the co-occurrence arrays are directly used as features. The second-order and the third-order co-occurrence array features are separately used to train SVM classifiers on two popularly utilized databases. The experimental results have demonstrated that the proposed methods outperform many state-of-the-art methods in general.

The proposed method may be further improved. Firstly, image gradients are measured by the differences between adjacent pixels in this paper, which is a simple approach; more sophisticated methods could be used to measure the image gradient better. Secondly, using a combination of various features calculated with different parameters is likely to achieve better performance. Thirdly, our experimental results have shown that the third-order co-occurrence array features achieve better detection accuracy than the second-order ones; better results might be obtained by using higher-order co-occurrence arrays, although one may then face a larger feature dimensionality. These will be our future research goals.

Acknowledgments This work is supported by the NSFC (61173141, U1536206, 61232016, U1405254, 61373133, 61502242, 61572258), BK20150925, Fund of Jiangsu Engineering Center of Network Monitoring (KJR1402), Fund of MOE Internet Innovation Platform (KJRP1403), CICAEET, and PAPD fund.

References

1. Sousedik, C., Busch, C.: Presentation attack detection methods for fingerprint recognition systems: a survey. IET Biom. 3(4), 219–233 (2014)
2. Al-Ajlan, A.: Survey on fingerprint liveness detection. In: International Workshop on Biometrics and Forensics, pp. 1–5, IEEE (2013)
3. Abhyankar, A., Schuckers, S.: Fingerprint liveness detection using local ridge frequencies and multiresolution texture analysis techniques. In: IEEE International Conference on Image Processing, pp. 321–324, IEEE (2006)
4. Coli, P., Marcialis, G.L., Roli, F.: Power spectrum-based fingerprint vitality detection. In: IEEE Workshop on Automatic Identification Advanced Technologies, pp. 169–173, IEEE (2007)
5. Nikam, S.B., Agarwal, S.: Gabor filter-based fingerprint anti-spoofing. In: Advanced Concepts for Intelligent Vision Systems, pp. 1103–1114, Springer (2008)
6. Nikam, S.B., Agarwal, S.: Curvelet-based fingerprint anti-spoofing. Signal Image Video Process. 4(1), 75–87 (2010)
7. Nikam, S.B., Agarwal, S.: Ridgelet-based fake fingerprint detection. Neurocomputing 72(10), 2491–2506 (2009)
8. Nikam, S.B., Agarwal, S.: Texture and wavelet-based spoof fingerprint detection for fingerprint biometric systems. In: First International Conference on Emerging Trends in Engineering and Technology, pp. 675–680, IEEE (2008)
9. Nikam, S.B., Agarwal, S.: Wavelet energy signature and GLCM features-based fingerprint anti-spoofing. In: International Conference on Wavelet Analysis and Pattern Recognition, vol. 2, pp. 717–723, IEEE (2008)
10. Jiang, Y., Liu, X.: Spoof fingerprint detection based on co-occurrence matrix. Int. J. Signal Process. Image Process. Pattern Recognit. 8(8), 373–384 (2015)
11. Jin, C., Kim, H., Elliott, S.: Liveness detection of fingerprint based on band-selective Fourier spectrum. In: Nam, K.-H., Rhee, G. (eds.) Information Security and Cryptology-ICISC 2007, pp. 168–179. Springer, Berlin (2007)
12. Tan, B., Schuckers, S.: New approach for liveness detection in fingerprint scanners based on valley noise analysis. J. Electron. Imaging 17(1), 011009 (2008)
13. Ghiani, L., Hadid, A., Marcialis, G.L., Roli, F.: Fingerprint liveness detection using binarized statistical image features. In: IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems, pp. 1–6, IEEE (2013)

14. Jia, X., Yang, X., Cao, K., Zang, Y., Zhang, N., Dai, R., Zhu, X., Tian, J.: Multi-scale local binary pattern with filters for spoof fingerprint detection. Inf. Sci. 268, 91–102 (2014)
15. Chen, B., Shu, H., Coatrieux, G., Chen, G., Sun, X., Coatrieux, J.L.: Color image analysis by quaternion-type moments. J. Math. Imaging Vis. 51(1), 124–144 (2014)
16. Li, J., Li, X., Yang, B., Sun, X.: Segmentation-based image copy-move forgery detection scheme. IEEE Trans. Inf. Forensics Secur. 10(3), 507–518 (2015)
17. Pan, Z., Zhang, Y., Kwong, S.: Efficient motion and disparity estimation optimization for low complexity multiview video coding. IEEE Trans. Broadcast. 61(2), 1–1 (2015)
18. Zheng, Y., Byeungwoo, J., Xu, D., Wu, Q.M.J., Zhang, H.: Image segmentation by generalized hierarchical fuzzy c-means algorithm. J. Intell. Fuzzy Syst. 28(2), 4024–4028 (2015)
19. Wen, X., Shao, L., Xue, Y., Fang, W.: A rapid learning algorithm for vehicle classification. Inf. Sci. 295, 395–406 (2015)
20. Wei, S.-D., Lai, S.-H.: Robust and efficient image alignment based on relative gradient matching. IEEE Trans. Image Process. 15(10), 2936–2943 (2006)
21. Xia, Z., Wang, X., Sun, X., Liu, Q., Xiong, N.: Steganalysis of LSB matching using differences between nonadjacent pixels. Multimed. Tools Appl. 75(4), 1947–1962 (2016)
22. Xia, Z., Wang, X., Sun, X., Wang, B.: Steganalysis of least significant bit matching using multi-order differences. Secur. Commun. Netw. 7(8), 1283–1291 (2014)
23. Marcialis, G.L., Lewicke, A., Tan, B., Coli, P., Grimberg, D., Congiu, A., Tidu, A., Roli, F., Schuckers, S.: First international fingerprint liveness detection competition-LivDet 2009. In: Foggia, P., Sansone, C., Vento, M. (eds.) Image Analysis and Processing-ICIAP 2009, pp. 12–23. Springer, Berlin (2009)
24. Yambay, D., Ghiani, L., Denti, P., Marcialis, G.L., Roli, F., Schuckers, S.: LivDet 2011-fingerprint liveness detection competition 2011. In: IAPR International Conference on Biometrics, pp. 208–215, IEEE (2012)
25. Gu, B., Sheng, V.S., Tay, K.Y., Romano, W., Li, S.: Incremental support vector learning for ordinal regression. IEEE Trans. Neural Netw. Learn. Syst. 26(7), 1403–1416 (2014)
26. Gu, B., Sheng, V.S., Wang, Z., Ho, D., Osman, S., Li, S.: Incremental learning for v-support vector regression. Neural Netw. 67(C), 140–150 (2015)
27. Gu, B., Sheng, V.S.: A robust regularization path algorithm for v-support vector classification. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–8 (2016)
28. Gu, B., Sun, X., Sheng, V.S.: Structural minimax probability machine. IEEE Trans. Neural Netw. Learn. Syst. PP(99), 1–11 (2016)
29. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27 (2011)
30. Ghiani, L., Denti, P., Marcialis, G.L.: Experimental results on fingerprint liveness detection. In: Perales, F.J., Fisher, R.B., Moeslund, T.B. (eds.) Articulated Motion and Deformable Objects, pp. 210–218. Springer, Berlin (2012)
31. Galbally, J., Marcel, S., Fierrez, J.: Image quality assessment for fake biometric detection: application to iris, fingerprint and face recognition. IEEE Trans. Image Process. 23, 710–724 (2014)
32. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002)
33. Ojala, T., Pietikäinen, M., Harwood, D.: A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 29(1), 51–59 (1996)
34. Tan, B., Schuckers, S.: Spoofing protection for fingerprint scanner by fusing ridge signal and valley noise. Pattern Recognit. 43(8), 2845–2857 (2010)
35. Nikam, S., Agarwal, S.: Fingerprint liveness detection using curvelet energy and co-occurrence signatures. In: Fifth International Conference on Computer Graphics, Imaging and Visualisation, pp. 217–222, IEEE (2008)
36. Tan, B., Schuckers, S.: Liveness detection for fingerprint scanners based on the statistics of wavelet signal processing. In: Conference on Computer Vision and Pattern Recognition Workshop, pp. 26–26, IEEE (2006)
37. Marasco, E., Sansone, C.: Combining perspiration- and morphology-based static features for fingerprint liveness detection. Pattern Recognit. Lett. 33(9), 1148–1156 (2012)
38. Moon, Y.S., Chen, J., Chan, K., So, K., Woo, K.: Wavelet based fingerprint liveness detection. Electron. Lett. 41(20), 1112–1113 (2005)
