
J. Vis. Commun. Image R. 21 (2010) 219–231

journal homepage: www.elsevier.com/locate/jvci

A secure digital camera based fingerprint verification system


Bee Yan Hiew a,1, Andrew Beng Jin Teoh b,*, Ooi Shih Yin a,1

a Faculty of Information Science and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75450 Melaka, Malaysia
b School of Electrical and Electronic Engineering, Yonsei University, Seoul, South Korea

* Corresponding author. E-mail addresses: byhiew@mmu.edu.my (B.Y. Hiew), andrew_tbj@yahoo.com, bjteoh@yonsei.ac.kr (Andrew B.J. Teoh), syooi@mmu.edu.my (S.Y. Ooi).
1 Fax: +60 6 2318840.
1047-3203/$ - see front matter © 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.jvcir.2009.12.003

Abstract
Contemporary fingerprint systems use solid flat sensors which require contact of the finger on a platen surface. This often results in several problems, such as image deformation, durability weakening of the sensor, and latent fingerprint issues which can lead to forgery and hygiene problems. On the other hand, biometric characteristics cannot be changed; therefore, the loss of privacy is permanent if they are ever compromised. Coupled with a template protection mechanism, a touch-less fingerprint verification system is therefore even more desirable. In this paper, a secure end-to-end touch-less fingerprint verification system is presented. The fingerprint image captured with a digital camera is first pre-processed via the proposed pre-processing algorithm to reduce the problems appearing in the image. Then, Multispace Random Projections-Support Vector Machine (MRP-SVM) is proposed to secure the fingerprint template while improving system performance.
© 2009 Elsevier Inc. All rights reserved.

Article history: Received 8 April 2009; Accepted 15 December 2009; Available online 24 December 2009.
Keywords: Touch-less based fingerprint system; Template protection; Support Vector Machine; Fingerprint verification; Cancellable biometrics; Random projection; Gabor filters; Principal component analysis

1. Introduction

Most of the fingerprint sensors available today use the touch method. However, the durability of a touch-based fingerprint scanner is weakened if it is used heavily. Additionally, the pressure of the physical contact degrades the quality of touch-based fingerprint images. Besides, a latent fingerprint (the trail of the fingerprint left on the contact surface of the sensor) could lead to fraudulent use and hygiene problems [1]. Conversely, touch-less fingerprint acquisition technology is free from these problems [2]. Hence, touch-less fingerprint recognition is receiving great attention.

For a biometric system, it is known that biometric templates are not revocable; hence their compromise is permanent [3]. Thus, protecting the biometric templates is a major concern [4]. Cancellable biometrics is a concept initiated to secure biometric templates. A good cancellable biometrics formulation must fulfil four requirements [5]: (i) diversity; (ii) reusability; (iii) one-way transformation; (iv) performance.

1.1. Related works

1.1.1. Touch-less fingerprint recognition

Digital Descriptor Systems Inc. (DDSI) has a patent application pending (filed in 1998) for a contact-less fingerprinting system.

Later, DDSI [6] produced the world's very first contact-less fingerprint capture device, which can be integrated with its Fingerprint Matching Solution [7]. The Touchless Sensor Technology (TST) group holds a global patent for touch-less optical recognition of fingers, palms and hands [8]. In 2002, TST Corona-GmbH developed biometric recognition systems using a novel touch-less optical fingerprint scanning technology. Nevertheless, detailed reports concerning both DDSI's and TST's touch-less fingerprint recognition algorithms are not available. Song et al. proposed a pre-processing technique for their designed touch-less sensor [2]. Lee et al. [1] resolved the three-dimensional (3D) to two-dimensional (2D) image mapping problem that arose in the scheme of Song et al. [2] with a strong view difference image rejection method. Pre-processing of fingerprint images captured with a mobile camera has been proposed in [9]. Based on the optical transmittance characteristics of a finger's interior, Sano et al. [10] introduced a contact-less fingerprint authentication device which is able to form images of fingerprint patterns. Most recently, Parziale and Diaz-Santana [11] introduced a new touch-less device which can acquire 3D rolled-equivalent fingerprints. To make 3D touch-less fingerprints interoperable with current Automated Fingerprint Identification Systems (AFIS), Chen et al. [12] proposed an unwrapping algorithm that unfolds the 3D touch-less fingerprint images into 2D representations comparable with legacy rolled fingerprints.

1.1.2. Cancellable biometrics

Ratha et al. introduced the first notion of a cancellable biometrics formulation.


Fig. 1. Flow chart of the proposed touch-less fingerprint recognition: pre-processing, Gabor filter based feature extraction, PCA, MRP, and fingerprint verification using SVM (the last four stages constitute MRP-SVM).

The underlying idea is to distort the biometric image in a repeatable but non-reversible manner, which helps to generate deformed biometric data [4]. Using a polynomial-based secret sharing scheme, Clancy et al. [13] extracted a group of minutia points from the input fingerprint and bound them in a locking set. Then, with the purpose of shadowing the key and maximising the unlocking computational complexity, non-related chaff points were added intentionally. The secret key can only be recovered if there is a considerable overlap between the enrolled and testing fingerprints. Hypothetically, the method has been verified to be secure in protecting the secrecy of the fingerprint. However, it is impractical owing to its high False Reject Rate (FRR) of 20–30%. Another cancellable biometrics approach was introduced by Andrew et al. [14] using an iterated inner product between tokenised pseudo-random numbers (PRN) and the biometric data. However, this formulation suffers in the stolen-token scenario, when the genuine token is stolen and utilised by an impostor to claim to be the genuine user. Savvides et al. [15] encrypted the training images used to synthesise the correlation filter for biometric authentication. They showed that different templates can be obtained from the same biometrics by varying the random convolution kernels; thus, those templates can be cancelled. Ang et al. [16] generated a cancellable fingerprint template through a key-dependent geometric transformation on a minutiae-based fingerprint representation. Nevertheless, the matching accuracy degrades notably in the distorted domain. In [17], the authors brought in the concept of biometric-based tokens that support robust distance computations, offering cryptographic security such that a token can be revoked and replaced with a new one. Most recently, Andrew and Chong [18] introduced Multispace Random Projection (MRP) as a cancellable biometrics approach in the field of face recognition. MRP fulfils the cancellable biometrics requirements stated above. However, its Equal Error Rate (EER) is only around 30% due to the poor classification capability of the matching metric.

1.2. Our approach

A secure digital camera based fingerprint verification system is reported in this paper. The system uses a touch-less method with a digital camera to capture the image. The problems appearing in digital camera acquired fingerprint images are: (i) low ridge-valley contrast; (ii) the depth of field of the camera is small, so some parts of the fingerprint region are in focus while other parts are out of focus; and (iii) motion blurriness. To overcome these problems, the fingerprint images are first pre-processed using the proposed method, as depicted in Fig. 1. The proposed pre-processing comprises skin colour detection, local normalisation, fingerprint segmentation, image enhancement, and core point detection. To protect the template and improve the system performance, a variant of MRP [18], named Multispace Random Projections-Support Vector Machine (MRP-SVM), is proposed. In MRP-SVM, the normalised dot product used in MRP is replaced with a more powerful classifier, the Support Vector Machine (SVM), while the properties of MRP are still retained. As shown in Fig. 1, MRP-SVM is performed after pre-processing via: (i) Gabor filter based feature extraction; (ii) feature compression using Principal Component Analysis (PCA); (iii) fingerprint biometric data concealment through MRP; (iv) SVM verification.

Section 2 provides the details of the proposed system. Section 3 presents the image acquisition, pre-processing results and verification

results, followed by the performance results and the diversity test of MRP. Finally, conclusions are given in Section 4.

2. Touch-less fingerprint verification algorithm

2.1. Pre-processing

Fig. 2 shows the flow chart of the proposed fingerprint pre-processing. First, the captured fingerprint image in Red-Green-Blue (RGB) format is converted to grey scale [0–255]. To reduce the problem of non-uniform lighting, local normalisation is adopted to reduce the intensity variations in the image. The local normalisation of o(x_o, y_o) is computed as follows [19]:

S(x_o, y_o) = \frac{o(x_o, y_o) - m_o(x_o, y_o)}{\sigma_o(x_o, y_o)}

where o(x_o, y_o) is the original image, m_o(x_o, y_o) denotes an estimate of the local mean of o(x_o, y_o), and σ_o(x_o, y_o) denotes an estimate of the local standard deviation. Next, the fingerprint region is segmented from the raw image by applying skin colour detection, adaptive thresholding and morphological processing [20]. After that, the local normalised image is multiplied with the fingerprint binary mask to obtain the segmented image. The resulting image is cropped and enhanced using Short Time Fourier Transform (STFT) analysis [21]. Lastly, the ridge orientation is calculated and the core point is detected from the enhanced image [20,22]. After core point detection, the local normalised image is cropped to a size of 200 × 200 with the core point as the centre. The resulting pre-processed image is used in feature extraction.
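As an illustration of the local normalisation step, a minimal Python sketch is given below. This is not the authors' code: the local mean and standard deviation are estimated here with a simple box filter whose window size `win` is an assumed parameter, since the paper does not state how the local statistics are computed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalise(image, win=15, eps=1e-6):
    """Sketch of local normalisation S = (o - m_o) / sigma_o.

    image : 2-D grey-scale array in [0, 255]
    win   : local window size for the mean/std estimates (assumed value)
    """
    o = image.astype(np.float64)
    local_mean = uniform_filter(o, size=win)          # m_o(x, y)
    local_sq_mean = uniform_filter(o * o, size=win)   # local E[o^2]
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    local_std = np.sqrt(local_var)                    # sigma_o(x, y)
    return (o - local_mean) / (local_std + eps)       # S(x, y)
```

A larger window gives a smoother illumination estimate, whereas a smaller window removes more of the low-frequency lighting variation.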

Fig. 2. Flow chart of the proposed fingerprint pre-processing: image acquisition, RGB to grey scale image conversion, local normalisation, skin colour detection, adaptive thresholding, morphological processing, fingerprint segmentation, image cropping, image enhancement using STFT analysis, ridge orientation calculation, and core point detection.


2.2. Multispace Random Projections-Support Vector Machine (MRP-SVM)

2.2.1. Brief review of Multispace Random Projections (MRP)

MRP covers two stages: (i) feature extraction: a feature vector f ∈ R^d with fixed length d is extracted; (ii) random projection: the feature vector is projected onto a random subspace formed by an externally derived random matrix R ∈ R^{m×d}, where m ≤ d. R is formed from independent, zero-mean and unit-variance Gaussian distributed random bases. The user-specific random projected vector p ∈ R^m is thus described as

p = \frac{1}{\sqrt{m}} R f \qquad (1)

Verification is carried out using the normalised dot product, a dissimilarity measure,

c = 1 - x^T y \qquad (2)

where x and y are the normalised feature vectors. From the performance perspective, three scenarios are considered when MRP is applied: (i) legitimate-token, in which the genuine biometrics is mixed with the user-specific token; (ii) stolen-token, wherein an impostor has possession of the genuine token and uses it to claim to be the genuine user; (iii) stolen-biometrics, where an impostor presents intercepted biometric data that has a high possibility of being considered genuine. Based on the theoretical and experimental analysis in [18], Fig. 3 illustrates the original system performance and the performance behaviour of MRP in the legitimate-token, stolen-token and stolen-biometrics scenarios using genuine-impostor distributions. Referring to Fig. 3, MRP's genuine, impostor and genuine-impostor distributions in the legitimate-token, stolen-token and stolen-biometrics scenarios are summarised in Table 1. Generally, the amount of overlap between the genuine and impostor distributions determines the biometric system accuracy: the smaller the overlap between the two distributions, the better the system. Therefore, a good system should be able to separate the genuine and impostor samples into two clean distributions. From Fig. 3 and Table 1, it can be observed that the genuine-impostor distributions are clearly separated in the legitimate-token and stolen-biometrics scenarios. Hence, it is concluded that the recognition performance is significantly improved in these two scenarios. Conversely, the genuine-impostor distribution of the stolen-token scenario reverts to its original state at the feature vector level. Thus, the recognition performance is maintained as in the original system before MRP is applied.
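A minimal sketch of the random projection of Eq. (1) and the normalised dot-product matching of Eq. (2) is given below. The mapping of the user token to a PRN seed and the helper names are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def mrp_project(f, token_seed, m):
    """Project a unit-length feature vector f (length d) onto a user-specific
    random subspace: p = (1/sqrt(m)) * R @ f, with R drawn from i.i.d.
    zero-mean, unit-variance Gaussian bases seeded by the token."""
    d = f.shape[0]
    rng = np.random.default_rng(token_seed)      # token acts as the PRN seed (assumed)
    R = rng.standard_normal((m, d))              # R in R^{m x d}
    return (R @ f) / np.sqrt(m)

def dissimilarity(x, y):
    """Normalised dot-product dissimilarity c = 1 - x^T y."""
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    return 1.0 - float(x @ y)

# Example: same feature with the same token gives a near-zero dissimilarity,
# while a refreshed token makes the score behave like an impostor score.
f = np.random.randn(100)
f /= np.linalg.norm(f)
p_enrol = mrp_project(f, token_seed=42, m=80)
p_query = mrp_project(f, token_seed=42, m=80)
p_refreshed = mrp_project(f, token_seed=7, m=80)
print(dissimilarity(p_enrol, p_query), dissimilarity(p_enrol, p_refreshed))
```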

Fig. 3. Genuine-impostor distributions of the original system and performance behaviour of MRP in legitimate-token, stolen-token and stolen-biometrics scenarios.

2.2.2. Multispace Random Projections-Support Vector Machine (MRP-SVM)

MRP-SVM is applicable to biometric features represented in feature vector format. The technique consists of three stages: (i) feature extraction, (ii) random projection, and (iii) SVM classification.

2.2.2.1. Feature extraction. The individual's feature matrix is extracted from the pre-processed image through a Gabor filter feature extractor. The adopted feature extraction method is similar to the method described in [23]. Initially, the pre-processed images are sampled by the Gabor filters. Given that a bank of X Gabor filters is applied in an experiment, each filtered image is divided into non-overlapping blocks of size M × M. The magnitude in each block is then converted to a scalar by computing its standard deviation; these scalars form the Gabor features of the image. In total, N = (200/M) × (200/M) × X Gabor features are extracted from each image. Then, PCA is used to compress the feature vectors: an eigenspace is built during the training phase and, during the testing phase, each testing Gabor feature vector is projected onto the eigenspace to form the eigenGabors f with feature length d [24]. Before applying Multispace Random Projection, each eigenGabor vector is normalised to unit length, ||f|| = 1.
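The block-wise Gabor feature extraction and PCA compression described above can be sketched as follows. This is a generic illustration, not the authors' code: the Gabor filtering uses scikit-image's `gabor` function and the `frequency` value shown is a placeholder rather than the paper's exact setting.

```python
import numpy as np
from skimage.filters import gabor          # stand-in for the paper's Gabor filter bank
from sklearn.decomposition import PCA

def gabor_block_features(img, n_orient=6, frequency=0.1, block=8):
    """Block-wise Gabor features: the standard deviation of the filter-response
    magnitude in each M x M block, for X orientations, giving
    N = (200/M) * (200/M) * X features for a 200 x 200 image."""
    feats = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        real, imag = gabor(img, frequency=frequency, theta=theta)
        mag = np.hypot(real, imag)
        h, w = mag.shape
        for r in range(0, h - block + 1, block):
            for c in range(0, w - block + 1, block):
                feats.append(mag[r:r + block, c:c + block].std())
    return np.asarray(feats)

def to_eigengabors(train_feats, test_feats, d=100):
    """PCA compression to eigenGabors followed by unit-length normalisation."""
    pca = PCA(n_components=d).fit(train_feats)     # eigenspace from training data
    eg = pca.transform(test_feats)
    return eg / np.linalg.norm(eg, axis=1, keepdims=True)
```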

2.2.2.2. Random projection. Subsequently, the unit-length feature vector is projected onto a random subspace as described in Section 2.2.1. The user-specific random projection (RP) vector p ∈ R^m is produced through the random projection process defined in Eq. (1). The one-way transformation property can be assessed by looking at Eq. (1): p can be regarded as an underdetermined system of linear equations (more unknowns than equations) if m < d. Thus, it is impossible to find the exact values of all the elements of f by solving the underdetermined linear system p = (1/√m) R f if m < d, because the possible solutions are infinite [18]. Details of the analysis can be found in [25]. In the event of compromise, the template can be renewed by simply changing R in Eq. (1) for the compromised biometric template. In this way, the reusability property of MRP-SVM is fulfilled.

2.2.2.3. Support Vector Machine (SVM) classification. Afterwards, the random projected vector p is fed into an SVM to discriminate genuine from impostor. Let a training set be (p_i, q_i), i = 1, ..., Q, p_i ∈ R^m, q_i ∈ {+1, -1}, where p_i is either a genuine or an impostor random projection vector, q_i indicates the label (+1 for genuine, -1 for impostor), Q denotes the number of training samples, and m signifies the feature length. During training, the SVM looks for an optimal hyper-plane of the form w^T p + b = 0, where w is the normal to the plane and b denotes the bias term. The optimal hyper-plane is the separating hyper-plane which gives the largest margin.

Table 1. MRP's genuine, impostor and genuine-impostor distributions in the legitimate-token, stolen-token and stolen-biometrics scenarios.
Scenario | Genuine distribution (intra-class variation) | Impostor distribution (inter-class variation) | Genuine-impostor distributions
Legitimate-token, Stolen-biometrics | Preserved | Impostor distribution is amplified: the mean is 1, so the curve is centred at 1; the shrinking rate of the distribution profile (standard deviation) follows 1/√Y | Clear separation can be attained, and hence zero EER, if Y is sufficiently large, as depicted in Fig. 3
Stolen-token | Preserved | Preserved | Reverts to its original state at the feature vector level


Assuming that the genuine and impostor samples are linearly separable, the Lagrange dual optimisation problem for maximal margin separation is:

\max L_D = \sum_{i=1}^{Q} \alpha_i - \frac{1}{2} \sum_{i=1}^{Q} \sum_{j=1}^{Q} \alpha_i \alpha_j q_i q_j \, p_i^T p_j \qquad (3)

subject to \sum_{i=1}^{Q} \alpha_i q_i = 0 and C ≥ α_i ≥ 0, where the α_i are the Lagrange multipliers and C is the regularisation constant [26]. The extension to non-linear boundaries is obtained by projecting each data point to a higher dimensional feature space [26], which may be carried out through the use of kernels: (i) the polynomial kernel K(p_i, p_j) = (p_i^T p_j + 1)^P (P: the order of the polynomial), or (ii) the radial basis function (RBF) kernel K(p_i, p_j) = exp[-(1/2)(||p_i - p_j||/σ)^2] (σ: the width of the radial basis function). Similar to the linear SVM, constructing the optimal hyper-plane for an SVM with non-linear kernel K(p_i, p_j) involves the following dual:


\max L_D = \sum_{i=1}^{Q} \alpha_i - \frac{1}{2} \sum_{i=1}^{Q} \sum_{j=1}^{Q} \alpha_i \alpha_j q_i q_j K(p_i, p_j) \qquad (4)

subject to \sum_{i=1}^{Q} \alpha_i q_i = 0 and C ≥ α_i ≥ 0. During verification, the w and b learned in the training phase are used. The decision value G_dv of a test sample vector p for the linear SVM:

G_{dv} = w^T p + b = \sum_{j=1}^{N_S} \alpha_j q_j \, p_j^T p + b \qquad (5)

and non-linear SVM:

G_{dv} = w^T p + b = \sum_{j=1}^{N_S} \alpha_j q_j K(p, p_j) + b \qquad (6)

are calculated, where the p_j are the support vectors. p_j^T p (in Eq. (5)) and K(p, p_j) (in Eq. (6)) are similarity measures comparing p (the test sample) and p_j (the support vectors) in input space and feature space, respectively. G_dv is a weighted sum of the similarities between p and the p_j. G_dv is compared against a threshold Th_dv: the claimed user is accepted if G_dv < Th_dv and rejected if G_dv ≥ Th_dv.

2.2.3. Discussion

Support Vector Machines (SVMs) are promising methods for classification due to their concrete mathematical foundations, which provide several prominent properties that other methods hardly offer. Nevertheless, the training complexity of SVMs depends heavily on the size of the data set, so feature vectors with a large feature length are not feasible for SVMs: classification with large feature vectors is too expensive to perform and not computationally feasible. Besides, feature vectors with a large feature length require more storage, which is not favourable in real-world applications. Hence, PCA is adopted to reduce the dimensionality of the feature vectors. PCA identifies the most significant dimensions (which carry the important, condensed information) and removes the less significant dimensions (which contain noise or other uninteresting data). Thus, PCA allows less data to be used for pattern matching and increases the matching speed. Moreover, PCA can effectively increase the system performance, as shown in Sections 3.3.1 and 3.3.2.

MRP maps the data onto a random subspace while preserving the pair-wise distances as quantified by the dot product. Hence, MRP can be generalised to the linear SVM, since the original data points enter the Lagrange dual optimisation problem only through the dot product p_i^T p_j, as shown in Eq. (3). Furthermore, the dot product p_j^T p also appears in the decision stage, as exhibited in Eq. (5). Therefore, the SVM inherits the MRP characteristics described in [18]. As stated above, to extend the idea to non-linearly separable data, a non-linear kernel is needed: the dot product p_i^T p_j in input space is substituted with K(p_i, p_j) in the Lagrange dual optimisation problem, as displayed in Eq. (4), and the dot product p_j^T p in input space is substituted with K(p, p_j) when computing the decision value, as shown in Eq. (6). The non-linear kernel measures the similarity between two feature vectors; typically, it represents the dot product between the two representations in a transformed space in which the data are linearly separable. Depending on the choice of kernel, a number of SVM variants with varying performance can be developed. In short, the SVM is another classifier that can be used within MRP, because it operates through the dot product or a non-linear kernel of it; the SVM is therefore still preserved under the MRP framework. Moreover, it has the advantage of better discrimination power compared with the normalised matching metric.
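The SVM verification stage on random-projected vectors can be sketched as follows. This is a minimal illustration using scikit-learn rather than the authors' implementation; the RBF parameter values mirror the setting reported later in the paper (gamma = 1, C = 10) only as an assumption, and the acceptance direction depends on the label and score convention.

```python
import numpy as np
from sklearn.svm import SVC

def train_user_svm(genuine_p, impostor_p, gamma=1.0, C=10.0):
    """Train a per-user RBF SVM on random-projected vectors.
    Labels follow the paper: +1 for genuine, -1 for impostor."""
    X = np.vstack([genuine_p, impostor_p])
    q = np.hstack([np.ones(len(genuine_p)), -np.ones(len(impostor_p))])
    clf = SVC(kernel="rbf", gamma=gamma, C=C)
    return clf.fit(X, q)

def verify(clf, p, threshold=0.0):
    """Decision value G_dv = sum_j alpha_j q_j K(p, p_j) + b, compared with a
    threshold; here a higher G_dv indicates the genuine (+1) side."""
    g_dv = clf.decision_function(p.reshape(1, -1))[0]
    return g_dv, g_dv >= threshold
```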

3. Experiment evaluation

3.1. Image acquisition

As there is no standard benchmark database available for fingerprint images captured by a digital camera, we constructed an independent database. The digital camera used for image acquisition is a Canon PowerShot Pro1; its settings are shown in Table 2. All fingerprint images were acquired in one session in a controlled environment, as illustrated in Fig. 4 [20]. After the fingerprint images were collected, they were downloaded to a personal computer and stored in JPEG format. The constructed independent database comprises 1938 fingerprint images from 103 fingers (103 classes). For some subjects, images of both fingers (left and right) were taken; for others, only one finger (left or right) was captured. Several images were collected from each finger, resulting in 1938 fingerprint images in total. The number of images per finger is not fixed, because the number of shots cannot be controlled exactly when the shutter button is pressed once in continuous shooting mode.

Table 2. Digital camera settings for fingerprint image acquisition.
Image size/resolution | 640 × 480
Image quality | Superfine
Super macro | On (to get a clearer foreground pattern and a blurred background pattern)
Colour space | Standard RGB
Drive mode | Continuous shooting (several images are collected by pressing the shutter button once)

Fig. 4. Experiment setup.


Fig. 5. Results for the proposed pre-processing.

3.2. Pre-processing experiment results

Pre-processing experiments were conducted using all 1938 fingerprint images in the database, and the results were assessed subjectively by visual inspection. The proposed pre-processing provides pleasing results, as shown in Fig. 5.

By counting the number of false core point detections, we find that the proposed pre-processing algorithm achieves a core point detection accuracy of 95.44%, i.e. the false core point detection rate is only 4.56%. Deep wrinkles, motion blurriness and defocus are the causes of core point detection failure. Figs. 6–8 show samples of false core point detection caused by deep wrinkles, motion blurriness and defocus, respectively [20].

Fig. 6. Samples of false core point detection caused by deep wrinkles (original images and core point detected images).


Fig. 7. Samples of false core point detection caused by motion blurriness (original images and core point detected images).

Fig. 8. Samples of false core point detection caused by defocus (original images and core point detected images).

Deep wrinkles are high-contrast lines [27] which cause orientation estimation errors that cannot be corrected easily; consequently, the fingerprint core point is detected inaccurately. Motion blur is caused by relative motion between the camera and the object (the finger) during the exposure time [28]. It lowers the ridge-valley contrast [1] and degrades the image quality [29], and therefore affects the core point detection result. Depth of field refers to the range of distances over which the finger appears in acceptably sharp focus in an image. Generally, defocus occurs when part of the fingerprint region lies outside the depth of field [30]: some parts of the fingerprint are in focus while other parts are out of focus. The information provided in such a fingerprint image is inadequate for enhancement and orientation calculation, finally leading to inaccurate core point detection.

For generic fingerprint pre-processing the input image is in grey scale format, so the RGB image is converted to grey scale before pre-processing. Using conventional enhancement techniques such as Hong's enhancement [31], root filtering [32] and STFT analysis [21], these grey scale images are enhanced as depicted in Fig. 9. Nevertheless, the enhancement results are rather poor. Fig. 9 shows the results for the different enhancement algorithms. Fig. 10 depicts the comparison of the proposed algorithm with existing enhancement methods in the literature, using a cropped local normalised image from our independent database. As illustrated by the circled regions in Fig. 10, some portions of the images are not enhanced well by Hong's enhancement and root filtering. It can be seen that the proposed algorithm performs better than Hong's enhancement and root filtering. Besides, the enhancement results shown in Fig. 10 are better than those shown in Fig. 9, which indicates that image cropping is needed to eliminate the undesired noisy background and that local normalisation helps in reducing the non-uniform lighting effect.

The final set of pre-processing experiments examines the effect of local normalisation on enhancement via STFT analysis and on the core point detection results. Fig. 11 depicts the results attained when a local normalised image is utilised. On the other hand, Fig. 12 shows the results obtained when a grey scale image without local normalisation is enhanced directly. Note that the original image used in Figs. 11 and 12 is the same. Comparing the results in Figs. 11 and 12 shows that grey scale images need to be locally normalised before enhancement: without local normalisation, the enhancement and core point detection results are not satisfactory because of the uneven lighting in the grey scale images.

3.3. Verification experiment results

Core point centred local normalised fingerprint images of size 200 × 200 pixels are the final output of pre-processing. Instead of directly using all the raw images in the database described in Section 3.1, a total of 1030 core point centred local normalised fingerprint images are used in the verification experiments. These fingerprint images originate from 103 different fingers, with 10 images per finger. The system performance is measured by the EER, i.e. the operating point at which the False Accept Rate (FAR) equals the FRR. A comparative study of different feature matching procedures is performed on eigenGabors and on uncompressed Gabor features. The main feature matching method investigated in the verification experiments is the SVM; in addition, we also implemented a conventional decision rule based on a matching metric, namely the Euclidean distance (ED), for experimental comparison. Fig. 13 depicts the flow chart of verification using the different feature matching procedures on the different representations. The parameters of the Gabor filter feature extractor are set as: (i) number of Gabor filters, X = 6; (ii) frequency, f_g = 10; (iii) variances σ²_gx = σ²_gy = 128; (iv) size of the non-overlapping blocks, M = 8.
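As a worked example, these parameter values determine the feature length before compression directly from the formula in Section 2.2.2.1: N = (200/M) × (200/M) × X = (200/8) × (200/8) × 6 = 25 × 25 × 6 = 3750 Gabor features per image, which PCA then compresses to at most d = 100 eigenGabors.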


Fig. 9. Results of different enhancement algorithms on grey scale image.

To obtain the eigenGabors, the features derived from the Gabor filter based feature extractor are compressed using PCA. One Gabor feature vector from each finger class is used for training, while the rest are used as testing data. Note that both the eigenGabors and the uncompressed Gabor feature vectors are normalised to unit length before the verification test is conducted. For the evaluation using the SVM classifier, the experiment settings for uncompressed Gabor features and eigenGabors are fixed as shown in Tables 3 and 4, respectively. To enhance the reliability of the assessment, we perform 10 runs for each number of training images Tr with different random partitions between training and testing images, and the results are averaged. Table 5 shows the configuration for the experiments using ED.

3.3.1. Verification using the Support Vector Machine (SVM) classifier

The verification accuracy is tested using the linear SVM, the polynomial SVM (SVM with the polynomial kernel) and the RBF SVM (SVM with the RBF kernel). The effects of normalised eigenGabors and normalised uncompressed Gabor features are also evaluated. For the verification with the SVM classifier using eigenGabors, to fix the feature length, the eigenvectors are extracted from the Gabor feature vectors with a dimensionality of 100.
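The EER used throughout this section can be estimated from sets of genuine and impostor scores as sketched below. This is a generic illustration of the FAR = FRR operating point, not the authors' evaluation code; the score orientation (smaller = more genuine) and the threshold sweep are assumptions.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER, i.e. the point where FAR = FRR.
    Scores are assumed to be dissimilarities: smaller = more likely genuine."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    best_gap, eer = 2.0, None
    for t in thresholds:
        frr = np.mean(genuine > t)      # genuine samples wrongly rejected
        far = np.mean(impostor <= t)    # impostor samples wrongly accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), 0.5 * (far + frr)
    return eer

# In the paper's protocol this estimate is averaged over 10 random
# train/test partitions for each setting.
```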

Before comparing the performance of the linear, polynomial and RBF SVMs, the optimal parameter values for the polynomial SVM (degree) and the RBF SVM (gamma and C) are investigated, as these parameter values affect the results. In this study, polynomials of degree 2 to 5 are tested. Besides, different parameter values of the RBF SVM (gamma = 1, C = 10; gamma = 10, C = 1; gamma = 1, C = 1; gamma = 10, C = 10) are investigated for both uncompressed Gabor features and eigenGabors. After parameter tuning, the best parameter values of the polynomial SVM (degree = 5 for uncompressed features and degree = 2 for eigenGabors) and the RBF SVM (gamma = 1 and C = 10 for both uncompressed Gabor features and eigenGabors) are fixed for further evaluation.

Next, the performance of the linear SVM, polynomial SVM and RBF SVM on eigenGabors and uncompressed Gabor features is compared. From Figs. 14 and 15, one interesting observation is that the RBF SVM is the best whereas the linear SVM is the worst, for both eigenGabors and uncompressed features. Comparing Figs. 14 and 15 shows that the eigenGabors surpass the uncompressed features. Furthermore, the processing speed for eigenGabors is much faster than for uncompressed features.
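One way to reproduce this kind of parameter tuning is with a simple grid search, as sketched below; the paper does not describe its tuning procedure, so the use of scikit-learn's GridSearchCV, the placeholder training data, and the cross-validation split are all assumptions made only for illustration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Placeholder projected vectors and balanced +1/-1 labels (illustrative only).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 100))
q_train = np.array([1, -1] * 20)

# Candidate values mirroring those examined in the text:
# polynomial degree 2-5, RBF gamma and C in {1, 10}.
param_grid = [
    {"kernel": ["poly"], "degree": [2, 3, 4, 5]},
    {"kernel": ["rbf"], "gamma": [1, 10], "C": [1, 10]},
]
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, q_train)
print(search.best_params_)
```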


Fig. 10. Comparative results for different enhancement algorithms.

3.3.2. Verification using Euclidean distance

The EER of ED using uncompressed Gabor features is 6.39%. In the experiments of verification using ED on eigenGabors, the eigenvectors are extracted from the Gabor feature vectors with the feature length varied from 10 to 100 in steps of 10. Fig. 16 depicts the results of ED using eigenGabors. From Fig. 16, we observe that the performance improves up to a certain point (d = 50) and becomes poorer after that. Based on the experiment results, the following conclusions can be drawn:

1. The RBF SVM is the best SVM classifier for both normalised eigenGabors and normalised uncompressed Gabor features.
2. The results of normalised eigenGabors are better than those of normalised uncompressed Gabor features for both the matching metric and the SVM classifier. It can be concluded that the first few eigenGabors capture the largest-variance directions, which carry the most significant amount of energy; conversely, if uncompressed features are used, the insignificant information in the features, such as noise, decreases the system performance.
3. Comparing the best results of ED and the SVM classifier for both normalised eigenGabors and normalised uncompressed Gabor features, as indicated in Table 6, the best result is obtained when RBF SVM verification is conducted using normalised eigenGabors.

Fig. 11. Results attained when local normalised image is used: (a) cropped local normalised image; (b) enhanced image; (c) core point detected image.

Fig. 12. Results obtained when grey scale image without local normalisation is used: (a) cropped grey scale image; (b) enhanced image; (c) core point detected image.


Fig. 13. Flow chart of verification using: (a) RBF SVM on uncompressed features; (b) RBF SVM on eigenGabors; (c) ED on uncompressed features; (d) ED on eigenGabors. In each case the pre-processed image passes through Gabor filter based feature extraction (with PCA in (b) and (d)) before verification.

Table 3. Configuration for the experiments using uncompressed features.
Setting | No. of training images, Tr | No. of testing images, 10 - Tr | No. of clients | No. of impostors
1 | 1 | 9 | 9 × 103 = 927 | 9 × 103 × 102 = 94,554
2 | 2 | 8 | 8 × 103 = 824 | 8 × 103 × 102 = 84,048
3 | 3 | 7 | 7 × 103 = 721 | 7 × 103 × 102 = 73,542
4 | 4 | 6 | 6 × 103 = 618 | 6 × 103 × 102 = 63,036

Table 4. Configuration for the experiments using SVM on eigenGabors.
Setting | No. of training images, Tr | No. of testing images, 9 - Tr | No. of clients | No. of impostors
1 | 1 | 8 | 8 × 103 = 824 | 8 × 103 × 102 = 84,048
2 | 2 | 7 | 7 × 103 = 721 | 7 × 103 × 102 = 73,542
3 | 3 | 6 | 6 × 103 = 618 | 6 × 103 × 102 = 63,036
4 | 4 | 5 | 5 × 103 = 515 | 5 × 103 × 102 = 52,530

Fig. 14. EER results (%) of linear SVM, polynomial SVM (degree = 5) and RBF SVM (gamma = 1, C = 10) using uncompressed Gabor features with different numbers of training images, Tr:
Tr = 1: linear 40.68, polynomial 29.07, RBF 13.53
Tr = 2: linear 31.02, polynomial 17.37, RBF 6.98
Tr = 3: linear 23.95, polynomial 11.79, RBF 4.91
Tr = 4: linear 19.32, polynomial 8.48, RBF 3.63


Table 5. Configuration for the experiments using ED.
Setting | Type of feature | No. of clients | No. of impostors
1 | eigenGabors | (8 × 9)/2 × 103 = 3708 | (103 × 102)/2 × 9 = 47,277
2 | Uncompressed Gabor features | (9 × 10)/2 × 103 = 4635 | (103 × 102)/2 × 10 = 52,530

3.4. Multispace Random Projection (MRP) experiments

3.4.1. Performance results

To evaluate the performance of MRP with the best performing matching metric and SVM classifier, we conceal the eigenGabors through MRP before feature matching. Fig. 17 illustrates the flow chart of the MRP experiments using: (a) RBF SVM; (b) ED.
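The three attack scenarios evaluated below differ only in which feature vector and which token seed the claimant presents. A minimal sketch of how the corresponding match scores can be generated is given here; the helper functions and seeds are illustrative assumptions, not the authors' experiment code.

```python
import numpy as np

def project(f, seed, m):
    # p = (1/sqrt(m)) R f with a token-seeded Gaussian R (see Section 2.2.1)
    R = np.random.default_rng(seed).standard_normal((m, f.shape[0]))
    return (R @ f) / np.sqrt(m)

def dissim(x, y):
    # normalised dot-product dissimilarity c = 1 - x^T y
    return 1.0 - float((x / np.linalg.norm(x)) @ (y / np.linalg.norm(y)))

def scenario_scores(f_enrolled, f_genuine_query, f_impostor,
                    user_seed, other_seed, m=100):
    """Dissimilarities for the legitimate-token, stolen-token and
    stolen-biometrics scenarios against the enrolled template."""
    p_ref = project(f_enrolled, user_seed, m)
    legitimate = dissim(p_ref, project(f_genuine_query, user_seed, m))      # genuine feature + genuine token
    stolen_token = dissim(p_ref, project(f_impostor, user_seed, m))         # impostor feature + stolen genuine token
    stolen_biometrics = dissim(p_ref, project(f_enrolled, other_seed, m))   # intercepted feature + different token
    return legitimate, stolen_token, stolen_biometrics
```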


Fig. 15. EER results (%) of linear SVM, polynomial SVM (degree = 2) and RBF SVM (gamma = 1, C = 10) using eigenGabors (dimensionality = 100) with different numbers of training images, Tr:
Tr = 1: linear 18.89, polynomial 14.57, RBF 5.09
Tr = 2: linear 9.47, polynomial 6.26, RBF 2.75
Tr = 3: linear 6.48, polynomial 3.25, RBF 1.83
Tr = 4: linear 4.21, polynomial 1.76, RBF 1.23

Fig. 18. EER results (%) of eged, mrped-m, mrped-m (stolen-token) and mrped-m (stolen-biometrics) for ED with different feature lengths, d.

Fig. 16. EER results (%) of ED using eigenGabors of different dimensionality (d = 10: 2.77; 20: 1.88; 30: 1.79; 40: 1.68; 50: 1.68; 60: 1.73; 70: 1.77; 80: 1.80; 90: 1.85; 100: 1.88).

In the verification using ED on concealed features (MRP-ED), the configuration of the experiment is the same as setting 1 of Table 5. In the MRP-ED experiment, eged, mrped-m, mrped-m (stolen-token) and mrped-m (stolen-biometrics) denote the eigenGabors, legitimate-token, stolen-token and stolen-biometrics scenarios, respectively. In the MRP-SVM experiment, setting 4 of Table 4 is adopted. Again, we perform 10 runs for each of the 4 settings with different random partitions between training and testing images, and the results are averaged.

Table 6. Comparison of the best EER results (%) of SVM and ED for both eigenGabors and uncompressed features.
Experiment | EER (%)
Verification using ED on normalised uncompressed features | 6.39
Verification using ED on normalised eigenGabors | 1.88
Verification using RBF SVM on normalised uncompressed features | 3.63
Verification using RBF SVM on normalised eigenGabors | 1.23


In the MRP-SVM experiment, egsvm, mrpsvm-m, mrpsvm-m (stolen-token) and mrpsvm-m (stolen-biometrics) denote the eigenGabors, legitimate-token, stolen-token and stolen-biometrics scenarios, respectively. The feature length of the eigenGabors (eged and egsvm) is fixed at d = 100 for the above two experiments, and the eigenGabors are normalised to unit length before MRP. To infer the recognition performance of MRP-ED and MRP-SVM, the three scenarios described in Section 2.2.1 are considered, i.e. legitimate-token, stolen-token and stolen-biometrics. Figs. 18 and 19 show the performance comparison of the eigenGabors, legitimate-token, stolen-token and stolen-biometrics scenarios for ED and RBF SVM, respectively. For both ED and RBF SVM, it is clearly shown that legitimate-token outperforms the original method (eged and egsvm in this context); legitimate-token attains better EER results than the original method for both ED and RBF SVM in the range 40 ≤ m ≤ 100.

Fig. 19. EER results (%) of egsvm, mrpsvm-m, mrpsvm-m (stolen-token) and mrpsvm-m (stolen-biometrics) for RBF SVM with different feature lengths, d.

Fig. 17. Flow chart of the MRP experiments using: (a) RBF SVM; (b) ED. In both cases the pre-processed image passes through Gabor filter based feature extraction, PCA and MRP before verification.

Table 7. Statistics measurement of eged and mrped-m (stolen-token) for ED.
 | μ_genuine | μ_impostor | σ_genuine | σ_impostor
eged | 0.0050 | 0.0140 | 0.0029 | 0.0015
mrped-m (stolen-token), m = 100 | 0.0050 | 0.0140 | 0.0029 | 0.0015
m = 90 | 0.0050 | 0.0139 | 0.0029 | 0.0015
m = 80 | 0.0050 | 0.0141 | 0.0029 | 0.0016
m = 70 | 0.0050 | 0.0139 | 0.0029 | 0.0016
m = 60 | 0.0049 | 0.0139 | 0.0029 | 0.0016
m = 50 | 0.0050 | 0.0140 | 0.0029 | 0.0018
m = 40 | 0.0050 | 0.0139 | 0.0029 | 0.0018
m = 30 | 0.0050 | 0.0141 | 0.0030 | 0.0021
m = 20 | 0.0049 | 0.0138 | 0.0029 | 0.0024
m = 10 | 0.0049 | 0.0136 | 0.0032 | 0.0032

Besides, it can be seen that the EER of the stolen-biometrics scenario is better than that of the eigenGabors for both ED and RBF SVM in the range 40 ≤ m ≤ 100. On the other hand, the EERs of the eigenGabors and stolen-token scenarios are equal when m ≈ d for both ED and RBF SVM; in other words, the stolen-token scenario reverts the system to its original state when m ≈ d.

From the above findings, an important proviso is observed: the performance of MRP is directly proportional to the classification ability of the classifier. For the legitimate-token and stolen-biometrics scenarios, the MRP performance can be boosted by a classifier with better separation, whilst the classifier quality determines the performance of MRP in the stolen-token scenario. Yet, it would not be poorer than the original method, i.e. classification using eigenGabors without MRP. This is favourable in practical applications, whereby MRP survives either a stolen-token or a stolen-biometrics attack and also offers significant improvement in the verification setting with a better classifier.

From Tables 7 and 8, for both ED and SVM, it can be observed that the impostor distribution's mean and standard deviation for the eigenGabors and the stolen-token scenario are the same when m ≈ d. A similar result can be seen in the genuine distribution, where the mean and standard deviation of the stolen-token scenario are equivalent to those of the eigenGabors when m ≈ d. This indicates that the genuine-impostor distribution is preserved under both the ED and SVM frameworks when m ≈ d. It is also depicted in Figs. 20 and 21, where the separation of the genuine-impostor class distributions for the eigenGabors and the stolen-token scenario is almost identical. These observations lead to the conclusion that, although different matching techniques are employed, both methods preserve the intra-class variations (genuine distribution) as well as the inter-class variations (impostor distribution). Moreover, Table 8 and Fig. 21 support our assertion that the SVM statistical properties are preserved under the MRP framework.

3.4.2. Diversity test
Table 8. Statistics measurement of egsvm and mrpsvm-m (stolen-token) for RBF SVM.
 | μ_genuine | μ_impostor | σ_genuine | σ_impostor
egsvm | 0.6445 | 1.0668 | 0.4428 | 0.3188
mrpsvm-m (stolen-token), m = 100 | 0.6445 | 1.0668 | 0.4428 | 0.3188
m = 90 | 0.6582 | 1.1212 | 0.4410 | 0.3463
m = 80 | 0.6742 | 1.1880 | 0.4380 | 0.3776
m = 70 | 0.6867 | 1.2799 | 0.4363 | 0.4298
m = 60 | 0.6935 | 1.3971 | 0.4549 | 0.5094
m = 50 | 0.6897 | 1.5195 | 0.4517 | 0.5755
m = 40 | 0.6440 | 1.6600 | 0.5040 | 0.6513
m = 30 | 0.3766 | 1.7365 | 0.5865 | 0.6831
m = 20 | 0.4513 | 1.3946 | 0.3854 | 0.5092
m = 10 | 0.9244 | 1.0647 | 0.2172 | 0.3227

Fig. 20. Genuine and impostor class distributions for: (a) eged; (b) mrped-m (stolen-token).
In order to fulfil the diversity requirement of cancellable biometrics, a Pairwise Independence Test is conducted to inspect whether the MRP template generated with PRN A and the MRP template generated with PRN B (both from the same fingerprint feature) are associated. The same fingerprint feature is mixed with different PRNs, and we observe that the score generation procedure exactly follows the impostor distribution described above. Fig. 22(a) and (b) exhibit the Pairwise Independence Test of MRP when ED and RBF SVM are used, respectively. As indicated in Fig. 22(a), the mean and standard deviation are 0.0141 and 0.0007, respectively; the mean and standard deviation are 1.0177 and 0.2982, respectively, as shown in Fig. 22(b). It can be concluded that MRP is pairwise independent, as the histograms in both Fig. 22(a) and (b) approach independent and identically distributed (i.i.d.) random variables drawn from a Gaussian distribution, N(1610.2, 407.3). This means that there is almost no correlation between the refreshed MRP template and the old MRP template. Therefore, random number refreshment is equivalent to issuing a new template to the user. In a real application, every user is assigned a unique random number; hence, only the respective template is renewed in the event of compromise.
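The Pairwise Independence Test above can be sketched as follows. The helper functions and seeds are illustrative assumptions; in practice the histogram of the resulting scores is compared with the impostor distribution, as done in Fig. 22.

```python
import numpy as np

def project(f, seed, m=100):
    # user-specific projection p = (1/sqrt(m)) R f, with R seeded by the PRN token
    R = np.random.default_rng(seed).standard_normal((m, f.shape[0]))
    return (R @ f) / np.sqrt(m)

def dissim(x, y):
    # normalised dot-product dissimilarity c = 1 - x^T y
    return 1.0 - float((x / np.linalg.norm(x)) @ (y / np.linalg.norm(y)))

def pairwise_independence_scores(features, seed_a, seed_b, m=100):
    """Project each feature with two different PRNs and record the dissimilarity
    between the two resulting templates; pairwise independence means these
    scores follow the impostor distribution."""
    return np.array([dissim(project(f, seed_a, m), project(f, seed_b, m))
                     for f in features])
```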


Fig. 21. Genuine and impostor class distributions for: (a) egsvm; (b) mrpsvm-m (stolen-token).

Fig. 22. Pairwise Independence Test of MRP using: (a) ED; (b) RBF SVM.

4. Conclusions and future works

A complete secure digital camera based fingerprint verification system is proposed. The system uses a touch-less acquisition method. The proposed pre-processing, which includes skin colour detection, local normalisation, fingerprint segmentation, image enhancement and core point detection, resolves the problems existing in such images. After pre-processing, MRP-SVM, which comprises Gabor feature extraction, PCA, MRP and SVM verification, is performed. MRP-SVM protects the template whilst improving system performance. In MRP-SVM, the SVM, with its high discrimination power, is preserved under the MRP framework because it relies on the dot product and non-linear kernels. It is noted that the MRP performance is proportional to the classifier quality. Experiments show that MRP-SVM attains high recognition performance and functions well without compromising the verification performance in the event of a compromised token. Furthermore, MRP-SVM is shown to fulfil the other cancellable biometrics properties, i.e. the diversity and non-reversibility properties.

The proposed system is still experimental and under enhancement. A future direction of this work is to package the proposed system into a real-time application. Nevertheless, several constraints need to be taken into account for a real-time application, such as time complexity. Besides, before that expansion, we plan to come up with automated core point detection evaluation in our pre-processing, as our current core point detection is assessed subjectively through visual inspection.

We also plan to enhance the pre-processing method devised in this research so that the problems of motion blurriness, defocus and deep wrinkles that lead to false core point detection can be solved. Besides, we plan to improve our secure digital camera based fingerprint recognition: we intend to move away from acquiring the fingerprint images with our current controlled experiment setup. Our next, very challenging, goal is to maintain good fingerprint recognition performance even when the fingerprint images are captured in any kind of environment.

Acknowledgment

This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Biometrics Engineering Research Center (BERC) at Yonsei University (Grant Number: R112002105080020 [2009]).

References
[1] C. Lee, S. Lee, J. Kim, A study of touchless fingerprint recognition system, in: Proceedings of Joint IAPR International Workshops, SSPR 2006 and SPR 2006, 2006, pp. 358–365.
[2] Y. Song, C. Lee, J. Kim, New scheme for touchless fingerprint recognition system, in: Proceedings of International Symposium on Intelligent Signal Processing and Communication Systems, 2004, pp. 524–527.
[3] J. Schneider, Biometrics: uses and abuses, Commun. ACM 42 (1999) 136.

[4] N. Ratha, J. Connell, R. Bolle, Enhancing security and privacy in biometrics-based authentication systems, IBM Syst. J. 40 (3) (2001) 614–634.
[5] D. Maltoni et al., Handbook of Fingerprint Recognition, Springer-Verlag, New York, 2003.
[6] J.F. Mainguet, Fingerprint Sensing Techniques. [Online] No date; Available from <http://perso.orange.fr/fingerchip/biometrics/types/fingerprint_sensors_physics.htm>.
[7] Digital Descriptor Systems Inc., Amendment to Registration of Securities of a Small-Business Issuer. [Online] 2000; Available from <http://www.secinfo.com/dsvrb.52N5.htm>.
[8] TST Biometrics Touchless Sensor Technology, Technology. [Online] No date; Available from <http://www.tst-ag.de/>.
[9] C. Lee et al., Preprocessing of a fingerprint image captured with a mobile camera, in: Proceedings of International Conference on Biometrics, 2006, pp. 348–355.
[10] E. Sano et al., Fingerprint authentication device based on optical characteristics inside a finger, in: Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop, 2006, pp. 27.
[11] G. Parziale, E. Diaz-Santana, The surround imager: a multi-camera touchless device to acquire 3D rolled-equivalent fingerprints, in: Proceedings of IAPR International Conference on Biometrics, 2006, pp. 244–250.
[12] Y. Chen et al., 3D touchless fingerprints: compatibility with legacy rolled images, in: Proceedings of Biometric Symposium, Biometric Consortium Conference, 2006.
[13] T.C. Clancy, N. Kiyavash, D.J. Lin, Secure smartcard-based fingerprint authentication, in: Proceedings of the ACM SIGMM 2003 Multimedia, Biometrics Methods & Applications Workshop, 2003, pp. 45–52.
[14] Andrew B.J. Teoh, David C.L. Ngo, Alwyn Goh, Personalised cryptographic key generation based on facehashing, Comput. Secur. J. 23 (2004) 606–614.
[15] M. Savvides, B.V.K. Vijaya Kumar, P.K. Khosla, Cancelable biometrics filters for face recognition, Int. Conf. Pattern Recogn. 3 (2004) 922–925.
[16] R. Ang, S.N. Rei, L. McAven, Cancelable key-based fingerprint templates, in: Proceedings of the 10th Australasian Conference on Information Security and Privacy, 2005, pp. 242–252.
[17] T. Boult, Robust distance measures for face recognition supporting revocable biometric tokens, in: 7th International Conference on Automatic Face and Gesture Recognition, 2006.
[18] Andrew B.J. Teoh, T.Y. Chong, Cancellable biometrics realization with multispace random projections, IEEE Trans. Syst. Man Cybern. B, Special Issue on Recent Advances in Biometrics Systems, 37 (5) (2007) 1096–1106.
[19] Biomedical Imaging Group, Local Normalization. [Online] 11 February 2002; Available from <http://bigwww.epfl.ch/demo/jlocalnormalization/>.
[20] B.Y. Hiew, Andrew B.J. Teoh, David C.L. Ngo, Preprocessing of fingerprint images captured with a digital camera, in: Ninth International Conference on Control, Automation, Robotics and Vision (ICARCV 2006), 2006.
[21] S. Chikkerur, A. Cartwright, V. Govindaraju, Fingerprint image enhancement using STFT analysis, Pattern Recogn. 40 (1) (2005) 198–211.
[22] Andrew B.J. Teoh et al., Automatic fingerprint center point determination by using modified directional field and morphology, in: AI 2003: Advances in Artificial Intelligence, 16th Australian Conference on AI, 2003.
[23] C.J. Lee, S.D. Wang, Fingerprint feature extraction using Gabor filters, Electron. Lett. 35 (4) (1999) 288–290.
[24] M. Turk, A. Pentland, Eigenfaces for recognition, J. Cogn. Neurosci. (1991).
[25] H. Kargupta et al., Random-data perturbation techniques and privacy-preserving data mining, Knowl. Inform. Syst. 7 (4) (2005) 387–414.
[26] C.J.C. Burges, A tutorial on support vector machines for pattern recognition, Knowl. Discov. Data Mining 2 (2) (1998) 121–167.
[27] T. Nakamura et al., Fingerprint enhancement using a parallel ridge filter, in: Proceedings of the 17th International Conference on Pattern Recognition, 2004, pp. 536–539.
[28] Anon., Motion blur. [Online] 25 February 2009; Available from <http://en.wikipedia.org/wiki/Motion_blur>.
[29] L.D. Cai, Objective assessment of restoration of global motion-blurred images, in: Proceedings of Third International Conference on Image and Graphics, 2004, pp. 69.


[30] Anon., Depth of Field, 2009. Available from <http://en.wikipedia.org/wiki/Depth_of_field>.
[31] L. Hong, Y. Wan, A.K. Jain, Fingerprint image enhancement: algorithm and performance evaluation, IEEE Trans. Pattern Anal. Mach. Intell. 20 (8) (1998) 777–789.
[32] C.I. Watson, G.T. Candela, P.J. Grother, Comparison of FFT fingerprint filtering methods for neural network classification, NIST Report, National Institute of Standards and Technology, 1994.

Hiew Bee Yan received her B.IT (Hons) from the University of Malaya and her MSc (IT) from Multimedia University. She is presently a lecturer at the Faculty of Information Science and Technology, Multimedia University. Her research interests include fingerprint recognition and image processing.

Andrew Teoh Beng Jin obtained his BEng (Electronic) in 1999 and his Ph.D. degree in 2003 from the National University of Malaysia. His research interests are in biometrics security, watermarking and pattern recognition. He has published around 130 international journal and conference papers in his area.

Ooi Shih Yin obtained her B.IT (Hons) in Business Information Systems in 2004 and her MSc (IT) in 2008 from Multimedia University. She is currently a lecturer at the Faculty of Information Science and Technology, Multimedia University. Her research interests include biometric signature verification, image processing, and computer vision.
