Finger Veins Recognition Using Machine Learning Techniques

Article history:
Received 30 March 2021
Accepted 5 April 2021
Available online xxxx

Keywords: Machine learning; Veins; Imposter; Naïve Bayes; k-Nearest Neighbor; Random forest

Abstract: Finger veins are a unique feature of the human body that differ from other biometric signs: it is nearly impossible for an imposter to discover them and penetrate the system, and they are difficult to forge because the veins lie under the skin. The trait is also distinguished from the rest by its flexibility, because a person has more than one finger to present. In this paper, we present imposter detection using seven machine-learning techniques, preceded by a description of the preprocessing and feature-extraction steps. These steps are implemented on two datasets. The best-performing classifier was Naïve Bayes, followed by Random Forest, while the lowest classifier accuracy was obtained by JRip.

© 2021 Elsevier Ltd. All rights reserved.
Selection and peer-review under responsibility of the scientific committee of the Emerging Trends in Materials Science, Technology and Engineering.
1. Introduction

Finger veins are attracting more researchers than in previous decades because they are easy to reach, highly accurate, and nearly impossible to replicate [1]. Most finger vein identification methods suffer from drawbacks related to the extraction of important features because of poor-quality images and the weak permeability of the finger veins. Poor-quality images can be caused by low control of the infrared radiation, low lighting conditions, and light scattering in the tissues covering the vein structure to be captured [2]. Image collection, image preprocessing, feature extraction, and feature matching are the four phases of a biometric method based on finger veins. Near-infrared light is applied to acquire images in two different ways: light reflection and light refraction [3]. The use of a good image acquisition system is essential; otherwise, too much preprocessing would be required. With a tidy, clean picture, several current finger vein identification models function well [4]. Even if the image is not clear or the finger location is distorted, adjustments are needed. After obtaining a vein image, it is important to preprocess it in order to improve its quality [5]. Feature extraction from images can be carried out in two ways: the first is to treat the image as a general image and apply mature feature extraction algorithms from the field of image processing, which can be called image-level feature extraction; the second is to attempt to separate the vein patterns from the image and then extract features from the pure vein patterns, which can be called vein-level feature extraction [6]. Table 1 shows the survey of existing biometric characteristics.

2. Literature survey

Since the beginning of the current century, finger vein identification systems have been widely developed. In this section, some of the previous work related to this research that presented vein recognition using machine learning classifiers is reviewed.

In 2021, an efficient finger vein recognition framework was achieved, consisting of hybrid Local Phase Quantization (LPQ) for strong feature extraction and Grey Wolf Optimization-based SVM (GWO-SVM) to calculate the best parameter combination of the SVM for optimal binary classification decisions. GWO-SVM is used for classification in order to maximize the classification accuracy by determining the optimal SVM parameters; the suggested framework was tested on four finger vein datasets, achieving a recognition accuracy of 98% [8]. In 2021, an adaptive k-nearest centroid neighbour (akNCN) classifier was introduced as an enhancement of the kNCN classifier. Two new rules were added to adaptively choose the neighbourhood size of the test sample: the neighbourhood size for the test sample is modified, and the size of the neighbourhood is adaptively adjusted to j. Experimental results on the Finger Vein (FV-USM) image database demonstrate promising results, with a classification accuracy of 85.64% for the akNCN classifier [7]. In 2020, finger vein enhancement for the purposes of identification and verification was achieved based on the equalization of fuzzy histograms. A mixture of Hierarchical Centroid and Gradient Histograms was applied to extract features. Both enhancement stages were evaluated using 6-fold stratified cross-validation. The results were tested with KNN and also with SVM; KNN's predictions on test data proved considerably more accurate. Employing stratified 6-fold evaluation on all fingers of both hands in the SDUMLA dataset, the …

* Corresponding author.
E-mail addresses: cs.19.12@grad.uotechnology.edu.iq (A. Tahseen Ali), 110014@uotechnology.edu.iq (H.S. Abdullah), mohammad.n.fadhil@uotechnology.edu.iq (M.N. Fadhil).
https://doi.org/10.1016/j.matpr.2021.04.076
2214-7853/© 2021 Elsevier Ltd. All rights reserved.
Please cite this article as: A. Tahseen Ali, H.S. Abdullah and M.N. Fadhil, Finger veins recognition using machine learning techniques, Materials Today: Proceedings, https://doi.org/10.1016/j.matpr.2021.04.076

Table 1
The survey of existing biometric characteristics [3]. (Table body not recoverable from the extraction.)

Table 2
System's finger vein datasets [1].

Dataset name | No. of images | No. of persons | Fingers per person | Images per finger | Image resolution | Format
SDMULA-HMT | 3816 | 106 | 6 (index, ring, middle of both hands) | 6 | 320 × 240 px | .BMP
(The UTFV row could not be recovered from the extraction.)

The preprocessing stage's main advantage is that it organizes the data, making the recognition task easier. All operations relating to the image are referred to as "preprocessing."

a) Increase lustre using Gamma Correction
After capture, the finger image is dark and dim, so the following step is to increase its lustre with gamma modification, by increasing the dynamic range of pixel intensities through controlling the pixels in a nonlinear way. It is accomplished by diminishing co-occurrence matrix homogeneity. This aims to keep the essential distinctions of the surface and make them more noticeable. It significantly enhances image quality by adjusting the normal image brightness in an excellent way.

b) Colour to grey level image using grayscale conversion
Because only one channel is dealt with in the grayscale domain instead of three as in RGB (Red, Green, and Blue), converting a colour image to the grayscale domain reduces the data and improves processing speed. A weighted average method was used to convert the RGB colour image to a grey image; this method was chosen because it produces a purer image in which the colours are not equally weighted. Because pure green is lighter than pure blue and pure red, it has a heavier weight. As can be seen in Eq. (1), pure blue is the darkest of the three and thus gets the least weight [10]:

Grayscale = 0.2989 Red + 0.5870 Green + 0.1140 Blue   (1)

c) Contrast enhancement using Histogram Equalization
The principal purpose of HE is to level the probability intensity function of the input image and remap the grey levels to create a treated image with enhanced contrast. It meaningfully changes the average brightness of the processed image with regard to the initial image. It can also introduce noise and density saturation effects, which appear as a loss of image details and give the processed image an unnatural appearance [11].

d) Resizing and Cropping (R&C)
R&C are basic image editing functions. Both must be carefully considered because they have the potential to degrade image quality. Resizing an image alters its dimensions, affecting file size (and thereby image quality). Cropping often involves removing a portion of the original image, resulting in the loss of some pixels.

3.3. Feature extraction using LDA

LDA is a conventional analytical technique. It depends on linear combinations of variables to distinguish among classes, resulting in linear decision boundaries. The technique searches for a linear transformation that maximizes class separability in a reduced dimensional space [12]. Distinctive essentials that merge linear features and determine the axes that maximize the variance can achieve dimension reduction, calculated according to Eq. (2) below:

y = k1 + a·x1 + b·x2 + … + a·xn   (2)

3.4. Proposed system classifiers

Machine learning classifiers, together with the feature extraction techniques, are critical in assessing the overall effectiveness of the recognition model. This is a classification problem, since we want to classify finger vein images and determine which person each image belongs to. As a result, the following successful supervised classification machine learning algorithms will be used:

a) Naïve Bayes (NB)
The Naïve Bayes classifier is a simple probabilistic classifier based on Bayes' Theorem and strict independence assumptions, as shown in Eq. (3), with all features being equally independent [13]. Using the NB classifier, from the prior probability that a feature belongs to a class, the feature is assigned to the class of highest posterior probability. The class with the highest posterior likelihood is the prediction output. This classifier accurately and quickly predicts the class of the test data set, and it also performs well in multi-class prediction [13].

p(c|x) = p(x|c) p(c) / p(x)   (3)

where
p(c|x): the posterior probability of class (c, target) given predictor (x, attributes).
p(c): the prior probability of the class.
p(x|c): the likelihood, which is the probability of the predictor given the class.
p(x): the prior probability of the predictor.

b) k-Nearest Neighbor (KNN)
KNN is a supervised learning method that allows machines to classify objects, problems, or situations using data that has already been fed into them [14]. An unlabeled vector (a query or test point) is classified by assigning the label that appears most frequently among the k training samples closest to that query point, where k is a user-defined constant in the classification process. For continuous variables, the Euclidean distance is used as the distance metric, measured using Eq. (4) below [14]:

d(x, y) = √( Σᵢ₌₁ⁿ (aᵢ(x) − aᵢ(y))² )   (4)

c) Random Forest (RF)
Random Forest Classification (RFC) is a decision-tree-based supervised classification technique for machine learning [15]. Each tree in the collection is built by randomly selecting a small group of features to split on at each node, and then choosing the best split based on these features in the training set. Each tree gives the particular feature vector a vote, and the forest selects the class with the most votes for each feature vector. It is easy to build and use for prediction, runs quickly on large datasets, estimates missing data well, and maintains accuracy even when a large percentage of the data is missing.
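As a rough illustration of preprocessing steps a)–d) above, the following NumPy-only sketch applies gamma correction, the weighted-average grayscale conversion of Eq. (1), histogram equalization, and cropping with nearest-neighbour resizing. The gamma value, crop box, and output size are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def gamma_correction(img, gamma=0.7):
    """Nonlinearly remap pixel intensities; gamma < 1 brightens dark images."""
    norm = img.astype(np.float64) / 255.0
    return (255.0 * norm ** gamma).astype(np.uint8)

def to_grayscale(rgb):
    """Weighted average per Eq. (1): 0.2989 R + 0.5870 G + 0.1140 B."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return (rgb.astype(np.float64) @ weights).astype(np.uint8)

def equalize_histogram(gray):
    """Classic histogram equalization: remap grey levels through the CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]

def crop_and_resize(gray, box, out_shape):
    """Crop to box=(top, bottom, left, right), then nearest-neighbour resize."""
    t, b, l, r = box
    roi = gray[t:b, l:r]
    rows = (np.arange(out_shape[0]) * roi.shape[0] / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * roi.shape[1] / out_shape[1]).astype(int)
    return roi[rows][:, cols]
```

Chaining the four functions on a captured RGB frame follows the order described in the text: brighten, convert to grey, equalize contrast, then crop and resize.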
Begin
  Input: SDMULA-HMT, UTFV datasets
  Pre-processing phase
  …
  Output
End
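To make Eqs. (3) and (4) of Section 3.4 concrete, here is a minimal sketch of the two classifiers they describe. The Gaussian likelihood for p(x|c) is an assumption made for illustration (the paper does not state which likelihood model its NB classifier used), and the toy feature vectors in the usage below are hypothetical.

```python
import numpy as np

class GaussianNaiveBayes:
    """Naive Bayes per Eq. (3): p(c|x) is proportional to p(x|c) p(c), with a
    per-feature Gaussian likelihood (an assumption; the paper does not specify one)."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mean = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(x|c) + log p(c), summed over the independent features
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None, :, :]
                     + (X[:, None, :] - self.mean) ** 2 / self.var).sum(axis=2)
        return self.classes[np.argmax(ll + self.log_prior, axis=1)]

def euclidean(x, y):
    """Eq. (4): d(x, y) = sqrt(sum_i (a_i(x) - a_i(y))^2)."""
    return np.sqrt(((x - y) ** 2).sum())

def knn_predict(X_train, y_train, x_query, k=3):
    """k-nearest-neighbour majority vote using the distance of Eq. (4)."""
    dists = np.array([euclidean(x, x_query) for x in X_train])
    nearest = y_train[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]
```

On two well-separated toy clusters, fitting `GaussianNaiveBayes` and calling `knn_predict` both assign a query point to the cluster it lies nearest to, which is the behaviour Section 3.4 describes.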
For evaluating a model's performance, certain parameters are used to determine its behavior. The results are influenced by the size of the training data, the quality of the finger vein images and, most importantly, the type of machine-learning algorithm used. The following criteria are used to assess the models' efficacy [20]:

Accuracy: the percentage of examples correctly categorized out of all given examples. It is calculated as [20]:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (5)

Precision = TP / (TP + FP)   (6)

Recall = TP / (TP + FN)   (7)

F1 = 2 × (Precision × Recall) / (Precision + Recall)   (8)

Specificity = TN / (TN + FP) × 100%   (10)

Table 3. Classifier results on the SDMULA-HMT (3816 instances) and UTFV (1440 instances) datasets, reporting per classifier (including KNN, RT, JRip and PART) the total, correctly and incorrectly classified instances together with accuracy, precision, recall, F-measure, specificity and error rate. (The individual cell values could not be recovered from the extraction.)
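The evaluation criteria of Eqs. (5)–(8) and (10) can be computed directly from the four confusion-matrix counts of a binary decision; the helper below is a minimal sketch (the error rate is simply the complement of accuracy).

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute the evaluation criteria of Eqs. (5)-(8) and (10) from the
    confusion-matrix counts tp, tn, fp, fn of a binary decision."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Eq. (5)
    precision = tp / (tp + fp)                          # Eq. (6)
    recall = tp / (tp + fn)                             # Eq. (7)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (8)
    specificity = tn / (tn + fp)                        # Eq. (10), as a fraction
    error_rate = 1.0 - accuracy                         # complement of Eq. (5)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "specificity": specificity, "error_rate": error_rate}
```

For a multi-class problem such as person identification, these counts are obtained per class (one-vs-rest) and the per-class metrics are then averaged.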
Fig. 6. Accuracy measured for classifiers.
Fig. 7. Precision measured for classifiers.
Fig. 9. F-measure measured for classifiers.
Fig. 13. Results comparison.
Table 4
Results Comparison.
6. Experimental results
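The conclusions note that training and testing used cross-validation (6-fold stratified cross-validation is mentioned in the literature survey). A minimal, non-stratified k-fold index split can be sketched as follows; the fold count and seed are illustrative.

```python
import numpy as np

def kfold_indices(n_samples, k=6, seed=0):
    """Shuffle sample indices and yield (train_idx, test_idx) pairs for
    k-fold cross-validation; each sample appears in exactly one test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx
```

Each fold's train/test index pair would be used to fit a classifier and accumulate the confusion-matrix counts from which the metrics of Section 5 are averaged.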
7. Results comparison
8. Conclusions

It turns out that the finger veins are one of the biometric features that are most difficult for an imposter to obtain and circumvent. Two datasets in different formats were chosen. Preprocessing was applied to the images to remove noise, enhance quality, and resize and crop the images. Feature extraction is an important phase, and the LDA method was used for feature extraction. Cross-validation was used in the training and testing phase. The machine-learning techniques achieved a range of performances: NB achieved the highest average accuracy, 98.25%, followed by RF with 95.7%.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

[1] K. Syazana-Itqan, A.R. Syafeeza, N.M. Saad, N.A. Hamid, W.H.B.M. Saad, A review of finger-vein biometrics identification approaches, Indian J. Sci. Technol. 9 (32) (2016) 1–9.
[2] K.A. Zidan, S.S. Jumaa, A new finger vein verification method focused on the protection of the template, in: IOP Conference Series: Materials Science and Engineering, vol. 993, no. 1, p. 012108, IOP Publishing, 2020.
[3] H.G. Hong, M.B. Lee, K.R. Park, Convolutional neural network-based finger-vein recognition using NIR image sensors, Sensors 17 (6) (2017) 1297.
[4] Y. Liu, J. Ling, Z. Liu, J. Shen, C. Gao, Finger vein secure biometric template generation based on deep learning, Soft Comput. 22 (7) (2018) 2257–2265.
[5] F. Tagkalakis, D. Vlachakis, V. Megalooikonomou, A. Skodras, A novel approach to finger vein authentication, in: 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 659–662, IEEE, 2017.
[6] Q. Yao, D. Song, X. Xiang, K. Zou, A novel finger vein recognition method based on aggregation of radon-like features, Sensors 21 (5) (2021) 1885.
[7] B.A. Rosdi, N. Mukahar, N.T. Han, Finger vein recognition using principle component analysis and adaptive k-nearest centroid neighbor classifier, Int. J. Integr. Eng. 13 (1) (2021) 177–187.
[8] K. Kapoor, S. Rani, M. Kumar, V. Chopra, G.S. Brar, Hybrid local phase quantization and grey wolf optimization based SVM for finger vein recognition, Multimedia Tools Appl. (2021) 1–39.
[9] R. Khanam, R. Khan, R. Ranjan, Analysis of finger vein feature extraction and recognition using DA and KNN methods, in: 2019 Amity International Conference on Artificial Intelligence (AICAI), pp. 477–483, IEEE, 2019.
[10] K. Padmavathi, K. Thangadurai, Implementation of RGB and grayscale images in plant leaves disease detection – comparative study, Indian J. Sci. Technol. 9 (6) (2016) 1–6.
[11] X. Zhu, X. Xiao, T. Tjahjadi, Z. Wu, J. Tang, Image enhancement using fuzzy intensity measure and adaptive clipping histogram equalization, arXiv preprint arXiv:2101.05922 (2021).
[12] T. Zarra, G.K. Mark, F.C. Galang, V.B. Ballesteros, V. Naddeo, Instrumental odour monitoring system classification performance optimization by analysis of different pattern-recognition and feature extraction techniques, Sensors 21 (1) (2021) 114.
[13] P. Dhakal, P. Damacharla, A.Y. Javaid, V. Devabhaktuni, A near real-time automatic speaker recognition architecture for voice-based user interface, Machine Learn. Knowledge Extraction 1 (1) (2019) 504–520.
[14] A. Bombatkar, G. Bhoyar, K. Morjani, S. Gautam, V. Gupta, Emotion recognition using speech processing using k-nearest neighbor algorithm, Int. J. Eng. Res. Appl. (IJERA) (2014), ISSN 2248–9622.
[15] J.T. Senders, M.M. Zaki, A.V. Karhade, B. Chang, W.B. Gormley, M.L. Broekman, T.R. Smith, O. Arnaout, An introduction and overview of machine learning in neurosurgical care, Acta Neurochirurgica 160 (1) (2018) 29–38.
[16] P. Song, W. Zheng, Feature selection based transfer subspace learning for speech emotion recognition, IEEE Trans. Affective Comput. (3) (2018) 373–382.
[17] A.K. Sandhu, R.S. Batth, Software reuse analytics using integrated random forest and gradient boosting machine learning algorithm, Software: Pract. Experience 51 (4) (2021) 735–747.
[18] P. Dhakal, P. Damacharla, A.Y. Javaid, V. Devabhaktuni, Detection and identification of background sounds to improvise voice interface in critical environments, in: 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 078–083, IEEE, 2018.
[19] B. Alhayani, R. Milind, Face recognition system by image processing, Int. J. Electron. Commun. Eng. Technol. (IJCIET) 5 (5) (2014) 80–90.
[20] A.H. Meftah, Y.A. Alotaibi, S.-A. Selouani, Evaluation of an Arabic speech corpus of emotions: a perceptual and statistical analysis, IEEE Access 6 (2018) 72845–72861.
[21] M.J. Warrens, Kappa coefficients for dichotomous-nominal classifications, Adv. Data Anal. Classification (2020) 1–16.
[22] B. Al Hayan, H. Ilhan, Visual sensor intelligent module based image transmission in industrial manufacturing for monitoring and manipulation problems, J. Intell. Manuf. 32 (2021) 597–610.
[23] B. Al-Hayani, H. Ilhan, Efficient cooperative image transmission in one-way multi-hop sensor network, Int. J. Electr. Eng. Educ. 57 (4) (2020) 321–339.
[24] B. Alhayani, A. Abdallah, Manufacturing intelligent corvus corone module for a secured two way image transmission under WSN, Eng. Comput. 37 (2020) 1–17.

Further Reading

[1] M.E. Rane, U.S. Bhadade, Comparative study of ROI extraction of palmprint, IJCSN Int. J. Comput. Sci. Network 5 (2) (2016).
[2] M. Rane, U. Bhadade, Multimodal score level fusion for recognition using face and palmprint, Int. J. Electr. Eng. Educ. (2020) 1–19.
[3] M. Rane, T. Latne, U. Bhadade, Biometric recognition using fusion, ICDSMLA 2019 (2019) 1320–1329.
[4] B. Alhayani, H. Ilhan, Image transmission over decode and forward based cooperative wireless multimedia sensor networks for Rayleigh fading channels in medical internet of things (MIoT) for remote health-care and health communication monitoring, J. Medical Imaging Health Inf. 10 (2020) 160–168.
[5] B. Alhayani, H.J. Mohammed, I.Z. Chaloob, J.S. Ahmed, Effectiveness of artificial intelligence techniques against cyber security risks apply of IT industry, Mater. Today: Proc. (2021), https://doi.org/10.1016/j.matpr.2021.02.531.
[6] B. Alhayani, S.T. Abbas, D.Z. Khutar, H.J. Mohammed, Best ways computation intelligent of face cyber attacks, Mater. Today: Proc. (2021), https://doi.org/10.1016/j.matpr.2021.02.557.