06 - Chapter 3
3.1.1.1 Fingerprint Acquisition
There are two primary methods of capturing a fingerprint image:
inked (off-line) and live scan (ink-less) [1], [39]. An inked
fingerprint image is typically acquired in the following way: a
trained professional obtains an impression of an inked finger on a
paper, and the impression is then scanned using a flat-bed
document scanner. The live scan fingerprint is a collective term for
a fingerprint image directly obtained from the finger without the
intermediate step of getting an impression on paper. Acquisition of
inked fingerprints is cumbersome; in the context of an identity-authentication
system it is both impractical and socially unacceptable.
The most popular technology for obtaining a live-scan fingerprint
image is based on the optical frustrated total internal reflection
(FTIR) concept [190].
proposed for interfacing the sensor with the computer to capture
fingerprint images [192]. For our research we use an optical
scanner to read fingerprints: the USB-compatible Futronics FS88,
shown in Fig. 3.2. The FS88 fingerprint scanner uses advanced
CMOS sensor technology and precision optics.
The results of a fingerprint scan using the Futronics FS88 and the
designed interface are shown in Fig. 3.3.
Fig. 3.3 and Fig. 3.4 show some live fingerprint scans. Good-quality
fingerprints are shown in Fig. 3.3 (a) and (b). If the person has not
cleaned the finger, or if the finger is wet due to sweating, the result
is a dry or wet fingerprint, as shown in Fig. 3.4 (a) and (b)
respectively. Dry fingerprints have unclear edges and wet
fingerprints have smudged edges; in both cases feature extraction
is difficult, which raises the error rate. The captured fingerprints
are therefore subjected to preprocessing.
Fig. 3.4. Different Quality Fingerprints (a) Dry Fingerprint (b) Wet
Fingerprint (c) Good Quality Fingerprint
The first two fingerprints are poor because the ridge structure is
distorted: dry fingerprints have very weak ridges and wet fingerprints
have smudged edges. Both lead to failures in feature extraction,
resulting in low accuracy.
wetness of the finger and differences in the applied pressure while
scanning the fingerprint.
The preprocessing is a multi-step process. Fingerprint
Preprocessing Steps [1], [3], [62] are as follows:
1. Smoothening Filter.
2. Intensity Normalization.
3. Orientation Field Estimation.
4. Fingerprint Segmentation.
5. Ridge Extraction/ Core point Detection.
6. Thinning / ROI Extraction.
The list above is exhaustive; depending on the application and the
captured data, only a subset of these steps may be used.
Smoothening & Intensity Normalization
Ni(x, y) = M0 + sqrt( V0 * (I(x, y) - Mi)^2 / Vi )    if I(x, y) > Mi

Ni(x, y) = M0 - sqrt( V0 * (I(x, y) - Mi)^2 / Vi )    otherwise      (3.1)
This method gives the output shown in Fig. 3.5; we selected M0 = 100
and V0 = 100 and applied the method discussed above to the full
image.
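Equation (3.1) can be sketched in a few lines of NumPy (an illustrative sketch, assuming Mi and Vi are the mean and variance of the whole image, as in the full-image application above; the function and variable names are ours, not from the original implementation):

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Normalize to target mean m0 and variance v0 per Eq. (3.1)."""
    img = img.astype(np.float64)
    mi, vi = img.mean(), img.var()
    dev = np.sqrt(v0 * (img - mi) ** 2 / vi)     # common deviation term
    # add the deviation above the mean, subtract it below
    return np.where(img > mi, m0 + dev, m0 - dev)

# toy 4x4 "image"; after normalization its mean and variance are m0 and v0
img = np.arange(16, dtype=np.float64).reshape(4, 4)
out = normalize(img)
```

Since the two branches together amount to a linear rescaling around the mean, the output has exactly mean m0 and variance v0.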
To obtain a reliable ridge structure, the most widely used approach
is to go through the gradients of grey intensity [197]. Other
methods are available in the literature, such as filter-bank based
approaches, spectral estimation and waveform projection; however,
gradient-based methods provide better results [36], [197].
Gradient-based techniques also have variations; researchers have
proposed different ways to estimate orientation from gradients. W.
Lee et al. proposed a simple technique based on direct calculation
of orientation from gradients in [198]. In [197], [199] the authors
discussed orientation estimation based on eigenvalues of the local
structure tensor. Bazen [200] discussed PCA- and structure-tensor-based
orientation estimation algorithms based on gradients. In [197]
the authors proposed a modified gradient-based technique which
exploits the fact that the orientation field tends to be continuous
across neighboring regions. They put forward an algorithm that
assigns the orientation of the central point based on the orientations
of neighboring blocks at the four corners and their field strength
(also called coherence). Hong et al. [193] discussed a mechanism to
achieve a smoother orientation field through a continuous vector
field approach: they apply an averaging filter to the continuous
vector field calculated from the local gradient angles. Both
approaches give reasonably good approximations of the orientation
field. Fig. 3.6 (a) shows such a scenario. In [193], to calculate the
orientation of block 'C', the continuous vector field of the
neighboring 5x5 blocks is considered and averaged. In [197] the
authors use the blocks shown in Fig. 3.6 (b): blocks 1, 2, 3 and 4
are used to estimate the orientation of the center block 'C'.
Fig. 3.6. Blocks under Consideration (a) Central Block (b) Neighborhood
Blocks as used in [197].
dryness of the finger. We have proposed an algorithm for orientation
field estimation based on optimized neighborhood averaging [201].
Next we discuss the proposed technique.
A. Pre-Requisites of Proposed Scheme
gsx = Σ_W ( gx^2 - gy^2 )
gsy = Σ_W 2 gx gy      (3.3)

Now,

gxx = Σ_W gx^2      (3.4)

gyy = Σ_W gy^2      (3.5)

gxy = Σ_W gx gy      (3.6)
The terms in the above equations (gxx, gyy and gxy) are estimates
of the variance and cross-covariance of gx and gy, averaged over
the window 'W'. We divide the input fingerprint image into equal-size
blocks of WxW pixels and average over each block independently.
The direction of the orientation field is calculated as follows:
θW = (1/2) tan^-1 [ Σ_{i=1..W} Σ_{j=1..W} 2 gx(i, j) gy(i, j) / Σ_{i=1..W} Σ_{j=1..W} ( gx^2(i, j) - gy^2(i, j) ) ]      (3.7)

CohW = sqrt( (gxx - gyy)^2 + 4 gxy^2 ) / ( gxx + gyy )      (3.8)
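Per block, Eqs. (3.4)-(3.8) amount to summing gradient products and taking half the angle of the doubled-gradient vector. A minimal NumPy sketch (`np.gradient` stands in for whichever gradient operator is used in practice; the names are illustrative):

```python
import numpy as np

def block_orientation(block):
    """Return orientation theta (Eq. 3.7) and coherence (Eq. 3.8) of a block."""
    gy, gx = np.gradient(block.astype(np.float64))   # row (y) and column (x) gradients
    gxx = (gx * gx).sum()                            # Eq. (3.4)
    gyy = (gy * gy).sum()                            # Eq. (3.5)
    gxy = (gx * gy).sum()                            # Eq. (3.6)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)   # Eq. (3.7)
    coh = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2) / (gxx + gyy + 1e-12)  # Eq. (3.8)
    return theta, coh

# synthetic block whose intensity varies along x only ("vertical ridges"):
# the gradient direction is horizontal, so theta ~ 0 and coherence ~ 1
x = np.arange(16)
block = np.tile(np.sin(2 * np.pi * x / 8.0), (16, 1))
theta, coh = block_orientation(block)
```

A coherence near 1 indicates a well-defined local orientation; noisy or isotropic blocks give values near 0.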
B. Block Numbering
If we calculate the orientation by Eqns. (3.2) to (3.7) alone, the
resulting orientation field is not smooth. Schemes to address this
are proposed in [193], [197]; here we propose another scheme for
the same problem. We divide the fingerprint image into blocks of
W X W pixels. To find the orientation of a specific block we consider
the neighborhood formed by the surrounding 8 blocks. We then
define a total of 25 locations for the neighborhood blocks and
estimate the orientation of the central block using these 25
neighborhood orientations; we also analyze the feasibility of using
fewer blocks for faster calculation. The blocks are given IDs
(identification numbers) from 1 to 25; their locations are shown in
Fig. 3.8. The block under consideration for estimating orientation is
Block No. 9.
calculated. When the overall image is considered, the calculations
are reduced by the proposed optimization scheme, and the
calculation time required is suitable for real-time applications. In the
next section we discuss the algorithm in detail.
C. Proposed Orientation Estimation Algorithm
(A value of 4 in the column of Block No. 2 means updating the 4th
orientation for Block No. 2.)
Table 3.1
Look-Up Table for Neighborhood Update

Block No. | Location (offset from (i, j)) | Update IDs
1 | (i-16, j-16) | 9, 4, 2, 1
2 | (i, j-16) | 5, 9, 4, 3, 1, 2
3 | (i+16, j-16) | 5, 9, 2, 3
4 | (i-16, j) | 7, 6, 9, 2, 1, 4
5 | (i+16, j) | 8, 7, 9, 3, 2, 5
6 | (i-16, j+16) | 7, 9, 4, 6
7 | (i, j+16) | 8, 6, 5, 9, 4, 7
8 | (i+16, j+16) | 7, 5, 9, 8
9 | (i, j) | 8, 7, 6, 5, 4, 3, 2, 1, 9
10 | (i-8, j-16) | 13, 12, 11, 10
11 | (i+8, j-16) | 13, 12, 10, 11
12 | (i-8, j) | 15, 14, 13, 11, 10, 12
13 | (i+8, j) | 15, 14, 12, 11, 10, 13
14 | (i-8, j+16) | 15, 13, 12, 14
15 | (i+8, j+16) | 14, 13, 12, 15
16 | (i-8, j-8) | 21, 19, 18, 16
17 | (i, j-8) | 20, 17
18 | (i+8, j-8) | 21, 19, 16, 18
19 | (i-8, j+8) | 21, 18, 16, 19
20 | (i, j+8) | 17, 20
21 | (i+8, j+8) | 19, 18, 16, 21
22 | (i-16, j-8) | 20, 19, 17, 22
23 | (i+16, j-8) | 25, 20, 17, 23
24 | (i-16, j+8) | 20, 17, 22, 24
25 | (i+16, j+8) | 20, 23, 17, 25
Consider the first row. Suppose we are calculating the 25 values for
a block with center coordinates (i, j), which becomes Block No. 9.
When we calculate the gradient of Block No. 1 by the procedure
given in the next part, we first update the central block with Update
ID 1, and then reuse this value by adding it to the orientation sets
of Blocks No. 1, 2 and 4 with Update IDs 9, 4 and 2. Hence the 9th,
4th and 2nd block orientations for Blocks No. 1, 2 and 4 need not be
calculated again. Using this optimized mechanism, as we calculate
the 25 values for a block we simultaneously update 84 neighboring
orientations besides its own set of 21 orientations.
Optimized Neighborhood Averaging Algorithm:
All 25 values can be considered, or a subset can be
selected.
10. Display the orientation map. Proceed to the next step of pre-
processing.
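One subtlety in the averaging step: ridge orientation is only defined modulo π, so the 25 neighborhood estimates cannot be averaged as plain angles. A sketch of the averaging step alone (the look-up-table bookkeeping of Table 3.1 is omitted; function names are illustrative) using the usual angle-doubling trick:

```python
import numpy as np

def average_orientations(thetas, weights=None):
    """Average pi-periodic orientation estimates for one block.

    Angles are doubled so that estimates differing by pi reinforce
    instead of cancelling, averaged as unit vectors, then halved back.
    """
    thetas = np.asarray(thetas, dtype=np.float64)
    w = np.ones_like(thetas) if weights is None else np.asarray(weights, np.float64)
    c = (w * np.cos(2.0 * thetas)).sum()
    s = (w * np.sin(2.0 * thetas)).sum()
    return 0.5 * np.arctan2(s, c)

# estimates near 0, two of them wrapped by pi: a naive mean would be ~pi/2,
# but the doubled-angle average correctly stays near 0
est = [0.05, -0.04, np.pi - 0.03, np.pi + 0.02]
avg = average_orientations(est)
```

Passing the coherence values of Eq. (3.8) as `weights` would weight each neighborhood estimate by its field strength.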
Wang et al. [197] used the block with maximum coherence
(Eqn. 3.8), out of a set of four neighboring blocks, to assign the
orientation of the central block. That algorithm is a special case of
the proposed algorithm: if we use only Blocks No. 16, 18, 19 and 21
for orientation estimation and consider only the block with the
highest field strength (coherence value), it becomes the same as
[197]. As we use more blocks, we apply averaging, which gave
better results. We have also studied different combinations of
blocks for estimating orientation, and our algorithm is compared
with the existing techniques [193], [197]. This is discussed in the
next section.
D. Results
4. Block Nos. 9, 12, 13, 16, 17, 18, 19, 20, 21 - with Continuous Vector
Field [193]
The algorithm with Option #3 gave the best results, as these blocks
are in the close vicinity of the central block and the orientation tends
to be continuous over this set of blocks. Option #4, which combines
our technique with the smoothening-filter approach on the
continuous vector field [193], did not give a significant improvement.
Table 3.2 shows the results. It can be observed that using only
squared gradients (SQG) the results are poor: the field has
fluctuations and distortions. The second column shows the result of
using the CVF and smoothening filter [193]; the results are good and
the field estimation is much better, but when optimized neighborhood
averaging is used there is one clear advantage: the field is much
smoother than with [193]. This fact is highlighted in Fig. 3.10, where
the field inside the box for optimized neighborhood averaging (c) is
much smoother than for the other two approaches (a), (b).
Fig. 3.10. Orientation Field Formations (a) Squared Gradient only (b)
Continuous Vector Field (c) Optimized Neighborhood Averaging.
The field in (a) is not smooth, and compared with the orientation shown in
(b), the field in (c) is much smoother.
Table 3.2
Orientation Field Estimation Results

Fingerprint | Squared Gradient [193], [197] | Continuous Vector Field [193] | Optimized Neighborhood Averaging | Coherence Map
FS88 Optical Scanner Image | 63 ms | 62 ms | 515 ms | (image)
FVC 2000 DB1 | 78 ms | 79 ms | 703 ms | (image)
As we use more blocks for calculation, the time required is higher
for our algorithm (ONA). We show the timing comparison
considering all 25 blocks, i.e. the maximum load. Even then the
maximum timing was found to be in the range of 503-750
milliseconds, compared with 55-70 milliseconds for the previous
approaches. When only 9 blocks are used (Option #3) this timing
reduces to 200-400 milliseconds. Thus, although our algorithm takes
more time to execute, it is still attractive for real-time applications.
Here we have discussed a mechanism for orientation field
estimation for a fingerprint image. The proposed algorithm
calculates orientation using gradients and performs neighborhood
averaging for a smoother orientation field. The scheme reuses
calculated orientations by copying values into the appropriate
locations using a unique look-up table. Though more blocks are
used than in existing schemes, the execution time is still attractive
for real-time application. The achieved orientation field gives a
better estimate by closely approximating the actual values;
although the increase in quality is not dramatic, we get a
considerable improvement over the existing methods, which
themselves improve on the conventional mechanism. The scheme
proposed in [197] is a special case of the proposed algorithm. This
algorithm can be integrated with existing fingerprint preprocessing
techniques to achieve a performance improvement.
Fingerprint Segmentation
applications, and reduces errors due to changes in background
conditions, changes in scanning device, differences in applied finger
pressure, etc.
h(x, y, θ, f) = exp{ -(1/2) [ xθ^2 / σx^2 + yθ^2 / σy^2 ] } cos(2π f xθ)      (3.9)

θk = (k - 1) π / m,   k = 1, ..., m      (3.10)

where m denotes the number of orientations (here m = 8). For each
image block of size W x W centered at (X, Y), with W even, we
extract the Gabor magnitude [207] as follows, for k = 1, ..., m:

g(X, Y, θk, f, σx, σy) = | Σ_{x0 = -W/2}^{(W/2)-1} Σ_{y0 = -W/2}^{(W/2)-1} I(X + x0, Y + y0) h(x0, y0, θk, f, σx, σy) |      (3.11)
where I(x, y) denotes the grey level of pixel (x, y). As a result, we
obtain m Gabor features for each W x W block of the image. In
blocks with a ridge pattern, the values of one or several Gabor
features will be higher than the others (those whose filter angle is
similar to the ridge angle of the block). If the block is noisy or
contains non-oriented background, the m values of the Gabor
features will be similar. Therefore, the standard deviation 'Sd' of the
m Gabor features allows foreground and background to be
segmented: if 'Sd' is less than a given threshold, the block is labeled
as a background block; otherwise it is labeled as a foreground
block. This technique was implemented in [207], using an
overlapping block structure to enhance performance.
The threshold for the standard deviation is a crucial factor, and in
[207] it is decided manually. We use the Gabor magnitude feature
in our algorithm as discussed in [207], but extend the algorithm to
decide the threshold automatically [45].
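The per-block test can be sketched as follows (illustrative NumPy code: the kernel is matched to the block size, and the frequency and sigma values are assumptions made for the sketch, not parameters from [207]):

```python
import numpy as np

def gabor_kernel(theta, f=0.1, sx=4.0, sy=4.0, size=17):
    """Even-symmetric Gabor filter of Eq. (3.9) at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xt = x * np.cos(theta) + y * np.sin(theta)       # coordinates rotated by theta
    yt = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-0.5 * (xt ** 2 / sx ** 2 + yt ** 2 / sy ** 2)) * np.cos(2 * np.pi * f * xt)

def gabor_feature_std(block, m=8):
    """Std. deviation of the m Gabor magnitudes (Eq. 3.11) of one block."""
    mags = [abs((block * gabor_kernel((k - 1) * np.pi / m, size=block.shape[0])).sum())
            for k in range(1, m + 1)]
    return float(np.std(mags))

# a ridge-like block (energy concentrated at one orientation) vs. a flat block:
# the ridge block responds strongly at one filter angle, so its std is much larger
half = 8
y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
ridge = np.cos(2 * np.pi * 0.1 * x)
flat = np.ones_like(ridge)
sd_ridge = gabor_feature_std(ridge)
sd_flat = gabor_feature_std(flat)
```

Thresholding `sd` then separates foreground ridge blocks from background, as described above.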
B. Otsu’s Thresholding
Otsu's thresholding method is mainly used to segment an object
from the background [210], [211]. The threshold is determined from
the histogram of the image. The method is based on minimizing the
within-class variance and maximizing the between-class variance of
the grey levels. The object pixels fall in one class and the
background pixels in the other, as shown in Fig. 3.14.
qb(T) = Σ_{i=0}^{T-1} P(i) ,    qo(T) = Σ_{i=T}^{L-1} P(i)      (3.12)
Grey level mean and variance
The next step is to calculate the grey-level mean and variance for
the background and object pixels. The grey-level mean for the
background is given by

μb(T) = ( Σ_{i=0}^{T-1} i P(i) ) / qb(T)      (3.13)

and the grey-level variance for the object pixels is given by

σo^2(T) = ( Σ_{i=T}^{L-1} (i - μo)^2 P(i) ) / qo(T)      (3.17)
The within-class variance is given by

σW^2(T) = qb(T) σb^2(T) + qo(T) σo^2(T)      (3.19)
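Otsu's threshold is simply the T that minimizes Eq. (3.19) over all candidate grey levels. A direct, unoptimized sketch (names are ours; a production implementation would use the standard recursive moment updates rather than recomputing both classes per candidate):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return T minimizing the within-class variance of Eq. (3.19)."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()        # normalized histogram P(i)
    i = np.arange(bins, dtype=np.float64)
    best_t, best_wcv = 0, np.inf
    for t in range(1, bins):
        qb, qo = p[:t].sum(), p[t:].sum()           # class probabilities, Eq. (3.12)
        if qb == 0 or qo == 0:
            continue
        mb = (i[:t] * p[:t]).sum() / qb             # background mean, Eq. (3.13)
        mo = (i[t:] * p[t:]).sum() / qo             # object mean
        vb = (((i[:t] - mb) ** 2) * p[:t]).sum() / qb
        vo = (((i[t:] - mo) ** 2) * p[t:]).sum() / qo   # object variance, Eq. (3.17)
        wcv = qb * vb + qo * vo                     # within-class variance, Eq. (3.19)
        if wcv < best_wcv:
            best_wcv, best_t = wcv, t
    return best_t

# bimodal toy data: dark background around 30, bright object around 200;
# the threshold lands between the two modes
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(30, 5, 500), rng.normal(200, 5, 500)])
t = otsu_threshold(np.clip(data, 0, 255))
```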
4. Scale the Gabor feature map values to fit the range [0-255].
This is performed to generate a Gabor magnitude histogram
with the possible range [0-255]. We find GMmin and GMmax of
the Gabor magnitudes GM(i, j) and compute the scaled values
as follows:

GMscaled(i, j) = 255 * ( GM(i, j) - GMmin ) / ( GMmax - GMmin )      (3.23)
5. Generate the histogram of the scaled Gabor feature map GMscaled(i, j).
6. Generate the threshold for the scaled Gabor feature map using
Otsu's method as discussed above.
7. Generate a segmentation mask M for the image blocks: M(i, j) = 1
means the corresponding block (i, j) is an object block (Region of
Interest, ROI); otherwise it is a background block. Initialize all
values in M to zero.
8. If GMscaled(i, j) < Threshold, mark block (i, j) of the input image as
background, M(i, j) = 0; else mark it as foreground (ROI), M(i, j) = 1.
9. Remove isolated holes (zeros) in the segmented foreground (ROI)
by a filling operation on the mask M(i, j) (marking such blocks as
foreground). Morphological closing can also be performed on the
mask.
10. Display the segmented image by displaying the blocks with
segmentation mask M(i, j) = 1 (ROI blocks).
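Steps 4, 7, 8 and 9 operate purely on the block-level magnitude map. A minimal sketch (the threshold is passed in explicitly here, whereas in the algorithm it comes from Otsu's method in step 6; the hole filling is a simple 8-neighbour fill standing in for the full filling/closing operation):

```python
import numpy as np

def segment_blocks(gm, threshold):
    """Scale Gabor magnitudes to [0, 255] (Eq. 3.23), threshold, fill holes."""
    gm = gm.astype(np.float64)
    scaled = 255.0 * (gm - gm.min()) / (gm.max() - gm.min())
    mask = (scaled >= threshold).astype(np.uint8)    # step 8: 1 = foreground (ROI)
    filled = mask.copy()                             # step 9: fill isolated holes
    for i in range(1, mask.shape[0] - 1):
        for j in range(1, mask.shape[1] - 1):
            # a zero block whose 8 neighbours are all foreground is a hole
            if mask[i, j] == 0 and mask[i - 1:i + 2, j - 1:j + 2].sum() == 8:
                filled[i, j] = 1
    return filled

# toy 5x5 magnitude map: strong ridges everywhere except one noisy block
gm = np.full((5, 5), 9.0)
gm[2, 2] = 0.0
mask = segment_blocks(gm, threshold=128)   # the hole at (2, 2) gets filled
```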
We demonstrate this algorithm by applying it to a fingerprint
image of 320 X 480 pixels scanned by the Futronics FS88 fingerprint
scanner. Fig. 3.15(a) shows the normalized input image and
Fig. 3.15(b) the corresponding Gabor magnitude feature map. Red
spots indicate maximum values; otherwise, the brighter the block,
the higher the magnitude, and background blocks have very low
values, indicated by the black background.
Fig. 3.15(c) shows the segmented fingerprint, obtained by
calculating the threshold from the histogram shown in Fig. 3.15(d)
by Otsu's method. The calculated threshold is 29 in this case. We
have tested this algorithm on various types of input images; the
results are discussed in the next section.
Fig. 3.15. Segmentation Process (a) Normalized Input Image (b) Gabor
Magnitude Feature Map (c) Segmented Fingerprint (d) Histogram for
Gabor Magnitude Feature map (Threshold value is 29)
D. Results
current algorithm. The algorithm performs better than the existing
techniques. The performance of the modified gradient-based
method is also good, but its threshold varies widely and good-quality
output is only obtained after several tries. The threshold cannot be a
fixed value for any of these methods, as is clearly seen in Table 3.4.
The unique advantage of the current algorithm is that the threshold
is decided automatically and need not be chosen manually; the
technique also uses a corrective mechanism that fills isolated blocks
caused by noisy areas, which further improves the quality of the
segmented fingerprint. Fig. 3.16 shows a graphical view of the
segmentation accuracy.
Table 3.3
Comparison of Segmentation Results

Sr. | Algorithm | Accurately Segmented (Number) | Accurately Segmented (%) | Poorly Segmented (Number) | Poorly Segmented (%) | Execution Time (ms)
1 | Gabor Automatic | 207/220 | 94 | 13/220 | 6 | 234
Fig. 3.16. Graphical View of Segmentation Accuracy. Accurately
segmented: Gabor Automatic 94 %, Mean & Variance 91 %, Direction
Based 85 %, Modified Gradients 82 %; poorly segmented: 6 %, 9 %,
15 % and 18 % respectively.
Table 3.4
Fingerprint Segmentation Results

Input Image | Mean Variance method [35] | Direction Based [200], [205] | Mod. Gradient based [206] | Gabor Automatic Thresholding [45]
FS88 (500 dpi) | T=40, Time=10 ms | T=0.340, Time=98 ms | T=300, Time=970 ms | T=29, Time=203 ms
FVC2000 | T=25, Time=10 ms | T=0.240, Time=50 ms | T=170, Time=531 ms | T=29, Time=270 ms
FVC 2002 | T=25, Time=20 ms | T=0.500, Time=93 ms | T=120, Time=980 ms | T=33, Time=225 ms
Fingerprint by Ink on Paper (500 dpi) | T=40, Time=15 ms | T=0.200, Time=62 ms | T=28, Time=828 ms | T=29, Time=259 ms
The processing time is also compared for the above methods, and
we compare the Gabor magnitude segmentation of [35] with our
method. The results are shown in Table 3.3. The proposed
algorithm takes 234 ms on average per fingerprint, since the Gabor
filter requires complex calculation and we use the overlapped
method [35]; the mean- and variance-based method is the fastest.
This is a tradeoff between performance and speed: though the
algorithm needs more execution time, its performance is the best in
its class, and its timings are comparable with the modified gradient-based
method. We can see that the proposed segmentation method
is accurate and fast enough for real-time systems.
The segmented fingerprint will be used for feature extraction, since
a clear fingerprint ridge structure is available and the noisy
background is removed. We are interested in correlation-based
fingerprint systems. Such systems have to find a core point, or a
consistent registration point, on the fingerprint. We discuss the core
point detection method in the next section.
Core point Detection
In correlation-based techniques, rather than detecting minutiae we
go for global matching of the ridge-valley structure; here we try to
match the texture of the fingerprint. Such techniques are robust but
less accurate [1], [3], [48], [80], [82]. For matching the global
ridge structure we need a consistent point for aligning the
fingerprints, called the Registration Point. Fingerprints have various
ridge structures, out of which core points can be detected and used
as Registration Points. For fingerprints that do not have a core
point, we detect a point in a high-curvature region or a region of
low coherence strength instead. An example of a core point is
shown in Fig. 3.17.
Jain et al. [36] described a fingerprint matching system based on
Gabor filters, which uses a circular tessellation around the core
point and extracts the Gabor magnitude in 8 directions as a feature
vector. A similar approach uses a filter bank of Gabor filters to
extract fingerprint feature vectors, which are then used to train a
classifier. Cavusoglu et al. [48] proposed a robust approach that
operates on the grey levels of the fingerprint and calculates a global
feature vector taking the registration point as reference. The
success of all these approaches depends on accurate determination
of the core point (Registration Point). We have developed a core
point detection technique based on multiple features derived from
the fingerprint's ridge structure.
Fig. 3.17. Two Fingerprints of Same Finger Showing the Core Point
A. Proposed Technique
coherence; yellow and blue regions indicate decreasing coherence.
The regions around the core point have low values.
Fig. 3.18. (a) Original Fingerprint (b) Coherence Map (c) Neighborhood
Averaged Coherence Map
Poincare(i, j) = (1/2π) Σ_{k=0}^{N-1} Δ(k)      (3.24)
(3.25)
This index indicates the presence of a core point by evaluating the
orientation angle over a closed digital curve. We first detect the
presence of a core point using the unique mask that operates on
the orientation field (discussed in the next section), and then find
the Poincare index in a selected 7X7 block region. The Poincare
index is high at a high-curvature point or at the core; the map of
the Poincare index over the selected part is used to improve the
accuracy of core point detection. This is shown in Fig. 3.19, where
the actual core point region and the corresponding Poincare index
map are shown: in the core point region the Poincare index is high.
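Eq. (3.24) can be sketched for a closed curve of orientation samples as follows (illustrative code; since the orientation field is π-periodic, each consecutive difference Δ(k) is wrapped into (-π/2, π/2], the usual convention for orientation fields):

```python
import numpy as np

def poincare_index(thetas):
    """Poincare index (Eq. 3.24) of orientations sampled around a closed curve."""
    total = 0.0
    n = len(thetas)
    for k in range(n):
        d = thetas[(k + 1) % n] - thetas[k]   # orientation change along the curve
        while d > np.pi / 2:                  # wrap into (-pi/2, pi/2]
            d -= np.pi
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2.0 * np.pi)

# around a loop-type core the orientation turns by pi -> index = +1/2;
# over a uniform (core-free) region the index is 0
loop = [k * np.pi / 8 for k in range(8)]
idx_loop = poincare_index(loop)
idx_flat = poincare_index([0.3] * 8)
```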
4. Orientation Field Mask [212]
This feature is used to locate the core point region. The method is
based on the fact that core points have a specific orientation field
pattern: the field forms a loop in the region of the core point. We
use a mask that gives the maximum magnitude of convolution in
the region of the core point.
Fig. 3.21. Orientation Field at the Core Point (a) Core Point (b) Loop
Formed by the Orientation Field
is calculated for each element in the orientation map, and the next
step is to threshold this magnitude array.
OM =
 1  1  1  1  0  1  1  1  1
 1  1  1  1  0  1  1  1  1
 1  1  1  1  0  1  1  1  1
 1  1  1  1  0  1  1  1  1
 0  0  0  0  0  0  0  0  0
 1  1  1  1  0  1  1  1  1
 1  1  1  1  0  1  1  1  1
 1  1  1  1  0  1  1  1  1
 1  1  1  1  0  1  1  1  1

Fig. 3.22. Orientation Field Mask
7. Threshold the loop field strength array to locate the core
point; for our method this threshold ranges on the order of
0.34 to 0.45. Take the centroid of the region if it consists of
more than one block.
8. If more than one core point region is located, take the region
towards the upper end, or take the centroid.
9. Separate a 5X5-block area (each block of size 16X16 pixels
here) of the fingerprint and its corresponding parameters.
10. Using these parameters, determine the exact core point
location by the weighted sum discussed below. This first stage
(Steps 1 to 9) gives the core point region shown in Fig. 3.23.
11. We copy the region into a 5X5 array and evaluate the above-discussed
features for it. The final core point is decided by a
weighted sum of the above parameters: at the core point, the
coherence and cosine-component sums should be minimum
and the Poincare index maximum. Hence we compute the final
region weighted sum as

Core[x, y] = Coherence[x, y] + Angular Coherence[x, y] - Poincare[x, y]      (3.30)

The final region map is shown in Fig. 3.24 as the core point region.
We select the minimum value from this map as the core point
block; the red block in the final map in Fig. 3.24 (c) shows the
selected core point. Fig. 3.24 (a) shows the feature vectors and the
final core point block.
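The final selection of Eq. (3.30) is an element-wise combination of the three 5X5 feature maps followed by an arg-min (a toy sketch with synthetic maps; in the algorithm the maps come from the features computed in the previous steps):

```python
import numpy as np

def select_core_block(coherence, angular, poincare):
    """Combine the 5x5 feature maps per Eq. (3.30) and pick the minimum block."""
    core = coherence + angular - poincare            # Eq. (3.30)
    return tuple(int(v) for v in np.unravel_index(np.argmin(core), core.shape))

# synthetic maps: low coherence and high Poincare index at block (2, 3)
coh = np.full((5, 5), 0.9); coh[2, 3] = 0.1
ang = np.full((5, 5), 0.8); ang[2, 3] = 0.2
poi = np.zeros((5, 5));     poi[2, 3] = 0.5
pos = select_core_block(coh, ang, poi)   # -> (2, 3)
```

Subtracting the Poincare term makes blocks with both low coherence and a high index the minimum of the combined map, matching the selection rule above.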
B. Results
This algorithm was implemented on the MS Visual Studio 2005
platform using Visual C# 2.0. The program was tested on an AMD
Athlon 64 processor running at 1.8 GHz with 1.5 GB DDR2 RAM,
under Windows XP SP3. The block size considered is 16 X 16 pixels.
For testing purposes we used the FVC 2000, 2002 and 2004
databases [202]; we also used the Futronics FS88 optical fingerprint
scanner for real-time application. The application was mainly
designed for the Futronics FS88 scanner, with 500 dpi fingerprint
images at a resolution of 320*480 pixels. The fingerprint data using
the FS88 scanner was collected from 60 individuals, 10 fingerprints
per subject from the two thumbs. For testing, the database was
unconstrained, i.e. it contained normal, dry and wet fingerprints.
Fig. 3.24. (a) Core Point Feature Vectors (b) Selected Fingerprint (c)
Fingerprint with Marked Core Point
The fingerprints were of all types: with a clear core as well as
without a core, with an arch or a high-curvature region. Overall the
conditions were random. One extra set was created for fingerprints
containing a clear core point, as shown in Fig. 3.24 (b). A total of
200 tests were performed on the collected fingerprints, with the
following results.
Table 3.5
Core Point Detection Test Results

Parameter | FS88 Database | FVC 2002, 2004 | Fingerprints with clear core point
Accuracy (%) | 84 | 68 | 98
Average Error (pixels) | 5.57 | 6.13 | 2.50
Average Execution Time (ms) | 500 | 490 | 520
Different test cases are shown in Table 3.6. We have tested this
algorithm on different types of fingerprints: fingerprints having
clear core points (Cases 1 & 4), fingerprints having a high-curvature
region but no clear core point (Case 5) and fingerprints having a
weaker curvature region (Cases 2 & 3).
In case of a clear core point, the region is concisely given by the
orientation field mask output, and the final core point detection is
accurate; the observed average pixel error is 2.5 pixels (distance
from the actual core point).
In case of fingerprints with high-curvature regions but no core
point, we have to decide the location of the high-curvature point. In
Case 5 the orientation field mask output shows a broader region.
The average pixel error was observed in the range of 4-6 pixels,
depending on the database used.
In case of fingerprints having no core point and only a weak or
low-curvature region, and for dry fingerprints, it becomes very
difficult to find the registration point; we have to rely on the
coherence field strength only, or, depending on the quality of the
fingerprint, we select the centroid. In such cases the average pixel
error is very high (> 6 pixels) and the algorithm may even fail. We
use the centroid of the fingerprint if the algorithm fails to detect the
core point.
Finally, the area surrounding the selected registration point (core
point) is selected for feature extraction; we select a 144X144-pixel
region as the ROI.
Table 3.6
Core Point Detection Results for Different Fingerprints

Columns: Segmented Fingerprint | Coherence Map | Orientation Field Mask Output (Thresholded) | Core Point Feature Vector Thresholded Output | Segmented Core Point Region
Rows (1)-(5): result images for the five test cases.