Personal Identification Based on Iris Texture Analysis

ABSTRACT

With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.

INTRODUCTION

Authentication (from Greek αυθεντικός, "real or genuine") is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the subject are true ("authentification" is a French variant of this word). This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging claims to be, or assuring that a computer program is a trusted one.

Biometrics comprises methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. In computer science, in particular, biometrics is used as a form of identity access management and access control. It is also used to identify individuals in groups that are under surveillance.

Biometric characteristics can be divided into two main classes:

 Physiological are related to the shape of the body. Examples include, but are not limited to, fingerprint, face recognition, DNA, palm print, hand geometry, iris recognition (which has largely replaced retina scanning), and odor/scent.
 Behavioral are related to the behavior of a person. Examples include, but are not limited to, typing rhythm, gait, and voice. Some researchers [1] have coined the term behaviometrics for this class of biometrics.

Strictly speaking, voice is also a physiological trait because every person has a different vocal tract, but voice recognition is mainly based on the study of the way a person speaks, so it is commonly classified as behavioral.

It is possible to assess whether a human characteristic can be used for biometrics in terms of the following parameters [2]:

 Universality – each person should have the characteristic.
 Uniqueness – how well the biometric separates one individual from another.
 Permanence – measures how well a biometric resists aging and other variance over time.
 Collectability – ease of acquisition for measurement.
 Performance – accuracy, speed, and robustness of the technology used.
 Acceptability – degree of approval of a technology.
 Circumvention – ease of use of a substitute.

A biometric system can operate in the following two modes:

 Verification – a one-to-one comparison of a captured biometric with a stored template to verify that the individual is who he claims to be. Can be done in conjunction with a smart card, username, or ID number.
 Identification – a one-to-many comparison of the captured biometric against a biometric database in an attempt to identify an unknown individual. The identification only succeeds in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold.

The first time an individual uses a biometric system is called enrollment. During the
enrollment, biometric information from an individual is stored. In subsequent uses,
biometric information is detected and compared with the information stored at the time
of enrollment. Note that it is crucial that storage and retrieval of such systems
themselves be secure if the biometric system is to be robust.

The first block (sensor) is the interface between the real world and the system; it has to acquire all the necessary data. Most of the time it is an image acquisition system, but it can change according to the characteristics desired.

The second block performs all the necessary pre-processing: it has to remove artifacts from the sensor, enhance the input (e.g., removing background noise), apply some kind of normalization, etc. In the third block, the necessary features are extracted. This step is important, as the correct features need to be extracted in the optimal way.

A vector of numbers or an image with particular properties is used to create a template. A template is a synthesis of the relevant characteristics extracted from the source. Elements of the biometric measurement that are not used in the comparison algorithm are discarded in the template to reduce the file size and to protect the identity of the enrollee.

If enrollment is being performed, the template is simply stored somewhere (on a card, within a database, or both). If a matching phase is being performed, the obtained template is passed to a matcher that compares it with other existing templates, estimating the distance between them using some algorithm (e.g., Hamming distance). The result is then output for the specified use or purpose (e.g., entrance to a restricted area).
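
To make the matching step concrete, here is a minimal sketch of both operating modes, assuming binary bit-string templates and an illustrative decision threshold of 0.32; the template format, threshold, and function names are assumptions for illustration, not part of the original text.

    import numpy as np

    def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Fraction of disagreeing bits between two binary templates."""
        return np.count_nonzero(a != b) / a.size

    def verify(probe, enrolled, threshold=0.32):
        """Verification (one-to-one): does the probe match the claimed identity?"""
        return hamming_distance(probe, enrolled) <= threshold

    def identify(probe, database, threshold=0.32):
        """Identification (one-to-many): best-matching identity, or None."""
        best_id, best_d = None, 1.0
        for identity, template in database.items():
            d = hamming_distance(probe, template)
            if d < best_d:
                best_id, best_d = identity, d
        return best_id if best_d <= threshold else None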

The basic block diagram of a biometric system

Performance

The following are used as performance metrics for biometric systems: [3]

False accept rate or false match rate (FAR or FMR) – the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs that are incorrectly accepted.

False reject rate or false non-match rate (FRR or FNMR) – the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs that are incorrectly rejected.

Receiver operating characteristic or relative operating characteristic (ROC) – the ROC plot is a visual characterization of the trade-off between the FAR and the FRR. In general, the matching algorithm performs a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match.

If the threshold is reduced, there will be fewer false non-matches but more false accepts. Correspondingly, a higher threshold will reduce the FAR but increase the FRR. A common variation is the detection error trade-off (DET) plot, which is obtained using normal deviate scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors).

Equal error rate or crossover error rate (EER or CER) – the rate at which accept and reject errors are equal, obtained from the ROC plot by taking the point where FAR and FRR have the same value. The EER is a quick way to compare the accuracy of devices with different ROC curves; the lower the EER, the more accurate the system is considered to be.
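
To make the threshold trade-off concrete, the following sketch estimates FAR, FRR, and the EER from two lists of comparison distances; the score values and threshold grid are made-up illustrations, not measurements from any real system.

    import numpy as np

    # Distances: smaller means more similar.
    genuine = np.array([0.18, 0.22, 0.25, 0.30, 0.35])   # same-subject comparisons
    impostor = np.array([0.40, 0.44, 0.47, 0.52, 0.60])  # different-subject comparisons

    def far_frr(threshold):
        far = np.mean(impostor <= threshold)  # invalid inputs incorrectly accepted
        frr = np.mean(genuine > threshold)    # valid inputs incorrectly rejected
        return far, frr

    # Sweep thresholds; the EER is where the FAR and FRR curves cross.
    thresholds = np.linspace(0.0, 1.0, 1001)
    rates = np.array([far_frr(t) for t in thresholds])
    eer_index = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
    print("EER ~", rates[eer_index].mean(), "at threshold", thresholds[eer_index])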

Failure to enroll rate (FTE or FER) – the rate at which attempts to create a template from an input are unsuccessful. This is most commonly caused by low quality inputs.

Failure to capture rate (FTC) – within automatic systems, the probability that the system fails to detect a biometric input when presented correctly.

Template capacity – the maximum number of sets of data that can be stored in the system.

Current, emerging and future applications of biometrics

Proposal calls for biometric authentication to access certain public networks


John Michael (Mike) McConnell, a former vice admiral in the United States Navy, a former Director of US National Intelligence, and Senior Vice President of Booz Allen Hamilton, promoted the development of a future capability to require biometric authentication to access certain public networks in his keynote speech [4] at the 2009 Biometric Consortium Conference.

A basic premise in the above proposal is that the person who has uniquely authenticated themselves using biometrics with the computer is in fact also the agent performing potentially malicious actions from that computer. However, if control of the computer has been subverted, for example because the computer is part of a botnet controlled by a hacker, then knowledge of the identity of the user at the terminal does not materially improve network security or aid law enforcement activities.

Issues and concerns

Privacy and discrimination

Data obtained during biometric enrollment could be used in ways the enrolled individual
does not consent to.

Danger to owners of secured items

When thieves cannot get access to secure properties, there is a chance that the thieves
will stalk and assault the property owner to gain access. If the item is secured with a
biometric device, the damage to the owner could be irreversible, and potentially cost
more than the secured property. For example, in 2005, Malaysian car thieves cut off the
finger of a Mercedes-Benz S-Class owner when attempting to steal the car.[5]

Cancelable biometrics

One advantage of passwords over biometrics is that they can be re-issued. If a token or
a password is lost or stolen, it can be cancelled and replaced by a newer version. This
is not naturally available in biometrics.

If someone’s face is compromised in a database, they cannot cancel or reissue it. Cancelable biometrics is a way to incorporate protection and replacement features into biometrics. It was first proposed by Ratha et al. [6]

Several methods for generating cancelable biometrics have been proposed. The first fingerprint-based cancelable biometric system was designed and developed by Tulyakov et al. [7] Essentially, cancelable biometrics perform a distortion of the biometric image or features before matching. The variability in the distortion parameters provides the cancelable nature of the scheme. Some of the proposed techniques operate using their own recognition engines, such as Teoh et al. [8] and Savvides et al., [9] whereas other methods, such as Dabbah et al., [10] take advantage of the advances in well-established biometric research for their recognition front-end. Although this increases the restrictions on the protection system, it makes the cancelable templates more accessible for available biometric technologies.

International trading of biometric data

Many countries, including the United States, already trade biometric data; a 2009 testimony on “biometric identification” made before the US House Appropriations Committee, Subcommittee on Homeland Security, by Kathleen Kraninger and Robert A. Mocny documents this practice [11]. According to an article written by S. Magnuson in the National Defense Magazine, the United States Defense Department is under pressure to share biometric data [12]. To quote that article:

Miller (a consultant to the Office of Homeland Defense and America’s security affairs) said the United States has bilateral agreements to share biometric data with about 25 countries. Every time a foreign leader has visited Washington during the last few years, the State Department has made sure they sign such an agreement.

Governments are unlikely to disclose full capabilities of biometric deployments

Certain members of the civilian community are worried about how biometric data is used. Unfortunately, full disclosure may not be forthcoming to the civilian community [13].

Iris Recognition 
Iris recognition is a method of biometric authentication that uses pattern-recognition
techniques based on high-resolution images of the irises of an individual's eyes.

Not to be confused with another, less prevalent, ocular-based technology, retina scanning, iris recognition uses camera technology, with subtle infrared illumination reducing specular reflection from the convex cornea, to create images of the detail-rich, intricate structures of the iris. Converted into digital templates, these images provide mathematical representations of the iris that yield unambiguous positive identification of an individual.

Iris recognition efficacy is rarely impeded by glasses or contact lenses. Iris technology has the smallest outlier group (those who cannot use or enroll) of all biometric technologies. Because of its speed of comparison, iris recognition is the only biometric technology well-suited for one-to-many identification. A key advantage of iris recognition is its stability, or template longevity: barring trauma, a single enrollment can last a lifetime.

Breakthrough work to create the iris-recognition algorithms required for image acquisition and one-to-many matching was pioneered by John G. Daugman, PhD, OBE (University of Cambridge Computer Laboratory). These were utilized to effectively debut commercialization of the technology in conjunction with an early version of the IrisAccess system designed and manufactured by Korea's LG Electronics. Daugman's algorithms are the basis of almost all currently (as of 2006) commercially deployed iris-recognition systems. (In tests where the matching thresholds are, for better comparability, changed from their default settings to allow a false-accept rate in the region of 10^-3 to 10^-4 [1], the IrisCode false-reject rates are comparable to those of the most accurate single-finger fingerprint matchers [2].)

Visible Wavelength (VW) vs Near Infrared (NIR) Imaging


The majority of iris recognition benchmarks are implemented in near infrared (NIR) imaging, using light sources with wavelengths around 750 nm. This is done to avoid light reflections from the cornea, which make the captured images very noisy. Such images are challenging for feature extraction procedures and consequently hard to recognize at the identification step. Although NIR imaging provides good quality images, it loses pigment melanin information, which is a rich source of information for iris recognition.
Melanin, also known as a chromophore, mainly consists of two distinct heterogeneous macromolecules, called eumelanin (brown-black) and pheomelanin (yellow-reddish) [3], [4]. NIR imaging is not sensitive to these chromophores, and as a result they do not appear in the captured images.

In contrast, visible wavelength (VW) imaging keeps the related chromophore information and, compared to NIR, provides rich sources of information mainly coded as shape patterns in the iris. Hosseini et al. [5] provide a comparison between these two imaging modalities and fused the results to boost the recognition rate. An alternative feature extraction method to encode VW iris images was also introduced, which is highly robust to reflectivity terms in the iris. Such fusion results appear to be an alternative approach for multimodal biometric systems that intend to reach high recognition accuracy in large databanks.

Figure: a visible wavelength (VW) iris image and its near infrared (NIR) version.

Operating principle

An iris-recognition algorithm first has to identify the approximately concentric circular outer boundaries of the iris and the pupil in a photo of an eye. The set of pixels covering only the iris is then transformed into a bit pattern that preserves the information that is essential for a statistically meaningful comparison between two iris images. The mathematical methods used resemble those of modern lossy compression algorithms for photographic images. In the case of Daugman's algorithms, a Gabor wavelet transform is used in order to extract the spatial frequency range that contains a good signal-to-noise ratio considering the focus quality of available cameras.

The result is a set of complex numbers that carry local amplitude and phase information for the iris image. In Daugman's algorithms, all amplitude information is discarded, and the resulting 2048 bits that represent an iris consist only of the complex sign bits of the Gabor-domain representation of the iris image. Discarding the amplitude information ensures that the template remains largely unaffected by changes in illumination and virtually unaffected by iris color, which contributes significantly to the long-term stability of the biometric template.
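
The following sketch illustrates the idea of keeping only the sign bits of complex filter responses. The responses here are random placeholders, and the sampling grid (1,024 complex samples yielding 2,048 bits) is chosen only to match the size quoted above; this is not Daugman's exact implementation.

    import numpy as np

    def phase_bits(responses: np.ndarray) -> np.ndarray:
        """Two bits per complex sample: the signs of the real and imaginary
        parts. All amplitude information is discarded, as described above."""
        real_bits = (responses.real >= 0).astype(np.uint8)
        imag_bits = (responses.imag >= 0).astype(np.uint8)
        return np.stack([real_bits, imag_bits], axis=-1).ravel()

    # Placeholder for complex Gabor-domain responses sampled over the iris.
    rng = np.random.default_rng(0)
    demo = rng.normal(size=1024) + 1j * rng.normal(size=1024)
    code = phase_bits(demo)
    assert code.size == 2048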

To authenticate via identification (one-to-many template matching) or verification (one-to-one template matching), a template created by imaging the iris is compared to stored templates in a database. If the Hamming distance is below the decision threshold, a positive identification has effectively been made.

A practical problem of iris recognition is that the iris is usually partially covered by
eyelids and eyelashes. In order to reduce the false-reject risk in such cases, additional
algorithms are needed to identify the locations of eyelids and eyelashes and to exclude
the bits in the resulting code from the comparison operation.
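
A sketch of such a masked comparison, assuming each template carries a validity mask marking non-occluded bits; the names and mask convention are illustrative assumptions.

    import numpy as np

    def masked_hamming(code_a, mask_a, code_b, mask_b):
        """Hamming distance over bits that are valid in both templates;
        eyelid/eyelash bits (mask value 0) are excluded from the comparison."""
        valid = mask_a & mask_b
        n = np.count_nonzero(valid)
        if n == 0:
            return 1.0  # nothing comparable; treat as a non-match
        return np.count_nonzero((code_a != code_b) & valid) / n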

Advantages

The iris of the eye has been described as the ideal part of the human body for biometric
identification for several reasons:

 It is an internal organ that is well protected against damage and wear by a highly
transparent and sensitive membrane (the cornea). This distinguishes it from
fingerprints, which can be difficult to recognize after years of certain types of
manual labor.

 The iris is mostly flat, and its geometric configuration is only controlled by two
complementary muscles (the sphincter pupillae and dilator pupillae) that control
the diameter of the pupil. This makes the iris shape far more predictable than, for
instance, that of the face.

 The iris has a fine texture that—like fingerprints—is determined randomly during embryonic gestation. Even genetically identical individuals have completely independent iris textures, whereas DNA (genetic "fingerprinting") is not unique for about 0.2% of the human population, who have a genetically identical twin.

 An iris scan is similar to taking a photograph and can be performed from about 10 cm to a few meters away. There is no need for the person to be identified to touch any equipment that has recently been touched by a stranger, thereby eliminating an objection that has been raised in some cultures against fingerprint scanners, where a finger has to touch a surface, or retinal scanning, where the eye must be brought very close to a lens (like looking into a microscope lens).

 Some argue that a focused digital photograph with an iris diameter of about 200 pixels contains much more long-term stable information than a fingerprint.

 The first commercially deployed iris-recognition algorithm, John Daugman's IrisCode, has an unprecedented false match rate (better than 10^-11).

 While there are some medical and surgical procedures that can affect the colour
and overall shape of the iris, the fine texture remains remarkably stable over
many decades. Some iris identifications have succeeded over a period of about
30 years.

Disadvantages

 Iris scanning is a relatively new technology and is incompatible with the very
substantial investment that the law enforcement and immigration authorities of
some countries have already made into fingerprint recognition.

 Iris recognition is very difficult to perform at a distance larger than a few meters
and if the person to be identified is not cooperating by holding the head still and
looking into the camera. However, several academic institutions and biometric
vendors are developing products that claim to be able to identify subjects at
distances of up to 10 meters ("standoff iris" or "iris at a distance").

 As with other photographic biometric technologies, iris recognition is susceptible to poor image quality, with associated failure to enroll rates.

 As with other identification infrastructure (national residents databases, ID cards, etc.), civil rights activists have voiced concerns that iris-recognition technology might help governments to track individuals beyond their will.

Security considerations

As with most other biometric identification technology, a still not satisfactorily solved problem with iris recognition is the problem of live-tissue verification. The reliability of any biometric identification depends on ensuring that the signal acquired and compared has actually been recorded from a live body part of the person to be identified and is not a manufactured template. Many commercially available iris-recognition systems are easily fooled by presenting a high-quality photograph of an eye instead of a real eye, which makes such devices unsuitable for unsupervised applications, such as door access-control systems. The problem of live-tissue verification is less of a concern in supervised applications (e.g., immigration control), where a human operator supervises the process of taking the picture.

Methods that have been suggested to provide some defence against the use of fake
eyes and irises include:

 Changing ambient lighting during the identification (switching on a bright lamp), such that the pupillary reflex can be verified and the iris image be recorded at several different pupil diameters

 Analysing the 2D spatial frequency spectrum of the iris image for the peaks caused by the printer dither patterns found on commercially available fake-iris contact lenses

 Analysing the temporal frequency spectrum of the image for the peaks caused by computer displays

 Using spectral analysis instead of merely monochromatic cameras to distinguish iris tissue from other material

 Observing the characteristic natural movement of an eyeball (measuring nystagmus, tracking the eye while text is read, etc.)

 Testing for retinal retroreflection (red-eye effect)

 Testing for reflections from the eye's four optical surfaces (front and back of both cornea and lens) to verify their presence, position, and shape

 Using 3D imaging (e.g., stereo cameras) to verify the position and shape of the iris relative to other eye features

A 2004 report by the German Federal Office for Information Security noted that none of
the iris-recognition systems commercially available at the time implemented any live-
tissue verification technology. Like any pattern-recognition technology, live-tissue
verifiers will have their own false-reject probability and will therefore further reduce the
overall probability that a legitimate user is accepted by the sensor.

Image quality

Image quality is a characteristic of an image that measures the perceived image degradation (typically, compared to an ideal or perfect image). Imaging systems may introduce some amounts of distortion or artifacts in the signal, so quality assessment is an important problem.

In photographic imaging

In digital or film-based photography, an image is formed on the image plane of the camera and then measured electronically or chemically to produce the photograph. The image formation process may be described by the ideal pinhole camera model, where only light rays from the depicted scene that pass through the camera aperture can fall on the image plane. In reality, this ideal model is only an approximation of the image formation process, and image quality may be described in terms of how well the camera approximates the pinhole model.

An ideal model of how a camera measures light is that the resulting photograph should represent the amount of light that falls on each point of the image plane during a certain time interval.

This model is only an approximate description of the light measurement process of a camera, and image quality is also related to the deviation from this model.

In some cases, the image for which quality should be determined is primarily not the
result of a photographic process in a camera, but the result of storing or transmitting the
image. A typical example is a digital image that has been compressed, stored or
transmitted, and then decompressed again. Unless a lossless compression method has
been used, the resulting image is normally not identical to the original image and the
deviation from the (ideal) original image is then a measure of quality. By considering a
large set of images, and determining a quality measure for each of them, statistical
methods can be used to determine an overall quality measure of the compression
method.

In a typical digital camera, the resulting image quality depends on all three factors
mentioned above: how much the image formation process of the camera deviates from
the pinhole model, the quality of the image measurement process, and the coding
artifacts that are introduced in the image produced by the camera, typically by
the JPEG coding method.

By defining image quality in terms of a deviation from the ideal situation, quality
measures become technical in the sense that they can be objectively determined in
terms of deviations from the ideal models. Image quality can, however, also be related
to the subjective perception of an image, e.g., a human looking at a photograph.
Examples are how colors are represented in a black-and-white image, as well as in color
images, or that the reduction of image quality from noise depends on how the noise
correlates with the information the viewer seeks in the image rather than its overall
strength.

Another example of this type of quality measure is Johnson's criteria for determining the necessary quality of an image in order to detect targets in night vision systems.

Subjective measures of quality also relate to the fact that, although the camera's
deviation from the ideal models of image formation and measurement in general is
undesirable and corresponds to reduced objective image quality, these deviations can
also be used for artistic effects in image production, corresponding to high subjective
quality.

Image quality assessment categories

There are several techniques and metrics that can be measured objectively and automatically evaluated by a computer program. These can be classified into full-reference (FR) methods and no-reference (NR) methods. In FR image quality assessment methods, the quality of a test image is evaluated by comparing it with a reference image that is assumed to have perfect quality. NR metrics try to assess the quality of an image without any reference to the original one.

For example, comparing an original image to the output of JPEG compression of that
image is full-reference – it uses the original as reference.
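
A minimal full-reference example is the peak signal-to-noise ratio (PSNR), sketched below for 8-bit images. PSNR is just one common FR metric chosen here for illustration, not one singled out by the text above.

    import numpy as np

    def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
        """PSNR between a reference image and, e.g., its decompressed copy."""
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)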

Image quality factors

 Sharpness determines the amount of detail an image can convey. System sharpness is affected by the lens (design and manufacturing quality, focal length, aperture, and distance from the image center) and sensor (pixel count and anti-aliasing filter). In the field, sharpness is affected by camera shake (a good tripod can be helpful), focus accuracy, and atmospheric disturbances (thermal effects and aerosols). Lost sharpness can be restored by sharpening, but sharpening has limits. Oversharpening can degrade image quality by causing "halos" to appear near contrast boundaries. Images from many compact digital cameras are oversharpened.
 Noise is a random variation of image density, visible as grain in film and pixel-level variations in digital images. It arises from the effects of basic physics (the photon nature of light and thermal energy) inside image sensors. Typical noise reduction (NR) software reduces the visibility of noise by smoothing the image, excluding areas near contrast boundaries. This technique works well, but it can obscure fine, low contrast detail.
 Dynamic range (or exposure range) is the range of light levels a camera can
capture, usually measured in f-stops, EV (exposure value), or zones (all factors of
two in exposure). It is closely related to noise: high noise implies low dynamic range.
 Tone reproduction is the relationship between scene luminance and the
reproduced image brightness.

 Contrast, also known as gamma, is the slope of the tone reproduction curve in a log-log space. High contrast usually involves loss of dynamic range: loss of detail, or clipping, in highlights or shadows.
 Color accuracy is an important but ambiguous image quality factor. Many viewers prefer enhanced color saturation; the most accurate color isn't necessarily the most pleasing. Nevertheless, it is important to measure a camera's color response: its color shifts, saturation, and the effectiveness of its white balance algorithms.
 Distortion is an aberration that causes straight lines to curve near the edges of images. It can be troublesome for architectural photography and metrology (photographic applications involving measurement). Distortion is worst in wide angle, telephoto, and zoom lenses. It is often worse for close-up images than for images at a distance. It can be easily corrected in software.

 Vignetting, or light falloff, darkens images near the corners. It can be significant
with wide angle lenses.
 Exposure accuracy can be an issue with fully automatic cameras and with video cameras where there is little or no opportunity for post-exposure tonal adjustment. Some even have exposure memory: exposure may change after very bright or dark objects appear in a scene.

 Lateral chromatic aberration (LCA), also called "color fringing", including purple fringing, is a lens aberration that causes colors to focus at different distances from the image center. It is most visible near corners of images. LCA is worst with asymmetrical lenses, including ultrawides, true telephotos, and zooms. It is strongly affected by demosaicing.

 Lens flare, including "veiling glare", is stray light in lenses and optical systems caused by reflections between lens elements and the inside barrel of the lens. It can cause image fogging (loss of shadow detail and color) as well as "ghost" images that can occur in the presence of bright light sources in or near the field of view.

 Color moiré is artificial color banding that can appear in images with repetitive
patterns of high spatial frequencies, like fabrics or picket fences. It is affected by lens
sharpness, the anti-aliasing (low pass) filter (which softens the image),
and demosaicing software. It tends to be worst with the sharpest lenses.

 Artifacts – software (especially operations performed during RAW conversion) can cause significant visual artifacts, including data compression and transmission losses (e.g., low quality JPEG), oversharpening "halos", and loss of fine, low-contrast detail.

Personal Identification Based on Iris Texture Analysis

INTRODUCTION

The recent advances of information technology and the increasing requirement for security have led to a rapid development of intelligent personal identification systems based on biometrics. Biometrics [1], [2] employs physiological or behavioral characteristics to accurately identify each subject. Commonly used biometric features include face, fingerprints, voice, facial thermograms, iris, retina, gait, palm-prints, hand geometry, etc. [1], [2]. Of all these biometric features, fingerprint verification has received considerable attention and has been successfully used in law enforcement applications. Face recognition and speaker recognition have also been widely studied over the last 25 years, whereas iris recognition is a newly emergent approach to personal identification [1], [2]. It is reported in [3] that iris recognition is one of the most reliable biometrics.

The human iris, an annular part between the pupil (generally, appearing black in an
image) and the white sclera as shown in Fig. 8, has an extraordinary structure and
provides many interlacing minute characteristics such as freckles, coronas, stripes, etc.
These visible characteristics, which are generally called the texture of the iris, are
unique to each subject [5], [6], [7], [12], [15], [16], [17], [18], [19], [20],
[21]. Individual differences that exist in the development of
anatomical structures in the body result in such uniqueness.

Compared with other biometric features (such as face, voice, etc.), the iris is more
stable and reliable for identification [1], [2], [3]. Furthermore, since the iris is an
externally visible organ, iris-based personal identification systems can be noninvasive to
their users [12], [16], [17], [18], [19], [20], [21], which is of great importance for
practical applications. All these desirable properties (i.e., uniqueness, stability, and
noninvasiveness) make iris recognition a particularly promising solution to security in
the near future.

A typical iris recognition system is schematically shown in Fig. 1. It involves three main modules:

Image acquisition. This captures a sequence of iris images from the subject using a specifically designed sensor. Since the iris is fairly small (its diameter is about 1 cm) and exhibits more abundant texture features under infrared lighting, capturing iris images of high quality is one of the major challenges for practical applications. Fortunately, much work has been done on iris image acquisition [9], [10], [11], [12], [20], [21], which has made noninvasive imaging at a distance possible. When designing an image acquisition apparatus, one should consider three main aspects, namely, the lighting system, the positioning system, and the physical capture system [20]. More recent work on iris imaging may be found on an iris recognition Web site [14].

Iris liveness detection. Being easy to forge and use illegally is a crucial weakness of traditional personal identification methods. Similarly, it is also possible for biometric features to be forged and illegally used. Iris liveness detection aims to ensure that an input image sequence is from a live subject instead of an iris photograph, a video playback, a glass eye, or other artifacts. However, efforts on iris liveness detection are still limited, though it is highly desirable. Daugman [1], [19] and Wildes [21] mentioned this topic in their work, but they only described some possible schemes and did not document a specific method. How to utilize the optical and physiological characteristics of the live eye to implement effective liveness detection remains an important research topic.

Recognition. This is the key component of an iris recognition system and determines the system’s performance to a large extent. Iris recognition produces the correct result by extracting features of the input images and matching these features with known patterns in the feature database.

Such a process can be divided into four main stages: image quality assessment and
selection, preprocessing, feature extraction, and matching. The first stage solves the
problem of how to choose a clear and well-focused iris image from an image sequence
for recognition. Preprocessing provides an effective iris region in a selected image for
subsequent feature extraction and matching.

In addition to recognition, our work on iris-based personal identification also involves iris sensor design [13] and implementation of a specific method for iris liveness detection based on the two schemes described by Daugman [1], [19]. In this paper, we detail a texture analysis-based recognition method. Experimental results on an iris image database including 2,255 image sequences from 213 subjects have demonstrated that the proposed method is highly feasible and effective for personal identification. The novelty of this paper includes the following:

1. In order to select a suitable image from an image sequence for accurate recognition,
an effective scheme for image quality assessment is proposed.

2. A bank of spatial filters, whose kernels are suitable for iris recognition, is defined to
capture local details of the iris so as to produce discriminating texture
features.

3. Using a nonparametric statistical approach, an extensive performance comparison of existing schemes for iris recognition is conducted on a reasonably sized iris database (to the best of our knowledge, this is the first comparative study on iris recognition).

The remainder of this paper is organized as follows: Section 2 briefly summarizes related work. A detailed description of the proposed method for iris recognition is given in Section 3. Section 4 reports experiments and results. Section 5 concludes this paper.

2 RELATED WORK

Using iris patterns as an approach to personal identification and verification goes back to the late 19th century [8], [21], but most work on iris recognition [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [28], [29], [30], [31] was done in the last decade. Existing methods for iris recognition mainly concentrate on iris representation and matching, which is also one of the focuses of this paper. Unlike fingerprints, it is difficult to classify and localize semantically meaningful features in an iris image. From the viewpoint of feature extraction, previous iris recognition methods can be roughly divided into three major categories: phase-based methods [16], [17], [18], [19], zero-crossing representation methods [22], [25], and texture-analysis based methods [20], [23], [28], [29], [30]. Daugman [17], [18], [19] used multiscale quadrature wavelets to extract texture phase structure information of the iris to generate a 2,048-bit iris code and compared the difference between a pair of iris representations by computing their Hamming distance. In [24], Sanchez-Reillo and Sanchez-Avila provided a partial implementation of Daugman's algorithm.

Boles and Boashash [22] calculated a zero-crossing representation of the 1D wavelet transform at various resolution levels of a concentric circle on an iris image to characterize the texture of the iris. Iris matching was based on two dissimilarity functions. Sanchez-Avila and Sanchez-Reillo [25] further developed the method of Boles and Boashash by using different distance measures (such as Euclidean distance and Hamming distance) for matching.

Wildes et al. [20] represented the iris texture with a Laplacian pyramid constructed with four different resolution levels and used the normalized correlation to determine whether the input image and the model image are from the same class. Lim et al. [23] decomposed an iris image into four levels using a 2D Haar wavelet transform and quantized the fourth-level high frequency information to form an 87-bit code. A modified competitive learning neural network (LVQ) was used for classification. Our previous work [28], [29] adopted a well-known texture analysis method (multichannel Gabor filtering) to capture both global and local details in an iris. More recently, Tisse et al. [26] constructed the analytic image (a combination of the original image and its Hilbert transform) to demodulate the iris texture. Emergent frequency images used for feature extraction are in essence samples of the phase gradient fields of the analytic image’s dominant components [27]. Similar to Daugman's algorithm, they sampled binary emergent frequency images to form a feature vector and used Hamming distance for matching. It should be noted that all these algorithms are based on gray images; color information is not used. The main reason is that the most important information for recognition (i.e., texture variations of the iris) is the same in both gray and color images.

A great deal of progress in iris recognition has been made through these efforts; a detailed performance comparison of these algorithms, however, is not trivial. Such an evaluation will be discussed in Section 4. In this paper, we propose a new iris recognition algorithm based on texture analysis. Of particular concern and importance in this method are image quality assessment, feature extraction, and matching. These key issues will be addressed in the following sections.

3 IRIS RECOGNITION

In our framework, an iris recognition algorithm includes four basic modules: image
quality assessment and selection, preprocessing, feature extraction, and matching. Fig.
2 shows how the proposed algorithm works. The solid boxes are the processed data at
different stages and the dashed boxes denote four different processing steps,
respectively. Detailed descriptions of these steps are introduced as follows.

3.1 Image Quality Assessment and Selection

When capturing iris images, one usually obtains a sequence of images rather than a
single image. Unfortunately, not all images in the input sequence are clear and sharp
enough for recognition.

As shown in the top row of Fig. 3, the second image is out of focus, the third one contains many noticeable interlacing lines (especially in regions close to the boundary) caused by eye motion, and the last one is an example of severe occlusion by eyelids and eyelashes. Therefore, it is necessary to select a suitable image of high quality from an input sequence before all other operations. Image quality assessment is an important issue in iris recognition, as the quality of an image strongly affects recognition accuracy. However, efforts on image quality assessment are still limited. Daugman [18] measured the total high frequency power in the 2D Fourier spectrum of an image to assess the focus of the image.

If an image passes a minimum focus criterion, it is used for recognition. However, Daugman did not provide a detailed description of his method. Zhang and Salganicoff [31] analyzed the sharpness of the boundary between the pupil and the iris to determine whether an image is in focus. Both these methods aim to measure the focus of an iris image. The former is a common approach to focus detection that can be used in various applications, whereas the latter considers the specific properties of the iris image.

Here, we present an effective scheme to assess image quality by analyzing the frequency distribution of the iris image. Iris images of low quality can be roughly categorized into three classes, namely, out-of-focus (defocused) images, motion blurred images, and images severely occluded by eyelids and eyelashes. When the subject is far from the focus plane of the camera, a defocused image like Fig. 3b will form. If the subject moves during imaging, a motion blurred image as shown in Fig. 3c will result.

When the subject opens his eye partially, the resulting image as shown in Fig. 3d contains little useful information.

These images often occur in a captured sequence since the eye is in a state of continual motion and noninvasive image acquisition also requires users to adjust their position (hence, body motion). Here, the region of interest in an image is the iris, and we thus focus on only two iris subregions in the horizontal direction as shown in Fig. 3 for further analysis. That is, we will utilize information of the iris image as much as
possible.

Fig. 2. The flowchart of our approach.

Fig. 3.

From the viewpoint of frequency analysis, the spectrum of a defocused iris is greatly dominated by low frequency components, and an iris image having the eye partially opened contains significant middle and high frequency components resulting from the heavy eyelashes.
As far as motion blurred images are concerned, there are two cases. If the iris sensor works in the interlaced scan mode (i.e., a frame is divided into two fields which are captured in an interval of 20 ms or less), the resulting image as shown in Fig. 3c involves obvious interlacing lines (hereinafter called aliasing) in the horizontal direction in the boundary regions. Such aliasing, corresponding to vertical high frequency components in the Fourier spectrum, is more noticeable in regions close to the pupil and the eyelashes because the pupil and the eyelashes generally stand in high contrast against their surroundings. If the iris sensor works in the progressive scan mode (i.e., a complete frame is generated at one time), smearing along the motion direction instead of serious aliasing will occur in the image.

Different from the former, the second kind of motion blurred image lacks middle and high frequency components and has a frequency distribution similar to that of defocused images [32]. In our experiments, motion blurred images belong to the former case since our iris sensor only works in the interlaced scan mode. In comparison with these images of low quality, a clear and properly focused iris image has a relatively uniform frequency distribution. This can be observed in Fig. 3. We thus define the following quality descriptor:

D = (D1, D2), where D1 = F1 + F2 + F3 is the total spectrum power of the iris region and D2 = F2 / (F1 + F3) is the ratio of the middle frequency power to the other frequency power; here, Fi denotes the Fourier power accumulated over the annular frequency band bounded by the corresponding frequency pair. In our experiments, three frequency pairs of (0, 6), (6, 22), and (22, 32) are used. The quality descriptor D thus consists of two discriminating frequency features. The first feature, the total spectrum power, can effectively discriminate clear iris images from severely occluded iris images. The second feature should be larger for a clear image than for a defocused or motion blurred image since the former has much more middle frequency information.
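
As a concrete illustration of this descriptor, the following sketch accumulates Fourier power over the three annular bands listed above for a 64 x 64 iris region (the region size comes from the next paragraph). The use of the raw magnitude-squared spectrum and the radial-frequency units are assumptions for illustration.

    import numpy as np

    # The three frequency pairs given above: (inner, outer) radii of annular bands.
    BANDS = [(0, 6), (6, 22), (22, 32)]

    def quality_descriptor(region: np.ndarray):
        """Compute (D1, D2) for a 64 x 64 iris region."""
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(region))) ** 2
        h, w = region.shape
        v, u = np.mgrid[-h // 2:h // 2, -w // 2:w // 2]
        radius = np.sqrt(u ** 2 + v ** 2)
        f1, f2, f3 = [spectrum[(radius >= lo) & (radius < hi)].sum()
                      for lo, hi in BANDS]
        total_power = f1 + f2 + f3   # D1: total spectrum power
        mid_ratio = f2 / (f1 + f3)   # D2: middle vs. other frequency power
        return total_power, mid_ratio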

A complete diagram of the proposed algorithm for quality assessment is plotted in Fig.
4. We first locate two 64 x 64 iris regions and compute their quality descriptors,
respectively. Then, the mean of the resulting two local quality descriptors is regarded as
an appropriate quality measure of an iris image. For a given quality
descriptor, the SVM method is used to distinguish whether the corresponding iris image
is clear. Here, because defocused images, motion blurred images, and occluded
images do not form a compact cluster in the feature space defined by the quality
descriptor, we use the SVM method to characterize the distribution boundary of the
quality descriptor between low quality images and clear images

(see Fig. 9). Note that the scheme for coarse localization of the pupil is the same as that
introduced in Section 3.2.1. Using the above algorithm, one can accurately assess the
quality of an image. Because the input data is an image sequence in our experiments,
we adopt a simple selection scheme as follows:

1. Compute the quality descriptor of each image in the input sequence.

2. Generate a candidate image set where each image successfully passes the above quality assessment (or whose quality descriptor is close to the decision boundary).

3. Select the image whose quality descriptor is the farthest from the decision boundary among all quality descriptors of the candidate images.
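
A sketch of this selection scheme, using a support vector machine from scikit-learn as the classifier; the training data, kernel choice, and feature values are illustrative assumptions, not the paper's actual configuration.

    import numpy as np
    from sklearn import svm

    # Hypothetical labeled quality descriptors: 1 = clear, 0 = low quality.
    descriptors = np.array([[9.1, 2.4], [8.7, 2.1], [3.2, 0.6], [9.5, 0.4]])
    labels = np.array([1, 1, 0, 0])
    classifier = svm.SVC(kernel="rbf").fit(descriptors, labels)

    def select_best(sequence_descriptors: np.ndarray) -> int:
        """Return the index of the image to keep from the input sequence."""
        scores = classifier.decision_function(sequence_descriptors)
        candidates = np.where(scores > 0)[0]          # images judged clear (step 2)
        pool = candidates if candidates.size else np.arange(len(scores))
        return int(pool[np.argmax(scores[pool])])     # farthest from boundary (step 3)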

The combination of the quality assessment algorithm and the image selection scheme
makes sure that an iris image of high quality is identified from the input sequence. In the
following sections, we will focus on iris representation and matching based on a single
image.

3.2 Image Preprocessing

An iris image, as shown in Fig. 5a, contains not only the region of interest (iris) but also
some “unuseful” parts (e.g., eyelid, pupil, etc.). A change in the camera-to-eye distance
may also result in variations in the size of the same iris. Furthermore, the brightness is
not uniformly distributed because of nonuniform illumination.

Therefore, before feature extraction, the original image needs to be preprocessed to localize the iris, normalize the iris, and reduce the influence of the factors mentioned above. Such preprocessing is detailed in the following subsections.

3.2.1 Iris Localization

The iris is an annular part between the pupil (inner boundary) and the sclera (outer
boundary). Both the inner boundary and the outer boundary of a typical iris can
approximately be taken as circles. However, the two circles are usually not concentric
[17]. We localize the iris using the following simple but effective method.

1. Project the image in the vertical and horizontal directions to approximately estimate the center coordinates of the pupil. Since the pupil is generally darker than its surroundings, the coordinates corresponding to the minima of the two projection profiles are considered as the center coordinates of the pupil.


2. Binarize a 120 x 120 region centered at the point by adaptively selecting a reasonable threshold using the gray-level histogram of this region. The centroid of the resulting binary region is considered as a more accurate estimate of the pupil coordinates. In this binary region, we can also roughly compute the radius of the pupil.

3. Calculate the exact parameters of these two circles using edge detection (the Canny operator in our experiments) and the Hough transform in a certain region determined by the center of the pupil.
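
A rough sketch of these three steps using OpenCV; the Otsu thresholding, window handling, and Hough parameters are illustrative assumptions rather than the paper's actual values. It assumes an 8-bit grayscale input.

    import cv2
    import numpy as np

    def localize_pupil(gray: np.ndarray):
        # Step 1: minima of the column/row projections approximate the pupil center.
        cx = int(np.argmin(gray.sum(axis=0)))   # vertical projection
        cy = int(np.argmin(gray.sum(axis=1)))   # horizontal projection

        # Step 2: binarize a 120 x 120 window; refine the center with the centroid.
        y0, x0 = max(cy - 60, 0), max(cx - 60, 0)
        window = gray[y0:y0 + 120, x0:x0 + 120]
        _, binary = cv2.threshold(window, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        ys, xs = np.nonzero(binary)
        cy, cx = y0 + int(ys.mean()), x0 + int(xs.mean())

        # Step 3: circular Hough transform (OpenCV runs Canny internally;
        # param1 is its upper edge threshold) near the refined estimate.
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                                   param1=100, param2=30,
                                   minRadius=20, maxRadius=120)
        return (cx, cy), circles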

In the above method, the first two steps provide an approach to coarse localization of the pupil which can be used in image quality assessment. In experiments, we perform the second step twice for a reasonably accurate estimate. Compared with the localization method by Wildes et al. [20], where the combination of edge detection and the Hough transform is also adopted, our method approximates the pupil position before edge detection and the Hough transform. This reduces the region for edge detection and the search space of the Hough transform and, thus, results in lower computational demands.

3.2.2 Iris Normalization

Irises from different people may be captured in different sizes and, even for irises from the same eye, the size may change due to illumination variations and other factors. Such elastic deformation in iris texture will affect the results of iris matching. For the purpose of achieving more accurate recognition results, it is necessary to compensate for the iris deformation. Daugman [17], [18], [19] solved this problem by projecting the original iris in a Cartesian coordinate system into a doubly dimensionless pseudo polar coordinate system. The iris in the new coordinate system can be represented in a fixed parameter interval. That is, this method normalizes irises of different sizes to the same size. Similar to this scheme, we counterclockwise unwrap the iris ring to a rectangular block of fixed size: the horizontal coordinate of the block corresponds to the angle around the pupil center, and the vertical coordinate to the fractional radial distance between the inner (pupillary) and outer (limbic) boundaries.
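
A minimal sketch of such unwrapping, assuming circular inner and outer boundaries with a common center, a 64 x 512 output block, and nearest-neighbor sampling; the original method handles non-concentric boundaries, which this simplification ignores.

    import numpy as np

    def unwrap_iris(image, pupil_center, pupil_radius, iris_radius, M=64, N=512):
        """Map the iris ring to an M x N block: columns are angles, rows are
        fractions of the radial distance between the two boundaries."""
        block = np.zeros((M, N), dtype=image.dtype)
        cx, cy = pupil_center
        for X in range(N):
            theta = 2.0 * np.pi * X / N
            for Y in range(M):
                r = pupil_radius + (iris_radius - pupil_radius) * Y / M
                x = int(round(cx + r * np.cos(theta)))
                y = int(round(cy - r * np.sin(theta)))  # minus: counterclockwise on image grid
                if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                    block[Y, X] = image[y, x]
        return block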
3.2.3 Image Enhancement

The normalized iris image has low contrast and may have nonuniform brightness
caused by the position of light sources. All these may affect the subsequent processing
in feature extraction and matching. In order to obtain a more well-distributed texture
image, we first approximate intensity variations across the whole image. The mean of
each 16 x 16 small block constitutes a coarse estimate of the background illumination.
This estimate is further expanded to the same size as the normalized image by bicubic
interpolation.

The estimated background illumination as shown in Fig. 5d is subtracted from the normalized image to compensate for a variety of lighting conditions. Then, we enhance the lighting corrected image by means of histogram equalization in each 32 x 32 region.

Such processing compensates for the nonuniform illumination, as well as improves the contrast of the image. Fig. 5e shows the preprocessing result of an iris image, from which we can see that finer texture characteristics of the iris become clearer than those in Fig. 5c.
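
A sketch of this enhancement chain, assuming the normalized image is 8-bit with dimensions that are multiples of 16 (e.g., 64 x 512); the rescaling to the 0-255 range before equalization is an added assumption.

    import cv2
    import numpy as np

    def enhance(normalized: np.ndarray) -> np.ndarray:
        h, w = normalized.shape
        # Coarse background estimate: mean of each 16 x 16 block,
        # expanded back to full size by bicubic interpolation.
        coarse = normalized.reshape(h // 16, 16, w // 16, 16).mean(axis=(1, 3))
        background = cv2.resize(coarse.astype(np.float32), (w, h),
                                interpolation=cv2.INTER_CUBIC)
        corrected = normalized.astype(np.float32) - background
        corrected = cv2.normalize(corrected, None, 0, 255,
                                  cv2.NORM_MINMAX).astype(np.uint8)
        # Local histogram equalization on each 32 x 32 region.
        out = corrected.copy()
        for y in range(0, h, 32):
            for x in range(0, w, 32):
                out[y:y + 32, x:x + 32] = cv2.equalizeHist(corrected[y:y + 32, x:x + 32])
        return out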

3.3 Feature Extraction

The iris has a particularly interesting structure and provides abundant texture
information. So, it is desirable to explore representation methods which can capture
local underlying information in an iris.

From the viewpoint of texture analysis, local spatial patterns in an iris mainly involve frequency and orientation information. Generally, the iris details spread along the radial direction in the original image, corresponding to the vertical direction in the normalized image (see Figs. 7 and 8). As a result, the differences in orientation information among irises do not seem to be significant. That is, frequency information should account for the major differences between irises from different people. We thus propose a scheme to capture such discriminating frequency information, which reflects the local structure of the iris. In general, the majority of useful information of the iris is in a frequency band of about three octaves [18]. Therefore, a bank of filters is constructed to reliably acquire such information in the spatial domain.

As we know, coefficients of the filtered image effectively indicate the frequency distribution of an image. Two statistic values are thus extracted from each small region in the filtered image to represent local texture information of the iris. A feature vector is an ordered collection of all the features from the local regions. More details of this algorithm are presented as follows.
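
A sketch of this step. The specific statistics chosen here (the mean and the average absolute deviation of the filtered magnitude) and the 8 x 8 block size are assumptions for illustration, since the text does not name them at this point.

    import numpy as np

    def block_features(filtered: np.ndarray, block: int = 8) -> np.ndarray:
        """Two statistics per small region of a filtered image, concatenated
        into an ordered feature vector over all local regions."""
        features = []
        h, w = filtered.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                region = np.abs(filtered[y:y + block, x:x + block])
                m = region.mean()
                features.append(m)                          # first statistic
                features.append(np.abs(region - m).mean())  # second statistic
        return np.asarray(features)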

3.3.1 Spatial Filters

A spatial filter is an optical device which uses the principles of Fourier optics to alter
the structure of a beam of coherent light or other electromagnetic radiation. Spatial
filtering is commonly used to "clean up" the output of lasers, removing aberrations in the
beam due to imperfect, dirty, or damaged optics, or due to variations in the laser gain
medium itself. This can be used to produce a laser beam containing only a
single transverse mode of the laser's optical resonator.

An example diffraction pattern.



In spatial filtering, a lens is used to focus the beam. Because of diffraction, a beam that
is not a perfect plane wave will not focus to a single spot, but rather will produce a
pattern of light and dark regions in the focal plane. For example, an imperfect beam
might form a bright spot surrounded by a series of concentric rings, as in the example diffraction pattern shown above. It can be shown that this two-dimensional pattern is the two-
dimensional Fourier transform of the initial beam's transverse intensity distribution. In
this context, the focal plane is often called the transform plane. Light in the very center
of the transform pattern corresponds to a perfect, wide plane wave. Other light
corresponds to "structure" in the beam, with light further from the central spot
corresponding to structure with higher spatial frequency.

A pattern with very fine details will produce light very far from the transform plane's
central spot. In the example above, the large central spot and rings of light surrounding
it are due to the structure resulting when the beam passed through a circular aperture.

The spot is enlarged because the beam is limited by the aperture to a finite size, and
the rings relate to the sharp edges of the beam created by the edges of the aperture.
This pattern is called an Airy pattern, after its discoverer George Airy.

By altering the distribution of light in the transform plane and using another lens to
reform the collimated beam, the structure of the beam can be altered. The most
common way of doing this is to place an aperture in the beam that allows the desired
light to pass, while blocking light that corresponds to undesired structure in the beam.
In particular, a small circular aperture or "pinhole" that passes only the central bright
spot can remove nearly all fine structure from the beam, producing a smooth transverse
intensity profile, which may be almost a perfect Gaussian beam. With good optics and a
very small pinhole, one could even approximate a plane wave.

In practice, the diameter of the aperture is chosen based on the focal length of the lens,
the diameter and quality of the input beam, and its wavelength (longer wavelengths
require larger apertures). If the hole is too small, the beam quality is greatly improved
but the power is greatly reduced. If the hole is too large, the beam quality may not be
improved as much as desired.
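
As a rough illustration of this trade-off (not from the paper), the focused spot of a Gaussian beam has diameter about 4λf/(πD), and a common rule of thumb sizes the pinhole some 30-50 percent larger than that spot; the numbers below are hypothetical examples:

```python
# Illustrative pinhole sizing for a spatial filter, under the stated
# assumptions; not taken from the paper.
import math

def pinhole_diameter(wavelength_m, focal_length_m, beam_diameter_m, margin=1.5):
    # Focused Gaussian spot diameter: 4 * lambda * f / (pi * D).
    spot = 4.0 * wavelength_m * focal_length_m / (math.pi * beam_diameter_m)
    return margin * spot

# Example: a 2 mm wide 633 nm HeNe beam and an f = 16 mm objective
# call for a pinhole of roughly 10 micrometers.
print(pinhole_diameter(633e-9, 16e-3, 2e-3))  # ~9.7e-06 m
```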

The size of aperture that can be used also depends on the size and quality of the optics.
To use a very small pinhole, one must use a focusing lens with a low f-number, and
ideally the lens should not add significant aberrations to the beam. The design of such a
lens becomes increasingly more difficult as the f-number decreases.

In practice, the most commonly used configuration is to use a microscope objective
lens for focusing the beam, and an aperture made by punching a small, precise hole
in a piece of thick metal foil. Such assemblies are available commercially.

In the spatial domain, one can extract information of an image at a certain orientation
and scale using some specific filters, such as Gabor filters [33], [34], [35], [36], [37].
Recently, Gabor filter based methods have been widely used in computer vision,
especially for texture analysis [35], [36], [37]. Gabor elementary functions are
Gaussians modulated by oriented complex sinusoidal functions. Here,
according to the characteristics of the iris texture, we define new spatial filters to capture
local details of the iris.

The difference between the Gabor filter and the defined filter lies in the modulating
sinusoidal function. The former is modulated by an oriented sinusoidal function,
whereas the latter is modulated by a circularly symmetric sinusoidal function. Their
kernels are given as follows (here, we only consider the even-symmetric Gabor filter):
G(x, y, f) = \frac{1}{2\pi\delta_x\delta_y} \exp\!\left[-\frac{1}{2}\left(\frac{x^2}{\delta_x^2} + \frac{y^2}{\delta_y^2}\right)\right] M_i(x, y, f), \quad i = 1, 2,

M_1(x, y, f) = \cos\!\left(2\pi f (x\cos\theta + y\sin\theta)\right),

M_2(x, y, f) = \cos\!\left(2\pi f \sqrt{x^2 + y^2}\right),

where M_1 and M_2 are the modulating sinusoidal functions of the Gabor filter and the
defined filter, respectively; f is the frequency of the sinusoidal function; θ denotes the
orientation of the Gabor filter; and δx and δy are the space constants of the Gaussian
envelope along the x and y axes. Since its Gaussian envelope need not be circularly
symmetric, the defined filter shows more interest in information in the x or y
direction (determined by δx and δy). This is


greatly different from a Gabor filter which can only provide information of an
image at a certain orientation. Fig. 6 clearly shows the differences between a Gabor
filter and the defined spatial filter. As mentioned earlier, local details of the iris generally
spread along the radial direction, so information density in the angular direction
corresponding to the horizontal direction in the normalized image is higher than that in
other directions, which is validated by our experimental results in Section 4.3.

Thus, we should pay more attention to useful information in the angular direction. The
defined filter can well satisfy such requirements of iris recognition. In other words, the
defined kernel is suitable for iris recognition.
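
A minimal sketch of the two kernels in NumPy; the parameters follow the definitions above (f is the frequency, theta the Gabor orientation, sx and sy the space constants of the Gaussian envelope), while the kernel size is an arbitrary example:

```python
# Sketch of the even-symmetric Gabor kernel and the defined circularly
# symmetric kernel, under the stated assumptions.
import numpy as np

def _grid(size):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    return x, y

def _envelope(x, y, sx, sy):
    return np.exp(-0.5 * (x**2 / sx**2 + y**2 / sy**2)) / (2 * np.pi * sx * sy)

def gabor_kernel(size, f, theta, sx, sy):
    # Oriented modulation: information at one orientation only.
    x, y = _grid(size)
    return _envelope(x, y, sx, sy) * np.cos(2 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta)))

def defined_kernel(size, f, sx, sy):
    # Circularly symmetric modulation: frequency selective in all directions.
    x, y = _grid(size)
    return _envelope(x, y, sx, sy) * np.cos(2 * np.pi * f * np.sqrt(x**2 + y**2))
```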

In our experiments, we find that the upper portion of a normalized iris image
(corresponding to regions closer to the pupil) provides the most useful texture
information for recognition (see Fig. 7).

In addition, eyelids and eyelashes rarely occlude this section. So, we extract features
only in this section (called region of interest, ROI) shown as the region above the dotted
line in Fig. 7. As mentioned above, useful iris information distributes in a specific
frequency range. We therefore use the defined spatial filters in two channels to acquire
the most discriminating iris features. The space constants δx and δy used in the first
channel are 3 and 1.5, and in the second channel 4.5 and 1.5. In a much shorter version
of this method in [30], we vertically divided the ROI into three subregions of the same
size and estimated the energy of each subregion within a frequency band.

These energy measures were used as features. In contrast, using multiple filters with
different frequency responses for the entire ROI can generate more discriminating
features since different irises have distinct
dominant frequencies. This means that the improved scheme would be more effective
than our earlier one [30].

3.3.2 Feature Vector

According to the above scheme, filtering the ROI (48 x 512) with the defined
multichannel spatial filters results in two 48 × 512 filtered images. Each filtered image
is divided into 8 × 8 small blocks, and from each small block, two feature values are
captured. This generates 1,536 feature components. The feature values used in the
algorithm are the mean m and the average absolute deviation σ of the magnitude of
each filtered block, defined as

m = \frac{1}{n} \sum_{(x, y) \in w} |f(x, y)|, \qquad \sigma = \frac{1}{n} \sum_{(x, y) \in w} \Big| |f(x, y)| - m \Big|,

where w is an 8 × 8 block in the filtered image, f(x, y) is the filtered image, n is the
number of pixels in the block w, and m is the mean of the magnitude of the block w.
These feature values are arranged to form a 1D feature vector.
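
A sketch of this feature computation for one filtered channel, assuming a 48 × 512 filtered ROI in a NumPy array; the ordering of the components within the vector is one reasonable choice, not necessarily the authors':

```python
# Block-wise mean and average absolute deviation of the filter magnitude,
# under the stated assumptions.
import numpy as np

def block_features(filtered, block=8):
    mag = np.abs(filtered)
    h, w = mag.shape
    blocks = mag.reshape(h // block, block, w // block, block)
    m = blocks.mean(axis=(1, 3))                                   # mean of |f(x, y)|
    aad = np.abs(blocks - m[:, None, :, None]).mean(axis=(1, 3))   # average absolute deviation
    return np.stack([m, aad], axis=-1).ravel()

# Two filtered channels of the 48 x 512 ROI give
# 2 channels * (48/8) * (512/8) blocks * 2 values = 1,536 components.
```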

3.4 Iris Matching

After feature extraction, an iris image is represented as a feature vector of length 1,536.
To improve computational efficiency and classification accuracy, the Fisher linear
discriminant is first used to reduce the dimensionality of the feature vector, and then
the nearest center classifier is adopted for classification. Two popular methods for
dimensionality reduction are principal component analysis and the Fisher linear
discriminant.

Compared with principal component analysis, the Fisher linear discriminant not only
reduces the dimensionality of features but also increases class separability by
considering both the information of all samples and the underlying structure of each
class.

This is also the reason that Wildes et al. [20] adopted the Fisher linear discriminant
rather than general distance measures for iris matching (though the feature vector
includes only four components in their method). The Fisher linear discriminant
searches for projected vectors that best discriminate different classes in terms of
maximizing the ratio of between-class to within-class scatter. Further details of the
Fisher linear discriminant may be found in [38], [39]. The new feature vector f can be
denoted as:
f = W^{T} V,
where W is the projection matrix and V is the original feature vector derived in Section
3.3.2. The proposed algorithm employs the nearest center classifier defined in (8) for
classification in a low-dimensional feature space.
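
A sketch of this matching stage, using scikit-learn's LinearDiscriminantAnalysis as a stand-in for the Fisher linear discriminant and the cosine distance the paper later selects in Section 4.3.1; all names here are illustrative:

```python
# Dimensionality reduction plus nearest center classification, under the
# stated assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def train(X, y, dim=200):
    # X: training feature vectors (n_samples x 1536); y: class labels (array).
    lda = LinearDiscriminantAnalysis(n_components=dim)
    Z = lda.fit_transform(X, y)                     # f = W^T V per sample
    centers = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}
    return lda, centers

def classify(lda, centers, v):
    f = lda.transform(v.reshape(1, -1)).ravel()
    # Nearest center classifier in the reduced feature space.
    return min(centers, key=lambda c: cosine_distance(f, centers[c]))
```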

It is desirable to obtain an iris representation invariant to translation, scale, and rotation.


In our algorithm, translation invariance and approximate scale invariance are achieved
by normalizing the original image at the preprocessing step. Most existing schemes
achieve approximate rotation invariance either by rotating the feature vector before
matching [17], [18], [19], [22], [24], [26], or by registering the input
image with the model before feature extraction [20]. Since features in our method are
projection values by feature reduction, there is no explicit relation between features and
the original image.

We thus obtain approximate rotation invariance by unwrapping the iris ring at different
initial angles. Considering that the eye rotation is not very large in
practical applications, these initial angle values are -9, -6, -3, 0, 3, 6, and 9 degrees.
This means that we define seven templates which denote the seven rotation angles for
each iris class in the database. When matching the input feature vector with the
templates of an iris class, the minimum of the seven scores is taken as the final
matching distance.
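
A sketch of this rotation-tolerant matching, assuming each enrolled class stores one template per unwrapping angle; dist stands for any distance on the reduced feature vectors:

```python
# Minimum-over-rotations matching, under the stated assumptions.
ANGLES = (-9, -6, -3, 0, 3, 6, 9)   # initial unwrapping angles, in degrees

def match_score(feature, class_templates, dist):
    # class_templates holds one feature vector per angle in ANGLES.
    return min(dist(feature, t) for t in class_templates)

def identify(feature, database, dist):
    # database: {class_id: [template for each angle in ANGLES]}
    return min(database, key=lambda c: match_score(feature, database[c], dist))
```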

4 EXPERIMENTS

This paper presents a new method for identifying individuals from an iris image
sequence. We thus perform a series of experiments to evaluate its performance.
Moreover, we compare the proposed method with some existing methods
for iris recognition and present detailed discussions on the overall experimental results.
Evaluating the performance of biometric algorithms is a difficult issue since it is greatly
influenced by all sources of noise (such as sensor noise and environment noise), the
test database and the evaluation method. Obviously, it is impossible to model all noise
sources and build a test data set including biometric samples from all subjects in the
world.
Thus, using modern statistical methods to measure the performance of biometric
algorithms is a desirable approach. In this paper, the bootstrap [43], which provides
a powerful approach to estimating the underlying distribution of the observed data using
computer-intensive methods, is adopted to estimate the error rates of a
biometric method.

Using confidence intervals of performance measures, the bootstrap can infer how much
the performance estimated from a limited data set may vary over a larger subject
population. The bootstrap is in nature a nonparametric, empirical method.

Given an original sample x including n observed data points drawn from an unknown
probability distribution F, the bootstrap lets us empirically estimate the distribution F
and some characteristic of interest θ(F) associated with F. A key step in the bootstrap is
to generate thousands of random samples (called bootstrap samples) of the same size
as the original sample x by drawing with replacement. Using the resulting bootstrap
samples, one can easily estimate the statistics of interest. More details of the bootstrap
may be found in [40], [41], [43].

We exploit both interval estimation (a confidence interval) and commonly used point
estimation (only a numerical value) of statistical measures to characterize the
performance of the methods for iris recognition. This means that the evaluation is more
accurate and effective. The proposed algorithm is tested in two modes: identification
(i.e., one-to-many matching) and verification (i.e., one-to-one matching). In identification
mode, the algorithm is measured by Correct Recognition Rate (CRR), the ratio of
the number of samples being correctly classified to the total number of test samples. In
verification mode, the Receiver Operating Characteristic (ROC) curve is used to report
the performance of the proposed method.

The ROC curve is a False Match Rate (FMR) versus False NonMatch Rate (FNMR)
curve [3], [4], which measures the accuracy of the matching process and shows the
overall performance of an algorithm. The FMR is the probability of accepting an
impostor as an authorized subject, and the FNMR is the probability of an authorized
subject being incorrectly rejected. Points on this curve denote all possible system
operating states under different trade-offs.
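
For concreteness, a sketch of how the two error rates follow from the genuine (intraclass) and impostor (interclass) matching distances at a single threshold; sweeping the threshold traces out the ROC curve:

```python
# FMR and FNMR from score lists, under the stated assumptions.
import numpy as np

def fmr_fnmr(genuine, impostor, threshold):
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    fmr = np.mean(impostor <= threshold)    # impostors wrongly accepted
    fnmr = np.mean(genuine > threshold)     # genuine users wrongly rejected
    return fmr, fnmr

def roc_points(genuine, impostor, n=100):
    lo = min(np.min(genuine), np.min(impostor))
    hi = max(np.max(genuine), np.max(impostor))
    return [fmr_fnmr(genuine, impostor, t) for t in np.linspace(lo, hi, n)]
```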

4.1 Image Database

Unlike fingerprints and face, there is no common iris database of a reasonable size.
Most existing methods for iris recognition used small image sets for performance
evaluation, and only Daugman’s method has been tested on a larger image set
involving over 200 subjects [3], [19]. Currently, there is also no detailed
comparison among the methods in [16], [17], [18], [19], [20], [21], [22], [23], [24], [25],
[26], [28], [29], [30]. So, we construct an iris image database named CASIA Iris
Database to compare their performance and provide detailed discussions as well.
The CASIA Iris Database includes 2,255 iris image sequences from 213 subjects
(note that this is currently the largest iris database we
can find in the public domain). Each sequence of the CASIA Iris Database contains 10
frames acquired in about half a second. All images are captured using a homemade
digital optical sensor [13]. This sensor works in PAL mode (i.e.,
25 frames/second) and provides near infrared illumination under which the iris exhibits
more abundant texture features. The subject needs to position himself about 4 cm
in front of the sensor to obtain a clear iris image. Moreover, a surface-coated
semitransparent mirror is placed in front of the lens so that a person can see and keep
his eye in the center of the sensor. The captured iris images are 8-bit gray
images with a resolution of 320 x 280. In general, the diameter of the iris in an image
from our database is greater than 200 pixels.

This makes sure that there is enough texture information for reliable recognition. During
image acquisition, short-sighted subjects are requested to take off their
eyeglasses to obtain high-quality iris images. However, contact lenses are an
exception. In our database, about 5.2 percent of the subjects wear contact lenses. The profile
of the database is shown in Table 1. The subjects consist of 203 members of the CAS
Institute of Automation and 10 visiting students from Europe.

The CASIA Iris Database is gradually expanded to contain more images from more
subjects. Currently, it is composed of two main parts. The first one (namely our earlier
database [29]) contains 500 sequences from 25 different people. Each individual
provides 20 sequences (10 for each eye) captured in two different stages. In the first
stage, five sequences of each eye are acquired.

Four weeks later, five more sequences of each eye are obtained. The other part
contains 1,755 sequences from 188 subjects, which form 256 iris classes (note that
not every individual provides iris image sequences of both eyes). These sequences are
captured in three stages. In the first stage, three image sequences of each eye are
obtained. One month later, at least two sequences of each eye are captured (often
three or four sequences per eye). In the third stage (i.e., three months later), 30 out of
188 subjects provide 138 sequences again.

The total number of iris classes is thus 306 (2 × 25 + 256). Since all existing methods
for iris recognition only use one image for matching, we make use of the proposed
scheme for image quality assessment and selection described in Section 3.1 to form an
image set for algorithm comparison. The resulting set includes 2,255 images
corresponding to 306 different classes. Some samples from this set are shown
in Fig. 8.

4.2 Performance Evaluation of Image Quality Assessment

In order to evaluate the performance of the proposed algorithm for image quality
assessment, we manually collect 982 clear images, 475 motion blurred images,
431 occluded images, and 429 defocused images from the CASIA Iris Database. One
third of each image class are used for training and the rest for testing. Fig. 9 shows the
distributions of the quality descriptor for different types of images in training and testing
stages and the two axes respectively denote two feature components of the quality
descriptor (see (1) for definition).

From this figure, we can draw the following conclusions:

1. The clear images are well clustered and separated from images from the other three
classes, indicating good discriminating power of the defined quality descriptor. This is
further confirmed by the results
shown in Table 2.

2. Severely occluded iris images are rich in middle and high frequency components
caused by the eyelashes, which is an important factor in discriminating such
images from clear images. The results in Fig. 9 confirm this observation.

3. The quality descriptors of motion blurred images are similarly distributed as those of
defocused images. The former include many high frequency components
in the vertical direction inherently caused by the scan mode of the CCD camera,
whereas the latter are governed by low frequency components. Since
they all lack middle frequency components, the corresponding ratios of middle
frequency power to other frequency power are close.

Table 2 illustrates the classification results for the training and testing samples. The
results clearly demonstrate the effectiveness of the proposed scheme for image
quality assessment. Both Daugman’s method [18] (measuring the total high frequency
power of the Fourier spectrum of an iris image) and the method by Zhang and
Salganicoff [31] (detecting the sharpness of the pupil/iris boundary)
concentrate on assessing the focus of an iris image.

However, our algorithm can discriminate clear images from not only defocused images
but also from motion blurred images and severely occluded images. The results indicate
that the proposed scheme should be highly feasible in practical applications.

4.3 Experimental Results of Iris Recognition

For each iris class, we choose three samples taken at the first session for training and
all samples captured at the second and third sessions serve as test samples. This is
also consistent with the widely accepted standard for biometrics algorithm testing [3],
[4]. Therefore, there are 918 images for training and 1,237 images for testing (to satisfy
the requirement of using images captured at different stages for training and testing,
respectively, 100 images taken at the first session are not used in the experiments).
Computing a point estimation of a performance measure has been extensively adopted
in pattern recognition and is also easy to use. Here, it is necessary to briefly introduce
how we obtain the interval estimation of a performance measure using the bootstrap. In
our experiments, the CRR, FMR, and FNMR are three performance measures. A major
assumption of the bootstrap for estimating an unknown distribution F is that the
observations {x1, x2, ..., xn} in an original sample x are independent and identically
distributed. But often, the case is just the contrary. Taking the FNMR as an
example, if more than one matching pair per iris class is available in the known sample
(i.e., at least two test samples per class), the observed data is dependent. Moreover,
the bootstrap demands sampling with replacement from n observations of the known
sample to form thousands of bootstrap samples.

This implies that the observed data in a bootstrap sample is a subset of the original
observations. To satisfy these two requirements of the bootstrap, we compute
confidence intervals of performance measures as follows:

1. Construct a template set including 306 different classes using 918 iris images.

2. Create a test set containing 306 iris classes by drawing known iris classes with
replacement. This means that one iris class likely appears multiple
times in the test set.

3. For each iris class in the resulting test set, only one test sample is chosen at random
from all available test samples of this class.

4. Compute the CRR, FMR, and FNMR using the constructed template and test set.
5. Repeat Steps 2, 3, and 4 a total of 5,000 times and then, respectively, estimate the
95 percent confidence intervals of the CRR, FMR, and FNMR by the percentile method [40].
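
A sketch of Steps 2-5 in Python, assuming a hypothetical helper evaluate(resampled_classes) that performs Steps 3 and 4: it draws one random test sample per listed class, matches against the fixed template set, and returns the performance measure (CRR, FMR, or FNMR):

```python
# Bootstrap percentile confidence interval, under the stated assumptions.
import numpy as np

def bootstrap_ci(class_ids, evaluate, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        # Step 2: draw the 306 classes with replacement; classes may repeat.
        resampled = rng.choice(class_ids, size=len(class_ids), replace=True)
        stats.append(evaluate(resampled))           # Steps 3 and 4
    # Step 5: percentile method for the 95 percent confidence interval.
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```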

The percentile method establishes a 100(1 − 2α) percent confidence interval by
computing the accumulated probability of a probability distribution from both sides: if
the accumulated probability exceeds α at l on the left side and at u on the right side,
the confidence interval is [l, u]. In the following results, if a performance measure is
denoted by only a numerical value, this means that all 1,237 test samples are used to
estimate this measure. If the performance measure is expressed by a confidence
interval, this means that we adopt the bootstrap method described above to calculate
this interval.

4.3.1 Choice of Similarity Measures

Similarity measures play an important role in iris matching. We thus perform a series of
experiments to select a suitable similarity measure for texture features generated by the
proposed method. Table 3 shows the recognition results obtained with three typical
measures based on the original features and the dimensionality-reduced features,
respectively. The dimensionality of the reduced feature vector is 200, whereas that of
the original feature vector is 1,536.

As shown in Table 3, the three similarity measures lead to very similar results when the
original features are used and the method’s performance does not vary drastically
after dimensionality reduction. This demonstrates that both dimensionality reduction and
similarity measures have very small impact on recognition accuracy. The results also
show that the cosine similarity measure is slightly better than the other two. Fig. 10
describes variations of the recognition rate with changes of dimensionality of the
reduced feature vector using the cosine similarity measure.

From this figure, we can see that with increasing dimensionality of the reduced feature
vector, the recognition rate also increases rapidly. However, when the dimensionality
of the reduced feature vector is up to 150 or higher, the recognition rate starts to level
off at an encouraging rate of about 99.27 percent. In particular, our method achieves a
recognition rate of 99.43 percent using only 200 features. In the subsequent
experiments, we utilize 200 features and the cosine similarity measure for matching.

4.3.2 Recognition Results

We evaluate the proposed algorithm in two modes, identification and verification. In


identification tests, an overall correct recognition rate of 99.43 percent is achieved
using all test samples and the corresponding 95 percent

confidence interval is [98.37 percent, 100 percent]. The black lines of Fig. 13 show the
verification results using the ROC curve with confidence intervals. Table 4 lists three
typical system operating states in verification. In particular, if one and only one false
match occurs in 100,000 trials, one can predict that the false nonmatch rate is less than
2.288 percent with 95 percent confidence. These results are highly encouraging and
indicate high performance of the proposed algorithm.

4.3.3 Performance Evaluation of the Defined Spatial Filter

The spatial filter defined in Section 3.3.1 is thought to be highly suitable for iris
recognition since it is constructed based on the observations about the characteristics of
the iris. This is confirmed by our experimental results shown in
Fig. 11. Filters used in our experiments include well-known directional filters (i.e., Gabor
filters in the horizontal direction) and the defined spatial filters using the same
parameters as Gabor filters. Similar to the scheme for showing the ROC curve with
confidence intervals in [42], we denote confidence intervals of the FMR and FNMR,
respectively. Fig. 11 shows the verification results of the proposed method using
different filters, which reveals that the defined filters outperform Gabor filters. Gabor
filters only capture iris information in the horizontal direction,
whereas the defined filters not only show interest in information in the horizontal
direction but also consider information from other directions. That is, the latter can
obtain more information for recognition. The results also show that one can achieve
good results using Gabor filters in the horizontal direction. This indicates that
information in the horizontal direction in a normalized iris is more discriminating than
that in other directions, namely, higher information density in the angular direction in an
original image. We find from Fig. 11 that there is considerable overlap in the FNMR
confidence intervals on the two ROC curves. This is because both the defined filters
and Gabor filters used in our experiments aim to capture discriminating iris information
in the horizontal direction. The defined filters, however, make effective use of more
information in other directions, which results in higher accuracy. The above results
demonstrate that iris information in the angular direction has higher discriminability and
information in other directions is a helpful supplement for more accurate recognition.

4.3.4 Comparison with Existing Methods

The previous methods [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29],
[30], [31] for iris recognition mainly focus on feature representation and matching.
Therefore, we only analyze and compare the accuracy and efficiency of feature
representation and matching of these methods. The methods proposed by Daugman
[18], Wildes et al. [20], and Boles and Boashash [22] are probably the best known. They
characterize local details of the iris based on phase, texture analysis, and zero-crossing
representation, respectively. Here, we will present a detailed comparison between the
current method and their methods (and our previous work) described in [18], [20], [22],
[29], [30] on the CASIA Iris Database. For the purpose of comparison, we implement
these methods according to the published papers [12], [16], [17], [18], [19], [20], [21],
[22]. Because the method by Wildes et al. [20] only works in verification mode, we do
not test the performance of this method in identification mode. Table 5 and Fig. 12 show
the identification results, and Fig. 13 describes the verification results.

Using the bootstrap, we can approximately predict the recognition rate distributions of
these methods for a larger test population. Fig. 12 shows the estimated distributions
for our previous methods, Boles’s method and the proposed method. Table 5 gives the
95 percent confidence intervals of these methods. Since Daugman’s method obtains
100 percent recognition rate, we do not plot its probability distribution (the probability of
100 percent recognition rate is 1) in Fig. 12 in order to express the differences of other
distributions more clearly. Looking at the results shown in Table 5 and Fig. 12, we can
find that Daugman’s method has the best performance, followed by the proposed
method and the methods described in [30], [29], and [22], respectively.

This conclusion is further consolidated by the verification results given in Fig. 13.

Fig. 13 shows the ROC curves with confidence intervals for these methods. The left and
right plots show the 95 percent confidence interval (CI) of the FMR and FNMR,
respectively. The solid lines denote the bounds of the CI derived by the bootstrap and
the dash-dot lines are the ROC curves using all test samples. Fig. 13 not only indicates
the performance of different methods but also provides information about how much the
performance of a given method can vary.

In general, the confidence interval of the FMR is smaller than that of the FNMR since
one can obtain more nonmatching pairs (for estimating the FMR) than matching pairs
(for estimating the FNMR) in experiments.

We divide these methods into two groups and, respectively, show the results in the top
and bottom plots in Fig. 13 in order to improve the legibility of the plots. The first group
includes [22] and [29], and the second [18], [20], [30] and the proposed method. Two
observations can be made from Fig. 13. First, in terms of performance, a clear ranking
for these methods from best to worst is as follows: Daugman’s
method, the proposed method, our previous method [30], Wildes’s method, our previous
method [29], and Boles’s method. Second, Boles’s method is close to our previous
method [29] and they are much worse than the other methods. The above results show
that the proposed method is only inferior to Daugman’s method and much better than
the others. Fig. 13 also shows that there is overlap in some of the
FNMR confidence intervals on the ROC curves, especially among Daugman’s method,
the proposed method, and our earlier one [30].

As we know, the intraclass and the interclass distance distributions jointly determine
the verification performance of an algorithm. That is, the ROC curve is directly related
to these two distance distributions. In the experiments, we learned that large matching
scores from the intraclass distribution and small nonmatching scores from the
interclass distribution result
in the observed overlap in Fig. 13 (assume that the intraclass distance is less than the
interclass distance). Large matching scores which correspond to false nonmatch pairs
are mainly caused by occlusions of eyelids and eyelashes, inaccurate
localization, and pupil movement (hence, iris distortion). Small nonmatching scores
which correspond to false match pairs are decided to a greater extent by the inherent
discrimination ability of a method.

If test samples include some images which can lead to large matching scores or small
nonmatching scores, the ROC curve of a method will inevitably deteriorate. Large
matching scores can affect all algorithms (our current comparison experiments do not
include eyelid and eyelash detection.

To reduce false nonmatches caused by large matching scores, we are working on
eyelid and eyelash detection, more accurate localization, and more effective
normalization.).
However, small nonmatching scores depend on the specific iris representation. The
observed overlap indicates that the discrimination ability of a given method may
slightly vary with the iris characteristics (hence, test samples). For example, for iris
images containing very few distinct characteristics, the texture analysis based methods
have a relatively low accuracy.

If test samples do not include such images, both the proposed method and the method
in [30] can also achieve a quite high performance as shown in Fig. 13. Based on the
above analysis, we can conclude that the range of a performance measure’s confidence
intervals reflects the corresponding method’s dependence on test samples (the wider
the range is, the stronger the dependence should be) and that the overlap in
performance measures’ confidence intervals among different methods
implies that the performance of these methods can be almost the same on some
specific test samples (the larger the overlap is, the closer the performance). Considering
these two points, we can maintain the above performance ranking for the six methods
shown in Fig. 13. Our previous method [29] divided the normalized iris
into eight blocks of the same size (64 x 64) and represented the texture of each block
using Gabor filters at five scales and four directions.

This implies that it captures less local texture information of the iris. Boles and
Boashash [22] only employed very little information along a concentric circle on the iris
to represent the whole iris, which resulted in a lower accuracy, as shown in Fig. 13.

Different from these two methods, our current method captures much more local
information. Lim et al. [23] made use of the fourth-level high frequency information of 2D
Haar wavelet decomposition of an iris image for feature extraction. As we know,
the fourth-level details of an image’s wavelet decomposition contain essentially very low
frequency information.
That is, their method did not effectively exploit middle frequency components of the iris
which play an important role in recognition as well. Moreover, iris features in their
method are sensitive to eye rotation, as the discrete wavelet transform is not translation
invariant. When there is a rotation between a pair of irises from the same subject, their
method will generate a false nonmatch.

Therefore, there is no reason to expect that their method outperforms ours. The
proposed method is a significant extension of our earlier algorithm presented in [30].
Compared with this earlier method, the current method utilizes multichannel spatial
filters to extract texture features of the iris within a wider frequency range. This indicates
that the extracted features are more discriminating, and the current method thus
achieves higher accuracy. Wildes et al. made use of differences of binomial low-pass
filters (isotropic Gaussian filters) to achieve an overall band-pass iris representation,
whereas we design more suitable spatial filters for recognition according to the iris
characteristics.

This leads to better performance of our method. Furthermore, their method relies on
image registration and matching and is computationally demanding. In both
identification and verification tests, Daugman’s method is slightly better than the
proposed method. It should be noted that these two methods explore different schemes
to represent an iris.

We extract local characteristics of the iris from the viewpoint of texture analysis,
whereas Daugman used phase information to represent local shape of the iris details. In
essence, Daugman analyzed the iris texture by computing and quantizing the similarity
between the quadrature wavelets and each local region, which requires that the
size of the local region must be small enough to achieve high accuracy. Therefore, the
dimensionality of the feature vector (2,048 elements) in Daugman’s method is far higher
than ours (200 elements). That is, his method captures much more information in much
smaller local regions, which makes his method better than ours.

4.4 Discussions and Future Work

Based on the above results and analysis, we can draw a number of conclusions as well
as find that some issues need to be further investigated.

1. The proposed scheme for iris image quality assessment is quite feasible. It can
effectively deal with the defocused, motion blurred, and occluded images.
The motion blurred images used in our experiments are captured by an iris sensor
working in the interlaced scan mode. As stated in Section 3.1, the
motion blurred images taken by an iris sensor working in the progressive scan mode are
expected to have similar frequency distribution to the defocused
images. However, such an expectation need to be further verified using real images.

2. The comparison experiment described in Section 4.3.3 reveals that information along
the angular direction is highly discriminating for recognition, which is consistent
with the implicit reports by Daugman in [18]. We construct new multichannel spatial
filters according to such a distinct distribution of the iris details and the proposed
method is thus expected to achieve a satisfying recognition accuracy.

In fact, our current method achieves the best performance among the existing methods
based on texture analysis [20], [21], [23], [28], [29], [30].

3. Table 5 and Figs. 12 and 13 show that the proposed algorithm is only worse than
Daugman’s method.

Increase of the dimensionality of the feature vector improves the recognition accuracy of
our method but it does not outperform Daugman’s method. By analyzing these two
algorithms carefully, it is not difficult to find that different viewpoints for feature
representation determine their differences in performance.

Phase information characterized by Daugman reflects in essence local shape features


of the iris, whereas texture features used by us denote statistical frequency information
of a local region. By careful examination of the appearance of numerous iris images,
we learn that a remarkable and important characteristic of the iris is the randomly
distributed and irregular details. Local shape features are thus expected to better
represent such iris details than texture features.

The experimental results and analysis have indicated that local shape features could
be more discriminating iris features. We are currently working on representing the iris
characteristics using the shape description method in order to achieve higher accuracy.

4. The number and the classes of iris samples used in our experiments are of a
reasonable size. Therefore, the conclusions using the statistical bootstrap method
based on such a data set are useful for both research and applications. We intend to
expand our iris database to include much more iris image sequences
from more races and make it publicly available to promote research on iris recognition.

5 CONCLUSIONS

Biometrics-based personal identification methods have recently gained more interest
with an increasing emphasis on security. In this paper, we have described an efficient
method for personal identification from an iris image sequence. A quality descriptor
based on the Fourier spectra of two local regions in an iris image is defined to
discriminate clear iris images from low quality images due to motion blur, defocus, and
eyelash occlusion. According to the distinct distribution of the iris characteristics,
a bank of spatial filters is constructed for efficient feature extraction. It has been shown
that the defined spatial filter is suitable for iris recognition. The experimental
results have demonstrated the effectiveness of the proposed method. A detailed
performance comparison of existing methods for iris recognition has been conducted on
the CASIA Iris Database. Such comparison and analysis will be helpful to further
improve the performance of the iris recognition methods.

REFERENCES

[1] Biometrics: Personal Identification in a Networked Society, A. Jain,


R. Bolle and S. Pankanti, eds. Kluwer, 1999.

[2] D. Zhang, Automated Biometrics: Technologies and Systems. Kluwer,


2000.

[3] T. Mansfield, G. Kelly, D. Chandler, and J. Kane, “Biometric


Product Testing Final Report,” issue 1.0, Nat’l Physical Laboratory
of UK, 2001.

[4] A. Mansfield and J. Wayman, “Best Practice Standards for Testing


and Reporting on Biometric Device Performance,” Nat’l Physical
Laboratory of UK, 2002.

[5] F. Adler, Physiology of the Eye: Clinical Application, fourth ed.


London: The C.V. Mosby Company, 1965.

[6] H. Davson, The Eye. London: Academic, 1962.

[7] R. Johnson, “Can Iris Patterns Be Used to Identify People?”


Chemical and Laser Sciences Division LA-12331-PR, Los Alamos
Nat’l Laboratory, Calif., 1991.

[8] A. Bertillon, “La Couleur de l’Iris,” Rev. of Science, vol. 36, no. 3,
pp. 65-73, 1885.

[9] T. Camus, M. Salganicoff, A. Thomas, and K. Hanna, Method and


Apparatus for Removal of Bright or Dark Spots by the Fusion of
Multiple Images, United States Patent, no. 6088470, 1998.

[10] J. McHugh, J. Lee, and C. Kuhla, Handheld Iris Imaging Apparatus


and Method, United States Patent, no. 6289113, 1998.

[11] J. Rozmus and M. Salganicoff, Method and Apparatus for Illuminating


and Imaging Eyes through Eyeglasses, United States Patent,
no. 6069967, 1997.

[12] R. Wildes, J. Asmuth, S. Hsu, R. Kolczynski, J. Matey, and S.


Mcbride, Automated, Noninvasive Iris Recognition System and
Method, United States Patent, no. 5572596, 1996.

[13] T. Tan, Y. Wang, and L. Ma, A New Sensor for Live Iris Imaging, PR
China Patent, no. ZL 01278644.6, 2001.

[14] http://www.iris-recognition.org/, 2002.


[15] L. Flom and A. Safir, Iris Recognition System, United States Patent,
no. 4641394, 1987.

[16] J. Daugman, Biometric Personal Identification System Based on Iris


Analysis, United States Patent, no. 5291560, 1994.

[17] J. Daugman, “High Confidence Visual Recognition of Persons by a


Test of Statistical Independence,” IEEE Trans. Pattern Analysis and
Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993.

[18] J. Daugman, “Statistical Richness of Visual Phase Information:


Update on Recognizing Persons by Iris Patterns,” Int’l J. Computer
Vision, vol. 45, no. 1, pp. 25-38, 2001.

[19] J. Daugman, “Demodulation by Complex-Valued Wavelets for


Stochastic Pattern Recognition,” Int’l J. Wavelets, Multiresolution
and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.

[20] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, J. Matey,


and S. McBride, “A Machine-Vision System for Iris Recognition,”
Machine Vision and Applications, vol. 9, pp. 1-8, 1996.

[21] R. Wildes, “Iris Recognition: An Emerging Biometric Technology,”


Proc. IEEE, vol. 85, pp. 1348-1363, 1997.

[22] W. Boles and B. Boashash, “A Human Identification Technique


Using Images of the Iris and Wavelet Transform,” IEEE Trans.
Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.


[23] S. Lim, K. Lee, O. Byeon, and T. Kim, “Efficient Iris Recognition


through Improvement of Feature Vector and Classifier,” ETRI J.,
vol. 23, no. 2, pp. 61-70, 2001.

[24] R. Sanchez-Reillo and C. Sanchez-Avila, “Iris Recognition With


Low Template Size,” Proc. Int’l Conf. Audio and Video-Based
Biometric Person Authentication, pp. 324-329, 2001.

[25] C. Sanchez-Avila and R. Sanchez-Reillo, “Iris-Based Biometric


Recognition Using Dyadic Wavelet Transform,” IEEE Aerospace
and Electronic Systems Magazine, pp. 3-6, Oct. 2002.

[26] C. Tisse, L. Martin, L. Torres, and M. Robert, “Person Identification


Technique Using Human Iris Recognition,” Proc. Vision
Interface, pp. 294-299, 2002.

[27] T. Tangsukson and J. Havlicek, “AM-FM Image Segmentation,”


Proc. IEEE Int’l Conf. Image Processing, pp. 104-107, 2000.

[28] Y. Zhu, T. Tan, and Y. Wang, “Biometric Personal Identification


Based on Iris Patterns,” Proc. Int’l Conf. Pattern Recognition, vol. II,
pp. 805-808, 2000.

[29] L. Ma, Y. Wang, and T. Tan, “Iris Recognition Based on


Multichannel Gabor Filtering,” Proc. Fifth Asian Conf. Computer
Vision, vol. I, pp. 279-283, 2002.

[30] L. Ma, Y. Wang, and T. Tan, “Iris Recognition Using Circular


Symmetric Filters,” Proc. 16th Int’l Conf. Pattern Recognition, vol. II,
pp. 414-417, 2002.

[31] G. Zhang and M. Salganicoff, Method of Measuring the Focus of


Close-Up Images of Eyes, United States Patent, no. 5953440, 1999.

[32] K. Castleman, Digital Image Processing. Prentice-Hall, 1997.

[33] J. Daugman, “Uncertainty Relation for Resolution in Space, Spatial


Frequency, and Orientation Optimized by Two-Dimensional
Visual Cortical Filters,” J. Optical Soc. of Am. A, vol. 2, pp. 1160-
1169, 1985.

[34] T. Lee, “Image Representation Using 2D Gabor Wavelets,” IEEE


Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 10,
pp. 959-971, Oct. 1996.

[35] D. Clausi and M. Jernigan, “Designing Gabor Filters for Optimal


Texture Separability,” Pattern Recognition, vol. 33, pp. 1835-1849,
2000.

[36] A. Jain, S. Prabhakar, L. Hong, and S. Pankanti, “Filterbank-Based


Fingerprint Matching,” IEEE Trans. Image Processing, vol. 9, no. 5,
pp. 846-859, 2000.

[37] J. Zhang and T. Tan, “Brief Review of Invariant Texture Analysis


Methods,” Pattern Recognition, vol. 35, no. 3, pp. 735-747, 2002.

[38] P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs.


Fisherfaces: Recognition Using Class Specific Linear Projection,”
IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7,
pp. 711-720, July 1997.

[39] C. Liu and H. Wechsler, “Gabor Feature Based Classification


Using the Enhanced Fisher Linear Discriminant Model for Face
Recognition,” IEEE Trans. Image Processing, vol. 11, no. 4, pp. 467-
476, 2002.

[40] J. Beveridge, K. She, B. Draper, and G. Givens, “A Nonparametric


Statistical Comparison of Principal Component and Linear
Discriminant Subspaces for Face Recognition,” Proc. IEEE Conf.
Computer Vision and Pattern Recognition, pp. 535-542, 2001.

[41] R. Bolle, S. Pankanti, and N. Ratha, “Evaluation Techniques for


Biometric-Based Authentication Systems (FRR),” IBM Computer
Science Research Report RC 21759, 2000.

[42] J. Wayman, “Confidence Interval and Test Size Estimation for


Biometric Data,” Nat’l Biometric Test Center (Collected Works), pp. 91-
101, 2000.

[43] B. Efron and R. Tibshirani, “Bootstrap Methods for Standard


Errors, Confidence Intervals, and Other Measures of Statistical
Accuracy,” Statistical Science, vol. 1, pp. 54-75, 1986.
Li Ma received the BSc and MSc degrees in automation engineering from Southeast
University, China, in 1997 and 2000, respectively, and
the PhD degree in computer science from the National Laboratory of Pattern
Recognition, Chinese Academy of Sciences, in 2003. Currently,
he is a research member of IBM China Research Lab. His current research interests
include image processing, pattern recognition, biometrics, and multimedia.
Tieniu Tan (M’92-SM’97) received the BSc degree in electronic engineering from Xi’an
Jiaotong University, China, in 1984, and the MSc, DIC, and PhD degrees in electronic
engineering from the Imperial College of Science, Technology and Medicine, London,
UK, in 1986, 1986, and 1989, respectively. He joined the Computational Vision Group
in the Department of Computer Science at The University of Reading, England, in
October 1989, where he worked as a research fellow, senior research fellow, and
lecturer. In January 1998, he returned to China to join the National Laboratory of Pattern
Recognition, the Institute of Automation of the Chinese Academy of Sciences, Beijing,
China. He is currently a professor and the director of the National Laboratory of Pattern
Recognition as well as president of the Institute of Automation.

He has published widely on image processing, computer vision, and pattern


recognition. His current research interests include speech and image processing,
machine and computer vision, pattern recognition, multimedia, and robotics. He serves
as referee for many major national and international journals and conferences. He is an
associate editor of Pattern Recognition and IEEE Transactions on Pattern Analysis and
Machine Intelligence, the Asia editor of Image and Vision Computing. Dr. Tan was an
elected member of the executive committee of the British Machine Vision Association
and Society for Pattern Recognition (1996-1997) and is a founding cochair of the IEEE
International Workshop on Visual Surveillance. He is a senior member of the IEEE
and a member of the IEEE Computer Society.

Yunhong Wang received the BSc degree
in electronic engineering from Northwestern Polytechnical University, the MS degree
(1995) and the PhD degree (1998) in electronic engineering
from Nanjing University of Science and Technology. She joined the National Lab of
Pattern Recognition, Institute of Automation, Chinese Academy of Sciences in 1998,
where she has been an associate professor since 2000. Her
research interests include biometrics, pattern recognition, and image processing. She is
a member of the IEEE and the IEEE Computer Society.
Dexin Zhang received the BSc degree in automation engineering from Tsinghua
University in 2000.

Then, he joined the National Laboratory of Pattern Recognition, Chinese Academy of


Sciences, to pursue his master’s degree. His research interests include biometrics,
image processing, and pattern recognition.

