Performance analysis of 3-dimensional fingerprint scan system

ABSTRACT OF THESIS

PERFORMANCE ANALYSIS OF 3-DIMENSIONAL FINGERPRINT SCAN SYSTEM
Fingerprint recognition has been extensively applied in both forensic law enforcement and security
involved personal identification. Traditional fingerprint acquisition is generally done in 2-D, with a
typical automatic fingerprint identification system (AFIS) consisting of four modules: image acquisition,
preprocessing, feature extraction, and feature matching. In this thesis, we present a technique for non-contact 3-D fingerprint capture and processing, serving as the image acquisition and preprocessing modules for higher performance fingerprint data acquisition and verification, together with a new technique for unraveling the 3-D data into a 2-D fingerprint. We use NIST fingerprint software as the feature extraction and feature matching modules. Our scan system relies on a novel real-time, low-cost 3-D sensor using structured
matching models. Our scan system relies on a novel real-time and low-cost 3-D sensor using structured
light illumination (SLI) which generates both texture and detailed ridge depth information. The high
resolution 3-D scans are then converted into 2-D unraveled equivalent images, using our proposed best fit
sphere unravel algorithm. As a result, many limitations imposed upon conventional fingerprint capturing
and processing are relaxed by the unobtrusiveness of the system and the extra depth information obtained.
In addition, except for small distortions that may be introduced by the camera and projector, the whole process is free of the distortion found in current techniques, and the unraveled fingerprint is controlled to 500 dots per inch. The image quality is evaluated and analyzed using NIST fingerprint
image software. A comparison is performed between the converted 2-D unraveled equivalent fingerprints
and their 2-D ink-rolled counterparts. Then, NIST matching software is applied to the 2-D unraveled fingerprints, and the results are presented and analyzed; they show a strong relationship between matching performance and fingerprint quality. Finally, incremental future work is proposed to further improve our new 3-D fingerprint scan system.
Index Terms: 3-D fingerprints, fingerprint acquisition, best fit sphere algorithm, rolled-equivalent image,
fingerprint verification, fingerprint quality, matching performance.
PERFORMANCE ANALYSIS OF 3-DIMENSIONAL
FINGERPRINT SCAN SYSTEM
By

Yongchang Wang

Director of Thesis

Director of Graduate Studies

Date

RULES FOR THE USE OF THESES
Unpublished theses submitted for the Master’s degree and deposited in the University of Kentucky Library are as a rule open for inspection, but are to be used only with due regard to the rights of the authors. Bibliographical references may be noted, but quotations or summaries of parts may be published only with the permission of the author, with the usual scholarly acknowledgments. Extensive copying or publication of the thesis in whole or in part also requires the consent of the Dean of the Graduate School of the University of Kentucky. A library that borrows this thesis for use by its patrons is expected to secure the signature of each user.

Name                Date

THESIS
Yongchang Wang
The Graduate School
University of Kentucky
2008
PERFORMANCE ANALYSIS OF 3-DIMENSIONAL
FINGERPRINT SCAN SYSTEM
THESIS

A thesis submitted in partial fulfillment of the
requirements for the degree of Master of Science in
the College of Engineering
at the University of Kentucky
By
Yongchang Wang
Lexington, Kentucky
Director: Dr. Daniel L. Lau, Associate Professor of Electrical Engineering
Lexington, Kentucky
2008

MASTER’S THESIS RELEASE

I authorize the University of Kentucky Libraries
to reproduce this thesis in
whole or in part for purposes of research.
Signed:________________
Date:__________________

Dedicated to Cui Li, my loving wife
ACKNOWLEDGEMENTS
I would like to thank all those who have helped me with this thesis.
First, my sincere thanks go to my advisor, Dr. Daniel Lau, for giving me the
opportunity to work on this project and for leading me into this area. Dr. Lau also
taught me a great deal, from the overall idea down to the technical details. It is his
pursuit of perfection and interest in small details that have supplemented my own quest
for knowledge. I would also like to extend my great thanks to Dr. L. G. Hassebrook for
his constant encouragement and support throughout. Whenever I ran into problems,
Dr. Lau and Dr. Hassebrook always helped me and taught me how to solve them.
I am also grateful to Veer Ganesh Yalla for providing the 3-D fingerprint scans as and
when required. Thanks to Abhishika Fatehpuria, who gave me so much help, especially
when I first entered this area. And thanks to Kai Liu, who led me into programming.
Discussions with Abhishika, Kai, and Ganesh regarding the work were always
interesting and intellectually stimulating.
This work would not have been possible without the support and love of my parents and
my wife. They were the ones who always encouraged and motivated me to go ahead. I
also thank all my friends for putting a smile on my face during tough times and for
supporting me.
TABLE OF CONTENTS

ACKNOWLEDGEMENTS ………. iii
LIST OF TABLES ………. vii
LIST OF FIGURES ………. viii
Chapter 1 Introduction ………. 1
1.1 Fingerprint Acquisition ………. 3
1.2 Classification of Fingerprint ………. 9
1.3 Fingerprint Matching ………. 11
1.4 Previous Work ………. 16
Chapter 2 Post Processing of 3D Fingerprint ………. 18
2.1 Fit a Sphere to the 3D Surface ………. 18
2.1.1 Calculate the Sphere ………. 18
2.1.2 Change the North Pole of the Sphere ………. 21
2.2 Unravel the 3D Fingerprint ………. 22
2.2.1 Create Grid Data ………. 22
2.2.2 Unravel the 3D Surface ………. 25
2.2.3 Apply Filters to Unraveled 2D Fingerprint ………. 26
2.2.4 Down Sample to Standard ………. 27
2.2.5 Further Distortion Correction ………. 29
2.3 Apply NIST Software to the Data ………. 37
2.4 Fingerprint Quality Analysis ………. 39
2.4.1 2D Ink Rolled Fingerprint Scanning ………. 42
2.4.2 2D Ink Fingerprint Experimental Results and Analysis ………. 43
2.4.3 3D Unraveled Fingerprint Experimental Results and Analysis ………. 50
2.4.4 Comparison between 2D Inked and 3D Unraveled Fingerprints ………. 58
Chapter 3 Fingerprint Image Software ………. 64
3.1 NIST Minutiae Detection (MINDTCT) System ………. 65
3.2 NIST Fingerprint Pattern Classification (PCASYS) System ………. 67
3.3 NIST Fingerprint Image Quality (NFIQ) System ………. 68
3.4 NIST Fingerprint Matcher (BOZORTH3) System ………. 69
3.4.1 Construct Intra-Fingerprint Minutia Comparison Tables ………. 70
3.4.2 Construct an Inter-Fingerprint Compatibility Table ………. 72
3.4.3 Traverse the Inter-Fingerprint Compatibility Table ………. 73
Chapter 4 Experiments and Results ………. 76
4.1 Matching Result of 3D Unraveled Fingerprints ………. 77
4.2 Relationship between Fingerprint Quality and Matching Score ………. 79
Chapter 5 Conclusions and Future Works ………. 88
5.1 Conclusions ………. 88
5.2 Future Works ………. 91
Appendix A 3D Unraveled Fingerprint Images from Subject 0 to Subject 14 ………. 93
Bibliography ………. 101
Vita ………. 105
List of Tables

1.1 Basic and composite ridge characteristics (minutiae) ………. 13
2.1 Results of running PCASYS, MINDTCT and NFIQ on the 2D images of Subjects 0 through 14 ………. 43
2.2 Results of running PCASYS, MINDTCT and NFIQ on the 3D unraveled fingerprint images of Subjects 0 through 14 ………. 50
3.1 Feature Vector Description [NIST] ………. 68

List of Figures

1.1 Various types of fingerprint impressions: (a) rolled inked fingerprint (from NIST 4 database); (b) latent fingerprint; (c) fingerprint obtained using an optical sensor; (d) fingerprint obtained using a solid state sensor ………. 4
1.2 Fingerprint sensors ………. 5
1.3 Classification of fingerprints ………. 11
1.4 A sample fingerprint image showing different ridge patterns and minutiae types ………. 12
1.5 Matching process between two fingerprints ………. 15
1.6 Experimental setup used for 3D fingerprint scanner ………. 16
2.1 Three different views of the original input 3D fingerprint ………. 19
2.2 3D views of fingerprint data and sphere. For a clear view, the data is greatly down sampled, from 1392x1040 to 65x50. Blue points ‘.’ represent points from the fingerprint data; red points ‘*’ represent points from the ‘best fit’ sphere. From left to right are four different views of the 3D data ………. 20
2.3 Comparison before and after moving the north pole. Black point clouds in the figures are points down sampled from the surface of the 3D fingerprint, and the light line is the axis. (a) Before moving the north pole, the unravel center is not on the fingerprint’s surface. (b) After the rotation and translation, the unravel center is moved to the fingerprint’s center ………. 22
2.4 Value distributions of theta and phi. Figure (a) is theta, and figure (b) is phi. Theta and phi, whose values are equally distributed, together form a mesh, which is projected onto the 3D fingerprint. The density of this created mesh is 3 times higher than the 3D fingerprint data ………. 23
2.5 Grid data created on the 3D fingerprint. Each point in the figure is a mesh point. For a clear view, the data is greatly down sampled from 1392 by 1040 to 130 by 100. From left to right are four different views of the 3D data ………. 24
2.6 Unraveled 2D fingerprint. The density of the unraveled 2D fingerprint is 3 times higher than the original density of the 3D fingerprint ………. 25
2.7 2D fingerprint after high pass filtering. The Gaussian low pass filter is a 20x20 filter with hsize equal to 3x3 and sigma equal to 40. Fig. (a) is the filter output, and fig. (b) is the data after subtracting the low frequencies from the unraveled 2D fingerprint ………. 26
2.8 Fingerprint after post-filter processing. To resemble an inked fingerprint, the color is inverted. The image size is [870, 1180], three times the point density of the original 3D fingerprint data ………. 27
2.9 Plot of distance along the theta direction ………. 28
2.10 Plot of distance along the theta direction after FFT ………. 28
2.11 Distance along the theta direction after scaling to 500 dpi ………. 29
2.12 (a) Distance plot along x = 300; (b) distance plot along y = 300 ………. 30
2.13 (a) Distance plot along x = 450; (b) distance plot along y = 150 ………. 31
2.14 (a) New theta map; (b) new phi map ………. 32
2.15 (a) New theta map with size [600, 600]; (b) new phi map with size [600, 600] ………. 33
2.16 (a) Distance plot along x = 300; (b) distance plot along y = 300 ………. 34
2.17 (a) Distance plot along x = 450; (b) distance plot along y = 150 ………. 35
2.18 Variance comparison before and after distortion correction. (a) Variances along the theta and phi directions before correction. (b) Variances along the theta and phi directions after correction ………. 36
2.19 2D unraveled fingerprint from scanned 3D data ………. 37
2.20 Binarized fingerprint of the 2D unraveled fingerprint from 3D ………. 38
2.21 Quality image of the 2D unraveled fingerprint. White represents 4, the highest quality; darker colors indicate poorer quality, with 0 the lowest, meaning no meaningful data. The average quality of this sample is 3.1090 ………. 38
2.22 Schematic flow chart of the Best Fit Sphere algorithm ………. 39
2.23 An example fingerprint card ………. 43
2.24 Variation of the number of foreground blocks in quality zones 1-4 with respect to the overall quality number for 2D rolled inked fingerprints. The number of blocks in quality zone 4 decreases, while that in quality zone 2 increases, as overall quality decreases from best to unusable ………. 47
2.25 Variation of the number of minutiae with quality greater than 0.5, 0.6, 0.75, 0.8, 0.9, 0.95, 0.97 with respect to the overall quality number for 2D rolled inked fingerprints. The number of minutiae with quality greater than 0.5, 0.6, 0.75 decreases with a decrease in overall quality from best to unusable ………. 48
2.26 Scatter plot between the number of blocks in quality zone 4 and the number of minutiae with quality greater than 0.75 for 2D rolled inked fingerprints. The plot shows a strong correlation between the two parameters ………. 49
2.27 Plot of the classification confidence number generated by the PCASYS system with respect to the overall quality number for 2D rolled inked fingerprints ………. 49
2.28 Variation of the number of foreground blocks in quality zones 1-4 with respect to the overall quality number for 2D unraveled fingerprints obtained from 3D scans. The number of blocks in quality zone 4 decreases, while that in quality zone 2 increases, as overall quality decreases from best to unusable. The distribution is much the same as for 2D inked ………. 55
2.29 Variation of the number of minutiae with quality greater than 0.5, 0.6, 0.75, 0.8, 0.9, 0.95 and 0.97 with respect to the overall quality number for 2D unraveled fingerprints obtained from 3D scans. The number of minutiae decreases with a decrease in the first four quality numbers and increases at the last quality number, similar to 2D inked fingerprints ………. 56
2.30 Scatter plot between the number of blocks in quality zone 4 and the number of minutiae with quality greater than 0.75 for 2D unraveled fingerprints obtained from 3D scans. The plot shows a strong correlation between the two parameters, much the same as for 2D inked fingerprints ………. 57
2.31 Plot of the classification confidence number generated by the PCASYS system with respect to the overall quality number for 2D unraveled fingerprints obtained from 3D scans, much the same as for 2D inked ………. 57
2.32 Number of blocks in quality zone 4 with respect to the overall quality number, showing that the 2D unraveled fingerprints have a higher percentage of quality zone 4 than the 2D inked fingerprints ………. 59
2.33 Number of minutiae with quality greater than 0.75, with respect to the overall quality number and classification confidence number, showing that 2D unraveled fingerprints have more minutiae with quality greater than 0.75 ………. 60
2.34 Overall quality number for 2-D rolled inked fingerprints and 2-D unraveled fingerprints obtained from 3-D scans ………. 61
2.35 Distributions of minutiae with quality greater than 0.8, for 2-D rolled inked fingerprints and 2-D unraveled fingerprints obtained from 3-D scans, showing that the 2D unraveled fingerprints have a better result ………. 62
3.1 Flow chart of the minutiae detection process ………. 66
3.2 Minutiae detection result. The detection is based on the binary image. The left image is the unraveled 2D binary fingerprint; the right image is the corresponding minutiae detection result with quality greater than 50, where the minutiae are marked by small black squares ………. 67
3.3 Results of running the NFIQ package on the example fingerprints. Each group shows the generated quality map (right) for the corresponding fingerprint image (left). The average quality of each quality map is also shown below the images ………. 69
4.1 Histogram of match and non-match distributions. All the data is scanned from index fingers ………. 78
4.2 ROC of overall test data. All the data is scanned from index fingers. When the FAR is 0.01, the TAR is 0.891; for a FAR of 0.1, the TAR is 0.988 ………. 79
4.3 Distribution of matching scores when matched fingerprints are from the same finger ………. 81
4.4 Distribution of matching scores when matched fingerprints are from different fingers ………. 82
4.5 ROC when applying local quality classification ………. 83
4.6 Distribution of matching scores when matched fingerprints are from the same finger ………. 84
4.7 Distribution of matching scores when matched fingerprints are from different fingers ………. 85
4.8 ROC when applying overall quality classification ………. 86
A 2D unraveled fingerprint images from subject 0 to 14 ………. 93

Chapter 1 Introduction

A fingerprint is an impression of the friction ridges of all or any part of the finger.
Fingerprint identification (sometimes referred to as dactyloscopy) [8] is the process of
comparing questioned and known friction skin ridge impressions from fingers, palms,
and toes to determine if the impressions are from the same finger (or palm, toe, etc.) [6,
8, 9]. Among all the biometric techniques, fingerprint-based identification is the oldest
method which has been successfully used in numerous applications [10]. Everyone is
known to have unique, immutable fingerprints [2].
The science of fingerprint identification stands out among all other forensic sciences for many reasons, including the following:
• Has served all governments worldwide during the past 100 years to provide accurate identification of criminals. No two fingerprints have ever been found alike in many billions of human and automated computer comparisons. Fingerprints are the very foundation for criminal history records at every police agency [10, 11].

• Established the first forensic professional organization, the International Association for Identification (IAI), in 1915 [10].

• Established the first professional certification program for forensic scientists, the IAI's Certified Latent Print Examiner program (in 1977), issuing certification to those meeting stringent criteria and revoking certification for serious errors such as erroneous identifications [11].

• Remains the most commonly used forensic evidence worldwide; in most jurisdictions fingerprint examination cases match or outnumber all other forensic examination casework combined [12].

• Continues to expand as the premier method for identifying persons, with tens of thousands of persons added to fingerprint repositories daily in America alone, far outdistancing similar databases in growth [7].

• Outperforms DNA and all other human identification systems in identifying murderers, rapists and other serious offenders (fingerprints solve ten times more unknown suspect cases than DNA in most jurisdictions) [10].
Other visible human characteristics change over time; fingerprints do not. The flexibility of friction ridge skin means that no two finger or palm prints are ever exactly alike (never identical in every detail), even two impressions recorded one immediately after the other.
Fingerprint identification (also referred to as individualization) occurs when an expert
(or an expert computer system operating under threshold scoring rules) determines that
two friction ridge impressions originated from the same finger or palm (or toe, sole) to
the exclusion of all others.
The history of fingerprinting can be traced back to prehistoric times based on the
human fingerprints discovered on a large number of archaeological artifacts and
historical items [9]. In 1686, Marcello Malpighi, a professor of anatomy at the
University of Bologna, noted in his treatise the ridges, spirals, and loops in fingerprints. He made no mention of their value as a tool for individual identification. A layer of skin approximately 1.8 mm thick, the Malpighian layer, was named after him [14].
During the 1870's, Dr. Henry Faulds, the British Surgeon-Superintendent of Tsukiji
Hospital in Tokyo, Japan, took up the study of "skin-furrows" after noticing finger
marks on specimens of "prehistoric" pottery [9]. A learned and industrious man, Dr.
Faulds not only recognized the importance of fingerprints as a means of identification,
but devised a method of classification as well.
In 1880, Faulds forwarded an
explanation of his classification system and a sample of the forms he had designed for
recording inked impressions, to Charles Darwin. Darwin, in advanced age and ill
health, informed Dr. Faulds that he could be of no assistance to him, but promised to
pass the materials on to his cousin, Francis Galton. Also in 1880, Dr. Faulds published
an article in the scientific journal Nature. He discussed fingerprints as a
means of personal identification, and the use of printer's ink as a method for obtaining
such fingerprints [11, 12, 14]. He is also credited with the first fingerprint identification
of a greasy fingerprint left on an alcohol bottle. Later, Juan Vucetich made the first
criminal fingerprint identification in 1892 [12, 14].
Today, the largest AFIS repository in America is operated by the Department of
Homeland Security's US Visit Program, containing over 63 million persons'
fingerprints, primarily in the form of two-finger records (non-compliant with FBI and
Interpol standards). Fingerprint identification is divided into four modules: (i) acquisition, (ii) preprocessing, (iii) feature extraction, and (iv) feature matching. This thesis mainly discusses acquisition and preprocessing.
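The four-module decomposition above can be sketched as a simple processing pipeline. The sketch below is purely illustrative: the stage implementations are hypothetical stand-ins (not NIST software, whose actual modules are discussed in Chapter 3), and the "minutiae" and score are mock quantities.

```python
# Illustrative sketch of the four-module AFIS pipeline; each stage is a
# hypothetical placeholder, not a real NIST implementation.

def acquire(sensor_data):
    """Image acquisition: return a raw grayscale image (list of rows)."""
    return sensor_data

def preprocess(image):
    """Preprocessing: e.g. contrast normalization to the 0-255 range."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    span = (hi - lo) or 1
    return [[255 * (p - lo) // span for p in row] for row in image]

def extract_features(image):
    """Feature extraction: stand-in that returns (x, y) of dark pixels as
    mock 'minutiae'; a real extractor finds ridge endings/bifurcations."""
    return [(x, y) for y, row in enumerate(image)
            for x, p in enumerate(row) if p < 64]

def match(features_a, features_b):
    """Feature matching: stand-in score = fraction of shared points."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / max(len(a | b), 1)

raw = acquire([[10, 200], [220, 15]])
feats = extract_features(preprocess(raw))
score = match(feats, feats)
print(score)  # identical inputs give a perfect score of 1.0
```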
1.1 Fingerprint Acquisition
There are many different ways of imaging the ridge and valley patterns of finger skin,
each with its own strengths, weaknesses, and idiosyncrasies. Based on the method of
acquisition, these processes can be classified as either offline or online processes. A
digital fingerprint image can be characterized by its resolution, area, number of pixels,
dynamic range (depth), geometric accuracy, and image quality [15].
Fig 1.1 Various types of fingerprint impressions: (a) rolled inked fingerprint (from NIST 4 database); (b) latent fingerprint; (c) fingerprint obtained using an optical sensor; (d) fingerprint obtained using a solid state sensor.

Fig. 1.2 Fingerprint sensors.

The sensors in Fig. 1.2 can be classified into optical sensors, solid state sensors,
ultrasonic sensors, and others. Details of the techniques in use today are discussed in
[2, 15].
For optical sensors, frustrated total internal reflection (FTIR) is the oldest and the most
widely used live scan technique [16, 17] where light is focused on a glass-to-air interface at
an angle exceeding the critical angle for total reflection. Reflection is disrupted at the point
of contact on the glass-to-air interface. This reflected beam is focused on an electro-optical
array, consisting of a lens and a CCD or CMOS image sensor where the fingerprint
impression is captured. Since these devices map the real 3-D finger on the electro-optical
array, it is very difficult to deceive these devices by presentation of a photograph or printed
image, but a distortion is introduced in the captured image as the fingerprint surface is not
parallel to the imaging surface. Hologram-based methods help avoid this problem [20, 21, 22] and provide fingerprints with high spatial fidelity, but these hologram-based
methods cannot be miniaturized because reducing the optical path results in severe
distortions at image edges. A relatively new and hygienic technology currently in use is
that of direct or non-contact reading [3, 13], which uses high-quality cameras to directly image the fingertip. The finger is not in contact with any surface, and a mechanical support
is provided for the user to present the finger at a suitable distance. Although competent in
overcoming most of the difficulties faced by optical live scanners, obtaining well focused
and high contrast images with this technique is very difficult.
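The FTIR geometry hinges on the critical angle for total internal reflection at the glass-to-air interface, which follows from Snell's law. The short sketch below computes it under assumed, typical refractive indices (glass ≈ 1.5, air ≈ 1.0); the actual indices of any given sensor's prism may differ.

```python
import math

def critical_angle_deg(n_dense, n_rare):
    """Critical angle for total internal reflection at a dense-to-rare
    interface, from Snell's law: sin(theta_c) = n_rare / n_dense."""
    return math.degrees(math.asin(n_rare / n_dense))

# Assumed typical values: glass n ~ 1.5, air n ~ 1.0.
theta_c = critical_angle_deg(1.5, 1.0)
print(round(theta_c, 1))  # ~41.8 degrees; FTIR illuminates above this angle
```

Where the skin's ridges touch the glass, the effective second medium is no longer air, so total reflection is frustrated at exactly those points, producing the ridge image.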
The solid state scanners became commercial in the mid-1990s [25]. These sensors consist of an
array of pixels with each pixel being a tiny sensor itself. The user directly touches the
silicon surface and hence, the need for optical components and a CCD or CMOS sensor is
eliminated. The cost of these scanners is high. A capacitive sensor [28, 29, 30, 31, 32, 33,
34] consists of a 2-D array of micro-capacitor plates embedded in a chip, with the finger
being the other plate for each capacitor. When the finger is placed on the chip, small
electrical charges are created between the surface of the finger and the silicon chip, the
magnitude of which depends on the distance between them. Thus, fingerprint ridges and
valleys result in different capacitance patterns in the plates, which can be mapped into a
digital fingerprint image. Thermal sensors are made up of pyro-electric material that
generates current based on temperature differentials [23, 24]. The ridges, which are in
contact with the sensor surface, produce a different temperature differential than the valleys,
which are away from the surface.
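The capacitive readout principle described above can be illustrated with the parallel-plate model C = εA/d: ridges sit closer to the plates (small d, large C) than valleys. All dimensions below are assumed for illustration only and are not taken from any particular sensor or from this thesis.

```python
# Toy illustration of capacitive fingerprint readout using the
# parallel-plate model C = epsilon * A / d. Plate area and skin-to-plate
# distances are hypothetical, chosen only to show the ridge/valley contrast.

EPS0 = 8.854e-12           # vacuum permittivity, F/m
PLATE_AREA = (50e-6) ** 2  # assumed 50 um x 50 um micro-capacitor plate

def capacitance(distance_m):
    return EPS0 * PLATE_AREA / distance_m

ridge_c = capacitance(0.1e-6)   # ridge nearly touching the chip (~0.1 um)
valley_c = capacitance(60e-6)   # valley standing off the surface (~60 um)

# Larger capacitance under ridges maps to a different pixel value than
# valleys, so the capacitance pattern becomes a digital fingerprint image.
print(ridge_c > valley_c)  # True
```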
Ultrasonic sensors are based on sending acoustic signals toward the fingertip and capturing
the echo signal. This echo signal is used to compute the ridge structure of the finger. The
sensor has a transmitter that generates ultrasonic pulses, and a receiver that detects the
reflected sound signals from the finger surface [26, 27]. These scanners are resilient to dirt
and oil accumulations on the fingerprint surface and hence, result in good quality images.
However, this scanner is large and expensive and takes a few seconds to acquire the image.
Furthermore, this technology is still in its nascent stage and needs further research and
development.
Precise fingerprint image acquisition has some peculiar and challenging problems [35]. The fingerprint imaging system introduces the following distortions and noise into the acquired images. The pressure and contact of the finger on the sensor surface determine how the three-dimensional shape of the finger is mapped onto the two-dimensional image. The mapping function is thus uncontrollable and results in inconsistently mapped fingerprint images. To completely capture the ridge structure of a finger, the ridges should be in complete contact with the sensor surface. However, conditions such as dryness of the skin, skin disease, sweat, dirt, and humidity in the air may lead to non-ideal contact. Accidents or injuries may inflict cuts and bruises on the finger, thereby changing the ridge structure either permanently or semi-permanently. This may introduce additional spurious features or modify existing ones, which may generate false matches. The live scan techniques usually acquire fingerprints using the dab method, in which a finger is pressed on the surface without rolling, thus losing important information. The inked fingerprint technique, which acquires the fingerprint by rolling the finger from nail to nail [36], is very cumbersome, slow, and messy.
Almost all of the aforementioned sensors are plagued by these limitations [6, 8, 37]. As the
majority of these limitations arise due to contact of the finger surface with the sensor
surface, a direct reading or a non-contact 2D or 3D scanner can overcome most of the
shortcomings [6]. For these reasons, a new generation of non-contact fingerprint scanners has been developed. Our fingerprint scan system is a 3D scan system based on the phase measuring profilometry (PMP) technique. In an attempt to build such a system, we have been developing a scanning system [1, 2, 94] as a means of acquiring 3-D scans of all five fingers and the palm with sufficiently high resolution to record ridge-level details; it consists mainly of a ViewSonic PJ250 projector (1280 by 1024) and a Pulnix TM1400CL 8-bit camera (1392 by 1040) [2]. The
prototype system will be designed such that it can sit on top of a table or desk with
an adjustable base for precise height adjustment. Our 3D fingerprint scan system has the
following properties:
• To avoid distortion of the fingerprint, our 3D fingerprint scan system is non-contact.

• The scanning time of our 3D fingerprint scan system is limited. Currently, it takes around 10 seconds to scan a finger. Using the palm scan system, which is much the same as the 3D fingerprint scan system, it takes around 10 seconds to scan 10 fingers, which is much faster than traditional fingerprint acquisition systems.

• After the 3D fingerprint data is obtained, a post-processing step unravels the 3D fingerprint so that it closely resembles a traditional inked rolled fingerprint. Since the scan system does not introduce distortion, the whole process of obtaining the 2D unraveled fingerprint can be kept free of distortion. Our newly proposed unraveling algorithm is free of distortion, and thus we can control the dots per inch (dpi) value of the whole fingerprint to the desired standard value. Compared to the existing unraveling algorithm, the spring algorithm proposed by Abhishika [2], the new algorithm proposed in this thesis not only requires much less computation but also successfully controls the dpi value of the whole fingerprint. We discuss this post-processing algorithm in detail in Chapter 2.

• To acquire the 3D fingerprint, we implement a Phase Measuring Profilometry (PMP) 3D scan system, which is discussed in detail in [1, 3, 4, 5].
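At the heart of PMP, each pixel's depth is recovered from the phase of a projected sinusoid observed in N phase-shifted images. The sketch below shows the textbook N-step phase computation under one common sign convention (I_n = A + B·cos(φ − 2πn/N)); the actual implementation in [1] may use a different convention, and the pixel intensities here are synthetic.

```python
import math

def pmp_phase(intensities):
    """Per-pixel phase from N equally phase-shifted PMP images:
    phi = atan2(sum I_n * sin(2*pi*n/N), sum I_n * cos(2*pi*n/N)),
    assuming I_n = A + B*cos(phi - 2*pi*n/N)."""
    n_steps = len(intensities)
    s = sum(i * math.sin(2 * math.pi * n / n_steps)
            for n, i in enumerate(intensities))
    c = sum(i * math.cos(2 * math.pi * n / n_steps)
            for n, i in enumerate(intensities))
    return math.atan2(s, c)

# Synthetic pixel with ambient A = 100, modulation B = 50, phase 1.0 rad,
# observed under a 4-step phase shift.
true_phi = 1.0
obs = [100 + 50 * math.cos(true_phi - 2 * math.pi * n / 4) for n in range(4)]
print(round(pmp_phase(obs), 3))  # recovers 1.0
```

The recovered phase is then unwrapped and converted to depth via the calibrated projector-camera geometry, which is what produces the 3D fingerprint surface used in the rest of this thesis.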
In this thesis, we use our new 3D scan system to acquire the 3D fingerprint, and use the best fit sphere algorithm, proposed in this thesis, to unravel the 3D fingerprint to 2D so that the quality tests and matching system can be applied to our results. Details about the 3D scan system are discussed in [1]. Whereas most current techniques introduce distortion when acquiring or post-processing the fingerprint data, our new system introduces no distortion when acquiring the 3D fingerprint data, since the scan is non-contact. The best fit sphere algorithm not only introduces no distortion when unraveling the 3D data into a 2D fingerprint, but also fully controls the dpi of the whole 2D fingerprint, so that everywhere in the 2D fingerprint the resolution is held to the 500 dpi standard. Then, after applying the NIST quality and matching systems to our unraveled 2D fingerprints, we conclude from the results that higher quality 2D fingerprints yield higher matching performance.
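The sphere-fitting step underlying the best fit sphere algorithm can be illustrated with a standard algebraic least-squares fit: every point (x, y, z) on a sphere satisfies x² + y² + z² = 2ax + 2by + 2cz + d with center (a, b, c) and d = r² − a² − b² − c², which is linear in the unknowns. This is a generic sketch with synthetic data, not the thesis's exact formulation (detailed in Chapter 2).

```python
import math

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve the normal equations
    (A^T A) w = A^T f for w = (a, b, c, d), with rows A = [2x, 2y, 2z, 1]
    and f = x^2 + y^2 + z^2; radius r = sqrt(d + a^2 + b^2 + c^2)."""
    ata = [[0.0] * 4 for _ in range(4)]
    atf = [0.0] * 4
    for x, y, z in points:
        row = [2 * x, 2 * y, 2 * z, 1.0]
        f = x * x + y * y + z * z
        for i in range(4):
            atf[i] += row[i] * f
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    # Solve the 4x4 system by Gauss-Jordan elimination with partial pivoting.
    m = [ata[i] + [atf[i]] for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(4):
            if r != col:
                k = m[r][col] / m[col][col]
                m[r] = [m[r][j] - k * m[col][j] for j in range(5)]
    a, b, c, d = (m[i][4] / m[i][i] for i in range(4))
    return (a, b, c), math.sqrt(d + a * a + b * b + c * c)

# Synthetic points sampled on a sphere centered at (1, 2, 3), radius 5.
pts = [(1 + 5 * math.sin(t) * math.cos(p),
        2 + 5 * math.sin(t) * math.sin(p),
        3 + 5 * math.cos(t))
       for t in (0.4, 1.0, 1.6, 2.2) for p in (0.0, 1.5, 3.0, 4.5)]
center, radius = fit_sphere(pts)
print([round(v, 3) for v in center], round(radius, 3))  # [1.0, 2.0, 3.0] 5.0
```

Once the sphere is fitted, each surface point can be addressed by spherical angles (theta, phi) about the sphere's center, which is what makes the subsequent unraveling to a flat 2D image possible.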
1.2 Classification of Fingerprint
Before computerization replaced manual filing systems in large fingerprint operations,
manual fingerprint classification systems were used to categorize fingerprints based on
general ridge formations (such as the presence or absence of circular patterns in various
fingers), thus permitting filing and retrieval of paper records in large collections based on
friction ridge patterns independent of name, birth date and other biographic data that
persons may misrepresent. The most popular ten print classification systems include the
Roscher system, the Vucetich system, and the Henry system [38]. Of these systems, the
Roscher system was developed in Germany and implemented in both Germany and Japan,
the Vucetich system was developed in Argentina and implemented throughout South
America, and the Henry system was developed in India and implemented in most English-speaking countries.
In the Henry system of classification, there are three basic fingerprint patterns: Arch, Loop
and Whorl. There are also more complex classification systems that further break down
patterns to plain arches or tented arches. Loops may be radial or ulnar, depending on the
side of the hand the tail points towards. Whorls also have sub-group classifications
including plain whorls, accidental whorls, double loop whorls, and central pocket loop
whorls [2].
Fig 1.3 Classification of fingerprints: plain arch, tented arch, ulnar loop, radial loop, plain whorl, central pocket loop, double loop whorl, accidental whorl.

The five classes commonly used by today's classification techniques are (i) arch, (ii) tented
arch, (iii) left loop, (iv) right loop, and (v) whorl (Figure 1.3). The distribution of the classes
in nature is not uniform, with the probabilities of each class being approximately 0.037,
0.338, 0.317, 0.029 and 0.279 for the arch, left loop, right loop, tented arch, and whorl
respectively [39]. In order to classify fingerprint images, some features have to be extracted.
In particular, almost all the methods are based on one or more of the following features:
directional image, singular points, ridge flow, and structural features. A directional image
effectively summarizes the information contained in a fingerprint pattern and can be
reliably computed from noisy fingerprints. Also, the local directions in damaged areas can
be restored by means of a regularization process and hence, fingerprint directional images
are the most widely used for fingerprint classification. The ridge lines often produce local
singularities, called core and delta, by deviating from their often parallel flow. The core is
defined as the point at the top of the innermost curving ridge, and the delta is defined as the
point where two ridges, running side-by-side, diverge closest to the core. These singular
points can be very useful for aligning fingerprints with respect to a fixed point and for
classification. Ridge flow is an important discriminating characteristic and is typically
extracted from the directional image or by binarizing the image so that each ridge is
represented by a single pixel line. Ridge flow features are more robust than singular points
for classification purposes. Structural features record the relationship between low-level
elements like minutiae, local ridge orientation, or local ridge pattern and can be useful for
fingerprint classification.
To sum up, human fingerprints are unique to each person and can be regarded as a sort of
signature certifying the person's identity. The most famous application of this kind is in
criminology. However, nowadays automatic fingerprint matching is becoming increasingly
popular in systems which control access to physical locations, computer/network resources,
or bank accounts, or which register employee attendance time in enterprises. To improve the
accuracy, a more reliable way to acquire and preprocess the fingerprint data becomes
necessary. For more details, please refer to [2].
1.3 Fingerprint Matching
Fig. 1.4 A sample fingerprint image showing different ridge patterns and minutiae types.
The uniqueness of a fingerprint is determined by the topographic relief of its ridge structure,
which exhibits anomalies in local regions of the fingertip, known as minutiae. The position
and orientation of these minutiae are used to represent and match fingerprints [40]. A
sample fingerprint image with the various ridge patterns and the common minutiae types
marked is shown in Fig. 1.4. Minutiae are the discontinuities of the ridges:
Endings, the points at which a ridge stops
Bifurcations, the points at which one ridge divides into two
Dots, very small ridges
Islands, ridges slightly longer than dots, occupying a middle space between two
temporarily divergent ridges
Ponds or lakes, empty spaces between two temporarily divergent ridges
Spurs, notches protruding from a ridge
Bridges, small ridges joining two longer adjacent ridges
Crossovers, two ridges which cross each other
The core is the inner point, normally in the middle of the print, around which swirls,
loops, or arches center. It is frequently characterized by a ridge ending and several acutely
curved ridges.
Deltas are the points, normally at the lower left and right of the fingerprint, around
which a triangular series of ridges center.
There are many kinds of minutiae features; some of them are listed in the following table.

Table 1.1 Basic and composite ridge characteristics (minutiae): ridge ending, bifurcation,
dot, island (short ridge), pond, spur, bridge, crossover, double bifurcation, trifurcation,
opposed bifurcation, and ridge ending/opposed bifurcation. (Example images omitted.)
The ridge patterns along with the core and delta define the global configuration while the
minutiae points define the local structure of a fingerprint. Typically, the global
configuration is used to determine the class of the fingerprint while the distribution of
minutiae points is used to match and establish similarity between two fingerprints.
Fingerprint matching techniques can be placed into two categories: minutiae-based and
correlation-based. Minutiae-based techniques first find minutiae points and then map
their relative placement on the finger. However, there are some difficulties with this
approach: it is difficult to extract the minutiae points accurately when the fingerprint is
of low quality, and the method does not take into account the global pattern of ridges
and furrows [38, 39, 40]. The correlation-based method is able to overcome some of the
difficulties of the minutiae-based approach; however, it has shortcomings of its own.
Correlation-based techniques require the precise location of a registration point and are
affected by image translation and rotation.
Fig 1.5 Matching process between two fingerprints.

Fingerprint matching based on minutiae has problems in matching different sized
(unregistered) minutiae patterns. Local ridge structures cannot be completely characterized
by minutiae [41]. We are trying an alternate representation of fingerprints which will
capture more local information and yield a fixed-length code for the fingerprint. The
matching will then hopefully become a relatively simple task of calculating the Euclidean
distance between the two codes [42].
Generally, as shown in Figure 1.5, an ordinary fingerprint has about 50 minutiae. The
location and direction are extracted from each minutia, and the matching is based on these
pieces of information. However, the location and direction of the minutia points alone are
not enough for fingerprint identification because of the flexibility of fingerprint skin. For
this reason, we add information called a "relation". The relation is the number of ridges
between two minutiae. This relation information significantly improves the matching
accuracy when combined with the information on the minutiae. In this thesis we will use
BOZORTH3, developed by NIST, as the matching system to test our results. In this
application, the degree of similarity is given by a similarity number, which we will discuss
in detail in Chapter 3.
1.4 Previous Work
Ganesh set up the first fingerprint scanner in [3, 4]. The fingerprint scan system is multi-
frequency Measuring Profilometry (PMP) technique based 3D scan system. In an attempt
to build such a system, we have been developing a non-contact scanning system (Fig. 1.6
that uses multiple, high-resolution, commodity, digital cameras and employs Structured
Light Illumination (SLI) [1, 4] as a means of acquiring 3-D scans of all the five fingers and
the palm with sufficient high resolution as to record ridge level details. The system will
operate in both Autonomous Entry and Operator Controlled Entry interfaces. The prototype
system will be designed such that it can sit on top of a table or desk with adjustable base for
precise height adjustment. For more details about the setup of the 3D fingerprint scan
system, please refer to [3].
Fig 1.6 Experimental setup used for 3D fingerprint scanner.

After acquiring the 3D fingerprint, in [2], Abhishika uses a spring algorithm to unravel the
3D scan into a 2D fingerprint. She also describes certain quantitative measures that help
evaluate 2D unraveled fingerprints. Specifically, she uses image software components
developed by the National Institute of Standards and Technology (NIST) to derive the
performance metrics. A comparison is also made between 2D fingerprint images obtained
by traditional means and the 2D images obtained after unrolling the 3D scans, and the
quality of the acquired scans is quantified using these metrics. It is shown that both the 2D
inked and 3D unraveled fingerprints have a similar quality distribution and trend.
However, based on the spring algorithm and the experimental 3D fingerprint scanner, the
2D inked fingerprints show higher performance than the 3D fingerprints. In this thesis, by
employing the new 3D fingerprint scanner and the best fit sphere unraveling algorithm, we
not only reduce the post-processing computation but also make the 3D fingerprints perform
better than the 2D inked fingerprints. In addition, we introduce the matching software and
show that the quality of the 3D unraveled fingerprints has a strong relation with matching
performance, where a higher quality fingerprint achieves a higher probability of better
matching performance.
Chapter 2 Post Processing of 3D Fingerprint

In order to assess our 3D fingerprint system, it is necessary to find a system to evaluate the 3D
fingerprint data and compare it with 2D inked fingerprints. Since most fingerprints in use today
are in 2D, the most commonly used evaluation systems are also based on 2D data. That means it
is necessary first to unravel the 3D fingerprints into 2D and extract all the ridge information
before we assess the new method. The 2-D equivalent rolled image from the 3D scanned
fingerprint data is necessary for (i) using NIST software to extract minutiae, analyze the quality,
and match between fingerprints, and (ii) comparing our results with others. To obtain a 2-D
equivalent rolled image from the extracted finger surface, Abhishika used a spring algorithm [2].
However, the computation of the spring algorithm is expensive. To make the computation more
efficient and improve the quality of the unraveled results, here we introduce the best fit sphere
algorithm. At the end of this chapter, NIST fingerprint quality measuring software is run on our
fingerprints unraveled by the best fit sphere algorithm, and the results are compared with 2D
inked fingerprints and fingerprints unraveled by the spring algorithm.
2.1 Fit a Sphere to the 3D Surface
2.1.1 Calculate the Sphere
The first step to unravel the fingerprint is to fit a sphere to the 3D data. A sphere can be defined by
specifying its center point (xc, yc, zc) and its radius r [43, 44]. So, the goal here is to develop a
program which computes the center point and the radius based on least squares, which means the
sphere minimizes the sum of the squared distances from the points on the sphere to the
corresponding points on the fingerprint.
Fig. 2.1 Three different views of the original input 3D fingerprint.

Because our purpose is to minimize the sum of the squared distances, a function to compute the
squared distance from each point to the center of the sphere is needed, which is given as follows:

d0 = (xf - xc)^2 + (yf - yc)^2 + (zf - zc)^2        (2.1)

where (xf, yf, zf) is a point from the 3D fingerprint and (xc, yc, zc) is the center of the sphere. After
computing d0, the distance from each point on the fingerprint surface to the surface of the sphere is
further given as:

d = sqrt[(xf - xc)^2 + (yf - yc)^2 + (zf - zc)^2] - r        (2.2)

That is, the distance from the surface of the sphere to the fingerprint equals the distance from the
center of the sphere to the fingerprint less the radius of the sphere [45, 46]. The distance is negative
if the fingerprint point is inside the sphere and positive if it is outside [47]. Although the sign does
not matter here because the distance is squared, it is useful for further computation. Suppose we
have n input points (xf1, yf1, zf1), ..., (xfn, yfn, zfn). Expanding the sphere equation, for each point
we have:

a·xf + b·yf + c·zf - d0 + (xf^2 + yf^2 + zf^2) = 0        (2.3)
When n > 4, this can be written in matrix form as:

    [ xf1  yf1  zf1  -1  (xf1^2 + yf1^2 + zf1^2) ]   [ a  ]
    [ ...                                        ] · [ b  ] = 0        (2.4)
    [ xfn  yfn  zfn  -1  (xfn^2 + yfn^2 + zfn^2) ]   [ c  ]
                                                     [ d0 ]
                                                     [ s  ]

The parameter s is a scale value. We can solve this homogeneous system by the SVD
decomposition A = U D V^T, where A is the matrix above; the last column of V is the solution.
After normalizing, a = a/s, b = b/s, c = c/s, and d0 = d0/s. Then the radius is
r = sqrt((a^2 + b^2 + c^2)/4 + d0), and the center point is (xc, yc, zc) = (-a/2, -b/2, -c/2).
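The least-squares sphere fit described above can be sketched in Python as follows. This is a minimal illustration of the homogeneous formulation: build one row per point, take the SVD null vector, normalize by the scale s, and recover the center and radius. The function name and array layout are our own choices.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit via the homogeneous system of Eq. (2.4).

    points: (n, 3) array of 3D fingerprint points, n > 4.
    Returns (center, radius).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Each row: [x, y, z, -1, x^2 + y^2 + z^2] . [a, b, c, d0, s]^T = 0
    A = np.column_stack([x, y, z, -np.ones_like(x), x**2 + y**2 + z**2])
    # The solution is the right-singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d0, s = Vt[-1]
    a, b, c, d0 = a / s, b / s, c / s, d0 / s
    center = np.array([-a / 2.0, -b / 2.0, -c / 2.0])
    radius = np.sqrt((a**2 + b**2 + c**2) / 4.0 + d0)
    return center, radius
```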
Fig. 2.2 3D views of fingerprint data and sphere. To get a clear view, the data is greatly down sampled, from 1392×1040 to 65×50. Blue points '.' represent points from the fingerprint data. Red points '*' represent points from the 'best fit' sphere. From left to right are four different views of the 3D data.

The last step to find the best fit sphere is to move the north axis of the sphere to the center of the
fingerprint, after we get the radius and center point coordinates of the sphere. This step is necessary
because after unraveling we want the 2D fingerprint located at the center of the image, and it also
helps greatly when we down sample the whole image to 500 dpi, which we discuss later in this
chapter. This is done by keeping the sphere unchanged and rotating the whole 3D fingerprint, such
that the center of the fingerprint is rotated to the north pole of the sphere.
2.1.2 Change the North Pole of the Sphere
By computing only the center and radius of the sphere, we get little information about where
the north pole of the sphere is. Thus, the unravel center of the fingerprint may not even lie on
the 3D fingerprint surface. To ensure that the unravel center is also the center of the fingerprint,
we first move the origin to the center of the sphere and then rotate the whole fingerprint such
that the north pole points out of the center of the fingerprint.
As shown in figure 2.3, where the light lines represent axes, after the rotation and translation,
the unravel center, which is on the z axis, is moved onto the surface of the fingerprint, and the
z axis goes through the center of the fingerprint point cloud. This step is important to defuse
distortion introduced by the fit sphere unraveling program.
Fig. 2.3 Comparison before and after moving the north pole. Black point clouds in the figures are
points down sampled from the surface of the 3D fingerprint, and the light lines are the axes. Fig (a)
shows that before moving the north pole, the unravel center is not on the fingerprint's surface.
Fig (b) shows that after the rotation and translation, the unravel center is moved to the fingerprint's
center.
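The translation and rotation described in Section 2.1.2 can be sketched as follows. This is a minimal version using Rodrigues' rotation formula; aligning the cloud's mean direction with +z is our assumption about how the fingerprint center is chosen, and the function name is hypothetical.

```python
import numpy as np

def rotate_to_north_pole(points, center):
    """Translate so the sphere center is the origin, then rotate the cloud so
    the fingerprint's centroid direction points along +z (the north pole)."""
    p = points - center                      # move origin to the sphere center
    v = p.mean(axis=0)
    v = v / np.linalg.norm(v)                # unit vector toward fingerprint center
    zhat = np.array([0.0, 0.0, 1.0])
    axis = np.cross(v, zhat)                 # rotation axis, |axis| = sin(theta)
    s = np.linalg.norm(axis)
    c = np.dot(v, zhat)                      # cos(theta)
    if s < 1e-12:                            # already aligned or anti-aligned
        return p if c > 0 else p @ np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]]) / s
    # Rodrigues' rotation formula: R = I + sin(t) K + (1 - cos(t)) K^2.
    R = np.eye(3) + s * K + (1.0 - c) * (K @ K)
    return p @ R.T
```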
2.2 Unravel the 3D Fingerprint
2.2.1 Create Grid Data
Generally, the part of the finger near the camera has a higher point density than the part far from
the camera, so the points on the fingerprint are not equally distributed. Before unraveling the
fingerprint, corresponding grid data is created to ensure that the point density is equally distributed
on the surface. To achieve this, the whole data set is converted from Cartesian (x, y, z) coordinates
to spherical (theta, phi, rho) coordinates. Then, a uniform mesh, consisting of theta and phi, is
created, so that if we consider the fingerprint perfectly fitted to the sphere, the created mesh is
equally projected onto the fingerprint, and the value of each point on the mesh is linearly
interpolated from the fingerprint data. To preserve more information from the original 3D
fingerprint, the mesh is created with generally 3 or 4 times higher density than the original 3D
fingerprint data. The grid data is shown in fig 2.4.
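The coordinate conversion and mesh creation described above can be sketched as below. The grid sizing heuristic is illustrative, and the rho values on the mesh would then be interpolated (e.g., with scipy.interpolate.griddata) from the scattered fingerprint samples; both helper names are our own.

```python
import numpy as np

def cart_to_sphere(p):
    """Convert centered (x, y, z) points to spherical (theta, phi, rho)."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    rho = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x)                       # azimuth angle
    phi = np.arccos(np.clip(z / rho, -1.0, 1.0))   # polar angle from the north pole
    return theta, phi, rho

def uniform_mesh(theta, phi, factor=3):
    """Uniformly spaced (theta, phi) grid roughly `factor` times denser than
    the input sampling; the sizing heuristic here is an assumption."""
    n_t = int(np.sqrt(theta.size) * factor)
    n_p = int(np.sqrt(phi.size) * factor)
    t_grid = np.linspace(theta.min(), theta.max(), n_t)
    p_grid = np.linspace(phi.min(), phi.max(), n_p)
    # rho values on this mesh would be interpolated from the scattered samples,
    # e.g. with scipy.interpolate.griddata.
    return np.meshgrid(t_grid, p_grid)
```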
Fig. 2.4 Value distributions of theta and phi. Figure (a) is theta, and figure (b) is phi. Theta
and phi, whose values are equally distributed, together form a mesh, which is projected onto
the 3D fingerprint. The density of this created mesh is 3 times higher than that of the 3D
fingerprint data.
Fig. 2.5 Grid data created on the 3D fingerprint. Each point in the figure is from the created mesh points. For a clear view, the data is greatly down sampled from 1392×1040 to 130×100. From left to right are four different views of the 3D data.

From Fig. 2.5, besides the fact that the part of the finger near the camera has a higher density,
since the 3D fingerprint is not perfectly fitted to the 'best fit' sphere, the point density becomes
higher where the two surfaces are closer. However, the mesh is equally spaced, as we can see
from Fig. 2.4. Further unraveling is based on the gridded data. Once we have these theta and phi
maps, we interpolate the rho values to form a new rho map, on which we perform the unraveling
process. Another thing we can see in figure 2.5 is that the density of the points changes, which is
mainly caused by the imperfect fit of the sphere to the 3D fingerprint. Actually, no sphere can
perfectly fit a 3D fingerprint model. And since we are creating linear maps of theta and phi, as
shown in figure 2.4, this misfit results in an unequally spaced point density in 3D. Thus, when
we unravel the 3D fingerprint into a 2D fingerprint, distortion is created by the best fit sphere
algorithm, and further steps to defuse this distortion are needed, which we discuss later in this
chapter.
2.2.2 Unravel the 3D Surface
After the rho value is computed for each point on the mesh, the unraveling is performed based
on the unraveling of the created mesh: the computed rho values are simply arranged according
to Fig. 2.4.
Fig. 2.6 Unraveled 2D fingerprint. The density of the unraveled 2D fingerprint is 3 times higher than the original density of the 3D fingerprint.

The unraveled result is given in Fig. 2.6. As we can expect, a better fit of the sphere results in a
better unraveled 2D fingerprint. This figure is actually created based on the theta and phi maps
shown in figure 2.4. The rho values of the fingerprint are interpolated based on the theta and phi
maps to create a new rho map of the fingerprint. This new rho map has two main uses. First, it is
used to create the unraveled fingerprint: filters are applied to the newly created rho map, so that
after the band-pass filter we obtain the information on how the rho values change along the
surface of the finger. Second, the rho map is used to correct the distortion of the fingerprint
caused by the best fit sphere algorithm.
2.2.3 Apply Filters to the Unraveled 2D Fingerprint
After we get the new rho map, shown in Fig. 2.6, we first use this map to get the unraveled 2D
fingerprint, which is the variation of rho along the surface of the fingerprint. To achieve this, a
band-pass filter is applied to extract the finger ridges. Here, in order to realize the band-pass
filter, we first apply a Gaussian low pass filter and subtract the low-frequency component.
Fig. 2.7 2D fingerprint after high pass filtering. The Gaussian low pass filter is a 20×20 filter
with hsize equal to 3×3 and sigma equal to 40. Fig. (a) is the filtered-out low-frequency data,
and Fig. (b) is the data after subtracting the low frequency from the unraveled 2D fingerprint.
As shown in figure 2.7(b), after the high pass filtering, the unraveled 2D fingerprint is almost
there. However, there is a lot of high-frequency noise introduced during the collection of the 3D
fingerprint. The frequency of this noise is generally much higher than the frequency of the ridges
on the finger, so to reduce the noise in the data in Fig. 2.7(b), an appropriate low pass filter is
needed.
If the high pass and low pass filters are regarded as one process, a band-pass filter is effectively
applied to the unraveled 2D fingerprint. After the filtering, the meaningless edge data created by
the filter is cropped out, and a histogram equalization function is applied to the data.
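The band-pass stage described above (low-pass subtraction as the high-pass step, a mild blur for denoising, then histogram equalization) can be sketched with a NumPy-only Gaussian blur. The sigma values and the rank-based equalization are illustrative, not the thesis's exact settings, and the helper names are our own.

```python
import numpy as np

def _gauss1d(sigma):
    # 1D Gaussian kernel, truncated at 3 sigma and normalized.
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian convolution, 'same' output size.
    k = _gauss1d(sigma)
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, rows)

def bandpass_ridges(rho_map, low_sigma=40.0, noise_sigma=1.5):
    """Extract ridge detail from the unraveled rho map.

    High-pass step: subtract a heavily blurred (low-frequency) version,
    leaving ridge variation plus noise. Low-pass step: a mild blur removes
    the high-frequency scan noise. Finally a simple rank-based histogram
    equalization spreads the values over [0, 1].
    """
    low = _blur(rho_map, low_sigma)               # coarse finger shape
    ridges = _blur(rho_map - low, noise_sigma)    # denoised ridge signal
    flat = ridges.ravel()
    ranks = flat.argsort().argsort()              # rank-based equalization
    return (ranks / (flat.size - 1.0)).reshape(ridges.shape)
```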
Fig. 2.8 2D fingerprint after the post-filter process. To resemble an inked fingerprint, the color is
inverted. The image size is [870, 1180], which has three times higher point density than the
original 3D fingerprint data.
2.2.4 Down Sample to Standard
The standard resolution is 500 dpi, meaning there should be 500 dots per inch. In order to
preserve as much information as the original scanned 3D fingerprint data, the data size for the
unravel process is 3 times larger than the original, which for this data sample is 4 times larger
than the standard. The algorithm is straightforward: we compute how many points per inch our
unraveled data contains, and then down sample all the data to the 500 dpi standard. Following is
the plot of the distances along the theta and phi directions; here we take the distance along the
theta direction as an example.
Fig 2.9 Plot of distance along theta direction
Since there is noise in these waveforms, if we directly scale to 500 dpi, the distance changes
caused by noise and ridges would also be scaled, which is not wanted. So, after getting these
distances, we first apply an FFT-based filter to smooth the waveform.
Fig 2.10 Plot of distance along theta direction after FFT
The unit used here is millimeters. If we want 500 dpi, the distance between points should be
D = 25.4/500 = 0.0508 mm
So, we get
Fig 2.11 Distance along theta direction after scale to 500 dpi
As we can see from Fig. 2.11, the distances between every two points are changed to around
0.0508 mm, which corresponds to 500 dpi resolution. Thus, the dpi values along the two center
lines are changed to 500, based on which we can say the whole fingerprint is down sampled to
500 dpi.
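The spacing measurement, FFT smoothing, and 500 dpi resampling for one scan line can be sketched as follows. The FFT cutoff and the function name are assumptions; the returned fractional indices would then be used to interpolate the line at uniform 0.0508 mm spacing.

```python
import numpy as np

def resample_line_to_dpi(points_3d, dpi=500):
    """Resample one scan line of 3D points (in mm) to uniform spacing for `dpi`.

    The point-to-point distances are smoothed with an FFT low-pass so that
    spacing jitter caused by ridges and noise is not 'corrected' away, then
    the line is resampled by arc length. Returns fractional indices into the
    original line, one per output sample.
    """
    d = np.linalg.norm(np.diff(points_3d, axis=0), axis=1)  # spacing in mm
    D = np.fft.rfft(d)
    D[10:] = 0                              # keep only low-frequency bins (assumed cutoff)
    d_smooth = np.fft.irfft(D, n=d.size)    # smoothed spacing profile
    arc = np.concatenate([[0.0], np.cumsum(d_smooth)])      # smoothed arc length
    step = 25.4 / dpi                       # 0.0508 mm at 500 dpi
    new_arc = np.arange(0.0, arc[-1], step)
    # Fractional index into the original line for each uniformly spaced sample.
    return np.interp(new_arc, arc, np.arange(arc.size))
```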
2.2.5 Further Distortion Correction
One thing that should be noticed here is that, since we are not taking the distance changes caused
by noise into consideration, after scaling the noise is still there, which is also true for the ridges.
In this way, what we scaled is only the size of the image, not the distances between ridges and
noise. This scaling also creates a problem: since we are scaling only along the center theta and
center phi lines, the error grows larger and larger toward the edges. This problem is illustrated in
the following figures.
As we claimed, 500 dpi means that, ideally, the distance between every two points is 0.0508 mm.
Here, the image size we are scaling to is [600, 600]. So, the center line along the vertical direction
is x = 300, and the center line along the horizontal direction is y = 300. After scaling, the distances
between every two points along these two lines are shown below:
Fig. 2.12 (a) Distance plot along x = 300; (b) Distance plot along y = 300.

From the above figures, we can see the distances are all around 0.0508 mm, which means the
dpi is 500. However, toward the edges of the image, the distortion becomes large. We take the
two following plots as examples.
Fig. 2.13 (a) Distance plot along x = 450; (b) Distance plot along y = 150.

The above two plots show that the dpi values of those areas are not 500. The main reason is
that we are creating linear theta and phi maps, as shown in figure 2.4, based on which we
scale the image. However, the distance between points on the finger is not linear. This is the
distortion caused by the best fit sphere algorithm; in other words, no sphere can perfectly fit
the shape of a finger, so when we unravel the fingerprint based on the sphere, distortion is
created. To solve this problem, we should create nonlinear theta and phi maps. The newly
created theta and phi maps should be distorted such that the distances between every two
points along both the vertical and horizontal directions are 0.0508 mm. Thus, we created the
following two maps.
Fig. 2.14 (a) New theta map; (b) New phi map.

Different from the theta and phi maps we used before, these two newly created maps are
nonlinear. As shown in figure 2.7, the size of the input image is [870, 1180]. The theta map is
created line by line horizontally, which means the values on every horizontal line are not linear.
For every horizontal line on the input 3D fingerprint, we can calculate the total distance of that
line and the distances between every two points. So, the number of points on every horizontal
line of the theta map gives us the number of points after scaling. On the other hand, for the phi
map, we scan the vertical lines. For every vertical line on the 3D fingerprint, we calculate the
distance between every two points. Then we apply an FFT to get rid of the ridges and noise,
after which we get a smooth plot, much the same as the process used to obtain figure 2.10. So,
we can scale all the distances along this line to 0.0508 mm, which means 500 dpi. Accumulating
all these vertical lines, we get the new phi map. In other words, the theta map defines the
horizontal length of the fingerprint, and the phi map defines the vertical length of the fingerprint.
However, after all this, we still do not have maps that can be used directly to scale the
fingerprint: as you can see from the two plots above, the image sizes are not yet [600, 600]. So
we further scaled the maps so that the sizes are [600, 600] while the distances between points
are unchanged. The rule for down sampling the theta and phi maps is that the theta map gives
us the horizontal length of the fingerprint, and the phi map gives us the vertical length; we use
this rule to shorten the vertical length of the theta map and the horizontal length of the phi map.
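The per-row arc-length bookkeeping behind the nonlinear theta map can be sketched as below. This computes only the number of 500 dpi samples each horizontal line supports, omitting the FFT smoothing step; the function name and array layout are our own.

```python
import numpy as np

def row_sample_counts(grid_xyz, dpi=500):
    """For each horizontal line of the gridded 3D fingerprint, compute how many
    `dpi` samples its arc length supports -- the information the nonlinear
    theta map encodes. grid_xyz: (rows, cols, 3) array of points in mm.
    """
    step = 25.4 / dpi                                      # 0.0508 mm at 500 dpi
    d = np.linalg.norm(np.diff(grid_xyz, axis=1), axis=2)  # spacing along each row
    row_lengths = d.sum(axis=1)                            # arc length of each row
    return np.floor(row_lengths / step).astype(int)        # samples per row
```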
Fig. 2.15 (a) New theta map with size [600, 600]; (b) New phi map with size [600, 600].

Thus, we get our two new nonlinear maps, which are distorted based on the distances between
points. Next, we wrote a function to further arrange these two maps, such that after down
sampling to a [600, 600] image size, the distances between points are still 0.0508 mm. After this
distortion correction step, let us again look at the distances between points on the unraveled
fingerprint. First, we look at the distances along the two center lines after unraveling:
Fig. 2.16 (a) Distance plot along x = 300; (b) Distance plot along y = 300.

Compared to figure 2.12, the small low-frequency wave is defused in figure 2.16, which means
the distances along these two lines become more stable and small distortions are defused. To
compare with the previous result, we again take x = 450 and y = 150.
Fig. 2.17 (a) Distance plot along x = 450; (b) Distance plot along y = 150.

Comparing figure 2.17 to figure 2.13, the distances are corrected to 0.0508 mm, which means
that the dpi values of these two lines are corrected to 500. Examples along other lines also show
that the distances are changed to 0.0508 mm, i.e., 500 dpi. By performing these distortion
correction steps, not only is the dpi controlled to 500, but the distortion is also defused.
Fig 2.18 Variance comparison before and after distortion correction. (a) Variances along the theta and phi directions before correction. (b) Variances along the theta and phi directions after correction.

In Fig 2.18, we plot the variances of the point spacing before and after the distortion correction,
based on 120 fingerprints. As shown, the variances along the theta and phi directions before the
distortion correction are 9.6941×10^-5 and 3.2257×10^-4. After the distortion correction, the
variances drop to 2.5712×10^-9 and 1.9442×10^-9, which is much lower than before.
If we assume that the 3D data is accurate, which means we can get the accurate distances between
every two points in 3D space, this unraveled result will be totally free of unraveling distortion and
the dpi is 500. By this improvement, we can get rid of the distortion introduced by the best fit
sphere algorithm. Since, no matter where the center of the sphere is, the distances from every
point to the sphere center are calculated from the (x, y, z) values of those points, and so are the
phi and theta values, we still have an accurate distance measure between points on the 3D finger.
So, by down sampling in a nonlinear way, we correct the distortion from the best fit sphere
algorithm and control the dpi of the whole unraveled fingerprint to 500.
After the scaling steps, the unraveled 2D fingerprint from the 3D scanned fingerprint is created
at 500 dpi. The next step is to apply the NIST software to the 2D unraveled fingerprint and
evaluate it.
Fig 2.19 2D unraveled fingerprint from scanned 3D data.

2.3 Applying NIST Software to the Data
The evaluation process is performed by NIST software. Here we will apply the binarization system
(MINDTCT) and local quality system (NFIQ) to the 2D unraveled data to get the binary
fingerprint and its corresponding quality map.
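Assuming the NIST NBIS command-line tools are installed and on the PATH, invoking MINDTCT and NFIQ might look like the following sketch. The exact output files depend on the NBIS version, so treat the command shapes as assumptions rather than a definitive interface.

```python
import subprocess

def mindtct_cmd(image_path, output_root):
    """Command line for NIST MINDTCT, which writes outputs such as the
    binarized image, minutiae, and quality map under `output_root`
    (assumed NBIS usage: mindtct <image_in> <output_root>)."""
    return ["mindtct", image_path, output_root]

def nfiq_cmd(image_path):
    """Command line for NIST NFIQ, which prints a quality number from
    1 (best) to 5 (worst)."""
    return ["nfiq", image_path]

def run_tool(cmd):
    """Run a NIST tool and return its stdout; requires NBIS installed."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
```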
Fig 2.20 Binarized fingerprint of the 2D unraveled fingerprint from 3D.
Fig. 2.21 Quality image of the 2D unraveled fingerprint. White represents 4, the highest quality;
darker colors represent poorer quality. 0 is the lowest quality, meaning there is no meaningful
data. The average quality of this sample is 3.1090.
Fig. 2.22 Schematic flow chart of the Best Fit Sphere algorithm.

2.4 Fingerprints Quality Analysis
In this part, we develop quantitative measures for evaluating the performance of the presented 3-D fingerprint scanning and processing. To demonstrate the validity of using the NIST software [48] and the corresponding statistical measures of overall quality, number of local quality zone blocks, minutiae reliability, and classification confidence number [2] for evaluating the scanner and the unraveling program, we also collected conventional 2-D rolled inked fingerprints along with the 3-D scans for the same subjects, and ran the NIST software on both the 2D rolled ink fingerprints and the 3D scans unraveled with the fit-sphere algorithm. The NIST software components are run over both these sets of data
and the results are compared. The better of the two scanning technologies should generate a
higher confidence number on classification and should have a higher number of reliable
minutiae (greater than 20) and higher number of local blocks in quality zone 4 along with a
higher overall image quality. For the details of NIST quality measuring software, please
refer to [2].
A fingerprint database, created for this purpose at the University of Kentucky, was used for the performance evaluation of the scanner system. The database consists of both 2-D rolled inked fingerprint images and unraveled 2-D images of the corresponding 3-D scans. To obtain the 3D fingerprint scans, each finger of each subject was scanned individually using the SLI prototype scanner described in Chapter 2. The 3-D scans were post-processed and run through the NIST filters along with their 2D equivalents to get the desired statistical values. In this chapter, we first apply the NIST systems to all the 2D inked and 2D unraveled fingerprints, and we show that:
•
Our 2D unraveled fingerprints have a distribution of overall quality, local quality zones, high-confidence minutiae, and classification confidence similar to that reported in [2], where the spring algorithm was applied to unravel the 3D fingerprints.
•
Our 2D unraveled fingerprints have higher quality, more high-confidence minutiae, higher classification confidence numbers, and better matching performance than 2D inked fingerprints, whereas the results in [2] showed lower performance than 2D inked fingerprints.
We will begin our assessment with PCASYS, which classifies the fingerprint images into
five basic categories depending on the position of the singular components like the core
and the delta and the direction of the ridge flow. Based on these characteristics, fingerprints
are divided into right loop, left loop, arch, tented arch and whorl classes. Along with the
fingerprint class, PCASYS also outputs an estimated posterior probability of the
hypothesized class, which is a measure of the confidence that may be assigned to the
classifier’s decision. This confidence number is used as a quantitative metric for assessing
scanner performance. As the quality of the scan improves, the confidence number output of
the classifier will be closer to 1.
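As an illustration of how a posterior probability can serve as a confidence number, the sketch below (our own hypothetical example, not the PCASYS code) applies a softmax to raw per-class scores for the five classes and reports the winning class together with its posterior:

```python
import math

def classify_with_confidence(scores):
    """Turn raw per-class scores into posteriors via a softmax and return
    the hypothesized class with its posterior probability as the
    confidence number. Illustrative only; PCASYS computes its posterior
    differently, but the confidence has the same interpretation."""
    exps = {c: math.exp(s) for c, s in scores.items()}
    total = sum(exps.values())
    posteriors = {c: e / total for c, e in exps.items()}
    best = max(posteriors, key=posteriors.get)
    return best, posteriors[best]
```

A well-separated score for one class (here a whorl, "W") drives the confidence toward 1, matching the behavior described above for good-quality scans.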
As a second approach, the NIST minutiae detection software MINDTCT automatically
detects the minutiae on the fingerprint image. It also assesses the quality of the minutiae
and generates an image quality map. To locally analyze the image, MINDTCT divides the
image into a grid of blocks and assesses the quality of each block by computing the
direction flow map, low contrast map, low flow map, and high curvature map, the last three
of which detect unstable regions in the fingerprint where minutiae detection is unreliable.
The information in these maps is integrated into one general map and contains 5 levels of
quality (4 being the highest quality and 0 being the lowest). The background has a score of
0 while a score of 4 means a very good region of the fingerprint. The quality assigned to a
specific block is determined based on its proximity to blocks flagged in the abovementioned maps. The total number of blocks with quality 1 or better is regarded as the
effective size of the image or the foreground. The percentage of blocks with quality 1, 2, 3,
and 4 are calculated and are regarded as quality zones 1, 2, 3, and 4 respectively.
Fingerprints with higher quality zone number of 4 are more desirable.
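The foreground-size and quality-zone statistics described above can be computed from a per-block quality map as in this sketch (our own illustration of the bookkeeping, not the NIST implementation):

```python
def quality_zone_stats(quality_map):
    """Given a per-block quality map with values 0-4 (0 = background,
    4 = best), return the effective image size (number of foreground
    blocks, quality >= 1) and the percentage of foreground blocks in
    each quality zone 1-4."""
    flat = [q for row in quality_map for q in row]
    foreground = [q for q in flat if q >= 1]
    n = len(foreground)
    zones = {z: 100.0 * sum(1 for q in foreground if q == z) / n
             for z in (1, 2, 3, 4)}
    return n, zones
```

A map dominated by zone-4 blocks then yields the high qz4 percentage that marks a desirable scan.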
The MINDTCT software also computes a quality, or reliability, measure for each detected minutia point based on two factors. The first is the location of the minutia within the quality map, and the second is simple pixel intensity statistics (mean and standard deviation) within the neighborhood of the point. Based on these two factors, a quality value in the range from 0.01 to 0.99 is assigned to each minutia, where a low value represents a minutia detected in a low-quality region of the image and a high value indicates a minutia detected in a higher-quality region. Minutiae with a quality number less than 0.5 are considered unreliable, while those with a quality number greater than 0.75 are considered highly reliable [refer to NISTIR 7151].
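A hedged sketch of the two-factor idea follows; the exact NIST formula differs, and the 128.0 contrast normalizer and the clipping bounds are illustrative assumptions of ours:

```python
import statistics

def minutia_reliability(block_quality, neighborhood):
    """Illustrative two-factor reliability score: the block quality (0-4)
    from the quality map sets a ceiling, and the pixel-intensity spread of
    the neighborhood scales it. NOT the NIST formula; a sketch only."""
    if block_quality == 0:
        return 0.01                      # background block: unreliable
    stdev = statistics.pstdev(neighborhood)
    contrast = min(stdev / 128.0, 1.0)   # crude contrast measure in [0, 1]
    score = (block_quality / 4.0) * contrast
    return max(0.01, min(0.99, score))   # clamp to the 0.01-0.99 range
```

Applying the 0.5 and 0.75 thresholds from the text to such scores separates unreliable minutiae from highly reliable ones.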
Thirdly, the NIST fingerprint image quality software NFIQ assigns an overall quality
number to the fingerprint image by computing a feature vector using the quality image map
and the minutiae quality statistics generated by the MINDTCT system [49], which is then
used as an input to a Multi-Layer Perceptron (MLP) neural network classifier. The output
activation level of the neural network classifier is used to determine the fingerprint image
quality. There are five quality levels with 1 being the highest quality and 5 being the lowest
quality. The quality number information is a useful quantitative metric in that the
information provided by the quality number can be used to determine low quality samples
and, hence, can subsequently help in improving scanning technology.
2.4.1 2D Ink Rolled Fingerprint Scanning
To obtain the 2D rolled equivalent inked images, each subject was escorted to the University of Kentucky's Police Department, where their prints were taken manually by a trained police officer. The fingerprint impressions were taken on a fingerprint card using black printer ink. The fingerprint card is an 8" × 8" single-copy white card with a "Face" side and an "Impression" side. The "Face" side provides blocks for administrative information, while the "Impression" side provides blocks for descriptive information and fingerprint impressions. Both the rolled fingerprint impressions and the plain, or dab, finger impressions are taken on the card. An example fingerprint card is shown in Fig. 2.23. The fingerprint card obtained for each subject was then scanned manually using an HP flatbed scanner at 500 dots per inch, according to the CJIS specifications, and manually segmented in Adobe Photoshop to obtain the images for individual fingers.
Fig 2.23 An example fingerprint card
2.4.2 2D Fingerprint Experimental Results and Analysis
To validate our hypothesis that the quantitative measures of local image quality zones, minutiae reliability, and the classification confidence number can quantify scanner performance, we first analyze the 2D rolled inked fingerprints obtained at the police station through the NIST PCASYS, MINDTCT, and NFIQ components [50]. To protect the identity of the subjects, each was assigned a number rather than being identified by name. The letter "L" denotes that a particular finger belongs to the left hand, while the letter "R" denotes the right hand. The individual digits on each hand are denoted D2 for the index finger, D3 for the middle finger, D4 for the ring finger, and D5 for the little finger. So a file named subject4 L D2 represents the left-hand index finger of subject number 4. Running the software on the 2D inked fingerprint database yields the following table.
Table 2.1 Results of running PCASYS, MINDTCT and NFIQ on the 2D images of Subjects 0 through
14.
Subject      Hand   Digit  Class  Conf. No.  Tol. Min  Rel. Min  Q. Zone  Q. No.
Subject 0    Left   D2     W      1.00       160       30        34.20    4
                    D3     R      0.35       202       30        44.23    3
                    D4     L      0.99       174       10        32.39    3
                    D5     L      0.50       194       4         11.05    4
             Right  D2     W      1.00       140       44        42.40    3
                    D3     R      0.99       161       6         23.67    4
                    D4     R      0.98       187       12        27.14    4
                    D5     R      0.82       166       3         23.32    4
Subject 1    Left   D2     L      0.68       189       51        45.81    1
                    D3     L      1.00       136       37        55.66    1
                    D4     W      0.54       201       66        51.50    1
                    D5     T      0.81       185       11        31.58    3
             Right  D2     R      1.00       125       48        54.37    3
                    D3     W      1.00       218       57        46.63    1
                    D4     W      1.00       180       45        49.49    1
                    D5     R      0.99       169       20        32.44    4
Subject 2    Left   D2     R      0.91       141       31        31.26    3
                    D3     L      0.94       174       28        28.09    4
                    D4     L      0.87       188       24        22.07    4
                    D5     L      0.99       150       22        33.17    4
             Right  D2     L      0.82       215       8         12.49    4
                    D3     L      0.49       254       6         11.58    5
                    D4     R      0.53       169       14        19.05    4
                    D5     R      0.51       202       6         12.40    4
Subject 3    Left   D2     W      1.00       151       19        37.43    3
                    D3     W      1.00       130       13        46.79    3
                    D4     W      1.00       205       12        28.81    4
                    D5     L      1.00       95        6         24.72    3
             Right  D2     W      1.00       137       62        49.13    1
                    D3     W      1.00       135       27        51.33    2
                    D4     W      1.00       172       18        38.38    3
                    D5     R      0.99       156       20        31.79    3
Subject 4    Left   D2     W      1.00       213       13        21.11    4
                    D3     W      1.00       190       26        32.45    4
                    D4     W      1.00       218       6         14.52    4
                    D5     L      0.51       171       1         7.40     5
             Right  D2     R      0.98       140       31        47.85    3
                    D3     W      0.89       202       39        33.44    4
                    D4     W      0.96       238       15        15.39    4
                    D5     R      0.94       180       4         6.83     5
Subject 5    Left   D2     L      0.99       159       26        37.83    3
                    D3     L      0.99       164       20        35.64    3
                    D4     L      0.93       166       5         30.38    3
                    D5     L      0.99       138       5         29.32    3
             Right  D2     R      1.00       162       24        37.83    3
                    D3     R      0.99       164       36        38.23    3
                    D4     W      0.74       177       27        34.28    3
                    D5     L      0.71       143       16        20.47    4
Subject 6    Left   D2     W      1.00       91        38        61.11    1
                    D3     L      0.99       101       21        63.19    1
                    D4     L      1.00       110       9         55.50    2
                    D5     L      0.99       95        11        58.00    1
             Right  D2     W      1.00       146       52        50.05    1
                    D3     R      1.00       114       46        56.38    2
                    D4     W      1.00       142       26        55.30    1
                    D5     R      0.99       93        42        59.05    1
Subject 7    Left   D2     L      0.98       100       8         35.38    3
                    D3     L      0.63       134       1         20.92    3
                    D4     L      0.82       128       0         14.50    4
                    D5     L      0.95       118       1         16.60    4
             Right  D2     R      0.65       108       38        60.23    1
                    D3     R      1.00       148       17        44.30    3
                    D4     R      1.00       116       12        51.49    1
                    D5     R      0.99       102       2         27.04    3
Subject 8    Left   D2     W      1.00       112       15        49.52    1
                    D3     L      0.99       118       8         43.00    3
                    D4     W      1.00       143       9         45.23    3
                    D5     L      0.83       121       2         24.90    3
             Right  D2     W      0.73       205       33        34.85    3
                    D3     R      1.00       166       14        45.64    3
                    D4     W      1.00       129       4         44.77    3
                    D5     R      0.82       152       0         24.24    4
Subject 9    Left   D2     L      1.00       102       40        51.48    2
                    D3     L      0.98       150       3         34.68    3
                    D4     W      0.92       163       1         18.22    4
                    D5     W      0.64       183       2         13.10    4
             Right  D2     W      1.00       123       46        44.24    2
                    D3     W      1.00       152       12        32.20    3
                    D4     W      1.00       249       5         18.40    4
                    D5     R      0.80       187       0         13.25    4
Subject 10   Left   D2     T      0.54       150       38        57.57    1
                    D3     L      1.00       116       38        51.61    1
                    D4     R      0.94       119       25        48.40    1
                    D5     R      0.91       107       19        30.00    3
             Right  D2     R      0.95       97        34        50.60    2
                    D3     R      0.99       151       43        43.57    3
                    D4     W      1.00       135       21        41.48    3
                    D5     W      1.00       112       17        33.00    3
Subject 11   Left   D2     W      1.00       121       34        45.14    2
                    D3     L      0.98       130       18        4.79     1
                    D4     W      1.00       143       7         36.44    3
                    D5     L      0.98       106       7         30.99    3
             Right  D2     W      1.00       156       27        39.72    3
                    D3     W      1.00       190       35        40.44    3
                    D4     W      1.00       160       10        34.19    3
                    D5     R      0.95       227       9         16.33    4
Subject 12   Left   D2     A      0.64       129       63        56.30    1
                    D3     A      0.52       190       72        48.90    1
                    D4     L      0.59       181       42        44.81    3
                    D5     W      0.49       171       16        24.40    4
             Right  D2     A      0.56       124       45        47.69    2
                    D3     A      0.56       97        39        58.13    1
                    D4     W      0.68       139       42        47.69    2
                    D5     R      0.99       136       9         20.65    4
Subject 13   Left   D2     W      0.73       132       53        52.77    1
                    D3     L      1.00       144       44        52.26    1
                    D4     L      0.96       179       63        55.15    1
                    D5     W      0.77       125       47        47.32    2
             Right  D2     W      1.00       141       69        62.36    1
                    D3     R      0.51       160       65        60.17    1
                    D4     W      1.00       172       67        52.05    1
                    D5     R      1.00       125       40        45.18    2
Subject 14   Left   D2     W      1.00       146       55        56.46    1
                    D3     L      1.00       136       35        61.34    1
                    D4     W      1.00       238       20        37.55    3
                    D5     L      0.98       144       5         38.02    3
             Right  D2     W      0.57       143       72        58.75    1
                    D3     R      1.00       161       67        56.32    1
                    D4     W      1.00       179       37        49.72    3
                    D5     R      0.88       129       10        30.00    3
After running all the software and obtaining the results, we can plot the following figures for analysis. To plot these figures, we first define some basic rules. Each fingerprint image class and the corresponding confidence number are generated by the classification routine. The minutiae detected by the minutiae detection routine are classified into different categories depending on the minutiae quality number, with min05, min06, min075, min08, min09, min095 and min097 representing minutiae with quality greater than 0.5, 0.6, 0.75, 0.8, 0.9, 0.95 and 0.97, respectively. Minutiae with quality less than 0.5 are not reliable and, hence, are not considered in the analysis. The percentage of local foreground blocks in quality zones (qz) 1, 2, 3 and 4 is calculated using the image quality routine, with 4 representing the best and 1 the worst quality zone. In addition, the image quality routine also generates an overall quality number for the image, with 1, 2, 3, 4, or 5 representing best, good, average, poor and unusable quality fingerprints, respectively.
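The threshold categories used in the following plots can be computed from a list of per-minutia quality values, as in this small sketch (the names min05 ... min097 follow the text; the function itself is our illustration):

```python
def minutiae_categories(qualities):
    """Count minutiae above each reliability threshold used in the plots.
    Minutiae with quality below 0.5 fall outside every category and are
    effectively discarded as unreliable."""
    thresholds = {"min05": 0.5, "min06": 0.6, "min075": 0.75,
                  "min08": 0.8, "min09": 0.9, "min095": 0.95, "min097": 0.97}
    return {name: sum(1 for q in qualities if q > t)
            for name, t in thresholds.items()}
```

Note that the categories are cumulative: every minutia counted in min097 is also counted in min05 through min095.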
[Figure: bar plot, "2D ink"; x-axis: different local quality zones; y-axis: % of blocks in local quality zones; series qn1-qn5.]
Fig 2.24 Variation of the number of foreground blocks in quality zones 1-4 with respect to the
overall quality number for 2D rolled inked fingerprints. The number of blocks in quality
zone 4 decreases, while that in quality zone 2 increases with decrease in overall quality
from best to unusable.
Figure 2.24 illustrates the relationship between the average percentage of local foreground blocks in quality zones (qz) 1-4 and the overall quality numbers (qn) 1-5. It is observed that as the overall quality of the fingerprints decreases from qn1 (best) to qn5 (unusable), the number of blocks in qz4 also decreases. The number of blocks in qz4 for best and good quality images is statistically comparable. However, all other pair-wise comparisons between the number of qz4 blocks in different overall quality categories yield a statistically significant difference.
Figure 2.24 also shows that, as the overall quality decreases from qn1 (best) to qn5 (unusable), the number of blocks in qz2 increases. Statistical pair-wise comparisons show that the qz2 values for best and good quality fingerprints are comparable, while all the other values are significantly different from each other, with p values less than 0.001. The number of blocks in qz1 and qz3 for different quality numbers was either statistically comparable or showed no significant difference in any pair-wise comparison. The number of blocks in qz2 and qz4 show similar trends.
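The thesis does not state which statistical test produced these p values; as one self-contained possibility, a two-sided permutation test on the difference of group means can be sketched as follows (our own stand-in, not the analysis actually used):

```python
import random

def permutation_p_value(a, b, n_iter=2000, seed=0):
    """Two-sided permutation test on the difference of group means.
    Returns the fraction of random relabelings whose mean difference is
    at least as extreme as the observed one (an approximate p value)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter
```

Feeding in, say, the qz2 percentages of two quality categories gives a small p value when the groups are well separated and a value near 1 when they are statistically comparable.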
[Figure: bar plot, "2D ink"; x-axis: different local quality zones; y-axis: number of minutiae with different quality; series qn1-qn5.]
Fig 2.25 Variation of the number of minutiae with quality greater than 0.5, 0.6, 0.75, 0.8, 0.9,
0.95, 0.97 with respect to the overall quality number for 2D rolled inked fingerprints.
The number of minutiae with quality greater than 0.5, 0.6, 0.75 decreases with a
decrease in overall quality from best to unusable.
Figure 2.25 shows the relationship between the average number of minutiae in different
reliability categories from min05 - min097 and the overall quality numbers from qn1 (best)
- qn5 (worst). It is observed that number of minutiae in min05, min06, and min075 follow
similar trends and progressively decrease as the quality decreases from qn1 to qn5.
Statistical analysis shows a significant difference for all pair-wise comparisons in these
three categories with the p values being less than 0.01. However, the number of minutiae
falling under min08 and min097 do not follow a set trend against the overall quality
numbers and also do not yield any statistically significant difference in the values when
compared pair-wise. However, min075 serves as a critical point that demarcates the trends in minutiae reliability versus the quality number. All the categories up to min075 (i.e. min05, min06, min075) show similar trends, i.e. a decrease in the number of minutiae with decreasing overall quality, whereas those from min08 onward (i.e. min08, min09, min095, and min097) show no apparent trend.
[Figure: scatter plot, "2D ink"; x-axis: number of minutiae with quality greater than 0.75; y-axis: number of blocks in quality zone 4.]
Fig 2.26 Scatter plot between number of blocks in quality zone 4 and number of minutiae with
quality greater than 0.75 for 2D rolled inked fingerprints. The plot shows a strong
correlation between the two parameters.
[Figure: bar plot, "2D ink"; x-axis: overall quality number; y-axis: classification confidence number; series qn1-qn5.]
Fig 2.27 Plot of classification confidence number generated by the PCASYS system with respect to
the overall quality number for 2D rolled inked fingerprints.
Figure 2.26 shows a scatter plot between the number of qz4 blocks and the number of minutiae in min075. A close relationship is observed between the two parameters: when the number of blocks in quality zone 4 increases, so does the number of minutiae with quality greater than 0.75. Jointly, these two quantities can therefore be used for evaluation purposes. Figure 2.27 shows a plot of the confidence numbers generated by the classification system against the quality numbers qn1-qn5. The confidence numbers show no statistically significant difference for the best, good, and average quality fingerprints, but they decrease significantly as the quality of the fingerprint further deteriorates. The important results for all the fingerprints from this analysis are given in Table 2.1.
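The correlation read off the scatter plot can be quantified with a Pearson coefficient; a minimal sketch, taking the per-print qz4 block counts and min075 counts as two lists:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists,
    e.g. per-print qz4 block counts vs. counts of minutiae with
    reliability > 0.75. Returns a value in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A value near +1 corresponds to the strong linear trend visible in the scatter plots.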
2.4.3 3D Unraveled Fingerprint Experimental Results and Analysis
Now that we have validated our hypothesis over 2D fingerprint images, we will apply the
hypothesis on the 2D fingerprint images generated after unraveling the 3D fingerprint scans.
The 800 × 800 unraveled fingerprint images are run through the NIST PCASYS,
MINDTCT, and NFIQ software systems generating the fingerprint class and the confidence
number, minutiae classified into five different categories as min05, min06, min075, min08,
min09, min095 and min097, and the number of blocks in quality zone 1, 2, 3, and 4 (qz1,
qz2, qz3, qz4) along with an overall quality for the images. All the results are shown in the
Appendix.
Table 2.2 Results of running PCASYS, MINDTCT and NFIQ on the 3D unraveled fingerprint images
of Subjects 0 through 14.
Subject      Hand   Digit  Class  Conf. No.  Tol. Min  Rel. Min  Q. Zone  Q. No.
Subject 0    Left   D2     W      0.75       159       49        44.07    4
                    D3     R      0.82       175       39        34.60    2
                    D4     L      1.00       159       42        39.62    3
                    D5     L      0.99       158       45        33.10    4
             Right  D2     W      0.90       139       49        46.74    3
                    D3     R      0.99       145       50        54.24    3
                    D4     R      0.99       154       58        52.68    4
                    D5     R      1.00       144       35        33.72    4
Subject 1    Left   D2     L      0.99       128       63        61.62    1
                    D3     L      1.00       148       63        53.35    2
                    D4     W      0.91       162       75        57.00    1
                    D5     W      0.84       149       82        64.67    1
             Right  D2     W      0.97       117       56        54.22    1
                    D3     W      0.71       130       59        57.55    1
                    D4     W      0.69       171       66        55.17    1
                    D5     W      0.65       157       57        46.50    2
Subject 2    Left   D2     R      0.59       254       33        13.59    1
                    D3     L      0.95       268       18        10.42    3
                    D4     L      0.65       216       19        10.86    5
                    D5     L      0.82       150       27        19.66    4
             Right  D2     R      0.34       261       27        15.09    3
                    D3     L      0.66       200       3         1.90     4
                    D4     W      0.67       256       20        11.18    5
                    D5     R      0.54       211       9         7.74     4
Subject 3    Left   D2     W      0.84       187       27        33.45    3
                    D3     W      1.00       162       38        39.53    3
                    D4     W      0.98       159       30        34.72    2
                    D5     L      1.00       112       36        46.37    4
             Right  D2     W      0.81       207       37        33.73    3
                    D3     W      1.00       171       39        37.13    4
                    D4     W      0.57       191       53        35.48    1
                    D5     R      0.66       158       49        42.14    2
Subject 4    Left   D2     W      0.49       161       41        33.63    4
                    D3     W      0.89       257       89        37.30    4
                    D4     W      0.34       201       30        20.70    2
                    D5     R      0.42       174       20        18.16    5
             Right  D2     R      0.96       139       52        53.05    1
                    D3     W      0.74       176       39        31.03    3
                    D4     W      1.00       211       30        26.08    2
                    D5     R      0.85       150       25        26.38    5
Subject 5    Left   D2     L      0.59       206       53        34.20    4
                    D3     L      0.98       193       73        46.71    4
                    D4     R      0.45       227       51        39.69    4
                    D5     L      0.89       189       47        29.96    3
             Right  D2     R      0.98       159       58        48.16    3
                    D3     R      0.92       197       70        47.47    3
                    D4     W      0.91       176       63        48.28    4
                    D5     W      0.35       172       58        40.01    4
Subject 6    Left   D2     W      1.00       130       30        37.55    2
                    D3     L      0.98       138       32        32.84    3
                    D4     L      0.96       125       45        53.95    2
                    D5     L      0.98       108       40        49.72    1
             Right  D2     W      0.63       145       47        42.91    3
                    D3     R      0.92       123       40        55.29    3
                    D4     W      1.00       136       37        52.67    4
                    D5     R      0.93       101       39        49.30    2
Subject 7    Left   D2     L      0.69       99        30        36.64    2
                    D3     W      0.54       96        19        32.97    1
                    D4     L      0.91       130       32        37.15    1
                    D5     L      0.93       93        16        26.54    3
             Right  D2     R      0.69       96        30        46.07    1
                    D3     R      0.99       121       43        45.95    1
                    D4     R      0.97       83        35        54.04    2
                    D5     R      0.93       107       22        33.79    2
Subject 8    Left   D2     W      1.00       150       39        29.84    3
                    D3     L      0.82       141       18        22.50    3
                    D4     W      1.00       167       31        24.26    4
                    D5     L      0.97       140       47        41.40    3
             Right  D2     R      0.90       180       30        25.88    5
                    D3     R      0.99       159       37        35.32    4
                    D4     W      0.90       179       41        33.38    3
                    D5     R      0.93       136       47        43.83    2
Subject 9    Left   D2     L      0.62       192       39        35.76    3
                    D3     L      0.89       162       27        36.12    3
                    D4     R      0.34       220       21        15.76    3
                    D5     L      0.87       130       8         16.52    4
             Right  D2     W      1.00       179       29        30.43    4
                    D3     R      0.51       248       36        19.03    4
                    D4     W      0.86       279       21        15.95    3
                    D5     L      0.35       185       15        13.88    5
Subject 10   Left   D2     W      0.72       123       36        53.96    1
                    D3     L      0.99       139       44        41.21    2
                    D4     R      0.95       158       31        30.88    3
                    D5     R      0.69       150       16        19.56    5
             Right  D2     R      0.84       107       49        55.33    1
                    D3     R      1.00       117       38        51.30    1
                    D4     W      0.96       173       43        33.94    3
                    D5     L      0.57       140       19        24.03    4
Subject 11   Left   D2     W      0.61       166       34        31.21    2
                    D3     L      0.89       194       54        33.19    3
                    D4     W      1.00       191       21        17.55    4
                    D5     L      0.98       178       52        37.18    3
             Right  D2     W      1.00       121       35        37.55    3
                    D3     W      0.85       257       42        26.36    4
                    D4     W      1.00       149       30        24.65    4
                    D5     L      0.44       181       36        22.36    5
Subject 12   Left   D2     A      0.41       104       41        40.57    2
                    D3     W      0.94       115       42        51.74    3
                    D4     L      0.63       117       52        52.85    1
                    D5     W      0.65       87        28        46.34    2
             Right  D2     A      0.94       120       46        52.01    1
                    D3     W      0.63       98        39        59.82    1
                    D4     W      0.98       118       50        57.98    1
                    D5     R      0.90       85        33        44.95    3
Subject 13   Left   D2     W      0.83       169       53        44.54    3
                    D3     L      0.99       159       48        48.16    2
                    D4     L      0.98       165       42        32.88    3
                    D5     W      0.99       153       32        34.33    4
             Right  D2     W      1.00       149       45        46.53    1
                    D3     L      0.88       185       57        44.79    5
                    D4     W      1.00       210       54        36.85    4
                    D5     R      0.62       134       46        46.83    2
Subject 14   Left   D2     W      1.00       136       40        41.45    4
                    D3     L      0.96       184       51        36.67    4
                    D4     W      0.84       179       28        25.44    3
                    D5     R      0.55       162       32        29.69    4
             Right  D2     W      0.60       174       37        38.97    4
                    D3     R      0.99       158       52        40.34    5
                    D4     W      0.76       209       30        26.21    4
                    D5     R      0.99       128       43        43.97    4
[Figure: bar plot, "Fit sphere algorithm"; x-axis: different local quality zones; y-axis: % of blocks in local quality zones; series qn1-qn5.]
Fig 2.28 Variation of the number of foreground blocks in quality zones 1-4 with respect to the
overall quality number for 2D unraveled fingerprints obtained from 3D scans. The
number of blocks in quality zone 4 decreases, while that in quality zone 2 increases
with decrease in overall quality from best to unusable. The distribution is much the
same as 2D inked.
The relationship between the average percentage of local foreground blocks in quality
zones (qz) 1-4 and the overall quality numbers qn1-qn5 is illustrated in Fig. 2.28. The
number of blocks in qz4 decreases with the decrease in the overall quality of the
fingerprints from qn1 to qn5. The number of blocks in qz4 in best and good quality images
is statistically comparable. Also, the values for number of qz4 blocks in good and average
quality fingerprints were statistically comparable. However, all other pair-wise
comparisons between the numbers of qz4 blocks in different overall quality categories
yielded a statistically significant difference. It can also be observed from Fig. 2.28 that,
similar to the 2D results, the number of blocks in qz2 also increases as the overall quality
decreases from qn1 (best) to qn5 (unusable).
[Figure: bar plot, "Fit sphere algorithm"; x-axis: different local quality zones; y-axis: number of minutiae with different quality; series qn1-qn5.]
Fig 2.29 Variation of the number of minutiae with quality greater than 0.5, 0.6, 0.75, 0.8, 0.9, 0.95 and 0.97 with respect to the overall quality number for 2D unraveled fingerprints obtained from 3D scans. The number of minutiae decreases over the first four quality numbers and increases at the last, similar to the 2D inked fingerprints.
Figure 2.29 shows the relationship between the average number of minutiae in the different reliability categories from min05 to min097 and the overall quality numbers from qn1 (best) to qn5 (worst). The number of minutiae in all categories follows a similar trend, progressively decreasing as the quality decreases from qn1 to qn4 and increasing at qn5. Statistically, the values for qn2 and qn3 were comparable to each other but significantly lower than those for qn1. The number of minutiae showed a trend similar to that observed for the 2D fingerprints. The numbers of minutiae falling under min08 and min09 did not yield any statistically significant difference when compared pair-wise.
[Figure: scatter plot, "Fit sphere algorithm"; x-axis: number of minutiae with quality greater than 0.75; y-axis: number of blocks in quality zone 4.]
Fig 2.30 Scatter plot between number of blocks in quality zone 4 and number of minutiae with
quality greater than 0.75 for 2D unraveled fingerprints obtained from 3D scans. The plot
shows a strong correlation between the two parameters, which is much the same as 2D
inked fingerprint.
[Figure: bar plot, "Fit Sphere algorithm"; x-axis: overall quality number; y-axis: classification confidence number; series qn1-qn5.]
Fig 2.31 Plot of classification confidence number generated by the PCASYS system with respect to
the overall quality number for 2D unraveled fingerprints obtained from 3D scans, which is
much the same as 2D inked.
Figure 2.30 shows a scatter plot between the number of qz4 blocks and the number of minutiae in min075. A close relationship is observed between the two parameters: as the number of blocks in quality zone 4 increases, so does the number of minutiae with quality greater than 0.75, confirming the trend also observed for the 2D fingerprints. Moreover, the data points in Fig. 2.30 are more linearly distributed than in Fig. 2.26, showing a stronger correlation between the two parameters in 3D than in 2D. Figure 2.31 shows a plot of the confidence numbers generated by the classification system against the quality numbers qn1-qn5. The statistical results are comparable for qn1, qn2, and qn3. However, the values for qn4 and qn5 are lower than all the others. The important results for all the fingerprints from this analysis are given in Table 2.2. Comparing the results for the 2D inked fingerprints with those for the 2D unraveled fingerprints (Figs. 2.24 to 2.31), we can conclude that: (1) the variations of the number of foreground blocks in quality zones 1-4 with respect to the overall quality number are similar for both 2D inked and 2D unraveled fingerprints; (2) the variations of the number of minutiae with quality greater than 0.5, 0.6, 0.75, 0.8, 0.9, 0.95 and 0.97 with respect to the overall quality number are similar for both sets; (3) the scatter plots between the number of blocks in quality zone 4 and the number of reliable minutiae show a strong correlation for both sets; and (4) the classification confidence numbers generated by PCASYS are distributed similarly with respect to the overall quality number for both sets.
2.4.4 Comparison between 2D Inked and 3D Unraveled Fingerprints
After evaluating the 3D fingerprint scans, we compare the 2D rolled inked fingerprints and the 3D unraveled fingerprint images, where the unraveling program is the best fit sphere algorithm. We use three parameters: the number of foreground blocks in quality zone 4 (qz4), the number of minutiae with quality greater than 0.75 (min075), and the confidence number.
[Figure: bar plot; x-axis: overall quality number (qn1-qn5); y-axis: % of blocks with quality zone 4; series 3D, 2D.]
Fig 2.32 Number of blocks in quality zones 4 with respect to the overall quality number, which
shows the 2D unraveled fingerprints have a higher percentage of quality zone 4 than
the 2D inked fingerprints.
First of all, Figure 2.32 shows that the 3D unraveled fingerprints perform better in terms of the percentage of blocks with quality number 4. The figure also shows the variation of qz4 blocks with the quality number for both 2D and 3D fingerprints. Statistical analysis shows a significant difference between the values for the 2D and 3D images for qn3, qn4, and qn5, whereas the values for qn1 and qn2 are comparable. However, both groups follow a similar trend: the number of quality zone 4 blocks decreases as the quality decreases from best to unusable.
[Figure: bar plot; x-axis: overall quality number (qn1-qn5); y-axis: number of minutiae with quality > 75%; series 3D, 2D.]
Fig 2.33 Number of minutiae with quality greater than 0.75, with respect to the overall quality number, which shows that the 2D unraveled fingerprints have more minutiae with quality greater than 0.75.
Similarly, Fig. 2.33 shows that the 3D unraveled fingerprints perform better, since they yield more minutiae with very high reliability (greater than 0.75). The figure shows the variation of the number of minutiae with quality greater than min075 against the quality numbers for both 2D and 3D fingerprints. Both groups again follow the same trend, with a significant difference between the values for qn2, qn3, qn4, and qn5. The values for qn1 were not found to be significantly different.
[Figure: bar plot; x-axis: overall quality number (qn1-qn5); y-axis: classification confidence number; series 3D, 2D.]
Fig 2.34 Classification confidence number with respect to the overall quality number for 2-D rolled inked fingerprints and 2-D unraveled fingerprints obtained from 3-D scans.
The plot of the classification confidence number against the quality number is shown in Fig. 2.34. From the figure we can see that, for qn1, qn2, qn3 and qn4, the 3D unraveled fingerprints perform better than the 2D inked fingerprints; however, for qn5, the 2D inked prints appear better than the 3D unraveled fingerprints. Again, this figure shows similar trends in both groups.
Fig. 2.35 is similar to Fig. 2.33 but focuses on the number of minutiae with quality greater than 0.8. It also shows that the 3D unraveled fingerprints perform better, since they yield more minutiae with very high reliability (greater than 0.8). The figure shows the variation of the number of minutiae with quality greater than min08 against the quality numbers for both 2D and 3D fingerprints. Both groups again follow the same kind of distribution, and the distribution for the 3D unraveled fingerprints is shifted toward significantly higher counts.
[Figure: probability distributions; x-axis: number of minutiae (with quality > 80%); y-axis: probability; series 3D, 2D.]
Fig 2.35 Distributions of minutiae with quality greater than 0.8, for 2-D rolled inked fingerprints
and 2-D unraveled fingerprints obtained from 3-D scans, which shows the 2-D
unraveled fingerprints have a better result.
The scatter plots between the number of quality zone 4 blocks and the number of minutiae with quality greater than 0.75 and 0.8 show a linear relationship between the two metrics: as the number of quality zone 4 blocks increases, the number of reliable minutiae also increases. These results support our hypothesis that the quantitative metrics used for determining the quality of traditional 2D fingerprint images can also be applied reliably to the 2D unraveled images obtained from the 3D scans. The quality of the acquired scans reflects strongly on scanner performance, as all the other fingerprint recognition components depend heavily on good quality images. If the quality of an acquired scan is poor, it is of little use for identification and authentication systems, rendering the scanner less viable. Hence it is important to develop and identify metrics that measure the quality of the acquired scans and quantify scanner performance. The number of image blocks in local quality zone 4, the number of minutiae with reliability greater than 0.75, the probability distribution of minutiae with quality greater than 0.80, and the overall image quality generated by the NIST image quality software are some such metrics that can be used efficiently for quantifying scanner performance. From Figs. 2.32 to 2.35, we can see that although the classification confidence numbers of the 2D inked fingerprints are higher than those of the 2D unraveled fingerprints, they are very close. For the high-quality minutiae, the percentage of quality zone 4 blocks, and the minutiae distribution with respect to the quality number, the 2D unraveled fingerprints perform better than the 2D inked fingerprints. Thus, the quality of the 2D unraveled fingerprints is higher than that of the 2D inked fingerprints, as we proposed at the beginning of this chapter. Since higher quality results in better matching, which we show in the following section, and since the NIST matching system is minutiae based, we can conclude that the 2D unraveled fingerprints will match better than the 2D inked fingerprints. In [2], the same analysis was made for 2D inked fingerprints and 3D fingerprints unraveled by the spring algorithm; there, the results showed that the 2D inked fingerprints performed better than the 3D fingerprints unraveled by the spring algorithm. Thus, we can safely conclude that the best fit sphere unraveling algorithm yields better fingerprint quality than the spring unraveling program in [2]. In Chapter 4, we will show that higher fingerprint quality leads to a higher likelihood of good matching results, which is the key purpose of fingerprint techniques.
Chapter 3
Fingerprint Image Software

Biometric technologies are fast becoming the foundation of highly secure identification and
personal verification solutions. In addition to supporting homeland security and preventing
ID fraud, biometric-based systems are able to provide for confidential financial transactions
and personal data privacy [59, 60, 61]. Therefore to fully realize the benefits of biometric
technologies, comprehensive standards are necessary to ensure that information technology
systems and applications are interoperable, reliable, and secure.
NIST has a long history of involvement in biometric research and biometric standards
development. Over the years, NIST has conducted fingerprint research, developed
fingerprint identification technology and data exchange standards, developed methods for
measuring the quality and performance of fingerprint scanners and imaging systems, and
produced databases containing fingerprint images for public distribution. To carry out the
legislated requirements of Congress and to begin processing and analyzing the large
repositories of data collected by the various government agencies, NIST developed a
versatile open laboratory system called the Verification Test Bed (VTB) for conducting
applied fingerprint research [48].
The VTB serves as a minimum standard baseline for fingerprint matcher performance and
allows comparative analysis of different types of fingerprint data. The NIST Fingerprint
Image Software (NFIS) provides many of the fingerprint capabilities required by the VTB
[49]. NFIS is a public domain source code distribution organized into seven major
packages (NFSEG, PCASYS, MINDTCT, NFIQ, BOZORTH3, AN2K, and IMGTOOLS):
• PCASYS is the NIST Pattern Classification System, which uses a neural network based
fingerprint pattern classification system to automatically categorize a fingerprint image as
an arch, left or right loop, tented arch, or whorl. Resulting binning of images helps in
reducing the number of searches required for matching a fingerprint to a print on file.
• MINDTCT is a minutiae detection system that automatically locates and records ridge
endings and bifurcations in a fingerprint image. It also assesses minutiae quality based on
local image conditions and records the orientation of each minutia in addition to its location and type.
• NFIQ is a fingerprint image quality algorithm which analyzes the overall quality of the
image and returns a quality number ranging from 1 for the best quality to 5 for the worst
quality.
• BOZORTH3 is the minutiae based fingerprint matching system and uses minutiae
detected by the MINDTCT system for matching purposes. It can analyze two fingers at a
time (one-to-one matching) for verification or run in a batch mode comparing a single
finger against a large database of fingerprints (one-to-many matching) for identification
purposes.
The primary focus of this work is on the NIST minutiae detection system MINDTCT and
the quality system NFIQ, which we use to extract the minutiae information and the quality
of the fingerprint; we then apply the BOZORTH3 system to match the fingerprints. For
more information about the other packages, please refer to [50]. These filters will be used
to define quantitative metrics for evaluating the performance of the 3D fingerprint scanner
and the relationship between the quality of the fingerprint and the matching scores.
3.1 NIST Minutiae Detection (MINDTCT) System
Traditionally, two fingerprints have been compared using discrete features called minutiae.
These features include points in a finger’s friction skin where ridges end (called a ridge
ending) or split (called a ridge bifurcation). Typically, there are on the order of 100
minutiae on a ten-print fingerprint card, i.e., per ten fingers. In order to search and match
fingerprints, the coordinate location and the orientation of the ridge at each minutia point
are recorded. MINDTCT takes a fingerprint image and locates all minutiae in the image,
assigning to each minutia point its location, orientation, type, and quality. The algorithms
and parameters have been developed and set for images scanned at 500 dpi and quantized
to 256 levels of gray. A functional flow-chart of the entire process is given below:
Figure 3.1 Flow-chart of the minutiae detection process
On completion, MINDTCT outputs a number of text files. These output files include
text files for the direction map, the low contrast map, the low flow map, the high curvature
map, and the quality map. Along with these feature maps, two text files containing the
minutiae details are also generated. These minutiae text files contain information regarding
the minutiae location and the various attributes like minutiae direction and reliability. A
text file is also generated listing the minutiae points, number of nearest neighbors
corresponding to the minutiae, and the number of in-between ridges. For more details about
the MINDTCT system, please refer to [2] and [50].
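As a sketch of how these minutiae text files could be consumed downstream, the following assumes each line holds "x y theta quality" columns; the exact NBIS output layout should be checked against [50], so treat the field order here as an assumption:

```python
# Hypothetical parser for a MINDTCT-style minutiae text file. Each line is
# assumed to hold "x y theta quality"; the real NBIS column layout may differ.
from typing import List, NamedTuple

class Minutia(NamedTuple):
    x: int
    y: int
    theta: int    # ridge orientation in degrees
    quality: int  # reliability score assigned by MINDTCT

def parse_minutiae(text: str) -> List[Minutia]:
    minutiae = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 3:
            continue  # skip blank or malformed lines
        x, y, theta = (int(f) for f in fields[:3])
        quality = int(fields[3]) if len(fields) > 3 else 0
        minutiae.append(Minutia(x, y, theta, quality))
    return minutiae

sample = "100 200 90 55\n150 250 45 80\n"
points = parse_minutiae(sample)
# Keep only minutiae with quality greater than 50, as in Fig 3.2.
good = [m for m in points if m.quality > 50]
```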
Fig 3.2 Minutiae detection result. The detection is based on the binary image. The left image is the unraveled 2-D binary fingerprint; the right image is the corresponding minutiae detection result with quality greater than 50, where the minutiae are marked by small black squares.

3.2 NIST Fingerprint Pattern Classification (PCASYS) System
An AFIS system consists of a system database that contains records of authorized users
allowed to use the system and stores information such as user names, minutiae templates,
or any other information required for authentication purposes. For user authentication, the
database file fingerprint cards are efficiently matched against the incoming search
fingerprint cards. Existing automatic matcher algorithms compare fingerprints based on the
patterns of ridge endings and bifurcations (minutiae). However, the large amount of data
composing fingerprint databases seriously compromises the efficiency of the identification
task, even though the fastest minutiae matching algorithms take only a few tens of
milliseconds per match [8]. The efficiency of the matching process can be greatly increased by
partitioning the file fingerprint cards based on some sort of classification system. Once the
fingerprint class is determined, the search for a matching fingerprint is restricted only to the
class of the search fingerprint, thus improving the identification process efficiency. For
more details about the NIST Fingerprint Pattern Classification (PCASYS) System, please
refer to [2] and [49, 50].
3.3 NIST Fingerprint Image Quality (NFIQ) System
Most of the fingerprint matcher algorithms commonly in use are sensitive to the clarity of
ridges and valleys, measures of the number and quality of minutiae, and the size of the image.
The MINDTCT package automatically detects minutiae, assesses the minutiae quality,
and, in addition, generates an image quality map. Thus for each fingerprint, an 11-dimensional feature vector vi, as listed in Table 3.1, is computed using MINDTCT.
Table 3.1 Feature Vector Description [NIST]

NAME                    DESCRIPTION
foreground              number of blocks that are quality 1 or better
total no. of minutiae   number of total minutiae found in the fingerprint
min05                   number of minutiae that have quality 0.5 or better
min075                  number of minutiae that have quality 0.75 or better
min09                   number of minutiae that have quality 0.9 or better
quality zone 1          percentage of foreground blocks of quality map with quality = 1
quality zone 2          percentage of foreground blocks of quality map with quality = 2
quality zone 3          percentage of foreground blocks of quality map with quality = 3
quality zone 4          percentage of foreground blocks of quality map with quality = 4
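A minimal sketch of how such a feature vector might be assembled, assuming a block quality map with values 0-4 (0 = background) and per-minutia reliabilities in [0, 1]; the actual computation is performed inside the NIST software, so this is illustrative only:

```python
# Illustrative reconstruction of the Table 3.1 features. Inputs are assumed:
# a quality map of blocks valued 0-4 (0 = background) and a list of
# minutiae reliability values in [0, 1], both as produced by MINDTCT.
def feature_vector(quality_map, minutiae_quality):
    foreground = [q for row in quality_map for q in row if q >= 1]
    n_fg = len(foreground)
    features = {
        "foreground": n_fg,
        "total minutiae": len(minutiae_quality),
        "min05": sum(q >= 0.5 for q in minutiae_quality),
        "min075": sum(q >= 0.75 for q in minutiae_quality),
        "min09": sum(q >= 0.9 for q in minutiae_quality),
    }
    # Quality zones: percentage of foreground blocks at each quality level.
    for zone in (1, 2, 3, 4):
        pct = 100.0 * sum(q == zone for q in foreground) / n_fg if n_fg else 0.0
        features["quality zone %d" % zone] = pct
    return features

fv = feature_vector([[0, 1, 4], [4, 2, 3]], [0.3, 0.6, 0.8, 0.95])
```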
Fig. 3.3 Results of running the NFIQ package on the example fingerprints. Each group shows the generated quality map (right) for the corresponding fingerprint image (left). The average quality of each quality map is shown below the images: 3.0056, 3.1151, 3.1090, and 3.2550.

3.4 NIST Fingerprint Matcher (BOZORTH3) System
The BOZORTH3 matching algorithm computes a match score between the minutiae from
any two fingerprints to help determine if they are from the same finger. It is a modified
version of a fingerprint matcher written by Allan S. Bozorth while at the FBI. The early
version of the matching algorithm that NIST used internally was named bozorth98 [51,
52]. The BOZORTH3 matcher is functionally the same as the bozorth98 matcher, but
improvements have been made to remove bugs in the code (specifically memory leaks in
statically defined variables) and to improve the speed of the matcher.
The BOZORTH3 matcher uses only the location (x,y) and orientation (theta) of the
minutia points to match the fingerprints. The matcher is rotation and translation invariant.
The matcher builds separate tables for the fingerprints being matched that define distance
and orientation between minutiae in each fingerprint. These two tables are then compared
for compatibility, and a new table is constructed that stores information showing the
inter-fingerprint compatibility. The inter-fingerprint compatibility table is used to create a match
score by looking at the size and number of compatible minutia clusters [61, 62].
Two key things are important to note regarding this fingerprint matcher:
1. Minutia features are exclusively used and limited to location (x,y) and orientation ‘t’,
represented as {x,y,t}.
2. The algorithm is designed to be rotation and translation invariant.
Thus, the algorithm consists of three major steps:
1. Construct Intra-Fingerprint Minutia Comparison Tables
a. One table for the probe fingerprint and one table for each gallery fingerprint to be
matched against
2. Construct an Inter-Fingerprint Compatibility Table
a. Compare a probe print’s minutia comparison table to a gallery print’s minutia
comparison table and construct a new compatibility table
3. Traverse the Inter-Fingerprint Compatibility Table
a. Traverse and link table entries into clusters
b. Combine compatible clusters and accumulate a match score
3.4.1 Construct Intra-Fingerprint Minutia Comparison Tables
The first step in the Bozorth Matcher is to compute relative measurements from each
minutia in a fingerprint to all other minutiae in the same fingerprint. These relative
measurements are stored in a minutia comparison table and are what provide the
algorithm’s rotation and translation invariance.
Fig 3.4 Intra-fingerprint minutiae comparison.

Figure 3.4 illustrates the inter-minutia measurements that are used. There are two minutiae
shown in this example. Minutia k is in the lower left of the “fingerprint” and is depicted by
the dot representing location (xk,yk) and the arrowed line pointing down and to the right
representing orientation tk. A second minutia j is in the upper right with orientation
pointing up and to the right. To account for relative translational position, the distance dkj is
computed between the two minutia locations. This distance will remain relatively constant
between corresponding points on two different finger impressions regardless of how much
shifting and rotating may exist between the two prints [49].
Making relative rotational measurements is a bit more involved. The objective for each of
the minutiae in the pair-wise comparison is to compute the angle between each minutia’s
orientation and the intervening line between both minutiae. This way, these angles remain
relatively constant to the intervening line regardless of how much the fingerprint is rotated.
In the illustration above, the angle θkj of the intervening line between minutia k and j is
computed by taking the arctangent of the slope of the intervening line. Angles βk and βj are
computed relative to the intervening line as shown by incorporating θkj and each minutia’s
orientation t. It should be noted that the point-wise comparison is conducted on minutia
positions sorted first on x-coordinate, then on y-coordinate, and that all orientations are
limited to the period (-180,180] with 0 pointing horizontal to the right and increasing
degrees proceeding counter clockwise.
For each pair-wise minutia comparison, an entry is made into a comparison table. Each
entry consists of:
{dkj, β1, β2, k, j, θkj}, where
β1 = min(βk,βj) and β2 = max(βk,βj)
(3.1)
so that in the illustration above,
β1 = βk and β2 = βj
(3.2)
Entries are stored in the comparison table in order of increasing distance, and the table is
trimmed at the point at which a maximum distance threshold is reached. Because these
measurements are made between pairs of minutiae, a comparison table must be constructed
for each and every fingerprint to be matched with or against.
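The construction above can be sketched as follows; the trimming threshold and the exact angle-wrapping convention are assumptions for illustration, not the NIST defaults:

```python
import math

def wrap(a):
    """Wrap an angle in degrees into (-180, 180], as specified in the text."""
    a = (a + 180.0) % 360.0 - 180.0
    return 180.0 if a == -180.0 else a

def comparison_table(minutiae, max_dist=125.0):
    """Step 1 sketch: minutiae are (x, y, t) triples with t in degrees,
    0 pointing right and angles increasing counter-clockwise.
    max_dist is an illustrative trimming threshold."""
    pts = sorted(minutiae)  # sort on x-coordinate, then y-coordinate
    table = []
    for k in range(len(pts)):
        for j in range(k + 1, len(pts)):
            xk, yk, tk = pts[k]
            xj, yj, tj = pts[j]
            d = math.hypot(xj - xk, yj - yk)                     # d_kj
            theta = math.degrees(math.atan2(yj - yk, xj - xk))   # theta_kj
            bk = wrap(tk - theta)  # beta_k relative to the intervening line
            bj = wrap(tj - theta)  # beta_j relative to the intervening line
            table.append((d, min(bk, bj), max(bk, bj), k, j, theta))
    table.sort()  # entries kept in order of increasing distance
    return [e for e in table if e[0] <= max_dist]  # trim at the threshold
```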
3.4.2 Construct an Inter-Fingerprint Compatibility Table
The next step in the Bozorth matching algorithm is to take the minutia comparison tables
from two separate fingerprints and look for “compatible” entries between the two tables.
Figure 3.5 depicts two impressions of the same fingerprint with slight differences in both
rotation and scale. Two corresponding minutia points are shown in each fingerprint.
Fig 3.5 Compatible pair-wise minutia measurements between two fingerprints that generate an entry into an inter-fingerprint compatibility table.

The left print represents a probe print in which all its minutiae have been pair-wise
compared with relative measurements stored in minutia comparison table P. The
measurements computed from the particular pair of minutia in this example have been
stored as the mth entry in table P, denoted Pm. The notation of individual values stored in
the table is represented as lookup functions on a given table entry. For example, the index
of the lower left minutia is stored in table entry Pm and is referenced as k (Pm), while the
distance between the two minutiae is also stored in table entry Pm and is referenced as d
(Pm). The right fingerprint represents a gallery print, and uses similar notation, except that
all its pair-wise minutia comparisons have been stored in table G, and the measurements
made on the two corresponding minutia in the gallery print have been stored in table entry
Gn.
The following three tests are conducted to determine if table entries Pm and Gn are
“compatible.” The first test checks to see if the corresponding distances are within a
specified tolerance Td. The last two tests check to see if the relative minutia angles are
within a specified tolerance Tβ. ∆d () and ∆β() are “delta” or difference functions.
∆d(d(Pm), d(Gn)) < Td
∆β(β1(Pm), β1(Gn)) < Tβ
∆β(β2(Pm), β2(Gn)) < Tβ
(3.3)
If the relative distance and minutia angles between the two comparison table entries are
within acceptable tolerance, then the following entry is entered into a compatibility table:
{∆β(θ(Pm), θ(Gn)), k(Pm), j(Pm), k(Gn), j(Gn)}.
(3.4)
A compatibility table entry therefore incorporates two pairs of minutia, one pair from the
probe fingerprint (k (Pm), j(Pm)) and the other from the gallery fingerprint (k(Gn), j(Gn)).
The entry into the compatibility table then indicates that k (Pm) corresponds with k (Gn) and
j(Pm) corresponds with j(Gn). The first term in the table entry, ∆β (θ (Pm), θ (Gn)), is used
later to combine clusters that share a similar amount of global rotation between
“compatible” probe and gallery minutiae.
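The compatibility test (3.3) and the resulting entry (3.4) can be sketched as below; the tolerances Td and Tβ are illustrative values, not the NIST defaults, and table entries use the (d, β1, β2, k, j, θ) layout from Step 1:

```python
def angle_diff(a, b):
    """Delta-beta: smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def compatible(pm, gn, Td=10.0, Tb=11.0):
    """Equation (3.3): pm and gn are (d, b1, b2, k, j, theta) table entries.
    Td and Tb are illustrative tolerances (assumptions)."""
    return (abs(pm[0] - gn[0]) < Td            # distance within tolerance
            and angle_diff(pm[1], gn[1]) < Tb  # beta1 angles compatible
            and angle_diff(pm[2], gn[2]) < Tb) # beta2 angles compatible

def compatibility_entry(pm, gn):
    """Equation (3.4): global rotation between the entries, plus the
    corresponding minutia index pairs from probe and gallery tables."""
    return (angle_diff(pm[5], gn[5]), pm[3], pm[4], gn[3], gn[4])
```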
3.4.3 Traverse the Inter-Fingerprint Compatibility Table
At this point in the process, we have constructed a compatibility table which consists of a
list of compatibility associations, each between two pairs of potentially corresponding minutiae.
These associations represent single links in a compatibility graph. To determine how well
the two fingerprints match each other, a simple goal would be to traverse the compatibility
graph finding the longest path of linked compatibility associations. The match score would
then be the length of the longest path.
Allan Bozorth implemented an algorithm that processes the compatibility table so that
traversals are initiated from various starting points. As traversals are conducted, portions or
clusters of the compatibility graph are created by linking entries in the table. Once the
traversals are complete, “compatible” clusters are combined and the number of linked table
entries across the combined clusters is accumulated to form the match score. The larger the
number of linked compatibility associations, the larger the match score, and the more likely
the two fingerprints are from the same person, same finger.
By default, BOZORTH3 produces one line for each match it computes, containing only the
match score. Ideally, the match score is high if the two sets of input minutiae are from the
same finger of one subject, and low if they're from different fingers. The implementation of
the match table traversal described above is non-exhaustive and therefore does not
guarantee an optimal outcome. The resulting match score roughly (but not precisely)
represents the number of minutiae that can be matched between the two fingerprints. As a
rule of thumb, a match score of greater than 40 usually indicates a true match. For
performance evaluation, see [49, 50, 55]. The match scores from BOZORTH3 are usually,
but not always, identical to those published from "bozorth98." Changes in floating point
computations, one logic correction affecting the match table traversal, and modifications to
avoid illegal array indices are primarily responsible for the scores differing.
By default, only the 150 best-quality minutiae (minutiae quality is determined by the
minutiae extraction algorithm, currently MINDTCT) from each input fingerprint are used
[57, 58]. That should be more than enough, since a finger typically has fewer than 80
minutiae, so a minutiae extractor can be used even if it is overly sensitive and produces
many false minutiae.
In summary, a good quality image running through the PCASYS system will generate a
high confidence level in its corresponding class. The MINDTCT system takes a fingerprint
image and locates all the minutiae in the image, assigning to each minutia point its
location, orientation, type and quality. It also calculates the quality and reliability of the
minutiae detected [54]. All these statistics, the confidence level, the number of reliable
minutiae, local quality measurements and the overall quality number can help us evaluate
the quality of the fingerprints obtained from the 3D scanner and as such the performance of
the 3D scanner, as against the traditional 2D scanner. In Chapter 4 we use these statistics to
evaluate both the traditional 2D rolled inked fingerprint images and the 2D unraveled
images obtained from the 3D scans, and show that better fingerprint quality results in
higher matching performance [50].
Chapter 4
Experiments and Results
In this chapter, we use a fingerprint database different from the one used in chapter 2; it
was created for this purpose at the University of Kentucky and used for the performance
evaluation of the scanner and unraveling system. The database consists of unraveled 2-D
images of corresponding 3-D scans. To obtain the 3D fingerprint scans, each subject was
scanned on a single finger, and for matching purposes that finger was scanned several
times, using the SLI prototype scanner described in [1]. The 3-D scans were post processed
and run through the NIST filters along with their 2D equivalents to get the desired
statistical values. In this chapter, we first apply the NIST matching systems to all 2D
unraveled fingerprints, and we show that if we divide the database into a high quality
group, a median quality group, and a low quality group, the higher quality groups have a
clearly higher probability of better matching performance than the lower quality groups.
The NIST fingerprint matcher BOZORTH3 software is applied to the 3D unraveled
fingerprints. The BOZORTH3 matcher uses only the location (x,y) and orientation (theta)
of the minutia points to match the fingerprints [50]. The matcher is rotation and translation
invariant. It builds separate tables for the fingerprints being matched that define distance
and orientation between minutiae in each fingerprint. These two tables are then compared
for compatibility, and a new table is constructed that stores information showing the
inter-fingerprint compatibility. The inter-fingerprint compatibility table is used to create a
match score by looking at the size and number of compatible minutia clusters. For our
purpose, a new database is created, which has multiple scans of each finger. When two
fingerprints are scans of the same finger, the matching score generally ranges from 30 to 200.
4.1 Matching Result of 3D Unraveled Fingerprints
A fingerprint is a pattern of friction ridges on the surface of a fingertip. Both the inked 2-D
fingerprint and the 2-D fingerprint unraveled from our 3-D scan aim to capture all the
distinguishable patterns and features so that the final matching of fingerprint pairs can be
achieved accurately. BOZORTH3, which we use to match fingerprints, is a minutia based
automatic fingerprint matching algorithm that operates on the minutiae features of the
fingerprints and produces a real valued similarity score, as described in chapter 3. Here, the
similarity scores sii of genuine (i.e. same finger) comparisons are called match scores, and
the similarity scores sij, with i not equal to j, of impostor (i.e. different finger) comparisons
are called non-match scores. Since the basic shape differs between different fingers, e.g.
thumb and index finger, the non-match scores are generally low. To obtain a reasonable
and efficient analysis of our system, the database we use consists only of 2-D unraveled
fingerprints from 30 different index fingers, where for each index finger there are 15
randomly selected scans and correspondingly 15 2-D unraveled fingerprints. So, for a 2-D
unraveled fingerprint, there are 14 match scores sii and 15 × 15 non-match scores sij, i not
equal to j. Let Sm(x) denote a match score and Sn(x) a non-match score. The higher the
similarity score, the higher the likelihood that two 2-D unraveled fingerprints come from
the same finger.
[Histogram plot: x-axis score (0-300), y-axis distribution (0-0.14); curves for match and mis-match scores.]
Fig 4.1 Histogram of match and non-match distributions. All the data is scanned from index
fingers.
Figure 4.1 shows the histogram of match and non-match scores of all the 30 × 15
fingerprints in the data base. There are 3150 match scores and also 3150 non-match scores
[63, 64].
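The count of 3150 genuine comparisons follows directly from the database layout: each of the 30 fingers contributes all unordered pairs of its 15 scans.

```python
from math import comb

fingers, scans = 30, 15
# Each finger contributes C(15, 2) = 105 genuine pairs; 30 x 105 = 3150,
# matching the number of match scores reported above.
genuine_pairs = fingers * comb(scans, 2)
```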
It is common for the match distribution to be wider and to have higher scores than the
non-match distribution. It is also typical for the two distributions to overlap. Overlapping
match and non-match distributions indicate that a given sample x may be matched falsely
if the match score Sm is lower than some non-match score Sn. Although these match and
non-match distributions are the result of the complex non-linear BOZORTH3 algorithm [49, 50,
60], they are also strongly dependent on the 2-D fingerprint data. The better the fingerprint
system is, the better the distributions the algorithm can achieve. Thus, a good fingerprint
system should produce high match scores that are well separated from the non-match
distribution.
Further, let M(Sm) denote the cumulative distribution function (CDF) of the match scores
and N(Sn) the CDF of the non-match scores. The Detection Error Tradeoff (DET)
characteristic is a plot of the false non-match rate,
FNMR = M(Sm),
against the false match rate,
FMR = 1 − N(Sn),
for all values of Sm and Sn. The receiver operating characteristic (ROC) is the most
common statement of the performance of a fingerprint verification system; it is shown in
Figure 4.2.
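These definitions can be sketched as a small routine that computes empirical (FAR, TAR) pairs from lists of match and non-match scores at a swept threshold; the tie-handling convention at the threshold is an assumption:

```python
def rates_at_threshold(match_scores, nonmatch_scores, threshold):
    # FNMR = M(s): fraction of genuine scores falling below the threshold.
    fnmr = sum(s < threshold for s in match_scores) / len(match_scores)
    # FMR = 1 - N(s): fraction of impostor scores at or above the threshold.
    fmr = sum(s >= threshold for s in nonmatch_scores) / len(nonmatch_scores)
    return fmr, 1.0 - fnmr  # (FAR, TAR); TAR = 1 - FNMR

def roc_points(match_scores, nonmatch_scores):
    # Sweep every observed score as an operating threshold.
    thresholds = sorted(set(match_scores) | set(nonmatch_scores))
    return [rates_at_threshold(match_scores, nonmatch_scores, t)
            for t in thresholds]
```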
[ROC plot: x-axis FAR (0-0.2), y-axis TAR (0.65-1).]
Fig 4.2 ROC of overall test data. All the data is scanned from index fingers. When the FAR is
0.01, the TAR is 0.891. And for the FAR 0.1, the TAR is 0.988.
For all the test data, our final evaluation criterion is the ROC curve. To study and evaluate
the performance of a system, we have to set an operating threshold, and the False Accept
Rate (FAR) and True Accept Rate (TAR) values are computed at each operating threshold.
For a commonly specified FAR of 0.1, the TAR for our system is 0.988.
4.2 Relationship between Fingerprint Quality and Matching Score
Through NIST fingerprint system, we have two systems to measure the quality of
fingerprints [2, 48, 49]. The quality of all the unraveled fingerprints reflects highly on our
scanner performance. It also promises a higher performance of the matching results. The
matching score is related to the finger itself, since NIST matching system is minutiae based.
If the finger we are processing has many minutiae and the fingerprint is very clear, a high
79
matching score would be expected. On the other hand, if the finger we are processing has
few minutiae and the fingerprint is not that clear, a low matching score would be resulted in.
Thus, for different fingers, the matching scores would be varied, even if the qualities of the
fingerprints are the same. However, generally speaking, a higher quality of the fingerprint
will result in a higher matching score. As we showed in the finial part of chapter 2, a higher
quality of the fingerprint will give us more minutiae with high reliabilities. Thus, the
minutiae based NIST matching system will give us a higher matching score.
In this section, we present the relationship between the quality of the fingerprint and the
matching score. Since the NIST software has two systems to measure the quality of the
fingerprint, the overall quality number and the local quality, we discuss both the
relationship between the matching score and the overall quality number and the
relationship between the matching score and the local quality of the fingerprint.
First, we look into the relationship between the matching scores and the local quality of the
fingerprint. If we use the average value of the local quality of the fingerprint as the
judgment criterion, we can divide the whole fingerprint database into groups. As explained
in [2], the local quality is defined from 0 to 4, where 0 represents the background, and from
1 to 4, the higher the score, the better the quality of that area. The size of the area we are
using is 8×8. So, from the definition of the local quality, unlike the overall quality number,
a higher average value of the local quality indicates a higher quality fingerprint.
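A sketch of this grouping, using the mean of the foreground local quality values (background 0 excluded) and splitting at tertiles; the exact cut points used for the groups are not stated in the text, so the tertile split is an assumption:

```python
def average_local_quality(quality_map):
    """Mean of the foreground local quality values (1-4); 0 = background
    and is excluded, following the definition in [2]."""
    fg = [q for row in quality_map for q in row if q > 0]
    return sum(fg) / len(fg) if fg else 0.0

def split_by_local_quality(scans):
    """scans: list of (name, quality_map) pairs. Returns low / median /
    high groups by ascending average local quality (assumed tertile split)."""
    ranked = sorted(scans, key=lambda s: average_local_quality(s[1]))
    n = len(ranked)
    return {"low": ranked[: n // 3],
            "median": ranked[n // 3: 2 * n // 3],
            "high": ranked[2 * n // 3:]}

groups = split_by_local_quality([
    ("a", [[1, 1]]),   # average local quality 1.0
    ("b", [[4, 4]]),   # average local quality 4.0
    ("c", [[0, 2, 3]]) # background excluded: average 2.5
])
```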
[Histogram plot: x-axis matching score (0-300), y-axis distribution (0-0.025); curves for high, median, and low quality groups.]
Fig. 4.3 Distribution of matching scores, when matched fingerprints are from the same finger.
As with the overall quality number criterion, the database is first divided into high,
median, and low local quality groups, where the high quality group has a higher average
local quality and the low quality group a lower average local quality. We match each high
local quality fingerprint against the other high local quality fingerprints, and likewise
within the other groups. The figure above shows the result when the two input fingerprints
are from the same finger. The mean value of the matching scores of the low local quality
fingerprints is 110.59, the mean value of the median local quality group is 68.46, and the
mean value of the high local quality group is 73.02. As shown, the distribution of the
matching scores from high local quality fingerprints has higher scores than the distribution
of the low local quality ones. A fingerprint with high local quality, as defined by the NIST
system, generally has areas that are less blurred than those with lower local quality scores.
This result means that when we try to identify a fingerprint, the high local quality
fingerprint will give us higher reliability. In other words, a higher local quality fingerprint
will have better matching performance.
[Histogram plot: x-axis mis-matching score (0-50), y-axis distribution (0-0.1); curves for high, median, and low quality groups.]
Fig. 4.4 Distribution of matching scores, when matched fingerprints are from different fingers.
Again, we analyze the distributions of matching scores when we match the groups of
fingerprints across different fingers. The low local quality fingerprints have higher
matching scores, as shown in the figure above. The mean value of the matching scores of
the low local quality fingerprints is 12.78, the mean value of the median local quality
group is 12.57, and the mean value of the high local quality group is 9.73. As expected,
higher local quality should give better matching performance: when we try to match or
identify fingerprints, higher matching scores between different fingers raise the difficulty
of matching and result in lower reliability. Thus, from the figure above, the higher local
quality fingerprints have better matching performance, as we expect.
[ROC plot: x-axis FAR (0-0.1), y-axis TAR (0.6-1); curves for high, median, and low quality groups.]
Fig. 4.5 ROC when local quality classification is applied.
As shown in the receiver operating characteristic (ROC) figure above, the low local quality
group has a worse ROC than the high local quality group. For the high local quality group,
when the False Accept Rate is 0.01, the True Accept Rate is 0.992, and when the False
Accept Rate is 0.1, the True Accept Rate is 0.999. For the median local quality group, when
the False Accept Rate is 0.01, the True Accept Rate is 0.945, and when the False Accept
Rate comes up to 0.1, the True Accept Rate becomes 0.989. For the low local quality group,
when the False Accept Rate is 0.1, the True Accept Rate is 0.978, and when the False
Accept Rate is 0.2, the True Accept Rate is 0.997. When the False Accept Rate is around
0.17, the median quality group gives a higher True Accept Rate than the low quality group,
and when the False Accept Rate is from 0.2 to 0.6, the median quality group gives a higher
True Accept Rate than the high quality group, which shows that the trend that higher local
quality results in a higher matching score is not very stable. Given the same False Accept
Rate, the high local quality group has a higher probability of a true accept, which indicates
that the high local quality group performs better than the low local quality group. However,
this trend is not as stable as for the quality number, which is given in the following section
of this chapter. This is reasonable, since the quality number gives a more comprehensive
quality analysis than the local quality analysis by adding minutiae information.
Next, we use the overall quality number, derived from the NIST Fingerprint Image Quality
(NFIQ) System [2, 50], as the classification criterion. To achieve this goal, we first divide
the whole database into two groups. One group has all the fingerprints with an overall
quality number higher than the median quality number of the whole database, and the other
group has all the fingerprints with a lower overall quality number. As we explained in
chapter 3, the higher the overall quality number, the lower the quality of the fingerprint.
The overall quality number ranges from 1 to 5, where 1 represents the highest quality
fingerprint and 5 represents the lowest. For each plot of the relationship between the
matching scores and the overall quality number, we have around 120 unraveled
fingerprints, which result in on average 1500 matching scores. The following figures show
the distribution of all the matching scores related to the overall quality number.
[Plot: distribution curves for the high, median, and low quality groups; x-axis: matching score (0-300), y-axis: distribution (0-0.025).]
Fig. 4.6 Distribution of matching scores, when matched fingerprints are from the same finger.
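As a small illustration of the grouping step used here, the median split on NFIQ quality numbers can be sketched as follows; the fingerprint identifiers and scores are made-up examples, not the thesis database:

```python
# Sketch: split a database into quality-number groups at the median,
# as described in the text. NFIQ numbers run from 1 (best) to 5 (worst).
# The ids and scores below are illustrative only.
from statistics import median

def split_by_nfiq(nfiq_scores):
    """Return (high_quality_ids, low_quality_ids): fingerprints whose
    quality number is below the database median form the high quality
    group (a lower number means higher quality under NFIQ)."""
    med = median(nfiq_scores.values())
    high_quality = [fid for fid, q in nfiq_scores.items() if q < med]
    low_quality = [fid for fid, q in nfiq_scores.items() if q >= med]
    return high_quality, low_quality

scores = {"f1": 1, "f2": 2, "f3": 3, "f4": 4, "f5": 5}
high_group, low_group = split_by_nfiq(scores)
```

Matching is then performed within each group separately, as described next.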
After dividing the database into a high quality number group and a low quality number group,
we match each fingerprint against the other fingerprints within its own group.
To avoid confusion, recall that under the NIST definition a high quality number means low
quality; we therefore refer to the high quality number group as the low quality group and to
the low quality number group as the high quality group. The figure above shows the result
when the two input fingerprints are from the same finger. The mean matching score of the
low quality number (high quality) group is 122.43, the mean of the median quality number
group is 75.73, and the mean of the high quality number (low quality) group is 52.31. As
shown, the distribution of matching scores from low quality number fingerprints sits at
higher scores than the distribution from high quality number ones. Since a fingerprint with
a high quality number, as defined by the NIST system, has lower quality, lower matching
scores are expected; when we try to identify a fingerprint, a low quality number fingerprint
will therefore give us higher reliability. In other words, fingerprints with a lower quality
number have better matching performance.
[Plot: distribution curves for the high, median, and low quality groups; x-axis: mis-matching score (0-50), y-axis: distribution (0-0.07).]
Fig. 4.7 Distribution of matching scores, when matched fingerprints are from different fingers.
On the other hand, when we match two fingerprints from different fingers, the low quality
number fingerprints have lower matching scores, as shown in the figure above. Again, to
avoid confusion, we refer to the high quality number group as the low quality group and to
the low quality number group as the high quality group, following the NIST definition. The
mean matching score of the low quality number (high quality) group is 9.29, the mean of
the median quality number group is 10.74, and the mean of the high quality number (low
quality) group is 13.39. Because the NIST system assigns high quality fingerprints a lower
quality number, a lower quality number should again mean better matching performance.
When matching or identifying fingerprints, high scores between different fingers only make
matching more difficult and reduce reliability. Thus, the figure shows that the lower quality
number (higher quality) fingerprints have better matching performance, as we expect.
[Plot: ROC curves for the high, median, and low quality groups; x-axis: FAR (0-0.1), y-axis: TAR (0.75-1).]
Fig. 4.8 ROC curves when the overall quality classification is applied.
As the figure above shows, the low quality number (high quality) group has a better
receiver operating characteristic (ROC) than the high quality number (low quality) group.
For the low quality number (high quality) group, when the False Accept Rate is 0.01,
the True Accept Rate is 0.986, and when the False Accept Rate is 0.1, the True Accept Rate
is 0.997. For the median quality number group, when the False Accept Rate is
0.01, the True Accept Rate is 0.975, and when the False Accept Rate is 0.1, the True Accept
Rate is 0.998. For the high quality number (low quality) group, when the False Accept Rate
is 0.1, the True Accept Rate is 0.955, and when the False Accept Rate is 0.2, the True
Accept Rate is 0.990. Given the same False Accept Rate, the low quality number group has
a higher True Accept Rate, which indicates that it performs better than the high quality
number group.
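The TAR-at-FAR readings quoted above can be reproduced from raw score lists with a short routine like this sketch; the score values are illustrative, not taken from the thesis database:

```python
# Sketch: read a True Accept Rate at a target False Accept Rate from
# genuine (same-finger) and impostor (different-finger) matching scores.
import numpy as np

def tar_at_far(genuine, impostor, far_target):
    """Scan thresholds from low to high; at the first threshold whose
    impostor pass rate drops to far_target or below, report the fraction
    of genuine scores that still pass."""
    thresholds = np.sort(np.unique(np.concatenate([genuine, impostor])))
    for t in thresholds:
        if np.mean(impostor >= t) <= far_target:
            return float(np.mean(genuine >= t))
    return 0.0

genuine = np.array([120.0, 95.0, 80.0, 15.0, 150.0])
impostor = np.array([5.0, 9.0, 12.0, 20.0, 8.0])
tar = tar_at_far(genuine, impostor, 0.0)
```

Sweeping `far_target` over a range of values traces out one ROC curve per quality group.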
Comparing the two criteria, we notice that under the second criterion, based on the overall
quality number, more fingerprints with low matching scores fall into the low quality group,
which means the quality number criterion has better classification performance. No matter
which criterion we apply, the distributions and ROC plots both show that the higher the
quality of a fingerprint, the more likely it is to achieve better matching performance. This
better matching performance can be concluded from the ROC plots, from the higher matching
scores when matching the same finger, and from the lower matching scores when matching
different fingers. However, the difference between the distributions of the two groups is
more obvious when we use the overall quality number as the criterion. The local quality is
defined by the quality zones of the fingerprint, which are based on blur degree and
curvature. The overall quality number, on the other hand, is defined by the NIST Fingerprint
Image Quality (NFIQ) System. The NFIQ system determines the overall quality of a
fingerprint from a feature vector containing foreground information, the total number of
minutiae, the numbers of minutiae with reliability higher than 50%, 75%, and 90%, and the
local quality information. Since the NIST matching system is minutiae based, the NFIQ
system provides a more comprehensive classification. Thus, if we apply the NFIQ system to
classify the whole database into higher and lower quality fingerprint groups, the higher
quality fingerprints, with lower quality numbers, have a matching score distribution
distinct from that of the lower quality group, from which we can safely conclude that the
higher the quality of a fingerprint, the more likely it is to achieve better matching
performance.
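A rough sketch of assembling such a feature vector follows; the field names and the exact reliability thresholds paraphrase the description above and are not the exact NFIQ definition:

```python
# Sketch of an NFIQ-style feature vector: foreground coverage, the total
# minutiae count, and counts of minutiae above several reliability levels.
# Field names and inputs here are hypothetical, chosen for illustration.
def nfiq_features(minutiae_reliability, foreground_blocks, total_blocks):
    """minutiae_reliability: list of per-minutia reliability values in [0, 1]."""
    return {
        "foreground_ratio": foreground_blocks / total_blocks,
        "n_minutiae": len(minutiae_reliability),
        "n_rel_50": sum(r > 0.50 for r in minutiae_reliability),
        "n_rel_75": sum(r > 0.75 for r in minutiae_reliability),
        "n_rel_90": sum(r > 0.90 for r in minutiae_reliability),
    }

feats = nfiq_features([0.95, 0.8, 0.6, 0.4],
                      foreground_blocks=300, total_blocks=400)
```

In NFIQ such a vector is fed to a trained classifier that outputs the quality number from 1 to 5.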
Chapter 5
Conclusions and Future Work
5.1 Conclusions
A fingerprint is an impression of the friction ridges of all or any part of the finger. A
friction ridge is a raised portion of the epidermis on the palmar (palm and fingers) or
plantar (sole and toes) skin, consisting of one or more connected ridge units of friction
ridge skin. Fingerprint identification (sometimes referred to as dactyloscopy) or palm-print
identification is the process of comparing questioned and known friction skin ridge
impressions (see Minutiae) from fingers or palms to determine if the impressions are from
the same finger or palm. The flexibility of friction ridge skin means that no two finger or
palm prints are ever exactly alike (never identical in every detail), even two impressions
recorded immediately after each other. Fingerprint identification (also referred to as
individualization) occurs when an expert (or an expert computer system operating
under threshold scoring rules) determines that two friction ridge impressions originated
from the same finger or palm (or toe, sole) to the exclusion of all others. Fingerprint
identification has been used for centuries.
In this thesis, we first discussed the uses of fingerprints, their history, and the most
widely used techniques today. After pointing out the defects of these techniques, we
introduced our new fingerprint scanning system [1]. In an effort to develop such a system,
we have been building a 3D fingerprint scanner that uses Structured Light
Illumination (SLI) to acquire 3D finger and palm scans with ridge level details. SLI
is a very common technique for automatic inspection and for measuring surface
topologies. The 3D scans thus obtained are post processed to virtually extract the finger
surface from the scan and then create 2D rolled equivalents, since the standard fingerprint
analysis software is 2D based. This post processing of the fingerprint is explained in
chapter 2. The method we use to unravel the 3D fingerprint into a 2D unraveled
fingerprint is the best fit sphere unravel method, detailed in chapter 2. Its main idea is
to find the sphere that best fits the fingerprint, project the ridges onto that sphere, and
finally unravel the sphere into 2D. By scaling every part of the fingerprint to 500 dpi, we
further defuse the distortion caused by the best fit sphere algorithm, so that from
acquiring the 3D fingerprint to unraveling it into 2D, distortion is defused, except for
the possible distortion caused by the camera and projector. We then applied the NIST
software systems to both the 2D inked fingerprints and our 2D unraveled fingerprints.
To identify the most viable measures for quantifying our new
fingerprint scanning system, a small fingerprint database consisting of both the 3D scans
and the traditional 2D rolled inked fingerprints was created by recruiting about 15
subjects. All the 3D scans were processed, and their 2D rolled equivalents were obtained
and stored in the database. The NIST software systems were first run over both the 2D
rolled inked fingerprints and the 2D unraveled images obtained from the 3D scans, and the
different quantities were evaluated. The results show that for both the 2D inked and the
unraveled fingerprints there is a similar direct relationship between the number of local
image blocks in quality zone 4 and the overall quality: as the quality of the fingerprint
increases from unusable to best, the number of these blocks also increases significantly.
A reverse trend was observed in the number of blocks falling under quality zone 2, i.e., as
the number of these blocks increases, the overall quality of the image decreases from best
to worst. Since the number of blocks in quality zone 4 is in direct proportion to the
quality, this measure seems to be the logically more appropriate choice for determining
scanner performance. The other two quality zone block counts did not show any significant
trend or difference over the various quality categories. A similar direct relationship was
also found between the number of minutiae with quality greater than 0.5, 0.6, and 0.75 and
the overall quality for both the 2D inked and unraveled fingerprints. However, the minutiae
counts with quality greater than 0.5 and 0.6 are statistically comparable and follow the
same trend as the count with quality greater than 0.75, so the number of minutiae with
reliability greater than 0.75 is more suitable for scanner performance assessment. The
classification confidence numbers do not show a significant increase or decrease with
respect to the overall image quality and, hence,
are not of much use for evaluation purposes. For the minutiae with quality greater than
0.8, the results show that the 3D unraveled fingerprints have a better distribution than
the 2D inked fingerprints, which means that more high quality minutiae are detected in the
3D unraveled fingerprints. When these quantities were compared for both the 3D scans and
the 2D scans, both domains showed similar trends, with a stronger correlation in the 3D
domain, verifying our hypothesis. After obtaining and analyzing the results, we concluded
that our new 3D fingerprint scanning system yields trends and distributions similar to
those of the 2D inked fingerprints, but with higher quality.
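The best fit sphere step recalled above can be illustrated with a standard linear least-squares sphere fit; this is a generic formulation of the fitting stage only, and the subsequent ridge projection and 500 dpi scaling are not shown:

```python
# Sketch: least-squares sphere fit. Writing the sphere as
# x^2 + y^2 + z^2 + D*x + E*y + F*z + G = 0 makes the problem linear in
# (D, E, F, G); the center and radius are then recovered from those
# coefficients. The sample points below are synthetic.
import numpy as np

def fit_sphere(points):
    """Fit a sphere to Nx3 points; return (center, radius)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts, np.ones(len(pts))])
    b = -(pts ** 2).sum(axis=1)
    (D, E, F, G), *_ = np.linalg.lstsq(A, b, rcond=None)
    center = -0.5 * np.array([D, E, F])
    radius = float(np.sqrt(center @ center - G))
    return center, radius

# Points sampled exactly on a sphere of radius 2 centered at (1, -1, 0.5):
c0, r0 = np.array([1.0, -1.0, 0.5]), 2.0
rng = np.random.default_rng(0)
v = rng.normal(size=(200, 3))
pts = c0 + r0 * v / np.linalg.norm(v, axis=1, keepdims=True)
center, radius = fit_sphere(pts)
```

For a real finger surface the fit is only approximate, which is why the residual distortion must be compensated by the later scaling step.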
In chapter 3, we introduced the traditional 2D inked fingerprint and the standard quality
test and matching software: the National Institute of Standards and Technology (NIST)
software systems. These systems are used to evaluate the scanned data and the scanning
system. We assess the system by unraveling the 3D fingerprint into a 2D unraveled
fingerprint. In this work, we have used some traditional quantitative metrics first
introduced to evaluate the quality of the legacy 2D images. Specifically, we are using three
software systems, developed by NIST, for evaluating the traditional 2D scans. These
software systems include the classification system PCASYS, the minutiae detection system
MINDTCT, and the image quality measure system NFIQ. PCASYS is a neural network
based fingerprint classification system that classifies the fingerprint image into 5 different
classes [2, 50]: (i) whorl, (ii) right loop, (iii) left loop, (iv) arch, and (v) tented arch. This
classification is based on identifying the presence and position of some singular points like
the core and delta on the fingerprint surface. Along with the fingerprint image class, the
system also outputs the probability of the hypothesized class, as the confidence for the
classified fingerprint. This confidence number is used as one of the quantifying measures
for scanner evaluation. The matching software, BOZORTH3, is a minutiae based fingerprint
matching system that outputs a matching score used to judge whether two fingerprints are
from the same finger. The details of the algorithm are given in
Chapter 3.
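As a sketch of how these NIST tools chain together in practice, the following shows the command sequence and score parsing for a one-to-one match; the file names are hypothetical, and the exact invocations should be checked against the NBIS user's guide [50]:

```python
# Sketch: one-to-one matching with the NBIS command-line tools. MINDTCT
# extracts minutiae (writing, among other files, a .xyt minutiae file) and
# BOZORTH3 prints a match score for two .xyt files. Image and output names
# here are illustrative.
import subprocess

def match_score_cmds(probe_img, gallery_img):
    """Return the command lists for a one-to-one match, to be run in order."""
    return [
        ["mindtct", probe_img, "probe"],           # writes probe.xyt, etc.
        ["mindtct", gallery_img, "gallery"],       # writes gallery.xyt, etc.
        ["bozorth3", "probe.xyt", "gallery.xyt"],  # prints the match score
    ]

def parse_score(bozorth3_stdout):
    """BOZORTH3 prints an integer score; higher suggests the same finger."""
    return int(bozorth3_stdout.strip().split()[0])

# Hypothetical usage (requires the NBIS binaries on PATH):
# cmds = match_score_cmds("probe.png", "gallery.png")
# for cmd in cmds[:2]:
#     subprocess.run(cmd, check=True)
# out = subprocess.run(cmds[2], capture_output=True, text=True).stdout
# score = parse_score(out)
```

A score threshold on this output then yields the accept/reject decisions behind the FAR/TAR figures above.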
In chapter 4, we gave a matching analysis of our new system, based on the 2D fingerprints
unraveled from 3D. The results show that our new method has a very good ROC curve, which is
chosen as our evaluation criterion: when the FAR is 0.01, the TAR is 0.891, and when the
FAR is 0.1, the TAR is 0.988. At the end of chapter 4, we divided the whole database into
high, median, and low quality fingerprint groups, with quality defined by the NIST system.
The matching results of these groups demonstrate that higher quality fingerprints are more
likely to achieve better matching performance. Especially under the quality number
criterion, the trend that higher quality results in better matching is clear, and, compared
to the local quality criterion, more fingerprints with poor matching performance are
classified into the low quality group.
To sum up, we present a new approach for 3-D fingerprint acquisition and matching using
structured light illumination and 2-D equivalent 3-D image data processing. Both the
processes of acquiring the 3D fingerprint and post processing the data are free of distortion.
We employed software systems developed by the National Institute of Standards and
Technology (NIST), used for conventional 2-D fingerprints, to evaluate the performance of
3-D fingerprints after unraveling them into 2-D rolled equivalent images. Compared with
its 2-D rolled inked counterpart, the new 3-D approach provides competitive performance
with more user convenience, higher robustness to experimental conditions, faster data
collection, and freedom from distortion. We also show that the higher the quality of the
fingerprint, the more likely it is to achieve better matching performance.
5.2 Future Work
In this thesis, the fingerprint scanning system is single camera based, so it can only
cover part of the finger. The direct result is that the rest of the fingerprint is lost and
the number of detected minutiae is greatly reduced. The next step is therefore a
multi-camera scanning system, along with software and algorithms to unravel multi-camera
scans and to merge multiple scans into one single whole view. Further, the best fit sphere
algorithm should be improved, since the shape of a finger is generally closer to a
cylinder, or to some dedicated finger model, than to a sphere. A finger model could be
developed with several parameters defining the size, degree of curvature, and other
details, so that the unraveling can give a more accurate and undistorted 2D unraveled
fingerprint, closer to the inked ones. Even further, since the most widely used
fingerprints today are 2D, quality testing and matching software are all 2D based. With the
introduction of 3D fingerprints, quality testing and matching software based directly on 3D
fingerprints should also be developed, so that we do not need to unravel the 3D
fingerprints into 2D ones, a process in which distortion can be introduced and information
can be lost. If the matching could be done directly in 3D, there would be even more
information available, such as the curvature, shape, and size of the finger, for obtaining
an even more accurate matching result.
Our future work also includes camera/projector lens distortion correction, to obtain more
surrounding information and higher ridge depth precision in the 3-D fingerprints. Furthermore,
the same 3-D sensor may be used to capture face, hand and palm-print images and
therefore is ideal for a fusion of comprehensive 3-D biometrics of humans.
Appendix
3D Unraveled Fingerprints
Subject 0
Subject 1
Subject 2
Subject 3
Subject 4
Subject 5
Subject 6
Subject 7
Subject 8
Subject 9
Subject 10
Subject 11
Subject 12
Subject 13
Subject 14
BIBLIOGRAPHY
[1] Veeraganesh Yalla. Optimal Phase Measuring Profilometry Techniques for Static and Dynamic 3D
Data Acquisition. PhD Dissertation, University of Kentucky, Lexington, USA. July, 2006.
[2] Abhishika Fatehpuria. Performance Evaluation of Non-contact 3D Fingerprint Scanner. MS Thesis,
University of Kentucky, Lexington, USA. November, 2006.
[3] Veeraganesh Yalla. Multi-frequency Phase Measuring Profilometry. MS Thesis, University of
Kentucky, Lexington, USA. December 2004.
[4] C. Guan. Composite Pattern for Single Frame 3D Acquisition. PhD Dissertation, University of
Kentucky, Lexington, USA. December 2004.
[5] C. Guan, L. G. Hassebrook and D. L. Lau. “Composite Structured Light Pattern for Three
Dimensional Video”. Opt. Exp., 11, 406-417, 2003.
[6] International Biometrics Group. Biometrics Market and Industry Report 2006-2010, Jan 2006.
[7] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar. Handbook of Fingerprint Recognition. Springer-Verlag, 2003.
[8] http://en.wikipedia.org/wiki/Fingerprint
[9] Olsen, Robert D., Sr. (1972) “The Chemical Composition of Palmar Sweat”. Fingerprint and
Identification Magazine, Vol. 53(10).
[10] Tewari RK, Ravikumar KV. History and development of forensic science in India. J. Postgrad Med
2000,46:303-308.
[11] The History of Fingerprints. http://onin.com/fp/fphistory.html.
[12] A. K. Jain, R. Bolle, and S. Pankanti, editors. Biometrics-Personal Identification in Networked
Society. Kluwer Academic Publishers. 1999.
[13] H. C. Lee and R. Gaensslen, editors. Advances in Fingerprint Technology. Florida: CRC Press, 2
Edition, 2001.
[14] B. Laufer. History of Fingerprint System. Washington: Government Printing Office, 1912.
[15] F. Galton. Fingerprints. McMillan, London, 1892.
[16] M. Hase and A. Shimusu. Entry method of fingerprint image using a prism. Trans. Institute Electron.
Commun. Eng. Japan, J67-D: 627-628, 1984.
[17] R. D. Bahuguna and T. Carboline. Prism fingerprint sensor that uses a holographic element. Applied
Optics, 35(26): 5242-5245, 1996.
[18] CJIS-RS-0010v7. Appendix F & G, IAFIS Image Quality Specifications, January 1999.
[19] W. J. Herschel. The origin of Finger-Printing. Oxford University Press, London, 1916.
[20] I. Seigo, E. Shin, and S. Tackashi. Holographic fingerprint sensor. Fujitsu Scientific and Technical
Journal, 25(4): 287, 1989.
[21] Y. Fumio, I. Seigo, and E. Shin. Real time fingerprint sensor using a hologram. Applied Optics,
31(11): 1794, 1992.
[22] Edge lit hologram for live scan fingerprinting, 1997. http://www.ImEdge.com/.
[23] D. G. Edwards. Fingerprint sensor. US Patent 4429413, 1984.
[24] J. Klett. Thermal imaging fingerprint technology. In Proc. Biometric Consortium, Ninth Meeting,
Crystal City, VA, April 8-9 1997.
[25] X. Xia and L. O’Gorman. Innovations in fingerprint capture devices. Pattern Recognition, 36(2):
361-369, 2003.
[26] J. Schneider and D. Wobschall. Live scan fingerprint imagery using high resolution c-scan ultrasonography. In Proc. 25th Int. Carnahan Conference on Security Technology, pages 88-95, 1991.
[27] Ultra-scan, 2000. http://www.ultra-scan.com/.
[28] C. Tsikos. Capacitive fingerprint sensor. US Patent 4353056, 1982.
[29] A. G. Knapp. Fingerprint sensing device and recognition system having predetermined electrode
activation. US Patent 5325442, 1994.
[30] C. Inglis, L. Manchanda, R. Comizzol, A. Dickinson, E. Martin, S. Mandis, P. Silveman, G. Weber,
B. Ackland, and L. O’Gorman. A robust 1.8v, 250 mw direct contact 500 dpi fingerprint sensor. In Proc.
IEEE Solid State Circuits Conference, pages 284-285, 1998.
[31] J. W. Lee, D. J. Min, J. Kim, and W. Kim. A 600 dpi capacitive fingerprint sensor chip and image
synthesis technique. IEEE Journal of Solid State Circuits, 34(4): 469-475, 1999.
[32] A. Dickison, R. McPherson, S. Mendis, and P. C. Ross. Capacitive fingerprint sensor with
adjustable gain. US Patent 6049620, 2000.
[33] Veridicom. http://www.veridicom.com.
[34] Harris semiconductor, 1998. http://www.semi.harris.com/fngrloc/index.htm.
[35] A. K. Jain, L. Hong, S. Pankanti, and R. Bolle. Identity authentication using fingerprints. In
Proceedings of the IEEE, pages 1365-1388, 1997.
[36] Federal Bureau of Investigation. The Science of Fingerprints: Classification and Uses. U. S.
Government Printing Office, Washington, D. C., 1984.
[37] N. Ratha and R. Bolle, editors. Automatic Fingerprint Recognition Systems. Springer-Verlag, 2004.
[38] http://te-effendi-kriminalistik.blogspot.com/2006/11/fingerprint-fingerprint-is-impression.html.
[39] C. Wilson, G. Candela, and C. Watson. Neural network fingerprint classification. Journal of
Artificial Neural Networks, 1(2): 203-228, 1993.
[40] Anil K. Jain. http://biometrics.cse.msu.edu/Presentations/AnilJain_UniquenessOfFingerprints_NAS05.pdf.
[41] A. K. Jain, S. Prabhakar, L. Hong, and S. Pankanti. Filterbank-based fingerprint matching. IEEE
Transactions on Image Processing, 9:846-859, 2000.
[42] B. Moayer and K. Fu. A Syntactic approach to fingerprint pattern recognition. Pattern Recognition,
7:1-23, 1975.
[43] http://www.nlreg.com/sphere.htm.
[44] T. W. J. Unti. “Best-fit sphere approximation to a general aspheric surface”. Appl. Opt., 5, 319, 1966.
[45] Fletcher, D. J.; Jewett, D. L.; Zhi Zhang; Amir, A. The effect of skull shape on single and multiple
dipole source localizations. In Proceedings of the 15th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society, 1993, pages 1469-1470.
[46] Yang, Li; Chen, Yaolong; Kley, Ernst-Bernhard; Li, Rongbin. 3rd International Symposium on
Advanced Optical Manufacturing and Testing Technologies: Advanced Optical Manufacturing
Technologies. Proceedings of the SPIE, Volume 6722, pp. 672245 (2007).
[47] Zheng, Li-xin; Zhu, Zheng. Methods to determine best-fit sphere for off-axis aspheric
surface. Proceedings of the SPIE, Volume 6722, 2007.
[48] http://www.nist.gov/public_affairs/general2.htm.
[49] Craig I. Watson, Michael D. Garris, Elham Tabassi, Charles L. Wilson, R. Michael McCabe,
Stanley Janet, Kenneth Ko. User’s Guide to Export Controlled Distribution of NIST Biometric Image
Software (NBIS-EC).
[50] Craig I. Watson, Michael D. Garris, Elham Tabassi, Charles L. Wilson, R. Michael McCabe,
Stanley Janet, Kenneth Ko. User’s Guide to NIST Biometric Image Software (NBIS).
[51] Charles Wilson. http://fpvte.nist.gov/report/ir_7123_analysis.pdf.
[52] Craig Watson, Charles Wilson, Karen Marshall, Mike Indovina, and Rob Snelick. Studies of One to
One Fingerprint Matching with Vendor SDK Matchers. NISTIR 7221, April 22 2005.
[53] A. K. Jain, L. Hong, and R. Bolle. On-line fingerprint verification. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 19(4): 302-314, 1997.
[54] B. Moayer and K. Fu. A tree system approach for fingerprint pattern recognition. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 8(3): 376-388, 1986.
[55] D. Maio and D. Maltoni. Direct grey scale minutiae detection in fingerprints. IEEE Transactions in
Pattern Analysis and Machine Intelligence, 19(1): 27-40, 1997.
[56] J. Liu, Z. Huang, and K. Chan. Direct minutiae extraction from gray-level fingerprint image by
relationship examination. In Proceedings International Conference on Image Processing, volume 2,
pages 427-430, 2000.
[57] A. Ross, J. Reisman, and A. K. Jain. Fingerprint matching using feature space correlation. In
Proceedings ECCV Workshop on Biometric Authentication, Copenhagen, 2002.
[58] N. Ratha, K. Karu, S. Chen, and A. K. Jain. A real-time matching system for large fingerprint
databases. IEEE Trans Pattern Analysis and Machine Intelligence, 18(8): 799-813, 1996.
[59] D. Isenor and S. Zaky. Fingerprint identification using graph matching. Pattern Recognition, 19(2):
113-122, 1986.
[60] A. Hrechak and McHugh. Automated fingerprint recognition using structural matching. Pattern
Recognition, 23(8): 893-904, 1990.
[61] R. S. Germain, A. Califano, and S. Colville. Fingerprint matching using transformation parameter
clustering. IEEE Computer Science Engineering, 4(4): 42-49, 1997.
[62] Z. Kovacs-Vajna. A fingerprint verification system based on triangular matching and dynamic time
warping. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11): 1266-1276, 2000.
[63] Lin Hong, Yifei Wan, and Anil Jain. Fingerprint Image Enhancement: Algorithm and Performance
Evaluation. IEEE PAMI, Vol. 20, No. 8, August 1998.
[64] Lifeng Liu, Tianzi Jiang, Jianwei Yang and Chaozhe Zhu. Fingerprint Registration by
Maximization of Mutual Information. IEEE Transactions on Image Processing, Vol. 15, No. 5, May 2006.
VITA
PERSONAL DATA
Name: Yongchang Wang
Sex: Male
Email Address: ywang6@engr.uky.edu
Date of Birth: 17/04/1981
Nationality: P.R.China
Phone: 1-859-230-6154
Home Address: 257 Lexington Ave Apt 3, Lexington, KY, 40508, USA
EDUCATION
Sept. 2001 – Jun. 2005: Undergraduate, Zhejiang University, Hangzhou, P.R.China;
B.E. in Electrical Engineering received in June 2005
Jan. 2006 – : Graduate, University of Kentucky, Lexington, KY, USA
PAPERS
Undergraduate:
1. Wang Yongchang, Wang Hao, Chen Liang, Zhu Shanan. “Java-based Inverted-pendulum Control
System”. Control of China, 4, 04429, 2005.
Graduate:
2. Yongchang Wang, Kai Liu, Qi Hao, Daniel Lau, Laurence G. Hassebrook. “Multicamera Phase
Measuring Profilometry For Accurate Depth Measurement”. SPIE, Orlando, Florida, April 9-15, 2007.