Extraction of Features From Fusion of Two Fingerprints For Security

Miss Swapnali V. Mandle
M.E. (E&TC), Dept. of E&TC, ADCTE Ashta
Swapnali.mandle@gmail.com

Mr. S. S. Bidwai
Asst. Professor, Dept. of E&TC, ADCTE Ashta
Profssb@gmail.com
Abstract - Biometric fingerprint recognition is considered one of the most reliable technologies and has been used extensively for personal identification. In a conventional fingerprint recognition system, minutiae positions are extracted and stored in the database, but a fake fingerprint can be generated from those minutiae positions; this is the main drawback of existing systems. If a fusion of two different fingerprints is used instead, the fake-fingerprint problem can be partially solved. We propose a new system that fuses two different fingerprints for security. In the proposed system, the fingerprints are first fused, and then features such as minutiae positions and orientation estimates are extracted to form a combined template, which is stored in the database. For testing, two query fingerprints are required; the same process is repeated to generate a combined template, which is compared with the stored database.
Keywords: fingerprint, image enhancement, filtering, minutiae extraction, orientation, combined template.
I. INTRODUCTION
In the conventional fingerprint recognition system, fingerprint enhancement is used; with this technique, minutiae positions are extracted and orientations are estimated. However, if someone gains access to the minutiae-position database, it is easy to recover the original fingerprint image by applying fingerprint reconstruction techniques to the minutiae positions. This drawback can be minimized by fusing two different fingerprints. In the proposed system, two different fingerprints are first fused [1]-[5], and then features such as minutiae positions and orientations are extracted [6] and stored in the database. This is the enrollment phase. In the authentication phase, two query fingerprints are taken for testing; fusion is performed again and the features are extracted, then compared with the stored database. The result is either a match or a non-match. This is represented by the block diagram in Fig. 1.
Fig. 1(a). Enrollment phase.
Fig. 1(b). Authentication phase.
II. THE PROPOSED FINGERPRINT
PRIVACY PROTECTION SYSTEM
The proposed system is implemented in MATLAB. Fingerprints are captured with a fingerprint scanner, and two different fingerprints are required. The proposed system includes two phases: an enrollment phase and an authentication phase.
In the enrollment phase, two different fingerprints are taken and fused. The features are then extracted using an enhancement algorithm, and the extracted features are stored in the database. The working diagram of the enrollment phase is shown in Fig. 2.

Fig. 2. Working diagram of the enrollment phase.
In the authentication phase, two query fingerprints are taken and fused, and the features are extracted using the same enhancement algorithm. The extracted features are compared with the stored database: if they match, authentication is successful; otherwise, authentication fails. The working diagram of the authentication phase is shown in Fig. 3.

Fig. 3. Working diagram of the authentication phase.
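Viewed end to end, the two phases form a small pipeline: enrollment builds a combined template from two fingerprints, and authentication repeats the same fusion and feature-extraction steps on two query fingerprints and compares the result with the stored template. The Python/NumPy sketch below is only a minimal illustration of this flow; the averaging fusion, the block-mean feature extractor, and the distance-threshold comparison are hypothetical placeholders standing in for the fusion, minutiae/orientation extraction, and matching used in the proposed system.

import numpy as np

def fuse_images(fp1, fp2):
    # Hypothetical fusion step: average the two grayscale fingerprint images.
    # The proposed system mixes two different fingerprints [1][2].
    return ((fp1.astype(np.float64) + fp2.astype(np.float64)) / 2.0).astype(np.uint8)

def extract_features(fused, block=16):
    # Placeholder feature extractor: a coarse block-wise mean map standing in
    # for the combined template of minutiae positions and orientations.
    h, w = fused.shape
    h, w = h - h % block, w - w % block
    blocks = fused[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).ravel()

def enroll(fp1, fp2):
    # Enrollment phase: fuse two fingerprints and store the combined template.
    return extract_features(fuse_images(fp1, fp2))

def authenticate(stored_template, q1, q2, threshold=10.0):
    # Authentication phase: repeat fusion and feature extraction on the query
    # pair and compare with the stored template (hypothetical distance rule).
    query_template = extract_features(fuse_images(q1, q2))
    return np.linalg.norm(stored_template - query_template) < threshold

# Toy usage with random arrays standing in for scanned fingerprints.
rng = np.random.default_rng(0)
f1, f2 = rng.integers(0, 256, size=(2, 256, 256), dtype=np.uint8)
template = enroll(f1, f2)
print(authenticate(template, f1, f2))  # same pair, same order -> True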
1. Fingerprint Image Acquisition
Image acquisition is the first step in the approach. It is very important that the fingerprint image is of good quality and free from noise; a good fingerprint image is desirable for better performance of the fingerprint algorithms. Based on the mode of acquisition, a fingerprint image may be classified as off-line or live-scan [7][5].
An off-line image is typically obtained by smearing ink on the fingertip and creating an inked impression of the fingertip on paper. A live-scan image, on the other hand, is acquired by sensing the tip of the finger directly, using a sensor that is capable of digitizing the fingerprint on contact. Live-scan acquisition uses three basic types of sensors: optical, ultrasonic, and capacitive [5][7].
Optical sensors capture a digital image of the fingerprint: the light reflected from the finger passes through a phosphor layer to an array of pixels, which captures a visual image of the fingerprint. Ultrasonic sensors use very high-frequency sound waves to penetrate the epidermal layer of the skin.
2. Fusion of Fingerprints
After scanning, the two fingerprint images are mixed [1][2]. The result of the fused image is shown in Fig. 4.
Fig. 4. Result of the fused image, obtained by fusing two different fingerprints.
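For illustration only, a pixel-level mixing of two fingerprint images can be sketched as follows. This row-interleaving blend is an assumption made for the example; the mixing schemes of [1][2] combine minutiae and orientation information from the two prints rather than raw pixel strips.

import numpy as np

def mix_fingerprints(fp1, fp2, strip=8):
    # Hypothetical mixing: alternate horizontal strips taken from the two
    # fingerprint images (illustrative stand-in for the fusion step).
    if fp1.shape != fp2.shape:
        raise ValueError("fingerprint images must have the same size")
    mixed = fp1.copy()
    for top in range(strip, fp1.shape[0], 2 * strip):
        mixed[top:top + strip, :] = fp2[top:top + strip, :]
    return mixed

# Example with two synthetic 128 x 128 images standing in for fingerprints.
rng = np.random.default_rng(1)
a, b = rng.integers(0, 256, size=(2, 128, 128), dtype=np.uint8)
print(mix_fingerprints(a, b).shape)  # (128, 128)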
3. Extraction of Features of Fingerprints
After fusion, features such as minutiae positions and orientations are extracted. These mixed features are stored in the database. An enhancement algorithm is used for the extraction of the minutiae points [6][10].

3.1 Minutiae Extraction
The most commonly employed method of minutiae extraction is the Crossing Number (CN) concept [9]. This method uses the skeleton image, in which the ridge flow pattern is eight-connected. The minutiae are extracted by scanning the local neighborhood of each ridge pixel in the image with a 3 × 3 window. The CN value is then computed; it is defined as half the sum of the differences between pairs of adjacent pixels in the eight-neighborhood. The ridge pixel can then be classified as a ridge ending, a bifurcation, or a non-minutiae point: a ridge pixel with a CN of one corresponds to a ridge ending, and a CN of three corresponds to a bifurcation. The CN for a ridge pixel P is given by [9][10]

CN = 0.5 \sum_{i=1}^{8} | p_i - p_{i+1} |,   with p_9 = p_1,

where p_i is the pixel value in the neighborhood of P. For a pixel P, its eight neighboring pixels are scanned in an anti-clockwise direction as follows:

P4  P3  P2
P5  P   P1
P6  P7  P8

Fig. 5. Scanned pixels (eight-neighborhood of P).

After the CN for a ridge pixel has been computed, the pixel can be classified according to its CN value. As shown in Fig. 6, a ridge pixel with a CN of one corresponds to a ridge ending, and a CN of three corresponds to a bifurcation. For each extracted minutiae point, the following information is recorded:
• x and y coordinates,
• orientation of the associated ridge segment, and
• type of minutiae (ridge ending or bifurcation).

Fig. 6. Examples of a ridge ending and a bifurcation pixel: (a) a Crossing Number of one corresponds to a ridge ending pixel; (b) a Crossing Number of three corresponds to a bifurcation pixel.

The minutiae extraction result is shown in Fig. 7.

Fig. 7. Result of minutiae positions.
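The Crossing Number computation above maps directly to code. The sketch below is a minimal NumPy version that assumes a thinned binary skeleton image with ridge pixels equal to 1; the neighbor ordering follows Fig. 5, and the example input is synthetic.

import numpy as np

def crossing_number_minutiae(skeleton):
    # Extract ridge endings (CN = 1) and bifurcations (CN = 3) from a thinned
    # binary ridge image using the Crossing Number method [9][10].
    # Neighbours p1..p8 are visited in the order shown in Fig. 5.
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    endings, bifurcations = [], []
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c] != 1:
                continue
            p = [int(skeleton[r + dr, c + dc]) for dr, dc in offsets]
            p.append(p[0])  # p9 = p1 closes the cycle
            cn = 0.5 * sum(abs(p[i] - p[i + 1]) for i in range(8))
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations

# Tiny example: a horizontal ridge segment has a ridge ending at each end.
skel = np.zeros((5, 7), dtype=np.uint8)
skel[2, 1:6] = 1
ends, bifs = crossing_number_minutiae(skel)
print(ends, bifs)  # two endpoints, no bifurcations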
3.2 Normalization
To estimate orientations, the mean and variance of the image are computed first, the image is then normalized, and finally the orientation is estimated. A gray-level fingerprint image I is defined as an N × N matrix, where I(i, j) represents the intensity of the pixel at the i-th row and j-th column. We assume that all images are scanned at a resolution of 500 dots per inch (dpi). The mean and variance of a gray-level fingerprint image I are defined as

M(I) = (1/N^2) \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} I(i, j)

and

VAR(I) = (1/N^2) \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} ( I(i, j) - M(I) )^2 .

Let I(i, j) denote the gray-level value at pixel (i, j), let M and VAR denote the estimated mean and variance of I, respectively, and let G(i, j) denote the normalized gray-level value at pixel (i, j). The normalized image is defined as follows [6][7]:

G(i, j) = M_0 + \sqrt{ VAR_0 ( I(i, j) - M )^2 / VAR }   if I(i, j) > M,
G(i, j) = M_0 - \sqrt{ VAR_0 ( I(i, j) - M )^2 / VAR }   otherwise,

where M_0 and VAR_0 are the desired mean and variance values. The main purpose of normalization is to reduce the variations in gray-level values along ridges and valleys.
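The normalization rule can be applied pixel-wise in a few lines. The sketch below assumes a grayscale image stored as a NumPy array; the desired values M0 = VAR0 = 100 are illustrative choices, not values prescribed in the paper.

import numpy as np

def normalize(image, m0=100.0, var0=100.0):
    # Normalize a grayscale fingerprint image to the desired mean M0 and
    # variance VAR0 using the piecewise rule given above [6][7].
    img = image.astype(np.float64)
    mean, var = img.mean(), img.var()
    # sqrt(VAR0 * (I - M)^2 / VAR); guard against a constant image (VAR = 0).
    term = np.sqrt(var0 * (img - mean) ** 2 / var) if var > 0 else np.zeros_like(img)
    return np.where(img > mean, m0 + term, m0 - term)

# Example: the normalized output has mean M0 and variance VAR0.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
out = normalize(img)
print(round(out.mean(), 1), round(out.var(), 1))  # 100.0 100.0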
3.3 Orientation Image
The orientation image represents an intrinsic property of the fingerprint image and defines invariant coordinates for ridges and valleys in a local neighborhood. An orientation image O is defined as an N × N image, where O(i, j) represents the local ridge orientation at pixel (i, j). Local ridge orientation is usually specified for a block rather than for every pixel: the image is divided into a set of w × w non-overlapping blocks, and a single local ridge orientation is defined for each block. Note that in a fingerprint image there is no difference between a local ridge orientation of 90° and 270°, since ridges oriented at 90° and ridges oriented at 270° in a local neighborhood cannot be differentiated from each other.
Given a normalized image G, the main steps of the algorithm [6][7] are as follows:
1) Divide G into blocks of size w × w (16 × 16).
2) Compute the gradients ∂x(i, j) and ∂y(i, j) at each pixel (i, j).
3) Estimate the local orientation of each block centered at pixel (i, j) using the following equations:
V_x(i, j) = \sum_{u=i-w/2}^{i+w/2} \sum_{v=j-w/2}^{j+w/2} 2 \partial_x(u, v) \partial_y(u, v)

V_y(i, j) = \sum_{u=i-w/2}^{i+w/2} \sum_{v=j-w/2}^{j+w/2} ( \partial_x^2(u, v) - \partial_y^2(u, v) )

\theta(i, j) = (1/2) \tan^{-1}( V_x(i, j) / V_y(i, j) )
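The three steps above translate almost directly into array code. The sketch below is a minimal NumPy version assuming a normalized grayscale image and a block size of w = 16; it uses np.gradient as a simple stand-in for the Sobel gradient operator of [6], which is an assumption made for brevity.

import numpy as np

def estimate_orientation(normalized, w=16):
    # Block-wise ridge orientation estimation from the gradient fields,
    # following the Vx, Vy, and theta equations given above [6][7].
    gy, gx = np.gradient(normalized.astype(np.float64))  # d/dy (rows), d/dx (cols)
    rows, cols = normalized.shape
    theta = np.zeros((rows // w, cols // w))
    for bi in range(rows // w):
        for bj in range(cols // w):
            block = (slice(bi * w, (bi + 1) * w), slice(bj * w, (bj + 1) * w))
            dx, dy = gx[block], gy[block]
            vx = np.sum(2.0 * dx * dy)
            vy = np.sum(dx ** 2 - dy ** 2)
            # arctan2 handles vy = 0 safely; equivalent to 0.5 * atan(vx / vy).
            theta[bi, bj] = 0.5 * np.arctan2(vx, vy)
    return theta

# Example: one orientation estimate per 16 x 16 block of a 128 x 128 image.
img = np.tile(np.sin(np.linspace(0, 8 * np.pi, 128)), (128, 1))
print(estimate_orientation(img).shape)  # (8, 8)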
4. Simulation Results
The database was created successfully; the result is shown in Fig. 8.

Fig. 8. Snapshot of the database created successfully.

In authentication, if the same two fingerprints are presented in the same sequence, the result is a successful authentication, as shown in Fig. 9. If different fingerprints are used, or the fingerprints are presented in a different sequence, the result is an unsuccessful authentication.

Fig. 9. Snapshot of successful authentication.

Conclusion
In this paper, an enhancement algorithm is used for extracting the features; normalization, binarization, and thinning are used within this algorithm. With this method, minutiae extraction and orientation estimation become straightforward. Combining two fingerprints and then extracting the features creates a virtual identity, which can be used for authentication.
References:
[1] Sheng Li, "Fingerprint combination for privacy protection," IEEE Trans. on Information Forensics and Security, February 2013.
[2] Asem Othman and Arun Ross, "On mixing fingerprints," IEEE Trans. on Information Forensics and Security, January 2013.
[3] A. Ross and A. Othman, "Mixing fingerprints for template security and privacy," in Proc. 19th Eur. Signal Processing Conf. (EUSIPCO), Barcelona, Spain, Aug. 29–Sep. 2, 2011.
[4] A. Othman and A. Ross, "Mixing fingerprints for generating virtual identities," in Proc. IEEE Int. Workshop on Information Forensics and Security (WIFS), Foz do Iguacu, Brazil, Nov. 29–Dec. 2, 2011.
[5] B. Yanikoglu and A. Kholmatov, "Combining multiple biometrics to protect privacy," in Proc. ICPR-BCTP Workshop, Cambridge, U.K., Aug. 2004.
[6] Lin Hong, "Fingerprint image enhancement: Algorithm and performance evaluation," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 8, August 1998.
[7] S. Kasaei, M. D., and B. Boashash, "Fingerprint feature extraction using block-direction on reconstructed images," in Proc. IEEE Region Ten Conf. on Digital Signal Processing Applications (TENCON), December 1997, pp. 303–306.
[8] Sen Wang, "Fingerprint enhancement in the singular point area," IEEE Signal Processing Letters, vol. 11, no. 1, January 2004.
[9] Ravi J., K. B. Raja, and Venugopal K. R., "Fingerprint recognition using minutia score matching," International Journal of Engineering Science and Technology, vol. 1(2).
[10] Raymond Thai, "Fingerprint Image Enhancement and Minutiae Extraction."