Hand Geometric and Handprint Texture based Prototype for Identity
Authentication
Artzai Picón Ruiz
INFOTECH Dept., ROBOTIKER, Corporación Tecnológica TECNALIA
Parque Tecnológico, Edificio 202. E-48170 Zamudio (Bizkaia), SPAIN

Estibaliz Garrote Contreras
INFOTECH Dept., ROBOTIKER, Corporación Tecnológica TECNALIA
Parque Tecnológico, Edificio 202. E-48170 Zamudio (Bizkaia), SPAIN

Begoña Suárez Fernández
INFOTECH Dept., ROBOTIKER, Corporación Tecnológica TECNALIA
Parque Tecnológico, Edificio 202. E-48170 Zamudio (Bizkaia), SPAIN

Silvia Renteria Bilbao
INFOTECH Dept., ROBOTIKER, Corporación Tecnológica TECNALIA
Parque Tecnológico, Edificio 202. E-48170 Zamudio (Bizkaia), SPAIN
Abstract: - A person's identity verification prototype based on a scanner or camera acquired hand image offers very good performance at a very low cost. This document describes the development of a palmprint recognition prototype based on geometrical features and texture, as well as the excellent results obtained by applying classifiers that are new to the hand recognition area, such as neural network classifiers and texture based statistical ones, which achieve better performance than the currently used algorithms.
Key-Words: - Biometrics, Hand recognition, Machine vision, Neural networks, Pattern recognition,
Security
1 Introduction
Biometrics is the technology that identifies people and obtains human features using physical characteristics or behaviour.
These technologies can establish a relationship between a person and his pattern in a safe and non-transferable way.
In this paper some improvements in the feature extraction methods are presented, as well as new approaches in the classification methods, using not only geometrical characteristics but also texture based ones, obtaining higher performance than present systems.
2 Developed Prototype
The main difference between biometric methods and classical ones lies in the fact that the person himself is the key. This key cannot be stolen and its falsification is, at the very least, hard.
The verification of a person's identity based on measurements of the palmprint is presented as a safe and very economic method, allowing its implementation in many biometric applications.
There are two working modes in a biometric system: authentication and identification. Authentication consists of verifying whether the person is who he claims to be (1 vs 1). Identification consists of obtaining the identity of a person by searching a database, so the user does not have to tell the system who he is (1 vs N).
The prototype developed consists of a software system, which takes charge of the processing and the management of the data obtained from each user using a database, and of a hardware module constituted by a PC scanner or a CCD camera and a compatible PC.
This prototype allows camera or scanner image acquisition, user registration and information centralization in a database accessible from several ID checkpoints.
To measure the performance of a biometric system, the False Acceptance Rate and the False Rejection Rate are used [1][2].
FAR (False Acceptance Rate): the percentage of users who are accepted by the system when they should not have been accepted.
FRR (False Rejection Rate): the percentage of users who are rejected by the system when they should have been accepted.
EER (Equal Error Rate): the error rate at the threshold where FAR and FRR are equal, so that a true user has the same probability of being rejected as an impostor has of being accepted.
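As an illustration only (not part of the original prototype), the following minimal Python sketch shows how FAR, FRR and the EER could be estimated from genuine and impostor distance scores collected in verification tests; the score arrays and the threshold sweep are assumptions made for the example.

import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR/FRR of a distance-based verifier: accept when distance <= threshold."""
    far = np.mean(np.asarray(impostor) <= threshold)   # impostors wrongly accepted
    frr = np.mean(np.asarray(genuine) > threshold)     # genuine users wrongly rejected
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over the observed scores and return the point where FAR ~ FRR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    far, frr = np.array(rates).T
    i = np.argmin(np.abs(far - frr))
    return thresholds[i], (far[i] + frr[i]) / 2.0

# Made-up distance scores, for illustration only (smaller distance = more similar)
genuine = np.array([0.8, 1.1, 0.9, 1.4, 2.0])
impostor = np.array([3.5, 2.9, 4.2, 3.1, 5.0])
t, eer = equal_error_rate(genuine, impostor)
print("EER approx.", eer, "at threshold", t)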
Most hand recognition systems need the user's hand to be placed in a rigid way, determined by some bumpers, so that no relative movement between the different fingers is possible [3]. However, new approaches are being developed: a feature extraction algorithm for computer scanner acquired hand images has been developed, allowing a more flexible hand placement [4].

Fig. 1 Picture of the developed prototype

The prototype is installed to allow access into a laboratory at Robotiker and several people use it every day.
3. Feature extraction
The main problem in a hand based recognition system lies in making the correct choice of the features that are going to be used to classify a person. This algorithm tries to extract a set of characteristics that are independent of noise and of hand placement.
To obtain these characteristics the following method is applied:
- Hand image acquisition.
- Hand isolation.
- Detection of points of interest.
- Geometrical characteristics extraction.
- Texture characteristics location.
3.1 Geometric features extraction
A color image is directly obtained from the scanner.

Fig. 1 Acquired image

The method consists of obtaining the hand edges to allow the detection of finger intersections and finger ends through the curvature of the hand shape.

Fig. 2 Obtained hand borders

The hand borders are extracted using classical machine vision algorithms [4]. To obtain the hand curvature, the whole border array is examined and the outer (cross) product is used to determine the curvature at each point of the edge image (1), where A and B are border points neighbouring the border point I under analysis:

$\mathrm{Curvature} = \sin(\alpha) = \dfrac{\vec{IA} \times \vec{IB}}{\lvert\vec{IA}\rvert\,\lvert\vec{IB}\rvert}$   (1)

Fig. 3 Curvature estimation

After calculating the curvature values along the border array, it is observed that the finger ends correspond to the valleys of the curvature graph and the finger intersections correspond to its peaks (Fig. 4).

Fig. 4 Curvature graph

Searching for the maximum and minimum points of the curvature graph allows these points, which we will call "main points", to be extracted (Fig. 5).

Fig. 5 Main points extracted from the captured image
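As an illustrative sketch only (the concrete implementation of the prototype is not given in the paper), the curvature of (1) could be computed along an ordered border array as follows; the neighbour offset k and the border representation are assumptions of the example.

import numpy as np

def border_curvature(border, k=15):
    """Curvature (1) along an ordered array of border points (x, y).

    For each point I, the vectors IA and IB to the k-th previous and k-th next
    border points are formed; the normalised cross product gives the sine of the
    angle between them. Peaks and valleys of the resulting graph locate the
    finger intersections and finger ends (the sign convention depends on the
    traversal direction of the border)."""
    border = np.asarray(border, dtype=float)
    n = len(border)
    curvature = np.zeros(n)
    for i in range(n):
        A = border[(i - k) % n]
        B = border[(i + k) % n]
        IA = A - border[i]
        IB = B - border[i]
        cross = IA[0] * IB[1] - IA[1] * IB[0]          # 2D cross (outer) product
        norm = np.linalg.norm(IA) * np.linalg.norm(IB)
        curvature[i] = cross / norm if norm > 0 else 0.0
    return curvature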
Using these points, and some characteristics related to the distance from a point inside the hand to the closest border point, we are able to create the feature vector of a hand image.

Fig. 6 Extracted hand characteristics

Finally, 48 characteristics are extracted from a hand and used to create the hand feature vector.
To generate the model, it is necessary to obtain a group of feature vectors from each person. The statistical model of a person's features is called the template of that person.
3.2 Texture characteristic extraction
As well as the previous characteristics, a texture feature extraction is needed to perform a texture based classification.
For this purpose, a trapezoidal zone is extracted using the main points coordinates and a transform is applied to convert it into a rectangle (Fig. 7).

Fig. 7 Hand texture extraction

The hand texture in the rectangle will be used as a new set of features for hand based classification.
In order to reduce the amount of information contained in the texture, a PCA (Principal Component Analysis) based reduction method is chosen.
The PCA based reduction method uses the mean of the hand textures of each person (as described in the training method developed for facial recognition [5]), which can markedly decrease the intraclass differences while increasing the interclass differences.
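Purely as an illustration of this step (the paper does not specify the implementation), the trapezoid-to-rectangle mapping and the PCA reduction could be sketched as follows; the use of OpenCV, the output size, the corner ordering and the number of retained components are assumptions of the example.

import numpy as np
import cv2

def extract_palm_texture(image, corners, size=(64, 64)):
    """Warp the trapezoidal palm zone (4 corners derived from the main points,
    given in top-left, top-right, bottom-right, bottom-left order) into a
    fixed-size rectangle, as in Fig. 7."""
    src = np.float32(corners)
    dst = np.float32([[0, 0], [size[0] - 1, 0],
                      [size[0] - 1, size[1] - 1], [0, size[1] - 1]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, size)

def pca_basis(texture_vectors, n_components=20):
    """PCA over flattened texture images: returns the mean texture and the
    principal directions used to project new textures into a reduced space."""
    X = np.asarray(texture_vectors, dtype=float)    # one flattened texture per row
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(texture, mean, components):
    """Reduced texture feature vector of a new hand texture."""
    return components @ (np.asarray(texture, dtype=float).ravel() - mean)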
4. Classifiers
Two sorts of classifiers are proposed: statistics based and neural network based ones.

4.1 Statistical classifiers
These classifiers try to obtain a statistical model in order to be able to estimate, given the input features, the probability that those features belong to a particular class.

4.1.1 Template generation
To create a template, a group of samples is obtained and their feature vectors are extracted.
Let $V_i$ be the feature vector of a hand image of a person, with m components, m being the number of features obtained from a hand, and let n be the number of hand images of that person.
The mean vector and the covariance matrix are calculated using the n samples (2):

$\mathrm{Mean} = \dfrac{1}{n}\sum_{i=1}^{n} V_i \,, \qquad \mathrm{Covariance} = \dfrac{1}{n}\sum_{i=1}^{n} (V_i - \mathrm{Mean})(V_i - \mathrm{Mean})'$   (2)

A mean vector and a covariance matrix will be used to define each person.
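A minimal numerical sketch of (2), assuming the n feature vectors of a person are stacked as the rows of a NumPy array:

import numpy as np

def build_template(vectors):
    """Person template as in (2): mean vector and covariance matrix of the
    n feature vectors (one m-dimensional vector per row)."""
    V = np.asarray(vectors, dtype=float)            # shape (n, m)
    mean = V.mean(axis=0)
    centered = V - mean
    covariance = centered.T @ centered / len(V)     # 1/n estimate, as in (2)
    return mean, covariance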
4.1.2 User Classification
To classify a person, a new hand image is acquired, its feature vector is extracted and it is compared with the corresponding template using the Mahalanobis distance (3):

$\mathrm{Dist} = (V - \mathrm{Mean})' \, \mathrm{Covariance}^{-1} \, (V - \mathrm{Mean})$   (3)

The Dist value indicates the similarity of the present vector to the template of a person: Dist is smaller when the feature vector has been obtained from a hand of the same person the template belongs to.
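A corresponding sketch of the decision rule based on (3); the regularisation term and the threshold value are assumptions added so the example also runs with few samples.

import numpy as np

def mahalanobis_distance(v, mean, covariance, reg=1e-6):
    """Distance (3) between a feature vector and a person's template."""
    cov = covariance + reg * np.eye(len(mean))      # small regularisation for invertibility
    diff = np.asarray(v, dtype=float) - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

def authenticate(v, mean, covariance, threshold):
    """Accept the claimed identity when the distance falls below the threshold."""
    return mahalanobis_distance(v, mean, covariance) <= threshold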
4.2 Neural classifiers
Neural network based classifiers are proposed to perform identification and authentication tasks as an alternative to the statistical ones.
Neural networks take advantage of their ability to learn from training data and their capability of extracting relationships among features that are unknown to us.
The networks are trained with a part of the obtained vectors and tested against the rest of the vectors.
Several neural network models are proposed; they are explained in the experimental results section because each model has a very different structure.
5. Experimental Results

5.1 Geometric features based results

5.1.1 Statistical classifier
The Mahalanobis classifier was used to classify the persons using their geometric features.

5.1.1.1 Authentication results
Authentication tests were accomplished using 155 hand images from 14 different persons; 5 images from each person were used to create the 14 templates and the rest were used to test the system.
To determine the classifier performance, each test image was crossed against the 14 templates, observing the distances (3) produced both when the test image and the template belong to the same person and when they do not.
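For illustration only, the cross-matching protocol described above could be sketched as follows; the template dictionary of (mean, covariance) pairs and the distance function (for instance the Mahalanobis sketch of Section 4.1.2) are assumptions of the example.

import numpy as np

def cross_match(test_vectors, test_labels, templates, distance):
    """Match every test vector against every template and split the distances
    into genuine (same person) and impostor (different person) scores."""
    genuine, impostor = [], []
    for v, lab in zip(test_vectors, test_labels):
        for person, (mean, cov) in templates.items():
            d = distance(v, mean, cov)
            (genuine if person == lab else impostor).append(d)
    return np.asarray(genuine), np.asarray(impostor)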
The obtained performance of the biometric system is shown in Table 1 and in Fig. 8.

Table 1. FAR: 0.6 %   FRR: 14 %   EER: 2.25 %

Fig. 8 FAR and FRR as a function of the decision threshold
5.1.1.2 Identification results
To test the system performance in the identification process, each image is crossed against all the templates but, in this case, only the closest template is assigned, without taking the obtained distance into account.
The 14 templates and the 85 images not used in the template generation were used to test the system.
A success is considered when the closest template to a test image belongs to the same person as the test image.
The obtained results are shown in Table 2.

Table 2. Correct: 91.76 %   Incorrect: 8.24 %

5.1.2 Neural network classifiers

5.1.2.1 Neural network classifier for identification
A multilayer perceptron neural network based classifier is considered, using backpropagation training.
An output vector is chosen for each feature vector. This vector consists of n values, where n is the number of classes: it contains a one for the class the feature vector belongs to and zeros for the remaining classes.
The network was trained with 70 images from the 14 persons; another 43 images were used for network verification and another 42 to perform the final test.
The network assigned all the test and verification images (and, of course, the training ones too) to the correct classes among the 14 existing ones.

Table 3. Correct: 100 %   Incorrect: 0 %

This classifier works better than the statistical one in identification tasks.
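As an illustration of this kind of classifier only: the original network was a backpropagation-trained multilayer perceptron; the use of scikit-learn here, the hidden layer size and the data handling are assumptions of the sketch, not the authors' implementation.

import numpy as np
from sklearn.neural_network import MLPClassifier

def train_identification_mlp(X_train, y_train, hidden=20):
    """X_train: geometric feature vectors, y_train: person labels (0..13).
    The one-of-n target coding is handled internally by the classifier."""
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), activation='logistic',
                        solver='sgd', max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    return clf

def identify(clf, feature_vector):
    """Return the predicted class (person) for a new hand feature vector."""
    return int(clf.predict(np.asarray(feature_vector).reshape(1, -1))[0])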
It was observed that, although the network classified all the test images correctly, it did not have the ability to discriminate images which do not belong to any of the classes.
To overcome this drawback, the training images were divided into two groups, the first group with half of the persons and the second with the other half.
A network was trained with each group and the two networks were tested with the whole group of test images.
In this way, the behaviour of the network can be observed when inputs belonging to no known class are presented.
The Euclidean distance between the output vector of the network and the target output vectors of each class was used as a threshold. The obtained results are shown in Table 4.

Table 4. FAR: 2.65 %   FRR: 18 %   EER: 9 %
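A sketch of the rejection rule described above, assuming the network output is a vector of n class scores and the targets are the one-of-n codings; the threshold value and the exact comparison strategy are assumptions of the example.

import numpy as np

def identify_with_rejection(output_vector, n_classes, threshold):
    """Compare the network output with every one-of-n target vector; assign the
    closest class only if its Euclidean distance is below the threshold,
    otherwise reject the sample as not belonging to any known class."""
    out = np.asarray(output_vector, dtype=float)
    targets = np.eye(n_classes)                      # one target vector per class
    distances = np.linalg.norm(targets - out, axis=1)
    best = int(np.argmin(distances))
    return best if distances[best] <= threshold else None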
5.1.2.2 Neural network classifier for authentication
Unlike the previous classifier, which requires a different network if the number of classes changes, or at least a different network for each group of persons, this classifier is intended to be independent of the number of classes to authenticate.
To achieve this, the input of the network is the difference vector between the mean of the training feature vectors and the present vector. The output of the network is 1 if the training feature vectors and the present vector belong to the same person, and 0 if they belong to different persons.
155 images were taken; 70 of them were used to create 14 mean vectors (one for each class), which were used as templates.
The other 85 feature vectors were used to create 1190 difference vectors (85 feature vectors times 14 different mean vectors). These difference vectors are used as the input of the neural network.
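The construction of these difference vectors could be sketched as follows; pairing every test vector with every class mean reproduces the 85 x 14 = 1190 combinations mentioned above, while the function and variable names are hypothetical.

import numpy as np

def build_difference_set(feature_vectors, labels, class_means):
    """Create (difference vector, target) pairs: target 1 when the feature
    vector and the class mean belong to the same person, 0 otherwise."""
    X, y = [], []
    for v, lab in zip(feature_vectors, labels):
        for cls, mean in class_means.items():        # dict: class -> mean vector
            X.append(np.asarray(v, dtype=float) - np.asarray(mean, dtype=float))
            y.append(1 if lab == cls else 0)
    return np.asarray(X), np.asarray(y)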
The network was trained using the first 595 difference vectors; another 255 were used for verification and 340 for the final test.
Note that the training group only contains the first 7 classes, the verification group the following 3 classes and the test group the last 4 classes.
The selected network was a multilayer perceptron with 25 neurons in the input layer (not all the characteristics were used), a hidden layer with 12 neurons and an output layer indicating whether the authentication is correct or not. All the activation functions were sigmoid.
The results are shown in Table 5.

Table 5. FAR: 2 %   FRR: 2.3 %

Fig. 9 FAR and FRR of the verification classifier as a function of the decision threshold
5.2 Hand texture based results
The previous classifiers do not use the hand texture information to make their decision. The use of the Mahalanobis classifier on the texture features is proposed to accomplish this.

Fig. 10 Hand textures from two different persons

35 hand textures were tested against the previously calculated classifiers (as described in [5] for face recognition systems), obtaining very low error rates.
The Mahalanobis distance was used as the threshold to accept or reject a person, and the results are shown in Table 6 and in Fig. 11.

Table 6. FAR: 0 %   FRR: 4.16 %   EER: 0.47 %

Fig. 11 FAR and FRR of the texture based classifier as a function of the threshold
6. Result Discussion
The results obtained in geometric features based classification are very promising, similar to or better than the ones obtained by other groups, as reported in Bulatov et al. [4]:
- Jain et al.: FAR of 2% and a false rejection rate (FRR) of 15%.
- Jain and Duta: FAR of 2% and FRR of 3.5%.
- Raul Sanchez-Reillo et al.: error rates below 10% in verification.
- Bulatov et al.: FAR of 1% and FRR of 3%.
Our results using the geometric features give a FAR of 0.6% and a FRR of 14%; by changing the threshold sensitivity we obtain a FAR of 2% and a FRR of 3%, as shown in Fig. 8.
Hand texture based classification has obtained remarkable results, better than those obtained using geometrical features. The results using texture features give a FAR of 0% and a FRR of 4.16%; by changing the sensitivity of the threshold we get a FAR of 0.47% and a FRR of 0% (as shown in Fig. 11), these results being better than the ones obtained by other groups.
Neural network based classifiers obtain better performance than the statistical ones in identification, but they are not able to reject persons who do not belong to any class as well as the statistical methods do.
To correct this problem a two-stage process is proposed: first, a person is identified using the neural classifier, and then a statistical classifier is used to validate the neural network classification.
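A sketch of the proposed two-stage decision, assuming the neural identification and the statistical template comparison of the previous sections are available as functions; the names and the threshold are illustrative only.

def two_stage_identification(feature_vector, mlp_identify, templates, threshold,
                             distance):
    """Proposed combination: the neural classifier proposes an identity and the
    statistical (Mahalanobis) classifier validates or rejects it."""
    candidate = mlp_identify(feature_vector)          # stage 1: neural identification
    mean, covariance = templates[candidate]           # template of the proposed person
    dist = distance(feature_vector, mean, covariance)
    return candidate if dist <= threshold else None   # stage 2: statistical validation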
7. Conclusions
Neural networks are able to improve the classification in identification tasks, provided that the final verification is made by a statistical classifier.
Hand texture features based classification offers very good results in authentication, better than those offered by the geometrical methods. A combination of both can greatly increase the system performance.
The acceptance of the prototype by the users is good, and no complaints have been detected.

8. Future Work
To validate the results, it is planned to repeat the tests using a much larger number of images and persons.
The texture and geometric features will be combined to obtain better performance in identification and authentication tasks.
Due to the good results obtained with texture features, it is also planned to extend the present texture area to other hand regions such as the fingers.

9. Acknowledgements
The authors are grateful to the Basque Government for partially supporting this work through the project IODA: Gestión de la Seguridad Informática y de la Identidad Digital.
References:
[1] J.A. Gutierrez et al., Estado del Arte: El Reconocimiento Biométrico, Robotiker, 2002.
[2] Virginia Espinosa Duró, Evaluación de sistemas de reconocimiento biométrico, Departamento de Electrónica y Automática, Escuela Universitaria Politécnica de Mataró, 2001.
[3] Raul Sanchez-Reillo, Carmen Sanchez-Avila, and Ana Gonzalez-Marcos, Biometric identification through hand geometry measurements, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(10):1168–1171, 2000.
[4] Yaroslav Bulatov, Sachin Jambawalikar, Piyush Kumar and Saurabh Sethia, Hand recognition using geometric classifiers, DIMACS Workshop on Computational Geometry, Rutgers University, Piscataway, 2002.
[5] Artzai Picón, Sistema software en tiempo real para el reconocimiento de personas en base a sus características faciales, Proyecto fin de carrera, sección de publicaciones ETSIIT, Bilbao, 2002.