International Journal of Engineering Trends and Technology (IJETT) – Volume 9 Number 7 - Mar 2014
Recognition of 3D Face With Missing Parts by Using FRGC Dataset
Premavathi.P1, Christo Paul.E2, Saranya.C3
1 2nd year M.E CSE, Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
2 2nd year M.E CSE, Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
3 Assistant Professor, Dept. of CSE, Srinivasan Engineering College, Perambalur, Tamil Nadu, India.
Abstract--We propose and experimentally evaluate an original solution to 3D face recognition that supports accurate face matching even when parts of the facial scan are missing. To obtain an accurate face representation, key points of the 3-D depth image of the face are first extracted, and then the way the face depth changes along facial curves connecting pairs of key points is measured. Face comparison is performed by a sparse comparison of the facial curves defined across inlier pairs of matching key points between probe and gallery scans. In addition, the facial curves of the gallery scans are associated with a saliency measure in order to distinguish curves that model characterizing traits of some subjects from curves that are frequently observed in the faces of many different subjects. Face recognition is evaluated on the FRGC v2.0 challenge dataset.
Keywords--Face Recognition Grand Challenge (FRGC),
Depth image, Facial Curves.
I. INTRODUCTION
Three-dimensional face recognition (3D face
recognition) is a modality of facial recognition methods in
which the three-dimensional geometry of the human face
is used. It has been shown that 3D face recognition
methods can achieve significantly higher accuracy than
their 2D counterparts, rivaling fingerprint recognition. 3D
face recognition has the potential to achieve better
accuracy than its 2D counterpart by measuring geometry
of rigid features on the face. This avoids such pitfalls of
2D face recognition algorithms as change in lighting,
different facial expressions, make-up and head orientation.
Another approach is to use the 3D model to improve
accuracy of traditional image based recognition by
transforming the head into a known view. Additionally,
most range scanners acquire both a 3D mesh and the
corresponding texture. This allows combining the output
of pure 3D matchers with the more traditional 2D face
recognition algorithms, thus yielding better performance.
FACE recognition using 3-D scans of the face has been
recently proposed as an alternative or complementary
solution to conventional 2-D face recognition approaches
working on still images or videos. In fact, face
representations based on 3-D data are expected to be much
more robust to pose changes and illumination variations
than 2-D images, thus allowing accurate face recognition
also in real-world applications with unconstrained
acquisition. In such a case, probe scans are acquired in unconstrained conditions that may lead to missing parts (non-frontal pose of the face, or occlusions due to hair, glasses, scarves, hand gestures, etc.). These difficulties are further sharpened by the recent advent of 4-D scanners (3-D plus time) capable of acquiring temporal sequences of 3-D scans. In fact, the dynamics of facial movements captured by these devices can be useful for many applications but also increase the acquisition noise and the
variability in subjects’ pose. In summary, despite the
research and applicative importance that partial face
matching solutions are gaining, just a few works have
explicitly addressed the problem of 3-D face recognition
in the case in which some parts of the facial scans are
missing.
Many matching problems arise in real-world applications, and occlusions frequently corrupt 3D facial scans; 3D face recognition is proposed here to address these problems. The main technological limitation of 3D face recognition methods is the acquisition of the 3D image, which usually requires a range camera. Alternatively, multiple images taken from different angles with a common camera may be used to create the 3D model with significant post-processing. This is also a reason why 3D face recognition methods have emerged significantly later than 2D methods. Recently, commercial solutions have implemented depth perception by projecting a grid onto the face and integrating video capture of it into a high-resolution 3D model. This allows good recognition accuracy with low-cost off-the-shelf components. Such a process matches all the parts present in the face; it matches the whole face and provides a whole-face representation.
A novel geometric framework has been proposed for analyzing facial surfaces, with the specific goals of comparing, matching, and averaging their shapes. It represents the facial surface by radial curves emanating from the nose tip and uses elastic shape analysis of these curves. Different databases are used to evaluate the performance, namely FRGC v2.0, GavabDB and Bosphorus, each posing a different type of challenge.
Global 3-D face representations for partial face matching have been proposed in a limited number of works. In one of them, a canonical representation of the face is proposed which exploits the isometric invariance of the face surface to manage missing data obtained by randomly removing areas from frontal face scans; on a small database of 30 subjects, high recognition rates were reported. One of the ways to perform face recognition is by comparing selected facial features from the image with a facial database. Face recognition is typically used in security systems and
can be compared to other biometrics such as fingerprint or
eye iris recognition systems.
II. RELATED WORK
In existing systems, feature extraction is done using a whole-face representation. For 3-D face representation, combinations of solutions in these two categories are also possible, as well as multimodal approaches that combine 2-D and 3-D methods. Here we review 3-D face recognition solutions that have been proposed and evaluated using facial scans with missing parts. Global 3-D face representations for partial face matching have been proposed in a limited number of works; they do not preserve the quality of a whole-face representation, and the depth information of the image is not fully captured. Tackling the problem from the opposite perspective, some methods divide the face into regions and try to restrict the match to uncorrupted parts of the face. Only a few facial landmarks, from three to ten, can be accurately detected in an automatic way, often with manual assistance. In the case of partial face scans, up to half of these points are typically not detectable, so that a description of such points and of their relationships is of limited effectiveness for face recognition. The main technological limitation of 3D face recognition methods is the acquisition of the 3D image, which usually requires a range camera. Moreover, many occlusion problems affect the 3D face representation, the comparison with the database does not scale to large quantities of data, and the quality of whole-face prediction is not satisfied by the existing approaches. Although various databases have been used in existing systems, they do not cover all the cases in which data are missing.
First, the spatial distance (Euclidean distance in the 2-D plane of the depth image) between every pair of key points is computed. Then, the key points are iteratively grouped into a binary hierarchical cluster tree.
iteration step, the pair of closest key points (or clusters)
are grouped together. In this step, a single linkage strategy
is adopted for the computation of the distance between
clusters (that is, the shortest Euclidean distance between
elements in two clusters is assumed as cluster distance).
Finally, the decision of where to cut the hierarchical tree
into clusters is taken. In this step, branches off the bottom
of the hierarchical tree are pruned, and all the key points
below each cut are assigned to a single cluster. This
creates a partition of the data. The clusters are created by
detecting natural groupings in the hierarchical tree and
stopping the aggregation process when the spatial radius
of the cluster drops below a threshold of spatial coherence.
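As an illustration, this grouping step can be sketched with standard single-linkage hierarchical clustering, cutting the tree at a spatial-coherence radius. This is only a minimal sketch, assuming keypoints are given as 2-D coordinates on the depth image; the threshold value is a hypothetical choice, not a parameter fixed by the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_keypoints(keypoints_xy, radius=15.0):
    """Group keypoints with single-linkage hierarchical clustering.

    keypoints_xy : (N, 2) array of keypoint coordinates on the depth image.
    radius       : spatial-coherence threshold used to cut the tree
                   (hypothetical value, in pixels).
    Returns an array of cluster labels, one per keypoint.
    """
    # Build the binary hierarchical cluster tree with single linkage, i.e. the
    # distance between two clusters is the shortest Euclidean distance
    # between their members.
    tree = linkage(keypoints_xy, method="single", metric="euclidean")
    # Cut the tree so that no cluster is merged at a distance above `radius`.
    return fcluster(tree, t=radius, criterion="distance")

# Example usage with synthetic keypoints.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 200, size=(50, 2))
    print(cluster_keypoints(pts)[:10])
```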
The RANSAC algorithm is used to identify outliers in
the candidate set of key point correspondences. This
involves generating transformation hypotheses using a
minimal number of correspondences and then evaluating
each hypothesis based on the number of inliers among all
features under that hypothesis.
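The hypothesise-and-verify scheme can be sketched as follows for 2-D keypoint correspondences, using a rigid (rotation plus translation) transform estimated from a minimal sample of two correspondences. The iteration count and inlier threshold are hypothetical values, and the actual transformation model used by the method may differ.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def ransac_inliers(probe_pts, gallery_pts, n_iter=500, thresh=5.0, seed=0):
    """Boolean mask of inlier correspondences between matched keypoints.

    probe_pts, gallery_pts : (N, 2) arrays of corresponding keypoint coordinates
                             (N >= 2 is assumed).
    thresh                 : inlier distance threshold in pixels (hypothetical).
    """
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(probe_pts), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(probe_pts), size=2, replace=False)   # minimal sample
        R, t = rigid_transform(probe_pts[idx], gallery_pts[idx])  # hypothesis
        proj = probe_pts @ R.T + t
        mask = np.linalg.norm(proj - gallery_pts, axis=1) < thresh
        if mask.sum() > best_mask.sum():                          # keep best hypothesis
            best_mask = mask
    return best_mask
```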
III. OUR SYSTEM AND ASSUMPTIONS
We propose an original approach to perform 3-D face recognition in the presence of missing parts. The 3-D face description relies on the detection of key points on the 3-D face surface and on the description of the surface in correspondence to these key points, as well as along facial curves connecting pairs of key points. This is in contrast to solutions where key points correspond to meaningful face landmarks, such as the eyebrows, eyes, nose, cheeks and mouth. Recognition experiments from partial and full facial scans have been performed on the combined UND/FRGC v2.0 datasets and on the Gavab database so as to enable comparison.
The main contributions are: an original face representation that combines the repeatability of key points extracted from depth images of the face with the descriptiveness of facial curves; a face matching approach that combines spatial constraints for key point matching with an original formulation of the saliency of facial curves for gallery scans, thus allowing a weighted match of different facial curves; and a thorough experimental evaluation addressing the recognition accuracy both for scans with large pose variations and missing parts, and for scans with non-neutral facial expressions. The reported experimentation also includes a detailed comparative evaluation against competitor solutions.
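To make the representation concrete, the sketch below samples the face depth along the straight segment joining two keypoints on a depth image and compares two such profiles. The real facial-curve descriptor is more elaborate, so this is only an illustrative approximation; the sampling count and the Euclidean profile distance are assumptions, not the paper's exact formulation.

```python
import numpy as np

def depth_profile(depth_img, kp_a, kp_b, n_samples=50):
    """Sample how the face depth changes along the segment joining two keypoints.

    depth_img : 2-D array of depth values (the depth image of the face).
    kp_a, kp_b: (row, col) coordinates of the two keypoints.
    n_samples : number of samples along the curve (hypothetical choice).
    """
    rows = np.linspace(kp_a[0], kp_b[0], n_samples)
    cols = np.linspace(kp_a[1], kp_b[1], n_samples)
    r0 = np.clip(np.floor(rows).astype(int), 0, depth_img.shape[0] - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, depth_img.shape[1] - 2)
    fr, fc = rows - r0, cols - c0
    # Bilinear interpolation of the depth image at the sampled positions.
    top = depth_img[r0, c0] * (1 - fc) + depth_img[r0, c0 + 1] * fc
    bot = depth_img[r0 + 1, c0] * (1 - fc) + depth_img[r0 + 1, c0 + 1] * fc
    return top * (1 - fr) + bot * fr

def curve_distance(profile_a, profile_b):
    """Euclidean distance between two facial-curve depth profiles."""
    return float(np.linalg.norm(np.asarray(profile_a) - np.asarray(profile_b)))
```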
IV. SYSTEM PRELIMINARIES
Fig. 1. System architecture: the captured image is preprocessed, features are extracted, and a score function drives the matching process of the input image against the datasets to yield the accurate result.
A. Capture Image
Facial feature localization is an important step in many subsequent tasks, such as face recognition, pose normalization, expression understanding and face tracking.
Fig. 2. Captured face image.
B. Preprocessing
To reduce the computational burden, the high-resolution face images are downsampled.
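A minimal sketch of this step, assuming the depth image is stored as a NumPy array; the downsampling factor of 4 is a hypothetical choice, and block averaging is used here although other resampling schemes would serve equally well.

```python
import numpy as np

def downsample_depth(depth_img, factor=4):
    """Downsample a depth image by block averaging to reduce the computational burden.

    depth_img : 2-D array of depth values.
    factor    : downsampling factor (hypothetical choice).
    """
    h, w = depth_img.shape
    h, w = h - h % factor, w - w % factor            # crop to a multiple of the factor
    blocks = depth_img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))                  # average each factor x factor block
```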
C. Smoothing
This technique is used to eliminate the illumination artifacts that occur naturally in the image due to natural distortions. Smoothing removes short-term variations, or "noise", to reveal the important underlying form of the data.
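As a sketch, one common choice for this step is a median filter applied to the depth image; the window size below is a hypothetical value, not a parameter fixed by the paper.

```python
from scipy.ndimage import median_filter

def smooth_depth(depth_img, size=3):
    """Remove short-term variations (noise) from a depth image with a median filter.

    size : side of the square filter window (hypothetical choice).
    """
    return median_filter(depth_img, size=size)
```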
D. Normalization
Normalization is a process that changes the range of pixel intensity values. Applications include photographs with poor contrast due to glare, for example. Normalization is sometimes called contrast stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion. The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization.
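A minimal sketch of min-max contrast stretching for a depth or intensity image; the target range of [0, 255] is an assumption made for illustration only.

```python
import numpy as np

def normalize(img, new_min=0.0, new_max=255.0):
    """Linearly stretch pixel values to [new_min, new_max] (contrast stretching)."""
    old_min, old_max = float(img.min()), float(img.max())
    if old_max == old_min:                       # flat image: nothing to stretch
        return np.full_like(img, new_min, dtype=float)
    scaled = (img - old_min) / (old_max - old_min)
    return scaled * (new_max - new_min) + new_min
```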
Fig. 3. Keypoints clustering.
E. Key Points Clustering
The approach relies on the detection of keypoints on the 3-D face surface and on the description of the surface in correspondence to these keypoints, as well as along facial curves connecting pairs of keypoints. This is in contrast to solutions where keypoints correspond to meaningful face landmarks, such as the eyebrows, eyes, nose, cheeks and mouth. The resulting face representation combines the repeatability of keypoints extracted from depth images of the face with the descriptiveness of facial curves.

F. Keypoints Repeatability
Keypoints extracted from different facial scans of the same individual are expected to be located approximately in the same positions on the face. The face matching approach combines spatial constraints for keypoint matching with an original formulation of the saliency of facial curves for gallery scans, thus allowing a weighted match of different facial curves.
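Repeatability can be sketched as the fraction of keypoints in one scan that fall within a small radius of a keypoint detected in another scan of the same subject, after the two scans have been aligned; the tolerance radius below is a hypothetical value and this is not the paper's exact measure.

```python
import numpy as np
from scipy.spatial import cKDTree

def keypoint_repeatability(kps_a, kps_b, radius=5.0):
    """Fraction of keypoints of scan A that have a keypoint of scan B nearby.

    kps_a, kps_b : (N, 2) arrays of keypoint coordinates on aligned depth images.
    radius       : matching tolerance in pixels (hypothetical choice).
    """
    if len(kps_a) == 0 or len(kps_b) == 0:
        return 0.0
    tree = cKDTree(kps_b)
    dists, _ = tree.query(kps_a, k=1)   # nearest keypoint of B for each keypoint of A
    return float(np.mean(dists <= radius))
```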
G. Facial Curves
Fig. 4. Facial curves and distance distribution graph.
Fig. 4 shows the distribution of distance values for two facial curves of the same gallery scan. The two facial curves are drawn on the depth image of the subject; in panels (c) and (f), the distribution of the values of the distance of the two facial curves with respect to the facial curves of all the other gallery models is reported with a bar histogram (in the same plots, the curve of the Weibull distribution fitting the data is drawn in red).

H. Face Matching
Given two face scans, the decision about whether they represent the same person or not relies on the comparison of the facial curves detected on the two scans. However, in order to support accurate recognition, the comparison of facial curves is weighted by the saliency associated with the curves of the gallery scans.
Fig. 5. Matching process.

While the majority of face recognition researchers will agree that performance can be significantly increased, there is a contentious debate about how to achieve this goal. This analysis suggests that, depending on the specific application, a trade-off between accuracy and computational time can be found. Similar results were obtained for the UND dataset. On this dataset, our results are compared with those reported by approaches that used an experimental setup similar to the one proposed in this work. Table V summarizes the evaluation using the rank-1 recognition rate (RR) as performance indicator. The results clearly demonstrate that our approach is capable of achieving or improving the state-of-the-art performance for all the classes of scans except one (i.e., looking-down). As a general behavior of the approaches under comparison, a quite large difference in recognizing left- and right-side scans can be noted for this dataset (about 11%, 14% and 16% decrease, respectively, for our work and the compared approaches). Measuring the yaw rotation for the left- and right-side scans, average angles of about 50 and 70 degrees were obtained, respectively. These rotation angles are lower than the nominal values reported in the database description, and the difference of around 20 degrees between the yaw rotations of left and right scans can explain the different accuracy shown in the recognition.
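As a small illustration of the evaluation protocol, the sketch below computes the rank-1 recognition rate from a matrix of match scores between probe and gallery scans. The convention that higher scores mean better matches is an assumption; with distance-like scores the argmax becomes an argmin.

```python
import numpy as np

def rank1_recognition_rate(scores, probe_ids, gallery_ids):
    """Rank-1 recognition rate from a probe-by-gallery score matrix.

    scores      : (P, G) array, scores[i, j] = match score of probe i vs gallery j
                  (higher is assumed to mean a better match).
    probe_ids   : length-P array with the subject identity of each probe.
    gallery_ids : length-G array with the subject identity of each gallery scan.
    """
    best = np.argmax(scores, axis=1)    # best-matching gallery scan for each probe
    correct = np.asarray(probe_ids) == np.asarray(gallery_ids)[best]
    return float(np.mean(correct))
```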
V. CONCLUSION
Analyzing the various algorithms for 3D face recognition, we conclude that 3D face recognition solves the main challenges found with 2D face recognition, namely the illumination and pose problems, through various approaches. The 3D face recognition approaches are still tested on relatively small datasets. The datasets have been growing over the years as better acquisition devices become available, and as the amount of data increases the performance of the representation tends to decrease, so the algorithms must be adjusted and improved before they will be able to handle large datasets with the same recognition performance. The drawback of most presented 3D face representation methods is that most algorithms still treat the human face as a rigid object, which means that they cannot fully handle expressive face representations. Compared to the 3D case, most 2D face recognition algorithms are already tested on large datasets and are able to handle the size of the data tolerably well. 3D faces provide a whole-face representation, including surface information that can be used for face recognition. Another major advantage is that 3D face recognition is pose invariant. Therefore, 3D face recognition is still a challenging but very promising research area.
VI. ACKNOWLEDGEMENT
First and foremost, the authors would like to thank God Almighty, who always guides us in the path of knowledge and wisdom. We thank the editors and anonymous reviewers for their valuable comments that significantly improved the quality of this paper. We are very grateful to all the staff members and our friends who helped a lot to complete this work.
REFERENCES
[1]. A. Colombo, C. Cusano, and R. Schettini, "Gappy PCA classification for occlusion tolerant 3D face detection," J. Math. Imag. Vis., vol. 35, no. 3, pp. 193–207, Nov. 2009.
[2]. A. S. Mian, M. Bennamoun, and R. Owens, "An efficient multimodal 2D-3D hybrid approach to automatic face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 11, pp. 1927–1943, Nov. 2007.
[3]. A. S. Mian, M. Bennamoun, and R. Owens, "Key point detection and local feature matching for textured 3D face recognition," Int. J. Comput. Vis., vol. 79, no. 1, pp. 1–12, Aug. 2008.
[4]. D. Huang, G. Zhang, M. Ardabilian, Y. Wang, and L. Chen, "3D face recognition using distinctiveness enhanced facial representations and local feature hybrid matching," in Proc. IEEE Int. Conf. Biometrics: Theory, Applications and Systems (BTAS), Washington, DC, Sep. 2010, pp. 1–7.
[5]. G. Passalis, P. Perakis, T. Theoharis, and I. A. Kakadiaris, "Using facial symmetry to handle pose variations in real-world 3D face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 10, pp. 1938–1951, Oct. 2011.
[6]. H. Drira, B. Ben Amor, M. Daoudi, and A. Srivastava, "Pose and expression-invariant 3D face recognition using elastic radial curves," in Proc. British Machine Vision Conf., Aberystwyth, U.K., Aug. 2010, pp. 1–11.
[7]. I. A. Kakadiaris, G. Passalis, G. Toderici, N. Murtuza, Y. Lu, N. Karampatziakis, and T. Theoharis, "Three-dimensional face recognition in the presence of facial expressions: An annotated deformable model approach," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 640–649, Apr. 2007.
[8]. K. W. Bowyer, K. I. Chang, and P. J. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition," Comput. Vis. Image Understand., vol. 101, no. 1, pp. 1–15, Jan. 2006.
[9]. S. Berretti, A. Del Bimbo, and P. Pala, "3D face recognition using iso-geodesic stripes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 12, pp. 2162–2177, Dec. 2010.
[10]. S. Berretti, A. Del Bimbo, and P. Pala, "Sparse matching of salient facial curves for recognition of 3-D faces with missing parts," IEEE Trans. Inf. Forensics Security, vol. 8, no. 2, pp. 374–389, Feb. 2013.
[11]. Y. Wang, J. Liu, and X. Tang, "Robust 3D face recognition by local shape difference boosting," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 10, pp. 1858–1870, Oct. 2010.