FACE RECOGNITION - SURVEY
G. Ramkumar¹, M. Manikandan²
¹Assistant Professor, ECE Department, Maamallan Institute of Technology, India
²Associate Professor, ECE Department, MIT (Anna University), India
ramkumar_mppl@yahoo.co.in, maniiz@yahoo.com
Abstract - Face recognition is one of the most successful applications of image
analysis and pattern recognition. Major advancements have taken place in face
recognition over the past ten to fifteen years. This paper provides an overview
of some of the well-known methods in each of the main categories of face
recognition, examines some of the benefits and drawbacks of these schemes,
reviews the most recent algorithms developed for this purpose, and attempts to
convey the state of the art of face recognition technology.
Keywords - Face recognition, Biometrics
I. INTRODUCTION
Biometric-based techniques have emerged as the most promising option for
recognizing individuals in recent years. Instead of authenticating people and
granting them access to physical and virtual domains based on passwords, smart
cards, plastic cards, tokens, keys and so forth, these methods examine an
individual's physiological and/or behavioral characteristics in order to
determine and/or ascertain his or her identity. Passwords and PINs are hard to
remember and can be stolen or guessed; cards, tokens, keys and the like can be
misplaced, forgotten or duplicated; magnetic cards can become corrupted and
unreadable. An individual's biological traits, however, cannot be misplaced,
forgotten, stolen or forged. Biometric-based technologies include
identification based on physiological characteristics such as face,
fingerprints, finger and hand geometry, hand veins, palm, iris, retina, ear
and voice, and on behavioral traits such as gait, signature and keystroke
dynamics [1]. Face recognition appears to offer several advantages over other
biometric methods. Face recognition is used for two purposes:
1. Verification: When presented with a face image of an unknown individual
along with a claim of identity, determine whether the individual is who he or
she claims to be.
2. Identification: Given an image of an unknown individual, determine the
person's identity by comparing that image with a database of images of known
individuals.
Face recognition is also used in conjunction with other biometrics such as
speech, iris, fingerprint, ear and gait recognition in order to enhance the
recognition performance of these methods [2-15].
II. FACE RECOGNITION TECHNIQUES
There are three important categories of face recognition methods:
1. Appearance-based methods
2. Feature-based matching methods
3. Hybrid methods
APPEARANCE-BASED APPROACHES
These methods take the entire facial region as the raw input to the
recognition system. The initial phase is to transform the face recognition
problem into a face-space analysis problem, to which a number of well-known
statistical methods are then applied.
A. The Eigenface Method
Kirby and Sirovich first proposed the Eigenfaces method for recognition.
Encouraged by their work, Turk and Pentland improved it by implementing the
Eigenfaces method based on Principal Component Analysis (PCA) [16]. PCA is a
Karhunen-Loeve transformation: a well-established linear dimensionality
reduction method that determines a set of mutually orthogonal basis functions
and uses the leading eigenvectors of the sample covariance matrix to
characterize the lower-dimensional space, as shown in Fig. 1.
Fig. 1. Feature vectors derived using eigenfaces [17]
Moghaddam et al. [18] then suggested a Bayesian PCA method, in which the
eigenface method based on simple subspace-restricted norms is extended to use
a probabilistic measure of similarity. Chung et al. [19] suggested a combined
approach that uses PCA and Gabor filters in two phases: Gabor filters first
extract facial features, and PCA then classifies the facial features
optimally. Some of the recent PCA-based algorithms are discussed below.
Kernel PCA approaches [20] deliver generalizations that take higher-order
correlations into consideration; this handles the non-linearity in face
recognition and achieves lower error rates. Symmetrical PCA [21] combines PCA
with the even-odd decomposition principle; this approach uses the different
energy ratios and sensitivities of even/odd symmetrical principal components
for feature selection. Two-dimensional PCA [22] operates on a 2-D image
matrix instead of a 1-D vector. Adaptively weighted sub-pattern PCA [23]
divides the original whole image pattern into sub-patterns from which
features are obtained; classification adaptively computes the contribution of
each part. Weighted modular PCA [24] partitions the whole face into modules
or sub-regions such as mouth, nose, eyes and forehead; the weighted sum of
the errors of all these regions yields the final decision.
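To make the basic eigenface idea concrete (this is a minimal sketch of plain PCA, not any one of the surveyed variants), the following NumPy code computes a PCA subspace from flattened face images; the random "faces" stand in for a real training set:

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k eigenfaces from a stack of flattened face images.

    images: (n_samples, n_pixels) array; k: number of leading components.
    Returns (mean_face, components) where components is (k, n_pixels).
    """
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data gives the eigenvectors of the sample
    # covariance matrix without forming it explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:k]

def project(image, mean_face, components):
    """Project a flattened image onto the eigenface subspace."""
    return components @ (image - mean_face)

# Toy usage: 20 random "images" of 8x8 pixels, flattened to length 64.
rng = np.random.default_rng(0)
faces = rng.random((20, 64))
mean_face, comps = eigenfaces(faces, k=5)
weights = project(faces[0], mean_face, comps)
print(weights.shape)  # -> (5,)
```

Recognition then amounts to comparing the weight vectors of a probe image and the gallery images in this low-dimensional space.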
B. The Fisherface Method
The Fisherface method was introduced by Belhumeur et al. in 1997 [25]. It is
a derivative of Fisher's Linear Discriminant (FLD) that applies linear
discriminant analysis (LDA) to obtain the most discriminant features. Like
the eigenface method, the Fisherface method produces a subspace projection
matrix, using both PCA and LDA. LDA determines a set of projection vectors
that simultaneously maximize the between-class scatter and minimize the
within-class scatter (Chen et al. [26]) and provides lower error rates than
the eigenface method. Fig. 2 shows an example of six different classes using
LDA, with large variance between classes but little variance within classes.
Kernel FLD [27] is able to extract the most discriminant features in the
feature space, which corresponds to extracting nonlinear features in the
original input space; it gives better results than the conventional
Fisherface method, which is based on the second-order statistics of an image
set and does not take higher-order statistical dependencies into account.
Some of the current LDA-based algorithms include the following [28]. Direct
LDA [29] constructs the image scatter matrix from a normal 2-D image and is
able to resolve the small-sample-size problem. Dual-space LDA [30] uses the
full discriminative information of the face space and tries to resolve the
same problem. Direct-weighted LDA [31] combines the advantages of both direct
LDA and the weighted pairwise Fisher criterion. Block LDA [32] breaks the
whole image down into blocks and characterizes each block as a row vector;
the row vectors of each block form 2-D matrices, to which LDA is then
applied. A methodology fusing the LDA and PCA representations [33] using two
approaches, the K-Nearest Neighbour (KNN) approach and the Nearest Mean (NM)
approach, was evaluated on the AT&T and Yale datasets.
Fig. 2. Example of six classes using LDA [17]
C. Support Vector Machines
To improve the classification performance of the PCA and LDA subspace
features, support vector machines (SVMs) were introduced [35]. An SVM is
generally trained through supervised learning: it uses a training set of
images to calculate the Optimal Separating Hyperplane (OSH), minimizing the
risk of misclassification between two classes of images in some feature
space. Guo et al. [36] applied this method to face recognition, using a
binary tree classification system in which a face image is iteratively
classified as belonging to one of two classes. The binary tree is traversed
until the two classes denote individual subjects and a final classification
decision can be made. SVMs have been employed for face recognition by several
other researchers and have been shown to return good results.
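The scatter-matrix idea behind LDA can be sketched for the two-class case. This is a minimal NumPy illustration of Fisher's criterion on synthetic 2-D data, not the full Fisherface pipeline (which first applies PCA to the images):

```python
import numpy as np

def fisher_direction(class_a, class_b):
    """Fisher's linear discriminant for two classes.

    Maximizes between-class scatter over within-class scatter; the optimal
    projection is w = Sw^(-1) (mu_a - mu_b), here returned unit-normalized.
    """
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    # Within-class scatter matrix: sum of the two per-class scatters.
    sw = ((class_a - mu_a).T @ (class_a - mu_a)
          + (class_b - mu_b).T @ (class_b - mu_b))
    w = np.linalg.solve(sw, mu_a - mu_b)
    return w / np.linalg.norm(w)

# Two synthetic, well-separated classes standing in for face features.
rng = np.random.default_rng(1)
a = rng.normal([0, 0], 0.5, size=(100, 2))
b = rng.normal([3, 3], 0.5, size=(100, 2))
w = fisher_direction(a, b)
gap = (a @ w).mean() - (b @ w).mean()
print(gap > 3.0)  # classes are well separated along w -> True
```

For C classes, the same criterion generalizes to the leading eigenvectors of Sw^(-1) Sb, which is what the Fisherface method computes after PCA.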
D. Independent Component Analysis
ICA is a generalization of PCA and is considered to have more representative
power. In ICA, a linear transformation is determined that represents a set of
random variables as linear combinations of statistically independent source
variables. ICA captures the high-order statistics present in the image: it
encodes face images with statistically independent variables that are not
necessarily associated with orthogonal axes, seeking directions that are
maximally independent from one another. ICA decorrelates the high-order
moments of the input in addition to the second-order moments; its possible
use for face recognition was demonstrated by Bartlett and Sejnowski [34].
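One common way to estimate ICA is the FastICA algorithm; the surveyed work does not prescribe this exact procedure, but the following NumPy sketch of deflation-based FastICA with a tanh nonlinearity shows statistically independent sources being recovered from a linear mixture:

```python
import numpy as np

def whiten(x):
    """Center and whiten mixed signals of shape (n_signals, n_samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    cov = x @ x.T / x.shape[1]
    d, e = np.linalg.eigh(cov)
    return (e @ np.diag(d ** -0.5) @ e.T) @ x

def fastica(x, n_iter=200, seed=0):
    """Deflation FastICA with a tanh nonlinearity; returns unmixed signals."""
    z = whiten(x)
    rng = np.random.default_rng(seed)
    n = z.shape[0]
    w_all = np.zeros((n, n))
    for i in range(n):
        w = rng.normal(size=n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ z
            g, g_prime = np.tanh(wx), 1 - np.tanh(wx) ** 2
            # Fixed-point update: w+ = E[z g(w'z)] - E[g'(w'z)] w
            w_new = (z * g).mean(axis=1) - g_prime.mean() * w
            # Deflate: stay orthogonal to previously found components.
            w_new -= w_all[:i].T @ (w_all[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-9
            w = w_new
            if converged:
                break
        w_all[i] = w
    return w_all @ z

# Recover two independent non-Gaussian sources from a linear mixture.
rng = np.random.default_rng(2)
s = rng.uniform(-1, 1, size=(2, 5000))          # independent sources
mixed = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s  # observed mixtures
recovered = fastica(mixed)
```

Each recovered row matches one source up to sign and scale, which is the usual ICA ambiguity; for faces, the rows of the image matrix play the role of the mixed signals.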
E. Probabilistic Decision Based Neural Network (PDBNN)
The Probabilistic Decision Based Neural Network (PDBNN), proposed by Lin et
al. [37], comprises three modules: a face detector, an eyes localizer and a
face recognizer. In this technique, only the upper facial regions are
considered.
FEATURE-BASED APPROACHES
In these methods, the initial phase is to extract the geometry or appearance
of local facial features such as the nose, eyes and mouth. These features are
then fed into a structural classifier.
A. Face Recognition through Geometric Features
In the initial phase, a set of fiducial points is located in every face,
geometric measures such as the distances between these points are computed,
and the image nearest to the query face is selected. Work along these lines
was done by Kanade [41], who used the Euclidean distance to correlate 16
extracted feature vectors on a database of 20 different people with 2 images
per person, achieving a recognition rate of 75%. Brunelli and Poggio [38]
later used 35 geometric features from a database of 47 different people with
4 images per person, as shown in Fig. 4, and achieved a recognition rate of
95%. Most recently, Cox et al. [39] derived 35 facial features from a
database of 685 images, with a single image per individual, and reported a
recognition performance of 95%.
Fig. 4. Geometrical features used by Brunelli and Poggio [38]
B. Hidden Markov Model (HMM)
The HMM approach was first presented by Samaria and Young [40]. HMMs handle
images with variations due to lighting, facial expression and orientation,
and thus have an advantage over holistic approaches. To treat images with an
HMM, they are considered as spatial sequences of observations. An HMM can be
described as a set of finite states with associated probability
distributions. The method is called a Hidden Markov Model because the states
themselves are not visible; only the output is visible to the external
observer.
This method uses pixel strips to cover the forehead, eyes, nose, mouth and
chin without finding the exact locations of the facial features. The face is
treated as a sequence of discrete parts whose order must be maintained, e.g.,
running from top to bottom: forehead, eyes, nose, mouth and chin, as in
Fig. 5. Each of these facial regions is assigned to a state of a
left-to-right 1-D continuous HMM.
Fig. 5. Left-to-right HMM for face recognition
C. Active Appearance Model (AAM) - 2D Morphable Method
Faces are highly variable, deformable objects. Depending on pose, lighting
and expression, faces can look very different in images. Cootes, Taylor, and
Edwards [42] proposed the Active Appearance Model, which is capable of
'explaining' the appearance of a face in terms of a compact set of model
parameters. The AAM is an integrated statistical model that combines a model
of shape variation with a model of appearance variation in a shape-normalized
frame. An AAM is built from a training set of labelled images: landmark
points are marked on each example face at key positions to highlight the main
features, as shown in Fig. 6. Matching to an image is performed by finding
the model parameters that minimize the difference between the image and a
synthesized model example projected into the image.
Fig. 6. Training image split into shape and shape-normalized texture [42]
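A toy version of geometric-feature matching, pairwise distances between fiducial points compared by nearest neighbour, can be sketched as follows; the synthetic landmarks here stand in for points detected on real faces:

```python
import numpy as np

def landmark_distances(points):
    """Pairwise Euclidean distances between fiducial points as a vector."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)   # upper triangle, no diagonal
    return d[iu]

def nearest_face(query, gallery):
    """Index of the gallery face whose distance vector is closest."""
    q = landmark_distances(query)
    feats = np.array([landmark_distances(g) for g in gallery])
    return int(np.argmin(np.linalg.norm(feats - q, axis=1)))

# Toy gallery of 3 "faces", each with 5 (x, y) fiducial points.
rng = np.random.default_rng(3)
gallery = [rng.random((5, 2)) * 100 for _ in range(3)]
query = gallery[1] + rng.normal(0, 0.5, size=(5, 2))  # noisy re-detection
print(nearest_face(query, gallery))  # -> 1
```

Using inter-point distances rather than raw coordinates makes the feature invariant to translation, which is part of why the early geometric methods worked with so few measurements.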
D. 3D Morphable Model
To handle facial variations such as pose and illumination, it is better to
represent the face using 3D models. The 3D morphable model is a strong,
effective and versatile representation of human faces. To build a model,
high-quality frontal and half-profile pictures of each subject are first
taken under ambient lighting conditions. The images are then used as input to
an analysis-by-synthesis loop, which yields a face model. Blanz et al. [43]
proposed this method based on a 3D morphable face model, with an algorithm
that recovers parameters such as shape and texture from a single image of a
face and encodes them as model parameters. The 3D morphable model provides
full 3D correspondence information, which allows automatic extraction of
facial components and regions.
HYBRID METHODS
These methods use both holistic and feature-based cues to recognize the face
and show better results. The eigenmodules approach proposed by Pentland et
al. [44] uses both global eigenfaces and local eigenfeatures and performs
considerably better than holistic eigenfaces alone. Penev and Atick [45] gave
a method called hybrid Local Feature Analysis (LFA). Other examples are the
shape-normalized flexible appearance technique of Lanitis et al. [46] and the
component-based method of Huang et al. [47], which combines component-based
recognition with 3D morphable models. The major step is to generate 3D face
models from three input images of each person using the 3D morphable model.
These models are rendered under varying pose and illumination conditions to
build a large set of synthetic images, which are used to train a
component-based face recognition system [47]. A Support Vector Machine (SVM)
based recognition system is used that decomposes the face into a set of
components interconnected by a flexible geometrical model, so that it can
account for changes in the position of the facial components caused by
changes in head pose.
III. LIMITATIONS OF EXISTING SYSTEMS
Though existing tracking systems have been highly successful, most (if not
all) of them do not work well when tracking faces in surveillance
applications, where significant illumination changes are the norm. Face
images from videos are usually small and of low visual quality due to random
noise, blurring, occlusion, etc. In addition, variations in illumination
cause rapid changes in the appearance of the face and further complicate the
tracking problem. One theoretically possible solution is to apply an
illumination normalization method to reduce the effect of illumination
variations before tracking. However, this is not an effective solution,
because illumination normalization algorithms do not work well on
low-resolution face images. Moreover, the computational burden of
illumination normalization is often nontrivial, which prevents the tracker
from operating in real time. Because real-time tracking is required in many
practical surveillance applications, a new tracker is needed to tackle the
low-resolution face tracking problem under illumination changes.
IV. PROPOSED METHODOLOGY
In our proposed method, to mitigate the effects of illumination changes, face
images are converted to an illumination-insensitive feature space called the
Gradient-Logarithm Field (GLF) feature space, and a GLF tracker is developed
to solve the tracking problem. Four criteria are listed below for features to
adequately characterize low-resolution faces subject to illumination changes
[48]. The GLF feature depends mainly on the intrinsic features of the face
(namely the albedo α and the surface normal n) and is only weakly dependent
on the lighting source l. By utilizing the visual and motion cues present in
a video, we show that the GLF feature is suitable for tracking a face in the
presence of illumination changes. A particle filter, incorporating a
nonlinear motion model and an online-learning observation model, is then used
to implement the GLF tracker.
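The particle-filter machinery can be sketched generically. The bootstrap filter below tracks a scalar state to illustrate the predict/weight/resample cycle; it is a minimal stand-in, not the authors' exact nonlinear motion and online-learning observation models:

```python
import numpy as np

def particle_filter(observations, n_particles=500,
                    motion_std=1.0, obs_std=1.0, seed=0):
    """Generic bootstrap particle filter: predict, weight, resample.

    Tracks a scalar state from noisy observations; in the GLF tracker the
    state would be the face location and the likelihood would come from
    comparing GLF features.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate particles through a (here random-walk) motion model.
        particles = particles + rng.normal(0, motion_std, n_particles)
        # Weight: Gaussian likelihood of the observation under each particle.
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(w @ particles))   # posterior-mean estimate
        # Resample: draw particles proportionally to their weights.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

# Track a target moving along a line from noisy position measurements.
rng = np.random.default_rng(4)
truth = np.linspace(0, 20, 50)
obs = truth + rng.normal(0, 1.0, 50)
est = particle_filter(obs)
```

Resampling after every frame keeps the particle set concentrated on likely states, which is what lets the filter cope with the nonlinear, non-Gaussian dynamics of a moving face.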
Four criteria are suggested for extracting illumination-insensitive features
that will be effective for low-resolution visual tracking.
1) Features should be insensitive to illumination variation. Considering the
Lambertian model (I = α <n, l>), this criterion implies that the feature
should depend mainly on the intrinsic structure of the face, the albedo α and
the surface normal n, rather than on the illumination component l.
2) Features should be global rather than block- or point-based. Because the
face region in a video is often small, the face image consists of only a
small number of pixels, such as 16 × 16. In such cases, block- or point-based
features are not adequate to characterize the face, and global features are
more powerful and flexible.
3) Features should not depend on any face model. Although a face model often
provides extra information that may assist the tracker in dealing with
illumination variations [49], it is not effective for tracking low-resolution
faces in videos, because most model-based algorithms require a high degree of
alignment between the face image and the face model, which is challenging for
low-resolution face images. Moreover, most model-based algorithms are
computationally expensive, making them impractical for real-time face
tracking applications.
4) Features should be designed for tracking. Though many approaches have been
proposed for extracting illumination-insensitive face features, most of them
are designed for recognition or detection rather than tracking. Visual and
motion cues are valuable in addressing the tracking problem and provide
additional information that helps solve the illumination problem. Therefore,
a feature used for tracking should exploit these extra pieces of information.
In accordance with the foregoing criteria, we propose the following GLF
feature for tracking. Let I(x, y) be the intensity of a video frame at
position (x, y). We assume that the face surface has a Lambertian reflectance
with albedo α(x, y). For simplicity, the position index (x, y) is omitted. We
consider a generic lighting case in which the light on the surface of the
face can come from any direction. Denoting the lighting intensity from
direction e as l(e) and the Lambertian kernel as k(<n, e>) = max(<n, e>, 0),
we have

I = α ∫_S² l(e) k(<n, e>) de    (1)

where ∫_S² denotes integration over the surface of the sphere. Following
Basri and Jacobs' seminal study [50], the lighting intensity function l(e)
can be written as a sum of spherical harmonics,

l(e) = Σ_{n,m} l_nm Y_nm(e)    (2)

Substituting Eq. (2) into Eq. (1), Eq. (1) can be written as

I = α Σ_{n,m} l_nm ∫_S² k(<n, e>) Y_nm(e) de    (3)

Where there is attached shadow, the Lambertian kernel k(<n, e>) equals zero,
so the part of the integral in Eq. (3) over lighting directions causing
attached shadow can be ignored. We therefore consider only the scenario
without attached shadows, in which the Lambertian kernel is linear and the
harmonics are twice continuously differentiable. Eq. (3) can then be written
as

I = α Σ_{n,m} l_nm ∫_Ω <n, e> Y_nm(e) de    (4)

where Ω is the region in which <n, e> > 0. Collecting these integrals into an
effective lighting vector l, we have

I = α k(<n, l>) = α <n, l>    (5)

Note that the albedo α and the surface normal n are considered the intrinsic
features of the face. A straightforward way of handling illumination
variations is to separate the lighting source l from the image I [51] and to
extract the intrinsic features to characterize the face. However, this is an
ill-posed open problem that may require additional face models, which is not
applicable to low-resolution videos (see Criterion 3). We show that, under
some reasonable assumptions, the performance degradation caused by
lighting-source changes can be alleviated. The GLF feature F is obtained in
two steps. First, the face image is converted into the logarithm space:

log(I) = log α + log <n, l>    (6)

In the logarithm space, the contribution of the albedo is separated
additively from the illumination component, so that when the gradient is
applied, the resulting GLF feature is insensitive to illumination changes
during tracking. Once the gradient operation is accomplished, we have

∇log(I) = ∇log α + ∇log <n, l>    (7)

and the GLF is defined as F = ∇log(I) = (F_x, F_y). We offer a brief analysis
to illustrate the robustness of the GLF feature. Considering the x-component
of F, we have

F_x = ∂(log α)/∂x + (<∂n/∂x, l> + <n, ∂l/∂x>) / <n, l>    (8)

We assume that the lighting source l is similar across a small patch of the
face, so the last term in Eq. (8) is negligible. Writing the lighting source
l as a combination of lighting intensity λ (= ||l||) and lighting direction
u, so that l = λu, we can rewrite Eq. (8) as

F_x ≈ ∂(log α)/∂x + <∂n/∂x, u> / <n, u>

in which the intensity λ cancels. It can be seen that the proposed GLF is not
sensitive to changes in lighting intensity; the GLF feature is also
insensitive to changes in the lighting direction during tracking.

Fig 7. An overview of the GLF tracker [48]
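The two-step construction of the GLF feature (logarithm, then gradient) is easy to verify numerically. The NumPy sketch below (a minimal discretization using `np.gradient`, with a small epsilon added before the log, both implementation choices of ours) also checks the intensity-invariance argued above: a global scaling of the image leaves the feature essentially unchanged.

```python
import numpy as np

def glf(image, eps=1e-6):
    """Gradient-Logarithm Field: gradient of the log-intensity image.

    Taking the log turns a multiplicative lighting change into an additive
    constant, which the gradient then removes.
    """
    log_i = np.log(image + eps)          # eps guards against log(0)
    fy, fx = np.gradient(log_i)          # per-axis finite differences
    return np.stack([fx, fy])            # F = (F_x, F_y)

# A global change in lighting intensity leaves the GLF almost unchanged:
# log(3 I) = log 3 + log I, and the gradient kills the constant.
rng = np.random.default_rng(5)
face = rng.uniform(0.1, 1.0, size=(16, 16))   # low-resolution face image
print(np.allclose(glf(face), glf(3.0 * face), atol=1e-4))  # -> True
```

The 16 × 16 size mirrors the low-resolution setting of Criterion 2; the feature is dense and global, computed at every pixel without any face model.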
V. TRACKING RESULTS
GLF Tracker versus Existing Tracking Algorithms. We apply the tracker to a
video downloaded from http://www.youtube.com/watch?v=_v3WtpvM7_0. The video
(known as Gunner), of a person running along a passage, was captured by a
surveillance camera. As shown in Fig. 8, the IVT tracker [53] fails to track
the face after frame #29, while the L1 tracker [52] cannot locate the face
region properly after frame #60 and fails to track the face after frame #71.
As the cropped images at the bottom, between frames #60 and #71, show,
illumination changes cause the L1 tracker to fail. The proposed GLF tracker
is effective and obtains robust tracking results.
Fig 8. Tracking results for the surveillance video from YouTube (Gunner): a
person moves in a long passage. Red: results from the IVT tracker. Yellow: L1
tracker. Blue: proposed method [48].
We report the results of additional experiments using challenging videos
downloaded from www.cs.toronto.edu/dross/ivt/ in which changes in
illumination are significant. We use the parameter settings of [53] and
compare our results with those of [53] and with the tracking results of the
L1 tracker [52]. Because the video (known as Handsome Fellow) was captured
with a handheld camera, it is subject to random shakes. As can be seen from
Fig. 9, the L1 tracker [52] is not robust to the significant variations in
the face region, while the IVT tracker [53] obtains reasonable tracking
results before frame #300, as shown in [53]. However, when pose variations
also occur, the IVT tracker [53] fails to track after frame #312.
Fig. 9. Tracking results for the video captured outdoors (Handsome Fellow): a
person moves underneath a trellis with large illumination variations and cast
shadows while changing his pose. Red: results from the IVT tracker [53].
Yellow: L1 tracker [52]. Blue: proposed method.
We repeat the experiment on a video called Dudek, previously used by Ross et
al. [53], to evaluate the performance of the GLF tracker. In Dudek, a person
is subject to large pose, expression and lighting changes and experiences
partial occlusions.
As shown in Fig. 10, the proposed method obtains better tracking results than
the IVT tracker [53] and also outperforms the L1 tracker [52]. The face
images tracked by the proposed method appear more precise in terms of the
rotation (roll) angle, as shown in frames #127, #141 and #155. We also apply
the tracker to another publicly available database, the Boston University
head tracking database [54]. The tracking results reported in Fig. 11 show
that the proposed algorithm performs better than its counterparts.
We record the number of frames successfully tracked by the proposed GLF
tracker, the IVT tracker, a tracker with illumination preprocessing (the IP
tracker), a tracker with a gradient-face feature (the GF tracker) and a
tracker with a Weberface feature (the WF tracker). The tracking results for
the 102 people in the video are shown in Table I, and the average number of
frames tracked is shown in Fig. 12. The proposed GLF tracker outperforms the
second-placed tracker by more than 29%. We therefore conclude that the
proposed GLF tracker can effectively track low-resolution faces in videos.
As shown in Figs. 8, 9 and 11, the proposed tracker is effective in tracking
faces even when the lighting direction changes during tracking.
VI. CONCLUSION
In this paper, we proposed and evaluated a novel GLF tracker to address the
problem of tracking a low-resolution face subject to illumination changes.
The GLF feature possesses three desirable properties: first, it is a globally
dense feature that is effective in low-resolution videos, in contrast to
point-based features, which may not perform well; second, because the GLF
feature is easy to implement and does not impose a heavy computational
burden, the GLF tracker can run in real time; third, it does not depend on a
specific face model. Experiments show that, when implemented using a particle
filter, the GLF tracker is insensitive to illumination variations and
outperforms many state-of-the-art tracking algorithms.
TABLE I
TRACKING RESULTS FOR THE CAVIAR DATABASE: THE
NUMBER OF FRAMES TRACKED BY THE TRACKERS
Fig. 10. Tracking results for the video captured in an office: a person
undergoing large pose, expression, appearance, and lighting changes, as
well as partial occlusions. Red: results from the IVT tracker. Yellow: L1
tracker. Blue: proposed method [48].
Fig. 12. Tracking results for the CAVIAR database: average number of frames
tracked by the different trackers (IVT, IP, WF, GF) [48].
References:
Fig. 11. Tracking results for the video from the BU head tracking
database (Jal5): a person moving his head in front of the camera with
illumination changes. Red: results from the IVT tracker. Yellow: L1 tracker.
Blue: proposed method [48].
[1] A. K. Jain, R. Bolle, and S. Pankanti, Biometrics:
Personal Identification in Networked Security," A. K.
Jain, R. Bolle, and S. Pankanti, Eds.: Kluwer Academic
Publishers, 1999.
[2] T. Choudhry, B. Clarkson, T. Jebara, and A. Pentland,
"Multimodal person recognition using unconstrained
audio and video," in Proceedings, International
Conference on Audio and Video-Based Person
Authentication, 1999, pp.176-181.
[3] M. M. Rahman, R. Hartley, and S. Ishikawa, "A
Passive And Multimodal Biometric System for Personal
Identification," in International Conference on
Visualization, Imaging and Image Processing. Spain,
2005, pp.89-92.
[4] R. Brunelli and D. Falavigna, "Person identification
using multiple cues," IEEE Transactions on Pattern
Analysis and Machine Intelligence, Vol.17, pp.955- 966,
1995.
[5] M. Viswanathan, H. S. M. Beigi, A. Tritschler, and F.
Maali, "Information access using speech, speaker and face
recognition," in IEEE International Conference on
Multimedia and Expo, Vol.1, 2000, pp 493--496.
"Integrating Faces, Fingerprints, and Soft Biometric Traits
for User Recognition," Proceedings of the Biometric
Authentication Workshop, in conjunction with
[7] P. Melin and O. Castillo, "Human Recognition using
Face, Fingerprint and Voice," in Hybrid Intelligent
Systems for Pattern Recognition Using Soft Computing,
Vol.172, Studies in Fuzziness and Soft Computing:
Springer Berlin / Heidelberg, 2005, pp.241-256.
[8] K. Chang, K. W. Bowyer, S. Sarkar, and B. Victor,
"Comparison and Combination of Ear and Face Images in
Appearance-Based Biometrics," IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol.25,
pp.1160-1165, 2003.
[9] R. Chellappa, A. Roy-Chowdhury, and S. Zhou,
"Human Identification Using Gait and Face," in The
Electrical Engineering Handbook, 3rd ed: CRC Press,
2004.
[10] S. Ben-Yacoub, J. Luttin, K. Jonsson, J. Matas, and J.
Kittler, "Audio-visual person verification," in IEEE
Computer Society Conference on Computer Vision and
Pattern Recognition, Vol.1. Fort Collins, CO, USA, 1999,
pp.580-585.
[11] X. Zhou and B. Bhanu, "Feature fusion of side face
and gait for video-based human identification," Pattern
Recognition, Vol.41, pp.778-795, 2008.
[12] D. Bouchaffra and A. Amira, "Structural hidden
Markov models for biometrics: Fusion of face and
fingerprint," Pattern Recognition, Vol.41, pp.852-867,
2008.
[13] H. Vajaria, T. Islam, P. Mohanty, S. Sarkar, R.
Sankar, and R. Kasturi, "Evaluation and analysis of a face
and voice outdoor multi-biometric system," Pattern
Recognition Letters, Vol.28, pp.1572-1580, 2007.
[14] Y.-F. Yao, X.-Y. Jing, and H.-S. Wong, "Face and
palmprint feature level fusion for single sample
biometrics recognition," Neurocomputing, Vol.70, pp.
1582-1586, 2007.
[15] J. Zhou, G. Su, C. Jiang, Y. Deng, and C. Li, "A face
and fingerprint identity authentication system based on
multi-route detection," Neurocomputing, Vol.70, pp.922-931, 2007.
[16] M. Turk and A. Pentland. “Face recognition using
eigenfaces”. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, 1991
[17] Face Recognition: National Science and Technology
Council (NSTC), Committee on Technology, Committee
on Homeland and National Security, Subcommittee on
biometrics.
[18] B. Moghaddam, “Principal Manifolds and Bayesian
Subspaces for Visual Recognition”, IEEE, 1999.
[19] K. C. Chung, S. C. Kee, and S. R. Kim, “Face
Recognition using Principal Component Analysis of
Gabor Filter Responses”, IEEE, 1999.
[20] N. Sun, H. Wang, Z. Ji, C. Zou, and L. Zhao, "An
efficient algorithm for Kernel two-dimensional principal
component analysis," Neural Computing & Applications,
vol.17, 2008.
[21] Q. Yang and X. Q. Ding, "Symmetrical Principal
Component Analysis and Its Application in Face
Recognition," Chinese Journal of Computers, vol.26,
2003.
[22] J. Yang and D. Zhang, "Two-Dimensional PCA: A
New Approach to Appearance-Based Face Representation
and Recognition," IEEE Trans. Pattern Analysis and
Machine Intelligence, vol.28, pp.131- 137, 2004.
[23] K. R. Tan and S. C. Chen, "Adaptively weighted
subpattern PCA for face recognition," Neurocomputing,
vol.64, pp.505-511, 2005.
[24] A. P. Kumar, S. Das, and V. Kamakoti, "Face
recognition using weighted modular principle component
analysis," in Neural Information Processing, Vol.3316,
Lecture Notes In Computer Science: Springer Berlin /
Heidelberg, 2004, pp.362-367.
[25] Peter N. Belhumeur, Joao P. Hespanha and David
Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using
Class Specific Linear Projection", IEEE Trans. PAMI,
1997.
[26] Chen, H. Liao, M. Ko, J. Lin and G. Yu, "A New LDA-Based
Face Recognition System which can Solve the
Small Sample Size Problem", Pattern Recognition, 2000.
[27] S. Mika, G. Ratsch, J. Weston, B. Scholkopf, A. Smola and
K. R. Muller, "Invariant feature extraction and
classification in kernel spaces". In NIPS-12, 2000.
[28] Seema Asht, Rajeshwar Dass, "Pattern Recognition
Techniques: A Review”; International journal of
Computer Science Telecommunications, vol 3, Issue 8,
August 2012.
[29] H. Yu and J. Yang, "A Direct LDA Algorithm for
High-dimensional Data with Application to Face
Recognition," Pattern Recognition, vol.34, pp.2067- 2070,
2001.
[30] X. Wang and X. Tang, "Dual-space Linear
Discriminant Analysis for Face Recognition," in
Proceedings of IEEE International Conference on
Computer Vision and Pattern Recognition, 2004, pp.564–
569.
[31] D. Zhou and X. Yang, "Face Recognition Using
Direct-Weighted LDA," in 8th Pacific Rim International
Conference on Artificial Intelligence. Auckland, New
Zealand, pp.760-768, 2004.
[32] V. D. M. Nhat and S. Lee, "Block LDA for Face
Recognition," in Computational Intelligence and
Bioinspired Systems, vol.3512, Lecture Notes in
Computer Science: Springer Berlin / Heidelberg, pp.899-905, 2005.
[33] Gian Luca Marcialis and Fabio Roli, “Fusion of LDA
and PCA for Face Recognition”, Department of Electrical
and Electronic Engineering - University of Cagliari Piazza
d’Armi - 09123 Cagliari (Italy).
[34] M. Bartlett and T. Sejnowski. “Independent
components of face images: A representation for face
recognition”. In Proc. the 4th Annual Joint Symposium on
Neural Computation, Pasadena, CA, May 17, 1997.
[35] B. Heisele, P. Ho, and T. Poggio, “Face recognition
with support vector machines: Global versus component-based approach," Proceedings of ICCV, 2001.
[36] G. Guo, S.Z. Li, K. Chan, “Face Recognition by
Support Vector Machines.”Proc. of the IEEE International
Conference on Automatic Face and Gesture Recognition,
2000.
[37] Lin Shang-Hung, Kung Sun-Yuan, Lin Long- Ji,
"Face recognition/detection by probabilistic decision-based
neural networks", IEEE Trans. Neural Networks 8
(1), 114-132, 1997.
[38] Brunelli and Poggio, “Face Recognition: Features Vs
Templates”, IEEE Transactions on Pattern Ananlysis and
Machine Intelligence, vol 15, no. 10, October 1993.
[39] I. J. Cox, J. Ghosn, and P. N. Yianilos, "Feature
based face recognition using mixture-distance," in
Proceedings of IEEE Conference on Computer Vision and
Pattern Recognition, 1996, pp.209-216.
[40] F. Samaria and S. Young, "HMM based architecture
for face identification", Image and Vision Computing, vol.
12, pp. 537-583, October 1994.
[41] T. Kanade, “Computer Recognition of Human
Faces.” Basel and Stuttgart Birkhauser, 1977
[42] T.F. Cootes, G.J. Edwards, and C.J. Taylor, “Active
appearance models,” IEEE Trans. Pattern Analysis and
Machine Intelligence, vol. 23, no. 6, pp. 681–685, Jun.
2001
[43] V. Blanz and T. Vetter, “A morphable model for the
synthesis of 3D faces,” in Proc. ACM SIGGRAPH, Mar.
1999, pp. 187–194.
[44] A. Pentland, B. Moghaddam, T. Starner, "View-Based and Modular Eigenspaces for Face Recognition,"
Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, Seattle, Washington, USA, pp.
84-91, 21-23 June 1994.
[45] P. Penev and J. Atick, "Local Feature Analysis: A
General Statistical Theory for Object Representation,"
Network: Computation Neural, 1996.
[46] A. Lanitis, C. J. Taylor and T. F. Cootes, “Automatic
Face Identification System Using Flexible Appearance
Models,” Image Vis. Comput., 1995.
[47] J. Huang, B. Heisele, and V. Blanz, "Component-based face recognition with 3D morphable models". In
Proceedings, International Conference on Audio- and
Video-Based Person Authentication, 2003
[48] W. W. Zou, P. C. Yuen, and R. Chellappa,
"Low-Resolution Face Tracker Robust to Illumination
Variations," IEEE Trans. Image Process., vol. 22, no. 5,
May 2013.
[49] Y. Xu and A. Roy-Chowdhury, “A physics-based
analysis of image appearance models,” IEEE Trans.
Pattern Anal. Mach. Intell., vol. 33, no. 8, pp. 1681–1688,
Aug. 2011.
[50] R. Basri and D. Jacobs, “Lambertian reflectance and
linear subspaces,” IEEE Trans. Pattern Anal. Mach.
Intell., vol. 25, no. 2, pp. 218–233, Feb. 2003.
[51] X. Xie, W. Zheng, J. Lai, P. Yuen, and C. Suen,
“Normalization of face illumination based on large-and
small-scale features,” IEEE Trans. Image Process., vol.
20, no. 7, pp. 1807–1821, Jul. 2011.
[52] X. Mei and H. Ling, “Robust visual tracking and
vehicle classification via sparse representation,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 33, no. 11, pp.
2259–2272, Nov. 2011.
[53] D. Ross, J. Lim, R. Lin, and M. Yang, “Incremental
learning for robust visual tracking,” Int. J. Comput. Vis.,
vol. 77, no. 1, pp. 125–141, 2008.
[54] M. La Cascia, S. Sclaroff, and V. Athitsos, “Fast,
reliable head tracking under varying illumination: An
approach based on registration of texturemapped 3D
models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 22,
no. 4, pp. 322–336, Apr. 2000.