Face Recognition Using Eigenfaces
Justin D. Li, UC San Diego

Abstract— In this paper, I discuss the face recognition portion of an automated system that can be passed a still image, either from a picture or from a live camera, or can generate a still frame from an image stream, in order to process and enhance the image, extract the section containing a face, and identify the individual pictured. Although many methods for face recognition exist, I implement the fairly simple, but potentially robust, method of eigenfaces.
Index Terms— computer vision, eigenfaces, face recognition

I. INTRODUCTION

INTELLIGENT systems are becoming more and more complex and possess a wider range of functions and uses. They are employed in a wide variety of environments and for a large number of different applications. A key distinguishing feature of interactive intelligent systems is computer vision, especially for the purposes of human-machine interaction. As such intelligent and interactive systems become more autonomous, automatic recognition of a person becomes a necessary capability. Because of the great variation in human appearance, whether from different styles of clothing or from different postures and positions, identifying a person by recognizing his or her face emerges as an efficient and useful method.

In this paper, I present the method of eigenfaces, a simple approach that deals specifically with the facial recognition portion of the computer vision module of an intelligent system. The results presented can be combined with other algorithms and programs designed by fellow ECE 172A students to form the basis for an entire working system.

Over the course of this paper, I focus on common still images of three people obtained from internet searches, together with a test database from a collection of face recognition data from Dr. Libor Spacek of the University of Essex in the United Kingdom. The pictures from the internet are cropped so only the face remains, and the background is blacked out to simplify the algorithm. I focus primarily on identifying a face in a fairly frontal position as belonging to Barack Obama, Sarah Palin, Joe Biden, or John McCain, or on determining that the picture shows the face of another, unknown person, or is not of a face at all.

Face Recognition Project completed for ECE 172A at UC San Diego in the Fall of 2008, taught by Professor Mohan M. Trivedi.

II. EIGENFACE METHOD OVERVIEW

Figure 1 illustrates a high-level overview of the entire eigenface approach. The method can be broken down into two main components: a training stage that extracts eigenfaces from a set of training images, and a testing algorithm that examines a given test input image.

A. Theoretical Approach to Generating Eigenface Sets
First proposed in 1987 by Lawrence Sirovich and Michael Kirby, the concept of eigenfaces was designed specifically with face recognition in mind. The underlying principle rests on the idea that each face differs only slightly from another face, and that this difference can therefore be represented by a relatively small set of values. These differences are embodied by the eigenvectors of the covariance matrix of a set of differences between some faces and their average.
The equations that I used to create the algorithm for generating a class data set came from a document by Dimitri Pissarenko [1], whose equations in turn came from Matthew Turk and Alex Pentland's 1991 paper on eigenfaces [2]. Most simply put, the process entails the following steps:
1) Compile a face data set for one person and find the
average face for that person.
$\Psi = \frac{1}{M} \sum_{n=1}^{M} \Gamma_n$ [1]
2) Subtract the average face from each one of the input
faces in the data set. Also, at this time, each input image is
transformed from a matrix to a vector.
$\Phi_i = \Gamma_i - \Psi$ [1]
3) Form the covariance matrix by summing all of the
training vectors multiplied by their transpose, and then divide
by the number of images in the set.
$C = \frac{1}{M} \sum_{n=1}^{M} \Phi_n \Phi_n^T$ [1]
4) Calculate the eigenvectors of the covariance matrix. This can now easily be carried out through built-in functions of more sophisticated software, such as Matlab. These generated eigenvectors form the set of eigenfaces, which can be used to determine the identity of a given face.
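The four training steps above can be sketched compactly in code. The following is a minimal illustration in Python with NumPy (the project itself used Matlab's built-in functions); the image sizes and pixel data are placeholder values, not the project's actual data:

```python
import numpy as np

# Hypothetical training set: M images of one person, each H x W grayscale.
M, H, W = 5, 16, 16
rng = np.random.default_rng(0)
faces = rng.random((M, H, W))

# Step 2 (in part): flatten each image matrix into a vector of length H*W.
gammas = faces.reshape(M, -1)              # M x (H*W)

# Step 1: the average face for this person.
psi = gammas.mean(axis=0)                  # length H*W

# Step 2: subtract the average face from each input face.
phis = gammas - psi                        # M x (H*W)

# Step 3: the covariance matrix (impractically large for real images).
C = (phis.T @ phis) / M                    # (H*W) x (H*W)

# Step 4: eigenvectors of the covariance matrix; C is symmetric,
# so eigh applies. Columns of 'vecs' are the (naive) eigenfaces.
vals, vecs = np.linalg.eigh(C)
```

Even at this toy 16-by-16 resolution, C is already 256 by 256, which foreshadows the size problem discussed next.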
Figure 1. High-level overview of the eigenface approach. [1]

B. Actual Implementation of the Algorithm

However, there are a number of flaws with this method that make it impractical to implement directly. The main consideration is the size of the matrices involved. For a given image, at modern resolutions, the matrix representing the face portion could easily be of a size exceeding 128 by 128 pixels. Transformed into a vector, it would have a length of 16384, and the covariance matrix formed from it would have dimensions of 16384 by 16384, which exceeds practical computational and memory storage allocations. Thus, the following approach, which is mathematically equivalent, was proposed in Turk and Pentland's paper:

1) First, create a matrix with scalar elements obtained by multiplying the transpose of one input image vector with a different input vector, as given by

$L_{mn} = \Phi_m^T \Phi_n$ [1]

This matrix has dimensions of N by N, with N being the number of images used in the input training set.

2) Then, find the eigenvectors for this smaller matrix L, again using built-in functions of some mathematical software.

3) These eigenvectors correspond to the N most significant eigenvectors of the theoretical covariance matrix presented earlier.

4) Consequently, the eigenvectors for the actual data set can be found by transforming the simplified set of eigenvectors. Each eigenface is equal to the sum of the input face image vectors, each scaled by the corresponding component of its eigenvector:

$u_l = \sum_{k=1}^{N} v_{lk} \Phi_k$ [1]

Finally, in order to make use of these eigenfaces, each face in the training set is weighted by the eigenfaces, and the results are added up. This vector is then defined as the class position vector in the space of faces. Each class has a unique class face set, which is then used in comparison with a test image. The distances calculated, for both the class faces and for the test images, follow a few simple steps:

1) A weight for each eigenface of a particular class is calculated by multiplying the input image, less the average for that class, with the corresponding eigenface for that class:

$w_k = u_k^T (\Gamma - \Psi)$ [1]

2) All of these weights are then collected in a class weight vector for each class:

$\Omega^T = [w_1, w_2, \ldots, w_N]$ [1]
3) These weight vectors are then compared against the class position vectors in the space of faces. The smallest distance between the two indicates the closest match.
4) Two thresholds are checked before giving a result. One threshold ensures that the picture is actually a face, and the other ensures that the face is a definitive match; otherwise, the face is reported as unknown.
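Under the same placeholder assumptions as before (synthetic data, NumPy standing in for the project's Matlab code), the reduced N-by-N computation and the weight-based matching can be sketched as follows. Using the mean training weight vector as the class position is one plausible reading of the text above, not necessarily this project's exact choice:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 8, 16384                      # N training images, D = 128*128 pixels
gammas = rng.random((N, D))
psi = gammas.mean(axis=0)
phis = gammas - psi                  # difference vectors, N x D

# 1) The small N x N matrix L, with L[m, n] = phi_m . phi_n.
L = phis @ phis.T

# 2) Eigenvectors of L (cheap: N x N instead of D x D).
vals, v = np.linalg.eigh(L)
v = v[:, 1:]                         # drop the near-zero eigenvalue
                                     # introduced by mean subtraction

# 4) Transform back: eigenface u_l = sum_k v[k, l] * phi_k.
eigenfaces = phis.T @ v              # D x (N-1), one eigenface per column
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)

# Weight vector of an image: project its difference onto the eigenfaces.
def weights(image_vec):
    return eigenfaces.T @ (image_vec - psi)

# Class position vector: here, the mean of the training weight vectors.
omega_class = np.mean([weights(g) for g in gammas], axis=0)

# Matching: Euclidean distance in face space; the smallest distance
# (below the two thresholds) identifies the face.
test_image = gammas[0]
dist = np.linalg.norm(weights(test_image) - omega_class)
```

The key saving is that the eigendecomposition runs on an 8-by-8 matrix rather than a 16384-by-16384 one, while recovering the same significant eigenfaces.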
III. EXPERIMENTAL RESULTS
As shown in figure 2, the eigenfaces generated by the implementation in this paper look considerably more distorted than the eigenfaces generally presented by other research papers and groups. Even so, the failure rate could be considered acceptable for this level of implementation detail, with six out of eight test images identified correctly.
Figure 2. Experimental eigenfaces for Obama (a), Palin (b), and Biden (c).
Indeed, the training portion of the algorithm works flawlessly
in generating class data sets from a given training set of images.
However, given that the training set was quite small, with only
ten images for Obama and Palin and five for Biden, the resulting
class set was not as refined as would be desirable. As such,
images such as figure 3(a) were able to be recognized, but
images such as figure 3(b) were not.
Figure 3. Two test images for Obama.

Overall, though, the experimental results were fairly pleasing. The program compiled quickly, and the results were easily demonstrated live. To clarify, the pictures used for both training and testing were cropped beforehand to form a square section containing just the face, and the background was then blacked out.

At the moment, the algorithm is not fully complete in providing an identity, as it merely prints out a number corresponding to the location of the person within the database. A slight modification should allow the program to display a name instead. Additionally, the program currently lacks a way to add an unidentified face to the database without separately running the training portion of the program again.

IV. STRENGTHS AND WEAKNESSES

This eigenface approach has a number of strengths but also a number of important weaknesses. Overall, it is a fairly robust system given a sufficiently large training set. The code implements a simple algorithm, which consequently can compile, run, and present results rather quickly. For recognizing a given image, once the program was passed the test file, it identified the face almost instantly, too quickly to be timed.

The problems arise primarily in providing an appropriate training set and test images. It is easy enough to scale the pictures to the same square size. However, the pictures used in generating a class data set need to be normalized so that key features such as the eyes and mouth are all aligned, as shown in figure 4(a). Furthermore, lighting conditions can affect how well a picture is recognized, as they may throw off the calculated average. One of the most restrictive limitations is the orientation of the face: faces oriented at an excessively large angle, such that at least one eye is obscured as shown in figure 4(b), tend to have poorer recognition results, as the weights from the eigenfaces will not match well. These constraints likewise apply to an image being tested.

Figure 4. Two improperly aligned training faces for McCain (a) and an excessively angled test face for Obama (b).
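Some of the preprocessing constraints just described (a square crop of the face region, a common size, and reduced sensitivity to lighting) can be partially addressed in code. The following is a hypothetical NumPy sketch, not the pipeline actually used in this project, and it does not perform the manual feature alignment the text calls for:

```python
import numpy as np

def preprocess(face, size=64):
    """Center-crop to a square, block-average down to size x size,
    and normalize intensity as a crude fix for lighting variation."""
    h, w = face.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    square = face[top:top + s, left:left + s]
    # Naive block-average downsample (assumes s >= size).
    block = s // size
    square = square[:block * size, :block * size]
    small = square.reshape(size, block, size, block).mean(axis=(1, 3))
    # Zero-mean, unit-variance normalization.
    return (small - small.mean()) / (small.std() + 1e-8)

img = np.random.default_rng(2).random((120, 100))
out = preprocess(img)
```

Eye and mouth alignment, the harder requirement, would still have to be done by hand or by a separate detection step.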
For comparison, the Face Recognition Vendor Test 2006 and Iris Challenge Evaluation 2006 Large-Scale Results Report [3] examined false reject rates and false accept rates. As an older method and implementation, eigenfaces as detailed by Turk and Pentland had a failure rate of 79% in that report. The implementation presented in this paper, acting on a smaller subset, but not tested with a sufficiently large test sample, failed only 25% of the time. According to that report, at least, the method of eigenfaces is less reliable than newer approaches; this paper's implementation suggests, however, that eigenfaces can be quite reliable given proper constraints.
Figure 5. False reject and false accept rates. [3]

V. CONCLUSIONS

Overall, the program functions satisfactorily. The algorithm, whose idea was proposed by Sirovich and Kirby and which was described by Turk and Pentland, proved to work. The method, although basic, remains a fairly robust and useful tool, and it is still referenced and used in other face recognition methods and papers [7].

Face recognition using eigenfaces could be made more robust if used in conjunction with other methods, and if complemented with methods that address its shortcomings. This implementation of the eigenface method was demonstrated successfully, although a few simple improvements could be made. A larger and more precise training set would better separate the different faces. More faces could be trained in order to expand the database of recognizable faces. The system could also be connected to a live camera for testing, or to a video camera with additional code to extract still images from the video.

In this paper, I presented a simple face recognition approach, highlighted some of the strengths and weaknesses of this method, and will undertake further improvements to the program in the future.

REFERENCES
[1] D. Pissarenko, "Eigenface-based facial recognition," http://openbio.sourceforge.net/resources/eigenfaces/eigenfaces-html/facesOptions.html, February 2003.
[2] M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces," IEEE Conference on Computer Vision and Pattern Recognition, 1991.
[3] P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, and M. Sharpe, "FRVT 2006 and ICE 2006 Large-Scale Results," http://www.frvt.org/FRVT2006/docs/FRVT2006andICE2006LargeScaleReport.pdf, March 2007.
[4] V. Bruce and A. Young, "Understanding face recognition," British Journal of Psychology, 77 (Pt 3), pp. 305-327, 1986.
[5] V. Blanz and T. Vetter, "Face Recognition Based on Fitting a 3D Morphable Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 9, September 2003.
[6] R. Brunelli and T. Poggio, "Face Recognition: Features versus Templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 10, October 1993.
[7] K. S. Huang and M. M. Trivedi, "Integrated Detection, Tracking, and Recognition of Faces with Omnivideo Array in Intelligent Environments," Computer Vision and Robotics Research (CVRR) Laboratory, University of California, San Diego.