International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Web Site: www.ijaiem.org Email: editor@ijaiem.org, editorijaiem@gmail.com
Volume 3, Issue 2, February 2014
ISSN 2319 - 4847
The Exploration of Face Recognition Techniques
Deepali H. Shah 1, Dr. J. S. Shah 2 and Dr. Tejas V. Shah 3
1 Research Scholar, School of Engineering, R K University, Rajkot, Gujarat, India, and Associate Professor, Instrumentation and Control Department, L. D. College of Engineering, Ahmedabad, Gujarat, India
2 Director, Samarth Institute, Himmatnagar, Gujarat, India
3 Associate Professor, Instrumentation and Control Department, S. S. Engineering College, Bhavnagar, Gujarat, India
Abstract
Face recognition systems have gained a great deal of popularity due to the wide range of applications in which they have proved useful. From a commercial standpoint, face recognition is practical in security systems for law enforcement at places such as airports and international borders, where individuals must be identified. Another application of face recognition is the protection of privacy, obviating the need to exchange sensitive personal information. In this paper, we explore different techniques of face recognition.
Keywords: PCA, LDA, EBGM, HMM, eigenfaces
1. INTRODUCTION
Owing to heightened security concerns since the 9/11 terrorist attacks, sophisticated security systems have become more important in our daily life, especially for person identification. Nowadays, there are various ways to identify a person, which can be classified into two categories: biometric and non-biometric methods[1].
Biometric-based techniques have emerged as the most promising option for recognizing individuals in recent years since,
instead of authenticating people and granting them access to physical and virtual domains based on passwords, PINs,
smart cards, plastic cards, tokens, keys and so forth, these methods examine an individual’s physiological and/or
behavioral characteristics in order to determine and/or ascertain his identity. Passwords and PINs are hard to remember
and can be stolen or guessed; cards, tokens, keys and the like can be misplaced, forgotten or duplicated; magnetic cards
can become corrupted and unreadable. However, an individual’s biological traits cannot be misplaced, forgotten, stolen or
forged[2],[3].
Biometric-based technologies include identification based on physiological characteristics (such as face, fingerprints,
finger geometry, hand geometry, hand veins, palm, iris, retina, ear and voice) and behavioral traits (such as gait,
signature and keystroke dynamics)[2],[3]. These systems have high recognition accuracy[1],[4]. However, some biometric recognition systems, such as iris and fingerprint scanning, are intrusive: they require the cooperation of users, and the equipment needed is usually expensive[1],[4].
Face recognition provides a convenient way to identify and recognize a person in a large database. With face recognition, we can recognize a person simply by taking a photo of that person. In a face recognition system, cameras are installed at a surveillance site, so the system can capture all subjects in real time without being noticed. As a result, face recognition systems have received significant attention[1].
Moreover, much research has been done on face detection and recognition in recent years[1],[4]. However, most of this research is based on certain preconditions, and only a few systems have been deployed in real environments, owing to performance issues[1].
The challenges associated with face detection can be attributed to the following factors[5]:
• Pose. The images of a face vary due to the relative camera-face pose (frontal, 45 degree, profile, upside down), and some facial features such as an eye or the nose may become partially or wholly occluded.
• Presence or absence of structural components. Facial features such as beards, mustaches, and glasses may or may not be present, and there is a great deal of variability among these components, including shape, color, and size.
• Facial expression. The appearance of a face is directly affected by the person's facial expression.
• Occlusion. Faces may be partially occluded by other objects. In an image with a group of people, some faces may partially occlude other faces.
• Image orientation. Face images vary directly with rotations about the camera's optical axis.
• Imaging conditions. When the image is formed, factors such as lighting (spectra, source distribution and intensity) and camera characteristics (sensor response, lenses) affect the appearance of a face.
2. FACE RECOGNITION SYSTEM
Face recognition is used for two primary tasks[2],[3]:
1. Verification (one-to-one matching): When presented with a face image of an unknown individual along with a
claim of identity, ascertaining whether the individual is who he/she claims to be.
2. Identification (one-to-many matching): Given an image of an unknown individual, determining that person’s
identity by comparing (possibly after encoding) that image with a database of (possibly encoded) images of known
individuals.
There are numerous application areas in which face recognition can be exploited for these two purposes, a few of which
are outlined below[2],[6]:
• Security (access control to buildings, airports/seaports, ATM machines and border checkpoints[7],[8]; computer/network security[9]; email authentication on multimedia workstations).
• Surveillance (a large number of CCTVs can be monitored to look for known criminals, drug offenders, etc., and authorities can be notified when one is located; for example, this procedure was used at the Super Bowl 2001 game at Tampa, Florida; in another instance, according to a CNN report).
• General identity verification (electoral registration, banking, electronic commerce, identifying newborns, national IDs, passports, drivers' licenses, employee IDs).
• Criminal justice systems (mug-shot/booking systems, post-event analysis, forensics).
• Image database investigations (searching image databases of licensed drivers, benefit recipients, missing children, immigrants and police bookings).
• "Smart Card" applications (in lieu of maintaining a database of facial images, the face-print can be stored in a smart card, bar code or magnetic stripe, authentication of which is performed by matching the live image and the stored template)[10].
• Multi-media environments with adaptive human computer interfaces (part of ubiquitous or context-aware systems, behavior monitoring at childcare or old people's centers, recognizing a customer and assessing his needs)[11].
• Video indexing (labeling faces in video)[12],[13].
• Witness face reconstruction.
In addition to these applications, the underlying techniques in the current face recognition technology have also been
modified and used for related applications such as gender classification, expression recognition and facial feature
recognition and tracking; each of these has its utility in various domains: for instance, expression recognition can be
utilized in the field of medicine for intensive care monitoring while facial feature recognition and detection can be
exploited for tracking a vehicle driver’s eyes and thus monitoring his fatigue, as well as for stress detection[6].
"Face Recognition" generally involves two stages[14]:
1. Face Detection, where a photo is searched to find any face; image processing then cleans up the facial image for easier recognition.
2. Face Recognition, where the detected and processed face is compared to a database of known faces to decide who that person is.
It is usually harder to detect a person's face when it is viewed from the side or at an angle, and sometimes this requires 3D head pose estimation. It can also be very difficult to detect a face if the photo is not very bright, if part of the face is brighter than another, or if the face has shadows, is blurry, or includes glasses, etc.[14]
However, Face Recognition is much less reliable than Face Detection, generally 30-70% accurate[14].
Before the face recognition system can be used, there is an enrollment phase, wherein face images are introduced to the
system to let it learn the distinguishing features of each face. The identifying names, together with the discriminating
features, are stored in a database, and the images associated with the names are referred to as the gallery. Eventually, the
system will have to identify an image, formally known as the probe, against the database of gallery images using
distinguishing features. The best match, usually in terms of distance, is returned as the identity of the probe[15].
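The enrollment-gallery-probe flow above amounts to a nearest-neighbour lookup. A minimal sketch in Python/NumPy (the image sizes and identity names here are invented for the example; a real system would use discriminating features rather than raw pixels):

```python
import numpy as np

def enroll(gallery_images, names):
    """Build the gallery: one flattened feature vector per enrolled face."""
    features = np.stack([img.ravel().astype(float) for img in gallery_images])
    return features, list(names)

def identify(probe_image, features, names):
    """Return the identity of the gallery face closest to the probe (Euclidean distance)."""
    probe = probe_image.ravel().astype(float)
    distances = np.linalg.norm(features - probe, axis=1)
    return names[int(np.argmin(distances))]

# Toy 2x2 "images": two gallery faces and a probe close to the first.
gallery = [np.full((2, 2), 10), np.full((2, 2), 200)]
features, names = enroll(gallery, ["alice", "bob"])
print(identify(np.array([[12, 9], [11, 10]]), features, names))  # -> alice
```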
Generally, there are two types of techniques in the face recognition process: the first is based on facial features, while the other considers the holistic view of the recognition problem.
• Holistic approach: the whole face region is taken as the input to the face recognition system[16].
• Feature-based approach: local features on the face, such as the nose and eyes, are segmented and then given to the face recognition system to ease the task of recognition[16].
• Hybrid approach: both local features and the whole face are used as the input to the system. This is closer to the way human beings recognize faces[16].
3. FACE RECOGNITION TECHNIQUES
3.1 PCA (Principal Component Analysis)
PCA is a mathematical procedure that performs dimensionality reduction by extracting the principal components of multi-dimensional data. The first principal component is the linear combination of the original dimensions that has the highest variability. The n-th principal component is the linear combination with the maximum variability that is orthogonal to the first n−1 principal components[17].
Figure 1 PCA. x and y are the original basis. ɸ is the first principal component[17].
The scheme is based on an information theory approach that decomposes face images into a small set of characteristic
feature images called ‘Eigenfaces’, which are actually the principal components of the initial training set of face images.
An important feature of PCA is that one can reconstruct any original image from the training set by combining the
Eigenfaces. The technique used in creating Eigenfaces and using them for recognition is also used outside of facial
recognition. This technique is also used for handwriting analysis, lip reading, voice recognition, sign language/hand
gestures interpretation and medical imaging analysis[18].
During the training phase, each face image is represented as a column vector, with each entry corresponding to an image
pixel. These image vectors are then normalized with respect to the average face. Next, the algorithm finds the
eigenvectors of the covariance matrix of normalized faces by using a speedup technique that reduces the number of
multiplications to be performed. This eigenvector matrix is then multiplied by each of face vectors to obtain their
corresponding face space projections. Finally, the recognition threshold is computed by using the maximum distance
between any two face projections[19],[20].
In the recognition phase, a subject face is normalized with respect to the average face and then projected onto face space
using the eigenvector matrix. Next, the Euclidean distance is computed between this projection and all known projections.
The minimum value of these comparisons is selected and compared with the threshold value calculated during the
training phase. Based on this, if the value is greater than the threshold, the face is new. Otherwise, it is a known
face[19],[20].
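The training and recognition phases described above can be sketched in NumPy. This is an illustrative implementation of the eigenface procedure, not the original authors' code; the speed-up step is the small-matrix trick of Turk and Pentland[20], and the toy gallery is random data standing in for real face images:

```python
import numpy as np

def train_eigenfaces(images, k):
    """images: (n, p) array, one flattened face per row; keep k eigenfaces."""
    X = images.astype(float)
    mean_face = X.mean(axis=0)
    A = X - mean_face                         # normalize w.r.t. the average face
    # Speed-up: eigendecompose the small n x n matrix A A^T instead of the
    # p x p covariance matrix, then map the eigenvectors back to pixel space.
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:k]
    eigenfaces = A.T @ vecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    projections = A @ eigenfaces              # face-space coordinates of the gallery
    # Recognition threshold: maximum distance between any two face projections.
    threshold = max(np.linalg.norm(p - q) for p in projections for q in projections)
    return mean_face, eigenfaces, projections, threshold

def recognize(face, mean_face, eigenfaces, projections, threshold):
    """Return the index of the best gallery match, or None for a new face."""
    w = (face.astype(float) - mean_face) @ eigenfaces
    dists = np.linalg.norm(projections - w, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None

# Toy gallery: four random 4x4 "faces", flattened to 16 pixels each.
rng = np.random.default_rng(0)
gallery = rng.integers(0, 255, size=(4, 16)).astype(float)
mean_face, eigenfaces, projections, threshold = train_eigenfaces(gallery, 2)
print(recognize(gallery[1], mean_face, eigenfaces, projections, threshold))  # -> 1
```

A gallery image projects exactly onto its own face-space coordinates (distance zero), so it is always reported as a known face.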
3.1.1 Mathematical representation of PCA
PCA generates a set of orthonormal basis vectors, known as principal components, that maximize the scatter of all projected samples. Let X = {X1, X2, ..., Xn} be the sample set of original images. After normalizing the images to unit norm and subtracting the grand mean, a new image set Y = {Y1, Y2, ..., Yn} is obtained, where each Yi is a normalized image of dimensionality N, Yi = (yi1, yi2, ..., yiN)^T, i = 1, 2, ..., n[21]. The covariance matrix of the normalized image set is defined as

Σ_Y = (1/n) Σ_{i=1}^{n} Yi Yi^T = (1/n) Y Y^T

and the eigenvector and eigenvalue matrices U, Λ are computed from[21]

Σ_Y U = U Λ
PCA appears to work well when a single image of each individual is available, but when multiple images per person are present, Belhumeur et al.[22] argue that by choosing the projection which maximizes total scatter, PCA retains unwanted variations due to lighting and facial expression. As stated by Moses et al.[23], "the variations between the images of the same face due to illumination and lighting direction are almost always larger than image variations due to a change in face identity"[6].
Figure 2 The same person seen under varying light conditions can appear dramatically different[22].(©1997 IEEE)
3.1.2 Advantages of PCA
• Recognition is simple and efficient compared to other matching approaches.
• Data compression is achieved by the low-dimensional subspace representation.
• Raw intensity data are used directly for learning and recognition without any significant low-level or mid-level processing.
• No knowledge of the geometry and reflectance of faces is required[18].
3.1.3 Disadvantages of PCA
• The method is very sensitive to scale; therefore, low-level preprocessing is still necessary for scale normalization.
• Since the eigenface representation is, in a least-squared sense, faithful to the original images, its recognition rate decreases under varying pose and illumination.
• Though the eigenface approach is shown to be robust when dealing with expression and glasses, these experiments were made only with frontal views. The problem can be far more difficult when there are extreme changes in pose as well as in expression and disguise.
• Due to its "appearance-based" nature, learning is very time-consuming, which makes it difficult to update the face database[18].
3.2 LDA (Linear Discriminant Analysis)
Linear Discriminant Analysis is a well-known scheme for feature extraction and dimension reduction. LDA is widely used to find linear combinations of features while preserving class separability. Unlike PCA, LDA tries to model the differences between classes. Classic LDA is designed to take into account only two classes; specifically, it requires data points of different classes to be far from each other, while points of the same class are close. Consequently, LDA obtains different projection vectors for each class. Multi-class LDA algorithms, which can manage more than two classes, are more commonly used[17].
Whereas PCA focuses on finding the maximum variation within a pool of images, LDA distinguishes between the
differences within an individual and those among individuals[19].
The face space created in LDA gives higher weight to the variations between individuals than those of the same
individual. As a result, LDA is less sensitive to lighting, pose, and expression variations. The drawback is that this
algorithm is significantly more complicated than PCA[19].
As input, LDA takes a set of faces with multiple images of each individual. These images are labeled and divided into within-class and between-class groups: the former captures variations among images of the same individual, while the latter captures variation among the classes of individuals. LDA then calculates the within-class scatter matrix and the between-class scatter matrix, defined as follows[19].
(i) The within-class scatter matrix is given by[24]

S_w = Σ_{j=1}^{c} Σ_{i=1}^{Nj} (X_i^j − μ_j)(X_i^j − μ_j)^T

where X_i^j is the i-th sample of class j, μ_j is the mean of class j, c is the number of classes, and N_j is the number of samples in class j.

(ii) The between-class scatter matrix is given by[24]

S_b = Σ_{j=1}^{c} (μ_j − μ)(μ_j − μ)^T

where μ represents the mean of all classes.
Next, the optimal projection is chosen such that it "maximizes the ratio of the determinant of the between-class scatter matrix of the projected samples to the determinant of the within-class scatter matrix of the projected samples". This ensures that the between-class variations are assigned higher weight than the within-class variations. To prevent the within-class scatter matrix from becoming singular, PCA is usually applied to the initial image set. Finally, a well-known mathematical formula is used to determine the class to which the target face belongs. Since the weight of within-class variation has been reduced, the results will be relatively insensitive to such variations[19].
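The two scatter matrices defined above can be computed directly. A sketch in NumPy, following the unweighted form of S_b given in the formulas (some presentations weight each class term by N_j); the toy samples are invented for the example:

```python
import numpy as np

def scatter_matrices(X, y):
    """X: (n, d) samples, y: class label for each sample.
    Returns the within-class scatter Sw and between-class scatter Sb."""
    mu = X.mean(axis=0)            # overall mean (equals the mean of class means here)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = Xc - mu_c
        Sw += diff.T @ diff        # sum over the samples of class c
        m = (mu_c - mu).reshape(-1, 1)
        Sb += m @ m.T              # one term per class
    return Sw, Sb

X = np.array([[0., 0.], [2., 0.], [10., 0.], [12., 0.]])
y = np.array([0, 0, 1, 1])
Sw, Sb = scatter_matrices(X, y)
print(Sw[0, 0], Sb[0, 0])   # -> 4.0 50.0
```

The Fisher projection is then taken from the leading eigenvectors of inv(Sw) @ Sb, after PCA has made Sw non-singular.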
3.2.1 Advantages of LDA
• The Fisherface projection approach aims to solve the illumination problem by maximizing the ratio of between-class scatter to within-class scatter; however, finding an optimal projection that can simultaneously separate multiple face classes is almost impossible[18].
• LDA-based algorithms outperform PCA-based ones, since the former optimize the low-dimensional representation of the objects with a focus on the most discriminant feature extraction, while the latter achieve simply object reconstruction[18].
3.2.2 Disadvantages of LDA
• An intrinsic limitation of classical LDA is the so-called singularity problem: it fails when all scatter matrices are singular.
• A critical issue in using LDA, particularly in the face recognition area, is the Small Sample Size (SSS) problem. This problem is encountered in practice because there are often a large number of pixels available, but the total number of training samples is less than the dimension of the feature space. This implies that all scatter matrices are singular, and thus the traditional LDA algorithm cannot be applied[18].
3.3 Independent Component Analysis
Independent Component Analysis aims to transform the data into linear combinations of statistically independent data points. Therefore, its goal is to provide an independent rather than uncorrelated image representation. ICA is an alternative to PCA which provides a more powerful data representation. It is a discriminant analysis criterion which can be used to enhance PCA[17].
Whereas PCA depends on the "pairwise relationships between pixels in the image database," ICA strives to exploit "higher-order relationships among pixels." That is, PCA can only represent second-order inter-pixel relationships, or relationships that capture the amplitude spectrum of an image but not its phase spectrum. ICA algorithms, on the other hand, use higher-order relationships between the pixels and are capable of capturing the phase spectrum. Indeed, it is the phase spectrum that contains the information which humans use to identify faces[19].
The ICA implementation of face recognition relies on the InfoMax algorithm and represents the input as an n-dimensional random vector. This random vector is then reduced using PCA, without losing the higher-order statistics. The ICA algorithm then finds the covariance matrix of the result and obtains its factorized form. Finally, whitening, rotation, and normalization are performed to obtain the independent components that constitute the face space of the individuals. Since the higher-order relationships between pixels are used, ICA is robust in the presence of noise; thus, recognition is less sensitive to "lighting conditions, changes in hair, make-up, and facial expression"[19].
Two different architectures are used by ICA for face recognition. In the first, images are treated as random variables and pixels as trials; in other words, this ICA architecture tries to find a set of statistically independent basis images. In the second, pixels are treated as random variables and images as trials[21].
The algorithm works as follows[21]. Let X be an n-dimensional (n-D) random vector representing a distribution of inputs in the environment, and let W be an n×n invertible matrix, with U = WX and Y = f(U) an n-D random variable representing the outputs of the n neurons. Each component of f = (f1, f2, ..., fn) is an invertible squashing function mapping the real numbers into the [0,1] interval; typically the logistic function is used:

f_i(u) = 1 / (1 + e^{−u})

The variables U1, U2, ..., Un are linear combinations of the inputs and can be interpreted as the presynaptic activations of the n neurons.
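The notation above can be made concrete with a small sketch. The data here are synthetic, and only the forward pass U = WX, Y = f(U) is shown; the InfoMax weight update itself is omitted:

```python
import numpy as np

def logistic(u):
    """Invertible squashing function mapping the reals into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

rng = np.random.default_rng(1)
n = 3
X = rng.normal(size=(n, 500))   # n-D random vector, one column per observation
W = rng.normal(size=(n, n))     # n x n mixing matrix (invertible with probability 1)
U = W @ X                       # presynaptic activations: linear combinations of inputs
Y = logistic(U)                 # outputs of the n neurons, each in (0, 1)
print(Y.min() > 0.0 and Y.max() < 1.0)  # -> True
```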
3.4 Elastic Bunch Graph Matching
The EBGM algorithm identifies a person in a new face image by comparing it to other faces stored in a database. The algorithm extracts feature vectors (called Gabor jets) from interest points on the face and then matches those features to corresponding features from the other faces in the database[25].
The EBGM algorithm operates in three phases. First, important landmarks on the face are located by comparing Gabor
jets extracted from the new image to Gabor jets taken from training imagery. Second, each face image is processed into a
smaller description of that face, called a FaceGraph. The last phase computes the similarity between FaceGraphs by comparing their Gabor jet features; the FaceGraphs with the highest similarity should belong to the same person. While all of these phases affect the performance of the algorithm, this example focuses on the performance of the similarity comparison computations[25].
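The similarity computation in the last phase can be sketched as follows. This assumes the commonly used magnitude-based jet similarity (a normalized dot product of the Gabor coefficient magnitudes; phase-sensitive variants also exist), and takes the landmark correspondence between the two graphs as given:

```python
import numpy as np

def jet_similarity(jet_a, jet_b):
    """Normalized dot product of Gabor jet magnitudes; 1.0 means identical."""
    a = np.abs(np.asarray(jet_a, dtype=complex))   # magnitudes of the complex coefficients
    b = np.abs(np.asarray(jet_b, dtype=complex))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def facegraph_similarity(jets_a, jets_b):
    """Average jet similarity over corresponding landmarks of two FaceGraphs."""
    return sum(jet_similarity(a, b) for a, b in zip(jets_a, jets_b)) / len(jets_a)
```

Ranking the gallery by facegraph_similarity against the probe then yields the candidate identities, highest similarity first.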
Figure 3 Grids for Face Recognition[25]
Like PCA, the EBGM algorithm was designed for accuracy and had no need to run in real time. The execution of this algorithm is currently performed by running three separate executables on the data.
3.4.1 Advantages of EBGM[26]
• Changes in one feature (e.g. eyes open vs. closed) do not necessarily mean that the person is no longer recognized.
• The algorithm makes it possible to recognize faces up to a rotation of 22 degrees.
• It is relatively insensitive to variations in face position and facial expression.
3.4.2 Disadvantages of EBGM
• Graph-matching approaches are not ideal for real-time environments, where matching stored graphs takes too much time[27].
• Consequently, geometry-based graph matching is a computationally expensive approach.
• It is very sensitive to lighting conditions[26].
• Many graphs have to be placed manually on the face. For a reliable system, recognition is then done by comparing these graphs, which requires a large store of convolution images for good performance[26].
3.5 3-D Morphable Model
The human face is intrinsically a surface lying in 3-D space. Therefore, a 3-D model should be better suited for representing faces, especially for handling facial variations such as pose and illumination. Blanz et al. proposed a method based on a 3-D morphable face model that encodes shape and texture in terms of model parameters, together with an algorithm that recovers these parameters from a single image of a face[28].
The morphable face model is based on a vector space representation of faces that is constructed such that any convex
combination of shape and texture vectors of a set of examples describes a realistic human face[28].
Fitting the 3D morphable model to images can be used in two ways for recognition across different viewing conditions. In the first paradigm, after fitting the model, recognition is based on the model coefficients, which represent the intrinsic shape and texture of faces and are independent of the imaging conditions. In the second paradigm, three-dimensional face reconstruction is employed to generate synthetic views from gallery or probe images; the synthetic views are then handed to a second, viewpoint-dependent recognition system[28].
The approach is based on a morphable model of 3D faces that captures the class-specific properties of faces. These
properties are learned automatically from a data set of 3D scans. The morphable model represents shapes and textures of
faces as vectors in a high-dimensional face space, and involves a probability density function of natural faces within face
space. The algorithm estimates all 3D scene parameters automatically, including head position and orientation, focal
length of the camera, and illumination direction. This is achieved by a new initialization procedure that also increases
robustness and reliability of the system considerably. The new initialization uses image coordinates of between six and
eight feature points[28].
3.6 HMM (Hidden Markov Model)
Stochastic modeling of nonstationary vector time series based on HMM has been very successful for speech applications.
Reference [28] applied this method to human face recognition. Faces were intuitively divided into regions such as the
eyes, nose, mouth, etc., which can be associated with the states of a hidden Markov model. Since HMMs require a one-dimensional observation sequence and images are two-dimensional, the images must be converted into either 1D temporal sequences or 1D spatial sequences.
In [28], a spatial observation sequence was extracted from a face image by using a band-sampling technique. Each face image was represented by a 1D vector series of pixel observations. Each observation vector is a block of L lines, and there is an overlap of M lines between successive observations. An unknown test image is first sampled to an observation sequence; it is then matched against every HMM in the model face database (each HMM represents a different subject). The match with the highest likelihood is considered the best match, and the corresponding model reveals the identity of the test face.
One of the major drawbacks of HMM is that it is sensitive to geometrical shape[28].
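The band-sampling extraction described above can be sketched as follows; the toy image size is invented for the example:

```python
import numpy as np

def band_sample(image, L, M):
    """Convert a 2-D face image into a 1-D sequence of observation vectors:
    each observation is a block of L rows, with M rows of overlap between
    successive blocks."""
    step = L - M
    rows = image.shape[0]
    return [image[top:top + L].ravel() for top in range(0, rows - L + 1, step)]

img = np.arange(12).reshape(6, 2)   # toy 6x2 "face image"
obs = band_sample(img, L=2, M=1)    # blocks of 2 rows, overlapping by 1 row
print(len(obs))                     # -> 5
```

Each HMM in the database would then score this observation sequence, and the model with the highest likelihood identifies the face.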
3.7 Neural Network
Unlike the above algorithms, the neural network approach to face recognition is biologically inspired and based on the functionality of neurons. The perceptron is the neural network equivalent of a neuron: just as a neuron sums the strengths of all its electrical inputs, a perceptron performs a weighted sum of its numerical inputs[19]. Using perceptrons as the basic unit, a neural network is formed for each person in the database.
The neural networks usually consist of three or more layers[29]. An input layer takes in a dimensionally reduced (using PCA) image from the database. An output layer produces a numerical value between −1 and 1. In between these two layers there usually exist one or more hidden layers. For the purposes of face recognition, one hidden layer usually provides a good balance between complexity and accuracy: including more than one such layer exponentially increases the training time, while not including any results in poor recognition rates. Once this neural network is formed for each person, it must be trained to recognize that person.
The most common training method is the back-propagation algorithm[29]. This algorithm sets the weights of the connections between neurons such that the neural network exhibits high activity for inputs that belong to the person it represents and low activity for others. During the recognition phase, a reduced image is placed at the input of each of these networks, and the network with the highest numerical output represents the correct match.
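A minimal sketch of this per-person arrangement, using tanh units so the output lies in (−1, 1). The toy data, network size, and learning rate are invented for the example; a real system would feed PCA-reduced face vectors instead:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_person_net(X, targets, hidden=4, lr=0.1, epochs=2000):
    """Train one small network: output near +1 for this person's images,
    near -1 for everyone else's. Plain back-propagation on squared error."""
    d = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (d, hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    for _ in range(epochs):
        H = np.tanh(X @ W1)                       # hidden layer activations
        out = np.tanh(H @ W2).ravel()             # network output in (-1, 1)
        g_out = ((out - targets) * (1 - out**2)).reshape(-1, 1)
        g_hid = (g_out @ W2.T) * (1 - H**2)       # back-propagated error
        W2 -= lr * H.T @ g_out / len(X)
        W1 -= lr * X.T @ g_hid / len(X)
    return W1, W2

def activity(x, W1, W2):
    """Numerical output of one person's network for a reduced input image."""
    return float(np.tanh(np.tanh(x @ W1) @ W2)[0])

# Toy data: "this person" clusters near (1, 1), everyone else near (-1, -1).
X = np.array([[1.0, 1.0], [0.9, 1.1], [-1.0, -1.0], [-1.1, -0.9]])
t = np.array([1.0, 1.0, -1.0, -1.0])
W1, W2 = train_person_net(X, t)
print(activity(np.array([1.0, 0.9]), W1, W2) > 0)   # -> True
```

In the full scheme one such network is trained per enrolled person, and the probe is assigned to the network with the highest activity.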
The main problem with neural networks is that there is no clear method for choosing the initial network topology. Since training takes a long time, experimenting with topologies becomes a difficult task[29]. Another issue that arises when neural networks are used for face recognition is that of online training: unlike PCA, where an individual may be added by computing a projection, a neural network must be retrained to recognize a new individual. This is a time-consuming task not well suited to real-time applications[19].
4. CONCLUSION
In this paper, we have addressed the problems that must be overcome for face recognition, such as variable light intensity and facial expression, and we have discussed the requirements for a reliable and efficient face recognition system, such as accuracy and efficiency. We have explored different statistical face recognition algorithms and discussed the advantages and drawbacks of each of them.
REFERENCES
[1] Pao-Chung Chao, Face Recognition System Based on Front-End Facial Feature Extraction Using FPGA, M.Sc. Thesis, De La Salle University, Manila, 2009
[2] Dr. Pramod Kumar, Mrs. Monika Agarwal, Miss. Stuti Nagar, A Survey on Face Recognition System - A Challenge
International Journal of Advanced Research in Computer and Communication Engineering Vol. 2, Issue 5, May
2013
[3] A. K. Jain, R. Bolle, and S. Pankanti, Biometrics: Personal Identification in Networked Security, A. K. Jain, R.
Bolle, and S. Pankanti, Eds.: Kluwer Academic Publishers, 1999.
[4] Andrea F. Abate, Michele Nappi, Daniel Riccio, Gabriele Sabatino (2007), 2D and 3D face recognition: A survey,
2007 Elsevier B.V. Available from: http://codc4fv.googlecode.com/files/faceSurvey2007.pdf
[5] Ming-Hsuan Yang, David J. Kriegman, Narendra Ahuja, Detecting Faces in Images: A Survey, IEEE Transactions
on Pattern Analysis and Machine Intelligence, Vol. 24, No. 1, January 2002
[6] Rabia jafri, Hamid R. Arabnia, A Survey of Face Recognition Techniques, Journal of Information Processing
Systems, Vol.5, No.2, June 2009
[7] K. Kim, Intelligent Immigration Control System by Using Passport Recognition and Face Verification, International
Symposium on Neural Networks. Chongqing, China, 2005, pp.147-156.
[8] J. N. K. Liu, M. Wang, and B. Feng, iBotGuard: an Internet-based intelligent robot security system using invariant
face recognition against intruder, IEEE Transactions on Systems Man And Cybernetics Part C -Applications And
Reviews, Vol.35, pp.97-105, 2005.
[9] H. Moon, Biometrics Person Authentication Using Projection-Based Face Recognition System in Verification
Scenario, in International Conference on Bioinformatics and its Applications. Hong Kong, China, 2004, pp.207-213.
[10] P. J. Phillips, H. Moon, P. J. Rauss, and S. A. Rizvi, The FERET Evaluation Methodology for Face Recognition
Algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.22, pp.1090-1104, 2000.
[11] S. L. Wijaya, M. Savvides, and B. V. K. V. Kumar, Illumination tolerant face verification of low-bit- rate JPEG2000
wavelet images with advanced correlation filters for handheld devices, Applied Optics, Vol.44, pp.655-665, 2005.
[12] E. Acosta, L. Torres, A. Albiol, and E. J. Delp, An automatic face detection and recognition system for video
indexing applications, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal
Processing, Vol.4. Orlando, Florida, 2002, pp.3644-3647.
[13] J.-H. Lee and W.-Y. Kim, Video Summarization and Retrieval System Using Face Recognition and MPEG-7
Descriptors, in Image and Video Retrieval, Vol.3115, Lecture Notes in Computer Science: Springer Berlin /
Heidelberg, 2004, pp.179-188.
[14] Neha Rathore, Deepti Chaubey, and Nupur Rajput, A Survey on Face Detection and Recognition, International Journal of Computer Architecture and Mobility, Vol. 1, Issue 5, March 2013
[15] X. Lu, Image Analysis for Face Recognition, Personal Notes, May 2003, http://www.face-rec.org/interestingpapers/General/ImAna4FacRcg_lu.pdf
[16] Jigar M. Pandya, Devang Rathod, Jigna J. Jadav, A Survey of Face Recognition approach, International Journal of
Engineering Research and Applications (IJERA) Vol. 3, Issue 1, January -February 2013, pp.632-635
[17] I. Marques, Face Recognition Algorithms (Proyecto Fin de Carrera), 2010, www.ehu.es/ccwintco/uploads/e/eb/PFC-IonMarques.pdf
[18] Sanjeev Kumar and Harpreet Kaur, Face Recognition Techniques: Classification and Comparisons, International Journal of Information Technology and Knowledge Management, July-December 2012, Volume 5, No. 2, pp. 361-363
[19] P. Kannan and R. Shantha Selva Kumari, A Literature Survey on Face Recognition System: Algorithms and its Implementation,
http://share.pdfonline.com/af74b8894437414695fb5abce41a0fdb/A_Literature_Survey_on_Face_Recognition.htm
[20] M. A. Turk and A. P. Pentland, Face Recognition Using Eigenfaces, IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991.
[21] Mukesh Gollen, Comparative Analysis of Face Recognition Algorithms, IJREAS, Volume 2, Issue 2, February 2012
[22] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition using class specific
linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.19, pp.711-720, 1997.
[23] Y. Moses, Y. Adini, and S. Ullman, Face recognition: the problem of compensating for changes in illumination
direction, in European Conf. Computer Vision, 1994, pp.286-296.
[24] Aruni Singh, Sanjay Kumar Singh, and Shrikant Tiwari, Comparison of Face Recognition Algorithms on Dummy Faces, The International Journal of Multimedia & Its Applications (IJMA), Vol. 4, No. 4, August 2012
[25] David S. Bolme, Michelle Strout, and J. Ross Beveridge, FacePerf: Benchmarks for Face Recognition Algorithms, IEEE 10th International Symposium on Workload Characterization, Boston, MA, USA, 2007.
[26] Sushma Jaiswal, Sarita Singh Bhadauria, and Rakesh Singh Jadon, Comparison between Face Recognition Algorithms - Eigenfaces, Fisherfaces and Elastic Bunch Graph Matching, Journal of Global Research in Computer Science, Volume 2, No. 7, July 2011
[27] Muhammad Sharif, Sajjad Mohsin, and Muhammad Younas Javed, A Survey: Face Recognition Techniques, Research Journal of Applied Sciences, Engineering and Technology, 4(23): 4979-4990, 2012
[28] A. S. Tolba, A. H. El-Baz, and A. A. El-Harby, Face Recognition: A Literature Review, International Journal of Information and Communication Engineering, 2:2, 2006
[29] X. Li and S. Areibi, “A Hardware/Software Co-design Approach for Face Recognition,” Proc. 16th International Conference on Microelectronics, Tunis, Tunisia, Dec 2004.
AUTHORS
Deepali H. Shah received her B.E. degree in instrumentation and control from L.D. College of Engineering, Ahmedabad, India, in 1997 and her M.E. degree in applied instrumentation from L.D. College of Engineering, Ahmedabad, in 2009. She is an associate professor in the Department of Instrumentation and Control Engineering at L.D. College of Engineering, Ahmedabad. Her areas of interest include embedded systems, microprocessors and microcontrollers, digital systems, IP telephony, and VLSI. At present, she is pursuing a PhD in the area of computer vision.
Dr. J. S. Shah received his B.E. degree in electronics engineering from M.S. University, Vadodara, India, in 1973, his M.E. degree in electronics from Roorkee University, India, in 1975, and his Ph.D. in parallel processing from Gujarat University. He was a professor of computer engineering and served as principal of Government Engineering College, Patan. His areas of interest include parallel processing, software engineering, and quantum computing. Currently, he is the Director of Samarth Institute, Himmatnagar.