An Adaptive Framework for Face Recognition Using Classifier Combination Technique Based on Neural Network

Kisan Lal, MCA pursuing, MSIT Department, Mats University, Raipur (kisanlal1990@gmail.com)
Rimanlal Nishad, MCA pursuing, MSIT Department, Mats University, Raipur (rsagar198@gmail.com)
Reshamlal Pradhan, MTech, Mats University, Raipur (reshamlalpradhan6602@gmail.com)
ABSTRACT
In recent years, important advances have been made in the area of facial expression recognition. Humans are still better at recognition in many respects, but as automation increases day by day there is a need for efficient machine recognition systems, and a great deal of research is therefore devoted to machine recognition. The need to combine two or more facial expression recognition techniques in a naturalistic context is clear, where adaptation to specific human characteristics and expressivity is required, because a single facial expression recognition technique alone cannot always provide satisfactory evidence. This paper presents an adaptive framework that uses the classifier combination technique for facial expression recognition using neural networks.
Keywords
Face recognition, Feature extraction, Feature classification,
Neural Network, Classifier combination, Feature selection,
Feedback Neural Network, Feed forward Neural Network.
1. INTRODUCTION
The Face Recognition System is a system that automatically identifies a human face and its emotions. Face recognition has been studied for many years and has practical applications in areas such as security systems, detection of criminals and support for speech identification systems [5]. The face recognition problem is a difficult one because a face is a complex, multidimensional, meaningful visual stimulus. Face recognition is important to humans because the face plays a major role in social interaction, conveying emotions and feelings [5]. Human face identification is subject to several difficulties, such as: similarity between different faces; dealing with large numbers of unknown human faces; expressions and hair that can change the face; and faces that can be viewed from many angles in many situations [6]. Biometrics is the science and technology of recording and authenticating identity using physiological or behavioral characteristics of the subject. It is a measurable characteristic, whether physiological or behavioral, of a living organism that can be used to distinguish that organism as an individual. Biometric data are captured when the user attempts to be authenticated by the system. These data are used by the biometric system for real-time comparison against stored biometric samples. In biometrics, the identity of an individual may be viewed as the information associated with that person in a particular identity management system [9].
A good face recognition system must be robust enough to overcome these difficulties and generalize over many conditions to capture the essential similarities for a given human face. A general face recognition system (Fig 1.1) consists of several processing stages: Image Normalization, Feature Extraction, Feature Selection and Feature Classification. The Image Normalization and Feature Extraction phases may run simultaneously [6].
Fig 1.1: Face recognition system
In recent years, artificial neural networks (ANNs) have been used mostly for building intelligent computer systems related to pattern recognition and image processing. The most popular ANN model is the back-propagation neural network (BPNN), which can be trained using the back-propagation (BP) training algorithm. Different ANN models have been used widely in face recognition, often in combination with the methods mentioned above. An ANN simulates the way neurons work in the human brain, which is the main reason for its role in face recognition [6, 7].
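The paper gives no implementation details here; as a minimal, hedged sketch of the idea (assuming scikit-learn and its bundled digits dataset as a stand-in, with arbitrary layer sizes), a feed-forward network can be trained by back-propagation as follows:

```python
# Minimal sketch: training a small back-propagation neural network (BPNN)
# on a toy dataset. The dataset, layer sizes and parameters are illustrative
# only; they are not taken from the paper.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 8x8 digit images as 64-dim vectors
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# MLPClassifier is a feed-forward network trained with back-propagation.
bpnn = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                     max_iter=500, random_state=0)
bpnn.fit(X_train, y_train)
print("test accuracy:", bpnn.score(X_test, y_test))
```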
Normalization: Normalization is a process that changes the range of pixel intensity values. Applications include photographs with poor contrast due to glare. Normalization is sometimes called contrast stretching or histogram stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion. The motivation is to achieve consistency in dynamic range for a set of data, signals, or images, so as to avoid mental distraction or fatigue. Normalization transforms an n-dimensional grayscale image with intensity values in the range (Min, Max) into a new image with intensity values in the range (newMin, newMax) [14, 16]. The linear normalization of a grayscale digital image is performed according to the formula
I_N = (I - Min) x (newMax - newMin) / (Max - Min) + newMin.
Multi-linear subspace learning: Multi-linear subspace learning (MSL) aims to learn a specific small part of a large space of multidimensional objects having a particular desired property [5].
Isomap: Isomap is a nonlinear dimensionality reduction technique and is one of a number of widely used low-dimensional embedding methods [1]. Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points.
Kernel PCA: Kernel principal component analysis (kernel PCA) [1] is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space with a non-linear mapping [30].
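A rough illustration of this linear normalization (assuming a NumPy array as the image; the value ranges are examples only):

```python
import numpy as np

def normalize(image, new_min=0.0, new_max=255.0):
    """Linear normalization (contrast stretching) of a grayscale image."""
    old_min, old_max = image.min(), image.max()
    # I_N = (I - Min) * (newMax - newMin) / (Max - Min) + newMin
    return (image - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

# Example: stretch a low-contrast image to the full 0..255 range.
img = np.array([[60, 70], [80, 90]], dtype=float)
print(normalize(img))   # values now span 0..255
```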
Feature Extraction: Feature extraction involves reducing the
amount of resources required to describe a large set of data.
When performing analysis of complex data one of the major
problems stems from the number of variables involved.
Analysis with a large number of variables generally requires a large amount of memory and computational power, or a classification algorithm which overfits the training sample and generalizes poorly to new samples. Feature extraction is a
general term for methods of constructing combinations of the
variables to get around these problems while still describing
the data with sufficient accuracy. The best results are achieved
when an expert constructs a set of application-dependent
features. Nevertheless, if no such expert knowledge is
available, general dimensionality reduction techniques may
help. These include: Principal component analysis,
Multifactor dimensionality reduction, Multi-linear subspace
learning, Nonlinear dimensionality reduction, Isomap, Kernel
PCA, Multi-linear PCA, Latent semantic analysis
etc[9,11,17].
Principal component analysis: Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components [11].
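For illustration only (assuming scikit-learn; the data and number of components are placeholders, not taken from the paper), PCA can reduce flattened face images to a compact set of uncorrelated features:

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for a face dataset: 100 "images" of 32x32 pixels, flattened.
rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))

pca = PCA(n_components=20)           # keep 20 principal components
features = pca.fit_transform(faces)  # uncorrelated, lower-dimensional features
print(features.shape)                # (100, 20)
print(pca.explained_variance_ratio_.sum())
```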
Nonlinear dimensionality reduction: High-dimensional data, meaning data that requires more than two or three dimensions to represent, can be difficult to interpret. One approach to simplification is to assume that the data of interest lie on an embedded non-linear manifold within the higher-dimensional space. If the manifold is of low enough dimension, the data can be visualized in the low-dimensional space [6].
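As an illustrative sketch of manifold-based reduction (using scikit-learn's Isomap, mentioned above, on synthetic data; none of this comes from the paper):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Toy manifold data: points lying on a 2-D "swiss roll" embedded in 3-D.
X, _ = make_swiss_roll(n_samples=500, random_state=0)

# Isomap recovers a low-dimensional embedding of the manifold.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)  # (500, 2)
```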
Multi-linear PCA: Multi-linear principal component analysis (MPCA) [1] is a mathematical procedure that uses multiple
orthogonal transformations to convert a set of
multidimensional objects into another set of multidimensional
objects of lower dimensions.
Latent semantic analysis: Latent semantic analysis (LSA) is a
technique in natural language processing, in particular in
vectorial semantics, of analyzing relationships between a set
of documents and the terms they contain by producing a set of
concepts related to the documents and terms[29].
Feature Selection: Feature selection approaches are often used in domains where there are many features and comparatively few samples (or data points). Feature selection approaches are a subset of the more general field of feature extraction. Feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features. The archetypal case is the use of feature selection in analyzing DNA microarrays, where there are many thousands of features and a few tens to hundreds of samples [22, 25]. Feature selection techniques provide three main benefits when constructing predictive models:
- Improved model interpretability,
- Shorter training times,
- Enhanced generalization by reducing overfitting.
Multifactor dimensionality reduction: Multifactor dimensionality reduction (MDR) is a data mining approach for detecting and characterizing combinations of attributes or independent variables that interact to influence a dependent or class variable.
Feature-based selection techniques include:
- Exhaustive search: Evaluate all possible subsets of features.
- Branch and bound: A branch-and-bound search can be optimal.
- Sequential Forward Selection (SFS): Evaluate growing feature sets, starting with the best single feature (a sketch follows below).
- Sequential Backward Selection (SBS): Evaluate shrinking feature sets, starting with all the features [30].
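A minimal sketch of Sequential Forward Selection in the wrapper style described above (assuming scikit-learn; the classifier, dataset and number of selected features are arbitrary choices):

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

def sfs(X, y, clf, k):
    """Sequential Forward Selection: grow the feature set greedily."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        # Score each candidate feature added to the current subset.
        scores = [(cross_val_score(clf, X[:, selected + [f]], y, cv=3).mean(), f)
                  for f in remaining]
        best_score, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

print(sfs(X, y, clf, k=5))  # indices of the 5 greedily selected features
```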
Feature classification: Once the facial features are extracted and selected, the next step is to classify the image. The extracted features represent the geometrical qualities of the facial parts' deformation, such as a part's height and width, and model the part's shape. Feature extraction is considered to be the most important step in facial expression identification and is based on finding sets of features that convey meaningful information about the facial expression. Appearance-based face recognition algorithms use a large variety of classification methods. Sometimes two or more classifiers are combined to achieve better results. On the other hand, most model-based algorithms match the samples with a model or template; a learning method can then be used to improve the algorithm. One way or another, classifiers have a big impact on face recognition. Classification methods are used in many areas such as data mining, finance, signal decoding, voice recognition, natural language processing and medicine [27, 30].
Classification algorithms usually involve some learning: supervised, unsupervised or semi-supervised. Unsupervised learning is the more complex approach, as there are no labeled examples. However, many face recognition applications include a labeled set of subjects. Consequently, most face detection systems use supervised learning methods. There are also cases where the labeled data set is tiny and the acquisition of new labeled samples can be infeasible; in such cases, semi-supervised learning is required [19].
Similarity based classifiers: Template Matching, Nearest
Mean, Subspace Method, 1-NN, K-NN, Self-Organizing
Maps(SOM).
Probabilistic based classifiers: Bayesian, Logistic Classifier,
Parzen Classifier.
Classifiers using decision boundaries: Fisher Linear Discriminant (FLD), Binary Decision Tree, Perceptron, Multi-layer Perceptron, Radial Basis Network, Support Vector Machines.
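As a small illustration of one of the simplest similarity-based classifiers listed above, a nearest-mean classifier might be sketched as follows (pure NumPy; the toy data are invented for the example):

```python
import numpy as np

class NearestMeanClassifier:
    """Assign each sample to the class whose mean feature vector is closest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Euclidean distance from each sample to each class mean.
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Tiny example with two made-up "expression" classes.
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.1]])
y = np.array(["neutral", "neutral", "happy", "happy"])
print(NearestMeanClassifier().fit(X, y).predict(np.array([[0.1, 0.1], [1.0, 1.0]])))
```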
Combiners can be grouped into three categories according to their architecture:
- Parallel: All classifiers are executed separately; the combiner is then applied.
- Serial: Classifiers run one after another, and each classifier refines the previous results.
- Hierarchical: Classifiers are combined into a tree-like structure.
Classifier combination: Sometimes two or more classifiers are combined to reach superior results. On the other hand, most model-based algorithms match the samples with a model or template; a learning method can then be used to improve the algorithm. Combiner functions can be very simple or very complex. A low-complexity arrangement may require only one function to be trained, whose input is the scores of a single class. The highest complexity is achieved by defining several functions, one for every class, which take all scores as parameters; in this way, more information is used for the combination [24].
There can be a number of reasons to combine classifiers in face recognition:
- The designer has several classifiers, each developed with a different technique. For example, there can be a classifier designed to recognize faces using eyebrow templates. We could combine it with a different classifier that uses another recognition scheme. This may lead to better identification performance.
- There can be different training sets, collected in different environments and representing different facial appearances. Each training set could be well suited to a certain classifier, and those classifiers could then be combined.
- A single training set can yield different results when used with different classifiers. A combination of classifiers can be used to achieve the best results.
- Some classifiers vary in their performance depending on certain initializations. Instead of choosing one classifier, we can combine some of them.
There are different combination schemes. They may vary from each other in their architectures and in the selection of the combiner. A combiner in pattern recognition usually uses a fixed number of classifiers. This allows the strengths of each classifier to be exploited. The common scheme is to propose a certain function that weights each classifier's output score; then there has to be a decision boundary to take a decision based on that function. Combination methods can also be grouped based on the stage at which they work. A combiner may work at the feature level: the features of all classifiers are combined to form a new feature vector, and then a new classification is made [26, 28].
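An illustrative sketch of this weighted-score combination (the classifiers, weights, scores and threshold below are invented for the example, not taken from the paper):

```python
import numpy as np

def combine_scores(scores, weights):
    """Parallel combination: weight each classifier's per-class scores and sum."""
    scores = np.asarray(scores)     # shape: (n_classifiers, n_classes)
    weights = np.asarray(weights)   # one weight per classifier
    return weights @ scores         # combined score per class

# Example: three classifiers scoring the same face over three subjects.
scores = [[0.6, 0.3, 0.1],   # e.g. an eyebrow-template classifier
          [0.4, 0.4, 0.2],   # e.g. a PCA + nearest-mean classifier
          [0.7, 0.2, 0.1]]   # e.g. a neural-network classifier
weights = [0.5, 0.2, 0.3]    # confidence assigned to each classifier

combined = combine_scores(scores, weights)
decision = combined.argmax() if combined.max() > 0.5 else None  # simple decision boundary
print(combined, "->", decision)
```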
Classifier combination techniques are: Voting, Adaptive weighting, Stacking, Logistic regression, Bagging, Boosting and Neural tree. The proposed work is based on classifier combination using a neural tree.
Neural Network:
A neural network is a system based on models of brain structure. It is a massively parallel distributed system that has a natural propensity for storing observed knowledge and making it available for use; the knowledge is stored in the nodes. It is a layered model, meaning that each layer is connected to the next. A layer is made up of a number of units connected to each other, and each node holds an activation value. The layers of a neural network are the input layer, the middle (hidden) layer and the output (result) layer. The input layer connects to one or more middle layers and passes the data into them. The middle layer is the main layer of the network; it processes all the received data and, after processing, sends the result to the output layer, which holds the result. Neural networks are divided into two kinds: Artificial Neural Networks and Biological Neural Networks [21, 22].
An ANN is a computational model of the human brain; it contains a mathematical model of the biological nervous system. An ANN is a system that contains inputs, hidden layers and outputs. It consists of a large number of simple processing units that are connected to each other in layers. ANNs are highly organized parallel networks of simple elements with hierarchical grouping, which try to interact with objects of the real world in the same way that the biological nervous system does. There is natural evidence that some problems that are beyond the reach of existing machines are in fact solvable by small, energy-efficient packages. An ANN uses a set of functions for solving problems. ANNs are crude electronic models based on the neural structure of the brain [20, 21].
Fig 1.3: (a) Artificial neuron; (b) Multilayered neuron
Types of Neural Networks: ANNs are categorized in two ways: Feed-Forward Networks and Recurrent/Feed-Backward Networks.
Feed-forward Neural Network: The activations of the input units are set and then propagated through the network until the values of the output units are determined. It may be a single-layer or a multilayer neural network:
Fig 1.2: Mathematical model of a neuron
Single Layer: Only one layer of weights is interconnected; the inputs may be fully connected to the output units.
Some of the basic terminology of ANNs includes: weights, activation function, sigmoid function, bias and threshold value.
Weight: A weight is information used by the neural net to solve a problem.
Activation Function: The activation function is used to calculate the output response of a neuron. The weighted sum of the input signals is passed through the activation function to produce the output response.
Sigmoid Function: The sigmoid function is used in multilayer nets such as the back-propagation network (BPN) and radial basis function (RBF) networks.
Bias: The bias improves the performance of the neural network; it increases the net input to the unit.
Threshold Value: It is a factor used in calculating the activation of the given net.
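These terms can be illustrated with a single artificial neuron (a sketch with arbitrary numbers, not tied to any figure in the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias, threshold=0.5):
    """Weighted sum of inputs plus bias, squashed by the sigmoid activation."""
    net_input = np.dot(weights, inputs) + bias    # bias shifts the net input
    activation = sigmoid(net_input)               # activation function
    return activation, activation >= threshold    # threshold decides if it "fires"

print(neuron(inputs=np.array([0.2, 0.7]), weights=np.array([0.4, 0.9]), bias=0.1))
```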
Fig1.4: Single Layer Neural Network
Multi Layer: Signal flow from the input units to the output units through one or more hidden layers, in the forward direction only, is called a multilayer feed-forward neural network.
Single-layer and multilayer structures of a neural network are shown in figures 1.4 and 1.5.
Fig1.5: Multi Layer Neural Network
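A minimal sketch of forward propagation through such a multilayer feed-forward network (NumPy only; the random weights are placeholders rather than trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Layer sizes: 4 input units -> 3 hidden units -> 2 output units.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)

def forward(x):
    """Activations flow only forward: input -> hidden -> output."""
    hidden = sigmoid(W1 @ x + b1)
    output = sigmoid(W2 @ hidden + b2)
    return output

print(forward(np.array([0.1, 0.5, 0.2, 0.9])))
```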
Feedback Neural Network: The activations of the input units are set and then propagated through the network in the forward as well as the backward direction until the values of the output units are determined [27].
Fig1.5: Feedback Neural Network
2. RELATED WORK
The field of face detection and emotion recognition has been around since the late 1980s and 1990s. Facial expression recognition (FER) is a quickly growing and evergreen research field in the areas of Computer Vision, Artificial Intelligence and Automation. Since then, a number of methods and frameworks have been proposed and many systems have been built to detect facial expressions, using various techniques such as association rules, template matching, nearest mean, Self-Organizing Maps (SOM), binary decision trees, radial basis networks and classifier combination. Some recent works on face recognition systems are:
Sheela Shankar, V.R. Udupi (2014), “Neural Networks In Identifying Expressions In Face Recognition Systems” [1]. Multilayer Perceptrons and Self-Organizing Maps are the variants of neural networks discussed. The paper makes a scrutinizing survey of neural network techniques for identifying various expressions in the human face. It was found that the neural network based approach is quite robust in uniquely identifying a specific expression.
Mr. Dinesh Chandra Jain, Dr. V. P. Pawar (2012), “A Novel Approach For Recognition Of Human Face Automatically Using Neural Network Method” [7]. This work proposed a new way to recognize faces using facial recognition software and neural network methods. Face recognition is also very difficult to fool: it works by comparing facial landmarks (specific proportions and angles of defined facial features) which cannot easily be concealed by beards, eyeglasses or makeup.
Omaima N. A. AL-Allaf, Abdelfatah Aref Tamimi, Mohammad A. Alia (2013), “Face Recognition System Based on Different Artificial Neural Networks Models and Training Algorithms” [6]. A face recognition system was suggested based on four Artificial Neural Network (ANN) models used separately: feed-forward backpropagation neural network (FFBPNN), cascade forward backpropagation neural network (CFBPNN), function fitting neural network (FitNet) and pattern recognition neural network (PatternNet). The results showed that the lowest values of MSE and number of iterations resulted from the PatternNet model.
Surbhi, Mr. Vishal Arora (2013), “The Facial expression detection from Human Facial Image by using neural network” [4]. The facial expression recognition method involves the optical flow method, the active shape model technique, the principal component analysis (PCA) algorithm and a neural network technique. The performance of the proposed algorithm was measured by checking the accuracy of the results, and the algorithm was observed to give 100% accuracy when the person in the training and test database is the same.
S.P. Khandait, Dr. R.C. Thool, P.D. Khandait (2011), “Automatic Facial Feature Extraction and Expression Recognition based on Neural Network” [8]. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. Experiments are carried out on the JAFFE facial expression database and give better performance in terms of 100% accuracy for the training set and 95.26% accuracy for the test set.
Renu Nagpal, Pooja Nagpal, Sumeet Kaur,(2010),” Hybrid
Technique for Human Face Emotion Detection”[9]. The
proposed method uses cascading of MBFO & AMF for the
removal of noise and Neural Networks by which emotions are
classified. In this work Bacteria Foraging Optimization with
mutation is used to remove highly corrupted salt and pepper
noise with variance density up to 0.9.
R. Romero-Herrera, F. J. Gallegos-Funes, A. G. Juarez-Gracia, J. López-Bonilla (2010), “Tracking Facial Expressions By Using Stereoscopy Video And Back Propagation Neural Network” [25], Kathmandu University Journal of Science, Engineering and Technology. The image is processed to recognize a human face by using the Viola and Jones (VJ) method. The resulting high detection rates and low processing times show the effectiveness of the combination of techniques employed in the recognition of affective states.
Harish Kumar Dogra, Zohaib Hasan, Ashish Kumar Dogra (2013), “Face expression recognition using Scaled-conjugate gradient Back-Propagation algorithm” [27]. In this work six different expressions are recognized using the Cohn-Kanade database and the system is trained using the scaled conjugate gradient back-propagation algorithm. The overall testing accuracy obtained is up to 87.2%, which is better than comparable work done using SVM.
3. PROPOSED FRAMEWORK
A face detection system is a system for detecting and identifying facial expressions. The system consists of sequential processing steps: feature extraction, feature selection and feature classification. Feature extraction is a technique for extracting features of images through which the face recognition or classification process is performed. There are various feature extraction and classification techniques, as stated above, and many researchers have worked on them. Here the proposed work is an adaptive ensemble model for efficient classification of facial emotions.
Fig 3.1: Face recognition model
The framework consists of different steps for face detection: image preprocessing, feature extraction, feature selection and feature classification using classifier combination techniques. All the processing steps have the same meaning as stated in the introduction section. For feature extraction, Principal Component Analysis is to be used. For feature classification, the proposed framework consists of classifier combination techniques, with a neural tree used as the classification technique. As stated, two or more classifiers are combined to achieve better results. This allows the strengths of each classifier to be exploited. The common approach is to design a certain function that weights each classifier's output score; then there must be a decision boundary to take a decision based on that function. Combination methods can also be grouped based on the stage at which they operate. A combiner could operate at the feature level: the features of all classifiers are combined to form a new feature vector, and then a new classification is made.
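Since the framework is only proposed here, not implemented, the following sketch merely illustrates how its stages might be wired together, with scikit-learn components and the bundled digits dataset standing in for the unspecified face data, and a weighted average of predicted probabilities standing in for the neural-tree combiner:

```python
# Illustrative sketch only: normalization + PCA features feeding two different
# classifiers whose scores are combined in parallel. All components and
# parameters are stand-ins, not the paper's actual framework.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stages 1-2: normalization and PCA feature extraction.
pca = PCA(n_components=30).fit(X_train / 16.0)
F_train, F_test = pca.transform(X_train / 16.0), pca.transform(X_test / 16.0)

# Stage 3: two different classifiers trained on the same features.
knn = KNeighborsClassifier(n_neighbors=3).fit(F_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0).fit(F_train, y_train)

# Stage 4: parallel classifier combination via weighted scores.
weights = [0.4, 0.6]
scores = weights[0] * knn.predict_proba(F_test) + weights[1] * mlp.predict_proba(F_test)
pred = scores.argmax(axis=1)
print("combined accuracy:", (pred == y_test).mean())
```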
4. CONCLUSION
In this paper an adaptive framework for facial expression recognition is proposed, in which a classifier combination technique based on neural networks is to be used for facial expression recognition. There are various feature classification techniques to detect human facial expressions and recognize them on the basis of accuracy and computational time, but some of them have drawbacks in terms of recognition rate or timing. A more accurate recognition rate can be achieved through the combination of two or more techniques, extracting features as per our requirements, and a final comparison will be performed to evaluate the results. In future work this framework is to be implemented and tested. A comparison of results with existing frameworks should be done. Different classification techniques in the classifier combination scheme also need to be tested.
REFERENCES
[1] Sheela Shankar, V.R. Udupi (2014), “Neural Networks In Identifying Expressions In Face Recognition Systems”, International Journal of Industrial Electronics and Electrical Engineering.
[2] Vaibhavkumar J. Mistry, Mahesh M. Goyani (2013), “A literature survey on Facial Expression Recognition using Global Features”, International Journal of Engineering and Advanced Technology (IJEAT).
[3] O.S. Eluyode and Dipo Theophilus Akomolafe (2013), “Comparative study of biological and artificial neural networks”, European Journal of Applied Engineering and Scientific Research.
[4] Surbhi, Mr. Vishal Arora (2013), “The Facial expression detection from Human Facial Image by using neural network”, International Journal of Application or Innovation in Engineering & Management (IJAIEM).
[5] N. Revathy, T.Guhan,” Face Recognition System Using
Back Propagation Artificial Neural Networks”, International
Journal of Advanced Engineering Technology.
[6] Omaima N. A. AL-Allaf, Abdelfatah Aref Tamimi,
Mohammad A. Alia,(2013),” Face Recognition System Based
on Different Artificial Neural Networks Models and Training
Algorithms”, (IJACSA) International Journal of Advanced
Computer Science and Applications.
[7] Mr. Dinesh Chandra Jain, Dr. V. P. Pawar,(2012),” A
Novel Approach For
Recognition Of Human Face
Automatically Using Neural Network Method”, International
Journal of Advanced Research in Computer Science and
Software Engineering.
[8] S.P.Khandait, Dr. R.C.Thool, P.D.Khandait,(2011),”
Automatic Facial Feature Extraction and Expression
Recognition based on Neural Network”, (IJACSA)
International Journal of Advanced Computer Science and
Applications.
[9] Renu Nagpal, Pooja Nagpal, Sumeet Kaur,(2010),” Hybrid
Technique for Human Face Emotion Detection”, (IJACSA)
International Journal of Advanced Computer Science and
Applications.
[10] N. Fragopanagos*, J.G. Taylor (2005)” Emotion
recognition in human–computer interaction”, Elsevier.
[11] Spiros V. Ioannou, Amaryllis T. Raouzaiou, Vasilis A.
Tzouvaras, Theofilos P. Mailis, Kostas C. Karpouzis, Stefanos
D. Kollias*,(2005),” Emotion recognition through facial
expression analysis based on a neurofuzzy network”, Elsevier.
[12] Konrad Schindler, Luc Van Gool, Beatrice de Gelder (2008), “Recognizing emotions expressed by body pose: A biologically inspired neural model”, Elsevier.
[13] George Caridakis, Kostas Karpouzis, Stefanos Kollias (2008), “User and context adaptive neural networks for emotion recognition”, Elsevier.
[14] Konrad Schindler, Luc Van Gool, Beatrice de Gelder (2008), “Recognizing emotions expressed by body pose: A biologically inspired neural model”, Elsevier.
[15] Xudong Jiang, Alvin Harvey Kam Siew Wah (2002), “Constructing and training feed-forward neural networks for pattern classification”, The Journal of the Pattern Recognition Society.
[16] S. Dubuisson, F. Davoine, M. Masson,(2002),” A
solution for facial expression representation and recognition”,
Elsevier Image Communication.
[17] Spiros V. Ioannou, Amaryllis T. Raouzaiou, Vasilis A.
Tzouvaras,Theofilos P. Mailis, Kostas C. Karpouzis, Stefanos
D. Kollias,(2005),” Emotion recognition through facial
expression analysis based on a neurofuzzy network”,Elsevier.
[18] B. Fasel, Juergen Luettin,(2003),” Automatic facial
expression analysis: a survey”,The Journal of the Pattern
Recognition Society.
[19] M. Saaidia, A. Gattal, M. Maamri and M. Ramdani,”
Face Expression Recognition Using Ar-Burg Model And
Neural Network Classifier”, Dept. of electrical Engineering.
[20] Guoqiang Zhang, B. Eddy Patuwo, Michael Y.
Hu,(1998),” Forecasting with artificial neural networks:The
state of the art”, International Journal of Forecasting.
[21] David Reby, Sovan Lek, Ioannis Dimopoulos, Jean Joachim, Jacques Lauga, Stéphane Aulagnier (1997), “Artificial neural networks as a classification method in the behavioural sciences”, Elsevier Behavioural Processes.
[22] E.C. Laskari, G.C. Meletiou, D.K. Tasoulis, M.N.
Vrahatis(2006),” Studying the performance of artificial neural
networks on problems related to cryptography”,Elsevier.
[23] Jharna Majumdar, Ramya Avabhrith(2008),”
Human Face Expression Recognition”, International
Journal of Emerging Technology and Advanced
Engineering.
[24] Rabia Jafri and Hamid R. Arabnia,(2009),” A
Survey of Face Recognition Techniques”, Journal of
Information Processing Systems.
[25] R. Romero-Herrera, F. J. Gallegos-Funes, A.
G. Juarez-Gracia, J. López-Bonilla,(2010),”
Tracking Facial Expressions By Using Stereoscopy
Video And Back Propagation Neural Network”,
Kathmandu University Journal Of Science,
Engineering And Technology.
[26] S.P. Khandait, Dr. R.C. Thool, P.D. Khandait (2011), “Automatic Facial Feature Extraction and Expression Recognition based on Neural Network”, (IJACSA) International Journal of Advanced Computer Science and Applications.
[27] Harish Kumar Dogra, Zohaib Hasan, Ashish Kumar Dogra (2013), “Face expression recognition using Scaled-conjugate gradient Back-Propagation algorithm”, International Journal of Modern Engineering Research (IJMER).
[28] Nathan Intrator, Daniel Reisfeld, Yehezkel Yeshurun (1996), “Face recognition using a hybrid supervised/unsupervised neural network”, Elsevier Pattern Recognition Letters.
[29] Pushpaja V. Saudagare, D.S. Chaudhari (2012), “Facial Expression Recognition using Neural Network – An Overview”, International Journal of Soft Computing and Engineering (IJSCE).
[30] “Face Recognition Algorithms” (2010).