Emotion Recognition Using Facial Expression
Mukesh Pandey1 and Devendra Giri2
1Department of Computer Science, Seemant Institute of Technology, Pithoragarh- 262 501, Uttarakhand, India
2Department of Electronics and Communication Engineering, Seemant Institute of Technology, Pithoragarh- 262 501, Uttarakhand, India
Email: 1mukesh3iov@gmail.com and 2devendragiri20@gmail.com
Abstract–The interaction between humans and machines can be made more natural if machines are able to perceive and respond to human non-verbal communication such as emotion. Several approaches have been proposed to recognize emotions from facial expressions. To enhance the performance of an emotion recognition system and make it more effective, it is important to extract the features accurately. In our approach, a Radial Basis Function neural network combined with image processing is used to classify the universal emotions: happiness, anger, disgust, surprise and normal. Using rectangular regions placed on the face, detailed facial motions were captured and represented.
Keywords: Universal emotions; RBF network; face recognition; feature localization; SUSAN operator; corner point extraction
1 Introduction
Humans express their emotions to others, and emotions are reflected on the face to convey our feelings. Mehrabian [6] indicates that the vocal part accounts for 38% of a message, while facial expression accounts for 55%. Ekman and Friesen were pioneers in this area, helping to identify six basic emotions (anger, fear, disgust, joy, surprise, sadness) that appear to be universal across humanity [7]. They categorized the physical expression of emotions in what is known as the Facial Action Coding System (FACS) [8].
Techniques for the detection of facial feature points can be classified as: (i) approaches based on luminance, chrominance, facial geometry and symmetry [1][2], (ii) template-matching approaches [2][3], (iii) PCA-based approaches [1][4][9], and (iv) curvature analysis of the intensity surface of face images [5]. Other facial feature detection approaches also exist; for example, Feris et al. use a hierarchical wavelet network for facial feature localization [3].
In this paper, we propose an approach based on the geometry and symmetry of faces, which extracts the vital feature points of the eyes, mouth and nose and determines the emotion expressed by a particular face. The method requires the images to be normalized to the same size before processing, in order to improve the accuracy of face recognition. This research describes a decision-based neural network approach for emotion classification; we learn a classifier that can recognize three basic emotions.
2 Face feature extraction technique
To recognize human faces, we must extract the conspicuous characteristics of the faces. Usually, features such as the nose, eyes and mouth, together with their geometric distribution and the shape of the face, are used [10].
2.1 Locating the corners of the eyes
The basic method is to locate the valley points of luminance in the eye areas. We combine valley-point searching with directional projection and the symmetry of the two eyeballs to locate the eyes within their search areas. The centres of the eyes are found by searching for valley points in the local luminance image. By projecting the gradient image in the top-left and top-right areas of the face, and then normalizing the histogram obtained by directional integral projection, we can locate the probable y position of the eyes from the valley point of the horizontal integral projection. Letting the x coordinate vary over a wide scope, we then find the valley point in the x direction of this area. The detected points are taken as the centres of the two eyes.
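To make the projection step concrete, the following minimal Python sketch (our own illustration, not the authors' implementation) estimates an eye centre from the valley points of the integral projections. For simplicity it projects raw luminance rather than the gradient image, and the region bounds and the simple first-valley heuristic are assumptions.

import numpy as np

def integral_projection(gray, axis):
    # Sum luminance along rows (axis=1, profile over y) or
    # columns (axis=0, profile over x), normalized to [0, 1].
    p = gray.astype(np.float64).sum(axis=axis)
    p -= p.min()
    return p / (p.max() + 1e-9)

def first_valley(profile, start=1):
    # Index of the first local minimum at or after `start`.
    for i in range(max(start, 1), len(profile) - 1):
        if profile[i] <= profile[i - 1] and profile[i] < profile[i + 1]:
            return i
    return int(np.argmin(profile))

def locate_eye_centre(face, region):
    # `region` = (y0, y1, x0, x1) is one of the top-left / top-right
    # eye search areas. The valley of the horizontal projection gives
    # the y coordinate; the valley of the vertical projection over a
    # wide x scope gives the x coordinate.
    y0, y1, x0, x1 = region
    patch = face[y0:y1, x0:x1]
    ey = first_valley(integral_projection(patch, axis=1))
    ex = first_valley(integral_projection(patch, axis=0))
    return y0 + ey, x0 + ex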
2.2 Locating the feature point of the nose area
In this paper we define the feature point of the nose area to be the midpoint of the two nostrils. The nose is less important than the eyes for face recognition, but the midpoint of the two nostrils is relatively stable, so we can use it as the datum mark for normalization in the pre-processing of face images. With reference to the two eyeballs, the nose area, as shown in Fig. 6, is defined using the integral projection of luminance. Firstly, we take the strip region spanning the width between the two eyeballs and compute the integral projection curve in the y direction. We then search along the projection curve downwards from the y coordinate of the eyeballs and take the first valley point as the y coordinate of the nostrils. By adjusting the Δ threshold between peak and valley points, we can eliminate large burrs on the curve caused by scars on the face, glasses, etc.
Secondly, we take the region spanning the width between the two eyeballs' x coordinates, extending δ pixels above and below the y coordinate of the nostrils, and compute the integral projection curve in the x direction. (In this paper we choose δ = [y coordinate of nostrils − y coordinate of eyeballs] × 0.06.) We then search along the projection curve from the midpoint of the two eyeballs' x coordinates towards the left and right individually, and take the first valley point on each side as the x coordinate of the left and right nostril respectively. Once the accurate position of the midpoint of the two nostrils is obtained, the nose region can be defined easily.
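A minimal sketch of this nostril search, under the same assumptions as the previous snippet (here the global minimum on each side stands in for the first valley, and the 0.06 factor is the δ definition quoted above):

import numpy as np

def nostril_midpoint(gray, left_eye, right_eye):
    # `left_eye` / `right_eye` are (y, x) eyeball centres from Section 2.1.
    (ly, lx), (ry, rx) = left_eye, right_eye
    eye_y = (ly + ry) // 2
    strip = gray[:, lx:rx + 1].astype(np.float64)
    y_prof = strip.sum(axis=1)
    # First valley below the eyeball row gives the nostril y coordinate.
    nose_y = eye_y + 1
    while nose_y < len(y_prof) - 1 and not (
            y_prof[nose_y] <= y_prof[nose_y - 1] and
            y_prof[nose_y] < y_prof[nose_y + 1]):
        nose_y += 1
    # delta band as defined above: 6% of the eye-to-nostril distance.
    delta = max(1, int((nose_y - eye_y) * 0.06))
    band = gray[nose_y - delta:nose_y + delta + 1, lx:rx + 1].astype(np.float64)
    x_prof = band.sum(axis=0)
    mid = (rx - lx) // 2
    left_n = lx + int(np.argmin(x_prof[:mid]))          # left nostril x
    right_n = lx + mid + int(np.argmin(x_prof[mid:]))   # right nostril x
    return nose_y, (left_n + right_n) // 2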
2.3 Locating the mouth corners
The mouth is almost as important as the eyes for face recognition. The shape and size of the mouth vary greatly with facial expression, and whiskers can interfere with recognition of the mouth area, so extracting the mouth feature points exactly is of great significance for face recognition. Since the corners of the mouth are altered little by expression and can be located easily, we define the two mouth corners as the feature points of the mouth area. Having already obtained the positions of the nostril midpoint and the two eyeballs on the face, we use the same method to locate the mouth corners. Along the horizontal integral projection curve of luminance in the mouth area, we search for the first valley point below the y coordinate of the nostrils and set it as the y coordinate of the mouth, again eliminating burrs on the curve caused by beard or scars by adjusting the Δ threshold between peak and valley points. We then define the mouth region and apply the SUSAN operator to obtain the edge image of the mouth. Finally, the two corner points of the mouth are extracted.
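Since the SUSAN operator is not bundled with common imaging libraries, the self-contained sketch below shows its core idea as applied to the mouth edge image. The brightness threshold t = 27 is the conventional default, the radius-3 mask (29 pixels) approximates the classic 37-pixel circular mask, and the plain loops favour clarity over speed.

import numpy as np

def susan_edge_response(gray, t=27, radius=3):
    # For each pixel (the nucleus), count mask pixels whose brightness
    # is within `t` of the nucleus (the USAN); edges give a small USAN,
    # so the response is g - usan wherever usan < g.
    gray = gray.astype(np.float64)
    h, w = gray.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (ys ** 2 + xs ** 2) <= radius ** 2
    offsets = list(zip(ys[mask], xs[mask]))
    g = 0.75 * len(offsets)  # geometric threshold for edges
    resp = np.zeros_like(gray)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = gray[y, x]
            usan = sum(1 for dy, dx in offsets
                       if abs(gray[y + dy, x + dx] - nucleus) < t)
            if usan < g:
                resp[y, x] = g - usan
    return resp  # mouth corners appear as strong responses at the lip ends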
3 Neural networks
Neural networks have been used widely in the field of image processing, where they provide promising results in terms of output quality and ease of implementation. They have proved invaluable in applications where a function-based model or a parametric approach to information processing is difficult to formulate. A neural network can be summarized as a collection of units connected in some pattern that allows communication between the units. These units are generally referred to as neurons or nodes. Output signals are fed to other units along connections known as weights, which usually excite or inhibit the signal being communicated. One distinctive feature of neural networks is the use of hidden units. The function of the hidden units (also called hidden cells or hidden neurons) is to intervene between the external input and the network output. By adding one or more hidden layers, a network gains the ability to extract higher-order statistics [11]. This characteristic is particularly valuable when the size of the input layer is large, as is typical in the face recognition field [12].
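As a small illustration of this description (the layer sizes and the tanh squashing function are our own choices, not the paper's), a single hidden layer computes weighted sums of the external inputs and feeds the hidden activations to the output units:

import numpy as np

def forward(x, W1, b1, W2, b2):
    # hidden units: weighted sums of the inputs, squashed by tanh
    h = np.tanh(W1 @ x + b1)
    # output units: weighted combination of the hidden activations
    return W2 @ h + b2

rng = np.random.default_rng(0)
x = rng.normal(size=8)                             # e.g. 8 geometric features
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)    # 16 hidden neurons
W2, b2 = rng.normal(size=(5, 16)), np.zeros(5)     # 5 emotion scores
scores = forward(x, W1, b1, W2, b2)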
3.1 Recognizing emotion using Radial Basis Function network techniques
Radial basis networks may require more neurons than standard feed-forward back-propagation networks, but they can often be designed in a fraction of the time it takes to train standard feed-forward networks. Radial basis networks can be designed in MATLAB with either newrbe or newrb.
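For readers without MATLAB, the sketch below is a minimal Gaussian RBF network in the exact-design spirit of newrbe: one hidden unit centred on each training sample, with a linear output layer solved by least squares. The spread value and the feature/label interface are illustrative assumptions, not the paper's settings.

import numpy as np

class RBFClassifier:
    # Feature vectors could be the eye/nose/mouth geometry from Section 2.
    def __init__(self, spread=1.0):
        self.spread = spread

    def _phi(self, X):
        # Gaussian activation of each hidden unit for each sample.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.spread ** 2))

    def fit(self, X, labels, n_classes):
        self.centers = np.asarray(X, dtype=float)   # one unit per sample
        T = np.eye(n_classes)[labels]               # one-hot targets
        self.W, *_ = np.linalg.lstsq(self._phi(self.centers), T, rcond=None)
        return self

    def predict(self, X):
        return self._phi(np.asarray(X, dtype=float)) @ self.W  # class scores

# Hypothetical usage with 5 emotion classes:
# clf = RBFClassifier(spread=0.5).fit(X_train, y_train, n_classes=5)
# emotion = clf.predict(x_new[None, :]).argmax()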
Figure 1. Radial Basis Function network.
4 Results
Figure 2. Representation of the recognized emotions.
5 Conclusions and Future Work
In this paper we have presented an approach to expression recognition in static images. The emotion analysis system was implemented with geometric feature selection and an RBF network for classification. It is designed to recognize emotional expressions in human faces using the average values calculated from the training samples. In our evaluation, the system was able to identify the emotion of the images and to extract the features from the images accurately.
We classify the emotional expression into four basic emotions, with accurate performance. In future work we may include other expressions as well, and we can also proceed to dynamic images.
References:
[1] R. Brunelli and T. Poggio, “Face Recognition: Features versus Templates”, IEEE Trans on PAMI,
1993, 15(10), pp 1042-1052.
[2] S. Y. Lee, Y. K. Ham and R. H. Park, “Recognition of Human Front Faces Using Knowledge-Based Feature Extraction and Neuro-Fuzzy Algorithm”, Pattern Recognition, 1996, 29(11), pp 1863-1876.
[3] R. S. Feris, T. E. de Campos and R. M. Cesar Junior, “Detection and tracking of facial features in video sequences”, Lecture Notes in Artificial Intelligence, 2000, 1793(4), pp 127-135.
[4] G. Chow and X. Li, “Towards A System for Automatic Facial Feature Detection”, Pattern Recognition,
1993, 26(12), pp 1739-1755.
[5] M. Gargesha and S. Panchanathan, “A Hybrid Technique for Facial Feature Point Detection”, IEEE Proceedings of SSIAI’2002, 2002, pp 134-138.
[6] A. Mehrabian, “Communication without words”, Psychology Today, 1968, 2(4), pp 53-56.
[7] P. Ekman. Emotions Revealed: Recognizing Faces and Feeling to Improve Communication and Emotional
Life. Holt, 2003.
[8] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, 1978.
[9] Z. Wang, F. K. Huangfu and J. W. Wan, “Human Face Feature Extraction Using Deformable Templates”,
Journal of Computer Aided Design and Computer Graphics of China, 2000, 12(5), pp 333-336.
[10] C. Yan and G. D. Su, “Facial Feature Location and Extraction From Front-View Images”, Journal of Image and Graphics of China, 1998, 3(5), pp 375-380.
[11] Thaahirah S. M. Rasied, Othman O. Khalifa and Yuslina Binti Kamarudin, “Human Face Recognition Based on Singular Value Decomposition and Neural Network”, GVIP 05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.
[12] P. Picton, Neural Networks, Second Edition, Palgrave, 2000.
[13] Jeffrey F. Cohn, Karen Schmidt, Ralph Gross and Paul Ekman, “Individual differences in facial expression: Stability over time, relation to self-reported emotion, and ability to inform person identification”, IEEE International Conference on Multimodal Interfaces (ICMI 2002), 2002.
[14] B. Fasel and J. Lüttin, “Automatic facial expression analysis: A survey”, Pattern Recognition, 2003, 36, pp 259-275.
[15] Kun-hua Zhang, Jing-ru Wang and Qi-heng Zhang, “A Corner Extraction Method Based on Multiple Composite Characteristics”, China Image and Graphics Transaction, 2002, 7(4), pp 319-324.