EMS: Expression based Mood Sharing for Social Networks
Md Munirul Haque, Mohammad Adibuzzaman
md.haque@mu.edu, mohammad.adibuzzaman@mu.edu
Department of Mathematics, Statistics, and Computer Science
Marquette University
P.O. Box 1881, Milwaukee, WI 53201
Category of Submission: Research Paper
Contact Author: Md Munirul Haque
Email: md.haque@mu.edu
EMS: Expression based Mood Sharing for Social Networks
ABSTRACT
Social networking sites like Facebook, Twitter, and MySpace are becoming overwhelmingly
powerful media in today's world. Facebook has 500 million active users and Twitter has 190
million visitors per month, and both keep growing every second. At the same time, the number of
smartphone users has crossed 45 million. We focus on building an application that connects
these two revolutionary spheres of modern science, an application with huge potential in different sectors.
EMS, a facial expression based mood detection model, has been developed by capturing images
while users work with webcam-supported laptops or mobile phones. Each image is analyzed to
classify the user's mood into one of several categories. This mood information is shared in the user's
Facebook profile according to the user's privacy settings. Several activities and events are also
generated based on the identified mood.
Keywords: Mood detection, Facebook, Eigenfaces, Web service, Distributed application.
1. INTRODUCTION
Facebook is a social networking website used throughout the world by users of all ages. The website
allows users to friend each other and share information, resulting in a large network of friends. This
allows users to remain connected or reconnect by sharing information, photographs, statuses, wall posts,
messages, and many other pieces of information. In recent years, as smartphones have become
increasingly popular [38], the power of Facebook has become mobile. Users can now add and see
photos, add wall posts, and change their status right from their iPhone or Android powered device.
Several studies are based on the Facial Action Coding System (FACS), first introduced by Ekman and
Friesen in 1978 [18]. It is a method for building a taxonomy of almost all possible facial expressions, initially
launched with 44 Action Units (AUs). The Computer Expression Recognition Toolbox (CERT) has been
proposed [32, 33, 1, 36, 35, 2] to detect facial expressions by analyzing the appearance of the Action Units
related to different expressions. Different classifiers, such as Support Vector Machines (SVM), AdaBoost,
Gabor filters, and Hidden Markov Models (HMM), have been used alone or in combination with others to
gain higher accuracy. Researchers in [6, 29] have used Active Appearance Models (AAM) to identify
features for pain from facial expressions. An Eigenface based method was deployed in [7] in an attempt to
find a computationally inexpensive solution. Later the authors included Eigeneyes and Eigenlips to
increase the classification accuracy [8]. A Bayesian extension of SVM named the Relevance Vector Machine
(RVM) was adopted in [30] to increase classification accuracy. Several papers [28, 4] relied on the
artificial neural network based back propagation algorithm to derive classification decisions from extracted
facial features. Many other researchers, including Brahnam et al. [34, 37] and Pantic et al. [7, 8], have worked in the
area of automatic facial expression detection. Almost all of these approaches suffer from one or more of
the following deficits: 1) reliance on a clear frontal image, 2) sensitivity to out-of-plane head rotation, 3) difficulty in
selecting the right features, 4) failure to use temporal and dynamic information, 5) a considerable amount of manual interaction,
6) problems with noise, illumination, glasses, facial hair, and skin color, 7) computational cost, 8) lack of mobility, 9) inability to
estimate the intensity of expression, and finally 10) limited reliability. Moreover, there has been no prior work on automatic
mood detection from facial images in social networks. We have also analyzed the mood related
applications on Facebook. Our model is fundamentally different from all of these simple models: they
require the user to manually choose a symbol that represents his or her mood,
whereas our model detects the mood without user intervention.
Our proposed system works with laptops or handheld devices. Unlike other mood sharing applications
currently on Facebook, EMS does not require manual user interaction to set the user's mood. Mobile devices
like the iPhone have a front camera, which is ideal for EMS. Automatically inferring mood from facial
features is the first of its kind among Facebook applications; this makes the system robust but also brings
several challenges. Contexts like location and mood are considered when sharing, to protect users'
privacy. Users may not want to share their mood when they are in a specific location, and they
may think differently about publishing their mood when they are in a particular mood. We have
developed a small prototype of the model and show screenshots of our deployment.
The rest of the paper is organized as follows. Section 2 details the related work
with a comparison table, followed by the motivation for such an application in Section 3. Section 4 presents
the concept design of our approach with a high level architecture. Details of the implementation are
provided in Section 5, and the application characteristics in Section 6. Some critical and open issues
are discussed in Section 7, and Section 8 offers our conclusions.
2. RELATED WORK
Automatic facial expression detection models differ from one another in terms of subject focus,
dimension, classifier, underlying technique, and feature selection strategy. Here we provide a high level
view of this categorization.
Figure 1: Classification of automatic facial expression detection models, organized by classification technique (Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Artificial Neural Network (ANN)), focus (neonates, adults, patients), and dimension (2D, 3D).
Bartlett et al. [1] proposed a system that can automatically recognize frontal views of the face in a
video stream, detecting 20 Action Units (AUs) in each frame. Context-independent
training was used: one binary classifier was trained for each of the 20 AUs, and these classifiers were
trained to recognize the occurrence of an AU regardless of its co-occurrence (a specific AU
occurring alone or with others). They also compared the performance of AdaBoost and linear
SVM; paired t-tests showed a slight advantage of AdaBoost over linear SVM. One interesting feature that
the authors tried to measure is the intensity of specific AUs. They used the output margin of the system,
which describes the distance to the separating hyperplane, as an estimate of the intensity of
the AU.
Braathan et al. [2] address a natural problem with image collection and shift the paradigm from 2D to 3D
facial images. Older automated facial expression recognition systems have relied on posed images where
images clearly show the frontal view of the face. But this is impractical. Many times the head is in an outof-image-plane (turned or nodded) in spontaneous facial images. Braathan et al. tackled three issues
sequentially in their project. First the face geometry was estimated. Thirty images were taken for each
subject and the position of eight special features was identified (ear lobes, lateral and nasal corners of the
eyes, nose tip, and base of the center upper teeth). 3D locations of these eight features were then
recovered. These eight 3D location points were fitted in the canonical view of the subject. Later a
scattered data interpolation technique [3] was used to generate and fit the other unknown points in the
face model. Second, a 3D pose estimation technique known as Markov Chain Monte-Carlo method or
particle filtering method was used. This generated a sequence of 3D poses of the head. Canonical face
geometry was then used to warp these images on to a face model and rotate these into a frontal view.
Then this image was projected back to the image plane. Finally, Braatthan et al. used Support Vector
Machines (SVM) and Hidden Markov Models (HMM) for the training and learning of their spontaneous
facial expression detection system. Since normally the head is in a slanted position during severe pain,
this system is especially useful for analyzing real time painful expressions.
Jagdish and Umesh proposed a simple architecture for facial expression recognition based on token
finding and a standard error-based back propagation neural network [4]. After capturing the image from a
webcam, they used the face detection technique devised by Viola and Jones [5]. To process the
image, histogram equalization was applied to enhance image quality, followed by edge detection and
thinning. Tokens, which denote the smallest units of information, were generated from the resultant image
and passed into the neural network. The network was trained with 100 samples and could classify three
expressions. The report did not state the number of nodes in the input layer and
hidden layer of the network, nor did it give details about the training samples.
The Active Appearance Model (AAM) [6] has been proposed as an innovative way to recognize pain from
facial expressions. AAMs were used to develop an automated machine learning based system; the use of a
Support Vector Machine (SVM) and a leave-one-out procedure led to a hit rate of 81%. The main
advantage of AAMs is that they decouple appearance and shape parameters from facial images. In
an AAM, a shape s is expressed as a 2D triangulated mesh, and the locations of the mesh vertices are related to the
original image from which the shape s was derived. A shape s can be expressed as a
combination of a base shape s0 and a set of shape vectors si. Three AAM-derived representations are
identified: Similarity Normalized Shape (sn), Similarity Normalized Appearance (an), and Shape
Normalized Appearance (a0). The authors developed their own representations from these AAM-derived
representations and used them for painful face detection. Temporal information was not used in the recognition
of pain, although it might have increased the accuracy rate.
Monwar et al. proposed an automatic pain expression detection system using Eigenimages [7]. Skin color
modeling was used to detect the face in a video sequence, and a mask image technique was used to extract the
appropriate portion of the face for detecting pain. Each resultant image from masking was projected into a
feature space to form an Eigenspace based on the training samples. When a new image arrived, its position in
the feature space was compared with that of the training samples, and based on that a decision was drawn
(pain or no pain). For this experiment, 38 subjects of different ethnicity, age, and gender were videotaped
for two expressions – normal and painful. First the chromatic color space was used to find the distribution
of skin color. Later the Gaussian model was used to represent this distribution. Then the probability of a
pixel being skin was obtained. After segmenting the skin region, a meaningful portion of the face was
detected using a mask image, and a bitwise 'AND' operation between the mask image and the
original image produced the resultant image. These resultant images were used as samples for training
the Eigenfaces method, and the M Eigenfaces with the highest Eigenvalues were retained. When
detecting a new face, the facial image was projected into the Eigenspace, and the Euclidean distance
between the new face and all the faces in the Eigenspace was measured. The face at the
closest distance was assumed to be a match for the new image. The average hit rate was recorded to be 90-92%. Later, the researchers extended their model [8] to create two more feature spaces – Eigeneyes and
Eigenlips. Portions of eyes and lips were used from facial images for Eigeneyes and Eigenlips methods.
All possible combinations of Eigenfaces, Eigeneyes, and Eigenlips (alone or in combination with others)
were used to find pain from images. A combination of all three together provided the best result in terms
of accuracy (92.08%). Here skin pixels were identified using the chromatic color space. However, skin
color varies widely with race and ethnicity, and a detailed description of the subjects' skin colors is
missing in these studies.
In [9] the authors proposed a methodology for face recognition based on information theory. Principal
component analysis and a feed-forward back propagation neural network were used for feature
extraction and face recognition, respectively. The algorithm was applied to 400 images of 40 different
people taken from the Olivetti and Oracle Research Laboratory (ORL) face database. The training dataset is
composed of 60% of the images and the rest are left for the test database. The Artificial Neural Network
(ANN) had three layers: input, output, and one hidden layer. Though this method achieved 97% accuracy,
it has two major drawbacks. The method incorporates one ANN for each person in the face database,
which is impractical in terms of scalability. Another issue is that they tested only with different images
of the same people whose pictures were used to train the ANN, which is not realistic for a
real life scenario.
The authors of [10] proposed a modified methodology for eigenface based facial expression recognition. They
formed a separate eigenspace for each of the six basic emotions from the training images. When a
new image arrives, it is projected into each of the eigenspaces and a reconstructed image is
formed from each eigenspace. A classification decision is taken by measuring the mean squared error
between the input image and each reconstructed image. Using the Cohn-Kanade and JAFFE (Japanese Female
Facial Expression) databases, the method achieved a maximum of 83% accuracy for 'happiness' and 72%
accuracy for 'disgust'. The paper says nothing about facial contour selection, and the classification
accuracy is also quite low.
The authors of [11] compared algorithms based on three different color spaces, RGB, YCbCr, and HSI, and later
combined them to derive a modified skin color based face detection algorithm. An algorithm based on the
Venn diagram of set theory was used to detect skin color. The face portion of the original image
is isolated by detecting three points: the midpoints of the eyes and the lips. This method showed a 95% accuracy rate
on the IITK database. However, this method only detects the facial portion of an image; it says
nothing about expression classification. In Table I we provide a comparison of the different models
we have reviewed.
Table I. Features of several reported facial recognition models
Name | Number of Subjects/Images | Learning Model/Classifier | 2D/3D | Computational Complexity | Accuracy | Intensity
Eigenfaces [7] | 38 subjects | Eigenface / Principal Component Analysis | 2D | Low | 90-92% | No
Eigenfaces + Eigeneyes + Eigenlips [8] | 38 subjects | Eigenfaces + Eigeneyes + Eigenlips | 2D | Low | 92.08% | No
AAM [6] | 129 subjects | SVM | Both | Medium | 81% | No
[32, 35] Littlewort | 5500 images | Gabor filter, SVM, Adaboost | Both | High | 72% | Somewhat
ANN [28] | 38 subjects | Artificial Neural Network | 2D | Medium | 91.67% | No
RVM [30] | 26 subjects | RVM | 2D | Medium | 91% | Yes
Facial Grimace [31] | 1 subject, 1336 frames | High and low pass filter | 2D | Low | Cont. monitoring | Somewhat
Back propagation [29] | 204 images, 100 subjects | Back propagation NN | 2D | Medium to high | NA | No

Support Vector Machine - SVM, Relevance Vector Machine - RVM, ANN - Artificial Neural Network,
Cont. - continuous, NA - Not Available.
We have also analyzed the mood related applications on Facebook. Rimé et al. [12] argued that
everyday emotional experiences create an urge for social sharing, and showed that most
emotional experiences are shared with others shortly after they occur. These findings suggest
that mood sharing can be an important area in social networks. At present there are many applications on
Facebook which claim to detect the user's mood. Table II lists the top 10 such applications
based on the number of active users.
Table II. Mood applications in Facebook
Name of Mood Application | No. of users | Categories
My Mood | 1,822,571 | 56
SpongeBob Mood | 646,426 | 42
The Mood Weather Report | 611,132 | 174
Name and Mood Analyzer | 29,803 | 13
Mood Stones | 14,092 | -
My Friend's Mood | 15,965 | -
My Mood Today! | 11,224 | 9
How's your mood today? | 6,694 | 39
Your Mood of the day | 4,349 | -
Patrick's Mood | 3,329 | 15
3. MOTIVATION
At present there is no real time mood detection and sharing application on Facebook. Applications of this
sort normally ask the user to select a symbol that represents his or her mood, and that symbol
is published in the profile. We took up the research challenge of real time mood detection
from facial features and use Facebook as a real life application of the system. We chose Facebook
for its immense connectivity: anything can be spread to friends, relatives, and others in
a matter of seconds. We plan to use this strength of Facebook in our application to improve quality
of life.
Scenario 1:
Mr. Johnson is an elderly citizen living alone in a remote place. Yesterday he was very angry
with a banker who failed to process his pension in a timely manner, but because he chose not to share his
angry mood while using EMS, that information was not published. Today he woke up pensive and
feeling depressed. His grandchildren, who are in his Facebook friend list, noticed
the depressed mood. They called him and, using the event manager, sent him an online invitation to his
favorite restaurant. Within hours they could see his mood status change to happy.
Scenario 2:
Dan, a college student, has just received his semester final result online. He felt upset since he did not
get the result he wanted. EMS captured his picture and classified it as sad, and the result was
uploaded to Dan's Facebook account. Based on Dan's activity and interest profile, the system
suggests a recent movie starring Tom Hanks, Dan's favorite actor, along with the local theater name
and show times. Dan's friends see the suggestion and they all decide to go to the movie. By the time they
return from the movie, everyone is smiling, including Dan.
Scenario 3:
Mr. Jones works as a system engineer in a company. He is a bit excited today because the
performance bonus will be announced, but it comes as a shock when he receives a very poor bonus.
His sad mood is detected by EMS. However, because Mr. Jones is at the office, EMS does not publish his mood,
thanks to its location aware context mechanism.
4. CONCEPT DESIGN
Development of the architecture can be divided into three broad phases. In the first phase the facial
portion is extracted from an image. In the second phase the extracted image is
analyzed for facial features, and classification follows to identify one of several mood categories.
Finally, we integrate the mobile application with Facebook. Figure 2 depicts the architecture.
Figure 2. Architecture of EMS
4.1 Face Detection
Pixels corresponding to skin differ from other pixels in an image. Skin color modeling in
chromatic color space [5] has shown that skin pixels cluster in a specific region. Though skin
color varies widely across ethnicities, research [4] shows that skin pixels still form a cluster
in the chromatic color space. After taking an image of the subject we first crop the image to keep only
the head portion. Then we use skin color modeling to extract the required facial portion
from the head image.
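As a rough illustration of this step, the sketch below (written in Python with NumPy purely for exposition) scores each pixel of an RGB image against a Gaussian skin model in the chromatic (r, g) plane and thresholds the result into a skin mask. The mean, covariance, and threshold values are placeholder assumptions; in practice they would be estimated from labeled skin pixels rather than fixed by hand.

    import numpy as np

    # Illustrative Gaussian skin model in chromatic (r, g) space.
    # The mean and covariance below are placeholder values, not parameters from EMS;
    # they would normally be estimated from labeled skin pixels in training images.
    SKIN_MEAN = np.array([0.42, 0.31])           # assumed mean of (r, g) for skin
    SKIN_COV = np.array([[0.004, 0.001],
                         [0.001, 0.003]])        # assumed covariance
    SKIN_COV_INV = np.linalg.inv(SKIN_COV)

    def skin_likelihood(image_rgb):
        """Return a per-pixel skin likelihood for an RGB image (H x W x 3)."""
        rgb = image_rgb.astype(np.float64) + 1e-6
        s = rgb.sum(axis=2)
        r, g = rgb[..., 0] / s, rgb[..., 1] / s   # chromatic coordinates
        d = np.stack([r - SKIN_MEAN[0], g - SKIN_MEAN[1]], axis=-1)
        maha = np.einsum('...i,ij,...j->...', d, SKIN_COV_INV, d)
        return np.exp(-0.5 * maha)                # unnormalized Gaussian likelihood

    def skin_mask(image_rgb, threshold=0.5):
        """Binary mask of pixels whose skin likelihood exceeds a chosen threshold."""
        return skin_likelihood(image_rgb) > threshold

The mask produced this way is what the mask-image step of [7] operates on before the Eigenface analysis described next.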
4.2 Facial Expression Detection
For this part we plan to use a combination of the Eigenfaces, Eigeneyes, and Eigenlips methods based on
Principal Component Analysis (PCA) [6, 7]. This analysis includes only the characteristic features
of the face corresponding to a specific facial expression and leaves out other features. This strategy reduces
the number of training samples needed and keeps the system computationally inexpensive. The
resultant images will be used as samples for training the Eigenfaces method, and the M Eigenfaces with the highest
Eigenvalues will be retained. When detecting a new face, the facial image will be projected into the
Eigenspace and the Euclidean distance between the new face and all the faces in the Eigenspace will be
measured. The face at the closest distance will be taken as the match for the new image.
A similar process will be followed for the Eigenlips and Eigeneyes methods. Here is a step
by step breakdown of the whole process.
1. The first step is to obtain a set S of M face images. Each image is transformed into a vector of size
N² and placed into the set, S = {Γ_1, Γ_2, Γ_3, ..., Γ_M}.
2. The second step is to obtain the mean image Ψ = (1/M) Σ_{n=1}^{M} Γ_n.
3. We find the difference Φ between each input image and the mean image, Φ_i = Γ_i − Ψ.
4. Next we seek a set of M orthonormal vectors u_n which best describes the distribution of the data. The
k-th vector, u_k, is chosen such that λ_k = (1/M) Σ_{n=1}^{M} (u_k^T Φ_n)².
5. λ_k is a maximum, subject to u_l^T u_k = δ_{lk} (1 if l = k, 0 otherwise),
where the u_k and λ_k are the eigenvectors and eigenvalues of the covariance matrix C.
6. The covariance matrix C is obtained as C = (1/M) Σ_{n=1}^{M} Φ_n Φ_n^T = A A^T, where A = [Φ_1 Φ_2 Φ_3 ... Φ_M].
7. Finding the eigenvectors of the N² × N² covariance matrix is a huge computational task. Since M is far
less than N², we instead construct the M × M matrix L = A^T A, where L_{mn} = Φ_m^T Φ_n.
8. We find the M eigenvectors v_l of L.
9. These vectors v_l determine linear combinations of the M training set face images that form the
Eigenfaces u_l: u_l = Σ_{k=1}^{M} v_{lk} Φ_k, for l = 1, 2, ..., M.
10. After computing the Eigenvectors and Eigenvalues of the covariance matrix of the training images:
- the M eigenvectors are sorted in order of descending Eigenvalues;
- the top eigenvectors are chosen to represent the Eigenspace.
11. Project each of the original images into Eigenspace to find a vector of weights representing the
contribution of each Eigenface to the reconstruction of the given image.
When detecting a new face, the facial image will be projected into the Eigenspace and the Euclidean
distance between the new face and all the faces in the Eigenspace will be measured. The face at the
closest distance will be taken as the match for the new image. A similar process will be
followed for the Eigenlips and Eigeneyes methods. The mathematical steps are as follows:
- Any new image Γ is projected into the Eigenspace to find its face-key: ω_k = u_k^T (Γ − Ψ), with Ω^T = [ω_1, ω_2, ..., ω_M], where u_k is the k-th eigenvector and ω_k is the k-th weight in the weight vector Ω.
- The M weights represent the contribution of each respective Eigenface. The vector Ω is taken as the 'face-key' of the face image projected into the Eigenspace.
- Any two face-keys are compared by a simple Euclidean distance measure, ε = ||Ω_a − Ω_b||².
- An acceptance (the two face images match) or rejection (the two images do not match) is determined by applying a threshold to this distance.
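The sketch below condenses steps 1-11 and the matching procedure into a few lines of Python/NumPy, purely as an illustration: our prototype performs this step in MATLAB. It builds the Eigenspace using the M × M trick of step 7, projects the training images to face-keys, and classifies a new image by the Euclidean distance of its face-key. The function and variable names are our own, and the rejection threshold is an assumed parameter.

    import numpy as np

    def train_eigenfaces(images, num_components):
        """images: list of equally sized grayscale face arrays.
        Returns (mean_face, eigenfaces, training_keys) following steps 1-11."""
        A = np.stack([img.ravel().astype(np.float64) for img in images], axis=1)  # N^2 x M
        mean = A.mean(axis=1, keepdims=True)          # step 2: mean image Psi
        Phi = A - mean                                # step 3: difference images
        L = Phi.T @ Phi                               # step 7: M x M matrix instead of N^2 x N^2
        eigvals, V = np.linalg.eigh(L)                # step 8: eigenvectors v_l of L
        order = np.argsort(eigvals)[::-1][:num_components]   # step 10: keep top eigenvectors
        U = Phi @ V[:, order]                         # step 9: eigenfaces u_l
        U /= np.linalg.norm(U, axis=0)                # normalize each eigenface
        keys = U.T @ Phi                              # step 11: face-keys of training images
        return mean, U, keys

    def match(image, mean, U, keys, labels, threshold=None):
        """Project a new image and return the label of the closest training face-key."""
        omega = U.T @ (image.ravel().astype(np.float64)[:, None] - mean)  # new face-key
        dists = np.linalg.norm(keys - omega, axis=0)  # Euclidean distance to each key
        best = int(np.argmin(dists))
        if threshold is not None and dists[best] > threshold:
            return None                               # rejection: no sufficiently close match
        return labels[best]

In our setting `labels` would carry the expression of each training image (e.g. "happiness", "sadness"), and the same routine is repeated on the eye and lip regions for the Eigeneyes and Eigenlips variants.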
4.3 Integration to Facebook
The application uses the device camera to capture facial images of the user and recognizes and reports their
mood. The mobile version, for iPhone and Android powered devices, uses the device's built in camera to
capture images and transfers each image to a server. The server extracts the face and the facial feature points,
classifies the mood using a classifier, and sends the recognized mood back to the mobile
application. The mobile application can then connect to Facebook, allowing the user to publish their recognized
mood.
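One way the final publishing step could look is a call to the Facebook Graph API feed endpoint, sketched below in Python. The status wording, token handling, and error handling are simplified assumptions; the actual EMS integration also applies the privacy checks of Section 6 before posting.

    import requests

    GRAPH_URL = "https://graph.facebook.com/me/feed"   # Graph API endpoint for the user's feed

    def publish_mood(mood, access_token):
        """Post the recognized mood as a status update on the user's feed.
        access_token must carry a publishing permission granted by the user."""
        status = "Current mood (detected by EMS): %s" % mood
        resp = requests.post(GRAPH_URL,
                             data={"message": status, "access_token": access_token})
        resp.raise_for_status()
        return resp.json()         # Graph API returns the id of the created post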
5. IMPLEMENTATION
The ultimate goal is to use the application from any device, mobile or browser. Because of the large
computational power needed for the image processing behind facial expression recognition, we use software such as
MATLAB for image processing and facial expression recognition. Hence the total design can be thought
of as the integration of three different phases. First, we need MATLAB for facial expression recognition.
Second, that MATLAB script needs to be callable through a web service; that way, we ensure that the script
is available from any platform, including handheld devices. Lastly, we need a Facebook application
which will call the web service.
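To illustrate the client side of this third phase, the sketch below uploads a captured image to a hypothetical expression-detection endpoint and reads back the mood label. Our prototype actually exposes the MATLAB script through an Axis2 SOAP service called from PHP, as described below; the plain HTTP upload, the URL, and the response field shown here are illustrative assumptions only.

    import requests  # third-party HTTP client, used here only for the sketch

    # Hypothetical endpoint wrapping the MATLAB expression-recognition script.
    # The real prototype exposes it as an Axis2 SOAP service behind Apache Tomcat.
    EMS_ENDPOINT = "http://ems.example.org/detectExpression"   # placeholder URL

    def detect_mood(image_path):
        """Upload a captured face image and return the mood label sent back by the server."""
        with open(image_path, "rb") as f:
            response = requests.post(EMS_ENDPOINT, files={"image": f}, timeout=30)
        response.raise_for_status()
        return response.json()["mood"]   # e.g. "happiness", "sadness" (assumed schema)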
First we built a training database of eight persons with six basic expressions: anger,
fear, happiness, neutral, sadness, and surprise. Initially we also took pictures for the expression
'depression', but we could not distinguish between the expressions of 'sadness' and 'depression' even with the
naked eye, so we later discarded the 'depression' images from the database. Figure 3 is a
screenshot of the training database.
Figure 3: Facial Expression Training Database
After training, we implemented the client side of the web service call using PHP and JavaScript. On the client
side, the user uploads an image and the expression is detected through the web service call. On the server side
we used the Apache Tomcat container as the application server with Axis2 as the SOAP engine. A
PHP script then calls that web service from the browser: the user uploads a picture and
the facial expression is detected via the web service call. Figure 4 shows the high level architectural
overview.
Figure 4: Web based Expression Detection Architecture. The client (browser or mobile) makes an HTTP call to a WAMP/PHP web server, which invokes the Axis2 SOAP/web service engine hosted in the Apache Tomcat application server, where the MATLAB expression detection script runs.
Here we provide a screenshot of a sample user with the corresponding detected expression.
Figure 5: Facial Expression Recognition from a Web page.
6. CHARACTERISTICS
Our application has several unique features, and corresponding research challenges, compared to other such applications.
Several important functionalities of our model are described here.
6.1 Real Time Mood to Social Media
Much research has been done on facial expression detection, but the idea of detecting mood in real time
and integrating it with Facebook is novel. The 'extreme connectivity' of Facebook
will help people spread their happiness to others and, at the same time, help people
overcome their sadness, sorrow, and depression, thus improving quality of life. Given the
enormous number of Facebook users and its current growth, this can have a real positive impact on
society.
6.2 Location Aware Sharing
Though people like to share their moods with friends and others, there are scenarios in which they do not
want to publish their mood because of where they are. For example, if someone is in the office and in a
depressed mood after an argument with his boss, he certainly would not want to publish his
depressed mood; if that information were published, he might be put in an awkward position. Our model takes
location as a context when making the publishing decision.
6.3 Mood Aware Sharing
There are moods that people like to share with everyone and moods which they do not like to
share. For example, one might like to share all moods but anger with everyone, or share
specific groups of moods with particular people. Someone may want to share only a happy face with kids (or
people below a specific age) and all available moods with others. All of these cases are handled by the model, as illustrated in the sketch below.
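A minimal sketch of how the location-aware (6.2) and mood-aware (6.3) checks could be combined into a single sharing decision is given below. The rule fields, group names, and example values are illustrative assumptions and not the exact policy format used in EMS.

    # Illustrative privacy rules combining the location and mood contexts of 6.2 and 6.3.
    # The rule format and example values are assumptions for this sketch, not EMS's schema.
    user_rules = {
        "blocked_locations": {"office"},               # never publish mood from these places
        "blocked_moods": {"anger"},                    # never publish these moods
        "audience_by_mood": {                          # restrict some moods to chosen groups
            "sadness": {"family", "close_friends"},
        },
    }

    def sharing_decision(mood, location, rules):
        """Return (publish, audience): whether to publish and to which friend groups."""
        if location in rules["blocked_locations"]:
            return False, set()
        if mood in rules["blocked_moods"]:
            return False, set()
        audience = rules["audience_by_mood"].get(mood, {"everyone"})
        return True, audience

    # Example: a sad mood detected at the office is suppressed entirely.
    print(sharing_decision("sadness", "office", user_rules))    # (False, set())
    print(sharing_decision("happiness", "home", user_rules))    # (True, {'everyone'})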
6.4 Mobility:
Our model works with both fixed and handheld devices, ensuring mobility. Two
different protocols are being developed for connecting to the web server from fixed devices (laptop, desktop,
etc.) and from handheld devices (cell phone, PDA, etc.). A recent statistic showed that around 150 million
active users access Facebook from their mobile devices [39]. This feature lets users access our mood
based application on the fly.
6.5 Resources of Behavioral Research
Appropriate use of this model can provide a huge amount of user certified data for behavioral
scientists. Billions of dollars are currently being spent on projects that aim to improve quality
of life. To do so, behavioral scientists need to analyze the mental state of different age groups,
along with their likes, dislikes, and other factors. Statistical resources about the emotions of millions of users could
provide them with invaluable information for reaching a conclusive model.
6.6 Context Aware Event Manager
The event manager suggests events that might suit the current mood of the user. It works as a
personalized event manager for each user by tracking the user's previous records. When it is about to
suggest an activity to make a person happy, it tries to find out whether any
particular event has made that person happy before. This makes the event manager more
specific and relevant to the person's wish list and likings.
7. CRITICAL & UNRESOLVED ISSUES
7.1 Deception of Expression (suppression, amplification, simulation):
The degree of control over suppression, amplification, and simulation of a facial expression remains an open
problem for any kind of automatic facial expression detection. Galin and Thorn [26] worked on
the simulation issue but their results were not conclusive. In several studies researchers obtained mixed or
inconclusive findings when attempting to identify suppressed or amplified pain [26, 27].
7.2 Difference in Cultural, Racial, and Sexual Perception:
Multiple empirical studies have demonstrated the effectiveness of FACS.
Almost all of these studies selected individuals mainly based on gender and age. But facial
expressions clearly differ among people of different races and ethnicities. Culture plays a major role in
the expression of emotion: it dominates the learning of emotional expression (how and when) from
infancy, and by adulthood that expression becomes strong and stable [19, 20]. Similarly, the same pain
detection models are used for men and women, while research [14, 15] shows notable differences in
the perception and experience of pain between the genders. Fillingim [13] attributed this to
biological, social, and psychological differences between the two genders. This gender issue has been neglected
so far in the literature. In Table III we put 'Y' in the appropriate column if the expression detection model deals
with different genders, age groups, and ethnicities.
Table III. Comparison Table Based on the Descriptive Sample Data
Name | Age | Gender | Ethnicity
Eigenfaces [7] | Y | Y | Y
Eigenfaces + Eigeneyes + Eigenlips [8] | Y | Y | Y
AAM [6] | Y | Y (66 F, 63 M) | NM
[32, 35] Littlewort | | |
ANN [28] | Y | Y | Y
RVM [30] | Y (18 hrs to 3 days) | Y (13 B, 13 G) | N (only Caucasian)
Facial Grimace [31] | N | N | N
Back propagation [29] | Y | Y | NM

Y – Yes, N – No, NM – Not mentioned, F – Female, M – Male, B – Boy, G – Girl.
7.3 Intensity:
According to Cohn [23], occurrence/non-occurrence of AUs, temporal precision, intensity, and aggregates
are the four reliability issues that need to be analyzed when interpreting the facial expression of any emotion.
Most researchers, including Pantic and Rothkrantz [21] and Tian, Cohn, and Kanade [22], have focused on the
first issue (occurrence/non-occurrence). The current literature has largely failed to identify the intensity level of facial
expressions.
7.4 Dynamic Features:
Several dynamic features, including timing, duration, amplitude, head motion, and gesture, play an
important role in the accuracy of emotion detection. Slower facial actions appear more genuine [25].
Edwards [24] showed that people are sensitive to the timing of facial expressions. Cohn [23] related head
motion to a sample emotion, the smile: its intensity increases as the
head moves down and decreases as it moves upward and returns to its normal frontal position. These issues
of timing, head motion, and gesture have been neglected, although they could increase the accuracy of facial
expression detection.
8. CONCLUSION
Here we have proposed a real time mood detection and sharing application for Facebook. Its novelties
and its impact on society have been described. Several context awareness features make this
application unique compared to other applications of its kind. A customized event manager has been
incorporated to make suggestions based on the user's mood, which follows a new trend in current online advertisement
strategy. A survey result has also been attached to show the significance and likely acceptance of such an application
among the population.
We have already built a small demo that detects one expression (a happy face) and integrated it with web
services. Currently we are working on detecting other moods. We are also trying to report the
intensity level of the facial expression along with the detected mood.
There are still some open issues. There is no Facebook application that appropriately handles user
privacy, and Facebook avoids this responsibility by putting the burden on the developer's shoulders. We plan to
incorporate this issue, especially location privacy, in our extended model. We also plan to explore other
mood detection algorithms to find the most computationally inexpensive and robust method for Facebook
integration.
9. REFERENCES
[1] Bartlett, M.S., Littlewort, G.C., Lainscsek, C., Fasel, I., Frank, M.G., Movellan, J.R., “Fully automatic
facial action recognition in spontaneous behavior”, In 7th International Conference on Automatic Face
and Gesture Recognition, 2006, p. 223-228.
[2] Braathen, B., Bartlett, M.S., Littlewort-Ford, G., Smith, E. and Movellan, J.R. (2002). An approach to
automatic recognition of spontaneous facial actions. Fifth International Conference on automatic face and
gesture recognition, pg. 231-235.
[3] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, “Synthesizing realistic facial
expressions from photographs", Computer Graphics, 32 (Annual Conference Series): 75–84, 1998.
[4] Jagdish Lal Raheja, Umesh Kumar, “Human facial expression detection from detected in captured
image using back propagation neural network”, In International Journal of Computer Science &
Information Technology (IJCSIT), Vol. 2, No. 1, Feb 2010,116-123.
[5] Paul Viola, Michael Jones, “Rapid Object Detection using a Boosted Cascade of Simple features”,
Conference on computer vision and pattern recognition, 2001.
[6] A. B. Ashraf, S. Lucey, J. F. Cohn, T. Chen, K. M. Prkachin, and P. E. Solomon, "The painful face II – Pain expression recognition using active appearance models", In International Journal of Image and
Vision Computing, 27(12):1788-1796, November 2009.
[7] Md. Maruf Monwar, Siamak Rezaei and Dr. Ken Prkachin, “Eigenimage Based Pain Expression
Recognition”, In IAENG International Journal of Applied Mathematics, 36:2, IJAM_36_2_1. (online
version available 24 May 2007)
[8] Md. Maruf Monwar, Siamak Rezaei: Appearance-based Pain Recognition from Video
Sequences. IJCNN 2006: 2429-2434
[9] Mayank Agarwal, Nikunj Jain, Manish Kumar, and Himanshu Agrawal, “Face recognition using
principle component analysis, eigenface, and neural network", In International Conference on Signal
Acquisition and Processing, ICSAP, 310-314.
[10] Murthy, G. R. S. and Jadon, R. S. (2009). Effectiveness of eigenspaces for facial expression
recognition. International Journal of Computer Theory and Engineering, Vol. 1, No. 5, pp. 638-642.
[11] Singh. S. K., Chauhan D. S., Vatsa M., and Singh R. (2003). A robust skin color based face detection
algorithm. Tamkang Journal of Science and Engineering, Vol. 6, No. 4, pp. 227-234.
[12] Rimé, B., Finkenauer, C., Luminet, O., Zech, E., and Philippot, P. 1998. Social Sharing of
Emotion: New Evidence and New Questions. In European Review of Social Psychology, Volume 9.
[13] Fillingim, R. B., “Sex, gender, and pain: Women and men really are different”, Current Review of
Pain 4, 2000, pp 24–30.
[14] Berkley, K. J., “Sex differences in pain”, Behavioral and Brain Sciences 20, pp 371–80.
[15] Berkley, K. J. & Holdcroft A., “Sex and gender differences in pain”, In Textbook of pain, 4th edition.
Churchill Livingstone.
[16] Pantic, M. and Rothkranz, L.J.M. 2000. Expert System for automatic analysis of facial expressions.
In Image and Vision Computing, 2000, 881-905
[17] Pantic, M. and Rothkranz, L.J.M. 2003. Toward an affect sensitive multimodal human-computer
interaction. In Proceedings of IEEE, September 1370-1390
[18] Ekman P. and Friesen, W. Facial Action Coding System: A Technique for the Measurement of Facial
Movement, Consulting Psychologists Press, Palo Alto, CA, 1978.
[19] Malatesta, C. Z., & Haviland, J. M., “Learning display rules: The socialization of emotion expression
in infancy”, Child Development, 53, 1982, pp 991-1003.
[20] Oster, H., Camras, L. A., Campos, J., Campos, R., Ujiee, T., Zhao-Lan, M., et al., “The patterning of
facial expressions in Chinese, Japanese, and American infants in fear- and anger- eliciting situations”,
Poster presented at the International Conference on Infant Studies, Providence, RI, 1996.
[21] Pantic, M., & Rothkrantz, M., “Automatic analysis of facial expressions: The state of the art”, In
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 2000, pp 1424-1445.
[22] Tian, Y., Cohn, J. F., & Kanade, T., “Facial expression analysis”, In S. Z. Li & A. K. Jain (Eds.),
Handbook of face recognition, 2005, pp. 247-276. New York, New York: Springer.
[23] Cohn, J.F., “Foundations of human-centered computing: Facial expression and emotion”, In
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI’07),2007, Hyderabad,
India.
[24] Edwards, K., "The face of time: Temporal cues in facial expressions of emotion", In Psychological
Science, 9(4), 1998, pp. 270-276.
[25] Krumhuber, E., & Kappas, A., “Moving smiles: The role of dynamic components for the perception
of the genuineness of smiles”, In Journal of Nonverbal Behavior, 29, 2005, pp-3-24.
[26] Galin, K. E. & Thorn, B. E. , “Unmasking pain: Detection of deception in facial expressions”,
Journal of Social and Clinical Psychology (1993), 12, pp 182–97.
[27] Hadjistavropoulos, T., McMurtry, B. & Craig, K. D., “Beautiful faces in pain: Biases and accuracy in
the perception of pain”, Psychology and Health 11, 1996, pp 411–20.
[28] Md. Maruf Monwar and Siamak Rezaei, “Pain Recognition Using Artificial Neural Network”, In
IEEE International Symposium on Signal Processing and Information Technology, Vancouver, BC, 2006,
28-33.
[29] A.B. Ashraf, S. Lucey, J. Cohn, T. Chen, Z. Ambadar, K. Prkachin, P. Solomon, B.J. Theobald, "The
Painful Face - Pain Expression Recognition Using Active Appearance Models", In ICMI, 2007.
[30] B. Gholami, W. M. Haddad, and A. Tannenbaum, “Relevance Vector Machine Learning for Neonate
Pain Intensity Assessment Using Digital Imaging”, In IEEE Trans. Biomed. Eng., 2010. Note: To Appear
[31] Becouze, P., Hann, C.E., Chase, J.G., Shaw, G.M. (2007) Measuring facial grimacing for quantifying
patient agitation in critical care. Computer Methods and Programs in Biomedicine, 87(2), pp. 138-147.
[32] Littlewort, G., Bartlett, M.S., and Lee, K. (2006). Faces of Pain: Automated measurement of
spontaneous facial expressions of genuine and posed pain. Proceedings of the 13th Joint Symposium on
Neural Computation, San Diego, CA.
[33] Smith, E., Bartlett, M.S., and Movellan, J.R. (2001). Computer recognition of facial actions: A study
of co-articulation effects. Proceedings of the 8th Annual Joint Symposium on Neural Computation.
[34] S. Brahnam, L. Nanni, and R. Sexton, “Introduction to neonatal facial pain detection using common
and advanced face classification techniques,” Stud. Comput. Intel., vol. 48, pp. 225–253, 2007.
[35] Gwen C. Littlewort, Marian Stewart Bartlett, Kang Lee, “Automatic Coding of Facial Expressions
Displayed During Posed and Genuine Pain", In Image and Vision Computing, 27(12), 2009, p. 1741-1844.
[36] Bartlett, M., Littlewort, G., Whitehill, J., Vural, E., Wu, T., Lee, K., Ercil, A., Cetin, M. Movellan,
J., “Insights on spontaneous facial expressions from automatic expression measurement”, In Giese,M.
Curio, C., Bulthoff, H. (Eds.) Dynamic Faces: Insights from Experiments and Computation, MIT Press,
2006.
[37] S. Brahnam, C.-F. Chuang, F. Shih, and M. Slack, “Machine recognition and representation of
neonatal facial displays of acute pain,” Artif. Intel. Med., vol. 36, pp. 211–222, 2006.
[38] http://www.Facebook.com/press/info.php?statistics
[39] http://metrics.admob.com/