Methods of Psychological Research Online 1998, Vol.3, No.1
Internet: http://www.pabst-publishers.de/mpr/
© 1998 Pabst Science Publishers
Computer aided generation of
prototypical facial expressions of emotion
Philippos Vanger, Robert Hoenlinger, Hermann Haken†
Abstract
The present paper presents a method of producing prototypical facial expressions of different emotions based on computation and deformation of digitalized facial images. Facial expressions of six basic emotions were portrayed by subjects. Each individual facial image was then deformed so as to accommodate to a "face stencil" defined by standard points on the facial structure. Prototypes for the expressions of each emotion were created by averaging the images of all individual faces. In this way the physiognomic variability of individual subjects is reduced to a single computer generated face while retaining the facial expression. Further combinations of upper and lower face parts produced various facial expressions with less clear emotional meaning. Applications and possibilities for further development of this method are discussed.
Cacioppo et al. (1992) in their discussion of facial signal systems point out that
a facial image contains information that can be subdivided into:
"1. Static facial signals, e.g., the permanent features of the face such as the bony
structure and soft tissue masses that contribute to an individual's appearance.
2. Slow facial signals, e.g., changes in the appearance of the face that occur
gradually over time, such as the development of permanent wrinkles and changes
in skin texture.
3. Artificial signals, e.g., exogenously determined features of the face such as
eyeglasses and cosmetics.
4. Rapid facial signals, e.g., phasic changes in neuromuscular activity that may
lead to visually detectable changes in facial appearance." (p. 9)
A great deal of psychological research on the face has so far concentrated on rapid
facial signals or facial expressions and their role in interpersonal communication
(Ekman, Friesen, & Ellsworth, 1972). Furthermore, a large body of literature has
been concerned with demonstrating that facial expression is important and effective in communicating various emotional states in social interaction (DePaulo, 1992) and that the experiencing of emotion triggers the activation of facial muscles, producing the specific facial expression corresponding to each of the basic emotions (Buck, 1984; Ekman, 1972, 1977; Izard, 1977; Tomkins, 1962). Within this line of research a number of decoding studies have been conducted employing facial material of spontaneously emitted or posed facial activity. This is usually photographic, film, or video material of real persons, such as that developed by Ekman and Friesen (1978) for the FACS manual. However, since different decoding studies ask different questions, different research groups have developed their own facial material especially tailored
for the needs of their studies (Etcoff & Magee, 1992). This means that although the facial expressions under study may be identical, there is nevertheless great (and inevitable) variability in the physiognomic characteristics of the real persons involved in the production of facial material.

Center for Psychotherapy Research, Stuttgart
† Department of Theoretical Physics and Synergetics, University of Stuttgart
1 The Role of Facial Structure in Facial Perception and Judgment
In the course of history different theories have been proposed advocating a direct link between facial structure (or physiognomy, as it is also called) and psychological qualities or personal characteristics (Lavater, 1775/78). Modern science has dismissed these theories, and empirical studies have failed to find support for an unequivocal correspondence of specific facial structures to specific attributes (Henss, 1995). Nonetheless, there is evidence indicating that facial structure may influence the way a person will be judged (Malatesta et al., 1987) and responded to (Bull & Rumsey, 1982; Rumsey et al., 1982; Nakdimen, 1984), especially in cases of facial deformity (Aamot, 1978; Bull & Stevens, 1981) or attractiveness (McCullers & Staat, 1974; Terry & Brady, 1976; Milord, 1978). Further indication for the influence of facial structure on interpersonal judgments and responses is provided by the "Kindchenschema" theory of Konrad Lorenz (1943) as well as by empirical studies in this area (Huckstedt, 1965; Sternglanz et al., 1977; McCabe, 1984). In this theory Lorenz proposed that the facial structure of babies (characterized by a large forehead, small chin, and concave nose) elicits nurturant behavior in adults while inhibiting aggressive acts, and is thus of survival value for newborns. Babyfacedness has also been found to correlate with judged attractiveness, with differential stability for males and females across the life span (Zebrowitz et al., 1993).
The influence of static facial signals on the decoding of rapid facial signals has not yet been systematically investigated. An older study by Eistel (1953) indicates that facial structure influences the judgment of facial expression. In this judgment study schematically drawn faces were employed that varied on the dimensions of facial expression and facial structure. The faces were constructed in such a way that the outer contours of the face (depicting the physiognomy) could be varied without interfering with the inner part of the drawing (depicting the facial expression). Each facial expression was thus embedded in different facial structures. The results obtained from the judgment of the various combinations of facial expression and facial structure indicated that faces with a rounder facial structure were rated more positively than faces with a more quadratic shape, regardless of their facial expression. It seems then that in order to investigate perceptual and judgmental processes with regard to facial expression, individual characteristics of facial material need to be minimized. Otherwise physiognomic particularities may contaminate the judgment of the facial expression per se. Material constructed artificially with the aid of recent computer technology may be an alternative to photographic portrayals of facial expressions. Sets of facial material based on schematic drawings or caricatures of faces (Brennan, 1985) and of facial expressions (Musterle, 1984; Etcoff & Magee, 1992) have been developed and employed as stimulus material for decoding studies of facial affect.
2 Computer Processing of Facial Images
Within the framework of digital image processing and computer vision, various research groups are working on the development of computer generated facial images (Patel, 1993) in two or three dimensions (Troje & Bülthoff, 1996). Two lines of research can be identified: developments pertaining to lip movements during speech production (Cohen & Massaro, 1994; Guiard-Marigny et al., 1994; Waters & Frisbee, 1995) and developments focusing on facial expression, thus covering the whole face (Waters & Terzopoulos, 1991; Marriott, 1992; Essa, 1994). Since expressive facial movements and speech-produced movements often co-occur, some computer animation systems take both into consideration (Wang, 1991; Cassell et al., 1994). The ultimate goal of these endeavours is the development of automated recognition of facial patterns. A model for associative memory and pattern recognition devised by Haken (1987) treats the activity of neurons as continuous variables and exploits an analogy with pattern formation in synergetic systems. As an application of the synergetic model, Fuchs and Haken (1988) showed that complex visual patterns such as human faces can be identified by the computer. On the basis of this work a procedure was developed for the automated recognition of images of facial emotion (Vanger et al., 1995, 1997) by employing stored prototypes of facial expressions of emotion.
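The recognition scheme can be illustrated, in a highly simplified form, by the initial step of such prototype-based recognition: an input image is projected onto the stored prototypes, and the prototype with the largest overlap wins. The Python sketch below illustrates this projection step only, not the full synergetic dynamics; the function name and the normalization scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def classify_by_overlap(test_image, prototypes):
    """Return the label of the prototype with the largest normalized overlap.

    `prototypes` maps labels to equally sized 2-D arrays. Images are
    flattened, mean-centered, and normalized before the dot product, so the
    overlap is a correlation-like score in [-1, 1].
    """
    def unit(v):
        v = v.astype(np.float64).ravel()
        v = v - v.mean()
        return v / np.linalg.norm(v)

    t = unit(test_image)
    overlaps = {label: float(unit(p) @ t) for label, p in prototypes.items()}
    return max(overlaps, key=overlaps.get)
```

A new face image would thus be assigned the emotion label of the stored prototype it resembles most closely.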
3 Assessment of facial activity
The scientific interest in the investigation and measurement of facial movement and its meaning dates back to the works of John Bulwer in the 17th century. He dealt with the study of the expression of affect in the face (Bulwer, 1649) and with lipreading in hearing impaired persons (Bulwer, 1648). In the 19th century Duchenne de Boulogne (1862), with his experimental work on muscular movement in the face, set the foundations for an anatomically based measurement of facial expressions. Furthermore, Darwin (1872) with his book "The expression of the emotions in man and animals" opened new horizons for the investigation of expressivity in the human face and its relation to affect. In a review article on assessing facial activity, Ekman (1982) describes different coding methods that have been developed and applied in scientific research. These diverse systems have been developed for various research purposes and for the investigation of different populations (e.g. infants or adults, normal or clinical samples, or disabled individuals) in various situations (e.g. spontaneous or posed expressions, in interpersonal interaction or in experimental situations). Table 1 presents the different measurement systems of facial activity with respect to their theoretical background and area of implementation.
The most frequently employed coding systems in psychological research are the FACS (Ekman & Friesen, 1978) and the MAX (Izard, 1979). The major difference between these instruments lies in the focus of their measurement. The MAX concentrates only on facial expressions that correspond to the basic emotions and ignores other movements not relevant to affect. On the contrary, FACS describes all possible movements of the facial musculature that produce a visible change in the face. Each discrete movement is called an action unit. Activation of different action units results in combinations that make up a facial expression, and not only expressions of affective meaning. A shorter version of FACS that focuses selectively on facial activity related to indicators of affect was also developed (EM-FACS). Furthermore, the Baby-FACS version (Oster, 1993) was adapted in order to take into account particularities of the facial musculature of infants.
In the present work different facial expressions of emotion were defined with reference to the Facial Action Coding System (Ekman & Friesen, 1978). FACS is based on an anatomical notation system that describes the muscular basis of facial expression and classifies the appearance changes on the face mediated by muscular activity. In this way it can represent all possible appearance changes in the face. Table 2 indicates the correspondence of different facial muscles to discrete Action Units representing visible changes in the face.
Trained coders can note the single elements that make up complicated facial
MPR{online 1998, Vol.3, No.1
c 1998 Pabst Science Publishers
P. Vanger et.al.: Facial Prototypes of Emotion
28
Table 1: Assessment instruments of facial activity according to theoretical background and major area of implementation.

Infant / Child
  Ethologic: Blurton Jones (1971); Brannigan & Humphries (1972); Grant (1969); McGrew (1972); Nystrom (1974); Young & Decarie (1977)
  Emotion Theory: Izard / MAX (1979); Landis (1924)
  Anatomical: Oster / Baby-FACS (1993)

Adult
  Linguistic: Birdwhistell (1970)
  Ethologic: Brannigan & Humphries (1972); Grant (1969)
  Emotion Theory: Ekman, Friesen & Tomkins (1971); Izard, Dougherty & Hembree / AFFEX (1983); Frois-Wittmann (1930)
  Anatomical: Ekman & Friesen / FACS (1978); Hjortsjö (1969); Ermiane & Gergerian (1978)

Clinical Population
  Anatomical: Ekman & Friesen / FACS (1978); Fulcher (1942)
Table 2: Correspondence of FACS Action Units to facial muscles.

AU Nr.  FACS Name             Muscular Basis
1       Inner Brow Raiser     Frontalis, Pars Medialis
2       Outer Brow Raiser     Frontalis, Pars Lateralis
4       Brow Lowerer          Depressor Glabellae, Depressor Supercilii; Corrugator
5       Upper Lid Raiser      Levator Palpebrae Superioris
6       Cheek Raiser          Orbicularis Oculi, Pars Orbitalis
7       Lid Tightener         Orbicularis Oculi, Pars Palpebralis
9       Nose Wrinkler         Levator Labii Superioris, Alaeque Nasi
10      Upper Lip Raiser      Levator Labii Superioris, Caput Infraorbitalis
11      Nasolabial Deepener   Zygomaticus Minor
12      Lip Corner Puller     Zygomaticus Major
13      Cheek Puffer          Caninus
14      Dimpler               Buccinator
15      Lip Corner Depressor  Triangularis
16      Lower Lip Depressor   Depressor Labii
17      Chin Raiser           Mentalis
18      Lip Puckerer          Incisivii Labii Superioris; Incisivii Labii Inferioris
20      Lip Stretcher         Risorius
22      Lip Funneler          Orbicularis Oris
23      Lip Tightener         Orbicularis Oris
24      Lip Pressor           Orbicularis Oris
25      Lips Part             Depressor Labii, or relaxation of Mentalis or Orbicularis Oris
26      Jaw Drop              Masseter; relaxation of temporal and internal Pterygoid
27      Mouth Stretch         Pterygoids, Digastric
28      Lip Suck              Orbicularis Oris
expressions without providing an interpretation of facial activity. That is, a face will not be described as happy, sad, or aggressive but rather will be coded according to specified Action Units (AUs) such as 4 (= frown) + 15 (= lip corner depressor) + 17 (= chin lift). The interpretation of the combination of these appearance changes in the face as aggressive, happy, etc. remains open. The Facial Action Coding System provides high accuracy in detecting and describing produced changes in the facial musculature from either still photos or video records.
3.1 Prototypes of Facial Expressions of Emotions
Based on a large body of decoding studies in various cultures, Ekman and Friesen (1978) devised a table (FACS Table 11-1) of different Action Unit combinations that correspond to different emotions and have a high probability of being recognized as such. For each basic emotion of joy, surprise, anger, sadness, disgust, and fear, different AU combination possibilities are indicated. In the present case the most intense combinations for each emotional expression were chosen. They are defined as follows:

Emotion    Action Unit Combination
Joy        6 + 12 + 25
Sadness    1 + 4 + 15
Disgust    10 + 17 + 4
Anger      4 + 5 + 7 + 24
Surprise   1 + 2 + 5 + 26
Fear       1 + 2 + 4 + 5 + 20 + 25
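For illustration, the chosen combinations can be written as a small lookup table. The following Python sketch encodes the AU numbers listed above; the helper function for comparing two expressions is a hypothetical addition, not part of the original procedure.

```python
# Most intense FACS Action Unit combinations chosen for each basic emotion
# (following FACS Table 11-1; see text).
EMOTION_AUS = {
    "joy":      [6, 12, 25],
    "sadness":  [1, 4, 15],
    "disgust":  [10, 17, 4],
    "anger":    [4, 5, 7, 24],
    "surprise": [1, 2, 5, 26],
    "fear":     [1, 2, 4, 5, 20, 25],
}

def shared_aus(emotion_a, emotion_b):
    """Return the Action Units common to two target expressions."""
    return sorted(set(EMOTION_AUS[emotion_a]) & set(EMOTION_AUS[emotion_b]))
```

Such a table makes the overlap between expressions explicit; fear and surprise, for example, share AUs 1, 2, and 5, which is one reason these two expressions are frequently confused in decoding studies.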
4 Procedure
This method deals with the creation of artificial facial material depicting expressions of emotion with standardized facial morphology. The processing steps were as follows:

4.1 Preparation of original facial material.

Ten subjects were instructed by a trained FACS coder to voluntarily produce the combination of AUs specified for each emotion. They were then photographed while posing these facial expressions as well as when showing a neutral, relaxed face. All subjects were photographed in front of a black backdrop, wearing a black turtleneck shirt, under the same lighting conditions. They were photographed frontally, full-face, with their head held straight. The printed photographs were all of the same format (17 x 12). Each photograph was then digitalized and stored. In this way 70 facial images (10 subjects x 7 expressions) were produced.
4.2 The facial composite image procedure.
The idea of creating an averaged face dates back to the work of Galton (1878), who succeeded in photographically blending different faces by multiple exposures. The pupils of the eyes were used as stable points for the blending procedure. Employing a similar technique for averaging, Langlois and Roggman (1990) demonstrated the computer production of facial prototypes. In order to produce a face prototype, all individual facial images depicting the same emotional expression need to be averaged in terms of their corresponding pixels. Averaged faces emerge when using the eye pupils as approximate reference points for each facial image. However, the resulting facial expressions are rather blurred and at times confusing as to the
manifest Action Units. This is due to the different physiognomic characteristics of the subjects' faces. With the exception of the pupils, which are uniformly defined as points in each face, all other AUs lose the sharpness of their contours and consequently the poignancy of their appearance. With a few exceptions, a reliable FACS coding is not possible. It is evident that the averaging procedure based on only two points of the image (the centers of the pupils) results in blurred facial composite images in which the facial expression is not adequately depicted.
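The pixel-wise averaging step itself is straightforward. A minimal NumPy sketch (the function name is illustrative; it assumes the images are already aligned and stored as equally sized grey-value arrays):

```python
import numpy as np

def average_faces(images):
    """Pixel-wise mean of equally sized, pre-aligned facial images.

    `images` is a sequence of 2-D uint8 arrays; the result is the
    composite (averaged) image, rounded back to uint8.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return np.round(stack.mean(axis=0)).astype(np.uint8)
```

As the text notes, the quality of the composite depends entirely on how well the images were aligned beforehand, which motivates the stencil method described next.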
4.3 The "face stencil".
Benson and Perrett (1991, 1993) have shown that a greater number of selected points on the face allows the reshaping of each face into an averaged facial structure and results in sharply defined facial composite images. In their work, 208 points were predefined in order to study the perception of gender differences, attractiveness, and age in faces (Burt & Perrett, 1995; Perrett et al., 1994). Based on these works, a similar procedure was adopted for the production of facial expression composite images (Hoenlinger et al., 1994). In order to overcome the limitations that physiognomic variability poses, the "face stencil method" was developed to devise a standard facial structure which would be uniform for all facial images. At first, 29 reference points were defined on each face. Five points lie on the central axis of the face; twelve points lie almost symmetrically on each side of the face. These are:
Lower face
1   tip of the chin
2   between the lips
3   middle of upper lip
4   upper lip (left, right)
5   under the tip of the nose
6   middle of lower lip
7   lip corner (left, right)
8   nostril (left, right)
9   under the ear lobe (left, right)

Upper face
10  nose bridge (left, right)
11  inner eye corner (left, right)
12  outer eye corner (left, right)
13  eye pupil (left, right)
14  middle of upper lid (left, right)
15  middle of lower lid (left, right)
16  middle of eye brow (left, right)
17  tail of eye brow (left, right)
For each of the 29 reference points an average point was calculated over the corresponding points of the ten original faces for each facial expression. In this manner a standard stencil consisting of 29 points was obtained for each expression. The next step in the construction of the prototypes was the adjustment of each individual face to the standard stencil of the corresponding emotion. This was done by matching the individual reference points to the standard ones and interpolating all other pixels of the image. In other words, each individual facial image was slightly distorted in order to fit the standard dimensions of the stencil. In this way each individual face takes on identical physiognomic dimensions, e.g., length of nose, width of jaws, distance between the eyes. Similarly, appearance changes on faces exhibiting the same expression are defined by identical reference points, e.g., points 4, 5, 7, and 8 (left and right correspondingly) for the lip corner positions, and points 16 and 17 (left and right correspondingly) for the eye brow position. The face structure is marked by the following points: 1, 9 (L, R), and 17 (L, R) defining the outer shape of the head; 5, 10 (L, R) defining the shape of the nose; 11, 12, 13 (L, R) defining the shape of the eyes; 2 defining the center of the mouth.
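The stencil computation is a plain landmark average; the subsequent warp moves each face's reference points onto the stencil and interpolates all other pixels. Since the interpolation scheme is not specified in detail in the text, the sketch below substitutes a simple inverse-distance-weighted backward warp with nearest-neighbour sampling; the function names and this particular scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def standard_stencil(landmark_sets):
    """Average the reference points over all faces showing one expression.

    `landmark_sets` has shape (n_faces, n_points, 2) with (x, y) coordinates;
    the result is the (n_points, 2) standard stencil for that expression.
    """
    return np.asarray(landmark_sets, dtype=np.float64).mean(axis=0)

def warp_to_stencil(image, src_points, stencil):
    """Backward-warp `image` so its landmarks move onto the stencil positions.

    For every output pixel, the source position is estimated by weighting the
    landmark displacements with inverse squared distance to the stencil
    points; the grey value is then taken from the nearest source pixel.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out_xy = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    stencil = np.asarray(stencil, dtype=np.float64)
    # displacement leading from each stencil point back to the source face
    disp = np.asarray(src_points, dtype=np.float64) - stencil
    dist = np.linalg.norm(out_xy[:, None, :] - stencil[None, :, :], axis=2)
    weights = 1.0 / (dist**2 + 1e-9)
    weights /= weights.sum(axis=1, keepdims=True)
    src_xy = out_xy + weights @ disp
    sx = np.clip(np.round(src_xy[:, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(src_xy[:, 1]).astype(int), 0, h - 1)
    return image[sy, sx].reshape(h, w)
```

With 29 points per face, `standard_stencil` would be applied once per expression over the ten landmark sets, and `warp_to_stencil` once per individual image before the pixel-wise averaging.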
Figure 1: Processing stages of facial images: a) point referencing of a subject's face; b) adjustment of the facial image to the "stencil"; c) facial prototype produced by averaging "stenciled" facial images depicting the same emotional expression.
4.4 Computer Generated Prototypes of Facial Expressions of Emotions
Following the above procedure, six facial prototypes of emotions and one neutral prototype were obtained, as shown in Fig. 2.
In order to verify the accuracy of the facial expression of the acquired artificial images, the prototypes were coded according to FACS. During the posing of the expressions, subjects varied in their ability to accurately produce the required combination of Action Units. Although training was offered in order to improve their performance, some subjects still had difficulty with certain AUs. As a result, the composite image of the final prototype diverged from the originally intended combination with regard to certain AUs. In the image of disgust, in addition to the original AU combination 10+17+4, the AUs 6 and 7 were also coded. This was due to the fact that most subjects activated the facial muscles responsible for AUs 6 and 7 when producing the "upper lip raise" (AU 10). This results in a more intensified disgust expression but does not change the emotional quality of the expression. In the anger image, AUs 4 and 24 were coded, but AUs 5 and 7 failed to reach the minimal requirements. This renders the anger expression less intense, since the "upper lid raise" (AU 5) and the "lids tight" (AU 7) are absent, but again it does not affect the essential quality of the expression. The fear image was coded as 5+20+25+6. The AUs 1+2+4 that mark movements of the eyebrows did not fulfill the minimal requirements for coding. This partial discrepancy between the originally intended facial expression and the resulting prototypical images is again accounted for by individual variations in the original photographic material. Nevertheless, the overall expression of fear is recognizable in the upper face as well, albeit at a lower intensity, mainly due to the presence of AU 5 ("upper lid raise"). In contrast, the prototypes of joy, sadness, and surprise were coded identically to the original photographic material. In fact, for these expressions there was no variation in the posed combination of AUs across subjects. This suggests that the employed reference points are adequate for the purpose of producing composite images of facial expression and that the employed warping procedure does not distort the facial expression of the final prototype.
Figure 2: Computer generated prototypes of emotional expression.
These seven digitalized images of prototypical expressions of emotion may be further processed with the computer in order to yield other facial expressions of less clear emotional meaning. By dividing the face into two autonomous areas, an upper and a lower part, a number of combined emotion expressions become possible. In this way 49 different prototypes of upper and lower face combinations emerged.
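Combining the halves amounts to splicing the upper part of one prototype onto the lower part of another. A toy sketch in Python; the fixed horizontal split row is an assumption, since the paper does not state how the boundary between the two face areas was defined:

```python
import numpy as np

def combine_halves(upper_face, lower_face, split_row):
    """Splice the upper part of one prototype onto the lower part of another."""
    return np.vstack([upper_face[:split_row], lower_face[split_row:]])

def all_combinations(prototypes, split_row):
    """All upper/lower pairings of a dict of equally sized prototype images."""
    return {(u, l): combine_halves(prototypes[u], prototypes[l], split_row)
            for u in prototypes for l in prototypes}
```

With the seven prototypes (six emotions plus neutral), `all_combinations` yields the 7 x 7 = 49 pairings mentioned above, including the seven "pure" cases where upper and lower halves come from the same prototype.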
5 Concluding Remarks
In this paper the development of a method for producing facial images depicting emotions was described. An advantage of the resulting facial material is that the physiognomic characteristics of individual facial structures disappear while the facial expression remains identifiable. Employing facial material produced according to the present method may be advantageous for decoding studies of emotional expression in the face, since the facial morphology of the portrayed expressions is standardized. Furthermore, the present facial material, based on morphing techniques, seems more appropriate for use in judgement studies of emotion than schematic drawings of faces produced by computer based animation programs. In a study investigating emotion recognition in faces (Burg, 1996), the prototypical facial expressions presented in this paper were employed along with the corresponding expressions of schematic linear drawings of computer produced faces (Musterle, 1984). Judges tended overall to recognize emotion more accurately in the realistic looking morphed faces than in the linear computer drawings. During a debriefing interview after the completion of the task, subjects reported that the morphed faces looked more like a real person and that they could thus relate more easily to the depicted expression. In the linear drawing faces the expressions seemed exaggerated, and the lack of texture and shading in the face made it seem unreal. These findings seem to suggest that schematically drawn faces may be "read" in a more cognitive manner in terms of what they depict, whereas realistic looking morphed faces may also evoke an affective response on the part of the observer. This, however, needs to be investigated further.
In the present sample, certain combinations of facial action units appeared modified in certain prototypes. This seems to be due to individual variability in the intensity of muscle activation of the different subjects. This could be remedied by a) employing a much larger sample of subjects, and b) intensive training of the subjects in activating the facial musculature according to FACS.
A future prospect for this method is to compile a data bank of all individual facial action units and the combinations thereof. Male and female facial structures may be treated separately, and aging effects may be taken into consideration as a further variable. Furthermore, race and complexion may also be of interest, especially when working with color data. Such controlled computer manipulations of facial material may allow the systematic investigation of the relationship and the manifold interactions between the dynamic aspects of the human face (i.e. facial expression) and the static aspects of facial morphology.
References
[1] Aamot, S. (1978). Reactions to facial deformities: Autonomic and social psychological.
European Journal of Social Psychology, 8, 315-333.
[2] Benson, P. J., & Perrett, D. I. (1991). Synthesising continuous-tone caricatures. Image and Vision Computing, 9, 123-129.
[3] Benson, P. J., & Perrett, D. I. (1993). Extracting prototypical facial images from
exemplars. Perception, 22, 257-262.
Figure 3: Combinations of upper and lower facial parts of prototypical expressions of emotion.
[4] Birdwhistell, R. (1970). Kinesics and context. Philadelphia: University of Pennsylvania Press.
[5] Blurton Jones, N. G. (1971). Criteria for use in describing facial expression in children. Human Biology, 41, 365-413.
[6] Brannigan, C. R., & Humphries, D. A. (1972). Human nonverbal behavior, a means of communication. In: N. G. Blurton Jones (Ed.), Ethological studies of child behaviour. Cambridge: Cambridge University Press.
[7] Brennan, S. E. (1985). The caricature generator. Leonardo, 18, 170-178.
[8] Buck, R. (1984). The communication of emotion. New York: Guilford Press.
[9] Bull, R., & Rumsey, N. (1982). The social psychology of facial appearance. New York:
Springer.
[10] Bull, R., & Stevens, J. (1981). The effects of facial disfigurement on helping behaviour.
Italian Journal of Psychology, 8, 25-33.
[11] Bulwer, J. (1648) Philocopus, or the Deaf and Dumbe Mans Friend, London:
Humphrey and Moseley
[12] Bulwer, J. (1649). Pathomyotamia, or, A dissection of the significative muscles of the affections of the minde. London: Humphrey and Moseley.
[13] Burg, C. (1996). Die Mimik des Ober- und Untergesichtes. Diplomarbeit, Institut für Psychologie, Julius-Maximilians-Universität Würzburg.
[14] Burt, D. M., & Perrett, D. I. (1995). Perception of age in adult Caucasian male faces:
computer graphic manipulation of shape and colour information. Proceedings of the
Royal Society, 259, 137-143.
[15] Cacioppo, J., Hager, J. & Ekman, P. (1992) The psychology and neuroanatomy of
facial expression, In: Ekman, P., Huang, T. S., Sejnowski, T. J., & Hager, J. C. (Eds).
Final report to NSF of the planning workshop on facial expression understanding.
Unpublished manuscript. University of California, San Francisco: Human Interaction
Laboratory.
[16] Cassell, J., Pelachaud, C., Badler, N. I., Steedman, M., Achorn, B., Becket, T., Douville, B., Prevost, S., & Stone, M. (1994). Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. Computer Graphics Annual Conference Series, 413-420.
[17] Cohen, M. M., & Massaro, D. W. (1994). Development and experimentation with synthetic visible speech. Behavioral Research Methods and Experimentation, 26, 260-265.
[18] Darwin, C. (1872) The expression of the emotions in man and animals. London: J.
Murray.
[19] DePaulo, B. M. (1992). Nonverbal behavior and self-presentation. Psychological Bulletin, 111, 203-243.
[20] Duchenne de Boulogne, C.-B. (1862) The Mechanism of Human Facial Expression
Paris: Jules Renard. (edited and translated by R. Andrew Cuthbertson, Cambridge:
Cambridge Univ Press, 1990)
[21] Eistel, A. (1953). Der Eindruck der mimischen Erscheinungen in seiner Bedingtheit
vom physiognomischen Umfeld. Psychologische Rundschau, 4, 237-261.
[22] Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In: J. K. Cole (Ed.), Nebraska symposium on motivation. Lincoln: University of Nebraska Press.
[23] Ekman, P. (1977). Biological and cultural contributions to body and facial movement. In: J. Blacking (Ed.), The anthropology of the body. San Diego, CA: Academic Press.
[24] Ekman, P., Friesen, W. V., & Tomkins, S. S. (1971). Facial Affect Scoring Technique: A first validity study. Semiotica, 3, 37-58.
[25] Ekman, P., & Friesen, W. (1978). Manual for the Facial Action Coding System
(FACS). Palo Alto: Consulting Psychologists Press.
[26] Ekman, P., Friesen, W., & Ellsworth, P. (1972). Emotion in the human face. Elmsford,
N.Y.: Pergamon.
[27] Ermiane, R., & Gergerian, E. (1978). Atlas of facial expressions; Album des expressions du visage. Paris: La Pensée Universelle.
[28] Essa I. (1994) Analysis, Interpretation and Synthesis of Facial Expressions. Media
Laboratory Perceptual Computing Section Technical Report No. 303, Cambridge,
MA.
[29] Etcoff, N. L., & Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227-240.
[30] Galton, F. J. (1878). Composite portraits. Nature, 18, 97-100.
[31] Guiard-Marigny, T., Adjoudani, A., & Benoît, C. (1994). A 3-D model of the lips for
visual speech synthesis. Proceedings of the Second ESCA/IEEE Workshop on Speech
Synthesis, New Paltz, N. Y., U.S.A., Sept. 5-8.
[32] Frois-Wittmann J (1930) The judgement of facial expression. Journal of Experimental
Psychology, 13, 113-151
[33] Fuchs, A., & Haken, H. (1988). Pattern recognition and associative memory as dynamical processes in a synergetic system. Biological Cybernetics, 60, 17-22.
[34] Fulcher JS (1942) "Voluntary" facial expression in blind and seeing children. Archives
of Psychology, 38, 1-49
[35] Grant NB (1969) Human facial expression. Man, 4, 525-536
[36] Haken, H. (1987). Advanced Synergetics, 2nd edn. Springer: Berlin, Heidelberg, New
York.
[37] Henss, R. (1995). Das Fünf-Faktoren-Modell der Persönlichkeit bei der Beurteilung von Gesichtern. Report Psychologie, 20, 28-39.
[38] Hoenlinger, R., Vanger, P., & Haken, H. (1994). Konstruktion von Prototypen von Gesichtsausdrücken. Unpublished manuscript. Institute for Theoretical Physics and
Synergetics, University of Stuttgart.
[39] Huckstedt, B. (1965). Experimentelle Untersuchungen zum "Kindchenschema". Zeitschrift für experimentelle und angewandte Psychologie, 12, 421-450.
[40] Izard, C. E. (1977). Human emotions. New York: Plenum Press.
[41] Izard C (1979) The maximally discriminative facial movement coding system. (MAX.)
Unpublished manuscript. Available from Instructional Resources Center. University
of Delaware, Newark.
[42] Izard, Dougherty & Hembree (1983). A system for identifying affect expressions by holistic judgements (AFFEX). Newark, DE: Instructional Resources Center, University of Delaware.
[43] Landis C (1924) Studies of emotional reactions: II. General behavior and facial expression. Journal of Comparative Psychology, 4, 447-509
[44] Langlois, J. H., & Roggman, L. A. (1990). Attractive faces are only average. Psychological Science, 1, 115-121.
[45] Lavater, K. (1775/78). Physiognomische Fragmente. Zürich.
[46] Lorenz, K. (1943). Die angeborenen Formen möglicher Erfahrung. Zeitschrift für
Tierpsychologie, 5, 235-409.
[47] Malatesta, C. Z., Fiore, M. J., & Messina, J. J. (1987). Affect, personality, and facial
expressive characteristics of older people. Psychology and Aging, 2, 64-69.
[48] Marriott, A. (1992). Face Modelling for Character Animation. Curtin University of
Technology, Technical Report No. 14.
[49] McCabe, V. (1984). Abstract perceptual information for age level: a risk factor for
maltreatment. Child Development, 55, 267-276.
[50] McCullers, J. C., & Staat, J. (1974). Draw an ugly man: An inquiry into the dimensions of physical attractiveness. Personality and Social Psychology Bulletin, 1,
33-35.
[51] McGrew, W. C. (1972). An ethological study of children's behavior. New York: Academic
Press.
[52] Milord, J. T. (1978). Aesthetic aspects of faces: A (somewhat) phenomenological
analysis using multidimensional scaling methods. Journal of Personality and Social
Psychology, 36, 205.
[53] Musterle, W. A. (1984). Linear-Kombinationen aus stimmungsreinen Computer-Faces. Unpublished doctoral dissertation, Institute of Theoretical Chemistry,
University of Stuttgart.
[54] Nakdimen, K. A. (1984). The physiognomic basis of sexual stereotyping. American
Journal of Psychiatry, 141, 499-503.
[55] Nystrom, M. (1974). Neonatal facial-postural patterning during sleep: I. Description
and reliability of observation. Psychological Research Bulletin, 14, 1-16.
[56] Oster, H. (1993). Baby-FACS: Analysing facial movement in infants. Unpublished
manuscript.
[57] Patel, M. (1993). Facial Animation. In N. M. Thalmann & D. Thalmann (Eds.), Models
and Techniques in Computer Animation. Tokyo: Springer-Verlag.
[58] Perrett, D. I., May, K., & Yoshikawa, S. (1994). Attractive characteristics of female
faces: Preference for non-average shape. Nature, 368, 239-242.
[59] Rumsey, N., Bull, R., & Galagan, D. (1982). The effect of facial disfigurement on the
proxemic behaviour of the general public. Journal of Applied Social Psychology, 12,
137-150.
[60] Sternglanz, S. H., Gray, J. L., & Murakami, M. (1977). Adult preferences for infantile
facial features: An ethological approach. Animal Behaviour, 25, 108-115.
[61] Terry, R. L., & Brady, J. S. (1976). Components of facial attractiveness. Perceptual
and Motor Skills, 42, 918.
[62] Tomkins, S. S. (1962). Affect, imagery and consciousness (Vol. 1). New York: Springer.
[63] Troje, N. F., & Bülthoff, H. H. (1996). Face recognition under varying poses: The role of
texture and shape. Vision Research, 36, 1761-1771.
[64] Vanger, P., Hoenlinger, R., & Haken, H. (1995). Applications of synergetics in decoding
facial expressions of emotion. In M. Bischel (Ed.), Proceedings International Workshop
on Automatic Face and Gesture Recognition (pp. 24-29), Zürich, Switzerland.
[65] Vanger, P., Hoenlinger, R., & Haken, H. (1997). Anwendung der Synergetik bei der
Erkennung von Emotionen im Gesichtsausdruck. In G. Schiepek & W. Tschacher
(Eds.), Selbstorganisation in Psychologie und Psychiatrie (pp. 85-101). Vieweg Verlag.
[66] Waters, K., & Frisbee, J. (1995). A coordinated muscle model for speech animation.
Graphics Interface '95, 163-170, Ontario, Canada, May 19-21.
[67] Waters, K., & Terzopoulos, D. (1991). Modeling and Animating Faces using Scanned
Data. The Journal of Visualization and Computer Animation, 2, 123-128.
[68] Wang, Y. (1991). Automating Facial Gestures and Synthesized Speech in Human
Character Animation. Proceedings of the Third Annual Western Computer Graphics
Symposium, 39-40.
[69] Young, G., & Decarie, T. G. (1977). An ethology-based catalogue of facial/vocal behaviors
in infancy. Animal Behaviour, 25, 95-107.
[70] Zebrowitz, L. A., Olson, K., & Hoffman, K. (1993). Stability of babyfaceness and attractiveness across the life span. Journal of Personality and Social Psychology, 64,
453-466.