Application to join the Network of Excellence
European Union IST project contract no. 507422 http://emotion-research.net
1. Name of legal entity
GET (Groupe des Ecoles de Télécommunications)
École Nationale Supérieure des Télécommunications (ENST) - Télécom Paris
46 rue Barrault
F-75634 Paris Cedex 13
Phone: +33 1 45 81 77 77
Fax: +33 1 45 89 79 06
2. Names of researchers
Mutsuko TOMOKIYO
Mutsuko TOMOKIYO has been an invited researcher at GET-ENST in Paris since 2004 and at GETA-CLIPS in Grenoble since 1996.
She obtained her Ph.D. in linguistics from Paris 7 - Denis Diderot University in 2000.
Her current research topics are in natural language processing, namely linguistic formalisms for dialogue, dialogue analysis and corpus annotation in the domains of speech-to-speech machine translation (SSMT) and man-machine interactive systems.
In 2004 she published two papers on a semantic representation of emotions, and in 2001 a book on dialogue analysis in French, English and Japanese from the viewpoint of speech acts in utterances.
She is working on several international projects at ENST and GETA.
3. Publications of Mutsuko TOMOKIYO
Tomokiyo M. and Hollard S., A semantic representation of emotions based on a dialogue corpus analysis, International Seminar Papillon, Grenoble, 2004.
Tomokiyo M. and Chollet G., VoiceUNL: a proposal to represent speech control mechanisms within the Universal Networking Digital Language, International Conference on the Convergence of Knowledge, Culture, Language and Information Technologies, Egypt, 2003.
Tomokiyo M., Analyse discursive de dialogues oraux en français, japonais et anglais, Septentrion, Lille, 2001.
Seligman M., Tomokiyo M. and Fais L., A Bilingual Set of Communicative Acts Labels for Spontaneous Dialogues, ATR Report TR-IT-161, Japan, 1996.
4. Your research group and HUMAINE
Emotion representation in VoiceUNL: VoiceUNL is a semantic representation of emotions within the Universal Networking Language (UNL) 1. UNL is a language oriented towards written texts and, despite its robust multilingual orientation, is not currently suited to the emerging technology of speech-to-speech machine translation (SSMT).
We have explored the possibility of extending UNL "Attribute" tags to suit the SSMT task.
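As a minimal illustration of this idea (the attribute name @surprise is hypothetical here; the actual VoiceUNL tag set is defined in our 2003 paper), an utterance such as "He came!", spoken with surprise, could carry an emotion attribute alongside the standard UNL attributes on the entry word:

    agt(come(icl>move).@entry.@past.@exclamation.@surprise, he(icl>person))

@entry, @past and @exclamation are standard UNL attributes; @surprise stands for the kind of additional attribute needed to carry emotional information through an SSMT system.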
We began by investigating a dialogue corpus that we are developing for the purpose of studying emotion classes and emotion-eliciting factors.
This matters because "emotion entails distinctive ways of perceiving and assessing situations and processing information" (HUMAINE, 04), and in SSMT the detection of emotions in the source language and their generation in the target language are an important issue for the naturalness of dialogues.
Our corpus contains 30 minutes of English instruction programmes on TV (in English and French), a 40-minute TV interview (in French), 5½ hours of real-life vocal messages left on a telephone answering machine in a public hospital (in French), and 6 basic conversations on transport for tourists (in French, English, Chinese and Japanese).
Emotions are expressed through lexicon, phatics, prosody, voice and gestures, and the emotion classes we have identified in our corpus are happiness, sadness, disgust, surprise, fear, anger, irritation, hesitation, uncertainty and neutral 2.
1 http://www.undl.org/unlsys/unl/UNL%20Specifications.htm
2 The classification of emotions is still open to discussion. In OCC [9], the class and intensity of emotions are analyzed from a cognitive point of view: basic emotions, emotions as reactions to events, emotions as reactions to objects, and emotions as reactions to agents.
An emotion is expressed in a complex way. For example, surprise can be conveyed by the phatic "No!"; at the same time, in our TV corpus, the speaker made a grimace while saying "No!", so surprise is also expressed by the movement of the eyebrows and by voice tone.
The lexicon is a principal marker of emotion expression; however, emotions are expressed not only by lexical items with potentially emotional content, but also by words that are semantically less significant yet heavily accented. Therefore, for the identification of emotions in dialogues, all emotion-eliciting factors are taken into account in the emotion representation.
We propose a semantic representation of emotions for dialogues, using a set of emotion tags. In multilingual SSMT, the lexicon, phatics and gestures of the source language should each be translated into their counterparts in the target language, whereas prosody is recognized at the monolingual level and this information is then transferred into the target language.
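By way of a sketch (the tag names below are illustrative, not the published VoiceUNL inventory), the surprised "No!" from our TV corpus could be annotated so that the phatic content, the emotion class and the accompanying non-verbal cues are kept separate:

    no.@entry.@exclamation.@emphasis.@surprise.@eyebrow_raise

The phatic itself would be translated into the target language, while attributes such as @surprise, @emphasis and @eyebrow_raise would be transferred unchanged and exploited during generation.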
The translation of gestures is one of the interesting topics for further research because, as Randall notes (see below), emotional vocabularies combine a universal element with components peculiar to the beliefs and values of each culture.
Plutchik believes that emotions are like colours: every colour of the spectrum can be produced by mixing the primary colours. His "wheel of emotions" consists of eight primary emotions: fear, surprise, sadness, disgust, anger, anticipation, joy and acceptance.
Ekman observed that a second emotion often becomes linked with an initial emotion, and that emotions rarely occur singly or in pure form.
Randall states that most cultures have emotions and emotional vocabularies that have two components: a universal element, and a component or parameter that is peculiar to the beliefs and values of that culture.
Morita classified Japanese emotional words into 40 categories, covering negative or positive judgments, sensations of colour and sound, psychological reactions, etc.
The emotion study group at the Southern Kings Consolidated School classifies emotions as follows: thankfulness, envy, disgust, worry, kindheartedness, stress, boredom, sadness, loneliness, bravery, paranoia, optimism, stubbornness, fear and anxiety.