
Emotion Computing in Competitive Learning Environments
Sassine C. Abou-Jaoude
Claude Frasson
Computer Science Department
University of Montreal
C.P. 6128, Succ. Centre-Ville
Montreal, Quebec Canada
{jaoude, frasson}@iro.umontreal.ca
Abstract
Believability appears to be one of the most challenging features that could be added to the next generation of software agents. Agents that possess personalities, show emotions and interact with other agents are currently under study and development. In ITS, researchers tend to agree that believability will directly improve the learning context. In a typical learning-by-disturbing session, we have created an agent that possesses a set of primary emotions. We also created rules, heuristics and regulations according to which these emotions vary. We show that a simple computational model can work fairly well in evaluating emotions, regardless of how subjective these entities are.
Keywords
Emotional agents, believability, personality in software agents, artificial intelligence.
Introduction
Several research groups have focused their studies on believable agents in educational or entertaining environments. In most of the previous work, agents' behaviour was goal-driven, and the selection of a certain action by an agent is based on its personality, emotions and relationships with other agents [Rousseau & Hayes-Roth, 1997].
At Carnegie Mellon University, Bates et al. [Bates, 1994] have been working with agents that express emotions and possess certain personality traits, studying the role of emotions in ensuring believability.
Elliot [Elliot, 1997-a] started from the work of Ortony [Ortony, Clore & Collins, 1988] and has been developing a platform of emotions, known as the Affective Reasoner, that can be exploited in multimedia applications.
Another active research group, at Stanford University, Hayes-Roth et al. [Hayes-Roth & van Gent, 1996] are working on the development of agents known as synthetic actors, which are able to portray and improvise fictive characters in a virtual theater.
Rizzo et al. [Rizzo, Micelli & Cesta, 1996] concentrated their research on goal-oriented personalities in believable agents.
Linguistic modeling in believable agents is also an active area of research. Walker et al.'s work [Walker, Cahn & Whittaker, 1996] concentrates mainly on linguistic style improvisation.
Along the same lines, we have focused our research on achieving a basic computational model according to which emotions can be computed and evaluated instantly following certain interactive events that take place in an agent society. How to represent and express these emotions (facial expressions, language, voice, background music, image display, etc.) is the subject of another working group at our laboratory.
In Section 1 we introduce the operating concept of the system: the scenario of operation, the emotions and emotion couples that we compute, the range of values that these emotions can take and the nature (stable versus variable) of these emotions.
Section 2 presents the emotional agent: the behaviour of this agent, the factors that affect it (performance and level of disappointment), and the definition of the agent's character or personality.
Section 3 presents an idea of how these emotions may be computed.
Finally, we present some of the preliminary tests and results that were obtained.
1- General idea of operation
1.1- The Scenario
The course of events takes place in a competitive learning environment. Three main actors exist. The first is the tutor, whose main job is to ask different questions that are weighed between 1 and 3, a question of weight 3 being the most valuable. This weighing system simulates the fact that not all needs and objectives of an agent have the same priorities and values. The second is the troublemaker (it is important at this level to differentiate between the traditional troublemaker as defined by Aïmeur in [Aïmeur & Frasson, 1996] and the actor in our model: the first is a pedagogical agent whose objective is to favour learning under specific strategies, while the second, as introduced in this article, is merely an actor that checks the questions of the tutor, answers them himself and provides answers to the emotional agent). The third character is the emotional agent; this agent is supposed to answer the question in his turn. He has three options: either he answers correctly by himself, he answers wrongly by himself, or he takes the answer as proposed by the troublemaker, which in turn could be wrong or right (see Figure 1).
Figure 1: The session and the computational model of the emotion couples.
Answers are evaluated as +1 for a correct answer and –1 for a wrong one; these values will be used to compute the performances of both agents and the level of disappointment, as will be presented later.
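For illustration only, the scenario above can be summarised as a small data structure; the class and field names below are ours and are not part of the system described in the paper:

// Illustrative sketch: one step of the scenario. The tutor asks a question
// weighed between 1 and 3, the troublemaker answers it and proposes an answer
// to the emotional agent, and the agent either answers correctly or wrongly by
// himself or takes the proposed answer. Answers are encoded as in the paper:
// +1 for correct, -1 for wrong.
class QuestionEvent {
    enum AgentChoice { ANSWER_CORRECTLY, ANSWER_WRONGLY, TAKE_PROPOSED_ANSWER }

    final int questionValue;       // between 1 and 3, 3 being the most valuable
    final int troublemakerAnswer;  // +1 or -1: how the troublemaker answered the tutor
    final int proposedAnswer;      // +1 or -1: the answer proposed to the emotional agent
    final AgentChoice choice;      // the emotional agent's option

    QuestionEvent(int questionValue, int troublemakerAnswer,
                  int proposedAnswer, AgentChoice choice) {
        this.questionValue = questionValue;
        this.troublemakerAnswer = troublemakerAnswer;
        this.proposedAnswer = proposedAnswer;
        this.choice = choice;
    }

    // The value of the answer the agent finally gives, again +1 or -1.
    int agentAnswer() {
        switch (choice) {
            case ANSWER_CORRECTLY: return +1;
            case ANSWER_WRONGLY:   return -1;
            default:               return proposedAnswer;
        }
    }
}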
1.2- The emotion couples
The basic thing to start with was to define a set
of emotions that, at the same time, cover all
the emotional aspect of an agent and do not
overlap. For this, we considered the
classification of Elliot [Elliot, 1997-b] and we
acted upon it in order to create what we called
emotion couples. An emotion couple is a
couple of two emotions that falls in the same
category but are exclusive and contradictory.
For example, Joy/Distress is an emotional
couple. Both emotions come from the same
family (it is an appraisal of a situation or a
status) and they have contradictory exclusive
interpretation (when Joy is high this means
that Distress is low and vice versa, if joy = -0.3
then Distress = 0.3). Table 1 shows the
emotion couples that we have used.
In the exception of the Jealousy and Envy
couple all the defined couples are exclusive
and contradictory.
Joy/Distress
Happy-for/Resentment
Hope/Fear
Satisfaction/Disappointment
Liking/Disliking
Relief/Fears-confirmed
Love/Hate
Sorry-for/Gloating
Pride/Shame
Admiration/Reproach
Gratitude/Anger
Gratification/Remorse
Jealousy/Envy*
*Jealousy/Envy is the only emotion couple that is not contradictory
Table 1: The emotion couples
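For illustration, the couples of Table 1 could be represented in Java (the language of our implementation) as a simple enumeration; this sketch is ours and only mirrors the table:

// The thirteen emotion couples of Table 1.
enum EmotionCouple {
    JOY_DISTRESS,
    HAPPY_FOR_RESENTMENT,
    HOPE_FEAR,
    SATISFACTION_DISAPPOINTMENT,
    LIKING_DISLIKING,
    RELIEF_FEARS_CONFIRMED,
    LOVE_HATE,
    SORRY_FOR_GLOATING,
    PRIDE_SHAME,
    ADMIRATION_REPROACH,
    GRATITUDE_ANGER,
    GRATIFICATION_REMORSE,
    JEALOUSY_ENVY   // the only couple that is not contradictory
}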
1.3- Stable versus variable emotions
We further classified these couples into two
categories. Emotions that fluctuate little and
others that fluctuate often. The first, are known
by stable or character emotions. They are
usually associated with the personality of the
agent and will be used to define the initial
character of the emotional agent. As
mentioned these emotions do not fluctuate as
often as the others do. More technically, stable
emotions are emotions that are computed by
considering their previous value by a factor
that is greater to 0.5. As an example the
couple Pride/Shame is a character emotion. An
agent is either proud or not and this is slightly
affected by the outcome of answering a
question or a particular event. It is an aspect
associated more with the personality of the
agent.
The second type is the variable emotions. These are very flexible: they can jump from negative to positive values and vice versa upon the occurrence of a certain event. For example, the agent could be Sorry-for the troublemaker at a certain moment and Gloating a while later, depending on the behaviour of the troublemaker.
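To illustrate the distinction (the exact formulas are given in Sections 2 and 3), the stable/variable behaviour can be sketched as an update rule in which an inertia weight determines how much of the previous value is kept; the rule and the example weights below are an assumption of ours, not the paper's formulas:

class CoupleUpdate {
    // w is the share of the previous value that is kept. Stable (character)
    // couples use w > 0.5; variable couples use a smaller w. The example
    // weights in the comments below are hypothetical.
    static double update(double previous, double eventInfluence, double w) {
        double v = w * previous + (1.0 - w) * eventInfluence;
        return Math.max(-1.0, Math.min(1.0, v));   // couple values stay in [-1, +1] (Section 1.4)
    }
    // e.g. Pride/Shame (stable):         update(prideShame, event, 0.8)
    //      Sorry-for/Gloating (variable): update(sorryGloating, event, 0.3)
}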
1.4- Value of an emotion couple
Values of an emotion couple are decimals that
range between –1 and +1 at any time; Where
+1 is associated with the highest level of the
first emotion of the couple and –1 with the
highest level of the second emotion. For
example, if the agent is experiencing a +1
level of the Joy/Distress emotion couple this
means that he is joyful to a maximum. While a
–0.5 of Satisfaction/Disappointment means
that the agent is not satisfied at all, on the
contrary
he
is
experiencing
some
disappointment.
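This sign convention can be captured by a small, purely illustrative value object (the paper itself does not prescribe any data structure):

// Hypothetical value object for one emotion couple: a single number in [-1, +1];
// the first emotion of the couple is read directly, the second with the opposite sign.
final class CoupleValue {
    private final double value;

    CoupleValue(double value) {
        this.value = Math.max(-1.0, Math.min(1.0, value));  // clamp to [-1, +1]
    }

    double firstEmotionLevel()  { return value; }   // e.g. Joy = -0.3 ...
    double secondEmotionLevel() { return -value; }  // ... implies Distress = 0.3
}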
2- The emotional agent
The emotional aspect of the believable agent is represented by a set of values. At any given time, the emotional status of the agent can be known by consulting this sequence. The order of emotions in the set is the following:
Emotional agent = (Joy/Distress, Happy-for/Resentment, Sorry-for/Gloating, Hope/Fear, Satisfaction/Disappointment, Relief/Fears-confirmed, Pride/Shame, Admiration/Reproach, Liking/Disliking, Gratitude/Anger, Gratification/Remorse, Love/Hate, Jealousy/Envy)
As an example, we have defined a generally friendly, reasonably optimistic and moderately joyful agent as the agent who has the following values: Joy/Distress = 0.3; Jealousy/Envy = –0.3; Satisfaction/Disappointment = 0.3; Love/Hate = 0.3; Gratitude/Anger = 0.2; Liking/Disliking = 0.3; Admiration/Reproach = 0.3; Hope/Fear = 0.3. So, according to our representation, this agent can be represented by the set:
Agent (0.3, 0, 0, 0.3, 0.3, 0, 0.3, 0.3, 0.3, 0, 0, 0.3, -0.3).
In order to understand the use of this representation, imagine a software package that is able to present emotions in a user interface; this set would then make an ideal input.
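As a purely illustrative sketch, the example agent above could be stored as an array whose positions follow the order of the set:

class ExampleAgents {
    // One value in [-1, +1] per emotion couple, in the order of the set above.
    // The numbers reproduce the "generally friendly, reasonably optimistic and
    // moderately joyful" agent defined in the text.
    static final double[] FRIENDLY_AGENT = {
         0.3,  // Joy/Distress
         0.0,  // Happy-for/Resentment
         0.0,  // Sorry-for/Gloating
         0.3,  // Hope/Fear
         0.3,  // Satisfaction/Disappointment
         0.0,  // Relief/Fears-confirmed
         0.3,  // Pride/Shame
         0.3,  // Admiration/Reproach
         0.3,  // Liking/Disliking
         0.0,  // Gratitude/Anger
         0.0,  // Gratification/Remorse
         0.3,  // Love/Hate
        -0.3   // Jealousy/Envy
    };
}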
2.1- Performances
There are two performances that should be taken into consideration: the performance of the agent himself and the performance of the troublemaker as seen by the agent. The values of both performances vary between +1 and –1.
The performance of the agent is based solely on the emotional agent's consideration. At the beginning it is equal to 0, and it changes through the question session according to the answers given by the agent. The instantaneous value of the performance depends on its previous value, on the value of the last question answered and on whether it was answered correctly or wrongly. For the moment we set the performance according to the following formula:
Performance = (1 – value of question)/8 × previous performance + value of question/8 × value of answer
The performance of the troublemaker is computed as seen by the agent. It is not as objective as the first one. It depends on the last question answered by the troublemaker and its value, on the previous troublemaker performance (also as perceived by the agent) and on the Jealousy/Envy level of the agent: a jealous agent tends to overestimate the performance of the others. Furthermore, the factor between the previous performance and the last question is not 1/8 as in the case of the personal performance, because the agent tends to remember less the performance of the other and, at the same time, is very concerned with the way the other is answering now. Thus the definition of the troublemaker performance as seen by the agent is:
Performance of troublemaker = 7/8 × [(1 – value of question)/4 × previous performance of troublemaker + value of question/4 × value of answer] + 1/8 × Jealousy/Envy
2.3- The level of disappointment
Another factor that should be taken into consideration is the level of disappointment that the emotional agent expresses towards the troublemaker. It is the measure of how much the emotional agent is disappointed with the troublemaker. Reasonably, this factor fluctuates with the Satisfaction/Disappointment emotion couple. It varies between –1 and +1. It is a function of the answer given to the tutor by the troublemaker, the answer proposed by the troublemaker to the emotional agent, the value of the question and the choice of the emotional agent. For example, if in two different situations the troublemaker provided the agent with wrong answers, then whether the troublemaker answered the question correctly himself affects the level of disappointment: if the troublemaker answered wrongly, the level of disappointment is low, while if he answered correctly it will be high.
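A minimal sketch of the two performance updates of Section 2.1, assuming the bracketing of the troublemaker formula as reconstructed above; names and code organisation are ours (the paper states that both performances stay between –1 and +1):

class Performances {
    // Performance of the emotional agent (formula of Section 2.1).
    // questionValue is in {1, 2, 3}; answerValue is +1 (correct) or -1 (wrong).
    static double agent(double previous, int questionValue, int answerValue) {
        return (1 - questionValue) / 8.0 * previous + questionValue / 8.0 * answerValue;
    }

    // Performance of the troublemaker as perceived by the agent: same shape with
    // a 1/4 factor, blended 7/8 : 1/8 with the agent's Jealousy/Envy level.
    static double troublemaker(double previous, int questionValue,
                               int answerValue, double jealousyEnvy) {
        double base = (1 - questionValue) / 4.0 * previous
                    + questionValue / 4.0 * answerValue;
        return 7.0 / 8.0 * base + 1.0 / 8.0 * jealousyEnvy;
    }
}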
2.4- Character and personality traits
As mentioned earlier, the character and personality are not defined in an explicit way. Instead, we use the stable emotion couples to define the character of the emotional agent (for example, the generally friendly, reasonably optimistic and moderately joyful agent that we have already defined). Joy/Distress, Pride/Shame and Love/Hate are some of the emotion couples that might help set a personality type.
3- Rules, heuristics and formulas
The most important part of our work is to create the basis upon which emotions vary and fluctuate. To do so, we have defined a set of rules, heuristics and formulas. We started with the basic idea that all emotions are interrelated and that every emotion should enter into the computation formulas of the others. For example, to calculate the Sorry-for/Gloating emotion that the emotional agent has towards the troublemaker, the following should be considered:
Joy/Distress: the more joyful the agent, the less likely he is to gloat over the other agent.
Love/Hate: the more the agent loves the troublemaker, the less likely he is to gloat.
Pride/Shame: proud characters tend to gloat less.
Admiration/Reproach: the more admiration, the less gloating.
Jealousy/Envy: the more jealous, the more gloating.
Performances: the performances of both agents, as defined earlier, are also a factor.
The level of disappointment: also as defined earlier; the more disappointed the agent is, the more he tends to gloat.
All of this should be included when we need to calculate the Sorry-for/Gloating emotion. The question is how much weight should be given to each factor, and the answer is really a matter of choice. We have created a very flexible system whereby the weight of each constituent of every emotion can be changed. The only constraint is that the weights must sum to +1.
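As a purely illustrative sketch (the actual weights are configurable in the system and are not listed in this paper), the Sorry-for/Gloating computation could look like the following weighted sum, where the magnitudes of the placeholder weights sum to 1:

class EmotionRules {
    // Hypothetical weighted combination for the Sorry-for/Gloating couple.
    // Positive results lean towards Sorry-for, negative towards Gloating, so
    // factors that favour gloating enter with a negative sign. All weights and
    // signs are placeholders, not values from the paper.
    static double sorryForGloating(double joyDistress, double loveHate,
                                   double prideShame, double admirationReproach,
                                   double jealousyEnvy, double agentPerformance,
                                   double troublemakerPerformance,
                                   double disappointment) {
        double v = 0.15 * joyDistress
                 + 0.15 * loveHate
                 + 0.15 * prideShame
                 + 0.15 * admirationReproach
                 - 0.15 * jealousyEnvy              // more jealousy -> more gloating
                 + 0.05 * agentPerformance
                 + 0.05 * troublemakerPerformance
                 - 0.15 * disappointment;           // more disappointment -> more gloating
        return Math.max(-1.0, Math.min(1.0, v));    // keep the couple value in [-1, +1]
    }
}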
4- Test and results
Implementation of the system was done in
java 1.2, and we profited from the portability
of Java to run it on different platform. The
results are presented in data files and then
viewed by the Microsoft excel.
We conducted two types of tests, in both cases starting by setting the initial state of the agent to the level shown in Figure 2 (Graphe 1). The first test is the rationality test of the system: we wanted to see whether the system is reasonable and consistent. We conducted a number of sessions in which the emotional agent answered all the questions correctly (Figure 2, Graphe 2) and another number in which he answered everything wrongly (Figure 2, Graphe 3).
Figure 2: Results of the preliminary rationality test
The results that we obtained showed that the system basically works. The values of the emotion couples fluctuated in proportion to the performance of the actors, and the important point is that these values tend to become positive following good performances and negative following bad performances.
The other type of test was to run a learning session, answer all the questions randomly (we set the number of questions to 9) and observe the fluctuations of the different emotions upon answering. The results of this test are shown in Figure 3.
In this test we displayed three emotion couples: Joy/Distress, Admiration/Reproach and Love/Hate. We can see in (Figure 3, Graphe 1) that these emotions tend to increase when the agent is performing well, except for the Love/Hate couple, which is based to a great extent on the answers proposed by the troublemaker and not on the agent's own performance.
[Figure 3: Results for the instantaneous test. Graphe 1 ("Emotions fluctuation") plots the Love/Hate, Joy/Distress and Admiration/Reproach couples against the question number; Graphe 2 ("Answers - Joy/Distress") plots the answer of the troublemaker, the answer proposed to the agent and the Joy/Distress couple against the question number. The underlying table lists, for each of the 9 questions, the value of the question, the answer of the troublemaker, the answer proposed to the agent, the performance of the agent, the performance of the troublemaker and the Love/Hate, Joy/Distress and Admiration/Reproach values.]
Conclusion
Following the early submission of this report,
a lot of testing and correction has been made.
The results we were able to obtain were
encouraging, we were able to detect the
behaviour of an emotional agent in different
situations (mainly positive performances and
negative one).
The context in which the three actors operate is also very promising. Within it, it is possible to represent the needs of the agent and the values of these needs, the positive and negative attitudes that other agents might have towards him, and the competitive and cooperative aspects that are usually found in classrooms.
We are particularly interested in the behaviour of the stable emotions, from which we could move on to defining personality traits and general behaviours.
Another point that we are working on is the introduction of an uncertainty factor into the computational model. This factor will imitate the uncertainty and the level of unpredictability found in humans.
Finally, integration of this system into existing ITS models at the Heron Laboratory will take place.
References
[Aïmeur & Frasson, 1996] Analysing a New Learning Strategy According to Different Knowledge Levels. In Computers & Education, Vol. 27, No. 2, pp. 115-127, 1996.
[Bates, 1994] The Role of Emotion in Believable Agents. Technical Report CMU-CS-94-136, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
[Elliot, 1997-a] I Picked Up Catapia and Other Stories: A Multimedia Approach to Expressivity for "Emotionally Intelligent" Agents. In Proceedings of the First International Conference on Autonomous Agents, Marina del Rey, 451-457.
[Elliot, 1997-b] Affective Reasoner Personality Models for Automated Tutoring Systems. In Proceedings of the 8th World Conference on Artificial Intelligence in Education, AI-ED 97, Kobe, Japan.
[Hayes-Roth & van Gent, 1996] Story-Making
with Improvisational Puppets and Actors.
Technical Report KSL-96-05, Knowledge
Systems Laboratory, Stanford University.
[Ortony, Clore & Collins, 1988] The
Cognitive Structure of Emotions. Cambridge
University Press.
[Rizzo, Micelli & Cesta, 1996] Preliminaries
to a goal-based Model of Personality for
Believable agents. Technical Report IP-CNR,
National Research Council of Italy.
[Rousseau & Hayes-Roth, 1997] A Social-Psychological Model for Synthetic Actors. Technical Report KSL-97-07, Knowledge Systems Laboratory, Stanford University.
[Walker, Cahn & Whittaker, 1996] Linguistic
Style Improvisation for Lifelike Computer
Characters. Papers from the 1996 AAAI
Workshop on Entertainment and AI/A-Life,
Technical Report WS-96-03, Portland, 61-68.