

NOTE: The authors of this paper would like it to be considered for the Special Issue of Artificial Intelligence Communications.

A Fuzzy Internal Model for Intelligent Avatars

Ricardo Imbert, Angélica de Antonio, Javier Segovia, María-Isabel Sánchez-Segura

Virtual Environments Research Group

Facultad de Informática, Universidad Politécnica de Madrid

Campus de Montegancedo, 28660 Boadilla del Monte, Spain

E-mail: {ricardo, maribel}@gordini.ls.fi.upm.es, {angelica, fsegovia}@fi.upm.es

Abstract

The lack of believability of today's virtual characters may lead the users of Distributed Virtual Environments to stop using them once the initial attraction of the novelty has worn off. The successful use of Distributed Virtual Environments in the future for complex tasks, beyond simple chatting systems, may depend on our ability to provide the user with powerful virtual representations (avatars), which can be used not just as presence indicators, but also as tools to communicate with other users and establish interesting interactions. On the other hand, a very complex avatar, with many features and controls, can make it very difficult and tedious for the user to fully exploit the avatar's possibilities. This paper presents a proposal for an internal model of avatars that allows the user to delegate some actions to an underlying intelligent agent. The main component of the model, the psychological model, is explained in detail. The use of the model is based on a fuzzy valuation and a definition of fuzzy relationships among the parameters. A dynamic updating process is also defined to maintain the internal state of the avatar. We have applied the model to an interesting game environment, where some of its most powerful characteristics are successfully exploited.

KEYWORDS: Intelligent Distributed Virtual Environments, believability, avatars, internal psychological model, intelligent agents, fuzzy logic

1. Introduction: Usability of Believable Avatars with Complex Behaviors

Few doubt that Distributed Virtual Environments (DVEs) are tools with a promising future: their range of applications is huge (from education to commerce), and the possibility of interacting with other distant people in real time is very attractive for users. However, many problems must still be solved if we want DVEs to be used for tasks more complex than just chatting for a while. Our research stems from the belief that the three key research problems in the future of DVEs are related to:

 realism and believability: is it more important to be realistic or to be believable?

 interaction among users: how can we provide users with the ability to communicate and to have rich interactions with other distant users?

 usability of the DVEs: is it easy for a user to exploit even the most complex functionalities of the system?

In our research we seek DVEs in which users can easily perform interesting tasks, cooperating and interacting with others in a believable way. The integration of avatars and agents (both autonomous and semi-autonomous) within DVEs will help us improve the believability and usability of DVEs.

Having avatars as user representations in a DVE stems from the need for an identity that every user of a DVE feels when she enters the environment. The implementation of an identity has evolved over the years from a simple name in the most primitive text-based MUDs to complex three-dimensional human-like models.

Visual representations provide several advantages, such as a stronger identification between the user and her representation, and an easier way to communicate information about the configuration of the environment, the users populating it, and their activities. In this evolution it is easy to perceive the quest for a more realistic representation, and great effort has been invested in computer graphics research trying to build avatars with a more realistic appearance, natural walking, clothing, grasping of objects, or lip movement synchronized with speech.

The high degree of realism reached in some experimental research environments, which on the other hand requires extensive hardware resources, has not been offered to the users of commercial virtual environments. When a participant joins one of those DVEs, she often has no feeling of reality. Providers attribute this to the limitations of hardware resources on the client side and the low transmission bandwidth offered by the Internet. We believe that other reasons cause this lack of believability, such as the static representation of avatars, the inability to communicate any kind of emotion, or an interaction based exclusively on limited channels such as reading and writing. The user then easily loses interest, and the aim of the DVE fails. The ALIVE project (Maes, 1995) also points out that how fancy the graphics are may be less important than how meaningful the interactions in which the user engages can be.

The concept of "avatar" has been understood to date as a simple disguise or external representation, but we think this concept needs to be enriched. A realistic appearance is not enough to provide believability. It is necessary to complement the appearance of an avatar with an intelligent behavior which gives the feeling of life, allowing avatars, as representations of real humans, to interact in the same way as humans interact in real life. This is the way in which a user will feel that her avatar is alive and represents herself.

The Oz project (Bates et al., 1992) may have been the first real project on believable agents in interactive environments. Bates introduced the concept of believable agents, meaning agents for which a viewer or user can suspend his or her disbelief (Loyall and Bates, 1997).

The Oz project simulates a small world whose inhabitants, spherical avatars named Woggles, are built through a goal-directed, behavior-based architecture for action. This architecture is coupled to a distinct component for generating, representing, and expressing emotion, based on the Cognitive Theory of Emotions of Ortony (Ortony et al., 1988). By and large, they claim that personality and emotions are the most important aspects of believability to add to social behaviors (Reilly and Bates, 1995) (Reilly, 1997).

However, the richer the concept of avatar, the more the user has to control. While a user is maintaining a conversation, playing or arguing with other participants, it can become very tedious to pay attention to, for instance, making the adequate gesture or sound. This is good for increasing believability, but it makes the management of the avatar more intricate, so the user may end up forgetting to use the utilities provided for that purpose. Experience shows that if we excessively overload the user-virtual world interface, the user will discard all the incidental options and will use only very few functions. That means we end up with a very rich system put to very poor use (which is, in practice, the same as a poor system).

The solution that we propose is to automate some functions, trying to make the avatar behave as the user would. If we want DVEs to be used effectively in the future, the user must be freed from the compulsory management of every procedure. She must be able to decide which actions she wants to manage herself, and which others, because they make the use of the virtual world tedious or complex, should be delegated.

Therefore, the avatars that we build can act semi-autonomously, thanks to a personal agent attached to the avatar and controlled by the user. The delegation of control implies the use of some AI techniques, in order to build intelligent agents that are a loyal representation of the way in which the user would interact. From the union of AI and DVEs arises the concept of Intelligent Distributed Virtual Environments (IDVEs).

There are many ways in which an avatar can interact with others, so we are interested in grading the interaction ability of the avatar. The user can choose to take absolute control over the actions of her avatar, or she can operate over only some of its features, delegating the management of the rest to the agent. We understand the management of avatars as spanning a range of control degrees, as represented in Figure 1.

Figure 1. Interaction control range

For instance, if a participant is very nice and wants to greet every avatar she meets, she has the possibility of telling her avatar, every time, the detailed way of performing that greeting (e.g. raising its eyebrows, waving a hand and smiling). This means that the user has a high degree of control over the avatar (which is quite inconvenient in this case). Alternatively, the participant could merely indicate her intention to greet, and the intelligent agent would show the avatar how to do it, according to the user model; this corresponds to an intermediate degree of interaction control. Finally, the user could delegate the action of greeting to her personalized intelligent agent, which would decide how to greet whenever it meets someone its master would greet. This is what happens when the user has the lowest degree of control over her avatar's interactions.

We opt for the intermediate solution, where many gestures, motions, etc., which are very expressive and of course needed for rich interaction and believability, can be generated automatically, while others remain dependent on the user. That will provide avatars with what Joseph Bates calls the illusion of life (Bates, 1994).

Some previous works have also dealt in some way with the partial autonomy of avatars in an interactive environment.

One of the most interesting proposals is that of The CyberCafe, described by Rousseau and Hayes-Roth (Rousseau and Hayes-Roth, 1997). They introduce the concept of synthetic actors. A synthetic actor may be autonomous or a user's avatar. An autonomous actor receives directions from the scenario and other actors, and decides on its own behavior on the virtual stage with respect to those directions (Hayes-Roth et al., 1995). An avatar is largely directed by a user who selects actions to perform, although it also receives directions from the scenario and from the other actors. In fact, the user chooses the actions to be performed by the avatar, but the way to carry them out is chosen by the avatar. These actors are able to improvise their behavior in an interactive environment, and they own a repertoire of actions that are automatically planned to achieve each goal.

Another interesting system is Bodychat (Vilhjálmsson, 1997), which tries to partly automate the communicative behavior of avatars. Bodychat proposes an avatar as a partially autonomous entity, providing automated facial expression and gaze that depend on the user's current intentions, the current state and location of other avatars, its own previous state, and some random tuning to create diversity.

Once we have decided that partly automating the behavior of an avatar is a good idea, the problem is that if the user decides to delegate some functions to her personal agent, she will expect the behavior exhibited by the avatar to be similar to what her own behavior would be in the same situation. She will also expect her avatar to behave in a consistent way. Moreover, she will expect her avatar to behave differently from the other avatars that populate the virtual world. To achieve this, the behavior of our avatars will be conditioned by an internal model of the user. The intelligent agent will also need a decision mechanism that allows it to select the most appropriate action in every situation.

This paper describes the main components of the internal model and the way in which it is valued and dynamically updated. The proposed fuzzy quantification and updating of the internal model parameters show how techniques from Artificial Intelligence can enhance DVEs as we know them today.

Afterwards, the application of the model to a DVE for playing games is discussed and some experimental results are presented.

2. Architecture of an Avatar

We have taken as a starting point some of the most remarkable ideas of previous works. A good approximation to the architecture of an avatar is that of The CyberCafe (Rousseau and Hayes-Roth, 1997). According to this architecture, a participant has a mind and a body. While all the knowledge of the virtual world and the internal state of the avatar lies in the mind, the body is the interface between the avatar and the virtual world. Interaction with the user is also necessary, so that she can set the goals to be pursued by the avatar. The selected architecture is shown in Figure 2.

Figure 2. Architecture of an intelligent avatar.

Within this architecture, our aim will be the description of the avatar's mind. The mind will control the actions to be performed by the avatar's body in the virtual world. We have classified the actions that an avatar can perform (and that therefore could be automated) into two categories:

 Expressions

 Tasks

Expressions, in turn, can be classified into verbal and non-verbal expressions, while tasks can be classified into reflex tasks and conscious tasks.

There are several reflex tasks that contribute to increasing the avatar's appearance of life, such as breathing, blinking and having a dynamic gaze (Vilhjálmsson, 1997). The user must be freed from the management of all these tasks. Given that they are closely related to the personality and mood of the avatar (e.g. the breathing rhythm increases when someone is very nervous, and slows when she calms down), a proactive attitude is required to manage them.

On the other hand, when a conscious action has to be automated, it must be performed as the user would do it, according to her personality. She may want to move from one place to another or to greet another avatar, but the way to perform those tasks depends on the current mood of the user, her personality traits, attitudes, etc. Again, a personalized and proactive control must be provided.

Finally, the avatar must always show an external expression coherent with its internal psychological model. The management of this external expression can be automated, learning from the user's behavior to provide the right appearance at each moment. Thus, if a nasty avatar greets another one, even if it is very happy, its smile, for instance, will not be as broad as that of a nice avatar.

In all of these situations, when an avatar has to select an action to perform, the decision must be made according to several factors, as shown in Figure 3.

Figure 3. Influential factors in an avatar’s action selection

 The internal state of the participant, where the psychological model of the character is defined and initialized.

 The representation of the virtual world, defined in terms of other avatars and objects.

 The current state of the scenario, with a suitable representation of what is happening around the avatar (for instance, the state of a game it is playing).

 The user's explicit commands and goals. Commands are actions that should be performed by the avatar as soon as possible (e.g. open the door), while goals are the way in which the user tells her avatar her main objectives within the VE, so that the avatar can adapt its behavior to maximize the probability of reaching those goals (e.g. make friends).

This simple action selection framework could be integrated in the future within a more complex architecture for the mind, like the one proposed by Sloman in the Cognition and Affect project (Sloman and Logan, 1998). He claims that the normal adult human architecture involves three main layers, each supporting different sorts of mental concepts: the first layer, which is also the oldest in evolutionary terms, is entirely reactive; the second layer is deliberative; finally, the third is a reflective layer.

Sloman conjectures that human mental concepts (e.g. belief, desire, intention, experience, mood, emotion, etc.) are grounded in implicit assumptions about the underlying information processing architecture.

In the following sections we focus on just one of the factors that influence the behavior of an avatar: the Internal State of the Participant.

3. Internal State of the Participant

Within the internal state of the participant, we distinguish a psychological model, the intentions entrusted to the avatar, and the past history. The latter two are of great importance when a high degree of autonomy is given to the avatar. The past history will be useful to maintain a coherent behavior and to provide some memory on which to base intelligent decisions and actions; however, we have not yet included this aspect in our model.

3.1 The Psychological Model

The psychological model is the component that makes it possible to obtain avatars that are social individuals with personality and emotions. The body language of an avatar, and the way in which it carries out every action, will be largely influenced by its personality. Some other authors have already proposed the use of personality models for autonomous agents, but very few have dealt with this aspect in avatars, that is, characters that receive directions from a user and whose decisions are always conditioned by the user's wishes. One of the most remarkable systems that allows avatars to have a personality model is The CyberCafe (Rousseau and Hayes-Roth, 1997). In fact, our decomposition of the psychological model is inspired by their model, with some additions and modifications, and with substantial differences in the valuation and updating mechanisms that will be explained in the next sections.

Personality traits, moods and attitudes are the components of this model. Thus, the main elements to be managed in the internal state are the following:

Personality traits, which mark out the general lines of every participant's behavior. Personality traits will hardly ever change over time, and if they do, it will be very slowly. Personality is the set of psychological traits that distinguish an individual from all others and characterize his or her behavior (Hayes-Roth et al., 1997). With this definition, it is obvious that the number of personality traits may be unlimited. Although multiple sets of personality traits have been defined, depending on the authors and the theories, we have selected the following ones, taking into account the context in which we are developing our model (an environment for interaction and games): expressiveness, friendliness, kindness, courage, and ability for simulation.

Moods, which show the emotional state of a participant at a given moment. Moods are usually very variable over time, though a participant should have a prevailing value for every mood. In (Rousseau and Hayes-Roth, 1997), moods are divided into two different categories: self-oriented moods, such as happiness or boredom, which are not directed towards other individuals; and agent-oriented moods, like anger or reproach, directed towards another individual. We have come to the conclusion that agent-oriented moods are too similar to attitudes (see next point) and have more to do with the cause of a mood than with the mood itself. For instance, an avatar can be angry because of the action of another participant, or because it has been losing a game for a long time. Therefore, we are not going to separate these two classes of moods. Instead, we differentiate between mutually-exclusive moods and non-exclusive moods.

Mutually-exclusive moods: This set includes the six emotions that most authors propose as the basic ones (Ekman and Friesen, 1978) – happiness, sadness, disgust, anger, surprise, fear – and their main feature is that a person can only exhibit one of them at a time.

Non-exclusive moods: These are moods that can appear at the same time as another mood, complementing and qualifying it. This set is not as restrictive as the previous one. We have identified two moods that are interesting for our purposes – attention, excitement – but others could be added to this set.

On the other hand, like Binsted (Binsted, 1998), we have decided not to use dual moods (e.g. happiness-sadness), since with dual moods we have found that it is possible to reach inconsistent sets of mood values. However, as Binsted and Elliott (Elliott, 1997) propose, moods degrade over time, so that they naturally return to their character-specific default values. We have made the default value depend on the personality traits of the character.

Attitudes, which determine the behavior of a participant in her relationship with another participant. In real life, as one interacts with another person, one gets to know more and more about her personality. People have, in fact, something like a register for each individual they meet, so that their attitude towards other people can always be coherent. As the attitude of a user can differ from one participant to another, we will have several attitude values, one for every participant. Attitudes can be very variable over time or quite stable, depending on the values of other personality traits and moods. Attitudes modify the avatar's mood merely through the presence of another individual. For example, if A likes B, and A has a low degree of happiness, this value may be increased to high.

Furthermore, the avatar's behavior might not be the same if it is alone, with only one other avatar, or with more than one participant. A solution is needed to present a coherent attitude towards a whole group. One way of solving this, as (Rousseau and Hayes-Roth, 1997) proposes, is to calculate the average value of the attitudes towards every person in the group, or to take the highest or lowest value.

The attitudes we have identified for a participant are: to like, to trust, to be afraid of. Others could easily be added to this set.
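To make this structure concrete, the following sketch (in Python, purely illustrative; the actual engine is implemented in C, see Section 4) shows one possible data layout for the psychological model, including the group-attitude averaging just mentioned. All identifiers are ours, not a prescribed interface.

from dataclasses import dataclass, field

LABELS = ["null", "low", "half", "high", "full"]  # fuzzy value labels (Section 3.3)

@dataclass
class PsychologicalModel:
    # Personality traits: hardly ever change over time.
    traits: dict = field(default_factory=lambda: {
        "expressiveness": "half", "friendliness": "half", "kindness": "half",
        "courage": "half", "ability_for_simulation": "half"})
    # Mutually-exclusive moods: only one can be exhibited at a time.
    exclusive_moods: dict = field(default_factory=lambda: dict.fromkeys(
        ("happiness", "sadness", "disgust", "anger", "surprise", "fear"), "null"))
    # Non-exclusive moods: can qualify whichever mood is exhibited.
    non_exclusive_moods: dict = field(default_factory=lambda: {
        "attention": "half", "excitement": "half"})
    # Attitudes: one register per participant met so far.
    attitudes: dict = field(default_factory=dict)  # participant -> {"to_like": label, ...}

    def attitude_towards_group(self, participants, attitude="to_like"):
        """Coherent attitude towards a group: the average, as (Rousseau and
        Hayes-Roth, 1997) proposes; taking the max or min would also be valid."""
        indices = [LABELS.index(self.attitudes[p][attitude]) for p in participants]
        return LABELS[round(sum(indices) / len(indices))]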

3.2 Intentions

Intentions express the goals entrusted by the user to her avatar. Intentions are set at very specific moments, and are more dependent on the user's directions than personality traits, moods or attitudes are.

The Bodychat system (Vilhjálmsson, 1997) introduces the concept of intention. Intentions are described as a "set of control parameters that are sent from the user's Client to all Clients, where they are used to produce the appropriate behavior in the user's Shadow avatars".

Vilhjálmsson notes that the user can show her current intentions through the behavior of her avatar. Still, as his system supports interaction in a 3D virtual world only through conversation, the intentions it reflects are strongly limited to a chatting system. He describes three possible intentions for the users: 'to indicate to another avatar her availability for a conversation'; 'to indicate who is a potential conversational partner for the user'; and 'to indicate the user's willingness to break away from a conversation'. This set of intentions can be enough if only a conversation-oriented virtual world is expected. However, it must be extended and completed if a higher level of interaction is sought. In (Rousseau and Hayes-Roth, 1997), only a psychological model is considered; they do not take into account the intentions of the avatar or the user (depending on the avatar's autonomy degree).

We have identified some classes of intentions that are useful for a gaming environment. The following are just some examples:

 Willingness:

– Of beginning a conversation: indicating whom the user is interested in having a conversation with.

– Of making someone believe something: when the user is interested in simulating a different mood or attitude towards another character.

 Availability:

– For a conversation: indicates whether the user wants to welcome other people who show interest in having a conversation.

– For paying attention to performances in the VE.

No other known project includes the possibility of simulating a mood different from the real mood of the individual. This consideration can be essential in game environments, to try to deceive the adversaries. For instance, a player with a very good set of cards in her hand may want to make her rivals believe that she has been unlucky, showing a sad mood. Thus, she will have a real mood, e.g. a high degree of happiness, and a simulated mood, e.g. a low degree of happiness.

3.3 Valuation of Internal State Parameters

An important decision to be made is how to express the value of each of these characteristics. In the model of The CyberCafe (Rousseau and Hayes-Roth, 1997), the quantification of a trait is numerical, with each value being an integer in the interval [-10,10]. This facilitates the task of establishing a correlation between different traits, since arithmetical operators can be used to obtain the value of one trait from another. For instance, they determine that the friendliness degree is (0.7 * degree of sympathy) + (0.3 * anger). Other systems opt for similar approaches (Binsted, 1998) (Elliott, 1997). However, it can be very difficult and artificial to set a proper correlation value (why 0.7 instead of 0.6?). Besides, if the user wants to change the current value of a trait, it can be very unnatural to say that 5 is the new degree of friendliness.

Our proposal is to use something more flexible and expressive, such as fuzzy logic. It will be easier for a user to say that an avatar has a high degree of happiness. Thus, instead of integer values, we will set the value of our traits in terms of fuzzy concepts, as Zadeh proposed in (Zadeh, 1983). The traits will take certainty values for each of these labels: <null, low, half, high, full>. For instance, as we will see below, for the personality trait friendliness, we can have a full degree of friendliness, equivalent to a usually open and extroverted individual, a half degree of friendliness, or a null degree of friendliness, corresponding to a nasty and introverted participant.

A qualitative declaration of the degree of a characteristic is formulated in a fuzzy way in relation to its numeric value in the [0,1] interval. (Bonissone, 1987) proposes that the fuzzy probability distribution can be associated with a trapezoidal distribution, as shown in Figure 4. This distribution can be described by <a, b, c, d>.

Figure 4. Trapezoidal distribution for fuzzy declarations

According to this, the fuzzy labels identified will be modeled with the following trapezoidal distributions:

null  <0, 0, 0.05, 0.15>
low   <0, 0.15, 0.25, 0.4>
half  <0.3, 0.45, 0.55, 0.7>
high  <0.6, 0.75, 0.85, 1>
full  <0.85, 0.95, 1, 1>

This is graphically expressed in Figure 5.

Figure 5. Trapezoidal distribution for the fuzzy labels

Personality traits, moods and attitudes will be valued with these fuzzy labels. Intentions, however, will only take Boolean values.
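As an illustration, the sketch below (ours, not the authors' C implementation) encodes the five trapezoidal distributions listed above and computes the membership degree of a numeric value in [0,1] for a given label, using the standard trapezoid shape the paper adopts from Bonissone.

LABELS = {
    "null": (0.00, 0.00, 0.05, 0.15),
    "low":  (0.00, 0.15, 0.25, 0.40),
    "half": (0.30, 0.45, 0.55, 0.70),
    "high": (0.60, 0.75, 0.85, 1.00),
    "full": (0.85, 0.95, 1.00, 1.00),
}

def membership(x: float, label: str) -> float:
    """Degree in [0, 1] to which the numeric value x fits the fuzzy label."""
    a, b, c, d = LABELS[label]
    if x < a or x > d:
        return 0.0
    if b <= x <= c:              # plateau of the trapezoid
        return 1.0
    if x < b:                    # rising edge between a and b
        return (x - a) / (b - a)
    return (d - x) / (d - c)     # falling edge between c and d

# e.g. membership(0.8, "high") == 1.0, membership(0.65, "high") ~= 0.33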

3.4 Correlation among Personality Parameters

Traits, moods and attitudes are not independent; they are closely connected. Thus, as shown in Figure 6, personality traits have influence, to a higher or lower degree, over moods and attitudes. E.g. the happiness degree will be lower for a nasty person than for a nice one.

On the other hand, attitudes also have influence over moods. For instance, if an individual hates another one, the first one will turn angry when the second one appears.

As commented above, personality traits can have a slow evolution over time. These changes will be affected by moods or attitudes. Moreover, a personality trait can affect the value of other personality traits.

These three characteristics - personality traits, moods and attitudes - plus intentions and past history, determine the peculiarities of the user's behavior and the way she will perform some actions. E.g. a sad individual will walk with a lowered gaze, dragging her feet.

Figure 6. Relationship among traits.

Once all the traits that define the behavior of an individual have been identified, we must determine how some of these traits influence others, according to the scheme depicted in Figure 6. Thus, to express how much a personality trait, for instance, is correlated with a mood, we will use a new fuzzy concept, named degree of influence, whose fuzzy labels will be: <nothing, few, some, much, all>.

By default, the degree of influence between any pair of parameters is nothing.

But there is more than one way to correlate two parameters. We have defined the following three correlation functions.

3.4.1 Increase

This correlation function can be easily understood by looking at Figure 7. Say that a parameter A has some influence over another parameter B. When the value of A (blue tone gradation) is higher than the middle value, the value of B (magenta tone gradation) tends towards a higher value (towards the positive end). However, when the value of A is lower than the middle value, the value of B tends to be lower.

An example of this correlation function might be the one existing between the friendliness degree (nice - nasty) and the happiness degree. If someone has a high friendliness degree (she is a nice person), she tends to be happier, while someone with a low friendliness degree (a nasty individual) tends to be more unhappy under the same circumstances. Hence, the happiness degree of the first one will be increased a little, while that of the second one will be decreased.

Figure 7. Correlation function “increase”

For instance, we can say that "the friendliness degree increases <much> the happiness degree". If the value of friendliness for an avatar is <null> and the value of happiness at a given moment is <full>, the resulting value for happiness, after applying the correlation function, will be <half>.

3.4.2 Decrease

This is the complement of the Increase function, as shown in Figure 8. An example of the use of this function is the relationship between the personality trait courage and the mood fear. When someone is very courageous, she will tend to be less frightened in a threatening situation, while someone cowardly will tend to be more scared.

Figure 8. Correlation function “decrease”

The decrease function can also be assigned fuzzy labels. For example, if "the courage degree decreases <few> the fear mood degree", and we have an avatar with <null> courage and <low> fear, the resulting value for fear will be <half>.

3.4.3 Accentuate

This correlation function can only be applied to non-exclusive moods and attitudes, because it only makes sense when applied to dual parameters. The accentuate correlation function, as shown in Figure 9, works in the following way, given that a parameter A has some influence over another parameter B. If the value of A is higher than the middle value, the value of B will be increased if it is higher than the middle value, and decreased if lower. On the other hand, if the value of A is lower than the middle value, the value of B will be decreased when it is higher than the middle value, and increased when it is lower.

An example of this correlation function may be the relationship between the personality trait expressiveness and the mood attention: the attention (or distraction) of a very expressive person is accentuated, while the attention (or distraction) of a very phlegmatic one is attenuated.

Figure 9. Correlation function “accentuate”

An example of an accentuate correlation function would be "the expressiveness degree accentuates <some> the attention degree". The attention degree of a <null> expressive avatar will go from <low> to <half> after applying this function.
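The three correlation functions are specified qualitatively, through Figures 7-9 and the worked examples above, rather than with closed formulas. The following sketch is one possible label-arithmetic realization; the numeric influence weights are our own assumption, chosen only so that the three worked examples in the text are reproduced.

LABELS = ["null", "low", "half", "high", "full"]  # value labels, middle index 2
INFLUENCE = {"nothing": 0.0, "few": 0.3, "some": 0.6, "much": 1.0, "all": 1.3}  # assumed

def _clamp(i):
    return max(0, min(len(LABELS) - 1, i))

def increase(a, degree, b):
    """A above the middle pushes B up; A below the middle pushes B down."""
    deviation = LABELS.index(a) - 2
    return LABELS[_clamp(LABELS.index(b) + round(deviation * INFLUENCE[degree]))]

def decrease(a, degree, b):
    """Complement of increase: A above the middle pushes B down."""
    deviation = LABELS.index(a) - 2
    return LABELS[_clamp(LABELS.index(b) - round(deviation * INFLUENCE[degree]))]

def accentuate(a, degree, b):
    """A above the middle moves B away from the middle; A below pulls B toward it.
    (B exactly at the middle is left to drift upward; the paper does not specify it.)"""
    dev_a, dev_b = LABELS.index(a) - 2, LABELS.index(b) - 2
    sign = 1 if dev_a * dev_b >= 0 else -1
    return LABELS[_clamp(LABELS.index(b) + sign * round(abs(dev_a) * INFLUENCE[degree]))]

# The three worked examples from the text:
assert increase("null", "much", "full") == "half"    # friendliness -> happiness
assert decrease("null", "few", "low") == "half"      # courage -> fear
assert accentuate("null", "some", "low") == "half"   # expressiveness -> attention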

3.4.4 Correlation between real mood and simulated mood

The application of the fuzzy internal model to our prototype DVE (called Amusement, since it is an environment where people can meet to play together) has a feature that no other known DVE includes: the possibility of simulating a mood different from the real mood of the individual. This can be essential in game environments to try to deceive the adversaries, although it is also useful in any DVE, for example, when someone hates another person but wants to greet her in a friendly way (what in real life is called diplomacy).

This feature contributes to the believability of the VE, since the user can play with the possibility of simulating her mood or her attitudes towards other individuals. An individual has a degree for each mood identified in the model. Besides, she can indicate a different degree for the mood she wants to simulate. This takes place without losing coherence with her internal psychological model (e.g. although someone is simulating sadness, her avatar keeps a coherent internal model with the proper actual degree of happiness).

This does not mean that every avatar is a great pretender. In the set of personality traits defined for an avatar, there is one called ability for simulation. If an avatar is very sincere, it will not be able to simulate convincingly. To typify the relationship between the mood value that the individual wants to simulate and her real mood value, we make use of a fuzzy function called approach. Depending on the degree of ability for simulation, the mood degree that the avatar shows will be close to the real mood degree (ability for simulation low) or to the simulated mood degree (ability for simulation high), as shown in Figure 10.

Figure 10. Correlation function “approach”

This function will only be active when the user shows her intention of making someone believe something.

For instance, if someone has a <high> degree of excitement (she is nervous), but she wants to simulate a <low> degree and has a <high> degree of ability for simulation, the resulting value for excitement will be <low>.
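Again, approach is defined qualitatively. A minimal sketch, assuming label index arithmetic and an ability-to-weight mapping of our own choosing (tuned to reproduce the example above):

LABELS = ["null", "low", "half", "high", "full"]
# Assumed mapping from the ability-for-simulation label to a blending weight.
ABILITY_WEIGHT = {"null": 0.0, "low": 0.2, "half": 0.5, "high": 0.8, "full": 1.0}

def approach(real, simulated, ability):
    """Mood the avatar displays: the real value pulled toward the simulated
    one, in proportion to the ability-for-simulation trait."""
    r, s = LABELS.index(real), LABELS.index(simulated)
    return LABELS[round(r + (s - r) * ABILITY_WEIGHT[ability])]

# Worked example from the text: <high> real excitement, <low> simulated,
# <high> ability for simulation -> the avatar displays <low> excitement.
assert approach("high", "low", "high") == "low"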

3.4.5 Correlation Functions Syntax

In order to express a correlation between two personality parameters, the syntax is always the same:

<Parameter A> CORRELATION FUNCTION <correlation degree> <Parameter B>

where:

 Parameter A has some influence on parameter B.

 The correlation function is one of the three functions indicated above (increase, decrease or accentuate).

 The correlation degree shows the degree of influence of A on B, with a value in the domain <nothing, few, some, much, all>.

We have designed an easily understandable script language to specify the set of appropriate relationships for a given avatar. Every line of the script is similar to the following ones:

// Correlation between personality traits and moods
courage DECREASE <few> fear
friendliness INCREASE <much> happiness
expressiveness ACCENTUATE <some> attention
...

The approach function is implicitly defined for each mood and each attitude. It is the only correlation function that does not have a fuzzy label associated with it.
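To show how compact this script language is to process, here is a small parser sketch (ours; the actual engine is written in C). It turns each line into an (A, function, degree, B) tuple that an engine could dispatch to the correlation functions above.

import re

RULE = re.compile(r"^(\w+)\s+(INCREASE|DECREASE|ACCENTUATE)\s+<(\w+)>\s+(\w+)$")

def parse_script(text: str):
    """Return (parameter_a, function, degree, parameter_b) tuples."""
    rules = []
    for line in text.splitlines():
        line = line.split("//")[0].strip()   # drop comments and blank lines
        if not line or line == "...":
            continue
        m = RULE.match(line)
        if not m:
            raise ValueError(f"bad correlation rule: {line!r}")
        rules.append((m.group(1), m.group(2), m.group(3), m.group(4)))
    return rules

script = """
// Correlation between personality traits and moods
courage DECREASE <few> fear
friendliness INCREASE <much> happiness
expressiveness ACCENTUATE <some> attention
"""
print(parse_script(script))
# [('courage', 'DECREASE', 'few', 'fear'), ...]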

3.5 Updating the Internal State

When a new participant enters the virtual environment, she has to choose an avatar and configure its initial personality profile, that is, assign fuzzy labels to the personality traits. She can also set values for the attitudes towards other participants (or do it dynamically, as she meets new people), and she can set some intentions (although these can also be modified later on).

Then, a cyclic process starts in which the internal state of the avatar is updated in each cycle and the appropriate actions are selected. Two types of inputs can affect the internal state:

 The user's wishes: At any time, the user can communicate to her avatar her desire to be in a given mood, or her attitude towards a given person.

 The current state of the world: The events and conditions that hold at a given moment can also influence the avatar's mood. For instance, if my avatar likes another one and that avatar comes into the room, my avatar's happiness will increase; or if something suddenly falls by my avatar's side, its surprise will go up.

In order to deal with the current state of the world, it is necessary to clearly define the possible events and conditions that have to be considered. For each one, a prototypical response has to be specified, that is, the values that could be expected for the moods. For example, if I am playing a game and I lose, my happiness will decrease. These responses are prototypical in the sense that they would be considered reasonable in a person with a normal (not too extreme) personality, without having any information about her past history or mood.

The process of updating the internal psychological model, shown in Figure 11, can be summarized in the following steps:

In the first step, the input is processed:

If the user wants to modify her mood values, attitudes or intentions, she provides the new values in a fuzzy way (for instance, "I want to be very happy"). According to her personality traits, and the correlation functions defined in the model between the parameters, the new values for the moods and attitudes are calculated. The final values may differ from the values explicitly stated by the user (for instance, the happiness could be half instead of high), but they will be more coherent with the participant's personality, thus making our avatars more believable.

If an event or condition that can affect the internal state happens in the part of the virtual world of which the avatar is aware (something happening in another room should not affect an avatar, for instance), the new moods are computed according to the prototypical response.

In a second step, the user's intentions are applied. Some of them can considerably modify the values obtained in the previous step (e.g. when the participant wants to simulate a different mood value). These intentions are applied according to the participant's personality trait values.

In a third step, the new values calculated for every parameter are adapted to the previous mood values, the previous attitudes, and the past history, in order to maintain a certain coherence in the behavior of the avatar (this avoids extreme changes in the participant's mood, which could give the impression of a schizophrenic avatar).

Finally, the participant’s internal state and past history are updated.

Figure 11. Internal psychological model updating process
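As a minimal, self-contained illustration of the third step (coherence with previous values), the sketch below caps mood changes at one label per cycle. The paper does not spell out the actual smoothing criterion, so this particular rule, and every name in the sketch, is only an assumption.

LABELS = ["null", "low", "half", "high", "full"]

def smooth(previous: str, proposed: str) -> str:
    """Step 3: move at most one label per cycle toward the proposed value,
    so the avatar never jumps e.g. from <null> to <full> happiness."""
    p, q = LABELS.index(previous), LABELS.index(proposed)
    return LABELS[p + max(-1, min(1, q - p))]

def update_cycle(state, desired_moods, history):
    # Steps 1 and 2 (correlation functions and intentions such as mood
    # simulation via "approach") would transform desired_moods first; they
    # are omitted here and sketched in Section 3.4.
    for mood, proposed in desired_moods.items():
        state[mood] = smooth(state[mood], proposed)
    history.append(dict(state))  # update the past history
    return state

state, history = {"happiness": "null"}, []
update_cycle(state, {"happiness": "full"}, history)
print(state)  # {'happiness': 'low'} - one step toward <full>, not a jump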

The current internal state of an avatar will affect its behavior and expression. Thus, when another user watches this avatar, she can understand how the other feels and react consistently, as in real life. This is a very important characteristic to take into account if believable interaction in DVEs is desired.

4. Application of the Model: The Mus Game

The fuzzy internal model is used to represent the internal state of any DVE user, but it is hidden from her, so the best way to test the validity of the model is to check whether the expressions and actions that are autonomously selected by the avatar are appropriate, given the current state of the DVE. A good domain for experimentation is games, where people must interact and express their feelings and emotions freely.

The first game in which we have incorporated the fuzzy internal model described above is a Spanish card game called Mus. Mus is a betting card game (played with points; no money is involved) played by two couples. Each couple plays against the other, and although each player has her own set of cards, scoring considers the cards of both members of the couple. Another interesting feature is that a bet raised or accepted by one partner of a couple counts for both: they both lose or win. All of this implies a need for coordination and communication between partners.

The rules are quite complex, but the interesting feature, for our purposes, is that two levels of communication are present:

First, each partner of each couple can communicate her current hand of cards to the other by using standard facial signs (blinking, biting a lip slightly, etc.), taking care not to be seen by the other couple. The signs are standard, and no others can be used, so all the players can interpret the flow of information if they manage to catch it. Using this channel of communication, a player with a bad set of cards can know that her partner has a very good one and bet high as if she were the real holder of the good set.

Second, emotions and feelings about the current state of the game are expressed using facial expressions, voluntarily or not, depending on the interaction control degree, personality, moods, intentions, past history, etc., as usual in any card game.

As we can see, the Mus game provides us with an environment rich in expressions, feelings and emotions, which is what we needed to test our model.

The ability for simulation personality trait is very interesting for this game, because Mus is based on lies and deceit. For example, if you have very bad cards but want to simulate good cards, this simulation will be better reflected on your face if you are a person with a high ability for simulation.

The avatars and the environment have been modeled with Alias|Wavefront PowerAnimator 8.5. The DVE has been implemented with Sense8 WorldUp and World2World, while the personality control engine has been implemented in C.

4.1 Facial Expression

What gives its real substance to face-to-face interaction in real life, beyond speech, is the bodily activity of the interlocutors: the way they express their feelings or thoughts through the use of their body, facial expressions, tone of voice, etc. (Guye-Vuillème et al., 1998).

According to some psychological studies, more than 65% of the information exchanged during face-to-face interaction is expressed through non-verbal means (Argyle, 1988). In particular, in the task of communicating emotional messages, non-verbal communication is able to express things that would be very difficult to express using the linguistic system (Ekman and Friesen, 1967).

Non-verbal communication can be defined as the whole set of means by which human beings communicate except for the human linguistic system and its derivatives (writing, sign language, etc.). It is also frequently referred to as bodily communication, since it relies mostly on the use of the body.

We decided to use facial expression and voice intonation as the vehicles to represent the output of the internal model, so we have built some 3D models with all the elements that constitute the human face. These models will allow us to express all possible combinations of personality traits and moods. In the future, the internal state will also be represented using other non-verbal communication channels.

Regarding the face, we generate the different expressions in real time by parameterisation. Therefore, we have defined a minimum set of parameters that defines the conformation of each expression of the face. These parameters are what we call significant items. Some other authors have proposed different, more detailed sets of significant items, but our need to achieve expressive faces in a real-time, multi-user context led us to discard the existing models and build another one more appropriate for DVEs.

The exact location of each of the significant items can be observed in Figure 12.

Figure 12. Location of the Significant Items

4.2 Use of the Model in the Mus Game

In our prototype, the personality traits are initially set by the user when she enters the environment and selects her personal avatar (its appearance). The values of the personality traits are set by choosing among the fuzzy labels <null, low, half, high, full>. We have found that users find it easier to do it this way than by assigning numerical values.

Another possibility, which will be implemented in the future, is to have a repertoire of avatars, each one with a predefined personality profile (a nasty avatar, a friendly avatar, an introverted avatar, etc.). This looks like a good idea, since the use of the environment would be even easier for the user, but on the other hand it poses interesting questions and challenges. If the user selects an avatar whose personality is totally different from her own, and she delegates some actions, she could see her avatar behaving in a way she does not approve of. In order for this option to be accepted by the user, the concept of avatar would have to change radically: it would no longer be a faithful representation of the user, but more like a pet.

In the Mus prototype, most of the changes in the avatar's mood are driven by the cards that the user receives in each hand. We have not yet implemented an automatic card evaluation mechanism, so the user has to tell her avatar her opinion about the cards she has received (very good, good, normal, bad or very bad).

Each time the user decides to change her mood, this change will be reflected in the intonation and the set of sentences used to say something, and in her facial expression. This card valuation is internally related to a set of mood values; the fuzzy updating mechanism then transforms them, taking into account the personality traits of the participant, the previous values of the moods, the past history, etc. The resulting mood values will be used to generate a new facial expression, sentence and voice intonation.

It is interesting to note that the external manifestation of the internal state can be different from the one the user was expecting. For instance, if the user tells her avatar that she has very good cards, but the avatar has registered in its past history that she has been losing for a very long time, the expression on its face may not be as happy as the user could imagine.

On the other hand, simulation is very important in the Mus game. In addition to the ability for simulation personality trait, we have defined an intention of the type "willingness of making someone believe something", in this case "making the other couple believe that I have a good or bad set of cards". Whenever the user wants to simulate, she only has to tell her avatar the kind of hand she pretends to have. Again, the result will depend on the current mood values and on how able the selected avatar is to simulate.

5. Conclusions

In this paper we have discussed the use of intelligent agents, attached to avatars in a virtual world, as a way to obtain an acceptable degree of believability, mainly in the interaction with other users, without overloading the user with too many controls. Higher usability of the environment is achieved when the user delegates some actions to her avatar, so that the decision-making process is shared between them. This delegation can be performed at different levels, from just the details of how to express emotions, to complex actions that require high-level planning and reasoning on the part of the agent.

In our first prototypes we have experimented with the delegation of expressing emotions, leaving all the high-level decisions to the user (mainly related to what to do next in the game she is playing). In particular, we have selected facial expression and voice intonation as the non-verbal communication channels through which the emotions are expressed.

In most current DVEs, the expression of emotions is limited to a predefined set of "labeled" animations, so that the user selects one of these animations whenever she thinks it is appropriate, and her avatar obediently reproduces the animation, without any consciousness of the underlying emotion it is supposed to be feeling. Moreover, all avatars express their emotions in exactly the same way. After a while, the interaction becomes repetitive and tedious.

In order to provide for variability and consciousness, a formal internal model has to be defined. We propose a structure for the internal model that the avatar needs in order to have some autonomy. Other models have been proposed by other authors, but this is the first one to use a fuzzy method to define the values of the parameters and to calculate the correlations among them. Most of the other existing models are too simple: some deal with just moods, and most do not consider the relationships among the personality parameters. On the other hand, most of those who have defined an internal model focus their efforts on achieving a mathematically consistent model, rather than a coherent, easily understandable one. This is the reason why we have adopted a more intuitive approach by using fuzzy labels and, therefore, a fuzzy internal model.

Moreover, since most of the other models are intended just for autonomous agents, and not for avatars, they do not take into account the inputs that come from outside the virtual world, that is, from the user. The mood of the agent evolves only as a consequence of what happens in the environment. In our model, the user can provide several types of input that will have an influence over the avatar's state:

 attitudes towards the others

 intentions

 desired moods

This is also the first method that allows the user to simulate something different from the real emotion she feels. We could say that other models are "too sincere".

Acknowledgements

This research has been partially supported by the Amusement project (ESPRIT IV LTR project 25197). Ricardo Imbert is supported by the Comunidad Autónoma de Madrid through its grant program "Becas de Formación de Personal Investigador".

References

Argyle, M. 1988. Bodily Communication. New York: Methuen & Co.

Bates, J. 1994. The Role of Emotion in Believable Agents. Communications of the ACM, Vol. 37, No. 7.

Bates, J., Loyall, A., Reilly, W. 1992. An Architecture for Action, Emotion, and Social Behavior. Technical Report CMU-CS-92-144. School of Computer Science, Carnegie Mellon University.

Binsted, K. 1998. Character Design for Soccer Commentary. cmp-lg/9807012. Sony Computer Science Lab, Tokyo, Japan.

Bonissone, P. 1987. Summarizing and Propagating Uncertain Information with Triangular Norms. International Journal of Approximate Reasoning, pp. 71-101.

Ekman, P., Friesen, W. 1967. Head and Body Cues in the Judgement of Emotion: A Reformulation. Perceptual and Motor Skills, No. 24, pp. 711-724.

Ekman, P., Friesen, W. 1978. Facial Action Coding System. Consulting Psychologists Press.

Elliott, C. 1997. I Picked Up Catapia and Other Stories: A Multimedia Approach to Expressivity for "Emotionally Intelligent" Agents. Proceedings of the First International Conference on Autonomous Agents, pp. 451-457. Marina del Rey, California.

Guye-Vuillème, A., Capin, T.K., Pandzic, I.S., Magnenat Thalmann, N., Thalmann, D. 1998. Non-verbal Communication Interface for Collaborative Virtual Environments. Proceedings of Collaborative Virtual Environments (CVE'98), pp. 105-112.

Hayes-Roth, B., van Gent, R., Huber, D. 1997. Acting in Character. In Robert Trappl and Paolo Petta (eds.), Creating Personalities for Synthetic Actors, pp. 92-112. Springer-Verlag Lecture Notes in Artificial Intelligence.

Hayes-Roth, B., Brownston, L., Sincoff, E. 1995. Directed Improvisation by Computer Characters. Technical Report KSL-95-04. Knowledge Systems Laboratory, Stanford University, Stanford, California.

Imbert, R., Sánchez-Segura, M.I., de Antonio, A., Segovia, J. 1998. The Amusement Internal Modelling for Believable Behavior of Avatars in an Intelligent Virtual Environment. Workshop on Intelligent Virtual Environments, ECAI 98 – The 13th Biennial European Conference on Artificial Intelligence. Brighton, UK.

Loyall, A., Bates, J. 1997. Personality-Rich Believable Agents that Use Language. Proceedings of the First International Conference on Autonomous Agents, pp. 106-113. Marina del Rey, California.

Maes, P. 1995. Artificial Life Meets Entertainment: Lifelike Autonomous Agents. Communications of the ACM, Vol. 38, No. 11, pp. 108-114.

Ortony, A., Clore, G., Collins, A. 1988. The Cognitive Structure of Emotions. Cambridge University Press.

Perlin, K., Goldberg, A. 1996. Improv: A System for Scripting Interactive Actors in Virtual Worlds. SIGGRAPH 96, Computer Graphics Proceedings, Annual Conference Series. New Orleans, Louisiana.

Reilly, W. 1997. A Methodology for Building Believable Social Agents. Proceedings of the First International Conference on Autonomous Agents, pp. 114-121. Marina del Rey, California.

Reilly, W. 1996. Believable Social and Emotional Agents. Ph.D. Thesis. Department of Computer Science, Carnegie Mellon University, Pittsburgh.

Reilly, W., Bates, J. 1995. Natural Negotiation for Believable Agents. Technical Report CMU-CS-95-164. Carnegie Mellon University.

Rousseau, D., Hayes-Roth, B. 1997. Improvisational Synthetic Actors with Flexible Personalities. Report No. KSL 97-10. Knowledge Systems Laboratory, Department of Computer Science, Stanford University, Stanford, California.

Sloman, A., Logan, B. 1998. Cognition and Affect: Architectures and Tools. School of Computer Science, The University of Birmingham.

Vilhjálmsson, H.H. 1997. Autonomous Communicative Behaviors in Avatars. Master of Science Thesis. Massachusetts Institute of Technology.

Wright, I.P. 1997. Emotional Agents. Ph.D. Thesis. School of Computer Science, Cognitive Science Research Centre, University of Birmingham, England.

Zadeh, L.A. 1983. A computational approach to fuzzy quantifiers in natural languages. Computers and Mathematics with Applications, No. 9, pp. 149-184.
