Conference Session B12
Paper # 6113
Disclaimer — This paper partially fulfills a writing requirement for first year (freshman) engineering students at the University
of Pittsburgh Swanson School of Engineering. This paper is a student, not a professional, paper. This paper is based on
publicly available information and may not provide complete analyses of all relevant data. If this paper is used for any
purpose other than these authors’ partial fulfillment of a writing requirement for first year (freshman) engineering students at
the University of Pittsburgh Swanson School of Engineering, the user does so at his or her own risk.
EMOTIONAL DEVELOPMENT OF AUTONOMOUS SOFTWARE
Brendan Moorhead, bmm112@pitt.edu, Vidic 2:00, Corinne McGuire, cnm31@pitt.edu, Lora 6:00
Abstract - The development of emotional abilities in socially
intelligent software is at the forefront of engineering. By using
different algorithms, computer engineers have been able to
essentially teach autonomous robots how to respond to
emotion. These robots are able to read the expressions that
are presented by the human they are interacting with. The
robot is then able to make decisions that would be most
logical in the situation. With this, robots are able to simulate
responses that humans would most likely make. A computer is
able to be programmed with software that detects over 60 landmarks on the human face; by studying where these landmarks lie in relation to each other, the software can determine which emotion the human is displaying. This is important because computers can then adjust their interactions just as a human would. A computer adjusts its interactions by using algorithms that treat emotion as a variable when deciding which actions to take. By using emotion as a motivator for decisions, robots are able to mimic human interaction. This intelligent software has a beneficial impact in areas such as education and everyday life.
If autonomous agents can understand what humans are feeling, they can more easily assist and adapt to the situations at hand [3]. The physical signs of emotion are an important part of deciphering human body language and emotion. By teaching the agents how to recognize the physical reactions the human body has to emotion, they can more efficiently understand what is happening and respond in the appropriate way. It is important to understand the processes that are used to develop these four abilities of emotional intelligence in autonomous software and the practical purposes they can serve.
PHYSICAL SIGNS OF EMOTION AND
AGENTS
Certain emotions can be distinguished by their physical side effects, such as wrinkles, blushing, sweating, tearing,
and breathing. The extent to which these are recognized by
intelligent agents plays a powerful role in interactions
between agents and humans. For example, communication of
emotions in agents has been shown to influence the human’s
politeness, empathy, rapport, trust, and learning [4]. The
challenge is then to simulate these physical signs of emotion so as to improve humans' perception of emotion in the agents.
Key Words- Algorithms, Autonomous agent, GEmA, Emotion
recognition, Robot tutoring, Social Intelligence
Background of Essential Physical Signs
EMOTIONALLY INTELLIGENT
SOFTWARE
There are two types of wrinkles that can be distinguished:
permanent and temporary wrinkles [4]. Permanent wrinkles
are those that come with age while temporary wrinkles are
those that occur whenever a human smiles, frowns, is
surprised, etc. [4]. The simulation of the forehead wrinkles
from all of the aforementioned emotions is essential to the
expression of physical emotion.
Blushing can be associated with embarrassment as well as
social anxiety, shame, uneasiness or happiness [4]. There are
three theories about blushing: interpersonal appraisal theory
argues that humans blush when they become self-aware and
think about what others are thinking of them; the
communicative and remedial theory argues that
blushing is used to save face that acknowledges and
apologizes for breaking an accepted social rule; the social
blushing theory expands upon the communicative theory and
argues blushing occurs when undesired social attention is
given [4]. The simulation of blushing is essential in
expressing feelings of shame and pride.
It is important for engineers to further research and develop emotionally intelligent software because it is beneficial
in improving society through its uses in business and
education. Emotional intelligence is based on the theory of
four different abilities. The first is the ability to perceive
emotion in oneself and others. The second ability is the use of
emotions to guide thought process. The next ability is the
understanding of the meaning of certain emotions. Finally, managing and regulating emotions is the last ability that constitutes emotional intelligence [1]. The construction of flexible and adaptive emotional and behavioral models in an autonomous agent allows emotion to be expressed by the agent in real-life scenarios [2]. Autonomous agents' ability to regulate their own emotions is very important, but the software developed to help them understand and evaluate the emotions of those around them is even more important.
Mapping the landmarks of the face allows the virtual agent to get an evaluation of the human's expressed emotion. Once the system has mapped out the landmarks in the human's face, it is also able to detect facial movement and changes in facial expression using 46 Action Units. These Action Units detect different muscle movements in the face by examining changes in landmark positions. The changes in landmark positions are then used to
further determine the emotion of the human the agent is
interacting with [5].
It is important for intelligent software to be able to use the detected Action Units to make inferences about the emotion of the human it is interacting with. These facial movements and expressions give crucial information on human emotion that the virtual agent can use while interacting with the human. For example, if the software detects that the human has just lowered their eyebrows, it can infer that the human is feeling either confused or uncertain in the situation. On the other hand, if the eyebrows are raised, the human may be in agreement or feel more certain [6]. With the ability to detect landmarks in the face along with facial movements, intelligent software is able to perceive emotions expressed by humans, which plays a key role in developing emotional intelligence.
Sweating is used for thermoregulation, but also can be
caused by emotional stress, or emotional sweating which
manifests in the palms, soles, and face. A human may sweat
whenever they are fearful or under scrutiny and this is
particularly evident in shy individuals [4]. The simulation of
sweating on the forehead is essential to the expression of fear.
Crying is associated with the experience of intense emotions such as suffering, separation, loss, failure, anger,
guilt, or joy. Crying is seen as either a cathartic release after
an intense experience or the communication of a need to be
cared for after a loss [4]. The simulation of tearing up
communicates intense sadness with those around the agent.
Accelerated breathing has been associated with
excitement; in contrast, relaxation is associated with slow, deep breathing. Pain is related to increased respiratory volume (i.e., faster and/or deeper breathing), a lengthening of post-inspiratory pauses, and breath-to-breath variability [4].
Nonrespiratory movements are generally viewed as the
expression of emotional states such as joy with laughter;
sighing with relief, boredom, or anxiety; and yawning with
drowsiness or boredom [4]. The simulation of different
breathing rates and nonrespiratory movements will lead to the
obvious expression of joy, anger, boredom, disgust, surprise,
etc.
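As a rough illustration of how an embodied agent might drive these simulated signs, the sketch below maps emotion labels to placeholder intensity values for wrinkles, blushing, sweating, tearing, and breathing rate. The numbers are invented for illustration and are not taken from [4].

# Illustrative sketch only: a table an embodied agent might use to drive
# simulated physical signs of emotion (values are invented placeholders).

PHYSICAL_SIGNS = {
    #  emotion     (forehead_wrinkles, blush, sweat, tears, breaths_per_min)
    "joy":         (0.3, 0.2, 0.0, 0.0, 16),
    "shame":       (0.1, 0.9, 0.2, 0.0, 14),
    "fear":        (0.6, 0.0, 0.8, 0.0, 24),
    "sadness":     (0.4, 0.0, 0.0, 0.9, 10),
    "relaxation":  (0.0, 0.0, 0.0, 0.0, 8),
}

def render_signs(emotion: str) -> dict:
    """Return the simulation parameters for one emotion."""
    wrinkles, blush, sweat, tears, breathing = PHYSICAL_SIGNS[emotion]
    return {"wrinkles": wrinkles, "blush": blush, "sweat": sweat,
            "tears": tears, "breaths_per_min": breathing}

print(render_signs("fear"))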
ALGORITHMS
Algorithms are an important factor in the development of
emotionally intelligent software. They can be used to generate
personality traits and emotion in intelligent software,
simulating a more human-like interface. Algorithms are also used in determining the emotional state of the system in response to its personality, motivation, and external stimuli. The use of these algorithms is crucial in the architecture of the intelligent system [7].
The first important component of the architecture of an
intelligent agent is the use of sensors. Sensors are an important component because they allow the intelligent system to collect information about the surrounding environment,
such as location and other stimuli, to store in the system’s
memory. This information can then be used with the
combination of an algorithm to determine how the intelligent
system should “feel” and react to the current environment [7].
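A minimal Python sketch of such a sensor component is given below; the class and field names are our own assumptions, not the architecture from [7], and simply show observations being written into the agent's memory for later stages to use.

# Rough sketch (assumed design): a sensor component that records
# observations about the environment in the agent's memory.

from dataclasses import dataclass, field

@dataclass
class Observation:
    name: str        # e.g. "water_bottle"
    location: tuple  # e.g. (x, y)

@dataclass
class Memory:
    observations: list = field(default_factory=list)

    def store(self, obs: Observation) -> None:
        self.observations.append(obs)

class Sensor:
    def __init__(self, memory: Memory):
        self.memory = memory

    def sense(self, environment: list) -> None:
        # Store everything currently visible; perceptrons filter later.
        for obs in environment:
            self.memory.store(obs)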
The next component in developing the architecture of
intelligent software is perceptrons. Perceptrons are similar to
sensors, in the sense that they use data collected from the
surrounding environment. However, perceptrons differ from
sensors because they use an attention object list. This
attention object list consists of objects that are of high
importance to the intelligent system and checks the
environment for only these objects. An example of the use of
perceptrons can be demonstrated by using the scenario of a
virtual agent that is in a state of “thirst.” The intelligent agent
can use an attention object list for a scenario of "thirst" to search its surroundings for certain objects, such as a water bottle. Algorithms can then be used to determine the action and emotional state of the intelligent agent. One example is the mapping "MV(m) → ET".
FIGURE 1 [5]
Facial landmark tracking using Facial Action Coding
System
FACIAL RECOGNITION
In order to develop emotional intelligence, intelligent
software must first be able to recognize emotions of humans
it interacts with. To perceive these emotions, the software agent must be programmed to detect the facial features and expressions of humans and to derive a perceived emotion, as humans are able to do. This
process can currently be seen in the Facial Action Coding
System.
The Facial Action Coding System is a program that gives
intelligent software the ability to recognize facial features of
humans that it is interacting with and perceive what emotion
they are feeling. The virtual agent is able to do this by first
mapping 68 different landmarks of the human face, as seen in
Figure 1. This allows the system to detect the geometrical shape of the current face. The system then compares the facial
mapping to its database of 2,926 facial images that contain
different emotion expressions that range in intensity.
In the "MV(m) → ET" mapping, "MV(m)" is the current motivation level and "ET" is the resulting emotional state. The motivation is a value ranging from µ=0, which corresponds to no motivation, to µ=1, which corresponds to maximum motivation. The state of emotion is then driven by the motivational level. If the object is at a far distance and the level of "thirst" is low, the agent may decide it is not motivated to seek the water bottle but still remain in an agreeable emotional state. However, if the level of "thirst" is high, the agent may be highly motivated to retrieve the water bottle. Additionally, if the water bottle is so far away that it is unattainable, the algorithm could decide that high motivation combined with the inability to retrieve the bottle puts the agent into an unhappy emotional state [6]. These examples show how algorithms can be used to give virtual agents more human-like decisions.
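The following sketch captures this idea in code. The thresholds and emotion labels are illustrative assumptions rather than the published algorithm, but they show how a motivation value and the attainability of a goal object can be combined into an emotional state.

# Hedged sketch of the motivation-to-emotion idea (MV(m) -> ET).
# The thresholds and labels are assumptions, not values from reference [7].

def emotional_state(motivation: float, attainable: bool) -> str:
    """motivation is the MV(m) value in [0, 1]; returns an emotion label ET."""
    if motivation < 0.3:
        return "content"   # low thirst: no urge to act, agreeable state
    if attainable:
        return "eager"     # motivated and the goal object is reachable
    return "unhappy"       # motivated but the goal cannot be reached

# The water-bottle scenario from the text:
print(emotional_state(0.1, attainable=True))    # content
print(emotional_state(0.9, attainable=True))    # eager
print(emotional_state(0.9, attainable=False))   # unhappy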
The next part of the intelligent software architecture is the
soul. The soul component of a virtual agent is where it stores
variables of emotion, personality, motivation, knowledge, and
gender. A virtual agent stores a personality set, "PS," which consists of specific personality traits, "PS(k)." Productiveness, extraversion,
and agreeableness are examples of specific personalities that
can be stored in the personality set of the intelligent software.
Each personality is also stored with an intensity value ranging
from µ=0, which corresponds to no intensity, to µ=1, which
corresponds to maximum intensity of the personality [7].
Algorithms are used to determine the personality set of the
virtual agent by using information such as programmed
gender, social knowledge, and knowledge gained through
interactions to determine a likely personality set of the agent.
With these personality sets, intelligent software is able to
demonstrate unique human-like personalities.
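A personality set of this kind can be sketched as a simple mapping from trait names to intensity values; the trait names follow the text, while the intensity values below are invented for illustration.

# Simple sketch of the personality set PS described above.
# Intensity values are made up for illustration.

personality_set = {          # PS: trait name -> intensity in [0, 1]
    "productiveness": 0.7,   # PS(k) entries
    "extraversion":   0.4,
    "agreeableness":  0.9,
}

def trait_intensity(ps: dict, trait: str) -> float:
    """Return µ for a trait; 0.0 (absent) if the trait is not in the set."""
    return ps.get(trait, 0.0)

print(trait_intensity(personality_set, "agreeableness"))  # 0.9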
The final component in the architecture of the virtual
agent is the actuator. The actuator uses the emotional state of
the system, which was determined by algorithms in the other
components, to determine its actions. These actions include
body movement of the virtual agent along with changing its
facial expression to correspond with its emotional state [7].
The actuator is an essential component in the architecture as
it ties together the use of algorithms with the emotional
development of intelligent software.
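The sketch below shows, in heavily simplified form, how the four components described in this section (sensor, perceptron, algorithm, and actuator) might be chained in a single decision cycle. It is assumed glue code for illustration, not the architecture from [7].

# Minimal sketch (assumed glue code) of one decision cycle through the
# sensor, perceptron, algorithm, and actuator stages described above.

def decision_cycle(environment, memory, attention_list, motivation):
    # Sensor: record everything visible.
    for obs in environment:
        memory.append(obs)
    # Perceptron: keep only objects on the attention list (e.g. "water_bottle").
    relevant = [obs for obs in memory if obs["name"] in attention_list]
    # Algorithm: derive an emotional state from motivation and attainability.
    attainable = any(obs["distance"] < 10 for obs in relevant)
    emotion = "unhappy" if (motivation > 0.5 and not attainable) else "content"
    # Actuator: turn the emotional state into an action and an expression.
    return {"expression": emotion,
            "action": "approach" if (attainable and motivation > 0.5) else "idle"}

state = decision_cycle(
    environment=[{"name": "water_bottle", "distance": 4}],
    memory=[], attention_list={"water_bottle"}, motivation=0.8)
print(state)   # {'expression': 'content', 'action': 'approach'}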
What is GEmA?
GEmA is a generic emotional agent, or a mathematical
method to calculate the desirability of events as well as the
praiseworthiness of actions [8]. GEmA simulates 16 emotions by appraising events and actions with respect to the agent's goals and standards, thus allowing the agent to learn what is desired of it.
FIGURE 2 [8]
Basic workings of the GEmA model
The GEmA Model
The above "Event and Action Repository" works with events and actions through its two elements: a pattern table, which learns the patterns of events and actions, and an event and action history. The patterns are used to compute expectedness, while the event and action history stores occurrences of events and actions in order to compute the likelihood that an event or action will occur [8]. The likelihood is used to compute hope for desirable events and fear for undesirable events. Decay reduces the emotions at each cycle. The structure of the event appraiser is similar to that of the action appraiser. “The terms
goals, events, and desirability in event appraiser are replaced
respectively with standards, actions, and praiseworthiness in
action appraiser. Hence, only event appraiser is presented”
[8]. However, care must be taken with the concept of a "standard," which relates to the agent's beliefs and manner of behaving; this means that the source of a standard's value differs from that of a goal [8].
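The sketch below illustrates the appraisal-and-decay behavior described above: prospective events raise hope or fear in proportion to their likelihood and desirability, realized events raise joy or distress, and every emotion decays at each cycle. The update rules are simplified illustrations, not the published GEmA equations from [8].

# Hedged sketch of appraisal and decay; simplified illustration, not GEmA's
# published formulas.

class EventAppraiser:
    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.emotions = {"hope": 0.0, "fear": 0.0, "joy": 0.0, "distress": 0.0}

    def appraise_prospect(self, likelihood: float, desirability: float) -> None:
        """Prospective events raise hope if desirable, fear if undesirable."""
        if desirability >= 0:
            self.emotions["hope"] += likelihood * desirability
        else:
            self.emotions["fear"] += likelihood * (-desirability)

    def appraise_outcome(self, desirability: float) -> None:
        """Events that actually occur raise joy or distress."""
        key = "joy" if desirability >= 0 else "distress"
        self.emotions[key] += abs(desirability)

    def tick(self) -> None:
        """Decay reduces every emotion at each cycle."""
        for k in self.emotions:
            self.emotions[k] *= self.decay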
GEMA
Thus far, research on artificial intelligence has mainly
been focused on decision making skills, but through GEmA,
agents can be taught emotions such as joy, fear, hope, and
anger, all of which play an important role in being ‘human’.
Emotions and reason can often conflict, which might make it seem more reasonable to remove the conflicting emotions from the agent, but if the goal is humanity and morality,
emotions are essential. Autonomous agents need to possess
emotional intelligence as well as classic intelligence to
perform the best they can, similar to the human theory of
multiple intelligences devised by Drs. Gardner and Goleman,
and this is where the GEmA becomes useful [8].
Applications of GEmA
By using the GEmA, engineers can give autonomous
agents the ability to recognize and understand emotions. If an
agent can predict a negative emotional reaction, it can then
make the decision to prevent the negative event and thus the
negative reaction from occurring. Robots that can regulate the
emotions of those around them would be beneficial in
business, education, healthcare, and everyday life.
The domain module consists of a database of all the information the system knows on a subject. The tutoring system aims to teach
this information to the user. The next module is the student
module. This module consists of the information that the
system knows about the user prior to the tutoring session.
Users of the intelligent tutoring system are given a diagnostic
test before the tutoring session so the system can understand
what the user knows and what the user still needs to learn. The
student module is considered a proactive feature. The third
module of intelligent tutoring systems is the tutoring module.
This module stores how the tutoring system can teach
information, such as types of exercises it can give to the user.
It also stores how the tutoring system should react to user
actions such as mistakes, effort, and time used. The last
module of intelligent tutoring systems is the affective module.
Using emotion recognition, the tutoring system is able to
gather information about the user as the session is happening.
The system is able to detect how frustrated the user is getting, how much they need help, the number of errors they make, and how much time the user takes to complete exercises. Anger, disgust, fear, happiness, sadness, surprise, and neutral are emotions that the system is able to detect. The affective module is an important use of emotion recognition because it assesses not only whether the user is right or wrong, but also their emotional state. This allows the computer system to reason more like a human, which gives the user a more beneficial tutoring session. The affective module
is considered a reactive feature of intelligent tutoring systems
[9].
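A schematic sketch of the four modules is given below; the field names are our own simplifications of the structure described in [9].

# Schematic sketch (our own simplification) of the four
# intelligent-tutoring-system modules named above.

from dataclasses import dataclass, field

@dataclass
class DomainModule:          # everything the system knows about the subject
    facts: dict = field(default_factory=dict)

@dataclass
class StudentModule:         # what is known about the user before the session
    prior_knowledge: dict = field(default_factory=dict)   # diagnostic test results

@dataclass
class TutoringModule:        # how material can be taught and reacted to
    exercise_types: list = field(default_factory=lambda: ["quiz", "game"])

@dataclass
class AffectiveModule:       # emotions detected during the session
    detected: list = field(default_factory=list)          # e.g. "frustration"

@dataclass
class IntelligentTutor:
    domain: DomainModule
    student: StudentModule
    tutoring: TutoringModule
    affective: AffectiveModule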
GEmA Tutoring
Learning and tutoring involve emotional processes; some of these emotions stand in the way of the learning process, while others help a student master a subject or theory. An emotional virtual tutor that uses GEmA addresses two types of students: students who are interested in learning new skills and increasing their abilities, or mastery oriented, and those who perceive the performance of their abilities to be most important, or performance oriented [8]. Each educational
tactic contains two emotional behaviors: physical and verbal.
The Virtual Tutor uses gesture and facial expression to show
physical behavior and speech and emotional text to show
verbal behavior [8]. The inclusion of emotion in tutoring can
lead to better performance from the student by allowing the tutoring agent to adjust its teaching to fit the mood and
desires of the student.
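As an illustration, the sketch below chooses an educational tactic with a physical and a verbal behavior based on the student's orientation and currently detected emotion. The rules and wording are assumptions, not the tutor implemented in [8].

# Illustrative sketch (assumed logic): choosing a tactic with a physical and
# a verbal behavior for mastery- or performance-oriented students.

def choose_tactic(orientation: str, emotion: str) -> dict:
    """orientation: 'mastery' or 'performance'; emotion: detected by the agent."""
    if emotion == "frustration":
        verbal = "It's okay, this topic takes time; let's try a smaller step."
        physical = "encouraging_nod"
    elif orientation == "mastery":
        verbal = "Here's a harder variation to deepen your understanding."
        physical = "open_gesture"
    else:  # performance oriented
        verbal = "You scored better than last time; let's keep that streak."
        physical = "smile"
    return {"physical": physical, "verbal": verbal}

print(choose_tactic("mastery", "neutral")["verbal"])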
INTELLIGENT TUTORING SYSTEMS
One important application of emotion recognition of
intelligent software is its use in intelligent tutoring systems.
Intelligent tutoring systems are computer-based tutoring
systems in which the computer software is able to detect
emotions of the user and adapt its tutoring style to be most
effective for the user using proactive and reactive features.
The software observes the user's face, speech, and other features to recognize the user's emotions, such as frustration, interest, or boredom. Intelligent tutoring systems aim to
correct the negative emotions of the user during the session
[9].
Proactive Features
Proactive features of intelligent tutoring systems consist
of information that the system has before the tutoring session
with the user. These proactive features are categorized into two different sections: user-adaptive features and non-adaptive features. Non-adaptive features consist of the features and teaching styles the system was programmed with, and thus are not affected by the user.
User-adaptive features consist of the information about
the user that the intelligent tutoring system collects before the
tutoring session. This information includes subject material that needs to be studied, such as math or history, current conditions,
such as the amount of time available, and how the system will
assess the user, such as games or quizzes. Additional
information that the system collects is the user’s gender,
psychological and personality traits, and prior knowledge of
the subject. All of this information is then stored in the student
module of the system and is used to adapt the tutoring session
before it starts [10].
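The kind of user-adaptive information described above could be stored in a simple structure such as the sketch below; the field names and example values are assumptions for illustration.

# Minimal sketch (names are assumptions) of the user-adaptive information an
# intelligent tutoring system might store in its student module before a session.

from dataclasses import dataclass

@dataclass
class StudentProfile:
    subject: str              # e.g. "math" or "history"
    minutes_available: int    # current conditions, e.g. time available
    assessment_style: str     # e.g. "games" or "quizzes"
    gender: str
    personality_traits: dict  # e.g. {"conscientiousness": 0.6}
    prior_knowledge: dict     # results of the diagnostic test

profile = StudentProfile(
    subject="math", minutes_available=45, assessment_style="quizzes",
    gender="unspecified", personality_traits={"openness": 0.7},
    prior_knowledge={"fractions": 0.4, "algebra": 0.2})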
FIGURE 3 [9]
Intelligent tutoring system modules
Intelligent tutoring systems are composed of four
elements: the domain module, the student module, the
tutoring module, and the affective module.
Another category of reactive features is direct system-delivered prompts to the user. Direct system prompts are
when the intelligent tutoring system uses dialogue boxes or a
speech engine to deliver a message to the user. These
messages are aimed to change a current emotional state or
behavior of the user. For example, if the user shifts their attention away from the tutoring session, the system will deliver a message asking the user to correct this off-task behavior. Direct system
prompts are useful as they allow the system to correct bad
habits and unproductive behavior of the user. Direct system
prompts can be more useful when combined with emotion
recognition and development technology. With this
technology, the intelligent system can deliver more
personable messages to the user. For example, if a user shows
frustration at the current subject, the system can give the user
messages of empathy. The system can inform the user that it
understands that the material is difficult and takes time to fully
understand. The user may feel less likely to give up if the
tutoring system is more personable and acknowledges the
user’s current emotional state. It also is important that the
system is able to recognize the user’s emotion because people
can react to prompts very differently depending on their
personality. For example, some users may feel a positive
emotion when given empathy from the system, thus the
system knows empathetic messages are useful for the current
user. On the other hand, a user may be turned off by these types of messages, causing more frustration or anger in the user. The system can recognize this and greatly reduce the number of such messages. Direct system prompts are an important use of emotional development and recognition in intelligent software [10].
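The sketch below illustrates this adaptive prompting behavior: the system prompts for off-task behavior, offers empathy for frustration, and stays quiet if empathetic messages have not worked for this user before. The conditions and message text are illustrative assumptions, not the framework from [10].

# Hedged sketch of adaptive direct system prompts; the thresholds and message
# text are illustrative assumptions.

def maybe_prompt(emotion: str, off_task: bool, empathy_helped_before: bool):
    """Return a system-delivered prompt string, or None if no prompt is warranted."""
    if off_task:
        return "Let's get back to the exercise - you were making good progress."
    if emotion == "frustration":
        if empathy_helped_before:
            return "This material is genuinely hard; it takes time to sink in."
        # Empathetic messages annoyed this user before, so stay quiet.
        return None
    return None

print(maybe_prompt("frustration", off_task=False, empathy_helped_before=True))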
FIGURE 4 [10]
Reactive features of intelligent tutoring systems
Reactive Features
Reactive features of intelligent tutoring systems are used during the tutoring session to make adaptations as the system sees necessary for the user. The system is able to determine the
psychological state and learning trajectory of the user to adapt
its interaction with the user. Reactive features are divided into
two different categories: CALM reactive features and direct
system prompts [10].
CALM is an acronym derived from the features that the system can adapt depending on user actions: conditions, assessments, and learning material. Conditions
refer to the current interaction between the user and the
system. If the user seems to be lacking effort, the system can
then adapt and try different methods to make the user more
involved and attentive. The next way a system can adapt to
the user is by using different types of assessments. If a user is struggling or seems frustrated, the system can shift to fewer assessments and focus more on helping the user understand
the material. However, if the user appears bored, the system
can shift to using more frequent and more difficult
assessments. Adjusting the frequency and difficulty of
assessments is beneficial to the user because it gives the user
a more personable tutoring session which challenges the user
while minimizing feelings of hopelessness and frustration.
Lastly, the system can adapt its learning material depending
on user actions. If the user shows positive progression in a
subject, such as math, the tutoring system is able to shift
learning material to another subject, such as history.
Likewise, if a user is showing confusion, the system may
decide it is best to stay focused on the subject at hand.
Adaptation of learning material can also be used to give the user a break when they show a lack of effort or attention in the session. The CALM reactive features are an important use of emotional development in intelligent software because they allow the intelligent tutoring system to give human-like interaction with the user, providing a more beneficial tutoring session [10].
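A compact sketch of CALM-style adaptation is shown below: detected frustration or boredom changes the frequency and difficulty of assessments, and progress or confusion changes the learning material. The rules are illustrative assumptions rather than the framework from [10].

# Hedged sketch of CALM-style adaptation (conditions, assessments, learning
# material); the rules are illustrative, not taken from [10].

def adapt_session(emotion: str, progress: float, session: dict) -> dict:
    """Adjust assessment frequency/difficulty and learning material in place."""
    if emotion == "frustration":
        session["assessment_frequency"] = "low"
        session["assessment_difficulty"] = "easy"
    elif emotion == "boredom":
        session["assessment_frequency"] = "high"
        session["assessment_difficulty"] = "hard"
    if progress > 0.8:                        # positive progression in the subject
        session["material"] = "next_subject"  # e.g. move from math to history
    elif emotion == "confusion":
        session["material"] = "stay"          # stay focused on the subject at hand
    return session

print(adapt_session("boredom", 0.9, {"material": "math"}))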
ADDITIONAL APPLICATIONS
Robotic Secretary
A current application of an autonomous agent in the
business world is the robot “receptionist” Nadine from
Nanyang Technological University in Singapore [11]. Nadine
has the ability to hold an intelligent conversation with a client
while gauging their emotional state and desires. Nadine can remember your name, face, and previous conversations she has had with you, and she has her own personality and emotions as well [11]. Worldwide, we are dealing with the challenge of an aging population, and these social robots can be a solution to the shrinking workforce [11].
EDGAR
EDGAR is a tele-presence robot optimized to project the
gestures of its human user [11]. By standing in front of a
specialized webcam, a user can control EDGAR remotely
from anywhere in the world. The user’s face and expressions
will be displayed on the robot’s face in real time, while the
robot mimics the person’s upper body movements [11].
“EDGAR can also deliver speeches by autonomously acting
out a script. With an integrated webcam, he automatically
tracks the people he meets to engage them in conversation,
giving them informative and witty replies to their questions”
[11]. Robots such as EDGAR are ideal for social venues such
as malls and large tourist attractions such as museums because
they can offer practical information to visitors. EDGAR's user can project themselves from anywhere to anywhere, making location challenges obsolete [11]. In education, a
professor giving lectures or classes to large groups of people
in different locations at the same time could become the new
normal.
The ToER model has been tested extensively in virtual
environments. One such example was a game, comparable to "The Legend of Zelda," involving a traveler agent and a guide agent:
“The traveler is an agent that needs to travel through an
imaginary world towards a safe destination. This world is inhabited by various dangerous creatures like dragons and
wolves, but there are also friendly people who travel through
the area. Moreover, the world is subject to natural phenomena
like lightning and earthquakes. These encounters and events
occur at random time points, but in more or less fixed areas of
the world. Since the traveler has no knowledge of the environment he is in, he has to follow the path of another agent, the guide. The guide has decent knowledge about the
environment. Based on this, the guide helps the traveler to
navigate through the world by showing him the way.
However, a major complication for the guide is that the
traveler’s behavior may be influenced by emotions (in
particular, the emotion of fear). For example, if his level of
fear becomes too high, the traveler will run away, and if it
becomes too low, the traveler will act recklessly. For these
reasons, the guide needs to maintain a correct ToER of the
traveler, and to manipulate him if necessary. Fortunately,
since the guide is familiar with the environment, he is able to predict roughly when certain events will occur. Based on
these predictions and the ToER of the traveler, the guide will
make sure the traveler reaches the final destination” [12].
ToER Agent-Agent Interactions
When agents start to show more realistic affective
behavior, the interaction between such agents may also
become more realistic; more specifically, agents may start to gain insight into the emotional state of the agents around them and try to influence it. Essentially, we must teach agents the theory of mind. If they have a theory of mind, they can anticipate the reactions of others and even begin to manipulate
their actions. The Theory of Emotional Regulation (ToER)
can be used to enable virtual agents with more realistic
affective behavior [12].
ToER Applications
INTEGRATION
In the domain of virtual environments, the model an agent has about another agent is not always completely correct. Therefore, an agent should keep updating its model of the other agent whenever possible. For the case of the ToER, the
main concepts involved in the model are the Emotional
Response Level (ERL), the baseline ERL, and the regulation
speed [12]. The agents estimate the values of these concepts
in a way that is similar to the way humans estimate each
other’s personality aspects. The only information that an
agent has to update these values is provided by the externally
observable behavior of the other agent. For example, if agent A believes that agent B has a high ERL (e.g., is very frightened) and believes that this will lead to a certain action, such as screaming, but then observes that agent B does not scream, agent A will lower its estimated ERL of agent B [12]. In
addition, the agents use two other strategies to update the
ToER: updating estimated behavior and updating estimated
regulation speed [12].
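The update strategy can be sketched as follows: when the behavior predicted from the estimated Emotional Response Level is not observed, the estimate is lowered. The step size and prediction rule are our assumptions, not the model from [12].

# Simplified sketch of the ToER update idea: lower the estimated Emotional
# Response Level (ERL) when the predicted behavior is not observed.
# The step size is an assumption, not taken from [12].

def update_erl(estimated_erl: float, predicted_action: str,
               observed_action: str, step: float = 0.1) -> float:
    """Return a revised ERL estimate for the other agent, clamped to [0, 1]."""
    if predicted_action != observed_action:
        estimated_erl -= step      # e.g. expected screaming, saw calm behavior
    else:
        estimated_erl += step / 2  # prediction confirmed: modest reinforcement
    return max(0.0, min(1.0, estimated_erl))

# Agent A expected agent B to scream (high fear) but B stayed quiet:
print(update_erl(0.8, predicted_action="scream", observed_action="silent"))  # roughly 0.7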
Once the agent has developed a reasonable model of the
other agent, it can use this to predict the other agent’s behavior
with forward and backward reasoning strategies. The agent
may then decide how to manipulate the other agent in a way
that satisfies its desires. For example, if an agent knows that
the other agent is easily scared and that a scary event is about
to happen, the agent may talk to the other agent in order to
calm them down and prevent them from running away, which
is not what the agent desires to happen [12].
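A simple sketch of this forward reasoning is shown below: given the estimated ERL and a predicted scary event, the guide decides whether a calming intervention is needed to keep the other agent's fear below the level at which it would run away. The threshold and numbers are illustrative assumptions.

# Sketch (assumed logic) of forward reasoning with the ToER model: predict the
# other agent's future fear level and decide whether to intervene.

def plan_intervention(current_erl: float, event_scariness: float,
                      flee_threshold: float = 0.8) -> str:
    predicted_erl = min(1.0, current_erl + event_scariness)  # forward reasoning
    if predicted_erl >= flee_threshold:
        return "talk_calmly"      # prevent the other agent from running away
    return "do_nothing"

print(plan_intervention(0.5, 0.4))   # talk_calmly
print(plan_intervention(0.2, 0.3))   # do_nothing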
The possibilities associated with autonomous agents are
endless in business, education, and day-to-day life. When
robots have the ability to understand emotion, people can change their perception of the agents from machines to more human-like beings. Technology and programming are
advancing at a promising rate and autonomous agents are
benefiting from these developments. The development of GEmA and ToER has taken the agents leaps and bounds beyond where they started, allowing them to understand, as well as manipulate, both their own emotions and those of others around them. By predicting the reactions of others, agents can
prevent negative emotions and lead to more praise and
positive outcomes. Robots can be integrated into society in a
more harmonious way when they are able to understand and
express emotions.
REFERENCES
[1] M. Kazemifard, N. Ghasem-Aghaee, B. L. Koenig, T. I.
Oren. (2014). “An emotion understanding framework for
intelligent agents based on episodic and semantic memories.”
Autonomous Agents and Multi-Agent Systems. (Online
Article).
[2] S. Jain, K. Asawa. (2016). “Programming an expressive
autonomous agent.” 2016 Expert Systems with Applications.
Vol. 43. p 131-141. (Article).
[3] T. Bosse, F. De Lange. (2011). “On virtual agents that
regulate each other’s emotions”. 2011 Web Intelligence and
Agent Systems. (Journal Article).
[4] C. de Melo. (2010). “Influence of Autonomic Signals on
Perceptions of Emotions in Embodied Agents”. 2010. Applied
Artificial Intelligence. (Journal Article).
[5] K. Mistry, Z. Li, S. C. Neoh, M. Jiang, A. Hossain, B.
Lafon. (2014). “Intelligent Appearance and shape based facial
emotion recognition for a humanoid robot.” 2014 8th
International Conference on Software, Knowledge,
Information Management and Applications. (Online Article).
[6] C. L. Lisetti, D. J. Schiano. (2000). “Automatic Facial
Expression Interpretation: Where Human-Computer
Interaction, Artificial Intelligence and Cognitive Science
Intersect.” Pragmatics and Cognition. (Online Article).
[7] Z. Liu. (2008). “A personality based emotion model for
intelligent virtual agents.” 2008 Fourth International
Conference on Natural Computation. (Online Article).
[8] M. Kazemifard, N. Ghasem-Aghaee, T. Oren. (2011).
“Design and implementation of GEmA: a Generic Emotional
Agent”. 2011 Expert Systems with Applications. (Article).
[9] R. Zatarain-Cabada, M. L. Barron-Estrada, J. L. O.
Camacho, C. A. Reyes-Garcia. (2013). “Integrating Learning
Styles and Affect with an Intelligent Tutoring System.” 2013
12th Mexican International Conference on Artificial
Intelligence (MICAI). (Online Article).
[10] J. M. Harley, S. P. Lajoie, C. Frasson, N. C. Hall. (2015).
“An Integrated Emotion-aware Framework for Intelligent
Tutoring Systems.” Artificial Intelligence in Education. 17th
International Conference. (Online Article).
[11] L. Kok. (2015). “NTU Scientists Unveil Social and
Telepresence Robots”. Nanyang Technological University.
(Online Article).
[12] T. Bosse. (2011). “On Virtual Agents That Regulate
Each Other’s Emotions”. Web Intelligence & Agent Systems.
(Journal Article).
ADDITIONAL SOURCES
A. Egges, S. Kshirsagar, N. Magnenat-Thalmann. “Generic
Personality and Emotion Simulation for Conversational
Agents”. Computer Animation and Virtual Worlds. 15(1): pp
1-13. January 2004. (Journal Article).
G. J. Koprowski. (2003). “Socially Intelligent Software:
Agents Go Mainstream.” Tech News World. (Online Article).
(2016). “Scientists unveil Social and Telepresence Robots
Powered by Intelligent Software.” Scientific Computing.
(Online Article).
ACKNOWLEDGEMENTS
We would like to thank Beth Newborg for providing informative tools for writing and formatting our paper. We would also like to thank Anne Schwan for presenting useful databases for our information. Additionally, we would like to thank Chase Barilar for meeting with us to give us tips on writing this paper and making sure we are keeping up with due dates. Lastly, we would like to thank John Calvasina for reading our paper and providing helpful feedback.