Mood-Changing Adaptive Interface

Affecting Affect – Mood Control Interface
Iulian Radu :: 43837004
Introduction
Emotions have important, often uncontrollable, effects on human behaviour. They serve to
motivate action and cognitive processing (for example, people in a sad state have been shown to
engage in less cognitive activity [1]), they indicate the state of the world in relation to our
goals (for example, a feeling of contentment indicates that our goals are achieved and no further
goal generation or planning is necessary [1]), and they affect the focus of our cognition and
perception (for example, people in a highly excited emotional state have been shown to focus
on central characters and plot actions, such that they remember the size of a gun but forget
the face of a criminal [1]). We have various ways of actively changing our emotions:
talking to friends, reflecting on our mood by expressing thoughts through
song or writing, or engaging in activities that invoke more positive emotions. However, a
person may not have time in the middle of a busy day for such activities; it would
be useful if there existed methods for modifying a person's emotions while they are engaged
in a task, for example while working on a computer. This research will investigate the
effects of using a computer interface to detect and modify a user's emotions.
Research hypothesis:
A user's emotions can be modified through an emotionally-adaptive computer interface.
The research objectives are twofold. Primarily, we wish to discover how much a user's
emotions can be modified by a non-intrusive interface, specifically whether transitions
between emotions can be accomplished (such as invoking a happy mood in an initially sad
user). Furthermore, we will qualitatively investigate the user's reaction when an artificial
interface attempts to modify his/her emotions.
Background work
Emotions have been extensively studied from a purely psychological perspective, as outlined
above. Their significant effects have been quantified in relation to various tasks such as
corporate productivity [2], memory [1] and learning [1].
Various modalities exist for detecting emotion. Galvanic skin response sensors attached to the
skin can sense arousal, fear, or anger [6]. Heart rate can be monitored to indicate agitation, stress
or calmness [5]. Body temperature can indicate stress, fear, or shock [5]. Analysis of talking
faces and vocal acoustic streams allows observation of a very wide variety of emotions during
face-to-face interactions. Electromyogram (EMG) sensors, implanted in muscle tissue, can
measure nervousness and relaxation [5]. Electroencephalograph (EEG) scalp sensors can
non-invasively measure brain activity and detect many different emotions such as excitement,
curiosity, sadness, concentration, joy, anxiety, happiness, fear, and calmness [4]. Finally, a
large-scale Magnetic Resonance Imaging (MRI) machine can also detect a similarly wide
variety of emotions [7].
To modify a user's emotion, several methods have been shown to be effective. Colors are
commonly known to evoke emotions; for example, red causes excitement and alertness, while blue
is associated with peace and serenity [9]. Music is also strongly tied to emotion, a fact well
known to motion picture creators: high-tempo trance music causes excitement, while a slow
rhythm with low pitch may invoke melancholy [10]. Scents can invoke mood changes [10].
Exposure to familiar pictures also stimulates emotions in users; for example, a photo of a
rose can conjure memories of happy times, while a clown may instil fear.
Computer software will be used to analyse the emotion sensor readings and modify the
user's interface in order to produce emotional changes. To accomplish this task, methods will
be employed from Computer Science studies of “adaptive interfaces” [11]. This area of study
deals with modifying user interfaces in response to the user, in ways needed to accomplish
specific goals such as providing better help or improving learning efficiency. User readings
are usually gathered by analysing data from electronic sensors, or simply by monitoring
response times and areas of attention [10]. These readings are entered into a model [11]
which tracks the user's mental state over time and determines how to adapt the interface.
Models depend on the complexity of the application, and range from simple mappings
between user response and interface adaptation, to complex probabilistic models accounting
for multiple variables and outcomes. The interface may be modified in various ways
depending on the application, ranging from overt changes, such as an interactive agent
appearing to offer help, to covert changes, such as a shift in color or the presentation of
easier questions on a GRE test.
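To make the simplest end of this spectrum concrete, the sketch below (Python) maps a discrete user-state reading directly to an interface adaptation; the state names, adaptation names, and the two hook functions are hypothetical placeholders rather than part of any particular system.

# Minimal sketch of a mapping-based adaptive interface.
# read_user_state() and apply_adaptation() are hypothetical hooks standing in
# for the sensor-analysis and interface-control layers.

ADAPTATION_RULES = {
    "frustrated": "offer_help_agent",     # overt adaptation
    "bored":      "increase_difficulty",  # covert adaptation
    "anxious":    "soften_colors",        # covert adaptation
}

def adapt_once(read_user_state, apply_adaptation):
    state = read_user_state()             # e.g. "frustrated"
    action = ADAPTATION_RULES.get(state)  # look up the mapped adaptation
    if action is not None:
        apply_adaptation(action)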
System Overview
The final system will contain three components: EEG data will first be measured through
scalp sensors, then a model will process the EEG readings into emotions, and finally the
interface will adapt in order to transform the user's current emotion into a desired emotion.
The user will be connected to an EEG cap which will measure brain activity. The readings
from multiple sensors will be transferred to the computer through a direct cable interface such
as a serial port (if the range of user mobility becomes a problem, this can later be changed to
a high-speed wireless connection such as Bluetooth or WiFi). There are several reasons for
using EEG over the other emotion detection methods presented: this approach is non-intrusive
to the user, a wide variety of emotions have been shown to be detectable, the EEG cap is
highly portable and easy to connect to the user, and readings can be gathered in real time.
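As a sketch of the acquisition step, assuming the cap streams one comma-separated sample per line over a serial port (the port name, baud rate, and framing are assumptions about the hardware), the pyserial library could be used as follows:

import serial  # pyserial

def read_samples(port="COM3", baud=115200, n_samples=256):
    """Read a block of multi-channel EEG samples from the cap.

    "COM3", the baud rate, and the one-line-per-sample framing are assumptions
    about how the EEG hardware exposes its readings.
    """
    samples = []
    with serial.Serial(port, baud, timeout=1) as ser:
        for _ in range(n_samples):
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if line:
                # one floating-point value per electrode channel
                samples.append([float(v) for v in line.split(",")])
    return samples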
Next, the computer will analyse the EEG readings and classify the emotional state using a
probabilistic Bayesian model. This type of model is used in order to account for uncertainty in
the prediction of emotion; the EEG readings will most likely be noisy, and it is thus useful to
know the degree of certainty of a prediction. Initially the system will focus only on the
emotion with the highest degree of detectability; however, if the system is extended to respond to
multiple emotions, this type of model can easily accommodate the task. Given the
emotional reading, the model will predict which type of interface adaptation will be most
effective in yielding a desired emotion.
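As a minimal sketch of this classification step, assuming a single summary EEG feature per reading, Gaussian class-conditional likelihoods estimated from the calibration sessions, and a uniform prior (all of these are assumptions, not measured parameters):

import math

# Hypothetical per-emotion (mean, std) of one EEG feature, e.g. average
# alpha-band power, as it might be estimated from the calibration sessions.
EMOTION_MODELS = {
    "happiness": (0.8, 0.20),
    "sadness":   (0.3, 0.20),
    "calmness":  (0.6, 0.15),
}
PRIOR = 1.0 / len(EMOTION_MODELS)  # uniform prior over the modeled emotions

def gaussian_pdf(x, mean, std):
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def emotion_posterior(feature):
    """Return P(emotion | feature) via Bayes' rule, normalized over emotions."""
    unnormalized = {emotion: PRIOR * gaussian_pdf(feature, mean, std)
                    for emotion, (mean, std) in EMOTION_MODELS.items()}
    total = sum(unnormalized.values())
    return {emotion: p / total for emotion, p in unnormalized.items()}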
Finally, the interface will adapt according to the suggestions from the model. Since we wish to
affect the user while they are engaged in an activity, the interface will modify the ambiance of
the Windows user interface. Specifically, the interface will change the color of window
borders and the music playing in the background. A small toolbar will be added to one side of the
user's display, in which small thumbnails of pictures will appear, controlled by the emotionally-adaptive
interface. This approach ensures that the user's computer activity is neither disturbed by
nor centered on our interface, yet the user is still affected by our system in most of the tasks they
engage in.
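A sketch of how the model's suggestions might be dispatched to the ambiance; the three helper functions, the adaptation names, and the specific colors and file names are hypothetical placeholders for Windows-specific code and for the stimulant set chosen in the user study:

# Hypothetical ambiance helpers; real implementations would call Windows APIs
# for window chrome, background audio playback, and the thumbnail toolbar.
def set_border_color(rgb): ...
def play_background_music(track): ...
def show_thumbnail(image_path): ...

# Each adaptation suggested by the model maps onto one or more ambiance changes.
ADAPTATIONS = {
    "calm_down": lambda: (set_border_color((70, 130, 180)),          # blue tones
                          play_background_music("ambient_calm.wav"),
                          show_thumbnail("pictures/scenery.jpg")),
    "energize":  lambda: (set_border_color((200, 40, 40)),           # red tones
                          play_background_music("trance_high_tempo.wav")),
}

def apply_ambient_adaptation(name):
    action = ADAPTATIONS.get(name)
    if action is not None:
        action()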
Research Development
For the initial development of the system, a single user will provide readings and
perform prototype tests. Once a prototype is complete, more subjects will be recruited to
generalize the system for future users.
The first step in development is to hold a 2.5-hour user study session, to calibrate the EEG
and emotional stimulants to the user. During the first hour of this meeting, while connected to
the EEG recording apparatus, the user will be presented with a set of emotions such as fear,
anger, sadness, joy, happiness, and calmness. For each one, the user will be asked to recall
an event in which that emotion was provoked. Once such an event has been recalled, the user
will verbally confirm that he/she is experiencing this emotion, and continue to focus on it
for 30 seconds. This session will provide EEG data correlated with specific emotions.
Once all listed emotions have been exhausted, this process will repeat two more times, in
order to provide a more accurate data set for each emotion.
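A sketch of the labeled data set this session might produce, assuming 30-second segments and three repetitions per emotion; record_segment and confirm_emotion are hypothetical hooks around the EEG acquisition and the verbal confirmation step:

EMOTIONS = ["fear", "anger", "sadness", "joy", "happiness", "calmness"]
REPETITIONS = 3

def run_calibration(record_segment, confirm_emotion):
    """Collect EEG segments labeled with the emotion the user focused on.

    record_segment(seconds) returns the samples captured over that interval;
    confirm_emotion(name) blocks until the user verbally confirms the emotion.
    """
    dataset = []  # list of (emotion, samples) pairs
    for _ in range(REPETITIONS):
        for emotion in EMOTIONS:
            confirm_emotion(emotion)      # user recalls an event and confirms
            samples = record_segment(30)  # 30-second EEG recording
            dataset.append((emotion, samples))
    return dataset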
After a 30-minute break, the user will be asked to concentrate on a variety of stimulants and
to specify which emotion each stimulant invokes, along with the strength of that emotion
(on a 1-10 scale). The stimulants will be those expressed by the user interface: different
colors, a variety of music (such as high rhythm electronica, rock, dramatic vocalizations,
popular classical, and calm ambient), and pictures of various scenes (such as natural
scenery, social celebrations, graveyards, city landscapes, flowers, etc). This will allow the
system to build a correlation between stimulant and emotion evoked, and determine which
stimulants are more effective at causing a specific emotion. At the same time, the user will
still wear the EEG cap, and readings can also be used to further correlate emotional state
with EEG data.
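A sketch of how the stimulant ratings might be aggregated into a per-emotion effectiveness ranking; the tuple format and the use of a simple mean strength are assumptions:

from collections import defaultdict

def build_effectiveness_table(ratings):
    """ratings: iterable of (stimulant, emotion, strength) tuples, strength 1-10.

    Returns, for each emotion, the stimulants sorted by mean reported strength,
    so the model can prefer the most effective stimulant for a target emotion.
    """
    sums = defaultdict(lambda: [0.0, 0])  # (emotion, stimulant) -> [total, count]
    for stimulant, emotion, strength in ratings:
        entry = sums[(emotion, stimulant)]
        entry[0] += strength
        entry[1] += 1

    table = defaultdict(list)
    for (emotion, stimulant), (total, count) in sums.items():
        table[emotion].append((stimulant, total / count))
    for emotion in table:
        table[emotion].sort(key=lambda pair: pair[1], reverse=True)
    return dict(table)

# Example: build_effectiveness_table([("blue_border", "calmness", 7),
#                                     ("trance_music", "joy", 8)])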
Next, the probabilistic model will be built from these correlations. Symbolic nodes will be built
to represent each possible user emotion, and the emotion probabilities will be determined by
correlations from EEG readings. Another layer of nodes will then be created to represent the
interface adaptations, and the correlations from the second study will be used to relate
stimulants to emotions. In the end, this model will be able to predict the emotional state from
EEG data and, given a desired emotion, suggest a set of interface adaptations to
gradually transition the user from the present emotion to the desired one.
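A sketch of this final prediction step, combining the posterior from the classifier with the stimulant effectiveness ranking; the decision rule (do nothing if the desired emotion is already dominant, otherwise pick the top-rated stimulants for it) is an assumption about how the model would be used:

def suggest_adaptations(posterior, desired_emotion, effectiveness, top_k=2):
    """Suggest interface adaptations for moving toward the desired emotion.

    posterior:     P(emotion | EEG), e.g. from emotion_posterior() above.
    effectiveness: per-emotion stimulant ranking, e.g. from
                   build_effectiveness_table() above.
    """
    current = max(posterior, key=posterior.get)   # most probable current emotion
    if current == desired_emotion:
        return []                                 # already there; no change needed
    ranked = effectiveness.get(desired_emotion, [])
    return [stimulant for stimulant, _ in ranked[:top_k]]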
The adaptive interface will then be programmed. The main focus of this task will be to
express the emotional stimulants by modifying the user's environment as outlined in the
system overview section. The software will interface with the Windows operating system to
modify ambient colors, pictures and music. Pictures and music will be supplied with the
software, and will constitute the same set used in the user study. In the future, the
software could be extended to analyse pictures and music found on the user's computer so
that personal items can be used in user stimulation.
Finally, a field test will be performed to judge the system's effectiveness in a real setting. The
user will be asked to use his/her computer for roughly 3 hours a day while connected to this
system. The system will operate normally, reading the user's EEG signals and trying to
stimulate an emotion of happiness during the first week. During the second week, once the
user is accustomed to the system, they will be able to choose a desired emotion for
stimulation, and also set the speed of transition. At the end of the two weeks, the user will be
interviewed in person. This will help us determine whether the system can modify the user's
emotions, which emotion transition speeds are appropriate, and how the user feels when
his/her emotions are modified by a computer interface.
The last component of this research investigates the generalization of the interface, to
determine whether new users can easily take advantage of the system. For this step we will recruit
20 users to participate in the initial user studies, which correlate emotional state to EEG
readings and artificial stimulants to emotions. Analysis of the aggregated data from this group
will determine if there are correlations between people, indicating how similar people's
EEG signals are for a specific emotion, and how similarly people respond to specific stimulants. If
the correlations are strong enough, new users could use the system without the need for the initial
correlation analysis, improving its marketability.
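A sketch of this cross-user analysis, assuming each participant is summarized by one feature vector per emotion and using the mean pairwise Pearson correlation as the similarity measure (both are assumptions about how the aggregated data would be analysed):

import numpy as np

def cross_user_similarity(user_features):
    """user_features: {user_id: {emotion: 1-D feature vector}}.

    For each emotion, returns the mean pairwise correlation between users'
    feature vectors; values near 1 suggest that EEG signatures generalize.
    """
    similarity = {}
    emotions = next(iter(user_features.values())).keys()
    for emotion in emotions:
        vectors = np.array([feats[emotion] for feats in user_features.values()])
        corr = np.corrcoef(vectors)                    # users x users matrix
        upper = corr[np.triu_indices_from(corr, k=1)]  # unique user pairs
        similarity[emotion] = float(upper.mean())
    return similarity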
Implications
This research will have implications for areas of emotional psychology, human-computer
interfaces, and EEG technology.
It will provide data for correlating emotions to EEG signals in non-experimental settings, as
people interact with their personal computers in their usual space. Data will also be generated
on correlations between people experiencing the same emotion, adding to the study of how
widely EEG signals generalize. If generalizations can be drawn, these findings will push EEG
technology to be used for a wider variety of applications.
Through this study, we will also better understand the effectiveness of the various methods of
emotional stimulation. This will have positive implications for psychologists and educators, as
they will be able to provide more pleasant environments for patients/pupils, and be able to
stimulate emotions productive for emotional recovery and learning. Socially negative effects
could be generated by this research, as the results may be used by advertisers to create
more misleading marketing campaigns, or by corporations to unpleasantly force more
productivity out of their employees. If EEG technology is shown to be highly effective at
measuring emotions, it could be a strong tool for advancing research in mind-control
techniques.
This study also emphasizes an interesting paradigm – the extension
of the user's emotions into the environment. Our behaviour is affected by emotions, yet we
are not normally aware of how emotions affect the people and things around us. Through the
direct reflection of emotional state by the computer interface in this study, the user will
become visually and acoustically aware of how emotions cause changes in the environment.
This will most likely increase the user's emotional intelligence, causing them to be more
aware of their own and others' emotions. This may spur further research on effective means
for users to regulate their own emotions, and the creation of other applications for modifying
environment based on emotions (this may be especially applicable in the design of living
spaces which adapt to the state of their inhabitants).
Certainly this study will develop the understanding of how emotions can be integrated with
computer interfaces. This could lead to the creation of interfaces and robots which better
respond to the user by accounting for emotion. In the future, if detection of emotion and
cognitive state become finely tuned, we may be able to develop systems which are aware of
the user's intentions; these systems could adapt in real time to provide what the user desires, by
simply reading mental states, without need for the user to even move the mouse.
References
1. Dodd, M., 2006. "Memory, Emotion and Cognition". Available at
http://www.psych.ubc.ca/~mike/Psy309/emotionlecture_topost6.pdf
2. Kahn, J. P., 2005. "Mental Health and Productivity in the Workspace: A Handbook for
Organizations and Clinicians". Available at
http://ps.psychiatryonline.org/cgi/content/full/56/1/110?rss=1
3. Monastra, V. J., Lynn, S., 2005. "Electroencephalographic Biofeedback in the Treatment
of AD/HD". Applied Psychophysiology and Biofeedback, vol. 30, no. 2.
4. Bowman, H., 2005. "Emotions, Salience, Sensitive Control of Human Attention and
Computational Modelling". Available at
http://www.cs.kent.ac.uk/people/staff/hb5/attention.html
5. Nasoz, F., Alvarez, K., Lisetti, C. L., 2004. "Emotion Recognition from Physiological Signals
for Presence Technologies". Cognition, Technology & Work, vol. 6, no. 1.
6. Wikipedia, 2007. "Galvanic Skin Response". Available at
http://en.wikipedia.org/wiki/Galvanic_skin_response
7. ScienceDaily, 2007. "MRI studies provide new insight into how emotions interfere with
staying focused". Available at
http://www.sciencedaily.com/releases/2002/08/020820071045.htm
8. Picard, R. W., Cosier, G., 1997. "Affective Intelligence – the Missing Link?". BT
Technology Journal, vol. 15, no. 4.
9. Rollins, W., 2005. "The Psychology of Color". Available at
http://coe.sdsu.edu/eet/Articles/wadecolor/start.htm
10. Hamilton, R., 2006. "Bioinformatic Feedback: Performer Bio-data as a Driver for Real-time
Composition". New Interfaces for Musical Expression, NIME 06.
11. Nasoz, F., Alvarez, K., Lisetti, C. L., 2004. "Emotion Recognition from Physiological Signals
for Presence Technologies". Cognition, Technology & Work, vol. 6, no. 1.