Virtual Tutor Application:
Report
Submitted to:
Dr. Jie Yan
Assistant Professor
Computer Science Department
Bowie State University

Submitted by:
Ruth Agada
Research Assistant
Table of Contents
Acknowledgements
Abstract
Introduction
1.1 Overview of Animated Pedagogical Agents
1.2 Aim of Virtual Tutor Application
System Analysis, Design and Implementation
1.1 Hardware Specifications
1.2 Software Specifications
1.3 Design Implementation
Technology Overview
1.1 Java API
1.2 Autodesk 3DS Max
1.3 CU Animate System
Results
Conclusion
References
Abstract:
The development of effective animated pedagogical agents is a topic of rising interest in the
computer science community. Studies have shown that effective individual tutoring is the most
powerful mode of teaching. These agents are designed to be lifelike autonomous characters that
support human learning by creating rich, face-to-face learning interactions. Animated
pedagogical agents offer great promise for broadening the bandwidth of tutorial communication
and increasing learning environments' ability to engage and motivate students. It is becoming
apparent that this generation of learning technologies will have a significant impact on education
and training. Successful outcomes of this research will provide a new procedure for developing
more engaging and natural dialogs and narrations by pedagogical agents, which are expected to
lead to more effective learning outcomes. They will also provide a strong theoretical and
empirical foundation, as well as pilot data, to support subsequent research to be described in an
NSF CAREER proposal. Subsequent research will aim to extend the knowledge gained in the
proposed experiments and to automate many of the procedures that are implemented manually
in the proposed work.
Introduction
1.1 Overview of Animated Pedagogical Agents
A great deal of research is being conducted on the development of effective animated
pedagogical agents. Animated pedagogical agents are designed to be lifelike autonomous
characters that support human learning by creating rich, face-to-face learning interactions
[20]. These agents are capable of taking full advantage of the verbal and non-verbal
communication normally reserved for human interactions [1]. They have been endowed with
human-like qualities to make them more engaging, to make the learning experience more
beneficial, and to prevent distracting behaviors (unnatural movements) [20, 21]. They take
full advantage of face-to-face interaction to extend and improve intelligent tutoring systems
[20, 39].
Studies have shown that effective individual tutoring is the most powerful mode of
teaching. However, individual human tutoring for each and every student is logistically and
financially impossible, hence the creation and development of intelligent tutoring systems
[36] to reach a broader audience. According to Suraweera and Graesser [39, 14], several
intelligent tutoring systems have been successfully tested and have shown that this
technology does in fact improve learning.
Researchers face a common problem: how to create an effective user interface that provides
the user with a believable experience. The idea is to create a system that uses intelligent,
fully animated agents to engage its users in natural, face-to-face instructional interaction.
To use agents most powerfully, designers can incorporate suggestions from research about
agents concerning speech quality, the personality or ethnicity of the agent, and the
frequency and verbosity of reward. Designers can also incorporate what research says about
effective human teachers or therapists into the behavior of their agent [10]. In the
development of the Virtual Tutor, both kinds of research were incorporated to make the
agent more effective and powerful. The virtual tutor, Julie, gives lectures on subject matter
chosen by the student, and during quizzes the agent provides the student with feedback
based on specific errors made during the quiz.
1.2 Aim of Virtual Tutor Application
The objectives of this project are:
• to develop a powerful new experimental approach for investigating engaging and effective
communication by lifelike animated characters through speech, head movements and facial
expressions of emotion, and
• to conduct experiments to gain new insights about how voice and facial expressions can be
combined in optimal ways to enable pedagogical agents to provide more believable and effective
communication experiences.
We aim to achieve better comprehension, higher retention rates, and an improved learning
experience by developing original narratives that contain the six basic emotions: happiness,
surprise, fear, anger, sadness, and disgust.
System Analysis, Design and Implementation
This section defines the parameters that the software product must follow while interacting with
the outside world.
1.1 Hardware Specifications
Processor: Intel Core 2 Duo / dual core
RAM: 3.0 GB
Hard disk required: 3 GB
1.2 Software Specifications
Operating system: Windows XP / 7
JDK toolkit: JDK version 1.6.0_18
Other libraries: vht.jar, blt.jar, CSLR.jar
1.3 Design Implementation
Software design involves conceiving, planning, and specifying the externally observable
characteristics of the software product. The design process includes data design, architectural
design, and user interface design, which are explained in the following section. The goal of the
design process is to provide a blueprint for implementation, testing, and maintenance activities.
Index Files
The index files are text files that list each aspect of a lecture, divided into subsections for
ease of reference. Notepad was used to create these files. The format of a lecture index file is
as follows:
<title of subsection> | <file path>
…
QUIZ | <file path>
A lecture comprises the title of that lecture and the various subsections included in it. The
primary index file notes the courses that the student is registered for and, for each course, an
outline of the topics to be taught that semester. The primary index file follows a similar
format to the one described above, but contains several additional features that allow the
application to differentiate each lecture and course. It is as follows:
--- <course1>
<topic1> | <file path>
<topic2> | <file path>
…
--- <course2>
<topic1> | <file path>
<topic2> | <file path>
…
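The report does not include the index-loading code itself, but the following minimal Java sketch shows how a primary index file in the format above could be read. The class name IndexFileReader and the file name courses.txt are illustrative only and are not taken from the actual application.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Illustrative reader for the primary index file format shown above. */
public class IndexFileReader {

    /** Maps each course name to its list of {topic, file path} pairs. */
    public static Map<String, List<String[]>> readPrimaryIndex(String path) throws IOException {
        Map<String, List<String[]>> courses = new LinkedHashMap<>();
        String currentCourse = null;
        for (String line : Files.readAllLines(Paths.get(path))) {
            line = line.trim();
            if (line.isEmpty()) {
                continue;
            }
            if (line.startsWith("---")) {
                // "--- <course>" starts a new course block.
                currentCourse = line.substring(3).trim();
                courses.put(currentCourse, new ArrayList<>());
            } else if (currentCourse != null) {
                // "<topic> | <file path>" lines belong to the current course.
                String[] parts = line.split("\\|", 2);
                if (parts.length == 2) {
                    courses.get(currentCourse).add(new String[] { parts[0].trim(), parts[1].trim() });
                }
            }
        }
        return courses;
    }

    public static void main(String[] args) throws IOException {
        // "courses.txt" is a placeholder name for the primary index file.
        Map<String, List<String[]>> courses = readPrimaryIndex("courses.txt");
        courses.forEach((course, topics) -> {
            System.out.println(course);
            topics.forEach(t -> System.out.println("  " + t[0] + " -> " + t[1]));
        });
    }
}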
Lecture files
Based on the outline topic the user selects (see the lecture-window screenshots in the Results
section), the discussion for that section is displayed in a box below the agent while the agent
reads the text aloud. If an image is associated with the discussion, another window pops up with
the image and corresponding text. The topic file contains several references for the agent to
work with as needed. The format of a lecture file is as follows:
--- <phoneme file (.txt)> | <.wav file >
<discussion text>
LoadImage | <file path>
Explanation | <file path>
In future versions of this application, video files will be added to lectures and the agent will
be able to describe the contents of the video.
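For illustration only, a lecture entry in the format above could be represented in Java roughly as follows. The class and field names here are hypothetical and are not taken from the application's source.

/**
 * Illustrative holder for one lecture file, assuming the format above.
 */
public class LectureSection {
    String phonemeFile;      // from the "--- <phoneme file (.txt)> | <.wav file>" header
    String waveFile;         // recorded narration the agent lip-syncs to
    String discussionText;   // text displayed in the box below the agent
    String imagePath;        // optional "LoadImage | <file path>" entry
    String explanationPath;  // optional "Explanation | <file path>" entry

    /** Parses the header line "--- <phoneme file> | <.wav file>". */
    void parseHeader(String headerLine) {
        String[] parts = headerLine.substring(3).split("\\|", 2);
        this.phonemeFile = parts[0].trim();
        this.waveFile = parts.length > 1 ? parts[1].trim() : null;
    }
}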
Quiz files
The quiz file contains questions and answers (the correct answer is indicated with each
question), any comments the instructor has for the user, and options to shuffle both
questions and answers. The format is as follows; note that the commands listed below
control the shuffling of the quiz's questions and answers:
Comment. <comment>
Shuffle Answers.
Don't Shuffle Answers.
Shuffle Questions.
Don't Shuffle Questions.
Shuffle These Answers.
Don't Shuffle These Answers.
Question. <question>
Answer. <answer>
Correct answer. <answer>
…
Question. <question>
…
Adding the keyword “these” shuffles or unshuffles only the current question and answer.
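As a rough sketch of how these shuffle directives could be applied once the quiz file has been parsed, consider the hypothetical Java helper below. None of these class or field names come from the actual application; they only illustrate the behavior described above.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Illustrative application of the quiz-file shuffle directives. */
public class QuizShuffler {

    static class QuizQuestion {
        String text;                              // "Question. <question>"
        List<String> answers = new ArrayList<>(); // "Answer. <answer>" lines
        String correctAnswer;                     // "Correct answer. <answer>"
        boolean shuffleThis = true;               // toggled by "Shuffle/Don't Shuffle These Answers."
    }

    /** Applies the global "Shuffle Questions." and "Shuffle Answers." directives. */
    static void shuffle(List<QuizQuestion> quiz, boolean shuffleQuestions, boolean shuffleAnswers) {
        if (shuffleQuestions) {
            Collections.shuffle(quiz);
        }
        for (QuizQuestion q : quiz) {
            if (shuffleAnswers && q.shuffleThis) {
                // The correct answer is tracked by value, so reordering the list is safe.
                Collections.shuffle(q.answers);
            }
        }
    }
}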
Technology Overview
1.1 Java API
The Java API is a set of classes and interfaces that ships with the JDK. It is a large collection
of library routines that perform basic programming tasks such as looping, displaying GUI
forms, and so on.
In the Java API, classes and interfaces are organized into packages. All of these classes are
written in the Java programming language and run on the JVM. Java classes are platform
independent, but the JVM is not: a different download is provided for each operating system.
The Java platform comprises three components:
the Java language,
the JVM (Java Virtual Machine), and
the Java API (Java application programming interface).
The Java language defines an easy-to-learn syntax and semantics for the Java programming
language. Every programmer must understand this syntax and these semantics to write
programs in Java.
Types of Java API
There are three types of API available in Java technology.
Official Java Core API
The official core API is part of the JDK download. The three editions of the Java platform
are Java SE, Java ME, and Java EE.
Optional Java APIs
Optional Java APIs can be downloaded separately. The specification of each API is defined
through a Java Specification Request (JSR).
Unofficial APIs
These APIs are developed by third parties and can be downloaded from the owners' websites.
Sample Java GUI code (the DukeAnim demo, which displays an animated GIF):
import java.awt.*;
import java.awt.event.*;
import java.awt.image.ImageObserver;
import java.awt.image.BufferedImage;
import javax.swing.*;
import java.net.URL;

/**
 * The DukeAnim class displays an animated gif with a transparent background.
 */
public class DukeAnim extends JApplet implements ImageObserver {

    private static Image agif, clouds;
    private static int aw, ah, cw;
    private int x;
    private BufferedImage bimg;

    public void init() {
        setBackground(Color.white);
        clouds = getDemoImage("clouds.jpg");
        agif = getDemoImage("duke.running.gif");
        aw = agif.getWidth(this) / 2;
        ah = agif.getHeight(this) / 2;
        cw = clouds.getWidth(this);
    }

    public Image getDemoImage(String name) {
        URL url = DukeAnim.class.getResource(name);
        Image img = getToolkit().getImage(url);
        try {
            // Block until the image has fully loaded.
            MediaTracker tracker = new MediaTracker(this);
            tracker.addImage(img, 0);
            tracker.waitForID(0);
        } catch (Exception e) {}
        return img;
    }

    public void drawDemo(int w, int h, Graphics2D g2) {
        // Scroll the cloud image to the left, wrapping when it leaves the panel.
        if ((x -= 3) <= -cw) {
            x = w;
        }
        g2.drawImage(clouds, x, 10, cw, h - 20, this);
        g2.drawImage(agif, w / 2 - aw, h / 2 - ah, this);
    }

    public Graphics2D createGraphics2D(int w, int h) {
        Graphics2D g2 = null;
        if (bimg == null || bimg.getWidth() != w || bimg.getHeight() != h) {
            bimg = (BufferedImage) createImage(w, h);
        }
        g2 = bimg.createGraphics();
        g2.setBackground(getBackground());
        g2.setRenderingHint(RenderingHints.KEY_RENDERING,
                            RenderingHints.VALUE_RENDER_QUALITY);
        g2.clearRect(0, 0, w, h);
        return g2;
    }

    public void paint(Graphics g) {
        // Double buffering: render into an off-screen image, then copy it to the screen.
        Dimension d = getSize();
        Graphics2D g2 = createGraphics2D(d.width, d.height);
        drawDemo(d.width, d.height, g2);
        g2.dispose();
        g.drawImage(bimg, 0, 0, this);
    }

    // Overrides imageUpdate to control the animated gif's animation.
    public boolean imageUpdate(Image img, int infoflags,
                               int x, int y, int width, int height) {
        if (isShowing() && (infoflags & ALLBITS) != 0)
            repaint();
        if (isShowing() && (infoflags & FRAMEBITS) != 0)
            repaint();
        return isShowing();
    }

    public static void main(String argv[]) {
        final DukeAnim demo = new DukeAnim();
        demo.init();
        JFrame f = new JFrame("Java 2D(TM) Demo - DukeAnim");
        f.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) { System.exit(0); }
        });
        f.getContentPane().add("Center", demo);
        f.pack();
        f.setSize(new Dimension(400, 300));
        f.show();
    }
}
1.2 AUTODESK 3DS MAX
Autodesk 3ds Max, formerly 3D Studio MAX, is a modeling, animation, and rendering package
developed by Autodesk Media and Entertainment. It has strong modeling capabilities, a flexible
plug-in architecture, and runs on the Microsoft Windows platform. It is used by video game
developers, TV commercial studios, and architectural visualization studios, and is also used for
movie effects and movie pre-visualization.
In addition to its modeling and animation tools, the latest version of 3ds Max also features
shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle
systems, radiosity, normal map creation and rendering, global illumination, a customizable user
interface, and its own scripting language.
Features
3D Modeling
Autodesk 3ds Max and Autodesk 3ds Max Design software have one of the richest 3D modeling
toolsets in the industry:
• Efficiently create parametric and organic objects with polygon, spline, and NURBS-based modeling features.
• Liberate your creativity with more than 100 advanced polygonal modeling and freeform 3D design tools in the Graphite modeling toolset.
• Precisely control the number of faces or points in your object with ProOptimizer technology and reduce a selection's complexity by up to 75 percent without loss of detail.
• Articulate minute details and optimize meshes for both interactive manipulation and rendering using subdivision surfaces and polygon smoothing.
Shading & Texturing
Access a vast range of texture painting, mapping, and layering options, while more easily
keeping track of your assets within a scene:
• Perform creative texture mapping operations, including tiling, mirroring, decal placement, blurring, spline mapping, UV stretching, relaxation, Remove Distortion, Preserve UV, and UV template image export.
• Design and edit simple to complex shading hierarchies with the Slate material editor, taking advantage of extensive libraries of textures, images, image swatches, and procedural maps.
• Bake each object's material and lighting into new texture maps with the Render to Texture functionality.
Animation
Create intelligent, believable characters and high-quality animations by tapping into a
sophisticated toolset:
• Leverage procedural animation and rigging with CAT (Character Animation Toolkit), biped, and crowd-animation functionality.
• Use the Skin modifier and CAT Muscle to help achieve more precise, smoother control of skeletal deformation as bones move.
• Rig complex mechanical assemblies and characters with custom skeletons using 3ds Max bones, inverse kinematics (IK) solvers, and customizable rigging tools.
• Wire one- and two-way relationships between controllers to create simplified animation interfaces.
• Animate CAT, biped, and 3ds Max objects in layers to tweak dense motion capture data without compromising underlying keyframes.
Rendering
Achieve stunning image quality in less time with powerful 3D rendering software capabilities:
• Create high-fidelity pre-visualizations, animatics, and marketing materials with the innovative new Quicksilver high-performance renderer.
• Quickly set up advanced photorealistic lighting and custom shaders with the mental ray® rendering engine.
• Take advantage of idle processors to finish rendering faster with unlimited batch rendering in mental ray.
• Visualize and manipulate a given region in both the viewport and Framebuffer with Reveal™ functionality.
• Output multiple passes simultaneously from supported rendering software, including high dynamic range (HDR) data from architecture and design materials, for reassembly in 3ds Max® Composite.
3ds Max SDK
The Autodesk 3ds Max SDK (Software Developer Kit) can be used to help extend and
implement virtually every aspect of the Autodesk 3ds Max application, including scene
geometry, animation controllers, camera effects, and atmospherics. Create new scene
components, control the behavior of existing components, and export the scene data to custom
data formats. Developers can leverage a new managed .NET plug-in loader, making it easier to
develop plug-ins in C# or other .NET languages. With more than 200 sample plug-in projects,
3ds Max's comprehensive SDK offers both deep and broad access to satisfy even the most
demanding production scenarios.
[Figure: Track View (Curve Editor and Dope Sheet).]
[Figure: Spline and 2D modeling tools.]
[Figure: Subdivision surfaces and polygon smoothing.]
1.3 CU Animate System
Under support from National Science Foundation Information Technology Research and
Interagency Education Research Grants, additional modalities have been developed to enable
conversational interaction with animated agents.
1) Character Animator: The character animation module receives a string of symbols (phonemes,
animation control commands) with start and end times from the TTS server, and produces visible
speech, facial expressions, and hand and body gestures in synchrony with the speech waveform.
Our facial animation system, CU Animate [40], is a toolkit designed for research, development,
control, and real-time rendering of 3-D animated characters. Eight engaging full-bodied
characters and Marge, the dragon shown in Fig. 3, are included with the toolkit. Each character
has a fully articulated skeletal structure, with sufficient polygon resolution to produce natural
animation in regions where precise movements are required, such as lips, tongue, and finger
joints. Characters produce lifelike visible speech, facial expressions, and gestures. CU Animate
provides a GUI for designing arbitrary animation sequences. These sequences can be tagged (as
icons representing the expression or movement) and inserted into text strings, so that characters
will produce the desired speech and gestures while narrating text or conversing with the user.
Accurate visible speech is produced in CU Animate characters using a novel approach that uses
motion capture data collected from markers attached to a person’s lips and face while the person
is saying words that contain all sequences of phonemes (or the visual configuration of the
phonemes, called visemes) in their native language. The motion capture procedure produces a set
of 8 points on the lips, each represented by an x, y, and z coordinate, captured at 30 frames per
sec. These sequences are stored as “diviseme” sequences, representing the transition from the
middle of one visually similar phoneme class to the middle of another such class. To synthesize a
new utterance, we identify the desired phoneme sequence to be produced (exactly as done in
TTS synthesis systems), and then locate the corresponding sequences of viseme motion capture
frames. Following procedures used to achieve audio diphone TTS synthesis, we concatenate
sequences of divisemes—intervals of speech from the middle (most steady-state portion) of one
phoneme to the middle of the following phoneme. By mapping the motion capture points from
these concatenated sequences to the vertices of the polygons on the lips and face of the 3-D
model, we can control the movements of the lips of the 3-D model to mimic the movements of
the original speaker when producing the divisemes within words. This approach produces
natural-looking visible speech, which we are now evaluating relative to videos of human talkers.
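As a simplified illustration of the concatenation step described above (this is not the actual CU Animate implementation, whose source is not included in this report), the following hypothetical Java sketch strings together pre-recorded lip-point frame sequences keyed by viseme pairs.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Simplified illustration of diviseme concatenation. A "frame" holds the 8 lip
 * points (x, y, z each) captured at 30 fps; the library maps a from-to viseme
 * pair to its recorded frame sequence. All names here are hypothetical.
 */
public class DivisemeConcatenator {

    /** Motion-capture frames indexed by a "from-to" viseme pair, e.g. "AA-M". */
    private final Map<String, List<double[]>> divisemeLibrary = new HashMap<>();

    /** Builds a lip trajectory for an utterance from its viseme sequence. */
    public List<double[]> synthesize(List<String> visemes) {
        List<double[]> trajectory = new ArrayList<>();
        // Concatenate the recorded transition from the middle of one viseme
        // to the middle of the next, as described in the text above.
        for (int i = 0; i + 1 < visemes.size(); i++) {
            String key = visemes.get(i) + "-" + visemes.get(i + 1);
            List<double[]> segment = divisemeLibrary.get(key);
            if (segment != null) {
                trajectory.addAll(segment);
            }
        }
        // Each frame (8 points x 3 coordinates) would then be mapped onto the
        // lip vertices of the 3-D model to drive the facial animation.
        return trajectory;
    }
}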
Results
Below are screenshots of the look and feel of the Virtual Tutor application. Shown below is what
the user sees after starting the application. The virtual tutor introduces herself, informs the user
of what she will be instructing them on, and lets them know that there is a quiz to be taken at the
end of the instruction.
In this screenshot a diagram is being displayed while Julie explains its contents (shown below).
The next screenshot shows the quiz window. After the lecture concludes, the user clicks the quiz
button circled below.
Once the quiz is loaded, the user can complete it; as they answer questions, the agent provides
positive or negative audio feedback. After the quiz is completed, the agent tells the user whether
they passed or failed, and a running count of how well they performed is displayed (circled
portion).
Conclusion:
There is a great need for accessible and effective intelligent tutoring systems that can improve
learning by children and adults. The proposed work will inform the design of pedagogical agents
that can produce more engaging and effective learning experiences.
References
[1] R. Atkinson, “Optimizing Learning From Examples Using Animated Pedagogical Agents,” Journal of
Educational Psychology, vol. 94, no. 2, p.416, 2002. [online] Academic Search Premier Database
[Accessed: August 11, 2009].
[2] A. L. Baylor, R. Cole, A. Graesser and L. Johnson, Pedagogical agent research and development: Next
steps and future possibilities, in Proceedings of AI-ED (Artificial Intelligence in Education), Amsterdam
July, 2005.
[3] A. L. Baylor and S. Kim, “Designing nonverbal communication for pedagogical agents: When less is
more,” Computers in Human Behavior, vol.25 no.2, pp.450-457, 2009.
[4] A. L. Baylor and J. Ryu, “Does the presence of image and animation enhance pedagogical agent persona?”
Journal of Educational Computing Research, vol. 28, no. 4, pp.373-395, 2003.
[5] A. L. Baylor and R. B. Rosenberg-Kima, Interface agents to alleviate online frustration, International
Conference of the Learning Sciences, Bloomington, Indiana, 2006.
[6] A. L. Baylor, R. B. Rosenberg-Kima and E. A. Plant, Interface Agents as Social Models: The Impact of
Appearance on Females’ Attitude toward Engineering, Conference on Human Factors in Computing
Systems (CHI) 2006, Montreal, Canada, 2006.
[7] J. Cassell, Y. Nakano, T. Bickmore, C. Sidner & C. Rich, Annotating and generating posture from
discourse structure in embodied conversational agents, in Workshop on representing, annotating, and
evaluating non-verbal and verbal communicative acts to achieve contextual embodied agents, Autonomous
Agents 2001 Conference, Montreal, Quebec, 2001.
[8] R. E. Clark and S. Choi, “Five Design Principles for Experiments on the Effects of Animated Pedagogical
Agents,” J. Educational Computing Research, vol. 32, no. 3, pp.209-225, 2005.
[9] R. Cole, J. Y. Ma, B. Pellom, W. Ward, and B. Wise, “Accurate Automatic Visible Speech Synthesis of
Arbitrary 3D Models Based on Concatenation of Diviseme Motion Capture Data,” Computer Animation &
Virtual Worlds, vol. 15, no.5, pp.485-500, 2004.
[10] R. Cole, S. van Vuuren, B. Pellom, K. Hacioglu, J. Ma, J. Movellan, S. Schwartz, D. Wade- Stein, W.
Ward and J. Yan, “Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human
Computer Interaction,” Proceedings of the IEEE: Special Issue on Human Computer Interaction, vol. 91,
no. 9, pp.1391-1405, 2003.
[11] M. J. Davidson, "PAULA: A Computer-Based Sign Language Tutor for Hearing Adults," 2006. [online]
Available: www.facweb.cs.depaul.edu/elulis/Davidson.pdf [Accessed: June 15, 2008]
[12] D. M. Dehn and S. Van Mulken, “The impact of animated interface agents: a review of empirical research,”
International Journal of Human-Computer Studies, vol. 52, pp.1–22, 2000.
[13] A. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings and R. Kreuz, “AutoTutor: A simulation of a
human tutor,” J. Cognitive Syst. Res., vol. 1, pp. 35–51, 1999.
[14] A. C. Graesser and X. Hu, “Teaching with the Help of Talking Heads,” Proceedings of the IEEE
International Conference on Advanced Learning Techniques, pp. 460-461, 2001.
[15] A. C. Graesser, K. VanLehn, C. P.Rosé, P. W. Jordan and D. Harter, “Intelligent tutoring systems with
conversational dialogue,” AI Mag, vol. 22, no.4, pp. 39-51, 2001.
[16] A. Graesser, M. Jeon and D. Dufty, “Agent Technologies Designed to Facilitate Interactive Knowledge
Construction,” Discourse Processes, vol. 45, pp.298-322, 2008.
[17] P. M. Greenfield and R. R. Cocking, Interacting with Video: Advances in Applied Developmental
Psychology, vol. 11, Norwood, NJ: Ablex Publishing Corp., 1996, p. 218.
[18] X. Hu and A. C. Graesser, “Human use regulatory affairs advisor (HURAA): Learning about research
ethics with intelligent learning modules,” Behavior Research Methods, Instruments, & Computers, vol.
36, no. 2, pp. 241-249, 2004.
[19] W. L. Johnson, “Pedagogical Agents,” ICCE98 - Proceedings in the Six International Conference on
Computers in Education, China, 1998.[online] Available
http://www.isi.edu/isd/carte/ped_agents/pedagogical_agents.html [Accessed: June 15, 2008]
[20] W. L. Johnson and J. T Rickel. “Animated Pedagogical Agents: Face-to-Face Interaction in Interactive
Learning Environments,” International Journal of Artificial Intelligence in Education, vol. 11, pp. 47-78,
2000.
[21] Y. Kim and A. Baylor, “Pedagogical Agents as Learning Companions: The Role of Agent Competency and
Type of Interaction,” Educational Technology Research & Development, vol. 54, no. 3, pp.223-243, 2006.
[22] A. Laureano-Cruces, J. Ramírez-Rodríguez, F. De Arriaga, and R. Escarela-Pérez, “Agents control in
intelligent learning systems: The case of reactive characteristics,” Interactive Learning Environments, vol.
14, no. 2, pp.95-118, 2006.
[23] M. Lee & A. L. Baylor, “Designing Metacognitive Maps for Web-Based Learning,” Educational
Technology & Society, vol. 9, no.1, pp.344-348, 2006.
[24] J. C. Lester, S. A. Converse, S. E. Kahler, S. T. Barlow, B. A. Stone, and R. S. Bhogal, “The persona
effect: Affective impact of animated pedagogical agents,” in Proceedings of CHI '97, pp.359-366, 1997.
[25] J. C. Lester, B. A. Stone and G. D. Stelling, "Lifelike Pedagogical Agents for Mixed-Initiative Problem
Solving in Constructivist Learning Environments,” User Modeling and User-Adapted Interaction, vol. 9,
pp.1-44, 1999.
[26] J. C. Lester, J. L. Voerman, S. G. Towns and C. B. Callaway, “Deictic Believability: Coordinated Gesture,
Locomotion, and Speech in Lifelike Pedagogical agents,” Applied Artificial Intelligence, vol. 13, no. 4, pp.
383-414, 1999.
[27] M. Louwerse, A. Graesser, L. Shulan and H. H. Mitchell, “Social Cues in Animated Conversational
Agents,” Applied Cognitive Psychology, vol. 19, pp. 693-704, 2005.
[28] J. Ma, J. Yan and R. Cole, CU Animate: Tools for Enabling Conversations with Animated Characters, in
International Conference on Spoken Language Processing (ICSLP), Denver, 2002.
[29] J. Ma, R. Cole, B. Pellom, W. Ward and B. Wise, “Accurate Automatic Visible Speech Synthesis of
Arbitrary 3D Models Based on Concatenation of Di-Viseme Motion Capture Data,” Journal of Computer
Animation and Virtual Worlds, vol. 15, no. 5, pp. 485-500, 2004.
[30] J. Ma and R. Cole, "Animating Visible Speech and Facial Expressions," Visual Computer, vol. 20, no. 2-3,
pp. 86-105, 2004.
[31] V. Mallikarjunan, (2003) “Animated Pedagogical Agents for Open Learning Environments,”[online]
Available: filebox.vt.edu/users/vijaya/ITMA/portfolio/docs/report.doc [Accessed December 9, 2009]
[32] S. C. Marsella and W. L. Johnson, An instructor's assistant for team-training in dynamic multi-agent
virtual worlds in Proceedings of the Fourth International Conference on Intelligent Tutoring Systems (ITS
'98), no. 1452 in Lecture Notes in Computer Science, pp. 464-473, 1998.
[33] D.W. Massaro, Symbiotic value of an embodied agent in language learning, proceedings of the 37th
Annual Hawaii International Conference on System Sciences (HICSS'04) - Track 5 – vol. 5, 2004.
[34] “Animated 3-D Boosts Deaf Education; ‘Andy’ The Avatar Interprets By Signing” sciencedaily.com March
2001, [online] ScienceDaily, Available: http://www.sciencedaily.com/releases/2001/03/010307071110.htm
[Accessed April 11, 2008]
[35] A. Nijholt, "Towards the Automatic Generation of Virtual Presenter Agents," Informing Science Journal,
vol. 9, pp. 97-110, 2006.
[36] M. A. S. N. Nunes, L. L. Dihl, L. C. de Olivera, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J.
Francisco, G. J. C. Machado and M. G. C. Notargiacomo, “Animated Pedagogical Agent in the Intelligent
Virtual Teaching Environment,” Interactive Educational Multimedia, vol. 4, pp.53-61, 2002.
[37] L. C. de Olivera, M. A. S. N. Nunes, L. L. Dihl, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J.
Francisco, G. J. C. Machado and M. G. C. Notargiacomo, “Animated Pedagogical Agent in Teaching
Environment,” [online] Available: http://www.die.informatik.unisiegen.de/dortmund2002/web/web/nunes.pdf [Accessed: June 30, 2008]
[38] N.K. Person, A.C. Graesser, R.J. Kreuz, V. Pomeroy, and the Tutoring Research Group, “Simulating
human tutor dialog moves in AutoTutor,” International Journal of Artificial Intelligence in Education, in
press 2001.
[39] P. Suraweera and A. Mitrovic, “An Animated Pedagogical Agent for SQL-tutor,” 1999, Available:
http://www.cosc.canterbury.ac.nz/research/reports/HonsReps/1999/hons_9908.pdf [Accessed: August 11,
2009]
[40] J. Ma, J. Yan, and R. Cole, “CU animate: Tools for enabling conversations with animated characters,”
presented at the Int. Conf. Spoken Language Processing, Denver, CO, 2002.
[41] Autodesk 3ds Max. “3ds Max core features.” 2010 Available:
http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13567426#channels_Core%20Features
[Accessed : September 26, 2010]