>> John Nordlinger: Hi, my name is John Nordlinger. I'm with Microsoft
Research. And I'm very pleased to introduce Ian Horswill from Northwestern
University.
Ian comes, from a long time ago, from the AI lab at MIT. He is now at Northwestern,
where he's the director of the Animate Arts Program, a nice blend of computer
science and art to produce great objects like the one he will be talking about. He
also is a professor, and he teaches computer science at Northwestern. Please
welcome Ian Horswill.
>> Ian Horswill: Thanks, John.
[applause].
>> Ian Horswill: Okay. So let me just start out by asking how many people here
come from kind of an AI research background? Okay.
How many people come from a game design background? Okay.
And any other kinds of backgrounds here, like -- okay.
>>: [inaudible].
>> Ian Horswill: Education. Oh, okay. All right. So I'm going to try and cover a
few different topics, and so I come from a tradition that believes in interactive
talks, so if you have a question, please don't feel bashful about, you know, asking
it immediately and, you know, we can talk about it some more, especially since
there are a few different topics I'm supposed to cover, I'm going to be going
somewhat quickly through some things.
So this is in some ways less a research talk, this is about a tool for procedural
animation that I made for my friends and me in interactive narrative research so
that we would have some nice bodies that we could use as back ends for our AI
systems.
I'll talk mostly about the system. I'll also talk somewhat about sort of the longer
term goals, you know, why do we need this tool. But I want to start out with just a
quick demo, and hopefully none of you know about Ainsworth's safe home base
phenomenon. Okay. I'm sorry. I'm having a multiple screen issue here. There
we go.
Oh, thank you, sir. May I have another? All right. Just one moment. I can just
run the silly thing manually. Okay. Here we go. And so I'm going to deliberately
not tell you too much about what's going on here, other than to say that this is all
sort of being computed from first principles. So, you know, I didn't make a walk
loop for this or anything like that.
>>: This is the same every time you run it or --
>> Ian Horswill: No.
>>: This could be a good example [inaudible].
>> Ian Horswill: Okay. So that's an example of -- sorry. I need to -- here we go.
Of, you know, an application built with the system. It's trying to test out -- trying
to do a simulation of some specific results from what's called attachment theory.
But okay. All right.
So now I want to ask you what did the kid character want? Or in general, what
can you tell me about what was going on in the heads of the characters? Sure.
>>: He wanted [inaudible].
>> Ian Horswill: Okay. Anything else?
>>: He wanted the ball.
>> Ian Horswill: Okay. Anything else? You pretty much have it.
>>: He wanted to play.
>> Ian Horswill: Yeah, wanted to play. It also has a somewhat anxious
relationship to the other kid. So you didn't really see it, but sometimes, you know,
he would like punch the other kid. Okay.
So what I want you to get from that -- so, you know, that -- from an AI standpoint,
that was a really simple demo, if you know about behavior based robotics,
straightforward pretty simple behavior based control system. But it's interesting
how even though there was no language there, there weren't even any facial
expressions, you could learn a fair amount about what the characters were
involved in, you know, what they wanted, what the conflicts were in what they
wanted and so on, by things like bodily postures, by pauses and hesitations.
You know, the kid's chasing the ball but periodically stops and looks back at the
parent. Or when it's running toward the parent, it stops and looks back at the
ball. Gaze direction and so forth. Okay.
So although I'm an AI person, and I'm interested these days particularly in
simulating emotion and personality, I'm interested in doing that in the context of
interactive drama, so virtual characters for interactive drama. So what I
personally really want to do is I'm interested in making characters that feel
compelling as characters. There are a lot of other problems to work on, like making the
character really smart, having really good drama management to manage the
Aristotelian arc of the story and all that. That's not what I'm working on. I'm
trying to make characters that seem kind of life like.
And so as I said, I'm sort of hanging out with friends from the psychology
department and neuroscience and so on. In order to do that, I needed to build
some bodies, and that's largely what I'm here to talk about.
But just in terms of background what I'm interested in thinking about are kinds of
games in which the players are really coming to them to have some sense of
connection to a character. That's one of their primary motivations. So not just
challenge but actually to have some sense of connection to the characters.
And there's actually quite a long history of this. So Tamagotchi, for example, you
know people who like Tamagotchi like them because they're a kind of child and
they have an attachment to them. Or, you know, the old Dogz work by Andrew
Stern, Nintendogs now, and now Andrew is doing Touch Pets Dogs. That's what
that's a screen shot from. But there's also interactive fiction, visual novels. How
many people know what visual novels are? So they're a kind of interactive fiction
that's very popular in Japan, which is basically static graphics with text, and it's
very much like a choose your own adventure novel in that you're moving through
dialogue trees and choosing discrete moves. But it's really very minimal. It's you
know static images and just text. And yet they're enormously popular in Japan.
At least as of 2006, they were responsible for 70 percent of the PC titles that
were released.
So there are things like this. You have a picture of a character, you have the
dialogue you're having with them, the dialogue you're having with them is fixed
choices, but these things are incredibly popular.
Okay. So that's where I'm coming from. That's the kind of project I'm involved in.
I want to think about how do we make characters that feel engaging, AI based
characters.
Now, when I started getting involved in the interactive narrative world one of the
problems I had was that I couldn't find sort of a good game engine to use. And in
fact, a lot of my friends were very frustrated by 3D and thinking about going back
to 2D. As my friend Michael Mateas said, what we really needed was
procedural South Park, we needed sort of flat, simple stuff, you know, didn't need
a complex world but needed a world in which it was easy to just introduce a new
prop into the world and have the character interact with it, and if you needed to
have -- if you needed to add a chair to the world, it didn't require that you hire an
animator to animate the sit-down animation because the chair has a slightly
different height than the chair that you already had a sit-down animation for.
Okay. So that's what the bulk of the talk's going to be about. And then time
permitting, I'll also give kind of a flavor of the work that I'm doing with my friends
from psychology these days.
So here's the real problem. How do we make flexible animation back ends that
are easy to interface to your AI system, in part so that my colleagues at
Northwestern stop asking me if I can recommend students who know Maya who
can do their animation and modeling for them.
So we're looking to build a system that has a pretty wide behavioral repertoire
that's easy to interface to the AI system that can provide some reasonable
amount of physical interaction between the characters. So you saw the child
character physically hugging the parent, and you want to be able to do that kind
of thing. And you want it to be relatively easy to add props to the system. You
want at least sort of cartoon level believability.
So the gold standard for that kind of animation is of course hand animation and
motion capture. Gives you very, very, very nice results. It's infinitely tweakable
and so on. As you know, it's expensive. It's also for a lot of us in the university
setting just prohibitively expensive. You know, we don't have a motion capture
studio. At Northwestern there are actually very few students who have the facility
with Maya to do real character animation because there aren't any courses in it.
So a really attractive alternative is to do some kind of full blown dynamic
simulation of the body, run an inverted pendulum controller in the body to keep
the body erect and then generate the walk loop that way. There's been a lot of
really great work on that.
It's still fairly expensive computationally, and it's also expensive in terms of
development in that you need separate controllers for the different kinds of
actions. You have to worry about stability issues. The controllers themselves
may be relatively unstable. Physics can sometimes be unstable. And plus you
have to worry about the fact that even if you have, you know, a walking system
that works for the character under normal conditions, if they're walking with a
load now you actually need a different set of gains in order to keep the walking
system stable and so on. Okay.
There's also been a lot of wonderful work on data-driven methods where you just
go grab a whole lot of mocap and you either cut it up into little pieces and
dynamically sequence them or you use them to train a dynamic controller, just
use it as the training data for the dynamic controller, you get pretty good realism.
It's still not really appropriate at least for my kind of environment, because we
don't have the mocap data to begin with. And then if you're trying to fit it on to a
console, it is, you know, a fair amount of data that you have to keep track of.
So I just built a system for myself called Twig which does sort of rough and ready
procedural animation in a dynamic environment and now I have a lot of friends
who want to use it, so I'm trying to, you know, package it up so it's useable by
other people. But really what it's trying to be is just a pragmatic tool for
procedural animation that you can use as a back end to your AI system. So I'm
not claiming to have made any innovations in physics simulation or control theory
or anything like that.
It supports, you know, some amount of dynamic simulation. It's not great
dynamic simulation, but it works fine as far as the audience is concerned. And
it's intended for sort of interactive narrative applications. So I want it to be fast. I
don't want you to have to have a separate CPU to run it on. I want it to be easy
to interface the AI system to. I want it to be easy to run it over an RPC link if you
want to hook it up to something like Soar, which is running in a separate address
space. And I'm more interested in believability in the Joe Bates and Disney
animation sense than in actual physical realism.
So as long as it's emotionally believable, I don't care whether or not it's physically
realistic.
So it's free. It's an XNA library. You can download it. It integrates with the
content pipeline which is awesome. Not its integration with the content pipeline
but the content pipeline itself is awesome. And once I find the right
undergraduate to work on it, we're going to port it to the Xbox.
The main idea behind it is a kind of a puppetry style of control that I'll talk more
about in a minute. The basic idea is it's fast and stable, it's relatively easy for
people who don't have too much specialized knowledge to go and extend -- or at
least that's my sense. I haven't done the -- I mean, other people haven't written
code for it yet. And it's very much a work in progress. This is my second 3D
graphics program, so I'm sure you could all do it much better than I did. But, you
know, it's academic research code. Sorry.
Okay. So here's what I mean by the puppetry control style. Essentially what you
have is a puppet. You have a rag-doll simulation, and then rather than trying to
do inverted pendulum control or something like that, the way you're going to do
control is you're just going to apply forces directly to whatever parts of the body
you want to. Essentially if you want the character to stand up, we give you a jet
pack. You can attach the jet pack to the back of the neck and have the jet pack
lift the body up and center it over the feet. And that's how you do standing.
If you want the character to stand and also wave its hand, you have one jet pack
here, you have one jet pack here, they're both flying around, and then the
physics system is going to worry about where the other parts of the body go
subject to the constraints of the forces that are applied to the relevant parts of the
body. Make sense? Okay.
The physics system is based on Jakobsen or Jakobsen's -- I don't know how he
pronounces his name -- work from the Hitman engine. Fantastic paper from
GDC 2001. Basic idea is you model the body as a set of point masses, i.e.,
particles connected by rigid distance constraints, i.e., massless rods. And you
use Verlet integration to update the positions of things. Verlet integration just
means that you model the state of the particle in terms of its position in two
consecutive frames rather than in terms of position and velocity or position and
momentum.
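Just to make that concrete, here's a minimal sketch of what a position-Verlet update can look like; the class and field names are illustrative assumptions, not Twig's actual API.

```csharp
// Illustrative position-Verlet particle (not Twig's actual API).
// Velocity is implicit in the difference between the two stored positions.
using Microsoft.Xna.Framework;

public class Particle
{
    public Vector3 Position;            // position at the current frame
    public Vector3 PreviousPosition;    // position at the previous frame
    public Vector3 AccumulatedForce;    // gravity, "jet pack" forces, etc., added this frame
    public float Mass = 1f;

    public void Integrate(float dt, float damping)
    {
        Vector3 acceleration = AccumulatedForce / Mass;
        // next = current + (current - previous) * damping + acceleration * dt^2
        Vector3 next = Position
                     + (Position - PreviousPosition) * damping
                     + acceleration * dt * dt;
        PreviousPosition = Position;
        Position = next;
        AccumulatedForce = Vector3.Zero;
    }
}
```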
The thing that's nice about that is that it makes it easy to do constraint
satisfaction because if you find that a particle is in a place it shouldn't be, you just
figure out a better place for it to be, and then you move it and you don't worry too
much about updating the momentum, you just move the stupid particle and you
hope it looks good. And mostly it does. Okay? Fair enough? All right.
So for example. So you're updating -- say you have two particles that are
connected by a rigid distance constraint. You let them move however they want to.
And then once every clock tick you say they're supposed to be this distance
apart, they're actually this distance apart, so we move them to the places where
they need to be. How do we know what that is? Well, we do the thing you would
normally do, which is to pretend that they're connected by a spring, but then you
just solve for what the rest state of the spring would be, and you move both
particles there immediately. So you're doing constraint satisfaction by projection.
Okay.
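As a sketch, assuming the illustrative Particle class from the earlier snippet, one distance constraint might be relaxed something like this; in practice you sweep over all the constraints a few times per tick.

```csharp
// Sketch of constraint satisfaction by projection for one rigid distance constraint.
// Both endpoints are moved straight to a legal configuration, weighted by inverse mass;
// no spring forces and no explicit momentum bookkeeping.
public static class Constraints
{
    public static void SatisfyDistance(Particle a, Particle b, float restLength)
    {
        Vector3 delta = b.Position - a.Position;
        float currentLength = delta.Length();
        if (currentLength < 1e-6f) return;          // coincident particles; nothing sensible to do

        float error = (currentLength - restLength) / currentLength;
        float invMassA = 1f / a.Mass;
        float invMassB = 1f / b.Mass;
        float weightA = invMassA / (invMassA + invMassB);
        float weightB = invMassB / (invMassA + invMassB);

        // "Solve for the rest state of the spring" and move both particles there immediately.
        a.Position += delta * (weightA * error);
        b.Position -= delta * (weightB * error);
    }
}
```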
Given that, you can -- so now you have a character. You have everything you
need to hook a bunch of particles together in a kinematic chain by way of
massless rods so you can make a rag-doll body. And that's basically how the
body -- the rag doll for Hitman is done.
And now what you can do is if you want to move the arm someplace, you just
grab the particle for the arm and you move it wherever you want to move it to.
And the physics -- the constraint satisfaction system is going to make sure that
the elbow is someplace that's compatible with the position of the hand and the
position of the shoulder and if there is no such place, then it's going to drag the
shoulder along with the elbow, and then when you stop dragging the hand, everything is going to behave as if it has normal dynamics. Okay?
Now, the thing that's nice for my kinds of applications is that when people want
to go and build a system that uses this, they don't have to think about joint
angles, they don't have to think about dynamics. All they have to do is figure out
what linear forces do I want to apply to the particular parts of the body that I care
about or even just teleport the parts of the body that I care about to the places
that they need to go. And I don't have to know -- I don't have to think deeply
about physics or kinematics or anything like that. I can just grab parts of the
body and move them where I want them to go.
Posture control works in pretty much the same way. We essentially attach a
bunch of jet packs that are doing little servo loops to different parts of the body.
We have a jet pack in the middle of the pelvis which is holding it at the right
height and centering it over the mid point of the feet. We have another one that's
centering the top of the spine over the mid point of the pelvis subject to the
constraint that we want to balance out the center of mass. Then we have
separate things that are applying torques to the shoulders to make the body face
whatever direction I want it to and independently to the pelvis to make it face
whatever direction we want it. Okay? Any questions? Yeah.
>>: So it sounds like you've got a combination of these jet packs or as an
alternative you have, like, position-based, like keyframe-based?
>> Ian Horswill: Right. Right. So for things like if you want to animate a hand
wave gesture, might as well just do that through keyframing, so, yeah. And in
fact, that's the major thing, other than importing prop models, that's the major
thing we use the content pipeline for is you can make little XML files that describe
the keyframes for I want the hand here, I want the hand here, I want the hand
here, and how fast do I want it to move and things like that. So, yeah, you can
do keyframing. But for posture control, the walk loop you don't have to.
>>: So can you [inaudible].
>> Ian Horswill: Absolutely.
>>: So how do you keep the elbow [inaudible]?
>> Ian Horswill: Oh, great question. Sorry. So implementing joint limits is a
pain. And it -- you know, pretty much you do it. And actually so truth in
advertising, the elbows are the part that's a pain in the butt. The knees are easy
because you just figure out what plane do you want the leg to lie in. And then
you just enforce the constraint that all the nodes of the leg need to be coplanar in
that plane. And so if you want the leg to be facing this way and the knees out a
little bit, you just move the knee in toward that, you move the other nodes a little
bit the other way so that you conserve momentum and that holds the knees in
place in whatever direction you want.
For the elbow, you do the same thing but it doesn't look as nice because -- well,
what you want is you want sort of a smarter system deciding what plane it wants
the elbow to be in, and right now the system that's figuring that out is dumb.
Right. Right. So you know I actually don't remember how it decides what plane
it should be in, but at the moment it's doing something stupid because the stupid
thing works for what I've had to do so far. Anything else? Great questions.
Okay.
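For what it's worth, the coplanarity trick for the knee might look roughly like the following; again the Particle type and the half-and-half compensation are illustrative assumptions, not Twig's actual code.

```csharp
// Sketch of the knee coplanarity idea: push the knee onto the plane you want the leg
// to lie in, and nudge the neighboring nodes the other way so the net shift is small.
public static class LegPlaneConstraint
{
    public static void Apply(Particle hip, Particle knee, Particle ankle,
                             Vector3 planePoint, Vector3 planeNormal)
    {
        planeNormal.Normalize();
        float distance = Vector3.Dot(knee.Position - planePoint, planeNormal);

        knee.Position -= planeNormal * distance;            // knee onto the plane
        hip.Position += planeNormal * (distance * 0.5f);    // neighbors move the other way
        ankle.Position += planeNormal * (distance * 0.5f);  // to roughly balance the correction
    }
}
```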
So posture control. So, yeah, we just basically, as it were, apply jet packs to
different parts of the body. We control the parts of the body we care about to do
the things we want to do, and we control them independently. Parts of the body
we don't care about we just allow to move freely and then physics takes care of
resolving the constraints between those different bits of control.
And so the control system then ends up looking like sort of a typical behavior-based
system. Essentially you have a bunch of very simple servo loops that are
all running in parallel and to varying degrees communicating with one another.
And I can talk about that in more detail if you want to, but there isn't much that's
too interesting to say about it.
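A single "jet pack" in this scheme can be as simple as the following sketch, assuming the illustrative Particle class from before: a damped, spring-like servo pulling one body particle toward a goal, with the constraint solver left to sort out the rest of the body.

```csharp
// Sketch of one "jet pack" servo loop (illustrative names, not Twig's API).
public class JetPack
{
    public Particle BodyPart;      // the particle we're steering, e.g. the pelvis or a hand
    public float Gain = 20f;       // spring-like pull toward the goal position
    public float Damping = 5f;     // resists the particle's implicit velocity

    public void Update(Vector3 goalPosition, float dt)
    {
        Vector3 velocity = (BodyPart.Position - BodyPart.PreviousPosition) / dt;
        Vector3 force = (goalPosition - BodyPart.Position) * Gain - velocity * Damping;
        BodyPart.AccumulatedForce += force;
    }
}
```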
Gait generation is based on the work, the more recent work of Ken Perlin that he
hasn't published. But the trick is just translate the pelvis along the trajectory that
you want it to go on, and notice when one of the legs is stretched out behind you.
When the leg is -- oh, you know, keeping the feet planted. When one of the legs
becomes overly stretched, just move it ahead of you along the trajectory you
want to move in, and that's enough.
So Ken did that with a kinematics solution and explicit IK. I'm just basically
ripping off that idea and doing it in this sort of quasi-dynamic simulation. Okay.
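Here's a rough sketch of that gait trick under the same illustrative Particle assumption; the real Twig and Perlin code no doubt differ, and this naive version ignores details like which foot should step next.

```csharp
// Sketch of the gait trick: drag the pelvis along the desired path, keep the feet planted,
// and when a planted foot falls too far behind, step it forward along the trajectory.
public static class Gait
{
    public static void Update(Particle pelvis, Particle[] feet, Vector3 travelDirection,
                              float speed, float maxStretch, float stepLength, float dt)
    {
        travelDirection.Normalize();
        pelvis.Position += travelDirection * speed * dt;    // the pelvis "jet pack"

        foreach (Particle foot in feet)
        {
            // How far has this foot fallen behind the pelvis along the direction of travel?
            float lag = Vector3.Dot(pelvis.Position - foot.Position, travelDirection);
            if (lag > maxStretch)
            {
                // Move the foot ahead along the trajectory; the distance constraints
                // and the rest of the physics make the leg follow plausibly.
                Vector3 target = pelvis.Position + travelDirection * stepLength;
                target.Y = foot.Position.Y;                 // keep the foot at ground height
                foot.Position = target;
                foot.PreviousPosition = target;             // plant it with no residual velocity
            }
        }
    }
}
```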
>>: Do you have a problem with stability in the sense that you're enforcing
multiple constraints in some sort of cycle in that things can't -- won't stay still or
do you just damp it?
>> Ian Horswill: There is damping. In practice -- in practice the only time you
really get oscillation in the bodies is when you really ask them to do something
that's impossible such as when the character is picking up an object and you ask
it to move the object through the body. You know, that kind of thing. But there
isn't a lot of oscillation.
You get a -- there's more oscillation actually at the higher behavioral levels where
you get like thrashing where the system is -- wants to follow the ball, no, wants
to go to the parent, wants to follow the ball, you know, wants to go to the parent.
But I don't want to oversell this. So no, I don't get a lot of oscillation, but it's not
that I'm getting stability because of my awesome integrator or my awesome
insight into gait generation, I'm not getting oscillation because I'm asking the
system to do something that's really pretty simple. And so, you know, in practice
I haven't had oscillation problems so far at least.
>>: When you -- when the character's standing, you're forcing a couple
constraints on the feet, right, but you're going to potentially -- I mean, when your
character's walking around or jumping or that sort of thing, I can understand that
you have essentially a hierarchy of -- well, we can't call them joint constraints --
>> Ian Horswill: Right, right.
>>: Distant constraints. But now as soon as you put both feet firmly planted on
the floor, then you have a cycle. Right? Is that --
>> Ian Horswill: Well, except the constraints are being implemented at the feet
and at the pelvis. The knees are being constrained in one degree of freedom but
then there's still a free -- a spare degree of freedom so that the knees can bend.
>>: So you're solving the legs and the feet and the torso all at once?
>> Ian Horswill: Well, right, except the trick is I don't actually need to solve for
the position of the knees. I just move the pelvis where I want it. The feet are
where they are. And so if the -- you know, so let's -- when the character stands
up, for example, you know it's -- let's say it's starting from this position. I want it
to be in this position. I just start the jet pack on the pelvis. The pelvis starts
rising and then the distance constraint on the knees pulls the knees in, so I don't
actually have to explicitly compute the IK for that, it's just being implicitly
computed.
>>: Okay. I can see how damping would [inaudible].
>> Ian Horswill: Yeah. I mean, there certainly is a lot of damping in it. You
know, and I could try turning the damping off. There's a lot of damping in it not
so much because I saw a lot of -- at least I don't remember seeing a lot of
oscillation in it. But what you get is, you know, the character -- you know, when
the character runs into something it restores from that too quickly. And so you
want damping just so that it's acting like a body that has some viscous damping in
it. And so there's actually -- there's a couple of different flavors of damping in it.
There's damping relative to the environment frame and then damping relative to
the body frame. Which gives you a sort of kluge approximation to the kind of
viscous damping that you get in the musculoskeletal system.
You know, all I care about is that it looks good enough so that I'm not
embarrassed by it, and then I can move on and work on my AI.
>>: Right.
>> Ian Horswill: Okay. Anything else? All right. Then the other thing you want,
of course, is props. And so the easy part is it's built on XNA, so you can just
use the content pipeline to pull in any old model you want. So that part's really
easy. The harder part is that, at least in the current system, if you want, you
know, a better collision proxy for it than its bounding box, then you actually need
to write some C# code, and there's a specific set of collision volumes that I
understand are sort of the standard boxes and capsules and spheres.
And also if you want the user to be able to interact with the object other than by
bumping into it, then you need to write some C# code to tell the system how to
manipulate it. So the way that object manipulation works is if you want the
character to be able to interact with some tool object, for example. So, you
know, say I want to be holding the clicker in a particular canonical position, it just
turns out the easiest way to structure that is to put the knowledge of it inside the
class for the clicker rather than inside the class for the characters so that we
don't have to keep adding methods to the character class all the time.
And so the way it actually works is the clicker figures out where it should be in
space, and it hovers there, and it drags the character's arm along with it. Actually
that's one place where if you're not careful you get oscillation, which is the clicker
is figuring out where it should be by coming up with a post hoc bodily coordinate
system for the character based on where the shoulders are and so on, and so it
figures out I want to be there. And if the body hasn't -- isn't running a control loop
that's trying to keep it facing in a certain direction, then the fact that it's reaching
forward is going to drag the shoulder forward a little bit, and so now that
coordinate system is slightly different. And so on the next frame the target
position for the clicker is a little different, and so it drags the -- and so you get this
very slow precession if you're not careful. And so that is one case of oscillation,
although not in the usual sense we think of in control theory. Okay? Fair
enough?
So the way that, for example, a character writes on a clipboard is the clipboard
asks the body for its coordinate system, figures out where it should be in a
coordinate system that's normalized to torso size and arm length and computes
from that three space position, and it goes and hovers there dragging one arm
with it. The pen asks the clipboard where its writing surface is and asks for the
global position and orientation of a particular position on its writing surface. The
pen moves to that position and drags the other arm with it and then writing is just
the pen moves around within the coordinate system of the writing surface.
It's not super-realistic, it's not dynamically accurate, but it's very convenient -- it's
just much easier to build the system. And it looks good enough.
One of the things you end up wanting to do is to have sort of a lot of
task-specific coordinate systems. So there's a system for that so that objects can
tell you what their front surfaces are or what their writing surfaces are and things
like that.
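In code, the prop-owns-the-knowledge idea can be sketched like this; all the names, the body-frame construction, and the offsets are illustrative assumptions rather than the actual Twig classes.

```csharp
// Sketch of prop-driven manipulation: the prop computes its canonical position in a
// rough body-relative frame and drags the character's hand particle along with it.
public class HeldProp
{
    public Vector3 Position;
    // Target offset in the body frame (right, up, forward), scaled by arm length so
    // the same prop works for characters of different sizes.
    public Vector3 BodyRelativeOffset = new Vector3(0.4f, 0.2f, 0.6f);

    public void Update(Particle chest, Particle hand, Vector3 facing, Vector3 right,
                       float armLength)
    {
        Vector3 up = Vector3.Cross(right, facing);   // rough body coordinate frame
        Vector3 target = chest.Position
                       + (right * BodyRelativeOffset.X
                        + up * BodyRelativeOffset.Y
                        + facing * BodyRelativeOffset.Z) * armLength;

        Position = target;               // the prop hovers at its canonical position...
        hand.Position = target;          // ...and drags the hand with it
        hand.PreviousPosition = target;  // avoid injecting spurious velocity
    }
}
```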
So here are the kinds of things that you can do at the moment in Twig. You can
hold an object, which means one of the hands is attached to it and then it's just
hanging at the side of the character. Hold for use means move me into whatever
my canonical position for utilization is. Obviously that will need to get
generalized at some point. But for the moment it's good enough.
Pen writing on a surface, walking, sitting, standing up, playing a gesture file,
approaching an object, attacking an object, attachment, which is a kid running
up and hugging the legs of the parent. There's a pain withdrawal reflex, so if you
hit one of the characters, there's a low level reflex that just overrides everything
else and does an undirected escape response for a short period of time.
Gaze control. Speech balloon, so to speak. Oh, I actually have sit down and
stand up twice. Sorry about that. There's a sensing system built in. You have
tactile sensing so you know when something touches you and what body part
and you know what object touched you. You have nociception, which is a fancy
way of saying pain sensation. So if something hits you hard enough, i.e., the
kinetic energy is over some threshold, then that also gets registered. Characters
scan their field of view. There will be auditory perception, but there isn't right
now. Okay.
A lot of the users of the system -- so from this point on, we're talking about the AI
system that Ian wants to use, which a lot of my users ignore because they just go
in through the remote procedure call interface and bypass all this and just say
move the hand, walk over here and so on. But there's also an autonomous
attention system that's appraising the salience of objects in the environment,
moving the focus of attention around. And a short term memory that's tracking
that. There's a gaze control system that's semi-independent of that. It tries to
mostly pay attention to the focus of attention, but it doesn't always, and in particular
it's trying to monitor -- balance its attention between different objects that need to
be monitored.
There's -- for those people who don't want to write C# code, I actually like writing
C# code, but for those who don't want to, you can just hook up to it over a TCP
port and spit commands at it.
>>: [inaudible] camera control.
>> Ian Horswill: Camera control. Yes. There is a dumb camera system in there
right now [inaudible] who had been at Copenhagen and is going to -- or is now at
Santa Cruz, he did a thesis in Michael Young's group on camera control, and
he's got students working on doing smarter camera controllers for Twig. But at
the moment what you've got is, you know, point the camera at this object, set my
field of view, set my distance, set my height, that kind of thing.
But, yeah, you definitely want something smarter than that. And ideally, you also
want to be able to reason, I want a two-shot with, you know, this character
larger in the screen than that, and you know, make sure that everything you want
to be in view is in view and stuff like that. But --
>>: [inaudible] camera as well.
>> Ian Horswill: Yeah. Well, that would certainly be what I would do. You know,
most of my friends who want to use this want to use their own AI systems
though. So you know, I happen to come from the behavior based tradition and I
kind of think that way. Most of my friends come from, you know, a reactive
planning tradition and so they want to have that kind of thing. And you know, I'm
trying to not -- I'm trying to minimize my barriers to entry, so to speak.
>>: You mentioned possible collaboration [inaudible].
>> Ian Horswill: Yeah. Well, we have been talking. I mean, I shot John a note saying,
John, how do I find out about the foreign function interface for Soar, and John
says great, here's the documentation, this would be really cool, and here's the
name of somebody to talk to. Now, have I actually had time to build the foreign
function interface? No. But -- but that's on the queue.
Okay. Well, now let's see whether we can launch this one or whether I have to
launch this one manually, also. I can launch this one. But it's popping up on the
wrong screen. Okay. So this is an example of the scripting interface which is the
prototype for the RPC system, and so I was doing a kind of a webcomic, so to
speak, with it. Okay. So that's an example of, you know, controlling it just by
saying do this, do this, do this, do this. In this case it's from a text file; we can send
the same command set over a TCP link.
And just to give you a sense, this is what the code looks like. I mean, it's just the
obvious stupid thing. You have one line per command. And it's just method
calls. You know, you -- this is the name of the object. You look it up in a table.
This is the name of the method; you use reflection. You parse the arguments.
You call the method, you're done. So nothing terribly fancy here. But you can -- you know, Michael says to Bryan, and you give a string for the thing to generate
in the dialogue balloon and so Michael knows then to turn toward Bryan because
he's addressing Bryan. Bryan knows to shut up until the word balloon
disappears. And then Bryan goes and picks it up, blah, blah, blah, blah, blah.
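The scripting side really can be that simple; here's a minimal sketch of a line-per-command dispatcher of the kind described, using reflection, with made-up names rather than Twig's actual classes. A line like "Michael Say Hello" would then look up the object registered as Michael and invoke its Say method with one string argument.

```csharp
// Sketch of a line-per-command script dispatcher: "objectName methodName arg1 arg2 ...".
// Look the object up in a table, find the method by reflection, parse the arguments, call it.
using System;
using System.Collections.Generic;
using System.Reflection;

public class ScriptInterpreter
{
    readonly Dictionary<string, object> objects = new Dictionary<string, object>();

    public void Register(string name, object obj) { objects[name] = obj; }

    public void Execute(string line)
    {
        string[] tokens = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        if (tokens.Length < 2) return;

        object target = objects[tokens[0]];                         // name of the object
        MethodInfo method = target.GetType().GetMethod(tokens[1]);  // name of the method

        // Pass the remaining tokens as string arguments; a fuller version would convert
        // them to whatever parameter types the method actually expects.
        object[] args = new object[tokens.Length - 2];
        for (int i = 0; i < args.Length; i++)
            args[i] = tokens[i + 2];

        method.Invoke(target, args);                                // call the method; done
    }
}
```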
Okay. Any questions? Yeah?
>>: [inaudible] physics system built into this, and is there a way to give objects
weight?
>> Ian Horswill: Yeah. Yeah. So the lightweight physics system is the particle
stuff. And for example -- I mean, I'm not a physics
hacker, so the way this is modelled is I just have a cube made out of eight
particles with distance constraints, invisible rods linking them together into a rigid
structure, and then I set the weights of those eight particles to be very large so as
to simulate a 16-ton weight. I'm sure that --
>>: [inaudible].
>> Ian Horswill: Oh, absolutely, yes. So you have dynamics in the sense that
there's gravity. Objects have momentum. There's, you know, a little bit of
damping and stuff like that. So it is a dynamic simulation. But it's a dumb
dynamic simulation. You know, so don't expect too much out of it. And also the
procedural animation system violently contradicts conservation of momentum
and energy. So remember how I said gait generation gets done? We put a jet
pack on the pelvis, and we push it forward.
Well, that means you may not have noticed it, but in some of the scenes there's a
merry-go-round that the kids are running around. The merry-go-round actually
works. You can get on it and push it and it spins and all that. But if the kids get
on the merry-go-round, since their locomotion is being done by an external
force that's pushing them, they could actually push the merry-go-round while
they're walking on it, which isn't physically possible in the real world.
Now, in practice they don't try to do that, so it's not a problem, and so I haven't
tried to fix it. But, you know, don't -- you know, there's trying to be enough
physics so that it doesn't look stupid but not so much physics that I have to think
deeply about physics. Blah, blah, blah. Okay.
So that's the Twig system. What's it bad at? Accurate simulation. It's not trying
to be an accurate simulator, it's trying to be sort of a cartoon simulator, cartoon
physics, cartoon rendering. I mean not in the sense of cel shading, but it's stick
figures, folks. So it's not good at photorealism. Collision detection is very simple.
There's no path planning in it. It's potential fields.
But what it's good at is it's sort of a rough and ready system where if you want to
introduce a new kind of object, you have some hope that an undergrad who
knows C# and hasn't taken a lot of graphics classes or AI classes or robotics
classes can go and write the code for that.
Because of the little bit of dynamics you have in it, it does produce to my eye at
least relatively expressive motion. I mean, the motions I think are relatively
pleasing. Your mileage may vary. But the idea is to get believability in the
Disney sense. We're not trying to get realism, we're trying to get something
where you look at it and you can easily suspend disbelief and say there's
something alive here. It may not be like me but, there's something that's alive
that has, you know, intentional state and affect and so on.
Okay. And again, it's trying to be easy to interface to your AI system. Okay?
Any questions? Yeah?
>>: Is it always just the right stick figures or can you [inaudible].
>> Ian Horswill: Oh, yes, you can add art. I haven't done it, but I can't think of
any reason why you couldn't bring a skin mesh in and compute the bone
transforms from the positions of the particles. It seems like that would be really,
really straightforward. But this is my second 3D graphics program so I don't
actually really know how skin meshes work. You know. So it's kind of on my list
of things to do to, like, you know, dive into the model class in XNA and, you
know, I know there's a list of matrices in there, and they correspond to the bones.
And, you know, learn enough about which matrix corresponds to which bone so
that I can then hand it to an undergrad and say do it. That's the idea.
You know, then there's -- there's a separate set of issues in terms of authoring
about how you tell the system about the correspondence in the model where to
locate the particles in sort of the original, whatever the original pose of the model
is when you're importing it from the FBX file. It would be nice if you could do that
in an automated manner. So there is, you know, a little bit of complexity with -- there's a little bit of tool-building complexity there. But in principle it's easy.
Yeah?
>>: [inaudible].
>> Ian Horswill: In principle easy to do. I haven't bothered to do it. But, you
know, they're just, you know, more particles and appendages. Now, in terms of --
>>: [inaudible] I don't see how you do it with particles.
>> Ian Horswill: So you mean getting this kind of thing --
>>: Exactly.
>> Ian Horswill: Right. You know, yeah, in principle, sure. I have not studied
human gait deeply enough to know whether that's something that's easy to do.
In principle here's what you do. You have a particle for the ball of the foot. The
ankle is passive in the same way that the knee is. And you run the same
algorithm with some additional joint constraints. What I do is I try that and see
what happens. And there will probably be something about it that's suboptimal.
But hopefully it would be something suboptimal that would be easy to fix. But I
haven't tried doing it.
Now, the thing that's hard to do would be if you want to put in hands and you
really want to compute hand shapes for grasps properly. You know, that's just -- that's a hard, complicated thing. And you know, it's painful, and good luck to you.
I'm not trying to do that in part just because for this kind of a system where I'm
trying to not cheat on -- well, for this approach, if you were to put in hands and you
wanted to do real grasping, then the collision detection system would be doing
lot more work because suddenly you have a lot more cylinders that you're having
to collide, and so it would just slow things down. And so in this version I haven't
tried to do that.
>>: [inaudible] you could fake a lot of the hand shapes actually.
>> Ian Horswill: Probably.
>>: And probably get away without having to do all the calculations.
>> Ian Horswill: Right. And so if you really don't care about that, you know,
especially if you're putting some kind of a skin mesh on anyway, then you can
just kluge the hands entirely and just, you know, you know what direction the
forearm is pointing, you can compute from that. You know, if you know
something about what you want the wrist to be doing, you can compute the
transformations for the hand from that.
You know, I have to admit I'm still personally -- you know, I'm an AI guy, so I'm
still personally at the level of why is it that I reduce the number of triangles in my
cylinders and my frame rate doesn't go up? Oh, there's this thing called
batching. So that's the level that I'm, you know, having to learn about. Anything
else?
Okay. So I have some very high level stuff about mammals and
neurophysiology, but we're kind of short on time. So I'm not sure, you know -- if
you have more questions about Twig, we might be better to stick with that than to
go off into the other stuff.
Okay. Well, so you know, what is it that I'm you know, sort of really spending my
time thinking about? I'm interested in thinking about how do we simulate
personality and emotion and things like that. And partly because of my
background and partly because of a bunch of other things, I'm really interested in
trying to take seriously the notion that we're all just mammals. And so I want to
learn about what we know about the neuroscience of sort of the general
mammalian control systems. Because there really is more that we have in
common with other mammals and in fact with other vertebrates than there is that
we have that's different. In terms of sort of large scale architectural stuff. You
know, fish don't have language, but fish have hippocampuses, they have
forebrains, they have thalamuses. They're called different things, but it's the
same stuff. And when you give anti-anxiety drugs to fish, they actually have the
same behavioral profile in fish that they have in humans. So there's a lot that's in
common there. So I'm interested in looking at that stuff.
So, and it also turns out that the people in the personality and clinical psychology
group at my university are heavily influenced by recent work on neurophysiology.
So my basic claim is if you look at what the architecture of that system is, it's
pretty quirky. It's not what you would design as an engineer but it's kind of
interesting. It's what you get when you, you know, evolve something simple and
then you evolve additional layers of complexity on top of it. The quirks that you
get from it actually manifest in interesting ways in human behavior and are
important in the failure modes of human behavior and so I would argue are
relevant to narrative.
So a lot of this takes place in the context of what are called dual-process theories
of human motivation, which are pretty common. And here's the basic idea. From
10,000 feet your behavior behaves very much like you have two systems: an
impulsive system that is just trying to approach good stuff and stay away from
bad stuff, and then an effortful control system that can reach in and override that,
okay.
And so the impulsive system is sometimes called the Homer Simpson brain. It's
the system that says ah, donuts. It's the system -- so that's the approach side.
It's also the system that says don't want to do my homework.
And then the effortful control system is the system that's on top of it that reaches
in and overrides them and says, no, you have to do your homework or, no, you
shouldn't eat that donut because you already ate five of them.
Now, why is it that we think there are two separate systems for them? Well,
there's a bunch of evidence for that. Partly it has to do with when you get lesions
in certain parts of the brain one part or the other part of the system stops
operating. But also the effectiveness of the effortful control system in particular
varies a great deal with age. It keeps developing up through age 25.
But also there are drugs that can selectively improve or worsen the effectiveness
of this system. So if you take Ritalin or Adderall, a lot of what that's doing is it's
making the system better able to override these systems. And Carver, et al's
argument is that that's actually part of what SSRIs are doing for treating major
depression. But conversely if you take ethanol, i.e., beer, that's reducing the
effectiveness of the system. And the thing that's really cool is there's this great
literature and it's a very, very well documented phenomenon that the
effectiveness of this system goes down temporarily with use. The specific
argument is it acts like a muscle. When you use it too much, it gets tired out, and
it doesn't work as well, and there's a refractory period.
And so here's the argument behind it. Or here's an example of the phenomenon.
The way we're going to assess your ability to do effortful control is we give
you a cryptarithmetic puzzle, and we don't tell you that it's unsolvable, and we
time how long you stick with it before you give up. Okay. And now here are the
two subject conditions. One of them I give you the -- or, sorry. One of them you
walk into a room, there's a plate with, quote/unquote, a yummy cookie, that is an
actual quotation from the paper, and the other has a Snickers bar. And I say eat the
Snickers bar, you eat it, and then you do the cryptarithmetic puzzle. That's the
control condition.
The experimental condition is you walk in, you see the same stuff, and I say eat
one of them. And so you actually have to choose which one you do. Forcing
yourself to make the choice between the cookie and the Snickers bar
measurably reduces your performance on a cryptarithmetic puzzle. Yeah?
>>: That would argue for optimizing your stressful choices. For instance, you
wouldn't want to break up with your girlfriend for [inaudible].
>> Ian Horswill: Exactly. And so it has all kinds of interesting implications for
things like addiction, the obesity epidemic. You know, why is it that people are
eating more? Well, partly there's higher availability of high calorie foods, but also
the average length of the work week has gone way up. And just in general the
level of stress that people are under has gone way up. And so consequently
when you get back from work at the end of the day, you don't have energy to
override this. Okay. Okay. Sorry. I went on a digression here.
Here's the system that -- this is -- the data on this is still at a pretty hand wavy
stage. But if you zoom in and focus on the impulsive system, we actually at this
point have very good neuroscience on what the impulsive system looks like
internally. And so you've got an approach system, you've got an avoidance
system, and the really cool thing, and this is what all the people in the clinical and
personality psych group in my department are into, is you have a third system,
which is roughly the hippocampus, which is a conflict detector. And it notices
when there are conflicts, and in particular when there's a conflict between
wanting to approach something and wanting to avoid it. And when that happens,
it reaches down, it inhibits both sides, increases like autonomic [inaudible]. It
also triggers a whole set of information gathering behavior. So in other words,
you go into a risk-averse mode and you try to gather more information. So
you get increased memory scanning, you get increased environmental scanning
and so forth.
And here's the argument. If you look at brain lesion studies and if you look at the
effects of anxiolytic drugs, drugs that cut anxiety, and panicolytic drugs, drugs
that cut panic. Panicolytic drugs affect this and not that, anxiolytic drugs affect
this and not that. And they affect in particular the behaviors associated with these
things; similarly with the neuroscience -- the lesion studies. And so basically fear and
panic are the avoidance system, and anxiety is, in some sense, a
separate emotion that corresponds to distinct behaviors that can be modulated
independently. And that becomes interesting to the psychologists because the
personality people like it because you can tie important personality traits to it, in
particular the big two, extraversion and neuroticism.
Extraversion roughly corresponds to the gain on the approach system, neuroticism
roughly corresponds to the gain on the avoidance system. If you do the actual
studies, what they really correspond to is sensitivity to cues for reward and
sensitivity to cues for punishment, which means a 30-degree rotation of the
extraversion and neuroticism axes.
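Just to gesture at what a simulation of that might look like, here's a very rough sketch of the approach / avoidance / conflict-detection loop with those two gains exposed; this is an illustration of the architecture as described, not a claim about the neuroscience or about our actual implementation, and every name and constant in it is made up.

```csharp
// Rough sketch of the impulsive system: approach and avoidance drives, plus a conflict
// detector that inhibits both and flips the character into information-gathering mode.
using System;

public class ImpulsiveSystem
{
    public float ApproachGain = 1f;     // roughly the "extraversion" knob
    public float AvoidanceGain = 1f;    // roughly the "neuroticism" knob
    public float ConflictThreshold = 0.5f;

    public bool GatherInformation;      // increased memory and environmental scanning, etc.

    // cueForReward and cueForPunishment are appraisals of the current target, in [0, 1].
    // Returns a net tendency: positive = approach, negative = avoid.
    public float Update(float cueForReward, float cueForPunishment)
    {
        float approach = ApproachGain * cueForReward;
        float avoid = AvoidanceGain * cueForPunishment;

        float conflict = Math.Min(approach, avoid);     // both drives active at once
        GatherInformation = conflict > ConflictThreshold;

        if (GatherInformation)
        {
            approach *= 0.5f;   // the conflict detector inhibits both sides
            avoid *= 0.5f;      // while the character goes risk-averse and gathers information
        }

        return approach - avoid;
    }
}
```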
Anyway, so we were working on simulating this kind of thing. And they actually -- yeah, I'm out of time. There's actually some wonderful stuff they have now on
the distinct behavioral systems involved in different kinds of avoidance. There's
actually a whole series of parallel avoidance systems that essentially make a
hierarchy of stupid and fast to smart and slow. They're all trying to do roughly the
same thing. And so you can actually plot out the time course of something like a
startle response in terms of the startle happens 10 milliseconds -- or, the stimulus
happens, 10 milliseconds later the startle system engages, and you get all sorts
of stuff like you might not know that you blink when you're startled. And you hold
your breath briefly when you're startled. But that's 10 milliseconds which is
incredibly fast.
And then maybe 100 milliseconds later or so, then the actually behavioral
systems kick in. And first you get high intensity freezing. All of the motor outputs
of everything get shut down for a brief period of time. That's what flash bang
grenades are relying on in part. Although they're relying on other effects too.
And then later on first you get the undirected escape system because it's fast and
stupid. And it, depending on its assessment, is either trying to attack the thing
that startled you or just getting away from it, but it's doing it in a very sort of
haphazard manner and that's why you're as likely to bump into something when
you're startled as to actually move away from it.
And then much later, the higher level systems kick in, tell those lower level stupid
systems okay, shut up now, I know what to do and have you do other stuff.
We're at the point now where we actually know enough about this where it's
really kind of plausible to simulate it at a kind of a qualitative level. I'm not
interested in doing like accurate reproduction of neuron spikes or anything like
that. But in terms of the macroarchitecture we actually know enough about it that
we could simulate it, and it's simple enough that you could really put it in
characters and it's not going to require enormous amounts of CPU time, but at
least potentially it can give us sort of some compelling performance in our
characters. Because you're going to get sort of nice timing of everything. So
that's the idea. That's what we're working on now. Okay. Thank you.
[applause].
>> Ian Horswill: Any other questions?