>> Stewart Tansley: Welcome everybody. I'm Stewart Tansley, in External Research here at Microsoft
Research. But I'm really here to introduce my friend and partner in robotics, Professor Robin Murphy,
formerly at the University of South Florida and now at Texas A&M University. She's here today to talk
about emergency informatics and the Survivor Buddy project, which is a project we're doing
collaboratively, in a lightweight collaboration. But she's going to tell us all about that shortly. Over to
Robin.
>> Robin Murphy: Thank you. And howdy, as we say at Texas A&M. So I'm going to be talking about
emergency informatics, which is a big thrust at Texas A&M. Then I'm going to scope it down to the area
that I do within that which is unmanned systems in a particular project called the Survivor Buddy, which is
how a robot can be your best friend during a disaster.
So what is emergency informatics? It's the real-time collection, processing, distribution, and visualization
of information for a disaster or an emergency incident. That really consists of prevention, preparedness,
response, and recovery.
So a lot of people think it's just about when the incident happens, but it's the information before, during,
and after, and all the structures that need to be put into place. So we're very interested in that.
And, of course, emergency informatics is different because the nature of the emergencies is different.
They're infrequent. People do them as exceptions, not as part of their day-to-day.
Every emergency is different. And so even though we have coarse patterns, the things that you needed to
know at Hurricane Katrina were quite different from the things you needed to know at the World Trade Center.
I was at the World Trade Center. Everybody was predicated off the Northridge earthquake model of how
they were going to respond to a disaster. Of course, that was a building collapse in a major urban area. It
didn't work that way; even the structures weren't right.
So many, many challenges with emergency informatics. It's something that we work on at Texas A&M. My
area, which IEEE Spectrum just wrote up as my dream job, is rescue robotics. I've been doing
that since 1995. And that backdrop is part of one of our sites at Disaster City, which I'll talk about a little bit
later.
What we've done since 2001: we've been in 10 different disasters, and we've sat through several
others, like Hurricane Dennis, where we've been in the EOC and watched all the deployments. And Dennis was
sort of a nonstarter. We've used unmanned aerial, ground, and marine surface vehicles for these
disasters.
And we see the information flow as we go through it. We have a lot of boots on the ground experience.
And that's where I'm coming from. So this is a personal passion motivation for me.
One thing about Texas A&M, just to let you know about the engineering college, it's one of the top 10
engineering colleges in the world. We've got about 10,000 students, which puts us about the size, our
engineering college is the size of Georgia Tech.
Texas A&M has 48,000 students there. Computer science is, again, a top program, ranked 15th. And we
have 46 faculty. I was the 46th. And we have about a thousand students.
Now let's talk about emergency informatics. I think of it in terms of this idea of basic research, which is
what we computer scientists, mechanical engineers, industrial engineers, psychologists, all of us are really
good at, but the fundamental advances are not particularly tied to a particular
application.
You think of emergencies: you need networks; wireless networks are important. Security, unmanned
systems, real-time distributed computing. Cyberphysical systems, physical devices you can trust.
Visualization and simulation are very important, how you use video games to train for this. Social
networking.
Almost all the applications now, people almost invariably assume that to understand a disaster it must be
geolocated, that you must relate it to the geography and see it on a map. That's not always true, but we
tend to go that way. We have sensors and sensing, and artificial intelligence for the intelligent decision
making all of this needs. We have this information space. And you see you have the prevention and
preparedness phases before the incident happens. Then the incident happens and we have response and
recovery, and how do we get from here to there, to what we need.
The traditional method is policy-based, where you have something like the federal government say, here's
what we need, we're going to get people to build this, contract with companies or places like MITRE or SAIC to
build up large systems. We'll give these to the responders and the responders will deal with it.
And we're very good at that at Texas A&M. We have a policy institute. We have a school of government
service and policy, and the policy institute works particularly on how to get managers sitting at
different levels of the hierarchy to know what to ask for in technology, how to integrate it, what's the impact
on training, what's the impact on cost.
We also have what's called TEEX, the Texas Engineering Extension Service, which is the state agency for
emergency response. Texas A&M actually owns Emergency Support Function 9, search and rescue, for
the entire state of Texas. Texas is the size of an average state in the United States. It's a huge deal there.
And also, since the late 1800s, we have trained firemen. We train fire rescue teams all over the world:
Secret Service, law enforcement, FBI. So we have good boots on the ground. We know this
path. We know how to get it in there. We know how to train. We know how to use it ourselves, but it's
lacking.
And that's just Disaster City. We've got a 52-acre facility where we can try just about everything out there.
Now, what's really fascinating to me, having been in several disasters, is that people spontaneously adopt
capabilities. Particularly, they go to this cloud, the idea of social networking. They put up
wireless.
They do ad hoc stuff. They build their own websites. They create new applications. They find data and
mine it and put it out there. During Hurricane Ike, we were looking at using our marine vehicle, our surface
vehicle, to check out one of the bridges that had collapsed.
When I pulled up Google Earth, people had already begun to mark hazards in the channel that they had
encountered themselves, because they had their own GPS system.
Now, there are all these disclaimers that the Army Corps of Engineers has not checked this out. But I'm
thinking, these people also put pictures of the hazards there, too.
I'm thinking, yeah, the Army Corps of Engineers should quit putting disclaimers; they should embrace this.
I understand there are problems. So it gets interesting. For me what's fascinating is that it's mostly the
victims who are doing the majority of the adopting and adapting in real time, because the traditional path
for these types of interactions usually takes seven years.
That's the lag time, because it's requirements-driven, pretty much all waterfall style. And we know that
our technology is changing faster than every 18 months for some of the things we can see and do.
But what you'll also see is people in the emergency response community will kind of sneak it in. They'll
start using their cell phones in creative ways. They'll call each other. They'll text. Unfortunately, because
of the politics, and bureaucracies being bureaucracies, you'll get cases where they're told they can't
bring their cell phones, because they might leak information out to somebody who shouldn't know. And
their wife or husband may call them, and a lot of times they do, with good information that we couldn't
get any other way, from CNN or because they know somebody there. But we shut that down.
So there's this confluence: how do we take the wonder of the basic research that we have here and get it in
there, but at the same time take advantage of it in real time, see it, propagate it, get it to the right people,
within the response, within that 72-hour crisis period?
Wouldn't it be great to have that and do that? That's our goal; that's what we're working on.
And we have 55 faculty members that are working in this area from 10 departments in four colleges.
And so the questions that we are working on start with this idea of polycentric control architectures. It's no
longer hierarchies. We've got horizontal and vertical relationships, and we pass
information and make clusters of decisions here. And we have a lot of emergent behavior.
And that gives us adaptivity and resilience; how do we exploit that? How do we get the right information to
the right people at the right time when our networks are bandwidth constrained?
Every time you go to a technology exercise, something like Strong Angel, has anybody been to Strong
Angel? You go there, and the network immediately saturates, because everybody has all of that.
And then things go down intermittently. So we don't have that capacity. And all the
solutions in AI have always been predicated on the assumption that you can do contract net protocols, that
you can use more bandwidth to negotiate. And so we need to find ways that don't rely on that, because it's
making the problem worse, right?
Talking about who is going to share is taking up more bandwidth on the bandwidth we're trying to share,
that we don't have enough of.
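The bandwidth point can be made concrete with a back-of-the-envelope sketch. This is an illustration, not any deployed system: the message counts assume a textbook contract net exchange, and the function names and numbers below are made up for the example.

```python
# Rough sketch of why negotiation protocols strain a saturated network.
# Contract net: a manager announces a task to N bidders, each bidder
# replies with a bid, and the manager sends one award message.
# All names and message counts here are illustrative assumptions.

def contract_net_messages(n_bidders: int, n_tasks: int) -> int:
    """Messages exchanged: per task, one announcement to each bidder,
    one bid back from each, and one award."""
    return n_tasks * (2 * n_bidders + 1)

def direct_assignment_messages(n_tasks: int) -> int:
    """A pre-agreed (policy-based) assignment needs one message per task."""
    return n_tasks

tasks, bidders = 50, 20
print("contract net:", contract_net_messages(bidders, tasks))  # 2050 messages
print("direct:", direct_assignment_messages(tasks))            # 50 messages
```

The 40x overhead in this toy example is exactly the "talking about who is going to share" traffic that competes with the data everyone is trying to share.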
Another area that's very important to us is visualization and simulation. What we find is that we have
these great models that work, sort of, but remember, every emergency is different in its
own way. There's always something different.
How can we correct and update that model and simulation in real time? How do we get the
information in there? How do we say: no, no, go get us that information, because we need to update
this so the decision makers can run through it and do some projections?
Not only can we do it, can we do it actively?
And an area that's very interesting to me personally: we know that yellow branch, where people are
spontaneously coming up with good ideas, both within the organizations and within the larger general
population. How do we notice what's working well and get it replicated, get it out there
and institutionalized, or whatever we want to call it, to propagate the good stuff? How do we
detect new adoption and applications of these technologies? A lot of these technologies are
information-technology based, so cleverly, there should be ways to embed that detection in the way that we
distribute the information itself.
All right. Well, that's what we do at Texas A&M. I'm going to turn now to what I
personally do. My area is unmanned systems, and I've already alluded to it; this is a picture of Oklahoma
City. You can see that aerial vehicles give views closer in that you can't get
from a manned helicopter, and ground vehicles go behind the rubble where it's not safe for people, not even
physically possible for a person or dog to get back in there.
If this were near water -- Oklahoma City is rare that way, but most of the population now lives near
water -- you can imagine having marine vehicles to look at your bridges, your infrastructure. At the World
Trade Center we were worried about the basement collapsing, because if that had collapsed it would have
taken out several other buildings.
>>: So when you say unmanned, you don't mean autonomous?
>> Robin Murphy: It can be autonomous.
>>: But it can be controlled?
>> Robin Murphy: Another distinction in emergency response work, a lot of the unmanned systems work is
not classically autonomous even if it could be. The autonomy levels are there for the control, because in
emergency response, you're trying to project yourself into the scene.
You want that data in real time. And because everything's different, you personally want to see it so you
can see the anomaly, the thing that you couldn't tell anybody else to look for, because you don't know it
until you see it.
And so we see a focus of people in robotics saying, we want one person to control 100 robots at a site.
Okay, that's a control idea. What I want to see is the network structure for a thousand
people to look through that robot and make sure they're seeing what they need to see in real time. We
always get the data from these high-altitude autonomous things two weeks later.
The governor, the president gets the data. But the streams don't go to the people in the field. And I do the
boots on the ground. I've been in both places. I watch how it goes. That's unmanned systems, generic
term. Autonomy. You're not going to see a lot of full autonomy in the systems I'm going to talk about,
because we need the human in the loop, the human wants to be in the loop.
Okay. In terms of unmanned systems, these are the robots that have been
deployed to date in the United States. You can see that, with this one exception -- this is a mine robot --
they're very small. And in general they're small because they're going places people can't go, because if a
person could do it, if there were a way to do it, we'd already do it. So this is adding new capability.
And something I'm very proud of: in green are all the ones where the Center for Robot-Assisted Search and
Rescue (CRASAR), of which I'm a part, has been there, where we've fielded scientific teams that have been
able to insert technology and learn from it.
Those are all the modalities. Now let's talk about ground vehicles. Here's the Berkman Plaza Two
collapse from last year, a six-story parking garage.
The forensic structural engineers wanted this picture. But if you're a responder, you want
to see what's underneath. And, in particular, there were two places they wanted to see. One was right
behind this hanging slab, which is incredibly unsafe. There was enough space for a person to get back
there to look in, but it's just so clearly unsafe that nobody would go.
And the responders will take a chance if they think it's reasonable. They will push, but they won't do
something suicidal. So we put a robot back there to look around.
And the reason why we're looking back there is because there was a report that a person had been up on
this part when it collapsed and so therefore he might be down here. So you're using the information that
you have about a collapse to modify your search and where you put your efforts in.
I'm not going to talk about that as much, because we're going to talk about Survivor Buddy: what would we
do if we found someone? Out of all these 10 incidents, to date we've never found a survivor. We've found
a lot of remains, but we've not been able to get to the right place at the right time to be helpful.
But what would we do if we get there? So how do we do that? And here's just some of the stuff that we've
been working on where we initially started it.
[video]
The robot is on its way. The first challenge is to maneuver it by remote control through the rubble. It's very
difficult to navigate and see; it's like looking through a soda straw.
>>: When the robot finds me, rescuers need a good look and a conversation.
>>: We want to know if you're okay -- where does it hurt.
>>: I have pain in my left shoulder, and it's hard to breathe. I have pain in my left leg.
>>: They need to see that the camera on the robot can get a good look at simulated injuries so that
rescuers can triage, get help to the most seriously injured victims first. In this simulation they determined
that my condition is stable.
>>: Several rescue robots can be working the disaster scene at the same time.
>>: Another simulated disaster victim, Jamie, is now being contacted by a new generation of rescue robot.
>>: Hello, hello, can you hear me?
>>: In this simulation she doesn't respond. Is she alive and unconscious? Or has she passed away? If
she's not alive, rescuers don't want to use valuable time that could be critical to saving other victims.
This robot carries a kind of nose, a sensor that samples the air around the victim's mouth. When we
breathe, we exhale carbon dioxide, or CO2.
>>: The green here shows that there's enough CO2 to show that she's breathing.
>>: A major breakthrough; this is why they want robots. A tube on the robot can deliver water or even medicine.
Through the hours or days it could take rescuers to dig down to the victim, the robot can be the only lifeline.
>>: How to get them water, how to keep them psychologically comforted, talk to them, keep them
motivated...
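The CO2 check described in the video narration amounts to a threshold test. Here is a minimal sketch; the function name, ppm values, and margins below are hypothetical, not the actual sensor's parameters.

```python
# Minimal sketch of the CO2 "nose" check described in the video: sample
# the air near the victim's mouth and flag breathing when CO2 rises
# clearly above the ambient baseline. All thresholds and readings are
# hypothetical illustrations, not the real sensor's values.

def is_breathing(samples_ppm, ambient_ppm=400.0, margin_ppm=600.0):
    """Return True if any recent sample exceeds ambient CO2 by a margin,
    suggesting exhaled breath near the sensor."""
    return any(s - ambient_ppm > margin_ppm for s in samples_ppm)

print(is_breathing([420, 415, 1800, 430]))  # True: exhalation spike
print(is_breathing([405, 410, 402, 398]))   # False: ambient air only
```

A real system would also look at periodicity (breaths repeat every few seconds), but the simple spike check captures the idea.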
>> Robin Murphy: And the person who is going to be trapped and found by a robot is going to be there for
four to 10 hours. It will have been 12 hours, statistically, before we'll have found them.
So this is going to be an important lifeline. But you can see our initial work, five years ago, was very, very
focused on devices, the usual mechanical things. And if you take the Joint Cognitive System idea from
David Woods, we think about the robot here, and there's the survivor. We've got a lot of work on the
mission specialist and the pilot, the people running the robot; you need two people, one to focus on the
robot and the other to look at the data and interact with the person.
With Eric Rasmussen, we began to realize, no, no, this person, if you've got a crush injury or something,
you're going to want some sort of reach-back over the wireless network to the literally maybe 200 people
in the world who are specialists in on-site care, in how to assess and work with trapped patients.
So you have this. They'll be working with it, giving directions, telling people what to do. This might
propagate on up. This is a basic model of the larger cognitive system.
If you think about human robot interaction, how are you going to manage the victim? There's the robot;
there's human robot interaction. The very traditional focus is either to work on this level, how we're
seeing what we're seeing, or on how these two groups work together: the people behind the robot, and
what should be made autonomous so that they can focus. Can you get rid of the pilot and just have the
EMT somehow miraculously point the right way?
And that's been the traditional focus. The problem with that is that we forget: what about me, the survivor,
with this thing in my face, in the dark, when I've been trapped for 12 hours and I'm in pain? I now have
a device in front of me, and that part needs some attention.
And there's another aspect to it: I'm now going through this device, I'm talking with people, I'm
interacting with several people. And this doesn't even capture the fact that sometimes we'll drive the robot
off to go look at the disaster: oh, hey, we're not paying attention to you anymore, I need to go look at this
beam over here.
So we're not even talking about that. Already we have what's called a mediated experience. We're talking
to people, but we're talking through this box, so that adds another level of complexity. And that brings us
to Cliff Nass's work at Stanford. Byron Reeves and Cliff Nass wrote a breakthrough book on their
studies called The Media Equation.
And the equation, one equation, all you need to know: media equals real life. He has all these studies that
show that if something moves on a screen, if it's got sufficient animacy, we subconsciously treat it
as if it's animate. And we respond to it socially.
He's made a lot of money consulting on things like call centers: how we react when the voice
says "I'm here to help" and you know it's not here to help, because there's no "I" there. When do we accept
"I"? When do we want it to be impersonal? He does that with the media equation.
We've got him involved in thinking about how people are going to react to these robots. And he had
already predicted that people were going to treat these robots, because they move, as separate entities:
even if you heard your best friend on the other side talking to you, you would still treat the robot and that
person separately.
And I told him he was wrong. It was about the third time I'd told him that. I've just quit telling Cliff he's
wrong because I keep having to buy him dinner. What we found with responders is that they
spontaneously -- notice he's looking at the robot. He's looking at the robot, talking to it. He's maintaining
good social distance. He makes eye contact.
Before that robot was in there, that responder was using a hands-free mic. The robot came up, and it was
a better two-way communication link; it actually had less static. So he began talking through it.
He had no need to turn around. You would have thought he would have just done away with the mic and
talked wherever he was looking. But no, he would point to it. And they traded out operators. We
still saw the same phenomenon: this entity was treated differently and treated socially.
Interesting thing. So we've been working on following that up. Also, you've heard the stories about the
soldiers getting attached to their bomb squad robots. They're teleoperating the robot, but they still think of
it as something discrete. It's very bizarre, and Cliff has all these studies; it's a different part of our brain.
What does that mean for victim management? It means they're going to expect a social relationship.
They're not going to expect the robot to be in their face; they're going to expect it to behave. We know
that when people approach, they slow down; they don't get in your face. Based on culture, there's a
certain stand-off distance that's considered polite.
When we talk, we make eye contact. That's a big cue to show that we're paying attention. If the robot
doesn't do these things, they will probably distrust the robot, because that part of the brain
is going, well, it should be behaving like an animal, it should be behaving like this, but it's not.
It's stupid. And it doesn't matter what's coming through it.
Another one is that, with this animacy, we begin to think of it as a social actor, which is what Cliff calls it,
and which we've already seen. Then the voice becomes important: is it a synthetic
voice, is it a taped human voice? That could have quite a bit of impact; think about call centers. And
whether it says "I" could also influence whether you trust it, regardless of whether it's a doctor or a
structural engineer or just me talking over the robot; whether you begin to get creepy,
creepified by it. I think creepified is actually a word.
[music]
>> Robin Murphy: Do you know this story? Okay. In 2006, in Tasmania, a gold mine collapsed. They
were able to drill a hole down to where they found two miners alive, and it was going to be days before
they could dig them out. And you know, you're talking to them, and they got tired of talking to
the guys. They said, could you just send us down an iPod with the Foo Fighters? And the Foo Fighters,
of course, heard about it and wrote them a song dedicated to them, the "Ballad of the Beaconsfield Miners."
Let's take the idea that the robot is going to be up there with you and you're going to be attached to it.
We get to this idea of a Survivor Buddy. We're going to web-enable it. Now, instead of just that
creepy little robot coming in, it's a better-behaved robot, but also fully web-enabled. Now we can do
two-way video conferencing. Cliff maintains the first thing they're going to want to do is watch CNN, to
validate that they're part of the disaster, and that this is kind of cool.
They're going to want music. And how are they going to react to that? Because we already know that they
react to the robot socially, and now we're adding this different aspect to the device: we're being a
medium.
So this adds a problem: we have what would normally be a social actor, and we would treat it that way, but
it's also doing things like our CD player, which we don't relate to like a social actor.
And if we mix it up, will it make people not trust the message? Going back to McLuhan, the medium is the
message. So what we've postulated is that in communications it's fairly well accepted that there are pure
mediums, and there's a lot of support for the social actor work, but we're postulating that there's a middle
part, this social medium, which has a little bit of both. And that's going to require a slightly different
configuration, slightly different heuristics for how we do it.
And some of the ways you would measure the congruence, whether you're going to trust this device,
whether you're going to be comfortable with it: does it behave well? Does it make eye contact? Does it
stay out of your face? Does it approach in the ways that we normally do?
Then there's the communication identity and the voice. And in terms of affect, my NSF graduate fellow
Cindy Bethel, an absolutely brilliant woman, has been looking at nonverbal and nonfacial affect. You notice
these robots don't have anything that looks like a face. Form has to be driven by function. We don't want
any extra stuff; it's just one more thing that breaks.
And so she went back through the cognitive and behavioral literature and figured out heuristics
for how things should behave based on distance, on proxemics. There are cues: you talk louder when
you're further away, but your voice automatically drops when you're closer, because it would be rude to
be yelling at a person up close, while further away they need to hear you.
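The distance-based voice heuristic described here can be sketched roughly as follows. The zone boundaries (loosely following Hall's proxemic zones) and the gain values are illustrative assumptions, not the parameters from Bethel's actual study.

```python
# Sketch of a proxemic voice heuristic: speak louder when far from the
# person, softer when close. Zone boundaries loosely follow Hall's
# proxemic zones; the gain values are made-up illustrations.

def voice_gain(distance_m: float) -> float:
    """Map robot-to-person distance to a 0..1 speaker gain."""
    if distance_m < 0.45:    # intimate zone: near-whisper
        return 0.2
    elif distance_m < 1.2:   # personal zone: quiet conversational voice
        return 0.4
    elif distance_m < 3.6:   # social zone: normal voice
        return 0.7
    else:                    # public zone: raised voice to be heard
        return 1.0

print(voice_gain(0.8))  # 0.4 -- volume drops once the robot is close
print(voice_gain(5.0))  # 1.0 -- louder when calling from farther away
```

The same table-of-zones pattern extends naturally to approach speed and lighting, the other behaviors the study manipulated.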
So she put all that together, and she's just completed a study. We're still working on the results, but these
are partial results. She has run over 128 people, the largest human robot interaction human-subject study
done to date.
[video]
These are both real robots that we have used in disasters. This is a medical assessment path
that we did for some original studies with doctors. So this is how they would drive the robot if left to their
own devices.
We captured four sets of video data. This is all done in darkness; those are night-vision cameras,
because we wanted the people to feel trapped and confined, as stressed as they could be.
Here's an example of how we would drive it normally, without any consideration of the survivor.
Bright lights. That was kind of jerky, right at her face. The noise; we can't see what's going on.
And you'll be shocked to hear that people were creeped out by it. On the self-assessment they
used the word "creepy" a lot.
That's not healthy, clearly. So why don't we drive it, or program it, the way we do puppets: to be
more gentle, more consistent, with nonverbal, nonfacial affect. Notice here the lights are dimmed. It's
submissive. We've got lighting so you can see more of it. It's no longer two dots coming at your face like
something out of The X-Files.
See him smiling. And we did physiologic tests, so she's going through all that data. You can
see the blood pressure and the different changes with people; it's not just a self-assessment. So she's
currently measuring valence and arousal and charting the statistical data. But the
initial work is that we're already seeing a strong indication.
This is important. So that's what we've done. What we're working on now with the survivor, in the
Survivor Buddy project itself, which is funded by Microsoft, is communication identity. Is it "I"?
Is it "the robot"? Is it "I am acting on behalf of"? How do we phrase that? Cliff Nass and Victoria Groom,
a graduate student, are leading this. For the first phase, we've just finished the script for
a video game simulation where the player is a victim, and the robot comes through and talks in one of the
identities. "The controller" is the pure medium. "The robot" is the social actor. And the one in the middle is
that middle ground: I'm a thing, but I'm transmitting stuff that I have no control over.
And apparently in psychology the real trick with these studies is that you measure trust by giving people
choices and seeing what they do. Do they take your suggestions? We'll have suggestions: at this point, the
experts say, you should probably listen to music to calm down.
Is that okay? If they say yes, then we know something. If they choose something else, that tells us
something too. And there's a series of questions and events within the script to go with that.
So if they take the suggestions, if they do the self-assessment and say they like the robot, and if they show
less anxiety in the way they respond, then we say that that identity is more congruent, more conducive
to trust.
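One way to picture how such trust signals could feed a single congruence measure is sketched below. The weighting, the scale, and the function name are entirely hypothetical; the study analyzes these measures separately rather than combining them like this.

```python
# Hypothetical sketch of combining the study's trust signals: suggestions
# taken, self-reported liking, and observed anxiety. Equal weights and
# the 0..1 scale are illustrative assumptions, not the study's method.

def congruence_score(suggestions_taken: int, suggestions_offered: int,
                     liking_1_to_5: float, anxiety_1_to_5: float) -> float:
    """Higher score = identity condition more congruent / more trusted."""
    compliance = suggestions_taken / suggestions_offered
    liking = (liking_1_to_5 - 1) / 4        # normalize 1..5 to 0..1
    calm = 1 - (anxiety_1_to_5 - 1) / 4     # low anxiety scores high
    return round((compliance + liking + calm) / 3, 3)

# A participant who took 4 of 5 suggestions, liked the robot (4/5),
# and showed mild anxiety (2/5):
print(congruence_score(4, 5, liking_1_to_5=4.0, anxiety_1_to_5=2.0))  # 0.767
```

The point is only that compliance is the behavioral signal, while liking and anxiety corroborate it, which is why the script builds in concrete choices for the participant to accept or reject.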
And then we're building the Survivor Buddy. We've just found a really neat seven-inch [inaudible]
monitor, fully web-enabled. Have you seen it? Pretty cool. Video camera built in, plus speakers and a
microphone.
So we'll put that on top of our robots. We're building the heads for it now. And we'll take it out to Disaster
City where we've already started doing -- we've already done experiments with other aspects with victims.
We have places where we can hide them in the dark in the cold and with the spiders, I might add.
So in future work: we've looked at non-anthropomorphic robots, but we'd like to compare them with
anthropomorphic robots.
Go ahead.
>>: When you put the person in there do you let them stay for a while until they're okay?
>> Robin Murphy: For the one that Cindy did, they had a three-minute period of time before they got
into the box. Then they had three minutes in the box so that their heartbeat would get down to normal. We
would see that.
It's also good to discover whether they're going to be claustrophobic before the study; there are built-in
checks for that. Now, we did not push much beyond that with anxiety, because if you've done
human-subject testing, you know the review boards get really wonky about anything that stresses people out.
In this case, the previous case, these were responders, subject matter experts. They wanted to play with
the robots. A whole other study came out of this. For a human robot interaction class, we were trying a
pilot study. We have a robot that looks like a caterpillar or a snake, and we sent it down to them, and they
said, that's creepy; yeah, this is not going to work.
And they also did something we had never thought of, and again, that's why I love field work: we've got
this $100,000 snake from Japan, and they just grab it and say, look, you can see -- do you see me now? Do
you see me now? Hey, look here. Fortunately the developer was right next to me, because otherwise I
would have had a heart attack.
I was kind of like [gasping]. But he was good, because he was getting data on how they spontaneously
reacted. That's another thing: we think of victims as passive, but no, they've got their own agendas, their
own opinions, their own lives.
And we just hadn't thought of that. So we keep being surprised as we do these kinds of tests.
So that was the example of the anthropomorphic side. It doesn't have to be the usual Sony dogs or
anything like that. We'd also like to explore the communication identity further: we have medical doctors,
we have structural specialists, we have people trying to talk to you as the victim who are just friends and
family.
Now we have many people trying to use the robot at one time and communicate. How are we going to
make that consistent? How is it not going to be confusing or overwhelming? So we'd like to look at that as
future work as well.
And then we have one on communication voice doing what the synthetic voice, do we want to tape a
human voice? Do we want a facilitator's voice, a real facilitator's voice that we've submitted as part of a
larger NSF grant.
So just to summarize and wrap up, one of the cool things is that Survivor Buddy comes from both of our groups saying, look, we should take the neat things in this research space and apply them to an obvious need: victim management. At the same time, there's social networking -- those darn people just come up with their own ideas -- and we're trying to merge that in to make it a web-enabled robot.
And we're really looking at the mediation this way, focused on the social interaction that will happen with the survivor because the device is a social actor, because of its animacy, yet it is also the mediator to this larger world out there.
And, finally, to me one of the coolest points: I love doing field work, because I always get smarter. It always helps us identify new issues. And that's another strong suit of Texas A&M -- we really are trying to take advantage of our ability to work with the subject matter experts, insert ourselves, learn, continue that cycle, and teach our students to be a little more hands-on.
And it takes a village; here are some pictures of some of the teammates. At Texas A&M there's myself; Dr. Sorenson, who is building hardware for us in electrical engineering; and Cliff's group. At the University of South Florida we have Cindy Bethel and Jeff Craighead, who have been working on this as well. So that's our group. And thank you so much for supporting this work. We're very excited about it; we think it can lead to so much more. And it all started with y'all.
>> Stewart Tansley: Thank you very much.
[applause]
>> Stewart Tansley: Do you have time for questions?
>> Robin Murphy: Sure. My time is yours.
>>: You haven't talked at all about the robot technical stuff. Is that kind of figured out? Do you feel confident about the ability of the robot to get where it should be?
>> Robin Murphy: I have never met a robot that I've liked yet. Okay. So there's certainly many things. But
after 9/11 I wrote a series of papers and said you know the robots are okay. The biggest deficit is in the
human robot interaction.
And, in particular, I could show you, and did in my research -- let's go back. I'm going the wrong way. Sorry.
Yes. So we focused right here. We could show you so many errors being made, slips and mistakes, and we saw so much miscommunication that one person alone couldn't run the robot. Even when the robot works, you're in a deconstructed environment and you can't perceive it well enough. It's very hard to look through a soda straw; it's a keyhole effect. So we began working with perceptual psychologists and so on.
So I haven't hit on the robot itself. The robot mechanical stuff is a whole other set of talks; we can talk about that. I haven't really covered it because I wanted to talk about the HRI aspects of the Survivor Buddy. I think HRI is the biggest bang for the buck, particularly looking at it this way, rather than focusing on reducing the number of operators. A lot of people are doing that and will make some good progress on it.
>>: At a certain level, isn't something better than nothing? If I'm trapped in the dark and something comes down, at least I know I'm not forgotten. Then after that do these factors come into play?
>> Robin Murphy: We've heard debates either way from the medical doctors: you could be so in shock and in pain that it just sends you over the edge if it's super creepy.
What's interesting to them and to the psychologists is that, you know, for the first 15 minutes this is great. Then you don't trust the robot anymore. You don't take directions coming from it. You ignore it.
And we're talking about managing someone over a 10-hour period. So what does it cost to explore that, to get that benefit?
>>: I have a question about your experiment setup. For your anthropomorphic or other robots, did you tell your participants -- obviously you tell them they're going to be in a closed space, I assume. Did you tell them to expect to encounter a robot, or did they have any idea?
>> Robin Murphy: I forget -- I do not recall the exact form that Cindy did. But we said you're going to be
working with rescue robots.
>>: So they know somehow a robot is going to be involved.
>> Robin Murphy: They know somehow a robot is going to be involved.
>>: Would there be any plans to kind of leave them dark?
>> Robin Murphy: We could look at doing that in some of the field trials we're planning for another project. You wind up with some interesting problems with the IRB: until you get further along, they don't like you springing those kinds of big surprises on people.
At Texas A&M we have a big volunteer program. A lot of people come out on the weekends -- the Boy Scouts and Girl Scouts can get their community service credit by being victims to train the dogs. You put people in those rubble piles; two of our eight rubble piles are built specifically for you to sit there forever. You've got room for yourself, a soft drink, and something to snack on, because you wait for the dogs to find you, which could be a long time, because some of them are young and not too sharp yet.
So we'll have volunteers so we can look at doing that. And I want -- I like the total immersion, because you
really get those impacts.
>>: So I was wondering how you would [inaudible] the situation where the victim is seriously injured and you want to test how they react to, say, an anthropomorphic robot -- whether it's creepy versus something that's not so creepy.
>> Robin Murphy: There are different ways to induce stress and anxiety that don't require physical pain. There are techniques where they have you do very frustrating problems, and you get the same physiological response.
So imagine doing that while being in the dark and confined -- uncomfortable, but not in any way dangerous to your health. The psychologists have ways of doing it without hurting anyone. But you never know until somebody is actually out there. That's why we work with medical professionals who have actually done this on site, and there are relatively few who have that kind of experience.
For one, we've drawn very heavily on Eric Rasmussen's experience in the Turkey earthquake.
>>: You said you have a display panel to be able to [inaudible], particularly people's faces? Obviously you could show pictures to people, and they see someone they recognize [inaudible]. That should have some calming effect, right?
>> Robin Murphy: You would think. We also know that certain music has calming effects. So can you regulate? You can sometimes regulate a patient's heartbeat by the music you play in the background. There's a whole bunch of that -- there's an interesting finding about heavy metal. When you get an MRI now, you get to pick your radio station. People are savvy about this.
But what we also expect to happen is that the two-way video may be there, but we may not show the victim to the families. The victim can see their family, but the family can't see them, for fear that they'll overreact. People tend to look a lot worse than they actually are in those situations.
So it's gone from, oh, what do we do if we find somebody -- where apparently the strategy in the emergency response world was that Robin, being the chattiest robotics person, would be stuck talking to this person -- to, oh, we'll do web video, to, oh, shoot, there are some serious psychological ramifications.
And we're trying to figure out ways to -- again, something's better than nothing, but let's try to do it as right as we can, given that there's this wealth of wonderful ideas from psychology and communications theory.
>>: There's also, I think, the issue that if you see a human face, as opposed to making the robot anthropomorphic -- this one has a face --
>> Robin Murphy: They're going to treat it as a face, as soon as they see that. Look at sock puppets. We
manage to get a face out of a sock puppet. It's like we see faces in cars.
We're wired to do that. So, yes, we expect there's that.
>>: What about going to more fully biologically inspired robots?
>> Robin Murphy: That's one thing we'd like to look at in our future work. So you like where I'm going here.
The two robots I wish I had the most would be, first, one that looks like a scorpion: the legs are wonderful, and then you use that tail with your camera to get up to the height you need. Well, all right, that's non-anthropomorphic but creepy. And the other one is a ferret. Yes, the face-eating ferrets, right? Because ferrets are basically snakes with legs. They get up, they can look around, they squeeze through places -- what a beautiful set of agility. Again, not something we really want in our face.
And then, oh, let's do the small cockroaches. At Case Western, Roger Quinn has lovely things -- Whegs -- that can stick and climb, but they're cockroaches. Now I've got ten creepy things coming at me without enough light or power for me to really see and visualize them, and we think one of the important things is that if you can see it, it will be better. And also the sound. So, okay, yeah, we'll start overriding some of this creepy animacy stuff. We'd like to go there; I think there's a huge opportunity. And of course there are the more benign anthropomorphic designs.
>>: Dave [inaudible] scary spider.
>> Robin Murphy: There are so many creepy things. Dennis Hong has one. He says, look at this! And you go, Dennis, that's going to work really well -- you'll just have people stepping on it.
But it's interesting how we react. I've been doing emergency response since 1995 and field work since 1999, and it's just fascinating how wrong I can be, and all the things that we're learning. It's often not the technology; it's how we use it, how we package it, how we present it, how we help people train, and the silly things. That's why I love being in the field, to get that insight.
>> Stewart Tansley: Any other questions?
>>: To sneak one in on what we were just talking about: there's nothing wrong with this field -- I find it fascinating, this particular application. But what about other application domains for this research? For example, in-home robotics or assistive devices -- do you see your work being able to speak to those contexts as well?
>> Robin Murphy: Well, we think it's going to be heavily related to healthcare, which has been looking at things like projecting the healthcare provider remotely. But what do you think about somebody who is highly dependent on a robot? A shut-in whose robot is their medium to the outside world -- that's the thing they talk to; that's how they talk to other people, really. We haven't seen that looked at in healthcare, and we believe this type of work will carry over to it as well. Any time your connection to the world is mediated by a robot for a long period of time, where you have a real dependency -- not just those first 15 minutes, or an occasional interrupt, but a continuous thing -- we think this will have significant impact. If it doesn't help frame the answers, it will help frame the questions that need to be answered.
>> Stewart Tansley: Thank you.
[applause]
>> Robin Murphy: Thank you very much for having me here.