>> John Krumm: Hi everybody, thanks for coming. I'm John Krumm. Today, we give a warm
welcome to Scott Davidoff from JPL, hosted by AJ and me. And AJ and I first got to know Scott
when he was a PhD student at CMU. And now he manages the, let's see, the Human Interfaces
Group at NASA JPL. When I told Scott that today was our hot wing party, he threatened to come in costume. But he didn't, and I had been expecting, in order: astronaut, alien, robot, or
maybe Mohawk Guy. And then Scott was kind enough to bring along his colleague Mark
Powell, who is Curiosity Rover flight software technical lead, and so he's going to be doing part
of this talk too. So take it away.
>> Scott Davidoff: Thank you very much. So this image is clearly one of the icons of a
generation. And I think really helped define how we see our relationship to our universe. And
what I'd like to talk about today is how this footprint had a face to it. And,
you know, it's truly a human mark. But our mark of exploration today is looking a little bit
different. And the tracks that we are leaving on other planets no longer have a human shape.
And this creates a very interesting set of issues for our exploration. Rather than a human face,
we have this cute fellow here, and you know, she's really the hero of the exploration, and
our entire window into the other worlds comes through her senses. And everything that she
does comes from our command, but the difference now is that there's 80,000,000 miles
between us. And so, the set of problems for exploration are a little bit different. And at JPL, in
the Human Interfaces Group, our sets of, what we do, involves both the uplink side to the
robots and the downlink side. So the uplink side actually starts with a very social problem,
which is to develop a system to allow the 300 Mars scientists who are now gathered at JPL to
come to some kind of agreement every day so that they can come up with a plan for what the
robot should do. Some would say that getting scientists to agree is the hardest of all the
problems. And then we have to take the goals that all of the scientists provide for us and
actually balance them with what we know the Rover can do, how much power it has, whether
the cameras would accidentally focus on the Sun, and how long the day is; make sure that these are all, in fact, valid, and then turn this into a plan and radiate it through here,
this is an image of the 70 meter dish at Goldstone. Then on the downlink side, what we get are
just loads and loads of data. So we get all the telemetry, the images, the state of the robot, and
then of course, turn them into subsequent products like 3-D images. And we really want to be
able to have a very accurate model of what the state of the spacecraft is and where it is in its
environment, make sure everything is healthy and then part of what we do is share it with the
scientists and the science community and as a public asset, share it with the world.
Now, the challenge for our group, for any particular mission, is to take about 100 mission scientists (on Curiosity, for example, there are actually 10 separate instruments located in seven countries) and, over the course of eight hours, develop a single plan. You can see
here that they're located in quite a few different places. And by the way, Curiosity is, I think, a
pretty famous spacecraft these days, but our group actually manages the communication and
planning for 16 different space missions. And there are four in preparation. And we continue
to get data to this day from spacecraft like Voyager. So it's a very exciting environment.
However, as you can imagine, with this age of robotic exploration, I think the challenges that
we face as roboticists, people who are interested in other planets and explorers, and people
who are interested in human interfaces, are actually significant and different than they were
during the Apollo age.
And so, what I wanted to talk about is to bring these differences to the foreground and
describe what I think are the most important challenges that we face as a community, and of
course, look for input and insight into ways to solve these problems. I think the number one
problem is what's going on? So we know we have a robot on Mars. But where is it? And in
what state is it? And each instrument, where are they located? Well, we know where we think
they're located, but in order to do precise measurement, we really need to know the difference
between our expected plan and what the real plan is. We get tons of data
about all of the different sensors, what the temperature is, what the weather conditions are,
what the composition of a particular set of minerals is, and we could stitch all of this into
a very detailed picture of a landscape, and then we still have this very complicated
representation problem of how can we set forward a plan for what we want the robot to do.
And as the distance to the robot gets further and further, it becomes more and more important
that we can express that in a high-level language. So while we might tell Curiosity to go over there
10 meters, when we start to explore the outer planets, we're going to need to have the ability
to send them a plan that would survive for weeks at a time because the time delay is so
significant. And so both expressing the plan and what inferences the robot is going to make
based on our expression of the plan are, I think, among the hardest problems. So we can sort of
encapsulate this as visualization, but I think it's quite a few more things than just that.
When we do want a robot to go somewhere, there are a lot of ways that this can be expressed;
and I think one of the hardest problems, especially for robots that we're looking toward in the future to do things like explore asteroids, is expressing the high-level objectives of a motion plan in a very natural way. And so we can tell the robot to
increment its motion in millimeters, but if we really wanted to say, go over and touch that rock,
how do we use that language, how can we express that to the robot in a way that's efficient for us as, you know, essentially robot pilots and explorers, and how can we do it in a way that
we can validate it?
The next two problems are really robotics problems. One is about recalibrating plans against sensor
data. So anybody who's ever tried to program a robot knows that there is inevitably a
difference between what you want to happen and what really happens. And when we are
having robots explore autonomously, one of the things that is challenging is to constantly have
the robot reassess what its state is compared to what you originally instructed it to do. And
then, I mean, I think probably the hardest problem is when the robot itself is enacting a plan, it's,
you know, an autonomous action, how it can really self-monitor, and make sure that it's safe.
So at this point, I will hand the platform over to my colleague Mark, who's going to talk a little
bit about some of the current missions.
>> Mark Powell: So first I'll talk a little bit about the fourth challenge that Scott was just talking about: how to develop autonomy and to be able to calibrate the system that's
executing the autonomy and validate the autonomous algorithms to see if they really meet
their expectations, and to do this, I'll put us in the context of one prototype mission of
exploration that's preparing us to visit a world that we have not yet visited with a robotic presence, for a lengthy period of time anyway. We all wish that we had a time
machine that could take us into the future or into the past, and the mission that we're trying to
capture in this instance is to be able to travel back to a world that looks like the Earth looked like 3 billion years ago, when the atmosphere was not nitrogen and oxygen and carbon dioxide, but
had more methane and had really, a mix of chemicals that was not as conducive to life as we
have today on our planet. And instead of a time machine, we have been fortunate enough to learn that there is a world that is like this now, sort of a snapshot of the Earth of 3 billion years ago, that we can go and visit.
If we were to go there, if we weren’t a methane breather like this fellow, we wouldn't want to
get out of our environment suit, and so it's necessary to visit this world with a robotic mission.
And this world is Titan, the moon of Saturn that we've only just barely begun to visit. We can
fly by it with Cassini, which is still in operation now, and observe things that are interesting, like
the glint at the North Pole, which is the glint off of the lake that's there. To most outward
appearances of an orbiter flying by with a camera, this is a world not unlike Venus where
there's a greenhouse layer that you can't penetrate with visible light, you have to look at it with
radar in order to map the surface. But we want to learn more about it, nonetheless, and so
beyond the radar, what can we do next? Well, we dropped a lander called Huygens, which was
a collaboration between the European Space Agency and NASA some time ago, to one specific
spot chosen more or less at random, and discovered a lot of really interesting things about this
world; that although there is a greenhouse layer, the winds were actually a lot weaker in velocity and strength than scientists expected. And while the surface is solid and the atmosphere contains a lot of methane, a lot of these rocks are composed of water ice, which was a genuine surprise to many. And so that has some very interesting
implications. So we want to go back, and we want to learn more, and we want to do more than
simply drop a lander in one spot and see only out to the horizon of what there is. So how can
we do that? Well, since imaging with orbiters is a challenge, it just so happens, thanks to what we learned from the Huygens lander, that the greenhouse layer only goes so far down, and after you get down below about 10 kilometers through the atmosphere, it's clear as far as the eye can see. And so, if we could float an airship over an extended mission and really map the surface in visible light as well as radar and other techniques, then we can start targeting pinpoint surface missions.
And so, as a platform to prove out the autonomy that you'd need to support an extended mission to Saturn, where the one-way light time is about eight hours and communication bandwidth is very, very limited and very, very expensive, you'd use an aerial platform such as the JPL Aerobot in order to conduct a mapping mission; but since it's out of touch for so long and may need to go weeks at a time without any direct communication from Earth, it really needs a great deal of autonomy to be able to handle its self-localization and awareness of its environment in a GPS-denied environment like Titan. And so, the autonomy engineers who
were creating the onboard software are very interested in being able to validate how well is
this autonomy that they're developing performing according to expectations, where is it going
wrong, where do we need to deal with unexpected variables that we haven't accounted for
yet? And so they use all the sensors, and we have a representation of state as we monitor the
performance, and we need to be able to, in richer and richer ways, add additional layers of
being able to autonomously monitor the platform's own performance and make adjustments.
And so, one means that we currently use is mapping and 3-D visualization techniques, to both interactively specify this wide blue track, which is the intended path of the airship, and map its progress over time. You're getting feedback about what the downward-looking cameras on the bottom of the airship, which do the ego-motion tracking of the airship's position using image-based localization techniques, are seeing; and you're getting insight into things like, well, as the robot is trying to navigate this path, the wind is actually having a significant effect on its trajectory. You want it to go directly along the path, but it's having to fight the wind at the same time, and so if you are
in an environment like this where you have external forces that are going to affect the way that
you're able to navigate, how do you compensate for that? Do you do something with your
mobility system to be able to correct for that, or do you simply plan enough margin into your
trajectory so you're able to tolerate a certain amount of this?
And of course, another way to get more insight into the performance of the autonomous
navigation is to look at it in 3-D. And so, we do the same sort of visualization with the intended
track and a trail of breadcrumbs that the ship is leaving behind it as it’s navigating. And this
time, we are adding in the carrot. So this is the current goal in real time of the autonomy that
the airship is trying to go after. And so this is something that adds another layer of insight for
the author of the algorithm, of the autonomous path planner, to really understand what it is that
you're asking the robot to do and how it's actually performing in real time relative to that. It's a
nice video; often I get reactions from this video asking, well, does the ship ever get the carrot?
And unfortunately no. The ship doesn't actually get the carrot. But so many people want to
see it get the carrot that I'm thinking about adding that capability in another version of the
software to have it actually, ultimately get the carrot and maybe level up or do something fun
like that that's rewarding to see.
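To make the carrot idea concrete, here is a minimal sketch in Python of a carrot-style path follower: pick the point on the reference path a fixed lookahead distance beyond the vehicle's closest point and steer toward it. The lookahead distance, the polyline path representation, and the function names are illustrative assumptions, not the Aerobot's flight software.

```python
import math

def closest_index(path, pos):
    """Index of the path vertex nearest to the current position."""
    return min(range(len(path)), key=lambda i: math.dist(path[i], pos))

def pick_carrot(path, pos, lookahead=5.0):
    """Return the 'carrot': the first path vertex at least `lookahead`
    meters beyond the vertex closest to the vehicle."""
    start = closest_index(path, pos)
    travelled = 0.0
    for i in range(start, len(path) - 1):
        travelled += math.dist(path[i], path[i + 1])
        if travelled >= lookahead:
            return path[i + 1]
    return path[-1]  # near the end: chase the final waypoint

def heading_to_carrot(pos, carrot):
    """Desired heading (radians) from the current position toward the carrot."""
    return math.atan2(carrot[1] - pos[1], carrot[0] - pos[0])

# Example: a straight east-west path, vehicle blown south by wind.
path = [(x, 0.0) for x in range(0, 101, 10)]
pos = (32.0, -4.0)
carrot = pick_carrot(path, pos)
print(carrot, math.degrees(heading_to_carrot(pos, carrot)))
```

Because the controller only ever sees the single carrot point, a steady crosswind can pull the vehicle into a persistent offset from the path, which is exactly the kind of behavior the 3-D visualization makes visible.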
>>: [inaudible] know the carrot, or does it know the whole path?
>> Mark Powell: It only knows the carrot, which yeah, which is, you know, one characteristic or
a limitation of this algorithm. And so, you know, depending on what you take away from the
visualization like this, you might naturally say, well, should we be more refined in our
specification of its goals than a simple point in real time that it's going after? Maybe it needs
more history, maybe it needs more future specification, or additional higher-level...
>>: [inaudible] it's overshooting a little bit sometimes, but it's a little hard to say because you don't have a good way to visualize [inaudible]?
>> Mark Powell: Right. So actually, being able to visualize the wind is very challenging. A few
people have done good jobs with that, visualizing the wind and sensing the winds. So yeah,
that's, and the wind, you know, changes as a function of space and as a function of altitude as well. So wind socks can be helpful, but then you have this whole, you know, volumetric spatiotemporal visualization problem when it comes to doing wind.
>>: Is there no more visualization?
>> Mark Powell: Well, the next thing is ATHLETE, so...
>> Scott Davidoff: Thanks. So another one of the challenges of expressing motion plans in
terms of high-level objectives becomes more complicated when the actual platform itself
becomes more complicated, more multipurpose, and less anthropomorphic. So this is the
ATHLETE Rover. It's a robot that's been designed for multiple mission purposes. You can see
that here it's very nimble. This is, well, a sped-up video of the robot getting off a lunar lander, and then some embellishment allowing it to dance, using a little bit of just
camera trickery. But the actual motion of getting off the lander is for real. But it can bust a
move. So you can see how, you know, one of the things that it might be designed for is getting
off of a lander and then being able to explore the surface. Similarly, it's carrying a very uncomfortable little compartment in which astronauts could conceivably ride. The robot also has a variety of capabilities: here, each wheel actually has a tool joint, and the motor for the wheel actually drives the tool as well, and you can see here the robot autonomously picking up something heavy and putting it down.
So this robot is designed for exploration, here on simulated lunar or Martian surfaces. It's also a quite sophisticated robotic system, in that, well, it's a hexapod robot; it's designed so that it can be daisy-chained together to carry, you know, a bigger and more sophisticated payload while keeping the deck level, and then similarly, when it's time for exploration or higher-risk missions, you can see that the robot is actually two tripod robots that can assemble themselves. So when you think about motion planning, or describing a plan for a robot with this kind of agility, even on a flat surface it can be challenging. But within the robot's mission purview is also asteroid capture, and
exploration, and mining. And so if you can imagine a surface like this, which is certainly not
very planar, and you know, it has a microgravity environment of about a hundredth of an Earth gravity, which is the kind of environment where just flexing your shins could put you into orbit, expressing a plan becomes a very different kind of situation. And these are the two biggest asteroids in the asteroid belt, and you can see that one seems very solid whereas this one seems very loose and granular. So our understanding of what material we'd actually be landing on is also highly uncertain. And this is not a future scenario, but just to give you a perspective of the size and scale of what we'd be talking about orbiting around and landing on.
So one of the things that we've been working with is a desktop virtual reality system called
zSpace. It basically uses passive tracking, I'm sorry, passive stereo vision and passive tracking of
your head with two infrared detectors at the top. And one of the, here's a video of how we've
developed a system to be able to use the stylus to actually control operations on the robot
where you can see that if we wanted to, for example, have a much more natural way to command, to tell the robot what to do, we could use the stylus as basically an analog for one of the end effectors, build a dynamic reachability map, and effectively tell the robot, drill in this particular location. And so instead of needing to express all of the motion commands, we'd really use inverse kinematics. The guys had fun making this video. And so, you know, by
basically telling the end effector where to go and then calculating through inverse kinematics all
of the motion, this is one of the ways in which we've simplified the actual motion planning for
any individual limb. But, you know, the problem still remains of multiple limbs or a very
complicated surface in microgravity.
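A minimal sketch of the "specify the end-effector goal and let inverse kinematics fill in the joints" idea, using a deliberately simplified two-link planar limb; the link lengths and the closed-form solution are stand-ins for illustration, not ATHLETE's actual kinematics.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics for a 2-link planar arm.
    Returns (shoulder, elbow) joint angles in radians that place the
    end effector at (x, y), or None if the point is out of reach."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the reachability map
    elbow = math.acos(c2)  # "elbow-down" solution
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics, used here only to check the IK answer."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

# Operator intent: "put the drill here" -- a single Cartesian goal.
goal = (1.2, 0.8)
angles = two_link_ik(*goal)
print(angles, forward(*angles) if angles else "unreachable")
```

The same pattern scales up: the operator supplies one Cartesian goal, and the planner, rather than the human, works out the joint-space motion.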
Let's see what else we have here. This is, I'm not sure if this is just a video or just a still. But
also, we built a system where, for example, for trajectory monitoring, one of the other
challenges that we have is to plot an ideal path to landing. You essentially have the problem of
mapping a path through three space, and along that path, your orbital dynamics might change
because one, you're going to get a more and more detailed map of an object that we've never
approached before, two, we're going to learn more and more about the surface itself that we
might have chosen as a landing area, and also, we're going to learn more about the
gravitational forces because it might be quite uneven. And so here, we are actually using a
surface that we've mounted slightly upright to create a 3-D environment that would allow the
user to essentially pinch and drag and draw a 3-D trajectory, and then, at each point, be able to plan and tell the robot to re-plan. And so you can see up at the top that each one of those buttons represents a branch: what to do if this happens. And so we can create a
conditional planning structure for the robot in terms of what to do if this spot doesn't look
good.
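A rough sketch of what a conditional plan like that could look like as a data structure, with each step carrying a check against fresh sensor data and a fallback branch; the class names and condition style are hypothetical, not the actual planning tool.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PlanStep:
    """One node of a conditional plan: a nominal action plus an
    alternative branch to take if the check on fresh sensor data fails."""
    name: str
    action: Callable[[], None]
    check: Callable[[dict], bool] = lambda state: True
    fallback: Optional["PlanStep"] = None

def execute(plan: List[PlanStep], state: dict) -> None:
    for step in plan:
        if step.check(state):
            step.action()
        elif step.fallback is not None:
            print(f"{step.name}: condition failed, branching")
            step.fallback.action()
        else:
            print(f"{step.name}: condition failed, holding for a new plan")
            return

# Example: descend toward a landing spot, but divert if the latest
# imagery says the surface looks too rough.
divert = PlanStep("divert", lambda: print("re-targeting backup site"))
plan = [
    PlanStep("approach", lambda: print("flying approach leg")),
    PlanStep("descend", lambda: print("descending to site A"),
             check=lambda s: s["roughness"] < 0.3, fallback=divert),
]
execute(plan, {"roughness": 0.5})
```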
Let me show you some other scenarios. In terms of recalibrating sensor data, another one of
the robots that gives us a very different set of challenges is the Robonaut. And
Robonaut is on board the International Space Station and is a very dexterous anthropomorphic
robot. And Robonaut is designed to be able to support astronaut work in space, or possibly
replace astronauts in space, and remind them to stay fit, which is really important in space. So you can see it's quite a dexterous robot. And also, you know, potentially one that we'd
want for repair of various external parts of the space station. One of the things about
spacewalks is that they have a lot more danger than just operating under normal
circumstances. The astronaut is exposed to a lot more radiation, but even just preparing for a
spacewalk requires four hours. And so having a very dexterous robot that would potentially
serve as a repair robot is a real benefit. But it's also one that we'd imagine needing to be commanded for very subtle gestures like this, so here you can see it following a script, but being able to do very delicate maneuvers. If this robot were, say, tele-operated to do various other chores or repairs, it's something that, I mean, you know, it's sort of a $10 million mailman here, but you can see these are very precise maneuvers. Thanks.
So one of the challenges that I think becomes clearer with this robot: this is a sort of behind-the-scenes shot. So whenever you see the astronauts on the International Space Station, they show you the clean side. But, like any real house, there's a place where things really get swept under the rug. And this is what, I was shocked to learn, it
really looks like on the inside. It's pretty hazardous. There are wires everywhere and displays
everywhere. And here you can see the robot mounted in a location where it would be able to,
for example, support repair tasks. And what you would also have is these wires would be
moving in, you know, a zero-gravity environment, and precision specification of how to
accomplish a task, like a repair or switching wires, can be expressed at a variety of levels of
abstraction, and you know, one being motion by motion, another being, you know, move
this wire from there and then telling it, allowing the onboard autonomy to take over. So one of
the things that we were working on here is using two Kinects that are mounted above to track an individual's hands and then map them to a simulation of the Robonaut, and you can see that the individual is grabbing a drill in this simulated 3-D space here. And
this kind of higher-level control is something that when you can express an objective, we've
found to be just a lot more natural of a way to command the robot. And of course, in this
environment, we don't actually have a stereoscopic simulation. And I think that's one of the things that we learned: that's just critical to being able to operate within a 3-D
environment. So going back to Mars, and Martian telemetry, I'll hand it back to Mark. This is
something that he loves to spend time on and has really been in charge of the development of.
>> Mark Powell: Thanks Scott. So on a mission like Curiosity or Spirit and Opportunity that
came before, and Opportunity that's still going on actually, every day is like a new mission for
the science and engineering teams that collaborate to plan the activities of these rovers every
day. When we drive to a new location, it's like we're starting over again. And we have to
reboot our understanding of not only where is the robot but what are all of the targets of
opportunity that we now have? And oh, by the way, you have about half an hour to figure that
out because if you don't, if you're not able to express cogent proposals for activities, for what
the spacecraft is going to do within the first two hours of the day, then you'll miss the boat and
someone else will, or nothing will get done that day which is not what anyone wants. And so,
the visualization challenge here is to bring as immediate an understanding as possible to as broad a segment of this community as possible. And so one of the most effective ways to do that, one that really spans multiple generations of scientists and engineers,
is to take a panoramic photo of everything and present that to them as the environment. And
we supplement this with individual image views, and maps, and 3-D visualization, but really for
many of these folks, just being able to see your world in 360 on the screen at once is very, very
powerful and has a low barrier of entry for people to be able to interact with it because when
they want to start building up a lexicon of, let’s start talking about specific places that are
interesting today, that we want to debate the relative importance of, then simply being able to
drop place marks on the terrain and give them familiar names that people can discuss is very
important. And so, it's just a click-and-name targeting system, naming things according to a theme for the day, like Southern Shore, Mullet, Oyster, Moray Eel; these indicate, you
know, potentially interesting locations that have been identified by some of the scientists. And
so we give them a forum to kind of whiteboard these all together in a collaborative fashion. So
as one person names something, that name is immediately transmitted to all of the other
scientists in the community, and they start to then be able to discuss very naturally the relative
importance of one location versus the other. You have a question?
>>: I just wanted to clarify. I'm assuming they're not all co-located, right? The scientists aren’t
or they are?
>> Mark Powell: We are enjoying a great deal of co-location right now with Curiosity, but it’s
very brief. It only lasts about the first 90 days of the mission and that’s about to expire.
Opportunity Rover has been, you know, going over eight years with everyone geographically
distributed. And not just around our country but in Spain, Russia, Canada, other places.
Germany. Many of the principal investigators of the instruments in the science payload on
these assets are multinational. So we have the need to bring in a global community and to give
them all, you know, a fair and equal opportunity to be able to understand all of the potential
targets of opportunity that we have in front of us to talk about and plan for in just the next
couple of hours. And so this is sort of the forum that we used to establish that. And then once
there's a common lexicon, once people are able to talk about these things effectively, we begin
to see proposals for important science observations that need to happen in order to learn more
about the surroundings or in order to accomplish the longer range goals of the mission. And
they start off at a high level. So, take a higher-resolution multispectral mosaic of that area there in the center might be one, and the next one from another science theme group might be, do a multispectral analysis of the target over there on the right called Minnow; that's going to teach us more about the chemistry, whereas the former might teach us more about the
geology. And then there's also the longer-term goal of the mission, where in this case, we
might be, you know, seeking to learn as much as we can about the bedrock outcrop in this crater that
we are currently looking at. There's an even bigger one that's 12 months away, and if we don't
keep driving we’re never going to get there. And so there's constant pressure to be able to
support in a fair way, both the immediate needs to do the science of where we are on these
targets that are right in front of us, and we’re probably never going to come back to them and
see them again; once we're done, we're done, and this is our only shot; and also, you know,
trying to move on to make sure that we hit all of the longer term goals that we have as well
before the spacecraft mission is finished. Yes.
>>: Do you directly observe the distances here? Do some ranging? I mean I look at this and it
seems like I could guess how far away this is.
>> Mark Powell: Yes. And so behind this visualization, typically what we do, is we use
stereoscopic imagery to make a 3-D terrain mesh of all of the targets that are visible. And so
when you simply click and place an icon on a location, there's 3-D information about its location
and surface orientation that's available behind the scenes as well, and we just tie all that in
automatically so that when we have to take these high-level goals and give them to the roboticist, who needs the 3-D information to say, I need you to take the end effector of this robotic arm and
place it along this surface normal vector 2 centimeters from this specific 3-D target, which is the
level of detail that they need, that they get that. But at the higher level for the science team
who are simply debating something more scientifically interesting versus less, then they don't
have to necessarily worry about that.
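A minimal sketch of that behind-the-scenes lookup: a click on the panorama resolves to a 3-D surface point and normal from the stereo-derived mesh, and the arm planner gets a target offset a couple of centimeters along the normal. The data layout, pixel indexing, and 2 cm standoff here are illustrative assumptions.

```python
import numpy as np

# Hypothetical terrain data: one 3-D point and unit surface normal per
# mesh vertex, indexed by the pixel it projects to in the panorama.
mesh_points = {(412, 903): np.array([2.41, -0.87, 0.12])}
mesh_normals = {(412, 903): np.array([0.05, 0.10, 0.99])}

def target_from_click(pixel, standoff_m=0.02):
    """Turn a scientist's click on the panorama into the pose a rover
    planner needs: a surface point, its normal, and a pre-placement
    point offset 2 cm along that normal for the arm's end effector."""
    p = mesh_points[pixel]
    n = mesh_normals[pixel]
    n = n / np.linalg.norm(n)         # make sure the normal is unit length
    approach = p + standoff_m * n     # hover point just above the surface
    return {"surface_point": p, "normal": n, "approach_point": approach}

print(target_from_click((412, 903)))
```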
>>: [inaudible] images of [inaudible]?
>> Mark Powell: With these rovers, yes. They have, I mean depending on the spacecraft, you
might have lidar systems that might give you the same sort of information if it's an
orbiter that is capable of sustaining an instrument with those kinds of power requirements. But
on the rovers, there's no, there's typically not enough power for things like active laser range
finders or lidars, at least not yet. And so most of the ranging is done through the
stereoscopic method.
>>: What is the resolution [inaudible]?
>> Mark Powell: So for the stereoscopic cameras, the resolution is, I mean, they're basically one-megapixel CCDs; on Curiosity they can range effectively out to a distance of about 40 to 50 meters from the rover. Of course, it gets more sparse the farther out you go. On MER, with the Pancams, they're even higher resolution, and so they can see out to even 100 meters
away. But it's very narrow focus and so you can get very high resolution small patches of data
in key locations, and typically that's about all that you have time for before you plan a drive
that's going to take you over in that direction. So you have sparse pockets of very high
resolution information, and a broader collection of lower resolution imagery, but you have it.
It's very dense and you can use that more effectively to make decisions about different ways to
go.
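Those range limits follow from ordinary stereo geometry, where range uncertainty grows roughly with the square of the distance; a back-of-the-envelope sketch (the focal length, baseline, and disparity noise below are made-up illustrative numbers, not the actual camera parameters):

```python
def stereo_range_error(range_m, baseline_m, focal_px, disparity_noise_px=0.25):
    """Approximate 1-sigma range error for a stereo pair.
    Depth Z = f*B/d, so dZ ~= Z**2 / (f*B) * d(disparity)."""
    return range_m ** 2 / (focal_px * baseline_m) * disparity_noise_px

# Illustrative numbers only: ~1 megapixel camera, 20 cm baseline.
for r in (5, 20, 50):
    err = stereo_range_error(r, baseline_m=0.2, focal_px=1200)
    print(f"{r:3d} m -> ~{err:.2f} m range uncertainty")
```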
And so the science teams with these competing goals will debate and reprioritize their science
goals, so they try to achieve the highest priority things first. And all the time that they're doing
that, they're getting support from the engineering teams who are trying to look at, do we
actually have enough spacecraft resources to support all the things that you want to do today? The answer very frequently is no, because their goals are very ambitious and often won't fit into the resources of a single day, and so we've found, for this particular case, that we are broken on time: we're using over a hundred percent of our time capacity for the day.
There's just not enough time during this particular day to do everything that they want to do.
And so they have to make a decision. They have to either reduce the fidelity of one of their
science observations so that it will take less time, or they have to defer it to a later day and
hope that they're going to stick around in this place long enough to be able to do that before
they drive off.
>>: [inaudible].
>> Mark Powell: That's right. Yeah. So the Martian day, or sol, is about 24 hours plus 39 minutes.
But for most of these observations, these imaging observations or even things like the laser
spectrometry that's done with Curiosity, require either sunlight to be able to take a picture or
the Sun to be up so that the temperature is high enough that the instrument can be run within its temperature constraints. Because you can't run it when it's too cold; you either have to spend power to heat it up, or you have to wait until the Sun naturally heats it up for you. And so it's time to make a hard choice, because we don't have enough time to do everything that we need to do, so the decision is made to defer one of the observations. Now we're fitting within our resource budget.
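A minimal sketch of the bookkeeping behind "we're over a hundred percent on time": sum each requested observation against the usable part of the sol and defer the lowest-priority request until the plan fits. The durations, priorities, and the simple greedy deferral rule are assumptions for illustration, not the real planning tools.

```python
SOL_MINUTES = 24 * 60 + 39          # a Martian sol: about 24 h 39 min
USABLE_MINUTES = 6 * 60             # assume ~6 h of warm, sunlit operations

# (name, priority: lower number = more important, duration in minutes)
requests = [
    ("multispectral mosaic", 1, 150),
    ("multispectral analysis of Minnow", 2, 90),
    ("drive toward long-term target", 1, 120),
    ("atmospheric opacity imaging", 3, 45),
]

def fit_to_budget(requests, budget):
    plan = sorted(requests, key=lambda r: r[1])   # highest priority first
    deferred = []
    while sum(r[2] for r in plan) > budget:
        deferred.append(plan.pop())               # drop the lowest-priority item
        # (the team could instead reduce an observation's fidelity)
    return plan, deferred

plan, deferred = fit_to_budget(requests, USABLE_MINUTES)
print("planned:", [r[0] for r in plan])
print("deferred:", [r[0] for r in deferred])
```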
And then, so, from here, the high-level plan is refined to a point where it looks achievable with
the support of the engineering team to help assess that, and then the engineering team will
take the high-level goals and really create a command sequence or a program that the
spacecraft is able to execute. And the process of creating the command sequences is very
much like a programming task. The tools that we use to do it look very much like IDEs, so you see something that looks like Visual Studio or some other IDE where you're creating a sequenced list of commands and subroutines that the master flow of control calls to execute. The subroutines do things like turn on the heaters, and then power on the camera, and then take the image, and then power off the camera, and then do all the housekeeping that's needed in order to implement these high-level goals, like take a 10 x 3, 13-filter color mosaic of this particular area. And so the engineering team is tasked with doing that work.
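A minimal sketch of what such a command sequence can look like when a high-level goal is expanded into subroutines of housekeeping commands; the command names and the Python form are purely illustrative, since real sequences are written in mission-specific sequencing languages.

```python
def send(cmd, **args):
    """Stand-in for radiating one spacecraft command."""
    print(cmd, args)

def take_image(camera, filt):
    """Subroutine: all the housekeeping around a single exposure."""
    send("HEATER_ON", zone=camera)
    send("CAMERA_POWER_ON", camera=camera)
    send("ACQUIRE_IMAGE", camera=camera, filter=filt)
    send("CAMERA_POWER_OFF", camera=camera)
    send("HEATER_OFF", zone=camera)

def mosaic(camera, columns, rows, filters):
    """Master sequence for a 'take a 10 x 3, 13-filter mosaic' style goal.
    Expands into hundreds of low-level commands."""
    for col in range(columns):
        for row in range(rows):
            send("POINT_MAST", col=col, row=row)
            for filt in filters:
                take_image(camera, filt)

mosaic("MASTCAM", columns=10, rows=3, filters=range(13))
```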
And as an engineer, on the Curiosity Mission and the Spirit and Opportunity Missions, I get the
opportunity to be able to be among the first to be able to see the images that come down when
they arrive. Those missions are also very good about being able to distribute those images
instantly when they hit the ground, and so you guys get to follow along with us, you know,
minutes after the data arrive for us to look at. You guys get all the raw images to take a look at
as well, and so really, it's really cool to be able to share these experiences with everyone
around the world. But since we’re there every day and involved in it, we really enjoy the
surprises that we get. Like on the Opportunity mission, a few days after we landed, we noticed this
little guy and wondered what in the world it was. Was this something that was there on Mars
or was it maybe something that we brought with us? Kind of looked like a bunny. And so we
took more pictures of it. And we're like, hey, that's looking more and more like a bunny. It's got
two eyes, little ears, and we took multi-filter color imagery of it in an attempt to learn more about it, and we found that if you blink between the images it kind of looks
like it's winking at you and the ear is moving a little bit. We're like hmmm, this is curiouser and
curiouser. And the scientists, meanwhile, are going wild about this bedrock outcrop that’s got
all these cool morphologies that they'd never seen before on Mars, and oh my gosh, isn't this
wonderful? And the engineers are like, have you seen this bunny? And they’re like yeah, that's
nice. Go away. But nonetheless, we were even further intrigued when we took a picture of it again a day later, to learn yet more about it, and it had disappeared. It was gone. And we're like, okay.
This is definitely something that we need to learn more about. And so we managed to find it a
few days later, it was, it had blown under the lander where the rover had bounced to a halt.
And we aimed a spectrometer at it and discovered that, yeah, that's a shred of airbag material that fell off as the rover bounced to a halt in the crater where we landed. But it was an amusing
diversion for us at the time.
And I had a real sense of déjà vu last week with Curiosity. We were scooping up our first
material from this Rocknest region that we've driven to; this is the scoop on the robotic arm
that's going to collect samples that we are going to use in the chemistry labs that are in the
payload of this rover, which we didn't have on MER, and down there at the bottom there's a small
little bright, bright thing that one of my colleagues saw and zoomed in on it and said oh, what is
that? I'm like, I'm not sure what that is. Pretty soon, it went viral among the engineering team,
and they're like, oh well. We're not sure if that might be something that was on the arm or not.
We really need to kind of figure out what that is so that we can assure that it's going to be safe.
And so we immediately dropped everything and started taking lots of pictures of it and
discovered that this is just a piece of debris, probably some of the plastic that was used
externally to wrap some of the wiring that engages the robotic arm instruments. But it
was important to learn what this was because as we are scooping up the material, we want to
make sure that we're getting only Martian material and no Earth material mixed in there that's
going to influence any results that we hope to learn.
>>: We finally figured out a way to litter interplanetary...
>>: How much stuff was [inaudible]? So you're driving along, and then you're like shedding this stuff...
>> Mark Powell: Yeah, so we are keeping a very close eye on that now. But also in this region,
there's some really interesting heterogeneous materials that are from Mars. There's a little
clod of dirt there in the center upper area, with some little bright white speck sized places in it
that you can see. And we didn't bring that with us. That is genuinely heterogeneous material of as-yet-unknown composition. And so it's good that this shedding
business has put us in an even higher state of alertness because it's really helping us to focus
and identify all of the unexpected and interesting things that we're seeing in the newest images
that are coming down so that we can make the most of the opportunities that we have. And
this is just a little map showing our progress so far over about the first 60 sols of this mission.
The area on the right is where we are now, and we're headed along that green line to this
destination where it's an interesting crossroads between three geological units. We have one
geological unit, which is most of this center area on the right, kind of brown in color, and then on
the upper right corner, we have another geological unit that's brighter, and in the lower right
corner we have another geological unit that's a little bit darker, and so geologically, what we want to study is: what are the differences between these regions at this crossroads, and how does that fit into the larger regional picture of the evolution of this region as we've come to see it
so far?
And so to be able to bring this picture in focus for all of our geographically distributed
contributors on a daily basis, we use tools like these maps that dynamically load an enormous,
you know, hundreds of kilometers of Mars regions, like this one at Gusev Crater for Spirit that we're seeing here, where we can zoom in and see not only the broader regional context, geologically speaking, but also zoom in to the rover scale and mix in data that it takes in situ and overlay its traverse path and all of the locations that it's visited and all of the 3-D information that we get from its stereo image collection, so that we can zoom all the
way down from kilometers all the way down to centimeters and put everything into its
geological scale at multiple resolutions.
>>: [inaudible] from Curiosity or from [inaudible]?
>> Mark Powell: So this is an orbiter image. And so for planning our missions, even on a daily
basis, we tie everything back from the rover data to the satellite data which gives us a broader
regional context, which is really useful for the geologists and the long-term planners to be able
to see: if you planned to go 100 meters, did you actually go 100 meters, did you only go 90 meters, or did you go farther than you thought, and what effect is that going to have on what we're going to do today by way of compensation if you didn't get to exactly where you expected to go. Sure.
>>: [inaudible] long-term analysis that we the new plan [inaudible] and another [inaudible]
Alpha Centauri. How will this influence [inaudible] centuries?
>> Mark Powell: So it's going to be an interesting challenge to try to extend a lot of the
techniques that we're using for a planetary neighbor that's relatively close to us to one that is
going to be in Alpha Centauri. We are able to plan a complex set of science observations to run tactically on Mars, when it's only about 20 minutes light time away, every day; even when you go only as far as Saturn, to say nothing of Alpha Centauri, you have to
plan a little bit farther in advance because your bandwidth is a lot more expensive and a lot
more limited. And the autonomy that you need, therefore, to be able to keep the spacecraft
safe while you're waiting for your next opportunity to be able to communicate with it, those
needs are higher as well. And so to cover these greater and greater distances, we need to be
able to do more, even higher-level goal specification, even higher-level contextual awareness
for people to be able to understand, you know, if you only see brief glimpses that are separated very far in time. So if you're only visiting your spacecraft about once a month, it's very easy to, you know, lose touch with, well, where was it again the last time that I checked in on this mission? And so you need to be able to, you know, refresh your mental cache of, oh yes, now, based on the last context, I see how this new information ties together with the other information that I've seen before. And so that is a grand
challenge for us to pursue.
>>: I have a feeling we'll be pretty busy exploring our own solar system for a long time. There's
a lot to discover.
>> Mark Powell: And there's certainly a lot to discover in the solar system. And after we've
exhausted all of those, which will take longer than my lifetime, I’m encouraged to see that
there are going to be hundreds of other already known mysteries, other planets, exoplanets that we find with our planet finder missions, left for the people who follow me to pursue.
And that will be even more exciting for them. So in order to provide this immediate contextual
awareness for all of our partners, we've extended our data processing to a parallel cloud-based
system for processing and distribution. And so in order to make sure that the latency to get to
this data is relatively short no matter where you are in the world, and in order to make sure
that it's presented to our collaborators as soon as possible, basically, as soon as the bits from space come through the DSN to us, we immediately put them on the cloud and we immediately start doing all of our advanced product and visualization generation,
transforming the raw products into stereo maps, into 3-D terrain, and maps, and mosaics, and
all of that. We do it all in parallel, and we do it all in the cloud, and we stage all that data on the cloud, geographically distributed, so that everybody has equivalent performance independent of where they may be working from. And so we'll take not just the raw data, but we have a lot of metadata from the spacecraft itself, where it is, what its orientation is, other related metadata that we can put into a search index, and we can tile it up so that we can have a tiled map with different levels of detail. There's similar processing for taking pairs of imagery and creating 3-D meshes, which we also tile up.
And then one of our most computationally challenging tasks to do quickly is to build up large
mosaics of imagery, so take, you know, thousands of images and warp them together into one 360-degree cylindrical panorama. In order to do that in parallel, we divide it up into striped sections and then merge them together and create different levels of detail and tile those for a tiled image viewer. And we also do that for the satellite imagery as well, so that it can all be feeding into the visualization I was showing you a few slides ago. So this is one of our parallelization strategies, where you take the entire image, maybe many tens of gigapixels in size, and divide it up into stripes, give every stripe to a processor, tile them up, and then produce a lower level of detail from that by combining two stripes and tiling that up, and use a pyramid technique to bring all of that together. And so that was, I think, all that we had. And I just wanted to throw
this slide in there to show off our institution itself, which is in a very nice place, right next to a
forest and mountains and a very nice place to visit. I’d like to encourage all of you guys to come
and pay us a visit whenever you have the opportunity. It's a fun place to see all the robotic
work that's going on there. [inaudible] support of the rovers and the other systems like the
Robonaut and ATHLETE that Scott was showing earlier. So thank you very much and I'd love to
entertain any questions that you have at this time. Sure.
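For reference, a minimal sketch of the striping-plus-pyramid strategy Mark described for building large mosaics: split the full image into stripes (one per worker), cut each stripe into tiles, and build coarser levels of detail by repeated 2x2 downsampling. The array sizes, tile size, and averaging filter are illustrative assumptions.

```python
import numpy as np

def split_into_stripes(image, n_stripes):
    """Divide a big mosaic into horizontal stripes, one per worker."""
    return np.array_split(image, n_stripes, axis=0)

def tile(stripe, tile_size=256):
    """Cut one stripe into fixed-size tiles (ragged edges kept as-is)."""
    h, w = stripe.shape[:2]
    return [stripe[r:r + tile_size, c:c + tile_size]
            for r in range(0, h, tile_size)
            for c in range(0, w, tile_size)]

def downsample(image):
    """Make the next (coarser) pyramid level by 2x2 averaging."""
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    im = image[:h, :w].astype(float)
    return 0.25 * (im[0::2, 0::2] + im[1::2, 0::2] +
                   im[0::2, 1::2] + im[1::2, 1::2])

# Toy "panorama": in practice each stripe would go to its own processor.
pano = np.random.randint(0, 255, size=(1024, 4096), dtype=np.uint8)
stripes = split_into_stripes(pano, n_stripes=4)
level0_tiles = [t for s in stripes for t in tile(s)]

levels = [pano]
while min(levels[-1].shape) >= 512:        # build the level-of-detail pyramid
    levels.append(downsample(levels[-1]))
print(len(level0_tiles), [lvl.shape for lvl in levels])
```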
>>: I know you talked about [inaudible] GPS. How much are you using celestial navigation for these things? I guess you'd have some [inaudible] higher up in the atmosphere you'd have very
good [inaudible] the stars especially [inaudible] astronauts?
>> Mark Powell: So, celestial navigation is very much used for the spacecraft that are in space,
but even on Mars, the atmospheric distortion is significant enough that observing stars is very
challenging. And so to do things like to refine the rover’s knowledge of its orientation, we are
limited to being able to do things like observe the Sun, which is the only star, I'm sorry to say,
that’s bright enough for us to be able to see through the Martian atmosphere with any real
reliability. We are able to do some observations of things like the moons of Mars, Phobos and
Deimos: if we take a picture of them at night when there's not a lot of ambient illumination, we can see something that bright that's basically reflecting the illumination from the Sun down to us. But the stars themselves, once we're on the surface, are not available for our cameras to observe, which would be terrific.
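A minimal sketch of how a sun observation can correct a rover's heading estimate: compare the azimuth where the attitude estimate predicts the Sun should appear with the azimuth where the cameras actually find it, and apply the difference as a yaw correction. The numbers here are made up for illustration.

```python
def heading_correction(predicted_sun_az_deg, measured_sun_az_deg):
    """Yaw error implied by a sun observation: if the Sun shows up
    clockwise of where the current attitude estimate predicts, the
    rover's heading estimate is off by that much."""
    err = measured_sun_az_deg - predicted_sun_az_deg
    return (err + 180.0) % 360.0 - 180.0     # wrap to [-180, 180)

# Ephemeris (given site, time, and assumed attitude) says the Sun should
# be at azimuth 131.0 deg; the sun-finding camera measures 127.5 deg.
correction = heading_correction(131.0, 127.5)
print(f"apply {correction:+.1f} deg to the onboard heading estimate")
```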
>>: [inaudible] much richer set of objects to look at?
>> Mark Powell: So the blimp on Titan would actually also be dwarfed by the illumination from Saturn itself, and if it was below the greenhouse layer, it would have the
same problem that we have when we're trying to look down through the greenhouse layer
from orbit. And so the localization problem then I think becomes what can we do by observing
the landmarks beneath us, rather than the stars above us. And so that seems like the most
promising direction to go: using lidar and image-based differencing techniques, image-based localization techniques, to accomplish that in lieu of a GPS capability. You
have a question?
>>: Is there any, so you mentioned like processing [inaudible]? Is there any processing that’s
done [inaudible] the rover, or is there power for like sensing, like, you know, gathering some of
the data and then processing [inaudible]?
>> Mark Powell: Oh yes. There's a great deal of processing that happens on board the rovers, particularly on Spirit and Opportunity; we're kind of graduating Curiosity, the newest rover, putting it through its paces before we are able to fully trust its autonomy to do all the things that Opportunity has done. But Opportunity has gone from using a simple path planning algorithm, taking stereo maps and looking for obstacles and moving toward goals, kind of a plan-sense-act, basic robotic autonomy with obstacle avoidance pipeline, to being able to do richer and richer autonomy. So it's able to use visual odometry to
refine its localization, to do better than simply counting wheel rotations and being affected by
slipping around on the soil without having any way to compensate for that; it's able to monitor its own localization very accurately now using visual odometry, and it's also able to
extend those techniques to be able to do higher-level path planning, not only for driving but
also for placing the robotic arm. And so now from 10 meters away we can say, that rock over
there on that point, drive over there and put the spectrometer on the end of the arm down on
that. And it can execute that autonomously as well. It's also able to identify unusual features
in images as potential science targets as well. So it has enough autonomy so that in the extra
time that it has at the end of day after a drive, it can take some pictures, and it can seek to
identify unusual things, things that are different than things that have been seen before. So
based on a machine learning technique, it's able to assimilate features in the images that it can
take post drive and say, seen this before, seen this before, maybe haven't seen that before, and
so maybe that's something to take a higher-resolution image of, and prioritize that in the downlink and highlight that to the science team and say, hey, you might want to look at that, you might want to look at that, because these things seem unusual. So a number of things.
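A rough sketch of the "have I seen this before?" scoring Mark describes for post-drive images: measure each new image feature's distance to a library of previously seen features and flag the outliers for higher-resolution follow-up and downlink priority. The feature vectors and threshold here are hypothetical stand-ins for the onboard technique.

```python
import numpy as np

def novelty_scores(seen_features, new_features):
    """For each new feature vector, distance to its nearest previously
    seen feature; large distances suggest 'haven't seen that before'."""
    scores = []
    for f in new_features:
        scores.append(np.linalg.norm(seen_features - f, axis=1).min())
    return np.array(scores)

rng = np.random.default_rng(0)
seen = rng.normal(0.0, 1.0, size=(200, 8))            # features from earlier sols
new = np.vstack([rng.normal(0.0, 1.0, size=(5, 8)),   # familiar terrain
                 rng.normal(6.0, 1.0, size=(1, 8))])  # something unusual
scores = novelty_scores(seen, new)
flagged = np.where(scores > 3.0)[0]                   # prioritize for downlink
print(scores.round(2), "flag for the science team:", flagged)
```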
>> Scott Davidoff: And in Spirit and Opportunity, for example, they developed a dust devil
detector, which became one of these interesting, you know, real image-differencing applications, but also something that was both easy to detect and a real object of interest. And so you could allow the robot to focus more of its own sensor and memory capacity on these particular images. But one thing that I was surprised to learn is that when a rover like Curiosity lands, JPL basically assumes nothing works. Right? We have no evidence of any arm, of any movement, of any sensor working. And so, you know, we land on Mars, and the world is watching, and then for about
five days nothing happens. And basically, they're just, you know, warming up the components
and making sure all the instruments work, and you know, here we are 60 days in and you know,
Curiosity was built to be a chemist, right? Basically a chemistry robot, and yet it's taken us 60 days to actually risk taking a sample. But gradually, that comfort level will grow more and more, and what I find also fascinating is that we can reprogram Curiosity. As there's more and more confidence, we can send control sets, instruction sets, that allow the robot higher degrees of autonomy and tolerance for risk as that becomes more acceptable.
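The dust devil detector Scott mentions worked essentially by differencing images of the same scene taken a short time apart; a minimal sketch of that idea (the thresholds and the toy images are illustrative, not the flight algorithm):

```python
import numpy as np

def dust_devil_candidates(frame_a, frame_b, diff_threshold=12, min_pixels=20):
    """Compare two images of the same scene taken a short time apart and
    report whether enough pixels changed to suggest a moving dust devil."""
    diff = np.abs(frame_b.astype(int) - frame_a.astype(int))
    changed = diff > diff_threshold
    return changed.sum() >= min_pixels, changed

# Toy example: a static scene plus a small bright moving blob in frame_b.
rng = np.random.default_rng(1)
scene = rng.integers(80, 120, size=(64, 64))
frame_a = scene + rng.integers(-3, 4, size=scene.shape)   # sensor noise only
frame_b = scene + rng.integers(-3, 4, size=scene.shape)
frame_b[30:36, 40:46] += 60                               # the "dust devil"
hit, mask = dust_devil_candidates(frame_a, frame_b)
print("candidate detected:", hit, "changed pixels:", int(mask.sum()))
```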
>>: So how much of the day does the robot self-task? So it wakes up and, you know, it [inaudible] spent 30% where it's just self-testing all of the components on the spacecraft itself like
that, so how much [inaudible] Curiosity?
>> Scott Davidoff: In the morning? Maybe an hour, is it?
>> Mark Powell: So most of the testing is in collaboration with the engineering team itself,
right? Like Scott was just saying, when you don't know if the chemistry lab is going to work,
then you start with a simple test and then a little more ambitious test, and then move through to extend its capability to the best that you can achieve. And so, like, in the early
phase of the mission, like a lot of the days are scripted and devoted to that kind of testing. And
more and more as we are moving into the latter phases of this mission, you have unscripted
days where you know where all of your reliability is, you can build on, you can increase your
ambition level, you can do more audacious things, and really do more science than you could
before. It's not just the prescripted things that you know you need to do, you know you need
to do your calisthenics, you know you need your warm-ups so you can go and you can run this
marathon, which is what we are preparing to do right now. And so, yeah, we are looking
forward to the days when, you know, 6-8 to even 10 hours of the day, it's all ambition. It's all
run. And you know, so that in 12 months we’re going to get to that spectacular destination that
we are all hoping to get to, to the south of our current location.
>>: What's the next big project, robot project, you guys see coming out?
>> Scott Davidoff: Well, I think there's quite a few that are happening in parallel. The missions
that we're most involved in are the ATHLETE and the Robonaut. Robonaut is actually, like,
currently on the International Space Station, and so it's being space proven. The ATHLETE is
being explored as a real multi-mission capability robot. There are, I mean, whole varieties of
just really very, very out-there exploratory missions. You know, they range from ones with maybe a 10 percent success rate, where it's let's just try these crazy ideas and see if they work, to robots like ATHLETE, where there's already been about a decade of work.
So, you know, but there's a huge number, and people are betting on totally different
approaches to, for example, going to Europa, and then what would it take to get through the
ice layer? And, so there's, you know, an entire robotics team dedicated to thinking about how
to thaw and then autonomously search through what they're imagining to be an ocean. And so
it's a totally different way to navigate, a huge set of challenges that are totally different. But I would guess, roughly, 60 different projects.
>> Mark Powell: Yeah. NASA doesn't put all its eggs in one basket either. Like simultaneously,
we are exploring making better autonomy to be able to support a variety of missions like, what if
we want to fly an airship at Titan versus well, what if we want to support a boat at Titan that's
flying around, that’s floating around on that lake that we can observe that's there at the north
pole? Maybe that would be the kind of surface mission that we want to do, to do a survey of
that environment and so what kind of autonomy do you need there? Or if we want to do even
more at Mars, then what kind of systems do we need to support a sample return mission,
which would be very, very exciting, and something that's very, very challenging that has been
looked at for a number of years. And ultimately, it's going to be done. It's just a matter of
when; it's just a matter of when all the resources and everything come together at the right
place at the right time. But we will have a sample return mission from Mars one day. We'll
have a sample return mission from the moon pretty soon, if things continue going that way.
And so that's another really, really lucrative, really great robotic, set of robotic missions that we
have to look forward to.
>> Scott Davidoff: I think probably the other main objective in our group is, through gathering all of this data that we have available in these smaller channels, to allow scientists the ability to, for example, take a walk on Mars, and so be able to, you know, really experience it in a way that's more similar to the way that they generate their hypotheses now. Geologists don't just look at a television picture of California and, you know, come up with hypotheses; they go there, and they look at the layers. And so allowing them the
ability to explore it in a more first-person way is also, I think, one of the challenges that our
group is looking into. It's more of a human interfaces, experiential challenge than one of
robotics. But I think we really work at the overlap. Yeah.
>>: Can you join us for lunch? Given the time we should get going.
>>: So if you want to join us for lunch that's what we’re doing next over in the cafeteria.
Thanks a lot you guys.