>> Sudipta Sinha: Good afternoon, everyone. It's my great pleasure to have Mark
Maimone here with us today, and he will be basically talking about the Mars Curiosity Rover.
He's been at JPL, working on rovers and autonomous robots for space for a very long time, and
I'm really excited to hear more about this today. Mark.
>> Mark Maimone: So I'd like to talk today about what drives Curiosity. I'll explain some of the
robotics technologies on the Mars Science Laboratory, NASA's current Mars rover. We landed
about two years ago. We landed in August 2012, and we've been operating on the surface of
Mars for that long, and we share a history with earlier NASA rovers. The current one is the big
one on the right, the Curiosity, and the one next to it is the Mars Exploration Rover. So we had
two of those, Spirit and Opportunity. Opportunity is still working today. The 90-day mission
has gone on for more than 10 years so far, and then at the bottom, you can see Sojourner.
Sojourner landed in the '90s. That was the very first NASA Mars Rover, and that was supposed
to last for a week, and that ended up lasting for three months. So we tend to run over our specs,
which is a little bit of a problem, but it's working out pretty well for us. So I'm going to talk
about how do we drive these rovers in a few different aspects. So one thing that drives Curiosity
is the people that work on it. So I've shown some pictures of some of the people who are the
human rover drivers who get to send commands up to the spacecraft, and also you can see one of
us working in the test bed, and that gives you a sense of scale of just how big this
vehicle is. It's a rather large vehicle. The wheels are each 50 centimeters in diameter, so they
can climb over larger rocks than the previous ones could. And you'll see some more of that in
the talk coming up. But, basically, when we're driving these rovers, we're sitting at a
workstation, pictured here, and we're reviewing pictures, either 3D reconstructions shown on a
mesh, or we're wearing the stereo goggles, so we can tell what the world looks like by looking at
pictures taken from the stereo cameras on the rover. We use both of them, because you need the
3D model to simulate your drives, and you need the human eyes to really get the feeling of what
the terrain is. You really get a good feeling for whether there are depth discontinuities, ridges or
something, by looking at it yourself.
>>: What's the range of round-trip times for communication?
>> Mark Maimone: Good question. The range of round-trip time for communication depends
on where the planets are, but it's between 4 and 22 minutes, depending on whether the planets are
close by or opposite. Actually, that's one way, so round trip would be 8 to 44 minutes. And a lot
of people think that's the real limitation in operating the Mars rovers, and it is a physical
constraint on operating the rovers, but that's not the only problem we have. We also have the
constraint that we are sharing the deep-space network. We communicate with the rovers using
these big 70-meter or 34-meter antennas, and a lot of people want to use them. We have dozens
of space missions, and they all want to share this limited resource. So for the Mars rovers, we
get to talk to the rover by sending commands really just about once a day, and we get data back a
few times a day, two to four times a day, usually. And we usually get the high-priority stuff first,
and then we transmit the rest as bandwidth allows. But that's really the limitation: the logistical
use of the communication platform. And it kind of works out pretty well, because what we
ended up doing is spending a whole day of humans creating a plan, reviewing it, making sure it
meets the resource constraints, simulating it, testing it, reviewing it with everyone. And then
we'll send it up. Then we go home and go to sleep, and we wait for the next results to come in.
Then, when the rover's done executing, it will send the results back and we can review what
happened and get ready for the next day's activities. So the way that we operate it is as I was
mentioning, we look at the images we get. Now, this is from Earth. This is from our test
environment, our Mars Yard, and the rover will take pictures that look like this, and this is kind
of what you would see if you were standing there, because the mast is at about human height.
It's just under two meters tall. And so we can either review pictures or we can review the 3D
geometry over there. It's just a still frame, but that's actually a 3D mesh, and we can zoom
around and move it and watch the rover drive across it, too. And so what the human drivers will
do is look at whatever images have come down to see what's available nearby, and then we'll
plan some activity on the rover. So this is showing you the plan for one day on Curiosity, and
you can see we intended to go in two short segments, one straight segment, a little turn, and then
another segment. And you'll notice that the camera is going to start moving pretty soon. It's
showing you which way the camera is looking, so that motion goes on when the rover is being
asked to drive itself, autonomously. We're asking the rover to turn on the cameras, see what's
out there and only drive if you think it's safe enough. Find a safe path through the world ahead.
And this view of the drive, I'll just play it back again. This view of the drive is what we
simulated, is one possible way it could execute. But when we actually send the command, it'll
depend on what it sees on the terrain that will determine what it does for that second half of the
drive.
>>: Are you saying the first half is not autonomous?
>> Mark Maimone: Right, yeah. The first half is actually what we call directed driving, and so
in this case, humans have spent 8 to 10 hours looking at the terrain, studying it, reviewing it,
talking it over with the science team whether it's the same kind of material we expect, or how we
expect the rover to behave. And it's always faster to just tell it to go, just like a remote-control
car, go forward 10 meters, stop, turn a little bit, go forward five meters, stop, turn, without using
vision. Now, even in this directed mode, it's always doing onboard sanity checks. It's always
making sure the motor current doesn't get too high. We'll always set limits on how much tilt we
expect during the drive, so if it finds itself at too much of a tilt, it'll autonomously stop motion.
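To give a concrete flavor of what such an onboard check might look like, here is a minimal sketch in Python; the limit values and telemetry field names are illustrative placeholders, not the actual flight parameters.

```python
# Illustrative sketch of guarded-motion sanity checks (not flight code).
# Each telemetry sample is tested against limits the ground team sets for
# the day's drive; any violation halts motion autonomously.
from dataclasses import dataclass

@dataclass
class DriveLimits:
    max_motor_current_amps: float = 10.0     # hypothetical limit
    max_tilt_deg: float = 20.0               # hypothetical limit
    max_suspension_bogie_deg: float = 15.0   # hypothetical limit

def motion_is_safe(telemetry: dict, limits: DriveLimits) -> bool:
    """Return False (stop driving) as soon as any sanity check trips."""
    if telemetry["motor_current_amps"] > limits.max_motor_current_amps:
        return False
    if abs(telemetry["tilt_deg"]) > limits.max_tilt_deg:
        return False
    if abs(telemetry["suspension_bogie_deg"]) > limits.max_suspension_bogie_deg:
        return False
    return True
```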
We have dozens of those kinds of checks running all of the time, but in terms of using vision to
make an autonomous choice about where to drive, when that is turned on, it's a much slower
driving mode, because we have to stop, take the picture, process the image, make the decision
and then execute it, so that takes more than tens of seconds, so we don't do it all the time. As a
robotics researcher, I wish we could do it all the time, but the system just wasn't up to
speed enough, and in order to get the best overall drive distance out of the vehicle, it's cheaper in
terms of Mars power and resources to have humans spend eight hours doing the planning, and
you get an extra 60 to 100 meters out of it that way. Yes?
>>: When Curiosity landed, we heard about how you drivers have to live on the Martian day.
Do you still have to do that?
>> Mark Maimone: Yeah, do we have to live on a Martian day or Mars time? We did when we
landed, you're right. So for about three months, we were living on a Martian day, which is about 40
minutes longer than on Earth, so actually when the MER rovers landed 10 years ago, as a single
guy with no kids, I loved it, because every day I could sleep in 40 minutes longer. Two-thirds of
the time, I go to work, there's no traffic, parking is really easy. But now, it's a little harder, and
so most of the team doesn't want to do that all the time, and so what we do is, when we first
landed, just to make sure we get everything going, we do live on Mars time, but after a couple
months, we go back to Earth time. And what that means is you won't be able to interact every Earth
day with the rover. You can make a plan, but you won't get the results of it for 30 hours, so you
may not get it until after the next workday has happened. And so we just plan activities for that
time that don't require us to know where it is or what happened in a previous day, so yeah, we
don't have to do that anymore, thankfully. So another aspect of how we drive it is how far ahead
we can see, and this is an example of a terrain mesh, so stereo image views of the terrain around
it, reconstructed into a 3D shape, and when we see enough terrain data in 3D, then we can make
a choice to drive in this directed mode into it. When we see the terrain, we understand it, the
geologists tell us what it's like, and we know that there's nothing missing, there's no uncertainty
about what's out there, then we can just send it out onto that terrain. We also have a good
understanding of just how much the vehicle is likely to slip, and so if it's at a high tilt, it'll slip
more, so we wouldn't command it to go very far at a high tilt in the blind. We'd always have it
check. But, anyway, this gives you some idea that as you get farther out, you get less knowledge
about the terrain, and that's really the constraint on how far you can send it in a directed way
before you have to turn on the autonomy to take over and figure out where it's safe to keep going
beyond where we can see. So another view that we have into Mars is we get these grayscale
stereo views that give us a nice 3D knowledge of what's nearby. We also have science cameras
onboard, the mast cam, and they're good color imagers, so we can see farther away with them.
So this isn't a stereo view. It's just a monocular, but we can actually zoom in at pretty high
resolution and get a good feeling for what's coming up, even beyond where we can see in stereo.
So we may not know the shape of the terrain, but we'll at least know, is it rocky, is it sandy, does
it look like there are gaps there? So we have different ways of looking at it, both from the rover
and even from overhead orbital views. And I really already reviewed this one, so let me just skip
on here. Curiosity will carry out the activities that we send once a day. It will receive the
commands early in the Martian morning, or like 9:00 or 10:00 in the morning and then execute
as much of the plan as it can, as well as it can, and in mid or late afternoon, it'll send the results
of the day to the orbiters. We have three spacecraft orbiting Mars right now that we can use for
relays. Usually, we use the Mars Reconnaissance Orbiter. There's also Mars Odyssey, and
occasionally, we even get to use Mars Express. So it's always better to use the relay stations
rather than talk directly to Earth, which we can also do, because we get much more bandwidth.
It's very slow talking direct to Earth. It's like a 14.4 kbps modem. It's really bad. But when we use the
orbiters, we're able to get on the order of about 500 megabits per day of data back. That's sort of
one nice YouTube video, basically. Yeah.
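As a rough sanity check on that comparison (the arithmetic here is mine, not from the talk):

```python
# 500 megabits per sol expressed in megabytes, for comparison with a
# short compressed web video.
megabits_per_sol = 500
megabytes_per_sol = megabits_per_sol / 8
print(megabytes_per_sol, "MB per sol")   # 62.5 MB, roughly one short video
```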
>>: So can you describe the technical reason why that's the case?
>> Mark Maimone: So why is it the case that we're so limited?
>>: Why you have more bandwidth when using the orbiter.
>> Mark Maimone: So the orbiters give us more bandwidth because they have bigger antennas.
I don't actually know the power situation between the two of them, but really, just the larger
antenna gives it more opportunity to send a better signal back, and perhaps it's due to the more
visibility, but I'm sorry, I'm not a telecom engineer, so I'm not sure. I just know that when we get
the data from the rover, it trickles in very slowly for a long time and never reaches as much as
we get in a big burst from the orbiters. So it's also maybe just -- well, that's the main reason. It's
just the orbiters are so much closer to the rovers. It's just a couple hundred miles to send a signal
up to the orbiters. It's not millions of miles or hundreds of millions of miles, so the antennas on
the orbiters are better able to send back the extra bandwidth. Was there another question?
>>: What is the bandwidth you mentioned?
>> Mark Maimone: What is the bandwidth I mentioned?
>>: Yes.
>> Mark Maimone: I said 500 megabits per day.
>>: And that's between the orbiter and Earth or between?
>> Mark Maimone: Orbiter and Earth for this mission, only for the Mars rover relay purpose.
They also have their own data, and they could relay for other missions, as well, like probably the
Opportunity Mars Rover.
>>: What is the bandwidth between Curiosity and the orbiters?
>> Mark Maimone: I don't actually know. I'm sure that it's less than 500 megabits, but I don't
know exactly, because we have multiple passes. It's not just once per day, but I'm sorry. I don't
have the number directly. So I showed you that we like to rely on information collected by the
rover, the pictures that the rover has taken, because they're the highest resolution, they're the
most current, they're the best localized information that we have, but we also have images from
orbit that were taken, either pictures or in multi-spectral bands. And so the science team will use
all this information to help us plan our long drives. The science team is who chooses where
we're going and what the destination is, and they help us tactically by explaining to us ahead of
time, this terrain looks like it's pretty rocky. This terrain looks really sandy. They'll give us
those inputs, and that helps us determine what our best choice is to get to where they want to go.
On a given day, we'll go anywhere from a meter to over 100 meters. I think our
record so far is over 140 meters. You'll see that coming up. But the mission overall has gone so
far over nine kilometers in two years. Do you have a question?
>>: Well, this is a very naive question, but if so much is dependent on how far you can see from
the camera on top of a two-meter mast, it seems like it wouldn't be that hard to have a
telescoping mast that goes up five meters or something like that, just to get you -- I know it's not
there now, but next time.
>> Mark Maimone: Right, so I think the question is, if we're limited by the height of the mast,
why not go higher? Why not have a telescoping mast, something like that? Yeah, that's an
interesting idea. You're right. We don't have that today. Basically, building any space mission,
the more moving parts you have, the more likely things are to break, so unless it's really going to
benefit the mission tremendously, you're not going to want to do something like that. We do
have the orbiter information, and we can see the surface at a really good resolution. It's like
quarter meter per pixel, 25-centimeter pixels, so that's pretty phenomenal. The problem, of
course, is just keeping it current, knowing where the rover is on any given day, but yeah, you're
right. So having a camera up higher would be nice. Having something floating above you
would be nice, to look down. That would definitely help us to be able to drive in a directed way
for a longer distance.
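One way to see why usable stereo knowledge runs out well before the horizon is that range error from a fixed stereo baseline grows with the square of distance. The numbers below are assumed, navcam-like values chosen for illustration (roughly a 0.42-meter baseline, a focal length around 1200 pixels, quarter-pixel matching error), not the actual calibration.

```python
# Back-of-the-envelope stereo range uncertainty versus distance.
# All parameter values below are illustrative assumptions.
baseline_m = 0.42          # assumed stereo baseline
focal_px = 1200.0          # assumed focal length, in pixels
disparity_err_px = 0.25    # assumed sub-pixel matching error

for range_m in (10, 20, 50, 100):
    err_m = range_m ** 2 * disparity_err_px / (focal_px * baseline_m)
    print(f"at {range_m:3d} m, range uncertainty is roughly {err_m:.2f} m")
```

By a few tens of meters the reconstructed shape is already uncertain at the scale of a wheel-hazard rock, which is why directed drives are only planned over terrain the cameras resolve well.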
>>: Maybe you will mention this later, but can you just tell us, is the robot also doing some local
mapping, so taking all the scans and building up over time a larger model of the area it's in, so
that it knows when it comes back to the same spot that sort of thing?
>> Mark Maimone: Right, so is the rover building its own map, and does it know when it gets
back to the same spot? It can do that. When we turn on the capability, I'll show you in a little
bit, then it can build a map and have that knowledge. It doesn’t do self-registration to that map,
though. That's not a capability that we thought we needed to have onboard. We do get to talk to
the rover once a day, and so after it's gone some distance, the humans on Earth can figure out
what the localization is and go from there. It would be -- that would be a nice capability. We
have shown the ability to drive more than one day in a row, and so for that, the better your
registration, the better you'll be able to identify any problems from orbit. But so far, we haven't
done that often enough that they wanted to put that capability onboard. Yeah?
>>: [Indiscernible] part of the terrain changed, due to winds or whatever things?
>> Mark Maimone: How often does the terrain change?
>>: Yeah, so during the communication time, the thing totally changes.
>> Mark Maimone: Oh, does the terrain change in between the time we talk to it and it goes
again? Not normally on that scale. There was a recent exception to that. The Opportunity
Rover recently did some work over a few days, and the next time they took a picture, they had
discovered this new rock right in front of it that wasn't there before, and it was a big deal.
Everyone was kind of shocked where this thing came from. Was it a meteorite? Who knows
what it was? And it was perfectly timed, because it was on the 10th anniversary. It was just
before that, so a mystery for the 10th anniversary. As it turned out, they discovered that they had
dislodged a part of the terrain uphill when they were turning in place a few days earlier, and it
just ended up rolling down to the spot where it came to rest afterwards. But to answer more
generally, no, the terrain doesn't usually change on a scale that we can notice overnight. We do
check for that, because especially if we're using the arm to do any operations, you don't want to
discover that you've slipped downhill all of the sudden. So we do these slip checks. We take
images, and we compare before and after to make sure that we're still in the same place, and the
rover can do that autonomously onboard, so it can do it from one position, to tell us if it slipped.
So this is just to show you the overall path that we've taken so far. This is current as of a few
months ago, four months ago, but we landed up in here. And we only drove a relatively short
distance, less than a kilometer, and the science team was ecstatic to be here, because this was the
junction of three different types of terrain units, and they discovered that coming down here was
outflow -- was like water outflow. We're talking billions of years ago. And so they were able to
determine by analyzing the minerals and the contents of the rocks here that this area used to be
covered, at least partially, with normal pH water. Earlier rover missions had found evidence for
past water that was highly acidic. They found minerals that only formed in very acid
environments, but here, for the first time, Curiosity found evidence for water with a neutral pH
balance. In the press conference, I think the science lead said you could take a cup and drink it,
if you were around a few billion years ago. What's that?
>>: Until you suffocate.
>> Mark Maimone: Yeah. So anyway, we spent a long time here, and then we really picked up
the pace and started driving down, and we've since gone farther than this, off of this map. You
can see the scale here, this is one kilometer. And I'll show you the odometry later. We've gone
over nine kilometers so far. So what else drives Curiosity? A lot of what drives it onboard is
robotics technologies. So I've listed some of the things that we have for helping us operate the
vehicle, so that we don't just joystick it all the time. We can rely on extra capabilities to go
farther or go more safely, go more robustly to where we're going. We have very basic drive
responses, spinning the wheels with velocity-controlled driving. I mentioned earlier, different
autonomous fault responses. There's a half a dozen kinds of vision-based autonomy that I'll get
into, and we have an arm onboard and instruments on the end of the arm and other robotics
technologies that lead into it include simulation, simulating the activities of the arm, and you'll
see some of that, and validating our sequences before we send them up. Sorry, there was a
question.
>>: So when you said joysticking it, I'm assuming that you generate the commands locally, and
the commands are sent to the orbiter, and the orbiter relays them to the --
>> Mark Maimone: Yes, I apologize. I shouldn't have said joysticking it.
>>: So now, there's a big delay on this, and so do you have local simulators with 3D terrain
mapping that you actually test it out before you send the commands?
>> Mark Maimone: Yes, and that was what we saw earlier, somewhere here. Right. So this
slide is showing you an animation of a planned series of drives. So when I said we joystick it,
what I meant was we log a series of commands into a control sequence. We simulate the
sequence, we review it, we confirm with the science team it's the right thing to do. But by
joysticking, I just mean we're not -- well, okay. I shouldn't have used joysticking. I meant
directed driving, where we're not using vision onboard or the vision to adjust the path that it's
taking. The first part of this path here is not using vision. It's just being commanded to drive for
some distance, and then at the midpoint, you can see where it changes direction. The simulation
showed the camera moving around, and at that point, it is being commanded to use its vision to
autonomously determine, is it safe to go this way? I want to go over here, but you, the rover, get
to choose how to do it, because I want you to be safe.
>>: Question.
>> Mark Maimone: Yes.
>>: How much distance could you cover if it was in direct drive the entire time?
>> Mark Maimone: How much distance could it cover?
>>: How fast is it at top speed?
>> Mark Maimone: Oh, what's the top speed? We can spin the wheels at up to six centimeters
per second. We usually spin them at 4.2 centimeters a second, but we often pause in between.
We don't want to just run the motors continuously. So in principle you could get
over 150 meters an hour, but we normally get closer to about 100 meters an hour. That's sort of
our blind speed, usually. And just by practice, our convention is that we won't ever drive 100
meters in the blind without stopping to check and see that we're not stuck, because we did that on
Opportunity, and back in April of 2005, we commanded a 90-meter drive without looking, and
we got stuck after 40 meters, but the wheels kept spinning for another 50 meters. And it ended
up digging itself into a dune we ended up calling Purgatory, because it took us two months to get
out of it. It took two weeks to get disembedded from the meetings discussing what to do, and
then the rest of the time actually getting out of the sand pile. So because of that, the drive rate is
even a little bit slower, more like 80 meters an hour, something like that.
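Those rates follow almost directly from the wheel speed; a quick back-of-the-envelope (the pause fraction is an assumption, the speeds are the ones quoted above):

```python
# Blind ("directed") drive rate estimate from the quoted wheel speed.
wheel_speed_m_s = 0.042                        # 4.2 cm/s commanded speed
continuous_rate_m_hr = wheel_speed_m_s * 3600  # ~151 m/hr if never pausing

rolling_fraction = 0.65                        # assumed fraction of time actually rolling
effective_rate_m_hr = continuous_rate_m_hr * rolling_fraction
print(round(continuous_rate_m_hr), round(effective_rate_m_hr))  # ~151, ~98 m/hr
```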
>>: So is the terrain -- is there a lot of loose terrain, so sand, the kind of thing that would
make you -- you talked of slipping a number of times. Is that why the rover slips, because it's not
solid?
>> Mark Maimone: So is there a lot of slippery terrain and loose sand? And yes, there is, and
we'll see some examples of that coming up. It is interesting, though. We tried sending Curiosity
recently -- we've been experiencing a problem with the wheels, that the wheels keep getting
poked through with some of the rocks that are embedded in the soil there, and so one of the
solutions was going to be, well, let's drive on sand instead, because there's this whole sandy area.
Let's try that. We tried that, and we started getting embedded again, so that turned out not to be a
great solution, but yeah, basically, there's all kinds of terrain. There's sandy areas, there's
exposed bedrock. The terrain we like the most is something that has loose sand around, or
something where we can push rocks into the ground, as opposed to rocks that are cemented in place
and poke into the wheels. I'll get back to that. I'll show you several examples of that. So now,
I'd like to just show you some of what the hardware is on the vehicle. So start with the arm. The
arm on Curiosity, it's 2.5 meters long, and the arm here actually weighs more than the whole
MER rovers did. It has a really heavy and well-instrumented turret on the end. It's a big wheel.
It spins around and has different instruments on it.
>>: Is that 100 kilos Earth gravity or Mars?
>> Mark Maimone: Well, kilos are the same on both. It's a mass unit, so the Mars Exploration
Rovers had a comparable mass. Things weigh one-third as much on Mars as they do on Earth,
but they have the same mass. So here, the arm and the payload is -- I think it's over 10% of the
mass of the whole rover, and we use the arm for a lot of different activities. Some of the science
measurements require direct contact with the target, with the rock or the soil, so we have to place the
arm or the turret onto the surface. Other things we can do remotely. We have other instruments
onboard that can do remote sensing, but this arm is what we have for doing the placements, and
right now, so far, this is the most complex manipulator we've sent to another planet on a robot. This
has a drill on it that we've never had on a robot before. Human astronauts have gone to Moon
and used a drill to go subsurface, but this is the first time we've had a mobile drill platform that
we can just place wherever we end up going, and we've taken four samples so far. This is a
close-up view of the turret, some of the instruments onboard. We have a high-resolution camera,
color camera, the MAHLI, and we have a spectrometer, the alpha particle X-ray spectrometer.
You place the instrument on the target and leave it for some integration time, either just a few minutes or
overnight for a more detailed understanding, and it gives you the breakdown of what's inside the
soil it's sitting on or the rock it's sitting on. We can scoop up sand in the terrain, and we can also
brush sand off of a rock surface. The scientists always want to know what's inside the rock.
They don't really want to know what's on the surface. The sand has blown and weathered it on
the outside, but to clear it off and get the best spectral reading, they like to brush it off with the
brush sometimes. So the arm has a force-torque sensor to measure how much pressure it's
getting when it's being placed onto the ground, and that's used internally just as a safety check to
know how much force to push down to make sure you've got good enough contact,
not too much contact, and whether you have lost contact. And you can see here some of what's in the
arm workspace, things it can do with the samples. When it collects a sample, it can drop it into
different instruments, into the inlet for the CheMin or the SAM science instruments that can
process the materials. We have effectively like an Easy-Bake oven inside. It takes the samples
and heats them up and analyzes the vapors that come off of it when it's been processed. In
addition, we have another instrument on the mast that shoots a laser, the ChemCam instrument. It
shoots a laser out, vaporizes some material some distance away and again uses a camera and a
spectrometer to analyze the vapors that come off. The rover has several calibration targets
onboard, just to maintain a good understanding of how the instruments are
working, so we've got calibration targets for the color cameras, the microscopic imager, and all
the different spectrometer and imaging sensors there.
>>: There's actually a penny on Mars right now? That's what that is.
>> Mark Maimone: Yeah, actually a penny on Mars right now.
>> Mark Maimone: And so we can send the arm to places on the rover itself. It has its own teach points, like
places where we know it's going to go. Part of the checkout activities were to make sure that it's
going where we expect, and we can measure that with fiducials on the arm to guarantee that we
can predict the effect of Mars gravity on the arm very well. Normally, when we operate the arm,
it has a sort of a sweet spot, where it has the best behavior, where it's able to get multiple
orientations and has the best-known precision in measuring the surface using the stereo cameras.
So we call this the magic cylinder. So we generally like to put targets into this volume, if we
want to put the instruments on it. That's our goal for a drive, is to end the drive so that the target
is in here, and that gives us the best opportunity for getting instruments on it. If we don't
succeed, it can still work outside of this volume, but it's much more constrained. We may not be
able to successfully place on that target without further positioning the vehicle. And every time
we position the vehicle, that's another day that it takes to send a new command to drive and
confirm that it's there before we can do more arm work. So here's an example from our
simulation program of an actual command sequence that was sent up on sol 612. That was
maybe four months ago, and what you're seeing here is what the rover planners would see in a
control room on Earth as they're planning the day's activities. We have a model of the terrain in
3D, and we can watch what our commands will do on that terrain, and it does interact with the
virtual terrain in some cases. In other cases, it doesn't, so you may see like the instrument go
into the ground, and what that means is just that if it doesn't know that there's a surface there, it'll
just push it through. On Mars, what's going to happen is it will contact, and you'll get a force
feedback on it, but that's not always shown in this simulation. But we used this simulation to
make sure that when we take pictures, they line up where we expect them to be, that we're on the
right target that the scientists want. This animation is created and visualized throughout the day
by the rover planner, and at several strategic meetings throughout the day by the whole science
team and the whole operations team. So everyone has a chance to see this, see the plan, review
it, make sure we're hitting the right target. Anyone can say at any time, wait a minute, don't you
want to go to that other target over there, or how are you sure you're not going to be able to --
you're not going to slide off if there's a pebble nearby, something like that. And so this
visualization really makes it possible to get this all done in a single day, without having to do
additional thinking or extra tests by doing anything on Earth in reality. We really use the
simulator almost exclusively. We almost never go back to the actual test vehicle, because we
never know exactly what we have on Mars. It would take us a while to try and reconstruct
everything in reality. We just use the sim for all of our planning. So we can use the arm to do
contact science, and this was the first rock for which we did that on the Curiosity mission, and
this is what it looks like from the top of the deck, from the navigation camera. As you can see,
the arm is reaching out and stretched out, placed onto the target here, and these are some of the
images that were taken during that close touch. So another thing we discovered we could use the
arm for was self-inspection. You can take selfies with the arm. You reach out and just look
back at yourself, and we did that to look at the belly pan, look at the wheels under the rover, and
we've also done that to get this whole selfie image here. Now, this has become an iconic view of
the rover, but it actually took a lot to generate that. This is going to show you an animation of all
the motions that the arm had to go through in order to collect those images to be stitched together
on the ground. And also, you'll see the view through the camera as it goes. So this wasn't
something we had planned originally, but after we landed on Mars, some of the team said, you
know? This would be really useful, to be able to see back to the vehicle, so they worked this up
in simulation, then went out to our test model, tried it on the test model, and
were able to generate actual images that our team could then stitch together and co-register to
make the overall view. So the other kind of technology that we can do with the arm, another
thing we can do with the arm, is to sample sand with a scoop on the arm. A scoop isn't a new
instrument. We've had that on earlier missions. Even Viking I think had a scoop, and the
Phoenix mission eight years ago also had a scoop on it, but that's another resource for us to get
some sample into the system. So we can either scoop or drill, and here's a close-up of the drill
bit as seen by a camera on the rover. So any time we use the drill, we can use the cameras to get
the close-up view to see, does the bit still have enough life in it? Is there anything stuck on it?
We have extra bits that we can swap out if we need to, but happily, we haven't needed to yet.
And you can see the end result of using the drill is that we really pulverize the material. We
create this powder, and what happens is the -- so the drill sits on the terrain with these two
prongs, and then the middle piece comes in and out and starts spinning around to drill into the
surface. And once it drills down, it creates the powder, and there's a helix in the middle here,
and the powder slides up the helix into the unit, and once it's inside here, if we decide we want to
keep it, we end up twisting the arm through a series of maneuvers using gravity to send it
through the system. It moves from here through a door into another sample container, and
there's a whole mousetrap kind of configuration inside for routing it through different sieves to
filter out different particle sizes and routing it to different storage places. Here's a video that
shows you the drill on a rock on Earth. This is the engineering model. It's not a video from
Mars, but you're seeing how the drill works in a time-lapse view here, and what's kind of neat is
not only do you see the activity here, but you can see some sand particles pushing away at the
bottom, and we see that on Mars, too. If you look at the before and after pictures, you can see
little pieces of the terrain going downhill after the drill has been there. And just to give you an
idea, the way we choose the target is we get the stereo view, and we'll also get this very detailed
color view, and the science team will go and try and figure out which way do we want to go?
Which targets do we want to look at? What's our priority? And they work with the rover drivers
to go back and forth between what the scientists think is the best target and what the rover drivers think is
achievable. Because it may be an awesome target, but if you can't drive there, you can't reach it,
it's not going to do you much good. So we've really worked together. The science team outlines
the goals, and the rover drivers are trying to reach that goal. So I mentioned that once we collect
a sample, it goes through the mechanism here. We have different sieves, different filters to
prune them down to a certain size, and they end up getting portioned out into tiny little aspirin-sized
tablets, and it's those tablets that get processed by the instruments inside, to get baked or
processed in other ways. Okay, so that was the arm. Curiosity also has cameras and mobility
capability and technologies, so let me go through some of that. We have 17 cameras on the
rover, so we have body-mounted cameras on the front and on the back -- there are actually four
cameras here on the front, two and two on the back -- and there are four cameras up here. There are
more cameras inside the mast. There's a camera on the arm. There's a camera in the laser
scanner, or the laser instrument, and all those can provide data for Earth, but onboard the rover,
the only cameras that it really uses onboard autonomously are the body-mounted hazard cameras
and the mast-mounted navigation cameras. And even though you see four here, we only use two
at a time. We have a redundant backup computer, and the cameras are tied to a particular
computer. So if you're on one computer, you use one pair of cameras. If you reboot to the other
computer, you use the other pair of cameras. That's going to be a problem later on in this story,
but that was the design. So we get a nice, wide field of view. Here's a sample field of view from
the body cameras. It's 120 degrees. It's a fisheye view. It's actually 180-degree diagonal view,
and we use those to detect nearby rocks and also look at where the arm's going to be placed, and
here's an example from the mast. I mentioned that you're at human eye height, and these are the
cameras that we use for those activities, and you can get a nice view all around the rover, looking
out 100 meters or nearby, too. And so what's some of the processing we can do with those
images onboard? Well, if we're not doing directed driving, we have the option of several kinds
of vision-based autonomy. So one option is that we can ask the rover to look for hazards around
it, figure out where they are and steer to avoid them, so that's what we call autonomous
navigation, or auto-nav, so that's one way of driving. Another way we call visual odometry, and
what that does is, when you're driving your car, you have an odometer in your car, and it's
measuring your distance, and it's pretty accurate, because you have rubber tires moving on
asphalt. They don't slip that much. We have metal wheels going through sand and rock at
different slopes, so we slide a lot more, and we don't have any sensor onboard the vehicle that
can detect slip in real time. There's nothing there. You have accelerometers and you have
gyroscopes, but gyroscope only measures your attitude change. It doesn't tell you if you're
sliding one way or another. And the accelerometer, in theory, you could doubly integrate it and
get position information, but they're so noisy that you really can't rely on that. So what we do is
we actually use pictures. We take before and after stereo pictures, and we can process those
pictures to figure out the transformation of where the rover went, and that can be used to correct
our position as we go. It's not on these slides, but we also are just demonstrating now a new
capability for visual target tracking, visual servoing. You aim the camera at a target, and ask it
to keep watching it, and it will keep tracking it as it goes by it. So for visual odometry, the way
that works is we detect points in the nearby terrain automatically, just find whatever features
happen to be there, and then track their motion across frames, so this is sort of a before and after
frame. There's a rock here, and you can see the arrows are pointing down this way, and the rock
over here has moved down to that point. So this is just examples of different frames from actual
motion on the vehicle. So this is a very helpful capability that tells you when you're slipping and
gives you precise positioning, and we really couldn't drive through sandy or sloped areas without
this. Yes.
>>: So are you doing the tracking on Earth or on the robot?
>> Mark Maimone: This is all on the robot. This is not done on Earth.
>>: Programmed through the robot.
>> Mark Maimone: Right, well, we have the software on the robot, and what the software will
do is, if we turn it on, it will automatically start taking pictures as it drives, and then it'll stop
every meter or so, whatever distance we tell it, to take another picture, compute the difference
and update its position knowledge onboard. So it's able to use that to figure out where it really is, even when the wheels slip.
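A stripped-down sketch of the core pose-update step, assuming the stereo correlator has already produced matched 3D feature points from the "before" and "after" image pairs; the flight implementation is of course far more involved than this.

```python
import numpy as np

def estimate_motion(points_before: np.ndarray, points_after: np.ndarray):
    """Estimate the rigid transform (R, t) that maps the tracked terrain
    features seen before a step onto the same features seen after it,
    using the standard Kabsch/Procrustes method.  Inputs are N x 3 arrays
    of the same features triangulated from stereo in the two frames."""
    mu_b, mu_a = points_before.mean(axis=0), points_after.mean(axis=0)
    H = (points_before - mu_b).T @ (points_after - mu_a)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_a - R @ mu_b
    return R, t

# The apparent motion of the terrain in the camera frame is the inverse of
# the rover's own motion, so inverting (R, t) gives the slip-corrected
# position update that replaces the wheel-odometry estimate.
```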
>>: [Indiscernible] really fast in the past few years and you just keep uploading new programs
there.
>> Mark Maimone: So you're asking if the computer vision advances quickly, can you update the
software? We do have the ability to update the software, and we do that sometimes. But the
mission overall has sort of a low tolerance for big changes, so we generally won't do huge
updates unless there's some really big benefit to it. Because any time we send an update like that
to the software, we have to put it through a whole test program. We have to revalidate all the
other software to make sure that's still working, we didn't break it accidentally. So we don't
generally upload new stuff, but on MER they did that. Spirit and Opportunity, in 2006, did get
new technologies, like four new technologies uploaded, and that was done through a whole big
test program. It took a couple years to go through the test program, but they did get it done.
Yes.
>>: So I'm guessing the orbiters cannot provide the equivalent of GPS.
>> Mark Maimone: Right, yeah. You're asking whether the orbiters can provide that, and no, they
can't. We don't have GPS at Mars, no. I would love to have that. Having GPS equivalent on
Mars would solve a lot of problems, but we don't have it, so we rely instead on this visual
odometry of looking at the local terrain.
>>: Has there been any discussion of creating a constellation to do that kind of thing?
>> Mark Maimone: Yeah, people have talked about making something like that, but the expense
of putting enough satellites around there to support that is just beyond anything that we have so
far in our infrastructure. It would be great. If we actually start putting things on Mars for real,
for long, extended periods or people there, you're going to want something like that. But it's not
available today. So I think I'd like to show a quick video here, just to give you an idea of how
the safe driving software works. When we ask it to drive and avoid obstacles along the way, just
want to show you how that capability is implemented. Now, this is showing you the MER
vehicle. It's really a very similar kind of approach on Curiosity, but Curiosity is using more
pictures than MER did.
[Video Plays].
>>: Class, today's lesson is autonomous waypoint navigation in natural terrain. This is your
designated goal location. First, watch this video to learn how it's done.
>>: A robot exploring natural terrain needs to continually find and avoid navigational hazards,
such as large rocks and steep slopes. Our system models the terrain using stereo vision,
determines and stores how safe the rover would be at each location in the model, then estimates
the traversability of several possible driving paths. The rover chooses the safest path that moves
it closer to its goal and drives a short distance along that path. This process of taking pictures,
predicting rover's safety and driving continues until the goal is reached. The first step of the
navigation cycle is to take pictures. A pair of cameras at the front of the rover captures
simultaneous left and right images of the terrain. Features in the image pair are correlated and
triangulated to determine the distance to the feature from the cameras. Range data must satisfy a
number of tests before being accepted as correct. Confusing or misleading features, like parts of
the vehicle and areas of the image seen from only one camera, are automatically removed from
consideration. The resulting points are accumulated into a three-dimensional geometric model of
the terrain. This 3D model is evaluated for safety across a grid at rover wheel resolution. For
each grid cell, points within a rover-sized patch are fitted to a plane and rated for traversability.
Each patch is checked for steps, excessive tilt and roughness. Obstacles are expanded by the
radius of the rover, so that the rover center will stay far enough away from the obstacle to keep
the entire vehicle out of danger. Each patch is accumulated into a gridded traversability map that
drapes across the terrain near the rover, including areas not currently visible from the cameras.
A small number of paths are superimposed onto the map and evaluated both for safety and also
closeness to the navigation goal location. The rover chooses the safest path that gets it closer to
its goal and drives a short distance along that path. After each step, the navigation process
repeats until the goal is reached, no safe path is available or the rover is commanded to stop.
[End Video].
>> Mark Maimone: There's one last little bit. Give it a second.
[Begin Video].
>>: Now, it's your turn to try. Oh dear. Maybe we should watch that video one more time.
[End Video].
>>: Is there any significance to the number 42?
>> Mark Maimone: I don't even remember, where was the number 42?
>>: Hitchhiker's Guide to the Galaxy, the answer to the question of life, the universe and
everything. If anybody knows what the question to the answer is, let me know, please.
>> Mark Maimone: Okay. That's probably the source, yes. I did the screenplay for that, but I
didn't do the animation, so the animator got to sneak his own little thing in there. Okay.
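In code-sketch form, the loop the video describes looks roughly like the following; the grid resolution, thresholds, and candidate-arc bookkeeping are stand-ins chosen for illustration, not the flight values.

```python
import numpy as np

def patch_goodness(z_patch: np.ndarray, cell_m: float = 0.25,
                   max_step_m: float = 0.3, max_tilt_deg: float = 20.0,
                   max_rough_m: float = 0.1) -> float:
    """Rate one rover-sized terrain patch: fit a plane, then check step
    height, tilt, and roughness.  Returns 1.0 (ideal) down to 0.0 (obstacle).
    All thresholds here are illustrative."""
    rows, cols = z_patch.shape
    ys, xs = np.mgrid[0:rows, 0:cols] * cell_m
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(z_patch.size)])
    coeffs, *_ = np.linalg.lstsq(A, z_patch.ravel(), rcond=None)   # plane fit
    residuals = z_patch.ravel() - A @ coeffs

    step = z_patch.max() - z_patch.min()
    tilt = np.degrees(np.arctan(np.hypot(coeffs[0], coeffs[1])))
    rough = residuals.std()
    worst = max(step / max_step_m, tilt / max_tilt_deg, rough / max_rough_m)
    return 0.0 if worst >= 1.0 else 1.0 - worst

def choose_arc(goodness_map: np.ndarray, arcs: dict, goal_xy: tuple):
    """Pick the candidate arc whose cells are safest and whose endpoint ends
    up closest to the goal.  `arcs` maps a name to (list of (row, col) cells
    the arc sweeps over, endpoint (x, y) in meters)."""
    best_name, best_score = None, -np.inf
    for name, (cells, end_xy) in arcs.items():
        safety = min(goodness_map[r, c] for r, c in cells)
        if safety == 0.0:
            continue                       # arc crosses an obstacle cell
        dist = float(np.hypot(end_xy[0] - goal_xy[0], end_xy[1] - goal_xy[1]))
        score = safety - 0.1 * dist        # illustrative weighting of safety vs progress
        if score > best_score:
            best_name, best_score = name, score
    return best_name                       # None means no safe arc: stop and report
```

The flight system additionally expands each obstacle by the rover's radius, as the video mentions, so that scoring the rover's center point is enough to keep the whole vehicle clear.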
>>: So you said that the pilots here use it and the rover uses it, so how often do they just make it
autonomous and say go there, and how often are they giving it the exact path plan? Or maybe I
misunderstood.
>> Mark Maimone: No, no, that's a fine question. Maybe I can jump ahead and answer it. So
how often do we choose to drive autonomously versus not? This is a breakdown of all of our
driving as of about two months ago, and it's broken down by drive mode. So the red on the
bottom is the directed driving, where it's not using vision at all. It's just following the
commanded distance, and the purple on top is where it's using what you just saw, the vision to
look at the terrain and figure out where the rocks and ditches and slopes are and avoid them. So
it's maybe 10% of the time that we're using that autonomous mode. In my dreams as a researcher
20 years ago, I thought when you were driving on Mars, you'd want to use that mode all the time.
It's the safest way to go, and it is, but it's just too slow. The total distance here is about nine
kilometers, and if we'd only driven in that mode, we'd have accomplished much less distance.
>>: Why is it slower?
>> Mark Maimone: Why is it slow? Well, it's slow because we have a 133 megahertz CPU,
of which I only get a fraction of the time, and it's slow because the cameras are not designed to
take images very quickly. We actually have to stop, expose the images for half a second, transfer
them across a camera board into the computer's memory. All that takes more than 10 seconds
for a regular stereo pair. So it's just not built like a terrestrial vehicle with fast image acquisition
and fast transfer. The components on this have to be space qualified. It's using a rad-hard
processor, and the time it takes to qualify a processor is usually a decade, so we're always at least
a decade behind the times there.
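A rough per-step time budget shows where the autonomous rate goes; only the wheel speed and the roughly ten seconds per stereo pair come from the talk, and the remaining figures are assumptions for illustration.

```python
# Rough budget for one autonomous-navigation step (illustrative numbers).
step_length_m = 1.0
drive_time_s = step_length_m / 0.042      # ~24 s of rolling at 4.2 cm/s
imaging_s = 3 * 12                        # a few stereo pairs at >10 s each (assumed)
processing_s = 60                         # stereo, map update, path search on a
                                          # shared ~133 MHz CPU (assumed)
step_time_s = drive_time_s + imaging_s + processing_s
print(round(3600 * step_length_m / step_time_s), "m/hr")   # on the order of tens of m/hr
```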
>>: But if you had a superfast processor, super-good cameras and all that kind of stuff, then in
theory it would be faster than having the manual? Is that right? With the manual, there's all this
latency and concern.
>> Mark Maimone: Well, yeah, so you're saying, couldn't it be faster if you actually had all the
right sensing onboard? And yes, you're absolutely right. If we had -- as a tradeoff, given a
certain amount of drive time available per day, the current tradeoff is driving with all the
autonomy to avoid obstacles, our best rate so far has been about 23 meters an hour, and driving
blind, our best rate is 100 meters an hour. So if we could get the autonomous rate up to 100
meters an hour, there'd be a lot of incentive to use it. And so we're looking at the next mission,
deciding how we're going to do that, but generally, we don't make changes to stuff like that,
unless it's motivated by a science goal. So if the science team is telling us we don't want to go
any faster than you went on the last mission, we probably won't. I would love to go faster. I'd
love to be going even faster than the blind speed. Why not go faster? We've set the drive speed
so that we have enough torque to overcome obstacles. Each wheel has enough power that it can
pull half the weight of the rover. So motivated by that special case, they put the speed very low,
with a very high gear ratio. But you could imagine systems that were built differently. We just
haven't had the incentive to do that yet. Okay, so you saw how the onboard autonomy works.
The way we use it is we usually stop every half meter to a meter and a half, taking four sets of
images now, not just the one image pair, to follow that whole process. So here was an example
of the first time that we used that software on Curiosity. Now, remember earlier on in the talk, I
showed you a drive that was two straight-line segments, a little bit off to the side? That was the
plan for this drive, and here you can see that when it drove, it didn't quite follow the plan. It
didn't quite go in the straight segments. The first part was straight enough, and then here, it did a
little curve, and let me show you what ended up happening. These are the maps that it generated
onboard, internally, as it was driving. So it took pictures, processed the data, and it color-codes
every cell on the map based on how safe it thinks it is. Green is really safe, go ahead. Yellow is,
it's a little rough, but go ahead and drive there. Red just means don't go there. And so the reason
you see red all around the outside is that the human drivers set a keep-out box. We said, don't go
to this area. If your position estimate tells you you're in this area, stop. We didn't mean for you
to go there. It's a way of constraining it to not ever get out of the terrain we want it to go
through. And what you can see over here is it's a little hard to tell, but every one of these gray
specks is an arrow. What we have onboard is the ability to plan the best possible path to get to
the goal. And so what it's doing is it's evaluating at every point, what's the next best step I can
take to get to the goal? It's a global path planner. You may be familiar with A* search. People
use it in video games a lot for optimal course planning. At Carnegie Mellon University, they
developed an optimized version of that for driving called Dynamic A*, or D*, and what it does
is, it is still an optimal A* search, but when you have a map update that's tiny, like when you're
driving and you just see a little bit of the terrain, it's very efficient, so it runs much faster than
plain-old A* would run. It doesn't have to do the whole search at every step, and we have that
software onboard. So here's a view of what the rover actually did during the autonomous part of
the drive. So it had gone this way, and it turned on its cameras, and it saw this stuff over here.
And it's not that big. It's not so big that it's a real hazard that it can't possibly go over with the
wheels, but it still noticed that this part of the ground was flatter and even better than that, and
given the weighting functions that we had on that first day, that was enough for it to choose to
drive around the outside over here. We've since changed the weighting. We didn't want it to
avoid the small rocks anymore, but that was the first-day activity, and it worked just as we
thought it would. So what else drives Curiosity? A lot of challenges along the way. One
interesting thing, as a roboticist, people tend to trust sensors more than the high-level autonomy
processing, and that had been our experience in initial development, too, that in order to estimate
where we are, what our attitude is, you really trust the IMU implicitly, right? And we all have
smartphones now that have accelerometers and gyros in there, and you kind of trust it to do the
right thing. So on this one day, it turned out that -- and so one way we enforce that is we say, if
the vision system's estimate of what our attitude is differs from the IMU by more than a degree
or whatever the parameter's set to, then you don't trust the visual odometry. You trust the IMU.
And so we got a fault on this day and on two days after that. The VisOdom disagreed with the
IMU, and like, what's going on? It's not supposed to be that far off. There must be something
wrong with all the software. Well, it turned out that the VisOdom was actually right. What was
going on is that someone had set a parameter in the integrator for the IMU that would ignore
gyro changes when there was a big acceleration. So if the acceleration is too big, the reasoning went, it
must be a faulty reading, so you should ignore it. But what was happening is we'd be driving,
coming down off a rock, you'd hit an acceleration, and it was a big enough force that it hit that
threshold they had set, and it was just ignored. So the IMU estimate was actually off from
reality, and it turned out to be off by what the visual odometry had measured. So this was a case
not of using the sensor to find broken software, but of using the high-level visual odometry
software to find the broken sensor processing. So we updated that integration parameter, and we haven't had
any other problems since. And it's been working really well for us. This was a few months ago,
but as of then, out of 3,800 updates, we only failed to produce an update twice, with two frames,
and that was because we were looking at terrain that was so sandy, there was no texture there. It
couldn't find enough features autonomously to track it. Another problem we had last year was a
flash problem. The rover found some problem with the flash memory and ended up rebooting, and
the ops team decided to switch to the backup computer with backup flash in order to deal with it,
so they could assess it safely from the other CPU. And they did find a real problem, and they
ended up deciding to disable half of the flash on the main computer. The RCE is Rover
Compute Element. That's one CPU, so they disabled half of the flash on the A computer and
actually we've been on the B computer ever since. We haven't gone back to the A side. So that
was a case where everything worked as it should. There was a problem, but the fault response
acted correctly. It informed the team of the problem. The team reacted quickly to switch to the
backup CPU, and everything was fine. We were able to resolve it and move on, except for one
thing. I mentioned earlier that the cameras are tied to the CPU, so when we switched CPU, we
went from the top navigation cameras to the bottom navigation cameras. And when you do
stereo vision, you want to have the cameras maintain their alignment, so between all the stereo
pairs on the rover, we have like a titanium bar to keep them rigidly mounted with respect to each
other. But if you notice, these cameras don't have a bar between them. They're sort of hanging
down from the ones above. And what we noticed was that when we tried to do stereo with these
images from these cameras, like stereo processing not with the eyes but with software, we
weren't getting very good results. It was coming out really poorly, and it took us a while to
figure out what was going on, but we eventually realized that the cameras were warping with the
temperature, ambient temperature of Mars. That was a problem, but it turned out that it was
actually predictable. You could model the warping based on the current temperature, so what we
did was we actually had to write some new software, upload some new software to read the
temperature sensor and update the camera models appropriately, and with that, we were able to
keep working again. We figured out the problem, we tested it on the ground. Once we had the
right solution, we uploaded a version of that solution and kept it going. Later, we had a new
version of flight software. We incorporated a more efficient patch to that directly into all the
code, instead of just a little quick fix, but happily, we got it working again, but it did take a few
months to sort it out, to understand what was happening, get the solution and get it onboard.
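The details of the flight fix aren't given in the talk, but the general shape of such a correction can be sketched as follows, under the assumption that the thermal warp reduces to a small temperature-dependent rotation of one camera's pointing; the coefficients below are placeholders, not real calibration values.

```python
import numpy as np

REF_TEMP_C = -20.0           # assumed reference temperature for the nominal model
PITCH_RAD_PER_C = 2.0e-5     # assumed thermal warp sensitivities
YAW_RAD_PER_C = 1.0e-5

def corrected_rotation(nominal_R: np.ndarray, temp_c: float) -> np.ndarray:
    """Apply a small temperature-dependent rotation to one camera's nominal
    orientation before running stereo correlation, so the two camera models
    stay consistent as the mount warps with ambient temperature."""
    dt = temp_c - REF_TEMP_C
    pitch, yaw = PITCH_RAD_PER_C * dt, YAW_RAD_PER_C * dt
    Rp = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    return Ry @ Rp @ nominal_R
```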
And you can actually see that reflected here. That happened on sol 200. We had started drilling
right before. This is the odometry of how far we went each sol. You can see we didn't do a
whole lot here. Here, we were drilling, here we had the fault. After about a month, we had a
ground solution worked out that we could still fix the models on Earth with the right
temperature, but then Mars went behind the sun for a month, so we couldn't do anything anyway.
So we lost almost 100 sols here, between staying put for the drilling, and then the fault, and
then solar conjunction. But you can see we've done pretty well driving ever since. So what's
another challenge? One time, we -- so I mentioned, when we do the autonomous driving, the
rover is in charge, and usually, we start the day with a human choosing how to drive. But if you
have a weekend and people aren't working on the weekend, or if for some other reason you don't
get to talk to it in between, couldn't you keep driving autonomously? Yes, you could, and we've
done it. We've commanded it successfully on sol 435, and we tried it again a few weeks later,
and this time it failed. And what happened was, we drove blind, and we got out to here and we
started the autonomous driving. Now, this was going to be a really cool autonomous drive.
They had sequenced a plan to go like 200 meters more, off the chart here. But what ended up
happening is, pretty much right away, it saw something and tried to work its way around it. And
you can see here what it saw. It's driving up, and there's a little crater here, and the reason we
couldn't see it is it's a little depression in the terrain. The end of what you can see in the blind
cameras is where the depression starts. So in retrospect, it's kind of obvious that there would be
something there, but it tried to go around it, but then -- I was talking too much. Let me bring this
back up again. So here we are, approaching the crater. It's going to see it and veer right. You
can see it steering to the right there. Now it's going around the crater, around it. Eventually, it
hits the end of the red box, that keep-in zone. The red zone is where you don't want to go, so
even though it's finding a way around it, it says, oh, I'm hitting my red box. I can't go any
farther. It turned around -- and you can see the tracks here. It's going back over its own tracks to
go the other way. Now, this was especially interesting, because we had actually set parameters
to not let it drive backwards, so when we started seeing these backward tracks, it was a little
concerning. It turned out that I had left one little hook in there. Of all the paths it could choose,
almost all of them were forward, but there was one optional choice: if you're totally boxed in but you can turn around 120 degrees, it was still allowed to do that. So it found that one option, backed into the corner, did a turn in place and started driving the other way. So the
autonomy took over and emergent behavior took over, and it found a way to get out of the box
we'd put it into. The drive ended up stopping for other reasons. It ended up stalling when it was
steering one of the wheels, totally unrelated to the autonomy, but this was a fun day to figure out
what happened when it's confusing -- it's driving backwards and it stalls. Happily, everything just
worked just fine, but it was a tough day to get the first images back and try and explain what
happened. This is the map that it built up. You can see the edge of the crater here. It goes off to
the side, gets boxed in and then turns around and comes back the other way. So the other issue I
mentioned is that we discovered that the wheels were getting torn up much more quickly than we
expected. We can see here at the bottom -- I guess it's a little dim. I'm sorry, but if you can see
right here, that's a piece of the wheel that's been pushed inside. What ended up happening is we ended up in terrain that we had never experienced before. Sojourner never experienced it, and neither did Spirit or Opportunity. We hadn't found terrain with rocks that are so embedded into the ground
that they won't move out of the way when this big rover comes onto it. Normally, in the past,
when we'd seen small rocks, we would just push them into the ground, push them off to the side.
They'd move out of the way, everything was fine. But here, the rocks weren't moving, and these
wheels, these 50-centimeter-diameter wheels, are machined out of a big block of aluminum
down to the thickness of less than a millimeter on most of the wheel's surface. That's like a tin
can, so you're essentially running a tin can over the terrain. You're pressing it against a rock that's not moving, and it just
pokes right through. So we started noticing right about the time of that last drive that we were
starting to get these problems in the wheels. Here are some before and after. Just two weeks
after we landed, the wheels looked nice and pristine and good, and just two months later, we can
already see little dings, but the dings didn't worry us. But almost a year later, it still wasn't that
bad. We hadn't hit that terrain yet, where it had those embedded rocks. We'd been in nicer
terrain up to that point. Even here, there's more dents, but there's no punctures yet, so more than
a year into the mission, we're still fine. But now, all of a sudden, it's getting worse and we're
starting to see actual holes in the skin on the wheels. So that gave us pause and made us slow
down a lot and reassess our strategy. We had driven from here all the way down to here, and at
this time was when we realized there was a problem with this terrain, and so we asked the
science team to help us understand what's going on, so they came back with this map, where they
said, green is pretty good terrain for the rover. Yellow is this embedded stuff. And you can see
we were embedded right in the middle of a sea of yellow, so we couldn't back out. It was just as
bad as going forward to try and get out of it. And so what we ended up doing was trying to
change our course through here to try and micro-optimize every step to avoid that kind of terrain.
But I actually found it really interesting that the science team after the fact could go back and
reassess the data they already had and say, you know, from orbit, we can only see down to this
resolution, but based on all the other features we see, we can predict that these tiny rocks that are
only two centimeters tall are going to be a problem in these areas, and they were right. They
were able to get that assessment right, even though they couldn't see the actual obstacles there.
So we had another issue. The first time we encountered a big dune here, this was a dune they
called Dingo Gap after a feature in Australia. This thing was about a meter tall, just a big sand
pile, and this was the way out of that nasty terrain, so we'd been climbing in the nasty terrain
over here, and we said, if we can just get over this, it gets much better. It's much nicer terrain,
but can we get over it? We don't know. So as it turns out, we got up close and we studied it, and
we looked at the slope, and the scientists told us that, you know, the slope is actually less than
the angle of repose for this material, which means it's actually been there a while, it's weathered,
so it's probably more compacted. It's probably able to support the weight. So we planned a drive
to go over it, come over to the side here, get some of the wheels on some of the rocks on the side
and some of the wheels on the sand, and that was the plan. And, happily, it actually worked just
fine. And what I did here was project the course that it took. We were driving the straight line,
and then here's that sand wedge, viewed from above. We're climbing over it and coming down
to the other side. What you can see in this view of the course plot is the red data shows where
the rover thought it was, so we were using the visual odometry to update its position knowledge,
but it only runs every meter. So in between every meter, it thinks it's here, and then it gets a
correction, and you see this impulse jump back to where it was. So the green line is sort of the
smoothed over view of the telemetry after the fact, but the rover itself thought it was doing the
red drives here. So, happily, it only took two sols, two days, to get over this thing. But there
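The red-versus-green pattern he describes, dead reckoning between visual odometry updates with an impulse correction at each update, can be pictured with a toy estimator like this; the numbers are made up for illustration, not flight telemetry.

def simulate_drive(steps, step_m=0.25, slip_fraction=0.3, vo_interval_m=1.0):
    # Wheel odometry accumulates commanded distance; the true position falls
    # behind because of slip; visual odometry snaps the estimate back each meter.
    estimated = 0.0
    actual = 0.0
    since_vo = 0.0
    history = []
    for _ in range(steps):
        estimated += step_m
        actual += step_m * (1.0 - slip_fraction)
        since_vo += step_m
        if since_vo >= vo_interval_m:
            estimated = actual     # the "impulse jump" back to the measured position
            since_vo = 0.0
        history.append(round(estimated, 3))
    return history

print(simulate_drive(steps=12))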
was a lot of concern and consternation whether we'd be able to make it, and happily, we had no
trouble whatsoever. I wish that had held true later, but at least here, we were doing just fine,
getting over this thing. So it takes a while to do the whole playback. I can jump back to it if
people would like later, but basically, that was the end result. Looking back at our tracks, you
can see, we were able to climb over it, came down the other side, and there was slip along the
way, but it wasn't that bad. The slip never exceeded, I think, 50% going up the other side.
And this was fun for me. I got to be the driver both on the up side and the down side, or one of
the drivers those days, so it was nice to see that everything worked out. So let me go back to
this. These are the drive statistics for the whole mission in all the different modes that we were
driving. So viewed this way, what you can see is that our top distance per sol was about 140
meters, and this chart ends maybe four months ago. We do have more data since then, but it's a similar
record. We're still at about 140 meters per sol, but I really like this view of it better, because
here you can see when we started using every behavior, and just by the slope, you can see when
we were driving a lot and when we hit a problem, here's where the wheel wear came into play,
where we saw the holes in the wheels. We took some time to drive much more slowly to figure
out what's going on, and then we eventually got a handle on it and got better. So the red is where
we drove in the directed way, without using vision. This blue is where we drove with the visual
odometry, so you can see we actually used that part of the autonomy a lot for the mission. That's
every step, every meter, stopping to take pictures and measure your progress. The purple is the
avoiding hazards mode of driving, and this green one is a mode in between these two. It's where
we used visual odometry only as we needed. So it would kick on every 20 meters or so to see if
we'd got stuck yet, just as a slip check, as a sanity check. We'd drive in the blind, stop, move a
little bit, stop and see if we were stuck. If we weren't stuck, drive another 20 meters without
looking, but along those 20 meters, it would use other criteria to think about whether it was
getting stuck. So we can check average currents in the wheels to see if that's going up too high.
If it's increasing beyond a limit, we won't just halt the drive; we'll do something else. If that happens, if our tilt is too high, if we're turning but we're not turning fast enough, or if we know we're slipping because we measured slip, what it'll do is kick on visual odometry
automatically. So the autonomy will kick on more autonomy if it discovers that there's some
reason to do that, and that mode was invented for Curiosity based on the Opportunity experience,
because Opportunity got stuck unexpectedly, and only later, looking at the data, did we realize,
well, if you had a filter on the average currents, that could have stopped us. If we had a filter on
the turn rate, that could have stopped us, so all these things are now in the flight software for the
new mission. Yes.
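A minimal sketch of that "visual odometry as needed" trigger logic might look like the following; the thresholds and names are invented placeholders, not the actual flight parameters.

def should_run_visodom(meters_since_check, avg_motor_current_a, tilt_deg,
                       commanded_turn_rate, measured_turn_rate, slip_fraction):
    # Periodic sanity check: stop and look roughly every 20 meters.
    if meters_since_check >= 20.0:
        return True
    # Wheels drawing too much current suggests we are fighting the terrain.
    if avg_motor_current_a > 2.5:
        return True
    # Tilt above a comfort limit.
    if tilt_deg > 20.0:
        return True
    # Commanded to turn but not actually turning fast enough.
    if commanded_turn_rate > 0.0 and measured_turn_rate < 0.5 * commanded_turn_rate:
        return True
    # Measured slip already too high.
    if slip_fraction > 0.4:
        return True
    return False   # otherwise, keep driving without stopping to image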
>>: The correct answer to slippage or other faults is not to stop, because stopping makes it
worse?
>> Mark Maimone: Yes, so could there be a case where stopping during slippage would make it
worse? That is possible, but unfortunately, in this architecture, we have to stop to collect the
images to determine whether we're slipping, so we don't actually have a choice in this system.
>>: If you have a fault that says, okay, apply more power here.
>> Mark Maimone: No, there's no fault that says apply more power here. What we'd have to do
is we'd have to reconfigure parameters to allow more current to be used, set the current limit
higher. And we could write a sequence of commands to do that, but we haven't done that to this
point. I'm just going to skip over the others. We have the numbers for tilt, pitch, roll and yaw
for the whole mission available for study, but I think -- I'm running out of time here. I think I
should wind it down, but just to show you some of the science results that we got, I mentioned
before that they determined that where we landed was in the outflow of a previous water
distribution network, and so they determined that by looking at, analyzing the mineral contents
and things nearby. This was something they didn't expect to find until we got all the way to the
mountains over here, but we actually found it very close to the landing site, which is why we
ended up staying near the landing site for most of the first year. We did drive all the way -- here,
you can actually see tracks. This is an orbital picture showing the rover's tracks. So I mentioned
before, we get like quarter-meter resolution, so you can see the scale of it here. They really liked
this location, because it was the intersection of these three different kinds of terrain types, and
that's where they got the evidence for the neutral pH water. And here, they show that some of
the rounded pebbles and sand indicate that the depth of the water was at least not trivial, not just
a little small covering, but at least ankle to hip deep. So not necessarily an ocean, but at least a
standing pool of water. And you can see just some of the terrain they found there, all kinds of
interesting things to try not to drive over. And I'm not going to go through all of these in any
great detail, but you can see some of the science instruments on the rover here, so we have the
color cameras on the mast, a weather station, to measure wind speed and I think barometric
pressure. There's the DAN instrument, which sends out a neutron pulse to look for subsurface
hydrogen, and they'll be running that as we're driving sometimes, or every 15 meters or so as
we're driving. The APXS sensor for the chemistry of rocks on the turret, the close-up camera
imager on the turret and the different instruments we talked about before. SAM is the chemistry
and isotope processing. That requires a sample from the arm to go into the oven inside, to be
studied. And CheMin for mineralogy. So they've also measured -- the RAD instrument
measures radiation flux, and so they measured the radiation on the way to Mars, as well as on the
surface, and that's informing mission planning for eventual human missions to Mars. They
wanted to know about the radiation environment. So I'm going to just skip through a little,
unless people have questions about a specific one. Just that's the size of the oven, inside the
rover. That's one reason it had to be a big rover, was just to hold all the science instruments
there. Yes?
>>: Wasn't there some issue with the oven commonly used [indiscernible]. I got the impression
there was some -- it had reagents or something that [indiscernible].
>> Mark Maimone: I’m sorry, you're asking about limited capacity, limited supply for which
instrument?
>>: For the oven.
>> Mark Maimone: Oh, for the oven. Yeah, there are a limited number of tubes that you can put
samples into that can go into it, so there's a lot of limited resource components of some of the
science instruments, and the drill bits, too. We have limited drill bits, but we do have spare ones,
if we ever need them. So here's a nice view of the ChemCam, the laser. This is the terrain
before and after a laser blast, and it's not the hole that matters. It's the flash of vaporized material when it's fired. They capture that light and run it through a spectrometer, but what's really cool here is this is
a drill hole, and you can see in the drill hole, they were actually able to fire the laser into the hole
they had drilled nearby.
>>: Has the rover ever used the laser for personal defense?
>> Mark Maimone: It has never used the laser for any other purpose, to my knowledge, but
maybe I'm not at liberty to say. Who knows? So the main science results have been that they
did expect to find some evidence for some ancient river deposits, because you could see from
orbit what looked like a delta, like a fan going in, but they've also found, from the mineralogy directly there, that it's not too acidic or too alkaline, and has low salinity, so just really nice water. It's the best water that we've shown used to exist on the Martian surface. And so what else drives Curiosity? Well, future discoveries await. We had driven up to here, and we're looking at different kinds of approaches -- actually, this is outdated now. Just today, there
was a press conference with a new map, planning the future drives. But basically, we're planning
to go down past all this black sand, into a region here that's going to get up the mountains, Mount
Sharp over here, and you can get a nice panoramic view like this. These are the buttes in the area
they call Murray Buttes, just the small mounds. The original plan had been to drive a little bit
past them and go up through, but today, they made an announcement that they're going to cut a
little bit short. We're actually already in a new geologic unit. We've driven nine kilometers, and
we're already into a new area that's more related to this mountain than to the area we've been in,
so that happened over a month ago, but they just made the press release today, and so based on
that, they have decided to -- and based on the local analysis of the terrain, they've decided to
change the overall strategic path a little bit, so they do expect to be doing more drilling nearby
pretty soon. But eventually, the goal is to go up the hill, with this extended mission and any
future extended mission, through the different units, and they're looking back in time at the
Martian history through all of these different units. So the farther up you go, the more
interesting it gets, with the different history that connects the dots all together. We don't have a
drill that's big enough to go meters or kilometers into the ground. Instead, we have to rely on
geology or meteors to expose the layers in the surfaces, and so that's why rovers like to climb up
into craters or up mountains, is to find that exposed geology. So this is what's changed lately.
It's no longer the expected path to go this way. They've cut it back a little shorter. I can actually
show the image from today's conference, I think. I think it was here. Oh, no, sorry. I brought up
another view. Just to show you how useful the autonomous driving is, yesterday, we drove over
a little ridge, just about 12 meters, but we couldn't see what was on the other side. Today, we see
this. This is what's on the other side, and this is a rock that's clearly at least the size of a wheel
that we couldn't see from orbit. So even though we can resolve quarter-meter pixels, this one
was not obvious from an orbital view, but it's perfectly obvious here. This is why we have the
onboard autonomy, to let us go farther safely, without plowing into something that we might not
have seen from orbit. So that concludes my talk, so thanks very much for coming, and we look
forward to many more years of exploring different parts of Mars.
>>: I'm just a little confused at some of the timing. The charts that you showed seem to all end
around 650 sols, something like that, but it's been years.
>> Mark Maimone: No, no, sorry, you're asking about the timing, why only 650 sols?
>>: Yeah.
>> Mark Maimone: Well, the Curiosity rover has only been on Mars for 746 sols. It only landed
in August of 2012. Opportunity has been on since 2004, so I was showing mostly Curiosity. I
wasn't showing Opportunity here.
>>: Okay, so it's just missing the last couple months, I guess.
>> Mark Maimone: That's right, yeah. And we have new data. I didn't get it into the slide
package today. Sorry about that.
>>: And a sol is a Mars day or an Earth day?
>> Mark Maimone: Yeah, sorry, I should have explained. What is a sol? It's a Martian day. It's
called a Martian solar day. It's about 40 minutes longer than an Earth day, which is why, as I mentioned before, when we were working Mars time, our days were a little bit longer. We measure time on Mars in terms of local
solar time, and just to have a quick abbreviation for it, we made up the word sol, short for solar
day.
>>: How do you do timekeeping on Mars, or do you do timekeeping on Mars?
>> Mark Maimone: How do we do timekeeping on Mars?
>>: Does it matter what time it is, and if so, how do you track that?
>> Mark Maimone: It does matter what time it is, because you need sunlight. We actually don't
like to drive until the motors are warm enough. So if we didn't have sunlight -- we don't drive at
night, because the motors are so cool, we'd have to spend a lot of power heating them. So we do
care about that, and we do measure time using Earth seconds, but there's more Earth seconds in a
Martian day, so we just take that into account. The flight software takes that all into account.
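As a rough illustration of that bookkeeping, a sol is about 24 hours, 39 minutes and 35 seconds of Earth time, so the conversion is simple arithmetic; this is a sketch for illustration, not mission software.

# One Martian solar day ("sol") in Earth (SI) seconds: roughly 40 minutes
# longer than an Earth day.
SECONDS_PER_SOL = 24 * 3600 + 39 * 60 + 35.244   # about 88,775 seconds

def sols_to_earth_days(sols):
    return sols * SECONDS_PER_SOL / 86400.0

# For example, 746 sols on the surface is roughly 766 Earth days.
print(round(sols_to_earth_days(746), 1))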
>>: I guess I was curious. This is maybe an esoteric question, but how do you keep -- if you just
have a watch, it'll drift, especially after years, so I'm wondering if you just have a very high
quality, like an atomic standard there, or does it get synchronized periodically with the satellites,
or how does it stay synchronized?
>> Mark Maimone: So how do we deal with clock drift? And the way we deal with it is humans
are comparing its estimate of time to time on Earth, and so I don't know all the details of what
the clock source is there, but I do know that we always return time tags and correlated
components, and so people are monitoring that on Earth, and they will deliver updates to that,
either to the spacecraft or to our own internal planning tools, to make sure we stay in sync, but it
is a problem. Actually, Opportunity is having a problem with that now. They may have to
update the flight software, because the time has drifted so much, more than they expected. They
have to do a software update to compensate for it.
>>: So now that we know what you use as a reference in the time domain, in the spatial domain,
when you're using the visual odometry, I can intuit how that would work in a relative sense, you
build a map as you move, but does the algorithm also incorporate global knowledge like from
orbital information, so that you can nail that odometry down to features you have measured
globally?
>> Mark Maimone: Yeah, that's a good question. So does visual odometry incorporate global
information or orbital information to tie down its estimate, and really the answer is no. We don't
do that onboard. If we were really going to have operation modes where we had to drive
autonomously for days or weeks, we would want to do something like that. There are
capabilities for localizing yourself with respect to huge terrain features, like the mountain or the
hill kilometers away. We don't have those onboard in the flight software, because on the scale
we're driving, we haven't needed that level of precision. The way that we maintain our
localization onboard is that we kind of reset our position every, say, few hundred meters. We
send a command that says, reset your position, increment your index, and that resets it to 000.
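In other words, the onboard position lives in a local "site" frame that gets re-indexed and zeroed every few hundred meters, and the frames are only stitched into a global map on the ground. A toy version of that bookkeeping follows; the structure and names are hypothetical, not the flight code.

class SiteFramePose:
    def __init__(self):
        self.site_index = 0
        self.x = self.y = self.z = 0.0   # meters within the current site frame

    def apply_drive_delta(self, dx, dy, dz=0.0):
        # Updated onboard from wheel/visual odometry as the rover moves.
        self.x += dx
        self.y += dy
        self.z += dz

    def declare_new_site(self):
        # Commanded from the ground every few hundred meters of driving.
        self.site_index += 1
        self.x = self.y = self.z = 0.0

pose = SiteFramePose()
pose.apply_drive_delta(35.0, -4.2)
pose.declare_new_site()    # position resets to (0, 0, 0) in site frame 1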
>>: Okay, so the rover doesn't have a global notion of spatial position.
>> Mark Maimone: Right, the rover does not have a global notion.
>>: But when you get the data back from the rover, when you're processing it here, you then
reference it to a global -- by global I mean Mars.
>> Mark Maimone: Yes, so we do reference it globally here on Earth, and so that's how we
produce maps like this.
>>: This is exactly what I was getting at, is how do you take that drifty data and bring it back --
>> Mark Maimone: So part of the operations team work is that after we drive any distance, we'll
have these images with the navigation cameras. They will use that to re-localize, so they have
the quarter-meter pixel orbital data, and then they compare that to the panorama you get at the
end of the drive, and they say, oh, I thought it was here, but it's really over here, and that
correction gets incorporated every day by the team.
>>: At a layer of the control system that includes humans on Earth, not down on the device.
>> Mark Maimone: Correct. Yes. I can imagine software onboard that would do that, but it
would require a lot of data for your map, and it would require extra time, so it just hasn't been a
priority to have that capability. Yes.
>>: What kind of significant changes for the next rover are being informed by the stuff you've
learned from this mission?
>> Mark Maimone: So what kind of changes are being considered for the next rover? Well, it's
very early in the next rover's development. We are talking already about how we're going to
build the next one. They have announced that there will be a 2020 rover, and they have
announced even what the science instruments will be on it. They've made those selections public
already. But the engineering team is just catching up now. That announcement was only made
maybe a month or so ago, so we don't have all the requirements in place yet, but basically, we're
considering what are the science goals for the next mission? How many samples do we want to
collect? How far do we need to drive to accomplish the science goals? And so that's going to
inform what choices we make on what gets improved. One thing is that we've only been able to
collect four drill samples so far. It's not that we couldn't have done more, but it's that that's what
the science team wanted. They only asked us to stop and do the drilling four times, but the
notion is, it would be better if we could collect more samples more quickly. It takes many days
sitting in one place to do the operation, because you basically -- you stop. Did you stop in a safe
place? If you use the drill, are you going to shake yourself loose and fall away and rip the bit
apart or rip the arm apart? That takes some time, and then, once you choose the target, you try to
preload on it and see if it's stable enough. We actually -- about a month ago, we tried to drill a
target. We preloaded on it. It was fine. And then when we started drilling, the ground just
cracked apart and halted the drill. And the science team -- what's that?
>>: Broke Mars.
>> Mark Maimone: We broke Mars, so the science team gave up on that. They just said, well,
it's not stable enough. We don't want to risk breaking things in order to get this one sample.
We'll take another sample somewhere else. That's a human in the loop process that drags it out,
so the more we can automate it, the better off we'll be, but it's going to take some potential
design changes to make it possible to do more automation there. Yes.
>>: It seems almost Catch 22. If the science team is driving and saying, well, this is what we
want, but they're kind of assuming what the capabilities currently are. If you could drive like
100 miles a day, they would probably ask for different things, right? So are they -- is there like a
wish list on both sides, and you're both kind of pushing on each other? Do they say, oh, wow,
give us a list of things that we could potentially have and we'll tell you what we would love? I
assume there's communication like that, but it seems really --
>> Mark Maimone: So the question is, did the scientists and engineers work together to come up
with the next set of requirements, or how does it go? So far, my experience is that unless it's a
technology demonstration mission, it's really led by the science goals, and the team just doesn't
want to take on the extra risk, the extra cost, the extra test time, the extra activities of integrating
more capabilities than are strictly needed. But I have to say that I'm sure it was helpful that prior
to the Mars Exploration Rover, Spirit and Opportunity, a lot of scientists who were on that
mission came to JPL and went to other places to see demos in the technology program of what
was possible. Because people were driving rovers then, and they were driving them even faster
than the current systems are. And just seeing how the rovers could work, just seeing that they
could drive up and what they could explore and where to put the cameras. There was a lot of
feedback in the technology development stage. But right now, in the mission proposal stage, it's
really led by science. It's motivated by the science goals, and that really determines how much
technology gets onboard. As a former robotics researcher, I would love to do more autonomous
robotics on the next mission. If they call for that, I'd love to be a part of that, but it's not a tech
demo mission, unfortunately, so we can't just push stuff into it.
>>: So when you say science goals, are all the science goals directed towards humans living
there and habitation there?
>> Mark Maimone: Sorry.
>>: So what are the other science goals?
>> Mark Maimone: So what are the science goals of the mission, and are they including leaning
toward humans visiting Mars? I don't want to speak for the 2020 mission. I'm not on that
mission full time, so I can't really address all the science requirements there, but historically,
MER and Curiosity, the goal was to show ancient environments, show where there had been the
possibility for habitable environments, where there used to be water. Look for the evidence of
past life, maybe, or at least evidence for environments that could have supported past life. So
that's been the overriding goal here. On Curiosity, as you saw with like the radiation detector,
that's something that was added to get a better handle on the potential environment for astronauts
in the future. So we are getting bits and pieces toward the eventual human exploration, but it's
not like we're laying the foundation. We're not setting the tracks for the train that the humans are
going to use. We're exploring the science and exploring the environment to understand what we
have to send next. But it has been mostly motivated by the historical science context rather than
just laying the groundwork for the human explorers.
>>: It seems like the Spirit, the main factor [indiscernible] you said that there were chances for
improving -- I believe the [indiscernible] cameras and so forth. Are there plans for that, now
with better technology?
>> Mark Maimone: So are there plans to improve the speed and improve the cameras to make it
be able to go faster?
>>: For your autonomous driving mode.
>> Mark Maimone: Right, for the autonomous mode. I like the way you think. Yeah, I would
love to go faster, and so I'm actually working with the 2020 team now. We're doing a study on
what's the best way to place the cameras for the next mission, to make it better for the
autonomous driving and the blind driving. I'm not sure we'll get the telescopic mast in there, but
we can mention it.
>>: Would it be possible to fly around?
>> Mark Maimone: To fly?
>>: Yeah, like an autonomous quad copter?
>> Mark Maimone: Would it be possible to have an autonomous quad copter. Anything's
possible. I've seen proposals for that, but there's nothing currently accepted for the mission.
>>: Is there a power limitation for that?
>> Mark Maimone: Is there a power limitation?
>>: Power limitation, because you have solar and it's dimmer there?
>> Mark Maimone: So you're right. There's less sunlight at Mars, so you get less power from
solar panels, but it is possible, at least, to imagine something being built. You could get a
lightweight quad copter that can get enough power, and it doesn't have to fly all the time. It can
fly just briefly during a day. It wouldn't need that much, so it is possible. People are looking
into proposals, but there's nothing currently on the books for the next mission.
>>: Maybe a drone [indiscernible].
>> Mark Maimone: Yeah, you're right. Having such a thing would give us a better view of the
terrain, and one of the limiting factors on this directed drive distance is how much you can see.
So if we could have something that would give us a view of the terrain ahead, that would open
up the possibility of more directed driving and faster driving, that way. Yeah.
>>: Are you limited by computing capability on the rover itself?
>> Mark Maimone: Yes, are we limited by computing capability? Yes, we are. The vision
processing that we do is detailed enough that we are CPU bound for that part of the processing.
So it does take tens of seconds to do all the visual odometry and even longer to do the hazard
avoidance, because we're taking four stereo pairs along the way. So there are other options. You
could imagine having FPGAs, or another kind of coprocessor that would help speed that up. But
again, that's more parts, more stuff on the vehicle. The mission's not going to sign up to that
unless there's a need from the science side that motivates it.
>>: Is there any chance of maybe offloading to the actual orbiter or something?
>> Mark Maimone: Any chance of offloading computation to an orbiter? Well, there's always a
chance, but at present, the orbiters are really in low orbits, so they're only really overhead for a
brief period. When we send data back through the orbiters, we only get a few minutes to send all
the data in a burst, so I don’t think that, with current orbiters, that would be a useful enough
system. If you had something that's in a stable orbit that's always there that had better
computing, I could imagine that.
>>: Probably limited by the [indiscernible] generated by computation.
>> Mark Maimone: Are we limited by heat? Yeah, we are. Part of the power modeling takes heat in, and there's thermal modeling, too, that takes heat into account. And sometimes, we have
actually had to terminate drives because we're using the cameras too much, and the heat model
predicts it will go above the guaranteed range.
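The kind of check involved can be sketched as a simple predicted-temperature test; the numbers and names here are invented for illustration and are not the actual thermal model.

def camera_use_within_limits(current_temp_c, planned_imaging_min,
                             heating_rate_c_per_min=0.8, max_allowed_c=50.0):
    # Predict how warm the component gets if imaging continues for the planned
    # time; if the prediction exceeds its qualified range, cut the drive short.
    predicted_c = current_temp_c + heating_rate_c_per_min * planned_imaging_min
    return predicted_c <= max_allowed_c

print(camera_use_within_limits(current_temp_c=30.0, planned_imaging_min=40))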
>>: To put a finer point on that, is the limit available energy or heat budget, or is the limit the
fact that you bought the CPU 15 years before you launched it?
>> Mark Maimone: So is the limit on the computing?
>>: If you could send new hardware there this month -- well, six months -- is the problem that
you had to build it with a year 2000 processor, and if you used modern hardware, would you get
a free order of magnitude, or is the modern hardware not a whole lot better off because of power
and the heat budgets?
>> Mark Maimone: So the question is, are we bottlenecked in processor performance by heat or
just by the fact that it's old by history? As far as I know, it's not the thermal consideration that's
stopping the processor from being used more quickly. I think it is just the fact that it takes a
decade to qualify, to space qualify, a processor. So we're up to 133 megahertz. The same board
allegedly could run at 200 megahertz. There have even been research studies and proposals
years ago that say, just fly a commercial CPU, a 2- or 4-gigahertz CPU, and yes, it's going to reset, because the radiation is going to hit it, but so what? You're on the surface, you're safe. You'll stop and reboot maybe even a few times a day. I would much rather have 2 gigahertz for three hours, except for the 10 minutes it's rebooting. That's the thing --
>>: And then when the 2 gigahertz one goes belly up, then you keep flying on the 133.
>> Mark Maimone: Yeah, you fly both, and so the thing is, right now, we're using the same
computer that had to be qualified for the entry, descent and landing. So the approach to Mars,
landing on Mars is super-critical. There's only one shot at it. Earth can't possibly communicate
with it during that time. You may have seen the video, seven minutes of terror. If it takes eight
minutes round trip at best, and it's all over in seven minutes, you don't have any impact on it. So
we're using computing that was designed to meet those requirements, that had to survive coming
into the atmosphere at high temperature, slowing down and mission-critical stuff. When we're
on the surface, operating in a slow drive mode, it's really not the same kind of risk to the vehicle.
So there is the potential for using other kinds of solutions there. All right, well, thanks for
coming.