>> Kat Randolph: Good afternoon. Thank you all so much for coming. My name is Kat
Randolph. I work in DX and I'm not normally the one who has the privilege to introduce the
authors, but it is a special honor for me to do so today. Welcome to the Microsoft Research Visiting Speaker Series. I have the great honor to introduce my dear friend, David
Mindell who's here today to speak to you about his new book. I've had the pleasure of knowing
David for 30 years, since we first met as undergrads, and have followed his amazing career all of
these years. The thing that's always impressed me about David is his ability to combine a deep
passion and curiosity about science with a talent for the humanities and the ability to tell
stories. When I was in business school and he was finishing his PhD at MIT, I would go visit him
and he would tell me all about the work he was doing studying ironclad warships from the Civil War, tying it to literature and history and the war itself and decisive battles, and he had me
on the edge of my seat. Today he's here to talk about his new book, and this will be his third award-winning book; he has written two award-winning books already, and I'm sure the third time is the charm. His first book was called Iron Coffin and the second Digital Apollo, and he wrote both of them while he was a professor at MIT as the Dibner Professor of History of Engineering and Manufacturing. In his career he has worked at the Woods Hole Oceanographic Institution on Cape Cod and at other places where he has had the ability to do cutting-edge research in various
areas that he will speak about today with his new book. So welcome, David. [applause].
>> David Mindell: Thank you. It's a pleasure to be here. It's a pleasure to see my old friend Kat
and to join you to talk about this topic. The book just came out last Tuesday so it's been pretty
interesting just sort of seeing the initial reactions. As of yet it's maybe not quite as
controversial as I had hoped it would be. I got one e-mail telling me how completely wrong I was,
although I responded by saying when you actually read the book you'll see why all of your
objections are not quite correct. The book is in some ways an extension not only of my prior
scholarly work, but of my last 25 or 30 years working as an engineer. As Kat mentioned, my
previous book was called Digital Apollo and was a story about the development of the Apollo
guidance computer, and how it actually played out on the six lunar landings for the Apollo
program. It was built at MIT. The programming was done at MIT. And it is relevant for this story; as a matter of fact, I talk about it in the new book. When the Apollo computer contract was
first issued, the engineers who were working on it said this is a computer. It's going to fly to the
moon. All it needs is two buttons: one labeled "go to moon" and one labeled "take me home." And by the time the thing actually flew seven years later, you have this
scene which is actually the cover of the book, but you can't see the whole thing on the book,
which is Neil Armstrong reaching up to turn off the automatic targeting system and fly it in. The legend has it that he flew the LEM in manually, but actually it wasn't manual at all. It was still a fairly semi-automated fly-by-wire mode, with attitude hold and an automatic rate of descent. And what I learned from this book was surprising to me, even as someone who has worked and is still working in the robotics world: in this case it was a radical, forward-thinking decision to put a digital
computer in the Lunar Lander. It was cutting-edge technology both on the hardware and the
software of the day. But ironically, that high technology and great advance was not used to
make the trip fully automatic and very highly automated, but actually, to build this system
where the pilots could actually have very finely tuned and really in some ways finely engineered
levels of control of the system, so they could intervene when they needed to take over control, or hand over control in other cases. At the time, the Soviet spacecraft had less advanced
computers. They were all analog computers, analog control loops, but were more automated.
That really kind of led me to the idea the new book is really about, which is that the highest levels of technology are not necessarily full autonomy or full automation.
Autonomy is the buzzword today, of course, and in many cases what you see in actual practice
is that as robots and autonomous systems find their way into the field from the laboratory, they
actually have human interventions added at critical moments. The book is really lessons from
40 years, 50 years of experience in the field with people operating robots in extreme
environments: the deep ocean, which as Kat mentioned I come out of; aviation, which I spend a lot of time in; spaceflight; and warfare. And the argument is those fields have been
forced to adopt robotics before automobiles and other aspects of daily life and they've learned
a lot of lessons about how robotics and autonomy ought to work and those are valuable lessons
to think about as we ponder the coming robotic era. The book sort of formulates a new idea of
situated autonomy. The highest level of technology is an autonomy that is situated within a
human environment and responds well, and in some ways perfectly, though of course that's unattainable, to human needs and desires and directions. If you think about the levels of
autonomy as something that scholars like to talk about, level one being fully manual, basically
the way you drive your car today minus cruise control. Level ten being fully automatic,
driverless, the sort of Google car impossible dream. There's a kind of sense just by thinking
about levels in that way that we are going from level one to level ten and somehow ten is the
ultimate goal of the technology. I argue in the book that what we really want to look at is the
perfect five, the kind of perfect balance of human and machine collaboration. That doesn't
mean that you always have an even balance of human and machine. You may move yourself up or down the scale at any given point, but the human, à la Neil Armstrong in this view, is in control of what the levels are, and the system is responding to what the human wants in any given moment.
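Just to make that concrete, here is a minimal sketch in Python, entirely my own illustration rather than anything from the book: the autonomy level as a setting the human moves up and down in operation, Armstrong-style, rather than a fixed point a designer picks once.

```python
# A minimal sketch (illustration only, not from the book): the autonomy
# level is a human-adjustable setting, not a fixed design point.

FULLY_MANUAL, FULLY_AUTONOMOUS = 1, 10

class AdjustableAutonomy:
    def __init__(self, level: int = 5):
        self.level = level  # start at a balanced "perfect five"

    def set_level(self, requested: int) -> None:
        """The human, not the machine, chooses the operating level."""
        if not FULLY_MANUAL <= requested <= FULLY_AUTONOMOUS:
            raise ValueError(f"level must be {FULLY_MANUAL}-{FULLY_AUTONOMOUS}")
        self.level = requested

    def machine_share(self) -> float:
        """Fraction of the task currently delegated to the automation."""
        return (self.level - FULLY_MANUAL) / (FULLY_AUTONOMOUS - FULLY_MANUAL)

# e.g. a pilot dials the automation down for the final approach:
ctrl = AdjustableAutonomy()
ctrl.set_level(3)
```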
Let me talk a little bit about the myths of autonomy, because that's always the first question I
get. What are the myths of autonomy? I'll read a little bit from the book on that. First, there's
the myth of linear progress, the idea that technology evolves from direct human involvement to
remote presence and then to fully autonomous robots. Peter Singer, a prominent public
advocate for autonomous systems, in his book Wired for War kind of captures this mythology when he writes, quote, "The concept of keeping the human in the loop is already being eroded by both policymakers and by the technology itself, which are both rapidly moving towards pushing humans out of the loop," close quote. I argue there's no evidence to suggest
that this is actually a natural evolution and that the technology itself as Singer puts it does any
such thing. In fact, there is good evidence presented in the book that people are moving
towards deeper intimacy with their machinery. Second is the myth of replacement, the idea
that machines take over human jobs one-for-one. But researchers have found that automation rarely simply mechanizes a human task; it tends to make the task more complex, often increases the workload, and certainly shifts it around in space and time. Finally, we have the
myth of full autonomy, the utopian idea that robots today or in the future can operate entirely
on their own. Yes, automation can certainly take on parts of tasks previously accomplished by
humans. Machines do act on their own in response to their environments for certain periods of time, but the machine that operates entirely independently of human direction is a useless
machine. I used to say only a rock is fully autonomous, but then my geologist friends reminded
me that even rocks are formed and placed by their environments. Automation changes the
type of human involvement required and transforms it, but it does not eliminate it. For any
apparently autonomous system we can always find a wrapper of human control that makes it
useful and returns meaningful data. The questions that interest me then are not manned
versus unmanned or human controlled versus autonomous, but the ones at the heart of the
book really are where are the people? Which people are they? What are they doing? And
when are they doing it? Those are really the questions that interest me. And what you find is
that from the New Horizons mission to Pluto that was in the news this summer, to the Mars Exploration Rovers, which are detailed at great length in the book, to undersea exploration,
commercial airliners, remotely controlled warfare, there's always still human involvement in
these tasks, but the human involvement is often in different places and at different times. And
those differences matter. They're not trivial by any means. They often change the task. They
have social and cultural implications as well. But if you ask those questions you'll always find
where the people are. The book opens with a little bit of history of remote presence in the
deep ocean, particularly focusing on my mentor in the deep ocean, Robert Ballard,
who you may know as the discoverer of the Titanic. This is an image that Ballard published in
National Geographic in 1981, capturing what became known as the Argo/Jason system with an oceanographic ship. At the time the most popular way to visit the seafloor was a three-man submersible called Alvin, which still operates today, operated by Woods Hole. But what Ballard was beginning to develop was a scheme of tele-robotics where you send a remotely towed sled that scans the seafloor, kind of digitizes it, passes it up what at the time was a very novel technology of a fiber-optic cable, to a kind of immersive experience on the ship, and then increasingly small mobile robots that come off of Argo and do closer-in inspection. This system
sort of took shape over the course of the 1980s. Actually, the version of it that was just Argo, before the Jason robot came in, was the system that discovered the Titanic by remote video. Often it's confused with having been discovered by Alvin; I'll talk about that in a moment. I came into this evolution as it was kind of wrapping up, becoming operational and being refined in the late '80s. And what we found was that it didn't just make deep ocean exploration cheaper and safer; it fundamentally changed the nature of the work. What
you were doing when Alvin would dive is a three-man sphere: three people would go down, two scientists and one pilot. They would experience the seafloor. They would come back up on the ship at the end of the day, have a meeting like this, and explain to everyone what they saw.
With Jason, you barely see it here, but Ballard's original vision is one person in a kind of
immersive virtual reality type environment. It ended up being like 20 people crammed into a
shipping container full of monitors. We'll talk about that in a minute. And that was a very
different experience and much more like a real-time seminar on the seafloor. These two modes
were combined in 1986 with the return to the Titanic, when Alvin dived with a little robot, Jason Junior, hanging off of it. Jason Junior descended down the grand staircase of the Titanic. That
scene, of course, was immortalized in the opening scene of the second most popular movie
ever made, Jim Cameron's Titanic. In that movie it was kind of the window into the
history. And what's interesting about these two magazine covers from those years is that the
National Geographic article, which Ballard had control over, only publishes an image of the remote vehicle poking in the windows of the Titanic. Time magazine, which was the more public venue, only has Alvin and no robot, and I go into this a little bit in the book: the tensions, literal tension pulling on the cable and, in fact, tensions between human presence in the ocean and remote presence in the ocean, that kind of played out here. This view that I really
internalized about how the robotics was going to evolve during the 1990s is reflected in this
family tree from Woods Hole. You have Alvin, the kind of older manned submersible, which has been diving since the early sixties; moving up the tree, the remote cabled submersibles; and then you have this whole evolution of autonomous vehicles. This is sort of the myth of full
autonomy, that we are moving from direct human presence to remote presence to
autonomous presence. Instead, what you see actually is a kind of convergence of them all.
These are the newer vehicles up here. Nereus actually has the record for the deepest dive ever by any human system. It is a hybrid remote/autonomous vehicle. It can actually switch modes; that's important. The people are involved in different places, and this is the kind of myth of linear progress of autonomy that I'm trying to counter. What you find, and this is a more current view, is
here's an autonomous vehicle working in the ocean on its own. But it's still communicating via
acoustic signals, sort of basically like 1990s acoustic modems on your telephone lines, going through the water. You still always want to be in touch as much as you can. We are actually moving more towards optical modems, where the ship can lay down a kind of streetlight and the vehicle can communicate at sort of tens of megabits through the water. And that suggests a kind of system where the vehicle is only partially autonomous: it may go out and swim through the darkness, collect some data, do some tasks, and then come back around under the streetlight, upload its data, maybe even operate tele-robotically for a while. So you see you are kind of always moving in and out of this autonomous mode.
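As a rough sketch of that pattern, and again this is my illustration with made-up names rather than any actual Woods Hole control code, the mission loop alternates between autonomous legs in the dark and high-bandwidth contact under the streetlight:

```python
# Sketch only (hypothetical names, not a real vehicle's software): a
# mission cycles between autonomous survey legs and the optical link.

import enum
import random

class Mode(enum.Enum):
    SURVEY = "autonomous leg in the dark, acoustic link only"
    UPLOAD = "under the optical streetlight, tens of megabits"
    TELEOP = "tele-operated for a while over the optical link"

def survey_leg(leg: int) -> list:
    """Stand-in for an autonomous survey leg gathering sensor data."""
    return [random.random() for _ in range(8)]

def run_mission(num_legs: int, teleop_requested: bool = True) -> None:
    for leg in range(num_legs):
        data = survey_leg(leg)  # swim off through the darkness, collect data
        print(f"leg {leg}: {Mode.SURVEY.name}, {len(data)} samples collected")
        print(f"leg {leg}: {Mode.UPLOAD.name}, data uplinked to the ship")
        if teleop_requested:    # the humans can always step back in
            print(f"leg {leg}: {Mode.TELEOP.name}")

run_mission(3)
```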
Of course, something occurred to me about halfway through, which most of my colleagues had not appreciated: we always thought about the vehicle as autonomous. You send it off the ship. It goes in and does its mission and comes back. But, of course, every one of those missions is a collaboration between a manned vehicle and an unmanned vehicle, the manned vehicle being the ship, which is such an inherent part of oceanographic research that people tend to forget it's a manned vehicle. It's the oldest kind of manned vehicle, arguably, and all of these autonomous missions are kind of multiple collaborations between manned and unmanned, and that's arguably a deeper, more holistic, systems way to view the modes of autonomy.
The book goes through a number of these different examples. I mentioned there's a chapter on
space which talks about the Hubble repair. This is Jeff Hoffman and Story Musgrave doing the
early Hubble repairs. The later Hubble repairs are very interesting human robotic dances as
well. The Mars Exploration Rovers, where you have a team of folks at JPL operating at a 20-minute time delay over many millions of miles, still managing to feel present on a remote planet without the aid of any of the great new Microsoft technology that's coming out, just paper charts and printed landscapes; they remarkably feel present in those landscapes. I mentioned the
undersea vehicles. There's a story in the book about this vehicle, the REMUS 6000, that found
the wreckage from the Air France 447 crash. And that story kind of links to automation aboard airliners and the way that pilots interact with highly automated systems, very often in ways that enhance safety and sometimes in ways that cause accidents and crash perfectly good airliners. There's a story in the book about head-up displays and new
ways that pilots are learning to interact with the autonomy in ways that keep them more
intimately in the loop for safety purposes. And then there's a chapter on the Predator drone
that was, and still is, operated very frequently in Iraq and Afghanistan.
A vehicle that, like the Apollo computer, was originally and intentionally designed as a fully autonomous surveillance or intelligence-gathering vehicle. The original designers of
the Predator decided that they didn't need any interface at all because why would you need an
interface in an unmanned vehicle. And instead, what you end up with is these are only three of
the more than 150 people that it takes to operate the vehicle. And you can see that there are
one, two, three, four, five, six, seven, eight, nine, 10, 11, 12, 13, 14 different LCD and CRT screens, six keyboards, four trackballs, telephone lines, 12 chat rooms, most of them added by the users on top of the vendor's engineering. And it's a wonderful case study of all the
reasons that full autonomy kind of gets filtered out of the story. I can read you a little bit from
the Predator chapter. It's based on a dissertation that was written under my supervision at MIT
by an Air Force colonel doing his PhD. Ironically, despite its high-technology aura, Predator is a human factors nightmare. It embodies old tensions about the identity of the vehicle itself and of the people who operate it. Two people fly it from a shipping container or a small building. Their control stations look less like the latest military hardware than a set of equipment racks cobbled together by undergraduate engineers the night before their term project is due. How
would I know about that? I don't know. To fly, the two main Predator operators have to
monitor 16 displays, interact with four touch screens, type on four separate keyboards. The
main control stick and throttle are perched high on the console making them fatiguing to
operate for long periods. Manned aircraft, by contrast, actually become simpler and more
spare in their cockpits over the years, while the Predator control station has acquired screen upon screen upon screen. It's a 1990s-era confusion of windows, tabs and drop-down menus.
When Predator pilots issue a command they experience nearly a two second time delay before
seeing it executed thousands of miles away on a vehicle in a war zone. The crew stations are not designed for comfort, making them fatigue-inducing for long missions. One 2011 study by the Air Force even concluded that the poor interfaces of the Predator contributed more to crew burnout than combat stress does. It's easy to dismiss the Predator cockpit as a product of
poor engineering, neglected ergonomics and inadequate government contractors, but it
actually represents the fruits of a remarkable integrated process, where users and operators
took a vehicle originally designed for a completely different task and transformed it into a
global system for conducting remote warfare. Again, many similarities to both what happens in
NASA and what my experience was undersea. And it's a very complicated story because they
are not physically present over the battlefield, but because of the nature of the cameras and
the voyeuristic nature of what they're doing, they become incredibly present and deeply
immersed in the situations they are watching, through the social relationships they have either
with their own troops on the ground or with the enemy there. They experience PTSD at roughly comparable rates to pilots who are actually in the combat zone. And yet the
Air Force still can't get its head around are these real warriors? Are they not? Do they deserve
medals? Do they not? Are they entitled to combat pay? Are they not? It really is, I think, a
pointer into the future about the confusion about professional roles and traditional tasks that
happens when you get not full autonomy, but this kind of remote presence. Who are the
people? Where are they? What are they doing? When are they doing it? The answers to
those things matter. It's actually not so different than the book I wrote about the Civil War
where American sailors fighting from within ironclad warships wondered whether it was heroic to go into battle that way. I think the Department of Defense is a little bit ahead of a lot of
the industrial world on this. I quote this piece from a DOD report from 2012: "All autonomous systems are joint human-machine cognitive systems. There are no fully autonomous systems, just as there are no fully autonomous soldiers, sailors, airmen or Marines." If you extend that soldier, sailor, airman or Marine out to factory workers, surgeons, computer programmers, almost any human profession, you realize that human workers are always embedded in different kinds of networks. This is a statement by the DOD, which has burned up a lot of our
taxpayer money and gotten burned by systems that seem to be fully autonomous, but then
when they actually got them out into the field doing a job that either had to protect human life
or potentially take human life, the human response was, what's it doing now? At the company that we're just founding, Humatics, which I'll say a little bit about at the end, we are taking this as one of our symbols: you never want people to ask that question about your system. Full
autonomy can be very scary for people. They don't like machines when they are not operating
the way that they want. There are interesting videos that came out just in the last week of people operating the new autopilot features on the Tesla car and doing some surprising things at 80 miles
an hour. It's a very frightening experience. Part of the book is a critical view of how people
have engineered autonomy, but then the engineer in me says there must be a better way to do
it. About four years ago I started working with a partner at Aurora Flight Sciences on an Office
of Naval Research funded program called AACUS, which is a full-sized autonomous helicopter
designed to deliver supplies into remote locations. That was the assignment and we put a
bunch of lidars and a lot of computers on a helicopter. But we needed to have the system work with people, because, again, if you're bringing supplies you are, by definition, going someplace where people are, where people want water or food or whatever it is that's being delivered. So we talked to the
folks who would be receiving those supplies. In this case there is a specialty in the Marine Corps called
Landing Support Specialists. These are guys who kind of vector in all of these helicopters. Their
response was: a full-sized autonomous helicopter bearing down on me? No thank you. Very
scary. They have all been to Iraq and Afghanistan and they said you have no idea how
unnerving it is to look up and see unmanned aircraft flying around when you don't know what
their intentions are and who is operating them and what they are up to. We absolutely want to
be able to control some aspects of it, especially if it's coming right at me, because that's what
it's doing is delivering things to me. We ended up designing a system. This is a complicated picture, but it shows you a little bit about how the vehicle comes in. It's actually laser-scanning the full landscape with the lidar, a fairly powerful custom lidar, and it's actually capable of identifying landing zones. And there is a brief conversation with the operator on the ground,
who has a kind of iPad mini type interface and the operator on the ground says I want you to
land here and bring me my, I think the Army phrase is bullets, butter and something. And the
lidar may say that's not okay. I can't land there. It's not a big enough spot. There are trees in
the way and offers a few other potential landing zones for the thing to come in, and then the
human has a very, very simple setup of states. Come in, change the landing zone, go around
and let me think about it. Go home and abort or go off and hold until further notice. Those
states are extremely well simulated and presented to the user during training and then the
whole system is engineered around those states. This is just a little bit of a workflow model.
The details aren't important, but that basic set of states for the autonomy is modeled in this case in Matlab Stateflow and is autocoded both into the interface and into the mission manager.
So each one of those states may contain all kinds of interesting path-planning algorithms and mapping algorithms, all sorts of stuff, but the overall macro states are very transparent and clear and simple for the user to operate.
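The real thing was a Stateflow model, but as a hedged sketch of its shape, in Python and with hypothetical names of my own invention, the macro-state machine is small enough to hold in your head, which is exactly the point:

```python
# Illustrative only (not the actual AACUS software): a small, transparent
# macro-state machine; all the clever autonomy lives *inside* the states.

import enum

class MacroState(enum.Enum):
    COME_IN = "proceed to the agreed landing zone"
    CHANGE_LZ = "land at a different operator-chosen zone"
    GO_AROUND = "circle while the operator thinks about it"
    HOLD = "wait off-site until further notice"
    GO_HOME = "abort and return to base"

# Which operator commands are legal from each state.
ALLOWED = {
    MacroState.COME_IN:   {MacroState.CHANGE_LZ, MacroState.GO_AROUND,
                           MacroState.HOLD, MacroState.GO_HOME},
    MacroState.CHANGE_LZ: {MacroState.COME_IN, MacroState.GO_HOME},
    MacroState.GO_AROUND: {MacroState.COME_IN, MacroState.CHANGE_LZ,
                           MacroState.HOLD, MacroState.GO_HOME},
    MacroState.HOLD:      {MacroState.COME_IN, MacroState.GO_HOME},
    MacroState.GO_HOME:   set(),  # an abort is final
}

def command(current: MacroState, requested: MacroState) -> MacroState:
    """Apply an operator command, refusing anything not in the model."""
    if requested not in ALLOWED[current]:
        raise ValueError(f"{requested.name} not allowed from {current.name}")
    return requested

state = command(MacroState.COME_IN, MacroState.GO_AROUND)
```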
And when we flew this system off, we flew against one of the big defense contractors and we beat the pants off them, and we won the second phase of this contract; it is now being put into larger-scale pilot testing for production. That's one example.
And yet, this project was written up in the New York Times, sorry, the Wall Street Journal, and this is the title: Navy Drones with a Mind of Their Own. Everything that we engineered out of that system, the press, the kind of public perception of robots, kind of reintroduced. And there's a lot of work to be done there about bringing this kind of perception of autonomy into, in some ways, a more mundane kind of engineering mindset, but also into the world of human control. A sort of
of parallel somewhat follow-on to this project is now a DARPA project that we are working on
to actually put a robot kind of in the copilot seat to take over essentially only half of the pilot's
jobs in any number of different kinds of aircraft. Here you have to put a huge emphasis on
collaboration, human robotic teaming and, again, having a very transparent and simple set of
states for the autonomy to go through that the human is well able to comprehend. That
system was written up by John Markoff of the New York Times over the course of the summer.
You may have seen it. It's actually based on an optionally piloted aircraft that is made by
Aurora. And this is a great kind of new twist on this idea. It's not an unmanned aircraft. It's
optionally piloted. You can put a pilot in the front seat who can fly it just like a regular aircraft.
It can be flown from a remote ground station like an unmanned aircraft. My favorite mode is it
can actually be flown from the back seat through the remote ground station just by a pilot or an
operator, really, sitting in the backseat. These are all different examples of ways that the autonomy of these systems is growing, I think, into a richer, more complicated, but more useful and, I think, safer conversation with the human operators, and away from this notion of full automation, which I argue in the book is a twentieth-century idea; the real twenty-first-century idea is this perfect-five balance. To sort of wrap up, I'll give you one
example back into the undersea realm, which I come back to at the end. James Kinsey, a young
engineer and scientist at the Deep Submergence Lab, came to his job with great plans for the autonomy he hoped to instill in his vehicles. He began to build up probabilistic models of how hydrothermal vent plumes propagate through the ocean, and to try to instruct the vehicles to follow minute detections from their sensors back down to the vents. Over time, however, Kinsey realized that trying to put that much autonomy in the vehicle was likely to be a problem. Because of the nature of exploration, the tasks are poorly defined and the environment is changing. Anything programmed into the vehicles ahead of time constituted assumptions, models about how the world might work that might not be valid in a new context.
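To see why, here is a hypothetical toy, entirely my sketch and nothing of Kinsey's actual code: a plume follower that greedily climbs the sensed tracer concentration. The monotonic-gradient assumption is baked in, so when a cross-current bends the plume, the vehicle confidently wanders off, and an observer cannot tell a decision from a failure:

```python
# Toy illustration (not real vehicle code): the follower embeds a model
# of the world -- tracer concentration rises monotonically toward the
# vent. If the ocean violates that, the vehicle still acts, just wrongly.

from typing import Callable, Tuple

Point = Tuple[float, float]

def follow_plume(sense: Callable[[Point], float],
                 start: Point, steps: int = 50, h: float = 1.0) -> Point:
    """Greedy climb on sensed concentration toward the presumed vent."""
    x, y = start
    for _ in range(steps):
        # probe four neighbors and move toward the strongest reading
        neighbors = [(x + h, y), (x - h, y), (x, y + h), (x, y - h)]
        x, y = max(neighbors, key=sense)
    return (x, y)

# toy field: a vent at (0, 0), concentration falling off with distance
vent_field = lambda p: -(p[0] ** 2 + p[1] ** 2)
print(follow_plume(vent_field, start=(30.0, -40.0)))
```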
"I think I focused on the wrong aspects of autonomy," Kinsey said. "You are requiring the vehicle to understand a lot of context that may not be available to us." "One of the problems with a vehicle that makes its own decisions," Kinsey continues, "is that there is a certain amount of opaqueness to what it's doing. Even if you're monitoring it, you say, it suddenly just wandered off to the southwest. Is that a problem or is that part of its decision-making tree? You can never know." And in his observation, people like to know where their assets are, especially
when they pay a lot of money for them or if they can threaten human life. Overall, in the ocean
the lines between human remote and autonomous are blurring. Engineers now envision an
ocean with many vehicles working in concert. Some of them will contain people. Some of
them will be remote or autonomous and all are actually capable of shifting modes at different
times. I'll conclude with a little section, and then open it up for questions, about this newer way to think about autonomy: situated autonomy. The fully autonomous robot
making its way through the landscape under computer control remains an attractive idea for
many engineers including many of my colleagues and friends at MIT. Perceiving the
environment, classifying it, matching it up to models and prior experience and making plans to
move forward all resemble our daily acts of living. Uncertainties in the world and within the machines, the unexpected that will always foil prior assumptions, make the problem not only harder but more interesting. Thinking these problems through, aided by the medium of
technology is a noble effort, engineering at its philosophical best. How do we observe, decide
and act in the world? How do we live with uncertainty? But we should not confuse technical
thought experiments with what's useful in a human context. When lives and resources are at
stake, time and time again, for decades, from the deep ocean to outer space, we have reined in the autonomy. It's not a story about progress, that someday we'll get it right, but it's a story
about the move from laboratory to field. The transition tempers the autonomy whether the
task is to respond to instructions and return scientific data, or to protect and defend human
life. In retrospect, Neil Armstrong's last-minute intervention, turning off the automation of his
moon landing or turning it down, signaled the limits of the twentieth century vision of full
autonomy and foretold the slow advent of potent collaboration with humans and human
presence. The lone autonomous drone is as much an anachronism as is the lone unconnected
computer. The challenges of robotics in the twenty-first century are those of situating
machines within human and social systems. They're challenges of relationships. And I'll just close, then, by talking a little bit about the startup that I've been working on with my cofounder here, Gary Cohen, which is an attempt to rethink how autonomy works in order to make it safer and more predictable and more acceptable within human environments, making it transparent and
trustworthy, because, again, as we think about what it's going to take to make robotics useful and economically productive in the world, it almost by definition means proximity to human environments. Economic environments are human environments, and we are putting together
kind of a newly composed product team with a whole set of traditional robotics professionals, but quite a number of other forms of expertise as well, based on the simple idea that humans
will remain essential to valuable technological systems. And so I will leave it at that and open it
up to questions. Thank you.
>>: First I want to say that it makes sense to me, this collaboration between the machine and the human. Similar to the compilers and the assemblers, and that is very [indiscernible]. I want to say [indiscernible]. My question is, as far as I read about the Predator drones, they actually have two different [indiscernible] with them, one for the Air Force and the other for the Army. The Air Force wanted to control it, so actually control the pilot. They wanted to control the [indiscernible]. And the Army proposed operators, so they give high-level rules to them.
>> David Mindell: That's a great question. There are more than two. The question or comment
was the Army and the Air Force operate Predator drones differently. There are actually two
different kinds. They are the same airframe but the controls are different. The Army situation is a very good comparison because there the autopilot basically does more of the work, because the Army doesn't have the same kind of cultural baggage around the role of the pilot. In the book I talk a lot about the Air Force version, which is an ironic situation because the pilot still sort of flies it, though most of the time they don't have their hand on the stick. And the pilot
is an officer. And then there's a person sitting next to them called a sensor operator who may
have just graduated high school six months before who actually is the person who's doing a lot
more of the interesting difficult cognitive work adjusting the sensor and following the camera
and whatnot. Whereas in the Army, both of those operators, I believe, are enlisted, and they don't have this kind of officer-enlisted relationship. That just highlights the fact that, again,
autonomy needs to be situated. How you implement it depends on the particular
organizational and cultural context it goes into. In the Air Force they actually say we have an
officer who controls the weapons delivery because these are big and powerful weapons and in
the Air Force the officers are the ones who make the decisions about human life. The Marine
Corps, one of their mottoes is every Marine is a rifleman. In the Marine Corps every single
person right down to the lowliest private is tasked with fighting and killing, basically. It has a very different kind of way that it breaks down. If you are designing the drone, well, in that case neither one of them was really designed this way. The Army one was a little more designed than the Air Force one because it came later, but it needs to be situated within the
professional and cultural service culture there. And there are other kinds of unmanned aircraft
in both services that fly fully preprogrammed routes where they don't have the ability to take
over control and it can be a very complicated story. It's a great point.
>>: The question of autonomous versus manned applies to planetary spaceflight. Do you have any
thoughts as to how this applies to that question?
>> David Mindell: Sure. To begin with, there is a whole story about the Mars Exploration Rovers, where even the people who operate them describe them as autonomous only in a very limited way. The engineer on the ground who gives the commands can give them autonomous commands to do some path planning around some immediate obstacles. And they tend not to do that, because it actually takes much longer than just looking at the picture and drawing the path for it. Even so, NASA and NASA PR at JPL routinely describe them as robot
geologists, but they don't do any geology, of course. They collect data given instructions by the
humans and that data comes through the telemetry link and, again, there are great stories
about how rich an experience it is for the geologists at JPL or many other places remotely
connected as well, and how much they feel present in the Martian landscape. There are interesting debates there: you know, Steve Squyres, who headed the program, is famous for saying what they did was great but it was excruciatingly slow, and if people were up there they could go out and collect those rocks in a matter of weeks. But of course, when you're doing
geology, speed of interaction is not exactly your highest priority. It's in the book. I spent some
time talking to field geologists who really value the experience of going out and hammering
rocks and interacting with their environment. What exactly is it about an environment that
hasn't changed for hundreds of millions of years that requires that kind of real-time
interaction? Again, what the slowness does with the Mars rovers is spread out the kind of collective cognition in time, which for science should be a good thing. There's a lot more time to be deliberative, more time, as we experienced with Jason, for groups of scientists to communicate, rather than the kind of traditional opportunistic sort of field-science route. I think this is even more true with the New Horizons mission, which was in
the news so beautifully this summer. That spaceship is out at Pluto. It spent the better part of
a decade getting there. Certainly it reacts to its environment in certain ways and actually in
some scary ways for the crew the week or two before the flyby. The flyby was preprogrammed.
It did its thing really well without too much connection to humans in real time. Four-hour time
delay, but still very much a human experience of that flyby. Nobody would say that the mission
is doing exploration or doing anything but going out and gathering data and bringing that data
back home. It's not something that's in the book, and I haven't studied that particular mission closely, although I know people who worked on it, but it's a matter of human effort displaced in time and, the beauty of studying things in space, displaced in space in the furthest possible imaginable way. And so there is autonomy aboard the spacecraft, but the autonomy is limited in space and time, and mostly it's used to gather the data, digitize the world, send the data
back. We'll be seeing images from Pluto regularly now as it takes a year to a year and a half to
download the data from the flyby. And then the scientists on the ground are the ones who
have the excitement of exploring in the data. They are in a different place. They are not out on
Pluto. There is no Neil Armstrong for Pluto at the moment. That does matter, right? It is a
cultural change from walking around on the moon, but it's still a kind of exploration and a kind
of remote presence. So if there's human-situated autonomy on Pluto, you know, I would think
there would be on the highways around Seattle or Mountain View. There's a question from
online back there.
>>: How would you characterize the momentum and interest in studies around systems of collaborating robot swarms, particularly where autonomy is concerned?
>> David Mindell: Again, I think one of the challenging things with swarms is keeping them
under human control. It's a perfectly good idea to design rules for larger numbers of craft to collaborate with each other, but how do you enable those missions to maintain some sense of coherence and actually be effective without completely overwhelming whoever the operators are, or the operator is? And it's still a relatively big challenge
to operate one remote semi autonomous vehicle much less many of them. What happens
when certain of them start failing and whatnot? There's no question that small sets of
autonomous rules are valuable in making that stuff work, but the whole thing still has to
someday get its instructions from somewhere and bring its results back. There was a question
here.
>>: Yeah, there are certain application areas in robotics that get a lot more press than others, like military, self-driving cars, delivery drones and that sort of thing. Is there a particular area
that you think most common people don't know much about that you think will have a
profound impact?
>> David Mindell: That's a good question. I mean I guess if you follow the robotics world
there's a lot of coverage in different places. Which ones aren't making their way into the New
York Times? I'm not so sure. I do think that John Markoff has a very good book out in the last couple of months about the history of AI. I think if you follow
his reporting over the last two or three years, even just on the Google car, it's gone from a kind
of great enthusiasm and belief in total autonomy to a more skeptical read of what it's going to take to make this system responsive to human needs and human direction.
Actually, the interesting thing about the book coming out now is I think the public conversation
is gradually shifting towards a more kind of situated view of robotics. Particularly, especially in the DOD, if you follow the DARPA Robotics Challenge from June, which was these
kind of humanoid robots doing disaster response work, the video that DARPA put out after that
was mostly of the robots falling down and it was kind of a semi-comical view of these systems
kind of not quite accomplishing their goals with the message that autonomy is still really hard.
And we are still really a long way from autonomous robots. I think that was very deliberate messaging on the part of DARPA to kind of tone down the public fear of killer robots coming to
get you. I'm much more afraid of a badly designed robot killing you than of an evil-intentioned robot killing you. I think that's a much more likely effect that is probably going to happen pretty soon. And that's much more of a concern. It's a much more mundane concern. It's not quite as existential. I doubt Stephen Hawking will chime in on it, but I think it's a much
more realistic concern.
>> Kat Randolph: Time for one more question. David, thank you so much. [applause].