[applause]
>> Robert Hess: Microsoft is well known for Windows, Office, .NET, Xbox, Zune and a
long list of other products and technologies.
Less discussed, however, is a group at Microsoft that isn't necessarily focused on ship
dates, packaging, or competing products. Instead they think about how computers and
technology can make life easier.
The name of this group is Microsoft Research. It was in 1991 when Microsoft became
one of the first software companies to create its own computer science research
organization. Today's guest joined Microsoft Research with two colleagues in 1993 to
form the decision theory and adaptive systems group. Since then, he has been at the
center of a variety of projects focused on machine intelligence and adaptation and the
related task of information discovery, collection and delivery.
Hello. I'm Robert Hess, and I'll be your host today as we talk with Eric Horvitz, research
area manager. I hope you enjoy this chance to look at the technology and the person
behind the code.
Born in 1958, Eric didn't initially jump into computer science. In fact, his interests first
led him to look into biophysics and neurobiology at Stanford before shifting focus to
computer science, which eventually led him to Microsoft. But technology did
fascinate him even at an early age.
In kindergarten, he dissected a flashlight in order to reverse engineer it. This sense of
accomplishment led him to try a second project, building a robot, efforts which
unfortunately ended in failure. Today Eric continues to seek new insights in machine
learning and reasoning and interpreting patterns of information.
Join me now as I welcome today's guest, Eric Horvitz.
[applause].
>> Robert Hess: Eric. So, a robot, huh? How old were you then?
>> Eric Horvitz: Well, this was first grade. And I became known by friends and family
as some sort of technical guru when I ran around with a battery and a wire and a light
bulb showing them how to do this as a kindergartner. And I said, okay, well, next project
I want to do a robot. And I had a toy robot that I had been given by my grandmother as
a three year old, and I was always rather concerned that this robot had rather
limited cognitive powers, didn't answer questions, didn't understand what was going on.
It was just a sort of a robot that could walk around --
>> Robert Hess: Couldn't do your homework for you or anything like that?
>> Eric Horvitz: Homework wasn't that big in those days. But in general, I pulled
together a whole bunch of parts. I remember spending a few afternoons with the housing of
a peanut can, wires, springs. I had some motors from toys. And I said I can crack this
problem, I can -- I just got to think this through clearly to figure out how to make a robot
that can actually do some thinking.
And here I am quite a few years later, and I think we're still pushing on that question.
>> Robert Hess: So I guess it's probably a good thing you didn't actually complete that
project because you would be out of a job right now, right?
>> Eric Horvitz: Yeah. There was no chance of that, of [inaudible].
>> Robert Hess: So I mean, early on, having that level of fascination over, you know,
robots and science, stuff like that, I mean where did that come from?
>> Eric Horvitz: I'm not exactly sure. I've always been rather curious about things, and
I remember going through a phase where I was very curious about thinking. It's
probably in -- I don't know if it's healthy or not for a three or four year old to start
worrying about these things, but thoughts intrigued me, and thoughts of self and other,
you know, thoughts of people in my family for example. And I just was very optimistic
that this could be explained by the same kind of mechanism that would explain that
flashlight. You know, putting things together, flows, relays, parts.
And I guess I had some sort of an inborn optimism that this could be solved, that we could
gather some insights by producing something bigger out of parts.
>> Robert Hess: So I mean, that is computer science to a certain extent, and I
understand that's where pieces work together, yet you didn't immediately get into
computer science, you went down more the physical sciences, the medical sciences
path.
>> Eric Horvitz: Right. Well, so I was very interested in physics, in biology, very
interested in -- I had a growing interest through school in the brain, how the nervous system
functions. Probably harkening back to those early days.
>> Robert Hess: And that's where your little essay that you wrote comes in?
>> Eric Horvitz: Yes, we actually -- we talked about things from childhood, and I dug up
a sixth grade report that my mother had kept for a number of years on the study of
bionics, and I find it interesting that I actually captured some of my early optimism
here, even in my little report in sixth grade. It talks about how in bionics scientists
have done experiments on trying to duplicate animals' brains and also man's own brain.
And I talked about how a few years ago -- so I just said here that, you know, you could build
an electronic brain to match human brains, but it would require all the tubes and relays
that would fit in the Empire State Building, it would require all the energy that
Niagara Falls gives us, and with all these tubes, relays and energy, the electronic brain
would only operate for a fraction of a second before parts would have to be replaced.
And then the prescient line here: transistors have changed the trouble with tubes. Good
thought. So I think we're still, though, struggling with some of the basic concepts.
>> Robert Hess: Yeah.
>> Eric Horvitz: And I think I -- we don't know enough yet to know, for example, the size
and scale and properties and functions that you would need to replicate something as
mysterious to us today as human thinking and thought.
>> Robert Hess: I mean it's just part of the research, understanding the whole piece
like that.
>> Eric Horvitz: Yeah. And in fact, at Microsoft Research in many ways, many teams
have that kind of vision where you look out with vision, we look at -- we consider the
difficulty of problems, the doability of various kinds of tasks and challenges, and we think
about timing, you know, like what's a -- what's a five year target, a seven, ten, 15 year
kind of target? And what's nice is, for many problems that we face, it's not all or nothing
where you get to the 15 year mark and finally there's your prototype popping out ready
to go find some value in the world. Along the way you can actually do quite a bit, and
designing a trajectory that's flexible and responsive to the early findings, so you can
actually even, you know, fire those little jets to vector yourself in different directions on
your way to a vision, can be really an effective strategy.
>> Robert Hess: Now, your academic training comes from Stanford mostly, correct?
>> Eric Horvitz: I did my undergraduate work at Harper College in Binghamton and
then went to Stanford with my eyes on an MD PhD in neurobiology, and that was my
goal for the first year and a half at Stanford.
>> Robert Hess: That's the direction you wanted to take your career into?
>> Eric Horvitz: Yes, absolutely. I was really passionate about neurobiology. I did
some undergraduate research with a fabulous mentor, Robert Isaacson, who is a scientist
of the limbic system, and I became, as he told me, one of his best microelectrode
people. The idea is you take a very small piece of glass and you put it in a puller
and you pull a very fine electrode that you fill with saline solution or other kinds of
solutions, but the tip of that electrode is small enough to poke individual neurons
with, and to listen in.
And I remember my first few experiences of listening in to the neurons of rats, mostly
asleep, in different regions of the brain, per the studies we were doing, in a darkened
room with an oscilloscope tube showing me the [inaudible] as the single neuron was
firing, and thinking: you know something, even though I don't know what's going on in
this mysterious black box, which is very related to our own mind, no doubt, as we're both
vertebrates sharing essentially the same recent branch of the tree of life, somehow,
something about what I'm doing in there is probing thought.
And I went off to Stanford thinking, you know, about the reductionism and the emergence,
trying to understand [inaudible] and beyond. And as far as the evolution of where computing
came in, reading those books that I mentioned earlier, the [inaudible], diving into some of
von Neumann's work, looking at Turing's early paper on computability, with Church, that I
really enjoyed -- this was the time in the early 80s when personal computers were
becoming quite popular. And I remember sitting in a lab, actually in my lab, with one
of my [inaudible] neurons doing its clicking and clacking, and during my reflection
about what the relevance of listening to neurons was, turning and looking at, I think it was,
an Apple II with the top off, sitting on a table right in the corner of the room. A friend of
mine was tinkering with it, putting cards in it of various kinds. And it just hit me that, you
know, what I was doing with such good intentions on the path, the biological path, to
understanding nervous systems and, you might say more broadly, cognition was sort of
akin to taking a little wire and putting it on that little black CPU on the motherboard and
trying to infer the operating system of that computer or even the application layer.
And that probably wasn't going to get anywhere very quickly for understanding, for
example, the functionality of the code, the intentions of the software designer.
>> Robert Hess: So we now have you studying computer science rather than medical
science. And how long did you do that at Stanford?
>> Eric Horvitz: It's not necessarily rather than medical science. So basically, in joint
MD PhD programs there can be various degrees of separation between them, but you
often do a couple years of what's called preclinical medical education, at which time you
could take other kinds of classes, and I was doing computer science work and some
psychology classes and so on. Then typically the medical students who are going off
to do a PhD will take off some time and dive into their PhD work, finish that work, and
come back to do the clinical years, where you put the white -- your white coat on and get
your clipboard and you actually go through rotations in the hospital per different
specialties or areas of medicine.
And so I dove off into changing my PhD out from neurobiology into the decision science
space and started actually thinking about really interesting applications in health care.
So here I was being socialized into the medical school class -- you know, the medical students
get to know each other quite well; you work, you know, in your anatomy class, four
per cadaver, over a cadaver that you dissect, and it's really a bonding experience. So
you're really socialized into the medical school class and you really -- whether you
intended to or not, you become more and more passionate about health care, you know
quite a bit more about health care. And when I started getting into decision science I got
very interested in both decision making under uncertainty but also in what we do
with limited reasoners: if we built a limited system, how could it be optimized to do the
best that it could?
I had the sense then, and I still do very much, that we can understand a lot about
intelligence, biological intelligence, human intelligence, if we push really hard on this
notion of trying to do the best one can with limited computational resources, limited
time, for example. That this would explain how the pressures of competition would
etch out brains with certain properties, minds that did their best under constraints of
various kinds.
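Eric's notion of doing the best one can with limited time can be caricatured in a few lines of Python. This is a minimal sketch under invented assumptions: the diminishing-returns quality curve, the cost of delay, and all the numbers are made up for illustration, not taken from any of his models.

    # A toy bounded-rationality calculation: keep deliberating only while the
    # expected gain in decision quality outweighs the cost of the time spent.
    # The quality curve and cost rate below are illustrative assumptions.

    def expected_quality(seconds_of_thought: float) -> float:
        # Diminishing returns: more deliberation helps, but less and less.
        return 1.0 - 0.5 ** seconds_of_thought

    def cost_of_delay(seconds_of_thought: float, cost_per_second: float = 0.05) -> float:
        return cost_per_second * seconds_of_thought

    def best_deliberation_time(max_seconds: int = 20) -> int:
        # Pick the amount of thinking that maximizes net value under time pressure.
        return max(range(max_seconds + 1),
                   key=lambda t: expected_quality(t) - cost_of_delay(t))

    print(f"Think for about {best_deliberation_time()} seconds, then act.")

With these made-up numbers the net value peaks after a few seconds of thought, which is the flavor of the tradeoff: a reasoner under constraints should sometimes act on a good-enough answer rather than compute forever.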
>> Robert Hess: Now, this is about the same time where the buzz words seem to be
expert systems.
>> Eric Horvitz: Yes.
>> Robert Hess: Does this fall into the same model as expert systems, or is this a totally
different path?
>> Eric Horvitz: That's a very good question. So in the mid '80s there was an explosion
of interest, both in academics and in industry, maybe even more so in industry, that we
now had reached the era of intelligent machines. And there were quite a few interesting
successes then. It was a time when theorem provers, essentially theorem proving
systems that had been applied in abstract domains, mathematical theorem proving were
being filled with association knowledge or logical rules about the real world in domains
like health care and machine diagnosis. And the theorem proving technology typically
known as rule based, either chaining forward or chaining backward or combinations
thereof was being demonstrated and showing a remarkable ability to actually solve
problems in the world, to do diagnosis in medicine, for example, to diagnose
bacteremia. This is the MYCIN system authored by my third advisor, Ted Shortliffe.
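As a rough illustration of the rule-based, chaining style of system Eric is describing, here is a minimal forward-chaining sketch in Python. The "medical" facts and rules are invented purely for illustration; MYCIN itself was far larger and attached certainty factors to its rules.

    # Minimal forward chaining over if-then rules: keep applying any rule whose
    # premises are all known facts until no new conclusions can be derived.
    # The rules below are invented for illustration only.

    RULES = [
        ({"fever", "elevated_white_count"}, "suspect_infection"),
        ({"suspect_infection", "positive_blood_culture"}, "suspect_bacteremia"),
        ({"suspect_bacteremia"}, "recommend_antibiotics"),
    ]

    def forward_chain(facts: set[str]) -> set[str]:
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain({"fever", "elevated_white_count", "positive_blood_culture"}))

The appeal was exactly the modularity Eric mentions next: each if-then rule looks like an independent nugget of knowledge you can add or remove.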
And I and a couple colleagues were actually situated, we found ourselves situated in a
group that was sort of at the -- you might say at ground zero for this technology, and I
and others were looking at these methods and we found some trouble with the ability to
actually make them modular. People wanted the systems to be such that you could
have a modular knowledge base that had modular rules in it that you can sort of stick
rules into or pull them out and not worry about all these dependencies.
>> Robert Hess: So it's a medical technician one time and a plumber the next?
>> Eric Horvitz: Yeah, but even more so in medicine, for example, you might want to add
rules to it, make it smarter and smarter as you learn more information. And there was a
goal, and if you read some of the early writings in those days, one of the goals was
these systems should be easy to maintain and extend from data or from human
expertise.
And there was a -- and this is a little subtle potentially for the audience, but there's an
issue when you introduce a new fact, in that in reality these facts can't be considered
wholly atomic or modular from other facts.
>> Robert Hess: They're not isolated in that one area.
>> Eric Horvitz: Right.
>> Robert Hess: They actually propagate throughout the entire tree.
>> Eric Horvitz: Right. And they don't necessarily -- they can't necessarily be integrated
in an easy manner, in a local manner. You may have to sort of do a
complete global reanalysis of all the dependencies in the system. And this, we found,
made the systems very fragile.
Now, it wasn't our doing that there was a collapse of excitement; there was a sense that
there was an overheating of the expectations for artificial intelligence in the mid '80s.
Because of that and other reasons the systems didn't pan out; it was hard to build these
systems and maintain them, have them be broadly applicable. I and other colleagues
started looking at going beyond the theorem proving approach, which was chains of
logical inferences, if-then rules, into looking at what we call probabilistic systems:
models of inference and probability where you could have some modularity in
dependencies and do proofs about how to add facts to a system and remove them and
extend these systems.
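The probabilistic alternative can be sketched just as briefly: a new finding contributes only its own conditional probabilities, and Bayes' rule combines them with everything already known. The numbers below are invented, and the sketch assumes the findings are conditionally independent given the hypothesis, which is the simplest special case of the graphical models discussed later.

    # Toy Bayesian update: belief in one hypothesis given independent findings.
    # Each finding is "modular" in that it only brings its own likelihoods;
    # the prior and likelihoods here are invented for illustration.

    PRIOR = 0.01  # prior probability of the fault or disease

    # finding -> (P(finding | hypothesis true), P(finding | hypothesis false))
    LIKELIHOODS = {
        "fever":      (0.90, 0.10),
        "high_noise": (0.70, 0.30),
    }

    def posterior(observed: list[str]) -> float:
        p_h, p_not_h = PRIOR, 1.0 - PRIOR
        for finding in observed:
            l_true, l_false = LIKELIHOODS[finding]
            p_h *= l_true
            p_not_h *= l_false
        return p_h / (p_h + p_not_h)

    print(round(posterior(["fever"]), 3))                # belief after one finding
    print(round(posterior(["fever", "high_noise"]), 3))  # belief after two findings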
More generally, building systems that could actually take action under uncertainty, which
seemed to be the way of the real world. And I think this line of reasoning, which has been
called UAI or uncertainty in AI, became mainstream artificial intelligence research. It in part
ended up weaving together several separate disciplines. Computer science, decision
science, statistics and probability theory came together, and all of a sudden, rather than
in many ways reinventing the wheel and saying here's a new way to reason, we were
building upon hundreds of years of great human effort and intellect in these different
disciplines that were coming together and being woven together, and now we're right in
the middle of that. So now you go to an AI conference and what you're seeing is the
beautiful synthesis of work in many different fields coming together, with an openness to
really understanding previous work as well as how the things that were considered new
and separate, let's say in the mid '80s, fit in.
>> Robert Hess: And then all this is basically tying back to your first grade robot.
>> Eric Horvitz: Well, I mean, ties back to the interest in understanding human thought.
And of course I have a much healthier respect for the difficulties of building things like
robots these days.
>> Robert Hess: So then from Stanford, did you go straight to Microsoft, or did you
have other businesses between?
>> Eric Horvitz: While at Stanford, I did a couple things. I started a non profit
foundation with a couple of friends in the early '80s called the Center For Innovative
Diplomacy and that group did quite a bit in the early '80s on the Internet for the world.
In fact, I and my two colleagues at CID were recently reflecting that if we were more
interested in making money -- rather than being a non profit that was going to
save the world during the early days of the Reagan Administration -- we could have built
AOL with ease. We basically brought online -- we set up networks of servers and
brought e-mail to different communities, to the foundation community, who were also
funding us; that was a very handy overlay of goals. We actually also as part of CID in a fun
project brought the first wide scale Internet to the Soviet Union at the time. It was called
GlasNet.
And it was really remarkable. We had servers in Palo Alto, California. We brought them
online and we brought the Internet to the Soviet Union. This was kind of a, you know,
our non profit project, three of us co-founders of the Center for Innovative Diplomacy.
But we were exploring back then the idea of applying technology in ways to make the
world a better place, to make it a freer place, to enhance communication. Now, maybe
I got wiser or just more passionate, but as I got into my PhD work later on, I
joined up with David Heckerman, my colleague, and Jack Breese a little bit after that,
and we formed a for profit company. And this was Knowledge Industries -- KI -- and
a little bit different than the non profit work.
And maybe I was kind of empowered by my -- by the successes we had had as a
non-profit to pull things off in the world, on, I'd say, a fairly large scale. I should say,
by the way, on that GlasNet system before I move on, when I say large scale,
remember there was a coup back in, you know, 1990 or so, '91, where all
communications were shut down coming out of the Soviet Union. Maybe by then it was
already Russia. And I recall that people were reporting the only communications
coming out were through GlasNet. And so they basically had this computer network,
and beyond all the subtleties of how this might have affected the civilization there
and the society and openness and so on, the fact that these Internet links were
all up and running and were the only way people were getting information out during that
time -- we felt like this, you know, really was an indication that this really was effective,
it really had some impact.
But moving on to the for-profit work in Knowledge Industries, I think there's nothing more
exciting, though I don't recommend this to the graduate students these days whose
committees I sit on, there's nothing more exciting than taking some really fun stuff that
you're passionate about, the stuff you're working on as a doctoral student. Especially if
it's new, like our work in graphical models and Bayesian networks. You realize you have
stuff here that really does work. My gosh, we really have stuff that really can do
inferences about health care and about machine failure. Let's do a startup.
>> Robert Hess: What exactly did you do? You never really said what KI did.
>> Eric Horvitz: Yes. So KI was taking some technologies that we had been working
on. David Heckerman had been working on some knowledge acquisition technology.
How do you capture and encode knowledge from experts? Several of us were working on
inference. So we actually created one of the first, you might call it, platforms, commercial
platforms, for capturing expert knowledge graphically, with probabilistic knowledge under
uncertainty, as well as runtimes.
You know, you can compile out models that would sort of look at the world, take in
evidence, ask questions back, do a dialogue about what was wrong with the machine,
for example, which tests to do next and then come to a conclusion as to a diagnosis.
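The dialogue loop Eric describes, deciding which question or test to pose next, can be illustrated with a toy myopic strategy: ask the test whose answer is expected to shrink the uncertainty about the fault the most. The tests and probabilities below are invented; a production system of the kind KI built would also weigh the costs of tests and use a fuller decision-theoretic value-of-information analysis.

    # Toy "which test should the system run next?" selection. All numbers invented.
    import math

    def entropy(p: float) -> float:
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def updated(p_fault: float, l_pos_fault: float, l_pos_ok: float, positive: bool) -> float:
        # Bayes' rule for the fault probability after seeing one test result.
        l_f = l_pos_fault if positive else 1 - l_pos_fault
        l_o = l_pos_ok if positive else 1 - l_pos_ok
        return p_fault * l_f / (p_fault * l_f + (1 - p_fault) * l_o)

    def expected_entropy_after(p_fault: float, l_pos_fault: float, l_pos_ok: float) -> float:
        p_pos = p_fault * l_pos_fault + (1 - p_fault) * l_pos_ok
        return (p_pos * entropy(updated(p_fault, l_pos_fault, l_pos_ok, True)) +
                (1 - p_pos) * entropy(updated(p_fault, l_pos_fault, l_pos_ok, False)))

    # Candidate tests: name -> (P(positive | fault), P(positive | no fault))
    TESTS = {"check_fuser": (0.9, 0.2), "check_toner": (0.6, 0.5)}

    belief = 0.3  # current probability that the machine has the fault
    best = min(TESTS, key=lambda name: expected_entropy_after(belief, *TESTS[name]))
    print("Ask next:", best)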
>> Robert Hess: We're still talking expert systems then for the last part?
>> Eric Horvitz: We're talking about a new generation of expert systems that you might say
is today's generation. That line of work has continued. And what's changed a lot between
those days and now is that rather than requiring experts to sit -- and it's a wonderful
experience to have a dialogue with an expert and have him working with tools, having an
expert say wow, I really like seeing my thoughts on this screen, this causal knowledge is
beautiful to look at, and I can actually edit it, that's always nice -- but these days we have
quite a bit more prowess taking raw data, and the data is more available now on a variety
of fronts, whether it be search logs, behavioral data or even [inaudible] logs, and actually
building these networks, these predictive models, directly from data. And in fact, some
people have actually shown how you can take expert knowledge of the kind that Don Owens
gave us, or that a trauma care surgeon gave me for trauma care, or the expert photocopy
people gave me for photocopiers, or that people give us for the NASA space shuttle, and
actually combine it with data streaming in and build new models that capture both the expert
knowledge and the raw data, to build expert systems that are quite good these days. And so
we are seeing a rise now, and I always say you feel like it's a palpable increase in our ability
to do -- to have automated intelligence, machine intelligence, not necessarily at the level of,
you know, cognizant self aware agents that I may have been pursuing in first grade, but you
really sense we have -- we're creating with confidence now the building blocks you'll
probably need to get there.
>> Robert Hess: So I mean like do you separate the decision process from the thought
process, are the two the same?
>> Eric Horvitz: That's a very good question. You know, just popping out of maybe
talking about what I know a lot about into maybe cocktail conversation for a bit, the human
mind. It's not clear how much of our mind our conscious reflective processes take up
versus subconscious processes, and whether or not at all these levels, subconscious up
to conscious, we may have the same kind of things going on: decision making and
actions, for example, inferences and actions under uncertainty.
So at the foundations, agents that do well in the world do so because they know how to
sense, they know how to take multiple observations and create inferences, do inferences
about the various probabilities of things going on in the world, and then, given those
probabilities and some objectives, take the best actions at any moment. And that might be
happening at many levels of human cognition.
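That sense-infer-act loop has a standard decision-theoretic core: compute the expected utility of each candidate action under the inferred probabilities and take the best one. A minimal sketch, with invented states, probabilities, and utilities:

    # Pick the action with the highest expected utility given beliefs about the world.
    # All states, probabilities, and utilities below are invented for illustration.

    def expected_utility(action: str,
                         p_states: dict[str, float],
                         utility: dict[tuple[str, str], float]) -> float:
        return sum(p * utility[(action, state)] for state, p in p_states.items())

    p_states = {"raining": 0.3, "dry": 0.7}  # probabilities inferred from observations
    utility = {
        ("take_umbrella", "raining"): 8, ("take_umbrella", "dry"): 5,
        ("leave_umbrella", "raining"): 0, ("leave_umbrella", "dry"): 10,
    }

    actions = ["take_umbrella", "leave_umbrella"]
    best = max(actions, key=lambda a: expected_utility(a, p_states, utility))
    print("Best action:", best)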
What we -- what people refer to as decision making is the deliberate, reflective, explicit
notion of oh, I have a decision to make here, I'm going to reflect and take an action. They
don't usually think about how they approach or engage another person as a decision
problem, or how they walk from one location to another, or decide how to drive
someplace, or even, in driving, steering and correcting a vehicle as decision making. But
all these things might be viewed as decision making. And my sense, to answer your
question now crisply, is that there are likely decision processes at many levels in people,
and we'll probably have to address that kind of thing in successful robots some day.
>> Robert Hess: What, then, was the decision process that eventually brought you to
Microsoft?
>> Eric Horvitz: That's a big jump. Well, that's an interesting situation. So the story
there was, here we were, three of us, as I mentioned: my dear colleague David
Heckerman and dear friend Jack Breese, who joined up with us. We were all PhD
students together. David and I were actually MD PhD students together. And we were
zooming along, we had our company, we were finishing up our dissertations, and --
what a blast -- I remember I actually went back to medical school to finish the last of my
medical school, and there I was, the CEO of this, you know, company that was on the
upswing with projects and people and so on. I remember getting, like, you know,
buzzed, paged, and having phone calls where I had to call back the senior VP at
Northwest Airlines with my white jacket on, who had no idea that I was finishing up
medical school. I was running this company and my PhD was now done, and I was on
the phone talking about a maintenance problem and a contract next to this beeping
cardiac machine. That's nothing, you know. I'm a CEO, I'm still in charge of Knowledge
Industries.
But we were all in kind of this world of doing many things when David Heckerman
mentioned one day that a friend of his from high school, Nathan Myhrvold, had called
him up and was interested -- getting interested in -- had noticed some of his papers and
was getting interested in uncertainty and had gotten a sense for this new wave they call
Bayesian expert systems, Bayesian referring to the probabilistic inference that goes on in
them. And wouldn't he and his colleagues want to come up and talk to Microsoft?
Well, we said let's follow up on this. And so Jack, David, and I came up to Microsoft.
Our goal was -- we put on -- I had a little sport coat. We were going to sell our wares to
Microsoft like we had to United Airlines.
>> Robert Hess: Did you have a tie on?
>> Eric Horvitz: I think we had a tie on, a sport coat and tie, the way we see most
visitors at Microsoft showing up for their meetings here. And our reflection was that
Microsoft was probably going to be interested, just like NASA or Ricoh Corporation, in
making a deal with our group.
We wondered, you know, okay, Windows 3.0 had -- 3.1 had just shipped and they
probably needed us for something or other. And so we went up to give a presentation.
And actually, I remember meeting the recruiter, who was actually Kevin Shields; at the
time he was in HR, and Kevin has done many things at Microsoft. And he handed
us these packets, and I said what are these, and he said these are your employment
packages.
And we looked at each other and we said, no, no, no, you've got the wrong idea. We're
here to show you our company and tell you what we're up to and see how we can work
with Microsoft. And he goes no, no, no, you have the wrong idea. These are packages,
they're -- you know, we'd like you to come work for Microsoft. We looked at your stuff
and we really liked it, and this is Nathan's intention, this is Nathan Myhrvold. And I think we
pushed back and forth a couple times. And you know, I think I even may have uttered
the comment like we would be more likely to join Microsoft -- sorry, we would be more
likely to join Maytag than Microsoft. I mean Microsoft.
And this is before we really had a sense for what Microsoft was going to come to be.
More broadly, but more specifically, we had no real concept of the plans to build a
major research organization at Microsoft. And so we spent a couple days here. I gave
a presentation. I remember I spoke in building 9, in the big conference room, about all
the things we were doing and the applications it might have to software at
Microsoft -- along with our best guesses as to what Microsoft might find interesting in this. I
remember in particular I thought that we had just done some really fun work at NASA, at
the mission control center, working with the propulsion section down in the lower right
hand side of that old mission control room, where people looked at data streaming
in. And we built a Bayesian system, a probabilistic system, that would not only reason
about faults on the space shuttle, in the very time critical propulsion section of the
shuttle, but would reason about what the human in front of the screen knew and needed
to know, and would triage the information coming at the human being. So an ideal kind of
human computer interface solution with our methods, with many -- with several layers,
not just a diagnostic layer but also reasoning, with what we call a user model, about what
the person believed and expected, to really get at this ideal connection between
machine and human.
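The triage idea, reasoning about what the operator already knows before pushing information at them, can be caricatured in a few lines: alert only when the expected benefit of informing the person outweighs the cost of interrupting them. The numbers and the one-variable user model here are invented for illustration and are not from the NASA system.

    # Toy alert triage: interrupt the operator only when it is expected to pay off.

    def should_alert(p_fault: float,
                     p_operator_already_aware: float,
                     value_if_informed: float,
                     interruption_cost: float) -> bool:
        # Benefit accrues only if there really is a fault AND the operator
        # does not already know about it.
        expected_benefit = p_fault * (1 - p_operator_already_aware) * value_if_informed
        return expected_benefit > interruption_cost

    # High-stakes reading the operator is probably not watching: interrupt.
    print(should_alert(p_fault=0.4, p_operator_already_aware=0.2,
                       value_if_informed=100.0, interruption_cost=5.0))

    # Low-stakes reading the operator is likely watching already: stay quiet.
    print(should_alert(p_fault=0.4, p_operator_already_aware=0.9,
                       value_if_informed=10.0, interruption_cost=5.0))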
And we sort of focused on that in part in our Microsoft presentation in some of our
discussions. We sat with Rick Rashid, we sat with Nathan, and we were convinced that
Nathan was serious, that he was going to put something together. Even though there may
have been, I don't know, ten or so people here at MSR in Redmond at the time, some of
whom weren't actually doing core research yet, they had been acquired from other teams,
he convinced us that there was going to be a major operation. And he really spoke to my
heart when he said and this stuff you're doing, this reasoning under uncertainty, it
should be core in operating systems and applications. It's the future of computation.
And this went back and forth for a while. We said no several times. And I remember
when I finally broke down: Nathan, at one of our talks when I came up again, leaned
forward to me, and he said, Bill Gates is a Bayesian, and he wants you guys here and
he'll do anything -- you know, we'll do whatever it takes, and think about it. He said,
how many applications have you shipped? And I said, well, you know, we got this thing
at NASA, we have United Airlines, you know, Ricoh Corporation, we're thinking about
doing all this medical stuff now, we have 500 people using this pathology system. And he
said, the minimal ship is 10 to the sixth at Microsoft. That seems funny nowadays. But he
said, you know, a million. And we said -- he said, think about what that means to your
stuff.
And I started thinking that Microsoft might give us this incredible lever, with the fulcrum
at the horizon, to really take this stuff, this technology we really believed in as a way to
enhance the world, and give us the leverage to really get it into the world and even, at
the same time, promote basic science, basic research.
>> Robert Hess: And did he follow through on those promises?
>> Eric Horvitz: Microsoft Research has grown to become even more than what I
expected it to become at the time, and we were pretty optimistic. Under the leadership of
Rick Rashid and others, and maybe we can even say with the help of some of the early
folks here like myself, it's become this gem among research labs, well known for bringing
in only the best and the brightest, for creating a beautifully open environment, for having an
unprecedented publication model that's open, where the researchers themselves make
decisions about how they want to engage, for example, on IP, intellectual property, with
patent attorneys before publishing -- educating the researchers to make those decisions
rather than having a tower of attorneys, for example, or committees that
oversee the timing of publications.
The academic freedom and the excellence and the concentration of the best and the
brightest folks across a diversity of computer science specialty areas and now not just
at Redmond but at five other centers has made the labs just an outstanding and unique
community.
>> Robert Hess: So what are some of the things you personally have been able to
deliver here at Microsoft?
>> Eric Horvitz: Well, just starting from the most recent thing that I'm still buzzing about --
we buzz about things that we can ship -- is Clearflow. So Clearflow is, I call it, one of our
Manhattan projects. I think I once mentioned that to our PR folks and they said, call it a
moon mission. And I said okay, one of our moon missions. And basically the idea was
to see if we could, through machine learning, the building of predictive models from large
amounts of data, predict the road speeds by time of day and day of week and a
whole bunch of other factors, whether it be traffic reports and so on, even Mariners games
starting and ending and Husky games on the weekends -- could we predict all the road
velocities on side streets? So we fielded it internally for a year. We had people at Live
Maps monitoring it and playing with it as well.
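The prediction task can be sketched, very loosely, as learning a road segment's speed as a function of contextual features such as the hour, weekday versus weekend, and whether a game is letting out. The toy below just averages invented historical observations that share a context; the real Clearflow models were learned from vastly more data and far richer features.

    # Toy context-based road-speed estimate. The history and features are invented.
    from statistics import mean

    # (hour, is_weekend, game_ending) -> observed speeds in mph for one segment
    history: dict[tuple[int, bool, bool], list[float]] = {
        (17, False, False): [22.0, 25.0, 21.0],
        (17, False, True):  [11.0, 9.0, 12.0],
        (10, True,  False): [34.0, 36.0],
    }

    def predict_speed(hour: int, is_weekend: bool, game_ending: bool,
                      free_flow: float = 30.0) -> float:
        observed = history.get((hour, is_weekend, game_ending))
        # Fall back to the free-flow speed when we have no matching context.
        return mean(observed) if observed else free_flow

    print(predict_speed(17, False, True))   # rush hour plus game traffic
    print(predict_speed(10, True, False))   # quiet weekend morning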
We were confident enough in it that we could really do something quite innovative for the
traffic industry, something that had never been done before: to have a whole-city,
context-sensitive traffic system that's -- that could route cars based on inferences. And
the idea that right now, as we sit here, right, at this moment there are 72 North American
cities for which Clearflow is available on maps.live.com -- you hit the little Clearflow button;
it's actually called Clearflow in our product, they used our code name, the marketing folks
liked it so much -- to know that every few minutes we have systems that are updating
the road speeds on every single surface street in 72 cities and then letting people route
with these inferences, to me this is like a big buzz. It's like, okay, we're actually -- there's
an El Dorado turning left in Buffalo right now instead of turning right, and that might help
this person out, get to their place quicker, more quickly --
>> Robert Hess: And it's all because of you.
>> Eric Horvitz: Our team.
>> Robert Hess: Yes.
>> Eric Horvitz: We have a great team of folks.
>> Robert Hess: As a matter of fact, I actually used it for the first time myself. I needed to
go to downtown Seattle and traffic was just a mess, and so I said, I need to get from
Microsoft to downtown Seattle. The main route was like 35 minutes, so I just clicked this
Clearflow, and what happens? Clearflow said 29 minutes if I took its path, and it took
exactly 29 minutes to get to downtown Seattle from Microsoft even with bad traffic. It
took me off the main streets and I got there, and this is wonderful.
Now, some of the future stuff. I understand you're doing something with an automated
receptionist or something like that. What exactly is that?
>> Eric Horvitz: Well, here it is. I'm just teasing. So we have a project that's being led
by Dan Bohus on our team -- he's a new researcher who joined us from CMU, but we're
building a bigger team around him on this, and I collaborate very closely with him as
well -- called situated interaction. And the idea is, it's part of this long term dream of
building systems that can do inferences about the pace of conversation, that can engage
people in a dialogue with a machine, an automated receptionist and so on.
For one of the projects in this space, which we called the receptionist, the idea was to look
at the task that Microsoft building receptionists have. It was hard enough to be almost
undoable, but well defined enough to give us a glimmer of hope that we might actually
learn something by trying to build an automated variant of that receptionist:
understanding his or her tasks, understanding how to read people, what their attention
is at any moment, how to deal with multiple parties waiting for service, how to optimize
the flow of a group, how to understand conversation, even overhear it at times, how to
use gaze to communicate to people who is being addressed, understanding the
social cues that might go on in a fluid conversation. And that's basically what we call the
receptionist project.
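One small slice of what such a system has to do, deciding who is currently addressing it, can be sketched by fusing pose information with the microphone-array estimate of where speech came from. The people, thresholds, and scoring below are invented assumptions, not the project's actual models.

    # Toy engagement inference: who is most likely talking to the kiosk right now?
    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        facing_kiosk: bool      # from pose tracking
        bearing_degrees: float  # angular position relative to the kiosk

    def engagement_score(person: Person, speech_bearing: float) -> float:
        # Closer alignment between a person's bearing and the estimated direction
        # of the speech makes them the likelier speaker; facing the kiosk helps too.
        alignment = max(0.0, 1.0 - abs(person.bearing_degrees - speech_bearing) / 90.0)
        return alignment + (0.5 if person.facing_kiosk else 0.0)

    people = [Person("visitor_a", True, -20.0),
              Person("visitor_b", False, 15.0)]
    speech_bearing = -15.0  # microphone-array estimate for the last utterance

    speaker = max(people, key=lambda p: engagement_score(p, speech_bearing))
    print("Most likely addressing the system:", speaker.name)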
>> Robert Hess: We've actually got a video of it in action so people can kind of see
what this means to have an automated receptionist working for them maybe sometime
in the near future. Let's take a look.
[video played]
>>: The receptionist project is a specific instantiation of this long term vision that we're
pursuing. The parts that we're looking at have to do with how do you manage
conversation, engagement, attention, flow and grounding. So how do you manage, how
do you relate all these different concepts and create a system that's able to
engage in interaction that's free flowing and follows the natural pace of human
conversation.
>>: Hi, my name is Laura, and today I'm here to help the [inaudible] with reservations.
What's your name?
>>: I'm Don and this is Eric.
>>: Nice to meet you. Do you need a shuttle?
>>: So what you're seeing is that the system is able to detect multiple participants in the
scene. It's able to track their poses. Here it knows I'm oriented towards the system and
Eric is oriented towards me. It's composing that with information from the microphone
array, which tells us the direction that speech is coming from at any given moment, and
with an analysis of different factors, basically of the clothing that we're
wearing, and based on that we infer a number of variables about each actor.
Then we go one level up in this analysis and gather information about the tasks that are
active here, so the system analyzes the relationships between the different factors and
infers that here, most likely, Eric and I are in a group together, and we're engaged in an
active task with the system, and our current goal is to get a shuttle.
We believe that it's a very good platform for doing that kind of work.
>>: Which building are you going to?
>>: I forget, where are we going?
>>: I think it's building nine.
>>: You sure?
>>: Yes.
>>: So you're going to 9 right?
>>: Yes.
>>: And this is for both of you, right?
>>: Yes.
>>: I'm making a shuttle reservation for building 9 for two people. In case you want to
correct anything, say or press star to start over. Are you visiting someone?
>>: Yes.
>>: I'll help you register in just a moment.
>> Robert Hess: I think that's a great video. I think it really illustrates some of the hard
problems we might just take for granted when we're dealing with casual conversations
like that.
>> Eric Horvitz: Exactly. One of the interesting aspects of doing research on artificial
intelligence, especially when you're getting to the goal of building systems that can work
with people and interact and have dialogue and sense intention and address goals, is
that so much of what people do so easily, almost magically, is invisible to us and needs to
be teased out, brought to the surface, and addressed with explicit machinery. It's
interesting, we often discover these things when we fail dramatically. We'll try the first
version of the receptionist and realize, oh my God, you know, we need to really understand
when people are talking to each other versus the machine; we need to understand
when what they're saying, and the system is overhearing, might be relevant to the current
question or task at hand.
>> Robert Hess: Or irrelevant to it.
>> Eric Horvitz: Or non relevant, which might be most of the time. We have to
understand, and it turns out it's not so easy for a machine to recognize when two people
among a crowd or three are a part of the same task or same group. I just need one
shuttle. It's obvious to a human being that these people are together.
There's so much that goes on subconsciously with fluidity and with ease that even
researchers pushing on the hard problems don't get to make those things explicit. They
don't come to life until you fail and you realize oh, my God, we have to even do that
part. Wow, that's an interesting area of research, now let's push on that a little bit. So I
mean, little things even in that video just now -- you didn't -- probably didn't see what was
going on, because you can't really in a fast paced video, especially on NTSC,
you can't really see some of those annotations going on in that conversational scene
analysis, but we're even reasoning about not just the likely goals of each person in the
group; there's a line between people when the system thinks they're likely in a group,
with a certain confidence. There's a little reflection about the dress, you know. It
noticed that Dan and I were dressed casually and Zecheng in the back was
dressed formally. It said formal dress. That's why the agent looked up and said
-- and directed the gaze, which is really a red dot in the system, directed her gaze at the
person waiting in the back, in this case Zecheng, working with us on the project, and
said, are you visiting somebody?
That kind of thing would have --
>> Robert Hess: Because he's wearing a tie [inaudible].
>> Eric Horvitz: [inaudible]. We can get a sport coat and a nice little white shirt on, and
the likelihood of him being a Microsoft employee is way down there in the 10 to the minus I
don't know what.
So the system actually knew that and it made those inferences. And these subtleties
are what we expect from people all the time, and they are -- they kind of provide a delightful
array of challenges that you didn't expect at the outset of the project when you try
to build a system like this.
>> Robert Hess: One of the exciting things about talking with you about what you used to
do, what you're doing now, and what you're planning on doing is the fact that there really is
a solid thread throughout your entire life that is tying everything together, and that you
clearly enjoy doing what you're doing today and what you can be doing in the future.
I suppose one way to really find out how well these things fit together is to ask what you do
outside work; you're clearly not working at Microsoft all the time. You do have an
outside life.
>> Eric Horvitz: I do. It's really funny, because one of my friends with whom I actually did
my PhD work, he's another PhD at Stanford, I noticed that when we hang out at Tahoe or
something and we'll be talking, he would say, Eric, that's work, we're off now. I said I
never thought these things were separate. This is the most exciting stuff we can be
talking about, whether we're in a cabin in the hills or not.
So maybe I take it very personally and seriously; understanding, for example, the
computational foundations of mind is almost like a religion. It's pervasive. But I do do other
things. And so I enjoy going out and roller blading, I love working with outside
organizations. So like other people, many other people at Microsoft Research, we have
outside efforts in the academic community. People sit on program committees, editorial
boards as I do.
One of my major outside activities right now is that I'm serving as president of the
Association for the Advancement of AI, the AAAI as it's called. It's probably the largest
membership group in the world of AI researchers and practitioners. And it's been a lot
of fun to take the helm of a larger organization, which has been around since 1979 and
work on where that sits today as it engages not just the academic community but
society as a whole.
>> Robert Hess: It's hard to imagine an artificial intelligence group started in 1979.
They were still doing wire wrap boards back then, weren't they?
>> Eric Horvitz: Right. They were removing the last of the vacuum tubes and going to
the 16 transistor radio versions of things.
>> Robert Hess: So what does this group do, though?
>> Eric Horvitz: So the AAAI is a large membership group that promotes research in
this area. It runs a major conference and some smaller conferences every year, one
of them the major national conference on AI, now broadened into an
international conference every year. It does student scholarships, publishes a
magazine, it does education work, gives out awards, it recognizes fellows in the society,
people who have made certain achievements get a distinction. Works with government
at times. People might find it interesting that one thing I'm doing this year as
president is I established what's called the presidential panel, and the idea is that the
president can call into being a panel and focus it on a topic. So this panel that I'm doing,
the first such panel, is called the AAAI Presidential Panel on Long-Term AI Futures. You
know, there's a lot of rumbling these days from people like Ray Kurzweil and others that
we are approaching a singularity, that things might change quickly, and we have movies
like Terminator, robots getting out of hand and Skynet and so on.
And so what this panel is doing is we brought together a fabulous group of experts from
around the world, the best people in their field, to look at challenges with potential
disruptive influences of AI, good and bad, on society; long term concerns about AI
getting out of control, and for example whether we could do proactive things, if that was
really a concern, to make it less of a concern; and even ethical issues coming online as
we get things like more competent robots in the world.
We're having a meeting at Asilomar coming up in February, akin to the meeting that the
people in recombinant DNA had several decades ago, when there were concerns about
what might go wrong as a result of recombinant DNA experiments and efforts. And
so the idea is to sort of have this group get together and sort of publish a report that
goes a little bit beyond maybe the lay press and says, listen, here's an expert panel and
here's what we think about the concerns versus the opportunities and challenges
ahead. So that's going to be a lot of fun.
>> Robert Hess: But isn't that something Isaac Asimov already solved with the Three
Laws of Robotics?
>> Eric Horvitz: Well, it turns out that Isaac Asimov was especially prescient in some of
the things that he was reflecting about. And if you got online and looked at some of the
discussions we're having right now with this committee, some of those basic laws of
robotics, for example, are coming up now. But we have -- from the point of view of the
experts working on this panel, you can actually formalize those things in beautiful ways
mathematically and extend them to create biases and constraints on agents, for
example, computational agents, to make sure that there's no risk of things getting out of
hand.
>> Robert Hess: Like sometimes we see in science fiction movies happening all the
time.
>> Eric Horvitz: I think most movies that characterize something about AI go off the
deep end in terms of fear, which probably sells a lot of movie tickets.
>> Robert Hess: Yeah. Yeah. Well, we hope you enjoyed that little discussion with
Eric. We now come to the part of the show where we have a few specific questions we
always ask our guests just to find out a little bit more behind them.
So, Eric, the first question we want to ask you is, what sort of advice do you have for
people in your field?
>> Eric Horvitz: I like to often talk about boundaries and bridges. Really hard problems
don't necessarily respect the borders that we impose with our disciplines. Computer
science, decision science, biology -- hard problems just look at these borders and scoff
at them.
So I basically tell people to blur them, build bridges, think about the problem directly but
know many things about the world from all of our sciences, all of our philosophy and
thinking, to address the problem head on. Many sparks of creativity come by breaking
down the borders to do fundamental interdisciplinary research.
On another boundary, I like to tell people to push to the edge of tractability, to the edge
of doability: try it, go for it, go a little bit harder, broader, and deeper. Even from failure you
can learn quite a bit, and success sometimes comes magically in ways you didn't
expect. And finally, while it's great to have insights and inspiration and come to a-ha's
alone, finding great collaborators is as important as finding great problems to work on.
>> Robert Hess: Pretty good. I mean, it's just basically understanding what the
problem set is and pushing yourself and pushing the problem forward as well.
>> Eric Horvitz: It's [inaudible] over those boundaries.
>> Robert Hess: Yeah. So the next question is how do you explain what you do to
someone who is not technical?
>> Eric Horvitz: I would have to say that I'm trying to enhance computers such that they
can do things that you expect you'd need people to do. To make computers better at
learning and thinking and to make them more valuable in the world beyond just little
appliances we might be playing with on a desktop, to bring them into the world.
>> Robert Hess: Then, what in life would you compare to producing software?
>> Eric Horvitz: Well, with software, as you know, we're at an incredible time where we
have these tools now, software and compilers. It's like this beautiful open canvas
to do painting on and to bring creations into the world that are only limited by our minds.
It's a tool that provides both the blueprints and the structure to build beautiful
architectures that soar into the sky.
>> Robert Hess: I suppose of all of our guests, this is probably a question that hits
closest to home for you, because what you're doing is so much like life to a certain extent;
it's just like thinking and thought, and so it flows very well from that.
Now, the next question, which I think you might have some fun with, is to finish this
sentence: You know you're a computer nerd when...
>> Eric Horvitz: When you have dreams about simulation modeling that you learn from
and wake up thinking it was a great class.
I actually mentioned this question to my wife just before we went to bed last night, and her
reaction was, well, wait a minute, how about the -- our first dinner date, where the third
chair was a laptop you were showing me software on. That's a sign that you're a nerd.
Or the fact that she can't get rid of my 128K Mac in the garage; I just want to hold on to
that Macintosh.
>> Robert Hess: Sometime it might be a prop just like your robot is a prop on stage like
this.
Now, the final question, which our audience always enjoys, is stretching some of your
creative talent in a different direction: draw a picture of your favorite data structure,
explain it, and make sure you sign it as well.
>> Eric Horvitz: Well, I have to say that a very interesting data structure that -- that's
come out of our field in the last 20 years is called the directed acyclic graph, and these
are beautiful in that you can often do graph reversals on these. They capture
probability distributions, conditional probabilities among variables you care about,
observations for example and hypotheses, and they even allow you to reason, at
times, about variables you have never observed before, hidden variables, to do foundational
science work. Directed acyclic graphs.
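For readers who want to see the shape of the structure Eric is sketching, here is a minimal directed acyclic graph with conditional probability tables and a brute-force query over it. The network is the textbook-style rain-and-sprinkler toy with invented numbers; real tools handle many-valued variables, learning from data, and efficient inference.

    # A tiny Bayesian network as a DAG: each node lists its parents and
    # P(node is true | parent values). Numbers are invented for illustration.
    from itertools import product

    NETWORK = {
        "rain":      ((), {(): 0.2}),
        "sprinkler": ((), {(): 0.1}),
        "wet_grass": (("rain", "sprinkler"),
                      {(True, True): 0.99, (True, False): 0.9,
                       (False, True): 0.8, (False, False): 0.05}),
    }

    def joint_probability(assignment: dict[str, bool]) -> float:
        p = 1.0
        for node, (parents, cpt) in NETWORK.items():
            p_true = cpt[tuple(assignment[parent] for parent in parents)]
            p *= p_true if assignment[node] else 1.0 - p_true
        return p

    # Enumerate the joint to get P(rain | wet grass); brute force is fine for toys.
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        p = joint_probability({"rain": rain, "sprinkler": sprinkler, "wet_grass": True})
        den += p
        if rain:
            num += p
    print("P(rain | wet grass) =", round(num / den, 3))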
>> Robert Hess: Go ahead and sign that. Very good.
And now we can open this up to questions from the audience. Does anybody in the
audience have a question? Yes. Yes. Over here.
>>: So you mentioned earlier, in your advice section, not to be afraid of failure
because sometimes you really learn a lot from it. I was just wondering about maybe one
point in your career where you encountered that and what you ended up
learning.
>> Eric Horvitz: Many times I think -- I'll say that my first PhD thesis topic ended with
my decision that it was too hard, and I changed topics. I was going to build a system
that could do scientific theory confirmation to reason about the validity of different
scientific theories. I tried really hard to make that system work and be of use to physicists
and biologists, and I decided it was too hard. But I learned a lot about [inaudible]
information, which is a construct I use in other things now.
I decided that that would be a long term mission, one that I'm on now, to have systems that
help scientists do their work, as research collaborators of sorts.
I learned about the kind of frustration with stopping everything and retooling. You want
to hear about my house remodeling experience? It's very similar.
>> Robert Hess: Thanks. We have another question from the audience? Yes, over
there.
>>: Yes. So Eric, I was very fascinated by the similarities that you draw between a
human mind and a computer. So I was thinking, have you thought about what role
emotions and irrationality play in human lives? Can that sort of, you know, get projected
onto a machine, and are those things important in terms of computation, and how will
that benefit?
>> Eric Horvitz: It's a really interesting question. I think that the computational foundations
of mind are orthogonal to the kinds of behaviors that people express and the way they
feel at times. I am interested in that dimension in two ways. One is in making systems
that if they work with people understand human emotion so they can better coordinate
and collaborate and so on. But more fundamentally emotion must be there for a
reason. And to understand the information theoretic foundations of sadness, arrogance,
humility, confidence, I think will be very illuminating in a theoretical way.
>> Robert Hess: Thank you, Eric, from the technical community network for being our
guest today. And thanks to all of you in the audience for coming.
[applause]