>> Amy Draves: Thank you so much for coming. My name is Amy Draves and I am
pleased to welcome Tony Hey to the Microsoft Research Visiting Speaker Series. Tony
will discuss his book “The Computing Universe”. He was inspired by the fascinating
overview provided by Richard Feynman in his Feynman lectures and he details the
history of technology and computers from the early days until now. Tony is a senior data
science fellow at the University of Washington's eScience Institute and he formerly was
a corporate VP with us here in Microsoft Research. He has authored many research
papers and has edited or co-authored several books including: The Quantum Universe
and Einstein's Mirror. Please join me in giving him a very warm welcome.
[Applause]
>> Tony Hey: Thanks very much Amy. Well, it's really great to be back, and it feels
slightly strange to be back in Building 99 without my badge, but nonetheless it's great to
be back. I will tell you about why I wrote the book. It is meant to be about computer
science. I have read lots and lots of books, popular books on science, physics,
astronomy, genetics, biology and all these things, but the science that has changed the
world the most in the last 50 years is undoubtedly computer science/computer
engineering, whatever you want to call it, and there are very few popular books about
computer science.
In England I have three kids and they all did an exam at the age of 15 on ICT, Information and Communication Technologies, and it was the most boring subject they ever did. And that is really sad, because in the world we live in now young people can make a big difference and change the world, and that's evident from the history of computing. So it was because I wanted to see if I could interest even my kids, and show that computer science was relevant and exciting, that I ended up writing this book.
So this is Jeannette who you all know, Jeannette Wing. She had this notion of
computational thinking. I realize that I am speaking to an audience of experts. This is
normally not given to an audience of experts so I will skip some of the details, but the
idea is that it is more than just knowing how to use a word processor or spreadsheet. It’s
more than being able to just write in code, to write in Python or whatever. You have to
have something for the computer to do, and the recipe for making the computer do it is an algorithm. And that's really the heart of computer science for me: algorithms and clever ways to do things. This was Jeannette's vision, and it's a vision that has been taken up very much in the UK, where they are now going to teach computer science as a science, in the same way as they teach physics, chemistry and biology, all the way through school.
And they are very much inspired by this vision of computational thinking.
Computational thinking came about actually from one of the heroes of the early days of
computing, Alan Turing. This is a computer-generated image of him, and Turing Machines were deliberately based on what a person would do, on how you would get a human computer to do a calculation. You have a piece of tape, you can move it back and forward, you put marks on it, and so on. So that's what the Turing Machine was, and it gives the formal definition of computability, and it gives you a way of deciding whether an algorithm is the fastest, or takes the shortest time, or uses more memory, or whatever. So you have a way of
classifying algorithms, classifying what things are computable and also what things are
not computable.
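To make the Turing Machine picture concrete, here is a minimal simulator in Python. It is an illustration added here, not something from the talk or the book; the particular machine (its states, symbols and rules) is invented purely as an example.

```python
# A minimal Turing machine simulator (illustrative sketch only).
# The example machine below flips every 1 to 0 and every 0 to 1 on the tape,
# moving right until it reads a blank, then halts.

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]   # look up the rule
        if head < len(tape):
            tape[head] = new_symbol                        # write the new mark
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1                   # move the tape head
    return "".join(tape)

# Rules: (state, symbol read) -> (symbol to write, head move, next state)
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110_", flip_rules))  # -> "01001_"
```

However simple, this kind of machine is enough to define formally what "computable" means and to compare how long different algorithms take.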
So algorithms have been known for a long time. The first one is probably Euclid's algorithm, which finds the greatest common divisor: for example, for 8/12 you can divide top and bottom by 4, and 4 is the greatest common divisor. And there's a precise procedure where you give it any fraction and it will find the greatest common divisor, and that's the essence of an algorithm.
Over here is a person from the 19th century, Charles Babbage, and he had an idea very much like what we would call a modern computer. He never actually built it, because the engineering standards of the time were not quite up to it, but you can see, for example in the Computer History Museum, a replica of his difference engine, which was completed only very recently, long after his death. And this of course is Ada Lovelace who,
with Babbage, figured out how to program things.
I like this quote, and it sort of typifies what computer science is. As a physicist, if we had an algorithm which did the job, yes, we would like it to be smart, but we didn't worry about whether it was the smartest or the shortest or whatever. But what Babbage says is, "Whenever any result is sought by [the aid of the analytical engine]," that was his machine, "the question will arise: by what course of calculation can these results be arrived at by the machine in the shortest time?" So computer scientists worry about those sorts of questions, and they are typified by, to take just one example, sorting algorithms.
So this is sorting a list of names: Bob, Ted, Alice, Pat, Joe, Fred, May and Eve. You just go through the list comparing adjacent names, and if one should come before the other you swap them, and so on. A name rises up the list like a bubble, and it's called bubble sort for that reason. It's a very inefficient algorithm, because when you have done one pass you have to go through and do it again, and again, and again. And this is an example of a much more efficient algorithm for sorting the same names: you divide and conquer, and then you merge the pieces back together again. So these are the sorts of things that computer scientists care about. They care about new algorithms, new ways of doing things, which one is the most efficient, which requires the least memory and so on; and complexity theory comes from the fact that we can base all of that on Turing's model of computation.
So I started off as a particle physicist, and a friend drew this; it is meant to be a quark inside a proton. There are three quarks, and as you can see from the arrows this is an up quark, that's an up quark and that's a down quark, and together they make up a proton, but they never get out. So these are funny objects: they are inside the protons, we all believe they exist, but you can't see one by itself. It's called the quark confinement problem, or in this cartoon Gell-Mann's Quark Prison, and that's what I was working on.
But I ended up doing this, building parallel computers. This is the Cosmic Cube at CalTech, built by Geoffrey Fox and Chuck Seitz; it was essentially an assembly of hundreds of microprocessors put together to make a very powerful computer. And it was the first message-passing computer; previous to that you had big Crays and so on. I was reduced to doing this because a Cray vector supercomputer in the 1980s cost millions of dollars and professors, as you know, don't have millions of dollars. So this is the sort of thing we ended up building, and of course the difficulty is that, although it's much cheaper, if you have to plow a field it's much easier to have the plow pulled by an ox than by 100 rabbits. So programming the thing so that it actually goes, and all these pieces do what you want, is the challenge there.
Okay, so how did I switch? Well, I was on sabbatical at CalTech in 1981. This is Gordon Moore, and the Gordon Moore Professor of Computing at CalTech was Carver Mead; he was the guy who figured out why Moore's Law is true. Moore's Law started here: in 1965 there were 64 components on a chip, and he said, "Well, it looks like a straight line; I think there will be 64,000 by 1975." And roughly speaking, with the aid of large amounts of investment from the semiconductor industry, they have managed to keep Moore's Law, the fact that the number of transistors on a chip doubles every 18 months or so, going for 50 years.
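Moore's original 1965 extrapolation is easy to reproduce, and the tiny sketch below (added here as an illustration) shows the arithmetic: 64 components doubling once a year gives 64 x 2^10 = 65,536, roughly the "64,000" he predicted for 1975.

```python
# Moore's 1965 extrapolation: 64 components on a chip, doubling every year.
components_1965 = 64
for year in range(1965, 1976):
    print(year, components_1965 * 2 ** (year - 1965))
# By 1975 this gives 64 * 2**10 = 65,536 -- the "64,000" Moore predicted.
```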
So I went to a seminar at CalTech given by Carver Mead and it was like this, it was the
weekly seminar for the faculty and students, we were all waiting and Carver Mead didn’t
turn up. And someone went and fetched him from his lab, he had forgotten about it; he
came with his box of slides and gave a scintillating talk about Moore’s Law. And I
remember in 1981 he concluded there were no engineering obstacles to making
everything smaller, faster and cheaper for the next 20 years. And of course he was wrong, because it has gone on for over 50 years altogether. So it became clear to me
that was really the future.
And just to give you a quick idea, this was a key invention. This is Intel's memory chip, with around a few thousand transistors on it, from 1970. The big difficulty of the early machines was building enough memory. And we all know Bill Gates's famous quotation, "640K ought to be enough for anybody." It's because memory was so expensive and so difficult; but this was the beginning of making memory cheap. And then this was the thing that changed the game. Intel were approached by a Japanese manufacturer who made calculators and wanted a range of chips, one for each calculator with different capabilities, but Ted Hoff and his colleagues at Intel said, "Why don't we make a chip that we can program, and then you can program it to do this, this or this, and it's only one chip." That was the beginning of the microprocessor, and I don't think Intel fully understood the implications of that, because now we find microprocessors in cars, refrigerators, mobile phones, everywhere.
So that was really the engine of the revolution, and this chip had a few thousand transistors in 1971. This is a Xeon from 2015; it has 4 billion transistors. Now that is remarkable. The challenge of computer science is: can you actually get 4 billion components working together doing something useful? It is really a remarkable achievement of computer science, managing the complexity to do that. And this is a slide from Gordon's original paper in 1965, long before there were PCs. Here we have "handy home computers" on sale alongside notions and cosmetics, so it was really an interesting paper.
So after that I went away really enthused to solve my quark confinement problem on a parallel supercomputer using microprocessors. What I didn't know at the time was what my hero, Richard Feynman, was up to. He was famous for the Nobel Prize for Feynman diagrams, he gave a very famous series of lectures on physics, the Big Red Books, the three-volume lecture series from CalTech, and he was also on the Challenger inquiry, where he did a famous live demonstration showing what the problem was, even though the commission he was on was trying to whitewash it, and he wouldn't have any of that. So he is a very good hero to have, and he was my hero.
What is not so well known is that he spent the last five years of his life lecturing on computing, and that is partly because of his son. Feynman, as you know, when he gave a lecture, was really entertaining and sparkling, and he used to take great pleasure in annoying philosophers. So he would say, "A philosopher would say it is essential for physics that the same conditions produce the same results. Well, I've got news for him: they don't." And all the philosophers used to walk out of his lecture and he used to enjoy it. But then his son went and did philosophy at MIT, which upset him. But then the son switched to computing and worked for Thinking Machines, and Feynman was a consultant for Thinking Machines.
Anyway, so this is Feynman's view of computer science: that there are limitations due to math, limitations due to noise, to thermodynamics, to engineering, and also limitations due to quantum mechanics. And in this he sort of proposed a quantum computer, which as we know is being pursued by a number of three-letter agencies based around DC, but also by companies such as Microsoft. These lectures I wrote up at his request; they are really aimed at a CalTech grad student, but they were really very insightful. And Feynman also, I discovered, gave a lecture at the Esalen Institute at Big Sur.
Now that’s sort of an alternative therapy place and you can just imagine the sort of
audience that was listening to Feynman, telling them how computers work. So if you
look at the video that was one of the things that inspired me to try and make a popular
version of this and that is what this is an attempt to do. It is meant to be a popular book
about computer science. It’s meant to interest young people if they still read books. So
it’s got pictures, antidotes, stories and also shows how young people change the world.
So let me now just take you a quick run through and show you where we have got to and
what sort of applications we are going to be doing in the future.
What I like is how Butler Lampson, who is a Microsoft technical fellow, likes to classify the ages of computing in terms of the types of applications. So for the first 30 years we were using them for computation, basically. We were doing it to design aircraft, to do galaxy formation, to do weather forecasting and things like that — by and large computation, and spreadsheets as well. Then we got to connect them up together, and we have been learning how to do that with the internet, and we now have all sorts of applications: search engines, web pages and so on, and social computing is the latest development in that. But since about 2010 they have been engaging with people in intelligent ways, and that's with adding AI to all this computing and communication.
So let me give you two examples of what's meant by embodiment, which is what Butler calls the third age; I think I prefer "intelligent". To give the first example: I was shopping in England with my son Jonathan, we got back from the supermarket, there was only a tight parking space, he was in his Volkswagen Touareg and he said, "Ah, watch this Dad," he pressed a button, the wheel turned by itself and the car did a perfect parallel park. Now that is something we would all like; parallel parking is particularly challenging at times, and that's something we would welcome. But as you will see from this talk, along with the good things sometimes come some bad things.
For the second example, I was visiting a friend of mine, Horst, at the Berkeley lab, and he was telling me the story that he had to go to the Livermore laboratory to see one of his bosses, and he took a lab car to go there. When he got there his boss said, "I'm terribly sorry Horst, but I am going to have to report you for speeding." The car had reported him for speeding. He didn't know it would do that, so that's slightly scary; that's the sort of upside and downside of these applications that have intelligence.
So let’s quickly go through the first stage and it started really with these guys: John
Mauchly and Presper Eckert at the Moore School of Engineering in Philadelphia and they
built this machine. It was eight feet tall, it was eighty feet long, weighed thirty tons and
had seventeen thousand vacuum tubes. And of course all the early computer pioneers
thought the major challenge was keeping this huge amount of hardware running. It's a complicated thing, but actually much of the work was done by these people, the women programmers of the ENIAC, and you had to program it by connecting up the various components for the particular problem. And as we all know the challenge in
computers is really as much in the software as in the hardware. So these people now
finally have got some recognition for their work.
But I would like to give, because I am English, a little English diversion here. As I said, with the early computers the difficult thing was getting memory to store enough data, results and so on. Maurice Wilkes was a professor at Cambridge in England, and he built what he called the EDSAC. It was in homage to the design document for the EDVAC, which was put out by Eckert, Mauchly and von Neumann, and it was the first fully operational stored-program computer, where you could store the program and the data for the program, the calculations, in the memory. The reason he could do it was because of this. These are mercury delay lines: the signal can go up and down here, and so you can delay the signal.
The reason he was able to do it before von Neumann, before Mauchly and Eckert, is that Mauchly and Eckert were in a patent dispute with their university — a typical problem with universities — and von Neumann was using a different technology. But Wilkes had an astrophysicist called Tommy Gold. He was the guy who later figured out what pulsars are, rotating neutron stars, so he's famous in his own right. During the war he had been working on radar, where you do a sweep with the radar and then you do another sweep, and you have to compare the two to see if anything has moved, so you have to delay things. So he built delay lines, mercury delay lines, and that was the reason why these guys were the first to offer a computing service for a university.
And this guy, David Wheeler, has probably the first computer science PhD, from 1951. What did he do? Well, Maurice Wilkes said, "I can remember the exact instant when I realized that a large part of my life from then on was going to be spent finding mistakes in my own programs." And we all know that. The idea they invented was this: if I have figured out how to multiply two matrices and got some complicated machine code to do it which works, it makes no sense for everybody else to have to do it again. You would like somebody else to be able to use my code. So they invented the idea of the software library. Then, as you go through the program instructions, you have to jump from the main program over to the point where the matrix routine is stored, do the multiplication, and then jump back. That's called the Wheeler Jump, and he invented how you do that. And David Wheeler and Wilkes wrote probably the first book on computer programming.
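The Wheeler Jump is essentially the ancestor of the modern subroutine call: before jumping into the shared library routine, the caller plants the address it wants to come back to. The toy Python sketch below is added purely as an illustration of that idea; the "instructions" and names are invented, not the actual EDSAC mechanism.

```python
# Toy illustration of the Wheeler Jump idea: the caller stores a return
# address before jumping into a library routine, and the routine uses it
# to jump back to the right place afterwards.

memory = {}            # our pretend store
return_address = None  # where the library routine should jump back to

def library_multiply(a, b):
    # The shared routine: do the work, then "jump back" to the caller.
    memory["result"] = a * b
    return return_address

def main_program():
    global return_address
    return_address = "after_multiply"            # plant the return address
    next_instruction = library_multiply(6, 7)    # the Wheeler Jump
    assert next_instruction == "after_multiply"  # we came back to the caller
    print("back in the main program, result =", memory["result"])

main_program()   # -> back in the main program, result = 42
```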
Okay, and then in the 50s and 60s we had IBM, where you took your cards and handed them to an operator, who was the high priest, and you weren't allowed to touch the computer. The programs looked like that. Mine wasn't quite as big as that, but it wasn't good to drop your card deck on the floor and have to reorder it. But really the exciting times came in the 70s. I would like to say something about the miracle at Xerox PARC. Xerox had this Palo Alto research center and they built what was called the Alto. It was the first recognizable personal computer in that it had a WYSIWYG word processor, and [indiscernible] was one of the people there of course; he came to Microsoft. It also had Ethernet, so you could connect it up, and they invented the laser printer, so you could actually print a document from it, and it also had a windows interface with a mouse, icons, pull-down menus and all that sort of stuff.
This is Chuck Thacker and Butler Lampson; they are both now technical fellows at Microsoft Research, and they were the architects, hardware and software, of the machine. There is a nice story about Lois Lampson, Butler's wife. She was the first person to have her thesis printed on a laser printer — remember, theses used to be typed on typewriters, with carbon copies, and you took them to the university and handed them in. So she did that, took two copies and handed them in to the clerk, and the clerk said, "Which is the original to give to the library?" And she said, "Oh, it's done by a laser printer, they are identical," and the clerk said, "No, no, no, no," and so they had a long argument and in the end she said, "That one."
And then this of course is the famous picture of Bill and Paul at Lakeside, and that's the famous picture of Microsoft when it was in Albuquerque. I think there were two people missing at the time. And this was the Altair 8800, built around Intel's new 8080 chip, and you had to program it with the switches, so having a BASIC interpreter was a hell of a lot easier, and that's how it all started. But in the middle 70s it was the hobbyists who made the progress. It wasn't the professional computer companies, nor was it the academic computer scientists; it was a bunch of hobbyists. This is the Homebrew Computer Club. Now, at the time I was a physicist, and this club met at the Stanford Linear Accelerator Center, and to my chagrin I was actually at the Stanford Linear Accelerator Center in 1975. It was a very exciting time. We had just found a new quark, the charm quark, but I didn't know any of this was going on. Interesting.
Okay, and then of course Steve Wozniak produced the Apple I. Steve Jobs realized that people wanted it packaged, and so that was the Apple II, and then the Mac and so on. And the rest is history, as they say. The only application that they didn't have at Xerox PARC was of course the spreadsheet. This is Dan Bricklin; he was sitting bored in a lecture at Harvard Business School, having to do all these exercises where you change parameters, and he realized it would be a hell of a lot easier if you could do it on the computer. So that was the spreadsheet. What I should say about the Xerox PARC effort is that the Alto never became a popular machine, but that was partly the time at which it was done. In the early 70s memory was very expensive and it used a huge amount of memory; but Chuck and Butler both understood Moore's Law, and they understood that memory was very expensive then but would be very cheap later on, as indeed it was. Then this is the thing that sort of established Microsoft and established PCs in the business world.
Ctrl-Alt-Del was invented by an engineer at IBM called David Bradley. There wasn't quite enough room to put a reset button on the board in a convenient way, so he chose this combination of keys so that it couldn't easily be pressed by accident. And there was a nice conversation between him and Bill Gates, with Bill complaining about Ctrl-Alt-Del being such a pain, and Bradley said, "Well, actually you were the person who made everybody use it." And of course this is another of the killer apps, if you like, for the consumer market: computer games. This was Pac-Man, and it was the first game that actually appealed to both sexes.
Okay, the second age of computing. This is my hero in computing, J. C. R. Licklider, professor of psychology at MIT, and he had this vision, a concept of the "Intergalactic Network", in which he believed everybody could use computers anywhere and get data from anywhere in the world. This is Larry Roberts, the guy who built the ARPANET. Licklider went to ARPA with this vision, his successor got the money, and then Larry Roberts built it. And this is a figure from their paper: these nodes connected by these message processors. The idea was that everybody had these very expensive computers, and what you could do was connect them, and then someone at UCLA could use the computer at Utah.
Now these were the first ARPANET nodes. They're all on the West Coast, and that's because all of the guys at MIT, Harvard and places like that didn't want to share their computers with anybody. So they were not very keen on this, even though they had invented most of the stuff, and that's why these were the first. This was the ARPANET network in 1969, and of course very quickly it became much bigger. UCL in London was connected in 1973, and I remember logging on from UCL to use a computer at CalTech in the 1980s. This is the killer app for that: Ray Tomlinson at BBN had the idea of using the @ sign. There was already e-mail locally on a single computer, and he just added this so you could actually send it across the internet — well, it was the ARPANET at the time. This is a representation of the Internet, TCP/IP and all that, and of course now we have billions of computers connected to the internet, and roughly more than 3 billion people in the world, well over a third of the population, can actually log on to it. But it was this guy who made it usable. I gave him his first honorary degree when I was head of department at Southampton: Tim Berners-Lee.
And there was an amusing story: I was watching the Olympics on NBC when they were in London, and they had a whole elaborate opening show, and at one point they homed in, they opened doors, and there was this guy hacking away at a computer, and the NBC commentators in America had not been briefed. They didn't know who Tim Berners-Lee was. They do now, but it was interesting that they had no idea who he was and what he had done. So there he was, saying welcome to the world, and this is the first photo on the web, from friends of his who have an amateur singing group.
And of course then these guys came along. In 1998 I was using a search engine called AltaVista, and it was a good one, but these guys, Sergey Brin and Larry Page, young Stanford grad students, had the idea that they could do better. The algorithm they developed treated links very much like citations of academic papers. Instead of just counting how many times a word is mentioned on a page to decide which page is most relevant, they wanted to see how many links pointed to it.
And of course the more links that point to it, the more relevant it may be; but a link from a site nobody cares about counts for less than one from a university professor, or a Nobel Prize winner, or an industrialist like Bill Gates — a link from an important site carries more weight. And that was the PageRank algorithm.
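Here is a very simplified sketch of the iterative PageRank idea, added as an illustration only: the tiny four-page link graph, the damping factor of 0.85 and the fixed iteration count are illustrative choices, not the actual Google implementation.

```python
# Simplified PageRank: each page spreads its importance along its outgoing
# links, so a link from an important page is worth more than a link from an
# unimportant one.

links = {                      # a tiny made-up web: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share    # pass importance along each link
        rank = new_rank
    return rank

print(pagerank(links))   # "C" ends up highest: the most (and best) links point to it
```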
They set up a network to do the search on the Stanford campus, and then they tried to sell the algorithm. They went to AltaVista, which was owned by Digital at the time, and they said no; they went to Yahoo and they said no; but one of the founders of Yahoo, David Filo, who was also a Stanford grad student, said, "Why don't you try and set up your own company?" So okay, they did. And this is a shot from Microsoft's Digital Crimes Unit, just a random shot to show the bad side as well. You will see that most of Spain seems to be a botnet and so does Mexico. Brazil and the USA don't do too well. Canada does quite well, but nobody lives there of course. So anyway, all right, enough.
Computers for embodiment: so let me show you, I want to go here.
[Video]
>>: What if your car could actually see what’s ahead and respond right in the moment?
A radar sensor and advanced cameras including a new stereoscopic camera in the
windshield allow intelligent drive to recognize objects 360 degrees around the vehicle.
Radar sensors give it foresight, long, short and multi-range radar scan the environment in
front of the car, behind it and out from the rear corners. The intelligence comes from
insight; advanced computing power interprets what it sees, determines the best response
and springs into action faster than humanly possible. The systems of intelligent drive can
recognize a pedestrian entering your path, vehicles ahead slowing to a stop and even
cross traffic at a nearby intersection. The moment it does, it lets you know. As soon as you brake, even if it's too lightly, intelligent drive can increase it to the precise amount needed, even applying full braking power. And if you don't respond it can initiate braking on its own, all to help reduce the chances or severity of an accident.
>> Tony Hey: Okay, so that’s Mercedes an add for Mercedes and you have seen things
like that. So I went to work with people like Mercedes and BMW in Europe on industrial
projects in the 90s and therefore I had been slightly puzzled by the Google driving cars,
because they have had this technology for many years in these companies, but difficulty
is legislating and making sure you can drive it safely, if it was an accident who’s
responsible and so on, the program owners at the company or whatever. So deploying it
is interesting and it will be interesting to see where that goes, but clearly it can do some
things really well like self parking and stuff like that.
Okay, so how does that work? How does it tell a person from a car, from a building? Well, those are the sorts of things that I learned about in my time at Microsoft Research, from people like David Heckerman and Kyle Cady, who told me a little bit about machine learning. I do actually have a paper on neural networks, and this is an artificial neural network. We have about 100 billion neurons or so in the brain, with trillions of synaptic connections between them. Obviously this is a very simplified version; here we have three layers, an input layer, a hidden layer and an output layer, but there are lots of connections between them and you can adjust their strengths. So, for example, you can train this to recognize a number plate. You can show it an E or an 8 and adjust the strengths until the output correctly identifies whether it's an 8 or an E, and so on. Then you can have a camera that automatically reports you for speeding by recognizing the number plate on your car.
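As a toy illustration of "adjusting the strengths", here is a single artificial neuron in Python learning to tell two made-up four-pixel patterns apart. This sketch is added here for illustration only; it is nothing like a real number-plate recognizer, and the patterns, learning rate and training loop are all invented.

```python
import math
import random

def sigmoid(x):
    # Squash the weighted sum into the range 0..1.
    return 1.0 / (1.0 + math.exp(-x))

# Made-up "images": pattern [1,0,0,1] should give output 1, [0,1,1,0] should give 0.
training_data = [([1, 0, 0, 1], 1), ([0, 1, 1, 0], 0)]

weights = [random.uniform(-1, 1) for _ in range(4)]  # connection strengths
bias = 0.0
learning_rate = 0.5

for _ in range(1000):  # many tiny adjustments of the strengths
    for pixels, target in training_data:
        output = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
        error = target - output
        weights = [w + learning_rate * error * p for w, p in zip(weights, pixels)]
        bias += learning_rate * error

for pixels, target in training_data:
    output = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
    print(pixels, "->", round(output, 2), "target:", target)
```

A real recognizer stacks many such neurons into layers and trains millions of weights, but the principle of nudging the strengths until the outputs match the labels is the same.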
So computer vision is one of the areas, speech processing is another, and this is one of the earliest types of machine learning algorithm, going back to von Neumann and [indiscernible] and people like that. But there has been great excitement recently with these deep neural networks, with multiple layers that you can train to recognize different features, and there has been great progress. It's ironic really that a small child can recognize that it's a cat and not a dog in a photograph, and that's quite a hard thing for a computer to do, but we are making dramatic progress on these things with deep neural networks. And so that's the sort of algorithm.
Another algorithm is the one we all know from Kinect. This is the depth camera giving you 3D — it knows this is further away than that — and then you train it on lots of images. These are synthetically generated images of people: fat, short, a young child, an older child, and you want it to recognize that this is my hand, not the hand of the person next to me, even when my arm is obscured because I am standing next to them. And of course you have to have good sound engineering. That again uses a different machine learning algorithm, decision trees — decision forests — and that was how it was done by the Cambridge group, with help from other Microsoft Research labs and the Xbox team.
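The decision-forest idea can be sketched with an off-the-shelf library: train an ensemble of decision trees to label each pixel's feature vector with a body part, and let the trees vote. This is only a schematic of the approach, added as an illustration (it assumes scikit-learn is available, and the features and labels are invented); the real Kinect system trained on millions of synthetic depth images with simple depth-difference features.

```python
# Schematic per-pixel body-part classification with a random forest.
from sklearn.ensemble import RandomForestClassifier

# Invented feature vectors for a handful of pixels, each labelled with the
# body part that pixel belongs to.
features = [
    [0.9, 0.1, 0.2],
    [0.8, 0.2, 0.1],
    [0.2, 0.9, 0.3],
    [0.1, 0.8, 0.4],
    [0.3, 0.2, 0.9],
    [0.2, 0.3, 0.8],
]
labels = ["hand", "hand", "arm", "arm", "head", "head"]

forest = RandomForestClassifier(n_estimators=10, random_state=0)
forest.fit(features, labels)

# A new pixel gets a body-part label by letting all the trees vote.
print(forest.predict([[0.85, 0.15, 0.2]]))   # -> ['hand']
```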
And of course now you put those two together, the computing and the communication, and you have the cloud, and that turns this sort of device into something really powerful. Unfortunately with this audience I can't make my normal joke, you see. This is Cortana, and Master Chief here. When I speak into my phone I usually say the audience will talk to Siri, because most of them do, but this audience will talk to Cortana as well. And I say, "Cortana, open the pod bay doors," and the signal goes up to the cloud where the processing is done; they work out what's said, they do the speech processing, they generate the answer, and the speech comes back to me in less than a second and says, "I'm sorry Tony, I can't do that." So it changes these devices and it changes the world, and that's really an exciting set of new possibilities for applications.
So I just list two here. This is a device you may recognize which can actually measure all sorts of things about you. It can alert you to the fact that you have an appointment, it can know your diary, it can know you have to go buy flowers for your wife or husband or whatever, and it can also send a text to your doctor saying you're having a heart attack, and things like that. So there is lots of potential for these sorts of devices in medical areas. And this is Skype: I can speak in English and it comes out in Chinese to my colleague in China, and vice versa. So these are the sorts of applications you will see. I've talked mainly about transportation, but healthcare is a big one, and there will be lots of applications in these other areas. That's really clearly where the battle is coming: smart applications in all of these areas.
But I would like to say: you have seen the good things, but there are also bad things. So Ed [indiscernible] showed me this, which you might find interesting.
[Video]
>>: This is a regular new car. The masking tape is only there because we agreed to
obscure its make and model.
>>: We will give them an illusion they control the car for now.
>>: [indiscernible] has been working on this for five years with multiple research teams.
>>: When I hit the fluids.
>>: Oh my gosh.
>>: There we go.
>>: What’s that, what’s that?
>>: Yeah, the windshield fluid.
>>: Now wait, so this is something a hacker –.
>>: That’s right, a hacker, obviously you didn’t turn on the windshield wipers.
>>: I didn’t, no.
>>: Using a laptop the hacker dialed the car's emergency communication system and transmitted a series of tones that flooded it with data. As the car's computer tried sorting it out, the hacker inserted an attack that re-programmed the software, gaining total remote control.
>>: Oh my god.
>>: The horn.
>>: They could control the gas, the acceleration, they can control the braking?
>>: That’s right.
>>: And they could do this from anywhere in the world.
>>: So just stop at the cones here.
>>: She thinks she is going to be able to stop right at those cones. Let's make sure that she can't and she is going to drive right through them. We will have complete control of those brakes.
>>: All right, here we go.
>>: Oh no, no, no, no, no, no.
>>: The breaks didn’t work, right.
>>: Oh my god, I can’t operate the breaks at all. Oh my word, that is frightening, that is
frightening.
>> Tony Hey: That is frightening. You all know that Dick Cheney had the wireless in his
pacemaker turned off because there was a plot in one of the movies where they killed
somebody by attacking the pacemaker. He felt he was under threat. Okay, so just to
conclude with some light relief; those are the sort of applications, the intelligent
applications that the computer scientists of tomorrow, Microsoft, Google, Facebook and
so on will all be doing. And it really is an exciting time. You can have people starting
companies and doing all sorts of things. And the cloud makes that easier. So that’s
what’s really interesting.
So I would like to end on a sort of academic note, since I am back in academia. I would like to talk a little bit about real intelligence and artificial intelligence, weak or strong AI. This is from Russell and Norvig's recent book: "The assertion that machines could act as if they were intelligent is called the weak AI hypothesis." You are simulating intelligence, as opposed to the machines really thinking; and of course all the movies we see are the ones where they are really thinking, and that is called the strong AI hypothesis. So you have all seen The Imitation Game, I hope, and if you want to find the correct history, read my book, all right. It slightly distorts the history; it's a great movie, but the history is not quite right. Anyway, Turing had the idea of the Turing test: you had connections to a computer or a person in two rooms, and the question was whether you could tell which was the person and which was the computer. Way back in the early days of computing he was thinking about computers and artificial intelligence.
This is 1997, Garry Kasparov over here playing Deep Blue, and of course it's well known that Kasparov lost. Now we would have said that playing chess requires intelligence, but of course Deep Blue had large amounts of parallel processing power, it had special chess chips, it was primed with all the openings and all the endgames, and actually it beat Kasparov because it had so much computing power that it could go deeper and see all the possibilities. Kasparov could go fairly deep, and he could also see patterns and use intuition. The computer had none of that; it was just, by brute force, able to beat Kasparov. So that's weak AI.
Then, more impressively, in 2011 IBM's Watson beat the two best players at this rather obscure game called Jeopardy. It's a curious game. You have to teach the computer lots and lots of information: it has to know about Belgium, dinosaurs, notable women, huge amounts of all sorts of stuff. And the game asks the question in a funny way: it gives you the answer, so it could say "Brussels," and then what you would have to say is, "What is the capital of Belgium?" — that's the way it does these things. So it has to understand all that, and it actually beat these two champions at Jeopardy. But David Ferrucci, who was the IBM engineer in charge of the project, also had a screen that the audience could see which showed the top five answers that Watson was considering. Watson only gave the top one in public, but the screen showed all five, so you could look inside Watson's brain. And if you did that you would find that on some questions Watson had absolutely no idea what the question was about. Its answers were completely out in left field and it really had no understanding of the question.
So this is a philosopher at Berkeley called John Searle, and he has been one of the people who has attacked the idea of strong AI. This is his Chinese Room: you pass things in here, the person inside looks in the rule book, which says, "When you get this, put that out there," and actually what the room is doing is answering questions in Chinese. His argument was that the person doesn't understand Chinese; it looks as if the room understands Chinese, but he doesn't, he's just mechanically following rules. So what he said about Jeopardy and Watson was, "Watson doesn't even know that it was playing Jeopardy, let alone that it won." So will we have R2-D2 coming in to rescue us, or will it be Arnold with Skynet, or will you fall in love with your operating system? And of course there are lots of other examples: Transcendence, Ex Machina — I recommend Ex Machina, it's a sort of Turing test movie, if you haven't seen it. It's a very interesting thing.
It is interesting that even Bill Gates, and Elon Musk, and Stephen Hawking and people like that have said that this is a really serious possibility for the world, and that it could be the last thing we ever invent, because the machines will take over. I think that's probably overblown, but the question is: are we going to get to the singularity? As Ray Kurzweil puts it, the singularity is near: computers will become more intelligent and design better computers without us. Or there is Feynman's example, what he told people at Esalen. His example was that a computer is a really dumb file clerk.
What does a file clerk do? He goes to the filing cabinet, takes out a card, takes it over here, does what it says, adds the total to this, puts the card back, takes the next one out and does that. That's actually the von Neumann principle for how a computer works. And in order to tell the clerk to do that you have to instruct him in excruciating detail how to do multiplications, or adds, or things like that. But the reason why it appears smart is because it goes so fast. "The inside of a computer is as dumb as hell, but it goes like mad," is what Feynman says.
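Feynman's dumb file clerk is essentially the fetch-execute cycle of a von Neumann machine. Here is a toy sketch of that loop, added as an illustration; the three-instruction "machine" is invented purely for the example.

```python
# Feynman's "dumb file clerk" as a toy fetch-execute loop: fetch a card
# (instruction), do the trivially simple thing it says, move to the next card.

program = [
    ("LOAD", 8),      # put 8 in the accumulator
    ("ADD", 34),      # add 34 to it
    ("PRINT", None),  # print the accumulator
    ("HALT", None),
]

def run(program):
    accumulator = 0
    counter = 0                        # which card the clerk is holding
    while True:
        op, value = program[counter]   # fetch the next card
        if op == "LOAD":
            accumulator = value
        elif op == "ADD":
            accumulator += value
        elif op == "PRINT":
            print(accumulator)
        elif op == "HALT":
            break
        counter += 1                   # go and get the next card

run(program)   # -> 42
```

Each individual step is as dumb as can be; it only looks clever because the machine does billions of such steps every second.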
So in this position it’s just a very smart dumb machine or are we going to get a
singularity? Well I think we will be down here for a long time in my view, but you never
know. And just up the road at the Allen Brain Institute human consciousness is just
about the last surviving mystery. We have 4 billion transistors on an Intel Xeon and we
only have 100 billion or so neurons. Why are we self aware, and conscious, and so on
and thinking? What makes these simple little neurons actually give us awareness?
That’s a really exciting problem, but it could be a very long way off before we discover
how to do it. Thank you very much for listening.
[Applause]
>> Tony Hey: So I finished more or less on time Amy. People are free to leave, but I am
happy to answer questions until I get stopped by Amy. Yes?
>>: You suggest reading your book to figure out the Imitation Game and meaning no
disrespect, why shouldn’t we read the Alan Turing paper?
>> Amy Draves: I didn’t say to figure out the Imitation Game; to get the history correct.
Alan Turing’s paper tells you about the Imitation Game, but it doesn’t tell you about the
history of Russian spies and all this sort of stuff or who actually built the bomb and the
fact that the first prototype was done by poles with assistance from the French. That’s
the sort of thing you will get from the book, Alan Turing is Alan Turing. Yes?
>>: You mentioned Richard Feynman and I have to ask you if you have a good story
about him?
>> Tony Hey: I have a good story about Richard Feynman, thank you for asking. So Feynman was at the Manhattan Project during the war, the atom bomb, and afterwards he went to Cornell and he thought there would be a nuclear disaster. It took a long time for him to get back to working normally, and he had these ways of visualizing things. And he went to a famous conference in the Poconos; it had Oppenheimer, it had Pauli, it had Teller, all these famous people from theoretical physics, and they were trying to understand how to do calculations in quantum electrodynamics — the theory that explains light and everything else. It's called relativistic quantum electrodynamics, and the trouble was that you had all these calculations that came out infinite, and people couldn't make sense of it.
So there were two presentations there: one by him and one by Julian Schwinger from Harvard, and Schwinger gave a very elegant mathematical presentation. He was really powerful at math, whereas Feynman waved his hands a bit and talked about electrons and positrons — positrons are anti-electrons — and he talked about treating positrons as negative-energy electrons going backwards in time. So you can see why his audience might have had a little problem with things like that. He came away enormously depressed, but a few weeks later he was at an American Physical Society conference in New York and there was a presentation by a Professor Slotnick. It was about the equivalence of two particular types of couplings of different particles to pions, and he concluded that they were different.
And after his talk the great Professor Oppenheimer stood up and said, "Professor Slotnick, thank you for your talk. I am afraid, however, I must tell you your calculations must be wrong, because they violate Case's theorem," and Slotnick said, "I'm terribly sorry Professor Oppenheimer, I have never heard of Case's theorem," and Oppenheimer said, "Don't worry Professor Slotnick, you can remedy your ignorance tomorrow by listening to Professor Case present his results." So Slotnick, as you can imagine, crawled off the stage and was not a happy bunny. Feynman couldn't sleep that night, and so he thought he would do the calculations that Slotnick did, using his own methods.
So he did them, he found Slotnick the next morning and he said, "Slotnick, you old chap, why don't we compare notes," and Slotnick said, "What do you mean, compare notes?" He said, "Well, I did your calculations overnight," and Slotnick said, "What do you mean overnight, it took me six months," and Feynman said, "I've got these methods, you see; anyway, why don't we just look." So they looked and Slotnick said, "What's Q squared?" Feynman said, "Oh, Q squared is just the scattering angle," and Slotnick said, "But it took me six months to do Q squared equals 0." Feynman said, "No problem, just put Q squared equals 0 in my formula and we can compare." So they did, and Feynman agreed with Slotnick.
Then Case got up and gave his talk about Case's theorem, showed that the two couplings were identical to all orders, and after his talk everybody applauded — and then, from the back of the room: "Yes, Professor Feynman?" "I redid Slotnick's calculations last night and I agree with Slotnick, so your theorem must be wrong," and then he sat down. That was when he
knew he had something and he was going to win the Nobel Prize. So, lots of funny
stories about Feynman. I knew him when he was happily married to his third wife, an
English librarian like mine, and he had won his Nobel Prize. So he was very relaxed. He
used to have lunch with the post docs and grad students most days. So it was kind of fun.
>> Amy Draves: So we have an online question which is, “Do you think the Connectome
Project will be able to successfully and fully map the brain whilst exposing the secret
sauce of consciousness?”
>> Tony Hey: So do I think the Connectome Project –. Okay, so funnily enough, when I was at Microsoft Research I funded the Connectome Project at Harvard for 4 years, and I didn't think I got anything out of it. What it is is cutting through slices of a rat's brain to try and find the connections between the neurons and the synapses. It sounds great; you are finding the wiring diagram of the brain, right? So everybody wants to know that. What we were providing was technology that could help automate that process, looking at the slices and finding the connections and so on.
So that was a valid thing for us to collaborate with them on. And that was okay, but I found it about as illuminating as imagining cutting through a tangled bowl of spaghetti. You really don't find very much, and of course it's a dead brain by that point, a narrow slice of a brain. You need to know what the signals are, what's happening, what these neurons do. So I think by itself it won't do it; maybe with MRI there will be progress, but I think there's a long way to go. But I did fund Connectome. Judith?
>>: Can I just switch now to youth, Tony?
>> Tony Hey: To whom?
>>: Youth, young people.
>> Tony Hey: Youth, youth, young people, yes.
>>: So I am sure you have been speaking around the world on your book and so on, and maybe you have spoken to young people. What do you think they are going to do with computing now that we are into this third generation, and now that maybe the syllabus has changed in the UK and so on?
>> Tony Hey: Well, first of all I think there is real opportunity for them to go and do things, but only if they are sufficiently educated and they understand that these are the skills they need to learn. That's one thing, and not everybody can do that. My real worry — I am reading a book by Martin Ford, it's called 'Rise of the Robots: Technology and the Threat of a Jobless Future' — is that these things are really smart. You are making really great stuff and it will take out lots and lots of jobs that we currently think of as needing intelligence. And it's not obvious this time. With every previous change — you know, the agricultural revolution, everybody moved off the fields, but then we had factories in the cities and so on.
Well, we have always replaced those jobs before; it's not obvious this time, as the book says, that we are going to replace them. So I think it's self-protection to some extent, but I think there are opportunities, especially in the internet of things, for doing interesting things that don't cost a lot. I try to show young people Tim Berners-Lee, and Sergey, and Larry, for example: all very young, and they did these things and made a difference in the world. And I think that's one of the things they can do; but to some extent it's self-protection, and working in IT and on computing technologies is a very wise thing to do in my view.
The book, Martin Ford's book, also has terrible examples, like a sushi house with no people there, because they have a machine to make the sushi and they bill you by the color of your plates; it's all automatic, you put your plate in the bin, they send you a bill, and the food goes to your table.
There isn’t even a manager on site and then apparently there is a new machine coming
out which can make gourmet hamburgers. So you don’t need someone to make the
hamburger. So anyway, there are all sorts of technologies that are threatening jobs, and I think that's the message I would get across. So it may be different this time, because of
you guys.
>>: This is probably a silly question, but I was curious about Moore's Law, because when you say this is how it's going to be, and then you have thousands of people working on that project, do you think that Moore's Law drove the capability because people [indiscernible]?
>> Tony Hey: Moore’s Law was an observation made in 1965 from those 4 points and
yes it has actually been kept true by the semiconductor industry. I think Gordon Moore
once said that it was becoming exponentially expensive to keep Moore’s Law true. And
they have done really huge innovation and they have this semiconductor roadmap which
maps out the technologies, but actually I do think we are getting to the end of Moore’s
Law, because I am a physicist originally and now we are getting down to atomic
dimensions and at the atomic dimension things don’t behave like the sort of billion of
balls that we like to visualize, therefore I think Moore’s Law is coming to an end.
There is already a slowdown, and actually what replaces it is extremely interesting. Is the industry going to be like the airlines? We cross the Atlantic, or at least I do, at about 600 miles an hour; it doesn't go faster, it hasn't gone faster for the last 30 years. Maybe we will meet an [indiscernible], and maybe you won't have to throw away your phone every two years. It changes all sorts of things.
So I think there could be some profound changes coming from the end of Moore’s Law,
but there will be new things to replace it. And the stuff here on FPGAs, for example, is an interesting thing. We have GPUs, and maybe we will have neural processing units — that's what IBM is talking about with Watson and stuff like that. So maybe there will be different types of specialized processors, and I think there is plenty of room for innovation; but Moore's Law as it stands will come to an end. It has been a huge effort by the semiconductor industry to make it so, and it has truly driven everything. Yes?
>>: I am curios, you haven’t mentioned the cloud.
>> Tony Hey: I did mention the cloud. I had a picture of the cloud and don’t forget that I
talk to the cloud.
>>: Is it the thought that “the cloud” is now becoming ubiquitous as a utility like running
water, or electricity, or any other sort of basic service?
>> Tony Hey: You wouldn’t think so if you went to Greenwater, Washington on the way
to Rainier. You can’t even get a cell phone signal.
>>: Yes that’s an RF machine, that’s another branch of engineering.
>> Tony Hey: Anyway, no, the cloud I think is important. I mean, one of the projects we did in Microsoft Research was called "Greek Fire". There were these fires on the Greek mainland which nearly reached Athens, which burnt 3,000 buildings down and killed 300 people or something like that, and if that happened on an island it could wipe out the whole island. So what they did, the Greek Microsoft employees, plus some people at Microsoft Research, together with the Greek universities, was develop an early warning system which took satellite data, real weather-forecasting data, the topography of the island, the population, the roads, where the fire trucks were and things like that, and if a fire started here it could tell you the most likely places it would spread, because of the wind direction, and you could do all those things.
Now, there is no supercomputer available on a Greek island. So that really did depend on having the cloud to do your computing, and those are the sorts of applications that are made possible by the cloud. So I think the cloud is great. I used to do startups with my staff at the university, and instead of having to get large amounts of angel funding to buy equipment, you can now just start immediately, and that makes a big difference. So I think the cloud is really important for all sorts of innovation. I didn't talk about the other killer application, which is social networking: things like Facebook, and Twitter, and Flickr all use the cloud, right. So those are clearly going to be important. So I'm sorry if I didn't mention it sufficiently. I think it's important; everybody now has clouds.
>> Amy Draves: Tony, thank you so much.
>> Tony Hey: Thank you very much Amy and thank you very much.
[Applause]