Host: Okay. Thanks for coming, everybody. It's an honor today to be hosting Jerry Kaplan. Jerry
received his Ph.D. in computer science at University of Pennsylvania. He did his doctoral work at the
intersection of natural language processing and database access. He's currently a fellow at the Center
for Legal Informatics at Stanford University and a visiting lecturer there in the Computer Science
Department.
He has a very interesting history. He co-founded four different Bay Area startups, two of which
became publicly traded companies. Among the several startups, Jerry founded the GO Corporation. I
remember that very well, in 1987. He was a visionary back then. He thought about this new kind of
touch-sensitive tablet that might replace notes and handwriting on paper some day. A few years later he
worked with AT&T on EO, which I also remember from back in those days, on a project that would
integrate GO's tablet designs with AT&T's growing cell phone business, creating an early notion of
what came to be known as the smartphone. So it was an interesting, prescient idea. So we tend to
listen to Jerry's ideas about the future as they seem to be on point just a few years in advance of when
things go big.
In the late 1980s, Jerry developed the first personal information manager called Lotus Agenda, working
with Mitch Kapor and Ed Belove at Lotus, where Jerry was a technologist. Speaking of Lotus, just a
side note: flying back from Boston yesterday morning, I rode, by coincidence, with Ray Ozzie, of all
people, and had a great time talking on that flight. In 1994 Jerry co-founded an online auction
company, Onsale, that ran the world's first online auction in 1995 and went public in 1997. That was
before even eBay came to the fore.
So beyond his entrepreneurial work, Jerry's been an author. In 1994 he wrote a best-selling non-fiction
book called "Startup: A Silicon Valley Adventure," starring Jerry Kaplan. I'm just teasing about that
part. But he probably could have, given all of his experiences there.
Today Jerry will be speaking about some ideas based on his latest book, "Humans Need Not Apply: A
Guide to Wealth and Work in the Age of Artificial Intelligence." But he's going to spring off those
ideas to talk about "AI, Think Again."
Jerry Kaplan.
[Applause]
Jerry Kaplan: Okay. Let me just get a sense of the audience here. How many of you are engineers?
Oh, wow. Great.
How many of you are doing work that is even tangentially related to the field of artificial intelligence?
Okay. Good.
How many of you are not engineers?
Okay.
How many of you have not raised your hands yet?
Anybody? Okay. That's called closure. It's a computer science concept.
All right. So this is great. I can talk to the engineers in the audience today. Okay. Let me start with an
observation. The common wisdom about artificial intelligence is that we're building increasingly
intelligent machines that may ultimately surpass human capabilities, that may steal our jobs, and that
might possibly even escape human control and take over the world.
Well, I'm going to present the case today that that narrative is both misguided and counterproductive.
A more appropriate framing, in my view, one that really is better supported by all the historical and
current evidence, is that artificial intelligence is just a natural extension of the long-standing effort to
automate tasks, an effort that dates back at least to the start of the industrial revolution. And I want to
talk about the consequences of viewing the field in that way and rethinking what the field is about.
So let me start with a history lesson. Here is a news flash: science doesn't proceed scientifically. It's
like the making of legislation or software. That was supposed to be "legislation or sausage." Never
make yourself laugh in the middle of your own presentation. That was funny, though. So much for my
talk.
Progress in science and technology is really messy and perhaps it's best done out of public view. But
more than we might want to believe, progress is also due to the clash of ideas and egos and different
institutions. And AI was no exception.
So let me start at the beginning. Dartmouth College, 1956. In the summer of 1956, a group of
scientists got together to have an extended working session, they called it, at Dartmouth. Now John
McCarthy. How many people know who John McCarthy is? Okay, interesting. He was a
mathematician. At that time he was employed at Dartmouth. And he hosted the meeting along with
Marvin Minsky. Marvin Minsky? More hands? Claude Shannon? Good. You guys know a little bit.
You really need to know about Shannon's work, for historical purposes. And Nathaniel Rochester.
He's a little more obscure. He headed AI at IBM.
Now, McCarthy decided to call his proposal "A Proposal for the Dartmouth Summer Research Project
on Artificial Intelligence." That was the first known use of the term artificial intelligence. But what's not
commonly known is why John McCarthy chose this catchy phrase of artificial intelligence. And it was
later, actually much later, that he explained his motivations. He said, "As for myself, one of the reasons
for inventing the term artificial intelligence was to escape the association with cybernetics. Its
concentration on analogue feedback seemed misguided, and I wished to avoid having either to accept
Norbert Wiener as a guru," he was the guy who invented cybernetics, "or having to argue with him."
So Norbert Wiener, as you may know, was a highly respected senior mathematician and philosopher at
MIT at the time. And McCarthy was a relatively junior professor, an assistant professor or something, at
Dartmouth. So to understand the original intention of the founding fathers of AI, it's worth reading a
little bit of the actual text of his conference proposal. He said, "The study is to proceed on the basis of
the conjecture that every aspect of learning, or any other feature of intelligence, can in principle be so
precisely described that a machine can be made to simulate it. An attempt will be made to find how to
make machines use language, form abstractions and concepts, solve kinds of problems now reserved
for humans, and improve themselves. We think that a significant advance can be made in one
or more of these problems if a carefully selected group of scientists works on it together for the
summer."
>>: Wow.
Jerry Kaplan: That's a rather ambitious agenda for a summer break.
Now, many of the Dartmouth conference participants had their own views of how best to approach AI.
But John McCarthy's specialty was mathematical logic. He was a logician. Now, in particular, he
believed that logical inference was the key to, as he put it, simulated intelligence. Now, his approach
later became known as the physical symbol systems hypothesis. And that, for those of you that don't
know, was the dominant paradigm for the field of artificial intelligence for at least the first thirty years
after that Dartmouth conference.
Now, I'm old enough to have known John McCarthy. You knew him too. Okay. Has anybody else
met John McCarthy? Oh, you're all youngsters. How about that.
Well, I was a post-doc at Stanford and he founded the Stanford Artificial Intelligence Lab. Now, John
was certainly a brilliant scientist. For example, he invented LISP. Who knows what LISP is? Oh,
good. That's good. You might not know that he also invented the concept of timesharing, which is
really quite interesting. But he definitely had the mad professor thing going, with the wild hair and the
eyes. Like the inventor of the flux capacitor time machine. Is everybody familiar
with him? Well, I'm glad to know that Professor Emmett Brown is better known than John McCarthy
here at Microsoft. At least you guys are up on your 80s movies or whatever. Was it 80s?
I'm confident that John McCarthy never expected that his clever name for the emergent field would
turn out to be one of the great accidental marketing coups of all time. Now, it has not only inspired
generations of researchers, including myself, but it spawned a virtual industry of science fiction and
Hollywood blockbusters and media attention and pontificating pundits, including myself. Now, had he
named the field something less rousing, like logical programming or symbolic systems, I doubt that
very many of us today would have heard of the field at all. It simply would have motored along
automating various tasks while we marveled at the cleverness not of the creations, but of the engineers
who were building those things. I'm getting a little bit ahead of my story here.
In any case, McCarthy's hypothesis that logic was the basis of human intelligence is at best highly
questionable. Today, in fact, most AI researchers have abandoned this approach and believe that it's
just plain wrong. The symbolic systems approach has been almost entirely set aside in favor of what is
now called machine learning, which is the dominant approach. But in my opinion this is throwing the
baby out with the bath water. Now, there were some really important advances in computing that came
out of the symbolic systems approach, including heuristic search algorithms, logical problem solvers,
game players, and reasoning systems.
Many of the results of this kind of technology are in wide practical use today. For example,
formulating driving directions, laying out factories and warehouses, and proving that complex
computer chips actually meet their specifications all use techniques that were originally part of the field of
artificial intelligence. And no doubt there are many more of these kinds of things to come.
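To make that concrete, here's a minimal sketch of heuristic (A*) search, the kind of symbolic-AI technique behind computing driving directions. The toy road graph and the straight-line estimates below are invented for illustration, not real map data.

```python
import heapq

def a_star(graph, estimate, start, goal):
    # frontier entries: (cost so far + heuristic, cost so far, node, path)
    frontier = [(estimate[start], 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, step in graph[node]:
            if neighbor not in visited:
                new_cost = cost + step
                heapq.heappush(frontier, (new_cost + estimate[neighbor],
                                          new_cost, neighbor, path + [neighbor]))
    return None

# four intersections; edge weights are driving minutes (made up)
roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
straight_line = {"A": 6, "B": 4, "C": 5, "D": 0}   # admissible guesses
print(a_star(roads, straight_line, "A", "D"))      # (8, ['A', 'C', 'B', 'D'])
```

The heuristic, here a straight-line guess at remaining distance, is what keeps the search from exploring every road; the same idea operates at vastly larger scale inside a navigation system.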
How many people here are working in machine learning? Okay. As I mentioned, machine learning is
certainly the focus of most current research. In some circles it's considered a serious candidate for the
real basis of human intelligence. Now, my opinion is that while it's a very powerful technology and it's
going to have a very important practical impact, it's also very unlikely to be the computational
equivalent of the human mind. But whatever your view, you might be surprised to learn a little more
about where the fundamental concepts that underlie at least the so-called connectionist or neural
network approach to machine learning come from. So I'm going to give you a quick history lesson
about that.
Look at this dude. Back in the 1950s, John McCarthy wasn't the only guy interested in building
intelligent machines. There was another highly optimistic proponent, a guy named Professor Frank
Rosenblatt, who was at Cornell. He was at Cornell, McCarthy was at Dartmouth, so they were
competitors in that sense.
And Rosenblatt was intrigued by some pioneering research that had been done by Warren McCulloch
and Walter Pitts at the University of Chicago. Those guys had shown that a network of brain neurons
could be modeled by, of all things, logical expressions. So Rosenblatt got the idea to implement those
ideas in a computer program, which he had his own branding for: he called it a Perceptron. He built an
early version of what we today would call a neural network. This is actually him; he looks like he's
twelve years old. Talk about a geeky looking dude. And that's actually the sensor array for his
Perceptron.
Now, not to be outdone by McCarthy and Minsky, who were making pronouncements about their point
of view of AI, let me read you some of the things from a real article in The New York Times in 1958.
Rosenblatt was quoted as saying that the machine he was building would be the first device to think as
the human brain, and that in principle it would be possible to build brains that could reproduce
themselves on an assembly line and which would be conscious of their experience. The article said
that Rosenblatt's work was "the embryo of an electronic computer that will be able to walk, talk, see,
write, reproduce itself and be conscious of its existence."
Now, here's the cool stuff. Let me move on. "It is expected to be finished in about a year at a cost of
$100,000." So much for the journalistic accuracy of The New York Times. I usually wind up debating a
friend of mine, John Markoff, who writes for The Times, and I love bringing this kind of stuff up.
Now, that might seem a little optimistic given Rosenblatt's actual demonstration: it had 400 photocells
connected to a thousand perceptrons, and after fifty trials, here's what it got to. It was able to tell
whether a card had a square marked on the right side or on the left side. That's what it was able to
do.
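For the curious, here's a minimal sketch of the perceptron learning rule on a simplified version of that demo. The 20-by-20 "photocell" array and the single lit mark per card are illustrative assumptions, not Rosenblatt's actual hardware.

```python
import random

WIDTH = 20                                  # 20 x 20 = 400 "photocells"

def make_card(side):
    # a dark card with one lit cell on the requested half
    if side == "left":
        x = random.randrange(0, WIDTH // 2)
    else:
        x = random.randrange(WIDTH // 2, WIDTH)
    y = random.randrange(WIDTH)
    cells = [0.0] * (WIDTH * WIDTH)
    cells[y * WIDTH + x] = 1.0
    return cells

weights = [0.0] * (WIDTH * WIDTH)
bias = 0.0
for trial in range(50):                     # "after fifty trials"
    side = random.choice(["left", "right"])
    card = make_card(side)
    target = 1 if side == "right" else -1
    activation = sum(w * c for w, c in zip(weights, card)) + bias
    output = 1 if activation > 0 else -1
    if output != target:                    # error-driven weight update
        weights = [w + target * c for w, c in zip(weights, card)]
        bias += target
```

The whole "learning" is that last pair of lines: nudge the weights toward inputs the machine got wrong. That rule, iterated at enormous scale, is the ancestor of today's neural network training.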
Now, on a more positive note, I can't help but notice that many of the wilder prophecies in this same
article have actually now become reality. And they sounded just as crazy as the other nonsense in
there.
He went on to say later, "Perceptrons will be able to recognize people, call out their names, instantly
translate speech in one language to speech or writing in another language." So he was right. It took
about fifty years longer than he had predicted.
Okay. So Rosenblatt's work was well known to at least some of the participants at that Dartmouth
conference. In particular, he attended the Bronx High School of Science. Anybody? No? From New
York? Okay. Anybody here from the Bronx? Okay. And Marvin Minsky was there at the same time.
They were actually a year apart. And they later debated these different approaches to AI in many
different forums. Minsky, though, in 1969 wrote a book called "Perceptrons," in which he went to
great pains to discredit, rather unfairly, I might add, a simplified version of Rosenblatt's work. Now,
this is the way science goes. Rosenblatt was actually unable to mount a proper defense. Can anybody
guess why? Nobody? Okay. He died in a boating accident in 1971.
So the book, however, went on to be highly influential, because there was nobody there to defend the
work and it effectively foreclosed funding and research on Perceptrons and artificial neural networks in
general for more than a decade after that.
So after fifty years, which is better? Symbolic systems or machine learning? The plain fact is these
approaches have different strengths and different weaknesses. In general, symbolic reasoning is more
appropriate for problems that require abstract reasoning, while machine learning is better for problems
that require what I'll call sensory perception, or extracting patterns out of large collections of noisy
data.
So you might ask the question: why was the symbolic systems approach dominant in the last half of the
20th century, while machine learning is dominant today? And the answer, mainly, is the machines on
which they run. In the early days of AI, the available computers just weren't powerful enough to
automatically learn anything of great interest. They had only a tiny, minuscule fraction of the
processing speed of today's computers and only a vanishingly small amount of memory compared to
what we're used to today. It's literally a factor of a million in these things.
But more importantly for machine learning, there weren't sources of machine-readable data, I think it's
on the slide, to learn from. Most communication was on paper. You didn't have corpora of electronic
information. And for realtime learning, the data from the sensors was also very primitive at that time,
or was only available in analogue form, which was very difficult to process digitally. So you had four
trends: computing speed, memory, the transition from physically to electronically stored data, and
low-cost, high-resolution digital sensors. These were the prime drivers in the refocusing of the field
from symbolic reasoning to machine learning.
Okay, now let me move on to some probably controversial points. Can machines think? So what is
artificial intelligence really? Can machines really think? Now, I will tell you that after a lifetime of
work in the field, in a great deal of reflection on this question, my reluctant and disappointing answer is
simple. No. Or at least they can't think, or they don't think, in the way that people think. So far at least
there's no obvious road map from here to there, from what we're doing in the labs to this concept of
human-like thinking. Machines are not people. And there's just no persuasive argument that they're on
the path to becoming genuinely intelligent sentient beings, despite what you guys see in the movies.
You might say, wait a minute, can't they solve all kinds of complex reasoning and perception problems?
Here's my answer today. Sure they can. They can perform tasks that humans solve using intelligence.
But that doesn't mean the machines are intelligent. It merely means that there are many tasks that we
thought required general intelligence that are in fact subject to solution by other more mechanical
means.
Now, there's an old joke in AI, that once a problem in AI is solved, it's no longer AI. Anybody heard
that? Okay, I've got a couple of nods on that one. Now, personally, I don't think that's a joke anymore.
I'm going to go through a number of the signature accomplishments of artificial intelligence and look at
them from this different perspective.
Let me start with computer chess. 1997. Now, maybe you guys weren't around in the field back then,
but for decades the archetypal test of whether AI could ever come of age was whether a machine could
ever beat the world's chess champion. The reason is that for a very long time, chess was considered
the quintessential demonstration of human intelligence. So surely when a computer was the world
chess champion, AI would have arrived; the smart machines are here! Oh, my God! What are we going
to do?
What happened in '97, as you probably know, is that IBM's Deep Blue beat then world champion Garry
Kasparov. And lots of ink was spilled in the media lamenting the arrival of superintelligent machines,
and there was a lot of handwringing over what this meant about the future of mankind. But the truth is
it meant nothing, other than that a lot of clever programming and the increased speed of computers
could be used to play chess. The techniques that they used have applications to a lot of similar
problems, similar classes of problems. But they weren't harbingers of the robot apocalypse. That just
didn't happen.
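For a feel of what "clever programming plus speed" means here, this is a minimal sketch of minimax search with alpha-beta pruning, the classic game-tree technique chess programs were built on. The toy race-to-10 game below is a stand-in for chess; Deep Blue's actual evaluation function and special-purpose hardware went far beyond this.

```python
TARGET = 10   # toy stand-in for checkmate: reach 10 or more and you win

def alphabeta(total, alpha, beta, maximizing):
    if total >= TARGET:
        # the side that just moved reached the target, so it has won
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for step in (1, 2, 3):              # the legal "moves"
            best = max(best, alphabeta(total + step, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                       # prune: opponent avoids this line
        return best
    best = float("inf")
    for step in (1, 2, 3):
        best = min(best, alphabeta(total + step, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

print(alphabeta(0, float("-inf"), float("inf"), True))  # 1: first player can force a win
```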
So what did people then say? They said, well, sure, computers can play chess, but they would never be
able to drive a car. That requires a broad understanding of the real world, the ability to make
split-second judgments in all kinds of chaotic circumstances, and a lot of common sense. Well, as you
know, that bulwark of human supremacy was breached in 2004, notably with the DARPA Grand
Challenge for autonomous vehicles, and it's soon coming to a parking lot near you. And yet, self-driving
cars, they do just that. They drive the cars. They don't build houses, they don't cook meals, they don't
make beds. That's not what they do.
So computers can play chess and now they can drive cars. The next thing people said is, well, they
could never play Jeopardy. That requires too much world knowledge and understanding metaphors and
all kinds of clever word play. Well, thanks again to the ingenious people at IBM, that hurdle was also
cleared. As you guys probably know, IBM's Watson beat Ken Jennings, the reigning Jeopardy
champion, in 2011.
Now, what is Watson? The reality is it's a collection of facts and figures encoded into a cleverly
organized set of modules that can quickly and accurately answer various types of common Jeopardy
questions. Watson's main advantage over the human contestants was that it could ring in before they
could when it estimated a high probability that its answer was correct. My God, the machine was
faster than the people at pressing a button. Duh.
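That buzz decision can be sketched in a few lines. The candidate answers, confidence numbers, and threshold below are invented for illustration, not Watson's actual internals.

```python
def should_buzz(candidates, threshold=0.85):
    # candidates: list of (answer, estimated probability it's correct)
    answer, confidence = max(candidates, key=lambda pair: pair[1])
    return answer if confidence >= threshold else None   # None: stay quiet

print(should_buzz([("Nebraska", 0.93), ("Kansas", 0.04)]))   # Nebraska
print(should_buzz([("Nebraska", 0.40), ("Kansas", 0.35)]))   # None
```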
Okay, now I don't want to dump on Watson. It's a remarkable and very sophisticated knowledge base
retrieval and inference system. And it was honed at that time to a very particular problem set. It's a
very powerful and valuable technology. But here's the problem that I see. In my opinion, IBM didn't
do the field any favors by wrapping Watson in a theatrical suite of anthropomorphic features. There
was really no technical reason to have the system say its responses in a calm didactic tone of voice:
"Oh, Alex, my answer is Nebraska." And they didn't have to put up a graphic of swirling lights
suggesting the machine had a mind and was thinking about the problem. They didn't need to do that.
Those were incidental adornments to what really was a tremendous technical achievement.
Here's the problem. Without a deep understanding of how these systems work, and with humans as the
only available avatars and exemplars with which to interpret the results, the temptation to view this as
human-like is irresistible. But it isn't.
So okay, what about machine learning systems? Let me bring this home to some of you guys in the
room. Aren't they more like human intelligence? Answer: unfortunately, not really. In reality, the use
of the term neural networks is little more than an analogy, in the same sense that airplane design was
inspired by birds. Consider how machines and people learn. You can teach a computer to recognize
cats by showing it a million images. Or you can simply point one out to a three-year-old and get the
same job done. "That's a cat." "Oh." And now they know. It's miraculous. So obviously humans and
machines do not learn the same way.
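As a concrete contrast, here's a minimal sketch of the machine's side of that bargain: a classifier that only "knows cat" as an average over many labeled feature vectors. The features and data are invented; real systems use millions of images and far richer models.

```python
def train_centroids(examples):
    # examples: list of (feature_vector, label) -> mean vector per label
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in vec] for lab, vec in sums.items()}

def classify(centroids, features):
    # nearest centroid by squared Euclidean distance
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# imagine a million of these, e.g. (ear pointiness, whisker score) pairs
data = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
        ([0.1, 0.2], "not cat"), ([0.2, 0.1], "not cat")]
print(classify(train_centroids(data), [0.85, 0.75]))   # cat
```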
So let me look at another example: machine translation. Now, tremendous strides have been made in
this field in the past few years, mainly by applying statistical machine learning techniques to large
bodies of parallel text. Anybody here doing machine translation? Okay. I'm about to dump all over
your field. Not true; it's really quite remarkable what's been done. But think about this. How do
people perform this difficult task? They learn two or more languages, along with the respective
cultures and conventions associated with those languages. Then they read some text in one language,
they understand what it says, and they render the meaning as closely as possible in the other language.
Now, machine translation, as successful as it is today, bears almost no relationship to the human
translation process. Its success simply means that there's another way to approximate the same kind of
results mechanically.
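The statistical core of that mechanical approach can be caricatured in a few lines: count which words co-occur across a parallel corpus and translate by the most frequent pairing. The three-sentence corpus is invented; real systems use vastly larger corpora and far richer models.

```python
from collections import Counter

parallel = [("the house", "la maison"),
            ("the car", "la voiture"),
            ("my house", "ma maison")]

counts = Counter()
for english, french in parallel:
    for e in english.split():
        for f in french.split():
            counts[(e, f)] += 1            # co-occurrence, not understanding

def translate_word(e):
    pairs = [(f, n) for (ew, f), n in counts.items() if ew == e]
    return max(pairs, key=lambda p: p[1])[0]

print(translate_word("house"))             # maison
```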
Now, I hope I have a slide on this. I do have a slide on this. Okay. Smartphones. We use the term
smart for the phones that all of us are carrying. These smartphones are actually reminiscent of the
capabilities of the computer on the Star Trek Enterprise, or maybe HAL 9000, but hopefully without
the homicidal intent. "Hey, Siri." You can talk to your phone and it talks back. And it becomes more
capable every day as you download new apps and you upgrade the operating system on these phones.
But do you really think of your phone as getting smarter in the human sense when you download an
app or you enable the voice recognition? Surely not in the same sense that you get smarter when you
learn calculus or when you learn philosophy. My view is the modern smartphone is the electronic
equivalent of the Swiss Army knife. It's a bunch of different information processing tools that are
bound together in a single unit, and they take advantage of certain commonalities like access to detailed
maps or access to the internet or things like that. You know, you have one integrated mind, and your
phone has no mind at all. There's just nobody home.
So machines perform an increasingly diverse array of tasks that people perform by applying their
native intelligence. Does that mean machines are smart? Well, let's look at how we might measure
supposed machine intelligence. We start by looking at the question: how do we measure human
intelligence? One common method is an IQ test. But even for humans, this is a deeply flawed
concept. We love to measure and rank things with numbers. But let's face it, reducing human
intelligence to a flat linear scale is highly questionable. Little Sally did two more arithmetic problems
than Johnny did in the time that was allotted, so her IQ is 7 points higher than his. Bull. It's nonsense.
Now, this is not to say that some people aren't smarter than others. That's certainly true. It only says
that simple numerical measures provide an inappropriate patina of objectivity and precision.
You know, as psychologists are fond of pointing out, there are many different kinds of intelligence:
social, emotional, analytic, athletic, musical. You know, what does it mean? I got this off
the internet. I did not make this picture. What does it mean to say that Mozart and Einstein have the
same IQ? Is that a meaningful statement?
Now, suppose we give the same intelligence test to a machine. Wow. It took only one millisecond to
accurately complete the same sums that took Sally and Johnny an hour. The machine must be super
smart. It also outperformed humans on memory tests and logical reasoning tests and God knows what
else. Maybe it can also shoot straighter and read faster and outrun the fastest human. Oh, my God, the
robots can outperform us! What are they going to do?
So are the robots really taking over? Well, by this logic, the machines took over a long, long time ago,
whether they were smart or not. They move our freight, they score our tests, they explore the cosmos,
they plant and pick most of our crops, which I'll get to, they trade stocks, they store and retrieve our
documents, they manufacture just about everything, including themselves, sometimes with human help
and sometimes without human intervention. All those tasks at one time or another
were thought, oh, people have to do that, that's what people do.
But what are they not doing? They're not taking over our businesses. They're not marrying our
children. They're not watching the Sci-Fi Channel when we're not around. That's not what they do.
And that's sort of the public view of what artificial intelligence is about.
So what's wrong with the traditional picture of AI? You know, we can build machines and we can write
programs that can perform tasks that previously were thought to require human intelligence and
attention. But there's really nothing new about that. Each of the new technical breakthroughs, from the
invention of the plow to the CGI rendering of Rapunzel's flowing hair in an animated movie, is really
better understood as an advance in automation, not a usurpation of human primacy. We can
program machines to solve very complex problems and operate with increasing independence, but as a
friend of mine once observed, a vehicle will really be autonomous when you instruct it to drive you to
the office and it decides to go to the beach instead.
So my point is simple. Lots of problems that we think require human intelligence to solve actually
don't. There are other ways to solve them and that's what the machines are doing. You know,
calculating used to be the province of highly trained specialists. You guys know that? A calculator
used to be a profession; you'd go see a calculator when you needed to get something done. Now it
takes a 99-cent calculator. You know, making money in the stock market used to be the province of
experts. Now the majority of trading is actually done and initiated by computers. Same thing for
driving directions, picking and packing orders in warehouses, designing more efficient wings for
airplanes.
You don't really have to worry about the robots taking over, though, because robots don't have feelings,
except in the movies. And, this is news to most people, robots aren't male or female, despite what you
see. You can go see Ex Machina; you guys seen that? Was it Ava? It was a female robot. What does
that mean, it's a female robot? It's great for fiction. But it has no meaning. It's crazy.
So robots don't have independent goals and desires. A robot that's designed to wash and fold laundry
isn't going to wake up one day and say, "Oh, my God what a fool I've been. I really want to play the
great concert halls of Europe." It's not going to happen. Machines aren't people.
So just as we can teach bears to ride bikes, we can teach chimps to use sign language, we can build
machines that perform tasks the way that people do, and we can even build them to simulate human
emotions. We can make them say "ouch" when you pinch them. You can build a little robotic dog that
wags its tail when you pet it. But there is simply no compelling reason to believe that this bears any
meaningful relationship to human behavior or human experience. Machines aren't people, even if we
build them to talk and walk and chew gum like we do.
Now, I've given you hopefully a new way to think about artificial intelligence. So let's talk about the
implications of this perspective. Because it's pretty profound. Here are some potential future headlines.
New York Times, "Robots steal jobs at record pace." And The Wall Street Journal, "Profits rise as
worker productivity soars." Now, it's certainly true that artificial intelligence is going to have a serious
impact on labor markets and employment, but perhaps not in the way that most people expect. If you
think of machines as becoming ever more intelligent and threatening our livelihoods, the obvious
solution is to prevent them from getting smart and to lock our doors and arm ourselves with tasers, get
that robot before he takes my job.
Well, the robots are coming. But they're not coming exactly for our jobs. Machines and computers
don't perform jobs. What they do is they automate tasks. And except in extreme cases, you don't roll in
a robot and show an employee to the door. Instead the new technology hollows out the activities of the
worker and changes the jobs that the people perform. Now, here's an interesting observation. Even
experts spend most of their time doing mundane repetitive tasks. They review lab test results if they're
doctors; they draft simple contracts if they're attorneys; they write straightforward press releases; they
fill out paperwork and forms.
On the blue-collar side you've got lots of workers doing things like laying bricks, painting houses,
mowing lawns, driving cars, loading trucks, packing boxes, taking blood samples, fighting fires,
delivering mail, directing traffic. And many of these intellectual and physical tasks that I've just
mentioned require straightforward logic or simple hand/eye coordination of some kind.
And here's where the problem comes in. The new AI technologies are poised to automate an awful lot
of those tasks. So if your job involves a narrow, well-defined set of duties, like the bricklayer over
here. I got this off the internet. What do bricklayers do? They lay bricks, according to, I don't know,
the Bureau of Labor Statistics or something. So if you have a narrow set of tasks that you perform, and
many jobs do, then indeed your employment is at risk.
If you have a broader set of responsibilities or if your job requires a human touch, such as expressing
sympathy or providing companionship, I don't think you really have much to worry about. And just
look at these, here's a licensed practical nurse's list of duties. I got this list. I just want to point out, I
really wondered about this: monitoring fluid and food intake and output. Output? Okay. Moving patients,
providing emotional support. "Oh, I am so sorry. I hope you feel better soon." I mean you can't have a
robot take over that function. It just doesn't make any sense.
So most jobs involve a mix of general capabilities and specific skills. And as machines can perform the
more routine activities, the plain fact is that fewer people are needed to get these jobs done. So what
does that mean? It means that one person's productivity enhancing tool is another person's pink slip.
Or to put it a little more lightly, it just means that there won't be as many job openings for people to
perform those particular kinds of jobs.
Now, this is what economists call structural unemployment. So automation, whether it's driven by
artificial intelligence or not, also changes the skills that are necessary to perform the work. If an
oncologist no longer needs to read X-rays or an accountant operates a computer program instead of
doing calculations by hand, you need different aptitudes, different talents and you need different kinds
of training to get the job done. So this is what, as I say, is called structural unemployment: the
mismatch between the skills workers have and the skills employers need. The more pressing problem
posed by AI for workers is not so much the lack of jobs; it's the training that's required to perform
those jobs.
Now, historically, as automation has eliminated the need for workers, the resulting increase in wealth
eventually generated new kinds of jobs that picked up the slack. And I see no reason that this pattern is
not going to continue. But the keyword here is eventually.
Now, 200 years ago, more than 90 percent of the U.S. population worked in agriculture. Think about
that for a second. Basically, all anyone did was grow and prepare food. To work meant to be out in
the field. Now, today less than two percent of the population is required to feed everybody. Oh, my
God. Is everybody out of work? Of course not. We've had plenty of time to adapt, and as our standard
of living has relentlessly increased, which I'll get to in a few minutes, new opportunities have always
arisen for people to fill the expanding expectations of our ever-richer society.
Now, try to imagine this. If an average person from 1800 could time travel and see us today, they
would think we've all gone nuts. You know, why wouldn't we just work a few hours a week, buy a sack
of potatoes and a jug of wine, build a shack in the woods, dig a hole for an outhouse, and live a life of
leisure? They would think we had gone to heaven.
Well, somehow, our rising expectations have magically kept pace with the level of wealth that we
have. So what are the jobs of the future? You know, I really don't see why we can't become a society
of, for picking examples here, competitive gamers or artisans of various kinds, or personal shoppers,
flower arrangers, tennis pros, and party planners. No doubt a lot of other things that don't exist yet.
That's really possible. And in fact it's likely. A hundred years from now, if we were transported there,
they'd go, "You guys are working? We're playing video games all day and getting paid for it. Or you're
just brewing beer in your backyard. That's not a job." Well, that's a job then.
So who's going to do the real work? Well, our great-grandchildren may think our idea of work is very
21st century. It may take only 2 percent of the population, assisted by some pretty remarkable
automation, to accomplish what takes 90 percent of our labor today. So what? It may be as important
to them, because they're going to be a lot wealthier than we are, to have fresh flowers in the house each
day, in the same way that we think it's important to take a shower every day. Which, by the way, 70
percent of the people in the U.S. take a shower every day. The other 30 percent work in another building.
Now, you might know that, interestingly enough, in 1900, people bathed once a week. That was the
standard. Once a week you'd go and take a bath. So I mean today you'd think that was ridiculous. But
that shows how our expectations begin to change.
Two of my kids just got their first jobs. And I couldn't help but notice that their chosen professions
didn't even exist ten years ago, at the time when they were being trained and going to school. Now
one does social media promotion for restaurants. She makes a great living doing that. The other one
works at an online education company. You guys know Udacity? So, yeah, she works there.
So here's the problem. That's all the good news. But the bad news is it takes time for these transitions
to happen. And the new wave of artificial intelligence-enabled applications is likely to accelerate the
normal cycle of job destruction and creation. So we need to find new ways to retrain the displaced
workers.
Okay, Jerry, you told us what the problem is. Tell us how we're going to solve it. I'm just going to give
you a couple of suggestions. I'm not saying these are the answers. But I want to explore the kinds of
thinking that we should be engaged in to figure out how to solve these problems.
So one solution that I suggest in my book, which is available for sale at a discount in the back of the
room, is the idea of a job mortgage. Now, basically, people should be able to learn new skills by
borrowing not against their bank account or whatever, but against their future earnings capacity, just
like we do with house mortgages. Today our vocational training system is really messed up, mainly
because the government is the lender of first resort for student loans. So the skills that people are
learning are disconnected from the economic value that those skills are creating. We're not really
investing in education, even though we use that term. We're handing out money to people to learn
things that won't even help them pay it back. You can't get a job? Too bad. Your student loan is still
due. It's a big issue today.
So we need to create new financial instruments, just as we did to encourage homeownership with the
home mortgage. That was a creation, a policy creation. We should do the same thing for displaced
workers and for training. And that benefits not only the people whose skills have been obsoleted, but
it benefits society in general. We're not investing in the right way to solve this problem.
Okay, now I'm just about done. Let me talk about another very serious problem that we have today,
and it's about to get a lot worse, in my opinion. It's another dark cloud, and it's unfortunately in large
part a consequence of AI. It's true that technology and automation make society richer. But there are
serious questions about whose pockets are filled by that increased wealth.
You know, those of us here in high tech tend to believe that we're developing dazzling technologies for
a needy and grateful world. And indeed, we've made tremendous progress on raising the standard of
living for the very poorest people worldwide. And that's really a great accomplishment.
But for the developed world, where we happen to live, the news is not so good. Up until around 1970,
we found ways, on and off, to distribute at least some of these economic benefits broadly across
society, the rise of the mythical middle class. But since then, it doesn't take much to see that those days
are over. And all the evidence shows that since the 1970s, the vast majority of people are not
participating in this increase.
Now, as economists know, automation is the substitution of capital for labor. And I'm here to tell you
today that Karl Marx was right. The struggle between capital and labor is a losing proposition for the
workers. What that means is that the benefits of automation naturally accrue to those who can invest in
the new technology and the new systems. So this is Karl Marx's capitalist. Here are Jerry Kaplan's
capitalists. Anybody recognize these guys? You know who that is? Nobody? Paul Allen, okay.
Thought you might know. Yeah, he's good on guitar. I play piano. We've played together.
Now, why shouldn't the technologists who invest in this stuff gain the benefits? People aren't really
working any harder. And I'm sorry to say we really aren't natively any smarter than we were a
generation or two ago. In fact, working hours have decreased slowly but consistently for the last
hundred years. Most people don't realize that. The reason we now do more with less is that the
business owners invest some of their capital into productivity improvements, and they reap the
rewards. That's what happens.
So what does all this have to do with artificial intelligence? The technologies that are on the drawing
boards right now, right here at Microsoft and other labs, are quickening the hearts of entrepreneurs and
investors everywhere, as you're probably well aware. And they're the ones who really stand to benefit.
And while they're able to export more and more of the risk to the rest of society, workers are less secure
outside of some environments like this one, wages have stagnated, and pension funds can go bust.
We're really raising a generation of contractors in a gig economy, whose variable working hours and
health benefits are their own problem. Of the two kids that I mentioned earlier, one of them's a
contractor. She thinks she works for the company. But she says they may extend her contract for
another year as a contractor. No benefits, et cetera. And she can of course have her hours cut back at
any time.
Now, some people have the mistaken impression that the free market is going to address these
problems, if only we could get the government out of the way. Some of you may have seen the
Republican debates recently. This theme is rampant, universal among the group of people in those
debates.
But I'm here to tell you something different. Our economy is hardly an example of unfettered
capitalism. And believe me, I know. The fact is there are all sorts of rules and policies that drive where
the capital goes, how it's deployed, who gets the returns, how it's taxed. And the problem is that our
economic and regulatory policies have been decoupled from our social goals. So we have to fix this.
We have to fix this. And the question is how.
Now, going back to the good news. Most people think of this problem statically, that things are the
way they are today and we've got to fix the way they are today. But that's not really right. If you see
enough frames of this movie, and I'm old enough to have seen several of the frames, and you go back
and look at it historically and project it forward, you find something very, very interesting. The U.S.
economy, our overall level of wealth, doubles every 40 years. And it's done that pretty reliably since
the start of the industrial revolution in the 1700s.
Now, in 1800, remember I was talking about 90 percent of the people being in agriculture, the average
household income in the U.S. was $1,000. And that's inflation adjusted. It actually was $1,000. So
that's the same as it is today in Malawi and Mozambique. And probably not coincidentally, their
economies look surprisingly similar to what the U.S. economy looked like 200 years ago.
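The arithmetic behind those two claims is easy to check; the 215-year span to roughly the present is an assumption for illustration.

```python
rate = 2 ** (1 / 40) - 1               # growth rate implying a 40-year doubling
print(round(rate * 100, 2))            # ~1.75 percent real growth per year
print(round(1000 * 2 ** (215 / 40)))   # $1,000 in 1800 compounds to ~$41,500
```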
But I doubt the people in Ben Franklin's time thought of themselves as dirt poor, merely scratching out
an existence. So what this means is that 40 years from now there will literally be twice as much wealth
to go around. So the challenge for us is to implement policies that are going to encourage that new
wealth to be more broadly distributed. We don't have to steal from the rich and give to the
poor. We need to provide incentives for entrepreneurs and businesses to find ways to benefit an ever
larger swath of society. That's what we need to do.
So in my book, which is available at a discount in the back, I give one suggestion. I don't know if it's
right or wrong. You can argue about it. But directionally, I think these are the kinds of things we
should be thinking about. Why don't we make corporate taxes progressive, based on how broadly
distributed a company's equity is? The more stockholders a company has, suitably defined, and this
can be done, the lower its tax rate. Now, you might not realize it, but the equity of Microsoft is among
the most widely distributed of any company in the world.
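Purely as an illustration of the shape of such a policy, here's a tiny sketch. The rate band and the "breadth" measure are invented placeholders, not a worked-out proposal.

```python
def corporate_rate(breadth, top=0.35, floor=0.15):
    # breadth: 0.0 (a single owner) .. 1.0 (maximally distributed equity)
    return top - (top - floor) * breadth   # wider ownership, lower rate

print(corporate_rate(0.90))   # widely held, Microsoft-like: 0.17
print(corporate_rate(0.05))   # family-held, Bechtel-like: 0.34
```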
Now, let's compare that to Bechtel. You may not know it, but Bechtel is a building and construction
company, a huge multi-billion-dollar company that does huge projects all over the world, and it's
owned by a family. So they should pay a higher corporate tax rate, in my view, than Microsoft should.
What that would mean is that when you pay a different tax rate, you have a competitive advantage in
the marketplace. And believe me, the business owners will figure out a way to make sure that their
equity, and the economic benefits their corporation provides, are widely distributed.
Now, progressive policies like this can promote our social goals without stifling economic growth. We
just need to get on with it and stop believing the myth that unfettered capitalism is the answer to the
world's problems.
So let me wrap things up and we'll take a few questions. I don't want you to think I'm anti-AI. Nothing
is further from the truth. I think its potential impact on the world is similar, and I'm not exaggerating
about this, to the invention of the wheel. Now, we need to think of it not as some sort of magical
discontinuity in the development of intelligent life on Earth, but as a powerful collection of automation
tools with the potential to transform our livelihoods and to vastly increase our wealth. But the challenge
that we face is that our existing institutions, without some enlightened rethinking, run a serious risk of
making a mess out of this opportunity.
So I'm supremely confident that our future is very bright. I think it's much more Star Trek than it is
Terminator. But the transition to get there may be protracted and brutal unless we pay attention to these
issues that I've raised today. We have to find new and better ways to ensure that our economy doesn't
motor on going faster and faster and faster while throwing more and more people overboard. You
know, our technology and our economy should serve us, not the other way around.
So thank you very much.
[Applause]
Jerry Kaplan: Word from my overseers in the back is that I can take a few questions. Okay, if people need to
leave, I won't be insulted. I'll just dock your pay.
Yes?
>>: So I'm going to be a bit of a troll here. You threw around the term general intelligence quite a
bit. Can you explain what you mean by that?
Jerry Kaplan: Well, the interesting point is nobody knows what it means. AGI, ever heard that,
artificial general intelligence? There are people, let me not get too personal or name names, but several
people have put out books about the singularity and what we're going to do when these machines
exceed human intelligence. I'm here to tell you this is complete nonsense. It's based on this mythology
that we're building more and more intelligent machines and some day they will be generally intelligent.
And when they're generally intelligent, man, we're in trouble. They're going to eat up all the ice cream.
That's not what's going to happen.
So I have no idea what it means. And that's kind of the point. What does it mean to be generally
intelligent? We have some sense that people are, by definition, that's what it means, I guess. But
machines generally intelligent? I'm not seeing any of it. And I've been at this for a long time. They're
very powerful and they can do lots of things and they can automate tasks. But the robots aren't coming.
Any of you guys seen the TV show "Humans"? Nobody? You know, in a lot of movies you have two
kinds of robots. There are the mindless zombie robots, and then there are the ones that became
self-aware. Why? I teach philosophy and ethics of artificial intelligence at Stanford, and I'm telling you,
that's a ridiculous distinction. There isn't some boundary that we have any idea about. We don't
understand what it means for people to be conscious and intelligent. So when you see this, hang on,
they always have these swirling lights in the movies, "bling, oh, now I get it." That's like, you know,
talking about vampires and werewolves. There's just nothing going on that I see as an engineer and
entrepreneur that makes that anything other than wild science fiction.
Okay. Yes, sir?
>>: Just sort of following on this, people like Stephen Hawking come out and say, "Oh, my God,
artificial intelligence is going to be [indiscernible]." What do you have to say about that?
Jerry Kaplan: Well, I don't like to talk about this when I'm being recorded. He's an amazing man,
obviously. I won't argue with him about radiation from black holes. Let him not argue with me about
artificial intelligence. What does he know about this? He's a physicist. You know, we're all computer
scientists. The fact that he said something doesn't make it true. In the same place where he went to
school, I hope I get this right, it was either Oxford or Cambridge, I think it was Oxford, there was Lord
Kelvin, who was a prominent guy. You can't argue with what he did. It was amazing what he did, his
laws of thermodynamics. I mean, that is just off the charts. He said in 1897, I hope I have my facts
right, that heavier-than-air flight is impossible. And it was eight years later, in 1905, when the Wright
Brothers flew a heavier-than-air aircraft. And people said the same thing. They were, "Oh, my God,
Lord Kelvin says you can't fly. You know, he's a physicist. He must know what he's talking about."
Well, a couple of guys at a bicycle shop went ahead and did it. They didn't hear about it. They didn't
get the memo.
So I'm sorry for Mr. Hawking, you know. He's a wonderful scientist and a brilliant guy. But stick to
physics. That's my answer.
Yes, ma'am?
>>: So you mentioned that robots can't actually do the task of emotional support.
Jerry Kaplan: Yes.
>>: But there are various instances, like there's Xiaoice, in fact, that you can talk to, and it kind of
mimics human emotion and makes you feel like someone is there.
Jerry Kaplan: Yes.
>>: And things like that. I feel like there are a lot of instances. And also in movies too. Like artificial
intelligences, even though they may not actually consciously have emotion, as long as we feel that
they do.
Jerry Kaplan: Yes, this is a subtler, more complex issue. I've oversimplified this a little bit. There are
many indications that people do attribute emotional states to machines. That's a good way to put it.
People fall in love with their cars. And there are many programs that appear to care and appear to
express empathy. And there are possible uses for that. You know, I'm not necessarily thinking this is a
good idea, but you know we're going to have a lot of elderly people around, and they may be talking to
robots to keep them company in twenty years, particularly in places like Japan where the population is
changing, and it might happen in China as well. But, you know, do you really want your grandmother
sitting there going, "Hey, let me show you a picture of my grandchild," and the robot saying, "Oh,
that's a beautiful child. I've never heard that story before"? I mean, maybe it provides them some kind
of comfort. But for me, the key issue is whether they are being fooled. If you get comfort out of that,
God bless you. Go ahead and do it. But if you're thinking that's because there's somebody home or
there's somebody that cares about you, when it's a program written by this guy over here who does
machine learning, you know, I'm against it.
I think that the issue is full disclosure and the opportunity to fool people. And we're going to be facing
a lot of these machines. You're going to be facing very attractive male and female machines that say,
you know, "I really wish you'd buy this new Tesla. My kids are, you know, they're growing up and they
really need my commission on this so they can go to school." And robots will be talking to you like
that. And believe it or not, a lot of people are going to fall for that bull. But do we think that's a good
thing? Of course not. It's an extension of advertising techniques put on steroids, and they will work.
But I think there's an argument that if you're fooling people to get them to do things that are really not
to their benefit, then you're doing the wrong thing.
Yes, sir?
>>: So you've already talked a little bit about how you denounce the idea of super intelligence. That
was one of the things that Hawking was concerned about when he signed this open letter on artificial
intelligence that Gates and Musk also signed. One of the other things they also brought up was the idea
of artificial intelligence being used in ways that are not socially conscious, like the idea of automated
weapons systems, for example. I wonder if you have any take on that idea.
Jerry Kaplan: Well, I do. I have another couple of hours of take on that issue. There are very serious
issues. We have to frame them in the right way. If our view is, you know, the robots are coming alive
and they're coming to get us, we'd better get our guns and shoot the robots, we're framing it the wrong
way. The appropriate role of automation, and the social control over its use, social constraints over its
use, is a very, very important topic. And we're going to have a lot of trouble with this, particularly with
military applications, which raise very, very serious ethical and practical issues. That letter that you're
referring to was an open letter. How many people here signed that letter? Nobody? The word goes
around the country that, like, thousands of AI researchers have signed this letter, and you can't find one
at Microsoft. Yes.
>>: [Indiscernible] director of MSR actually signed it too.
Jerry Kaplan: I'm not saying they're wrong to sign it. The letter was kind of like we don't want to do
bad things; right? And anybody can do an internet petition, which is basically what it was.
But look, the instincts behind that are good and they're justified. But it's a way more complex subject
as to whether our government should be involved and to what degree in developing these technologies,
because they could fall into the wrong hands, they're subject to being hijacked, there's the possibility of
fomenting a new kind of arms race. It's a very complex subject.
On the other hand, we have a right to protect ourselves. And if we're not going to develop systems and
we're going to leave it to some hostile party who's going to get ahead of us in that, you know, that's a
problem.
And by the way, there's a lot of work going on on this that people are not really aware of, and it's really
good. A lot of very smart people, there's a meeting going on, I think right now, in Geneva. They get
together regularly, philosophers and military people, and they have very serious conversations to see if
they can figure out how to define some kind of humanitarian limits and constraints on the use of, not
just AI, but new kinds of autonomous weapons. It's not so much about artificial intelligence; it's that
the same technologies that allow us to make a self-driving car can be used to do some real serious
damage on battlefields. And that's going to be a big issue.
Yes, sir?
>>: A separate question. The first question is that in your earlier model, you seem to portray AI and
machine learning as separate things: one is symbolic, the other one is machine learning. And so my
question is, when you wrote this book, did you actually look into, you know, there have been credible
modern developments in machine learning and AI [indiscernible].
Jerry Kaplan: Well, there has been some recent work, yeah, on knowledge-based systems.
>>: You put them in the book?
Jerry Kaplan: Not prominently. Not prominently. There are certainly people, I was kind of, when I
wrote the book, oh, it takes forever to get books published. So I wrote this a couple of years back. No
offense, I'm not trying to duck the question. I'm teaching at Stanford, and at Stanford, you know, it's
machine learning morning, noon, and night. And the people who were there doing what I was just
describing, the symbolic systems approach, [indiscernible].
There are now people, actually interestingly enough, not in the AI lab, but in the database systems
group and other places that are trying to find ways to meld the benefits of these different technologies.
And there's a lot of good work going on in that. But the answer, hey, I don't think the answer is, well,
symbolic systems isn't the basis of intelligence and machine learning isn't the basis of intelligence,
although a lot of people disagree with that, so let's put them together and maybe we've got the basis of
intelligence. I don't know. You know, to me, it's still nobody home.
>>: The second question. So, like two years ago, when Google bought DeepMind, they formed an
ethics committee. Were you involved in doing this?
Jerry Kaplan: Was I involved? No. I would have liked to be involved, but I guess I had a shallow mind
or something. That was a couple of years ago. And I've actually been doing this for thirty years. I
mean, I've been busy, you know. I've got kids. I've been starting companies. I've been building a lot of
AI-related technologies. But I haven't been asked. You know, where's the camera? Any day now, you
just call me and I'll be happy. Microsoft should have a committee to look at these issues. And I know
Eric is, I don't mean any disrespect, I don't know what he's doing here, but he's -
>>: [Indiscernible].
Jerry Kaplan: Yeah, well, he had to go to another meeting before I got to it. That's pretty much my
point of view. But it's definitely worth looking into. Actually, the ethical issues are much deeper, both
more mundane and more important, even in self-driving cars. I mean, if you guys had time, I'd give
you another ten minutes.
It's really interesting, the ethical issues about how to program those cars and what they ought to do.
How should they prioritize the life of the person in the car versus pedestrians, and how should you
treat different people differently, and what does all this mean? There are really a lot of important
ethical questions. And one of the positions I'm taking in the computer science department at Stanford
is that we should have a course sequence in computational ethics, an engineering course. How do you
actually take those principles that have usually been considered part of philosophy, and embed them,
embody them, in computer code, to be able to deal with certain kinds of situations that come up that
may have been unexpected and unanticipated in some way?
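What "embedding ethical principles in code" might look like is an open research question; here is one purely hypothetical sketch, with candidate maneuvers scored against an explicit, ordered set of constraints. Every rule and weight is invented for illustration and reflects no real vehicle's policy.

```python
RULES = [("avoid_pedestrian_harm", 1000),   # dominant concern
         ("avoid_occupant_harm", 900),
         ("obey_traffic_law", 10),
         ("minimize_delay", 1)]

def score(violations):
    # violations: dict mapping rule name -> degree of violation in [0, 1]
    return sum(w * violations.get(rule, 0.0) for rule, w in RULES)

def choose(maneuvers):
    # pick the candidate that violates the weighted constraints least
    return min(maneuvers, key=lambda m: score(m[1]))[0]

options = [("brake hard", {"minimize_delay": 1.0}),
           ("swerve", {"obey_traffic_law": 1.0, "avoid_occupant_harm": 0.2})]
print(choose(options))   # brake hard
```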
Yes, sir?
Host: So I think we actually have to stop it there.
Jerry Kaplan: Stop it there.
Host: Yes. Thank you so much.
Jerry Kaplan: Okay. Thank you very much.
[Applause]