>> Kevin Kutz: Thank you for coming. My name is Kevin Kutz, and I'm here to welcome Jaron
Lanier to the Microsoft Research Visiting Speaker Series. Jaron Lanier is one of the most
provocative and creative thinkers of our time, drawing on his expertise and experience across
computer science, music and digital media to challenge conventional notions about how
technology is transforming society.
He is the bestselling author of "You Are Not a Gadget," and he is well known for popularizing
the term "virtual reality." In fact, as the founder of VPL Research, Incorporated, years ago, he
was the first to sell virtual reality goggles and gloves, and today he's with Microsoft Research.
Jaron is here today to discuss his latest book, "Who Owns the Future?" which has received a
great deal of attention, earning reviews, interviews and commentary in dozens of media outlets in
the US and overseas. Please join me in giving him a warm welcome.
>> Jaron Lanier: Hey, how's it going? So this is sort of weird for me, because I kind of live a
schizophrenic life, and that's by contract. I'm schizophrenic by contract, where I have one life as
a so-called public intellectual where I write these controversial books and run around and blab
and whatnot, and then I work here, doing research, where I complain about staffing allocations
and all that stuff. Although I can't tell you what they are, but man, we have some really cool
results, so I'm very happy with my research here.
But, anyway, I'm here in my other persona, nonetheless still in Building 99, so it's a very
weird experience for me, like an alternate universe. I feel like I've just stepped through some
sort of portal. So one of the things I do love when I give talks is I play music, and even that's
kind of weird, because I'm talking about this serious stuff, with the future of economics, but I
sort of still do it as this hippie artist person. I don't know how this all happened, but somehow it
seems to work. So I'll play music for you, unless -- is that too weird? Are you interested?
Okay.
This is an instrument I play a lot for audiences, because it's just kind of kickass. It's from Laos.
It's called a khene. [Music]. All right, so that was music. Actually, there's a cool thing about
this instrument, which is it might be the earliest digital number, so this is one of a family of
Southeast Asian mouth organs. This is from Laos, but there are all kinds of variants of it, and I
believe it's the oldest human design of a set of objects in fixed positions that are similar that can
be turned off or on combinatorially. So this is a 16-bit number, and it's about 15,000 years old,
so this is it. This is where it all started. This is why we all got in trouble.
Actually, I'll tell you one version of history that gets us from this to where we are now. In the
ancient world, these were traded on the Silk Road, and the ancient Greeks and Romans knew
about them. The Romans made a giant version of this to accompany the gore in the Coliseum, so
it was sort of like the feature soundtrack of its day, and it was called the hydraulus, and because
they were Rome, it was steam powered. It was gigantic, and there are actually some wrecked
hydrauluses that survived, so we can actually see them today. And they're so big you can't just
use your fingers to open and close the holes. You have to use these planks that you open and
close, and those evolved into keyboards.
The hydraulus evolved into the medieval pipe organ, but it also evolved into keyed string
instruments very early on, as well, and that turned into, of course, the harpsichord and the piano.
But from the very beginning, there were attempts to automate, so even on the hydraulus, there
were attempts to open and close multiple planks at once and build a higher-level mechanism,
macros, so this idea of building a bit of higher-level control into player instruments continued
through the centuries, and there was a nondeterministic player piano that could so-called
improvise a little bit that actually inspired a fellow named Jacquard to build his programmable loom, which
in turn inspired the Babbage programmable calculator, which in turn inspired Turing and von
Neumann to formalize this field that tortures us all to this day. So this is it. This is the start.
First-mover advantage right here.
Okay. Let's see, of concern in this talk is the question of how digital network architecture relates
to economic and political outcomes in a society. And, as a prologue, I will describe my personal
experience of decades of waiting with great anticipation for the benefits that the availability of
digital networking would bring to people. Like I'm sure many of you here in this room, I've been
involved in this game for a really long time, and starting in the '70s, when I was a teenager, I'd
been infused with this bug that someday, by being able to share information, collaborate on
networks, there would be this wave of improvement in wellbeing for people. It would be
analogous to the improvement in wellbeing that resulted from electricity in the walls or plumbing
in homes, hot and cold running water or vaccines or decent fertilizers, the interstate highways,
these basic capabilities that made life better for large numbers of people at once.
So we're now years into a period in which networking has become available, and I think we see
mixed results. I think we do see benefits, but what we don't see are economic benefits. Now,
here's -- or let's say we see a kind of economic benefit that I think isn't sustainable. So I was
personally shocked by two sequences of events, both of which defined my expectations. One of
them was just in the musical field. So I play music professionally, I do soundtracks and whatnot,
and in the '90s, I had a career as a recording musician, and I was signed to a major label, and I
did pretty well at it.
But it was during that time that I was deeply upset and disenchanted with the befuddlement and
corruptions of the music business, as it was, and I was absolutely certain that, if we went to a
different model of open source, open culture and so forth, where musicians shared their music,
that the benefits they would get would open up possibilities and that a whole new generation of
musicians would cleverly invent new ways to have careers, and there would be this wave of
wellbeing.
Oh, I didn't realize I was being interpreted. Please tell me if I'm talking too fast, okay? Or
somebody, I don't know, indicate, because I know I can sometimes go fast. All right. Right. So
I was just sure that would happen, and I actually made up a lot of the rhetoric that's become just
an orthodoxy today. What I found is that if you question the open-culture orthodoxy and the idea
that information should be free and all, you're just pounded with these arguments, and the weird
thing for me is I made up some of that stuff. I was there, and if you go back to some of my
writing from the '90s, I was really articulating a lot of the stuff that I get from kids these days.
Kids these days. And so it's one thing to be complaining about kids these days, but it's another
thing when they're parroting back the stuff that you said. It's a very weird, echo chamber-y, kind
of surreal experience.
But, anyway, what I saw was, around the turn of the century, musicians just started to do badly,
plain and simple. But there was a particular pattern that bothered me, especially. I mean, I
understand that with technological change and with economic evolution, sometimes there are
going to be groups that are disadvantaged, and I'm not expecting that everybody has some
entitlement to always do well under every circumstance, and I understand that. I'm a big boy. I
get it.
However, what we were seeing was a disturbing pattern, which was reminiscent of what were
called Horatio Alger stories in the United States, which date back to the 19th century. A Horatio
Alger story is when there's a widespread illusion that people are doing well when they're not, and
a lot of people live on false hopes, where the statistics are so against them, that no matter how
well they perform, no matter how much merit they present, they actually don't have a shot. But
there are a token number of people who can do okay, and it creates this false impression. And if
you have an economy that's built too much on false hope, it will fail, and so that's the pattern I
was seeing.
I was seeing tiny token numbers of people who had found a way to make do with the new system
we had created, the post-Napster system, and yet there was an illusion of a massive number of
people who were succeeding, but it was totally false. And I put a tremendous amount of effort
into trying to uncover every single example of somebody who was making it in the new system
in music, and I continue that, and there really is almost nobody. I mean, statistically, it's a total
failure, but there are token examples. There are the Amanda Palmers, or whatever. These
people exist, but there are just incredibly tiny numbers of them. There's this tall thin tower, and
then there's this emaciated long tail.
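The "tall thin tower and emaciated long tail" he describes is the shape of a power-law income distribution. A toy sketch (all numbers made up, assuming a simple Zipf-like income ~ 1/rank) shows how a tiny head dominates the total:

```python
# Toy Zipf-like income distribution for N "musicians" -- made-up numbers,
# purely to illustrate the winner-take-all shape of the tail.
N = 100_000
incomes = [1.0 / rank for rank in range(1, N + 1)]  # income proportional to 1/rank

total = sum(incomes)
top_100_share = sum(incomes[:100]) / total  # share taken by the top 0.1%

print(f"top 0.1% share: {top_100_share:.0%}")
```

Under this assumed distribution, the top hundred people out of a hundred thousand take over forty percent of everything, while someone in the middle of the tail earns a vanishing fraction of what the top earner does.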
All right, so that's one thing that really bothered me, and the result of that was a specific human
cost, where I saw people who'd had successful careers in the sort of middle of the music business
-- not the Madonnas or superstars, but people who were like well-known jazz musicians,
suddenly needing benefits to pay for their operation or some problem. And it was getting to the
point where we were having benefits once a week at jazz clubs to try to deal with the most
difficult cases. And I realized we're killing our musical culture. Something has gone desperately
wrong. That really got to me.
But then, around 2007 and '08, the next thing that got to me was the nature of the recession that
hit. Now, look, there are a lot of explanations for the recession. Yes, we had an unfunded
couple of wars. That'll do it. Yes, there's the rise of China and India. There's more competition
for resource base. Yes, there's more older people than ever and more ways to spend money to
keep them healthy, blah, blah, blah, but the whole developed world, at once, went into these
bizarre debt crises in a similar particularly stupid way around bundled phony securities, and that
was really strange. And that one really got to me, because earlier, in the '90s, I'd had a role as a
consultant to people who were trying to figure out how to apply what we now call cloud
computing and big data to finance. The terminology was different in an earlier phase, and that
stuff had worked out terribly. There were a few experiments that just flopped awfully. Long-Term Capital was one. Anybody remember that one? So that was this experiment in trying to
use big computation to sort of make a perfect financial scheme. It was fronted by a bunch of
people who'd won Nobel Prizes in economics and it seemed very legit, until there was this
humongous collapse and a huge public bailout.
That one was bad, but it was kind of entered into with some innocence, because I think the
people sincerely didn't realize they were screwing up, and I knew some of them. I saw it
firsthand. I'm pretty sure that that's true. So there might have been a technical failure there, but I
don't think there was malicious intent. Enron was doing the same thing, more or less, but with malicious
intent, and then there was this huge collapse, huge public bailout, and I also knew the folks at
Enron. And it was funny, because I had a startup in those days, and Enron wanted to buy it, and
I was telling everybody, "No, Enron's this horrible, ugly, evil thing. No, no, no, we have to sell
to somebody better." So we sold it to Google, and now -- I'll get to that. Love you, Google
people.
Then, with '07 and '08, we saw the same pattern again, exactly the same thing, and I'll explain to
you how I think these are all similar. Of course, there are differences, but I think there are strong
similarities. And what I realized is that this is not something where people are able to learn
lessons. There's a kind of a temptation in the way you can use computing to create fake financial
schemes that just seems to be unassailable. And what I realized at a certain point is that the
failure of the music business and the ascent of fake finance were actually two sides of the same
coin, so that's the story I wanted to explain to you.
So, first of all, let me give you a few models with which to think about how big financial
schemes have become fake. The metaphor I'm going to use is Maxwell's demon. Who knows
about Maxwell's demon here? See, it's great to be in a lab environment. It's a teaching tool that's
used in introductory thermodynamics, so I'm going to talk about physics. Don't run screaming. I
know it's a computer science lab, but it's all based on physics, ultimately. It's a good thing to talk
about. So Maxwell's demon is a little imaginary guy. He's a 19th century guy, so he's eloquent
and he speaks in long, long sentences, but what he does is he operates this little door. It's a little
tiny door, and it separates two chambers, and the chambers are filled with a fluid. It could be air
or water, perhaps. And what he does is he's looking at the molecules that come close to the door
on either side, and if there's a molecule on the right that seems all jumpy and perturbed, that's a
hot molecule, and he opens the door to give it a chance to get through. And if there's a nice,
languorous, cold molecule on the other side, he opens the door to give it a chance to go through.
And, gradually, he's selectively opening this door, just flipping one little bit, to separate these
two chambers into hot and cold.
Now, that's an awfully valuable thing to be able to do, because then you can open a bigger door
and let them mix again and run a turbine, generate some electricity, then repeat the whole
process, and you have perpetual motion. Endless free energy, right? Okay, so what's wrong
with this? Why don't we get free energy from this guy? Why can't we just build this? The
reason why is the act of discrimination, the act of computation, the act of even the smallest
action is still real work. There's no such thing as non-work. There's no such thing as purely
abstract information.
And so what happens is, the operating, the measurement takes energy. Operating the door takes
energy. Computing whether to open the door takes energy. All of these things also radiate waste
heat. They're entropic. And, cumulatively, it always costs more than you gain. That's the cost
of computation. That's why your computer gets hot, too.
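The accounting he is describing can be sketched in a few lines, using Landauer's principle (erasing one bit of information costs at least kT ln 2 of energy). This is a back-of-the-envelope toy, not a real thermodynamic model: assume the demon records one bit per molecule sorted, and that re-mixing a perfectly sorted gas yields at best kT ln 2 of work per molecule.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(bits, temperature):
    """Minimum energy (J) to erase `bits` bits of memory at `temperature` kelvin."""
    return bits * K_B * temperature * math.log(2)

N = 1_000_000  # molecules the demon sorts, one recorded bit each
T = 300.0      # room temperature, K

# Best-case work extractable by re-mixing the sorted gas: kT ln 2 per bit of sorting.
max_work_out = N * K_B * T * math.log(2)

# But to repeat the cycle, the demon must erase its N-bit memory, which costs
# at least the same amount -- so the "perpetual motion" cycle never nets energy.
erasure_cost = landauer_cost(N, T)

assert erasure_cost >= max_work_out
```

The point the sketch makes is exactly the one in the talk: the measurement and bookkeeping are real work, and cumulatively they always cost at least as much as the sorting gains.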
And so the interesting thing about this is that every possible perpetual motion machine somehow
can be equated to this same no-free-lunch system of Maxwell's demon. Okay, so what happens
when you have a big computer with a lot of connectivity and you can get a lot of data into it on a
network? Even if you don't intend to, you're tempted to try to turn it into Maxwell's demon
yourself, but in an economic sense. And I saw this happen firsthand as a consultant, especially in
the '90s.
I had a weird consulting career in those days, because there weren't that many people who
understood big networks, and so I worked with the early high-frequency trading type schemes. I
worked with Wal-Mart, which was a really important early big computing operation, all kinds of
other examples. So I saw what happens.
For instance, one of my consultancies at that point was actually the largest American healthcare
company at the time, and I saw directly how it was transformed. Prior to the existence of digital
networks with a sort of an endless amount of freely gathered data and this huge amount of
computation, insurance was limited -- was computationally limited. The way the schemes
worked was entirely computational. In fact, the term "computer" used to refer to humans, and
almost exclusively women, who were employed in these giant, long buildings in upstate New
York who would sit there calculating actuarial tables. Did you know that that's what "computer"
used to mean, before Turing?
And so they would have human computers calculating these things, and the statisticians, who
were called actuaries, had a limited amount of data, and they could come up with very broad-brush approaches to setting rates for insurance policies, and that's how the business worked. But
as soon as there started to be lots of data and really big computers, and because of Moore's law,
when that stuff got really cheap, a whole new picture emerged. It started to become thinkable to
model individual people and place odds on them, and you could do that not only based on the
scientific theoretical knowledge that had been published by medical researchers, but you could
create your own correlations, because you could gather your own data, and you didn't even have
to understand them.
It might just be that people who have purple wallpaper are more likely to have a stroke or some
bizarre thing like that. Maybe that isn't bizarre. I don't know. But you wouldn't have to
understand the correlations, you'd just compute them. And then what you do is you pretend to be
Maxwell's demon. You say, "I'm going to open the little door, and all the people who are likely
to need my insurance are going to be excluded, and all the people who are likely to not need to
use my insurance policy will be included." So you attempt to make the perfect perpetual motion
insurance plan, where you take as little risk as possible. So you've turned yourself into
Maxwell's demon.
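The risk-sorting he describes can be caricatured in a few lines. All names, costs, and the cutoff below are invented for illustration, not any real insurer's model: the "demon" insurer simply declines everyone whose predicted claims exceed a threshold, so its own book looks nearly risk-free, while the declined risk does not disappear; it is pushed onto the rest of the economy.

```python
# Caricature of the "Maxwell's demon" insurer. Hypothetical people with
# hypothetical predicted annual claim costs in dollars.
people = [
    ("Ann", 200), ("Bob", 350), ("Carl", 9000),
    ("Dee", 150), ("Eve", 12000), ("Fay", 400),
]

CUTOFF = 1000  # insure only those predicted to cost less than this

insured  = [(name, cost) for name, cost in people if cost < CUTOFF]
excluded = [(name, cost) for name, cost in people if cost >= CUTOFF]

insurer_risk = sum(cost for _, cost in insured)   # risk the insurer keeps
offloaded    = sum(cost for _, cost in excluded)  # risk pushed onto everyone else
total_risk   = sum(cost for _, cost in people)

# The sorting makes the insurer's own book look great...
assert insurer_risk < offloaded
# ...but the total risk in the system is unchanged; it has only been moved.
assert insurer_risk + offloaded == total_risk
```

This is the economic analogue of the demon's door: the selection itself works locally, but the system as a whole still carries all the risk, which is why the scheme has to break eventually.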
Now, this is a little story I tell, which is true, and it goes like this. One of my consulting things, I
was with a bunch of people from this health insurance company, and the CEO of it was sort of
taken with this observation that this new world, that he could become Maxwell's demon,
although, of course, that wasn't the way he was talking about it. And he said, "You know, what I
can do now is I can get rid of that guy who's going to have a heart attack years in advance. I
don't have to insure him anymore." And right at that moment, I remember thinking, "Oh,
my God, that's not what computing is supposed to be for. Something's gone terribly wrong here.
That's not what we've all worked so hard for." And at that very moment, there was this huge
swooshing sound, and then there was like this earthquake, and it turns out there was a meteor
strike right by us. And I won't tell you exactly where it was, but it was on a San Juan island, so it
wasn't far, so you can figure it out, if you really want to be diligent.
Anyway, what this leads me to is if any of you are astronomy researchers and you're interested in
meteors, what you can do is you can use health industry executives as bait, because who else gets
to -- I was amazed to be near a meteor strike. From then on, I didn't get too close to the guy. So
it was like, "Yeah, I hear you!"
This sort of Maxwell's demon fallacy really breaks for the very simple reason that the overall
economy, the overall world, isn't big enough to absorb all the risk that you're avoiding by trying
to have a perfect scheme. You have to offload it into the world, and there isn't some infinitely
big economy that can keep on absorbing your risk and can keep on providing you with more and
more benefits, so the scheme has to break. And that's when I realized that there was a unifying
paradigm in all these different failures, that what was going on with healthcare in America was
driven by a certain model of computation that wasn't sustainable, but that on a deep level, on this
Maxwell's demon level, it was profoundly similar to what I was seeing in finance over and over
again. It was similar to Long-Term Capital, it was similar to Enron, it was similar to the recent
recession and all the bundled derivatives, and it's similar to what's going to happen with student
debt and high-frequency trading and carbon credits and anything else where somebody's trying to
compute a perfect position.
Trying to compute perfection doesn't work, in reality. I mean, it doesn't even work in -- there's a
funny thing where sometimes I talk to people that are saying, "No, with a computer, you can
make this perfect, pristine thing, because computers are perfect." And then I'm thinking, have
you developed software? Do you know anything? There's a whole question about whether it's
even realistic from a computer science perspective, but at least from a physics perspective, it's
profoundly not realistic, and also from an economics perspective.
So the thing about these schemes is that they appear again and again and again with different
surface colorations, different terminology, different semantics, but this idea of trying to calculate
the perfect position comes up again and again. So what I've realized is that schemes like
Facebook and Google have strong similarities, as do recent large elections that are highly
computational, as do the new face of national security organizations all around the globe, as do
new criminal organizations.
Basically, what's been happening is that wherever you find the greatest centers of power and clout
that have been strengthened and improved since networking arrived, if you crack them open, you'll find a
big computer running a fake Maxwell's demon scheme. So I call these Siren
servers, and it comes from the ancient Greek from Homer. The Siren is this dangerous creature
who doesn't directly attack you or try to eat you, but just confuses you so you fall and drown of
your own doing, and that's how I look at these things.
Siren servers are only a problem if we allow ourselves to be idiotic. They're not like some alien
force or some intelligence that's screwing us up, but it's a kind of a temptation. It's almost like a
drug, because as soon as you have this illusion that you can compute your way to a perfect
financial scheme, at first it works. That's the problem. It's like an addiction feels great at first,
then you pay in the long term. So it has this drug-like quality to it.
In order for the scheme to work, the information that feeds the algorithms has to be free.
Otherwise, it would cost money to try to be a Siren server. So this whole "information wants to
be free" stuff, which I'd been so actively promoting in the '80s and '90s, turns out to actually feed
this beast. It turns out to actually be the cocaine that the Maxwell's demon wannabe runs on, and
that's where the idea fails.
So another way to put this is, if you have a bunch of people in some sort of an attempt to create a
utopia, let's say, and they're all sharing information and they're all on a network, the ones of them
that have the most effective computers, the biggest computers, the most highly connected
computers, that have been able to hire the most clever recent PhDs from Caltech and Stanford or
whatever, and UW, of course -- whoever's got the most effective computer can make use of that
same openly shared information to such greater benefit than other people. That differential
becomes so big that it's actually not sustainable.
So what we've seen since the advent of widely available cheap networking is not the
strengthening of a broad range of people in the way that we saw with the availability of
electricity and drinking water, hot and cold potable water and all these things. Instead, it's
created benefits almost exclusively in the most concentrated, elite people, which includes many
people in this room.
I certainly feel a part of it -- you know, what the Occupy movement calls the 1%, if you want to
use the language of the Left. But you have this idea of a recovery after this recession that is
almost exclusively benefiting a very tiny part of society, and you have a loss of social mobility
and a lessening of the middle class across the whole developed world at once, which is just
astounding. All right, so to talk about this, I want to give it a historical framework, and I'm
going to go back to the 19th century, and this has to do with how we think about people in a
world of technological change.
So the 19th century was strongly characterized by nervous futurism. In a way, they worried
more about the future than we do today. We don't really talk about the future as much now as
people did in either the 20th or the 19th century. The 19th century was all about machine
anxiety. I'll give you some of the highlights of machine anxiety in the 19th century. We can start with
the Luddite riots early in the 19th century. These were textile workers who were concerned that
improved looms would put them out of work. They rioted, and they were executed in public in
order for order to be restored. It was a very ugly, difficult scene. So we use the term Luddite
today to mean somebody who doesn't have the latest phone or something, but it started out really
as the birth of the modern labor movement.
Other signposts in the 19th century are early Marx, starting in the 1840s. I always like to tell this
story. I was driving in Silicon Valley one time, and I heard somebody on the radio talking about
how this new scheme they were promoting was going to allow productivity to cross international
borders with extreme efficiency, and I was thinking, "Oh, it's another one of these stupid startup
companies. I can't listen to more of this crap. I hear this all day long." And just as I was turning
it off, I realized it was the lefty station, KPFA, and that it was an anniversary reading of Das
Kapital.
It just turns out there's passages in Marx that read incredibly current, and I loathe Marx as a
proposer of solutions. Marx had this idea that he was smart enough to know in advance what the
perfect society would be and how to get there, and that's a very dangerous kind of anti-scientific
thinking, because you can only do science empirically, but he thought he could have perfect
foreknowledge, so I'm not advocating Marx at all. I think he's been a disaster, but as an observer
of his times, he was really extraordinary, and as a tech writer, he was really good. He might be
the best tech writer we've had, actually. He might be better than McLuhan. He's just amazing.
Anyway, what are some others? How many songs do you know from the 19th century? If you're
American, anyway, one of them is probably the Ballad of John Henry, and this was about a guy
who is in a race to lay down railroad track against a robot that can do it, and he wins, but only to drop
dead from exhaustion. This was a really popular song. And then another familiar element that's
with us to this day is science fiction. Science fiction, the genre, was started to explore the
anxiety that people could become obsolete because of our own creations. So we can go back to
Mary Shelley and Frankenstein, if we want, but in the late 19th century, we have just sterling
examples from H.G. Wells with The Time Machine, for instance. In The Time Machine,
humanity splits into two species. The rich ones are the descendants of the people who owned
social networking servers and whatnot. The other ones are the people who use them, and the rich
ones farm and eat the poor ones, and they're all miserable.
Science fiction is always about whether people are going to become obsolete. There's two kinds
of science fiction. We're either made obsolete because of our own machines or because of aliens,
but our own machines are the much more common element of obsolescence, and so some of the
recent ones are the Matrix and Terminator movies and Inception and Battlestar Galactica, and it's
on and on and on and on. That was born out of the labor movement. That's the remarkable
thing. You can read a crossover -- like, if you look especially at Mark Twain's early writing,
there are these amazing things where theoretical ideas about machines putting people out of work
turn into science fiction stories. You can see the labor movement morphing into early science
fiction, so that's actually its origin, and that's what guides so much of our imagery about tech to
this day.
So here's an interesting question -- in the 20th century, we did not see ultra-widespread
unemployment because of new machines. Instead, we saw better jobs. Why'd that happen?
Well, I think it happened because the labor movement triumphed on the one hand, and on the
other hand, industrialists realized that they have to think about their own interests, and that there
was actually a completely unacknowledged commonality between the two. So on the
industrialist side, Henry Ford was a racist bastard. Let's just be clear about that. His own
descendants will say it more clearly than anyone else, and yet, he was a successful entrepreneur,
and one of the things he said is that it's crucial that he be able to price his cars so that his own
factory workers could afford to buy them, because you can't have a market without customers.
It's so simple.
So if wealth is too concentrated, you can't have a market, so if you want to grow your business,
you have to grow the market. Ta-da! This is not rocket science. This is actually a pretty simple
idea in basic entrepreneurship. Then, from the labor movement side, they faced a really tough
struggle. Now, there's been a lot written about the labor movement, obviously. I'm going to talk
about it in a way that it's usually not talked about, from a techie perspective. So, from a techie
perspective, here's an interesting question. An example of a technology that used to support this
huge industry that then went away was buggy whips, right? That's a cliche you always hear
about -- oh, whatever-it-is is going to go the way of the buggy whip. All right. The transition
from dealing with horses to dealing with motorized vehicles is really a big deal, and I don't know
how many of you have dealt with horses, but I have dealt with horses, and horses are work.
They're actually really hard to deal with, and if you love horses, and if you have some really
interesting, sympathetic horses, that's one thing. But to have to deal with them all day long, even
the ones that aren't so nice, and you're dealing with feeding them and dealing with their hoofs
and brushing them, and then the poop -- the poop, my God, all that. And then you move from
that to a motorized vehicle and it's easier. It's way easier. In fact, motorized vehicles are fun to
drive. A lot of people in this room have probably bought a nicer car than they really need,
because driving's actually really cool. We like our cars. They're just great toys. We enjoy them.
It's really fun to ride a well-engineered car. So this brings up a really interesting question -- we
have to pay people to deal with the horses, because who would do that if they're not getting paid?
It's miserable. But why the hell are we paying somebody to drive a cab or a truck, because
driving's fun? Why should those people be paid?
If you ever meet a Teamster, and you wonder, why is the Teamsters Union so tough and brash?
It's because they have to fight like crazy for the idea that even if life gets less miserable, less
smelly and less dangerous, you still ought to be paid. The idea is that better technology can be
associated with better jobs rather than fewer jobs, so long as you decide that it's still okay to pay
somebody, even if they're not risking their life and if they're not miserable and covered in crap
all day long. That was this huge, huge, huge transition, and it took decades to fight for it.
Now, one of the interesting features of that realization is that to answer it, to say that people
really should be paid, requires the creation of a somewhat artificial ratchet system to give people
a little bit of a license or something to get paid for the job, so that you don't have a race to the
bottom and it becomes unpaid again. So, for instance, union membership, taxi medallions,
academic tenure, these are all mechanisms -- tenure actually goes back to the Middle Ages, but it
served as part of this movement in the 20th century to create ratchet systems where people could
achieve a kind of a status where they were paid for something that wasn't actually miserable and
life threatening.
Now, we come to the 21st century. The 21st century, we have rejected that old covenant, and the
rejection happened I think in a lot of different ways and different places at once, always in
connection with the fake-perfect scheme I was talking about, always in connection with
Maxwell's demon. But I think the first person to really articulate it in public was Sergey, who I
really like, from Google. But, anyway, the way the idea went was, "Okay, maybe you can get
paid to drive a truck, but just to do stuff online? Give me a break. Information, you don't get
paid for that. That's too easy." So whatever work you do online, like sharing your music, just
put it out there for publicity.
And so now we enter into this new scheme where we're saying if technology gets advanced
enough that it can be delivered as a software service, then we stop paying people. Then we start
to say the benefits you get are going to be what we call informal benefits instead of formal
benefits, and so this is a key idea. If you talk to people interested in development in the
developing world, one of the key -- well, the key quest is to get people out of an informal
economy, into a formal one. Informal economies can give you bargains. They give you barter,
they give you reputation, they give you all these things, but the problem with an informal
economy is it's real time. What that means is you have to sing for your supper for every single
meal. So, for instance, if we tell musicians, "You can't get royalties on your music anymore, but
you can still play live gigs," the problem with that is that then you have to play a live gig
constantly. What if you get sick? What if you want to raise kids? What if you want to take care
of aging parents? You can't be a biological entity anymore. You're always right on the edge of
failure, and that's exactly what's happened with people who are living that way. A real-time
economic career based on informal benefits is a career of insecurity, and all it takes is one little
string of bad luck, which will always come along, just because of how randomness clumps. It
will always come along, and at that point you're knocked off.
So it works great if you're an immortal, perfect robot, not a human, and especially if you're an
immortal, perfect robot who can live with rich parents who still want to support you. Then it
works great, which is of course what everyone wants to be, but none of us can be. So, if there
were only going to be a limited number of people who would be disenfranchised by making
information free, that would be absorbable. We could figure out a way to compensate for that.
So, right now, the kinds of people who've tended to be forced into real-time economic careers by
the open-culture idea are the journalists, musicians, photographers, those kinds of people. We
could come up with institutions to compensate. For instance, there are various attempts to create
new institutions to support investigative reporting, because we don't have nearly enough
investigative reporting for our times. I think that that statement shouldn't require justification.
But the problem is, it doesn't stop there. The problem is that it covers everything in the economy
except Siren servers, eventually. So let's look at some of the upcoming waves that are going to
become -- I call it software-mediated. It's hard to come up with just the right terminology for this
stuff. 3D printers are a great example. If you're a member of MSR and you want to 3D print
something, just talk to the guys in the hardware lab across the atrium and they'll print out
something for you. It's fun, it's great. I love 3D printing. It's still early. For those of you who
haven't used a 3D printer, it's like this box that looks kind of like a microwave oven or
something. You download a file from the Internet, just as if you were downloading music from a
BitTorrent site or something. You get your file, and then these little nozzles follow instructions
in the file and deposit materials a little bit at a time until your object is printed out. Today, we
mostly print out objects in a limited number of materials and colors, and you don't print out
everything you might want. But in 10 years and 20 years, I imagine we'll be able to print out
new phones and tablets and things like that. All the components of them are sort of printed
already to some degree. I think we can do it.
What that means is a complete transformation of manufacturing, because now suddenly you can
enjoy the efficiency of printing out things on an as-needed basis and on a where-needed basis.
You stop transporting goods around. You stop having factories. Instead, you have this
distributed system. All you distribute are the antecedent goops. Moreover, recycling becomes
vastly more efficient and precise than it ever was before, because you have a precise record of how
everything that was printed was printed, so you can unravel it with great precision, because the
information isn't lost. So instead of recycling being a gross process, it becomes a fine process, so
you'll be able to recycle those antecedent goops, so you suddenly have this amazing green effect,
this amazing efficiency. It screws China royally, because you have to tell them, "All that huge
manufacturing infrastructure in southern China, Foxconn, all that, you don't need that."
Microsoft's making a big investment in that stuff.
But, obviously, as much as the manufacturing sector has declined in the US, it's still a big part of
our economy, even, and it's huge in China and in other parts of the world, and all of a sudden,
that goes away. Now, it's actually not going to be all of a sudden. It will come on with some
slowness, but you know about how Moore's law works. It accelerates. So if one year, suddenly,
you can print a new phone, then a few years later, you'll be printing new medical devices, and a
few years after that, you'll be printing everything, including the printers, by the way, so they
spread virally at some point. It's not like there's some store where you go buy your printer.
So what happens, then? Retail goes away, manufacturing goes away. I know I'm exaggerating.
It won't be that clean. It's always messy, there's always exceptions, there's always gotchas, all
that stuff. But just in the broad picture, there's obviously a huge problem here, because what's
happening is then we're Napsterizing the fabrication of physical stuff. We're Napsterizing
material culture. Then, do I need to list many other examples in a lab like this? There have
already been effective demonstrations of automated pharmacists, legal researchers, bio-bench
researchers. All kinds of educated, middle-level jobs can already be automated. I'm pretty sure
we'll be automating our CS interns, and maybe we can automate our managers.
But, anyway, the thing is that this wave spreads. It doesn't just stop with the creative-type
people. As the 21st century progresses, it hits every part of the economy. Those Teamsters who
managed to survive the obsolescence of the buggy whip and drive trucks are going to then face
the new challenge of a self-driving truck, and that one will surely knock them out.
So let's look, though, at how automation really works. Now, when I was a kid, there was a guy
who was the sweetest, most generous mentor to me when I was a very young computer scientist,
named Marvin Minsky, who was one of the founders of the artificial intelligence movement.
Now, in 1958, a couple of years before I was born, Marvin had given some of his grad students
an assignment to, over the summer, write a translation system from one language to another.
Now, that might sound crazy to us today, but nobody knew at the time. It was a perfectly
reasonable thing to hypothesize about. So, hypothetically in those days, based on how people
understood language then, it should have been possible to take dictionaries for the languages and
write some sort of parser translation scheme and come out with a translator, right?
Now, of course, as we all know, it doesn't work that way. The only way to translate between
languages effectively is with a big data strategy, so we have these huge corpora that we get of
previously translated passages. And it works. It's great that it works. And we're in a race with
our colleagues at Google and elsewhere to make better and better language translators, but we're
all doing basically the same thing, which is gathering huge antecedent examples and then
performing statistics to create new examples.
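The corpus-based approach he describes can be sketched in a few lines. This is purely a toy illustration: the three-sentence "corpus" and the simple co-occurrence score are invented for the example, and real translation systems use far more sophisticated statistics over millions of human-translated sentence pairs.

```python
from collections import Counter, defaultdict

# Toy parallel corpus: (source, target) sentence pairs produced by human
# translators. Real systems harvest millions of such pairs; these are illustrative.
corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

# Count how often each source word co-occurs with each target word
# across the aligned sentence pairs.
cooc = defaultdict(Counter)
tgt_freq = Counter()
for src, tgt in corpus:
    for t in tgt.split():
        tgt_freq[t] += 1
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def translate(sentence):
    """Word by word, pick the target word most strongly associated with each
    source word (co-occurrence count normalized by target-word frequency)."""
    return " ".join(
        max(cooc[w], key=lambda t: cooc[w][t] / tgt_freq[t])
        for w in sentence.split()
    )

print(translate("the dog eats"))  # -> "le chien mange"
```

Even this crude sketch can produce a sentence no human ever wrote for it, but only by mashing up the statistics of sentences humans did write, which is exactly the point being made.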
Now, let's notice something critical about this, which is that there were a group of real humans
who translated passages in order to generate the examples that we use in order to create the so-called automation. So it's kind of stage magic. What we're doing is we're mashing up the efforts
of real humans in a new and useful way, but it doesn't mean that these people don't exist. They
do exist. Nor can you say that you only have to gather data from them once and then never
again, because language is dynamic. So all of us are constantly scraping the net for new
examples of translations to keep our ability to translate current and dynamic, right?
Okay, so this is a key point. There's a kind of a figure/ground flip, or sort of a gestalt
transformation that can come into application here, and I know I see this differently than many of
my colleagues, but this is how I see it. Any time you show me something that's automated or
something that's called AI, there's a way to flip it and see exactly the same phenomenon in
different terms, where humans did all the work. It always traces back to humans. There's not
some alien species that's sending down data to us, so far as we know, anyway. Some of the stuff
you find online, I wonder, but at least the useful data is all tracked back to real humans.
Now, that raises an extremely interesting point to me, which is, if we were to achieve that
figure/ground flip and, instead of thinking about AI, instead of thinking about automation,
instead if we were thinking of the whole system as being run by real people from whom the data
comes, but just having the mediation become more and more useful, if that's the way we think
about technology, which is absolutely as valid as the usual ways, then there's a possibility of
thinking about an economic solution that gets around the Siren server problem, that provides a
way for people to lift themselves out of the idiocy of trying to become Maxwell's demon.
Now, to explain that alternative, I have to go back to the very origin of the idea of networking.
So the first person to write about how people could use digital networks to communicate with
one another or to collaborate actually predates the ability to implement a network, because it
happened before packet switching was invented, and that was Ted Nelson's work, starting in
1960.
So Ted Nelson is still with us. He lives in Sausalito on a houseboat. He's a buddy of mine. He's
in his 70s now, and he's not the easiest figure to understand in some ways. He's kind of a
beatnik-hippie sort of person, and his early writing was infused with a kind of psychedelic glow
or countercultural zest to it that might not be to everyone's liking and is not necessarily as clear
for many people as it might be, and that has to be said. Nonetheless, starting in 1960, Ted was
the first person to describe people using digital networks to collaborate. It was brand new. I'm
not aware of anything earlier. He did so with extraordinary insight. I think sometimes the first
person on the scene can see more clearly than people who show up when it's already cluttered.
So what Ted realized, and what he called it was hypertext, which is where the HT in HTML
comes from. So there's a direct descent of his original terminology to what we use today. So
Ted had Hollywood parents, who benefited from the labor movements of creative people, so we
usually think of Hollywood as being populated by super-overpaid actors who just grunt while
they fire weapons or something and then become the Governor of California or whatever it might
be, but actually, the unions for actors and whatnot benefit mostly a middle class of people, and
his parents benefited from that. So he understood that, even if all you're doing is pure
information, you're vulnerable to a race to the bottom, where you're demoted into an informal
real-time life, unless there's some kind of a mechanism.
But what he realized is that instead of these artificial sort of ratcheting mechanisms, like unions,
maybe something more organic could come about in a digital network, and what he proposed is a
universal micropayment system. Remember, this is before -- universal micropayments were
invented before packet switching. This is a remarkable thing. They're the actual origin point for
networking. So he proposed a universal micropayments system, so that when people make use
of information that exists because the other person exists, that person receives some
micropayment for it, so the people whose translations prove particularly useful to a translation
algorithm would keep on getting little dribs of pennies. The people who -- if you write code,
whenever your particular line of code executes, you might get a little drib and drab of money out
of that. And it's a really interesting idea which hasn't been adequately explored.
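One way to picture Nelson's idea is as a provenance ledger: every time a result is used, the system knows which people's contributions it was derived from and credits each of them a tiny share. This is a minimal sketch, assuming an invented 1% royalty rate and made-up names and prices; nothing here describes a real payment system.

```python
from collections import defaultdict

class ProvenanceLedger:
    """Toy model of a universal micropayment scheme: record which humans'
    contributions were used to produce a result, and credit each of them
    a share of a small royalty on every use."""

    def __init__(self, royalty_rate=0.01):
        self.royalty_rate = royalty_rate     # fraction of each transaction paid out
        self.balances = defaultdict(float)   # contributor -> accumulated micropayments

    def record_use(self, transaction_value, contributors):
        # Split one use's royalty evenly among the people whose data
        # (translations, code, photos...) made the result possible.
        share = transaction_value * self.royalty_rate / len(contributors)
        for person in contributors:
            self.balances[person] += share

ledger = ProvenanceLedger()
# A machine translation served for $0.10, traceable back to three human
# translators whose work fed the model (all figures invented for illustration).
ledger.record_use(0.10, ["alice", "bob", "carol"])
ledger.record_use(0.10, ["alice", "dana"])
print(dict(ledger.balances))
```

The hard parts, of course, are exactly what the sketch hides: tracking provenance at scale and keeping the accounting honest, which is why this is framed as a line of research rather than a finished design.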
For instance, let's look at code. We tend to think of the economics of code as being a war
between two camps, one of which is the open-source world, the Linux people and everything,
and the other one is us at Microsoft, who are supposed to be the Evil Empire. But the thing is,
there's this third way that has not really been tested that might be better than either of those. If
there's a micropayment system that's activated as your code runs, then the more your code runs,
the better you do. And the way I put it in the book is Sergey and Larry could have become
really, really rich just from a system like that without having to build a private spy empire. But
the other thing is, if you look at the Linux stack, and you look at the number of people who've
contributed to it, or the number of people who've contributed to something like the Wikipedia, if
that stuff was monetized, you'd see a middle-class distribution coming out of it.
The intriguing possibility is that a universal micropayment system might actually generate a
sustainable middle class, even if technology gets really good and what we call automation
becomes really advanced, without the need for special systems that are inevitably very difficult
and sometimes corrupt and awkward, like unions and medallions and licenses and all this stuff.
So this is a big idea. I'm not certain it would work. I'm not proposing to be like Marx and to
know in advance what the perfect world would be and how it will happen. Rather, what I'm
proposing is a line of research to see how it can work.
Now I'm not sure whether I'm giving a book talk or a research talk at MSR. Actually, I'm doing
work to model this down at SVC, at our campus, this summer. I'm trying to build agent-based
models of economies and trying to do monetized networks within them to see what kind of
distribution of outcomes we get. I'll give you a few basic ideas about how this kind of research
works. If you look at a spoke-and-hub style network, where everybody goes through a central
arbiter, and an example of that is YouTube or the Apple Store, then the outcome of winners and
losers is a very stark power curve, so that's where you get just a few big winners -- or Kickstarter
is another one like that -- you get a few big winners, and then you have this huge long tail of
wannabes, and the neck is pretty thin in between them. And that's when you get the Horatio
Alger effect, where people think they have better chances than they really do, and it's not
sustainable.
Now, on the other hand, if you look at a thickly connected network, where people are interacting
with each other and there's not a central arbiter allowing only one person to get through at a time
-- I mentioned the Linux community is like that and the Wikipedia is like that -- or another
example is Facebook, where anybody can connect with anybody, and people can get compound
products out that have been contributed to by many people. Then the variety of people who were
the source of information that people see takes on a completely different character. Instead of
this steep power law, you start to see something that looks like a bell curve.
So the average person on Facebook actually is exposed to a wide variety of people, not just a tiny
number of stars. And the average piece of code in the Linux stack involves contributions from a
large number of people, not just stars. That's not to say that there aren't stars. It's just to say that
there's a distribution that has a big hump in the middle. There are still stars. This is not a world
in which there are no elites and everybody's the same. This isn't some socialist utopia. It's just a
world where there's a bell curve instead of a power curve.
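The contrast between the two topologies can be sketched with a toy agent-based simulation of the sort he describes building. Everything here is an assumption made for illustration: 200 creators, 20,000 "views," a rich-get-richer rule standing in for the central arbiter, and uniform random browsing standing in for the dense network.

```python
import random
from collections import Counter

random.seed(0)
N_CREATORS, N_VIEWS = 200, 20000

def hub_and_spoke():
    """A central arbiter surfaces already-popular items, so popularity feeds
    on itself (preferential attachment), yielding a steep power curve."""
    views = Counter({c: 1 for c in range(N_CREATORS)})  # seed each creator with 1
    for _ in range(N_VIEWS):
        # choose a creator with probability proportional to current popularity
        pick = random.choices(list(views), weights=list(views.values()))[0]
        views[pick] += 1
    return views

def dense_network():
    """Peer-to-peer browsing with no central arbiter: attention lands roughly
    uniformly, yielding a bell-curve-like spread with a big middle."""
    views = Counter()
    for _ in range(N_VIEWS):
        views[random.randrange(N_CREATORS)] += 1
    return views

def top_share(views, k=10):
    """Fraction of all attention captured by the k biggest winners."""
    return sum(sorted(views.values(), reverse=True)[:k]) / sum(views.values())

print(f"hub-and-spoke top-10 share: {top_share(hub_and_spoke()):.2f}")
print(f"dense-network top-10 share: {top_share(dense_network()):.2f}")
```

Run it and the hub-and-spoke world concentrates a large slice of all attention on a handful of stars, while the dense network keeps the top-10 share close to the uniform baseline, with stars still present but a big hump in the middle.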
So why do we care about bell curves? I already mentioned before that if what you like is market
dynamics, if you think capitalism has any value, you have to realize it won't work if there aren't
customers. That's what Henry Ford realized. You have to have a strong middle class, or you
can't have a market. It's just really that simple. You can't have a market if you have some sort of
petro-monarchy or oligarchy or something. That's fake. All right, but then if what you care
about instead is societal dynamics or democracy, if you're sort of coming more from the left and
you don't like markets so much, you still need a middle class, because if income becomes too
concentrated, then politics becomes corrupt, which I think is actually an issue in the US right
now. So the point is, you can abstract away whatever ideology you have. It depends on a strong
middle class.
I don't care if you're libertarian, left or right, you need it. And so what we really should be
asking is how can we design network structures so that, economically, we're generating middle-class distributions? Now, the term middle class can be problematic. Maybe not in this
audience, I don't know, but a lot of times I'm talking to the sort of literary crowd, and if you say
middle class, what they think is, "Ew, the bourgeois, it's our parents. It's everything that's not
cool and beautiful and hip." Fine -- a big middle-income block in the middle, a bell curve. It
doesn't matter if you want to call it the middle class, especially in Europe. That's a really hot
button, let me tell you, as I learned the hard way. It's like, "You want to promote the middle
class? Is it all Leave it to Beaver now? Is that the idea?" I was like, "No, no, no."
Anyway, what I think the crucial thing we have to understand is how can we design a network
that yields a middle-class outcome from information sharing that's sustainable? Because if we
can get to that point, then the 21st century can answer the fears of the 19th century, but in a way
that's even better than what the 20th century did. If we keep on doing what we're doing, of Siren
servers and fake Maxwell's demons, we're just going to keep on having one collapse after
another, with one public bailout after another, with more and more concentration of wealth and
power, less and less social mobility. The pattern will go on forever, and obviously, we can't keep
on doing that.
I have this sense of how long we have, which is 20 or 30 years. I say that because I think that's
about how long the intense technologies of automation will take to really get out there and get
cheap. So that's my sense of how long we have. So if we do the research now, if we approach it
honestly, if we're not ideological, but simply trying to be problem solvers, I think we have time
to fix it, and I feel confident that we can. I want to address one other point that I often hear
about, just to preempt a question that I always get. The question goes like this -- isn't it true that
there's only a tiny number of people who are really doing any valuable thinking or who are really
creative, and won't most people be useless in the system, and won't it just recreate some sort of
elite distribution? And I just have to say, "Maybe. Let's be empiricists." There's a kind of
weird, stealth elitism that creeps in that assumes a priori that that's the case, and empirically, in
those cases where we have data, I don't think it is the case. I mentioned Facebook as one
example where we see a broad middle in terms of who's exposed to who, rather than a star
system that we see in hub-and-spoke networks. So we've already seen that network topology
changes that. And if it were really true that most people were only interested in a few stars, we
wouldn't see that.
Now, another objection I often get is, "Oh, my God, how can you be talking about Facebook?
That's such fluff. You can't monetize that. Don't encourage them." And here's what I want to
say about that -- our job is not to judge each other. I'm not like some cultural critic. Personally,
I'm not on Facebook. I find it to be fluffy and useless, but you know what? That's just me. It
doesn't matter what I think about it. Who cares? The point is, entertainment's always like that. I
mean, you show me entertainment of any era in history in any location in the world, and I'll show
you some part of it that just seems stupid and pointless, because there's always something like
that. People are different. That's good. That gives us that broad distribution. That gives us
those bell-curve outcomes.
So if you want to get a sense of how much value is already being denied to people by Siren
servers, you can start to, in your own life, keep a tally of the differential between what you'd
spend if you agreed to join into somebody's computational scheme versus if you didn't. So, for
instance, if you have a shopping cart at Safeway or another store, keep track of what the
differential is for a year. Your Facebook activity, on average, is worth about $100, if we're to
believe the valuation, so that's maybe $100. It's not a lot, but it adds up. Look at the difference
between keeping track of your frequent flyer miles and not. If you think you're really getting
bargains from these things, of course, that's a magic act. There's no such thing as a bargain.
That doesn't exist. It's just a price. So if somebody says, "Oh, this is the bargain price," it just
means that they would otherwise be overcharging. You have to get out from under stupid
marketing tricks, and especially if you work at Microsoft. I mean, we do them, too, when we sell
stuff. I mean, get wise. Never be the snookered. Always be the snookerer in a market economy,
okay? General principle of survival.
So if you start counting up all that stuff, you'll find that for a lot of people, it's already well up
into the thousands and even tens of thousands, and automation has barely begun. So as this
progresses -- and people will be specialized. Like, there might be one person who is a star on
Facebook and another person who is valuable in some other way, maybe as a 3D print object
designer, right? It will be all over the place, but on average, I believe we already have empirical
indications that there will be enough value there to create a persistent middle class, not out of
charity, not out of entitlement, not out of revolution, not out of some kind of proclamation, not
out of Luddite riots in the streets, but simply out of honest accounting. You show me AI, and I'll
show you accounting fraud, if I want to put it really harshly.
It's true. That's the flip I'm talking about. There's a great deal more that can be said about this,
of course. Wow, it goes on and on. That's why there's a whole book about it, but this is basically
what I'm up to these days as far as economics work. The book is designed for a popular
audience and has all kinds of stories about other things. I hope it's fun to read, but that's the core
idea. I think the key question to ask about doing well in a market economy is, "Are you
succeeding through growing the market or shrinking the market?" Among Silicon Valley
venture capital firms now, it's very popular to say, "We like funding schemes that shrink
markets." So, for instance, Kodak is bankrupt. By the way, guess what Kodak did. Kodak grew
up in the community and with the same workers, or the descendants of the workers, who'd had
the biggest buggy whip manufacturer. So that morphed into Kodak, and now Kodak's bankrupt,
and the company that's performing approximately the duties that Kodak used to, which is letting
you take family pictures with interesting colors and share them with people, is Instagram.
Instagram sold for $1 billion with 13 people. Kodak supported hundreds of thousands of people
with solid middle-class jobs with benefits and security. So that difference is the difference that
computation has wrought.
Now, the thing about it is I don't begrudge those 13 people. I love success. I love Silicon
Valley's success. I enjoy Silicon Valley, I enjoy startups. I've done a bunch of them. So I don't
have any problem with it. The point is that when we find success, we should find success by
expanding the market, expanding the economy, and you expand the economy by monetizing
more value. That's what expansion is in economic terms. When you monetize less value in
order to concentrate it for yourself, you're actually shrinking the economy to concentrate your
own share of it.
I'm absolutely convinced that if we got to a monetized scheme -- this is not some leftist project
or an anti-corporate project. Instead, I'm convinced the Facebooks, and now it's part of
Facebook, but the Instagrams, the Microsofts, the Googles, I believe we'd all actually do better,
because we'd be part of an expanding economy as tech improves, instead of a shrinking one.
Because to shrink one under the ideology of automation is to pretend that people aren't there,
which means that we're pretending that the value isn't there, which means that the economy has
to be smaller. So it's the wrong kind -- I want us to grow rich. I want us to be successful, but
we're doing it in a wrong way, and the reason it's wrong is that it's not sustainable. We're
swallowing our own futures, just for short-term gain.
All right, that's it.
>> Kevin Kutz: So we have about 10 minutes for questions before we get to do book signing, 10
to 15 minutes, so if folks want to take questions, Jaron, you can just call them out.
>> Jaron Lanier: Yes.
>>: [Indiscernible] more the business model, how would that fit into your picture? Because
some would argue that's the root of evil?
>> Jaron Lanier: I'm sorry. Just say it again.
>>: You didn't talk about ad-supported business models.
>> Jaron Lanier: Oh, ad-supported business models. Well, okay. That's true. I didn't talk about
advertising.
>>: Could you repeat the question?
>> Jaron Lanier: He's saying I didn't talk about the ad-supported business model. You mean
like Google and Facebook ads. So that's true. I didn't talk about that. So the term advertising
has been repurposed recently. Advertising used to be an act of communication. It used to be a
romanticization of a product. I've acted in the ad -- I've been a professional in the advertising
business, because I did jingles for commercials for many years, and I do a lot of work now
actually supporting Microsoft advertising, but that's another story.
So I have no problem with the advertising business, as it's always been. Sometimes I have a
problem. In the book, I describe how I found myself suddenly annoyed by this annoying radio
jingle for a furniture store and realized it was actually my own jingle. So sometimes, of course,
I'm annoyed. But the thing is, what happened with Google is a redefinition of the term
advertising to mean micromanagement of the options in front of people.
The problem is you can't search through a million links, so you really can only look at the ones
that are the most immediately accessible, and by manipulating which ones are accessible, you
manipulate people. And if you have a behavioral model of those people based on big data, then
you can make that manipulation be more successful. Now, I know that the way that we
commonly put it is that that's win-win, because then you're getting the links that are most useful,
blah, blah, blah. But then I ask, "Why aren't you getting those links anyway?" If Google or Bing
are doing their job, there shouldn't be a lot of room for extra paid links, because they should
already be getting you the useful ones. That's sort of a basic idea, right? And so the problem
with it is -- the problem from a consumer perspective is that you start gradually being
manipulated by third parties who are paying to do so, and, inevitably, that means in the long term
you're losing prospects.
In order for the scheme to work, your information has to be free. So, for instance, you get free
music because your choices in music provide a profile of you that's then used to sell you antacids
or whatever it is. But the long-term problem, and the reason it's not sustainable, in the book, I go
through how there will eventually be little artificial patches that can synthesize chemicals. This
is a long thing, but anyway, whatever technology is now making something that can be
advertised as a link on Google or Bing or Facebook will eventually get automated away by free
software, so it will no longer be there as a customer. So Google's business model is gradually
going to evaporate its own customer base, so it's not sustainable. Is that clear?
And then, another problem with it is it forces -- it's the only official business plan for consumer-facing Internet services in a world of free information, so these totally different companies, like
Google and Facebook, with different competencies and cultures, are forced to compete for the
same pool of customers, which is ridiculous and creates this sort of claustrophobic, bizarre
competition that doesn't make any sense. This would be saying that light bulb and horse feed
people should be competing with each other. It doesn't make any sense. Google and Facebook
should be different, but they're not, because there's only one business plan.
So, yes, so I think it's a stupid business model. It's the only legal one, if you really believe in free
information. The only model left is to micro-model people and keep the model secret from them
so you can manipulate them for pay. And then, furthermore, another problem with it is we've all
grown used to the idea that there are these recommendation engines that tell us who to date and
what music to listen to or whatever, or where to buy our plane tickets, but the thing is, we all
know in our heart of hearts that it's a little scammy. We all know -- any social scientist or
psychologist that studies the dating sites comes to the conclusion that the algorithms don't work,
but we make them work because it's not actual science. It's social engineering, and we allow
those two to be confused.
That then creates this atmosphere where big data becomes treated as a form of manipulation
instead of science, which in turn sort of makes us distrust it, I think. It's a whole other topic, but
big data is really important. I mean, real big data, that's not part of fake business schemes, is
critical to our survival. It's the only way we know about global climate change, and big models
are the only way we know about the human contribution to big climate change, so this stuff is
very serious. The public knows about it in this way that they really know in their heart of hearts
is a confidence game, is a scam. And that's really, really unhealthy. So, anyway, there are a lot
of reasons why I dislike the advertising model. That's not to say I don't work on supporting it
while here, because hey, one has to be part of the world and also looking ahead for how to make
the world better.
So I don't think it's helpful to be like this perfect soul and say, "I am just going to boycott reality,
because I don't think it's good enough." Instead, what you have to do is work well within reality
as it is, but then also try to think reasonably about how to gradually improve it. Okay, any other
questions? Yes.
>>: I'm [indiscernible] on the pathway of how we get from where we are to there. A common
example for me is the ubiquitous evening survey phone call, to which my typical response is,
"Well, how much are you going to pay me to take your survey?" It seems like the right model,
right? It costs them about $50 a person to collect data. Share a little bit of that with me, you'll
get better data for less money, it'll all work. It seems like the right model. How do we get there?
>> Jaron Lanier: Right. So the question is, how do we get there from here? It's a hard one,
because we've gone pretty far down another path, right? So in the book I outline a little bit
about that. I don't want to be too prescriptive, because I don't want to commit Marx's error of
presuming perfect foreknowledge, but I think there are a couple of things. One is, every time a
new platform of hyper-automation comes around like 3D printing -- lately, what happens is the
open-source movement grabs it and says, "Oh, we're going to have this open source. All the
models have to be open source," because that's the side of everything that's good and holy or
whatever.
Just for once, just to be experimental, let's make one of those things be paid just to see what
happens. Like, what if 3D models weren't open source? What if we just, as an experiment, said,
"We're not strict orthodox. We're not absolutists. We're just going to try to see what happens."
And if what came out of that is a lot of interesting people doing well and more and better models
and all that stuff, that would be -- so one way is to do isolated experiments, where the isolation is
created by technological change.
Another way is to start theoretically, which is the approach I'm taking, and then to try to sort of advertise it
to politicians and captains of industry or whatever. Another way is if all the companies could
just -- like, there's four or five companies that kind of run the consumer Internet at this point. It's
hyper-consolidated. People always talk about how media is wide open because of the net, but
the truth is, in terms of what actually reaches people, it's more consolidated than it's ever been.
And we could just sort of get together and try a big experiment. I realize it's hard. Just us and
Apple and Google and Facebook, we could just do it. How hard would that be, for God's sakes?
We all get along, right?
>>: You [indiscernible] the newspaper business where they tried to monetize them?
>> Jaron Lanier: Right. Well, the thing about monetizing is that you can't do it in isolation. The
micropayment system genuinely has to be universal, at least in a domain. Like, if it's 3D
printing, it has to be in a domain. If it's only a local thing -- if you're just trying to monetize one
newspaper, it's very hard, because, of course, the open, free thing will route around it. So it does
have to be universal.
I think part of it is ideological, and I'm partially at fault for this, but we've raised a generation of
idealistic young people who are absolutely convinced that free information is the only way for
things to be okay. They have to understand that systemically and empirically, it's just not
working. It sort of works in the immediate sense, but it doesn't work macroeconomically, and it
doesn't work for your lifetime. Yes.
>>: I think you covered this a little bit, but I haven't read the book yet, so I'm guessing. Can you
talk a little bit about what it means to be you in 20 years, when you're talking about what you
feel your life will be like, as an author, as a public intellectual, as a teacher, in 20, 30 years, if
this model works?
>> Jaron Lanier: Oh, I mean -- so the question is what would it be like to be a public intellectual
or a writer in 20 or 30 years. In a sense, I don't worry about that too much, because so few
people are. That's a very small part of society. I'm much more worried about the broader
middle. Like I said, I'm kind of a weirdo. If we designed the future for me, it wouldn't work for
other people. I have to accept, I'm always going to be an outlier. Utopia for me would really be
a weird one, let me tell you. There would be weird instruments at every corner. I'd get infinite
resources in my lab. It'd be like, "Oh, you want your own linear accelerator? Sure, yeah, you
need that." That sounds right, yes. Yes.
>>: Do you have any insights as far as how you would deal with defectors in the new system
where somebody -- you talk a little bit in your book about two-way links, so if I wrote a paper or
something and they linked to me, then I'd get a little bit of that action, and I couldn't charge less
than he charged. But unless you DRM ideas, then what's to stop somebody from reinterpreting
that? Is this no worse than the system that we're in now?
>> Jaron Lanier: I always get this question about how you'd enforce it, and the thing about
society is it has to be mostly voluntary. So I once knew a criminal who was serving time, and he
told me that about one in 20 people is going to be a criminal -- that was his experience. I've
kept watch on that in life in many different sectors of the world, and I think it's a reasonable
estimate. So what we can say is that 5% of people will not accept the system.
I don't want us to become a really hard-ass society with something like the police from the movie
"Brazil" who swoop down on bungee cords to arrest people for copying a file or something. And
especially, by the way, I really don't like enforcing copyright with a really iron fist right now,
because there's no reciprocity. If some kid copies a music file, but meanwhile their life is being
examined by thousands of remote computers to model them and manipulate them, honestly, it's
hard for me to say to that kid, "Oh, yeah, you better respect those copyrights," because they're
being abused all the time, or taken advantage of.
Eventually, what has to happen is there has to be a categorical imperative. There has to be a
Golden Rule feeling. Look, this is a lab with a lot of techie guys. I bet a lot of us know how to
pick locks. I'm just guessing a lot of you here could go out into this parking lot and steal a car
right now, and you wouldn't have any trouble doing it. The reason you don't steal cars is in part
because it's illegal. It's in part because you might have these ideas that it's the wrong thing to do,
but it's also in part just because you don't want to live in a world in which cars are being stolen
all the time. You like the idea of normalcy being the car doesn't get stolen. That feeling, that
broad sense of a categorical imperative of acting in the world in the way you wish other people
would act towards you is really what holds the whole thing together. The police and
enforcement can only do a little tiny bit.
This is another example of a Maxwell's demon fallacy. If you think that some big computational
scheme is going to keep people in line, of course, it's going to break. Give me a break. So this
scheme has to be a social process in which the broad majority of people feel it's in their own
interest, and it has to demonstrably be in their own interest, or else it fails. Enforcement can play
a role. There could be a certain amount of it, perhaps, but it can't be the centerpiece, and it can't
be the main question, and it can't rely on -- I think there could be DRM, but DRM should serve
as just a reminder of what social contract we've entered into. It shouldn't serve as an iron fist.
>> Kevin Kutz: I know we've got some questions online. Can we take at least one and then
have that be a wrap-up?
>> Jaron Lanier: Sure.
>>: The only question online was just about how the market economy works on Second Life.
>> Jaron Lanier: Say again?
>>: How the market economy works on Second Life.
>> Jaron Lanier: Oh, Second Life. So Second Life is an interesting experiment. I think it's a
little less in the air than it was a few years ago, but I was an adviser to it at the start. I'm sure you
know what it is. It's an online virtual world where you control an avatar with a very sort of
low-bandwidth method. It's got a slightly Burning Man kind of a feeling to it, overall.
I think there are some successes and some failures in it. It is monetized, in the sense that people
buy and sell virtual tchotchkes on it. It's not universally monetized, in that a lot of things happen
on it that aren't monetized, so it's like a halfway system. It has pretty poor-quality tools, and it's
a very rough implementation. When the thing was going up, a typical argument I had with them
was, "You can't possibly plan to ship it with only that. It needs to be better." And they said,
"Oh, come on, we need to ship it." It was very much like the arguments we have in Microsoft all
the time, I think.
I think they were probably right, because it had its moment in the sun. I don't know. The
distribution of outcomes is not quite a bell curve, but it's not a stark power law, either. It's kind
of in between, so I'd say it's an intermediate result in terms of the spread of outcomes. I don't
know. It's encouraged me that something can work. I don't think it was perfect, and I don't think
it gives us proof that we understand everything, but I think it was worth doing.
>> Kevin Kutz: Well, thank you very much.
>> Jaron Lanier: Cool.