>>: Each year Microsoft research hosts hundreds of influential speakers from
around the world, including leading scientists, renowned experts in technology,
book authors, and leading academics, and makes videos of these lectures freely
available.
>> Amy Draves: Thank you for coming. My name is Amy Draves and I’m here to
welcome Evgeny Morozov back to the Microsoft Research visiting speaker series.
Evgeny is here today to discuss his book, To Save Everything, Click Here: The Folly of
Technological Solutionism.
The temptation of the digital age is to fix everything, from crime, to pollution, to
obesity, by digitally tracking or gamifying behavior. But when we change the
motivations of our moral, ethical, and civic behavior, we may also change the very
nature of that behavior.
Evgeny Morozov is a contributing editor at the New Republic and the author of a
monthly column on Slate. His first book, The Net Delusion, won both a New York
Times notable book award, and the Harvard Kennedy School Goldsmith Book Prize.
His articles have appeared in the New York Times, The Economist, The Wall Street
Journal, and many other publications.
Please join me in giving him a very warm welcome.
[applause]
>> Evgeny Morozov: Thank you so much, and it's good to be back, and thank you very
much for coming. I guess I'll start by just explaining why I decided to write this
book, having written another book, The Net Delusion, that deals with sort of similar issues.
So I [inaudible] and in part because I myself come from Belarus.
Some of you may know it’s a country that has this very difficult and challenging
political situation. That's where I grew up, and for me the Internet and new media
always, you know, gave us hope that you could actually use them to do something
positive, to bring democracy and freedom of expression to the country.
So I spent some of my early career as a practitioner working for several NGOs in
the former Soviet Union, where we tried to actually use social media and blogs and
so forth to train activists and bloggers in how they can campaign online.
And what I noticed, and what prompted me to write The Net Delusion first, was that
the governments were increasingly using the same technologies and techniques to
track dissidents, to [inaudible], and to engage in propaganda.
And what I noticed is that a lot of technology companies were actually involved in
supplying many of these dictators and authoritarian regimes with tools for tracking
down dissidents, whether it was tools of surveillance, tools for monitoring them
online or through their mobile phones, or tools for spying on Internet traffic.
So I tried to understand, first of all, what some of the moral implications are of
building those technologies for use in our own countries, because many of those
tools are built to be used in America and Europe and then later end up on the
secondary market in the Middle East and China and elsewhere, and also to try to
understand what the consequences were of relying on technology companies in
Washington to help America promote democracy, right?
What I noticed is that a lot of policy makers in Washington were very excited about
relying on Facebook and Google and Microsoft, much less so, but on social media
companies to be the new platforms where the [inaudible] can blossom, right?
So there was a lot of talk about Internet freedom, and there was a lot of excitement
about this [inaudible] Twitter revolution in Iran, there was a lot of excitement
about the Arab Spring, so a lot of people thought that Silicon Valley could be a
powerful and useful ally in helping America promote democracy.
So part of what I tried to do in my first book was to show the costs and risks of
actually offloading or outsourcing some of this democracy promotion effort to the
private sector.
So I tried to understand what happens once we engage these companies in promoting
democracy. And as the title of the book probably gives away, I wasn't very
optimistic about what you could achieve with the help of these companies.
So what I tried to do in my second book was to shift and change my focus a little bit.
So I no longer looked at authoritarian states, and I decided to focus on our own
backyard: to look at the democracies, at what was happening here in America, and at
what was happening in Europe.
And what I saw was that there was a very similar tendency among policy makers
and among the companies themselves to actually get involved in solving big social
problems, right?
They were no longer just in the business of building software; if you listen to the
executives at Google and Facebook, they all want to help solve some problem,
whether it's the problem of obesity or the problem of climate change.
You know, Mark Zuckerberg will tell you that we have built a platform through
which we can tackle and solve some of the world’s greatest problems. And I have a
long series of quotes from Zuckerberg and Eric Schmidt, and many of these other
executives from Silicon Valley who believe that it’s their mission to do more than
just build software.
And they are building apps, they are building all sorts of platforms in which to tackle
these problems. And what I tried to do in the book is to explore some of the
consequences again of tackling these problems in this particular manner, by
building apps as opposed to, say, passing laws and engaging in more ambitious
structural reforms, all right? Because there are obviously many alternatives that we
have at our disposal as citizens of a democracy, and one of those options is to build
newer and shinier apps, and another option is to work on probably less shiny but
probably more consequential political reform, or to try to regulate certain
industries.
I mean, there are all sorts of options on the table. So what I try to do in the book is
to try to calculate the costs and benefits of relying on the private sector, and
especially technology companies, to help us do that.
So now with this philosophy out of the way let me just give you a few examples of
some of the things I am talking about.
So I think the biggest change we've seen in the last ten years is that we now have
a new infrastructure, a privately run infrastructure of problem solving. The existence
of this infrastructure plays a huge role in my argument. And what do I actually mean
by this infrastructure?
So I think what has happened in the last ten years or so is that basically two things have changed.
One is that sensors have become much cheaper and much smaller, so, as you all know
working in technology, you can build small sensors into pretty much every object,
every gadget, and every artifact, and that allows for new types of interactivity,
but it also allows you to supply the user with new types of information that you
previously couldn't build into a cup, or a blackboard, or a table, right?
Previously those devices were dumb, right? They were analog. Now you can
actually build in a sensor that will provide an additional layer of information,
which can result in users making a somewhat different decision when they use a
certain device. All right? So you have all seen the smart gadgets and smart objects, right?
We have smart shoes that let you know when they are about to get worn out. You
have smart forks that because of sensors can tell you that you are eating too fast and
they will vibrate if they sense that you are moving your hand too fast, right?
You have smart toothbrushes that monitor how often you are brushing your teeth
and can actually send that data to your dentist and to your insurance company,
right?
All of that is possible because you have little sensors built into all of those devices,
right, and they allow for new types of behavior, but they also allow you to push and
nudge the user to do something else, right, to adjust their behavior in one way or
another.
Again, you have smart umbrellas, which you may have heard of, which have a built-in
connection to the Internet; they check the weather, and they have a little blue
light that goes on if the forecast promises rain, to remind you to take the
umbrella before you leave the house. In Seattle I doubt you ever leave the house
without an umbrella, but in other cities it comes in very handy.
So again, here you see new types of behavior that become possible as you add new
layers of information to devices and gadgets that were previously either unconnected
or had no interactivity whatsoever.
So this is one trend and I’ll tell you what the political implication of this is a little bit
later. The second trend, which again those of you working on social media issues are
well aware of, is that you can now build a so-called social layer into almost
anything, right, because you have your smart phone with you almost anywhere you
go.
Most of us have a presence on social networking sites, right? So virtually every single
decision that you take right now, you can run through your entire social circle. If
you want, as a sort of choice architect, as a problem solver, you can involve your
entire social circle in helping you decide on something, or have yourself compete
with your friends in some kind of game or some kind of competition, right?
And again let me just unpack it a little bit more. So there is this new app that a lot of
people in Silicon Valley are excited about called Seesaw, right? So what does the
Seesaw app do? It allows you to basically poll your entire social network about any
tough decision that you are facing right now.
What latte drink to buy, what dress to buy, potentially what politician to vote for,
you can just create an instant poll on your smart phone, your friends will get an
immediate notification, they will say what they think about the choice that
you're facing, and then within a minute you'll know what everyone in your social
circle thinks about a particular range of options.
Right? Again, this is possible in part because we carry our smart phones
everywhere, but in part because we have a strong online identity and we carry it
with us almost anywhere we go, right?
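The core of such an instant-poll app is tiny. A minimal sketch in Python of the tally step (the option names and votes here are invented for illustration; this is not Seesaw's actual API):

```python
from collections import Counter

# Hypothetical sketch of a Seesaw-style instant poll: you post a set of
# options to your social circle, friends vote from their phones, and the
# ranked tally comes back within a minute.

def tally(votes):
    """Return the options ranked by vote count, most popular first."""
    return Counter(votes).most_common()

# A friend-circle poll about which dress to buy:
votes = ["blue dress", "red dress", "blue dress", "blue dress"]
print(tally(votes))  # [('blue dress', 3), ('red dress', 1)]
```

Everything else in such an app is delivery, push notifications out and votes back in; the decision itself reduces to this ranking.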
So new types of behavior become possible. Gamification is a trend that some of you
may have heard of: the tendency to take behaviors that were previously motivated
by appeals to, say, morality, or by some kind of civic language telling you that you
have to do something because that's what good citizenship is all about, and now
get people to do them because they can earn points, right, and thus compete
against their friends, right?
So you can actually turn almost anything into a game, because now there is social
currency, whether it is the currency of Facebook or the currency of attention, and we can
all be compensated for the things we do compared to what our friends are doing.
So a major theorist of gamification, a guy called Gabe Zichermann,
wrote an op-ed in November right after the elections, where he argued that one way
to boost civic behavior in America is to start rewarding people with points for
showing up at the election booth on election day, because now you can do it easily,
since we are all carrying mobile phones, and just like you check in with Foursquare
at the bar, you can check in at the polling station.
And wouldn't it be great if you could actually give people rewards for doing things
that boost civic life in America?
You see what’s happening here, right?
So the behavior that was previously expected of us as citizens, because the
assumption was that this is what good, responsible citizens do, can now be
motivated by a very different register of incentives.
So now you can actually build commercial, consumerist incentives, point-accumulation
incentives, into activities that previously were perhaps less efficient but relied
on pure persuasion, on trying to talk to you as a citizen and as a political
subject.
So keep those two trends in mind: the proliferation of sensors on one hand, and the
fact that they are getting cheaper and enable new types of behavior, and on the other
the fact that now almost anything can be made social.
So where do those two trends converge? I found a very interesting design project,
which is not an art project; it's actually something serious that serious designers
sitting in Germany and Britain decided to build, because they are very concerned
about the environment.
So it's a bunch of well-meaning people who actually want to change the world. They
built something called BinCam. And BinCam is basically a smart trash can that is
supposed to sit in your kitchen.
It looks like a regular trash can, but it has a smart phone built into its upper lid,
right? So every time you open or close the trash can, it takes a picture of what you
have just thrown away, right, and that picture is uploaded to Mechanical Turk, which
is a site run by Amazon where freelancers are paid to do things that computers cannot
do yet, or that would take too much money and effort for computers to do.
So you have these freelancers analyzing your photo to see whether you are
engaging in environmentally responsible recycling behavior.
If they judge that you have recycled things correctly, they award you points. And
the picture, along with the points you have just been awarded, is uploaded to a
Facebook profile where you enter a competition against other users of BinCam.
So at the end of the week you can come up with some kind of league table where you
can see which household has won the most points for being the most environmentally
friendly, right?
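The pipeline described here, photo capture, crowdsourced judgment, points, weekly leaderboard, can be sketched roughly as follows. All function names, item labels, and point values below are invented for illustration; they are not the project's actual scoring rules:

```python
# Hypothetical sketch of the smart-trash-can pipeline: a photo of discarded
# items is labeled by crowdworkers, points are awarded for correct recycling,
# and a leaderboard ranks households at the end of the week.

def score_photo(labels):
    """Award points for one photo, given the labels a crowdworker assigned
    to the items in it (e.g. "glass: recycled", "food: landfill")."""
    points = 0
    for label in labels:
        if label.endswith("recycled"):
            points += 10   # item correctly recycled
        elif label.endswith("landfill"):
            points -= 5    # item thrown in the trash instead
    return points

def weekly_leaderboard(household_points):
    """Rank households by total points, highest first."""
    return sorted(household_points.items(), key=lambda kv: kv[1], reverse=True)

points = {
    "household_a": score_photo(["glass: recycled", "paper: recycled"]),
    "household_b": score_photo(["food: landfill"]),
}
print(weekly_leaderboard(points))  # [('household_a', 20), ('household_b', -5)]
```

Note that the only genuinely hard step, recognizing what is in the photo, is done by paid humans on Mechanical Turk; the gamification itself is trivial arithmetic.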
So here is the convergence of the different trends I have been talking about. On
the one hand, you can now have a trash can that, because of sensors and the
participation of essentially these Mechanical Turk freelancers, actually knows what
it's for. It's for throwing things away, right? It knows that it's meant to be used
for recycling.
So there is some kind of basic intelligence, even if it came from elsewhere, built
into the trash can, but at the same time you see that new types of behavioral
interventions become possible: in this case, an act of recycling can be turned into
a game that you are playing against other citizens.
And you see what's happening in this example -- and we can talk about how realistic
it is, and that's a good debate to have, and we can say that no one is going to buy
it, and that's an argument that I'll accept to some extent -- but the logic at play
here is that acts that were previously expected of us for purely political reasons,
right, can now suddenly be recast in a very different language, where the goal is
not so much to save the environment or to be a responsible citizen; the goal is to
collect points, which you can then convert into buying a latte or ordering
something online, or add to your frequent flyer account, right?
This is something that essentially replaces the political language, right, that we
had before with a very different, consumer-friendly business language. And it
doesn't actually matter what it is that you do: whether you are trying to recycle,
or whether you are trying to vote, or whether you are just walking somewhere on the
street and you see litter lying on the pavement, right?
And now suddenly your phone realizes that you’re reaching out to pick up that litter
because it also has a sensor built into it, and it suddenly awards you these points,
and it goes to your facebook and you feel great and everyone feels great because you
are helping to keep the city clean.
But what happened is that essentially you are engaging in this behavior not so much
because you care about the city or because you care about the environment, but
because you are getting some reward for it, right?
And the literature that exists on the sociology of incentives basically tells us
that whenever you replace non-market incentives with market incentives -- which for
me is essentially what gamification is -- you are replacing something that used to
be about philosophy and morality and appeals to ethics with appeals to what we want
to do as consumers, which in this case is to accumulate points.
You see that people become reluctant to do things because now they expect a payment.
So probably the most interesting literature on this comes from Israel. There was an
interesting study of a day care center that some of you may know about.
So there was a day care center, and basically some parents were late to pick up their
kids all the time, right? Say five or seven percent of parents were always late to
show up at the day care center. And of course, when they were late, the staff of the
day care center were not very happy with those parents, right? Because they had to
stay extra time, and they gave the parents angry looks, and the parents tried not to
be late.
So what the center tried to do was to start imposing fines on the parents, right?
Now, whenever parents showed up late, they would be charged 10 or 20 bucks for it.
And the moment they started implementing fines, they noticed that people started
showing up late much more often. So now it was no longer five or seven percent, it
was like 20 or 30 percent, because the moment you introduce these monetary
incentives, people realize that a fine is something they can just pay and not worry
about. They no longer need to worry about the angry looks from the staff, because
they feel that they have paid for the discomfort with that fine, that 10, or 15, or
20 dollar fee.
And there are many other examples. There is a very interesting example from
Switzerland, where a village was asked to basically accept some kind of waste dump
built next to it, right? And the villagers were told that they had to do it for
political reasons, because that was the only place such a dump could be built.
And, you know, 10 percent disagreed and said, we don't want to do it. But then
something else happened: officials came and said, well, why don't we pay you for
accepting this dump?
And the moment they started paying those people, many more of them said no, we
don't want it. Because when you talk to them as citizens, in the political language,
they understand that something is required of them.
Once you start monetizing this, once you start introducing the market [inaudible]
into this, people start behaving differently, right?
And the kinds of responsibilities they used to accept, they no longer accept,
because they think that this is something optional, right?
So for me, for example, one of the critiques I'm making of gamification in the book
is that while it might allow us to increase efficiency in some respects -- it might
help us increase it locally, so you might be able to get people to show up at the
voting booth, to recycle properly, or to pick up litter -- once those incentives
are missing from other parts of our daily existence, people may no longer do the
things that they do now, because they have come to expect to be rewarded with
market incentives in pretty much every field of behavior, right?
So now that you expect to be rewarded with points for voting or recycling, when you
see litter lying on the pavement you may not reach for it unless there is a proper
reward coming, because now you treat everything as a consumer and no longer as a
citizen.
So this is sort of the broader critique of incentives and gamification that I'm
building, because I think there is something very perverse in how many of these
solutions are being sold to us. They are being sold to us as something that can
increase efficiency, and we never really get to talk about the fact that we are
replacing something that used to be regulated through politics with something
that's now regulated through the market.
But the broader attack I'm making in the book goes far beyond just gamification,
because I think there is something very narrow-minded about the kind of politics
that is often smuggled in along with such technological solutions.
So many of you have heard about the quantified-self movement, right? And self-tracking and life logging especially. At Microsoft you must have heard about it.
So there are now a lot of people who try to track everything about their lives, and
they don't even have to make a conscious decision, because our gadgets track this
stuff anyway, right?
So your iPhone might track how much you are walking because it has sensors built
into it, right? Since you have sensors built into all of your smart devices, you no
longer even need to buy any extra gadget or make an active choice.
So look at what Google has done, for example, with Google Now. I mean, how many of
you are familiar with Google Now? All right. So Google Now is this app for Android,
right -- and now I think it's available on the iPhone as well -- which basically
analyzes everything that you do on different Google services.
So it analyzes what you do on Gmail, it analyzes what you do on Google Calendar, it
analyzes what you do on YouTube, and then it tries to make predictions about what
you might be doing in the future.
And it also tries to do some of those things for you to make your life easier, right?
So the example that they usually use is that you have a plane reservation in your
inbox, and you're about to catch a flight somewhere. And since there is a
reservation in your Gmail, Google will automatically check you in for that flight,
right? It can check the weather at your destination and tell you that it will rain,
so you need to fetch an umbrella.
And it will also tell you that the traffic conditions on your way to the airport
are bad, so you need to leave an hour earlier, right? So it can do all of those
things in real time, again by analyzing where you will be going and by studying your
behavior. And it will make those predictions about the future.
The interesting thing is that now they also provide another kind of information:
at the end of each month they show you how much you've been walking, right? How
many miles you've walked that month, and how it differs from the months before.
So they'll show a percentage change, right? Those of you familiar with behavioral
economics will recognize that this is a kind of nudge, right? The proper term for
this is nudging, and Google is trying to get you to walk more -- I mean, that's the
implicit assumption that they make -- by basically presenting information about
your current behavior, right?
The idea here is to show you that maybe something is wrong, that you may need to
exercise more, and maybe you need to make some other adjustments. But the point,
again, is that this infrastructure for problem solving is there, it's tied to our
gadgets, and now it becomes possible to have all sorts of interventions that were
impossible before.
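The computation behind such a monthly card is trivial arithmetic. A sketch of the percentage-change "nudge" (the mileage figures are invented for illustration):

```python
# Hypothetical sketch of the monthly walking summary: compute the
# month-over-month percentage change in miles walked, the number a
# Google Now-style card might display to nudge you.

def walking_change(miles_this_month, miles_last_month):
    """Percentage change in miles walked versus the previous month."""
    return round((miles_this_month - miles_last_month) / miles_last_month * 100, 1)

print(walking_change(42.0, 50.0))  # -16.0, i.e. a 16% drop this month
```

The nudge is not in the math but in the framing: showing you the drop, unasked, at the moment you open your phone.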
From the perspective of a policy maker, right, this is wonderful! Just think about
it from the position of someone who sits in Washington and has been trying to solve
a problem like obesity, or climate change for that matter, for a very long time.
You have tried everything. You have tried regulating the food industry. You have
tried regulating the energy industry. You’ve tried building infrastructure. And you
know, it's very hard. There are a lot of lobbyists. You have to go and build political
capital and you have to do all of those things.
Wouldn't it be easier if you could just go to Google and have them remind
everyone that they need to walk more, right, or exercise more, or eat more
vegetables?
And you have to understand that this infrastructure for problem solving that I
mentioned is expanding. Now that we will all be wearing Google Glass, right, even
newer types of interventions will become possible, because even more kinds of
information will be collected, analyzed, and processed.
So if now it's tracking how much you are walking, it will become possible to track
what you are eating, because the glasses will process what is on your plate, right?
And based on that, you can think of all sorts of other behavioral interventions
that might help to solve the obesity problem.
Researchers in Japan did a very interesting experiment last year. They took a
system like Google Glass and basically had it do one thing: you come to a
restaurant and order a steak with some fries, and the smart glasses try to make
that portion look much larger than it is, so that you get full sooner.
And that's it. Their stated goal is precisely to help fight obesity, right?
But you see that, again, as this environment gets smarter and becomes easier to
manipulate, new types of problem solving become possible.
So in a sense, we as a society, as people living in a world where we still care
about democracy and reform, need to figure out how to build a [inaudible]. Because
there are more and more ways to solve problems, as I've said. You know, you have
these smart trash cans, which can analyze everything that you throw in them and tell
you to do something differently.
That was science fiction 15 years ago; now it's reality. You know, you have these
smart glasses, which can not only make portions look larger -- I mean, if they think
that you've been eating too much fried chicken, they can have fried chicken
disappear from the menu that you are looking at, right?
I mean, you might laugh and say that this is science fiction, but again that’s an option
that’s on the table, right? And if we really think hard about solving a problem like
obesity we need to know why we don’t want that option. We need to be able to
articulate why that option looks ugly.
And my problem is that the people who build these things like smart trash cans not
only cannot articulate why the project is ugly, but are actually convinced that this
is the best thing ever, that it will actually solve many problems, and they are
fully on board with this as a problem-solving mechanism.
And what I'm trying to do in the book is to show that there are basically always
other ways of tackling problems, ways that might actually operate at a higher level.
So of course it's one thing for me to be told that I need to walk more, because
Google has tracked how much I'm walking and it knows that I'm not walking enough.
But that's not going to solve the problem if there is no infrastructure and there is
nowhere for me to go, right? Because in order to be able to go somewhere, it's nice
to have pavement and public spaces, and to actually have places to go to other than
the mall or the highway, right?
I mean, this is something that you couldn't tackle with an app, right? You need to
tackle it at a different level, by building infrastructure. And it's the same with
food, right? Of course, you might be told you need to eat more vegetables, but it
would be nice to do something about the food industry and the fact that it
advertises its products to children in ways that are poorly regulated.
This is something that you need to solve by passing tougher laws in Washington and
by taking on the lobbyists, not just by building apps. I mean, you can see the
evolution -- I don't know how many of you followed Michelle Obama's efforts to
tackle the obesity problem in America, right?
When it started in 2008, it was a very ambitious effort: we will go and take on the
food industry, and we will make sure that we tie their hands. But after five years
it all became about planting vegetables at the White House; it was all about moving
more, right?
We need to get people moving. Of course it's nice to get people moving, and with
Google Glass and smart phones you can be told that you need to move more, but it
doesn't solve the problem, because [inaudible] that problem needs to be tackled
through a different set of solutions, right?
It needs to be tackled at a macro level of reform in politics and not at the level of
technology, right?
And my fear is this: if this new infrastructure for problem solving were just a
bonus, right, just a way to supplement existing efforts to tackle all of those
problems differently, I think it would be a wonderful idea. But the problem is that
the intellectual climate we live in right now doesn't treat it as a bonus; it
treats it as a full-blown replacement for other efforts.
And that happens for several reasons. It happens in part because governments have
tried everything else, and they know that it's very hard to take on these powerful
industries, be it the food sector or the energy sector. We know that they all
[inaudible] innovation, and they want to be seen as experimenting with apps, again
because this is seen as something cool.
We know that all of them are very excited about behavioral economics and nudging.
There is an entire unit in the British government whose job is just to find new
nudging solutions. So they want to nudge citizens to do more.
So the confluence of all those factors results in us basically taking this new
problem-solving infrastructure, built and provided by Silicon Valley, as our default
way to solve problems, right?
And this is what I find so dangerous, in part because, again, I think it's a kind of
politics that basically treats the existing system as a given, as fixed, right? And
all you can do is change the behavior of the citizen.
So the way it starts is that you are told that you need to adjust your own
behavior -- walk more, walk less, eat more, eat less, recycle better -- but you are
not doing anything about the system itself, right?
You are not reforming the system; you are reforming its users. And I think that
while it's important to reform the users, it's important to reform the system as
well.
And it's at this level that I think most of these apps fail; they cannot deliver.
And if our policy makers do not see that, then I think we are in big trouble. But
there are, of course, as you all know, many other concerns about this private
problem-solving infrastructure. Concerns about privacy would probably be number
one, in part because it is now possible for the FBI, or any other law enforcement
agency, to go to whoever owns that smart trash bin and ask them what was in that
trash bin three weeks ago, right?
Because there is a record of the photos, and of whatever data came through those
sensors, stored somewhere. So suddenly that information becomes discoverable in a
way that it wasn't just 10 or 15 years ago. All right?
It's the same with that smart toothbrush. You might think that it's completely
trivial -- who would ever want to know what's happening in your mouth? -- but now
there is an agency somewhere that collects information about the movements of the
toothbrush in your mouth, and if the FBI needs that data, they can go and ask the
provider for it.
And again, that's something that was impossible before. With Google Glass I think
the implications are also quite obvious, right? I do think that whatever feed is
generated by the glasses will be stored on a server somewhere, and of course law
enforcement will need a warrant or some order from a court, but again, that data
will be discoverable, right?
So there are additional costs in terms of privacy attached to many of those solutions,
which might actually negate the benefits, right? So we might think that this is all
great and we are solving problems, but at the same time what’s happening
politically is that we are creating newer and newer ways for law enforcement agencies to
abuse this infrastructure, and I think they will be abusing it pretty soon.
I mean, the other part, and that has more to do with the quantified-self movement and
its consequences, is that I actually don’t think that building stronger privacy tools
here will necessarily help us. In part because there are good economic incentives
for people to actually track what it is they are doing and use this problem-solving
infrastructure for their own benefit.
Because if you can track your own health and if you can track your own driving
habits, and if you can prove to your insurance company you are actually a much
better person than they think you are, you will end up paying less for insurance,
right?
So there are good structural incentives for most of us, at least for those of us who
are better than the average person, whatever that means, right, to actually go and
track how much we are walking, what it is we are eating, and how safe we are
driving because that data in itself can give us an economic advantage because we
will end up paying less.
And that’s already happening with people who install sensors in their cars and then
they can take the data to the insurance company and they’ll end up paying less.
If you think about this logic, basically it’s to your benefit. And you will see all sorts
of intermediaries and some of them are already quite active to allow you to record
that data and put it up somewhere online, and then to have secure access to that
data and sell it to other companies who would be able to come and take it and look
at it and do whatever they want with it.
The problem with this brave new world, if you will, is that those of us who are
less successful than the average person, or those of us who have something to hide
would no longer be able to opt out.
I mean, right now it’s being sold to us as something where we all have a choice,
whether we want to participate in the system or not. We are being told that if you
want to track yourself you track yourself, if you don’t want to track your behavior
your health or your driving you don’t have to.
But of course once this reaches a critical mass of users, people who refuse to self-track will be seen as very suspicious and as people who have something to hide,
right?
I mean, try not having a mobile phone and not having a Facebook account now, and
being a young person who wants to go and rent an apartment on Craigslist. If your
landlord cannot find you online on Facebook it raises all sorts of questions. You
actually already see now a trend on Craigslist of landlords demanding in advance
only to see applications of people who have Facebook accounts, right?
Again, institutionally this is no longer optional, right? So of course you have
a choice not to be on Facebook, but if our institutions and if our culture has been
rewired in such a way that carrying this online identity with us everywhere we go
brings benefits, and not carrying it brings costs, then it’s no longer optional, right?
And then people who refuse to participate in this will be treated with suspicion and
they will be the ones bearing all the costs, right? And we don’t actually have a very
good system and a very good way of discussing how we should balance the benefits
of those of us who want to self-track in our own interest and the interests of those
who don’t, right?
We don’t have a good framework justice-wise to decide what needs to be done here.
And I think that’s another conversation that we need to have, because if you think
about it, your decision to self-track indirectly affects my decision whether to self-track or not.
This is a very different ethics. Right now it’s being presented to us as being
completely autonomous. I do what I want to do, you do what you want to do, and
it’s almost like throwing stuff away and using energy before the climate change
problem hit us.
You pay for stuff. Your electricity is being monitored. You pay for how much you
are using. You don’t really have to care about anything because everything is priced
perfectly. But of course you realize that all of those pricing assumptions are based
on models that treat energy as essentially infinite and they do not take climate
change into account.
And if you start taking those things into account you will end up with a very
different pricing model but you will also end up with a very different ethics, right?
And I think that’s essentially what will happen with privacy. That will happen with
self-monitoring.
We have to find a way in which we can have a more robust discussion about the ethics
of self-monitoring. Because I think if we just let it go the way it goes now, where
anyone can do whatever they want without thinking about some of the broader
ethical implications we will end up with people who are the weakest suffering the
most, right?
And monitoring yourself also takes time and takes extra [inaudible], and if you work
five jobs you don’t necessarily have time to monitor yourself, or if you monitor
yourself you might discover that you sleep only four hours a day and that’s the
consequence of having five jobs.
And then, of course, it might affect your insurance status and all sorts of other
things. And then there are some structural issues and structural consequences of
forcing the self-tracking logic on everyone that are not necessarily positive.
So the basic sort of argument then, the basic message I’m trying to deliver in the
book is that many of the solutions, whether it’s self-tracking or whether it’s
gamification, or if it’s the proliferation of sensors and this new type of nudging that
becomes possible, all of them may seem very tempting and all of them may seem to
promise us so much at an early stage. And we might think that they boost efficiency
or that they allow us to push people to do things better, but all of them also have
hidden costs, right?
And some of those costs have to do with the fact that we are not solving the root
problems, or the root causes of any of those problems. Some of them may have to
do with the fact that there are actually huge consequences to the technologies like
self-tracking in terms of privacy.
And all of those costs need to be made visible before we embrace many of those
technological solutions. I think I’ll stop here and we can have a debate and a Q and
A. Feel free to ask me anything and thank you so much for coming.
[applause]
>> Evgeny Morozov: Yes?
>>: I agree with all the privacy stuff definitely. I had some questions though about
the first thing you were talking about with the recycling example. I wonder about
this moral subject that is sort of accepted here implicitly. That there is this moral
subject that actually reasons morally and recycles or doesn’t recycle as a result of
that and that’s a normal state of affairs and it’s being threatened now by
technological change and [inaudible] and whatever else.
I wonder if it’s really there. I mean, the reasons that people do whatever they do in
society are largely from social pressure, from religious dictates, from laws, they do it
to avoid punishment.
I wonder how realistic an expectation that is or how much of a [inaudible] the
moment is gone now? That there’s actually a moral reasoning that’s going on among
most people most of the time in their everyday behavior.
>> Evgeny Morozov: Well, again I won’t say that we’re living in some kind of
paradise where we all behave morally and now it’s suddenly being threatened by
these new types of incentives. I mean, that of course is not the case.
But I think the question here is what kind of future do we want to build?
Do we want to build a future where we completely bypass the language of politics
and the language of deliberation?
Because for me, I think another way to tackle this problem would be to have a more
robust debate about things like recycling and climate change and make sure that
people are aware of their responsibilities and not just bypass that level altogether
and start rewarding people with coupons and virtual points for things that someone
somewhere has decided to be moral, while -- perhaps we’ll figure out that recycling
may not be the best way to go and maybe there will be some other solution.
But again there needs to be some kind of debate, right? Citizens need to be
confronting those issues and they need to be thinking about them, and I think
keeping this debate within the framework of morality and ethics -- and ethics is one
of those -- you mentioned religious dictums and, you know, social pressure.
That’s what ethics is, right? The set of moral imperatives that you tend to follow
because you believe that that’s the right thing to do. And displacing them with
commercial logic where -- to me it feels like the end of politics altogether.
It’s like we’ve given up on the political language. And instead of treating citizens like
they’re all [inaudible] we start treating them like children, you know? It’s like, you
know, we’ll be giving them good rewards and reinforce daily actions like walking
dogs, you know?
We are rewarding them for things that marginally help solve the problem of climate
change, but recycling alone is not going to change it. For me something like recycling
would be a way to hook people into the thinking process and then make sure that
they confront other decisions in their life with this sort of environmental sensibility.
I mean, to me that would be the point of getting them to recycle. It’s not necessarily
because recycling helps the planet, it’s because it keeps them within this moral
political framework, right? And that in itself influences how they behave in the
political arena, who they vote for, you know, what contributions they make.
I mean, if you kind of cut all of that out you’re just left without civil society, without
any activism, without any politics whatsoever.
If they’re being paid in coupons for recycling and that logic is also taken to motivate
them to do other things in life that were previously done for political and moral reasons,
what kind of politics is left?
You know, like, all you do is spend time on facebook to redeem the coupons you
earned through recycling. So to me it’s a very unambitious and kind of dangerous
future.
So I’m not saying that the present or the past are very nice, they aren’t. But I don’t
want to give up. And to me you might say that I’m utopian or whatever, and that
would be an interesting charge, but I don’t want to give up on the idea of citizens as
being capable of deliberation.
Yes?
>>: It sounds to me like you are saying that we have an over-reliance on technology.
Or are you saying that the institutions of a civil society are broken?
>> Evgeny Morozov: Yeah, I mean look. What’s happening is that right now there
are several things. I mean, I have a long chapter in the book attacking the very idea
of the Internet.
Actually saying, now it [inaudible] throughout the whole book because I think at this
point the Internet now is being invoked in all sorts of ways to justify all sorts of
policies and interventions. And one of the consequences of this is that we tend to
think that we are living through unique and exceptionalist times, and it’s almost like
the second coming of the printing press.
And once you have the second coming of the printing press all sorts of interventions
become possible. So a lot of people think that what Wikipedia tells us is basically how new
ideas and new forms of [inaudible] work, and you can then go and remake, say,
politics, based on Wikipedia’s template.
So people do look to the Internet as a source of answers, right? And they do it
because they think that it is the new printing press, right?
So what I’m trying to do in the book is to show that maybe the times we are living
through are not as unique and not as exceptional, and that some of the [inaudible]
will to change that exists now derives solely from the fact that too many people are
convinced that it is the second coming of the printing press and we need to go and
radically reshape and redo everything from politics to how we fight crime, to how
we recycle.
So part of it has to do with our own perception of the uniqueness of the
historical situation we’re in, but part of it, I imagine, has to do with
the fact that we’re living through times of austerity.
Governments don’t have budgets to solve problems. So they are very happy to
outsource some of the problem solving to [inaudible]. Some of it has to do with the
fact that Silicon Valley and technology companies position themselves as basically
being in the business of problem solving.
Why do they do that? I think one of the reasons is that it allows them to avoid
greater regulatory scrutiny, because if policy makers expect that Google and
Facebook, and perhaps Microsoft, are going to change the world and they are going
to help us tackle problems like climate change and obesity, why would you want to
over-regulate that industry?
And you don’t want to tie their hands. So perhaps it’s okay to be collecting all the
data if you need the data for better nudging.
So there are good reasons for executives of these companies to play up their own
humanitarian role and their own humanitarian mission. And I mean, there are many
other secondary reasons here I can give you.
I also think that, having studied closely some of the remarks that someone like Eric
Schmidt makes, if you follow his remarks very closely you see that he clearly
positions Google, and Silicon Valley by extension, as being the anti-Wall Street,
right?
People go and join Wall Street firms because they want to make the world worse,
and Silicon Valley is the place where people go to make the world better.
So you know, all of the developers and programmers who can’t decide
between joining a hedge fund and joining Google should go work for Google, because
at Google you will not only be organizing all of the world’s knowledge, you will also
be solving climate change.
I mean, that’s the rhetoric, and it’s almost word for word.
So there are many reasons here why this is happening, but partly I think it has to do
with the fact that we’ve just run out of options. And that brings me back to my first
book because in my first book one of my arguments was that the reason why all the
[inaudible], blogs and social media look so exciting to policy makers in Washington
is because in my country, in Belarus, people who sit on the [inaudible] desk at the
State Department have tried every single other tool in the last 20 years.
They’ve tried funding political parties, they’ve tried funding NGOs, they’ve tried
funding investigative journalists, environmental groups, and they’ve funded
everything. In 20 years nothing has worked. It’s still as bad as it was.
So when you drop this new shiny technology on them and tell them blogs can
overthrow dictators, of course they just say great, put more money into this. Let’s just
start funding groups.
So I mean, there is a certain sense of hope that comes with many of those
technologies and in part because everything else has been tried and everything else
would require very complex, long-term, sophisticated and very uncomfortable and
dirty work.
You have to go and fight with industry groups in Washington. You have to go and
fight with dictators in Belarus. Wouldn’t it be better to have a bunch of kids in Palo
Alto build a start-up and build an app, and then use that app to try to radically
transform the world?
I mean, from a policy maker’s perspective it looks much more appealing,
right?
Yes?
>>: I guess I struggle with this being such a straight line. How can -- couldn’t you
argue that sure, people will start to do this and this and this, and then something
totally unexpected that you can’t predict today could happen and the world would
just be totally different. Maybe come back this way.
I mean this kind of progression isn’t going to be a straight line, I wouldn’t think.
>> Evgeny Morozov: I mean, I’m not saying it is. But again, I mean I can only
operate from the empirical data that I have seen now, and the trends I see in the
public discourse.
The way I see a strong interest in gamification from public institutions and
businesses, yes. The number of times --
>>: Huh?
>>: How many years have you studied? I don’t know, I haven’t read your book so --
[inaudible]
>> Evgeny Morozov: I mean, the difference on technology -- I mean I can tell you the
number of times gamification was used and mentioned on the front page of the New
York Times was zero a few years ago, and this year alone it was probably five, right?
I mean, do I see a greater interest in these methods? The quantified self and self-tracking and nudging, do I see the [inaudible] their own unit? Yes. Do I see
[inaudible] draining the White House? Yes.
I mean, based on that I do see that certain trends are happening. Could they be
reversed? Possibly, and I hope they will be reversed. That’s why I write the book
and make that argument.
But you know, I just don’t think it’s responsible as an intellectual to just hope for the
better. Like, you know, the better may never come. I’ve seen it in Belarus with the use
of blogs and social media.
I would love people in the State Department to just do whatever they want with
social media, but things will only get worse. I mean look, I worked for an NGO where
our job was to take money from western donors and to go and try to run our own
little projects where we recruited bloggers and did podcasting, and you know, what I
realized is that we were actually doing much more harm than good to social media in
Belarus, in part because what we were doing was plucking people out from already
functioning and existing projects where they were building interesting technologies.
And they were doing it because they liked it. They were not even paid for any of
this. We came with a lot of funding from the West. We plucked them from those
already existing projects and we put them on grant money and told them, look guys,
if you fail we’ll give you another grant.
So of course they kept failing for like five years because the money never stopped
coming. And in the end we ended up with this bunch of lazy people who didn’t want
to do anything unless they got a grant.
I thought our intervention was just disastrous, right? And we all had very good
intentions. I mean, has it changed? Yes. But do you think when I started making
that message and telling people that what we were doing was harmful, like do you think
people just woke up the following day and just said, oh yes, it is harmful, just kill it?
No. It requires a lot of arguments. You need to go and fight with those people
publicly. People’s minds don’t just change overnight, right? That’s why I think I
need to go and fight with people who promote gamification. It’s an entire industry
of consultants who, all they do is just go from company to company and tell them
that gamification is the way to solve all their problems.
>>: I appreciate shining a light on it, but there could be some good things that come
out of it, like people become more familiar with what's causing their obesity and
then they stop having to play the game and they start doing these things without --
for the right reason, or they start going to the polls because they realize, hey, when
I go to vote, it actually makes me more excited about my community, and the fact
that they're checking in isn't the -- I'm just saying it could be that that -- that's why I
think it's good to focus or shine a light on it, but it's not all doom and gloom.
>> Evgeny Morozov: But, again, like my point -- in the book I draw a distinction
between what I call numeric imagination and narrative imagination, right? You
want to develop narrative imagination about the world and how it functions. In
order to understand how systems work, you want to have a more holistic
understanding of how things interrelate.
When you think about -- look at Coca-Cola. So what has Coca-Cola been doing in the
last few weeks? So they said, fine, now we'll start putting calories like on every
bottle. But, you know, who cares about calories if it's all about sugar? Right? I
mean, there are ways in which self-tracking can allow you to learn something about
nutrition, but you still wouldn't develop a holistic picture and we will just be, like,
making things worse, you know. Like this smart fork that is being advertised, it
cannot differentiate between different types of food. So whether you're eating peas
or whether you're eating a steak, like it cannot -- it doesn't know that if you're eating
peas, maybe it's okay to, you know, move your hand and your fork 10 times, and if
you're eating a steak maybe just one time per ten seconds is enough. Like it doesn't
differentiate.
So, like, what kind of complex narrative will emerge from that sensor? Like it
wouldn't. So, I mean, I'm with you that often it can result in more complex
understandings, but very often it doesn't, right? So, again, there are ways in which
you can do that, but you need to strive towards it, you need to build, you need to
think about it. So I'm not saying that we need to -- actually, I have a whole chapter in
the book. The last chapter is all about ways in which you can actually use
technology to get people to think more in new and different ways.
You know, I have this -- I borrowed from a theorist, a guy who came up with
something called adversarial design. You can actually build artifacts that by
malfunctioning will actually make you think a little bit harder about the world you
live in and think holistically. So, I mean, I'm with you on the need to get to the thinking.
Yes?
>>: So part of the problem you're describing is just that technology, I guess, lends
itself better to the language of economics and transactions. Is there something
about the structure of technological [inaudible] that you think facilitates other kinds of
discourse, other kinds of thinking, other kinds of -- you know, like you were saying,
just more -- for example, facilitating political and moral discourse around the
choices that you were talking about?
>> Evgeny Morozov: Yeah. Look, actually, I'm very -- I don't like talking about
technology with a capital T as a force in nature or culture or whatever. I just don't
think it's the right way to think about technology. So for me, this question makes
sense only in as much as you can recognize that there are certain groups and certain
intellectual schools of thought, if you will. People who do law and economics, for
example -- you know, lawyers who do law and economics -- they of course will
try to use technology to maximize efficiency or to maximize innovation, because the
main goal for them and the only goal that they know how to calculate is efficiency
and innovation.
But, of course, if you think about it holistically from the perspective of democracy
and, you know, social life and public life, perhaps in certain areas we would like to
have inefficiency and we would like to have ambiguity and we would like to have
opacity, and occasionally it's okay to leave certain spaces where politicians can
actually be hypocritical every now and then because they have to talk to so many
different constituents that if you tie their hands, they'll just stop being effective.
But if you do not approach your attempt to reform politics with any basic
understanding of how politics functions, you're not likely to develop that insight.
Right? So part of my attack on this solutionist enterprise in the book is that many of
these companies and many of the start-ups proceed solely by analyzing their
tool and what it can do, and they think that, well, if this tool can help us -- can record
everything and analyze every single statement ever made by a politician, that by
default is a good thing and it's going to help politics without asking any questions
about how politics actually functions.
Now, part of my effort in the book is to actually urge people building these tools to
spend more time trying to understand the areas they are trying to improve.
Whether it's education, whether it's politics, whether it's crime-fighting or whether
it's obesity or whether it's climate change, unless you go and ask questions about
what makes those areas work, you're not likely to come up with effective solutions.
You cannot just, you know, start by thinking that, well, now we can store more
information and now we can build sensors into everything, and if you combine those
two in the right proportion, you'll be able to fix politics. That's not how it works.
So -- but to answer your question more directly, again, as I mentioned in the last
chapter, I do talk about all sorts of devices that by -- so let me give you an example.
So the designers in Germany have built something, like they've built an extension
cord that looks like a caterpillar.
So every time you leave devices that are in standby mode in that extension cord,
that caterpillar starts twisting as if it were in pain as if to alert you that you need to
spend more time thinking about the environmental consequences of your behavior,
right? And that's a somewhat different paradigm from the conventional paradigm in
design where all you would want to do is to actually hide that extension cord. You
would want to make it as invisible as possible so as not to have you think at all
about the fact that you're using energy, right?
Because the paradigm that they used when designing those extension cords
previously was that energy's abundant and you don't need to care about scarcity so
let's just hide it and let's just make sure people don't ever ask that question.
And if you approach it from a somewhat different perspective and you think that
energy is not abundant and it's okay to get people to think politically about its use,
then you'll probably end up designing different artifacts.
Now, whether they're going to sell on the market, whether people will find them
convincing, I mean, that's something that I think is a good question, but, again, there
is no reason why we cannot develop consciousness in consumers that they want
these kind of devices over the purely functional ones. I mean, now we have people
who go and pay extra for bananas because those bananas are fair trade. I mean,
rationally there is no reason for you to be paying more for bananas just because
they have a sticker on them. You do it because you attach some political ideal to that
banana.
I mean, there is no reason why you wouldn't want to go and buy an extension cord
that may not be as functional as the one that your neighbor has but that articulates a
different political ideology. There is no reason why it shouldn't be done, but just
because we're committed to these paradigms of efficiency and functionalism, we
don't consider those options and no one is particularly interested in raising the
political consciousness of consumers.
Yes?
>>: You bring up a good point about efficiency and -- I was just thinking that, like, my
electricity/gas bill will tell you how you rank with your neighbors. For
some people, that's enough, you know? I think there's different incentives that are
going to work.
But I'm also thinking with all this data as it's being collected, I mean, it can also be
hacked and construed in a way that, like you said, there's incentives that you'll save
money because you were a good consumer or whatever. But that can be
compromised. So, I mean it ushers in a whole era of authenticity of data and, you
know, other problems and I think privacy being a big one.
All this collection and always searchable, never forgotten isn't going to work for
everybody. And also older people, like my aunt who's in her 80s, aren't going to
really have a smartphone. A lot of people aren't going to capitalize on that or they
just won't care.
So it's sort of like this social, economic, political alignment with IT that at certain --
depending on the culture, it has to have some of those factors that really line up, like
the technology is built to solve the problems that society will accept, that there's an
economic base for, and that politically, you know, the right level of regulation that's
going to mean society -- you know, I mean, there's a lot of factors plus just the legacy
of how people think and adopt, I mean -- and the hacking piece. I'm throwing a lot at
you, but --
>> Evgeny Morozov: No, no. Look, I agree with you. Again, I have -- [inaudible]
everything you've said, I have less -- I have less trust in numbers probably than most
people in this room, in part because I don't think that the numbers actually are as
good as we think at telling us complex narratives, and I think that very often we --
and that's my problem with self-tracking and the quantification of our behaviors. I
mean, if you start reducing complex narratives and complex systems to a number,
you end up with a very simplistic understanding of how the system functions and
also how it changes.
So knowing your electricity bill might not actually tell you anything about where the
energy comes from and what are the factors [inaudible] and, again, I don't want to
live in an environment where we as citizens are constantly overwhelmed with this
information. I mean, that's another challenge. Because, again, I don't want you to be
pushed to think about highly political matters every time -- because everything is
political. How this table is made is political and this chair and this projector, and
they're always made in China with, like, workers and why not think about the
workers when using the projector. I mean, there are all sorts of things you can be
thinking about, right?
And what questions will be opened is itself a political question, right? Why force
you to think about energy and not about workers in China? So, I mean, there are all
sorts of questions here, and, I mean, I must confess, I don't have a full-blown theory,
a political theory, which will tell you which questions we want to leave open and
which questions we want to leave closed.
But I think the fact that so much of this infrastructure is built by the private sector
now, I mean, allows me to go and make an argument to companies that perhaps
there is something to be said about building new types of political consciousness
amongst consumers, right? And companies, you know, that will -- I mean, Starbucks,
that's your home company here, I mean, they figured out that people one day or
another will start caring about fair trade, right?
And, again, this may or may not be a good way to solve the problems of economic
inequality, but that allows them to create new types of products that in
themselves have a political dimension and a political consequence.
There is an entire debate and political theory about the consequences of ethical
consumption, and, you know, some people say that this is the best thing ever, some
people say that ethical consumption doesn't do anything to address the root
problems of many of those -- root causes of many of those problems, but I think
there are ways in which technology companies can actually tap into this longing that
I hope will sooner or later emerge among people to actually have a different set of
gadgets that do actually recognize that climate change is happening and that, you
know, they want to make sure that they are confronting that problem in their daily
life rather than, you know, being confronted every two years when Al Gore makes a new
movie.
And I think that that will change and that will happen and then one way or the
other -- I mean, it's happening already with some of the recycling stuff. And I think
it will happen at the level of consumer electronics. So for me the question is
how can you also take some of those insights and transpose them into the world
of privacy, to the world of how we use computers, how we use our browsers. I
mean, it's one thing to be -- like if you're using a browser and you have a pop-up
window telling you that you're being tracked by 500 websites right now because
you're visiting dictionary.com or whatever, any other website, does this information
tell you anything? Like, I'm not sure. How does it compare to CNN.com? I have no
idea what that number 500 tells me.
On the other hand, if I'm being told that now I am being tracked by more websites
that existed on the internet in 1993, I mean, that's not necessarily meaningful, but it
creates some kind of very weird feeling of unease that might make me question
what's happening right now that might force me to go and Google and actually
investigate, like, questions of privacy. So you can actually build browsers that build
on some of that weirdness and, you know, try to expose these problems,
but in a way that transcends the limitations of numbers and tries to create these
stories that might be incomplete but that might still get you to think about issues
you would not be thinking about otherwise.
So that part of my thought is still in progress, but --
>>: Numbers plus context, right? I mean --
>> Evgeny Morozov: Yeah, yeah.
>>: How do you even evaluate this and make sense of it, because there's so much
that, you know, 500 websites that you -- what does it all mean?
>> Evgeny Morozov: Yeah. Well, you know, you can be browsing and a line from
Kafka will emerge. I mean, it doesn't have to be --
>>: [inaudible].
>>: You've seen the Mozilla collusion tool?
>> Evgeny Morozov: I haven't, no.
>>: It's nice. It's a visualization of exactly that, who's following you. These bubbles
pop up and they point and show their interrelations. No context, like you were
saying, no historical aspect to it, but --
>> Evgeny Morozov: Sure.
>>: It's very alarming. Collusion is the name.
>>: It's all about collusion really.
>>: Thank you so much for coming.
>>: Thank you.
[applause]