>> Kim Ricketts: Good afternoon, and welcome everyone. My name is Kim
Ricketts, and I manage, along with Kirsten Wiley, the Microsoft Research visiting
speaker series. Today we welcome physicist Leonard Mlodinow to Microsoft
Research to discuss how randomness rules our lives, and of course for most
of us this is not a comforting thought. In fact, we spend much of our time here
collecting data and information and finding and creating patterns and
codes to predict and to plan and to solve problems. Randomness is not in our
plan.
And the human mind in fact is built to assign a cause to each event and to
concoct a story of meaning around unrelated events. But today we begin the
unsettling process of looking at the role of chance in our daily lives and our guide
through that journey is Dr. Mlodinow who is currently a lecturer in physics at
Caltech and is published extensively in that field.
He's also published works of popular science including Feynman's Rainbow,
Euclid's Window and, with Stephen Hawking, A Briefer History of Time. He's also
written screenplays for films and television, including work on Star Trek: The
Next Generation and MacGyver, and has co-authored a series of children's
books entitled "The Kids of Einstein Elementary." We're
thrilled to welcome Dr. Leonard Mlodinow to Microsoft Research.
(Applause)
>> Leonard Mlodinow: Thank you. And before my talk, we gave everybody two
questions, and we divided you into two groups, and so in a little bit I'll call for the
answers, and hopefully I'll get the average value of the answer to the second
question from each group, and we'll compare those.
But the name of the book is "The Drunkard's Walk." It's not a self-help book for
alcoholics, it's about randomness. And it's -- in particular it's about the
importance of randomness in our lives and how we often underestimate how
important chance events are in our lives and whether it be in business, sports,
medicine, our love life, and all these things we're constantly affected by
unforeseen or unpredictable random events that change our paths in life. And
we often misinterpret what happens because we don't understand the role of
randomness, and we end up drawing the wrong inferences.
So I wrote this book to talk about the theory of randomness and the historical
development of the concept. But also mainly to talk about its effect in our lives
and also the psychological phenomena that make it hard for us to understand
what's really going on.
The name -- the term drunkard's walk as a lot of you probably know, if not all of
you, is a mathematical process that essentially is a process of random
meandering. And I named the book Drunkard's Walk because I think that the
process of random meandering is a good metaphor for our process of life. We
might think we have a direction and we might make plans and we may work very
hard at what we're doing, but things are constantly hitting us from different directions,
and it's those things and our reactions to them that determine our path in life.
I'm going -- the talk is divided into two sections. In the first part I'm going to talk
about some illusions and confusions that arise from randomness, some
surprising situations that we may misinterpret, and in the second part of the talk I'm
going to talk about some psychological effects that cause us to do that.
So the first illusion that I want to talk about I call it the illusion of causality. And
you'll see the Drunkard's Walk here again in the lower left and the -- I have the
laser pointer, but the old-fashioned low tech one will work.
The start is here, and it meanders, and it finishes up there. Now, if you weren't
privy to the actual path that you can see here and you just saw that somebody
started at the lower left and ended at the upper right, you might think that the
person got there on purpose.
But actually as you all know when you execute a drunkard's walk or random walk
you actually get somewhere even though you're not aiming to get anywhere, and
that is what happens a lot in life. Here I have three examples. The top one is
from sports. We all tend to think that when there's a championship sports series
the best team wins, unless there's some extraordinary event of luck that happens
in the series, like the Chicago Cubs, some guy reaching over and grabbing a ball
out of a fielder's hand, you normally don't talk about the luck of winning or losing
the games. You think that the winner of the series deserved it and that the series
proves which team is best.
In high schools, in colleges, people look at various statistics and data and they
infer from them what they think the schools are like. Now, if you have kids that
go to these high schools or colleges, you might know that that may not be a
very good picture, but people tend to believe that.
And in the lower right I have an example from business. This has to do with a
fellow named Bill Miller. Bill Miller was the head of a fund called the Legg Mason
Mutual Fund, and he got a lot of press because his mutual fund beat the
Standard & Poor's index for 15 years in a row. This was hailed as various things,
my favorite of which is the greatest fund feat in the past 40 years.
And people quoted in a lot of different articles about Bill Miller the various odds of
someone accomplishing that. They range from about 150,000 to one in the 13th
year to I saw somewhere four billion to one, which would be hard to believe
actually that a four billion to one occurrence actually occurs given the number of
mutual fund managers there are.
But let's look at this and see what the chances really were. So my point here is
that when you look at something that happens, that seems very rare or
extraordinary, and you're wondering what the chances were of that happening at
random or whether you should have expected that that might have happened
purely by chance, you have to be very careful on how you do your analysis. A lot
of times it's hard to separate the causal component of something from the
random component.
And one way to approach this is to say would we expect this to happen if things
were purely random. And if we don't expect them the things -- this to happen if
things were purely random, then we might say, wow, the odds against us were
very great and this guy really accomplished something. But let's see if it really
was four billion to one or what it was for Bill Miller or for someone like him.
So let's assume that the chance of beating the Standard & Poor's in any given
year are one in two, okay? Then the probability of Bill Miller beating the
Standard & Poor's for 15 years in a row would be one over two to the fifteenth,
which is about 32,000. So that is indeed pretty long odds against hem.
But as the graphic says, that's the probability of Bill Miller beating it 15 years in a
row starting in 1991, right. So you have to be very careful when you're dealing
with random situations before you give the answers to make sure you're asking
the right questions.
And, you know, is the question here what are the chances that Bill Miller beat the
Standard & Poor's 15 years in a row starting in 1991. Well, if you thought that Bill
Miller was an idiot and totally incompetent and it was 1991, you might say, hey,
let's do a test and see what the chances are of him beating the Standard &
Poor's in the next 15 years. They're probably pretty small. But nobody was
really saying that. Really what was happening was people were observing the
market and the mutual fund and they're looking for outstanding events, and when
they noticed that somebody beat the market or beat the Standard & Poor's for 15
years in a row, then they say, hey, that guy, he did something great.
So there's no real reason to say what are the chances that Bill Miller beat the
market 15 years in a row. It could have been anybody. It could have been Barry
Diller if he were a stockbroker. And if it were Barry Diller instead of Bill Miller,
we'd have exactly the same headlines that we have talking about Barry Diller as
being the king of the stock market. So what you really have to do is recalculate
this taking into account the fact there are thousands of mutual fund managers
trying to beat the market.
Actually there's about 5 or 6,000, but the articles that I was looking at tended to
use the number 1,000, they were talking about comparable funds, so that's okay,
we'll use 1,000, which makes the odds not quite as great. The odds of
someone amongst 1,000 managers beating the market for 15 years in a row
starting in 1991 are about three percent.
So in a series of parallel universes, if you had exactly the same thing happen,
about three out of every hundred of those universes somebody would do the Bill
Miller feat. It might not be Bill Miller, but it would be somebody.
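For anyone who wants to check those two numbers, here is a minimal Python sketch, assuming, as the talk does, independent 50/50 years and 1,000 comparable fund managers:

```python
# Minimal sketch of the Bill Miller arithmetic above, assuming independent
# 50/50 years and 1,000 comparable fund managers (the talk's assumptions).
p_single = 0.5 ** 15                          # one named manager, 15 wins in a row
p_any_of_1000 = 1 - (1 - p_single) ** 1000    # at least one of 1,000 managers does it

print(f"One named manager: one in {1 / p_single:,.0f}")   # about one in 32,000
print(f"Someone among 1,000: {p_any_of_1000:.0%}")        # about 3 percent
```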
Now, here's the other thing, because I keep saying starting in 1991 -- now, is that
special? If he had started in 1990, would people have been less
impressed? Would they say, hey, he didn't do it starting in '91, he did it starting in '90?
No, they'd just say 15 years in a row, right.
So the question is why should we look at one 15-year period? There's no reason
to do that. We should look at many 15-year periods or a longer period of time. I
took 40 years because the article talked about this being a great feat over the
last 40 years. That's not as great a time period as it sounds, because if you
compare it to 15, you can't even fit three of those 15s in there, at least not
overlapping. So I said what are the chances that somebody amongst a thousand
people trying this every year with a 50 percent chance would do it sometime in
the last 40 years? It's about three in four.
So I would claim that the chances of Bill Miller's feat were not one in four billion,
but about 75 percent, and if I didn't see someone doing it, I would say we're
really wasting our money on mutual funds, because you could be flipping a coin
and do it, and a lot of people say that's true and you should buy index funds. So
the moral of this story is that the headline should have said: expected 15-year
run finally occurs, Bill Miller lucky beneficiary. (Laughter)
Okay. The second illusion I wanted to talk about, I call it the illusion of small
numbers. It has another name. It's called the law of small numbers. The name was
given to it by these two guys, two psychologists named Daniel Kahneman and
Amos Tversky. I don't like the term the law of small numbers because when I talk to
people about the law of small numbers they think it's true, but the law of small
numbers is their sarcastic name for people applying the law of large numbers to
small numbers. I know I don't have to tell anybody here about that, but people
tend to think the law of large numbers says that the underlying probabilities or
potentials of things will tend to be reflected in the results if you repeat the
test often enough, right.
It doesn't apply if you repeat the test twice, okay. And that's something
that confuses many people. It doesn't necessarily confuse people on the
conscious level, it confuses you on the intuitive level, and that's why it causes
problems even for people who know all about probability theory.
So one example I like to use here is CEOs. Let's assume that
each one has a certain probability of success in any given year, however you
define success, it doesn't really matter, every industry or company could have a
different definition, but suppose that you believe in the CEO to this extent, or
the past history shows that you have some reason to believe that this person will
have a 60 percent chance of having success in any given year.
Things could still happen, oil prices could go up, unlikely, the weather could get
warmer, that will never happen. A lot of strange things could happen to get in the
way, small things, vendors maybe not coming through. I know at Microsoft you
never find that.
So these things could happen. But in the end, if you fold it all in, let's assume he
has a 60 percent or she has a 60 percent probability. So the question is, okay,
fine, that's the underlying probability. Now, we don't get to see that,
right, we may estimate it, but we don't get to see it in life. What we get to
see are the results. We look at how did he do over the last five years, how
did he do over the last 10 years, right.
So the question is in a given five year period what are the chances now that
when we look at the results that those are going to reflect the underlying
probability of success. So what are the chances for instance in five years that
the CEO who has a 60 percent chance a priori of success will have three out of
five good years, which is 60 percent? If that happens, we go, look, he had 60
percent good years, that must mean that he or she has a 60 percent potential.
You tend to think that way, right?
Well, here's the graph. So here's a histogram showing, from zero to five, the
chances of having that many good years out of five. The chances of having
precisely three out of five good years are only about one in three. So
two-thirds of the time the person's results will not reflect their ability or their
potential. And in fact, in one out of 10 cases you can expect
someone with a 60 percent probability of success to have either five good
years in a row or five bad years in a row. Which means if you look at the
Fortune 500, even if the CEO or the company had a 60 percent chance of
success, about 50 of those companies would
have either five good or five bad years despite that.
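For reference, a minimal sketch of that binomial arithmetic, assuming five independent years at a fixed 60 percent success rate, as in the talk:

```python
# Distribution of the number of "good years" out of five, with a 60% chance
# of a good year each time, independently.
from math import comb

p = 0.6
dist = {k: comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(6)}

print(f"Exactly 3 good years out of 5: {dist[3]:.0%}")           # about one in three
print(f"All 5 good or all 5 bad:       {dist[0] + dist[5]:.0%}")  # roughly one in ten
```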
So the question is, I was talking about the law of large numbers and the law of
small numbers, or the illusion of small numbers, how large is large, okay. Well,
now I'm going to briefly transfer to championship sports series, because you
don't usually get seven, even seven years as a CEO to try and prove that you're
not really a loser even if your company isn't doing well. But you could have a
seven-game sports series and lose the first couple and you're still in the game.
So suppose in a sports series we tend to have these seven-game series for
championships, and suppose that the two teams are matched with a 55-45 edge.
One team has a 55 percent chance, either historical or theoretical, of beating the
other team on any given day.
I think this is actually a pretty lopsided edge. In professional sports by the time
you get to the World Series or the NBA playoffs, the teams are pretty evenly
matched and 55-45 is not a real even match. But let's say that's what the
probabilities are and you play a best-of-X series.
All right. We play a best-of-seven series. What do you think the chances are that
the inferior team is going to win; in other words, win four out of seven? It's about
40 percent.
So playing a best-of-seven series doesn't really tell you a lot. Now, it's fun. I
don't deny that. I won't say I don't like sports, I think it's exciting, it's fun, but it's
not very meaningful, okay.
Now let's get meaningful. Let's see, let's say there's a little trade-off: how much
fun would it be if it were meaningful? Well, let's say we want it to be this
meaningful: we only want the lesser team to win one out of four times, okay.
How many games do you have to play, all right? Well, it's a little bit more than
seven, what do you think, 10, 15, 20?
45. Well, I don't know, ABC or NBC might not want to keep televising by the 39th
game of the same two teams going against each other. So that's quite a bit. And
being into mathematics, I ask one other question, which is this is already getting
a little bit on the tedious side.
But let's say we really want to know who's better, okay, we really want to know
who's better, so let's say we want a statistically significant result, which means
that the loser -- that the inferior team will win only about five percent of the time,
95 percent of the time the better team will win. How many games then? You
need 269 games, which is several NBA seasons, one and a half baseball
seasons, of the same two teams playing each other head-on to be statistically
significant. So the point here is in sports we don't really care that much, but
when we observe people's results in other areas we do tend to draw conclusions
from much smaller numbers than we really have a right to.
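A minimal sketch of that series arithmetic, assuming independent games at a fixed 55-45 edge; winning a best-of-n series is treated here as winning a majority of n games, which comes to the same thing:

```python
# How often does the weaker team (45% per game) win a series of length n?
from math import comb

def weaker_team_wins(n, p=0.45):
    """Probability the weaker team wins a majority of n games (n odd)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

# Series lengths mentioned in the talk: best of 7, about 45 for a one-in-four
# upset rate, about 269 for the 5 percent "statistically significant" level.
for n in (7, 45, 269):
    print(f"Best of {n:3d}: weaker team wins about {weaker_team_wins(n):.0%} of the time")
```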
Okay. The last mathematical illusion I want to talk about, I'm calling here
conditional confusion, this comes up a lot in the medical field. This has
happened to me personally, and you probably all know someone who has
suffered from this effect.
This has to do with false positives on medical exams. Suppose that the
probability of a false positive -- which means that the test comes out positive but
there's really no tumor or no disease; in the case of a mammogram, it
means there's no tumor -- suppose that that probability is about ten
percent. The question is, if you have a positive test, what are
the chances that the patient actually has a tumor? So I'm saying that the
probability of a positive test when there's no tumor is 10 percent. Does that
mean that the chances that you have a tumor if you have a positive test are 90
percent, or are they 9 percent?
Well, you probably know there's probably a trick here, but let's go on. First of all,
I need more information. So if you know something about probability theory, you
know you need more information. Okay. In this case, let's assume that there's
about a one percent incidence of breast cancer among women in their 40s, that's
roughly correct, and let's assume for simplicity that there are no false negatives --
a false negative meaning the test says you don't have it, but you do. It will just make the next
graphic a little bit simpler but doesn't really change the point, okay.
Well, doctors were given this data in two different studies, and in one study a
third of the physicians chose A, 90 percent, and another study that was more
open-ended, they guessed that the probability of cancer was about 75 percent.
Okay. So when the false positive is a small number like 10 percent, people seem
to think that a positive test is meaningful. Not at all, okay.
Okay. Now here is a garbled, correct but garbled English description of why that
is, which I can't even understand even though I wrote it. The fraction of all those
who don't have it that test positive is not the same as the fraction of all those who
test positive that don't have it. Say that 10 times in a row, okay.
Fortunately I've learned that sometimes art is better than English, or math is
better, but I chose artwork. Here's some artwork. Okay. Here's 100 women.
The Ns are those who test negative and are negative. Remember, there's no
false negatives. The Fs are the 10 percent -- there's 100 there, and there's 10 Fs
which represent the -- there we go, I'm using the laser, there we go, high tech.
So these are the false positives, 10 percent, right, so 10 out of 100. There's the
true positive. Remember the incidence I said you have to note, one out of 100,
okay. So before you start, you go 10 out of 100 false positives, they get a positive
result even though they're not positive, small number, small fraction, that's it.
Here's the one woman who actually has it, and here are the others.
Now that we have these tests, though, it's different. Forget these Ns. They're
irrelevant. Now you know that you tested positive. You're only looking at these
people. Okay. If you're looking at these people, these 11 people, what fraction
have a false positive, what fraction are really positive? One out
of 11 is really positive, right. The other 10 are those 10 false positives. So the
answer is 9 percent, and the reason is that you have to compare the false
positive rate to the actual incidence, that's what's important, not the raw false
positive rate, and a lot of people don't understand that.
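For reference, the same answer via Bayes' rule, a minimal sketch using the talk's simplifying numbers (one percent incidence, 10 percent false positive rate, no false negatives):

```python
# Bayes' rule applied to the mammogram example above.
p_tumor = 0.01          # incidence among the women being tested
p_pos_given_no = 0.10   # false positive rate
p_pos_given_yes = 1.0   # no false negatives, per the simplification in the talk

p_positive = p_tumor * p_pos_given_yes + (1 - p_tumor) * p_pos_given_no
p_tumor_given_positive = p_tumor * p_pos_given_yes / p_positive

print(f"P(tumor | positive test) = {p_tumor_given_positive:.0%}")  # about 9 percent
```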
So those are just a few examples of some of the randomness confusions. But
now I want to talk about something else that I talk about quite a bit in the
book, which is why, from a psychological point of view, we make these mistakes,
and what kinds of psychological errors occur in our thinking process and in our
intuition that lead us to have these false intuitions.
Daniel Kahneman and Amos Tversky were the pioneers in much of this. Tversky
died, unfortunately, in I think it was '96. Kahneman, even though he was a
psychologist, got the Nobel Prize in economics in 2002 for this work because it's
so important in its implications for economics.
And their work talked about what they called heuristics and biases, meaning that
people's brains have evolved over millions of years in order to make quick
judgments in dangerous situations of uncertainty and probability.
And you don't have time to sit down and draw a diagram of the false positives
and false negatives and figure out what's going on if a saber tooth tiger is
chasing you, or if you see a blur and you don't know what it is, you have to make
a decision based on your own feeling of probability.
Now, if once in a while you're wrong and you run away from some leaves, it
doesn't really hurt you very much as long as you run away from the tigers, right.
So the heuristics are the terms they use for these intuitions that we've developed
and the biases are the mistakes or our tendencies to make certain errors. And
those errors are also called cognitive illusions in analogy to optical illusions. So
I'm going to talk about three of those here. The book covers much more, but I
don't have nearly enough time, and you would all fall over if I went over
everything. So I'm going to give you three of these. The third one is the little test
before we -- before the talk began.
So the first one is called the illusion of control. I'm really fond of this one. There
are many studies of this which are all extremely amusing and interesting.
This one was a study of Yale students, so these are supposed to represent Yale
students and these coins represent coins.
(Laughter).
So what they did was they took the Yale students one at a time with an
experimenter, and they said I'm going to flip the coin a number of times, 30 times
and each flip you guess heads or tails. Okay. Easy enough, simple. And
certainly if you would ask the student can you do that, better than average or
better than at random, better than 50 percent, they would all say oh, no, no, no,
it's all random, unless they were a relative of Uri Geller. But that's if you asked
them explicitly.
But the question with all these is not whether you know this or whether you
believe in it, but whether you feel it and how you behave, okay.
So they wanted to get into it a little bit deeper. So they brought these
students in, they had them guess the coins, and as psychologists always do, I
would not be good in a psychological study because I always know they're lying
to you, they're tricking you and they're not really telling you what they're after,
okay, and of course that's what they have to do.
So in this case, they weren't really telling them whether they had the heads or
tails right, they had a prearranged sequence that was identical for every student
of yes you got it and, oh, too bad, you didn't get it, prearranged, predesigned,
and occasionally they would show them the coin when what they -- when the
student guessed it right and was supposed to guess it right that time, they would
say look, you got it right, and otherwise if they were supposed to guess it right
and they were lying to them about guessing it right but they really got it wrong,
they just wouldn't show them the coin.
So it looked pretty convincing to the students, but they all had identical
experiences. They all got it right 15 out of 30 times. They all got it right more at
the beginning and less at the end. So they got more of a feeling at the beginning
that, hey, the first impression was this is going well. And then the question is do
they think it's going well, or do they still think that it's random, okay?
So you don't really -- again, now, they don't know what they're being tested
about, right, because then they would figure out their answers consciously, and
you don't ask them can you -- directly, can you guess, do you have ESP, they
asked them slightly, slightly more disguised questions and they asked them a
bunch of questions and then they look at the particular questions that they're
interested in and see how they answered them.
In this case they asked them these two slightly disguised questions, would your
performance be hampered by distraction, and would your performance improve
with practice? Now, obviously probably not, right? But what do they really feel?
Well, they're just answering these questions thinking you know, hopefully how
they feel about it, and surprisingly a quarter of them thought they would be
hampered by distraction and 40 percent thought they would improve with
practice.
And this is not, as I say, an isolated case. There are many very amusing studies
of this sort and --
>>: (Inaudible).
>> Leonard Mlodinow: I'm sorry?
>>: (Inaudible).
>> Leonard Mlodinow: I don't know. But I do wonder sometimes about the
randomness of the studies that are always done on college students. But,
anyway, the point is that there are other very serious studies actually about why
people have this need to believe that they're in control, and there seem to be
deep-seated psychological reasons that it would be hard to go on in life in some
ways if you felt you didn't have control. And there were studies of nursing-home
patients who are given control over their environment versus nursing-home
patients who were not given control over what plants to have or taking care of
their own rooms, et cetera, and the results were very dramatic that the patients
that were given control lived longer and did much better than the patients where
everything was done for them and they had no decisions to make.
Okay. The second bias -- I'm going to talk about Bill Miller again; I'm using
him because I don't have much time to develop different examples, but by the
way, he is supposedly, I haven't met him, a very bright and a nice person. And
the only thing I'm talking about is that he's also a lucky person.
And this bias is called the expectation bias, which is that one of the barriers to
understanding what really happens, in our assessment of people, is our
expectations. And when a fellow is this successful and the head of a big
company making a lot of money, we tend to think they deserve it and that they
have some skill that's bringing them the success that they are having.
So this has also been investigated in many arenas, and I think an amusing one is
taste. In particular here, beverages, the beverage world. A lot has been done on
wines. And of course -- I enjoy a lot of wine and I go to wine tastings and I
listen to a lot of banter about green pepper and wild strawberries and the scent of
freshly tanned leather, and people go, I get that, I get that -- it's a Nike, I think.
There are very fine distinctions made, and the question is, of course, would you
make those distinctions if you didn't have certain expectations?
And these studies are very amusing, too, because for instance in a lineup, they
did similar tricks to the coin tossing where they line up wines for experts and
don't really tell them what's going on. In this case, one of the wines was a white
and -- well, actually this is a few studies, they are similar. They put in a white wine
that was dyed to look like either a rose or a red, and the experts are analyzing the
whole flight of wines and giving their analysis, not knowing what they're being
tested on, and sure enough, it's white wine but they assigned it the qualities of
the red, they find the rose that's really a dry white to be sweeter, and they
basically taste the wine according to the way they expect
the wine to taste.
Now, at Caltech recently there was an interesting study where they -- this is not
the only one -- there were many studies done, by the way, with wine experts,
wine students. Okay. This particular study, though -- this is fun -- was done with
college students. They are not wine experts. They are probably Boone's
Farm experts, maybe.
So they lined up wines for them, and they labeled the price of the wines, okay.
But unbeknownst to them, two of the wines were identical, but one was labeled
$90 and the other one -- the other identical wine was labeled $10. Okay. It's
probably not a big surprise that they picked the $90 one as being better, right?
And you could say, well, yeah, they don't want people to know they can't even tell
a $90 wine from a $10 wine. You know, so there's that possibility, that they really
picked it as better because they felt they should pick it as better. But, aha, there was a little
catch here. These people were doing this inside an MRI machine, okay, and
their brain was being imaged, and they were actually not just asked then which
one was better, they are looking to see are their pleasure centers lighting up
more. And guess what, they did. They actually were enjoying the $10 bottle
labeled as 90 more than when it was labeled as 10. Okay. So that's how
deep-seated these effects are.
And the last study here is a Coke, Pepsi study where they got people together
and said, hey, you like Coke, you like Pepsi? They found some who like Coke,
some who like Pepsi, and then they said let's just test it out. So they gave them
some Coke and some Pepsi and said try this and try this, what do you think.
Well, 30 percent of the ones who said they liked either Coke or Pepsi changed
their mind when they tested it, when they actually tasted it. 70 percent said yeah,
yeah, I was right. Okay. But the trick here was there was Coke in the Pepsi
bottle and there was Pepsi in the Coke bottle. So actually 70 percent were
contradicting themselves when they actually tasted it, and the other 30 percent
were the ones who actually were confirming their taste.
So if you want someone -- you know how you always go to these fast-food places
with your kids and they order a Sprite and they only have 7-Up? They should just
have the different cups, just have one kind of beverage, put it in the Sprite cup,
put it in the 7-Up cup, put it in the Coke cup, add a little dye, you know, it could
be just one beverage and just call it whatever you want and they'll like it. I don't
know. I should patent that idea.
Okay. Now, may I have the results? And this is a -- okay. Now, this is group
two. And where is group one?
>>: There's group (inaudible).
>> Leonard Mlodinow: Okay. Good. This came out very nice. Okay. So this is
a fun one. Okay. So let's see. Here's what I asked you guys before the talk.
You were divided into two isolated groups. You each read two questions. The
second question was how many countries are there in Africa? Now, why would
one side -- okay. I guess I can tell you the answers. Group one, you guys,
averaged, your average guess was 49.72. Let's say 50. All right. 50. You guys,
your average was 23.
Now, why would this side of the room have a guess that's twice the size of the
guess on this side of the room? Well, the reason is that our brains, in estimating
and remembering frequencies, probabilities, and sizes, are very tenuous,
okay, and very easily influenced; and this is another reason that you misjudge
things, okay. And how did I get you to misjudge this? I did a very simple thing,
right. The first question you got was different. You got one question. You got
another question. I'm sorry. These are the results from a couple other groups I
did.
The first question was, the group on this side that did 40 -- that did 50 were
asked are there more than 180 countries in Africa. You guys were asked are
there more than five countries in Africa. Well, those answers are pretty obvious,
but what it did was it planted a seed in your brain and then when I asked you how
many countries are there, it skewed your results. Okay. And this is related to
another effect called priming, which is also very amusing, where they've shown
that if I say like rude, loud, and a whole bunch of words like that, you are more
likely to go back and have an argument with your boss than if I didn't say that.
Don't blame me, please. I'll be out of here, I guess. This is actually something
that is used either consciously or unconsciously. People asking for things.
Okay. Civil suits. You demand 100 million, 200 million dollars. Well, you might
not even think you're going to get that, right. You might not even expect to get
that. But if you ask for five million you're probably going to end up with a lot less than
if you ask for 100 million to start with. Bail amounts, too -- there have been studies
showing that the effect works in all these different areas. And even gamblers
judge bets based on the payoff not on the expectation of winning, in other words
not a balance of the payoff and the probability but just the payoffs.
So as some parting words, one of my favorite quotes is from Bertrand Russell. He
said we all start from naive realism, i.e., the doctrine that things are what they
seem. We think that grass is green, that stones are hard, and that snow is cold.
But physics assures us that the greenness of grass, the hardness of stones, and
the coldness of snow are not the greenness, hardness, and coldness
that we know in our own experience, but something very different.
And what I hope to show in the book is that the mathematics of randomness is
similar and the mathematics shows you that the world that you perceive and that
you interpret around you is not the world that you think that you see and that you
conclude intuitively that you see, but it's really something very different if you take
the time to analyze it.
So it's as if the world around you and your perception is like a van Gogh painting,
it's vivid, it has sharp lines and contrast but in reality it's like a gray-scale thing,
kind of blurry and underneath there might be very vivid potentialities, but by the
time things happen it's all blurred by the effects of randomness.
And what are the implications of that? I'm not saying don't try hard, talent doesn't
matter, ability doesn't matter. All those things matter, and it's important to
increase your probabilities, okay.
But if you don't, if you fail, for instance, don't say I suck, you know, don't say my
manuscript sucks because it got rejected, say, by 10 publishers like Harry Potter
did the first time, or one manuscript was called a dreary record of typical
family bickering or something -- that was the Diary of Anne Frank.
So when these things happen, you know, don't take it to heart. There's a lot of
chance and luck involved. And on the other side, if you have great successes,
don't consider yourself the person of destiny and think that you're better than
everybody else but that everything is kind of randomized and blurred by the
effects of chance.
And it doesn't mean also that you can't do something about it. There are many
things you can do about it to increase your chances. As I say, you increase your
chances if you just plain increase your own ability. You also increase your
chances by having a lot of bets. The more opportunities that you have, the more
opportunities that you take, the greater the chances that you'll have of success.
And so I want to end with one other point which is a little bit more succinct from
someone also in the computer biz, Thomas Watson from IBM. And he said if you
want to succeed, double your failure rate. Thank you.
(Applause).
>> Leonard Mlodinow: Time for questions? Yeah?
>>: I had a question about the Yale student. Suppose that the Yale student
walked into that experiment with no preconceived notions about whether the ESP
worked or not and if he was a (Inaudible) he would say my performance in this is
similar to my performance in other skill based tasks, therefore on the balance of
probability this must also be a skill based test.
>> Leonard Mlodinow: He got 15 out of 30 correct.
>>: You said he did well at the beginning and worse at the end, and that's what I
would expect for all skill based tests that tire after a bit.
>> Leonard Mlodinow: Or maybe you learn it after a bit. I don't know, but overall
-- I don't know. I mean, I'm -- what is your question?
>>: If it's true that they were relating it to other tasks which have a similar profile,
then the answers they gave would not be an indication of a cognitive illusion;
instead it would be an indication that they were being really good, that they were
actually making the right call.
>> Leonard Mlodinow: I wouldn't say that if they got 15 out of 30 correct, but,
you know, if you say they're getting tired or --
>>: But the experiment to distinguish that would have been (inaudible) 15 out of
30 correct, versus a different balance, maybe a more even balance?
>> Leonard Mlodinow: Yeah. There may have been a condition where they did
that. I don't really remember. But I don't -- I think they were definitely going
under the conclusion -- operating under the conclusion that if they got half of
them right that it wasn't skewed really, you could even argue the fact they were
getting more wrong at the end would tend to leave a bad taste in their mouth and
have them think that they couldn't do it.
But in the book, if you want to look it up, I mean I have the reference to the
studies in the footnotes.
>>: So given what you know about probability would you live your life any
differently than a normal person?
>> Leonard Mlodinow: Definitely not different than a normal person. I have
changed, yeah, I think so. It certainly helps that I have a certain Zen attitude,
which I think started when I wrote the book about Feynman actually, around
that time, Feynman's Rainbow. A certain way of looking at life. And this has
definitely amplified that, because a lot of times when things happen I go, you
know, I'm drawing a conclusion from not very much data, or if something happens
that's either very good or bad you tend not to take it to heart very much because
you realize that those things happen.
But even, you know, personally if someone behaves in a certain way and you
interpret that as being like rude or in some other negative light because you have
an expectation because of a person's reputation or for some other reason and
then you ask yourself was that really rude behavior or was that me just
interpreting it that way. So I definitely tend to look at things a little differently and
realize that we often jump to conclusions -- totally not mathematical conclusions --
that really aren't justified based on one or two incidents, right, and it definitely -- I
say that to myself a lot, you know, I've learned not to do that.
>>: You know the book "Stumbling on Happiness"?
>> Leonard Mlodinow: Yeah.
>>: You sort of alluded to I think with the controlled experiment on older people.
Do you buy the premise there that we have like zero ability to predict which
outcome will make us happier?
>> Leonard Mlodinow: I don't know if he said zero, but he said --
>>: That's the (inaudible).
>> Leonard Mlodinow: Yes, I do. I think that I, you know -- well, for one thing the
studies he cites are very convincing, but I find the same things in life and I find
that a comfort in a way because you go this really bad thing is happening, but
you know what with my son something happened the other day, and I said in six
months you won't even remember this, your life will not have changed.
Because in that book, for instance, they talk about the lottery winner and the
paraplegic, right. Someone is terribly injured becomes a paraplegic or
quadriplegic I don't remember, and someone else wins the lottery and they go to
them a year later, and I don't know how they did this or how it happened, but they
had some tests of happiness before and after, and they were equal, equally
happy. So happiness I think comes more from in here, and it's also hard to
predict what circumstance happens around you is going to make you happy or
not.
And if you look on the back of my book there's a nice quote from Daniel Gilbert
about Drunkard's Walk. So he seems to believe in this book as well.
>>: How do you find the lottery winner (inaudible)?
>> Leonard Mlodinow: I don't know, and I -- that's what I say, I don't know how
they do that, I don't know if they happen to be -- maybe they happened to be
studying that person who had come in. I don't really know. I don't know if he says it
in the book. I don't remember him saying how that happened.
>>: The paraplegic (inaudible).
>> Leonard Mlodinow: Yeah. I don't know how they did it. But I'm sure there's a
reference to it, and when I read that book sometimes I'll go back to the original
papers because I was curious about how they actually did it.
>>: They just looked at lottery winners overall and compared their happiness
with the happiness of the population of (inaudible), expecting that you'd see a gap.
>> Leonard Mlodinow: Maybe.
>>: Do you give any credibility to the power of the observer on the (inaudible)?
>> Leonard Mlodinow: I'm sorry. Could you say again?
>>: Do you give any credibility to the power of the observer on scientific
experimentation?
>> Leonard Mlodinow: The power of the observer?
>>: Yeah. (Inaudible) random event? Do you have a viewpoint on that?
>> Leonard Mlodinow: Well, one thing I have thought about is this effect,
especially in medical studies, where -- if you have a positive result you report it, if
you have a negative result you bury it. And so if we have statistical
significance, which means that five percent of the time you'll have a
spurious positive result, but you bury the other 19 studies, you'll have what looks like a
good positive result. So I definitely think that there is that, and the effect of the
experimenter -- you have to be very careful that the experimenter is objective and is
not analyzing the data in a way that reflects what they want to find, and that's
why good experimenters have to design everything beforehand and not change
the design later because they're seeing something happening. All right. And
that's why it's dangerous in medical studies also to go, we weren't looking for
this effect, but guess what, we found a small effect of -- right. Well, there's a million
small effects that could happen with small probability by chance, and some of
them are going to happen, and if you weren't looking for it and it suddenly is
there, you have to be very suspicious about it. So I think that is a danger.
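A small sketch of that selective-reporting arithmetic, assuming 20 independent studies of an effect that isn't there, each tested at the five percent level:

```python
# If only "significant" results get reported, how often does a nonexistent
# effect produce at least one reportable study out of 20?
alpha = 0.05     # chance a single null study comes out "significant"
studies = 20
p_some_spurious_hit = 1 - (1 - alpha) ** studies
print(f"Chance at least one study looks significant: {p_some_spurious_hit:.0%}")  # roughly 64%
```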
>>: Could you say something about conditional probabilities?
>> Leonard Mlodinow: Yeah, that's what the false positives test was. What do
you want me to talk about?
>>: Well, I just -- the subject intrigues me, that the probabilities that are
mathematically there could change with the --
>> Leonard Mlodinow: Okay. I have the -- this is in my book, but I didn't put
this in because it's kind of a mind bender. But have you guys heard of the two
daughter problem? Okay. So this is a variant on the two daughter problem.
This is a good one for you to go back to your desk and waste the rest of the day.
Okay. Suppose I have this room and you all have two kids, all right, we all have
two kids, right. So what -- if I know that, like, one of yours is a girl, but I don't know
you that well, I don't know if you have two girls or one girl, what are the chances,
given that knowledge, that you have two girls?
>>: Can you repeat that.
>> Leonard Mlodinow: Yeah. So I know that you have two kids, I know that you
have one or two girls. What are the chances you have two girls? Okay. Well,
this isn't the problem, so I'll just tell you, the chances are one in three, okay,
because there's four possibilities. You could have had a boy and then another
boy, you could have had a boy and then a girl, you could have had a girl and
then a boy, you could have had a girl and another girl. Okay. Those are all
equally probable, assuming that boys and girls are equally probable.
Okay.
>>: If you have one boy you're more likely to have another one.
>> Leonard Mlodinow: I don't know, I haven't heard that. But let's forget that part,
and if that's true -- I haven't heard that, but I do know that one is slightly more
probable than the other. But forget that. They're 50/50. So I eject the boy,
boy, because that's the information I know. So that's what changes. So initially
your chances of two girls are one in four, but the knowledge that you have at
least one girl makes it one in three. Okay.
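A minimal enumeration sketch of that step, assuming independent, equally likely boys and girls:

```python
# Enumerate the four equally likely two-child families and condition on
# "at least one girl".
from itertools import product

families = list(product("BG", repeat=2))               # BB, BG, GB, GG
with_a_girl = [f for f in families if "G" in f]        # the information we're given
both_girls = [f for f in with_a_girl if f == ("G", "G")]

print(len(both_girls), "out of", len(with_a_girl))     # 1 out of 3
```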
That's not the problem. The problem is, if you know the name of the girl, does that
change anything? Yes, it does. See, that's the hard (inaudible). So if I know that
your girl is a girl named Florida -- this is what I talk about in the book -- so,
remember, you're my distant cousin, I know you have two kids, I know you have,
like, a girl named Florida, but I just don't know if you have two girls or one girl.
Is that different than if I just know you have a girl? It is different. But if you want
to stay late, I could explain it, but I don't want to bore people here. But it's maybe
something that's worth thinking about.
>>: (Inaudible).
>> Leonard Mlodinow: You want to hear it?
>>: Yes.
>> Leonard Mlodinow: All right. Okay. So the same -- okay. Let's take the
same situation. Let's say this room, okay, and you all have two kids, okay. So
this is conditional probability, taking into account new information, right. So
suppose that there's 100 people in this room -- I'll get the numbers screwed up, but
suppose it's 100 -- so, say, one in 100 girls is named Florida, okay. So now I have to look
at a lot of different categories of people. So there's not just boy-boy;
there's girl named Florida, boy, right, and there's girl not named Florida, boy, two
different categories, right. And then there's boy, girl named Florida, and then
there's boy, girl not named Florida, right, and then there's girl named Florida,
girl not named Florida, and there's girl not named Florida, girl named Florida,
and let's forget both being named Florida, I mean it doesn't change much, and it's
very rare, and who would do it?
So, all right, it may be hard to do without -- I should have done -- there is a
blackboard. But okay, those all have different probabilities now, right. Because
the girl named Florida is one in 100, or it's one in a million I say in the book, but
let's say it's one in 100, okay, and the girl not named Florida is -- okay, it's one in
100 times one-half because it's a girl; not knowing what the kid is, the chances of
a girl are one-half, right? The chances of a girl named
Florida are one-half times one one-hundredth, okay. It doesn't really matter. I called it
epsilon: one-half times a small number, right? The chances of a girl not named
Florida are one-half times almost one, pretty close to one, right?
Okay. So now I'm going to take the new information. I'm going to send
everybody away from this room except those who satisfy my new information,
okay? And my new information is not just that there's one girl, but that there's a girl
named Florida. So if I take out all those possibilities, what am I left with? Boy,
boy, go away, right? The girl not Florida, girl not Florida goes away, right? What
stays? The boy, girl named Florida; the girl named Florida, boy; the girl not
Florida, girl Florida; and the girl Florida, girl not Florida. Those four
possibilities, right? They all have equal probability because they all have one
Florida in there -- one boy and one girl -- I mean, they each have one Florida in there,
right, so they otherwise have equal probability of one-fourth. Two of those are two-girl
families, two of those are one-girl families. So the chances are now 50/50, not one
out of three, which is why I had to convince you first that it was one out of three,
so I could get you back to be surprised about the 50/50.
But if you do it as a function of epsilon it interpolates between the two: when
every girl is named Florida -- epsilon is one, right -- you find one out of three.
As epsilon gets smaller and smaller, the answer goes, in the limit of epsilon being
zero, to one out of two, and if half the girls are named Florida, it's in between.
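A Monte Carlo sketch of the two conditionings, assuming independent 50/50 children and a one-in-100 girls' name standing in for Florida (the book uses a much rarer figure):

```python
# Simulate two-child families and compare conditioning on "at least one girl"
# with conditioning on "a girl named Florida".
import random

def child(name_freq=0.01):
    sex = random.choice("BG")
    return sex, (sex == "G" and random.random() < name_freq)  # (sex, named Florida?)

given_girl, given_florida = [], []
for _ in range(1_000_000):
    kids = [child(), child()]
    two_girls = sum(sex == "G" for sex, _ in kids) == 2
    if any(sex == "G" for sex, _ in kids):
        given_girl.append(two_girls)
    if any(florida for _, florida in kids):
        given_florida.append(two_girls)

print(f"P(two girls | at least one girl)    ~ {sum(given_girl) / len(given_girl):.2f}")      # ~0.33
print(f"P(two girls | a girl named Florida) ~ {sum(given_florida) / len(given_florida):.2f}")  # ~0.50
```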
But in reality I did some research -- I didn't put it in the book, but I spent, it's
horrible, I'm very slow at writing books because I get into this -- I go, what are the
probabilities of names? I think in 16th century Scotland, 25 percent of the girls
were named Mary. I still remember that. Not very useful. But then I found, I think,
that Emily was the most popular name when I was looking this up, and it had an
incidence of about one percent.
So for the purposes of this, even a pretty common name is a rare name,
it's just one percent. But Florida was an actual name that was given to girls
up until about 1930, and it wasn't that uncommon. And there's an
actual Website. Okay, the U.S. government, blessed be them, take your money
and they don't just spend it on -- I won't even go there, but they also spend it on
things like lists of names and their frequency. I forget. It might be in the book, in
the footnotes, but there's a Website that gives names by year and the frequency
of those names. I'm not sure what they use it for, but they've got it there.
>>: (Inaudible) I think it's just the social security records they have. Somebody
ran a query for it, but it probably didn't take too much time.
>> Leonard Mlodinow: Time. Okay.
>>: So this is just that accrual of information affects probability?
>> Leonard Mlodinow: Do you want to answer that?
>>: Only if --
>> Leonard Mlodinow: No, (Inaudible).
>>: This is the thing that probably affects your probability being right if you
basically make your predictions after you get the information. There's the other
problem --
>> Leonard Mlodinow: New information changes the probabilities.
>>: (Inaudible) information the problem with three doors.
>> Leonard Mlodinow: The Monty Hall problem.
>>: Choose one and then (inaudible).
>> Leonard Mlodinow: Do you all know the Monty Hall problem?
>>: One in three. That's the same as the one in three.
>> Leonard Mlodinow: The Let's Make A Deal problem. You have three doors and you
want to -- and the thing is, so there's that problem, there's three doors, and at first
you just pick a door; it could have a prize, it could not have a prize. Then the
host opens one of the two doors that you didn't pick and says, look, there's not a
prize here, do you want to change doors -- should you or shouldn't you? Everyone
says it doesn't matter. But it matters. And the reason -- that's a mistake that people
make -- I talk about this in the book, too. Not only -- I talk about the whole history of this,
because it's a very interesting history, but the reason it matters is that the host
is not acting randomly, right? The host is only opening a door at random if you
picked the prize; then he can open either door. But if you didn't pick the prize, he's
choosing which door to open, right, based on where the prize is. So he's adding
information, he's changing it. So you shouldn't expect it to be random. And
there's other ways of looking at it that make it pretty clear. But that's a problem
that when Marilyn vos Savant published it in the newspaper, in Parade
Magazine, she got 10,000 letters about it, including about 1,000 from math Ph.D.s,
most of which said that she was full of it and she shouldn't mislead the
American public.
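A Monte Carlo sketch of the game as he describes it, with the host always opening an empty door the player didn't pick:

```python
# Simulate the Monty Hall game: compare staying with switching.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize, choice = random.randrange(3), random.randrange(3)
        # Host opens a door that is neither the player's pick nor the prize.
        opened = random.choice([d for d in range(3) if d not in (choice, prize)])
        if switch:
            choice = next(d for d in range(3) if d not in (choice, opened))
        wins += choice == prize
    return wins / trials

print(f"Stay:   {play(switch=False):.2f}")   # about 1/3
print(f"Switch: {play(switch=True):.2f}")    # about 2/3
```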
And so I think the girl named in the (inaudible) Florida problem -- I was going to
name her Cinnamon at one point, that's also a rare name, but I didn't want
people to get confused with the Spice Girls.
>>: (Inaudible) more troublesome than the Monte Carlo one --
>> Leonard Mlodinow: Monty Hall.
>>: Monty Hall one. Because you're counting twins and all sorts of other things.
>> Leonard Mlodinow: Right. That's okay. What you're saying is here's our
model. We're going to discount twins, we're going to assume boys and girls are
equally probable, we're going to assume that parents would not name both kids
Florida, but that they would -- you know, so when you make a probability model
you --
>>: (Inaudible).
>> Leonard Mlodinow: Yeah.
>>: I think it's a very (inaudible).
>> Leonard Mlodinow: It's idealized, but so is the Monty Hall problem,
because how often do you go through life and someone says, here's these doors?
Not since the show was canceled.
>>: A game shows that (inaudible).
>> Leonard Mlodinow: (Inaudible).
>>: When you're listing the possible permutations of boy and girl,
it implies that you believe that the birth order is significant when guessing whether
the other child is a boy or girl. Why is that?
>> Leonard Mlodinow: Well, the --
>>: (Inaudible) boy or girl, girl, boy, one of them is a girl.
>> Leonard Mlodinow: Imagine having two kids --
>>: (Inaudible) boy girl if you (Inaudible).
>> Leonard Mlodinow: I'm sorry? Well, you can't -- you can't ignore the order
because if you -- what I'm saying is if you take 1,000 people and you have them
each have two kids, assuming that boys and girls are equally likely, there's no
correlation between first child and second child, et cetera, they each have two
kids, how many do you think are going to have one boy and one girl? One-third
of them or half of them -- 333 or 500?
>>: But we're only talking about people who do have two children and that you
know one of them is a girl.
>> Leonard Mlodinow: But it could be the first one or the second, so there's two
different possibilities. This is a problem that Galileo first solved, and this was in
the -- he was asked by the Grand Duke of Tuscany, who was quite a gambler --
let's see if I can remember the problem, but it was a dice problem. And you know, it's
in the book. I don't quite remember it. But he was wondering why, when you -- I
think he was throwing three dice -- why a certain number that could come up -- oh,
well, you know, I don't quite remember it so I'll just stop there. But there is a
difference, because if you catalog -- I mean, it doesn't matter that they're born, it's
not the birth that matters, it's that you had two things. The first one could be
something or the second one could be something. That's different from the
second one being something and the first one being something.
>>: (Inaudible).
>> Leonard Mlodinow: You know what, you can just mimic it with two dice,
though. You could take two dice and look at whether each comes up even or odd,
right, and you want to ask what are the chances that they're both even or both odd,
okay. And the chances are -- that's exactly the same problem as the boy-girl problem,
because it's half a chance for each die, whether it's even or odd.
>>: That (inaudible) your cousin example you could say hey my cousin, I know
that your first child was a girl now (inaudible)
>> Leonard Mlodinow: That is 50/50. If I know your first child is a girl, that's 50/50 --
that's why people get confused, because that's the difference.
>>: The first -- one way of looking at it is it only matters in determining what the
priors are on the possibilities, ignoring order. So you use the two independent
events to get to the fact that the prior is .25 boy-boy, .25 girl-girl, and .5 boy-girl.
So once you get to that point, then you don't need to think about birth order
anymore, you just say, okay, I'm ruling out boy-boy, so what's left -- it's one-third.
>> Leonard Mlodinow: You are Bayesians here, aren't you?
>>: Yes, we are, actually.
>> Leonard Mlodinow: Okay. All right. If that makes it simpler. Okay. Thank
you.
>>: Thank you.
(Applause)