>> Kirsten Wiley: My name's Kirsten Wiley. I'm here today to introduce and welcome Daniel
Kahneman who is visiting us as part of the Microsoft Research Visiting Speaker Series.
Thinking, Fast and Slow is a tour of the mind and explains the two systems that drive the way we
think. System 1 is fast and intuitive and emotional; System 2 is slower, more deliberate, and
more logical. How do we know when we can and cannot trust our intuitions, and what does it
mean to tap into slow thinking?
Here today to tell us is Daniel Kahneman -- Kahneman, excuse me.
Daniel received the Nobel Prize in economic sciences for his work that challenged the rational
model of judgment and decision-making. He is a professor emeritus of psychology and public
affairs at Princeton University's Woodrow Wilson School and a founding partner of The Greatest
Good, a business and philanthropy consulting company.
Please join me in welcoming him to Microsoft. Thank you.
[applause]
>> Daniel Kahneman: Well, it's true, actually, that I've written a book about two systems. I
don't believe, though, in the existence of these systems. So let me begin by explaining that I
think that they're fictions. And let me explain why I chose this fiction.
I speak of System 1, which is quick and intuitive, and I speak of System 2, which is sort of more
deliberate and effortful. And -- but there are no systems in the sense that we understand systems.
The reason I chose this terminology, and I'm being taken to task by my colleagues for it -- you're
really not supposed to invoke little men inside the head, you know, to explain behavior, which is
what I sound as if I were doing -- the reason is very similar to -- it's related to a very good book
that appeared this year. It's called Moonwalking With Einstein, and it describes the adventures of
Joshua Foer, who was a science writer, and he saw the memory championship of the United
States and he became curious about how that is done. And a year later he was the champion, the
memory champion of the United States. Very entertaining book, very good.
And the point of that book is that people are very bad at some things and very good at other
things and they're terrible at learning lists, and they're very good at learning routes.
And so if you want to remember a list, then you have mental -- you have to perform a mental
exercise of distributing the items in the list along a mental route, and then you will remember the
list.
And this is how all those tremendous feats of memory are performed. The Greeks discovered
that system and, you know, it's being slightly improved to this day.
Very much the same thing happens when we talk -- when we want to talk about psychology and
about mental processes. So there is one way of telling you that some processes are automatic
and effortless. So, you know, that is true of perception and it's true of some emotional reactions,
and you will see it's true of many things and they're automatic and effortless, and there are other
activities of the mind that are effortful and you can talk about those.
And they have the character of lists. But there is something else that the mind does really very,
very well indeed. The mind is excellent at dealing with active agents. It seems to be specialized
for it.
And we are very, very good at assigning traits to agents just in the same way that we're good at
remembering routes. We assign traits, we assign behavior. We have a mind that is specialized
for understanding agents. I'm just thinking [inaudible]. So instead of speaking of effortful
operations and effortless operations, I call them the operation that System 1 performs and the
operation that System 2 performs.
I find that it helps me think, and I think it is very helpful to other people. Don't be misled. I
don't think there are systems, but it's a very good shortcut to learn about two kinds of operations.
So let me tell you a bit -- whoops. Okay. Let me tell you a bit about those two operations by
bringing some examples.
So this [a photograph of a woman's angry face appears on screen] is one way that thoughts come
to mind; it shows what System 1 produces. And what's
important about this is that this is not something you do. The judgment that she's angry is
something that happens to you. Subjectively you're passive. And that is true of many mental
products; that they seem to occur passively so far as we're concerned. We're not the author of
them. They happen to us. And there's a long history of dealing with thoughts that comes from
this.
Then there is another kind of thinking. [The multiplication problem 17 x 24 appears on screen.]
You know, you didn't produce that number, I mean,
unless, you know, a few of you might have done, but you shouldn't have.
[laughter]
>> Daniel Kahneman: But to produce that number you get -- you have to do something else
entirely. You have to go through a sequence of operations. And this is something that you feel
you're doing; you're the author of this action.
And so subjectively there is a really fairly straightforward demarcation between the thing that
happened to us, mental products that happened to us, and mental products that we produce and
where we have the sense that we're doing it.
Whether, you know, free will and this sense of agency is an illusion or not is another
psychological problem that I won't go into, but the sense of agency is clearly present.
Then there is something else that System 2 does, and what I mean by that is there is something
else that demands effort. And when we know that something demands effort, it's because it can
be interfered with.
So something demands effort if it's hard to do while you're making a left turn into traffic,
because that takes a lot of your capacity and there is less left.
And so 17 times 24 is going to be very hard to do while you're making a left turn into traffic, and
there are other operations that have the same character; that is, that they are susceptible to
interference.
And one of the examples -- one of these is actually self-monitoring, and I'll give you an example
with a riddle that you may be familiar with.
A bat and a ball together cost $1.10. The bat costs a dollar more than the ball. How much does
the ball cost?
And the point is that everyone has an intuition when those three lines appear on the screen, and the
intuition is $0.10. I mean, $0.10 is the first association. It's wrong; the answer is $0.05. But for most
of you, if not all of you -- I've seen one person shaking his head -- for most of you $0.10 came to
mind. And then you may or may not have checked. Probably in this audience you did check.
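A quick check of the riddle's arithmetic, as a minimal sketch in Python (the variable names are illustrative, not the speaker's):

```python
# Bat-and-ball riddle: bat + ball = 1.10 and bat = ball + 1.00.
# Substituting: (ball + 1.00) + ball = 1.10  ->  2 * ball = 0.10.
ball = 0.10 / 2      # $0.05
bat = ball + 1.00    # $1.05
assert abs((bat + ball) - 1.10) < 1e-9  # together they cost $1.10
assert abs((bat - ball) - 1.00) < 1e-9  # the bat costs a dollar more
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```

The intuitive $0.10 fails the first check: a $0.10 ball implies a $1.10 bat, and the pair would then cost $1.20.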
When Princeton and Harvard students are given this problem, 50 percent of them write $0.10.
And it gets to 85 percent in some universities that remain nameless.
[laughter]
>> Daniel Kahneman: So those are the better ones. What that means is that somebody who writes
$0.10 hasn't checked. And this leads to a very important characteristic of System 2. System 2 is
lazy.
Now, you know, that is really a naughty thing to say in well-informed psychological circles. But
it's very easy to keep that in mind. It is a system that operates on the law of least effort. It is
easily depleted. It is easily distracted. And when it is depleted or distracted, performance
deteriorates, including self-monitoring. And when people are keeping things in mind, they tend
to get quite poor at self-monitoring and they do impulsive things.
One -- an example of an experiment of that kind is you give people seven digits to keep in mind,
to remember. And then you give them other tasks. And you see which tasks are impaired when
you're holding seven digits in your head.
And people use [inaudible] language when they're holding seven digits in their head. When you
give them a choice between chocolate cake and fruit salad, they're more likely to choose the
chocolate cake if they're holding seven digits in their head.
So there is a function of control, which is the System 2 function, and that function can be
impaired by -- and disrupted when you make the system busy, cognitively busy, or depleted.
So let me go fairly quickly in introducing you to the way that System 1, as I understand it, works.
And okay. Now, let me tell you of a few things that happened to you, you know, while you
stared at those two words. [The words "bananas" and "vomit" appear on screen.]
So one thing that happened to you is of course you read those words. You didn't have to intend
to. That just happened to you. The reading is automatic. It's a System 1 operation.
Then memories came to mind, all of -- you know, most of them unpleasant of course, and you
had very little control over that; that is your associative memory, your memory delivered things
to you passively.
It did more than that. You recoiled. Very slightly, but measurably. When people are exposed to
a noxious word, they draw back.
There were physiological changes: an increase in heart rate, slight increase in blood pressure,
changes that are associated with threat. So your system reacted as if this was a real threat.
One of the more important things, and what really marks all of this as a function of associative
memory, is that there are a whole lot of words that you would now find easier to recognize. So if
those words were whispered, the threshold at which you could detect them or recognize them is
now lower than it was before. And they would be words like stink and smell and nausea and
illness. And actually a very large set of words.
The word vomit in this case evokes or activates a whole network of associations, and those
associations do not come explicitly to mind. But they will facilitate recognition if there is
another stimulus. So you are prepared for a lot of things.
So clearly one of the functions of System 1 is preparation. Now, while you were watching this, a
lot of the changes that occur are actually reciprocal. So you make a disgust face, which most
people do. When people make a disgust face, they're more likely to experience disgust.
And we know, for example, that when you stick a pencil in people's mouths like this, which
forces their face into the shape of a smile, they find cartoons funnier. If you stick the pencil like
that, which makes them frown, they find cartoons less funny.
So the shape of your face influences emotions you have. Recoiling induces negative emotion.
Negative emotion induces recoiling. What you get is a mutually reinforcing network of
activation or activation within a network which creates a coherent pattern. I'll call that
associative coherence. And that is one of the main ways that System 1 functions. It produces
associatively coherent responses to stimuli.
Now, we know a great deal actually about the way that the system works, these responses to
stimuli. Let me give you an example, and I'll come back to the bananas in a minute.
You read the first of these two things as ABC, the second one as 12, 13, 14. But in fact the B
and the 13 are identical. They are the same stimulus. What this shows is that the interpreter
system, System 1, is context sensitive. It interprets. Everything is brought to bear at the same
time in creating a coherent interpretation, and that is a key function of the associative memory,
this production of coherence. It produces a story, it produces an interpretation.
Now, the system is also oriented to causality, and I'll have several opportunities to emphasize
that, because I think it's very important. So there were two words there, bananas and vomit.
There was absolutely no reason for you to make a story out of those two words, but you did. The
interpretation of that, that the bananas somehow -- you were looking for a cause of the vomit,
and the bananas looked like a possible cause. And so temporarily you're off bananas.
Now, this is temporary. It will pass. But there is an interpretation, a causal interpretation. You
were not asked to do it; it happens automatically. This orientation to causality and to coherent
stories is another essential characteristic of that system that I call fast thinking.
Now, this associative memory, the network is preexisting, the connections between the nodes,
between the elements of that network, the ideas, if you will, are preexisting and they contain a
huge amount of world knowledge that gets activated at a tremendous speed.
And, you know, to give you my favorite study of that, it's a study in which a male voice, an
upper-class male British voice, says the words "I have large tattoos all down my back." And I will
not try to do that with an upper-class English voice. I couldn't. But what happens is
approximately three-tenths of a second after that word the brain reacts with a characteristic
response of surprise. People who speak in upper-class English voices do not, in our world
image, have large tattoos down their back.
That gets registered immediately. The speed, the subtlety, the amount of work that would need
to be done to produce that recognition, that this is a British voice, that it is upper class, that
upper class does not go with large tattoos, all sorts of things: all that happens at enormous speed.
All of this is fast thinking.
So it's highly sophisticated. And there is more to it. I mean, this is a system, System 1, which
operates at different levels of fluency, so sometimes it is able to make a coherent story of
information, sometimes it is not. And its own fluency is an input for other operations. So it
monitors its own fluency of processing. I won't go into that.
Now, let me emphasize a little more. Let me go back to this issue of causality. Let me give you
an example. Which is more probable, that a mother has blue eyes if her daughter has blue eyes
or that her daughter has blue eyes if her mother has blue eyes?
Now, here again, there is an intuitive interpretation. And most people, you know, if you force
them to respond very quickly see one of these sentences as much more normal than the other.
And this is that a daughter has blue eyes if her mother has blue eyes. That's the flow of causality.
In fact, if the frequency of blue eyes is the same in the two generations, the probabilities are
precisely equal.
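That equality is just Bayes' rule; here is a minimal check in Python (the numbers are made up for illustration):

```python
# Bayes' rule: P(mother | daughter) = P(daughter | mother) * P(mother) / P(daughter).
p_mother_blue = 0.10            # assumed frequency of blue eyes among mothers
p_daughter_blue = 0.10          # same assumed frequency among daughters
p_daughter_given_mother = 0.50  # any value works; the identity below doesn't depend on it

p_mother_given_daughter = p_daughter_given_mother * p_mother_blue / p_daughter_blue
print(p_mother_given_daughter)  # 0.5, identical to p_daughter_given_mother
```

When the two base rates are equal they cancel, so the two conditional probabilities must be equal; the asymmetry we feel is causal, not statistical.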
The intuition is quite powerful and, you know, people who do not know statistics and do not
bother to check and to compute massively prefer one of these statements to the other.
Here is another example, and I do want to emphasize this causality, because System 1 is very
good with causes and very, very poor with statistics. This is one of the characteristics of System
1. It deals with individuals. It deals with agents. It deals with story lines. It does not deal very
well with ensembles and with statistical facts.
And one of the things that it doesn't do well at all is combine statistical information with
specific -- case-specific information. Let me give you an example of that.
There are two cab companies in a city. 85 percent of the cabs are blue; 15 percent of the cabs are
green. And there was -- whoops. There was a hit-and-run accident and there was a witness.
And the witness said that the cab that was involved in the accident was a green cab.
The witness was then tested, and the tests showed that there is a probability of 80 percent that the
witness is telling the truth.
What is the probability -- what is your probability that the cab that was involved in the accident
was green? Now, you probably know the answer. Most people don't. Most people just say 80
percent. They completely neglect the base rate information. In fact, this is a straight Bayesian
inference problem. The correct answer is 41 percent.
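The 41 percent comes straight from Bayes' rule; a minimal sketch in Python (the variable names are ours):

```python
# Base rates: 85% of the cabs are blue, 15% are green.
p_green = 0.15
p_blue = 0.85
# The witness says "green" and is right 80% of the time, so a blue cab
# is misidentified as green 20% of the time.
p_say_green_given_green = 0.80
p_say_green_given_blue = 0.20

# P(green | witness says green), by Bayes' rule.
numerator = p_say_green_given_green * p_green              # 0.12
denominator = numerator + p_say_green_given_blue * p_blue  # 0.12 + 0.17 = 0.29
print(round(numerator / denominator, 2))                   # 0.41
```

The witness's 80 percent reliability is heavily diluted because green cabs are rare to begin with.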
But people do not combine them, and there are many other instances: you give them the base rate,
you give them the statistical information, then you give them case information, case-specific
information, and they don't use the base rate. The base rate simply doesn't get used. And this
really happens a great deal.
Now let me give you an example, a very similar example. Formally, I think, identical, but it
produces quite a different reaction. Okay. There are two cab companies in the city. Half of the
cabs are green; half of the cabs are blue. But 85 percent of the accidents are caused by the green
company. And now there was a witness and the story is the same. What's the difference?
The difference is that this information leads immediately to a causal interpretation. This is not a
statistical fact. What you have immediately inferred from this is that the drivers in the green
company are reckless fools. I mean, this is why they cause 85 percent of the accidents. They
must be -- you have inferred a cause.
And that cause, that causal information, applies to every green cab. And now that and the
witness are both case information about the same case, about the same story, and people get it
right.
So we have two versions of a Bayesian inference problem. People do miserably in one and they
do quite well in the other, and the difference is whether the information lends itself to a causal
interpretation about the case at hand. And if it does, it will be used and it will be intuitive. And
if it doesn't, in general, for most people, it will be ignored.
Okay. Let me go back to a possible introduction to this talk. A possible introduction would be
that we are very interested these days in whether intuition is good or bad. There's a fair amount
written about it.
We have a lot of -- there is a mystique about intuition. That mystique was to some extent promoted, I
think, by Malcolm Gladwell's book Blink. I mean, people saw it as an endorsement of the magic
of intuition.
You have a lot of business leaders and national leaders who claim to hear very clear messages
from their God and to act on those. And so the -- and by and large we are inclined to believe
them. And we want to believe them. We like this idea that people have access to intuition.
Stories about the marvels of intuition abound, you know, like the physician who sees a case and
immediately diagnoses whatever it is, or the intuitions of chess players, and so on, and this is all
seen as something magical that happens with expertise.
There is actually absolutely nothing magical about it. There's nothing magical about the feats of
intuition, when they're actually feats of intuition rather than luck, which is the case for many of
the people who claim that their God speaks to them.
What happens with expert intuition is that it's produced by reinforced practice. And with a
considerable amount of reinforced practice, activities turn into System 1 activities. In
other words, they become automatic and effortless. It happens to us when we drive a car. We
have learned to do it. And now -- making a left turn into traffic is an exception, but we can do
things while we drive a car because driving is mostly automatic.
That's a skill. There is no magic. We don't feel that there is something magical about our ability
to drive a car and talk at the same time. But it's the same "magic" as other feats of intuition.
Or, you know, one of my examples is that I can tell my wife's mood from her first word on the
telephone, and so can you probably. And that's because we've had a lot of reinforced practice at
that kind of detection.
Now, in many cases when people have reinforced practice, they develop true expertise. And that
applies to physicians, when they get the practice and the good conditions for learning. And here
there is a large difference between different kinds of specialties and professions.
So, for example, anesthesiologists are much more likely to develop good intuitions than
radiologists because the feedback conditions are much better for anesthesiologists. They do
something, the results are fairly immediate. The radiologists just get very, very poor feedback, if
they get any at all. And so they're not in the position of acquiring intuitions.
Now, there are people who feel they are expert but are not, and there are quite a few of those.
And those people populate Wall Street, for example. So you have a lot of people who really
believe they can tell which stocks will go up and which stocks will go down, contrary to the
evidence. And most of them believe that others cannot do it but they can.
And that's an important point, by the way, the sense of subjective confidence which is divorced
from any real basis. They just feel that they can do it.
And so what we have is that some of the intuitions, some of those responses that System 1 provides
us with, are based on true expertise, because we have learned them and recognition has become
automatic and effortless; and for others, we have the same intuition with the same level of
confidence, but it is not based on expertise.
How does that work? I propose that the way it works is that associative memory, when it's trying
to answer a question and it cannot answer that question, will answer another question and think
that it has answered the first one.
And the way this happens is in part because any instruction that you get activates more than it
should. I call that the mental shotgun. You don't actually obey just one instruction. You tend to
do many things at once when obeying an instruction. I'll give you an example.
Suppose I tell you to respond as quickly as possible, you know, by raising your right hand if two
words rhyme and your left hand if they don't,
and now I present pairs of words. So vote, note. That rhymes. Vote, goat also rhymes. But vote,
goat is considerably slower than vote, note. And the reason is that you visualize. Nobody asks
you to visualize. Nobody asks you to compare the spelling. You just do it. It happens. So the
instruction to carry out one operation activates other operations. Or, you know, I mean, I could
give more examples, but we're short of time.
What happens then is that if people are asked a difficult question but there is an answer to an
easier question which is related and comes more easily to mind, people are likely to answer the
easier question. And they won't know that they have done it. And that gives rise to many
illusions. So I'll give you an example.
Okay. Here the task, I think, is going to be -- the question is there are going to be several
figures. Oops. That. I'll come back to this. Okay.
The question is which of these figures is larger on the screen. And you can do this. I mean, you
may know that they are equal. You know, if we took a ruler to them, they are equal. What
happens is that our System 1 is really geared to solve problems in three dimensions. And so you
see the three-dimensional size. And if this is the picture, then indeed the figure on the right is
much larger than the figure on the left.
Even when you know it and you know that this is wrong, you go on seeing it. So this is a case in
which quite automatically you answer a question that you haven't been asked.
Let me give you another example. Here -- that's from a survey of students in which the two
consecutive questions were how happy are you and how many dates did you have last month.
And when you ask those two questions in this order, the correlation is zero. Dating actually
doesn't seem to be the essential thing that determines people's happiness.
And now you invert the order and you ask how many dates did you have last month, how happy
are you. Now the correlation is .66, which is just about as high as it could get. Clearly what
they're doing is they're answering that question. That is, how many dates did you have last
month created an emotion; that emotion is more or less happy. Now you ask them how happy are
you, that emotion is at the top of their mind, and that's what comes out.
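A toy simulation of this substitution effect, under entirely made-up assumptions (the weights and the mechanism are ours, purely for illustration):

```python
import random

random.seed(0)
n = 10_000
dating = [random.gauss(0, 1) for _ in range(n)]     # feelings about dating life
wellbeing = [random.gauss(0, 1) for _ in range(n)]  # unrelated overall happiness

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Happiness asked first: answers reflect overall well-being only.
print(round(corr(dating, wellbeing), 2))            # ~0.00

# Dates asked first: the primed dating emotion partly substitutes
# for the happiness judgment.
substituted = [0.66 * d + 0.75 * w for d, w in zip(dating, wellbeing)]
print(round(corr(dating, substituted), 2))          # ~0.66
```

The point is not the particular weights; it is that merely changing which signal feeds the answer moves the correlation from zero to substantial.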
Now, they're not confused about happiness. They know that happiness is not the same as
satisfaction with their romantic life. You are not confused about what size on the screen means.
But you can't do it; quite naturally, you perform a substitution. That process is in some ways
marvelous, because it means that we're never stumped. And quite often what is substituted, the
answer to the neighboring question, is good enough; and sometimes it isn't.
Now, I'll give you another example which brings to bear one of the capabilities of System 1.
And this is it. I tell you that Julie's a graduating senior at a college or university. And I'll give
you one additional fact about Julie; that she read fluently when she was four years old. And I ask
you what is her GPA. And you have an answer. I mean, you -- you know, I mean, we all know
it's a little more than 3.5 surely, probably less than 4, certainly more than 3.2.
There is an answer. What is the answer? This is worth following, because that's a capability of
System 1 which is quite interesting. There are dimensions of judgment which are intensity
dimensions. You can go from more to less, quantitative dimensions.
Precocity of reading is one such dimension. And now I can tell you here was Julie at age four,
and, you know, I could tell you what amount of rain in a day in Seattle corresponds to being able
to read fluently at age four. You'll have an answer. You're able to match the intensity of the
precocity variable to the intensity of, you know, the amount of rain on a day. Or to actually
almost any other quantitative variable.
There is an answer. It comes to mind. It gets generated. And now, interestingly enough, the
answer that occurred to you when I asked you what is her GPA is in effect a matching of
percentiles. You have some idea of how extreme it is in the distribution of learning -- of reading
acquisition, how extreme it is to read fluently at age four, and you can match that to a GPA.
That will be roughly equally extreme.
If you stop to think about it, this is a truly absurd way of predicting things, because it is
completely nonregressive. That information is worth very little. You should make a judgment
about Julie's GPA that is not very different from what you would give if you knew nothing at all
about her.
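A regressive prediction, by contrast, shrinks the intuitive, percentile-matched guess back toward the mean in proportion to how well the evidence actually predicts the outcome. A minimal sketch with made-up numbers (the mean, the matched value, and the correlation are all assumptions, not figures from the talk):

```python
# Nonregressive intuition: pick the GPA as extreme as the evidence feels.
mean_gpa = 3.0       # assumed population mean GPA
matched_gpa = 3.7    # the GPA that "feels" as extreme as reading fluently at four
correlation = 0.2    # assumed (weak) link between reading precocity and GPA

# Regressive prediction: move from the mean toward the match
# only as far as the correlation warrants.
regressed_gpa = mean_gpa + correlation * (matched_gpa - mean_gpa)
print(round(regressed_gpa, 2))  # 3.14, far closer to the mean than 3.7
```

With a weak correlation the prediction barely moves from the mean, which is exactly the point: the early-reading fact is worth very little.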
But in fact you respond as if this information was sufficient. And this is something else that
System 1 does. System 1 is extremely insensitive to the quality and quantity of evidence on
which it generates judgments.
So I give you that bit of information about Julie and you can go to town with it. And now if I
give you more information, you would recalibrate. But you would not necessarily make more
extreme predictions or make more confident predictions if I gave you more information. This is
how the intuitive system works. It generates a story, the best story possible, on whatever
information is available.
And now I get to the point of subjective confidence. Subjective confidence is not a judgment.
Subjective confidence is a feeling. And it's a feeling of the quality or the coherence of the story
that you have been able to generate. It's a feeling about the goodness of fit.
And when the fit is good, confidence is high. That tells you more about the coherence of the
story that System 1 has generated and that the individual is willing to accept than it does about the
likelihood that the judgment is in fact correct.
Here is another example of substitution. International travellers are asked how much would you
pay for insurance that pays $100,000 in case of death for any reason. People can do that. Then
they're asked how much they would pay for insurance that pays $100,000 in case of death in a
terror incident on their next trip. They pay more for the second than they pay for the first.
And the reason that they pay more for the second than they pay for the first is that they're more
afraid of dying in a terror incident than they're afraid of dying. And this is probably true of
all of us.
What happens is System 1 generates effortlessly -- I mean, you know what I mean when I say
System 1 generates. An emotional reaction is generated. It has a certain intensity. You're asked
the question about the insurance and you answer that question about the insurance by matching it
to the intensity of the emotion that you feel. And there are literally hundreds of examples of this
general kind of process, a process of matching.
And notice that this really comes to mind fairly quickly, fairly immediately. There is -- you
know, you don't feel that you're doing anything wrong while you're doing it. Because it happens
to you. Okay.
Let me see. I would like to conclude by talking about what I will call illusions of understanding
or illusions that the world is understandable.
The ability of System 1, of our intuitive system, to generate coherent stories on the basis of
information, even scant information, unreliable information, means that almost anything that
happens we feel we understand. Our ability to understand things that have happened is
extraordinarily good. We can construct stories.
Now, this creates an illusion, and the illusion is that the world is knowable, because you
understand the world in the past and it seemed that if you understand it in the past, then there
must be a structure to it that would enable -- you know, all it would take is some intuition and
you'd be able to predict the future.
In fact, there may be no structure in the past; it's just a story you're telling yourself. Or there's
something that indeed you have learned now but that you didn't know then. So, you know, one
example: there are two football teams, you think they're equally matched, they play, one of them
crushes the other. Now the one that did the thrashing is perceived as much stronger, and you
can't remember that you didn't know that ahead of time.
That's what's called hindsight: in hindsight you misjudge what the probability was, and you
misjudge what you yourself used to think about it.
What makes this very pernicious, I think, is that it leads us to misunderstand the randomness of
the world in a very deep way.
And here my friend Nassim Taleb, a friend and sort of intellectual hero who has had a big
influence on my thinking, makes very similar points in his book The Black Swan. And the point
is once you tell a story, you're misled by it.
And I'll give you an example about the financial crisis. There are people who, we believe -- and
so the story is told -- knew that a crisis was about to happen. This I think is a serious cognitive
mistake; that we let that statement go by. It's almost a scandal. We should retire the word
"know" from these situations.
And the reason is this: When we say that people knew something, we can only apply that verb if
what they knew was true. Until we know that it is true, it's not knowledge. In fact, they thought
a crisis would happen and then it happened. They didn't know it. Knowing is really something
else.
But we -- that's the way we use the word know. That use of the word know gives us the illusion
that the world is knowable, even when it is not. In fact, there were equally intelligent and
knowledgeable people who knew the same facts and who didn't think there was going to be a
crisis. My inference from that is it was not knowable. Some people thought it, and some people
thought other things.
It's very similar with intuition. The mystique of intuition comes in part from the language. We
don't say I had an intuition that this was going to happen but it was false; generally we say I had
an intuition when it came true. That biases us to believe in intuition, and it biases us to believe
things about the world that are simply not true.
That is what I could do in 30 minutes. Thank you.
[applause]
>> Daniel Kahneman: There's time for questions, I think. Yes.
>>: Is there a way to train the systems to help us make better decisions?
>> Daniel Kahneman: Is System 1 trainable? I'm very pessimistic. And I'm in part pessimistic
about the training of intuition from personal experience, because I've been studying this for 45
years and my intuitions are just as bad as they ever were. I mean, absolutely nothing has
happened to me in terms of making my intuitions more reasonable.
It doesn't happen. And that's because System 1 can be updated very quickly, but it doesn't
learn very quickly. You know, a lot of experience typically is needed to acquire skill. So I don't
really believe it.
I have some vaguely optimistic ideas, but they are only faintly optimistic about what could be
done. And this involves getting System 2 involved. So, for example, with that illusion: when
you see that picture again, you will still see the three figures as different,
but you will know that this is a situation in which you cannot trust your vision.
That is very useful. And that is something that sometimes people are able to do. Usually not
about themselves. I mean, what I have learned is I'm really fairly good at detecting the errors of
other people and not my own. And that is generally true because when I'm making an error, I'm
really busy making that error and I don't, you know -- there just isn't enough capacity to get it
right.
But we are pretty good at detecting the errors of others. That's why I wrote the book, and I had
that in mind actually from the beginning. I call it educating gossip. Because if you learn a more differentiated
language to think about the judgment and thought processes and decision-making of other
people, you will improve. And they will learn to anticipate your gossip and they will improve.
So it's a social process. I don't think that people can easily train themselves to have better
intuition. Very long answer. Sorry about that. I'll answer the other questions more briefly.
Yeah.
>>: [inaudible] describe [inaudible] three or four or maybe a continuum of systems.
>> Daniel Kahneman: Oh, I mean, the systems are completely fictitious. You could write this
whole story very differently. What I mean is, you know, there are two families of processes
that share some characteristics, and there probably are blends and not everything fits very neatly.
But clearly there is a whole set of things that happen that the associative system delivers, and
then there are things about which we have a sense of authorship and agency and they require
effort and are susceptible to interference.
And as I said at the beginning, that's why I call them the two systems: just because you may have
noticed how much easier it is to think about it when you think of it as System 1 does this and
System 2 does that. I don't mean to imply that they really exist. Yes?
>>: Is there a lot of variation among people on how much they lean on System 1 versus System
2? And, if so, do you look at the characteristics of those people?
>> Daniel Kahneman: Yes, there are. There are differences in the extent, especially in the
extent of self-monitoring. There are significant differences. The people who -- the people who
fail problems like the bat-and-ball problem are distinctively different. I mean, for example,
they don't delay gratification quite as much. So you give them problems in which, you know,
they are to imagine that they've ordered something, and how much would they pay to get it
tomorrow rather than to get it after three business days.
People who fail the bat and ball problem pay more to get it tomorrow. So there are consistent
differences. The correlations are not perfect. But, yes, there are differences. They're related to
intelligence, but it's not quite identical with intelligence. It really is closer to self-control. And
to the control of attention which turns out to be central to this whole thing. Yes?
>>: [inaudible] about the coherence of things that System 1 generates and that they were
[inaudible] so, like, smiling makes you happier and being happy makes you smile. So I'm
wondering how much is it possible to use System 2 to control System 1 by creating some of
those conditions yourself consciously in a way to then generate more System 1 reactions that you
can benefit from.
>> Daniel Kahneman: I think that's very -- yeah. I think that's an excellent direction to go in.
And people do know that. That is, people do have an idea of the conditions under
which they are likely to be more creative. And they can create those circumstances in which
their automatic system will operate differently. I think that's true. It's actually a new thought for
me, and I think it's a very good one. Yes.
>>: You were referenced quite a bit in Winifred Gallagher's book on attention called Rapt. Do
you agree with the theory that your life is the sum of what you focus on?
>> Daniel Kahneman: Oh, yes. You know, that's -- one should never agree to any statement that
is made in one sentence, but if you are going to have one sentence about it, it's a good one.
Yeah. I mean, clearly what we attend to, what we choose to attend to, determines a lot of
what happens to us. Certainly well-being. I've been studying well-being in the last decade,
and well-being is very largely about attention. Yes.
>>: As we were listening I looked around and most of us are sitting with legs crossed, arms
folded, and it's a little [inaudible] told kind of how I think and what I'm thinking about. I wonder
if our learning style changes or makes any impact [inaudible] correlation to the way in which we
make decisions, the way in which we balance System 1 and System 2. So have you noticed a
difference in the way which people learn or the way they take information and how that changes
and what they do with it?
>> Daniel Kahneman: That's a very good question, and I don't have an intelligent answer. I
don't know. Yes?
>>: I started a particular [inaudible] ten years ago which I observed [inaudible] those are
directly correlated with the notion [inaudible] and as long as I'm constantly aware of [inaudible]
less likely that I would [inaudible]. So I'm curious, have there been similar experiments by monks
or --
>> Daniel Kahneman: Yeah, there have been --
>>: [inaudible]
>> Daniel Kahneman: The question was the effects of meditation, the cognitive effects of
meditation. There is a fairly large and increasing literature, and clearly there are effects.
There is a real problem interpreting the literature, in that the people who study Buddhists and the
effects of meditation are in general meditators themselves, and they find it very difficult to be
objective about what they're studying. And so it's
turned into a sort of insider's literature for that reason. And I'm not an insider, so I can't tell you
very much. Yes?
>>: What's the correlation between System 1 and stereotyping?
>> Daniel Kahneman: System 1 and stereotyping. You know, stereotyping is the way we think.
We have stereotypes about tables. The stereotype -- you know, social stereotypes are very
noxious and they cause a great deal of damage. But it is not true that, you know, you can have a
mind that is stereotype free. We can't operate that way.
So people have stereotypes. And what happens is there is a possibility of controlling your
reactions to stereotyping. And in the last few years there is a lot of debate about how early you
can make yourself resist a stereotyping impulse. But the impulse is there, and everybody
has it. It's something that we'd better accept because otherwise, you know, we won't be very
realistic about the way the mind works.
So it's not that there are people who are stereotype free. I don't think there are. You know, and
if you don't have stereotypes about the people you're not supposed to have stereotypes about, you
do have stereotypes about Harvard law professors. So, you know, there are categories that are
okay to have stereotypes about.
It is true in general that overt stereotyping is clearly something that can be controlled. And
there are interesting results on depletion. People whose stereotypes are very strong spend half
an hour being polite to somebody of another group that they don't like, and later you give them
a hand grip task and they're depleted. They're really tired.
Controlling stereotyping is actually very tiring for people when their impulses are strong.
Yes. Yes?
>>: So is it possible that System 1 is affected by any social factors, like social norms or what
majority of the people are saying?
>> Daniel Kahneman: Oh. Of course. The question is, if I understand it, what is the effect of
social norms on System 1. Well, System 1 has world knowledge, and world knowledge to a very
large extent is constructed by social norms. So of course. Yes.
>>: [inaudible] in the last several years [inaudible] popular books. So easy question. What does
[inaudible]?
>>: Can you repeat?
>> Daniel Kahneman: Yes. What are my favorite books about, you know -- it is true,
there is a real industry of books on cognitive limitations and cognitive failures. Well, I mean, the
best known is clearly Predictably Irrational. But I think I like the books by Chip Heath
best, on Switch -- Switch -- and what was the first one? I forget. But those are books that take
issues of limitations and apply them to the real world. There are some very good books like that.
Though I'm not a great customer of those, because, I mean, obviously --
[laughter]
>> Kirsten Wiley: One last question.
>> Daniel Kahneman: Yeah. Yes.
>>: So you mentioned that System 1 [inaudible] trained by [inaudible] do you think it's possible
to increase System 1's awareness of its own limitations [inaudible]?
>> Daniel Kahneman: I don't think it would be System 1. I mean, you can -- System 2 does
the monitoring. System 1 is aware of its own fluency, but I don't think that System 1, the way
that I describe it, can be trained. And obviously there are very large and interesting
individual differences between people in System 1.
What is remarkable is that the whole industry of intelligence testing is in my terms about System
2. It's about reasoning. There is something else happening, the kind of coherence of world
knowledge that enables us to understand situations, to detect incongruities and so on, for which
there are essentially no tests.
So we don't know nearly as much about System 1 as we should, and if we define that as a
problem, which I think we should do, then probably in a decade we'll know a great deal more.
Thank you.
[applause]