
>> Kirsten Wiley: Good afternoon and welcome. My name is Kirsten Wiley and I'm here today to introduce and welcome Kathryn Schulz, who is visiting us as part of the

Microsoft Research Visiting Speaker Series.

Kathryn is here today to discuss her new book Being Wrong: Adventures in the Margin of Error. To err is human. Yet most of us go through life assuming, and sometimes insisting, that we are right about nearly everything. No surprise, then, that we are frustrated or humiliated when we discover that we were wrong. At a moment when economic, political and religious dogmatism increasingly divide us, Being Wrong explores the seduction of certainty and the crisis occasioned by error. Kathryn is an award-winning journalist, author and public speaker. She is behind the Slate series The Wrong Stuff, featuring interviews with high-profile people about how they think and feel about being wrong. Her freelance work has appeared in many publications, including the New York Times Magazine, Rolling Stone and Time magazine.

Please join me in welcoming Kathryn to Microsoft.

[applause]

>> Kathryn Schulz: Thank you so much. For starters, is the mic functional here? Can everyone hear me okay? Good. Yes, all right.

Thank you so much for that introduction. I'm actually really happy to be here. Some of you might've heard me chatting in the back before we got started. I spent most of my 20s in the Pacific Northwest. When I graduated from college I moved out to Portland and lived there for several years, and then even after I moved away, I actually took a job in Seattle. And I would sort of commute occasionally and make these business trips, and then extend them as long as I could get away with in order to spend time out here. So it's very lovely to be back. As I was thinking about this and about my time in the Northwest, I thought, well, maybe I should start off today by telling you guys a story about something that happened to me when I moved out to Portland that first time.

As I said I moved there after graduating from college and a friend who was also planning to move west said, oh well, let's make it a road trip, and so we packed all of our stuff into her car and we set off across the country. And because we were young and unemployed and had nothing better to do with our time, we took the longest, slowest possible route from the East Coast to the West Coast. We drove almost entirely on back roads and through state parks and national forests and a long way into this journey, somewhere in the middle of South Dakota, I turned to my friend and I asked her this question that had been bothering me for like 2000 miles. And my question was what's up with all the

Chinese characters by the side of the road?

Well, so my friend just looked at me totally blankly, like that. And I was like, you know, all these signs we keep seeing with the Chinese character on them. And she just kind of stared at me for a few seconds, and then she started cracking up, because she figured out

what I was talking about. And what I was talking about was this [shows slide of a recreational area picnic table]. [laughter] I like this story because it demonstrates something really wonderful and kind of strange and occasionally a little problematic about the human mind, which is that we are unbelievably good at generating explanations for what's going on around us. And sometimes those explanations themselves are also unbelievably good.

You know, I mean these days we can explain things like tuberculosis and neutrinos and what the hieroglyphics on the Rosetta Stone meant. And sometimes these explanations that we generate are really, really interesting and we have no idea if they're good or not.

So we explain things like how the universe began and, you know, what William

Shakespeare was like in person and why your best friend’s husband of 25 years just left her. And then sometimes these explanations are actually totally unmoored from reality. I mean at no point in my cross-country journey did it occur to me to pause and ask myself okay, is there any remotely plausible scenario in which the US Department of

Transportation would line 2000 miles of American back roads with Chinese characters?

No, like this didn't even dawn on me, right? But the point that I'm trying to make isn't that we generate bad explanations; it's that the bad explanations are totally inseparable from the good ones and the interesting ones. And that's the thing about this capacity to generate theories and explanations about our world. It's amazing; it's what makes us human; it's why we have things like science and literature, and why we tell each other stories. And it is also why we get things wrong. And the reason that I'm telling you this particular tale is that I've spent the last five years of my life thinking about situations that are, figuratively speaking, basically exactly like this. Okay, I have been thinking about why it is that we sometimes misunderstand the signs around us, how we react when that happens, and what, if anything, this can tell us about being human.

Another way to put all of that, of course, and as you heard in the introduction, is that I've spent the last five years of my life thinking about being wrong. This might strike some of you as a strange career move. But it has one really big advantage in the marketplace, which is that there's absolutely no job competition. Nobody else is spending all of their time thinking about being wrong. In fact, quite the contrary, in my experience most of us think about being wrong as little as humanly possible.

I think this is a problem. I think it's a problem for each of us as individuals in our personal lives and in our professional capacities, and I think it's a problem for all of us collectively as a community and as a culture. So what I want to try to do today is get us thinking about our wrongness and get us sort of reconnected to our own personal fallibility. Because it turns out it's not exactly accurate to say, oh, we don't think about being wrong; we do sort of think about being wrong, but we think about it in this funny asymmetric way. So let me tell you what I'm talking about. It turns out that there are some kinds of things that we think are fallible and then there are some things that we don't really think are fallible. So what do we think is fallible? Well, for starters, the human species.

None of us have any problem recognizing that as an entity, human beings get stuff wrong. In fact, every major account of what it means to be a human being, whether that account comes to us from religion or science or philosophy or psychology, all of these domains that basically agree on almost nothing, they all agree on this one thing, which is that human beings are fallible. We're imperfect; we make mistakes. And we all get this in our everyday lives too, right? We say things like, oh, to err is human. So we understand it in the big picture. We also have no trouble whatsoever realizing that other people are wrong, especially when they disagree with us. In fact, arguably, we are too good at this. I mean, think for a minute about the expression, I told you so. We love saying this. We get this kind of strange sadistic satisfaction out of uttering those words to other people, out of pointing out their mistakes, or their putative mistakes, their ostensible mistakes, the stuff we think they're wrong about. So we have no problem with that either.

We also, and kind of more interestingly, more surprisingly, really have no trouble recognizing that we ourselves have been wrong in the past. So I have no trouble standing up here and telling you about my stupid Chinese character mistake, and I could tell you about lots of other mistakes I've made too, some of them more important than that. And I could talk to you about beliefs that I used to hold that I no longer agree with. So we get that the whole species is fallible; we get that other people are fallible; we get that we ourselves in the past have gotten things wrong, over and over and over again. So what's left? Well, obviously, right? This is where it falls apart. Me, here, right now, the beliefs I hold at this moment, the ideas I'm attached to as I'm up here talking to you, okay, that is when suddenly all of us lose sight of our innate human fallibility and start to feel like we are right all the time. The question is why? If we're so good at understanding this in all these other domains, why do we have trouble with this one particular problem?

It's very easy to chalk this up to some kind of individual psychological or emotional problem. To say, oh, you know, people who can't deal with being wrong, they're either too arrogant to imagine they could possibly be making a mistake, or they're too insecure to admit it, or both. There's a certain amount of truth to this idea. I mean, we have all, I'm sure, met people who seem almost pathologically unable to ever admit that they're wrong about anything. And for sure there's a huge range of how well and how poorly people are able to deal with their mistakes. That said, I'm not personally satisfied with an explanation that just makes this a problem about a bunch of individuals. For one thing, it just doesn't account for the fact that really most of us are like this. Most of us struggle in the here and now to deal with our fallibility. So what I want to talk about today is a structural reason that all of us have trouble with this.

And the structural reason goes like this, so however well we might understand in the abstract that we’re totally fallible, what we're actually called upon to do in real life is to decide right here, right now, am I correct or not about this specific proposition? Most of us have one strategy for doing this. And our strategy is that we kind of look inside ourselves, we search our souls for some sort of evidence that we might be wrong. This might seem like a fairly commendable thing to do. I mean when you think about it, right,

I have no problem looking at you and saying I think you’re wrong. So in theory, right, I should turn that searchlight around and I should look inside myself. That seems nice.

That seems virtuous. But if that is your strategy for figuring out whether or not you're wrong, it's going to fail you almost every time.

Let me show you what I mean by starting with this. This is one of my favorite optical illusions. Some of you have probably seen it before I suppose, I saw some of you flipping through the book as I was walking in, so maybe you noticed it in there too.

Here's the trick with this optical illusion. This square here that's labeled A, and this one here that's labeled B, those two squares are the exact same shade of gray. Now if you haven't seen this illusion before, you're having trouble believing me, because you're looking at it and it seems completely evident that A is much darker than B. What's interesting to me here is this feeling of self evidence that you're experiencing in this moment.

So what I want to do is show you a second image that might shake up that feeling a little bit, that suddenly introduces a note of doubt into that experience. The only problem with this example that I'm showing you of that feeling of certainty about something is that it's an optical illusion, right? The whole point of an optical illusion is to generate the experience that you guys are having; it is to set you up; it is to deceive you and make you feel convinced about something only to show you that you're wrong. So it's not quite fair, right? The deck is stacked against you guys, sorry. But the thing is, that experience, that sense of feeling certain about something, happens to us all the time in the natural world.

I am reasonably certain that everybody in this room at some point in their lives has made a bet, not like at the racetrack. Like a bet, like a factual bet, you know? Like who won the 1992 Super Bowl, or whatever. And the whole reason that you make that bet is that you know you are right, right? And you're, like, shaking your friend's hand, and you can feel the triumph that is about to be yours. And then 45 seconds later you're handing your friend ten dollars, and you're like, what just happened? Like, it's actually quite surprising and confusing, that situation. Because you have been in the grips of this feeling of clarity that you are right; you could almost see it inside yourself. If you only take one thing away from our conversation today, I hope it is a distrust of that sense of certainty that we all feel very often.

I could give you a thousand reasons why you shouldn't trust that feeling, but instead I'm going to tell you one very quick story. The story takes place two years ago at Beth Israel Deaconess Medical Center. Beth Israel is a teaching hospital of Harvard Medical School in

Boston. And it is widely considered to be one of the best hospitals in the country. So a couple years ago, one day a woman comes in for surgery, she's taken to the operating room, she's anesthetized, the surgeon cuts her open, does his thing, stitches her back up and sends her out to the recovery room. And when this woman wakes up she looks down at herself, and then she looks up at the nurse standing next to her and she says, “Why is the wrong side of my body in bandages?” Well the wrong side of this woman was in bandages, because the surgeon in question had performed a major operation on her left leg instead of her right one.

When the vice president for health care quality at Beth Israel talked about this incident, he said something really interesting. He said, "For whatever reason, the surgeon simply felt that he was on the correct side of the patient." You can see where this is going.

Trusting too much in the feeling of being on the correct side of anything can be very dangerous. The lesson that we learned from the optical illusion, from your hypothetical

Super Bowl bet, and from this medical mistake is that the internal feeling of certainty, no matter how convincing it is, is not a reliable guide to what is actually happening in the external world. That is one of the reasons why I said to you a few minutes ago that this strategy of looking inside yourself to see whether you're right or wrong is going to fail you. But there's another reason too. The first reason had to do with the problem of the feeling of being right. But it turns out that there is also a problem with the feeling of being wrong.

So let me ask you guys a question. How does it feel, emotionally, how does it feel to be wrong? This is a real question; you can answer it, like, with your voices.

>>: It sucks.

>> Kathryn Schulz: Yeah, it sucks, thank you. How else?

>>: It depends on what you're wrong about.

>> Kathryn Schulz: Fair enough, a nuanced man, I appreciate that. It absolutely depends on what you're wrong about, it sucks, and what does it feel like? What is that feeling of being wrong?

>>: Anger.

>> Kathryn Schulz: Say again.

>>: Anger.

>> Kathryn Schulz: Anger yeah, you feel angry, what else?

>>: [inaudible] glee, when you tell someone.

>> Kathryn Schulz: Say again.

>>: There's lots of glee when you tell someone.

>> Kathryn Schulz: When you tell someone else, absolutely, lots of glee. What did you say back here?

>>: [inaudible] Unsettling.

>> Kathryn Schulz: Unsettling. Okay, thank you. These are great answers, you know it sucks; it makes you angry; it's unsettling. The problem is these are great answers to a different question. You guys are answering the question of how it feels to realize that you are wrong about something. Realizing that you're wrong about something can feel

like all of those things and a lot of other things, right? It can be illuminating; it can be agonizing; it can be horrifying; it can be really funny like my Chinese character situation.

But just being wrong, just going through life in the grips of some kind of belief that you are later on going to realize is false, that experience doesn't feel like anything. I'll give you an analogy. So I grew up watching the Looney Tunes. And there's this one cartoon with this kind of hopeless coyote, who's forever chasing a roadrunner and never catching it. Do you guys -- are you with me -- do you, okay, great. So in pretty much every episode of this cartoon, there's a moment where the roadrunner is being chased and runs off a cliff. Which is fine; roadrunners are birds, they can fly. But the thing is that the coyote runs off the cliff after the roadrunner. And what's hilarious, at least if you're seven years old, is that he's flying too; he keeps running right up until the minute that he looks down, and realizes that he's in midair, and then he falls. When we're wrong about something, not when we realize it, but before that, when we are just holding on to some kind of erroneous belief, we are like that coyote, after he's gone off the cliff, and before he looks down.

We are already in trouble, but we think we're on solid ground. So I should actually revise something I said to you a moment ago. It does feel like something to be wrong. It feels like being right. And that's the situation we're in. We feel right when we're right, we feel right when we might be right, and we feel right when we're wrong, until it's too late. So on the one hand, okay, right, it's great to feel right. I like feeling right. It's fun; as someone pointed out, it gives you a little jolt of glee. It makes life easier, right? You sort of get through the day without worrying about too many things, because you just sort of assume that everything is right. So yeah, it's great. But we pay a pretty steep cost for this sense that we're always right about things.

Specifically, we pay a steep interpersonal price for this, and we pay a steep intellectual price. Let's talk about the interpersonal price first. What does it mean to feel like you're right about something? It means that you think that your beliefs are in line with reality.

They're just reflecting the world as it actually is. You’re on the side of truth. And if you feel that way, you have a kind of moral conundrum on your hands which is, how do you treat people who don't agree with you? Because in that framework people who don't agree with you are not in touch with reality and are not on the side of truth.

Well, it turns out that most of us answer this ethical question using the same strategies. I call these strategies the three assumptions. What's the first assumption? The ignorance assumption. We just kind of think, oh, this poor fool who disagrees with me doesn't have access to the same evidence and information that I have, and once I impart that information to this person they will naturally see the light and come around to my side.

When that assumption fails us, when it turns out that someone knows all of the same things that we know, but connects the dots differently and still disagrees with us, then we default to the second assumption: the idiocy assumption. This person knows all the facts, but they don't have the brains to interpret them. And when that assumption fails us, when it turns out that our adversary in any given situation knows all of the facts and is actually pretty smart, well, then what do we conclude? Obviously they're evil, right?

They’re in possession of the truth, they know it, but they are hell-bent on destroying it.

These assumptions, when I state them flatly like this, sound ridiculous, right? I mean, sure, there's evil in the world. But to imagine that it's lined up against you in the face of every disagreement is a little bit extreme.

On the other hand, we hear these three assumptions all the time. Turn on the radio, people. I mean, this is all you hear on the radio, and I suspect that if you turn off the radio and you listen carefully to your own minds, you'll hear these assumptions disturbingly often as well. As I see it, this is a huge ethical problem. I don't want to treat everyone who disagrees with me like they are ignorant or moronic or immoral.

But it's also, as I said, a huge interpersonal problem. I don't care whether you are a member of a family, or a member of a product development team, or the leader of a company, or the leader of a country; if your strategy for dealing with disagreement is to belittle and demonize the people who disagree with you, you are losing out in two huge ways. One of them is that you are losing any respect, trust and possibility of future communication with that other person. And the other is that you're losing out on that person's perspective. You're losing out on learning whatever it is that person knows or sees that is causing them to organize the world's information differently than the way you do. And when you start going down that road, pretty soon you just surround yourself with people who agree with you. You make the world smaller and smaller, and you start down the very dangerous road to the phenomenon known as groupthink, which I'm sure you're all familiar with.

And that leads me to the second problem with this conviction of being absolutely right, which is the intellectual problem. This conviction of being right is toxic to intellectual curiosity. It turns out that the sense of knowing things, paradoxically, is a huge obstacle to actually learning new things. I'll give you two very quick examples of this. The first is a phenomenon called search satisfaction. Search satisfaction means that you go out looking for an answer, and as soon as you find one that seems at all compelling, you stop looking. Search satisfaction is why, when a friend of mine was having trouble breathing and she finally went to see her doctor, and he took x-rays and looked at her lungs and saw that she had a little bit of pneumonia, he sent her home with some antibiotics. Four weeks later she was still having trouble breathing. And so she went back to her doctor.

And he had been right. There was some pneumonia in her lungs. And because he saw it he missed the tumor that had now spread to both of her lungs. That is search satisfaction.

That's what happens when we think we have the answer.

A similar phenomenon that, again, most of you have probably heard of is the problem of confirmation bias. Confirmation bias is that thing where you have a belief about the world, and so everything you see confirms that belief. Any evidence that contradicts you, you either ignore, or you literally don't see it, or you deny it, or you misinterpret it. The engineers who designed the space shuttle Challenger, they loved that space shuttle. They were really proud of it. They put the odds on a problem occurring with it at one in a million. That wasn't, like, a figure of speech; that was their actual statistical estimate of a problem with the Challenger. And when some other people who had some concerns about the shuttle came to them and said, hey, we're a little worried, we've seen from previous flights all of this damage to the O-rings and to other components, the engineers said, I know, isn't it amazing what they're able to survive?

That's confirmation bias.

Taken together, things like search satisfaction and confirmation bias and any number of similar phenomena conspire to keep us from taking on and taking in new information about the world when we think we have all the answers. It turns out that not knowing the answers, or suspending your faith in your own answers and being open to other ones, is actually the best way to really learn things about the world. This is a totally simple lesson. I am reasonably certain that most people in this room have heard it before, from other speakers, from books you've read, from leadership trainings, whatever. But it is surprisingly hard to really internalize it and implement it in your own lives. And it's hard because it is exactly antithetical to the lesson that you've been learning for the entire previous part of your life. Think back for a moment to those answers you gave me when you thought you were answering the question about how it really feels to realize that you're wrong. It sucks, right? It's unsettling. It makes us angry. You guys didn't generate these answers out of your own heads; I mean, your own heads are brilliant; you work for Microsoft. But you didn't just spontaneously come up with them. This is an idea that our culture pushes very strongly at you from before you can walk and talk. We have this deep-seated belief that getting something wrong means there is something wrong with you. Imagine for a second that you're in your elementary school classroom, and the teacher is walking around and she's handing back quiz papers.

And she hands back one that looks like this [points at slide]. This is not mine, by the way, not that I would have a problem with making all those spelling mistakes. When you are still just in elementary school, you know exactly what it means about the kid who gets that quiz back. That's the dumb kid. That's the bad kid, the troublemaker. So by the time you are seven or eight or nine years old, you've already learned two incredibly powerful lessons about wrongness. The first one is that people who get things wrong are either intellectually inferior, morally inferior or both. And the second lesson is that the way to succeed in life is to not make any mistakes. I hope I've already convinced you that this first lesson is not merely wrong; it is totally backward. That in fact it is the refusal to admit our mistakes, it is our fear around our fallibility, that cuts off our intellectual growth and causes us to treat other people immorally and unethically.

But this second lesson is really problematic too, this idea that the way to succeed is by not making any mistakes. And I want to be a little bit careful here, because I'm not trying to say that we should never seek to prevent mistakes. Obviously there are all kinds of mistakes we would love to never have occur again. I hope there's never another mistake like the one we saw at Beth Israel in the operating room. I'm on an airplane probably twice a week these days. I would just as soon never be on one where a major mistake is happening. None of us want to see mistakes in any context like that, or in nuclear power plants, or on deepwater oil rigs, or in any context where the stakes of our errors are really, really huge.

But it turns out that even in contexts like that, in fact especially in contexts like that, we are very poorly served by an attitude toward error that involves shame and blame. If that is how you feel about error, all you do is drive the problem underground; you make it impossible for people to admit mistakes, to discuss them, to understand why they occurred, and to prevent them from occurring in the future. People who work on error prevention in high-risk domains know this and have known it for a long time. They understand that you cannot solve the problem of mistakes by hiring a perfect surgeon, or a perfect pilot.

Those don't exist. All you can do is create a system that takes into account our fallibility and do the best you can to either make mistakes not happen in the first place or not have disastrous consequences when they happen anyway.

So even when we really, really, really want to prevent mistakes, paradoxically we need to embrace the inevitability that they're going to happen. But that aside, this ideal of preventing mistakes turns out to be a very narrow tool. When you want to achieve an outcome that is narrow, predictable, uniform and known in advance then yeah, you know what, focusing on eliminating mistakes is useful. If you want every seventh grader in the

English-speaking world to spell vegetable the same way, you can achieve that goal by focusing on eliminating mistakes. And if you want every widget coming off of your assembly line to be exactly ten millimeters thick, you can also achieve that goal by focusing on eliminating mistakes. The problem, of course, is that most of life, and quite frankly most of the really interesting, challenging parts of life, do not concern outcomes that are specific, predictable, uniform and known in advance. If your goal instead is to design better software, or create a killer ad campaign, or cure polio, or negotiate a peace agreement in the Democratic Republic of Congo, or write a sonata, or raise your kids, or anything like that, you cannot succeed by focusing on the elimination of mistakes. On the contrary, if you want to succeed you are going to have to deeply accept the fact that you're going to make mistakes all along the way.

The whole point about all of those projects, and countless more, is that they involve innovation. And I know you guys must hear and use this word all the time, but what does innovation mean? It means going into the new. And here's the thing about novelty, here's the thing about newness: no one's done it before; we don't know how to do it. You wouldn't even know what a mistake looked like in advance, so how can you possibly prevent it? You can't. All you can do is throw as much stuff at the wall as you can and see what sticks, and figure out a way to do that so that the mistakes you do make will not have huge catastrophic costs associated with them, and so that the mistakes you do make will each and every single one of them contain some kind of lesson that will propel you forward into the future. Because the reality is, this is how we move forward into the future: by screwing up all along the way. This is true for us as individuals, and it is true for all of us collectively as a species throughout our history. And before I leave I want to share with you a kind of lovely idea from philosophy that really encapsulates this.

But I need to apologize, because it actually has a terrible name. It's this: the pessimistic meta-induction from the history of science. What is this? Well, at some point some philosophers and historians of science were sitting around and they were talking about the history of scientific development. And they observed that almost every scientific belief that at some point in time has been deeply entrenched and widely accepted has at some later point in time proved false. And they drew from this the necessary conclusion, which is that most of the scientific beliefs that we ourselves hold today are probably wrong as well.

This idea does not just apply to science. In politics, technology, economics, how we raise our kids, how we educate them, what we think is safe and smart to eat, you name it, almost any domain of life, the things that we believe most strongly right now stand a very good chance of looking, in the future, either absurd or flat-out wrong. So much so that, quite frankly, we might as well have a pessimistic meta-induction from the history of everything. But if that sounds sort of disheartening, let me say that I don't think that this insight is at all pessimistic. Or rather, it's only pessimistic if you dread being wrong and don't want to learn about your mistakes. If instead you think that mistakes are one of the great engines of human innovation, if you think that they are how all of us grow up and learn and change, then actually this insight is fundamentally optimistic. It suggests that our errors will propel us forward into new ideas and new knowledge.

The person who, to my mind, put this most beautifully was the philosopher Richard Rorty, who said something really lovely. He said that accepting our mistakes is really just a way of embracing the permanent possibility of somebody having a better idea. This, to me, is the most admirable attitude toward wrongness any of us could have. If we feel this way, we can look at the world around us with awe and with curiosity. And we can treat other people, including those who disagree with us, with interest and with respect. And most of all, we can let go of this ridiculous effort to be perfect, because, quite frankly, the point of life is not to be perfect; the point of life is to get better. To be better thinkers and better leaders and better listeners, better family members, better members of society. And I hope that I have convinced you today that one crucial way of doing all of this is to get better at being wrong. Thank you so much.

[applause]

>> Kathryn Schulz: Yes, questions? Anyone got some? Someone does? Yes?

>>: The [inaudible] looks like a wooden patio bench to me.

>> Kathryn Schulz: Picnic table, exactly. You know, when you spend a lot of time traveling around in state parks and national forests, you see that sign a lot. And to this day it mystifies me that my mind interpreted this so terribly incorrectly. But again, as I said, I also think this is, kind of, the miracle of the human mind: that we can create for ourselves natural optical illusions, if you will. We can see things that aren't there, and, you know, sometimes this thing happens where we then just kind of commit to that belief. We fall in love with that belief, and that, I think, is where we can get into a little bit of danger. In that particular case, obviously, it was just goofy, but needless to say, sometimes we do so in more dire circumstances. Yes?

>>: Have you looked at different types of being, being right? Because being wrong and being right may be two sides of the same coin. And it seems to me there's some sorts of

things that you could be right about that are testable and some that could never be testable.

>> Kathryn Schulz: Absolutely.

>>: I think there's a whole range of different wrongs as well.

>> Kathryn Schulz: There's a huge range, and thank you for bringing it up, because it's really important. I talk about it at length in the book, and I didn't speak about it at all today, but, right, obviously there's this elephant in the room, which is right and wrong about what? You know, like who gets to say, who gets to decide whether something is right or wrong? And you're exactly correct, there are things that, you know, we can obviously say, look, it was right or it was wrong. If I said, oh, it's going to be twelve degrees and snowing in Seattle today, clearly I was just objectively wrong.

But there are a lot of situations in life, in fact arguably most situations in life, where there is no plain and simple benchmark like that. And that's part of why I focus a lot, when I speak, not just on right and wrong but on attachment to feeling right or wrong, on disagreement. Because part of what's interesting to me is that even when there is no benchmark, even when no one can say for sure, yes, this is absolutely right, or this is absolutely wrong, we ourselves, we experience ourselves as right. We kind of entrench in our own rightness. And, in fact, in some ways the less we can know for sure whether something is right or wrong, the more we do that. I mean, when you think about it, what do people disagree about on the broadest scale? We disagree about things like religion, right.

We disagree about the origins of the universe, the origins of our species. These are questions we are not, in our lifetimes, going to settle, but those tend to be the things about which we feel most right or wrong. So yes, the short answer to your question is, absolutely, these are very shifty terms, but I think there's still a lot to be said for looking at how we deploy them in daily life and the ways that we feel attached to being right or wrong, whether or not anyone can prove that we are right or wrong. Yes?

>>: It seems to me that some of our most important decisions are always in a gray area. That is, whether they're right or wrong is always a little [inaudible]. I think that climate change is a big debate that needs to exist, and in areas where there are ominous consequences for inaction as well as for inappropriate action, is there a strategy?

>> Kathryn Schulz: It's an excellent question, and yes, I mean, quite closely related to this gentleman's point that a lot of life takes place in the gray area. You know, we don't know for sure, there's competing evidence, we can interpret the evidence in many different ways. And I think in situations like that there are reasoning strategies to work our way through that gray area, and then there are action strategies to deal with those uncertainties. In terms of how we think when we're in a situation of uncertainty, I would argue that the more uncertain the situation is, the deeper into that murky gray area we are, the more important it is to remain really in touch with the possibility that we're wrong. That's what uncertainty is, right? Rising uncertainty means a rising probability that we don't know the answer. And in those situations we need to be able to listen to other viewpoints; we need to be able to suspend our own judgment; we need to be able to ask ourselves, what are we not seeing? You know, what are the other possible pieces that are occluded from my vision for whatever reason? But then at some point you do have to act. We act in the face of uncertainty all the time. You know, I mean, I could take a step forward and the ceiling panel could fall on me, right? And I just assume that's not going to happen. And I take action anyway. And we're all called upon to act even when the probabilities are actually greater and the uncertainty is more extreme.

So with a case like climate change, I mean, I do think that there are lots of steps one could take that take into account multiple possible outcomes. I mean, I would say what you try to do in a situation like that is maximize good outcomes no matter which future materializes. Right, so, and I'm pulling these numbers out of nowhere, if you think there's a 95% chance that human-caused climate change is actually going to create catastrophic outcomes, with real human material costs in terms of money, in terms of health, in terms of whatever, then obviously you want to take action. But if there's a 5% or 15% or 20% chance that it's not going to happen the way you foresee, and given a system like climate, right, which is unbelievably complicated, there's always a chance it's not going to happen, then the question becomes, well, what's the intervention that's a good outcome no matter what, right? I mean, I would argue, for instance, that investing in diverse energy sources is a good outcome no matter what. Even if the climate change forecasts are off in one direction or the other, we don't lose from that. You might argue that something like carbon sequestration, which is hugely expensive and has a lot of unknowns associated with it, and could be problematic in its own way, maybe that's a little bit less good a solution, because if the model isn't working exactly the way we predict, well, then we don't get benefits from that, whereas we clearly get benefits from things like green energy, so why not? So I do think there are strategies for decision-making and acting even in the face of extreme uncertainty. Yes?

>>: What you're saying is that the innate way humans tend to behave toward the uncertainty of being right is incorrect, or at least that it's not a sensible way, not a successful way, to survive and get on. That's always a difficult argument to make in the light of evolution; you have to ask why evolution has optimized us to have a non-optimal strategy for getting on. And it feels like a gap opened up between the beginning and the end of your talk. In the beginning you said the certainty that you're right is not a reliable way to perceive how the world is, and I believe that. But to go from that to saying that it's not a successful strategy to achieve good things, that's a harder gap. Where I see this gap happening is that it feels like passion, more than mistakes, is the real engine of innovation. For instance, when someone has a business idea, oh, I can build a better widget, and she mortgages her entire house and life savings, she's making the kind of bet that you weren't keen on. But it's only, I think, from people making that kind of bet, which as humans we can't make unless we're really certain about it, that we get innovation and advance.

>> Kathryn Schulz: Those are really fascinating points, I see a couple of different issues that you are raising. This point about passion, I think is a wonderful one. You're

absolutely right. Passion is clearly central to innovation. The only place that I would quibble with you a little bit is that I do believe that it is possible to be passionate about something and still entertain the possibility that you're mistaken. In fact, I think the best innovators are able to have a big-picture goal, a thing that they want, right, and nonetheless be very, very sensitive to, you know, counter evidence, essentially. To any information that's saying, wait a second, you know, that didn't work, try something else. And, in fact, in really big, bold, passionate thinkers, people like Darwin and Einstein, you see an incredible attentiveness to their errors. I mean, these are people who literally kept journals of what they got wrong. And it was because they didn't perceive those errors as somehow, you know, challenging their fundamental vision. They perceived them as fueling that fundamental vision.

But I am really glad you raised the point, because I do want to be super clear that in no way am I trying to make a case against conviction. I believe powerfully in conviction. I believe powerfully in acting on our convictions, but I also think that the only sensible way to do that, and by sensible I mean pragmatically, intellectually, you name it, is to bring along with that passion and that conviction some awareness of the rest of the world.

Otherwise, we’re just, you know, in a land of tunnel vision, and that historically does not work out very well. Do you want me to speak really quickly to the evolutionary point as well? Oh sure.

Evolution and rightness, okay. So I see your point or what I take to be your point, which is that if this, if we're so naturally inclined to perceive ourselves as right all the time, surely this must be programmed into us for some kind of reason. I'm going to leave aside my various quibbles with evolutionary psychology, although I do have them, you know, which is just to say that I think it is problematic to reason backwards from any given human tendency to, you know, evolution kind of hardwiring it into us in some way.

You're clearly right in the sense that in many contexts it is useful to be right. You know, if I hear a rustling in the bush, it is to my advantage to be able to infer accurately whether I can eat that thing or that thing is going to eat me. Right? That's very useful.

That said, I think it is simplistic to assume that the conviction of rightness is an evolutionary good. First of all, we know that we do all kinds of other things; we deceive other people; there are times when being deceived is actually evolutionarily advantageous. There are times when being wrong, I mean, in a way your argument that being blinkered to any counter evidence drives us forward, well, that suggests that actually, evolutionarily, we advance because, you know, we don't care about counter evidence. And, you know, in a way I think that we have a tendency to sort of put some things in the "well, evolution gave us this" category and other things not. And I'm standing here in front of you making a counter case. I got here somehow, right? I got my brain somehow. We got the capacity to not merely want to be right but to make a case that it's good to be wrong somehow; you know, presumably that is part of our evolutionary legacy. And I would argue that things like tempering our sense of rightness, it's less in the category of things like a sex drive and more in the category of something like algebra. You know, nobody looks at people who can't do algebra and says, well, obviously we weren't evolutionarily designed to do algebra. No, it's like, oh, you missed ninth grade. Go learn algebra, right? We can learn the kinds of lessons that I'm talking about; we can learn to embrace this wrongness. And I don't think that somehow suggests that we're, like, doing a disservice to our evolutionary path; it means we're intellectually advancing, which I would hope that we would all want to do. Yes?

>>: So let me present a scenario. You're having an argument in a professional or personal context and you're pretty sure that you're right. You've done as much introspection as you can and you're pretty sure you're right; you're entertaining the idea that you're wrong, but you're pretty sure that you're right. Your, um, I don't want to say adversary, your other person, we can stipulate that they are not ignorant, that they are not an idiot and they are not evil; they are just trying to do their job, or whatever. And they have a complete 180° opposite point of view. But you're pretty sure that they're wrong, and you are definitely sure that they are not looking inward to identify that maybe they are wrong. Is there maybe a non-obvious best practice you discovered that you can impart, for how to say, without offense, hey, did you ever think that you're wrong, you idiot [inaudible]?

>> Kathryn Schulz: That's a great question. So right, as a practical matter, what do you do when you're faced with someone else who can't entertain the possibility that they're wrong? Yeah, generally speaking, saying, hey, you idiot, have you considered that you might be wrong, doesn't work very well. Here's something that does work, other than giving them a copy of my book. Something that does work often is saying, you know what, we're stuck. Let's switch sides. I'll argue your viewpoint, you argue mine. This works especially well, I'm suspecting, in arenas like yours, where you're dealing with really bright, brainy people who are very good at making a case. It turns out, I didn't invent this; this is a strategy that's used in mediation contexts all the time. It turns out that when you get someone to turn around and argue the antithesis of their own case, they will almost inevitably start believing it a little bit more. You have to engage with the evidence more substantially yourself while trying to build a compelling narrative about why something happened, and so in a way it's kind of a backdoor to asking someone, have you considered that you're wrong? Instead of being like, yo, maybe you're wrong, which people don't like to hear, it's like, hey, okay, can you argue that this other thing is right? And that tends to be a little bit easier for people to do, for whatever reason. This is something that management teams do with some regularity, actually, to kind of turn the tables and get people to argue each other's viewpoints. You're looking skeptical.

>>: No I kind of…

>> Kathryn Schulz: Which is fine.

>>: Really fine. Is this in your book?

>> Kathryn Schulz: Do I mention that in my book? I think I talk about a sort of similar practice developed by this guy named, of course, the name is escaping me. There's a practice called deliberative democracy; does anyone know this and who came up with it? I'm forgetting the guy's name. It's similar; it tends to be done at a very high diplomatic level, like literally if you're trying to broker a peace accord in Congo, right? Deliberative democracy is a strategy that was developed to try to bring essentially warring factions, often literally warring factions, to the table.

What we know from many, many decades of psychological research is that often the things you would like to have work, don't. So it doesn't work to just give people the counter evidence, right? In fact, in some ways it can actually reinforce their own beliefs. So this guy invented this very structured process for how you can get people to kind of turn the tables and come around. I talk about that briefly in the book. The book, I'll be quite frank with you, is not a how-to compendium. I didn't set out to write a business manual for, like, convincing other people that you're right, or error-proofing your life, or whatever. But I like to think that the main ideas you would need to sort of arm yourself for a situation like that are in there.

>> Kirsten Wiley: One more question.

>>: Is there any hope for policymakers and politicians? I mean they're set up to basically pick at one thing the other person's done wrong and completely discredit them because of it. And we're not getting anywhere with that sort of behavior.

>> Kathryn Schulz: This is -- actually, thank you for asking that question. Is there any hope for policymakers and politicians? Well. I mean, look, I will be totally blunt with you. I think, you know, one of the things that's really fascinating to me is how much our attitudes toward error and fallibility vary from domain to domain and context to context. Sometimes even within one company; I mean, maybe Microsoft Research thinks totally differently about wrongness than some other component of the company, I don't know. But what I can tell you is that of all the kinds of subcultures I can think of, there is no question that politicians have the most dysfunctional relationship to being wrong.

And with good reason, right; in a way they are between a rock and a hard place when it comes to admitting error, and I think there are ways that those of us who elect these politicians are really complicit in that happening. But to the question of whether or not there's any hope, you know, honestly, I'm of two minds about this, which is maybe appropriate. On the one hand, I look at our political rhetoric today, and then I look back as far as I can look in history, and I think this is never going to change. I mean, this kind of entrenched, absolute sense of rightness, refusal to admit error, this, you know, talk about the ignorance, idiocy and evil assumptions, these play out in politics all the time, and you can see them playing out in politics all the way back to Troy, and probably further. So on some level I look at that and I think we're doomed, right; I can never get this message across to politicians. On the other hand, and again, not to leave you on a terribly gloomy note, I do have some hope, and my hope comes from this. It comes from the idea of democracy, because democracy is, when you think about it, a fundamentally error-tolerant system. You know, we don't live in an autocracy. We don't live in a fascist state. We live in a context where, at least in theory if not always in practice, we encourage and embrace the fact that we are supposed to have multiple viewpoints, multiple parties, that we have freedom of speech. That people are free to stand up and disagree with us, and stand up and disagree with the president, and stand up and disagree with one another. We don't put this into practice perfectly, but it is a radical, radical, radical idea. The notion that a new party would come into power and not slaughter the party that disagreed with it, this is a really new idea in history. And in the last hundred years we've seen democracies grow radically. They're spreading around the world, and when I look at that, on the big global picture, I do actually have some hope. Thank you guys so much.

[applause]

>> Kirsten Wiley: Thank you.

[applause]
