
>> Amy Draves: Thank you for coming. My name's Amy Draves, and I'm here to introduce Nassim Taleb, who is joining us as part of the Microsoft Research visiting speaker series. Nassim is here today to discuss his book,

Antifragile --

>> Nassim Taleb: Your microphone.

>> Amy Draves: You can't hear me. I can hear myself really well. I'll lean in. Thank you. Nassim is here today to discuss his book, Antifragile: Things That Gain From Disorder. Antifragile refers to something that is unbreakable, but not because it is impervious to assault. Antifragility allows an entity to withstand all the black swans coming its way, to absorb their forceful volatility, and to emerge stronger for it.

Nassim Taleb is a former trader and is a distinguished professor of risk engineering at New York University's Polytechnic Institute. His work focuses on decision making under uncertainty, as well as technical and philosophical problems with probability and metaprobability.

As you no doubt know, he's also the author of The Black Swan. Please join me in giving him a very warm welcome.

>> Nassim Taleb: Thank you. So let me talk about fragility. You guys are techies, no? So let's be techie. Forget about the usual waste of time talking about candles and this and metaphors and packages and stuff. They're all in the book. Let's talk about fragility.

Let's define fragility, okay? Fortunately, this is not porcelain, so we can do an experiment to try to break it. The fragile is whatever has this payoff: small gains or no gains over time. This is time. And big losses. So harm exceeds gain. Think of a bank portfolio: you make small gains and then you have big losses. Like an option. I was an option trader for 20 years, and we viewed the world through these kinds of packages: one that loses from volatility, this one, and one that gains from volatility. That's the payoff.

Now, this is the exact payoff over time, a time series. So this is time, okay, and on the vertical it's payoff. Small gains, big harm. As you can see, this also applies to medicine. If you go to see a doctor who gives you some random medication: small gain, big potential harm at some point in the future. That's the payoff for short volatility.
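To make the two payoff shapes concrete, here is a minimal sketch, not from the talk, with invented thresholds and payout sizes: a "short volatility" book collects small steady premiums and takes a rare big loss, while a "long gamma" book bleeds small premiums and collects on the rare big move.

```python
# Illustrative only: invented payoff sizes and move threshold.
import random

random.seed(42)

def short_vol_payoff(move):
    # Collect a small premium each period; pay out heavily on a big move.
    return 1.0 if abs(move) < 3.0 else -100.0

def long_gamma_payoff(move):
    # Pay a small premium each period; collect heavily on a big move.
    return -1.0 if abs(move) < 3.0 else 100.0

moves = [random.gauss(0, 1.5) for _ in range(1000)]
print("short vol total: ", sum(short_vol_payoff(m) for m in moves))
print("long gamma total:", sum(long_gamma_payoff(m) for m in moves))
```

Plotted over time, the short-volatility series looks like his drawing: a smooth ramp of small gains punctuated by occasional large drops.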

If you look at a coffee cup on the table, imagine it's -- you know, it's something that breaks, a breakable material. It wants peace and predictability.

So you can map fragility as something that doesn't like volatility. Don't hesitate to ask me questions during the talk.

So this is an interesting techie way to view things, because this is in time space. From this we can get an even better definition of fragility, and a way to detect fragility, as follows.

In the book, I ask people not to read one chapter, chapter 18 or 19. I ask them not to read it, meaning they're going to read it -- you tell someone not to read it, you know what happens. In it, this is the size of the stone, and this is harm. Harm is negative, okay; gain is positive, okay. Like here, gains and losses.

There's a story in the Talmud of a king whose son committed some violation. Raise your hand if you can't hear me, all right. Or if you want to interrupt to say something aggressive, it's okay as well, okay?

So the son committed a violation. The king has to punish the son. Big dilemma. I don't know if you've ever been a king, or had a son who committed a violation whom you have to punish with a big stone, but you know it's a very difficult situation. How can he solve it? The king had an advisor who was very shrewd. He told him: okay, you break the big stone into small little pebbles, okay, and you pelt him. And what happens? The son is not going to be harmed.

So it's very simple. This is the stone size and this is harm. We have acceleration, you see. A ten-pound stone is more than ten times more harmful, all right, than a one-pound stone. With this, you've got it: the definition of fragility gives you that. So this is a good book, all right, 550 pages of, you know -- we're going to get into this. I'm giving you the gist, going straight to the core of the problem. This is what it is.

Accelerated harm. Take your body: anything that has survived has to have nonlinear harm. If harm were linear, it would be like this. You see, this is linear; this is concave. For extreme events, you're harmed a lot more when you're concave than when you're linear.
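A small illustration of the stone-versus-pebbles point, my own and not from the talk, using an invented square-law harm function: with accelerating harm, one ten-pound stone does far more damage than ten one-pound pebbles.

```python
def harm(stone_weight_lb):
    # Hypothetical convex (accelerating) dose-response: harm grows as weight squared.
    return stone_weight_lb ** 2

print(harm(10))      # one 10-lb stone  -> 100 units of harm
print(10 * harm(1))  # ten 1-lb pebbles -> only 10 units of harm
```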


Let me explain. If you fall ten meters, definitely, even at Microsoft, people die. I'm not sure people die at Microsoft. I don't know if that happens. But I'm sure, I mean, it's possible that people die, you know. Outside of Microsoft, people die from a ten-meter fall. They die.

But now, think about it: if you fall ten times from one meter, what will happen to you? Nothing. People just laugh, all right. Now a thousand times from one millimeter: nothing. If harm were linear, the cumulative harm from small steps would kill you just walking to the office, because each one-hundredth-of-a-millimeter impact would count, you see. The impacts would kill you, you see.

So you realize now we have a definition of fragility. I still haven't gotten to antifragility. But once you have the definition of fragility, you understand how to map it mathematically, how to define it, how to measure it and how to use it.

Let's think of size. Hundred-million-pound projects in the U.K. have a lot more cost overruns, 30 percent more, than five-million-pound projects.

So we have these diseconomies of scale from random shocks. This explains why elephants break a leg very easily, and why you don't have as many elephants as mice, even if you weight them by mass.

The real explanation is probabilistic: the distribution is such that small harms are vastly more frequent. Let's take earthquakes, all right. You have to be concave to earthquakes. Incidentally, I went to the bathroom, and there's something that says what to do in case of an earthquake. Something you never see in New York. So I thought of it coming to this lecture.

But you've probably had eight earthquakes since I've been speaking, on a very small scale, you see. There are probably three million earthquakes, small, micro, every day in the world. All right. Every day. So if we were linear to earthquakes, we'd be gone. You see the idea. So whatever has survived has to be nonlinear.

So this is sort of like the idea. I have 450 pages of math on the web, a technical document that comes with this, to back these things up with theorems and stuff like that, all right. But that's the main idea of fragility.

[indiscernible] now let's talk about the antifragile, all right? The antifragile is convex to a stressor, the mirror of the fragile shape with respect to the stressor. Not concave, convex.

You like variability if you're convex to a stressor. [indiscernible] variability: what is the perfect temperature here, 70 degrees? You do better if you spend your time in a linear combination of 80 degrees and 60 degrees. Or even better, 50 and 90. Even better, 40 and 100. You see? For the same average. If you like variability, it means that you love randomness. You're going to have the opposite payoff, and this is convex. This is concave.

I'm being techie because you guys are techies. Tell me if it's too complicated; then we can go back to what I'm going to say tonight, you know, talking to retired readers, metaphors about packages that break and stuff like that. But I'd rather -- you know, I'm much more relaxed talking about things this way. That's how I do things in life, you see, and then later on I translate them, package them into metaphors.

So with this, we can classify the world into three categories: fragile, robust, antifragile. The robust is not the opposite of the fragile, visibly, because the robust has a very simple definition. For the fragile, the upper bound is unharmed and the lower bound is harmed. For the robust, the upper bound and lower bound are unharmed. For the antifragile, only the lower bound is unharmed. You're open to the up side.

So doing this for a system, for any package, now we have an exact definition of fragile and antifragile, and we can map things biologically: the antifragile loves variation. It also maps from this. We have, as I showed you, two maps so far: one in time space and one in functional space. Yes.

>>: Is differentiable the same as convex?

>> Nassim Taleb: No. Convexity is the key. If you're convex with respect to a source of randomness or a source of variation, you're antifragile so long as you're convex to it.

>>: If you shoot that down so that it's 60 degrees, let's say that there was actually some --


>> Nassim Taleb: Then you would be like this. You could be like this, all right. It would be convex and then concave, you see. Convex, then concave. You can have all kinds of shapes. Or it could be flat, or it could even be concave, you see.

In fact, for temperature, we all have this shape. This is temperature. You're like this. Let me explain. Take 140 degrees, 70 degrees, zero degrees, all right. In a chapter, I talk about grandmothers. If your grandmother spends one day at zero degrees and the other day at 140 degrees, for an average of 70 degrees, you're not going to have a grandmother, you agree?

But within some small range, you're better off having some variation. You see.

So everything is convex with respect to a source of randomness only up to a level, up to some degree.

For example, let's talk about the body. Weightlifting: you do a lot better lifting 100 pounds once than lifting one ten-thousandth of a pound ten thousand times. You get the idea, all right? You do better. But visibly, if I lift -- or someone puts on my shoulders -- 5,000 pounds, you know what's going to happen to me. I'll be transformed into corned beef, you know.

So you see the idea. Up to some degree of randomness.

So now that we have this idea, now that we have the graphs, I'm relaxed. It's very easy; I'm not making an effort anymore, unlike what I went through during my book tour. I can talk about my book and refer to these, all right?

Now, there's actually a third dimension, which I call the probabilistic distribution of things: why everything antifragile loves randomness. So now let me start the regular presentation of the book, okay?

So this book is about this quality that people forget in the discourse, called antifragility, because they have the wrong definition of fragility. In fact, the ancients understood it very well. The antifragile is whatever gains from what I call the disorder brothers: randomness, variability, stressors, chaos, volatility and similar structures.

I learned about that when I was an option trader, and we had something called long gamma. Long gamma means that if there's big market variation or turmoil, you make some money. Up to a point, of course. You may make money up to a ten percent crash but not up to 30 percent. Or it could be open-ended long gamma: you make money all the way, all right?

So we have these structures, and I was trained to think in these terms and map them; that was my profession. But I couldn't communicate with people, you see. I didn't realize my definition of fragility in finance, which was very simply: if you're harmed more by one five percent move than by five times a one percent move, you are fragile, okay. I didn't realize the definition of fragility was so universal.

I didn't realize it until I started thinking that, effectively, the ancients had a word for something that likes volatility. The medical profession has a word for it: hormesis. They have two things. They have something for what people call resilience, called Mithridatization: you take a little bit of poison every day and you're going to be immune to that poison. You're not going to be better off, but immune against that source of randomness.

And actually, in here, there's the mother of Nero, who kept taking, you know, small doses of poison. The problem is, as you well know, you can be antifragile or you can gain immunity by taking drugs, but you can't do it with respect to other sources of harm; namely, swords. Her son wanted to kill her. In the end, he sent someone to kill her with a sword, because she couldn't be poisoned.

So there's Mithridatization, and the other category is hormesis. Hormesis is when your overall system gets better from a little bit of poison or a little bit of shock.

So I thought about it and I said, uh-huh. Now there's something that I can map mathematically, and there's a mechanism of overcompensation at work, like a mechanism of redundancy. Just think of the way, even at Microsoft, people come with two kidneys, no? You have two kidneys. Why do you have two kidneys? You have redundancy against random events.

Because when you have redundancy, you don't have to understand the future as well. If you have two kidneys, you don't have to put as much emphasis on understanding the future as someone who has, you know, only one. And there are seven or eight ways you can lose a kidney: trauma, lead poisoning, your cousin hitting you, a wild animal, two other things. You don't have to understand the environment as much.

Well, it turns out that there's also redundancy in the way we build strength.

If I lift 100 pounds, my body won't adjust to lifting 100 pounds. It will project an extra 20 pounds, you know, and it will code, code like you guys do, coding. It will code for 20 more pounds of muscle over time, or 20 more pounds of lifting. You get the idea. And that form of redundancy is overshooting into the future. You see the idea.

So I realized that in my business of risk management, effectively, there's nothing better than nature. It does things in a certain way that is extremely -- after three billion years, we can give it some credit. And as statisticians, there's nothing more respectable than an n that large, all right.

So this is how I started developing the idea. Then I realized, hey, you know what? We're very bad at predicting socioeconomic variables. We can put a man on the moon, all right, but even Microsoft can't forecast at what price the market is going to close tomorrow. These are complex variables, and we're getting further away from predicting them.

So you have to focus, instead of on predicting, on being robust, or perhaps antifragile: gaining from randomness, gaining from error, and trying to identify what gains from randomness and from error. So this is the book.

Now, this maps to everything. But I wrote this book in a way to prevent any reviewer from being able to skim it. You see? And so far, no reviewer got these points. See what I'm talking about? This is the core; I started the book with this. No reviewer got them, except for one, all right. It's a technical book, visibly dressed in easy language. But if you're a mathematician or a scientist, you get it right away.

But people who are professional reviewers are lost. Completely lost on it.

And I made sure of that -- let me give you the chapter titles, if you want to be amused, to realize why the reviewers are never going to get it. After the prologue, where the idea is clear: History Written by the Losers. Never Marry the Rock Star. Tell Them I Love Some Randomness. The Souk and the Office Building. Time and Fragility. The Philosopher's Stone and Its Inverse. So they're not going to be able to pick it up, all right?

So, you know, the other books are written in exactly the same style, by the way, the other two books. You bypass the middleman, the reviewer, and go straight to the consumers, okay? So this is the book.

It's composed of seven different books, seven different topics that apply this.

The first topic, the most interesting one, is evolution. Evolution is long randomness. Think about it: if you have ten offspring, you're never going to have improvement of the species if you don't have some randomness in the environment that differentiates between them.

And then also, genetic drift comes from randomness along the line. So evolution likes randomness. It's a problem. Why is it a problem? Well, because since the enlightenment we have some notion of social justice. Can you reconcile them? Of course we can reconcile them. And that's sort of the challenge I take on in my book one, with entrepreneurs. And here, you're in the tech business.

Entrepreneurs: really, the system improves thanks to the failure of entrepreneurs.

And then I learned something in biology from a gentleman -- remember hormesis, the system gets stronger? He discovered one thing: that no, the system doesn't get stronger. Under shock, the weak parts get cleaned up, you know. The restaurant business, as a business, is composed of restaurants; there's a layering. The restaurants are fragile in order for the business to work well, to be robust. You agree?

Bankruptcies help clean up the system. Therefore, the system becomes robust because its parts are fragile. It's the same thing in your human body. Cells need to be fragile for you to experience hormesis, improvement, because when you starve yourself, weak cells go first. And then within cells, proteins. He modelled it and published a paper on genes.

If you go to my website, I have all these papers linked to this concept. So you realize evolution loves volatility, okay? And it maps very much like this. So in this representation -- I had to do some work, some mathematical work, on that.

So this is book one, but, of course, it's written at a poetic level: what kills me makes others stronger. And then, why you have the difference. You know that since the enlightenment, we talk about I, the individual; we never, never use the collective. But the tribe, the collective: you die for the tribe. This is why you have suicide bombers; they don't consider themselves as a unit.

But now we have modern society, and you're not going to commit hara-kiri for the sake of the computer industry getting better by failing.

So we need a mechanism to make failure acceptable. Here you seem to give some social acceptability to failure, but not in other businesses, certainly not in Japan and not in Europe. There, when you fail, you're a loser, you see. So it prevents people from doing these things. So we have to find the mechanism to let people fail in order to improve the system. That's sort of book one.

Book two -- I'm going to go through the books, and then during the Q&A I can answer whatever, you know, divert to whatever I haven't covered, okay?

In book two, I talk about modernity. The problem is, look how smooth these surfaces are. Look how smooth this wall is, okay. Nothing is fractal in here. Look outside. You see the green outside? See how fractal it is? A completely different world.

So psychologically, you like variation. We like variation. But modernity doesn't know it; it wants to make us comfortable. You see? Modernity wants your children on Prozac to iron out their mood swings. The problem is that they don't realize there's a difference between what I call the cat and the washing machine.

What's the difference? The mechanical doesn't need stressors to operate. The organic communicates with the environment through stressors. You see, you have to get stressors; this is the information system. So the problem is, if you stabilize everything to make everything comfortable, like we have here, what happens to you? You get weaker. Your bones get weaker if you spend time in bed. Your system gets weaker. And we sort of understand it, because I'm sure you have a gym here, no? So, a contradiction: you have a gym, I'm sure, and you have elevators. You have a gym, yet you have smooth surfaces. And then you go to the nurse and they give you antibiotics, no? So you understand it very quickly.

So there is something about modernity, trying to make people comfortable, that weakens them. I call that interventionism. And the person who over-intervenes in a system is called a fragilista. And typically it fragilizes the system two-fold.

First, by making it too comfortable, it becomes weak. And two, by not being there to intervene when needed. Mr. Greenspan, you've heard of him. It's like in here with the temperature: you make sure you have 68.7 degrees year round. You get the idea? He wanted to do the same for the economy. That fragilizes the economy, because without stressors, you have, you know, flammable material accumulating, risks accumulating beneath the surface, and then it blows up.

So what you have is something very stable that then blows up -- something volatile.

Volatile things, you know -- markets need some volatility, because that's the way information is conveyed. You can't stifle information. Take, for example, democracy. It needs volatility. In The Black Swan, I compare Syria -- you've heard of Syria; you've certainly heard about it now -- which had zero political volatility, with Italy, which has a lot of it.

Which country do you think is safer? I got a lot of letters saying Syria is safer than Italy. Okay. Syria had zero, you know, political volatility. You get the idea. That was in The Black Swan. And now I'm taking on Saudi Arabia. I mentioned Syria and Saudi Arabia in The Black Swan and, of course, Egypt. Look what happened. You over-stabilize.

This is interventionism; this is modernity. So I go through that. And the problem is you're never there for the big things -- like in medicine, you're never there for the big things.

Now, book three, I talk about prediction. Forget about it, it's not interesting. And then I introduce Fat Tony, a character who smells fragility and makes a buck off those who are lulled by a false sense of security: they make pennies, and then, boom, they blow up once in a while, okay. So this is Fat Tony. I introduce Fat Tony.

And then I introduce Seneca by defining -- let me redraw the convex payoff for you to see the link. If I have a payoff like this, I'm here, all right? Sorry. It's convex. Let me make it very convex. This is payoff. This is variation. We're here. The market is here. Okay?

If the market goes up one dollar, I make two million. If the market goes down one dollar, the same amount, I lose a hundred thousand. I'm convex: more up side than down side.

Well, the person who figured it out is Seneca, the Roman philosopher. And he was really Roman: they hated intellect; they liked practical things. And he was the wealthiest man in the world at the time. And he figured out that you're harmed, you see, when you're very wealthy, because you have more down side than up side. You're fragile.

And every morning, he would wake up and try to convince himself that he was very poor. Once in a while he would, like, fake a shipwreck, and then: oh, I rediscovered all this wealth. He wanted to make sure that if he lost all his money, which did happen at one point, he wouldn't be harmed by it. So that's sort of the idea.

And so I have a discussion of Seneca, who to me is probably the greatest thinker. Academics never got it, because academics think that stoics are people who have no emotion. In fact, he just didn't want to have negative emotion. More up side than down side.

And I define antifragile as more up side than down side.

Something you may like: my central chapter -- I just put on the web today a distillation of it. My central book is on discovery. Tinkering, as compared to rational top-down. Tinkering is like optionality; it's bottom-up. You have little to lose, a lot to gain. As a process. We show the evidence that a lot of things we think came from science, in fact, came from heuristics of practice.

And I do a random simulation. Today's Edge essay, all right, has that simulation. It upset a lot of people. You show two brothers -- one has knowledge, the other is just tinkering -- and how much further you can go by tinkering, having more up side than down side, aggressive tinkering, as compared to someone who knows where he's going, logically.

And effectively, you look at the history of technology, and at the definition of technology as given by Harvard or all these places that it's good to drop out of, as one of your leaders has: they think that technology comes from science. Nonsense. Science comes largely from technology. You see?


Even so, in some cases you find predecessors, like Euclidean geometry. They were building cathedrals and didn't know how to divide. You know, they probably thought that Euclid was some Greek poet or something. They didn't understand it, right?

But later on, we find matches, you see. The industrial revolution was driven by tinkerers. The jet engine: tinkerers. Engineers knew how to make it work; later on, they found the theory, and you fit the theory back onto the past. I call it lecturing birds on how to fly. It upsets academia at all levels when I show formal knowledge versus wild, optional knowledge that gains from uncertainty.

If you're tinkering, you want maximum exposure to uncertainty. Whereas the university is a prison, you see. You're a prisoner of the curriculum and all that nonsense. And so this is a chapter that really, really, really gets university people angry -- like emotionally angry -- particularly state university system people, and in the U.K. even more.

>>: So you're referring to your Edge essay that you put out this morning.

>> Nassim Taleb: Sorry?

>>: You put out an essay in Edge?

>> Nassim Taleb: Yes.

>>: That you were referring to. I think it's interesting to this group, because we're largely research people in this building. By tinkering, you're actually creating a situation where you're able to take advantage of basically positive black swans.

>> Nassim Taleb: Exactly. What happens is there's a theorem that people don't realize, and it's as follows. Are any of you into probability? Okay. Take any probability distribution that is skewed right, okay. Take a lognormal. What's the mean of a lognormal? It has the variance in it.

So you increase the variance; what happens to the mean? It increases. If you have little to lose and a lot to gain -- the mean of a distribution that's bounded on the left and open on the right -- the mean depends on the variance. In other words, you inject uncertainty into the system and your expected return goes up.
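A sketch of the result he is gesturing at, under the standard lognormal assumption: the mean is exp(mu + sigma^2/2), so holding mu fixed and injecting variance raises the expected value. The parameters here are arbitrary; the simulation just confirms the closed form.

```python
import math
import random

random.seed(0)
mu, n = 0.0, 200_000
for sigma in (0.1, 0.5, 1.0, 2.0):
    analytic = math.exp(mu + sigma**2 / 2)  # mean of a lognormal
    simulated = sum(math.exp(random.gauss(mu, sigma)) for _ in range(n)) / n
    print(f"sigma={sigma}: analytic mean={analytic:.2f}, simulated={simulated:.2f}")
# The mean climbs with sigma: more uncertainty, higher expected value.
```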


The exact opposite of a plane ride. Take a plane ride, you know, going to New York tomorrow. Six hours, all right. I'm lucky if I make it in five and a half hours, all right. If you inject uncertainty into that system, it's not going to make the plane ride one hour, but it can take two weeks to get there, you see. September 11 or something. You get the idea. Any uncertainty.

So fragile systems are defined by this: their mean degrades when you inject uncertainty. And in the antifragile, the mean increases when you inject uncertainty.

And that's pretty much the gist of that article. The thing is, when I talk to probability people, they get it right away: any distribution that's skewed right, like the lognormal, has a mean with a half sigma squared in it, plus something, right?

But when you talk to normal people, it's hard for them to understand why uncertainty is good, so you have to use a lot of patience for that. So that's the Edge essay. And in my chapter here, which is vastly more aggressive than the Edge essay, I talk about frauds, all right. A lot of things are effectively frauds: anything that compresses uncertainty by channelling it is not very good. And you see, obviously, a lot of situations, like the [indiscernible] formula, stuff like that, that were developed in a sophisticated form by operators and later claimed by professors as coming from them, because they can't imagine that knowledge can come any other way.

And in it, of course, we have the two [indiscernible] and he who discovered that problem, Dmitri. You've heard of creative destruction. People assume it's from an economist. How can anything intelligent come out of a [indiscernible] economist? Impossible. It's got to be somebody else. That's the rule: if it's attributed to an economist, it has to come from a philosopher, all right? A non-academic, of course -- semi-academic at the time -- created creative destruction.

And he said that Socrates destroyed the balance between the Apollonian, which was reasoning, and the Dionysian, which is the deep dark force in us, because he made it disrespectable to do something you don't understand. He made the unintelligible seem unintelligent. And who is going to debate Socrates? My Fat Tony friend.

As you can expect -- think of a guy from Brooklyn, all right, who makes his killing squeezing people, debating Socrates. You know who's going to win, all right? Fat Tony wins, right? But he has to abide by the rules of philosophy. And he out-Socrateses Socrates by cross-examining him.

So this is sort of my book on innovation, the one I like the most. I have a distillation in the article, but I go much deeper in here, you know, against the soccer mom [indiscernible], the education system.

You know, there's something about the educational system people don't realize. Do you think that education raises the wealth of a country? It raises wealth for a family. It stabilizes income, actually, for a family. It doesn't even raise wealth. Typically, if you take countries that are rich and have a degree of education, the education came after, you see. You get rich, and then suddenly it looks phenomenal, as if the wealth were caused by education.

Only people who think, like you guys, understand it. But if you go to the east coast, Harvard Square, you're put to death right there for saying something like that.

So book five, how to detect antifragility of systems. It's measurable.

>>: Can I ask one more?

>> Nassim Taleb: Yes, go ahead.

>>: So it seems to me your Edge essay and what you just talked about --

>> Nassim Taleb: You read the Edge essay?

>>: Yes. We exchanged some thoughts on Twitter this morning.

>> Nassim Taleb: Really. What's your name?

>>: Mr. Griffin.

>> Nassim Taleb: Okay. Very nice guy. Thanks.

>>: It seems to me that your essay explains the venture capital industry and its basically Pareto distribution --

>> Nassim Taleb: I'll get to that during the Q&A. Let me finish this. Okay.


He wants me to -- and I'll answer it in the Q&A, the first question. The essay explains more than that. I'm trying to explain more than that. I'm trying to explain how life works. Cooking is heuristic.

So we have two approaches. Cooking is: I make hummus. How do you make perfect hummus? You tinker. You add an ingredient, because you've got nothing to lose by adding an ingredient, and you give it to Fat Tony, and he says yay or nay, right? If he says get lost, all right, you dump it, all right. So you have gains.

Can you make hummus out of the chemical, you know, composition? Yes or no? You can't. All right? Okay. The thing is, so many things resemble hummus. If you take what in the world resembles hummus versus what resembles the atomic bomb project, you'd notice the first is huge and the second is so small.

But then we focus on the other ones, because science writers make us believe that science plays a larger role. In fact, what we have is vastly more sophisticated. Technology is more sophisticated than science, you see.

We get to venture capital in a minute. This is my book four.

Book five: I explain here how you can measure fragility. You know how we can measure fragility? Measure how concave something is. That's it. Take a portfolio, or take a company. You have a company? All right. Take a company, increase sales ten percent, decrease sales ten percent. If the company makes an extra hundred million on the way up and loses five hundred million on the way down, then you're fragile.

And that ratio is the exact mapping of fragility, okay. And you can compare companies that way. Now, someone may tell me: you have mistakes in measuring. I say, who cares. I'm second order. If I'm measuring a child, and the child is growing, but my ruler is off, I will still be able to tell how fast he's growing. You see? Second order effect, you see?

I have, you know -- this is the [indiscernible] that defines fragility. Technical, but it works. I can't measure the future, but I can measure fragility for the future -- comparative fragilities. Okay.
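A minimal sketch of the detection heuristic as described, with a hypothetical profit model standing in for a real company: bump the input both ways by the same amount and compare the loss on the way down to the gain on the way up.

```python
def fragility_ratio(profit_fn, sales, bump=0.10):
    # Asymmetry of outcomes for equal-sized up and down shocks to sales.
    gain = profit_fn(sales * (1 + bump)) - profit_fn(sales)
    loss = profit_fn(sales) - profit_fn(sales * (1 - bump))
    return loss / gain  # > 1: losses outrun gains, i.e. concave, fragile

def profit(sales):
    # Invented concave profit curve: heavy fixed costs punish a sales drop.
    fixed_costs = 900.0
    return 1000.0 * sales ** 0.5 - fixed_costs

print(fragility_ratio(profit, sales=1.0))  # > 1, so this company is fragile
```

His hundred-million-gain versus five-hundred-million-loss company would score a ratio of 5; the exact number matters less than which side of 1 it falls on.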

Now, book six. I call them books because they're independent; they can each be read on its own. And, of course, I go through layers of depth. Every chapter has a topic, and then you go through layers and layers and layers. That's the style.

So book six is something called via negativa. And in it, I talk about medicine, and I model medicine using convexities. Shocking, because very simply, just as I said that a human body has to be concave to sources of harm, all right, it has to be treated -- let me give you the intuition. It's complicated to do it now, or maybe I don't have a clear idea of how to express it in words.

Say you're one sigma away from the norm in, say, hypertension. Nature has dealt with this problem, and no medicine will easily outperform nature, you see. So someone who is slightly hypertensive has only a one in 53 chance of benefitting from a drug. You see? You can probably work out, do something.

But when you're very hypertensive, okay, you have an 80 percent chance of benefitting from a drug. Which means that whatever you do in pharma, okay, you're in a bad competition with nature for the cases that are around the norm.

But the very, very, very rare, nature didn't encounter. So let's focus on the very rare. Then the following hit me. Aha, what hit me: the problem is that one sigma, all right, is five thousand times more frequent than four sigmas. Five thousand -- did I say five thousand? Five thousand times more frequent.

If you're pharma, whom do you try to cure -- the four sigmas? They're going to die anyway, all right. You see? Or the one sigmas, the five thousand times more numerous? The one sigmas. That's where they find their clients. You reclassify people who are one sigma as pre-something: pre-high-cholesterol, pre-hypertensive, pre-diabetic, all this, all right?
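Checking the "five thousand times" figure for a Gaussian, with just the standard library: the tail beyond one sigma against the tail beyond four sigma.

```python
import math

def p_exceed(k):
    # One-sided tail probability P(Z > k) for a standard normal.
    return 0.5 * math.erfc(k / math.sqrt(2))

p1, p4 = p_exceed(1), p_exceed(4)
print(f"P(Z>1) = {p1:.4f}, P(Z>4) = {p4:.2e}, ratio = {p1 / p4:,.0f}")
# The ratio comes out around 5,000, as he says.
```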

So the chapter: same context, same convexity, same everything. And this is the same graph. Now I'm going to show you this graph, and it's exactly the opposite of what we said about venture capital. This is a bank: they make small money and then they lose big, you agree?

It's the same thing when you take -- there's no drug we have taken in history that did not give us this at some point. You take steroids, you see an unconditional gain at first; look what happens later. When you're mildly ill, any treatment will have this shape. When you're very ill, it's the opposite shape, because you have a lot to gain.

If you have cancer, anything can be a gain, all right. So as a result, we should overtreat the very ill, and we don't; it's not seen as a good place for resources.

Pharma makes money on the moderately ill. That's where our money goes, smoothing out lives.

And in it, I talk about via negativa. Very interesting. If you like variation, now replace temperature with calories: 2,400 calories every day, as opposed to, you know, 4,000 calories one day and 800 calories another day, all right?

This is how we're made: the randomness of nature never delivers calories steadily. Actually, it's more complicated than that. I discovered one thing -- I started reading papers in medicine about seven years ago.

Everybody in my family is diabetic. Everybody but me.

So I wondered, you know, how can I pull out of it? Every one of my ancestors died younger than I am now, except my grandfather. So you understand how, sort of, you worry about it.

Then I realized the following. When people talk about diet and stuff like that, they don't realize that the second order effect matters a lot more than the first order effect in some cases. You see? It's not what you eat; it's the second order, the distribution. Like temperature for the grandmother, okay. The average of 70 degrees isn't as relevant as the variance around that average. If she spent half the time at zero degrees and half the time at 140 degrees, the variation is vastly more important than the average, you see.

Second order swamps first order. Can you hear me well? All right. When you think of diabetes, you realize that a lot of it comes not just from overfeeding -- but that was not studied. There are very few studies on something here called Jensen's inequality. Are you guys familiar with Jensen's inequality? Okay.

So take a function, all right. The function f is the health of the grandmother, as a function of temperature. Compare f of zero degrees plus 140 degrees, divided by two, with f at zero degrees plus f at 140 degrees, divided by two. There's a difference between the two of them, no? The expectation of the function is different from the function of the expectation. It is a minor point, but it captures the second order effect. Pretty much the theme of my book is Jensen's inequality, all right? If you like randomness, there's a difference between the left and right side in one direction. If you hate randomness, the difference is big but in the opposite direction.
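The grandmother example as a two-line Jensen's inequality check. The health function here is invented (concave, peaked at 70 degrees Fahrenheit); the only point is that the function of the average differs from the average of the function.

```python
def health(temp_f):
    # Hypothetical concave response: best at 70F, quadratic penalty away from it.
    return 100.0 - (temp_f - 70.0) ** 2 / 10.0

f_of_avg = health((0 + 140) / 2)          # live at the average temperature
avg_of_f = (health(0) + health(140)) / 2  # live at the two extremes
print(f_of_avg, avg_of_f)  # 100.0 vs -390.0: for concave f, f(E[x]) > E[f(x)]
```

Flip the curvature to a convex payoff and the inequality flips too, which is exactly the antifragile case.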

So it looks like we are made for randomness in feeding. Who understood it? Religions. Religions understood it. When people talk about the Cretan diet, okay, they say let's eat what the Cretans eat. What do the Cretans eat? Very simple. They have 200 days of fasting -- I'm Greek Orthodox, all right? -- 100 days of vegan fasting, and then days where they only eat fat, fatty lamb, you see. That's the important thing.

The other thing is they don't eat three meals a day -- you know, three meals a day, that's the cereal industry. We are antifragile, you know. We can starve. Think about it. We are a combination: we're omnivores. At times we're like a cow, opportunistically, and opportunistically we're meat eaters, do you agree?

Now, how often does a cow eat? All the time. So there's no randomness when you're eating grass and whatever, all the boring stuff, you agree? But how often does a lion eat meat? Rarely. Does he eat breakfast to go hunt? No. He hunts to get breakfast, so his eyesight improves when he's hungry. So that's Jensen's inequality.

So we started looking, with a bunch of friends -- and we meet in Brussels now, of all places, where they have french fries, by the way -- to try to find where there's something in a metric we call regular divergence, where people in medicine didn't get that point. The only place in medicine that uses that notion of Jensen's inequality is pulmonary ventilators.

So I have a chapter explaining, you know, this. And to be quick before my time is up: there's another thing people don't understand. Pharma is never going to make money by stressing your body, by removing food from you, by making you eat episodically -- you know, putting you six weeks at 600 calories. You lose the diabetes.

There are institutes in Siberia that have been doing this for a hundred years. But pharma isn't going to make money by removing. This is what I call via negativa. In a complex system, if you remove something unnatural from it, okay, you have no long-term side effects. But if you inject something new, you don't know the chain of unintended consequences. This is what I call via negativa. So this book is philosophical, about decision making by the negative.

Stop someone from smoking, and the savings in lives are better than any drug ever developed. But nobody is going to make money by stopping you from smoking. So the emphasis is on adding, you see.

Same with diabetes drugs. Stop people from eating breakfast and dinner -- one meal a day, like the Romans or the Cretans -- and make them follow some fasting protocols, like the fasting religions try to establish. So this is my book.

Now, the last book is on ethics. It so happens that ethics can be mapped exactly the same way: a banker makes bonuses, the taxpayer pays for it. Some people have the up side, others have the down side. You agree? Well, he is short an option that someone else is harmed by, and he's collecting the premium. So you can look at ethics as: am I harming others?

And in my writing, I adopt the following. There's a person I collaborate with, Mike Blank in Berlin, a professor, and he put it very simply: if you ask a doctor what you should be doing, he gives you a different answer than if you ask him what he would be doing in your place, okay?

So it's very simple. I never, never, never tell you what you should be doing. I tell you what I do. I will never make a prediction unless I have something at risk. It's called skin in the game. You see, if there's harm, I want to be harmed first by my opinion, okay?

So start looking at the transfer of fragility, you see. Why does the economic establishment continue to develop these bogus models? Because they're not harmed by the error. They make their salary; someone else eats the error. You see?

So here we have three categories of people. People calibrated ethically: skin in the game. People who are not calibrated ethically: they have the up side, other people have the down side. Like a bureaucrat who harms you by making a policy mistake. You pay the price, he doesn't. Okay? This is why big, centralized states don't have the corrective mechanism of a municipality, okay.

And then, finally, there's a category of heroes, who have soul in the game. Firefighters. They have all the down side and give you the up side, you see. You get the up side, they take all the down side. And they don't get a bonus. And at no time in history have we had more people in power who don't come from that third category.

Remember, even one generation ago of presidents: George Bush, the father, was a war hero, all right? George Washington. Hannibal was first in battle -- first, wanted to be first. Churchill -- okay, he, of course, crossed the Atlantic chased by the Germans, and he couldn't care less. But you see the idea: the idea of skin in the game as a corrective mechanism.

And it goes back to [indiscernible] law: risk in a system increases when there's no skin in the game -- there's no inspector who will ever know more than a person who has skin in the game. So Fat Tony has a rule: never get on a plane unless the pilot is on board, and make sure there's a co-pilot. That sort of thing.

And also, never ask anyone for his projection. Ask him what he has in his portfolio. I made a lot of enemies with this, and I will continue to make enemies, but also friends. The idea is that ethical commitment has to go all the way through, all right. So that was the idea. So I went on CNBC, on a show, and they said you have to give us a prediction. I said no, I'll give you my portfolio -- without the scale, you know, just the percentages.

And they said okay, no. It's not sexy. They want people who sell projections.

So this is sort of the book, and now we can start the Q&A. And we start with this, the entrepreneurship question, no? So thank you for listening to me.

So I guess the first question was precisely about the implications for your business, okay: along these lines, having the opposite payoff, failing fast.

If you have optionality, you'd rather have five options -- the biggest mistake is long-term planning, because it locks you up like a highway without exits.

And mathematically, five resetting options are worth a lot more than one long five-year option. You see? Because you opportunistically have time to change.
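A rough Monte Carlo sketch of that claim under simple assumptions I am supplying (zero rates, geometric Brownian motion, an invented 20% volatility): five at-the-money one-year calls, each restruck at the prevailing price, against a single five-year at-the-money call.

```python
import math
import random

random.seed(1)
SIGMA, N_PATHS = 0.20, 100_000

def one_path():
    reset_total, s = 0.0, 1.0
    for _ in range(5):
        # One year of zero-drift geometric Brownian motion.
        s_next = s * math.exp(-0.5 * SIGMA**2 + SIGMA * random.gauss(0, 1))
        reset_total += max(s_next - s, 0.0)  # one-year call struck at current s
        s = s_next
    five_year = max(s - 1.0, 0.0)  # five-year call struck at the starting price
    return reset_total, five_year

results = [one_path() for _ in range(N_PATHS)]
print("five resetting 1y options:", sum(r[0] for r in results) / N_PATHS)
print("one 5y option:            ", sum(r[1] for r in results) / N_PATHS)
# The resetting sequence comes out worth roughly twice the single option.
```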

You can accept it or not. So this is one of the conclusions that Ken Griffin, no, has identified as, I'm sure, the one he likes the most.

And there are other ones -- you know the other ones. I'm not contributing anything particularly intelligent to that debate, other than framing it in mathematics, in a coherent mathematical framework for all of this, you see. And if you frame everything in a coherent mathematical framework, then you can make immediate recommendations: if you have optionality, then you gain from randomness, then you want maximum exposure, then a five-year option is worth less than five one-year options reset every year. You see? And, counterintuitively, don't have long-term plans, and stuff like that. So did that answer your question?

>>: Yes.

>> Nassim Taleb: All right. A final thing, you know, on entrepreneurship: failing fast. You guys understand the value of failing fast, you see, because you understand that then you have less harm, okay. This is the only place where people know how to fail fast -- I mean, this part of the world. So, another question? Yes.

>>: On your second chapter regarding evolution, you mention that ancient Romans used to [indiscernible] --

>> Nassim Taleb: Ancient Romans.

>>: The weaker kids, the weak child.

>> Nassim Taleb: Yeah, what I'm saying is evolution is harsh, all right? There's something called the naturalistic fallacy that we have to fight. We have to use all that's good about evolution, all right, while imposing our moral standards. And how do you do it?

>>: How does that [indiscernible].

>> Nassim Taleb: Darwinism, okay. By wanting to have a society that has no Darwinism, we weaken it further, so everybody's got to sink together. The way you've got to do it is to protect the very weak. And this is what I call the barbell approach -- I haven't spoken about it yet. The barbell approach is: instead of favoring the middle class, favor the very, very weak and let the wild entrepreneur prevail, you see?

The middle class is the comfortable thing. The barbell approach is like for a portfolio, okay: hedge your extreme negative risks and take maximal positive risk. That's the barbell approach. You can apply it to society -- it's a little complicated to do here. But the barbell means that instead of having a middle policy to smooth out every fluctuation, you just smooth out the extreme fluctuations, the extreme unhappiness.

Another way to practice it in medicine is what I say: have more people in emergency rooms and fewer elective surgeries. That's my form of it, my answer.

Protect people from extreme unhappiness, but don't give them Prozac because they have mood swings. You get the idea?

Yes, and then -- yes.

>>: Another question about figuring out whether a model or a system is fragile. I guess it's easy in some cases -- right, the question is about picking what the stressor is that you're looking at.

>> Nassim Taleb: Excellent.

>>: So yeah, like, in the case where you're long options, it's easy to know. But if you're picking a company, how do you know?

>> Nassim Taleb: That's perfect. When I take a model -- I take a look at economic models, it's very simple. I take every model as it is. People usually never miss a relevant variable in their model. In fact, they have too many. You see?

Model error typically doesn't come from misidentifying, you know, what enters your model. You have a company, and then you have sales, the weather in Spain -- all these you have in your model.

The problem is making a point estimate of that variable, rather than a stochastic estimate of that variable. You see the idea? Like, for example, the big error: you know that your grandmother, in the grandmother example, is affected by the weather.

But a point estimate means you take the average temperature, 70 degrees, instead of stochasticizing the average. And the test for that -- I call it the alpha -- sorry, the Omega A. The Omega A test is: you take a model; for example, I have projections of sales ten years from now, okay. Maybe I'm very good at it, but they're random.


Instead of just doing that, I test my model across different values of that variable. If I'm concave to it, then I have model error. Just as I said earlier, very simple.

Say you sell fertilizers, all right. You're very good at estimating orders of fertilizer for the next year: on average, I'm going to get ten million orders, no?

To see how vulnerable you are to model error, you look at what happens at eight million orders or at 12 million orders, you see? If you lose a lot more at eight million than you gain at 12 million, you have a huge exposure to model error. You see?

You take the model, which is composed of variables A1, A2, A3 and so on, and you check every variable to see if you're concave to it, as a first cut. There's a more complicated method where you move them together.

But the way we did it for the IMF: I wrote a paper on a heuristic to detect model error that was so simple that even the IMF people understood it, all right. And then, you know, it took only a year to get the stamp of approval -- to tell you how easy it was. It had to be very easy. I work for free for the IMF, for a simple reason. They said, we are forced to pay you, and then they sent me 83 pages to sign. And I resigned right away, before starting.

So we negotiated: I don't earn a salary, and I only had to sign a few sheets and stuff like that for the IMF.

We wrote the paper. By the time they put their approval on it, all right, it took about a year. The idea is very simple. I take a bank. No bank is linear -- if it were linear, you'd see the variations right away, you see. It has to be nonlinear.

So you take a bank, you do a stress test. I don't care how you do it, all right? Because, I tell you, the important thing is not the precision of your stress test; it's the acceleration.

So I say, okay, the stock market is going to go down 30 percent. How much do you lose? Okay? Then you say the stock market's going down 35 percent, and going down 40 percent. If you have accelerating losses, it means that you have model error. You see? That's how we test for model error. And the antifragile has the opposite characteristic: you make more and more. If you make a lot more when the market goes up 20 percent -- more than twice what you make if the market goes up ten percent -- you're antifragile.
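A sketch of that stress-test heuristic, with a hypothetical convex loss curve standing in for a real bank: run the test at increasing severities and look at the differences between successive losses. If each extra 5 percent of market drop costs more than the last, losses accelerate: concavity, fragility, model error.

```python
def loss(market_drop):
    # Invented loss curve with a convex (accelerating) term.
    return 100 * market_drop + 4000 * market_drop ** 2

levels = [0.30, 0.35, 0.40]
losses = [loss(x) for x in levels]
steps = [b - a for a, b in zip(losses, losses[1:])]
print("losses:", losses)              # [390.0, 525.0, 680.0]
print("extra loss per step:", steps)  # [135.0, 155.0] -- growing: fragile
```

For the antifragile case, run the same check on gains as the market rises and look for acceleration on the up side.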


And then, you know, the sky is the limit. Yes? Sorry, he was first and then you.

>>: I was curious, based on all these principles, what was your [indiscernible].

>> Nassim Taleb: Unfortunately -- because of the book tour, and because I retired as a trader. I spent my life as a trader on one simple thing: doing the opposite of this, okay. Trying to survive by selling volatility and buying tails. I started in '87, so I had a big bonus from the market in '87, and I realized, looking at '87, that if you have a 20-sigma event, okay, 20 sigma, you're so convex that the payoff is so disproportionate that you could wait 400 years before having another one and you're still okay, all right? I wrote a paper, it's on the site, showing why. It's very convex.

So I told myself I'd specialize only in extreme tails. That was my profession. I did that. And, of course, you know, you wait three, four, five years, everybody tells you you're an idiot, you're not profitable and so on, and I say okay. And then they disappear -- they blow up.

So that was how I built my portfolio. Now that I have it, I'm lazy; I don't want to lose it, all right? So I like Seneca, but I can't wake up in the morning and pretend I'm in a shipwreck, all right? So I can't do that.

So I have inflation hedges, 85 percent of my portfolio, including land in Lebanon. I'm trying to buy stuff in Canada -- some stocks, stuff like that, all right. Real estate. This is my portfolio. I had gold and silver; I've decreased it after the crisis.

But now my portfolio doesn't correspond to how I built it. So it's an unfair question, you see. I'm now a retired person who writes books and doesn't want variation. Actually, I'd do a deal, God: 20 years from now, give me back 90 percent of my money, unharmed by inflation, and I'll sign off right away. You see, I'd give ten percent to have that hedge. And there's no answer -- I can't find anybody to give me, you know, that kind of package, you see. So this is my portfolio. And then him, yes.


>>: So we actually tweeted back and forth a little bit. I'm Nick Mallet.

>> Nassim Taleb: I got on Twitter about two weeks ago.

>>: You're popular. So one of the things that was brought up recently in enterprise architecture was a discussion of business strategy, how to find antifragility, and how we should view it. Now, business strategy, not an investment strategy: what do we do with a product, or what do we do with a market? Is the strategy itself fragile or antifragile, or is it the system of executing on the strategy and thereby improving it that's fragile or antifragile?

>> Nassim Taleb: I'm going to ask you a simpler question. If you've noticed, I have no theories so far. You know, I've produced no theory. All I've produced is measurements. Have you noticed? We can do measurements. If a business is fragile, measure it.

Describe a business, what do they do for a living?

>>: So we can just use Microsoft for fun, because we're all here. Let's pick a particular area of Microsoft's business -- say the Windows business, which is sold through OEMs.

>> Nassim Taleb: Okay. So we have Windows sales. You have sales and you have costs, right? So you take sales in the different divisions and you see what happens to your business as sales go up by one mean deviation, go down by one mean deviation, or by two mean deviations, to be more robust, okay.

And then you do the same as the costs go up or down, you see, and you look at the net, because you may not be able to keep your costs fixed, and you see what happens to your business.

You see how fragile you are. I'm certain you are fragile, for one simple reason: the up side can't be very big, given that you sort of dominate the business.

As you grow -- everything in nature has a shape, by the way. You know that, right? Everything in nature has a shape. So you are probably fragile going down from here, but you're getting paid for the fragility. The point is, you want to know how much you are paid for the fragility. Everything in nature has this shape, you see.

There's a maximum. At first you're antifragile, and then at some point it tapers off and then -- now you're fragile, you see, because from there you're going back down. You get the idea. It's the price of success. That's Damocles' sword. So you want to manage it, either by exiting the main business or by doing sort of what Google's doing: going into areas that nobody's dominating, you see.

Because, I mean, if you have a search engine used by 80 percent of the people, right, are you only going to shoot for 85 percent? No. You get the idea. You want to go for something new that you can dominate. That's sort of the strategy.

But you know you're fragile in a business that you dominate, definitely. The point is: are you paid enough for it? You see, that's the point. I don't mind selling volatility if I'm paid for it. You see the idea.

I'm not a business consultant, but if someone asks me how to build a business, I say: maximum optionality at all times. Maximum convexity at all times.

Now you have minimum convexity, but you're paid for it because you dominate that business, okay? And you can lose it, not because of a competitor, but because the business may mutate into something else. So this is why you've got to keep seeding those tinkering ventures.

>>: This is where you make the point that as the information content of an offering goes up, you get this -- you talk about [indiscernible] wins and everybody else loses, or Apple wins and everybody else loses. As the information content goes up, it gets more fragile and brittle in one sense.

>> Nassim Taleb: The overall environment is very fragile for one simple reason: we're dominated by that kind of distribution as one characteristic.

I did it coming here on the plane, to relax from -- you have to realize, this is my 21st day on the road, you know, with media -- you get the idea -- or journalists. And this is the only relaxing lecture I've given, other than the one I gave at the university, all right?

So relaxing because you can get technical right away without the bull -- sorry.

So I just did something very simple. I took my book sales -- there were sales figures for all editions -- and I processed them. It's always mind-bogglingly shocking that 0.1 percent of the books represent 50 percent of the profits. You get the idea? And we're moving into that environment. We didn't have that before. We didn't have the Harry Potter effect before.


Microsoft would have been impossible -- for someone who drops out of college to reach that kind of monopoly in so short a period of time -- unless you had an environment with that kind of connectivity. You see? A variable that was not technological. So we're entering a world where things can happen very fast, in either direction, you see.

Google came along -- they were babies ten years ago. So at the same time, you have to accept that the thing can necessarily shift to something else.

When you look at books also, it's quite shocking. Take history, and you see books that dominated the planet disappear completely. You go and see the sales: seven copies a week for something that sold 50, 60 million copies, all right? Seven copies. Shocking how things can shift the other way.

This, to me, is a nice equalizer, but I don't want to depend on huge success. You want to have a steady base. How do you do it? I don't know. I mean, you manage it your own way.

I have no more time?

>> Amy Draves: We're going to stop the questions so we have time for book signing.

>> Nassim Taleb: No more questions, okay. Thank you for listening to me.
