Applicable Game Theory

Chapter 3: Deterrence*

3.1 Introduction

How should nations behave to prevent a nuclear first strike? How can incumbent firms prevent the entry of new competitors? The answer lies in the art of deterrence which, if successful, encourages your opponent to refrain from an unwanted course of action in anticipation of your retaliatory response. Successful deterrence requires persuasion: your opponent must believe that the costs suffered as a result of your response will outweigh the benefits of his action. Consider the following deterrence success story.

Harry is a foul-beaked parrot who, to his owner Mr. Brown's dismay, spends most of his days experimenting with uncouth and obscene language. Mr. Brown, a mild-mannered gentleman, finds these expectorations offensive and constantly reprimands the bird. To no avail. In fact, the more Mr. Brown complains, the more Harry cusses until, one day, Mr. Brown decides to take action. He grabs the bird and throws him into the freezer. There follows an innovative string of obscenities and a vigorous scraping of the inner door. Then silence. Total silence. Worried that his pet might have hurt himself, Mr. Brown opens the freezer door. Out comes Harry, looking as contrite as any parrot can be. "I'm so dreadfully sorry to have bothered you," says Harry, "I do declare that my cussing is over forever." And the bird snuggles up to Mr. Brown's ear and whispers, "By the way, what did the chicken do?" Harry kept his word and took to reciting Walt Whitman, much to Mr. Brown's delight and amazement.

Had Mr. Brown known about game theory, he could have predicted the impact of his frozen chicken on poor Harry's psyche. To the parrot, the frozen bird was the retaliatory threat he exposed himself to if he insisted on pursuing the less elegant forms of language. And after all, is it not better to speak well and be alive, even if one has to forego the pleasures of a well-rounded swear word?
To Harry, the cost of freezing to death outweighed the benefit of hearing himself cuss. And the threat was truly credible to Harry, since another bird had clearly fallen victim to Mr. Brown's retaliatory measures.

Deterrence is a major theme of game theory. This is hardly surprising, since only a rational decision maker with enough concern for tomorrow is likely to be moved by deterrent threats. When such threats are the product of the rule of law, there is rarely any issue of credibility. However, when a deterrent threat is made by a firm or by a nation-state, its credibility can be at issue, especially if the course of action it is attempting to deter is not a clear encroachment on its vital interests. For example, the threat to intervene militarily to defend an ally against a third-party attack can be the subject of considerable doubt on the part of the attacker. The fundamental question it raises is the following: should deterrence fail, the defender might find it too costly to implement the threatened course of action. The credibility of such a threat is therefore intimately associated with a cost/benefit analysis not unlike that faced by the attacker. Should the cost of intervention outweigh that of submitting to the attack, the threat will hardly appear credible.

* Copyright 1997-2002, Jean-Pierre P. Langlois. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the author.

So if Harry was knowledgeable about game theory, he could consider the credibility of Mr. Brown's freezing threat: would Mr. Brown really prefer to lose his pet rather than hear a few obscenities a day? If Harry were to realize that the chicken had never been Mr.
Brown's pet, he might even entertain the possibility of breaking his promise, abandoning Walt Whitman to resume his exploration of crude and ugly vocabulary.

Game theory equates the credibility of a threat with the rationality of carrying it out. From a game theoretic viewpoint, Mr. Brown would need to clearly prefer silence to the company of a cussing pet for Harry to find his threat credible.

Game theory has played an important role in the modeling of deterrence in economics as well as in political science. It has been a source of insight into policy-relevant issues such as nuclear deterrence and predatory pricing. The sections that follow present game theoretic models of deterrence in political science and in economics.

3.2 The Nuclear Deterrence Debate

In the early days of the nuclear age, before the Soviet Union acquired its first nuclear capability, the nuclear deterrence equation was a rather simple one: the U.S. could threaten with impunity a nuclear retaliation for any serious encroachment on the status quo in central Europe. Indeed, it has been argued that one major reason for the Hiroshima and Nagasaki bombings was President Truman's intent to warn Stalin against pushing his advantage in Europe after the defeat of Germany. With the evidence of the destructiveness of the bomb and of the willingness of the U.S. to use it, any territorial or political gain that the Red Army could achieve could be quickly outweighed by the thorough destruction of Moscow and other major Soviet cities. The Soviets did not need a great deal of rationality to be impressed by the argument. The situation began changing considerably with the first Soviet nuclear tests[1] and the subsequent Soviet development of the hydrogen bomb and long-range delivery systems. How could the U.S. credibly threaten Moscow with nuclear destruction when the Soviet Union could inflict the same damage on Washington or New York?
How could the nuclear threat thus be used to protect Western Europe from Soviet ambitions? De Gaulle raised the issue most emphatically when he asked whether the U.S. would risk New York for Paris. Although the question was never actually put to the test, the answer was quite evident to the French, who decided to develop their own nuclear capability. One American answer to this credibility issue was the creation of NATO. The American commitment to the defense of Western Europe that lies at the heart of the NATO alliance explicitly involved the possibility of nuclear warfare. But how would the nuclear escalation come about? Would it be a deliberate decision that enough was enough and that a desperate military situation warranted the escalation to a nuclear exchange? Official declarations were never that clear and, even if they had been, their credibility would have remained highly dubious. In fact, it has been argued[2] that a threat of deliberate nuclear first use can never be credible between approximately equal nuclear opponents. Instead, nuclear deterrence theorists have adopted the proposal[3] of a "strategy that leaves something to chance." The argument goes as follows: in the fog of war that would inevitably envelop a conventional Soviet drive on Western Europe, it would be impossible for either side to keep complete control of all developments. In particular, if nuclear arms happen to be available to local commanders whose communication channels have been cut, and if those commanders have authority in that case to use their weapons, chances are that they will use them in desperate situations. So, nuclear first use becomes a chance move that need not be deliberately ordered by the central authorities.

[1] It now appears that, although Soviet scientists had reached an excellent and in some ways superior design, the first Soviet atomic bomb was a copy of an American design obtained by Soviet intelligence.
This thinking was so pervasive among military and intellectual strategists that NATO deployed numerous "tactical" nuclear weapons in Western Europe and adopted a policy that delegated the authority to use them to local commanders under certain circumstances.[4] This was based on a distinction between "tactical" and "strategic" nuclear weapons that is far from obvious. In principle, a weapon is tactical if its primary use is intended against military targets, such as an armored division or a naval battle group, or their support and supply structure. It is strategic if its primary target is the economic, industrial, and even civilian resources of the opponent, or its command and control structure. The trouble is that tactical nuclear weapons were never thought to be very effective against armored divisions unless one used them in the hundreds, thus inflicting tremendous collateral damage that would amount to either a suicidal or a strategic use.[5] And indeed, the fuzziness of that distinction served a major purpose: since tactical use could result from a loss of control on the battlefield, and since tactical use would easily border on the strategic, a Soviet conventional drive on Western Europe would place Soviet society itself at a significant and unacceptable risk of nuclear destruction.

One may feel appalled that this kind of thinking served as a centerpiece of the defense strategy of the Western world during the Cold War, not merely as an emotional response to the risk of nuclear holocaust, but as a simple questioning of the very logic of the edifice. For instance, is it sensible to argue that tactical use would necessarily lead to strategic use? Although tactical nuclear warfare would have messed up central Europe (and deprived the Soviets of the fruits of victory), local NATO or Soviet commanders would never have been in a position to attack a Soviet or a U.S. city. So, both U.S. and Soviet societies seemed relatively immune to the "loss of control" thesis, at least as far as local commanders were concerned. The strategic "button" always remained firmly in the hands of the highest authority on each side.

[2] See for instance The Illogic of American Nuclear Strategy (1984) by Robert Jervis or Deterrence (1977) by Patrick Morgan.

[3] See for instance The Strategy of Conflict (1960) and Arms and Influence (1966), both by Thomas Schelling, and Escalation and the Nuclear Option by Bernard Brodie for the evolution of those ideas.

[4] At the same time, NATO built numerous safety features, such as Permissive Action Links (PALs), into the system to prevent accidental or unauthorized use.

[5] This dilemma led to the invention of the so-called "neutron" bomb, whose primary effect was to emit tremendous amounts of radiation that would kill tank crews almost instantly while inflicting limited collateral damage on the countryside. Neutron bombs were never deployed because they threatened the very principle that nuclear escalation needed to be uncontrollable.

Could the supreme command, instead, become responsible for this loss of control? The very proposition sounds like a contradiction in terms within the intellectual framework of deterrence theory: on the one hand, one argues that deterrence will succeed because the central decision makers will see that the risks clearly outweigh the benefits; on the other hand, they may lose control in the heat of a crisis to the point of no longer seeing through that obvious calculus.[6] Finally, one wonders why NATO went to such great pain and expense to ensure that the conventional defense of Western Europe would succeed.
If the "strategy that leaves something to chance" were really credible, wouldn't it be simpler (and cheaper) to keep only those forces necessary to guard the nuclear weapons and serve as a tripwire, thus demonstrating complete confidence that the loss of control would indeed occur at the tactical level and propagate upward to the strategic?

Volumes have been written about the credibility issues of extended nuclear deterrence and, lacking any direct evidence of its failure, most of these issues are still largely unsettled. The contribution of game theory to the debate is similarly ambiguous. On the one hand, it clarifies some of the arguments by exposing the critical elements and relationships necessary to achieve credibility. On the other hand, it over-simplifies the inevitably complex structure of international crises.

3.3 Early Game Theoretic Models of Nuclear Deterrence

Herman Kahn[7] likened nuclear crises to the game of Chicken supposedly played by California teenagers: two cars race toward each other driving in the middle of the road. The first driver who swerves is "chicken." The game and its variations would be played for prestige and related prerogatives. The Chicken metaphor was soon "formalized" into the normal form game of Table 3.01. Implicit in the strategic form is the assumption that each side has a strategy in mind before even starting the race. In Table 3.01, the set of possible strategies has been reduced to "Challenge" and "No challenge," a clearly extreme simplification.[8] If both sides challenge each other to the end, the inevitable collision will result in disaster: death for both sides. And posthumous prestige is not quite enough to compensate for the lost opportunity to enjoy life (even without prestige). Each player's preferences are therefore in the following order: (1) to win, (2) to draw, (3) to lose, and (4) to die.

[6] One may also argue that a Minuteman crew, for instance, could initiate a launch in the heat of a crisis.
But this could only arise from a total loss of communication between the silo and the central command, a loss that would likely result from a successful Soviet first strike. Both sides deployed immense ingenuity in guarding complete control over the strategic arsenals, evidence that a loss of control was not intended to happen at that level.

[7] On Thermonuclear War (1960).

[8] The representation of Chicken (as originally described) by a normal form game is, to say the least, highly dubious. It is in fact a game of timing with numerous decision steps to account for. Even if one is stupid enough to play that game, it is difficult to believe that he will not reconsider the possibility of swerving a few times as he races toward the oncoming car.

Table 3.01: Chicken

In the nuclear crisis version of Chicken, the "strategies" available to the two sides have been given various names depending on the authors. Brams, for instance, uses the terms "cooperation" and "non-cooperation."[9] Other authors have used the terms "cooperate" and "defect." In any case, if neither side is willing to cooperate, the crisis is assumed to result in an escalation to nuclear disaster, an outcome neither side prefers to any other. But if one side behaves cooperatively, it is likely to be exploited by the other to its advantage. Only if both sides are willing to cooperate will some compromise emerge. Of course, the reduction of the complex options available in a nuclear crisis to two extreme attitudes seems at best naive. But, more importantly, it portrays decision making in a rather odd way. In essence, the two players independently commit to one "strategy" at the outset of the crisis. If that commitment is uncooperative on both sides, neither side is given a chance to reconsider and to promote a more compromising attitude. The game model thus erases essential components of actual crises, especially those of timing and learning from each other's actions.
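The preference ordering behind Chicken pins down its best-response structure, which can be checked mechanically. The sketch below is a minimal illustration, not the actual Table 3.01: the payoff numbers (win=2, draw=1, lose=0, die=-10) are assumed, chosen only to respect the stated ordering.

```python
from itertools import product

# Illustrative Chicken payoffs; values are assumed, ordered as in the
# text: win > draw > lose > die.
WIN, DRAW, LOSE, DIE = 2, 1, 0, -10
ACTIONS = ["Challenge", "No challenge"]

def payoffs(a1, a2):
    """Return (player 1, player 2) payoffs for a pure strategy profile."""
    if a1 == "Challenge" and a2 == "Challenge":
        return (DIE, DIE)           # head-on collision
    if a1 == "Challenge":
        return (WIN, LOSE)          # player 2 is "chicken"
    if a2 == "Challenge":
        return (LOSE, WIN)          # player 1 is "chicken"
    return (DRAW, DRAW)             # both swerve

def pure_nash():
    """All profiles where neither player gains by deviating unilaterally."""
    eqs = []
    for a1, a2 in product(ACTIONS, repeat=2):
        u1, u2 = payoffs(a1, a2)
        best1 = all(payoffs(d, a2)[0] <= u1 for d in ACTIONS)
        best2 = all(payoffs(a1, d)[1] <= u2 for d in ACTIONS)
        if best1 and best2:
            eqs.append((a1, a2))
    return eqs

print(pure_nash())  # the two asymmetric "one side wins" profiles

# Symmetric mixed equilibrium: each challenges with the probability p
# that makes the opponent indifferent between challenging and swerving.
p = (WIN - DRAW) / ((WIN - DRAW) + (LOSE - DIE))
print(p)  # 1/11 with these illustrative numbers
```

With any payoffs respecting the ordering, the pure equilibria are the two asymmetric profiles in which exactly one driver swerves; only the mixed-equilibrium probability depends on the exact numbers chosen.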
In fact, if used as a model, Chicken leads to serious logical inconsistencies. It is found to have three Nash equilibria: (1) Player 1's victory, (2) Player 2's victory, and (3) a mixed strategy equilibrium in which each side plays its two extreme choices with probabilities that depend on the exact payoff values used to represent player utilities. The latter has not been given much attention in the literature. The two other equilibria imply that one side will rationally pre-empt the other, an outcome that contradicts much of the empirical evidence. Indeed, the famous Cuban Missile Crisis is a typical case in which the Soviet Union attempted to pre-empt by establishing a more favorable status quo. The final outcome, however, was not quite to its advantage. In fact, the very existence of a crisis seems to imply that the other party contested an attempt at pre-emption. Nevertheless, numerous models were developed around the Chicken approach, often by adding at least a second turn to the model.[10]

A few authors, notably Frank Zagare, have criticized the Chicken model.[11] Most of the criticism is based on the observation that, in a real crisis, if one side challenges, it is usually better for the other to respond in kind than to submit, thereby escalating the level of tension. This is especially true in the case of approximately equal opponents.

[9] See Superpower Games (1985, p. 15) by Steven Brams.

[10] See for instance Game Theory and National Security (1988) by Steven Brams and Marc Kilgour.

[11] The Dynamics of Deterrence (1987) by Frank Zagare.

Other two-by-two normal form games have been proposed as models of international crises. A major contender is the famous Prisoner's Dilemma of Table 3.02.

Table 3.02: The Prisoner's Dilemma

The story that gave the game its name is usually attributed to Tucker: two prisoners are held in separate cells with no way to communicate with each other. They are suspected of being accomplices in some crime.
The prosecutor offers each of them the following bargain: if only one of the prisoners confesses, thus providing much needed evidence to the prosecution, he will be given great leniency while his accomplice will get the heaviest possible sentence. If both confess, they will be given heavy sentences, but not the heaviest. If neither confesses, they will still get significant sentences (based on existing evidence), certainly heavier than what a sole confessor would get. The payoff values in Table 3.02 reflect each prisoner's preference for the lightest possible sentence.

The Prisoner's Dilemma, in its simplest one-shot form, admits a single rational solution: each side finds it preferable to defect no matter what the other side does. This results in the unique Nash equilibrium (Defect, Defect), with utilities that are worse for both players than what they could obtain by cooperating; hence the dilemma. In its simplest form, the Prisoner's Dilemma does not offer a much better prediction than Chicken for international crises. The rational outcome here should be the sole Nash equilibrium Defect/Defect, which results in all-out escalation. But again, the model over-simplifies the complex decision structure of a crisis. In reality, escalation can be a step-by-step process of retaliation upon retaliation, with the two sides edging closer and closer to the nuclear abyss. This critique gave rise to numerous models that add options and turns of play to the decision structure. Eventually, the modeling efforts turned to the extensive game form in order to better capture the sequentiality of decision making as well as the timing and information components of the problem.

3.4 Extensive Form Models of Deterrence

Perhaps the simplest situation in which deterrence issues arise in international relations is when a challenger can attempt to grab some piece of real estate held by a defender. In the simplest analysis, if the challenger challenges, the defender may resist or submit.
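The backward-induction logic of this simple challenge game can be sketched in a few lines. The payoff numbers below are illustrative assumptions, not taken from Figure 3.01; only the orderings matter: a weak defender prefers submitting to resisting, a strong one the reverse, and the challenger prefers winning to the status quo and the status quo to war against a strong defender.

```python
def solve(defender_type):
    """Solve the one-shot challenge game by backward induction.
    Payoff pairs are (challenger, defender); all values are illustrative."""
    status_quo = (0, 0)
    submit = (1, -1)   # challenger grabs the prize
    # A weak defender suffers more from resisting than from submitting;
    # a strong defender prefers to resist.
    resist = (-2, -3) if defender_type == "weak" else (-2, 1)
    # Defender moves last: pick her best reply to a challenge.
    reply = max([("submit", submit), ("resist", resist)],
                key=lambda move: move[1][1])
    # Challenger anticipates that reply and challenges only if it pays.
    if reply[1][0] > status_quo[0]:
        return ("challenge", reply[0])
    return ("wait", None)

print(solve("weak"))    # -> ('challenge', 'submit')
print(solve("strong"))  # -> ('wait', None)
```

The two printed solutions reproduce the pattern discussed in the text: a weak defender is challenged and submits, while a strong defender deters the challenge altogether.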
If the challenger waits, the status quo continues. A weak defender is one who will prefer to submit rather than defend with possibly worse consequences. A strong defender is one who will prefer to defend, perhaps with the prospect of winning some concessions as an outcome of the crisis. The two cases are pictured in Figure 3.01.

Figure 3.01: Two Deterrence Configurations

There is only one solution to each of these two games. When the defender is weak, a challenge occurs, followed by a submission. And when the defender is strong, the challenger waits. Game theory does not seem to contribute extraordinary insight here.

This elementary model can now be extended in some interesting ways. First, it seems more realistic to allow the challenger a bit more sophistication in his decision process. He may, for instance, first express a demand and act only later if that demand is not met. This gives him a chance to back down should the defender appear too resolute in her resistance to the demand. This situation is pictured in Figure 3.02. Again, there are two cases of defender, and the challenger's outcomes are affected by whether he faces a weak or a strong one. Should the defender be weak, it would be silly for the challenger to back down instead of going to a war he can win, whereas, if the defender is strong, that same war will be far too costly. The rational solution in the case of a weak defender is for the challenger to demand, expecting the defender to submit on the assumption that the challenger would attack next. In the case of a strong defender, the challenger will wait, expecting the defender to resist on the assumption that the challenger would back down. Again, there is little here that game theory tells us that we did not already know.

Figure 3.02: Several Stages and Configurations

Of course, the games of Figure 3.02 are still greatly limited in several ways.
First, they severely limit the number of turns and options available to the two sides. There are presumably numerous ways to resist, escalate, and possibly propose settlements. Second, although the sequentiality of play seems appealing, it is not always the rule at higher escalation levels, since both sides could choose to move at the same time. Third, the players have perfect information, especially concerning their opponent's preferences, a dubious assumption. And fourth, the number of allowable turns is pre-programmed into the model, an equally dubious proposition, since crises only end (in war or compromise) when the protagonists choose, not when the theorist does.

3.5 Hawks and Doves[12]

One alternative to the above perfect information models that has been developed in various ways in the literature is worth mentioning. Using the Cuban missile crisis as a referent, it appears that the very composition of the Executive Committee of the National Security Council, especially the proportion of so-called hawks and doves, very much influenced the strategic behavior of the U.S. side. Similarly, the decision-making authority on the Soviet side, presumably the Soviet Politburo, determined its own strategic behavior. To simplify, each side's strategic behavior can be labeled "dovish" or "hawkish." The issue of deterrence can now be reframed as that of whether a challenge would occur and what its outcome would be. The game of Figure 3.03 is perhaps the simplest that captures the issue.

[12] Section 6 of Chapter 1, on advanced solution concepts, should be read before this section and the following ones.

Figure 3.03: Hawks and Doves Game

Hawkish behavior on both sides is assumed to escalate to war (possibly nuclear), with disastrous consequences for both sides, valued here at -10. Dovish behavior from the defender will yield a compromise with a dovish challenger and an outright victory to a hawkish challenger.
But a hawkish defender will win against a dovish challenger. The utility parameters can be adjusted almost endlessly to represent stronger or weaker protagonists with various rational outcomes. But the equilibria of this game are worth discussing from the point of view of the credibility of deterrence and of the contribution game theory has to make to its understanding.

The game has three pure Nash equilibria (and several mixed ones) that can be easily obtained with GamePlan and that we will discuss in words. The first yields a "challenger victory": a challenge followed by hawkish behavior from the challenger while the defender adopts a dovish stand. In the second, the defender is hawkish while the challenger is dovish and does not challenge in the first place. This could be called a "self-deterrence" equilibrium. The third equilibrium is pictured in Figure 3.04 and deserves some more game theoretic attention. Here the challenger appears to be deterred by the threat of eventual nuclear war. The trouble is that the execution of the threat is not a rational development from Node 2. Indeed, both players would be better off switching to a dovish strategy in response to their opponent's hawkish stand. In game theoretic terms, the Nash equilibrium of Figure 3.04 is not subgame perfect: there is clearly a subgame, starting at Node 2, in which the two-sided hawkish stand is not a Nash equilibrium.

Figure 3.04: A Non-Credible Threat

When one restricts the solving to subgame perfect equilibria, only three solutions arise: the challenger victory and self-deterrence equilibria described above, and a mixed strategy solution with a challenge followed by a high probability of dovish behavior on both sides. However, that equilibrium involves a small but distinct probability of all-out escalation.
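The subgame perfection argument can be replayed mechanically on the post-challenge subgame at Node 2. In the sketch below, only the war payoff of -10 comes from the text; the other payoff pairs are assumed for illustration, ordered so that each side prefers victory to compromise and any outcome to war.

```python
from itertools import product

# Post-challenge subgame at Node 2 of the Hawks and Doves game.
# Payoff pairs are (challenger, defender); only -10 is from the text.
PAYOFFS = {
    ("hawk", "hawk"): (-10, -10),  # war (value from the text)
    ("hawk", "dove"): (10, 0),     # challenger victory (assumed)
    ("dove", "hawk"): (0, 10),     # defender victory (assumed)
    ("dove", "dove"): (5, 5),      # compromise (assumed)
}

def is_nash(a1, a2):
    """True if neither side gains by a unilateral switch of stance."""
    u1, u2 = PAYOFFS[(a1, a2)]
    ok1 = all(PAYOFFS[(d, a2)][0] <= u1 for d in ("hawk", "dove"))
    ok2 = all(PAYOFFS[(a1, d)][1] <= u2 for d in ("hawk", "dove"))
    return ok1 and ok2

subgame_eqs = [p for p in product(("hawk", "dove"), repeat=2) if is_nash(*p)]
print(subgame_eqs)
```

Since (hawk, hawk) is absent from the list, mutual escalation is not an equilibrium of the subgame: any full-game equilibrium that relies on it after a challenge, like that of Figure 3.04, fails subgame perfection.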
Of course, the model deserves to be developed further, especially from the hawkish/hawkish stand, which would presumably yield some further decision steps. However, with a structure similar to that starting from Node 2, there is little difference in the type of equilibria obtained (see homework problem #2).

The question arises again of whether game theory has taught us something about deterrence that we did not already know. The answer is a bit ambiguous. First, we have not covered all possible preference patterns that could arise in such situations. But in the above example, we were able to rule out one type of equilibrium by a purely game theoretic argument: the lack of subgame perfection in Figure 3.04 was interpreted as a lack of credibility in the underlying threats of escalation. However, of the two valid pure equilibria remaining, one assures a challenger victory and the other maintains the status quo through something like self-deterrence. Which of the two should spontaneously arise in a real-world situation? Each of the two has the flavor of a self-fulfilling prophecy: it is rational as long as it is assumed. But what if the two sides differ in their assumptions about which prophecy will come about? Game theory does not provide very convincing answers to this question of equilibrium selection. Perhaps further refinements of the model will provide better answers.

3.6 Deterrence by Uncertainty

In many actual deterrence situations, the challenger is unsure about the defender's true preferences. Is he facing a weak or a strong defender? The uncertain case is represented in Figure 3.05 below. There is now a chance move at the beginning, with specified probabilities that the defender is weak or strong. The challenger never really knows which type of defender he is facing, although he will form beliefs about it. If the defender were weak, backing down would be costly to the challenger while escalating would be more beneficial.
But if the defender is strong, escalation will be costly and backing down less so. Of course, the defender's preferences follow a similar logic.

Figure 3.05: Challenger Uncertainty

There are several solutions to this game, depending on the initial chance move probabilities, which are usually called "initial" (or prior) beliefs since they will appear as such at the challenger's first decision turn. For instance, let us take a probability of 90% that the defender is weak. We find two kinds of solutions (in the perfect mode). On the one hand, there are several equilibria in which the challenger does not make any demand. One of these is illustrated in Figure 3.06. Such solutions can appropriately be called "successful deterrence" equilibria, since the challenger is deterred from making demands by an expectation that the defender will always resist and that he will have to back down, at least with some high probability.

Figure 3.06: Successful Deterrence

Figure 3.07: Failure of Deterrence

On the other hand, there is one equilibrium, shown in Figure 3.07, in which a challenge occurs and the defender resists with certainty when he is strong and with probability 1/9 (approximately 0.1111) when he is weak. The challenger ends up escalating with probability 2/3. This can accurately be described as the "failure of deterrence" equilibrium, since war eventually breaks out with significant probability.

The solutions raise interesting and important technical and theoretical questions. As noted before, the numbers above the nodes in the solution display indicate players' beliefs about where they are within information sets. For instance, the 0.9 above Node 1 means that Player 1 has a 90% chance of being at that node when facing his first decision turn. The intriguing fact is that beliefs are 50% at each of Nodes 5 and 6 in both solutions shown. The question is how these come about.
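In the deterrence failure solution, the 50% beliefs follow from Bayes' rule and can be checked numerically from the probabilities just stated (a 0.9 prior that the defender is weak, resistance with probability 1/9 if weak and 1 if strong):

```python
# Belief computation for the deterrence failure equilibrium of Figure 3.07.
# All probabilities below are those stated in the text.
p_weak, p_strong = 0.9, 0.1
resist_weak, resist_strong = 1 / 9, 1.0

# Probability of reaching each node of the challenger's second
# information set (i.e., of observing resistance from each type).
p_node5 = p_weak * resist_weak      # weak defender resisted: 0.9 * 1/9 = 0.1
p_node6 = p_strong * resist_strong  # strong defender resisted: 0.1

# Bayes' rule: belief of facing a weak defender given that resistance occurred.
belief_weak = p_node5 / (p_node5 + p_node6)
print(belief_weak)  # approximately 0.5: equal chances of Node 5 and Node 6
```

The weak defender's 1/9 resistance probability exactly offsets the 9-to-1 prior, which is why the posterior lands at fifty-fifty.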
In Figure 3.07, it is not too difficult to reconstruct the logic: after Player 1's challenge, Node 2 will be reached with probability 0.9 and Node 3 with probability 0.1.[13] Since Player 2 resists with certainty at Node 4, the probability of reaching Node 6 is also 0.1. But from Node 2, there is only a probability 1/9 of proceeding to Node 5. So, the standard laws of conditional probability tell us that Node 5 will be reached with probability 0.9×(1/9)=0.1, while the game will end in a submission at Node 2 with probability 0.9×(8/9)=0.8. So, if Player 1 ever reaches his second decision turn, he can infer that he has equal chances (50%) of being at Node 5 or Node 6, since these are reached with equal probabilities (0.1 each).

Unfortunately, this argument cannot be made for the successful deterrence solution: since Player 1 never challenges in the first place, the probabilities of reaching all later nodes are zero! So, the beliefs displayed above Nodes 5 and 6 in Figure 3.06 cannot result from a similar argument. Indeed, they are wholly arbitrary. As noted in Chapter 1, the deterrence equilibria in this case are simply perfect Bayesian: strategies are optimal given beliefs, and beliefs are consistent with strategies on the equilibrium path. Off the equilibrium path, however, beliefs need not be consistent in the sense of Bayes' law. The deterrence failure equilibrium, by contrast, has no such inconsistency, since all of its information sets are on the equilibrium path. In game theoretic terms, the equilibrium of Figure 3.07 is sequential.

[13] Beliefs at singletons are always 1.

Again, the question of whether the game theoretic analysis has taught us something we did not already know has an ambiguous answer. On the one hand, deterrence equilibria arise even when the chances that the defender is strong are very low (10%).
On the other hand, that type of equilibrium is vulnerable to the game theoretic criticism that beliefs off the equilibrium path are fairly arbitrary. Indeed, the only equilibrium that passes the stronger sequentiality test is the deterrence failure one, and it agrees with the conventional wisdom: a defender that is likely to be weak is unlikely to deter a strong challenger.

3.7 Nuclear Brinkmanship

The term "brinkmanship," a combination of the word "statesmanship" and the expression "on the brink" (of disaster), came to describe Kennedy's skillful handling of the risks of nuclear escalation to bring about a successful outcome of the Cuban missile crisis. Together with the prevalent thinking that a threat of deliberate nuclear retaliation could not be credible between approximately equal opponents, this was seen as the triumph of the "strategy that leaves something to chance."

The first game model that attempted to capture this conception of deterrence is due to Robert Powell.[14] A crisis situation is viewed as a ladder of escalation steps that the two sides can climb in turn. Each time a further step is taken, an autonomous risk of nuclear escalation rises. At each of its turns, each side may either quit (surrender or back down) and therefore lose the contest, attack by conducting a full-scale nuclear strike with disastrous results for both sides, or simply escalate, thus handing the onus of a further escalation to the other side. To represent the autonomous risk of losing control and plunging accidentally into full-scale nuclear war, Powell introduces a chance move at the end of each escalation step. Chance decides, with a probability that rises with the escalation steps, whether to plunge the players into nuclear war or to allow them a further turn. A typical chance node between two escalation steps, together with the two possible chance moves, is pictured in the upper part of Figure 3.08.

[14] See Nuclear Deterrence Theory (1990) by Robert Powell.
Figure 3.08: The Risk of Losing Control

There is a much simpler way of representing that situation, pictured in the lower part of Figure 3.08. The chance node and its associated moves are entirely replaced by a single move between the two escalation steps with additional attributes: (1) a discount factor that is in fact the probability that chance will allow one further turn; and (2) a pair of payoffs that are the outcomes of the final chance move multiplied by the probability that they occur. This representation has the great advantage of simplifying the depiction of the complex games constructed by Powell. A typical case with only three escalation steps is pictured in Figure 3.09. The "attack" move is a deliberate escalation to nuclear war and is given the arbitrary payoff -2. At Node 2, escalation involves a probability of nuclear war assumed to be 1/3 and the resulting expected payoff is -2/3. At Node 3, this probability rises to 2/3 with corresponding expected payoff -4/3.

Figure 3.09: Three-Step Escalation Game

This game has two (perfect) equilibria that are easily obtained by backward induction. Although the defender will rationally submit at Node 4, the chance of nuclear war resulting from escalating at Node 3 results in an expected payoff E=-4/3+1/3=-1 for that move. So, the challenger may rationally either escalate or backdown at Node 3, resulting in the defender either submitting or escalating at Node 2, and the challenger either waiting (in self-deterrence) or challenging at Node 1.

Figure 3.10: Four-Step Escalation Game

The situation changes somewhat when the escalation ladder is made finer. In Figure 3.10 there are four escalation steps with a similar pattern. Now there is only one (perfect) equilibrium, with the defender always submitting and a challenge at Node 1.
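The arithmetic behind these expected payoffs can be checked directly (a sketch; the war payoff of -2 comes from the figures, while the +1 payoff for prevailing is an assumption read off the E computation above):

```python
from fractions import Fraction

WAR = -2  # payoff to both sides of a full-scale nuclear war (Figure 3.09)

def escalate_value(p_war, continuation):
    """Expected payoff of escalating: the war payoff weighted by the
    probability of losing control, plus the continuation value
    discounted by the survival probability."""
    return p_war * WAR + (1 - p_war) * continuation

# Node 2: escalating carries a 1/3 chance of war, expected war cost -2/3.
print(Fraction(1, 3) * WAR)   # -2/3
# Node 3: the chance rises to 2/3, expected war cost -4/3.
print(Fraction(2, 3) * WAR)   # -4/3
# Challenger at Node 3: if the game survives (probability 1/3), the
# defender submits at Node 4 and the challenger prevails (+1), so:
E = escalate_value(Fraction(2, 3), 1)
print(E)  # -1, exactly the backdown payoff: the challenger is indifferent
```

The indifference at Node 3 is what produces the two equilibria: the challenger can rationally commit to either escalating or backing down there.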
The trouble is that if one adds an escalation step with a similar pattern (incrementing the chance of autonomous nuclear war by 20% at each step of the ladder), the result is deterrence (see homework problem #4). Indeed, adding one escalation step at a time produces an oscillating pattern: with an even number of equal steps there is a successful challenge, and with an odd number there is deterrence. The problem with such a model is that, in reality, it is mostly up to the players to choose the magnitude and number of escalation steps. This is of course possible to model if one adds several branches to the game of Figure 3.10. With enough options, the challenger can then challenge exactly to the point where the next escalation step is too risky for the defender. Then the challenger always wins. This, however, presupposes that the challenger knows exactly where that point is. There are other debatable features in Powell's models. At the last step the deciding player has no choice other than defeat or a deliberate plunge into full-scale nuclear war. So, the results depend on the assumption that there is no further choice short of nuclear attack. In particular, the game allows no temporizing or de-escalation. This is troublesome since there is almost always such an alternative. Finally, the attack option at all escalation steps is purely cosmetic since it is always inferior to quitting. One could just as well dispense with it. This type of model becomes more interesting if one adds incomplete information features about the cost of submitting versus that of taking nuclear risks. The critical strategic decision for the challenger is then to target as precisely as possible the escalation level that the defender will not be willing to raise. A typical one-sided incomplete information version of the above game is pictured in Figure 3.11.
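This parity effect can be reproduced by mechanical backward induction on an n-step ladder (a sketch, not GamePlan output; it assumes the payoff pattern of Figures 3.09-3.10: 0 for the status quo, +1/-1 for prevailing/quitting, -2 for war, and a war risk of (k-1)/n when escalating at node k):

```python
from fractions import Fraction

WAR, WIN, LOSE, QUO = -2, 1, -1, 0  # payoffs assumed from Figures 3.09-3.10

def solve_ladder(n):
    """Backward induction on an n-step escalation ladder: escalating at
    node k (k = 2..n) triggers war with probability (k-1)/n, and at node
    n+1 the mover can only submit (attacking, at -2, is worse).
    Returns the Node 1 outcome and whether an indifference occurred."""
    mover, other = LOSE, WIN        # node n+1: submitting (-1) beats attacking (-2)
    tie = False
    for k in range(n, 1, -1):       # fold back through nodes n, ..., 2
        p = Fraction(k - 1, n)      # chance of losing control at node k
        esc = p * WAR + (1 - p) * other
        if esc == LOSE:
            tie = True              # quitting and escalating are equally good
        if esc > LOSE:              # escalate; roles swap at the next node up
            mover, other = esc, p * WAR + (1 - p) * mover
        else:                       # quit: the mover loses, the other side wins
            mover, other = LOSE, WIN
    # Node 1: challenging pays what the challenger gets once Node 2 is reached.
    return ("challenge" if other > QUO else "wait"), tie

for n in (3, 4, 5):
    print(n, solve_ladder(n))
# 3 steps: an indifference at Node 3, hence the two equilibria of Figure 3.09
# 4 steps: the unique challenge equilibrium of Figure 3.10
# 5 steps: deterrence, as in homework problem #4
```

The routine breaks the occasional indifference toward quitting while flagging it; whenever such a tie occurs (as at Node 3 of the three-step game), the ladder has more than one equilibrium rather than a unique outcome.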
Figure 3.11: One-Sided Incomplete Information

In the lower part of the game, the defender is strong since the cost of submitting is higher. Of course, the solution depends on the probability that the defender is strong. In essence, this is a variation of the model of section 3.6 with the added features of discounting and costly non-final moves. Two-sided incomplete information games are also possible with this structure.

3.8 Crisis Stability15

The models proposed by Powell and by Kilgour and Zagare have their advantages in capturing some essential effects of uncertainty on deterrence and brinkmanship. But the limitations on timing and options implicit in both models are substantial. For instance, Powell does not allow a de-escalation option. The players can either escalate or surrender. The Kilgour and Zagare types of models do allow the challenger to back down, but this is pictured as quasi-surrender. And in both classes of models the timing is highly contrived. In real crises, there is always a tomorrow and the possibility to temporize or to initiate some de-escalation sequence that may lead, through tacit bargaining, to a stabilization of the crisis and to an eventual return to the status quo ante. It is in response to those criticisms that the following alternative models have been proposed.

15 Section 7 of Chapter 1 on advanced game structures should be read before this one.

Figure 3.12: An Open Ended Crisis Game

At the base of any crisis is a status quo that can either persist or be challenged by one side or the other. A challenge leads to a state of crisis that can also either persist or be resolved to the advantage of one side. To represent the persistence of a state of affairs, the game of Figure 3.12 involves two loops. In the status quo loop, the players play in turn and either may stay or escalate. If one player escalates, they move to the crisis loop.
If he stays, he gives the opponent the turn and both receive a payoff of zero. Note that a discount factor of 0.99 is associated with the choice of stay. This is similar to the models of the previous section: one may imagine a chance node on each of the "stay" moves with a probability 0.99 of continuing the game and a probability 0.01 of ending it in the outcome (0,0). As a result, the players value the immediate future with a discount factor of 0.99. If an escalation occurs, the players find themselves in the crisis loop. There, the discount factor is 0.8 and a payoff of -2 to both sides is involved in each "resist" move. Here again, one may picture a chance move with probability 0.2 of ending the game in an outcome (-10,-10). If either side submits, the other side prevails and the crisis ends. This game has four distinct solutions. Two are pre-emption equilibria where one side escalates and the other side submits. In a third solution, both sides escalate but neither submits with certainty. Instead, a crisis persists with probability 0.6786 at each turn. This is a brinkmanship equilibrium where each side hopes that the other side will be the first to blink. The fourth solution is a deterrence equilibrium where neither side escalates for fear of a continuing crisis. One limitation of this model is the absence of any de-escalation option. The game of Figure 3.13 offers one remedy by introducing a few more escalation steps. A first escalation is only a challenge that may lead to an immediate surrender by the other side. Only if that other side chooses to resist does the game enter the crisis stage. In that case, the initial challenger may backdown, wait, or escalate further by choosing strike. This game has only two solutions: a pre-emption by either side, or deterrence.

Figure 3.13: The De-escalation Option

However, the model still suffers from some drawbacks.
For instance, neither side gains any initial advantage from escalating, and either side alone can de-escalate a crisis initiated by the other and return to the status quo. This is not realistic. The game of Figure 3.14 is the next answer.

Figure 3.14: Mutual Deterrence

Here, if one side escalates and the other does not take up the challenge, the game enters a new loop where the challenger has a clear benefit and the defender suffers losses. Moreover, once the crisis loop is entered, it takes both sides' willingness to de-escalate in order to return to the status quo ante. Of course, various discount factors account for the chances of a loss of control in the heat of the crisis. This game has thirteen distinct solutions (although many are symmetrical). Two interesting ones merit brief discussion. One could be called the Cold War equilibrium, whereby both sides tend to challenge each other periodically and go through a few crisis turns before de-escalating back to the status quo ante. There is also a slightly less dangerous variation of that Cold War equilibrium where both sides challenge each other somewhat less frequently and return to the status quo without engaging in a full-scale crisis. Of course, further variations on the theme of these models are possible, with different coefficients and more complex structures. But one feature that recurs in the models of this section is the possibility of deterrence that allows crises to occur, escalate, and eventually return to the status quo ante. In both of the above classes of models (Powell and Kilgour-Zagare) this behavior never seems to occur. Indeed, in many cases there is a very efficient deterrence that never allows crises to develop. And when they do develop, they never return to the status quo. Finally, one must note the explosion of the number of solutions in the last model.
This is in fact a typical discounted stochastic game model, where solutions are generally numerous. This again raises a question that game theory has never dealt with adequately: which of these equilibria is the most likely rational outcome of such a game?

3.9 Extended Deterrence

Although the French felt very insecure about the U.S. nuclear umbrella during the Cold War, there is ample historical evidence of states relying on ententes and alliances to ensure their security. Indeed, NATO is the most prominent example of a successful defensive alliance. Each member relies on the commitment by all other members to join in its defense to deter anyone from threatening its vital interests. Sometimes, however, ententes or alliances can have the opposite of a deterrent effect. One of the most prominent examples is the July Crisis of 1914 that led to WWI. After the assassination, in Sarajevo, of the heir to the Dual Monarchy by a Serbian nationalist, Austria sent Serbia an unacceptable ultimatum.16 But, before engaging in actual hostilities, the Austrians checked that their German allies would support them, should Russia come to the aid of its Serbian ally, and they were given "carte blanche."17 In turn, Russia checked on its French ally before committing to the defense of Serbia. And France, as well as everyone else, checked on Great Britain and even on the United States, Italy, and so on. This created a situation where each protagonist could factor a probability of military assistance into its calculus of respective capabilities. Optimistic but contradictory estimates on the two sides led to an uncontrollable escalation. There is in fact a difference between ententes and alliances. The former is merely an agreement between the partners, while the latter is a formal and usually public commitment.
So, if an alliance leaves (hopefully) little doubt about the responsibilities of the parties to come to each other's aid, an entente can leave the nature and strength of the commitment to anyone's best guess. This situation is pictured in Figure 3.15.

16 Austria demanded that its police be given free access to Serbia in order to investigate the crime, a clearly unacceptable encroachment on Serbian sovereignty. It has been argued that Austria's real purpose was to check the growing Serbian power in the Balkans.
17 This meant that Germany would fight on the side of Austria whatever the reasons.

The Entente may or may not come into play, depending on the strength of Supporter's commitment to Defender. Challenger, however, is left guessing about it. So, Supporter is first to move in making its commitment, presumably at the request of Defender. Challenger, however, faces the decision to go ahead with its challenge with some uncertainty about what was really agreed to, and this is represented by the information sets at its two turns. After Defender chooses to resist, Challenger may be facing a single Defender or the whole Entente with much greater capabilities. The game has several solutions, some achieving deterrence and others not. In the final analysis, this is a question of expectation formation. If Supporter's declaration is sufficiently firm, Challenger will believe that it is likely to face the Entente. On the other hand, Supporter might lack credibility and still be dragged into the fight, as may have been the case in July 1914.

Figure 3.15: An Entente Deterrence Game

3.10 Economic Issues of Entry Deterrence

Neoclassical economics celebrates competition. If firms compete for customers, prices will be lower and products more abundant than if firms are allowed to monopolize a market, shifting power away from consumers to a single greedy profit-maximizing producer.
The social benefits of competition are considered sufficiently great for anticompetitive behavior to be prohibited by law. If the erection of barriers to competition is considered reprehensible and even punishable by law, the maximization of one's own profit is not. In fact, the true beauty of the market system allegedly lies in its ability to foster outcomes that are good for society while allowing each individual agent the freedom to act in his own self-interest. But then, a monopolist should indeed try to protect his status, since entry of a new firm in the market would necessarily reduce his profits. Barring foul play, what is reprehensible about protecting one's own interests? And if there is profitable room for two, could attempts by monopolists to lock out potential competitors always be successful? In the terms of game theory, one would like to know whether an incumbent firm can credibly and rationally deter the entry of a potential rival. This requires that the rival be persuaded that, because of the retaliatory measure threatened by the incumbent, entry will result in a negative payoff. But it must also be the case that, if deterrence fails, it is still in the incumbent's best interest to implement the threatened course of action. It turns out that such strategic entry deterrence is indeed possible, and its terms have been explored and refined in the vast game theoretic literature on entry deterrence. Following Wilson (1992), one can distinguish three types of models: preemption models, signaling models, and predation models. Preemption refers to actions, such as early investment in productive capacity, customer goodwill, or research and development, taken by the incumbent in order to preserve its monopoly position. Signaling models focus instead on the ways in which an incumbent can reliably convey information that discourages potential entrants.
Finally, predation refers to those costly actions that an incumbent may take against an entrant in order to discourage any subsequent entry. In this section we will only discuss the predation model. The story of predation starts with a characterization of the incumbent firm's character. The incumbent can be weak or strong. If weak, it would lose if it fought a firm that enters its market. If strong, the incumbent loses more by accommodating a new entrant than by fighting him. So, again, two players are involved in such a game: the incumbent and the entrant. The entrant can decide to enter or not, while the incumbent then decides to accommodate or fight. An entrant that is fought has a negative payoff, while an entrant that is accommodated has a positive outcome. A simple game model of this situation can be derived from Figure 3.01 with adequate payoff parameters. But the game becomes more strategically interesting if the incumbent can face several rounds of entry, as in Figure 3.02, with a first round of entry and incumbent fighting or accommodation, followed by a second round of entry and an incumbent response. The game reproduced below is inspired by Kreps and Wilson (1982).

Figure 3.16: The Chain Store Paradox

Even with a small probability that the incumbent is strong, the potential entrant can be deterred. This is because a weak incumbent has an incentive to fight a first round of entry. Such a response would deter a second round of entry, since the potential entrant would now be persuaded that the incumbent is strong. The game has several such solutions that shade each firm's behavior somewhat. In particular, the entrant may be deterred from entry probabilistically in the second round. But all solutions have one behavioral point in common: the incumbent always fights a first-round entrant with positive probability, even if he is weak.

Homework

1.
Edit the game of Table 3.01 by entering symmetric payoffs that respect the given preference ordering and solve using GamePlan. What happens to the mixed equilibrium as the payoff of nuclear disaster becomes worse and worse?
2. Edit the game of Figure 3.03 by adding Nodes 5, 6, and 7 with a structure similar to that of Nodes 2, 3, and 4. Then make the hawkish move from Node 3 continue on to Node 5. Add and adjust payoffs to reflect the possible pluses and minuses in utilities resulting from the adoption of an early hawkish stand by both sides. Solve the game and comment on the solutions.
3. Edit the chance node probabilities (the initial beliefs) in Figure 3.05, first to 50% and then to 90% chances that the defender is strong. Solve in perfect Bayesian mode and discuss your solutions in terms of the credibility of deterrence.
4. Edit the game of Figure 3.10 by adding one escalation step and changing the incremental probability of nuclear war from 25% to 20% at each step. Solve.
5. Solve all three game models of section 3.8 in Markov perfect and sequential modes and discuss the solutions.
6. Solve the game of section 3.9 in sequential mode and discuss the solutions.