CPS 296.1: Normal-form games
Vincent Conitzer
conitzer@cs.duke.edu

Rock-paper-scissors
• Row player (also known as player 1) chooses a row; column player (also known as player 2) (simultaneously) chooses a column
• A row or column is called an action or (pure) strategy

              Rock     Paper    Scissors
  Rock        0, 0     -1, 1     1, -1
  Paper       1, -1     0, 0    -1, 1
  Scissors   -1, 1      1, -1    0, 0

• Row player's utility is always listed first, column player's second
• Zero-sum game: the utilities in each entry sum to 0 (or a constant)
• A three-player game would be a 3D table with 3 utilities per entry, etc.

Matching pennies (~penalty kick)

        L        R
  L     1, -1   -1, 1
  R    -1, 1     1, -1

"Chicken"
• Two players drive cars towards each other
• If one player goes straight, that player wins
• If both go straight, they both die
• Not zero-sum:

        D        S
  D     0, 0    -1, 1
  S     1, -1   -5, -5

Rock-paper-scissors – Seinfeld variant
MICKEY: All right, rock beats paper! (Mickey smacks Kramer's hand for losing)
KRAMER: I thought paper covered rock.
MICKEY: Nah, rock flies right through paper.
KRAMER: What beats rock?
MICKEY: (looks at hand) Nothing beats rock.

              Rock     Paper    Scissors
  Rock        0, 0      1, -1    1, -1
  Paper      -1, 1      0, 0    -1, 1
  Scissors   -1, 1      1, -1    0, 0

Dominance
• Player i's strategy si strictly dominates si' if
  – for any s-i, ui(si, s-i) > ui(si', s-i)
• si weakly dominates si' if
  – for any s-i, ui(si, s-i) ≥ ui(si', s-i); and
  – for some s-i, ui(si, s-i) > ui(si', s-i)
• -i = "the player(s) other than i"
• In the Seinfeld variant above, Rock strictly dominates Paper and weakly dominates Scissors

Prisoner's Dilemma
• A pair of criminals has been caught
• The district attorney has evidence to convict them of a minor crime (1 year in jail); he knows that they committed a major crime together (3 years in jail) but cannot prove it
• He offers them a deal:
  – If both confess to the major crime, they each get a 1-year reduction
  – If only one confesses, that one gets a 3-year reduction

                    confess    don't confess
  confess           -2, -2      0, -3
  don't confess     -3, 0      -1, -1

"Should I buy an SUV?"
• Each player's total cost = purchasing cost + accident cost
  – Purchasing cost: 5 for an SUV, 3 for a regular car
  – Accident cost: 5 for each driver if both have the same type of car; 2 for the SUV driver and 8 for the other driver if only one has an SUV
• Utilities are the negatives of total cost:

              SUV         regular
  SUV        -10, -10     -7, -11
  regular    -11, -7      -8, -8

Mixed strategies
• A mixed strategy for player i = a probability distribution over player i's (pure) strategies
• E.g., (1/3, 1/3, 1/3) over Rock, Paper, Scissors
• Example of dominance by a mixed strategy: the row player's third strategy is strictly dominated by the 1/2–1/2 mixture of the first two (which gives expected utility 1.5 against either column):

  1/2    3, 0    0, 0
  1/2    0, 0    3, 0
         1, 0    1, 0

• Usage: σi denotes a mixed strategy, si denotes a pure strategy

Checking for dominance by mixed strategies
• Linear program for checking whether strategy si* is strictly dominated by a mixed strategy:
• maximize ε
• such that:
  – for any s-i, Σsi psi ui(si, s-i) ≥ ui(si*, s-i) + ε
  – Σsi psi = 1
• Linear program for checking whether strategy si* is weakly dominated by a mixed strategy:
• maximize Σs-i [(Σsi psi ui(si, s-i)) - ui(si*, s-i)]
• such that:
  – for any s-i, Σsi psi ui(si, s-i) ≥ ui(si*, s-i)
  – Σsi psi = 1
• In both programs the psi are also required to be nonnegative, so that they form a probability distribution (a code sketch of the strict-dominance LP appears below)

Iterated dominance
• Iterated dominance: remove a (strictly/weakly) dominated strategy, repeat
• Iterated strict dominance on Seinfeld's RPS: Paper is strictly dominated by Rock for both players, leaving

              Rock     Scissors
  Rock        0, 0      1, -1
  Scissors   -1, 1      0, 0

Iterated dominance: path (in)dependence
• Iterated weak dominance is path-dependent: the sequence of eliminations may determine which solution we get (if any) (whether or not dominance by mixed strategies is allowed)

  0, 1    1, 0
  0, 0    0, 0
  1, 0    0, 1

• Iterated strict dominance is path-independent: the elimination process will always terminate at the same point (whether or not dominance by mixed strategies is allowed)
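Below is a minimal sketch, not from the original slides, of how the strict-dominance LP above might be set up in Python with scipy.optimize.linprog, using the 3x2 example game from the "Mixed strategies" slide; the function name and variable layout are illustrative assumptions.

```python
# Sketch (assumption: not part of the original slides) of the strict-dominance LP.
import numpy as np
from scipy.optimize import linprog

# Row player's utilities from the "Mixed strategies" example:
# rows = row player's pure strategies, columns = column player's pure strategies.
U = np.array([[3.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0]])

def strictly_dominated_by_mixture(U, target, candidates):
    """Is row `target` strictly dominated by some mixture over `candidates`?
    LP: maximize eps  s.t.  for every column c:
        sum_k p_k U[k, c] >= U[target, c] + eps,   sum_k p_k = 1,   p >= 0."""
    k = len(candidates)
    # Decision variables: p_1, ..., p_k, eps.  linprog minimizes, so negate eps.
    c = np.zeros(k + 1)
    c[-1] = -1.0
    # Each constraint rewritten as  -sum_k p_k U[k, c] + eps <= -U[target, c].
    A_ub = np.hstack([-U[candidates, :].T, np.ones((U.shape[1], 1))])
    b_ub = -U[target, :]
    A_eq = np.array([[1.0] * k + [0.0]])          # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * k + [(None, None)]        # p in [0, 1], eps free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    eps = -res.fun
    return eps > 1e-9, eps

print(strictly_dominated_by_mixture(U, target=2, candidates=[0, 1]))
# (True, 0.5): the 1/2-1/2 mixture of the first two rows beats the third row by 1/2
```

The weak-dominance LP from the slide can be set up the same way, with the sum of the slacks as the objective instead of ε.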
Two computational questions for iterated dominance
• 1. Can a given strategy be eliminated using iterated dominance?
• 2. Is there some path of elimination by iterated dominance such that only one strategy per player remains?
• For strict dominance (with or without dominance by mixed strategies), both can be solved in polynomial time due to path-independence:
  – Check whether any strategy is dominated, remove it, repeat
• For weak dominance, both questions are NP-hard (even when all utilities are 0 or 1), with or without dominance by mixed strategies [Conitzer, Sandholm 05]
  – A weaker version was proved by [Gilboa, Kalai, Zemel 93]

Two-player zero-sum games revisited
• Recall: in a zero-sum game, the payoffs in each entry sum to zero
  – … or to a constant: recall that we can subtract a constant from anyone's utility function without affecting their behavior
• What one player gains, the other player loses

              Rock     Paper    Scissors
  Rock        0, 0     -1, 1     1, -1
  Paper       1, -1     0, 0    -1, 1
  Scissors   -1, 1      1, -1    0, 0

• Note: a general-sum k-player game can be modeled as a zero-sum (k+1)-player game by adding a dummy player absorbing the remaining utility, so zero-sum games with 3 or more players have to deal with the difficulties of general-sum games; this is why we focus on 2-player zero-sum games here

Best-response strategies
• Suppose you know your opponent's mixed strategy
  – E.g., your opponent plays Rock 50% of the time and Scissors 50%
• What is the best strategy for you to play?
• Rock gives .5*0 + .5*1 = .5
• Paper gives .5*1 + .5*(-1) = 0
• Scissors gives .5*(-1) + .5*0 = -.5
• So the best response to this opponent strategy is to (always) play Rock
• There is always some pure strategy that is a best response
  – Suppose you have a mixed strategy that is a best response; then every one of the pure strategies that that mixed strategy places positive probability on must also be a best response

How to play matching pennies

                  Them
               L        R
  Us    L      1, -1   -1, 1
        R     -1, 1     1, -1

• Assume the opponent knows our mixed strategy
• If we play L 60%, R 40%…
• … the opponent will play R…
• … and we get .6*(-1) + .4*(1) = -.2
• What's optimal for us? What about rock-paper-scissors?

Matching pennies with a sensitive target

                  Them
               L        R
  Us    L      1, -1   -1, 1
        R     -2, 2     1, -1

• If we play 50% L, 50% R, the opponent will attack L
  – We get .5*(1) + .5*(-2) = -.5
• What if we play 55% L, 45% R?
• The opponent has a choice between
  – L: gives them .55*(-1) + .45*(2) = .35
  – R: gives them .55*(1) + .45*(-1) = .1
• So they attack L, and we get -.35 > -.5

Matching pennies with a sensitive target (continued)
• What if we play 60% L, 40% R?
• The opponent has a choice between
  – L: gives them .6*(-1) + .4*(2) = .2
  – R: gives them .6*(1) + .4*(-1) = .2
• We get -.2 either way
• This is the maximin strategy
  – It maximizes our minimum utility

Let's change roles (same game)
• Suppose we know their strategy
• If they play 50% L, 50% R,
  – we play L, and we get .5*(1) + .5*(-1) = 0
• If they play 40% L, 60% R,
  – if we play L, we get .4*(1) + .6*(-1) = -.2
  – if we play R, we get .4*(-2) + .6*(1) = -.2
• This is the minimax strategy
• von Neumann's minimax theorem [1927]: maximin value = minimax value (~LP duality)

Minimax theorem [von Neumann 1927]
• Maximin utility: maxσi mins-i ui(σi, s-i) (= - minσi maxs-i u-i(σi, s-i))
• Minimax utility: minσ-i maxsi ui(si, σ-i) (= - maxσ-i minsi u-i(si, σ-i))
• Minimax theorem: maxσi mins-i ui(σi, s-i) = minσ-i maxsi ui(si, σ-i)
• The minimax theorem does not hold with pure strategies only (example?)

Practice games

  20, -20    0, 0
   0, 0     10, -10

  20, -20    0, 0      10, -10
   0, 0     10, -10     8, -8

Solving for minimax strategies using linear programming
• maximize ui
• subject to:
  – for any s-i, Σsi psi ui(si, s-i) ≥ ui
  – Σsi psi = 1
• Linear programs can also be converted into two-player zero-sum games, so the two are equivalent
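As a rough illustration, here is a minimal sketch, not from the original slides, of the LP above applied to "matching pennies with a sensitive target"; the use of scipy.optimize.linprog and the function name are assumptions made for this example.

```python
# Sketch (assumption: not part of the original slides) of the maximin LP
# for a two-player zero-sum game.
import numpy as np
from scipy.optimize import linprog

# Our (row player's) utilities in matching pennies with a sensitive target;
# the opponent's utilities are the negatives.
U = np.array([[ 1.0, -1.0],
              [-2.0,  1.0]])

def maximin_strategy(U):
    """maximize v  s.t.  for every opponent pure strategy j:
       sum_i p_i U[i, j] >= v,   sum_i p_i = 1,   p >= 0."""
    n_rows, n_cols = U.shape
    # Variables: p_1, ..., p_n, v.  linprog minimizes, so the objective is -v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0
    # Each constraint rewritten as  -sum_i p_i U[i, j] + v <= 0.
    A_ub = np.hstack([-U.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    A_eq = np.array([[1.0] * n_rows + [0.0]])     # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_rows + [(None, None)]   # p in [0, 1], v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:n_rows], -res.fun

print(maximin_strategy(U))   # ~([0.6, 0.4], -0.2), matching the 60%/40% analysis above
```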
General-sum games
• You could still play a maximin strategy in general-sum games
  – I.e., pretend that the opponent is only trying to hurt you
• But this is not rational:

            Left     Right
  Up        0, 0     3, 1
  Down      1, 0     2, 1

• If Column were trying to hurt Row, Column would play Left, so Row should play Down
• In reality, Column will play Right (it is strictly dominant), so Row should play Up
• Is there a better generalization of maximin strategies in zero-sum games to general-sum games?

Nash equilibrium [Nash 50]
• A vector of strategies (one for each player) is called a strategy profile
• A strategy profile (σ1, σ2, …, σn) is a Nash equilibrium if each σi is a best response to σ-i
  – That is, for any i, for any σi', ui(σi, σ-i) ≥ ui(σi', σ-i)
• Note that this does not say anything about multiple agents changing their strategies at the same time
• In any (finite) game, at least one Nash equilibrium (possibly using mixed strategies) exists [Nash 50]
• (Note – singular: equilibrium, plural: equilibria)

Nash equilibria of "chicken"

        D        S
  D     0, 0    -1, 1
  S     1, -1   -5, -5

• (D, S) and (S, D) are Nash equilibria
  – They are pure-strategy Nash equilibria: nobody randomizes
  – They are also strict Nash equilibria: changing your strategy will make you strictly worse off
• There are no other pure-strategy Nash equilibria

Nash equilibria of "chicken"…
• Is there a Nash equilibrium that uses mixed strategies? Say, where player 1 uses a mixed strategy?
• Recall: if a mixed strategy is a best response, then all of the pure strategies that it randomizes over must also be best responses
• So we need to make player 1 indifferent between D and S
• Let pcD and pcS be the probabilities with which the column player plays D and S
• Player 1's utility for playing D = -pcS
• Player 1's utility for playing S = pcD - 5pcS = 1 - 6pcS
• So we need -pcS = 1 - 6pcS, which means pcS = 1/5
• Then, player 2 needs to be indifferent as well
• Mixed-strategy Nash equilibrium: ((4/5 D, 1/5 S), (4/5 D, 1/5 S))
  – People may die! Expected utility -1/5 for each player
  – (A code sketch of this indifference calculation appears below)

The presentation game
(rows: audience; columns: presenter; the audience's utility is listed first)

                                Put effort into       Do not put effort into
                                presentation (E)      presentation (NE)
  Pay attention (A)              4, 4                  -16, -14
  Do not pay attention (NA)      0, -2                   0, 0

• Pure-strategy Nash equilibria: (A, E), (NA, NE)
• Mixed-strategy Nash equilibrium: ((1/10 A, 9/10 NA), (4/5 E, 1/5 NE))
  – Utility 0 for the audience, -14/10 for the presenter
• We can see that some equilibria are strictly better for both players than other equilibria, i.e., some equilibria Pareto-dominate other equilibria

The "equilibrium selection problem"
• You are about to play a game that you have never played before with a person that you have never met
• According to which equilibrium should you play?
• Possible answers:
  – The equilibrium that maximizes the sum of utilities (social welfare)
  – Or, at least, not a Pareto-dominated equilibrium
  – A so-called focal equilibrium
    ("Meet in Paris" game: you and a friend were supposed to meet in Paris at noon on Sunday, but you forgot to discuss where and you cannot communicate. All you care about is meeting your friend. Where will you go?)
  – An equilibrium that is the convergence point of some learning process
  – An equilibrium that is easy to compute
  – …
• Equilibrium selection is a difficult problem
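The indifference calculation used for "chicken" above can be written out for any 2x2 game that has a fully mixed equilibrium. The sketch below is not from the original slides; the function name and the closed-form expressions are illustrative, and it assumes the denominators are nonzero.

```python
# Sketch (assumption: not part of the original slides) of the 2x2 indifference
# calculation; A[i, j] = row player's utility, B[i, j] = column player's utility.
import numpy as np

def fully_mixed_2x2_equilibrium(A, B):
    """Return (p, q): the probabilities of the FIRST row and FIRST column in a
    fully mixed equilibrium, obtained from the two indifference conditions."""
    # Column's mix q makes the row player indifferent between her two rows:
    #   q*A[0,0] + (1-q)*A[0,1] = q*A[1,0] + (1-q)*A[1,1]
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
    # Row's mix p makes the column player indifferent between his two columns:
    #   p*B[0,0] + (1-p)*B[1,0] = p*B[0,1] + (1-p)*B[1,1]
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1])
    return p, q

# Chicken, with strategies ordered (D, S) for both players:
A = np.array([[0.0, -1.0], [1.0, -5.0]])    # row player's utilities
B = np.array([[0.0,  1.0], [-1.0, -5.0]])   # column player's utilities
p, q = fully_mixed_2x2_equilibrium(A, B)
print(p, q)                                 # 0.8 0.8: both play D with probability 4/5
print(q * A[0, 0] + (1 - q) * A[0, 1])      # -0.2: expected utility -1/5, as above
```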
Some properties of Nash equilibria
• If you can eliminate a strategy using strict dominance or even iterated strict dominance, it will not occur (i.e., it will be played with probability 0) in any Nash equilibrium
  – Weakly dominated strategies may still be played in some Nash equilibrium
• In 2-player zero-sum games, a profile is a Nash equilibrium if and only if both players play minimax strategies
  – Hence, in such games, if (σ1, σ2) and (σ1', σ2') are Nash equilibria, then so are (σ1, σ2') and (σ1', σ2)
  – No equilibrium selection problem here!

How hard is it to compute one (any) Nash equilibrium?
• The complexity was open for a long time
  – [Papadimitriou STOC01]: "together with factoring […] the most important concrete open question on the boundary of P today"
• A recent sequence of papers shows that computing one (any) Nash equilibrium is PPAD-complete (even in 2-player games) [Daskalakis, Goldberg, Papadimitriou 2006; Chen, Deng 2006]
• All known algorithms require exponential time (in the worst case)

What if we want to compute a Nash equilibrium with a specific property?
• For example:
  – An equilibrium that is not Pareto-dominated
  – An equilibrium that maximizes the expected social welfare (= the sum of the agents' utilities)
  – An equilibrium that maximizes the expected utility of a given player
  – An equilibrium that maximizes the expected utility of the worst-off player
  – An equilibrium in which a given pure strategy is played with positive probability
  – An equilibrium in which a given pure strategy is played with zero probability
  – …
• All of these are NP-hard (and the optimization questions are inapproximable assuming P ≠ NP), even in 2-player games [Gilboa, Zemel 89; Conitzer & Sandholm IJCAI-03/GEB-08]

Search-based approaches (for 2 players)
• Suppose we know the support Xi of each player i's mixed strategy in equilibrium
  – That is, which pure strategies receive positive probability
• Then, we have a linear feasibility problem:
  – for both i, for any si ∈ Si - Xi, pi(si) = 0
  – for both i, for any si ∈ Xi, Σs-i p-i(s-i) ui(si, s-i) = ui
  – for both i, for any si ∈ Si - Xi, Σs-i p-i(s-i) ui(si, s-i) ≤ ui
• Thus, we can search over possible supports
  – This is the basic idea underlying the methods in [Dickhaut & Kaplan 91; Porter, Nudelman, Shoham AAAI04/GEB08]
  – (A code sketch of this support enumeration appears below)
• Dominated strategies can be eliminated

Solving for a Nash equilibrium using MIP (2 players) [Sandholm, Gilpin, Conitzer AAAI05]
• maximize whatever you like (e.g., social welfare)
• subject to:
  – for both i, for any si, Σs-i ps-i ui(si, s-i) = usi
  – for both i, for any si, ui ≥ usi
  – for both i, for any si, psi ≤ bsi
  – for both i, for any si, ui - usi ≤ M(1 - bsi)
  – for both i, Σsi psi = 1
• bsi is a binary variable indicating whether si is in the support, M is a large number

Lemke-Howson algorithm (1-slide sketch!)

  1, 0    0, 2
  0, 1    1, 0

(Figure: player 2's utility as a function of player 1's mixed strategy, and player 1's utility as a function of player 2's mixed strategy; each pure strategy is drawn in its own color — red, blue, green, orange — and both graphs are redrawn with best-response and unplayed strategies marked.)
• A strategy profile = a pair of points
• A profile is an equilibrium iff every pure strategy is either a best response or unplayed
• I.e., an equilibrium = a pair of points that includes all the colors
  – … except that the pair of bottom points doesn't count (the "artificial equilibrium")
• Walk in some direction from the artificial equilibrium; at each step, throw out the color used twice
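The support-enumeration idea from the "Search-based approaches" slide can be sketched as follows: guess a support for each player, then solve the linear feasibility problem. This sketch is not from the original slides; the brute-force enumeration order, the function name, and the use of scipy.optimize.linprog are assumptions, and real implementations such as Porter-Nudelman-Shoham add heuristics (e.g., trying small, balanced supports first and pruning dominated strategies).

```python
# Sketch (assumption: not part of the original slides) of Nash equilibrium
# search by support enumeration for a two-player game.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def nash_by_support_enumeration(A, B):
    """A, B: utility matrices of the row and column player (same shape).
    Try every support pair; return (p1, p2) for the first pair whose
    linear feasibility problem has a solution, or None."""
    m, n = A.shape
    for X1 in (set(c) for k in range(1, m + 1) for c in combinations(range(m), k)):
        for X2 in (set(c) for k in range(1, n + 1) for c in combinations(range(n), k)):
            nv = m + n + 2                       # variables: p1 (m), p2 (n), u1, u2
            A_eq, b_eq, A_ub, b_ub = [], [], [], []
            bounds = [(0.0, 1.0)] * (m + n) + [(None, None)] * 2
            for r in range(m):                   # row player's utility constraints
                row = np.zeros(nv); row[m:m + n] = A[r, :]; row[m + n] = -1.0
                (A_eq if r in X1 else A_ub).append(row)
                (b_eq if r in X1 else b_ub).append(0.0)
                if r not in X1:
                    bounds[r] = (0.0, 0.0)       # outside the support: probability 0
            for c in range(n):                   # column player's utility constraints
                col = np.zeros(nv); col[:m] = B[:, c]; col[m + n + 1] = -1.0
                (A_eq if c in X2 else A_ub).append(col)
                (b_eq if c in X2 else b_ub).append(0.0)
                if c not in X2:
                    bounds[m + c] = (0.0, 0.0)
            s1 = np.zeros(nv); s1[:m] = 1.0      # p1 sums to 1
            s2 = np.zeros(nv); s2[m:m + n] = 1.0 # p2 sums to 1
            A_eq += [s1, s2]; b_eq += [1.0, 1.0]
            res = linprog(np.zeros(nv),
                          A_ub=np.array(A_ub) if A_ub else None,
                          b_ub=np.array(b_ub) if b_ub else None,
                          A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                          bounds=bounds, method="highs")
            if res.success:
                return res.x[:m], res.x[m:m + n]
    return None

# "Chicken" again: the search finds a Nash equilibrium (here the pure profile (D, S)).
A = np.array([[0.0, -1.0], [1.0, -5.0]])
B = np.array([[0.0,  1.0], [-1.0, -5.0]])
print(nash_by_support_enumeration(A, B))
```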
Correlated equilibrium [Aumann 74]
• Suppose there is a trustworthy mediator who has offered to help out the players in the game
• The mediator chooses a profile of pure strategies, perhaps randomly, then tells each player what her strategy is in the profile (but not what the other players' strategies are)
• A correlated equilibrium is a distribution over pure-strategy profiles such that every player wants to follow the recommendation of the mediator (if she assumes that the others do so as well)
• Every Nash equilibrium is also a correlated equilibrium
  – It corresponds to the mediator choosing the players' recommendations independently
• … but not vice versa
• (Note: there are more general definitions of correlated equilibrium, but it can be shown that they do not allow you to do anything more than this definition.)

A correlated equilibrium for "chicken"
(payoffs, with the mediator's probability for each profile in parentheses)

        D                S
  D     0, 0  (20%)     -1, 1  (40%)
  S     1, -1 (40%)     -5, -5 (0%)

• Why is this a correlated equilibrium?
• Suppose the mediator tells the row player to Dodge
• From Row's perspective, the conditional probability that Column was told to Dodge is 20% / (20% + 40%) = 1/3
• So the expected utility of Dodging is (2/3)*(-1) = -2/3
• But the expected utility of Straight is (1/3)*1 + (2/3)*(-5) = -3
• So Row wants to follow the recommendation
• If Row is told to go Straight, he knows that Column was told to Dodge, so again Row wants to follow the recommendation
• Similarly for Column

A nonzero-sum variant of rock-paper-scissors (Shapley's game [Shapley 64])
(payoffs, with the mediator's probability for each profile in parentheses)

              Rock           Paper          Scissors
  Rock        0, 0  (0)      0, 1  (1/6)    1, 0  (1/6)
  Paper       1, 0  (1/6)    0, 0  (0)      0, 1  (1/6)
  Scissors    0, 1  (1/6)    1, 0  (1/6)    0, 0  (0)

• If both choose the same pure strategy, both lose
• These probabilities give a correlated equilibrium:
• E.g., suppose Row is told to play Rock
• Row knows Column is playing either Paper or Scissors (50-50)
  – Playing Rock will give 1/2; playing Paper will give 0; playing Scissors will give 1/2
• So Rock is optimal (though not uniquely)

Solving for a correlated equilibrium using linear programming (n players!)
• The variables are now ps, where s is a profile of pure strategies
• maximize whatever you like (e.g., social welfare)
• subject to:
  – for any i, si, si', Σs-i p(si, s-i) ui(si, s-i) ≥ Σs-i p(si, s-i) ui(si', s-i)
  – Σs ps = 1
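Below is a minimal sketch, not from the original slides, of this LP for the two-player case, maximizing social welfare in "chicken" with scipy.optimize.linprog. The variable layout is an illustrative assumption, and the solver may return a different welfare-maximizing distribution than the 20%/40%/40%/0% example shown earlier (any optimum here places zero probability on (S, S)).

```python
# Sketch (assumption: not part of the original slides) of the correlated
# equilibrium LP, maximizing social welfare in "chicken".
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.0, -1.0], [1.0, -5.0]])    # row player's utilities (D, S)
B = np.array([[0.0,  1.0], [-1.0, -5.0]])   # column player's utilities
m, n = A.shape

def idx(r, c):
    return r * n + c                        # variable index of profile (r, c)

A_ub, b_ub = [], []
# Row player's incentive constraints: for all rows r, r' with r' != r,
#   sum_c p[r, c] * (A[r, c] - A[r', c]) >= 0   (deviating from r to r' doesn't help)
for r in range(m):
    for rp in range(m):
        if rp == r:
            continue
        row = np.zeros(m * n)
        for c in range(n):
            row[idx(r, c)] = -(A[r, c] - A[rp, c])   # rewritten as <= 0
        A_ub.append(row); b_ub.append(0.0)
# Column player's incentive constraints, symmetrically.
for c in range(n):
    for cp in range(n):
        if cp == c:
            continue
        row = np.zeros(m * n)
        for r in range(m):
            row[idx(r, c)] = -(B[r, c] - B[r, cp])
        A_ub.append(row); b_ub.append(0.0)

# Objective: maximize expected social welfare = sum_s p_s (u1(s) + u2(s)).
welfare = (A + B).flatten()
res = linprog(-welfare, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.ones((1, m * n)), b_eq=[1.0],
              bounds=[(0, 1)] * (m * n), method="highs")
print(res.x.reshape(m, n), -res.fun)   # a correlated equilibrium with welfare 0
```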