On the Non-Robustness∗ of Nash Implementation

Takashi Kunimoto†

Incomplete and Preliminary
First Version: March 2006
This Version: May 2006

Abstract. I consider the implementation problem under complete information and employ Nash equilibrium as a solution concept. A socially desirable rule is given as a correspondence from the set of states to the set of outcomes. This social choice rule is said to be implementable in Nash equilibrium if there exists a mechanism such that all (pure strategy) Nash equilibrium outcomes are socially desirable. Maskin (1999) clarified the conditions on social choice rules under which he was able to construct a very general Nash implementing mechanism when there are at least three players. This paper identifies the minimal amount of incomplete information which prevents the Maskinian mechanism from being robustly implemented in Nash equilibrium. More precisely, with mild constraints on the social choice rules, one can construct a canonical perturbation of the complete information structure under which a sequence of Bayesian Nash equilibria of the Maskinian mechanism supports a non-Nash equilibrium outcome in the limit. Therefore, there is a minimal sense in which the Maskinian mechanism is not robust to incomplete information. This non-robustness result is also extended to the environment with two players.

JEL classification: C72, D78, D82, D83
Keywords: Approximate common knowledge; Implementation; Robustness; Nash equilibrium

∗ I am thankful to Hassan Benchekroun, Dip Majumdar, and Roberto Serrano for the discussion. I am grateful for financial support from McGill University through the startup grant and the James McGill Professor fund.
† Department of Economics, McGill University, 855 Sherbrooke Street West, Montreal, Quebec, H3A 2T7, CANADA, takashi.kunimoto@mcgill.ca URL: http://people.mcgill.ca/takashi.kunimoto/

1 Introduction

Consider an environment with a set of players and a set of feasible outcomes, together with a set of possible states, each of which generates the players' preferences over these outcomes. In such an environment, implementation theory has provided a number of characterizations of which social choice rules can be achieved by checking whether one can design a mechanism (or game form) such that all the equilibrium outcomes of the induced game end up being socially optimal. Thus, the implementing mechanism must have the property that all equilibria of the mechanism lead to the desired outcomes. This is sometimes called full implementation. As the answer to this characterization relies crucially on the informational assumption and on the solution concept, this paper only considers environments with complete information and employs Nash equilibrium as a solution concept. Focusing on this special class of setups, Maskin (1999) showed that if a social choice rule is monotonic and satisfies no-veto-power (which I define later), it can be implemented in Nash equilibrium when there are at least three players.³ His proof is based on constructing a general mechanism all of whose Nash equilibrium outcomes coincide with the social choice rule. Although Maskin's theorem is general enough to encompass a wide range of applications, it has had a limited impact on the more applied mechanism design literature, such as auction theory and contract theory. This literature rather focuses on the existence of some equilibrium of some mechanism that leads to the desired outcomes.
For this less demanding question, the revelation principle lends us the following strong methodology: if the social choice rule can arise as an equilibrium in some game form, then it will arise in a truth-telling equilibrium of the direct mechanism (where each player truthfully reports his information and the designer chooses an outcome assuming the reports are truthful). One reason for the unpopularity of the Maskinian mechanism is that his result relies on a complicated indirect, or "augmented," mechanism (henceforth I abbreviate the Maskinian mechanism to the M-mechanism) in which players report more than their information, even though the multiplicity of equilibria is relevant in designing optimal mechanisms. In fact, the M-mechanism uses the so-called "integer game": the players are asked to announce an integer, and if certain strategy profiles occur, the player who announced the highest integer "wins." Although the integer game is a powerful device to knock down undesired outcomes, it has a very "unrealistic" nature that makes it impractical in the eyes of many applied researchers. Obviously, one possible route to tackle this problem is to ask when one can design a "reasonable" mechanism without relying on integer game-like devices. This paper does not take this direction, however. Rather, I take the M-mechanism seriously and I ask when the M-mechanism is no longer "robust."

³ Maskin's original paper had been circulated as an MIT working paper since 1977. However, his original proof was not entirely correct as stated.

There are at least three perspectives from which the Maskinian game form is not robust: (1) the non-robustness to renegotiation; (2) the non-robustness to collusion; and (3) the non-robustness to the informational assumption. This paper pays special attention to (3), the robustness to the informational assumption. The initial assumption this paper makes is complete information, which entails the common knowledge of payoffs. Since this common knowledge requirement seems to be very stringent, I take it as at best a simplifying assumption. Then, I will clarify whether there is any assumption weaker than common knowledge that is sufficient to sustain, or approximate, the Nash implementability given by the M-mechanism. The answer turns out to depend upon how I weaken the common knowledge assumption. I will follow and extend the approach proposed by Monderer and Samet (1989). The main result of this paper shows that, with mild constraints on the social choice rules, one can construct a canonical perturbation of the complete information structure under which a sequence of Bayesian Nash equilibria of the M-mechanism supports a non-Nash equilibrium outcome in the limit. Furthermore, in terms of topology, I quantify the minimal perturbation of the complete information needed for the main result. Namely, if I perturb the complete information in such a way that, with probability close to 1, there is approximate common knowledge in the sense of Monderer and Samet (1989) – which I carefully define later – about which payoff state is being realized, I obtain the non-robustness of the M-mechanism. This non-robustness result gives us a precise sense in which the M-mechanism is not robust if we believe that the perturbation of the complete information is only a very mild relaxation of the common knowledge assumption.
If this is the case, the M-mechanism is not attractive for the more applied mechanism design literature not necessarily because the integer game is unrealistic but because it is dubious to assume that complete information is exactly satisfied in a given environment.

Following "Wilson's doctrine," Bergemann and Morris [B-M] (2005) have also addressed some similar issues of robust implementation.⁴ B-M fix a payoff type space and extend it to a richer set of type spaces à la Harsanyi-Mertens-Zamir as a relaxation of the common knowledge assumption. Then, B-M consider all richer type spaces consistently extended from the original payoff type space, so that they define "robust implementation" as interim (Bayesian) implementation on all type spaces. In particular, B-M show that robust implementation is possible if and only if it is possible to implement the social choice rule using an iterative deletion of strictly dominated strategies. If I restrict my attention to complete information, B-M's result says that their robust implementation is equivalent to implementation in (correlated) rationalizability.

⁴ Wilson's doctrine is summarized by the following comment of Wilson himself (1987): "I foresee the progress of game theory as depending on successive reductions in the base of common knowledge required to conduct useful analyses of practical problems. Only by repeated weakening of common knowledge assumptions will the theory approximate reality."

While this paper and B-M (2005) share the same robustness criterion as a relaxation of common knowledge, there are two features which make this paper distinct from B-M (2005). First, using the equivalence class of type spaces induced by the topology, I require that implementation be robust only to this equivalence class of type spaces, and not to all type spaces. This makes the robustness of this paper substantially weaker than B-M's. Second, in order to make tight connections with the results in the implementation literature, I require that the class of permissible type spaces be restricted to be consistent with the use of Nash equilibrium under complete information. This requirement further makes the robustness of this paper weaker than B-M's. I elaborate on this a little bit more. Suppose, for example, that there are just two players i and j. The choice of strategy by i will depend on what i believes j's choice will be, which in turn will depend on what i believes j believes i's choice will be, and so on. In games with complete information, this infinite regress in beliefs can generally be "cut through" by the imposition of Nash equilibrium. If I believe that the requirement of "being cut through" is too restrictive, the use of Nash equilibrium is not appropriate under complete information and hence not appropriate for implementation either. If Nash implementation is not appropriate under complete information, there is no point in asking when Nash implementation is robust because it is not robust from the beginning. In this paper, I "assume" that Nash implementation is appropriate under complete information. Then, I ask when Nash implementation is robust or not.

The rest of the paper is organized as follows. In section 2, I formalize the setup and definitions. In section 3, I introduce the M-mechanism (the Maskinian mechanism) and summarize some existing results in the literature. In section 4, I show the main result. In section 5, I extend the main result to environments with two players. Section 6 concludes.
2 The Setup

2.1 The Environment

There is a finite set N = {1, . . . , n} of players. Let f : Θ → A be a social choice correspondence, where Θ denotes the set of payoff states, and A denotes the set of outcomes, which is fixed across the payoff states. I assume that the number of payoff states is at least two, i.e., |Θ| ≥ 2. Associated with each state θ is a preference profile ≿^θ, which is a list (≿_1^θ, . . . , ≿_n^θ), where ≿_i^θ is player i's preference relation over A in state θ. I read a ≿_i^θ a′ as "a is at least as good as a′ in state θ." I read a ≻_i^θ a′ as "a is strictly preferred to a′ in state θ."

Players do not observe the state directly but are informed of the state via signals. Player i's signal set is S_i and I assume |S_i| = |Θ| for each i ∈ N. A signal profile is an element s = (s_1, . . . , s_n) ∈ S = ×_{i∈N} S_i. Let μ be a prior probability over Θ × S. I designate s^θ to be the signal profile in which each player's signal s_i^θ corresponds to the state θ.⁵ Complete information refers to the environment in which μ(θ, s) = 0 whenever s ≠ s^θ. This specification of the signal space is without loss of generality as long as I only consider environments with complete information.

Given a mechanism Γ = (M, g), a designer is interested in the set of Nash equilibrium (NE) outcomes under complete information. Here M ≡ ×_{i∈N} M_i, M_i is player i's message space, and g : M → A is the outcome function. The designer, however, entertains the possibility that players face uncertainty about payoffs. Then, he has to take into account the set of "nearby" incomplete information structures in which the original complete information structure is subsumed. The designer then employs Bayesian Nash equilibrium (BNE) as a solution concept for those nearby incomplete information games. He wants to ascertain that his prediction about players' strategic behavior changes continuously with respect to some topology.

2.2 Definitions and Notations

Mathematically, the coarser the topology chosen, the larger the set of convergent nets with respect to it and, therefore, the harder the achievement of robust implementation with respect to it. By the same token, the finer the topology chosen, the less it demands that the BNE correspondence be continuous with respect to it. In order to talk about the topology, I have to explicitly expound the topological structure of the domain of the BNE correspondence, into which the players' belief structure is embedded.

Let (Θ × S, μ) be a complete information structure. I shall construct the set of states of the world, called Ω, that is consistently extended from a given complete information environment. I want to keep track of the complete information environment and let it be embedded in Ω so that we are able to coherently discuss the epistemic states of all players when the environment is subject to incomplete information.⁶

Definition 1 A space Θ × S is (algebraically) immersed in Ω if there exists a one-to-one correspondence h : Θ × S → Ω. h is said to be an immersion of Θ × S into Ω.

Definition 2 Assume that Θ × S is immersed in Ω. A probability space (Ω, Σ, P*) is a consistent extension of the complete information structure (Θ × S, μ) if (Ω, Σ, P*) is equivalent to the probability space (Θ × S, 2^{Θ×S}, μ), where P* ≡ μ ∘ h^{-1}.

Let (Ω, Σ, P*) be a probability space consistently extended from the complete information structure (Θ × S, μ). I fix a measurable space (Ω, Σ) throughout the argument while I change the probability distributions.
In particular, I am interested in a net of probability distributions {P^k}_{k=1}^∞ for which P^k → P* as k → ∞ in "a certain" property-preserving way. The very issue is that I have to be specific about the topology with respect to which two probability distributions are deemed to be close.

⁵ The notation s^θ will be used extensively throughout the paper.
⁶ Players' epistemic state dictates what they know or believe about the game and about each other's actions, knowledge, and beliefs.

Let 𝒫 be the space of all probability distributions over Ω. Let F be the space of all partitions of Ω, the elements of which are in Σ. Let F* be the subset of F such that for any Π ∈ F*, ω ∈ Π(ω) for each ω ∈ Ω. An element of the form Π = (Π_i)_{i∈N} ∈ (F*)^n is called a partition structure. Let Π* : Ω → 2^Ω be the finest possibility correspondence that is coarser than Π_i for each i ∈ N. Following the result of Aumann (1976), I say that an event E is common knowledge at ω if Π*(ω) ⊂ E.

I fix a partition structure Π = (Π_i)_{i∈N} ∈ (F*)^n for the moment. Player i's strategy in the game Γ(P) is a function σ_i : Ω → M_i which is Π_i-measurable. Let σ be a strategy profile in a game Γ(P). I take for granted Nash equilibrium as a reasonable solution in a game with complete information. Therefore, players are assumed to choose their strategies independently of other players' choices provided there is common knowledge about what game is being played. This implies that the amount of incomplete information I allow for the robustness analysis should not contradict the use of Nash equilibrium under complete information. The formalization of this is summarized by the following measurability condition on possible strategy profiles.

Definition 3 Let a partition structure Π ∈ F^n and the associated game Γ(P) be given. A strategy profile σ is consistent with the complete information structure if, for any ω, ω′ ∈ Ω, whenever there exists a profile (θ, s^θ) ∈ Θ × S such that h(θ, s^θ) = ω̃ for any ω̃ ∈ Π*(ω) ∪ Π*(ω′), then we have σ_i(ω) = σ_i(ω′) for each i ∈ N. When σ is consistent with the complete information structure, we simply say that it is a consistent strategy profile.

By focusing only on consistent strategy profiles, I exclude the kind of correlation of equilibrium strategies which would invalidate the original equilibrium analysis under complete information. Otherwise, there is a fundamental inconsistency: on the one hand, I fix a type space as coarse as the complete information; on the other hand, I allow the strategies to vary arbitrarily more finely than the coarse type space that I fixed in the beginning. If I anticipated that the corresponding strategies would vary arbitrarily more finely than the coarse type space, then I should question the coarseness of the original type space rather than the use of Nash equilibrium. It must be too coarse for the equilibrium analysis to be valid from the beginning. Thus, my stance is consistent with the use of Nash equilibrium.⁷ In other words, carrying out Nash equilibrium analysis of the original complete information game by embedding Θ in the larger type space incorporates the indirect restriction that two distinct types of a player who share the same payoff state θ and the same hierarchy of beliefs on θ choose the same action.

⁷ Moreover, I restrict my attention only to pure strategy Nash equilibrium. This is consistent with almost all papers in the implementation literature. However, this restriction is less problematic here because this paper focuses on the impossibility result.
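To make the role of the meet Π* and Aumann's common-knowledge test concrete, the following is a minimal computational sketch. It is only an illustration and not part of the paper's formal apparatus: the state names, partitions, and function names are my own, and, purely for concreteness, the partition structure is the one that reappears in the proof of the main result.

```python
# Minimal sketch (illustration only): the meet of finitely many partitions of a finite
# state space, and the test "E is common knowledge at omega iff Pi*(omega) is contained in E".

def meet(partitions, states):
    """Finest partition coarser than every partition in `partitions`.
    Each partition is a list of disjoint cells (frozensets) covering `states`."""
    cells = [frozenset([s]) for s in states]
    changed = True
    while changed:
        changed = False
        for part in partitions:
            for cell in part:
                touching = [c for c in cells if c & cell]
                if len(touching) > 1:
                    merged = frozenset().union(*touching)
                    cells = [c for c in cells if not (c & cell)] + [merged]
                    changed = True
    return cells

def cell_of(partition, omega):
    return next(c for c in partition if omega in c)

def common_knowledge(event, omega, partitions, states):
    """E is common knowledge at omega iff the meet cell containing omega lies inside E."""
    return cell_of(meet(partitions, states), omega) <= frozenset(event)

# Hypothetical example: the five-state space used in Section 4.
states = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
Pi_i = [frozenset({(0, 0)}), frozenset({(1, 0), (1, 1)}), frozenset({(2, 1), (2, 2)})]
Pi_j = [frozenset({(0, 0), (1, 0)}), frozenset({(1, 1), (2, 1), (2, 2)})]
print(meet([Pi_i, Pi_j], states))   # a single cell: no non-trivial event is common knowledge
print(common_knowledge({(2, 1), (2, 2)}, (2, 2), [Pi_i, Pi_j], states))   # False
```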
In this paper, I take ordinal preferences over the set of (pure) outcomes as a primitive. In order to analyze players' behavior under uncertainty, I shall extend the ordinal preferences into preferences over "acts." An act is a mapping α : Ω → A. A belief is a probability distribution β on Ω. I read "α ≿_i^β α′" as "player i prefers α to α′ under his belief β." I assume that a set of axioms is imposed on preference relations over acts sufficient to obtain the subjective expected utility representation.⁸ Furthermore, I assume that any representation of preferences over acts is permissible as long as it is consistent with the ordinal preferences over pure outcomes. The act α_σ^Γ induced by σ under Γ is defined by α_σ^Γ(ω) = g(σ(ω)) for any ω ∈ Ω. With these notations, I shall define Nash equilibrium (NE) and Bayesian Nash equilibrium.

Definition 4 Let a partition structure Π = ×_{i∈N} Π_i ∈ (F*)^n and the associated game Γ(P) be given. A consistent strategy profile σ is a Bayesian Nash equilibrium (BNE) of Γ(P) if σ_i is Π_i-measurable for each i ∈ N and, for each i ∈ N, each state ω with P(Π_i(ω)) > 0, and each strategy σ′_i which is Π_i-measurable, we have α_σ^Γ ≿_i^{P(·|Π_i(ω))} α_{(σ′_i, σ_{−i})}^Γ.

The above definition suffices for σ to be a Nash equilibrium of the game Γ(P) if P is a complete information prior. Define the set of acts 𝒜 ≡ A^Ω. I define ψ_Γ^{BNE} : 𝒫 → 𝒜 as the Bayesian Nash equilibrium correspondence associated with the mechanism Γ. Here, 𝒜 is endowed with the product topology.

I shall introduce a topology which enables us to determine how close any two probability distributions are. To define such topologies, I need some definitions and notations. Monderer and Samet (1989) introduced the concept of "common p-belief" as an approximation to common knowledge, which is common 1-belief. Let B_i^q(E) ≡ {ω ∈ Ω | P(E | Π_i(ω)) ≥ q} denote the set of states in which player i assigns probability at least q to the event E. I call this player i's q-belief operator. In particular, when q = 1, I call B_i^1 player i's 1-belief operator, corresponding to player i's knowledge operator.⁹ An event E is said to be q-evident if E ⊂ B_i^q(E) for all i ∈ N. This means that whenever E is true, everyone believes with probability at least q that E is true. An event E is said to be common q-belief at ω if there exists a q-evident event F such that ω ∈ F ⊂ ∩_{i∈N} B_i^q(E). I will loosely say that an event E is approximate common knowledge at ω if E is common q-belief at ω for q close to 1.

Consider the notion of the closeness of probability distributions. Define d_0 by the rule

d_0(P, P′) = sup_{E⊂Ω} |P(E) − P′(E)|.

⁸ More specifically, I assume that the preference relation over acts satisfies completeness, transitivity, continuity, and the independence axiom.
⁹ Here we define "knowledge" as belief with probability 1.

Let P* be the complete information prior. I will require extra conditions on conditional probabilities. Define G(η) to be the set of all states in which there is common (1 − η)-belief about what game is being played, as follows:

G(η) = {ω ∈ Ω | ∃θ ∈ Θ such that the event that Γ(θ) is played is common (1 − η)-belief at ω}.

Let d_1(P) = inf{η | P(G(η)) = 1}, and

d*(P, P′) = max{d_0(P, P′), d_1(P), d_1(P′)}.

Note that d_1(P*) = 0 by definition. By construction, d* is non-negative and symmetric, and d*(P, P′) = 0 if and only if P = P′ and both P and P′ are complete information priors.
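The next sketch is only an illustration of the q-belief operator, q-evidence, and the distance d_0 on a finite state space. The data structures (a prior as a dictionary from states to probabilities, a partition as a dictionary from states to cells) and the function names are my own conventions and are not part of the paper.

```python
# Illustrative sketch (not part of the paper): Monderer-Samet q-belief operators,
# q-evidence, and the distance d_0 on a finite state space.

def q_belief(E, q, P, Pi):
    """B_i^q(E): states where player i's conditional probability of E is at least q."""
    B = set()
    for omega, cell in Pi.items():
        p_cell = sum(P[w] for w in cell)
        if p_cell > 0 and sum(P[w] for w in cell if w in E) / p_cell >= q:
            B.add(omega)
    return B

def q_evident(E, q, P, partitions):
    """E is q-evident if E is contained in B_i^q(E) for every player i."""
    return all(set(E) <= q_belief(E, q, P, Pi) for Pi in partitions)

def common_q_belief_via(E, F, omega, q, P, partitions):
    """Sufficient test: E is common q-belief at omega if the candidate event F is
    q-evident, contains omega, and lies inside every B_i^q(E)."""
    return (omega in F
            and q_evident(F, q, P, partitions)
            and all(set(F) <= q_belief(E, q, P, Pi) for Pi in partitions))

def d0(P1, P2):
    """sup over events of |P1(E) - P2(E)|; on a finite space this is half the L1 distance."""
    return 0.5 * sum(abs(P1.get(w, 0) - P2.get(w, 0)) for w in set(P1) | set(P2))
```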
d* generates a topology in the following sense: a net {P^k | k ∈ K}, where K is a directed index set with partial order ≥, converges to P* if and only if for any ε > 0 there is a k̄ ∈ K such that k ≥ k̄ implies d*(P^k, P*) < ε. The reader is referred to Kunimoto (2006) for the details on the topology induced by d*.

Definition 5 Let Γ be a mechanism. ψ_Γ^{BNE} is upper hemi-continuous at a complete information prior P* with respect to the topology induced by d* if ψ_Γ^{BNE}(P^k) → ψ_Γ^{NE}(P*) as k → ∞ whenever d*(P^k, P*) → 0 as k → ∞. Here ψ_Γ^{NE}(P*) denotes the set of NE outcomes of the game Γ(P*).

The next result is given as a corollary of Theorem 1 in Kunimoto (2006).

Theorem 1 (Kunimoto (2006)) Suppose that Θ is finite. Then, the Bayesian Nash equilibrium correspondence associated with any mechanism is upper hemi-continuous at any complete information prior with respect to the topology induced by d*.

I want to apply the above result to the notion of robust Nash implementation. Let NE(Γ(θ)) ⊂ M be the entire set of (pure strategy) Nash equilibria of the complete information game Γ(θ). First, I shall define the notion of Nash implementation.

Definition 6 A social choice correspondence f is implementable in Nash equilibrium if there exists a mechanism Γ = (M, g) such that for any θ ∈ Θ, (1) for any a ∈ f(θ), there is a Nash equilibrium m^θ of the game Γ(θ) such that g(m^θ) = a, and (2) g(NE(Γ(θ))) ⊂ f(θ).

Condition (1) of Nash implementation requires that there always be a Nash equilibrium outcome which coincides with any socially desirable outcome in each state. Condition (2) of Nash implementation requires that every Nash equilibrium outcome coincide with some socially desirable outcome in each state. Having defined Nash implementation, I introduce the concept of robust Nash implementation as follows.

Definition 7 A social choice correspondence f is robustly implementable relative to d* (d**) if (1) it is implementable in Nash equilibrium and (2) the Bayesian Nash equilibrium correspondence associated with some Nash implementing mechanism is upper hemi-continuous at any complete information prior with respect to the topology induced by d* (d**).

In particular, Theorem 1 implies that all Nash implementing mechanisms are robust relative to d*. To see whether this robustness result on Nash implementation can be strengthened, I introduce a slightly coarser topology than that induced by d*. Let

d̃_1(P) = inf{η | P(G(η)) ≥ 1 − η}.

Define d**(P, P′) as follows:

d**(P, P′) = max{d_0(P, P′), d̃_1(P), d̃_1(P′)}.

The reader is referred to Kunimoto (2006) for the details on the topology induced by d**.

Definition 8 Let Γ be a mechanism. ψ_Γ^{BNE} is upper hemi-continuous at a complete information prior P* with respect to the topology induced by d** if ψ_Γ^{BNE}(P^k) → ψ_Γ^{NE}(P*) as k → ∞ whenever d**(P^k, P*) → 0 as k → ∞. Here ψ_Γ^{NE}(P*) denotes the set of NE outcomes of the game Γ(P*).

3 The Maskinian Mechanism (the M-mechanism)

In his "classic" paper, Maskin (1999) showed that monotonicity is a necessary condition for Nash implementation.

Definition 9 A social choice correspondence f is monotonic if, for every pair of states θ and θ′ and any a ∈ A with a ∈ f(θ) such that, for each player i and every b ∈ A, b ≻_i^{θ′} a implies b ≻_i^θ a, we have a ∈ f(θ′).

Maskin (1999) also provided sufficient conditions for Nash implementation when there are at least three players. Before stating his result, I need one more definition.
Definition 10 A social choice correspondence f satisfies no veto power if, for all θ ∈ Θ and all a ∈ A, whenever there exists i ∈ N such that a ≿_j^θ b for all j ≠ i and all b ∈ A, then a ∈ f(θ).

No veto power says that if an outcome is at the top of n − 1 players' preference orderings, then the last player cannot prevent this outcome from being the social optimum.

Theorem 2 (Maskin (1999)) Let the number of players be at least three, i.e., n ≥ 3. Then, any monotonic social choice correspondence f satisfying no veto power is implementable in Nash equilibrium.

The proof of Theorem 2 is based on the construction of a canonical mechanism which I call the Maskinian mechanism. Define the Maskinian mechanism Γ = (M, g), in which each message m_i ∈ M_i allowed to player i consists of an alternative, a payoff state, and a nonnegative integer. Thus, a typical message sent by player i is denoted m_i = (a^i, θ^i, z^i), where a^i ∈ A, θ^i ∈ Θ, and z^i ∈ {0, 1, 2, . . . }. The outcome function g of the M-mechanism Γ is defined by the following rules, where m = (m_1, . . . , m_n):

1. If all players announce the same message, m_i = (a^i, θ^i, z^i) = (a, θ, 0) for all i ∈ N, and a ∈ f(θ), then g(m) = a.

2. If all players but one announce the same message, that is, m_j = (a, θ, 0) for all j ≠ i with a ∈ f(θ), and player i announces m_i = (a^i, θ^i, z^i) ≠ (a, θ, 0), then we have three cases:

   g(m) = a^i        if a ≻_i^θ a^i
   g(m) = a          if a^i ≻_i^θ a
   g(m) = a(i, θ)    if a ∼_i^θ a^i

   where a(i, θ) is the worst outcome for player i in state θ.

3. In all other cases, an integer game is played. That is, g(m) = a^{i*}, where i* is the lowest index among those who announce the highest integer, i.e., z^{i*} = max_j z^j.

This paper makes a slight modification to Rule 2 of the original game form in Maskin (1999). Rule 2 of the original canonical mechanism is as follows:

   g(m) = a^i   if a ≿_i^θ a^i
   g(m) = a     if a^i ≻_i^θ a

It is important to note that regardless of this modification, Maskin's sufficiency result (Theorem 2) continues to be valid.

4 An Impossibility Theorem

In this section, I pursue an impossibility theorem on robust Nash implementation. To do so, I impose two assumptions on the social choice correspondences.

Assumption 1 A social choice correspondence f satisfies the following two conditions:

1. There are two states θ, θ′ such that there are a ∈ f(θ) and a′ ∈ f(θ′) for which a′ ≻_i^θ a and a′ ≻_i^{θ′} a for some player i ∈ N.

2. a ≻_i^θ a(i, θ) for each a ∈ f(θ), i ∈ N, and θ ∈ Θ.

Assumption 1-1 says that the social choice correspondence f describes the objective of a society in which there is some minimal degree of congruence of interests across some states for some player. Namely, some player i prefers a′ to a both in θ and in θ′. Assumption 1-2 says that the social choice correspondence f never gives the worst outcome to any player in any state. Most importantly, Assumption 1-2 together with Rule 2 of the M-mechanism ensures that the truth-telling Nash equilibrium m^θ is a strict Nash equilibrium of the game Γ(θ) for any θ ∈ Θ. Using the terminology provided by Kunimoto (2006), Monderer and Samet (1989) showed that the equilibrium correspondence is lower hemi-continuous at any strict Nash equilibrium with respect to the topology induced by d**. In this sense, any truth-telling equilibrium of the M-mechanism is robust to incomplete information.

I also impose some topological structure on the set of outcomes A.

Assumption 2 The set of outcomes A is compact.

Since the objective of this paper is to establish an impossibility result, the compactness of A is a fairly innocuous assumption.
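Before turning to the main result, it may help to see the outcome function of the M-mechanism written out computationally. The following is only a sketch under my own conventions, not the paper's formal definition: messages are tuples (a, theta, z); f(theta) returns the set f(θ); u(i, theta, a) is any numeric representation of ≿_i^θ; and worst(i, theta) returns a(i, θ). None of these names come from the paper.

```python
# A sketch (my own rendering) of the outcome function g of the M-mechanism with the
# modified Rule 2.  Assumed ingredients: f(theta) -> set of outcomes; u(i, theta, a) -> number
# representing player i's ordinal preference in state theta; worst(i, theta) -> a(i, theta).

def g(messages, f, u, worst):
    n = len(messages)
    # Rule 1: unanimous announcement (a, theta, 0) with a in f(theta).
    if len(set(messages)) == 1:
        a, theta, z = messages[0]
        if z == 0 and a in f(theta):
            return a
    # Rule 2: all but one player announce the same (a, theta, 0) with a in f(theta).
    for i in range(n):
        others = set(m for j, m in enumerate(messages) if j != i)
        if len(others) == 1:
            a, theta, z = next(iter(others))
            if z == 0 and a in f(theta) and messages[i] != (a, theta, 0):
                ai = messages[i][0]
                if u(i, theta, a) > u(i, theta, ai):
                    return ai                      # the deviation cannot benefit i
                elif u(i, theta, ai) > u(i, theta, a):
                    return a                       # i would gain, so keep the consensus outcome
                else:
                    return worst(i, theta)         # indifference: the worst outcome a(i, theta)
    # Rule 3: the integer game; the lowest index among those announcing the highest integer wins.
    zmax = max(m[2] for m in messages)
    winner = min(j for j, m in enumerate(messages) if m[2] == zmax)
    return messages[winner][0]
```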
The next theorem is this paper's main result.

Theorem 3 Let the number of players be at least three, i.e., n ≥ 3. Let a social choice correspondence f satisfy Assumption 1. Let Γ = (M, g) be the M-mechanism which implements f in Nash equilibrium under complete information. Assume further that the set of outcomes A satisfies Assumption 2. Then, for any complete information prior P*, there is a net of probability distributions {P^k}_{k=1}^∞ with the property that d**(P^k, P*) → 0 as k → ∞ for which the Bayesian Nash equilibrium correspondence associated with Γ exhibits a discontinuity at P*.

Before going into the proof, I will sketch how to prove the theorem. First, I focus on two states θ and θ′ given by Assumption 1-1. Second, I construct a perturbation of the complete information only around θ and θ′ while the other states remain common knowledge. Third, in the Bayesian game induced by the perturbation in the previous step, I propose a strategy profile σ* and show that σ* constitutes a Bayesian Nash equilibrium (Proposition 1). Finally, I show that σ* supports a non-Nash equilibrium outcome of the game Γ(θ) in the limit.

Proof of Theorem 3: Let θ and θ′ be two payoff states satisfying Assumption 1-1. Suppose that m^θ is the truth-telling Nash equilibrium of the game Γ(θ), where m_i^θ = (a, θ, 0) with a ∈ f(θ) for each i ∈ N, and that m^{θ′} is the truth-telling Nash equilibrium of the game Γ(θ′), where m_i^{θ′} = (a′, θ′, 0) with a′ ∈ f(θ′) for each i ∈ N.

Let us consider a canonical perturbation of the complete information structure. This canonical perturbation is constructed so as to preserve the complete information assumption in any state other than θ and θ′. Therefore, without loss of generality, we may continue the rest of the argument as if there were only two payoff states θ and θ′. Define p as follows:

p = μ(θ, s^θ | {θ, θ′}),  1 − p = μ(θ′, s^{θ′} | {θ, θ′}).

We define Ω = {(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)}. Let us define h as a mapping from Θ × S to Ω by

h^{-1}(0, 0) = (θ′, s_i^{θ′}, s_{-i}^{θ′}),  h^{-1}(1, 0) = (θ, s_i^θ, s_{-i}^{θ′}),  h^{-1}(1, 1) = h^{-1}(2, 1) = h^{-1}(2, 2) = (θ, s_i^θ, s_{-i}^θ).

Therefore, h is an immersion from Θ × S into Ω. We consider the following probability distribution P^ε over Ω:

• P^ε(0, 0) = 1 − p;
• P^ε(1, 0) = pε;
• P^ε(1, 1) = pε(1 − ε);
• P^ε(2, 1) = pε(1 − ε)^2;
• P^ε(2, 2) = p(1 − ε)^3.

Players' partition structure Π is given as follows:

• Π_i(0, 0) = {(0, 0)}, Π_i(1, 0) = Π_i(1, 1) = {(1, 0), (1, 1)}, Π_i(2, 1) = Π_i(2, 2) = {(2, 1), (2, 2)};
• Π_j(0, 0) = Π_j(1, 0) = {(0, 0), (1, 0)}, Π_j(1, 1) = Π_j(2, 1) = Π_j(2, 2) = {(1, 1), (2, 1), (2, 2)} for all j ≠ i.

Fix this partition structure Π throughout the rest of the argument. The matrix below displays the type space for the nearby incomplete information games we have explained. The row is i's signal and the column is everybody else's signal.

                          j's signal (∀j ≠ i)
                          0            1                2
    i's signal   0        1 − p        0                0
                 1        pε           pε(1 − ε)        0
                 2        0            pε(1 − ε)^2      p(1 − ε)^3

In the appendix, I show that this perturbation of the complete information structure is characterized by a convergent net of probability distributions with respect to the topology induced by d**. Namely, d**(P^ε, P*) → 0 as ε → 0.¹⁰ For a strategy profile σ : Ω → M in the Bayesian game Γ(P^ε) to be consistent with the complete information structure, it must satisfy the property that σ_j(2, 1) = σ_j(2, 2) for any j ≠ i. The next proposition explicitly constructs a Bayesian Nash equilibrium σ* of the game Γ(P^ε) for any ε > 0 sufficiently small.
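Before stating the proposition, the following numerical sketch (my own illustration, with p and ε as free parameters) records the perturbed prior P^ε and the two conditional beliefs that drive its proof: the weight a player j ≠ i puts on (0, 0) given the cell {(0, 0), (1, 0)}, and the weight z that player i puts on (1, 0) given the cell {(1, 0), (1, 1)}.

```python
# Illustration only: the perturbed prior P^eps on Omega = {(0,0),(1,0),(1,1),(2,1),(2,2)}
# and the conditional beliefs used in the proof of Proposition 1 and in the appendix.

def perturbed_prior(p, eps):
    return {
        (0, 0): 1 - p,
        (1, 0): p * eps,
        (1, 1): p * eps * (1 - eps),
        (2, 1): p * eps * (1 - eps) ** 2,
        (2, 2): p * (1 - eps) ** 3,
    }

p, eps = 0.5, 0.01                                 # hypothetical parameter values
P = perturbed_prior(p, eps)
assert abs(sum(P.values()) - 1.0) < 1e-12          # P^eps is indeed a probability distribution

# Belief of any j != i in (0,0), conditional on j's cell {(0,0),(1,0)}.
q = P[(0, 0)] / (P[(0, 0)] + P[(1, 0)])            # = (1-p)/(1-p+p*eps), tends to 1 as eps -> 0
# Belief of player i in (1,0), conditional on i's cell {(1,0),(1,1)}.
z = P[(1, 0)] / (P[(1, 0)] + P[(1, 1)])            # = 1/(2-eps) > 1/2 for any eps > 0
print(round(q, 4), round(z, 4))                    # e.g. 0.9901, 0.5025
```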
Let m_i^θ denote player i's truth-telling message for the state θ, i.e., m_i^θ = (a, θ, 0) for all i ∈ N, where a ∈ f(θ).

Proposition 1 There exists a sufficiently small ε̄ > 0 such that for any ε ∈ (0, ε̄] there exists a Bayesian Nash equilibrium σ* of the game Γ(P^ε) with the following properties:

• σ*_i(0, 0) = m_i^{θ′}
• σ*_i(1, 0) = σ*_i(1, 1) = m*_i
• σ*_i(2, 1) = σ*_i(2, 2) = m_i^θ
• σ*_j(0, 0) = σ*_j(1, 0) = m_j^{θ′} for any j ≠ i
• σ*_j(1, 1) = σ*_j(2, 1) = σ*_j(2, 2) = m_j^θ for any j ≠ i

Proof of Proposition 1: Until the end of the proof, we keep assuming m*_i = m_i^{θ′}. It will turn out that this assumption can be modified without changing any of the conclusions.

First, we claim that for any j ≠ i, σ*_j(0, 0) = σ*_j(1, 0) = m_j^{θ′} is a best response to σ*_{−j}(0, 0) = σ*_{−j}(1, 0) = m_{−j}^{θ′} if ε is sufficiently small. Fix any player j ≠ i. Playing m_j^{θ′} conditional upon {(0, 0), (1, 0)} gives the following lottery:

• g(m^{θ′}) with probability (1 − p)/(1 − p + pε) at (0, 0) in Γ(θ′)
• g(m^{θ′}) with probability pε/(1 − p + pε) at (1, 0) in Γ(θ)

Consider any other message m_j. Playing m_j conditional upon {(0, 0), (1, 0)} gives the following lottery:

• g(m_j, m_{−j}^{θ′}) with probability (1 − p)/(1 − p + pε) at (0, 0) in Γ(θ′)
• g(m_j, m_{−j}^{θ′}) with probability pε/(1 − p + pε) at (1, 0) in Γ(θ)

Because m^{θ′} is the truth-telling Nash equilibrium of the game Γ(θ′) and if we apply Rule 2 of the M-mechanism Γ with Assumption 1-2, we have

g(m^{θ′}) ≻_j^{θ′} g(m_j, m_{−j}^{θ′}) for all m_j ≠ m_j^{θ′}.

By continuity of preferences over acts, it follows that m_j^{θ′} is a best response to the strategies of the other players specified above for ε sufficiently small.

¹⁰ This perturbation is first used in an example in Kunimoto (2005) for achieving a different objective.

Second, we claim that σ*_j(1, 1) = σ*_j(2, 1) = σ*_j(2, 2) = m_j^θ for any j ≠ i is a best response to σ*_i(1, 1) = m_i^{θ′} and σ*_i(2, 1) = σ*_i(2, 2) = m_i^θ and σ*_k(1, 1) = σ*_k(2, 1) = σ*_k(2, 2) = m_k^θ for any k ∉ {i, j}. Moreover, by the partition structure Π, we have σ*_j(1, 1) = σ*_j(2, 1) = σ*_j(2, 2) for any j ≠ i. Note that m^θ constitutes the truth-telling Nash equilibrium of the game Γ(θ). Fix any player j ≠ i. Playing m_j^θ conditional upon {(1, 1), (2, 1), (2, 2)} gives the following lottery:

• g(m_i^{θ′}, m_{−i}^θ) with probability ε at (1, 1) in Γ(θ)
• g(m^θ) with probability ε(1 − ε) at (2, 1) in Γ(θ)
• g(m^θ) with probability (1 − ε)^2 at (2, 2) in Γ(θ)

Consider any other message m_j. Playing m_j conditional upon {(1, 1), (2, 1), (2, 2)} gives the following lottery:

• g(m_i^{θ′}, m_j, m_{−{i,j}}^θ) with probability ε at (1, 1) in Γ(θ)
• g(m_j, m_{−j}^θ) with probability ε(1 − ε) at (2, 1) in Γ(θ)
• g(m_j, m_{−j}^θ) with probability (1 − ε)^2 at (2, 2) in Γ(θ)

Because m^θ is the truth-telling Nash equilibrium of the game Γ(θ) and if we apply Rule 2 of the Maskinian game form Γ with Assumption 1-2, we have

g(m^θ) ≻_j^θ g(m_j, m_{−j}^θ) for all m_j ≠ m_j^θ.

By continuity of preferences over acts, it follows that m_j^θ is a best response to the strategies of the other players specified above for ε sufficiently small.

Third, we claim that σ*_i(1, 0) = σ*_i(1, 1) = m_i^{θ′} "can" (not must) be a best response to σ*_{−i}(0, 0) = σ*_{−i}(1, 0) = m_{−i}^{θ′} and σ*_{−i}(1, 1) = σ*_{−i}(2, 1) = σ*_{−i}(2, 2) = m_{−i}^θ. At states {(1, 0), (1, 1)}, player i knows that the game Γ(θ) is played. However, player i does not know whether all other players know that the game Γ(θ) is played. Conditional on {(1, 0), (1, 1)}, player i believes with probability z that all other players do not know which game is being played and with probability 1 − z that all other players know that the game Γ(θ) is played.
We can calculate z as follows:

z = ε / (ε + ε(1 − ε)) = 1/(2 − ε) > 1/2 as long as ε > 0.

Playing m_i^{θ′} conditional upon {(1, 0), (1, 1)} gives the following lottery:

• g(m^{θ′}) with probability z > 1/2 at (1, 0)
• g(m_i^{θ′}, m_{−i}^θ) with probability 1 − z < 1/2 at (1, 1)

Playing m_i^θ conditional upon {(1, 0), (1, 1)} gives the following lottery:

• g(m_i^θ, m_{−i}^{θ′}) with probability z > 1/2 at (1, 0)
• g(m^θ) with probability 1 − z < 1/2 at (1, 1)

Applying Rule 2 of the M-mechanism Γ with Assumption 1-1, we can rewrite the above expressions. Playing m_i^{θ′} conditional upon {(1, 0), (1, 1)} gives the following lottery:

• g(m^{θ′}) with probability z > 1/2 at (1, 0)
• g(m_i^{θ′}, m_{−i}^θ) with probability 1 − z < 1/2 at (1, 1)

Playing m_i^θ conditional upon {(1, 0), (1, 1)} gives the following lottery:

• g(m^θ) with probability z > 1/2 at (1, 0)
• g(m^θ) with probability 1 − z < 1/2 at (1, 1)

Since m^θ is the truth-telling Nash equilibrium of the game Γ(θ) and due to Rule 2 of the M-mechanism with Assumption 1-2, we have g(m^θ) ≻_i^θ g(m_i^{θ′}, m_{−i}^θ). We assign the following utility values to these outcomes: u_i(g(m^θ); θ) = u_i(g(m_i^θ, m_{−i}^{θ′}); θ) = 0, u_i(g(m^{θ′}); θ) = 3, and u_i(g(m_i^{θ′}, m_{−i}^θ); θ) = −1.

Playing m_i^θ conditional upon {(1, 0), (1, 1)} gives the following expected utility:

0 × z + 0 × (1 − z) = 0.

Playing m_i^{θ′} conditional upon {(1, 0), (1, 1)} gives the following expected utility:

3 × z − 1 × (1 − z) = 4z − 1 > 1, since z > 1/2.

Therefore, m_i^{θ′} is a "better" response to the belief specified above than m_i^θ. In other words, m_i^θ is "not" a best response. Note that the particular assignment of utilities here is not important. That is, even if we perturb the utilities slightly, the same argument goes through. If m_i^{θ′} is indeed a best response, this shows that σ*_i(1, 0) = σ*_i(1, 1) = m_i^{θ′} can be a candidate for part of the Bayesian Nash equilibrium. Suppose, to the contrary, that m_i^{θ′} is not a best response. Assume that u_i(·; θ) : A → R is continuous. Since only Rule 2 of the M-mechanism is relevant here, the choice of messages is equivalent to the choice of outcomes. Since A is compact by Assumption 2, by the Weierstrass theorem, there must exist

m*_i ∈ arg max_{m̃_i ∈ M_i} [ z × u_i(g(m̃_i, m_{−i}^{θ′}); θ) + (1 − z) × u_i(g(m̃_i, m_{−i}^θ); θ) ],

and, since m_i^θ has just been shown not to be a best response, any such maximizer satisfies m*_i ≠ m_i^θ. In this case, all we have to do is to replace m_i^{θ′} with m*_i and check that player j's best responses are unchanged at all states. We claim that for any j ≠ i, σ*_j(0, 0) = σ*_j(1, 0) = m_j^{θ′} is a best response to σ*_i(0, 0) = m_i^{θ′}, σ*_i(1, 0) = σ*_i(1, 1) = m*_i, and σ*_k(0, 0) = σ*_k(1, 0) = m_k^{θ′} for any k ∉ {i, j} if ε is sufficiently small.

Fix any player j ≠ i. Playing m_j^{θ′} conditional upon {(0, 0), (1, 0)} gives the following lottery:

• g(m^{θ′}) with probability (1 − p)/(1 − p + pε) at (0, 0) in Γ(θ′)
• g(m*_i, m_{−i}^{θ′}) with probability pε/(1 − p + pε) at (1, 0) in Γ(θ)

Consider any other message m_j. Playing m_j conditional upon {(0, 0), (1, 0)} gives the following lottery:

• g(m_j, m_{−j}^{θ′}) with probability (1 − p)/(1 − p + pε) at (0, 0) in Γ(θ′)
• g(m*_i, m_j, m_{−{i,j}}^{θ′}) with probability pε/(1 − p + pε) at (1, 0) in Γ(θ)

Because m^{θ′} is the truth-telling Nash equilibrium of the game Γ(θ′) and if we apply Rule 2 of the M-mechanism Γ with Assumption 1-2, we have

g(m^{θ′}) ≻_j^{θ′} g(m_j, m_{−j}^{θ′}) for all m_j ≠ m_j^{θ′}.

By continuity of preferences over acts, it follows that m_j^{θ′} is a best response to the belief specified above for ε sufficiently small.
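For readers who want to verify the expected-utility comparison for player i conditional on {(1, 0), (1, 1)}, the following snippet redoes the arithmetic with the same illustrative utility numbers as above; as noted, the particular numbers are inessential.

```python
# Re-doing the expected-utility comparison above; the utility numbers are the
# illustrative ones from the text and are not essential to the argument.

eps = 0.01
z = 1 / (2 - eps)                      # i's belief that the others do not know the game; > 1/2

u_g_theta       = 0                    # u_i(g(m^theta); theta) = u_i(g(m^theta_i, m^{theta'}_{-i}); theta)
u_g_theta_prime = 3                    # u_i(g(m^{theta'}); theta)
u_g_mixed_at_11 = -1                   # u_i(g(m^{theta'}_i, m^theta_{-i}); theta)

EU_theta_message       = z * u_g_theta       + (1 - z) * u_g_theta        # = 0
EU_theta_prime_message = z * u_g_theta_prime + (1 - z) * u_g_mixed_at_11  # = 4z - 1 > 1

assert EU_theta_prime_message > EU_theta_message   # so m^theta_i is not a best response for i
```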
We claim that σ*_j(1, 1) = σ*_j(2, 1) = σ*_j(2, 2) = m_j^θ for any j ≠ i is a best response to σ*_i(1, 1) = m*_i and σ*_i(2, 1) = σ*_i(2, 2) = m_i^θ and σ*_k(1, 1) = σ*_k(2, 1) = σ*_k(2, 2) = m_k^θ for any k ∉ {i, j}.

Fix any player j ≠ i. Playing m_j^θ conditional upon {(1, 1), (2, 1), (2, 2)} gives the following lottery:

• g(m*_i, m_{−i}^θ) with probability ε at (1, 1) in Γ(θ)
• g(m^θ) with probability ε(1 − ε) at (2, 1) in Γ(θ)
• g(m^θ) with probability (1 − ε)^2 at (2, 2) in Γ(θ)

Consider any other message m_j. Playing m_j conditional upon {(1, 1), (2, 1), (2, 2)} gives the following lottery:

• g(m*_i, m_j, m_{−{i,j}}^θ) with probability ε at (1, 1) in Γ(θ)
• g(m_j, m_{−j}^θ) with probability ε(1 − ε) at (2, 1) in Γ(θ)
• g(m_j, m_{−j}^θ) with probability (1 − ε)^2 at (2, 2) in Γ(θ)

Because m^θ is the truth-telling Nash equilibrium of the game Γ(θ) and if we apply Rule 2 of the Maskinian game form Γ with Assumption 1-2, we have

g(m^θ) ≻_j^θ g(m_j, m_{−j}^θ) for all m_j ≠ m_j^θ.

By continuity of preferences over acts, it follows that m_j^θ is a best response to the strategies of the other players specified above for ε sufficiently small.

Finally, we check that at state (0, 0), m_i^{θ′} is a best response to σ*_{−i}(0, 0) = σ*_{−i}(1, 0) = m_{−i}^{θ′}. Since m^{θ′} is the truth-telling Nash equilibrium of the game Γ(θ′) and if we apply Rule 2 of the Maskinian game form with Assumption 1-2, we have

g(m^{θ′}) ≻_i^{θ′} g(m_i, m_{−i}^{θ′}) for all m_i ≠ m_i^{θ′}.

This completes the proof of Proposition 1.

To complete the proof of the theorem, it remains to check that a non-Nash equilibrium outcome is supported by the Bayesian Nash equilibrium σ*. In fact, we have the following:

g(σ*(1, 1)) = g(m*_i, m_{−i}^θ) ∉ g(NE(Γ(θ))) ∋ g(m^θ), where h(θ, s^θ) = (1, 1).

This is because, if we apply Rule 2 of the M-mechanism with Condition 2 of Assumption 1, we have g(m^θ) ≻_i^θ g(m*_i, m_{−i}^θ). This implies the failure of the upper hemi-continuity of the Bayesian Nash equilibrium correspondence at any complete information prior. This completes the proof of Theorem 3.

When I combine Theorem 3 with Maskin's sufficiency result on Nash implementation, stated as Theorem 2 in this paper, I have the following corollary.

Corollary 1 Let the number of players be at least three, i.e., n ≥ 3. Let the set of outcomes A satisfy Assumption 2. Suppose that f is a monotonic social choice correspondence satisfying no veto power and Assumption 1. Then, f is not robustly implementable relative to d** by the M-mechanism.

The implication of the above corollary is as follows: if I am concerned with achieving robust implementation relative to d**, I have to devise another canonical mechanism which must be different from the Maskinian game form.

5 Two-Person Nash Implementation

In this section, I shall extend the non-robustness of Nash implementation to the case of two players. This is an important extension because the two-person problem has a bearing on a wide variety of bilateral contracting and bargaining problems. First, note that no-veto-power is not a necessary condition for Nash implementation even when there are more than two players. Furthermore, no-veto-power is unacceptably strong in the environment with two players. Moore and Repullo (1990) fully characterized necessary and sufficient conditions for Nash implementation when there are at least three players. Before turning my attention to two-person Nash implementation, I briefly review the result of Moore and Repullo. For any i ∈ N, θ ∈ Θ, and C ⊂ A, let M_i(C, θ) denote the set of maximal elements in C for player i in state θ.
M_i(C, θ) may, of course, be empty. For a given mechanism Γ, I can select some message profile m(a, θ) of the game Γ(θ) whose outcome is a. Given m(a, θ), let C_i(a, θ) denote the set of outcomes that player i can generate by varying his own message, keeping the other players' message choices, m_{−i}(a, θ), fixed:

C_i(a, θ) ≡ {c ∈ A | c = g(m_i, m_{−i}(a, θ)) for some m_i ∈ M_i}.

Definition 11 A social choice correspondence f satisfies Condition μ if there is a set A* ⊂ A, and for each i ∈ N, θ ∈ Θ, and a ∈ f(θ), there is a set C_i(a, θ) ⊂ A* with a ∈ M_i(C_i(a, θ), θ), such that for all θ* ∈ Θ, the following three conditions are satisfied:

1. (Monotonicity): If a ∈ M_i(C_i(a, θ), θ*) for each i ∈ N, then a ∈ f(θ*).
2. (Limited No Veto Power): If c ∈ M_i(C_i(a, θ), θ*) for some i ∈ N and c ∈ M_j(A*, θ*) for all j ≠ i, then c ∈ f(θ*).
3. (Unanimity): If d ∈ M_i(A*, θ*) for each i ∈ N, then d ∈ f(θ*).

Theorem 4 (Moore and Repullo (1990)) Suppose there are three or more players, i.e., n ≥ 3. Then, a social choice correspondence is implementable in Nash equilibrium if and only if it satisfies Condition μ.

Condition μ is not sufficient for two-person Nash implementation. Here I discuss another condition which, together with Condition μ, is sufficient for two-person Nash implementation. A social choice correspondence f is said to satisfy the nonempty lower intersection property if, for all θ, θ′ ∈ Θ with θ ≠ θ′ and for all a ∈ f(θ) and a′ ∈ f(θ′), there exists an outcome c ∈ A such that a ≻_1^θ c and a′ ≻_2^{θ′} c.

Theorem 5 (Dutta and Sen (1991)) Suppose there are two players, i.e., n = 2. Then, a social choice correspondence is implementable in Nash equilibrium if it satisfies both Condition μ and the nonempty lower intersection property.

Dutta and Sen's proof is done by constructing a mechanism. Define the extended M-mechanism Γ̃ = (M̃, g̃), in which each message m̃_i ∈ M̃_i contains one extra set of messages, {F, NF}, as well as the previous three ingredients of the M-mechanism. Here the set {F, NF} comprises two elements, "flag" and "no flag." Thus, a typical message sent by player i is denoted m̃_i = (a^i, θ^i, r^i, z^i), where a^i ∈ A, θ^i ∈ Θ, r^i ∈ {F, NF}, and z^i ∈ {0, 1, 2, . . . }. The outcome function g̃ of the extended M-mechanism Γ̃ is defined by the following rules, where m̃ = (m̃_1, m̃_2):

1. If m̃_1 = (a, θ, NF, z^1), m̃_2 = (a, θ, NF, z^2), and a ∈ f(θ), then g̃(m̃) = a.

2. If m̃_1 = (a^1, θ^1, NF, z^1) and m̃_2 = (a^2, θ^2, NF, z^2), then g̃(m̃) = x, where x is chosen such that a^2 ≻_1^{θ^2} x and a^1 ≻_2^{θ^1} x.

3. If m̃_i = (a^i, θ^i, F, z^i) and m̃_j = (a^j, θ^j, NF, z^j), then we have three cases:

   g̃(m̃) = a^i          if a^j ≻_i^{θ^j} a^i
   g̃(m̃) = a^j          if a^i ≻_i^{θ^j} a^j
   g̃(m̃) = a(i, θ^j)    if a^i ∼_i^{θ^j} a^j

   where a(i, θ^j) is the worst outcome for player i in state θ^j.

4. In all other cases, an integer game is played. That is, g̃(m̃) = a^{i*}, where i* is the lowest index among those who announce the highest integer, i.e., z^{i*} = max{z^1, z^2}. More specifically, i* = 1 if there is a tie, i.e., z^1 = z^2.

Rules 1, 3, and 4 of the extended M-mechanism are the same as Rules 1, 2, and 3 of the M-mechanism, respectively. Rule 2 of the extended M-mechanism is well defined as long as the social choice correspondence satisfies the nonempty lower intersection property. Combining Theorem 3, with some minor modification, and Theorem 5 (Dutta and Sen (1991)), I have the following result, which is parallel to Corollary 1 in the previous section.
Corollary 2 Let the number of players be two, i.e., n = 2. Let the set of outcomes A satisfy Assumption 2. Suppose that a social choice correspondence f satisfies Condition μ, the nonempty lower intersection property, and Assumption 1. Then, f is not robustly implementable relative to d** by the extended M-mechanism.

6 Concluding Remarks

This paper shows that the M-mechanism is not a robust Nash implementing mechanism relative to d**. This is the minimal sense in which the M-mechanism is not robust to incomplete information. As I already discussed in the introduction, Bergemann and Morris [B-M] (2005) also addressed the issues of robust implementation. In sum, B-M's robustness reduces to using much more robust solution concepts, such as ex post equilibrium and rationalizability. I argue that their robustness requirement is too stringent to robustify the existing implementation results. It is true that if I require that the mechanism be robust to all type spaces, the resulting robust mechanism must indeed be robust in a practical sense. However, Jehiel, Moldovanu, Meyer-ter-Vehn, and Zame (2005) recently showed that in environments where signals are multi-dimensional and the valuations are interdependent, generically only constant social choice functions are implementable in ex post equilibrium.¹¹ No matter how attractive B-M's robustness is, if all we can robustly implement are constant social choice functions, we should consider the possibility that there might be some reasonably robust mechanisms that are not exactly robust in B-M's sense. This is the direction in which I would like to make further progress in future work.

¹¹ Bikhchandani (2005) showed that ex post implementable mechanisms can exist in models with multi-dimensional signals and interdependent values when there is no consumption externality. Thus, the conclusion and implication of Jehiel, Moldovanu, Meyer-ter-Vehn, and Zame (2005) should be weakened accordingly.

7 Appendix

In this appendix, I show that the canonical perturbation of the complete information structure is characterized by the topology induced by d**.

Lemma 1 There is approximate common knowledge at (0, 0) and at (2, 2) about which game is being played.

Proof: Since player i knows that the game Γ(θ′) is played at (0, 0), we have B_i^1({(0, 0)}) = {(0, 0)}, i.e., (0, 0) is the state in which player i assigns probability 1 to the event {(0, 0)}. Furthermore, we must have B_i^q({(0, 0)}) = {(0, 0)} for any q < 1. Since all other players do not know which game is being played at (0, 0), we want to know any such player j's (j ≠ i) q-belief operator B_j^q({(0, 0)}). Set

q(ε) = (1 − p)/(1 − p + pε).

Note that q(ε) → 1 as ε → 0. Consider j's 1-belief operator. Then we have B_j^1({(0, 0)}) = ∅. But for j's q-belief operator we have B_j^q({(0, 0)}) = {(0, 0), (1, 0)} for q = q(ε). Note that {(0, 0)} is q(ε)-evident because {(0, 0)} ⊂ B_k^{q(ε)}({(0, 0)}) for each k ∈ N. Since B_i^q({(0, 0)}) ∩ (∩_{j≠i} B_j^q({(0, 0)})) = {(0, 0)} for q = q(ε), we claim that when ε is sufficiently small, it is common q(ε)-belief at (0, 0) that the game Γ(θ′) is played.

Consider the event {(2, 2)}. Since all other players but i are able to distinguish between {(2, 2)} and the other states, we have B_j^1({(2, 2)}) = {(2, 2)} for any j ≠ i. Furthermore, we have that B_j^q({(2, 2)}) = {(2, 2)} for any q < 1. Next consider player i. We have i's 1-belief operator as B_i^1({(2, 2)}) = ∅. Next consider i's q-belief operator. We have that B_i^q({(2, 2)}) = {(2, 1), (2, 2)} for q = 1 − ε.
Since {(2, 2)} ⊂ B_k^q({(2, 2)}) for each k ∈ N and for q = 1 − ε, the event {(2, 2)} is (1 − ε)-evident. Note that B_i^q({(2, 2)}) ∩ (∩_{j≠i} B_j^q({(2, 2)})) = {(2, 2)} for q = 1 − ε. Hence, when ε is sufficiently small, it is common (1 − ε)-belief at state (2, 2) that the game Γ(θ) is played.

Corollary 3 d**(P^ε, P*) → 0 as ε → 0. That is, there exists a sequence {q^k}_{k=1}^∞ converging to 1 such that, with probability at least q^k, there is common q^k-belief about which game is being played.

Proof: Define {ε(k)}_{k=1}^∞ as a sequence such that ε(k) → 0 as k → ∞. Let q_I(ε) = 1 − ε. Then, there is common q_I(ε)-belief at (2, 2) that the game Γ(θ) is being played. Let

q_J(ε) = (1 − p)/(1 − p + pε).

Then, there is common q_J(ε)-belief at (0, 0) that the game Γ(θ′) is being played. Define q*(ε) as follows:

q*(ε) = min{1 − p + p(1 − ε)^3, q_I(ε), q_J(ε)}.

We set q^k ≡ q*(ε(k)) for each k. Then, with probability at least q^k, there is common q^k-belief about which game is being played for each k. Define {P^k}_{k=1}^∞ ≡ {P^{ε(k)}}_{k=1}^∞. Therefore, we can confirm that {P^k}_{k=1}^∞ is a net of probability distributions converging to the complete information prior P* in the topology induced by d**.

References

[1] R.J. Aumann: "Agreeing to Disagree," Annals of Statistics, 4, (1976), 1236-1239.

[2] D. Bergemann and S. Morris: "Robust Implementation: The Role of Large Type Spaces," mimeo, (June 2005).

[3] S. Bikhchandani: "The Limits of Ex-Post Implementation Revisited," mimeo, UCLA, (2005).

[4] B. Dutta and A. Sen: "A Necessary and Sufficient Condition for Two-Person Nash Implementation," Review of Economic Studies, 58, (1991), 121-128.

[5] P. Jehiel, M. Meyer-ter-Vehn, B. Moldovanu, and W. Zame: "The Limits of Ex-Post Implementation," (2005), forthcoming in Econometrica.

[6] T. Kunimoto: "Robust Implementation under Approximate Common Knowledge," mimeo, (November 2005).

[7] T. Kunimoto: "The Robustness of Equilibrium Analysis: The Case of Undominated Nash Equilibrium," mimeo, (January 2006).

[8] E. Maskin: "Nash Equilibrium and Welfare Optimality," Review of Economic Studies, 66, (1999), 23-38.

[9] D. Monderer and D. Samet: "Approximating Common Knowledge with Common Beliefs," Games and Economic Behavior, 1, (1989), 170-190.

[10] J. Moore and R. Repullo: "Nash Implementation: A Full Characterization," Econometrica, 58, (1990), 1083-1099.

[11] A. Rubinstein: "The Electronic Mail Game: Strategic Behavior under 'Almost Common Knowledge'," American Economic Review, 79, (1989), 385-391.