1 Moral Hazard: Multiple Agents

• Multiple agents (a firm?) — partnership: Q jointly affected.
• Individual q_i's (tournaments).
• Common shocks, cooperation, collusion, monitoring.

1.1 Moral Hazard in a Team — Holmstrom ('82)

Agents: i = 1, ..., n, risk-neutral, u_i(w) = w, with U_i(w_i, a_i) = u_i(w_i) − ψ_i(a_i).
Efforts: a = (a_1, ..., a_n), a_i ∈ [0, ∞).
Deterministic output Q(a), with ∂Q/∂a_i > 0, ∂²Q/∂a_i² < 0, dq_ij = ∂²Q/∂a_i∂a_j ≥ 0, and (dq)_ij negative definite. (In the stochastic version, output q = (q_1, ..., q_n) ∼ F(q|a).)
Principal: R-N.

Partnership: sharing rule w(Q) = (w_1(Q), ..., w_n(Q)) such that the budget balances for all (!) Q: Σ_{i=1}^n w_i(Q) = Q.

First-best: ∂Q(a*)/∂a_i = ψ_i'(a*_i); at the F-B the surplus is positive: Q(a*) − Σ_{i=1}^n ψ_i(a*_i) > 0.

Agents' choices: FOC
  (dw_i[Q(a_i, a*_{−i})]/dQ) · (∂Q(a_i, a*_{−i})/∂a_i) = ψ_i'(a_i).

? Nash (a_i) = F-B (a*_i)? Locally this requires dw_i[Q(a_i, a*_{−i})]/dQ = 1 for every i, thus w_i(Q) = Q + C_i. Summing, Σ w_i(Q) = nQ + Σ C_i, which cannot equal Q for all Q when n > 1: budget balance fails. Problem: free-riding (someone else works hard, I gain).

? Other ways to support the first-best? This requires a third party: a budget breaker. Let z_i = −C_i be the payment from agent i, so w_i(Q) = Q − z_i. Thus there exist z = (z_1, ..., z_n) with Σ_{i=1}^n z_i + Q(a*) ≥ nQ(a*) (the b-b breaks even at Q(a*)) and z_i ≤ Q(a*) − ψ_i(a*_i) (each agent's IR); both can hold since the F-B surplus is positive.

Comments: the b-b is a residual claimant (in fact, under this scheme each agent is a residual claimant in a certain interpretation (!)). Note that the b-b loses from higher Q's. Not the same as Alchian & Demsetz (equity as the manager's incentive to monitor the agents properly).

Mirrlees contract: reward (bonus) b_i if Q = Q(a*), penalty k otherwise (bonuses for hitting targets). As long as b_i − ψ_i(a*_i) ≥ −k, the F-B can be supported; moreover, if b's and k exist such that Q(a*) ≥ Σ_{i=1}^n b_i, no b-b is needed.

1.2 Interpretation: Debt financing by the firm

The firm commits to repay debt D = Q(a*) − Σ b_i and to pay b_i to each i; if it cannot, the creditors collect Q and each employee pays k. (Hm...)

Special examples of (approximate) F-B via different schemes: Legros & Matthews ('93), Legros & Matsushima ('91).

Issues: (1) Multiple equilibria (as in all coordination-type games, and in the mechanism-design literature).
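The free-riding result and the Mirrlees bonus scheme can be checked numerically. A minimal sketch, assuming the illustrative technology Q(a) = Σ a_i and cost ψ_i(a) = a²/2 (these functional forms are not in the notes; they are chosen so that both efforts have closed forms):

```python
# Team free-riding sketch under assumed Q(a) = sum(a_i), psi_i(a) = a**2/2.
n = 4

# First-best: dQ/da_i = 1 = psi'(a_i) = a_i  =>  a* = 1 for every agent.
a_fb = 1.0
surplus_fb = n * a_fb - n * (a_fb**2 / 2)   # Q(a*) - sum psi_i(a*) = n/2

# Budget-balanced equal shares w_i = Q/n: FOC (1/n)*1 = a_i  =>  a_i = 1/n.
a_eq = 1.0 / n
surplus_eq = n * a_eq - n * (a_eq**2 / 2)
assert surplus_eq < surplus_fb              # free-riding: efforts too low

# Mirrlees scheme: bonus b_i if Q = Q(a*), penalty k otherwise.
b, k = 1.0, 1.0
assert b - a_fb**2 / 2 >= -k                # complying beats taking the penalty
assert n * a_fb >= n * b                    # Q(a*) >= sum b_i: no b-b needed
```

With n = 4 the equal-share Nash surplus is 0.875 against a first-best surplus of 2, illustrating how the 1/n share dilutes incentives.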
No easy solution unless the actions of others are observed by the agents and the principal can base each agent's compensation on everyone's reports. (Not a problem with Holmstrom, though: positive effort by one agent increases the effort of the others.) (2) Reliance on deterministic Q (the schemes below exploit it).

• Deterministic Q, finite A's, detectable deviations. Say a_i ∈ {0, 1} and Q^fb = Q(1, 1, 1). Let Q_i = Q(a_i = 0, a_{−i} = (1, 1)). Suppose Q_1 ≠ Q_2 ≠ Q_3: the shirker is identified and punished (to the benefit of the others). Similarly, even if Q_1 = Q_2 ≠ Q_3.

• Approximate efficiency, n = 2 (Legros & Matthews). Q = a_1 + a_2, ψ_i(a_i) = a_i²/2, a_i ∈ [0, ∞); F-B: a*_i = 1. Idea: use one agent to monitor the other (check with probability ε). L & M propose: agent 1 chooses a_1 = 1 with probability 1 − ε (and a_1 = 0 with probability ε), with payments
  when Q ≥ 1: w_1(Q) = (Q − 1)²/2, w_2(Q) = Q − w_1(Q);
  when Q < 1: w_1(Q) = Q + k, w_2(Q) = −k.
Check — agent 1: set a_2 = 1; then max_a [((a + 1) − 1)²/2 − a²/2] = 0, so agent 1 is indifferent and willing to mix.
Agent 2: a_2 ≥ 1 ⇒ Q ≥ 1 for sure, and a*_2 = 1 yields U_2 = 1 − ε/2. Any a_2 < 1 yields Q < 1 with probability ε; the best such deviation is a*_2 ≈ 1/2 with U_2 ≈ 5/4 − εk. For k ≥ 1/2 + 1/(4ε), a*_2 = 1 is optimal.

• Random output: Cremer & McLean-type schemes work. (Conditions?)

1.3 Observable individual outputs

q_1 = a_1 + ε_1 + αε_2,
q_2 = a_2 + ε_2 + αε_1,  ε_1, ε_2 ∼ iid N(0, σ²).
CARA agents: u(w, a) = −e^{−μ(w−ψ(a))}, ψ(a) = (1/2)ca².
Linear incentive schemes:
  w_1 = z_1 + v_1q_1 + u_1q_2,  w_2 = z_2 + v_2q_2 + u_2q_1.
(No relative-performance weights would mean u_i = 0.)

Define the certainty equivalent ŵ(a) by −e^{−μŵ(a)} = E[−e^{−μ(w−ψ(a))}]; the agent's choice is a ∈ arg max ŵ(a). (Use E(e^{aε}) = e^{a²σ²/2} for ε ∼ N(0, σ²).)

Risk exposure: V(w_1) = Var(v_1(ε_1 + αε_2) + u_1(ε_2 + αε_1)) = σ²[(v_1 + αu_1)² + (u_1 + αv_1)²].

Agent 1's problem:
  max_a { z_1 + v_1a + u_1a_2 − (1/2)ca² − (μσ²/2)[(v_1 + αu_1)² + (u_1 + αv_1)²] }.
Solution: a*_1 = v_1/c (as in the one-agent case).

Principal: max_{a,z,v,u} E(q − w) subject to E[−e^{−μ(w−ψ(a))}] ≥ u(w̄). Substituting the agents' choices,
  ŵ_1 = z_1 + (1/2)v_1²/c + u_1v_2/c − (μσ²/2)[(v_1 + αu_1)² + (u_1 + αv_1)²],
and the principal chooses (z_1, v_1, u_1) s.t.
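Agent 2's best-response comparison in the Legros–Matthews scheme can be verified by brute force; a sketch with an illustrative ε = 0.1 and k set at the notes' threshold 1/2 + 1/(4ε):

```python
# Check agent 2's incentives in the Legros-Matthews epsilon-monitoring scheme.
eps = 0.1
k = 0.5 + 1 / (4 * eps)   # threshold from the notes

# Complying (a_2 = 1): Q >= 1 always, w_2 = Q - (Q - 1)**2 / 2.
# a_1 = 1 (prob 1 - eps): Q = 2, w_2 = 1.5;  a_1 = 0 (prob eps): Q = 1, w_2 = 1.
u_comply = (1 - eps) * 1.5 + eps * 1.0 - 0.5    # = 1 - eps/2

def u_deviate(a):
    # a < 1: with prob eps agent 1 plays 0, so Q = a < 1 and agent 2 pays k.
    return (1 - eps) * (1 + a - a**2 / 2) - eps * k - a**2 / 2

best_dev = max(u_deviate(i / 1000) for i in range(1000))
assert u_comply >= best_dev                     # a_2 = 1 is a best response
```

The grid search confirms that the best deviation is near a_2 = 1/2 and that it is dominated once the penalty k reaches the stated threshold.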
ŵ_1 ≥ w̄.

Using the binding IR constraint to eliminate z_1, the principal's problem reduces to
  max_{v_1,u_1} { v_1/c − (1/2)v_1²/c − (μσ²/2)[(v_1 + αu_1)² + (u_1 + αv_1)²] }.
To solve: (1) find u_1 to minimize the sum of squares (the risk); (2) find v_1 (risk-sharing vs incentives trade-off).
Obtain u_1 = −(2α/(1 + α²)) v_1, and then
  v_1 = (1 + α²) / (1 + α² + μcσ²(1 − α²)²).
The optimal incentive scheme reduces the agents' exposure to the common shock.

1.4 Tournaments — Lazear & Rosen ('81)

Agents: R-N, no common shock. q_i = a_i + ε_i, ε ∼ F(·), E = 0, Var = σ². Cost ψ(a_i). F-B: 1 = ψ'(a*).

Piece rates achieve the F-B: w_i = z + q_i, with z + E(q_i) − ψ(a*) = z + a* − ψ(a*) = ū.

Tournament: q_i > q_j → prize W; both agents are paid z.
Agent: max_{a_i} z + pW − ψ(a_i), where p = Pr(q_i > q_j) = Pr(a_i − a_j > ε_j − ε_i) = H(a_i − a_j), with E_H = 0, Var_H = 2σ².
FOC: W ∂p/∂a_i = ψ'(a_i), i.e., W h(a_i − a_j) = ψ'(a_i).
Symmetric Nash (+ F-B): W = 1/h(0), and z + W H(0) − ψ(a*) = ū.
Result: same as the F-B with wages.
Extension: multiple rounds, with prizes progressively increasing.
Agents risk-averse + common shock: trade-off between (z, q) contracts and tournaments.

1.5 Cooperation and Competition

• Inducing help vs specialization
• Collusion among agents
• Principal–auditor–agent

Itoh ('91). 2 agents: q_i ∈ {0, 1}, (a_i, b_i) ∈ [0, ∞) × [0, ∞) (own effort a_i and help b_i).
U_i = u_i(w) − ψ_i(a_i, b_i), u_i(w) = √w, ψ_i(a_i, b_i) = a_i² + b_i² + 2ka_ib_i, k ∈ [0, 1].
Pr(q_i = 1) = a_i(1 + b_j).
Contract: w_i = (w^i_{jk}), the payment to i when q_i = j, q_{−i} = k.

No Help: b_i = 0. Set w_0 = 0 and solve max_{w_1} a_i(1 − w_1), s.t. a_i = (1/2)√w_1 (IC) and IR is met.
Providing help incentives by itself (ignoring the change in a), when b* is small, is costly for the principal, since the change in w increases risk. Even if a adjusts, the principal loses for sure, since the contract now differs from his first best given b = 0.

Getting Help: agent i solves (given a_j, b_j, w, with w_11 > w_10, w_01 > w_00 = 0):
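The tournament result that the prize W = 1/h(0) implements first-best effort can be checked numerically; a sketch assuming normal noise with σ = 1 and the illustrative cost ψ(a) = a²/2 (so ψ'(a) = a and a* = 1), neither of which is pinned down in the notes:

```python
import math

# Tournament sketch: eps_i ~ N(0, sigma^2), so eps_j - eps_i ~ N(0, 2 sigma^2).
sigma = 1.0
s = sigma * math.sqrt(2)                 # std of eps_j - eps_i
h0 = 1 / (s * math.sqrt(2 * math.pi))    # density of eps_j - eps_i at 0
W = 1 / h0                               # prize from the symmetric-Nash condition

def H(x):  # cdf of eps_j - eps_i
    return 0.5 * (1 + math.erf(x / (s * math.sqrt(2))))

def payoff(a, a_other=1.0):
    # z is an additive constant, so it is dropped; psi(a) = a**2/2.
    return W * H(a - a_other) - a**2 / 2

# Best response when the opponent plays the first-best a* = 1:
grid = [i / 1000 for i in range(3000)]
a_br = max(grid, key=payoff)
assert abs(a_br - 1.0) < 1e-2            # tournament implements a* = 1
```

The grid search confirms the FOC W h(0) = ψ'(a*) = 1: the best response to first-best effort is first-best effort.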
  max_{a,b} { a(1 + b_j)·a_j(1 + b)·√w_11 + a(1 + b_j)·(1 − a_j(1 + b))·√w_10
            + (1 − a(1 + b_j))·a_j(1 + b)·√w_01 − a² − b² − 2kab }.

FOC in b, imposing symmetry (consider ∂/∂b):
  a²(1 + b)(√w_11 − √w_10) + a(1 − a(1 + b))√w_01 = 2(b + ak).

If (as in No Help) w_11 = w_10, w_01 = 0, and k > 0, then the LHS = 0 while the RHS = 2(b + ak) > 0 for any b ≥ 0: even at b = 0 the marginal cost of help is 2ak > 0. Thus, if k is positive, there is a discontinuity at b = 0, and "a little" help will not help: for all b < b* the principal is worse off. Therefore, w must be changed significantly to induce any b (even to get b = 0 with FOC_b = 0).

For k = 0, help is always better. Two-step argument:
1. If a_help ≥ a_{b=0}, the marginal cost of help is of second order, so help is always good.
2. Show that a_help ≥ a_{b=0}.

1.6 Cooperation and collusion

CARA agents: u_i(w, a) = −e^{−μ_i(w_i − ψ_i(a))}.
q_i = a_i + ε_i, (ε_1, ε_2) ∼ N(0, V), where V = (σ_1², σ_12; σ_12, σ_2²), ρ = σ_12/(σ_1σ_2).
Linear incentive schemes: w_1 = z_1 + v_1q_1 + u_1q_2, w_2 = z_2 + v_2q_2 + u_2q_1.

Principal (R-N): max (1 − v_1 − u_2)a_1 + (1 − u_1 − v_2)a_2 − z_1 − z_2, s.t. (a*_1, a*_2) is a NE in efforts and CE_i ≥ 0.

• No side contracts (CE_2(a_1, a_2) analogously):
  CE_1(a_1, a_2) = z_1 + v_1a_1 + u_1a_2 − ψ_1(a_1) − (μ_1/2)(v_1²σ_1² + u_1²σ_2² + 2v_1u_1σ_12).
Individual choices: v_i = ψ_i'(a_i). The u's are set to minimize risk exposure: u_i = −v_i(σ_i/σ_j)ρ. Total risk exposure: Σ_{i=1}^2 (μ_i/2) v_i²σ_i²(1 − ρ²).

• Full side-contracting:
  – (?) It is enough to consider contracts on (a_1, a_2).
  – The problem reduces to a single-agent problem with 1/μ = 1/μ_1 + 1/μ_2 and cost ψ(a_1, a_2) = ψ_1(a_1) + ψ_2(a_2).

• Full side-contracting dominates no side contracts iff ρ ≤ ρ* (cooperation vs relative-performance evaluation).
• MD (mechanism-design) schemes.

1.7 Supervision and Collusion

Principal: V > 1. Agent: cost c ∈ {0, 1}, Pr(c = 0) = 1/2. Monitor: cost z, hard proof y*, Pr(y*|c = 0) = p.
Principal: reward the monitor for y* with w ≥ k. (Punish when there is no y*?) Suppose not, that is, (1/2)pk > z (and thus suppose w_mon = 0).
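The risk-minimizing relative-performance weight u_i = −v_i(σ_i/σ_j)ρ, and the resulting exposure v_i²σ_i²(1 − ρ²), can be verified directly; a sketch with illustrative parameter values (the numbers are not from the notes):

```python
# Check the 1.6 risk-exposure formula for w_1 = z_1 + v_1 q_1 + u_1 q_2.
s1, s2, rho, v1 = 2.0, 3.0, 0.6, 0.8
s12 = rho * s1 * s2                     # covariance of (eps_1, eps_2)

def variance(u1):
    # Var(v_1 q_1 + u_1 q_2) for jointly normal shocks.
    return v1**2 * s1**2 + u1**2 * s2**2 + 2 * v1 * u1 * s12

u_star = -v1 * (s1 / s2) * rho
# At u_star the exposure collapses to v_1^2 sigma_1^2 (1 - rho^2):
assert abs(variance(u_star) - v1**2 * s1**2 * (1 - rho**2)) < 1e-12

# u_star indeed minimizes the exposure (the variance is convex in u_1):
grid = [u_star + (i - 100) / 100 for i in range(201)]
assert all(variance(u) >= variance(u_star) - 1e-12 for u in grid)
```

The check also makes the RPE logic concrete: the more correlated the shocks (larger ρ), the more of agent 1's pay is loaded negatively on q_2.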
Assume: V > 2 (so producing, P = 1, is optimal without a monitor); payoff V − 1.
With a monitor (no collusion):
  (1/2)pV + (1 − (1/2)p)(V − 1) − z.
Compare to V − 1: monitoring pays iff (1/2)p > z.

Collusion (agent–monitor): the agent pays a transfer T, of which the monitor receives kT, k ≤ 1; the maximal bribe is T = 1 (the wage the agent saves when y* is suppressed). The collusion-proof principal pays the monitor k upon y*:
  (1/2)p(V − k) + (1 − (1/2)p)(V − 1).
• No gain from allowing collusion.
• If k is random, then allowing collusion may be optimal.
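The three payoffs in 1.7 can be compared concretely; a sketch with illustrative values of V, p, z, k (none are given in the notes):

```python
# Supervision-and-collusion payoff comparison for illustrative parameters.
V, p, z, k = 3.0, 0.8, 0.1, 0.5

no_monitor   = V - 1                                   # produce, pay wage 1
monitor      = 0.5 * p * V + (1 - 0.5 * p) * (V - 1) - z
collusion_pf = 0.5 * p * (V - k) + (1 - 0.5 * p) * (V - 1)

# Monitoring pays iff the expected saved wage p/2 exceeds its cost z;
# the collusion-proof wage k eats into that gain but preserves its sign here.
assert abs((monitor - no_monitor) - (0.5 * p - z)) < 1e-12
assert no_monitor < collusion_pf < monitor
```

With these numbers the principal earns 2.0 without a monitor, 2.3 with an honest monitor, and 2.2 once the monitor must be paid the collusion-proof wage k.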