Static Games of Incomplete Information

Mechanism design

• Typically a 3-step game of incomplete info

Step 1: Principal designs mechanism/contract

Step 2: Agents accept/reject the mechanism

Step 3: Agents that have accepted play the game specified by the mechanism

• Constant theme: Incomplete information and binding individual rationality constraints prevent efficient outcomes

Nonlinear pricing

• A monopolist produces good at marginal cost c and sells quantity q

• Consumer transfers T to the seller and has utility u₁(q, T, θ) = θV(q) − T, with V(0) = 0, V′ > 0, V″ < 0

• θ is private knowledge for buyer

• The game:

1. Seller offers a tariff T(q): specifies a price for quantity q

2. Consumer accepts/rejects

• If the seller knows θ, she charges T = θV(q) and her profit is θV(q) − cq. This is maximized at the quantity q* given by θV′(q*) = c
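A minimal numerical sketch of this full-information benchmark, assuming V(q) = 2√q (so V(0) = 0, V′ > 0, V″ < 0) and illustrative values of θ and c (these numbers are not from the slides):

```python
import numpy as np
from scipy.optimize import brentq

# Full-information (first-best) pricing, assuming V(q) = 2*sqrt(q); theta and c are
# illustrative values, not taken from the slides.
theta, c = 1.5, 0.5
V = lambda q: 2.0 * np.sqrt(q)
V_prime = lambda q: 1.0 / np.sqrt(q)

# First-best quantity solves theta * V'(q) = c
q_fb = brentq(lambda q: theta * V_prime(q) - c, 1e-9, 1e6)
T_fb = theta * V(q_fb)          # tariff extracts the consumer's entire surplus
profit_fb = T_fb - c * q_fb

print(f"q* = {q_fb:.3f} (analytic (theta/c)^2 = {(theta / c) ** 2:.3f})")
print(f"T = {T_fb:.3f}, profit = {profit_fb:.3f}")
```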

Nonlinear pricing

• With two consumer types θ₁ < θ₂ (probabilities p₁ and p₂), the seller's expected profit is:
  Eu₀ = p₁(T₁ − cq₁) + p₂(T₂ − cq₂)
• Seller faces two kinds of constraints:
  1. Individual Rationality (IR): Consumer should be willing to purchase
  2. Incentive Compatibility (IC): Consumer should consume the bundle intended for his type
• IR₁: θ₁V(q₁) − T₁ ≥ 0;  IR₂: θ₂V(q₂) − T₂ ≥ 0
• IC₁: θ₁V(q₁) − T₁ ≥ θ₁V(q₂) − T₂;  IC₂: θ₂V(q₂) − T₂ ≥ θ₂V(q₁) − T₁
• First step: To show that only IR₁ and IC₂ are binding

Nonlinear pricing

• First note: IR₁ and IC₂ imply IR₂
• IR₂ can't be binding unless q₁ = 0: by IC₂ and IR₁, the high type's surplus is at least (θ₂ − θ₁)V(q₁) ≥ 0
• However, IR₁ must bind. Else the seller can increase T₁ and T₂ by the same amount and increase revenue
• Also, IC₂ must be binding, else the seller can increase T₂, satisfy all constraints and increase revenue
• The high type's indifference curve is always steeper than the low type's at any allocation
• This implies that the high type consumes more than the low type: q₂ ≥ q₁

Nonlinear pricing

• Eliminating transfers using the binding constraints (T₁ = θ₁V(q₁) from IR₁, T₂ = θ₂V(q₂) − (θ₂ − θ₁)V(q₁) from IC₂), the principal's objective function is:
  max over {q₁, q₂}:  p₁[θ₁V(q₁) − cq₁] + p₂[θ₂V(q₂) − cq₂ − (θ₂ − θ₁)V(q₁)]
• FOC wrt q₁:  θ₁V′(q₁) = c / [1 − (p₂/p₁)·(θ₂ − θ₁)/θ₁]
• FOC wrt q₂:  θ₂V′(q₂) = c
• Check that IC₁ is satisfied
• Note: Quantity purchased by the high type is optimal (efficient);
  quantity purchased by the low type is sub-optimal (distorted downward)

• Seller sacrifices efficiency for rent-extraction!
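A numerical sketch of the resulting menu, assuming V(q) = 2√q and illustrative parameters (θ₁, θ₂, p₁, p₂, c are arbitrary choices, not values from the slides); it also verifies that the ignored constraints IC₁ and IR₂ hold:

```python
import numpy as np

# Two-type screening (nonlinear pricing), assuming V(q) = 2*sqrt(q);
# all parameter values are illustrative.
theta1, theta2 = 1.0, 1.5          # low and high type
p1, p2 = 0.5, 0.5                  # type probabilities
c = 0.5                            # marginal cost
V = lambda q: 2.0 * np.sqrt(q)     # V'(q) = 1/sqrt(q)

# First-best quantities: theta_i * V'(q) = c  =>  q = (theta_i / c)^2
q1_fb, q2_fb = (theta1 / c) ** 2, (theta2 / c) ** 2

# Second-best: no distortion at the top, downward distortion for the low type
q2_sb = (theta2 / c) ** 2
adj = 1.0 - (p2 / p1) * (theta2 - theta1) / theta1   # assumed positive, so q1 > 0
q1_sb = (theta1 * adj / c) ** 2                      # from theta1*V'(q1) = c/adj

# Transfers from the binding constraints IR1 and IC2
T1 = theta1 * V(q1_sb)
T2 = theta2 * V(q2_sb) - (theta2 - theta1) * V(q1_sb)

print("first-best q: ", q1_fb, q2_fb)
print("second-best q:", q1_sb, q2_sb)
print("transfers:    ", T1, T2)
print("IC1 holds:", theta1 * V(q1_sb) - T1 >= theta1 * V(q2_sb) - T2)
print("IR2 holds:", theta2 * V(q2_sb) - T2 >= 0)
print("expected profit:", p1 * (T1 - c * q1_sb) + p2 * (T2 - c * q2_sb))
```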

Auctions

• Seller has one unit of a good and there are two bidders
• Each bidder's valuation is θ₁ (probability p₁) or θ₂ (probability p₂), θ₁ < θ₂, drawn independently
• A buyer's expected (interim) probabilities of getting the good are X₁ and X₂, and expected payments are T₁ and T₂
• The constraints are:
  IR₁: θ₁X₁ − T₁ ≥ 0;  IR₂: θ₂X₂ − T₂ ≥ 0
  IC₁: θ₁X₁ − T₁ ≥ θ₁X₂ − T₂;  IC₂: θ₂X₂ − T₂ ≥ θ₂X₁ − T₁

• What is seller’s optimal contract?

Auctions

• Seller's expected profit (per bidder) is: p₁T₁ + p₂T₂
• Again, IR₁ and IC₂ are binding. The seller's profit:
  Eu₀ = [p₁θ₁ − p₂(θ₂ − θ₁)]X₁ + p₂θ₂X₂
• Also, the ex-ante probability of a given player getting the good: p₁X₁ + p₂X₂ ≤ 1/2
• Moreover, X₂ ≤ p₁ + p₂/2
• Case 1: p₁θ₁ ≤ p₂(θ₂ − θ₁). The seller sets X₁ = 0 and X₂ = p₁ + p₂/2.
  Optimal mechanism: Not to sell if both announce the low type; sell to the high type if they announce different types; sell wp ½ to each if both announce the high type
• Case 2: p₁θ₁ > p₂(θ₂ − θ₁). The seller sets X₁ = p₁/2 and X₂ = p₁ + p₂/2.
  Optimal mechanism: Sell to the high type if the bidders announce different types, and sell wp ½ to each if they both announce the high type or both announce the low type
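A minimal sketch comparing the two candidate mechanisms through the reduced revenue expression above; the parameter values are illustrative, not from the slides:

```python
# Per-bidder expected revenue after substituting the binding IR1 and IC2:
#   Eu0 = (p1*theta1 - p2*(theta2 - theta1)) * X1 + p2*theta2 * X2
def revenue(X1, X2, theta1, theta2, p1, p2):
    return (p1 * theta1 - p2 * (theta2 - theta1)) * X1 + p2 * theta2 * X2

for theta1, theta2, p1, p2 in [(1.0, 3.0, 0.5, 0.5),   # p1*theta1 <= p2*(theta2 - theta1)
                               (1.0, 1.5, 0.5, 0.5)]:  # p1*theta1 >  p2*(theta2 - theta1)
    case1 = (0.0, p1 + p2 / 2)       # never sell to a low announcement (Case 1 rule)
    case2 = (p1 / 2, p1 + p2 / 2)    # sell to a low announcement only vs. another low (Case 2 rule)
    r1 = revenue(*case1, theta1, theta2, p1, p2)
    r2 = revenue(*case2, theta1, theta2, p1, p2)
    best = "Case 1 (exclude the low type)" if r1 >= r2 else "Case 2 (serve both types)"
    print(f"theta=({theta1},{theta2}), p=({p1},{p2}): revenue {r1:.3f} vs {r2:.3f} -> {best}")
```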

Moral Hazard

• Consider a Principal and an agent who can exert costly effort e ∈ {0, 1}
• The agent receives a transfer t and has utility U = u(t) − ψ(e), with u′ > 0, u″ < 0
• Production is stochastic, and the production level q ∈ {q̲, q̄}, with q̲ < q̄
• Stochastic influence of effort on production:
  Pr{q = q̄ | e = 0} = π₀ ;  Pr{q = q̄ | e = 1} = π₁, with π₁ > π₀

Moral Hazard

• The Principal can offer a contract {t(q)} that depends on the observed, random output q; write t̄ = t(q̄) and t̲ = t(q̲)
• Let the Principal's profit with quantity q be S(q)
• His profit when the agent expends effort e = 0 is:
  V₀ = π₀[S(q̄) − t̄] + (1 − π₀)[S(q̲) − t̲]
• His profit when the agent expends effort e = 1 is:
  V₁ = π₁[S(q̄) − t̄] + (1 − π₁)[S(q̲) − t̲]

Incentive Feasible Contracts

• Induce positive effort and ensure participation

• Incentive constraint (IC):
  π₁u(t̄) + (1 − π₁)u(t̲) − ψ ≥ π₀u(t̄) + (1 − π₀)u(t̲)
• Participation constraint (IR):
  π₁u(t̄) + (1 − π₁)u(t̲) − ψ ≥ 0
  (here ψ = ψ(1) is the disutility of high effort, with ψ(0) and the agent's outside option normalized to 0)

Complete Information Benchmark

• Complete info or First-Best: Principal observes effort

• Principal's problem is:
  max over {(t̲, t̄)}:  π₁(S̄ − t̄) + (1 − π₁)(S̲ − t̲),  where S̄ = S(q̄), S̲ = S(q̲),
  subject to:  π₁u(t̄) + (1 − π₁)u(t̲) − ψ ≥ 0
• Using the Lagrange multiplier μ on this constraint, the FOCs give
  μ = 1/u′(t̄*)  and  μ = 1/u′(t̲*)
• From the above equations, we have that: t̄* = t̲* = t*
• Thus, the Agent obtains full insurance!
• The optimal transfer is: t* = u⁻¹(ψ) = h(ψ), where h = u⁻¹

First Best Case

• When there is complete information

• Principal's profit from inducing effort e = 1:
  V₁ = π₁S̄ + (1 − π₁)S̲ − h(ψ)
• If the agent exerted 0 effort, the principal would earn:
  V₀ = π₀S̄ + (1 − π₀)S̲
• Inducing effort is optimal for the principal if:
  Δπ·ΔS ≥ h(ψ),  where Δπ = π₁ − π₀ and ΔS = S̄ − S̲
• Principal's First-Best cost of inducing effort is: C^FB = h(ψ)
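A small sketch of this first-best decision rule, assuming u(t) = √t (so h(v) = v²) and illustrative parameters (none of the numbers come from the slides):

```python
# First-best benchmark with an assumed utility u(t) = sqrt(t), so h = u^{-1}, h(v) = v**2.
pi0, pi1 = 0.4, 0.8
S_low, S_high = 10.0, 30.0         # S(q_low), S(q_high)
psi = 2.0                          # disutility of high effort

h = lambda v: v ** 2
t_star = h(psi)                    # full-insurance transfer when e = 1 is induced

V1 = pi1 * S_high + (1 - pi1) * S_low - t_star   # profit from inducing e = 1
V0 = pi0 * S_high + (1 - pi0) * S_low            # profit with e = 0 (no transfer needed when
                                                 # psi(0) = 0 and the outside option is 0)
d_pi, d_S = pi1 - pi0, S_high - S_low

print(f"t* = h(psi) = {t_star}")
print(f"V1 = {V1}, V0 = {V0}")
print(f"induce effort: {V1 >= V0}  (equivalently d_pi*d_S = {d_pi * d_S} >= h(psi) = {h(psi)})")
```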

Second-Best: In terms of transfers

• Agent is risk-averse and his effort is not observable by the Principal

• Principal's problem, (P), is:
  (P):  max over {(t̲, t̄)}:  π₁(S̄ − t̄) + (1 − π₁)(S̲ − t̲)
  subject to:
  (IR)  π₁u(t̄) + (1 − π₁)u(t̲) − ψ ≥ 0, and
  (IC)  π₁u(t̄) + (1 − π₁)u(t̲) − ψ ≥ π₀u(t̄) + (1 − π₀)u(t̲)
• First ensure concavity of (P): let ū = u(t̄) and u̲ = u(t̲)

Second-Best: In terms of utilities

• The Principal’s program can be rewritten in terms of utilities

• (P′):  max over {(u̲, ū)}:  π₁(S̄ − h(ū)) + (1 − π₁)(S̲ − h(u̲))
  subject to:
  (IC)  π₁ū + (1 − π₁)u̲ − ψ ≥ π₀ū + (1 − π₀)u̲
  (IR)  π₁ū + (1 − π₁)u̲ − ψ ≥ 0
• Principal's objective function is concave in (u̲, ū) because h(·) is convex, and the constraints are linear

• The KKT conditions are necessary and sufficient

Both IR and IC are binding

• Let λ & μ be Lagrange multipliers for IC & IR

• The FOCs, upon rearranging terms, are:

  1/u′(t̄^SB) = μ + λ·Δπ/π₁ ;   1/u′(t̲^SB) = μ − λ·Δπ/(1 − π₁),
  where t̄^SB, t̲^SB are the second-best optimal transfers
• From these, μ = π₁/u′(t̄^SB) + (1 − π₁)/u′(t̲^SB) > 0, so IR is binding
• Also, λ = [π₁(1 − π₁)/Δπ]·[1/u′(t̄^SB) − 1/u′(t̲^SB)] > 0, so IC is binding
  (λ = 0 would give full insurance, t̄ = t̲, which violates IC since IC requires Δπ·(ū − u̲) ≥ ψ > 0)

Second-Best Solution

• The variables (t̲^SB, t̄^SB, λ, μ) are solved simultaneously from the two FOCs, IC and IR
• The second-best optimal transfers are:
  t̄^SB = h(ψ + (1 − π₁)·ψ/Δπ) ;   t̲^SB = h(ψ − π₁·ψ/Δπ)
• t̄^SB > t̲^SB: the contract does not provide full insurance
• 2nd-best cost of inducing effort: C^SB = π₁·t̄^SB + (1 − π₁)·t̲^SB
• Clearly, for the Principal, C^SB > C^FB = h(ψ) (by Jensen's inequality, since h is strictly convex). So the Principal induces high effort (e = 1) less often than in the first-best

• There is under-provision of effort in the second-best
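A numerical sketch of the second-best contract, assuming u(t) = ln t (so h(v) = eᵛ, which allows the low-state utility to be negative); the parameter values are illustrative, not from the slides:

```python
import numpy as np

# Second-best transfers under moral hazard, assuming u(t) = ln(t) so h(v) = exp(v).
pi0, pi1 = 0.4, 0.8
psi = 0.5
d_pi = pi1 - pi0
h = np.exp                                   # h = u^{-1}

# Binding IR and IC pin down the promised utilities
u_high = psi + (1 - pi1) * psi / d_pi        # u(t_high^SB)
u_low = psi - pi1 * psi / d_pi               # u(t_low^SB)
t_high, t_low = h(u_high), h(u_low)

C_SB = pi1 * t_high + (1 - pi1) * t_low      # second-best expected cost of inducing e = 1
C_FB = h(psi)                                # first-best cost (full insurance)

# Multipliers from the FOCs (for u = ln, 1/u'(t) = t)
mu = pi1 * t_high + (1 - pi1) * t_low
lam = pi1 * (1 - pi1) / d_pi * (t_high - t_low)

print(f"t_high^SB = {t_high:.3f}, t_low^SB = {t_low:.3f}   (no full insurance)")
print(f"C_SB = {C_SB:.3f} > C_FB = {C_FB:.3f}")
print(f"mu = {mu:.3f} > 0, lambda = {lam:.3f} > 0   (both IR and IC bind)")
```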

Mechanism design with a single agent

• Agent's type θ ∈ [θ̲, θ̄] with distribution P(θ) and density p(θ)
• A type-contingent allocation is a function y(θ) = (x(θ), t(θ))
• Defn: A decision function x: [θ̲, θ̄] → X is implementable if there exists a transfer t(·) such that the allocation y(·) is incentive-compatible, i.e.
  u₁(y(θ), θ) ≥ u₁(y(θ̂), θ)   for all (θ, θ̂) ∈ [θ̲, θ̄] × [θ̲, θ̄]
• Theorem: A piecewise C¹ decision function x(·) is implementable only if
  Σ_{k=1}^{n} ∂/∂θ[(∂u₁/∂x_k)/(∂u₁/∂t)]·(dx_k/dθ) ≥ 0
  whenever x = x(θ), t = t(θ), and x is differentiable at θ

Mechanism design with a single agent

• Sketch of proof: Type θ chooses a report θ̂ to maximize
  Φ(θ̂, θ) ≡ u₁(x(θ̂), t(θ̂), θ)
  The FOC and SOC at the truthful report θ̂ = θ are
  ∂Φ/∂θ̂(θ, θ) = 0 ,   ∂²Φ/∂θ̂²(θ, θ) ≤ 0
  Totally differentiating the first equation (which holds for all θ),
  ∂²Φ/∂θ̂²(θ, θ) + ∂²Φ/∂θ̂∂θ(θ, θ) = 0
  so the (local) SOC becomes ∂²Φ/∂θ̂∂θ(θ, θ) ≥ 0, i.e.
  Σ_{k=1}^{n} ∂²u₁/(∂x_k∂θ)·dx_k/dθ + ∂²u₁/(∂t∂θ)·dt/dθ ≥ 0
  Rewriting the FOC we get
  Σ_{k=1}^{n} (∂u₁/∂x_k)·dx_k/dθ + (∂u₁/∂t)·dt/dθ = 0
  Eliminating dt/dθ and dividing through by ∂u₁/∂t > 0,
  Σ_{k=1}^{n} {[∂²u₁/(∂x_k∂θ) − (∂²u₁/(∂t∂θ))·(∂u₁/∂x_k)/(∂u₁/∂t)] / (∂u₁/∂t)}·dx_k/dθ ≥ 0
  or,  Σ_{k=1}^{n} ∂/∂θ[(∂u₁/∂x_k)/(∂u₁/∂t)]·dx_k/dθ ≥ 0
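A small symbolic check of this manipulation (a sketch, not part of the slides), using SymPy with an assumed quasi-linear utility u₁(x, t, θ) = θ√x + t; for this u₁ the ∂²u₁/(∂t∂θ) term is zero, so the local SOC collapses directly to the sorting condition times dx/dθ:

```python
import sympy as sp

# Assumed quasi-linear agent utility u1(x, t, theta) = theta*sqrt(x) + t (illustration only).
theta, theta_hat = sp.symbols('theta theta_hat', positive=True)
x, t = sp.Function('x'), sp.Function('t')
u1 = lambda xx, tt, th: th * sp.sqrt(xx) + tt

# Payoff of type theta reporting theta_hat
Phi = u1(x(theta_hat), t(theta_hat), theta)

# Local SOC after totally differentiating the truth-telling FOC:
# d^2 Phi / (d theta_hat d theta) >= 0, evaluated at theta_hat = theta.
# (Here d^2 u1/(dt dtheta) = 0, so the dt/dtheta term drops out on its own.)
soc_cross = sp.diff(Phi, theta_hat, theta).subs(theta_hat, theta)

# The slides' condition: d/dtheta of the MRS (du1/dx)/(du1/dt), times dx/dtheta
xx, tt = sp.symbols('xx tt', positive=True)
mrs = sp.diff(u1(xx, tt, theta), xx) / sp.diff(u1(xx, tt, theta), tt)
condition = sp.diff(mrs, theta).subs(xx, x(theta)) * sp.diff(x(theta), theta)

print(sp.simplify(soc_cross - condition))  # 0: the SOC is the sorting term times dx/dtheta
print(condition)                           # Derivative(x(theta), theta)/(2*sqrt(x(theta))):
                                           # nonnegative iff x(.) is nondecreasing
```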

Mechanism design with a single agent

• The sorting / single crossing / constant sign (CS) condition is:
  ∂/∂θ[(∂u₁/∂x_k)/(∂u₁/∂t)] is of constant sign (the same sign for all (x, t, θ))
• Note that (∂u₁/∂x_k)/(∂u₁/∂t) is the agent's marginal rate of substitution between decision k and the transfer t
• Consider x to be output supplied by the agent, i.e., ∂u₁/∂x < 0
• Then the sorting condition means that the slope of the agent's indifference curve in (x, t) space, −(∂u₁/∂x)/(∂u₁/∂t), is decreasing in θ

• If θ₂ > θ₁ and y(θ₁) = (x(θ₁), t(θ₁)), y(θ₂) = (x(θ₂), t(θ₂)) are the corresponding allocations, then y(θ₂) ≥ y(θ₁)

• Theorem: If decision space is 1-dim and CS holds, then a necessary condition for x (.) to be implementable is that it is monotonic.

• What about sufficiency?

Optimal mechanisms for one agent

• The assumptions:

A1: u

A2: Quasi-linear utilities:
  Principal: u₀(x, t, θ) = V₀(x, θ) − t ;  Agent: u₁(x, t, θ) = V₁(x, θ) + t
A3: n = 1, i.e., the decision is one-dimensional, and CS holds: ∂²V₁/∂x∂θ > 0
A4: ∂V₁/∂θ > 0
A5: ∂²V₀/∂x∂θ ≥ 0
A6: ∂³V₁/(∂x∂θ²) ≤ 0, and ∂³V₁/(∂x²∂θ) ≥ 0

Optimal mechanisms for one agent

• The problem: Principal maximizes his expected utility

  max over {x(·), t(·)}:  E_θ[u₀(x(θ), t(θ), θ)]
  subject to:
  (IR)  u₁(x(θ), t(θ), θ) ≥ u̲  for all θ  (reservation utility u̲, normalized to 0)
  (IC)  u₁(x(θ), t(θ), θ) ≥ u₁(x(θ̂), t(θ̂), θ)  for all (θ, θ̂)
• Let U₁(θ) ≡ max over θ̂ of u₁(x(θ̂), t(θ̂), θ) = u₁(x(θ), t(θ), θ) ≥ 0  (by IC and IR)
• From the Envelope theorem,  dU₁/dθ = ∂u₁/∂θ = ∂V₁/∂θ
• This implies that
  U₁(θ) = U₁(θ̲) + ∫_θ̲^θ ∂V₁/∂θ(x(θ̃), θ̃) dθ̃

Optimal mechanisms for one agent

• Further, u₀ = V₀ + V₁ − U₁ ≡ Social surplus − Agent's utility
• Principal's objective function:
  ∫_θ̲^θ̄ [ V₀(x(θ), θ) + V₁(x(θ), θ) − ∫_θ̲^θ ∂V₁/∂θ(x(θ̃), θ̃) dθ̃ ] p(θ) dθ
  = ∫_θ̲^θ̄ [ V₀(x(θ), θ) + V₁(x(θ), θ) − ((1 − P(θ))/p(θ))·∂V₁/∂θ(x(θ), θ) ] p(θ) dθ
  (setting the lowest type's rent U₁(θ̲) = 0 and reversing the order of integration)
• Since monotonicity is necessary and sufficient for implementability, the Principal's optimization program becomes
  max over {x(·)}:  ∫_θ̲^θ̄ [ V₀(x, θ) + V₁(x, θ) − ((1 − P(θ))/p(θ))·∂V₁/∂θ(x, θ) ] p(θ) dθ
  s.t. x(·) is monotonic

Optimal mechanisms

• We solve the principal’s program ignoring monotonicity

• The solution to the relaxed program satisfies, for each θ:
  ∂V₀/∂x + ∂V₁/∂x = ((1 − P(θ))/p(θ))·∂²V₁/∂x∂θ

• The principal faces a trade-off between maximizing total surplus (V₀ + V₁) and appropriating the agent's info rent (U₁)

• When is it legit to focus on relaxed program?

When the solution x*(θ) to the above equation is monotonic. Differentiating that equation with respect to θ,
  [ ∂²V₀/∂x² + ∂²V₁/∂x² − ((1 − P)/p)·∂³V₁/(∂x²∂θ) ]·dx*/dθ
    = ∂²V₁/∂x∂θ·[ d/dθ((1 − P(θ))/p(θ)) − 1 ] − ∂²V₀/∂x∂θ + ((1 − P)/p)·∂³V₁/(∂x∂θ²)
When the hazard rate is monotone, d/dθ[(1 − P(θ))/p(θ)] ≤ 0, so together with A3, A5 and A6 the right-hand side is negative; the bracket on the left is negative as well (concavity in x together with A6), hence dx*/dθ ≥ 0 and the relaxed solution is indeed monotonic
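A sketch of the relaxed solution for one concrete specification (all of it assumed for illustration): V₁(x, θ) = 2θ√x, V₀(x, θ) = −cx, and θ uniform on [1, 2], so (1 − P(θ))/p(θ) = 2 − θ and the monotone-hazard-rate condition holds:

```python
import numpy as np
from scipy.optimize import brentq

# Relaxed program under assumed primitives: V1 = 2*theta*sqrt(x), V0 = -c*x,
# theta ~ Uniform[1, 2]  =>  (1 - P(theta))/p(theta) = 2 - theta.
c = 1.0
inv_hazard = lambda th: 2.0 - th            # decreasing, so the MHR condition holds

def x_star(th):
    # Pointwise FOC: dV0/dx + dV1/dx = ((1-P)/p) * d^2 V1/(dx dtheta)
    #   -c + th/sqrt(x) = (2 - th)/sqrt(x)
    virtual = th - inv_hazard(th)           # "virtual type" 2*theta - 2
    if virtual <= 0:
        return 0.0                          # exclusion at the bottom of the type space
    return brentq(lambda x: -c + virtual / np.sqrt(x), 1e-12, 1e6)

for th in np.linspace(1.01, 2.0, 5):
    x_fb = (th / c) ** 2                    # first-best: -c + th/sqrt(x) = 0
    print(f"theta={th:.2f}: x*={x_star(th):.3f} "
          f"(analytic {(2 * th - 2) ** 2 / c ** 2:.3f}), first-best={x_fb:.3f}")
# x* is increasing in theta (monotonic, hence implementable) and lies below the first-best
# everywhere except at the top, theta = 2, where (1 - P)/p = 0.
```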
