Penn Institute for Economic Research
Department of Economics
University of Pennsylvania
3718 Locust Walk
Philadelphia, PA 19104-6297
pier@econ.upenn.edu
http://economics.sas.upenn.edu/pier
PIER Working Paper 16-009
“Mechanism Design with Costly Verification and Limited Punishments”
BY
Yunan Li
http://ssrn.com/abstract=2764458
Mechanism Design with Costly Verification
and Limited Punishments∗
Yunan Li†
University of Pennsylvania
April 4, 2016
Abstract
A principal has to allocate a good among a number of agents, each of whom values
the good. Each agent has private information about the principal’s payoff if he receives
the good. There are no monetary transfers. The principal can inspect agents’ reports
at a cost and penalize them, but the punishments are limited. I characterize an optimal
mechanism featuring two thresholds. Agents whose values are below the lower threshold are pooled, as are agents whose values are above the upper threshold. If the number of agents is
small, then the pooling area at the top of the value distribution disappears. If the number
of agents is large, then the two pooling areas meet and the optimal mechanism can be
implemented via a shortlisting procedure.
Keywords: Mechanism Design, Costly Verification, Limited Punishments
JEL Classification: D82
∗ I would like to express my gratitude to Rakesh Vohra for his encouraging support since the very beginning
of the project. I am also grateful to Tymofiy Mylovanov, Steven Matthews, Mariann Ollar, Mallesh Pai and
the participants in seminars at University of Pennsylvania for valuable discussions. All remaining errors are
my responsibility. Any suggestion or comment is welcome.
† Author’s address: Yunan Li, Economics Department, University of Pennsylvania, Philadelphia, PA,
USA, yunanli@sas.upenn.edu.
1 Introduction
In many large organizations scarce resources must be allocated internally without the
benefit of prices. Examples include the head of personnel for an organization choosing one
of several applicants for a job, venture capital firms choosing which startup to fund and
funding agencies allocating a grant among researchers. In these settings the principal must
rely on verification of agents’ claims, which can be costly. For example, the head of personnel
can verify a job applicant’s claim or monitor his performance once he is hired. A venture
capital firm can investigate the competing startups and audit the progress of a startup once
it is funded. Furthermore, the principal can punish an agent if the claims are found to be
false. For example, the head of personnel can fire an employee or deny a promotion. Venture
capital firms and funding agencies can cut off funding.
Prior work has examined two extreme cases. In Ben-Porath et al. (2014), verification is
costly, but punishment is unlimited in the sense that an agent can be rejected, and does not
receive the good. In Mylovanov and Zapechelnyuk (2014), verification is free, but punishment
is limited. In this paper I consider a situation with both costly verification and limited
punishment. I interpret inspection or verification as acquiring information (e.g., requesting
documentation, interviewing an agent, or monitoring an agent at work), which could be
costly. Moreover, punishment can be limited because inspection is imperfect or information
arrives only after the agent has been hired for a while.
The trade-off faced by the principal is as follows. The principal would like to give the
good to an agent whose value is high, but this encourages all agents to exaggerate. The
principal can discourage agents from exaggerating by rationing at the bottom/top or by
verification and punishment. Rationing at the bottom/top makes it less attractive for a low
value agent to misreport as a high value agent but reduces allocative efficiency. Verification
and punishment clearly discourage agents from misreporting, but verification is costly. It is
not obvious which is the best way to provide incentives.
In the main part of the paper, I consider the symmetric environment and characterize an
optimal symmetric mechanism in this setting. If the number of agents is sufficiently small,
the optimal mechanism is a one-threshold mechanism as in Ben-Porath et al. (2014). The optimal allocation rule is efficient at the top of the value distribution and involves pooling at the bottom. For an intermediate or large number of agents, the optimal allocation rule
also involves pooling at the top as in Mylovanov and Zapechelnyuk (2014). Specifically,
the optimal mechanism is a two-threshold mechanism. If there are agents whose values
are above the upper threshold, one of them is chosen at random. If all agents’ values are
below the upper threshold, but some are above the lower threshold, the one with the highest
value is chosen. If all agents’ values are below the lower threshold, one of them is chosen
at random. For a sufficiently large number of agents, the two thresholds coincide and the
optimal mechanism can be implemented using a shortlisting procedure. Agents whose values
are above the threshold are shortlisted for sure, and agents whose values are below the
threshold are shortlisted with some probability. The principal chooses one agent from the
shortlist at random. The fact that the optimal mechanism depends on the number of agents
implies that small and large organizations should behave differently.
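As an illustration of the two-threshold rule described above, here is a minimal Python sketch (the function name, the tie-breaking details and the use of weak inequalities at the thresholds are mine, not the paper's):

```python
import random

def allocate(values, v_low, v_high, rng=random):
    """Two-threshold allocation rule (illustrative sketch).

    values: reported values, one per agent.
    v_low, v_high: lower and upper thresholds, v_low <= v_high.
    Returns the index of the agent who receives the good.
    """
    top = [i for i, v in enumerate(values) if v >= v_high]
    if top:
        # Pooling at the top: uniform draw among agents above the upper threshold.
        return rng.choice(top)
    middle = [i for i, v in enumerate(values) if v >= v_low]
    if middle:
        # Efficient region: the highest reported value wins.
        return max(middle, key=lambda i: values[i])
    # Pooling at the bottom: uniform draw among all agents.
    return rng.choice(range(len(values)))
```

Setting v_high above the largest possible value recovers the one-threshold case; setting v_low = v_high collapses the middle region, leaving only the two pooled areas (the randomized shortlisting of low types described above is not modeled in this sketch).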
The distinction between small and intermediate number of agents is absent if either
verification is free or punishment is unlimited. This distinction is important since it allows us
to establish a connection between Ben-Porath et al. (2014) and Mylovanov and Zapechelnyuk
(2014).
In Section 5.1, I study the general model with asymmetric agents. In this setting, threshold mechanisms are still optimal. The analysis, however, is much more complex. Though
there is still a unique lower threshold for all agents, different agents may face different upper
thresholds. Using this result, I revisit the symmetric environment and characterize the set
of all optimal threshold mechanisms. I find that limiting the principal’s ability to punish
agents also limits her ability to treat agents differently. In particular, when a one-threshold
mechanism is optimal, the set of all optimal threshold mechanisms shrinks as the punishments become more limited. Eventually, the unique optimal mechanism is symmetric. If
the punishments are sufficiently limited so that a two-threshold mechanism or a shortlisting procedure is optimal, the principal can still treat agents differently, but only to the extent that they share
the same set of thresholds. The comparison is less clear in this case because the sets of
optimal mechanisms are disjoint for different punishments.
Technically, I use tools from linear programming, which allows me to analyze Ben-Porath
et al. (2014) and Mylovanov and Zapechelnyuk (2014) in a unified way. It also allows me
to get some results on optimal mechanisms in the asymmetric environment with limited
punishments, unavailable in Mylovanov and Zapechelnyuk (2014).
The rest of the paper is organized as follows. Section 2 presents the model. Section 3
characterizes symmetric optimal mechanisms when agents are ex ante identical. Section 4
discusses the properties of optimal mechanisms. Section 5.1 studies the general asymmetric environment. Section 5.2 considers a variation of the model in which punishments are
independent of allocation.
1.1 Other Related Literature
This paper is related to the costly state verification literature. The first contribution
in the series is Townsend (1979) who studies a model of a principal and a single agent. In
Townsend (1979) verification is deterministic. Border and Sobel (1987) and Mookherjee and
Png (1989) generalize it by allowing random inspection. Gale and Hellwig (1985) consider
the effects of costly verification in the context of credit markets. These models differ from
what I consider here in that in their models there is only one agent and monetary transfers
are allowed.
Technically, this paper is related to the literature on reduced form implementation — see,
e.g., Maskin and Riley (1984), Matthews (1984), Border (1991a) and Mierendorff (2011).
The most related one is Pai and Vohra (2014), who also use reduced form implementation
and linear programming to derive optimal mechanisms for financially constrained agents.
2 Model
The set of agents is I ≡ {1, . . . , n}. There is a single indivisible object to allocate among
them. The value to the principal of assigning the object to agent i is vi , where vi is agent i’s
private information. I assume the vi ’s are independently distributed, with density fi strictly positive on Vi := [v̲i , v̄i ] ⊂ R+ . The assumption that an agent’s value to the principal is
always non-negative simplifies some statements but the results can easily extend to include
negative values. I use Fi to denote the corresponding cumulative distribution function. Let V = ∏i Vi . Agent i gets a payoff of bi (vi ) if he receives the object, and 0 otherwise. The
principal can inspect agent i’s report at a cost ki ≥ 0 if agent i receives the object and kiβ ≥ 0
if agent i does not receive the object. I allow for inspection costs to be different depending on
whether an agent gets the object. This is natural in some environments. For example, if the
object is a job slot and the private information is about an agent’s ability, then it is easier to
inspect an agent who is hired. I assume inspection perfectly reveals an agent’s type and the
cost to the agent of providing information is zero. If agent i is inspected, the principal can
impose a penalty ci (vi ) ≥ 0 if agent i receives the object, and cβi (vi ) ≥ 0 if agent i does not
receive the object. In Ben-Porath et al. (2014), the principal can inspect an agent whether
or not he receives the object, i.e., ki = kiβ . However, the principal can only penalize an agent
if he receives the object, i.e., cβi = 0. In Mylovanov and Zapechelnyuk (2014), the principal
can only inspect and penalize an agent if he receives the object, i.e., kiβ = ∞ and cβi = 0. For
the rest of the paper, I follow Ben-Porath et al. (2014) and Mylovanov and Zapechelnyuk
(2014) and assume that cβi = 0. The interpretation is that the principal can only penalize
an agent by taking the object back possibly after a number of periods (e.g., rejecting a job
applicant or firing him after a certain length of employment). In Section 5.2 I discuss to
what extent this assumption can be relaxed.
I invoke the Revelation Principle and focus on direct mechanisms in which truth-telling
is a Bayes-Nash equilibrium. Clearly, if an agent is inspected, it is optimal to penalize him if
and only if he is found to have lied. A direct mechanism is a triple (p, q, q β ), where pi : V → [0, 1]
denotes the probability i is assigned the object, qi : V → [0, 1] denotes the probability of
inspecting i conditional on the object being assigned to agent i, and qiβ : V → [0, 1] denotes
the probability of inspecting i conditional on the object not being assigned to agent i. The
mechanism is feasible if ∑i pi (v) ≤ 1 for all v ∈ V. The principal’s objective function is
$$\mathbb{E}_v\left[\sum_{i=1}^{n}\Big(p_i(v)\big(v_i - q_i(v)k_i\big) - \big(1 - p_i(v)\big)q_i^{\beta}(v)k_i^{\beta}\Big)\right]. \tag{1}$$
The incentive compatibility constraint for agent i is
$$\mathbb{E}_{v_{-i}}\big[p_i(v_i, v_{-i})b_i(v_i)\big] \ge \mathbb{E}_{v_{-i}}\Big[p_i(v_i', v_{-i})\big(b_i(v_i) - q_i(v_i', v_{-i})c_i(v_i)\big) - \big(1 - p_i(v_i', v_{-i})\big)q_i^{\beta}(v_i', v_{-i})c_i^{\beta}(v_i)\Big], \quad \forall v_i, v_i'.$$
Clearly, since cβi = 0, it is optimal to set qiβ = 0. Then the principal’s objective function
becomes
$$\mathbb{E}_v\left[\sum_{i=1}^{n} p_i(v)\big(v_i - q_i(v)k_i\big)\right]. \tag{2}$$
The incentive compatibility constraint for agent i becomes
$$\mathbb{E}_{v_{-i}}\big[p_i(v_i, v_{-i})b_i(v_i)\big] \ge \mathbb{E}_{v_{-i}}\big[p_i(v_i', v_{-i})\big(b_i(v_i) - q_i(v_i', v_{-i})c_i(v_i)\big)\big], \quad \forall v_i, v_i'. \tag{3}$$
Given (p, q), let Pi (vi ) = Ev−i [pi (vi , v−i )], and Qi (vi ) = Ev−i [pi (vi , v−i )qi (vi , v−i )]/Pi (vi ) if Pi (vi ) ≠ 0 and Qi (vi ) = 0 otherwise. The principal’s problem can be written in the reduced form:
$$\max_{P,Q}\ \sum_{i=1}^{n} \mathbb{E}_{v_i}\big[P_i(v_i)\big(v_i - Q_i(v_i)k_i\big)\big],$$
subject to
$$P_i(v_i)b_i(v_i) \ge P_i(v_i')\big(b_i(v_i) - Q_i(v_i')c_i(v_i)\big), \quad \forall v_i, v_i', \tag{IC}$$
$$0 \le Q_i(v_i) \le 1, \quad \forall v_i, \tag{F1}$$
$$\sum_i \int_{S_i} P_i(v_i)\,dF_i(v_i) \le 1 - \prod_i\left(1 - \int_{S_i} dF_i(v_i)\right), \quad \forall S_i \subset V_i. \tag{F2}$$
If ki = 0, then this is Mylovanov and Zapechelnyuk (2014). If ci (vi ) = bi (vi ), then this is
Ben-Porath et al. (2014).
In both Mylovanov and Zapechelnyuk (2014) and Ben-Porath et al. (2014), it is easy to
solve for the optimal Qi . If ki = 0, then Qi (vi ) = 1 for all vi ∈ Vi . If ci (vi ) = bi (vi ), then the
(IC) constraint becomes Pi (vi )bi (vi ) ≥ Pi (vi0 )bi (vi ) (1 − Qi (vi0 )) for all vi and vi0 . The (IC)
constraint holds if and only if
$$\inf_{v_i} P_i(v_i) \ge P_i(v_i')\big(1 - Q_i(v_i')\big), \quad \forall v_i'.$$
Since the principal’s objective function is strictly decreasing in Qi , it is optimal to set Qi (vi ) =
1 − ϕi /Pi (vi ) for all vi ∈ Vi , where ϕi := inf_{vi} Pi (vi ). In general, for ki > 0 and ci (vi ) ≠ bi (vi ),
solving for the optimal Qi is hard.
For tractability, I assume ci (vi ) = ci bi (vi ) for some 0 < ci ≤ 1. This assumption is
natural in some applications. In the job slot example, this assumption is satisfied if an agent
gets a private benefit for each period he is employed and the penalty is being fired after a
pre-specified number of periods. In the example of venture capital firms or funding agencies,
this assumption is satisfied if agents receive funds periodically and the penalty is cutting
off funding after a certain number of periods. Furthermore, this assumption permits a clean analysis of the interaction between the verification cost (k) and the level of punishment (c). Lastly,
this assumption can be relaxed, and the results can easily extend if ci (vi )/bi (vi ) is minimized
at v i .1
1 The (IC) constraint can be rewritten as:
$$\frac{b_i(v_i)}{c_i(v_i)}\left(1 - \frac{P_i(v_i)}{P_i(v_i')}\right) \le Q_i(v_i'), \quad \forall v_i, v_i'.$$
Suppose ci (vi )/bi (vi ) is minimized at v̲i and Pi (vi ) is non-decreasing. Then given vi′ , the left-hand side of the above inequality is maximized at v̲i . Hence, redefine ci := ci (v̲i )/bi (v̲i ) and the (IC) constraint holds if and only if (4) holds.
Under the assumption that the penalty, ci (vi ), is proportional to the private benefit, bi (vi ), the (IC) constraint becomes Pi (vi ) ≥ Pi (vi′ )(1 − ci Qi (vi′ )) for all vi and vi′ . The (IC) constraint holds if and only if
$$\varphi_i \ge P_i(v_i')\big(1 - c_i Q_i(v_i')\big), \quad \forall v_i'. \tag{4}$$
Since Qi (vi′ ) ≤ 1, (4) holds only if
$$(1 - c_i)P_i(v_i') \le \varphi_i, \quad \forall v_i'. \tag{5}$$
Remark 1 Note that if ci = 1, as in Ben-Porath et al. (2014), then (5) is satisfied automatically. This explains why there is no pooling at the top in Ben-Porath et al. (2014). In contrast, if 0 < ci < 1, then (5) imposes an upper bound on Pi and, as a result, there may be pooling at the top.
For the rest of the paper, I assume that 0 < ci < 1. If (5) holds, then it is optimal to set Qi (vi ) = (1 − ϕi /Pi (vi ))/ci for all vi ∈ Vi . Substituting this into the principal’s objective function gives:
$$\sum_{i=1}^{n}\left\{\mathbb{E}_{v_i}\left[P_i(v_i)\left(v_i - \frac{k_i}{c_i}\right)\right] + \frac{\varphi_i k_i}{c_i}\right\}. \tag{6}$$
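A quick numerical check (my own sketch, with hypothetical ϕ and c) confirms that the rule Qi (vi ) = (1 − ϕi /Pi (vi ))/ci makes the bound in (4) hold with equality, and that Qi stays in [0, 1] precisely when (5) holds:

```python
def inspection_prob(P_v, phi, c):
    """Q(v) = (1 - phi / P(v)) / c, the inspection probability used in the text."""
    return (1.0 - phi / P_v) / c

phi, c = 0.2, 0.6
# phi / (1 - c) = 0.5 is the upper bound on P from (5).
for P in [0.2, 0.3, 0.4, 0.5]:
    Q = inspection_prob(P, phi, c)
    # The IC bound P(v')(1 - c*Q(v')) collapses to phi exactly.
    assert abs(P * (1 - c * Q) - phi) < 1e-12
    # Q is a probability as long as (1 - c) * P <= phi.
    assert 0.0 <= Q <= 1.0 + 1e-12
```

At the cap P = ϕ/(1 − c) the rule inspects with probability one, and at P = ϕ it never inspects, matching the intuition that only agents who gain from mimicking need to be checked.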
For the main part of the paper, I assume the vi ’s are identically distributed, with density f strictly positive on V = [v̲, v̄] ⊂ R+ . I use F to denote the corresponding cumulative
distribution function. In addition, I assume ci = c and ki = k for all i. The main results of
the paper can be extended to environments in which the valuations (vi ) of different agents
may follow different distributions (Fi ), and both the punishments (ci ) and verification costs
(ki ) may be different for different agents. I discuss the general asymmetric setting in Section
5.1. In this symmetric setting, there exists an optimal mechanism that is symmetric. Thus, I
focus on symmetric mechanisms in Sections 3 and 4. In what follows, I suppress the subscript
i whenever the meaning is clear.
3 Optimal Mechanism
3.1 Optimal Mechanism for Fixed ϕ
Fix ϕ = inf_v P (v). I first solve the following problem (OPT−ϕ):
$$\max_{P,Q}\ \mathbb{E}_v\left[P(v)\left(v - \frac{k}{c}\right)\right] + \frac{\varphi k}{c},$$
subject to
$$\varphi \le P(v) \le \frac{\varphi}{1-c}, \quad \forall v, \tag{IC'}$$
$$n\int_S P(v)\,dF(v) \le 1 - \left(1 - \int_S dF(v)\right)^{n}, \quad \forall S \subset V. \tag{F2}$$
Note that (OP T − ϕ) is feasible only if ϕ ≤ 1/n. To solve this problem, I approximate the
continuum type space with a finite partition, characterize an optimal mechanism in the finite
model and take limits. Later on, I show the limiting mechanism is optimal in the continuum
model.
3.1.1 Finite Case
Fix an integer m ≥ 2. For t = 1, . . . , m, let
$$v^t := \underline{v} + \frac{(2t-1)(\bar v - \underline v)}{2m}, \qquad f^t := F\!\left(\underline v + \frac{t(\bar v - \underline v)}{m}\right) - F\!\left(\underline v + \frac{(t-1)(\bar v - \underline v)}{m}\right).$$
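For instance (my own sketch; the uniform distribution is a hypothetical example), the grid points and cell masses can be computed directly from these definitions:

```python
def discretize(F, v_lo, v_hi, m):
    """Midpoint grid v^t and cell masses f^t built from the c.d.f. F."""
    vts = [v_lo + (2 * t - 1) * (v_hi - v_lo) / (2 * m) for t in range(1, m + 1)]
    fts = [F(v_lo + t * (v_hi - v_lo) / m) - F(v_lo + (t - 1) * (v_hi - v_lo) / m)
           for t in range(1, m + 1)]
    return vts, fts

# Uniform F on [0, 1]: an evenly spaced grid with mass 1/m per cell.
vts, fts = discretize(lambda v: v, 0.0, 1.0, 5)
assert abs(sum(fts) - 1.0) < 1e-12
assert all(abs(ft - 0.2) < 1e-12 for ft in fts)
```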
Consider the finite model in which vi can take only m possible different values, i.e., vi ∈
{v 1 , . . . , v m } and the probability mass function satisfies f (v t ) = f t for t = 1, . . . , m. The
corresponding problem of (OP T − ϕ) in the finite model, denoted by (OP T m − ϕ), is given
by:
$$\max_{P}\ \sum_{t=1}^{m} f^t P^t\left(v^t - \frac{k}{c}\right) + \frac{\varphi k}{c},$$
subject to
$$\varphi \le P^t \le \frac{\varphi}{1-c}, \quad \forall t, \tag{IC'}$$
$$n\sum_{t\in S} f^t P^t \le 1 - \left(\sum_{t\notin S} f^t\right)^{n}, \quad \forall S \subset \{1,\dots,m\}. \tag{F2}$$
To solve (OPTm−ϕ), I first rewrite it as a polymatroid optimization problem. Define G(S) := 1 − (∑_{t∉S} f^t)^n and H(S) := G(S) − nϕ ∑_{t∈S} f^t for all S ⊂ {1, . . . , m}. Define z^t := f^t (P^t − ϕ) for all t = 1, . . . , m. Clearly, P^t ≥ ϕ if and only if z^t ≥ 0 for all t = 1, . . . , m. Using these notations, constraint (F2) can be rewritten as
$$n\sum_{t\in S} z^t \le H(S), \quad \forall S \subset \{1,\dots,m\}.$$
It is easy to verify that H(∅) = 0 and H is submodular. However, H is not monotonic. Define H̄(S) := min_{S′⊃S} H(S′). Then H̄(∅) = 0, and H̄ is non-decreasing and submodular. Furthermore, by Lemma 2,
$$\left\{z \;\middle|\; z \ge 0,\ n\sum_{t\in S} z^t \le H(S),\ \forall S\right\} = \left\{z \;\middle|\; z \ge 0,\ n\sum_{t\in S} z^t \le \bar H(S),\ \forall S\right\}.$$
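A brute-force check on a small hypothetical instance (my own construction) illustrates these claims: H is submodular but not monotone, while the minorant H̄ is non-decreasing:

```python
from itertools import combinations

def subsets(universe):
    for r in range(len(universe) + 1):
        yield from (frozenset(s) for s in combinations(universe, r))

def make_H(f, n, phi):
    """H(S) = 1 - (sum of f^t outside S)^n - n*phi*(sum of f^t inside S)."""
    def H(S):
        out = sum(ft for t, ft in enumerate(f) if t not in S)
        return 1.0 - out ** n - n * phi * sum(f[t] for t in S)
    return H

m, n, phi = 4, 3, 0.25
f = [0.25] * m                      # hypothetical cell masses
H = make_H(f, n, phi)
universe = range(m)

# Not monotone: adding the last element can decrease H.
assert H(frozenset(universe)) < H(frozenset([0, 1, 2]))

# Submodular: marginal gains H(S + t) - H(S) shrink as S grows.
for S in subsets(universe):
    for T in subsets(universe):
        if S <= T:
            for t in universe:
                if t not in T:
                    assert H(S | {t}) - H(S) >= H(T | {t}) - H(T) - 1e-12

# The monotone minorant used in the text (minimum over supersets).
def H_bar(S):
    return min(H(T) for T in subsets(universe) if S <= T)

for S in subsets(universe):
    for t in universe:
        if t not in S:
            assert H_bar(S | {t}) >= H_bar(S) - 1e-12
```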
Thus, (OPTm−ϕ) can be rewritten as (OPTm1−ϕ):
$$\max_{z}\ \sum_{t=1}^{m} z^t\left(v^t - \frac{k}{c}\right) + \varphi\sum_{t=1}^{m} f^t v^t,$$
subject to
$$0 \le z^t \le \frac{c\varphi f^t}{1-c}, \quad \forall t, \tag{IC'}$$
$$n\sum_{t\in S} z^t \le \bar H(S), \quad \forall S \subset \{1,\dots,m\}. \tag{F2}$$
Without the upper bound on z^t in (IC′), this is a standard polymatroid optimization problem and can be solved using the greedy algorithm. With the upper bound, this is a weighted polymatroid intersection problem, and there exist efficient algorithms for solving it when the weights (v^t − k/c) are rational. See, for example, Cook et al. (2011) and Frank (2011). In this paper, I solve the problem using a “guess-and-verify” approach. Though we cannot directly apply the greedy algorithm to (OPTm1−ϕ), it is not hard to conjecture the optimal solution. Intuitively, z^t = 0 if v^t < k/c. Consider v^t ≥ k/c. Since H̄ is submodular and the upper bound on z^t is linear in f^t, the solution found by the greedy algorithm potentially violates the upper bound for large t. Thus, it is natural to conjecture that there exists a cutoff t̄ such that for t above the cutoff, (IC′) binds, i.e., z^t = cϕf^t /(1 − c); and for t below the cutoff, (F2) binds.
Formally, let S^t = {t, . . . , m} for all t = 1, . . . , m, and S^{m+1} = ∅. If ϕ ≤ (1 − c)/n, let t̄ := 0; otherwise, there exists a unique t̄ ∈ {1, . . . , m + 1} such that
$$\bar H(S^{\bar t}) \le n\sum_{\tau=\bar t}^{m}\frac{c\varphi f^{\tau}}{1-c} \quad\text{and}\quad \bar H(S^{\bar t+1}) > n\sum_{\tau=\bar t+1}^{m}\frac{c\varphi f^{\tau}}{1-c}.$$
Define
$$z^{t} := \begin{cases} \dfrac{c\varphi f^{t}}{1-c} & \text{if } t > \bar t, \\[6pt] \dfrac{1}{n}\bar H(S^{\bar t}) - \displaystyle\sum_{\tau=\bar t+1}^{m}\frac{c\varphi f^{\tau}}{1-c} & \text{if } t = \bar t, \\[6pt] \dfrac{1}{n}\left(\bar H(S^{t}) - \bar H(S^{t+1})\right) & \text{if } t < \bar t, \end{cases}$$
and
$$\hat z^{t} := \begin{cases} z^{t} & \text{if } v^{t} \ge \dfrac{k}{c}, \\[4pt] 0 & \text{if } v^{t} < \dfrac{k}{c}. \end{cases} \tag{7}$$
Then ẑ^t is feasible, which follows from the fact that H̄(∅) = 0 and H̄ is non-decreasing and submodular. Furthermore, I can verify the optimality of ẑ^t by the duality theorem:
Lemma 1 ẑ^t defined in (7) is an optimal solution to (OPTm1−ϕ).
Let P̄^t := z^t /f^t + ϕ and
$$P_m^{t} := \begin{cases} \bar P^{t} & \text{if } v^{t} > \dfrac{k}{c}, \\[4pt] \varphi & \text{if } v^{t} < \dfrac{k}{c}. \end{cases} \tag{8}$$
The following corollary directly follows from Lemma 1:
Corollary 1 Pm defined in (8) is an optimal solution to (OP T m − ϕ).
3.1.2 Continuum Case
I characterize the optimal solution in the continuum case by taking m to infinity. Let v^l be such that F (v^l )^{n−1} = nϕ and
$$v^{u} := \inf\left\{v \;\middle|\; 1 - F(v)^{n} - \frac{n\varphi}{1-c}\big[1 - F(v)\big] \ge 0\right\}. \tag{9}$$
Note that if ϕ ≤ (1 − c)/n, then v^u = v̲. Let P̄ be defined as follows: If v^l < v^u , let
$$\bar P(v) := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } v \ge v^{u}, \\[4pt] F(v)^{n-1} & \text{if } v^{l} < v < v^{u}, \\[4pt] \varphi & \text{if } v \le v^{l}. \end{cases}$$
If v^l ≥ v^u , let
$$\hat v := \inf\left\{v \;\middle|\; 1 - n\varphi F(v) - \frac{n\varphi}{1-c}\big[1 - F(v)\big] \ge 0\right\} \in [v^{u}, v^{l}],$$
and
$$\bar P(v) := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } v \ge \hat v, \\[4pt] \varphi & \text{if } v < \hat v. \end{cases} \tag{10}$$
Finally, let
$$P^{*}(v) := \begin{cases} \bar P(v) & \text{if } v \ge \dfrac{k}{c}, \\[4pt] \varphi & \text{if } v < \dfrac{k}{c}. \end{cases} \tag{11}$$
I show in the appendix that P* is the “pointwise limit” of {Pm }. Moreover, P* is an optimal solution to (OPT−ϕ).
Theorem 1 P ∗ defined in (11) is an optimal solution to (OP T − ϕ).
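As a concrete illustration (my own sketch; the uniform F on [0, 1] and the parameter values are hypothetical, and the reading of (9) is mine), the two thresholds are easy to compute numerically:

```python
def bisect_up(g, lo, hi, tol=1e-10):
    """Smallest root of g on [lo, hi], assuming g(lo) < 0 and g crosses upward."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical parameters with n(1 - c) < 1 and uniform F on [0, 1].
n, c, phi = 4, 0.9, 0.06

# Lower threshold: F(v_l)^(n-1) = n*phi  =>  v_l = (n*phi)^(1/(n-1)).
v_l = (n * phi) ** (1.0 / (n - 1))

# Upper threshold from (9): smallest v with 1 - F(v)^n - (n*phi/(1-c))(1 - F(v)) >= 0.
v_u = bisect_up(lambda v: 1 - v ** n - n * phi / (1 - c) * (1 - v), 0.0, 1.0)

assert v_l < v_u  # the two-threshold case for these parameters
```

For these numbers v_l ≈ 0.62 and v_u ≈ 0.66, so the allocation is efficient only on a narrow middle band, consistent with the heavy pooling that a small ϕ/(1 − c) cap forces.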
3.2 Optimal ϕ
I complete the characterization of the optimal mechanism by characterizing the optimal
ϕ. First, if inspection is sufficiently costly or the principal’s ability to punish an agent is
sufficiently limited, then pure randomization is optimal.
Theorem 2 If v̄ − k/c ≤ Ev [v], then pure randomization, i.e., P** = 1/n, is optimal.
To make the problem more interesting, in what follows, I assume:
Assumption 1 v̄ − k/c > Ev [v].
Recall that given ϕ, v^l is uniquely pinned down by F (v^l )^{n−1} = nϕ and v^u is uniquely pinned down by (9). Define v* and v** by equations (12) and (13), respectively:
$$\mathbb{E}_v[v] - \mathbb{E}_v[\max\{v, v^{*}\}] + \frac{k}{c} = 0, \tag{12}$$
$$\mathbb{E}_v[v] - \mathbb{E}_v[\min\{v, v^{**}\}] + (1-c)\left(\mathbb{E}_v[v] - \mathbb{E}_v[\max\{v, v^{**}\}] + \frac{k}{c}\right) = 0. \tag{13}$$
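Both equations are one-dimensional and monotone in the unknown, so they are easy to solve numerically. As an illustration (my own check, assuming a uniform F on [0, 1], for which Ev [v] = 1/2, Ev [max{v, a}] = (1 + a²)/2 and Ev [min{v, a}] = a − a²/2, with hypothetical k and c):

```python
def bisect_dec(h, lo, hi, tol=1e-10):
    """Root of a decreasing function h on [lo, hi] with h(lo) > 0 > h(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k, c = 0.1, 0.8
rho = k / c  # satisfies Assumption 1 for uniform F since rho < 1/2

# (12): E[v] - E[max{v, a}] + k/c = 0; for uniform F this gives v* = sqrt(2*rho).
v_star = bisect_dec(lambda a: 0.5 - (1 + a * a) / 2 + rho, 0.0, 1.0)

# (13): E[v] - E[min{v, a}] + (1 - c)(E[v] - E[max{v, a}] + k/c) = 0.
v_2star = bisect_dec(
    lambda a: 0.5 - (a - a * a / 2) + (1 - c) * (0.5 - (1 + a * a) / 2 + rho),
    0.0, 1.0,
)

assert abs(v_star - (2 * rho) ** 0.5) < 1e-6
assert v_2star > v_star  # consistent with v** > v* in the text
```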
They are well defined under Assumption 1. Furthermore, v** > v* ≥ k/c. Finally, let
$$v^{\natural} := \sup\left\{v \;\middle|\; \frac{F(v)^{n-1}\big(1 - F(v)\big)}{1-c} - 1 + F(v)^{n} \le 0\right\}. \tag{14}$$
The optimal mechanism is characterized by the following theorem:
Theorem 3 Suppose Assumption 1 holds. There are three cases.
1. If F (v*)^{n−1} ≥ n(1 − c), then the optimal ϕ* = F (v*)^{n−1} /n and the following allocation rule is optimal:
$$P^{**}(v) := \begin{cases} F(v)^{n-1} & \text{if } v \ge v^{*}, \\ \varphi^{*} & \text{if } v < v^{*}. \end{cases}$$
2. If F (v*)^{n−1} < n(1 − c) and v** ≤ v♮ , then the optimal ϕ* = (1 − c)/[n(1 − cF (v**))] and the following allocation rule is optimal:
$$P^{**}(v) := \begin{cases} \dfrac{\varphi^{*}}{1-c} & \text{if } v \ge v^{**}, \\[4pt] \varphi^{*} & \text{if } v < v^{**}. \end{cases}$$
3. If F (v*)^{n−1} < n(1 − c) and v** > v♮ , then the optimal ϕ* is defined by
$$\mathbb{E}_v[v] - \mathbb{E}_v[\min\{v, v^{u}(\varphi^{*})\}] + (1-c)\left(\mathbb{E}_v[v] - \mathbb{E}_v[\max\{v, v^{l}(\varphi^{*})\}] + \frac{k}{c}\right) = 0, \tag{15}$$
and the following allocation rule is optimal:
$$P^{**}(v) := \begin{cases} \dfrac{\varphi^{*}}{1-c} & \text{if } v \ge v^{u}(\varphi^{*}), \\[4pt] F(v)^{n-1} & \text{if } v^{l}(\varphi^{*}) < v < v^{u}(\varphi^{*}), \\[4pt] \varphi^{*} & \text{if } v \le v^{l}(\varphi^{*}). \end{cases}$$
To understand the result, consider the following implementation of the optimal mechanism. There are two thresholds. I abuse notation here and denote them by v^l and v^u with v̲ ≤ v^l ≤ v^u ≤ v̄. If every agent reports a value below v^l , then an agent is selected uniformly at random and receives the good, and no one is inspected. If any agent reports a value above v^l but all reports are below v^u , then the agent with the highest reported value receives the good, is inspected with some probability (proportional to 1/c) and is penalized if he is found to have lied. If any agent reports a value above v^u , then an agent is selected uniformly at random among all the agents whose reported value is above v^u ; he receives the good, is inspected with some probability (proportional to 1/c) and is penalized if he is found to have lied. I call a mechanism a one-threshold mechanism if v^u = v̄, a shortlisting mechanism if v^l = v^u , and a two-threshold mechanism if v^l < v^u .
Consider the impact of a reduction in v^l . Intuitively, this raises the total inspection cost as well as the allocation efficiency at the bottom of the value distribution. After some algebra, one can verify that the increase in inspection cost is proportional to k/c and the increase in allocation efficiency is proportional to Ev [v] − Ev [max{v, v^l }]. However, as v^l decreases, agents with low v’s are worse off and have stronger incentives to misreport as a high type. Since the principal’s ability to penalize an agent is limited, more pooling at the top, i.e., a lower v^u , is required to restore incentive compatibility. This reduces the allocation efficiency at the top by an amount proportional to [Ev [min{v, v^u }] − Ev [v]]/(1 − c). In an optimal mechanism, the marginal gain from a reduction in v^l (proportional to the left-hand side of (16)) must equal the marginal cost (proportional to the right-hand side of (16)):
$$\mathbb{E}_v[v] - \mathbb{E}_v[\max\{v, v^{l}\}] = \frac{\mathbb{E}_v[\min\{v, v^{u}\}] - \mathbb{E}_v[v]}{1-c} + \frac{k}{c}. \tag{16}$$
This is precisely the case captured by the third part of Theorem 3 (compare (16) with (15)). If the limited punishment constraint does not bind, i.e., v^u = v̄, there is no efficiency loss at the top and [Ev [min{v, v^u }] − Ev [v]]/(1 − c) = 0. In this case, (16) becomes (12) (v^l = v*) and the optimal mechanism is characterized by the first part of Theorem 3. If the principal’s ability to punish an agent is sufficiently limited so that v^u = v^l (= v**), then (16) becomes (13) and the optimal mechanism is characterized by the second part of Theorem 3.
Remark 2 v* is strictly increasing in k/c. If n(1 − c) < 1, there is no pooling at the top for k > 0 sufficiently large. If k = 0, then v* = v̲ and F (v*)^{n−1} = 0 < n(1 − c) for any 0 < c < 1. That is, with limited punishment, there is always pooling at the top if verification is free (Mylovanov and Zapechelnyuk (2014)).
4 Properties of Optimal Mechanism
The properties of optimal mechanisms crucially depend on the number of agents (n), the
verification cost (k) and the level of punishment (c). Let ρ := k/c ≥ 0. I call ρ the effective
verification cost. The effective verification cost, ρ, is strictly decreasing in c. This is because
a smaller c implies a lower level of punishment, which makes inspection less attractive, or
essentially more costly.
Let n*(ρ, c) < 1/(1 − c) be defined by
$$F(v^{*})^{n^{*}(\rho,c)-1} = n^{*}(\rho, c)(1-c), \tag{17}$$
where v* is defined by
$$\mathbb{E}_v[v] - \mathbb{E}_v[\max\{v, v^{*}\}] + \rho = 0. \tag{12}$$
Since v* is independent of n, by Theorem 3, one-threshold mechanisms are optimal if and only if n ≤ n*(ρ, c). Intuitively, for fixed v*, an agent whose type is below v* gets the object with probability
$$\varphi^{*} = \frac{1}{n}F(v^{*})^{n-1},$$
which is strictly decreasing in n. That is, a low type agent is worse off and has a stronger incentive to misreport as a high type agent when the number of agents, n, increases. For n sufficiently large, the (IC) constraint is violated without pooling at the top.
Since v* is strictly increasing in ρ, the left-hand side of (17) is strictly decreasing in n, and the right-hand side of (17) is strictly increasing in n, n* is strictly increasing in ρ. For fixed ρ, v* is independent of c, but the right-hand side of (17) is strictly decreasing in c. Hence, n* is strictly increasing in c.
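Since the condition F (v*)^{n−1} ≥ n(1 − c) of Theorem 3 holds exactly for n up to the cutoff, one can locate n* by a simple integer scan (my own sketch; the uniform F and the parameter values are hypothetical):

```python
def n_star(F_at_v_star, c, n_max=1000):
    """Largest n with F(v*)^(n-1) >= n(1-c).

    One-threshold mechanisms are optimal for n up to this cutoff; the
    condition is downward closed in n because the left side falls and the
    right side rises as n grows."""
    best = 0
    for n in range(1, n_max + 1):
        if F_at_v_star ** (n - 1) >= n * (1 - c):
            best = n
    return best

# Uniform F on [0, 1] with rho = 0.125 gives v* = 0.5, so F(v*) = 0.5.
assert n_star(0.5, 0.95) == 3
```

Raising c (harsher punishment) enlarges the one-threshold region, in line with the comparative statics above: n_star(0.5, 0.99) exceeds n_star(0.5, 0.95).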
Let n**(ρ, c) < 1/(1 − c) be defined by
$$\frac{1 - F(v^{**})^{n^{**}(\rho,c)}}{1 - F(v^{**})} = \frac{F(v^{**})^{n^{**}(\rho,c)-1}}{1-c}, \tag{18}$$
where v** is given by
$$\mathbb{E}_v[v] - \mathbb{E}_v[\min\{v, v^{**}\}] + (1-c)\left(\mathbb{E}_v[v] - \mathbb{E}_v[\max\{v, v^{**}\}] + \frac{k}{c}\right) = 0. \tag{13}$$
Then v** ≤ v♮ if and only if n ≥ n**(ρ, c). By Theorem 3, shortlisting mechanisms are optimal if and only if n ≥ n**(ρ, c). Intuitively, a low type agent is worse off and has a stronger incentive to misreport as a high type agent when the number of agents, n, increases. Thus, the pooling areas at both the bottom and the top of the value distribution are enlarged to ensure the mechanism is incentive compatible and to save on inspection costs. Eventually, for a sufficiently large number of agents, the two pooling areas meet and there is a unique threshold such that all agents whose values are above the threshold are pooled together and all agents whose values are below the threshold are pooled together.
Recall that v** > v*. Hence,
$$\frac{F(v^{**})^{n^{*}(\rho,c)-1}}{1-c} > \frac{F(v^{*})^{n^{*}(\rho,c)-1}}{1-c} = n^{*}(\rho,c) \ge \frac{1 - F(v^{**})^{n^{*}(\rho,c)}}{1 - F(v^{**})}.$$
Since the left-hand side of (18) is strictly increasing in n, and the right-hand side of (18) is strictly decreasing in n, we have n**(ρ, c) > n*(ρ, c). It is easy to see that v**(ρ, c) is strictly increasing in both ρ and c, and independent of n. I show in Lemma 4 that if n(1 − c) < 1 then v♮ is strictly increasing in n and strictly decreasing in c. Hence n**(ρ, c) is strictly increasing in both ρ and c. These results are summarized by the following corollary:
Corollary 2 Given k > 0 and c ∈ (0, 1), there exists 0 < n∗ (ρ, c) < n∗∗ (ρ, c) < 1/(1 − c)
such that the following is true:
1. If n ≤ n*(ρ, c), then one-threshold mechanisms are optimal; if n*(ρ, c) < n < n**(ρ, c), then two-threshold mechanisms are optimal; if n ≥ n**(ρ, c), then shortlisting mechanisms are optimal.
2. n∗ (ρ, c) and n∗∗ (ρ, c) are strictly increasing in ρ and c.
3. v*(n, ρ, c) is strictly increasing in ρ and independent of n and c. v** is strictly increasing in ρ and c and independent of n. If n*(ρ, c) < n < n**(ρ, c), then v^l (n, ρ, c) is strictly increasing in n, ρ and c, and v^u (n, ρ, c) is strictly decreasing in n and strictly increasing in ρ and c.
The impact of k is straightforward. As k increases, inspection becomes more costly. The optimal mechanism then features more pooling at the bottom (measured by ϕ) to save on inspection costs. This relaxes the upper bound on P, which leads to less pooling at the top, or none at all. The impact of c is ambiguous. On the one hand, given the amount of pooling at the bottom, i.e., ϕ, an increase in c relaxes the upper bound ϕ/(1 − c) on P (see (5)), which implies less pooling at the top. On the other hand, an increase in c makes verification effectively cheaper (ρ = k/c falls). In contrast to the case of an increase in k, this change decreases the amount of pooling at the bottom, i.e., ϕ, and thereby tightens the upper bound on P. As a result, there may be more pooling at the top. The second channel is absent if verification is free, i.e., k = 0. This is further illustrated by the following numerical example.
Example 1 Consider a numerical example in which the vi ’s are uniformly distributed on [0, 1]. There are n = 8 agents. The inspection cost is k = 0.4. I abuse notation a bit and redefine v^l = v^u = v** if v^l > v^u . Figure 1 plots v^l and v^u as functions of c. Observe that the change in v^u is not monotonic. As c increases, the pooling area at the top first increases and then decreases.
Finally, a careful examination of (17) and (12) proves the following corollary:
Corollary 3 limc→1 n∗ (k/c, c) = ∞ and limk→0 n∗ (k/c, c) = 0.
Corollary 3 shows that as the principal’s ability to punish an agent becomes unlimited,
the model collapses to Ben-Porath et al. (2014) and as the inspection cost diminishes, the
model collapses to Mylovanov and Zapechelnyuk (2014).
[Figure 1 shows v^l and v^u plotted against c ∈ [0.8, 1]; graphic omitted.]
Figure 1: The impact of the level of punishment (c)
5 Extensions
5.1 Asymmetric Environment
In this section, I consider the general model with asymmetric agents. Fix ϕi = inf_{vi} Pi (vi ). I first solve the following problem (OPTH−ϕ):
$$\max_{P,Q}\ \sum_{i=1}^{n}\left\{\mathbb{E}_{v_i}\left[P_i(v_i)\left(v_i - \frac{k_i}{c_i}\right)\right] + \frac{\varphi_i k_i}{c_i}\right\},$$
subject to
$$\varphi_i \le P_i(v_i) \le \frac{\varphi_i}{1-c_i}, \quad \forall v_i, \tag{IC'}$$
$$0 \le Q_i(v_i) \le 1, \quad \forall v_i, \tag{F1}$$
$$\sum_i \int_{S_i} P_i(v_i)\,dF_i(v_i) \le 1 - \prod_i\left(1 - \int_{S_i} dF_i(v_i)\right), \quad \forall S_i \subset V_i. \tag{F2}$$
Clearly, (OPTH−ϕ) is feasible only if ∑i ϕi ≤ 1. As in the symmetric case, I approximate the continuum type space with a finite partition, solve for an optimal mechanism in the finite model and take limits. The following theorem gives an optimal solution to (OPTH−ϕ):
Theorem 4 There exist d^l and d^u_i for i = 1, . . . , n such that P* defined by
$$P_i^{*}(v_i) := \begin{cases} \bar P_i(v_i) & \text{if } v_i > \dfrac{k_i}{c_i}, \\[4pt] \varphi_i & \text{if } v_i < \dfrac{k_i}{c_i}, \end{cases} \tag{19}$$
where
$$\bar P_i(v_i) := \begin{cases} \dfrac{\varphi_i}{1-c_i} & \text{if } v_i > d_i^{u} + \dfrac{k_i}{c_i}, \\[6pt] \displaystyle\prod_{j \ne i,\; d_j^{u} \ge v_i - \frac{k_i}{c_i}} F_j\!\left(v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\right) & \text{if } d^{l} + \dfrac{k_i}{c_i} < v_i < d_i^{u} + \dfrac{k_i}{c_i}, \\[6pt] \varphi_i & \text{if } v_i < d^{l} + \dfrac{k_i}{c_i}, \end{cases} \tag{20}$$
is an optimal solution to (OPTH−ϕ).
Note that threshold mechanisms are still optimal. However, even though there is a unique lower threshold $d^l$ such that all agents whose "net" value $v_i - k_i/c_i$ is below the threshold are pooled, there can be up to $n$ distinct upper thresholds $d^u_i$ ($i = 1, \ldots, n$). To illustrate how the mechanism works, assume $d^u_1 = \cdots = d^u_j > d^u_{j+1} = \cdots = d^u_n$. If there exists some agent $i$ ($1 \le i \le j$) whose "net" value $v_i - k_i/c_i$ is above $d^u_i$, then one such agent is selected at random and receives the good. If $v_i - k_i/c_i < d^u_i$ for all $1 \le i \le j$ but $v_i - k_i/c_i \ge d^u_{j+1}$ for some $1 \le i \le j$, then the agent with the highest reported "net" value among the first $j$ agents receives the good. If $v_i - k_i/c_i < d^u_{j+1}$ for all $1 \le i \le j$ but $v_i - k_i/c_i \ge d^u_{j+1}$ for some $j+1 \le i \le n$, then one agent is selected at random among all the agents whose reported "net" value is above $d^u_{j+1}$ and receives the good. If $v_i - k_i/c_i < d^u_{j+1}$ for all $i$ but $v_i - k_i/c_i \ge d^l$ for some $i$, then the agent with the highest reported "net" value receives the good. If $v_i - k_i/c_i < d^l$ for all $i$, then one agent is selected at random and receives the good.
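The two-tier selection rule just described can be sketched in code. This is a minimal illustration, not the formal mechanism: the split into a favored block (agents $1, \ldots, j$ sharing the higher threshold) and the rest, the function name, and all numeric values in the example are hypothetical.

```python
import random

def allocate(net, j, du_hi, du_lo, dl, rng=random):
    """Select a winner from reported 'net' values v_i - k_i/c_i.

    Agents 0..j-1 share the higher upper threshold du_hi; the remaining
    agents share du_lo; dl is the common lower threshold
    (du_hi > du_lo > dl assumed).  Returns the winner's index.
    """
    n = len(net)
    top = [i for i in range(j) if net[i] >= du_hi]
    if top:                      # favored agents above their own threshold:
        return rng.choice(top)   # pooled, winner drawn at random
    mid_fav = [i for i in range(j) if net[i] >= du_lo]
    if mid_fav:                  # a favored agent above du_lo wins for sure;
        return max(mid_fav, key=lambda i: net[i])  # highest report among them
    others = [i for i in range(j, n) if net[i] >= du_lo]
    if others:                   # non-favored agents above du_lo are pooled
        return rng.choice(others)
    above = [i for i in range(n) if net[i] >= dl]
    if above:                    # between dl and du_lo: highest report wins
        return max(above, key=lambda i: net[i])
    return rng.randrange(n)      # all below dl: uniform random
```

The sketch makes the pooling pattern visible: randomization occurs exactly where reports are ironed (above an upper threshold or below $d^l$), and the highest report wins in between.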
Because of the complication of pooling areas at the top, it is much harder to find an optimal solution to $(OPTH-\varphi)$. Specifically, the $d^u_i$'s are solved recursively from the largest to the smallest. Furthermore, to characterize the optimal $\vec{\varphi} := (\varphi_1, \ldots, \varphi_n)$ without prior knowledge of which set of agents share the same upper threshold, one must consider $2^n - 1$ different cases.² Thus, I leave the full characterization of optimal mechanisms for future research.
Though it is extremely hard to characterize the optimal $\vec{\varphi}$ due to the complication of pooling areas at the top, I give a partial characterization of optimal mechanisms when the upper bounds on $P_i$ do not bind, i.e., $d^u_i = \bar{v}_i - k_i/c_i$ for all $i$. By an argument similar to that in the proof of Theorem 2, one can show that pure randomization is optimal if inspection is sufficiently costly or the principal's ability to punish an agent is sufficiently limited, i.e., $\bar{v}_i - k_i/c_i \le E_{v_i}[v_i]$ for all $i$. To make the problem interesting, in what follows, I assume:

Assumption 2 $\bar{v}_i - k_i/c_i > E_{v_i}[v_i]$ for some $i$.
If $d^u_i = \bar{v}_i - k_i/c_i$ for all $i$, then $d^l \ge \max_j (\underline{v}_j - k_j/c_j)$ is such that
\[
\sum_{i=1}^n \varphi_i F_i\!\left(d^l + \frac{k_i}{c_i}\right) = \prod_{i=1}^n F_i\!\left(d^l + \frac{k_i}{c_i}\right).
\]
Lemma 17 shows that there exists a unique $d^l$ satisfying the above equation. Note that unless $\varphi_i = 0$ for all $i$, we have $d^l > \max_j (\underline{v}_j - k_j/c_j)$. Clearly, in optimum, $\varphi_i > 0$ for some $i$. Hence, $d^l > \max_j (\underline{v}_j - k_j/c_j)$. Let $d^*_i$ ($i = 1, \ldots, n$) be defined by
\[
E_{v_i}[v_i] - E_{v_i}\!\left[\max\left\{v_i, d^*_i + \frac{k_i}{c_i}\right\}\right] + \frac{k_i}{c_i} = 0, \tag{21}
\]
and $d^{l*} := \max_i d^*_i$.
² Assume, without loss of generality, that $d^u_1 \ge \cdots \ge d^u_n$. If there are $i$ distinct upper thresholds, then there are $C_n^i$ possibilities. In total, there are $\sum_{i=1}^n C_n^i = 2^n - 1$ possibilities to consider.
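The equation pinning down $d^l$ can be solved numerically by bisection, since the gap between its two sides changes sign exactly once. The sketch below is a hypothetical illustration: the uniform distributions and the parameter values are chosen so that the fixed point has a closed form ($F(d^l + k/c) = 0.5$, hence $d^l = 0.4$).

```python
import math

def solve_dl(phi, kc, F, lo, hi, tol=1e-12):
    """Bisect for d^l solving sum_i phi_i F_i(d^l + k_i/c_i)
       = prod_i F_i(d^l + k_i/c_i), where kc[i] = k_i/c_i."""
    def gap(d):
        vals = [F[i](d + kc[i]) for i in range(len(phi))]
        return sum(p * v for p, v in zip(phi, vals)) - math.prod(vals)
    # gap > 0 for small d (the product vanishes faster than the sum)
    # and gap < 0 for large d (it tends to sum(phi) - 1 < 0).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Two agents, values uniform on [0, 1], k_i/c_i = 0.1, phi = (0.3, 0.2):
uniform = lambda x: min(max(x, 0.0), 1.0)
dl = solve_dl([0.3, 0.2], [0.1, 0.1], [uniform, uniform], lo=0.0, hi=0.9)
```

With these numbers the equation reduces to $0.5\,x = x^2$ with $x = F(d^l + 0.1)$, so $x = 0.5$ and $d^l = 0.4$.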
Theorem 5 Suppose Assumption 2 holds. If
\[
\sum_{i=1}^n (1 - c_i) \left[\prod_{j \neq i} F_j\!\left(\bar{v}_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\right)\right] F_i\!\left(d^{l*} + \frac{k_i}{c_i}\right) \le \prod_{i=1}^n F_i\!\left(d^{l*} + \frac{k_i}{c_i}\right),
\]
then the set of optimal $\vec{\varphi}$ is the convex hull of
\[
\left\{ \vec{\varphi} \,\middle|\, \begin{aligned} &\varphi_i = (1 - c_i) \prod_{j \neq i} F_j\!\left(\bar{v}_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\right) \ \forall i \neq i^*, \\ &\varphi_{i^*} = \frac{\prod_{i=1}^n F_i\!\left(d^{l*} + \frac{k_i}{c_i}\right) - \sum_{i \neq i^*} (1-c_i) \prod_{j \neq i} F_j\!\left(\bar{v}_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\right) F_i\!\left(d^{l*} + \frac{k_i}{c_i}\right)}{F_{i^*}\!\left(d^{l*} + \frac{k_{i^*}}{c_{i^*}}\right)}, \ i^* \in \arg\max_i d^*_i \end{aligned} \right\}.
\]
For each optimal $\vec{\varphi}^*$, the following allocation rule is optimal:
\[
P_i^{**}(v_i) := \begin{cases} \displaystyle\prod_{j \neq i} F_j\!\left(v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\right) & \text{if } v_i \ge d^{l*} + \frac{k_i}{c_i} \\[4pt] \varphi_i^* & \text{if } v_i < d^{l*} + \frac{k_i}{c_i} \end{cases}.
\]

5.1.1 Symmetric Environment Revisited
In this section, I revisit the symmetric environment. In the symmetric environment, an optimal mechanism must satisfy $d^u_1 = \cdots = d^u_n$. To understand the intuition behind this result, note first that in the symmetric environment $d^u_i \ge d^u_j$ only if $\varphi_i \ge \varphi_j$. Assume, without loss of generality, that $d^u_1 \ge \cdots \ge d^u_n$. Consider, for simplicity, the case in which $d^u_1 > d^u_2$, which implies that $\varphi_1 > \varphi_2$. We can construct a new mechanism in which $\varphi^*_1 = \varphi^*_2 = \sum_{i=1}^2 \varphi_i / 2$ and $\varphi^*_i = \varphi_i$ for all $i \ge 3$. In this new mechanism, agents 1 and 2 share the same upper threshold $d^{u*} \in (d^u_2, d^u_1)$, and the upper thresholds of the other agents remain the same. The new mechanism improves the principal's value by allocating the good between agents 1 and 2 more efficiently when their "net" values, $v_i - k_i/c_i$, lie in $(d^u_2, d^u_1)$.

This property of optimal mechanisms facilitates our analysis of the optimal $\vec{\varphi}$. Theorem 6 below characterizes the set of all optimal $\vec{\varphi}$. Let $v^*$, $v^{**}$ and $v^\natural$ be defined by (12), (13) and (14), respectively.

Theorem 6 Suppose Assumption 1 holds. There are three cases.
1. If $F(v^*)^{n-1} \ge n(1-c)$, then the set of optimal $\vec{\varphi}$ is the convex hull of
\[
\left\{ \vec{\varphi} \,\middle|\, \varphi_{i^*} = F(v^*)^{n-1} - (n-1)(1-c), \ \varphi_j = 1 - c \ \forall j \neq i^*, \ i^* \in I \right\}.
\]
For each optimal $\vec{\varphi}^*$, the following allocation rule is optimal:
\[
P_i^{**}(v_i) := \begin{cases} F(v_i)^{n-1} & \text{if } v_i \ge v^* \\ \varphi_i^* & \text{if } v_i < v^* \end{cases}.
\]

2. If $F(v^*)^{n-1} < n(1-c)$ and $v^{**} \le v^\natural$, then the set of optimal $\vec{\varphi}$ is the convex hull of
\[
\left\{ \vec{\varphi} \,\middle|\, \begin{aligned} &\varphi_{i_j} = (1-c)F(v^{**})^{j-1} \text{ if } j \le h-1, \ \varphi_{i_h} = \frac{1-c}{1 - cF(v^{**})} - \sum_{j=1}^{h-1} (1-c)F(v^{**})^{j-1}, \\ &\varphi_{i_j} = 0 \text{ if } j \ge h+1, \text{ and } (i_1, \ldots, i_n) \text{ is a permutation of } (1, \ldots, n) \end{aligned} \right\},
\]
where $1 \le h \le n$ is such that
\[
\frac{1 - F(v^{**})^{h-1}}{1 - F(v^{**})} \le \frac{1}{1 - cF(v^{**})} < \frac{1 - F(v^{**})^h}{1 - F(v^{**})}.
\]
For each optimal $\vec{\varphi}^*$, the following allocation rule is optimal:
\[
P_i^{**}(v_i) := \begin{cases} \dfrac{\varphi_i^*}{1-c} & \text{if } v_i \ge v^{**} \\[4pt] \varphi_i^* & \text{if } v_i < v^{**} \end{cases}.
\]

3. If $F(v^*)^{n-1} < n(1-c)$ and $v^{**} > v^\natural$, then the set of optimal $\vec{\varphi}$ is the convex hull of
\[
\left\{ \vec{\varphi} \,\middle|\, \varphi_{i_j} = (1-c)F(v^u(\varphi^*))^{j-1} \ \forall j, \text{ and } (i_1, \ldots, i_n) \text{ is a permutation of } (1, \ldots, n) \right\},
\]
where $\varphi^*$ is defined by (15) and, for each $\varphi$, $v^l$ is such that $F(v^l)^{n-1} = \varphi$ and $v^u$ is defined by (9). For each optimal $\vec{\varphi}^*$, the following allocation rule is optimal:
\[
P_i^{**}(v_i) := \begin{cases} \dfrac{\varphi_i^*}{1-c} & \text{if } v_i \ge v^u(\varphi^*) \\[4pt] F(v_i)^{n-1} & \text{if } v^l(\varphi^*) < v_i < v^u(\varphi^*) \\[2pt] \varphi_i^* & \text{if } v_i \le v^l(\varphi^*) \end{cases}.
\]
Theorem 6 illustrates how limited punishment restricts the principal's ability to treat agents differently. Suppose $F(v^*)^{n-1} \ge n(1-c)$, i.e., the upper bounds do not bind. In Ben-Porath et al. (2014) ($c = 1$), there is a class of optimal mechanisms called favored-agent mechanisms. In a favored-agent mechanism, there exists a favored agent $i^*$ with $\varphi_{i^*} = F(v^*)^{n-1}$ and $\varphi_i = 0$ for any other agent $i$. However, if $c < 1$, then in an optimal mechanism no agent can receive the good with probability lower than $1 - c$, since otherwise some upper bounds on $P_i$ bind. Fixing the ratio $k/c$ so that $v^*$ remains the same, the set of optimal $\vec{\varphi}$ shrinks as $c$ becomes smaller. When $F(v^*)^{n-1} = n(1-c)$, the unique optimal $\vec{\varphi}^*$ is such that $\varphi^*_1 = \cdots = \varphi^*_n$.

Suppose $F(v^*)^{n-1} < n(1-c)$. Then the principal can again treat agents differently, but only to the extent that they share the same upper threshold. Assume, without loss of generality, that an agent with a smaller index is more favored by the principal in the sense of a larger $\varphi_i$. Then, in an optimal mechanism, the first $h$ agents cannot be favored too much, in the sense that $\sum_{i=1}^h \varphi_i \le (1-c) \sum_{i=1}^h F(v^u)^{i-1}$ for all $h = 1, \ldots, n$.
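The rank $h$ at which the $\varphi$-weights are cut off in case 2 of Theorem 6 can be computed directly from the displayed inequality. The sketch below is an illustration under assumed values of $F(v^{**})$, $c$ and $n$; the function name is hypothetical.

```python
def cutoff_rank(Fv, c, n):
    """Find h in {1,...,n} with
       (1 - Fv**(h-1))/(1 - Fv) <= 1/(1 - c*Fv) < (1 - Fv**h)/(1 - Fv),
    where Fv stands for F(v**)."""
    target = 1.0 / (1.0 - c * Fv)
    for h in range(1, n + 1):
        lo = (1.0 - Fv ** (h - 1)) / (1.0 - Fv)
        hi = (1.0 - Fv ** h) / (1.0 - Fv)
        if lo <= target < hi:
            return h
    return n

# With F(v**) = 0.5 and c = 0.5 the target is 4/3, so h = 2; the
# corresponding weights phi_{i_j} then sum to (1-c)/(1 - c*F(v**)).
Fv, c, n = 0.5, 0.5, 5
h = cutoff_rank(Fv, c, n)
phis = [(1 - c) * Fv ** (j - 1) for j in range(1, h)]   # j <= h-1
phis.append((1 - c) / (1 - c * Fv) - sum(phis))         # residual at rank h
phis += [0.0] * (n - h)                                 # j >= h+1
```

Note the residual weight at rank $h$ is nonnegative and no larger than $(1-c)F(v^{**})^{h-1}$, which is exactly what the bracketing condition on $h$ guarantees.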
5.2 Punishments Independent of Allocation

In this section, I consider a variation of the model in which I assume $k_i^\beta < \infty$ and $c_i^\beta > 0$. This means that the principal can acquire information about an agent and penalize the agent even if he does not receive the object. In general, given $p_i(v)$, it is optimal for the principal to
\[
\min_{q_i(v),\, q_i^\beta(v)} \ p_i(v)q_i(v)k_i + (1 - p_i(v))q_i^\beta(v)k_i^\beta
\]
subject to
\[
p_i(v)q_i(v)c_i(v_i) + (1 - p_i(v))q_i^\beta(v)c_i^\beta(v_i) = \bar{c}_i(v), \tag{22}
\]
where $\bar{c}_i(v) \le p_i(v)c_i(v_i) + (1 - p_i(v))c_i^\beta(v_i)$ is the expected punishment. There are three cases. If $k_i/c_i(v_i) < k_i^\beta/c_i^\beta(v_i)$, then it is optimal to set $q_i(v) = \min\{\bar{c}_i(v)/(p_i(v)c_i(v_i)), 1\}$ and $q_i^\beta(v) = \max\{0, (\bar{c}_i(v) - p_i(v)c_i(v_i))/((1 - p_i(v))c_i^\beta(v_i))\}$. Similarly, if $k_i/c_i(v_i) > k_i^\beta/c_i^\beta(v_i)$, then it is optimal to set $q_i^\beta(v) = \min\{1, \bar{c}_i(v)/((1 - p_i(v))c_i^\beta(v_i))\}$ and $q_i(v) = \max\{0, (\bar{c}_i(v) - (1 - p_i(v))c_i^\beta(v_i))/(p_i(v)c_i(v_i))\}$. Finally, if $k_i/c_i(v_i) = k_i^\beta/c_i^\beta(v_i)$, then any $q_i(v)$ and $q_i^\beta(v)$ satisfying (22) are optimal.
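The cost-minimizing inspection mix in the three cases above can be written as a small routine. The function name and the numeric check are illustrative; the formulas are those just stated.

```python
def optimal_mix(p, cbar, k, cv, kb, cb):
    """Cheapest (q, q_beta) delivering expected punishment cbar:
         p*q*cv + (1-p)*q_beta*cb = cbar,
    minimizing expected inspection cost p*q*k + (1-p)*q_beta*kb.
    Here cv, cb stand for the punishments c_i(v_i) and c_i^beta(v_i)."""
    if k / cv <= kb / cb:       # on-allocation inspection is (weakly) cheaper
        q = min(cbar / (p * cv), 1.0)
        qb = max(0.0, (cbar - p * cv) / ((1 - p) * cb))
    else:                       # off-allocation inspection is cheaper per unit
        qb = min(1.0, cbar / ((1 - p) * cb))
        q = max(0.0, (cbar - (1 - p) * cb) / (p * cv))
    return q, qb
```

The routine loads the required punishment first onto the cheaper instrument (per unit of punishment) and spills over to the other only when the first is exhausted, which is exactly the bang-bang structure of the three cases.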
For simplicity, I assume that $k_i^\beta = k_i$ and $c_i^\beta = c_i$. The results readily extend to more general cases, e.g., $k_i^\beta > k_i$ and $c_i^\beta < c_i$. Given $(p, q)$, let $P_i(v_i) := E_{v_{-i}}[p_i(v_i, v_{-i})]$ and $\hat{P}_i(v_i) := E_{v_{-i}}\left[p_i(v_i, v_{-i})q_i(v_i, v_{-i}) + (1 - p_i(v_i, v_{-i}))q_i^\beta(v_i, v_{-i})\right]$. The principal's problem can be written in the reduced form:
\[
\max_{P,Q} \ \sum_{i=1}^n E_{v_i}\left[P_i(v_i)v_i - \hat{P}_i(v_i)k_i\right],
\]
subject to
\[
P_i(v_i)b_i(v_i) \ge P_i(v_i')b_i(v_i) - \hat{P}_i(v_i')c_i(v_i), \ \forall v_i, v_i', \tag{IC}
\]
\[
0 \le \hat{P}_i(v_i) \le 1, \ \forall v_i, \tag{F1}
\]
\[
\sum_i \int_{S_i} P_i(v_i)\,dF_i(v_i) \le 1 - \prod_i \left(1 - \int_{S_i} dF_i(v_i)\right), \ \forall S_i \subset V_i. \tag{F2}
\]
For tractability, I assume $c_i(v_i) = c_i b_i(v_i)$. The (IC) constraint becomes $P_i(v_i) \ge P_i(v_i') - \hat{P}_i(v_i')c_i$ for all $v_i$ and $v_i'$. This constraint holds if and only if
\[
\varphi_i \ge P_i(v_i') - \hat{P}_i(v_i')c_i, \ \forall v_i'. \tag{23}
\]
Since $\hat{P}_i(v_i') \le 1$, (23) holds only if
\[
P_i(v_i') \le \varphi_i + c_i, \ \forall v_i'. \tag{24}
\]
Suppose (24) holds; then it is optimal to set $\hat{P}_i(v_i) = (P_i(v_i) - \varphi_i)/c_i$ for all $v_i \in V_i$. Substituting this into the principal's objective function yields:
\[
\sum_{i=1}^n E_{v_i}\left[P_i(v_i)\left(v_i - \frac{k_i}{c_i}\right) + \frac{\varphi_i k_i}{c_i}\right]. \tag{6}
\]
Note that, given $\{\varphi_i\}$, the principal's objective function is the same as in the case $c_i^\beta = 0$. The only difference is the upper bound on $P_i$. If the upper bound does not bind in the original problem, i.e., $\varphi_i/(1-c_i) \ge 1$, then $\varphi_i + c_i \ge 1$, which implies that the upper bound does not bind in the new problem either. The converse is also true. If the upper bound binds in the original problem, i.e., $\varphi_i/(1-c_i) < 1$, then the new upper bound is larger:
\[
\varphi_i + c_i - \frac{\varphi_i}{1-c_i} = c_i\left(1 - \frac{\varphi_i}{1-c_i}\right) > 0.
\]
Hence, any feasible solution to the original problem is also feasible in the new problem. This can be seen more clearly if we revisit (23). In (23), we have $\hat{P}_i(v_i') \le 1$. In contrast, if $c_i^\beta = 0$, which implies that $q_i^\beta = 0$, then we have $\hat{P}_i(v_i') \le P_i(v_i') \le 1$. This is intuitive, since allowing the principal to penalize an agent even if he does not receive the object clearly relaxes the principal's problem.
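The comparison of the two upper bounds can be checked numerically. The parameter values below are arbitrary illustrations; the identity in the comment is the one displayed above.

```python
def bounds(phi, c):
    """Upper bound on P_i: phi/(1-c) when only a recipient can be punished,
    phi + c when the principal can also punish non-recipients."""
    return phi / (1 - c), phi + c

phi, c = 0.3, 0.5
orig, new = bounds(phi, c)
slack = new - orig   # equals c * (1 - phi/(1-c)) > 0 whenever phi < 1 - c
```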
For simplicity, as in the main part of the paper, I assume the $v_i$'s are identically distributed, and $c_i = c$ and $k_i = k$ for all $i$. Without loss of generality, I can focus on symmetric mechanisms. In what follows, I suppress the subscript $i$ whenever the meaning is clear.
Fix $\varphi = \inf_v P(v)$. I first solve the following problem $(OPTI-\varphi)$:
\[
\max_{P,Q} \ E_v\left[P(v)\left(v - \frac{k}{c}\right)\right] + \frac{\varphi k}{c},
\]
subject to
\[
\varphi \le P(v) \le \varphi + c, \ \forall v, \tag{IC0}
\]
\[
n \int_S P(v)\,dF(v) \le 1 - \left(1 - \int_S dF(v)\right)^n, \ \forall S \subset V. \tag{F2}
\]
Note that $(OPTI-\varphi)$ is feasible only if $\varphi \le 1/n$. Let $v^l$ be such that $F(v^l)^{n-1} = n\varphi$ and redefine
\[
v^u := \inf\left\{v \mid 1 - F(v)^n - n(\varphi + c)[1 - F(v)] \ge 0\right\}. \tag{25}
\]
Redefine $\overline{P}$ as follows: If $v^l < v^u$, let
\[
\overline{P}(v) := \begin{cases} \varphi + c & \text{if } v \ge v^u \\ F(v)^{n-1} & \text{if } v^l < v < v^u \\ \varphi & \text{if } v \le v^l \end{cases}.
\]
If $v^l \ge v^u$, let
\[
\hat{v} := \inf\left\{v \mid 1 - n\varphi F(v) - n(\varphi + c)[1 - F(v)] \ge 0\right\} \in [v^u, v^l],
\]
and
\[
\overline{P}(v) := \begin{cases} \varphi + c & \text{if } v \ge \hat{v} \\ \varphi & \text{if } v < \hat{v} \end{cases}. \tag{26}
\]
Finally, redefine
\[
P^*(v) := \begin{cases} \overline{P}(v) & \text{if } v \ge \frac{k}{c} \\ \varphi & \text{if } v < \frac{k}{c} \end{cases}. \tag{27}
\]
Then we have the following theorem:

Theorem 7 $P^*$ defined in (27) is an optimal solution to $(OPTI-\varphi)$.
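The cutoffs in (25)–(27) can be computed numerically. The sketch below is a hypothetical illustration: it assumes values uniform on $[0,1]$ (so $v^l$ has a closed form) and searches for the infima on a grid, so the thresholds are approximate.

```python
def P_star(v, phi, c, k, n, F, grid=10**4):
    """Allocation rule (27) for (OPT I - phi), with F supported on [0, 1]."""
    if v < k / c:
        return phi
    vl = (n * phi) ** (1.0 / (n - 1))     # uniform F: F(v^l)^{n-1} = n*phi
    xs = [x / grid for x in range(grid + 1)]
    # v^u from (25): smallest v with 1 - F(v)^n - n(phi+c)(1 - F(v)) >= 0
    vu = min((x for x in xs
              if 1 - F(x) ** n - n * (phi + c) * (1 - F(x)) >= 0), default=1.0)
    if vl < vu:
        if v >= vu:
            return phi + c
        return F(v) ** (n - 1) if v > vl else phi
    # v^l >= v^u: single cutoff vhat from (26)
    vhat = min((x for x in xs
                if 1 - n * phi * F(x) - n * (phi + c) * (1 - F(x)) >= 0),
               default=1.0)
    return phi + c if v >= vhat else phi
```

With $n = 2$, $\varphi = 0.4$, $c = 0.3$ the rule collapses to the single cutoff $\hat{v} = 2/3$; with $\varphi = 0.05$, $c = 0.55$ the interior region where $P(v) = F(v)^{n-1}$ appears between $v^l = 0.1$ and $v^u = 0.2$.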
The proof is similar to that of Theorem 1 and omitted here. I complete the characterization of the optimal mechanism by characterizing the optimal $\varphi$. First, if inspection is sufficiently costly or the principal's ability to punish an agent is sufficiently limited, then pure randomization is optimal. In particular, Theorem 2 still applies here. To make the problem more interesting, in what follows, I assume Assumption 1 holds.
Recall that, given $\varphi$, $v^l$ is uniquely pinned down by $F(v^l)^{n-1} = n\varphi$ and $v^u$ is uniquely pinned down by (25). Let $v^*$ be defined by (12) and redefine $v^{**}$ by
\[
E_v[v] - E_v[\min\{v, v^{**}\}] + E_v[v] - E_v[\max\{v, v^{**}\}] + \frac{k}{c} = 0. \tag{28}
\]
They are well defined under Assumption 1. Furthermore, $v^{**} > v^* \ge k/c$. Finally, redefine
\[
v^\natural := \sup\left\{v \,\middle|\, \left(F(v)^{n-1} + nc\right)(1 - F(v)) - 1 + F(v)^n \le 0\right\}. \tag{29}
\]
The optimal mechanism is characterized by the following theorem:
Theorem 8 Suppose Assumption 1 holds. There are three cases.

1. If $F(v^*)^{n-1} \ge n(1-c)$, then the optimal $\varphi^* = F(v^*)^{n-1}/n$ and the following allocation rule is optimal:
\[
P^{**}(v) := \begin{cases} F(v)^{n-1} & \text{if } v \ge v^* \\ \varphi^* & \text{if } v < v^* \end{cases}.
\]

2. If $F(v^*)^{n-1} < n(1-c)$ and $v^{**} \le v^\natural$, then the optimal $\varphi^* = (1 - nc + ncF(v^{**}))/n$ and the following allocation rule is optimal:
\[
P^{**}(v) := \begin{cases} \varphi^* + c & \text{if } v \ge v^{**} \\ \varphi^* & \text{if } v < v^{**} \end{cases}.
\]

3. If $F(v^*)^{n-1} < n(1-c)$ and $v^{**} > v^\natural$, then the optimal $\varphi^*$ is defined by
\[
E_v[v] - E_v[\min\{v, v^u(\varphi^*)\}] + E_v[v] - E_v[\max\{v, v^l(\varphi^*)\}] + \frac{k}{c} = 0, \tag{30}
\]
and the following allocation rule is optimal:
\[
P^{**}(v) := \begin{cases} \varphi^* + c & \text{if } v \ge v^u(\varphi^*) \\ F(v)^{n-1} & \text{if } v^l(\varphi^*) < v < v^u(\varphi^*) \\ \varphi^* & \text{if } v \le v^l(\varphi^*) \end{cases}.
\]
The proof is similar to that of Theorem 3 and omitted here. Note that the optimal mechanism obtained here is very similar to the one obtained when the principal can only penalize an agent who receives the object, but it has different pooling thresholds when the limited-punishments constraint binds.
6 Conclusion

In this paper, I study the problem of a principal who has to allocate a good among a number of agents. There are no monetary transfers. The principal can inspect agents' reports at a cost and penalize them, but the punishments are limited. I characterize an optimal mechanism in this setting. This paper includes Ben-Porath et al. (2014) and Mylovanov and Zapechelnyuk (2014) as special cases and bridges the two. For a small number of agents, the optimal mechanism involves a pooling area only at the bottom of the value distribution, as in Ben-Porath et al. (2014). As the number of agents increases, pooling at the top is required to guarantee incentive compatibility, as in Mylovanov and Zapechelnyuk (2014).
A Omitted Proofs

A polymatroid is a polytope of the form
\[
P(f) := \left\{ x \in \mathbb{R}^E \,\middle|\, x \ge 0, \ \sum_{e \in A} x_e \le f(A) \text{ for all } A \subset E \right\}, \tag{31}
\]
where $E$ is a finite set and $f : 2^E \to \mathbb{R}_+$ is a submodular function.
Lemma 2 There exists a monotone and submodular function $g : 2^E \to \mathbb{R}_+$ with $g(\emptyset) = 0$ and $P(f) = P(g)$.

Proof. Let $g(\emptyset) := 0$ and $g(X) := \min_{A \supset X} f(A)$ for $X \neq \emptyset$. Let $X \subset Y \subset E$. If $X = \emptyset$, then $g(X) = 0 \le g(Y)$. If $X \neq \emptyset$, since $A \supset Y$ implies $A \supset X$, we have
\[
g(X) = \min_{A \supset X} f(A) \le \min_{A \supset Y} f(A) = g(Y).
\]
That is, $g$ is monotone. Let $e \in E \setminus Y$. I want to show that
\[
g(Y \cup \{e\}) - g(Y) \le g(X \cup \{e\}) - g(X).
\]
Recall that $g(\emptyset) = 0 \le \min_A f(A)$. It suffices to show that $g$ is submodular, i.e.,
\[
\min_{C \supset Y \cup \{e\}} f(C) + \min_{D \supset X} f(D) \le \min_{A \supset X \cup \{e\}} f(A) + \min_{B \supset Y} f(B).
\]
Let $A^* \in \arg\min_{A \supset X \cup \{e\}} f(A)$ and $B^* \in \arg\min_{B \supset Y} f(B)$. Then $A^* \cup B^* \supset Y \cup \{e\}$ and $A^* \cap B^* \supset X$. Hence
\[
\min_{A \supset X \cup \{e\}} f(A) + \min_{B \supset Y} f(B) = f(A^*) + f(B^*) \ge f(A^* \cup B^*) + f(A^* \cap B^*) \ge \min_{C \supset Y \cup \{e\}} f(C) + \min_{D \supset X} f(D),
\]
where the first inequality holds since $f$ is submodular. Finally, I want to show that $P(f) = P(g)$. Since $f(A) \ge g(A)$ for all $A \subset E$, we have $P(g) \subset P(f)$. Suppose there exists $x \in P(f)$ with $x \notin P(g)$. Then there exists $A \neq \emptyset$ such that $\sum_{e \in A} x_e > g(A)$. By construction, there exists $B \supset A$ such that $g(A) = f(B)$. But then $\sum_{e \in B} x_e \ge \sum_{e \in A} x_e > g(A) = f(B)$, contradicting $x \in P(f)$. Hence $P(f) = P(g)$.
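Lemma 2's construction can be checked mechanically on a small ground set. The two-element example below is hypothetical: $f$ is chosen to be submodular but not monotone, so the covering operation has something to fix.

```python
from itertools import chain, combinations

def subsets(E):
    """All subsets of E as frozensets."""
    E = sorted(E)
    return [frozenset(s) for s in
            chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))]

def monotone_cover(f, E):
    """g(X) := min over A containing X of f(A) for X != empty, g(empty) := 0,
    as in the proof of Lemma 2."""
    return {X: (0 if not X else min(f[A] for A in subsets(E) if A >= X))
            for X in subsets(E)}

# A submodular but non-monotone f on E = {1, 2}: f({1,2}) < f({1}).
E = {1, 2}
f = {frozenset(): 1, frozenset({1}): 2, frozenset({2}): 2, frozenset({1, 2}): 1}
g = monotone_cover(f, E)
```

Here $g(\{1\}) = g(\{2\}) = g(\{1,2\}) = 1$, which is monotone and submodular, and the constraint $x_1 + x_2 \le 1$ already implies $x_i \le 1$, so $P(f) = P(g)$ as the lemma asserts.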
Proof of Lemma 1. First, since $H(\emptyset) = 0$, and $H$ is non-decreasing and submodular, $\hat{z}^t$ is feasible. Next, I show that $\hat{z}^t$ is optimal.

I begin by characterizing $H$. Clearly, there exists a unique $\bar{t} \in \{1, \ldots, m\}$ such that
\[
\frac{1}{n}\left(\sum_{\tau=1}^{\bar{t}-1} f^\tau\right)^{n-1} < \varphi \le \frac{1}{n}\left(\sum_{\tau=1}^{\bar{t}} f^\tau\right)^{n-1}.
\]
It is easy to verify that³
\[
H(S^t) = \begin{cases} 1 - \left(\sum_{\tau=1}^{t-1} f^\tau\right)^n - n\varphi \sum_{\tau=t}^m f^\tau & \text{if } t > \bar{t} \\ 1 - n\varphi & \text{if } t \le \bar{t} \end{cases}. \tag{32}
\]
Let $\Delta(t) := H(S^t) - n \sum_{\tau=t}^m \frac{c\varphi f^\tau}{1-c}$. It is easy to see that $\Delta(t)$ is concave in $t$, and $\Delta(m+1) = 0$. Recall that if $\Delta(1) = 1 - n\varphi/(1-c) \ge 0$, then $\underline{t} = 0$; otherwise, there exists a unique $\underline{t} \in \{1, \ldots, m+1\}$ such that
\[
H(S^{\underline{t}}) \le n \sum_{\tau=\underline{t}}^m \frac{c\varphi f^\tau}{1-c} \quad \text{and} \quad H(S^{\underline{t}+1}) > n \sum_{\tau=\underline{t}+1}^m \frac{c\varphi f^\tau}{1-c}.
\]
Consider the dual to problem $(OPTm1-\varphi)$, denoted by $(DOPTm1-\varphi)$:
\[
\min_{\lambda, \beta, \mu} \ \sum_{t=1}^m \frac{c\varphi f^t \lambda^t}{1-c} + \sum_S \beta(S)H(S) + \varphi \sum_{t=1}^m f^t v^t,
\]
subject to
\[
v^t - \frac{k}{c} - \lambda^t + \mu^t - n \sum_{S \ni t} \beta(S) \ge 0, \ \forall t,
\]
\[
\lambda \ge 0, \ \beta \ge 0, \ \mu \ge 0.
\]
³ This result can be seen as a corollary of Lemmas 8 and ?? in the asymmetric case.

Let $\hat{z}$ be defined in (7), and let $(\hat{\lambda}, \hat{\beta}, \hat{\mu})$ be the corresponding dual variables. Let $t^0$ be such that $v^t \ge k/c$ if and only if $t \ge t^0$.
Case 1: $v^{\underline{t}} < \frac{k}{c}$, or $\underline{t} < t^0$.

In this case, we have
\[
\hat{z}^t := \begin{cases} \dfrac{c\varphi f^t}{1-c} & \text{if } t > \underline{t} \\[4pt] 0 & \text{if } t \le \underline{t} \end{cases}.
\]
Let $\hat{\beta}(S) = 0$ for all $S$. If $v^t < k/c$, let $\hat{\lambda}^t = 0$ and $\hat{\mu}^t = k/c - v^t > 0$; if $v^t \ge k/c$, let $\hat{\mu}^t = 0$ and $\hat{\lambda}^t = v^t - k/c \ge 0$. It is easy to see that this is a feasible solution to $(DOPTm1-\varphi)$, and the complementary slackness conditions are satisfied. Finally, the dual objective is equal to the primal objective:
\[
\sum_{t=t^0}^m \frac{c\varphi f^t}{1-c}\left(v^t - \frac{k}{c}\right) + \varphi \sum_{t=1}^m f^t v^t.
\]
By the duality theorem, $\hat{z}$ is an optimal solution to $(OPTm1-\varphi)$.
Case 2: $v^{\underline{t}} \ge \frac{k}{c}$, or $\underline{t} \ge t^0$.

\[
\hat{z}^t := \begin{cases} \dfrac{c\varphi f^t}{1-c} & \text{if } t > \underline{t} \\[4pt] \dfrac{1}{n}H(S^{\underline{t}}) - \displaystyle\sum_{\tau=\underline{t}+1}^m \frac{c\varphi f^\tau}{1-c} & \text{if } t = \underline{t} \\[4pt] \dfrac{1}{n}\left[H(S^t) - H(S^{t+1})\right] & \text{if } t^0 \le t < \underline{t} \\[4pt] 0 & \text{if } t < t^0 \end{cases}.
\]
Let $\hat{\beta}(S) > 0$ if $S = S^t$ for $t^0 \le t \le \underline{t}$, and $\hat{\beta}(S) = 0$ otherwise. If $t < t^0$, let $\hat{\lambda}^t = 0$ and $\hat{\mu}^t = k/c - v^t \ge 0$. If $t^0 \le t \le \underline{t}$, let $\hat{\lambda}^t = \hat{\mu}^t = 0$, $\hat{\beta}(S^t) = (v^t - v^{t-1})/n$ for $t > t^0$, and $\hat{\beta}(S^{t^0}) = (v^{t^0} - k/c)/n$. If $t > \underline{t}$, let $\hat{\lambda}^t = v^t - v^{\underline{t}}$ and $\hat{\mu}^t = 0$. It is easy to see that this is a feasible solution to $(DOPTm1-\varphi)$, and the complementary slackness conditions are satisfied. Finally, the dual objective is equal to the primal objective:
\[
\frac{1}{n}H(S^{t^0})\left(v^{t^0} - \frac{k}{c}\right) + \frac{1}{n}\sum_{t=t^0+1}^{\underline{t}} H(S^t)\left(v^t - v^{t-1}\right) + \sum_{t=\underline{t}+1}^m \frac{c\varphi f^t}{1-c}\left(v^t - \frac{k}{c}\right) + \varphi \sum_{t=1}^m f^t v^t.
\]
By the duality theorem, $\hat{z}$ is an optimal solution to $(OPTm1-\varphi)$.
Lemma 3 An optimal solution to $(OPT-\varphi)$ exists.

Proof. Let $D$ denote the set of feasible solutions, i.e., solutions satisfying (IC0) and (F2). Consider $D$ as a subset of $L^2$, the set of square-integrable functions with respect to the probability measure corresponding to $F$. Topologize $L^2$ with its weak*, or $\sigma(L^2, L^2)$, topology. It is straightforward to verify that $D$ is $\sigma(L^2, L^2)$ compact. See, for example, Border (1991b).

Let $V(\varphi) := \sup_{P \in D} E_v\left[P(v)\left(v - \frac{k}{c}\right)\right] + \frac{\varphi k}{c}$. Let $\{P_m\}$ be a sequence of feasible solutions to $(OPT-\varphi)$ such that
\[
\int P_m(v)\left(v - \frac{k}{c}\right)dF(v) + \frac{\varphi k}{c} \to V(\varphi).
\]
By Helly's selection theorem, after taking subsequences, I can assume there exists $P$ such that $\{P_m\}$ converges pointwise to $P$. Since $D$ is $\sigma(L^2, L^2)$ compact, after taking subsequences again, I can assume $P \in D$ and that $\{P_m\}$ converges to $P$ in the $\sigma(L^2, L^2)$ topology. Since $v - k/c \in L^2$, the weak convergence of $\{P_m\}$ implies that
\[
\int P(v)\left(v - \frac{k}{c}\right)dF(v) + \frac{\varphi k}{c} = V(\varphi).
\]

Proof of Theorem 1. Let $\{P_m\}$ be the sequence of optimal solutions to $(OPTm-\varphi)$ defined in Corollary 1. I can extend $P_m$ to $V$ by setting
\[
P_m(v) := P_m^t \ \text{ for } v \in \left(\underline{v} + \frac{(t-1)(\bar{v}-\underline{v})}{m}, \ \underline{v} + \frac{t(\bar{v}-\underline{v})}{m}\right], \ t = 1, \ldots, m.
\]
Recall that $H$ is given by (32). Thus, there are three cases. If $\underline{t} > \bar{t}$, then
\[
\overline{P}^t := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } t > \underline{t} \\[4pt] \left\{\dfrac{1}{n}\left[1 - \left(\sum_{\tau=1}^{\underline{t}-1} f^\tau\right)^n\right] - \dfrac{\varphi}{1-c}\sum_{\tau=\underline{t}+1}^m f^\tau\right\} \Big/ f^{\underline{t}} & \text{if } t = \underline{t} \\[4pt] \dfrac{1}{n}\left[\left(\sum_{\tau=1}^t f^\tau\right)^n - \left(\sum_{\tau=1}^{t-1} f^\tau\right)^n\right] \Big/ f^t & \text{if } \bar{t} < t < \underline{t} \\[4pt] \left\{\dfrac{1}{n}\left(\sum_{\tau=1}^t f^\tau\right)^n - \varphi \sum_{\tau=1}^{t-1} f^\tau\right\} \Big/ f^t & \text{if } t = \bar{t} \\[4pt] \varphi & \text{if } t < \bar{t} \end{cases}.
\]
If $\underline{t} = \bar{t}$, then
\[
\overline{P}^t := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } t > \underline{t} \\[4pt] \left(\dfrac{1}{n} - \varphi \sum_{\tau=1}^{\underline{t}-1} f^\tau - \dfrac{\varphi}{1-c}\sum_{\tau=\underline{t}+1}^m f^\tau\right) \Big/ f^{\underline{t}} & \text{if } t = \underline{t} \\[4pt] \varphi & \text{if } t < \underline{t} \end{cases}.
\]
If $\underline{t} < \bar{t}$, then
\[
\overline{P}^t := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } t > \underline{t} \\[4pt] \varphi & \text{if } t < \underline{t} \end{cases}.
\]
It is easy to see that $\{P_m\}$ converges pointwise to $P^*$, which is a feasible solution to $(OPT-\varphi)$.
To show the optimality of $P^*$, let $\hat{P}$ be an optimal solution to $(OPT-\varphi)$, which exists by Lemma 3. Define $\hat{P}_m$ by
\[
\hat{P}_m^t := \frac{1}{f^t} \int_{\underline{v} + \frac{(t-1)(\bar{v}-\underline{v})}{m}}^{\underline{v} + \frac{t(\bar{v}-\underline{v})}{m}} \hat{P}(v)\,dF(v) \ \text{ for } t = 1, \ldots, m,
\]
and extend it to $V$ by setting
\[
\hat{P}_m(v) := \hat{P}_m^t \ \text{ for } v \in \left(\underline{v} + \frac{(t-1)(\bar{v}-\underline{v})}{m}, \ \underline{v} + \frac{t(\bar{v}-\underline{v})}{m}\right], \ t = 1, \ldots, m.
\]
By the Lebesgue differentiation theorem, $\{\hat{P}_m\}$ converges pointwise to $\hat{P}$. It is easy to verify that $\hat{P}_m$ defined on $\{v^1, \ldots, v^m\}$ is a feasible solution to $(OPTm-\varphi)$. Hence
\[
\sum_{t=1}^m f^t \hat{P}_m^t \left(v^t - \frac{k}{c}\right) + \frac{\varphi k}{c} \le \sum_{t=1}^m f^t P_m^t \left(v^t - \frac{k}{c}\right) + \frac{\varphi k}{c}.
\]
By the dominated convergence theorem,
\[
\sum_{t=1}^m f^t \hat{P}_m^t \left(v^t - \frac{k}{c}\right) = \int_V \hat{P}_m(v)\left(v - \frac{k}{c}\right)dF(v) \to \int_V \hat{P}(v)\left(v - \frac{k}{c}\right)dF(v)
\]
and
\[
\sum_{t=1}^m f^t P_m^t \left(v^t - \frac{k}{c}\right) = \int_V P_m(v)\left(v - \frac{k}{c}\right)dF(v) \to \int_V P^*(v)\left(v - \frac{k}{c}\right)dF(v).
\]
Hence, $P^*$ is optimal, i.e.,
\[
\int_V P^*(v)\left(v - \frac{k}{c}\right)dF(v) = \int_V \hat{P}(v)\left(v - \frac{k}{c}\right)dF(v).
\]
Lemma 4 Suppose $(1-c)/n \le \varphi \le \min\{1/n, 1-c\}$. Then $v^l \ge v^u$ if and only if $v^l \le v^\natural$, where $v^\natural$ is defined by (14). Furthermore, if $n(1-c) < 1$, then $v^\natural$ is strictly increasing in $n$ and strictly decreasing in $c$.

Proof. Since $(1-c)/n \le \varphi \le \min\{1/n, 1-c\}$, $v^l$ and $v^u$ satisfy:
\[
\frac{F(v^l)^{n-1}}{1-c} = \frac{1 - F(v^u)^n}{1 - F(v^u)}. \tag{33}
\]
Define
\[
\Delta(v) := \frac{F(v)^{n-1}(1 - F(v))}{1-c} - 1 + F(v)^n.
\]
Then $\Delta(\underline{v}) = -1 < 0$ and $\Delta(\bar{v}) = 0$. Then
\[
\Delta'(v) = \frac{F(v)^{n-2}f(v)}{1-c}\left[-cnF(v) + n - 1\right].
\]
Clearly, the term in the brackets is strictly decreasing in $v$. Moreover, the bracketed term equals $n - 1 > 0$ at $\underline{v}$ and $n(1-c) - 1$ at $\bar{v}$.

If $n(1-c) \ge 1$, then $\Delta'(v) \ge 0$ for all $v$. Hence $\Delta(v)$ is non-decreasing, and therefore $\Delta(v) \le 0$ for all $v$. Hence
\[
\frac{1 - F(v^u)^n}{1 - F(v^u)} = \frac{F(v^l)^{n-1}}{1-c} \le \frac{1 - F(v^l)^n}{1 - F(v^l)},
\]
which implies $v^l \ge v^u$.

If $n(1-c) < 1$, then there exists $v^\flat$ such that $\Delta'(v) > 0$ for $v \in [\underline{v}, v^\flat]$ and $\Delta'(v) < 0$ for $v \in [v^\flat, \bar{v}]$. Hence $\Delta(v)$ is strictly increasing on $[\underline{v}, v^\flat]$ and strictly decreasing on $[v^\flat, \bar{v}]$. Hence there exists a unique $v^\natural \in (\underline{v}, \bar{v})$ such that $\Delta(v) \le 0$ if and only if $v \le v^\natural$. By (33), this implies that $v^l \ge v^u$ if and only if $v^l \le v^\natural$. Finally, for any $v$, $\Delta(v)$ is strictly decreasing in $n$ and strictly increasing in $c$. Hence $v^\natural$ is strictly increasing in $n$ and strictly decreasing in $c$.
Lemma 5 $Z(\varphi) \le Z_1(v^l(\varphi))$.

Proof. Fix $\varphi$ and the corresponding $v^l$. Note that $Z_1(v^l)$ is attained by the following allocation rule:
\[
P_1(v) := \begin{cases} F(v)^{n-1} & \text{if } v \ge \max\left\{v^l, \frac{k}{c}\right\} \\ \varphi & \text{if } v < \max\left\{v^l, \frac{k}{c}\right\} \end{cases}.
\]
It is easy to see that $P_1 - P^*$ is non-decreasing, and
\[
\int_{\underline{v}}^{\bar{v}} P_1(v)\,dF(v) = \int_{\underline{v}}^{\bar{v}} P^*(v)\,dF(v) = \frac{1}{n}.
\]
Moreover, $v - k/c$ is non-decreasing. Hence, by Lemma 1 in Persico (2000), $Z(\varphi) \le Z_1(v^l(\varphi))$.
Lemma 6 If $\varphi \le 1 - c$, then $Z(\varphi) \le Z_2(\hat{v}(\varphi))$.

Proof. Fix $\varphi$ and the corresponding $\hat{v}$. Note that $Z_2(\hat{v})$ is attained by the following allocation rule:
\[
P_2(v) := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } v \ge \max\left\{\hat{v}, \frac{k}{c}\right\} \\[4pt] \varphi & \text{if } v < \max\left\{\hat{v}, \frac{k}{c}\right\} \end{cases}.
\]
It is easy to see that $P_2 - P^*$ is non-decreasing, and
\[
\int_{\underline{v}}^{\bar{v}} P_2(v)\,dF(v) = \int_{\underline{v}}^{\bar{v}} P^*(v)\,dF(v) = \frac{1}{n}.
\]
Moreover, $v - k/c$ is non-decreasing. Hence, by Lemma 1 in Persico (2000), $Z(\varphi) \le Z_2(\hat{v}(\varphi))$.
Proof of Theorem 3. First, if $\varphi \le (1-c)/n$, then $v^u = \hat{v} = \underline{v}$, and
\[
P^*(v) := \begin{cases} \dfrac{\varphi}{1-c} & \text{if } v \ge \frac{k}{c} \\[4pt] \varphi & \text{if } v < \frac{k}{c} \end{cases}.
\]
The principal's objective is
\[
\frac{c\varphi}{1-c} \int_{\frac{k}{c}}^{\bar{v}} \left(v - \frac{k}{c}\right)dF(v) + \varphi \int_{\underline{v}}^{\bar{v}} v\,dF(v),
\]
which is strictly increasing in $\varphi$. Hence, in optimum, $\varphi \ge (1-c)/n$.
Given $\varphi$, let $Z(\varphi)$ denote the principal's payoff. Suppose $\varphi \ge 1-c$ or $F(v^l)^{n-1} \ge n(1-c)$. Then $v^u = \bar{v}$, and the principal's payoff is $Z(\varphi) = Z_1(v^l(\varphi))$, where
\[
Z_1(v^l) := \int_{\max\{v^l, \frac{k}{c}\}}^{\bar{v}} \left(v - \frac{k}{c}\right)F(v)^{n-1}dF(v) + \frac{1}{n}F(v^l)^{n-1} \int_{\underline{v}}^{\max\{v^l, \frac{k}{c}\}} \left(v - \frac{k}{c}\right)dF(v) + \frac{1}{n}F(v^l)^{n-1}\frac{k}{c}.
\]
If $v^l < k/c$, then $Z_1(v^l)$ is strictly increasing in $v^l$. If $v^l \ge k/c$, then
\[
Z_1'(v^l) = \frac{n-1}{n}F(v^l)^{n-2}f(v^l)\left\{E_v[v] - E_v[\max\{v, v^l\}] + \frac{k}{c}\right\}.
\]
Clearly, the term inside the braces is strictly decreasing in $v^l$. Recall that $v^* \ge k/c$ is defined by (12). Then $Z_1'(v^l) \ge 0$ if and only if $v^l \le v^*$. Hence, $Z_1$ achieves its maximum at $v^l = v^*$. I show in Lemma 5 in the appendix that for any $\varphi$ and the corresponding $v^l$, we have
\[
Z(\varphi) \le Z_1(v^l(\varphi)) \le Z_1(v^*).
\]
Thus, if $F(v^*)^{n-1} \ge n(1-c)$, it is optimal to set $v^l = v^*$. This proves the first part of Theorem 3.
If $F(v^*)^{n-1} < n(1-c)$, then in optimum $\varphi \le 1-c$ or $F(v^l)^{n-1} \le n(1-c)$. Since $(1-c)/n \le \varphi \le 1/n$, there is a one-to-one correspondence between $\hat{v}$ and $\varphi$. Given $\varphi$, $\hat{v}(\varphi)$ is uniquely pinned down by
\[
1 - n\varphi F(\hat{v}) - \frac{n\varphi}{1-c}[1 - F(\hat{v})] = 0.
\]
If $\varphi$ is such that $v^l \ge v^u$, then $Z(\varphi) = Z_2(\hat{v}(\varphi))$, where
\[
Z_2(\hat{v}) := \frac{1-c}{n(1 - cF(\hat{v}))} \int_{\underline{v}}^{\max\{\hat{v}, \frac{k}{c}\}} \left(v - \frac{k}{c}\right)dF(v) + \frac{1}{n(1 - cF(\hat{v}))} \int_{\max\{\hat{v}, \frac{k}{c}\}}^{\bar{v}} \left(v - \frac{k}{c}\right)dF(v) + \frac{1-c}{n(1 - cF(\hat{v}))}\frac{k}{c}.
\]
If $\hat{v} < k/c$, then $Z_2(\hat{v})$ is strictly increasing in $\hat{v}$. If $\hat{v} \ge k/c$, then
\[
Z_2'(\hat{v}) = \frac{cf(\hat{v})}{n(1 - cF(\hat{v}))^2}\left\{E_v[v] - E_v[\min\{v, \hat{v}\}] + (1-c)\left(E_v[v] - E_v[\max\{v, \hat{v}\}] + \frac{k}{c}\right)\right\}.
\]
Clearly, the term inside the braces is strictly decreasing in $\hat{v}$. Recall that $v^{**} > v^* \ge k/c$ is defined by (13). Since $Z_2'(\hat{v}) \ge 0$ if and only if $\hat{v} \le v^{**}$, $Z_2$ achieves its maximum at $\hat{v} = v^{**}$. I show in Lemma 6 in the appendix that for any $\varphi \le 1-c$ and the corresponding $\hat{v}$, we have
\[
Z(\varphi) \le Z_2(\hat{v}(\varphi)) \le Z_2(v^{**}).
\]
This, together with Lemma 4, proves the second part of Theorem 3.
Suppose $F(v^*)^{n-1} < n(1-c)$ and $v^{**} > v^\natural$. Then
\[
Z(\varphi) = \varphi \int_{\underline{v}}^{\max\{v^l, \frac{k}{c}\}} \left(v - \frac{k}{c}\right)dF(v) + \int_{\max\{v^l, \frac{k}{c}\}}^{\max\{v^u, \frac{k}{c}\}} \left(v - \frac{k}{c}\right)F(v)^{n-1}dF(v) + \frac{\varphi}{1-c} \int_{\max\{v^u, \frac{k}{c}\}}^{\bar{v}} \left(v - \frac{k}{c}\right)dF(v) + \frac{\varphi k}{c}.
\]
If $\varphi$ is such that $v^l < k/c$, then $Z(\varphi)$ is strictly increasing in $\varphi$. If $v^l \ge k/c$, then
\[
Z'(\varphi) = \frac{1}{1-c}\left[E_v[v] - E_v[\min\{v, v^u\}]\right] + E_v[v] - E_v[\max\{v, v^l\}] + \frac{k}{c}.
\]
Since both $v^l$ and $v^u$ are strictly increasing in $\varphi$, $Z'(\varphi)$ is strictly decreasing in $\varphi$. Let $\varphi^*$ be such that
\[
E_v[v] - E_v[\min\{v, v^u(\varphi^*)\}] + (1-c)\left(E_v[v] - E_v[\max\{v, v^l(\varphi^*)\}] + \frac{k}{c}\right) = 0. \tag{15}
\]
Comparing (15) with (13) and (12), it is easy to see that $v^u(\varphi^*) > v^{**} > v^l(\varphi^*) > v^*$. Since $Z'(\varphi) \ge 0$ if and only if $\varphi \le \varphi^*$, $Z$ achieves its maximum at $\varphi = \varphi^*$. This proves the third part of Theorem 3.
Proof of Theorem 2. If $\bar{v} - k/c \le E_v[v]$, then $Z_1(v^l)$ is strictly increasing in $v^l$ and achieves its maximum at $v^l = \bar{v}$. By Lemma 5, $Z(\varphi) \le Z_1(v^l(\varphi)) \le Z_1(\bar{v})$. Note that $Z_1(\bar{v})$ can be achieved via pure randomization. This completes the proof.
Proof of Corollary 2. The analysis in Section 4 has proved most of Corollary 2. What is left to prove is that if $n^*(\rho, c) < n < n^{**}(\rho, c)$, then $v^l(n, \rho, c)$ is strictly increasing in $n$, $\rho$ and $c$, and $v^u(n, \rho, c)$ is strictly decreasing in $n$ and strictly increasing in $\rho$ and $c$.

If $n^*(\rho, c) < n < n^{**}(\rho, c)$, then $v^l$ and $v^u$ satisfy (33). It is easy to see that $v^u$ is strictly increasing in $v^l$ and vice versa. Let
\[
\Delta_l(v^l, n, \rho, c) := E_v[v] - E_v[\min\{v, v^u\}] + (1-c)\left(E_v[v] - E_v[\max\{v, v^l\}] + \frac{k}{c}\right),
\]
where $v^u$ is a function of $v^l$, $n$ and $c$ defined by (33). We have
\[
\begin{aligned}
\frac{\partial \Delta_l}{\partial v^l} &= -[1 - F(v^u)]\frac{\partial v^u}{\partial v^l} - (1-c)F(v^l) < 0, \\
\frac{\partial \Delta_l}{\partial n} &= -[1 - F(v^u)]\frac{\partial v^u}{\partial n} > 0, \\
\frac{\partial \Delta_l}{\partial \rho} &= 1 - c > 0, \\
\frac{\partial \Delta_l}{\partial c} &= -\left(E_v[v] - E_v[\max\{v, v^l\}] + \frac{k}{c}\right) > 0.
\end{aligned}
\]
Hence, by the implicit function theorem, we have $\partial v^l/\partial n > 0$, $\partial v^l/\partial \rho > 0$ and $\partial v^l/\partial c > 0$. To see that $\partial v^u/\partial n < 0$, let
\[
\Delta(v^u, v^l, n) := \frac{F(v^l)^{n-1}(1 - F(v^u))}{1-c} - 1 + F(v^u)^n.
\]
Then
\[
\begin{aligned}
\frac{\partial \Delta}{\partial v^u} &= \left[-\frac{F(v^l)^{n-1}}{1-c} + nF(v^u)^{n-1}\right]f(v^u) = \left[-\frac{1 - F(v^u)^n}{1 - F(v^u)} + nF(v^u)^{n-1}\right]f(v^u) < 0, \\
\frac{\partial \Delta}{\partial n} &= \frac{F(v^l)^{n-1}[1 - F(v^u)]\log F(v^l)}{1-c} + F(v^u)^n \log F(v^u) < 0.
\end{aligned}
\]
Hence, $\partial v^u/\partial n = -(\partial \Delta/\partial n)/(\partial \Delta/\partial v^u) < 0$.

Let
\[
\Delta_u(v^u, n, \rho, c) := E_v[v] - E_v[\min\{v, v^u\}] + (1-c)\left(E_v[v] - E_v[\max\{v, v^l\}] + \frac{k}{c}\right),
\]
where $v^l$ is a function of $v^u$, $n$ and $c$ defined by (33). We have
\[
\begin{aligned}
\frac{\partial \Delta_u}{\partial v^u} &= -[1 - F(v^u)] - (1-c)F(v^l)\frac{\partial v^l}{\partial v^u} < 0, \\
\frac{\partial \Delta_u}{\partial n} &= -(1-c)F(v^l)\frac{\partial v^l}{\partial n} < 0, \\
\frac{\partial \Delta_u}{\partial \rho} &= 1 - c > 0, \\
\frac{\partial \Delta_u}{\partial c} &= -\left(E_v[v] - E_v[\max\{v, v^l\}] + \frac{k}{c}\right) > 0.
\end{aligned}
\]
Hence, by the implicit function theorem, we have $\partial v^u/\partial n < 0$, $\partial v^u/\partial \rho > 0$ and $\partial v^u/\partial c > 0$. To see that $\partial v^l/\partial n > 0$, note that
\[
\frac{\partial \Delta}{\partial v^l} = \frac{(n-1)F(v^l)^{n-2}f(v^l)[1 - F(v^u)]}{1-c} > 0.
\]
Hence, $\partial v^l/\partial n = -(\partial \Delta/\partial n)/(\partial \Delta/\partial v^l) > 0$.
B Asymmetric Environment

B.1 Finite Case

Let $D := \cup_i [\underline{v}_i - k_i/c_i, \bar{v}_i - k_i/c_i]$. Let $\underline{d} := \inf D$ and $\bar{d} := \sup D$. Fix an integer $m \ge 2$. For $t = 1, \ldots, m$, let
\[
\begin{aligned}
d^t &:= \underline{d} + \frac{(2t-1)(\bar{d} - \underline{d})}{2m}, \\
f_i^t &:= F_i\!\left(\underline{d} + \frac{t(\bar{d} - \underline{d})}{m} + \frac{k_i}{c_i}\right) - F_i\!\left(\underline{d} + \frac{(t-1)(\bar{d} - \underline{d})}{m} + \frac{k_i}{c_i}\right), \quad i = 1, \ldots, n.
\end{aligned}
\]
Consider the finite model in which $v_i - k_i/c_i$ can take only $m$ possible different values, i.e., $v_i - k_i/c_i \in \{d^1, \ldots, d^m\}$, and the probability mass function satisfies $f_i(d^t) = f_i^t$ for $t = 1, \ldots, m$. It is possible that $f_i^t = 0$ for some $t$. The corresponding problem of $(OPTH-\varphi)$ in the finite model, denoted by $(OPTHm-\varphi)$, is given by:
\[
\max_P \ \sum_{i=1}^n \left[\sum_{t=1}^m f_i^t P_i^t d^t + \frac{\varphi_i k_i}{c_i}\right],
\]
subject to
\[
\varphi_i \le P_i^t \le \frac{\varphi_i}{1-c_i}, \ \forall t, \tag{IC0}
\]
\[
\sum_{i=1}^n \sum_{t \in S_i} f_i^t P_i^t \le 1 - \prod_{i=1}^n \sum_{t \notin S_i} f_i^t, \ \forall S_i \subset \{1, \ldots, m\}. \tag{F2}
\]
Define $H(S) := 1 - \prod_{i=1}^n \sum_{t \notin S_i} f_i^t - \sum_{i=1}^n \sum_{t \in S_i} \varphi_i f_i^t$ for all $S := (S_1, \ldots, S_n)$ with $S_i \subset \{1, \ldots, m\}$, and define $\overline{H}(S) := \min_{S' \supset S} H(S')$ for all $S$. Let $z_i^t := f_i^t(P_i^t - \varphi_i)$ for all $i$ and $t$. By Lemma 2, $(OPTHm-\varphi)$ can be rewritten as $(OPTHm1-\varphi)$:
\[
\max_z \ \sum_{i=1}^n \sum_{t=1}^m z_i^t d^t + \sum_{i=1}^n \varphi_i \left(\sum_{t=1}^m f_i^t d^t + \frac{k_i}{c_i}\right),
\]
subject to
\[
0 \le z_i^t \le \frac{c_i \varphi_i f_i^t}{1-c_i}, \ \forall i, \forall t, \tag{IC0}
\]
\[
\sum_{i=1}^n \sum_{t \in S_i} z_i^t \le \overline{H}(S), \ \forall S \subset \{1, \ldots, m\}^n. \tag{F2}
\]
Note that if $f_i^t = 0$, then $z_i^t = 0$ by definition, and therefore (IC0) is satisfied automatically. Lemma 7 proves a useful property of $H$. Lemma 8 characterizes $\overline{H}$.

Lemma 7 If $H(S) < 1 - \sum_{i=1}^n \varphi_i$ and $S' \subset S$, then $H(S') \le H(S)$.
Proof. Consider $S = (S_1, \ldots, S_n)$. We have
\[
H(S) - 1 + \sum_{i=1}^n \varphi_i = \sum_{i=1}^n \varphi_i \sum_{\tau \notin S_i} f_i^\tau - \prod_{i=1}^n \sum_{\tau \notin S_i} f_i^\tau.
\]
Let $S_i^{supp} := \{t \mid f_i^t > 0\}$. If $S_i \supset S_i^{supp}$ for some $i$, then $\sum_{\tau \notin S_i} f_i^\tau = 0$ and therefore $H(S) \ge 1 - \sum_{i=1}^n \varphi_i$. Hence, $H(S) < 1 - \sum_{i=1}^n \varphi_i$ implies that $S_i \not\supset S_i^{supp}$, or $\sum_{\tau \notin S_i} f_i^\tau > 0$, for all $i$. Thus, $\varphi_i \le \prod_{j \neq i} \sum_{\tau \notin S_j} f_j^\tau$ for all $i$. Let $S' := (S_1, \ldots, S_{i-1}, S_i \setminus \{t\}, S_{i+1}, \ldots, S_n)$. Then
\[
H(S) - H(S') = f_i^t \left(\prod_{j \neq i} \sum_{\tau \notin S_j} f_j^\tau - \varphi_i\right) \ge 0.
\]
Hence, $H(S') \le H(S)$. By induction, $H(S') \le H(S)$ for all $S' \subset S$.
Lemma 8 $\overline{H}(S) = \min\left\{H(S), 1 - \sum_{i=1}^n \varphi_i\right\}$.

Proof. Recall that $\overline{H}(S) = \min_{S'' \supset S} H(S'')$. Since $S^1 \supset S$ and $H(S^1) = 1 - \sum_{i=1}^n \varphi_i$, we have $\overline{H}(S) \le 1 - \sum_{i=1}^n \varphi_i$.

Suppose $H(S) \le 1 - \sum_{i=1}^n \varphi_i$. Let $S'' \supset S$. If $H(S'') \ge 1 - \sum_{i=1}^n \varphi_i$, then $H(S) \le 1 - \sum_{i=1}^n \varphi_i \le H(S'')$. If $H(S'') < 1 - \sum_{i=1}^n \varphi_i$, then $H(S) \le H(S'')$ by Lemma 7. Hence, $\overline{H}(S) = H(S)$.

Suppose $H(S) > 1 - \sum_{i=1}^n \varphi_i$. I claim that $\overline{H}(S) = 1 - \sum_{i=1}^n \varphi_i$. Suppose not; then there exists $S'' \supset S$ such that $H(S'') < 1 - \sum_{i=1}^n \varphi_i$. Then, by Lemma 7, $H(S) \le H(S'') < 1 - \sum_{i=1}^n \varphi_i$, a contradiction. Hence, $\overline{H}(S) = 1 - \sum_{i=1}^n \varphi_i$.
Algorithm 1 below describes an algorithm that finds a feasible solution to (OP T Hm−ϕ).
In order to describe the mechanism, I first introduce some notations. Let Sit := {t, . . . , m}
and Sim+1 := ∅ for all i and t and S t := {t, . . . , m}n and S m+1 := ∅ for all t. Define S+(t, i) :=
(S1 , . . . , Si−1 , Si ∪ {t}, Si+1 . . . , Sn ) and S − (t, i) := (S1 , . . . , Si−1 , Si \{t}, Si+1 . . . , Sn ).
m
m
m
Algorithm 1 Let I0m := {i |fim = 0} and z m
i = 0 for all i ∈ I0 . Define I1 ⊂ I\I0 ,
m,1
m m
π , . . . , π m,n , S m,1 , . . . , S m,n
and z m
/ I0m recursively as follows.
i for all i ∈
43
1. Let I1m = ∅ and ν = 1.
2. If I1m = I\I0m , then go to step 5. Otherwise, let ι = 1 and go to step 2.
3. If there exists I 0 6= ∅ such that |I 0 | = ι, I 0 ∩ (I0m ∪ I1m ) = ∅ and
!
S+
H
X
− H(S) ≤
(m, i)
i∈I 0
X ci ϕ i f m
i
i∈I 0
1 − ci
,
m
where Sj = Sjm if j ∈ I1m and Sj = Sjm+1 otherwise, then let z m
i ≤ ci ϕi fi /(1 − ci ) for
i ∈ I 0 be such that
!
X
zm
i = H
S+
i∈I 0
X
(m, i)
− H(S).
i∈I 0
Let π m,ν := I 0 and S m,ν := S. Redefine ν as ν + 1 and I1m as I 0 ∪ I1m and go to step
2. If there does not exist such an I 0 , go to step 4.
4. If ι < n − |I0m ∪ I1m |, redefine ι as ι + 1 and go to step 3. If ι = n − |I0m ∪ I1m |, go to
step 5.
m
m
m
5. Let nm := ν − 1 and z m
i := ci ϕi fi /(1 − ci ) for all i ∈ I\ (I0 ∪ I1 ).
Let $1 \le t \le m - 1$. Suppose we have defined $I_0^\tau$, $I_1^\tau$, $\pi^{\tau,1}, \dots, \pi^{\tau,n^\tau}$, $S^{\tau,1}, \dots, S^{\tau,n^\tau}$ and $z_i^\tau$ for all $i$, for all $\tau \ge t + 1$. Let $I_0^t := \{i \mid f_i^t = 0\}$ and $z_i^t := 0$ for all $i \in I_0^t$. Define $I_1^t \subset I \setminus I_0^t$, $\pi^{t,1}, \dots, \pi^{t,n^t}$, $S^{t,1}, \dots, S^{t,n^t}$ and $z_i^t$ for all $i \notin I_0^t$ recursively as follows.

1. Let $I_1^t := \emptyset$ and $\nu = 1$.

2. If $I_1^t = I \setminus I_0^t$, then go to step 5. Otherwise, let $\iota = 1$ and go to step 3.

3. If there exists $I' \neq \emptyset$ such that $|I'| = \iota$, $I' \cap (I_0^t \cup I_1^t) = \emptyset$ and
$$\min_{t_j}\ H\Big(S + \sum_{i \in I'} (t,i)\Big) - \sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau \le \sum_{i \in I'} \frac{c_i \varphi_i f_i^t}{1 - c_i},$$
where $S = (S_1^{t_1}, \dots, S_n^{t_n})$ with $t_j \ge t$ if $j \in \pi^{t,1} \cup \dots \cup \pi^{t,\nu-1}$, $t_j = t + 1$ if $j \in I'$, and $t_j \ge t + 1$ otherwise, then let $z_i^t \le c_i \varphi_i f_i^t / (1 - c_i)$ for $i \in I'$ be such that
$$\sum_{i \in I'} z_i^t = \min_{t_j}\ H\Big(S + \sum_{i \in I'} (t,i)\Big) - \sum_{i=1}^n \sum_{\tau \in S_i} z_i^\tau.$$
Let $\pi^{t,\nu} := I'$ and let $S^{t,\nu}$ be a minimizer of the right-hand side of the above equation such that there is no $S \supsetneq S^{t,\nu}$ which is also a minimizer. Redefine $\nu$ as $\nu + 1$ and $I_1^t$ as $I' \cup I_1^t$ and go to step 2. If there does not exist such an $I'$, go to step 4.

4. If $\iota < n - |I_0^t \cup I_1^t|$, redefine $\iota$ as $\iota + 1$ and go to step 3. If $\iota = n - |I_0^t \cup I_1^t|$, go to step 5.

5. Let $n^t := \nu - 1$ and $z_i^t := c_i \varphi_i f_i^t / (1 - c_i)$ for all $i \in I \setminus (I_0^t \cup I_1^t)$.
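To fix ideas, the selection loop of Algorithm 1 can be sketched in code. The following is a minimal single-period toy, not the paper's algorithm itself: a hypothetical submodular set function `H` on coalitions of agents stands in for the capacity function $H$ on threshold profiles, `caps[i]` stands in for the cap $c_i\varphi_i f_i/(1-c_i)$, and the proportional split of a binding marginal is one admissible choice of the $z_i$.

```python
from itertools import combinations

def greedy_split(H, caps):
    """Toy single-period sketch of the set-selection loop in Algorithm 1.

    Repeatedly look for the smallest group I' whose marginal value
    H(processed + I') - H(processed) fits under the group's total cap;
    if one exists, split that marginal across I' (here: proportionally
    to the caps, which respects z_i <= caps[i]); agents never selected
    are assigned their caps."""
    n = len(caps)
    remaining = set(range(n))
    processed = frozenset()
    z = [0.0] * n
    found = True
    while found and remaining:
        found = False
        for size in range(1, len(remaining) + 1):
            for group in combinations(sorted(remaining), size):
                margin = H(processed | set(group)) - H(processed)
                if margin <= sum(caps[i] for i in group) + 1e-12:
                    total_cap = sum(caps[i] for i in group)
                    for i in group:
                        z[i] = caps[i] * margin / total_cap
                    processed |= set(group)
                    remaining -= set(group)
                    found = True
                    break
            if found:
                break
    for i in remaining:  # unconstrained agents keep their caps
        z[i] = caps[i]
    return z

# Coverage-style submodular toy: H(S) = min(0.4 * |S|, 1.0).
H = lambda S: min(0.4 * len(S), 1.0)
print(greedy_split(H, [0.5, 0.3, 0.1]))  # → [0.4, 0.3, 0.1]
```

On this toy input, the first agent's constraint binds immediately, no further group fits under its caps, and the remaining agents are left at their caps.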
Let $z$ be a solution found by Algorithm 1. I first prove that $z$ is a feasible solution to $(OPTHm1-\varphi)$. For each $i$ and $t$, let $\bar P_i^t := z_i^t / f_i^t + \varphi_i$ if $f_i^t > 0$ and $\bar P_i^t = 0$ otherwise. Then $z$ is a feasible solution to $(OPTHm1-\varphi)$ if and only if $\bar P$ is a feasible solution to $(OPTHm-\varphi)$. Lemmas 10 and 11 below prove that $\bar P$ is non-decreasing. By Theorem 2 in Mierendorff (2011), $\bar P$ is a feasible solution to $(OPTHm-\varphi)$ if and only if for all $t_1, \dots, t_n \in \{1, \dots, m\}$,
$$\sum_{i=1}^n \sum_{t \in S_i} \bar P_i^t \le H(S),$$
where $S = (S_1^{t_1}, \dots, S_n^{t_n})$. By construction, this is true if the following lemma holds.

**Lemma 9** For all $t_1, \dots, t_n \in \{1, \dots, m\}$,
$$\sum_{i=1}^n \sum_{t \in S_i} z_i^t \le H(S), \qquad (34)$$
where $S = (S_1^{t_1}, \dots, S_n^{t_n})$.
Proof. For each $t$, let $\pi^{t,0} := \emptyset$ and $\pi^{t,n^t+1} := I \setminus (I_0^t \cup I_1^t)$. Suppose $S \subset S^m$, i.e., $t_i \ge m$ for all $i$. By construction, we have
$$\sum_{i \in \pi^{m,1}} z_i^m = H\Big(\emptyset + \sum_{i \in \pi^{m,1}} (m,i)\Big),$$
$$\sum_{i \in I''} z_i^m \le \sum_{i \in I''} \frac{c_i \varphi_i f_i^m}{1 - c_i} \le H\Big(\emptyset + \sum_{i \in I''} (m,i)\Big), \quad \forall I'' \subsetneq \pi^{m,1}.$$
Thus, (34) holds if $t_i = m + 1$ for all $i \notin \pi^{m,1}$. Suppose we have shown that (34) holds if $t_i = m + 1$ for all $i \notin \pi^{m,1} \cup \dots \cup \pi^{m,\nu-1}$, where $\nu \ge 2$, and suppose $t_i = m + 1$ for all $i \notin \pi^{m,1} \cup \dots \cup \pi^{m,\nu}$. Let $S' := \emptyset + \sum_{i \in \pi^{m,1} \cup \dots \cup \pi^{m,\nu-1}} (m,i)$ and $I' := \{i \in \pi^{m,\nu} \mid t_i = m\}$. By construction, we have, for all $I' \subset \pi^{m,\nu}$,
$$\sum_{i \in I'} z_i^m = H\Big(S' + \sum_{i \in I'} (m,i)\Big) - H(S') \quad \text{if } I' = \pi^{m,\nu} \text{ and } \nu \le n^m,$$
$$\sum_{i \in I'} z_i^m \le \sum_{i \in I'} \frac{c_i \varphi_i f_i^m}{1 - c_i} \le H\Big(S' + \sum_{i \in I'} (m,i)\Big) - H(S') \quad \text{if } I' \subsetneq \pi^{m,\nu} \text{ or } \nu = n^m + 1.$$
Since $S - \sum_{i \in \pi^{m,\nu}} (m,i) \subset S'$, we have
$$\sum_{i=1}^n \sum_{t \in S_i' \setminus S_i} z_i^t = \sum_{i=1}^n \sum_{t \in S_i'} z_i^t - \sum_{i \notin \pi^{m,\nu}} \sum_{t \in S_i} z_i^t \ge H(S') - H\Big(S - \sum_{i \in \pi^{m,\nu}} (m,i)\Big) \ge H\Big(S' + \sum_{i \in I'} (m,i)\Big) - H(S),$$
where the last inequality holds since $H$ is submodular. Hence,
$$\sum_{i=1}^n \sum_{t \in S_i} z_i^t \le \sum_{i=1}^n \sum_{t \in S_i'} z_i^t - \sum_{i=1}^n \sum_{t \in S_i' \setminus S_i} z_i^t + H\Big(S' + \sum_{i \in I'} (m,i)\Big) - H(S')$$
$$\le H(S') - \Big[H\Big(S' + \sum_{i \in I'} (m,i)\Big) - H(S)\Big] + H\Big(S' + \sum_{i \in I'} (m,i)\Big) - H(S') = H(S).$$
Now suppose $S \subset S^{t+1} + \sum_{i \in \pi^{t,1} \cup \dots \cup \pi^{t,\nu}} (t,i)$ for some $t \le m - 1$ and $1 \le \nu \le n^t + 1$. Let $I' := \{i \in \pi^{t,\nu} \mid t_i = t\}$ and $S' := S - \sum_{i \in I'} (t,i)$. By construction, we have
$$\sum_{i \in I'} z_i^t \le H\Big(S' + \sum_{i \in I'} (t,i)\Big) - \sum_{i=1}^n \sum_{\tau \in S_i'} z_i^\tau = H(S) - \sum_{i=1}^n \sum_{\tau \in S_i'} z_i^\tau.$$
Hence, (34) holds. ∎
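On small instances, condition (34) can be checked by brute force over all threshold profiles. The sketch below is a verification aid with a hypothetical capped-modular $H$ (which is submodular) and illustrative numbers, not part of the paper's argument.

```python
from itertools import product

def satisfies_feasibility(z, H, m):
    """Brute-force check of the condition in Lemma 9 on a toy grid:
    for every threshold profile (t_1, ..., t_n) with S_i = {t_i, ..., m}
    (t_i = m + 1 gives an empty S_i), require
        sum_i sum_{t in S_i} z[i][t-1] <= H(S).
    H is a hypothetical stand-in for the paper's capacity function."""
    n = len(z)
    for ts in product(range(1, m + 2), repeat=n):
        S = [set(range(t, m + 1)) for t in ts]
        total = sum(z[i][t - 1] for i in range(n) for t in S[i])
        if total > H(S) + 1e-12:
            return False
    return True

# Toy: n = 2 agents, m = 2 periods, capped-modular (hence submodular) H.
H = lambda S: min(1.0, 0.4 * sum(len(Si) for Si in S))
z = [[0.2, 0.2], [0.1, 0.1]]
print(satisfies_feasibility(z, H, 2))  # → True
```

Raising any single $z_i^t$ above its slot's capacity (for example, setting $z_1^2 = 0.5$ in this toy) makes the check fail, which is exactly the kind of violation Lemma 9 rules out for the output of Algorithm 1.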
**Lemma 10** Suppose $f_i^t, f_i^{t+1} > 0$. Then $i \in I_1^{t+1}$ implies that $i \in I_1^t$.

Proof. Suppose $f_i^t, f_i^{t+1} > 0$ and $i \in I_1^{t+1}$. Then there exists $S$ with $S_j \subset S_j^{t+1}$ for all $j$ and $t + 1 \in S_i$ such that
$$\sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau = H(S).$$
Suppose $H(S) < 1 - \sum_{j=1}^n \varphi_j$. Since, by Lemma 9,
$$\sum_{j \neq i} \sum_{\tau \in S_j} z_j^\tau + \sum_{\tau \in S_i \setminus \{t+1\}} z_i^\tau \le H(S - (t+1,i)),$$
we have
$$\frac{c_i \varphi_i f_i^{t+1}}{1 - c_i} \ge z_i^{t+1} \ge H(S) - H(S - (t+1,i)) = f_i^{t+1} \Big(\prod_{j \neq i} \sum_{\tau \in S_j} f_j^\tau - \varphi_i\Big),$$
where the last equality holds by Lemmas 7 and 8. This implies that $\prod_{j \neq i} \sum_{\tau \in S_j} f_j^\tau \le \frac{\varphi_i}{1 - c_i}$. Hence,
$$z_i^t \le H(S + (t,i)) - \sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau \le H(S + (t,i)) - H(S) = f_i^t \Big(\prod_{j \neq i} \sum_{\tau \in S_j} f_j^\tau - \varphi_i\Big) \le \frac{c_i \varphi_i f_i^t}{1 - c_i}.$$
Suppose instead $H(S) \ge 1 - \sum_{j=1}^n \varphi_j$. Then, by Lemmas 7 and 8, $H(S) = H(S + (t,i)) = 1 - \sum_{j=1}^n \varphi_j$. Hence,
$$z_i^t \le H(S + (t,i)) - \sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau \le H(S + (t,i)) - H(S) = 0 \le \frac{c_i \varphi_i f_i^t}{1 - c_i}.$$
Hence, $i \in I_1^t$. ∎
**Lemma 11** $\bar P_i^t$ is non-decreasing in $t$ on $\{t \mid f_i^t > 0\}$.

Proof. Suppose $f_i^t, f_i^{t+1} > 0$. If $i \notin I_1^{t+1}$, then $\bar P_i^t \le \frac{\varphi_i}{1 - c_i} = \bar P_i^{t+1}$. Suppose $i \in I_1^{t+1}$. Then there exists $S$ with $S_j \subset S_j^{t+1}$ for all $j$ and $t + 1 \in S_i$ such that
$$\sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau = H(S).$$
If $H(S) < 1 - \sum_{j=1}^n \varphi_j$, then, by the proof of Lemma 10, we have
$$\bar P_i^t \le \prod_{j \neq i} \sum_{\tau \in S_j} f_j^\tau \le \bar P_i^{t+1}.$$
If $H(S) \ge 1 - \sum_{j=1}^n \varphi_j$, then, by the proof of Lemma 10, we have $\bar P_i^t = \varphi_i \le \bar P_i^{t+1}$. ∎
By Lemmas 9 and 11 and Theorem 2 in Mierendorff (2011), $\bar P$ is a feasible solution to $(OPTHm-\varphi)$, or equivalently, $z$ is a feasible solution to $(OPTHm1-\varphi)$. Let
$$\hat z_i^t := \begin{cases} z_i^t & \text{if } d^t \ge 0, \\ 0 & \text{if } d^t < 0. \end{cases} \qquad (35)$$
Clearly, $\hat z$ is also a feasible solution to $(OPTHm1-\varphi)$. As I prove below in Lemma 14, $\hat z$ is an optimal solution to $(OPTHm1-\varphi)$. Before doing that, I first prove some useful properties of $S^{t,\nu}$ and $z$.

**Lemma 12** $S^{t,1} \supset S^{t+1,n^{t+1}} + \sum_{i \in \pi^{t+1,n^{t+1}}} (t+1,i)$ for $1 \le t \le m - 1$; and $S^{t,\nu+1} \supset S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)$ for $1 \le t \le m$.
Proof. By construction, $S^{m,\nu+1} \supset S^{m,\nu} + \sum_{i \in \pi^{m,\nu}} (m,i)$. Let $t \le m - 1$ and $I' = \pi^{t,1}$, and let $S := S^{t+1,n^{t+1}} + \sum_{i \in \pi^{t+1,n^{t+1}}} (t+1,i)$. Then $\sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau = H(S)$. Suppose $S^{t,1} \not\supset S$, and let $S' := S \cup S^{t,1}$. Then $S_j' = S_j^{t_j}$ for some $t_j \ge t + 1$ for all $j$. By Lemma 9, we have
$$\sum_{j=1}^n \sum_{\tau \in S_j' \setminus S_j^{t,1}} z_j^\tau = \sum_{j=1}^n \sum_{\tau \in S_j} z_j^\tau - \sum_{j=1}^n \sum_{\tau \in S_j^{t,1} \cap S_j} z_j^\tau \ge H(S) - H(S \cap S^{t,1}).$$
Hence,
$$\bigg[H\Big(S' + \sum_{i \in I'} (t,i)\Big) - \sum_{j=1}^n \sum_{\tau \in S_j'} z_j^\tau\bigg] - \bigg[H\Big(S^{t,1} + \sum_{i \in I'} (t,i)\Big) - \sum_{j=1}^n \sum_{\tau \in S_j^{t,1}} z_j^\tau\bigg]$$
$$\le H\Big(S' + \sum_{i \in I'} (t,i)\Big) - H\Big(S^{t,1} + \sum_{i \in I'} (t,i)\Big) - \big[H(S) - H(S \cap S^{t,1})\big] \le 0,$$
where the last inequality holds since $H$ is submodular. This contradicts the choice of $S^{t,1}$ as a minimizer with no strictly larger minimizer. Hence, $S^{t,1} \supset S^{t+1,n^{t+1}} + \sum_{i \in \pi^{t+1,n^{t+1}}} (t+1,i)$. By a similar argument, one can show that $S^{t,\nu+1} \supset S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)$ for all $t \le m - 1$. ∎
By Lemmas 7, 8 and 12, there exist $\bar t$ and $\bar\nu$ such that
$$H\Big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)\Big) \begin{cases} = 1 - \sum_i \varphi_i & \text{if } t > \bar t, \text{ or } t = \bar t \text{ and } \nu \ge \bar\nu, \\ < 1 - \sum_i \varphi_i & \text{otherwise.} \end{cases} \qquad (36)$$
By a similar argument to that in Lemma 10, we have:

**Lemma 13** If $t > \bar t$, or $t = \bar t$ and $i \notin \pi^{\bar t,1} \cup \dots \cup \pi^{\bar t,\bar\nu}$, then $z_i^t = 0$.

**Lemma 14** $\hat z$ defined in (35) is an optimal solution to $(OPTHm1-\varphi)$.
Proof. Consider the dual to problem $(OPTHm1-\varphi)$, denoted by $(DOPTHm1-\varphi)$:
$$\min_{\lambda,\beta,\mu}\ \sum_{i=1}^n \sum_{t=1}^m \frac{\lambda_i^t c_i \varphi_i f_i^t}{1 - c_i} + \sum_S \beta(S) H(S) + \sum_{i=1}^n \varphi_i \sum_{t=1}^m f_i^t \Big(d^t + \frac{k_i}{c_i}\Big),$$
subject to
$$d^t - \lambda_i^t + \mu_i^t - \sum_{S: S_i \ni t} \beta(S) \ge 0 \ \text{ if } f_i^t > 0, \ \forall i, \forall t,$$
$$\lambda \ge 0, \quad \mu \ge 0, \quad \beta \ge 0.$$
Let $\hat z$ be defined by (35) and let $(\hat\beta, \hat\lambda, \hat\mu)$ be the corresponding dual variables. Let $t_0$ be such that $d^t \ge 0$ if and only if $t \ge t_0$.

Case 1: $\bar t < t_0$, i.e., $d^{\bar t} < 0$. Let $\hat\beta\big(S^{t,n^t} + \sum_{i \in \pi^{t,n^t}} (t,i)\big) \ge 0$ for $t \ge t_0$ and $\hat\beta(S) = 0$ otherwise. Let $\hat\mu_i^t = 0$ if $t \ge t_0$ and $\hat\lambda_i^t = 0$ if $t < t_0$. (i) If $t < t_0$, then $\hat\beta(S) = 0$ for all $S$ such that $S_i \ni t$. Hence, $\hat\mu_i^t = -d^t \ge 0$. (ii) If $t = t_0$, let $\hat\beta\big(S^{t,n^t} + \sum_{j \in \pi^{t,n^t}} (t,j)\big) = d^t \ge 0$. If $i \in I_1^t$, let $\hat\lambda_i^t = 0$. If $i \notin I_0^t \cup I_1^t$, let $\hat\lambda_i^t = d^t \ge 0$. (iii) If $t > t_0$, let $\hat\beta\big(S^{t,n^t} + \sum_{j \in \pi^{t,n^t}} (t,j)\big) = d^t - d^{t-1} \ge 0$. If $i \in I_1^t$, let $\hat\lambda_i^t = 0$. If $i \notin I_0^t \cup I_1^t$, let $\hat\lambda_i^t = d^t - d^{t^*} \ge 0$, where $t^* = \inf\{t' > t \mid S^{t',n^{t'}} \ni i\}$. Thus, $(\hat\lambda, \hat\mu, \hat\beta)$ is a feasible solution to $(DOPTHm1-\varphi)$ and the complementary slackness conditions are satisfied. Finally, it is easy to verify that the dual objective is equal to the primal objective. By the duality theorem, $\hat z$ is an optimal solution to $(OPTHm1-\varphi)$.

Case 2: $\bar t \ge t_0$, i.e., $d^{\bar t} \ge 0$. Let $\hat\beta\big(S^{t,n^t} + \sum_{i \in \pi^{t,n^t}} (t,i)\big) \ge 0$ for $t \ge t_0$ and $\hat\beta(S) = 0$ otherwise. Let $\hat\mu_i^t = 0$ if $t \ge \bar t$ and $\hat\lambda_i^t = 0$ if $t < \bar t$. (i) If $t < t_0$, then $\hat\beta(S) = 0$ for all $S$ such that $S_i \ni t$. Hence, $\hat\mu_i^t = -d^t \ge 0$. (ii) If $t_0 \le t \le \bar t$, let $\hat\beta\big(S^{t,n^t} + \sum_{i \in \pi^{t,n^t}} (t,i)\big) = d^t - d^{t-1} \ge 0$ if $t > t_0$, and $\hat\beta\big(S^{t,n^t} + \sum_{i \in \pi^{t,n^t}} (t,i)\big) = d^t$ if $t = t_0$. If $i \in I_1^t$, let $\hat\lambda_i^t = 0$; note that, by Lemma 13, $i \in I_1^t$ for all $i \notin I_0^t$. (iii) If $t > \bar t$, let $\hat\beta\big(S^{t,n^t} + \sum_{j \in \pi^{t,n^t}} (t,j)\big) = d^t - d^{t-1} \ge 0$. If $i \in I_1^t$, let $\hat\lambda_i^t = 0$. If $i \notin I_0^t \cup I_1^t$, let $\hat\lambda_i^t = d^t - d^{t^*} \ge 0$, where $t^* = \inf\{t' > t \mid S^{t',n^{t'}} \ni i\}$. Thus, $(\hat\lambda, \hat\mu, \hat\beta)$ is a feasible solution to $(DOPTHm1-\varphi)$ and the complementary slackness conditions are satisfied. Finally, it is easy to verify that the dual objective is equal to the primal objective. By the duality theorem, $\hat z$ is an optimal solution to $(OPTHm1-\varphi)$. ∎
Finally, let
$$P_i^{m,t} := \begin{cases} \bar P_i^{m,t} & \text{if } d^t \ge 0, \\ \varphi_i & \text{if } d^t < 0. \end{cases} \qquad (37)$$
The following corollary directly follows from Lemma 14:

**Corollary 4** $P^m$ defined in (37) is an optimal solution to $(OPTHm-\varphi)$.

Before moving on to the continuum case, I prove the following two lemmas, which are useful in characterizing the limit of $P^m$.
**Lemma 15** Suppose $S^{t,\nu} = (S_1^{t_1^*}, \dots, S_n^{t_n^*})$. Then $t_i^* = t$ if $i \in \pi^{t,1} \cup \dots \cup \pi^{t,\nu-1}$, $t_i^* = t + 1$ if $i \in I_1^{t+1} \cup \pi^{t,\nu} \setminus (\pi^{t,1} \cup \dots \cup \pi^{t,\nu-1})$, and $t_i^* \in \{t+1, m+1\}$ otherwise. Furthermore, for $h \notin I_1^{t+1} \cup \pi^{t,1} \cup \dots \cup \pi^{t,\nu}$, we have:

1. If $\frac{\varphi_h}{1-c_h} - \prod_{i \neq h} \sum_{\tau=1}^{t_i^*-1} f_i^\tau \ge 0$, then $t_h^* = t + 1$.

2. If $\frac{\varphi_h}{1-c_h} - \prod_{i \neq h} \sum_{\tau=1}^{t_i^*-1} f_i^\tau < 0$ and $H\big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)\big) < 1 - \sum_{i=1}^n \varphi_i$, then $t_h^* = m + 1$.

3. If $\frac{\varphi_h}{1-c_h} - \prod_{i \neq h} \sum_{\tau=1}^{t_i^*-1} f_i^\tau < 0$ and $H\big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)\big) = 1 - \sum_{i=1}^n \varphi_i$, then $t_h^* = t + 1$.
Proof. By construction, $t_i^* = t + 1$ if $i \in \pi^{t,\nu}$. By Lemma 12, $t_i^* = t$ if $i \in \pi^{t,1} \cup \dots \cup \pi^{t,\nu-1}$ and $t_i^* = t + 1$ if $i \in I_1^{t+1} \setminus (\pi^{t,1} \cup \dots \cup \pi^{t,\nu-1})$. If $t = m$, then, by construction, $t_i^* = m + 1$ for $i \notin \pi^{t,1} \cup \dots \cup \pi^{t,\nu-1}$.

Let $t \le m - 1$. For ease of notation, let $I' = \pi^{t,\nu}$ and let $S = (S_1^{t_1}, \dots, S_n^{t_n})$ be such that $t_i = t$ if $i \in \pi^{t,1} \cup \dots \cup \pi^{t,\nu-1}$, $t_i = t + 1$ if $i \in I_1^{t+1} \cup \pi^{t,\nu} \setminus (\pi^{t,1} \cup \dots \cup \pi^{t,\nu-1})$, and $t_i \ge t + 1$ otherwise. Fix $h \notin I_1^{t+1} \cup \pi^{t,1} \cup \dots \cup \pi^{t,\nu}$ and $t_i$ for all $i \neq h$. Define
$$\Delta(t_h) := H\Big(S + \sum_{i \in I'} (t,i)\Big) - \sum_{i=1}^n \sum_{\tau \in S_i} z_i^\tau.$$
There exists $t \le t^* \le m + 1$ such that if $t + 1 \le t_h \le t^*$, then
$$\Delta(t_h) = 1 - \sum_{i=1}^n \varphi_i - \sum_{i=1}^n \sum_{\tau \in S_i} z_i^\tau;$$
and if $t^* < t_h \le m + 1$, then
$$\Delta(t_h) = 1 - \prod_{i \notin I'} \Big(\sum_{\tau=1}^{t_i-1} f_i^\tau\Big) \prod_{i \in I'} \Big(\sum_{\tau=1}^{t-1} f_i^\tau\Big) - \sum_{i \notin I'} \sum_{\tau=t_i}^m f_i^\tau \varphi_i - \sum_{i \in I'} \sum_{\tau=t}^m f_i^\tau \varphi_i - \sum_{i=1}^n \sum_{\tau=t_i}^m z_i^\tau.$$
Recall that $z_h^{t_h} = c_h \varphi_h f_h^{t_h}/(1 - c_h)$ for all $t_h \ge t + 1$. If $t_h < t^*$, we have $\Delta(t_h + 1) - \Delta(t_h) = c_h \varphi_h f_h^{t_h}/(1 - c_h) \ge 0$. Hence, $\Delta(t+1) \le \Delta(t_h)$ for all $t_h \le t^*$. If $t_h > t^*$, we have
$$\Delta(t_h + 1) - \Delta(t_h) = f_h^{t_h} \bigg(\frac{\varphi_h}{1 - c_h} - \prod_{i \notin I', i \neq h} \Big(\sum_{\tau=1}^{t_i-1} f_i^\tau\Big) \prod_{i \in I'} \Big(\sum_{\tau=1}^{t-1} f_i^\tau\Big)\bigg).$$
If $t_h = t^*$, we have
$$\Delta(t_h + 1) - \Delta(t_h) \ge f_h^{t_h} \bigg(\frac{\varphi_h}{1 - c_h} - \prod_{i \notin I', i \neq h} \Big(\sum_{\tau=1}^{t_i-1} f_i^\tau\Big) \prod_{i \in I'} \Big(\sum_{\tau=1}^{t-1} f_i^\tau\Big)\bigg).$$
Hence, if $\frac{\varphi_h}{1-c_h} - \prod_{i \notin I', i \neq h} \big(\sum_{\tau=1}^{t_i-1} f_i^\tau\big) \prod_{i \in I'} \big(\sum_{\tau=1}^{t-1} f_i^\tau\big) \ge 0$, then $\Delta(t_h + 1) \ge \Delta(t_h)$ for all $t_h \ge t^*$. Furthermore, since $\Delta(t+1) \le \Delta(t_h)$ for all $t_h \le t^*$, we have $\Delta(t+1) \le \Delta(t_h)$ for all $t_h \ge t + 1$; hence $t_h^* = t + 1$.

If $\frac{\varphi_h}{1-c_h} - \prod_{i \notin I', i \neq h} \big(\sum_{\tau=1}^{t_i-1} f_i^\tau\big) \prod_{i \in I'} \big(\sum_{\tau=1}^{t-1} f_i^\tau\big) < 0$, then $\Delta(t_h + 1) \le \Delta(t_h)$ for all $t_h > t^*$. Hence, $\Delta(m+1) \le \Delta(t_h)$ for all $t_h > t^*$. Recall that $\Delta(t+1) \le \Delta(t_h)$ for all $t_h \le t^*$. Hence, $t_h^* \in \arg\min \{\Delta(t+1), \Delta(m+1)\}$. ∎
**Lemma 16** Suppose $S^{t,\nu} = (S_1^{t_1^*}, \dots, S_n^{t_n^*})$ and $h \notin I_1^{t+1} \cup \pi^{t,1} \cup \dots \cup \pi^{t,\nu}$. Then $t_h^* = t + 1$ implies that $h \in I_1^t$.

Proof. Suppose $H\big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)\big) = 1 - \sum_{i=1}^n \varphi_i$; then, by Lemma 13, $h \in I_1^t$. Suppose instead $H\big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)\big) < 1 - \sum_{i=1}^n \varphi_i$. By Lemma 15, $\frac{\varphi_h}{1-c_h} \ge \prod_{i \neq h} \sum_{\tau=1}^{t_i^*-1} f_i^\tau$. Hence,
$$z_h^t \le H\Big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i) + (t,h)\Big) - H\Big(S^{t,\nu} + \sum_{i \in \pi^{t,\nu}} (t,i)\Big) \le f_h^t \Big(\prod_{i \neq h} \sum_{\tau=1}^{t_i^*-1} f_i^\tau - \varphi_h\Big) \le f_h^t \Big(\frac{\varphi_h}{1-c_h} - \varphi_h\Big) = \frac{c_h \varphi_h f_h^t}{1 - c_h}.$$
Hence, $h \in I_1^t$. ∎
B.2 Continuum Case

I characterize an optimal solution in the continuum case by taking $m$ to infinity. Let $I_1^{m,t}$ denote $I_1^t$ and let $\bar t^m$ be defined by (36) when $D$ is discretized by $m$ grid points. Clearly, if $i \in I_1^{m,t}$ then $i \in I_1^{2m,2t-1}$. Let $\bar t_i^m := \max\{t \mid i \in I_1^{m,t}\}$ and $d_i^m := \underline d + (\bar t_i^m - 1)(\bar d - \underline d)/m$. Then the sequence $\{d_i^{2^\kappa}\}_\kappa$ is non-decreasing and bounded from above by $\bar d$. Hence, the sequence converges; let $d_i^u := \lim_{\kappa \to \infty} d_i^{2^\kappa}$ denote its limit. For each $\kappa$, let $d^{2^\kappa} := \underline d + (\bar t^{2^\kappa} - 1)(\bar d - \underline d)/2^\kappa$, which is bounded. After taking subsequences, we can assume $d^{2^\kappa}$ converges; let $d^l := \lim_{\kappa \to \infty} d^{2^\kappa}$ denote its limit. Let
$$\bar P_i(v_i) := \begin{cases} \dfrac{\varphi_i}{1 - c_i} & \text{if } v_i > d_i^u + \dfrac{k_i}{c_i}, \\[4pt] \displaystyle\prod_{j \neq i,\ d_j^u \ge v_i - \frac{k_i}{c_i}} F_j\Big(v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\Big) & \text{if } d^l + \dfrac{k_i}{c_i} < v_i < d_i^u + \dfrac{k_i}{c_i}, \\[4pt] \varphi_i & \text{if } v_i < d^l + \dfrac{k_i}{c_i}. \end{cases}$$
Finally, let
$$P_i^*(v_i) := \begin{cases} \bar P_i(v_i) & \text{if } v_i > \dfrac{k_i}{c_i}, \\ \varphi_i & \text{if } v_i < \dfrac{k_i}{c_i}. \end{cases} \qquad (19)$$
I show below that $P^*$ is the pointwise limit of $P^m$ and an optimal solution to $(OPTH-\varphi)$.
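For intuition, the three branches of $\bar P_i$ can be evaluated directly in a symmetric toy. The sketch below uses purely illustrative parameters (uniform $F$, a common $k$, $c$ and common thresholds), chosen so that the monotonicity established in Lemma 11 remains visible in the limit object; it is not the paper's mechanism itself.

```python
def p_bar(v, phi, c, k, d_l, d_u, n, F):
    """Sketch of the limit allocation probability \\bar P_i in a symmetric
    toy: n agents, common cdf F, common k and c, thresholds d_l <= d_u.
    Follows the three-branch formula above with k_j/c_j = k_i/c_i."""
    a = k / c
    if v < d_l + a:
        return phi                 # pooled at the bottom
    if v > d_u + a:
        return phi / (1.0 - c)     # pooled at the top
    return F(v) ** (n - 1)         # interior: product of rival cdfs

F = lambda x: min(max(x, 0.0), 1.0)    # uniform on [0, 1]
grid = [0.05 * t for t in range(21)]
vals = [p_bar(v, 0.1, 0.9, 0.09, 0.3, 0.7, 3, F) for v in grid]
# the (discrete analogue of the) monotonicity in Lemma 11 survives the limit
print(all(b >= a - 1e-9 for a, b in zip(vals, vals[1:])))  # → True
```

The parameter choice respects $\varphi_i \le F(d^l + k/c)^{n-1}$ and $\varphi_i/(1-c) \ge F(d^u + k/c)^{n-1}$, so the rule jumps (weakly) upward at both thresholds.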
Proof of Theorem 4. We can extend $\bar P_i^m$ ($P_i^m$) to $[\underline v_i, \bar v_i]$ by setting, for each $t = 1, \dots, m$,
$$\bar P_i^m(v_i) := \bar P_i^{m,t} \quad \big(P_i^m(v_i) := P_i^{m,t}\big) \quad \text{for } v_i \in \Big(\underline d + \frac{(t-1)(\bar d - \underline d)}{m} + \frac{k_i}{c_i},\ \underline d + \frac{t(\bar d - \underline d)}{m} + \frac{k_i}{c_i}\Big].$$
I show that, after taking subsequences, $\bar P_i^m$ converges to $\bar P_i$ pointwise.

First, by construction, $\bar P_i^{2^\kappa}(v_i) = \varphi_i$ for all $v_i < d^{2^\kappa} + \frac{k_i}{c_i}$, so $\lim_{\kappa \to \infty} \bar P_i^{2^\kappa}(v_i) = \bar P_i(v_i)$ for all $v_i < d^l + \frac{k_i}{c_i}$. Similarly, by construction, $\bar P_i^{2^\kappa}(v_i) = \frac{\varphi_i}{1-c_i}$ for all $v_i > d_i^{2^\kappa} + \frac{k_i}{c_i}$, so $\lim_{\kappa \to \infty} \bar P_i^{2^\kappa}(v_i) = \bar P_i(v_i)$ for all $v_i > d_i^u + \frac{k_i}{c_i}$.

Suppose $d^l < v_i - \frac{k_i}{c_i} < d_i^u$. Assume without loss of generality that $d_1^u \ge \dots \ge d_n^u \ge d^l$. If $d_i^u = d^l$, then we are done. Assume for the rest of the proof that $d_i^u > d^l$, and let $d_{n+1}^u := d^l$. Consider $v_i$ such that $d_i^u \ge d_j^u > v_i - \frac{k_i}{c_i} > d_{j+1}^u$ for some $j \ge i$. For $m$ sufficiently large, there exists $t$ such that
$$d_{j+1}^u < \underline d + \frac{(t-1)(\bar d - \underline d)}{m} < \underline d + \frac{t(\bar d - \underline d)}{m} < v_i - \frac{k_i}{c_i} < \underline d + \frac{(t+1)(\bar d - \underline d)}{m} < d_j^u.$$
Hence, by construction, we have $I_1^{m,t} = I_1^{m,t+1} = \{1, \dots, j\}$. By Lemmas 15 and 16, there exists $S = (S_1^{t_1}, \dots, S_n^{t_n})$ such that $t_i = t + 1$, $t_h \in \{t, t+1\}$ if $h \le j$ and $h \neq i$, $t_h = m + 1$ if $h > j$, and
$$f_i^t \big(\bar P_i^{m,t} - \varphi_i\big) = H(S + (t,i)) - H(S).$$
In particular,
$$f_i^t \big(\bar P_i^{m,t} - \varphi_i\big) \le H(S' + (t,i)) - H(S') = f_i^t \Big(\prod_{h \le j, h \neq i} \sum_{\tau=1}^t f_h^\tau - \varphi_i\Big),$$
where $S' = (S_1^{t+1}, \dots, S_j^{t+1}, S_{j+1}^{m+1}, \dots, S_n^{m+1})$; and
$$f_i^t \big(\bar P_i^{m,t} - \varphi_i\big) \ge H(S'') - H(S'' - (t,i)) = f_i^t \Big(\prod_{h \le j, h \neq i} \sum_{\tau=1}^{t-1} f_h^\tau - \varphi_i\Big),$$
where $S'' = (S_1^t, \dots, S_j^t, S_{j+1}^{m+1}, \dots, S_n^{m+1})$. Hence,
$$\prod_{h \le j, h \neq i} \sum_{\tau=1}^{t-1} f_h^\tau \le \bar P_i^{m,t} \le \prod_{h \le j, h \neq i} \sum_{\tau=1}^t f_h^\tau.$$
Taking $m = 2^\kappa$ to infinity, we obtain $\lim_{\kappa \to \infty} \bar P_i^{2^\kappa}(v_i) = \bar P_i(v_i)$.

It follows that, after taking subsequences, $P_i^m$ converges to $P_i^*$ pointwise. $P^*$ is feasible by a similar argument to that in the proof of Lemma 3, and optimal by a similar argument to that in the proof of Theorem 1. ∎
B.3 Optimal One-Threshold Mechanism

**Lemma 17** There exists a unique $d^l \ge \max_j \{\underline v_j - k_j/c_j\}$ such that
$$\sum_{i=1}^n \varphi_i F_i\Big(d^l + \frac{k_i}{c_i}\Big) = \prod_{i=1}^n F_i\Big(d^l + \frac{k_i}{c_i}\Big). \qquad (38)$$

Proof. If $\varphi_i = 0$ for all $i$, then $d^l = \max_j \{\underline v_j - k_j/c_j\}$ is the unique solution to (38). Assume for the rest of the proof that $\varphi_i > 0$ for some $i$. Let
$$\Delta(d^l) := \sum_{i=1}^n \varphi_i F_i\Big(d^l + \frac{k_i}{c_i}\Big) - \prod_{i=1}^n F_i\Big(d^l + \frac{k_i}{c_i}\Big).$$
Then
$$\Delta'(d^l) = \sum_{i=1}^n f_i\Big(d^l + \frac{k_i}{c_i}\Big) \bigg[\varphi_i - \prod_{j \neq i} F_j\Big(d^l + \frac{k_j}{c_j}\Big)\bigg].$$
If $\Delta(d^l) \le 0$, then $\varphi_i \le \prod_{j \neq i} F_j(d^l + k_j/c_j)$ for all $i$, with strict inequality for some $i$, which implies $\Delta'(d^l) < 0$. Since $\Delta(\max_j \{\underline v_j - k_j/c_j\}) \ge 0$ and $\Delta(\max_j \{\bar v_j - k_j/c_j\}) = \sum_i \varphi_i - 1 \le 0$, there exists a unique $d^l$ satisfying (38). ∎
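Since the proof of Lemma 17 shows that $\Delta$ is non-negative at the lower end of the range, non-positive at the upper end, and strictly decreasing once it turns non-positive, $d^l$ can be computed by bisection. The sketch below is purely illustrative; the uniform two-agent example is chosen so that the root, $d^l = 0.4$, can be verified by hand.

```python
def solve_lower_threshold(phis, Fs, shifts, lo, hi):
    """Bisection sketch for equation (38):
        sum_i phi_i F_i(d + k_i/c_i) = prod_i F_i(d + k_i/c_i).
    Delta(d) >= 0 at lo and <= 0 at hi, as in the proof of Lemma 17.
    All inputs (cdfs Fs, weights phis, shifts k_i/c_i) are illustrative."""
    def delta(d):
        vals = [F(d + s) for F, s in zip(Fs, shifts)]
        prod = 1.0
        for x in vals:
            prod *= x
        return sum(p * x for p, x in zip(phis, vals)) - prod

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if delta(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Symmetric toy: two agents, uniform values on [0, 1], phi_i = 0.3,
# k_i/c_i = 0.2.  Then (38) reads 0.6 F = F^2, so F(d + 0.2) = 0.6
# and d^l = 0.4.
F = lambda x: min(max(x, 0.0), 1.0)
d_l = solve_lower_threshold([0.3, 0.3], [F, F], [0.2, 0.2], lo=-0.1, hi=0.8)
print(round(d_l, 6))  # → 0.4
```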
Proof of Theorem 5. Let $\Phi(d^l, d_1^u, \dots, d_n^u) \subset \{\vec\varphi \mid \sum_i \varphi_i \le 1\}$ denote the feasible set of $\vec\varphi$ given $d^l$ and $d_1^u, \dots, d_n^u$. I often abuse notation and use $\Phi$ to denote the feasible set when its meaning is clear. Fix $d^l > \max_j \{\underline v_j - k_j/c_j\}$ and $d_i^u = \bar v_i - \frac{k_i}{c_i}$. Then $\vec\varphi$ is feasible if and only if
$$\sum_{i=1}^n \varphi_i F_i\Big(d^l + \frac{k_i}{c_i}\Big) = \prod_{i=1}^n F_i\Big(d^l + \frac{k_i}{c_i}\Big),$$
$$\frac{\varphi_i}{1-c_i} \le \prod_{j \neq i} F_j\Big(\bar v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\Big), \quad \forall i.$$
Then the set of feasible $\vec\varphi$, $\Phi$, is non-empty if and only if
$$\sum_{i=1}^n (1-c_i) \prod_{j \neq i} F_j\Big(\bar v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\Big) F_i\Big(d^l + \frac{k_i}{c_i}\Big) \ge \prod_{i=1}^n F_i\Big(d^l + \frac{k_i}{c_i}\Big).$$
Suppose $\Phi$ is non-empty. It is not hard to see that $\Phi$ is convex. Since the objective function is linear in $\vec\varphi$ and the feasible set is convex, given $d^l$ and $d^u$ there is an optimal $\vec\varphi$ which is an extreme point.

Clearly, $\vec\varphi$ is an extreme point of $\Phi$ if and only if there exists $i^*$ such that
$$\sum_{i=1}^n \varphi_i F_i\Big(d^l + \frac{k_i}{c_i}\Big) = \prod_{i=1}^n F_i\Big(d^l + \frac{k_i}{c_i}\Big), \qquad \varphi_j = (1-c_j) \prod_{i \neq j} F_i\Big(\bar v_j - \frac{k_j}{c_j} + \frac{k_i}{c_i}\Big), \ \forall j \neq i^*.$$
For ease of notation, let $i^* = 1$, and let $\bar\varphi_j := (1-c_j) \prod_{i \neq j} F_i\big(\bar v_j - \frac{k_j}{c_j} + \frac{k_i}{c_i}\big)$ for all $j$. Then the principal's payoff is given by $Z_{1,i^*}(d^l)$ defined as follows:
$$Z_{1,1}(d^l) := \sum_{i=1}^n \int_{\max\{d^l + \frac{k_i}{c_i}, \frac{k_i}{c_i}\}}^{\bar v_i} \Big(v_i - \frac{k_i}{c_i}\Big) \prod_{j \neq i} F_j\Big(v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\Big) dF_i(v_i)$$
$$+ \sum_{i \neq 1} \int_{\underline v_i}^{\max\{d^l + \frac{k_i}{c_i}, \frac{k_i}{c_i}\}} \Big(v_i - \frac{k_i}{c_i}\Big) \varphi_i \, dF_i(v_i)$$
$$+ \int_{\underline v_1}^{\max\{d^l + \frac{k_1}{c_1}, \frac{k_1}{c_1}\}} \Big(v_1 - \frac{k_1}{c_1}\Big) \frac{\prod_{i=1}^n F_i\big(d^l + \frac{k_i}{c_i}\big) - \sum_{i \neq 1} \varphi_i F_i\big(d^l + \frac{k_i}{c_i}\big)}{F_1\big(d^l + \frac{k_1}{c_1}\big)} \, dF_1(v_1)$$
$$+ \sum_{i \neq 1} \frac{\varphi_i k_i}{c_i} + \frac{k_1}{c_1} \cdot \frac{\prod_{i=1}^n F_i\big(d^l + \frac{k_i}{c_i}\big) - \sum_{i \neq 1} \varphi_i F_i\big(d^l + \frac{k_i}{c_i}\big)}{F_1\big(d^l + \frac{k_1}{c_1}\big)}.$$
If $d^l < 0$, it is not hard to show that $Z_{1,1}$ is strictly increasing in $d^l$. If $d^l \ge 0$, then, after some algebra, we have
$$Z_{1,1}'(d^l) = \bigg[\sum_{i \neq 1} f_i\Big(d^l + \frac{k_i}{c_i}\Big) \bigg(\prod_{j \neq i,1} F_j\Big(d^l + \frac{k_j}{c_j}\Big) - \frac{\varphi_i}{F_1\big(d^l + \frac{k_1}{c_1}\big)}\bigg)\bigg] \cdot \bigg[\int_{\underline v_1}^{d^l + \frac{k_1}{c_1}} \Big(v_1 - d^l - \frac{k_1}{c_1}\Big) dF_1(v_1) + \frac{k_1}{c_1}\bigg].$$
Since $\varphi_i \le \prod_{j \neq i} F_j\big(d^l + \frac{k_j}{c_j}\big)$, the first term in the above equation is strictly positive. The second term is strictly decreasing in $d^l$. Let $d_1^*$ be such that
$$\int_{\underline v_1}^{d_1^* + \frac{k_1}{c_1}} \Big(v_1 - d_1^* - \frac{k_1}{c_1}\Big) dF_1(v_1) + \frac{k_1}{c_1} = 0. \qquad (39)$$
Then $Z_{1,1}'(d^l) > 0$ if $d^l < d_1^*$ and $Z_{1,1}'(d^l) < 0$ if $d^l > d_1^*$. Hence, $Z_{1,1}(d^l)$ achieves its maximum at $d^l = d_1^*$.

Suppose $d_1^* \ge d_2^*$. Note that, by the same argument as in Lemma 17, if $\Phi\big(d_2^*, \bar v_1 - \frac{k_1}{c_1}, \dots, \bar v_n - \frac{k_n}{c_n}\big) \neq \emptyset$ then $\Phi\big(d_1^*, \bar v_1 - \frac{k_1}{c_1}, \dots, \bar v_n - \frac{k_n}{c_n}\big) \neq \emptyset$. Suppose both $\Phi\big(d_2^*, \bar v_1 - \frac{k_1}{c_1}, \dots, \bar v_n - \frac{k_n}{c_n}\big)$ and $\Phi\big(d_1^*, \bar v_1 - \frac{k_1}{c_1}, \dots, \bar v_n - \frac{k_n}{c_n}\big)$ are non-empty. Then
$$Z_{1,1}(d^l) - Z_{1,2}(d^l) = \bigg[\prod_{i=1}^n F_i\Big(d^l + \frac{k_i}{c_i}\Big) - \sum_{i=1}^n \varphi_i F_i\Big(d^l + \frac{k_i}{c_i}\Big)\bigg]$$
$$\cdot \Bigg\{\frac{1}{F_1\big(d^l + \frac{k_1}{c_1}\big)} \bigg[\int_{\underline v_1}^{d^l + \frac{k_1}{c_1}} \Big(v_1 - \frac{k_1}{c_1}\Big) dF_1(v_1) + \frac{k_1}{c_1}\bigg] - \frac{1}{F_2\big(d^l + \frac{k_2}{c_2}\big)} \bigg[\int_{\underline v_2}^{d^l + \frac{k_2}{c_2}} \Big(v_2 - \frac{k_2}{c_2}\Big) dF_2(v_2) + \frac{k_2}{c_2}\bigg]\Bigg\}.$$
If $d^l = d_2^*$, then by definition (39) applied to agent 2, the second term inside the braces equals $d_2^*$, so that
$$Z_{1,1}(d_2^*) - Z_{1,2}(d_2^*) = \bigg[\prod_{i=1}^n F_i\Big(d_2^* + \frac{k_i}{c_i}\Big) - \sum_{i=1}^n \varphi_i F_i\Big(d_2^* + \frac{k_i}{c_i}\Big)\bigg] \cdot \Bigg\{\frac{1}{F_1\big(d_2^* + \frac{k_1}{c_1}\big)} \bigg[\int_{\underline v_1}^{d_2^* + \frac{k_1}{c_1}} \Big(v_1 - \frac{k_1}{c_1}\Big) dF_1(v_1) + \frac{k_1}{c_1}\bigg] - d_2^*\Bigg\}$$
$$\ge \bigg[\prod_{i=1}^n F_i\Big(d_2^* + \frac{k_i}{c_i}\Big) - \sum_{i=1}^n \varphi_i F_i\Big(d_2^* + \frac{k_i}{c_i}\Big)\bigg] (d_2^* - d_2^*) = 0,$$
where the last inequality holds since $d_1^* \ge d_2^*$, and it holds strictly if $d_1^* > d_2^*$. Hence, $Z_{1,1}(d_1^*) \ge Z_{1,2}(d_2^*)$, and the inequality holds strictly if $d_1^* > d_2^*$.

Let $d^{l*} := \max_i d_i^*$. If
$$\sum_{i=1}^n (1-c_i) \prod_{j \neq i} F_j\Big(\bar v_i - \frac{k_i}{c_i} + \frac{k_j}{c_j}\Big) F_i\Big(d^{l*} + \frac{k_i}{c_i}\Big) \ge \prod_{i=1}^n F_i\Big(d^{l*} + \frac{k_i}{c_i}\Big),$$
then $\Phi\big(d^{l*}, \bar v_1 - \frac{k_1}{c_1}, \dots, \bar v_n - \frac{k_n}{c_n}\big)$ is non-empty, which completes the proof. ∎
B.4 Symmetric Environment Revisited

Fix $\vec\varphi$ and let $d_i^u$ and $d^l$ be the associated optimal thresholds. Assume, without loss of generality, $d_1^u \ge \dots \ge d_n^u \ge d^l$. Let $1 \le \xi_1 < \dots < \xi_L \le n$ be such that $d_1^u = \dots = d_{\xi_1}^u$, $d_{\xi_\iota}^u > d_{\xi_\iota+1}^u = \dots = d_{\xi_{\iota+1}}^u$ for $\iota = 1, \dots, L-1$, and $d_{\xi_L}^u > d_{\xi_L+1}^u = \dots = d_n^u = d^l$. Note that in the symmetric environment $d_i^u \ge d_j^u$ only if $\varphi_i \ge \varphi_j$. The proof of Theorem 6 uses the following properties of $d^l$ and $d_i^u$:

**Lemma 18** If $\frac{\varphi_i}{1-c} \ge 1$ for all $i \le \xi_1$, then $d_{\xi_1}^u = \bar v - \frac{k}{c}$; otherwise $\frac{\varphi_i}{1-c} < 1$ for all $i \le \xi_1$ and $d_{\xi_1}^u$ satisfies
$$1 - F\Big(d_{\xi_1}^u + \frac{k}{c}\Big)^{\xi_1} = \sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} \Big[1 - F\Big(d_{\xi_1}^u + \frac{k}{c}\Big)\Big],$$
$$1 - F(v)^{\xi_1} \lessgtr \sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} \,[1 - F(v)] \ \text{ if } v \gtrless d_{\xi_1}^u + \frac{k}{c},$$
$$\frac{\varphi_i}{1-c} \ge F\Big(d_{\xi_1}^u + \frac{k}{c}\Big)^{\xi_1 - 1}, \quad \forall i = 1, \dots, \xi_1.$$
For $\iota = 1, \dots, L-1$, $\frac{\varphi_i}{1-c} < 1$ for $\xi_\iota + 1 \le i \le \xi_{\iota+1}$ and $d_{\xi_{\iota+1}}^u$ satisfies
$$F\Big(d_{\xi_{\iota+1}}^u + \frac{k}{c}\Big)^{\xi_\iota} - F\Big(d_{\xi_{\iota+1}}^u + \frac{k}{c}\Big)^{\xi_{\iota+1}} = \sum_{i=\xi_\iota+1}^{\xi_{\iota+1}} \frac{\varphi_i}{1-c} \Big[1 - F\Big(d_{\xi_{\iota+1}}^u + \frac{k}{c}\Big)\Big],$$
$$F(v)^{\xi_\iota} - F(v)^{\xi_{\iota+1}} \lessgtr \sum_{i=\xi_\iota+1}^{\xi_{\iota+1}} \frac{\varphi_i}{1-c} \,[1 - F(v)] \ \text{ if } v \gtrless d_{\xi_{\iota+1}}^u + \frac{k}{c},$$
$$\frac{\varphi_i}{1-c} \ge F\Big(d_{\xi_{\iota+1}}^u + \frac{k}{c}\Big)^{\xi_{\iota+1} - 1}, \quad \forall i = \xi_\iota + 1, \dots, \xi_{\iota+1}.$$
Finally, $d^l$ satisfies
$$F\Big(d^l + \frac{k}{c}\Big)^{\xi_L} = \sum_{i=1}^n \varphi_i F\Big(d^l + \frac{k}{c}\Big) + \sum_{i=\xi_L+1}^n \frac{\varphi_i}{1-c} \Big[1 - F\Big(d^l + \frac{k}{c}\Big)\Big],$$
$$F(v)^{\xi_L} \lessgtr \sum_{i=1}^n \varphi_i F(v) + \sum_{i=\xi_L+1}^n \frac{\varphi_i}{1-c} \,[1 - F(v)] \ \text{ if } v \lessgtr d^l + \frac{k}{c}.$$

The arguments used to prove Lemma 18 are similar to those used to show that $\bar P_i^m$ converges to $\bar P_i$ when $d^l < v_i - \frac{k_i}{c_i} < d_i^u$, and are omitted here.
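The first displayed condition of Lemma 18 pins down $d_{\xi_1}^u$ as the root of a single-crossing equation, so the same bisection idea as for $d^l$ applies. The toy below is purely illustrative: with uniform $F$, $\xi_1 = 2$, $\varphi_1 = \varphi_2 = 0.3$ and $c = 0.5$, the equation $1 - F^2 = 1.2\,(1 - F)$ factors as $(1-F)(F - 0.2) = 0$, giving $F(d^u + k/c) = 0.2$.

```python
def solve_upper_threshold(xi, phi_sum_over_1mc, F, shift, lo, hi):
    """Bisection sketch for the first condition of Lemma 18:
        1 - F(d + k/c)^xi = (sum_{i<=xi} phi_i/(1-c)) * (1 - F(d + k/c)).
    g is negative below the root and positive above it on the bracket.
    All primitives are illustrative stand-ins."""
    g = lambda d: 1.0 - F(d + shift) ** xi - phi_sum_over_1mc * (1.0 - F(d + shift))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Uniform toy: xi_1 = 2, phi_1 = phi_2 = 0.3, c = 0.5 (so the sum is 1.2),
# k/c = 0.1.  The root has F(d + 0.1) = 0.2, i.e. d^u = 0.1.
F = lambda x: min(max(x, 0.0), 1.0)
d_u = solve_upper_threshold(2, 1.2, F, 0.1, lo=-0.1, hi=0.85)
print(round(d_u, 6))  # → 0.1
```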
Proof of Theorem 6. The first part of the theorem directly follows from Theorem 5. Assume, for the rest of the proof, that $F(v^*)^{n-1} < n(1-c)$. Consider an optimal $\vec\varphi$ and let $d_i^u$ and $d^l$ be the associated optimal thresholds. Assume, without loss of generality, $d_1^u \ge \dots \ge d_n^u \ge d^l$. Let $\xi_\iota$ ($\iota = 1, \dots, L$) be defined as in the beginning of this subsection.

First, I show that $L = 1$. Suppose, to the contrary, that $L \ge 2$. If $d_{\xi_2}^u < 0$, then the principal's objective function is strictly increasing in $\varphi_i$ for $i > \xi_2$. Hence, at the optimum, it must be that $d_{\xi_2}^u \ge 0$. Let
$$\varphi_i^* = \frac{1}{\xi_2} \sum_{j=1}^{\xi_2} \varphi_j \quad \text{for all } i = 1, \dots, \xi_2,$$
and $\varphi_i^* = \varphi_i$ for all $i > \xi_2$. Then $d_1^{u*} = \dots = d_{\xi_2}^{u*}$ and $d_i^{u*} = d_i^u$ for all $i > \xi_2$. There are two cases: (1) $\varphi_i < 1-c$ for all $i \le \xi_1$ and (2) $\varphi_i \ge 1-c$ for all $i \le \xi_1$.

Case 1: $\varphi_i < 1-c$ for all $i \le \xi_1$. In this case, $\varphi_1^* < 1-c$. Then $d_{\xi_2}^{u*}$ is defined by
$$1 - F\Big(d_{\xi_2}^{u*} + \frac{k}{c}\Big)^{\xi_2} = \sum_{i=1}^{\xi_2} \frac{\varphi_i^*}{1-c} \Big[1 - F\Big(d_{\xi_2}^{u*} + \frac{k}{c}\Big)\Big]. \qquad (40)$$
Hence, $d_{\xi_2}^u < d_{\xi_2}^{u*} < d_{\xi_1}^u$. Let $Z(\vec\varphi)$ denote the principal's payoff given $\vec\varphi$. Then
$$Z(\vec\varphi^*) - Z(\vec\varphi) = \sum_{i=1}^{\xi_2} \bigg[\int_{d_{\xi_2}^{u*}+\frac{k}{c}}^{\bar v} \Big(v_i - \frac{k}{c}\Big) \frac{\varphi_i^*}{1-c}\,dF(v_i) + \int_{d_{\xi_2}^u+\frac{k}{c}}^{d_{\xi_2}^{u*}+\frac{k}{c}} \Big(v_i - \frac{k}{c}\Big) F(v_i)^{\xi_2-1}\,dF(v_i)\bigg]$$
$$- \sum_{i=\xi_1+1}^{\xi_2} \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \Big(v_i - \frac{k}{c}\Big) \frac{\varphi_i}{1-c}\,dF(v_i) - \sum_{i=1}^{\xi_1} \bigg[\int_{d_{\xi_2}^u+\frac{k}{c}}^{d_{\xi_1}^u+\frac{k}{c}} \Big(v_i - \frac{k}{c}\Big) F(v_i)^{\xi_1-1}\,dF(v_i) + \int_{d_{\xi_1}^u+\frac{k}{c}}^{\bar v} \Big(v_i - \frac{k}{c}\Big) \frac{\varphi_i}{1-c}\,dF(v_i)\bigg].$$
Collecting terms (using $\sum_{i=1}^{\xi_2} \varphi_i^* = \sum_{i=1}^{\xi_2} \varphi_i$) and integrating by parts, this equals a sum of boundary terms at $d_{\xi_1}^u$, $d_{\xi_2}^{u*}$ and $d_{\xi_2}^u$ plus
$$-\int_{d_{\xi_2}^u+\frac{k}{c}}^{d_{\xi_2}^{u*}+\frac{k}{c}} \Big[F(v)^{\xi_2} - F(v) \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} - F(v)^{\xi_1}\Big] dv - \int_{d_{\xi_2}^{u*}+\frac{k}{c}}^{d_{\xi_1}^u+\frac{k}{c}} \Big[F(v) \sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} - F(v)^{\xi_1}\Big] dv.$$
Since $d_{\xi_2}^{u*}$ satisfies (40),
$$1 - F\Big(d_{\xi_1}^u + \frac{k}{c}\Big)^{\xi_1} = \sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} \Big[1 - F\Big(d_{\xi_1}^u + \frac{k}{c}\Big)\Big], \qquad 1 - F(v)^{\xi_1} < \sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} [1 - F(v)], \ \forall v < d_{\xi_1}^u + \frac{k}{c},$$
and
$$1 - F\Big(d_{\xi_2}^u + \frac{k}{c}\Big)^{\xi_2} = \Big[1 - F\Big(d_{\xi_2}^u + \frac{k}{c}\Big)\Big] \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} + 1 - F\Big(d_{\xi_2}^u + \frac{k}{c}\Big)^{\xi_1},$$
$$1 - F(v)^{\xi_2} > [1 - F(v)] \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} + 1 - F(v)^{\xi_1}, \ \forall v > d_{\xi_2}^u + \frac{k}{c},$$
we have
$$Z(\vec\varphi^*) - Z(\vec\varphi) > \big(d_{\xi_1}^u - d_{\xi_2}^{u*}\big) \Big(\sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} - 1\Big) - \big(d_{\xi_2}^{u*} - d_{\xi_2}^u\big) \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c}$$
$$+ \int_{d_{\xi_2}^u+\frac{k}{c}}^{d_{\xi_2}^{u*}+\frac{k}{c}} \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c}\,dv - \int_{d_{\xi_2}^{u*}+\frac{k}{c}}^{d_{\xi_1}^u+\frac{k}{c}} \Big(\sum_{i=1}^{\xi_1} \frac{\varphi_i}{1-c} - 1\Big) dv = 0,$$
a contradiction.
Case 2: $\varphi_i \ge 1-c$ for all $i \le \xi_1$. If $\varphi_1^* \ge 1-c$, then $d_{\xi_2}^{u*} = \bar d$. In this case, we have
$$Z(\vec\varphi^*) - Z(\vec\varphi) = \sum_{i=1}^{\xi_2} \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \Big(v_i - \frac{k}{c}\Big) F(v_i)^{\xi_2-1}\,dF(v_i) - \sum_{i=\xi_1+1}^{\xi_2} \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \Big(v_i - \frac{k}{c}\Big) \frac{\varphi_i}{1-c}\,dF(v_i) - \sum_{i=1}^{\xi_1} \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \Big(v_i - \frac{k}{c}\Big) F(v_i)^{\xi_1-1}\,dF(v_i)$$
$$= \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \Big(v - \frac{k}{c}\Big) \bigg[\xi_2 F(v)^{\xi_2-1} - \Big(\sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} + \xi_1 F(v)^{\xi_1-1}\Big)\bigg] dF(v)$$
$$= \bigg[\Big(v - \frac{k}{c}\Big) \Big(F(v)^{\xi_2} - F(v)^{\xi_1} - F(v) \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c}\Big)\bigg]_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} - \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \Big[F(v)^{\xi_2} - F(v) \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} - F(v)^{\xi_1}\Big] dv,$$
where the last equality follows from integration by parts. Since
$$1 - F\Big(d_{\xi_2}^u + \frac{k}{c}\Big)^{\xi_1} + \Big[1 - F\Big(d_{\xi_2}^u + \frac{k}{c}\Big)\Big] \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} = 1 - F\Big(d_{\xi_2}^u + \frac{k}{c}\Big)^{\xi_2},$$
$$[1 - F(v)] \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} + 1 - F(v)^{\xi_1} < 1 - F(v)^{\xi_2}, \ \forall v > d_{\xi_2}^u + \frac{k}{c},$$
we have
$$Z(\vec\varphi^*) - Z(\vec\varphi) > -\Big(\bar v - \frac{k}{c} - d_{\xi_2}^u\Big) \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c} + \int_{d_{\xi_2}^u+\frac{k}{c}}^{\bar v} \sum_{i=\xi_1+1}^{\xi_2} \frac{\varphi_i}{1-c}\,dv = 0,$$
a contradiction. If $\varphi_1^* < 1-c$, then let $d_1^{u*} = \dots = d_{\xi_2}^{u*}$ be defined by (40). Note that if $\xi_1 = 1$ and $\varphi_1/(1-c) = 1$, then the new mechanism using $\vec\varphi^*$ coincides with the old mechanism using $\vec\varphi$; in this case, we can redefine $d_1^u := d_{\xi_2}^u$ without changing the mechanism. Except for this case, we can show, by a similar argument to that in Case 1, that $Z(\vec\varphi^*) - Z(\vec\varphi) > 0$, a contradiction.
Hence, by induction, we have L = 1. For ease of notation, let h := ξ1 . Next, we show
that j = 0 or 1. Suppose, to the contradiction, that 0 < j < n. Suppose dl < 0, then the
principal’s objective function is strictly increasing in ϕi for i > j. Hence, in optimum, it
must be that dl ≥ 0. Let
n
1X
=
ϕj , for all i = 1, . . . , n.
n j=1
ϕ∗i
u∗
Case 1: ϕi < 1 − c for all i ≤ j. In this case, ϕ∗1 < 1 − c. Let du∗
1 = · · · = dn be defined
by
n X
n
k
ϕ∗i
k
u∗
u∗
1 − F dj +
=
1 − F dj +
.
c
1
−
c
c
i=1
(41)
u
l∗
Then du∗
j < dj . Let d be defined by
F
k
d +
c
l∗
n
=
n
X
i=1
ϕ∗i F
k
d +
c
l∗
.
l∗
u∗
Then dl < dl∗ . There are two subcases to consider: (i) dl∗ ≤ du∗
j and (ii) d > dj .
(i) Suppose dl∗ ≤ du∗
j . Then
−
−
Z(→
ϕ ∗ ) − Z(→
ϕ)
63
(42)
"Z
n
X
∗
#
∗
k Z du∗
Z v
j +c
ϕ
k
k
k
i
=
ϕ∗i dF (vi ) +
F (vi )n−1 dF (vi ) +
dF (vi )
vi −
vi −
vi −
k
k
u∗
c
c
c
1
−
c
l∗
v
d +c
dj + c
i=1
n Z dl + k n Z v
X
X
c
k
ϕi
k
−
vi −
ϕi dF (vi ) −
dF (vi )
vi −
c
c 1−c
l k
i=1 v
i=j+1 d + c
"Z u k #
Z v j
dj + c
X
ϕ
k
k
i
−
F (vi )j−1 dF (vi ) +
dF (vi )
vi −
vi −
k
k
u
c
c
1
−
c
l
d
+
d
+
j
i=1
c
c
X
n
k Z dl∗ + k Z duj + k Z du∗
n
+
j
c
c
c
k
k
k X ϕi
n−1
=
v−
v−
nF (v) dF (v) +
v−
dF (v)
ϕi dF (v) +
k
c i=1
c
c
1
−
c
dl + kc
dl∗ + kc
du∗
+
j
i=1
c
X
Z duj + k Z duj + k n
c
c
ϕi
k
k
dF (v) −
jF (v)j−1 dF (v)
v−
v−
−
k
k
c
1
−
c
c
l
l
d +c
d +c
i=j+1
" " #
X
j #
n
X
j
n
k
k
k
ϕ
k
ϕ
i
i
=duj F duj +
− F duj +
+ du∗
F du∗
− F du∗
j
j +
j +
c i=1 1 − c
c
c
c i=1 1 − c
" #
n
n
k X
k
l∗
l∗
l∗
+d F d +
ϕi − F d +
c i=1
c
"
X
X
j #
n
n
k
k
k
ϕ
i
ϕ i + F dl +
+ F dl +
+ dl −F dl +
c i=1
c i=j+1 1 − c
c
k
k
Z dl∗ + k
Z du∗
Z du∗
n
n
X
X
j +c
j +c
c
ϕi
n
dv
−
F (v)
ϕi dv −
F (v) dv −
F (v)
u+ k
1
−
c
l∗ + k
dl + kc
d
d
j
i=1
i=1
c
c
#
Z duj + k "
n
X
c
ϕi
+
F (v)
+ F (v)j dv,
k
1
−
c
l
d +c
i=j+1
dl + kc
where the second equality holds since
Pn
i=1
ϕ∗i =
Pn
i=1
ϕi and the last equality holds by
l∗
integration by parts. Since du∗
j satisfies (41), d satisfies (42),
1−F
duj
k
+
c
j
j
1 − F (v) <
n
X
k
ϕi
u
=
1 − F dj +
,
1−c
c
i=j+1
n
X
ϕi
[1 − F (v)] , ∀v < duj ,
1
−
c
i=j+1
and
j
X
n
n
X
k
ϕi
k
k
l
l
l
1−F d +
+
1−F d +
+
ϕi F d +
= 1,
c
1
−
c
c
c
i=j+1
i=1
64
1 − F (v)j +
n
n
X
X
ϕi
[1 − F (v)] +
ϕi F (v) < 1, ∀v > dl
1
−
c
i=1
i=j+1
F (v)j − F (v)n > [1 − F (v)]
n
X
ϕi
, ∀v > dl = duj+1 ,
1
−
c
i=j+1
we have
−
−
Z(→
ϕ ∗ ) − Z(→
ϕ)
!
!
j
n
n
X
X
X
ϕi
ϕi
ϕ
i
u∗
l
− 1 + dj 1 −
+d
1−c
1−c
1−c
i=1
i=j+1
i=1
>duj
Z
−
dl∗ + kc
dl + kc
n
X
ϕi
dv −
1−c
i=j+1
Z
k
du∗
j +c
dl∗ + kc
n
X
ϕi
dv −
1−c
i=j+1
Z
k
du∗
j +c
k
du
j+c
!
j
X
ϕi
− 1 dv = 0,
1−c
i=1
a contradiction.
l∗
u∗
(ii) Suppose dl∗ > du∗
j . In this case, redefine d = dj such that
n X
k
ϕ∗i
k
∗
l∗
l∗
ϕi F d +
+
1−F d +
= 1.
c
1
−
c
c
i=1
u
Then dl < dl∗ = du∗
j < dj . By a similar argument to that in case (ii), we can show that
−
−
Z(→
ϕ ∗ ) − Z(→
ϕ ) > 0, a contradiction.
u∗
l∗
Case 2: ϕi ≥ 1−c for all i ≤ j. If ϕ∗1 ≥ 1−c, then let du∗
1 = . . . dn = d and d be defined
−
−
by (42). By a similar argument to that in Case 1, we can show that Z(→
ϕ ∗ ) − Z(→
ϕ ) > 0, a
contradiction.
u∗
l∗
If ϕ∗1 < 1 − c, then let du∗
1 = . . . dn = d be defined by (41) and d be defined by (42).
−
Note that if j = 1 and ϕ1 /(1 − c) = 1, then the new mechanism using →
ϕ ∗ coincides with
−
the old mechanism using →
ϕ . In this case, we can redefine du1 := dl without changing the
mechanism. Except for this case, we can show, by a similar argument to that in Case 1, that
−
−
Z(→
ϕ ∗ ) − Z(→
ϕ ) > 0, a contradiction.
Hence, j = 0 or n.
65
Case 1: j = 0. In this case,
∗
P (v) =



ϕi
1−c
if v ≥ dl +
l

 ϕi
if v < d +
k
c
.
k
c
−
−
is an optimal mechanism given →
ϕ . Furthermore, →
ϕ and dl must satisfy
X
i
i
k
k
ϕj
l
l
1−F d +
≤1−F d +
, ∀i ≤ n,
c
1
−
c
c
j=1
X
X
n
n
k
k
ϕi
l
l
F d +
ϕi + 1 − F d +
= 1.
c i=1
c
1
−
c
i=1
(43)
(44)
In particular, (43) holds for i = n, which implies
n
n
X
1 − F dl + kc
ϕi
.
≤
l + k
1
−
c
1
−
F
d
c
i=1
Substituting this into (44) yields
F
k
d +
c
l
n−1
≤
n
X
i=1
n (1 − c) 1 − F dl + kc
.
ϕi ≤
1 − F dl + kc
By the proof of the second part in Theorem 3, j = 0 is optimal if v ∗∗ ≤ v \ in which case the
−
optimal dl = du1 = · · · = dun = v ∗∗ − kc . The set of optimal →
ϕ is given by Φ(dl , du1 , . . . , dun ).
−
Clearly, ϕ ∈ Φ if and only if →
ϕ satisfies conditions (43) and (44). Since v ∗∗ ≤ v \ implies that
1≤
1
1 − F (v ∗∗ )n
≤
,
1 − cF (v ∗∗ )
1 − F (v ∗∗ )
there exists 1 ≤ h ≤ n such that
1 − F (v ∗∗ )h−1
1
1 − F (v ∗∗ )h
≤
<
.
1 − F (v ∗∗ )
1 − cF (v ∗∗ )
1 − F (v ∗∗ )
66
Hence, for all i > h (43) holds if (44) holds. Given this, it is easy to see that the set of
−
optimal →
ϕ is the convex hull of
 
 ϕi = (1 − c)F (v ∗∗ )j−1 if j ≤ h − 1, ϕi = 1−c ∗∗ − Ph−1 (1 − c)F (v ∗∗ )j−1 ,
h
j=1
j
1−cF (v )
→
−
ϕ

 ϕij = 0 if j ≥ h + 1 and (i1 , . . . , in ) is a permutation of (1, . . . , n)



.


Case 2: j = n. In this case, let du := du1 = · · · = dun . In this case,
P ∗ (v) =






ϕi
1−c
F (v)n−1




 ϕi
if v ≥ du +
if dl +
k
c
k
c
< v < du +
if v ≤ dl +
k
c
.
k
c
−
Furthermore, →
ϕ , dl and du must satisfy
X
i
i
k
k
ϕj
u
u
1−F d +
≤1−F d +
, ∀i ≤ n − 1,
c
1−c
c
j=1
X
n
n
ϕi
k
k
u
u
,
=1−F d +
1−F d +
c
1−c
c
i=1
n
n
k X
k
l
l
F d +
ϕi = F d +
.
c i=1
c
(45)
(46)
(47)
(46) and (47) imply that dl and du satisfy
n
n−1
F dl + kc
1 − F du + kc
=
.
1−c
1 − F du + kc
By the proof of the third part in Theorem 3, j = n is optimal if v ∗∗ > v \ in which case the
optimal dl = v l (ϕ∗ ) −
k
c
−
and the optimal du1 = · · · = dun = v u (ϕ∗ ) − kc . The set of optimal →
ϕ
−
is given by Φ(dl , du1 , . . . , dun ). Clearly, ϕ ∈ Φ if and only if →
ϕ satisfies conditions (45)-(47).
It is easy to see that Φ is the convex hull of
→
−
ϕ ϕij = (1 − c)F (v u (ϕ∗ ))j−1 ∀j and (i1 , . . . , in ) is a permutation of (1, . . . , n) .
67
References
Ben-Porath, E., Dekel, E., Lipman, B. L., 2014. Optimal allocation with costly verification.
American Economic Review 104 (12), 3779–3813.
Border, K. C., 1991a. Implementation of Reduced Form Auctions: A Geometric Approach.
Econometrica 59 (4), 1175–1187.
Border, K. C., 1991b. Implementation of reduced form auctions: A geometric approach.
Econometrica, 1175–1187.
Border, K. C., Sobel, J., 1987. Samurai accountant: A theory of auditing and plunder. The
Review of Economic Studies 54 (4), 525–540.
Cook, W., Cunningham, W., Pulleyblank, W., Schrijver, A., 2011. Combinatorial Optimization. Wiley Series in Discrete Mathematics and Optimization. Wiley.
Frank, A., 2011. Connections in Combinatorial Optimization. Oxford Lecture Series in Mathematics and Its Applications. OUP Oxford.
Gale, D., Hellwig, M., 1985. Incentive-compatible debt contracts: The one-period problem.
The Review of Economic Studies 52 (4), 647–663.
Maskin, E. S., Riley, J. G., 1984. Optimal Auctions with Risk Averse Buyers. Econometrica
52 (6), 1473–1518.
Matthews, S. A., jun 1984. On the Implementability of Reduced Form Auctions. Econometrica.
Mierendorff, K., 2011. Asymmetric reduced form auctions. Economics Letters 110 (1), 41–44.
68
Mookherjee, D., Png, I., 1989. Optimal auditing, insurance, and redistribution. The Quarterly Journal of Economics 104 (2), 399–415.
Mylovanov, T., Zapechelnyuk, A., 2014. Mechanism design with ex-post verification and
limited punishments.
Pai, M. M., Vohra, R. V., mar 2014. Optimal auctions with financially constrained buyers.
Journal of Economic Theory 150, 383–425.
Persico, N., 2000. Information acquisition in auctions. Econometrica 68 (1), 135–148.
Townsend, R. M., 1979. Optimal contracts and competitive markets with costly state verification. Journal of Economic Theory 21 (2), 265 – 293.
69
Download