

Preliminary Draft – April 29, 2009

Sequential versus Simultaneous Law Enforcement

Shmuel Leshem and Avraham Tabbach

ABSTRACT

This paper compares Stackelberg and Nash solutions of an inspection game in which the enforcement technology exhibits diminishing marginal returns. We show that (i) the level of compliance and the enforcer's payoff are (weakly) higher if either the enforcer or the offender is the Stackelberg leader than in a Nash game, and (ii) the offender's payoff is (weakly) higher if the offender is the Stackelberg leader than in a Nash game. Moreover, depending on the enforcement technology and the ratio between the sanction for and gains from non-compliance, the enforcer's payoff may be higher if the offender, rather than the enforcer, is the Stackelberg leader. This suggests that reactive enforcement might be socially preferable to proactive enforcement.

University of Southern California School of Law.

Tel-Aviv University School of Law.

1. Introduction

Economic models of crime usually consider a sequential game in which an enforcer—the state or the police—acts as a Stackelberg leader by committing to an enforcement strategy. A potential offender, responding to the enforcer's choice of enforcement level, then decides whether or not to commit an offense. A related strand of literature has examined an inspection game in which the enforcer and the offender choose their actions simultaneously so that their equilibrium strategies constitute best responses. This paper develops a simple model that introduces the possibility that the offender acts as a Stackelberg leader. Our main result is that the enforcer's payoff might be higher if the offender, rather than the enforcer, is the Stackelberg leader. This suggests that reactive enforcement might be superior to proactive enforcement.

We consider a modified inspection game between an enforcer and an offender. The enforcer must choose enforcement expenditures; the offender must choose a probability of compliance. Non-compliance yields gains to the offender, but causes harm to the enforcer. Detection of non-compliance eliminates the offender's gain, prevents harm, and subjects the offender to a sanction. In contrast to standard inspection games, we assume that the enforcer's strategy is continuous and that the probability of detecting non-compliance is increasing at a decreasing rate in the enforcement level. A salient feature of this setting is that the enforcer's optimal enforcement expenditures are increasing in the offender's probability of non-compliance.

Consider the case in which the enforcer moves first; that is, the enforcer pre-commits to an observable level of enforcement. The enforcer's problem is to choose enforcement expenditures to maximize its payoff given that the offender responds optimally to the enforcer's choice. We show that, given that the offender complies with positive probability, the offender's equilibrium probability of non-compliance is lower than in a Nash game. The enforcer's equilibrium payoff, in turn, is higher than its Nash equilibrium payoff. Moreover, since the offender's probability of non-compliance, but not the enforcer's level of enforcement, is lower than in a Nash game, the offender's equilibrium payoff is lower than its Nash equilibrium payoff.

Next, consider the case in which the offender moves first; that is, the offender pre-commits to an observable compliance strategy. For example, suppose the offender can choose an observable frequency of non-compliance in a certain period. The offender's problem is to choose a probability of non-compliance to maximize its payoff given that the enforcer responds optimally to the offender's choice. We show that, given that the offender complies with positive probability, the offender's probability of non-compliance is lower than in a Nash game. To see why, note that the offender's Nash equilibrium strategy is optimal given the enforcer's Nash equilibrium strategy; that is, without incorporating the effect of the offender's choice on the enforcer's enforcement expenditures. Since the enforcer's best response is increasing in the offender's probability of non-compliance, the offender's optimal first-mover strategy is to comply with a higher probability relative to the Nash equilibrium strategy.


Since the offender’s probability of non-compliance is lower in an offender-leadership game than in a Nash game, the equilibrium enforcement expenditures are lower in an offender-leadership game than in a Nash game (since the enforcer’s optimal enforcement expenditures are increasing in the offender’s non-compliance strategy). Moreover, the offender’s lower probability of non-compliance in an offender-leadership game implies that the enforcer’s equilibrium payoff is higher than its Nash equilibrium payoff. Thus, both the offender’s and the enforcer’s equilibrium payoffs are higher if the offender moves first than in a Nash game.

Whether the enforcer’s equilibrium payoff is higher in an offender- versus enforcerleadership game depends on the relative probability of non-compliance in the two cases.

Specifically, if the offender’s probability of non-compliance is lower in an offenderleadership game, the enforcer’s equilibrium payoff is higher if the offender moves first.

But even if the offender’s probability of non-compliance is higher in an offenderleadership game, the enforcer’s equilibrium payoff might still be higher if the offender moves first. The reason is that, notwithstanding the higher probability of non-compliance in an offender-leadership game, the enforcer’s equilibrium enforcement expenditures are lower in an offender- versus enforcer-leadership game. The enforcer’s saving in enforcement expenditures in an offender-leadership game might thus outweigh the greater harm from non-compliance in an enforcer-leadership game.

The paper proceeds as follows. Section 2 reviews relevant literature. Section 3 sets up the model. Section 4 compares the equilibrium outcomes in simultaneous and sequential games. Section 5 concludes.

2. Related Literature

The literature on the economics of crime, beginning with Becker's (1968) seminal article, has implicitly examined a sequential game between an enforcer and an offender in which the enforcer pre-commits to a continuous enforcement strategy. Underlying this literature is the assumption that non-compliance always results in harm; the level of enforcement merely determines the probability with which non-compliance is detected and punished ex post. The enforcer's objective accordingly is to minimize the sum of expected harm and enforcement costs by choosing an optimal level of deterrence; that is, by directly affecting the offender's non-compliance strategy (see, e.g., Garoupa, 1997; Polinsky and Shavell, 2007). Since non-compliance always results in harm, the enforcer has incentives to detect non-compliance if and only if the enforcer moves first. In a simultaneous or an offender-leadership game, by contrast, the enforcer's best response is to not enforce the law, as the enforcer cannot affect the offender's probability of non-compliance.

A related strand of literature has considered an inspection game in which the enforcer and the offender act simultaneously (see, e.g., Graetz et al., 1986; Tsebelis, 1990). In a simultaneous game, the enforcer's and the offender's equilibrium strategies constitute best responses. Inspection games commonly share the implicit assumption that enforcement is designed to prevent harm from non-compliance, rather than merely to detect and punish non-compliance ex post. The enforcer's objective in inspection games is thus to minimize the sum of the expected harm from non-compliance and the enforcement expenditures by choosing an optimal level of harm prevention, taking the level of non-compliance as given.

This paper studies a synthesized model of law enforcement. As in deterrence models, we assume that the enforcement technology exhibits diminishing marginal returns; i.e., the probability of detecting non-compliance is increasing at a decreasing rate in the enforcement expenditures. As in prevention models, we assume that the enforcement technology is preventive in that detection of non-compliance prevents harm, rather than merely punishes non-compliance after harm has occurred.

In this synthesized model, the players' objectives depend on the sequence of moves in the game. In an enforcer-leadership game, the enforcer's objective is to minimize the sum of the expected harm from non-compliance and the enforcement expenditures, both by directly affecting the level of deterrence (that is, the offender's non-compliance strategy) and by preventing harm resulting from non-compliance. The offender's strategy, in turn, constitutes a best response to the enforcer's strategy. In a simultaneous game, the enforcer's objective is to minimize the sum of the expected harm from non-compliance and the enforcement costs, given the offender's non-compliance strategy. The offender's objective similarly is to maximize its net gain from non-compliance, given the enforcer's enforcement strategy. Finally, in an offender-leadership game, the offender's objective is to maximize its net gains from non-compliance by choosing an optimal non-compliance strategy that takes into account the effect of the offender's choice on the enforcer's enforcement expenditures. The enforcer's strategy, in turn, constitutes a best response to the offender's strategy.

This paper is also related to the literature on strategic commitment in contests. Dixit (1987) shows that a favorite contestant in a Nash game (a contestant who is expected to win with probability greater than one half) will over-commit resources as a Stackelberg leader relative to the Nash equilibrium strategy. Consequently, if the favorite contestant moves first, its equilibrium payoff is higher and the underdog's equilibrium payoff is lower relative to their Nash equilibrium payoffs. Baik and Shogren (1992) extend Dixit's analysis by endogenizing the sequence of moves. They show that both the favorite's and the underdog's payoffs are higher if the underdog, rather than the favorite, moves first. Our result is reminiscent of Baik and Shogren's in that here, too, both players' equilibrium payoffs might be higher if one of the players is the Stackelberg leader. In contrast to Baik and Shogren's result, however, here the Pareto-optimal sequence of moves depends on the parameters of the game. In particular, here the enforcer always prefers to move first if the costs of deterring non-compliance are sufficiently low.

3. Model

Consider two strategic, risk-neutral players: an offender and an enforcer. The offender's strategy is a probability of non-compliance, q ∈ [0, 1]. A possible interpretation of the offender's strategy is that it represents the frequency with which the offender complies with the law in some time interval, or, alternatively, the offender's degree of non-compliance at one point in time. The enforcer's strategy is a level of monitoring expenditures, k ∈ [0, ∞).

The enforcer’s probability of detecting non-compliance is given by p ( k )

0 , p ' ( k )

0 , p '' ( k )

0 , and p ' '' ( k )

0 p ( k )

[ 0 , 1 ) , where

; that is, (i) the probability of detecting non-compliance is increasing at a decreasing rate in the monitoring expenditures, and (ii) the marginal probability of detecting non-compliance is decreasing at an increasing rate in the monitoring expenditures. The assumption that the detection probability is concave in the enforcement expenditures is common in deterrence models of law enforcement. In contrast to these models, however, we assume that the enforcement technology is preventive in that detection of non-compliance results in elimination of the offender’s gains and prevention of harm. To ensure an interior solution for the enforcer’s choice of monitoring expenditures, we further assume that p

( 0 )

1

H

and k lim

  p

( k )

0 .

The players’ payoffs are as follows. If non-compliance is undetected, the offender obtains positive gains of G and the enforcer obtains

( H

 k ) , where H represents the harm from non-compliance and k the enforcement expenditures. If non-compliance is detected, the offender pays a sanction S and the enforcer obtains – k . We thus assume that the sanction for non-compliance is costless and is not paid to the enforcer. If the offender complies, the offender obtains 0 and the enforcer k

. The enforcer’s and the offender’s payoff schedules reflect the notion that the enforcer would rather not monitor if the offender always complies, but will spend a positive amount on monitoring if the offender does not comply with sufficiently high probability.

We proceed by comparing three game configurations: a simultaneous game (SIM), a sequential game with enforcer-leadership (SEQe), and a sequential game with offender-leadership (SEQo). In SIM, both the enforcer and the offender choose their strategies independently and simultaneously. The solution concept is Nash equilibrium. In SEQe, the enforcer commits to observable monitoring expenditures. The offender's compliance strategy, in turn, constitutes a best response to the enforcer's monitoring choice. In SEQo, the offender commits to an observable non-compliance strategy. The enforcer's monitoring strategy, in turn, constitutes a best response to the offender's choice of compliance. We assume that the first mover can commit to carry out his first-stage strategy irrespective of the second mover's strategy, but that the second mover's strategy must be a best response to the first mover's strategy. The solution concept in a sequential game is thus subgame perfect Nash equilibrium.

4. Equilibrium under Different Move-Sequences

4.1 Simultaneous-move game

Consider a game in which the enforcer and the offender choose their strategies simultaneously. We begin by constructing the players’ best response functions. Consider first the offender’s best response as a function of the enforcer’s choice of monitoring expenditure, k . The offender chooses q to solve the following problem:

max_{q ∈ [0, 1]} q[(1 - p(k))G - p(k)S].   (1)

Differentiating the objective function in (1) with respect to q gives (1 - p(k))G - p(k)S = G - p(k)(G + S). The offender's best response is thus q = 1 if p(k) < G/(G + S) and q = 0 if p(k) > G/(G + S); the offender is indifferent among all q ∈ [0, 1] if p(k) = G/(G + S). Letting k̃ be such that p(k̃) = G/(G + S), the offender's best response correspondence is

q^br(k) = { 1 if k < k̃;  [0, 1] if k = k̃;  0 if k > k̃ }.   (2)

Next, consider the enforcer’s best response as a function of the offender’s choice of noncompliance, q . The enforcer chooses k

[ 0 ,

) to solve the following problem: min k

 q ( 1

 p ( k )) H

 k

.

(3)

Differentiating the objective function in (3) with respect to k gives

Letting q be such that q

1 p ' ( 0 ) H

, the enforcer’s best response function is:

( qHp ' ( k )

1 ).

k br

( q )



 k k

0

:

: p ' ( k p ' ( k )

)

1 qH

1

H if if if

0

 q

 q

1

 q

 q q

1 .

(4)

If

0

0

 q

 q

 q q

, then the enforcer’s best response is to not monitor. This follows because for

, the enforcer’s marginal benefit from monitoring, qHp ' ( 0 ) , is lower than the marginal cost, 1. If 1

 q

 q

, then the enforcer’s best response equates the marginal benefits and costs from monitoring. In particular, for 1

 q

 q

, the enforcer’s optimal monitoring expenditures are increasing in q .

1

This is because the higher is q , the greater the enforcer’s benefit from detecting non-compliance ( qH ). If q

1

, the enforcer’s optimal monitoring expenditures are such that p ' ( k )

1

H

. We shall denote k br

( 1 ) by k

ˆ .

Lemma 1 presents the equilibrium strategies (marked with a star) in a simultaneous game.

¹ To see that the enforcer's best response is increasing in q, note that dk^br(q)/dq = -1/(p''(k)q²H) > 0. If p''' > 0, then k^br(q) is convex.
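Under the same illustrative technology p(k) = 1 - exp(-A·k) (our assumption, not the paper's), both best responses have closed forms: k̃ solves p(k̃) = G/(G + S), and for q̄ < q < 1 the first-order condition p'(k) = 1/(qH) gives k = ln(AqH)/A. A minimal sketch:

```python
import math

A = 1.0  # assumed: p(k) = 1 - exp(-A*k), so p'(k) = A*exp(-A*k)

def q_br(k, G, S):
    """Offender's best response (2): comply iff p(k) exceeds G/(G+S)."""
    k_tilde = -math.log(S / (G + S)) / A   # solves p(k_tilde) = G/(G+S)
    if k < k_tilde:
        return 1.0
    if k > k_tilde:
        return 0.0
    return None  # indifferent: any q in [0, 1]

def k_br(q, H):
    """Enforcer's best response (4): p'(k) = 1/(qH) when interior."""
    q_bar = 1.0 / (A * H)                  # below q_bar, monitoring never pays
    if q <= q_bar:
        return 0.0
    return math.log(A * q * H) / A         # solves A*exp(-A*k) = 1/(q*H)

# The enforcer's best response is increasing in q (footnote 1):
ks = [k_br(q, H=10.0) for q in (0.2, 0.5, 1.0)]
assert ks[0] < ks[1] < ks[2]
```

The monotonicity check at the end mirrors footnote 1's comparative static dk^br/dq > 0.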


Lemma 1 (equilibrium strategies in SIM)

Let k̃ be such that p(k̃) = G/(G + S) and k̂ be such that p'(k̂) = 1/H. Then, if the enforcer and the offender move simultaneously, the following strategy profiles are the unique Nash equilibria:

(a) If k̃ > k̂, then k* = k̂ and q* = 1 (no-compliance equilibrium).

(b) If k̃ ≤ k̂, then k* = k̃ and q* = 1/(p'(k*)H) (partial-compliance equilibrium). ║

The equilibrium outcome—no-compliance or partial-compliance equilibrium—depends on the ratio G/(G + S) ∈ (0, 1) and on H. Specifically, given H, there is a cutoff value of G/(G + S) such that if G/(G + S) is above (below) this cutoff, the enforcer's and offender's best response functions intersect at q = 1 (q < 1).² Similarly, given G/(G + S), there is a cutoff value of H such that if H is below (above) this cutoff, the offender's and enforcer's best response functions intersect at q = 1 (q < 1).

Note that there does not exist an equilibrium in which k* = 0, since the offender's best response to k* = 0 is q* = 1, which in turn induces the enforcer to deviate to k* > 0 (by the assumption that p'(0) > 1/H). Likewise, there does not exist an equilibrium in which q* = 0, since the enforcer's best response to q* = 0 is k* = 0, which in turn induces the offender to deviate to q* = 1.
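Lemma 1's case split can be checked numerically. The sketch below again assumes the illustrative technology p(k) = 1 - exp(-A·k) (not the paper's); it computes k̃ and k̂ and returns the Nash equilibrium (k*, q*) of Lemma 1:

```python
import math

A = 1.0  # assumed technology: p(k) = 1 - exp(-A*k)

def sim_equilibrium(G, S, H):
    """Nash equilibrium of the simultaneous game (Lemma 1)."""
    k_tilde = -math.log(S / (G + S)) / A   # p(k_tilde) = G/(G+S)
    k_hat = math.log(A * H) / A            # p'(k_hat) = 1/H (needs A*H > 1)
    if k_tilde > k_hat:                    # (a) no-compliance equilibrium
        return k_hat, 1.0
    # (b) partial compliance: k* = k_tilde and q* = 1/(p'(k*)H)
    q_star = 1.0 / (A * math.exp(-A * k_tilde) * H)
    return k_tilde, q_star

# Low G/(G+S): partial compliance with q* < 1; high G/(G+S): q* = 1.
print(sim_equilibrium(G=1.0, S=9.0, H=10.0))
print(sim_equilibrium(G=99.0, S=1.0, H=10.0))
```

For this technology the partial-compliance probability reduces to q* = (G + S)/(A·H·S), so the first call yields q* = 10/90 ≈ 0.11.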

Figure 1 presents the equilibrium outcomes in SIM:

² Note that, as G/(G + S) → 0, q* → 1/(p'(0)H) and k* → 0.


[Figure 1 about here]

Figure 1: Best response curves in a simultaneous game

If the enforcer's and the offender's best response curves intersect to the left of (at) k̃, the resulting equilibrium is a no-compliance (partial-compliance) equilibrium.

Remark: In a partial-compliance equilibrium, q* is increasing in G/(G + S) and decreasing in H (dk̃/dS < 0, dq*/dS < 0; dk̃/dG > 0, dq*/dG > 0; dk̃/dH = 0, dq*/dH < 0), whereas k* is increasing in G/(G + S) and is invariant to H.³ This can easily be seen from Figure 1: an increase in S or a decrease in G shifts the offender's best response curve to the left, resulting in a new equilibrium where both q* and k* are lower as compared to the initial equilibrium. An increase in H moves the enforcer's best response curve down, resulting in a new equilibrium where q* is lower, but k* remains unchanged, as compared to the initial equilibrium.

4.2 Enforcer-Leadership Game

We now turn to the case where the enforcer acts as a Stackelberg leader. In an enforcer-leadership game, the enforcer can pre-commit to enforcement expenditures irrespective of the offender's choice of probability of non-compliance. The offender's choice of non-compliance, in turn, constitutes a best response to the enforcer's strategy. The enforcer thus chooses k ∈ [0, ∞) to solve the following problem:

min_{k ∈ [0, ∞)} q^br(k)(1 - p(k))H + k,   (5)

where

q^br(k) = { 1 if 0 ≤ k < k̃;  0 if k ≥ k̃ }.   (6)

³ More precisely, the effect of a marginal increase in S or G on q* depends on the elasticity of the marginal detection probability with respect to the detection probability ((dp'/dp)(p/p') = p''p/(p')²). The effect of a marginal increase in S or G on k* depends on the elasticity of the detection probability with respect to the monitoring expenditures ((dp/dk)(k/p) = p'k/p).

(6)

(6) reflects the notion that if the offender is indifferent between compliance and noncompliance (i.e., if k

~ k ) , the offender always complies. The reason is that, since the offender strictly prefers to comply if equilibrium in which k k where

~ k

 k '

 k enforcer spends k

 k

~ k

 k

~

, there does not exist subgame perfect as the enforcer can increase its payoff by deviating to ' and the offender always complies.

,

. The only subgame perfect equilibrium, therefore, is one in which the

~

Incorporating the offender’s best response function in (6) into the enforcer’s objective function in (5), the enforcer’s problem can be rewritten as min k

 k

( 1

 p ( k )

H

 k if if

0

 k

 k

 k

~

~ k .

(7)

The enforcer’s objective function thus exhibits a downward jump at

~ k .

4

Clearly, the enforcer never chooses k > k̃. Differentiating the enforcer's objective function for 0 ≤ k < k̃ with respect to k and equating to zero gives p'(k)H = 1. Letting k̂ be such that p'(k̂)H = 1,⁵ the enforcer's optimal monitoring strategy, k*, is given by

k* = { k̂ if k̃ > [1 - p(k̂)]H + k̂;  k̃ if k̃ ≤ [1 - p(k̂)]H + k̂ }.   (8)

If k̃ > [1 - p(k̂)]H + k̂, then the enforcer's expected payoff if the offender never complies, the enforcer spends k̂, and the expected harm is [1 - p(k̂)]H is greater than the enforcer's expected payoff if the offender always complies and the enforcer spends k̃. If, by contrast, k̃ ≤ [1 - p(k̂)]H + k̂, then the enforcer's expected payoff if the offender always complies and the enforcer spends k̃ is greater than the enforcer's expected payoff if the offender never complies, the enforcer spends k̂, and the expected harm is [1 - p(k̂)]H.

⁴ More precisely, note that lim_{k→k̃} (1 - p(k))H + k = [S/(G + S)]H + k̃ > k̃.

⁵ Note that k̂ was previously defined in Section 4.1.

We summarize these results in Lemma 2.

Lemma 2 (equilibrium strategies in SEQe)

Let k̃ be such that p(k̃) = G/(G + S) and k̂ be such that p'(k̂) = 1/H. Then the following strategy profiles constitute the unique subgame perfect equilibria:

(a) If k̃ > k̂ + [1 - p(k̂)]H, then k* = k̂ and q* = 1 (no-compliance equilibrium).

(b) If k̃ ≤ k̂ + [1 - p(k̂)]H, then k* = k̃ and q* = 0 (full-compliance equilibrium). ║

Observe that in the no-compliance equilibrium, the role of enforcement is merely to decrease the harm from non-compliance. Since the enforcer does not aim at deterring non-compliance, the enforcement strategy is strictly preventive. In the full-compliance equilibrium, by contrast, the role of enforcement is to induce compliance. Furthermore, given that the offender always complies, the enforcer does not aim at preventing harm ex post. The enforcement strategy is thus strictly deterrent.

The equilibrium outcome—no-compliance or full-compliance equilibrium—depends on the ratio G/(G + S) ∈ (0, 1) and on H. Specifically, given H, there is a cutoff value of G/(G + S) such that if G/(G + S) is above (below) this cutoff, then q* = 1 (q* = 0).⁶ Similarly, given G/(G + S), there is a cutoff value of H such that if H is below (above) this cutoff, then q* = 1 (q* = 0).

The explanation of Lemma 2 is as follows. If G/(G + S) is sufficiently small (large) (for example, if S is sufficiently large (small)), then the enforcer's cost of deterring non-compliance is relatively low (high). In particular, as G/(G + S) → 0, the enforcer's cost of achieving deterrence approaches zero; by contrast, as G/(G + S) → 1, the enforcer's cost of achieving deterrence approaches infinity. Thus, the enforcer fully deters non-compliance ex ante if G/(G + S) is sufficiently low and otherwise prevents harm ex post. Likewise, if H is sufficiently small (large), the enforcer's benefit from inducing compliance is relatively small (large). Thus, the enforcer fully deters non-compliance ex ante if H is sufficiently high and otherwise prevents harm ex post.
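Lemma 2's comparison between k̃ and k̂ + [1 - p(k̂)]H can be sketched in the same way; the exponential technology p(k) = 1 - exp(-A·k) remains an illustrative assumption of ours:

```python
import math

A = 1.0  # assumed technology: p(k) = 1 - exp(-A*k)

def seqe_equilibrium(G, S, H):
    """Subgame perfect equilibrium when the enforcer leads (Lemma 2)."""
    k_tilde = -math.log(S / (G + S)) / A        # deterrence threshold
    k_hat = math.log(A * H) / A                 # p'(k_hat) = 1/H
    prevention_cost = math.exp(-A * k_hat) * H + k_hat   # [1-p(k_hat)]H + k_hat
    if k_tilde > prevention_cost:               # (a) prevent harm ex post
        return k_hat, 1.0
    return k_tilde, 0.0                         # (b) deter ex ante

# A small G/(G+S) makes deterrence cheap: full compliance at k* = k_tilde.
k, q = seqe_equilibrium(G=1.0, S=9.0, H=10.0)
assert q == 0.0
# A large G/(G+S) makes deterrence too costly: no compliance at k* = k_hat.
k, q = seqe_equilibrium(G=999.0, S=1.0, H=10.0)
assert q == 1.0
```

The two asserted cases correspond to the cutoff in G/(G + S) described above.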

Figure 2 presents the enforcer’s optimal choice of monitoring expenditures:

⁶ Note that, as G/(G + S) → 0, q* → 0 and k* → 0.


[Figure 2 about here]

Figure 2: The enforcer's payoff function in an enforcer-leadership game

The enforcer's strategy in an enforcer-leadership game depends on the relation between the enforcer's payoff when the offender never complies and when the offender always complies.

We turn now to compare the equilibrium outcomes in SIM and SEQe.

Proposition 1 (SIM versus SEQe)

(a) If q* = 1 in SEQe, then q* = 1 in SIM; if q* = 1 in SIM, then either q* = 1 or q* = 0 in SEQe.

(b) If q* = 0 in SEQe, then

(i) k* in SEQe is either higher than or equal to k* in SIM;

(ii) q* is lower in SEQe than in SIM; and

(iii) the enforcer's equilibrium payoff is higher and the offender's equilibrium payoff is lower in SEQe than in SIM. ║

Recall that the enforcer in SEQe can directly affect the offender's non-compliance strategy. Thus the enforcer in SEQe either fully deters non-compliance ex ante or merely prevents harm ex post. The enforcer in SIM, by contrast, takes the offender's non-compliance strategy as given, thereby aiming solely to prevent harm ex post. The enforcer's strategy in SIM therefore either induces a positive probability of compliance ('semi-deterrent' enforcement) or full non-compliance.


When G/(G + S) is sufficiently large or H is sufficiently small, the enforcer's enforcement strategy in both SIM and SEQe is preventive. When G/(G + S) or H is intermediate, the enforcer over-commits enforcement expenditures in SEQe relative to SIM. Accordingly, the enforcement strategy is deterrent in SEQe and preventive in SIM. The enforcer's equilibrium payoff is thus higher in SEQe, whereas the offender's equilibrium payoff is higher in SIM. When G/(G + S) is sufficiently small or H is sufficiently large, the enforcer's monitoring expenditures are identical in SEQe and SIM. The enforcement strategy is thus deterrent in SEQe and semi-deterrent in SIM. Consequently, the enforcer's equilibrium payoff is higher in SEQe than in SIM, whereas the offender's equilibrium payoff is higher in SIM than in SEQe.

4.3 Offender-Leadership Game

Consider now the case where the offender acts as a Stackelberg leader. In an offender-leadership game, we assume, the offender can pre-commit to a compliance strategy irrespective of the enforcer's choice of monitoring expenditures. For example, suppose the offender can choose an observable frequency of non-compliance in a certain period. The higher the frequency of non-compliance, q, the greater the harm and gain from non-compliance as well as the sanction for non-compliance. Having observed the offender's choice, the enforcer's strategy, in turn, constitutes a best response to the offender's strategy. The offender thus chooses q ∈ [0, 1] to solve the following problem:

max_{q ∈ [0, 1]} q[(1 - p(k^br(q)))G - p(k^br(q))S],   (9)

where

k^br(q) = { 0 if q ≤ q̄;  {k : p'(k) = 1/(qH)} if q̄ < q < 1;  {k : p'(k) = 1/H} if q = 1 }.   (10)

Recall that k^br(q) is the enforcer's best response function (see Section 4.1) and that q̄ = 1/(p'(0)H) is the greatest q for which no enforcement (k* = 0) is optimal.

Incorporating the enforcer’s best response function in (10) into the offender’s objective function in (9), the offender’s problem can be rewritten as follows max q

[ 0 , 1 ]



 qG q

G

[ G

 p p ( k br

( k

ˆ

)( G

(

 q ))(

S )

G

S ) ] if 0

 q

 q if 1

 q if

 q q

1 .

(11)


Clearly, the offender never chooses q < q̄. Differentiating the offender's objective function for q̄ < q < 1 with respect to q gives

G - p(k^br(q))(G + S) - qp'(k^br(q))(dk^br(q)/dq)(G + S).   (12)

The first two terms represent the offender's marginal benefit from increasing q, resulting from the greater probability with which the offender obtains net gains from non-compliance. Recall that, in a partial-compliance Nash equilibrium, q* is such that the sum of these terms is zero. The last term represents the offender's marginal cost of increasing q. This marginal cost stems from the higher probability with which non-compliance is detected as q increases.

* is such that the sum of these terms is zero. The last term represents the offender’s marginal cost of increasing q . This marginal cost stems from the higher probability with which noncompliance is detected as q increases.

Observing that dk^br(q)/dq = -p'(k)/(qp''(k)) (by implicit differentiation of the enforcer's best response function), the offender's marginal cost in (12) becomes -[(p'(k))²/p''(k)](G + S). Expression (12) can thus be rewritten as

G - (G + S)[p(k) - (p'(k))²/p''(k)].   (13)

If (13) is negative for all k ∈ [0, k̂), then q* = q̄; if (13) is positive for all k ∈ [0, k̂], then q* = 1; and if there exists k ∈ [0, k̂) such that (13) equals zero, then q* ∈ [q̄, 1). To simplify the analysis, we will assume that p'''p'/(p'')² < 1,⁷ which ensures a unique solution to the offender's problem.
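Condition (13) can be evaluated directly. For our running illustrative technology p(k) = 1 - exp(-A·k) (an assumption, not the paper's), p(k) - (p'(k))²/p''(k) = 1 for every k, so (13) equals G - (G + S) = -S < 0: the leader's marginal net benefit is negative everywhere above q̄, and the offender commits to q* = q̄ (the no-enforcement case, where -(p'(0))²/p''(0) = 1). A grid search over the leader's objective (11) confirms this:

```python
import math

A, G, S, H = 1.0, 1.0, 9.0, 10.0   # assumed illustrative parameters

def k_br(q):
    """Enforcer's best response (10) under p(k) = 1 - exp(-A*k)."""
    q_bar = 1.0 / (A * H)
    return 0.0 if q <= q_bar else math.log(A * q * H) / A

def leader_payoff(q):
    """Offender's objective (9): the enforcer replies with k_br(q)."""
    pk = 1.0 - math.exp(-A * k_br(q))
    return q * ((1 - pk) * G - pk * S)

grid = [i / 1000 for i in range(1001)]
q_star = max(grid, key=leader_payoff)
q_bar = 1.0 / (A * H)
# With this technology (13) = -S < 0, so the leader stops exactly at q_bar:
assert abs(q_star - q_bar) < 1e-9
```

A technology with p(0) - (p'(0))²/p''(0) < G/(G + S) would instead produce an interior q* ∈ (q̄, 1), the partial-compliance case of Lemma 3 below.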

Lemma 3 presents the equilibrium strategies when the offender moves first.

Lemma 3 (equilibrium strategies in SEQo)

Let

k⁺ = { 0 if p(0) - (p'(0))²/p''(0) ≥ G/(G + S);  {k > 0 : p(k) - (p'(k))²/p''(k) = G/(G + S)} if p(0) - (p'(0))²/p''(0) < G/(G + S) },   (14)

and let k̂ be such that p'(k̂) = 1/H. Then, if the offender moves first, the following strategy profiles constitute the unique subgame perfect equilibria:

(a) If k⁺ = 0, then k* = 0 and q* = q̄ (no-enforcement equilibrium).

⁷ p'''p'/(p'')² is the elasticity of p'' with respect to p'; that is, (dp''/dp')(p'/p''). Specifically, if p'''p'/(p'')² < 1, the offender's marginal net benefit from increasing the probability of non-compliance is decreasing.


If, in addition, -(p'(0))²/p''(0) ≥ 1, then k⁺ = 0 for all G, S, and H.

(b) If 0 < k⁺ < k̂, then k* = k⁺ and q* = 1/(p'(k⁺)H) ∈ (q̄, 1) (partial-compliance equilibrium).

(c) If k⁺ ≥ k̂, then k* = k̂ and q* = 1 (no-compliance equilibrium). ║

If -(p'(0))²/p''(0) ≥ 1, then the offender commits to q* = q̄ for all S, G, and H. This is because the offender's marginal benefit from increasing q above q̄ is lower than the marginal cost. Given the offender's strategy, the enforcer's best response is to not monitor. If -(p'(0))²/p''(0) < 1, by contrast, the equilibrium outcome depends on G/(G + S) and H. Specifically, for a sufficiently small G/(G + S) or a sufficiently large H, the offender commits to q* = q̄ and the enforcer responds with k* = 0. For intermediate values of G/(G + S) or H, the offender commits to q* ∈ (q̄, 1) and the enforcer responds with k* = k⁺. Finally, for a sufficiently large G/(G + S) or a sufficiently small H, the offender commits to q* = 1 and the enforcer responds with k* = k̂.

H , the offender commits to

Figure 3 presents the equilibrium outcome in SEQo.

[Figure 3 about here]

Figure 3: The offender's payoff function in an offender-leadership game

The offender's strategy in an offender-leadership game depends on the relation between the offender's payoff when the offender does not comply with probability q̄ and when the offender does not comply with probability greater than q̄.


We turn now to compare the equilibrium outcome in SIM and SEQo.

Proposition 2 (SIM versus SEQo)

(a) If q* = 1 in SEQo, then q* = 1 in SIM; if q* = 1 in SIM, then either q* = 1 or q* ∈ [q̄, 1) in SEQo.

(b) If q* ∈ [q̄, 1) in SEQo, then:

(i) both q* and k* are lower in SEQo than in SIM; and

(ii) both the enforcer's and the offender's equilibrium payoffs are higher in SEQo than in SIM. ║

That the offender’s equilibrium payoff in SEQo is not lower than in SIM is straightforward: the offender can always obtain his Nash equilibrium payoff in SEQo by committing to his Nash equilibrium strategy. That the offender’s equilibrium payoff is higher in SEQo than in SIM stems from the fact that the offender’s Nash equilibrium strategy does not take into account the fact that the enforcer’s best response is increasing in q . Consequently, the offender’s marginal benefit from increasing q * in SIM is lower than the marginal cost (see (13)). The offender can thus increase its payoff in SEQo by committing to a lower q .

To see why the enforcer’s equilibrium payoff is higher in SEQo than in SIM, note that the enforcer’s equilibrium payoff is decreasing in q

. Formally, recall that the enforcer’s payoff as a function of q is

 q ( 1

 p ( k br

( q )) H

 k br

( q ) , where k br

( q ) is the enforcer’s best response function (see (11)).

Differentiating the enforcer’s payoff function with respect to q (by the envelope theorem) gives

( 1

 p ( k br

( q )) H

0 .

The enforcer’s payoff is thus increasing as q decreases.

Intuitively, if $q$ decreases and $k$ remains unchanged, the enforcer's payoff increases because the detection probability remains unchanged while the probability of non-compliance is lower. It follows that, if the enforcer then re-optimizes and chooses lower monitoring expenditures than its Nash strategy, its payoff must be higher still than if it kept its monitoring expenditures unchanged.
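This monotonicity can be checked numerically. The sketch below is illustrative only: it assumes the exponential detection technology $p(k) = 1-e^{-\lambda k}$ of Example 1 below, and the parameter values for $H$ and $\lambda$ are arbitrary assumptions, not values from the paper. It verifies that the enforcer's payoff along his best response falls as $q$ rises.

```python
import math

# Illustrative parameters (assumptions, not from the paper's examples)
H = 4.0    # harm from undetected non-compliance
lam = 1.0  # lambda in p(k) = 1 - exp(-lam*k); requires lam >= 1/H

def p(k):
    return 1.0 - math.exp(-lam * k)

def k_br(q):
    # Enforcer's best response solves q * p'(k) * H = 1, i.e.
    # q*lam*H*exp(-lam*k) = 1, valid for q >= q_low = 1/(lam*H);
    # for smaller q the corner k = 0 is optimal.
    return max(0.0, math.log(q * lam * H) / lam)

def enforcer_payoff(q):
    k = k_br(q)
    return -q * (1.0 - p(k)) * H - k

q_low = 1.0 / (lam * H)
qs = [q_low + i * (1.0 - q_low) / 50 for i in range(51)]
payoffs = [enforcer_payoff(q) for q in qs]

# The enforcer's payoff along the best response falls as q rises.
assert all(payoffs[i] > payoffs[i + 1] for i in range(len(payoffs) - 1))
print("payoff at q_low: %.4f, payoff at q=1: %.4f" % (payoffs[0], payoffs[-1]))
```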

We now turn to compare the equilibrium outcome in SEQo and SEQe.

Proposition 3 (SEQe versus SEQo)

Suppose $q^* < 1$ in either SEQo or SEQe. Then

(a) $k^*$ is lower in SEQo than in SEQe.

(b) $q^*$ is either higher or lower in SEQo than in SEQe.

(c) The offender's equilibrium payoff is higher in SEQo than in SEQe.

(d) The enforcer's equilibrium payoff is either higher or lower in SEQo than in SEQe; in particular:

(i) if $q^*$ is lower in SEQo than in SEQe, then the enforcer's equilibrium payoff is higher in SEQo than in SEQe;

(ii) if $G/(G+S)$ is sufficiently small, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo. ║

To see why the enforcer’s equilibrium monitoring expenditures are lower in SEQo than in SEQe, consider two cases: q

* 

1 in SIM and q

* 

1 in SIM.

First, suppose q

* 

1 in SIM. Recall from Propositions 1 and 2, respectively, that k * in

SEQe is higher than or equal to k

*

in SIM and that k

*

is strictly lower in SEQo than in

SIM. It follows that k

*

is strictly higher in SEQe than in SEQe.

Next, suppose q

* 

1 in SIM. Consider first the case where q

* 

0 in SEQe. Then, from proposition 1, k * is strictly higher in SEQe than in SIM. Since, by Proposition 1, k * in

SIM is higher than or equal to k

*

in SEQo, it follows that k

*

is strictly higher in SEQe than in SEQe. Consider next the case where q

* 

1 in SEQe. Then k

*

in SEQe is equal to k

*

in SIM. But the assumption that q

* 

1 in either SEQe or SEQo implies that q

* 

1 in

SEQo. By Proposition 2, k

*

is lower in SEQe than in SIM. Thus, k

*

is strictly higher in

SEQe than in SEQo.

To see why the offender’s equilibrium payoff is higher in SEQo than in SEQe, consider again two cases: q

* 

1 in SIM and q

* 

1 in SIM.

First, suppose q

* 

1 in SIM. Recall from Propositions 1 and 2, respectively, that the offender’s equilibrium payoff is higher in SEQo than in SIM, but lower in SEQe than in

SIM. It follows that the offender’s equilibrium payoff is higher in SEQo than in SEQe.

Next, suppose q

* 

1 in SIM. Consider first the case where q

* 

0 in SEQe. Then, from proposition 1, the offender’s equilibrium payoff is strictly lower in SEQe than in SIM.

Since, by Proposition 2, the offender’s equilibrium payoff is strictly higher in SEQo than in SIM, it follows that the offender’s equilibrium payoff is strictly higher in SEQo than in

SEQe. Consider next the case where q

* 

1 in SEQe. Then, by Proposition 1, the offender’s equilibrium payoff in SEQe is equal to that in SIM. But the assumption that q * 

1 in either SEQe or SEQo implies that q * 

1 in SEQo. It follows from Proposition

2 that the offender’s equilibrium payoff is higher in SEQo than in SIM. Thus, the offender’s equilibrium payoff is higher in SEQo than in SEQe.

Now, the enforcer’s equilibrium payoff in SEQo may be higher than in SEQe because the enforcer over-commits monitoring expenditures in SEQe relative to SEQo. Thus, if q

*

is sufficiently low in SEQo than in SEQe, the enforcer’s equilibrium payoff is higher in

SEQo than in SEQe.


Example 1. Suppose the probability of detecting non-compliance is given by $p(k) = 1-e^{-\lambda k}$, where $\lambda \geq 1/H$. Then the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if $G/(G+S) < (>)\ (1-e^{-1})$. ║

We prove this result in the Appendix. If $p(k) = 1-e^{-\lambda k}$, then $-p'(0)^2/p''(0) = 1$. It follows that, for all $G$, $S$, and $H$, the offender in SEQo commits to $\underline{q}$. The enforcer's best response, in turn, is not to monitor. Whether the enforcer's equilibrium payoff is higher if it moves first or second thus depends on $G/(G+S)$. Specifically, if $G/(G+S)$ is such that $q^* = 1$ in SEQe, then the enforcer's equilibrium payoff is higher in SEQo than in SEQe. If $G/(G+S)$ is such that $q^* = 0$ in SEQe, then the enforcer's equilibrium payoff may be either higher or lower in SEQo than in SEQe. In particular, if the equilibrium monitoring expenditures in SEQe are sufficiently high (i.e., $G/(G+S) > 1-e^{-1}$), then the enforcer's equilibrium payoff is higher in SEQo than in SEQe.
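A minimal numerical sketch of this comparison, using the closed forms derived in the Appendix (the parameter values for $H$, $\lambda$, and $S$ are illustrative assumptions):

```python
import math

H, lam, S = 5.0, 1.0, 1.0   # illustrative; requires lam >= 1/H

def seqe_payoff(G):
    # Enforcer leads: fully deter at cost k_tilde, or accommodate q* = 1.
    k_tilde = math.log((G + S) / S) / lam           # p(k_tilde) = G/(G+S)
    accommodate = -math.log(math.e * lam * H) / lam  # -[k_hat + (1-p(k_hat))H]
    return max(-k_tilde, accommodate)

def seqo_payoff():
    # Offender leads: commits to q_low = 1/(lam*H); enforcer spends zero,
    # so the enforcer's payoff is -q_low * H = -1/lam.
    return -1.0 / lam

def ratio(G):
    return G / (G + S)

# SEQe beats SEQo exactly when G/(G+S) < 1 - 1/e.
for G in [0.5, 1.0, 1.5, 2.0, 5.0]:
    if ratio(G) < 1.0 - 1.0 / math.e:
        assert seqe_payoff(G) > seqo_payoff()
    elif ratio(G) > 1.0 - 1.0 / math.e:
        assert seqe_payoff(G) < seqo_payoff()
print("threshold 1 - 1/e =", 1.0 - 1.0 / math.e)
```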

Example 2. Suppose the probability of detecting non-compliance is given by $p(k) = 1-(k+1)^{-\alpha}$, where $\alpha > 0$ and $\alpha H > 1$. Then, if $q^* < 1$ in either SEQo or SEQe:

(a) If $\alpha < 1$, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo.

(b) If $\alpha = 1$, then the enforcer's equilibrium payoff is higher (identical) in SEQe than in SEQo if $q^* = (>)\ \underline{q}$ in SEQo.

(c) If $\alpha > 1$, then the enforcer's equilibrium payoff is higher (lower) in SEQo than in SEQe if $G/(G+S) > (<)\ 1-(1+\alpha^{-1})^{-\alpha}$; that is, if $\tilde{k} > (<)\ \underline{q}H$. ║

We prove this result in the Appendix. Example 2 illustrates that whether the enforcer's equilibrium payoff is higher in SEQe than in SEQo depends on the effectiveness of the enforcement technology and on $G/(G+S)$. Specifically, when the enforcement technology is relatively ineffective ($\alpha < 1$), the enforcer prefers to move first for any $G/(G+S) \in (0,1)$. The intuition is that when the enforcement technology is relatively ineffective, the offender's equilibrium probability of non-compliance in SEQo is relatively high as compared to the offender's Nash equilibrium strategy. The enforcer's payoff from responding optimally to this non-compliance in SEQo is accordingly lower than his payoff from completely deterring non-compliance in SEQe. Moreover, as we show in the Appendix, there is a range of values of $G/(G+S)$ in which the offender never complies in SEQo, but always complies in SEQe.

When the enforcement technology is relatively effective ($\alpha > 1$), by contrast, the enforcer's payoff is higher in SEQe than in SEQo only if $G/(G+S)$ is sufficiently small. More specifically, as we show in the Appendix, there is a range of values of $G/(G+S)$ in which the offender never complies in SEQe, but only partially complies in SEQo. For these values of $G/(G+S)$, the enforcer's payoff is higher in SEQo than in SEQe. If the offender always complies in SEQe, then, if the offender does not comply with probability greater than $\underline{q}$ in SEQo, the enforcer's payoff is higher in SEQo than in SEQe; if, by contrast, the offender does not comply with probability $\underline{q}$ in SEQo, then whether the enforcer's payoff is higher or lower in SEQe relative to SEQo depends on $G/(G+S)$.
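The case analysis of Example 2 can be spot-checked numerically using the closed forms from Lemmas A3 and A4 below; the parameter values in this sketch are illustrative assumptions, not part of the example:

```python
H, S = 3.0, 1.0   # illustrative; alpha * H > 1 must hold below

def payoffs(alpha, G):
    # Closed forms from Lemmas A3 and A4 for p(k) = 1 - (k+1)**(-alpha).
    r = (G + S) / S
    k_tilde = r ** (1.0 / alpha) - 1.0
    accommodate = -((alpha * H) ** (1.0 / (alpha + 1.0)) * (1.0 + 1.0 / alpha) - 1.0)
    seqe = max(-k_tilde, accommodate)
    # SEQo: corner q* = q_low, interior q*, or q* = 1.
    if G / (G + S) <= 1.0 - 1.0 / (1.0 + alpha):
        seqo = -1.0 / alpha                                   # -q_low * H
    elif G / (G + S) < 1.0 - ((alpha * H) ** (-alpha / (alpha + 1.0))) / (1.0 + alpha):
        seqo = -((r / (1.0 + alpha)) ** (1.0 / alpha) * (1.0 + 1.0 / alpha) - 1.0)
    else:
        seqo = accommodate
    return seqe, seqo

# (a) alpha < 1: the enforcer weakly prefers SEQe for any G/(G+S).
for G in [0.2, 0.5, 1.0, 2.0, 5.0]:
    seqe, seqo = payoffs(0.5, G)
    assert seqe >= seqo - 1e-12

# (c) alpha > 1: SEQo is better (worse) when G/(G+S) is above (below)
# the threshold 1 - (1 + 1/alpha)**(-alpha).
alpha = 2.0
thresh = 1.0 - (1.0 + 1.0 / alpha) ** (-alpha)
for G in [0.2, 0.5, 1.0, 2.0, 5.0]:
    seqe, seqo = payoffs(alpha, G)
    if G / (G + S) < thresh:
        assert seqe >= seqo - 1e-12
    else:
        assert seqo >= seqe - 1e-12
print("alpha=2 threshold:", thresh)
```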

5. Conclusion

This paper shows that reactive enforcement might be superior to proactive enforcement.

We studied a modified inspection game in which the enforcement technology exhibits diminishing marginal return. We showed that the offender enjoys a first-mover advantage as compared to a Nash game. In particular, in an offender-leadership game, the offender's non-compliance decision incorporates the effect of non-compliance on the enforcer's enforcement strategy. Consequently, the level of compliance is higher and the level of enforcement is lower in an offender-leadership game than in a Nash game. This implies that the enforcer's payoff is higher in an offender-leadership game than in an enforcer-leadership game if the level of compliance is higher in an offender-leadership game. If, by contrast, the level of compliance is lower in an offender-leadership game, then the enforcer's payoff might still be higher in an offender-leadership game if the enforcer's costs of achieving deterrence in an enforcer-leadership game are sufficiently high.


Appendix

Example 1. Suppose the probability of detecting non-compliance is given by $p(k) = 1-e^{-\lambda k}$, where $\lambda \geq 1/H$. Then the enforcer's equilibrium payoff is higher (lower) in SEQe than in SEQo if $G/(G+S) < (>)\ (1-e^{-1})$. ║

First, note that $p(k) \in [0,1)$, $p(0) = 0$, $p'(k) = \lambda e^{-\lambda k} > 0$, $p'(0) = \lambda \geq 1/H$, and $p''(k) = -\lambda^2 e^{-\lambda k} < 0$.

Next, consider the equilibrium outcomes in SEQe. Recall that $p(\tilde{k}) = G/(G+S)$ and $p'(\hat{k}) = 1/H$; hence $\tilde{k} = \frac{1}{\lambda}\ln\left(\frac{G+S}{S}\right)$, $\hat{k} = \frac{1}{\lambda}\ln(\lambda H)$, and $p(\hat{k}) = 1-(\lambda H)^{-1}$. The enforcer's equilibrium payoff if $q^* = 1$ is $-[\hat{k} + (1-p(\hat{k}))H] = -\frac{1}{\lambda}(\ln(\lambda H) + 1) = -\frac{1}{\lambda}\ln(e\lambda H)$. Thus, the enforcer prefers to fully deter non-compliance iff $\tilde{k} < \hat{k} + (1-p(\hat{k}))H$ iff $\frac{1}{\lambda}\ln\left(\frac{G+S}{S}\right) < \frac{1}{\lambda}\ln(e\lambda H)$ iff $G/(G+S) < 1-e^{-1}(\lambda H)^{-1}$.

We summarize these results in the following Lemma:

Lemma A1 (equilibrium in SEQe)

Assume the enforcement technology is given by $p(k) = 1-e^{-\lambda k}$, where $\lambda \geq 1/H$. Then, if the enforcer moves first, the following strategy profiles are the unique subgame perfect equilibria:

(a) If $G/(G+S) < 1-e^{-1}(\lambda H)^{-1}$, then $k^* = \frac{1}{\lambda}\ln\left(\frac{G+S}{S}\right) = \tilde{k}$ and $q^* = 0$.

(b) If $G/(G+S) > 1-e^{-1}(\lambda H)^{-1}$, then $k^* = \frac{1}{\lambda}\ln(\lambda H) = \hat{k}$ and $q^* = 1$.

Next, consider the equilibrium outcomes in SEQo. From (12), the offender's maximization problem for $q \in (\underline{q}, 1)$ is $\max_{q \in (\underline{q},1)} q[G - p(k^{br}(q))(G+S)]$, where $\underline{q} = \frac{1}{p'(0)H} = \frac{1}{\lambda H}$. Here $k^{br}(q) = \frac{1}{\lambda}\ln(q\lambda H)$, so $e^{-\lambda k^{br}(q)} = (q\lambda H)^{-1}$. Differentiating with respect to $q$ and rearranging (see (13)) gives $G - (G+S)\left(1 - e^{-\lambda k^{br}} + e^{-\lambda k^{br}}\right) = -S < 0$. The offender thus chooses $q^* = \underline{q}$ for all $S$, $G$, and $H$.
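That the offender's objective has constant slope $-S$ on $(\underline{q}, 1)$ — so that committing to $\underline{q}$ is optimal — can be verified directly. The parameter values below are illustrative assumptions:

```python
import math

H, lam, G, S = 4.0, 1.0, 2.0, 1.0   # illustrative; requires lam >= 1/H

def offender_payoff(q):
    # Offender's payoff q[G - p(k_br(q))(G+S)], with the enforcer's best
    # response k_br(q) = ln(q*lam*H)/lam on the interior (q_low, 1].
    k = math.log(q * lam * H) / lam
    p = 1.0 - math.exp(-lam * k)
    return q * (G - p * (G + S))

q_low = 1.0 / (lam * H)
qs = [q_low + i * (1.0 - q_low) / 10 for i in range(11)]
slopes = [(offender_payoff(qs[i + 1]) - offender_payoff(qs[i])) / (qs[i + 1] - qs[i])
          for i in range(10)]

# The slope equals -S everywhere, so the offender commits to q* = q_low.
assert all(abs(s + S) < 1e-9 for s in slopes)
print("slopes:", [round(s, 6) for s in slopes])
```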

We summarize this result in the following Lemma:

Lemma A2 (equilibrium in SEQo)

Assume the enforcement technology is given by $p(k) = 1-e^{-\lambda k}$, where $\lambda \geq 1/H$. Then, if the offender moves first, the offender does not comply with probability $\underline{q} = \frac{1}{\lambda H}$ and the enforcer spends zero, for all $G$, $S$, and $H$.

To show that the enforcer’s equilibrium payoff is higher (lower) is SEQe than in SEQo if

G

G

S

~ k

(

1

)( ln(

1

S

G

S

1 e

)

)

, recall that (i) the enforcer’s equilibrium payoff in SEQe is

if q

* 

0 and

1

 ln( e

H )

 

1

if q

* 

1 (see Lemma A1), and (ii) the enforcer equilibrium payoff in SEQo is

 q H

 

1

( see Lemma A2). Thus, the enforcer’s equilibrium payoff is higher (lower) in SEQe than in SEQo if

G

G

S

(

)( 1

1 e

) .

Example 2. Suppose the probability of detecting non-compliance is given by $p(k) = 1-(k+1)^{-\alpha}$, where $\alpha > 0$ and $\alpha H > 1$. Then, if $q^* < 1$ in either SEQo or SEQe:

(a) If $\alpha < 1$, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo.

(b) If $\alpha = 1$, then the enforcer's equilibrium payoff is higher (identical) in SEQe than in SEQo if $q^* = (>)\ \underline{q}$ in SEQo.

(c) If $\alpha > 1$, then the enforcer's equilibrium payoff is higher (lower) in SEQo than in SEQe if $G/(G+S) > (<)\ 1-(1+\alpha^{-1})^{-\alpha}$. ║

First, note that $p(k) \in [0,1)$, $p(0) = 0$, $p'(k) = \alpha(k+1)^{-(\alpha+1)} > 0$, $p'(0) = \alpha > 1/H$, and $p''(k) = -\alpha(\alpha+1)(k+1)^{-(\alpha+2)} < 0$.

Next, consider the equilibrium outcomes in SEQe. Recall that $p(\tilde{k}) = G/(G+S)$ and $p'(\hat{k}) = 1/H$. Hence $\tilde{k} = \left(\frac{G+S}{S}\right)^{1/\alpha} - 1$, $\hat{k} = (\alpha H)^{\frac{1}{\alpha+1}} - 1$, and $p(\hat{k}) = 1-(\alpha H)^{-\frac{\alpha}{\alpha+1}}$. The enforcer's equilibrium payoff if $q^* = 1$ is thus $-[\hat{k} + (1-p(\hat{k}))H] = -\left[(\alpha H)^{\frac{1}{\alpha+1}}(1+\alpha^{-1}) - 1\right]$.

The enforcer fully deters non-compliance iff $\tilde{k} < \hat{k} + (1-p(\hat{k}))H$ iff $\left(\frac{G+S}{S}\right)^{1/\alpha} < (\alpha H)^{\frac{1}{\alpha+1}}(1+\alpha^{-1})$ iff $\frac{G+S}{S} < (\alpha H)^{\frac{\alpha}{\alpha+1}}(1+\alpha^{-1})^{\alpha}$ iff $G/(G+S) < 1-(\alpha H)^{-\frac{\alpha}{\alpha+1}}(1+\alpha^{-1})^{-\alpha}$.

We summarize these results in the following Lemma:

Lemma A3 (equilibrium in SEQe)

Assume the enforcement technology is given by $p(k) = 1-(k+1)^{-\alpha}$, where $\alpha > 0$ and $\alpha H > 1$. Then, if the enforcer moves first, the following strategy profiles are the unique subgame perfect equilibria:

(a) If $G/(G+S) < 1-(\alpha H)^{-\frac{\alpha}{\alpha+1}}(1+\alpha^{-1})^{-\alpha}$, then $k^* = \left(\frac{G+S}{S}\right)^{1/\alpha} - 1 = \tilde{k}$ and $q^* = 0$.

(b) If $G/(G+S) > 1-(\alpha H)^{-\frac{\alpha}{\alpha+1}}(1+\alpha^{-1})^{-\alpha}$, then $k^* = (\alpha H)^{\frac{1}{\alpha+1}} - 1 = \hat{k}$ and $q^* = 1$.

Consider now the equilibrium outcomes in SEQo. From (12), the offender's maximization problem for $q \in (\underline{q},1)$ is $\max_{q \in (\underline{q},1)} q[G - p(k^{br}(q))(G+S)]$, where $\underline{q} = \frac{1}{p'(0)H} = \frac{1}{\alpha H}$. Differentiating with respect to $q$ and rearranging (see (13)) gives $G - (G+S)\left(1 - \frac{1}{1+\alpha}(k+1)^{-\alpha}\right)$. Equating to zero and solving for $k$ yields $\bar{k} = \left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{1/\alpha} - 1$.

The offender in SEQo thus chooses $q^* \in (\underline{q},1)$ iff $\hat{k} > \bar{k} > 0$ iff $1-(1+\alpha)^{-1} < G/(G+S) < 1-(1+\alpha)^{-1}(\alpha H)^{-\frac{\alpha}{\alpha+1}}$ (note that $\alpha H > 1$ implies $1-(1+\alpha)^{-1} < 1-(1+\alpha)^{-1}(\alpha H)^{-\frac{\alpha}{\alpha+1}}$, so this range is nonempty).

Suppose that $\hat{k} > \bar{k} > 0$. Then $q^* = \bar{q} = \frac{1}{p'(\bar{k})H}$, where $p'(\bar{k}) = \alpha\left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{-\frac{\alpha+1}{\alpha}}$; hence $\bar{q} = \frac{1}{\alpha H}\left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{\frac{\alpha+1}{\alpha}}$.

We summarize these results in the following Lemma:


Lemma A4 (equilibrium in SEQo)

Assume the enforcement technology is given by $p(k) = 1-(k+1)^{-\alpha}$, where $\alpha > 0$ and $\alpha H > 1$. Then, if the offender moves first, the following strategy profiles are the unique subgame perfect equilibria:

(a) If $G/(G+S) < 1-(1+\alpha)^{-1}$, then $k^* = 0$ and $q^* = \underline{q}$.

(b) If $1-(1+\alpha)^{-1} < G/(G+S) < 1-(1+\alpha)^{-1}(\alpha H)^{-\frac{\alpha}{\alpha+1}}$, then $k^* = \left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{1/\alpha} - 1 = \bar{k}$ and $q^* = \frac{1}{\alpha H}\left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{\frac{\alpha+1}{\alpha}} \in (\underline{q}, 1)$.

(c) If $G/(G+S) > 1-(1+\alpha)^{-1}(\alpha H)^{-\frac{\alpha}{\alpha+1}}$, then $k^* = (\alpha H)^{\frac{1}{\alpha+1}} - 1 = \hat{k}$ and $q^* = 1$.

Note that, if $q^* \in (\underline{q}, 1)$, then the expected harm in equilibrium is $q^*(1-p(\bar{k}))H = \frac{1}{\alpha}\left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{1/\alpha}$. Accordingly, the enforcer's equilibrium payoff is $-\bar{k} - q^*(1-p(\bar{k}))H = -\left[\left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{1/\alpha}(1+\alpha^{-1}) - 1\right]$.
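The closed form above can be checked against a direct computation of $-\bar{k} - q^*(1-p(\bar{k}))H$. The parameter values below are illustrative assumptions, chosen so that $q^*$ is interior:

```python
H, S, alpha = 3.0, 1.0, 2.0   # illustrative; alpha*H > 1
G = 5.0                        # G/(G+S) = 5/6 lies in the interior region

r = (G + S) / S
k_bar = (r / (1.0 + alpha)) ** (1.0 / alpha) - 1.0
q_star = ((r / (1.0 + alpha)) ** ((alpha + 1.0) / alpha)) / (alpha * H)

# Direct computation: -k_bar - q*(1 - p(k_bar))*H, with p(k) = 1 - (k+1)**(-alpha)
p_kbar = 1.0 - (k_bar + 1.0) ** (-alpha)
direct = -k_bar - q_star * (1.0 - p_kbar) * H

# Closed form from the text: -[(r/(1+alpha))**(1/alpha) * (1 + 1/alpha) - 1]
closed = -((r / (1.0 + alpha)) ** (1.0 / alpha) * (1.0 + 1.0 / alpha) - 1.0)

assert 1.0 / (alpha * H) < q_star < 1.0    # q* is indeed interior
assert abs(direct - closed) < 1e-12
print("enforcer SEQo payoff:", round(direct, 6))
```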

The next Lemmas prove the statements made in Example 2.

Lemma A5

Assume the enforcement technology is given by $p(k) = 1-(k+1)^{-\alpha}$, where $\alpha > 1$ and $\alpha H > 1$. Then:

(i) If $q^* = 1$ in SEQo, then $q^* = 1$ in SEQe.

(ii) If $q^* = 1$ in SEQe, then either $q^* = 1$ or $q^* < 1$ in SEQo; that is, $q^* = 1$ in SEQe does not determine whether $q^* = 1$ in SEQo.

(iii) If $q^* = 0$ in SEQe and $q^* \in (\underline{q}, 1)$ in SEQo, then the enforcer's equilibrium payoff is higher in SEQo than in SEQe.

(iv) If $q^* = 0$ in SEQe and $q^* = \underline{q}$ in SEQo, then the enforcer's equilibrium payoff is higher (lower) in SEQo than in SEQe if $G/(G+S) > (<)\ 1-(1+\alpha^{-1})^{-\alpha}$.

Proof.

(i) and (ii). We proceed by showing that, for $\alpha > 1$, $1-(\alpha H)^{-\frac{\alpha}{\alpha+1}}(1+\alpha^{-1})^{-\alpha} < 1-(1+\alpha)^{-1}(\alpha H)^{-\frac{\alpha}{\alpha+1}}$; that is, the full-deterrence threshold in SEQe (Lemma A3) lies below the no-compliance threshold in SEQo (Lemma A4). The inequality holds iff $(1+\alpha^{-1})^{-\alpha} > (1+\alpha)^{-1}$ iff $\left(1+\frac{1}{\alpha}\right)^{\alpha} < 1+\alpha$. Since $\ln x < \ln y$ iff $x < y$, the proof is completed by showing that $\alpha\ln\left(1+\frac{1}{\alpha}\right) < \ln(1+\alpha)$ for $\alpha > 1$. The two sides are equal at $\alpha = 1$, so the inequality must hold for $\alpha > 1$ if the derivative of the LHS, $\ln\left(1+\frac{1}{\alpha}\right) - \frac{1}{1+\alpha}$, is smaller than the derivative of the RHS, $\frac{1}{1+\alpha}$; that is, if $\ln\left(1+\frac{1}{\alpha}\right) < \frac{2}{1+\alpha}$ for $\alpha > 1$.

To show this, note that $\ln\left(1+\frac{1}{\alpha}\right) = \int_{\alpha}^{\alpha+1}\frac{1}{t}\,dt < \frac{1}{\alpha} < \frac{2}{\alpha+1}$, for $\alpha > 1$, where the equality follows from the definition of the logarithm function and the first inequality follows from the definition of the integral as the area circumscribed between the integrand and the x-axis and the fact that $\frac{1}{t}$ is decreasing in $t$.

(iii) Suppose that $q^* = 0$ in SEQe and $q^* = \bar{q} \in (\underline{q}, 1)$, $k^* = \bar{k}$, in SEQo. Then the enforcer's equilibrium payoff is greater in SEQo than in SEQe iff $\tilde{k} > \bar{k} + \bar{q}(1-p(\bar{k}))H$. That is, iff $\left(\frac{G+S}{S}\right)^{1/\alpha} - 1 > \left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{1/\alpha}(1+\alpha^{-1}) - 1$ iff $(1+\alpha)^{1/\alpha} > 1+\alpha^{-1}$ iff $1+\alpha > \left(1+\frac{1}{\alpha}\right)^{\alpha}$, which, as was shown above, holds for $\alpha > 1$.

(iv) Suppose $G/(G+S) > 1-(1+\alpha^{-1})^{-\alpha}$. Then $\frac{S}{G+S} < (1+\alpha^{-1})^{-\alpha}$, and therefore $\frac{G+S}{S} > (1+\alpha^{-1})^{\alpha}$. Raising both sides to the power of $\frac{1}{\alpha}$ gives $\left(\frac{G+S}{S}\right)^{1/\alpha} > 1+\alpha^{-1}$, which implies $\tilde{k} > \frac{1}{\alpha} = \underline{q}H$; since the enforcer's equilibrium payoff is $-\tilde{k}$ in SEQe and $-\underline{q}H$ in SEQo, the payoff is higher in SEQo. A similar proof shows that if $G/(G+S) < 1-(1+\alpha^{-1})^{-\alpha}$, then $\tilde{k} < \underline{q}H$ and the payoff is higher in SEQe. ║

Lemma A6

Assume the enforcement technology is given by $p(k) = 1-(k+1)^{-\alpha}$, where $1 > \alpha > 0$ and $\alpha H > 1$. Then:

(i) If $q^* = 1$ in SEQe, then $q^* = 1$ in SEQo.

(ii) If $q^* = 1$ in SEQo, then either $q^* = 1$ or $q^* = 0$ in SEQe.

(iii) If $q^* = 0$ in SEQe, then the enforcer's equilibrium payoff is higher in SEQe than in SEQo.


Proof:

(i) and (ii). We proceed by showing that, for $1 > \alpha > 0$, $1-(1+\alpha)^{-1}(\alpha H)^{-\frac{\alpha}{\alpha+1}} < 1-(\alpha H)^{-\frac{\alpha}{\alpha+1}}(1+\alpha^{-1})^{-\alpha}$; that is, the no-compliance threshold in SEQo (Lemma A4) lies below the full-deterrence threshold in SEQe (Lemma A3). The inequality holds iff $(1+\alpha)^{-1} > (1+\alpha^{-1})^{-\alpha}$ iff $\left(1+\frac{1}{\alpha}\right)^{\alpha} > 1+\alpha$.

Let $\beta = \frac{1}{\alpha} > 1$. Then we have to show that $(1+\beta)^{1/\beta} > 1+\frac{1}{\beta}$, for $\beta > 1$. Now, from the proof of parts (i) and (ii) in Lemma A5 we know that $\left(1+\frac{1}{\beta}\right)^{\beta} < 1+\beta$, for $\beta > 1$. Raising both sides to the power of $\frac{1}{\beta}$ gives $1+\frac{1}{\beta} < (1+\beta)^{1/\beta}$, for $\beta > 1$.

(iii) Suppose first that $q^* = 0$ in SEQe and $q^* = \bar{q} \in (\underline{q}, 1)$, $k^* = \bar{k}$, in SEQo. Then the enforcer's equilibrium payoff is higher in SEQe than in SEQo iff $\tilde{k} < \bar{k} + \bar{q}(1-p(\bar{k}))H$. That is, iff $\left(\frac{G+S}{S}\right)^{1/\alpha} - 1 < \left(\frac{1}{1+\alpha}\frac{G+S}{S}\right)^{1/\alpha}(1+\alpha^{-1}) - 1$ iff $(1+\alpha)^{1/\alpha} < 1+\alpha^{-1}$ iff $1+\alpha < \left(1+\frac{1}{\alpha}\right)^{\alpha}$, which, as was shown above, holds for $1 > \alpha > 0$.

Next, suppose that $q^* = 0$ in SEQe and $q^* = \underline{q}$ in SEQo. Then, from Lemma A4, $G/(G+S) \leq 1-(1+\alpha)^{-1}$. Recall that $\left(1+\frac{1}{\alpha}\right)^{\alpha} > 1+\alpha$ for $1 > \alpha > 0$, and thus $(1+\alpha^{-1})^{-\alpha} < (1+\alpha)^{-1}$; hence $G/(G+S) < 1-(1+\alpha^{-1})^{-\alpha}$. Rearranging terms gives $\frac{S}{G+S} > (1+\alpha^{-1})^{-\alpha}$, which implies $\left(\frac{G+S}{S}\right)^{1/\alpha} < 1+\alpha^{-1}$, which implies $\tilde{k} < \frac{1}{\alpha} = \underline{q}H$. The enforcer's equilibrium payoff, $-\tilde{k}$ in SEQe and $-\underline{q}H$ in SEQo, is therefore higher in SEQe. ║

Lemma A7

Assume the enforcement technology is given by $p(k) = 1-(k+1)^{-\alpha}$, where $\alpha = 1$ and $H > 1$. Then:

(i) $q^* = 1$ in SEQe iff $q^* = 1$ in SEQo.

(ii) If $q^* = 0$ in SEQe and $q^* \in (\underline{q}, 1)$ in SEQo, then the enforcer's equilibrium payoff is identical in SEQo and in SEQe.

(iii) If $q^* = 0$ in SEQe and $q^* = \underline{q}$ in SEQo, then the enforcer's equilibrium payoff is weakly higher in SEQe than in SEQo.

Proof.

Parts (i) and (ii) follow from the proofs of Lemmas A5 and A6: at $\alpha = 1$ the inequality between $\left(1+\frac{1}{\alpha}\right)^{\alpha}$ and $1+\alpha$ holds with equality, so the thresholds in SEQe and SEQo coincide and the payoff comparison in the partial-compliance case holds with equality.

To prove part (iii), suppose that $q^* = 0$ in SEQe and $q^* = \underline{q}$ in SEQo. Then, from Lemma A4, $G/(G+S) \leq 1-\frac{1}{2} = \frac{1}{2}$. Rearranging terms gives $\frac{S}{G+S} \geq \frac{1}{2}$, which implies $\frac{G+S}{S} \leq 2$, which implies $\tilde{k} = \frac{G+S}{S} - 1 \leq 1 = \underline{q}H$. The enforcer's equilibrium payoff, $-\tilde{k}$ in SEQe and $-\underline{q}H$ in SEQo, is therefore weakly higher in SEQe. ║


