Colonel Blotto in the War on Terror: Implications for Event Frequency
Michael R. Powers
Advanta Center for Financial Services Studies
Temple University
michael.powers@temple.edu
215-204-7293
215-204-4712 (fax)
Zhan Shen
Department of Risk, Insurance, and Healthcare Management
Temple University
1. Introduction
The research literature on terrorism risk is small but growing. Although probabilistic
models are widely used to quantify natural catastrophe risk, such approaches are insufficient in
the study of terrorism. Because of the human intelligence involved in a terrorist’s decision
making, some scholars (see Harris, 2004; Kunreuther and Erwann, 2004; Major, 2002; Sandler
and Arce, 2003; Woo, 2002) have argued that game theory, in conjunction with probability
theory, should be used to develop successful models of terrorism risk.
1.1. Probability Models
Paté-Cornell (2002) presents a classic Bayesian model for better detection and
interpretation in the fusion of terrorist-attack information. The model allows computation of the
posterior probability of an event given its prior probability (before the signal is observed), and
the quality of the signal is characterized by the probabilities of false positives and false
negatives. Another method is to divide the terrorist activity into several steps and evaluate the
corresponding probability associated with each node on a scenario tree. Harris (2004) notes that
if the conditional probabilities of detecting the terrorist at each node can be evaluated, then the
probability that this security method will fail can be determined. Haimes and Horowitz (2004)
apply hierarchical holographic modeling (HHM) to track terrorists. HHM first sets up a multi-view image of the system hierarchy of terrorist threats to generate a large number of scenarios, and
then ranks them by likelihoods and consequences. Florentine et al. (2003) use Bayes’ Theorem
as the key tool for analyzing observables identified through the HHM diagram to assess the
likelihood of a terrorist attack. Major (2002) proposes that the probability of a successful
terrorism attack be represented as the product of the probability of remaining undetected and the
conditional probability of destruction. Kunreuther and Erwann (2004) argue that the probability
of success is determined not only by the amount of the resources the terrorist group allocates to
the attack but also by the resources its opponent allocates to detecting terrorist activity and
defending the target.
1.2. Game Theory
Armstrong (2002) argues that gaming terrorism through “role-playing” is more
appropriate for conflict situations. Oster (2002) discusses the potential of applying game theory
to account for insurance protection against terrorist attacks. Kunreuther and Erwann (2004)
point out that the terrorist group gains value or utility from the damage inflicted on its
adversaries. Both parties are constrained by their own resources and the game becomes one of
strategic decisions as to how to deploy those resources. Major (2002) develops a zero-sum game
between terrorists and defenders with a numerical example requiring sophisticated numerical
analysis. Yusef (2004) claims that the game conflict between U.S. and radical Islamic groups
possesses a Nash Equilibrium of continuous warfare. Sandler and Arce (2003) construct a game-theoretic model that includes different choices of targets (e.g., businesspeople, officials, and
tourists).
1.3. Other Issues
Woo (2002) argues that terrorists are adaptive and will “follow the path of least
resistance” by attacking the most vulnerable targets. He also proposes that the partial derivative
of the probability that the defense is unable to prevent or stop the attack with respect to defensive
resources is a power of the utility function. Harris (2004) observes that terrorists can attack
multiple targets simultaneously, which has not been taken into account by prior models.
2. The Frequency Model
Extending Major’s (2002) analysis, we write the probability of a successful terrorist
attack on a particular target, i, as

f_i = Pr{Successful Attack at i} = p_1 · p_2^(i) · p_3^(i) · p_4^(i),

where:

p_1 = Pr{≥ 1 Attack},

p_2^(i) = Pr{i Attacked | ≥ 1 Attack},

p_3^(i) = Pr{Attack at i Undetected | ≥ 1 Attack ∩ i Attacked},

and

p_4^(i) = Pr{Attack at i Successful | ≥ 1 Attack ∩ i Attacked ∩ Attack at i Undetected}.

The first of the above probabilities, p_1, is essentially the underlying probability of
terrorist action during a given time period. In the commercial risk analyst’s model, this
probability is generally captured by an overall “outlook” analysis for a particular future time
period.
2.1. The Colonel Blotto Game
Let V_i denote the intrinsic value of the target, and let A_i and D_i denote the intrinsic
amounts of resources allocated to the target by the attacker and defender, respectively. We posit
that A_i, D_i ≪ V_i.
For an attacker that reaches its target undetected, we model the ensuing combat by
assuming: (1) that the two sides engage in a sequence of micro contests in which each side
stakes one unit, and has an equal probability of winning; (2) that the attacker initially possesses
A_i^c units (for some c > 0); and (3) that the defender initially possesses D_i^c units. From the
classical gambler’s ruin model, it follows that the probability that the attacker will win the macro
combat is

p_4^(i) = A_i^c / (A_i^c + D_i^c).¹   (1)
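The closed form in (1) can be checked against a direct simulation of the micro-contest sequence. The following sketch uses illustrative stake counts (A_i^c = 3, D_i^c = 7), which are assumptions chosen for the example rather than calibrated values:

```python
import random

def ruin_prob(a_units, d_units, trials=100_000, seed=1):
    """Monte Carlo estimate of the attacker's win probability in the
    classical gambler's ruin: each micro contest transfers one unit,
    each side winning with probability 1/2, until one side is ruined."""
    rng = random.Random(seed)
    total = a_units + d_units
    wins = 0
    for _ in range(trials):
        a = a_units
        while 0 < a < total:
            a += 1 if rng.random() < 0.5 else -1
        wins += (a == total)
    return wins / trials

a_c, d_c = 3, 7                 # illustrative A_i^c and D_i^c
print(a_c / (a_c + d_c))        # equation (1): 0.3
print(ruin_prob(a_c, d_c))      # simulated value, close to 0.3
```

With fair micro contests, the attacker’s win probability depends only on the initial stakes, which is exactly the ratio appearing in (1).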
2.2. Detecting Terrorists
We model the defender’s search for the attacker by assuming: (1) that the searchable size
of the target’s domain consists of V_i^s (for some s > 0) independent units; (2) that the attacker
possesses A_i^s units to distribute across the target’s V_i^s units at random; and (3) that the
defender possesses D_i^s units, each of which it uses in succession to try to detect one of the
attacker’s units.

The probability that the first unit deployed by the defender will detect the attacker is thus
A_i^s / V_i^s. Given that the first unit fails, the probability that the second unit will succeed is
A_i^s / (V_i^s − 1); given that the first two units both fail, the probability that the third unit will
succeed is A_i^s / (V_i^s − 2); and so forth. Consequently, the probability that all D_i^s units will
fail to detect is given by

p_3^(i) = (1 − A_i^s / V_i^s)(1 − A_i^s / (V_i^s − 1))(1 − A_i^s / (V_i^s − 2)) ··· (1 − A_i^s / (V_i^s − D_i^s + 1)).

Since the value of the target is invariably much greater than the values of the resources allocated
by the attacker and defender, respectively, it is reasonable to assume V_i^s ≫ A_i^s and
V_i^s ≫ D_i^s, from which it follows that


¹ Note that other stochastic combat models could be used to derive the same expression for the attacker’s
probability of success. For example, if we consider the macro contest to consist of a “race” in time to a certain
random event, where that event is given by a Poisson process with parameter (rate) A_i^c t for the attacker and by a
Poisson process with parameter (rate) D_i^c t for the defender, then the probability of victory for the attacker is
exactly the same as in (1).


p_3^(i) ≈ exp(−A_i^s D_i^s / V_i^s).   (2)
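The accuracy of the exponential approximation in (2) is easy to verify numerically. The magnitudes below (V_i^s = 10^6 searchable units, A_i^s = 50, D_i^s = 2,000) are illustrative assumptions only:

```python
import math

def p3_exact(A, D, V):
    """Exact probability that all D defender units fail to detect:
    the product (1 - A/V)(1 - A/(V-1)) ... (1 - A/(V-D+1))."""
    prob = 1.0
    for j in range(D):
        prob *= 1.0 - A / (V - j)
    return prob

A, D, V = 50, 2000, 10**6
print(p3_exact(A, D, V))        # exact product
print(math.exp(-A * D / V))     # approximation (2): exp(-0.1) ≈ 0.9048
```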
3. Equilibrium Solutions
Treating the approximation in (2) as exact, and combining it with (1), yields

f_i = λ_i exp(−A_i^s D_i^s / V_i^s) · A_i^c / (A_i^c + D_i^c),

where λ_i = p_1 p_2^(i).
Now consider the defender’s post-attack effective loss,

L_D = φ_D( Σ_{i=1}^n V_i I_i ),

where φ_D is an increasing and concave-downward function, and the I_i are Bernoulli(f_i) random
variables. If the I_i are i.i.d. (as in the case of simultaneous attacks, addressed in Theorem 1
below), then, for small values of f_i,²

E[L_D] ≈ Σ_{i=1}^n f_i φ_D(V_i),   (3)

and we will take this approximation to be exact. If the I_i are such that Σ_{i=1}^n I_i ≤ 1 (as in the
case of one random attack, addressed in Theorem 2 below), then the approximation in (3) is in
fact exact.
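The quality of approximation (3) for small f_i can be checked by enumerating outcomes for a toy portfolio. The three target values and frequencies below are illustrative assumptions, with φ_D taken to be the square root (k = 1):

```python
import math

V = [1e6, 4e6, 9e6]       # illustrative target values
f = [0.01, 0.02, 0.015]   # small success frequencies f_i
phi = math.sqrt           # phi_D(V) = k V^(1/2) with k = 1

# exact E[L_D] = E[phi_D(sum_i V_i I_i)], enumerating the 2^3 outcomes
exact = 0.0
for m in range(8):
    I = [(m >> i) & 1 for i in range(3)]
    pr = math.prod(f[i] if I[i] else 1.0 - f[i] for i in range(3))
    exact += pr * phi(sum(V[i] * I[i] for i in range(3)))

approx = sum(f[i] * phi(V[i]) for i in range(3))  # right-hand side of (3)
print(exact, approx)   # agree to within about one percent
```

Because φ_D is concave with φ_D(0) = 0, the right-hand side of (3) slightly overstates the exact expectation, and the gap vanishes as the f_i shrink.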
We model the problem as a zero-sum game in which the attacker must select the vector
A = (A_1, A_2, …, A_n) to maximize E[L_D] subject to Σ_{j=1}^n A_j = A, and the defender must
select the vector D = (D_1, D_2, …, D_n) to minimize E[L_D] subject to Σ_{j=1}^n D_j = D. We
consider two variations, one with simultaneous attacks at all targets and one with a random
attack at only one target. In the former case, λ_i = p_1 for all i.³ In the latter case, the vector of
probabilities, λ = (λ_1, λ_2, …, λ_n) (subject to Σ_{j=1}^n λ_j = p_1),⁴ is determined exogenously.
We show that if λ = (λ_1, λ_2, …, λ_n) is endogenized as a strategy of the attacker, then strategic
equilibrium cannot exist.

² This is generally a reasonable assumption in the case of terrorism risk.

3.1. Simultaneous Attacks

First, consider the case in which the terrorists attack all targets simultaneously; i.e., such
that λ_i = p_1 and

f_i = p_1 exp(−A_i^s D_i^s / V_i^s) · A_i^c / (A_i^c + D_i^c).   (4)

Theorem 1: Define α = A / Σ_{j=1}^n V_j^{1/2} and δ = D / Σ_{j=1}^n V_j^{1/2}, and let λ_i = p_1 and
φ_D(V_i) = kV_i^{1/2} (for some positive constant k) for all i. If the conditions

(i) c ≥ 1 − s(1 − α^s δ^s),

(ii) c ≤ (1 + (α/δ)^c) / (1 − (α/δ)^c), and

(iii) s ≤ 1 / (1 − α^s δ^s)

hold, then the vectors

A* = α(V_1^{1/2}, V_2^{1/2}, …, V_n^{1/2}) and D* = δ(V_1^{1/2}, V_2^{1/2}, …, V_n^{1/2})

form a strategic equilibrium.

Proof: See the appendix.

³ This is because p_2^(i) = 1 for all i.

⁴ This is because Σ_{i=1}^n p_2^(i) = 1.


Note that for small values of α and δ, conditions (i), (ii), and (iii) can be replaced by
the simpler (but less sharp) alternatives

s ∈ (0, 1] and c ∈ (1 − s, 1].

Note further that in equilibrium,

f_i = p_1 exp(−α^s δ^s) · α^c / (α^c + δ^c),   (5)

which is both constant across i and independent of the parameter s. A closer examination of (5)
reveals that for small values of α^s δ^s,

f_i ≈ p_1 · α^c / (α^c + δ^c) = p_1 · A^c / (A^c + D^c).   (6)
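The constancy claimed in (5) can be confirmed numerically at the candidate allocations; all parameter values below are illustrative assumptions:

```python
import math

V = [1e6, 2e6, 5e6, 1e7, 4e7]          # illustrative target values
A_tot, D_tot, p1, s, c = 3.0, 9.0, 0.1, 0.8, 0.5

root = [v ** 0.5 for v in V]
alpha = A_tot / sum(root)               # alpha of Theorem 1
delta = D_tot / sum(root)               # delta of Theorem 1

def freq(a, d, v):
    """Equation (4): f_i at allocations a = A_i, d = D_i."""
    return p1 * math.exp(-(a**s) * (d**s) / v**s) * a**c / (a**c + d**c)

f = [freq(alpha * r, delta * r, v) for r, v in zip(root, V)]
print(f)  # identical across targets, as equation (5) asserts
```

At A_i = αV_i^{1/2} and D_i = δV_i^{1/2}, both A_i^s D_i^s / V_i^s = α^s δ^s and the Blotto ratio reduce to target-independent constants, which is why every entry agrees.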
3.2. Random Attack on One Target

Now consider the case in which the terrorists decide to attack only one target, chosen
randomly with probability λ_i (such that Σ_{j=1}^n λ_j = p_1), so that

f_i = λ_i exp(−A_i^s D_i^s / V_i^s) · A_i^c / (A_i^c + D_i^c).

Theorem 2: Define α = A / Σ_{j=1}^n V_j^{1/2}, δ = D / Σ_{j=1}^n V_j^{1/2}, and
β = p_1 / Σ_{j=1}^n V_j^{1/2−ε}, and let λ_i = βV_i^{1/2−ε} and φ_D(V_i) = kV_i^ε (for some positive
constant k) for all i, where ε ∈ (0, 1/2). If conditions (i) through (iii) of Theorem 1 hold, then
the vectors

A* = α(V_1^{1/2}, V_2^{1/2}, …, V_n^{1/2}) and D* = δ(V_1^{1/2}, V_2^{1/2}, …, V_n^{1/2})

form a strategic equilibrium.

Proof: Replace the factor p_1 φ_D(V_i) = p_1 kV_i^{1/2} in each term of the objective function
Σ_{i=1}^n f_i φ_D(V_i) by λ_i φ_D(V_i) = βV_i^{1/2−ε} · kV_i^ε = βkV_i^{1/2}. The proof is then
completely analogous to that of Theorem 1.

Note that in equilibrium,

f_i = βV_i^{1/2−ε} exp(−α^s δ^s) · α^c / (α^c + δ^c),   (7)

which is proportional to V_i^{1/2−ε} and independent of the parameter s. For small values of
α^s δ^s,

f_i ≈ βV_i^{1/2−ε} · α^c / (α^c + δ^c) = βV_i^{1/2−ε} · A^c / (A^c + D^c).   (8)
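A parallel check for Theorem 2 confirms the proportionality in (7); again the parameter values (including ε = 0.05) are illustrative assumptions:

```python
import math

V = [1e6, 2e6, 5e6, 1e7, 4e7]
A_tot, D_tot, p1, s, c, eps = 3.0, 9.0, 0.1, 0.8, 0.5, 0.05

root = [v ** 0.5 for v in V]
alpha, delta = A_tot / sum(root), D_tot / sum(root)
beta = p1 / sum(v ** (0.5 - eps) for v in V)
lam = [beta * v ** (0.5 - eps) for v in V]     # lambda_i of Theorem 2

def freq(lam_i, a, d, v):
    """f_i = lambda_i exp(-A_i^s D_i^s / V_i^s) A_i^c / (A_i^c + D_i^c)."""
    return lam_i * math.exp(-(a**s) * (d**s) / v**s) * a**c / (a**c + d**c)

f = [freq(l, alpha * r, delta * r, v) for l, r, v in zip(lam, root, V)]
ratios = [fi / v ** (0.5 - eps) for fi, v in zip(f, V)]
print(ratios)  # constant: f_i is proportional to V_i^(1/2 - eps), as in (7)
```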

Finally, consider what happens if we attempt to endogenize λ = (λ_1, λ_2, …, λ_n) (subject
to Σ_{j=1}^n λ_j = p_1) as a strategy for the attacker.

Theorem 3: If the attacker is free to select λ = (λ_1, λ_2, …, λ_n) (subject to Σ_{j=1}^n λ_j = p_1)
as a strategy, then an equilibrium solution cannot exist.

Proof: Assume that an equilibrium exists. Since ∂²E[L_D]/∂λ_i² = 0 for all i, it follows that the
optimal λ = (λ_1, λ_2, …, λ_n) must occur at a boundary. In other words, the attacker does best
by selecting λ_i = p_1 for the target i for which exp(−A_i^s D_i^s / V_i^s) · (A_i^c / (A_i^c + D_i^c)) · V_i^ε
is maximized, and by selecting λ_i = 0 otherwise. However, as soon as the attacker selects
λ_i = p_1 for some target i, the defender’s best response is to allocate all of its resources to i. As
a result, exp(−A_i^s D_i^s / V_i^s) · (A_i^c / (A_i^c + D_i^c)) · V_i^ε is no longer maximized at i, and we
have a contradiction.

There really is no need to endogenize λ = (λ_1, λ_2, …, λ_n), because the probabilities
λ_i ∝ V_i^{1/2−ε} identified by Theorem 2 are imposed automatically by the identification of
φ_D(V_i) = kV_i^ε. In other words, once we know φ_D(V_i) = kV_i^ε, the choice of λ_i ∝ V_i^{1/2−ε}
is forced, and endogenizing λ = (λ_1, λ_2, …, λ_n) actually over-specifies the model.
4. Discussion
4.1. Intrinsic Value
Although the intrinsic value unit proposed in Section 2 is somewhat vague, it facilitates
the comparison of qualitatively different types of assets (money, land, buildings, life, health,
etc.). For simplicity, and especially in an insurance context in which losses are compensated
with money, it is reasonable to think of the intrinsic value actually as some transformation of
monetary value.
4.2. Results
Despite their differences regarding the attack mechanism, Theorems 1 and 2 provide
identical equilibrium allocations of resources for the defender and the attacker. In both cases, the
attacker and defender simply allocate their resources in direct proportion to the square root of the
intrinsic values of the various targets.
Under the conditions of Theorem 1, the equilibrium frequency – i.e., the probability of a
successful attack, given by equations (5) and (6) – is exactly the same for every target, depending
only on the ratio of the total resources available to the attacker to the total resources available to
the defender, and increases as the attacker’s relative resources increase. Consequently, the
frequency of an event at target i is independent of the target’s value, which means that in setting
insurance premiums, the frequency component for every insured target should be the same.

Naturally, the model underlying Theorem 1 is somewhat unrealistic because terrorists do not
have sufficient resources to attack all possible targets simultaneously.
The conditions underlying Theorem 2 address this shortcoming by allowing the terrorist
to attack only one target at a time. In this case, the equilibrium frequency – given by equations
(7) and (8) – increases with the value of the target. This makes intuitive sense because one
would expect a terrorist organization that can choose only one target to seek a high return on its
invested resources. The result also indicates that in setting insurance premiums, the frequency
component should increase in proportion to V_i^{1/2−ε}. Since Theorem 3 ensures that there is no
equilibrium solution in which the attacker endogenizes the attack probability, the most important
remaining issue is to estimate the parameter ε of the loss function.
4.3. Estimating the Loss Function

Although many different types of loss functions exist, it is generally practicable to
approximate any increasing and concave-downward loss function with the power function of
Theorem 2. We give the following example.

Consider the loss function proposed by Woo (2002); i.e., φ_D(V_i) = ln(V_i). We can
approximate this function by φ_D(V_i) = kV_i^ε over the catastrophe loss interval [10^6, 10^12]
by solving the following two equations simultaneously:

ln(10^6) = k(10^6)^ε and ln(10^12) = k(10^12)^ε.

This yields

(10^12)^ε / (10^6)^ε = ln(10^12) / ln(10^6) = 2, i.e., 10^{6ε} = 2,

which implies

ε = (1/6) log_10 2 ≈ 0.05

and

k = ln(10^6) / (10^6)^{(1/6) log_10 2} = ln(10^6) / 2 ≈ 6.91.
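The two-equation fit above can be reproduced in a few lines; the interval endpoints are those of the example:

```python
import math

lo, hi = 10.0 ** 6, 10.0 ** 12            # catastrophe loss interval
# ln(lo) = k lo^eps and ln(hi) = k hi^eps imply (hi/lo)^eps = ln(hi)/ln(lo) = 2
eps = math.log(2.0) / math.log(hi / lo)   # = (1/6) log10(2) ≈ 0.05
k = math.log(lo) / lo ** eps              # = ln(10^6) / 2 ≈ 6.91
print(eps, k)
```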

5. Conclusions
In extending Major’s (2002) work, we allow the terrorists to attack multiple targets at the
same time. In addition, since both players want to maximize or minimize their payoffs (expected
gains or losses), we employ the method of Lagrange multipliers to solve the problem. Also, we
update the conditional probability of destruction given remaining undetected based upon the
classical gambler’s ruin model. Our analytical results show that the defenders should allocate
their resources to all targets proportionally, which contradicts Woo’s (2002) rule of following the
path of least resistance.
Nonetheless, there are some limitations in our model. First, Yusef (2004) argues that the
terrorism game is non-zero-sum in nature, as are most games in the real world, because of
substantial differences between the utility functions of the opponents. Second, although we
recognize that targets have different values, we still assume the targets to be homogeneous apart
from their values. Sandler and Arce (2003) suggest that there might be different types of target
choices, such as business and tourism, which cannot be treated in the same way. These issues
should be addressed through further research.
References
Armstrong, J. Scott, “How to Avoid Surprises in the War on Terrorism: What to Do when You
Are between Iraq and a Hard Place,” 2002, the Wharton School, University of Pennsylvania,
Philadelphia, PA, online at: http://www.chforum.org/library/gaming.html.
Florentine, Christopher, Isenstein, Mindy, Libet, Jared, Neece, Steve, Zeng, Jim, Haimes, Yacov
Y., and Horowitz, Barry M. (2003), “A Risk-Based Methodology for Combating Terrorism,”
Proceedings of the 2003 Systems and Information Engineering Design Symposium.
Haimes, Yacov Y. and Horowitz, Barry M. (2004), “Adaptive Two-Player Hierarchical
Holographic Modeling Game for Counterterrorism Intelligence Analysis,” Journal of
Homeland Security and Emergency Management, 1, 3.
Harris, Bernard (2004), “Mathematical Methods in Combating Terrorism,” Risk Analysis, 24, 4,
985-988.
Kunreuther, Howard and Erwann, Michel-Kerjan (2004), “Insurability of (Mega)-Terrorism
Risk: Challenges and Perspectives,” OECD Task Force on Terrorism Insurance, p. 56.
Major, John A. (2002), “Advanced Techniques for Modeling Terrorism Risk,” Journal of Risk
Finance, 4, 1.
Paté-Cornell, Elisabeth (2002), “Fusion of Intelligence Information: A Bayesian Approach,”
Risk Analysis, 22, 3, 445-454.
Sandler, Todd and Arce, Daniel G. (2003), “Terrorism and Game Theory,” Simulation and
Gaming, 34, 3.
Oster, Christopher (2002), “Can the Risk of Terrorism Be Calculated by Insurers? Game Theory
Might Do It,” the Wall Street Journal, online at
http://www.gametheory.net/news/Items/011.html.
Woo, Gordon (2002), “Quantitative Terrorism Risk Assessment,” Journal of Risk Finance, 4, 1,
7-14.
Yusef, Moeed (2004), “The United States and Islamic Radicals: Conflict Unending?”
International Social Science Review, 79, 1-2, 27-43.
Appendix
Proof of Theorem 1
(1) Define the probability function for success of destruction

Based on the analysis above, we set up the probability function based on two theories,
search theory and the Colonel Blotto game:

p_i = p_i^s · p_i^c = exp(−A_i^s D_i^s / V_i^s) · A_i^c / (A_i^c + D_i^c),

where p_i^s represents the probability of remaining undetected and p_i^c represents the
probability of successful destruction when undetected.
(2) Terrorism model using Lagrange multipliers (generalized)

L = Σ_{i=1}^n f_i φ_D(V_i) = p_1 k Σ_{i=1}^n p_i V_i^{1/2} (Note: p_1 and k are constants.)

p_i = F(V_i, A_i, D_i) = exp(−A_i^s D_i^s / V_i^s) · A_i^c / (A_i^c + D_i^c)

max_A L s.t. g(A) = Σ_i A_i = A

min_D L s.t. g(D) = Σ_i D_i = D

grad(L) − λ_A grad(g(A)) − λ_D grad(g(D)) = 0

(∂L/∂A_1, …, ∂L/∂A_n, ∂L/∂D_1, …, ∂L/∂D_n) − λ_A (1, …, 1, 0, …, 0) − λ_D (0, …, 0, 1, …, 1) = 0
We look for the solutions for A_i and D_i that satisfy the conditions below.

First-Order Conditions:

∂L/∂A_i = p_1 k (∂p_i/∂A_i) V_i^{1/2} = λ_A, ∀i,

∂L/∂D_i = p_1 k (∂p_i/∂D_i) V_i^{1/2} = λ_D, ∀i.

Second-Order Conditions:

∂²L/∂A_i² = p_1 k (∂²p_i/∂A_i²) V_i^{1/2} < 0, ∀i,

∂²L/∂D_i² = p_1 k (∂²p_i/∂D_i²) V_i^{1/2} > 0, ∀i.
(3) Relationship among V_i, A_i, and D_i (analytical solution for A_i and D_i)

The analytical solution for A_i and D_i might take the following form:

A_i = αV_i^{1/2} and D_i = δV_i^{1/2},

where α and δ are positive small numbers. Thus, we can easily get

α = A / Σ_{j=1}^n V_j^{1/2} and δ = D / Σ_{j=1}^n V_j^{1/2}.
(4) First-order condition validation

(a) First-order conditions w.r.t. A_i:

∂p_i/∂A_i = exp(−A_i^s D_i^s / V_i^s) · [ −(s A_i^{s−1} D_i^s / V_i^s) · A_i^c / (A_i^c + D_i^c)
+ c A_i^{c−1} D_i^c / (A_i^c + D_i^c)² ].

Substituting A_i = αV_i^{1/2} and D_i = δV_i^{1/2} then gives

∂L/∂A_i = p_1 k (∂p_i/∂A_i) V_i^{1/2}
= p_1 k exp(−α^s δ^s) · [ −s α^{s+c−1} δ^s / (α^c + δ^c) + c α^{c−1} δ^c / (α^c + δ^c)² ],

which is the same constant for every i, so a single multiplier λ_A satisfies the first-order
conditions.

(b) First-order conditions w.r.t. D_i:

∂p_i/∂D_i = exp(−A_i^s D_i^s / V_i^s) · [ −(s A_i^s D_i^{s−1} / V_i^s) · A_i^c / (A_i^c + D_i^c)
− c A_i^c D_i^{c−1} / (A_i^c + D_i^c)² ].

Substituting A_i = αV_i^{1/2} and D_i = δV_i^{1/2} then gives

∂L/∂D_i = p_1 k (∂p_i/∂D_i) V_i^{1/2}
= p_1 k exp(−α^s δ^s) · [ −s α^{s+c} δ^{s−1} / (α^c + δ^c) − c α^c δ^{c−1} / (α^c + δ^c)² ],

which is likewise the same constant for every i, so a single multiplier λ_D satisfies the
first-order conditions.
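The first-order-condition validation above can be double-checked by finite differences: at A_i = αV_i^{1/2} and D_i = δV_i^{1/2}, the derivative ∂L/∂A_i should take a single common value λ_A across targets. The parameter values below are illustrative assumptions:

```python
import math

V = [1e6, 2e6, 5e6, 1e7]
A_tot, D_tot, p1, k, s, c = 3.0, 9.0, 0.1, 1.0, 0.8, 0.5

root = [v ** 0.5 for v in V]
alpha, delta = A_tot / sum(root), D_tot / sum(root)

def term(a, d, v):
    """One term of the Lagrangian objective: p1 k p_i V_i^(1/2)."""
    p = math.exp(-(a**s) * (d**s) / v**s) * a**c / (a**c + d**c)
    return p1 * k * p * v ** 0.5

def dL_dA(i, h=1e-7):
    """Central finite difference of L with respect to A_i at the candidate."""
    a, d, v = alpha * root[i], delta * root[i], V[i]
    return (term(a + h, d, v) - term(a - h, d, v)) / (2.0 * h)

grads = [dL_dA(i) for i in range(len(V))]
print(grads)  # equal across i: one multiplier lambda_A fits every target
```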
(5) Constraints on parameters (derivation from second-order conditions)

(a) Second-order conditions w.r.t. A_i:

∂²L/∂A_i² = p_1 k (∂²p_i/∂A_i²) V_i^{1/2} < 0, ∀i.

Differentiating ∂p_i/∂A_i once more and substituting the solutions A_i = αV_i^{1/2} and
D_i = δV_i^{1/2} yields

∂²p_i/∂A_i² = [exp(−α^s δ^s) α^{c−2} / (V_i (α^c + δ^c))] × [ s² α^{2s} δ^{2s} − s(s + c − 1) α^s δ^s
+ cs α^{s+c} δ^s / (α^c + δ^c) − cs α^s δ^{s+c} / (α^c + δ^c) + c(c − 1) δ^c / (α^c + δ^c)
− 2c² α^c δ^c / (α^c + δ^c)² ],

so that the attacker’s second-order condition reduces to

s² α^{2s} δ^{2s} − s(s + c − 1) α^s δ^s + cs α^{s+c} δ^s / (α^c + δ^c) − cs α^s δ^{s+c} / (α^c + δ^c)
+ c(c − 1) δ^c / (α^c + δ^c) − 2c² α^c δ^c / (α^c + δ^c)² < 0.   (1)
(b) Second-order conditions w.r.t. D_i:

∂²L/∂D_i² = p_1 k (∂²p_i/∂D_i²) V_i^{1/2} > 0, ∀i.

Differentiating ∂p_i/∂D_i once more and substituting the solutions yields

∂²p_i/∂D_i² = [exp(−α^s δ^s) α^c δ^{−2} / (V_i (α^c + δ^c))] × [ s² α^{2s} δ^{2s} − s(s − 1) α^s δ^s
+ 2cs α^s δ^{s+c} / (α^c + δ^c) − c(c − 1) δ^c / (α^c + δ^c) + 2c² δ^{2c} / (α^c + δ^c)² ],

so that the defender’s second-order condition reduces to

s² α^{2s} δ^{2s} − s(s − 1) α^s δ^s + 2cs α^s δ^{s+c} / (α^c + δ^c) − c(c − 1) δ^c / (α^c + δ^c)
+ 2c² δ^{2c} / (α^c + δ^c)² > 0.   (2)
(c) Weak sufficient conditions for parameter constraints

Suppose that all parameters are positive. From inequality (1), it is sufficient that each of the
following holds:

(i) s² α^{2s} δ^{2s} − s(s + c − 1) α^s δ^s ≤ 0
⇒ s α^s δ^s ≤ s + c − 1
⇒ c ≥ 1 − s(1 − α^s δ^s).   (3)

(ii) cs α^{s+c} δ^s / (α^c + δ^c) − cs α^s δ^{s+c} / (α^c + δ^c) ≤ 0 ⇔ α^c ≤ δ^c. (This is true,
since D > A.)

(iii) c(c − 1) δ^c / (α^c + δ^c) − 2c² α^c δ^c / (α^c + δ^c)² ≤ 0
⇔ (c − 1)(α^c + δ^c) ≤ 2c α^c
⇔ c(δ^c − α^c) ≤ α^c + δ^c
⇔ c ≤ (1 + (α/δ)^c) / (1 − (α/δ)^c).   (4)

From inequality (2), it is sufficient that each of the following holds:

(iv) s² α^{2s} δ^{2s} − s(s − 1) α^s δ^s ≥ 0
⇒ s − 1 ≤ s α^s δ^s
⇒ s(1 − α^s δ^s) ≤ 1, i.e., s ≤ 1 / (1 − α^s δ^s).   (5)

(v) 2cs α^s δ^{s+c} / (α^c + δ^c) ≥ 0. (This is always true.)   (6)

(vi) 2c² δ^{2c} / (α^c + δ^c)² − c(c − 1) δ^c / (α^c + δ^c) ≥ 0
⇔ (c − 1)(α^c + δ^c) ≤ 2c δ^c,
which follows from (iii), since 2c δ^c ≥ 2c α^c.   (7)

Conditions (3), (4), and (5) are conditions (i), (ii), and (iii) in the statement of Theorem 1.