Stopping Time Equilibria in Games
with Stochastically Monotone Payoffs
by
Charles H. Fine and Lode Li
OR 158-87
February 1987
Stopping Time Equilibria in Games
with Stochastically Monotone Payoffs
Charles H. Fine and Lode Li*
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02139
March 1986
Revised, February 1987
Abstract
We analyze stopping games with monotone discounted payoffs. We characterize the pure-strategy perfect equilibrium stopping rules for a discrete time model in which the payoff processes are governed by a general Markov process with stochastic monotone properties, and for a continuous time model that follows a Poisson jump process. For both types of information processes, we find that the stopping game may have multiple perfect equilibria. In each case, we provide the conditions for the uniqueness of the equilibria. The uniqueness obtains either from a certain magnitude of asymmetry between the players' reward rates or from a certain smoothness of the information process. The model has applications in the economics of industrial organization.

*The authors gratefully acknowledge comments of John Cox, Chi-fu Huang, and Jean Tirole on an early version of the paper and fruitful discussions with Erhan Cinlar and Michael Harrison.
1. Introduction
The game theoretic extension of the optimal stopping theory was initiated by Dynkin (1969) and followed by Chaput (1974) in an analysis of a class of two-person zero-sum stopping games. In this paper, we analyze a class of non-zero-sum stopping games with monotone discounted payoffs. One application of our model is to the analysis of exit (or entry) behavior by firms competing in a stochastically declining (or expanding) market. For the exit problem, as the market shrinks, firms suffer from declining profits and face the decision problem of when to exit the market. Because a firm can experience an increase in profit rate from a larger market share if its rivals exit first, a gaming situation for the exit decisions results. Our model extends the existing economics literature on exit (Ghemawat and Nalebuff (1985); Fudenberg and Tirole (1986)) on two dimensions. First, our payoff structure is stochastic, so that the stopping problem is non-trivial; second, we utilize a general underlying information process which may have jumps. Because of this generality, our stopping game may have multiple perfect equilibria. We characterize the set of equilibria and provide necessary and sufficient conditions for the uniqueness of the perfect equilibrium. The uniqueness obtains either from a certain magnitude of asymmetry in the players' reward rates or from a certain smoothness of the information process. More recently, Huang and Li (1986) proved that there is a unique perfect equilibrium in the stopping game if the information process has a continuous sample path (e.g., a Brownian motion).

In Section 2 we introduce the basic framework for our discrete time model and the solution to the single-person optimal stopping problem. Section 3 analyzes the two-player discrete-time stopping game and characterizes all the pure-strategy perfect equilibria. The fourth section presents the analysis of a continuous time version of the game with a Poisson jump process. Some concluding remarks follow.
2. The Single Player Stopping Problem
Assume a stage payoff or reward function π(X_t), where π(·) is an increasing, real-valued function and {X_t; t = 1, 2, ...} is a Markov process, defined by a family of transition functions p(x, t; A, s), x ∈ R, A ∈ B(R), and s > t, s, t ∈ Z_+ ≡ {0, 1, 2, ...}. For each period t > 0 and x ∈ R, denote the probability measure of X_t conditional on X_{t-1} = x by P_t^x(·) ≡ P{X_t ∈ · | X_{t-1} = x}. To model stochastically monotone payoffs, we assume that for all t > 0, P_t^x dominates P_{t+1}^x in the sense of first order stochastic dominance. Formally, for any t > 0 and x, y ∈ R,

    ∫_{z>y} dP_t^x(z) ≥ ∫_{z>y} dP_{t+1}^x(z).

We denote this ordering by P_t^x ≻ P_{t+1}^x.
At each time t, the decision maker (or player) observes {X_s; s ≤ t} and then decides whether to continue play and get π(X_t) in period t, or to stop play and get zero payoffs thereafter. We assume that for each t, the likelihood of having a high future state is increasing in the current state X_t. That is, for x > x', P_{t+1}^x stochastically dominates P_{t+1}^{x'} for all t ≥ 0.
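Both monotonicity assumptions are easy to satisfy. For instance (an illustrative assumption, not the model used later in the paper), a Gaussian process X_t = X_{t-1} - d_{t-1} + ε_t with d_t increasing in t satisfies P_t^x ≻ P_{t+1}^x and is stochastically increasing in the current state. A minimal numeric check of both dominance relations:

```python
import math

def tail_prob(mean, sd, y):
    """P{Z > y} for Z ~ Normal(mean, sd^2), via the error function."""
    return 0.5 * (1.0 - math.erf((y - mean) / (sd * math.sqrt(2.0))))

# Illustrative declining-market kernel (an assumption, not the paper's model):
# given X_{t-1} = x, X_t ~ Normal(x - d_{t-1}, sigma^2) with d_t increasing in t.
sigma = 1.0
d = [0.1 * t for t in range(10)]            # d_t increasing in t

for t in range(1, 9):
    for y in (-2.0, 0.0, 3.0):
        # decline over time: P_t^x puts more mass above y than P_{t+1}^x
        assert tail_prob(5.0 - d[t - 1], sigma, y) >= tail_prob(5.0 - d[t], sigma, y)
        # monotonicity in the current state: P_t^x dominates P_t^{x'} for x > x'
        assert tail_prob(5.0 - d[t - 1], sigma, y) >= tail_prob(4.0 - d[t - 1], sigma, y)
print("stochastic monotonicity checks passed")
```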
A player's strategy is a random time T : Ω → Z_+, where Ω is the space of sample paths of {X_t; t ≥ 0}. We require T to be a stopping time. That is, T : Ω → Z_+ is such that {T ≤ t} ∈ F_t for every t ∈ Z_+, where F_t ≡ σ(X_s; s ≤ t), the σ-field generated by {X_s; s ≤ t}. Let δ be the player's discount factor. The player's problem is to choose a stopping time T so as to maximize the expected discounted reward

    E[Σ_{t=0}^{T-1} δ^t π(X_t) | X_0].    (2.1)
This is a standard optimal stopping problem (see, e.g., Shiryayev (1978)). Define t* ≡ inf{t : π(X_t) < 0, a.s.}, the first time that stage payoffs are negative almost surely. We assume that 0 < t* < ∞. Proposition 2.1 below characterizes the optimal stopping times for this problem. The proposition is preceded by a technical lemma that orders the expectations of the positive parts of monotone ordered functions of random variables that are ordered by stochastic dominance.
Lemma 2.1. Suppose that f_1 and f_2 are increasing functions and f_1(z) ≥ f_2(z) for z ∈ R. And suppose that Z_1 and Z_2 are random variables with probability measures μ_1 and μ_2, respectively. Moreover, μ_1 stochastically dominates μ_2. Then E[f_1(Z_1)^+] ≥ E[f_2(Z_2)^+], where

    E[f_i(Z_i)^+] ≡ E[f_i(Z_i) 1_{(0,∞)}(f_i(Z_i))],  i = 1, 2,

and strict inequality holds if E[1_{(0,∞)}(f_1(Z_1))] > 0.
Proof. First note that f_1 ≥ f_2 implies {z ∈ R : f_1(z) > 0} ⊇ {z ∈ R : f_2(z) > 0}. Since μ_1 ≻ μ_2,

    E[f_1(Z_1) 1_{(0,∞)}(f_1(Z_1))] = ∫_{f_1(z)>0} f_1(z) dμ_1(z)
        ≥ ∫_{f_2(z)>0} f_1(z) dμ_1(z)
        ≥ ∫_{f_2(z)>0} f_2(z) dμ_1(z)
        ≥ ∫_{f_2(z)>0} f_2(z) dμ_2(z)
        = E[f_2(Z_2) 1_{(0,∞)}(f_2(Z_2))].
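The lemma can be checked numerically on discrete distributions. The support, weights, and functions below are illustrative assumptions chosen so that μ_1 first-order dominates μ_2 and f_1 ≥ f_2 with both increasing:

```python
# Discrete check of Lemma 2.1: E[f1(Z1)^+] >= E[f2(Z2)^+] when
# f1 >= f2 are increasing and mu1 stochastically dominates mu2.
support = [-2.0, -1.0, 0.0, 1.0, 2.0]
mu1 = [0.1, 0.1, 0.2, 0.3, 0.3]   # shifts mass upward relative to mu2
mu2 = [0.3, 0.3, 0.2, 0.1, 0.1]
f1 = lambda z: z + 0.5
f2 = lambda z: z

def check_dominance(p, q, pts):
    # mu1 dominates mu2 iff P1{Z > y} >= P2{Z > y} for all y
    for y in pts:
        assert sum(pi for z, pi in zip(pts, p) if z > y) >= \
               sum(qi for z, qi in zip(pts, q) if z > y)

def pos_part_mean(f, mu, pts):
    # E[f(Z)^+] = E[f(Z) 1{f(Z) > 0}]
    return sum(max(f(z), 0.0) * w for z, w in zip(pts, mu))

check_dominance(mu1, mu2, support)
assert pos_part_mean(f1, mu1, support) >= pos_part_mean(f2, mu2, support)
```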
Proposition 2.1.
i. The maximal optimal stopping time for the problem described in (2.1) can be characterized by T ≡ inf{t ≥ 0 : X_t ∈ B_t}, where B_t ≡ {u_t(X_t) < 0}, u_t(X_t) = π(X_t) + δ w_{t+1}(X_t), and w_{t+1}(x) = E[u_{t+1}(X_{t+1})^+ | X_t = x], with u_t(·) = 0 for t ≥ t*.
ii. For t ≥ 0 and x ∈ R, w_t(x) ≥ w_{t+1}(x) and u_t(x) ≥ u_{t+1}(x). Let y_t = sup{x ∈ R : u_t(x) < 0}. Then y_t is increasing in t, B_t = {X_t < y_t}, and B_t ⊆ B_{t+1} for t ≥ 0.
Proof. Part i follows from the optimality principle of dynamic programming, cf. Shiryayev (1978), Chapter 2.

To prove ii, note that w_t = 0 for t ≥ t*. By induction, if w_{t+1}(·) is increasing, then u_t(·) = π(·) + δ w_{t+1}(·) is increasing since π is increasing. Hence w_t(·) = E[u_t(X_t)^+ | X_{t-1} = ·] is increasing by Lemma 2.1. To show that w_t and u_t are monotone as functions of t, note that

    w_{t*-1}(x) = E[π(X_{t*-1})^+ | X_{t*-2} = x] ≥ 0 = w_{t*}(x).

Again, by induction, if w_{t+1} - w_{t+2} ≥ 0 for 0 ≤ t ≤ t* - 2, then u_t(x) - u_{t+1}(x) = δ(w_{t+1}(x) - w_{t+2}(x)) ≥ 0. Furthermore, Lemma 2.1 and the assumption of stochastic dominance imply w_t(x) ≥ w_{t+1}(x) for any x ∈ R, since u_t(·) is increasing. The facts that B_t can be written as {X_t < y_t} with y_t = sup{x : u_t(x) < 0} and that B_t ⊆ B_{t+1} for t ≥ 0 follow from the monotonicity of u_t(x) in t and x. ∎
Note that u_t(X_t) is just the expected discounted payoff from time t if the player is in the game at t, decides to stay for one more period, and employs the optimal stopping time from t + 1 on. Therefore, the optimal policy is to stop the first time that X_t falls so low that u_t(X_t) < 0 or, equivalently, X_t < y_t. Figure 1 illustrates a sample path of X_t, the cutoff point y_t as a function of t, and the optimal stopping time T. The dashed horizontal line represents the zero stage-payoff line, y^0 ≡ sup{z : π(z) < 0}. The cutoff points y_t always lie below this line. That is, at each time t, the player is willing to sustain a certain level of current loss in the hope of receiving future positive rewards. Certainly, T may not be the unique solution. For example, T' ≡ inf{t ≥ 0 : u_t(X_t) ≤ 0} is another (minimal) optimal stopping time which may differ from T with positive probability. However, they have, generically, the same form.
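The recursion u_t(x) = π(x) + δ E[u_{t+1}(X_{t+1})^+ | X_t = x] can be computed by backward induction once the state space and transition kernel are discretized. The grid, up/down kernel, linear payoff, and horizon below are illustrative assumptions, not part of the model; the run checks that the cutoffs y_t (here the largest grid state with u_t ≤ 0) increase in t and stay below the zero-payoff line y^0, as Proposition 2.1 asserts:

```python
# Backward-induction sketch of the recursion in Proposition 2.1 on a finite grid.
# The grid, transition kernel, payoff, and horizon are illustrative assumptions.
delta = 0.9
grid = [round(i * 0.5, 1) for i in range(-8, 9)]    # states -4.0 .. 4.0
pi = lambda x: x - 0.5                               # increasing stage payoff
t_star = 6                                           # u_t = 0 for t >= t_star
lo, hi = min(grid), max(grid)

def clip(x):
    return max(lo, min(hi, round(x, 1)))

def expect_pos(u_next, x):
    """E[u_{t+1}(X_{t+1})^+ | X_t = x] under a simple up/down kernel."""
    return 0.3 * max(u_next[clip(x + 0.5)], 0.0) + 0.7 * max(u_next[clip(x - 0.5)], 0.0)

u = {t_star: {x: 0.0 for x in grid}}
for t in range(t_star - 1, -1, -1):
    u[t] = {x: pi(x) + delta * expect_pos(u[t + 1], x) for x in grid}

# Discretized cutoffs: stop the first time X_t falls to y_t or below.
y = [max(x for x in grid if u[t][x] <= 0.0) for t in range(t_star)]
assert all(y[t] <= y[t + 1] for t in range(t_star - 1))   # y_t increasing in t
assert all(yt <= 0.5 for yt in y)                          # cutoffs lie below y^0 = 0.5
print("cutoffs:", y)
```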
Proposition 2.2. Suppose π^1(z) ≥ π^2(z) for z ∈ R, or δ^1 ≥ δ^2. Define, for i = 1, 2 and t ≥ 0, u_t^i, w_t^i, y_t^i, and T^i as in Proposition 2.1. Then u_t^1 ≥ u_t^2, w_t^1 ≥ w_t^2, and y_t^1 ≤ y_t^2 for t ≥ 0, and T^1 ≥ T^2 a.s.

Proof. Let t^i ≡ inf{t : π^i(X_t) < 0, a.s.}, i = 1, 2. Obviously, t^1 ≥ t^2 and w_t^1 ≥ w_t^2 = 0 for t ≥ t^1. Suppose that w_{t+1}^1 ≥ w_{t+1}^2. Then

    u_t^1(x) = π^1(x) + δ w_{t+1}^1(x) ≥ π^2(x) + δ w_{t+1}^2(x) = u_t^2(x)

for x ∈ R, and this in turn implies that w_t^1(x) ≥ w_t^2(x) for x ∈ R. That y_t^1 ≤ y_t^2 is obvious by noticing that y_t^i = sup{x : u_t^i(x) < 0}. Therefore, B_t^1 ⊆ B_t^2, or (B_t^1)^c ⊇ (B_t^2)^c, for t ≥ 0, which implies that {T^1 > t} = ∩_{s≤t} (B_s^1)^c ⊇ ∩_{s≤t} (B_s^2)^c = {T^2 > t} for t ≥ 0. So T^1 ≥ T^2 a.s. The proof is completed by induction. The results for δ^1 ≥ δ^2 follow from a similar argument. ∎
Proposition 2.2 provides a very convenient tool for the comparative statics analysis of the optimal stopping time. It says that looking at the stage payoff is sufficient.

For example, in an economic setting, consider a monopolist producing a homogeneous good at a constant cost c and having the option in each period whether to stay in a market which is stochastically declining over time, or to leave. In period t, the random variable X_t is the intercept of the linear inverse demand curve; the constant -b is the slope. The opportunity cost of staying in the market is k. (Alternatively, the firm has to pay a fixed fee, k, each period in order to participate in the production activity.) In each period t, the maximum profit that the monopolist can obtain is

    π(X_t) = (X_t - c)^2 / (4b) - k,  if X_t ≥ c;
    π(X_t) = -k,                      otherwise.

Clearly π(·) is an increasing function. Also π decreases as c, b, or k increases. This fact, together with an application of Proposition 2.2, implies that the optimal exit time T decreases almost surely as c, b, or k increases or δ decreases. That is, the monopolist optimally exits earlier when the marginal cost of production is higher, a better alternative opportunity exists (a higher k), the demand curve is steeper (a higher b), or the firm is less patient (a smaller δ).
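These comparative statics can be checked directly on the stage payoff. With inverse demand p = X_t - bq, the monopolist's optimal quantity is q* = (X_t - c)/(2b), yielding the payoff (X_t - c)^2/(4b) - k for X_t ≥ c (and -k otherwise), and the break-even intercept solves (x - c)^2/(4b) = k, i.e., x = c + 2√(bk). A small sketch (parameter values are illustrative):

```python
import math

def stage_payoff(x, c, b, k):
    """Monopoly stage payoff: max_q q*(x - b*q - c) - k = (x-c)^2/(4b) - k for x >= c."""
    return (x - c) ** 2 / (4.0 * b) - k if x >= c else -k

def zero_profit_cutoff(c, b, k):
    """y0 = sup{x : payoff(x) <= 0} = c + 2*sqrt(b*k)."""
    return c + 2.0 * math.sqrt(b * k)

# the payoff is increasing in x and decreasing in c, b, k (Proposition 2.2 inputs)
assert stage_payoff(3.0, 1.0, 1.0, 0.5) < stage_payoff(4.0, 1.0, 1.0, 0.5)
assert stage_payoff(4.0, 1.5, 1.0, 0.5) < stage_payoff(4.0, 1.0, 1.0, 0.5)

# the break-even intercept rises with each cost parameter,
# so the monopolist exits earlier when c, b, or k is larger
base = zero_profit_cutoff(1.0, 1.0, 0.5)
assert zero_profit_cutoff(1.5, 1.0, 0.5) > base
assert zero_profit_cutoff(1.0, 2.0, 0.5) > base
assert zero_profit_cutoff(1.0, 1.0, 1.0) > base
assert abs(stage_payoff(base, 1.0, 1.0, 0.5)) < 1e-12   # payoff vanishes at the cutoff
```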
3. Equilibria in the Two Player Stopping Game
For i, j = 1, 2, denote by π_{ij}(X_t) the stage payoff to player i in period t when there are j players in the game and the state is X_t. For j = 1, 2 and for each player i, the stage payoff π_{ij}(·) is increasing. Analogous to a market game where firms prefer monopoly to sharing the market, we assume that each player has a higher stage payoff when he is the only player in the game. That is, π_{i2}(z) ≤ π_{i1}(z) for z ∈ R and i = 1, 2. At each time t, the active players observe {X_s; s ≤ t} and then simultaneously decide whether to stay in the game and earn π_{ij}(X_t) (depending upon the number of players who stay in) or to quit and get zero payoffs thereafter. Player i's strategy is a stopping time T_i, and the payoff at time t to player i, given his rival's strategy T_{-i} ≡ T_j, j ≠ i, is completely determined by observing the history {X_s; s ≤ t}, i.e.,

    π_t^i(X_t, T_{-i}) ≡ π_{i2}(X_t) 1_{{T_{-i} > t}} + π_{i1}(X_t) 1_{{T_{-i} ≤ t}}.

Let δ_i (0 < δ_i < 1) be the discount factor for player i.
Definition 3.1. (T_1, T_2) is a Nash equilibrium in stopping times if T_1 and T_2 are stopping times and, for i = 1, 2,

    E[Σ_{t=0}^{T_i - 1} δ_i^t π_t^i(X_t, T_{-i}) | X_0] ≥ E[Σ_{t=0}^{T - 1} δ_i^t π_t^i(X_t, T_{-i}) | X_0],  a.s.,

for any stopping time T.
We are interested in the perfect equilibria in the sense of Selten (1975) (see also Kreps and Wilson (1982)). To define a perfect equilibrium in the stopping game setting, first define a family (θ_t, t ≥ 0) of shift operators mapping Ω into Ω by

    θ_t ω(s) = ω(t + s),  for all s ≥ 0.

If T is a stopping time, then we define θ_T by θ_T ω ≡ θ_{T(ω)} ω. Regarding the shifts, we observe that θ_t corresponds to taking the time origin to t, and for any stopping time T, T∘θ_t, defined by T∘θ_t(ω) ≡ T(θ_t ω), is the stopping rule specified by T taking t as the time origin. For more about the shift operators, see Port and Stone (1978).
Definition 3.2. (T_1, T_2) is a perfect equilibrium if T_1 and T_2 are stopping times, and for any stopping times S and T,

    E[Σ_{t=0}^{T_i∘θ_S - 1} δ_i^t π_{S+t}^i(X_{S+t}, S + T_{-i}∘θ_S) | F_S] ≥ E[Σ_{t=0}^{T∘θ_S - 1} δ_i^t π_{S+t}^i(X_{S+t}, S + T_{-i}∘θ_S) | F_S],  a.s.,

and T_i ≤ S + T_i∘θ_S for i = 1, 2.
Definition 3.3. Player 1 is stronger than player 2 if π_{1j}(z) ≥ π_{2j}(z) for j = 1, 2 and z ∈ R.

Definition 3.4. Player 1 is more patient than player 2 if δ_1 ≥ δ_2.
To aid the game-theoretic analysis that follows, we first solve four single-player problems, two for each player. That is, by applying the results in the previous section, we derive the optimal stopping time for each player i as if there were j players in the game throughout i's stay in the game. Each single-player problem is indexed by (i, j) and takes π_{ij} as the stage payoff. Following the notation of the previous section, define recursively, for i, j = 1, 2,

    u_t^{ij}(X_t) = π_{ij}(X_t) + δ_i w_{t+1}^{ij}(X_t),  w_{t+1}^{ij}(x) = E[u_{t+1}^{ij}(X_{t+1})^+ | X_t = x],

with u_t^{ij}(·) = 0 for t ≥ t^{ij} ≡ inf{t ≥ 0 : π_{ij}(X_t) < 0, a.s.}, y_t^{ij} ≡ sup{z ∈ R : u_t^{ij}(z) < 0}, and T_{ij} ≡ inf{t ≥ 0 : X_t < y_t^{ij}}. By Proposition 2.2, we have that for i = 1, 2 and for t ≥ 0, w_t^{i1} ≥ w_t^{i2}, u_t^{i1} ≥ u_t^{i2}, y_t^{i1} ≤ y_t^{i2}, and T_{i1} ≥ T_{i2} a.s.; and that for j = 1, 2 and for t ≥ 0, w_t^{1j} ≥ w_t^{2j}, u_t^{1j} ≥ u_t^{2j}, y_t^{1j} ≤ y_t^{2j}, and T_{1j} ≥ T_{2j} a.s. if player 1 is stronger or more patient than player 2.
Intuitively, one may think that since T_{i1} and T_{i2} are player i's most optimistic and most pessimistic decision rules, T_{i1} and T_{i2} may impose upper and lower bounds for player i's equilibrium stopping time. The following proposition proves that this intuition is correct.
Proposition 3.1. Suppose (T_1, T_2) is a Nash equilibrium. Then T_{i2} ≤ T_i ≤ T_{i1}, a.s., for i = 1, 2.

Proof. If T_{-i} ≤ T_i a.s., then player i's problem reduces to single-player problem (i, 1) with initial state X_{T_{-i}}, and certainly T_i ≤ T_{i1} since T_{i1} is the maximal optimal stopping time.

Suppose T_{-i} > T_i with positive probability. Also suppose that T_i > T_{i1} with positive probability. Let T = T_i ∧ T_{i1}. Then, given T_{-i}, the difference between the equilibrium payoff to i and the expected payoff to i if T is employed is

    E[Σ_{t=0}^{T_i∧T_{-i}-1} δ_i^t π_{i2}(X_t) + Σ_{t=T_i∧T_{-i}}^{T_i-1} δ_i^t π_{i1}(X_t) | X_0] - E[Σ_{t=0}^{T∧T_{-i}-1} δ_i^t π_{i2}(X_t) + Σ_{t=T∧T_{-i}}^{T-1} δ_i^t π_{i1}(X_t) | X_0]
    = E[Σ_{t=T}^{T_i-1} δ_i^t π_{i1}(X_t) | X_0] - E[Σ_{t=T∧T_{-i}}^{T_i∧T_{-i}-1} δ_i^t (π_{i1}(X_t) - π_{i2}(X_t)) | X_0] < 0,

since the first term is nonpositive (T continues at most to T_{i1}, the maximal optimal stopping time for the stage payoff π_{i1}), π_{i1} ≥ π_{i2}, and T∧T_{-i} < T_i∧T_{-i} with positive probability. This contradicts the fact that T_i is the best response to T_{-i}. Therefore, T_i ≤ T_{i1} a.s. for i = 1, 2.

The other part of the proposition is proved similarly. ∎
We now turn to an asymmetric situation in which player 1 is stronger or more patient than player 2, that is, π_{1j} ≥ π_{2j} or δ_1 ≥ δ_2. By Proposition 3.1, one natural equilibrium we can easily infer is (T_{11}, T_{22}): player 1's equilibrium strategies are bounded below by T_{12} (≥ T_{22}), hence its best response to T_{22} is T_{11}, and player 2's equilibrium strategies are bounded above by T_{21} (≤ T_{11}), hence its best response to T_{11} is T_{22}. It is also easy to see that this equilibrium is perfect; given the history up to time t ≥ 0, (T_{11}, T_{22}) is an equilibrium for the game beginning at time t. However, as will be shown, this is only one of the many possible perfect equilibria for this game. Therefore, the assumed asymmetry cannot assure that the stronger or more patient player always outlasts the other player.

Our major result, characterizing all possible pure-strategy perfect equilibria, is
Proposition 3.2. Suppose player 1 is stronger or more patient than player 2. Then (T_1, T_2) is a perfect equilibrium if and only if T_i = inf{t ≥ 0 : X_t ∈ B_{it}} for i = 1, 2, where the exit sets B_{it}, i = 1, 2, t ≥ 0, are defined recursively as follows:

    B_{1t} = (-∞, y_t^{11}] ∪ (A_{1t} ∩ A_{2t}^c) ∪ A_t,    B_{2t} = A_{2t} ∩ A_t^c,

where A_t is a Borel set contained in the set A_{1t} ∩ A_{2t} ∩ (y_t^{21}, ∞),

    A_{it} = {X_t : π_{i2}(X_t) + δ_i w_{t+1}^i(X_t) < 0},  i = 1, 2,

w_{t+1}^i(x) = 0 for t ≥ t^{11}, and w_{t+1}^i(x) = E[u_{t+1}^i(X_{t+1})^+ | X_t = x] for 0 ≤ t < t^{11}, where

    u_t^i(X_t) = [π_{i2}(X_t) + δ_i w_{t+1}^i(X_t)] 1_{B_{jt}^c}(X_t) + [π_{i1}(X_t) + δ_i w_{t+1}^{i1}(X_t)] 1_{B_{jt}}(X_t),  j ≠ i.

Moreover, w_t^{i2} ≤ w_t^i ≤ w_t^{i1}.
Proof. Backward induction. For t ≥ t^{11}, P{X_t ∈ B_{it}^c} ≤ P{π_{i1}(X_t) ≥ 0} = 0 for i = 1, 2. Also, w_t^{i2} = w_t^i = w_t^{i1} = 0 for i = 1, 2. The proposition is trivially true.

Now assume that the proposition is true for s ≥ t + 1. Note that

    A_{it} ⊆ (-∞, y_t^{i2}] and A_{it}^c ⊆ (y_t^{i1}, ∞),    (3.1)

since w_{t+1}^{i2} ≤ w_{t+1}^i ≤ w_{t+1}^{i1} for i = 1, 2. Suppose both players are active at time t. Given any strategy B_{2t} adopted by player 2 at time t, player 1 will be in the game alone from t on if X_t ∈ B_{2t}, and he will be involved in a game which gives expected payoff w_{t+1}^1 if X_t ∈ B_{2t}^c and equilibrium strategies follow from t + 1 on. Remaining active at t, player 1 expects to get

    u_t^1(X_t) = [π_{12}(X_t) + δ_1 w_{t+1}^1(X_t)] 1_{B_{2t}^c}(X_t) + [π_{11}(X_t) + δ_1 w_{t+1}^{11}(X_t)] 1_{B_{2t}}(X_t).

Player 1's best response then is to exit if u_t^1(X_t) < 0 and stay if u_t^1(X_t) > 0. Let B_{1t} = {u_t^1(X_t) < 0}. Player 1 will exit if player 2 exits and X_t ≤ y_t^{11}, or if player 2 stays and X_t ∈ A_{1t}. That is,

    B_{1t} = (B_{2t} ∩ (-∞, y_t^{11}]) ∪ (B_{2t}^c ∩ A_{1t}).    (3.2)

Similarly, given player 1's strategy B_{1t}, player 2's best response should satisfy

    B_{2t} = (B_{1t} ∩ (-∞, y_t^{21}]) ∪ (B_{1t}^c ∩ A_{2t}).    (3.3)

Using fact (3.1), we can rewrite (3.2) and (3.3) as

    B_{1t} = (-∞, y_t^{11}] ∪ (B_{2t}^c ∩ A_{1t}),    (3.4)

    B_{2t}^c = (B_{1t} ∩ (y_t^{21}, ∞)) ∪ A_{2t}^c.    (3.5)

Substituting (3.4) into (3.5), we have

    B_{2t}^c = (B_{2t}^c ∩ A_{1t} ∩ (y_t^{21}, ∞)) ∪ A_{2t}^c,    (3.6)

since y_t^{21} ≥ y_t^{11}. Thus B_{2t} is part of the equilibrium if and only if B_{2t} satisfies (3.6), or B_{2t}^c is a fixed point of f_{2t}(·), where f_{2t}(B) ≡ (B ∩ A_{1t} ∩ (y_t^{21}, ∞)) ∪ A_{2t}^c for any B ∈ B(R). The only possible solution is of the form B_{2t}^c = A_{2t}^c ∪ A_t, where A_t ⊆ A_{1t} ∩ (y_t^{21}, ∞). We let A_t ⊆ A_{1t} ∩ A_{2t} ∩ (y_t^{21}, ∞) simply to make A_t and A_{2t}^c two disjoint sets.

Similarly, we can prove that B_{1t} is part of the equilibrium if and only if

    B_{1t} = (B_{1t} ∩ A_{1t} ∩ (y_t^{21}, ∞)) ∪ (A_{2t}^c ∩ A_{1t}) ∪ (-∞, y_t^{11}],    (3.7)

which gives us the equivalent characterization of a generic equilibrium.

Finally, u_t^{i2} ≤ u_t^i ≤ u_t^{i1} since w_{t+1}^{i2} ≤ w_{t+1}^i ≤ w_{t+1}^{i1}. Therefore, w_t^{i2} ≤ w_t^i ≤ w_t^{i1} for i = 1, 2 and for all t. We conclude the induction. ∎
The importance of the above proposition lies in the fact that it characterizes all possible perfect equilibria. In particular, it points out that we can obtain all perfect equilibria by varying A_t, a Borel subset contained in A_{1t} ∩ A_{2t} ∩ (y^{21}_t, ∞), for every t. Note that if X_t ∈ A_t, then player 1's expected reward is nonnegative if player 2 exits at t (X_t ∈ (y^{11}_t, ∞)) and is negative if player 2 stays (X_t ∈ A_{1t}); player 2's expected reward is nonnegative if player 1 exits (X_t ∈ (y^{21}_t, ∞)), and is negative if player 1 stays (X_t ∈ A_{2t}), assuming some equilibrium strategies will be employed from t + 1 on. Therefore, either the exit of player 1 or the exit of player 2 is consistent with equilibrium behavior, and the multiplicity of perfect equilibria arises. Also, for each t, A_{it}, i = 1, 2, are functions of A_{t+1}, ..., A_{t̄}, the sets picked from t + 1 on.
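To make the recursion concrete, the following sketch computes the exit sets of the equilibrium with A_t = ∅ for a small numerical example. The Markov chain, the payoff functions π_{ij}, the discount factor, and the horizon below are all hypothetical choices for illustration, not taken from the model above.

```python
# Backward recursion for the exit sets B_1t, B_2t of Proposition 3.2,
# with A_t chosen empty for every t (the (T_11, T_22)-type equilibrium).
delta = 0.9                      # discount factor (assumed)
N = 20                           # states 0..N (assumed finite grid)
T_BAR = 25                       # finite horizon standing in for t-bar

def pi(i, j, x):
    """Stage payoff pi_ij(x): linear and increasing in x.  Player 1 is
    stronger (pi_1j >= pi_2j) and monopoly beats duopoly (pi_i1 >= pi_i2)."""
    cut = {(1, 1): 5, (2, 1): 6, (1, 2): 9, (2, 2): 10}
    return x - cut[(i, j)]

def expect(v, x):
    """One-step expectation under a downward-drifting reflected walk."""
    down, stay, up = max(x - 1, 0), x, min(x + 1, N)
    return 0.6 * v[down] + 0.3 * v[stay] + 0.1 * v[up]

w1 = {i: [0.0] * (N + 1) for i in (1, 2)}   # single-firm values w_i1,t+1
w = {i: [0.0] * (N + 1) for i in (1, 2)}    # equilibrium values w^i_{t+1}
B1_sets, B2_sets = [], []
for t in range(T_BAR - 1, -1, -1):
    new_w1 = {i: [max(0.0, pi(i, 1, x) + delta * expect(w1[i], x))
                  for x in range(N + 1)] for i in (1, 2)}
    # A_it = {x : pi_i2(x) + delta * w^i_{t+1}(x) < 0}
    A = {i: {x for x in range(N + 1)
             if pi(i, 2, x) + delta * expect(w[i], x) < 0} for i in (1, 2)}
    # single-firm exit threshold y^{11}_t for player 1
    y11 = max([x for x in range(N + 1)
               if pi(1, 1, x) + delta * expect(w1[1], x) < 0], default=-1)
    B2 = A[2]                                           # A_t empty
    B1 = {x for x in range(N + 1) if x <= y11} | (A[1] - A[2])
    # u^i_t, then w^i_t(x) = E[u^i_t(X')^+ | X = x] at the next step up
    u = {1: [(pi(1, 1, x) + delta * expect(w1[1], x)) if x in B2
             else (pi(1, 2, x) + delta * expect(w[1], x))
             for x in range(N + 1)],
         2: [(pi(2, 1, x) + delta * expect(w1[2], x)) if x in B1
             else (pi(2, 2, x) + delta * expect(w[2], x))
             for x in range(N + 1)]}
    w = {i: [max(0.0, u[i][x]) for x in range(N + 1)] for i in (1, 2)}
    w1 = new_w1
    B1_sets.append(B1); B2_sets.append(B2)

print("t=0 exit sets:", sorted(B1_sets[-1]), sorted(B2_sets[-1]))
```

Varying the choice of A_t inside A_{1t} ∩ A_{2t} ∩ (y^{21}_t, ∞) in this loop would trace out the other equilibria of the proposition.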
Among the many equilibria for this game are two extreme perfect equilibria which give upper and lower bounds on any equilibrium stopping times (T_1, T_2). The first of these, (T′_1, T′_2), is obtained by letting A_t = ∅ for all t. It is not difficult to check that (T′_1, T′_2) = (T_{11}, T_{22}) a.s. (see Proposition 3.3). In fact, (T_{11}, T_{22}) is the equilibrium most preferred by player 1, the stronger player, because, as will be shown later, it gives player 1 the latest possible exit time and gives player 2 the earliest possible exit time. On the other hand, the second extreme equilibrium, (T̄_1, T̄_2), gives player 2, the weaker player, the latest possible exit time and gives player 1 the earliest possible exit time. It is obtained by setting A_t = Ā_t = Ā_{1t} ∩ Ā_{2t} ∩ (y^{21}_t, ∞) for all t, where Ā_{it} = A_{it}(Ā_{t+1}, ..., Ā_{t̄}).
Proposition 3.3. There are two extreme perfect equilibria (T_{11}, T_{22}) and (T̄_1, T̄_2), where for i = 1, 2, T̄_i = inf{t ≥ 0 : X_t ∈ B̄_{it}}, and the exit sets B̄_{it} are defined recursively as follows:

B̄_{1t} = (-∞, y^{11}_t] ∪ Ā_t,    B̄_{2t} = Ā_{2t} ∩ Ā^c_t,

where Ā_t = Ā_{1t} ∩ (y^{21}_t, ∞),

Ā_{it} = {X_t : π_{i2}(X_t) + δw̄^i_{t+1}(X_t) < 0},

w̄^i_{t+1}(x) = 0 for t ≥ t̄, and w̄^i_{t+1}(x) = E[ū^i_{t+1}(X_{t+1})^+ | X_t = x] for 0 ≤ t < t̄, where

ū^1_{t+1}(X_{t+1}) = [π_{12}(X_{t+1}) + δw̄^1_{t+2}(X_{t+1})]1_{B̄^c_{2,t+1}}(X_{t+1}) + [π_{11}(X_{t+1}) + δw_{11,t+2}(X_{t+1})]1_{B̄_{2,t+1}}(X_{t+1}),

and ū^2_{t+1} is defined symmetrically with B̄_{1,t+1} in place of B̄_{2,t+1}.

Suppose (T_1, T_2) is a perfect equilibrium. Then T̄_1 ≤ T_1 ≤ T_{11} and T_{22} ≤ T_2 ≤ T̄_2, a.s.
Proof. Let (T′_1, T′_2) be obtained by letting A_t = ∅ for all t. That (T′_1, T′_2) and (T̄_1, T̄_2) are perfect equilibria is a direct corollary of Proposition 3.2. We shall use an induction argument to show the second assertion, as well as T′_i = T_{ii} a.s. as a by-product.

First note that for t > t̄, w̄^i_t = w^i_t = w′^i_t = 0. This implies that Ā_{1t} = A_{1t} = A′_{1t} = (-∞, y^{12}_t] and Ā_{2t} = A_{2t} = A′_{2t} = (-∞, y^{22}_t], and A′_{1t} ∩ A′^c_{2t} = ∅. Thus B′_{1t} = (-∞, y^{11}_t].

Assume w̄^1_{t+1} ≤ w^1_{t+1} ≤ w′^1_{t+1} and w′^2_{t+1} ≤ w^2_{t+1} ≤ w̄^2_{t+1}. Then

(-∞, y^{12}_t] ⊃ Ā_{1t} ⊃ A_{1t} ⊃ A′_{1t} ⊃ (-∞, y^{11}_t],
(-∞, y^{22}_t] ⊃ A′_{2t} ⊃ A_{2t} ⊃ Ā_{2t} ⊃ (-∞, y^{21}_t],  (3.8)

and hence A′_{1t} ∩ A′^c_{2t} = ∅. Using (3.8), we have

B̄_{1t} = (Ā_{1t} ∩ (y^{21}_t, ∞)) ∪ (-∞, y^{11}_t] ⊃ (A_{1t} ∩ (y^{21}_t, ∞)) ∪ (-∞, y^{11}_t]
  ⊃ (A_t ∩ (y^{21}_t, ∞)) ∪ (A^c_{2t} ∩ A_{1t}) ∪ (-∞, y^{11}_t]
  ⊃ B_{1t} = A_t ∪ (A^c_{2t} ∩ A_{1t}) ∪ (-∞, y^{11}_t]  (3.9)
  ⊃ (A′^c_{2t} ∩ A′_{1t}) ∪ (-∞, y^{11}_t]
  = B′_{1t} = (-∞, y^{11}_t].

It can be similarly shown that

(-∞, y^{22}_t] = B′_{2t} ⊃ B_{2t} ⊃ B̄_{2t}.  (3.10)

Therefore, ū^1_t ≤ u^1_t ≤ u′^1_t and u′^2_t ≤ u^2_t ≤ ū^2_t. We just show ū^1_t ≤ u^1_t as an example:

ū^1_t(X_t) = [π_{12}(X_t) + δw̄^1_{t+1}(X_t)]1_{B̄^c_{2t}}(X_t) + [π_{11}(X_t) + δw_{11,t+1}(X_t)]1_{B̄_{2t}}(X_t)
  ≤ [π_{12}(X_t) + δw^1_{t+1}(X_t)]1_{B^c_{2t}}(X_t) + [π_{11}(X_t) + δw_{11,t+1}(X_t)]1_{B_{2t}}(X_t) = u^1_t(X_t),

since w̄^1_{t+1} ≤ w^1_{t+1}, π_{12} + δw̄^1_{t+1} ≤ π_{11} + δw_{11,t+1}, and B̄_{2t} ⊂ B_{2t}. Hence, we have w̄^1_t ≤ w^1_t ≤ w′^1_t and w′^2_t ≤ w^2_t ≤ w̄^2_t. We conclude the induction. As a result, (3.9) and (3.10) imply that T̄_1 ≤ T_1 ≤ T′_1 = T_{11} and T_{22} = T′_2 ≤ T_2 ≤ T̄_2, a.s. ∎
Since player 1 is assumed to be stronger than player 2, the first extreme case (T_{11}, T_{22}) is, in some sense, the more appealing equilibrium. This equilibrium is also simple in structure, i.e., a threshold equilibrium. The following proposition gives a necessary and sufficient condition for (T_{11}, T_{22}) to be unique. Before proving the proposition, we need a proper definition of uniqueness of a perfect equilibrium in stopping games. Basically, for two perfect equilibria to be equal, we require that the Nash strategies of the two perfect equilibria for any subgame be equal almost surely with respect to the conditional probability at the subgame.

Definition 3.5. Two perfect equilibria (T′_1, T′_2) and (T″_1, T″_2) are said to be equal if, for i = 1, 2, t + T′_i ∘ θ_t = t + T″_i ∘ θ_t, P^t_x-a.s., for all t ≥ 0 and for all x ∈ R, where θ_t is the shift operator and P^t_x denotes the conditional probability given X_t = x.
Proposition 3.4. Suppose player 1 is stronger or more patient than player 2. Then (T_{11}, T_{22}) is the unique perfect equilibrium if and only if P^{t-1}_x{X_t ∈ Ā_{1t} ∩ (y^{21}_t, ∞)} = 0, for all x ∈ R and 1 ≤ t ≤ t̄ − 1.
Proof. For t > t̄ − 1 and for i = 1, 2, Ā_{it} = A′_{it} = (-∞, y^{i2}_t], since w̄^i_{t+1} = w′^i_{t+1} = 0. Then P^{t-1}_x{X_t ∈ Ā_{1t} ∩ (y^{21}_t, ∞)} = 0 implies T̄_i = T′_i, i = 1, 2, P^s_x-a.s., for s ≥ t̄ − 1 and x ∈ R.

Suppose T̄_i = T′_i, i = 1, 2, P^s_x-a.s., for s ≥ t and for x ∈ R. Then w̄^i_t = w′^i_t, which implies that Ā_{it} = A′_{it}, B̄_{1t} \ B′_{1t} = Ā_{1t} ∩ (y^{21}_t, ∞), and B′_{2t} \ B̄_{2t} ⊂ Ā_{1t} ∩ (y^{21}_t, ∞). Therefore, T̄_i = T′_i, i = 1, 2, P^s_x-a.s., for s ≥ t − 1 and x ∈ R, by the condition that P^{t-1}_x{X_t ∈ Ā_{1t} ∩ (y^{21}_t, ∞)} = 0. The induction is completed.

To prove the necessary condition, let t* ≡ sup{s ≥ 0 : P^{s-1}_x{X_s ∈ Ā_{1s} ∩ (y^{21}_s, ∞)} > 0}. Since T̄_i = T′_i, P^s_x-a.s., for s > t* and x ∈ R, and hence B′_{2t*} ∩ B̄^c_{2t*} = Ā_{1t*} ∩ (y^{21}_{t*}, ∞), we have

P^{t*-1}_x{t* + T̄_2 ∘ θ_{t*} > t* + T_{22} ∘ θ_{t*}} ≥ P^{t*-1}_x{X_{t*} ∈ B′_{2t*} ∩ B̄^c_{2t*}} = P^{t*-1}_x{X_{t*} ∈ Ā_{1t*} ∩ (y^{21}_{t*}, ∞)} > 0.  ∎
Note that Ā_{1t} ∩ (y^{21}_t, ∞) ⊂ (y^{21}_t, y^{12}_t]. Therefore, the proposition indicates, for example, that the stronger player never exits first either if the payoffs are so asymmetric that π_{12} ≥ π_{21} (or, equivalently, y^{12}_t ≤ y^{21}_t) or if the process X is sufficiently "smooth" that it never jumps over the interval (y^{21}_t, y^{12}_t].
The following example is an application of the model to a market setting in which competing firms make not only production decisions in each period but also the decision whether to exit the market. A Cournot duopoly faces a stochastic linear inverse demand, P_t = X_t − bQ_t, where Q_t is the total output of the duopoly and X is Markovian and stochastically decreasing. The firms produce a homogeneous good at constant unit costs c_i, i = 1, 2. Firm i's opportunity cost of staying in the market is k_i. Without loss of generality, we assume c_1 ≤ c_2. If both firms are in the market, the equilibrium stage payoffs are

π_{12}(X_t) = (1/9b)(X_t − 2c_1 + c_2)² − k_1,  if X_t ≥ 2c_2 − c_1;
            = (1/4b)(X_t − c_1)² − k_1,        if c_1 ≤ X_t < 2c_2 − c_1;
            = −k_1,                             otherwise;

π_{22}(X_t) = (1/9b)(X_t − 2c_2 + c_1)² − k_2,  if X_t ≥ 2c_2 − c_1;
            = −k_2,                             otherwise.

If one firm is in the market alone, the monopoly stage profits are, for i = 1, 2,

π_{i1}(X_t) = (1/4b)(X_t − c_i)² − k_i,  if X_t ≥ c_i;
            = −k_i,                       otherwise.

It is easy to see that π_{i1} ≥ π_{i2} for i = 1, 2, and if c_1 < c_2, k_1 = k_2, or c_1 = c_2, k_1 < k_2, then firm 1 is stronger, i.e., π_{1j} ≥ π_{2j} for j = 1, 2, and the general results obtained earlier apply.
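These stage payoffs are easy to check numerically. The sketch below evaluates them on a grid (the parameter values b, c_i, k_i are arbitrary choices for illustration) and verifies the two orderings just claimed.

```python
# Cournot stage payoffs from the example; parameters are illustrative only.
B_SLOPE = 1.0          # demand slope b
C = {1: 1.0, 2: 1.5}   # unit costs, c_1 <= c_2
K = {1: 0.2, 2: 0.2}   # opportunity costs, k_1 = k_2

def pi_duo(i, x):
    """Equilibrium duopoly stage payoff pi_i2(x)."""
    j = 3 - i
    if x >= 2 * C[2] - C[1]:                       # both firms produce
        return (x - 2 * C[i] + C[j]) ** 2 / (9 * B_SLOPE) - K[i]
    if i == 1 and x >= C[1]:                       # firm 2's output is zero
        return (x - C[1]) ** 2 / (4 * B_SLOPE) - K[1]
    return -K[i]

def pi_mono(i, x):
    """Monopoly stage payoff pi_i1(x)."""
    if x >= C[i]:
        return (x - C[i]) ** 2 / (4 * B_SLOPE) - K[i]
    return -K[i]

grid = [0.1 * n for n in range(0, 101)]            # x in [0, 10]
mono_beats_duo = all(pi_mono(i, x) >= pi_duo(i, x) - 1e-12
                     for i in (1, 2) for x in grid)
firm1_stronger = all(pi_duo(1, x) >= pi_duo(2, x) - 1e-12 and
                     pi_mono(1, x) >= pi_mono(2, x) - 1e-12 for x in grid)
print(mono_beats_duo, firm1_stronger)   # prints: True True
```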
4. An Example with a Continuous Time Jump Process
In this section we shall analyze a continuous time two-player stochastic exit game, analogous to the discrete time model studied above, to illustrate that the multiplicity of equilibria can also occur in continuous time models; it is similarly caused by possible jumps in the underlying information process if no further assumption on the magnitude of the asymmetry is made.

We take A and B, two Poisson processes with intensities α and β respectively. Set X_t = A_t − B_t for t ≥ 0 and think of X as the stochastic demand of an industry. Process X is an additive process and we denote the transition functions of X by P_t(x, y), x, y ∈ 𝒳, where 𝒳, the state space of X, is the set of integers. The profit rate, π : 𝒳 → R, is assumed to be strictly increasing and bounded. Also assume that there is a y° ∈ 𝒳 such that π(y°) ≤ 0 and π(y° + 1) > 0. Let r be the discount rate for the firm. Then the single firm problem is to solve

sup_T E_x[∫_0^T e^{-rt} π(X_t) dt],

where the supremum is over all stopping times.
Most computations regarding process X that we need in our analysis are contained in Li (1984), and readers may notice that they are parallel to those for a Brownian motion in Harrison (1985). Denote the hitting times by T(y) ≡ inf{t ≥ 0 : X_t = y}, T_*(y) ≡ inf{t ≥ 0 : X_t ≤ y}, and T*(y) ≡ inf{t ≥ 0 : X_t ≥ y}. Also let

φ(x, y) ≡ E_x[e^{-rT(y)} 1_{T(y)<∞}],
f(x) ≡ E_x[∫_0^∞ e^{-rt} π(X_t) dt],
η(x, y) ≡ ∫_0^∞ e^{-rt} P_t(x, y) dt.
It can be shown that

φ(x, y) = ρ_1^{x−y}, if x ≥ y;  ρ_2^{x−y}, if x ≤ y,  (4.1)

where ρ_1 and ρ_2 are the two roots of

αρ² − (α + β + r)ρ + β = 0,  (4.2)

and

ρ_1 = (2α)^{-1}[α + β + r − √((α + β + r)² − 4αβ)],
ρ_2 = (2α)^{-1}[α + β + r + √((α + β + r)² − 4αβ)].
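The roots and the hitting-time transform (4.1) can be checked directly; the parameter values below are arbitrary.

```python
import math

ALPHA, BETA, R = 0.5, 1.0, 0.1       # illustrative intensities and discount rate
s = ALPHA + BETA + R
disc = math.sqrt(s * s - 4 * ALPHA * BETA)
rho1 = (s - disc) / (2 * ALPHA)      # root in (0, 1)
rho2 = (s + disc) / (2 * ALPHA)      # root > 1

def quad(rho):
    """Left-hand side of (4.2)."""
    return ALPHA * rho ** 2 - s * rho + BETA

def phi(x, y):
    """E_x[e^{-rT(y)}; T(y) < infinity] from (4.1)."""
    return rho1 ** (x - y) if x >= y else rho2 ** (x - y)

# phi(., y) is r-harmonic away from y:
# (alpha + beta + r) phi(x,y) = alpha phi(x+1,y) + beta phi(x-1,y), x != y.
def harmonic_gap(x, y):
    return s * phi(x, y) - ALPHA * phi(x + 1, y) - BETA * phi(x - 1, y)

print(rho1, rho2)
```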
Noting that f is the r-potential of π for the process X and η is its r-potential kernel, we have

Lemma 4.1.

f(x) = Σ_{y∈𝒳} π(y) η(x, y),  (4.3)

where η(x, y) = φ(x, y) η(y, y) and η(y, y) = Δ ≡ [α(1 − ρ_1) + β(1 − ρ_2^{-1}) + r]^{-1} for x, y ∈ 𝒳.
Proof. See Cinlar (1975) for the proof of (4.3). To calculate η, first note that

η(x, y) = E_x[∫_0^∞ e^{-rt} 1_{{y}}(X_t) dt],

by Fubini's theorem. Let T ≡ T(y) and X̂_t = X_{T+t} on {T < ∞}. It follows that 1_{{y}}(X_t) = 0 a.s. for 0 ≤ t < T. Theorem 8.3.16 of Cinlar (1975) then implies

η(x, y) = E_x[e^{-rT}] η(y, y).  (4.4)

Using Theorem 8.3.18 in Cinlar (1975), we have

(α + β + r) η(y, y) − β η(y − 1, y) − α η(y + 1, y) = 1.  (4.5)

Thus, η(y, y) = [α(1 − ρ_1) + β(1 − ρ_2^{-1}) + r]^{-1} by (4.4) and (4.5). ∎
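A quick numerical sanity check of (4.3)-(4.5), with the same illustrative parameters as before: the closed form for η(y, y) satisfies the balance equation (4.5), and the potential representation (4.3) matches a direct fixed-point computation of f for a sample bounded, increasing profit rate (π(x) = tanh(x − 0.5) is an arbitrary choice).

```python
import math

ALPHA, BETA, R = 0.5, 1.0, 0.1       # illustrative parameters
s = ALPHA + BETA + R
d = math.sqrt(s * s - 4 * ALPHA * BETA)
rho1, rho2 = (s - d) / (2 * ALPHA), (s + d) / (2 * ALPHA)
DELTA = 1.0 / (ALPHA * (1 - rho1) + BETA * (1 - 1 / rho2) + R)

def phi(x, y):
    return rho1 ** (x - y) if x >= y else rho2 ** (x - y)

def eta(x, y):
    """eta(x, y) = phi(x, y) * eta(y, y), with eta(y, y) = DELTA."""
    return phi(x, y) * DELTA

balance = s * eta(0, 0) - BETA * eta(-1, 0) - ALPHA * eta(1, 0)  # (4.5): = 1

def pi(x):
    return math.tanh(x - 0.5)        # sample bounded, increasing profit rate

def f_potential(x):
    """(4.3): f(x) = sum_y pi(y) eta(x, y), with far tails truncated."""
    return sum(pi(y) * eta(x, y) for y in range(x - 300, x + 300))

# Direct computation of f as the fixed point of
# f(x) = (pi(x) + alpha f(x+1) + beta f(x-1)) / (alpha + beta + r),
# on a truncated grid with approximate boundary values pi(boundary)/r.
LO, HI = -80, 80
f = {x: 0.0 for x in range(LO, HI + 1)}
for _ in range(6000):
    f = {x: (pi(x) + ALPHA * f.get(x + 1, pi(HI) / R)
                   + BETA * f.get(x - 1, pi(LO) / R)) / s
         for x in range(LO, HI + 1)}
print(balance, f_potential(0), f[0])
```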
Denoting the expected profit if T(y) is adopted and if X_0 = x ≥ y by g(x, y),

g(x, y) ≡ E_x[∫_0^{T(y)} e^{-rt} π(X_t) dt],

it is obvious that g(y, y) = 0, and (4.1), (4.3) and the strong Markov property of X give that, for x ≥ y,

g(x, y) = f(x) − φ(x, y) f(y)
        = Δ[− Σ_{z=y}^{x−1} π(z)(ρ_2^{x−z} − ρ_1^{x−z}) + (ρ_2^{x−y} − ρ_1^{x−y}) h(y)],  (4.6)

where

h(y) ≡ Σ_{z=y}^{∞} π(z) ρ_2^{y−z} = Σ_{z=0}^{∞} π(z + y) ρ_2^{-z}.  (4.7)
z=O
Lemma 4.2. The finction h(.) is strictly incrcasing in y, au(l there is a unique y such that y < yO
and
h(.) < O0anti h(.0 + 1) > 0.
(4.8)
Proof. The first assertion follows from the fact that 7r is increasing and the second assertion is true
since h(t °) > 0, h(-oo) = -oo, and h is increasing.
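For a concrete bounded, increasing profit rate (π(x) = tanh(x − 0.5), an arbitrary choice, so that y° = 0) and the illustrative parameters used earlier, the threshold ŷ of (4.8) is easy to locate numerically:

```python
import math

ALPHA, BETA, R = 0.5, 1.0, 0.1       # illustrative parameters
s = ALPHA + BETA + R
rho2 = (s + math.sqrt(s * s - 4 * ALPHA * BETA)) / (2 * ALPHA)

def pi(x):
    return math.tanh(x - 0.5)        # pi(0) < 0 < pi(1), so y0 = 0

def h(y, terms=400):
    """h(y) = sum_{z>=0} pi(z + y) rho2^{-z}, truncated (valid: rho2 > 1)."""
    return sum(pi(z + y) * rho2 ** (-z) for z in range(terms))

# The unique y_hat with h(y_hat) < 0 <= h(y_hat + 1), cf. (4.8).
y_hat = next(y for y in range(-30, 30) if h(y) < 0 <= h(y + 1))
print(y_hat, h(y_hat), h(y_hat + 1))
```

For these parameters the crossing lies one state below y°, so the firm stays slightly past the point where the flow profit turns negative.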
Now suppose T_*(ŷ) is employed as the exit time. Then

v(x) = g(x, ŷ), if x > ŷ;  0, if x ≤ ŷ,  (4.9)

is the expected profit. We want to show that T_*(ŷ) is indeed the optimal exit time. The following important lemma is proved first.
Lemma 4.3. The value function v is nonnegative, and

Γv(x) − rv(x) + π(x) = 0,  if x > ŷ,  (4.10)
Γv(x) − rv(x) + π(x) ≤ 0,  if x ≤ ŷ,  (4.11)

where the operator Γ is defined as

Γv(x) = αv(x + 1) + βv(x − 1) − (α + β)v(x).
Proof. Using (4.6), we calculate

g(x, y) − g(x, y − 1) = −(α/β) ρ_1^{x−y+1} (ρ_2 − ρ_1) Δ h(y),

where h is defined in (4.7) and Δ > 0 is defined in Lemma 4.1. Lemma 4.2 then implies that g(x, ŷ) ≥ g(x, y) for all ŷ ≤ y ≤ x, and hence g(x, ŷ) ≥ g(x, x) = 0 if x > ŷ. So v(x) ≥ 0.

Since ρ_1 and ρ_2 satisfy equation (4.2), we can verify that

Γf(x) − rf(x) + π(x) = 0,  and  Γφ̂(x) − rφ̂(x) = 0 except at x = ŷ,

where φ̂(x) ≡ φ(x, ŷ). Equation (4.10) then follows for x > ŷ since v(x) = f(x) − φ(x, ŷ)f(ŷ) there. To prove (4.11), we first note that for x < ŷ, Γv(x) − rv(x) + π(x) = π(x) ≤ π(y°) ≤ 0, since x < ŷ ≤ y°. It is left to show that (4.11) holds for x = ŷ. But

Γv(ŷ) − rv(ŷ) + π(ŷ) = αg(ŷ + 1, ŷ) + π(ŷ)
  = Δ[α(ρ_2 − ρ_1)h(ŷ) + (α(1 − ρ_1) + β(1 − ρ_2^{-1}) + r − α(ρ_2 − ρ_1))π(ŷ)]
  = Δα(ρ_2 − ρ_1)h(ŷ) < 0.  (4.12)

The first two equalities in (4.12) follow from direct substitution of (4.6) and (4.7). The third equality comes from the fact that ρ_1 and ρ_2 satisfy equation (4.2), which yields α(1 − ρ_1) + β(1 − ρ_2^{-1}) + r = α(ρ_2 − ρ_1). And the last inequality is due to condition (4.8). ∎
Proposition 4.1. Stopping time T_*(ŷ) is an optimal exit time and therefore v(x) is the optimal value function.
Proof. Since v(X_{t−}) is a left-continuous and right-limited process adapted to X, e^{-rt}v(X_t) − ∫_0^t e^{-rs}(Γv(X_s) − rv(X_s)) ds is a martingale. Therefore, for an arbitrary stopping time T,

E_x[e^{-rT}v(X_T)] = v(x) + E_x[∫_0^T e^{-rt}(Γv(X_t) − rv(X_t)) dt]  (4.13)

(see Li (1984), (4.3.14)). If the firm employs T as its exit time, the expected profit is

v_T(x) ≡ E_x[∫_0^T e^{-rt}π(X_t) dt]
  = E_x[∫_0^T e^{-rt}π(X_t) dt + e^{-rT}v(X_T)] − E_x[e^{-rT}v(X_T)]
  = v(x) + E_x[∫_0^T e^{-rt}(Γv(X_t) − rv(X_t) + π(X_t)) dt] − E_x[e^{-rT}v(X_T)]
  ≤ v(x),

where the second equality comes from (4.13), and the inequality follows from Lemma 4.3 and the nonnegativity of v. ∎
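The optimality of the threshold ŷ can also be seen numerically. The sketch below computes the value of every threshold policy T_*(y) by fixed-point iteration on a truncated state space and confirms that y = ŷ (which equals −1 for the illustrative π, α, β, r used earlier) does best. The truncation at `hi` and the specific π are assumptions of the sketch, not part of the model.

```python
import math

ALPHA, BETA, R = 0.5, 1.0, 0.1       # illustrative parameters
s = ALPHA + BETA + R
rho2 = (s + math.sqrt(s * s - 4 * ALPHA * BETA)) / (2 * ALPHA)

def pi(x):
    return math.tanh(x - 0.5)

def policy_value(y, x0, hi=60, iters=5000):
    """Expected profit of the threshold policy T_*(y), started at x0 > y.
    Solves r v = pi + Gamma v on (y, hi) with v = 0 at and below y and an
    approximate boundary value pi(hi)/r at the rarely reached top state."""
    v = {x: 0.0 for x in range(y, hi + 1)}
    for _ in range(iters):
        v = {x: 0.0 if x == y else
                (pi(x) + ALPHA * v.get(x + 1, pi(hi) / R) + BETA * v[x - 1]) / s
             for x in range(y, hi + 1)}
    return v[x0]

values = {y: policy_value(y, x0=3) for y in range(-5, 3)}
best = max(values, key=values.get)
print(best, values[best])
```

Both exiting too late (thresholds below ŷ) and too early (thresholds above ŷ) show up as strictly lower policy values.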
Notice that if h(ŷ + 1) = 0, we have two optimal exit times, T_*(ŷ) and T_*(ŷ + 1); but we will take T_*(ŷ) as the solution. The critical number ŷ and the optimal exit time T_*(ŷ) have the following properties; the proof is just a matter of verification.
Proposition 4.2. If π¹ ≥ π², then ŷ¹ ≤ ŷ² and T_*(ŷ¹) ≥ T_*(ŷ²) a.s. Also, ŷ (respectively, T_*(ŷ)) is decreasing (increasing a.s.) in α − β and increasing (decreasing a.s.) in r.
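These comparative statics are easy to confirm numerically for the illustrative profit rate used above (only weak inequalities are asserted, since ŷ is integer-valued):

```python
import math

def y_hat(alpha, beta, r, shift=0.0):
    """Threshold of (4.8) for the illustrative profit rate
    pi(x) = tanh(x - 0.5) - shift (shift > 0 gives a uniformly lower pi)."""
    s = alpha + beta + r
    rho2 = (s + math.sqrt(s * s - 4 * alpha * beta)) / (2 * alpha)
    def h(y):
        return sum((math.tanh(z + y - 0.5) - shift) * rho2 ** (-z)
                   for z in range(400))
    return next(y for y in range(-60, 60) if h(y) < 0 <= h(y + 1))

r_low, r_high = y_hat(0.5, 1.0, 0.05), y_hat(0.5, 1.0, 1.0)  # y-hat rises with r
a_high, a_low = y_hat(0.9, 1.0, 0.1), y_hat(0.2, 1.0, 0.1)   # falls as alpha - beta rises
base, weaker = y_hat(0.5, 1.0, 0.1), y_hat(0.5, 1.0, 0.1, shift=0.5)
print(r_low, r_high, a_high, a_low, base, weaker)
```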
Now we turn to the stopping game. Parallel to the notation in the previous sections, denote by π_{ij}(X_t) the profit rate to player i if there are j players in the game and the state is X_t at time t. Assume that π_{i1} ≥ π_{i2}, i = 1, 2, π_{1j} ≥ π_{2j}, j = 1, 2, and each π_{ij}(·) has the properties assumed for the single player problem.

Definition 4.1. (T_1, T_2) is a Nash equilibrium if T_1 and T_2 are stopping times and, for i = 1, 2,

E_x[∫_0^{T_i ∧ T_{-i}} e^{-rt}π_{i2}(X_t) dt + ∫_{T_i ∧ T_{-i}}^{T_i} e^{-rt}π_{i1}(X_t) dt]
  ≥ E_x[∫_0^{τ ∧ T_{-i}} e^{-rt}π_{i2}(X_t) dt + ∫_{τ ∧ T_{-i}}^{τ} e^{-rt}π_{i1}(X_t) dt]

for any stopping time τ.
Definition 4.2. (T_1, T_2) is a perfect equilibrium if T_1 and T_2 are stopping times and, for any stopping times S and τ and for i = 1, 2,

E_x[∫_S^{(S+T_i∘θ_S)∧(S+T_{-i}∘θ_S)} e^{-rt}π_{i2}(X_t) dt + ∫_{(S+T_i∘θ_S)∧(S+T_{-i}∘θ_S)}^{S+T_i∘θ_S} e^{-rt}π_{i1}(X_t) dt | F_S]
  ≥ E_x[∫_S^{(S+τ∘θ_S)∧(S+T_{-i}∘θ_S)} e^{-rt}π_{i2}(X_t) dt + ∫_{(S+τ∘θ_S)∧(S+T_{-i}∘θ_S)}^{S+τ∘θ_S} e^{-rt}π_{i1}(X_t) dt | F_S]

on {T_i ≥ S}, where θ denotes the shift operator.
Let f_{ij}(x) be the counterpart of f(x) when the profit rate is π_{ij} in a single player problem, and likewise for g_{ij}, h_{ij}, y°_{ij}, ŷ_{ij}, and v_{ij}. For brevity, we also let y_{ij} ≡ ŷ_{ij} and T_{ij} ≡ T_*(ŷ_{ij}). By Proposition 4.2, y_{i1} ≤ y_{i2}, i = 1, 2, and y_{1j} ≤ y_{2j}, j = 1, 2.
Proposition 4.3. (T_{11}, T_{22}) is a perfect equilibrium for the stopping game.

Proof. Let the initial state be x ∈ 𝒳. Given 1's strategy T_{11}, we want to show T_{22} is player 2's best response. Let T_2 be a best response for player 2. Suppose that P_x{T_2 > T_{11}} > 0. Define T′ ≡ T_{11} ∧ T_2 and τ ≡ 0 ∨ (T_2 − T_{11}). Then

E_x[∫_{T′}^{T_2} e^{-rt}π_{21}(X_t) dt] = E_x[e^{-rT′} E_{X_{T′}}[∫_0^{τ} e^{-rt}π_{21}(X_t) dt]] < 0,

since P_x{T_2 > T_{11}} > 0 implies that P_x{τ > 0} > 0, and since y_{11} < y_{21}. This contradicts the assumption that T_2 is a best response to T_{11}. Thus T_2 ≤ T_{11} a.s., and this reduces player 2's problem to

sup_T E_x[∫_0^T e^{-rt}π_{22}(X_t) dt],

and T_{22} is the unique solution.

Similar arguments show that T_{11} is the unique best response for player 1 to T_{22} of player 2, for any initial state x ∈ 𝒳. Since (T_{11}, T_{22}) is an equilibrium for all possible initial states x, it is subgame perfect. ∎
In a way similar to that in Proposition 3.1 we can prove

Proposition 4.4. Suppose (T_1, T_2) is an equilibrium. Then T_{i2} ≤ T_i ≤ T_{i1} a.s., i = 1, 2.

Note that T_{i2} and T_{i1} are the lower and upper bounds for i's Nash equilibrium strategy T_i. If y_{21} ≥ y_{12}, then T_2 ≤ T_{21} ≤ T_{12} ≤ T_1 a.s., and hence (T_{11}, T_{22}) is the unique Nash equilibrium. On the other hand, it is possible that y_{1j} = y_{2j}, j = 1, 2, even when π_{1j} > π_{2j}, j = 1, 2, but sufficiently close, since the ŷ_{ij} are integer-valued. In this case, (T_{12}, T_{21}) is another perfect equilibrium. We now seek to determine the precise bounds for perfect equilibria in the general situation. To avoid the trivial cases, we assume y_{21} < y_{12}.
Lemma 4.4. For any initial state x, if player 2 plays T_*(y_2) with y_2 ≥ y_{21}, then the unique best response of player 1 is

T_1 = (T̂ + T_{11} ∘ θ_{T̂}) 1_{{X_{T̂} = y_2}} + T̂ 1_{{X_{T̂} > y_2}} a.s.,  (4.14)

where T̂ ≡ (T*(y*_1) ∧ T_*(y_2)) ∨ T_{12},

y*_1 = sup{y ∈ [y_2, y_{12} + 1] : m_1(y, y_{11}, y_2) > 0} if the supremum does not equal y_{12} + 1, and y*_1 = ∞ otherwise,

and

m_1(y, y_{11}, y_2) ≡ − Σ_{z=y_{11}}^{y_2−1} π_{11}(z)(ρ_2^{y_2−z} − ρ_1^{y_2−z}) − Σ_{z=y_2}^{y−1} π_{12}(z)(ρ_2^{y_2−z} − ρ_1^{y_2−z}) + (ρ_2^{y_2−y_{11}} − ρ_1^{y_2−y_{11}}) h_{11}(y_{11}).  (4.15)
Proof. Let us begin with x such that y_2 < x < y*_1. Then T̂ = T*(y*_1) ∧ T_*(y_2). Given that 2 plays T_*(y_2), 1's payoff from playing (4.14) with some threshold y in place of y*_1 is

u_1(x, y, y_2) = E_x[∫_0^{T̄} e^{-rt}π_{12}(X_t) dt] + E_x[e^{-rT̄} 1_{{X_{T̄} = y_2}}] g_{11}(y_2, y_{11})
  = f_{12}(x) − ψ(x, y_2, y) f_{12}(y_2) − φ̄(x, y_2, y) f_{12}(y) + ψ(x, y_2, y) g_{11}(y_2, y_{11}),

where T̄ ≡ T*(y) ∧ T_*(y_2) and, for y_2 ≤ x ≤ y,

ψ(x, y_2, y) ≡ E_x[e^{-rT̄} 1_{{X_{T̄} = y_2}}] = (ρ_1^{x−y} − ρ_2^{x−y}) / (ρ_1^{y_2−y} − ρ_2^{y_2−y}),
φ̄(x, y_2, y) ≡ E_x[e^{-rT̄} 1_{{X_{T̄} = y}}] = (ρ_1^{x−y_2} − ρ_2^{x−y_2}) / (ρ_1^{y−y_2} − ρ_2^{y−y_2}).  (4.16)

For a detailed discussion of the above derivation, see Li (1984). Direct computation with (4.16) yields difference identities for ψ(x, y_2, y) − ψ(x, y_2, y − 1) and φ̄(x, y_2, y) − φ̄(x, y_2, y − 1),  (4.17)

and using (4.17) we calculate the difference

u_1(x, y, y_2) − u_1(x, y − 1, y_2) = c(x, y, y_2) m_1(y, y_{11}, y_2),

where c(x, y, y_2) > 0. Observe that the sign of the difference is the same as the sign of m_1. Direct computation yields

m_1(y, y_{11}, y_2) − m_1(y − 1, y_{11}, y_2) = (ρ_1^{y_2−y+1} − ρ_2^{y_2−y+1}) π_{12}(y − 1)  { ≥ 0, if y > y°_{12};  < 0, if y_2 < y ≤ y°_{12}. }

Also note that m_1(y_2, y_{11}, y_2) = Δ^{-1} g_{11}(y_2, y_{11}) > 0. So if m_1(y, y_{11}, y_2) > 0 for all y ∈ [y_2, y_{12} + 1], then y*_1 = ∞, T̂ = T_*(y_2) a.s., and T_1 = T_{11} a.s. If m_1(y_{12} + 1, y_{11}, y_2) ≤ 0, then there is a unique y*_1 = sup{y ≥ y_2 : m_1(y, y_{11}, y_2) > 0}. Certainly, we need y_2 ≤ y_{12} for this to be the case. But suppose y*_1 > y_{12}; then T̂ < T_{12} with a strictly positive probability, a contradiction to Proposition 4.4.
The verification that T_1 is the best response to T_2 is similar to that in the proof of Proposition 4.1. Let v_1 be the value function under the policy (4.14). The crucial requirements are that v_1 is nonnegative and that

Γv_1(x) − rv_1(x) + π_{12}(x) ≤ 0,  if x > y_2;
Γv_1(x) − rv_1(x) + π_{11}(x) ≤ 0,  if x ≤ y_2.  (4.18)

For example, in the case that y*_1 is finite, the payoff to player 1 is

v_1(x) = v_{12}(x), if x > y*_1;  u_1(x, y*_1, y_2), if y_2 < x ≤ y*_1;  v_{11}(x), if x ≤ y_2,

which does satisfy nonnegativity and (4.18). Let S be an arbitrary stopping time. Then the value function associated with it is

v_S(x) ≡ E_x[∫_0^{T_*(y_2)∧S} e^{-rt}π_{12}(X_t) dt + ∫_{T_*(y_2)∧S}^{S} e^{-rt}π_{11}(X_t) dt]
  = v_1(x) + E_x[∫_0^{T_*(y_2)∧S} e^{-rt}(Γv_1(X_t) − rv_1(X_t) + π_{12}(X_t)) dt + ∫_{T_*(y_2)∧S}^{S} e^{-rt}(Γv_1(X_t) − rv_1(X_t) + π_{11}(X_t)) dt] − E_x[e^{-rS}v_1(X_S)]
  ≤ v_1(x),

by arguments similar to those of Proposition 4.1 together with the facts in (4.18). ∎
Similarly we can prove

Lemma 4.5. For any initial state x, if player 1 employs strategy (4.14) with y_2 = y_{21}, then the unique best response of player 2 is

T_2 = T_*(y̲*_2) 1_{{X_{T′} < y̲*_2 + 1}} + T′ 1_{{X_{T′} > ȳ*_2}},  (4.19)

where T′ ≡ (T*(y*_1) ∧ T_*(y_{21})) ∨ (T*(ȳ*_2) ∧ T*(y_{12})) ∨ T_{22},

y̲*_2 = sup{y ∈ [y°_{21}, y*_1] : m_2(y, y°_{21}, y*_1) > 0} if the supremum exists, and y̲*_2 = y*_1 + 1 otherwise;

ȳ*_2 = sup{y ∈ [y_{12}, y_{22} + 1] : m_2(y, y°_{21}, y_{12}) > 0} if the supremum does not equal y_{22} + 1, and ȳ*_2 = ∞ otherwise;

and, analogously to (4.15),

m_2(y, y′, y″) ≡ − Σ_{z=y′}^{y″−1} π_{21}(z)(ρ_2^{y″−z} − ρ_1^{y″−z}) − Σ_{z=y″}^{y−1} π_{22}(z)(ρ_2^{y″−z} − ρ_1^{y″−z}) + (ρ_2^{y″−y′} − ρ_1^{y″−y′}) h_{21}(y′).  (4.20)-(4.21)
We are now in a position to determine the exact bounds for the perfect equilibrium strategies.

Proposition 4.5. Either (T_{11}, T_{22}) is the unique perfect equilibrium, or there is a fixed point (y*_1, y*_2) such that y_{21} ≤ y*_2 ≤ y*_1 ≤ y_{12} and

y*_1 = sup{y ∈ [y*_2, y_{12}] : m_1(y, y_{11}, y*_2) > 0},
y*_2 = sup{y ∈ [y_{21}, y*_1] : m_2(y, y_{21}, y*_1) > 0},  (4.22)

and (T*_1, T*_2) is another perfect equilibrium, where T*_1 is defined in (4.14) with y*_1 as the upper threshold and y_2 = y*_2, T*_2 ≡ T_*(y*_2), and T*_1 ≤ T_1 ≤ T_{11}, T_{22} ≤ T_2 ≤ T*_2 a.s. for any perfect equilibrium (T_1, T_2).
Proof. We know from Proposition 4.4 that T_{21} is the longest equilibrium stopping time that player 2 can play. Suppose that player 2 employs T_{21}. Then, by Lemma 4.4, player 1 responds with the policy defined in (4.14) with y_2 = y_{21}. Denote that policy by T¹_1. Stopping time T¹_1 then is a lower bound for 1's equilibrium strategies by the monotone structure of the game (see Proposition 4.1 in Huang and Li (1986)).

If y*_1 = ∞ in (4.14), then T¹_1 = T_{11} and hence (T_{11}, T_{22}) is the unique equilibrium. Otherwise, given player 1's strategy T¹_1, player 2's best response is (4.19) with y°_2 = y_{21}, which gives a tighter upper bound for player 2's equilibrium strategies. However, if we impose the requirement of subgame perfection, then another upper bound for player 2's strategy is T_*(y̲*_2), since, in a subgame starting with a state below y̲*_2, player 2 would exit right away and hence it cannot credibly employ an exit time longer than T_*(y̲*_2). Denote by T¹_2 the smaller of the two upper bounds proposed above. In fact,

T¹_2 = T_*(y̲*_2) 1_{{X_{T′} ∈ (-∞, y*_1]}} + T_{22} 1_{{X_{T′} ∈ (y*_1, ∞)}},

where T′ ≡ (T*(y*_1) ∧ T_*(y̲*_2)) ∨ T_{22}. Let T²_1 be the best response of player 1 to T¹_2, the upper bound of player 2's perfect equilibrium strategies; T²_1 is a new lower bound for 1's perfect equilibrium strategies, again by the monotonicity of the game. For an initial state x > ȳ*_2, T¹_2 = T_{22} and hence T²_1 = T¹_1 a.s. Thus, for x > ȳ*_2, T_1 = T¹_1 a.s., where T_1 is any perfect equilibrium strategy for player 1, since T²_1 ≤ T_1 ≤ T¹_1.

Suppose that there is a stopping time S such that S + T¹_1 ∘ θ_S = T_{11}. The perfection requirement then implies that T¹_1 = T_{11} a.s. for all initial states x, and (T_{11}, T_{22}) is the unique perfect equilibrium. Otherwise, if ȳ*_2 = ∞, then T¹_2 = T_*(y̲*_2) a.s. In this case, if y̲*_2 = y_{21}, then (T*_1, T*_2) = (T¹_1, T¹_2) is another perfect equilibrium and T*_1 ≤ T_1 ≤ T_{11}, T_{22} ≤ T_2 ≤ T*_2 for any perfect equilibrium (T_1, T_2). Otherwise, T²_1 is obtained by setting y_2 = y̲*_2 in Lemma 4.4, and we repeat the steps above. Since the successive thresholds y^n_1 and y^n_2 are both increasing in n, they either diverge to infinity or converge to y*_1 and y*_2 such that y_{21} ≤ y*_2 ≤ y*_1 ≤ y_{12} and

y*_1 = sup{y ∈ [y*_2, y_{12}] : m_1(y, y_{11}, y*_2) > 0},
y*_2 = sup{y ∈ [y_{21}, y*_1] : m_2(y, y_{21}, y*_1) > 0}.

Clearly, T*_1 and T*_2 are the lower bound and the upper bound of the perfect equilibrium strategies of players 1 and 2 respectively, and (T*_1, T*_2) itself is a perfect equilibrium by Lemmas 4.4 and 4.5. ∎
Proposition 4.5 provides a necessary and sufficient condition for the uniqueness of the perfect equilibrium, as well as a guideline for the construction of examples which have multiple perfect equilibria.
5. Conclusion
This paper analyzes the pure-strategy perfect equilibria of a class of stopping games with stochastically monotone payoffs. We focus on stochastically decreasing payoffs and the natural interpretation of exit behavior by firms in a declining industry, but an equivalent analysis applies to the case of increasing payoffs, which can be interpreted as the entry games played by firms in a growing industry. Also, the analysis of the continuous time model is only for a Poisson jump process, but the methodology is applicable to the case in which the underlying process is a general additive process.

We find that these games generally have multiple perfect equilibria. The models of both Ghemawat-Nalebuff and Fudenberg-Tirole have demand processes that decline deterministically and continuously. They obtain a unique perfect equilibrium. The Huang-Li paper shows that a unique perfect equilibrium also obtains with a continuous stochastic process. Our paper demonstrates (in both discrete and continuous time) that an underlying information process with sizable jumps can yield multiple equilibria if no further assumption is made on the asymmetry of the players' payoff structure.
References
1. Chaput, H., "Markov Games", Technical Report No. 33, Department of Operations Research, Stanford University, 1974.
2. Cinlar, E., Introduction to Stochastic Processes, Prentice-Hall, New Jersey, 1975.
3. Dynkin, E., "Game Variant of a Problem on Optimal Stopping", Soviet Math. Dokl., 10, 1969, pp. 270-274.
4. Fudenberg, D. and J. Tirole, "A Theory of Exit in Duopoly", Econometrica, 54, No. 4, 1986, pp. 943-960.
5. Ghemawat, P. and B. Nalebuff, "Exit", Rand Journal of Economics, 16, No. 2, 1985, pp. 184-194.
6. Harrison, J.M., Brownian Motion and Stochastic Flow Systems, John Wiley & Sons, New York, 1985.
7. Huang, C. and L. Li, "Continuous Time Stopping Games", Working Paper No. 1796-86, Sloan School of Management, MIT, 1986.
8. Kreps, D. and R. Wilson, "Sequential Equilibria", Econometrica, 50, 1982, pp. 863-894.
9. Li, L., A Stochastic Theory of the Firm, Doctoral dissertation, Northwestern University, 1984.
10. Mamer, J., "Monotone Stopping Games", Working Paper, Graduate School of Management, UCLA, 1986.
11. Port, S. and C. Stone, Brownian Motion and Classical Potential Theory, Academic Press, New York, 1978.
12. Selten, R., "Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games", International Journal of Game Theory, 4, 1975, pp. 25-55.
13. Shiryayev, A., Optimal Stopping Rules, Springer-Verlag, New York, 1978.
[Figure 1. Single-firm stopping problem in the monopoly model. Axes: X_t versus t; the zero-profit level π^{-1}(0) and the exit time T are marked.]
[Figure 2. Four single-firm stopping problems in the duopoly model. Axes: X_t versus t; the four thresholds y^{22}_t, y^{12}_t, y^{21}_t, y^{11}_t (at the levels π^{-1}_{22}(0), π^{-1}_{12}(0), π^{-1}_{21}(0), π^{-1}_{11}(0)) and the corresponding exit times T_{22}, T_{12}, T_{21}, T_{11} are marked.]