STOCHASTIC MODELS
LECTURE 1
MARKOV CHAINS
Nan Chen
MSc Program in Financial Engineering
The Chinese University of Hong Kong
(Shenzhen)
Sept. 9, 2015
Outline
1. Introduction
2. Chapman-Kolmogorov Equations
3. Classification of States
4. The Gambler's Ruin Problem
1.1 INTRODUCTION
What is a Stochastic Process?
• A stochastic process is a collection of random variables that are indexed by time. Usually, we denote it by
– $(X_n,\ n = 0, 1, 2, \dots)$ or
– $(X_t,\ t \ge 0)$.
• Examples:
– Daily average temperature on CUHK-SZ campus
– Real-time stock price of Google
Motivation of Markov Chains
• Stochastic processes are widely used to
characterize the temporal relationship
between random variables.
• The simplest such model would take the $X_n$ to be independent of each other.
• But this model may not provide a reasonable approximation to financial markets.
What is a Markov Chain?
• Let $(X_n,\ n = 0, 1, 2, \dots)$ be a stochastic process that takes on a finite/countable number of possible states. We call it a Markov chain if the conditional distribution of $X_{n+1}$ depends on the past observations $X_0, X_1, \dots, X_n$ only through $X_n$. Namely,
$$P(X_{n+1} = j \mid X_n = i,\ X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i)$$
for all $n$ and all states $i_0, \dots, i_{n-1}, i, j$.
The Markovian Property
• It can be shown that the definition of Markov chains is equivalent to stating that, for every $n$ and every event $B$ determined by $X_0, \dots, X_{n-1}$,
$$P\big(X_{n+1} = j,\ B \mid X_n = i\big) = P(X_{n+1} = j \mid X_n = i)\, P(B \mid X_n = i).$$
• In words, given the current state of the process, its future and historical movements are independent.
Financial Relevance: Efficient Market
Hypothesis
• The Markovian property turns out to be highly relevant to financial modeling in light of one of the most profound theories in the history of modern finance: the efficient market hypothesis (EMH).
• It states that market information, such as the information reflected in past trading records or published in the financial press, must be absorbed and reflected quickly in the stock price.
More about EMH: a Thought
Experiment
• Let us start with the following thought
experiment:
Assume that Prof. Chen had invented a formula that could predict the movements of Google's stock price very accurately.
What would happen if this formula were unveiled to the public?
More about EMH: a Thought
Experiment
• Suppose that it predicted that Google's stock price would rise dramatically in three days, from US$650 to US$700.
– The prediction would induce a great wave of immediate buy orders.
– The huge demand for Google's stock would push its price to jump to $700 immediately.
– The formula fails!
• A true story: Edward Thorp and the Black-Scholes formula.
Implication of Efficient Market
Hypothesis
• One implication of EMH is that given the
current stock price, knowing its history will
help very little in predicting its future.
• Therefore, we should use Markov processes to model the dynamics of financial variables.
Transition Matrix
• In this lecture, we only consider time-homogeneous Markov chains; that is, the transition probabilities $P(X_{n+1} = j \mid X_n = i)$ are independent of the time $n$.
• Denote $p_{ij} := P(X_{n+1} = j \mid X_n = i)$. We then can use the following matrix to characterize the process:
$$P = \begin{pmatrix} p_{00} & p_{01} & p_{02} & \cdots \\ p_{10} & p_{11} & p_{12} & \cdots \\ p_{20} & p_{21} & p_{22} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
Transition Matrix
• The transition matrix of a Markov chain must be a stochastic matrix:
– $p_{ij} \ge 0$ for all $i, j$;
– $\sum_{j} p_{ij} = 1$ for every row $i$.
Example I: Forecasting the Weather
• Suppose that the chance of rain tomorrow in
Shenzhen depends on previous weather
conditions only through whether or not it is
raining today.
• Assume that if it rains today, then it will rain
tomorrow with probability 70%; and if it does
not rain today, then it will rain tomorrow with
prob. 50%.
• How can we use a Markov chain to model it?
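• One possible formulation is sketched below in Python (the two-state encoding and the use of numpy are our own choices, not part of the example): take state 0 = "rain" and state 1 = "no rain", and fill the rows of the transition matrix with the given probabilities.

```python
import numpy as np

# State 0 = "rain", state 1 = "no rain".
# Row i is the conditional distribution of tomorrow's weather
# given that today's weather is state i.
P = np.array([
    [0.7, 0.3],  # rain today -> P(rain tomorrow), P(no rain tomorrow)
    [0.5, 0.5],  # no rain today
])

# Sanity check: a transition matrix must be stochastic (rows sum to 1).
assert np.allclose(P.sum(axis=1), 1.0)
```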
Example II: 1-dimensional Random
Walk
• A Markov chain whose state space is given by the integers $0, \pm 1, \pm 2, \dots$ is said to be a random walk if, for some number $0 < p < 1$,
$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 0, \pm 1, \pm 2, \dots$$
• We say the random walk is symmetric if $p = 1/2$, and asymmetric if $p \ne 1/2$.
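• For intuition, here is a minimal simulation sketch (the function name, seed, and horizon are arbitrary choices):

```python
import random

def random_walk(p, n_steps, seed=2015):
    """Simulate one path of a 1-d random walk started at 0:
    each step is +1 with probability p and -1 with probability 1 - p."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

print(random_walk(0.5, 10))  # one trajectory of the symmetric walk
```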
1.2 CHAPMAN-KOLMOGOROV EQUATIONS
The Chapman-Kolmogorov Equations
• The C-K equations provide a method for computing the $n$-step transition probabilities of a Markov chain:
$$P_{ij}^{n+m} = P(X_{n+m} = j \mid X_0 = i) = \sum_{k} P_{ik}^{n} P_{kj}^{m},$$
or, in matrix form, $P^{n+m} = P^{n} P^{m}$.
Example III: Rain Probability
• Reconsider the situation in Example I. Given
that it is raining today, what is the probability
that it will rain four days from today?
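• A quick way to get the answer (a sketch reusing the rain/no-rain matrix from Example I, with numpy assumed): by the C-K equations, the four-day transition probabilities are the entries of $P^4$.

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.5, 0.5]])

# Chapman-Kolmogorov in matrix form: the 4-step matrix is P^4.
P4 = np.linalg.matrix_power(P, 4)

# Entry (0, 0): P(rain in four days | rain today).
print(P4[0, 0])  # approximately 0.6256
```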
Example IV: Urn and Balls
• An urn always contains 2 balls, each of which is either red or blue. At each stage a ball is randomly chosen and then replaced by a new ball, which has the same color as the chosen ball with prob. 80% and the opposite color with prob. 20%.
• If initially both balls are red, find the probability that the fifth ball selected is red.
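• One way to set this up (a sketch; taking the state to be the number of red balls currently in the urn, so the chain lives on {0, 1, 2}):

```python
import numpy as np

# State = number of red balls in the urn.
# From state i, a red ball is drawn with prob. i/2; the replacement
# ball keeps the drawn color with prob. 0.8 and flips it with prob. 0.2.
P = np.array([
    [0.8, 0.2, 0.0],  # 0 red: a blue is drawn; new ball is red w.p. 0.2
    [0.1, 0.8, 0.1],  # 1 red: lose the red w.p. 0.5*0.2, gain one w.p. 0.5*0.2
    [0.0, 0.2, 0.8],  # 2 red: a red is drawn; new ball is blue w.p. 0.2
])

# The fifth ball is drawn after four replacements, so condition on X_4
# and average the probabilities i/2 of drawing red in each state.
dist4 = np.linalg.matrix_power(P, 4)[2]    # start: both balls red
print(dist4 @ np.array([0.0, 0.5, 1.0]))   # approximately 0.7048
```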
1.3 STATE CLASSIFICATION
Asymptotic Behavior of Markov
Chains
• It is frequently of interest to study the asymptotic behavior of $P_{ij}^{n}$ as $n \to +\infty$.
• One may expect that the influence of the initial state recedes over time and that, consequently, as $n \to +\infty$, $P_{ij}^{n}$ approaches a limit that is independent of $i$.
• In order to analyze this issue precisely, we need to introduce some principles for classifying the states of a Markov chain.
Example V
• Consider a Markov chain consisting of the 4 states 0, 1, 2, 3 and having transition probability matrix
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
• By your intuition, what is the most improbable state after 1,000 steps?
Accessibility and Communication
• State $j$ is said to be accessible from state $i$ if $P_{ij}^{n} > 0$ for some $n \ge 0$.
– In the previous slide, state 3 is accessible from state 2.
– But state 2 is not accessible from state 3.
• Two states $i$ and $j$ are said to communicate if they are accessible to each other. We write $i \leftrightarrow j$.
– States 0 and 1 communicate in the previous
example.
Simple Properties of Communication
• The relation of communication satisfies the
following three properties:
– State i communicates with itself;
– If state i communicates with state j, then state
j communicates with state i;
– If state i communicates with state j, and state j
communicates with state k, then state i
communicates with state k.
State Classes
• Two states that communicate are said to be
in the same class.
• It is an easy consequence of the three
properties in the last slide that any two
classes are either identical or disjoint. In
other words, the concept of communication
divides the state space into a number of
separate classes.
• In the previous example, we have three classes: $\{0, 1\}$, $\{2\}$, $\{3\}$.
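• These classes can also be found mechanically: $i \leftrightarrow j$ exactly when each state is reachable from the other. A minimal sketch (assuming numpy; the helper name is our own), applied to the matrix of Example V:

```python
import numpy as np

def communicating_classes(P):
    """Partition the states of a finite Markov chain into communicating classes."""
    n = P.shape[0]
    # reach[i, j] is True iff j is accessible from i (0 steps allowed,
    # so every state is accessible from itself).
    reach = np.eye(n, dtype=bool) | (P > 0)
    for k in range(n):                     # Warshall transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    comm = reach & reach.T                 # i <-> j: accessible both ways
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = set(np.flatnonzero(comm[i]).tolist())
            classes.append(cls)
            seen |= cls
    return classes

P = np.array([[1/2, 1/2, 0,   0],
              [1/2, 1/2, 0,   0],
              [1/4, 1/4, 1/4, 1/4],
              [0,   0,   0,   1]])
print(communicating_classes(P))  # [{0, 1}, {2}, {3}]
```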
Example VI: Irreducible Markov Chain
• Consider the Markov chain consisting of the three states 0, 1, 2 and having transition probability matrix
$$P = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/4 & 1/4 \\ 0 & 1/3 & 2/3 \end{pmatrix}$$
How many classes does it contain?
• The Markov chain is said to be irreducible if
there is only one class.
Recurrence and Transience
• Consider an arbitrary state i in a generic
Markov chain. Define
$$f_{ii}^{n} := P(X_n = i,\ X_\nu \ne i \text{ for } \nu = 1, 2, \dots, n-1 \mid X_0 = i).$$
In other words, $f_{ii}^{n}$ represents the probability that, starting from state $i$, the first return to state $i$ occurs at the $n$th step.
• Let
$$f_i = \sum_{n=1}^{+\infty} f_{ii}^{n}.$$
Recurrence and Transience
(Continued)
• We say a state $i$ is recurrent if $f_i = 1$. That is to say, a state is recurrent if and only if, starting from this state, the probability of returning to it after some finite length of time is 100%.
• It is easy to argue that if a state is recurrent then, starting from this state, the Markov chain will return to it again, and again, and again --- in fact, infinitely often.
Recurrence and Transience
(Continued)
• A non-recurrent state is said to be transient; i.e., a transient state $i$ satisfies $f_i < 1$.
• Starting from a transient state $i$,
– the process will never revisit the state, with probability $1 - f_i$;
– the process will revisit the state exactly once, with probability $f_i (1 - f_i)$;
– the process will revisit the state exactly twice, with probability $f_i^2 (1 - f_i)$;
– and so on: the number of future visits is geometrically distributed.
Recurrence and Transience
(Continued)
• From the above two definitions, we can
easily see the following two conclusions:
– A transient state will only be visited a finite
number of times.
– In a finite-state Markov chain not all states can
be transient.
• In Example V, states 0, 1, 3 are recurrent, and
state 2 is transient.
One Commonly Used Criterion of
Recurrence
• Theorem: A state $i$ is recurrent if and only if
$$\sum_{n=1}^{+\infty} P_{ii}^{n} = +\infty.$$
• You may refer to Example 4.18 in Ross to see one application of this criterion: a proof that the one-dimensional symmetric random walk is recurrent.
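• As a rough numerical illustration of the criterion (a sketch; it uses the fact that a return to 0 requires equally many up- and down-steps, so $P_{00}^{2n} = \binom{2n}{n}\,(p(1-p))^{n}$): the partial sums grow without bound when $p = 1/2$ but level off when $p \ne 1/2$.

```python
def partial_sum(p, N):
    """Sum of the return probabilities P_00^(2n), n = 1..N, for a
    1-d random walk with up-probability p.

    P_00^(2n) = C(2n, n) * (p*(1-p))**n, computed via a stable
    recurrence to avoid huge binomial coefficients.
    """
    x = p * (1 - p)
    term, total = 1.0, 0.0
    for n in range(1, N + 1):
        term *= (2 * n) * (2 * n - 1) / (n * n) * x  # ratio of consecutive terms
        total += term
    return total

for N in (10, 100, 1000, 10000):
    print(N, partial_sum(0.5, N), partial_sum(0.6, N))
# The p = 0.5 column keeps growing (roughly like 2*sqrt(N/pi)),
# while the p = 0.6 column converges: recurrence vs. transience.
```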
Recurrence as a Class Property
• Theorem: If state i is recurrent, and state i
communicates with state j, then state j is
recurrent.
• Two conclusions can be drawn from the
theorem:
– Transience is also a class property.
– All states of a finite irreducible Markov chain are
recurrent.
Example VII
• Let the Markov chain consisting of the states 0, 1, 2, 3 have the transition probability matrix
$$P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$$
Determine which states are transient and
which are recurrent.
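• With the class-finder sketched in the "State Classes" section, one can check mechanically that this chain has a single class; being finite and irreducible, all of its states are therefore recurrent.

```python
import numpy as np

# Transition matrix of Example VII; communicating_classes() is the
# helper sketched earlier in the "State Classes" section.
P = np.array([[0, 0, 1/2, 1/2],
              [1, 0, 0,   0],
              [0, 1, 0,   0],
              [0, 1, 0,   0]])
print(communicating_classes(P))  # [{0, 1, 2, 3}]: one class, all recurrent
```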
Example VIII
• Discuss the recurrence properties of a one-dimensional random walk.
• Conclusion:
– The symmetric random walk is recurrent;
– the asymmetric random walk is transient.
1.4 THE GAMBLER’S RUIN
PROBLEM
The Gambler’s Ruin Problem
• Consider a gambler who at each play of the game has probability $p$ of winning one dollar and probability $q = 1 - p$ of losing one dollar. Assuming that successive plays of the game are independent, what is the probability that, starting with $i$ dollars, the gambler's fortune will reach $N$ dollars before he is ruined (i.e., before his fortune reaches 0)?
Markov Description of the Model
• If we let $X_n$ denote the player's fortune at time $n$, then the process $\{X_n,\ n = 0, 1, 2, \dots\}$ is a Markov chain with transition probabilities
– $P_{00} = P_{NN} = 1$;
– $P_{i,i+1} = p = 1 - P_{i,i-1}$, $\ i = 1, 2, \dots, N-1$.
• The Markov chain has three classes: $\{0\}$, $\{1, 2, \dots, N-1\}$, $\{N\}$.
Solution
• Let $P_i$ be the probability that, starting with $i$ dollars, the gambler's fortune will eventually reach $N$.
• By conditioning on the outcome of the initial play,
$$P_i = p P_{i+1} + q P_{i-1}, \qquad i = 1, 2, \dots, N-1,$$
with $P_0 = 0$ and $P_N = 1$.
• Since $p + q = 1$, this recursion can be rearranged as $P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1})$.
Solution (Continued)
• Hence, we obtain from the preceding slide that
– $P_2 - P_1 = \frac{q}{p}(P_1 - P_0) = \frac{q}{p} P_1$;
– $P_3 - P_2 = \frac{q}{p}(P_2 - P_1) = \left(\frac{q}{p}\right)^2 P_1$;
– ……
– $P_N - P_{N-1} = \frac{q}{p}(P_{N-1} - P_{N-2}) = \left(\frac{q}{p}\right)^{N-1} P_1$.
Solution (Continued)
• Adding all the equalities up and using $P_N = 1$, we obtain
$$P_1 = \begin{cases} 1/N, & p = 1/2; \\ \dfrac{1 - q/p}{1 - (q/p)^N}, & p \ne 1/2; \end{cases}$$
$$P_i = \begin{cases} i/N, & p = 1/2; \\ \dfrac{1 - (q/p)^i}{1 - (q/p)^N}, & p \ne 1/2. \end{cases}$$
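• The closed form is easy to sanity-check by simulation. A sketch (the particular values of $p$, $i$, $N$ and the trial count are arbitrary choices):

```python
import random

def win_prob(p, i, N):
    """Closed-form probability of reaching N before 0, starting from i."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                      # this is q/p
    return (1 - r**i) / (1 - r**N)

def win_prob_mc(p, i, N, trials=100_000, seed=1):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:
            x += 1 if rng.random() < p else -1
        wins += (x == N)
    return wins / trials

print(win_prob(0.6, 5, 10))     # exact: approximately 0.8836
print(win_prob_mc(0.6, 5, 10))  # should agree to about two decimals
```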
Solution (Continued)
• Note that, as $N \to +\infty$,
$$P_i \to \begin{cases} 0, & p \le 1/2; \\ 1 - \left(\dfrac{q}{p}\right)^{i}, & p > 1/2. \end{cases}$$
• Thus, if $p > 1/2$, there is a positive probability that the gambler's fortune will increase indefinitely; while if $p \le 1/2$, the gambler will, with probability 1, be ruined against an infinitely rich adversary (say, a casino).
Homework Assignments
• Read Ross, Sections 4.1, 4.2, 4.3, and 4.5 (you may ignore 4.5.3).
• Answer Questions:
– Exercises 2, 3, 5, 6 (Page 261, Ross)
– Exercises 13, 14 (Page 262, Ross)
– Exercises 56, 57, 58 (Page 270, Ross)
– (Optional, Extra Bonus) Exercise 59 (page 270,
Ross).