Mark Osegard, Ben Speidel, Megan Silberhorn, and Dickens Nyabuti

Discrete: Conditional Probability Mass Function

    P{X = x | Y = y} = P{X = x, Y = y} / P{Y = y}

Continuous: Conditional Probability Density Function

    f_{X|Y}(x | y) := f(x, y) / f_Y(y)
Discrete:

    E[X | Y = y] = Σ_x x P{X = x | Y = y}

Continuous:

    E[X | Y = y] = ∫_{−∞}^{+∞} x f_{X|Y}(x | y) dx

E[X | Y = y] is a function of y. We write this as E[X | Y], i.e.

    E[X | Y](y) = E[X | Y = y]

(Conditional Expectation Function)
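As a quick illustration, the conditional pmf and conditional expectation can be computed directly from a joint table. The joint pmf below is a small hypothetical example (not from the slides); exact fractions avoid floating-point noise:

```python
from fractions import Fraction as F

# Hypothetical joint pmf p(x, y) on a small grid (illustration only).
joint = {
    (0, 0): F(1, 8), (1, 0): F(3, 8),
    (0, 1): F(2, 8), (1, 1): F(2, 8),
}

def p_Y(y):
    # Marginal pmf of Y: P{Y = y} = sum_x P{X = x, Y = y}
    return sum(p for (x, yy), p in joint.items() if yy == y)

def cond_pmf(x, y):
    # P{X = x | Y = y} = P{X = x, Y = y} / P{Y = y}
    return joint.get((x, y), F(0)) / p_Y(y)

def cond_exp(y):
    # E[X | Y = y] = sum_x x * P{X = x | Y = y}
    return sum(x * cond_pmf(x, y) for x in {x for (x, _) in joint})

print(cond_pmf(1, 0))  # 3/4
print(cond_exp(0))     # 3/4
```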
E[X] = E[E[X | Y]]

Clearly, when Y is discrete,

    E[X] = Σ_y E[X | Y = y] P{Y = y}

When Y is continuous,

    E[X] = ∫_{−∞}^{+∞} E[X | Y = y] f_Y(y) dy
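The discrete form of this identity is easy to verify exactly. Below is a sketch using a small hypothetical joint pmf (not from the slides): E[X] computed directly agrees with Σ_y E[X | Y = y] P{Y = y}:

```python
from fractions import Fraction as F

# Hypothetical joint pmf (illustration only).
joint = {(0, 0): F(1, 8), (1, 0): F(3, 8), (0, 1): F(2, 8), (1, 1): F(2, 8)}
ys = {y for (_, y) in joint}

# Marginal of Y and the conditional expectation E[X | Y = y]
p_Y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in ys}
cond_E = {y: sum(x * p for (x, yy), p in joint.items() if yy == y) / p_Y[y]
          for y in ys}

lhs = sum(x * p for (x, _), p in joint.items())   # E[X] directly
rhs = sum(cond_E[y] * p_Y[y] for y in ys)         # E[E[X | Y]]
print(lhs, rhs)  # equal
```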
Recall, if X, Y are jointly continuous with joint pdf f(x, y),

Define:

    f_{X|Y}(x | y) = f(x, y) / f_Y(y)

and

    E[X | Y = y] = ∫_{−∞}^{+∞} x f_{X|Y}(x | y) dx

Then

    ∫_{−∞}^{+∞} E[X | Y = y] f_Y(y) dy
        = ∫_{−∞}^{+∞} [ ∫_{−∞}^{+∞} x f_{X|Y}(x | y) dx ] f_Y(y) dy
        = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x f_{X|Y}(x | y) f_Y(y) dx dy
        = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x [ f(x, y) / f_Y(y) ] f_Y(y) dx dy
        = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x f(x, y) dx dy
        = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} x f(x, y) dy dx    (Fubini's Theorem)
        = ∫_{−∞}^{+∞} x [ ∫_{−∞}^{+∞} f(x, y) dy ] dx
        = ∫_{−∞}^{+∞} x f_X(x) dx = E[X]

Therefore, concluding

    E[X] = E[E[X | Y]]
Var(X | Y) = E[ (X − E[X | Y])² | Y ]
           = E[X² | Y] − (E[X | Y])²

Var(X) = E[Var(X | Y)] + Var(E[X | Y])
Recall E[X] = E[E[X | Y]] and

    Var(X | Y) = E[X² | Y] − (E[X | Y])²

Taking expectations,

    E[Var(X | Y)] = E[E[X² | Y]] − E[(E[X | Y])²]
                  = E[X²] − E[(E[X | Y])²]

    Var(E[X | Y]) = E[(E[X | Y])²] − (E[E[X | Y]])²
                  = E[(E[X | Y])²] − (E[X])²

Adding the two,

    E[Var(X | Y)] + Var(E[X | Y]) = E[X²] − (E[X])²
                                  = Var(X)

    ∴ Var(X) = E[Var(X | Y)] + Var(E[X | Y])
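The decomposition can likewise be checked exactly on a small discrete example. The joint pmf below is hypothetical (not from the slides):

```python
from fractions import Fraction as F

# Hypothetical joint pmf (illustration only).
joint = {(0, 0): F(1, 8), (1, 0): F(3, 8), (0, 1): F(2, 8), (1, 1): F(2, 8)}
ys = {y for (_, y) in joint}

p_Y = {y: sum(p for (_, yy), p in joint.items() if yy == y) for y in ys}
E_X = sum(x * p for (x, _), p in joint.items())
E_X2 = sum(x * x * p for (x, _), p in joint.items())
var_X = E_X2 - E_X ** 2                                   # Var(X) directly

# Conditional mean and variance of X given Y = y
cE = {y: sum(x * p for (x, yy), p in joint.items() if yy == y) / p_Y[y]
      for y in ys}
cE2 = {y: sum(x * x * p for (x, yy), p in joint.items() if yy == y) / p_Y[y]
       for y in ys}
cVar = {y: cE2[y] - cE[y] ** 2 for y in ys}

E_var = sum(cVar[y] * p_Y[y] for y in ys)                 # E[Var(X | Y)]
var_E = sum(cE[y] ** 2 * p_Y[y] for y in ys) - E_X ** 2   # Var(E[X | Y])
print(var_X, E_var + var_E)  # equal
```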
2
Definition
Application to Probability
Applications of Stopping Times to other formulas
Basic Definition:
A stopping time for a process does exactly that: it tells the process when to stop.

Ex)  while ( x != 4 )
     { … }

The stopping time for this code fragment is the first instant at which x equals 4.
Define:
Suppose we have a sequence of random variables, all independent of each other:

    X1, X2, X3, ...

A discrete random variable N is a stopping time for this sequence if, for every n, the event

    {N = n}

is independent of all following items in the sequence:

    X_{n+1}, X_{n+2}, ...
Summarizing the idea of stopping times with random variables: the decision to stop the sequence at random variable N depends solely on the values of the sequence

    X1, X2, ..., Xn

Because of this, N is independent of all remaining values X_m, m > n.
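The idea can be sketched in code, mirroring the earlier `while (x != 4)` fragment. The draws here are uniform on 1..6 (an arbitrary choice for illustration):

```python
import random

random.seed(1)

def run_until_four():
    # Keep drawing X1, X2, ... and stop the first time a draw equals 4.
    # Whether we stop at step n depends only on X1, ..., Xn, never on
    # later draws -- exactly the stopping-time property: {N = n} is
    # independent of X_{n+1}, X_{n+2}, ...
    n = 0
    x = None
    while x != 4:
        x = random.randint(1, 6)
        n += 1
    return n

print(run_until_four())
```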
Do stopping times affect expectation? No!

Consider the random sum

    S = Σ_{i=1}^{N} X_i

Then

    E[S] = E[N] E[X]

i.e. the formula for the expectation of a random sum remains unchanged when N is a stopping time.
For an example of how to use stopping times to solve a problem, we now introduce Wald's Equation:

    E[ Σ_{i=1}^{N} X_i ] = E[N] E[X]
If {X1, X2, X3, ...} are independent identically distributed (iid) random variables having a finite expectation E[X], and N is a stopping time for the sequence having finite expectation E[N], then:

    E[ Σ_{i=1}^{N} X_i ] = E[N] E[X]
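A quick Monte Carlo sketch of Wald's Equation (with assumed iid die rolls and the stopping rule "stop at the first 6", so E[N] = 6 and E[X] = 3.5, predicting E[Σ X_i] = 21):

```python
import random

random.seed(42)

def one_trial():
    # Sum iid rolls X_i ~ Uniform{1,...,6} until the first 6 appears.
    # N = min{i | X_i = 6} is a stopping time: the decision to stop
    # after n rolls depends only on X1, ..., Xn.
    total = 0
    while True:
        x = random.randint(1, 6)
        total += x
        if x == 6:
            return total

trials = 100_000
avg = sum(one_trial() for _ in range(trials)) / trials
print(avg)  # close to E[N] * E[X] = 6 * 3.5 = 21
```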
Let N1 = N represent the stopping time for the sequence

    {X1, X2, ..., X_{N1}}

Let N2 = the stopping time for the sequence

    {X_{N1+1}, X_{N1+2}, ..., X_{N1+N2}}

Let N3 = the stopping time for the sequence

    {X_{N1+N2+1}, X_{N1+N2+2}, ..., X_{N1+N2+N3}}
We can now define the sequence of stopping times as

    {N_i}, i ≥ 1

where {N_i} clearly represents

    {N1, N2, N3, ...}

and see the sequence is iid.
If we define a sequence {S_i} as

    S1 = Σ_{i=1}^{N1} X_i ,   S2 = Σ_{i=N1+1}^{N1+N2} X_i ,   ... ,   Sm = Σ_{i=N1+...+N_{m−1}+1}^{N1+...+Nm} X_i

Note: the {S_i} are iid.
Consider the ratio

    (S1 + S2 + ... + Sm) / (N1 + N2 + ... + Nm)

Its numerator is the sum of the first N1 + N2 + ... + Nm of the Xi's, which are iid with common mean E[Xi] = E[X].
By the Strong Law of Large Numbers,

    lim_{m→∞} (S1 + S2 + ... + Sm) / (N1 + N2 + ... + Nm) = E[X]
Also, dividing numerator and denominator by m,

    (S1 + S2 + ... + Sm) / m  →  E[S]
    (N1 + N2 + ... + Nm) / m  →  E[N]

as we let m → ∞, so

    E[X] = E[S] / E[N]

which can be manipulated into our proposition:

    E[S] = E[ Σ_{i=1}^{N} X_i ] = E[N] E[X]
Sample Problem: Conditional Expectation and Stopping Times in Probability
A miner is trapped in a room containing three
doors. Door one leads to a tunnel that returns
to the same room after 4 days; door two
leads to a tunnel that returns to the same
room after 7 days; door three leads to
freedom after a 3 day journey. If the miner is
at all times equally likely to choose any of the
doors, find the expected value and the
variance of the time it takes the miner to
become free.
Using Wald's Equation:

    X_i ∈ {4, 7, 3},    N = min{ i | X_i = 3 }

N is the first trial on which door three is chosen, so N is geometric with p = 1/3 and

    ∴ E[N] = 1 / (1/3) = 3
    E[X] = E[N] E[X_i]

Since E[X_i] = (4 + 7 + 3)/3 = 14/3,

    E[X] = 3 ∗ (14/3) = 14 days

For the variance, use the conditional variance formula:

    Var(X) = E[Var(X | N)] + Var(E[X | N])
    E[X | N = n] = E[ Σ_{i=1}^{n} X_i | N = n ]
                 = Σ_{i=1}^{n} E[X_i | N = n]
    E[X_i | N = n] = Σ_x x P{X_i = x | N = n}
                   = 4 P{X_i = 4 | N = n} + 7 P{X_i = 7 | N = n} + 3 P{X_i = 3 | N = n}

For i < n, X_i is equally likely to be 4 or 7; for i = n, X_i = 3. Therefore

    ∴ E[X_i | N = n] = 11/2   if i < n
                     = 3      if i = n
    Σ_{i=1}^{n} E[X_i | N = n] = Σ_{i=1}^{n−1} E[X_i | N = n] + E[X_n | N = n]
                               = (n − 1)(11/2) + 3
                               = (11/2) n − 5/2

    ∴ E[X | N] = (11/2) N − 5/2
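As a consistency check (not on the slides): taking expectations of E[X | N] = (11/2)N − 5/2 with E[N] = 3 should reproduce the 14 days obtained from Wald's Equation. A two-line exact computation:

```python
from fractions import Fraction as F

# E[X] = E[E[X | N]] = (11/2) * E[N] - 5/2 with E[N] = 3
E_N = F(3)
E_X = F(11, 2) * E_N - F(5, 2)
print(E_X)  # 14
```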
Using Var(aX + b) = a² Var(X),

    Var(E[X | N]) = Var( (11/2) N − 5/2 ) = (121/4) Var(N)

Since N is geometric with p = 1/3,

    Var(N) = (1 − p) / p² = (1 − 1/3) / (1/3)² = 6

so

    Var(E[X | N]) = (121/4) ∗ 6 = 363/2

Thus

    Var(X) = E[Var(X | N)] + Var(E[X | N]) = E[Var(X | N)] + 363/2

and it remains to find E[Var(X | N)].
    Var(X | N = n) = Var( Σ_{i=1}^{n} X_i | N = n )
                   = Σ_{i=1}^{n} Var(X_i | N = n)

since, given N = n, the X_i are conditionally independent.
    Var(X_i | N = n) = E[X_i² | N = n] − (E[X_i | N = n])²

    E[X_i² | N = n] = Σ_x x² P{X_i = x | N = n}
                    = 4² P{X_i = 4 | N = n} + 7² P{X_i = 7 | N = n} + 3² P{X_i = 3 | N = n}

    E[X_i² | N = n] = 65/2   if i < n
                    = 9      if i = n

    ∴ Var(X_i | N = n) = 65/2 − 121/4 = 9/4   if i < n
                       = 9 − 9 = 0             if i = n
    Var(X | N = n) = Σ_{i=1}^{n−1} Var(X_i | N = n) + Var(X_n | N = n)
                   = (n − 1)(9/4)

so Var(X | N) = (9/4)(N − 1), and

    E[Var(X | N)] = E[ (9/4)(N − 1) ] = (9/4)(E[N] − E[1]) = (9/4)(3 − 1) = 9/2
Finally,

    Var(X) = E[Var(X | N)] + Var(E[X | N])
           = 9/2 + 363/2
           = 372/2
           = 186
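A Monte Carlo sketch of the miner problem confirms both answers: the simulated mean and variance of the escape time should land near 14 and 186:

```python
import random

random.seed(7)

def escape_time():
    # Doors take 4, 7, or 3 days, each chosen with probability 1/3;
    # the miner is free once the 3-day door is taken.
    total = 0
    while True:
        d = random.choice((4, 7, 3))
        total += d
        if d == 3:
            return total

times = [escape_time() for _ in range(200_000)]
mean = sum(times) / len(times)
var = sum((t - mean) ** 2 for t in times) / len(times)
print(round(mean, 1), round(var))  # close to 14 and 186
```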
Thanks to Dr. Steve Deckelman and Dr. Ayub Hossain, who helped make this a success!