Math-UA.233: Theory of Probability
Lecture 19
Tim Austin
From last time... 1
If A is a regular region in the plane, then the random vector (X, Y) is uniformly distributed in A, or Unif(A), if it is jointly continuous with PDF

f(x, y) = 1/area(A)   if (x, y) lies in A,
f(x, y) = 0           if (x, y) lies outside A.
From last time... 2
Example (Ross E.g. 6.1d)
Suppose (X, Y) is uniformly distributed in a disk of radius R centred at (0, 0).
(b) Find the marginal density functions of X and Y.
(c) Let D be the distance from (X, Y) to the origin. Find P{D ≤ r} for positive real values r.
(d) Find E[D].
In this question, part (c) gives the CDF of D. To solve part (d), we then observe that D is a continuous RV, differentiate to find the PDF of D, and plug this into the definition of E[D].
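Concretely, a sketch of the computation for 0 ≤ r ≤ R:

F_D(r) = P{D ≤ r} = (area of disk of radius r) / (area of disk of radius R) = r²/R²,
f_D(r) = F_D'(r) = 2r/R²,
E[D] = ∫_0^R r · (2r/R²) dr = 2R/3.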
Functions of two jointly continuous RVs
This is an example of a very general problem.
Suppose X, Y are two RVs, and g(x, y) is a real-valued function of two real arguments. Then g(X, Y) is a new RV.
If X and Y are discrete, then so is g(X, Y).
But if (X, Y) are jointly continuous with joint PDF f, then g(X, Y) need not be continuous, even if g is a relatively ‘nice’ function.
In general, to find the CDF F_{g(X,Y)}(u), we have to integrate f over the region {g(x, y) ≤ u}. This can be very complicated; sometimes it is hard even to define that integral. (Yet another problem stemming from the possibility of very complicated sets.) We won’t solve problems of this kind in general.
But...
... just like for E[g(X)], there is a simple formula for E[g(X, Y)]. It has a discrete and a continuous version.
Proposition (Ross Prop 7.2.1)
Let g(x, y) be a real-valued function of two real arguments. If X and Y are discrete with joint PMF p, then

E[g(X, Y)] = ∑ g(x, y) p(x, y),

where the sum runs over all (x, y) such that p(x, y) > 0. If X, Y are jointly continuous with joint PDF f, then

E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy.
No proof here. Just remember the analogy with E[g(X)].
Example (Ross E.g. 7.2a)
An accident occurs at a point X that is uniformly distributed
along a road of length L. At the time of the accident, an
ambulance is at location Y that is also uniformly distributed on
the road. Assuming that X and Y are independent, find the
expected distance between the ambulance and the point of the
accident.
NOTE: In this example, “X and Y are independent” implies that the random point (X, Y) is uniformly distributed in the square {0 ≤ x, y ≤ L}. More on this shortly...
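Working out the double integral of |x − y| over this square gives E[|X − Y|] = L/3. A small simulation sketch (NumPy; L = 1 is chosen just for illustration) to sanity-check this:

import numpy as np

# Simulation sketch for the ambulance example: estimate E|X - Y| for
# independent X, Y ~ Unif(0, L). L = 1 is an arbitrary illustrative value.
rng = np.random.default_rng(0)
L, n = 1.0, 10**6
x = rng.uniform(0, L, n)     # location of the accident
y = rng.uniform(0, L, n)     # location of the ambulance
print(np.abs(x - y).mean())  # should be close to L/3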
Important consequence of the previous proposition: linearity of
expectation for jointly continuous RVs (Ross p282).
Proposition
If X and Y are jointly continuous, then

E[X + Y] = E[X] + E[Y],

and similarly for larger collections of jointly continuous RVs.
Using some deeper theory, one can define expectation for more
general RVs, and one finds that this property still holds.
In this class, you may always quote linearity of expectation for
any RVs that come up. We’ll use this a lot later.
Independence of RVs (Ross Sec 6.2)
Now suppose that X and Y are any RVs — discrete,
continuous, whatever.
Definition (Ross eqn. 6.2.1)
X and Y are independent if the events

{X takes a value in A}   and   {Y takes a value in B}

are independent for any two sets of real numbers A and B.
That is, “any event defined solely in terms of X is independent of any event defined solely in terms of Y”.
The definition of independence for three or more RVs
corresponds in the same way to independence for three or
more events.
It would be very tedious to check the above definition for all
possible sets A and B! Happily, we don’t have to.
Proposition (Ross p228-9)
X and Y are independent
- if and only if their joint CDF is given by F_{X,Y}(a, b) = F_X(a) F_Y(b) for all a, b;
- (in the discrete case) if and only if their joint PMF is given by p_{X,Y}(x, y) = p_X(x) p_Y(y);
- (in the jointly continuous case) if and only if their joint PDF is given by f_{X,Y}(x, y) = f_X(x) f_Y(y).
(Provided, as usual, that we don’t worry about issues with crazy choices of A and B.)
So very often we are told about two RVs separately, told their individual PMFs or PDFs, and told that they are independent. Then, because of the independence, we also know what their joint PMF or PDF has to be. (Without independence we can’t deduce it from that other information.)
Example (Ross E.g. 6.2a)
Perform n + m independent trials with success probability p. Let

X = number of successes in the first n trials,
Y = number of successes in the remaining m trials.

Then these RVs are independent.
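A simulation sketch of this independence (n = 4, m = 6 and p = 0.5 are made-up values; the check compares one joint probability with the product of its marginals):

import numpy as np

# Sketch: success counts in disjoint blocks of independent trials are independent.
rng = np.random.default_rng(0)
n, m, p, reps = 4, 6, 0.5, 10**6
trials = rng.random((reps, n + m)) < p  # reps independent runs of n + m trials
x = trials[:, :n].sum(axis=1)           # successes in the first n trials
y = trials[:, n:].sum(axis=1)           # successes in the remaining m trials
# Independence check at one cell: P{X = 2, Y = 3} vs P{X = 2} P{Y = 3}.
print(np.mean((x == 2) & (y == 3)), np.mean(x == 2) * np.mean(y == 3))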
Independence appears naturally in the splitting property of
the Poisson distribution.
Example (Ross E.g. 6.2b)
The number of people who board a bus on a given day is a Poi(λ) RV. Each of them is a senior with probability p, independently of the others. Then the numbers of seniors and non-seniors boarding the bus are independent RVs which are Poi(pλ) and Poi((1 − p)λ), respectively.
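A simulation sketch of this splitting property (λ = 5 and p = 0.3 are made-up values):

import numpy as np

# Sketch of Poisson splitting: total riders ~ Poi(lam), each a senior with prob p.
rng = np.random.default_rng(0)
lam, p, days = 5.0, 0.3, 10**6
total = rng.poisson(lam, days)             # riders per day
seniors = rng.binomial(total, p)           # thin each day's total
others = total - seniors
print(seniors.mean(), seniors.var())       # both should be near p * lam = 1.5
print(others.mean(), others.var())         # both should be near (1 - p) * lam = 3.5
print(np.corrcoef(seniors, others)[0, 1])  # near 0, consistent with independence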
Now some continuous examples:
Example (Buffon’s needle, Ross E.g. 6.2d)
A table is made of straight planks of wood of width D. A needle of length L, where L ≤ D, is thrown haphazardly onto the table. What is the probability that in its final location it crosses one of the lines between planks?
KEY ASSUMPTION: The distance from the centre of the needle to the nearest line between planks is independent of the needle’s angle, and both are uniform.
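Under that assumption the answer works out to 2L/(πD). A simulation sketch of the experiment (the parametrization below is one natural reading of the key assumption, and D = 1, L = 0.5 are made-up values):

import numpy as np

# Sketch of Buffon's needle: X ~ Unif(0, D/2) is the distance from the needle's
# centre to the nearest line, theta ~ Unif(0, pi/2) is the acute angle between
# the needle and the lines, and they are independent. The needle crosses a line
# exactly when X <= (L/2) * sin(theta).
rng = np.random.default_rng(0)
D, L, n = 1.0, 0.5, 10**6
x = rng.uniform(0, D / 2, n)
theta = rng.uniform(0, np.pi / 2, n)
crosses = x <= (L / 2) * np.sin(theta)
print(crosses.mean(), 2 * L / (np.pi * D))  # estimate vs the classical answer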
Our previous characterizations of independence work just the
same for three or more RVs.
Example (Ross E.g. 6.2h)
Let X, Y and Z be independent Unif(0, 1) RVs. Find P{X ≥ YZ}.
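Conditioning on Y and Z gives P{X ≥ YZ} = E[1 − YZ] = 1 − E[Y]E[Z] = 3/4. A quick simulation sketch to confirm:

import numpy as np

# Monte Carlo check of P{X >= Y*Z} for independent Unif(0, 1) RVs X, Y, Z.
rng = np.random.default_rng(0)
x, y, z = rng.uniform(size=(3, 10**6))
print((x >= y * z).mean())  # should be close to 3/4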
If (X, Y) are jointly continuous with joint PDF f, then we can usually spot quickly whether they are independent. The following observation can help.
Proposition (Ross Prop 6.2.1)
The above X and Y are independent if and only if f can be expressed as

f(x, y) = g(x) h(y)   for all −∞ < x, y < ∞,

using some functions g and h.
(There’s also a version for discrete RVs: see Ross.)
The point is that you don’t have to know that g = f_X and h = f_Y to make this check.
But you do still have to be careful when using this proposition!
Example (Ross E.g. 6.2f)
If the joint PDF of X and Y is

f(x, y) = 6 e^{−2x} e^{−3y}   for 0 < x, y < ∞

and 0 elsewhere, then X and Y are independent. How about if

f(x, y) = 24xy   for 0 < x, y < 1 and 0 < x + y < 1

and 0 elsewhere?
Transformations of jointly cts RVs (Ross sec 6.7)
For a single continuous RV X, we found a formula for the PDF of g(X) for certain special functions g. Reminder from lec 16:
Suppose X is continuous with PDF f_X, and that we always have a ≤ X ≤ b for some a and b. Assume g(x) is (i) strictly increasing and (ii) differentiable on [a, b]. Then Y = g(X) is also continuous, g(a) ≤ Y ≤ g(b), and

f_Y(y) = f_X(g^{−1}(y)) (g^{−1})'(y)   for g(a) ≤ y ≤ g(b).

This is all still OK if a = −∞ or b = ∞.
In this case, we can think of g as a change of variables from the
interval ra, b s to rg paq, g pb qs. Every value in the former gets
mapped to a unique value in the latter.
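As a concrete illustration of this formula (the example is not from Ross; X ~ Unif(0, 1) and g(x) = x² are chosen just for illustration): here g^{−1}(y) = √y, so the formula predicts f_Y(y) = 1/(2√y) for 0 < y < 1. A quick histogram check:

import numpy as np

# Sketch: compare a histogram of Y = X^2, X ~ Unif(0, 1), with the predicted
# density f_Y(y) = 1 / (2 sqrt(y)) on (0, 1).
rng = np.random.default_rng(0)
y = rng.uniform(size=10**6) ** 2
hist, edges = np.histogram(y, bins=50, range=(0.0, 1.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2
mask = mids > 0.1  # stay away from the blow-up of the density at y = 0
print(np.max(np.abs(hist[mask] - 1 / (2 * np.sqrt(mids[mask])))))  # should be small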
We now present a two-dimensional version of this result. It
takes some setting up...
Suppose that (X1, X2) are jointly continuous with PDF f. Suppose that A is a regular region of the (x1, x2)-plane such that f is zero outside A, which implies that (X1, X2) lies in A with probability 1.
A transformation of (X1, X2) occurs when we change to a new two-dimensional coordinate system. This means we have a map T from A to another regular region B in the (y1, y2)-plane, where the yi's are functions of the xi's:

y1 = g1(x1, x2),   y2 = g2(x1, x2)   for (x1, x2) in A.

Then the transformed RVs are

(Y1, Y2) = (g1(X1, X2), g2(X1, X2)).
We call this a change of variables if
- T is one-one and onto, so there is an inverse T^{−1} from B back to A. Then T^{−1} can also be represented by a pair of functions:

  x1 = h1(y1, y2),   x2 = h2(y1, y2)   for (y1, y2) in B.

- The functions g1, g2 have continuous partial derivatives on all of A, and h1, h2 have continuous partial derivatives on all of B.

These conditions ensure that the change between (x1, x2) and (y1, y2) really is a ‘smooth’ change between two coordinate systems. No information is lost.
Example: changing between Cartesian and polar coordinates
(provided we don’t worry about the origin).
Under these conditions, we all remember the following facts
from Calculus III:
The Jacobian of T at the point (x1, x2) is the determinant of the matrix of partial derivatives of (g1, g2):

J(x1, x2) = (∂g1/∂x1)(∂g2/∂x2) − (∂g1/∂x2)(∂g2/∂x1).
It appears in the change-of-variables formula for double
integrals:
∫∫_B F(y1, y2) dy1 dy2 = ∫∫_A F(g1(x1, x2), g2(x1, x2)) |J(x1, x2)| dx1 dx2.
Finally, we get to the probabilistic statement.
Theorem
Under the conditions described above, the transformed RVs
(Y1, Y2) = (g1(X1, X2), g2(X1, X2))

are also jointly continuous, the point (Y1, Y2) lies in B with probability 1, and

f_{Y1,Y2}(y1, y2) = f_{X1,X2}(x1, x2) |J(x1, x2)|^{−1},

where xi = hi(y1, y2) for i = 1, 2.
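As a quick illustration (only a sketch, tying back to Example 6.1d): let (X1, X2) be uniform in the disk of radius R centred at the origin, and change to polar coordinates, y1 = √(x1² + x2²) and y2 = the angle of the point (x1, x2). A short computation gives J(x1, x2) = 1/√(x1² + x2²), so the theorem yields

f_{Y1,Y2}(r, θ) = (1/(πR²)) · r   for 0 < r < R and 0 ≤ θ < 2π.

Integrating out θ recovers the density 2r/R² of the distance D from the disk example, and the product form shows that the distance and the angle are independent.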