
Functions and Transformations

Lecture 1 – Week 1
Functions of Random Variables (Transformations)
Study:
Rice - Chapter 2.3 and Chapter 3.6
Wackerly - Chapter 6.4 and 6.6
Functions of random variables:
Sometimes it is necessary to find the distribution of a function of a random variable, or of a function of more than one random variable. Usually we find the density and then use theory to find the distribution of the new random variable(s); sometimes the new density is in the form of a familiar, known density such as the gamma, normal or beta.
An example of such a question: given a random variable Y with a specified probability density function (pdf), we may be interested in a new variable W = g(Y) and its density.
Study the examples of Section 6.4 in Wackerly.
Applying Proposition B (Rice)
Univariate Example:
Let the random variable π‘Œ possess a uniform distribution on the interval (0,1).
a. Use the transformation method to find the pdf (probability density function) of the random
variable π‘Š = √π‘Œ. What is the name of this distribution? Find the expected value of W.
b. Find the cdf (cumulative distribution function) of W = √Y.
Solution:
Uniform density,
𝑓(𝑦) = 1, 0 < 𝑦 < 1.
a. If W = √Y then Y = W². Hence,

dy/dw = 2w.

Applying Proposition B:

f_W(w) = f_Y(w²) × |2w| = 1 × 2w = 2w.
The interval of w follows from the interval of y:

0 < y < 1  ⟹  0 < w² < 1  ⟹  0 < w < 1.

The probability density function of the random variable W is

f(w) = { 2w,  0 < w < 1,
       { 0,   elsewhere.
Can we identify this distribution? This must always be attempted. Some guidelines:
• If the boundaries of the random variable Y are 0 ≤ y ≤ 1, then evaluate for either a uniform or a beta probability distribution.
• If the boundaries of the random variable Y are 0 ≤ y < ∞, then evaluate for a gamma, exponential or chi-square probability distribution.
Yes, by the boundaries it might be a beta.
We rewrite the variable factor of f(w):

f_W(w) = 2w(1 − w)⁰.

Then we have α − 1 = 1 and β − 1 = 0, hence α = 2 and β = 1.

The constant factor checks out:

Γ(α + β)/(Γ(α)Γ(β)) = Γ(3)/(Γ(2)Γ(1)) = 2Γ(2)/Γ(2) = 2.

Thus W ~ beta(2, 1), and the expected value asked for in (a) is E(W) = α/(α + β) = 2/3.
b. Find the cdf of W = √Y:

F_W(w) = P(W ≤ w) = P(√Y ≤ w) = P(Y ≤ w²).

Then

F(w) = ∫₀^{w²} 1 dt = w².

Thus the cdf of the random variable W is

F(w) = { 0,   w ≤ 0,
       { w²,  0 < w < 1,
       { 1,   w ≥ 1.
Check:

f(w) = d/dw F(w) = d/dw w² = 2w.
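A quick Monte Carlo check of part (a), using only the standard library (the variable names are mine, not from the notes): simulating W = √Y for uniform Y should reproduce E(W) = 2/3 and the cdf value F(0.5) = 0.25.

```python
import math
import random

# Monte Carlo check (illustrative sketch): if Y ~ uniform(0, 1) and
# W = sqrt(Y), then W ~ beta(2, 1), so E(W) = 2/3 and F(0.5) = 0.25.
random.seed(1)
n = 200_000
w = [math.sqrt(random.random()) for _ in range(n)]

mean_w = sum(w) / n                          # expect about 2/3
cdf_at_half = sum(x <= 0.5 for x in w) / n   # expect about 0.25

print(round(mean_w, 3), round(cdf_at_half, 3))
```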
Convolution
The Multivariate Transformation Method (General Case)
Note: J is the Jacobian.
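A minimal numeric sketch (my own helper, not from Rice): the absolute Jacobian determinant |J| of a two-variable transformation can be approximated by central differences and checked against the hand calculations in the examples that follow.

```python
# Approximate the Jacobian determinant of a map g: R^2 -> R^2 by
# central differences (illustrative helper, exact for linear maps).
def jacobian_det(g, x, h=1e-6):
    d = [[0.0, 0.0], [0.0, 0.0]]  # matrix of partials dg_i/dx_j
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        gp, gm = g(xp), g(xm)
        for i in range(2):
            d[i][j] = (gp[i] - gm[i]) / (2 * h)
    return d[0][0] * d[1][1] - d[0][1] * d[1][0]

# y1 = x1, y2 = x1 + x2 (the normal example):  |J| = 1
print(round(jacobian_det(lambda x: (x[0], x[0] + x[1]), [0.3, 0.7]), 6))
# y1 = x1 - x2, y2 = x1 + x2 (the exponential example):  |J| = 2
print(round(jacobian_det(lambda x: (x[0] - x[1], x[0] + x[1]), [0.5, 0.2]), 6))
```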
Example:
This example from Rice (page 103) must also be studied, but I am doing it my way – study this.
Solution:
Notation: n(μ, σ²) denotes a normal distribution with mean μ and variance σ², so n(0, 1) is the standard normal and, for example, n(2, 3) has mean 2 and variance 3.
It is given that X1 and X2 are independent and both are n(0, 1) distributed.
First let us write down the pdf of the bivariate normal distribution, that is, the joint distribution of X1 and X2 when they are n(μ, σ²) distributed:
𝑓(π‘₯1 , π‘₯2 ) =
1
exp [−
2πœ‹πœŽ1 𝜎2 √1 − 𝜌2
π‘₯2 − πœ‡2 2
+ (
) }]
𝜎2
1
π‘₯1 − πœ‡1 2
π‘₯1 − πœ‡1 π‘₯2 − πœ‡2
{(
−
2𝜌
)
(
)(
)
2(1 − 𝜌2 )
𝜎1
𝜎1
𝜎2
The bivariate normal distribution is written

(X1, X2) ~ n(μ1, μ2, σ1², σ2², ρ).

If each variable has the standard normal distribution then μ1 = μ2 = 0 and σ1² = σ2² = 1, and if X1 and X2 are independent, then ρ = 0.
Then the above density becomes, for the standard normal variates,
6
𝑓(π‘₯1 , π‘₯2 ) =
1
1
exp [− {π‘₯1 2 + π‘₯2 2 }]
2πœ‹
2
If π‘ΏπŸ = π’€πŸ and π‘ΏπŸ = π’€πŸ − π’€πŸ THEN the inverses are
π‘Œ1 = 𝑋1 and π‘Œ2 = 𝑋1 + 𝑋2 as already given by Rice and therefore
𝐽=|
𝑑 𝑦1 /𝑑π‘₯1
𝑑 𝑦2 /𝑑π‘₯1
𝑑 𝑦2 /𝑑 π‘₯2
1
|=|
𝑑 𝑦2 /𝑑 π‘₯2
1
0
|=1
1
Use the multivariate transformation method to find f(y1, y2), the joint density: substitute x1 = y1 and x2 = y2 − y1 into

f(x1, x2) = (1/2π) exp[ −½{x1² + x2²} ]

so that

f(y1, y2) = (1/2π) exp[ −½{y1² + (y2 − y1)²} ] × |J|,  with |J| = 1.

Simplify the exponent:

−½{y1² + (y2 − y1)²} = −½{y1² + y2² − 2y1y2 + y1²} = −½{2y1² + y2² − 2y1y2}.

Hence the joint density of Y1 and Y2 is

f(y1, y2) = (1/2π) exp[ −½{2y1² + y2² − 2y1y2} ].
How do we recognise this as a bivariate normal distribution? We have to recognise all five parameters of the joint density of the two variables:

μ1, μ2, σ1², σ2² and ρ,  where ρ = Cov(Y1, Y2)/(σ1σ2).

Given:
If 𝑋1 = π‘Œ1 and 𝑋2 = π‘Œ2 − π‘Œ1 then π‘Œ1 = 𝑋1 and π‘Œ2 = 𝑋1 + 𝑋2
𝐸(π‘Œ1 ) = 𝐸( 𝑋1 ) = 0 = πœ‡1
𝐸(π‘Œ2 ) = 𝐸(𝑋1 + 𝑋2 ) = 𝐸(𝑋1 ) + 𝐸(𝑋2 ) = 0 = πœ‡2
π‘‰π‘Žπ‘Ÿ(π‘Œ1 ) = π‘‰π‘Žπ‘Ÿ( 𝑋1 ) = 1 = 𝜎12 ,
π‘‰π‘Žπ‘Ÿ(π‘Œ2 ) = π‘‰π‘Žπ‘Ÿ(𝑋1 + 𝑋2 ) = π‘‰π‘Žπ‘Ÿ(𝑋1 ) + π‘‰π‘Žπ‘Ÿ(𝑋2 ) = 1 + 1 = 2 = 𝜎22
Then 𝜎1 = 1 π‘Žπ‘›π‘‘ 𝜎2 = √2
We also want Cov(Y1, Y2) in order to find ρ. From Chapter 5 of Wackerly we have
π‘‰π‘Žπ‘Ÿ(π‘Œ2 − π‘Œ1 ) = π‘‰π‘Žπ‘Ÿ(π‘Œ2 ) + π‘‰π‘Žπ‘Ÿ(π‘Œ1 ) − 2πΆπ‘œπ‘£(π‘Œ1 , π‘Œ2 )
2πΆπ‘œπ‘£(π‘Œ1 , π‘Œ2 ) = π‘‰π‘Žπ‘Ÿ(π‘Œ2 ) + π‘‰π‘Žπ‘Ÿ(π‘Œ1 ) − π‘‰π‘Žπ‘Ÿ(π‘Œ2 − π‘Œ1 )
= π‘‰π‘Žπ‘Ÿ(𝑋1 + 𝑋2 ) + π‘‰π‘Žπ‘Ÿ(𝑋1 ) − π‘‰π‘Žπ‘Ÿ(𝑋1 + 𝑋2 − 𝑋1 )
= π‘‰π‘Žπ‘Ÿ(𝑋1 + 𝑋2 ) + π‘‰π‘Žπ‘Ÿ(𝑋1 ) − π‘‰π‘Žπ‘Ÿ(𝑋2 )
2πΆπ‘œπ‘£(π‘Œ1 , π‘Œ2 ) = 2 + 1 − 1
πΆπ‘œπ‘£(π‘Œ1 , π‘Œ2 ) = 1
The correlation coefficient ρ:

ρ = 1/(1 × √2) = 1/√2,

so that we have

1 − ρ² = 1 − ½ = ½  and  √(1 − ρ²) = √(1 − (1/√2)²) = √(½) = 1/√2.
Hence, the density function of the five-parameter bivariate normal distribution,

f(y1, y2) = 1/(2πσ1σ2√(1 − ρ²)) × exp[ −1/(2(1 − ρ²)) { ((y1 − μ1)/σ1)² − 2ρ((y1 − μ1)/σ1)((y2 − μ2)/σ2) + ((y2 − μ2)/σ2)² } ],

can be written with the determined parameters μ1 = 0, μ2 = 0, σ1² = 1, σ2² = 2 and ρ = 1/√2:

f(y1, y2) = 1/(2π(1)(√2)(1/√2)) exp[ −1/(2(½)) { y1² − 2(1/√2)(y1)(y2/√2) + (y2/√2)² } ]
= (1/2π) exp[ −{ y1² − y1y2 + y2²/2 } ].
Thus, we have our joint pdf of Y1 and Y2 as before:

f(y1, y2) = (1/2π) exp[ −½{2y1² + y2² − 2y1y2} ].

Hence (Y1, Y2) ~ n(0, 0, 1, 2, 1/√2), or we say that f(y1, y2) is the pdf of a bivariate normal distribution with parameter values μ1 = 0, μ2 = 0, σ1² = 1, σ2² = 2 and ρ = 1/√2.
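The five parameters can be checked by simulation (my own sketch, not part of the notes): drawing X1, X2 independently from n(0, 1) and forming Y1 = X1, Y2 = X1 + X2 should reproduce the variances 1 and 2 and the correlation 1/√2.

```python
import math
import random

# Simulation check (illustrative sketch): X1, X2 iid n(0,1);
# Y1 = X1, Y2 = X1 + X2.  The derivation above gives
# (Y1, Y2) ~ n(0, 0, 1, 2, 1/sqrt(2)).
random.seed(2)
n = 200_000
y1 = [random.gauss(0, 1) for _ in range(n)]
y2 = [a + random.gauss(0, 1) for a in y1]

m1 = sum(y1) / n
m2 = sum(y2) / n
v1 = sum((a - m1) ** 2 for a in y1) / n              # estimate of Var(Y1) = 1
v2 = sum((b - m2) ** 2 for b in y2) / n              # estimate of Var(Y2) = 2
cov = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / n
rho = cov / math.sqrt(v1 * v2)                       # estimate of rho = 1/sqrt(2)

print(round(v1, 2), round(v2, 2), round(rho, 2))     # near 1, 2 and 0.71
```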
Another example:
Let Y1 and Y2 be independent exponentially distributed random variables with parameter λ = 1. Consider the transformations

Y1 = X1 − X2 and Y2 = X1 + X2.

Find the marginal density of X2.
I have typed the solution using X1 and X2, but for writing and typing I would from now on rather use U and V.
Solution:
f(y1) = e^{−y1}, y1 > 0  and  f(y2) = e^{−y2}, y2 > 0.

By independence, their joint pdf is

f(y1, y2) = e^{−y1} × e^{−y2} = e^{−(y1 + y2)},  y1, y2 > 0.
Let y1 = x1 − x2 and y2 = x1 + x2. Then

J = | ∂y1/∂x1  ∂y1/∂x2 | = | 1  −1 | = 2,
    | ∂y2/∂x1  ∂y2/∂x2 |   | 1   1 |

so that

f(x1, x2) = f_{Y1,Y2}(x1 − x2, x1 + x2) × |J|
= e^{−(x1 − x2 + x1 + x2)} × 2
= 2e^{−2x1}.
π‘₯1 − π‘₯2 > 0 and
π‘₯1 + π‘₯2 > 0
Intervals:
π‘₯2 < π‘₯1 and
π‘₯1 + π‘₯2 > 0
Looking at the bounds we see that both π‘₯2 π‘Žπ‘›π‘‘ π‘₯1 are always larger than zero but π‘₯1 is
always larger than π‘₯2 , then this bound can be stated as 0 < π‘₯2 < π‘₯1 < ∞
𝑓(π‘₯1, , π‘₯2 ) = 2 𝑒 −2π‘₯1 is an exponential distribution with parameter πœ† = 2 or 𝛽 = 1/2,
𝑓(π‘₯1, , π‘₯2 ) =
1
1/2
𝑒
−
1
π‘₯
1/2 1
.
Find the marginal distribution of X2. Since the support is −x1 < x2 < x1, the inner variable x1 runs from |x2| to ∞:

f(x2) = ∫_{|x2|}^{∞} 2e^{−2x1} dx1 = e^{−2|x2|},  −∞ < x2 < ∞,

which is a double exponential (Laplace) density.
________________________________________________________________________
End of Lecture