Statistical Inference
Test Set 2
1. Let X_1, X_2, ..., X_n be a random sample from a U(−θ, 2θ) population. Find the MLE of θ.
2. Let X_1, X_2, ..., X_n be a random sample from a Pareto population with density
f_X(x) = βα^β / x^(β+1), x > α, α > 0, β > 2. Find the MLEs of α, β.
3. Let X_1, X_2, ..., X_n be a random sample from a U(−θ, θ) population. Find the MLE of θ.
4. Let X_1, X_2, ..., X_n be a random sample from a lognormal population with density
f_X(x) = (1 / (σx√(2π))) exp(−(log_e x − µ)² / (2σ²)), x > 0. Find the MLEs of µ and σ².
5. Let (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) be a random sample from a bivariate normal population with parameters µ_1, µ_2, σ_1², σ_2², ρ. Find the MLEs of the parameters.
6. Let X_1, X_2, ..., X_n be a random sample from an inverse Gaussian distribution with density
f_X(x) = (λ / (2πx³))^(1/2) exp(−λ(x − µ)² / (2µ²x)), x > 0. Find the MLEs of the parameters.
7. Let (X_1, X_2, ..., X_k) have a multinomial distribution with parameters n = Σ_{i=1}^k X_i and p_1, ..., p_k; 0 ≤ p_1, ..., p_k ≤ 1, Σ_{j=1}^k p_j = 1, where n is known. Find the MLEs of p_1, ..., p_k.
8. Let one observation be taken on a discrete random variable X with pmf p(x | θ), given below, where Θ = {1, 2, 3}. Find the MLE of θ.

          x:    1      2      3      4
   θ = 1:     1/2    3/5    1/3    1/6
   θ = 2:     1/4    1/5    1/2    1/6
   θ = 3:     1/4    1/5    1/6    2/3
9. Let X_1, X_2, ..., X_n be a random sample from the truncated double exponential distribution with the density
f_X(x) = e^(−|x|) / (2(1 − e^(−θ))), |x| < θ, θ > 0.
Find the MLE of θ.
10. Let X_1, X_2, ..., X_n be a random sample from the Weibull distribution with the density
f_X(x) = αβ x^(β−1) e^(−αx^β), x > 0, α > 0, β > 0.
Find the MLE of α when β is known.
Hints and Solutions
1. The likelihood function is L(θ, x) = 1/(3θ)^n, −θ < x_(1) ≤ x_(2) ≤ ... ≤ x_(n) < 2θ, θ > 0. Clearly it is maximized with respect to θ when θ takes its infimum. Hence,
θ̂_ML = max(−X_(1), X_(n)/2).
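As a quick numerical sanity check (a sketch, not part of the text; the sample size and the true value θ = 2 are arbitrary choices), the estimator can be simulated:

```python
import random

# Simulation check of the MLE for a U(-theta, 2*theta) sample:
# the likelihood (3*theta)^(-n) decreases in theta, so the MLE is the
# smallest theta consistent with -theta < x_(1) and x_(n) < 2*theta.
random.seed(0)
theta = 2.0  # arbitrary true value for the check
sample = [random.uniform(-theta, 2 * theta) for _ in range(10_000)]
theta_hat = max(-min(sample), max(sample) / 2)
# theta_hat sits just below the true theta, approaching it as n grows
```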
2. The likelihood function is L(α, β, x) = β^n α^(nβ) / (∏_{i=1}^n x_i)^(β+1), x_(1) > α, α > 0, β > 2. L is maximized with respect to α when α takes its maximum. Hence α̂_ML = X_(1). Using this we can rewrite the likelihood function as L′(β, x) = β^n {x_(1)}^(nβ) / (∏_{i=1}^n x_i)^(β+1), β > 2. The log likelihood is
log L′(β, x) = n log β + nβ log x_(1) − (β + 1) log(∏_{i=1}^n x_i). This can be easily maximized with respect to β and we get β̂_ML = ((1/n) Σ log(X_(i)/X_(1)))^(−1).
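A numerical sketch of these closed forms (the inverse-transform Pareto sampler and the parameter values are illustrative assumptions, not part of the text):

```python
import math
import random

# Draw Pareto(alpha, beta) variates via inverse transform: if U ~ U(0, 1],
# then X = alpha * U**(-1/beta) has density beta*alpha**beta / x**(beta+1).
random.seed(1)
alpha, beta, n = 2.0, 3.0, 50_000  # illustrative true values
xs = [alpha * (1.0 - random.random()) ** (-1.0 / beta) for _ in range(n)]

# Closed-form MLEs from the solution above.
alpha_hat = min(xs)                                      # X_(1)
beta_hat = n / sum(math.log(x / alpha_hat) for x in xs)  # ((1/n) sum log(X_i/X_(1)))^-1
```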
3. Arguing as in Sol. 1, we get θ̂_ML = max(−X_(1), X_(n)) = max_{1≤i≤n} |X_i|.
4. Directly maximizing the log-likelihood function with respect to µ and σ², we get
µ̂_ML = (1/n) Σ log X_i,  σ̂²_ML = (1/n) Σ (log X_i − µ̂_ML)².
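Since log X is N(µ, σ²) when X is lognormal, these are just the mean and (biased) variance of the log-data; a simulation sketch with illustrative parameter values (not from the text):

```python
import math
import random

# Lognormal sample: X = exp(Z) with Z ~ N(mu, sigma^2).
random.seed(2)
mu, sigma, n = 1.0, 0.5, 20_000  # illustrative true values
xs = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]

# MLEs computed from the data, as in the solution above.
log_xs = [math.log(x) for x in xs]
mu_hat = sum(log_xs) / n
sigma2_hat = sum((l - mu_hat) ** 2 for l in log_xs) / n
```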
5. The maximum likelihood estimators are given by
µ̂_1 = X̄ = (1/n) Σ_{i=1}^n X_i,  µ̂_2 = Ȳ = (1/n) Σ_{i=1}^n Y_i,
σ̂_1² = (1/n) Σ_{i=1}^n (X_i − X̄)²,  σ̂_2² = (1/n) Σ_{i=1}^n (Y_i − Ȳ)²,
ρ̂ = (1/n) Σ_{i=1}^n (X_i − X̄)(Y_i − Ȳ) / (σ̂_1 σ̂_2).
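A simulation sketch of these formulas (the construction of the correlated pair and all parameter values are illustrative assumptions):

```python
import math
import random

# Bivariate normal pairs: Y is built from X's standard normal Z1 plus an
# independent Z2, which induces correlation rho between X and Y.
random.seed(3)
mu1, mu2, s1, s2, rho, n = 0.0, 1.0, 1.0, 2.0, 0.6, 20_000
xs, ys = [], []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append(mu1 + s1 * z1)
    ys.append(mu2 + s2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2))

# Plug-in MLEs from the solution above.
m1, m2 = sum(xs) / n, sum(ys) / n
v1 = sum((x - m1) ** 2 for x in xs) / n
v2 = sum((y - m2) ** 2 for y in ys) / n
rho_hat = sum((x - m1) * (y - m2) for x, y in zip(xs, ys)) / (n * math.sqrt(v1 * v2))
```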
6. The maximum likelihood estimators are given by
µ̂_ML = X̄,  λ̂_ML = ((1/n) Σ_{i=1}^n (1/X_i − 1/X̄))^(−1).
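Rather than simulating (inverse Gaussian sampling needs a special algorithm), a small sketch can check that these closed forms maximize the log-likelihood on a fixed, made-up data set; the data values and the 5% perturbation grid are arbitrary choices:

```python
import math

# Made-up positive data for the check.
xs = [0.8, 1.1, 1.5, 2.0, 0.9, 1.3]
n = len(xs)
xbar = sum(xs) / n

# Closed-form MLEs from the solution above.
mu_hat = xbar
lam_hat = n / sum(1.0 / x - 1.0 / xbar for x in xs)

def loglik(mu, lam):
    """Inverse-Gaussian log-likelihood of the sample xs at (mu, lam)."""
    return sum(
        0.5 * math.log(lam / (2 * math.pi * x ** 3))
        - lam * (x - mu) ** 2 / (2 * mu ** 2 * x)
        for x in xs
    )

# The log-likelihood at (mu_hat, lam_hat) should beat all nearby grid points.
best = loglik(mu_hat, lam_hat)
is_max = all(
    best >= loglik(mu_hat * (1 + dm), lam_hat * (1 + dl))
    for dm in (-0.05, 0.0, 0.05)
    for dl in (-0.05, 0.0, 0.05)
)
```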
7. The maximum likelihood estimators are given by
p̂_1 = X_1/n, ..., p̂_k = X_k/n.
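In code this is one line per cell; the counts below are a made-up example with n = 100:

```python
# Multinomial MLE: each cell probability is estimated by its observed
# proportion X_j / n.
counts = [18, 30, 12, 40]  # hypothetical observed counts
n = sum(counts)
p_hat = [c / n for c in counts]
```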
8. θ̂_ML = 1 if x = 1, 2; = 2 if x = 3; = 3 if x = 4.
9. θ̂_ML = max(−X_(1), X_(n)) = max_{1≤i≤n} |X_i|.
10. α̂_ML = n / Σ_{i=1}^n X_i^β.
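A simulation sketch for the known-β Weibull case (the inverse-transform sampler and parameter values are illustrative assumptions):

```python
import math
import random

# If U ~ U(0, 1], then X = (-log(U) / alpha)**(1/beta) has the Weibull
# density alpha*beta*x**(beta-1)*exp(-alpha*x**beta): X**beta ~ Exp(alpha).
random.seed(4)
alpha, beta, n = 1.5, 2.0, 50_000  # illustrative true values; beta is known
xs = [(-math.log(1.0 - random.random()) / alpha) ** (1.0 / beta) for _ in range(n)]

# MLE of alpha with beta known, from the solution above.
alpha_hat = n / sum(x ** beta for x in xs)
```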