EXAMPLES OF STATISTICAL LIKELIHOODS
MAXIMUM LIKELIHOOD ESTIMATES
1. Binomial likelihood
Suppose that you perform n independent success/fail trials in which the success
probability is p at each trial. For i = 1, 2, …, n, let
0
Yi = 
1
if trial i is a failure
if trial i is a success
Let likelihood for Yi is Li = pYi 1  p  i . This is especially simple as Yi only has the
values 0 and 1. This likelihood will be either 1 – p or p.
1Y
The likelihood for the whole problem is

    L = \prod_{i=1}^{n} L_i = \prod_{i=1}^{n} p^{Y_i} (1 - p)^{1 - Y_i} = p^{\sum_{i=1}^{n} Y_i} (1 - p)^{n - \sum_{i=1}^{n} Y_i}.

Use \log L. Then solve

    \frac{d}{dp} \log L \overset{\text{let}}{=} 0

and find easily

    \hat{p}_{ML} = \frac{\sum_{i=1}^{n} y_i}{n}.

In random variable form, this is

    \hat{p}_{ML} = \frac{\sum_{i=1}^{n} Y_i}{n}.
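As a quick numerical check, here is a minimal sketch (with made-up 0/1 data and a hypothetical neg_log_lik helper, not part of these notes) comparing the closed form above with a direct numerical maximization of the log-likelihood:

import numpy as np
from scipy.optimize import minimize_scalar

# made-up 0/1 outcomes for illustration only
y = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

def neg_log_lik(p):
    # negative log-likelihood: -sum[ y_i log p + (1 - y_i) log(1 - p) ]
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

numeric = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
closed_form = y.mean()                     # p-hat_ML = (sum of y_i) / n
print(closed_form, numeric.x)              # the two agree to optimizer tolerance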
2. Normal sample likelihood
Suppose that X1, X2, …, Xn is a sample from the normal distribution with mean μ and standard deviation σ. The likelihood is
    L = \prod_{i=1}^{n} L_i = \prod_{i=1}^{n} \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(x_i - \mu)^2}{2\sigma^2}} = \frac{1}{\sigma^n (2\pi)^{n/2}} e^{-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2}
Then use

    \log L = -n \log \sigma - \frac{n}{2} \log(2\pi) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2

and form the two conditions

    \frac{d}{d\mu} \log L \overset{\text{let}}{=} 0 \qquad \frac{d}{d\sigma} \log L \overset{\text{let}}{=} 0

Solve the system.
The solution will be
    \hat{\mu}_{ML} = \bar{x}

    \hat{\sigma}_{ML} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 }
This does not have the usual n - 1 divisor of the sample standard deviation s.
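As an illustration (simulated data, not from these notes), the sketch below computes the two estimates directly and contrasts the ML estimate of σ with the sample standard deviation s, which uses the n - 1 divisor:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=50)    # simulated sample, illustrative only

mu_hat = x.mean()                              # mu-hat_ML = x-bar
sigma_hat = np.sqrt(np.mean((x - mu_hat)**2))  # ML estimate: divisor n
s = x.std(ddof=1)                              # sample standard deviation: divisor n - 1

print(mu_hat, sigma_hat, s)                    # sigma_hat is slightly smaller than s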
3. Regression likelihood
Suppose that x1, x2, …, xn are fixed numbers. Suppose that Y1, Y2, …, Yn are independent random variables with Yi ~ N(β0 + β1 xi, σ²). The likelihood is
    L = \prod_{i=1}^{n} L_i = \prod_{i=1}^{n} \frac{1}{\sigma \sqrt{2\pi}} e^{-\frac{(y_i - \beta_0 - \beta_1 x_i)^2}{2\sigma^2}} = \frac{1}{\sigma^n (2\pi)^{n/2}} e^{-\frac{1}{2\sigma^2} \sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2}
The maximum likelihood estimates β̂0 and β̂1 are the familiar least squares estimates. It can be shown that

    \hat{\sigma}_{ML} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2 }.
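A short numerical sketch (with simulated data, not from these notes): β̂0 and β̂1 come from the least squares fit, and the ML estimate of σ divides the residual sum of squares by n rather than by the n - 2 used for the usual regression standard error.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)                            # fixed x values
y = 2.0 + 0.5 * x + rng.normal(scale=1.5, size=x.size)    # simulated responses

b1, b0 = np.polyfit(x, y, 1)                # least squares slope and intercept
resid = y - (b0 + b1 * x)
sigma_hat_ml = np.sqrt(np.mean(resid**2))   # divisor n, the ML estimate

print(b0, b1, sigma_hat_ml)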
4. Exponential sample
Suppose that X1, X2, …, Xn is a sample from the exponential density with mean θ. The likelihood is
    L = \prod_{i=1}^{n} L_i = \prod_{i=1}^{n} \frac{1}{\theta} e^{-x_i/\theta} = \frac{1}{\theta^n} e^{-\frac{1}{\theta} \sum_{i=1}^{n} x_i}

The maximum likelihood estimate is θ̂_ML = x̄. If the density were written as λe^{-λx}, the maximum likelihood estimate would be λ̂_ML = 1/x̄.
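A minimal sketch (simulated data with a made-up θ, not from these notes) of the two parameterizations:

import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=4.0, size=200)   # scale = theta (the mean), chosen arbitrarily

theta_hat = x.mean()           # theta-hat_ML = x-bar (mean parameterization)
lambda_hat = 1.0 / x.mean()    # lambda-hat_ML = 1 / x-bar (rate parameterization)

print(theta_hat, lambda_hat)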
5. Censored exponential sample
Suppose that X1, X2, …, Xn are independent random variables, each from the density λe^{-λx}. However, we are able to observe the Xi's only if they have a value of T or below.
This corresponds to an experimental framework in which we are observing the lifetimes of n independent objects (light bulbs, say), but the experiment ceases at time T. Suppose that K of the Xi's are observed; call these values X_1, X_2, X_3, …, X_K. The remaining n - K values are censored at T; operationally, this means that there were n - K light bulbs still burning when the experiment stopped at time T.
The overall likelihood is
    L = e^{-\lambda T (n - K)} \, \lambda^{K} \, e^{-\lambda \sum_{i=1}^{K} x_i}
It is not at all obvious what would be the maximum likelihood estimate. Let’s take logs:
    \log L = -\lambda T (n - K) + K \log \lambda - \lambda \sum_{i=1}^{K} x_i
Then find

    \frac{d}{d\lambda} \log L = -T(n - K) + \frac{K}{\lambda} - \sum_{i=1}^{K} x_i \overset{\text{let}}{=} 0

This leads to

    \hat{\lambda}_{ML} = \frac{K}{T(n - K) + \sum_{i=1}^{K} x_i}.

This is certainly interesting: the denominator is the total time on test, the sum of the observed lifetimes plus T for each of the n - K censored observations.
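A simulation sketch (with made-up values of λ, n, and T, not from these notes) of the censored-data estimate:

import numpy as np

rng = np.random.default_rng(3)
lam_true, n, T = 0.5, 500, 3.0

lifetimes = rng.exponential(scale=1.0 / lam_true, size=n)
observed = lifetimes[lifetimes <= T]        # the K fully observed lifetimes
K = observed.size                           # the other n - K lifetimes are censored at T

# lambda-hat_ML = K / [ T (n - K) + sum of the observed x_i ]
lam_hat = K / (T * (n - K) + observed.sum())
print(lam_true, lam_hat)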
6. Birth process
Start with X0 = 1. Thereafter, let the conditional distribution of Xt | Xt-1 = xt-1 be Poisson with parameter λ xt-1. The idea is that one female (for generation 0) produces a number of daughters which follows a Poisson distribution with parameter λ. Thereafter, these X1 daughters (generation 1) will independently produce daughters according to a Poisson distribution with parameter λ. The total number of daughters for generation 2 follows a Poisson distribution with parameter λ X1. Observe that if at any time the total number of daughters produced is zero, then the process dies out forever.
The likelihood through K generations is
    L = \frac{ e^{-\lambda \sum_{i=0}^{K-1} x_i} \; \lambda^{\sum_{i=1}^{K} x_i} \; \prod_{i=1}^{K-1} x_i^{x_{i+1}} }{ \prod_{i=1}^{K} x_i! }
Then find

    \log L = -\lambda \sum_{i=0}^{K-1} x_i + \left( \sum_{i=1}^{K} x_i \right) \log \lambda + (\text{terms without } \lambda)

We set up

    \frac{d}{d\lambda} \log L = -\sum_{i=0}^{K-1} x_i + \frac{1}{\lambda} \sum_{i=1}^{K} x_i \overset{\text{let}}{=} 0
The solution has the surprising form

    \hat{\lambda}_{ML} = \frac{ \sum_{i=1}^{K} x_i }{ \sum_{i=0}^{K-1} x_i }
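A simulation sketch (made-up λ and K, not from these notes): generate the process and form the ratio of total daughters (generations 1 through K) to total mothers (generations 0 through K - 1).

import numpy as np

rng = np.random.default_rng(4)
lam_true, K = 1.3, 8

x = [1]                                        # x_0 = 1
for t in range(1, K + 1):
    x.append(rng.poisson(lam_true * x[-1]))    # X_t | X_{t-1} ~ Poisson(lambda * x_{t-1})
    if x[-1] == 0:                             # the process has died out; stop early
        break

x = np.array(x)
lam_hat = x[1:].sum() / x[:-1].sum()           # total daughters / total mothers over observed generations
print(lam_true, lam_hat)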