# Lecture 6: Review of statistics – hypothesis testing and interval estimation

BUEC 333
Professor David Jacks
## The importance of sampling distributions

Again, we care about sampling distributions because they are fundamental to hypothesis testing. Our goal is almost always to learn about the population distribution.

Knowing the sampling distribution of our sample estimator (e.g., the sample mean) gives us a way to assess whether particular values of population quantities (e.g., the population mean) are likely.
## The use of probability distributions

An essential component of hypothesis testing makes use of our knowledge of probability distributions.

Generally, this involves figuring out how far away a particular observation (or statistic) is from its expected (or hypothesized) value.

A convenient way of expressing this idea is in terms of how many standard deviations an observation lies from its expected value.
## A “normal” example

A woman posted on a blog that she had been pregnant for 310 days (about 10.2 months) before giving birth.

Completed pregnancies are normally distributed with a mean of 266 days and a standard deviation of 16 days.

If we subtract the mean from 310 and divide by the standard deviation, this tells us how many standard deviations her pregnancy was above the mean:
[Figure: normal pdf of completed pregnancy lengths, with the x-axis marked in days (200, 222, 244, 266, 288, 310) and in z-scores (−4 to 4)]

z = (310 − 266) / 16 = 2.75
Thus, it seems this woman’s experience was a very rare one…
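The arithmetic above can be checked in a few lines of standard-library Python. This is just a sketch: the `normal_sf` helper is ours, not from the lecture, and uses the complementary error function to get the upper-tail probability of a standard normal.

```python
import math

# Parameters from the slide: completed pregnancies ~ N(266, 16^2) days.
mean, sd = 266.0, 16.0
x = 310.0  # the reported pregnancy length in days

# Standardize: how many standard deviations above the mean?
z = (x - mean) / sd

def normal_sf(z):
    # Upper-tail probability P(Z > z) for a standard normal,
    # via the complementary error function.
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_tail = normal_sf(z)
print(f"z = {z:.2f}, P(Z > {z:.2f}) = {p_tail:.4f}")
```

A z-score of 2.75 leaves only about 0.3% of the distribution in the upper tail, which is what makes the observation look so rare.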
## Hypothesis testing

Likewise, we use hypothesis tests to evaluate claims like μ = 6 or σ² = 16.

Given the sampling distribution of a sample statistic, we evaluate whether its observed value in the sample is likely when the above claim is true.

We formalize this with two hypotheses. For now, we focus on hypotheses with respect to the population mean, but this approach can be extended to other population parameters.
## Null and alternative hypotheses

Suppose we are interested in evaluating a specific claim about the population mean. For instance:

a.) “the population mean is 6”
b.) “the population mean is positive”

The claim to be evaluated is the null hypothesis:

a.) H0 : μ = 6
b.) H0 : μ > 0

We compare it to the alternative hypothesis:

a.) H1 : μ ≠ 6 (a “two-sided” alternative)
b.) H1 : μ ≤ 0 (a “one-sided” alternative)
## How tests about the population mean work

Step 1: Specify the null and its alternative.

Step 2: Compute the sample mean and variance.

Step 3: Use the estimates to construct a new statistic (“a test statistic”) that has a known sampling distribution when the null hypothesis is true (“under the null”).

Note: the sampling distribution of the test statistic will depend on the sampling distribution of the underlying estimator (here, the sample mean).
Step 4: Evaluate whether the calculated value of the test statistic is “likely” when the null hypothesis is true.

We reject the null hypothesis if the value of the test statistic is “unlikely”. We do not reject the null hypothesis if the value of the test statistic is “likely”.
## Example: the t-test

Suppose we have a random sample of n observations from a N(μ, σ²) distribution, and we want to test the null against the alternative hypothesis: H0 : μ = μ0 versus H1 : μ ≠ μ0.

Good place to start: the sample mean, X̄; if the null is true, then the sampling distribution of X̄ is normal with mean μ0 and variance σ²/n.
Because X̄ ~ N(μ0, σ²/n) under the null, we know that

Z = (X̄ − μ0) / (σ/√n) ~ N(0, 1)

If we knew σ² and computed Z, this would be our test statistic: if Z is “close to” zero, it is likely that the null hypothesis is true, and we would not reject it (the same for “far from”/unlikely/would reject).

Problems with this approach: first, we do not know σ²; furthermore, how do we quantify “close to”?
General rule: Don’t know it? Estimate it! It can be shown that

T = (X̄ − μ0) / (s/√n) ~ tn−1

Note that this test statistic has two key properties:

1.) it does not contain any unknowns, so we can compute it.
2.) we know its sampling distribution, so we can use tables to see if a particular value is “likely” or not under the null.
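As a small illustration of the “no unknowns” point, here is a sketch that computes T from a sample using only the standard library. The sample data and the hypothesized mean μ0 = 6 are made up for illustration; they are not from the lecture.

```python
import math
import statistics

# Hypothetical sample (illustrative data, not from the lecture).
sample = [5.1, 6.8, 7.2, 4.9, 6.0, 5.5, 6.3, 7.0, 5.8, 6.4]
mu_0 = 6.0  # hypothesized population mean under H0

n = len(sample)
x_bar = statistics.mean(sample)
s = statistics.stdev(sample)  # sample standard deviation (n - 1 divisor)

# T has no unknowns: everything here comes from the sample and H0.
T = (x_bar - mu_0) / (s / math.sqrt(n))
print(f"x-bar = {x_bar:.3f}, s = {s:.3f}, T = {T:.3f}")
```

Every quantity on the right-hand side is either observed (the sample) or hypothesized (μ0), which is exactly why T can be computed while Z cannot.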
## This is how we do it (i.e., all hypothesis testing)

1.) Form the null and alternative hypotheses.
2.) Use the sample data to construct a test statistic that has a known sampling distribution when the null hypothesis is true.
3.) Decide whether to reject the null based on whether the observed value of the test statistic is “likely” or not under the null.
## Particular values of the test statistic—likely or not?

By knowing the sampling distribution of the test statistic, we know the probability that the value of the test statistic will fall in a given interval when the null is true. This corresponds to the area under the pdf of the test statistic’s sampling distribution.

And if T is our test statistic, for any α we can find L and U such that Pr[L ≤ T ≤ U] = 1 − α when the null is true.
We can define a range of “acceptable” values of the test statistic: let [L, U] be an interval such that if the null is true, the test statistic falls in the interval with probability 0.95 (that is, α = 0.05).

If the null is true and we draw 100 random samples, the test statistic will fall in this interval about 95 times; if the value of the test statistic lies outside this interval, it is “unlikely” that the null is true.
## No one is perfect: Type I and II errors

When testing a hypothesis, there is always the possibility that we make one of two kinds of errors:

1.) Type I: erroneously reject the null when it is true.
2.) Type II: fail to reject the null when it is false.

Denote the probability of making a Type I error α, and call α the significance level of the test. Denote the probability of making a Type II error β, and call (1 − β) the power of the test.
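The claim that α is the Type I error rate can be checked by simulation: draw many samples with the null true, run the test at the 5% level, and count rejections. The sketch below uses the lecture’s later numbers (n = 121, critical value 1.980); the seed and simulation count are arbitrary choices of ours.

```python
import math
import random

random.seed(42)

def t_stat(sample, mu_0):
    # t statistic: (sample mean - mu_0) / (s / sqrt(n))
    n = len(sample)
    m = sum(sample) / n
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    return (m - mu_0) / math.sqrt(var / n)

# Simulate under a true null: mu = mu_0, so every rejection is a Type I error.
mu_0, sigma, n = 9.0, math.sqrt(75.0), 121
crit = 1.980  # two-sided 5% critical value for t with 120 df

n_sims = 4000
rejections = sum(
    1 for _ in range(n_sims)
    if abs(t_stat([random.gauss(mu_0, sigma) for _ in range(n)], mu_0)) > crit
)
rate = rejections / n_sims
print(f"estimated Type I error rate: {rate:.3f}")
```

The estimated rate should hover around 0.05, matching the chosen significance level.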
We usually choose a significance level α that sounds appropriate (say, 0.05 or 0.10) and look for a powerful test (small β).

But as with so many things, nothing comes for free: there is a tradeoff between the two. As α gets smaller, β must get bigger.

Example from the courtroom: taking the null to be that the defendant is innocent, convicting an innocent person is a Type I error, while acquitting a guilty one is a Type II error.
## An example

Suppose we ask a random sample of 121 people the # of times they went to the movies last year. Using survey data, we calculate X̄ = 7.5 and s² = 75.

Test the hypothesis that the population mean of movie attendance is 9: H0 : μ = 9 versus H1 : μ ≠ 9.

Assume that movie attendance is approximately normally distributed in the population and construct the T statistic:

T = (X̄ − μ0) / (s/√n) = (7.5 − 9) / (√75/√121) ≈ −1.91
We know that T ~ tn−1 = t120 if H0 is true.

For the t120 distribution at the 10% level of significance (α = 0.10), the critical value is 1.658. For the t120 distribution at the 5% level of significance (α = 0.05), the critical value is 1.980.

This means that if H0 is true, then

Pr[−1.658 ≤ T ≤ 1.658] = 0.90
Pr[−1.980 ≤ T ≤ 1.980] = 0.95

Since |T| ≈ 1.91, we reject H0 at the 10% level but not at the 5% level.
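The movie-attendance test can be reproduced directly from the summary statistics on the slide. This sketch hard-codes the quoted critical values rather than looking them up from a t table.

```python
import math

# Movie-attendance example: n = 121, x-bar = 7.5, s^2 = 75, H0: mu = 9.
n, x_bar, s2, mu_0 = 121, 7.5, 75.0, 9.0
T = (x_bar - mu_0) / (math.sqrt(s2) / math.sqrt(n))

# Critical values for t with 120 df, as quoted on the slide.
crit_10, crit_05 = 1.658, 1.980

print(f"T = {T:.3f}")
print("10% level:", "reject H0" if abs(T) > crit_10 else "do not reject H0")
print(" 5% level:", "reject H0" if abs(T) > crit_05 else "do not reject H0")
```

The statistic lands between the two critical values, so the decision flips depending on the significance level chosen.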
## p-values

Another way to decide whether or not to reject: construct a test statistic called W that has a known sampling distribution when H0 is true.

Then, ask “what is the probability of observing a value of the statistic W as extreme as w when the null hypothesis is true?” or, equivalently, determine p* such that:

Pr[−w ≤ W ≤ w] = 1 − p* (for a two-sided alternative)
Pr[W ≤ w] = 1 − p* (for a one-sided alternative)
The probability p* is called a p-value. The p-value is the tail probability of the sampling distribution of W, or the probability of observing a value of W that is more extreme/unusual than w.

It is also the probability of making a Type I error if we reject the null.
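One way to make the p-value concrete without a t table is Monte Carlo: simulate the test statistic under the null and measure how often it is at least as extreme as the observed value. The numbers follow the lecture’s movie example; the seed and simulation count are arbitrary choices of ours.

```python
import math
import random

random.seed(0)

def t_stat(sample, mu_0):
    # t statistic: (sample mean - mu_0) / (s / sqrt(n))
    n = len(sample)
    m = sum(sample) / n
    var = sum((x - m) ** 2 for x in sample) / (n - 1)
    return (m - mu_0) / math.sqrt(var / n)

# Movie example under H0 (mu = 9): how often is the simulated t statistic
# at least as extreme as the one observed (|T| ~ 1.905)?
mu_0, sigma, n = 9.0, math.sqrt(75.0), 121
t_obs = abs((7.5 - mu_0) / (sigma / math.sqrt(n)))

n_sims = 5000
extreme = sum(
    1 for _ in range(n_sims)
    if abs(t_stat([random.gauss(mu_0, sigma) for _ in range(n)], mu_0)) >= t_obs
)
p_value = extreme / n_sims
print(f"two-sided p-value ~ {p_value:.3f}")
```

A simulated two-sided p-value around 0.06 is consistent with the earlier decision: reject at the 10% level, fail to reject at the 5% level.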
## Interval estimation

Previous examples of estimators (e.g., the sample mean) are called point estimators, as they give us a single value for a population parameter.

Alternative: an interval estimator, an interval that contains a population parameter with a known probability. An interval estimator of a population parameter Q takes the form [L, U], where L and U are functions of the data.
## Example: confidence interval for the population mean

A 99% confidence interval (CI) for the population mean μ is an interval [L, U] such that:

Pr[L ≤ μ ≤ U] = 0.99

Problem: how do we determine [L, U] so this is true? A dumb, but illustrative, way to achieve this:

1.) Pick a candidate value μ1 and construct the T statistic to test H0 : μ = μ1 versus H1 : μ ≠ μ1.
2.) If we reject H0, then μ1 is not in the interval; if we do not reject H0, then μ1 is in it (repeating over candidate values of μ1 traces out the interval).
Easier way: make use of the sampling distribution of our T statistic; when sampling from the normal distribution, it is the case that

T = (X̄ − μ) / (s/√n) ~ tn−1

And we can always look up the critical value tn−1,α/2 such that Pr[−tn−1,α/2 ≤ T ≤ tn−1,α/2] = 1 − α. For a 99% CI, we have

Pr[−tn−1,0.005 ≤ T ≤ tn−1,0.005] = 0.99
0.99 = Pr[−tn−1,0.005 ≤ T ≤ tn−1,0.005]
     = Pr[−tn−1,0.005 ≤ (X̄ − μ)/(s/√n) ≤ tn−1,0.005]
     = Pr[−(s/√n)·tn−1,0.005 ≤ X̄ − μ ≤ (s/√n)·tn−1,0.005]
     = Pr[−(s/√n)·tn−1,0.005 − X̄ ≤ −μ ≤ (s/√n)·tn−1,0.005 − X̄]
     = Pr[X̄ − (s/√n)·tn−1,0.005 ≤ μ ≤ X̄ + (s/√n)·tn−1,0.005]
## A final example of CIs

In 1907, Albert Michelson won the Nobel Prize for developing precision optical instruments, and we can analyze his notebooks for this work.

In 1882, he measured the speed of light in air in kilometers per second, with a mean of 299,756.22 and a standard deviation of 107.11 (with n = 23).

We know now that the true speed of light is 299,710.50… does a 99% confidence interval based on his data include this true value?
0.99 = Pr[X̄ − (s/√n)·tn−1,0.005 ≤ μ ≤ X̄ + (s/√n)·tn−1,0.005]
     = Pr[X̄ − (107.11/√23)·t22,0.005 ≤ μ ≤ X̄ + (107.11/√23)·t22,0.005]
     = Pr[299,756.22 − 62.98 ≤ μ ≤ 299,756.22 + 62.98]
     = Pr[299,693.24 ≤ μ ≤ 299,819.20]

Since 299,710.50 lies inside this interval, the 99% CI does include the true value.
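The Michelson interval can be rebuilt from the summary statistics in a few lines. We hard-code the tabulated critical value t22,0.005 ≈ 2.819, so small rounding differences from the slide’s 62.98 half-width are expected.

```python
import math

# Michelson's 1882 speed-of-light measurements, as summarized on the slide.
x_bar, s, n = 299_756.22, 107.11, 23
true_c = 299_710.50  # modern value quoted on the slide

# Two-sided 0.5%-per-tail critical value for t with 22 df (tabulated).
t_crit = 2.819

half_width = t_crit * s / math.sqrt(n)
lower, upper = x_bar - half_width, x_bar + half_width
print(f"99% CI: [{lower:.2f}, {upper:.2f}]")
print("contains true value?", lower <= true_c <= upper)
```

The interval comfortably covers the modern value, so Michelson’s instrument was accurate within its stated precision.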
## The final word on CIs

We now know how to build a 99% CI for μ when sampling from the normal distribution; the same idea applies to 90% or 95% CIs.

We can also generalize beyond the normal distribution: in large samples, just replace critical values from the t distribution with those from the standard normal.

We can also generalize to other population parameters: we need a sample (test) statistic with a known sampling distribution to build the CI.