P-Values in Hypothesis Testing
With the advent of statistical computing packages such as Excel, SAS, or SPSS, a modified approach to hypothesis testing became feasible. This approach uses the p-value associated with the test statistic.
Defn : The p-value of a test is the smallest level of significance that would lead to rejection of the null hypothesis, given the value of the test statistic.
Note: The p-value is also the conditional probability of finding a value of the test statistic at least as extreme as the one actually found, given that the null hypothesis is true.
The statistical computing package that we use will calculate both the value of the test statistic and the associated p-value. The decision rule is: if the p-value is less than the chosen value of $\alpha$, we reject the null hypothesis; otherwise, we fail to reject the null hypothesis.
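To make the decision rule concrete, here is a minimal Python sketch (an illustration, not tied to any particular package named above); the value of the test statistic and the significance level are assumed for the example.

```python
from scipy import stats

# Assumed values: a z test statistic already computed from data,
# and a chosen significance level.
z_obs = 2.31
alpha = 0.05

# Two-sided p-value: probability, under H0, of a test statistic
# at least as extreme as the one observed.
p_value = 2 * stats.norm.sf(abs(z_obs))

if p_value < alpha:
    print(f"p-value = {p_value:.4f} < alpha = {alpha}: reject H0")
else:
    print(f"p-value = {p_value:.4f} >= alpha = {alpha}: fail to reject H0")
```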
Confidence Interval Estimates for Parameters
In any given estimation situation, we have no way of knowing how close the point estimate is to the true value of the parameter, since the true value of the parameter is unknown. We can be virtually certain, however, that the estimated value is not exactly equal to the true value.
We want to be able to say how good our estimate is. Hence, we want to extend the idea of a point estimate to the following type of estimation.
Defn : A confidence interval estimate of a parameter is an interval obtained based on a point estimate, together with a percentage that specifies how confident we are that the true value of the parameter lies in the interval. This percentage is called the confidence level, or confidence coefficient.
The general procedure for obtaining a confidence interval estimate for a parameter $\theta$ is as follows:
1) We choose our level of confidence, $1-\alpha$ (usually 90%, 95%, or 99%).
2) We find statistics $L$ and $U$ such that $P(L \le \theta \le U) = 1 - \alpha$.
Interpreting a Confidence Interval: We say that we are $(1-\alpha) \cdot 100\%$ confident that the true value of the parameter lies in the interval. This means that the interval was obtained by a method such that $(1-\alpha) \cdot 100\%$ of all intervals so obtained actually contain the true parameter value.
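The repeated-sampling interpretation can be checked by simulation. The sketch below (a minimal illustration, assuming a normal population with known standard deviation and hypothetical parameter values) constructs many 95% confidence intervals and reports the fraction that contain the true mean.

```python
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed for reproducibility
mu, sigma, n = 10.0, 2.0, 50      # hypothetical true mean, known sd, sample size
z = 1.96                          # z value for 95% confidence

n_reps, covered = 10_000, 0
for _ in range(n_reps):
    sample = rng.normal(mu, sigma, n)
    half_width = z * sigma / np.sqrt(n)
    xbar = sample.mean()
    if xbar - half_width <= mu <= xbar + half_width:
        covered += 1

# About 95% of the intervals should contain the true mean.
print(f"coverage = {covered / n_reps:.3f}")
```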
Inferences for Population Proportions
We want to be able to do inference about the value, p, of the proportion of a population possessing a certain characteristic. In order to do inference we will need to perform a binomial experiment. To review, an experiment is binomial if it possesses the following characteristics:
1) It consists of a fixed number, n, of independent and identical trials;
2) Each trial results in one of two possible outcomes: Success or Failure;
3) P(Success) = p for each of the trials.
Let Y = number of successes in our binomial experiment. Then Y is the sum of n independent and identically distributed Bernoulli random variables, and the Central Limit Theorem says that the random variable
$$\hat{p} = \frac{Y}{n} \sim \text{Normal}\!\left(p,\; \frac{p(1-p)}{n}\right),$$
approximately, for large $n$.
We need one more theoretical result before we can construct our confidence interval estimate for p.
Slutsky’s Theorem tells us that if $\dfrac{\hat{p} - p}{\sqrt{p(1-p)/n}}$ has an approximate standard normal distribution, then $\dfrac{\hat{p} - p}{\sqrt{\hat{p}(1-\hat{p})/n}}$ also has an approximate standard normal distribution.
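A quick simulation sketch (my own illustration, with a hypothetical $p$ and $n$) shows the practical effect of Slutsky’s Theorem: even with $\hat{p}$ plugged into the standard error, the statistic behaves like a standard normal for large $n$.

```python
import numpy as np

rng = np.random.default_rng(1)    # arbitrary seed
p, n, n_reps = 0.3, 500, 20_000   # hypothetical true proportion, sample size, replications

y = rng.binomial(n, p, size=n_reps)                  # successes in each experiment
p_hat = y / n
z = (p_hat - p) / np.sqrt(p_hat * (1 - p_hat) / n)   # p_hat used in the standard error

# For large n, these should be close to 0 and 1, as for a standard normal.
print(f"mean = {z.mean():.3f}, sd = {z.std():.3f}")
```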
Confidence Interval for p:
Given a confidence level, $1-\alpha$, we can make the following statement, using the result from the C.L.T.:
$$P\!\left(-z_{\alpha/2} \le \frac{\hat{p} - p}{\sqrt{p(1-p)/n}} \le z_{\alpha/2}\right) \approx 1 - \alpha.$$
Rearranging, and applying Slutsky’s Theorem to replace $p$ by $\hat{p}$ in the standard error, we obtain:
$$P\!\left(\hat{p} - z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}} \le p \le \hat{p} + z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}\right) \approx 1 - \alpha.$$
Hence an approximate $(1-\alpha) \cdot 100\%$ confidence interval estimate for $p$ is
$$\hat{p} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}.$$
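The sketch below computes this interval for hypothetical data ($y$ successes in $n$ trials); note that this is the large-sample interval derived above, not an exact small-sample method.

```python
import numpy as np
from scipy import stats

y, n, alpha = 37, 120, 0.05        # hypothetical data and significance level
p_hat = y / n
z = stats.norm.ppf(1 - alpha / 2)  # z_{alpha/2}

half_width = z * np.sqrt(p_hat * (1 - p_hat) / n)
print(f"{1 - alpha:.0%} CI for p: ({p_hat - half_width:.4f}, {p_hat + half_width:.4f})")
```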
Example : p. 184, Exercise 4-54 c)
Sample Size for a Specified Margin of Error:
As part of our experimental design, we want to specify the margin of error, $E$, that is acceptable for our estimate of $p$, and choose a sample size to ensure that we achieve this margin of error. We let
$$E = z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}}.$$
Solving for $n$, we obtain
$$n = \left(\frac{z_{\alpha/2}}{E}\right)^2 p(1-p).$$
Now, we know $E$ and $\alpha$, but we need to find a usable value for $p(1-p)$ before we can find the sample size. We use the fact that for any value of $p$ between 0 and 1, we have $p(1-p) \le 0.25$. Then
$$n = \frac{z_{\alpha/2}^2}{4E^2}$$
gives us an upper bound on the sample size needed to ensure that we achieve our desired margin of error with confidence level $1-\alpha$.
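As a sketch with hypothetical values of $E$ and $\alpha$, the conservative bound is computed directly below; the result is rounded up, since the sample size must be an integer.

```python
import math
from scipy import stats

E, alpha = 0.03, 0.05              # hypothetical margin of error and level
z = stats.norm.ppf(1 - alpha / 2)  # z_{alpha/2}

# Conservative sample size using the bound p(1-p) <= 0.25.
n_bound = math.ceil(z**2 / (4 * E**2))
print(f"required sample size: n = {n_bound}")   # 1068 for these inputs
```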
Example : p. 184, Exercise 4-49 f)
Testing Hypotheses Concerning Proportions:
We want to test hypotheses of the following possible forms:
1) $H_0: p = p_0$ vs. $H_a: p \ne p_0$
2) $H_0: p \ge p_0$ vs. $H_a: p < p_0$
3) $H_0: p \le p_0$ vs. $H_a: p > p_0$
The test statistic to be used is
$$Z = \frac{\hat{p} - p_0}{\sqrt{p_0(1-p_0)/n}}.$$
Under the null hypothesis, the Central Limit Theorem says that this statistic has an approximate standard normal distribution.
For the three types of alternative hypotheses, the rejection regions are:
1) $H_a: p \ne p_0$: Reject $H_0$ if $|z| > z_{\alpha/2}$;
2) $H_a: p < p_0$: Reject $H_0$ if $z < -z_\alpha$;
3) $H_a: p > p_0$: Reject $H_0$ if $z > z_\alpha$.
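Here is a minimal sketch of the two-sided test (form 1), with hypothetical data; forms 2 and 3 would compare $z$ to $-z_\alpha$ or $z_\alpha$ instead.

```python
import numpy as np
from scipy import stats

y, n = 66, 200            # hypothetical data: successes and trials
p0, alpha = 0.25, 0.05    # null-hypothesized proportion and significance level

p_hat = y / n
z = (p_hat - p0) / np.sqrt(p0 * (1 - p0) / n)   # test statistic under H0
z_crit = stats.norm.ppf(1 - alpha / 2)          # z_{alpha/2}

print(f"z = {z:.3f}, critical value = {z_crit:.3f}")
print("reject H0" if abs(z) > z_crit else "fail to reject H0")
```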
Example : p. 184, Exercise 4-54 a)
Sample Size for a Two-Sided Hypothesis Test for a Proportion:
As part of our experimental design, we want to find the sample size that will allow us to achieve a specified level of power for detecting an effect of a given size. We decide on our significance level, $\alpha$. We decide on the effect size, $\delta$, that we wish to be able to detect with probability $1-\beta$. Suppose that the null-hypothesized value of $p$ is $p_0$, and that the actual value is $p \ne p_0$.
Then, writing $\delta = p - p_0$, the test statistic may be written as
$$Z = \frac{\hat{p} - p_0}{\sqrt{p_0(1-p_0)/n}} = \frac{\hat{p} - p}{\sqrt{p_0(1-p_0)/n}} + \frac{\delta}{\sqrt{p_0(1-p_0)/n}}.$$
Therefore the actual distribution of the test statistic is approximately Normal with mean $\dfrac{\delta}{\sqrt{p_0(1-p_0)/n}}$ and standard deviation 1. Then the probability of making a Type II error is given by
$$\beta = \Phi\!\left(z_{\alpha/2} - \frac{\delta}{\sqrt{p_0(1-p_0)/n}}\right) - \Phi\!\left(-z_{\alpha/2} - \frac{\delta}{\sqrt{p_0(1-p_0)/n}}\right).$$
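The expression for $\beta$ can be evaluated directly, with $\Phi$ as the standard normal c.d.f.; the sketch below uses hypothetical values of $p_0$, $\delta$, $n$, and $\alpha$.

```python
import numpy as np
from scipy import stats

p0, delta, n, alpha = 0.25, 0.05, 300, 0.05   # hypothetical inputs
z_half = stats.norm.ppf(1 - alpha / 2)        # z_{alpha/2}

shift = delta / np.sqrt(p0 * (1 - p0) / n)    # mean of Z under the actual p
beta = stats.norm.cdf(z_half - shift) - stats.norm.cdf(-z_half - shift)
print(f"P(Type II error) = {beta:.4f}, power = {1 - beta:.4f}")
```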
Based on the above derivation, the necessary sample size to detect an effect of size $\delta$ with probability $1-\beta$, when the significance level of the test is $\alpha$, is given by
$$n = \left[\frac{z_{\alpha/2}\sqrt{p_0(1-p_0)} + z_\beta\sqrt{p_0(1-p_0)}}{\delta}\right]^2 = \left[\frac{\left(z_{\alpha/2} + z_\beta\right)\sqrt{p_0(1-p_0)}}{\delta}\right]^2.$$
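A sketch of the sample-size formula with hypothetical inputs (rounding up to an integer):

```python
import math
from scipy import stats

p0, delta, alpha, beta = 0.25, 0.05, 0.05, 0.10   # hypothetical inputs
z_half = stats.norm.ppf(1 - alpha / 2)            # z_{alpha/2}
z_beta = stats.norm.ppf(1 - beta)                 # z_beta

n = math.ceil(((z_half + z_beta) * math.sqrt(p0 * (1 - p0)) / delta) ** 2)
print(f"required sample size: n = {n}")
```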
Example : p. 185, Exercise 4-57 d)