Chapter 5
Inferences Regarding Population
Central Values
Inferential Methods for Parameters
• Parameter: Numeric Description of a Population
• Statistic: Numeric Description of a Sample
• Statistical Inference: Use of observed statistics to
make statements regarding parameters
– Estimation: Predicting the unknown parameter based on
sample data. Can be either a single number (point estimate)
or a range (interval estimate)
– Testing: Using sample data to see whether we can rule out
specific values of an unknown parameter with a certain level
of confidence
Estimating with Confidence
• Goal: Estimate a population mean based on sample mean
• Unknown: Parameter (μ)
• Known: Approximate Sampling Distribution of Statistic:

  Ȳ ~ N(μ, σ/√n)

• Recall: For a random variable that is normally distributed, the
probability that it will fall within 2 standard deviations of its mean is
approximately 0.95:

  P(μ − 2σ/√n ≤ Ȳ ≤ μ + 2σ/√n) ≈ 0.95
Estimating with Confidence
• Although the parameter is unknown, it’s highly likely
that our sample mean (estimate) will lie within 2
standard deviations (aka standard errors) of the
population mean (parameter)
• Margin of Error: Measure of the upper bound in
sampling error with a fixed level (we will typically use
95%) of confidence. That will correspond to 2 standard
errors:
  Margin of Error (95% Confidence): 2σ/√n

  Confidence Interval: estimate ± margin of error
Confidence Interval for a Mean μ
• Confidence Coefficient (1−α): Probability (based on
repeated samples and construction of intervals) that a
confidence interval will contain the true mean μ
• Common choices of 1−α and resulting intervals:
  90% Confidence: ȳ ± 1.645 σ/√n
  95% Confidence: ȳ ± 1.960 σ/√n
  99% Confidence: ȳ ± 2.576 σ/√n

  (1−α)100% Confidence: ȳ ± z_{α/2} σ/√n

  1−α    α/2     1−α/2   z_{α/2}
  0.90   0.050   0.950   1.645
  0.95   0.025   0.975   1.960
  0.99   0.005   0.995   2.576
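The z_{α/2} values in the table above can be reproduced with Python's standard library (a sketch, not from the slides; statistics.NormalDist is the stdlib normal distribution):

```python
from statistics import NormalDist

def z_crit(conf_level):
    # z_{alpha/2}: standard-normal quantile at 1 - alpha/2
    alpha = 1 - conf_level
    return NormalDist().inv_cdf(1 - alpha / 2)

for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.2f}: z = {z_crit(conf):.3f}")   # 1.645, 1.960, 2.576
```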
Standard Normal Distribution
[Figure: standard normal density with area 1−α between −z_{α/2} and z_{α/2}, and area α/2 in each tail]
Normal Distribution
[Figure: sampling distribution of Ȳ with area 1−α between μ − z_{α/2}σ/√n and μ + z_{α/2}σ/√n, and area α/2 in each tail]
Philadelphia Monthly Rainfall (1825-1869)
[Figure: histogram of monthly rainfall, frequency vs. rainfall (0 to 15 or more inches)]

  μ = 3.68   σ = 1.92
  Margin of error (n = 20, C = 95%): 1.96(1.92)/√20 = 0.84
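As a quick check of the 0.84 above (values from the slide):

```python
import math

sigma, n = 1.92, 20
me = 1.96 * sigma / math.sqrt(n)   # 95% margin of error
print(round(me, 2))                # 0.84
```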
4 Random Samples of Size n=20, 95% CI's
[Four tables list the 20 randomly selected months for each sample (month index, rainfall, random number). Each sample mean and its 95% CI (mean ± 0.84):]

  Sample   Mean   Mean−me   Mean+me
  1        3.39   2.55      4.23
  2        3.88   3.04      4.72
  3        3.86   3.02      4.70
  4        3.15   2.31      3.99

All four intervals contain μ = 3.68.
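The repeated-sampling idea behind these four intervals can be checked by simulation — a sketch assuming normally distributed monthly rainfall with the μ = 3.68 and σ = 1.92 given above (the real rainfall data are skewed, so this is only an approximation):

```python
import math
import random

mu, sigma, n, reps = 3.68, 1.92, 20, 2000
me = 1.96 * sigma / math.sqrt(n)        # fixed 95% margin of error (sigma known)
rng = random.Random(1)                  # seeded for reproducibility

covered = 0
for _ in range(reps):
    ybar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
    if ybar - me <= mu <= ybar + me:    # did this interval capture mu?
        covered += 1

print(covered / reps)   # close to 0.95
```

About 95% of the simulated intervals capture μ, matching the confidence-coefficient interpretation.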
Factors Affecting Confidence Interval Width
• Goal: Have precise (narrow) confidence intervals
– Confidence Level (1−α): Increasing 1−α increases the
probability that an interval contains the parameter, which
widens the confidence interval. Reducing 1−α shortens the
interval (at a cost in confidence)
– Sample size (n): Increasing n decreases the standard error of
the estimate, the margin of error, and the width of the interval
(quadrupling n cuts the width in half)
– Standard Deviation (σ): The more variable the individual
measurements, the wider the interval. Potential ways to
reduce σ are to focus on a more precise target population or
use a more precise measuring instrument. Often nothing can be
done, as nature determines σ
Precautions
• Data should be simple random sample from population
(or at least can be treated as independent observations)
• More complex sampling designs have adjustments
made to formulas (see Texts such as Elementary Survey
Sampling by Scheaffer, Mendenhall, Ott)
• Biased sampling designs give meaningless results
• Small sample sizes from nonnormal distributions will
have coverage probabilities typically below the nominal
level (1−α)
• Typically σ is unknown. Replacing it with the sample
standard deviation s works as a good approximation in
large samples
Selecting the Sample Size
• Before collecting sample data, usually have a goal for
how large the margin of error should be to have useful
estimate of unknown parameter (particularly when
comparing two populations)
• Let E be the desired level of the margin of error and σ
be the standard deviation of the population of
measurements (typically unknown; must be estimated
based on previous research or a pilot study)
• The sample size giving this margin of error is:

  E = z_{α/2} σ/√n   ⇒   n = (z_{α/2} σ / E)²
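A sketch of the calculation (the σ = 8 and E = 2 below are illustrative values, not from a particular study):

```python
import math
from statistics import NormalDist

def sample_size(sigma, E, conf_level=0.95):
    """Smallest n whose margin of error is at most E."""
    z = NormalDist().inv_cdf(1 - (1 - conf_level) / 2)
    return math.ceil((z * sigma / E) ** 2)

print(sample_size(sigma=8, E=2))   # 62
```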
Hypothesis Tests
• Method of using sample (observed) data to challenge a
hypothesis regarding a state of nature (represented as
particular parameter value(s))
• Begin by stating a research hypothesis that challenges a
statement of “status quo” (or equality of 2 populations)
• State the current state or “status quo” as a statement
regarding population parameter(s)
• Obtain sample data and see to what extent it
agrees/disagrees with the “status quo”
• Conclude that the “status quo” is not true if observed
data are highly unlikely (low probability) if it were true
Elements of a Hypothesis Test (I)
• Null hypothesis (H0): Statement or theory being tested. Stated in
terms of parameter(s) and contains an equality. Test is set up
under the assumption of its truth.
• Alternative Hypothesis (Ha): Statement contradicting H0. Stated
in terms of parameter(s) and contains an inequality. Will only be
accepted if strong evidence refutes H0 based on sample data. May
be 1-sided or 2-sided, depending on theory being tested.
• Test Statistic (T.S.): Quantity measuring discrepancy between
sample statistic (estimate) and parameter value under H0
• Rejection Region (R.R.): Values of test statistic for which we
reject H0 in favor of Ha
• P-value: Probability (assuming H0 true) that we would observe
sample data (test statistic) this extreme or more extreme in favor
of the alternative hypothesis (Ha)
Example: Interference Effect
• Does the way items are presented affect task time?
– Subjects shown list of color names in 2 colors: different/black
– yi is the difference in times to read lists for subject i: diff − blk
– H0: No interference effect: mean difference is 0 (μ = 0)
– Ha: Interference effect exists: mean difference > 0 (μ > 0)
– Assume standard deviation in differences is σ = 8 (unrealistic*)
– Experiment to be based on n = 70 subjects
  Parameter value under H0: μ = 0
  Approximate distribution of sample mean under H0: X̄ ~ N(0, σ/√n) = N(0, 8/√70) = N(0, 0.96)
  Observed sample mean: x̄ = 2.39

How likely to observe a sample mean difference ≥ 2.39 if μ = 0?
[Figure: sampling distribution of X̄ under H0; P-value = area above 2.39]
Elements of a Hypothesis Test (II)
• Type I Error: Test resulting in rejection of H0 in favor
of Ha when H0 is in fact true
– P(Type I error) = α (typically .10, .05, or .01)
• Type II Error: Test resulting in failure to reject H0 in
favor of Ha when in fact Ha is true (H0 is false)
– P(Type II error) = β (depends on true parameter value)
• 1-Tailed Test: Test where the alternative hypothesis
states specifically that the parameter is strictly above
(below) the null value
• 2-Tailed Test: Test where the alternative hypothesis is
that the parameter is not equal to null value
(simultaneously tests “greater than” and “less than”)
Test Statistic
• Parameter: Population mean (μ); value under H0 is μ0
• Statistic (Estimator): Sample mean ȳ obtained from
sample measurements
• Standard Error of Estimator: σ_ȳ = σ/√n
• Sampling Distribution of Estimator:
– Normal if shape of distribution of individual measurements is
normal
– Approximately normal regardless of shape for large samples
• Test Statistic (labeled simply as z in text):

  z_obs = (ȳ − μ0) / (σ/√n)

Note: Typically σ is unknown and is replaced by s in large samples
Decision Rules and Rejection Regions
• Once a significance level (α) has been chosen, a
decision rule can be stated, based on a critical value:
• 2-sided tests: H0: μ = μ0   Ha: μ ≠ μ0
– If z_obs > z_{α/2}: Reject H0 and conclude μ > μ0
– If z_obs < −z_{α/2}: Reject H0 and conclude μ < μ0
– If −z_{α/2} ≤ z_obs ≤ z_{α/2}: Do not reject H0: μ = μ0
• 1-sided tests (Upper Tail): H0: μ ≤ μ0   Ha: μ > μ0
– If z_obs > z_α: Reject H0 and conclude μ > μ0
– If z_obs ≤ z_α: Do not reject H0: μ ≤ μ0
• 1-sided tests (Lower Tail): H0: μ ≥ μ0   Ha: μ < μ0
– If z_obs < −z_α: Reject H0 and conclude μ < μ0
– If z_obs ≥ −z_α: Do not reject H0: μ ≥ μ0
Computing the P-Value
• 2-sided Tests (H0: μ = μ0   Ha: μ ≠ μ0): How likely is it to observe a sample
mean as far or farther from the value of the parameter under
the null hypothesis?

  Under H0: Ȳ ~ N(μ0, σ/√n)   ⇒   Z = (Ȳ − μ0)/(σ/√n) ~ N(0, 1)

After obtaining the sample data, compute the mean, convert it
to a z-score (z_obs), and find the area above |z_obs| and below −|z_obs|
from the standard normal (z) table
• 1-sided Tests: Obtain the area above z_obs for upper tail tests
(Ha: μ > μ0) or below z_obs for lower tail tests (Ha: μ < μ0)
Interference Effect (1-sided Test)
• Testing whether population mean time to read a list of colors is
higher when the color names are written in a different color
• Data: yi: difference score for subject i (Different − Black)
• Null hypothesis (H0): No interference effect (H0: μ ≤ 0)
• Alternative hypothesis (Ha): Interference effect (Ha: μ > 0)
• n = 70 subjects in experiment, reasonably large sample

  Sample Data: ȳ = 2.39   s = 7.81   n = 70
  Test Statistic (based on s = 7.81): z_obs = (2.39 − 0)/(7.81/√70) = 2.39/0.93 = 2.57
  Rejection Region (α = 0.05): Reject H0 if z_obs ≥ z_.05 = 1.645
  P-value (based on s = 7.81): P(Z ≥ 2.57) = 1 − .9949 = .0051

Conclude there is evidence of an interference effect (μ > 0)
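The same 1-sided z test can be sketched with the standard library (numbers from the slide; the slide rounds the standard error to 0.93, which gives 2.57 rather than the unrounded 2.56):

```python
import math
from statistics import NormalDist

ybar, s, n, mu0 = 2.39, 7.81, 70, 0.0
se = s / math.sqrt(n)                  # estimated standard error
z_obs = (ybar - mu0) / se
p_value = 1 - NormalDist().cdf(z_obs)  # upper-tail P-value

print(round(z_obs, 2), round(p_value, 4))
```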
Interference Effect (2-sided Test)
• Testing whether population mean time to read a list of colors is
affected (higher or lower) when the color names are written in a different color
• Data: Xi: difference score for subject i (Different − Black)
• Null hypothesis (H0): No interference effect (H0: μ = 0)
• Alternative hypothesis (Ha): Interference effect (+ or −) (Ha: μ ≠ 0)

  Sample Data: x̄ = 2.39   s = 7.81   n = 70
  Test Statistic (based on s = 7.81): z_obs = (2.39 − 0)/(7.81/√70) = 2.39/0.93 = 2.57
  Rejection Region (α = 0.05): |z_obs| ≥ z_.05/2 = z_.025 = 1.96
  P-value (based on s = 7.81): 2P(Z ≥ |2.57|) = 2(1 − .9949) = .0102

Again, evidence of an interference effect (μ > 0)
Equivalence of 2-sided Tests and CI’s
• For a given α, a 2-sided test conducted at significance
level α gives results equivalent to a (1−α) level
confidence interval:
– If the entire interval > μ0: P-value < α, z_obs > z_{α/2} (conclude μ > μ0)
– If the entire interval < μ0: P-value < α, z_obs < −z_{α/2} (conclude μ < μ0)
– If the interval contains μ0: P-value > α, −z_{α/2} < z_obs < z_{α/2} (don't
conclude μ ≠ μ0)
• The confidence interval is the set of parameter values for which we
would fail to reject the null hypothesis (based on a 2-sided test)
Power of a Test
• Power: Probability a test rejects H0 (depends on the true μ)
– H0 True: Power = P(Type I error) = α
– H0 False: Power = 1 − P(Type II error) = 1 − β
• Example (using context of interference data):
– H0: μ = 0   HA: μ > 0   σ = 8   n = 16
– Decision Rule: Reject H0 (at α = 0.05 significance level) if:

  z_obs = (ȳ − 0)/(σ/√n) = ȳ/(8/√16) = ȳ/2 ≥ 1.645   ⇒   ȳ ≥ 3.29
Power of a Test
• Now suppose in reality that μ = 3.0 (HA is true)
• Power now refers to the probability we (correctly)
reject the null hypothesis. Note that the sampling
distribution of the sample mean is approximately
normal, with mean 3.0 and standard deviation
(standard error) 2.0
• Decision Rule (from last slide): Conclude the population
mean interference effect is positive (greater than 0) if
the sample mean difference score is above 3.29
• Power for this case can be computed as:

  Power = P(ȳ ≥ 3.29)   when   ȳ ~ N(3.0, 2.0)
Power of a Test
  Power = P(ȳ ≥ 3.29) = P(Z ≥ (3.29 − 3.0)/2.0 = 0.145) = 1 − .5576 = .4424

• All else being equal:
– As sample size increases, power increases
– As population variance decreases, power increases
– As the true mean gets farther from μ0, power increases
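The power calculation above can be sketched directly (values from the slides: σ = 8, n = 16, cutoff 3.29, true μ = 3.0):

```python
import math
from statistics import NormalDist

sigma, n, cutoff, mu_true = 8.0, 16, 3.29, 3.0
se = sigma / math.sqrt(n)                         # standard error = 2.0
power = 1 - NormalDist(mu_true, se).cdf(cutoff)   # P(ybar >= 3.29 | mu = 3.0)
print(round(power, 2))                            # 0.44
```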
Power of a Test
[Figure: densities of the sampling distribution of Ȳ under H0 (μ = 0) and HA (μ = 3), with the cutoff 3.29 marked. Under H0: area .95 "fail to reject H0" and .05 "reject H0". Under HA: area .5576 "fail to reject H0" and .4424 "reject H0" (the power)]
Power of Z-test
[Figure: power curves, P(Reject H0) vs. true mean μ (0 to 5), for n = 16, 32, 64, 80]
• Power curves for sample sizes of 16, 32, 64, 80 and varying true
values μ from 0 to 5, with σ = 8
• For given μ, power increases with sample size
• For given sample size, power increases with μ
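A sketch of how such power curves can be computed (upper-tail z test of H0: μ = 0, following the setup above):

```python
import math
from statistics import NormalDist

def power(mu, n, sigma=8.0, alpha=0.05):
    """P(reject H0: mu = 0) for the upper-tail z test when the true mean is mu."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # 1.645
    cutoff = z_alpha * sigma / math.sqrt(n)     # reject when ybar >= cutoff
    return 1 - NormalDist(mu, sigma / math.sqrt(n)).cdf(cutoff)

for n in (16, 32, 64, 80):
    print(n, round(power(3.0, n), 3))
```

At μ = 0 (H0 true) the rejection probability is exactly α = 0.05, and it rises with both μ and n, tracing out the curves in the figure.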
Sample Size Calculations for Fixed Power
• Goal: Choose the sample size to have a favorable chance of detecting
an important difference from μ0 in the 2-sided test H0: μ = μ0 vs Ha: μ ≠ μ0
• Step 1: Define an important difference to be detected (Δ):
– Case 1: σ approximated from prior experience or a pilot study; the difference can
be stated in units of the data: Δ = |μ − μ0|
– Case 2: σ unknown; the difference must be stated in units of standard
deviations of the data: δ = |μ − μ0|/σ
• Step 2: Choose the desired power to detect the important
difference (1−β, typically at least .80). For a 2-sided test:

  Case 1: n = σ²(z_{α/2} + z_β)² / Δ²
  Case 2: n = (z_{α/2} + z_β)² / δ²
Example - Interference Data
• 2-Sided Test: H0: μ = 0 vs Ha: μ ≠ 0
• Set α = P(Type I Error) = 0.05
• Choose important difference |μ − μ0| = Δ = 2.0
• Choose Power = P(Reject H0 | Δ = 2.0) = .90
• Set β = P(Type II Error) = 1 − Power = 1 − .90 = .10
• From study, we know σ ≈ 8

  z_{α/2} = z_.05/2 = z_.025 = 1.96   z_β = z_.10 = 1.282

  n = σ²(z_{α/2} + z_β)² / Δ² = (8)²(1.96 + 1.282)² / (2)² = 168.2 → 169

Would need 169 subjects to have a .90 probability of detecting effect
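The same calculation in Python (stdlib only), reproducing the 169:

```python
import math
from statistics import NormalDist

def n_for_power(sigma, delta, alpha=0.05, power=0.90):
    """Sample size for a 2-sided z test to detect |mu - mu0| = delta."""
    nd = NormalDist()
    z_half_alpha = nd.inv_cdf(1 - alpha / 2)   # 1.96
    z_beta = nd.inv_cdf(power)                 # z_beta = z_.10 = 1.282
    return math.ceil(sigma ** 2 * (z_half_alpha + z_beta) ** 2 / delta ** 2)

print(n_for_power(sigma=8, delta=2.0))   # 169
```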
Potential for Abuse of Tests
• Should choose a significance (a) level in advance
and report test conclusion (significant/nonsignificant)
as well as the P-value. Significance level of 0.05 is
widely used in the academic literature
• Very large sample sizes can detect very small
differences for a parameter value. A clinically
meaningful effect should be determined, and
confidence interval reported when possible
• A nonsignificant test result does not imply no effect
(that H0 is true).
• Many studies test many variables simultaneously.
This can increase overall type I error rates
Family of t-distributions
• Symmetric, mound-shaped, centered at 0 (like the
standard normal (z) distribution)
• Indexed by degrees of freedom (df), the number of
independent observations (deviations) comprising the
estimated standard deviation. For one sample
problems df = n-1
• Have heavier tails (more probability over extreme
ranges) than the z-distribution
• Converge to the z-distribution as df gets large
• Tables of critical values for certain upper tail
probabilities are available (Table 2, p. 1088)
Inference for Population Mean
• Practical Problem: The sample mean has a sampling
distribution that is normal with mean μ and standard
deviation σ/√n (when the data are normal, and
approximately so for large samples). σ is unknown.
• Have an estimate of σ, s, obtained from sample data.
Estimated standard error of the sample mean is:

  SE(x̄) = s/√n

When the sample is an SRS from N(μ, σ), the t-statistic
(same as z, with estimated standard deviation) is
distributed t with n−1 degrees of freedom:

  t = (x̄ − μ)/(s/√n) ~ t(n − 1)
Critical Values of the t-Distribution
[Full table of t critical values: rows are degrees of freedom (1–30, 40, 50, 60, 80, 100, 1000, and z*); columns are upper-tail probabilities (0.25, 0.2, 0.15, 0.1, 0.05, 0.025, 0.02, 0.01, 0.005, 0.0025, 0.001, 0.0005). Selected entries:]

  df    0.05    0.025   0.005
  5     2.015   2.571   4.032
  10    1.812   2.228   3.169
  30    1.697   2.042   2.750
  z*    1.645   1.960   2.576
t(5), t(15), t(25), z distributions
[Figure: densities of the t(5), t(15), t(25), and z distributions; the t densities have heavier tails and approach the z density as df increases]
One-Sample Confidence Interval for m
• SRS from a population with mean m is obtained.
• Sample mean, sample standard deviation are obtained
• Degrees of freedom df = n−1 and confidence level
(1−α) are selected
• Level (1−α) confidence interval of form:

  ȳ ± t_{α/2} s/√n

t_{α/2} selected from the t-table so that P(−t_{α/2} ≤ t(n−1) ≤ t_{α/2}) = 1 − α
Procedure is theoretically derived based on normally distributed data,
but has been found to work well regardless for large n
1-Sample t-test (2-tailed alternative)
• 2-sided Test: H0: μ = μ0   Ha: μ ≠ μ0
• Test Statistic:

  t_obs = (ȳ − μ0)/(s/√n)

• Decision Rule (t_{α/2} such that P(t(n−1) ≥ t_{α/2}) = α/2):
– Conclude μ > μ0 if t_obs > t_{α/2}
– Conclude μ < μ0 if t_obs < −t_{α/2}
– Do not conclude μ ≠ μ0 otherwise
• P-value: 2P(t(n−1) ≥ |t_obs|)
P-value (2-tailed test)
[Figure: t(n−1) density; P-value = area below −|t_obs| plus area above |t_obs|]
1-Sample t-test (1-tailed (upper) alternative)
• 1-sided Test: H0: μ = μ0   Ha: μ > μ0
• Test Statistic:

  t_obs = (ȳ − μ0)/(s/√n)

• Decision Rule (t_α such that P(t(n−1) ≥ t_α) = α):
– Conclude μ > μ0 if t_obs > t_α
– Do not conclude μ > μ0 otherwise
• P-value: P(t(n−1) ≥ t_obs)
P-value (Upper Tail Test)
[Figure: t(n−1) density; P-value = area above t_obs]
1-Sample t-test (1-tailed (lower) alternative)
• 1-sided Test: H0: μ = μ0   Ha: μ < μ0
• Test Statistic:

  t_obs = (ȳ − μ0)/(s/√n)

• Decision Rule (t_α such that P(t(n−1) ≥ t_α) = α):
– Conclude μ < μ0 if t_obs < −t_α
– Do not conclude μ < μ0 otherwise
• P-value: P(t(n−1) ≤ t_obs)
P-value (Lower Tail Test)
[Figure: t(n−1) density; P-value = area below t_obs]
Example: Mean Flight Time ATL/Honolulu
• Scheduled flight time: 580 minutes
• Sample: n = 31 flights, 10/2004 (treating as an SRS from all
possible flights)
• Test whether population mean flight time differs from
scheduled time
• H0: μ = 580   Ha: μ ≠ 580
• Critical value (2-sided test, α = 0.05, n−1 = 30 df): t.025 = 2.042
• Sample data, Test Statistic, P-value:

  ȳ = 574.1   s = 19.7   n = 31
  t_obs = (574.1 − 580)/(19.7/√31) = −5.9/3.54 = −1.67
  P-value = 2P(t(30) ≥ |−1.67|) > 2P(t(30) ≥ 1.697) = 2(.05) = .10

Since |−1.67| < 2.042, do not reject H0
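The test statistic can be reproduced directly (the 2.042 critical value is t_.025 with 30 df from the t-table):

```python
import math

ybar, s, n, mu0 = 574.1, 19.7, 31, 580.0
t_obs = (ybar - mu0) / (s / math.sqrt(n))

print(round(t_obs, 2))       # -1.67
print(abs(t_obs) > 2.042)    # False: fail to reject H0
```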
Inference on a Population Median
• Median: “Middle” of a distribution (50th Percentile)
– Equal to mean for symmetric distributions
– Below mean for right-skewed distributions
– Above mean for left-skewed distributions
• Confidence Interval for Population Median:
– Sort the observations from smallest to largest: y(1) ≤ ... ≤ y(n)
– Obtain lower (L_{α/2}) and upper (U_{α/2}) "bounds of ranks"
– Small samples: obtain C_{α(2),n} from Table 4 (p. 1091)
– Large samples: C_{α(2),n} ≈ n/2 − z_{α/2}√(n/4)
– Then: L_{α/2} = C_{α(2),n} + 1   U_{α/2} = n − C_{α(2),n}

  (1−α)100% CI for Median (M): (M_L, M_U) = (y(L_{α/2}), y(U_{α/2}))
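The large-sample rank bounds can be sketched as follows (rounding C to the nearest integer, as in the slides' 15.5 − 5.5 = 10):

```python
import math
from statistics import NormalDist

def median_ci_ranks(n, conf_level=0.95):
    """Large-sample lower/upper ranks (1-indexed) for a CI on the median."""
    z = NormalDist().inv_cdf(1 - (1 - conf_level) / 2)
    c = round(n / 2 - z * math.sqrt(n / 4))
    return c + 1, n - c

print(median_ci_ranks(31))   # (11, 21)
```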
Example - ATL/HNL Flight Times
• n = 31
• Small-Sample: C.05(2),31 = 9   ⇒   L = 9 + 1 = 10, U = 31 − 9 = 22
• Large-Sample: C.05(2),31 ≈ 31/2 − 1.96√(31/4) = 15.5 − 5.5 = 10   ⇒   L = 11, U = 21

  Sample Size   L_{α/2}   U_{α/2}   M_L   M_U
  Small         10        22        567   583
  Large         11        21        568   582
Flight times by day (1–31): 567, 582, 592, 601, 567, 585, 569, 568, 569, 553, 531, 538, 545, 542, 558, 558, 579, 584, 583, 582, 577, 582, 583, 596, 589, 586, 567, 582, 627, 580, 576
Ordered: 531, 538, 542, 545, 553, 558, 558, 567, 567, 567, 568, 569, 569, 576, 577, 579, 580, 582, 582, 582, 582, 583, 583, 584, 585, 586, 589, 592, 596, 601, 627
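Putting the data and the large-sample rank bounds together (stdlib only) reproduces the (568, 582) interval:

```python
import math
from statistics import NormalDist

times = [567, 582, 592, 601, 567, 585, 569, 568, 569, 553,
         531, 538, 545, 542, 558, 558, 579, 584, 583, 582,
         577, 582, 583, 596, 589, 586, 567, 582, 627, 580, 576]

n = len(times)                            # 31
z = NormalDist().inv_cdf(0.975)           # 1.96
c = round(n / 2 - z * math.sqrt(n / 4))   # 10
lo, hi = c + 1, n - c                     # ranks 11 and 21

ordered = sorted(times)
print(ordered[lo - 1], ordered[hi - 1])   # 568 582
```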