Chapter 3 Descriptive Statistics
Population Mean
$\mu = \dfrac{\sum x}{N}$

Sample Mean
$\bar{x} = \dfrac{\sum x}{n}$

Interquartile Range
$\text{IQR} = Q_3 - Q_1$

Sum of Deviations from the Arithmetic Mean Is Always Zero
$\sum (x - \mu) = 0$

Mean Absolute Deviation
$\text{MAD} = \dfrac{\sum |x - \mu|}{N}$

Population Variance
$\sigma^2 = \dfrac{\sum (x - \mu)^2}{N}$

Population Standard Deviation
$\sigma = \sqrt{\dfrac{\sum (x - \mu)^2}{N}}$

Computational Formulas for Population Variance and Standard Deviation
$\sigma^2 = \dfrac{\sum x^2 - \dfrac{(\sum x)^2}{N}}{N}, \qquad \sigma = \sqrt{\sigma^2}$

Sample Variance
$s^2 = \dfrac{\sum (x - \bar{x})^2}{n - 1}$

Sample Standard Deviation
$s = \sqrt{\dfrac{\sum (x - \bar{x})^2}{n - 1}}$

Computational Formulas for Sample Variance and Standard Deviation
$s^2 = \dfrac{\sum x^2 - \dfrac{(\sum x)^2}{n}}{n - 1}, \qquad s = \sqrt{s^2}$

z Score
$z = \dfrac{x - \mu}{\sigma}$

Coefficient of Variation
$\text{CV} = \dfrac{\sigma}{\mu}(100)$

Empirical Rule*
Distance from the Mean    Values within the Distance
μ ± 1σ                    68%
μ ± 2σ                    95%
μ ± 3σ                    99.7%
*Based on the assumption that the data are approximately normally distributed.

Chebyshev's Theorem
Within k standard deviations of the mean, $\mu \pm k\sigma$, lie at least
$1 - \dfrac{1}{k^2}$
proportion of the values.
Assumption: k > 1

Mean of Grouped Data
$\mu_{\text{grouped}} = \dfrac{\sum f M}{N} = \dfrac{\sum_{j=1}^{i} f_j M_j}{\sum_{j=1}^{i} f_j}$
where
i = the number of classes
f = class frequency
M = class midpoint
N = total frequencies (total number of data values)

Median of Grouped Data
$\text{Median} = l + \dfrac{\dfrac{N}{2} - F}{f_{\text{med}}}\,w$
where
l = lower endpoint of the class containing the median
w = width of the class containing the median
f_med = frequency of the class containing the median
F = cumulative frequency of classes preceding the class containing the median
N = total frequencies (total number of data values)

Formulas for Population Variance and Standard Deviation of Grouped Data
Original Formula
$\sigma^2 = \dfrac{\sum f (M - \mu)^2}{N}, \qquad \sigma = \sqrt{\sigma^2}$
Computational Version
$\sigma^2 = \dfrac{\sum f M^2 - \dfrac{(\sum f M)^2}{N}}{N}$
where
f = frequency
M = class midpoint
N = Σf, or total of the frequencies of the population
μ = grouped mean for the population

Formulas for Sample Variance and Standard Deviation of Grouped Data
Original Formula
$s^2 = \dfrac{\sum f (M - \bar{x})^2}{n - 1}, \qquad s = \sqrt{s^2}$
Computational Version
$s^2 = \dfrac{\sum f M^2 - \dfrac{(\sum f M)^2}{n}}{n - 1}$
where
f = frequency
M = class midpoint
n = Σf, or total of the frequencies of the sample
x̄ = grouped mean for the sample

Coefficient of Skewness
$S_k = \dfrac{3(\mu - M_d)}{\sigma}$
where
S_k = coefficient of skewness
M_d = median
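As a quick check on the definitional formulas above, here is a minimal Python sketch that computes the sample mean, sample variance and standard deviation, mean absolute deviation, coefficient of variation, and z scores for a small made-up data set (the values are illustrative, not from the text):

```python
import math

data = [4, 7, 8, 5, 6, 10, 9, 7]            # illustrative sample values
n = len(data)

mean = sum(data) / n                         # x-bar = (sum of x) / n
sample_var = sum((x - mean) ** 2 for x in data) / (n - 1)   # s^2
sample_sd = math.sqrt(sample_var)            # s
mad = sum(abs(x - mean) for x in data) / n   # mean absolute deviation about the mean
cv = (sample_sd / mean) * 100                # coefficient of variation, in percent
z_scores = [(x - mean) / sample_sd for x in data]   # z score for each value

print(mean, sample_var, sample_sd, mad, cv)
print(z_scores)
```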
Chapter 4 Probability
Classical Method of Assigning Probabilities
$P(E) = \dfrac{n_e}{N}$
where
N = total possible number of outcomes of an experiment
n_e = the number of outcomes in which the event occurs out of N outcomes

Range of Possible Probabilities
$0 \le P(E) \le 1$

Probability by Relative Frequency of Occurrence
$P(E) = \dfrac{\text{number of times the event occurred}}{\text{total number of opportunities for the event to occur}}$

Probability of the Complement of A
$P(A') = 1 - P(A)$

The mn Counting Rule
For an operation that can be done m ways and a second operation that can be done n ways, the two operations can then occur, in order, in mn ways. This rule can be extended to cases with three or more operations.

Mutually Exclusive Events X and Y
$P(X \cap Y) = 0$

Independent Events X and Y
$P(X \mid Y) = P(X) \quad \text{and} \quad P(Y \mid X) = P(Y)$

General Law of Addition
$P(X \cup Y) = P(X) + P(Y) - P(X \cap Y)$
where X, Y are events and $X \cap Y$ is the intersection of X and Y.

Special Law of Addition
If X, Y are mutually exclusive,
$P(X \cup Y) = P(X) + P(Y)$

General Law of Multiplication
$P(X \cap Y) = P(X)\,P(Y \mid X) = P(Y)\,P(X \mid Y)$

Special Law of Multiplication
If X, Y are independent,
$P(X \cap Y) = P(X)\,P(Y)$

Law of Conditional Probability
$P(X \mid Y) = \dfrac{P(X \cap Y)}{P(Y)} = \dfrac{P(X)\,P(Y \mid X)}{P(Y)}$
If X and Y are independent events, the following must be true:
$P(X \mid Y) = P(X) \quad \text{and} \quad P(Y \mid X) = P(Y)$

Bayes' Rule
$P(X_i \mid Y) = \dfrac{P(X_i)\,P(Y \mid X_i)}{P(X_1)\,P(Y \mid X_1) + P(X_2)\,P(Y \mid X_2) + \cdots + P(X_n)\,P(Y \mid X_n)}$
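A minimal sketch of Bayes' Rule, using made-up numbers for two mutually exclusive sources (X1, X2) of an observed event Y:

```python
# Bayes' Rule with illustrative numbers: two suppliers (X1, X2) and a defect event Y.
priors = {"X1": 0.65, "X2": 0.35}            # P(Xi): prior probabilities
likelihoods = {"X1": 0.08, "X2": 0.12}       # P(Y | Xi): defect rate for each supplier

# Denominator: total probability of Y across all Xi.
p_y = sum(priors[k] * likelihoods[k] for k in priors)

# Posterior P(Xi | Y) for each supplier.
posteriors = {k: priors[k] * likelihoods[k] / p_y for k in priors}
print(posteriors)    # probability the defect came from each supplier
```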
Chapter 5 Discrete Distributions
Mean or Expected Value of a Discrete Distribution
$\mu = E(x) = \sum [x \cdot P(x)]$
where
E(x) = long-run average
x = an outcome
P(x) = probability of that outcome

Variance of a Discrete Distribution
$\sigma^2 = \sum [(x - \mu)^2 \cdot P(x)]$
where
x = an outcome
P(x) = probability of a given outcome
μ = mean

Standard Deviation of a Discrete Distribution
$\sigma = \sqrt{\sum [(x - \mu)^2 \cdot P(x)]}$

Assumptions of the Binomial Distribution
- The experiment involves n identical trials.
- Each trial has only two possible outcomes, denoted as success or failure.
- Each trial is independent of the previous trials.
- The terms p and q remain constant throughout the experiment, where p is the probability of getting a success on any one trial and q = 1 − p is the probability of getting a failure on any one trial.

Binomial Formula
$P(x) = \binom{n}{x} p^x q^{\,n-x} = \dfrac{n!}{x!\,(n-x)!}\,p^x q^{\,n-x}$
where
n = the number of trials (or the number being sampled)
x = the number of successes desired
p = the probability of getting a success in one trial
q = 1 − p = the probability of getting a failure in one trial

Mean and Standard Deviation of a Binomial Distribution
$\mu = n\,p, \qquad \sigma = \sqrt{n\,p\,q}$

Poisson Formula
$P(x) = \dfrac{\lambda^x e^{-\lambda}}{x!}$
where
x = 0, 1, 2, 3, …
λ = long-run average
e = 2.718281…

Hypergeometric Formula
$P(x) = \dfrac{\dbinom{A}{x}\dbinom{N-A}{n-x}}{\dbinom{N}{n}}$
where
N = size of the population
n = sample size
A = number of successes in the population
x = number of successes in the sample; sampling is done without replacement
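The three probability functions above can be evaluated directly from their formulas. A small Python sketch with illustrative parameter values (not from the text):

```python
import math

def binomial_pmf(x, n, p):
    """P(x) = nCx * p^x * q^(n - x)"""
    q = 1 - p
    return math.comb(n, x) * p**x * q**(n - x)

def poisson_pmf(x, lam):
    """P(x) = lambda^x * e^(-lambda) / x!"""
    return lam**x * math.exp(-lam) / math.factorial(x)

def hypergeometric_pmf(x, N, A, n):
    """P(x) = [A C x * (N - A) C (n - x)] / (N C n), sampling without replacement."""
    return math.comb(A, x) * math.comb(N - A, n - x) / math.comb(N, n)

# Illustrative values:
print(binomial_pmf(x=3, n=10, p=0.25))
print(poisson_pmf(x=2, lam=3.2))
print(hypergeometric_pmf(x=1, N=20, A=6, n=5))
```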
Chapter 6 Continuous Distributions
Probability Density Function of a Uniform Distribution
$f(x) = \dfrac{1}{b - a} \ \text{for } a \le x \le b, \qquad f(x) = 0 \ \text{for all other values}$

Mean and Standard Deviation of a Uniform Distribution
$\mu = \dfrac{a + b}{2}, \qquad \sigma = \dfrac{b - a}{\sqrt{12}}$

Probabilities in a Uniform Distribution
$P(x_1 \le x \le x_2) = \dfrac{x_2 - x_1}{b - a}$
where
$a \le x_1 \le x_2 \le b$

Density Function of the Normal Distribution
$f(x) = \dfrac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$
where
μ = mean of x
σ = standard deviation of x
π = 3.14159…
e = 2.71828…

z Formula
$z = \dfrac{x - \mu}{\sigma}$

Exponential Probability Density Function
$f(x) = \lambda e^{-\lambda x}$
where
x ≥ 0, λ > 0, and e = 2.71828…

Probabilities of the Right Tail of the Exponential Distribution
$P(x \ge x_0) = e^{-\lambda x_0}$
where
x_0 ≥ 0
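The uniform probability, normal density, and exponential right-tail expressions translate directly into code. A minimal sketch with made-up inputs:

```python
import math

def uniform_prob(x1, x2, a, b):
    """P(x1 <= x <= x2) = (x2 - x1) / (b - a) for a <= x1 <= x2 <= b."""
    return (x2 - x1) / (b - a)

def normal_density(x, mu, sigma):
    """f(x) = (1 / (sigma * sqrt(2*pi))) * exp(-0.5 * ((x - mu) / sigma)**2)"""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def exponential_right_tail(x0, lam):
    """P(x >= x0) = e^(-lambda * x0)"""
    return math.exp(-lam * x0)

# Illustrative values:
print(uniform_prob(2, 5, a=0, b=10))          # 0.3
print(normal_density(x=72, mu=70, sigma=4))
print(exponential_right_tail(x0=2, lam=1.5))
```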
Chapter 7 Sampling and Sampling Distributions

Determining the Value of k
$k = \dfrac{N}{n}$
where
n = sample size
N = population size
k = size of interval for selection

Central Limit Theorem
If samples of size n are drawn randomly from a population that has a mean of μ and a standard deviation of σ, the sample means, x̄, are approximately normally distributed for sufficiently large samples (n ≥ 30*) regardless of the shape of the population distribution. If the population is normally distributed, the sample means are normally distributed for any sample size.
From mathematical expectation, it can be shown that the mean of the sample means is the population mean:
$\mu_{\bar{x}} = \mu$
and the standard deviation of the sample means (called the standard error of the mean) is the standard deviation of the population divided by the square root of the sample size:
$\sigma_{\bar{x}} = \dfrac{\sigma}{\sqrt{n}}$

z Formula for Sample Means
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}$

z Formula for Sample Means of a Finite Population
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}\sqrt{\dfrac{N - n}{N - 1}}}$

Sample Proportion
$\hat{p} = \dfrac{x}{n}$
where
x = number of items in a sample that have the characteristic
n = number of items in the sample

z Formula for Sample Proportions for n·p > 5 and n·q > 5
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p\,q}{n}}}$
where
p̂ = sample proportion
n = sample size
p = population proportion
q = 1 − p
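A simulation sketch of the Central Limit Theorem and the standard error of the mean, using a deliberately non-normal (uniform) population with made-up bounds; the distribution of the sample means should center on μ with spread close to σ/√n:

```python
import math
import random

random.seed(1)

# A non-normal (uniform) population with known mu and sigma.
a, b = 0, 100
mu = (a + b) / 2                       # uniform mean
sigma = (b - a) / math.sqrt(12)        # uniform standard deviation

n = 36                                 # sample size (>= 30)
standard_error = sigma / math.sqrt(n)  # sigma_xbar = sigma / sqrt(n)

# Draw many samples and examine the distribution of the sample means.
sample_means = [sum(random.uniform(a, b) for _ in range(n)) / n for _ in range(5000)]
mean_of_means = sum(sample_means) / len(sample_means)

# z formula for a particular sample mean, say x-bar = 55:
z = (55 - mu) / standard_error

print(mu, standard_error, mean_of_means, z)
```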
Chapter 8 Statistical Inference: Estimation for Single Populations

100(1 − α)% Confidence Interval to Estimate μ: σ Known
$\bar{x} \pm z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}}$
where
α = the area under the normal curve outside the confidence interval area
α/2 = the area in one end (tail) of the distribution outside the confidence interval

Confidence Interval to Estimate μ Using the Finite Correction Factor
$\bar{x} \pm z_{\alpha/2}\,\dfrac{\sigma}{\sqrt{n}}\sqrt{\dfrac{N - n}{N - 1}}$

Confidence Interval to Estimate μ: Population Standard Deviation Unknown and the Population Normally Distributed
$\bar{x} \pm t_{\alpha/2,\,n-1}\,\dfrac{s}{\sqrt{n}}, \qquad df = n - 1$

Confidence Interval to Estimate p
$\hat{p} \pm z_{\alpha/2}\sqrt{\dfrac{\hat{p}\,\hat{q}}{n}}$
where
p̂ = sample proportion
q̂ = 1 − p̂
p = population proportion
n = sample size

Formula for Single Variance
$\chi^2 = \dfrac{(n - 1)\,s^2}{\sigma^2}, \qquad df = n - 1$

Confidence Interval to Estimate the Population Variance (8.6)
$\dfrac{(n - 1)\,s^2}{\chi^2_{\alpha/2}} \le \sigma^2 \le \dfrac{(n - 1)\,s^2}{\chi^2_{1-\alpha/2}}, \qquad df = n - 1$

Sample Size When Estimating μ
$n = \dfrac{z_{\alpha/2}^2\,\sigma^2}{E^2}$

Sample Size When Estimating p
$n = \dfrac{z_{\alpha/2}^2\,p\,q}{E^2}$
where
p = population proportion
q = 1 − p
E = error of estimation
n = sample size
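A short sketch of the z confidence interval for μ (σ known) and the two sample-size formulas, with made-up inputs and a 95% confidence level (z_{α/2} ≈ 1.96):

```python
import math

# Illustrative inputs:
xbar, sigma, n = 24.1, 3.2, 49        # sample mean, known population sigma, sample size
z_crit = 1.96                         # z_{alpha/2} for 95% confidence

margin = z_crit * sigma / math.sqrt(n)
print("95% CI for mu:", (xbar - margin, xbar + margin))

# Sample size when estimating mu to within E = 0.5 with 95% confidence:
E = 0.5
n_mu = (z_crit ** 2) * (sigma ** 2) / (E ** 2)
print("n for mu:", math.ceil(n_mu))

# Sample size when estimating p to within E = 0.03, using p = 0.5 as a conservative guess:
p, q = 0.5, 0.5
n_p = (z_crit ** 2) * p * q / (0.03 ** 2)
print("n for p:", math.ceil(n_p))
```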
Chapter 9 Statistical Inference: Hypothesis Testing for Single Populations

z Test for a Single Mean (9.1)
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}}$

Formula to Test Hypotheses about μ with a Finite Population (9.2)
$z = \dfrac{\bar{x} - \mu}{\dfrac{\sigma}{\sqrt{n}}\sqrt{\dfrac{N - n}{N - 1}}}$

t Test for μ (9.3)
$t = \dfrac{\bar{x} - \mu}{\dfrac{s}{\sqrt{n}}}, \qquad df = n - 1$

z Test of a Population Proportion (9.4)
$z = \dfrac{\hat{p} - p}{\sqrt{\dfrac{p\,q}{n}}}$
where
p̂ = sample proportion
p = population proportion
q = 1 − p

Formula for Testing Hypotheses about a Population Variance (9.5)
$\chi^2 = \dfrac{(n - 1)\,s^2}{\sigma^2}, \qquad df = n - 1$
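The test statistics above are one-line computations. A minimal sketch with made-up sample results (the statistics would then be compared with critical values from the z, χ², or t tables):

```python
import math

# z test for a single mean (sigma known):
xbar, mu0, sigma, n = 4.30, 4.50, 0.574, 32
z_mean = (xbar - mu0) / (sigma / math.sqrt(n))

# z test of a population proportion:
x, n2, p0 = 53, 150, 0.40              # successes, sample size, hypothesized proportion
p_hat = x / n2
z_prop = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n2)

# chi-square statistic for a single variance, df = n - 1:
s2, sigma0_sq, n3 = 1.12, 1.00, 25
chi_sq = (n3 - 1) * s2 / sigma0_sq

print(z_mean, z_prop, chi_sq)
```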
Chapter 10 Statistical Inference: About Two Populations

z Formula for the Difference in Two Sample Means (Independent Samples and Population Variances Known) (10.1)
$z = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$
where
μ₁ = mean of population 1
μ₂ = mean of population 2
n₁ = size of sample 1
n₂ = size of sample 2

Confidence Interval to Estimate μ₁ − μ₂ (10.2)
$(\bar{x}_1 - \bar{x}_2) \pm z_{\alpha/2}\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}$

t Formula to Test the Difference in Means Assuming σ₁² and σ₂² Are Equal (10.3)
$t = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{s_1^2(n_1 - 1) + s_2^2(n_2 - 1)}{n_1 + n_2 - 2}}\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}, \qquad df = n_1 + n_2 - 2$

t Formula to Test the Difference in Means (10.4)
$t = \dfrac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}, \qquad df = \dfrac{\left(\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}\right)^2}{\dfrac{\left(\dfrac{s_1^2}{n_1}\right)^2}{n_1 - 1} + \dfrac{\left(\dfrac{s_2^2}{n_2}\right)^2}{n_2 - 1}}$

Confidence Interval to Estimate μ₁ − μ₂ Assuming the Population Variances Are Unknown and Equal (10.5)
$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2}\sqrt{\dfrac{s_1^2(n_1 - 1) + s_2^2(n_2 - 1)}{n_1 + n_2 - 2}}\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}, \qquad df = n_1 + n_2 - 2$

t Formula to Test the Difference in Two Dependent Populations (10.6)
$t = \dfrac{\bar{d} - D}{\dfrac{s_d}{\sqrt{n}}}, \qquad df = n - 1$
where
n = number of pairs
d = sample difference in pairs
D = mean population difference
s_d = standard deviation of sample differences
d̄ = mean sample difference

Formulas for d̄ and s_d (10.7 and 10.8)
$\bar{d} = \dfrac{\sum d}{n}, \qquad s_d = \sqrt{\dfrac{\sum (d - \bar{d})^2}{n - 1}} = \sqrt{\dfrac{\sum d^2 - \dfrac{(\sum d)^2}{n}}{n - 1}}$

Confidence Interval Formula to Estimate the Difference in Related Populations, D (10.9)
$\bar{d} \pm t_{\alpha/2}\,\dfrac{s_d}{\sqrt{n}}, \qquad df = n - 1$

z Formula for the Difference in Two Population Proportions (10.10)
$z = \dfrac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\dfrac{p_1 q_1}{n_1} + \dfrac{p_2 q_2}{n_2}}}$
where
p̂₁ = proportion from sample 1
p̂₂ = proportion from sample 2
n₁ = size of sample 1
n₂ = size of sample 2
p₁ = proportion from population 1
p₂ = proportion from population 2
q₁ = 1 − p₁
q₂ = 1 − p₂

z Formula to Test the Difference in Population Proportions (10.11)
$z = \dfrac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\bar{p}\,\bar{q}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$
where
$\bar{p} = \dfrac{x_1 + x_2}{n_1 + n_2} = \dfrac{n_1\hat{p}_1 + n_2\hat{p}_2}{n_1 + n_2}$ and $\bar{q} = 1 - \bar{p}$

Confidence Interval to Estimate p₁ − p₂ (10.12)
$(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2}\sqrt{\dfrac{\hat{p}_1\hat{q}_1}{n_1} + \dfrac{\hat{p}_2\hat{q}_2}{n_2}}$

F Test for Two Population Variances (10.13)
$F = \dfrac{s_1^2}{s_2^2}, \qquad df_{\text{numerator}} = \nu_1 = n_1 - 1, \qquad df_{\text{denominator}} = \nu_2 = n_2 - 1$

Formula for Determining the Critical Value for the Lower-Tail F (10.14)
$F_{1-\alpha,\,\nu_2,\,\nu_1} = \dfrac{1}{F_{\alpha,\,\nu_1,\,\nu_2}}$
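A minimal sketch of the pooled-variance t statistic (formula 10.3), assuming a hypothesized difference of zero and using two small, made-up samples:

```python
import math

def pooled_t(x1, x2):
    """t statistic for the difference in two means assuming equal population
    variances (pooled formula), with hypothesized mu1 - mu2 = 0; returns (t, df)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    s1_sq = sum((v - m1) ** 2 for v in x1) / (n1 - 1)
    s2_sq = sum((v - m2) ** 2 for v in x2) / (n2 - 1)
    pooled = (s1_sq * (n1 - 1) + s2_sq * (n2 - 1)) / (n1 + n2 - 2)
    t = (m1 - m2) / (math.sqrt(pooled) * math.sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Illustrative samples:
group1 = [22, 25, 27, 24, 26, 23]
group2 = [20, 21, 24, 22, 19, 23]
print(pooled_t(group1, group2))
```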
Chapter 11 Analysis of Variance and Design of Experiments

Formulas for Computing a One-Way ANOVA
$SSC = \sum_{j=1}^{C} n_j (\bar{x}_j - \bar{x})^2, \qquad SSE = \sum_{i}\sum_{j} (x_{ij} - \bar{x}_j)^2, \qquad SST = \sum_{i}\sum_{j} (x_{ij} - \bar{x})^2$
$df_C = C - 1, \quad df_E = N - C, \quad df_T = N - 1$
$MSC = \dfrac{SSC}{df_C}, \quad MSE = \dfrac{SSE}{df_E}, \quad F = \dfrac{MSC}{MSE}$

Tukey's HSD Test
$HSD = q_{\alpha,\,C,\,N-C}\sqrt{\dfrac{MSE}{n}}$
where
MSE = mean square error
n = sample size
q_{α,C,N−C} = critical value of the studentized range distribution from Table A.10

Tukey-Kramer Formula
$q_{\alpha,\,C,\,N-C}\sqrt{\dfrac{MSE}{2}\left(\dfrac{1}{n_r} + \dfrac{1}{n_s}\right)}$
where
MSE = mean square error
n_r = sample size for rth sample
n_s = sample size for sth sample
q_{α,C,N−C} = critical value of the studentized range distribution from Table A.10

Formulas for Computing a Randomized Block Design
$SSC = n\sum_{j=1}^{C} (\bar{x}_j - \bar{x})^2, \qquad SSR = C\sum_{i=1}^{n} (\bar{x}_i - \bar{x})^2$
$SSE = \sum_{i}\sum_{j} (x_{ij} - \bar{x}_j - \bar{x}_i + \bar{x})^2, \qquad SST = \sum_{i}\sum_{j} (x_{ij} - \bar{x})^2$
$df_C = C - 1, \quad df_R = n - 1, \quad df_E = (C - 1)(n - 1), \quad df_T = N - 1$
$F_{\text{treatments}} = \dfrac{MSC}{MSE}, \qquad F_{\text{blocks}} = \dfrac{MSR}{MSE}$

Formulas for Computing a Two-Way ANOVA
$SSR = nC\sum_{i=1}^{R} (\bar{x}_i - \bar{x})^2, \qquad SSC = nR\sum_{j=1}^{C} (\bar{x}_j - \bar{x})^2$
$SSI = n\sum_{i}\sum_{j} (\bar{x}_{ij} - \bar{x}_i - \bar{x}_j + \bar{x})^2, \qquad SSE = \sum_{i}\sum_{j}\sum_{k} (x_{ijk} - \bar{x}_{ij})^2, \qquad SST = \sum_{i}\sum_{j}\sum_{k} (x_{ijk} - \bar{x})^2$
$df_R = R - 1, \quad df_C = C - 1, \quad df_I = (R - 1)(C - 1), \quad df_E = RC(n - 1), \quad df_T = N - 1$
$F_R = \dfrac{MSR}{MSE}, \qquad F_C = \dfrac{MSC}{MSE}, \qquad F_I = \dfrac{MSI}{MSE}$
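A sketch of the one-way ANOVA sums of squares and F ratio, computed from scratch for three small, made-up treatment groups:

```python
def one_way_anova(groups):
    """Compute SSC, SSE, MSC, MSE and F for a one-way ANOVA.
    `groups` is a list of samples (lists of observations)."""
    N = sum(len(g) for g in groups)
    C = len(groups)
    grand_mean = sum(sum(g) for g in groups) / N

    # between-treatments and error sums of squares
    ssc = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    sse = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    msc = ssc / (C - 1)
    mse = sse / (N - C)
    return {"SSC": ssc, "SSE": sse, "MSC": msc, "MSE": mse,
            "F": msc / mse, "df": (C - 1, N - C)}

# Illustrative data for three treatment groups:
print(one_way_anova([[5, 7, 6, 8], [9, 8, 10, 11], [4, 5, 6, 5]]))
```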
Chapter 12 Correlation and Simple Regression Analysis

Pearson Product-Moment Correlation Coefficient (12.1)
$r = \dfrac{\sum (x - \bar{x})(y - \bar{y})}{\sqrt{\sum (x - \bar{x})^2 \sum (y - \bar{y})^2}} = \dfrac{\sum xy - \dfrac{(\sum x)(\sum y)}{n}}{\sqrt{\left[\sum x^2 - \dfrac{(\sum x)^2}{n}\right]\left[\sum y^2 - \dfrac{(\sum y)^2}{n}\right]}}$

Equation of the Simple Regression Line
$\hat{y} = b_0 + b_1 x$

Slope of the Regression Line (12.2)
$b_1 = \dfrac{\sum (x - \bar{x})(y - \bar{y})}{\sum (x - \bar{x})^2} = \dfrac{SS_{xy}}{SS_{xx}} = \dfrac{\sum xy - \dfrac{(\sum x)(\sum y)}{n}}{\sum x^2 - \dfrac{(\sum x)^2}{n}}$

Alternative Formula for Slope (12.3)
$b_1 = \dfrac{\sum xy - n\,\bar{x}\,\bar{y}}{\sum x^2 - n\,\bar{x}^2}$

y Intercept of the Regression Line (12.4)
$b_0 = \bar{y} - b_1\bar{x} = \dfrac{\sum y}{n} - b_1\dfrac{\sum x}{n}$

Sum of Squares of Error
$SSE = \sum (y - \hat{y})^2$

Computational Formula for SSE
$SSE = \sum y^2 - b_0\sum y - b_1\sum xy$

Standard Error of the Estimate
$s_e = \sqrt{\dfrac{SSE}{n - 2}}$

Coefficient of Determination (12.5)
$r^2 = 1 - \dfrac{SSE}{SS_{yy}} = 1 - \dfrac{SSE}{\sum y^2 - \dfrac{(\sum y)^2}{n}}$

Computational Formula for r²
$r^2 = \dfrac{b_1^2\,SS_{xx}}{SS_{yy}}$

t Test of Slope
$t = \dfrac{b_1 - \beta_1}{s_b}, \qquad s_b = \dfrac{s_e}{\sqrt{SS_{xx}}} = \dfrac{s_e}{\sqrt{\sum x^2 - \dfrac{(\sum x)^2}{n}}}$
where
β₁ = the hypothesized slope
df = n − 2

Confidence Interval to Estimate E(y_x) for a Given Value of x (12.6)
$\hat{y} \pm t_{\alpha/2,\,n-2}\,s_e\sqrt{\dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{SS_{xx}}}$

Prediction Interval to Estimate y for a Given Value of x (12.7)
$\hat{y} \pm t_{\alpha/2,\,n-2}\,s_e\sqrt{1 + \dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{SS_{xx}}}$
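A sketch that works through the computational versions of these simple-regression quantities (slope, intercept, SSE, standard error, r², and r) for a small made-up data set:

```python
import math

# Illustrative paired data:
x = [12, 21, 28, 8, 20]
y = [17, 15, 22, 19, 24]
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(a * b for a, b in zip(x, y))
sum_x2 = sum(a * a for a in x)
sum_y2 = sum(b * b for b in y)

ss_xy = sum_xy - sum_x * sum_y / n
ss_xx = sum_x2 - sum_x ** 2 / n
ss_yy = sum_y2 - sum_y ** 2 / n

b1 = ss_xy / ss_xx                        # slope (12.2)
b0 = sum_y / n - b1 * sum_x / n           # y intercept (12.4)
sse = sum_y2 - b0 * sum_y - b1 * sum_xy   # computational SSE
se = math.sqrt(sse / (n - 2))             # standard error of the estimate
r2 = 1 - sse / ss_yy                      # coefficient of determination (12.5)
r = ss_xy / math.sqrt(ss_xx * ss_yy)      # Pearson r (12.1)

print(b0, b1, sse, se, r2, r)
```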
Chapter 13 Multiple Regression Analysis
The F Value
$F = \dfrac{MS_{\text{reg}}}{MS_{\text{err}}} = \dfrac{SS_{\text{reg}}/k}{SS_{\text{err}}/(N - k - 1)}$

Sum of Squares of Error
$SSE = \sum (y - \hat{y})^2$

Standard Error of the Estimate
$s_e = \sqrt{\dfrac{SSE}{n - k - 1}}$

Coefficient of Multiple Determination
$R^2 = \dfrac{SSR}{SS_{yy}} = 1 - \dfrac{SSE}{SS_{yy}}$

Adjusted R²
$R_{\text{adj}}^2 = 1 - \dfrac{SSE/(n - k - 1)}{SS_{yy}/(n - 1)}$
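Given SSE and SS_yy from a fitted multiple regression model, the remaining summary statistics follow directly. A minimal sketch with made-up values for SSE, SS_yy, n, and k:

```python
import math

def multiple_regression_summary(sse, ss_yy, n, k):
    """R^2, adjusted R^2, standard error of the estimate, and F,
    computed from SSE, SS_yy, the sample size n, and k predictors."""
    ssr = ss_yy - sse                                # regression sum of squares
    r2 = ssr / ss_yy
    adj_r2 = 1 - (sse / (n - k - 1)) / (ss_yy / (n - 1))
    se = math.sqrt(sse / (n - k - 1))
    f = (ssr / k) / (sse / (n - k - 1))
    return {"R2": r2, "adj_R2": adj_r2, "se": se, "F": f}

# Illustrative values for a model with n = 25 observations and k = 3 predictors:
print(multiple_regression_summary(sse=180.0, ss_yy=500.0, n=25, k=3))
```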
Chapter 14 Building Multiple Regression Models

General Linear Regression Model (14.1)
$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + \epsilon$
where
β₀ = the regression constant
β₁, β₂, …, β_k are the partial regression coefficients
x₁, x₂, …, x_k are the independent variables
k = the number of independent variables

Variance Inflation Factor (14.2)
$VIF = \dfrac{1}{1 - R_i^2}$
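The variance inflation factor is computed from R_i², the coefficient of determination obtained by regressing predictor x_i on the other independent variables. A one-function sketch with an illustrative R_i²:

```python
def vif(r_i_squared):
    """Variance inflation factor for predictor i, given the R^2 from
    regressing x_i on the other independent variables."""
    return 1.0 / (1.0 - r_i_squared)

# Illustrative: if the other predictors explain x_i with R^2 = 0.85,
# the VIF is about 6.67.
print(vif(0.85))
```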
Chapter 15 Time-Series Forecasting and Index Numbers

Individual Forecast Error
$e_t = x_t - F_t$

Mean Absolute Deviation
$MAD = \dfrac{\sum |e_i|}{\text{number of forecasts}}$

Mean Square Error
$MSE = \dfrac{\sum e_i^2}{\text{number of forecasts}}$

Exponential Smoothing
$F_{t+1} = \alpha x_t + (1 - \alpha) F_t$

Durbin-Watson Test
$D = \dfrac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2}$

Laspeyres Price Index
$I_L = \dfrac{\sum P_t Q_0}{\sum P_0 Q_0}(100)$

Paasche Price Index
$I_P = \dfrac{\sum P_t Q_t}{\sum P_0 Q_t}(100)$
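A short sketch of exponential smoothing with the one-step-ahead forecast errors scored by MAD and MSE; the series and α are made up, and the first forecast is seeded with the first actual value (a common convention, not prescribed by the formulas above):

```python
# F_{t+1} = alpha * x_t + (1 - alpha) * F_t
actual = [102, 98, 107, 110, 104, 112]     # illustrative series
alpha = 0.3

forecasts = [actual[0]]                     # seed the first forecast
for x_t in actual[:-1]:
    forecasts.append(alpha * x_t + (1 - alpha) * forecasts[-1])

errors = [x - f for x, f in zip(actual, forecasts)]   # e_t = x_t - F_t
mad = sum(abs(e) for e in errors) / len(errors)
mse = sum(e * e for e in errors) / len(errors)

print(forecasts, mad, mse)
```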
Chapter 16 Analysis of Categorical Data
Chi-Square Goodness-of-Fit Test (16.1)
$\chi^2 = \sum \dfrac{(f_o - f_e)^2}{f_e}, \qquad df = k - 1 - c$
where
f_o = frequency of observed values
f_e = frequency of expected values
k = number of categories
c = number of parameters being estimated from the sample data

Chi-Square Test of Independence (16.2)
$\chi^2 = \sum\sum \dfrac{(f_o - f_e)^2}{f_e}, \qquad df = (r - 1)(c - 1)$
where
r = number of rows
c = number of columns
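A minimal sketch of the goodness-of-fit statistic, testing made-up observed counts against a uniform (equal-frequencies) expected distribution with no estimated parameters:

```python
# Chi-square goodness-of-fit with illustrative observed counts.
observed = [53, 37, 32, 28, 30]
total = sum(observed)
expected = [total / len(observed)] * len(observed)   # equal expected frequencies

chi_sq = sum((fo - fe) ** 2 / fe for fo, fe in zip(observed, expected))
df = len(observed) - 1            # k - 1 - c, with c = 0 parameters estimated
print(chi_sq, df)                 # compare with the chi-square critical value
```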
Chapter 17 Nonparametric Statistics
Large-Sample Runs Test
$\mu_R = \dfrac{2 n_1 n_2}{n_1 + n_2} + 1, \qquad \sigma_R = \sqrt{\dfrac{2 n_1 n_2 (2 n_1 n_2 - n_1 - n_2)}{(n_1 + n_2)^2 (n_1 + n_2 - 1)}}, \qquad z = \dfrac{R - \mu_R}{\sigma_R}$

Mann-Whitney U Test (Small Sample)
$U_1 = n_1 n_2 + \dfrac{n_1(n_1 + 1)}{2} - W_1, \qquad U_2 = n_1 n_2 - U_1$

Mann-Whitney U Test (Large Sample) (17.1)
$\mu_U = \dfrac{n_1 n_2}{2}, \qquad \sigma_U = \sqrt{\dfrac{n_1 n_2 (n_1 + n_2 + 1)}{12}}, \qquad z = \dfrac{U - \mu_U}{\sigma_U}$

Wilcoxon Matched-Pairs Signed Rank Test (17.2)
$\mu_T = \dfrac{n(n + 1)}{4}, \qquad \sigma_T = \sqrt{\dfrac{n(n + 1)(2n + 1)}{24}}, \qquad z = \dfrac{T - \mu_T}{\sigma_T}$

Kruskal-Wallis Test (17.3)
$K = \dfrac{12}{n(n + 1)}\left(\sum_{j=1}^{C} \dfrac{T_j^2}{n_j}\right) - 3(n + 1)$
(T_j = total of the ranks in group j, n_j = size of group j, n = total number of observations)

Friedman Test (17.4)
$\chi_r^2 = \dfrac{12}{b\,C(C + 1)}\sum_{j=1}^{C} R_j^2 - 3b(C + 1)$
(b = number of blocks, C = number of treatment levels, R_j = total of the ranks for treatment j)

Spearman's Rank Correlation (17.5)
$r_s = 1 - \dfrac{6\sum d^2}{n(n^2 - 1)}$
(d = difference in the paired ranks, n = number of pairs)
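A sketch of Spearman's rank correlation (17.5) for a small made-up data set; the simple rank helper below assumes there are no tied values:

```python
def spearman_rs(x, y):
    """r_s = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), assuming no tied ranks."""
    n = len(x)
    rank = lambda v, data: sorted(data).index(v) + 1   # 1 = smallest value
    d_sq = sum((rank(a, x) - rank(b, y)) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# Illustrative paired observations with no ties:
x = [23, 41, 37, 29, 25, 17]
y = [201, 259, 234, 240, 216, 195]
print(spearman_rs(x, y))
```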
Chapter 18 Statistical Quality Control
x̄ charts
Centreline: $\bar{\bar{x}}$
UCL: $\bar{\bar{x}} + A_2\bar{R}$
LCL: $\bar{\bar{x}} - A_2\bar{R}$
OR
UCL: $\bar{\bar{x}} + A_3\bar{s}$
LCL: $\bar{\bar{x}} - A_3\bar{s}$
R charts
Centreline: $\bar{R}$
UCL: $D_4\bar{R}$
LCL: $D_3\bar{R}$
p charts
Centreline: $\bar{p}$
UCL: $\bar{p} + 3\sqrt{\dfrac{\bar{p}\,\bar{q}}{n}}$
LCL: $\bar{p} - 3\sqrt{\dfrac{\bar{p}\,\bar{q}}{n}}$
c charts
Centreline: $\bar{c}$
UCL: $\bar{c} + 3\sqrt{\bar{c}}$
LCL: $\bar{c} - 3\sqrt{\bar{c}}$
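A sketch of x̄, R, and p chart limits using made-up centreline values. The constants A₂ = 0.577, D₃ = 0, and D₄ = 2.114 are the standard control-chart factors for subgroups of size n = 5 (taken from the usual factor tables, not from this card):

```python
import math

# x-bar and R chart limits for subgroups of size n = 5:
xbar_bar = 4.52                  # mean of the sample means (x-bar chart centreline)
r_bar = 0.28                     # mean of the sample ranges (R chart centreline)
A2, D3, D4 = 0.577, 0.0, 2.114   # control-chart constants for n = 5

xbar_ucl = xbar_bar + A2 * r_bar
xbar_lcl = xbar_bar - A2 * r_bar
r_ucl = D4 * r_bar
r_lcl = D3 * r_bar

# p chart limits, with p-bar the average proportion nonconforming and n the sample size;
# a negative lower limit is clamped at zero, a common convention.
p_bar, n = 0.04, 100
half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
p_ucl, p_lcl = p_bar + half_width, max(0.0, p_bar - half_width)

print(xbar_lcl, xbar_ucl, r_lcl, r_ucl, p_lcl, p_ucl)
```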
Chapter 19 Decision Analysis
Bayes' Rule
$P(X_i \mid Y) = \dfrac{P(X_i)\,P(Y \mid X_i)}{P(X_1)\,P(Y \mid X_1) + P(X_2)\,P(Y \mid X_2) + \cdots + P(X_n)\,P(Y \mid X_n)}$
Expected Value of Sample Information
Expected Value of Sample Information = Expected Monetary Value with Information − Expected Monetary Value without Information
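A sketch of the EVSI calculation with an entirely made-up decision problem: two states of nature, two alternatives with the payoff table below, and a survey whose outcome probabilities revise the priors through Bayes' Rule:

```python
# Expected value of sample information (EVSI) with illustrative numbers.
priors = {"S1": 0.6, "S2": 0.4}                   # P(state of nature)
payoffs = {"A": {"S1": 200, "S2": -50},           # payoff of each alternative per state
           "B": {"S1": 90,  "S2": 80}}
likelihoods = {"F1": {"S1": 0.8, "S2": 0.3},      # P(survey outcome | state)
               "F2": {"S1": 0.2, "S2": 0.7}}

def best_emv(probs):
    """Expected monetary value of the best alternative under the given state probabilities."""
    return max(sum(probs[s] * payoffs[a][s] for s in probs) for a in payoffs)

emv_without = best_emv(priors)

# Revise the priors with Bayes' Rule for each survey outcome, then weight the
# best EMV under each revised distribution by the probability of that outcome.
emv_with = 0.0
for outcome, lik in likelihoods.items():
    p_outcome = sum(priors[s] * lik[s] for s in priors)
    posterior = {s: priors[s] * lik[s] / p_outcome for s in priors}
    emv_with += p_outcome * best_emv(posterior)

evsi = emv_with - emv_without
print(emv_without, emv_with, evsi)   # approximately 100, 123.2, 23.2
```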