CHAPTER 4  SAMPLES AND SAMPLING DISTRIBUTIONS

1. Why Do We Use Samples?
2. Probability Sampling
   2.1. Simple Random Samples
3. Sampling Distributions
   3.1. The Sampling Distribution of the Sample Mean x̄
      3.1.1. The Expected Value of x̄
         3.1.1.1. The Relationship Between the Mean of the Parent Population and the Mean of All x̄ Values
      3.1.2. The Variance and the Standard Error of x̄
         3.1.2.1. The Relationship Between the Variance of the Parent Population and the Variance of x̄
      3.1.3. The Number of Possible Samples and x̄ Values
      3.1.4. The Shape of the Sampling Distribution of x̄: The Relationship Between the Parent Population Distribution and the Sampling Distribution
      3.1.5. Examples Using the Normal Sampling Distribution of x̄
      3.1.6. The Margin of Sampling Error (MOE)
      3.1.7. Error Probability α
      3.1.8. Determining the Sample Size for a Given MOE
   3.2. The Sampling Distribution of the Sample Proportion p̄
      3.2.1. The Expected Value of p̄
         3.2.1.1. The Relationship Between the Parent Population Proportion and the Mean of All p̄ Values
      3.2.2. The Variance and the Standard Error of p̄
         3.2.2.1. The Relationship Between the Variance of the Binary Parent Population and the Variance of p̄
      3.2.3. The Sampling Distribution of p̄ as a Normal Distribution
      3.2.4. Margin of Error for p̄
      3.2.5. Determining the Sample Size for a Given MOE

1. Why Do We Use Samples?

Sampling is the basis for inferential statistics. A sample is a segment of a population and is therefore expected to reflect the population. By studying the characteristics of the sample, one can make inferences about the population. There are several reasons why we study a part of the population rather than taking a full census:

- Samples cost less.
- Sampling takes less time.
- Samples are more accurate: sample observations are usually of higher quality because they are better screened for measurement errors, duplication, and misclassification.
- Samples can be destroyed to gain information about quality (destructive sampling).

2. Probability Sampling

A sample in which each element of the population has a known and nonzero chance of being selected is called a probability sample.

2.1. Simple Random Samples

A simple random sample (SRS) is a probability sample in which all possible samples of size n are equally likely to be chosen. To explain this requirement, let the population consist of the letters A, B, C, D, and E. Since there are five items in the population, N = 5. We want to select a sample of size 3, that is, n = 3. Since sampling is random (imagine the letters written on little balls placed in a bowl), there is more than one way to select 3 items from 5 items. Using the combination formula, the total number of possible samples is C(N, n) = C(5, 3) = 10. The following is the list of all 10 possible samples:

ABC  ABD  ABE  ACD  ACE  ADE  BCD  BCE  BDE  CDE

The definition of an SRS implies that each sample has the same chance, 0.10, of being selected.

This process of simple random selection applies to a finite (small) population. The selection process is different when the population is not finite (large). Even when the population is relatively small, applying the definition directly becomes very cumbersome. For example, suppose the population size is 50 and we want to select a sample of size 10. How many different samples are possible? Using the combination formula, the total number of possible samples is C(50, 10) = 10,272,278,170.
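These counts are easy to verify with a short Python sketch (the code is illustrative, not part of the chapter; it uses only the standard itertools and math modules):

```python
from itertools import combinations
from math import comb

# Population of N = 5 letters; samples of size n = 3.
population = ["A", "B", "C", "D", "E"]
samples = list(combinations(population, 3))

print(len(samples))          # 10 possible samples
print(samples[:3])           # ('A', 'B', 'C'), ('A', 'B', 'D'), ('A', 'B', 'E')

# Under simple random sampling, each possible sample is equally likely.
print(1 / len(samples))      # 0.10

# For N = 50 and n = 10, listing the samples is impractical; counting is not.
print(comb(50, 10))          # 10272278170
```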
It would be impractical to list all 10.3 billion possible samples and select one of them at random. The correct procedure is to assign a serial number to each of the population elements and select the sample by drawing a pre-specified number of serial numbers at random (for example, using a random numbers table).

3. Sampling Distributions

A sampling distribution is a probability distribution of a sample statistic. Recall from Chapter 1 that a sample statistic is a summary characteristic computed from sample data. Since a sample statistic is a summary characteristic obtained from a randomly selected sample, the sample statistic is a random variable: the value assigned to the sample statistic is randomly determined. Furthermore, because a sample statistic is a random variable, it has a probability distribution. The probability distribution of a sample statistic is called a sampling distribution.

3.1. The Sampling Distribution of the Sample Mean x̄

Since x̄ is a summary characteristic computed from sample data, it is a sample statistic. The probability distribution of x̄ is called the sampling distribution of x̄. The reason we are able to define a probability distribution for x̄ is that x̄ is a random variable: the value of x̄ is determined by the sample chosen through a random process.

To illustrate the sampling distribution of x̄ in the simplest terms, consider the following example. The Jones family has five children. The following table lists the age of each child. Since we are considering the ages of all the Jones children, the age data constitute a population.

Name        Age x
Ann           3
Beth          6
Charlotte     9
David        12
Eric         15

Suppose, as an experiment, we want to estimate the average age of the children by taking a sample of size three. Note that for estimation purposes only a single sample of size n is randomly selected. Thus, a single random sample selected from the above "population" may consist of, say, Ann, Beth, and David, with corresponding values {3, 6, 12}. But this is only one of the 10 possible samples. (Using the combination formula, there are C(5, 3) = 10 different samples of size three that can be selected from 5 objects without replacement.) There are nine other possible samples that we could have randomly selected. The next table lists all ten possible samples of size n = 3 that may be selected from a population of size N = 5, along with the average age computed from each sample.

Sample Composition   Sample Values x   Sample Mean x̄ = ∑x/n
A B C                3  6  9            6
A B D                3  6 12            7
A B E                3  6 15            8
A C D                3  9 12            8
A C E                3  9 15            9
A D E                3 12 15           10
B C D                6  9 12            9
B C E                6  9 15           10
B D E                6 12 15           11
C D E                9 12 15           12

In the table above, note that the x̄ values 8, 9, and 10 each appear twice. Since three of the ten x̄ values are repeated, there are seven distinct values of x̄. The next table shows the sampling distribution of x̄, which is the listing of all 7 possible values the random variable x̄ can take on, along with the probability (relative frequency) associated with each value. Since the values 8, 9, and 10 each occur twice among the ten samples, the probability associated with each of these values is 2/10 = 0.20. The sampling distribution of the sample mean age is then:

Sampling Distribution of x̄
x̄      f(x̄)
6       0.1
7       0.1
8       0.2
9       0.2
10      0.2
11      0.1
12      0.1
        1.0

The following diagram shows the chart of the sampling distribution.

[Figure: bar chart of the sampling distribution of x̄, with f(x̄) plotted for x̄ = 6 through 12.]
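The enumeration in the two tables above can be reproduced with a small Python sketch (illustrative only; the variable names are my own):

```python
from itertools import combinations
from collections import Counter

# Ages of the Jones children (the parent population).
ages = [3, 6, 9, 12, 15]

# All C(5, 3) = 10 possible samples of size 3 and their sample means.
sample_means = [sum(s) / len(s) for s in combinations(ages, 3)]
print(sorted(sample_means))
# [6.0, 7.0, 8.0, 8.0, 9.0, 9.0, 10.0, 10.0, 11.0, 12.0]

# Sampling distribution of x̄: each distinct value with its relative frequency f(x̄).
counts = Counter(sample_means)
for xbar in sorted(counts):
    print(xbar, counts[xbar] / len(sample_means))
```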
3.1.1. The Expected Value of x̄

The sample statistic x̄ is a random variable with a probability distribution. Like all other random variables, therefore, x̄ has an expected value and a variance. The expected value of x̄ is the (weighted) average of all the sample means, where the weights are the probabilities associated with each value of the sample mean. Since the expected value represents the average of all possible sample means, it is also denoted by the symbol μ_x̄.

Expected value of the sample mean:  E(x̄) = μ_x̄ = ∑ x̄ f(x̄)

In the Jones family example the expected value of the sampling distribution of x̄ is determined as shown in the following table.

Calculation of μ_x̄
x̄      f(x̄)    x̄ f(x̄)
6       0.1      0.6
7       0.1      0.7
8       0.2      1.6
9       0.2      1.8
10      0.2      2.0
11      0.1      1.1
12      0.1      1.2
E(x̄) = μ_x̄ = ∑ x̄ f(x̄) = 9.0

Note that we may also compute μ_x̄ directly from the 10 unweighted x̄ values:

μ_x̄ = (6 + 7 + 8 + 8 + 9 + 9 + 10 + 10 + 11 + 12)/10 = 90/10 = 9

3.1.1.1. The Relationship Between the Mean of the Parent Population and the Mean of All x̄ Values

To show an important relationship between the expected value of x̄ (the average of the sample means, μ_x̄) and the mean of the parent population μ, determine the parent population mean directly from the ages of the Jones family children:

μ = ∑x/N = (3 + 6 + 9 + 12 + 15)/5 = 45/5 = 9

The parent population average age μ = 9 is exactly the same as the mean of x̄. That is, the mean value of all possible sample means is equal to the mean of the parent population: the mean of the means equals the mean.

E(x̄) = μ_x̄ = μ

This equality is not coincidental for this example. The equality of the expected value of the sampling distribution of x̄ and the population mean μ holds for all sampling distributions of x̄. The mean of the means equals the mean! (See the Appendix for the mathematical proof that E(x̄) = μ.)

3.1.2. The Variance and the Standard Error of x̄

The variance of x̄, denoted by var(x̄), like any other variance measure, is simply the mean squared deviation of the random variable x̄. Since within the random variable framework the mean and the expected value convey the same meaning, we can express the variance of x̄ as the expected value (weighted mean) of the squared deviations of x̄:

var(x̄) = E[(x̄ − μ)²] = ∑ (x̄ − μ)² f(x̄)

The next table shows the calculation of var(x̄) as the expected value of the squared deviations.

Calculation of var(x̄) = E[(x̄ − μ)²]
x̄      f(x̄)    (x̄ − μ)² f(x̄)
6       0.1      0.9
7       0.1      0.4
8       0.2      0.2
9       0.2      0.0
10      0.2      0.2
11      0.1      0.4
12      0.1      0.9
var(x̄) = E[(x̄ − μ)²] = 3.0

The standard deviation of x̄ is called the standard error of x̄ and is denoted by se(x̄). The standard error is a measure of the dispersion of all possible x̄ values around the mean of x̄. It is the positive square root of var(x̄). For the Jones family example:

se(x̄) = √var(x̄) = √3 = 1.732
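A brief continuation of the earlier sketch confirms the expected value, variance, and standard error of x̄ computed above (illustrative Python, not part of the chapter):

```python
from itertools import combinations
from math import sqrt

ages = [3, 6, 9, 12, 15]
means = [sum(s) / 3 for s in combinations(ages, 3)]

# Expected value of x̄: the plain average of all 10 possible sample means.
mu_xbar = sum(means) / len(means)
print(mu_xbar)                       # 9.0, equal to the population mean μ

# Variance and standard error of x̄: mean squared deviation of the x̄ values from μ.
var_xbar = sum((m - mu_xbar) ** 2 for m in means) / len(means)
print(var_xbar, sqrt(var_xbar))      # 3.0 and about 1.732
```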
3.1.2.1. The Relationship Between the Variance of the Parent Population and the Variance of x̄

Going back to the population age data, compute the population variance using the variance formula from Chapter 1:

σ² = ∑(x − μ)²/N = 90/5 = 18

Note that var(x̄) ≠ σ². This is always the case. However, there is a definite relationship between var(x̄) and σ². This relationship is:

var(x̄) = (σ²/n) · (N − n)/(N − 1)

From the Jones family example,

var(x̄) = (18/3) · (5 − 3)/(5 − 1) = 3

In the var(x̄) formula, pay special attention to the term (N − n)/(N − 1). This term is called the finite population correction factor (FPCF).

When the population is finite or small, as in the example above, the sample size relative to the population, n/N, is large: 3/5 = 60%. When the population is nonfinite or large, this ratio becomes insignificant, the FPCF approaches 1 and, therefore, it plays no role in the var(x̄) formula. The tendency of the FPCF to approach 1 as N gets larger is shown in the following table, using a sample size of n = 10.

Finite Population Correction Factor as N Increases (for n = 10)
N            (N − n)/(N − 1)
25               0.6250
50               0.8163
100              0.9091
1,000            0.9910
10,000           0.9991
100,000          0.9999
1,000,000        1.0000

Thus, for large populations, the variance of x̄ becomes (see the Appendix for the mathematical proof):

var(x̄) = σ²/n

The standard error of x̄, as the square root of var(x̄), is then:

se(x̄) = σ/√n

3.1.3. The Number of Possible Samples and x̄ Values

To explain the concepts of the sampling distribution, the expected value, and the standard error, we used a simple example in which we took very small samples (n = 3) from a very small parent population (N = 5). The number of possible samples (ν, the Greek letter nu) is determined using the combination formula: ν = C(N, n) = C(5, 3) = 10. When the population size N increases, even with a small sample size n, the number of possible samples ν, and the number of corresponding x̄ values computed from these samples, quickly rises to astronomical levels. The following table shows this clearly.

N       n       ν
5       3       10
10      3       120
50      5       2,118,760
100     10      17,310,309,456,440

In Chapter 1 we used the example of the residents of a Florida retirement community as the population, where N = 608, from which we selected a single sample of size n = 40 to explain the difference between the population parameter μ and the sample statistic x̄. For that explanation we used only a single sample whose values were selected randomly. That sample yielded a sample mean of x̄ = 62.8. This was only one sample and one x̄ among the following possible number of x̄ values:

ν = 749,670,807,490,441,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000

Summary of the Different Variance Concepts and Formulas

With the introduction of the variance of x̄ we have added a new variance concept to the two we learned in Chapter 1. These variance concepts are summarized below.

Population variance measures the mean squared deviation of the population data from the population mean:

σ² = ∑(x − μ)²/N

Sample variance measures the mean squared deviation of the sample data from the sample mean:

s² = ∑(x − x̄)²/(n − 1)

Variance of the mean x̄ measures the mean squared deviation of all possible x̄ values from the mean of x̄. Since in most sampling problems there is an astronomically large number of x̄ values, var(x̄) is not computed from all possible values of x̄. Rather, if the population variance is given, var(x̄) is determined as follows:

var(x̄) = σ²/n
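The finite-population formula and the behavior of the FPCF can be checked numerically with a short sketch (illustrative Python; the value 3.0 matches the enumeration of the Jones example above):

```python
# Exact var(x̄) for the Jones example via the finite-population formula.
ages = [3, 6, 9, 12, 15]
N, n = len(ages), 3
mu = sum(ages) / N
sigma2 = sum((x - mu) ** 2 for x in ages) / N            # population variance: 18.0

fpcf = (N - n) / (N - 1)
print((sigma2 / n) * fpcf)                               # 3.0, matching the enumeration above

# The finite population correction factor approaches 1 as N grows (here n = 10).
for N_big in (25, 50, 100, 1_000, 10_000, 100_000, 1_000_000):
    print(N_big, round((N_big - 10) / (N_big - 1), 4))
```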
3.1.4. The Shape of the Sampling Distribution of x̄: The Relationship Between the Parent Population Distribution and the Sampling Distribution

The foundation of inferential statistics is the sampling distribution. We use the sampling distribution of x̄ to make inferences about the population mean μ. The shape of the sampling distribution plays a vital role in inferential statistics. In order to make inferences about the population parameter, the sampling distribution must have a specific shape: the normal distribution. If the sampling distribution is not normal, it cannot be used for this type of inference.

At the outset, the most important issue to understand is that the shape of the sampling distribution of x̄ depends on two things: (1) the shape, or distribution, of the parent population data, and (2) the size of the sample (n).

3.1.4.1. When the Parent Population Has a Normal (Bell-Shaped) Distribution

The first practical conclusion from this discussion is that when the parent population has a normal (bell-shaped) distribution with mean μ and standard deviation σ, the sampling distribution of x̄ also has a normal distribution with mean E(x̄) = μ_x̄ = μ and standard deviation (standard error) se(x̄) = σ/√n.

[Figure: when the parent population distribution is normal with mean μ and standard deviation σ, the sampling distribution of x̄ is also normal, with mean μ and standard error σ/√n.]

3.1.4.2. When the Parent Population Is Not Normally Distributed

When the parent population is not normally distributed, the shape of the sampling distribution will depend on the sample size n. The sampling distribution of x̄ approaches the normal distribution as the size of the sample increases. The rule of thumb is: if the sample size is 30 or more, the sampling distribution of x̄ is treated as normal. This conclusion is based on the Central Limit Theorem.

[Figure: when the parent population distribution is not normal, the sampling distribution of x̄ is approximately normal, with mean μ and standard error σ/√n, provided n ≥ 30.]

This property of the sampling distribution makes statistical inference about μ possible even when the population is not normally distributed.
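The Central Limit Theorem can be illustrated with a small simulation (a sketch under my own assumptions: Python's random module, an exponential parent population with μ = σ = 1, and 10,000 simulated samples per sample size). The distribution of the simulated sample means centers on μ, and its spread shrinks toward σ/√n as n grows:

```python
import random
from math import sqrt

random.seed(1)

# Draw sample means from a clearly non-normal (exponential) population with μ = σ = 1.
def sample_mean(n):
    return sum(random.expovariate(1.0) for _ in range(n)) / n

for n in (2, 5, 30, 100):
    means = [sample_mean(n) for _ in range(10_000)]
    avg = sum(means) / len(means)
    se = sqrt(sum((m - avg) ** 2 for m in means) / len(means))
    # The simulated average should be close to μ = 1, and the simulated
    # standard error close to σ/√n = 1/√n.
    print(n, round(avg, 3), round(se, 3), round(1 / sqrt(n), 3))
```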
3.1.5. Examples Using the Normal Sampling Distribution of x̄

The subsequent chapters are devoted essentially to inferential statistics, where we will apply the basic concepts learned in this chapter to infer characteristics of population data by analyzing the characteristics of sample data. Inferences about a summary characteristic of the population data, for now the mean μ, from the mean of a sample are never exact statements. These inferences are, instead, probabilistic statements. To make these probabilistic statements, and to be able to state the exact probabilities, it is essential that the sampling distribution of x̄ be normal. The following examples are typical applications of the normal distribution to the sampling distribution of x̄. What we learn from these examples will help us understand inferences about the population mean in the subsequent chapters.

Example 1

In a bottling plant the amount of soda in each 32-ounce bottle is a normally distributed random variable with a mean of μ = 32 ounces and a standard deviation of σ = 0.3 ounces.

a) If a single bottle is randomly selected, what is the probability that it contains between 31.8 and 32.2 ounces of soda? Alternatively stated, given the mean and standard deviation of the fill of the bottles, what fraction (proportion, or percentage) of the bottles contain between 31.8 and 32.2 ounces of soda?

Note: This part of the problem does not deal with the sampling distribution. It is shown, however, to explain how to differentiate between a probability statement about x (the random variable representing the parent population) and one about x̄ (the random variable representing the sample means).

μ = 32    σ = 0.3

P(31.8 < x < 32.2)

z = (x − μ)/σ = (31.8 − 32)/0.3 = −0.67  and  (32.2 − 32)/0.3 = 0.67

P(−0.67 < z < 0.67) = 0.4971

b) If a sample of n = 9 bottles is taken, what is the probability that the mean of this sample, x̄, is between 31.8 and 32.2 ounces? Alternatively stated, what fraction (proportion, or percentage) of the means obtained from samples of size n = 9 fall between 31.8 and 32.2 ounces?

Now we are dealing with the probability distribution of x̄. Since the parent population of bottles is normal, the distribution of x̄ values (the sampling distribution of x̄) is also normal, with the following mean and standard deviation (standard error):

μ_x̄ = μ = 32
se(x̄) = σ/√n = 0.3/√9 = 0.1

The objective is now to find P(31.8 < x̄ < 32.2). First we must convert the normal random variable x̄ to the standard normal z. The z conversion formula is

z = (x̄ − μ)/se(x̄)

Using this formula we find

z₁ = (31.8 − 32)/0.1 = −2.00    z₂ = (32.2 − 32)/0.1 = 2.00

and

P(−2.00 < z < 2.00) = 0.9545

[Figure: the distribution of x (tail areas 0.2514 each, middle area 0.4971) and the sampling distribution of x̄ (tail areas 0.0228 each, middle area 0.9545) over the same interval, 31.8 to 32.2.]

Note that in these two examples, even though the distribution of x (the parent population) and the sampling distribution of x̄ both have the same mean (μ = 32), the same interval (31.8 to 32.2) contains 95.5% of all the x̄ values but only 49.7% of the x values. The reason for this difference is that the x̄ values are far less dispersed than the x values, because the standard deviation of the distribution of x̄, se(x̄) = σ/√n, is smaller than σ, the standard deviation of x. The x̄ values are much more closely clustered around the mean μ = 32 than the x values.
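Both probabilities can be computed directly from the normal CDF, here built from math.erf rather than read from a z table, so the answers differ slightly from the table-rounded values above (illustrative sketch):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma, n = 32.0, 0.3, 9
se = sigma / sqrt(n)                      # 0.1

# (a) Probability for a single bottle, using the population standard deviation.
p_x = norm_cdf((32.2 - mu) / sigma) - norm_cdf((31.8 - mu) / sigma)

# (b) Probability for the mean of a sample of 9 bottles, using the standard error.
p_xbar = norm_cdf((32.2 - mu) / se) - norm_cdf((31.8 - mu) / se)

print(round(p_x, 4), round(p_xbar, 4))    # about 0.495 and 0.9545
```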
3.1.6. The Margin of Sampling Error (MOE)

The next example is used to explain the extremely important concept of the margin of sampling error (MOE). This concept plays a crucial role in inferential statistics. You must always keep the MOE in mind when dealing with the sampling distribution of a sample statistic.

Example 2

A given population has a mean of 50 and a standard deviation of 18. Consider the sampling distribution of the means of samples of size 36 obtained from this population. Find the interval of x̄ values that contains the middle 90 percent of all possible x̄ values.

First, establish the parameters of the population distribution and of the sampling distribution. In the population, the mean and standard deviation are:

μ = 50    σ = 18

In the sampling distribution, x̄ is normally distributed (since n = 36 ≥ 30, the Central Limit Theorem applies even if the parent population is not normal) with mean and standard deviation (standard error):

μ_x̄ = μ = 50
se(x̄) = σ/√n = 18/√36 = 3

Consider the diagram of the distribution of x̄ in which x̄₁ and x̄₂ represent the lower and upper ends of the interval that contains the middle 90% of all possible sample means obtained from samples of size n = 36. The objective is to find the values of x̄₁ and x̄₂.

[Figure: the sampling distribution of x̄ centered at 50, with P(x̄₁ ≤ x̄ ≤ x̄₂) = 0.90 and a tail area of 0.05 on each side.]

You can find x̄₁ and x̄₂ using the formula that converts the normal random variable x̄ into the standard normal random variable z:

z = (x̄ − μ)/se(x̄)

from which you can solve for x̄:

x̄ = μ + z·se(x̄)

The term z·se(x̄) in this formula is called the margin of sampling error, or simply the margin of error (MOE):

MOE = z·se(x̄)

To find the MOE, first compute the standard error of x̄:

se(x̄) = σ/√n = 18/√36 = 3

The value for z is determined as follows. Note that the middle area within the interval is 90%; thus, the two tail areas are 5% each. Therefore, the z score corresponding to x̄₂ is the z score that bounds a right-tail area of 5%, that is, z_0.05 = 1.64. Thus,

MOE = z_0.05·se(x̄) = 1.64(3) = 4.92

The margin of error of 4.92 simply implies that the middle 90% of all possible x̄ values fall within ±4.92 (data units) of the population mean μ. The lower and upper ends of the interval are thus:

x̄₁ = 50 − 4.92 = 45.08
x̄₂ = 50 + 4.92 = 54.92

[Figure: the sampling distribution of x̄ with MOE = z_0.05·se(x̄) = 4.92 marked on each side of 50, giving the interval 45.08 to 54.92 and a tail area of 0.05 on each side.]

Again, the lower and upper boundaries of this interval indicate that the middle 90% of all x̄ values fall within the interval bounded by 45.08 and 54.92. Stated differently, 90% of the means computed from samples of size n = 36 deviate from the parent population mean by no more than ±4.92.

Example 3

In the previous example, where μ = 50 and σ = 18, find the interval that contains the middle 95% of all the means obtained from samples of size n = 36. For this example we must find the 95% margin of error:

MOE = z_0.025·se(x̄) = 1.96(3) = 5.88

[Figure: the sampling distribution of x̄ with MOE = z_0.025·se(x̄) = 5.88 marked on each side of 50, giving the interval 44.12 to 55.88 and a tail area of 0.025 on each side.]

Thus,

x̄₁ = 50 − 5.88 = 44.12
x̄₂ = 50 + 5.88 = 55.88
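A short sketch of the MOE calculation in Examples 2 and 3 (illustrative Python; NormalDist.inv_cdf returns the exact z value 1.645 rather than the table's 1.64, so the 90% numbers differ slightly from those above):

```python
from statistics import NormalDist
from math import sqrt

def moe_interval(mu, sigma, n, confidence):
    """MOE and interval around μ containing the middle `confidence` share of sample means."""
    alpha = 1 - confidence
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_(α/2)
    se = sigma / sqrt(n)
    moe = z * se
    return moe, (mu - moe, mu + moe)

# Example 2: μ = 50, σ = 18, n = 36, middle 90%.
print(moe_interval(50, 18, 36, 0.90))   # MOE ≈ 4.93, interval ≈ (45.07, 54.93)

# Example 3: same population, middle 95%.
print(moe_interval(50, 18, 36, 0.95))   # MOE ≈ 5.88, interval ≈ (44.12, 55.88)
```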
Example 4

In the soda bottle example, where μ = 32 ounces and σ = 0.3 ounces, find the interval that contains the middle 95% of the means obtained from samples of size n = 25 bottles.

Since the middle interval is to contain 95% of all x̄ values, each tail area contains 2.5% of the x̄ values. The z score that bounds a tail area of 0.025 is z_0.025 = 1.96.

x̄₁, x̄₂ = μ ± MOE
MOE = z_0.025·se(x̄)
se(x̄) = 0.3/√25 = 0.06
MOE = 1.96(0.06) = 0.118
x̄₁, x̄₂ = 32 ± 0.118 = (31.882, 32.118)

We can, therefore, state that of every 100 samples of size 25 that we select from the population of soda bottles, we expect 95 of them to have a sample mean fill between 31.88 and 32.12 ounces.

3.1.7. Error Probability α

In computing the MOE in the first two examples of this section, each MOE involved a specified probability: the first required a middle interval with a 90% margin of error, and the second a 95% MOE. In the first example, the middle interval built around μ using the 90% MOE contained 90% of all possible sample means. Thus 10% of the sample means fell outside the interval; that is, they deviated from μ by more than the established MOE. In that example, if a random sample of size n = 36 were selected from the population, there was a 10% probability that the sample mean would deviate from μ = 50 by more than ±4.92. This 10% probability is called the error probability and is denoted by the Greek letter α. In the second example, 95% of sample means deviated from μ = 50 by no more than ±5.88; the error probability in that example was, therefore, α = 0.05.

Using α as the general symbol for the error probability, the MOE formula can be written as:

MOE = z_α/2·se(x̄)

Note that the subscript of z is α/2, since we divide the error probability equally between the two tails of the normal curve.

3.1.8. Determining the Sample Size for a Given Margin of Error

In the margin of error formula MOE = z_α/2·se(x̄), the standard error is se(x̄) = σ/√n. Thus,

MOE = z_α/2·σ/√n

This indicates that the MOE varies inversely with the square root of the sample size n: the bigger the sample size, the narrower the MOE. In many statistical questions you are required to determine the sample size for a specified MOE. To determine n, we can rearrange the MOE formula as follows:

√n = z_α/2·σ/MOE

Squaring both sides, we obtain the formula to determine the sample size n for a given MOE:

n = (z_α/2·σ/MOE)²

Example 5

In the previous example, where μ = 32 ounces and σ = 0.3 ounces, what should the sample size be so that 95% of all possible sample means fall within a margin of error of 0.08 ounces (MOE = 0.08) of the population mean?

Given a 95% interval, the error probability is α = 0.05.

n = (z_α/2·σ/MOE)² = (1.96 × 0.3/0.08)² = 54.02

Rounded up, n = 55. Note that in this example we are interested in a narrower margin of error (0.08 versus 0.118). To make the MOE narrower and, hence, the interval more precise, we must increase the sample size. Of every 100 means obtained from samples of n = 55 bottles, 95 are expected to fall within ±0.08 ounces of the mean of all bottles filled by the machine.
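The sample-size formula can be wrapped in a small helper (illustrative sketch; the function name and the rounding with math.ceil are my own choices):

```python
from math import ceil
from statistics import NormalDist

def required_n(sigma, moe, confidence):
    """Smallest n such that the middle `confidence` share of sample means lies within ±moe of μ."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / moe) ** 2)

# Example 5: σ = 0.3 ounces, desired MOE = 0.08 ounces, 95% of sample means.
print(required_n(0.3, 0.08, 0.95))   # 55  (the unrounded value is about 54.0)
```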
3.2. The Sampling Distribution of the Sample Proportion p̄

Consider a population of size N. Let x be the number of elements in the population that have a given attribute. Assign the number "1" to the elements with this attribute and "0" to all others. The population data are then binary, and x is a binary variable. As explained in Chapter 1, the mean of a binary population data set is called the proportion and is denoted by π. We use the same formula to compute the population proportion as for the population mean:

π = ∑x/N

For example, in a given academic year a total of 37,196 students (full-time equivalent) were enrolled at a major university campus, of whom 30,131 were undergraduate students. Assigning "1" to "undergraduate student," the population proportion of undergraduates enrolled at this campus is:

π = 30,131/37,196 = 0.81

Now, suppose a sample of n students is taken from the population. The proportion of undergraduates in the sample, the sample proportion, is

p̄ = ∑x/n

Suppose you took a sample of n = 200 students of whom ∑x = 156 were undergraduates; then the sample proportion is

p̄ = 156/200 = 0.78

Note that, like x̄, which is the sample statistic estimating the population parameter μ, p̄ is also a sample statistic, now estimating the population parameter π. Like x̄, p̄ is a random variable, because its value is determined by the outcome of a random experiment, namely the selection of a random sample. The probability distribution of p̄ is called the sampling distribution of p̄.

To explain how this sampling distribution is generated, consider the Jones family example used in explaining the sampling distribution of x̄. In this case, instead of the age of the children, we are interested in a non-quantitative attribute of the children: their gender (male/female). To show how the concepts of the sampling distributions of x̄ and p̄ are closely related, assign the value "1" to "female" (the attribute of interest in this example) and "0" to "male". The following table shows the population elements by gender and the numeric assignment for each child.

Gender of the Jones Family Children
Name        Gender    Numeric Assignment
Ann         F         1
Beth        F         1
Charlotte   F         1
David       M         0
Eric        M         0

The proportion of females in the population of the Jones family children is

π = 3/5 = 0.60

Now, we conduct an experiment by taking a sample of size n = 3 to "estimate" the population proportion. For samples of size n = 3 there are 10 possible samples, with the sample proportions of females shown in the following table.

Sample Proportion of Females Among the Jones Family Children
Sample Composition   Sample Values x   Sample Proportion p̄ = ∑x/n
A B C                1 1 1             3/3
A B D                1 1 0             2/3
A B E                1 1 0             2/3
A C D                1 1 0             2/3
A C E                1 1 0             2/3
A D E                1 0 0             1/3
B C D                1 1 0             2/3
B C E                1 1 0             2/3
B D E                1 0 0             1/3
C D E                1 0 0             1/3

The sampling distribution of p̄, the proportion of females, is shown below as the relative frequency of the proportions in the previous table.

Sampling Distribution of p̄
p̄      f(p̄)
1/3     0.30
2/3     0.60
3/3     0.10
        1.00

3.2.1. The Expected Value (Mean) of p̄

The sample statistic p̄ is a random variable with a probability distribution. Again, like all other random variables, p̄ has an expected value and a standard deviation. The expected value of p̄ is the (weighted) mean of all the sample proportions, where the weights are the probabilities associated with each value of the sample proportion. Since the expected value represents the mean of all possible sample proportions, it is also denoted by the symbol μ_p̄.

E(p̄) = μ_p̄ = ∑ p̄ f(p̄)

Using the sampling distribution of the sample proportion of females shown in the previous table, the calculation of the mean of p̄ is as follows.

Calculation of E(p̄)
p̄      f(p̄)    p̄ f(p̄)
1/3     0.30     0.10
2/3     0.60     0.40
3/3     0.10     0.10
E(p̄) = μ_p̄ = ∑ p̄ f(p̄) = 0.60

Alternatively, we can compute μ_p̄ directly from the 10 unweighted p̄ values:

μ_p̄ = (3/3 + 2/3 + 2/3 + 2/3 + 2/3 + 1/3 + 2/3 + 2/3 + 1/3 + 1/3)/10 = (18/3)/10 = 0.60

3.2.1.1. The Relationship Between the Parent Population Proportion and the Mean of All p̄ Values

Now, considering the binary population data on the gender of the children, three out of five children are female. Therefore, the population proportion is

π = 3/5 = 0.60

Note the important conclusion here: the mean of all possible sample proportions is exactly the same as the population proportion π. (See the Appendix for the proof that E(p̄) = π.)

E(p̄) = μ_p̄ = π

Recall that at the start of this discussion it was stated that the proportion is a special case of the mean, where the values in the data set are the binary values 0 and 1. Thus, the mean of the sampling distribution of p̄ and the mean of the sampling distribution of x̄ are both equal to the corresponding population mean. Only the symbols differ: π is the mean of the population when the data are binary, and μ is the mean of non-binary data.
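The gender example can be enumerated in the same way as the age example (illustrative Python; Fraction is used only to keep the proportions exact):

```python
from itertools import combinations
from collections import Counter
from fractions import Fraction

# Jones children coded as binary data: 1 = female, 0 = male.
gender = {"Ann": 1, "Beth": 1, "Charlotte": 1, "David": 0, "Eric": 0}

# Sample proportion of females in each of the C(5, 3) = 10 possible samples.
props = [Fraction(sum(gender[name] for name in s), 3)
         for s in combinations(gender, 3)]

# Sampling distribution of p̄ and its mean.
counts = Counter(props)
for p in sorted(counts):
    print(p, counts[p] / len(props))      # 1/3 0.3,  2/3 0.6,  1 0.1
print(sum(props) / len(props))            # 3/5, equal to π = 0.60
```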
3.2.2. The Variance and the Standard Error of p̄

The variance of the random variable p̄, denoted by var(p̄), is the expected value, or weighted mean, of the squared deviations of p̄:

var(p̄) = E[(p̄ − μ_p̄)²] = ∑ (p̄ − μ_p̄)² f(p̄)

Since μ_p̄ = π, then

var(p̄) = E[(p̄ − π)²] = ∑ (p̄ − π)² f(p̄)

Calculation of var(p̄) from the Sampling Distribution of p̄
p̄      f(p̄)    (p̄ − π)² f(p̄)
1/3     0.30     0.021
2/3     0.60     0.003
3/3     0.10     0.016
∑ (p̄ − π)² f(p̄) = 0.040

The standard error of p̄ is simply the square root of the variance:

se(p̄) = √var(p̄) = √0.04 = 0.2

3.2.2.1. The Relationship Between the Variance of the Binary Parent Population and the Variance of p̄

To explain the relationship, let us first compute the variance of the parent population in the Jones children example. Using the appropriate symbols for binary population data, and recalling from Chapter 1, the population variance is:

σ² = π(1 − π)

Thus, for the Jones family children binary data,

σ² = 0.6(1 − 0.6) = 0.24

The variance of p̄ is then

var(p̄) = (π(1 − π)/n) · (N − n)/(N − 1)

Thus,

var(p̄) = (0.24/3) · (5 − 3)/(5 − 1) = 0.04

When the population is non-finite, the FPCF approaches 1 and drops out of the picture, and the formula for var(p̄) becomes simply (see the Appendix for the proof that var(p̄) = π(1 − π)/n):

var(p̄) = π(1 − π)/n

The standard error of p̄ is then

se(p̄) = √(π(1 − π)/n)

3.2.3. The Sampling Distribution of p̄ as a Normal Distribution

In the binomial distribution, as the number of independent trials increases (and especially if the probability of success π is close to 0.5), the distribution of the binomial random variable x, the number of successes in the trials, can be approximated by the normal distribution. The rule of thumb for x to be approximately normally distributed is:

nπ ≥ 5  and  n(1 − π) ≥ 5

Now, rather than x, we are interested in the distribution of the random variable p̄. Note that p̄ is a linear transformation of x, the number of successes (the number of 1's in the binary sample data):

p̄ = x/n

We transform x to p̄ by multiplying x by the constant 1/n. Thus, if x is approximately normal, then its linear transformation p̄ is also approximately normal. (Only the location and scale of the normal curve along the number line change, not its bell shape.)

[Figure: the sampling distribution of p̄ drawn as a normal curve with mean E(p̄) = π and standard error se(p̄) = √(π(1 − π)/n).]

The following examples use the normal distribution to solve probability questions involving the sampling distribution of p̄.

Example 6

Sixty-eight percent (68%) of vehicles on Indiana interstate highways violate the speed limit (π = 0.68). A sample of 500 vehicles is randomly clocked for speed. What is the probability that more than 70% of the vehicles in the sample violate the speed limit?

Find P(p̄ > 0.70).

Since the requirements for the normal approximation are satisfied (nπ = 340 and n(1 − π) = 160), p̄ is normally distributed with the following parameters:

μ_p̄ = π = 0.68
se(p̄) = √(π(1 − π)/n) = √(0.68(1 − 0.68)/500) = 0.0209

The formula to transform the normally distributed p̄ to z is:

z = (p̄ − π)/se(p̄)

The z score is then

z = (0.70 − 0.68)/0.0209 = 0.96

P(z > 0.96) = 0.1685

[Figure: the sampling distribution of p̄ centered at 0.68, with the right-tail area beyond p̄ = 0.70 equal to 0.1685.]

This indicates that 0.1685 of the sample proportions (16.85%) obtained from random samples of size n = 500 would exceed 0.70.
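A sketch of Example 6 using the exact normal CDF instead of a z table, so the printed probability differs slightly from 0.1685 (illustrative Python):

```python
from math import sqrt
from statistics import NormalDist

# Example 6: π = 0.68, n = 500. Probability that the sample proportion exceeds 0.70,
# under the normal approximation to the sampling distribution of p̄.
pi, n = 0.68, 500
se = sqrt(pi * (1 - pi) / n)                 # about 0.0209

z = (0.70 - pi) / se                         # about 0.96
print(round(1 - NormalDist().cdf(z), 4))     # about 0.169 (0.1685 with the table-rounded z)
```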
Example 7

In the previous example, what is the probability that the sample proportion is within 3 percentage points of the population proportion? Alternatively stated, what proportion (percentage) of p̄ values computed from repeated samples of size n = 500 fall within 3 percentage points (±0.03) of the population proportion?

P(π − 0.03 < p̄ < π + 0.03)
π = 0.68    n = 500
se(p̄) = √(π(1 − π)/n) = 0.0209

P(0.68 − 0.03 < p̄ < 0.68 + 0.03)
P(0.65 < p̄ < 0.71) = ?

z₁ = (0.65 − 0.68)/0.0209 = −1.44
z₂ = (0.71 − 0.68)/0.0209 = 1.44

P(−1.44 < z < 1.44) = 1 − 2(0.0749) = 0.8502

[Figure: the sampling distribution of p̄ centered at π = 0.68, with an area of 0.8502 between 0.65 and 0.71.]

Example 8

In the previous example, what proportion (or percentage) of p̄ values computed from samples of size n = 500 fall within 4 percentage points (±0.04) of the population proportion?

P(π − 0.04 < p̄ < π + 0.04)
π = 0.68    n = 500
se(p̄) = √(π(1 − π)/n) = 0.0209

P(0.68 − 0.04 < p̄ < 0.68 + 0.04)
P(0.64 < p̄ < 0.72) = ?

z₁ = (0.64 − 0.68)/0.0209 = −1.91
z₂ = (0.72 − 0.68)/0.0209 = 1.91

P(−1.91 < z < 1.91) = 0.9438

[Figure: the sampling distribution of p̄ centered at π = 0.68, with an area of 0.9438 between 0.64 and 0.72.]

As the diagram shows, 94.38% of the p̄ values computed from samples of size n = 500 fall within ±0.04 (±4 percentage points) of the population proportion π = 0.68; that is, they fall within the interval bounded by p̄ = 0.64 and p̄ = 0.72.

Example 9

In the previous example, what proportion (or percentage) of p̄ values computed from samples of size n = 1,000 fall within ±3 percentage points (±0.03) of the population proportion?

P(π − 0.03 < p̄ < π + 0.03)
P(0.68 − 0.03 < p̄ < 0.68 + 0.03)
P(0.65 < p̄ < 0.71) = ?

Note that even though the p̄ interval is the same as in Example 7, the probability will be different because the sample size is larger. We need to recalculate the standard error of p̄, taking into account the new, larger sample size.

se(p̄) = √(π(1 − π)/n) = √(0.68(1 − 0.68)/1000) = 0.0148

z = (0.65 − 0.68)/0.0148 = −2.03  and  2.03

P(−2.03 < z < 2.03) = 1 − 2(0.0212) = 0.9576

[Figure: the sampling distribution of p̄ centered at π = 0.68, with an area of 0.9576 between 0.65 and 0.71.]
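Examples 7 through 9 follow the same pattern, so a single helper covers them (illustrative sketch; exact CDF values are used, hence the small differences from the table-based answers):

```python
from math import sqrt
from statistics import NormalDist

def prob_within(pi, n, d):
    """P(π − d < p̄ < π + d) under the normal approximation to the sampling distribution of p̄."""
    se = sqrt(pi * (1 - pi) / n)
    z = d / se
    return NormalDist().cdf(z) - NormalDist().cdf(-z)

print(round(prob_within(0.68, 500, 0.03), 4))    # Example 7: about 0.85
print(round(prob_within(0.68, 500, 0.04), 4))    # Example 8: about 0.945
print(round(prob_within(0.68, 1000, 0.03), 4))   # Example 9: about 0.958
```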
3.2.4. Margin of Error for p̄

Similar to the discussion of the MOE for x̄, the concept of the margin of error for p̄ plays a crucial role in inferential statistics. This is why we place special emphasis on this topic. The following example involves the MOE for p̄.

Example 10

Given that the population proportion of vehicles violating the legal speed limit is 0.68, and using a sample size of n = 1,000, find the interval of p̄ values that contains the middle 90% of all sample proportions computed from random samples of size n = 1,000.

To find the lower and upper ends of the interval, you must add to and subtract from π a certain quantity (in this case, a proportion, or percentage points). The lower and upper ends are denoted by p̄₁ and p̄₂, respectively.

[Figure: the sampling distribution of p̄ centered at π = 0.68, with P(p̄₁ ≤ p̄ ≤ p̄₂) = 0.90 and a tail area of 0.05 on each side.]

The quantity added to and subtracted from π is the MOE. To obtain the MOE for p̄, rearrange

z = (p̄ − π)/se(p̄)

by solving for p̄:

p̄ = π + z·se(p̄)

Thus, to obtain p̄₁ we subtract z·se(p̄) from π, and to obtain p̄₂ we add z·se(p̄):

p̄₁, p̄₂ = π ± z·se(p̄)

We know π = 0.68 and, given n = 1,000, se(p̄) = 0.0148. Since we want 90% of all sample proportions to be included in the interval, of the remaining α = 10% (recall that α is called the error probability) one half lies in the right tail and the other half in the left tail, outside the interval. The margin of sampling error is then

MOE = z_α/2·se(p̄)

Since α = 0.10, the relevant z score is z_α/2 = z_0.05 = 1.64. The MOE for the 90% interval is then

MOE = 1.64(0.0148) = 0.024

The lower and upper ends of the interval are therefore

p̄₁, p̄₂ = π ± z_0.05·se(p̄) = 0.68 ± 0.024 = (0.656, 0.704)

This means that if you took repeated samples of 1,000 vehicles and computed the proportion in each sample that violated the speed limit, then 90% of these proportions would have values ranging from 0.656 to 0.704. Alternatively stated, 90% of the sample proportions would deviate from π by no more than ±0.024, or ±2.4 percentage points.

[Figure: the sampling distribution of p̄ centered at π = 0.68, with the middle 90% of sample proportions between 0.656 and 0.704 and a tail area of 0.05 on each side.]

Example 11

Suppose in a certain election a candidate received 55% of the votes. What proportion (or percentage) of sample proportions obtained from repeated samples of size n = 600 voters each would fall within ±3 percentage points (±0.03) of the population proportion of 0.55?

The objective here is to find P(π − 0.03 < p̄ < π + 0.03), with π = 0.55 and n = 600.

P(0.52 < p̄ < 0.58)

se(p̄) = √(π(1 − π)/n) = √(0.55(1 − 0.55)/600) = 0.0203

z = (p̄ − π)/se(p̄)
z₁ = (0.52 − 0.55)/0.0203 = −1.48
z₂ = (0.58 − 0.55)/0.0203 = 1.48

P(−1.48 < z < 1.48) = 0.8611

Therefore, about 86% of sample proportions would deviate from π = 0.55 by no more than ±0.03, or by no more than 3 percentage points.

Example 12

In the previous example, where π = 0.55, what interval of p̄ values would contain the middle 95% of the p̄ values of all possible samples of size n = 600?

Now that we have learned about the MOE, the lower and upper ends of the interval are:

p̄₁, p̄₂ = π ± MOE

Since the interval is to contain 95% of all sample proportions, the error probability is α = 0.05. The margin of error is then

MOE = z_α/2·se(p̄)

where the relevant z score is z_α/2 = z_0.025 = 1.96 and se(p̄) = 0.0203. Thus,

MOE = 1.96(0.0203) = 0.0398 ≈ 0.04

That is, 95% of the sample proportions in samples of size 600 fall within 0.04 (or 4 percentage points) of the population proportion of 0.55:

p̄₁, p̄₂ = 0.55 ± 0.04 = (0.51, 0.59)
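A sketch of the MOE interval for p̄, covering Examples 10 and 12 (illustrative Python; exact z values are used, so the 90% MOE comes out as 0.0243 rather than the table-based 0.024):

```python
from math import sqrt
from statistics import NormalDist

def moe_interval_p(pi, n, confidence):
    """MOE and interval around π containing the middle `confidence` share of sample proportions."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # z_(α/2)
    moe = z * sqrt(pi * (1 - pi) / n)
    return moe, (pi - moe, pi + moe)

# Example 10: π = 0.68, n = 1,000, middle 90%.
print(moe_interval_p(0.68, 1000, 0.90))   # MOE ≈ 0.0243, interval ≈ (0.656, 0.704)

# Example 12: π = 0.55, n = 600, middle 95%.
print(moe_interval_p(0.55, 600, 0.95))    # MOE ≈ 0.0398, interval ≈ (0.510, 0.590)
```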
3.2.5. Determining the Sample Size for a Given MOE for p̄

Once again, in many inferential statistics questions you will be asked to determine the sample size that yields a desired margin of error for p̄. Considering the formula for the margin of error for p̄, the MOE varies inversely with the square root of the sample size:

MOE = z_α/2·se(p̄) = z_α/2·√(π(1 − π)/n)

We can rearrange this formula to solve for n. Squaring both sides and then solving for n, we obtain the formula to determine the sample size for a given MOE:

n = (z_α/2/MOE)²·π(1 − π)

Example 13

In the previous question, where π = 0.55, what is the minimum sample size such that the probability that the sample proportion is within ±0.02 (2 percentage points) of the population proportion is 95%?

Here we are looking for a 95% interval. Therefore, the error probability is α = 0.05 and z_α/2 = z_0.025 = 1.96. We want the margin of error to be MOE = 0.02.

n = (1.96/0.02)²·(0.55)(1 − 0.55) = 2,376.99

n = 2,377 (always round up)

Appendix

The proof that E(x̄) = μ_x̄ = μ:

E(x̄) = E(∑x/n) = (1/n)·E(x₁ + x₂ + ⋯ + xₙ) = (1/n)·[E(x₁) + E(x₂) + ⋯ + E(xₙ)]

Since all xᵢ are selected from the same population, E(x₁) = E(x₂) = ⋯ = E(xₙ) = μ. Therefore,

E(x̄) = (1/n)·(μ + μ + ⋯ + μ) = nμ/n = μ

The proof that var(x̄) = σ²/n:

var(x̄) = var(∑x/n) = (1/n²)·var(x₁ + x₂ + ⋯ + xₙ)

Since all xᵢ are independently selected from the same population,

var(x₁ + x₂ + ⋯ + xₙ) = var(x₁) + var(x₂) + ⋯ + var(xₙ)

and

var(x₁) = var(x₂) = ⋯ = var(xₙ) = σ²

Thus,

var(x̄) = (1/n²)·(σ² + σ² + ⋯ + σ²) = nσ²/n² = σ²/n

The proof that E(p̄) = π:

After taking a sample of size n, determining the number of successes x in the sample (the number of "1"s in the binary data) is a Bernoulli process, so x has a binomial distribution. In Chapter 2 it was shown that the expected value of a binomial random variable is

E(x) = nπ

Since p̄ = x/n, then x = np̄. Substituting np̄ for x in E(x) = nπ,

E(np̄) = nπ

For a given sample size n, E(np̄) = n·E(p̄), so

n·E(p̄) = nπ

Dividing both sides by n,

E(p̄) = π

The proof that var(p̄) = π(1 − π)/n:

Once again, since the number of successes x in a sample has a binomial distribution, the variance of x is

var(x) = nπ(1 − π)

Substituting x = np̄, we have

var(np̄) = nπ(1 − π)

n²·var(p̄) = nπ(1 − π)

var(p̄) = π(1 − π)/n
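The appendix identities can also be checked by simulation. The sketch below is my own illustration (sampling with replacement from a binary population with π = 0.6, so the finite population correction does not apply):

```python
import random
from math import sqrt

random.seed(42)

# Monte Carlo check of E(p̄) = π and var(p̄) = π(1 − π)/n
# for a binary population with π = 0.6 and samples of size n = 50.
pi, n, trials = 0.6, 50, 20_000

p_bars = []
for _ in range(trials):
    sample = [1 if random.random() < pi else 0 for _ in range(n)]
    p_bars.append(sum(sample) / n)

mean_p = sum(p_bars) / trials
var_p = sum((p - mean_p) ** 2 for p in p_bars) / trials

print(round(mean_p, 3))        # close to π = 0.6
print(round(var_p, 5))         # close to π(1 − π)/n = 0.0048
print(round(sqrt(var_p), 4))   # close to se(p̄) ≈ 0.0693
```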