Statistics 111 - Lecture 18: Final Review
June 2, 2008

Administrative Notes
• Pass in homework
• Final exam on Thursday
• Office hours today from 12:30-2:00pm

Final Exam
• Tomorrow from 10:40-12:10
• Show up a bit early; if you arrive after 10:40 you will be losing time for your own exam
• Calculators are definitely needed!
• A single 8.5 x 11 cheat sheet (two-sided) is allowed
• The sample midterm and its solutions are posted on the website

Final Exam (continued)
• The test is going to be long (6 or 7 questions), so pace yourself
• Do NOT leave anything blank! I want to give you credit for what you know, so give me an excuse to give you points!
• The questions are NOT in order of difficulty
• If you get stuck on a question, put it on hold and try a different question

Outline
• Collecting Data (Chapter 3)
• Exploring Data - One Variable (Chapter 1)
• Exploring Data - Two Variables (Chapter 2)
• Probability (Chapter 4)
• Sampling Distributions (Chapter 5)
• Introduction to Inference (Chapter 6)
• Inference for Population Means (Chapter 7)
• Inference for Population Proportions (Chapter 8)
• Inference for Regression (Chapter 10)

Experiments
[Diagram: Population -> Experimental Units -> Treatment Group (treatment) vs. Control Group (no treatment)]
• Try to establish the causal effect of a treatment
• The key is reducing the presence of confounding variables
• Matching: ensure that the treatment and control groups are very similar on observed variables, e.g. race, gender, age
• Randomization: randomly dividing units into treatment or control leads to groups that are similar on both observed and unobserved confounding variables
• Double-blinding: neither the subjects nor the evaluators know who is in the treatment group vs. the control group

Sampling and Surveys
[Diagram: an unknown population parameter; sampling produces a sample; the sample statistic is used for estimation and inference about the parameter]
• Just like in experiments, we must be cautious of potential sources of bias in our sampling results
• Voluntary response samples, undercoverage, nonresponse, untrue responses, wording of questions
• Simple random sampling: less biased, since each individual in the population has an equal chance of being included in the sample

Different Types of Graphs
• A distribution describes what values a variable takes and how frequently those values occur
• Boxplots are good for showing center, spread, and outliers, but do not indicate the shape of a distribution
• Histograms are much more effective at displaying the shape of a distribution

Measures of Center and Spread
• Center: the mean
  x̄ = (x1 + x2 + ... + xn) / n
• Spread: the standard deviation
  s = sqrt( Σ(xi - x̄)² / (n - 1) )
• For outliers or asymmetry, the median and IQR are better
• Center: the median, the "middle number" of the distribution
• Spread: the inter-quartile range, IQR = Q3 - Q1
• We use the mean and SD more, since most distributions we use are symmetric with no outliers (e.g. the Normal)

Relationships between Continuous Variables
• A scatterplot examines the relationship between a response variable (Y) and an explanatory variable (X)
[Figure: scatterplot of education vs. mortality, r = -0.51]
• Positive vs. negative associations
• Correlation is a measure of the strength of the linear relationship between variables X and Y
• r near 1 or -1 means a strong linear relationship
• r near 0 means a weak linear relationship
• Linear regression: we come back to this later...

Probability
• Random process: the outcome is not known exactly, but we have a probability distribution of the possible outcomes
• Event: an outcome of a random process, with probability
P(A)
• Probability calculations: combinations of rules
  - Equally likely outcomes rule
  - Complement rule
  - Addition rule for disjoint events
  - Multiplication rule for independent events
• Random variable: a numerical outcome or summary of a random process
• A discrete r.v. has a finite number of distinct values
• A continuous r.v. has a non-countable number of values
• Linear transformations of random variables

The Normal Distribution
• The Normal distribution N(μ, σ²) has center μ and spread σ²
[Figure: Normal density curves for N(0,1), N(-1,2), N(0,2), and N(2,1)]
• We have tables for any probability from the standard Normal distribution (μ = 0 and σ² = 1)
• Standardization: converting X, which has a N(μ, σ²) distribution, into Z, which has a N(0,1) distribution:
  Z = (X - μ) / σ
• Reverse standardization: converting a standard Normal Z into a non-standard Normal X:
  X = σZ + μ

Inference Using Samples
[Diagram: population parameters (μ or p) are estimated from a sample by the statistics x̄ or p̂]
• Continuous data: the population mean is estimated by the sample mean x̄
• Discrete data: the population proportion is estimated by the sample proportion p̂
• The key for inference: sampling distributions
• A sampling distribution is the distribution of values taken by a statistic in all possible samples from the same population

Sampling Distribution of the Sample Mean
• The center of the sampling distribution of the sample mean is the population mean: mean(x̄) = μ
• Over all samples, the sample mean will, on average, be equal to the population mean (no guarantees for any one sample!)
• The standard deviation of the sampling distribution of the sample mean is SD(x̄) = σ/√n
• As the sample size increases, the standard deviation of the sample mean decreases!
• Central Limit Theorem: if the sample size is large enough, then the sample mean x̄ has an approximately Normal distribution

Binomial/Normal Dist.
for Proportions
• The sample count Y follows a Binomial distribution, whose probabilities we can calculate from Binomial tables in small samples
• If the sample size is large (np ≥ 10 and n(1-p) ≥ 10), the sample count Y approximately follows a Normal distribution:
  mean(Y) = np,  SD(Y) = sqrt( np(1-p) )
• If the sample size is large, the sample proportion p̂ also approximately follows a Normal distribution:
  mean(p̂) = p,  SD(p̂) = sqrt( p(1-p)/n )

Summary of Sampling Distributions

Type of Data        | Unknown Parameter | Statistic | Variability of Statistic | Distribution of Statistic
Continuous          | μ                 | x̄         | SD(x̄) = σ/√n             | Normal (if n large)
Count (Xi = 0 or 1) | p                 | p̂         | SD(p̂) = sqrt( p(1-p)/n ) | Binomial (if n small); Normal (if n large)

Introduction to Inference
• Use the sample estimate as the center of a confidence interval of likely values for the population parameter
• All confidence intervals have the same form: Estimate ± Margin of Error
• The margin of error is always some multiple of the standard deviation (or standard error) of the statistic
• Hypothesis test: does the data support a specific hypothesis?
  1. Formulate your null and alternative hypotheses
  2. Calculate the test statistic: the difference between the data and your null hypothesis
  3. Find the p-value for the test statistic: how probable is your data if the null hypothesis is true?
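The interval form and the three testing steps above can be sketched in code. A minimal sketch in Python, applying the Normal approximation for a sample proportion; the counts (y = 116 successes out of n = 200), the null value p0 = 0.5, and the helper name normal_cdf are all hypothetical illustrations, not values from the lecture:

```python
import math

def normal_cdf(z):
    # Standard Normal CDF, written via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical data: y = 116 successes in n = 200 trials
y, n = 116, 200
p_hat = y / n                                      # sample proportion = 0.58

# 95% confidence interval: Estimate +/- Margin of Error
z_star = 1.96                                      # Normal critical value for 95%
se = math.sqrt(p_hat * (1 - p_hat) / n)            # SD of the sample proportion
ci = (p_hat - z_star * se, p_hat + z_star * se)

# Three-step hypothesis test of H0: p = 0.5 vs Ha: p != 0.5
p0 = 0.5
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)    # step 2: test statistic
p_value = 2 * (1 - normal_cdf(abs(z)))             # step 3: two-sided p-value
```

Note that the test statistic uses the null value p0 in its standard error while the confidence interval uses p̂, matching the formulas on the proportion slides; the large-sample condition holds here since np̂ = 116 and n(1-p̂) = 84 are both at least 10.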
Inference: Single Population Mean μ
• Known SD σ: confidence intervals and test statistics involve the standard deviation and Normal critical values
  CI: x̄ ± z*·σ/√n,  test statistic: Z = (x̄ - μ0) / (σ/√n)
• Unknown SD σ: confidence intervals and test statistics involve the standard error and critical values from a t distribution with n-1 degrees of freedom
  CI: x̄ ± t*·s/√n,  test statistic: T = (x̄ - μ0) / (s/√n)
• The t distribution has wider tails (more conservative)

Inference: Comparing Means μ1 and μ2
• Known σ1 and σ2: the two-sample Z statistic uses the Normal distribution
  Z = (x̄1 - x̄2) / sqrt( σ1²/n1 + σ2²/n2 )
• Unknown σ1 and σ2: the two-sample T statistic uses the t distribution with degrees of freedom k = min(n1-1, n2-1)
  T = (x̄1 - x̄2) / sqrt( s1²/n1 + s2²/n2 ),  CI: (x̄1 - x̄2) ± t*·sqrt( s1²/n1 + s2²/n2 )
• Matched pairs: instead of a difference of two samples X1 and X2, do a one-sample test on the differences d (n-1 degrees of freedom)
  T = (d̄ - 0) / (sd/√n),  CI: d̄ ± t*·sd/√n

Inference: Population Proportion p
• The confidence interval for p uses the Normal distribution and the sample proportion p̂ = Y/n:
  p̂ ± z*·sqrt( p̂(1-p̂)/n )
• The hypothesis test for p = p0 also uses the Normal distribution and the sample proportion:
  Z = (p̂ - p0) / sqrt( p0(1-p0)/n )

Inference: Comparing Proportions p1 and p2
• The hypothesis test for p1 - p2 = 0 uses the Normal distribution and a more complicated test statistic,
  Z = (p̂1 - p̂2) / SE(p̂1 - p̂2),  where p̂1 = Y1/n1 and p̂2 = Y2/n2,
  with pooled standard error
  SE(p̂1 - p̂2) = sqrt( p̂c(1-p̂c)·(1/n1 + 1/n2) ),  where p̂c = (Y1 + Y2)/(n1 + n2)
• The confidence interval for p1 - p2 also uses the Normal distribution and the sample proportions:
  (p̂1 - p̂2) ± z*·sqrt( p̂1(1-p̂1)/n1 + p̂2(1-p̂2)/n2 )

Linear Regression
• Use the best-fit line to summarize the linear relationship between two continuous variables X and Y:
  Yi = α + βXi
• The slope, b = r·(sy/sx): the average change in the Y variable if you increase the X variable by one unit
• The intercept, a = ȳ - b·x̄: the average value of the Y variable when the X variable is equal to zero
• The linear equation can be used to predict the response variable Y for a value of our explanatory variable X

Significance in Linear Regression
• Does the regression line show a significant linear relationship between the two variables?
  H0: β = 0 versus Ha: β ≠ 0
• Use the t distribution with n-2 degrees of freedom and a test statistic calculated from JMP output:
  T = b / SE(b)
• Can also calculate confidence intervals using JMP output and the t distribution with n-2 degrees of freedom:
  b ± t*·SE(b),  a ± t*·SE(a)

Good Luck on the Final!
• I have office hours until 2pm today
• I might be able to meet at other times