Handout 15: Correlation and Regression


How can we explore the association between two quantitative variables?

An association exists between two variables if a particular value of one variable is more likely to occur with certain values of the other variable.

For higher levels of energy use, does the CO₂ level in the atmosphere tend to be higher? If so, then there is an association between energy use and CO₂ level.

Positive Association: As x goes up, y tends to go up.

Negative Association: As x goes up, y tends to go down.

Correlation and Regression

How can we explore the relationship between two quantitative variables?

Graphically, we can construct a scatterplot.

Numerically, we can calculate a correlation coefficient and a regression equation.

Correlation

The Pearson correlation coefficient, r, measures the strength and the direction of a straight-line relationship.

• The strength of the relationship is determined by the closeness of the points to a straight line.

• The direction is determined by whether one variable generally increases or generally decreases when the other variable increases.

• r is always between –1 and +1.

• The magnitude indicates the strength: r = –1 or +1 indicates a perfect linear relationship, while r = 0 indicates no linear relationship.

• The sign indicates the direction.

The following data were collected to study the relationship between the sale price, y, and the total appraised value, x, of a residential property located in an upscale neighborhood.

Property    x     y     x²     y²     xy
   1        2     2      4      4      4
   2        3     5      9     25     15
   3        4     7     16     49     28
   4        5    10     25    100     50
   5        6    11     36    121     66
 Sum       20    35     90    299    163

(Σx = 20, Σy = 35, Σx² = 90, Σy² = 299, Σxy = 163)

Pearson correlation coefficient, r:

$$r = \frac{n\sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{n\sum x^2 - \left(\sum x\right)^2}\,\sqrt{n\sum y^2 - \left(\sum y\right)^2}}$$
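Plugging the column totals from the table into this formula with n = 5:

$$r = \frac{5(163) - (20)(35)}{\sqrt{5(90) - (20)^2}\,\sqrt{5(299) - (35)^2}} = \frac{115}{\sqrt{50}\,\sqrt{270}} \approx .99,$$

consistent with the R-Squared of 0.980 reported for these data later in the handout.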

Association Does Not Imply Causation

Example

Among all elementary school children, the relationship between the number of cavities in a child’s teeth and the size of his or her vocabulary is strong and positive.

Number of cavities and vocabulary size are both related to age.

Example

Consumption of hot chocolate is negatively correlated with crime rate.

Both are responses to cold weather.

Regression

We’ve seen how to explore the relationship between two quantitative variables graphically with a scatterplot. When the relationship has a straight-line pattern, the Pearson correlation coefficient describes it numerically. We can analyze the data further by finding an equation for the straight line that best describes the pattern. This equation predicts the value of the response (y) variable from the value of the explanatory (x) variable.

Much of mathematics is devoted to studying variables that are deterministically related. Saying that x and y are related in this manner means that once we are told the value of x, the value of y is completely specified. For example, suppose the cost of a small pizza at a restaurant is $10 plus $.75 per topping. If we let x = number of toppings and y = price of the pizza, then y = 10 + .75x. If we order a 3-topping pizza, then y = 10 + .75(3) = $12.25.

There are many variables x and y that appear to be related to one another, but not in a deterministic fashion. Suppose we examine the relationship between x = high school GPA and y = college GPA. The value of y cannot be determined just from knowledge of x, and two different students could have the same x value but very different y values. Yet there is a tendency for students who have high (low) high school GPAs to also have high (low) college GPAs. Knowledge of a student’s high school GPA should be quite helpful in enabling us to predict how that person will do in college.

Regression analysis is the part of statistics that deals with investigation of the relationship between two or more variables related in a nondeterministic fashion.

Historical Note: The statistical use of the word regression dates back to Francis Galton, who studied heredity in the late 1800’s. One of Galton’s interests was whether or not a man’s height as an adult could be predicted by his parents’ heights. He discovered that it could, but the relationship was such that very tall parents tended to have children who were shorter than they were, and very short parents tended to have children taller than themselves. He initially described this phenomenon by saying that there was a “reversion to mediocrity” but later changed to the terminology “regression to mediocrity.”

The least-squares line is the line that makes the sum of the squares of the vertical distances of the data points from the line as small as possible.

Equation for Least Squares (Regression) Line

ŷ = β̂₀ + β̂₁x

where

$$\hat{\beta}_1 = \frac{n\sum xy - \left(\sum x\right)\left(\sum y\right)}{n\sum x^2 - \left(\sum x\right)^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$
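For the appraisal data above, with x̄ = 20/5 = 4 and ȳ = 35/5 = 7, these formulas give

$$\hat{\beta}_1 = \frac{5(163) - (20)(35)}{5(90) - (20)^2} = \frac{115}{50} = 2.3, \qquad \hat{\beta}_0 = 7 - 2.3(4) = -2.2,$$

which matches the fitted line in the scatterplot below.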

Scatterplot with Least Squares Line

[Scatterplot of sale price versus appraised value ("App val") with the fitted line Y = -2.2 + 2.3X, R-Squared = 0.980]

Equation for Least Squares Line: ŷ = -2.2 + 2.3x

(x = appraisal value and y = sale price, both in $100,000s)

x     y     ŷ       y - ŷ    (y - ŷ)²
2     2     2.4     -.4      .16
3     5     4.7      .3      .09
4     7     7        0       0
5     10    9.3      .7      .49
6     11    11.6    -.6      .36

Σ(y - ŷ)² = 1.1
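The fitted values, residuals, and their sum of squares in this table can be verified in a few lines. A minimal sketch in Python, assuming numpy is available (not part of the original handout):

```python
import numpy as np

# Appraisal data: x = appraised value, y = sale price (both in $100,000s)
x = np.array([2, 3, 4, 5, 6])
y = np.array([2, 5, 7, 10, 11])

y_hat = -2.2 + 2.3 * x        # fitted values from the least squares line
resid = y - y_hat             # residuals: -.4, .3, 0, .7, -.6
print(np.round(resid, 2))
print(round(np.sum(resid**2), 2))   # sum of squared errors = 1.1
```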

*************************************************************************************

The method of least squares chooses the prediction line ŷ = β̂₀ + β̂₁x that minimizes the sum of the squared errors of prediction Σ(y - ŷ)² for all sample points.

*************************************************************************************

When talking about regression equations, the following terms are used for x and y:

x: predictor variable, explanatory variable, or independent variable
y: response variable or dependent variable

Extrapolation is the use of the least-squares line for prediction outside the range of values of the explanatory variable x that you used to obtain the line. Extrapolation should not be done!

Measuring the Contribution of x in Predicting y

We can consider how much the errors of prediction of y were reduced by using the information provided by x.

$$r^2 \;(\text{Coefficient of Determination}) = \frac{\sum (y - \bar{y})^2 - \sum (y - \hat{y})^2}{\sum (y - \bar{y})^2}$$

The coefficient of determination, r², represents the proportion of the total sample variation in y (measured by the sum of squares of deviations of the sample y values about their mean ȳ) that is explained by (or attributed to) the linear relationship between x and y.

(x = appraisal value and y = sale price, both in $100,000s)

x     y     ŷ       y - ŷ    (y - ŷ)²    (y - ȳ)²
2     2     2.4     -.4      .16         25
3     5     4.7      .3      .09          4
4     7     7        0       0            0
5     10    9.3      .7      .49          9
6     11    11.6    -.6      .36         16

Σ(y - ŷ)² = 1.1,  Σ(y - ȳ)² = 54

$$r^2 \;(\text{Coefficient of Determination}) = \frac{\sum (y - \bar{y})^2 - \sum (y - \hat{y})^2}{\sum (y - \bar{y})^2} = \frac{54 - 1.1}{54} \approx .98$$

Interpretation: 98% of the total sample variation in y is explained by the straight-line relationship between y and x, with the total sample variation in y being measured by the sum of squares of deviations of the sample y values about their mean ȳ.

Interpretation: An r² of .98 means that the sum of squares of deviations of the y values about their predicted values has been reduced 98% by using the least squares equation ŷ = -2.2 + 2.3x, instead of ȳ, to predict y.

Inference in Regression

The inferential parts of regression use the tools of confidence intervals and significance tests. They provide inference about the regression equation in the population of interest.

Suppose a fire insurance company wants to relate the amount of fire damage in major residential fires to the distance between the residence and the nearest fire station. The study is to be conducted in a large suburb of a major city; a sample of fifteen recent fires in this suburb is selected. The amount of damage, y, and the distance, x, between the fire and the nearest fire station are recorded for each fire.

Distance from Fire Station    Fire Damage
x (miles)                     y (thousands of dollars)
3.4                           26.2
1.8                           17.8
4.6                           31.3
2.3                           23.1
3.1                           27.5
5.5                           36.0
 .7                           14.1
3.0                           22.3
2.6                           19.6
4.3                           31.3
2.1                           24.0
1.1                           17.3
6.1                           43.2
4.8                           36.4
3.8                           26.1

Model for simple linear regression:

μ_y = β₀ + β₁x

The x variable is called the predictor (independent) variable and the y variable is called the response (dependent) variable.

Assumptions necessary for inference in regression:

1. The straight-line regression model is valid.
2. The population values of y at each value of x follow a normal distribution, with the same standard deviation at each x value.
3. The observations are independent.

It is important to remember that a model merely approximates reality. In practice, the population means of the conditional distributions would not perfectly follow a straight line. The conditional distributions would not be exactly normal. The population standard deviation would not be exactly the same for each conditional distribution. But even though a model does not describe reality exactly, a model is useful if the assumptions are close to being satisfied.

How can the researchers use the data to predict fire damage for a house located 4 miles from the fire station?

How can the researchers estimate the mean amount of damage for all houses located 4 miles from the fire station?
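In both cases the point estimate is obtained by substituting x = 4 into the fitted equation. Using the fitted line from the Minitab output later in this handout, ŷ = 10.278 + 4.9193(4) ≈ 29.96, or about $29,955 in damage. The two questions differ in the interval that accompanies this estimate: a prediction interval for an individual house versus a confidence interval for the mean, as discussed below.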

For linear regression, we make the assumption that

μ_y|x = β₀ + β₁x

The above is often called the population regression equation. Unfortunately, β₀ and β₁ are unknown parameters. In practice, we estimate the population regression equation using the prediction equation for the sample data.

The sample regression equation is denoted as follows:

ŷ = β̂₀ + β̂₁x

To come up with the slope and y-intercept for the sample regression equation, we can use the method of least-squares.

The method of least squares chooses the prediction line ŷ = β̂₀ + β̂₁x that minimizes the sum of the squared errors of prediction Σ(y - ŷ)² for all sample points.
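Applying these least-squares formulas to the fire damage data reproduces the fitted line in the Minitab output later in the handout. A minimal sketch in Python, assuming numpy is available (not part of the original handout):

```python
import numpy as np

# Fire damage data from the handout
x = np.array([3.4, 1.8, 4.6, 2.3, 3.1, 5.5, 0.7, 3.0, 2.6, 4.3,
              2.1, 1.1, 6.1, 4.8, 3.8])        # distance (miles)
y = np.array([26.2, 17.8, 31.3, 23.1, 27.5, 36.0, 14.1, 22.3, 19.6, 31.3,
              24.0, 17.3, 43.2, 36.4, 26.1])   # damage ($1000s)

n = len(x)
# Slope and intercept from the least-squares formulas above
b1 = (n * np.sum(x * y) - x.sum() * y.sum()) / (n * np.sum(x**2) - x.sum()**2)
b0 = y.mean() - b1 * x.mean()
print(f"damage-hat = {b0:.3f} + {b1:.4f} * distance")
# Should agree with the Minitab output: damage = 10.3 + 4.92 distance
```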

Hypothesis test for β₁

Frequently, the null hypothesis of interest is β₁ = 0. When this is the case, the population regression line is a horizontal line, a change in x yields no predicted change in y, and it follows that x has no value in predicting y.

H₀: β₁ = 0
Hₐ: β₁ ≠ 0

If H₀ is rejected, we can conclude that a useful linear relationship exists between x and y. Here p = .000 (see the Minitab output below), so we reject H₀ and conclude that a useful linear relationship exists between distance from the fire station and fire damage. The sample evidence indicates that x contributes information for the prediction of y using a linear model for the relationship between fire damage and distance from the fire station.
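The test statistic is the usual t ratio of the estimated slope to its standard error; with the values from the Minitab output below, t = β̂₁ / SE(β̂₁) = 4.9193 / 0.3927 ≈ 12.53, with df = n - 2 = 13.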

Confidence Interval for the Population Mean of y at a given value of x and Prediction Interval for y given x

A confidence interval for μ_y|x estimates the population mean of y for a given value of x.

A prediction interval for y provides an estimate for an individual value of y for a given value of x.

It is easier to predict an average value of y than an individual y value, so the confidence interval will always be narrower than the prediction interval.

A 100(1-α)% Prediction Interval for an individual new value of y at a certain value of x

Suppose the insurance company wants to predict the fire damage if a major residential fire were to occur 3.5 miles from the nearest fire station. The model yields a 95% prediction interval of $22,324 to $32,667 for fire damage in a major residential fire 3.5 miles from the nearest fire station.

A 100(1-α)% Confidence Interval for the mean value of y at a certain value of x

Suppose the insurance company wants to estimate the average fire damage for major residential fires that occur 3.5 miles from the nearest fire station. The model yields a 95% confidence interval of $26,190 to $28,801 for average fire damage for major residential fires that occur 3.5 miles from the nearest fire station.
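Both intervals can be reproduced in software. The sketch below uses Python with statsmodels, which is an assumed toolchain; the handout's own output, shown next, is from Minitab:

```python
import numpy as np
import statsmodels.api as sm

# Fire damage data from the handout
x = np.array([3.4, 1.8, 4.6, 2.3, 3.1, 5.5, 0.7, 3.0, 2.6, 4.3,
              2.1, 1.1, 6.1, 4.8, 3.8])
y = np.array([26.2, 17.8, 31.3, 23.1, 27.5, 36.0, 14.1, 22.3, 19.6, 31.3,
              24.0, 17.3, 43.2, 36.4, 26.1])

fit = sm.OLS(y, sm.add_constant(x)).fit()

# Intervals at distance = 3.5 miles; the design row [1, 3.5] is intercept + x
pred = fit.get_prediction(np.array([[1.0, 3.5]]))
print(pred.summary_frame(alpha=0.05))
# mean_ci_lower/upper -> 95% CI for the mean response, (26.190, 28.801)
# obs_ci_lower/upper  -> 95% PI for a new observation, (22.324, 32.667)
```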

Minitab Output

Regression Analysis: damage versus distance

The regression equation is damage = 10.3 + 4.92 distance

Predictor     Coef  SE Coef      T      P
Constant    10.278    1.420   7.24  0.000
distance    4.9193   0.3927  12.53  0.000   (H₀: β₁ = 0)

S = 2.31635   R-Sq = 92.3%   R-Sq(adj) = 91.8%

Analysis of Variance

Source          DF      SS      MS       F      P
Regression       1  841.77  841.77  156.89  0.000
Residual Error  13   69.75    5.37
Total           14  911.52

Predicted Values for New Observations

New
Obs     Fit  SE Fit        95% CI            95% PI
  1  27.496   0.604  (26.190, 28.801)  (22.324, 32.667)

Values of Predictors for New Observations

New
Obs  distance
  1      3.50

Regression Plot

[Scatterplot of damage versus distance with the fitted line Y = 10.2779 + 4.91933X, R-Sq = 0.923]

Checking to See if Assumptions Have Been Satisfied

Residuals Versus the Fitted Values

[Plot of residuals versus fitted values (response is C2)]

Normal Probability Plot

[Normal probability plot of the residuals (RESI1). Average: -0.0000000, StDev: 2.23209, N: 15. Anderson-Darling Normality Test: A-Squared = 0.299, P-Value = 0.540]
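With a P-value of 0.540, the Anderson-Darling test gives no evidence against the normality assumption. The two diagnostic displays above can be reproduced with a short script. A minimal sketch in Python, assuming numpy, matplotlib, and scipy are available (note that scipy's anderson reports the A-squared statistic with critical values rather than a p-value):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Fire damage data from the handout (same arrays as the earlier sketches)
x = np.array([3.4, 1.8, 4.6, 2.3, 3.1, 5.5, 0.7, 3.0, 2.6, 4.3,
              2.1, 1.1, 6.1, 4.8, 3.8])
y = np.array([26.2, 17.8, 31.3, 23.1, 27.5, 36.0, 14.1, 22.3, 19.6, 31.3,
              24.0, 17.3, 43.2, 36.4, 26.1])

b1, b0 = np.polyfit(x, y, 1)        # least squares fit (slope, intercept)
fitted = b0 + b1 * x
resid = y - fitted

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Residuals vs. fitted values: look for random scatter around zero
ax1.scatter(fitted, resid)
ax1.axhline(0, linestyle="--")
ax1.set_xlabel("Fitted Value")
ax1.set_ylabel("Residual")
ax1.set_title("Residuals Versus the Fitted Values")

# Normal probability plot of the residuals
stats.probplot(resid, dist="norm", plot=ax2)
ax2.set_title("Normal Probability Plot")
plt.show()

# Anderson-Darling test; the Minitab output reports A-Squared = 0.299
print(stats.anderson(resid, dist="norm"))
```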
