BINARY CHOICE MODELS: LINEAR PROBABILITY MODEL

• Why do some people go to college while others do not?
• Why do some women enter the labor force while others do not?
• Why do some people buy houses while others rent?
• Why do some people migrate while others stay put?

Economists are often interested in the factors behind the decision-making of individuals or
enterprises, such as in the examples shown above.
The models that have been developed for this purpose are known as qualitative response or
binary choice models, with the outcome, which we will denote Y, being assigned a value of
1 if the event occurs and 0 otherwise.
Models with more than two possible outcomes have also been developed, but we will
confine our attention to binary choice models.

pi = p(Yi = 1) = β1 + β2Xi

The simplest binary choice model is the linear probability model where, as the name
implies, the probability of the event occurring, p, is assumed to be a linear function of a set
of explanatory variables.

[Figure: Y, p plotted against X. The line β1 + β2Xi starts at the intercept β1 and rises with X; the outcome Y is bounded by 0 and 1.]

Graphically, the relationship is as shown, if there is just one explanatory variable.
Of course, p is unobservable. One has data on only the outcome, Y. In the linear probability
model, this outcome is used as the dependent variable, in effect as a dummy variable.

• Why do some people graduate from high school while others drop out?

As an illustration, we will take the question shown above. We will define a variable GRAD
which is equal to 1 if the individual graduated from high school, and 0 otherwise.

. g GRAD = 0

. replace GRAD = 1 if S > 11
(509 real changes made)

. reg GRAD ASVABC

      Source |       SS       df       MS              Number of obs =     540
-------------+------------------------------           F(  1,   538) =   49.59
       Model |  2.46607893     1  2.46607893           Prob > F      =  0.0000
    Residual |  26.7542914   538  .049729166           R-squared     =  0.0844
-------------+------------------------------           Adj R-squared =  0.0827
       Total |  29.2203704   539   .05421219           Root MSE      =    .223

------------------------------------------------------------------------------
        GRAD |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      ASVABC |   .0070697   .0010039     7.04   0.000     .0050976    .0090419
       _cons |   .5794711   .0524502    11.05   0.000     .4764387    .6825035
------------------------------------------------------------------------------

The Stata output above shows the construction of the variable GRAD. It is first set to 0 for
all respondents, and then changed to 1 for those who had more than 11 years of schooling.
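
As an aside, the same variable could be created in one line (standard Stata syntax; note that this form would also set GRAD = 1 for any missing values of S, since Stata treats missing values as larger than any number):

. g GRAD = (S > 11)
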
The regression of GRAD on ASVABC, shown above, suggests that every additional point on
the ASVABC score increases the probability of graduating by 0.007, that is, by 0.7 percentage points.
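
As a rough illustration, taking a hypothetical ASVABC score of 50 (a value chosen here only for the arithmetic, not one highlighted in the output), the fitted probability of graduating would be

0.5795 + 0.0070697 × 50 ≈ 0.93
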
The intercept has no sensible meaning. Taken literally, it suggests that a respondent with an
ASVABC score of 0 would have a 58% probability of graduating. However, a score of 0 is not possible.
Unfortunately, the linear probability model has some serious shortcomings. First, there are
problems with the disturbance term.

Yi = E(Yi) + ui

As usual, the value of the dependent variable Yi in observation i has a nonstochastic
component and a random component. The nonstochastic component depends on Xi and
the parameters. The random component is the disturbance term.

E(Yi) = 1 × pi + 0 × (1 − pi) = pi = β1 + β2Xi

The nonstochastic component in observation i is its expected value in that observation.
This is simple to compute, because Yi can take only two values: 1 with probability pi and
0 with probability (1 − pi). The expected value in observation i is therefore β1 + β2Xi.

Yi = β1 + β2Xi + ui

This means that we can rewrite the model as shown.
The probability function is thus also the nonstochastic component of the relationship
between Y and X.

Yi = 1  →  ui = 1 − β1 − β2Xi
Yi = 0  →  ui = −β1 − β2Xi

In observation i, for Yi to be 1, ui must be (1 − β1 − β2Xi). For Yi to be 0, ui must be
(−β1 − β2Xi).
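
A quick check using these two values confirms that the disturbance term nevertheless has zero mean. Since ui equals (1 − β1 − β2Xi) with probability pi and (−β1 − β2Xi) with probability (1 − pi), and pi = β1 + β2Xi,

E(ui) = pi(1 − β1 − β2Xi) + (1 − pi)(−β1 − β2Xi) = pi − β1 − β2Xi = 0
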

[Figure: the line β1 + β2Xi, which eventually rises above 1, with the two possible observations at a given Xi: point A at Y = 1, lying a distance 1 − β1 − β2Xi above the line, and point B at Y = 0, lying a distance β1 + β2Xi below it.]

The two possible values, which give rise to the observations A and B, are illustrated in the
diagram above. Since u does not have a normal distribution, the usual standard errors and test
statistics are invalid. Indeed, its distribution is not even continuous.

σ²ui = (β1 + β2Xi)(1 − β1 − β2Xi)

Further, it can be shown that the population variance of the disturbance term in observation
i is given by (β1 + β2Xi)(1 − β1 − β2Xi). This changes with Xi, and so the disturbance term is
heteroscedastic.
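
The expression can be verified directly. Writing pi = β1 + β2Xi, the disturbance term equals (1 − pi) with probability pi and (−pi) with probability (1 − pi), so, given that its mean is 0,

σ²ui = pi(1 − pi)² + (1 − pi)pi² = pi(1 − pi) = (β1 + β2Xi)(1 − β1 − β2Xi)
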
Yet another shortcoming of the linear probability model is that it may predict probabilities of
more than 1, as the fitted line does for large values of X in the diagram above. It may also
predict probabilities of less than 0.

. predict PROB

The Stata command for saving the fitted values from a regression is predict, followed by the
name that you wish to give to the fitted values. We are calling them PROB.
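
Equivalently, the fitted values could be computed by hand from the stored coefficients (a minimal sketch; PROB2 is simply an illustrative name):

. g PROB2 = _b[_cons] + _b[ASVABC]*ASVABC
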

. tab PROB if PROB > 1

     Fitted |
     values |      Freq.     Percent        Cum.
------------+-----------------------------------
   1.000381 |          6        4.76        4.76
   1.002308 |          9        7.14       11.90
   1.004236 |          7        5.56       17.46
   1.006163 |          3        2.38       19.84
*********************************************
   1.040855 |         11        8.73       93.65
   1.042783 |          3        2.38       96.03
    1.04471 |          2        1.59       97.62
   1.046638 |          3        2.38      100.00
------------+-----------------------------------
      Total |        126      100.00

tab is the Stata command for tabulating the values of a variable, and for cross-tabulating
two or more variables. We see that there are 126 observations where the fitted value is
greater than 1. (The middle rows of the table have been omitted.)
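
If one only needed the number of such observations, rather than the full tabulation, the standard count command would do:

. count if PROB > 1
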
. tab PROB if PROB < 0
no observations
In this example there were no fitted values less than 0.
The main advantage of the linear probability model over logit and probit analysis, the
alternatives considered in the next two sequences, is that it is much easier to fit. For this
reason it used to be recommended for initial, exploratory work.
However, this consideration is no longer relevant, now that computers are so fast and
powerful, and logit and probit are typically standard features of regression applications.