Classification Table Notes

Interpreting the Classification Table from Proc Logistic
STAT4330/8330 PRIESTLEY
Consider the following classification table:
                           Classification Table

         Correct      Incorrect            Percentages
  Prob          Non-          Non-           Sensi-  Speci-  False  False
 Level  Event  Event  Event  Event  Correct  tivity  ficity    POS    NEG

 0.000    306      0   1439      0      17.5   100.0     0.0   82.5      .
 0.100    273    394   1045     33      38.2    89.2    27.4   79.3    7.7
 0.200    162   1023    416    144      67.9    52.9    71.1   72.0   12.3
 0.300     71   1306    133    235      78.9    23.2    90.8   65.2   15.2
 0.400     24   1411     28    282      82.2     7.8    98.1   53.8   16.7
 0.500      6   1433      6    300      82.5     2.0    99.6   50.0   17.3
 0.600      0   1439      0    306      82.5     0.0   100.0      .   17.5
 0.700      0   1439      0    306      82.5     0.0   100.0      .   17.5
 0.800      0   1439      0    306      82.5     0.0   100.0      .   17.5
 0.900      0   1439      0    306      82.5     0.0   100.0      .   17.5
 1.000      0   1439      0    306      82.5     0.0   100.0      .   17.5
Interpretation Guide:

Correct Event = number of observations that were true "Events" (1) and were classified as "Events" (1).

Correct Non-Event = number of observations that were true "Non-Events" (0) and were classified as "Non-Events" (0).

Incorrect Event = number of observations that were true "Non-Events" (0) but were classified as "Events" (1).

Note that Correct Non-Events + Incorrect Events will total the number of true 0s in the dataset.

Incorrect Non-Event = number of observations that were true "Events" (1) but were classified as "Non-Events" (0).

Note that Correct Events + Incorrect Non-Events will total the number of true 1s in the dataset. A quick check of both totals appears in the sketch below.
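As a quick arithmetic check, the DATA step below adds up the cells of the Prob Level = 0.300 row of the table above and writes the two totals to the log.

/* Row totals for the Prob Level = 0.300 row:
   correct non-events + incorrect events     = number of true 0s
   correct events     + incorrect non-events = number of true 1s */
data _null_;
   true_zeros = 1306 + 133;   /* expected: 1439 */
   true_ones  = 71 + 235;     /* expected: 306  */
   put true_zeros= true_ones=;
run;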
Correct = overall percentage of observations classified correctly: (Correct Event + Correct Non-Event) / (Correct Event + Correct Non-Event + Incorrect Event + Incorrect Non-Event).

Sensitivity = percent of true Events (1) classified as Events (1): Correct Event / (Correct Event + Incorrect Non-Event).

Specificity = percent of true Non-Events (0) classified as Non-Events (0): Correct Non-Event / (Correct Non-Event + Incorrect Event).

False Positive = percent of observations classified as "Events" (1) that were true "Non-Events" (0): Incorrect Event / (Incorrect Event + Correct Event).

False Negative = percent of observations classified as "Non-Events" (0) that were true "Events" (1): Incorrect Non-Event / (Incorrect Non-Event + Correct Non-Event).
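Putting the five formulas together, the sketch below recomputes the percentages from the four cell counts in the Prob Level = 0.300 row; the results should reproduce the 78.9, 23.2, 90.8, 65.2, and 15.2 figures in the table.

/* Recompute the five percentages from the Prob Level = 0.300 row. */
data _null_;
   ce  = 71;      /* correct events        */
   cne = 1306;    /* correct non-events    */
   ie  = 133;     /* incorrect events      */
   ine = 235;     /* incorrect non-events  */
   correct     = 100 * (ce + cne) / (ce + cne + ie + ine);   /* 78.9 */
   sensitivity = 100 * ce  / (ce  + ine);                    /* 23.2 */
   specificity = 100 * cne / (cne + ie);                     /* 90.8 */
   false_pos   = 100 * ie  / (ie  + ce);                     /* 65.2 */
   false_neg   = 100 * ine / (ine + cne);                    /* 15.2 */
   put correct= 5.1 sensitivity= 5.1 specificity= 5.1
       false_pos= 5.1 false_neg= 5.1;
run;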
                    PREDICTED 1        PREDICTED 0

     TRUE 1         Sensitivity        False Negative
     TRUE 0         False Positive     Specificity
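One hedged way to reproduce this 2x2 layout directly (again, work.mydata, y, and x1-x3 are placeholder names) is to save the predicted probabilities, classify each observation at the chosen cutoff, and cross-tabulate the true outcome against the predicted class. Counts obtained this way may differ slightly from the CTABLE output, since CTABLE uses bias-corrected predicted probabilities.

/* Cross-tabulate true vs. predicted class at a 0.3 cutoff. */
proc logistic data=work.mydata;
   model y(event='1') = x1 x2 x3;
   output out=work.preds p=phat;      /* phat = predicted probability */
run;

data work.preds;
   set work.preds;
   predicted = (phat >= 0.3);         /* 1 = classified as an Event   */
run;

proc freq data=work.preds;
   tables y*predicted / norow nocol nopercent;
run;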
Example:
From the Prob Level = 0.300 row of the classification table, we know the following:

1. The designated cutoff for a prediction to be classified as a "1" is 0.3: any observation with a predicted probability of 0.3 or above will be classified as a 1, and, of course, any observation with a predicted probability below 0.3 will be classified as a 0.

2. Across the whole dataset, there were a total of 306 observations that were true "1"s and 1439 that were true "0"s.

3. Of these observations, 71 of the 1s were classified as 1s, 1306 of the 0s were classified as 0s, 133 of the 0s were classified as 1s, and 235 of the 1s were classified as 0s.
4. Of the observations, 78.9% are correctly classified – (71+1306)/(71+1306+133+235).
5. Of the true 1s, 23.2% are correctly classified – (71/(71+235)).
6. Of the true 0s, 90.8% are correctly classified – (1306/(1306+133)).
7. Of those classified as a 1, 65.2% were incorrectly classified (133/(133+71)).
8. Of those classified as a 0, 15.2% were incorrectly classified (235/(235+1306)).