DATA MINING
LECTURE 9
Classification
Decision Trees
Evaluation
Illustrating Classification Task

Training Set:

| Tid | Attrib1 | Attrib2 | Attrib3 | Class |
| 1   | Yes     | Large   | 125K    | No    |
| 2   | No      | Medium  | 100K    | No    |
| 3   | No      | Small   | 70K     | No    |
| 4   | Yes     | Medium  | 120K    | No    |
| 5   | No      | Large   | 95K     | Yes   |
| 6   | No      | Medium  | 60K     | No    |
| 7   | Yes     | Large   | 220K    | No    |
| 8   | No      | Small   | 85K     | Yes   |
| 9   | No      | Medium  | 75K     | No    |
| 10  | No      | Small   | 90K     | Yes   |

Test Set:

| Tid | Attrib1 | Attrib2 | Attrib3 | Class |
| 11  | No      | Small   | 55K     | ?     |
| 12  | Yes     | Medium  | 80K     | ?     |
| 13  | Yes     | Large   | 110K    | ?     |
| 14  | No      | Small   | 95K     | ?     |
| 15  | No      | Large   | 67K     | ?     |

Training Set -> Learning algorithm -> Learn Model (Induction) -> Model
Model + Test Set -> Apply Model -> Class (Deduction)
Examples of Classification Task
• Predicting tumor cells as benign or malignant
• Classifying credit card transactions as legitimate or
fraudulent
• Categorizing news stories as finance,
weather, entertainment, sports, etc.
• Identifying spam email, spam web pages, adult
content
• Categorizing web users, and web queries
Evaluation of classification models
• Counts of test records that are correctly (or
incorrectly) predicted by the classification model
• Confusion matrix:

                        Predicted Class
                        Class = 1   Class = 0
Actual     Class = 1    f11         f10
Class      Class = 0    f01         f00

Accuracy = (# correct predictions) / (total # of predictions)
         = (f11 + f00) / (f11 + f10 + f01 + f00)

Error rate = (# wrong predictions) / (total # of predictions)
           = (f10 + f01) / (f11 + f10 + f01 + f00)
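To make the definitions concrete, here is a minimal Python sketch; the counts passed in the calls are made-up illustrative values:

```python
# Minimal sketch: accuracy and error rate from 2x2 confusion-matrix counts.
# f_ij = number of records of actual class i predicted as class j.
def accuracy(f11, f10, f01, f00):
    return (f11 + f00) / (f11 + f10 + f01 + f00)

def error_rate(f11, f10, f01, f00):
    return (f10 + f01) / (f11 + f10 + f01 + f00)

print(accuracy(40, 10, 5, 45))    # 0.85
print(error_rate(40, 10, 5, 45))  # 0.15
```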
Example of a Decision Tree

Training Data:

| Tid | Refund | Marital Status | Taxable Income | Cheat |
| 1   | Yes    | Single         | 125K           | No    |
| 2   | No     | Married        | 100K           | No    |
| 3   | No     | Single         | 70K            | No    |
| 4   | Yes    | Married        | 120K           | No    |
| 5   | No     | Divorced       | 95K            | Yes   |
| 6   | No     | Married        | 60K            | No    |
| 7   | Yes    | Divorced       | 220K           | No    |
| 8   | No     | Single         | 85K            | Yes   |
| 9   | No     | Married        | 75K            | No    |
| 10  | No     | Single         | 90K            | Yes   |

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund?
├─ Yes -> NO
└─ No  -> MarSt?
          ├─ Single, Divorced -> TaxInc?
          │                      ├─ < 80K -> NO
          │                      └─ > 80K -> YES
          └─ Married -> NO
Apply Model to Test Data

Test Data:

| Refund | Marital Status | Taxable Income | Cheat |
| No     | Married        | 80K            | ?     |

Start from the root of the tree and, at each node, follow the branch
that matches the test record:
• Refund = No -> take the "No" branch to the MarSt node
• Marital Status = Married -> take the "Married" branch, which leads
to the leaf labeled NO
• Assign Cheat to "No"
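The walk above is just nested conditionals; a minimal Python sketch of this particular tree, with the attribute names and the 80K threshold hard-coded for illustration:

```python
# Sketch of the decision tree above as nested conditions.
# Income is given in thousands; field names are illustrative.
def classify(refund, marital, income):
    if refund == "Yes":
        return "No"                         # Refund = Yes -> leaf NO
    if marital == "Married":
        return "No"                         # MarSt = Married -> leaf NO
    return "Yes" if income > 80 else "No"   # TaxInc split at 80K

print(classify("No", "Married", 80))        # "No" -- the test record above
```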
Decision Tree Classification Task

The same framework as above, instantiated with trees: the training set
(Tid 1-10, as in the first slide) is fed to a tree-induction algorithm,
which learns a decision tree model (Induction); the model is then
applied to the test set (Tid 11-15, with unknown class labels) to
deduce their classes (Deduction).
Decision Tree Induction
• Many Algorithms:
• Hunt’s Algorithm (one of the earliest)
• CART
• ID3, C4.5
• SLIQ, SPRINT
General Structure of Hunt’s Algorithm
• Let Dt be the set of training
records that reach a node t
• General Procedure:
• If Dt contains records that all
belong to the same class yt, then
t is a leaf node labeled as yt
• If Dt is an empty set, then t is a
leaf node labeled by the default
class, yd
• If Dt contains records that
belong to more than one class,
use an attribute test to split the
data into smaller subsets.
• Recursively apply the procedure
to each subset.
(Training data: the Tid / Refund / Marital Status / Taxable Income /
Cheat table shown earlier; Dt is the subset of those records that
reaches node t, and "?" marks the test condition still to be chosen
for t.)
Constructing decision trees (pseudocode)

GenDecTree(Sample S, Features F)
1. If stopping_condition(S, F) = true then
   a. leaf = createNode()
   b. leaf.label = Classify(S)
   c. return leaf
2. root = createNode()
3. root.test_condition = findBestSplit(S, F)
4. V = {v | v is a possible outcome of root.test_condition}
5. for each value v ∈ V:
   a. Sv = {s | root.test_condition(s) = v and s ∈ S}
   b. child = GenDecTree(Sv, F)
   c. add child as a descendant of root and label the edge (root -> child) as v
6. return root
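A runnable Python sketch of this recursion, assuming categorical attributes and using weighted Gini impurity as the findBestSplit criterion (any impurity measure from the following slides would do); the record representation is illustrative:

```python
# Sketch of recursive tree growth. Records are (attributes, label)
# pairs, where attributes is a dict of categorical values.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def gen_dec_tree(S, F):
    labels = [y for _, y in S]
    if len(set(labels)) == 1 or not F:               # stopping_condition(S, F)
        return {"label": Counter(labels).most_common(1)[0][0]}  # Classify(S)
    def split_impurity(a):                           # weighted child impurity
        parts = {}
        for x, y in S:
            parts.setdefault(x[a], []).append(y)
        return sum(len(p) / len(S) * gini(p) for p in parts.values())
    best = min(F, key=split_impurity)                # findBestSplit(S, F)
    root = {"test": best, "children": {}}
    for v in {x[best] for x, _ in S}:                # each outcome v in V
        Sv = [(x, y) for x, y in S if x[best] == v]
        root["children"][v] = gen_dec_tree(Sv, [a for a in F if a != best])
    return root
```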
Tree Induction
• Greedy strategy.
• At each node pick the best split
• How to determine the best split?
• Find the split that minimizes impurity
• How to decide when to stop splitting?
How to determine the Best Split
• Greedy approach:
• Nodes with homogeneous class distribution are
preferred
• Need a measure of node impurity:
C0: 5, C1: 5  -> non-homogeneous, high degree of impurity
C0: 9, C1: 1  -> homogeneous, low degree of impurity
Measuring Node Impurity
• p(i|t): fraction of records associated with node t belonging to class i

Entropy(t) = -\sum_{i=1}^{c} p(i|t) \log p(i|t)

Gini(t) = 1 - \sum_{i=1}^{c} p(i|t)^2

Classification error(t) = 1 - \max_i p(i|t)
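A direct Python transcription of the three measures, taking the class fractions p(i|t) at a node as a list; the two distributions printed at the end illustrate an impure and a nearly pure node:

```python
# Sketch of the three impurity measures; p is the list of class
# fractions p(i|t) at a node t.
from math import log2

def entropy(p):
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def gini(p):
    return 1 - sum(pi ** 2 for pi in p)

def classification_error(p):
    return 1 - max(p)

for f in (entropy, gini, classification_error):
    print(f.__name__, f([0.5, 0.5]), f([0.9, 0.1]))
# [0.5, 0.5] is maximally impure; [0.9, 0.1] is nearly homogeneous.
```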
Gain
• Gain of an attribute split: compare the impurity of the parent node
with the weighted impurity of the child nodes:

\Delta = I(parent) - \sum_{j=1}^{k} \frac{N(v_j)}{N} I(v_j)

• Maximizing the gain <=> minimizing the weighted average impurity
measure of the child nodes
• If I() = Entropy(), then \Delta = \Delta_{info} is called information
gain
Splitting based on impurity
• Impurity measures favor attributes with a large number of values
• A test condition with a large number of outcomes may not be
desirable:
• the # of records in each partition becomes too small to make
reliable predictions
Gain Ratio
• Gain Ratio:

GainRatio_{split} = \frac{GAIN_{split}}{SplitINFO}, \qquad
SplitINFO = -\sum_{i=1}^{k} \frac{n_i}{n} \log \frac{n_i}{n}

where the parent node p is split into k partitions and n_i is the
number of records in partition i.
• Adjusts Information Gain by the entropy of the partitioning
(SplitINFO). Higher-entropy partitioning (a large number of small
partitions) is penalized!
• Used in C4.5
• Designed to overcome the disadvantage of Information Gain
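A small Python sketch of gain and gain ratio, with entropy as the impurity measure I(); the perfect two-way split used in the example is illustrative:

```python
# Sketch: information gain and gain ratio for a candidate split.
# `parent` is the list of class labels at the node; `children` is a
# list of label lists, one per partition.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(parent, children):
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

def gain_ratio(parent, children):
    n = len(parent)
    split_info = -sum(len(c) / n * log2(len(c) / n) for c in children)
    return info_gain(parent, children) / split_info

parent = ["Y"] * 5 + ["N"] * 5
children = [["Y"] * 5, ["N"] * 5]    # a perfect 2-way split
print(info_gain(parent, children))   # 1.0 bit
print(gain_ratio(parent, children))  # 1.0 (SplitINFO = 1 bit)
```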
Comparison among Splitting Criteria
For a 2-class problem, the different impurity measures are consistent
with one another (figure: entropy, Gini index, and misclassification
error plotted as functions of the class fraction p).
Stopping Criteria for Tree Induction
• Stop expanding a node when all the records
belong to the same class
• Stop expanding a node when all the records have
similar attribute values
• Early termination (to be discussed later)
Decision Tree Based Classification
• Advantages:
• Inexpensive to construct
• Extremely fast at classifying unknown records
• Easy to interpret for small-sized trees
• Accuracy is comparable to other classification
techniques for many simple data sets
Example: C4.5
• Simple depth-first construction.
• Uses Information Gain
• Sorts Continuous Attributes at each node.
• Needs entire data to fit in memory.
• Unsuitable for Large Datasets.
• Needs out-of-core sorting.
• You can download the software from:
http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz
Other Issues
• Data Fragmentation
• Search Strategy
• Expressiveness
• Tree Replication
Data Fragmentation
• Number of instances gets smaller as you traverse
down the tree
• Number of instances at the leaf nodes could be
too small to make any statistically significant
decision
Search Strategy
• Finding an optimal decision tree is NP-hard
• The algorithm presented so far uses a greedy,
top-down, recursive partitioning strategy to
induce a reasonable solution
• Other strategies?
• Bottom-up
• Bi-directional
Expressiveness
• Decision trees provide an expressive representation for learning
discrete-valued functions
• But they do not generalize well to certain types of
Boolean functions
• Example: parity function:
• Class = 1 if there is an even number of Boolean attributes with truth
value = True
• Class = 0 if there is an odd number of Boolean attributes with truth
value = True
• For accurate modeling, must have a complete tree
• Not expressive enough for modeling continuous variables
• Particularly when the test condition involves only a single
attribute at a time
Decision Boundary

(Figure: a two-attribute data set on the unit square, partitioned by a
tree that splits on x < 0.43 at the root and on y < 0.33 and y < 0.47
below it; each rectangular region holds records of a single class.)

• The border line between two neighboring regions of different classes
is known as the decision boundary
• The decision boundary is parallel to the axes because each test
condition involves a single attribute at a time
• The type of decision boundary captures the expressiveness
of the classifier
Oblique Decision Trees

(Figure: a single oblique split x + y < 1 separates Class = + from
Class = -.)

• Test condition may involve multiple attributes
• More expressive representation
• Finding the optimal test condition is computationally expensive
Tree Replication

(Figure: a tree rooted at P in which the same subtree, rooted at Q
with child S, appears in multiple branches.)

• Same subtree appears in multiple branches
Practical Issues of Classification
• Underfitting and Overfitting
• Missing Values
• Costs of Classification
Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points: 0.5 <= sqrt(x1^2 + x2^2) <= 1
Triangular points: sqrt(x1^2 + x2^2) > 1 or sqrt(x1^2 + x2^2) < 0.5
Underfitting and Overfitting

(Figure: training and test error versus model complexity; in the
overfitting region the test error rises while the training error
keeps falling.)

Underfitting: when the model is too simple, both training and test
errors are large
Overfitting due to Noise
Decision boundary is distorted by noise point
Overfitting due to Insufficient Examples
Lack of data points in the lower half of the diagram makes it difficult
to correctly predict the class labels in that region
• An insufficient number of training records in the region causes the
decision tree to predict the test examples using other training
records that are irrelevant to the classification task
Notes on Overfitting
• Overfitting results in decision trees that are more
complex than necessary
• Training error no longer provides a good estimate
of how well the tree will perform on previously
unseen records
• The model does not generalize well
• Need new ways for estimating errors
Estimating Generalization Errors
• Re-substitution errors: error on training ( e(t) )
• Generalization errors: error on testing ( e’(t))
• Methods for estimating generalization errors:
• Optimistic approach: e’(t) = e(t)
• Pessimistic approach:
• For each leaf node: e’(t) = e(t) + 0.5
• Total errors: e’(T) = e(T) + N × 0.5 (N: number of leaf nodes)
• For a tree with 30 leaf nodes and 10 errors on training
(out of 1000 instances):
Training error = 10/1000 = 1%
Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
• Reduced error pruning (REP):
• uses validation dataset to estimate generalization
error
• Validation set reduces the amount of training data.
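The pessimistic estimate is a one-liner; this sketch reproduces the 30-leaf example above:

```python
# Sketch of optimistic vs. pessimistic generalization-error estimates:
# 30 leaves, 10 training errors, 1000 instances, 0.5 penalty per leaf.
def pessimistic_error(train_errors, n_leaves, n_instances, penalty=0.5):
    return (train_errors + n_leaves * penalty) / n_instances

print(10 / 1000)                        # optimistic estimate: 0.01
print(pessimistic_error(10, 30, 1000))  # pessimistic estimate: 0.025
```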
Occam’s Razor
• Given two models with similar generalization errors, one should
prefer the simpler model over the more complex one
• For a complex model, there is a greater chance that it was fitted
accidentally by errors in the data
• Therefore, one should include model complexity when evaluating a
model
Minimum Description Length (MDL)
(Figure: person A holds a table of records with attributes X1...Xn and
known labels y; person B holds the same records with unknown labels.
A transmits a decision-tree model, plus a list of its
misclassifications, so that B can recover the labels.)
• Cost(Model,Data) = Cost(Data|Model) + Cost(Model)
• Cost is the number of bits needed for encoding.
• Search for the least costly model.
• Cost(Data|Model) encodes the misclassification errors.
• Cost(Model) uses node encoding (number of children)
plus splitting condition encoding.
How to Address Overfitting
• Pre-Pruning (Early Stopping Rule)
• Stop the algorithm before it becomes a fully-grown tree
• Typical stopping conditions for a node:
• Stop if all instances belong to the same class
• Stop if all the attribute values are the same
• More restrictive conditions:
• Stop if number of instances is less than some user-specified
threshold
• Stop if the class distribution of instances is independent of the
available features (e.g., using the χ² test)
• Stop if expanding the current node does not improve impurity
measures (e.g., Gini or information gain).
How to Address Overfitting…
• Post-pruning
• Grow decision tree to its entirety
• Trim the nodes of the decision tree in a bottom-up
fashion
• If generalization error improves after trimming, replace
sub-tree by a leaf node.
• Class label of leaf node is determined from majority
class of instances in the sub-tree
• Can use MDL for post-pruning
Example of Post-Pruning

Before splitting: Class = Yes: 20, Class = No: 10
Training error (before splitting) = 10/30
Pessimistic error = (10 + 0.5)/30 = 10.5/30

After splitting on A (branches A1..A4):

|             | A1 | A2 | A3 | A4 |
| Class = Yes | 8  | 3  | 4  | 5  |
| Class = No  | 4  | 4  | 1  | 1  |

Training error (after splitting) = 9/30
Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30

11/30 > 10.5/30, so PRUNE the split!
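A sketch of the pruning rule applied to the example above: compare the pessimistic errors before and after the split, and prune when splitting does not help:

```python
# Sketch reproducing the pruning decision above: pessimistic error
# with a 0.5 penalty per leaf, 30 training instances at the node.
def pessimistic(errors, leaves, n=30, penalty=0.5):
    return (errors + leaves * penalty) / n

before = pessimistic(10, 1)  # unsplit node, one leaf: 10.5/30 = 0.35
after = pessimistic(9, 4)    # split on A, four leaves: 11/30 ~ 0.367
print("PRUNE!" if after >= before else "keep split")  # PRUNE!
```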
Handling Missing Attribute Values
• Missing values affect decision tree construction in
three different ways:
• Affects how impurity measures are computed
• Affects how to distribute instance with missing value to
child nodes
• Affects how a test instance with missing value is
classified
Computing Impurity Measure

Training records (Tid 10 has a missing Refund value):

| Tid | Refund | Marital Status | Taxable Income | Class |
| 1   | Yes    | Single         | 125K           | No    |
| 2   | No     | Married        | 100K           | No    |
| 3   | No     | Single         | 70K            | No    |
| 4   | Yes    | Married        | 120K           | No    |
| 5   | No     | Divorced       | 95K            | Yes   |
| 6   | No     | Married        | 60K            | No    |
| 7   | Yes    | Divorced       | 220K           | No    |
| 8   | No     | Single         | 85K            | Yes   |
| 9   | No     | Married        | 75K            | No    |
| 10  | ?      | Single         | 90K            | Yes   |

Before splitting:
Entropy(Parent) = -0.3 log(0.3) - 0.7 log(0.7) = 0.8813

Split on Refund:

|              | Class = Yes | Class = No |
| Refund = Yes | 0           | 3          |
| Refund = No  | 2           | 4          |
| Refund = ?   | 1           | 0          |

Entropy(Refund=Yes) = 0
Entropy(Refund=No) = -(2/6) log(2/6) - (4/6) log(4/6) = 0.9183
Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.9 × (0.8813 - 0.551) = 0.2973
(the factor 0.9 is the fraction of records with a known Refund value)
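A sketch reproducing the computation above (base-2 logarithms; note the gain value, corrected here to the actual product 0.9 × 0.3303):

```python
# Sketch: entropy-based gain when 1 of the 10 records has a missing
# Refund value.
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

parent = entropy([3, 7])      # 3 Yes, 7 No -> 0.8813
yes_node = entropy([3])       # Refund=Yes: 0 Yes, 3 No -> 0
no_node = entropy([2, 4])     # Refund=No: 2 Yes, 4 No -> 0.9183
children = (3 / 10) * yes_node + (6 / 10) * no_node  # 0.551
gain = 0.9 * (parent - children)  # weight by the 9/10 non-missing
print(round(gain, 4))             # 0.2973
```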
Distribute Instances

Training records without missing values (Tid 1-9) are split on Refund
as usual:

Refund = Yes: Class=Yes 0, Class=No 3
Refund = No:  Class=Yes 2, Class=No 4

Record with missing value:

| Tid | Refund | Marital Status | Taxable Income | Class |
| 10  | ?      | Single         | 90K            | Yes   |

Probability that Refund=Yes is 3/9
Probability that Refund=No is 6/9
Assign the record to the left child with weight = 3/9 and to the right
child with weight = 6/9, giving counts:

Refund = Yes: Class=Yes 0 + 3/9, Class=No 3
Refund = No:  Class=Yes 2 + 6/9, Class=No 4
Classify Instances

New record:

| Tid | Refund | Marital Status | Taxable Income | Class |
| 11  | No     | ?              | 85K            | ?     |

Weighted counts at the MarSt node:

|             | Married | Single | Divorced | Total |
| Class = No  | 3       | 1      | 0        | 4     |
| Class = Yes | 6/9     | 1      | 1        | 2.67  |
| Total       | 3.67    | 2      | 1        | 6.67  |

Probability that Marital Status = Married is 3.67/6.67
Probability that Marital Status = {Single, Divorced} is 3/6.67

(The tree is as before: Refund -> Yes: NO; No: MarSt -> Married: NO;
Single, Divorced: TaxInc -> < 80K: NO; > 80K: YES.)
Model Evaluation
• Metrics for Performance Evaluation
• How to evaluate the performance of a model?
• Methods for Performance Evaluation
• How to obtain reliable estimates?
• Methods for Model Comparison
• How to compare the relative performance among
competing models?
Metrics for Performance Evaluation
• Focus on the predictive capability of a model
• Rather than how long it takes to classify or build models,
scalability, etc.
• Confusion Matrix:

                          PREDICTED CLASS
                          Class=Yes   Class=No
ACTUAL     Class=Yes      a (TP)      b (FN)
CLASS      Class=No       c (FP)      d (TN)

a: TP (true positive), b: FN (false negative),
c: FP (false positive), d: TN (true negative)
Metrics for Performance Evaluation…
• Most widely-used metric:

Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)

(a, b, c, d as in the confusion matrix above)
Limitation of Accuracy
• Consider a 2-class problem
• Number of Class 0 examples = 9990
• Number of Class 1 examples = 10
• If model predicts everything to be class 0,
accuracy is 9990/10000 = 99.9 %
• Accuracy is misleading because model does not detect
any class 1 example
Cost Matrix

                          PREDICTED CLASS
C(i|j)                    Class=Yes    Class=No
ACTUAL     Class=Yes      C(Yes|Yes)   C(No|Yes)
CLASS      Class=No       C(Yes|No)    C(No|No)

C(i|j): cost of misclassifying a class j example as class i
Computing Cost of Classification

Cost matrix C(i|j):

|          | Pred + | Pred - |
| Actual + | -1     | 100    |
| Actual - | 1      | 0      |

Model M1 (confusion matrix):

|          | Pred + | Pred - |
| Actual + | 150    | 40     |
| Actual - | 60     | 250    |

Accuracy = 80%, Cost = 3910

Model M2 (confusion matrix):

|          | Pred + | Pred - |
| Actual + | 250    | 45     |
| Actual - | 5      | 200    |

Accuracy = 90%, Cost = 4255
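A sketch reproducing both totals, with the cost and confusion matrices written as nested dicts keyed by actual and predicted class:

```python
# Sketch: total cost = sum over cells of count x cost,
# with cost matrix C[actual][predicted].
C = {"+": {"+": -1, "-": 100}, "-": {"+": 1, "-": 0}}
M1 = {"+": {"+": 150, "-": 40}, "-": {"+": 60, "-": 250}}
M2 = {"+": {"+": 250, "-": 45}, "-": {"+": 5, "-": 200}}

def cost(counts):
    return sum(counts[a][p] * C[a][p] for a in counts for p in counts[a])

print(cost(M1), cost(M2))  # 3910 4255
```

Note how M2 has the higher accuracy but also the higher cost, because its extra false negatives are 100 times more expensive than its saved false positives.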
Cost vs Accuracy

Count (confusion matrix):

|                  | Pred Class=Yes | Pred Class=No |
| Actual Class=Yes | a              | b             |
| Actual Class=No  | c              | d             |

Cost matrix:

|                  | Pred Class=Yes | Pred Class=No |
| Actual Class=Yes | p              | q             |
| Actual Class=No  | q              | p             |

Accuracy is proportional to cost if
1. C(Yes|No) = C(No|Yes) = q
2. C(Yes|Yes) = C(No|No) = p

N = a + b + c + d
Accuracy = (a + d)/N

Cost = p (a + d) + q (b + c)
     = p (a + d) + q (N - a - d)
     = q N - (q - p)(a + d)
     = N [q - (q - p) × Accuracy]
Cost-Sensitive Measures

Precision (p) = a / (a + c) = TP / (TP + FP)
Recall (r) = a / (a + b) = TP / (TP + FN)
F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c) = 2TP / (2TP + FP + FN)

• Precision is biased towards C(Yes|Yes) & C(Yes|No)
• Recall is biased towards C(Yes|Yes) & C(No|Yes)
• F-measure is biased towards all except C(No|No)

Weighted Accuracy = (w1 a + w4 d) / (w1 a + w2 b + w3 c + w4 d)
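A direct Python sketch of precision, recall, and F-measure; the confusion counts used at the end are illustrative:

```python
# Sketch of the cost-sensitive measures, with a=TP, b=FN, c=FP, d=TN.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_measure(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

tp, fn, fp, tn = 40, 10, 5, 45
print(precision(tp, fp), recall(tp, fn), f_measure(tp, fp, fn))
# ~0.889, 0.8, ~0.842
```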
Model Evaluation
• Metrics for Performance Evaluation
• How to evaluate the performance of a model?
• Methods for Performance Evaluation
• How to obtain reliable estimates?
• Methods for Model Comparison
• How to compare the relative performance among
competing models?
Methods for Performance Evaluation
• How to obtain a reliable estimate of
performance?
• Performance of a model may depend on other
factors besides the learning algorithm:
• Class distribution
• Cost of misclassification
• Size of training and test sets
Dealing with class Imbalance
• If the class we are interested in is very rare, then
the classifier will ignore it.
• The class imbalance problem
• Solution
• We can modify the optimization criterion by using a cost
sensitive metric
• We can balance the class distribution
• Sample from the larger class so that the size of the two classes
is the same
• Replicate the data of the class of interest so that the classes are
balanced
• Over-fitting issues
Learning Curve
• The learning curve shows how accuracy changes with varying sample
size
• Requires a sampling schedule for creating the learning curve
• Effect of small sample size:
- Bias in the estimate
- Variance of the estimate
Methods of Estimation
• Holdout
• Reserve 2/3 for training and 1/3 for testing
• Random subsampling
• Repeated holdout
• Cross validation
• Partition data into k disjoint subsets
• k-fold: train on k-1 partitions, test on the remaining one
• Leave-one-out: k=n
• Bootstrap
• Sampling with replacement
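A minimal Python sketch of k-fold cross-validation from the list above; train_and_eval is a hypothetical stand-in for fitting a model on the training folds and returning its accuracy on the held-out fold:

```python
# Sketch: partition indices into k disjoint folds, train on k-1 folds,
# test on the remaining one, and average the k accuracy scores.
import random

def k_fold_accuracy(data, k, train_and_eval):
    idx = list(range(len(data)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k disjoint subsets
    scores = []
    for i in range(k):
        test = [data[j] for j in folds[i]]
        train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(train_and_eval(train, test))
    return sum(scores) / k  # average accuracy across folds
```

Setting k = len(data) gives leave-one-out; repeatedly drawing a random train/test split instead of disjoint folds gives random subsampling.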
Model Evaluation
• Metrics for Performance Evaluation
• How to evaluate the performance of a model?
• Methods for Performance Evaluation
• How to obtain reliable estimates?
• Methods for Model Comparison
• How to compare the relative performance among
competing models?
ROC (Receiver Operating Characteristic)
• Developed in the 1950s for signal detection theory to
analyze noisy signals
• Characterizes the trade-off between positive hits and
false alarms
• The ROC curve plots TPR (on the y-axis) against FPR
(on the x-axis):

TPR = TP / (TP + FN)
FPR = FP / (FP + TN)

with TP, FN, FP, TN as in the actual-vs-predicted confusion matrix
above.
ROC (Receiver Operating Characteristic)
• Performance of each classifier is represented as a
point on the ROC curve
• changing the threshold of the algorithm, the sample distribution,
or the cost matrix changes the location of the point
ROC Curve
- 1-dimensional data set containing 2 classes (positive and negative)
- any point located at x > t is classified as positive
At threshold t:
TP = 0.5, FN = 0.5, FP = 0.12, TN = 0.88 (as fractions)
ROC Curve

(TP, FP):
• (0,0): declare everything to be negative class
• (1,1): declare everything to be positive class
• (1,0): ideal
• Diagonal line: random guessing
• Below diagonal line: prediction is opposite of the true class
Using ROC for Model Comparison
• No model consistently outperforms the other:
- M1 is better for small FPR
- M2 is better for large FPR
• Area Under the ROC Curve (AUC):
- Ideal: area = 1
- Random guess: area = 0.5
How to Construct an ROC curve

| Instance | P(+|A) | True Class |
| 1        | 0.95   | +          |
| 2        | 0.93   | +          |
| 3        | 0.87   | -          |
| 4        | 0.85   | -          |
| 5        | 0.85   | -          |
| 6        | 0.85   | +          |
| 7        | 0.76   | -          |
| 8        | 0.53   | +          |
| 9        | 0.43   | -          |
| 10       | 0.25   | +          |

• Use a classifier that produces a posterior probability P(+|A) for
each test instance A
• Sort the instances according to P(+|A) in decreasing order
• Apply a threshold at each unique value of P(+|A)
• Count the number of TP, FP, TN, FN at each threshold
• TP rate, TPR = TP/(TP+FN)
• FP rate, FPR = FP/(FP+TN)
How to construct an ROC curve

| Class        | +    | -    | +    | -    | -    | -    | +    | -    | +    | +    |      |
| Threshold >= | 0.25 | 0.43 | 0.53 | 0.76 | 0.85 | 0.85 | 0.85 | 0.87 | 0.93 | 0.95 | 1.00 |
| TP           | 5    | 4    | 4    | 3    | 3    | 3    | 3    | 2    | 2    | 1    | 0    |
| FP           | 5    | 5    | 4    | 4    | 3    | 2    | 1    | 1    | 0    | 0    | 0    |
| TN           | 0    | 0    | 1    | 1    | 2    | 3    | 4    | 4    | 5    | 5    | 5    |
| FN           | 0    | 1    | 1    | 2    | 2    | 2    | 2    | 3    | 3    | 4    | 5    |
| TPR          | 1    | 0.8  | 0.8  | 0.6  | 0.6  | 0.6  | 0.6  | 0.4  | 0.4  | 0.2  | 0    |
| FPR          | 1    | 1    | 0.8  | 0.8  | 0.6  | 0.4  | 0.2  | 0.2  | 0    | 0    | 0    |

ROC Curve: (figure: plot of the (FPR, TPR) points above)
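A Python sketch of this construction using the ten instances above; each unique score value is tried as a threshold:

```python
# Sketch: compute (TPR, FPR) at every unique value of P(+|A).
scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ["+", "+", "-", "-", "-", "+", "-", "+", "-", "+"]
P, N = labels.count("+"), labels.count("-")

for t in sorted(set(scores), reverse=True):
    # Count instances scoring at or above the threshold, by true class.
    tp = sum(s >= t and l == "+" for s, l in zip(scores, labels))
    fp = sum(s >= t and l == "-" for s, l in zip(scores, labels))
    print(f"threshold >= {t}: TPR = {tp / P}, FPR = {fp / N}")
```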
ROC curve vs Precision-Recall curve
Area Under the Curve (AUC) as a single number for evaluation
Test of Significance
• Given two models:
• Model M1: accuracy = 85%, tested on 30 instances
• Model M2: accuracy = 75%, tested on 5000 instances
• Can we say M1 is better than M2?
• How much confidence can we place on accuracy of M1
and M2?
• Can the difference in performance measure be explained
as a result of random fluctuations in the test set?
Confidence Interval for Accuracy
• Prediction can be regarded as a Bernoulli trial
• A Bernoulli trial has 2 possible outcomes
• Possible outcomes for prediction: correct or wrong
• Collection of Bernoulli trials has a Binomial distribution:
• x ~ Bin(N, p), where x is the number of correct predictions
• e.g.: toss a fair coin 50 times; how many heads would turn up?
Expected number of heads = N × p = 50 × 0.5 = 25
• Given x (# of correct predictions) or equivalently,
acc=x/N, and N (# of test instances),
Can we predict p (true accuracy of model)?
Confidence Interval for Accuracy
• For large test sets (N > 30), acc has a normal distribution
with mean p and variance p(1-p)/N:

P\left( Z_{\alpha/2} < \frac{acc - p}{\sqrt{p(1-p)/N}} < Z_{1-\alpha/2} \right) = 1 - \alpha

(the area under the normal curve between Z_{\alpha/2} and
Z_{1-\alpha/2} is 1 - \alpha)

• Confidence interval for p:

p = \frac{2 N \cdot acc + Z_{\alpha/2}^2 \pm Z_{\alpha/2}\sqrt{Z_{\alpha/2}^2 + 4 N \cdot acc - 4 N \cdot acc^2}}{2(N + Z_{\alpha/2}^2)}
Confidence Interval for Accuracy
• Consider a model that produces an accuracy of
80% when evaluated on 100 test instances:
• N = 100, acc = 0.8
• Let 1-α = 0.95 (95% confidence)
• From the probability table, Z_{α/2} = 1.96

| 1-α  | Z    |
| 0.99 | 2.58 |
| 0.98 | 2.33 |
| 0.95 | 1.96 |
| 0.90 | 1.65 |

Intervals for acc = 0.8 at different test-set sizes N:

| N    | p(lower) | p(upper) |
| 50   | 0.670    | 0.888    |
| 100  | 0.711    | 0.866    |
| 500  | 0.763    | 0.833    |
| 1000 | 0.774    | 0.824    |
| 5000 | 0.789    | 0.811    |
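A Python sketch of the interval formula; it reproduces the N = 100 row of the table above:

```python
# Sketch: confidence interval for the true accuracy p, given the
# observed accuracy acc on N test instances and Z_{alpha/2}.
from math import sqrt

def acc_interval(acc, n, z=1.96):
    center = 2 * n * acc + z ** 2
    spread = z * sqrt(z ** 2 + 4 * n * acc - 4 * n * acc ** 2)
    denom = 2 * (n + z ** 2)
    return (center - spread) / denom, (center + spread) / denom

print(acc_interval(0.8, 100))  # approx (0.711, 0.867)
```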
Comparing Performance of 2 Models
• Given two models, say M1 and M2, which is
better?
• M1 is tested on D1 (size = n1), found error rate = e1
• M2 is tested on D2 (size = n2), found error rate = e2
• Assume D1 and D2 are independent
• If n1 and n2 are sufficiently large, then

e_1 \sim N(\mu_1, \sigma_1), \qquad e_2 \sim N(\mu_2, \sigma_2)

• Approximate:

\hat{\sigma}_i^2 = \frac{e_i (1 - e_i)}{n_i}
Comparing Performance of 2 Models
• To test if the performance difference is statistically
significant: d = e1 - e2
• d ~ N(d_t, σ_t), where d_t is the true difference
• Since D1 and D2 are independent, their variances add
up:

\sigma_t^2 = \sigma_1^2 + \sigma_2^2 \approx \hat{\sigma}_1^2 + \hat{\sigma}_2^2 = \frac{e_1(1 - e_1)}{n_1} + \frac{e_2(1 - e_2)}{n_2}

• At the (1-α) confidence level,

d_t = d \pm Z_{\alpha/2} \, \hat{\sigma}_t
An Illustrative Example
• Given: M1: n1 = 30, e1 = 0.15
         M2: n2 = 5000, e2 = 0.25
• d = |e2 - e1| = 0.1 (2-sided test)

\hat{\sigma}_d^2 = \frac{0.15(1 - 0.15)}{30} + \frac{0.25(1 - 0.25)}{5000} = 0.0043

• At the 95% confidence level, Z_{α/2} = 1.96:

d_t = 0.100 \pm 1.96 \sqrt{0.0043} = 0.100 \pm 0.128

=> The interval contains 0 => the difference may not be
statistically significant
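A Python sketch reproducing this example end to end; the interval straddles 0, so the difference may not be significant:

```python
# Sketch: confidence interval for the true difference d_t between the
# error rates of two independently tested models.
from math import sqrt

def diff_interval(e1, n1, e2, n2, z=1.96):
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2  # variances add up
    d = abs(e2 - e1)
    return d - z * sqrt(var), d + z * sqrt(var)

lo, hi = diff_interval(0.15, 30, 0.25, 5000)
print(lo, hi)         # approx -0.028 .. 0.228
print(lo <= 0 <= hi)  # True -> may not be statistically significant
```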